By People's Voice Editorial·Breaking News Analysis·May 5, 2026 at 9:20 PM

Patel Says FBI Is Using AI to Triage Threat Tips

Kash Patel discusses FBI artificial intelligence deployment during a Hang Out with Sean Hannity interview. Excerpted from Hang Out with Sean Hannity; video reposted by @ShadowofEzra on X

WASHINGTON - FBI Director Kash Patel said the bureau is using artificial intelligence to speed up fingerprint checks and sort threat tips, including two cases he said helped stop planned school violence in North Carolina and New York.

Patel made the comments in a Hang Out with Sean Hannity interview clip reposted to X on Tuesday. The clip gives the bureau's AI push a sharper public frame: faster searches, faster triage, and private-sector systems plugged into sensitive law-enforcement workflows.

What Happened

Patel said the FBI has moved AI into the Criminal Justice Information Services environment, the bureau's West Virginia-based hub for criminal justice data, biometric records, and identification services. He said the goal is to turn fingerprint hits around quickly enough to help agents move on fugitives and arrest warrants.

"I'm using it everywhere." - Kash Patel, FBI director, in a Hang Out with Sean Hannity interview

Patel said the same push extends to the National Threat Operations Center, the FBI unit that receives public tips through channels such as 1-800-CALL-FBI and online submissions. He said the center receives thousands of tips a week and that human review alone would not move fast enough for imminent threats.

"We stopped a school massacre in North Carolina because we got a tip and we were able to triage with artificial intelligence. We stopped a school shooting in New York because we got a tip from our private sector partners who are building our AI infrastructure." - Kash Patel, FBI director, in the interview clip

FBI fingerprint specialists work with biometric records. Photo by FBI, via Wikimedia Commons (public domain).

Public DOJ and FBI release indexes reviewed Tuesday did not identify the North Carolina or New York cases by date, defendant, local agency, or charging document. Until the bureau releases additional records, Patel's specific school-threat examples remain unverified claims from the director's interview.

What Federal Records Show

The broader AI deployment Patel described is consistent with the Justice Department's 2025 AI Use Case Inventory, which lists dozens of FBI AI systems, including several directly tied to threat triage, biometric matching, facial recognition, entity resolution, license-plate readers, translation, transcription, and document processing.

The inventory lists an FBI system called TIPS as deployed in September 2025. DOJ describes the system's problem statement this way: "It is intended to prioritize tips to be worked as well as determine if a tip should get a second human review." DOJ says the expected benefit is that the AI "helps to triage immediate threats in order to help FBI field offices."

The same inventory lists Next Generation Identification, the FBI's biometric identification system, as a deployed high-impact law-enforcement AI use case from June 2025. DOJ says the system is intended "to improve biometric and name-based matching for identification and investigation services" and produces "biometric identification and search results containing candidates for potential investigative leads."

Those records support the mechanism Patel described: AI can prioritize records, rank likely matches, and route outputs to human reviewers. They do not, by themselves, verify that AI stopped either school-threat incident Patel cited.

The Oversight Question

The stakes are not just technical. DOJ's own inventory classifies several FBI AI uses as high-impact law-enforcement systems, including Next Generation Identification, facial recognition technology, license plate readers, and some data-synthesis tools. For multiple FBI high-impact entries, DOJ marks AI impact assessments, independent review, ongoing monitoring, operator training, fail-safes, appeal processes, or public feedback as in progress.

The Justice Department says its 2025 inventory includes 315 AI entries, a 30.7 percent increase from 2024. DOJ says the inventory covers systems that are pre-deployment, pilot, deployed, or retired, and that some details were withheld under information-sharing restrictions and FOIA standards.

"DOJ is committed to fostering public trust through a comprehensive public release of our AI inventory." - Department of Justice AI Inventory page

The FBI Criminal Justice Information Services facility in West Virginia houses major criminal justice data systems. Photo by FBI, via Wikimedia Commons (public domain).

Civil-liberties groups argue that speed is not enough when AI systems touch policing decisions, surveillance, or identification. The Electronic Frontier Foundation said in a police-technology report that the industry selling tools to law enforcement is "one of the most unregulated, unexamined, and consequential in the United States." The ACLU has called for a moratorium on law-enforcement and immigration-enforcement use of facial recognition until Congress and the public debate what uses should be permitted.

The Brennan Center for Justice and more than 40 civil society groups have also warned that law-enforcement use of face recognition can threaten privacy, chill First Amendment activity, and create due-process risks when systems are used without safeguards.

What People Are Saying

Patel framed AI as a way to make the bureau faster in urgent cases, especially where tips or fingerprints would otherwise wait in a queue.

"Now those hits are instantaneous because we are welcoming in artificial intelligence." - Kash Patel, FBI director, in the interview clip

The DOJ inventory frames the FBI's TIPS system as a triage tool, not as a system that makes autonomous enforcement decisions. It says the system prioritizes tips or routes them to a second review based on thresholds.

"The system will prioritize the tips or route them to a second review based on thresholds." - Department of Justice 2025 AI Use Case Inventory, FBI TIPS entry

Civil-liberties advocates argue that federal law-enforcement AI needs public limits before deployment becomes routine.

"History tells us that surveillance technology is often wrongly used to target immigrants, communities of color, and political protesters, and there is a danger that this time will be no different." - Neema Singh Guliani, ACLU senior legislative counsel, in an ACLU statement on law-enforcement facial recognition

The Big Picture

Patel's comments put a public face on a shift already visible in DOJ records. The FBI is not just experimenting with AI for office work. Its public inventory entries include deployed or pilot systems for threat-tip triage, biometric matching, facial recognition, entity resolution, and search prioritization.

The policy question now turns on proof and guardrails. If AI is helping agents find imminent threats faster, the bureau will face pressure to show how those systems are tested, monitored, and kept subordinate to human review. If the school-threat examples remain undocumented in public records, critics will argue that the bureau is asking the public to trust a system whose most dramatic claimed successes cannot yet be independently checked.