Of all the sectors disrupted by artificial intelligence in 2026, none carry consequences as severe as the criminal justice system. When an AI makes a mistake in marketing, a company loses money. When an AI makes a mistake in policing, a citizen loses their freedom—or their life.
As police departments globally integrate AI into their daily operations, organizations like Amnesty International, the ACLU, and the AI Now Institute are mounting fierce resistance against what they term “algorithmic discrimination.”
The Flaw in Predictive Policing
The core controversy centers on “predictive policing”: algorithms designed to analyze historical crime data to predict where future crimes will occur, effectively telling officers where to patrol.
In early 2025, Amnesty International UK published a damning report titled “Automated Racism,” revealing that nearly three-quarters of UK police forces use these systems. The fundamental flaw, researchers note, is the training data: because marginalized and minority communities have historically been over-policed, the historical arrest data reflects that bias.
When an AI model is trained on this skewed data, it inevitably predicts that these same neighborhoods require more police presence. This leads to more arrests in those areas, which feeds new, biased data back into the algorithm, creating a self-fulfilling loop of “predictable policing” rather than predictive policing.
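To make that loop concrete, here is a minimal Python sketch of two hypothetical districts with identical underlying crime rates. The district names, starting arrest counts, and the proportional patrol-allocation rule are all invented for illustration; real deployments are far more complex, but the reinforcement dynamic is the same.

```python
import random

# Toy simulation of the predictive-policing feedback loop.
# Two hypothetical districts share the SAME true crime rate; District A
# simply starts with more recorded arrests because it was historically
# over-policed. All numbers are illustrative assumptions, not real data.

TRUE_CRIME_RATE = 0.05            # identical in both districts
arrests = {"A": 500, "B": 100}    # biased historical record, not true crime
PATROLS_PER_ROUND = 100

random.seed(0)
for _ in range(20):
    total = sum(arrests.values())
    for district in arrests:
        # The "predictive" model allocates patrols in proportion to past arrests.
        patrols = round(PATROLS_PER_ROUND * arrests[district] / total)
        # More patrols mean more observed incidents, hence more recorded
        # arrests, even though the underlying crime rate never differed.
        new_arrests = sum(random.random() < TRUE_CRIME_RATE for _ in range(patrols))
        arrests[district] += new_arrests

print(arrests)  # District A's recorded "crime" keeps pulling further ahead
```

Because patrols are allocated in proportion to past arrests, District A receives roughly five times the patrols every round and generates roughly five times the new arrests, so the gap in the data widens even though the two districts were never actually different.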
Facial Recognition and False Arrests
The deployment of facial recognition technology (FRT) remains equally fraught. By 2026, the technology’s disproportionately high error rates for people of color, and for women of color in particular, are well documented.
The AI Now Institute and the ACLU have repeatedly highlighted that major FRT systems, including widely deployed commercial offerings such as Amazon’s Rekognition, exhibit significantly higher error rates for African-American women than for Caucasian men. This is not purely theoretical; it has led to documented cases of innocent Black men being falsely arrested and jailed based entirely on an incorrect AI facial match.
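For readers curious what quantifying this disparity looks like, the sketch below computes a false match rate per demographic group from a log of match decisions. Every record, group label, and field name here is a made-up placeholder; real evaluations, such as NIST’s vendor testing program, run millions of comparisons over large labeled datasets.

```python
from collections import defaultdict

# Hypothetical audit sketch: given match decisions with ground truth,
# compute the false match rate (FMR) per demographic group.
records = [
    # (group, system_said_match, actually_same_person) -- placeholder data
    ("black_female", True, False),
    ("black_female", False, False),
    ("white_male", False, False),
    ("white_male", True, True),
]

stats = defaultdict(lambda: {"false_matches": 0, "non_match_pairs": 0})
for group, predicted_match, same_person in records:
    if not same_person:                          # only pairs of DIFFERENT people
        stats[group]["non_match_pairs"] += 1     # can yield a *false* match
        if predicted_match:
            stats[group]["false_matches"] += 1

for group, s in stats.items():
    fmr = s["false_matches"] / s["non_match_pairs"]
    print(f"{group}: false match rate = {fmr:.0%}")
```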
The Regulatory Pushback
In response to the rapid deployment of these untested technologies, 2026 has seen a surge in grassroots legal resistance.
Rather than waiting for federal intervention, the ACLU is aggressively championing Community Control Over Police Surveillance (CCOPS) laws at the municipal and state levels. These laws force police departments to disclose the AI tools they intend to buy and require explicit approval from a city council or community oversight board before deployment.
Several major U.S. cities have already enacted total bans on government use of facial recognition technology. Across the Atlantic, the EU AI Act places real-time remote biometric identification in public spaces, along with predictive policing that profiles individuals, in its “unacceptable risk” category, banning them outright with only narrow exceptions, while most other law-enforcement AI systems are classed as “high-risk” and face strict compliance obligations.
Frequently Asked Questions
What is an algorithmic risk assessment in court?
Judges increasingly consult AI “risk assessment” tools when setting a defendant’s bail or sentence. The AI analyzes the defendant’s background and outputs a “recidivism score,” an estimate of the likelihood of re-offending. Critics argue these tools heavily weight socioeconomic factors, in effect punishing poverty.
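A simplified sketch shows how that criticism plays out numerically. The model below is an invented linear-plus-logistic scorer, not the formula of any real tool; the feature names and weights are assumptions chosen to show how socioeconomic inputs can raise the score of a defendant whose criminal history is identical to someone else’s.

```python
import math

# Illustrative only: invented features and weights, not any real tool's formula.
WEIGHTS = {
    "prior_arrests": 0.6,
    "unemployed": 0.8,        # socioeconomic proxy
    "unstable_housing": 0.7,  # socioeconomic proxy
}
BIAS = -2.0

def recidivism_score(defendant: dict) -> float:
    """Map 0/1 feature values (or counts) to a 0-1 'risk' via a logistic curve."""
    z = BIAS + sum(WEIGHTS[k] * defendant.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

# Identical criminal history, different economic circumstances:
stable = {"prior_arrests": 1, "unemployed": 0, "unstable_housing": 0}
poor   = {"prior_arrests": 1, "unemployed": 1, "unstable_housing": 1}
print(f"{recidivism_score(stable):.2f} vs {recidivism_score(poor):.2f}")  # ~0.20 vs ~0.52
```

With these invented weights, the poverty-linked features more than double the score despite an identical arrest record, which is exactly the dynamic critics object to.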
Why does facial recognition fail on people of color?
AI models learn from the data they are fed. If the dataset used to train a facial recognition model consists predominantly of white, male faces, the AI will be highly accurate at identifying white men but far less reliable at distinguishing between the faces of women of color.
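The toy calculation below makes that connection visible. It assumes a hypothetical dataset composition and a generic saturating curve standing in for a real learning curve; both are illustrative assumptions, but they show why the group with the fewest training examples ends up with the worst accuracy.

```python
# Hypothetical training-set composition (invented counts for illustration).
TRAINING_FACES = {
    "white_male": 800_000,
    "white_female": 120_000,
    "black_male": 60_000,
    "black_female": 20_000,
}

def assumed_accuracy(n_examples: int, halfway_point: int = 50_000) -> float:
    """Assumed saturating learning curve: 0.5 at `halfway_point`, approaching 1.0."""
    return n_examples / (n_examples + halfway_point)

for group, n in TRAINING_FACES.items():
    print(f"{group}: ~{assumed_accuracy(n):.0%} of attainable accuracy")
```

The exact curve does not matter; any model that improves with more examples of a group will reproduce this ranking, which is why researchers push for representative training data and independent per-group accuracy audits.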
Are police allowed to use AI to write reports?
This is a growing trend in 2025-2026. Officers use AI “ambient scribes” to auto-generate police reports from bodycam audio. However, the ACLU has warned against this, noting that generative models can hallucinate details, and a fabricated fact in an official report could wrongly alter the outcome of a criminal trial.
What is the AI Now Institute?
It is a prominent research institute dedicated to studying the social implications of artificial intelligence. Its researchers have been leading voices calling for strict moratoriums on predictive policing tools that disproportionately harm marginalized communities.
Can the public stop police from using AI?
In jurisdictions with CCOPS (Community Control Over Police Surveillance) laws, yes. These bills require public hearings and democratic oversight before a police department can acquire and deploy new AI surveillance technology.