Is Predictive Policing Creating Judgment or Justice?

The black-box approach means neither law enforcement nor the public truly understands how these tools generate risk scores or predictions.

In recent years, U.S. school districts have increasingly turned to predictive policing technologies to enhance campus safety, following city police departments that have reported early wins. According to Legal Solutions by Thomson Reuters, the New York Police Department recorded a 5.1% decrease in murders over two years in targeted areas after launching a predictive policing program in 2016, and the Chicago Police Department saw a 23% decline in homicides during the first year of a similar program beginning in 2017.

These systems utilize data analysis and artificial intelligence to anticipate potential threats, aiming to prevent incidents such as school shootings. However, the implementation of these technologies has sparked significant ethical debates concerning privacy, bias, effectiveness, and the overall impact on the educational environment.

Predictive policing involves the use of algorithms to analyze historical data and forecast where and when crimes are likely to occur. In the educational context, this translates to monitoring student behaviors, social media activity, and other digital footprints to identify individuals who might pose a threat. The primary objective is to enable early intervention and prevent potential incidents before they materialize.

Also known as crime forecasting, the practice uses advanced technologies to assist law enforcement agencies in solving past crimes and preventing future ones. When implemented effectively, these technologies enable law enforcement to optimize resource allocation, improving efficiency in crime prevention and control.

AI’s All-Seeing Eye

By leveraging AI, these systems analyze vast amounts of historical crime data. In place-based policing, AI pinpoints crime-prone areas or “hotspots” by analyzing location-based data, helping law enforcement allocate patrols more efficiently. In person-based policing, AI evaluates past behavior, criminal records, and social connections to flag individuals at higher risk of committing or being affected by crime. While these methods aim to enhance public safety, they also raise ethical concerns about privacy, discrimination, and over-policing in marginalized communities.
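A minimal sketch of the place-based idea, assuming nothing more than a grid count over hypothetical incident coordinates (commercial systems such as the original PredPol reportedly used far more elaborate statistical models, but the core ranking logic is similar):

```python
from collections import Counter

# Hypothetical historical incidents as (latitude, longitude) pairs.
incidents = [
    (41.8781, -87.6298), (41.8790, -87.6290), (41.8786, -87.6285),
    (41.8520, -87.6514), (41.8510, -87.6510),
    (41.9003, -87.7004),
]

CELL = 0.005  # grid cell size in degrees, roughly 500 m; an arbitrary choice

def cell_of(lat, lon):
    """Snap a coordinate to its grid cell."""
    return (int(lat // CELL), int(lon // CELL))

# Count past incidents per cell and rank the densest cells as "hotspots".
counts = Counter(cell_of(lat, lon) for lat, lon in incidents)
for cell, n in counts.most_common(3):
    print(f"cell {cell}: {n} past incidents -> patrol priority")
```

Cells with the most past incidents receive patrol priority, which is also precisely how biased historical records propagate directly into patrol patterns.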

As law enforcement agencies across the U.S. seek new ways to respond to crime efficiently, predictive policing has emerged as a high-tech solution. At the heart of this shift are AI-powered tools developed by American startups, which use data to forecast criminal activity down to specific locations and individuals.

One of the most recognized names in predictive policing is PredPol, which has since rebranded as Geolitica. Originally, the company used historical crime data such as location, time, and type of incident to generate daily maps forecasting where certain crimes were most likely to occur, guiding police patrol patterns. While it gained rapid traction, PredPol faced backlash for allegedly reinforcing systemic biases and disproportionately targeting minority neighborhoods. The rebrand to Geolitica marked a shift toward broader public safety analytics with an emphasis on transparency.

Another major player is ShotSpotter, now operating under the name SoundThinking. The company deploys acoustic sensors in urban areas to detect and pinpoint gunfire in real time. While not predictive on its own, the data collected from ShotSpotter systems is often used in forecasting models and helps law enforcement respond rapidly to high-risk situations. The technology is currently deployed in over 100 U.S. cities.

Civitas AI offers a more holistic approach to public safety through its suite of analytical tools that evaluate long-term trends based on 911 call data, crime reports, and public sentiment. By integrating these data sources, Civitas provides police departments with actionable insights for resource allocation and patrol planning, while also aiming to promote transparency and build trust with communities.

Palantir Technologies, though much larger than most startups, plays a pivotal role in this space through its Gotham platform. Gotham aggregates massive datasets, from arrest records and license plate scans to social media and surveillance feeds, to identify patterns and connections that can serve as investigative leads or flag emerging threats. Its advanced analytics make it a powerful tool for law enforcement, though its use also raises significant civil liberties concerns.

Clearview AI contributes another layer to predictive policing with its facial recognition software, which enables identification of individuals through surveillance footage or publicly available images, including those from social media. While Clearview’s technology is often integrated into broader public safety systems, the company has been at the center of numerous privacy and legal controversies. Despite this, its services remain in use by various police departments across the U.S., highlighting the growing tension between technological capability and ethical oversight in modern policing.

Yet despite many startups offering predictive policing tools and numerous cities across the United States adopting them, a letter from U.S. Senators to the Department of Justice (DOJ) highlighted mounting evidence that these technologies do not reduce crime. Instead, they perpetuate and worsen the unequal treatment of people of color by law enforcement.

The Algorithm Made Me Do It

Predictive policing relies heavily on past crime data like arrest records, time and location of incidents, and even social media behavior to forecast where crimes are likely to occur or who might be involved. However, this data is often far from neutral. In many U.S. cities, it reflects years of racially biased policing, with Black and Latino communities disproportionately targeted. When such biased data is used to train algorithms, the technology ends up reinforcing and even amplifying those same disparities. The result is a self-perpetuating feedback loop where certain neighborhoods are over-policed simply because they were historically over-policed, regardless of current crime trends.
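The feedback loop is easy to demonstrate. The toy simulation below (not any vendor's actual model; all numbers are hypothetical) gives two neighborhoods an identical true incident rate but seeds one with a larger historical record, then allocates patrols in proportion to recorded incidents:

```python
import random

random.seed(0)

TRUE_RATE = 0.3               # identical real incident rate per patrol visit
PATROLS_PER_DAY = 10
# Neighborhood A starts with more recorded incidents purely because
# it was more heavily policed in the past.
recorded = {"A": 20, "B": 5}

for day in range(50):
    total = sum(recorded.values())
    snapshot = dict(recorded)
    for hood, past in snapshot.items():
        # Patrols are allocated in proportion to past recorded incidents.
        patrols = round(PATROLS_PER_DAY * past / total)
        # More patrols mean more incidents get *observed and recorded*,
        # even though the underlying rate is the same in both places.
        recorded[hood] += sum(random.random() < TRUE_RATE for _ in range(patrols))

print(recorded)  # A's recorded "crime problem" keeps outpacing B's
```

Because the record, not the underlying reality, drives the allocation, the initial disparity never washes out: neighborhood A continues to absorb roughly 80% of patrols despite posing no greater risk.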

Compounding the problem is the lack of transparency around how these systems work. Many are developed by private companies that guard their algorithms as proprietary secrets. This black-box approach means neither law enforcement nor the public truly understands how these tools generate risk scores or predictions. Without transparency, it’s nearly impossible to audit these systems for fairness or to hold anyone accountable when they produce harmful outcomes.
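Even so, some external auditing is possible when flag decisions and basic demographics are logged. The sketch below, using made-up records, computes per-group flag rates, a crude disparate-impact check that requires no access to the proprietary model:

```python
from collections import defaultdict

# Hypothetical audit log: (demographic_group, was_flagged) per person.
decisions = [
    ("group_x", True), ("group_x", True), ("group_x", False), ("group_x", True),
    ("group_y", False), ("group_y", True), ("group_y", False), ("group_y", False),
]

totals, flagged = defaultdict(int), defaultdict(int)
for group, was_flagged in decisions:
    totals[group] += 1
    flagged[group] += was_flagged  # True counts as 1

for group, n in totals.items():
    print(f"{group}: flagged {flagged[group]}/{n} ({flagged[group] / n:.0%})")
```

A wide gap in flag rates (here 75% vs. 25%) signals potential disparate impact, though a genuine fairness audit would also need ground-truth outcomes and, ultimately, access to the model itself, which is exactly what the black-box arrangement prevents.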

Privacy is another major concern. In schools and communities, predictive systems often collect vast amounts of personal data, including surveillance footage, social media posts, school records, and behavioral reports, all of which are analyzed to flag potential risks. While the goal is to prevent violence or intervene early, such monitoring can happen without individuals’ knowledge or consent. In schools especially, this creates a climate of suspicion, where students may feel watched and judged rather than supported.

One of the most troubling implications of predictive policing is the way it challenges the legal principle of “innocent until proven guilty.” When individuals are flagged as high-risk based on patterns, associations, or previous behavior, they may be treated as suspects without ever having committed a crime. Simply living in a certain neighborhood or being connected to someone with a criminal record can be enough to trigger increased scrutiny. In schools, this could result in a student facing disciplinary action or intervention based solely on what an algorithm suggests they might do.

Justice by Proxy

Over time, this can lead to over-policing in the very communities that are already the most surveilled and over-criminalized. Instead of making residents feel safer, it can deepen mistrust and resentment. Young people in particular may grow up in environments where police presence feels constant, and every action is potentially monitored and judged. In schools, this can have a chilling effect: discouraging open expression, increasing anxiety, and deterring students from seeking help when they need it.

Despite the promises of data-driven safety, the effectiveness of predictive policing remains questionable. While some cities have reported short-term reductions in crime, long-term studies are inconclusive, and critics argue the perceived benefits often don’t outweigh the social costs. These systems can give a false sense of control, all while diverting resources from community-based approaches that prioritize human relationships, social services, and mental health support.

Ultimately, predictive policing exposes a fundamental truth about technology: it is never neutral. Algorithms are shaped by human decisions, trained on flawed data, and deployed in systems already marked by inequality. If left unchecked, they risk embedding those inequalities deeper into the fabric of public safety. As predictive technologies continue to advance, the challenge is not just to improve the tools, but to ask whether we should be using them at all—and who gets to decide.
