Forecasting justice: The promise of AI-enhanced law enforcement

While AI tools offer significant benefits in crime prevention and response, they also raise important ethical and operational concerns

This article is based on research conducted as part of the CA POST Command College. It is a futures study of a particular emerging issue of relevance to law enforcement. Its purpose is not to predict the future, but rather to project a variety of possible scenarios useful for planning and action in anticipation of the emerging landscape facing policing organizations.

The article was created using the futures forecasting process of Command College and its outcomes. Managing the future means influencing it — creating, constraining and adapting to emerging trends and events in a way that optimizes the opportunities and minimizes the threats of relevance to the profession.

By Chief Deputy David Stephens

The clock nudges past midnight as we navigate the silent streets, our patrol car now a beacon of advanced law enforcement. A buzz from the dashboard signals an urgent call — a potential domestic disturbance, possibly violent. Unseen, an advanced AI system hums into action, seamlessly sifting through terabytes of data: recent 911 calls, local social media escalations, the residents’ criminal histories and known affiliations.

Our patrol car’s screens light up with a predictive map, the algorithm flagging a trail of digital footprints that suggests volatility. The system taps into public cameras, pulling up real-time footage as it cross-references involved persons’ profiles. It warns us of a registered firearm in the house and the owner’s erratic behavior documented in online rants.

Our AI partner operates with silent efficiency, its presence felt but not intrusive. It nudges us with a calculated risk assessment, probabilities flashing as we absorb the information, our training melding with the data to sharpen our judgment.

As we pull up, the system has already informed backup units, its predictive nature not just painting a picture of the present but anticipating the next strokes. It’s a partnership, a dance between human discernment and the cold, fast logic of artificial intelligence. Together we step out, ready but cautious, the AI’s silent vigilance our guardian in the fraught unknown.

Charting the uncharted: AI’s vanguard in policing

Emergent technologies such as artificial intelligence (AI), machine learning and deep learning have shown considerable promise for improving the capabilities of predictive policing. [1] At its core, predictive policing seeks to anticipate where crime might occur or who might be involved, allowing officers to preemptively address situations. The computational abilities of AI can analyze vast amounts of data faster and more accurately than humans, pointing to potential hotspots of criminal activity or even individuals who may be at risk. [2] Such capabilities could revolutionize real-time decision-making, giving officers an edge in anticipating and possibly preventing crime.
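
To make this concrete, the sketch below shows the crude core of a hotspot model: bin historical incident coordinates into a grid, weight each incident by how recent it is and rank the cells. Everything here is invented for illustration; deployed systems add time-of-day patterns, seasonality and far richer features.

```python
from collections import Counter

# Hypothetical incidents: (x, y) coordinates plus age in days. Invented data.
incidents = [(3.2, 7.9, 2), (3.4, 7.7, 10), (8.1, 1.3, 1), (3.3, 7.8, 35)]

CELL_SIZE = 1.0        # grid resolution, in the same units as the coordinates
HALF_LIFE_DAYS = 30.0  # an incident's weight halves every 30 days

def hotspot_scores(incidents):
    """Score each grid cell by its time-weighted incident count."""
    scores = Counter()
    for x, y, age_days in incidents:
        cell = (int(x // CELL_SIZE), int(y // CELL_SIZE))
        scores[cell] += 0.5 ** (age_days / HALF_LIFE_DAYS)  # exponential decay
    return scores

# Rank cells so the highest-scoring ones can be reviewed for extra patrol.
for cell, score in hotspot_scores(incidents).most_common(3):
    print(f"cell {cell}: score {score:.2f}")
```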

The integration of AI into law enforcement holds remarkable promise, yet it carries with it considerable ethical and operational quandaries. The human element in policing — imbued with personal discernment, judgment and the potential for fallibility — cannot be overstated. A poignant illustration of this was the erroneous arrest of Robert Williams of Detroit in 2020, in which facial recognition software mistakenly identified him as a shoplifting suspect. This incident sheds light on the potential perils of overreliance on AI without adequate human oversight. [3]

In a similar vein, the 2019 case of another Michigan man, Michael Oliver, serves as a sobering example of the risks associated with AI-driven law enforcement tools. Oliver was wrongfully accused of a crime based solely on a facial recognition match that later proved inaccurate. This instance underscores the urgent need for stringent checks and balances in the deployment of AI technologies in policing. [4] While AI might empower officers with data-driven insights, there’s a substantial risk that it might perpetuate or amplify existing biases, especially if the underlying data is tainted with historical prejudices. [5]

The prospect of basing arrests or surveillance on algorithmic forecasts alone raises critical questions about the accuracy and moral implications of such actions. Bias in AI systems can manifest when the data used to train these algorithms is skewed or unrepresentative, reflecting historical prejudices or unequal treatment of different demographic groups. Moreover, the coding decisions made by developers, often influenced by their own unconscious biases or the lack of diverse perspectives in tech teams, can further propagate inequality and unfair treatment in algorithmic outputs. This results in AI systems that may inadvertently perpetuate and amplify existing societal biases, leading to unjust outcomes in law enforcement contexts.
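
One way such bias becomes measurable is by comparing error rates across groups. The sketch below is purely illustrative, with invented records and field names: it computes a tool's false positive rate separately for each group, the kind of disparity an independent audit would look for.

```python
# Hypothetical audit records: (group, flagged_by_model, actually_offended).
records = [
    ("group_a", True, False), ("group_a", True, True), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
]

def false_positive_rate(records, group):
    """Share of non-offending people in `group` whom the model flagged."""
    flags = [flagged for g, flagged, offended in records
             if g == group and not offended]
    return sum(flags) / len(flags) if flags else 0.0

for group in ("group_a", "group_b"):
    print(f"{group}: false positive rate = {false_positive_rate(records, group):.2f}")
# A large gap between groups' false positive rates signals that the training
# data or the model encodes unequal treatment.
```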

Surveillance technologies like body-worn cameras, while essential for accountability, also introduce concerns about privacy and the potential for misuse. If combined with AI-driven predictive policing, there’s a risk of creating a surveillance state in which individuals are constantly monitored, judged and possibly penalized based on predictions. [6]

The rapid evolution of technology necessitates comprehensive discussions about its utility, ethics and legality in law enforcement. For AI to be universally accepted as a tool in policing, several challenges need to be addressed to ensure transparency and accountability.

Achieving completely accurate and unbiased data is an ideal goal, but in practice it presents significant challenges due to the complex nature of data collection and inherent biases in the systems from which data is sourced. The concept of “accuracy” in data is multifaceted: it refers not only to the correctness of the data but also to its completeness, timeliness and relevance. In many scenarios, especially those involving human behavior, ensuring complete accuracy is difficult.

The move toward unbiased data is multipronged: incorporating input from a wide range of sources, including underrepresented groups; regularly reviewing data for relevance and accuracy; and establishing ethical guidelines for analysis. Policymakers, technologists and law enforcement must come together to set guidelines, ensuring AI predictions are based on accurate, unbiased data and employed judiciously in real-world scenarios. Continuous training for law enforcement officers is also crucial, ensuring they understand the technology’s capabilities and limitations. [7]
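
Parts of that review can be automated. Below is a minimal sketch, with invented record fields and census shares, that flags three of the dimensions above: missing values, stale records and groups underrepresented relative to the population the data is meant to describe.

```python
from datetime import date

# Hypothetical source records; all fields and values are invented.
records = [
    {"id": 1, "location": "dist_1", "group": "a", "updated": date(2024, 1, 5)},
    {"id": 2, "location": None,     "group": "b", "updated": date(2019, 6, 1)},
    {"id": 3, "location": "dist_2", "group": "a", "updated": date(2024, 2, 9)},
]
population_share = {"a": 0.5, "b": 0.5}  # assumed census shares, also invented

def audit(records, today=date(2024, 3, 1), max_age_days=365):
    """Flag incomplete, stale and unrepresentative data."""
    missing = [r["id"] for r in records if r["location"] is None]
    stale = [r["id"] for r in records if (today - r["updated"]).days > max_age_days]
    counts = {}
    for r in records:
        counts[r["group"]] = counts.get(r["group"], 0) + 1
    skew = {g: counts.get(g, 0) / len(records) - share
            for g, share in population_share.items()}
    return {"missing_location": missing, "stale": stale, "share_vs_census": skew}

print(audit(records))
```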

Shaping an ethical AI framework in law enforcement

The incorporation of AI into the realms of predictive policing signifies a potential paradigm shift in law enforcement methodologies. While the possibilities are undoubtedly exciting, a cautious approach anchored in ethics, transparency and rigorous training is necessary to ensure a positive transformation.

A precedent for the use of AI in policing can be observed in Los Angeles, where the LAPD began using an algorithm-driven program called PredPol in 2011. This program analyzed crime data to predict where crimes were likely to occur during specific time windows. [8] While PredPol was designed to assist in the deployment of officers to areas of higher predicted crime rates, its introduction wasn’t without controversy. Critics voiced concerns about potential biases in the system and the danger of perpetuating overpolicing in already-marginalized communities. The LAPD discontinued its use of PredPol in April 2020, after public criticism and after a report from the LAPD’s own inspector general found that it was not demonstrably effective in reducing crime and that concerns about potential bias in its operations had not been resolved. [9] This example illustrates the double-edged sword that AI can represent in policing: It can be a tool for efficiency, but without proper safeguards it risks amplifying systemic racial bias and eroding public confidence in law enforcement.

To address these challenges, one of the pivotal recommendations is comprehensive and continuous training for law enforcement officers. As officers become intertwined with the digital realm, they should be equipped not just to use AI tools but to understand their underpinnings and implications — they must be conversant with the technology they employ.

Even as we lean into the potential advantages of AI, the sanctity of human discretion in policing must remain undiluted. While AI can provide invaluable insights, machines, by their very nature, are devoid of human understanding, emotion and ethical reasoning. AI should be an assistant to human officers, not a replacement, guiding with data but allowing final decisions to rest with humans in the field. Just as important, the AI algorithms used in predictive policing must be transparent.

Transparency in AI, particularly in law enforcement, involves clear disclosure of how algorithms make decisions or predictions. This means the data sources, decision-making criteria and logic behind the AI’s recommendations must be accessible and understandable to officers and, ideally, the public. Implementing “explainable AI” systems that provide reasons for their suggestions in understandable terms is crucial in this context. Regular audits and reviews of AI systems by independent bodies can help to identify and mitigate biases or errors in these systems. Put plainly, greater system oversight is key to mitigating these risks.
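
Explainability need not be exotic. As a minimal illustration, and not a depiction of any deployed system, a linear risk score can report exactly how much each input contributed to its output; the features and weights below are invented.

```python
# Invented feature weights for an illustrative linear risk score.
WEIGHTS = {"recent_911_calls": 0.6, "prior_incidents_at_address": 0.3,
           "registered_firearm": 0.4}

def explain_score(features):
    """Print the score and each feature's exact contribution to it."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    print(f"risk score: {sum(contributions.values()):.2f}")
    for name, contrib in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {name}: {contrib:+.2f}")

# Hypothetical inputs for a single call for service:
explain_score({"recent_911_calls": 2, "prior_incidents_at_address": 1,
               "registered_firearm": 1})
```

An officer, an auditor or a defense attorney can interrogate that accounting line by line, which is precisely what opaque models do not allow.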

The code behind these algorithms should be open for scrutiny to identify and rectify any biases or flaws. Data scientist Cathy O’Neil warns of the dangers of unbridled faith in mathematical models, especially when they have the potential to entrench societal prejudices. [10] Building public trust is vital, and transparency in the tools used by law enforcement is a step toward achieving that trust. A guiding ethical framework is also essential. Researchers Kristian Lum and William Isaac detail the inherent challenges of predictive policing tools, especially their potential for bias and the subsequent erosion of public trust. [11]

The integration of AI into law enforcement must be approached with a profound respect for civil rights and an unwavering commitment to ethical practice. To ensure that AI integration respects civil rights, the police should collaborate with civil liberties groups, ethicists and community representatives in the development and deployment of AI tools. This collaborative approach would help create AI systems that are sensitive to civil liberties concerns. Additionally, implementing comprehensive training for law enforcement personnel on the ethical use of AI is crucial. This training should emphasize the importance of maintaining human oversight and discretion, particularly in scenarios where AI outputs have significant implications for individual rights.

While AI offers a tantalizing vision of future law enforcement operations, its successful and positive integration requires deliberate and ethically grounded steps. By understanding the successes and pitfalls of existing implementations, ensuring rigorous training, emphasizing human discretion and adhering to transparent and ethical principles, we can see a future where technology and human expertise harmoniously converge to achieve the paramount goal of policing: the safety and well-being of the community.

Better decisions, made faster

Beyond just forecasting, AI holds the key to revolutionizing on-the-spot decision-making for officers. Imagine a scenario like the opening of this article, in which an officer receives comprehensive threat assessments, real-time background checks and situational analyses within seconds. Such capabilities could drastically improve the speed and quality of decisions, potentially leading to more positive and less confrontational outcomes. This is the future of AI and decision-making.

In response to the rapid evolution of artificial intelligence and the need to balance technological innovation with ethical considerations, President Joe Biden signed an executive order laying the groundwork for robust governance of AI technology. This order underscores the administration’s commitment to foster AI systems designed and implemented in ways that respect privacy, civil rights and American values, while also maintaining public trust through transparency and accountability. [12]

People interested in the evolution of AI also have a pivotal role to play in shaping its ethical use in law enforcement by actively participating in community discussions and policymaking processes. They should advocate for transparency and accountability in AI systems used by law enforcement. This can involve engaging with local law enforcement agencies, attending public meetings and voicing support for policies that require the use of ethically designed, tested and regulated AI tools.

Additionally, they can support or initiate community-led oversight committees that work closely with law enforcement to review and guide the implementation of AI technologies. Virginia Eubanks, an associate professor of political science at the University at Albany, argues individuals should actively participate in public discussions and policymaking processes, advocating for transparent, accountable and unbiased AI systems. Engaging with and supporting organizations that focus on civil liberties can help ensure AI tools in policing are developed and used responsibly. [13]

Law enforcement agencies should actively engage with technology experts, ethicists, civil rights advocates and community members. This involves participating in forums, workshops and conferences focused on AI and policing. Such collaborative environments foster a comprehensive understanding of the ethical, legal and social implications of AI in law enforcement.

Agencies should also invest in training their personnel in the basics of AI and data science. Understanding the capabilities and limitations of AI will enable officers to contribute more meaningfully to discussions about how AI should be deployed in their work.

Finally, law enforcement should commit to transparency in its use of AI tools. This includes public disclosure of the AI systems being used, the data they are trained on and the decision-making processes involved. Establishing clear channels for accountability in cases where AI tools contribute to adverse outcomes is also crucial. Regularly soliciting feedback and concerns from the communities they serve can help law enforcement understand public sentiment about AI in policing, and that feedback should guide policy and practice around AI tools.

Such governance mechanisms are crucial, as the trajectory of AI integration has shown that a blind trust in data models, especially opaque ones, can undermine public faith and intensify existing societal fissures. Ensuring AI is developed and used in a responsible manner aligns with the president’s vision of securing American leadership in this domain while safeguarding the democratic values the nation holds dear.

Conclusion

Projecting into the next decade, AI will be an integral part of law enforcement — from crime prediction and real-time decision aids to postincident analysis. These technologies could lead to smarter patrolling, fewer unnecessary confrontations and overall enhanced community safety. However, this vision can only materialize with rigorous oversight, consistent retraining and an undiluted focus on civil liberties and ethics. Law enforcement’s AI-driven future must be shaped by a symbiotic relationship where technology amplifies human judgment rather than replacing it. The future promises transformative advances, but it’s imperative that the compass of integrity guide this journey.

References

1. Perry WL, McInnis B, Price CC, et al. Predictive policing: The role of crime forecasting in law enforcement operations. RAND. September 2013.

2. Brayne S. Big data surveillance: The case of policing. Am Sociol Rev. October 2017.

3. Hill K. Wrongfully accused by an algorithm. New York Times. June 2020.

4. Vincent J. AI researchers tell Amazon to stop selling ‘flawed’ facial recognition to the police. The Verge. April 2019.

5. Richardson R, Schultz J, Crawford K. Dirty data, bad predictions: How civil rights violations impact police data, predictive policing systems, and justice. New York Univ Law Rev Online. 2019.

6. Ferguson AG. The rise of big data policing: Surveillance, race, and the future of law enforcement. New York Univ Press. March 2019.

7. Smith A, Mannes S. Automated injustice: How a mechanical failure in a predictive policing system can violate the criminal defendant’s due process rights. Md Law Rev. 2017.

8. Mohler GO, Short MB, Malinowski S, et al. Randomized controlled field trials of predictive policing. J Am Stat Assoc. 2015.

9. Stewart E, Menezes M. LAPD ends another data-driven crime program amid criticism of such tools. Los Angeles Times. 2020.

10. O’Neil C. Weapons of math destruction: How big data increases inequality and threatens democracy. Crown. 2016.

11. Lum K, Isaac W. To predict and serve? Significance. October 2016.

12. The White House. Executive order on the safe, secure, and trustworthy development and use of artificial intelligence. October 2023.

13. Eubanks V. Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press. 2018.

About the author

David Stephens has 17 years of law enforcement experience and currently holds the rank of Chief Deputy at the Kern County Sheriff’s Office. David oversees the Operations Bureau, encompassing patrol operations, detective investigations, communications center operations, narcotics enforcement, search and rescue operations and aircraft and helicopter operations for the agency. He holds a bachelor’s degree in criminal justice from Columbia Southern University. David is a graduate of California POST Command College, class 71.
