Content provided by Cognyte
One of the major challenges – and greatest opportunities – for law enforcement leaders today is deciding what approach their organizations should take toward advanced technology powered by artificial intelligence (AI). If you’re one of those leaders and have been hoping to wait for the dust to settle before taking a position on the issue, think again. The rapid pace of change in this technology presents a unique opportunity for your organization to enhance efficiency, accuracy and transparency.
Consider all these recent events and developments in this area:
- Citizen groups in many U.S. cities are encouraging their police departments and city governments to become more transparent about the technology they’re using, how it works and whether it, in fact, results in unintended police practices targeting specific segments of the population.
- President Joe Biden has directed the departments of Justice and Homeland Security to report to him on the use of AI in the criminal justice system by the end of 2024, including recommending how AI can enhance law enforcement efficiency and accuracy, consistent with privacy, civil rights and civil liberties protections. The report also will recommend best practices for using AI, facial recognition technology, other technologies using biometric information and predictive algorithms.
- The European Union Parliament passed a comprehensive new law, the E.U. AI Act, which could become a global standard. The act will regulate the use of AI by governments and private sector companies, including setting rigorous new limits on how law enforcement can use AI.
- The implementation of new regulations on law enforcement technology is proving to be a complex task. A prime example is New York City, where citizen pressure led the city council to adopt the POST (Public Oversight of Surveillance Technology) Act, which requires the police to regularly report on each technology’s impact and use policies. Yet even after the NYPD began releasing its public reports, the city’s inspector general and citizen groups criticized the department for not being transparent enough. The police argued that revealing all the information demanded by critics would hinder their crime-fighting capabilities, illustrating the challenge of balancing transparency and effectiveness in law enforcement.
- Seven U.S. senators recently called on the Justice Department to halt all grants to local police agencies for predictive policing projects until the DOJ can ensure grant recipients will not use such systems in ways that have a discriminatory impact.
Much work remains to determine how best to balance the benefits advanced technology and AI can deliver in law enforcement and public safety against the need for standards that can build public trust in outcomes when these tools are used.
Defining these standards, rules and best practices is still a work in progress, but much has already been done. There is an emerging consensus that any technology using AI needs to be explainable, trainable and auditable to be trustworthy. Let’s look more closely at each of these qualities.
TECHNOLOGIES NEED THESE QUALITIES
But first it’s worth looking at the future – not the distant future, but a future we are close to realizing. Imagine a very real scenario police officers face frequently and consider how valuable AI-enabled tools can be in enabling better policing, saving lives and helping officers make better decisions. This scenario was described in a recent article in Police Chief Magazine by Assistant Chief Kenneth Kushner of the Santa Barbara, California Police Department.
“With blue lights flashing, sirens blaring and tires screeching, a patrol officer responds to a shirtless man covered in blood brandishing a knife on a public beach and screaming that he wants to kill people. As the rookie officer is driving, she asks her partner about the suspect’s history of calls with the police and a strategy to de-escalate. Her partner, a Police Artificial Intelligent Assistant (PAIT), scans department reports and open-source web information to analyze the successful outcome variables that have played a role with not only this individual but other similar incidents. PAIT provides valuable insight, suggests an area to park for the best approach based on what the drone is relaying, and reiterates the newest legislation on using force regarding edged weapons. PAIT also gives the officer several real-time pointers provided by veteran officers on successful de-escalation. Is this straight out of a sci-fi movie? Not really. The path to officers communicating with digital partners to seek information and guidance is on the cusp of reality.”
This example is not yet a reality, but AI is widely used in law enforcement and many other applications today. These uses range from predictive policing solutions to investigative analytics solutions, from decision intelligence platforms to software that helps police identify the location of gunfire in near real time.
For the public and others to have confidence that these AI-enabled tools are truly beneficial and can be used in fair, equitable ways that enhance public safety, we need to ensure the technology is explainable, trainable and auditable. Here’s what that means:
Explainable
As AI increasingly influences decisions, it’s essential to make AI models explainable. That means understanding how an AI system arrives at its conclusions. This is especially true in high-stakes decisions like identifying a likely suspect in a criminal investigation, determining how best to deploy law enforcement resources and assessing the relative risks of different potential terrorist threats.
The most innovative technology companies ensure their customers can easily customize the AI models in their platforms. This enables law enforcement users to tune their tools to address biased or problematic results and better explain how their tools operate if required in court and other public forums.
Trainable
AI is only as smart as the data it’s trained on. In many cases, problems with the data sets used to train AI models have resulted in AI systems that produce biased outcomes. So, following best practices in training is critical. Incomplete, inaccurate or unrepresentative data sets can lead to model bias, as can failures to account for nuances based on cultural, racial or gender considerations.
Auditable
Auditability means an examiner or auditor can evaluate the algorithms, models and datasets used in any AI tool and accurately report how they function, how specific results are produced and what affects the results.
It also means you can measure the system’s performance against criteria like reliability and accuracy of results. An audit should verify that any AI tool does not infringe on critical principles or rights, such as respect for privacy or equity.
CONCLUSION
AI technology is here to stay, offering substantial advantages to law enforcement, intelligence and public safety agencies. Technologists and data and computer scientists are constantly pushing their tools forward in functionality and ease of use, making the futuristic “what if?” scenarios a reality. As leaders in law enforcement and security, we have a responsibility to help ensure these innovative tools are used ethically and for the public good. How well we do that will be key to technology adoption.
Learn more about how Cognyte empowers law enforcement agencies with data fusion and artificial intelligence to combat crime and protect public safety.