Sponsored by QueTel
By Tim Dees for Police1 BrandFocus
Most computer applications in law enforcement exploit the ability of the computer to handle large sets of data and to bring the relevant items to the user’s attention when needed. A relatively untapped capability lies in artificial intelligence, which draws on the computer’s capacity for analysis.
Artificial intelligence is the common term, but a more accurate way of describing this capability is “machine learning.” This involves first teaching the computer to recognize the object(s) you’re interested in, then getting the machine to find that object amid many others. Humans learn to do this without thinking about the process, but getting the computer to do the same thing takes some understanding of how we learn and what it is we are recognizing.
AI has become something of an industry buzzword, but here are three key applications where AI and machine learning can help law enforcement:
1. Faster video redaction
One relatively simple application of machine learning that is useful for law enforcement lies within video redaction software. Before the output from a bodycam or dashcam can be released for public consumption, it is often necessary to first redact certain details, such as license plates or faces of individuals not involved in the incident.
Absent redaction software or a service like QueTel’s Redaction Center, a technician has to review every frame of the video and draw an outline around the unwanted feature to block it out or blur it. This is hugely time-consuming. With machine learning, the technician first identifies a sample of the unwanted object, then tells the computer to find every other instance of that object and render it unidentifiable.
If the software is working properly, it will find all the appearances of the object, even as they move in and out of the frame and change visual aspect, sometimes being seen from the side or some angle other than straight-on. This capability saves a lot of processing time and is invaluable for agencies that have to process many hours of video for release to third parties.
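To make the idea concrete (this is an illustration only, not a description of QueTel’s product), the core of automated redaction is detecting a sensitive region in each frame and blurring it. The minimal sketch below assumes Python with OpenCV and uses the library’s bundled Haar cascade face detector; the input and output file names are hypothetical, and production tools rely on far more robust detectors plus frame-to-frame tracking.

```python
# Minimal sketch: blur every detected face in a video, frame by frame.
# Production redaction software uses stronger detectors and tracking;
# this only illustrates the detect-then-blur loop.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

reader = cv2.VideoCapture("bodycam_clip.mp4")   # hypothetical input file
fps = reader.get(cv2.CAP_PROP_FPS)
width = int(reader.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(reader.get(cv2.CAP_PROP_FRAME_HEIGHT))
writer = cv2.VideoWriter("redacted_clip.mp4",
                         cv2.VideoWriter_fourcc(*"mp4v"),
                         fps, (width, height))

while True:
    ok, frame = reader.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        # Replace each detected face region with a heavy blur.
        frame[y:y+h, x:x+w] = cv2.GaussianBlur(
            frame[y:y+h, x:x+w], (51, 51), 0)
    writer.write(frame)

reader.release()
writer.release()
```

The point of automating this step is exactly what the article describes: instead of a technician outlining the object in every frame, the software applies the same detection to all frames and the technician only reviews the result.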
2. Assisted verbal analysis
Machine learning is also useful for analyzing statements from suspects and witnesses. It should be no great revelation that people routinely lie to the police, especially when they have been involved in a crime.
Lying is easy, but maintaining the lie over time requires careful thought and talent. The lie can’t conflict with other, truthful statements the subject might have made, and the subject has to remember what lies they have told and keep them consistent.
At the same time, the detective tasked with deriving the truth from multiple statements has to remember the various accounts and try to identify the inconsistencies. With machine learning, the statements (both audio and written) from all subjects can be broken down and analyzed by the computer, and any term used can be presented to the detective on demand.
“One of the things we can do is take the audio track and categorize all of the words in it,” said Jim Cleaveland, president of QueTel. “Then you do a search on the words and go immediately to the point in that several hours of interview where the relevant part was stated. You can clip that and bring that out in court to show that the guy was obviously lying.”
In other words, if the suspect says he was at his mother’s house at the time the crime was committed, then the software might look for “mother’s house,” the name of the mother, her address or other mentions of locations so that the detective can spot references that don’t line up. When there may be many hours’ worth of statements and recordings to review, automating this search is a major advantage.
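The mechanics behind this kind of search are simple to sketch. Assuming a speech-to-text engine has already produced a transcript as a list of words with start times (an assumed format, not QueTel’s actual data model), finding every point in a long recording where a phrase occurs might look like this:

```python
# Illustrative sketch: search a timestamped transcript for a phrase and
# return the offsets where it occurs, so an investigator can jump to
# those points in a long recording. The transcript format is an assumption.
from typing import List, Tuple

# Each entry: (word, start_time_in_seconds) from a speech-to-text engine.
Transcript = List[Tuple[str, float]]

def find_phrase(transcript: Transcript, phrase: str) -> List[float]:
    """Return the start times of every occurrence of `phrase`."""
    target = phrase.lower().split()
    words = [w.lower().strip(".,?!") for w, _ in transcript]
    hits = []
    for i in range(len(words) - len(target) + 1):
        if words[i:i + len(target)] == target:
            hits.append(transcript[i][1])
    return hits

# Hypothetical usage:
interview = [("I", 0.0), ("was", 0.4), ("at", 0.6), ("my", 0.8),
             ("mother's", 1.0), ("house", 1.4), ("all", 1.8), ("night", 2.0)]
print(find_phrase(interview, "mother's house"))   # -> [1.0]
```

In practice the detective would run several related searches (the mother’s name, her address, other locations mentioned) and compare the returned points across all of the subject’s statements.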
3. Finding faces, vehicles in surveillance video
Machine learning can also help police analysts find useful information in surveillance video. Human analysts can easily be overwhelmed if asked to locate the face of a specific person in a video stream where there are hundreds of faces in view at any given moment. Only slightly less demanding is the job of finding a particular style and color of vehicle in a view of a surface street or freeway.
The computer can compare every face or vehicle against the characteristics it is told to find and present possible matches to the operator. This can often be done in real time (with live feeds) or even faster with recordings, as the computer does not necessarily have to review them at real-time speeds.
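One common way to do this kind of matching (an assumption here, not a description of any particular vendor’s system) is to convert each detected face into a numeric embedding and rank detections by their similarity to the embedding of the person being sought, leaving the final judgment to the operator. A brief sketch of the ranking step, with random vectors standing in for real embeddings:

```python
# Sketch of similarity ranking over face embeddings. How the embeddings
# are produced (typically a deep network) is assumed and omitted; this
# shows only the matching step that surfaces candidates for human review.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_candidates(query, detections, threshold=0.8):
    """Return detections whose similarity to the query exceeds the
    threshold, best matches first, for an operator to confirm or reject."""
    scored = [(frame_id, cosine_similarity(query, emb))
              for frame_id, emb in detections.items()]
    return sorted([s for s in scored if s[1] >= threshold],
                  key=lambda s: s[1], reverse=True)

# Hypothetical usage with random vectors in place of real embeddings:
rng = np.random.default_rng(0)
query_embedding = rng.normal(size=128)
detections = {f"frame_{i}": rng.normal(size=128) for i in range(5)}
print(rank_candidates(query_embedding, detections, threshold=0.0))
```

Note that the output is a ranked list of possibilities, not a decision; that distinction matters for the policy discussion that follows.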
Working smarter
There is some public resistance to the use of machine learning to review video or surveillance feeds. The famously liberal San Francisco Board of Supervisors recently banned the use of facial recognition software by all city departments, using terms like “big brother” and “police state” in the rhetoric that criticized the technology.
Conspicuously absent from the discussion was any acknowledgment that these analytical systems do not decide who is to be stopped, questioned or arrested. Those decisions are always made by one or more human operators, who are presented with the results the machine has found.
These systems do not ring bells and say, “This is the terrorist,” displaying the mug shot of the hapless lookalike. Instead, the machine tells the operator, “The other 32,000 people who appear in this video are not your suspect, but this one might be.”
Ultimately, says Cleaveland, this technology offers police departments a set of tools that can help them work more efficiently – saving taxpayer dollars while promoting public safety.
“What we’re really doing here is saving the investigators and the officers time,” he said. “They could be looking at hundreds and hundreds of hours of surveillance. That’s going to take the time of a highly paid police professional. What we’re trying to do is shorten that time from literally hours down to minutes.”