This article is based on research conducted as a part of the CA POST Command College. It is a futures study of a particular emerging issue of relevance to law enforcement. Its purpose is not to predict the future; rather, to project a variety of possible scenarios useful for planning and action in anticipation of the emerging landscape facing policing organizations.
The article was created using the futures forecasting process of Command College and its outcomes. Managing the future means influencing it — creating, constraining and adapting to emerging trends and events in a way that optimizes the opportunities and minimizes the threats of relevance to the profession.
By Captain Bill Elbert
Imagine you’re an investigator for a metropolitan police agency, assigned to your first big case. Patrol officers arrested your suspect during a traffic stop, leaving you little time to prepare for your interrogation.
This case has had a lot of media coverage, and the arrest hit internet news sources as soon as the handcuffs were double-locked. Walking through the door of Interview Room B, you’re thinking, “Please, God, don’t let me mess this up. How am I going to get a confession out of a repeat offender who has been in more interview rooms than I have years in service? I’m sure he’ll weave lies within truths just enough to keep me guessing. They’re all watching behind the glass; I just don’t want to look like a rookie. I wish there were an easy way to know when he’s not telling the truth…”
Since the dawn of policing, officers have been interviewing suspects, and suspects have been lying to police. Not surprisingly, most officers believe they are skilled at detecting lies. [1] Yet research shows that police are no more successful at lie detection than the average citizen. [2] Artificial intelligence (AI), however, could offer a method to significantly improve the accuracy of lie detection: research indicates that machines can detect lies more effectively than humans. This raises the question of whether it’s appropriate to entrust the outcome of a criminal investigation to machines.
In a world where uncovering the truth is more critical than ever, we still rely on an uncertain method to detect deception: the human observer. In the past half-century, a substantial body of research, including over 200 published deception detection experiments, suggests that human observers identify deception with an accuracy barely exceeding random chance. [3] A 2021 study using AI algorithms to analyze facial expressions, however, demonstrated an accuracy of over 80%. This prompts the question: Why do we still rely on human observation to recognize deception in criminal investigations?
Picture a criminal investigation system using machine learning, a branch of AI, to detect deception, capturing even the most subtle and fleeting cues invisible to the human eye. If integrated into an interview room, imagine how this system could allow the interrogator to instantly access real-time insights into a subject’s hidden biological and physiological responses to deception. With advancements in AI and machine learning, there lies a tremendous opportunity to revolutionize the investigative process to achieve the goal of any interrogation – the truth. But has technology come so far that we are ready for machines to assess our guilt?
Complexities of interrogations and recognizing deception
Law enforcement interrogations can require an inordinate amount of planning, preparation, engagement and evaluation, especially in critical and complex criminal investigations. The average police interrogation lasts from 30 minutes to two hours, [4] although complex interviews have lasted well over 12 hours. [5] Interviews can be mentally exhausting for all involved, regardless of the duration.
During an interrogation, the investigating officer has many responsibilities. They must ask relevant questions, analyze the responses, develop follow-up questions and identify disparities in a subject’s statement. The officer needs to process all this information while also recognizing the suspect’s verbal and nonverbal cues of deception or avoidance. Failing to identify signs of deception can allow a false statement to go unchallenged, while recognizing them can change the direction of an interview entirely.
How, though, do we recognize when we are being deceived? Studies indicate that humans judge dishonesty primarily by observing a person’s body language and speech. Fortunately, AI tools that can help police separate truth from falsehood are on the near horizon. Are we ready for them?
Is there truth in the machine?
Machine learning allows computers to “learn” from data to work on a task without being programmed explicitly for it. [6] Machine learning algorithms have the remarkable ability to convert input data into powerful problem-solving tools. A recent study [7] has demonstrated that machine-learning algorithms surpass human observers in detecting deception by up to 30%. This was supported by additional facial expression studies that demonstrated machines outperformed human judges in detecting deception. [8,9]
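To make the idea of “learning from data” concrete, here is a toy sketch in Python of a nearest-centroid classifier. The two features (blink rate and relative pitch rise) and every number in it are invented for illustration; real systems train far richer models on large labeled datasets.

```python
# A toy illustration of "learning from data": a nearest-centroid
# classifier trained on hypothetical cue measurements. The feature
# values (blink rate per minute, relative pitch rise) are invented
# for illustration, not drawn from any real dataset.

def centroid(samples):
    """Average each feature across a list of feature vectors."""
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(len(samples[0]))]

def distance(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Labeled training data: [blink_rate, pitch_rise]
truthful = [[18.0, 0.02], [20.0, 0.01], [17.0, 0.03]]
deceptive = [[32.0, 0.12], [29.0, 0.10], [35.0, 0.15]]

centroids = {"truthful": centroid(truthful), "deceptive": centroid(deceptive)}

def classify(sample):
    """Assign the label whose centroid is closest to the sample."""
    return min(centroids, key=lambda label: distance(sample, centroids[label]))

print(classify([31.0, 0.11]))  # prints "deceptive"
```

Given labeled examples of truthful and deceptive behavior, the algorithm derives its own decision rule rather than being explicitly programmed with one, which is the essence of machine learning.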
Why is it more difficult for humans than for machines to detect deception in facial expressions? Research indicates that the human cognitive system struggles to process large amounts of information in a short period, a limitation machines do not share. [10] This highlights the value of machine learning algorithms in enhancing deception detection in law enforcement interrogations. By harnessing this potential, law enforcement could develop a sophisticated deception detection system that goes beyond current investigatory processes.
How can AI assist humans in determining conclusions drawn from an interrogation? We already know that a machine learning algorithm can be “taught” to recognize signs of deceit, whether they are obvious or subtle. [11] Building on those advancements, deception algorithms and other analytic tools are in use or under development that could be adopted by law enforcement. These systems have the potential to revolutionize the way we understand and uncover deception, but many critics harbor strong reservations about them.
Currently, proprietary deception detection algorithms such as AVATAR (Automated Virtual Agent for Truth Assessments in Real-Time) and iBorderCtrl (an AI border security program) are being used for border security in Europe, Canada, and the United States with an accuracy rate of 80% to 85%. [12] The American Civil Liberties Union (ACLU) and critics of the iBorderCtrl immigration project, however, have expressed concerns about data-driven biases, discrimination, and human rights violations through the use of AI. [13] These concerns have ethical and legal implications, which could potentially have a negative impact on the use of algorithms to assist the police in suspect interviews.
Other civil rights groups such as the NAACP have raised concerns about AI’s lack of transparency and reliability, as well as oversight, bias and privacy implications. Despite these concerns, research continues to develop as technology advances and the potential applications of AI in law enforcement evolve. To survive inevitable legislative and judicial challenges, it is critical for developers and the police to be able to explain the advantages while also creating safeguards for the use of AI to seek the truth.
Enhancing human detection
When considering the need for such technology, it is important to understand that AI deception technology differs significantly from polygraphs and voice stress analyzers. Although they all rely on the premise that deceptive behavior causes measurable physiological and psychological responses, AI’s data-driven algorithms are trained on large datasets to identify patterns indicative of deception.
Researchers from the University of Virginia remind us that human deception detection is not an exact science. [14] This is important to note because AI systems detect deception using the same general parameters as human observers: facial expressions, body language and linguistics. The stark difference is that AI can recognize subtle physiological changes that are not easily noticeable to the human eye. AI can also process information at a speed beyond the capability of a human being while identifying patterns derived from the data. If AI deception technology relies on the same deception cues used by the human observer but in a significantly more effective manner, should we not use it to enhance our own ability to detect deception?
There are three primary ways a technologically assisted interview or interrogation would be more effective than those conducted using investigators alone:
- Analyzing eye data: Pupil dilation and changes in eyeblink rate are linked to increased cognitive load during deception. [15] Such changes are not readily visible to a human observer but can easily be detected by a trained algorithm.
- Microexpressions: Facial microexpressions are much harder to observe than macroexpressions and can expose hidden emotions that contradict verbal statements. Microexpressions are involuntary facial movements lasting from 0.04 to 0.2 seconds. As with eye movements, microexpressions can be mapped and observed easily with technology, though they frequently go unnoticed by the human eye.
- Voice and speech patterns: In 1991, researchers documented that a significant increase in pitch was associated with deceptive speech compared to truth-telling speech. [16] Additionally, researchers found that differential changes in a subject’s voice can be used to determine deception in high-stress situations (i.e., criminal interviews). [17] Linguistic machine learning models can detect deception with around 85% accuracy. [18]
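As a simple illustration of the voice-pitch cue described above, the sketch below flags a rise in average pitch between a baseline sample and a statement sample. The pitch values in hertz and the 5% threshold are illustrative assumptions, not validated parameters from the cited research.

```python
# A simplified sketch of the pitch-rise cue: compare a speaker's
# baseline pitch against pitch measured during a statement and flag
# a meaningful increase. Sample values and threshold are invented
# for illustration only.

def mean(values):
    """Arithmetic mean of a list of numbers."""
    return sum(values) / len(values)

def pitch_rise_flag(baseline_hz, statement_hz, threshold=0.05):
    """Return True if mean pitch rises more than `threshold` (fractional)."""
    rise = (mean(statement_hz) - mean(baseline_hz)) / mean(baseline_hz)
    return rise > threshold

baseline = [118.0, 121.0, 119.5, 120.5]   # pitch (Hz) during casual conversation
statement = [131.0, 133.5, 129.0, 132.0]  # pitch (Hz) answering a key question

print(pitch_rise_flag(baseline, statement))  # prints True
```

A production system would extract pitch from raw audio and calibrate the threshold per speaker; the point here is only that the cue itself reduces to a measurable, comparable quantity.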
If we believe AI should be used to enhance our own deception capabilities, it would be valuable to acknowledge its present limitations and how it should be applied in a law enforcement interrogation setting.
Machine learning systems
Several systems are already on the market, including Tobii Pro Spectrum, EyeDetect and Silent Talker. Tobii Pro Spectrum, built around a high-speed camera and software system, is an advanced, high-frequency eye tracking platform intended for in-depth studies of human behavior and the mechanics of eye movements. [19] The EyeDetect system has been successfully used by law enforcement for employee screenings in several states, [20] while the Silent Talker system is an artificial neural network-based system that automatically records and analyzes nonverbal cues (e.g., facial expressions, rate of eye blinking, eye movement) to categorize a subject’s psychological state. [21]
Each of these systems can provide:
- An objective analysis of observed data free from emotion
- Support for intuitive factors of deception to support human decision-making
- A constant analysis not susceptible to human fatigue
- A cross-cultural analysis of universal microexpressions
For each of these areas, the encouragement is not to turn truth detection over to the machine but to use trained humans to interpret the data the machines produce. This may provide a viable path through the legal and cultural objections and allow the police to be more effective in determining the truth. Distinguishing truth from falsehood not only holds the guilty accountable; it also absolves the innocent of suspicion, a significant benefit in any setting where truthfulness is at stake.
Considerations
To integrate machine learning technology into the investigative process, it’s important to carefully weigh the risks and rewards. It’s also critical to emphasize that AI deception models should serve as a tool for pinpointing specific cues for the officer to examine, not as a tool to confirm whether deception is taking place. Most importantly, these tools should not replace the human element in an interview. Leveraging a technology that focuses on the same cues as humans but with a higher success rate presents an incredible opportunity. Its primary potential lies in significantly enhancing the investigator’s ability to detect deception, augmenting our natural observation skills and keeping us one step ahead in identifying the truth.
Despite AI deception technology’s tremendous potential, certain concerns need to be addressed for it to be deemed applicable in the interrogation setting. Among them are:
- An over-reliance on technology leading to poor decision-making by investigators
- Its use for non-criminal purposes such as administrative investigations against police officers
- The admissibility of machine-assisted confessions in court [22]
- Concerns of biased data and conclusions
- The fear of AI and mistrust in an over-reliance on technology to resolve human problems
To combat these issues and ensure the credibility of any AI deception tool in law enforcement, future users should focus on three things: testing, validation and education. This would start with developing a robust machine-learning tool and putting it through real-world testing: deploying it in actual investigations, then comparing the data it generates with the outcomes of those investigations. Doing so allows a rigorous assessment of the tool’s effectiveness and reliability in practical scenarios. Once a tool has undergone real-world testing, its accuracy must be validated by the scientific community, and the findings must be shared to minimize resistance and concerns not only from the public but also from law enforcement.
Conclusion: Putting it all together
For law enforcement to embrace machine-assisted lie detection, it would require a system that can recognize several cues of deception at once. Combining multiple indicators makes the most effective use of the research data and provides a more reliable outcome. This multimodal approach should pair facial microexpressions with other indicators, such as voice analysis and eye data, providing a rich source of analytic information for machine learning algorithms.
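The multimodal fusion idea can be sketched in a few lines of Python: each analysis channel contributes a deception score, and a weighted average combines them into one composite. The channel names, weights and scores here are purely illustrative assumptions, not values from any deployed system.

```python
# A minimal sketch of multimodal fusion: each channel produces a
# deception score between 0 and 1, and a weighted average combines
# them. All names, weights, and scores are invented for illustration.

def fuse(scores, weights):
    """Weighted average of per-channel scores (weights need not sum to 1)."""
    total = sum(weights[ch] * scores[ch] for ch in scores)
    return total / sum(weights[ch] for ch in scores)

scores = {"microexpression": 0.7, "voice": 0.6, "eye_data": 0.8}
weights = {"microexpression": 0.5, "voice": 0.3, "eye_data": 0.2}

composite = fuse(scores, weights)
print(round(composite, 2))  # prints 0.69
```

The design point is that no single channel decides the outcome; a weak signal in one modality can be offset or reinforced by the others, which is what makes the multimodal approach more reliable than any lone indicator.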
While no method guarantees 100% accuracy in detecting deception, integrating intelligent systems into the interrogation process holds much promise. Machine learning can significantly enhance interrogations by quantifying physiological responses, behavioral and auditory variables, as well as unique patterns. This would allow the investigator to focus on analyzing statements and gathering facts while ultimately improving the efficiency and effectiveness of law enforcement interrogations. With further research and development, we have the opportunity to develop a tool that can allow investigators to not merely conduct interviews but also navigate the complex human psyche with unmatched precision and insight.
References
1. Hill JA, Moston S. Police perceptions of investigative interviewing: Training needs and operational practices in Australia. Br J Forensic Pract. 2011;13(2):72–83.
2. Kassin S. Confession evidence: Commonsense myths and misconceptions. Crim Justice Behav. 2008;36(10).
3. Bond CF Jr, DePaulo BM. Accuracy of deception judgments. Pers Soc Psychol Rev. 2006;10(3):214–234.
4. Leo RA. Inside the interrogation room. J Crim Law Criminol. 1996;86:266–303.
5. Drizin SA, Leo RA. The problem of false confessions in the post-DNA world. N C Law Rev. 2004;82:891–1007.
6. Bell J. Machine Learning: Hands-on for Developers and Technical Professionals. John Wiley & Sons; 2020.
7. Kleinberg B, Verschure B. How humans impair automated deception detection performance. Acta Psychol. 2021;213:103250.
8. Meservy T, Jensen ML, Kruse J, et al. Deception detection through automatic, unobtrusive analysis of nonverbal behavior. IEEE Intell Syst. 2005;20:36–43.
9. Perez-Rosas V, Abouelenien M, Mihalcea R, et al. Verbal and nonverbal clues for real-life deception detection. Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. 2015:2336–2346.
10. Merylin M, Stéphanie M, Cristina S, et al. Detecting deception through facial expressions in a dataset of videotaped interviews: A comparison between human judges and machine learning models. Comput Hum Behav. 2022;127:107063.
11. Prome SA, Ragavan NA, Islam MR, Asirvatham D, Jegathesan AJ. Deception detection using machine learning (ML) and deep learning (DL) techniques: A systematic review. Nat Lang Process J. 2024;6:100057.
12. Discern Science. Automated interviewing technology reveals deception.
13. Laupman C, Schippers L, Gagliardi MP. Biased algorithms and the discrimination upon immigration policy. Law Artif Intell. 2022.
14. Granhag PA, Strömwall LA, eds. The Detection of Deception in Forensic Contexts. Cambridge University Press; 2004.
15. Szulewski A, Fernando SM, Baylis J, Howes D. Increasing pupil size is associated with increasing cognitive processing demands: A pilot study using a mobile eye-tracking device. Open J Emerg Med. 2014;2:8–11.
16. Ekman P, Sullivan M, Friesen W, Scherer K. Face, voice, and body in detecting deceit. J Nonverbal Behav. 1991;15(2):125–135.
17. Miller S, Gordon I. Investigative Ethics: Ethics for Detectives and Criminal Investigators. 2014.
18. Terrill M. Detecting when CEOs lie: ASU professor says AI can detect deceptive language in business leaders better than traditional lie detectors. ASU News. October 2023.
19. Tobii. Tobii Pro Spectrum product description. https://nbtltd.com/wp-content/uploads/2018/05/tobii-pro-spectrum-product-description.pdf. Accessed May 28, 2024.
20. Joseph M, Slater A. Policing Project five-minute biometrics primer: Lie detection II – 21st century tests. Policing Project NYU Sch Law. September 2020.
21. Rothwell J, Bandar Z, O’Shea J, McLean D. Silent talker: A new computer-based system for the analysis of facial cues to deception. Appl Cogn Psychol. 2006;20:757–777.
22. Delisle v. Crane Co., FL 2018.
About the author
Captain Bill Elbert serves as the Commander of Compliance and Professional Standards at the Solano County Sheriff’s Office in Fairfield, California. With 20 years of law enforcement service, Bill has had a diverse range of experience including Patrol, Courts, Custody, Civil, Fire Investigator, Detective, Field Training Officer, K-9 handler and SWAT. He also provided agency training as an instructor in Defensive Tactics, Taser, Less Lethal Munitions and Tactical Rifle.
Bill graduated from the Sherman Block Supervisory Leadership Institute (SLI) Class 409 and attended POST Command College Class 72. In 2019, he received the FBI-LEEDA Trilogy Award and was recognized with the Distinguished Service Medal for his actions in the line of duty. Bill holds numerous professional certificates from the California Commission on Peace Officer Standards in Training (POST) as well as advanced certificates from the Robert Presley Institute of Criminal Investigation (ICI). He is the agency use of force subject matter expert and sits on the board of the California Force Instructors Association (CalFIA) as well as the California Police Officer’s Association (CPOA) Region VIII. Bill is also an Honorary Commander at Travis Air Force Base.