In November 2022, OpenAI introduced ChatGPT, an advanced natural language processing (NLP) tool powered by artificial intelligence (AI). This technology facilitates interactions that closely resemble human conversations, responding to inquiries with a high degree of accuracy and relevance.
NLP, a critical domain within AI, enables computers to understand, generate and modify human language. It relies on large language models (LLMs), like the one behind ChatGPT, which are trained on extensive data sets. An LLM can answer questions and assist users in a variety of ways (e.g., writing an article, composing a speech, drafting commendation letters, creating a fitness plan or developing a vacation itinerary).
As these models continue to evolve, their benefits across various industries, including law enforcement, are becoming more obvious. Trained on vast amounts of data, they can generate coherent and contextually relevant text, making them valuable tools in many sectors. This article looks at potential applications of LLMs, with a particular emphasis on their role in policing — especially writing reports.
Overview of Large Language Models
LLMs, including OpenAI’s GPT-4 and ChatGPT, are trained on a wide range of internet text. Unlike humans, however, they do not think or comprehend content. These models operate by detecting patterns in their training data and producing responses that reflect those identified patterns.
One of the most promising applications of LLMs in law enforcement is in the area of report writing. Officers spend significant amounts of time drafting reports. LLMs can support officers in this task by generating coherent and detailed reports based on the inputs provided by the officers.
Officers can efficiently input data for what the profession colloquially calls “routine reports”: thefts, trespassing, drug possession, simple assault, vandalism and certain other low-level offenses. Systems for reporting more complex crimes can also be developed.
A template-based approach
A template-based approach is particularly helpful for routine reports, such as simple thefts, drug possessions, and assaults. It provides a structured format for officers to follow, ensuring that all essential information is captured. (See the end of the article for a PDF of checklist examples for report-writing templates.)
Professor Ian Adams of the University of South Carolina outlines some considerations when using ChatGPT:
- Select a template: An officer or a community services officer selects a pre-formatted template that matches the incident they’re reporting. These templates are designed with efficiency in mind, with clear sections for important information like incident details, people involved, evidence, and the details of what happened.
- Fill out details: Next, the officer fills in the basic info into the correct locations in the template. This helps give ChatGPT context and keeps the report organized.
- Request AI assistance: This is where the AI does its work. ChatGPT analyzes the information the officer has entered in the template, along with any other pertinent details the officer provided, then generates content that fits right into the template. This is extremely helpful for officers when crafting a narrative.
- Review and edit AI content: This is a critically important step. The officer reviews the AI-generated content ensuring the narrative is accurate and clear. If there are any errors, the officer may request changes using the AI system.
- Combine template and AI-generated responses: Once the content is acceptable for submission, the officer combines it with the original completed template, then prints copies of both the AI-generated narrative and the template to go in the report. This documents the incident in a detailed, well-organized way and keeps the reporting integrity intact.
- Finalize and submit: Finally, the officer gives the whole report a final review, including the template and narrative parts, and makes any corrections. Then, the officer concludes the report and submits it in accordance with department policies.
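The steps above can be sketched in code. The snippet below is a minimal illustration only: the template fields, the incident details and the prompt wording are all hypothetical, and the LLM call itself is left as a stub, since a real deployment would use an approved service integrated with the agency’s RMS.

```python
# Minimal sketch of the template-based workflow: select a template,
# fill in incident details, then build a prompt that asks an LLM to
# draft a narrative grounded only in those facts. The actual model
# call is intentionally omitted; the draft would still go to the
# officer for review and editing before submission.

THEFT_TEMPLATE = """\
Incident type: {incident_type}
Date/time: {datetime}
Location: {location}
Parties involved: {parties}
Evidence: {evidence}
Summary of events: {summary}
"""

def build_prompt(template: str, details: dict) -> str:
    """Steps 1-2: fill the selected template with the officer's details."""
    filled = template.format(**details)
    # Step 3: instruct the model to stay within the supplied facts.
    return (
        "Draft a clear, factual police report narrative using only the "
        "information below. Do not invent details.\n\n" + filled
    )

# Hypothetical incident data for illustration only.
details = {
    "incident_type": "Theft (shoplifting)",
    "datetime": "2024-03-01 14:30",
    "location": "200 Main St.",
    "parties": "Store manager (reporting party); one suspect",
    "evidence": "Store surveillance video; recovered merchandise",
    "summary": "Suspect concealed merchandise and exited without paying.",
}

prompt = build_prompt(THEFT_TEMPLATE, details)
print(prompt)
```

Keeping the factual fields in a fixed template, and instructing the model not to go beyond them, mirrors the review-and-edit step above: the officer remains responsible for verifying that the generated narrative matches the entered facts.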
See the end of the article for a non-comprehensive list of checklists for creating LLM templates for police report writing. These checklists could be established and used with a ChatGPT-type program — one hopefully integrated with an established RMS.
Important considerations
In considering the practical application of ChatGPT in law enforcement, it’s important to address a potential concern: the time investment required in using ChatGPT for police report writing.
While initial observations suggest that the time spent entering data and then proofreading AI-generated reports may not drastically reduce the time required compared with traditional methods, there is a significant upside. This technology could be especially beneficial for officers who find report writing challenging, offering substantial gains in efficiency and report quality.
Additionally, the versatility of ChatGPT in generating quick, tailored checklists for various scenarios in the field cannot be overlooked. For instance, an officer responding to a sexual assault call or a tactical situation can swiftly generate a comprehensive checklist, such as a sexual assault investigation protocol, enhancing their preparedness and confidence in handling these sensitive situations.
The final consideration is the integration of ChatGPT into existing Records Management Systems (RMS). Such integration could streamline data processing, elevate the accuracy of reports and enhance overall operational effectiveness, marking a significant step forward in the intersection of AI technology and police work.
Additional ways LLMs could enhance police effectiveness
LLMs can also be used in the following additional ways:
- Data analysis: LLMs can assist in analyzing large volumes of data to identify patterns or trends that humans might overlook, potentially aiding in crime prediction and prevention.
- Triaging emergency calls: Chatbots and other AI tools can organize and coordinate non-emergency calls, which can then be triaged into an officer’s mobile data computer through computer-aided dispatch.
- Fostering empathy: LLMs can be programmed to demonstrate empathy, providing a more human-like interaction. This can be useful when individuals are in distress and need reassurance. For instance, LLMs can be used in hotlines to provide immediate, empathetic responses while the caller waits for a human operator.
- Surveillance: Integrating LLMs into surveillance systems via computer vision (a field of artificial intelligence that enables computers to understand and interpret visual information) could allow real-time analysis of surveillance footage, aiding in quicker response times.
- Training: LLMs can be used to create realistic training scenarios for officers, aiding in better preparation for real-life situations. Table-top exercises based on previous incidents with known outcomes can be used to further infuse the training with well-known checklists.
- Dictation: Using LLMs paired with dictation services could help with lengthy police investigations or internal affairs. By integrating LLMs with dictation services, officers can efficiently transcribe interviews, meetings and field notes, reducing the time spent on manual data entry.
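As a concrete illustration of the data-analysis point above, the sketch below uses only the Python standard library to surface a simple pattern — which location and offense combinations recur — in a small, entirely invented incident log. Real crime analysis would run over agency data, with appropriate legal and privacy safeguards, and use far richer methods.

```python
# Toy pattern analysis over an invented incident log: count how often
# each (location, offense) pair occurs to flag recurring hotspots.
from collections import Counter

incidents = [
    {"location": "4th & Main", "offense": "theft"},
    {"location": "4th & Main", "offense": "theft"},
    {"location": "Riverside Park", "offense": "vandalism"},
    {"location": "4th & Main", "offense": "theft"},
    {"location": "Riverside Park", "offense": "trespassing"},
]

counts = Counter((i["location"], i["offense"]) for i in incidents)
for (location, offense), n in counts.most_common():
    print(f"{location}: {offense} x{n}")
```

Even this trivial tally shows the idea behind the bullet above: aggregating structured incident data highlights repeat patterns that can then be reviewed by an analyst, with an LLM assisting in summarizing or explaining the findings.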
Conclusion
The evolution of LLMs presents vast opportunities across an array of industries, including law enforcement. As these models become more integrated into daily life, it is essential to harness their potential with measured responsibility, ensuring the technology is used ethically and effectively.
While LLMs offer numerous benefits, there are challenges and ethical considerations. Concerns about privacy, potential misuse and the reliability of AI-generated content need to be addressed. In the context of law enforcement, ensuring that LLMs do not perpetuate biases is crucial.
This rapidly expanding technology requires continued training and analysis to ensure responses are accurate. Additionally, what experts call hallucinations can occur, in which the chatbot delivers responses that appear to be true but are fabricated by the AI. Further, inputs or data entered into an AI chatbot immediately become accessible to its owner, OpenAI.
Personnel must strictly abide by regulations and procedures to safeguard and protect critical information. Most law enforcement agencies likely have guidelines that apply equally to all modes of online activity, including unofficial and personal use of the Internet; these may extend to the use of LLMs. If such a policy does not exist or appears vague, privacy officers must guide personnel regarding the impact of any text, imagery or video content on operational or information security before it is posted online.
References
Adams IT. (2023). Working Paper. LLMs and AI for Police Executives. University of South Carolina.
Baker EM. (2021). I’ve got my AI on you: Artificial intelligence in the law enforcement domain. Doctoral dissertation, Naval Postgraduate School, Monterey, CA.
Rajkomar A, Dean J, Kohane I. (2019). Machine learning in medicine. New England Journal of Medicine, 380(14):1347-1358.
Yong E. (2018, January 17). A popular algorithm is no better at predicting crimes than random people. The Atlantic.