Over the past semester, we had an opportunity to teach a course on public safety leadership and artificial intelligence (AI) to a highly experienced group of mid-career professionals in the University of Virginia’s Master of Public Safety graduate program. Our students came from local, state and federal agencies across the United States, representing policing, fire service, emergency management and other critical public safety roles. They brought real-world experience, healthy skepticism and a deep sense of responsibility to our online class as they explored one of the most rapidly evolving forces shaping the future of public service.
The learning curve varied. Some students came from agencies already deploying AI tools — automating the creation of body camera summaries, triaging reports, or analyzing crime trends. Others were just beginning to experiment with generative AI to support tasks like email drafting, meeting summaries, or project planning. A few were still largely unfamiliar with AI’s potential and risks. Despite these differences, we believe everyone (instructors included) left the course with a deeper understanding of AI’s possibilities and risks, and a clearer sense of leaders’ responsibilities to ensure its ethical and effective use.
This Contemporary Issues in Leadership course focused not just on technical AI awareness, but also on the role leaders can and should play in overseeing AI implementation. Students engaged in discussions and assignments that required them to evaluate agency readiness, analyze benefits and risks, and identify strategies for responsible implementation. They experimented directly with generative AI tools and reflected on how these tools could support or challenge their agency’s mission and values. As instructors, we saw an important shift: enthusiasm about AI’s potential increased throughout the course, but so did concern — particularly about privacy, bias, transparency and long-term accountability.
Much has already been written about the importance of reducing AI bias and verifying the accuracy of generative AI responses. Here we aim to offer five additional lessons from the course to help other public safety leaders implement AI and AI-driven tools in thoughtful, mission-aligned ways:
1. Clarify your role in shaping policy
Many students recognized the need for clearer guardrails to ensure AI is used safely and fairly in public safety. Yet few expressed confidence in their own authority — or responsibility — to help shape policies beyond their immediate unit or department.
We urge public safety executives to consider where they have influence. That might include contributing to local or state policy discussions, participating in advisory groups, sharing data and lessons learned, or piloting internal policies that can serve as models for broader adoption. We agree with our students that AI governance should not be left only to technologists or policymakers. Practitioners have a critical voice in ensuring AI tools align with the complex realities of frontline service.
2. Engage the public early and often
Transparency is important but insufficient for building public trust in AI. Our students argued that trust will depend on how well agencies engage their communities — not just after deployment, but during the exploration and planning phases. Whether agencies are using AI to prioritize patrol areas or process 911 calls, they will be most effective when they communicate clearly about what AI is (and isn’t), how it’s being used, and what protections are in place.
Several students in our course highlighted the power of early outreach, including community briefings, advisory groups and public-facing explainers that demystify AI and invite feedback. Ongoing and intentional communication about AI can strengthen public trust, especially when leaders demonstrate a commitment to oversight and values-based decision-making.
3. Set clear expectations with vendors
AI products are often purchased from external vendors, making procurement and contract management key areas of risk. Students and guest speakers noted that vendor relationships are often treated like typical technology contracts, with insufficient attention to issues like data ownership, long-term access, or what happens when a system is retired or replaced. Leaders must ask hard questions upfront:
- Who owns the data?
- Where is it stored?
- How can it be accessed later?
- What rights does the agency retain if the vendor goes out of business or the agency changes providers?
Procurement policies should also include expectations for explainability, accuracy benchmarks and audit mechanisms. We discussed Pew Research Center findings indicating that the public is less trusting of technology companies than of law enforcement organizations, so public safety leaders should think carefully before giving technology vendors direct access to speak with community stakeholders about AI systems. Fortunately, there are resources to help — leaders don’t need to start from scratch (see lesson #5).
4. Understand informal use to guide policy and training
In most public safety agencies, someone is already using generative AI — whether to brainstorm community engagement ideas, summarize notes, or help prepare reports. Students offered numerous examples from their agencies. Rather than prohibit this informal use, public safety leaders should surface current use and impacts by asking personnel how they are using AI, what’s working and what concerns they have. This not only builds awareness but also creates a foundation for targeted training and appropriate policy development. Generic training programs may miss the mark since broadly available generative AI tools and public safety AI products are so varied and dynamic. Agencies should tailor their education efforts to the specific use cases already emerging among their personnel — and make space for dialogue about responsible experimentation as new products and uses evolve.
5. Don’t reinvent the wheel
There is a growing ecosystem of resources to help public agencies integrate AI responsibly. Organizations like the Future Policing Institute and the Government AI Coalition offer practical tools such as model policies, AI risk assessment checklists, procurement templates and even AI incident response plans. These resources reflect lessons learned across jurisdictions and help public safety leaders avoid common pitfalls. In a space evolving as quickly as AI, collaboration and knowledge-sharing are essential. Leaders should tap into professional networks, national associations, and peer agencies to stay informed and supported.
Ultimately, we believe the future of AI in public safety must be shaped not just by data scientists or private companies, but by the everyday decisions of experienced public safety professionals and leaders. AI is not a silver bullet. It cannot replace the judgment, empathy, or human relationships that are core to effective public safety. But when used wisely — and led by values-driven professionals — it can be a powerful ally in advancing service, safety and equity.
As public safety agencies chart their path forward, we hope these lessons from the classroom can serve as a compass. The choices leaders make today will define not just how AI is used, but who it serves — and who it leaves behind.