AI-generated content, such as deepfakes, synthetic media and generative text, poses a significant threat to the integrity and security of the 2024 presidential election. In response to Olympic sanctions, Russia has employed artificial intelligence to spread disinformation intended to sway public opinion, and other malicious actors may adopt the same tactic to affect the outcome of the presidential election, regardless of the candidate they endorse.
Disinformation and influence campaigns, fueled by AI content, can undermine public trust, sow social division, and incite violence and unrest. Additionally, conspiracy theories, which are often based on or amplified by AI content, can erode the credibility of democratic institutions, such as the electoral system, the judiciary and the media, and create a fertile ground for radicalization and extremism.
According to a recent report by the Center for Strategic and International Studies (CSIS), AI content, disinformation and influence operations are among the top 10 global threats for 2024, and they require urgent and coordinated action from governments, civil society and the private sector. [1] Police agencies have a vital role to play in protecting the public from these harms, but they need long-term operations planning and a proactive approach that begins during the campaigns leading up to the election and continues through the days that follow.
This article proposes a framework for police agencies to educate and collaborate with the public and stakeholders regarding AI content, disinformation and influence, based on four pillars: awareness, verification, reporting and response. It also provides examples of best practices and recommendations for implementing the framework and enhancing agency capacity and readiness for the upcoming presidential election and beyond. The goal is a national, collaborative approach that unites the country, promotes trust and transparency in local communities, and preserves confidence in fair elections and in the democracy of the American republic.
Disinformation and conspiracy examples
We are currently witnessing an example of how disinformation can influence behavior. Russia is using AI to create disinformation surrounding the 2024 Olympics in Paris. Russia has been banned from participating in the Olympics because its invasion of Ukraine is a “blatant violation” of the Olympic Truce [1], and it has allegedly been spreading false narratives to undermine the legitimacy and credibility of the event.
In addition, the recent assassination attempt against former president Donald Trump has produced a wave of conspiracy theories, amplified by AI.
Some of the AI-generated content that could be produced and disseminated includes:
- Deepfake videos of prominent athletes and officials making derogatory or inflammatory statements about the community, the police, political opponents, safety, infrastructure, political stability, etc. [2]
- Fake news articles and social media posts claiming that the elections are unsafe, corrupt or rigged; that certain events occurred or that certain people had particular knowledge, based on an expression or action in a video; or that the US is violating human rights and environmental standards in its preparations. Similar content could target voting validity, arrests, special circumstances, alleged constitutional violations and other situations to create civil unrest or deepen polarization. [3]
- Manipulated images and audio clips of alleged incidents or protests surrounding the election, candidates or community safety, such as violence, vandalism or boycotts, intended to create fear and confusion among the public and the media. [4]
- Fabricated documents and data that purport to show evidence of misconduct, fraud, or sabotage by the organizers, sponsors, or competitors of the election or opposing political parties. [5]
These types of AI content are designed to influence the opinion and behavior of both domestic and international audiences, and to erode trust and confidence in the democratic values and institutions that the republic of the United States represents.
A four-pillar framework for educating and collaborating with the public
The Awareness, Verification, Reporting, and Response framework is designed to help police agencies to achieve the following objectives:
- Raise public awareness and understanding of the nature, sources, and impacts of AI content, disinformation, and influence.
- Empower the public to verify and assess the credibility and accuracy of AI content, disinformation, and influence.
- Encourage the public to report and flag any suspicious or harmful AI content, disinformation, and influence to the relevant authorities or platforms.
- Provide the public with timely and appropriate response and guidance on how to cope with and counter AI content, disinformation, and influence.
The framework is based on the following principles:
- To respect the rights and freedoms of the public, such as freedom of expression, privacy, and access to information.
- To foster a culture of trust and transparency between the police and the public, and to avoid any actions that may erode public confidence or legitimacy.
- To leverage the expertise and resources of other stakeholders, such as academia, media, civil society, and the private sector, and to establish effective partnerships and coordination mechanisms.
- To adopt a flexible and adaptive approach that can respond to the evolving and dynamic nature of AI content, disinformation, and influence.
The following sections describe each pillar of the framework in more detail and provide examples of best practices and recommendations for police agencies to implement them.
Awareness
The first pillar of the framework is awareness, which aims to raise public awareness and understanding of the nature, sources, and impacts of AI content, disinformation and influence.
Public awareness and understanding are essential for building public resilience and critical thinking, and for reducing the susceptibility and vulnerability of the public to AI content, disinformation, and influence. Police agencies can play a key role in raising public awareness and understanding, by providing accurate and reliable information, by debunking myths and misconceptions, and by educating the public about the risks and challenges of AI content, disinformation, and influence.
Some of the best practices and recommendations for police agencies to raise public awareness and understanding include:
- To develop and disseminate clear and consistent messages and materials about AI content, disinformation and influence, using various channels and platforms, such as social media, websites, newsletters, podcasts, webinars or public events. These should include very clear messages about voting methods and transparent counting practices, and agencies should take ballot complaints seriously and investigate them with due diligence. (This is just as important as crime notices for building trust.)
- To use simple and accessible language and formats, such as infographics, videos, or animations, to explain complex and technical concepts and terms, such as deepfakes, synthetic media, or generative text.
- To provide concrete and relevant examples and case studies of AI content, disinformation, and influence, and to highlight their potential impacts and consequences on the public and the society.
- To engage and involve the public in the awareness-raising activities, such as by soliciting feedback, questions, or suggestions, or by organizing contests, quizzes, or games.
- To collaborate and coordinate with other stakeholders, such as academia, media, civil society, and the private sector, to leverage their expertise and resources, and to amplify and reinforce the awareness-raising messages and materials.
An idea for a successful awareness-raising campaign at a police agency is the “Spot the Deepfake” campaign, inspired by the University of Washington’s initiative. [6] The campaign should launch in August, and no later than September, ahead of the general election.
The campaign is aimed at educating the public about the dangers and impacts of deepfakes, which are AI-generated or manipulated videos that can impersonate or distort the appearance, voice, or actions of a person.
The campaign consists of the following elements:
- A website that provides information and resources about deepfakes, such as how they are created, how they can be detected, and how they can be reported.
- A series of videos that feature celebrities, politicians, and experts, who explain what deepfakes are, how they can affect the public and society, and how the public can spot and avoid them.
- A social media campaign that uses the hashtag #SpotTheDeepfake and encourages the public to share and discuss videos and the website, and to test their skills and knowledge by taking a quiz that challenges them to identify real and fake videos.
- A partnership with local media and a state fusion center that broadcasts the videos and quizzes on television, radio and the internet, and that produces a documentary or news article exploring the phenomenon and implications of deepfakes. The videos should then be promoted on social media to gain more direct attention, and the department should consider paid advertising to ensure good saturation of the local population. This will help generate positive feedback and comments from the public and build trust and confidence in the police, as the public comes to see the police as a credible and reliable source of information and guidance on AI content, disinformation and influence.
Verification
The second pillar of the framework is verification, which aims to empower the public to verify and assess the credibility and accuracy of AI content, disinformation and influence.
Public verification and assessment are crucial for preventing and reducing the spread and impact of AI content, disinformation, and influence, and for promoting a culture of truth and accountability. Police agencies can play a key role in empowering the public to verify and assess AI content, disinformation, and influence, by providing tools, techniques, and tips, by supporting and endorsing credible and reputable sources and platforms, and by encouraging and rewarding good verification and assessment practices.
Some of the best practices and recommendations for police agencies to empower the public to verify and assess AI content, disinformation and influence include:
- To provide and recommend tools and techniques that can help the public to verify and assess AI content, disinformation, and influence, such as online fact-checking tools, reverse image search engines, metadata analysis tools, or digital literacy guides.
- To support and endorse credible and reputable sources and platforms that can provide verification and assessment services, such as fact-checking organizations, media outlets, academic institutions, or civil society groups.
- To provide and recommend tips and advice that can help the public to verify and assess AI content, disinformation, and influence, such as how to spot signs of manipulation, how to check the source and the date of the content, how to compare and contrast different sources and perspectives, or how to use common sense and logic.
- To encourage and reward good verification and assessment practices by the public, such as by acknowledging and praising the public for their efforts, by sharing and highlighting positive examples and stories, or by offering incentives and prizes.
- To collaborate and coordinate with other stakeholders, such as academia, media, civil society, and the private sector, to leverage their expertise and resources, and to align and harmonize the verification and assessment tools, techniques, and tips.
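Two of the techniques above, checking the source of content and comparing a downloaded file against material published by the original source, can be automated in simple ways. The sketch below is a minimal illustration in Python: the domain allowlist and the idea of comparing a file hash against one published by a verified source are assumptions for the example, not an official tool or standard agency practice.

```python
import hashlib
from urllib.parse import urlparse

# Illustrative allowlist -- in practice an agency or fact-checking
# partner would maintain and publish its own list of verified sources.
TRUSTED_DOMAINS = {"ap.org", "reuters.com", "factcheck.org"}

def is_trusted_source(url: str) -> bool:
    """Check whether a link points to a domain on the trusted list."""
    host = urlparse(url).netloc.lower()
    # Match the registered domain exactly or as a parent
    # (e.g. www.ap.org matches ap.org, but ap.org.evil.example does not).
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

def file_sha256(path: str) -> str:
    """Hash a downloaded media file so it can be compared against a
    hash published by the original source (a basic provenance check)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()
```

Checks like these do not detect deepfakes by themselves, but they give the public a concrete, repeatable habit: confirm where content came from before trusting what it shows.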
Another initiative I would implement in a police department is “Verify Before You Share,” which again should begin in August, and no later than September. The initiative aims to empower the public to verify and assess AI content, disinformation and influence, and to prevent and reduce their spread and impact.
The initiative consists of the following elements:
- A website that provides tools, techniques, and tips for the public to verify and assess AI content, disinformation, and influence, such as online fact-checking tools, reverse image search engines, metadata analysis tools, or digital literacy guides.
- A partnership with local media and a state fusion center that provides verification and assessment services, such as fact-checking articles, podcasts and videos covering the most relevant and trending topics and issues related to the election.
- A social media campaign that uses the hashtag #VerifyBeforeYouShare and encourages the public to use the tools, techniques and tips, and to consult the verification and assessment services, before sharing any content related to the election. The campaign should also promote civic responsibility, urging the public not to misinform others, which could cause unintended consequences in the community.
- A reward system that offers incentives and prizes, such as badges, certificates, vouchers or tickets, for members of the public who verify and assess AI content, disinformation and influence. A relationship with local businesses could be a tremendous benefit here. (Understand that this could be abused, so make sure rewards are based on verified harmful content.)
The initiative should be shared on social media under the hashtag #VerifyBeforeYouShare to generate positive feedback and comments from the public. The anticipated result is a reduced spread and impact of AI content, disinformation and influence, a lower rate of sharing and liking false or misleading content, and a higher rate of reporting and flagging suspicious or harmful content.
Reporting
The third pillar of the framework is reporting, which aims to encourage the public to report and flag any suspicious or harmful AI content, disinformation, and influence to the relevant authorities or platforms.
Public reporting and flagging are essential for detecting and removing AI content, disinformation, and influence, and for holding the perpetrators and enablers accountable and responsible. Police agencies can play a key role in encouraging the public to report and flag AI content, disinformation, and influence, by providing clear and easy reporting and flagging mechanisms, by ensuring timely and effective follow-up and feedback, and by protecting and supporting the reporters and flaggers.
Some of the best practices and recommendations for police agencies to encourage the public to report and flag AI content, disinformation and influence include:
- To provide clear and easy reporting and flagging mechanisms for the public to report and flag AI content, disinformation, and influence, such as online forms, hotlines, email addresses, or social media accounts.
- To ensure timely and effective follow-up and feedback for the public who report and flag AI content, disinformation, and influence, such as by acknowledging and thanking the public for their reports and flags, by informing them of the status and the outcome of their reports and flags, or by inviting them to provide additional information or evidence.
- To protect and support the reporters and flaggers of AI content, disinformation, and influence, such as by ensuring their anonymity and confidentiality, by providing them with legal and psychological assistance, or by offering them incentives and rewards.
- To collaborate and coordinate with other stakeholders, such as academia, media, civil society, and the private sector, to leverage their expertise and resources, and to streamline and synchronize the reporting and flagging mechanisms and processes.
Another program I would implement in a police department is “Report It, Don’t Spread It,” which should also begin immediately, preferably in August or September. This initiative aims to encourage the public to report and flag any suspicious or harmful AI content, disinformation and influence to their local police, and to prevent and reduce their spread and impact. It is the duty of local law enforcement to relay this information to fusion centers, ensuring it spreads across the state and country for consistent awareness.
The initiative consists of the following elements:
- A website that provides a clear and easy reporting and flagging mechanism for the public to report and flag AI content, disinformation, and influence, such as an online form that requires the public to provide the URL, the screenshot, and the description of the content, and to indicate the reason and the urgency of the report or the flag.
- A follow-up and feedback system that ensures timely and effective follow-up and feedback for the public who reported and flagged AI content, disinformation, and influence, such as by sending an email confirmation and a reference number for each report or flag, by updating the public on the status and the outcome of their reports or flags, or by requesting the public to provide additional information or evidence.
- A protection and support system that protects and supports the reporters and flaggers of AI content, disinformation and influence, such as by ensuring their anonymity and confidentiality, by providing them with referrals to legal and psychological assistance, or by offering them incentives and rewards, such as badges, certificates, vouchers or tickets for reporting. (Understand that this could be abused, so include a disclaimer that rewards are based on verified harmful content.)
- A partnership with major social media platforms, such as Facebook, X, YouTube and TikTok, that enables the public to report and flag AI content, disinformation and influence directly from the platforms, and that allows the police to access and analyze the reported and flagged content.
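The online form described above (URL, screenshot, description, reason and urgency) maps naturally onto a simple intake record. The Python sketch below is a hypothetical illustration of how an agency might structure and validate such a report before routing it to analysts or a fusion center; the field names, reason categories and urgency levels are assumptions for the example, not a published standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical categories for the "Report It, Don't Spread It" form;
# a real deployment would define its own taxonomy.
VALID_REASONS = {"deepfake", "fake_news", "manipulated_media",
                 "fabricated_document", "other"}
VALID_URGENCY = {"low", "medium", "high"}

@dataclass
class ContentReport:
    url: str                  # where the content was seen
    description: str          # reporter's summary of the content
    reason: str               # category of suspected disinformation
    urgency: str              # triage priority chosen by the reporter
    screenshot_path: str = ""  # optional evidence capture
    received_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def validate_report(report: ContentReport) -> list[str]:
    """Return a list of intake errors; an empty list means the report
    is complete enough to route for follow-up."""
    errors = []
    if not report.url.startswith(("http://", "https://")):
        errors.append("url must be a web address")
    if not report.description.strip():
        errors.append("description is required")
    if report.reason not in VALID_REASONS:
        errors.append("unknown reason category")
    if report.urgency not in VALID_URGENCY:
        errors.append("urgency must be low, medium or high")
    return errors
```

Validating at intake keeps the follow-up and feedback loop workable: every accepted report has enough structure to be acknowledged, tracked by reference, and forwarded consistently.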
The initiative's goal is to increase protection and support while generating positive feedback and comments from reporters and flaggers, who will feel more motivated and engaged in reporting and flagging AI content, disinformation and influence. The initiative aims to curb the proliferation and influence of AI-generated content and disinformation, improve the efficiency of detecting and eliminating false or deceptive material, and increase the prosecution and penalization of those responsible for its dissemination.
Response
The fourth and final pillar of the framework is response, which aims to provide the public with accurate and timely information on AI-related issues, as well as to address their concerns and feedback. The response pillar consists of three main components:
- A national AI hotline that allows citizens to report any suspicious or harmful AI content or activity, and to ask questions or request assistance regarding AI matters. The hotline is staffed by trained operators who can escalate calls to the relevant authorities or experts if needed. We have fast work to do to get this going, but it must happen.
- A network of AI ambassadors: volunteers from various sectors and backgrounds who act as liaisons between the government and the public on AI issues. They help raise awareness and educate the public about the benefits and risks of AI, and promote ethical and responsible AI practices. They also facilitate dialogue and collaboration among different stakeholders and communities on AI matters. These individuals need to be accessible to all agencies, and this could be facilitated through a federal organization or a police association.
- A dedicated AI website that serves as a one-stop portal for all AI-related information and resources in the country. The website provides updates on the latest AI developments and initiatives, as well as guidelines and best practices on AI governance and usage that is accessible to law enforcement and transparent to the public. The website also features interactive tools and platforms that allow the public to access and engage with AI services and applications, as well as to provide feedback and suggestions.
The response pillar aims to enhance public trust and confidence in AI and to foster a culture of openness and transparency around AI matters. By providing the public with reliable and accessible information and channels of communication, the response pillar also empowers the public to participate in and contribute to the AI ecosystem in the country. The response piece is a heavy lift, and I am dedicated to reaching out to every resource I know to collaborate and make it happen. Our goal as a country is to avoid an election crisis and to ensure a peaceful transition of power regardless of the outcome. I am available if you would like to collaborate.
Law enforcement collaborative
The establishment of a law enforcement collaborative that involves all levels and sizes of law enforcement agencies in the country is essential going forward. The collaborative aims to ensure the security and integrity of the upcoming election, as well as to prevent and address any potential threats or incidents that may arise from the use or misuse of AI. The collaborative also seeks to foster a sense of unity and harmony among the public, regardless of their political affiliation or preference.
The law enforcement collaborative will develop and implement major long-term operational plans that cover various scenarios and contingencies related to the election outcomes. I propose this happen through the fusion centers, a system that is already in place. The plans are based on the principles of fairness, impartiality, transparency and accountability, and are aligned with the national AI strategy and ethical framework. The collaborative also coordinates and communicates regularly with other relevant stakeholders, such as the electoral commission, the media, civil society and the international community, to ensure a smooth and successful electoral process.
The law enforcement collaborative also plays an important role in educating and informing the public about the role and impact of AI in the election, as well as the rights and responsibilities of the voters and candidates. The collaborative raises awareness and dispels myths and misinformation about AI, as well as provides reliable and accessible information and guidance on how to use and interact with AI services and applications during the election. The collaborative also encourages and facilitates public feedback and participation in the AI ecosystem, and upholds the values of trust, respect, and inclusiveness for all people.
Unity
As law enforcement leaders, we have a responsibility to uphold the unity and integrity of our communities across the nation, especially during the critical time of this election in a very polarized country. We recognize and respect the diversity of opinions and perspectives that exist among our community members, and we value the contribution that each one of them can make to our democracy. We believe that our differences are not a source of division, but rather a source of strength and resilience. We should also acknowledge the scientific evidence that shows that constructive friction and disagreement can enhance our collective decision-making and problem-solving abilities, as long as they are based on facts and mutual respect. Diversity has contributed to the success of the United States and will continue to do so.
Leaving false information generated by AI unchallenged, even when it does not hurt one's own position, still hurts everyone in the long run. False information erodes the foundation of democracy, which is based on the informed consent of the governed. It creates confusion, distrust and polarization among the public, and undermines the legitimacy and authority of elected officials and institutions. It also hampers the ability of the government and society to address the real challenges and opportunities that we face as a nation and, at the micro level, as a community. False information can also have serious consequences for public safety, national security and human rights. Therefore, as law enforcement leaders we have a duty to expose and counter all false information, regardless of who benefits or suffers from it, and to uphold the standards of truth, accuracy and accountability that are essential for our democracy and communities to thrive, especially in a changing information age in which AI is disrupting how we receive information.
Therefore, we should reject and condemn any attempt to undermine or manipulate the election process through the use of false or misleading information generated by AI or other means. We recognize this issue as being as serious as an active shooter incident, and we should treat it as such in our agencies, with operations plans and coordinated efforts that scale across the nation. We should not support or endorse any agenda that relies on such information to gain an unfair advantage or to sow discord and distrust among the public, nor should disinformation be left unchallenged. We are committed to protecting the truth and the trust that are essential for our democracy to function. As law enforcement, we should urge all people to exercise their rights and duties as voters and candidates with honesty, transparency, integrity and civility, and to respect the outcomes of the election, whatever they may be. We should pledge to work together with all stakeholders to ensure a peaceful and orderly transition of power, and to continue to serve and protect our nation and its values, but this can only occur if a properly executed operations plan is in place.
Conclusion
As I conclude this article, I want to emphasize the urgency and importance of combating false information generated by AI or other means in the context of the election process. I have presented the ethical, legal, and practical challenges that this phenomenon poses for our democracy and our law enforcement agencies, and I have proposed some recommendations and best practices for addressing them as well as ways to connect with your community.
I believe that by working together, we can preserve and strengthen the integrity and credibility of our elections and protect the rights and interests of our citizens and our nation to unify. I also hope that this article will raise awareness and stimulate further research and action on this critical issue.
Please reach out to me if you would like to collaborate. I invite the media and all law enforcement leaders, officers and partners to join me in this effort, and to uphold the oath we all took to support and defend the Constitution of the United States against all enemies, foreign and domestic, and to faithfully discharge the duties of our office. This is my solemn duty and my sacred honor, and I will not let anything, especially not false information, stand in my way. I end this paper with the words of the Pledge of Allegiance, which remind me of the values and principles that bind us together as one nation under God, indivisible, with liberty and justice for all:
“…… and to the republic for which it stands, one nation under God, indivisible, with liberty and justice for all.” [7]
References
1. Henderson C. Why is Russia banned from Paris Olympics? Can Russian athletes compete? USA Today. 2024.
2. Settles G. How to spot deepfake videos like a fact-checker. Poynter. 2023.
3. Menon S. Tokyo 2020: Olympic athletes targeted by false and misleading claims. BBC. 2021.
4. Council on Foreign Relations. The dangers of manipulated media in the midst of a crisis. 2020.
5. Delaney M, Fiallo K. Fake news in court: Attorney sanctioned for citing fictitious case law generated by AI. McLane Middleton. 2024.
6. University of Washington. About spot the deepfake. 2024.
7. The Pledge of Allegiance. The pledge of allegiance. USHistory.org. 1954. Available from: https://www.ushistory.org/documents/pledge.htm