Arizona Proposes AI Lab to Fortify Elections Against Threats
In response to the evolving risks posed by artificial intelligence, a newly formed security advisory committee in Arizona is calling for the establishment of an AI elections lab. The committee’s primary recommendation, detailed in its initial report, is the creation of a specialized lab dedicated to fine-tuning AI models specifically for use by election offices.
The 18-page report was published by the state’s Artificial Intelligence and Election Security Advisory Committee, recently established by Arizona Secretary of State Adrian Fontes, who has expressed interest in both mitigating the risks associated with AI and harnessing its “transformative potential.”
The committee is co-chaired by Chris Cummiskey, a cybersecurity and homeland security consultant, and Gowri Ramachandran, director of elections and government at the Brennan Center for Justice, a nonprofit law and public policy institute at the New York University School of Law. The report highlights the broad pressure on government offices to operate more efficiently.
“Local and state election officials collect and store large amounts of data and are underfunded and under-resourced,” the report reads. “This creates a significant opportunity for AI tools, with proper human oversight, to help election officials continue to administer safe and secure elections in today’s threat environment and efficiently improve voter experiences.”
The committee suggests that the lab could be hosted by a major university and would offer customized AI education for election officials. It also urges election officials to develop incident response plans and to continue tabletop exercises.
Fontes has called those exercises “crucial” to securing the 2024 election and to limiting disruptions from bomb threats at polling locations across the state. The report notes that election offices can deploy AI to defend against increasingly sophisticated cyberattacks, even as those attacks, along with misinformation campaigns, are being amplified by the latest AI models.
Arizona faces an additional challenge: a scarcity of data. The committee notes that private companies are often unwilling to share critical data due to privacy restrictions or other legal concerns. The report elaborates:
“Data access poses a barrier to some efforts to harness the potential of AI to help officials and protect and mitigate against bad actors’ use of AI, which could range from supercharged phishing attempts to corrupting AI-training data to achieve harmful ends,” the report reads. “Access to comprehensive datasets from technology companies would provide policymakers and researchers with the information they need to do their part as we collectively face the rapid integration of AI into our society, including election administration.”
Notwithstanding those obstacles, the report details current and potential uses of AI to bolster elections, including reducing errors in voter rolls, identifying missing forms, and generating voter outreach materials. These, the report emphasizes, are significant challenges for “chronically underfunded” election offices.
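To make concrete the sort of routine voter-roll cleanup the report gestures at, here is a minimal sketch that flags likely duplicate records with fuzzy string matching. The record fields, similarity threshold, and sample data are hypothetical and not drawn from the report, and any flagged pairs would still go to a human reviewer rather than being removed automatically.

```python
# Hypothetical sketch: flag likely duplicate voter-roll records with fuzzy matching.
# Field names, threshold, and sample data are illustrative, not from the report.
from difflib import SequenceMatcher
from itertools import combinations

def similarity(a: str, b: str) -> float:
    """Return a 0-1 similarity score between two strings, ignoring case."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_possible_duplicates(records, threshold=0.9):
    """Yield pairs of records whose name and address look nearly identical.

    Each record is a dict with 'name' and 'address' keys (assumed schema).
    Flagged pairs are meant for human review, not automatic removal.
    """
    for r1, r2 in combinations(records, 2):
        name_score = similarity(r1["name"], r2["name"])
        addr_score = similarity(r1["address"], r2["address"])
        if name_score >= threshold and addr_score >= threshold:
            yield r1, r2, round(min(name_score, addr_score), 3)

if __name__ == "__main__":
    voters = [
        {"name": "Maria Lopez", "address": "123 N Central Ave, Phoenix"},
        {"name": "Maria Lopes", "address": "123 N. Central Ave, Phoenix"},
        {"name": "John Smith", "address": "45 E Camelback Rd, Phoenix"},
    ]
    for a, b, score in flag_possible_duplicates(voters):
        print(f"Possible duplicate ({score}): {a['name']} / {b['name']}")
```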
Matthew Jordan, a Pennsylvania State University professor who teaches a course on media and democracy, agrees that election offices are underfunded. But he worries that embracing the idea of “doing more with less” by folding more technology into the democratic process will not be enough to build public trust.
“It’s all about the vague notion that, in the future, look at this cool new thing that’s going to allow us to do that much more. Everything we know about AI so far is that it is riddled with errors, that people don’t trust it — rightfully so,” Jordan said. “Every day on social media somebody does a screengrab of a huge mistake one of these things makes.”
Rather than using technological solutions to compensate for budgetary shortfalls, Jordan recommends that governments allocate more resources to nonpartisan groups that deliberate thoughtfully over democratic processes, and that those groups be public about their work.
“When we want to increase trust in democracy, what you do is put people of all stripes in charge of a process and you let them do what they need to do, and it cultivates that democratic disposition,” he said. “If the problem is that people don’t trust elections, I think we need to slow the process down, put more people in charge of it and let everyone see the work.”
Despite such concerns, and with an eye toward future AI applications, the Arizona secretary of state already claims success in using AI to resolve recent challenges in the state. Fontes used AI chatbots to help staff respond to hundreds of thousands of “critical” texts and phone calls leading up to the previous year’s presidential election. His concerns about AI are also driving his office to pursue preventative measures, such as establishing a repository of images signed with “content credentials,” a feature designed to increase public assurance that images are authentic. That need is growing as political campaigns and other bad actors increasingly use AI to generate deceptive images of their rivals and to promote false narratives.
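At their simplest, content credentials amount to a cryptographic signature bound to an image so that anyone can later check it has not been altered since the office published it. The sketch below illustrates that underlying idea by signing and verifying an image’s SHA-256 digest with an Ed25519 key; it is a conceptual illustration under assumed names and workflow, not the actual content-credentials standard or the secretary of state’s implementation.

```python
# Conceptual sketch only: signing and verifying an image hash with Ed25519.
# Real content credentials embed richer provenance metadata in the file itself;
# the sample data and workflow here are purely illustrative.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

def sign_image(image_bytes: bytes, private_key: ed25519.Ed25519PrivateKey) -> bytes:
    """Sign the SHA-256 digest of an image, producing a detached signature."""
    digest = hashlib.sha256(image_bytes).digest()
    return private_key.sign(digest)

def verify_image(image_bytes: bytes, signature: bytes,
                 public_key: ed25519.Ed25519PublicKey) -> bool:
    """Return True if the signature matches the image and the office's public key."""
    digest = hashlib.sha256(image_bytes).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    office_key = ed25519.Ed25519PrivateKey.generate()
    image = b"...raw bytes of an official image..."  # placeholder for real file contents
    sig = sign_image(image, office_key)
    print(verify_image(image, sig, office_key.public_key()))                 # True
    print(verify_image(image + b"tampered", sig, office_key.public_key()))   # False
```

A production system would embed the signature and issuer metadata in the image file itself and anchor trust in a published public key, so that news outlets and voters could verify images without contacting the office.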
The report emphasizes the need for a coordinated strategy in dealing with AI threats:
“Ultimately, the challenges we identify for election security in an age of AI are all interconnected, and they demand a comprehensive approach that requires collaboration between election officials, technology companies, academics, civil society, the public, and regulators,” the report concludes. “Transparency mechanisms and advanced AI analytical tools can work hand in hand to counteract the evolving risks posed by AI to our democracy.”