As artificial intelligence technology advances daily, scientists and researchers have been examining the risks and benefits AI could carry in this year’s election. While AI can allow bad actors to misinform the public or compromise security, Mike Kirby, a leadership member of the University of Utah’s Responsible AI Initiative and a professor in the School of Computing, said he thinks AI can be viewed as a tool rather than a risk.
The RAI is currently speaking with multiple community members, including state leadership, lawyers and psychologists, to gather as much data and input as possible on how to use AI most effectively.
According to a U report, the Responsible AI Initiative, funded with $100 million, aims to use advanced AI technology responsibly to tackle societal issues; its current subtopics are the environment, education and healthcare.
While elections are not currently a subtopic of the initiative, Kirby said it could be in the future.
Kirby said the media currently portrays AI either as a dystopian mechanism that will end the world or as a utopian supertool, and the RAI lies in the middle of these polarized extremes.
“We don’t take a dystopia or a utopian view,” he said. “We try to take a measured view, a healthy, optimistically measured view.”
However, while they are optimistic, Kirby clarified they do not operate under “blind optimism.”
The RAI looks for the positives of AI and determines how to use them as tools, while recognizing that doing so comes with future challenges.
When applying this research to the U.S. election system, Kirby said that while the technology can be used to harm election results, the same technology can be used to counteract that harm.
Anomaly detection is one example. Kirby said forms of AI have ways of “sifting through data at rates that [humans] can’t and look for patterns that are anomalous and should be investigated.”
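Kirby did not describe a specific system, but a minimal sketch of what pattern-based anomaly flagging can look like is below, using the common isolation-forest technique on synthetic data. The per-precinct feature names and the contamination rate are hypothetical illustrations, not the RAI’s actual tooling.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic stand-in for per-precinct records:
# columns are [turnout_rate, ballots_scanned_per_minute] (hypothetical).
normal = rng.normal(loc=[0.60, 2.0], scale=[0.05, 0.3], size=(500, 2))
odd = np.array([[0.99, 9.0], [0.05, 0.1]])  # two implausible rows
records = np.vstack([normal, odd])

# An IsolationForest isolates rows whose feature patterns differ from
# the bulk of the data; predict() returns -1 for suspected anomalies.
model = IsolationForest(contamination=0.01, random_state=0).fit(records)
flags = model.predict(records)

for idx in np.where(flags == -1)[0]:
    print(f"record {idx} looks anomalous: {records[idx]}")
```

The point of a tool like this is triage rather than judgment: it surfaces unusual records for a human to investigate, in line with Kirby’s caution later in this story that automated mechanisms should not have the final word.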
Kirby disagrees with the opinion that AI is “bad.” Even considering how AI has been used for “deepfakes” and for spreading disinformation to voters, Kirby said AI should not be treated as an entity that makes choices. Bad actors use it with negative intentions.
The use of AI for disinformation is “encouraging a vigilance on the part of us as consumers,” he said. “Just understanding the fact that [we] need to be mindful of this.”
The International Federation of Library Associations and Institutions published an infographic on how to spot fake news. These guidelines include considering the source of the information, checking the sources provided, noting the date of publication and examining one’s own biases.
U Political Science Professor Josh McCrain said AI is not a concern when considering election security, adding that election infrastructure is “extremely secure” and that concerns about its integrity come from people acting with “bad intentions and bad faith” when an election does not turn out in their favor.
“These are really secure elections,” he said. “And anybody suggesting otherwise has political motivations.”
McCrain said the main concern is deepfakes. Because there is no current legislation on deepfakes, it’s up to social media platforms, which are unregulated by the government, to address them on their own.
“That is definitely something that can be exploited by bad actors,” McCrain said.
Deepfakes have been around for years; however, as technology advances, they are expected to become even more prominent. Deepfakes can include fake videos of politicians saying things they haven’t said, which could ultimately sway voters with disinformation.
More recently, in January, a robocall imitating President Joe Biden’s voice went out to New Hampshire Democrats, telling them not to vote in the Jan. 23 presidential primary.
“Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again. Your vote makes a difference in November, not this Tuesday,” the call said, according to NBC News.
“Although the voice in the robocall sounds like the voice of President Biden, this message appears to be artificially generated,” the New Hampshire attorney general’s office said in a statement.
Solving the issue of deepfakes and disinformation is not as simple as detecting the anomalies left behind by bad actors’ interference.
“What we don’t want is the mechanisms that we create to try to squash disinformation to be those mechanisms that squash the voice of freedom that’s needed,” said Kirby.
They also do not want these mechanisms to remove factual information.
Kirby said he appreciates the challenge.
“This is the amazing thing about our liberal democracies,” he said.