A University of Utah program aims to make A.I. more responsible. Can it help during the election?


This story is jointly published by nonprofits Amplify Utah and The Salt Lake Tribune, in collaboration with student media at the University of Utah, to elevate diverse perspectives in local media through student journalism.

With daily advances in artificial intelligence technology, scientists and researchers have been looking into the risks and benefits A.I. could carry in the 2024 election.

While some fear that bad actors will use A.I. to misinform the public or compromise security, one University of Utah professor argues that A.I. can be viewed as a tool rather than a risk.

Mike Kirby, a professor in the U.’s Kahlert School of Computing, is part of the leadership of the university’s Responsible AI Initiative (RAI), which is meeting with community members — including state leaders, lawyers and psychologists — to collect data about how to use A.I. most effectively.

The initiative, backed with a $100 million investment from the university announced in November, aims to use advanced A.I. technology responsibly to tackle societal issues, including the environment, education and health care.

Elections aren’t currently in the initiative’s field of interest, but Kirby says they could be.

The media, Kirby said, portrays A.I. as either a utopian supertool or a dystopian mechanism that will bring about the world’s end. RAI, he said, lies somewhere between those polarized extremes.

“We don’t take a dystopian or a utopian view,” he said. “We try to take a measured view, a healthy, optimistically measured view.” Kirby clarified, however, that “healthy” optimism isn’t the same as “blind optimism.”

RAI, he said, looks for the positives of A.I. and determines how to use the technology as a tool — while understanding that A.I.’s potential use will come with challenges.

In applying the initiative’s research to the U.S. electoral system, he said the technology could be used to harm election results — but also to counteract those harms.

For example, he said, some forms of A.I. can detect voting anomalies by “sifting through data at rates that [humans] can’t, and look for patterns that are anomalous and should be investigated.”
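Kirby didn’t describe a specific technique, and RAI hasn’t published one for elections. But as a minimal sketch of the kind of screening he alludes to, the hypothetical Python example below flags precincts whose turnout is a statistical outlier; the precinct names, figures and cutoff are invented for illustration.

```python
# Illustrative sketch only: flag precincts whose turnout deviates sharply
# from the rest. All names, numbers and the threshold are hypothetical.
from statistics import mean, stdev

# Hypothetical turnout rates (ballots cast / registered voters) by precinct.
turnout = {
    "Precinct 1": 0.61, "Precinct 2": 0.58, "Precinct 3": 0.63,
    "Precinct 4": 0.59, "Precinct 5": 0.97,  # unusually high
    "Precinct 6": 0.60, "Precinct 7": 0.62, "Precinct 8": 0.57,
}

mu = mean(turnout.values())
sigma = stdev(turnout.values())

# A z-score beyond 2 is a common (and arbitrary) cutoff for "worth a look."
for precinct, rate in turnout.items():
    z = (rate - mu) / sigma
    if abs(z) > 2:
        print(f"{precinct}: turnout {rate:.0%} (z = {z:.1f}) -- investigate")
```

Real election auditing is far more involved than a z-score test, but the principle is the one Kirby describes: compare each data point against an expected pattern and surface the outliers for human review.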

A.I. isn’t “bad,” Kirby said, and A.I. shouldn’t be treated “as if somehow it’s the entity that has a choice.” Many of the evils attributed to A.I. — such as “deepfakes” and spreading disinformation to voters — are, he said, the fault of “bad actors with bad intentions.”

The practice of using A.I. for disinformation is, he said, “encouraging a vigilance on the part of us as consumers — just understanding the fact that [we] need to be mindful of this.”

The International Federation of Library Associations and Institutions has published guidelines on how to spot fake news, boiled down to a handy infographic. The guidelines include considering the source of the information, checking the sources provided and the date of publication, and factoring in the news consumer’s own biases.

Josh McCrain, assistant professor of political science at the U., said A.I. is not a concern when it comes to election security. Election infrastructure is “extremely secure,” he said, and those casting doubts on its integrity often are people with “bad intentions and bad faith” when a vote goes against their preferred candidate.

“These are really secure elections,” he said, “and anybody suggesting otherwise has political motivations.”

Deepfakes — A.I.-assisted video or audio that makes it appear someone said or did something they didn’t — are a chief concern, McCrain said. Deepfakes have been around for years, he noted, but they are expected to become more prominent as the technology advances.

“That is definitely something that can be exploited by bad actors,” McCrain said.

In January, NBC News reported, a robocall with a simulated voice resembling President Joe Biden’s went out to Democrats in New Hampshire, urging them not to vote in that state’s presidential primary that month. The attorney general’s office in New Hampshire issued a statement that said “this message appears to be artificially generated.”

Some states have moved to regulate deepfakes. For example, according to an Associated Press report from January, six states have criminalized nonconsensual deepfake porn.

Otherwise, though, McCrain said it’s up to social media platforms to regulate themselves.

Solving the problem of deepfakes and disinformation is not as simple as recognizing the anomalies bad actors leave behind, Kirby said. There’s also the concern that regulating A.I. too tightly will filter out factual information, he said.

It’s a challenge to strike the right balance, he said, but “this is the amazing thing about our liberal democracies.

“What we don’t want,” Kirby said, “is the mechanisms that we create to try to squash disinformation to be those mechanisms that squash the voice of freedom that’s needed.”

Libbey Hanson wrote this story as a student at the University of Utah. It is published as part of a new collaborative including nonprofits Amplify Utah and The Salt Lake Tribune.