Feb. 4 — The messages burst onto social media feeds just as a heated election was coming to a head.
Beneath the stark, black-and-white photo of a weary coal miner, the graphic's message slammed President Barack Obama and called on voters to gather in support of then-candidate Donald Trump.
"Where: Steel Plaza, Pittsburgh," it said.
Though the rallying cry appeared to come from a political coalition called Coal Miners for Trump, federal investigators later discovered it was part of a sprawling election-interference campaign by the Russian government — an operation that upended U.S. politics and raised troubling questions about crucial weaknesses in America's democratic process.
Now, eight years after the Russian intrusion into the U.S. presidential race, election watchers and technology experts say the rise of publicly available artificial intelligence will present an even greater threat to the ability of voters to separate truth from fiction as a vital election draws closer.
It can come in the form of the mimicked voice of a familiar candidate spouting misinformation. It can appear as a damning video of a politician doing something that, in reality, never happened. It can pop up as a text message or email that looks nearly identical to an urgent message from the government, but is just a lie told by a fraudster trying to manipulate people.
As a deeply divided country braces for another bitter presidential election, AI tools are enabling the creation of fake audio and images of politicians — and doing so with a realism that the Russian operatives toiling away in 2016 could scarcely have dreamed of.
"The 2024 election is really going to be, for the U.S., the first AI election," said Lawrence Norden, director of the Elections and Government Program at the Brennan Center for Justice at the New York University School of Law.
Already, the technology has been deployed to try to suppress voter turnout in New Hampshire, where people across the state received a robocall mimicking Joe Biden's voice and urging Democrats not to go to the polls in a primary that was just two days away. Believed to be the first such deployment of AI in a U.S. presidential election, the message came almost a year after someone posted fake audio of a leading Chicago mayoral candidate downplaying police killings right before that city's election.
The call, now the subject of a criminal investigation, was described by one expert as a crude use of artificial intelligence to create what's known as a deepfake, but one that's nonetheless a sign of misinformation efforts to come. Some, people fear, will be far more sophisticated and convincing.
"Anyone with a computer can go and get the code to do this," Kathleen Carley, director of the Center for Computational Analysis of Social and Organizational Systems at Carnegie Mellon University. "It's not only relatively easy to do, it's cheap to do now."
Federal regulators, lawmakers and law enforcement agencies are racing to counter the threat. Three days before the fake Biden calls, Pennsylvania Attorney General Michelle Henry led a coalition of 26 attorneys general in urging the Federal Communications Commission to crack down on the use of AI in telemarketing calls.
On Wednesday, the FCC's chairwoman, citing Ms. Henry's efforts and the misleading call in New Hampshire, announced the commission would move to ban the use of AI-generated voices in robocalls under the Telephone Consumer Protection Act.
At the same time, a bipartisan group of senators from Minnesota, Missouri, Delaware and Maine is pushing a bill that would outlaw the use of so-called deepfakes — realistically mimicked voices and video of candidates — in political ads and fundraising pitches.
The scramble to respond — New Hampshire's attorney general launched an investigation into the voter-suppression calls in his state the day after they were made — comes after years of warnings by election protection advocates about the looming threat.
As far back as 2017, computer coders were toying with programs that used artificial intelligence models to map people's faces and create realistic fake videos. In the past year, as AI technology has become widely accessible through services such as ChatGPT, open-source versions of the learning programs have multiplied across the internet — and fake content has followed.
On social media platforms where posts are rarely policed, such as the encrypted messaging app Telegram, as much as a quarter of the political content is AI-generated misinformation, Ms. Carley said.
And it's getting worse.
"Prevalence is on an exponential upswing," she said.
Collision course
The rapid spread of these tools is colliding with a historically important year for democracy around the globe. Fully half the world's population lives in a country that will vote in a national election this year, according to the Associated Press.
More than 50 nations are holding those elections, and they include some of the world's largest and most influential countries. Taiwan chose a new president in January amid a barrage of AI-generated fake news that analysts say originated in China. India, the world's most populous democracy and the site of a rising Hindu nationalist movement, will hold parliamentary elections this spring that will determine who will serve as prime minister in the nuclear-armed country.
The use of AI to sway or disrupt the elections happening around the world will likely offer a preview of what's to come in the United States as Nov. 5 approaches, experts said.
"It's going to run the gamut," said Adav Noti, executive director of the Campaign Legal Center, a Washington-based group that pushes for election security and transparency measures.
At their most sophisticated, the attacks can exploit deeply held doubts that a broad swath of U.S. voters already have about the validity of their elections — doubts stoked in 2016 by Russian operatives and again in 2020 by former President Trump and the supporters who rejected his loss.
The aim, Mr. Noti said, is "to create chaos in the system."
Though the threat isn't new, the technology available today poses unprecedented challenges for voters trying to separate truth from fiction.
In the aftermath of the 2020 campaign, Mr. Trump's lead lawyer, Rudy Giuliani, convinced millions of people that a surveillance video showed two election workers in Georgia passing a USB stick between them in an effort to corrupt the vote count in a state Mr. Trump lost.
The workers faced waves of threats and harassment that upended their lives, and Mr. Giuliani's lie — he's been ordered by a jury to pay $148 million for defaming the women — helped fuel conspiracy theories that continue to circulate years later.
And that was without the aid of AI-generated video, said Mr. Norden, of the Brennan Center.
In the heated days immediately before and after an election, when people are already inundated by a flood of claims and accusations, bad actors now have the ability to create convincing video or audio designed to inflame a candidate's supporters and roil the country, he said.
"That can be incredibly destabilizing," Mr. Norden said.
Steeper challenges
Further down the ballot, where candidates have lower profiles and often struggle to get their message out, overcoming deepfakes can present even steeper challenges. When news of the New Hampshire call mimicking Mr. Biden's voice broke, the White House had an entire press corps at its disposal to spread the word that it wasn't him.
"For smaller races, down-ballot races, city council races, I'm more concerned that, A, nobody's really going to be watching for false information that's generated through AI, and, B, even if it is noticed, the people who are running in those contests aren't going to have the same kind of resources to push back," Mr. Norden said.
"It's not just their voice, but their website, their image, a video of them," Mr. Norden said.
Financial scammers have already deployed similar AI-powered tools to trick people into believing their messages are coming from political candidates and government sources.
In Utah, official-looking text messages sent in December falsely warned voters that their registration was incomplete and included a link that appeared to lead to a government website. Instead, the links went to sites that automatically installed malware on people's phones and stole their personal and financial information.
Elsewhere, texts that claimed to solicit donations for candidates contained links to websites that mimicked a politician's site but were actually part of a scam. People filled in their banking information, believing they were making a political donation when they were actually handing the data over to criminals, said Andrew Newman, founder of the digital security company ReasonLabs.
"Any major event that happens, scammers are going to leverage that to make money," Mr. Newman said. "It used to be really easy to spot a phishing email, right? The language was bad — bad grammar, spelling mistakes, stuff like that. Now, with AI, that's gone."
And as the fake Biden robocall showed, the message doesn't even have to come via text or email. Swindlers have mimicked the voices of people's relatives and created fake messages saying they're in trouble and need their loved ones to send them money, Mr. Newman said.
Election experts worry the same technology could be used to threaten or intimidate people who are vital to the electoral process, such as poll workers.
"If your voice is ever online, someone can copy it," Ms. Carley said. "The technology to generate voices is cheap, easy to get at and can replicate these voices in near-real time."
Voters are best defense
Regulations like the proposed FCC rule banning AI-generated voices in robocalls can make it easier to prosecute people, she said, a job that falls to the Justice Department at the federal level and attorneys general or district attorneys at the local level. But that enforcement comes after the fact. Agencies like Pennsylvania's Department of State, which oversees elections, monitor social media for disinformation, but laws on the books can't stop someone determined to spread misinformation, Ms. Carley said.
Ultimately, combating misinformation falls to the voters themselves.
"People need to be aware of this so that if they see something or hear something that really provokes a visceral reaction in them around the elections, they're going to want to double check and go to authoritative sources," Mr. Norden said. "I worry that it's going to be used by some people to further undermine confidence — which is already a big problem in the United States."
For local leaders like Allegheny County GOP chairman Sam DeMarco, the growing misuse of artificial intelligence just adds to the problem of fighting tenacious conspiracy theories that continue to lead voters astray.
"Not only can we not believe everything we read, we're not going to be able to believe what we see or hear, either," Mr. DeMarco said. "It really is going to be like the Wild, Wild West. I think anybody that gets fed any type of information that would be considered damaging material needs to stop and consider the source."
Evan Robinson-Johnson contributed to this report.
Mike Wereschagin: [email protected]; Twitter: @wrschgn
___
©2024 the Pittsburgh Post-Gazette
Visit the Pittsburgh Post-Gazette at www.post-gazette.com
Distributed by Tribune Content Agency, LLC.