Deepfaking it: How the AI boom could wreak havoc on the 2024 election

by mcardinal

“I actually like Ron DeSantis a lot,” Hillary Clinton reveals in a fake online endorsement video. “He’s just the kind of guy this country needs, and I really mean that.”

In a series of viral photos, former president Donald Trump is wrangled to the ground by police officers as he is arrested in New York City, with one showing the sleeve of his suit jacket being ripped off.

Welcome to America’s 2024 presidential race, where reality is up for grabs.

The Clinton and Trump deepfakes – realistic yet fabricated images created by AI algorithms trained on copious online footage – are among thousands surfacing on social media, blurring fact and fiction in the polarized world of U.S. politics.

While such synthetic media has been around for several years, it’s been turbocharged over the past year by a slew of new “generative AI” tools such as Midjourney that make it cheap and easy to create convincing deepfakes, according to Reuters interviews with about two dozen specialists in fields including AI, online misinformation and political activism.

“It’s going to be very difficult for voters to distinguish the real from the fake. And you could just imagine how either Trump supporters or Biden supporters could use this technology to make the opponent look bad,” said Darrell West, senior fellow at the Brookings Institution’s Center for Technology Innovation.

“There could be things that drop right before the election that nobody has a chance to take down.”

Tools that can generate deepfakes are being released with few or imperfect guardrails to prevent harmful misinformation as the tech sector engages in an AI arms race, said Aza Raskin, co-founder of the Center for Humane Technology, a nonprofit that studies technology's impact on society.

While major social media platforms like Facebook, Twitter, and YouTube have made efforts to prohibit and remove deepfakes, their effectiveness at policing such content varies.


There have been three times as many video deepfakes of all kinds and eight times as many voice deepfakes posted online this year compared to the same time period in 2022, according to DeepMedia, a company working on tools to detect synthetic media.

In total, about 500,000 video and voice deepfakes will be shared on social media sites globally in 2023, DeepMedia estimates. Until late last year, cloning a voice cost about $10,000 in server and AI-training expenses; now startups offer it for a few dollars, the company says.

No one is certain where the generative AI road leads or how to effectively guard against its power for mass misinformation, according to the people interviewed.

Industry leader OpenAI, which has changed the game in recent months with its release of ChatGPT and the updated model GPT-4, is itself grappling with the issue. CEO Sam Altman told Congress this month that election integrity was a “significant area of concern” and urged rapid regulation of the sector.

Unlike some smaller startups, OpenAI has taken steps to restrict the use of its products in politics, according to a Reuters analysis of the terms of use of half a dozen leading companies offering generative-AI services.

The guardrails have gaps, though.

For example, OpenAI says it prohibits its image generator DALL-E from creating images of public figures – and indeed, when Reuters tried to create images of Trump and Biden, the request was blocked and a message appeared saying it “may not follow our content policy.”

Yet Reuters was able to create images of at least a dozen other U.S. politicians, including former Vice President Mike Pence, who is also weighing a 2024 White House run.

OpenAI also restricts any “scaled” usage of its products for political purposes. That bans use of its AI to send out mass personalized emails to constituents, for example.

The company, which is backed by Microsoft, explained its political policies to Reuters in an interview but didn’t respond to further requests for comment on apparent enforcement gaps, such as its failure to block image creation of some politicians.

Several smaller startups have no explicit restrictions on political content.

Midjourney, which launched last year, is the leading player in AI-generated images, with 16 million users on its official Discord server. The app, whose plans range from free to $60 a month depending on factors such as picture quantity and speed, is a favorite of AI designers and artists due to its ability to generate hyper-realistic images of celebrities and politicians, according to four AI researchers and creators interviewed.

Midjourney didn’t respond to a request for comment for this article. During an online chat on Discord last week, CEO David Holz said the company would likely make changes ahead of the election to combat misinformation.

Midjourney wants to cooperate on an industry solution to enable the traceability of AI-generated images with a digital equivalent of watermarking and would consider blocking images of political candidates, Holz added.


Even as the industry wrestles with how to prevent misuse, some political players are themselves seeking to harness the power of generative AI to soup up campaigns.

So far, the only high-profile AI-generated political ad in the U.S. was one published by the Republican National Committee in late April. The 30-second ad, which the RNC disclosed as being entirely generated by AI, used AI-generated images to suggest a cataclysmic scenario should Biden be reelected, with China invading Taiwan and San Francisco being shut down by crime.

The RNC didn’t respond to requests for comment on the ad or its wider use of AI. The Democratic National Committee declined to comment on its use of the technology.

Reuters polled all the Republican presidential campaigns on their use of AI. Most did not reply, although Nikki Haley’s team said they were not using the technology and longshot candidate Perry Johnson’s campaign said it was using AI for “copy generation and iteration,” without giving further details.

The potential for generative AI to produce campaign emails, posts, and advertisements is irresistible for some activists who feel the low-cost tech could level the playing field in elections.

Even deep in rural Hillsdale, Michigan, machine intelligence is on the march.

Jon Smith, Republican chair for Michigan’s 5th Congressional district, is holding several educational meetings so his allies can learn to use AI for social media and ad generation.

“AI helps us play against the big cats,” he said. “I see the biggest upswing in the local races. Someone who is 65 years old, a farmer and county commissioner, he could easily be primaried by a younger cat using the technology.”

Political consultancies are also seeking to harness AI, further muddying the line between real and unreal.

Numinar Analytics, a political data company that focuses on Republican clients, has begun experimenting with AI content generation for audio and images, as well as voice generation to potentially create personalized messaging in a candidate’s voice, founder Will Long said in an interview.

Democratic polling and strategy group Honan Strategy Group is meanwhile trying to develop an AI survey bot. It hopes to roll out a female-voiced bot in time for the 2023 municipal elections, CEO Bradley Honan said, citing research that both men and women are more likely to speak to a female interviewer.

Copyright 2023 Thomson Reuters