Election Deepfakes Are Here And Better Than Ever [CNET]


It doesn’t take much to create a pretty good deepfake these days, especially when someone has access to artificial intelligence tools, a decent gaming computer and oodles of audio and video samples of the person they’re trying to clone.

Scammers are using these often real-time video and audio clones to trick everyone from companies that believe they're wiring money to a top executive, to parents who send money in a panic after receiving a call for help from someone they think is their kid.

And now the increasingly convincing fake videos, some of which are being reposted on social media and amplified by the likes of Donald Trump and Elon Musk, are being used to snooker Americans ahead of the November presidential election. Experts worry these deepfakes could potentially affect how or even if people vote.

“It’s important to make people aware of these things, because the election is, what, three months away, and it’s already happening,” said Brandon Kovacs, a senior red teamer for the cybersecurity firm Bishop Fox. Red teamers help companies shore up their cyberdefenses by attacking them to find security holes.

Election disinformation, whether it’s spread by politicians or enemies of the US, isn’t a new thing. What is new for the 2024 presidential race is the rise of open-source AI-powered tools, complete with YouTube tutorials, that can allow just about anybody to create potentially convincing deepfakes and use them to spread disinformation.

“It’s not like you’re using some crazy secretive tools,” Kovacs said. “All of the stuff is out there ready to go.”

Bishop Fox’s Brandon Kovacs demonstrates just how easy it is to deepfake someone by transforming himself into a female coworker.

Bree Fowler/CNET

That’s one of the big reasons why Kovacs spent a long weekend earlier this month at the Defcon conference in Las Vegas demonstrating in the massive event’s AI Village just how easy it is to create the fake videos. The annual gathering brings together tens of thousands of hackers and other cybersecurity professionals.

Using just a consumer-level gaming laptop, a basic DSLR camera, some lighting and a green screen, Kovacs transformed eager Defcon attendees into everyone from one of the higher-profile hackers on his team to celebrities like Jackie Chan and Keanu Reeves.

While the results admittedly weren’t perfect, it was amazing how well the face-swapping software worked. Attendees were transformed into the deepfaked person of their choice on a TV screen next to them in real time. Background scenes like an office or a TV newsroom replaced the green screen and props like wigs helped frame the swapped face and add elements of natural movement that made the overall image more convincing.

When the attendee moved and talked, so did the deepfaked image. Voices also were cloned in real time, though they were tough to hear in the packed convention center. There wasn't much of a video delay, and Kovacs said that using a higher-powered computer instead of a consumer-grade model would have minimized it significantly.

The goal of the demonstrations was to make people aware of just how far deepfakes have come, as well as help those who defend computer systems build better models to detect them.

Deepfakes as disinformation

Deepfakes don’t have to be state-of-the-art to be convincing, especially if they’re spread by someone high-profile.

Trump recently posted images, at least some of which appeared to be AI-generated, to his Truth Social account that implied he was being endorsed by megastar Taylor Swift and her fans. The images, which he topped with the text “I accept,” were originally posted on X, formerly known as Twitter, by a user who labeled them as satire. One of the images reposted on Trump’s Truth Social account even has the word “satire” in the image text.

On the flip side, Trump has also falsely accused Vice President Kamala Harris’ campaign of deepfaking a photo taken at Detroit Wayne County Metropolitan Airport, saying she “AI’d” it to show a massive crowd that he claims didn’t exist. But numerous other videos and photos of the event showed a crowd similar in size to the one shown in the Harris campaign photo. Local reporters at the event estimated the crowd at about 15,000 people.

In the same Truth Social post, Trump also reiterated his false claims of election cheating by Democrats. Nearly four years after he was voted out of office, Trump continues to push the lie that the 2020 election was rigged, despite no actual evidence showing that.

Representatives for both the Trump campaign and Swift did not respond to emails seeking comment.

Separately, Elon Musk, the owner of the X platform who has endorsed Trump, got into hot water in July after he posted a video to X that used voice-cloning technology to replicate Harris’ voice, creating a deepfaked voiceover that played over footage from one of her campaign videos. 

Musk didn't label the post as satire, but after drawing criticism, he clarified that it was.

It’s still OK to be funny, right?

Whether it's coming from late-night talk show hosts or the internet, it's hard to imagine modern American politics without satire, which, admittedly, can sometimes inadvertently misinform people. But deepfakes take things to a new level when the people behind them intentionally use them to spread disinformation for their own gain.

And Adam Marrè, chief information security officer at the cybersecurity company Arctic Wolf, says those deepfakes need to convince relatively few people to be effective. 

“As people stream through their feeds they’re going to see these and I think some of them may not understand that they’re deepfakes, or maybe they don’t even care,” Marrè said. “And all that is going to shape opinion in certain ways.”

That’s why, he says, it’s crucial for social media companies, as well as AI companies, to do their best to ferret out those behind damaging deepfakes.

“I do still worry though that we’re depending on their good graces, their desire to be a good citizen, for them to do this,” he said. “There’s no regulatory basis that we can use, or policymakers can use to enforce this. That’s something we’re still lacking.”