Microsoft Says AI Deepfake Abuse Should Be Illegal [CNET]

This week, X owner Elon Musk shared a video that used a cloned voice of Kamala Harris and defended it as parody.

Ian Sherr, Contributor and Former Editor at Large / News

Artificial intelligence seems to be everywhere these days, doing good by helping doctors detect cancer and doing bad by helping fraudsters bilk unsuspecting victims. Now, Microsoft says the US government needs new laws to hold people who abuse AI accountable.

In a blog post Tuesday, Microsoft said US lawmakers need to pass a “comprehensive deepfake fraud statute” targeting criminals who use AI technologies to steal from or manipulate everyday Americans.

“AI-generated deepfakes are realistic, easy for nearly anyone to make, and increasingly being used for fraud, abuse, and manipulation — especially to target kids and seniors,” Microsoft President Brad Smith wrote. “The greatest risk is not that the world will do too much to solve these problems. It’s that the world will do too little.”

Microsoft’s plea for regulation comes as AI tools spread across the tech industry, giving criminals increasingly easy access to technology that can help them gain the confidence of their victims. Many of these schemes abuse legitimate tools designed to help people write messages, do research for projects and create websites and images. In the hands of fraudsters, those same tools can produce fake forms and believable websites that fool users and steal from them.

“The private sector has a responsibility to innovate and implement safeguards that prevent the misuse of AI,” Smith wrote. But he said governments need to establish policies that “promote responsible AI development and usage.”

Already behind

Though AI chatbot tools from Microsoft, Google, Meta and OpenAI have been broadly available for free for only a couple of years, the data on how criminals are abusing them is already staggering.

Earlier this year, AI-generated pornography of global music star Taylor Swift spread “like wildfire” online, gaining more than 45 million views on X, according to a February report from the National Sexual Violence Resource Center.

“While deepfake software wasn’t designed with the explicit intent of creating sexual imagery and video, it has become its most common use today,” the organization wrote. Yet despite widespread acknowledgment of the problem, the group notes that “there is little legal recourse for victims of deepfake pornography.”

Meanwhile, a report this summer from the Identity Theft Resource Center found that fraudsters are increasingly using AI to help create fake job listings as a new way to steal people’s identities. 

“The rapid improvement in the look, feel and messaging of identity scams is almost certainly the result of the introduction of AI-driven tools,” the ITRC wrote in its June trend report.

That’s all on top of the rapid spread of AI-manipulated online posts that erode our shared understanding of reality. One recent example appeared shortly after the attempted assassination of former President Donald Trump in July: manipulated photos spread online that appeared to show Secret Service agents smiling as they rushed Trump to safety. The original photograph shows the agents with neutral expressions.

Just this past week, X owner Elon Musk shared a video that used a cloned voice of Vice President and Democratic presidential candidate Kamala Harris to denigrate President Joe Biden and refer to Harris as a “diversity hire.” X’s rules prohibit users from sharing manipulated content, including “media likely to result in widespread confusion on public issues, impact public safety, or cause serious harm.” Musk has defended the post as parody.

For his part, Microsoft’s Smith said that while many experts have focused on deepfakes used in election interference, “the broad role they play in these other types of crime and abuse needs equal attention.” 
