‘AI Obama’ and fake newscasters: How AI audio is swarming TikTok

Claire Leibowicz

In a slickly produced TikTok video, former President Barack Obama — or a voice eerily like his — can be heard defending himself against an explosive new conspiracy theory about the sudden death of his former chef.

“While I cannot comprehend the basis of the allegations made against me,” the voice says, “I urge everyone to remember the importance of unity, understanding and not rushing to judgments.”

In fact, the voice did not belong to the former president. It was a convincing fake, generated by artificial intelligence using sophisticated new tools that can clone real voices to create AI puppets with a few clicks of a mouse.

The technology used to create AI voices has gained traction and wide acclaim since companies such as ElevenLabs released a slate of new tools late last year. Since then, audio fakes have rapidly become a new weapon on the online misinformation battlefield, threatening to turbocharge political disinformation before the 2024 election by giving creators a way to put their conspiracy theories into the mouths of celebrities, newscasters and politicians.

The fake audio adds to the AI-generated threats from “deepfake” videos, humanlike writing from ChatGPT and images from services such as Midjourney.

Disinformation watchdogs have noticed the number of videos containing AI voices has increased as content producers and misinformation peddlers adopt the novel tools. Social platforms including TikTok are scrambling to flag and label such content.

The video that sounded like Obama was discovered by NewsGuard, a company that monitors online misinformation. The video was published by one of 17 TikTok accounts pushing baseless claims with fake audio that NewsGuard identified, according to a report the group released in September. The accounts mostly published videos about celebrity rumors using narration from an AI voice, but also promoted the baseless claim that Obama is gay and the conspiracy theory that Oprah Winfrey is involved in the slave trade. The channels had collectively received hundreds of millions of views and comments that suggested some viewers believed the claims.

TikTok requires that realistic AI-generated content be labeled as fake, but no such labels appeared on the videos flagged by NewsGuard. TikTok said it had removed or stopped recommending several of the accounts and videos for violating policies against posing as news organizations and spreading harmful misinformation.

Although NewsGuard’s report focused on TikTok, which has increasingly become a source of news, similar content was found spreading on YouTube, Instagram and Facebook.

Platforms like TikTok allow AI-generated content of public figures, including newscasters, so long as they do not spread misinformation.

The power of these technologies could profoundly sway viewers. “We do know audio and video are perhaps more sticky in our memories than text,” said Claire Leibowicz, head of AI and media integrity at the Partnership on AI, which has worked with technology and media companies on a set of recommendations for creating, sharing and distributing AI-generated content.

TikTok said last month that it was introducing a label that users could select to show whether their videos used AI. In April, the app started requiring users to disclose manipulated media showing realistic scenes and prohibiting deepfakes of young people and private figures. David Rand, a professor of management science at the Massachusetts Institute of Technology whom TikTok consulted for advice on how to word the new labels, said the labels were of limited use when it came to misinformation because “the people who are trying to be deceptive are not going to put the label on their stuff.”

TikTok also said last month that it was testing automated tools to detect and label AI-generated media, which Rand said would be more helpful, at least in the short term.

© 2023 The New York Times Company