Fake audio becomes the new torment for TikTok and YouTube – 10/12/2023 – Tech

In a stylishly produced TikTok video, Barack Obama — or a voice eerily similar to the former US president — can be heard defending himself against an explosive new conspiracy theory about the sudden death of his former chef.

“Although I cannot understand the basis of the allegations made against me,” says the voice, “I urge everyone to remember the importance of unity, understanding and not to judge hastily.”

In fact, the voice does not belong to the former president. It was a convincing fake, generated by artificial intelligence using sophisticated tools that can clone real voices to create AI puppets with a few mouse clicks.

The technology used to create AI voices has gained prominence since companies like ElevenLabs launched a series of new tools late last year. Since then, audio fakes have quickly become a new weapon in the online disinformation arsenal, threatening to fuel political falsehoods ahead of the 2024 US elections by giving creators a way to put their conspiracy theories in the mouths of celebrities, newscasters and politicians.

The fake audio adds to AI-generated threats like deepfake videos, human-like writing from ChatGPT, and doctored images from services like Midjourney.

Misinformation watchers have noted that the number of videos containing AI-generated voices has increased as content producers and misinformation spreaders embrace the new tools. Social networks, including TikTok, are rushing to flag and label this type of content.

The video that appeared to be of Obama was discovered by NewsGuard, a company that monitors online misinformation. The video was posted by one of 17 TikTok accounts promoting baseless claims with false audio, identified by NewsGuard in a report released in September.

The accounts mainly published videos about celebrity rumors, using narration from an AI voice, but also promoted the false information that Obama is gay and that presenter Oprah Winfrey is involved in the slave trade. The channels received hundreds of millions of views and comments that suggested some viewers believed them.

While the accounts did not have an obvious political agenda, according to NewsGuard, the use of AI voices to share mostly gossip and sensationalized rumors offered an avenue for bad actors who want to manipulate public opinion and share falsehoods en masse.

“It’s a way for these accounts to gain a following that can attract engagement from a broad audience,” says Jack Brewster, enterprise editor at NewsGuard. “Once they have the credibility of a large following, they can venture into more conspiratorial content.”

What are platforms doing to combat fake audio and videos?

TikTok requires labels identifying realistic AI-generated content as fake, but no such labels appeared on the videos flagged by NewsGuard. TikTok said it had removed or stopped recommending several of the accounts and videos for violating policies against impersonating news organizations and spreading harmful misinformation. It also removed the video featuring the AI-generated voice imitating Obama for violating TikTok’s synthetic media policy, because it contained highly realistic content not labeled as altered or fake.

“TikTok is the first platform to provide a tool for creators to label AI-generated content and is a founding member of a new code of industry best practices promoting the responsible use of synthetic media,” said Jamie Favazza, a TikTok spokesperson, referring to a framework recently introduced by the nonprofit Partnership on AI.

While NewsGuard’s report was focused on TikTok, which has increasingly become a news source, similar content was found spreading on YouTube, Instagram and Facebook.

Platforms like TikTok allow AI-generated content from public figures, including television news hosts, as long as they do not spread misinformation. Parody videos showing AI-generated conversations between politicians, celebrities or business leaders — some now deceased — have spread widely since the tools became popular.

The manipulated audio adds a new layer to the misleading videos on the platforms, which have previously featured fake versions of Tom Cruise, Elon Musk and news anchors such as Gayle King and Norah O’Donnell, both from CBS.

TikTok and other platforms have recently been dealing with a series of misleading ads featuring deepfakes of celebrities such as Cruise and the YouTube star MrBeast.

The power of these technologies can profoundly influence viewers. “We know that audio and video are perhaps more prominent in our memories than text,” said Claire Leibowicz, head of AI and media integrity at the Partnership on AI, who worked with technology and media companies on a set of recommendations to create, share and distribute AI-generated content.

TikTok said last month it was introducing a label that users could select to show that their videos used AI. In April, the app began requiring users to disclose manipulated media showing realistic scenes and banned deepfakes of young people and private figures.

Asked by TikTok for advice on how to word the labels, David Rand, a professor of management science at the Massachusetts Institute of Technology, said such labels would be of limited use against false information because “the people who are trying to be deceptive won’t put up this notice.”

TikTok also said last month that it was testing automated tools to detect and label AI-generated media, which Rand said would be more useful, at least in the short term.

YouTube bans political ads that use AI and requires other advertisers to disclose when AI is used in their ads. Meta, which owns Facebook, added a label to its fact-checking toolkit in 2020 indicating whether a video has been “altered.” X, formerly Twitter, requires that misleading content be “significantly and deceptively altered, manipulated or fabricated” to violate its policies. The company did not respond to requests for comment.

Tools at the service of disinformation

Obama’s AI voice was created using tools from ElevenLabs, a company that gained international attention late last year with its free AI text-to-speech tool capable of producing realistic audio in seconds. The tool also allowed users to upload recordings of someone’s voice and produce a digital copy.

After the tool’s launch, users of 4chan, a fringe message board, organized to create a fake version of the actress Emma Watson reading an antisemitic speech.

ElevenLabs, a 27-employee company based in New York City, responded to the misuse by limiting the voice cloning feature to paid users. The company also launched an AI detection tool capable of identifying artificial intelligence content produced by its services.

“More than 99% of users on our platform are creating interesting, innovative and useful content,” an ElevenLabs representative said in an email statement, “but we recognize that instances of misuse exist and are continually developing and rolling out preventative measures to contain them.”

In tests carried out by The New York Times, ElevenLabs’ detector successfully identified audio from TikTok accounts as AI-generated. But the tool failed when music was added to the clip or when the audio was distorted, suggesting that misinformation spreaders could easily escape detection.

AI companies and academics have explored other methods for identifying fake audio, with mixed results. Some companies add an invisible watermark to AI audio, embedding signals that it was machine-generated. Others have pressed AI companies to limit which voices can be cloned, potentially banning replicas of politicians like Obama, a practice already in place in some image tools, such as Dall-E, which refuse to generate certain political images.

Leibowicz explained that computer-generated audio was especially challenging for listeners to identify compared to visual changes.

“If it were a podcast, would you need a warning every five seconds?” asked Leibowicz. “How do you keep a signal consistent over a long piece of audio?”

Even if platforms adopt AI detectors, the technology must constantly improve to keep up with advances in AI generation.

TikTok said it was developing new detection methods internally and looking at partnership options.

“Big tech companies worth billions or even trillions of dollars can’t do this? That’s kind of surprising to me,” said Hafiz Malik, a professor at the University of Michigan-Dearborn who is developing AI audio detectors. “If they don’t want to do it, that’s understandable. But to say they can’t do it? I don’t accept that.”
