Fake porn images of Taylor Swift, generated by artificial intelligence, cause outrage in the US


One of the images was viewed 47 million times before it was removed. In a statement, X said that "posting non-consensual nudity (NCN) images is strictly prohibited."

[Photo caption: Taylor Swift, nominated for "Taylor Swift: The Eras Tour," arrives at the 2024 Golden Globes. Jordan Strauss/Invision/AP]

Fake pornographic images of Taylor Swift, generated by artificial intelligence, went viral on social media on Friday (26), drawing outrage from the singer's fans and from American politicians. One of the images was viewed 47 million times on X, formerly Twitter, before being removed on Thursday (25). According to the American press, the post remained visible on the platform for approximately 17 hours.

Pornographic "deepfakes" of celebrities — fake but extremely realistic images — are nothing new. However, activists and authorities worry that easy-to-use tools built on generative artificial intelligence (AI) will unleash an uncontrollable flood of toxic or harmful content. The attack on Swift, the second most-listened-to artist in the world on Spotify, could fuel debate about the phenomenon.

"The only positive thing about this happening to Taylor Swift is that she probably has enough power to get legislation passed to put an end to this. You guys are sick," influencer Danisha Carter wrote on X.

Some analysts say the social network is one of the biggest platforms for pornographic content in the world, since its nudity policies are more flexible than those of Facebook or Instagram, both owned by Meta. In a statement, X reiterated that "posting non-consensual nudity (NCN) images is strictly prohibited" on its platform.
"We have a zero-tolerance policy for this type of content." The platform, owned by mogul Elon Musk, said it was "actively removing all identified images and taking appropriate action against the accounts responsible for posting them." It added that it was "closely monitoring the situation to ensure that any further violations are immediately addressed and the content is removed."

Swift's representatives did not immediately respond to a request for comment from the news agency Agence France-Presse.

Yvette Clarke, a Democratic congresswoman from New York who has backed legislation to combat fake pornographic photos, noted that "with advances in AI, creating deepfakes is easier and cheaper." For his part, Republican lawmaker Tom Keane warned that "AI technology is advancing faster than the necessary guardrails." "Whether the victim is Taylor Swift or any young person in our country, we must establish safeguards to combat this alarming trend," he added.

According to research cited by Wired magazine, 113,000 deepfake videos were uploaded to the most popular porn sites in the first nine months of 2023.
