Artificial intelligence-generated images of Taylor Swift spread across social media in late January and, a new report suggests, originated as part of a recurring challenge on one of the internet’s most notorious message boards.
Graphika, a research firm that studies misinformation, traced the images to a community on 4chan, a message board known for sharing hate speech, conspiracy theories and, increasingly, racist and offensive content created using generative AI tools (like ChatGPT).
The people on 4chan who created the images of the singer did so as part of a game of sorts, researchers said — a test to see if they could create lewd (and sometimes violent) images of famous female figures.
Swift’s synthetic images spread to other platforms and were viewed millions of times. Fans rallied to Swift’s defense, and lawmakers demanded stronger protections against AI-created images.
Graphika found a thread of messages on 4chan that encouraged people to try to bypass safeguards put in place by image-generating tools, including OpenAI’s Dall-E, Microsoft Designer, and Bing Image Creator. Users were instructed to share “tips and tricks for finding new ways to bypass filters.” “Good luck, be creative,” the post encouraged.
Sharing harmful content through games allows people to feel connected to a wider community and be motivated by the prestige they receive for participating, experts said. Ahead of the 2022 midterm elections, groups on platforms like Telegram, WhatsApp, and Truth Social engaged in a hunt for voter fraud, earning points or honorary titles for producing alleged evidence of voter misconduct — though actual evidence of voter fraud is exceptionally rare.
In the 4chan thread that led to Swift’s fake images, several users received praise — “beautiful gen anon,” wrote one — and were asked to share the command used to create the images. One user lamented that an instruction produced an image of a celebrity wearing a swimsuit rather than nude.
4chan’s posted rules that apply sitewide do not specifically prohibit AI-generated sexually explicit images of real adults.
“These images originated from a community of people motivated by the ‘challenge’ of bypassing the safeguards of generative AI products, and new restrictions are seen as just another obstacle to be ‘defeated,'” said Cristina López G., senior analyst from Graphika, in a statement. “It is important to understand the gamified nature of this malicious activity to prevent further abuse at the source.”
Swift is “far from the only victim,” said López G. In the 4chan community that manipulated her image, many actresses, singers and politicians were featured more frequently than Swift.
OpenAI said in a statement that Swift’s explicit images were not generated using its tools, noting that it filters out more explicit content when training its Dall-E model. The company also said it uses other safeguards, such as denying requests that ask for a public figure by name or that seek explicit content.
Microsoft said it is “continuing to investigate these images” and that it has “strengthened its existing security systems to further prevent misuse of the company’s services to help generate images like these.” The company prohibits users from using its tools to create adult or intimate content without consent and warns repeat offenders that they may be blocked.
Fake computer-generated pornography has been a scourge since at least 2017, affecting celebrities, government figures, Twitch streamers, students and others. Fragmented regulation leaves few victims with legal recourse; even fewer have a dedicated fan base to drown out the fake images with coordinated “Protect Taylor Swift” posts.
After Swift’s fake images went viral, White House press secretary Karine Jean-Pierre called the situation “alarming” and said social media companies’ lax enforcement of their own rules disproportionately affects women and girls. She said the Justice Department recently funded the first national hotline for people targeted by image-based sexual abuse, which the department described as meeting a “growing need for services” related to the distribution of intimate images without consent. The Screen Actors Guild-American Federation of Television and Radio Artists, the union that represents tens of thousands of actors, called the fake images of Swift and others a “theft of their privacy and right to autonomy.”
Artificially generated versions of Swift have also been used to promote scams involving Le Creuset cookware. AI was used to imitate President Joe Biden’s voice in robocalls discouraging voters from participating in the New Hampshire primary election. Technology experts say that as AI tools become more accessible and easier to use, audio and video fakes with realistic avatars can be created in a matter of minutes.
Researchers said the first sexually explicit AI images of Swift in the 4chan thread appeared on January 6, 11 days before they reportedly appeared on Telegram and 12 days before they appeared on X, formerly known as Twitter. 404 Media reported on January 25 that viral images of Swift had spread to mainstream social media platforms from 4chan and a Telegram group dedicated to abusive images of women. The British newspaper the Daily Mail reported that week that a website known for sharing sexualized images of celebrities posted the images of Swift on January 15.
For several days, X blocked searches for Taylor Swift “out of an abundance of caution to ensure we were cleaning up and removing all images,” said Joe Benarroch, the company’s head of business operations.