Taylor Swift may have legal options, including suing Elon Musk's X, after obscene AI-generated images of the pop star spread on social media, experts said.
The sexually explicit deepfakes proliferated mainly on X, formerly known as Twitter, but were also posted to Instagram, Facebook and Reddit.
They were additionally uploaded to Celeb Jihad, a satirical website known for sharing leaked and often intimate videos and photographs of public figures.
The AI creations showed the 34-year-old in a number of provocative positions, centered on Kansas City Chiefs memorabilia. The Grammy winner has been dating Chiefs tight end Travis Kelce since last summer, with her increased presence at football games prompting backlash from a portion of NFL fans dubbed the “Dads, Brads and Chads.”
According to David Gelman, a criminal law attorney at Gelman Law in New Jersey, Swift could sue X, along with the other social networks that distributed the images.
“Legal liability could potentially attach to the original publisher and every re-publisher of the images,” Gelman told Newsweek. “If there is evidence that X or any other social network knew, or had reason to know, about the images and chose not to remove them as quickly as practicable, that is a factor that would be considered by a court.”
Musk purchased Twitter for $44 billion in October 2022, subsequently rebranding the microblogging site as X. The platform’s reported failure to curb intolerance and misinformation has drawn criticism from anti-hate speech groups, with several advertisers abandoning the platform in November over its alleged proliferation of antisemitism and Nazi propaganda.
On Thursday, a Swift insider told the Daily Mail that the singer was considering legal action against the deepfake porn group allegedly responsible for the pictures, with technology outlet 404 Media reportedly tracing the images’ origins to a community on Telegram.
The messaging platform has a reputation for attracting extremists, with this specific group allegedly devoted to “abusive” pictures of women. The community reportedly uses a free Microsoft text-to-image AI generator, along with other deepfake tools, to create the mock-ups.
Swift fans hit out at X over the time it took for the social network to remove the pictures following user reports.
According to The Verge, one image remained on the site for more than 17 hours before it was removed, racking up over 45 million views and 24,000 reports in the meantime.
Although X and other social media sites could be held legally responsible for the spread of the deepfakes, the odds of that happening are slim. According to New York-based criminal defense attorney David Schwartz, Section 230(c)(1) of the Communications Decency Act gives “broad immunity” to social media sites.
“Hundreds of thousands of people are victims of AI-generated porn or other types of hateful forms of speech online,” he told Newsweek. “But only in very few limited situations, where there is some intention on the part of the provider, will they be held liable.”
The time it takes for a social network to remove an offensive image after it’s been reported could affect the outcome. However, Gelman said the timescale for action isn’t clear-cut.
“It’s an unresolved legal issue,” Gelman explained. “This is something the Supreme Court will surely be looking at again soon.”
Legal Options for Swift
Swift could sue the creators of the images for misappropriating her likeness under intellectual property law, with Schwartz citing defamation, cyber harassment, identity theft, revenge porn and obscenity laws as other possible avenues for the “Anti-Hero” singer.
“The technology is advancing every day and our antiquated laws are inadequate to address victims of AI-generated porn,” he said.
Separately, Musk has been accused of allowing hate speech and fake news to flourish on X since he took over the site.
Former X employees sued Musk in August, accusing the 52-year-old of sex, race and age discrimination.
Musk also filed a lawsuit against the non-profit Media Matters in November, after the media watchdog published a report accusing X of pairing major advertisers with antisemitic content, leading to the exodus of brands including Disney, Apple, IBM and Paramount.
This was followed by a NewsGuard analysis that found “Premium” X accounts (users who pay an $8 monthly subscription for the verified “Blue Tick” next to their name) accounted for almost three-quarters of misinformation regarding the Israel-Hamas conflict.
The Palestinian militant group launched a surprise attack on Israel on October 7, with Israel subsequently declaring war on Hamas and launching a military offensive in the Gaza Strip, where the group is based.
According to NewsGuard, Premium X accounts shared 74 percent of the site’s most viral fake news posts about the war. The misinformation rating company suggested that X’s Community Notes feature, in which users fact-check or debunk posts with additional sources or information, wasn’t enough to stop the spread of fake news.