The recent spread of explicit, pornographic images of superstar Taylor Swift highlights how artificial intelligence can produce convincingly realistic, harmful, and fraudulent imagery. The practice is not new, however: people have long used such technology to target women and girls. And as AI tools grow more prevalent and accessible, experts predict the problem will only worsen, affecting everyone from school-age children to adults.
Already, reports have emerged from New Jersey to Spain of high school students whose faces were manipulated by AI and then shared online by their peers. And a popular female Twitch streamer recently discovered her likeness was being used in a fake explicit video that quickly spread through the gaming community.
Danielle Citron, a professor at the University of Virginia School of Law, emphasized that the issue extends well beyond celebrities. It affects everyday people, including nurses, art and law students, teachers, and journalists; stories have also surfaced about high school students and military personnel, making it a concern that spans demographics.
Although the practice is not new, the targeting of someone as prominent as Taylor Swift could draw more attention to the growing problems around AI-generated imagery. Her devoted fan base, known as “Swifties,” expressed outrage on social media, thrusting the issue into the spotlight. In 2022, Ticketmaster’s botched handling of ticket sales for her Eras Tour sparked similar online anger, followed by legislative efforts to address consumer-unfriendly ticketing practices.
Citron noted, “This is an interesting moment because Taylor Swift is so beloved. People may be paying more attention because it’s someone generally admired who has a significant cultural impact. It’s a moment for reflection.”
“Nefarious purposes” without sufficient safeguards
The fabricated images of Swift spread primarily on the social media platform X, formerly known as Twitter. The photos, which depicted the singer in sexually explicit and suggestive poses, garnered tens of millions of views before being removed. But because nothing posted to the internet ever truly disappears, they will likely continue to circulate through less regulated channels.
While there have been prominent warnings about how AI-generated images and videos could be exploited to influence elections and spread disinformation, there has been far less public discussion of how women’s faces are being inserted, without their consent, into pornographic content.
The practice amounts to an AI-powered version of “revenge porn,” and it is becoming ever harder to tell whether such photos and videos are authentic.
What sets this case apart is the collective action of Swift’s fan base, which used reporting tools en masse to get the offending posts taken down. For most victims, that burden falls on them alone. And while it reportedly took X 17 hours to remove the photos, many manipulated images remain on social media platforms. Ben Decker, who runs Memetica, a digital investigations agency, said social media companies lack effective strategies for monitoring such content.
X, like many major social media platforms, has policies against sharing synthetic, manipulated, or misleading content that can deceive or harm people. However, it has significantly reduced its content moderation team and relies more on automated systems and user reporting. The company did not respond to CNN’s request for comment, and it is currently under investigation in the EU for its content moderation practices.
Other social media companies have also reduced their content moderation teams. Meta, for instance, made cuts to teams addressing disinformation and harassment on its platforms, raising concerns ahead of crucial 2024 elections in the United States and worldwide.
Decker emphasized that what happened to Swift serves as a prime example of how AI is being used for nefarious purposes without sufficient safeguards in place to protect the public.
In response to these images, White House press secretary Karine Jean-Pierre expressed alarm, stating, “It is alarming. We are alarmed by the reports of the circulation of images that you just laid out – false images, to be more exact, and it is alarming.”
An emerging pattern
While this technology has been available for some time, it has garnered renewed attention due to the recent controversial photos of Swift.
Last year, a high school student in New Jersey launched a campaign for federal legislation addressing AI-generated explicit images after photos of herself and roughly 30 other female classmates were manipulated and possibly shared online. Francesca Mani, a student at Westfield High School, expressed frustration at the absence of legal measures to protect victims of AI-generated explicit content. According to her mother, a member or members of the community appeared to have created the images without the girls’ consent.
Westfield Superintendent Dr. Raymond González said school districts are grappling with the challenges and implications of artificial intelligence and other technologies readily available to students.
In February 2023, a similar controversy hit the gaming community when a prominent male video game streamer on the popular platform Twitch was caught viewing deepfake videos of some of his female Twitch colleagues. The streamer known as “Sweet Anita” later described how surreal it is to watch yourself doing something you have never actually done.
The growing availability and accessibility of generative AI tools has made it easier for people to create such images and videos. And, as Decker noted, a wider universe of unmoderated, not-safe-for-work AI models is available on open-source platforms.
Addressing this issue remains challenging. Currently, nine US states have enacted laws against the creation or sharing of non-consensual deepfake imagery or synthetic images imitating someone’s likeness, but there are no federal laws addressing this matter. Many experts advocate for changes to Section 230 of the Communications Decency Act, which shields online platforms from liability related to user-generated content.
Citron noted that this material cannot be prosecuted under child pornography laws, as it is distinct from child sexual abuse imagery. Even so, the humiliation and the feeling of being objectified, and the way victims internalize that perception, can be profoundly disruptive to their social esteem.
Ways to safeguard your images
Individuals can take a few simple steps to reduce the risk of their images being used in non-consensual content.
According to David Jones, a computer security expert at IT services firm Firewall Technical, one advisable step is to keep online profiles private and share photos only with trusted individuals, since it is impossible to know who might be viewing your profile.
It’s worth noting that many instances of “revenge porn” involve individuals who are personally acquainted with their targets. Consequently, limiting what you share in general is the safest approach.
Moreover, the tools used to generate explicit images rely on large amounts of raw data, including images that show a face from multiple angles, so the less such material is available, the better. Jones cautioned, however, that as AI systems become more efficient, it may eventually be possible to create a deepfake of someone from just a single photo.
Additionally, hackers may attempt to exploit their victims by gaining access to their photo collections. To counter this threat, Jones advised against using easily guessable passwords and strongly recommended against writing down passwords.