opinion

Suzie Dunn is an assistant professor at Dalhousie University’s Schulich School of Law. Kristen Thomasen is an assistant professor at the University of British Columbia’s Peter A. Allard School of Law.

For years now, Taylor Swift has been the target of sexual deepfakes: manipulated or completely manufactured sexual images, typically broadcast without the depicted person’s consent. The recent wave of graphic, falsified images of one of the world’s biggest stars is the latest manifestation of non-consensual sexual imagery, this time powered by generative AI. With this technology, people no longer need direct access to someone’s actual body to see them naked – they can go online and create images of anyone they please, engaging in any sex act they can imagine. This is a terrifying risk to our sexual integrity, and the violations are being made worse by governments and social-media companies, which since at least 2017 have failed to properly address the true harms of deepfakes or to provide meaningful recourse for the women they target.

This is not a glitch. Since 2022, what was then Twitter (and is now X) has steadily slashed its content-moderation teams, and its failure to swiftly enforce its own policies on non-consensual nudity is at the heart of the virality of these latest images. (It has since taken a short-sighted and blunt approach to the massive amplification of the images of Ms. Swift, temporarily blocking searches of the pop star’s name on the platform altogether.) But while X may not have a reputation for being hugely sensitive to human rights, other platforms such as Google and Bing have also been slow to address non-consensual sexual deepfakes. As noted by Durham University professor Clare McGlynn, Google search results have directed users to pages displaying such images of women for years, even though the company has policies against precisely this kind of material and has been alerted to the problem.

What’s more, AI image generators are now widely available to the public. Although software filters are meant to mitigate this kind of violative use, the creators of these recent images were apparently able to work around them with ease. As the technology progresses and its adoption widens, it is becoming easier and cheaper for almost anyone to make synthetic images of others.

While this technology barrels ahead, we find ourselves in a potential legal and regulatory void. We need to urgently focus cultural and legal attention on the root causes of this kind of image-based abuse – specifically, ending gender-based sexual violence – while also dealing with how easily these kinds of images can be created and distributed.

In Canada, only a few provinces – British Columbia, New Brunswick, Prince Edward Island and Saskatchewan – have civil intimate-image protection acts that address “altered” images, which could include deepfakes. In all other provinces and territories, the civil and criminal laws on intimate-image sharing to date cover only actual nude or sexual images of people.

Without meaningful protection, anyone with photos on the internet can be a target of generative AI deployed for sexual exploitation. People should have legal protection over realistic sexual images of themselves. It shouldn’t have to be said, but someone’s desire to create nude images of another person should never trump that other person’s rights, including their right to control their sexual images, real or AI-generated.

This is particularly important for women. While people of all genders are targeted by deepfakes, research shows that it is predominantly women who are featured in public-facing deepfakes, and that women face worse outcomes when intimate images of them are shared without consent. Female celebrities are often targeted, but so are female politicians, gamers, feminist commentators, public servants working on disinformation, journalists reporting on sexual violence, poets, teachers, and everyday women and girls. They all deserve better protection from these violations.

Ms. Swift is reportedly considering legal action, as she should. Having someone with her enormous platform expose and advocate against the very real and extensive harms caused by synthetic image-based abuse could have an important, wide-reaching impact on cultural norms, and she has the financial resources to push this as far as she chooses. But many who experience this kind of harm are not in the same position. And if this can happen to a celebrity as huge as Ms. Swift, what does that mean for everyday women and girls?

In the end, while the law slowly catches up, what we most urgently need is to face the more complicated social question: why do people (typically men) feel entitled to make and share intimate images of others (typically women)? If we ignore that question, we will continue to suffer a death by a thousand cuts, as bad actors empowered by technology push us closer and closer to the edge.
