Opinion

A video on social media of Prime Minister Justin Trudeau shilling for a cryptocurrency scam. A video of Ukrainian President Volodymyr Zelensky telling his troops to lay down their arms in the war with Russia. A multiperson video conference call in which the chief financial officer of a company tells an employee to send US$25-million to a designated bank account.

Except for the hapless employee, every one of the people in those videos was a fake. To be precise, a deepfake – a video, audio file or photograph created with generative artificial intelligence to deceptively mimic the appearance and/or voice of public figures and ordinary people in order to commit fraud, exact revenge, disrupt elections in foreign lands, or just plain cause havoc.

The first time many people heard about deepfakes may have been earlier this year, when pop superstar Taylor Swift became the victim of online porn videos in which her AI-generated likeness appeared.

But what was new to Ms. Swift was already an unsettlingly common occurrence for girls and women who have become victims of deepfaked content spread on social media by malicious peers or ex-partners.

That happened in Winnipeg last December, when girls at a local school came forward to say faked nude photos of themselves were circulating online.

Police laid no charges in the case for a variety of reasons, including the fact that Ottawa has yet to make it illegal to create and share malicious deepfake images and videos of any kind.

The proposed federal Online Harms Act, if passed, would restrict the sharing of deepfakes, but only in the context of pornography and nude photos. Last year, British Columbia enacted a similar law, but again only when it comes to maliciously targeted sexual content.

Meanwhile, the major online platforms – Alphabet Inc., which owns Google and YouTube; Meta, which owns Facebook and Instagram; and X, formerly Twitter – are only just beginning to respond to the emerging threat of deepfakes, whether by applying resources to combat them, or by updating their policies.

Once the great disruptors, these giant platforms are now the disrupted. Generative AI has the potential to turn them into so great a threat to the public good that it could make the current era of online disinformation seem like a quaint prequel to the apocalypse.

As generative AI continues to learn, the deepfakes it produces will become more and more difficult to pre-emptively detect before they go online. In a circular twist worthy of Hollywood, continuing efforts to use generative AI to identify deepfakes could have the unintended consequence of teaching AI how to avoid detection.

The threat can’t be overstated. Last year, when ChatGPT and other commercial generative-AI platforms were in their infancy, the Canadian Security Intelligence Service (CSIS) held a workshop with experts on the dangers of deepfakes. Their conclusion? Deepfakes pose a threat to every important aspect of civil society: business, democracy, intelligence gathering, national defence, crime prevention, personal privacy …

To fight back, Canada and its allies need to criminalize the creation and dissemination of malicious deepfakes, even if the perpetrators are sometimes overseas. As well, the provisions of the online harms bill that apply to deepfake porn and photos, in which platforms have 24 hours to respond to a request to remove harmful content, should be extended to all malicious deepfakes.

Democratic governments such as Canada’s must make it clear to the large platforms that freedom of expression is a critical right, but that deepfakes present a whole new level of danger. Companies that don’t invest in adequate prevention efforts, or are slow to take down pernicious deepfakes, should be held to account.

Governments have too often been slow to take action against the harms inherent to the digital age. With the rise of generative AI, those harms could become existential threats. This time, the democratic world needs to respond quickly.

Machines Like Us: More from The Globe and Mail

On the Machines Like Us podcast, Taylor Owen asks the experts what our future might look like with artificial intelligence and other fast-evolving technologies. In this episode, he spoke with journalist Maria Ressa about how she predicted social media would go awry, and why she believes AI will do the same.
