
David Silverberg is a Toronto journalist and editor who has reported on digital culture and Silicon Valley for the past 17 years.

I remember the first time a fake AI-generated video almost fooled me. I was scrolling on Twitter, now called X, when I spotted a friend linking to a TikTok video of Tom Cruise strumming a guitar and singing the lyrics to the Dave Matthews Band hit Crash Into Me.

As a DMB fan, I couldn’t help but listen through the entire clip, and then wondered: “This seems off-brand for Tom Cruise. Wait, does he even play guitar?” After some research, I found out it was a deepfake video generated by visual and AI-effects artist Chris Umé, with the help of a Cruise stand-in, actor Miles Fisher.

I’ve been covering artificial intelligence and disinformation as a freelance journalist for more than seven years, so I have the knowledge and tools needed to spot fake images, videos and even audio. But generally, Canadian students aren’t so well equipped.

For too long, educators have been left in limbo as to how to teach youth about AI-generated content that can do far more harm than pairing an actor with a classic tune. It’s up to individual teachers to dedicate class time to discussing the risks of AI-generated content, which can range from, say, Ukrainian President Volodymyr Zelensky announcing the war is over, to Prime Minister Justin Trudeau promoting a financial “robot trader,” to former U.S. president Donald Trump accepting an endorsement from Taylor Swift. All three of these AI deepfakes have spread online in the past two years.

We seem to be too confident in how easily we can detect deepfakes. A 2021 study found that “people are biased toward mistaking deepfakes as authentic videos (rather than vice versa) and they overestimate their own detection abilities.” These dangerous assumptions can lead to disastrous consequences: Political parties can manipulate our beliefs with a single image or video altered to show a reality far from the truth, and nefarious actors can send e-mails from a “loved one in distress” begging for money or, even more tragically, pleading to meet in person ASAP.

In June, I attended the annual Collision conference in Toronto and heard a keynote talk by Geoffrey Hinton, known as the “godfather of AI.” Of the many points he made about the risks of AI running amok, he remarked on how dire it is that AI-generated deepfake videos are spreading online at a rapid pace while little is being done to warn people about their dangers.

“We have to inoculate the public about deepfakes by showing a fake video to them and saying exactly what makes it fake, whether that is Trump or Biden in a video doing or saying something they didn’t do,” he said. He stressed that the coming U.S. election will only lead to a boom in these deepfake videos.

He’s right, but I’d take it a step further. As important as it is for adults to understand the differences between fake and real content, AI literacy has to begin with teens. The more youth learn how to detect deepfakes, the better equipped they will be as adults to discern the garbage spamming their feeds. But they can’t do this alone.

School boards, along with education advocates, should ensure their curricula mandate AI literacy courses in classrooms across Canada. I contend this education is as essential as math and English, even if today’s teens claim to be tech-savvy and adept at spotting bogus posts. The creators of these deepfakes will always be one step ahead of the public and their detection systems, and AI systems such as Grok-2 now allow real people to be deepfaked, a phenomenon unheard of until recently. Even more troubling is Google’s recent decision to again allow users to create images of people via its Gemini AI tool, eight months after it took the image-generation feature down.

Learning more about AI will also provide students with in-demand skills to help them succeed in the future work force. If we’re so passionate about teaching kids to code, to help prepare them for jobs at Google or Microsoft or their own startup, we should take teaching them to spot deepfake images and videos just as seriously.

At the very least, Canadian educators can take inspiration from what other institutions are proposing. Stanford University has experimented with helping California educators add lesson plans on the benefits and risks of AI. “At my school, there are a couple of other teachers who are excited about AI, but the rest are worried the sky is falling,” a teacher in Los Angeles said in a press release. “Keeping kids from cheating is not the issue. We don’t need to fear this technology. Let’s unpack the bias, bring it into the light, teach it, and show that we can mitigate it.”

Canada is likely brimming with forward-thinking and concerned teachers, but many educators may also be suspicious of AI’s role in spreading disinformation, or already overwhelmed by their current course loads. Understandably, they can’t implement AI literacy in their schools on their own, but if school boards wise up to this worrying trend, they can take the lead and drive change in a sector that sorely needs it.
