
Your daughter calls you, sobbing. She says she’s been kidnapped. Then a man gets on the phone and demands a $1-million ransom. There’s no way you can come up with that amount of money, but the kidnapper quickly drops the price to $50,000 in cash.

This was the scenario Jennifer DeStefano faced earlier this year in Arizona, and the one she recounted at a U.S. Senate judiciary committee hearing on June 13. It was eventually revealed to be an elaborate scam when her daughter was located safe at home.

But how was the call made with her daughter’s voice? It turns out the voice was allegedly manufactured with technology that can clone anyone’s voice and make it say whatever you would like.

While we are inundated with the wild and wonderful things technology can do, from real-time face filters on TikTok and Instagram, to deepfake videos that seamlessly graft Arnold Schwarzenegger’s face onto other actors’ performances, to ChatGPT essays written for school, scam artists have proved just as adept at using these technologies to steal money from people.

Think you’re immune? I used to worry about my parents being scammed by phishing e-mails, even ones with their telltale poor grammar, improper formatting or broken English. But now that AI technology can apparently muster a better bedside manner and greater diagnostic accuracy than some human doctors, it should come as no surprise that it can also produce phishing e-mails so convincing that even I’m starting to second-guess them.

Last month I was almost fooled after I got an e-mail from Meta saying that my business Facebook page contained some posts that violated copyright rules and that my account was in danger of being suspended. Initially I ignored it as I hadn’t posted anything in years, let alone material that didn’t belong to me.

But after a number of follow-up e-mails, I started to worry. There were no spelling mistakes. The e-mails were perfectly formatted. They looked like legitimate e-mails I’ve received before from Meta. Instead of clicking on any links within those e-mails, I opened a separate browser window and logged into my account. Everything was fine – it was a phishing e-mail after all. But I almost fell for it.

When I did a Google search for “Facebook business account phishing e-mail scam,” I found that many other users had unknowingly given scammers access to their accounts by clicking on those links. Some were blackmailed to regain access; others saw large bills racked up on their accounts, oblivious to the fact that they had been phished months or years earlier.

In a grandparent or emergency scam, similar to the one Ms. DeStefano went through, someone posing as a family member in distress contacts you, urgently needing financial help. Perhaps they are travelling and lost their wallet. Or maybe they’ve just been in a fender-bender. The scam works because the heightened emotions of an emergency, and the concern you would have for a loved one, are disarming.

Historically, those who reported being scammed this way (many of these scams go unreported out of embarrassment) noted that the caller didn’t sound quite like their loved one. But that got chalked up to a bad connection, or to the fact that they hadn’t spoken in a while. The advent of voice-cloning technology has made this scam insidiously more effective. And it probably won’t be long before video-cloning tech is ubiquitous enough that we start getting video calls that look and sound like our loved ones.

While technology keeps advancing, one of the lowest-tech solutions could be the most effective. And it comes from old-school spycraft.

At the beginning of the sixth instalment of the Mission: Impossible franchise, Tom Cruise’s character meets a shadowy figure who opens with a challenge:

“Fate whispers to the warrior …”

“A storm is coming.”

“The warrior replies …”

“I am the storm.”

These are movie versions of sign-countersign phrases, or challenge-and-password exchanges, used to confirm identities: flub the response and you give yourself away as an impostor. To defend against the grandparent scam and its derivations, a challenge and passphrase of your own, or what some might call an “AI safe word,” is worth discussing with family.

Here are three steps to help prevent you from getting scammed in our brave new AI world:

  1. Awareness: Share information on how these scams operate.
  2. Create a challenge and passphrase, or an AI safe word: It’s probably okay, and a little bit fun, to come up with a nonsensical challenge-and-response pairing because, unlike an actual spy, you don’t need to worry about getting your cover blown. You just need to know when it’s safe to hang up on a scammer.
  3. Practice: Try using your challenge and response periodically when you have a call with family. My wife and I developed an exchange, and I keep forgetting it. So we either have to practise it more, or come up with something more memorable.

As scams evolve, so must our strategies to defend against them.


Preet Banerjee is a consultant to the wealth management industry with a focus on commercial applications of behavioural finance research.
