

Artificial intelligence (AI) cuts both ways for wealth management firms in Canada.

On the one hand, generative AI’s promise as an investment theme has driven many client portfolios back to profitability after 2022’s market declines, which were fuelled by crushing inflation and interest-rate hikes.

On the other hand, AI’s fast-growing use stokes anxieties among wealth management firms’ information technology (IT) teams, who are increasingly concerned fraudsters are leveraging this technological marvel to scam not only clients but advisors too.

“Whenever the movement of money is involved, it’s always a good idea to verify if what someone is telling you is true,” says Chris Nicola, chief strategy officer at Vancouver-based Nicola Wealth Management Ltd., which focuses on serving high-net-worth clients.

Fraud Prevention Month in March could not be timelier given the need for greater awareness among advisors, wealth management firms’ leadership and clients, whose heightened vigilance is often the best defence against increasingly sophisticated scams leveraging widely available generative AI tools.

Last month, KPMG surveyed 300 Canadian organizations victimized by fraud and found that 95 per cent of leaders are very concerned that deepfakes increase the risk of fraud at their companies.

“The technology is indeed a double-edged sword,” says Joel Moses, distinguished engineer and chief technology officer, platforms and systems, at F5 Inc. in Seattle, a provider of cybersecurity solutions, including for Canadian financial institutions.

“It can be used for cybersecurity to defend against attacks, but attackers also use generative AI very well to increase the effectiveness, magnitude and reach of their attacks.”

A TransUnion report to be released this month found the financial services industry in Canada saw a 76 per cent year-over-year increase in digital fraud in 2023, compared with a 3 per cent rise globally.

The Canadian Anti-Fraud Centre also reports Canadians lost about $554-million to fraud in 2023, up from $531-million in 2022.

“The future [of fraud] is here today, and it’s only going to get more interesting,” says Larry Zelvin, head of the financial crimes unit at BMO Financial Group in New York.

He adds that many AI-driven frauds and cyberattacks bear the same characteristics as those perpetrated before the technology was available. Typically, these are phishing attacks in which the fraudster poses as someone the victim knows and trusts to obtain log-in credentials and passwords.

“But AI takes it to a whole new level,” says Mr. Zelvin, former director of the National Cybersecurity and Communications Integration Center for the U.S. Department of Homeland Security.

“For one, the bar to enter the space is lower, so you don’t need a lot of technical skills to do these attacks anymore.”

Generative AI also makes fraud more cost-effective. Fraudsters can craft more targeted attacks with less effort against a wider swath of victims, with increased chances of success.

“AI has taken the traditional thinking of financial motivation for attacks and turned it on its ear,” Mr. Moses says. “It’s basically superpowered attackers.”

Wealth management firms are a big target too because criminals follow the money. “It’s in the name of the industry,” Mr. Moses notes.

The stakes are high, especially as generative AI allows criminals to reproduce voices or even create so-called “deepfake” videos to trick advisors or clients into believing they’re talking to someone they know and trust.

“It’s not something we’ve seen in our business yet, but I make sure our client service team is aware that someone can clone a client’s voice now,” Mr. Nicola says.

He adds that advisory teams have processes in place to address these dangers, such as question-and-answer protocols involving information only advisors and clients know.

As well, for any transaction request prompted by a client phone call, virtual chat or e-mail, advisors often verify by contacting the client using a number on file, he adds.

Cindy Marques, co-founder, chief executive officer and certified financial planner at Money MakeCents Inc. in Toronto, says fraud risk is frequently on her radar.

“My inbox is constantly inundated with phishing e-mails that are posing as my employer or reputable services that I use,” says Ms. Marques, also director, financial planning and education, at Open Access Ltd., which provides employer plan group retirement benefits.

She is equally concerned for clients, who could receive e-mails or be contacted in other ways by fraudsters impersonating her.

“At a quick glance, they would have no reason to question me for asking them about sensitive financial information, as this comes up during our planning process often,” she says.

In her view, the industry as a whole must up its game, tightening processes around sharing sensitive data while educating clients and industry members alike.

“If data are being requested in a manner that is too casual or outside of the usual methods, this is an indication of fraud,” Ms. Marques says, noting she often reminds clients about this too.

That said, AI helps firms weed out many fraud attempts before they reach a victim, Mr. Moses says. “But we must continue to play a cat-and-mouse game and keep advancing because the attackers are always going to.”

Much is at stake given that the wealth management industry is built on trust, Mr. Zelvin says. AI tools – when used criminally – make it difficult to determine real from fake. Increasingly, it will be hard to discern “between the good and the bad actors.”

The relationship between individual advisors and clients will play a crucial role, including having the questions and answers only they know, as mentioned earlier.

“It’s sort of like a kidnapping where you need proof of life,” Mr. Zelvin says.

Yet, with social media, including LinkedIn, even these processes are not unassailable, Mr. Moses cautions.

“I don’t provide much information about myself in public view because it will be used for spear-phishing,” he says.

“This has been a problem for a while, but generative AI makes it much more efficient for attackers.”

In the end, the best defence may be one as old as the industry itself: in-person meetings.

At least technology is not yet at the point at which life-like androids are a reality. “When we get to that point, it’s my plan to become a beachcomber,” Mr. Moses says.

