Before it onboarded a chatbot, Neo Financial Technologies Inc. received up to 15,000 e-mails a day from customers, a volume that posed unique challenges for the mobile-first, internet-based business.
“E-mail in banking is a really hard channel,” says Shannon Burch, the company’s head of customer experience. “It’s not authenticated; we really can’t do much for customers because we can’t give any personal banking information out unless we’ve confirmed their identity. Our customers clearly wanted that in-the-moment resolution.”
The most obvious solution was a chatbot that could instantly address common customer questions, but Ms. Burch was initially wary of automating customer service. As a branchless bank, Neo relies on its ability to provide great remote experiences, something chatbots have not always delivered.
“I’m pretty sure most Canadians have had a really awful chatbot experience,” she explains. “I worked for a lot of big corporations prior to coming to Neo, and even set up some of those terrible chatbot experiences. We’ve all learned that’s not the kind of experience we want for customers.”
Ms. Burch says she learned the importance of experimentation and internal testing, of starting with the most common customer inquiries before gradually expanding, and of dedicating staff to support automated customer-service tools.
After a few months of training and onboarding its new chatbot, Neo was able to eliminate its customer-service e-mail channel altogether.
“The number of chats we get is one-third the number of e-mails we were getting,” Ms. Burch says. “Customers that come through the chat, we actually see 50 per cent contained within the bot, which means they don’t flow through to our live specialists because they’ve gotten their answer from the chatbot.”
With the rapid advancement of artificial intelligence, chatbots have improved dramatically in recent years, evolving from frustratingly limited preprogrammed responses to fluid, conversational generative AI.
This is the kind of artificial intelligence, popularized by ChatGPT, that can create text, images, audio and video based on the data it has been “trained” on, and whose output can sometimes be indistinguishable from something made by a human.
While challenges remain, it’s become easier to provide a positive customer experience with an automated solution.
“Now we’re seeing the emergence of intelligent chatbots, driven by generative AI, which has cognitive capabilities, and that’s what I recommend,” says Krish Banerjee, managing director and Canada lead for Accenture’s data and AI practice. “You have an opportunity to leapfrog [older versions], because these new capabilities have only been available for the past 12 to 18 months.”
When it comes to building a chatbot customers will enjoy using, Mr. Banerjee says it’s important to start small.
“If a client is trying to build something for a customer, we would suggest first building something internally for your employees – for a controlled set of users – so you are managing the confidence of doing this.”
In 2023, for example, Accenture developed and deployed its Knowledge Assist application, a custom-built AI tool that helps staff find information internally, before rolling the technology out to customers. That experimentation phase, Mr. Banerjee explains, was vital for testing the technology and for building trust with future customers – a key consideration, he adds, for any organization looking to onboard a chatbot.
“Trust is so important because one specific instance can lead to a breach of trust, and any organizations trying to build these capabilities need to protect themselves,” he says. “So the question is ‘how do you do that with responsible design in mind as you develop these solutions, rather than as an afterthought?’”
Facilitating trust between human clients and automated customer-service agents can be a nuanced endeavour. According to researchers, adding humanlike features to chatbots – such as names, faces or even basic animations – sometimes improves trust, but in other cases diminishes it.
“What we found is that the [chatbots] with more social capabilities, which use humanlike social cues, are more successful if they’re paired with some kind of humanlike image, as opposed to a robot image,” says Dr. Shamel Addas, an associate professor and distinguished research fellow of digital technology at Queen’s University’s Smith School of Business. “Whereas more basic chatbots are more successful when they use a robot image. We call that the fit between appearance and social capability.”
Peripheral cues, messages and features, according to Dr. Addas’ research, signal to users what kind of chatbot experience to expect, with more humanlike features setting expectations of more humanlike conversations.
When the chatbot fails to meet those expectations, customers often feel there’s been a breach of trust, making chatbots less effective, regardless of their capabilities. The same principle applies to the conversation itself, Dr. Addas says.
“If you say ‘you’re about to interact with an AI agent, and these agents aren’t perfect; if there’s any error, please repeat the question, or if you want to hand it over to a human, please say so’ – if you set the right expectation – it’s less likely to trigger a sense of mistrust and frustration in the customer.”
And that, in turn, results in a chatbot experience your customers don’t hate. They might even love it.