
There’s a bit of a Wild West flavour to artificial intelligence these days, as companies seek to beat others in finding gold. Last year seemed to mark a turning point as ChatGPT became one of the fastest-growing web platforms of all time, while Bing Chat, Bard and other generative AI competitors joined in and companies began developing in-house versions.

From interviews with top executives, a team of Boston Consulting Group consultants uncovered these key factors for finding that gold:

  • Upskill systematically: Leading companies ensure that employees know how to use AI most effectively. “Most leaders agree that GenAI will create new roles in their companies and expect that, on average, almost half of their work force will need to be re-skilled in GenAI over the next three years,” the consultants write.
  • Control costs: Some companies are gaining significant cost savings of more than 10 per cent through using AI. But in the frenzy, too few companies are giving enough weight to costs down the road, which will grow as companies launch larger customized projects. “Executives must proactively manage usage costs if they don’t want to deal with an expensive surprise later on,” the consultants warn.
  • Build strategic relationships: The technology, and the solutions built on it, are moving quickly. It’s important to build partnerships with multiple companies, including software providers and GenAI startups, to gain access to the best technology.
  • Be responsible: Wherever they are in their AI journey, organizations must be proactive in addressing the responsible handling of AI.

Andrew Atkins and Peter Mulford, of the BTS management consultancy, lead their list of questions organizational leaders must address with: What do ethics in AI mean?

“As AI’s impact deepens, leadership teams will need to make ethical judgment calls. According to a study of business leaders, only 29 per cent reported feeling very confident that machine learning tools are being handled ethically,” they write in Chief Executive. “As AI’s prediction skills increase, the value of human prediction decreases. And yet, as this happens, the value of human judgment increases; we need humans to play arbiter and attend to ethics. AI may be able to tell you what is likely to occur, but it cannot judge how you should feel about that occurrence.”

They note that AI is evolving too rapidly for business leaders to grasp it clearly before it shifts. Leaders, therefore, must understand the exponential nature of AI’s growth to make sensible decisions. They also have to ponder what all this will mean for organizational culture. One cultural clash they cite will be between machine learning tools, driven by math and coding, and human judgments, which are often made on faith or morality and are not necessarily logically consistent.

The consultants stress that every job involving a cognitive task can be replaced or augmented by AI. But leaders have to pay attention to what work must remain exclusively in human hands. It may come down to the human ability to ‘prefer’ after gathering data and coming to the point of decision-making. “Humans are uniquely able to prefer one outcome over another. AI might be able to advise you, but AI will never be able to ‘prefer’ for you. Humans don’t know (yet) how unique and special their ability to prefer is,” the two consultants argue.

Finally, the big question leaders must ask: Do we have an AI approval process that can and will reject AI proposals when they violate our company values? That will mean the decision on AI use should be made by a diverse team of senior executives drawn from different functions in the organization, including HR and legal, not just financial and technology officials.

If those are new issues, organizations still have to grapple with the age-old question of how to get projects from idea to execution. Too often, machine learning projects stall, according to Eric Siegel, author of The AI Playbook. “One of the major issues is that companies tend to focus more on the technology than how it should deploy. This is like being more excited about the development of a rocket than its launch,” he writes in Harvard Business Review.

A deployment goal is therefore critical: Define how machine learning will affect operations in order to improve them. For example, at UPS a goal was to improve the efficiency of package delivery by using machine learning to predict the best delivery destinations. You want to put the business objectives first, rather than the technology. Indeed, when you write down the objective, he urges you to begin each sentence with the word “we” rather than “AI,” which highlights that this is about humans and change management.

Many machine learning projects “don’t recognize that the notion of change management applies, but model deployment means changing the way the business operates and that change must be proactively managed like any other,” he writes in the book.

There’s gold in AI, but we need to manage it and control it, not be dazzled by the speed and breadth of its calculations and expect magic to happen in our organizations.

Cannonballs

  • Surveys of United Kingdom employees in 233 organizations found that a range of mental well-being interventions, including mindfulness, resilience and stress-management training, time management and well-being apps, had no impact on well-being. Only offering volunteering opportunities had a positive effect.
  • Executive coach Dan Rockwell recommends this powerful phrase in one-on-ones: “When I see you at your best, I see you …” and then naming the behaviour. To deal with poor behaviour, try: “You aren’t helping yourself when….” If the situation allows, tell someone, “I want you beside me during this challenge.”
  • If a work assignment takes a staff member away from their spouse, partner or family for a period of time, Toronto consultant Donald Cooper urges you to make it up to them by giving a gift certificate for a fancy dinner, weekend away or some other activity that will make you and your staff member a “hero” with their family.

Harvey Schachter is a Kingston-based writer specializing in management issues. He, along with Sheelagh Whittaker, former CEO of both EDS Canada and Cancom, are the authors of When Harvey Didn’t Meet Sheelagh: Emails on Leadership.
