Opinion

A company's use of AI needs to align with its vision, mission and values and be based on a set of transparent and ethical principles and policies. DADO RUVIC/Reuters

Ian Robertson is the chief executive officer of strategic shareholder advisory and governance firm Kingsdale Advisors Inc.

Artificial intelligence is bound to be the central engine of the fourth industrial revolution, and it is on the verge of playing a crucial role in the management and oversight of companies.

Some may be surprised to learn that “artificial governance intelligence” is already at work in boardrooms and corporate decision-making processes: due diligence for mergers and acquisitions, investor profiling, auditing of annual reports, validation of new business opportunities, and the analysis and optimization of procurement, sales, marketing and other corporate functions.

Most businesses already use some form of AI, whether algorithms or platforms such as ChatGPT. International organizations, governments, businesses, and the scientific and legal communities are racing to establish new regulations, laws, policies, ethical codes and privacy requirements, because AI continues to evolve rapidly while current legal and regulatory frameworks lag behind and risk becoming obsolete.

Against this backdrop, it is important that shareholders and boards start considering these issues, too, especially as they relate to augmenting or supplanting the role of corporate directors. Is your company ready for the rise of the robo-director?

In 2014, Hong Kong-based venture capital group Deep Knowledge Ventures appointed an algorithm named VITAL (Validating Investment Tool for Advancing Life Sciences) to its board of directors. VITAL was given the same right as the firm’s human directors to vote on whether to invest in a given company. VITAL has since been widely acknowledged as the world’s first robo-director, and other companies, such as software provider Tietoevry and Salesforce, have followed suit in employing AI in the boardroom.

The World Economic Forum has reported that by 2026, corporate governance will have undergone robotization on a massive scale. Growing computational power, breakthroughs in AI technology and advancing digitalization will inevitably lead to broader support for corporate directors who use AI in their roles, if not their full replacement by autonomous systems. The result: human directors sharing their decision-making powers with robo-directors will become the new normal.

As the legal and regulatory landscape races to keep pace, companies need to anticipate the compliance obligations that will govern AI systems, and boards will need to adjust to new corporate laws. In Canada, several coming federal and provincial privacy law reforms will affect the use of AI in business operations. The proposed federal Bill C-27, if passed, would implement Canada’s first artificial intelligence legislation, the Artificial Intelligence and Data Act (AIDA), which could come into effect in 2025. Current corporate law is not adapted to artificial governance intelligence and will have to grapple with new and complex legal questions as the use of AI to support, or replace, human directors increases.

There are some key questions directors and shareholders alike should be considering: How do current legal frameworks apply to robo-directors? How will fiduciary duties be executed, and who will be responsible for them? Financial compensation and pay-for-performance will be of no use to robo-directors, so who behind the scenes is being compensated, and held accountable, for programming and controlling them? What are the needs and limitations of a robo-director, and which roles of a traditional director should be ring-fenced from them?

The use of AI presents both opportunities and potential threats, and each requires strong risk and governance frameworks. The board is legally and ethically accountable for the use of AI within the company and for its impact on employees, customers and shareholders, including through third-party products that may embed AI technologies.

The use of AI needs to align with the company’s vision, mission and values; be based on a set of safe, transparent and ethical principles and policies; and be rigorously monitored to ensure compliance with data privacy rules. Codes of conduct and ethics need to be updated to include an AI governance framework and to guard against bias in data sets and decision-making. Companies should consider appointing an executive who is responsible for AI governance and can provide strategic insights to the board.
