
A handout image generated with the AI tool Midjourney and provided by Jordan Rhone, highlighting the resilience of conspiracy theories such as the moon landing hoax. Jordan Rhone/The New York Times News Service

Last June, the federal government put forward a bill attempting to regulate artificial intelligence. This was months before the debut of ChatGPT, before technological developments so great that some AI researchers called for a temporary halt, and before AI blessed us with the Pope in a puffy coat. In other words, an eternity has passed.

The Artificial Intelligence and Data Act (AIDA) is a component of Bill C-27, which also deals with consumer privacy and data protection. Some experts have criticized AIDA for its vagueness and for deferring crucial details to be sorted out later. In the meantime, the capabilities of AI are rapidly advancing and the technology is making its way into countless products and services. These developments only increase the urgency of effective regulation, given the technology’s potential for societal harms such as algorithmic discrimination and job losses.

But there are still more questions than answers regarding AIDA, including the biggest of all: What exactly will be regulated?

AI is a particularly tricky thing to govern. Not only does it move fast, it can have emergent abilities, meaning a system can perform tasks that even its developers could not predict. AIDA attempts to deal with the ever-changing nature of AI by establishing a framework for responsible development and deployment. The act concerns itself with the most powerful applications of AI, prohibits malicious uses of the technology, establishes an oversight body of sorts and lays out financial penalties for businesses and individuals that transgress the law.

The precise details, however, will be determined later, through regulations that have yet to be written. The government anticipates AIDA will be in force no sooner than 2025.

“In principle, that makes sense. It will make for a much more agile framework,” said Philip Dawson, head of policy at Armilla AI, which provides a quality assurance platform for AI systems. Mr. Dawson, who also served as a senior adviser for AI and data policy at Innovation, Science and Economic Development Canada (ISED) until last May, said such flexibility will help avoid a situation similar to the one unfolding in the European Union. AI legislation introduced there in 2021, which has more specifics than AIDA, has been bogged down by proposed amendments. “From a resource standpoint, it’s been an enormous undertaking in Europe. I’m not sure we’re set up to do that in Canada,” he said.

But regulations aren’t necessarily crafted quickly either, said Teresa Scassa, a law professor at the University of Ottawa. Sometimes a government even passes a bill into law that anticipates new regulations but never gets around to writing them.


At this point, it’s not even clear what will be regulated under AIDA. The act refers to “high-impact” AI systems, but it neither defines the term nor lays out specific requirements for developers. It’s also unclear what kind of governance, if any, will apply to AI that doesn’t fit the eventual definition of high-impact.

In an AIDA companion document published in March, ISED outlined what it considers to be the “key factors” in defining high-impact AI, including the severity of potential harms and whether the risks are regulated under another law, but much was left unanswered. “It’s kind of a parliamentary blank cheque if it’s passed because it leaves it to the regulation-making process to determine what the law is actually about,” said Prof. Scassa.

The government introduced AIDA last year with no public consultation, catching some experts by surprise. Ottawa is now allotting six months for consultations on regulations, plus three months after a draft set is written, but Prof. Scassa said the public and parliamentarians are generally less engaged at that stage. “My preference would be to go back to the drawing board, in large part because there hasn’t been proper consultation,” she said. “It slows things down, but it can be very positive.”

Conservative MP Michelle Rempel Garner has concerns, too. “Leaving Canada’s thought process on artificial intelligence to a closed or bureaucratic process, given the urgency of the issue, probably isn’t the right approach,” she said. Ms. Rempel Garner has been following developments in AI for some time, even co-authoring a Substack post in February suggesting that governments consider pausing the public release of potentially harmful AI. Some of the implications keep her up at night, she said, including the potential for AI to manipulate humans and the fact that, in some cases, the technology is deployed without an adequate understanding of how it works.

“We will likely need a global system of governance, and given the state of geopolitics right now, being able to achieve that keeps me up at night, too,” she said.

Parliamentarians will also have to bring themselves up to speed with developments in AI. When Ms. Rempel Garner talked about ChatGPT in the House of Commons in December and raised concerns about how generative AI could affect employment, a number of MPs approached her afterward to learn more.

She said she would like to see a parliamentary committee formed immediately to study the issue. “We should call to the table some of the leaders and developers of this stuff to get a better understanding,” she said, adding that she’s had discussions with MPs from across the aisle who share her concerns.

“Recommendations should be informed by industry and academics who have a really good understanding, and that can be done in a very short period of time,” Ms. Rempel Garner said.

In an e-mail, ISED spokesperson Justin Simard said the government is still consulting with industry and academics to ensure it’s ready to begin the regulatory process, but did not provide specifics.


Bill C-27 was debated twice in Parliament last month, but MPs used much of the time to discuss the privacy aspects of the legislation rather than AI. Some raised the question of whether AIDA should be carved out of Bill C-27 entirely and made a standalone piece of legislation.

Countries around the world are grappling with AI regulation. Under the EU’s Artificial Intelligence Act, specific applications of the technology will be subject to stringent requirements related to security and reliability – or banned entirely. The British government put out a white paper in March outlining its approach, establishing five general principles that existing industry regulators will be responsible for implementing.

In Canada, AIDA will establish a commissioner responsible for oversight and enforcement who sits within ISED – which, incidentally, is the ministry charged with promoting the AI sector. That raises questions about how effective oversight will be. The government has justified the approach by saying that because AI moves so quickly, it will be important for the ministry and the commissioner to work closely together during the first few years.

Gillian Hadfield, a professor at the University of Toronto who studies AI, said there is some soundness to the government’s position. “If you have an independent commissioner, they’re only looking at the harms,” she said, “not balancing that across the benefits” of the technology.

Still, she sees other deficiencies in the act. AIDA considers only harms to individuals, such as an algorithm that discriminates owing to biased data, not broader societal damage. Generative text applications such as ChatGPT can make up facts and could be used to flood the information ecosystem with lies, eroding political discourse. “That’s something we should have a regulatory structure in place to address, so that we have less unreliable information in the world,” Prof. Hadfield said.

(Mr. Simard said the government is open to hearing from stakeholders about how to best address systemic harms.)

There are a number of regulatory steps the government could take now, Prof. Hadfield said, such as licensing the language models that underlie generative AI and forcing companies to disclose key details about how the models function. The U.S. is seeking input on similar measures. On Tuesday, the Commerce Department issued a public request for comment on AI accountability and will explore whether potentially risky models should be certified before they’re released.

“We can’t just spend another five years or even two years” debating the issue, Prof. Hadfield said. “We need to recognize something really transformative is in the works, and we don’t have the regulatory levers in place.”

