Last year’s version of James Cameron, the creator of the Terminator franchise, raised alarms over the dangers of artificial intelligence: “I warned you guys in 1984, and you didn’t listen.”
Yesterday’s version of James Cameron might have travelled back in time to stop himself from saying that: Announcing that he was joining the board of Stability AI, a fast-growing startup that makes a powerful image-generation tool, Cameron 2.0 said the company will “unlock new ways for artists to tell stories in ways we could have never imagined.”
Maybe both James Camerons would get along – and maybe, more than anyone, they would know when to pull the plug if things get out of hand. To one of the world’s most prominent thinkers in this space, however, the fact that these decisions are being made by private industry represents an existential threat to humanity. No biggie! More on that below, but first:
In the news
Under the spotlight: Canadian Imperial Bank of Commerce reminded us TD isn’t the only bank facing scrutiny from U.S. regulators.
Nearing more rate cuts: (Definitely, maybe.) Bank of Canada Governor Tiff Macklem says more trims are coming, stops short of saying when, and sizes up his fight against inflation: “It has been a long journey.”
Leaving less money down: The federal government’s new mortgage rules will result in a significant decrease in required down payments.
Happening today
- U.S. reports new home sales for August.
- France reported improved consumer confidence for September.
- Sweden’s Riksbank joined the rate-cut parade. Välkomna!
In focus
What happens when AI gets smarter than us?
Yoshua Bengio is one of the world’s most influential computer scientists. In 2018, he and two colleagues were awarded the Turing Award – a distinction often referred to as the Nobel Prize of computer science. In 2022, he was the most-cited computer scientist in the world.
For this week’s episode of Machines Like Us – The Globe and Mail’s podcast about AI and society – host Taylor Owen sat down with Bengio, who’s now a professor at the University of Montreal and founder of Mila, the Quebec Artificial Intelligence Institute.
Owen and Bengio spoke about the risks of unchecked advancements in AI and a future with super-intelligent machines. Here’s a part of their conversation:
Owen: I’m wondering what you make of the commercialization of AI in this moment we’re now in.
Bengio: There’s a good side to this commercialization: It can bring benefits of AI to more users and applications. But I find it problematic that all of the advances are happening in industries that have a profit motive, without the right guardrails. And the people in academia or non-profits typically don’t have the means to push the technology toward more beneficial applications or less dangerous ones.
Owen: Why should we be scared of human-level intelligence?
Bengio: Well, just imagine that we have entities that are smarter than us. For some reason they have their own goals, which may not be aligned with our goals. It would be like having a new species on this planet that is smarter than us. In evolution, species that were smarter, like us, have been dominating and often exterminating less smart species. I’m not saying it is going to happen, but it’s an easy thing to think about.
Owen: Are you worried about some of these runaway risks from governments, too?
Bengio: Not now. Currently, governments don’t have these capabilities. It’s all in the hands of a few private companies. And there is a sense in which it could be safer because governments tend to be fairly conservative in their actions. But it could also be more dangerous because, of course, a government itself could abuse that power.
Owen: Surely China is a concern here, then, given its government’s proximity to industrial development.
Bengio: All authoritarian governments would clearly want to use that for maintaining themselves in power, or even gaining a kind of worldwide dominance. So there are very complicated geopolitical questions: How do we avoid these scenarios? But it’s weird, because it means we can’t really slow down or stop, now that we’re concerned that some states with bad intentions could exploit the technology. So we have to both co-ordinate with China, maybe sign treaties and be able to verify them, and make sure our companies don’t do something really stupid.
Owen: Are companies not pricing risk effectively? Aren’t there scenarios where a catastrophic failure or harm from AI could kill these companies, too? I mean, the end of humanity isn’t good for corporate interests.
Bengio: Yes. Let me offer an example people are more familiar with: climate change. Destruction of the climate isn’t good for the CEOs of fossil fuel companies either. But there’s so much money to be made in the interim. And why would a particular company be the one doing the research to avoid these catastrophes? If they do that, then they’re going to lose out to other companies. In pharma, the cost of safety, like all the tests they do to make sure the drugs are not going to harm people, is way above the cost of actually discovering the drugs in the first place.
Owen: We have a notion of guardrails, that we as humans are wise enough to put bounds on how technologies can be used. But you’re saying that’s not the case here, that we need a totally different strategy because guardrails assume we know where this is headed.
Bengio: If we force enough transparency on the companies, so that they have to reveal what they’re doing to protect the public and the results of those tests, then they will have a strong incentive to behave well. The other weapon is liability, which is why we need an adaptive regulatory framework: if the companies don’t use reasonable care, and something bad happens that’s caused by their system, then they could be sued. And of course, they don’t want to be sued, especially for billions of dollars.
Owen: Should people be scared of this technology?
Bengio: Yes. I am scared. Not of the current AI technology, but of the one that will exist in some unknown number of years, the one that is beyond our own intelligence.
This interview has been edited for length and clarity.
📬 Are you seeing AI being used in your workplace? I’d like to hear how that’s going. Also: The Terminator or Terminator 2? Email me: cws@globeandmail.com
Charted
Major banks expect gold to extend its record-breaking price rally into 2025, driven by a revival of large inflows into exchange-traded funds and expectations of additional interest rate cuts from major central banks around the world, including the U.S. Federal Reserve.
“Strong physical demand from China and central banks supported gold prices over the past two years, but investor flow, and retail-focused ETF builds in particular, continue to hold the key to a further sustained rally over the upcoming Fed cutting cycle,” analysts at J.P. Morgan said in a note on Monday.
Morning markets
Global markets drifted mostly lower as the rally fuelled by China’s stimulus measures lost steam, and investors again set their sights on economic data and future rate moves by the U.S. Federal Reserve. Wall Street futures were mixed and TSX futures pointed lower.
Overseas, the pan-European STOXX 600 was little changed in morning trading. Britain’s FTSE 100 rose 0.36 per cent, Germany’s DAX declined 0.46 per cent and France’s CAC 40 gave back 0.36 per cent.
In Asia, Japan’s Nikkei closed 0.19 per cent lower, while Hong Kong’s Hang Seng gained 0.68 per cent.
The Canadian dollar traded at 74.43 U.S. cents.