One of the themes that preoccupied Victor Hugo in Notre-Dame de Paris was the enormous destructive power of the printing press. In an early scene, the villainous Archdeacon of Notre Dame, Claude Frollo, broods in his study over the coming of the printed book. “This,” he says to a colleague, of the book on his desk, “will kill that,” gesturing at the cathedral outside his window. “The book will kill the building.”

Hugo himself takes up the subject at some length. Before the invention of printing, he writes, architecture had been the book of its time, the supreme expression of the human mind. After Gutenberg, “human thought discovers a mode of perpetuating itself, not only more durable and more resisting than architecture, but still more simple and easy.” The building had given way before the power of the printed word, and other structures with it: religion, authority, hierarchy of all kinds. “The invention of printing,” he writes, “is the greatest event in history. It is the mother of revolution.” But it is clear that he does not see it as an unalloyed blessing.

And rightly so. Books are generally marvellous things. They can also be dreadful things, such as Mein Kampf or the works of the Marquis de Sade. They stimulate thought, but they can also congeal it. (Plato objected to the written word itself, on the grounds that if people could simply look things up they would no longer have any incentive to commit anything to memory.)

The same can be said of most technological advances, especially those that expand our capacity to know, and to tell others what we have learned. The telegraph, the telephone, radio, television, computers, the internet: each is capable of both great good and great harm, because each is an extension of us, and because that is our nature.

On balance, however, our intuition tells us that they probably represent net gains for humanity – though that is no more than an intuition. We believe, more or less as an article of faith, that human intelligence is a force for good, and that anything which magnifies our capacity to think and talk must therefore be as well.


That faith has begun to be tested with the advent of social media – when the internet became a means, not merely for publishing or broadcasting information in the usual way, from a few more or less authoritative sources to the many, but from the many to the many: unfiltered, unedited, instantaneously, anonymously, at global scale and at zero cost. Add to that the smartphone as two-way broadcaster-receiver, always connected and always at hand, and the algorithms of addiction that are the basis of the social-media business model, and you have what is now rightly regarded as a recipe for all manner of social ills: disinformation, extremism, polarization, isolation, anxiety, hate and so on.

Ten years ago, if you had written that last sentence, you would have been the object of a fusillade of “old man yells at cloud” memes. But by now I think we are all a little more aware that not all technological progress is necessarily an improvement; that just because a technology can be put to both good and bad uses does not mean the good must always outweigh the bad; and that “they said the same thing about the printing press” is not really much of an argument. Yes, some fears of past technological advances proved exaggerated or illusory. That does not mean all fears about current technology must be.

Besides, a lot of those early fears turned out to be true. They said television would rot people’s brains? Television did rot people’s brains.

Still, at least with each of these previous technological breakthroughs the capacity that was increased was our own. It was our ability, as human beings, to know and to think and to say and to do that was being expanded, for good or ill, and our ability to shape our world that was thus enhanced.

When it comes to artificial intelligence, however, we are confronted with an altogether different challenge. It is not our intelligence that is being expanded, but that of our creation, the computer – to the point that some have begun to fear it will surpass or even replace us. And it is not the usual enemies of progress who are raising the alarm. Rather, it is some of the biggest names and deepest thinkers in the AI community itself – the people who have been most responsible for bringing it into being. My God, some of them have begun to exclaim: what have we done?

That unease had been bubbling away for some time under the surface of public consciousness, emerging in the occasional polemic from this or that researcher or futurist. But so long as AI’s abilities remained relatively limited – “Hey Siri, play that song again” – they did not seem worth taking seriously.

That changed in the past few years, as the long-promised potential of machine learning at last began to be realized, by training models on previously unimaginable quantities of data. The result is the “large language models” (LLMs) with which we have lately become familiar, drawing on virtually the entire content of the internet. Give the machines enough data to train on, and they learn to recognize patterns in it, and to generate their own content based on what they have learned.

With the release, in recent months, of DALL-E (images) and ChatGPT (text), it became clear that generative AI had reached an unsettling inflection point. It was easy, a year ago, to dismiss that Google engineer who claimed the LLM he had been working on (with?) had become “sentient.” But the recent release of an open letter from more than 1,000 technology leaders and researchers calling for a six-month “pause” in the development of more powerful AI systems, on the grounds that they pose “profound risks to society and humanity,” caught public attention. With the resignation this week of Geoffrey Hinton, perhaps the pre-eminent thinker in the field, from his post at Google, the alarm bells rang even louder. He told The New York Times he had begun to regret his life’s work.

In retrospect, some of the earlier fears about AI – the kinds of things it was common to read even a few months ago – seem almost quaint. AI will mean the end of work? Probably not, any more than other labour-saving technologies have done historically.

More recent concerns, such as the difficulties in telling whether images and videos are real or AI-generated “deep fakes,” or the potential for students to use AI to cheat on term papers, seem like transitional issues. They’ll cause trouble for a time, but eventually society will adapt. And for every harmful consequence of this kind, it is easy to think of numerous, much larger potential benefits, from cancer research to resource management to personalized education and beyond.

But unchecked, exponential growth in computer intelligence is in an entirely new and different category. We are on the way to building something much, much smarter than ourselves, not in the narrow sense of a machine that can follow our instructions, rapidly but moronically, but of a machine that does not need us – that can improvise, learn, adapt, even write new code for itself: a recursive, recombinant, self-contained loop of ever-expanding, ever-accelerating capacity.

This is not something we can necessarily predict, or control, or even – according to some of the leading experts in the field – understand. The algorithms are, they tell us, black boxes. And within those black boxes, strange things are happening. Even the relatively primitive models we have today are doing unexpected things, developing new capacities no one saw coming.

Still, if it were just a matter of what has been created to date, there would be little to fear. ChatGPT has been aptly described as “autocorrect on steroids.” Its seemingly magical ability to read and write text is based on nothing more than probabilities: a prediction, made over and over, of which word is most likely to come next.
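
For readers curious about what that means in practice, here is a deliberately toy sketch in Python. It is nothing like ChatGPT's actual neural network (it simply counts which word tends to follow which in a tiny sample of text), but the underlying principle, generating language by repeatedly picking a probable next word, is the same.

```python
# Toy next-word predictor: count word pairs, then generate text by sampling
# a likely next word. Real LLMs use vast neural networks, not simple counts,
# but they too are trained to predict the next token from what came before.
import random
from collections import Counter, defaultdict

corpus = "the book will kill the building the book will outlast the building".split()

# For each word, count the words that follow it in the sample text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word: str) -> str:
    counts = following[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short run of text, one probable word at a time.
word = "the"
output = [word]
for _ in range(5):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```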

But you only have to extrapolate current trends a little into the future to see where this can lead. A few months ago, few people had heard of ChatGPT or of OpenAI, its developer. Since then it has already gone through several iterations and spawned third-party applications by the hundreds every week, as developers think up new uses for it.

One, known as AutoGPT, essentially takes the human intervenor out of the loop. Rather than the question-and-answer format of conventional chat, the app follows through on the implications of an initial question on its own, setting itself a series of tasks and completing them without further human input.
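
To make the contrast with ordinary chat concrete, the sketch below shows the general shape of such an agent loop. It is illustrative only: the function names are placeholders of my own, not AutoGPT's actual interfaces, and a real agent would be calling a live language model and real tools (web search, file access, code execution) at each step.

```python
# Sketch of an autonomous agent loop: given one initial goal, the program keeps
# asking the model what to do next and acting on it, with no further human input.
from collections import deque

def ask_model(prompt: str) -> str:
    """Placeholder for a call to a large language model."""
    return f"(model's answer to: {prompt})"

def run_tool(instruction: str) -> str:
    """Placeholder for carrying out a task, e.g. a search or a file write."""
    return f"(result of: {instruction})"

def autonomous_agent(goal: str, max_steps: int = 5) -> None:
    tasks = deque([f"Plan how to achieve: {goal}"])
    for step in range(1, max_steps + 1):
        task = tasks.popleft()
        result = run_tool(ask_model(task))
        print(f"step {step}: {task}")
        # The model reviews its own result and queues the next task itself.
        tasks.append(ask_model(f"Given {result}, what is the next task for: {goal}?"))

autonomous_agent("research and summarize this week's AI safety news")
```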

And this is just the start. We are in the very infancy of this technology; the learning curve the machines are on is potentially exponential. Potentially: it may not turn out to be. It is not inevitable that we will arrive at the worst-case scenario, soon or ever. Self-driving cars were supposed to have replaced every cab and truck driver by now, but the challenges of replicating this seemingly simple human ability have proved to be far greater than was imagined.

But the worst case is catastrophic, even existential. And the chances of it happening – again, according to some of the people who know this stuff best – are not slight. In one recent survey of AI researchers, half of those responding put the chances of human extinction at 10 per cent or more.

We need not divert ourselves with the question of whether the machines are or are likely to become sentient. We don’t need to know whether they are thinking, in the human sense of the term. It is enough that they behave as if they were. They don’t have to be “superintelligent.” They only have to be smarter than us. And they don’t need to wish us ill to do us harm. It is enough that the algorithm they are following optimizes for something other than us, or our needs.

What do we do about it? What can we do about it? The temptation is to throw up our hands. The genie cannot be returned to the bottle: the technology is too widespread, the interests too entrenched. Big Tech has launched itself on a ruinous “arms race” and will not be pushed off it.

But there are some reasons to think it can still be contained. The kind of advanced AI we are talking about still requires access to huge amounts of data, enormous computing power and world-leading programming talent. All of that costs a lot of money. This is not being developed in a garage, but inside some of the world’s biggest tech companies (Microsoft, a partner and major investor in OpenAI; Google’s DeepMind subsidiary) and at leading universities. That gives regulators some leverage.

Writing in the Financial Times, the AI venture capitalist Ian Hogarth has suggested some intriguing analogies. The world community has succeeded in closely restricting, or even halting, at least for a time, research on lethal viruses. Drug research is likewise highly regulated. In rare cases research has been reserved entirely to the public sector: “There is a real-world precedent for removing the profit motive from potentially dangerous research and putting it in the hands of an intergovernmental organization. This is how CERN, which operates the largest particle physics laboratory in the world, has worked for almost 70 years.”

That holds risks of its own. It’s entirely possible regulators will indulge in overkill, needlessly impeding progress on AI and delaying all of its potential benefits. But given the non-trivial potential for species-ending catastrophe, which should be our greatest fear: that we do too much to regulate AI too soon, or too little, too late?
