AI pioneer Geoffrey Hinton stands backstage before speaking at the Collision Conference in Toronto on June 19. (Chris Young/The Canadian Press)

What a bittersweet moment this must be for Geoffrey Hinton. On the one hand, he has just received one of the most prestigious awards on the planet in recognition of his life’s work on artificial intelligence. On the other, he has spent the past year warning about AI’s inherent potential for existential catastrophe.

Mr. Hinton has expressed concerns that AI could soon outpace human intelligence. “Somewhere between five and 20 years,” he told The Globe last spring, “there’s a 50-50 chance AI will get smarter than us. When it gets smarter than us, I don’t know what the probability is that it will take over, but it seems to me quite likely.”

Artificial intelligence is a technology, a tool. And like any tool, its use is decided by the person wielding it. Humans can still decide how to use AI – and must, before that decision slips from our grasp.

Consider how far computing technology has come. Computers are built on a deceptively simple premise articulated by the 19th-century mathematician George Boole. The algebraic system of binary logic he devised permits only true or false statements, combined with various operators to arrive at logically sound conclusions. In describing an elephant, for example: if(colour=grey) and if(has trunk=yes), then it passes the elephant test. But a grey sedan passes that test too, since it is also grey and also has a trunk. Every distinguishing piece of identifying data would need to be coded into such a program for it to properly recognize an elephant and not a car.
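To make that concrete, here is a minimal sketch of such a hard-coded rule, written in Python for this piece (the passes_elephant_test function is invented for illustration, not drawn from the article), showing why the rule cannot tell an elephant from a grey sedan:

```python
# Hypothetical rule-based "elephant test": hard-coded true/false checks
# joined by Boolean operators, in the spirit of Boole's binary logic.

def passes_elephant_test(subject: dict) -> bool:
    # if (colour = grey) and if (has trunk = yes) ...
    return subject["colour"] == "grey" and subject["has_trunk"]

elephant = {"colour": "grey", "has_trunk": True}
grey_sedan = {"colour": "grey", "has_trunk": True}  # a car's luggage "trunk" counts, too

print(passes_elephant_test(elephant))    # True
print(passes_elephant_test(grey_sedan))  # True -- the rule cannot tell them apart
```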

But now, because of Mr. Hinton’s work on neural networks, computer programs can mimic the basic function of the human brain, forming multiple pathways simultaneously between points of information and discerning which of them are relevant. These programs can look at an image of that same elephant and say yes, it is grey and yes, it has a trunk, but it also has floppy ears and is conspicuously missing wheels.
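By way of contrast, here is a toy illustration, again invented for this piece rather than taken from Mr. Hinton’s actual networks, of a single artificial “neuron” that weighs several features at once; in a real neural network, the weights below would be learned from data rather than set by hand:

```python
# Toy single-neuron classifier: a weighted sum over several features,
# the basic operation inside a neural network. Feature names and weights
# here are made up for the example.

FEATURES = ["is_grey", "has_trunk", "has_floppy_ears", "has_wheels"]
WEIGHTS  = [0.5, 1.0, 2.0, -3.0]   # wheels count strongly against "elephant"
BIAS     = -1.5

def elephant_score(observation: dict) -> float:
    # Weighted sum of the observed features, plus a bias term.
    return sum(w * observation[f] for f, w in zip(FEATURES, WEIGHTS)) + BIAS

elephant   = {"is_grey": 1, "has_trunk": 1, "has_floppy_ears": 1, "has_wheels": 0}
grey_sedan = {"is_grey": 1, "has_trunk": 1, "has_floppy_ears": 0, "has_wheels": 1}

print(elephant_score(elephant) > 0)    # True  -- classified as an elephant
print(elephant_score(grey_sedan) > 0)  # False -- the extra evidence rules it out
```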

And with such a remarkable step forward, a host of new possibilities has emerged.

On the one (six-fingered) hand is the sheer entertainment that generative AI provides. Cheese sliding off your pizza? No problem: just use glue. Ever wondered what Will Smith looks like eating spaghetti? Generative AI’s got that covered, too.

More seriously, there have already been beneficial developments in diagnostic medicine. A Toronto hospital found that using a machine-learning-based early-warning system resulted in a 26 per cent reduction in non-palliative deaths. AI can identify strokes nearly 40 minutes faster than human clinicians can, and can detect signs of cancer missed by radiologists.

On the other hand, AI has a real capacity to do harm. Setting aside the theoretical risk of machine intelligence overthrowing humanity, AI is already generating sophisticated and convincing phishing schemes, has led to false arrests and is flooding the internet with a deluge of fake images and videos. A University of Waterloo study found that only 61 per cent of us can distinguish fake images from real ones – and generative AI is becoming more convincing by the day.

As jobs are both augmented and replaced by artificial intelligence, we also face the prospect of mass layoffs and a fundamental restructuring of work. More than 70 per cent of companies surveyed by McKinsey have adopted AI. Customer-service jobs are being replaced en masse. And AI looms over the creative industries, from writers and actors to video-game creators.

As Mr. Hinton asks, “The real question is, can we keep it safe?”

Consider how George Boole’s simple statements of true/false logic provided the framework for all of modernity’s computer technology. Everything we have seen computers do rests on that binary foundation. Four generations later, Boole’s great-great-grandson, Geoffrey Hinton, provided us with a dramatically more intricate framework. With this new cognitive capacity, and at the current rate of innovation, what will change in four more generations?

The line from the vacuum-tube calculators of the 1950s to today’s smartphone is neither straight nor entirely benign. Without solid guardrails in place, artificial intelligence holds the potential for an easier and healthier life on the one hand, and an unjust and uncertain one on the other.

Mr. Hinton advises us to proceed cautiously and contemplate what we are doing with this technology while we still have time. The choice is ours – for now.
