In recent weeks, artificial intelligence has rarely been out of the headlines. Most of that global media attention focused on OpenAI, the company behind the language-model-based bot, ChatGPT.
The company had been working on an advanced algorithm called Q*, and some staff at OpenAI believed its safety and ethical implications were not being taken seriously enough. The controversy led to the sudden dismissal of the company’s CEO, Sam Altman, who was reinstated four days later.
The secretive nature of this corporate controversy has raised two important questions. Can the gatekeepers of AI be trusted with such a powerful technological tool? And, more importantly, how is AI going to alter humanity’s future?
Today, AI can beat a human grandmaster at chess, read radiology images better than human radiologists and is on the cusp of autonomously driving cars. ChatGPT, meanwhile, can compose poetry, translate between languages at will and even write code.
Still, machine learning has many limitations. Max Bennett is still trying to figure them out.
The young, successful American entrepreneur is co-founder and CEO of the AI company Alby. Previously, he was a co-founder of Bluecore, a company valued at more than US$1-billion that uses AI to help leading global brands with their marketing strategies.
“Why is it that AI can crush any human in a game of chess but can’t load a dishwasher better than a six-year-old?” Bennett asks in the opening pages of A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains.
Even the most sophisticated AI programmers cannot answer this question, mainly because they don’t yet fully grasp what they are trying to recreate inside an AI system: human intelligence. The human brain is the best place to look, but with 86 billion neurons and more than 100 trillion connections, it’s complicated. We therefore need to examine the fossil record of the history of life, Bennett explains.
His thoroughly researched, ambitious book does just that, chronicling how the human brain evolved over the past four billion years. Bennett doesn’t offer original insights as such; he mostly cites the work of older and wiser experts, including evolutionary biologist Richard Dawkins, philosopher Daniel Dennett, neuroscientist Antonio Damasio and psychologist Daniel Kahneman. Still, he organizes that complex and intriguing evolutionary history into a lucid, reader-friendly narrative for the non-specialist.
The book is divided into five major breakthroughs. Each new stage of the brain evolved, the author explains, as a sophisticated means of upholding homeostasis – the self-regulating process by which biological systems maintain stability, while adjusting to conditions that are optimal for their survival.
The first three billion years of life on Earth existed without brains. Then roughly 600 million years ago, in ancient worms, nerve nets consolidated into the first brain. With it, came the early affective template of animals: pleasure, pain, satiation and stress.
Five hundred million years ago, one lineage of ancient bilaterians grew a backbone, eyes, gills and a heart, becoming the first vertebrates – animals most similar to modern fish. Their brains formed into the basic template of all vertebrates: a cortex to learn to recognize patterns and build spatial maps, and the basal ganglia to learn by trial and error.
Approximately 100 million years ago, our small mammal ancestors developed new brains in which the dorsal cortex of our ancestral vertebrate was reworked into the modern neocortex, giving them the ability to plan and to re-render past events.
Sometime between 10 million and 30 million years ago, new regions of neocortex evolved in early primates, giving them the power to anticipate their own future needs, as well as the needs and knowledge of other minds.
The brain’s fifth breakthrough came via the evolution of language. Some claim this happened 2½ million years ago, with the very first humans. Others believe it occurred 100,000 years ago and was unique to Homo sapiens.
Bennett, paraphrasing the work of the Israeli historian Yuval Noah Harari, notes how language, combined with technology, has given our species divine-like powers to build civilizations, invent religions, create art and, more recently, construct artificial intelligence in our own image.
I interviewed Harari, seven years ago, for Britain’s Jewish Chronicle. “We are becoming better than gods, because we can create living organisms according to our wishes,” the bestselling author of Sapiens: A Brief History of Humankind (2014) and Homo Deus: A Brief History of Tomorrow (2016) told me. Harari claims human history is on the verge of coming to an end. After that, a new process of life will begin. The change will be so drastic, he believes, that it’s hard to even fathom what it will look like.
The title of W. Russell Neuman’s latest book provides some clues. Evolutionary Intelligence: How Technology Will Make Us Smarter argues that we are on the cusp of a new stage of human inventiveness. In this brave new world, computers will become integrated into our everyday sensory experience – attached to our clothes, our glasses, our headsets and other body parts.
“I am proposing that this revolution is on the same order of magnitude as the invention of language,” Neuman writes. He is a specialist in new media and digital education who previously taught at both Harvard and Yale universities and served during the early 2000s as a senior policy analyst in the White House Office of Science and Technology Policy. But that experience has not made Neuman a convincing or disciplined writer. He wanders, digresses and tells bad jokes far too often. A disappointing read, his book nevertheless offers occasional valuable insights into how evolutionary intelligence will shape the future.
Think of James Cameron’s Terminator 2 (1991), the author suggests. In one famous scene from the science-fiction Hollywood classic, a cyborg time-travelling from the future, played by Arnold Schwarzenegger, surveys his surroundings using a digital visual system that overlays relevant data on the scene he is observing. This digital helper is known as augmented intelligence. It could be, say, a visual overlay of graphics or text, or perhaps an electronic voice only you can hear, reminding you of critically important information.
This is going to dominate our lives, the way that smartphones do today. In fact, the technology is (at least partially) already here. Today, Amazon’s AI, Alexa, plays music when you request it. But it won’t be long before Alexa (or something similar to her) starts to suggest some recipes when you are hungry and then prompts you in the direction of your fridge.
Neuman then briefly looks at the latest research happening in the field of brain-computer interface (BCI). The technology is still largely experimental. But in theory, it allows individuals to control machines with their thoughts.
BCI remains at a primitive stage of development. The neurotechnology company Neuralink, founded in 2016 by Elon Musk, is currently proposing ambitious but morally questionable research.
One plan, for example, involves drilling four eight-millimetre holes in subjects’ skulls (pending FDA approval) and then inserting threads that will pass neuronal data to an implant behind the ear. The specific details surrounding these controversial experiments are still cloaked in secrecy. But they are due to be carried out on patients with various neurological and intellectual impairments.
Neuman dedicates little time or ink to discussing the ethical implications of such invasive technology. Clearly, though, he sees evolutionary intelligence as more of a help than a hindrance. “In the future, when you communicate with a group, with an institution, or simply with another person, it will be technically mediated,” he concludes.
Tobias Rose-Stockwell claims we arrived at that moment in the late 2000s, when social media became an integral part of our daily online experience. His latest book, Outrage Machine, explores how the algorithms used by companies such as Facebook, X, Instagram and TikTok are now capable of forecasting human behaviour before it even occurs. The American writer, designer, technologist and media researcher claims social media has turned its users into dopamine addicts.
Rose-Stockwell writes with clarity, conviction and an insider’s eye and ear. He has spent much of his career mixing with Silicon Valley’s biggest and brightest entrepreneurs. In the early days of social media, most of them believed expressing emotions online had the potential to serve the greater good of humanity.
“While we were right about the significance of social media’s impact, we were very, very wrong about its inherent goodness,” he writes. The author also points out that algorithms goad us to post more self-righteous, aggressive, egotistical and partisan content. Social-media companies then profit from the revenue that comes with the clickbait that follows.
AI is already driving our emotional states online. But it will soon become so sophisticated that it will be hard to tell whether the conversation you are having online is with a machine or with a human. “Artificial intelligence is going to change the world. We should be concerned,” says Rose-Stockwell.
Governments across the globe are finally starting to wake up to the fact that AI is threatening not just the future of democracy, but the future of humanity. In late November, the U.S. Congress introduced a bill that, if passed, would require the Pentagon to collaborate more closely on AI with the Anglosphere Five Eyes intelligence alliance, which includes Canada.
On Nov. 1, 2023, the British government published the Bletchley Declaration. It promotes the idea that for the good of all citizens worldwide, AI should be designed, developed, deployed and used in a manner that is safe, human-centric, trustworthy and responsible. The international agreement was signed by government representatives from 28 countries – including Canada – in attendance at the AI Safety Summit in England.
The historic setting of this major global tech conference at Bletchley Park was not coincidental. During the Second World War, the country estate was the epicentre of Britain’s wartime code-breaking effort. Among the star codebreakers was the British mathematician Alan Turing. In 1950, Turing published a scientific paper entitled “Computing Machinery and Intelligence.” It contained a thought experiment now known as the Turing test, which proposed that a computer can be said to possess artificial intelligence if it can mimic human responses under specific conditions.
Turing, who was criminalized for his homosexuality and forced to endure chemical castration, killed himself, aged 41, in June, 1954. But his perceptive ideas about how computers would shape the future lived on – perhaps, though, in ways that even he could not have foreseen.
One of the highlights of the AI Safety Summit was a cringey conversation between Elon Musk and British Prime Minister Rishi Sunak. Sunak was clearly in awe of the tech billionaire, whose AI venture, xAI, is currently looking to secure US$1-billion in capital through an equity offering. Musk was also one of the original investors of OpenAI but has since distanced himself from the company that Microsoft currently owns half of and which is estimated to be worth US$90-billion.
Musk told Sunak that AI has the potential “to become the most disruptive force in history.” Musk has a penchant for saying anything that will put him centre stage in a storm of global controversy. But this time he is not exaggerating or fooling around. The future is already here. It seems the world’s wealthiest and most aggressive venture capitalists are the only ones who have truly prepared for the revolution.