Joseph Wilson is a doctoral candidate in anthropology at the University of Toronto. His book Humans of AI: Understanding the People Behind the Machines comes out in 2025.
Human society has arrived at a tipping point. Computers are increasing in power so quickly that they will soon breathe life into large models of data, unleashing what the author Yuval Noah Harari has called an “alien intelligence” – a force that we puny humans are unprepared to handle. Depending on whom you believe, this could bring about the extinction of humanity, or usher in a golden age of free labour and prosperity. (Despite their differences, AI evangelists and AI doomers agree on one thing: Both think they know what “intelligence” is, and that Silicon Valley has figured out how to manufacture it.)
At least, that’s what we’re told. But these predictions – indeed, the whole field of artificial intelligence – are based on an outdated, limited and often dangerous concept of intelligence.
Any theory of grand societal transformation at the hands of sentient robots or conscious chatbots requires, at its core, a concept of intelligence that can be directly equated with computing power. It needs to be an objective ability, something that can be measured quantitatively along a single scale, ideally with a single number such as IQ (or its modern incarnation, the general-intelligence factor, more commonly called the g-factor). As a result, many in the AI community speak of intelligence as something people (and now machines) can have more or less of, allowing us to rank everyone and everything by the ruthless logic that bigger is better.
In recent years, position papers and books written by computer engineers turned philosophers in Silicon Valley have shown how computational power has been increasing exponentially over the past decades, as has the number of parameters (i.e. “neurons”) in neural nets such as the large language model that powers OpenAI’s ChatGPT. The number of neurons in a model is often compared directly with the number of neurons in the human brain. Computer engineer Leopold Aschenbrenner recently argued in a paper that OpenAI’s 2019 model, GPT-2, contained the same amount of “effective compute” as the brain of a preschooler. GPT-3 was upgraded to the level of an “elementary schooler,” and GPT-4, the model that powers the latest version of ChatGPT, performs at the level of a “smart high schooler.” (Unsurprisingly, the next rung in the engineer’s taxonomy of intelligence is “automated AI researcher/engineer.”)
If these were merely metaphors, we could take them with a grain of salt. But Mr. Aschenbrenner and his ilk take these comparisons literally, arguing that because silicon neurons show the same kind of intelligent behaviour as biological ones, they must be equivalent – and, thus, that the inevitable result will be machines that can do anything humans can do. This is the holy grail of AI: what is being called Artificial General Intelligence (AGI). “It requires no esoteric beliefs,” Mr. Aschenbrenner writes, “merely trend extrapolation of straight lines.”
But the straight lines that point to AGI are based on some pretty fuzzy assumptions about intelligence: that only certain kinds of tasks really count as requiring intelligence; that intelligence is an innate quality that is determined wholly by the firing of neurons; and that intelligent behaviour, once demonstrated through high grades or degrees, is self-evident and obvious to all who bear witness.
These are the same kinds of assumptions that drove the development of “scientific” intelligence testing, a project rife with racist and classist beliefs about what kinds of people could, culturally speaking, be considered “smart.” During the Enlightenment, European powers swarmed the globe, conquering countries populated by people they deemed intellectually inferior. In the 19th century, this belief found pseudoscientific cover in fads such as phrenology and, later, in misreadings of Charles Darwin’s work on natural selection, which fed the racial hygiene movement and, eventually, more harmful government programs such as forced sterilization. In the 20th century, these ideas morphed into the IQ test: a magically efficient single test that could be used to sort and rank people based on innate ability. It was first adapted for use in North America by Lewis Terman, a psychologist at Stanford University, the academic anchor of today’s Silicon Valley.
The belief in an objective, unchanging, natural quality known as intelligence is, at best, a cultural quirk of interest to anthropologists and cultural historians. But at its worst, it can morph into truly noxious beliefs. Is it any coincidence that the rise of AI talk among Silicon Valley elites has been accompanied by a resurgence of enthusiasm for eugenics and an embrace of authoritarianism?
In the decades since the heyday of that obsession with IQ, research in anthropology, educational psychology and cognitive science has shown instead that intelligence is a multifaceted thing: a blurry set of skills and competencies that cannot be measured in any meaningful way along a linear scale. Intelligence is better understood as a complex, culturally specific way of assigning value to the work people do, one that varies wildly across the world.
In many cultures, for instance, intelligence is tied to verbal skill, used to describe people with the ability (and the social right) to construct rhetorical arguments, or use metaphors in telling war stories, or flatter an in-law in a wedding speech. Other cultures hold up the demonstration of practical skills, such as hunting or identifying medicinal herbs, as the highest forms of intelligence. Some of these cultural concepts are closer to the English adjectives “articulate,” “skilled” or “wise” than to the disembodied, quantitative notion of intelligence that has become common in the West.
Westerners seem to value mathematics, coding, language and other so-called “cognitive abilities” more than they do “non-cognitive abilities”: plumbing, cooking, playing a saxophone solo. But ironically, these last three examples are the kinds of tasks that AI systems struggle with the most. Robots fall flat – sometimes literally – when trying to capture the physical and emotional dimensions of the human experience. Could these be the truly intelligent components of the human psyche? After all, our bodies (and the minds within) had been busy crawling around, interacting with other people, feeling emotions and building world models well before language and mathematics entered the scene.
Bracketing off certain attributes as irrelevant to “true intelligence” has been part of the game that AI scientists have been playing since the field’s inception in 1956. Activities such as playing chess, doing math and coding were always held to be the pinnacle of the human intellect (coincidentally, that’s what the scientists themselves were typically good at). This gives rise to a kind of circular logic. Once tasks have been defined as requiring real intelligence, the computer models are trained in a manner that rewards those tasks and, lo, demonstrations abound of intelligence emerging spontaneously from the depths of the silicon wafers.
Soft skills – such as empathy, or even the ability to deal with the nuisance of having a clunky, unwieldy body to haul around – were dismissed as unimportant to identifying and amplifying intelligence, and so were left out of AI models. It’s a Western cultural conceit that often makes people from societies not steeped in the Enlightenment rationality of Descartes furrow their brows in bewilderment. The computer is intelligent? But it can’t even weave a decent fishing net! Yet the concept of intelligence as a metric not only worth measuring quantitatively, but worth building an entire field of study around, has stubbornly remained.
The fundamental reason that AGI will never be realized is that intelligence is not a scientific concept – it’s a cultural one, a loosely defined cluster of hopes and fears about the power of the human mind. Because the theory of AGI is so dependent on shifting, socially influenced definitions of intelligence, it can never truly be proven right or wrong. From the reverence for the IQ scale to the uncanny ability of ChatGPT to answer questions in intelligent-sounding patterns – these are stories we tell about our minds and the power they have to usher in the future, whether it be the rapture or the apocalypse foretold by the AI prophets.