opinion

As Canada ramps up its policies to build an AI economy, now is a good time to think about the difference between adoption and adaptation, Marcel O’Gorman writes. Photo: Markus Schreiber/The Associated Press

Marcel O’Gorman is a professor and the founding director of the Critical Media Lab at the University of Waterloo.

In the winter of 2018, a thief smashed the window of my car in a gym parking lot and stole my MacBook. While I was jogging on a treadmill, he was sprinting toward an apartment nearby to wipe the hard drive I hadn’t properly backed up. By refusing to adopt iCloud’s increasingly expensive storage services, I lost weeks of data, including a book chapter I was writing. Eventually, I was shamed into adapting to this new monthly expense.

I also had to buy a new laptop, which meant adapting to Apple’s new butterfly keyboard, a loud, ramshackle innovation that disrupted my ability to write, which requires thinking with my hands. This is a story of technofailure – by which I do not mean glitches and crashes, but the failure of a human to adapt to an accelerating innovation economy.

While these events have nothing to do with AI, they disclose the mechanics of the prevailing tech ecosystem that generative AI has infected like a trillion-dollar virus. This economic machine pushes tech adoption as a means of survival, but it ultimately serves to enforce adaptation. The friction in its gears is human lag, which ultimately results in what Arjun Appadurai and Neta Alexander describe as an “affective economy” of maladaptation, one that “produces and naturalizes failure and creates the pervasive sense that all successes are the result of technology and its virtues, and that all failure is the fault of the citizen, the investor, the user, the consumer.”

As Canada ramps up its policies to build an AI economy, now is a good time to think about the difference between adoption and adaptation. Bernard Stiegler, the celebrated French philosopher, once wrote that adoption “is the process of an individuation, an enrichment, whereas adaptation is a disindividuation: a restriction of the possibilities of an individual.” It feels like we’re doing more adaptation than adoption these days, with little say in the matter. And this is not a good feeling.

While an impatient AI economy bears down on Canadian policy makers, businesses and individuals, Canadians should be demanding opportunities for AI adoption that do not restrict our freedom or endanger our planet’s well-being – not to mention the livelihoods of people around the globe who are labouring in precarious conditions to keep AI productivity alive for the wealthiest. But the rules of the AI race are not bound to human values, equity or planetary fitness. Nor is generative AI as we know it today a product of science or creative ingenuity.

Generative AI is primarily an economic project. So rather than looking to philosophers and tech ethicists to teach us the truth about generative AI, we might learn more by paying attention to the economists.

Less than a year ago, Canada’s Minister of Innovation François-Philippe Champagne released the inelegantly titled “Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems.” Note the word “voluntary.”

Like many other declarations of its ilk, some of which I helped pen myself, this manifesto of sorts gestures toward fairness, equity, transparency and other essential values required to build public trust in AI. But it provides no road map for implementing these good intentions. The code is meant to be a stopgap as we wait for the implementation of more formal regulatory measures, such as the proposed Artificial Intelligence and Data Act (AIDA). But without regulation, adoption of such a code will occur only in direct proportion to the ability of its stated values to generate profits.

This means that the code, like the many other “responsible innovation” declarations and manifestos, is ultimately vapourware: an exercise in ethics-washing designed to bolster the tech economy.

The most instructive component of the minister’s announcement was not the code of conduct itself but the context he created for it in the press release, including the following quote attributed to Mr. Champagne: “The government is committed to ensuring Canadians can trust AI systems used across the economy, which in turn will accelerate AI adoption. Through our Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems, leading Canadian companies will adopt responsible guardrails for advanced generative AI systems in order to build safety and trust as the technology spreads.”

The brief statement provides a superb case study in technopolitics on many fronts. Above all, it speaks to the issue of human “lag,” the problem of slow adoption when a society’s values and morals don’t click in time with relentless innovation. There is a key tension in this document between: a) accelerating AI adoption, which is an economic value based on increasing wealth; and b) building trust, which is a moral value based on increasing human well-being.

This tension is at the centre of a tired refrain sung by Canadian economists and AI pundits alike. As their paternalistic story goes, Canadians have been shamefully slow at adopting AI. In a recent Globe and Mail op-ed, an economist colleague of mine bemoaned the results of an Ipsos poll showing that our country’s people are some of “the most negative toward AI” among 28 participating countries, second only to France. His conclusion was that “Canada needs to build our country’s absorptive capacity for AI technologies” by investing in a mass AI literacy campaign that will spur productivity.

The call for such a campaign, like the minister’s urgent rhetoric of acceleration, reveals a patronizing frustration with human lag, and pushes for a shift from adoption to adaptation. If Canadians refuse to adopt, this logic seems to suggest, we must find ways to make them adapt; otherwise, the economy will sputter dangerously.

The problem with this hand-wringing strategy to boost productivity is that it fails to account for the reasons why Canadians might be “negative toward AI” in the first place, some of which might be quite valid and worthy of consideration. After all, among those negative Canadians is the “Godfather of AI,” Geoffrey Hinton, who retired from Google so he could speak more openly about the existential risks of AI. I have participated in public-speaking events where AI scientists and economists alike have dismissed the concerns of Hinton and other outspoken tech leaders as a symptom of irrational fear. The accusation struck me as a form of schoolyard bullying, labelling important critical thinkers as unmanly scaredy-cats.

I will concede that Dr. Hinton’s concern that humans could one day be “subjugated” by an artificial general intelligence does bear traits of science-fictional apocalypticism. But maybe this misses the point, because in fact, many humans are already subjugated by AI. This is made clear in a response to the now-defunct Pause Letter co-written by whistle-blower Timnit Gebru, which points to the current harms of AI, including worker exploitation and massive data theft to create products that profit a handful of entities; the explosion of synthetic media in the world, which both reproduces systems of oppression and endangers our information ecosystem; and the concentration of power in the hands of a few people, which exacerbates social inequities.

The letter’s authors conclude: “We should be building machines that work for us, instead of ‘adapting’ society to be machine readable and writable.”

The authors might have added the environmental consequences of AI development, which require our planet to adapt to the massive extraction of natural resources. In economic terms, these real and present dangers to human well-being might be considered “negative externalities” of AI. But they are also good reasons for Canadians to be negative about AI, none of which have to do with our tech illiteracy, irrational fears or apocalypticism. Maybe slow adoption in Canada and France is not a symptom of fear, but rather a healthy indicator of critical thinking based on clear evidence.

The need to accelerate adoption in the face of human lag is a long-standing bugbear for neoliberal capitalism. In the early 20th century, John Dewey and Walter Lippmann described this lag in evolutionary terms, featuring a human animal incapable of adjusting to its constantly changing environment. As French economist Barbara Stiegler (Bernard’s daughter) puts it, whereas “the fluxes of innovation are urged to accelerate,” the human has evolved toward stable and closed environments. As a result, the story of human intelligence, which spoke to its capacity for stabilization and self-preservation, “has brutally become that of its maladaptation, maladjustment, and structural lag.” The big task for economics, then, is to solve this frustrating problem of lag, which is very much a human problem, if not the problem of how to make humanity more tech-adaptable.

Often, the problem is addressed by means of forced adaptation. Consider the case of writing instructors today who are dealing with an academic integrity crisis thanks to the absorption of ChatGPT by students on university campuses. Isabel Pedersen, a media theorist at Ontario Tech University, sums up the situation as follows: “Universities are compelled to adapt to generative AI as a phenomenon before there is agreement upon how AI writing should be used or even valued by society.” Whereas students were free to adopt the platform, there was no question of adoption for instructors who are experts in writing pedagogy; they were forced to adapt.

The very idea of “agreement” about how an innovation should be used seems quaint when faced with the momentum of an unregulated tech market. A similar story of forced technological adaptation has been playing out in Canadian classrooms for years, leading some provinces to start banning in schools the smartphones that students had eagerly adopted. The values of these educators are completely at odds with the acceleration of a powerful market driven to put expensive toys in the hands of adolescents. Parents feel pressured to adapt to the smartphone economy by paying for the devices with cash and credit. Meanwhile, their kids pay for the devices in hours of attention, feeding a data-hungry social-media factory powered by AI recommendation engines.

The concerns of educators and other Canadian citizens who understand the far-reaching stakes of the AI race will not be covered by the AIDA. As critics have pointed out in an open letter, the AIDA is not informed by public consultation, and “fails to protect the rights and freedoms of people across Canada from the risks that come with burgeoning developments in AI.”

As efforts to “accelerate AI adoption” make their way into every corner of our lives, asking us to adapt before we’ve even had a chance to adopt, it will become clear that our country places economic valuation above human values. Maybe I am stating the obvious, but at the very least we should be honest about the situation rather than wrapping it up in paper guardrails like a voluntary code of conduct to save face. This is the very definition of “ethics-washing.”

To borrow the words of Timothy Snyder from his book On Tyranny, silently adapting to a massively disruptive technology controlled by a handful of tech elites is “teaching power what it can do.”

The greatest goal of an AI literacy campaign should be to encourage the demand for policies that shift the playing field so that AI developers must adapt to human values, rather than humans adapting to an economic imperative.

It is the burden of every individual, and not only Canadians, to instruct themselves about the unethical data-scraping practices, environmental effects, colonial-style exploitation and other negative externalities of generative AI so that more adoptable technologies can be developed at a human scale. Otherwise, we will be left with the forced adoption of solutions looking for problems, followed by the painful labour of cleaning up a dehumanizing mess left by a desperate, off-the-rails tech economy.

Editor’s note: A previous version of this article incorrectly stated that Geoffrey Hinton signed the now-defunct AI Pause Letter in 2023. He did not sign the letter. This version has been updated.
