Kean Birch is director of the Institute for Technoscience & Society at York University.
Increasingly, generative AI seems like a waste of our collective time and money. While generative AI technologies like ChatGPT have some playful uses, they come with potentially enormous social costs and limited social benefits.
To understand these social costs, we have to understand generative AI. It’s not an autonomous, intelligent system able to think and decide as we do. Instead, as Emily Bender and colleagues emphasize, generative AI is a mimic of human action, parroting back our words and images. It doesn’t think; it guesses – and often quite badly, producing what are termed AI hallucinations.
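To see what “guessing” means in practice, consider a toy sketch in Python. The vocabulary and probabilities here are invented purely for illustration – real systems work on billions of parameters, not a handful of words – but the core move is the same: each next word is drawn at random, weighted by learned probabilities, not reasoned out.

```python
import random

# A made-up miniature "language model": for each word, the probabilities
# of what word comes next. These numbers are invented for illustration.
NEXT_WORD_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "moon": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"ran": 0.7, "sat": 0.3},
    "moon": {"sat": 0.1, "ran": 0.9},  # nonsense still gets probability
}

def generate(start: str, length: int = 4) -> str:
    """Sample a continuation one word at a time, weighted by probability."""
    words = [start]
    for _ in range(length):
        probs = NEXT_WORD_PROBS.get(words[-1])
        if probs is None:
            break  # nothing learned about this word; stop
        # The model never "decides" what is true; it draws a weighted guess.
        choices, weights = zip(*probs.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the moon ran" -- fluent-looking, not reasoned
```

Nothing in that loop checks whether the output is true or sensible; fluency is a byproduct of statistics, which is why confident-sounding errors are built in.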
Understanding generative AI technologies as probabilistic systems highlights the social costs that follow from their development.
AI depends upon computing capacity: the more AI we deploy, the more computing capacity we need. Not only does this divert computing capacity from other, potentially more useful activities, it also requires an enormous amount of energy. These environmental costs are well known, and they will get significantly worse as AI spreads. Sam Altman, chief executive of OpenAI, reinforces this point when he argues that AI needs an “energy revolution” to be successful. Even leaving aside the ecological costs, AI’s power-hungry nature will push up energy prices across society.
Then there’s the fact that AI is underpinned by significant capital investment in computing infrastructure: fibre optics, servers, data centres and the like. We can see the cost of this in Big Tech’s corporate reports, which highlight the billions they’ve spent and are still spending on this infrastructure. Big Tech now controls much of our computing capacity (a social cost in itself), and we will need to invest considerably more to make AI commercially viable as an everyday technology. That investment could go somewhere else, somewhere more useful.
AI is also sucking up innovation funding, especially venture capital. According to CB Insights, venture capital spending on generative AI jumped fivefold between 2022 and 2023, reaching close to US$22-billion. In a shrinking venture pool, that money could, again, have been used elsewhere. More important, though, commentators like Ed Zitron point out that if the AI hype bubble bursts, which appears likely, then all that innovation funding will have been wasted (as will all the capital investment).
As AI continues on this trajectory, it threatens to overwhelm us with AI spam. AI needs data to train models, but content producers – such as newspapers, websites and authors – are now challenging the scraping of their copyrighted content by suing organizations like OpenAI. More critically, as the internet becomes saturated with AI-produced “data,” the models trained on it will collapse in on themselves. As political economist Jathan Sadowski poetically puts it, we face the growing social cost of “Habsburg AI,” by which he means artificial intelligence technologies that are “so heavily trained on the outputs of other generative AIs that it becomes an inbred mutant, likely with exaggerated, grotesque features.” This means hallucinations built upon hallucinations, creating all sorts of unforeseen consequences.
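The feedback loop behind “Habsburg AI” can be shown with a deliberately simple statistical analogue. In this sketch (an assumed toy setup, not a claim about any real training pipeline), each new “model” is just a mean and a standard deviation fitted only to samples generated by the previous one – the way a generative AI might be trained on its predecessors’ synthetic text:

```python
import random
import statistics

# Toy analogue of model collapse: each generation is fitted solely to
# synthetic samples drawn from the previous generation's model.
random.seed(0)

mean, stdev = 0.0, 1.0  # generation 0: fitted to real data
for gen in range(1, 61):
    # The next model sees only synthetic output, never the real distribution.
    synthetic = [random.gauss(mean, stdev) for _ in range(25)]
    mean = statistics.fmean(synthetic)   # refit on synthetic data alone
    stdev = statistics.stdev(synthetic)
    if gen % 20 == 0:
        print(f"generation {gen}: mean={mean:+.2f}, stdev={stdev:.2f}")

# Sampling error compounds generation after generation: the fitted
# distribution tends to drift away from the original and lose its tails.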
Perhaps most important, AI entails passing the buck for its social impacts on to the rest of society, even when it provides no social benefit. AI will necessarily lead to significant social change and associated costs as we are forced to transform our social, political and economic institutions to deal with its fallout. Even something as basic as AI-generated images will create a collective cost; for example, it’s going to cost a fortune to adapt our political institutions to protect ourselves against generative AI’s turbocharging of political misinformation.
The heart of the problem is that generative AI is not really designed to address actual social problems. We urgently need the expertise of social scientists to make much-needed collective decisions about the future we want for generative AI; we can’t leave it to business, markets or technologists. We need these experts to help identify the social and collective problems we want generative AI to address. We then need to work out whether – not simply how – artificial intelligence can contribute to viable solutions, and then get AI companies to focus on producing those solutions.