Taylor Owen is the Beaverbrook Chair in media, ethics and communications and the director of the Centre for Media, Technology and Democracy at McGill University.
On Oct. 5, I decided to take a break from X, formerly Twitter.
This was not easy. For more than a decade, it has been my gateway to information about the world, as well as the subject of my professional research on our media and technology ecosystem. Starting with the Arab Spring protests, I learned about a long series of conflicts through the lens of my Twitter feed, where I followed journalists, scholars and citizens living through the events to which I was glued.
But the platform is now broken. Where it once filtered valuable information about the world in a timely manner – albeit imperfectly – it is now calibrated to anger and extremes under the ownership of a spiteful billionaire who has clearly fallen down an ideological rabbit hole.
And so, over the Canadian Thanksgiving long weekend, I did not even learn of the horrific Hamas attacks in Israel until almost a day later, when I turned on the radio. And in the days since, I have consumed information about the war the old-fashioned way: through journalism filtered not by increasingly narrow social-media algorithms, but by my own choice of what to listen to, read and watch. This change of context has proved revealing.
It is clear that traditional media still has a profound ability to shape the global understanding of events, and that the way wars are framed carries real consequences. Is mainstream Western journalism drawing a moral equivalency between acts of war? Are news media outlets steering audiences toward military rather than political solutions? Whose voices are being amplified, and why?
Those questions are rightly being asked. But it is clear that social media, AI and our broken information ecosystem are also having a profound effect on how we come to understand, feel about and respond to this conflict.
Platforms have gotten out of the business of news and the prioritization of reliable information. For years, social-media companies incentivized publishers to share content on their platforms, normalizing those platforms as a news source for citizens. But now Meta is downgrading the ranking of news on its platforms, and in countries such as Canada, it has even cut off users’ ability to link to journalism entirely on Facebook and Instagram, in protest of regulatory efforts. At the same time, the rise of TikTok’s powerful algorithmic feed has led Meta, Twitter and YouTube to move away from user-informed content feeds (based on who we choose to follow) toward feeds that pull content from across the entire platform, dictated by what the companies believe will keep us most engaged and entertained. During a war, this means being served content that plays to our ideological and emotional biases – material that either confirms our beliefs or enrages us, and little in between.
This is just the latest phase in a long series of design and policy decisions that have made these products more dangerous. Free-speech absolutists have succeeded in diminishing, and in some cases removing, even modest content-moderation rules, and the civic- and election-integrity teams once tasked with responding to disinformation, hate speech and violence have been decimated. Tools that allowed researchers to better understand the platforms have been turned off. Blue check marks, long a sign of credibility on Twitter, can now be purchased, conferring fake legitimacy and amplification; bad actors are gaming the algorithms so that they serve up their harmful and extreme content, at the expense of the moderation, responsibility and reliable information that these moments demand.
In short, the guardrails are gone – and the result is an information ecosystem that is deeply vulnerable to being weaponized against us. Just this past week, for example, according to research conducted by Reset and highlighted by the EU, X has seen a surge in antisemitic hate speech, and anti-Muslim posts are being pushed by a wide range of political actors, including the U.S. far right and Indian disinformation campaigns. The result is a spiral of escalation, with each side exploiting the incentives and vulnerabilities of our media ecosystem to outdo the other in demonization.
Compounding these issues is content generated by AI, to which platforms are highly susceptible. X is now filled with posts that are designed, written and promoted using tools such as ChatGPT, and as a result, users are losing the ability to know what is real. When our minds are flooded with AI content purpose-built for the incentives of our platforms, we are outsourcing our understanding of the world, including of complicated events in Israel and Gaza, to the design of our communication infrastructure. The result is an epistemological crisis in how we know war: a combination of unreliable, bias-confirming and emotionally charged content is shaping the character of the public sphere, and how we understand the world.
The best chance we have of fixing this problem may be as simple as regulation. The EU’s Digital Services Act has recently come into effect, and this conflict is its first real test. EU Digital Commissioner Thierry Breton has sent official notices to both TikTok and X warning that their platforms are being misused to spread content that is illegal in the EU. In response, the platforms will need to show how they are handling user complaints, conduct risk assessments of their products, and demonstrate the measures they are taking to mitigate the risks they identify. This kind of product-safety legislation is coming soon in Britain, and is likely to be enacted in Canada. Within three days of receiving the notice, TikTok had removed more than 500,000 posts and 8,000 livestreams depicting violence and atrocities – a drop in the bucket, but a meaningful effort nonetheless.
Both traditional and social media have in the past been valuable, if imperfect, windows into conflicts such as this one. But our current moment feels particularly perilous. The institutions of journalism are in decline, and the social platforms that some believed might replace them are broken – and so we are experiencing the worst of both worlds, leaving too many of us anxious, listless and ungrounded. The need to reimagine an information ecosystem calibrated to reliable content about the world could not be more urgent. At the very least, we would be wise to spend a bit less time on X.