The suspect in the New Zealand shootings planned for the live feed of his terrible act to travel swiftly across social media. And travel it did, despite the efforts of Facebook, Twitter and horrified viewers to stop it. Taken down in one place, it would pop up in another, easily found by those who went looking and celebrated by those meant to take inspiration from his deed.
Prior to the attack, a 74-page “manifesto” was posted to an online right-wing forum; it included a link to a Facebook page where helmet-camera footage documented the shooting in a graphic 17-minute video. A gunman went on to kill 49 people at their places of worship, two mosques in the city of Christchurch, while a gruesome fan club cheered on in real time. The suspect made no secret of his ultimate goal: to use social media to strategically foment racially based tension and violence around the world.
Social-media companies worked to scrub the video from the internet, but links to it remained online for hours. A Twitter spokesperson said the company had suspended the suspect’s account and was using search algorithms and a team of employees working in multiple languages to find and remove hate speech connected to the shooting, as well as responding to reports from the public. Facebook described similar steps and said it was working with investigators in New Zealand.
Still, the incident reveals how difficult it is to stop hateful and horrifying content from spreading online once it has been posted, especially when highly motivated followers are able to stay one step ahead of social-media companies. And research suggests that the more comments and posts a tragedy like this generates, the more likely a shooter is to succeed in fuelling hate and fear, and in inspiring another attack.
A 2016 paper, presented at the annual convention of the American Psychological Association, concluded that a key motivator for mass shootings was the fame and power that perpetrators received online and in media coverage. In a study the previous year, a team of researchers in Amsterdam and Miami found that higher numbers of tweets discussing school shootings increased the probability of a copycat attack in the United States. Another study, published in the journal PLOS One in 2018, found that, compared with gun ownership, poverty levels and rates of mental illness, only online media coverage of shootings and the frequency of related online searches appeared connected to how soon the next shooting occurred.
Researchers have also pointed to the increase in mass shootings since 2011, the same year that Facebook, Twitter and Instagram experienced a surge in users. What’s more, a recent MIT study found that falsehoods travel much faster online than facts; in that analysis, fake news was 70 per cent more likely to be retweeted than the truth. And the major contributors to the spread of misinformation weren’t automated bots, but humans at their keyboards.
So the accused in New Zealand knew full well that the web would take care of marketing his message. Barbara Perry, director of the Centre on Hate, Bias and Extremism at the University of Ontario Institute of Technology, followed the reaction on alt-right sites such as 4Chan after the attack. She says members were passing around the video and manifesto, “expressing praise” and lauding the gunman’s courage. “It is so chilling,” she said. “It has energized the movement.”
This is why Brad Galloway, a fourth-year criminology student who conducts research with Dr. Perry, says people should not watch the video, or share it, or even tweet condemnations of it. Each comment focused on the shooter and the motives behind the attack, however well-meaning, gives the group the attention it seeks. Mr. Galloway, 38, knows this all too well: As a teenager, he joined a white-nationalist group and was swept up in its toxic forums and chat rooms before rejecting the ideology in his 20s.
While it is appropriate to push social-media sites to work harder at removing hate speech from their platforms, he says, they are playing an unwinnable game of Whac-A-Mole. “You can regulate it, or remove some content, but it will show up again.”
The more views, the more posts, “the more they feel like they have stoked a fire,” he said. “If you feel like you have to comment, make it about the victims, and not the far-right rhetoric. Take the focus off, and see what you can do to build up the resilience of the people affected.”
Faiza Hirji, a communications professor who studies Islamophobia and extremism at McMaster University, points out that the web has become a petri dish for overt Islamophobia. Consider far-right Australian Senator Fraser Anning, who posted a statement hours after the shooting that stoked fear of the Muslim presence and criticized immigration policies for allowing “Muslim fanatics” into New Zealand. The inflammatory language mirrored the suspect’s manifesto and soon spread like its own virus across the internet. “Social media is just one facet of what is increasingly seen as acceptable racist discourse,” Dr. Hirji said.
And Canadians, says Dr. Perry, shouldn’t kid themselves: These same attitudes circulate here, fuelling, for instance, the reported increase in hate crimes against Muslim Canadians after the mosque shooting in Quebec City. Dr. Perry was recently awarded a $370,000 federal grant to study online extremism, including what draws people into those chat rooms and forums, and how to pull them out. “We have been complacent,” she said. “This could have happened anywhere.”