Maybe it’s time to rethink what we’re doing on social media.
A racist massacre in Buffalo laid bare some shortcomings of modern approaches to online extremist content
A gunman, motivated by vicious racism and fascist ideology, traveled hours to a supermarket in a Black neighborhood in upstate New York, where he killed 10 people and injured three. A document he left behind said he was inspired by (and apparently imitating) the man who murdered 51 people in Christchurch, New Zealand, in 2019. Like the Christchurch murderer, he painted racist language on his gun, used that document to explain the “great replacement” theory he found so convincing, and livestreamed his attack to the public internet.
It was another act of white supremacist terror tailor-made for clicks.
I won’t be sharing the document the killer left, the diary of sorts he kept on Discord, or images of his murders here. It’s entirely possible to discuss tragedies like this one without giving too much airtime to the propaganda that is, in some ways, the point of the killing, at least in the killer’s mind. I also don’t agree that this propaganda “explains” the murders, whether or not their perpetrator says so—it’s just a coward’s justification for murdering innocent people.
There’s a deeper, more fundamental problem with dissecting documents like the ones the killer left in the hope of understanding him. An extremism researcher I really respect, Emmi, articulated it better than I could have; her Twitter account is currently private, so I’ll have to paraphrase her point.
In brief, Emmi pointed out that so much of the public conversation around this tragedy and its motivations rests entirely on the content of his 180-page document and his Discord diary. Last week, extremism researchers picked apart a curated image, a propaganda version not of his ideologies but of himself, that he hoped others would see. Insights from his document and chat logs are not necessarily insights into the killer; they are insights into the way he hoped the world would envision him after his act of terror. That is not to say there’s no value in studying that material. It should be taken seriously, but that intermediary is crucial: This is not what he thought, but what he thought about what he thought. We’ve seen it before, and frankly there’s very little that’s new.
The truth is brutal: White supremacist conspiracy theories like the “great replacement” can sometimes motivate their subscribers to commit atrocities. And it happened again.
Attention and fear are what this killer wanted. Specifically, he strove to make clickable content morsels out of his victims’ final moments of horror at the end of his rifle barrel. He used Twitch to broadcast those moments, though archived screenshots suggest that his audience never grew to more than a couple dozen viewers. Within two minutes of the start of his rampage, Twitch took down the video and suspended the account. “The user has been indefinitely suspended from our service, and we are taking all appropriate action, including monitoring for any accounts rebroadcasting this content,” the platform told Kotaku.
Twitch’s response is honestly about as good as one can reasonably hope for in the current state of live content moderation. A two-minute response to a livestream that wasn’t particularly popular on its platform is pretty swift as far as reactions go, especially when we remember the 17 minutes that the Christchurch killer’s stream stayed up on Facebook before it was taken down. You do not under any circumstances gotta hand it to them, but there is still some credit due.
The internet is not a collection of independent, discrete platforms; I think of it as a series of gradients between large blobs. Because of blob-gradient overlap, what was originally Twitch’s problem soon became a problem for platforms like Facebook and Twitter. Those companies seemed less prepared to counteract the spread of the killer’s materials on their own platforms, a spread driven largely by links to videos and photos hosted on third-party websites. The New York Times reported:
A clip from the original video — which bore a watermark that suggested it had been recorded with a free screen-recording software — was posted on a site called Streamable and viewed more than three million times before it was removed. And a link to that video was shared hundreds of times across Facebook and Twitter hours after the shooting.
Social media companies have long been hesitant or slow to write policies or rules related to content they don’t directly host. It’s a familiar type of thinking: “Not our platform, not our problem.” The rapid spread of the Buffalo killer’s own footage of himself shows how short-sighted that thinking can be in practice. The internet is fluid, and effective responses to crises have to be fluid, too.
Content moderation has progressed by leaps and bounds since Christchurch, and even since more recent events like the Capitol riot, but the failure to keep images of the Buffalo shooting from spreading in exactly the way the killer intended demonstrates that the approaches on hand are still lacking. In a recent appearance on The New York Times podcast “Sway,” I posed a question to tech platforms: “If the animating ideology of that violence can exist in other forms when there’s not literal bodies on the floor because of it, then what are you doing?”
Platforms like Facebook, Twitter, and Twitch understand, correctly, that after a public act of violence, they have some sort of content moderation job to do. But the driving ideology behind the violence seems to be considered ancillary, and so it is allowed to persist, even to go viral on their platforms when it is not attached directly to an act of violence. It’s a distinction without a difference; there is simply no separating white supremacist conspiracy theories from the bloodshed they inspire.
But to reckon with this problem would mean confronting the fact that many white Republicans believe the same things as the shooter in Buffalo, though they often prefer the version that survives a few pumps of Germ-X. “Great replacement” theory has become a fixture on mainstream GOP-aligned media outlets like Fox News. Right-wing social media influencers have reacted to the shooting by defending the racist theory rather than condemning it. If platforms decide that “replacement theory” is no longer allowed, they will need to enforce that rule by taking down media published by powerful Republican politicians and pundits, something I don’t envision happening anytime soon.
It’s entirely conceivable that platforms will decide to ban “replacement theory” the same way they currently punish misgendering and the use of slurs. The right-wing reaction to policies like these is always to adopt an “I’m-not-touching-you” version of whatever it is they want to say, but even that might be preferable to public figures overtly declaring that Black and Brown immigrants are the foot-soldiers of a conspiracy to steal the United States from white people.
In any case, the current posture is worth reexamining. The prevalence of an extremist idea does not make it any less substantive or dangerous.
Still, even in a world of perfect moderation by the behemoths of social media, extremists like the Buffalo shooter would find somewhere else to do the same thing. Those places online will, frankly, always exist in some form or another. The problem, I think, is that these pools of bile and hate have attained a level of currency in society that is producing dangerous, and sometimes deadly, consequences.
Throughout the last week, I kept asking myself: What exactly are we doing here? Because whatever it is, it is not working the way it should.
Edited by Sam Thielman