The Platforms Won't Save You
We can’t count on social media platforms alone to solve the core problems driving extremism and disinformation online. The incentives to fix them simply don’t exist in the platforms’ current frameworks.
Disinformation and extremism online flourish in large part by maintaining constant outrage and fear, both powerful currencies on modern platforms. Whether it’s a children’s book, immigration, or the “Clinton cabal,” this problematic content thrives when the audience is convinced of an impending crisis or looming boogeyman. Paranoia opens space for people to adopt more extreme positions and more outlandish explanations than they might arrive at on their own. Content consumers are often placed in positions that feel as binary as they do existential: will they take a stand against a supposed moral transgression at the Potato Head company, or will they stay silent?
It sounds outlandish, but it’s a recipe that has repeatedly proven effective. Shock makes you click, and fear keeps you coming back. The voices exploiting this dynamic often posture themselves as arbiters of truth in a sea of corruption, building loyal audiences in the process. Self-aware content creators know this, politicians have embraced it, and our political discourse has suffered for it. (I touched on this in my essay about the culture forever war we’re in.)
To effectively combat disinformation and extremism before they erupt, we have to lessen or remove incentives like these from the engines driving the platforms. The big social media companies will never do this to themselves. In Congressional hearing after hearing, tech company CEOs claim they are ready to address these issues and often say they are already doing so. But little of the change over the last few years, with a handful of exceptions, has been truly proactive or comprehensive. These companies seem resistant to fixing core issues, choosing instead to stick Band-Aids on a system that keeps producing problems.
Leon Yin and Aaron Sankin published a series of articles at The Markup last week that are worth your time. Yin and Sankin found that Google was maintaining a secret blocklist meant to prevent ad buyers from targeting hateful content hosted on its platform, but that the system was full of holes. Ads couldn’t be targeted at the hate phrase “neo-Nazi,” for example, but buyers could still target videos using the hate term “hammerskin.” What’s more, they found these blocks were easy to get around by slightly modifying words.
Google responded to Yin and Sankin’s investigation by revising the blocklist to include more terms, though not all of them. Then the company altered its system to prevent future investigations. A tweak and a door slam are not how I imagine a company interested in actually fixing a problem would behave.
Many skeptics of such large-scale changes will argue that altering the core could have adverse effects on the broader content landscape. But the idea that we will lose the good parts of the internet by fixing the bad ones is a false choice. Current proposals for legislative, antitrust, and other actions against big tech need more imagination about what the internet could and should be. It doesn’t have to be like this, and it shouldn’t.