We’ve all heard the word “deplatforming.” It feels more common than dirt, here in the internet age. Constantly, we’re asking ourselves: do titans of attention really deserve their sway? As the information superhighways of humanity’s most impressive invention have snuck their way into every pocket and desktop in the country, we ask whether the kings of those highways have any right to such influence. These titans have platforms—what does it mean to take them away? That is to say, what the hell is deplatforming?
Before continuing, let’s define relevant terms:
- (Content) Creator - Some individual or company which produces and releases internet content using some combination of text, video, and audio formats
- Streamers - A subset of creators who stream their video content live
- Platform - Can mean one of the following, depending on context:
  - The company which offers and maintains the social media apparatus upon which content is found
  - The specific audience access a given creator has—their soapbox
- Dox - To search for and publish private info about a creator with malicious intent
- Deplatforming - The removal or limiting of a creator’s access to their audience or potential future audience
To understand deplatforming, one must understand the forms it comes in. There are three, to be exact: self, platform, and government¹. When a deplatforming occurs, it can be understood as induced by the creator themselves, the platform hosting them, or the relevant governing body. The creator chooses to step down (because of backlash or the expectation of it), the company in charge of the social media finds it advantageous to remove them, or the government finds their speech criminal and removes them as a result.
In dealing with deplatforming as it concerns the general public, only the first two forms are relevant. Almost everybody will agree that government abuse of power is bad; as for what specifically the government should get to silence, that’s a discussion outside the scope of this post. Europeans will have more to discuss than those from the land-of-the-free-speech.
Both of these, self-induced and platform-induced deplatforming, can be interpreted as caused by public pressure. In the self-induced case, public pressure acts as the catalyst for whatever feeling causes the creator to step down. Whether they’re afraid or just want to take responsibility, it’s a feeling spurred by the public. The same goes for platform-induced deplatforming, though maybe not at the same scale. Take the example of SSSniperwolf and Jacksfilms: the former doxed the latter, and though YouTube was criticized for responding slowly, the public’s pressure did lead to it demonetizing SSSniperwolf. The platform acted according to public pressure.
So what does this two-pronged approach to deplatforming teach us? In both cases, we learn that deplatforming largely functions through public pressure. It is a mechanism that we, as the public, get to use by pressuring the relevant bodies. Oftentimes the pressure is nonspecific, directed at both the platform and the creator, and it does not much matter which caves first, as long as one does.
So if deplatforming is a tool the public has access to, does it work? Well, yes and no. One paper analyzing the effect deplatforming had on the income and viewership of online creators who moved from YouTube to BitChute found the following:
- Deplatformed creators had a 30% increase in revenue and a 50% increase in viewership on BitChute.
- The overall revenue and viewership of those creators, across both YouTube and BitChute, fell.
Notably, streamers were only able to recoup 5.9% of their previous viewership on BitChute, less than a tenth of what they had. Point being, as much as some might like to argue that the Streisand effect is at play, deplatforming has a distinct negative effect on the creator. SSSniperwolf, after being demonetized by YouTube, undoubtedly lost a significant portion of her income. Removing the financial incentive for creators to continue creating their content after they’ve done something really damaging is undeniably effective.
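At first glance, those two results can seem contradictory, so here’s a minimal sketch of the arithmetic with purely hypothetical numbers (the paper reports percentages, not these raw figures): a 50% gain on a small BitChute base can coexist with a collapse in total viewership once the much larger YouTube base is gone.

```python
# Hypothetical, illustrative numbers only -- not taken from the paper.
youtube_before = 1_000_000  # monthly YouTube views before deplatforming
bitchute_before = 10_000    # monthly BitChute views before deplatforming

youtube_after = 0                       # channel removed from YouTube
bitchute_after = bitchute_before * 1.5  # +50% on BitChute

total_before = youtube_before + bitchute_before
total_after = youtube_after + bitchute_after

print(f"BitChute viewership change: {bitchute_after / bitchute_before - 1:+.0%}")
print(f"Total viewership change:    {total_after / total_before - 1:+.0%}")
# BitChute viewership change: +50%
# Total viewership change:    -99%
```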
But it isn’t that simple: actions don’t exist in a vacuum, and deplatforming has side effects and complications. Take the situation as argued by author David Renton: sure, we were able to remove Donald Trump and stem the flow of his hateful rhetoric, but that removal was circumstantial. If it weren’t for Black Lives Matter and advertiser boycotts on the relevant platforms, there’s no guarantee that the deplatforming could have been pulled off, regardless of how much public pressure was placed on those involved. Using deplatforming also risks empowering the right to use it against the left. Or perhaps just empowering them, period, as with the example of Laura Loomer. It’s hard to recommend deplatforming as a strategy when you consider the 150k votes she received. The goal is to prevent someone from having a voice, not jump-start their political career!
> “[w]hen users are banned from mainstream platforms, they become wholly dependent on the fringe alternatives […] This may pose a societal risk since fringe platforms are believed to facilitate the emergence of radical narratives and the spread of hate speech” (Mekacher et al., The systemic impact of deplatforming on social media)
It’s also worth paying attention to how the conversation around deplatforming is had. Using moral foundations theory, one writer, Zhifan Luo, posits that arguments in favor of deplatforming focus on “care” morality (compassion and empathy), while arguments against focus on fairness, loyalty, and/or authority morality. That is to say, this discussion is being had on different moral fronts. If you’re going to argue that deplatforming isn’t just or fair, understand that the people you’re arguing with may simply be working under a moral system where mitigating a harm being done takes priority over fairness. If we want to find common ground in this discussion, we must first understand where we stand morally and how that relates to where others stand.
So: to deplatform or not to deplatform; that is the question. Deplatforming is a tool that we, the public, have access to. Do we use it? Well, we know it can work, and that up close it can have the effects we want it to. But we’ve also learned that it is much more complicated when considering a broader view of the situation. When it comes down to it, whether each of us personally believes deplatforming is justifiable comes down to where our moral foundations lead us. Deplatforming can work (it can silence the voice and empty the wallet of those spreading hatred and harm), but it can just as easily lift those terrible voices all the way to the decision-making table.
Deplatforming is not the only option. Alternatives are discussed in all of these articles:
- Alternatives to Deplatforming: How could we stop the joke before it begins to harm?
- Deplatforming: The Pros, Cons, and Alternatives
- An Empirical Analysis of Soft Moderation Interventions on Twitter
Given that deplatforming is not the only option (like the tweet-labeling discussed in the last article), the takeaway is simple: deplatforming can work, for it is an undoubtedly effective tool, but that doesn’t necessarily mean it should be used. Employing it without regard for unintended consequences or possible alternatives is a recipe for disaster. The way forward is through a careful understanding of the moral divides within the conversation around deplatforming, along with an understanding of how deplatforming impacts creators, their audiences, and the surrounding social ecosystem.
Footnotes
1. This trichotomy is taken from Nick Gillespie’s Self-Cancellation, Deplatforming, and Censorship: A Taxonomy of Cancel Culture. His piece is fervently libertarian, but still offers a good lens through which to understand deplatforming.