Just to be clear: I don't support Nazis, I don't agree with Nazis, and I believe the atrocities they perpetrated rank among the worst humanity has ever committed.
Now, on to something a little less black and white — content moderation, or in this case, moderating the content of said Nazis. Substack is the latest platform/media company to discover that content moderation is a thankless task. Facebook/Meta has constantly tied itself in knots trying to moderate without moderating. Twitter/X was much the same, to the point where it left the door open to an anti-moderation owner (though even he moderates when politics demand it). Whenever a platform goes beyond being a place to create content and starts using algorithms to choose what you see, curating recommendations, or monetizing said content, it finds itself in the same tangled web. The biggest hurdle with moderating is that once you start, it's hard to draw the line.
Every single viewpoint has a counterpoint. The Earth is round. No, it isn't! Nazis are bad. No, white supremacy is great! People should love whoever they want. No, Jesus didn't talk about that in the Bible! And every viewpoint has people on both sides, adamant that they have the moral high ground.
So, who decides, and where do you stop?
And therein lies the dilemma. I wasn't 100% sure, but I think I was on board with Substack's initial decision. It was basically, "fuck it, just leave the mad Nazi bastards to it." Or, in Substack's words —
"But some people do hold those and other extreme views. Given that, we don't think that censorship (including through demonetizing publications) makes the problem go away—in fact, it makes it worse.
We believe that supporting individual rights and civil liberties while subjecting ideas to open discourse is the best way to strip bad ideas of their power."
Yes, it's another way of saying, "We're doing nothing." But I think it was the right call. So long as Substack doesn't promote the content through algorithms, why not just leave these people to it? We, as users, have all the tools we need to remove them and their publications from our lives should we ever stumble upon them.
I get the argument about the platform profiting off of them, but it turns out "profiting" is a bit of a stretch.
In Shalom Auslander's post, The Nazi in the Haystack, he wrote this zinger —
“It might reasonably be said that to be considered a major platform in the new media landscape, that to claim you have reached a certain mass, you have to have at least a base number of Nazis. Until you pass a certain Nazi Threshold, you haven't really "arrived," media-wise. Now I don't know what the base Nazi Number is, I'm not in the new media biz, but I'm pretty sure it's more than six, so in actuality, Substack might have a Nazi shortage. Substack may be running low on Nazis.
The six Nazis have a combined 29 paid subscribers.
There are at least 2,000,000 paid subscribers on Substack.
Thus .00145% of the paid subscribers on Substack subscribe to Nazis. If that's enough to make you leave Substack, I suggest you leave Planet Earth while you're at it. The purity you're looking for doesn't exist.”
Yes, a whole six.
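And for what it's worth, Auslander's percentage holds up. Here's a minimal sanity check in Python, using only the two figures cited in the quote above (the variable names are mine):

```python
# Figures as cited in Auslander's quote above.
nazi_paid_subscribers = 29
total_paid_subscribers = 2_000_000

share = nazi_paid_subscribers / total_paid_subscribers * 100
print(f"{share:.5f}% of paid subscribers")  # prints: 0.00145% of paid subscribers
```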
I mean, really? Research done by Public shows that the reach of these publications is minimal, at best. Jesse Singal got his hands on the email that Substack sent in response to Platformer’s reporting on the story. Part of it read, “None of these publications had paid subscriptions enabled, and they account for about 100 active readers in total.”
Six publications. 100 readers. No paid subscriptions enabled. In reality, the Nazi problem is a non-issue, just a dose of moral panic. I've done zero research for this, but I'd assume there are far more Substack newsletters that are racist, homophobic, anti-trans, hateful to women, or extremist on either side of the divide. Why not ban every single one of them too? Where's the uproar there?
Still, it’s brought up the thorny issue of content moderation, so let’s continue.
I recently read Adam Singer's post, The Internet is the real world, and while his post isn't about any of what follows (it's worth a read regardless), it got me thinking about how true that title is in a different sense. The internet is like the real world in that it's full of assholes, and it can be quite a shitty place to exist. The real world is much the same; whenever you walk into a room, it likely contains someone whose views and beliefs you would find morally abhorrent. In every bar, in every restaurant, in every office, in every subway carriage, in every single walk of life, there are people who don't vibe with you. You can't remove them from planet Earth, so in real life we learn to co-exist with them, usually by blocking them out. We don't engage. We ignore them. In some cases, we take on the more noble (or perhaps naive) pursuit of engaging with them in the hope of understanding and then changing their viewpoints — but this is often a pointless endeavour.
The internet is much the same. There are assholes everywhere. Keyboard warriors galore. But here's where the internet differs from real life: despite the array of tools at our disposal to block, hide, or remove people from our feeds and inboxes at the touch of a button, we always choose to go on the offensive. We don't ignore them; we engage. We don't leave them shouting into the void; instead, we hunt for their takes, trying desperately to find something that offends us so we can expose it and then demand that platform heads take action. And, as Substack did this week, platforms find themselves backed into a corner, where they inevitably have to heed the bigger, louder crowd. (Substack is now going to remove these six accounts, as they "violate rules against incitement to violence.")
Is it a victory? I'm not so sure. Of course, fuck Nazis, but the bigger picture is less clear.
My thinking is this: if we push people with views and opinions we don't agree with off of platforms, we push them into more extreme platforms and groups, which only concentrates and amplifies those viewpoints until they spill over into something far uglier.
And that's not a good outcome either.
Substack's initial response was the right one because content moderation is an impossible tightrope to walk. Do nothing, and you're accused of profiteering off of or promoting racism/extremism/hate speech/[insert other]. Do too much, and you're accused of being a lefty "woke-ist" with an agenda to silence every opinion but your own.
Moderation is messy.
I think it's time to rely less on the platforms to moderate, or at least to lower our expectations of what moderation can achieve. Let's be frank: despite the tireless work of good people, the platforms suck at it. AI and algorithms suck at it too. The internet is, and always will be, full of shitty people. We have to accept that and deal with it as best we can.
It's time for us as users to be grownups about it and moderate our own digital lives.