What Should Be Done with Section 230?
With all the screaming about "Big Tech," "The Masters of the Universe" (MOTU), "Silicon Valley," and so on, it's time to sort the signal from the noise. President Trump wants to strip Google, Facebook, and Twitter of their immunity from liability over their blatant censorship of conservative voices. Democrats want the censoring strengthened. And some argue that there should be no changes at all, since without CDA 230, we would not have the internet as we know it.
CDA 230? What's so important that it has to be hidden behind this abbreviation? Communications Decency Act Section 230 says (emphasis added):
No provider or user of an interactive computer service shall be treated as *the publisher or speaker* of any information provided by another information content provider.
The legislative history of the Communications Decency Act revolves around three things: the desire to let the internet expand freely without a potful of regulators messing things up; user controls; and the ability to block unwanted pornography, stalking, and harassment coming through your computer screen. Yup. The word "decency" in the title of the act is there because of pornography.
The MOTU have morphed this statute into something the legislators who passed it would find completely unrecognizable. They are happy to "fact-check" any item they don't like. I got hit by this recently.
When I hit "See Why," I was told that this "lacked context. Speaking as a medical professional, that's nonsense. The CDC's own publication found that there was an 86% probability that any mask/no mask difference was pure chance. Put simply, the CDC said masks are not useful. I posted what the CDC published, and I got dinged. This is typical for how egregious the MOTU are getting in censorship. And I have a fairly small audience.
How can we address this? Free speech is a constitutional right, and even if you are a total idiot, you are allowed to speak. You can't make me listen, and that's what "user control" means. I get to block you, just as I did to a foul-mouthed lefty tool on Twitter. And if that troll doesn't like what I have to say, he can block me as well. This isn't difficult. And it doesn't require the MOTU to do anything.
But when GooFaTwit decide to fact-check me, they've stopped acting as a mere "interactive computer service." They hide behind the immunity granted for "any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable." But that provision applies to what I post, not to what they say. They have published a fact check. That makes them a publisher.
Let's go further. If Facebook decides to block my post so no one else can see it, that is also the act of a publisher, since a publisher chooses what it will and will not publish. If Facebook allows your post, it has "published" it. Not allowing mine is the same kind of editorial choice. This is not "good faith" moderation as the statute originally envisioned it.
The MOTU are blatantly corrupt when they prevent my followers, who have the unilateral right to unfollow me, from seeing what I have to say. GooFaTwit are no longer acting within the intent of the statute to protect others from indecency. They've taken it as carte blanche to do whatever they wish with complete impunity.
But if we abolish CDA 230 as President Trump demands, we create a whole new set of problems. Now anyone with a bulletin board who innocently misses a violent post by a crank can be sued for damages. So we seem to be between Scylla and Charybdis. Do we pursue legal action against a "publisher" who can plausibly claim in court to not be a publisher under CDA 230? Or is there another way?
Having pondered this for some time, I believe another option exists. First, we have to maintain a safe harbor for "neutral public forums." If a neutral public forum offered user-selected filters that eliminated certain posts, then the forum would be held harmless for whatever each user chose to block. Of particular importance, in keeping with the original "decency" concern, parental filters might be the default, with various safeguards in place to ensure that only adults access certain threads. Beyond that, only posts that are not constitutionally protected could safely be removed.
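To make the "user-selected filter" idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the Post and FilterPrefs types, the tag names, and the visible_posts function are illustrative assumptions, not any real platform's API. The point it illustrates is that every filtering decision traces back to a choice the user made, so the forum itself never exercises editorial judgment.

```python
# Hypothetical sketch of user-selected filtering on a "neutral public forum."
# All names here are illustrative, not any real platform's API.
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str
    tags: set[str]  # hypothetical labels such as {"adult", "violence"}

@dataclass
class FilterPrefs:
    # Parental filter on by default, per the original "decency" concern.
    blocked_tags: set[str] = field(default_factory=lambda: {"adult"})
    blocked_users: set[str] = field(default_factory=set)

def visible_posts(feed: list[Post], prefs: FilterPrefs) -> list[Post]:
    """Apply only the filters this user chose; the forum stays neutral."""
    return [
        p for p in feed
        if p.author not in prefs.blocked_users
        and not (p.tags & prefs.blocked_tags)
    ]

feed = [
    Post("alice", "family-friendly post", set()),
    Post("foul_mouthed_troll", "obnoxious rant", set()),
    Post("bob", "adult content", {"adult"}),
]

# Default preferences: the parental filter hides tagged adult content.
print([p.author for p in visible_posts(feed, FilterPrefs())])
# -> ['alice', 'foul_mouthed_troll']

# An adult who opts out of the default filter sees everything except
# the users they personally blocked.
adult = FilterPrefs(blocked_tags=set(), blocked_users={"foul_mouthed_troll"})
print([p.author for p in visible_posts(feed, adult)])
# -> ['alice', 'bob']
```

Note that the forum never deletes anything in this model; it only honors each user's own blocklist, which is exactly the "user control" the statute originally contemplated.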
A safe harbor for moderated forums would require a clear public statement of the principles used in content moderation. Each participant would then have to affirmatively consent to that statement. Functionally, this would become similar to what FB calls "closed groups."
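The consent requirement can be sketched the same way. Again, all names here (ModerationPolicy, ClosedForum, and so on) are hypothetical illustrations of my reading of the proposal: the moderation principles are published and versioned, and a user enters the moderated area only after affirmatively accepting that exact version.

```python
# Hypothetical sketch of the consent gate for a moderated ("closed") area.
from dataclasses import dataclass

@dataclass(frozen=True)
class ModerationPolicy:
    version: str
    principles: str  # the clear public statement the safe harbor requires

class ClosedForum:
    def __init__(self, policy: ModerationPolicy):
        self.policy = policy
        self.consented: dict[str, str] = {}  # user -> policy version accepted

    def accept_policy(self, user: str) -> None:
        """Record affirmative consent to the current policy version."""
        self.consented[user] = self.policy.version

    def may_enter(self, user: str) -> bool:
        # Consent attaches to a specific version; if the rules change,
        # consent must be renewed before re-entry.
        return self.consented.get(user) == self.policy.version

# A user who has not accepted the posted rules stays in the open area.
forum = ClosedForum(ModerationPolicy("2021-01", "We fact-check medical claims."))
assert not forum.may_enter("carol")
forum.accept_policy("carol")
assert forum.may_enter("carol")
```

The design point is that moderation happens only behind an explicit, recorded agreement, which is what makes the closed area "similar to what FB calls 'closed groups.'"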
What would become of GooFaTwit under this "CDA 230 v2"? They would have a simple choice. They could elect to be "neutral public forums." A person who does not consent to the rules of the "closed group" area would not be exposed to any "fact-checking," "content moderation," or other editorial intervention, except for constitutionally unprotected speech. This would functionally split the MOTU into "Open FB," "Closed FB," and so on for the others. Should they decline to make this change, they would be classified as publishers and would become liable for what they allow to be posted.
Please note that this adjustment to CDA 230 neither increases nor decreases the MOTU's power to censor content on their platforms. It merely requires them to be explicit about their rules of moderation. They can't moderate the open area, but they can do anything they like in the closed area, as long as they spell it out and get prior consent. This makes it clear that they are operating a "members only" area where fact checks and shadowbanning based on a particular point of view are facts of life.
Of further importance, CDA 230 v2 would not impair the free growth of the internet as we know it. Civil libertarians have noted that CDA 230 is an important safe harbor that has, in large measure, been responsible for the internet freedom we enjoy today. CDA 230 v2 would simply clarify the boundaries within which content moderation operates.