The much-misunderstood Section 230 of the 1996 Communications Decency Act is in the spotlight again. Twitter and, to a lesser extent, Facebook suppressed reporting that Joe Biden abetted his son Hunter's cashing in on the then-vice president's political influence. Since there are plenty of alternatives to these social-media platforms, their actions merely drew more attention to the story while calling into question their qualification for Section 230 immunity from lawsuits.

The Biden news was broken by the New York Post, the nation's fourth-largest daily by circulation, whose Twitter account (with 1.8 million followers) remains suspended as of this writing, irrationally so, given that the social-media site is no longer blocking other users from sharing links to the reports that prompted the Post's account to be locked. Twitter also locked the account of Kayleigh McEnany, the White House press secretary, as well as that of the Trump reelection campaign.

Twitter's bogus rationale for the suspensions was the claim that the Biden information had been "hacked" from a laptop computer that appears to have belonged to Hunter. In fact, there is no evidence that the emails, photographs, videos, and other materials on the laptop were hacked or otherwise misappropriated. Hunter suffers from drug addiction and is notoriously erratic. The laptop was brought to a repair shop in Delaware and never reclaimed. The shop owner, in addition to being given consensual access to the data, reported it to the FBI. Plus, Fox News reports that the work order prepared when the computer was dropped off appears to bear Hunter Biden's signature.

A hypothetical: Let's say information that really had been hacked was damaging to the Trump campaign, Republicans generally, or conservatives. Is there any doubt that Twitter would readily permit the free exchange of that information? Of course not. Twitter and its allies on the left would insist that the hacked information was newsworthy political data, that any unilateral effort to suppress it would be futile, and that if it weren't authentic then the victims would say so.

In my view, Twitter is playing games. It is not a place where everyone's voice is equally welcome; it is a Democratic partisan that occasionally censors information and people it finds politically disagreeable. Because Twitter poses as a non-partisan medium of exchange that does not engage in political-viewpoint discrimination, it is struggling to camouflage its suppression of news harmful to Democrats as good-faith, ideologically neutral censorship: portraying its actions as discouragement of cyber-theft, or the purging of "Russian disinformation" (another knee-jerk claim that lacks supporting evidence).

Immunity Is a Benefit, Lack of Immunity Is Not Punishment

As one would expect, Twitter's poor judgment has calls to repeal or drastically amend Section 230 raining down from Washington. To be sure, the provision does need some tinkering, but the rancor against the statute is misplaced. Section 230 provides a modest, salutary statutory immunity. The problem here is not the safe harbor; it is that Twitter should not be entitled to its protection unless it meets the qualifying conditions.

The matter is definitional. Twitter is claiming to be nothing more than an interactive computer service, as that term is defined in Section 230(f)(2). But it is patently more than that. It is also a content provider, as that term is defined under Section 230(f)(3), because it partially develops the substantive presentation of information by engaging in politically motivated content discrimination.

To be sure, it does not do this all the time. But understand the legal landscape: The issue here is not regulating behavior (i.e., the government telling Twitter how it must operate) or punishment (i.e., the government fining Twitter for its behavior). The issue is qualification for a legal benefit, viz., immunity from liability that publishers of arguably actionable content ordinarily face. That is a benefit that the government provides, but only to entities that comply with the terms on which the benefit is offered.

No one sensible is claiming that Twitter's partisan censorship is illegal. Twitter is not the government; it is a private actor. It need not enable free speech. It is perfectly free to be openly progressive in its politics, and to suppress conservative or Republican viewpoints, just as, say, The New Republic is. Twitter has not committed a legal wrong by suppressing a politically damaging story in order to help Joe Biden's presidential campaign.

But when we talk about denying Section 230 immunity, we are not talking about penalizing Twitter. Section 230 immunity is a legal privilege to be earned by compliance with the attendant conditions. If an entity fails to comply, that just means it does not get the privilege; it does not mean the entity is being denied a right or being punished.

To be a mere interactive computer service entitled to immunity from speaker/publisher liability, a platform must refrain from publishing activity, which includes suppressing one point of view while promoting its competitor. Twitter is well within its rights to censor its partisan adversaries; but in doing so, it forfeits the legal privilege that is available only to interactive computer services that do not censor on political or ideological grounds.

To analogize, think of a non-profit corporation. If the non-profit wants immunity from taxation, which is a benefit Congress has prescribed in Section 501(c)(3) of the tax code, then it must refrain from supporting political candidates. If the non-profit engages in that kind of political activism, then it doesn't matter whether we're talking about a single candidate or a steady stream of them. Refraining from all such support is the condition. If the non-profit fails to meet the condition, it has no claim on the benefit, period. That does not mean it is wrong for the non-profit to support candidates, much less that it must stop doing so or stop doing business. It just means that, by supporting a candidate, it fails to comply with the statutory condition and therefore no longer qualifies for the benefit.

Of course, many would lament that non-profits do worthy charitable and educational work, and that without the tax immunity they can't survive, just as they'd protest that Twitter's enabling of the instantaneous, global exchange of information is socially valuable but could be lost, or at least badly degraded, if it had to worry about liability for the content transmitted. But there's an easy fix, readily available in the power of the entities themselves: Non-profits should choose not to support political candidates and Twitter should choose not to engage in political-content discrimination, because any upside in doing these things is demonstrably outweighed by the costs to both the entities and society.

Diversity of Political Discourse

The big misimpression about Section 230 is that it is a regulatory scheme. It's not.

The statute is a vestige of the time before social-media platforms existed, at least in the gigantic, life-dominating forms they take today. In enacting Section 230, Congress sought to promote information-sharing on the Internet by immunizing platforms and their users from the liabilities the law has traditionally imposed on those who produce content: writers, speakers, and publishers, who are subject to being civilly sued or, in some instances, criminally prosecuted for harms caused by the material they disseminate.

Supreme Court Justice Clarence Thomas has recently explained that, in giving Section 230 immunity a very broad (he implies, too broad) interpretation, federal courts have relied less on the modest text of the immunity provisions than on the statute's ambitious statements of congressional purpose to promote a "vibrant and competitive free market" with minimal government regulation. Justice Thomas makes a number of sound points. And we should note that his opinion (concurring in the Court's refusal to review the Section 230 case at issue) is, in essence, simply a suggestion: In a more appropriate case, the justices should examine the lower courts' propensity to construe Section 230 extravagantly. But as long as we're on the subject of Congress's aspirational statements in Section 230, this one seems highly relevant (my italics):
The Internet and other interactive computer services offer a forum for true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity.
We had plenty of political discourse before 1996. In determining that it was important to boost emerging interactive computer services at that time, Congress explicitly emphasized hope for "true diversity" of political viewpoints. It was not Congress's purpose to supplement the Left's dominance of the dissemination of news and political commentary.

Qualifying for the Safe Harbor

As noted above, Section 230 defines a "content provider" as "any person or entity that is responsible, in whole or in part, for the creation or development of information." An entity that shapes the information it disseminates, such as by choosing to disseminate or suppress information by dint of the political message it conveys, is a content provider. That distinguishes such an entity from being a mere "interactive computer service," which is just a medium of information exchange by multiple users across the Internet, like a library that makes books available but does not dictate what is in the books or limit availability according to political preferences.

Congress sought to create a "Good Samaritan" safe harbor in the statute. To repeat: This was a modest objective. While lawmakers did not create a thoroughgoing regulatory scheme, neither did they enact sweeping immunity. Instead, Congress crafted a narrow safe harbor from lawsuits that would be available to interactive-computer-service providers that served solely as platforms for exchanging content, not as creators of that content.

The immunity spared the service providers from liability for the transmission on their platforms of information of the kind for which speakers and publishers are subject to suit because the information is objectively objectionable, i.e., it is reasonably believed to be indecent or to promote civil or criminal wrongs.

What Congress was driving at is probably best exemplified by defamation. A neutral, good-faith social-media platform would be in a Catch-22 without immunity: It would not wish to permit the transmission of obscene or offensive content; but if it actively suppressed obviously objectionable material, it would risk being deemed a publisher. That would expose it to liability for any defamatory expression it failed to suppress. Indeed, that is exactly what happened in a 1995 New York State case, Stratton Oakmont v. Prodigy Services Co., which, as Justice Thomas recounts, spurred the enactment of Section 230.

So Congress extended two forms of immunity, passive and active. To encourage the free flow of information, lawmakers said in Section 230(c)(1) that if a platform served only as an interactive computer service (i.e., it did not function as a content provider), it would not be subjected to publisher liability. Concurrently, to encourage interactive computer services to police the transmission of offensive material, lawmakers said in Section 230(c)(2) that decisions to limit or suppress specified types of content would not trigger publisher liability.

Obscene, Offensive, or "Otherwise Objectionable"

Let's focus on the latter, active immunity in Section 230(c)(2), which has two parts: A and B.

Recall my concession at the outset of this essay that, while Section 230 does not need an overhaul, it could use some tinkering. Well, part A, the main immunity clause, is worth quoting, because doing so illustrates both what Congress had in mind and where (as my italics indicate) lawmakers got too vague:
Civil Liability: No provider or user of an interactive computer service shall be held liable on account of (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected[.]
I'll get to part B in a moment. Part A makes clear that Congress sought to extend immunity for the policing of content that was objectively objectionable, i.e., the kind of content that falls outside the First Amendment's protection of "the freedom of speech" (even though the statute does not call for a judicial finding on whether "material is constitutionally protected," since lawmakers recognized Section 230 would ordinarily be applied by laypeople, not legal scholars). Plainly, Congress was sticking close to longstanding free-speech jurisprudence, such as Chaplinsky v. New Hampshire (1942), where the Supreme Court observed:
There are certain well-defined and narrowly limited classes of speech, the prevention and punishment of which have never been thought to raise any Constitutional problem. These include the lewd and obscene, the profane, the libelous, and the insulting or 'fighting' words – those which by their very utterance inflict injury or tend to incite an immediate breach of the peace [footnotes omitted].
Part B simply encourages actions to make available "the technical means to restrict access to material described" in Part A. This again underscores that Congress was targeting objectively objectionable materials. (Note: There seems to be a drafting error in the law, which refers to "material described in paragraph (1)"; Congress appears to have meant "material described in paragraph (A)", i.e., Section 230(c)(2)(A), since there is no paragraph (1).)

Clearly, Congress was not of a mind to grant immunity from speaker/publisher liability to interactive computer services that suppressed content based on their subjective political objections to it. Such a grant would have contradicted Congress's afore-described objective to promote true diversity of political discourse.

This brings us to my highlighting, above, of part A's vague reference to material deemed to be "otherwise objectionable." The term "otherwise objectionable" is actually less vague than it sounds. In interpreting statutes, lawyers follow certain time-honored canons of construction. These include "ejusdem generis" (Latin for "of the same kind"), which instructs that where a list of things is followed by a general term, the general term refers to the same kind or class of thing found in the list. Applying this canon, the general term "otherwise objectionable" obviously means content that is objectionable because it is similar to material that is, as the list reads, "obscene, lewd, lascivious, filthy, excessively violent, [and] harassing." That is, "otherwise objectionable" content means material that reasonable people would consider offensive and that falls outside traditional free-speech protection because it lacks redeeming value.

All that said, Congress's inclusion of a term as literally broad as "otherwise objectionable" invites mischief. A platform such as Twitter could use it to claim it believed Congress was green-lighting the suppression of political speech that social-media platforms found objectionable. Of course, that was most certainly not the purpose of Section 230. Twitter knows this. That is why, in my opinion, it is pretending that it suppressed the Biden story because of concerns about hacking (i.e., in order to be a Good Samaritan that discourages cyber-theft), rather than admitting that it did so to protect its preferred candidate.

In any event, Congress should amend Section 230 to remove "otherwise objectionable." The Justice Department, which last month announced recommended amendments to the statute, has suggested that Congress replace "otherwise objectionable" with "unlawful" and "promotes terrorism." This would usefully reinforce what is already palpable: Section 230 is meant to immunize platforms that endeavor to discourage obscenity, civil harms, and crimes.

No Need for a Government Regulatory Monitor

There is much more to be said about Section 230-reform efforts, but I will close with two points.

First, to repeat, it would not be a penalty to deny publisher immunity to Twitter and other social-media platforms that are content providers because they practice political-viewpoint discrimination. It would simply force them to make a choice. If they want to be progressive media outlets, then they have to bear the same risks as left-wing magazines, websites, and programming. This would be a challenge for them because they transmit copious amounts of content, and they want to do it instantaneously. But media outlets that shape content have to avoid defamation and other harms; if they don't, they have to bear the legal costs.

Twitter would carp about this, of course, because it would seem like a sea change. But that is only because the platform has been two-faced. It became gigantic by representing itself as a mere transmission service that would give everyone a voice. If it had honestly marketed itself as a progressive platform hostile to conservative voices, and a partisan platform hostile to Republicans, it would not have grown into what it has become.

On the other hand, if Twitter wants immunity from publisher liability, it should have to do what the law requires to earn immunity: Refrain from policing for political content. It can still put out as much progressive content as it wants, and, since Twitter execs consciously identify as progressive, that is certain to be a significant majority of its political content. But the immunity is reserved for policing against specific kinds of offensive expression; it does not countenance viewpoint discrimination. If you want the immunity, you have to comply with the conditions on which it is extended.

Second and finally, there is no need for a new government regulator of social media. The law prescribes the conditions of immunity. Twitter either qualifies for immunity or it does not. If people or entities believe they have been defamed or otherwise harmed by tweets, they can sue Twitter, which can then try to claim immunity. If Twitter can show that it has complied with the statutory conditions for immunity, the suit will be dismissed. In short order, few if any additional suits would be brought because pointless litigation is prohibitively expensive.

On the other hand, if a defamation plaintiff were to establish that Twitter is not a good-faith interactive computer service, but rather a content provider masquerading as such a service, Twitter could be liable, depending on what steps it could or should have taken to avoid the harm caused. At that point, Twitter would have to decide for itself whether engaging in political-viewpoint discrimination as a show of solidarity with its fellow progressives was worth the cost of publisher liability and lawsuits.

This is not a regulation challenge. We don't need a government body to oversee political expression. Twitter and other social-media platforms want to be spared litigation risk, just like 501(c)(3) non-profits want to be spared taxation. To earn the immunity, Twitter has to comply with the terms, which should include no political-viewpoint discrimination. It's not more complicated than that, and it is eminently fair.
Andrew C. McCarthy is a senior fellow at National Review Institute, an NR contributing editor, and author of Ball of Collusion: The Plot to Rig an Election and Destroy a Presidency. @AndrewCMcCarthy