Much of the Supreme Court's ruling last month in Moody v. NetChoice, and the focus of the various separate opinions, concerned a technical question: whether NetChoice had made a sufficient showing to satisfy the standard for evaluating a facial challenge to a law. The Court held that neither the Fifth Circuit--in evaluating Texas's law regulating social media platforms' content moderation--nor the Eleventh Circuit--in evaluating Florida's similar law--had applied the correct standard and therefore sent the cases back down for further proceedings. But Justice Kagan's majority opinion also decided that content moderation by a social media platform is protected speech akin to the editorial discretion a newspaper exercises in deciding what letters to the editor--or, for that matter, what stories--to publish. Justice Kagan wrote:
When the platforms use their Standards and Guidelines to decide which third-party content those feeds will display, or how the display will be ordered and organized, they are making expressive choices. And because that is true, they receive First Amendment protection.
The Court went on to explain that the interest in ideological balance asserted on behalf of the Texas law is not a sufficient basis for overcoming that protection:
The reason Texas is regulating the content-moderation policies that the major platforms use for their feeds is to change the speech that will be displayed there. Texas does not like the way those platforms are selecting and moderating content, and wants them to create a different expressive product, communicating different values and priorities. But under the First Amendment, that is a preference Texas may not impose.
So yes, as a technical matter, the Court held only that the lower courts applied the wrong standard. But as a functional matter, the NetChoice decision was a big win for the platforms.
Justice Alito, joined by Justices Thomas and Gorsuch, wrote a concurrence in the judgment that was a de facto dissent on the truly important question. He characterized all of the key language I've just quoted as dicta. And he questioned the majority's rationale for treating social media platforms as akin to newspaper editors.
One line of attack from Justice Alito seems quite clearly misguided. He suggested that because social media platforms use algorithms to moderate content, it's unknown how much, if any, editorial judgment the platforms are actually exercising. That objection fails because the platforms obviously exercise considerable judgment in setting the algorithms' parameters.
For the majority, Justice Kagan gave examples of content that the major platforms disfavor with their algorithms but which they would not be permitted to disfavor under the Texas law. She listed posts that: "support Nazi ideology; advocate for terrorism; espouse racism, Islamophobia, or anti-Semitism; glorify rape or other gender-based violence; encourage teenage suicide and self-injury; discourage the use of vaccines; advise phony treatments for diseases; [or] advance false claims of election fraud."
Thus, Justice Alito was wrong to suggest that platforms are mere robots not engaged in the exercise of genuine editorial discretion. But he made another, more persuasive argument, one that resonates with late nineteenth and early twentieth century ideas (associated with Louis Brandeis, among others) that very large corporations or extremely wealthy individuals can exercise the kinds of power that threaten human freedom in ways that are similar in kind to rights violations perpetrated by governments. Justice Alito pointed to "the enormous power exercised by platforms like Facebook and YouTube as a result of 'network effects.'" He added that "maybe we should think about the unique ways in which social-media platforms influence public thought."
There is a too-easy answer to those concerns. The Constitution, being a limit on government but not private actors (setting aside the 13th Amendment), is not concerned with private limits on free speech or other rights. That answer is not just too easy, however. It's wrong.
Texas and Florida did not contend that the platforms were violating the First Amendment. The core of their argument was that natural persons who want to communicate effectively in the modern age need to use one or more of the handful of social media platforms that, through network effects, have a very substantial user base, so that allowing the platforms to suppress user content as a practical matter suppresses private speech. That suppression is not by itself a First Amendment violation--because it is not a product of state action--but it is a very legitimate concern of a government that wishes to ensure that its citizens have ample opportunities to speak.
At that point, the argument for sustaining regulation of social media companies' content moderation policies can go in one or both of two ways. First, one can say that while social media companies do exercise editorial discretion in setting the parameters of their content moderation algorithms and are thus engaged in First-Amendment-protected activity, government policies that limit that discretion satisfy strict scrutiny. (Justice Alito suggested that the companies are engaged in commercial speech, requiring a lower level of still-heightened scrutiny. That suggestion seems wrong to me. Speech is not commercial merely because it is produced by a commercial actor.)
But strict (or even intermediate) scrutiny is difficult to satisfy, so the second route might be more effective. That second route would reject the majority's assumption that the threshold question of whether the platforms are speaking for First Amendment purposes should be answered by asking only whether they are making choices about what speech to promote. It would recognize that there is in fact a fundamental difference between a newspaper (or blog) that primarily publishes its own content but sets aside some space for letters to the editor (or user comments) and a social media platform whose raison d'être is user-generated content. This second route would recognize that whether to treat the platforms as speakers is a policy question, not a formalistic inquiry.
None of the foregoing is to say that the Texas or Florida laws are constitutional in all or most of their applications. Nor is it to say that they are a good idea as a matter of policy. It is to say only that there is a liberal/progressive case to be made for sensible state or federal laws regulating content moderation by social media platforms.
And that much should have been obvious already when the NetChoice cases were argued. By then, Elon Musk had already bought Twitter, rebranded it X, and turned it into the (worse) cesspool of misinformation, hate, and stupidity that we have come to know. Now that Musk has taken to openly supporting right-wing causes abroad and in the United States, the point should be even more obvious: In what is at best an oligopolistic market of social media platforms, it is not healthy for democracy to allow a single megalomaniac, corporation, or even ostensibly benevolent overseer to set the rules for public discourse, and the First Amendment should not be construed as an obstacle to doing anything about that problem.