The Supreme Court avoided a final resolution Monday in a pair of cases challenging state laws designed to limit social media companies’ power to moderate content. The ruling dashed an effort by Republicans who had pushed the legislation as a remedy for what they say is a bias against conservatives.
It was the latest case in which the Supreme Court considered, and then sidestepped, a major decision on the parameters of expression on social media platforms.
State laws differ on the specifics. Florida’s bars platforms from permanently banning candidates for political office in the state, while Texas’s bars platforms from removing any content based on a user’s views.
The justices unanimously agreed to remand the cases to the lower courts for review. Justice Elena Kagan, writing for the majority, noted that none of the lower appellate courts had properly analyzed the First Amendment challenges to the Florida and Texas laws.
“In sum, there is much work to be done on both cases,” Justice Kagan wrote, adding, “But that work must be done in a manner consistent with the First Amendment, which does not retire when social media is involved.”
Under the narrow ruling, the state laws remain intact, but so do the lower-court injunctions, meaning both laws remain on hold.
Although the justices voted 9-0 to return the cases to the lower courts, they were divided in their reasoning, with several writing separate concurrences to explain their positions. Justice Kagan was joined by Chief Justice John G. Roberts Jr., along with Justices Sonia Sotomayor, Brett M. Kavanaugh, and Amy Coney Barrett. Justice Ketanji Brown Jackson joined in part.
In a concurring opinion, Justice Barrett hinted at how lower courts might analyze such cases.
Justice Barrett wrote that the federal appeals court hearing the Florida case demonstrated an “understanding of the First Amendment’s protection of editorial discretion” that “was generally correct,” while the appeals court hearing the Texas case did not.
A three-judge panel of the U.S. Court of Appeals for the 11th Circuit, ruling unanimously, largely upheld a preliminary injunction temporarily blocking the Florida law.
By contrast, a divided three-judge panel of the Fifth Circuit overturned a lower court's order blocking the Texas law.
Because the justices avoided making any sweeping statements on the issue, both sides were able to declare victory.
Chris Marchese, director of the litigation center at NetChoice, one of the trade groups challenging the laws, said in a statement that the “Supreme Court agreed with all of our First Amendment arguments.”
Ashley Moody, Florida's attorney general, suggested on social media that the outcome was in the state's favor. “While there are aspects of the decision that we disagree with, we look forward to continuing to defend state law,” she said.
The Biden administration had supported the social media companies in both cases, Moody v. NetChoice, No. 22-277, and NetChoice v. Paxton, No. 22-555.
In the majority opinion, Justice Kagan noted how rapidly the Internet has evolved. Less than 30 years ago, she wrote, justices still felt the need to define the Internet in their opinions, describing it at the time as “an international network of interconnected computers.”
Today, she wrote, “Facebook and YouTube alone have more than two billion users each.”
She described a surge in content that has prompted major platforms to “curb and organize” posts. Platforms sometimes remove posts entirely or add warnings or labels, often in accordance with community standards and guidelines that help sites determine how to treat a variety of content.
Because such sites can “create unprecedented opportunities and unprecedented dangers,” she added, it’s no surprise that lawmakers and government agencies are struggling with how and whether to regulate them.
Other government entities are typically better positioned to respond to these challenges, Justice Kagan noted, but courts still play a critical role “in protecting the speech rights of those entities, just as courts have historically protected the rights of traditional media.”
The laws at issue in these cases, statutes enacted in 2021 by Florida and Texas legislatures, differ in the companies they cover and the activities they restrict. But both, Justice Kagan wrote, limit platforms’ choices about what user-generated content will be shown to the public. Both laws also require platforms to provide reasons for their content moderation choices.
Justice Kagan then provided a clue as to how a majority of the justices might consider applying the First Amendment to these types of laws.
While it was too early for the court to reach any conclusions in the cases, she wrote, the underlying record suggested that some platforms, at least in some cases, were engaged in expressive activity.
“In constructing those feeds, those platforms make decisions about what third-party speech to display and how to display it,” Justice Kagan wrote. “They include and exclude, organize and prioritize, and in making millions of these decisions every day, they produce their own distinctive collections of expression.”
She added that while social media is a newer format, “the essence” is familiar. She compared the platforms to traditional publishers and editors who select and shape the expression of others.
“We have repeatedly held that laws that limit their editorial choices must satisfy the requirements of the First Amendment,” Justice Kagan wrote. “The principle does not change because the curated compilation has moved from the physical to the virtual world.”
So far, however, the justices have avoided ruling definitively on social media platforms’ liability for content, even as they have recognized the networks’ enormous power and reach.
Last year, the justices declined to hold tech platforms liable for user content in a pair of rulings, one involving Google and the other Twitter. Neither decision clarified the scope of the law that shields platforms from liability for those posts, Section 230 of the Communications Decency Act.
The Florida and Texas laws at issue Monday were prompted in part by some platforms’ decisions to ban President Donald J. Trump following the Jan. 6, 2021, attack on the Capitol.
Supporters of the laws said they were an attempt to combat what they called Silicon Valley censorship. The laws, they added, promoted free speech by giving the public access to all points of view.
Opponents have argued that the laws trample on the platforms’ First Amendment rights and will turn them into cesspools of filth, hate and lies.
A ruling that tech platforms have no editorial discretion in deciding which posts to allow would have exposed users to a greater variety of viewpoints, but would almost certainly have amplified the ugliest aspects of the digital age, including hate speech and misinformation.
The two trade associations that challenged the state laws, NetChoice and the Computer & Communications Industry Association, argued that the actions that the Fifth Circuit Court of Appeals called censorship in upholding the Texas law were editorial judgments protected by the First Amendment.
The groups argued that social media companies were entitled to the same constitutional protections enjoyed by newspapers, which are generally free to publish without government interference.
A majority of the justices sharply criticized the Fifth Circuit's decision to overturn a lower court's order that had blocked the Texas law.
Justice Kagan wrote that the Texas law barred social media platforms from using content moderation standards “to remove, alter, organize, prioritize, or disclaim posts in its News Feed.” That law, she wrote, blocks precisely the kinds of editorial judgments that the Supreme Court has previously held are protected by the First Amendment.
She said it was “unlikely that any particular application of the law could withstand First Amendment scrutiny.”
However, in concurring opinions, Justices Jackson and Barrett acknowledged the difficulty of making general pronouncements about how free speech protections online should operate.
Justice Barrett offered one possibility: A social media platform might be protected by the First Amendment if it set rules about what content was allowed on its feed and then used an algorithm to automate the enforcement of those policies. But she said it might be less clear that the First Amendment protected software that determined, on its own, what content was harmful.
“And what about AI, which is evolving rapidly?” she wrote. “What if platform owners handed the reins to an AI tool and simply asked it to remove ‘hateful’ content?”
Olivier Sylvain, a law professor at Fordham University, said Monday’s ruling could open the door for the court or regulators to consider more complicated questions. That could include how to handle commercial discourse online, such as platforms that amplify discriminatory advertising, rather than the political views at the heart of Monday’s ruling.
“Texas and Florida got swept up in an ideological political dispute over claims that social media companies are biased against conservative viewpoints,” he said. “I hope this at least puts that stuff aside so we can start thinking about all the many issues that are much more interesting.”