Regulate Social Media to Mitigate Societal Harm
Moshe Vardi, “Regulate Social Media to Mitigate Societal Harm,” Rice University’s Baker Institute for Public Policy, November 8, 2024, https://doi.org/10.25613/4N7W-Q459.
This brief is part of “Election 2024: Policy Playbook,” a series by Rice University and the Baker Institute that offers critical context, analysis, and recommendations to inform policymaking in the United States and Texas.
The Big Picture
- As social media has evolved, and as more examples of its harm to vulnerable individuals emerge, public concern over its adverse societal impact continues to grow.
- In less than 30 years, social media has changed the way the world communicates and consumes information and become a vehicle used to hurt, traumatize, and even kill.
- Despite these recognized challenges, there is no agreement on how social media speech should be regulated.
- Recent Supreme Court rulings suggest a misunderstanding of how social media platforms operate, yet there are solutions to address the underlying issues.
- Effective changes to social media policies are likely to come from citizens rather than from lawmakers or the courts.
Summarizing the Debate
As with many other vexing societal problems, the social media question ended up on the docket of the U.S. Supreme Court (SCOTUS) in 2023. SCOTUS decided two cases by upholding Section 230 of the Communications Decency Act of 1996, which offers immunity from liability to providers and users of an “interactive computer service” who publish information provided by third-party users. Then, in 2024, SCOTUS decided a pair of cases — brought by a trade group representing social media platforms — that challenged Texas and Florida laws seeking to regulate the content-moderation choices of large platforms. SCOTUS ultimately decided that these laws violated the First Amendment and sent the cases back to lower courts for further review.
As Justice Elena Kagan admitted during oral arguments in 2023, however, the SCOTUS justices are not “the nine greatest experts on the Internet.” She was right. The court’s Section 230 decisions demonstrate that the Supreme Court does not seem to understand the “more is different” principle. Nobel laureate physicist Philip W. Anderson introduced this principle in a famous 1972 paper: “more is different” means that scale and complexity are fundamentally important to understanding any system, because complex systems exhibit behaviors that cannot be fully understood using the same terms and principles as their simpler counterparts. This is certainly the case with today’s social media platforms, which are exponentially more complex than their early predecessors.
It is important to note that Facebook and Twitter did not invent social media in the 2000s. Usenet, a computer-based distributed discussion system, was created in 1980. While it was similar to earlier dial-up bulletin board systems (BBS), its scope was worldwide. As I have previously noted, “When the internet became commercial in the mid-1990s, Usenet quickly acquired millions of new users outside of its earlier academic-research audience. The quality of Usenet discourse rapidly declined and its earlier users mostly abandoned it. Why? Because more is different!”
As Facebook and Twitter scaled up their user numbers in the 2000s, they encountered the same phenomenon. Facebook now has around 3 billion users, while Twitter, now X, has more than half a billion. Without aggressive content curation (deciding what content to display to a specific user), as distinct from content moderation (deciding what content to delete from the platform), large social media networks are simply not useful, because the ratio of high-quality to low-quality content is so low. In fact, a couple of years ago, to mitigate this problem, Facebook even removed the “recent” option, which enabled users to see an uncurated stream of posts on their wall.
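The curation/moderation distinction matters for the legal analysis that follows, so a concrete illustration may help. The Python sketch below separates the two functions; the field names, the policy flag, and the interest-overlap ranking are invented for illustration and do not describe how any actual platform works:

```python
# Illustrative sketch only: real platforms use large machine-learning
# models, not simple rules like these.

def moderate(posts):
    """Content moderation: decide what to DELETE from the platform."""
    return [p for p in posts if not p["violates_policy"]]

def curate(posts, user_interests, feed_size=10):
    """Content curation: decide what to SHOW a specific user."""
    def relevance(post):
        # Stand-in for engagement-prediction models: score a post by
        # its overlap with this particular user's interests.
        return len(set(post["topics"]) & set(user_interests))
    return sorted(posts, key=relevance, reverse=True)[:feed_size]

posts = [
    {"id": 1, "topics": ["politics"], "violates_policy": False},
    {"id": 2, "topics": ["spam"], "violates_policy": True},
    {"id": 3, "topics": ["science"], "violates_policy": False},
]

# Moderation removes post 2; curation then ranks post 3 first
# for a science-minded user.
feed = curate(moderate(posts), user_interests=["science"])
print([p["id"] for p in feed])  # [3, 1]
```

Two different users see two different feeds drawn from the same moderated pool; that per-user selection is the editorial act at issue.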
Expert Analysis
SCOTUS seems to misunderstand the fundamental concept of algorithmic content curation. “Defendants’ mere creation of their media platforms is no more culpable than the creation of email, cell phones, or the internet generally,” wrote the court in its Twitter v. Taamneh decision, adding that “defendants’ recommendation algorithms are merely part of the infrastructure through which all the content on their platforms is filtered.”
In other words, the Supreme Court does not seem to recognize that, unlike with phone or email, what you see on social media is what the platform decides to show you. In fact, the very term “platform,” which implies passivity, is misleading. Facebook and X are in the content-curation business, much like The New York Times when it selects which letters to the editor to publish. Compounding the issue, SCOTUS argued that “the algorithms have been presented as agnostic as to the nature of the content,” when the whole point of curation algorithms is to fit content to users.
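A curation algorithm cannot be agnostic about content, because personalization works precisely by matching content features to user features. A minimal sketch of that idea, using an invented interest-vector representation rather than any platform’s actual model:

```python
# Toy illustration: a personalized score depends on BOTH the user and
# the content, so the algorithm cannot be "agnostic as to the nature
# of the content." The interest weights below are invented.

user_profile = {"politics": 0.9, "sports": 0.1, "science": 0.4}
post_topics = {"politics": 1.0, "science": 0.2}

def personalized_score(user, post):
    # Dot product of user interests and content topics: change the
    # content and the score changes.
    return sum(user.get(topic, 0.0) * weight
               for topic, weight in post.items())

print(personalized_score(user_profile, post_topics))
# 0.9 * 1.0 + 0.4 * 0.2 = 0.98
```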
In July 2024, the court paused initiatives from Texas and Florida aimed at restricting how social media companies manage user-posted content. Writing for the court in a ruling that strongly defended the platforms’ free speech rights, Justice Elena Kagan stated that platforms, like newspapers, deserve protection from government intrusion when determining what to include or exclude from their space. “The principle does not change because the curated compilation has gone from the physical to the virtual world,” Kagan wrote. But this reasoning, which treats content curation as an editorial function, is inconsistent with the reasoning used in the court’s Twitter v. Taamneh decision to maintain the platforms’ Section 230 protection.
Policy Options
In view of the Supreme Court’s apparent misunderstanding of internet technology, it seems that we must wait to hear from the U.S. legislative branch, where there appears to be bipartisan consensus that something ought to be done about social media, as demonstrated by the House of Representatives passing a ban on the Chinese-owned TikTok social media platform. In a recent Wired article, social media experts Jaron Lanier and Allison Stanger offered a simple solution: “Axe 26 words from the Communications Decency Act. Welcome to a world without Section 230.”
Eliminating Section 230’s liability protection does not, however, completely address the challenge of regulating speech on social media. The basic policy question of speech regulation is inseparable from another policy concern, which is how to deal with the concentration of power in technology. Social media networks are essentially walled gardens, where a handful of companies are fully in control. To counter that, Micah Beck of the University of Tennessee and social media expert Terry Moore propose breaking up the integrated digital monopolies vertically to enable competitive services based on a shared data structure. Author and journalist Cory Doctorow argues that forcing digital monopolies to open their APIs — which allow different software applications to talk to one another — should suffice. Policymakers should consider both options carefully when seeking to regulate social media speech.
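To make Doctorow’s proposal concrete, here is a minimal sketch of what an open-API world could look like: a third-party client reading a user’s posts from two competing platforms through a common endpoint and applying its own ordering. The endpoint path, the JSON fields, and the platform URLs are all assumptions invented for illustration; no such shared standard exists today:

```python
# Hypothetical illustration of interoperability through open APIs.
# The endpoint "/v1/feed", its JSON shape, and the platform URLs are
# invented for this sketch; no common standard currently exists.
import json
import urllib.request

PLATFORMS = [
    "https://api.example-platform-a.com",
    "https://api.example-platform-b.com",
]

def fetch_feed(base_url, user_token):
    """Fetch a user's posts from one platform via the assumed open API."""
    req = urllib.request.Request(
        f"{base_url}/v1/feed",
        headers={"Authorization": f"Bearer {user_token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["posts"]

def combined_feed(user_token):
    """A third-party client merges feeds across platforms and applies
    its own ordering, instead of each platform's curation."""
    posts = []
    for base_url in PLATFORMS:
        posts.extend(fetch_feed(base_url, user_token))
    return sorted(posts, key=lambda p: p["timestamp"], reverse=True)
```

The point is structural: once data crosses an open interface, curation choices need no longer be the exclusive province of the platform that hosts the content.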
The Bottom Line
The Supreme Court cannot be expected to resolve the complex issues surrounding freedom of speech on social media. Social media platforms do have editorial control over content via content-curation systems, so why should they enjoy immunity that traditional publishers do not? The ball is now back in the court of the people to call for effective changes in social media policies.
This material may be quoted or reproduced without prior permission, provided appropriate credit is given to the author and Rice University’s Baker Institute for Public Policy. The views expressed herein are those of the individual author(s), and do not necessarily represent the views of Rice University’s Baker Institute for Public Policy.