Navigating the First Amendment in the Digital Age: A Discourse on Freedom of Speech and Social Media
Abigail Harrison
8-minute read
The rise of social media has sparked a nationwide debate over the boundaries of our First Amendment right to freedom of speech. As this liberty is increasingly exercised online, many are questioning whether there should be limits and, if so, where they should be drawn. This contentious issue has even reached the Supreme Court, as users grapple with the consequences of imposing restrictions on these platforms.
Freedom of speech—a principle that protects an individual’s ability to express their opinions without fear of censorship, legal sanction, or retaliation—is a fundamental value of American democracy. It allows for criticism of those in power, the use of vulgar language in political expression, and even symbolic acts such as flag burning, provided these actions do not infringe on others’ rights or incite violence.1 This liberty, which the United States has upheld for nearly 250 years, has found a new platform in the digital age: social media.
Platforms such as Instagram, TikTok, Facebook, and X (previously known as Twitter) have evolved into contemporary public squares, providing individuals with a space to voice their opinions freely. However, it’s essential to recognize that these platforms are not merely passive outlets of information. They are governed by sophisticated algorithms that determine what content is amplified, suppressed, or even entirely removed.
This intricate configuration has created a wedge between social media corporations attempting to regulate misinformation and users who perceive this as a violation of their civil rights. This conflict is particularly prominent in the context of political discourse.
The COVID-19 pandemic sparked a surge in American social media usage, with participation skyrocketing to over 302 million users.2 As a result, mitigating misinformation became a primary task for social media strategists. For example, Georgia representative Marjorie Taylor Greene was suspended from Twitter after she claimed masks and vaccines did not work.3 Similarly, Children’s Health Defense, a non-profit group led by Robert F. Kennedy Jr., had its accounts suspended on Instagram and Facebook after it continually promoted misleading claims about the COVID-19 vaccine.4
Kennedy equated Facebook’s actions with state-imposed censorship, despite Facebook’s prerogative as a private entity to enforce its own misinformation policies. This debate reached a peak during the 2020 presidential election, amidst rampant misinformation about voter fraud.
The degree of anonymity that social media platforms provide further complicates the issue. Users can conceal their identities behind pseudonyms, facilitating the distribution of false information and certain hate speech without fear of personal repercussions. This anonymity can embolden individuals to circulate unverified or misleading content, exacerbating the spread of misinformation and claims that incite violence or prejudicial actions against protected groups.
Social media platforms have instituted policies to combat specific hate speech and misinformation, but their enforcement is often criticized as inconsistent. Some argue that these platforms should intensify their efforts to combat hate speech, while others contend that such measures infringe upon users’ First Amendment rights.
One example is political commentator Candace Owens. Despite multiple suspensions from YouTube for disseminating anti-Semitic and racist content that perpetuates stereotypes and falsehoods, Owens continually finds her way back to the platform to voice her views.
Currently, the Supreme Court is scrutinizing a landmark case, Murthy v. Missouri. The case stems from efforts early in the Biden administration to urge social media platforms to remove posts promoting misinformation about the COVID-19 pandemic and the 2020 presidential election.5
A U.S. district court judge ruled that the intervention by White House officials and some federal agencies infringed upon the First Amendment’s guarantee of free speech.6 The judge contended that they were “coercing” or “significantly encouraging” social media sites to moderate content in a specific manner. Consequently, an injunction was issued, curtailing the Biden administration’s interactions with these platforms on various issues.
During oral arguments, Supreme Court justices expressed reservations about a ruling that would broadly curtail the government’s capacity to communicate with social media platforms.7 They voiced concerns about potentially impeding officials’ ability to collaborate with social media CEOs on matters of importance.
The outcome of Murthy v. Missouri could have far-reaching implications, recalibrating the balance between freedom of speech, hate speech, and the proliferation of misinformation on social media platforms in the digital era.
In another recent legal case, Elon Musk, the owner of X (formerly known as Twitter), was sued by the Center for Countering Digital Hate (CCDH), a non-profit organization committed to countering misinformation and hate speech.8 The CCDH had been outspoken in its critique of X, highlighting the platform’s struggles with hate speech and misinformation.
Musk retaliated with a lawsuit against the CCDH, accusing them of orchestrating a “scare campaign” aimed at dissuading advertisers. However, a U.S. District Judge dismissed Musk’s lawsuit, ruling in favor of the CCDH’s right to critique X’s policies. The judge’s decision underscored that the lawsuit seemed more intent on penalizing the CCDH for their criticism rather than addressing the issues they spotlighted.
The digital age has ushered in a new era for the First Amendment. The proliferation of social media platforms has amplified the reach of free speech, giving everyone a platform; however, it has also introduced new challenges, such as the spread of misinformation and certain forms of hate speech.
Addressing these challenges calls for innovative solutions. One potential approach is the development of robust, transparent, and accountable content moderation policies. Such policies could provide clear guidelines on what constitutes misinformation and hate speech, helping to maintain the integrity of online discourse.
Moreover, promoting digital literacy can equip users with the skills to navigate the digital landscape effectively. This could involve collaborations between educational institutions, tech companies, and policymakers.
The journey towards balancing free speech and responsible use of social media is a complex one. It requires careful navigation to ensure that we uphold the principles of the First Amendment while fostering an informed and respectful online discourse. The path forward will undoubtedly be challenging, but it is a challenge that needs to be embraced as we continue to shape the future of free speech in the digital age.
____________
1 Fisher, Deborah. “Social Media.” Free Speech Center at Middle Tennessee State University, October 24, 2023. https://firstamendment.mtsu.edu/article/social-media/
2 Dixon, Stacy J. “Social Media Usage in the United States - Statistics & Facts.” Statista, December 13, 2023. https://www.statista.com/topics/3196/social-media-usage-in-the-united-states/#topicOverview
3 Alba, Davey. “Twitter Suspends Marjorie Taylor Greene’s Account.” The New York Times, August 10, 2021. https://www.nytimes.com/2021/08/10/technology/twitter-suspends-marjorie-taylor-greene.html
4 “RFK Jr.'s Anti-Vaccine Group Kicked Off Instagram and Facebook.” NBC News, August 18, 2022. https://www.nbcnews.com/tech/internet/rfk-jrs-anti-vaccine-group-kicked-instagram-facebook-rcna43830
5 Murthy v. Missouri, No. 23-411 (U.S. argued Mar. 18, 2024).
6 Quinn, Melissa. “Supreme Court wary of restricting government contact with social media platforms in free speech case.” CBS News, March 18, 2024. https://www.cbsnews.com/news/supreme-court-government-pressure-social-media-free-speech/
7 De Vogue, Ariane. “Supreme Court considers when government can block followers on social media.” CNN, October 31, 2023. https://www.cnn.com/2023/10/31/politics/supreme-court-social-media-first-amendment/index.html
8 “Judge tosses out X lawsuit against hate-speech researchers, saying Elon Musk tried to punish critics.” CBS News, March 25, 2024. https://www.cbsnews.com/news/elon-musk-x-lawsuit-dismissed-hate-speech/