Censorship or Safety: Discussing the Implications of the Kids Online Safety Act
Joann Fetner
December 2025
10 minute read
I. Introduction
In early October of this year, New York City officials discovered the bodies of two teenage girls atop a subway train. Officials attributed their deaths to “subway surfing,” the act of riding on the exterior of a moving subway car. Although the stunt can be traced back over a century, it has grown increasingly deadly: according to police data, fatalities have risen steadily in recent years, a trend city officials have largely attributed to social media. These platforms expose teenagers to sensational videos of subway surfing, turning a life-threatening act into a source of social clout and thereby encouraging teenagers to attempt it themselves.[1]
The risks of social media extend beyond promoting hazardous behaviours such as subway surfing. On platforms like TikTok, Instagram, Snapchat, and X (formerly Twitter), users are frequently exposed to content that spreads misinformation, racism, and hate speech. Young people are especially vulnerable to these harms for several reasons. For one, during adolescence the areas of the brain that drive the desire for attention, feedback, and reinforcement from peers become particularly sensitive, while the regions responsible for self-control and impulse regulation are still developing, leaving adolescents more susceptible to the emotional and behavioural impacts of online interactions.[2] Today’s adolescents have also largely grown up immersed in social media. Unlike millennials and members of Generation X, whose childhoods were primarily shaped by offline modes of communication, young people now rely heavily on digital platforms to express themselves and connect with others.
Given the prevalence of harmful content in a space that plays a central role in adolescents’ lives, many have begun to question the lasting impact of social media on rising generations. This raises a critical question: should there be restrictions on adolescents’ social media use, and if so, how far should those restrictions go? In this article, I argue that although social media poses certain risks to adolescents, it should not be heavily regulated. To make this case, I will first provide an overview of a current legislative proposal aimed at addressing adolescent online safety, the Kids Online Safety Act. I will then examine the arguments for and against this legislation before concluding with my own assessment.
II. Background on the Kids Online Safety Act
Given the numerous arguments made in favour of regulating adolescents’ social media use, it is unsurprising that various forms of legislation have attempted to address these concerns. One such proposal is the Kids Online Safety Act (KOSA), a piece of federal legislation aimed at enhancing protections for minors on online platforms. If enacted, the bill would impose a “duty of care,” meaning that tech companies and social media platforms would be obligated to take steps to protect minors from foreseeable harms associated with their platforms. KOSA states:
A covered platform shall exercise reasonable care in the creation and implementation of any design feature to prevent and mitigate the following harms to minors: anxiety, depression, eating disorders, substance use disorders, and suicidal behaviors... patterns of use that indicate or encourage addiction-like behaviors by minors…[3]
As such, this “duty of care” could require platforms to take actions such as revising design features and limiting exposure to certain content. Under the legislation, the Federal Trade Commission (FTC) would serve as the primary enforcement agency and would be empowered to sue apps and websites that it believes do not adequately protect minors from harm.[4] Although the bill has not been enacted, earlier versions have passed the Senate, including S.2703 by a 91-3 vote in July 2024, and a new version, S.1748, was introduced in May 2025.
While advocating for the safety of adolescents is largely uncontroversial, whether KOSA offers an effective solution to the risks posed by social media remains a subject of intense debate.
III. Arguments for the Kids Online Safety Act
One of the main arguments in favour of KOSA concerns its potential to protect adolescent mental health. The internet has facilitated the creation and circulation of vast amounts of content, much of which has the capacity to be psychologically harmful. Given the widely recognised susceptibility of adolescents to peer influence, social media can exert a particularly powerful impact on their mental health and overall well-being.
This argument has been supported by the American Academy of Pediatrics (AAP) as well as the former U.S. Surgeon General. In an article published last year, the AAP asserted that social media poses a profound risk to young people, advocating for healthier digital environments for adolescents through the implementation of KOSA. According to AAP President Benjamin D. Hoffman, “social media and the internet were not designed with kids in mind.”[5] Moreover, in 2023, then U.S. Surgeon General Vivek Murthy urged Congress to more fully recognise the risks that social media poses for adolescents, citing studies linking its use to symptoms of anxiety and depression. Importantly, Murthy wrote: “Nearly every teenager in America uses social media, and yet we do not have enough evidence to conclude that it is sufficiently safe for them…Our children have become unknowing participants in a decades-long experiment.”[6]
Supporters of KOSA also emphasize the importance of autonomy, contending that the very design of social media platforms undermines users’ ability to make independent choices. Addictive recommendation algorithms that “[hyperanalyze] your activities and [spit] out reinforcements”[7] can draw adolescents into spending excessive time online, increasing their exposure to harmful content. Currently, users have no meaningful way to opt out of these algorithmic systems. KOSA could directly address this concern by requiring platforms to let minors opt out of algorithmic recommendations, thereby enabling adolescents to exercise greater control over how they engage with these platforms.
IV. Arguments against the Kids Online Safety Act
Although KOSA is designed to mitigate the mental health harms associated with social media, it could be argued that its “duty of care” may ultimately undermine the very well-being it seeks to protect. Many adolescents, particularly those living in isolated regions or in communities where few people share their lived experiences, rely on social media to find supportive spaces and form meaningful connections, linking them with perspectives and communities they cannot find offline. For example, studies have shown that for LGBTQ+ youth in rural areas, exposure to positive queer representation and opportunities to engage with affirming online communities can significantly reduce feelings of isolation and provide support they may not otherwise have in their in-person communities.[8] KOSA, which could heavily restrict adolescents’ access to such platforms, risks cutting them off from essential networks that cannot be replicated locally. Its implementation may therefore exacerbate youth mental health problems such as loneliness rather than alleviate them, as it purports to do.
Similarly, it could be argued that KOSA’s implementation may result in censorship that does more to harm adolescents than to protect them. The “duty of care,” which requires social media platforms to mitigate harms, risks stifling free speech precisely because the obligation is vague: what, exactly, constitutes harm?
It is conceivable that, in an attempt to limit adolescents’ exposure to harmful content and avoid legal repercussions, social media platforms will over-moderate. As a result, important but “politically divisive” topics, such as abortion and transgender healthcare, could be not only limited but censored entirely.[9]
Furthermore, because the FTC would be the primary enforcement agency, what gets censored would largely be left to the discretion of whoever leads the commission. If a Republican leads the FTC, for example, content about the LGBTQ+ community, reproductive health, and climate change might be labelled as harmful to youth and thereby subjected to regulation or outright censorship. Equally, a Democratic-led FTC could target discussions of automatic weapons, school shootings, and religious sentiments that are anti-LGBTQ+. Because KOSA’s “duty of care” is vague and is to be enforced by a political body, it can be interpreted to serve whatever political end its enforcers desire rather than the intended end of protecting kids.[10]
V. Conclusion
Despite the meaningful arguments made in favour of KOSA, the criticisms of its implementation are far more compelling. Ultimately, the most important question is what ought to be considered “harmful”. While there is broad consensus that certain activities, such as subway surfing, present tangible risks to adolescents, increased regulation raises significant concerns about overreach and the potential for a domino effect. Even if specific conduct is designated as dangerous today, it is unclear what forms of activity may be similarly restricted in the future. Accordingly, KOSA should not be implemented: granting the government authority to regulate digital content carries a substantial risk that even well-intentioned oversight will evolve into expansive and unwarranted control over online expression.
[1] Coleman, “2 Girls Found Dead Atop a J Train in Suspected Subway Surfing Accident.”
[2] Weir, “Social Media Brings Benefits and Risks to Teens. Psychology Can Help Identify a Path Forward.”
[3] S.1748 - 119th Congress (2025-2026).
[4] Electronic Frontier Foundation, “The Kids Online Safety Act (KOSA) (September 2024).”
[5] Jenco, “AAP Applauds Senate Passage of Children’s Online Safety Legislation.”
[6] Jenco, “Surgeon General Advisory Warns of Social Media’s Effects on Youth Mental Health.”
[7] Cuthrell, “Starving for The Feed.”
[8] Escobar-Viera et al., “Examining Social Media Experiences and Attitudes Toward Technology-Based Interventions for Reducing Social Isolation Among LGBTQ Youth Living in Rural United States.”
[9] Ortutay, “What to Know about the Kids Online Safety Act That Just Passed the Senate.”
[10] Granieri, “Center for Democracy and Technology’s Kate Ruane on the Kids Online Safety Act.”

