The First Amendment Implications of the No AI FRAUD Act:
An Overreaction to New Technologies
Ethan Du
2 minute read
With the advent of AI deepfake technology, the risk of having one’s brand and image defamed by malicious actors is increasingly real. Just ask Taylor Swift: sexually explicit, deepfaked images of her spread widely on the social media site X. Despite X’s quick response, the pictures garnered nearly 47 million views before they were taken down, highlighting the speed with which AI-altered images can propagate across the internet.
Beyond the creation of non-consensual pornographic images, deepfakes can also be used to blackmail, falsely incriminate, and attack the credibility of influential politicians. The most worrisome trend, however, is the emergence of a digital media culture in which people struggle to differentiate between real and AI-generated content, breeding rising levels of paranoia. This trend has become increasingly apparent during the Russia-Ukraine war, in which authenticating combat footage has grown significantly more difficult. The resulting mistrust of digital media has fueled far-fetched conspiracy theories and made it hard to follow the actual status of the conflict. Experts worry that as deepfakes proliferate, they will undermine the epistemic trust we place in any content, radically changing our perception of online media.
Bearing witness to the negative consequences of AI-generated deepfakes, a bipartisan group of U.S. House lawmakers introduced a bill to regulate the AI cloning of voices and likenesses. The No Artificial Intelligence Fake Replicas And Unauthorized Duplications Act, or the No AI FRAUD Act, is meant to create federal guidelines protecting one’s likeness from unauthorized use. The bill does so by granting individuals a property right in their own likeness and voice. At its core, this means the bill would allow individuals to sue anyone who uses their voice or likeness without permission.
While this concept is sound in theory, the bill’s scope is far too broad and encroaches on the constitutionally protected First Amendment right to free speech. The property right Congress seeks to create goes well beyond deepfakes; it sweeps within its ambit the use of one’s likeness or voice in comedy skits, political cartoons, historical movies, reenactments in true crime shows, and even internet memes, all well-protected areas of speech. On top of this, the bill characterizes the new right as a form of federal intellectual property, which falls outside Section 230 of Title 47 of the U.S. Code, the part of the Communications Decency Act of 1996 that immunizes online platforms from liability for third-party content posted on their sites. In effect, this would give individuals free rein to sue any person or corporation that creates, hosts, or transmits their image or depiction without their express written permission.
In creating this new digital property right, lawmakers foresaw the First Amendment implications of the bill’s implementation. Accordingly, they added a First Amendment defense shielding people from suit over constitutionally protected speech unless the work is “harmful” to the person whose voice or likeness is used. While this exemption is necessary, it is insufficient to protect freedom of speech, because the mere threat of a lawsuit can chill speech. Litigation in this country is notoriously expensive, with legal fees averaging $10,000. To avoid that cost and hassle, many people may end up censoring themselves, even when their speech is constitutionally protected.
Moreover, the Supreme Court held in Freedman v. Maryland (1965) that the government bears the burden of proving that speech is unprotected. Under the No AI FRAUD Act, by contrast, speakers would bear the burden of proving that their speech is protected. This inversion, along with the measures described above, shows how drastically the bill would reshape our current understanding of the First Amendment and reverse years of judicial precedent. While the proposed First Amendment defense is a commendable addition, it is not enough to satisfy the strict scrutiny required to impose liability on such a broad swath of speech.
Litigating the First Amendment defense would also be painstaking, as the bill requires courts to balance First Amendment interests against “intellectual property interests related to voice or likeness.” Courts already struggle to assess liability in the First Amendment context, even for routine causes of action such as defamation, as evidenced by the long line of Supreme Court cases, from New York Times Co. v. Sullivan (1964) to Milkovich v. Lorain Journal Co. (1990), that have had to refine and clarify defamation law. Creating a new, overbroad, and vague cause of action would not only generate conflicting opinions in the courts but would also force vulnerable social media platforms and AI companies to self-censor under vague standards.
Finally, the bill undermines its own goal of protecting celebrities from AI-generated media. The act allows rights holders to transfer or license their rights to other people or organizations. Celebrities could therefore lose the rights to their own likenesses, as record labels, talent agencies, and movie studios would be incentivized to acquire those rights from up-and-coming artists. This practice would only grow more prevalent as AI-generated content becomes more profitable, and it could ultimately leave celebrities with no say in, or compensation from, AI-generated works that use their identities. In the near future, it is entirely possible that new Taylor Swift songs could be completely AI-generated, even if she never consented to her voice being used that way.
While the bill is well-intentioned, aiming to protect people from the harmful impacts of AI-generated media, it is far too broad and creates a host of new and unnecessary legal problems. Some additional government regulation might help mitigate the harmful effects of AI-generated content, but existing false light and defamation laws already provide sufficient protection against untrue and misleading media. Lawmakers should be careful not to let unwarranted technophobia impede our constitutional right to free speech.
Humanity’s fear of new technology is nothing new. One of the earliest examples is the reaction to the printing press after its invention in the mid-1400s: riots broke out in European cities, and Pope Alexander VI even threatened to excommunicate anyone who printed books without the church’s permission. For centuries, we have feared the change that new technology brings. AI-generated deepfakes will undoubtedly spawn a wave of new problems, from revenge porn to further erosion of public trust in digital media. But history has also shown us that suppression is not the solution. There is no reason to trample on constitutionally protected First Amendment rights just to regulate AI. As with most new technologies, deepfakes will ultimately become commonplace, as recognizable and impactful as printed books.
____________
1 “X Pauses Some Taylor Swift Searches as Deepfake Explicit Images Spread.” AP News (30 Jan. 2024), apnews.com/article/taylor-swift-deepfake-images-x-protecttaylorswift-6e5f9d086d1923a1cf5f5cde39fc890a
2 Saner, Emine. “Inside the Taylor Swift Deepfake Scandal: ‘It’s Men Telling a Powerful Woman to Get Back in Her Box.’” The Guardian (31 Jan. 2024), www.theguardian.com/technology/2024/jan/31/inside-the-taylor-swift-deepfake-scandal-its-men-telling-a-powerful-woman-to-get-back-in-her-box
3 Coleman, Kate. “How Deep Fakes Are Impacting Culture, Privacy, and Reputation.” Status Labs, statuslabs.com/blog/what-is-a-deepfake#:~:text=The%20misrepresentation%20of%20individuals%20in,when%20verified%20as%20a%20deepfake
4 Twomey, John, et al. “Do Deepfake Videos Undermine Our Epistemic Trust? A Thematic Analysis of Tweets That Discuss Deepfakes in the Russian Invasion of Ukraine.” PLoS ONE, vol. 18, no. 10 (Oct. 2023), p. e0291668. EBSCOhost, https://doi.org/10.1371/journal.pone.0291668
5 Congress.gov. “Text - H.R.6943 - 118th Congress (2023-2024): No AI FRAUD Act.” (10 Jan. 2024), https://www.congress.gov/bill/118th-congress/house-bill/6943/text/ih
6 Brown, Elizabeth. “AI Fraud Act Could Outlaw Parodies, Political Cartoons, and More.” Reason.com (19 Jan. 2024), reason.com/2024/01/17/ai-fraud-act-could-outlaw-parodies-political-cartoons-and-more/
7 “Department of Justice’s Review of Section 230 of the Communications Decency Act of 1996.” Department of Justice (8 May 2023), www.justice.gov/archives/ag/department-justice-s-review-section-230-communications-decency-act-1996#:~:text=One%20key%20part%20of%20that,of%20content%20in%20certain%20circumstances
8 McSherry, Corynne. “The No AI Fraud Act Creates Way More Problems than It Solves.” Electronic Frontier Foundation (25 Jan. 2024), www.eff.org/deeplinks/2024/01/no-ai-fraud-act-creates-way-more-problems-it-solves
9 Open letter from the ACLU and FIRE to the House Subcommittee on Courts, Intellectual Property, and the Internet (1 Feb. 2024) (unpublished draft), https://www.thefire.org/sites/default/files/2024/02/Coalition%20Letter%20to%20the%20House%20Judiciary%20Subcommittee%20on%20Courts%2C%20Intellectual%20Property%2C%20and%20the%20Internet%20Regarding%20the%20No%20AI%20FRAUD%20Act%20and%20the%20NO%20FAKES%20Act%2C%20February%201%2C%202024.pdf
10 “If You Are Thinking about Suing, These Are the Costs.” High Rise Financial (4 July 2022), www.highriselegalfunding.com/faqs/how-much-does-it-cost-to-sue-someone/#:~:text=Legal%20Funding%20Options-,The%20Average%20Cost%20of%20a%20Lawsuit,the%20cost%20of%20your%20lawsuit
11 Henkin, David. “Beware Technophobia Spreading amid AI Fears and Calls for Delay.” Forbes (5 Oct. 2023), www.forbes.com/sites/davidhenkin/2023/07/06/beware-technophobia-spreading-amid-ai-fears-and-calls-for-delay/?sh=70f728591760
12 Thomson, Jonny. “People Destroyed Printing Presses out of Fear. What Will We Do to AI?” Big Think (6 Apr. 2023), bigthink.com/the-past/printing-press-ai/