Counsel, Powered by AI: The Challenge of Assigning Liability for AI’s Use in the Legal Context
Graylin Lucas
April 2026
4 Minute Read
Over the past few years, artificial intelligence (AI) has infiltrated nearly every profession. The legal field is no exception. Many lawyers view large language models (LLMs) as a useful tool for research and discovery.[1] However, LLMs have repeatedly generated fabricated citations, and a number of attorneys have incurred significant fines for submitting improperly reviewed documents.[2] Private citizens also view chatbots as accessible sources of legal advice, prompting concern among attorneys about clients receiving misleading information.[3] This raises a key question: where does the law assign liability for the use of inaccurate AI-generated content in legal arguments?
First, it is important to define AI-generated content in this context. Does it refer to ideas or arguments directly generated by AI, or to any part of a legal argument that AI contributed to developing? Because AI is primarily used for research, AI-generated content is rarely introduced in court directly; rather, it indirectly shapes the development of legal arguments. Thus, AI-generated content in the legal context encompasses any portion of an argument that AI played a substantial role in developing.
Recently, State Sen. Kristen Gonzalez (D-NY) proposed a bill that would hold the companies behind AI chatbots accountable for dispensing legal advice. S7263 would allow citizens to sue technology companies if an AI impersonates a licensed professional, such as a lawyer, when dispensing advice.[4] The proposal rests on the logic of existing state law, which prohibits humans from impersonating licensed professionals.
However, existing case law suggests that this bill may not withstand legal challenges if enacted. In February, a federal judge ruled in United States v. Heppner (2026) that conversations with AI chatbots are not protected by attorney-client privilege.[5] Heppner, the defendant, had input information from his attorney into the chatbot Claude while preparing his defense. Federal prosecutors later seized documents generated by Claude, and the defendant’s attorney argued that they were protected by attorney-client privilege. The judge’s ruling emphasized that an AI chatbot is not a lawyer and is not intended to give legal advice; therefore, users have no reasonable expectation of confidentiality.[6]
This clear distinction between AI and human legal professionals weakens the logic underlying S7263. Furthermore, the Heppner ruling could be read as dismissing the idea that AI has a legitimate role in the legal context. If the legal sphere broadly adopts this attitude toward AI, human actors would likely assume full responsibility for its use in developing legal arguments.
That approach makes logical sense. It is true that AI chatbots are explicitly not intended to give legal advice, and chatbots often include warnings to users that their accuracy is not guaranteed. Lawyers also have an obligation to uphold ethical standards as a condition of their profession. In 2024, the American Bar Association (ABA) issued a formal ethics opinion on generative AI. Although it is non-binding, the opinion establishes a framework of norms and best practices around AI. The guidance includes stipulations that attorneys are responsible for understanding confidentiality risks, disclosing and explaining their use of AI to clients, and preventing “hallucinations” (instances of LLMs generating false information).[7] Ultimately, the standards instruct lawyers to review all AI-generated work with the same diligence applied to a human legal assistant’s work.[8]
At the same time, there is a clear nationwide push for comprehensive state-level AI regulation. Lawmakers appear intent on defining legal constraints as the technology evolves. For example, New York State passed the Responsible AI Safety and Education (RAISE) Act this past December, which establishes safety disclosure and reporting requirements for large AI developers.[9] The law also creates an oversight office and empowers the Attorney General to bring civil actions against developers that violate its requirements.[10]
By establishing legal pathways to hold technology companies accountable for LLMs’ outputs, these bills reflect an effort to assign liability to corporations. As the movement to regulate AI evolves, lawmakers may begin to focus on the technology’s use in the legal context. Future regulation could result in a new structure where lawyers are not solely responsible for AI content. Instead, human professionals and technology companies could be held separately liable under the law.
Colorado bill SB24-205 provides a potential model for this system. The bill places responsibility jointly on AI developers and on the private actors who deploy AI in “high-risk” decision-making, such as decisions related to employment or loan approval. Both developers and actors employing AI in high-risk cases must publicly disclose how AI is used in decision-making and the protocols they follow to prevent algorithmic discrimination.[11]
Similarly, lawmakers could compel both AI developers and lawyers to disclose AI’s role in developing legal arguments. This could mean requiring developers to add explicit public disclaimers about LLMs’ capacity to assist with legal cases, combined with legislation codifying attorneys’ duties when using AI. While attorneys are already bound by ethical standards, codifying those expectations would clarify how AI may be used in the legal context, provide stronger enforcement mechanisms, and give clients clearer recourse if their attorney violates the standards.
However, federal policy could restrict states’ ability to test these new approaches. In December, President Trump issued an executive order blocking state-level AI regulation in favor of a streamlined national approach.[12] Building on this, the White House released a National AI Legislative Framework in March, which called for Congress to pass legislation overriding state laws.[13]
It is unclear whether Congress will pass comprehensive regulation or instead enact legislation favorable to AI developers. While the Trump administration’s framework aims to reduce barriers to the development and deployment of AI,[14] a bill proposed by Sen. Marsha Blackburn would place a “duty of care” on AI developers to address foreseeable safety risks.[15] The bill also preserves states’ ability to regulate AI; similar to the RAISE Act, it would empower state Attorneys General to enforce its requirements.[16] The bill appears to be a negotiating tool to push for the passage of broad federal legislation.
Yet if Congress does not maintain state lawmakers’ momentum on regulating AI developers, individual lawyers may remain solely liable. Further, if state lawmakers are barred from continuing to innovate new approaches to AI regulation, legislation that explicitly addresses AI in the legal context is unlikely to emerge. The issue of liability may therefore be governed by case law instead of legislation. Lawmakers may well perceive the courts as best equipped to govern the use of AI in the legal context; liability could then evolve through specific, narrow court decisions as issues arise, much as occurred in Heppner.
Regardless of whether federal or state lawmakers end up controlling AI regulation, use of the technology in the legal context will continue to accelerate, raising further questions. What should AI’s role be in the process of developing legal arguments? Should judges consider the extent of AI involvement when evaluating those arguments?
The law may not currently provide a straightforward answer to these questions or to the issue of liability. Whether state or federal lawmakers win the battle for dominance over AI regulation will determine whether the law assigns greater liability to developers and codifies attorneys’ duties, or whether responsible use of AI in the legal context is instead enforced through a norms-based system of ethical standards applied to individuals.
[1] American Bar Association. “AI Legal Writing: How Lawyers Can Work Faster Without Sacrificing Accuracy.” Law Technology Today, 2025.
[2] Johnson, Kari. “California issues historic fine over lawyer’s ChatGPT fabrications.” CalMatters, 2025.
[3] Peck, Emily. “AI legal advice is driving lawyers bananas.” Axios, 2026.
[4] N.Y. State Senate Bill S7263, 2025–2026 Leg., Reg. Sess. (N.Y. 2025).
[5] Lee, Andrew R., et al. “Your AI Conversations Are Not Privileged: What a New SDNY Ruling Means for Every Lawyer and Client.” National Law Review, 2025.
[6] Ibid.
[7] Fisher, Keith Robert. “ABA Ethics Opinion on Generative AI Offers Useful Framework.” American Bar Association: Business Law Today, Oct. 2024.
[8] “ABA Ethics Rules and Generative AI.” Thomson Reuters Legal, 2024.
[9] New York General Business Law Art. 44-B, §§ 1420–1426 (2025).
[10] Ibid.
[11] Colorado Revised Statutes Title 6, Art. 1, Part 17, §§ 6-1-1601–6-1-1610 (2024).
[12] Exec. Order No. 14365, “Ensuring a National Policy Framework for Artificial Intelligence,” 90 Fed. Reg. (Dec. 11, 2025).
[13] Kang, Cecilia. “White House Unveils AI Policy Aimed at Blocking State Laws.” New York Times, March 20, 2026.
[14] Brownstein Hyatt Farber Schreck. “AI Governance Takes Shape: Breaking Down Washington's Latest AI Frameworks.” Brownstein Hyatt Farber Schreck, 2025.
[15] Ibid.
[16] Ibid.

