The EU AI Act: A Beacon Amid the Chaos of AI Innovation

Darren Kim 

The recent surge in artificial intelligence (hereinafter AI) technology, exemplified by developments in large language models (LLMs) and machine learning in general, has accentuated the need for substantive regulatory legislation. The European Union has been at the forefront of these efforts, most notably with its proposal of the AI Act in April 2021. The European Parliament adopted its negotiating position on the Act on June 14, 2023, with a final vote on the bill expected before the 2024 European Parliament elections. The Act is the first comprehensive legislation dedicated to AI regulation and should substantially influence nations, such as the United States, that currently lack concrete legislation of their own. The premise of the AI Act is a risk-based approach that sorts AI applications into four categories: prohibited, high-risk, limited-risk, and minimal-risk systems.

Prohibited systems are covered primarily in Article 5 of the proposed bill, which sets out an extensive list of banned AI practices. For example, systems that distort a person’s behavior to an extent that may cause physical or psychological harm to the user or to third parties are prohibited. Also banned are systems that exploit the vulnerabilities of a specific group (due to age or physical or mental disability) with the intent of causing harm, as well as systems that implement ‘social scoring,’ in which AI evaluates and classifies individuals based on their social behavior or personal characteristics. However, the Act grants narrow exceptions for similar systems used for public welfare, such as applications that protect individuals from imminent threats or help search for criminals or missing persons.

High-risk systems primarily consist of AI applications that serve as safety components of other products or that operate in sensitive domains. Examples include AI programs that assist in surgery, manage critical infrastructure, support education, administer border control, and inform judicial processes. These high-risk applications must undergo conformity assessments as part of a broader risk management system, as outlined in Article 9. That system involves a comprehensive four-step analysis of foreseeable risks throughout all stages of development and use. Conformity assessments, which must be completed by AI providers and verified by the relevant national authorities in each Member State, fall within the risk management system’s scope, as do periodic audits and post-market monitoring systems that continue to regulate AI applications after their introduction to the market. Crucially, providers must supply users with documentation and information regarding the risks and risk management procedures (Article 13), and they must develop human oversight measures that can override the AI system when necessary (Article 14).

Limited-risk and minimal-risk systems face comparatively light regulation. Limited-risk systems are nevertheless subject to transparency requirements that echo those imposed on high-risk systems: providers must ensure that users are informed they are interacting with a machine and that manipulated content is explicitly disclosed as artificially generated. This emphasis on transparency, together with human oversight, is another central focus of conformity assessments and is ultimately key to protecting users.

A final area of interest in the AI Act is Title V: Measures in Support of Innovation. This title establishes regulatory sandboxes open to developers who wish to test and validate the functionality and security of prospective AI systems. The sandboxes would provide controlled environments with sufficient supervision to keep experimentation safe while promoting innovative techniques, and they may prove crucial to fulfilling the EU’s ambition to foster growth in the AI sector while ensuring security.

Moreover, the sandboxes appear to serve as a failsafe against the temporal disadvantage of legislation: AI innovation currently outpaces the rate at which bills can be passed, leaving new technology unregulated in the interim. One need look no further than the AI Act itself, which was first drafted before the rise of machine learning chatbots such as ChatGPT. The original text therefore makes no mention of LLMs, which has proven a point of great debate as of late. One can only imagine the similar controversies that would arise with further innovative technologies, and the unreasonable breadth that legislation would have to attain to regulate them sufficiently. The presence of sandboxes would mitigate this issue, giving developers a universally applicable set of guidelines that ensures security without the need for constant amendment.

Overall, it is clear that the EU AI Act will serve as a reference point at a time when AI legislation has become as crucial as it is sparse, particularly in the United States. Despite recent developments such as the Blueprint for an AI Bill of Rights, which outlines risk management principles similar to those of the AI Act, and several executive orders regarding AI security (including one issued by the Biden administration late last month), there has been minimal legislative activity within Congress; the most concrete federal guidance remains the AI Risk Management Framework from the National Institute of Standards and Technology, which outlines how to approach risk management in AI systems. The institutional roadblocks within the American legal system ultimately underscore the importance of regulatory sandboxes akin to those proposed in the EU AI Act, which can establish regulatory guidelines for innovative AI applications far more quickly than a Congressional response could.

In the US in particular, companies are left to self-regulate AI innovation. Recent developments such as the Sam Altman-OpenAI saga reflect poor internal governance within these companies; trusting them as the principal agents of AI regulation seems extremely dangerous, as NYU psychologist and AI expert Gary Marcus has pointed out. Without regulation akin to the AI Act, it seems only a matter of time before chaos ensues. How the US and the rest of the world respond to the AI Act may prove crucial to the future of our increasingly digitized planet.

____________

1 “Blueprint for an AI Bill of Rights.” The White House, October 4, 2022. https://www.whitehouse.gov/ostp/ai-bill-of-rights/.

2 Coulter, Martin, and Supantha Mukherjee. “OpenAI CEO’s Ouster Brings EU Regulatory Debate into Focus.” Reuters, November 21, 2023. https://www.reuters.com/technology/openai-ceos-ouster-brings-eu-regulatory-debate-into-focus-2023-11-21/.

3 Djeffal, Christian. “The EU AI Act at a Crossroads: Generative AI as a Challenge for Regulation.” European Law Blog, July 24, 2023. https://europeanlawblog.eu/2023/07/24/the-eu-ai-act-at-a-crossroads-generative-ai-as-a-challenge-for-regulation/.

4 Engler, Alex. “Key Enforcement Issues of the AI Act Should Lead EU Trilogue Debate.” Brookings, June 29, 2023. https://www.brookings.edu/articles/key-enforcement-issues-of-the-ai-act-should-lead-eu-trilogue-debate/.

5 European Commission. Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, COM(2021) 206 final, April 21, 2021. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206.

6 “Fact Sheet: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence.” The White House, October 30, 2023. https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/.

7 Ringe, Wolf-Georg. “Why We Need a Regulatory Sandbox for AI.” Oxford Law Blogs, May 12, 2023. https://blogs.law.ox.ac.uk/oblb/blog-post/2023/05/why-we-need-regulatory-sandbox-ai.

8 National Institute of Standards and Technology. Artificial Intelligence Risk Management Framework (AI RMF 1.0). U.S. Department of Commerce, 2023. https://doi.org/10.6028/NIST.AI.100-1.