Governments around the world are creating regulation to come to grips with the perceived risks of Artificial Intelligence (AI). The United States issued an AI Executive Order 1, while the UK government released a non-binding Declaration of Principles 2. China imposed a light-touch, business-friendly AI regulation, primarily meant as a signal to accelerate technological progress (Zhang, 2024). The European Union’s Artificial Intelligence Act was proposed by the European Commission in April 2021, and the agreed final version is set for formal approval by the European Parliament and Council in April 2024.
What does the EU AI Act aim to do?
The Act is essentially a product safety regulation designed to reduce risks to humans from the use of AI systems. Product safety regulation works for single-purpose products: the risks of applying the product for that purpose can be assessed. Many older-generation AI systems are trained for a single application. The problem comes with the latest general-purpose Large Language Models and Generative AI systems, such as OpenAI’s ChatGPT, Meta’s Llama or Google’s Gemini, which can be molded to an almost infinite range of purposes. It becomes difficult to assess all risks and to design regulations for all possible uses. The AI Act tries to work around this with a general obligation to avoid harm to humans’ fundamental rights. According to one of the co-architects of the Act in the European Parliament, this regulatory mix of product safety and fundamental rights criteria is not adapted to AI models 3.
The AI Act classifies AI systems used in the EU, irrespective of where they are developed, according to their level of risk. Most AI applications are considered minimal risk and are not regulated. Limited-risk systems, such as chatbots, are subject only to transparency and user-awareness obligations, such as the watermarking of AI media output. Meanwhile, systems that are deemed to pose unacceptable risks are prohibited. These include remote biometric identification and categorisation, facial recognition databases and social scoring – with exceptions for medical and security reasons, subject to judicial authorisation and respect for fundamental rights. The bulk of the AI Act focuses on the regulation of high-risk AI systems, in between limited and unacceptable risk. These are single- or limited-purpose AI systems that interact with humans in education, employment, public services and other sensitive areas. The Act contains a complex set of rules and requirements to assess whether and under what conditions high-risk systems can be used.
Besides high-risk AI systems, there are General Purpose AI (GPAI) models. These are the Large Language and Generative AI foundation models 4, considered general purpose because they can be applied to a wide range of tasks. GPAI providers must present technical documentation and instructions for use, unless they are open-license models that can be adapted by users for their own purposes. Data used for training must be summarily documented and must comply with the EU Copyright Directive. Under the law, GPAI models become systemic-risk models when the computing power used for their training exceeds 10^25 FLOPs (floating point operations). Providers of systemic-risk GPAI models must conduct model evaluations and adversarial testing, provide the metrics used to avoid harmful applications, report incidents and ensure cybersecurity protection. Currently available models do not reach that threshold 5. But next-generation AI models, which could possibly be released in 2024, are likely to exceed it. Eventually, the threshold may capture all new large AI models. ...
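To make the threshold concrete, the sketch below gives a back-of-the-envelope way to estimate a model’s training compute and compare it against the 10^25 FLOP line. It relies on the widely cited heuristic that training compute is roughly 6 × (number of parameters) × (number of training tokens); that heuristic and the example parameter and token counts are illustrative assumptions, not figures taken from the Act or from any provider.

```python
# Back-of-the-envelope check against the AI Act's 10^25 FLOP systemic-risk threshold.
# Assumption: training compute is approximated by the common 6 * N * D heuristic
# (N = model parameters, D = training tokens). All example figures are hypothetical.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # training-compute threshold set in the AI Act


def estimated_training_flops(parameters: float, tokens: float) -> float:
    """Rough training-compute estimate using the 6 * N * D heuristic."""
    return 6 * parameters * tokens


# Illustrative, hypothetical model configurations (not official figures).
examples = {
    "current large model (~70e9 params, 2e12 tokens)": estimated_training_flops(70e9, 2e12),
    "next-generation model (~1e12 params, 10e12 tokens)": estimated_training_flops(1e12, 10e12),
}

for name, flops in examples.items():
    side = "above" if flops > SYSTEMIC_RISK_THRESHOLD_FLOPS else "below"
    print(f"{name}: ~{flops:.1e} FLOPs, {side} the 10^25 threshold")
```

On these assumed figures, the first configuration comes out at roughly 8 × 10^23 FLOPs, well below the threshold, while the second comes out at roughly 6 × 10^25 FLOPs, above it – the pattern the Act anticipates for next-generation models.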