CER's Meyer: Implementing the AI Act: The Commission's first big test for better regulation

06 December 2024

Ursula von der Leyen has said her next Commission will simplify the EU’s complex regulatory environment to boost growth, especially in high-tech sectors. How the EU implements its Artificial Intelligence Act will be its first big test.

Better regulation is the mantra of newly re-elected Commission president Ursula von der Leyen. Spurred by recent criticisms in influential reports by Enrico Letta and Mario Draghi about the quality of recent EU regulations and their impact on economic growth – Draghi’s report stated that “we are killing our companies” – she has tasked each of her commissioners with cutting red tape and making the EU rulebook easier for businesses to comply with.

The tech sector is a particular concern for three reasons. First, it is critical to Europe’s economic growth: the productivity gap between the EU and the US is largely explained by the size of the latter’s high-productivity tech sector. Second, tech is the clearest example of Europe’s overly complex regulatory environment. The EU passed many wide-ranging tech laws in the last political cycle – on issues ranging from cybersecurity, digital competition and artificial intelligence to online safety and data sharing. Many of these laws have significant inconsistencies and overlaps, as business groups across Europe have repeatedly noted. Regulatory complexity is probably not the main reason for Europe’s poor record in tech investment, but it is certainly a contributing factor – and is likely to become a bigger problem if Trump cuts regulation in the US even further. Third, the EU may struggle to apply and enforce its tech laws dispassionately in the coming years, given the risk that Trump will be less tolerant than Biden was of Europe’s regulation of foreign companies, and more willing to retaliate. The Commission is currently investigating X/Twitter for breaching the Digital Services Act, for example, which is likely to put Brussels on a collision course with Washington, given the close relationship between X’s owner Elon Musk and Donald Trump.

All this means that the EU must be cautious about pushing forward with more tech laws, and instead focus first on making its existing rulebook clear, objective and non-discriminatory. One of its first tests will be implementing the EU’s Artificial Intelligence Act (AI Act), which aims to help manage the technology’s risks. The AI Act gets some of the basics of better regulation right. But its institutional framework risks creating fragmentation and inconsistency, and the process of translating the law’s requirements into practical obligations risks becoming unwieldy and disproportionate. The Commission and member-states still have choices to make that can help ensure the law supports EU innovation and economic growth.

Making the AI Act’s complex institutional framework work in practice

One of the issues with the AI Act is its institutional framework. Enforcement of the regulation will fall to a bewildering combination of authorities. The AI Office in the European Commission will enforce the law’s provisions for general-purpose AI models, like ChatGPT. Each of the EU member-states will also have one or more ‘notifying authorities’, which will play a role in the conformity assessment process for AI systems, and ‘market surveillance authorities’, which will enforce the law for AI systems already on the market. A range of other institutions, like the ‘AI Board’, ‘advisory forum’ and ‘scientific panel’, have advisory or co-ordinating roles...



© CER