
26 July 2024

CER's Meyer: In the UK's plans for AI, Brussels still looms large


The new British government plans to regulate powerful AI models. But it should also seek to influence how European authorities implement their AI law and help shape global norms on AI regulation.

The UK and the EU both suffer from sluggish economic growth. With shrinking workforces, opposition to more migration and energy prices higher than in the US and China, both will need to rely on productivity growth to boost their economies. That will require much greater investment in the deployment of technology. Artificial intelligence (AI) is one technology with the potential to boost productivity.

The technology also comes with risks – such as the potential to produce misinformation and discriminatory outcomes. The EU took a more cautious approach to these risks than the last UK government: EU lawmakers enacted an overarching AI Act to help manage the technology's risks, while the British Conservatives did not. The new Labour government has announced, however, that it will follow the EU and start the process of designing a British law for AI.

Some businesses claim the EU's AI Act might dampen investment in the Union. If regulation in the UK had a similar effect, it could prove costly: while the EU has just a handful of large AI firms, the UK is one of the largest AI markets in the world after the US and China, with about 3,000 firms active in developing AI, generating £10 billion a year in revenues. Labour will probably pay careful attention to a new law's impact on these companies. But the EU's approach, and that of other countries, will nevertheless affect many British firms. That means Labour must try to influence how EU authorities implement their AI Act and help shape global AI regulation.

Which systems and services should the UK regulate?

In designing a UK law for AI, the first question for Labour is whether to regulate all the companies that fall under the EU's regulation. The EU law covers two types of firm. First, it imposes rules on firms that develop or use AI systems as part of their business. Second, it regulates firms that develop general-purpose AI models like OpenAI's GPT. In its manifesto and the recent King's Speech, Labour sought only to regulate the second group, promising rules for the "handful of companies developing the most powerful AI models".

If the UK adopts its own law, it should follow the EU's approach of defining which general-purpose AI models are so advanced that they pose special risks. The objectives of regulating the most powerful models are sound. They have broad support and reflect principles agreed in international discussions such as the G7 Hiroshima AI Process, the UK AI Safety Summit, the EU-US Trade and Technology Council (TTC) and President Biden's Executive Order on AI. For example, many countries agree that firms providing powerful models should be transparent about how the models work and should take steps to ensure the models are safe. The EU's AI Act simply makes these obligations more concrete. Voluntary standards have proven insufficient: several providers of large AI models, for example, failed to comply with the previous UK government's attempt at voluntary self-regulation.

Furthermore, the UK will need to ensure that rules covering general-purpose AI models are compatible with the EU's approach. The EU's rules on general-purpose AI models may achieve the 'Brussels effect' – meaning that providers of the most advanced models comply with the law globally rather than creating distinct models or ways of doing business for the EU. Developers of large models generally want those models to be used widely around the globe, both to maximise take-up and because many models improve with more user feedback. That raises the question of whether UK rules are necessary at all. But it also means that, if EU rules for the most powerful AI models have a negative impact on innovation, there is at least little extra harm in the UK following suit.

Alignment with the EU to ensure the same AI models are regulated will not be straightforward, however. The EU AI Act has a complex set of rules to determine which powerful AI models should be subject to the strictest regulatory provisions, and the European Commission has broad discretion over which models to regulate. This creates regulatory uncertainty. The EU's thresholds for the most stringent requirements could also capture a large number of existing models. The UK should avoid this uncertain approach and adopt higher but clearer thresholds – sticking to its stated intention of capturing only a handful of today's models. Otherwise, the UK may inadvertently end up regulating more models than Brussels does....

More at CER: full report



© CER

