EPC's Riekeles, von Thun: AI won’t be safe until we rein in Big Tech

22 November 2023

The chaos at OpenAI perfectly illustrates why the EU needs to impose strict regulatory responsibilities on big AI model providers, instead of relying on self-regulation and goodwill from companies whose accountability is highly uncertain.

Earlier this month, British Prime Minister Rishi Sunak convened leading nations, AI companies and experts at Bletchley Park – the historic home of Allied code-breaking during WWII – to discuss how the much-hyped technology can be deployed safely.

This would-be first international AI Safety Summit was rightly criticised on a number of grounds, including prioritising input from Big Tech over civil society voices and fixating on far-flung existential risks rather than tangible everyday harms. But the summit's biggest failure – itself a direct result of those biases – was that it had nothing meaningful to say about reining in the dominant corporations that pose the biggest threat to our safety.

The summit’s key “achievements” consisted of a vague joint “communiqué” warning of the risks from so-called “frontier” AI models and calling for “inclusive global dialogue”, plus an (entirely voluntary) agreement between governments and large AI companies on safety testing. Yet neither measure has any real teeth. Worse, their shared emphasis on “frontier” models gives powerful corporations a privileged seat at the table in shaping the debate on AI regulation.

Big Tech is currently promoting the idea that its exclusive control over AI models is the only path to protecting society from major harms. In the words of an open letter signed by 1,500 civil society actors, accepting this premise is naïve at best, dangerous at worst.

Governments that are truly serious about ensuring that AI is used in the public interest would pursue a very different approach. Instead of noble-sounding statements of intent and backroom deals with industry, tough measures are needed to target corporate power. Two areas in particular are key: forceful enforcement of competition policy and tough regulatory obligations for dominant gatekeepers.

As it stands, a handful of tech giants have used their collective monopoly over computing power, data and technical expertise to seize the advantage when it comes to large-scale AI foundation models. Smaller companies without access to these scarce resources find themselves signing one-sided deals with (or being acquired by) larger players to gain access to them. Google’s takeover of DeepMind and Microsoft’s $13 billion investment in OpenAI are the best-known examples but not the only ones.

© European Policy Centre