At the AI Safety Summit, held on 1-2 November 2023 at Bletchley Park (the wartime base of the UK's codebreakers), US Vice-President Kamala Harris said pointedly: “Let us be clear: when it comes to AI, America is a global leader. It is American companies that lead the world in AI innovation. It is America that can catalyse global action and build global consensus in a way no other country can” (White House 2023c).
Where does that leave the EU’s ambition to set the global rule book for AI? In this column, based on our recent paper (Kretschmer et al. 2023), we explain the complex “risk hierarchy” that pervades the proposed AI Act (European Commission 2023), currently in the final stages of trilogue negotiation (European Parliament 2023). This contrasts with the US focus on “national security risks”, apparently the one area where existing federal executive powers can compel AI companies (Federal Register 2023). We point out the shortcomings of the EU approach, which requires comprehensive ex ante risk assessment at the level of technology development. Using economic analysis, we distinguish between exogenous and endogenous sources of potential AI harm arising from input data. We propose that, from the perspective of encouraging ongoing innovation, ex post liability rules can provide the right incentives to improve data quality and AI safety.
Regulatory competition
There is global anticipation, and excitement among investors, that AI will change the way we live, offering potentially enormous benefits in education, energy, healthcare, manufacturing, and transport. The technology is still moving rapidly, with advances in deep reinforcement learning (Kaufmann et al. 2023) and the application and fine-tuning of foundation models (Moor et al. 2023) in a variety of contexts beyond their original training sets...