CEPR's Danielsson: Artificial intelligence and financial stability

27 October 2023

This column argues that although AI will bring considerable benefits, it also raises new challenges and can even destabilise the financial system.

The use of artificial intelligence in the private sector is accelerating, and the financial authorities have no choice but to follow if they are to remain effective. Even where they would prefer prudence, their use of AI is likely to grow by stealth.

The financial authorities are rapidly expanding their use of artificial intelligence (AI) in financial regulation. They have no choice. Competitive pressures drive the rapid private sector expansion of AI, and the authorities must keep up if they are to remain effective.

The impact will mostly be positive. AI promises considerable benefits, such as the more efficient delivery of financial services at lower cost. The authorities will be able to do their job better with fewer staff (Danielsson 2023).

Yet there are risks, particularly for financial stability (Danielsson and Uthemann 2023). The reason is that AI relies far more than humans do on large amounts of data to learn from, needs immutable objectives to follow, and struggles with strategic interactions and unknown unknowns.

The criteria for evaluating the use of AI in financial regulation

We propose six questions to ask when evaluating the use of AI for regulatory purposes; a sketch of how the checklist might be applied follows the list:

  1. Does the AI engine have enough data?
  2. Are the rules immutable?
  3. Can we give AI clear objectives?
  4. Does the authority the AI works for make decisions on its own?
  5. Can we attribute responsibility for misbehaviour and mistakes?
  6. Are the consequences of mistakes catastrophic?
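To make the checklist concrete, here is a minimal Python sketch of how such an assessment might be encoded. The class, field names, and the scoring rule are illustrative assumptions of ours, not part of the column or of Danielsson and Uthemann (2023).

from dataclasses import dataclass

@dataclass
class RegulatoryTask:
    """Hypothetical encoding of the six questions above."""
    enough_data: bool            # 1. Does the AI engine have enough data?
    immutable_rules: bool        # 2. Are the rules immutable?
    clear_objectives: bool       # 3. Can we give AI clear objectives?
    independent_authority: bool  # 4. Does the authority decide on its own?
    attributable: bool           # 5. Can responsibility be attributed?
    catastrophic_mistakes: bool  # 6. Are mistakes catastrophic?

    def suitability(self) -> str:
        # Illustrative rule of thumb: AI fits best when the first five
        # answers are yes and mistakes are recoverable.
        favourable = all((self.enough_data, self.immutable_rules,
                          self.clear_objectives, self.independent_authority,
                          self.attributable))
        if self.catastrophic_mistakes:
            return "keep humans firmly in the loop"
        return "well suited to AI" if favourable else "needs human oversight"

# Example: routine, data-rich compliance monitoring vs. systemic crisis response.
print(RegulatoryTask(True, True, True, True, True, False).suitability())
print(RegulatoryTask(False, False, False, False, False, True).suitability())

The point of the sketch is only that the six answers pull in one direction for routine, well-specified tasks and in the other for rare, high-stakes events, which is the pattern Table 1 summarises.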

Table 1 shows how the various objectives of regulation are affected by these criteria.

 
Table 1 Particular regulatory tasks and AI consequences

Source: Danielsson and Uthemann (2023).

Conceptual challenges

Financial crises are extremely costly. The most serious ones, classified as systemic, cost trillions of dollars. We will do everything possible to prevent them and lessen their impact if they occur, yet this is not a simple task....

more at CEPR


© CEPR - Centre for Economic Policy Research