The financial industry fully agrees on the potential of AI and on the need for strong EU leadership in the adoption and development of AI to remain competitive at the global level.
For financial services, AI approaches can enable the customization
of products to clients’ preferences, needs and expectations. Tasks
previously done manually can be performed more efficiently and with
higher accuracy, improving internal processes by reducing false
positives and enhancing claims management, including
improved detection of fraudulent claims. AI can also help insurers refine
existing actuarial models and processes, enhancing the availability and
innovativeness of insurance for EU customers.
There is also significant potential for innovation and broader benefits. A
particularly promising area is risk management, where
AI and data analytics can strengthen financial institutions’
risk analysis capabilities, helping more people gain access to
financing, removing unintentional bias from existing processes, and
improving the sustainability and resilience of the financial system.
The Artificial Intelligence Act aims to ensure the safety of AI by
safeguarding the EU’s fundamental rights. While AI systems can vary
greatly from one another, it is important that this regulation addresses
specific higher-risk uses of AI and aims to mitigate these risks for
consumers and society at large. We therefore welcome the risk-based
approach followed by the proposed AI Regulation, as we believe it could
support trustworthy AI use cases with embedded EU values.
However, a balance must be struck between ensuring security and
raising awareness of potential risks related to AI, on the one hand, and
fulfilling its innovative potential, on the other. The proposed text must be enhanced to avoid
regulatory uncertainties that lead to unexpected regulatory burdens and,
therefore, hold back the development and adoption of AI in the EU.
There are some uncertainties in the Commission’s text that,
depending on the final interpretation, could significantly increase the
final impact of the Regulation. Those uncertainties can be grouped into
three categories: the scope of the Regulation, the governance framework,
and the requirements imposed. In its paper, the EFR gives
recommendations on these three categories.
A European AI Regulation that promotes the use and adoption of
trustworthy AI embedding EU values should be proportionate and clear.
This would avoid creating legal uncertainty or increasing (unnecessary)
compliance burdens.