13 September 2024

ICMA responds to the European Commission’s targeted consultation on artificial intelligence in the financial sector


ICMA welcomes the opportunity to provide feedback on the European Commission's targeted consultation on artificial intelligence (AI) in the financial sector.

ICMA's response is based on feedback from a subset of its Artificial Intelligence in Capital Markets (AICM) working group, including investors, banks, market infrastructures, law firms and issuers across the international debt capital markets. Due to the composition of the working group, our response excludes part 2 of the consultation, "Questions related to specific use cases in financial services". However, for insight into specific use cases of AI in the debt capital markets industry across the different market sectors, ICMA has been tracking new FinTech applications in bond markets on our website.

A key objective of the Artificial Intelligence in Capital Markets working group is to provide a forum for discussion and education on AI in the industry, including the promotion of best practices that foster effective bond markets globally. The feedback in this consultation is based on the initial findings of participants, who are at various stages of their AI trajectory.

In addition, we have encouraged all our members across all market sectors who are part of this extensive group to complete the survey bilaterally on behalf of their own organisations. We hope this will extend the depth of responses from the capital markets industry.

Executive Summary

(I) ICMA members encourage innovation across the debt capital markets industry and support fair and transparent applications of AI.

(II) ICMA members emphasise the importance of human validation and an internal framework to manage the responsible use of AI in an organisation. Internal risk frameworks covering AI-related risk have been longstanding in financial organisations, as AI models, including machine learning algorithms, are not a nascent technology in this industry.

(III) ICMA members highlight the interconnectedness of AI technology with ESG- and DLT-related developments, and the potential risk of an unintended impact on innovation if supplementary regulation to the EU AI Act is introduced too early.

(IV) In addition, existing pieces of legislation such as UCITS, AIFMD and MiFID II/MiFIR already provide safeguards for the responsible use of technology, including AI and related service providers. It is recommended that this is taken into account in the implementation of the EU AI Act, to ensure it is appropriately interlinked with current regulation.

(V) In general, ICMA members see increased efficiency and automation, freeing capacity for higher-value tasks, as a key benefit of AI use.
