The European Commission published a proposal for an
Artificial Intelligence (AI) Act at the end of April. In parallel, it has set
up a public consultation for stakeholders to provide feedback on the draft
text, which ended on 6 August.
AI technology has only slowly begun arriving on the market, and as applications become more sophisticated, their development will likely become increasingly unpredictable. To ensure legal certainty, a level playing field and no obstacles to innovation, a clear definition of artificial intelligence is needed, one consistent across the Commission's work, national data protection authorities, the Council of Europe initiative, and the OECD framework for classifying AI systems. ESBG members very much welcome the proposed technology-neutral and future-proof definition of AI, as well as the Commission's risk-based approach, which enables proportionate regulation.
The Commission aims to turn Europe into the global hub for trustworthy Artificial Intelligence. While we of course share this ambition in principle, it should be recognised that this is a risky bet. If European values are not ultimately adopted on an international scale, non-European solutions, developed in less restrictive regulatory environments, could prove more efficient and compete with European solutions.
With regard to the acceptance of data usage, members would like to use real datasets instead of the 'synthetic' datasets proposed by the Commission. Real datasets mimic real-life situations and allow AI training in a realistic setting, without the risk of second-order bias (e.g., ethnicity being inferred from living area or income).
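To make the notion of second-order bias mentioned above concrete, the following sketch (not part of the paper, using entirely simulated data and hypothetical feature names) shows how a proxy such as living area can leak a protected attribute into a credit-scoring model even when that attribute is never used as a feature.

```python
# Illustrative sketch of second-order (proxy) bias; all data is simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical protected attribute (never given to the model).
group = rng.integers(0, 2, size=n)

# Proxy feature: living area strongly correlates with the protected group.
living_area = np.where(rng.random(n) < 0.8, group, 1 - group)

# Income is also correlated with group membership in this simulated population.
income = rng.normal(50 + 10 * group, 5, size=n)

# Historical approval decisions that (unfairly) depended on group membership.
approved = (rng.random(n) < 0.3 + 0.4 * group).astype(int)

# Train only on the seemingly neutral features: living area and income.
X = np.column_stack([living_area, income])
pred = LogisticRegression().fit(X, approved).predict(X)

# Predicted approval rates still differ sharply between groups, because the
# proxies reconstruct group membership: this is the second-order bias at issue.
for g in (0, 1):
    print(f"group {g}: predicted approval rate = {pred[group == g].mean():.2f}")
```

Whether bias of this kind is better detected and mitigated on real datasets or avoided through synthetic data is exactly the design question the consultation response raises.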
We also believe that there should be a provision in the draft text to protect European AI developers and users at international level. AI applications are not confined to any physical location, and countries across the world interpret copyright and liability differently when it comes to AI.
Finally, we call for clarity on the scope of the text when it comes to biometric identification of natural persons. It is not yet clear whether financial services firms and their providers, which rely on biometric identification to onboard customers remotely and to comply with know-your-customer (KYC) requirements, will fall within the scope of the full set of requirements in the AI regulation.
We support the Commission in its efforts to create a
clear legal framework for artificial intelligence which does not inhibit
innovation and at the same time provides security for all market participants.
We are particularly pleased with the Commission's philosophical approach to
promoting "digitalisation with a human face". We believe that
trustworthy AI in cooperation with human expertise will be of great value to
European society. We particularly emphasise the interaction between humans and machines, and we firmly believe that both are irreplaceable. However, we must ensure that new regulation does not inadvertently cripple our markets or dampen innovation and opportunity.
Full paper
© WSBI