
19 April 2023

EPC's Riekeles: AI has escaped the 'sandbox' — can it still be regulated?


The stakes for the human race in current AI developments could not be higher. This is no time to cut ethical corners regarding research, regulation, or lobbying.

Recently I was introduced to the concept of "algorithmic gifts" as part of a research interview on tech lobbying in Brussels. The question was how algorithmic favours might be used to sway the direction of debates and policy.

When Twitter released segments of its code a few weeks back, we got a first, perhaps unsurprising, answer: far from being neutral as professed, Twitter's algorithm gives an additional, artificial push to posts from President Joe Biden, Twitter CEO Elon Musk and a few dozen selected luminaries such as basketball player LeBron James, American columnist Ben Shapiro and entrepreneur Marc Andreessen.

This only adds to existing questions about where Musk's Twitter is heading and, more fundamentally, about the structure and integrity of today's platform-mediated public space.

By now, algorithms create and redistribute power across most aspects of our social, economic, and political life. We live in an algorithmic society and with that come steep ethical questions.

AI and singularity

Nowhere is the acceleration and disruption more evident than in artificial intelligence. The combination of immense data sets, massive computational force, and self-learning algorithms promises to unleash enormous powers — in every sense of the word.

In medical research, to take one example, the use of machine learning and mRNA technology (the same as in the COVID-19 jab) holds tremendous potential. Vaccines against cancer, cardiovascular and auto-immune diseases could be ready by the end of this decade.

Few would want to relinquish this promise. Much more contested is the emergence of so-called General Purpose Artificial Intelligence, or GPAI: self-learning algorithms capable of performing multiple, varied tasks to the point of giving the impression of thought.

Within two months of its launch, ChatGPT, the first application out of the starting blocks, reached 100 million users, a pace not seen for any other consumer tech application.

As these users now play with prompts, the machine learns. The latest version of the software already boasts spectacular analytical and creative capacities, presaging future permutations into practical human activities from finance (and column writing!) to arts and sciences.

With the dynamics of exponential advance, the popular scare is that these powerful, 'intelligent' technologies will radically and unpredictably transform our reality — or even develop some form of life of their own.

It's difficult to blame the naysayers and doomsters. In Silicon Valley, sorcerer's apprentices, equipped with a libertarian philosophy and venture capital, have long been yearning for the moment of 'technological singularity', a future in which technological growth becomes uncontrollable and irreversible.

In the mind of Ray Kurzweil, Google's director of engineering and a key figure in this movement, the process towards singularity has long since begun and will culminate around 2030 (note: the same date as for the vaccines).

Computers will have human intelligence, and our (final) choice might be to put them inside our brains, connecting our neocortex to the cloud.

EU's AI Act on the spot

As is often the case, Europeans will be the first to regulate and can, by and large, take some pride in that. An Artificial Intelligence Act has been on the EU lawmakers' table for two years with the aim of setting the guardrails for safe and lawful AI.

Certain AI practices, such as social scoring, will be prohibited. Others, categorised as "high risk", will be subject to third-party audits and significant transparency requirements under the legislation due to be finalised this year.

That is all good, but the response to the commercialisation of GPAI now stands as the decisive test.

In truth, EU lawmakers are very much in the dark about what to do. At a recent lunch with senior Brussels lawmakers and industry representatives, civil society voices asked whether it could all be stopped: the emphatic answer was that it could only go faster.

Coincidentally, only days later, more than 1,000 AI experts wrote an open letter asking for a pause in training systems more powerful than GPT-4, saying that "AI systems with human-competitive intelligence can pose profound risks to society and humanity."

More at EPC



© European Policy Centre EPC

