The Chatbot Hype: Implications for Risk Regulation in Artificial Intelligence

A few weeks ago, an impressive number of experts called, in their open letter entitled "Pause Giant AI Experiments"[1], for a moratorium on the development of generative AI. They name the societal risks involved with this kind of AI application and demand that appropriate regulation be considered thoroughly. Whether this would help with controlling AI risks is unclear. Nevertheless, it sheds new light on regulatory efforts on AI in the global arena.

In response to the chatbot hype stimulated by ChatGPT since November 2022, several tech giants have meanwhile announced significant investments in chatbot technology. This will surely boost the evolution of AI as a whole. At the same time, the regulation of AI is gaining more public awareness. In the US, the regime of unrestricted trial and error is losing support, as the open letter has shown. China is setting out for an extensive regime of command and control, with the regulatory authority there – the Cyberspace Administration of China – claiming that chatbots must be based on "socialist values" and must not disturb the economic and social order.

In Europe, the debate on AI regulation rests upon work initiated by the European Commission more than five years ago. That was when it tasked an expert group with formulating ethical guidelines on AI and established the European AI Alliance, with the public platform where this contribution is posted. The current chatbot hype is expected to influence the ongoing work on the AI Act, to be finalized later this year. As it appears, the European way of AI regulation will in the end be one based upon reflection and adjustment.

Most recently, a comprehensive contribution to the public discourse was provided by the German Council on Ethics[2]. Based on technical and philosophical reflections, the report points out the priority of human oversight with regard to machine action resulting from AI systems. This is well in line with the principle of human-centred AI as recommended by the European Commission. And it will surely help with building public acceptance of AI. There is much to be done in this respect. A recent poll in Germany[3] showed that 40 percent of people are primarily concerned when it comes to AI applications, versus 20 percent who welcome it as progress, while 30 percent see both aspects.

In the wake of the recent chatbot hype, smart regulation will become even more key to the evolution of AI – as a trigger of public trust, and by providing the needed orientation to AI developers.

____________________

Norbert JASTROCH

eMail norbert.jastroch@metcommunications.de

____________________


[1] https://futureoflife.org/open-letter/pause-giant-ai-experiments/, accessed 31st March 2023

[2] Deutscher Ethikrat: Mensch und Maschine – Herausforderungen durch Künstliche Intelligenz, preprint, 20th March 2023, www.ethikrat.org

[3] Institut für Demoskopie Allensbach poll, taken from: Frankfurter Allgemeine Sonntagszeitung, 23rd April 2023

Tags
artificial intelligence; chatbots; risk; regulation

Comments

Submitted by Nicola Fabiano on Fri, 28/04/2023 - 18:41

I am one of the people who signed the "Future of Life Institute" open letter due to concerns about the risks, especially - from my perspective - those related to data protection and privacy.

In my humble opinion, the subject involves several factors.

I think changing the mindset and acquiring a new approach based on learning technical standards (perhaps non-legislative ones) will be necessary. I believe that we don't need further legislation on this topic. From my perspective, it's not a matter of regulation but of new, specific technical standards. Last but not least, a revision of models is required, because those currently in use probably aren't adequate to the challenges of AI.

Probably everybody knows the measure adopted by the Italian Data Protection Authority, which imposed a restriction on processing on OpenAI.

Some hours ago, the Italian Supervisory Authority (Garante per la protezione dei dati personali) published a press release stating: "OpenAI, the US-based company operating ChatGPT, sent a letter to the Italian SA describing the measures it implemented in order to comply with the order issued by the SA on 11 April. OpenAI explained that it had expanded the information to European users and non-users, that it had amended and clarified several mechanisms and deployed amenable solutions to enable users and non-users to exercise their rights. Based on these improvements, OpenAI reinstated access to ChatGPT for Italian users."

Hopefully, people will become aware of the key aspects and adopt a more appropriate approach to AI.

______

Nicola Fabiano

email: nicola@fabiano.law