The work of the High-Level Expert Group on AI has started

by Lucilla Sioli, Director for Digital Industry and Artificial Intelligence at DG Connect, European Commission

The first meeting of the High-Level Expert Group on Artificial Intelligence took place last Wednesday. Discussions were productive and an excellent start to the work of the Expert Group! I enjoyed the exchanges within the Expert Group, both as part of the panel meeting and in the breakout sessions on AI ethics and on policy and strategy.

The meeting kicked off with presentations by the Commission services on the EU AI strategy and on the mandate of the group. Mr van den Hoven of the Group on Ethics in Science and New Technologies (EGE) highlighted the main points from its "Statement on Artificial Intelligence, Robotics and 'Autonomous' Systems", which, together with the inputs from the European AI Alliance, provided one of the starting points for discussions.

I found it very interesting to see this heterogeneous group of experts, with a variety of backgrounds, formulating the first suggestions and gradually moving to more concrete ideas. One interesting discussion took place on how to formulate a usable definition of AI, and about how necessary a detailed definition really is. A number of experts decided to follow up on this.

In the break-out sessions the Expert Group on AI provisionally identified a number of core challenges in ethics: identifying the core objectives (e.g. human dignity, fundamental rights, a sustainable planet); the no-crossing lines (autonomous lethal weapons, cognitive AI); and principles including the governance of ethics, generating trust, assigning responsibility, addressing bias, the implementation and practicality of the guidelines, the balance between regulation and innovation, and the delivery of ethical systems by design.

The experts also proposed a number of themes for discussions on policy and strategy to boost Europe's AI capacity and competitiveness. These included the characteristics of European AI and the most important regulatory challenges; skills gaps and opportunities for AI research and deployment; ensuring that excellent researchers and innovators in AI stay in Europe; novel European uses of AI, such as in the "green" agenda; potential creation of a European AI research organisation in the form of a "CERN for AI"; and encouraging the use of well-curated open public sector data.

I was pleased to see that Mr Pekka Ala-Pietilä was appointed Chair of the Expert Group. He planned for the group to draft and table minutes of their discussions in the next couple of weeks and encouraged further elaboration of the proposed challenges and themes. These will be the basis of the workshops to be organised from September on.

Stay tuned for further exchanges.

Comments

Posted by Andres Abad Ro… Mon, 02/07/2018 - 21:50

I worked at CERN, and the idea of creating a CERN-like organisation for AI is simply amazing. It is an incredible idea.

Would it be possible to access those presentations?  

In reply to Andres Abad Ro…

Posted by Jaroslav BARAN Tue, 03/07/2018 - 11:09

The idea that the EU would benefit from a "CERN of AI" was mentioned in a number of discussions in various contexts. Similarly fascinating was the notion of an "Airbus of AI".

I have added to the Open Library a presentation from the event in which Mr Rouhana of the European Commission explained the EU strategy.

However, both of the above-mentioned concepts arose from discussions among the members of the Expert Group rather than being suggested by the Commission. This certainly reflects the level of ambition of the High-Level Expert Group.

Posted by Alpo VÄRRI Thu, 05/07/2018 - 14:25

On the topic of the "definition of AI", people should be aware that there is a technical committee, ISO/IEC JTC 1/SC 42 Artificial Intelligence, which works on these issues. It has an approved work item, AWI 22989 "Artificial Intelligence Concepts and Terminology", which is still in its early stages, so contributions are possible. Those who want to contribute should register with their national standardisation organisation as experts to SC 42.

Posted by Esteban GARCÍA… Fri, 06/07/2018 - 15:07

Many thanks, Lucilla, for the summary.

I'm very glad to hear the idea of a big AI lab for Europe, which could surely address many of the current challenges we face and would bring together many experts and engineers in the area to solve them. Furthermore, it would be a worldwide reference for cutting-edge science (keen to see it :-) ).

Reading George Cogan's post, I also wondered why cognitive AI should be a no-crossing line. My initial answer related it in some sense to the other no-crossing line (autonomous lethal weapons, which could indeed be controlled by cognitive AI), but I then extended the question to other civil or commercial services, such as bank investments, which could be fully controlled by cognitive AI. Though I fully agree that both cases are important, their risks are entirely different: the first must not happen, while the second could happen, but under control and legislation/regulation, which I think is one of the main purposes of the High-Level Expert Group on Artificial Intelligence. Otherwise we could limit the progress of AI at the industrial level too much and focus only on the science, which in the long term would be unsustainable.

Thanks all for your insightful comments.

Esteban.

Posted by Richard Krajčoviech Tue, 10/07/2018 - 17:26

Thank you, Lucilla, for this summary.

Regarding the no-crossing lines, I would suggest focusing on preventing damage instead of limiting research and development. This includes, for example, defining the responsibility of developers and manufacturers (AI is not at a stage where it can be responsible itself) for the consequences and damages of AI usage in the EU (following the GDPR example :-); avoiding "black-box" excuses to the public; and listing specific actions that cannot be performed by systems (of whatever kind, not only AI) on their own without human intervention, or what certifications are required for performing such actions, etc. I think that banning whole areas of AI is too strong.

Richard