Towards a Trustworthy AI "Made in Europe"

The commitment to European values and respect for fundamental rights, together with the mobilisation of adequate policies and resources, are the two forces driving our Strategy to place Europe at the forefront of the international AI landscape. Two groups established by the Commission last year to address these objectives presented the first results of their work in December 2018.

In a previous blog, I introduced the Coordinated Plan for Artificial Intelligence agreed between EU Member States and the European Commission as a means of ensuring synergies and avoiding wasteful duplication of effort in facilitating AI uptake. This time, I would like to talk about the draft AI Ethics Guidelines presented by the High-Level Expert Group on AI (AI HLEG) to provide a framework for Trustworthy AI.

Supporting the development of human-centric AI to increase human well-being and ensure the common good, the AI HLEG introduced the concept of trustworthiness as a combination of ethical purpose and technical robustness.

Based on fundamental rights, the Guidelines list a set of principles and values that give AI applications an ethical purpose. Ethical purpose consists of ensuring compliance with those rights, principles and values, as well as with applicable regulation. However, ethical purpose – or, put simply, good intentions – is not enough. The development and use of AI can also cause unintentional harm, for instance where technological mastery is lacking. Technical robustness is therefore likewise a key element in ensuring the trustworthiness of our AI systems.

With these two elements in mind, the draft Guidelines then provide a list of 10 requirements, ranging from Accountability to Transparency, that AI needs to meet in order to be trustworthy. To ensure the implementation of those requirements, a series of technical and non-technical methods is suggested for the development, deployment and use of an AI that is not just ethical, but also technically robust and reliable. Finally, the document offers an assessment list designed to help stakeholders implement the Guidelines in daily business. This is intended to operationalise the framework for Trustworthy AI, and in this regard it goes beyond other existing frameworks for ethical AI, which remain at a more abstract level.

Trustworthiness can become a distinct European quality mark that sets us apart from AI developed and used in other world regions.

While the draft Guidelines are still subject to a consultation process aiming to collect feedback from the wider AI community, last week the AI HLEG met with representatives of relevant European ministries to discuss the specific perception of the guidelines on the Member State level. The meeting that took place on 22 January in Brussels was designed to engage both sides in an active discussion on the purpose, implementation and future governance of the Ethical Guidelines for Trustworthy AI.

Member States welcomed the draft as a guide for both National Strategies (some of which are already in place, with several others in preparation) and AI developers, towards the adoption of a European human-centric approach to AI. To this end, the practical aspects of the document were appreciated, while further guidance was requested on specific aspects of its application, such as its non-binding nature or its relationship to existing regulation in the field.

The guiding character of the document was also discussed with regard to its position at both the European and the international level. As the field of AI is still evolving, current regulatory processes should be designed and developed in cooperation with industry. While all parties agreed that the aim of the document is to guide without putting up barriers to innovation, its fast and effective adoption by both the public and private sectors could strengthen European competitiveness and set an international example for a more ethical use of AI.

The Commission regards the draft Ethics Guidelines for Trustworthy AI as a building block that, together with the Coordinated Plan, sets the basis not solely for an “AI Made in Europe”, but one that is “Trustworthy”. Commissioner Gabriel met the Chair of the Group, Pekka Ala-Pietilä, earlier that day and thanked the AI HLEG for all the efforts made to shape a European approach to AI based on high ethical standards. I happily follow her lead and likewise take the opportunity to thank the group for all the work they have already done, and for that which is still ahead.

While considering the feedback received through the consultation, the AI HLEG will now finalise the Guidelines and publicly present them in early April 2019 at the first AI Alliance Assembly. More information on this event will follow soon – stay tuned!

Tags
jobs skills innovation Artificial Intelligence investment research Legal society Business Ethics uptake Trustworthy AI Guidelines AI Strategy for Europe Member State Forum on AI Ethical by Design

Comments

Posted by Erny Gillen, Mon, 28/01/2019 - 20:47

Dear Lucilla,

in your blog you reconfirm the direction taken by the AI HLEG without mentioning major points of criticism, such as: the predominance of industry in the HLEG, the question of WHO trustworthy AI refers to, the lack of consumer protection, the kneefall towards competitiveness ...

You only admit that there is an issue with the non-binding character of the Guidelines. The role of the Commission should be to take divergent opinions into account as well, as clearly stated by the EGE at the very beginning of this process.

The Ethics Guidelines are not about AI first, but about the EU, its role and function towards citizens. 

Kind regards

erny


See my comments on the systemic shortfalls posted at the EU website and on my website: moralfactory.com/ethics-first/

Posted by Eberhard Schnebel, Mon, 28/01/2019 - 21:45

1. Ethics as creating self-consciousness: To place Europe at the forefront of the international AI landscape, it would not be enough to have two forces if we don't shape ideas on how to create spaces for new and immense investments and for testing new social mechanisms. To reach the forefront of the AI landscape we should not only provide policies and resources - this is old state thinking. Rather, we should simply create the legal spaces for investments and resources to flow into this development. This would provide much more impetus and mobilise forces. Then there is no need to avoid wasteful duplication, because you are able to encourage market-based discovery processes without state allocation of resources for development.

2. Immorality of ethical purpose and order: The idea of an "ethical purpose" makes no sense in the prevailing ethical theory dealing with developing multicultural humanism (see e.g. Levinas). This problem becomes even more apparent when we want to use ethics for "ensuring compliance" with rights – this does not work and leads to the problem of legalising ethics, a notion that is far from all ethical ideas or purposes. Compliance is a danger for ethics.

3. Trust in human development: Why is there no trust inside the AI HLEG Ethics Guidelines to encourage a discovery procedure as Hayek described it, where all developments that are technically robust will prevail? Trustworthiness is therefore the wrong goal, because trustworthiness may also be developed, and thereby discovered or covered by law and liability rules. Probably all things that need to be discovered can never be planned, not even with ethical guidelines - unless these guidelines open up spaces and don't dictate how things must be designed.

In the end, this is a vote for more courage, more ethical consciousness and a clear distinction between law, economics and ethics. Ethics is not soft law creating rules where legislators fail.

Posted by Jean-Philippe …, Fri, 01/02/2019 - 11:23

The document defines the “ethical purpose” of AI as respecting the rights, principles and values enshrined in the EU Treaties and in the Charter of Fundamental Rights of the European Union. Unfortunately, the delimitations between the concepts of rights, principles and values appear rather vague and even tautological in their current formulation. The “rights-based approach” taken falls short of making a proper ethical case for these rights.

Furthermore, the document is ambiguous over the term “ethical purpose”, since AI systems shall on the one hand “comply with” values, principles and rights and on the other serve them as a purpose. The latter case implies that AI, and thus also organisations developing it, can only be ethical if they serve the purpose of advancing fundamental rights. At the same time, these rights and their underpinnings can evolve over time, making the need for a stronger ethical foundation of the guidelines even more important.

Shifting away from the questions of rights, it may be argued that the ground-breaking trait of AI lies in its unmeasurable potential to create a utopian or dystopian society from the contemporary point of view and compared to previous technologies. This brings up classical ethical questions about the “good life”, as well as the famous Kantian questions[1]. Since our species, at least seemingly, could soon hope for almost everything, the central question appears to be: what, if almost everything is indeed possible, should the human do? And who is this being in a ubiquitous technological environment?

More on our contribution here: https://www.cec-managers.org/ai-ethics/

Posted by Jola DERVISHAJ, Fri, 01/02/2019 - 13:47

Thank you for your comments! We would invite you to submit those comments, and any other suggestions you may have on the draft AI Ethics Guidelines, through the consultation form so the AI HLEG can consider them during the finalisation of the document. Please note that the consultation process will remain open until the end of the current business day.

Posted by anna de stefano, Sun, 03/02/2019 - 16:10

Thank you for your post.

Another point that seems to be missing from the Guidelines is how to govern the interactions between AI systems, especially when they take actions autonomously based on learning AI and profiling of personal data and behaviours. How do we deal with a world where an AI will analyse all my choices/situations and talk to the other algorithms that rule the systems I use (e.g. news feeds, connections, thermostats, etc.)?

Thank you 

Anna

Posted by Sachin GAUR, Wed, 06/03/2019 - 04:32

I recently read the book AI Superpowers by Kai-Fu Lee, in which he paints a picture of AI dominance for the USA and China, as they seem to have a lead when it comes to datasets. He suggests that data superiority is more relevant than algorithm superiority. On a different day I had a discussion with a European friend who mentioned that Europe is better than the US in processes, hence the EU stands a chance in B2B AI. This kind of resonates with the overall objective of this forum. Just sharing some viewpoints for consideration :)

Posted by Sachin GAUR in reply to

Posted by Jola DERVISHAJ, Wed, 06/03/2019 - 17:08

Hi Sachin! There is actually a very relevant discussion we had some time ago on this topic. You might want to check it here.

Posted by Kai Salmela, Thu, 07/03/2019 - 10:26

Some thoughts about this:

From a technical point of view, we are still decades away from fully conscious AI. There is a lot of promise in quantum and neural processing, but this technology is still in its infancy, and my expectations of seeing conscious AI in my lifetime are low.

However, that doesn’t mean we should not plan ahead and have a good discussion about this matter; most likely humans themselves will cause a lot of trouble with AI products in the meantime. The tool of choice for controlling human behaviour is law, if we set religion aside, and that tool needs to be honed to peak condition for AI products that exist now or in the future.

Another way is to lead by example. For this we have standardisation, and the process of international standardisation of AI has already begun. I’d like to see everyone who is interested join the standardisation process and do their very best in it. There will probably be a European standard as well, but at the moment we are still at the beginning of the international standardisation process, so there is plenty to do and now is a good time to join this effort. More information can be found from your national standardisation body.

So there are plenty of things to resolve: we need to protect individuals and their data, but we also need to make sure that there is plenty of data available for the development of AI. Companies need a fair playing field and users need a safe environment. Different minorities need to be treated equally, and every language group needs equal opportunities to develop AI for their use. For this, the European Union is a uniquely well-equipped organisation, and we shouldn’t expect less of it just because it is hard to achieve. It may take some time, but when we get this done, European AI will be very appealing to the rest of the world and Europe will be at the top of this game.