The commitment to European values and respect for fundamental rights, together with the mobilisation of adequate policies and resources, are the two forces driving our Strategy to place Europe at the forefront of the international AI landscape. Two groups established by the Commission last year to address these objectives presented the first results of their work in December 2018.
In a previous blog post, I introduced the Coordinated Plan on Artificial Intelligence agreed between EU Member States and the European Commission as a means of ensuring synergies and avoiding wasteful duplication of effort in facilitating AI uptake. This time, I would like to talk about the draft AI Ethics Guidelines presented by the High-Level Expert Group on AI (AI HLEG) to provide a framework for Trustworthy AI.
To support the development of a human-centric AI that increases human well-being and ensures the common good, the AI HLEG introduced the concept of trustworthiness as a combination of ethical purpose and technical robustness.
Based on fundamental rights, the Guidelines list a set of principles and values that give AI applications an ethical purpose. Ethical purpose consists of ensuring compliance with those rights, principles and values, as well as with applicable regulation. However, ethical purpose – or, put simply, good intentions – is not enough. The development and use of AI can also cause unintentional harm, for instance through a lack of technological mastery. Technical robustness is therefore an equally key element in ensuring the trustworthiness of our AI systems.
With these two elements in mind, the draft Guidelines then provide a list of 10 requirements, ranging from Accountability to Transparency, that AI needs to meet in order to be trustworthy. To ensure the implementation of those requirements, a series of technical and non-technical methods are suggested for the development, deployment and use of an AI that is not just ethical, but also technically robust and reliable. Finally, the document offers an assessment list designed to help stakeholders implement the Guidelines in daily business. This is intended to operationalise the framework for Trustworthy AI, and in this regard it goes beyond other existing frameworks for ethical AI, which remain at a more abstract level.
Trustworthiness can become a distinct European quality mark that sets us apart from AI developed and used in other world regions.
While the draft Guidelines are still subject to a consultation process aiming to collect feedback from the wider AI community, last week the AI HLEG met with representatives of relevant European ministries to discuss the specific perception of the guidelines on the Member State level. The meeting that took place on 22 January in Brussels was designed to engage both sides in an active discussion on the purpose, implementation and future governance of the Ethical Guidelines for Trustworthy AI.
Member States welcomed the draft as a guide for both National Strategies (some of which are already in place, with several others in preparation) and AI developers, towards the adoption of a European human-centric approach to AI. To this end, the practical aspects of the document were appreciated, while further guidance was requested on specific aspects of its application, such as its non-binding nature or its relationship to existing regulation in the field.
The guiding character of the document was also discussed with regard to its position at both European and international level. As the field of AI is still evolving, current regulatory processes should be designed and developed in cooperation with industry. While all parties agreed that the aim of the document is to guide without putting up barriers to innovation, its fast and effective adoption by both the public and private sectors could strengthen European competitiveness and set an international example for a more ethical use of AI.
The Commission regards the draft Ethics Guidelines for Trustworthy AI as a building block that, together with the Coordinated Plan, sets the basis not merely for an “AI Made in Europe”, but for one that is “Trustworthy”. Commissioner Gabriel met the Chair of the Group, Pekka Ala-Pietilä, earlier that day and thanked the AI HLEG for all the efforts made to shape a European approach to AI based on high ethical standards. I happily follow her lead and likewise take the opportunity to thank the group for all the work they have already done, and for the work still ahead.
Taking into account the feedback received through the consultation, the AI HLEG will now finalise the Guidelines and publicly present them in early April 2019 at the first AI Alliance Assembly. More information on this event will follow soon – stay tuned!
- Tags
- jobs, skills, innovation, Artificial Intelligence, investment, research, Legal, society, Business, Ethics, uptake, Trustworthy AI Guidelines, AI Strategy for Europe, Member State, Forum on AI, Ethical by Design
Comments
1. Ethics as creating self-consciousness: To place Europe at the forefront of the international AI landscape, two forces would not be enough if we don’t shape ideas on how to create spaces for new and immense investments and for testing new social mechanisms. In order to reach the forefront of the AI landscape we should not only provide policies and resources - this is old state thinking. Rather, we should simply create the legal spaces for investments and resources to flow into this development. This would provide much more impetus and mobilise forces. Then you don’t need to avoid wasteful duplication, because you are able to encourage market-based discovery processes without state allocation of resources for development.
2. Immorality of ethical purpose and order: The idea of an "ethical purpose" makes no sense in the prevailing ethical theory dealing with developing multicultural humanism (see e.g. Levinas). This problem becomes even more apparent when we want to use ethics for "ensuring compliance" with rights – this does not work and leads to the problem of legalising ethics, a notion that is far from all ethical ideas or purposes. Compliance is a danger for ethics.
3. Trust in human development: Why is there no trust inside the AI HLEG Ethical Guidelines to encourage a discovery procedure as Hayek described it, where all developments that are technically robust will prevail? Trustworthiness is therefore the wrong goal, because trustworthiness may also be developed and thereby discovered or covered by law and liability rules. Probably all things that need to be discovered can never be planned, not even with ethical guidelines - unless these guidelines open up spaces and don’t dictate how things must be designed.
In the end this is a vote for more courage, more ethical consciousness and a clear distinction between law, economics and ethics. Ethics is not soft law creating rules where legislators fail.
Thank you for your comments! We would invite you to submit those comments, and any other suggestions you may have on the draft AI Ethics Guidelines, through the consultation form so the AI HLEG can consider them during the finalisation of the document. Please note that the consultation process will remain open until the end of the current business day.
I recently read the book AI Superpowers by Kai-Fu Lee; he paints a picture of dominance in AI for the USA and China, as they seem to have a lead when it comes to datasets. He suggests that data superiority is more relevant than algorithm superiority. On a different day I had a discussion with a European friend who mentioned that Europe is better than the US in processes, hence the EU stands a chance in B2B AI. Which kind of resonates with the overall objective of this forum. Just sharing some viewpoints for consideration :)
In reply to "I recently read the book, AI" by Sachin GAUR
Some thoughts about this:
From the technical point of view, we are still decades away from fully conscious AI. There’s a lot of promise in quantum and neural processing, but this technology is still in its infancy, and my expectations of seeing conscious AI happen in my lifetime are low.
However, that doesn’t mean we should not plan ahead and have a good discussion about this matter; most likely humans themselves will cause a lot of trouble with AI products in the meantime. The tool of choice for controlling human behaviour is law, if we set religion aside, and that tool needs to be honed to its peak condition for the AI products that exist now or will exist in the future.
Another way is to lead by example. For this we have standardisation, and this process (the international standardisation of AI) has already begun. I’d like to see everyone who is interested join the standardisation process and do their very best in it. There will probably be a European standard as well, but at the moment we are still at the beginning of the international standardisation process, so there is plenty to do and now is a good time to join this effort. More information can be found from your national standardisation body.
So there are plenty of things to resolve: we need to protect individuals and their data, but we also need to make sure that plenty of data is available for the development of AI. Companies need a fair playing field and users need a safe environment. Minorities need to be treated equally, and every language group needs equal opportunities to develop AI for its own use. For this, the European Union is a uniquely well-equipped organisation, and we shouldn’t expect less of it just because it is hard to achieve. It may take some time, but when we get this done, European AI will be very appealing to the rest of the world, and then Europe will be at the top of this game.