By Pekka Ala-Pietilä, Chair of the High-Level Expert Group on AI

Dear members of the European AI Alliance,
We are now approaching mid-November, with only a month and a half to go before the end of the year and our deadline to deliver a first draft of our first deliverable – the AI Ethics Guidelines. I would like to take this opportunity to update you on the progress made by the High-Level Expert Group on Artificial Intelligence (“AI HLEG”) in the meantime, and on the next steps ahead.
After the AI HLEG workshop that took place on 20 September (slides, video-recording, report of outcomes) – to which you kindly contributed your input – the AI HLEG met two more times: on 8 and 9 October in Helsinki (in the margins of the AI Forum), and just last week, on 8 November in Brussels.
During this autumn, we agreed on the rough structure of the two deliverables and the main themes to be addressed in them. Subsequently, we established “subgroups” around those themes and appointed two rapporteurs within the AI HLEG for each, who will start drafting the corresponding sections based on input from the rest of the group. This work happens under the coordination of the two Vice-Chairs of the AI HLEG, Nozha Boujemaa (for Deliverable 1 – the Guidelines) and Barry O’Sullivan (for Deliverable 2 – the Recommendations).
Note that these subgroups are very informal: each member of the AI HLEG can be part of any subgroup and contribute to any section at any time. More importantly, this structure may still change (we may decide to merge, split, add or delete certain sections) and should thus only be seen as an intermediate step in the work process.
For the draft AI ethics guidelines, the main themes are the following:
- Core Values and Principles (Intent) – for which the rapporteurs are Aimee Van Wynsberghe and Nicolas Petit. As the name suggests, this section looks into the core values and principles that are key when dealing with AI, and that need to be part of the intent of whoever develops or uses AI.
- Implementation of Trusted AI – for which the rapporteurs are Virginia Dignum and Jean-François Gagné. This section looks into how the values and principles that we hold dear, and that can ensure our trust in AI, can actually be implemented in the technology.
- Checklist based on Use Cases – for which the rapporteurs are Cecilia Bonefeld-Dahl and Saskia Steinacker. This section will analyse a number of use cases and provide a checklist / set of guidelines to exemplify how the right intent and the correct implementation of that intent can be fostered.
- Red Lines – for which the rapporteurs are Thomas Metzinger and Urs Bergman. This section will look into whether there are any red lines in the development or use of AI that we may wish to draw as a society.
For the AI policy and investment Recommendations, the main themes are divided into impacts that we wish to achieve with AI, and the enablers needed to achieve those impacts.
- In terms of impacts to be achieved, we look into:
- Business Impact – for which the rapporteurs are Ieva Martinkenaite and Loubna Bouarfa
- Public Sector Impact – for which the rapporteurs are Françoise Soulié Fogelman and Leo Kärkkäinen
- World Class Research – for which the rapporteurs are Fredrik Heintz and Sami Haddadin
- Citizen benefits & engagement – for which the rapporteurs are Catelijne Muller and Virginia Dignum
- In terms of enablers that will need to be leveraged, we look into:
- Funding and Investment – for which the rapporteurs are Maria Bielikova and Markus Noga
- Data and Infrastructure – for which the rapporteurs are Philipp Slusallek and Françoise Soulié Fogelman
- Skills and Education – for which the rapporteurs are Sabine Theresia Köszegi and Thiébaut Weber
- Policy and Regulation – for which the rapporteurs are Ursula Pachl and Cecilia Bonefeld-Dahl
In our November meeting, the AI HLEG heard presentations from the Commission on ongoing and planned AI initiatives at EU level, and the subgroups worked further on their respective sections. While there is still some work ahead, the draft deliverables are slowly starting to take shape.
We can already announce that the first draft of the AI ethics guidelines will be published for consultation here, on the platform of the European AI Alliance, and we will seek your comments and feedback thereon during a period of one month, from 18 December 2018 to 18 January 2019.
Already in the coming days, however, the rapporteurs of the sections mentioned above will start reaching out to you (and some of them already have) to seek your input on these topics. I strongly encourage you to share your expertise with them, as we would like to draw on as many useful contributions as possible.
We will continue working at a high pace and have planned to meet again in Brussels on 13-14 December. It is during that meeting that we will finalise the first draft of the AI ethics guidelines before publication on the platform of the Alliance.
We look forward to your contributions in the meantime.
Kind regards,
Pekka Ala-Pietilä
Comments

As someone who started to assess AI research and implementation many decades ago, I am still concerned by the level of “AI hype”, which is in many respects worse today than it was then.
For example, it is worrying that a UK House of Commons Select Committee engaged in a charade by giving scripted questions to a “robot” named Pepper, which was set up to reply to each question with “pre-recorded” answers. The official Commons website makes no mention of this deception and described the event as a ‘demonstration by Pepper’. Most newspaper and media coverage did not reveal how Pepper “appeared” to be intelligent, nor indeed explain that this demonstration could have been performed a hundred years ago, because it required no computational or AI abilities.
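To see how little such a performance requires, consider a minimal sketch in Python of a scripted exchange of this kind (the questions and answers are invented here for illustration): the “robot” merely plays back whichever canned reply matches the agreed question, with no learning or intelligence involved.

    # A scripted "demonstration" needs nothing more than a lookup table
    # mapping the agreed questions to pre-recorded replies.
    # (Questions and answers are invented here for illustration.)
    SCRIPTED_REPLIES = {
        "What is the future of artificial intelligence?":
            "AI will transform education and the workplace.",
        "Should we be afraid of robots?":
            "No, robots are built to help people.",
    }

    def scripted_reply(question):
        # No computation worth the name: simply play back the canned answer.
        return SCRIPTED_REPLIES.get(question, "I did not understand the question.")

    print(scripted_reply("Should we be afraid of robots?"))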
This is but one example of the torrent of misinformation and nonsense coming from the media, AI researchers, corporations and governments. There is of course nothing new in this; it has been known for many years by AI researchers such as Joseph Weizenbaum that much of AI’s success is built on making computing “appear” intelligent. Alan Turing was aware of this, and predicted that before we could speak of machines thinking there would have to be a change in educated opinion and in the meaning of words. This is no doubt true, but we need to understand how this process is happening and not allow it to be led by hype and fancy.
There are obvious dangers in developing a technology in such a distorted environment, not least that innovation will be frustrated and halted because it does not conform to the dominant group’s hype. This is what happened in the early development of ANNs and machine learning fifty years ago, which in no small part led to the ‘AI winter’.
Could you inform me whether you are addressing this issue? If so, may I ask who is doing it and how?
In reply to As someone who started to by Keith Tayler

The AI HLEG is indeed looking into raising citizen awareness of AI capabilities and limits, and this subject will in fact be part of both deliverables that we will produce. The AI HLEG is also working on a definition of AI which aims to clarify what it is and what it isn't, in order to further demystify AI and explain what the AI HLEG means when it mentions 'AI'. As far as I am aware, the European Commission is undertaking some additional actions in this area, but I will let someone from the Commission elaborate on that. The issue of hype and misinformation around AI is indeed a real problem.
In reply to The AI HLEG is indeed looking by Barry O'SULLIVAN

Thanks, Barry, for your reply. I cannot see anything of any substance on the website, but I will keep looking.
I certainly welcome any attempt to clarify the meaning of ‘AI’; I have always used the term very sparingly, if at all, because it only causes misunderstanding. Indeed, it is a great pity that the acronym ‘AI’ survived the AI winter, for it has caused, and will continue to cause, confusion and hype.
You will forgive me if I am somewhat sceptical as to whether an organisation with the title European AI Alliance and a group with the acronym AI HLEG will be able to reduce the present wave of AI hype and debunk the humbug and myths. Nonetheless, I wish you well and look forward to seeing the distinctions and clarifications on the website.
In reply to The AI HLEG is indeed looking by Barry O'SULLIVAN

An example of how we are asking ourselves the same question here at the European Commission is the AlgoAware pilot project we are carrying out. AlgoAware is short for Algorithmic Awareness Building, a mandate that the European Parliament entrusted us with, and which we have structured as a roughly two-year policy design project aiming to build a solid evidence base on the emerging challenges of algorithmic decision-making (including, but not limited to, AI) in the online environment. The project takes on a number of specific case studies where algorithmic decisions have a policy stake, analysing the existing evidence, the maturity of the technology, the societal, economic, legal and ethical challenges, the interests at stake, and so on. The analysis will eventually converge towards a ‘policy toolbox’ exploring proportionate and effective policy responses to the challenges identified.
As the project develops – and first findings should soon be published for peer review at www.AlgoAware.EU – it will also host more discussions around the difficult issues and flag further evidence gaps, in particular where misconceptions and hype tend to steal the headlines in the public debate.
If further regulatory or policy interventions are considered, the Commission is committed to the solid principles and processes for evidence-informed policy design set out in the Better Regulation rules by which we are bound.
More generally, the Commission has contributed to solid research in AI through the various research projects supported by our funding instruments. Importantly, Horizon 2020 projects have also advanced cross-cutting issues along responsible research and innovation principles, and some projects have supported a deeper understanding of what ethical research in AI entails. Jola has already explained a bit more about the AI Alliance and the work of the High-Level Expert Group in her comment above.

In addition to the points raised by Keith Tayler below, I would like to ask whether the AI HLEG will (can?) do anything about the push in Malta regarding AI.
Their new strategy to bring Malta to the top of the AI game is admirable, but it was recently announced that they aim to, and I quote,
"Amongst others, with SingularityNET, we shall explore a pilot project to create a citizenship test for Robots in the process of drafting new regulation for AI. The citizenship test will serve as a tool to measure how much Robots understand citizenship and civic rules, in order to help in the drafting of new AI regulation that will harness the use of AI Robots for the benefit of the people."
The fact that an EU government has such a project is appalling.
1) An AI trained on citizenship rules and regulations can be brought to the desired level to "pass a citizenship test" - whether that's a regular test for humans or a new one created for robots.
2) How would a citizenship test for robots be different? Why is there discussion of a citizenship test for robots to begin with? What are the legal implications, and don't you think this will negatively impact how people view AI? (It's already falsely conflated with robots.)
3) How would we measure how much a robot "understands" citizenship/civic rules? Passing a test after being trained on specific data is very different from "understanding" as we know it (see the sketch below). Not to mention that outputs from one specific AI do not represent "all robots" and should not be used as guidelines to draft regulation that will apply to AI trained on different datasets for completely different contexts.
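To make point 3 concrete, here is a minimal sketch in Python (with invented questions and answers) of a system that "passes" by memorising the official question-answer pairs, yet fails the most trivial rephrasing; that gap is precisely the difference between passing a test and understanding anything.

    # A "test passer" that memorises official question/answer pairs.
    # (The data is invented here for illustration.)
    TRAINING_DATA = {
        "how many meps does malta elect": "six",
        "what is the capital of malta": "valletta",
    }

    def answer(question):
        # Normalise and look up; there is no model of citizenship here.
        return TRAINING_DATA.get(question.lower().strip(" ?"), "no answer")

    print(answer("How many MEPs does Malta elect?"))  # "six" - full marks on seen questions
    print(answer("Which city is Malta's capital?"))   # "no answer" - fails a trivial paraphrase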
I would hope that the AI HLEG was established to at least influence such matters, even though I know the EU might not have the authority to intervene in some member states' affairs. The incident where Pepper "testified", this upcoming project in Malta, and the H2020 project iBorderCtrl have all received criticism from some of the biggest names in Tech+Society research. Who in the EU is taking concrete steps on these topics? There have been many "principles" and general policies set forth by organizations, companies, and more recently governments. These do not matter if the people who create AI do not take them into consideration and problematic new applications go unchecked. Please let us know where we can follow the EU's position on these issues. Thank you.
(I will most likely make a whole separate post on iBorderCtrl, as I haven't seen it discussed on this platform at all. In a nutshell, it's an H2020 project that aims to install facial-recognition-powered "lie detectors" at all EU outer borders. Some big problems include lie detection being a pseudoscience, as argued by prominent psychologists, and bias in computer vision, which is my research topic.)
In reply to In addition to the points by Pınar Barlas

Thank you, Pinar, for bringing this to our attention. Unfortunately, this type of nonsense was all too common during the first wave of AI hype and myths, and it is, if anything, more prevalent now with the second wave. What is worrying is that this nonsense distorts the underlying sound developments in computing that do need support.
It is relatively easy to get a computer (aka robot) to pass a citizenship and civic rules test, but this does not mean, as you say, that the computer has “understood” citizenship any more than a calculator (mechanical or electronic) understands mathematics. Of course, this could start a discourse on what understanding means and whether it is possible for a state machine to understand. This is not the place for that, but we should all, and that includes the AI HLEG, refuse to let this pseudo-scientific claptrap pass unchallenged.
I agree with you; I too would like to know where we can follow the EU's position on these issues.
In reply to Thank you Pinar for bringing by Keith Tayler

Dear Keith and Pinar,
Thank you for bringing the question of AI hype into the conversation. It is indeed hard to maintain an informed balance between scientific research, commercial applications and ethical issues, especially in a period when national policies are still being defined.
As the European AI Alliance is not an organisation but a forum of citizens, organisations, academics and all other interested stakeholders willing to bring the discussion on AI forward, your suggestions on how to address such issues are fundamental.
Therefore, I would invite you to create a dedicated open discussion with your questions and suggestions on this topic, so that other members of this forum can also contribute their own ideas. The AI HLEG is closely following these discussions in order to gather such ideas and reflect them in the preparation of the guidelines and policy recommendations that it will submit to the European Commission.
In terms of the content discussions of the AI HLEG, I can already refer you to the report of the workshop that took place on 20 September, as well as the presentations of the meeting in October, which indicate a rough and preliminary structure of the deliverables. As the blog post above explains, you will be able to see the first draft of the Guidelines here on 18 December, and we look forward to your feedback on it.
Note that, beyond the work of the AI HLEG, the European Commission is also following the forum discussions closely, as they can be used to inform its future policy-making around AI.

As I am already working on the topic of AI ethics, I would like to join this group. Thanks.