Report of the AI HLEG workshop of 20/9/2018


This is the report of the AI HLEG workshop of 20/9/2018, which took place in Brussels. The work was organised in breakout sessions that covered five areas: trusted AI, transparency and accountability, industry and ecosystems uptake of AI, use cases for the guidelines, and AI infrastructures and enablers.

It contains important ideas that also cover many aspects that have been mentioned in the submissions, discussions and comments on the European AI Alliance platform. For example, there are ideas related to what AI is, what trusted AI is, algorithmic assessment tools, certification, inclusiveness, responsibility & accountability, the European AI story, importance of data, skills & education, use cases for the AI ethics guidelines, building blocks for a successful AI-enabled European economy, talent, funding, regulation, ecosystems, and communication.

There are also video recordings available, from the introductory and concluding sessions.



Submitted by Erny Gillen on Tue, 23/10/2018 - 20:12



Dear all,


I'm looking at this report as an ethics professional, from an ethical viewpoint, understanding ethics as a systematic reflection on (different) moral systems and approaches. As one of the two deliverables is the "ethics guidelines", I'm puzzled by the naive way "ethics" is used and considered in this setting.


The first session (Trusted AI) outsmarts ethics (as a discipline) by "turning ethics into a data science problem" (p.5). It does not recognise ethical expertise as a contribution in its own right, and does not list ethics alongside lawyers, policy makers and technicians (p.5) as part of the communication effort to be made. After the (US-provoked) Diesel-Gate, one can doubt that European companies will be seen "as the most trustworthy companions" (p.6). And again, "trust" in ethical behaviour and reflection is delicately brushed aside by stating: "The discussion include thus NOT ONLY ETHICS, but also the law and legal accountability" (p.6). Later, under session 2 (transparency), it is stated: "...ethics COULD also be an enabler, not an obstacle ... and it is a European strength" (p.11).


It would be great to work on that strength with more emphasis and expertise. "Ethical AI is NOT (an abstract and theoretical) CONCEPT" is a further way to disqualify ethics as a specific discipline, and thus as a contributor to "ethics guidelines".


If "an ethical analysis/ethics assessment should be a preliminary step of the methodology" ... (p.15), this correct insight should be considered for the whole approach taken by the HLG. 


On page 17, I understand that my puzzlement was shared by some participants, when they voiced: "Developing the ethics guidelines should not occur in a defensive way only". (sic)


Ethics is the best way we know to shape our freedom as people and societies. AI is already able to "deliberate" (cf. the Watson debater project) USING CONCEPTS & STRATEGIES! How can "we" argue if we argue "nakedly"?


Thanks for this great synthesis of the workshop enabling "absentees" to participate!


Erny Gillen, Luxembourg

In reply to Erny Gillen

Submitted by Cristian Brasoveanu on Thu, 25/10/2018 - 12:00

Dear Erny,

Thank you very much for your valuable input. This is precisely why the European AI Alliance has been set up. We will pass on your input to the members of the AI HLEG.

There are also video recordings from that workshop, which are almost ready, and I think you will find them interesting and useful as well. One of them is the introductory plenary session, and the other is the concluding plenary session, where the groups from the intervening breakout sessions summarised their discussions.

They will be made available here as well, of course.

In reply to Erny Gillen

Submitted by Kai Salmela on Mon, 29/10/2018 - 08:17

Thank you for these opinions; it is good to hear many sides of this matter.

I'm just wondering: how do we control ethical questions between humans? Law and religion come to my mind in the first place. Maybe we should start with ethical law-making in order to input ethical patterns into AI behaviour? After all, it has to be a law-abiding system, even if the laws may vary from one region to another.

When AI has developed to such a high grade that it can have a mind, this question may have more levels.

As we know very well, nothing that a man can imagine is impossible to do, even if it takes some time and effort.

wbr  Kai Salmela

Submitted by Norbert JASTROCH on Fri, 26/10/2018 - 19:14

Dear all,


having started to reflect upon the Outcomes of the AI HLEG Workshop of September 20, I would like to offer some initial comments on section 1, Trusted AI.



Page 3, What is trusted AI:


„Trust comes from understanding, and a lack of understanding translates into a lack of trust, that is why it is important to invest in educating the general public about what AI is and what it can do, to increase trust of future generations. Some stated that it should be the government’s responsibility to made citizens aware and knowledgeable about AI.“


It would be more than ambitious to educate the general public about AI. Was the general public educated about pharmaceutics in order to establish trust in pharmaceutical treatment? Clearly not. The mechanisms behind the generation of trust in the pharmaceutical area are a combination of regulation, ethics and consensus over methodology in the community of experts. Was it governments that built the respective awareness and knowledge among citizens? Clearly not. Governments have set the regulatory frame, the science community has developed the methodological foundations, and the ethical base was a result of public discourse. There is good reason to expect something similar when it comes to AI, which will most likely be more of a hidden magic to citizens than something they (can) understand.




Page 5, Available tools:


„As a template or analogy to trusted AI, we can also look at the tools we use to foster trust in humans, these primarily being laws, standards and certifications. Moreover, a major tool to ensure trusted AI can be AI itself: interactive normativity was raised in this regard. AI can be used to partly solve ethics, by turning ethics into a data science.“


This is a functional view on trust that applies to trust in institutions, but not in humans. Most important here is the notion that institutions lose their legitimation if they lose their trusted status. This relates well to the existence of and compliance with laws, standards and certifications, but that is only part of their trusted status. It must be accompanied by the respective perception, and this in turn is subject to transparency and public discourse. That is why AI cannot be the tool to generate ethics.




Page 6, Seal/Certification of trusted AI:


„There was mention of the idea of developing a seal of trusted AI, reflecting the requirement that systems comply with certain standards. Nevertheless, in many cases such certification would not be easy, especially given that 95% of current AI is software and therefore ‘invisible’. Such systems may require tests that check for transparency, explainability, etc. Another complication is the need to ensure that such certification does not hurt companies, and instead is helping them by making their goods and services more valuable because they are known to be trustworthy. Europe has high values and that could be a competitive advantage because European companies are seen as the most trustworthy companions for consumers.“


Of course, such certification or sealing will be challenging. Because of the complexity of AI systems, comprehensive co-operation of AI providers with whatever certification authority is set up will be essential. The role of such an authority, however, can definitely not be to avoid 'hurting' companies, nor to help them make their offering more valuable. This perspective is clearly inappropriate.

As a way out, certification can be organized voluntarily: AI providers can apply, then co-operate by making things transparent to the authority (not the public), and if they get certified use this as their competitive advantage. Such a European label shall reflect the high European values, indeed.


Kind regards, Norbert Jastroch


Submitted by Eva Thelisson on Tue, 30/10/2018 - 18:42

Dear all,

Thank you for sharing this report. I was wondering whether AI in healthcare should not also be addressed as a first priority. Medical diagnosis, large-scale DNA sequencing and genomic editing of humans, animals and plants are becoming reality. A responsible-AI label would be useful for this field in the EU as well. Informing patients and consumers is key to applying the transparency principle.

Please contact the Arrige Association for further information on genome editing:

In Stanford:

Best regards,

Eva Thelisson




Submitted by Piotr Mieczkowski on Wed, 31/10/2018 - 14:26


Thanks for sharing the material.

Any chance to get materials from other meetings (if they took place, of course)?



Piotr Mieczkowski

In reply to Piotr Mieczkowski

Submitted by Cristian Brasoveanu on Mon, 05/11/2018 - 13:18


All the AI HLEG documents available are listed in the Documents section. If you haven't been there yet, take a look: there you can find the very first documents prepared for or adopted at the first meeting of the AI HLEG, as well as other documents prepared for the experts.

You probably already know that for the workshop of 20/9 there are also video recordings available, from the introductory and concluding sessions.

Submitted by Kresimir Kalafatic on Thu, 01/11/2018 - 09:18

Dear all,
being a member of the AI Alliance community, I have read the paper "Outcomes of the AI HLEG Workshop of 20 September 2018" (SHA256: 434176d4e77caec406113f6c3bdeb0c60e61f6d8b78b672b76c9beb8de30e399).

The paper states: "In addition, in order to ensure responsible AI, all relevant stakeholders should be involved in the discussion. Importantly, fairness should be ensured not only towards the customer/user of the AI tool, but also towards society at large."

To guarantee fairness to the users and developers of AI, the EC should address the problem of unsecured documentation. Wrong, outdated or partial documentation can be exploited for adversarial attacks, which can lead to wrong decisions and costly mistakes.

The problem of unsecured documentation is also tackled by the recent DARPA SafeDocs project, started a few months ago. Some of the project's objectives are secure tools and documentation for communication between humans, but also secure tools for interfacing humans with AI. Link to the project:

In the paper "Restoring balance between stakeholders - problems, tools and proof of concept" (SHA256: 4c716677baf8d42fb767e8f30875855b29fb324165d2740e002cab9c57c6e5ac), published in the Open Library section of the AI Alliance site, one basic element needed for trusted AI was described.

The document addresses the problem of an unsecured document supply chain and proposes a simple implementation for improving the security of documents. The procedure described in the paper explains a possible technical extension of procedures for securing documentation used by some state agencies (for example, the US Department of Agriculture).

Links to USDA documents are:

The paper title "Restoring balance between stakeholders - problems, tools and proof of concept" was chosen because the idea of the paper was to point to the need to secure research papers, legal documents and other documentation against unauthorized changes, and to protect them from adversarial attacks. One important feature is that the procedure makes it possible to distinguish documentation that has been written, generated, signed and verified by a human from documentation generated by AI (restoring the balance between humans and AI). Adding a hash in the signed document protects the previous document from being changed and adds a passive security component to the document (provided the toolchain used is trusted). The described simple procedure should be further analyzed and extended.
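The chaining idea described above, where each signed document embeds the hash of its predecessor so that any later tampering breaks the chain, can be sketched in a few lines of Python. This is only an illustrative sketch, not the procedure from the cited paper: the record layout, the function names and the `sign_stub` placeholder are my assumptions, and `sign_stub` merely stands in for a real signature mechanism (e.g. a GPG or HSM call).

```python
import hashlib
import json

def sign_stub(payload: bytes) -> str:
    """Placeholder for a real digital signature (e.g. GPG/HSM); NOT real crypto."""
    return hashlib.sha256(b"signer-key:" + payload).hexdigest()

def add_document(chain: list, text: str, author: str) -> dict:
    """Append a document record that embeds the hash of the previous record."""
    prev_hash = chain[-1]["record_hash"] if chain else "0" * 64
    record = {"author": author, "text": text, "prev_hash": prev_hash}
    payload = json.dumps(record, sort_keys=True).encode()
    record["record_hash"] = hashlib.sha256(payload).hexdigest()
    record["signature"] = sign_stub(payload)
    chain.append(record)
    return record

def verify_chain(chain: list) -> bool:
    """Recompute every hash; any edit to an earlier document breaks a link."""
    prev_hash = "0" * 64
    for record in chain:
        body = {k: v for k, v in record.items()
                if k not in ("record_hash", "signature")}
        if body["prev_hash"] != prev_hash:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != record["record_hash"]:
            return False
        prev_hash = record["record_hash"]
    return True

chain = []
add_document(chain, "Guidance document v1", "human-author")
add_document(chain, "Guidance document v2", "human-author")
assert verify_chain(chain)

chain[0]["text"] = "Tampered guidance"  # adversarial change to an earlier document
assert not verify_chain(chain)
```

The passive protection comes from the `prev_hash` link: altering any earlier document changes its hash, so every later record that references it fails verification, as the final assertion demonstrates.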

Taking into account the previously mentioned elements, the procedures for securing documentation written and approved by the US Department of Agriculture, and the SafeDocs project started by DARPA, my opinion is that the EC and the Digital Agenda should define and implement regulation for securing documentation if the trusted-AI objective is to be achieved. The elements stated in this text are also addressed by the NIS and GDPR objectives, and the proposed solution is one of the passive components needed for those objectives (some other technical elements for improving infrastructure were mentioned in previous posts).

My question is: should this problem be addressed by the AI HLEG, or is it to be addressed by another department in the European Commission? Other countries clearly have similar regulation, so the EU should adopt similar regulation.

Food for thought: if genetically modified food requires markings so that users can decide about consumption, shouldn't AI decisions and AI-generated documents be marked so that users can decide about consumption too?

Best regards, Kresimir Kalafatic


Submitted by Christian RUSS on Sun, 11/11/2018 - 09:31

Thank you for the insightful report about the AI HLEG Workshop. Combined with the videos it really helps to follow the process and the different discussions and perspectives on the topic.

There was also a very small section, "What is AI?", which identified four types of AI that could be in need of specific recommendations:
- Internet AI
- Business AI
- Sense AI (vision AI, facial recognition)
- Autonomous system / trusted system

Is there also a deeper description of what is meant by these bullet points? Especially the first two are open to a lot of interpretation, in my opinion.


Thanks and best