Report of the AI HLEG workshop of 20/9/2018

report_of_the_ai_hleg_workshop_on_20_september_2018.pdf

This is the report of the AI HLEG workshop of 20/9/2018, which took place in Brussels. The work was organised in breakout sessions that covered five areas: trusted AI, transparency and accountability, industry and ecosystems uptake of AI, use cases for the guidelines, and AI infrastructures and enablers.

It contains important ideas that also cover many aspects that have been mentioned in the submissions, discussions and comments on the European AI Alliance platform. For example, there are ideas related to what AI is, what trusted AI is, algorithmic assessment tools, certification, inclusiveness, responsibility & accountability, the European AI story, importance of data, skills & education, use cases for the AI ethics guidelines, building blocks for a successful AI-enabled European economy, talent, funding, regulation, ecosystems, and communication.

There are also video recordings available of the introductory and concluding sessions.

Tags
jobs skills innovation investment research Legal society Business Ethics AI HLEG uptake

Comments

Posted by Erny Gillen on Tue, 23/10/2018 - 20:12

ETHICS FOSTERS MORAL REFLECTION & BEHAVIOUR

 

Dear all,

 

I'm looking at this report as an ethics professional from an ethical viewpoint, understanding ethics as a systematic reflection on (different) moral systems and approaches. As one of the two deliverables is the "ethics guidelines", I'm puzzled by the naive way "ethics" is used and considered in this setting.

 

The first session (Trusted AI) outsmarts ethics (as a discipline) by "turning ethics into a data science problem" (p.5). It doesn't list or recognise ethical expertise as a contribution in its own right, and it does not list ethics together with lawyers, policy makers and technicians (p.5) as part of the communication effort to be made. After the (US-provoked) Diesel-Gate, one can doubt that European companies will be seen "as the most trustworthy companions" (p.6). And again, "trust" in ethical behaviour and reflection is delicately out-smashed by stating: "The discussion include thus NOT ONLY ETHICS, but also the law and legal accountability" (p.6). Later, under session 2 (transparency), it is stated: "...ethics COULD also be an enabler, not an obstacle ... and it is a European strength" (p.11).

 

It would be great to work on that strength with more emphasis and expertise. "Ethical AI is NOT (an abstract and theoretical) CONCEPT" is a further way to disqualify ethics as a specific discipline and thus a contributor to "ethics guidelines".

 

If "an ethical analysis/ethics assessment should be a preliminary step of the methodology" ... (p.15), this correct insight should be considered for the whole approach taken by the HLEG.

 

On p.17 I understand that my puzzlement was shared by some participants, who voiced: "Developing the ethics guidelines should not occur in a defensive way only". (sic)

 

Ethics is the best way we know to shape our freedom as people and societies. AI is already able to "deliberate" (cf. the Watson debater project) USING CONCEPTS & STRATEGIES! How can "we" argue if we argue "nakedly"?

 

Thanks for this great synthesis of the workshop enabling "absentees" to participate!

 

Erny Gillen, Luxembourg

In reply to Erny Gillen

Posted by Cristian BRASOVEANU on Thu, 25/10/2018 - 12:00

Dear Erny,

Thank you very much for your valuable input. This is precisely why the European AI Alliance has been set up. We will pass on your input to the members of the AI HLEG.

There are also video recordings from that workshop, which are almost ready, and I think you will find those interesting and useful as well. One of them is the introductory plenary session, and the other is the concluding plenary session, where all the breakout groups summarised their discussions.

They will be made available here as well, of course.

In reply to Erny Gillen

Posted by Kai Salmela on Mon, 29/10/2018 - 08:17

Thank you for these opinions, it is good to hear many sides of this matter.

I'm just wondering - how do we control ethical questions between humans? Law and religion come to my mind in the first place. Maybe we should start with ethical law-making in order to input ethical patterns into AI behaviour? After all, it has to be a law-abiding system, even if the laws may vary from one region to another.

When AI has developed to such a level that it can have a mind, then this question may have more levels.

As we know very well, nothing that a man can imagine is impossible to do - even if it takes some time and effort.

wbr  Kai Salmela

Posted by Norbert JASTROCH on Fri, 26/10/2018 - 19:14

Dear all,

 

having started to reflect upon the Outcomes of the AI HLEG Workshop on September 20,

I would like to raise some initial comments on section 1, Trusted AI.

 

 

Page 3, What is trusted AI:

 

„Trust comes from understanding, and a lack of understanding translates into a lack of trust, that is why it is important to invest in educating the general public about what AI is and what it can do, to increase trust of future generations. Some stated that it should be the government’s responsibility to make citizens aware and knowledgeable about AI.“

 

It would be more than ambitious to educate the general public about AI. Was the general public educated about pharmaceutics in order to establish trust in pharmaceutical treatment? Clearly not. The mechanisms behind the generation of trust in the pharmaceutical area are a combination of regulation, ethics and consensus over methodology in the community of experts. Was it governments that built the respective awareness and knowledge among citizens? Clearly not. Governments have set the regulative frame, the science community has developed the methodological foundations, and the ethical base was a result of public discourse. There is good reason to expect something similar when it comes to AI, which will most likely be more of a hidden magic to citizens than something they (can) understand.

 

 

 

Page 5, Available tools:

 

„As a template or analogy to trusted AI, we can also look at the tools we use to foster trust in humans, these primarily being laws, standards and certifications. Moreover, a major tool to ensure trusted AI can be AI itself: interactive normativity was raised in this regard. AI can be used to partly solve ethics, by turning ethics into a data science.“

 

This is a functional view on trust that applies to trust in institutions, but not in humans. Most important here is the notion that institutions lose their legitimacy if they lose their trusted status. This relates well to the existence of and compliance with laws, standards and certifications, but that is only part of their trusted status. It must be accompanied by the respective perception, and this in turn is subject to transparency and public discourse. That is why AI cannot be the tool to generate ethics.

 

 

 

Page 6, Seal/Certification of trusted AI:

 

„There was mention of the idea of developing a seal of trusted AI, reflecting the requirement that systems comply with certain standards. Nevertheless, in many cases such certification would not be easy, especially given that 95% of current AI is software and therefore ‘invisible’. Such systems may require tests that check for transparency, explainability, etc. Another complication is the need to ensure that such certification does not hurt companies, and instead is helping them by making their goods and services more valuable because they are known to be trustworthy. Europe has high values and that could be a competitive advantage because European companies are seen as the most trustworthy companions for consumers.“

 

Of course such certification or sealing will be challenging. Because of the complexity of AI systems, comprehensive co-operation of AI providers with whatever certification authority will be essential. The role of such an authority, however, can definitely not be to avoid 'hurting' companies, nor to help them make their offering more valuable. This perspective is clearly inappropriate.

As a way out, certification can be organised voluntarily: AI providers can apply, then co-operate by making things transparent to the authority (not the public), and if they get certified, use this as their competitive advantage. Such a European label shall indeed reflect the high European values.

 

Kind regards, Norbert Jastroch

 

Posted by Piotr Mieczkowski on Wed, 31/10/2018 - 14:26

Hi

thanks for sharing the material.

Any chance to get materials from other meetings (if they happened, of course)?

 

best,

Piotr Mieczkowski

In reply to Piotr Mieczkowski

Posted by Cristian BRASOVEANU on Mon, 05/11/2018 - 13:18

Hi,

All the available AI HLEG documents are listed in the Documents section. If you haven't been there yet, take a look. There you can find the very first documents prepared for or adopted at the first meeting of the AI HLEG, as well as other documents prepared for the experts.

You probably already know that for the workshop of 20/9 there are also video recordings available, from the introductory and concluding sessions.

Posted by Christian RUSS on Sun, 11/11/2018 - 09:31

Thank you for the insightful report about the AI HLEG Workshop. Combined with the videos it really helps to follow the process and the different discussions and perspectives on the topic.

There was also a very small section, "What is AI?", which identified four types of AI that could be in need of specific recommendations:
- Internet AI
- Business AI
- Sense AI (vision AI, facial recognition)
- Autonomous system / trusted system

Is there also a deeper description of what is meant by these bullet points? Especially the first two are open to a lot of interpretation, in my opinion.

 

Thanks and best

Chris