HLEG - Input Request: Red Lines

Dear members of the European AI Alliance,

Thomas Metzinger and I are co-rapporteurs for the "Red Lines" section of the upcoming Ethics Guidelines. A Red Line is a strict ethical rule that states what should not happen on EU territory.

 

Our current list of candidates under discussion comprises:

  •  lethal autonomous weapon systems (LAWS)

  •  large-scale identification of individuals

  •  normative scoring of citizens (scoring system)

  • human intervention into AI systems (e.g. insurance, credit scoring)

  • explainability & traceability of decisions (i.e. when should systems that produce decisions that are not retrospectively reproducible be disallowed? And when should non-explainable (to laymen) systems be disallowed? A sketch after this list illustrates the reproducibility question.)
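
To make the reproducibility half of that question concrete, here is a minimal, purely illustrative sketch (all function and field names are hypothetical, not part of the guidelines): a decision system logs the exact inputs, model version, and random seed for every decision, so that any decision can be replayed and audited later.

```python
import json
import random
from datetime import datetime, timezone

def decide(applicant: dict, seed: int) -> bool:
    """Toy stand-in for a real decision model (e.g. credit scoring)."""
    rng = random.Random(seed)  # all randomness is seeded, hence replayable
    score = applicant["income"] / 1000 + rng.uniform(0, 1)
    return score > 50

def decide_and_log(applicant: dict, model_version: str, log_file: str) -> bool:
    """Make a decision and persist everything needed to reproduce it."""
    seed = random.randrange(2**32)
    outcome = decide(applicant, seed)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": applicant,
        "seed": seed,
        "outcome": outcome,
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")
    return outcome

def replay(record: dict) -> bool:
    """Re-run a logged decision; a mismatch means it is not reproducible."""
    return decide(record["inputs"], record["seed"]) == record["outcome"]
```

A system whose decisions cannot be logged and replayed in this way is, in the sense of the bullet above, not retrospectively reproducible.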

Further, the currently discussed ex-ante red lines are:

  • artificial suffering / consciousness

  • artificial moral agents (being "responsible for their actions")

  • artificial general intelligence (AGI) that recurrently self-improves

 

Feel free to comment on each of these topics, or propose a novel one, should you see the need.

 

Best,

Thomas & Urs

Tags: Ethics HLEG red lines

Comments

Submitted by Norbert JASTROCH on Fri, 16/11/2018 - 14:15

Dear Thomas and Urs,

What about external control of humans through direct brain intrusion? One example would be soldiers led by an AI system that intrudes directly into their brains and takes control of their actions.

More generally, there may be other cases raising the ethical issue that this kind of automated, external control dissolves the personal responsibility of a human actor for what he/she is doing.

Regards, Norbert

 

Submitted by Cornelia Kutterer on Fri, 16/11/2018 - 19:56

Hello Urs and Thomas, 

Can you elaborate on the "explainability & traceability of decisions" bullet in the first list? I would like to understand better how it fits in the list compared to the very clear bullet points above it.

Thanks,

Cornelia

In reply to Cornelia Kutterer

Submitted by Urs Bergmann on Fri, 16/11/2018 - 20:16

Hi Cornelia,

I added a brief elaboration in brackets after the bullet point. I hope this helps clarify the discussion point.

Best,

Urs

Submitted by Richard Benjamins on Thu, 22/11/2018 - 16:45

- explainability & traceability of decisions 

Not every domain is the same. For domains that impact people's lives (health, loans, admission to education, etc.), explainability and traceability are essential when systems take autonomous decisions. But for domains like video recommendation or advertising, they are less relevant.

If the system is not autonomous but "only" supports a person, this is less important, since the person retains full accountability.

- Why should consciousness be a red line? It is very closely related to AGI. I think it is better to understand it and prepare for it than to forbid it. It will not happen in the foreseeable future anyway.... It took evolution XM years (though that process was random).

- Large-scale identification of individuals: this is already planned for the Olympic Games in 2020. It is not the development of the technology that should be forbidden, but how it is applied (as with LAWS).

I would make the list crisper: clearly indicate whether each red line concerns the development of the technology or its application, and formulate all items in the same manner (to avoid ambiguity).

Kind regards

-- Richard

Submitted by Laurentiu VASILIU on Fri, 23/11/2018 - 13:37

Hi all,

Apart from the ones commented on below, I agree with the other red lines:

# explainability & traceability of decision systems:

- Decision systems that produce non-replicable outcomes (while the historic input data stays unchanged) are unstable by definition, so they should be avoided, AI or not! (A small sketch after these two points illustrates such a replicability check.)

- 'when should non-explainable (to laymen) systems be disallowed?' OK, here I disagree: the fact that a system cannot be understood by non-specialists should not block its usage or implementation in any way. Otherwise, no one would use mobile phones today, because they don't understand how Fourier series apply to electronics.
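
To make the replicability point above concrete, here is a minimal sketch of such a check, assuming a hypothetical decision function `predict` (any callable): with unchanged historic inputs, repeated runs must return identical outcomes, or the system is unstable in the sense described.

```python
from typing import Any, Callable, Sequence

def is_replicable(predict: Callable[[Any], Any],
                  historic_inputs: Sequence[Any],
                  runs: int = 3) -> bool:
    """Stability check: re-run the system several times on the same
    unchanged historic inputs and flag any divergent outcome."""
    for x in historic_inputs:
        first = predict(x)
        if any(predict(x) != first for _ in range(runs - 1)):
            return False  # non-replicable outcome found: unstable system
    return True

# Example: a deterministic rule passes the check.
# is_replicable(lambda x: x > 0, [-2, -1, 0, 1, 2]) -> True
```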

# artificial consciousness

Here I agree with Richard Benjamins: while we are far from it, artificial consciousness should not be a red line. Without it, we may never pass certain science/computing/robotics barriers due to the limitations of our biological human brains. We should rather prepare for it and have the right legislative set-up in place when the time comes.

@Richard - that time may not be so far off; read this 2014 paper from MIT by Max Tegmark: https://arxiv.org/abs/1401.1219

# artificial general intelligence (AGI) that recurrently self-improves

Self-improvement may be the only path to fast progress, and it will depend on what we use self-improvement for: why should we forbid, for example, an AI applied to oncology from self-improving? Or one applied to mathematical research?

Limits should be on the purpose, and not on the capabilities...

Submitted by Norbert JASTROCH on Fri, 23/11/2018 - 21:26

Dear all,

I wonder what the term 'artificial consciousness' stands for in this conversation.

My approach is that consciousness is the self-reflective thinking of a self about itself, where this thinking is not to be understood dualistically (the self as subject thinks about itself as object), but as a constitutive mental act (the self constitutes itself and thus becomes conscious). An AI system that is conscious in this sense would have to be capable of mental acts. I cannot see that talk about the possible realization of such an AI system is more than speculation, and I therefore see no reason to develop ethical principles for it, as those are tied to conscious beings. In this respect I agree that we do not need a red line.

On the other hand, if 'artificial consciousness' were ascribed to an AI system that, triggered by some kind of built-in intentionality, might develop the capability of 'thinking about itself' (whatever that means) in the dualistic sense, this would be a different use of the term consciousness. In such a context, ethical considerations would not be required either - nor an ethical red line.

Regards, Norbert

Submitted by Kai Salmela on Sun, 25/11/2018 - 22:51

Hello, my two cents on these questions:

  •  lethal autonomous weapon systems (LAWS): These systems already exist and they are functional. I'm not sure how they could be banned, since arms manufacturers are not the ones keen to obey regulations or laws. We must try anyhow, since AI systems will become even more efficient at this job.

  •  large-scale identification of individuals: These systems already exist too, and some of them could be seen as AI systems already. We really must enforce the GDPR on all of these systems within the EU and invite other countries to join this evaluation of algorithms as well. The easiest way to spread this idea would be a European Standard for AI.

  •  normative scoring of citizens (scoring system): Also an item on the list that we need to ban, even though we already have systems that can do this without AI. Why would anybody think that AI shouldn't be compatible with the laws that we have?

  • human intervention into AI systems (e.g. insurance, credit scoring): I've been in the IT industry for over 30 years now, and I know for a fact that all the bigger banks are using programs like this. Why would AI be any different? If only it were confined to banks, but (at least here in Finland) there are chains of value where a bank, an insurance company, grocery stores, cell-phone companies, gasoline stations, health-care stations, etc. are joined together, commonly owned, and governed by the same boards. These same chains are the most innovative AI users. Contrary to what they say, they know exactly what you eat, how you move, what your salary is, and what you own. They know if you divorce or if you get a dog. (Last summer there was a threat of food poisoning, and the grocery store could contact all of the customers involved and knew where they were - even though they had promised that no personal data would be collected...) So I'm pessimistic about this one - there should be strong legislation against collecting personal data and using AI to mine it.

  • explainability & traceability of decisions (i.e. when should systems that produce decisions that are not retrospectively reproducible be disallowed? and when should non-explainable (to laymen) systems be disallowed?): AI should be built (a standard for this would be the most efficient way to go) so that there are modules that make it possible to trace all of its decisions; a sketch of such a module follows below. We really do not want to end up in a situation where a singularity is on our hands. It is a totally different question who is allowed to trace decisions and when. Laws are made for this use, and all AI systems should be "law compatible", just as all humans are under the law too.
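
As an illustration of the traceability modules mentioned above, here is a minimal, hypothetical sketch (names like `traceable` and `AUDIT_TRAIL` are illustrative, not from any existing standard): every decision call is wrapped so that its inputs, outcome, and timestamp land in an audit trail that authorized parties could later inspect.

```python
import functools
import json
import time

AUDIT_TRAIL = []  # in a real system: tamper-evident, access-controlled storage

def traceable(model_id: str):
    """Decorator sketch: wraps any decision function so that every call
    is recorded with its arguments, result, and timestamp."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            result = fn(*args, **kwargs)
            AUDIT_TRAIL.append({
                "model_id": model_id,
                "function": fn.__name__,
                "args": json.dumps([args, kwargs], default=str),
                "result": json.dumps(result, default=str),
                "timestamp": time.time(),
            })
            return result
        return inner
    return wrap

@traceable(model_id="loan-scorer-v1")
def approve_loan(income: float, debt: float) -> bool:
    return income - debt > 20_000

# approve_loan(50_000.0, 10_000.0) -> True, and the call is now in AUDIT_TRAIL.
```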

Further, the currently discussed ex-ante red lines are:

  • artificial suffering / consciousness: An interesting question, but how do we recognise the consciousness of humans or animals today? And if we find a good test, would it be fair to AI too? Traditionally, humans haven't been very good at this game.

  • artificial moral agents (being "responsible for their actions"): Yet another interesting question. Whose morals would be the ones installed into the AI? And if there is morality, should there be guilt as well? Punishment? Should an AI feel hunger or pain? Can it be switched off? Luckily, all of this would need a complex system that could match the human brain, and that is still yet to come (maybe with the quantum-processor revolution).

  • artificial general intelligence (AGI) that recurrently self-improves: Why is this seen only as an AGI problem? Artificial superintelligence (ASI) built on neural learning and the coming quantum processors is more likely to acquire self-improving capabilities before AGI does. Eventually this will happen too, but before then we should have wide adoption of logging, feedback, and control systems for AI. There should be a standard and legislation ready before this happens.

 

Thank you for these questions - they are always a good opportunity to evaluate where everyone is going with these issues.

 

Best regards, Kai Salmela, AI Specialist, Robocoast R&D


Submitted by Philip BREY on Wed, 12/12/2018 - 22:59

Dear all, 

I am the coordinator of the SIENNA project, in which we develop ethical guidelines for AI & robotics. Regarding the proposed red lines, I agree with the first three. I am not clear what the fourth one means. Regarding the fifth, I believe that explainability is a requirement only for those algorithmic processes that pertain to decisions and actions that have significant impacts on human rights or can do significant harm. In those cases, accountability requires that the system's reasoning processes can be made transparent. Such transparency is not necessary for other applications.

I believe some of the candidate red lines can be summarized as follows: human capabilities should not be attributed to AI systems that do not really have them, but at most poor imitations of them. We tend to anthropomorphize AI, and this is dangerous. For the foreseeable future, AI systems cannot be morally responsible, have consciousness, or have real emotions and pains. So such concepts should not be attributed to them, nor the legal statuses that could follow from them (personhood, rights, citizenship).

Finally, I propose a new red line, echoed by some other commentaries, which concerns decisions that AI systems should not make. AI systems should not make any decisions that (1) are normally the subject of democratic decision-making procedures or stakeholder consultation (e.g., political decisions); (2) allow agents (humans or organizations) to delegate responsibility to AI systems and escape legal liability and accountability for their decisions; (3) normally require moral deliberation or conscience, since they pertain to morally controversial decisions with significant impact; or (4) go against prevailing legislation and regulations (unless defensible on the basis of an ulterior moral principle) or against widely accepted moral principles and norms.
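
As a purely illustrative sketch (all names hypothetical, not part of the proposal itself), the four conditions above could be operationalized as a gate that a proposed automated decision must pass before an AI system is allowed to make it:

```python
from dataclasses import dataclass

@dataclass
class DecisionContext:
    """Flags describing a proposed automated decision (illustrative)."""
    is_democratic_matter: bool      # (1) normally decided democratically
    delegates_liability: bool       # (2) would let humans escape liability
    is_morally_controversial: bool  # (3) needs moral deliberation/conscience
    violates_law_or_norms: bool     # (4) against prevailing law or norms

def ai_may_decide(ctx: DecisionContext) -> bool:
    """Red-line gate: the AI may only decide if none of the four
    proposed conditions applies; otherwise a human must decide."""
    return not (ctx.is_democratic_matter
                or ctx.delegates_liability
                or ctx.is_morally_controversial
                or ctx.violates_law_or_norms)

# Example: a routine decision with no flags set may be automated.
# ai_may_decide(DecisionContext(False, False, False, False)) -> True
```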

 
