HLEG - Input Request: Red Lines

Dear members of the European AI Alliance,

Thomas Metzinger and I are co-rapporteurs for the "Red Lines" section of the upcoming Ethics Guidelines. A Red Line is a strict ethical rule that states what should not happen on EU territory.

 

Our current list of candidates under discussion comprises

  •  lethal autonomous weapon systems (LAWS)

  •  large scale identification of individuals

  •  normative scoring of citizens (scoring system)

  • human intervention into AI systems (e.g. insurance, credit scoring)

  • explainability & traceability of decisions (i.e. when should systems whose decisions cannot be retrospectively reproduced be disallowed? And when should non-explainable (to laymen) systems be disallowed?)

Further, the currently discussed ex-ante red lines are:

  • artificial suffering / consciousness

  • artificial moral agents (being "responsible for their actions")

  • artificial general intelligence (AGI) that recurrently self-improves

 

Feel free to comment on each of these topics, or propose a novel one, should you see the need.

 

Best,

Thomas & Urs

Tags
Ethics HLEG red lines

Comments

Submitted by Norbert JASTROCH on Fri, 16/11/2018 - 14:15

Dear Thomas and Urs,

what about external control of humans through direct brain intrusion? One example would be warriors led by an AI system that intrudes into their brains directly and takes control of their actions.

More generally, there may be other cases raising the ethical issue that this kind of automated, external control will dissolve the personal responsibility of a human actor for what he/she is doing.

Regards, Norbert

 

Submitted by Cornelia Kutterer on Fri, 16/11/2018 - 19:56

Hello Urs and Thomas, 

can you elaborate on the "explainability & traceability of decisions" bullet in the first list? I would like to understand better how it fits in the list compared to the very clear bullet points above it.

Thanks,

Cornelia

In reply to by Cornelia Kutterer

Submitted by Urs Bergmann on Fri, 16/11/2018 - 20:16

Hi Cornelia,

I added a brief elaboration in brackets after the bullet point. I hope this helps to clarify the discussion point.

Best,

Urs

Submitted by Richard Benjamins on Thu, 22/11/2018 - 16:45

- explainability & traceability of decisions 

Not every domain is the same. For domains that impact people's lives, explainability and traceability are essential when systems take autonomous decisions (health, loans, admission to education, etc.). But for domains like video recommendation or advertising, it is less relevant.

If the system is not autonomous but "only" supports a person, this is less important, since the person has full accountability.

- Why should consciousness be a red line? It is very closely related to AGI. I think it is better to understand it and prepare for it than to forbid it. It will not happen in the foreseeable future anyway... It took evolution X million years (though that process was random).

- large-scale identification of individuals: this is already planned for the Olympic Games in 2020. It is not the technology development that should be forbidden, but how it is applied (like LAWS).

I would make the list crisper, clearly indicate whether the red line concerns the development of the technology or its application, and formulate all items in the same manner (to avoid).

Kind regards

-- Richard

Submitted by Laurentiu VASILIU on Fri, 23/11/2018 - 13:37

Hi all,

Except for the ones commented on below, I agree with the other red lines:

# explainability & traceability of decision systems:

- decision systems that provide non-replicable outcomes (while the historic input data stays unchanged) are unstable by definition, so they should be avoided, AI or not! (A minimal replicability check of this kind is sketched just below.)

- 'when should non-explainable (to laymen) systems be disallowed?' OK, here I disagree: the fact that a system cannot be understood by non-specialists should not block its usage or implementation in any way. Otherwise no one would use mobile phones today, because they don't understand how Fourier series apply to electronics.
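
To make the replicability point above concrete, here is a minimal sketch (purely illustrative; the function and variable names are made up, not a proposed standard): re-run a decision function on unchanged historic inputs and flag it if the outcomes differ.

```python
# Illustrative sketch only: flag a decision system whose outcomes are not
# replicable even though the historic input data stays unchanged.

def is_replicable(decision_fn, historic_inputs, runs=3):
    """Return True if decision_fn yields identical outputs on every re-run."""
    baseline = [decision_fn(x) for x in historic_inputs]
    for _ in range(runs - 1):
        if [decision_fn(x) for x in historic_inputs] != baseline:
            return False
    return True

# Hypothetical usage with a credit-scoring function:
# if not is_replicable(score_applicant, archived_applications):
#     raise RuntimeError("Decisions cannot be retrospectively reproduced")
```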

# artificial consciousness

Here I agree with Richard Benjamins: while we are far from it, artificial consciousness should not be a red line. Without it we may never pass some science/computing/robotics barriers, due to the limitations of our biological human brains. We should rather prepare for it and have the right legislative set-up in place when the time comes.

@Richard - that time may not be so far, read this paper from MIT, 2014 by Max Tegmark: https://arxiv.org/abs/1401.1219

# artificial general intelligence (AGI) that recurrently self-improves

Well, self-improvement may be the only way to make fast progress, and it will depend on what we use self-improvement functions for: why should we forbid, for example, an AI applied to oncology from self-improving? Or one applied to mathematical research?

Limits should be on the purpose, and not on the capabilities...

Submitted by Norbert JASTROCH on Fri, 23/11/2018 - 21:26

Dear all,

I wonder what the term 'artificial consciousness' stands for in this conversation.

My approach is that consciousness is the self-reflective thinking of a self about itself, where this thinking is not to be understood as dualistic (the self as subject thinks about itself as object), but as a constitutive mental act (the self constitutes itself and thus becomes conscious). An AI system that is conscious in this sense would have to be capable of mental acts. I cannot see that talking about the possible realization of such an AI system is more than speculation, and therefore see no reason to develop ethical principles for it, as those are tied to conscious beings. In so far, I agree that we do not need a red line.

On the other hand, if 'artificial consciousness' were ascribed to an AI system that, triggered by some kind of built-in intentionality, might develop the capability of 'thinking about itself' (whatever that means) in the dualistic sense, this would be a different use of the term consciousness. In such a context, ethical considerations would not be required either - nor would an ethical red line.

Regards, Norbert

Submitted by Kai Salmela on Sun, 25/11/2018 - 22:51

Hello, my 2 cents on these questions:

  •  lethal autonomous weapon systems (LAWS): These systems already exist and they are functional. I'm not sure how these could be banned, since warfare companies are not the ones who are keen to obey regulations or laws. We must try anyhow, since AI systems will be even more efficient at this job.

  •  large scale identification of individuals: These systems already exist too, and some of them could already be seen as AI systems. We really must enforce the GDPR on all of these systems within the EU and invite other countries to join this evaluation of algorithms as well. The easiest way to spread this idea would be a European Standard for AI.

  •  normative scoring of citizens (scoring system): Also one item on the list that we need to ban, even if we already have systems that can do this without AI. Why would anybody think that AI shouldn't be compatible with the laws that we have?

  • human intervention into AI systems (e.g. insurance, credit scoring): I've been in the IT industry for over 30 years now, and I know for a fact that all the bigger banks are using programs like this. Why would AI be any different? If only it were limited to banks, but (at least here in Finland) there are chains of value where a bank, an insurance company, grocery stores, mobile phone companies, petrol stations, health care providers etc. are joined and mostly owned and governed by the same boards. These same chains are the most innovative AI users. Contrary to what they say, they know exactly what you eat, how you move, what your salary is and what you own. They know if you divorce or if you get a dog. (Last summer there was a threat of food poisoning, and the grocery store could contact all of the customers that were involved and knew where they were, even though they had promised that no personal data would be collected...) So I'm pessimistic about this: there should be strong legislation against collecting personal data and against using AI to mine it.

  • explainability & traceability of decisions (i.e. when should systems whose decisions cannot be retrospectively reproduced be disallowed? And when should non-explainable (to laymen) systems be disallowed?): AI should be built so that there are modules that make it possible to trace all decisions (a standard for this would be the most efficient way to go). We really do not want to end up in a situation where we have a singularity on our hands. It is a completely different question who is allowed to trace decisions, and when. Laws are made for this use, and all AI systems should be "law compatible", just as all humans are under the law too.

Further, the currently discussed ex-ante red lines are:

  • artificial suffering / consciousness: An interesting question, but how do we recognise consciousness in humans or animals today? And if we find a good test, would it be fair to apply it to AI too? Traditionally, humans haven't been very good at this game.

  • artificial moral agents (being "responsible for their actions"): Yet another interesting question. Whose morals would be installed into the AI? And if there is morality, should there also be guilt? Punishment? Should AI feel hunger or pain? Can it be switched off? Luckily, all of this would need a complex system that could match the human brain, and that is still yet to come (maybe with the quantum processor revolution).

  • artificial general intelligence (AGI) that recurrently self-improves: Why is this seen only as an AGI problem? Artificial superintelligence (ASI) built on neural learning and upcoming quantum processors is more likely to acquire self-improving capabilities before AGI does. Eventually this will happen too, but before that we should have wide adoption of logging, feedback and control systems for AI. There should be a standard and legislation ready before this happens.

 

Thank you for these questions - they are always a good opportunity to evaluate where everyone is going with these issues.

 

wbr Kai Salmela, AI Specialist, Robocoast R&D

Submitted by Toby Walsh on Fri, 07/12/2018 - 23:08

>  I'm not sure how these (lethal autonomous weapons) could be banned since warfare companies are not the ones who are keen to obey regulations or laws

Sorry, what rubbish ... arms companies follow all the bans on chemical weapons, biological weapons ... there are plenty of lawful ways for them to make money selling weapons that aren't banned. 

LAWS are a clear red line -- thousands of AI researchers, Nobel Laureates, religious leaders and members of the public have expressed this view.


Submitted by Richard Krajčoviech on Sun, 09/12/2018 - 16:42

As has been mentioned in other comments, the basic red line is compliance of all autonomous decisions with valid legislation, ethics and morality. Based on this, and on the state of the art of AI and autonomous systems, the most basic red line for me is: No legal subjectivity of any product (AI or other).

This means that responsibility for damages (including criminal ones) must be clearly assigned among designers/producers/owners/users etc. by law or by contract, as it is with any other product now. Before legal subjectivity can even be considered, autonomous systems must be able to reliably perform very basic human activities: reporting on what happened, explaining the reasoning behind decisions, cooperation with others, empathy, protection of life, compliance with laws, assessing the consequences of their own actions, ethics and morality, etc.

 

Deeply related to the above is the next red line:

No autonomous decisions or actions that

  • violate valid legislation or regulations, ethics or morality principles,
  • circumvent them or
  • motivate or organize other systems, humans or animals to do so or
  • result in or activate agents (virtual or real) that have the ability to do so.

Even if the system does not have legal subjectivity, it must be designed in a way that does not allow autonomous decisions or actions that are unlawful. In other words, it crosses the red line to design a system that can perform an unlawful, unethical or immoral action, or motivate such an action, without the intention of the user. This, of course, does not require preventing all unlawful user activities, which is impossible.

 

To simplify the analysis of who is responsible for any unlawful, unethical or immoral decision or activity, the next red line would be: No autonomous actions or decisions without proper and permanent logging of the actions, the related decisions and the reasoning behind those decisions. The logging must be at a level that allows, at least, identification of the party responsible for each autonomous action or decision.
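
Purely to illustrate the level of detail this red line implies (the field names and file format below are my own assumptions, not a proposed standard), a minimal sketch of such a decision log could look like this:

```python
# Illustrative sketch only: one way an autonomous decision could be logged so
# that the responsible party and the reasoning remain traceable afterwards.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    system_id: str          # identifies the deployed system and its version
    responsible_party: str  # operator/owner accountable for this decision
    inputs: dict            # the observations the decision was based on
    decision: str           # the action taken or recommended
    reasoning: str          # explanation or model rationale, if available
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def log_decision(record: DecisionRecord, path: str = "decision_audit.log") -> None:
    """Append the record to a permanent, append-only audit log (one JSON line each)."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")
```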

 

Further red lines.

When thinking of further red lines, I had in mind mostly autonomous systems and AI-driven communication, like chatbots or AI-driven web sites, which can be dangerous if used at large scale. The red lines are limited to autonomous decisions, which include any decision that has not been designed by a human and is rather the result of a more general algorithm, like a neural network, decision trees, optimization or so. The list of red lines is a more or less random selection of limits that came to my mind as examples, most of which have already been discussed here. They need further work to become a comprehensive and consistent set of rules.

For each of the red lines, there should be clear responsibility of a human for every breach. However, there must be freedom to perform research in these areas, provided it is in line with ethical standards. When there is sufficient progress in any of the areas, the respective red line can be changed appropriately. We will be in a better position to assess the consequences then.

No autonomous decisions or actions that would lead to moral dilemmas.

We are not able to describe moral decisions by a finite set of rules, by an algorithm or by other implementable means, nor to test the quality of moral decisions made by AI. Morality depends on culture and differs among countries and societies. It is a highly controversial topic. Autonomous systems must not count victims; they must be about preventing situations with victims. The question must not be "who will be the victim" but "how to prevent a victim". AI must prevent moral dilemmas by not doing what can lead to them. E.g. an autonomous car must slow down to a speed at which it is able to prevent victims even when there is somebody hidden behind a barrier or acting unpredictably. Legislators can help to reduce the number of possible moral dilemmas by e.g. creating dedicated areas, like dedicated lanes, where pedestrians are at fault for being there, as it is with e.g. railways or highways now.
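
To make the "slow down so you can always stop" rule concrete, here is a rough numerical sketch (the reaction time and braking deceleration are assumed example values, not regulatory figures): the speed is bounded by the requirement that the reaction distance plus the braking distance fit within the currently visible, unobstructed distance.

```python
# Rough illustrative sketch: the highest speed at which a vehicle can still stop
# within the currently visible (unobstructed) distance.
# Model: v * t_react + v**2 / (2 * a_brake) <= d_visible
import math

def max_safe_speed(d_visible_m: float, t_react_s: float = 1.0,
                   a_brake_ms2: float = 6.0) -> float:
    """Solve v*t + v^2/(2a) = d for v (positive root of the quadratic), in m/s."""
    return a_brake_ms2 * (-t_react_s +
                          math.sqrt(t_react_s**2 + 2 * d_visible_m / a_brake_ms2))

# Example: with only 15 m of clear sight next to a parked van,
# max_safe_speed(15) is about 8.7 m/s, i.e. roughly 31 km/h.
```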

No inescapable AI and no prevention of switching AI off, either by design or by autonomous decisions or actions.

Every human must have the option not to be subject to (significant) AI decisions, e.g. by switching the system off, by switching the AI functionality off, by not using the system, by receiving a human review, or so. The AI must not influence the human not to switch it off, e.g. through emotions or by overemphasizing the consequences.

 

No representation of a system as a human.

AI and autonomous systems must be clearly distinguishable from humans, in a way that cannot be abused to designate a human as an AI or autonomous system.

 

No mistreating of any human as a non-human where the tool distinguishes humans from non-humans, animals or things in decisions that can significantly affect a human.

In other words, any system claiming that it can distinguish humans from things or animals, and using this in significant decisions (e.g. whether to stop a car), must not generate false negatives, and any false positive must not affect the rights of an actual human (e.g. endangering the passengers of a car because of a falsely identified pedestrian).

 

No usage of human emotions to describe AI behavior to the general public or to users.

AI is not striving. AI is not happy. AI just fulfills the goals designed by its designers. This might be dangerous especially when AI represents itself as a human.

 

Usage of scoring of humans without understanding its calculation, the methodology behind it and the related risks of unfair treatment,

e.g. when the score is taken from another system, especially in decisions significant for human life (legal rights, life and health, etc.).

Finally, the following (and many other) red lines can be derived from the existing law:

No autonomous decisions or actions that would lead to physical contact with a human without his/her informed permission.  

Or, more generally, respect for a comfortable distance from humans. Besides things like being hit by a car, a medical treatment, drones flying too close to people or above their heads, etc., this includes guns and warfare, as well as capturing a human in a physically limited space, like a room. The subject of the physical contact must be able to revoke the permission at any time.

After proper testing, certification and a clear assignment of responsibility, autonomous systems (like guns) can make autonomous decisions up to a reliably limited damage. Experts in the areas where such systems will be used must define further conditions to protect the general public from danger in such cases.

 

No autonomous decisions or actions that would lead to psychological manipulation, prohibited communication techniques or disrespect of the human personality.

Cheating, misleading, blackmailing, exploiting mental incapacity, and other wrongdoing prohibited to humans, including the manipulation of humans or other systems towards criminal activity, such as defamation of a person, race, nation or religion.

 

No autonomous decisions or actions that would lead to touching or using somebody else's property without proper permission, that would limit its usage by an authorized person or system (owner, lessor or so), or that would lead to its unlawful transfer to, or usage by, another person.

This includes usage of somebody else's computing capacity or computer storage capacity, or agents that do so, as well as improper use or abuse of public facilities.

In reply to by Richard Krajčoviech

Submitted by Richard Krajčoviech on Tue, 11/12/2018 - 14:22

The red line about the usage of scoring should be: No usage of scoring of humans without understanding its calculation, the methodology behind it and the related risks of unfair treatment.

Submitted by cristina pozzi on Mon, 10/12/2018 - 18:30

Hello everybody. The discussion is already very complete. Thanks everybody for sharing your points of view. I would like to add something on top of what has already been discussed.

- cultural differences: let's imagine a self-driving car. When travelling from one EU country to another, should it adapt its settings to better fit different regulations and cultural differences? Ethical choices and perceptions can vary based on geography, and AI tools should take those differences into consideration.

 

- gender-free AI assistants: probably this is too specific for this framework and it is not a red line, but I feel it is always important to mention this aspect too. I always try to fight for gender-free AI assistants, as most of them are given female names and contribute to increasing biases about women and the jobs we are supposed to do in our society :-)

 

- Fooling machines: it has already been said, but I feel it is particularly important that humans interacting with machines must be correctly informed, and that we should minimize the psychological impact of our digital companions (I recommend reading and analyzing, for example, Amazon Echo reviews to see how easily they can be seen as people and how they can influence one's capacity to interact with other humans).


Submitted by Philip BREY on Wed, 12/12/2018 - 22:59

Dear all, 

I am the coordinator of the SIENNA project, in which we develop ethical guidelines for AI & robotics. Regarding the proposed red lines, I agree with the first three. I am not clear what the fourth one means. Regarding the fifth, I believe that explainability is a requirement only for those algorithmic processes that pertain to decisions and actions that have significant impacts on human rights or can do significant harm. Then accountability requires that the system's reasoning processes can be made transparent. Such transparency is not necessary for other applications.

I believe some of the candidate red lines can be summarized as follows: human capabilities should not be attributed to AI systems that do not really have them, except as poor imitations of them. We tend to anthropomorphize AI, and this is dangerous. For the foreseeable future, AI systems cannot be morally responsible, have consciousness, or have real emotions and pains. So they should not be attributed such capacities, or the legal statuses that could follow from them (personhood, rights, citizenship).

Finally, I propose a new red line, echoed in some other comments, which concerns decisions that AI systems should not make. AI systems should not make any decisions that (1) are normally the subject of democratic decision-making procedures or stakeholder consultation (e.g., political decisions); (2) allow agents (humans or organizations) to delegate responsibility to AI systems and escape legal liability and accountability for their decisions; (3) normally require moral deliberation or conscience, since they pertain to morally controversial decisions with significant impact; or (4) go against prevailing legislation and regulations (unless defensible on the basis of an ulterior moral principle) or against widely accepted moral principles and norms.

 

Submitted by Richard Krajčoviech on Thu, 27/12/2018 - 12:16

The idea behind the red line about the creation or activation of agents is to prevent the circumvention of rules against illegal activities by indirectly creating systems, tools etc. that can carry out the illegal activity. It should be behind the red lines to create a system that is able (i.e. regardless of intention) to develop or optimize another system for doing illegal activities, or that is able to use another existing system for illegal activities.

Submitted by Richard Krajčoviech on Thu, 27/12/2018 - 14:32

For explainability, and considering the availability of computer storage, I prefer to ask for adequate logs whenever there is even a small possibility of an unlawful (including harmful), unethical or immoral decision or activity. What is adequate will depend on the level of autonomy, the complexity of the system, the power of the actuators, etc. From the safety point of view, it is better to manage the incurred expenses through the level of logging (possibly defining exceptions where logs are not required at all) than to have unclarity about whether logs are required and then strengthen the requirement gradually, which is unpredictable for business.

Another reason for such an approach is that, in my opinion, an essential part of any intelligence is the ability to remember and recall "observations of sensors", its own decisions and its own actions. This is crucial for any investigation. The more systems perform actions without traces, the harder it will be to ensure justice. We need to discuss how to ensure the privacy of individuals, but for businesses, especially those involved in a large number of transactions, the logs are essential.

Submitted by Anonymous (not verified) on Thu, 31/01/2019 - 07:53

User account was deleted

In reply to by Anonymous (not verified)

Submitted by Richard Krajčoviech on Fri, 01/02/2019 - 08:24

We need the explainability tools for ensuring control over AI actions. A fighter jet is a complex machine, but it is still deterministic (or at least desired to be). This means that if there is a malfunction, investigators (experts) are capable of analyzing the reasons, and they are capable of explaining them to a layman by extracting the relevant subsystems, which all work on an understandable basis, and by explaining the series of consequences. This is important e.g. in court trials and in preserving public safety, because we then know what to do to prevent such malfunctions. Airplanes of a specific model might be kept on the ground while an investigation is ongoing. Not to mention that the fighter is built using proven and tested approaches, where we know the relationships between the chosen engineering parameters and the desired behaviour.

With AI we face the risk that not even experts are able (or motivated enough) to analyze the system, to explain a specific decision and to determine how to prevent such a malfunction in the future, because it is orders of magnitude easier to build such a system than to analyze it. This poses a risk to the public.