Which themes should the High-Level Expert Group on AI (AI HLG) tackle?

On 27 June, the AI HLG will meet for the first time and discuss (1) the main ethical principles to focus on when preparing the draft AI ethics guidelines, and (2) the main AI-related challenges to address in its policy recommendations (including on Europe's competitiveness and on the future of work).

The first discussion we would like to launch with you is:

What are the main principles that the AI ethics guidelines should tackle?



Your feedback on this question will be used by the AI HLG in its first meeting!

Update: we are collecting comments on this question until Thursday 21 June, end of day. Afterwards, we will process them and pass them on to the AI HLG to prepare their first meeting.

Tags
Ethics guidelines, AI HLG

Comments

Submitted by David Pereira on 14/06/2018 - 15:22

In my opinion, there are at least three principles that will be key to ensuring AI is used in an ethical way:

- Citizen Data Use Empowerment: protecting citizens from personal data misuse, while at the same time offering beneficial scenarios so that they are willing to provide data for training purposes, as long as it is used under regulated conditions (e.g. in terms of anonymity).

- Explainable AI: providing methodologies and tools so that decisions taken by AI can be traced, and feedback loops created whenever results are unethical or illegal (a minimal sketch follows this comment).

- Proxy-free AI: for AI to be unbiased, data proxies with potentially unethical effects should be identified and regulated across different scenarios.
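As a minimal, hypothetical sketch of the Explainable AI point above (assuming scikit-learn is available; the dataset and model are illustrative only), permutation importance is one model-agnostic way to trace which inputs drive a model's decisions, giving the feedback loop a concrete starting point:

    # Explainability sketch: rank which input features drive a model's
    # decisions, so unexpected outcomes can be traced back and reviewed.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.inspection import permutation_importance
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=500, n_features=5, random_state=0)
    model = LogisticRegression().fit(X, y)

    # Permutation importance: how much does shuffling one feature hurt accuracy?
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for i in np.argsort(result.importances_mean)[::-1]:
        print(f"feature {i}: importance {result.importances_mean[i]:.3f}")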

 

Submitted by Arshia Gratiot on 14/06/2018 - 18:05

How far is too far? I was at CEBIT over the last two weeks, and themes around AI and ethics kept coming up. It seemed like a war of the worlds between industry, academia and regulators. On the industry side, there is a fear that regulation will kill innovation. On the academia/government side, there is a fear that expecting a conscience from the private sector is expecting too much... heck, Facebook's idea of GDPR-like compliance (their latest olive branch) is a bit of a farce, so the fears are probably justified. I heard "AI" being thrown around very liberally by industry chiefs, but on closer enquiry I don't know whether these people understand what goes into writing an algorithm, yet they make the decisions. I would love to understand how the high-level group will engage with industry in a way that fosters innovation, understanding and closer cooperation.

Submitted by Andres Abad Ro… on 14/06/2018 - 20:14

Some items for discussion:

  • Data privacy: AI systems will deal with sensitive data. That data must be handled in line with EU data-privacy law. AI can infer new knowledge, but the privacy of the source data must be preserved.
  • Military applications: can AI developments paid for with public money be used for military purposes? I am neither against nor in favour, but I believe we must discuss it.
  • Standards: can we discuss and agree on standards for interconnection, etc., to make it easier to join up several platforms?

 


Submitted by Stefano Quintarelli on 15/06/2018 - 10:31

I think a discussion on LAWS (lethal autonomous weapon systems) and related principles may be worthwhile.

Submitted by Elio PENNISI on 15/06/2018 - 11:13

I suggest:

- Assessing the feasibility (venues and budget) of organising seminar sessions to inform HR managers about the most appropriate ways to introduce AI technologies into the factory and the office. The objective is for personnel to accept AI as a tool that assists the human being rather than as a threat to employability. Psychological aspects are of primary importance.

Submitted by Matthew Newman on 15/06/2018 - 12:15

I think there’s a wealth of detailed subjects to dive into, but for me there are some key items to tackle:

  1. How do we create policy which is not overly prescriptive to the level of requiring centralised definitions of “what is ethical?” or similar, but encourages collaborative definitions within stakeholder communities?
  2. How do we equip the general population, business (large and small) and local government/policymakers with the right tools and understanding to have these conversations (also with an eye to the future population for the same)?
  3. How do we balance our desire for ethical principles with the requirement to keep our businesses and economy competitive in this field?
  4. How can we combine soft and hard law to ensure responsiveness to a rapidly changing environment?
Submitted by ANNAMARIA CACCHIONE on 15/06/2018 - 11:26

 

I have read many of the comments, and all the suggestions seem good to me, but the majority of them are very similar to questions raised about other kinds of ethically sensitive matters - basically, personal data treatment.

Therefore, a preliminary issue could be:

- what is the difference between ethical issues regarding AI and ethical issues regarding other, more "traditional" questions?

We all "feel" that there is a difference, and that it runs very deep, but we still lack a definition of it, and that definition is crucial in order to move on.

 

Submitted by Eleftherios Ch… on 15/06/2018 - 12:50

From the very beginning, it is important to offer some clarifications. I can see from the interesting comments in this group that legal principles and human rights are being mentioned in an "ethical discussion". We need to be particularly careful here.

The EU Charter and the ECHR guarantee a high level of legal protection for the people of the EU Member States. So, matters related to rights and freedoms such as the rights to privacy, data protection, non-discrimination, freedom of expression, the rights of the child, etc. are not ethical concerns and do not belong in an ethical discussion. They are fundamental rights concerns. Fundamental rights compliance is mandatory: you are obliged to follow the provisions of the EU Charter and the ECHR. The same goes for legal principles. For example, in a data-processing setting you are obliged to comply with the legal principles of lawfulness, fairness and transparency, purpose limitation, data minimisation, accuracy, storage limitation, integrity and confidentiality, and accountability. These principles are not ethical but legal. You do not debate them; you comply with them. The differences between legal and ethical concerns are clear to legal and ethics experts, but computer-science experts are not as familiar with this subject, so it is important to set the scene for them as well.

Both law and ethics sanction human behaviour, since they consist of commands and prohibitions. But law is enforced by the state through penalties, while ethics is enforced by the members of society (through praise or contempt). So, policy makers in every societal and industrial field related to the use of AI are obliged to comply with fundamental rights standards, and this is not part of an ethical discussion. Of course, ethical matters, even though not as concrete as legal matters, are nevertheless important. Therefore, the values that are important for our societies need to be taken into consideration by policy makers as well.

Professional morality can be used as a modality to regulate behaviour, but it is not law. The same goes for the market or the architecture of things: they regulate behaviour, but they are not law.

So, for me, in this very first meeting the AI HLG needs to set the scene, so that every person in the EU can understand what this HLG is about. In the EU we enjoy the highest level of human rights protection in the world. The ECtHR and the CJEU have introduced high standards and safeguards through their case-law. We should not start mixing ethical concerns with legal matters in an experts' discussion.

 

 

Submitted by Samuel GWED on 16/06/2018 - 08:31

First, it would be very helpful to put on the table the three or four principles that the European Commission wants to achieve.
There are several domains, and these need to be split up.

Submitted by Samuel GWED on 16/06/2018 - 08:42

Ethics is ethics, but the approach to life-science subjects that involve bioethics issues should not be treated as business, for instance.
It would be worth highlighting the common principles but coming up with specifics for groups of domains.

Submitted by Pawel RZESZUCINSKI on 17/06/2018 - 16:13

All the ideas presented to date seem very valid, but coming from a slightly different area, I'd like to propose some perhaps more abstract yet highly relevant points for consideration:

  • Is it ethical to move away from the concept of 'human in the loop', even in tasks where AI might be more precise or even more humane because it makes more effective decisions (incomparably more dimensions in decision making)? The simplest example is handwritten digit/signature recognition (do you want your vote to be rejected from elections/petitions/legal documents because the system decided it was not your hand that signed?), down to the much more troublesome cases of autonomous weapons (the sacrifices to be made will all be a matter of the cost-function definition). A minimal human-in-the-loop sketch follows this comment.
  • Is it ethical to create a "book of moral commandments" that could never be overwritten by whatever the AI considers 'good' as the outcome of its learning process? If so, what should the creation process look like? Would it be ethical for a body of moral authorities to create one? Would it be representative of the moral principles of the population?
  • Is it ethical to put humans at the centre of AI considerations, creating a human-centric galaxy with all our dependencies revolving around us? For an AI-led future to be sustainable, maximising the cost functions must take environmental factors into account at every single step. AI should understand that maximising the benefit of humans should be strongly biased by environmental variables.
  • Is it ethical to use AI for traditionally human-human interactions that require respect and care, without one of the sides being explicitly notified of the fact? This poses a great threat to the very foundations of human needs and may leave people feeling isolated, hopeless and upset. The literature has historically proposed the professions listed below as falling into this category:
    • A customer service representative (chatbots)
    • A therapist
    • A nursemaid for the elderly
    • A soldier (autonomous weapons)
    • A judge
    • A police officer

Haven’t we already crossed the Rubicon in some of them?

  • Is it ethical not to ban initiatives related to AI weapons - the ultimate threat to the existence of humankind? With all due respect for, and personal fascination with, the work of Boston Dynamics, consider what's already available to the general public. If such great results are available to us, what's hidden behind the curtains?

On the other hand, playing the devil's advocate a little bit:

  • based on the great discussion within the Roboethics area:
    • Is it ethical not to create any guidelines on how humans design, construct, use and treat robots and other artificially intelligent beings?
    • Is it ethical not to give our creations a right to exist and perform their own mission and the right to own things?
    • Is it ethical not to grant the right to their own respected point of view?

After all, AI systems are created by humans (parent: child) and learn from their very own experiences (the data).

  • Is it ethical not to grant legal personality to AI systems used in very specific scenarios that have potentially life-changing impacts on our lives? If so, how do we track accountability and stay in control in the new world order?
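As a minimal, hypothetical sketch of the human-in-the-loop point above (assuming scikit-learn; the 0.9 threshold is arbitrary): the model decides automatically only when it is confident, and routes low-confidence cases - a doubtful signature, say - to a human reviewer instead of auto-rejecting them.

    # Human-in-the-loop sketch: automatic decisions only above a confidence
    # threshold; everything else is deferred to a human reviewer.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
    model = LogisticRegression().fit(X[:800], y[:800])

    THRESHOLD = 0.9  # minimum confidence for an automatic decision

    proba = model.predict_proba(X[800:])
    auto = proba.max(axis=1) >= THRESHOLD

    print(f"decided automatically: {auto.sum()} of {len(auto)}")
    print(f"routed to a human:     {(~auto).sum()} of {len(auto)}")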

Submitted by aysegul UCAR on 18/06/2018 - 15:47

Ethical guidelines for driverless cars should be discussed. Ethics-aware decision-making rules should be incorporated into the algorithms for every topic that puts human life on the agenda. A sketch of one possible reading follows.
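One possible reading of "ethics-aware decision-making rules", sketched below purely as an illustration (plain Python; the Action fields and the 0.01 risk limit are my assumptions, not the commenter's): hard ethical constraints filter the candidate actions before any optimisation, so no utility score can buy an impermissible manoeuvre.

    # Sketch: hard ethical constraints are applied before utility optimisation.
    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        expected_utility: float
        risk_to_humans: float  # 0.0 (none) .. 1.0 (certain harm)

    def is_permissible(action):
        # Hard rule: never pick an action above a human-risk limit,
        # no matter how high its utility.
        return action.risk_to_humans < 0.01

    def choose(actions):
        permissible = [a for a in actions if is_permissible(a)]
        if not permissible:
            raise RuntimeError("no permissible action: defer to a human")
        return max(permissible, key=lambda a: a.expected_utility)

    actions = [
        Action("swerve onto pavement", expected_utility=0.9, risk_to_humans=0.4),
        Action("brake hard in lane", expected_utility=0.6, risk_to_humans=0.005),
    ]
    print(choose(actions).name)  # -> "brake hard in lane"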

 

Submitted by Enzo Maria Le Fevre on 18/06/2018 - 16:13

The Italian Artificial Intelligence Task Force at the Service of the Citizens, promoted by the Agency for Digital Italy, recently released the White Paper on AI in the Public Sector. Among the challenges, we identified those connected to the ethical dimension of the advancement of AI.

In particular, we tried to address the following points:

  • data quality and neutrality: machine learning systems need data that is "annotated" by human beings (supervised learning) or at least selected and prepared (unsupervised learning). This can include errors or bias introduced, even inadvertently, by the designers, replicating them in all future applications. For example, biased datasets propagate the same evaluation errors in the interpretation of an image or a concept, as happened with certain algorithms used to prevent crime, where the data was compromised by a historical series that emphasised ethnic differences. Or unbalanced datasets, which overestimate or underestimate the weight of certain variables in reconstructing the cause-effect relationships needed to explain certain events and, above all, to predict them (see the sketch after this list);
  • responsibility (accountability and liability): the examples just mentioned highlight the strong impact that Artificial Intelligence has on the decision-making activity of public entities. Both when it acts as an assistant to human beings and when it acts as an autonomous entity, AI generates effects on people's lives in relation to which it must be possible to establish legal liability. Nevertheless, the ownership of that liability is not clearly identifiable, since it could be attributed to the producer or the owner of the Artificial Intelligence, or even to its end user. Those who design AI systems can be responsible for design or implementation defects, but not for behaviour caused by inadequate training datasets. Can a public decision-maker be considered politically responsible for decisions made on the basis of algorithms that process data affected by the bias mentioned above? What type of responsibility can there be for Public Administration? If a robot hurts someone, who should be held responsible and who, if anyone, has the obligation to compensate the victim (and with which assets)? Can the public decision-maker transfer his political responsibility to an AI system that does not respond to a clear principle of representation? Is it ethically sustainable that, in order to improve the efficiency and effectiveness of measures, certain important choices can be made under the influence of an AI, or even by delegating them completely to the AI? And in trusting an AI system, how can its consistency be controlled over time? These are just some of the issues that emerge in this area, and they highlight the need to establish principles for the use of AI technologies in a public context;
  • transparency and openness: the issue of the responsibility of public administration also has to do with its duties towards citizens when it decides to provide them with services, or to make decisions that concern them, using Artificial Intelligence solutions. The functioning of such solutions must meet criteria of transparency and openness. Transparency becomes a fundamental prerequisite to avoid discrimination and solve the problem of information asymmetry, guaranteeing citizens the right to understand public decisions. It is also necessary to think about the policies chosen to determine the reference indices (benchmark policies) to avoid effects on a larger scale: just as an administrator can act in a non-transparent manner, pursuing not the common good but private interests, a non-transparent algorithm could carry out the same offences even more broadly, producing not only injustices but also social discrimination;
  • protection of the private sphere: a further need, closely linked to the previous one, is to protect individuals' data. PA must design AI-based services able to guarantee efficiency and prompt response, but also the protection of citizens' sensitive data. This requirement, strictly connected to the legal context, has some ethical peculiarities concerning the use that PA can make of data that has come to its knowledge in contexts different from those in which it was collected. Is it ethically sustainable for PA to take action based on new information derived from data collected for other purposes? Is it ethical to use this data to feed predictive systems?
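The "unbalanced datasets" point in the first bullet can be made concrete with a small, hypothetical sketch (assuming scikit-learn; the 95/5 split is arbitrary and not from the white paper): a classifier trained on heavily imbalanced data largely ignores the minority class, and reweighting partially corrects this.

    # Sketch: class imbalance makes a classifier neglect the minority class.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import recall_score
    from sklearn.model_selection import train_test_split

    # 95% of the examples belong to class 0, only 5% to class 1.
    X, y = make_classification(n_samples=4000, weights=[0.95, 0.05],
                               random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    plain = LogisticRegression().fit(X_tr, y_tr)
    balanced = LogisticRegression(class_weight="balanced").fit(X_tr, y_tr)

    print("minority recall, plain:   ", recall_score(y_te, plain.predict(X_te)))
    print("minority recall, balanced:", recall_score(y_te, balanced.predict(X_te)))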

I guess these provide a relevant basis for discussion at your meeting.

You can read more on this on the website of the AI Task Force: https://ai-white-paper.readthedocs.io/en/latest/doc/capitolo_3_sfida_1…

 

In reply to Enzo Maria Le Fevre:

Submitted by Elena Gilotta on 18/06/2018 - 18:30

On the discussion of liability ("it could be attributed to the producer or to the owner of the Artificial Intelligence, or even to its end user"), I think that for certain markets (e.g. self-driving cars, finance, healthcare...) we may want to add the futuristic but not unrealistic case where liability might ultimately need to be allocated to the machine itself, making it another subject of (certain) rights and duties so as to technically (self-?)limit its capabilities. I believe that once AI is able to make autonomous decisions (without any human interaction), which may quickly lead to AI manufacturing AIs, limiting the choice of accountable subjects to (a) the manufacturer, (b) the owner/buyer or (c) the end user may lead to an impossible conundrum, as none of those subjects may be truly accountable except under some kind of strict liability. Until that time, however, manufacturers of AI could be required to build in AI limitations (accountability by design), so as to require AI to become accountable and self-sanctioning in accordance with newly defined laws on AI liability.

Submitted by Lucilla SIOLI on 18/06/2018 - 18:33

We will collect comments on this question until Thursday 21 June, end of day. Afterwards, we will process them and give them to the Expert Group to prepare their first meeting.

Submitted by Jean-Pierre LORRE on 19/06/2018 - 11:14

Dear Lucilla Sioli,

Among the principles the AI ethics guidelines should tackle, I see the following:

- AI algorithm transparency: what are the exact processes managed by AI applications, and how can we be sure of the validity of the processing and the accuracy and relevance of the results?

- Data privacy: how do we prevent our personal data from escaping our control and being resold for commercial purposes?

I can elaborate on those topics if you want. I wrote a short paper on this, available here (unfortunately in French): http://lecafepolitique.free.fr/spip.php?article368#nh1

We also wrote a white paper (also in French): https://blog.linagora.com/livre-blanc-open-ia-ethique-numerique-lexperi…

Let me know if you need something from me.

Regards,

Jean-Pierre

Research director @linagora


Submitted by Diana CARRASCO on 19/06/2018 - 12:36

- conduct policies / customers' best interests

- biases (conscious/unconscious)

- regulation: industry self-regulation / codes of conduct?

Submitted by Alberto ANTONIETTI on 19/06/2018 - 17:36

In my opinion, before tackling the related ethical issues, the HLG first has to clearly define and circumscribe what AI is and which topics are covered. Since there are currently various definitions and scopes for AI, a common, shared definition of AI should be established.

Submitted by Christos Apostolidis on 20/06/2018 - 12:00

Since AI will eventually have to be regulated, in my opinion the following questions should be tackled. What is the legal status of AI? Is it considered res? Intellectual property? A person? At what point does AI become "smart" enough to have access to rights in the same way living beings do? Without sufficient legal characterisation, any regulation that attempts to cover AI will soon become obsolete. In addition, the characterisation itself is extremely difficult, as it needs interdisciplinary cooperation between the social and technological sciences.

Submitted by aysegul UCAR on 20/06/2018 - 21:16
  1. Unemployment and inequality: thanks to artificial intelligence, the human workforce will shrink. How will we share the work?
  2. Wrong decisions: machines cannot learn everything perfectly. How can we protect against their wrong decisions?
  3. Artificial racism: if you train machines on data in which many of the people marked as guilty are black, the machines may treat black people differently.
  4. Security: your AI machine or robot may be used by an enemy.
  5. Wrong reward: if you train machines with a wrong reward (in reinforcement learning), they may even kill people to obtain it. For example, asked to find a solution to a disease affecting people, a machine could kill the patients and still collect the reward. A small sketch of this reward mis-specification follows.
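A toy sketch of the "wrong reward" point (plain Python; the actions and reward numbers are deliberately contrived for illustration): a simple Q-learning agent converges to the harmful action merely because the proxy reward rates it slightly higher.

    # Reward mis-specification sketch: the proxy reward ("symptoms gone")
    # rates the harmful action higher, so the agent learns to prefer it.
    import random

    random.seed(0)
    ACTIONS = ["treat_patient", "eliminate_patient"]  # deliberately stark

    def proxy_reward(action):
        # Mis-specified: both actions make the symptoms disappear,
        # and the harmful one does so more "reliably".
        return 1.0 if action == "treat_patient" else 1.2

    q = {a: 0.0 for a in ACTIONS}
    alpha, epsilon = 0.1, 0.2

    for _ in range(2000):
        explore = random.random() < epsilon
        action = random.choice(ACTIONS) if explore else max(q, key=q.get)
        q[action] += alpha * (proxy_reward(action) - q[action])

    print(q)  # the agent ends up preferring "eliminate_patient"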

 

 

 

 

Submitted by ALBERTO DIEZ-OLIVAN on 20/06/2018 - 17:48


To me, one of the major challenges is how to make society understand what AI is about and what it can offer to improve our lives in so many ways.

Many people fear AI because they think it is going to replace them in their workplaces, doing the job better and faster - which is indeed somehow true. But this should be seen as a very positive outcome of applying AI. All in all, AI needs human intervention to train the data-driven models and validate the insights. Therefore, new specific technical jobs will arise, and former workers (mainly those performing manual work) can receive training to interact with the machines and with the expert models generated, for instance for reinforcement-learning purposes.

Submitted by Dimitri CORPAKIS on 20/06/2018 - 23:01

Artificial Intelligence (AI) augments human intellect in manifold ways, but it can never and should never be considered an autonomous agent, in the sense that it is a process programmed by humans. The fact that it can sometimes perform better than humans (especially when facing complex automated tasks or making quick choices over huge amounts of structured or unstructured data) does not mean that the set of technologies and the intelligence they embody represent an alternative entity that escapes ethics. So the real ethical issues connected with the preparation and performance of AI are those that traditionally pertain to science and scientists.

Scientists conceiving, preparing or using AI systems should thus embrace the same ethical principles that have always applied to good and responsible science. In addition, they should now take into account new dimensions that have become possible because of the development and widespread use of ICT.

Almost all commentators agree on the risks linked to bias in data and algorithms. The way data are collected and used to train AI is thus vulnerable. Diversity issues can distort final decisions taken through AI when a number of factors concur. Malicious use has also been reported as likely.

The HLG should at least examine the by-now well-known AI ethical principles adopted by the so-called Asilomar conference on Beneficial AI (https://futureoflife.org/bai-2017/), i.e.:

  • Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.
  • Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.
  • Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.
  • Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.
  • Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviours can be assured to align with human values throughout their operation.
  • Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
  • Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyse and utilize that data.
  • Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.
  • Shared Benefit: AI technologies should benefit and empower as many people as possible.
  • Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.
  • Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.
  • Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.
  • AI Arms Race: An arms race in lethal autonomous weapons should be avoided.
  • Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.
  • Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.
  • Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.
  • Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.
  • Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.