Which themes should the High-Level Expert Group on AI (AI HLG) tackle?

On 27 June, the AI HLG will meet for the first time and discuss (1) the main ethical principles to focus on when preparing the draft AI ethics guidelines, and (2) the main AI-related challenges to address in their policy recommendations (including on Europe's competitiveness and on the future of work).

The first discussion we would like to launch with you is:

What are the main principles that the AI ethics guidelines should tackle?


Your feedback on this question will be used by the AI HLG in its first meeting!

Update: We will collect comments on this question until end of day Thursday 21 June. Afterwards, we will process them and pass them on to the AI HLG to prepare its first meeting.

Tags
Ethics guidelines AI HLG

Comments

Submitted by David Pereira on Thu, 06/14/2018 - 15:22

In my opinion, there are at least three principles that will become key to ensure AI is used in an ethical way:

- Citizen Data Use Empowerment: protecting citizens from misuse of their personal data, while at the same time offering beneficial scenarios so that they are open to providing data for training purposes, as long as it is used under regulated conditions (e.g. in terms of anonymity).

- Explainable AI: providing methodologies and tools so that decisions taken by AI can be traced, and feedback loops can therefore be created whenever results are unethical or illegal (see the sketch below).

- Proxy-free AI: for AI to be unbiased, data proxies with potentially unethical effects should be identified and regulated in different scenarios.
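To illustrate the Explainable AI point, here is a minimal sketch (with a hypothetical linear model and hypothetical feature names, purely illustrative) of logging each decision together with a per-feature explanation, so that an unethical or illegal outcome can later be traced and fed back:

    # Minimal sketch: log each AI decision with a traceable explanation.
    # The linear "model" and its feature names are hypothetical placeholders.
    import json, time

    WEIGHTS = {"income": 0.6, "debt": -0.8, "years_employed": 0.3}

    def decide_and_explain(applicant):
        # Score an applicant and record which features drove the decision.
        contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
        score = sum(contributions.values())
        decision = "approve" if score > 0 else "reject"
        # The audit record makes the decision reviewable after the fact,
        # enabling a feedback loop when outcomes turn out unethical or illegal.
        audit_record = {
            "timestamp": time.time(),
            "inputs": applicant,
            "per_feature_contribution": contributions,
            "score": score,
            "decision": decision,
        }
        print(json.dumps(audit_record))
        return decision

    decide_and_explain({"income": 1.2, "debt": 1.5, "years_employed": 2.0})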


Submitted by Arshia Gratiot on Thu, 06/14/2018 - 18:05

How far is too far? I was at CEBIT over the last two weeks and themes around AI and ethics kept coming up. It seemed like a war of the worlds between industry, academia and regulators. On the side of industry, there is a fear that regulation will kill innovation. When it comes to academia and governments, there is a fear that expecting a conscience from the private sector is expecting too much... heck, Facebook's idea of GDPR-like compliance (their latest olive branch) is a bit of a farce, so the fears are probably justified. I heard AI being thrown around very liberally by industry chiefs, but on closer enquiry I don't know if these people understand what goes into writing an algorithm, yet they make decisions. I would love to understand how the high-level group will engage with industry in a way that will foster innovation, understanding and closer co-operation.

Submitted by Eduard Fosch V… on Thu, 06/14/2018 - 18:11

Some of the principles I would suggest discussing are:

- the principle of non-isolation (promoting human-human interaction)

- the principle of policy learning (from research, impact assessments)

- the principle of accessibility 

- the principle of compliance by design (accountability process for robot developers)

- the principle of action (solution-oriented guidelines, translational problem avoidance)

Have a great meeting!

Eduard Fosch Villaronga

Submitted by Guido Noto La Diega on Thu, 06/14/2018 - 18:35

The first question should be: is it moral to build an ethical AI? What degree of paternalism is desirable? How do we ensure diversity and cultural relativism?

Submitted by Andres Abad Ro… on Thu, 06/14/2018 - 20:14

Some items for discussion:

  • Data privacy: AI systems will deal with sensitive data. That data must be handled in accordance with EU laws on data privacy. AI can infer new knowledge, but the privacy of the data sources must be preserved.
  • Military applications: can AI developments paid for with public money be used for military purposes? I am neither against nor in favour, but I believe we must discuss it.
  • Standards: can we discuss and agree on some standards for interconnection, etc., to make it easier to join several platforms together?


Submitted by Vincent Franco… on Fri, 06/15/2018 - 05:25

Current developments in AI follow the development of many other tools brought by information and communications technologies. As with all new technologies, this comes with different potential opportunities and challenges for our society.

We'll need to be careful that AI algorithms are safe, reliable and predictable. And we'll likely need specific legislation to keep the agents/decision-makers/companies accountable for that.

In addition, AI algorithms also bring societal and ethical challenges, raising the question of how they can be used for the benefit of all. In particular, AI will likely lead to more automation and robotisation than is currently possible. This is clearly a concern in the context of autonomous weapons, for instance. Automation also influences the economy, the job market and our society as a whole. A key challenge is to make sure that future technological developments in artificial intelligence do not create an ecological crisis or deepen the inequalities in our society, with potential social and economic instabilities.

Submitted by davide viviano on Fri, 06/15/2018 - 08:38

Some topics that might be considered in the discussion:

- AI and unethical discrimination: data might contain biases that exist in our society. These biases can lead algorithms to take decisions that are unfair with respect to sensitive attributes such as race and gender. Whereas the AI research community has proposed a variety of preprocessing algorithms to deal with discriminatory bias, a regulatory framework for data scientists working with sensitive attributes still remains an open issue (a minimal fairness check is sketched after this list).

- AI and market segmentation: the increasing use of AI algorithms in marketing has increased the level of market segmentation and, in some sectors, the discriminatory power of firms. This can have negative effects on market efficiency, especially in sectors that are not highly competitive.

- AI in the financial sector: what kind of regulation should be implemented for financial institutions that take their decisions based on predictive modelling?
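To make the discrimination point concrete, here is a minimal sketch (with entirely hypothetical groups and decisions) of the kind of disparity check a data scientist or auditor could run over a model's outputs, measuring the demographic parity difference between two groups:

    # Minimal sketch: demographic parity difference between two groups.
    # Hypothetical data: each record is (sensitive_attribute, model_decision).
    decisions = [
        ("group_a", 1), ("group_a", 0), ("group_a", 1), ("group_a", 1),
        ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
    ]

    def positive_rate(records, group):
        # Share of positive decisions (1) the model gives to one group.
        outcomes = [d for g, d in records if g == group]
        return sum(outcomes) / len(outcomes)

    rate_a = positive_rate(decisions, "group_a")
    rate_b = positive_rate(decisions, "group_b")

    # A parity difference of 0.0 would mean both groups receive positive
    # decisions at the same rate; a large gap flags potential discrimination.
    print(f"group_a rate: {rate_a:.2f}, group_b rate: {rate_b:.2f}")
    print(f"parity difference: {abs(rate_a - rate_b):.2f}")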

Submitted by Lêmy GODEFROY on Fri, 06/15/2018 - 08:39

To express ethical principles (transparency, non-discrimination, fairness, explicability, protection of privacy, decision-making autonomy of the machine, etc.), would it not be necessary to determine the common ethical values that support these principles?

Reflection on ethical principles would then imply defining moral values of reference. For example, it is not the same ethical principle that prevails if we teach an autonomous car to violate traffic laws to save a pedestrian in danger, or if we teach the car to comply strictly with those laws and not to react despite the danger to the pedestrian's life.

In addition, these moral values of reference should be ranked in order of importance (for example, protection of people first (and of which people: children, the elderly, the young, etc.), protection of property second, and so on).

One can also wonder whether the moral values expected of a human must be transposed to algorithms. The machine learns to imitate the human; it does not have cognitive abilities of its own. Can we then teach a machine that in some cases it is possible to go against moral values?

Moreover, once these moral values have been defined, one may wonder how ethical considerations or moral values can be mathematised and programmed, but that is another question.

In reply to Lêmy GODEFROY

Submitted by Athanasios Sta… on Fri, 06/15/2018 - 09:37

Hi, I believe you have raised some important questions.

Identifying ethical or unethical behaviour is a highly challenging task. I am not sure, though, that we can attribute ethical characteristics to algorithms, rather than to the people and/or organizations that use them.

Let me give you an example:

Let us suppose that I have created an AI solution that scans faces in a crowd, instantly profiles people according to their background (relations, beliefs, education, race, gender, etc.), and decides whether they are possible threats.

Is it ethical? 

Hard to answer. For sure, the state would be given a weapon against terrorism, but at the same time our lives would be strictly monitored. So is it ethical (note that I am not referring to legal issues) to monitor all citizens in depth because some might be a threat?

Another example:

AI solutions can now detect whether a person is suffering from a chronic disease by analyzing body movement, facial characteristics and other signals. How comfortable are we with accepting that such personal information could be made public?

I am not sure that regulatory action will provide safety against those "dark sides" of AI/ML; maybe we should first try to educate citizens on the use or misuse of technology.

I would really appreciate your opinions on this.

Best 

Athanasios

Submitted by Stefano Quintarelli on Fri, 06/15/2018 - 10:31

I think a discussion on LAWS (lethal autonomous weapon systems) and related principles may be worthwhile.

Submitted by Elio PENNISI on Fri, 06/15/2018 - 11:13

Suggest:

- Feasibility (venues and budget) of organizing seminar sessions to inform HR managers about the most appropriate ways to introduce AI technologies into the factory and the office. The objective is for personnel to accept AI as a tool to assist the human being rather than as a threat to employability. Psychological aspects are of primary importance.

Submitted by Matthew Newman on Fri, 06/15/2018 - 12:15

I think there’s a wealth of detailed subjects to dive into, but for me there are some key items to tackle:

  1. How do we create policy which is not overly prescriptive to the level of requiring centralised definitions of “what is ethical?” or similar, but encourages collaborative definitions within stakeholder communities?
  2. How do we equip the general population, business (large and small) and local government/policymakers with the right tools and understanding to have these conversations (also with an eye to the future population for the same)?
  3. How do we balance our desire for ethical principles with the requirement to keep our businesses and economy competitive in this field?
  4. How can we combine soft and hard law to ensure responsiveness to a rapidly changing environment?
Submitted by ANNAMARIA CACCHIONE on Fri, 06/15/2018 - 11:26


I have read many of the comments, and all the suggestions seem good to me, but the majority of them are very similar to questions raised about other kinds of ethically sensitive matters - personal data treatment, basically.

Therefore a preliminary issue could be:

- what is the difference between ethical issues regarding AI and ethical issues regarding other, more "traditional" questions?

We all "feel" that there is a difference, and that it runs very deep, but we still lack a definition of it, and defining it is crucial in order to move on.


Submitted by Eleftherios Ch… on Fri, 06/15/2018 - 12:50

From the very beginning, it is important to offer some clarifications. I can see from the interesting comments in this group that legal principles and human rights are mentioned in an "ethical discussion". We need to be particularly careful here.

The EU Charter and the ECHR guarantee a high level of legal protection for the people of the EU Member States. So, matters related to rights and freedoms such as the rights to privacy, data protection, non-discrimination, freedom of expression, rights of the child, etc. are not ethical concerns and do not belong in an ethical discussion. They are fundamental rights concerns. Fundamental rights compliance is mandatory: you are obliged to follow the provisions of the EU Charter and the ECHR. The same goes for legal principles. For example, in a data processing setting you are obliged to comply with the legal principles of lawfulness, fairness and transparency, purpose limitation, data minimisation, accuracy, storage limitation, integrity and confidentiality, and accountability. These principles are not ethical, but legal. You do not discuss them; you comply with them. The differences between legal and ethical concerns are clear to legal and ethics experts, but computer science experts are not as familiar with this subject. So, it is important to set the scene for them as well.

Both law and ethics sanction human behaviour, since they consist of commands and prohibitions. But law is enforced by the state through penalties, while ethics is enforced by the members of society (through praise or contempt). So, policy makers in every societal and industrial field related to the use of AI are obliged to comply with fundamental rights standards, and this is not part of an ethical discussion. Of course, ethical matters, even though not as concrete as legal matters, are nevertheless important. Therefore, the values that are important for our societies need to be taken into consideration by policy makers as well.

Professional morality can be used as a modality to regulate behaviour, but it is not law. The same goes for the market or the architecture of things: they regulate behaviour, but they are not law.

So for me, in this very first meeting the AI HLG needs to set the scene, so that every person in the EU can understand what this HLG is about. In the EU we enjoy the highest level of human rights protection in the world. The ECtHR and the CJEU have introduced, through their case-law, high standards and safeguards. We should not start mixing ethical concerns with legal matters in an experts' discussion.


In reply to Eleftherios Ch…

Submitted by Pawel RZESZUCINSKI on Wed, 06/20/2018 - 08:27

A critical clarification for this thread. Thumbs up!

Submitted by Yves Moreau on Fri, 06/15/2018 - 15:19


EU legislation offers comprehensive legal protections to its residents (privacy, non-discrimination, etc.). The reach of AI is, however, global. Tough problems will arise where AI, business, and human rights meet (in particular, mass and targeted surveillance by authoritarian regimes). While violations taking place in the EU can be addressed through existing legal protections, legal instruments addressing violations taking place outside the EU are gravely lacking. Although the lack of legal instruments to regulate the activities of EU entities outside the EU is a problem that reaches beyond AI, AI has the potential to quickly amplify some of these problems (such as cybersurveillance, biometrics, or autonomous weapons). We should therefore have clear ethical guidelines for EU businesses, governments, and organizations that increase their accountability in case they contribute to human rights violations outside the EU. Such guidelines could lead the way towards novel legislation, or extensions and improvements of existing legislation (such as EU export controls or the recent French duty of care law).

Submitted by Samuel Gwed on Sat, 06/16/2018 - 08:31

First, it would be very helpful to put on the table the 3 or 4 principles that the European Commission wants to achieve. There are several domains that need to be treated separately.

Submitted by Samuel Gwed on Sat, 06/16/2018 - 08:42

Ethics is ethics, but life science subjects that involve bioethics issues, for instance, should not be treated in the same way as business. It would be worth highlighting the common principles but coming up with specifics for each group of domains.

Submitted by Pawel RZESZUCINSKI on Sun, 06/17/2018 - 16:13

All the ideas presented to date seem very valid, but coming from a slightly different area, I'd like to propose some perhaps more abstract yet highly relevant points for consideration:

  • Is it ethical to move away from the concept of 'human in the loop', even in tasks where AI might be more precise or even more humane because it makes more effective decisions (incomparably more dimensions in decision making)? The simplest example is handwritten digit/signature recognition (do you want your voice to be rejected from elections/petitions/legal signatures because the system stated it is not your hand that produced the signature?), down to the much more troublesome cases of autonomous weapons (the sacrifices to be made will all be a matter of the cost function definition).
  • Is it ethical to create a "book of moral commandments" that could never be overwritten by whatever the AI considers 'good' as the outcome of its learning process? If so, how should the process of creating it look? Would it be ethical for a body of moral authorities to create one? Would it be representative of the moral principles of the population?
  • Is it ethical to put humans at the centre of AI considerations; to create a human-centric galaxy with all our dependencies revolving around us? For an AI-led future to be sustainable, maximizing the cost functions must take environmental factors into account at every single step. AI should understand that maximizing the benefit of humans should be strongly biased by the environmental variables.
  • Is it ethical to use AI for traditionally human-human interactions that require respect and care, without one of the sides being explicitly notified of the fact? This poses a great threat to the very foundations of human needs and may leave people feeling isolated, hopeless and upset. The literature has historically proposed the professions listed below as falling into this category:
    • A customer service representative (chatbots)
    • A therapist
    • A nursemaid for the elderly
    • A soldier (autonomous weapons)
    • A judge
    • A police officer

Haven’t we already crossed the Rubicon in some of them?

  • Is it ethical not to ban initiatives related to AI weapons - the ultimate threat to the existence of humankind? With all due respect for, and personal fascination with, the work of Boston Dynamics, consider what's already available to the general public. If such impressive results are available to us, what's hidden behind the curtains?

On the other hand, playing the devil's advocate a little bit:

  • based on the great discussion within the roboethics area:
    • Is it ethical not to create any guidelines on how humans design, construct, use and treat robots and other artificially intelligent beings?
    • Is it ethical not to give our creations a right to exist, to perform their own mission, and to own things?
    • Is it ethical not to grant them the right to their own respected point of view?

After all, AI systems are created by humans (as a child by its parents) and learn from their very own experiences (the data).

  • Is it ethical not to grant legal personality to AI systems used in very specific scenarios that have potentially life-changing impacts on our lives? If so, how do we track accountability and stay in control in the new world order?
Submitted by Mathieu Sacrispeyre on Mon, 06/18/2018 - 12:16

Hello,

Is being "too" ethical a good way of not being ethical in the end?

How can we be agile and flexible on such a complex but fast-changing topic?

The idea in the background is: the USA and China are already ahead of Europe on AI and usually move faster than Europe, so creating too strict a rule may be a way to slow down our AI actors and thus to let the others win the competition... which would lead us to follow their rules and to try to make them ethical.

Linked to that, a clear challenge to address is how to create leader(s), if that is still possible. It would be great to avoid the cloud situation, where we have no major players, so most of our data goes to American platforms (and their ethics).

In reply to Mathieu Sacrispeyre

Submitted by Bjoern Juretzki on Tue, 06/19/2018 - 17:17

Mathieu,

Thanks for your comment.

First of all, I am not sure I would fully agree with the statement that the US and China are ahead of Europe when it comes to AI. This is certainly true for some areas, but not for all. If I look at professional service robotics, for example, I think Europe is ahead of both the US and China.

In our strategy "Artificial Intelligence for Europe", we also point out that we need commercial and technological leadership if we want to influence the debate on ethical issues. Otherwise, we will not have a credible voice. For this reason, strengthening Europe's competitiveness in AI is closely linked to the discussion on ethical issues.


Submitted by Alejandro Zornoza on Mon, 06/18/2018 - 12:43

In my opinion, our efforts must be channelled into reinforcing European law. All the imaginable ethical values and principles demanded for ethics guidelines are already codified in a large body of national and international laws and documents, such as constitutions, acts, and directives (e.g. from art. 169 of the Treaty on the Functioning of the EU to D 2001/95, D 85/374, R 2016/679). Proposing respect for privacy, safety or morals in ethical documents may make it appear as if the European regulation that we have been applying until now to the design and manufacture of products had not been respecting the European Union's values, and as if AI/autonomous systems gave us the opportunity to start from scratch.

Indeed, we should create training and information actions for users. News about fatal accidents in which an AI/autonomous agent is involved creates commotion, undermining user confidence in AI and autonomous systems, so European documents play an important role in the eyes of public opinion. Users must know what AI is, what to expect from it, and how to use it.

Overall, we should not use the terms 'ethics' or 'morals' to refer to a safety obligation. That projects a distorted and erroneous image of tort law, liability, and AI.


Submitted by KATERINA SIDERI on Mon, 06/18/2018 - 15:26

Principles: justice (within the limits of human nature) and moral sentiments such as compassion and respect.


Challenges

Biased AI: when AI makes decisions, it may reproduce prejudice and stereotypes. Can we address this problem? How, and to what extent?

Data ownership or privacy? What is the best framework for addressing user control over their data, and why?

Trade secrets in databases. To what extent are they a problem for transparency, and what can be done to address them?


Submitted by aysegul UCAR on Mon, 06/18/2018 - 15:47

Ethical guidelines for driverless cars should be discussed. Ethics-aware decision-making rules should be incorporated into the algorithms for every application that puts human life on the line (a minimal sketch of such a hard-rule filter follows below).
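As an illustration of what "incorporating ethics-aware rules" could mean in code, here is a minimal sketch (the candidate manoeuvres, thresholds and field names are all hypothetical) of hard safety rules filtering a planner's candidate actions before any comfort optimisation takes place:

    # Minimal sketch: hard ethics/safety rules filter a planner's actions.
    # Candidate manoeuvres and the distance threshold are hypothetical.

    def violates_rules(action):
        # Hard constraint that must never be traded off by the planner.
        return action["min_pedestrian_distance_m"] < 1.5

    def pick_action(candidates):
        # Remove rule-violating actions first, then optimise for comfort.
        allowed = [a for a in candidates if not violates_rules(a)]
        if not allowed:
            # No candidate satisfies the rules: fall back to a safe default.
            return {"name": "emergency_stop", "min_pedestrian_distance_m": 0.0}
        return max(allowed, key=lambda a: a["comfort"])

    candidates = [
        {"name": "keep_lane", "min_pedestrian_distance_m": 0.8, "comfort": 0.9},
        {"name": "slow_down", "min_pedestrian_distance_m": 2.0, "comfort": 0.6},
    ]
    print(pick_action(candidates)["name"])  # prints "slow_down"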


Submitted by Enzo Maria Le Fevre on Mon, 06/18/2018 - 16:13

The Italian Artificial Intelligence Task Force at the Service of the Citizens, promoted by the Agency for Digital Italy, recently released the White Paper on AI in the Public Sector. Among the challenges, we identified those connected to the ethical dimension of the advancement of AI.

In particular, we tried to address the following points:

  • data quality and neutrality: machine learning systems need data that is "annotated" by human beings (supervised learning) or at least selected and prepared (unsupervised learning). This can include errors or biases introduced, even inadvertently, by the designers, replicating them in all future applications. For example, biased datasets propagate the same evaluation errors about the meaning of an image or a concept, as happened with certain algorithms used to prevent crimes, in which the data was compromised by a historical series that emphasised ethnic differences. Likewise, unbalanced datasets overestimate or underestimate the weight of certain variables in the reconstruction of the cause-effect relationship necessary to explain certain events and, above all, to predict them (a minimal imbalance check is sketched after this list);
  • responsibility (accountability and liability): the examples just mentioned highlight the strong impact that Artificial Intelligence has on the decision-making activity of public entities. Both when it acts as an assistant to human beings and when it acts as an autonomous entity, AI generates effects on people's lives for which it must be possible to establish legal liability. Nevertheless, the ownership of that liability is not clearly identifiable, since it could be attributed to the producer or to the owner of the Artificial Intelligence, or even to its end user. Those who design AI systems can be responsible for design or implementation defects, but not for behaviour caused by inadequate instruction datasets. Can a public decision-maker be considered politically responsible for decisions made on the basis of algorithms that process data affected by the biases mentioned above? What type of responsibility can there be for public administration? If a robot hurts someone, who should be held responsible, and who, if anyone, has the obligation to compensate the victim (and with which assets)? Can the public decision-maker transfer his political responsibility to an AI system that does not respond to a clear principle of representation? Is it ethically sustainable that, in order to improve the efficiency and effectiveness of measures, certain important choices are made under the influence of an AI, or even completely delegated to the AI? And in trusting an AI system, how can its consistency be controlled over time? These are just some of the issues that emerge in this area, and they highlight the need to establish principles for the use of AI technologies in a public context;
  • transparency and openness: the issue of the responsibility of public administration also has to do with its duties towards citizens when it decides to provide them with services, or to make decisions that concern them, using Artificial Intelligence solutions. The functioning of such solutions must meet criteria of transparency and openness. Transparency becomes a fundamental prerequisite to avoid discrimination and to solve the problem of information asymmetry, guaranteeing citizens the right to understand public decisions. It is also necessary to think about the policies chosen to determine the reference indices (benchmark policies) to avoid effects of an even larger dimension: just as an administrator can act in a non-transparent manner, pursuing not the common good but private interests, a non-transparent algorithm could carry out the same offences even more broadly, producing not only injustices but also social discrimination;
  • protection of the private sphere: a further need, closely linked to the previous one, is to protect the data of individuals. PA must design AI-based services able to guarantee efficiency and prompt response, but also the protection of citizens' sensitive data. This requirement, strictly connected to the legal context, has some ethical peculiarities concerning the use that PA can make of data that has come to its knowledge in contexts different from those in which it was collected. Is it ethically sustainable that PA, through the use of data collected for other purposes, takes action based on newly derived information? Is it ethical to use this data to feed predictive systems?
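As a concrete illustration of the data quality point above, here is a minimal sketch (the dataset and its column names are hypothetical) of the kind of representation check a public administration could run before training, flagging groups that are heavily over- or under-represented:

    # Minimal sketch: flag over/under-represented groups in training data.
    # Hypothetical records; real ones would come from the PA's dataset.
    from collections import Counter

    records = [
        {"district": "north", "label": 1},
        {"district": "north", "label": 0},
        {"district": "north", "label": 1},
        {"district": "north", "label": 1},
        {"district": "south", "label": 0},
    ]

    counts = Counter(r["district"] for r in records)
    total = sum(counts.values())
    expected_share = 1 / len(counts)  # naive baseline: uniform representation

    for group, n in counts.items():
        share = n / total
        # Flag groups whose share deviates strongly from the baseline; the
        # 50% relative tolerance is an arbitrary, purely illustrative choice.
        if abs(share - expected_share) > 0.5 * expected_share:
            print(f"warning: '{group}' is {share:.0%} of the data (baseline {expected_share:.0%})")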

I guess these provide a relevant basis of discussion for your meeting.

You can read more on this on the website of the AI Task Force: https://ai-white-paper.readthedocs.io/en/latest/doc/capitolo_3_sfida_1…