Which themes should the High-Level Expert Group on AI (AI HLG) tackle?

On 27 June, the AI HLG will meet for the first time and discuss (1) the main ethical principles to focus on when preparing the draft AI ethics guidelines, and (2) the main AI-related challenges to address in its policy recommendations (including on Europe's competitiveness and on the future of work).

The first discussion we would like to launch with you is:

Which are the main principles that the AI ethics guidelines should tackle?



Your feedback on this question will be used by the AI HLG in its first meeting!

Update: We will collect comments on this question until the end of the day on Thursday 21 June. We will then process them and pass them on to the AI HLG in preparation for its first meeting.

Tags
Ethics guidelines AI HLG

Comments

Submitted by Dessislava Fessenko on Fri, 22/06/2018 - 14:14

Hi there,

 

With a slight delay, I would also take the liberty of contributing some thoughts in response to your request, and hope this does not come too late in the process.

Ethical standards applicable to humans are a broad concept, and the exact standards may vary from one society or culture to another, across time periods, or even by subject matter. But the main principles around which modern communities, societies/states and legal systems appear to be organised all relate to ensuring the sanctity/integrity of life, health, privacy and public order.

In my view, among the main principles that the AI guidelines should tackle are those related to the possible interplay or interference of AI with the values listed above, i.e.:

    1. (Non-)acceptable use of AI as an extended arm of humans in activities that it would not be acceptable for humans to undertake -- e.g. warfare, bold scientific experiments, etc.;
    2. Use of AI for surveillance and ensuring public order -- whether and how AI is used to track behavioural patterns, to police, and to take decisions and act upon socially unacceptable forms of conduct;
    3. Moral/ethical borders for employing AI in monitoring and ensuring public and individual health -- can AI be unconditionally relied upon for decisions concerning health & safety?
    4. Use of AI for automated decision-making with possible legal effects on individuals -- can AI be unconditionally entrusted with material decisions without human vetting/oversight?
    5. And, more generally, checks and balances and safeguards in every AI-based process/activity, so that the operations and outcomes of such processes are not irreversible and human involvement (decision/approval) is required for material decisions/outcomes.

 

Thank you!

 

Kind regards,

Dessislava

 

Submitted by Bogdan MICU on Sun, 24/06/2018 - 14:31

Hope for the best, prepare for the worst: Building technology that benefits people and society

  1. What should be the specific purpose of AI in & for Europe? To grossly simplify, we have to decide whether the main thrust should be towards a more competitive and innovative European industry, towards improvement of the quality of life for all Europeans, or maybe towards achieving deeper European integration.
  2. Based on the overall purpose of AI in / for Europe, and underpinned by the European value system, we should develop a priority ranking by type and domain: what kinds of AI do we want, and where do we want them?
  3. There is a distinction between operational, analytical, and predictive (uses / facets of) AI. Another distinction could be between localized and extended AI: the former has a clearly circumscribed material footprint, while the latter is dispersed in a network. We might also have made the first steps towards what I call a philosopher AI, one that understands and develops concepts, makes inferences, and more generally interprets and gives meaning to the world as she experiences it. Each of these types needs to incorporate a different and specific set of value-based checks and failsafes.
  4. On granting AI legal personality: should we / are we prepared to allow them to operate autonomously and anonymously in financial matters, including making payments / donations to political action groups?
  5. Should we encourage the development of a profession whose job description is the algorithmization of higher-level non-routine cognitive tasks? Should we allow the practice of this profession in Europe, and / or on European citizens?
  6. Do we allow non-certified AI, located outside European jurisdiction, to be used by governments, civil society organizations, private persons, companies, etc., operating in Europe? How do we regulate extended AI that cannot be pinpointed geographically?
  7. What are the necessary biases we would want incorporated in AI systems? For example, the relaxation of economic or logistical criteria for optimization when they conflict with the protection of vulnerable persons.
Submitted by Stefan MANGOLD on Mon, 25/06/2018 - 19:55

Today, there should be no doubt about the many promising visionary ideas related to what is generally referred to as Artificial Intelligence (AI). Whereas some publicly available visions appear believable and demonstrate obvious benefits, other ideas sound more like stories that may never materialize, still helpful to raise awareness.

It is the right time for our society to step forward and prepare the process of policy making. I am writing this short note to applaud the European Commission for establishing the European AI Alliance with its open and efficient way of communicating with interest groups. I am convinced that this approach can be a critical differentiator for ensuring an optimal outcome for European stakeholders.

  1. Resources should be carefully allocated with the clear objective of identifying new paths towards commercial exploitation, and advancing the competitiveness of European stakeholders.

  2. A high-level greenfield approach for policy recommendations might consider an approach known from artificial knowledge representation and ontology engineering (semantic web). This is not novel and has been discussed in the past in a number of FP7 and H2020 ICT research projects, which were targeted towards enabling flexible radio spectrum regulation. It is interesting to think of ethical and technical rules being specified in machine-understandable ways. The benefits are twofold: one would be accelerated adoption and possible evolution in AI systems. Another advantage would be the inherent possibility of detecting patent infringement and, further, of automating compliance testing of AI system behaviors.

  3. Today’s digitalization continues to affect European society by challenging nearly all aspects of people’s lives. Interestingly, future research work targeting industrial leadership in the area of high-performance AI systems (including embedded and always-available AI systems) might actually benefit from the partially ignored disciplines of Analogue Communications and Analogue Computing, instead of relying on incremental improvements in digital ICT. It might be beneficial for the European AI Alliance to critically reflect on and shape possible developments in this alternative non-digital approach. Hopefully, this might lead to radical new breakthroughs in AI system designs.
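The machine-understandable rules idea in point 2 can be sketched in a few lines. This is a purely illustrative toy, not any existing ontology standard: the rule set, event fields and log format are all invented assumptions, showing only how rules expressed as data could drive automated compliance testing of logged AI behaviour.

```python
# Hypothetical sketch: ethical/technical rules expressed in a
# machine-understandable form, plus an automated compliance check over a
# log of AI system events. All rule ids, fields and events are invented.

RULES = [
    {
        "id": "human-oversight",
        "description": "Material decisions require human approval",
        "applies_to": "decision",
        "require": lambda event: not event["material"] or event["human_approved"],
    },
    {
        "id": "no-irreversible-action",
        "description": "Actions must be reversible",
        "applies_to": "action",
        "require": lambda event: event["reversible"],
    },
]

def check_compliance(log):
    """Return (event id, rule id) pairs for every rule a logged event violates."""
    violations = []
    for event in log:
        for rule in RULES:
            if rule["applies_to"] == event["kind"] and not rule["require"](event):
                violations.append((event["id"], rule["id"]))
    return violations

log = [
    {"id": 1, "kind": "decision", "material": True, "human_approved": True},
    {"id": 2, "kind": "decision", "material": True, "human_approved": False},
    {"id": 3, "kind": "action", "reversible": True},
]
print(check_compliance(log))  # [(2, 'human-oversight')]
```

Because the rules are data rather than prose, the same representation could in principle be exchanged between systems, versioned, and evaluated automatically, which is the twofold benefit the comment describes.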

Once again, thank you, Ms. Sioli, for raising awareness and for your leadership in authoring this Q&A. I am planning to closely follow future communications and look forward to working with others towards good progress.

Respectfully submitted in good faith, out of interest and curiosity.


Stefan Mangold, Lovefield Wireless, Switzerland 

https://www.lovefield.ch/~smangold

Submitted by Vivek NALLUR on Tue, 26/06/2018 - 16:17

One aspect of AI and ethics that does not seem to have received much attention is what sort of ethics AI systems would negotiate amongst themselves. Can we predict, or (ideally) control, the sort of ethical principles that autonomous systems agree to abide by?

It's easy (relatively speaking) to think about imbuing individual autonomous machines with some ethical principle. However, when autonomous machines interact with one another, there is the risk of them agreeing/disagreeing with each other, and the resultant ethical behaviour might be hard to predict. 

We need to have research into 'emergent ethics', as well as informed social discussion on what it would mean, for some 'other' to set the ethical rules for us. How much are we willing to accept, what are the red lines?

Reply to comment by Vivek NALLUR

Submitted by Matthew Newman on Tue, 26/06/2018 - 16:51

It's an interesting topic. One possible approach is to create a consensus-building mechanism that uses reputation as a metric. In this way, AIs that have gained a positive reputation through human scoring would have a stronger vote when it comes to building consensus amongst AIs.
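The reputation-weighted voting mechanism described above could look roughly like this minimal sketch. The agent names, reputation scores and the simple yes/no aggregation are all illustrative assumptions, not a proposal from the comment itself.

```python
# Toy sketch of reputation-weighted consensus among AI agents: each agent
# carries a reputation score earned from human scoring, and a proposal
# passes if the reputation mass voting "yes" exceeds the mass voting "no".

def weighted_consensus(votes, reputation):
    """votes: {agent: bool}; reputation: {agent: non-negative score}.
    Returns True if the reputation-weighted vote favours the proposal."""
    yes = sum(reputation[a] for a, v in votes.items() if v)
    no = sum(reputation[a] for a, v in votes.items() if not v)
    return yes > no

reputation = {"agent_a": 0.9, "agent_b": 0.4, "agent_c": 0.3}
votes = {"agent_a": True, "agent_b": False, "agent_c": False}

# agent_a's strong reputation (0.9) outweighs the two dissenters (0.7 total)
print(weighted_consensus(votes, reputation))  # True
```

A real mechanism would also need to address how reputation is earned, decays, and resists gaming, which is where most of the difficulty lies.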

Reply to comment by Matthew Newman

Submitted by Vivek NALLUR on Thu, 28/06/2018 - 13:28

That doesn't work when each autonomous machine is individually (correctly) recommending or making an ethical action, but the joint effect of multiple actions is unethical. This could be a very real concern when we are surrounded by multiple autonomous machines, each making decisions about (say) employment, health, resource allocation in society, etc. Luciano Floridi, I believe, talked about distributed morality in the context of human decision-making with regard to the environment. The 'emergence' of distributed morality will be important to detect and predict as we go forward.

Consensus based decisions are plausible but not necessarily scalable or foolproof.
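The failure mode described above, where individually acceptable actions combine into an unacceptable outcome, can be shown with a toy resource-allocation example. The per-agent limit, shared capacity and allocation figures are all invented for illustration.

```python
# Toy illustration: three agents each allocate a shared resource within an
# individually "ethical" per-agent limit, yet the combined allocations
# exceed the shared capacity, so the joint outcome is harmful.

PER_AGENT_LIMIT = 40   # each agent treats <= 40 units as acceptable
SHARED_CAPACITY = 100  # but the shared resource only holds 100 units

allocations = {"agent_a": 40, "agent_b": 35, "agent_c": 40}

individually_ok = all(a <= PER_AGENT_LIMIT for a in allocations.values())
jointly_ok = sum(allocations.values()) <= SHARED_CAPACITY  # 115 > 100

print(individually_ok, jointly_ok)  # True False
```

No single agent violates its own rule, which is exactly why per-agent ethics checks (or reputation-weighted voting among such agents) cannot detect this class of problem on their own.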

Submitted by Vanja Skoric on Wed, 27/06/2018 - 16:50

The European Center for Not-for-Profit Law (ECNL) and the European Civic Forum wish to provide the following topic for discussion for the EU AI Expert Group:

Discuss the opportunities and challenges of Artificial Intelligence's (AI's) impact on civic space, civil society activities and civic fundamental freedoms. These specifically include the right to freedom of association, the right to freedom of peaceful assembly, the right to freedom of expression, the right of access to information, the right to public participation, fundraising options, donor and beneficiary relations, etc. What policies and principles are necessary to address such opportunities and challenges?

Explanation: 

Within the scope of protecting civic space on the ground, we are increasingly coming across the adoption of innovations in the area of artificial intelligence (AI) that are creating new opportunities but also challenges to human rights and freedoms. AI's potential impact on freedom of assembly, freedom of association, freedom of information, freedom of expression, political participation, due process and the full range of other civil, economic, social and cultural rights needs to be discussed and explored further. The Guidelines on Automated Individual Decision-making and Profiling for the Purposes of the New General Data Protection Regulation, recently issued by the European Commission Data Protection Working Party, have also cautioned that "profiling and automated decision-making can pose significant risks for individuals' rights and freedoms which require appropriate safeguards." For example, algorithms used by some governments in law enforcement and access to due process have the potential to reinforce and deepen systemic discrimination or replicate existing biases. In addition, algorithms that support digital echo chambers are enabling the spread of 'fake news', deepening digital bubbles and impacting electoral processes.

Governments are likely to deploy AI systems to regulate freedom of assembly, freedom of association, access to information and freedom of expression: e.g., AI systems will likely be adopted to automate the registration process for civil society organizations or notifications for peaceful assemblies. While an automated process would be efficient, if not properly designed it could embed biases and constraints inconsistent with freedom of association and freedom of assembly. For example, the reasons for declining a registration or a peaceful assembly could remain in the "black box" without adequate possibility for full review.

In addition, if governments purchase or lease AI systems from companies, will the algorithms be subject to freedom of information reviews and allow public oversight? The issues of transparency, accountability and liability of such an approach will be hugely important. Moreover, AI systems can be used to track online speech critical of government policies, which in turn could be used to produce algorithms to stem the flow of critical speech. AI systems could also be designed to self-learn to identify and monitor people who are likely to participate in peaceful gatherings or associations critical of governmental authorities. Authorities could then use this information to prevent or limit such freedoms.

On the other hand, AI provides additional opportunities for charitable fundraising and for interaction with donors, beneficiaries and citizens on civil society actions. AI applications can help promote public involvement in public policy, inclusive participation and more transparent administrative decision-making. Finally, the World Economic Forum's White Paper also called for integrating human rights into machine learning. There will be a need for a standardisation system for verifying and monitoring AI systems in the areas of safety, transparency, comprehensibility, accountability and compatibility with fundamental rights and freedoms.

 

The EU has increasingly found civil society under threat and, at the same time, considers civil society a crucial component of any democratic system. That is why the EU institutions have developed a number of policies on protecting civil society, which should be complemented by exploring and identifying the AI features that will impact fundamental freedoms. AI is a dual-use technology: its potential benefits are massive, but its potential for harm is also great. We see AI's great potential to benefit civic freedoms, promote inclusiveness and serve the needs of local communities. In doing so, we also see the need to minimise AI's potential for abuse, harm and threats to fundamental freedoms and human rights.

Many thanks! 

Vanja (ECNL)

Reply to comment by Vanja Skoric

Submitted by Richard Krajčoviech on Wed, 25/07/2018 - 15:12

Good point, Vanja, thanks a lot. My opinion is that any "black box" excuses must be prohibited, and users and vendors of AI must share responsibility for compliance with any existing regulation (and for adaptation to new ones), regardless of whether they use AI or not. Happy to participate in such discussions.