I wrote to you earlier about the SIENNA project (www.sienna-project.eu), a Horizon 2020 project engaged (in part) with the development of ethical guidelines for AI. Here is a brief recap of our approach to the development of these guidelines, and some of the ethical principles for AI that we have identified so far in our project. I would value your feedback on both.
We take a wide-ranging and comprehensive approach to the development of ethical guidelines, over a period of 2½ years. (We are not finished yet.) During this time, we consult 200+ experts in computer science, engineering, social science, and ethics/humanities; we consult 200+ stakeholder organisations (industry, government, consumer organisations, labour unions, human rights organisations, etc.); and we survey the opinions of 11,000 citizens and hold five one-day citizen panels across Europe. We carry out socio-economic impact assessments and foresight analysis of AI, looking 20 years into the future. We analyse the legal framework and examine existing ethical guidelines across the world. We also conduct a systematic review of the academic literature on the ethics of AI. On the basis of all this, we develop an ethical framework (a set of proposals for the responsible development and use of AI), which in turn underpins three sets of guidelines: general policy guidelines, ethical guidelines for professionals in AI (professional ethics), and ethical guidelines for research ethics (committees).
GENERAL ETHICAL PRINCIPLES
We have identified a provisional set of ethical principles that will form a partial foundation for the ethical guidelines for AI that we will be developing. These principles will be the basis for more detailed ethical analysis of issues in AI, including particular techniques, products, and application areas.
Autonomy and control: AI systems should enhance the autonomy of individuals and the control that they have over their own lives, rather than diminish it.
Privacy: AI systems should protect and not harm privacy. (The privacy risks are, however, severe, since AI opens up many new risks because of the many new ways in which it can collect, represent, and process new types of personal information.)
Safety and security: AI systems should be safe for users and third parties, and should also provide security and resist being hacked or compromised.
Dual use and misuse: AI systems should be designed and implemented in a way that anticipates and mitigates misuse and dual use.
Equality and non-discrimination: AI systems should generally decrease inequality in society rather than increase it, and should not by their operation discriminate or engage in unequal treatment.
Good and fair employment: The consequences of AI for employment in different sectors and in society as a whole should be anticipated, and its development and implementation should respect workers' rights and interests and avoid a future in which there is mass unemployment.
Democracy: AI systems should generally promote and uphold democracy and not harm it; decisions that are normally made democratically should not be delegated to AI systems.
Nonanthropomorphic attribution: No human qualities other than information-processing abilities (e.g., consciousness, emotions, pain, moral responsibility, personhood, rights, citizenship) should be attributed to AI systems, unless there is very strong evidence for doing so.