The Ethical Anatomy of Artificial Intelligence

The UN Secretary-General António Guterres has just established a High-level Panel on Digital Cooperation to foster a “broader global dialogue on how interdisciplinary and cooperative approaches can help ensure a safe and inclusive digital future for all.” I have spent the last five years thinking about how cooperation and governance apply to my field of expertise, AI and emerging technologies.

What does “cooperation” mean in a world where only a small proportion – about 0.004% of the global population – have the knowledge and power to build machines intelligent enough to potentially decide who wins on the job market, who can obtain insurance or has the upper hand in the courtroom, or whose DNA or behavioural patterns will be mined by marketers? Never have we faced a technology like AI: its design is in the hands of a few, most of them born into societies of abundance, yet it is powerful enough to shape multifaceted aspects of our lives. This asymmetry of knowledge and power raises significant challenges for global cooperation.


Submitted by Richard Krajčoviech on Fri, 10/08/2018 - 15:37

Thank you, Eleonore, for posting this. I think many of the issues mentioned fall within personal data protection. GDPR is very modern in this sense, but its interpretation in the AI world still needs a lot of work. A picture of a face is personal information and should be treated that way. The question is whether a neural network trained to recognize a specific face is personal information. I think it is, because the AI model is "information relating to an identified or identifiable natural person" (the model is information on how to identify a person's face from a picture), especially if it is accompanied by the identified person's name, another direct identifier (an ID), or an indirect identification (a vandal that destroyed something somewhere). So anybody who is training a model to recognize a specific face should have a legal basis to do so (Article 6 of GDPR).

The same applies to recognition of features protected by GDPR, or by other anti-discrimination legislation, like gender, race, nationality, etc. It is fine to do it for research, but it is (already) prohibited to use such information, regardless of whether it is acquired from the data subject, from a third party, by an AI system, or by any other means, and it is irrelevant whether it is stored or inferred on demand.