Introduction to AI Ethics

Dear all,

This is a background presentation on AI Ethics, written as an introduction to the knowledge domain and submitted to the group for feedback.

Summary: There are 9 areas that matter in AI Ethics, all of which relate to the reputational, privacy and liability risks inherent in AI development. The central argument is that AI Ethics is directly tied to the reduction of risk (“AI Safety”) and can drive policy changes that benefit the business. It is shown that, with a guiding set of values, these risks can be mitigated and the potential of AI grasped profitably. Around the world, governmental bodies are drafting new legislation on AI risks in response to many public mistakes.



Length: 4500 words in the speaker notes.

Format: PDF Print of PowerPoint with speaker notes.

 

If you have any comments, please contact me.

 

All rights are reserved; please do not share the document outside the group.

 

https://drive.google.com/open?id=1O-svVmPE8kl7O2l5uGyrFq3rzbZH7nZa

 

Tags
risk, AI, machine learning, AI ethics

Comments

Submitted by faiz ikramulla on Fri, 11/10/2019 - 23:15

This is great, thank you for sharing. I am a member of three IEEE standards working groups related to tech ethics, two specifically focused on ML/DL/AI. While they are technically focused, the challenge has been how to openly approach the types of issues you present without slowing technological progress. Would you have any advice or recommendations for non-emotional, impersonal ways to define the "quality" of AI? Thanks!

In reply to faiz ikramulla's comment

Submitted by James Bell on Tue, 15/10/2019 - 14:18

Hi Faiz, yes - this is the area I am working hard on, and I believe that values only work when linked to practical advice and methods. I previously set up such methods for the KPMG partnership I worked in, and there are a number of steps I recommend. I'd be happy to discuss them once I have documented them.

Submitted by faiz ikramulla on Fri, 11/10/2019 - 23:24

My experience: "explainability" can be metricized and compared, statistically, in a non-emotional, impersonal way, even with DCNNs. I am not aware of how much this is stressed in the Data Science world, and I am glad to see "explainability" on your slides as one of the 9 values.
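One concrete, impersonal metric of this kind is permutation feature importance: shuffle a single input feature, measure the drop in the model's score, and repeat to obtain a mean and spread. The sketch below is only a minimal illustration under assumed choices (scikit-learn's breast-cancer dataset and a random forest stand in for any classifier with a score function, including a DCNN); none of it comes from the slides themselves.

```python
# Minimal sketch: permutation feature importance as a statistical,
# model-agnostic explainability metric. Dataset and model are
# illustrative assumptions, not taken from the presentation.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
baseline = model.score(X_test, y_test)  # accuracy with intact features

rng = np.random.default_rng(0)
n_repeats = 10
for j in range(X_test.shape[1]):
    drops = []
    for _ in range(n_repeats):
        X_perm = X_test.copy()
        rng.shuffle(X_perm[:, j])  # destroy only feature j's information
        drops.append(baseline - model.score(X_perm, y_test))
    # Mean accuracy drop (with its spread) is a repeatable importance
    # score: the bigger the drop, the more the model relies on feature j.
    print(f"feature {j:2d}: importance = {np.mean(drops):+.4f} "
          f"± {np.std(drops):.4f}")
```

Because the procedure only needs a prediction score, it applies unchanged to a deep network's predict function, which is what makes it comparable across very different model families.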

Submitted by Matthias Gerberding on Thu, 17/10/2019 - 10:37

Dear James, your slides are very web-friendly and accessible, thank you. My question to you: in a discussion between four researchers and about 180 citizens that my institution hosted last night, the question came up as to where emotions come into the picture. If emotions are unique to humans and alien to any machine, and if at the same time humans' moral assessments are fed by emotions, what could an ethics that spans both humans and machines do about emotions? Thank you. Matthias G

In reply to Matthias Gerberding's comment

Submitted by James Bell on Thu, 17/10/2019 - 10:49

Certainly emotions are unique to humans, and for two reasons. Firstly, humans have agency: the ability to act by choice. Secondly, and in contrast, humans have natural drives, which foster competition for passing on DNA.

We could theoretically produce a machine able to experience both of those things. Frankly, that would be nightmarish and something akin to Blade Runner.

As it stands today, computers lack the basic human drives, so any "displayed" emotions will only ever be a simulation.

Morality is an agreed-upon framework for dealing with others. Many humans have so much agency that they have died rather than "betray" their moral code. In society, morality is determined in two ways: either by law codes (such as constitutions) or by case law (such as in the UK). We could easily train this code into any future machine, and it would behave morally, according to our creed. What it won't do is hold to that code for the same reasons humans do.

Should a machine reach true consciousness (for which, incidentally, there is no working definition), then we will have created new life, possibly distinct from the drives that act on us. That would be extremely dangerous, as we wouldn't be able to predict what it wants. Surely that would lead to one side being destroyed.

In the end, what we won't be able to do to a machine is punish it. Punishment means nothing to something without agency.