Z-inspection: Towards a process to assess Ethical AI

 

We at the Frankfurt Big Data Lab at the Goethe University of Frankfurt are working on the definition of an assessment process for Ethical AI, which we call Z-inspection.

 

We decided on open development and incremental improvement to establish our process and brand (“Z Inspected”).

 

We are currently assessing a real use case in healthcare (cardiology): https://cardis.io

 

We will look for additional real AI use cases soon. 

 

The recording of our latest presentation is available here (30 min.):

https://www.youtube.com/watch?v=jrwuZvt_H7k&feature=youtu.be

 

A copy of the slides is available here:

http://cognitive-science.info/wp-content/uploads/2019/10/CSIGTalkZicari.20191031.pdf

 

More info on our research work on AI and Ethics is available here:

http://www.bigdata.uni-frankfurt.de/ethics-artificial-intelligence/

 

and this is our team: http://www.bigdata.uni-frankfurt.de/people/

 

The benefits of having such an AI Ethical assessment process in place are clearly explained in [1]: "If governments deploy AI systems on human populations without framework for accountability, they risk losing touch with how decisions have been made, thus making it difficult for them to identify or respond to bias, errors, or other problems. The public will have less insight into how agencies function, and have less power to question or appeal decisions." 

 

An Ethical assessment "would also benefit vendors (AI developers) that prioritize fairness, accountability, and transparency in their offering. Companies that are best equipped to help agencies and researchers study their system would have a competitive advantage over others. Cooperation would also help improve public trust, especially at a time when skepticism of the societal benefits of AI is on the rise.” [1]

 

The aim of our research work is to help contribute to closing the gap between “principles” (the “what” of AI ethics) and “practices” (the ”how”).

 

The project is non-commercial.

Z-inspection is open access and distributed under the terms and conditions of the Creative Commons (Attribution-NonCommercial-ShareAlike CC BY-NC-SA) license (https://creativecommons.org/licenses/by-nc-sa/4.0/).

 

In our opinion, one cornerstone of being able to conduct a neutral, effective AI Ethical assessment is the absence of conflicts of interest, both direct and indirect.

 

This means:

1. Ensure that no conflicts of interest exist between the inspectors and the entity/organization to be examined;

2. Ensure that no conflicts of interest exist between the inspectors and the vendors of tools, toolkits, or frameworks to be used in the inspection;

3. Assess potential bias of the team of inspectors.

This results in:

→ GO if all three conditions above are satisfied.

→ Still GO, with restricted use of specific tools, if condition 2 is not satisfied.

→ NoGO if condition 1 or condition 3 is not satisfied.
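The decision rule above can be sketched as a small function. This is a minimal, hypothetical illustration only; the function and argument names are ours, not part of the Z-inspection process itself:

```python
def assess_conflicts(no_coi_with_entity: bool,
                     no_coi_with_tool_vendors: bool,
                     inspector_bias_acceptable: bool) -> str:
    """Return the verdict for the three conflict-of-interest checks.

    Conditions 1 (entity) and 3 (inspector bias) are blocking;
    condition 2 (tool vendors) only restricts which tools may be used.
    """
    if not (no_coi_with_entity and inspector_bias_acceptable):
        # Failing condition 1 or 3 blocks the inspection entirely.
        return "NoGO"
    if not no_coi_with_tool_vendors:
        # Failing condition 2 still allows the inspection to proceed,
        # but with restricted use of the affected tools.
        return "GO (restricted tool use)"
    return "GO"
```

For example, a team with no conflicts at all gets `"GO"`, while a team whose only issue is a tie to a tool vendor gets `"GO (restricted tool use)"`.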

 

 

Reference

 

[1] Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability, AI Now Institute, April 2018.

https://ainowinstitute.org/aiareport2018.pdf

 

Team Members:

 

Roberto V. Zicari (*), Irmhild van Halem (*), Matthew Eric Bassett (*), 

Karsten Tolle (*), Timo Eichhorn (*), Todor Ivanov (*), Jesmin Jahan Tithi (**), 

Thomas Ploug (***), Georgios Kararigas (+), Romeo Kienzler (§), Marijana Tadic (++).

 

 

(*)  Frankfurt Big Data Lab, Goethe University Frankfurt, Germany.

 

(**) Intel Labs, Santa Clara, CA, USA.

 

(***) Centre for Applied Ethics and Philosophy of Science, Aalborg University Copenhagen, Denmark. 

 

(+) German Centre for Cardiovascular Research, Charité University Hospital, Berlin, Germany

 

(§)  IBM Center for Open Source Data and AI Technologies, San Francisco, CA, USA

 

(++) Cardiology Department, Charité University Hospital, Berlin, Germany.

 

Contact:

                                                                   

Prof. Roberto V. Zicari 

Founder

Frankfurt Big Data Lab 

Goethe University Frankfurt

http://www.bigdata.uni-frankfurt.de

 

Comments

Submitted by Juan LLORET on Sat, 12/21/2019 - 19:17

Hi, good job. Many thanks for sharing. We will analyze it in depth and try to write an article related to this framework.

Please take a look at this article regarding an AI framework:

A framework for AI: the use of machines to reduce or eliminate the cognitive burden of decision-making for a human operator

"The concept of framework applied to artificial intelligence to support commercial applications, and even military developments, is an area of special scientific interest. Therefore, we will collect a proposed framework and subsequent application of the Army Artificial Intelligence (AI) Task Force."

 

https://editorialia.com/2019/12/21/un-framework-para-la-ia-el-uso-de-maquinas-para-reducir-o-eliminar-la-carga-cognitiva-de-la-toma-de-decisiones-para-un-operador-humano/

(Please use the Google Translate option in the upper right corner.)

Regards, Juan A.