Challenge for Responsible AI, submission deadline Dec 15

https://mailchi.mp/luca-d3/challenge

Challenge – Introduction and Objectives


Machine Learning and Artificial Intelligence are increasingly applied across our society, and this use is predicted to grow exponentially in the coming years. Machines will make more and more decisions for us, yet recent research and publications have demonstrated several undesired consequences of AI, generating concerns about bias & unfair discrimination and black-box algorithms. A lot has been discussed about data privacy & security, while the debate around the ethical use of data is only starting. Quite often, the discussion about data ethics happens on a case-by-case basis; in this challenge we want to promote a more structural approach.

The objective is twofold:



1) To find out whether those concerns are limited to a few highly visible cases, or whether they are potentially happening on a much larger scale 

 

We all know cases such as COMPAS, the crime-predicting algorithm that was more likely to incorrectly categorize black defendants as having a high risk of reoffending. However, the real question is whether these are incidental cases or whether this happens at scale:

  • Detect, explain and visualize cases of unfair discrimination due to improper use or implementation of AI systems

  • Identify and visualize Open Data sets that contain undesired bias potentially affecting protected groups
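One simple, widely used signal for the kind of detection these two items call for is the disparate impact ratio: the favorable-outcome rate of the least-favored group divided by that of the most-favored group. The sketch below computes it on a small hypothetical dataset; the `records` data, the field names, and the commonly cited 0.8 threshold are illustrative assumptions, not part of the challenge.

```python
# Sketch: disparate impact on a toy dataset (hypothetical data,
# not drawn from COMPAS or any real source).
from collections import defaultdict

def disparate_impact(records, group_key, outcome_key):
    """Ratio of favorable-outcome rates: lowest group rate / highest.
    Values well below 1.0 (e.g. under the common 0.8 rule of thumb)
    suggest potential disparate impact."""
    counts = defaultdict(lambda: [0, 0])  # group -> [favorable, total]
    for r in records:
        counts[r[group_key]][0] += r[outcome_key]
        counts[r[group_key]][1] += 1
    rates = {g: fav / tot for g, (fav, tot) in counts.items()}
    return min(rates.values()) / max(rates.values())

# Toy records: group A is approved 3 times out of 4, group B once out of 4.
records = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 1}, {"group": "A", "approved": 0},
    {"group": "B", "approved": 1}, {"group": "B", "approved": 0},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]
ratio = disparate_impact(records, "group", "approved")  # 0.25 / 0.75 = 1/3
```

A ratio of 1/3 on this toy data falls far below the 0.8 threshold, flagging the outcome distribution for closer inspection.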

 

The second objective of this challenge is:

 

2) To develop tools and/or algorithms that help detect and mitigate the concerns  

 

The ethical dimension of artificial intelligence is quickly gaining traction in the press, in research, and in think tanks. Consequently, some of the top companies in this industry are developing tools for detecting and removing bias from AI. Not only have companies such as Accenture, Microsoft and Facebook created their own proprietary tools, but open-source tools have also been released. For instance, IBM launched AI Fairness 360, an open-source library to help detect and remove bias in machine-learning models and datasets. Topics of interest for this objective therefore include, but are not limited to:

 

  • Tools for explaining the conclusions reached by an AI algorithm, mitigating the fear of "unexplainable" AI

  • Tools to detect bias in data sets related to sensitive data (impacting protected groups)

  • Tools to detect correlations in data sets between ordinary variables and sensitive variables

  • Tools to re-identify anonymized data of public data sets

  • Tools to detect unbalanced algorithmic outcomes (false-positive and false-negative rates) across population subgroups

  • Methods & tools for providing an "ethical" score of data sets