Principled Artificial Intelligence by Berkman Klein Center, Harvard

Dear colleagues,

I would like to draw your attention to the following data visualization on Principled Artificial Intelligence, produced by the Berkman Klein Center for Internet and Society at Harvard University:

https://ai-hr.cyber.harvard.edu/images/primp-viz.pdf

The paper is not out yet, which I think makes it all the more interesting to discuss how the principles differ between the public and the private sector, and which principles are considered most important.

The Guidelines for Trustworthy AI are taken into account too, and the visualization shows that they address almost all the principles (Privacy, Accountability, Safety & Security, Transparency & Explainability, Fairness and Non-discrimination, Human Control of Technology, Professional Responsibility, Respect of Human Values) but not Human Rights. It is quite interesting that Human Rights are kept separate from related notions such as Respect of Human Values. For example, respect for human dignity is included under Respect of Human Values, yet it should also be considered a Human Right.

I would also add another important principle, "Contestability", which covers the capacity for a human to give feedback to improve the quality of AI and to ensure justice and equity. This feature would be really important for developing human-centric AI and for ensuring that no harm is done.

What do you think about this graph?

Tags
AI Ethics, Trustworthy AI, Principles

Comments

Submitted by Norbert JASTROCH on Sun, 03/11/2019 - 12:27

To all those who are interested in more about human nature and dignity, I recommend:  Giovanni Pico della Mirandola: De hominis dignitate/Über die Würde des Menschen, Reclam 2009, a marvellous text about freedom at the center of human dignity.

Submitted by faiz ikramulla on Wed, 13/11/2019 - 22:05

Hi Francesca,

I like your posting and assessment. The graph is too complicated to follow, in my opinion. I do like your point about adding "contestability", as this is a feature of a new law in the US State of Illinois: it gives the citizen/individual the right to contest a facial recognition system's conclusion about them. This is significant.

The other point I would add is "reproducibility"... this is a key quality metric for any system and should be applied to ML/AI as well.

And finally, I totally agree with your last point, which I would categorize as "continuous improvement", also an important feature in measuring the quality of systems over time.

Thank you for the posting!