From What to How: Operationalising AI Ethics

From what to how: moving from high-level AI ethics principles to practice, i.e. operationalising AI ethics in business processes and in technical solutions, is one of the most important questions around AI development, yet it has rarely been addressed so far. Many say it is not even possible to operationalise ethics at the algorithmic level. What are your thoughts on this?

What are the best-practice examples you are familiar with or are working on yourself? I'd love to hear about them!

IEEE SA, for example, is working very actively on standardisation through its Global Initiative on Ethics of Autonomous and Intelligent Systems.

I believe the Global Partnership on AI, which was launched this summer, is working on something similar as well.

Meanwhile, I'm sharing the event series that I have just started with my non-profit, inviting experts from various locations, backgrounds and experiences to share their best practices with a global audience. I myself look forward to listening to their insights. Feel free to join on 19 November, 7pm CET.

Registration: https://www.eventbrite.de/e/responsible-ai-for-data-science-registratio…


There is also an interesting paper by researchers at the Oxford Internet Institute: From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices https://link.springer.com/article/10.1007/s11948-019-00165-5


Let me know your thoughts! 

Comments

Submitted by faiz ikramulla on Sat, 21/11/2020 - 13:39

Interesting! Thank you, but I missed the meeting, as I am only seeing this post now. I am in one of those IEEE working groups, and it is great. However, a few comments:

Broad range of experience, but limited on the data science/technical side. Most participants are not data scientists/engineers/developers, or users. These groups must be represented in any ethics initiative, in addition to the usual politicians, do-gooders, subject-domain experts, etc.

Broad range of age: it seems to me that kids these days care a lot less, or do not mind at all, yet the impacts are experienced by everyone from the elderly and the middle-aged down to teenagers. So one must communicate ethics across the age range, especially to the kids who don't mind, so that they are aware, can grow up to be more responsible users, and maybe even become the next generation of ethically focused developers. I think we are seeing early indicators of this in universities now: younger academics are more in tune with what the problem is and how they could contribute to possible solutions, academically.

Lack of a global standard: every nation-state has its own form of regulation. How can one supplier of AI/ML/DS develop to be compliant with all of them? This will lead to unnecessary or even accidental inequities in the deployment of technology. The AI world should learn from industries that have solid global standards (e.g. 3GPP for telecom). Once upon a time, I could not use my mobile in other countries and had to purchase a temporary one just for travelling; global standards have made it possible to plug-and-play in telecom, and it should be even easier in AI (software)!