Strong (and hard) words from our friends at the Montreal AI Ethics Institute regarding publication norms in Artificial Intelligence

 


 

"The history of science and technology shows that seemingly innocuous developments in scientific theories and research have enabled real-world applications with significant negative consequences for humanity. In order to ensure that the science and technology of AI is developed in a humane manner, we must develop research publication norms that are informed by our growing understanding of AI's potential threats and use cases. Unfortunately, it's difficult to create a set of publication norms for responsible AI because the field of AI is currently fragmented in terms of how this technology is researched, developed, funded, etc. To examine this challenge and find solutions, the Montreal AI Ethics Institute (MAIEI) collaborated with the Partnership on AI in May 2020 to host two public consultation meetups. These meetups examined potential publication norms for responsible AI, with the goal of creating a clear set of recommendations and ways forward for publishers." — MAIEI

 

Full report in The Bible of AI (TM) journal: https://editorialia.com/2020/09/20/r0identifier_00354836602c7b84afed6da18fbd7a38/

 

On arXiv: https://arxiv.org/abs/2009.07262

 

 

Comments

Submitted by Norbert JASTROCH, Mon, 21/09/2020 - 12:28

We have been pointing out the relevance of "responsibility" for the AI field since the EU ethical guidelines were being discussed. Responsibility (and liability) appear to be among the core concepts for risk mitigation in AI.

It is good to see this showing up here as well.