Responsibility versus Good Intentions – does AI need ethical guidelines that embrace the principle of responsibility?

The draft version of the ethical guidelines, as presented by the HLEG on December 18th, 2018, suggests five principles which developers, deployers, users, and regulators should follow in order to uphold the purpose of human-centric and Trustworthy AI: the principles of beneficence, non-maleficence, human autonomy, justice, and explicability. Together they are intended to form a framework for AI aimed at guaranteeing that this new technological field is trustworthy.

Like the Oviedo Convention for the Protection of Human Rights and Dignity in the biomedical field, the ethical guidelines for AI are an ambitious attempt to set limits on research and development in a technology that is bound to bring transformations with potentially deep impact on our human lives and on the constituents of society. It is far from likely that such guidelines will become globally accepted in the near future. The question thus arises whether it would not be helpful to reduce these guidelines to the smallest common denominator, in the hope of making them easier to accept.

There is good reason to doubt that this would work. As Max Weber argued for the political sphere, and Hans Jonas for the technological field (to mention only two renowned authors), ethical foundations in our modern civilisation need to incorporate the principle of responsibility; an ethics of good intentions alone is no longer sufficient. We are therefore all called upon to advance an ethics of both good intentions and responsibility, since otherwise the ethical guidelines will remain tentative, noncommittal, and thus ineffective.

That is why we suggest including a sixth principle in the ethical guidelines: that of responsibility.

Norbert Jastroch

Tags: Ethics, Responsibility

Comments

Submitted by GEORGIOS LEKKAS on Wed, 06/03/2019 - 13:29

I strongly support your suggestion. As far as I am concerned, I believe we have a "substantial" responsibility to urgently protect the communal dimension of the human being, which is put at risk by the currently dominant approach that considers the human being a primarily autonomous entity. Georgios Lekkas (See my contribution: AI AS AN APPLIED ETHICS. BUT WHAT KIND OF ETHICS?)

Submitted by Jola DERVISHAJ on Wed, 06/03/2019 - 17:29

Thank you Norbert and Georgios for your suggestions! Following the consultation on the Guidelines, the AI HLEG is currently working on the various aspects to be considered for the principles as well. An updated version of the document will be publicly presented on 9 April.

Submitted by GEORGIOS LEKKAS on Wed, 13/03/2019 - 01:28

Dear friends, what is certainly missing from the Draft is responsibility for the protection of the communal or referential dimension of the human being (= the social principle) in the new AI era. Yours, Georgios

Submitted by Norbert JASTROCH on Wed, 13/03/2019 - 16:05

Vincent,

the multi-faceted discussion you offer in your article is much appreciated - something of a 'must read' for everyone in the AI ethics community.

I can see a number of remarks you make there which clearly support the blending of Verantwortungsethik (ethics of responsibility) and Gesinnungsethik (ethics of conviction) with respect to AI ethics - like 'consideration of risk and prediction of outcomes become prominent' [34 - 35], or 'technology is not ethically neutral' [109], or 'design of technical artefacts has ethical relevance for their use as well' [142 - 143].

I doubt, though, that the principles of 'doing good' and 'doing no harm' - or beneficence and non-maleficence - in the ethical guidelines can capture this in practice. Furthermore, the wording in the proposal - 'by design, AI systems should ...' - suggests a certain indifference as to who is being addressed by this principle.

I would argue for clarification on this point, by explicitly stating the responsibility of the subjects involved in the research, design, and use of AI objects. Otherwise, moral as well as legal liability becomes diffuse and easy to misuse.

Regards,

Norbert