Some thoughts on the Assessment List

I am not sure if this is the right place to report this, but I would like to add my two cents to the revision of the assessment list, based on comments I gathered from my AI class.

In general, some students and I had the impression that this list may be too strict and may hinder the development of AI. In particular regarding:

- 6. Societal and environmental well-being - Social impact – "signals that its social interaction is simulated": this goes against a lot of research in Social Robotics. Robots and artificial agents simulate many aspects of communication in order to be credible. Target users should NOT know that the interaction is simulated, in order to keep the "suspension of disbelief". On the other hand, some emphasis should be added regarding the problem of nudging by the developers, and regarding an explicit warning to the final user not to take any personal advice from the AI (possibly this could be added into 4. Transparency - Communication).

Other small things:

- 1. Human agency and oversight - Human agency: sounds a bit like the confinement of industrial robots into cages; this should be clarified.

- 2. Technical robustness and safety - Resilience to attack and security: how can we know unexpected situations in advance?

- 2. Technical robustness and safety - Fallback plan and general safety – "unintentional behaviour of the AI system": does an AI system have an intention? How is unintentional behaviour defined?

- 4. Transparency - Explainability: why should the business model be explainable? It is generally sensitive information of a company.

That's all.

Best regards

Gabriele Trovato

School of International Liberal Studies

Waseda University

Tokyo, Japan



 

Comments

Submitted by Matthieu Vergne on Sun, 02/02/2020 - 15:58

These are my own opinions, but since the post asks for clarifications, I assume this answer provides a reasonable contribution.

Suspension of disbelief makes sense from an entertainment perspective, and more generally when it is wanted, but this is not what AI in general, and these guidelines in particular, are about. Artificially supporting "suspension of disbelief" is called deceiving, which is not wanted and, moreover, already legally forbidden. This aspect should be considered from that perspective.

Human agency may sound like the confinement of industrial robots into cages, but that is basically the idea: the aim of producing robots is to serve humans, in other words to create artificial slaves. The term is surely strongly negatively connoted, but only because we tend to apply it to humans, animals, or other beings that we, today, tend to believe should be free, not enslaved. Robots are a means to achieve that, by providing a "tool" to which we can delegate the tasks we want to delegate. Slaves help to empower their masters. As long as we consider artificial intelligence as a means to produce artificial slaves in a morally acceptable way, we can see human agency as a way to ensure that these artificial beings remain a means to empower humans.

Resilience to attack and security: how do we know unexpected situations? We can't. Unexpected means we didn't expect them, so we can't know them; otherwise they would not be unexpected. One should accept that there are unknowns, and thus be ready to evolve and consider what we didn't consider before. Dealing with the unexpected is all about being able to improve continuously.

Regarding intention, I consider that there is no such thing within an AI system. I am not sure it is different in the guidelines, especially if we consider their overall wording. We can interpret "unintentional behaviour of the AI system" as the behaviour of the system that humans have unintentionally put into it.

For explainability, two points. First, it is not about the explainability of the business model, but the explainability of the behaviour of the AI system; that explanation may be supported by the business model, but not necessarily. Second, the latter case is already considered in the guidelines: transparency and explainability do not mean public access. Audits can be carried out by independent, professional actors, in the same way certifications are delivered. Protection of the business-model information obtained during the process is then a contractual detail.

Submitted by Daniel Draghicescu on Thu, 06/02/2020 - 23:35

I would like to partially address some of the points raised above.

"Human agency" is a psychological term which expresses the capacity of humans to make choices and to examine the course of events following those choices. This is the way I understand the terminology within the Assessment List. It is in no way a negative (or positive, for that matter) term, as it doesn't have any intrinsic ethical value. Thus, it is not about the "confinement" of future robots, but about the capacity of humans to understand, challenge and bypass an AI decision. Because, in the end, a decision should be human-centred.

Regarding "unintentional behaviour of the AI system" – as this term is under the "Fallback plan and general safety" list, I believe it should be read as "that behaviour of an AI system which wasn't expected when designing the system". Here, "behaviour" is not "human behaviour" as it is commonly understood, but the defined and expected execution of a software component. This means that an AI system does not have intention; the intention is that of the humans who designed the system in the first place, a system which should process some inputs, execute in a certain way and provide some outputs – this is the behaviour of the system.

Submitted by Gabriele Trovato on Fri, 07/02/2020 - 01:20

OK, thanks everyone.