Artificial Intelligence: Humans must stay in command

Policy Brief "Artificial Intelligence - humans in command"

Lack of accountability, potential misuse in HR processes and digital data monopolies must be regulated – and social consequences anticipated

Artificial Intelligence (AI) may improve the efficiency and reliability of industrial processes. It could thereby support the market position of European companies and thus sustain high-quality employment in a globally competitive world.

However, it raises a number of major concerns for European workers in industry: (1) the capacity of machine-learning systems to supervise workers systematically and permanently; (2) the unexplainable nature of decisions or recommendations made by these systems; (3) their capacity to guess or to anticipate sensitive personal data of workers; (4) the rules to access industrial data, which can lead to digital monopolies; and (5) the volume of employment and the qualification of tasks remaining for humans.

Additional concerns relate to: (6) the inherent conservatism that algorithms based exclusively on past experience entail; (7) the loss of control on self-learning systems after delivery by the producer; and (8) the unreliability of a system that can use its own output as teaching material.

For each of these concerns, industriAll Europe makes suggestions for policy.

Tags: jobs, fairness, work, labour, trade unions, value added


Submitted by Laurentiu VASILIU on Sat, 09/28/2019 - 09:33

I'd say the title of this policy brief is somewhat misleading, as it suggests that humans are (already?) struggling to stay in control of AI. AI is itself a fuzzy concept, since it comprises a large set of independent and interconnected IT technologies at various maturity levels.

I'd consider the following title more appropriate: "Artificial Intelligence: Humans must not abuse and misuse its capabilities and potential". The overall problem is to achieve a solid implementation of ethics in AI, since human command exists anyway at the design and implementation phases of AI technologies.

And the underlying assumption that humans somehow make better decisions does not reflect reality. Since HR was given as an example, just think how many hiring and firing decisions are made by HR professionals and senior managers mainly on the basis of 'like/dislike' emotions, which are intuitive and non-auditable (unexplainable) from a rational point of view.

There are more examples of how human decisions are sub-optimal, but I will keep this post short.

And since cognitive psychology tells us that human intuition consistently leads to wrong decisions, it is obvious that for very complex decision-making with far-reaching implications we humans need support, and sometimes need to be challenged over decisions that can be biased by emotions that... AI doesn't have.


In reply to Laurentiu VASILIU

Submitted by Laurent ZIBELL on Mon, 09/30/2019 - 10:36

Our point in the document is not that humans make intrinsically better decisions than machines, but that humans can justify their decisions, allowing a discussion to take place about the legitimacy of the decision and its justification, and offering a possibility for redress.

In the current state of the art, machine-learning-based systems do not make the justification of their decisions (or recommendations) understandable: they are not explainable. They therefore lack an essential ethical feature of any human decision: the possibility to appeal it and obtain redress.

This lack of explainability also turns "recommendations" by machine-learning-based systems into de facto decisions, because no human will take the risk of challenging the "recommendation" of the machine, whereas s/he will be covered for having followed it. This means that even if humans nominally take the ultimate decision, in effect they do not: hence the legitimacy of the demand that they "stay in control".

In reply to Laurent ZIBELL

Submitted by Laurentiu VASILIU on Mon, 09/30/2019 - 13:24

Thanks for your reply, this is a very interesting discussion that sits at the core of decision making challenges these days.

The existence of discussions (on the legitimacy of decisions) unfortunately guarantees neither optimal decisions nor ethical approaches. Humans also introduce extra subjective factors (hierarchical inequality at the discussion table, power positions, political interests, etc.) that can bias whatever decision-making formalism is being used.

Then, while *machine learning* (ML) technologies may have the property of non-explainability, they are just a subset of a very large AI domain (100+ non-ML AI technologies). Machine learning is not the only technology or research area that can be used for decision-making. There are other technologies, methods and hybrid AI/non-AI approaches that can be applied to decision-making and that can be fully automated *and* explainable.

"No human will take the risk of challenging the 'recommendation' of the machine" is true only if the AI running on a machine is implemented in an autocratic manner, allowing no further feedback. Nothing stops the AI designer from allowing multiple decision formalisms to run in parallel, so that the human user can say: "I disagree with the decision made by formalism A; run decision formalism B instead." The human user can then compare the results and ask for a formalism C if A and B are still not satisfactory.
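The multi-formalism design described here could be sketched roughly as follows. All names and decision rules are hypothetical, the candidate data is invented, and the formalisms are tried in turn rather than literally in parallel; this is a minimal illustration of the idea, not a reference implementation:

```python
# Sketch: several decision "formalisms" over the same input, with the
# human user able to reject a result and request the next formalism.
from typing import Callable, Dict

Decision = str
Formalism = Callable[[dict], Decision]

def rule_based(candidate: dict) -> Decision:
    # Hypothetical explainable rule: a simple experience threshold.
    return "hire" if candidate["years_experience"] >= 3 else "reject"

def score_based(candidate: dict) -> Decision:
    # Hypothetical alternative formalism: a weighted score.
    score = 0.6 * candidate["years_experience"] + 0.4 * candidate["test_score"]
    return "hire" if score >= 4.0 else "reject"

FORMALISMS: Dict[str, Formalism] = {"A": rule_based, "B": score_based}

def decide(candidate: dict, accept: Callable[[str, Decision], bool]) -> Decision:
    """Try each formalism; `accept` stands in for the human user's veto."""
    decision = "no decision"
    for name, formalism in FORMALISMS.items():
        decision = formalism(candidate)
        if accept(name, decision):   # the human agrees: stop here
            return decision
    return decision                  # last result if none was accepted

candidate = {"years_experience": 2, "test_score": 9}
# A user who distrusts formalism A's output asks for formalism B instead:
result = decide(candidate, accept=lambda name, d: name == "B")
print(result)  # formalism B's weighted score (4.8) yields "hire"
```

Here the "autocratic" design would correspond to calling only formalism A and hiding `accept` from the user; exposing the veto keeps the human in the loop, as argued above.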

As said, it all comes down to the AI design phase...