In my previous post “Algorithmic repeatability is a requirement for a Trustworthy AI” I asked whether analog computing could be compatible with the ethical requirement of algorithmic repeatability. After the contributions of Jose Pedro Manzano, Nicolas Beaume and David WROTH, I can restate the question more systematically, with a clear definition of the requirement and a better description of the issues it raises.
The ethical motivation: if a decision open to ethical controversy has been taken with the support of an AI system, it should be possible to investigate all the circumstances that led the system to deliver the result used to inform that decision.
Definition of algorithmic repeatability: the capability of reproducing all the operations carried out by an information processing system during any process of interest.
Issue, machine learning: AI systems involving some kind of machine learning evolve over time as they consume new data. Such a system must record which version of the evolving model was used for each process, so that the version can be restored and the operations reproduced exactly when necessary.
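As a minimal sketch of this idea (all names hypothetical, not a real system), one way to meet the requirement is to tag every prediction with a content hash of the model parameters in force at that moment, so that the exact model version can later be restored for investigation:

```python
import hashlib
import json


def snapshot_model(params: dict) -> str:
    """Serialize the model parameters and return a content hash
    that identifies this exact version of the model."""
    blob = json.dumps(params, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()


class AuditedModel:
    """Toy decision model that records, for every prediction,
    the hash of the parameter set that produced it."""

    def __init__(self, params):
        self.params = params
        self.audit_log = []  # entries of (version_hash, input, output)

    def predict(self, x):
        version = snapshot_model(self.params)
        y = self.params["weight"] * x + self.params["bias"]
        self.audit_log.append((version, x, y))
        return y

    def learn(self, new_weight):
        # The model evolves: later predictions get a new version hash.
        self.params["weight"] = new_weight


model = AuditedModel({"weight": 2.0, "bias": 1.0})
y1 = model.predict(3.0)   # logged under the first version hash
model.learn(5.0)
y2 = model.predict(3.0)   # same input, new model version, new hash
v1, v2 = model.audit_log[0][0], model.audit_log[1][0]
```

In a real system the snapshot would of course cover the full trained state, not a small parameter dictionary, which is exactly where the storage burden discussed below comes from.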
Issue, random functions: some AI algorithms call random value generation functions. These systems must record the result of each random call, together with the execution step at which it was made. The algorithm can then not be repeated in a strict sense, but simulated: a specific simulation capability would have to be implemented as an integral part of the AI software in order to reproduce exactly the operations performed during a process. Although random function calls represent only a small fraction of the operations performed, an ordinary process could still generate a large amount of extra data to be saved.
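A minimal sketch of such a record-and-replay capability, applied to a toy stochastic process (all names hypothetical): during the live run every random draw is logged; during the forensic simulation the logged values are returned in the same order, so the computation is reproduced exactly. For pseudorandom generators, recording the seed alone would suffice, but recording every draw also covers true hardware randomness.

```python
import random


class RecordedRandom:
    """Wrap an RNG so that every value drawn during a live run is
    recorded; in replay mode the recorded values are returned in
    order, simulating the original run exactly."""

    def __init__(self, recorded=None):
        self.replaying = recorded is not None
        self.log = list(recorded) if self.replaying else []
        self._rng = random.Random()
        self._pos = 0

    def draw(self):
        if self.replaying:
            value = self.log[self._pos]
            self._pos += 1
            return value
        value = self._rng.random()
        self.log.append(value)  # one extra record per call
        return value


def stochastic_process(rng, steps=5):
    # Some algorithm whose result depends on random draws.
    return sum(rng.draw() for _ in range(steps))


live = RecordedRandom()
result = stochastic_process(live)                 # live run, draws logged
replay = RecordedRandom(recorded=live.log)
simulated = stochastic_process(replay)            # exact reproduction
```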
Issue, analog hardware: analog hardware is expected to become a relevant alternative to the currently dominant digital hardware in AI computing systems. It can perform most of the operations involved in AI algorithms using less energy, in less time and with a simpler architecture. But analog operations are not accurate in the strict way digital operations are. AI systems executed totally or partially on analog hardware could therefore pose a serious challenge to algorithmic repeatability. As with the issue caused by random functions, the algorithm could not be repeated strictly, only simulated. Such a simulation faces two problems. On the one hand, an enormous volume of information representing all the operations performed by the analog hardware would have to be saved. On the other hand, that recording cannot be implemented at the software level; it must be implemented at the hardware level. Although not physically impossible, implementing algorithmic repeatability for AI systems executed on analog hardware could largely negate the advantages of this kind of hardware. Fortunately, quantum computing will apparently surpass the expected advantages of analog hardware in the coming decades while offering digital accuracy.
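The recording problem can be illustrated with a toy software simulation of a noisy analog multiply-accumulate (all names hypothetical; the Gaussian noise model is an assumption, not a claim about real analog circuits). Because the hardware cannot reproduce its own noise, the noisy result of every operation must be captured at execution time for later replay, which is where the data volume explodes:

```python
import random


def analog_mac(weights, inputs, rng, trace=None):
    """Simulated analog multiply-accumulate: every product picks up a
    small random error, as an imprecise analog circuit would. If
    `trace` is a list, the actual (noisy) result is recorded,
    mimicking hardware-level capture of each operation."""
    acc = 0.0
    for w, x in zip(weights, inputs):
        acc += w * x + rng.gauss(0.0, 1e-3)  # per-product analog error
    if trace is not None:
        trace.append(acc)  # one record per operation: huge data volume
    return acc


# Live run on the "analog hardware": the result is noisy and
# cannot be reproduced by running the hardware again.
trace = []
y_live = analog_mac([0.5, -1.2, 2.0], [1.0, 0.3, 0.7],
                    random.Random(), trace)

# Forensic replay: the recorded result stands in for the hardware.
y_replayed = trace[0]
```

A modest neural network performs billions of such operations per inference, which gives a sense of why capturing every one of them could negate the efficiency advantages of analog hardware.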
Request to the AI HLEG: please be so kind as to consider the above discussion in order to clearly define the rules and recommendations for implementing and fulfilling the requirement of algorithmic repeatability in AI systems.
Comments
In reply to Dear Juan, by Anonymous (not verified)
Thanks Juergen.
Be it backpropagation or any other type of ML, if we are able to save the whole state of the software and its configuration before it is called to process a specific dataset, then we will be able to reproduce it later, regardless of the internal changes it undergoes during the process. Isn't that so? The goal is to recover that previous state, which I think is perfectly possible.
A different matter is the need to save that state, which is feasible but could be very cumbersome.
In reply to Dear Juan, by Leo Kärkkäinen
Thanks Leo.
I think the forensic point of view you highlight is extremely important. I agree with your reasoning but have doubts about the premise you use.
Can an AI decision support system be considered a measurement system with an error tolerance? I am not suggesting that this view is incorrect, but will this simplification be accepted in a trial? How can we define (and communicate) the acceptable tolerance for an AI system?
If that could be accepted by engineers, users and authorities, then the problem of algorithmic repeatability would not be a real problem, or at least it should be defined statistically rather than strictly.
If the expected result of an AI system is a probability (e.g. detecting cancer from an X-ray), we should accept that statistical approach.
Nevertheless, will we renounce strict algorithmic repeatability for investigative purposes when a controversy arises?
Very interesting contribution, thanks again Leo.
Dear Juan,
In a forensic analysis of even an analog AI solution, one can always run the system multiple times to obtain predictable statistics of its behaviour for given input data. The expected error bars then define repeatability trust in a consistent way, as for any measurement system, of which AI is an example (e.g. detecting cancer from an X-ray).
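This measurement-system view can be sketched with a toy model, assuming Gaussian run-to-run noise (all names hypothetical): repeated runs on the same input characterize the system by a mean output and an error bar, instead of demanding bit-exact repeatability.

```python
import random
import statistics


def noisy_system(x, rng):
    """Toy analog-style AI system: the output varies slightly
    from run to run around the underlying value 0.8 * x."""
    return 0.8 * x + rng.gauss(0.0, 0.05)


def characterize(x, runs=1000, seed=42):
    """Run the system many times on the same input and report the
    mean output with an error bar (standard deviation), treating
    the AI system as a measurement instrument."""
    rng = random.Random(seed)
    outputs = [noisy_system(x, rng) for _ in range(runs)]
    return statistics.mean(outputs), statistics.stdev(outputs)


mean, err = characterize(1.0)
# mean approaches 0.8 and err approaches 0.05 as runs grows
```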
Also, let us not forget that all IT systems are analog if you go deep enough, and have bit error rates. So this is not a question of principle, but a question of practical error tolerance.
In real life, the concern is how changes in the (originally analog) input data affect the results of the system. Can one in practice recreate a scene, take images and still get the same result from the AI? The ever-present "insignificant" variation of conditions in the input data should not make a "significant" change in the result. Actually, this is exactly what one trains for in neural networks: which features matter and which do not.
Both analog and digital systems are vulnerable to bad training; there is no fundamental difference between the two in that respect.
Best Regards,
Leo Kärkkäinen