AI is the new steam engine – how automated decision-making is disrupting liability rules and what it means for consumers

Over the course of the 19th century, industrialisation dramatically challenged civil liability rules. Back then, most liability regimes were built on the notion of ‘negligence’: a person who negligently caused harm to somebody else had an obligation to repair that harm.

But the proliferation of complex machines, such as locomotives, powered no longer by humans but by steam, made it impossible for injured parties to trace and prove any human negligence. In parallel, injuries grew significantly in both severity and frequency. This exposed the limits of existing liability regimes and called for fresh solutions. One of the responses was to establish new liability rules under which it was no longer necessary to prove human negligence in order to obtain compensation (what is known as the “strict liability” regime).

In the 21st century, robotisation has replaced industrialisation, and artificial intelligence has become the new steam engine disrupting liability rules. Yet the challenges faced by injured individuals are comparable to those our predecessors experienced two centuries ago.

First, AI-driven products depend on a plurality of professionals to function, including product manufacturers, programmers, app designers, cloud service providers and many others. This makes it virtually impossible for the injured party to identify the liable person when things go wrong. Was the damage caused by a sensor wrongly collecting or interpreting the data? Was it caused by the design of the software, or by its interaction with another product component? Consumers rightly worry about this. According to a survey conducted by BEUC member organisations at the end of 2019 in nine EU member states, most European consumers believe that AI can be dangerous because machines fail. Most of them are also worried that it is unclear who is accountable if AI is not secure or causes harm.[1]

Second, AI-driven products are equipped with multiple automated sensors and software constantly collecting and interpreting data, and most consumers have no idea how they function (what is usually referred to as the “black box effect” of AI). Because it is near impossible to identify the problem or to trace it back to any human intervention, injured parties cannot exercise their rights. In other words, consumer protection rules become obsolete with AI-driven products, and the consumer is likely to be left without a remedy.

In this context, current liability legislation is outdated and unable to provide adequate solutions. This is particularly clear at European level with the Product Liability Directive. The EU adopted this legislation in 1985, when AI-driven products were pure science fiction. The rules are no longer adapted to products evolving in complex digital ecosystems, and they must urgently be upgraded to deliver on the ground for consumers. BEUC and its member organisations have made several recommendations to adapt the Product Liability Directive to AI and our increasingly digital world.[2]

AI can be an opportunity for consumers, but its development should not come at the expense of consumer rights. There is no reason to wait before putting in place a solid, renewed liability framework to accompany the development of AI. The damage reportedly caused by autonomous vehicles in several countries has already given a first glimpse of the many liability issues that will undoubtedly arise because of artificial intelligence. Now is the time to act.

Several golden principles should guide policymakers when adapting the existing civil liability framework.

First, the liability framework should be clear and simple for all. Policymakers should avoid developing several pieces of legislation laying down different liability rules for distinct categories of people or organisations. Many of the challenges brought by AI-driven products can be fixed with an ambitious revision of the EU Product Liability Directive.

Second, the rules should be easy to navigate for harmed individuals. Given the opacity and complexity of AI, apportioning liability among the different parties involved in the supply chain of AI-powered products will become exceedingly difficult, leading to costly and complex proceedings that ultimately delay compensation for harmed individuals. Instead, all professionals involved in the supply chain of AI-driven products should be jointly and severally liable, and injured parties should not bear the cost of identifying the liable person.

Third, policymakers must acknowledge the evidentiary burden claimants face when substantiating claims involving complex products such as AI-driven ones. For reasons of fairness, claimants should therefore benefit from a reversal of the burden of proof.

Fourth, as AI-driven products may cause new types of damage (including damage to data, identity theft and other pure economic losses), liability rules should cover all types of damage (including immaterial damage) and not be limited to material damage alone.

Fifth, as AI-driven products are constantly evolving, and as professionals today retain a greater degree of control over them than they did over “offline” products, liability rules should follow the dynamic nature of AI-driven products. Consequently, the liability of professionals should not end once the product has been placed on the market.

Out of challenges come great opportunities. Our opportunity today is to set up a liability framework for AI that works and delivers on the ground: one that accompanies the development of AI while offering society an appropriate level of protection.

This is our chance and we must not miss it.

[1] https://www.beuc.eu/publications/beuc-x-2020-078_artificial_intelligence_what_consumers_say_report.pdf

[2] https://www.beuc.eu/publications/product-liability-20-how-make-eu-rules-fit-consumers-digital-age/html
