AI and Civil Liability

We expect AI to make our lives better and safer. But what happens when AI causes harm? Will the rules we have in place protect the victim? Do we even know how liability will work?

Before we can confidently answer these questions, we need to understand how civil liability rules work in each EU Member State and whether they can work for AI as well.

A recent comparative law study on civil liability for AI sheds light on how existing tort laws in the EU Member States would apply to harm caused by AI. It also offers some insights into key aspects of liability in the US.

The authors conclude that existing national laws on civil liability might make it very hard for victims of harm caused by AI systems to obtain compensation. In many cases, to bring a successful fault-based claim, the victim must identify human misconduct and convince the court that it was to blame for the damage. Due to the nature of AI systems, however, linking human misconduct to the damage is more challenging than in traditional civil liability cases. In addition, the outcome of such cases will often differ from one Member State to another. As we well know, uncertainty and fragmentation make it more difficult for businesses to benefit from the single market.

So how can we best address this issue? In the coming months, the European Commission will launch an open public consultation asking for feedback on the challenges that AI poses to the Product Liability Directive and national civil liability rules, as well as on possible solutions.

In the meantime, we had a detailed discussion of this topic in a break-out session at the High-Level Conference on AI. If you did not follow the event live, you can find the recordings of the session here: