The AI Liability Directive — A Blueprint for AI Accountability in Europe

Enzai breaks down the first draft of the EU's AI Liability Directive
Ryan Donnelly
2 Jan 2024

On 28 September 2022, the European Commission released the first draft of the proposed Artificial Intelligence Liability Directive (the “AILD”). The purpose of the AILD is to lay down uniform rules for access to information and alleviation of the burden of proof in relation to damages caused by AI systems.

The AILD is the latest in a wider European legislative push to create a trustworthy AI ecosystem. This landscape also includes: (1) the Artificial Intelligence Act (the “EU AIA”), which you can read more about here; and (2) a revision of the nearly 40-year-old Product Liability Directive, which covers producers’ no-fault liability for defective products.


The need

A 2020 EU survey found that ambiguity around liability is one of the top three barriers to the use of AI by European companies. This is unsurprising given the patchwork of national liability rules currently in place in the EU. Many Member States have adopted fault-based regimes which require claimants to prove that an individual’s wrongful act or omission caused the damage in question. However, the “black box” nature of AI (with its autonomous behaviour, limited predictability, continuous adaptation and lack of transparency) makes it difficult to identify the source of fault necessary to support a successful liability claim under the existing regimes.

*** Eagle-eyed readers might notice that the AILD is a "Directive", whereas the EU AI Act is a "Regulation". What's the difference? Well, an EU Regulation takes effect automatically in all Member States. A Directive, on the other hand, sets out a goal that all Member States must achieve; it is down to each individual Member State to decide how best to achieve that goal through national legislation. ***

The rules

The AILD introduces two pro-claimant safeguards as goals for Member States to implement: (1) a presumption of causality; and (2) the right to evidence. These measures will help claimants overcome the unique evidentiary challenges that arise when applying liability rules to AI systems.


1. Presumption of causality

To address the difficulty of proving a causal link where harm is caused by AI, the AILD creates a rebuttable presumption of a causal link between the defendant’s fault and the output produced by the AI system (or the failure of the AI system to produce an output). This presumption will only arise when the following three conditions are met:

A - the conduct of the defendant did not meet a duty of care set out in European or national legislation specifically intended to protect against the damage that occurred;

B - it can be reasonably assumed, based on the circumstances of the case, that the fault of the defendant influenced the output produced by the AI system (or the failure of the AI system to produce an output); and

C - the claimant has demonstrated that the output produced by the AI system (or the failure of the AI system to produce an output) caused the damage.

Where the AI system in question is a high-risk AI system under the EU AIA, the requirement set out at A above will be deemed automatically satisfied if the defendant has not complied with certain obligations imposed on them under the EU AIA, including, amongst other things, its documentation and monitoring requirements.


2. Right to evidence

The AILD empowers potential claimants to obtain court orders requiring the disclosure of relevant evidence concerning high-risk AI systems. Such evidence would include the technical documentation, the monitoring logs and other records generated under the EU AIA’s transparency requirements. Crucially, if the defendant fails to comply with a disclosure order, they are presumed not to have met the relevant duty of care under European or national law, which automatically satisfies the requirement set out at A above. A failure to disclose therefore makes it substantially harder for the defendant to rebut the presumption of causality.
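
To make the interplay between these two safeguards concrete, here is a minimal, purely illustrative Python sketch of the decision logic as we read the draft. The field names and structure are our own simplification, not anything the AILD defines, and this is commentary, not legal advice:

```python
from dataclasses import dataclass


@dataclass
class Claim:
    """Facts established in a hypothetical AILD claim (names are our own)."""
    duty_of_care_breached: bool             # A: breach of an EU/national duty of care
    fault_influenced_output: bool           # B: fault reasonably assumed to have influenced the output
    output_caused_damage: bool              # C: the output (or its absence) caused the damage
    is_high_risk_system: bool               # high-risk AI system under the EU AIA
    breached_eu_aia_obligations: bool       # e.g. documentation or monitoring duties
    refused_court_ordered_disclosure: bool  # failed to comply with a disclosure order


def presumption_arises(claim: Claim) -> bool:
    """True if the rebuttable presumption of causality arises.

    Even when it arises, the defendant can still rebut it with evidence.
    """
    # Requirement A is deemed satisfied where a high-risk system breached
    # relevant EU AIA obligations, or where court-ordered evidence was withheld.
    requirement_a = (
        claim.duty_of_care_breached
        or (claim.is_high_risk_system and claim.breached_eu_aia_obligations)
        or claim.refused_court_ordered_disclosure
    )
    return requirement_a and claim.fault_influenced_output and claim.output_caused_damage


# Example: high-risk system, EU AIA documentation duties breached,
# and the claimant has shown the output caused the damage.
claim = Claim(
    duty_of_care_breached=False,
    fault_influenced_output=True,
    output_caused_damage=True,
    is_high_risk_system=True,
    breached_eu_aia_obligations=True,
    refused_court_ordered_disclosure=False,
)
print(presumption_arises(claim))  # True: the presumption arises
```

Note that in this simplified model, a refusal to comply with a disclosure order satisfies requirement A by itself, which is what puts real teeth behind the right to evidence.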


Timeline

This proposal arrived more than a year after the first draft of the EU AIA was published. Some commentators are worried by this staggered approach, because a disjointed AI regulatory framework in Europe could increase ambiguity and create regulatory gaps. It’s a fair point: if there is a significant gap between the entry into force of the EU AIA and the entry into force of the AILD, claimants would be left without these safeguards in the interim. For what it’s worth, the Commission doesn’t see that gap as a problem, and only time will tell. The AILD is currently with the Council of the European Union for review, and also has to make its way through the European Parliament.


Our take

This really underlines the importance of keeping detailed technical documentation for your AI systems. As the presumption of causality created under the AILD is rebuttable, detailed and robust documentation could form a strong shield against potential claims that an AI system has caused harm.
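
As a purely hypothetical illustration of what that documentation could capture (the structure below is our own sketch, not anything prescribed by the AILD or the EU AIA), a per-system record might look something like this:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AISystemRecord:
    """Hypothetical documentation record for one AI system (illustrative only)."""
    system_name: str
    intended_purpose: str
    risk_classification: str              # e.g. "high-risk" under the EU AIA
    training_data_summary: str            # provenance, coverage, known limitations
    evaluation_results: dict[str, float]  # accuracy, robustness and other test metrics
    monitoring_log: list[str] = field(default_factory=list)  # post-deployment events
    last_reviewed: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
```

Records along these lines, kept up to date and produced promptly in response to a disclosure order, are exactly the kind of evidence that could help a defendant rebut the presumption.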

It’s also interesting that the Commission chose to rely on existing fault-based rules across the Union, through requirement A of the rebuttable presumption above (an existing duty of care under European or national law must have been breached), rather than introducing AI-specific heads of fault. The AILD does contain review provisions, so perhaps this is a case of wait-and-see…

2024 is going to be a big year for AI regulation. Organisations building any kind of AI used in the EU would be well advised to start thinking about these issues now and work towards compliance. Enzai was founded and backed by lawyers who are experts in this space.

Read more about Enzai's AI Governance, AI Regulation and EU AI Act solutions.
