AI liability rules: a blocked horizon? By Michèle Dubrocard [1]
February 2025
Today, no one disputes the potential benefits that AI offers to individuals and society in general, nor the existence of serious risks, some of them already identified, others likely to emerge. Let us bear in mind the conclusions of the first International AI Safety Report [2] which, focusing on general-purpose AI, recognizes that ‘there is a wide range of possible outcomes even in the near future, including both very positive and very negative ones, as well as anything in between’.
So, when the European Commission issued, on 28 September 2022, its Proposal for a Directive on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive, AILD), it raised considerable hope among all those concerned about the potentially harmful consequences of the use of AI systems. These hopes were reinforced by the objective expressed in the explanatory memorandum of the Proposal, namely ‘ensuring victims of damage caused by AI obtain equivalent protection to victims of damage caused by products in general’ [3].
More specifically, the Commission seemed determined to give due consideration to the imbalance between providers and deployers on the one hand, and affected persons on the other. Indeed, referring to Member States’ general fault-based liability rules, Recital 3 of the Proposal recognizes that ‘when AI is interposed between the act or omission of a person and the damage, the specific characteristics of certain AI systems, such as opacity, autonomous behaviour and complexity, may make it excessively difficult, if not impossible, for the injured person to meet this burden of proof’.
Alas, the rules proposed by the Commission did not meet the expectations raised by the announced objective (I). Even worse, the Commission seems to have definitively shelved its project (II), leaving the door open to what it itself had criticized: the co-existence within the EU of ‘27 different liability regimes, leading to different levels of protection and distorted competition among businesses from different Member States’ [4].
I- A disappointing Proposal
The Commission’s Proposal did not challenge the choice of a fault-based regime; instead, it mainly focused on two rules aimed at alleviating the burden of proof, which remains on the victim. What are these two rules?
– The disclosure of evidence:
According to Article 3(1) of the Proposal, a court may order the disclosure of relevant evidence about specific high-risk AI systems that are suspected of having caused damage. However, the requests should be supported by ‘facts and evidence sufficient to establish the plausibility of the contemplated claim for damages’ and the requested evidence should be at the addressees’ disposal. Article 3(3) provides that the preservation of such evidence may also be ordered by the court.
However, the court shall limit the disclosure of evidence to ‘that which is necessary and proportionate to support a potential claim or a claim for damages and the preservation to that which is necessary and proportionate to support such a claim for damages’. Article 3(4) specifies that ‘the legitimate interests of all parties’ must be considered by the court when determining whether an order for the disclosure or preservation of evidence is proportionate. Moreover, the person who has been ordered to disclose or to preserve the evidence must benefit from appropriate procedural remedies in response to such orders.
Article 3(5) introduces a presumption of non-compliance with a duty of care: when, in a claim for damages, the defendant fails to comply with an order by a national court to disclose or to preserve evidence at its disposal, the national court shall presume the defendant’s non-compliance with a relevant duty of care. That presumption remains rebuttable.
– The presumption of causal link in the case of fault:
Article 4 of the Proposal provides, under certain conditions, for a presumption of a causal link between the fault of the defendant and the output produced by the AI system, or the failure of the AI system to produce an output, that gave rise to the relevant damage.
However, the claimant has to prove the fault of the defendant, consisting in non-compliance with a duty of care laid down in Union or national law directly intended to protect against the damage that occurred. He or she also has to prove that the output produced by the AI system, or its failure to produce an output, gave rise to the damage. A further condition relates to the likelihood, based on the circumstances of the case, that the fault influenced the output produced by the AI system or the failure of the AI system to produce an output.
Moreover, the presumption shall not apply if the defendant demonstrates that sufficient evidence and expertise is reasonably accessible for the claimant to prove the causal link. Finally, in the case of a claim for damages concerning an AI system that is not a high-risk AI system, the presumption shall only apply where the national court considers it excessively difficult for the claimant to prove the causal link. Here also, the presumption is rebuttable.
– The limitations of the rules:
It follows from these provisions that the impact of the two rules laid down in the Proposal is limited by numerous conditions. In particular, the new mechanism for the disclosure of evidence would apply only to high-risk AI systems. Similarly, the presumption of causal link would mainly apply to high-risk AI systems; for other AI systems, it would apply only where the national court considers it excessively difficult for the claimant to prove the causal link.
In any case, the Proposal is based on a fault-based liability regime, which means that victims would still have to prove the fault or negligence of the AI system provider or deployer. As noted by the EDPS in its own-initiative opinion [5] of 11 October 2023, ‘meeting such a requirement may be particularly difficult in the context of AI systems, where risks of manipulation, discrimination, and arbitrary decisions will be certainly occurring’, even when providers and deployers have prima facie complied with their duty of care as defined by the AI Act.
In order to overcome these proof-related difficulties, several solutions have been proposed. BEUC, the European Consumer Organisation, has recommended introducing a reversal of the burden of proof [6], so that consumers would only have to prove the damage they suffered and the involvement of an AI system. A more nuanced approach has been suggested by one expert, differentiating between AI systems, whether high-risk or not, and general-purpose AI systems: providers and deployers of high-risk AI systems would be subject to ‘truly strict liability’, while SMEs and non-high-risk AI systems would only be subject to rebuttable presumptions of fault and causality [7]. In the same vein, the European Parliament considered in 2020 that it seemed ‘reasonable to set up a common strict liability regime for (…) high-risk autonomous AI-systems’. As regards other AI systems, the European Parliament also considered that ‘affected persons should nevertheless benefit from a presumption of fault on the part of the operator who should be able to exculpate itself by proving it has abided by its duty of care’ [8].
The Commission itself acknowledged, in its impact assessment report, that ‘the specific characteristics of the AI-system could make the victim’s burden of proof prohibitively difficult or even impossible to meet’, and considered different approaches, including a reversal of the burden of proof. As a sign of its hesitation, the Commission introduced in the Proposal the possibility of reviewing the directive five years after the end of the transposition period, in particular in order to ‘evaluate the appropriateness of no-fault liability rules for claims against the operators of certain AI systems, as long as not already covered by other Union liability rules, and the need for insurance coverage, while taking into account the effect and impact on the roll-out and uptake of AI systems, especially for SMEs’ [9].
II- The withdrawal of the Proposal
On 11 February 2025, the Commission decided to withdraw the Proposal, on the grounds that there was ‘no foreseeable agreement’, and that the Commission would ‘assess whether another proposal should be tabled or another type of approach should be chosen’ [10].
This decision caught the European Parliament’s rapporteur on the Proposal, Axel Voss (EPP), by surprise; he stated that scrapping the rules would mean ‘legal uncertainty, corporate power imbalances, and a Wild West approach to AI liability that benefits only Big Tech’ [11].
On the other hand, the Commission’s decision is reported to have satisfied both the Council and the private sector. In particular, France’s Permanent Representation reportedly indicated that it saw no reason to impose additional liability requirements on AI providers [12].
How can such a situation be explained?
It is true that the AI liability initiative launched by the Commission on 28 September 2022 also comprised another Proposal, aimed at updating the Directive on liability for defective products (PLD). The new directive, which now includes software and digital manufacturing files within the definition of product, and expands the notion of compensable damage to include the destruction or corruption of data, entered into force on 8 December 2024.
However, the scope of the revised PLD is limited: it only provides compensation for material losses resulting from death, personal injury, damage to property and loss or corruption of data (Article 6 PLD). In particular, damage stemming from a violation of a fundamental right without any material loss is not covered by this directive, but would have been covered by the AI liability directive. The draft AILD aimed at covering ‘national liability claims mainly based on the fault of any person with a view to compensating any type of damage and any type of victim’ [13].
The loopholes of the PLD have also been underlined in the complementary impact assessment requested by the JURI Committee, to which the file had been assigned in the European Parliament. The study [14], published on 19 September 2024, notes: ‘However, the PLD presents notable gaps, especially in areas such as protection against discrimination, personality rights, and coverage for professionally used property. It also lacks measures for addressing pure economic loss and sustainability harms, as well as damage caused by consumers, which are contingent on Member State laws. These limitations underscore the necessity for adopting the AILD (…)’.
Thus, in the light of the complementary impact assessment, it appears that the recent adoption of the revised PLD cannot compensate for the withdrawal of the proposed AILD. Moreover, as stressed by the first International AI Safety Report, the specific characteristics of general-purpose AI systems make legal liability hard to determine:
‘The fact that general-purpose AI systems can act in ways that were not explicitly programmed or intended by their developers or users raises questions about who should be held liable for resulting harm’ [15].
Conclusion:
Against this background, European citizens are today left with a ‘fragmented patchwork of 27 different national legal systems’ [16], most of them relying on a fault-based regime, which is unable to respond to all the challenges posed by AI systems, in particular general-purpose AI systems.
The withdrawal of the proposed AILD is only one element of the Commission’s plan aimed at ‘simplifying rules and effective implementation’ [17], which lists 37 withdrawn proposals in total.
The fact that the Commission’s final 2025 work programme, with the addition of the withdrawal of the AILD, was published just after the AI Action Summit, held in Paris on 10-11 February, may be a simple coincidence. However, it should be noted that the Statement [18] issued after the Summit refers neither to the issue of liability nor to the risks of AI systems, except in the context of information integrity.
As observed by Anupriya Datta and Théophane Hartmann in Euractiv, ‘In this context, withdrawing the AI liability directive can be understood as a strategic manoeuvre by the EU to present an image of openness to capital and innovation, to show it prioritises competitiveness and show goodwill to the new US administration’ [19].
The final word may not yet have been spoken. On 18 February, the Members of the European Parliament’s Internal Market and Consumer Protection Committee (IMCO) voted to keep working on liability rules for artificial intelligence products, despite the European Commission’s intention to withdraw the proposal [20].
[1] The opinions expressed in this article are the author’s own and do not necessarily represent the views of the EDPS.
[2] International Scientific Report on the Safety of Advanced AI, January 2025.
[3] COM(2022) 496 final, page 2.
[4] COM(2022) 496 final, page 6.
[5] EDPS Opinion 42/2023 on the Proposals for two Directives on AI liability rules, 11 October 2023, par. 33.
[6] Proposal for an AI liability Directive, BEUC position paper, page 12.
[7] Philipp Hacker, ‘The European AI liability directives – Critique of a half-hearted approach and lessons for the future’, page 49.
[8] European Parliament resolution of 20 October 2020 with recommendations to the Commission on a civil liability regime for artificial intelligence (2020/2014(INL)), par. 14 and 20.
[9] Article 5 of the Proposal.
[10] Annexes to the Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions – Commission work programme 2025, page 26.
[11] Euractiv, ‘Commission plans to withdraw AI Liability Directive draw mixed reactions’, 12 February 2025.
[12] Ibidem.
[13] COM(2022) 496 final, page 3.
[14] Proposal for a directive on adapting non-contractual civil liability rules to artificial intelligence – Complementary impact assessment, 19 September 2024.
[15] International Scientific Report on the Safety of Advanced AI, page 179.
[16] Euractiv, ‘Commission plans to withdraw AI Liability Directive draw mixed reactions’, Anupriya Datta, 12 February 2025.
[17] Commission work programme 2025, page 11.
[18] Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet: ‘We will keep addressing the risks of AI to information integrity and continue the work on AI transparency.’
[19] Euractiv, ‘Commission withdraws AI liability directive after Vance attack on regulation’, 11 February 2025.
[20] Euronews, ‘Lawmakers reject Commission decision to scrap planned AI liability rules’, Cynthia Kroet, 18 February 2025.