European Commission publishes new AI liability rules


The European Commission has kicked off the fall season by proposing a new AI Liability Directive meant to address consumers’ liability claims for damage caused by AI-enabled products and services. With this proposal, the Commission seeks to complement the currently negotiated Artificial Intelligence Act (you can read our position paper here) by introducing a new liability regime with the ultimate aim of ensuring greater legal certainty, thereby enhancing consumer trust in AI and enabling successful innovation across the EU.

The need for a new set of liability rules specifically targeted at AI stems from the notion that the national liability rules currently in place across the EU Member States are ill-equipped to deal adequately with liability claims for damage caused by AI-enabled products and services. From the consumer perspective, this means that under the existing liability rules, victims bear the burden of proof and must therefore prove a wrongful action, or omission of an action, by the person who caused the damage. As AI systems are generally known for their complex ‘black box’ effect, it is considered difficult for consumers to identify the liable person and provide the required evidence.

The ineffectiveness of current liability rules also risks harming businesses unnecessarily, owing to the considerable legal uncertainty surrounding those rules. Today, whenever a victim puts forward a compensation claim for AI-related damage, national courts, when faced with the specific characteristics of AI outlined above, may adapt the way they apply existing rules on an ad hoc basis in order to reach a just result for the victim. Such a case-by-case approach makes it incredibly difficult for businesses to predict how existing liability rules might be applied, and to assess and insure their exposure to potential liability claims. Moreover, this legal uncertainty may be an even greater issue for businesses trading across borders, as the uncertainty extends across different national jurisdictions. This can easily lead to a highly fragmented judicial landscape of civil liability rules for AI products and services, which undoubtedly affects small and medium-sized enterprises the most, as they can rely neither on in-house legal expertise nor on the financial resources needed for external legal counsel.

Given the rapid pace of technological development, and its enormous potential for boosting the European economy, we must not risk unnecessary obstacles that hinder the innovation of European businesses. Ecommerce Europe therefore welcomes the Commission’s proposal to establish clearer rules for European businesses operating with AI systems, products, or services. However, as the proposal is still in its early stages, we will closely monitor the negotiations between policymakers and seek to influence the policy file where needed.

Legal assessment: Disclosure of evidence and rebuttable presumptions

Under the Commission’s proposed AI Liability Directive, civil liability claims for AI-related damage would in future be legally assessed on the basis of two notions: ‘disclosure of evidence’ and ‘rebuttable presumptions’. Concretely, this means the burden of proof on claimants will be alleviated. (Potential) claimants will be able to request from the defendant access to evidence concerning the presumed AI-related damage, which can help support the plausibility of a claim. If a business does not voluntarily provide access to such evidence, the claimant may, upon a reasoned request, ask a national court to order its disclosure. If the business in question still refuses to disclose the relevant information, the national court may presume its non-compliance with the relevant duty of care. The court will thus presume a causal link between the fault of the defendant and the output produced by an AI system, or the failure of an AI system to produce the intended output. This presumption may be rebutted by the company.

Finally, it should be noted that the proposed Directive only concerns high-risk AI, as defined by the AI Act, and that it also includes a safeguard for the protection of trade secrets.

If you have any questions or wish to know more about this topic, please contact us at