LEAK: Commission to propose rebuttable presumption for AI-related damages


The European Commission will present a liability regime targeted at damage caused by Artificial Intelligence (AI) that would place a rebuttable presumption of causality on the defendant, according to a draft obtained by EURACTIV.

The AI Liability Directive is scheduled to be published on 28 September, and it is meant to complement the Artificial Intelligence Act, an upcoming regulation that introduces requirements for AI systems based on their level of risk.

“This directive provides in a very targeted and proportionate manner alleviations of the burden of proof through the use of disclosure and rebuttable presumptions,” the draft reads.

“These measures will help persons seeking compensation for damage caused by AI systems to handle their burden of proof so that justified liability claims can be successful.”

The proposal follows the European Parliament’s own-initiative resolution adopted in October 2020, which called for easing the burden of proof and for a strict liability regime for AI-enabled technologies.

Scope

For consistency, the definitions of AI systems, including high-risk ones, and of AI providers and users refer directly to those in the AI Act.

The directive applies to non-contractual civil law claims for damages caused by an AI system under fault-based liability regimes, in other words, where someone can be held responsible for a specific action or omission.

The idea is that these provisions would complement existing liability regimes in civil law since, besides the presumption, the directive would not affect national rules on the burden of proof, the degree of certainty required for the standard of proof, or the definition of fault.

While liabilities related to criminal law or the field of transport are excluded from the scope, the provisions would also apply to national authorities insofar as they are covered by obligations under the AI Act.

Disclosure of information

A potential claimant may request the providers of a high-risk system to disclose the information the provider will have to keep as part of its obligations under the AI Act. The AI regulation mandates the retention of documentation for ten years after an AI system has been placed on the market.

The information requested would include the datasets used to develop the AI system, technical documentation, logs, the quality management system and any corrective actions.

The addressee might refuse the request, which can then be raised again via a lawsuit, where a judge will assess whether the disclosure is justified and necessary to sustain a claim in cases of accidents involving AI.

These disclosures are subject to safeguards, notably for trade secrets, and to the principle of proportionality. The court might also order the provider to retain such information for as long as deemed necessary.

If a provider refuses to comply with a disclosure order, the court would presume the provider non-compliant with the relevant obligations unless the defendant proves otherwise.

Non-compliance with AI Act

The directive is intended to provide a legal basis for claiming compensation following a lack of compliance with specific obligations set out in the EU’s AI regulation.

Since a causal link between non-compliance and damage can often only be established by explaining the AI’s inner workings, the approach taken is that the causal link is presumed under certain circumstances.

For AI systems that do not entail a particular level of risk, the presumption applies if there is demonstrated non-compliance with rules that could have prevented the damage and if the defendant is responsible for such non-compliance.

For high-risk systems, the presumption applies against the provider where suitable risk management measures were not in place, the training dataset did not meet the required quality criteria, or the system does not meet the transparency, accuracy, robustness and cybersecurity criteria.

Other factors are the lack of adequate human oversight or negligence in immediately implementing the necessary corrective measures.

The presumption applies to users of high-risk systems where the user failed to comply with the accompanying instructions or exposed the system to input data not relevant to its intended purpose.

In other words, it would be up to the AI provider that has violated the rules to prove that its non-compliance did not cause the damage, for instance by demonstrating that there are more plausible explanations for it.

Non-compliance with other requirements

A similar principle has been introduced for violations of other EU or national requirements. In this case, too, the presumption of causality would apply only where the breached rule, the so-called ‘duty of care’, is relevant to the damage at hand and was intended to prevent it.

Here, the conditions for the presumption are that the AI system can be ‘reasonably assumed’ to have been involved in causing the damage, and that the claimant has demonstrated non-compliance with the relevant requirements.

For the Commission, this approach “constitutes the least burdensome measure to address the need for fair compensation of the victim, without externalising the cost to the latter.”

Monitoring and transposition

The Commission is to establish a monitoring programme on incidents involving AI systems, with a targeted review within five years to assess whether additional measures would be needed.

Member states would have two years from the directive’s entry into force to transpose it into national law. In their transposition, they may adopt national rules more favourable to claimants, as long as these are compatible with EU law.

[Edited by Nathalie Weatherald]
