What is the role of artificial intelligence in defective product liability? Focus on Directive 2024/2853

Law has always had to face new challenges, but it is now confronted with what is probably one of the greatest technological and societal challenges of the 21st century: the exponential development of artificial intelligence.

This challenge goes beyond ethical regulation and urgently requires a revision of the safeguards protecting both consumers and professionals.

That is why the European Union adopted Directive 2024/2853 of 23 October 2024, which introduces specific provisions on liability for defective products that address the concerns raised by artificial intelligence.[1] The Directive repeals the Directive of 25 July 1985 with effect from 9 December 2026, the deadline by which Member States must adopt all measures necessary to comply with the new Directive.

Indeed, more and more products integrate AI systems. This development affects all sectors and product ranges: from autonomous vehicles to smart home appliances such as robotic vacuum cleaners, and even connected watches.

It is therefore easy to understand why it was urgent to extend the scope of compensable damage to include new types of harm caused specifically by AI.

In this regard, Article 6 of the Directive now provides for compensation for any damage caused by “the destruction or corruption of data not used for professional purposes.”

The assessment of whether a product is defective has also been adapted to address the new challenges arising from the integration of AI into everyday products and the specific dangers that this may entail.

Article 7 of the Directive, which lists a series of decisive criteria for establishing the existence of a defect, expressly refers to the product’s “capacity to continue learning or to acquire new features after being placed on the market or put into service”.[2] This means that a product is no longer assessed solely by reference to its design and condition at the time of its placement on the market or entry into service. Since algorithms and software are capable of autonomous learning and adaptation, consumers may legitimately expect the product they use to meet a high level of safety.

The idea is that those who design or produce a product that, through updates or intrinsic developments, may develop unexpected behaviour likely to cause damage must remain responsible for that product.[3] This extended liability is justified by the fact that manufacturers effectively retain a form of control over the product and its proper functioning even after it is placed on the market, for example through updates.[4] In the era of connected and digitalised products, manufacturers remain truly involved in the product’s evolution for a much longer period.

For the same purpose, the Directive also adds to the list of criteria for assessing a defect the consideration of “relevant cybersecurity requirements”.[5] This expressly includes defects resulting from weaknesses in IT security or vulnerabilities to cyberattacks. Through this provision, the legislator imposes on manufacturers a duty of vigilance regarding data security, in a context where cyberattacks are increasingly frequent. It is now established that the safety of a product integrating AI can no longer be dissociated from its IT security.

However, the Directive does not go so far as to specify concretely the level of safety that a consumer may legitimately expect from a product enhanced by AI. This remains a central but unresolved issue.

Finally, it is important to address the question of disclosure of evidence. Clearly, the technical and IT complexity of software integrating AI is beyond the reach of most consumers. This difficulty must be taken especially seriously in light of AI’s rapid evolution. It only exacerbates the information asymmetry that already exists between manufacturers and consumers.

Accordingly, the Directive introduces a disclosure obligation for producers, aimed at assisting victims as much as possible. This obligation, however, is subject to nuances detailed in Article 9 of the Directive.

Even when the manufacturer discloses all the necessary information, the technical complexity specific to products that incorporate artificial intelligence systems may remain a significant obstacle to the consumer’s full understanding, making it impossible for them to prove that the product is defective. Consequently, based on Article 10 paragraph 4 of the Directive, national courts will be able to presume both the existence of the defect and the causal link with the damage. The burden of proof will then shift to the producer, who will have to rebut this presumption.[6]

In conclusion, Directive 2024/2853, adopted on 23 October 2024, significantly reshapes defective product liability to address new concerns related to AI integration and the cybersecurity risks it entails. In the digital age, the very notion of a product defect has had to evolve to encompass algorithms, software requiring updates, and all possible mechanisms of digital interconnection. This exponential and unpredictable evolution creates new risks. Inevitably, this has an impact on manufacturers’ liability: to preserve consumer safety and trust in the market, manufacturers will have to continue to evolve in order to address a sector in constant change.

Authors
Denis Philippe and Soline Vasiljevs


[1] Hubin, J. et Ronneau, V., « Directive (UE) 2024/2853 relative à la responsabilité du fait des produits défectueux », R.D.T.I., 2024/3-4, n° 96-97, p. 229.

[2] Art. 7, §2, point c, Directive 2024/2853 of 23 October 2024.

[3] Recital 32, Directive 2024/2853.

[4] Recital 39, Directive 2024/2853.

[5] Art. 7, §2, point f, Directive 2024/2853 of 23 October 2024.

[6] Hubin, J. et Ronneau, V., op. cit., p. 237 et seq.