As artificial intelligence (AI) and autonomous robots move from research labs to real-world operations, the question for legal departments is no longer whether regulation applies, but how soon and how far it will reach. The EU is leading this transformation with two cornerstone initiatives: the AI Act, adopted in June 2024, and the Revised Product Liability Directive, applicable from December 2026.
Together, they form a comprehensive framework to ensure that AI technologies remain trustworthy, safe, and accountable, especially when embedded in complex products such as robots. For company lawyers and in-house counsel, these changes redefine how risk is mapped, liability is allocated, and compliance must be demonstrated internally and to external stakeholders.
This article highlights the key developments, their implications for autonomous robots, and what corporate legal teams should already be preparing for.
1. High-Risk AI Systems: Understanding the Triggers and Obligations
What qualifies as “high-risk”
The EU AI Act introduces a risk-based approach, distinguishing between prohibited practices and minimal, limited, and high-risk AI systems. High-risk systems are not prohibited, but they are subject to strict compliance obligations because of their potential impact on health, safety, and fundamental rights.
A system qualifies as high-risk in two main ways:
- As a safety component of a regulated product, for example, under the Machinery Regulation or Medical Devices Regulation.
- By function, if listed in Annex III of the AI Act, which covers AI used in:
- Biometric identification
- Education, recruitment, and employment
- Law enforcement and border control
- Access to essential services (e.g. healthcare, insurance)
- Administration of justice and democratic processes
Not every system in these categories automatically qualifies. AI that merely assists human decision-making or performs low-impact tasks may be exempt, unless it profiles individuals, in which case it always counts as high-risk.
Compliance duties: what legal teams must document
High-risk AI systems are subject to extensive obligations. The practical challenge lies in documenting and demonstrating compliance. Key elements include:
- Risk management: continuous identification, evaluation, and mitigation of potential hazards.
- Data governance: ensuring datasets are representative, accurate, and free from bias.
- Technical documentation and logs: maintaining evidence of design, testing, and monitoring.
- Human oversight: ensuring meaningful human control and intervention mechanisms.
- Cybersecurity: preventing tampering, misuse, or security breaches.
- Market monitoring: CE marking, incident reporting, and post-market surveillance.
In essence, AI can be autonomous, but its governance must remain human-centric. This means aligning technical teams, procurement, and compliance functions around clear internal responsibilities and defensible documentation practices.
2. When AI Has Legs: Applying the Rules to Autonomous Robots
Although the AI Act does not explicitly mention “robots,” its impact is unavoidable. Autonomous robots often integrate multiple AI systems for navigation, object recognition, or decision-making, and many of these systems can qualify as high-risk.
The assessment depends on several factors.
A. Is the robot considered “machinery”?
Under the new Machinery Regulation (applicable from 2027), the term “machinery” covers a broad range of devices, easily encompassing most robots. When a robot incorporates a high-risk AI system as a safety component, both the AI Act and the Machinery Regulation apply in parallel.
For sectors such as manufacturing, logistics, or healthcare, this dual compliance is particularly relevant, combining product safety rules with AI-specific oversight obligations.
B. Does the robot use AI for high-risk functions?
A single robot can contain multiple AI systems with distinct purposes and regulatory statuses. For instance:
- A healthcare robot that suggests treatment options;
- A security robot performing facial recognition;
- A recruitment bot evaluating candidates.
Each AI module must be assessed individually. Even if the robot as a whole does not qualify as machinery, certain modules may fall under Annex III high-risk categories.
C. Multiple systems, multiple obligations
One robot may combine several AI components, each triggering its own compliance pathway. A navigation AI may fall under machinery rules, while a facial recognition function may be covered by the AI Act’s Annex III provisions. This layered compliance scenario requires careful mapping of all AI components, their risk categories, and the corresponding documentation.
D. Research vs. commercial use
Robots developed and used solely for scientific research fall outside the AI Act’s scope. Once commercialized or deployed in real-world environments, however, they must fully comply with its requirements. Legal teams should anticipate this transition early, especially when pilot projects evolve into operational deployments.
3. The Legal Catch-Up Game: Product Liability Reimagined
The Revised Product Liability Directive (PLD) modernises the EU’s liability framework for the digital age. It directly addresses the question of who bears responsibility when AI or software causes harm, a long-standing grey area under the 1985 directive.
A. Software is now a “product”
The new directive explicitly extends the concept of a “product” to include software, AI modules, and even updates or patches. If a software element is essential to a product’s performance, or capable of causing harm independently, it is subject to the same liability rules as tangible goods. This change closes a major gap: AI developers and software providers can now be held strictly liable for defective code or unsafe updates integrated into robots.
B. A broader liability chain
Liability no longer stops with the manufacturer. The revised PLD extends accountability to:
- Importers and distributors
- Component and software providers
- Service providers integrating AI into systems
Multiple actors can be jointly and severally liable, and contractual waivers of liability are prohibited. Courts may also order defendants to disclose relevant technical evidence where a claimant presents a plausible claim, a game-changer for claimants and insurers alike.
C. Expanded scope of damages
Compensation now includes:
- Psychological injury
- Loss or corruption of data
- Non-material loss linked to physical damage
Purely financial loss (for example, poor investment advice from an AI tool) remains excluded unless it is directly tied to tangible harm.
4. The Takeaway: Managing AI Risk in a Robotic Age
The EU’s new legal architecture reflects an ambitious goal: to make AI innovation sustainable by embedding accountability into every stage of its lifecycle. For companies deploying AI and robotics, the challenge lies not in the law’s complexity, but in the integration of legal, technical, and ethical oversight.
For company lawyers or in-house counsel, three priorities stand out:
- Audit your exposure: Identify where AI is used in products or operations and assess whether it falls under high-risk categories.
- Align internal governance: Establish documentation, human oversight, and escalation procedures consistent with the AI Act.
- Review contracts and liability allocation: Update supplier agreements and insurance coverage to reflect the revised Product Liability Directive.
The road ahead may be more regulated, but also more predictable. With a clear compliance strategy, organisations can innovate confidently, knowing that their AI systems meet both regulatory and ethical expectations. If your organisation is exploring or already deploying AI or autonomous systems, now is the time to assess your readiness under the new EU rules.
Authors:
