One of the most pressing questions companies face today is whether the AI systems they use fall within the high-risk category and must comply with the obligations set out in the EU AI Act. The question has become increasingly urgent: the obligations for high-risk systems listed in Annex III will take effect in August 2026, and the European Commission has launched an open consultation to support its upcoming guidance on how such systems should be classified. This blog post outlines seven key steps to help companies determine whether their AI system qualifies as high-risk under the EU AI Act.
Why does it matter?
Under the EU AI Act, providers and deployers of so-called ‘high-risk’ AI systems will be subject to considerably stricter regulatory obligations than those applicable to non-high-risk systems. This will require changes to policies, practices, and contracts with customers and suppliers.
High-risk AI systems themselves must comply with a range of essential requirements set out in the AI Act, including requirements concerning risk management, data and data governance, technical documentation, record-keeping, transparency, human oversight, accuracy, robustness, and cybersecurity.
In addition, providers and deployers (users) of high-risk AI systems are subject to distinct regulatory obligations of their own, such as conformity assessment procedures, post-market monitoring, serious incident reporting, quality management systems, and registration in the EU database of high-risk AI systems.
Specific obligations also apply to importers and distributors, as well as to any third parties supplying tools, components, or services that are integrated into high-risk AI systems—especially where such elements contribute to the training, testing, or validation of the model or system.
Understanding your role: provider, deployer or both?
Companies that place high-risk AI systems on the market or put them into service are classified as providers and face the largest share of requirements under the EU AI Act. Companies that use an AI system under their own authority are classified as deployers and also have certain obligations relating to its use.
It is important to emphasise that companies may be classified as both provider and deployer more easily than they expect. For instance, if a company commissions a third-party developer to build a high-risk AI system tailored to its specific needs and then uses it internally, the company may be regarded as having developed the system under its own name. In this scenario, the company is considered both provider and deployer, while the third-party developer is treated as a subcontractor.
Companies that already have to comply with the product safety legislation listed in Annex I are usually familiar with the requirements and will be better prepared, particularly as the obligations will only apply to them from August 2027. By contrast, companies subject to the high-risk use cases in Annex III may face greater implementation challenges, as they lack experience with such regulations and must comply a year earlier, from August 2026.
Steps to determine whether your AI system is high-risk
Step 1: Does the system meet the definition of an AI system under Article 3(1) AI Act?
The first step is to check whether the system qualifies as an “AI system” as defined in Article 3(1) of the AI Act: “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.
Although the European Commission has published guidance on this definition, many aspects remain unclear. In practice, “autonomy” and the capacity to “infer” appear to be the most important factors in determining whether a system falls within scope.
- If the system meets the definition of an AI system, proceed to Step 2.
- If not, the system is not an AI system within the meaning of the AI Act, and the high-risk regime does not apply.
Step 2: Is the AI system a prohibited practice (Article 5)?
There may be some overlap between the prohibited practices listed in Article 5 and high-risk AI systems. It is therefore important to first check whether your system falls under one of the prohibited categories, such as manipulative techniques or biometric categorisation.
With regard to AI-based remote biometric identification systems, the EU AI Act expressly recognises the potential for biased outcomes and discriminatory effects. Consequently, such systems are classified as high-risk AI systems under Annex III, unless their sole purpose is to perform verification—that is, confirming that a person is who they claim to be, typically on a one-to-one basis (e.g. unlocking a device or accessing a system).
The risk classification therefore hinges on the intended use of the system: identification (one-to-many comparison in public or semi-public spaces) will generally fall under the high-risk regime, whereas limited verification purposes, such as for cybersecurity or personal data protection (e.g. user authentication), are explicitly excluded from the high-risk category.
The European Commission has also issued guidance on this point.
- If the system is prohibited, none of the following steps are relevant because the practice is banned outright.
- If the AI system is not prohibited, proceed to Step 3.
Step 3: Is the AI system covered by the product safety legislation listed in Annex I?
The AI Act recognises that a technology may be an “AI system” and another regulated product at the same time, for example medical devices, industrial machines, cars, or toys. Article 6(1) applies to AI systems that are either products or safety components of products subject to the EU product safety legislation listed in Annex I (e.g. the Machinery Directive or the Medical Device Regulation). Note that AI systems falling under Section B of Annex I are exempt from most of the requirements of the EU AI Act.
- If the AI system is covered by Annex I, go to Step 4. If not, skip to Step 5.
Step 4: Does the AI system require a third-party conformity assessment under the legislation listed in Annex I?
Even if the AI system is covered by Annex I, it is only considered high-risk if the product is required to undergo a third-party conformity assessment under the relevant Annex I legislation. Companies whose systems fall within the scope of Annex I may already be familiar with product safety requirements and may have many of these measures in place. However, they should be aware that the AI Act introduces additional obligations, including data governance and transparency requirements. Conducting a compliance gap analysis is strongly recommended.
- If the AI system requires a third-party conformity assessment, it is classified as high-risk and must comply with the requirements set out in the EU AI Act by August 2027.
- If the AI system does not require a third-party conformity assessment, proceed to Step 5.
Step 5: Does the AI system fall under one of the high-risk categories in Annex III?
If an AI system does not qualify as high-risk under Annex I, it may still fall within the scope of Annex III if it poses a significant risk to health, safety, or fundamental rights based on its intended purpose. Annex III identifies eight areas and related use cases, including biometric identification, critical infrastructure, education, recruitment, and credit scoring. The exact scope of some of these categories remains unclear, and companies unfamiliar with product safety regulation may find the legal landscape particularly challenging.
- If the AI system falls under one of the high-risk categories in Annex III, proceed to Step 6.
- If not, the AI system is not classified as high-risk.
Step 6: Does the AI system perform profiling?
Article 6(3) of the AI Act contains exemptions for AI systems listed in Annex III if they do not pose a significant risk. However, these exemptions do not apply if the system performs profiling of natural persons.
- If profiling is involved, an AI system that falls under one of the categories in Annex III is automatically classified as high-risk.
- If not, proceed to Step 7.
Step 7: Does the AI system fall under one of the exemptions?
The final step is to assess whether the AI system may be exempt from classification as “high-risk” on the basis that it does not pose a significant risk to the health, safety, or fundamental rights of individuals.
Article 6(3) of the AI Act outlines an exhaustive list of conditions under which such an exemption may apply. An AI system may be excluded from the high-risk category if it:
- Only performs a narrow procedural task
- Only improves the result of a previously completed human activity
- Only detects decision-making patterns or deviations from prior decision-making patterns
- Only performs a preparatory task related to a high-risk use case
However, companies should be cautious when relying on these exemptions: Recital 53 of the AI Act frames them very narrowly, and they must not be interpreted broadly. In addition, there is legal uncertainty around the “preparatory task” exemption, which may not always offer a reliable legal basis.
Often overlooked is the fact that a provider who chooses to rely on one of these exemptions must document the assessment that justifies the non-classification before placing the system on the market or putting it into service, and must also register the system in the EU database in accordance with Article 49(2) of the AI Act. As part of this registration, a brief public summary of the justification must be provided (Annex VIII, Section B, point 7). If this explanation is flawed or misinterpreted, it could subject the company to public scrutiny and potentially lead to significant administrative fines.
- If one of the exemptions can be successfully invoked, the AI system is not classified as high-risk, although the documentation and registration obligations described above still apply.
- If not, the AI system will be classified as high-risk under Annex III.
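
For readers who like to see the whole test in one place, the seven steps above can be expressed as a simple decision tree. The sketch below (in Python) is purely illustrative: the function, flag names, and outcome labels are our own shorthand for the questions discussed above, not terms defined in the AI Act, and answering each question in practice requires legal analysis.

```python
from enum import Enum

class Classification(Enum):
    NOT_AI_SYSTEM = "Not an AI system under Article 3(1); the high-risk regime does not apply"
    PROHIBITED = "Prohibited practice under Article 5; banned outright"
    HIGH_RISK_ANNEX_I = "High-risk via Article 6(1) / Annex I (compliance from August 2027)"
    HIGH_RISK_ANNEX_III = "High-risk via Article 6(2) / Annex III (compliance from August 2026)"
    EXEMPT_ANNEX_III = "Annex III use case but exempt under Article 6(3); document and register (Article 49(2))"
    NOT_HIGH_RISK = "Not classified as high-risk"

def classify(
    is_ai_system: bool,            # Step 1: meets the Article 3(1) definition
    is_prohibited: bool,           # Step 2: prohibited practice under Article 5
    covered_by_annex_i: bool,      # Step 3: product/safety component under Annex I legislation
    needs_third_party_ca: bool,    # Step 4: third-party conformity assessment required
    annex_iii_use_case: bool,      # Step 5: falls under an Annex III category
    performs_profiling: bool,      # Step 6: profiling of natural persons
    article_6_3_exemption: bool,   # Step 7: an Article 6(3) condition applies
) -> Classification:
    if not is_ai_system:                                # Step 1
        return Classification.NOT_AI_SYSTEM
    if is_prohibited:                                   # Step 2
        return Classification.PROHIBITED
    if covered_by_annex_i and needs_third_party_ca:     # Steps 3-4
        return Classification.HIGH_RISK_ANNEX_I
    if annex_iii_use_case:                              # Step 5
        if performs_profiling:                          # Step 6: profiling defeats the exemptions
            return Classification.HIGH_RISK_ANNEX_III
        if article_6_3_exemption:                       # Step 7
            return Classification.EXEMPT_ANNEX_III
        return Classification.HIGH_RISK_ANNEX_III
    return Classification.NOT_HIGH_RISK

# Example: a recruitment-screening tool (an Annex III use case) that profiles candidates
print(classify(
    is_ai_system=True, is_prohibited=False,
    covered_by_annex_i=False, needs_third_party_ca=False,
    annex_iii_use_case=True, performs_profiling=True,
    article_6_3_exemption=False,
).value)
```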
Each of these steps calls for careful judgement. Want tailored advice or help submitting your views in the European Commission’s consultation process? Get in touch with us.
Authors:
