The classification of your AI system as 'high-risk' is the single most consequential determination under the EU AI Act. High-risk systems face the full weight of the Act's requirements — risk management, data governance, technical documentation, human oversight, accuracy, robustness, and cybersecurity. Getting this classification right is the foundation of EU AI Act compliance.
High-Risk Classification — Two Pathways
Article 6 establishes two routes to high-risk classification. Both lead to the same compliance obligations.
Pathway 1: Product Safety (Article 6(1) + Annex I)
The first pathway covers AI systems that are safety components of products (or are themselves products) already regulated under EU harmonisation legislation. If the product requires a third-party conformity assessment under its sector-specific legislation, and the AI component is a safety-relevant element, the system is classified as high-risk.
Annex I lists the relevant EU legislation — including the Machinery Regulation, Medical Devices Regulation, Civil Aviation Regulation, and others covering toys, lifts, pressure equipment, radio equipment, and vehicles. If your AI system operates within any of these regulated product categories and has safety relevance, it is high-risk.
The compliance deadline for Annex I product systems is 2 August 2027 — one year later than the general high-risk deadline.
Pathway 2: Use-Case Categories (Article 6(2) + Annex III)
The second pathway is broader and catches most AI systems that practitioners are concerned about. Annex III lists eight categories of high-risk use cases:
| # | Category | Examples |
|---|---|---|
| 1 | Biometrics | Remote biometric identification (except real-time use in publicly accessible spaces for law enforcement, which Article 5 prohibits subject to narrow exceptions), biometric categorisation, emotion recognition where permitted |
| 2 | Critical infrastructure | AI managing safety of road traffic, water supply, gas, heating, electricity, digital infrastructure |
| 3 | Education and vocational training | AI determining access to education, evaluating learning outcomes, monitoring prohibited behaviour during exams |
| 4 | Employment and worker management | AI screening CVs, evaluating candidates, making promotion/termination decisions, allocating tasks, monitoring performance |
| 5 | Access to essential services | Creditworthiness assessment, life and health insurance risk/pricing, emergency services dispatch, public assistance eligibility |
| 6 | Law enforcement | Polygraph-type AI, evidence reliability assessment, crime analytics, profiling in criminal investigations |
| 7 | Migration, asylum and border control | Polygraph-type AI for migration interviews, risk assessment for irregular migration, identity document authentication |
| 8 | Administration of justice and democratic processes | AI assisting judicial research and interpretation of law, AI intended to influence election outcomes |
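For teams building internal triage tooling, the eight categories map naturally onto an enumeration. The following Python sketch is illustrative only: the labels and the example mapping are our own shorthand, not official terminology from the Act.

```python
from enum import Enum

class AnnexIIICategory(Enum):
    """Shorthand labels for the eight Annex III use-case categories."""
    BIOMETRICS = 1
    CRITICAL_INFRASTRUCTURE = 2
    EDUCATION = 3
    EMPLOYMENT = 4
    ESSENTIAL_SERVICES = 5
    LAW_ENFORCEMENT = 6
    MIGRATION_AND_BORDER = 7
    JUSTICE_AND_DEMOCRACY = 8

# Illustrative purpose-to-category mapping; real classification must
# work from the system's intended purpose and the Annex III text itself.
EXAMPLE_PURPOSES = {
    "cv_screening": AnnexIIICategory.EMPLOYMENT,
    "credit_scoring": AnnexIIICategory.ESSENTIAL_SERVICES,
    "exam_proctoring": AnnexIIICategory.EDUCATION,
}

print(EXAMPLE_PURPOSES["cv_screening"])  # AnnexIIICategory.EMPLOYMENT
```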
The Article 6(3) Exception
Not every AI system that falls within an Annex III category is automatically high-risk. Article 6(3) provides an important exception: an AI system listed in Annex III is not high-risk if it does not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons.
Specifically, a system may be exempted if it performs a narrow procedural task, improves the result of a previously completed human activity, detects decision-making patterns or deviations from prior patterns without replacing or influencing a completed human assessment, or performs a preparatory task to an assessment relevant to the use cases in Annex III.
"The provider must document the Article 6(3) assessment and register it in the EU database before placing the system on the market or putting it into service."
This is not a blanket escape clause. The provider must document why they consider the system non-high-risk and register this in the EU database under Article 49(2). Market surveillance authorities can challenge the assessment under Article 80.
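To make the exemption logic concrete, here is a minimal Python sketch of an Article 6(3) self-check. The field names are our own shorthand for the four conditions; note that under Article 6(3) an Annex III system that performs profiling of natural persons is always high-risk, so the exemption is unavailable in that case.

```python
from dataclasses import dataclass

@dataclass
class Article63Assessment:
    """Inputs for an Article 6(3) self-assessment.

    Field names are shorthand for the conditions in Article
    6(3)(a)-(d); the Act's own wording is authoritative.
    """
    narrow_procedural_task: bool
    improves_completed_human_activity: bool
    detects_patterns_without_influencing: bool
    preparatory_task_only: bool
    performs_profiling: bool  # profiling of natural persons

def exemption_available(a: Article63Assessment) -> bool:
    # An Annex III system that performs profiling of natural persons
    # is always high-risk; the exception cannot apply.
    if a.performs_profiling:
        return False
    # Any one of the four conditions can support an exemption, subject
    # to documentation and registration under Article 49(2).
    return any([
        a.narrow_procedural_task,
        a.improves_completed_human_activity,
        a.detects_patterns_without_influencing,
        a.preparatory_task_only,
    ])
```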
What High-Risk Classification Means
Once classified as high-risk, your AI system must comply with Articles 8-15 (requirements) and Articles 16-27 (obligations on operators). The key requirements are:
- Article 9: A risk management system, continuous and iterative, covering the entire system lifecycle
- Article 10: Data and data governance, with training, validation, and testing datasets meeting quality criteria
- Article 11: Technical documentation, detailed and drawn up before market placement
- Article 12: Record-keeping, with automatic logging of events during the system's operation
- Article 13: Transparency, so that deployers receive sufficient information to understand and use the system
- Article 14: Human oversight, with the system designed to allow effective human oversight
- Article 15: Accuracy, robustness, and cybersecurity at measurable levels, with resilience against errors and attacks
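For compliance planning, this list can seed a simple tracking structure. The sketch below is a hypothetical starting point, not a prescribed format; the status values are placeholders.

```python
# The Articles 9-15 requirements as a seed for a compliance tracker.
HIGH_RISK_REQUIREMENTS = {
    "Article 9": "Risk management system",
    "Article 10": "Data and data governance",
    "Article 11": "Technical documentation",
    "Article 12": "Record-keeping",
    "Article 13": "Transparency and provision of information to deployers",
    "Article 14": "Human oversight",
    "Article 15": "Accuracy, robustness, and cybersecurity",
}

def new_compliance_checklist() -> dict[str, str]:
    """Return a fresh checklist mapping each requirement to a status."""
    return {article: "not_started" for article in HIGH_RISK_REQUIREMENTS}
```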
Practical Classification Steps
Check Annex I first
If your AI system is part of a product covered by EU harmonisation legislation (medical devices, machinery, vehicles, etc.), classification is straightforward: it is high-risk via Article 6(1).
Map to Annex III categories
For standalone AI systems, assess whether the intended purpose falls within any of the eight Annex III categories. Focus on the system's purpose, not its technology.
Apply the Article 6(3) exception
If the system falls within Annex III but performs only narrow procedural tasks or preparatory work, assess whether the exception applies. Document this assessment thoroughly.
Consider intended and foreseeable use
Classification turns on the system's intended purpose, and compliance must also account for reasonably foreseeable misuse, not just the current deployment. A recruitment tool is high-risk even if it is currently used for a single internal position.
Register your determination
If claiming non-high-risk under Article 6(3), register the system in the EU database. If classified as high-risk, begin compliance planning against the requirements in Articles 8-15.
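The steps above can be condensed into a single decision procedure. The sketch below is a simplified model of the Article 6 logic, assuming each input has already been assessed; all field names are hypothetical, and the output is a starting point for analysis, not a legal determination.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Classification(Enum):
    HIGH_RISK_ANNEX_I = auto()
    HIGH_RISK_ANNEX_III = auto()
    NOT_HIGH_RISK_6_3 = auto()  # exempt, but must be documented and registered
    NOT_HIGH_RISK = auto()

@dataclass
class SystemProfile:
    # Pathway 1 inputs (Article 6(1) + Annex I)
    safety_component_of_annex_i_product: bool
    third_party_conformity_assessment_required: bool
    # Pathway 2 input (Article 6(2) + Annex III)
    annex_iii_category: bool      # intended purpose falls in a category
    # Article 6(3) exception inputs
    performs_profiling: bool
    meets_6_3_condition: bool     # any of the four 6(3)(a)-(d) conditions

def classify(p: SystemProfile) -> Classification:
    # Pathway 1: safety component of a regulated product that requires
    # a third-party conformity assessment.
    if (p.safety_component_of_annex_i_product
            and p.third_party_conformity_assessment_required):
        return Classification.HIGH_RISK_ANNEX_I
    # Pathway 2: intended purpose within an Annex III category,
    # unless the Article 6(3) exception applies.
    if p.annex_iii_category:
        if p.performs_profiling or not p.meets_6_3_condition:
            return Classification.HIGH_RISK_ANNEX_III
        return Classification.NOT_HIGH_RISK_6_3
    return Classification.NOT_HIGH_RISK

# Hypothetical example: a CV-screening tool that profiles candidates
# (Annex III category 4) with no basis for an Article 6(3) exemption.
tool = SystemProfile(False, False, True, performs_profiling=True,
                     meets_6_3_condition=False)
assert classify(tool) is Classification.HIGH_RISK_ANNEX_III
```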
Key Takeaways
- Two pathways to high-risk: product safety (Annex I) or use-case categories (Annex III)
- Annex III covers eight categories: biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and justice
- Article 6(3) allows exemption for narrow procedural or preparatory AI tasks, but requires documentation and database registration
- High-risk classification triggers the full compliance framework: the requirements of Articles 8-15 plus operator obligations
- The compliance deadline for Annex III systems is 2 August 2026; for Annex I product systems, 2 August 2027