
Is Your AI System High-Risk? A Practical Guide to Article 6 and Annex III

How to determine whether your AI system is classified as high-risk under the EU AI Act — and what that means for compliance


Paul McCormack

AI Governance & Compliance · 3 March 2026

The classification of your AI system as 'high-risk' is the single most consequential determination under the EU AI Act. High-risk systems face the full weight of the Act's requirements — risk management, data governance, technical documentation, human oversight, accuracy, robustness, and cybersecurity. Getting this classification right is the foundation of EU AI Act compliance.

High-Risk Classification — Two Pathways

Pathway 1 (Article 6(1), product safety, Annex I legislation): Is your AI system a safety component of, or itself, a product covered by EU harmonisation legislation listed in Annex I?

Pathway 2 (Article 6(2), use case, Annex III categories): Does your AI system fall within one of the eight use-case categories listed in Annex III?

Article 6 establishes two routes to high-risk classification. Both lead to the same compliance obligations.

Pathway 1: Product Safety (Article 6(1) + Annex I)

The first pathway catches AI systems that are safety components of products already regulated under EU harmonisation legislation. If the product requires a third-party conformity assessment under its sector-specific legislation, and the AI component is a safety-relevant element, the system is classified as high-risk.

Annex I lists the relevant EU legislation — including the Machinery Regulation, Medical Devices Regulation, Civil Aviation Regulation, and others covering toys, lifts, pressure equipment, radio equipment, and vehicles. If your AI system operates within any of these regulated product categories and has safety relevance, it is high-risk.

The compliance deadline for Annex I product systems is 2 August 2027 — one year later than the general high-risk deadline.

Pathway 2: Use-Case Categories (Article 6(2) + Annex III)

The second pathway is broader and catches most AI systems that practitioners are concerned about. Annex III lists eight categories of high-risk use cases:

1. Biometrics: remote biometric identification (excluding real-time use for law enforcement, which is prohibited), biometric categorisation, and emotion recognition where permitted
2. Critical infrastructure: AI managing the safety of road traffic, water supply, gas, heating, electricity, and digital infrastructure
3. Education and vocational training: AI determining access to education, evaluating learning outcomes, or monitoring prohibited behaviour during exams
4. Employment and worker management: AI screening CVs, evaluating candidates, making promotion or termination decisions, allocating tasks, or monitoring performance
5. Access to essential services: creditworthiness assessment, life and health insurance risk and pricing, emergency services dispatch, and public assistance eligibility
6. Law enforcement: polygraph-type AI, evidence reliability assessment, crime analytics, and profiling in criminal investigations
7. Migration, asylum and border control: polygraph-type AI for migration interviews, risk assessment for irregular migration, and identity document authentication
8. Administration of justice and democratic processes: AI assisting judicial research and the interpretation of law, and AI intended to influence election outcomes

The Article 6(3) Exception

Not every AI system that falls within an Annex III category is automatically high-risk. Article 6(3) provides an important exception: an AI system listed in Annex III is not high-risk if it does not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons.

Specifically, a system may be exempted if it performs a narrow procedural task, improves the result of a previously completed human activity, detects decision-making patterns without replacing or influencing human assessment, or performs a preparatory task to an assessment relevant to the use cases in Annex III.
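The four exemption conditions can be sketched as a simple screening checklist. This is an illustration only: the field names and the boolean framing are this sketch's own, not the Act's, and the final subparagraph of Article 6(3) (an Annex III system that performs profiling of natural persons is always high-risk) is modelled as an override.

```python
from dataclasses import dataclass


@dataclass
class Article63Screen:
    """Illustrative checklist for the Article 6(3) derogation conditions.

    The Act states these conditions in prose; this schema and its field
    names are an assumption made for the sketch.
    """
    narrow_procedural_task: bool         # (a) performs a narrow procedural task
    improves_prior_human_activity: bool  # (b) improves the result of a previously completed human activity
    detects_patterns_only: bool          # (c) detects decision patterns without replacing or influencing human assessment
    preparatory_task_only: bool          # (d) performs a preparatory task to an Annex III assessment
    performs_profiling: bool             # profiling of natural persons defeats the derogation

    def exception_may_apply(self) -> bool:
        # Profiling override: an Annex III system that profiles natural
        # persons is always high-risk, regardless of conditions (a)-(d).
        if self.performs_profiling:
            return False
        # Meeting any one of the four conditions can support the exemption.
        return any([
            self.narrow_procedural_task,
            self.improves_prior_human_activity,
            self.detects_patterns_only,
            self.preparatory_task_only,
        ])
```

A checklist like this only structures the assessment; the legal question of "significant risk of harm" still needs reasoned, documented analysis per system.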

"The provider must document the Article 6(3) assessment and register it in the EU database before placing the system on the market or putting it into service."

This is not a blanket escape clause. The provider must document why they consider the system non-high-risk and register this in the EU database under Article 49(2). Market surveillance authorities can challenge this assessment under Article 80.

What High-Risk Classification Means

Once classified as high-risk, your AI system must comply with Articles 8-15 (requirements) and Articles 16-27 (obligations on operators). The key requirements are:

  • Article 9: a risk management system, continuous and iterative, covering the entire system lifecycle
  • Article 10: data and data governance, with training, validation, and testing datasets meeting quality criteria
  • Article 11: technical documentation, detailed and drawn up before market placement
  • Article 12: record-keeping, with automatic logging of events during the system's operation
  • Article 13: transparency, so deployers receive sufficient information to understand and use the system
  • Article 14: human oversight, with the system designed so that natural persons can oversee it effectively
  • Article 15: accuracy, robustness, and cybersecurity at measurable levels, with resilience against errors and attacks

Practical Classification Steps

1. Check Annex I first

If your AI system is part of a product covered by EU harmonisation legislation (medical devices, machinery, vehicles, etc.), classification is straightforward: it is high-risk via Article 6(1).

2. Map to Annex III categories

For standalone AI systems, assess whether the intended purpose falls within any of the eight Annex III categories. Focus on the system's purpose, not its technology.

3. Apply the Article 6(3) exception

If the system falls within Annex III but performs only narrow procedural tasks or preparatory work, assess whether the exception applies. Document this assessment thoroughly.

4. Consider intended and foreseeable use

Classification is based on intended purpose and reasonably foreseeable misuse — not just the current deployment. A recruitment tool is high-risk even if currently used for a single internal position.

5. Register your determination

If claiming non-high-risk under Article 6(3), register the system in the EU database. If classified as high-risk, begin compliance planning for the requirements in Articles 8-15.
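The five steps above amount to a decision flow, which can be sketched as follows. The parameter names and result strings are this sketch's own labels, introduced purely for illustration; the actual legal test turns on the system's intended purpose and reasonably foreseeable misuse, not on a handful of flags.

```python
from typing import Optional


def classify(annex_i_safety_component: bool,
             requires_third_party_assessment: bool,
             annex_iii_category: Optional[str],
             exception_documented: bool) -> str:
    """Illustrative walk through the Article 6 classification steps.

    All inputs are assumptions of this sketch: each would be the outcome
    of a documented legal assessment, not a simple boolean.
    """
    # Step 1: product-safety pathway (Article 6(1) + Annex I). Both a
    # safety-relevant role and a third-party conformity assessment are needed.
    if annex_i_safety_component and requires_third_party_assessment:
        return "high-risk (Article 6(1))"

    # Step 2: use-case pathway (Article 6(2) + Annex III).
    if annex_iii_category is None:
        return "not high-risk under Article 6"

    # Step 3: Article 6(3) exception. Claiming it requires a documented
    # assessment and registration in the EU database.
    if exception_documented:
        return "not high-risk (Article 6(3) exception claimed; register in EU database)"

    return f"high-risk (Annex III: {annex_iii_category})"
```

For example, a CV-screening tool with no Annex I product link would enter at step 2 under the employment category and, absent a documented Article 6(3) assessment, come out high-risk.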

Key Takeaways

Two pathways to high-risk: product safety (Annex I) or use-case categories (Annex III)

Annex III covers 8 categories: biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and justice

Article 6(3) allows exemption for narrow procedural or preparatory AI tasks — but requires documentation and database registration

High-risk classification triggers the full compliance framework: Articles 8-15 requirements plus operator obligations

The compliance deadline for Annex III systems is 2 August 2026; for Annex I product systems, 2 August 2027

Tags

High-Risk AI · Article 6 · Annex III · Classification · Compliance


Built by Paul McCormack — lawyer, product leader, and founder of Kormoon. This site is an independent informational resource only and does not constitute legal advice. No reliance should be placed on its contents. For the authoritative text, refer to the official EUR-Lex source linked in the Annexes tab, or consult your legal advisor.