
What is High-Risk AI Under the EU AI Act? Annex III Explained

A complete guide to Annex III of the EU AI Act: which AI systems are classified as high-risk, the eight categories, how classification works, and what obligations apply.

Not all AI systems are treated equally under the EU AI Act. The regulation takes a risk-based approach: the higher the potential harm, the more stringent the obligations. At the top of this hierarchy — below only the prohibited practices — sits the High-Risk AI tier, governed primarily by Annex III.

Understanding whether your AI system falls into this category is the single most important compliance determination you will make. This guide explains exactly how that determination works.


What Makes an AI System “High-Risk”?

Under Article 6 of the EU AI Act, an AI system is classified as high-risk if it meets either of two conditions:

Condition A (Article 6(1) — Annex I products): The AI system is a safety component of a product (or is itself a product) covered by existing EU product safety legislation listed in Annex I — such as medical devices, machinery, aircraft, vehicles, or toys — and that product is required to undergo third-party conformity assessment under that legislation. These AI systems must comply with both the EU AI Act and the relevant sector-specific regulation.

Condition B (Article 6(2) — Annex III use cases): The AI system falls within one of the eight categories of high-risk use cases listed in Annex III of the Act. This is the path that applies to the vast majority of enterprise AI systems — and it is the focus of this guide.


The Eight Annex III Categories

Category 1 — Biometric Identification and Categorisation

This category covers AI systems used for:

  - Remote biometric identification of natural persons (excluding verification whose sole purpose is to confirm a claimed identity)
  - Biometric categorisation that infers sensitive or protected attributes
  - Emotion recognition

What this means in practice: Facial recognition in retail or public safety contexts, voice-print identification in call centres, or any system that infers sensitive personal characteristics from biometric data. Note that real-time remote biometric identification in public spaces by law enforcement is separately governed — and largely prohibited.

Category 2 — Critical Infrastructure Management

AI systems used as safety components in the management and operation of:

  - Critical digital infrastructure
  - Road traffic
  - The supply of water, gas, heating, and electricity

What this means in practice: AI-driven traffic signal optimisation, smart grid load balancing, or predictive maintenance systems for water treatment facilities all fall here if they make or recommend decisions affecting infrastructure safety.

Category 3 — Education and Vocational Training

AI systems that:

  - Determine access or admission to educational and vocational training institutions
  - Evaluate learning outcomes, including where those outcomes steer the learning process
  - Assess the appropriate level of education an individual will receive or can access
  - Monitor and detect prohibited student behaviour during tests

What this means in practice: AI admissions screening tools that rank applicants to universities, automated essay grading with consequential outcomes, or online exam proctoring systems that flag suspicious behaviour are all covered.

Category 4 — Employment, Workers Management, and Access to Self-Employment

This is one of the most commercially significant categories. It covers AI systems used for:

  - Recruitment and selection — placing targeted job advertisements, screening or filtering applications, and evaluating candidates
  - Decisions affecting work-related relationships — promotion, termination, and task allocation based on individual behaviour, traits, or characteristics
  - Monitoring and evaluating the performance and behaviour of workers

What this means in practice: If your organisation uses any AI tool in the hiring process — from CV screening to interview scheduling to candidate ranking — it almost certainly falls within Category 4. The same applies to performance management software that uses AI to score employees.

Category 5 — Access to Essential Private Services and Public Benefits

AI systems that:

  - Evaluate eligibility for essential public assistance benefits and services, or grant, reduce, revoke, or reclaim them
  - Evaluate creditworthiness or establish credit scores (except for detecting financial fraud)
  - Assess risk and set prices in life and health insurance
  - Classify emergency calls or dispatch and prioritise emergency first-response services, including healthcare patient triage

What this means in practice: AI-based lending decisions, automated insurance underwriting, benefits fraud detection, or hospital triage support tools all fall here. The common thread is: AI that determines whether a person can access something essential to their wellbeing.

Category 6 — Law Enforcement

AI systems used by or on behalf of law enforcement authorities for:

  - Assessing a person's risk of becoming a victim of criminal offences
  - Polygraphs and similar tools
  - Evaluating the reliability of evidence in investigations or prosecutions
  - Assessing a person's risk of offending or reoffending, or profiling in the course of detecting, investigating, or prosecuting criminal offences

What this means in practice: This category primarily applies to public law enforcement agencies, not private companies. Predictive policing tools, AI-assisted interrogation tools, or recidivism risk scoring systems used by courts fall here.

Category 7 — Migration, Asylum, and Border Control Management

AI systems used by competent public authorities for:

  - Polygraphs and similar tools
  - Assessing security, irregular-migration, or health risks posed by persons entering or intending to enter the EU
  - Examining applications for asylum, visas, and residence permits, including assessing the reliability of supporting evidence
  - Detecting, recognising, or identifying persons in this context (other than verifying travel documents)

What this means in practice: AI systems processing immigration applications, screening travellers at border points, or flagging visa applicants for manual review are covered here.

Category 8 — Administration of Justice and Democratic Processes

AI systems used for:

  - Assisting judicial authorities in researching and interpreting facts and the law, and in applying the law to a concrete set of facts
  - Influencing the outcome of an election or referendum, or the voting behaviour of natural persons (excluding back-office tools for organising campaigns)

What this means in practice: Legal research AI used directly by courts, AI-generated legal briefs influencing judicial decisions, or political micro-targeting systems fall into this category.


The Article 6(3) Exception: When High-Risk Categories Don’t Trigger Full Obligations

A crucial nuance: even if your system falls within an Annex III category, it is not treated as high-risk if it does not pose a significant risk of harm to health, safety, or fundamental rights — in particular because it:

  1. Performs only a narrow procedural task
  2. Improves the result of a previously completed human activity
  3. Detects decision-making patterns or deviations from them, without replacing or influencing a completed human assessment absent proper human review
  4. Performs only a preparatory task to an assessment relevant to the Annex III use case

One carve-out is absolute: an Annex III system that performs profiling of natural persons is always considered high-risk.

This exception is designed to exclude AI tools that provide informational assistance without making consequential determinations. However, regulators are likely to interpret it narrowly in early enforcement, and Article 6(4) requires providers who rely on it to document their assessment before the system is placed on the market. If you intend to rely on the exception, document your legal reasoning thoroughly — a classification memo from qualified EU AI Act counsel is strongly recommended.
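The classification logic described above — Condition A, Condition B, the Article 6(3) exception, and the profiling carve-out — can be sketched as a simple decision procedure. This is an illustrative sketch only: the field names are hypothetical, and a real determination requires fact-specific legal analysis, not a boolean checklist.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class SystemFacts:
    """Hypothetical facts gathered during a classification review."""
    annex_i_safety_component: bool           # Article 6(1), Condition A
    annex_iii_category: Optional[int]        # Article 6(2), Condition B: 1-8, or None
    narrow_procedural_task: bool             # exception condition 1
    improves_completed_human_activity: bool  # exception condition 2
    detects_decision_patterns_only: bool     # exception condition 3
    preparatory_task_only: bool              # exception condition 4
    performs_profiling: bool                 # profiling is always high-risk


def is_high_risk(s: SystemFacts) -> bool:
    if s.annex_i_safety_component:
        return True                          # Condition A: Annex I safety component
    if s.annex_iii_category is None:
        return False                         # outside Annex III entirely
    if s.performs_profiling:
        return True                          # profiling carve-out: exception unavailable
    exception_applies = (
        s.narrow_procedural_task
        or s.improves_completed_human_activity
        or s.detects_decision_patterns_only
        or s.preparatory_task_only
    )
    return not exception_applies


# Example: a CV-ranking tool (Annex III category 4) that profiles candidates
cv_ranker = SystemFacts(False, 4, False, False, False, False, True)
print(is_high_risk(cv_ranker))    # True

# Example: a narrow procedural helper inside an HR workflow
format_checker = SystemFacts(False, 4, True, False, False, False, False)
print(is_high_risk(format_checker))  # False
```

The key design point the sketch makes visible: the exception is only reachable once an Annex III category applies, and profiling short-circuits it entirely.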


What Obligations Apply to High-Risk AI Systems?

If your system is classified as High-Risk under Annex III, you must comply with the full requirements of Chapter III, Section 2 of the EU AI Act, together with the provider obligations that flow from them:

Article 9: Risk management system — continuous, iterative, lifecycle-long
Article 10: Data governance — quality criteria, bias audits, provenance documentation
Article 11 + Annex IV: Technical documentation — 8 mandatory items
Article 12: Automatic logging — input/output records enabling post-market audit
Article 13: Transparency — instructions for use covering capabilities, limitations, oversight
Article 14: Human oversight — technical measures enabling effective intervention
Article 15: Accuracy, robustness, and cybersecurity — documented metrics and hardening
Article 43: Conformity assessment — internal or third-party depending on category
Article 47: EU Declaration of Conformity — formal signed declaration
Article 48: CE marking — visible affixing to the system or its documentation
Article 49: EU database registration — before market placement
Article 72: Post-market monitoring — active collection and review of performance data
Article 73: Serious incident reporting — without undue delay, and no later than 15 days after awareness (with shorter deadlines for deaths and widespread incidents)

Who Bears the Compliance Burden?

Providers (entities placing the system on the EU market) bear the primary obligation. All Annex IV documentation, conformity assessment, CE marking, and database registration fall on the provider.

Deployers (entities using the system in a professional context) have secondary obligations, including:

  - Using the system in accordance with the provider's instructions for use
  - Assigning human oversight to people with the necessary competence, training, and authority
  - Monitoring the system's operation and suspending use if a serious risk emerges
  - Retaining the automatically generated logs under their control for at least six months
  - Informing affected workers and their representatives before deploying the system in the workplace

If you are both developing and deploying your own AI system, you carry the combined obligations of both roles.


Fines for Non-Compliance

Non-compliance with Annex III obligations attracts administrative fines of up to €15,000,000 or 3% of global annual turnover, whichever is higher. For SMEs and start-ups, each fine is capped at whichever of those two amounts is lower.
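The ceiling works out as a simple comparison of the fixed amount and the turnover percentage. A minimal sketch (function name and amounts illustrative; the SME rule follows Article 99(6) of the Act):

```python
def max_fine(global_annual_turnover_eur: float, is_sme: bool = False) -> float:
    """Ceiling for Annex III non-compliance fines.

    Non-SMEs: the HIGHER of EUR 15m or 3% of worldwide annual turnover.
    SMEs and start-ups: the LOWER of the two (Article 99(6)).
    """
    fixed_cap = 15_000_000
    turnover_cap = 0.03 * global_annual_turnover_eur
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)


print(max_fine(2_000_000_000))             # 60000000.0 (3% of 2bn exceeds 15m)
print(max_fine(100_000_000, is_sme=True))  # 3000000.0 (3% of 100m, below 15m)
```

Note how the same 3% figure cuts both ways: it raises the ceiling for large providers and lowers it for small ones.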

Beyond fines, the EU Commission maintains a public register of non-compliant AI systems. A public finding can cause significant reputational damage, procurement disqualification, and secondary litigation risk that often exceeds the fine itself.


How to Check Your Classification

The classification question requires a fact-specific analysis of your system’s intended purpose, deployment context, and the decisions it influences. There is no universal shortcut — but a structured approach helps.

Use our free EU AI Act Status Quo Assessment: 12 questions, instant result, free PDF delivered to your inbox showing your Annex III classification, readiness score, and top 3 priority actions.

If your system is confirmed High-Risk, our Annex IV Technical Documentation Roadmap provides a 15-page personalised report with practical examples for every documentation item and a 90-day compliance action plan — for €149.
