EU AI Act Compliance Checklist 2026: Everything You Need Before the Deadline
Complete EU AI Act compliance checklist for 2026. Cover all Annex III classification checks, Annex IV documentation items, conformity assessment steps, and the August 2, 2026 deadline requirements.
The August 2, 2026 deadline for EU AI Act compliance is approaching fast. If your organisation develops or deploys AI systems in the EU market, you need a structured plan — not a vague awareness of the regulation.
This checklist covers every major obligation you need to fulfil, in the order you should tackle them.
Step 1: Determine Whether the EU AI Act Applies to You
Before anything else, confirm your organisation is within scope.
You are a Provider if you:
- Develop an AI system and place it on the EU market under your name or trademark
- Substantially modify a third-party AI system before deployment
- Build a product around a third-party model (e.g. an LLM API) and place it on the EU market under your own name or trademark
You are a Deployer if you:
- Use an AI system in a professional context (not personal/private use)
- Integrate a third-party AI system into your operations
Providers carry the heaviest compliance burden. Deployers have secondary obligations — including monitoring, incident reporting, and human oversight implementation.
Non-EU companies: The Act has extra-territorial reach. If your AI system is placed on the EU market or its outputs are used in the EU, you must comply, and non-EU providers must additionally appoint an EU Authorised Representative (Article 22).
Step 2: Classify Your AI System (Annex III Check)
The most consequential step. Use this checklist to determine if your system is High-Risk under Annex III:
Annex III — Eight High-Risk Categories
- Category 1 — Biometric systems: Does your system perform remote biometric identification, emotion recognition, or biometric categorisation inferring sensitive attributes?
- Category 2 — Critical infrastructure: Does your system manage or operate water, gas, electricity, heating, road traffic, or digital infrastructure?
- Category 3 — Education: Does your system determine access to educational institutions, evaluate learning outcomes, or proctor exams?
- Category 4 — Employment: Does your system assist in recruitment, task allocation, performance monitoring, promotion, or termination decisions?
- Category 5 — Essential services: Does your system perform credit scoring, insurance risk assessment, public benefits eligibility, healthcare triage, or emergency dispatch?
- Category 6 — Law enforcement: Does your system assess risk of criminal offences, operate as a polygraph tool, or predict crime?
- Category 7 — Migration and border control: Does your system assess risk of persons crossing borders, verify documents, or process asylum applications?
- Category 8 — Justice and democracy: Does your system assist courts, research or apply law, or influence electoral processes?
If you checked any of the above, your system is likely High-Risk and subject to the full obligations of Chapter III of the EU AI Act.
Article 6(3) Exception: Even if you fall within an Annex III category, your system may not be high-risk if it performs only a narrow procedural task, improves a previously completed human activity, or is purely preparatory — with no significant risk to health, safety, or fundamental rights. Note that a system performing profiling of natural persons is always considered high-risk, regardless of this exception. Document this reasoning carefully if you rely on the exception.
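If several product teams run this screening, a lightweight internal helper keeps the answers consistent. The sketch below is hypothetical: the category keys and return values are our own convention, and the result is a documentation aid, not legal advice.

```python
# Hypothetical Annex III screening aid. Category keys mirror the
# checklist above; this is a documentation aid, not legal advice.
ANNEX_III_CATEGORIES = {
    "biometrics": "Remote biometric ID, emotion recognition, categorisation",
    "critical_infrastructure": "Water, gas, electricity, heating, traffic, digital",
    "education": "Admissions, learning evaluation, exam proctoring",
    "employment": "Recruitment, task allocation, monitoring, promotion, termination",
    "essential_services": "Credit scoring, insurance risk, benefits, triage, dispatch",
    "law_enforcement": "Offence risk assessment, polygraph tools, crime prediction",
    "migration_border": "Border risk assessment, document checks, asylum processing",
    "justice_democracy": "Judicial assistance, legal research, electoral influence",
}

def screen_system(matched: list[str], narrow_procedural_only: bool) -> str:
    """Return a provisional classification for the compliance file."""
    unknown = set(matched) - set(ANNEX_III_CATEGORIES)
    if unknown:
        raise ValueError(f"Unknown categories: {sorted(unknown)}")
    if not matched:
        return "not-high-risk"
    if narrow_procedural_only:
        # Article 6(3) exception candidate: record the supporting reasoning.
        return "candidate-for-article-6-3-exception"
    return "likely-high-risk"

print(screen_system(["employment"], narrow_procedural_only=False))
# → likely-high-risk
```

Whatever tooling you use, the output should end up in the compliance file together with the written reasoning, especially if you rely on the Article 6(3) exception.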
Step 3: Annex IV Technical Documentation (8 Items)
This is the most time-intensive obligation. All 8 items must be fully documented before you can complete the conformity assessment.
Documentation Checklist
- Item 1 — General description: Intended purpose, version/date, hardware and software requirements, forms of market placement, design trade-offs
- Item 2 — Design specifications: Overall logic and key design choices, model architecture, accuracy vs. fairness trade-offs, development methodology
- Item 3 — Training data: Dataset provenance, scope, and characteristics; collection and annotation methodology; bias detection and mitigation measures; GDPR alignment for personal data
- Item 4 — Instructions for use: Provider identity, system capabilities and limitations, performance metrics (disaggregated), input data requirements, deployer oversight responsibilities, maintenance schedule
- Item 5 — Risk management system: Risk register with identified risks, probability and severity estimates, mitigation measures, residual risks, testing procedures
- Item 6 — Human oversight measures: Technical tools for oversight persons; monitoring dashboard; override and disregard mechanisms; emergency stop procedure; automation bias mitigation
- Item 7 — Accuracy, robustness, and cybersecurity: Performance benchmarks; disaggregated accuracy by demographic group; robustness tests; adversarial robustness; cybersecurity threat model and controls
- Item 8 — Quality management and post-market monitoring: Quality management system; post-market monitoring plan with data collection and review triggers; automatic logging strategy; serious incident reporting procedures
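Teams tracking these eight items often maintain a simple status board so the gap analysis stays current. A minimal sketch, assuming a per-item status dict of our own design:

```python
# Illustrative gap-analysis tracker for the eight documentation items
# above; the item keys and status values are our own convention.
ANNEX_IV_ITEMS = [
    "general_description", "design_specifications", "training_data",
    "instructions_for_use", "risk_management_system", "human_oversight",
    "accuracy_robustness_cybersecurity", "qms_post_market_monitoring",
]

def gap_report(status: dict) -> list[str]:
    """Return the items not yet marked complete; every item must be
    done before the conformity assessment in Step 10 can be signed off."""
    return [item for item in ANNEX_IV_ITEMS if status.get(item) != "complete"]

status = {item: "complete" for item in ANNEX_IV_ITEMS}
status["training_data"] = "in_progress"
print(gap_report(status))  # → ['training_data']
```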
Step 4: Implement Article 9 — Risk Management System
A Risk Management System (RMS) is not a document — it is a continuous, iterative process covering the entire AI system lifecycle. Your RMS must:
- Identify and analyse all known and foreseeable risks (intended use, misuse, edge cases)
- Estimate probability, severity, and number of affected persons per risk
- Adopt appropriate mitigation measures and document their effectiveness
- Document residual risks and communicate them to deployers
- Test the system against identified risk scenarios before deployment
- Update the RMS when the system is modified or new risks are identified
Common risks to address: proxy discrimination (bias via correlated features), misuse by deployers beyond intended scope, distribution shift post-deployment, adversarial attacks, privacy violations, automation bias in human reviewers.
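One way to make the risk register concrete is a structured record per risk. The sketch below is illustrative: the field names and the probability-times-severity score are our own choices, not a format the Act prescribes.

```python
from dataclasses import dataclass, field

# Illustrative risk-register entry for an Article 9 RMS; the fields
# track the obligations listed above but the schema is our own.
@dataclass
class RiskEntry:
    risk_id: str
    description: str
    probability: float        # estimated likelihood, 0.0-1.0
    severity: int             # e.g. 1 (negligible) to 5 (critical)
    affected_persons: str     # scale estimate, e.g. "all loan applicants"
    mitigations: list = field(default_factory=list)
    residual_risk: str = "not yet assessed"
    test_reference: str = ""  # link to pre-deployment test evidence

    def score(self) -> float:
        """Simple probability x severity score for prioritisation."""
        return self.probability * self.severity

entry = RiskEntry(
    risk_id="R-004",
    description="Proxy discrimination via postcode feature",
    probability=0.3,
    severity=4,
    affected_persons="applicants in low-income areas",
    mitigations=["remove postcode feature", "fairness audit per release"],
    residual_risk="low after mitigation; communicated to deployers",
    test_reference="tests/fairness/test_postcode_proxy.py",
)
```

Sorting entries by score gives a defensible prioritisation order, and the `test_reference` field ties each risk to the pre-deployment testing evidence the RMS requires.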
Step 5: Article 10 — Data Governance
If your system uses training data, you must:
- Apply data quality criteria: training, validation, and testing data must be relevant, sufficiently representative, and, to the best extent possible, free of errors and complete in view of the intended purpose
- Document bias detection and mitigation measures across demographic groups
- Record dataset provenance: origin, scope, collection methodology, geographic and temporal scope
- Align with GDPR: lawful basis for personal data in training sets, data minimisation, purpose limitation
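As one concrete bias-detection measure, you can compare demographic group shares in the training data against a reference population. A minimal sketch, assuming you already have a group label per record and reference shares from a trusted source; the 5% tolerance is an arbitrary illustrative threshold, not a regulatory value:

```python
from collections import Counter

def representation_gaps(sample_groups, reference_shares, tol=0.05):
    """Flag groups whose share in the training data deviates from the
    reference population by more than `tol` (share minus reference)."""
    n = len(sample_groups)
    counts = Counter(sample_groups)
    gaps = {}
    for group, ref in reference_shares.items():
        share = counts.get(group, 0) / n
        if abs(share - ref) > tol:
            gaps[group] = round(share - ref, 3)
    return gaps

data = ["a"] * 70 + ["b"] * 20 + ["c"] * 10
gaps = representation_gaps(data, {"a": 0.5, "b": 0.3, "c": 0.2})
print(gaps)  # → {'a': 0.2, 'b': -0.1, 'c': -0.1}
```

Any flagged gap should be documented along with the mitigation taken (resampling, reweighting, additional collection) or a reasoned justification for leaving it.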
Step 6: Article 13 — Transparency and Instructions for Use
Your deployers must receive documentation enabling them to understand and operate the system:
- Provider identity and contact details
- System capabilities, performance under stated conditions
- Known limitations, failure modes, and biases
- Input data format and quality requirements
- Deployer obligations for human oversight and available interventions
- Expected lifetime, maintenance schedule, and update procedure
Step 7: Article 14 — Human Oversight
The system must be designed so that natural persons can effectively oversee it. This requires:
- Understanding interface: Oversight persons can see capabilities, limitations, and confidence levels
- Real-time monitoring: Tools to detect anomalies and performance drift during use
- Override mechanisms: Clear, accessible controls to intervene or disregard AI outputs — with mandatory logging
- Stop function: Documented and tested emergency stop procedure
- Automation bias mitigation: Active measures preventing rubber-stamp approval of AI recommendations
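Override mechanisms and their mandatory logging can be wired together in a single hook. The sketch below is hypothetical: the event schema and function name are our own, and in production the log line would go to an append-only audit store rather than the default logger.

```python
import json
import logging
import time

logger = logging.getLogger("oversight")

def record_override(decision_id: str, ai_output: str,
                    human_decision: str, reviewer: str, reason: str) -> dict:
    """Log a human override of an AI output so the intervention can be
    reconstructed later (Article 14 read together with Article 12)."""
    event = {
        "event": "human_override",
        "decision_id": decision_id,
        "ai_output": ai_output,
        "human_decision": human_decision,
        "reviewer": reviewer,
        "reason": reason,
        "timestamp": time.time(),
    }
    logger.info(json.dumps(event))  # ship to an append-only audit log
    return event

evt = record_override("D-1042", "reject", "approve", "reviewer-07",
                      "income evidence not parsed correctly")
```

Requiring a free-text `reason` on every override also gives you a signal for automation bias: reviewers who never override, or who override without substantive reasons, are worth auditing.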
Step 8: Article 15 — Accuracy, Robustness, and Cybersecurity
- Define and document appropriate accuracy metrics for your system type
- Run disaggregated performance analysis across relevant demographic sub-groups
- Test for fault resilience (erroneous, faulty, inconsistent inputs)
- Document adversarial robustness and implement countermeasures
- Establish monitoring for distribution shift post-deployment
- Address AI-specific cybersecurity threats: data poisoning, model inversion, model theft, adversarial examples
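Disaggregated performance analysis is straightforward to implement once predictions carry a group label. A minimal sketch with illustrative data; the group names are placeholders for whatever protected attributes are relevant to your system:

```python
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, y_true, y_pred) tuples.
    Returns per-group accuracy so gaps between groups are visible."""
    totals, hits = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        totals[group] += 1
        hits[group] += int(y_true == y_pred)
    return {g: hits[g] / totals[g] for g in totals}

records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0), ("group_a", 1, 1),
    ("group_b", 1, 0), ("group_b", 0, 0),
]
per_group = accuracy_by_group(records)
print(per_group)  # → {'group_a': 0.75, 'group_b': 0.5}
```

A large gap between groups is exactly the kind of finding that must flow back into the Article 9 risk register, with a documented mitigation or justification.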
Step 9: Article 12 — Logging
- Implement automatic logging of all system operations relevant to post-market monitoring
- Logs must enable reconstruction of input/output context for any given decision
- Log events: start/stop, input data characteristics, output decisions, human overrides, error conditions
- Define retention period aligned with GDPR and regulatory audit requirements
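A minimal sketch of a decision-log entry meeting these goals; the schema is our own, chosen so any output can be traced back to its input context without copying personal data into the log:

```python
import hashlib
import io
import json
import time

def log_decision(log_file, input_payload: dict, output: dict,
                 model_version: str) -> dict:
    """Append one reconstruction-ready decision record as a JSON line."""
    entry = {
        "ts": time.time(),
        "model_version": model_version,
        # Hash the raw input: enables later reconstruction checks without
        # duplicating personal data into the log (GDPR minimisation).
        "input_sha256": hashlib.sha256(
            json.dumps(input_payload, sort_keys=True).encode()).hexdigest(),
        "input_summary": {k: type(v).__name__ for k, v in input_payload.items()},
        "output": output,
    }
    log_file.write(json.dumps(entry) + "\n")  # append-only JSON lines
    return entry

buf = io.StringIO()  # stands in for the real append-only log sink
entry = log_decision(buf, {"income": 42000, "postcode": "1011"},
                     {"decision": "approve", "score": 0.81}, "credit-v2.3")
```

Human overrides, start/stop events, and error conditions would be logged through the same channel so the full decision timeline lives in one place.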
Step 10: Conformity Assessment, CE Marking, and Registration (Articles 43–49)
Once all documentation is complete, carry out the conformity assessment. Most Annex III providers can use the internal control procedure (Annex VI); remote biometric systems under point 1 of Annex III may instead require a notified body:
- Internal audit: Verify compliance against every Article 9–17 obligation with evidence references
- Resolve all gaps: Every non-compliant item must be remediated before signing the Declaration
- Draw up the EU Declaration of Conformity (Art. 47): Signed by an authorised person; references the AI system, applicable obligations, applicable standards, and test results; dated
- Affix CE marking (Art. 48): Visible on the system or its documentation; indelible
- Register in the EU AI Act database (Art. 49): Register before market placement; include provider identity, system description, intended purpose, and Declaration reference
Step 11: Post-Market Monitoring (Articles 72–73)
After deployment, ongoing obligations apply:
- Collect and analyse post-deployment performance data against documented thresholds
- Define and implement an incident reporting procedure
- Report serious incidents to the market surveillance authority immediately after establishing a causal link, and no later than 15 days after becoming aware of the incident
- Shorter windows apply in severe cases: 10 days for a death, 2 days for a widespread infringement or a serious incident involving critical infrastructure disruption
- Notify all deployers of any serious malfunctioning affecting their use
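On our reading of Article 73, the reporting windows are tiered: no later than 15 days after awareness for a standard serious incident, 10 days for a death, and 2 days for a widespread infringement or critical-infrastructure disruption. A small deadline helper, with an incident-type mapping that is our own simplification and no substitute for legal review:

```python
from datetime import datetime, timedelta

# Simplified mapping of Article 73 reporting windows (days after
# awareness); verify the applicable window with counsel per incident.
REPORTING_WINDOWS_DAYS = {
    "serious_incident": 15,
    "death": 10,
    "widespread_or_infrastructure": 2,
}

def reporting_deadline(awareness: datetime, incident_type: str) -> datetime:
    """Latest date to notify the market surveillance authority."""
    return awareness + timedelta(days=REPORTING_WINDOWS_DAYS[incident_type])

aware = datetime(2026, 9, 1)
deadline = reporting_deadline(aware, "serious_incident")
print(deadline.date())  # → 2026-09-16
```

Wiring this into the incident-intake workflow means the clock starts the moment an incident is triaged, rather than when someone remembers to check the regulation.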
Key Deadlines Summary
| Date | Milestone |
|---|---|
| 1 August 2024 | EU AI Act entered into force |
| 2 February 2025 | Prohibited practices enforceable |
| 2 August 2025 | GPAI model obligations in force |
| 2 August 2026 | High-Risk AI (Annex III) — full compliance required |
| 2 August 2027 | Annex I regulated product AI systems |
| 2 August 2030 | Public authority AI (extended deadline) |
Where to Start
The two most common mistakes organisations make are: (1) starting too late because they assume the documentation can be produced quickly, and (2) underestimating the technical depth required for each Annex IV item.
Start with the classification check (Step 2) and the Annex IV documentation gap analysis (Step 3) — these determine how much work you have ahead.
If you’re not sure whether your system qualifies as High-Risk, use our free EU AI Act Status Quo Assessment to get an instant classification check, readiness score, and top 3 priority actions — no cost, delivered to your inbox.
For organisations that have confirmed their High-Risk classification and need a full 15-page Annex IV Technical Documentation Roadmap with practical examples for every item, see our paid report tool.
Free Status Quo Assessment
12 questions. Instant Annex III classification + readiness score. Free PDF delivered to your inbox.
Take free assessment →
Annex IV Roadmap — €149
15-page personalised report. All 8 Annex IV items with practical examples. 90-day action plan. Instant PDF.
Get your roadmap →