EU AI Act Risk Management System: Article 9 Requirements Explained
A complete guide to the EU AI Act Article 9 Risk Management System requirement for high-risk AI. What to include, how to structure your risk register, and common compliance gaps.
Article 9 of the EU AI Act contains one of the most demanding obligations for high-risk AI providers — and one of the most commonly misunderstood. Most organisations approach it as a document to produce. It is not. It is a continuous process that must run for the entire lifetime of the AI system.
This guide explains what Article 9 actually requires, how to build a compliant risk management system, and what regulators will look for when they audit yours.
What Article 9 Actually Says
Article 9 requires providers of high-risk AI systems to establish, implement, document, and maintain a risk management system. The key word is “maintain” — this obligation does not end when your system is launched.
The risk management system must:
- Be an iterative continuous process — not a one-time assessment conducted during development
- Run throughout the entire AI system lifecycle — from design and development through deployment, operation, and decommissioning
- Be regularly reviewed and updated — whenever the system changes, new risks are identified, or deployment context evolves
- Result in documentation that feeds directly into Annex IV Item 5
Critically, Article 9 requires risk management to address risks “to the health, safety, or fundamental rights of natural persons.” This is a broader mandate than typical product safety risk management — it explicitly includes fundamental rights such as non-discrimination, privacy, and dignity.
The Five-Step Risk Management Process
Step 1: Risk Identification and Analysis
Identify all known and reasonably foreseeable risks to health, safety, and fundamental rights. This must cover:
- Intended use: Risks arising when the system is used as designed, by the intended users, in the intended context
- Reasonably foreseeable misuse: Risks from uses that are predictably likely even if not intended — e.g. a deployer using a recruitment screener for promotion decisions
- Edge cases and failure modes: What happens when the system encounters inputs outside its training distribution, or when its confidence is low
For each identified risk, document:
- Risk description: What could go wrong, to whom, in what circumstances
- Risk category: Fundamental rights, safety, operational, bias/fairness, transparency, misuse
- Source: Training data, model architecture, deployment context, human-AI interaction, adversarial attack, etc.
- Who is affected: The type of person, the population size, whether vulnerable groups are disproportionately affected
Particular attention to vulnerable groups: Article 9(9) specifically requires providers to consider whether, in view of its intended purpose, the system is likely to have an adverse impact on persons under the age of 18 or other vulnerable groups — and to take additional care in identifying and mitigating the relevant risks.
Example risk entries for an employment AI:
Risk: Gender proxy discrimination via job title history. Category: Fundamental rights — non-discrimination. Source: Training data encoding historical hiring bias. Affected persons: Female candidates, estimated 40,000/year. Probability: High (identified via correlation analysis). Severity: High (affects hiring decisions with long-term career impact).
Risk: Deployer scope misuse — applying screener to internal promotion decisions beyond intended purpose. Category: Misuse. Source: Deployment context. Affected persons: All employees subject to AI-assisted promotion review. Probability: Medium. Severity: High.
Step 2: Risk Estimation and Evaluation
For each identified risk, estimate:
- Probability: How likely is this risk to materialise, given existing controls?
- Severity: How serious is the harm if it materialises? Consider immediate and long-term effects; reversibility; scale.
- Number of persons affected: How many people could be harmed, and how does the exposure scale with deployment size?
The EU AI Act does not prescribe a specific scoring methodology. A standard 5×5 probability/severity matrix is a reasonable approach — what matters is that the methodology is documented and applied consistently.
Prioritise risks by combined score. High probability + high severity risks require the most robust mitigation and the clearest documentation of residual risk.
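One way to make this concrete is a minimal scoring sketch. The risk entries, scales, and field names below are illustrative assumptions, not prescribed by the Act — the Act only requires that whatever methodology you choose is documented and applied consistently.

```python
# Illustrative 5x5 scoring sketch: probability and severity on 1-5 scales,
# combined multiplicatively and sorted for prioritisation.
risks = [
    {"id": "R-001", "description": "Gender proxy discrimination",
     "probability": 4, "severity": 5},
    {"id": "R-002", "description": "Deployer scope misuse",
     "probability": 3, "severity": 5},
    {"id": "R-003", "description": "Degraded accuracy on out-of-distribution CVs",
     "probability": 3, "severity": 3},
]

for r in risks:
    # Enforce the documented scale before scoring.
    assert 1 <= r["probability"] <= 5 and 1 <= r["severity"] <= 5
    r["score"] = r["probability"] * r["severity"]  # 1 (negligible) .. 25 (critical)

# Highest-scoring risks first: these need the strongest mitigations
# and the clearest residual-risk documentation.
for r in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f'{r["id"]}: score {r["score"]:2d} - {r["description"]}')
```

The multiplicative score is a common convention, not a legal requirement; an additive or matrix-lookup scheme is equally defensible as long as the criteria behind each 1–5 level are written down.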
Step 3: Risk Mitigation Measures
For each risk, adopt “the most appropriate risk management measures” to eliminate or reduce it to an acceptable level. Article 9(5) specifies that providers must:
- Address risks through design and development choices where possible
- Adopt adequate mitigation and control measures where risks cannot be eliminated at design stage
- Provide information and training to deployers where residual risk depends on correct use
Document each mitigation measure:
- What the measure is (technical, contractual, operational, or training-based)
- How it reduces the risk (the mechanism)
- Evidence of its effectiveness (test results, before/after metrics)
Examples of mitigation measures:
For proxy discrimination:
Removed postcode and educational institution name from feature set. Post-removal, shortlist rate gap between majority and minority groups reduced from 11% to 2% (documented in Bias Audit Report v1.3). Residual 2% gap deemed acceptable and disclosed in Instructions for Use.
For scope misuse:
Contractual use restriction in deployer licence agreement prohibiting use beyond recruitment context. API scope enforcement: system returns error if job role category is outside the validated set. Deployer onboarding training covers permitted scope.
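A technical control like the “API scope enforcement” described above could be sketched as follows. This is a hypothetical illustration — the validated category set, function name, and error shape are assumptions, not a real API:

```python
# Hypothetical sketch of an API scope-enforcement control: requests for job
# role categories outside the validated set are rejected before any scoring
# happens, giving technical teeth to the contractual use restriction.
VALIDATED_ROLE_CATEGORIES = {"engineering", "sales", "customer_support", "finance"}

def screen_candidate(job_role_category: str, candidate_features: dict) -> dict:
    if job_role_category not in VALIDATED_ROLE_CATEGORIES:
        # Out-of-scope use fails loudly instead of silently returning a score.
        return {
            "status": "error",
            "code": "OUT_OF_VALIDATED_SCOPE",
            "message": f"Role category '{job_role_category}' is not covered "
                       "by the system's validated scope.",
        }
    score = 0.5  # placeholder for the real scoring model
    return {"status": "ok", "score": score}
```

The point of returning an explicit error code rather than a degraded score is evidentiary: rejected out-of-scope calls can be logged and later cited as proof that the misuse control actually operates.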
For automation bias:
UI redesign: AI recommendation hidden until reviewer has read full candidate summary and clicked “I’ve reviewed this candidate.” Override reason mandatory for all decisions. Implemented and tested in v2.4 — override rate increased from 4% to 11%, indicating more active human engagement.
Step 4: Residual Risk Documentation and Communication
After mitigation, document what risk remains. Residual risks must be:
- Documented in Annex IV Item 5: Part of the formal technical file
- Communicated to deployers in the Instructions for Use: Deployers must know what risks they are accepting and what additional measures they should implement on their side
Example residual risk disclosure:
Instructions for Use, §7 — Known Residual Risks: A 2% accuracy gap between majority and minority demographic groups remains after bias mitigation. Deployers must apply mandatory manual review for all borderline-score candidates (50th–70th percentile confidence range). The system is not validated for job roles requiring more than 15 years of experience — do not use for senior executive hiring without additional validation.
Step 5: Testing Against Risk Scenarios
Before market placement, and after any substantial modification, test the system against the identified risk scenarios. Document the test methodology, test cases, and results.
Types of tests to consider:
- Bias and fairness testing: Run disaggregated performance analysis across all demographic groups covered in the risk register
- Adversarial testing: Generate inputs crafted to exploit identified failure modes
- Scope misuse simulation: Attempt to use the system in ways covered by identified misuse risks — verify that technical controls and contractual restrictions are effective
- Distribution shift testing: Test performance on recent data not in the training set to assess generalisation
Document all test results and link them to the relevant risk register entries. A risk that was identified but not tested is a compliance gap.
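A disaggregated fairness test linked back to its register entry can be sketched in a few lines. The records, group labels, risk ID, and the 5-percentage-point pass threshold below are illustrative assumptions:

```python
# Minimal sketch of disaggregated testing tied to a risk register entry.
from collections import defaultdict

def shortlist_rates(records):
    """records: iterable of (group, shortlisted: bool). Returns rate per group."""
    totals, hits = defaultdict(int), defaultdict(int)
    for group, shortlisted in records:
        totals[group] += 1
        hits[group] += int(shortlisted)
    return {g: hits[g] / totals[g] for g in totals}

# Toy outcomes: group A shortlisted 60/100, group B shortlisted 50/100.
records = [("A", True)] * 60 + [("A", False)] * 40 \
        + [("B", True)] * 50 + [("B", False)] * 50
rates = shortlist_rates(records)
gap = max(rates.values()) - min(rates.values())

# Link the result to the register entry it exercises ("R-001" is an assumed
# ID for the proxy-discrimination risk), so every identified risk has a test.
result = {"risk_id": "R-001", "rates": rates, "gap": gap, "passed": gap <= 0.05}
```

Storing the `risk_id` alongside each test result is what turns a pile of metrics into auditable evidence: a reviewer can walk from any register entry to the test that covers it.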
Building Your Risk Register
The risk register is the living document at the heart of your risk management system. It should be structured so that any reviewer can quickly see:
- What risks exist
- What their current status is
- What mitigations are in place
- Who owns each risk
- When it was last reviewed
Recommended fields:
| Field | Description |
|---|---|
| Risk ID | Unique identifier (e.g. R-001) |
| Description | What could go wrong |
| Category | Fundamental rights / Safety / Operational / Bias / Transparency / Misuse |
| Source | Training data / Architecture / Deployment / Human-AI interaction / External attack |
| Probability | 1–5 scale with criteria defined |
| Severity | 1–5 scale with criteria defined |
| Risk score | Probability × Severity |
| Affected persons | Who and how many |
| Mitigation measures | What controls are in place |
| Mitigation effectiveness | Evidence of reduction |
| Residual risk | Remaining risk after mitigation |
| Owner | Named person responsible |
| Last reviewed | Date of most recent review |
| Next review | Scheduled next review date |
How to Maintain It Over Time
The most common compliance failure is treating the risk register as a one-time deliverable. Article 9’s “continuous process” requirement means it must be actively maintained:
- On deployment: Risk register finalised and signed off; linked to Annex IV documentation
- On each model update: Risk register reviewed for new or changed risks; new mitigations documented
- On each deployment context change: New deployers, new sectors, or new geographies may introduce new risks
- On incident detection: Any serious incident or near-miss triggers a risk register update
- Minimum annually: Full review even without specific triggers
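The trigger list above can be reduced to a simple check that could run in CI or a scheduled job. The trigger names and the helper below are assumptions for illustration:

```python
# Sketch of the "event triggers plus minimum annually" cadence as a check.
from datetime import date, timedelta
from typing import Optional

# Assumed trigger labels matching the maintenance events listed above.
REVIEW_TRIGGERS = {"model_update", "deployment_context_change", "serious_incident"}
MAX_REVIEW_INTERVAL = timedelta(days=365)  # "minimum annually"

def review_due(last_reviewed: date, events_since_review: set,
               today: Optional[date] = None) -> bool:
    today = today or date.today()
    if events_since_review & REVIEW_TRIGGERS:
        return True  # any trigger event forces an unscheduled review
    return today - last_reviewed >= MAX_REVIEW_INTERVAL
```

Automating the check does not satisfy Article 9 by itself — a human still has to perform the review — but it makes the cadence demonstrable, which is exactly what Gap 4 below describes regulators looking for.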
Assign a named Risk Management Owner — a specific person (not a team) responsible for keeping the register current. This person should have enough authority to escalate findings that require engineering changes or deployer communications.
Common Compliance Gaps
Gap 1: Risk register completed but never updated
The register was prepared before launch and has not changed since. Post-deployment incident data, deployer feedback, and performance monitoring findings have not been incorporated.
Gap 2: Mitigation measures listed but not tested
The register states that a mitigation measure is in place, but there are no test results demonstrating its effectiveness. Regulators will ask for evidence, not assertions.
Gap 3: Residual risks not communicated to deployers
Residual risks are documented internally but do not appear in the Instructions for Use. Deployers are therefore unaware of the limitations they are expected to manage.
Gap 4: No review cadence defined
The risk management system document does not specify when reviews will occur or what triggers an unscheduled review. Without this, “continuous” cannot be demonstrated.
Gap 5: Vulnerable groups not considered
The risk assessment identifies risks to the general user population but does not specifically address whether the system’s outputs disproportionately affect minors, disabled persons, or other vulnerable groups.
Connecting Article 9 to the Rest of the Act
The risk management system does not stand alone — it connects directly to most other obligations:
- Article 10 (Data): Bias risks identified in Article 9 should drive the bias detection and mitigation measures applied to training data
- Article 13 (Transparency): Residual risks documented in Article 9 must be disclosed to deployers in the Instructions for Use
- Article 14 (Human oversight): Risks of automation bias and inadequate oversight identified in Article 9 should drive the design of human oversight measures
- Article 15 (Robustness): Adversarial risks identified in Article 9 should drive robustness testing and cybersecurity hardening
- Article 72 (Post-market monitoring): Post-deployment performance data should feed back into the risk register, triggering updates when actual risks diverge from anticipated risks
Where to Start
If you don’t have a risk register yet, start by listing every way your AI system could cause harm — then work through the five-step process above. Our free Status Quo Assessment will help you identify which risk management gaps are most critical for your specific system. For a full implementation guide including a worked risk register template, see our Annex IV Roadmap.
Free Status Quo Assessment
12 questions. Instant Annex III classification + readiness score. Free PDF delivered to your inbox.
Take free assessment →
Annex IV Roadmap — €149
15-page personalised report. All 8 Annex IV items with practical examples. 90-day action plan. Instant PDF.
Get your roadmap →