
EU AI Act Risk Management System: Article 9 Requirements Explained

A complete guide to the EU AI Act Article 9 Risk Management System requirement for high-risk AI. What to include, how to structure your risk register, and common compliance gaps.

Article 9 of the EU AI Act contains one of the most demanding obligations for high-risk AI providers — and one of the most commonly misunderstood. Most organisations approach it as a document to produce. It is not. It is a continuous process that must run for the entire lifetime of the AI system.

This guide explains what Article 9 actually requires, how to build a compliant risk management system, and what regulators will look for when they audit yours.


What Article 9 Actually Says

Article 9 requires providers of high-risk AI systems to establish, implement, document, and maintain a risk management system. The key word is “maintain” — this obligation does not end when your system is launched.

The risk management system must:

  1. Be an iterative continuous process — not a one-time assessment conducted during development
  2. Run throughout the entire AI system lifecycle — from design and development through deployment, operation, and decommissioning
  3. Be regularly reviewed and updated — whenever the system changes, new risks are identified, or deployment context evolves
  4. Result in documentation that feeds directly into Annex IV Item 5

Critically, Article 9 requires risk management to address risks “to the health, safety, or fundamental rights of natural persons.” This is a broader mandate than typical product safety risk management — it explicitly includes fundamental rights such as non-discrimination, privacy, and dignity.


The Five-Step Risk Management Process

Step 1: Risk Identification and Analysis

Identify all known and reasonably foreseeable risks to health, safety, and fundamental rights. This must cover risks arising when the system is used in accordance with its intended purpose, as well as risks arising under conditions of reasonably foreseeable misuse.

For each identified risk, document its description, category, source, the persons affected, and an initial estimate of probability and severity.

Particular attention to vulnerable groups: Article 9(7) specifically requires that when training data involves minors or other vulnerable groups, or when the system’s output affects them, additional care must be taken in identifying and mitigating relevant risks.

Example risk entries for an employment AI:

Risk: Gender proxy discrimination via job title history. Category: Fundamental rights — non-discrimination. Source: Training data encoding historical hiring bias. Affected persons: Female candidates, estimated 40,000/year. Probability: High (identified via correlation analysis). Severity: High (affects hiring decisions with long-term career impact).

Risk: Deployer scope misuse — applying screener to internal promotion decisions beyond intended purpose. Category: Misuse. Source: Deployment context. Affected persons: All employees subject to AI-assisted promotion review. Probability: Medium. Severity: High.

Step 2: Risk Estimation and Evaluation

For each identified risk, estimate the probability that it materialises and the severity of the harm if it does.

The EU AI Act does not prescribe a specific scoring methodology. A standard 5×5 probability/severity matrix is a reasonable approach — what matters is that the methodology is documented and applied consistently.

Prioritise risks by combined score. High probability + high severity risks require the most robust mitigation and the clearest documentation of residual risk.
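The scoring-and-prioritisation step can be sketched in a few lines of Python. This is a minimal illustration assuming a 5×5 integer scale; the risk names and scores below are invented, and the `Risk` class and `prioritise` helper are not part of any official methodology.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    risk_id: str
    description: str
    probability: int  # 1 (rare) .. 5 (almost certain), criteria defined separately
    severity: int     # 1 (negligible) .. 5 (critical)

    @property
    def score(self) -> int:
        # Combined score: probability × severity, as in the register field below
        return self.probability * self.severity

def prioritise(risks: list[Risk]) -> list[Risk]:
    """Highest combined score first, so mitigation effort targets the worst risks."""
    return sorted(risks, key=lambda r: r.score, reverse=True)

# Invented example entries for illustration only
risks = [
    Risk("R-001", "Gender proxy discrimination", probability=4, severity=5),
    Risk("R-002", "Deployer scope misuse", probability=3, severity=4),
    Risk("R-003", "UI latency degradation", probability=2, severity=1),
]

for r in prioritise(risks):
    print(r.risk_id, r.score)
```

Sorting by combined score gives reviewers the priority order directly; the criteria behind each 1–5 rating still need to be written down, since the methodology itself must be documented.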

Step 3: Risk Mitigation Measures

For each risk, adopt “the most appropriate risk management measures” to eliminate or reduce it to an acceptable level. Article 9(4) specifies that providers must, in order of preference: eliminate or reduce the risk as far as possible through adequate design and development; where risks cannot be eliminated, implement adequate mitigation and control measures; and provide deployers with adequate information and, where appropriate, training.

Document each mitigation measure: what it is, how it was implemented, and the evidence that it actually reduces the risk (the “mitigation effectiveness” field in your risk register).

Examples of mitigation measures:

For proxy discrimination:

Removed postcode and educational institution name from feature set. Post-removal, shortlist rate gap between majority and minority groups reduced from 11% to 2% (documented in Bias Audit Report v1.3). Residual 2% gap deemed acceptable and disclosed in Instructions for Use.
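The before/after gap figures in an audit finding like this come down to a simple rate comparison. The sketch below is illustrative Python with invented outcomes; `shortlist_rate` and `rate_gap` are assumed helper names, not a real audit tool.

```python
def shortlist_rate(decisions: list[bool]) -> float:
    """Fraction of candidates in a group who were shortlisted."""
    return sum(decisions) / len(decisions)

def rate_gap(group_a: list[bool], group_b: list[bool]) -> float:
    """Absolute difference in shortlist rates between two groups."""
    return abs(shortlist_rate(group_a) - shortlist_rate(group_b))

# Invented post-mitigation outcomes: True = shortlisted
majority = [True] * 52 + [False] * 48   # 52% shortlisted
minority = [True] * 50 + [False] * 50   # 50% shortlisted

gap = rate_gap(majority, minority)
print(f"shortlist rate gap: {gap:.0%}")  # 2%, within the disclosed residual risk
```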

For scope misuse:

Contractual use restriction in deployer licence agreement prohibiting use beyond recruitment context. API scope enforcement: system returns error if job role category is outside the validated set. Deployer onboarding training covers permitted scope.
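The API scope-enforcement measure can be sketched as a hard check at the service boundary. Everything here is hypothetical: the validated category set, the `ScopeError` exception, and the `screen_candidate` function are illustrative names.

```python
# Hypothetical validated set of job role categories
VALIDATED_ROLE_CATEGORIES = {"engineering", "sales", "customer_support"}

class ScopeError(ValueError):
    """Raised when a request falls outside the system's intended purpose."""

def screen_candidate(role_category: str, candidate_id: str) -> dict:
    if role_category not in VALIDATED_ROLE_CATEGORIES:
        # Enforce intended purpose in code, not only in the licence agreement
        raise ScopeError(f"role category {role_category!r} is not validated")
    return {"candidate_id": candidate_id, "status": "screened"}
```

Failing loudly at the API boundary makes the contractual restriction independently testable rather than a promise on paper.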

For automation bias:

UI redesign: AI recommendation hidden until reviewer has read full candidate summary and clicked “I’ve reviewed this candidate.” Override reason mandatory for all decisions. Implemented and tested in v2.4 — override rate increased from 4% to 11%, indicating more active human engagement.

Step 4: Residual Risk Documentation and Communication

After mitigation, document what risk remains. Residual risks must be judged acceptable, recorded in the risk register, and communicated to deployers, typically through the Instructions for Use.

Example residual risk disclosure:

Instructions for Use, §7 — Known Residual Risks: A 2% accuracy gap between majority and minority demographic groups remains after bias mitigation. Deployers must apply mandatory manual review for all borderline-score candidates (50th–70th percentile confidence range). The system is not validated for job roles requiring more than 15 years of experience — do not use for senior executive hiring without additional validation.
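The borderline-band rule in this disclosure can be expressed as a small routing function. A minimal sketch assuming the percentile thresholds quoted above; the `route_decision` name and return labels are invented.

```python
def route_decision(confidence_percentile: float) -> str:
    """Route a candidate decision based on the system's confidence percentile."""
    if 50.0 <= confidence_percentile <= 70.0:
        # Borderline band from the disclosure: a human must decide
        return "manual_review"
    return "automated_with_oversight"

print(route_decision(63.0))  # manual_review
```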

Step 5: Testing Against Risk Scenarios

Before market placement, and after any substantial modification, test the system against the identified risk scenarios. Document the test methodology, test cases, and results.

Types of tests to consider: bias and fairness audits across demographic subgroups, out-of-scope and misuse input tests, and human-oversight tests that measure whether reviewers actually engage with the system’s output (for example, the override-rate measurement described in Step 3).

Document all test results and link them to the relevant risk register entries. A risk that was identified but not tested is a compliance gap.
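That traceability rule, every register entry linked to at least one test, can be checked mechanically. The IDs and result structure below are invented for illustration.

```python
# Invented register and test-result data for illustration
risk_register = ["R-001", "R-002", "R-003"]
test_results = {
    "T-01": {"covers": ["R-001"], "passed": True},
    "T-02": {"covers": ["R-001", "R-002"], "passed": True},
}

def untested_risks(register: list[str], results: dict) -> set[str]:
    """Risk IDs that no documented test result covers."""
    covered = {rid for t in results.values() for rid in t["covers"]}
    return set(register) - covered

print(untested_risks(risk_register, test_results))  # {'R-003'}, a compliance gap
```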


Building Your Risk Register

The risk register is the living document at the heart of your risk management system. It should be structured so that any reviewer can quickly see what could go wrong, how likely and severe each risk is, which mitigations are in place and how effective they are, what residual risk remains, and who owns each entry.

Recommended fields:

| Field | Description |
| --- | --- |
| Risk ID | Unique identifier (e.g. R-001) |
| Description | What could go wrong |
| Category | Fundamental rights / Safety / Operational / Bias / Transparency / Misuse |
| Source | Training data / Architecture / Deployment / Human-AI interaction / External attack |
| Probability | 1–5 scale with criteria defined |
| Severity | 1–5 scale with criteria defined |
| Risk score | Probability × Severity |
| Affected persons | Who and how many |
| Mitigation measures | What controls are in place |
| Mitigation effectiveness | Evidence of reduction |
| Residual risk | Remaining risk after mitigation |
| Owner | Named person responsible |
| Last reviewed | Date of most recent review |
| Next review | Scheduled next review date |

How to Maintain It Over Time

The most common compliance failure is treating the risk register as a one-time deliverable. Article 9’s “continuous process” requirement means it must be actively maintained: set a fixed review cadence, define triggers for unscheduled reviews (a substantial modification, a serious incident, a change in deployment context), and feed post-market monitoring data, incident reports, and deployer feedback back into the register.

Assign a named Risk Management Owner — a specific person (not a team) responsible for keeping the register current. This person should have enough authority to escalate findings that require engineering changes or deployer communications.
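With review dates in the register, “current” becomes checkable. A minimal sketch assuming each entry carries a `next_review` date; the IDs and dates are invented.

```python
from datetime import date

def overdue_entries(register: list[dict], today: date) -> list[str]:
    """Risk IDs whose scheduled next review is already in the past."""
    return [e["risk_id"] for e in register if e["next_review"] < today]

# Invented register rows for illustration
register = [
    {"risk_id": "R-001", "next_review": date(2025, 3, 1)},
    {"risk_id": "R-002", "next_review": date(2026, 1, 1)},
]

print(overdue_entries(register, today=date(2025, 6, 1)))  # ['R-001']
```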


Common Compliance Gaps

Gap 1: Risk register completed but never updated
The register was prepared before launch and has not changed since. Post-deployment incident data, deployer feedback, and performance monitoring findings have not been incorporated.

Gap 2: Mitigation measures listed but not tested
The register states that a mitigation measure is in place, but there are no test results demonstrating its effectiveness. Regulators will ask for evidence, not assertions.

Gap 3: Residual risks not communicated to deployers
Residual risks are documented internally but do not appear in the Instructions for Use. Deployers are therefore unaware of the limitations they are expected to manage.

Gap 4: No review cadence defined
The risk management system document does not specify when reviews will occur or what triggers an unscheduled review. Without this, “continuous” cannot be demonstrated.

Gap 5: Vulnerable groups not considered
The risk assessment identifies risks to the general user population but does not specifically address whether the system’s outputs disproportionately affect minors, disabled persons, or other vulnerable groups.


Connecting Article 9 to the Rest of the Act

The risk management system does not stand alone. It connects directly to most other obligations: its output is documented as Annex IV Item 5 of the technical documentation; residual risks must surface in the Article 13 Instructions for Use; mitigation measures such as mandatory human review implement Article 14 human oversight; and post-market monitoring feeds new findings back into the register, which is what keeps the process continuous.


Where to Start

If you don’t have a risk register yet, start by listing every way your AI system could cause harm — then work through the five-step process above. Our free Status Quo Assessment will help you identify which risk management gaps are most critical for your specific system. For a full implementation guide including a worked risk register template, see our Annex IV Roadmap.
