EU AI Act for HR Tech: Is Your Recruitment AI High-Risk?
How the EU AI Act applies to HR technology and recruitment AI: which HR AI tools are high-risk under Annex III Category 4, what compliance requires, and what the August 2026 deadline means in practice.
Of all the sectors affected by the EU AI Act, HR technology is ground zero. Category 4 of Annex III — employment and workers management — is the most commercially significant high-risk category for enterprise software. If you build or use AI in the hiring process, performance management, or workforce decisions, the Act almost certainly applies to you.
This guide covers which HR AI systems are high-risk, what compliance requires, and what both HR tech providers and HR departments need to do before August 2, 2026.
Which HR AI Systems Are High-Risk?
Annex III, Category 4 covers AI systems used in the context of employment, workers management, and access to self-employment. Specifically:
Clearly High-Risk
Resume and CV screening: Any AI system that ranks, filters, or scores job applicants based on their CVs or application materials is covered — regardless of how the output is framed. Whether the system produces a “fit score,” a “shortlist recommendation,” or a ranked list, if it is used in a hiring workflow it is Category 4.
Candidate ranking and selection: Systems that rank candidates against job requirements, compare candidates to each other, or produce any ordering used by recruiters in their selection process.
Interview analysis: AI that analyses video interviews — including facial expression analysis, speech pattern analysis, or linguistic scoring — to assess candidate suitability. This category also overlaps with Category 1 (biometrics) where biometric data is processed. Note that systems inferring emotions in the workplace context are not merely high-risk: emotion recognition at work is a prohibited practice under Article 5(1)(f), subject to narrow medical and safety exceptions.
Performance monitoring and scoring: AI that generates performance scores, productivity metrics, or ratings for existing employees — whether used directly in performance reviews or as an input to them.
Promotion and progression decisions: AI that recommends or scores employees for promotion, pay increases, or lateral moves.
Termination risk prediction: AI that scores employees on likelihood of voluntary departure, identifies employees “at risk” of low performance, or contributes to decisions about workforce reduction.
Typically Not High-Risk (but verify)
Administrative scheduling tools: AI that schedules interviews, sends calendar invites, or coordinates logistics — with no input into candidate evaluation — is generally not Category 4.
Job description optimisation: Tools that suggest wording improvements to job postings for clarity or inclusivity, without ranking or filtering candidates.
Benefits administration: AI managing payroll, leave tracking, or benefits enrolment that does not make or inform decisions about the employment relationship itself.
The distinction is whether the AI influences a consequential decision about a person’s employment. Administrative tools that do not cross that line are likely outside Category 4 — but document that reasoning carefully.
What Makes Category 4 Special
Category 4 carries a higher inherent compliance burden than most other Annex III categories for two reasons:
Scale of harm: Employment decisions have long-term, compounding consequences. A candidate wrongly screened out of a job opportunity loses not just one job but potentially a career trajectory. An employee wrongly scored on performance may face termination. These harms are serious, affect fundamental rights (non-discrimination, livelihood), and may not be immediately visible.
Scale of operation: HR AI systems deployed to large enterprises or sold as SaaS to multiple companies can affect millions of individual employment decisions per year. The risk register for a popular recruitment AI must account for the aggregate effect on the labour market, not just individual cases.
Compliance Requirements for HR AI Providers
If you build and sell HR AI systems, you are a Provider under the EU AI Act and carry the primary compliance burden.
Annex IV Documentation with HR-Specific Focus
All 8 Annex IV items apply. The items that require particular attention for HR AI:
Item 3 — Training data: Your bias audit must examine the following demographic dimensions at minimum:
- Gender (including non-binary where data permits)
- Age (at least three bands: under 30, 30–50, over 50)
- Nationality (EU/non-EU at minimum; finer granularity where data permits)
- Disability status (where available and applicable)
- Ethnic origin (where data is available; note special category data protections under GDPR)
For each dimension, document: what proxy features were identified that encode the protected characteristic, what correlation analysis was performed, what mitigation was applied, and what residual disparity remains.
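To make the proxy analysis concrete, here is a minimal sketch of the two checks described above: a correlation test that flags a feature encoding a protected attribute, and a selection-rate comparison that quantifies residual disparity. The data, the `years_of_gap` feature, and the 0.6 threshold are all invented for illustration; a real audit would use your actual features and validated thresholds.

```python
from statistics import mean, pstdev

def correlation(xs, ys):
    # Pearson correlation; a high absolute value suggests the feature
    # may act as a proxy for the protected attribute.
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx, sy = pstdev(xs), pstdev(ys)
    return cov / (sx * sy) if sx and sy else 0.0

def selection_rate(scores, group_mask, threshold):
    # Share of candidates in the group whose score clears the threshold.
    selected = [s >= threshold for s, g in zip(scores, group_mask) if g]
    return sum(selected) / len(selected)

# Hypothetical candidates: 1 = over 50, 0 = under 50
over_50      = [1, 1, 0, 0, 0, 1, 0, 0]
years_of_gap = [4, 6, 1, 0, 2, 5, 1, 0]          # invented CV feature
model_score  = [0.4, 0.3, 0.8, 0.9, 0.7, 0.35, 0.85, 0.75]

# Proxy check: does the feature correlate with the protected attribute?
r = correlation(years_of_gap, over_50)

# Residual disparity: difference in selection rates between groups.
gap = (selection_rate(model_score, [not g for g in over_50], 0.6)
       - selection_rate(model_score, over_50, 0.6))
```

Both numbers belong in the Item 3 documentation: the correlation motivates the mitigation applied, and the selection-rate gap is the residual disparity that remains after it.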
Item 4 — Instructions for use: Must explicitly state:
- That the system must not be used as the sole decision-making tool
- Which job categories, experience levels, and candidate profiles have validated performance
- What accuracy and fairness limitations deployers must communicate to candidates
- What override mechanisms must be implemented by the deployer
Item 5 — Risk management: Must specifically address:
- Proxy discrimination risks for all demographic groups covered in the bias audit
- Automation bias risk — the tendency for recruiters to over-rely on AI recommendations
- Scope misuse risk — deployers using the system beyond its validated use case
- Distribution shift risk — performance degradation when the labour market evolves beyond training data
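The distribution shift risk in particular lends itself to a quantitative check. One common approach (a general ML-monitoring technique, not mandated by the Act) is the Population Stability Index, which compares the live score distribution against the training-era reference; a sketch:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a reference (training-era)
    score distribution and a live one. A PSI above 0.25 is a common
    industry alert threshold for significant shift (illustrative)."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def frac(values, i):
        left, right = lo + i * width, lo + (i + 1) * width
        n = sum(left <= v < right or (i == bins - 1 and v == hi)
                for v in values)
        return max(n / len(values), 1e-6)  # floor avoids log(0)

    return sum((frac(actual, i) - frac(expected, i))
               * math.log(frac(actual, i) / frac(expected, i))
               for i in range(bins))
```

Running this on a rolling window of live scores and alerting when the index crosses the chosen threshold gives the risk management file a concrete, auditable control for the distribution shift risk.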
Human Oversight — What “Meaningful” Means for HR AI
Article 14’s human oversight requirement has particular teeth in the HR context. A nominal override button that is rarely used does not satisfy the requirement. Effective human oversight for HR AI means:
Active confirmation, not passive non-action: The system must require the recruiter to actively confirm or override AI recommendations. A UI that defaults to accepting AI outputs unless overridden is likely non-compliant.
Explainability: Recruiters must be able to see why the AI ranked a candidate as it did. SHAP values, feature contributions, or equivalent explanations must be surfaced — not just the score.
Override with reason: Override decisions should be logged with a reason code. This serves two purposes: it creates an audit trail for monitoring, and it requires the recruiter to consciously articulate their reasoning rather than rubber-stamping or mechanically overriding.
Calibration: Consider periodic blind-testing of recruiters — having them evaluate a subset of candidates without the AI score visible — to detect automation bias in your user base.
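The override-with-reason pattern above can be sketched as a small audit-log record. The reason codes, field names, and validation rule here are all invented for illustration; nothing in this shape is prescribed by the Act:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Optional
import json

REASON_CODES = {  # illustrative codes, not mandated by the Act
    "R1": "relevant experience not captured by the model",
    "R2": "score contradicts structured interview evidence",
    "R3": "suspected data-quality issue in the application",
}

@dataclass
class OversightDecision:
    candidate_id: str
    ai_score: float
    recruiter_action: str                 # "confirm" or "override"
    reason_code: Optional[str] = None     # required when overriding
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def __post_init__(self):
        # Enforce "override with reason": no silent overrides.
        if self.recruiter_action == "override" and self.reason_code not in REASON_CODES:
            raise ValueError("override requires a valid reason code")

def log_decision(decision, sink):
    # Append one JSON line per decision to the audit trail.
    sink.append(json.dumps(asdict(decision)))

log = []
log_decision(OversightDecision("c-017", 0.31, "override", "R1"), log)
```

Requiring a valid reason code at record-creation time, rather than as an optional free-text field, is what turns the override button into the auditable, conscious act Article 14 expects.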
Transparency to Candidates
This is where EU AI Act obligations intersect with broader transparency expectations:
Article 50 (General transparency): If candidates interact with an AI system — for example, an AI-powered chatbot conducting a pre-screening interview — they must be informed they are interacting with AI, unless this is obvious from the context.
Article 86 (Right to explanation): Individuals subject to decisions made with high-risk AI have the right to obtain an explanation of the decision from the deployer. Your Instructions for Use must enable deployers to provide this explanation — which means your system must produce explainable outputs, not just a score.
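One way to make outputs explainable in this sense is to turn per-feature contributions (for example SHAP values supplied alongside the score) into a short textual summary the deployer can pass on to the candidate. A sketch with invented feature names and contributions:

```python
def explain_decision(score, contributions, top_n=3):
    """Format the top contributing factors behind a score.

    contributions: {feature_name: signed contribution to the score},
    e.g. SHAP values from the provider's model. Feature names and
    values here are hypothetical."""
    ranked = sorted(contributions.items(),
                    key=lambda kv: abs(kv[1]), reverse=True)
    lines = [f"Overall score: {score:.2f}"]
    for feature, c in ranked[:top_n]:
        direction = "raised" if c > 0 else "lowered"
        lines.append(f"- '{feature}' {direction} the score by {abs(c):.2f}")
    return "\n".join(lines)

text = explain_decision(0.62, {
    "years_of_python_experience": +0.18,
    "employment_gap_months": -0.07,
    "degree_field_match": +0.05,
})
```

The point is architectural: if the model only emits a scalar score, the deployer cannot discharge the Article 86 obligation, so the contributions must travel with the score through your API.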
Deployer Obligations: What HR Departments Must Do
If your company uses a third-party AI tool in your HR process, you are a Deployer under the EU AI Act. Your obligations:
Implement the Oversight Measures
Deployers must implement the human oversight measures specified in the provider’s Instructions for Use. This is a legal obligation, not a suggestion. If the Instructions for Use specify that a human reviewer must actively confirm every AI recommendation, your process must implement this — not just nominally, but in a way that is auditable.
Monitor Performance Post-Deployment
Deployers must monitor the AI system’s performance in their specific context. If you notice the system producing systematically different outcomes for certain candidate groups (e.g. consistently lower scores for candidates from a particular country), you must report this to the provider and implement manual review for the affected population.
Log and Retain Records
Maintain logs of AI-assisted hiring decisions — including override events — for a minimum period aligned with your data retention policy and any applicable employment law requirements. In the EU, employment records are often subject to retention requirements of several years.
Report Serious Incidents
If an AI-assisted hiring decision causes or is suspected to have caused serious harm — for example, systematic discrimination identified in an audit — you must report this to the provider and to the relevant market surveillance authority.
Notify Candidates
Where required by the Instructions for Use or by national employment law, notify candidates that AI is used in the recruitment process, what information is processed, and how to request an explanation of any AI-assisted decision.
The August 2026 Timeline for HR Tech
If you are a provider: All documentation must be complete, conformity assessment signed, CE marking affixed, and EU database registration completed before August 2, 2026. For HR AI systems already on the market, note Article 111(2): systems placed on the market before that date come into scope once they undergo a significant design change — and a continuously updated SaaS product is unlikely to avoid that trigger for long, so treat existing products as in scope, not just new releases.
If you are a deployer: Ensure your contracts with HR AI providers explicitly address EU AI Act compliance. Your provider should be able to confirm:
- That their system has completed or is on track to complete conformity assessment before August 2026
- That they will provide you with compliant Instructions for Use
- That their post-market monitoring plan covers your deployment context
If your current HR AI provider cannot confirm these things, factor compliance risk into your vendor assessment.
Where to Start
Use our free Status Quo Assessment to check your compliance readiness — including the key risk management, documentation, and oversight questions specific to employment AI systems. For a complete personalised Annex IV Documentation Roadmap, including worked examples for employment AI bias audits and human oversight implementation, see our paid report.
Free Status Quo Assessment
12 questions. Instant Annex III classification + readiness score. Free PDF delivered to your inbox.
Take free assessment →
Annex IV Roadmap — €149
15-page personalised report. All 8 Annex IV items with practical examples. 90-day action plan. Instant PDF.
Get your roadmap →