EU AI Act Penalties 2026: Fines, Enforcement, and What's at Stake
Complete guide to EU AI Act penalties and fines under Article 99. Three tiers of fines, who enforces the Act, how enforcement works, and the reputational risks beyond financial penalties.
The EU AI Act’s penalty framework is among the most severe in EU regulatory history — comparable to GDPR in scale and potentially more consequential for some organisations. But the financial fines are only part of what’s at stake. This guide covers the full picture: the three-tier fine structure, who enforces the Act, how enforcement works in practice, and the reputational and commercial risks that often matter more than the fine itself.
The Three-Tier Fine Structure (Article 99)
The EU AI Act establishes three tiers of administrative fines, each applying to a different category of infringement.
Tier 1 — Prohibited AI Practices
Maximum fine: €35,000,000 or 7% of global annual turnover, whichever is higher
This tier applies to violations of Article 5 — the blanket prohibitions on AI practices considered unacceptably harmful. These include:
- Subliminal manipulation: AI systems that deploy subliminal techniques beyond a person’s consciousness to influence their behaviour in a way that causes harm
- Exploitation of vulnerabilities: AI that exploits vulnerabilities related to age, disability, or a specific social or economic situation to materially distort behaviour
- Social scoring: AI — whether operated by public authorities or private actors — that evaluates or classifies people based on their social behaviour or personal characteristics, leading to detrimental or unjustified treatment
- Real-time remote biometric identification in public spaces by law enforcement (with limited exceptions)
- Biometric categorisation inferring sensitive attributes (race, political opinion, religion, sexual orientation)
- Emotion recognition in the workplace and in educational institutions
- Predictive policing targeting individuals based on profiling
These prohibitions have been in force since February 2, 2025. If your system falls within any of these categories, the risk is not theoretical — enforcement is already possible.
Tier 2 — Non-Compliance with High-Risk AI Obligations
Maximum fine: €15,000,000 or 3% of global annual turnover, whichever is higher
This tier covers non-compliance with the core obligations for high-risk AI systems under Chapter III — the obligations that most enterprise AI providers need to focus on:
- Failure to complete Annex IV technical documentation
- Failure to establish a risk management system
- Failure to implement human oversight measures
- Deploying without a completed conformity assessment
- Failure to register in the EU AI Act database
- Non-compliance with post-market monitoring obligations
- Failure to report serious incidents
This is the tier that applies to the August 2, 2026 deadline for most high-risk AI systems. From that date, market surveillance authorities will have legal authority to investigate, sanction, and require market withdrawal of non-compliant systems.
Tier 3 — Supplying Incorrect Information to Authorities
Maximum fine: €7,500,000 or 1% of global annual turnover, whichever is higher
This tier applies specifically to providing incorrect, incomplete, or misleading information to national competent authorities or the EU AI Office in response to an investigation or request. The practical implication: if you are investigated, honesty — even about compliance gaps — is a better strategy than providing incomplete or misleading responses.
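The "whichever is higher" rule across the three tiers can be made concrete with a short illustrative sketch. The function and tier names below are our own, chosen for clarity; the fixed caps and percentages are those set out in Article 99.

```python
def max_fine(fixed_cap_eur: float, turnover_pct: float, global_turnover_eur: float) -> float:
    """Standard Article 99 rule: the HIGHER of the fixed cap and the
    percentage of global annual turnover sets the maximum fine."""
    return max(fixed_cap_eur, turnover_pct * global_turnover_eur)

# Tier parameters from Article 99: (fixed cap in EUR, share of global turnover)
TIERS = {
    "tier_1_prohibited_practices": (35_000_000, 0.07),
    "tier_2_high_risk_obligations": (15_000_000, 0.03),
    "tier_3_incorrect_information": (7_500_000, 0.01),
}

# Illustrative example: a provider with €1bn global annual turnover.
# Here the percentage figure exceeds the fixed cap in every tier.
turnover = 1_000_000_000
for tier, (cap, pct) in TIERS.items():
    print(f"{tier}: up to €{max_fine(cap, pct, turnover):,.0f}")
```

For smaller companies the fixed cap dominates instead — for example, at €100M turnover the Tier 2 maximum stays at €15M, since 3% of €100M is only €3M.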
SME Provisions
For SMEs and start-ups, Article 99(6) specifies that the lower of the fixed amount and the percentage-of-turnover threshold applies. A small company fined under Tier 2 would pay the lesser of €15M or 3% of global turnover — which for a startup with €2M revenue means a maximum of €60,000, not €15 million.
However, this does not create a substantive compliance exemption. The documentation obligations and conformity assessment requirements apply in full regardless of company size. The SME provisions affect fine quantum, not obligations.
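The SME rule simply flips the comparison from "whichever is higher" to "whichever is lower". A minimal sketch, reproducing the startup example above (function name is our own):

```python
def sme_fine_cap(fixed_cap_eur: float, turnover_pct: float, global_turnover_eur: float) -> float:
    """Article 99(6) SME rule: the LOWER of the fixed cap and the
    percentage-of-turnover figure sets the maximum fine."""
    return min(fixed_cap_eur, turnover_pct * global_turnover_eur)

# Startup with €2M global turnover, Tier 2 (€15M fixed cap / 3% of turnover):
# 3% of €2M is €60,000, which is lower than €15M, so €60,000 is the cap.
print(sme_fine_cap(15_000_000, 0.03, 2_000_000))  # → 60000.0
```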
Who Enforces the EU AI Act?
National Market Surveillance Authorities (MSAs)
Each EU Member State must designate one or more national competent authorities as market surveillance authorities for the AI Act. These bodies are responsible for:
- Supervising compliance with high-risk AI obligations (Articles 9–17, 43–49)
- Investigating complaints
- Conducting market surveillance (document requests, on-site inspections, technical testing)
- Issuing fines and ordering market withdrawal
- Coordinating with other Member State authorities on cross-border cases
Which MSA has jurisdiction? The primary MSA is the authority in the Member State where the provider is established. If the provider is non-EU, the relevant authority is typically in the Member State where the EU Authorised Representative is established, or where the system primarily operates.
Early enforcement is expected to be complaint-driven in most Member States — focused on systems causing visible harm — rather than comprehensive market surveillance of all high-risk AI.
The EU AI Office
The EU AI Office (established within the European Commission) has jurisdiction over:
- GPAI model providers — all general-purpose AI model obligations under Chapter V
- Cross-border cases involving multiple Member States
- Systemic risk models — the most powerful GPAI models with systemic risk designation
The EU AI Office can conduct its own investigations, request documents from providers, and impose fines independently of national MSAs.
Data Protection Authorities
Where a high-risk AI system processes personal data, the GDPR supervisory authorities (national Data Protection Authorities) retain concurrent jurisdiction over data protection aspects. Non-compliant AI processing personal data may attract both GDPR fines and EU AI Act fines simultaneously.
How Enforcement Works
Market Surveillance
From August 2, 2026, national MSAs have broad investigative powers:
- Document requests: They can require providers and deployers to produce technical documentation, risk management records, conformity assessment documentation, and logs — within a specified timeframe
- On-site inspections: They can inspect facilities, systems, and records in person
- Technical testing: They can require access to the AI system for testing purposes
- Interim measures: In cases of serious risk, MSAs can order temporary market withdrawal or suspension of deployment before a full investigation concludes
Complaint-Driven Enforcement
In early enforcement (2026–2027), the most likely trigger for investigation is a complaint — from:
- Affected individuals who believe they were harmed by an AI decision (candidates rejected by a hiring AI, applicants denied credit, patients triaged incorrectly)
- Competitors reporting non-compliant rivals
- Civil society organisations and NGOs focused on AI rights
- Whistleblowers within organisations
This means that systems with high public visibility, systems processing personal data, and systems making consequential decisions affecting large numbers of people are at greatest enforcement risk in the near term.
The Investigation Procedure
- MSA opens an investigation (own initiative or complaint)
- MSA requests documentation from the provider
- Provider has a defined period to respond
- MSA may conduct an on-site inspection
- MSA issues a preliminary finding
- Provider has the right to be heard and to contest findings
- MSA issues a final decision — including any fine and corrective measures required
- Provider can appeal to national courts
The entire procedure can take months to years for complex cases. However, interim orders requiring suspension of the system can be issued much faster in cases of demonstrated serious risk.
Beyond Fines: The Reputational and Commercial Risk
For many organisations, the financial fine is not the most damaging consequence of non-compliance. Consider these additional risks:
The Public Non-Compliant AI Register
The EU Commission maintains a public register of AI systems found non-compliant. A listing in this register:
- Is publicly visible and searchable
- Alerts customers, partners, and investors to regulatory failure
- Creates a permanent record that affects future due diligence reviews
- May be referenced in press coverage
Procurement Disqualification
EU public sector procurement increasingly includes AI compliance requirements. A non-compliance finding can disqualify your organisation from EU public tenders — including at national, regional, and local government level. For companies with significant public sector revenue, this is often more commercially damaging than the fine itself.
Secondary Litigation
Article 86 of the EU AI Act grants individuals affected by a decision based on a high-risk AI system the right to a clear and meaningful explanation of that decision. Compensation claims themselves proceed under general civil liability law, but a regulatory non-compliance finding significantly strengthens any secondary litigation by affected persons.
Customer and Partner Trust
Enterprise customers increasingly conduct AI compliance due diligence before signing contracts. A public non-compliance finding, or even awareness that you do not have completed Annex IV documentation, can cause customers to delay or cancel purchases — particularly in regulated sectors (financial services, healthcare, public sector) where the customer’s own compliance depends on your compliance.
What Is Already Enforceable
It is worth being clear about the current enforcement timeline:
| Date | What became enforceable |
|---|---|
| 2 February 2025 | Prohibited AI practices (Article 5) — €35M / 7% fines |
| 2 August 2025 | GPAI model obligations — EU AI Office jurisdiction |
| 2 August 2026 | High-risk AI (Annex III) — full Tier 2 enforcement |
The August 2026 deadline is when the largest category of fines — those applying to non-compliant high-risk AI documentation and conformity assessment — becomes enforceable. But the prohibited practices tier has been live since February 2025. If your system uses real-time remote biometric identification, emotion recognition in the workplace, or social scoring, those violations are already actionable today.
How to Protect Yourself
The most effective protection against EU AI Act fines is demonstrable compliance — documented, evidenced, and organised so that you can respond to any regulatory request within a short timeframe.
Start with our free Status Quo Assessment to identify your current readiness gaps. For the complete documentation framework — all 8 Annex IV items with worked examples and a 90-day compliance sprint plan — see our Annex IV Technical Documentation Roadmap.
Free Status Quo Assessment
12 questions. Instant Annex III classification + readiness score. Free PDF delivered to your inbox.
Take free assessment →
Annex IV Roadmap — €149
15-page personalised report. All 8 Annex IV items with practical examples. 90-day action plan. Instant PDF.
Get your roadmap →