EU AI Act Compliance Tracker

Every deadline.
Zero surprises.

The definitive reference for compliance officers navigating EU AI Act enforcement. Live countdowns, plain-English guidance, and what each deadline means for your organization.

Updated March 2026
Financial services focus
Plain-English guidance
Non-Compliance Risk

Non-compliance is not a theoretical risk

EU AI Act penalties scale with global annual turnover, not local revenue. The figures below are the statutory maximums from Article 99, shown as approximate USD equivalents of the euro amounts.

~$38M
or 7% of global annual turnover, whichever is higher. Article 5 prohibited-practice violations.
~$16M
or 3% of global annual turnover, whichever is higher. Non-compliance with high-risk AI obligations.
~$8M
or 1% of global annual turnover, whichever is higher. Supplying incorrect, incomplete, or misleading information to authorities.
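The tiered maximums follow one simple rule: the applicable cap is the greater of the fixed euro amount and the turnover percentage. A minimal sketch of that rule (tier values are the statutory maximums from Article 99; this is an illustration, not legal advice):

```python
# Illustrative sketch of the Article 99 maximum-fine rule: the cap is the
# greater of a fixed euro amount and a percentage of global annual turnover.
# Tier values mirror the statutory maximums; this is not legal advice.

PENALTY_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),   # Art. 99(3): EUR 35M or 7%
    "high_risk_obligation": (15_000_000, 0.03),  # Art. 99(4): EUR 15M or 3%
    "incorrect_information": (7_500_000, 0.01),  # Art. 99(5): EUR 7.5M or 1%
}

def max_fine_eur(tier: str, global_annual_turnover_eur: float) -> float:
    """Return the statutory maximum fine: whichever amount is higher."""
    fixed, pct = PENALTY_TIERS[tier]
    return max(fixed, pct * global_annual_turnover_eur)

# A firm with EUR 2B global turnover: 7% (EUR 140M) exceeds the EUR 35M floor.
print(max_fine_eur("prohibited_practice", 2_000_000_000))  # 140000000.0
```

For large institutions the percentage branch dominates, which is why "global annual turnover" is the number that matters in exposure planning.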
CRITICAL

Enforcement machinery is operational now

National competent authorities have been active since August 2025. Investigations can begin today — even for obligations not yet fully applicable. A proactive compliance posture is significantly more defensible than a reactive one.

Enforcement Timeline

Every deadline, mapped and explained

Regulation (EU) 2024/1689 — what each enforcement date means for organizations operating AI in regulated industries.

2 February
2025
In Force
Article 5 · Chapter I
Prohibited AI Practices — Now Enforced
The highest-risk AI applications are now banned outright across the EU. This includes AI that manipulates human behavior through subliminal techniques, exploits vulnerabilities of specific groups, enables social scoring by public authorities, and most real-time remote biometric identification in public spaces. Financial services firms must audit any AI used in customer interaction, credit scoring, or behavioral analysis for Article 5 exposure.
Financial Services Credit Scoring Customer AI Biometrics
2 August
2025
In Force
Articles 51–56 · Chapter V
General Purpose AI (GPAI) Model Obligations
Providers of general-purpose AI models must maintain technical documentation, comply with EU copyright law, and publish training-data summaries. Models presumed to pose systemic risk (cumulative training compute ≥ 10²⁵ FLOPs) face additional evaluation and incident-reporting requirements. If your organization deploys a GPAI system in a regulated workflow, you inherit compliance obligations from the provider.
LLM Deployments Foundation Models AI Procurement
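The systemic-risk presumption is a bright-line compute test: cumulative training compute of 10²⁵ FLOPs or more. A quick sketch of how a procurement team might flag in-scope models (the model names and compute figures below are hypothetical placeholders):

```python
# Bright-line test from Article 51(2): a GPAI model is presumed to pose
# systemic risk when cumulative training compute is >= 1e25 FLOPs.
# Model entries are hypothetical placeholders, not real vendor figures.

SYSTEMIC_RISK_FLOPS = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    return training_flops >= SYSTEMIC_RISK_FLOPS

models = {
    "vendor-model-a": 3e25,   # hypothetical: above the threshold
    "vendor-model-b": 8e23,   # hypothetical: below the threshold
}
flagged = [name for name, flops in models.items()
           if presumed_systemic_risk(flops)]
print(flagged)  # ['vendor-model-a']
```

In practice the provider discloses (or should disclose) training compute; a deployer's job is to record that disclosure and track which procured models carry the extra systemic-risk obligations.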
2 August
2025
In Force
Articles 64–70 · Governance Structure
National Competent Authorities & EU AI Office Operational
Member states have designated national supervisory authorities. The EU AI Office is now operational and actively monitoring the market. Investigations can begin today, even for obligations not yet fully applicable. Organizations that have not begun building their compliance evidence base are accumulating risk with every passing month.
Regulatory Reporting Supervisory Access Market Surveillance
2 August
2026
Critical
Chapter III · Annex III — Article 6(2)
High-Risk AI — Credit, Insurance & Employment
The deadline with the broadest impact on financial services. High-risk AI in creditworthiness assessment, insurance risk evaluation, employment and worker management, and access to essential private services triggers the full Chapter III obligations — risk management systems, data governance, technical documentation, human oversight, accuracy and robustness requirements, and post-market monitoring. This is the most operationally demanding deadline for regulated financial institutions.
Credit Decisioning Insurance Underwriting AML & Fraud AI Robo-Advisory HR & Hiring AI
2 August
2027
Upcoming
Chapter III · Annex I — Article 6(1)
High-Risk AI — Annex I Regulated Products
Under Article 113(c), obligations for high-risk AI used as a safety component of products covered by Annex I Union harmonisation legislation (machinery, medical devices, and similar product-safety regimes) apply from this date. Conformity assessment, integrated with the relevant product-safety procedures, is required for systems in scope.
Safety Components Product Legislation Conformity Assessment
2 August
2027
Upcoming
Article 111 · Transitional Provisions
Legacy Systems — Transitional Deadlines
GPAI models placed on the market before 2 August 2025 must be brought into compliance by this date. High-risk AI systems placed on the market or put into service before 2 August 2026 come into scope only if they undergo significant design changes, except that systems intended for use by public authorities must comply by 2 August 2030. Organizations that have not begun compliance programs for legacy systems should expect retrofitting to take longer than building compliant systems from the start.
Legacy AI Systems Existing Deployments Retrofit Compliance
Financial Services

Four obligations with direct operational impact

Chapter III creates concrete requirements for how AI systems in scope must be built, monitored, and documented. These are not aspirational guidelines.

Article 9 · Risk Management

Continuous risk management system required

High-risk AI systems must have an ongoing risk management system — not a point-in-time assessment. This means continuous monitoring, regular re-evaluation, and documented evidence that risks are being managed throughout the AI lifecycle. A static risk assessment completed six months ago does not satisfy this obligation.

Article 10 · Data Governance

Training data must be documented and auditable

Training, validation, and test datasets must meet quality criteria and be fully documented. For credit scoring AI, this means tracing data provenance back to source with complete audit trails. Data governance gaps, biases, and known limitations must be disclosed and actively managed — not ignored.
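In practice, "documented and auditable" means every training dataset carries a structured record of provenance, known limitations, and quality checks. A minimal sketch of such a record (the field names are our own illustration, not prescribed by the Act):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DatasetRecord:
    """Illustrative Article 10 documentation record. Field names are
    an assumption for this sketch, not terminology from the Act."""
    name: str
    source: str                      # provenance, traceable to origin
    collected: date
    known_limitations: list[str] = field(default_factory=list)
    known_biases: list[str] = field(default_factory=list)
    quality_checks: list[str] = field(default_factory=list)

# Hypothetical record for a credit-scoring training set:
record = DatasetRecord(
    name="credit-scoring-train-v3",
    source="internal loan-book extract, 2019-2024",
    collected=date(2024, 6, 30),
    known_limitations=["sparse coverage of thin-file applicants"],
    known_biases=["geographic skew toward urban branches"],
    quality_checks=["null-rate audit", "label-leakage scan"],
)
print(record.name)
```

The point of the structure is that gaps and biases become explicit fields that must be filled in and reviewed, rather than facts that can quietly go unrecorded.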

Article 12 · Record-Keeping

Automatic event logging is mandatory

High-risk AI systems must automatically log events throughout their operational lifecycle with sufficient detail to enable post-incident investigation. Tamper-evident, timestamped audit trails are a legal obligation — not an architectural preference — for any high-risk AI deployment in scope of Annex III.
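Tamper evidence is commonly implemented by chaining log entries with hashes, so that any retroactive edit breaks the chain. A minimal sketch of the pattern (not a full Article 12 implementation, and not how any particular product does it):

```python
import hashlib
import json
import time

def append_event(log: list[dict], event: dict) -> None:
    """Append a timestamped entry whose hash covers the previous entry's
    hash, so retroactive edits become detectable."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"ts": time.time(), "event": event, "prev": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)

def verify(log: list[dict]) -> bool:
    """Recompute every hash in order; any tampering breaks the chain."""
    prev = "0" * 64
    for entry in log:
        body = {k: entry[k] for k in ("ts", "event", "prev")}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log: list[dict] = []
append_event(log, {"decision": "credit_denied", "model": "scoring-v2"})
append_event(log, {"decision": "credit_approved", "model": "scoring-v2"})
print(verify(log))                               # True
log[0]["event"]["decision"] = "credit_approved"  # retroactive tampering
print(verify(log))                               # False
```

Production systems typically add cryptographic signing and write-once storage on top of this chaining, but the core property is the same: the log proves its own integrity.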

Article 14 · Human Oversight

Humans must be able to override AI decisions

High-risk AI systems must be designed to allow designated humans to understand, oversee, and where necessary override or halt AI operations. For automated credit and insurance decisions, documented human-in-the-loop processes are a compliance requirement — not an architectural preference.
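One common human-in-the-loop pattern is a gate that routes automated decisions through a designated reviewer who can uphold or override them. A minimal sketch (the routing rule, names, and thresholds are illustrative assumptions, not requirements from the Act):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    subject: str
    outcome: str        # e.g. "approve" / "deny"
    confidence: float   # model confidence in [0, 1]

def oversee(decision: Decision,
            reviewer: Callable[[Decision], str],
            confidence_floor: float = 0.9) -> str:
    """Route adverse or low-confidence outcomes to a human reviewer,
    who may uphold or override the AI outcome. Illustrative only."""
    if decision.outcome == "deny" or decision.confidence < confidence_floor:
        return reviewer(decision)   # human decides, and may override
    return decision.outcome         # high-confidence approval passes through

# Hypothetical reviewer policy: override borderline denials.
human = lambda d: "approve" if d.confidence < 0.6 else d.outcome
print(oversee(Decision("applicant-123", "deny", 0.55), human))  # approve
```

The compliance-relevant part is not the threshold itself but that the override path exists, is exercised by a designated person, and leaves a record.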

Built for this moment

Article 12 compliance starts with Gradaris

Gradaris gives compliance teams the continuous, cryptographically signed audit trails that Article 12 mandates, plus the Gradaris Governance Score to show you're meeting the rest of the Chapter III obligations too. Set up in under 10 minutes.

Article 9 Risk Management Article 12 Audit Logging Article 14 Human Oversight EU AI Act Mapped
Schedule a Walkthrough Get Started Free
AI Governance Watch

Stay ahead of every deadline

A free weekly briefing for compliance officers. Enforcement updates, regulatory guidance, and what it means for your organization. No vendor pitch. Just signal.

Weekly. No spam. Unsubscribe anytime. Powered by Gradaris.