The EU AI Act doesn't switch on all at once. Regulation (EU) 2024/1689 entered into force on 1 August 2024, but its provisions apply in phases over three years. This article maps every key deadline, explains what becomes applicable at each stage, and helps you prioritise your compliance effort.
EU AI Act — Phased Application Timeline
Five key dates from entry into force to full application of product safety requirements.
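The phased schedule lends itself to a simple lookup table. The following is a minimal Python sketch; the one-line phase summaries are this article's shorthand, not the Regulation's wording, and the names are illustrative:

```python
from datetime import date

# Application dates set out in Article 113 of Regulation (EU) 2024/1689.
PHASES = [
    (date(2025, 2, 2), "Prohibited practices (Art. 5) and AI literacy (Art. 4)"),
    (date(2025, 8, 2), "GPAI framework, governance bodies, notified bodies"),
    (date(2026, 8, 2), "Full application: high-risk rules, FRIA, transparency, penalties"),
    (date(2027, 8, 2), "High-risk AI in Annex I products (Art. 6(1))"),
]

def applicable_phases(on: date) -> list[str]:
    """Return the phases whose provisions are already applicable on a given date."""
    return [summary for start, summary in PHASES if on >= start]
```

As of this article's writing (March 2026), the function would report only the first two phases as live.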
Phase 1: 2 February 2025 — Prohibited Practices and AI Literacy
The first provisions to apply were the prohibited AI practices under Article 5 and the AI literacy obligation under Article 4. This was deliberately front-loaded: the EU wanted the most harmful AI practices banned as quickly as possible.
Article 5 bans eight categories of AI practice outright, including subliminal manipulation, social scoring, predictive policing based solely on profiling, untargeted facial image scraping, workplace emotion recognition, and real-time remote biometric identification in publicly accessible spaces (with narrow law enforcement exceptions).
Article 4 requires all providers and deployers to ensure a sufficient level of AI literacy among their staff and other persons who operate or use AI systems on their behalf. This is a broad, ongoing obligation, not a one-time training exercise.
"Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf." — Article 4
Phase 2: 2 August 2025 — GPAI, Governance, and Notified Bodies
The second wave brought the general-purpose AI model framework into effect. This includes:
- Articles 51-55: All GPAI provider obligations — technical documentation, downstream information, copyright compliance, training data summaries, and additional obligations for models with systemic risk
- Articles 64-70: The governance framework — the AI Office, the European AI Board, the Advisory Forum, the Scientific Panel, and national competent authority designations
- Articles 28-39: The notified body framework — rules for conformity assessment bodies, notification procedures, and requirements for performing conformity assessments
- Articles 99-100: The penalties framework (note that fines for GPAI providers under Article 101 only become applicable on 2 August 2026, a year after the substantive GPAI obligations)
Phase 3: 2 August 2026 — Full Application (The Big Deadline)
This is the date that matters most for the majority of organisations. On 2 August 2026, all remaining provisions become applicable, including:
- Articles 6-7: High-risk AI classification rules (Annex III use-case pathway)
- Articles 8-15: All requirements for high-risk AI systems — risk management, data governance, documentation, logging, transparency, human oversight, accuracy, robustness, cybersecurity
- Articles 16-27: All operator obligations — providers, deployers, importers, distributors, and authorised representatives
- Article 27: Fundamental rights impact assessment (FRIA) for public bodies, public service providers, and credit/insurance deployers
- Article 50: Transparency obligations for providers and deployers of certain AI systems (chatbots, deepfakes, emotion recognition)
- Articles 71-84: The full market surveillance framework, including post-market monitoring, incident reporting, and the EU database
- Articles 85-87: Rights provisions — right to complaint, right to explanation, whistleblower protection
- Article 99 (and, for GPAI providers, Article 101): Full penalty framework — fines of up to EUR 35M or 7% of total worldwide annual turnover, whichever is higher, for prohibited practice violations, and up to EUR 15M or 3% for most other infringements
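For undertakings, Article 99 caps fines at the higher of the fixed amount and the turnover percentage, which is easy to get backwards. A quick sketch of the arithmetic (illustrative only, not legal advice):

```python
def max_fine_eur(turnover_eur: float, prohibited_practice: bool) -> float:
    """Upper bound of the administrative fine for an undertaking under
    Article 99: the HIGHER of the fixed amount and the percentage of
    total worldwide annual turnover for the preceding financial year."""
    fixed, pct = (35_000_000, 0.07) if prohibited_practice else (15_000_000, 0.03)
    return max(fixed, pct * turnover_eur)
```

For an undertaking with EUR 2 billion in turnover, a prohibited-practice violation therefore caps at EUR 140 million (7%), not EUR 35 million.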
Phase 4: 2 August 2027 — Product Safety Alignment
The final phase applies to AI systems classified as high-risk under Article 6(1): those that are safety components of products covered by the EU harmonisation legislation listed in Annex I. These systems get an extra year to comply because their conformity assessment procedures need to be integrated with existing sector-specific processes.
This affects AI in medical devices, machinery, vehicles, toys, lifts, radio equipment, civil aviation, and other regulated product categories. If your AI system is embedded in a physical product covered by Annex I legislation, your deadline is August 2027, not August 2026.
What You Should Be Doing Right Now (March 2026)
Confirm Article 5 compliance
The prohibited practices have been in force for over a year. If you haven't audited your AI systems against the eight banned categories, do it immediately.
Verify AI literacy measures
Article 4 is also live. Ensure your staff have received adequate training on AI literacy relevant to their roles.
Classify your AI systems
With five months until the August 2026 deadline, you need to have classified every AI system as prohibited, high-risk, limited-risk, or minimal-risk. Use Article 6 and Annex III as your guide.
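A first-pass inventory triage can be expressed as a simple decision function. This is an illustrative sketch only: the Act's actual classification logic (including the Article 6(3) derogation and the Annex III nuances) needs case-by-case legal analysis, and the boolean inputs are assumptions about how you record each system:

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited (Article 5)"
    HIGH = "high-risk (Article 6, Annex I or Annex III)"
    LIMITED = "limited-risk / transparency (Article 50)"
    MINIMAL = "minimal-risk"

def triage(prohibited_practice: bool,
           annex_i_safety_component: bool,
           annex_iii_use_case: bool,
           chatbot_deepfake_or_emotion: bool) -> RiskTier:
    # Illustrative ordering: prohibitions first, then the two high-risk
    # pathways, then transparency-only systems, then everything else.
    if prohibited_practice:
        return RiskTier.PROHIBITED
    if annex_i_safety_component or annex_iii_use_case:
        return RiskTier.HIGH
    if chatbot_deepfake_or_emotion:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

Even a rough tier assignment like this tells you which systems need the full Articles 8-15 treatment and which only need Article 50 disclosures.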
Begin FRIA preparation
If you're a public body, public service provider, or credit/insurance deployer using high-risk AI, start your FRIA preparation now. Don't wait for the official template.
Engage your supply chain
If you're a deployer relying on a provider for high-risk AI systems, confirm that the provider is preparing to meet its obligations: technical documentation, conformity assessment, CE marking.
Set up monitoring processes
Post-market monitoring (Article 72) and serious incident reporting (Article 73) need operational processes in place by August 2026.
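The Article 73 reporting windows can be encoded as a deadline calculator. A sketch using the commonly cited windows (15 days for an ordinary serious incident, 10 days for a death, 2 days for a widespread infringement or critical-infrastructure disruption); verify these figures against the Regulation's text before wiring them into a live process:

```python
from datetime import date, timedelta

# Days allowed after the provider becomes aware of a serious incident,
# as commonly summarised from Article 73 -- verify before relying on them.
REPORTING_DAYS = {
    "death": 10,
    "widespread_or_critical": 2,
    "other_serious": 15,
}

def report_due(awareness: date, category: str) -> date:
    """Latest date by which the serious-incident report must be submitted."""
    return awareness + timedelta(days=REPORTING_DAYS[category])
```

Becoming aware of an ordinary serious incident on 1 September 2026, for example, would give a reporting deadline of 16 September 2026.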
Key Takeaways
The EU AI Act applies in four phases: February 2025, August 2025, August 2026, and August 2027
Prohibited practices (Article 5) and AI literacy (Article 4) have been in force since February 2025
GPAI obligations have applied since August 2025 — model providers must already comply
2 August 2026 is the critical deadline for high-risk AI systems, FRIA, transparency, and the full enforcement framework
Product safety AI systems (Annex I) have until August 2027
With five months to go, organisations should be deep into classification, FRIA preparation, and compliance planning