The EU AI Act: What Every Enterprise Must Do Before the Deadlines
A compliance timeline, risk classification guide, and practical action plan for enterprises affected by the EU AI Act — the world's first comprehensive AI regulation.

The World's First AI Law Is Now Enforceable
The EU AI Act entered into force on August 1, 2024, with a phased compliance timeline. The first deadline — the ban on prohibited AI practices — became enforceable on February 2, 2025. The second major deadline covering general-purpose AI model (GPAI) obligations and governance frameworks arrives in August 2025. High-risk AI system requirements for sectors including healthcare, financial services, education, and critical infrastructure follow in August 2026. Any enterprise deploying or providing AI systems in the EU market is subject to this regulation, regardless of where the company is headquartered.
The stakes are real. Violations of prohibited practice bans carry fines of up to €35 million or 7% of global annual turnover, whichever is higher. Non-compliance with high-risk AI requirements carries up to €15 million or 3% of global annual turnover. These are not theoretical numbers: the EU has a demonstrated track record of levying record-setting GDPR fines, and the AI Act's enforcement infrastructure is being built with the same seriousness.
Understanding the Risk Classification Framework
The EU AI Act classifies AI systems into four tiers:
- Unacceptable risk (banned): social scoring by governments, real-time biometric surveillance in public spaces, and subliminal manipulation.
- High risk (strict compliance requirements): AI in critical infrastructure, employment and HR (CV screening, interview scoring), essential services (credit scoring, insurance), law enforcement, and the administration of justice.
- Limited risk (transparency obligations): chatbots and deepfakes must disclose their AI nature.
- Minimal risk: no mandatory requirements.
The classification is not always intuitive. An AI system that makes or materially influences a decision affecting a person's access to employment, credit, or essential services is high-risk — full stop. This means many enterprise AI deployments that seem routine — resume screening, loan approval assistance, customer risk scoring — are high-risk AI systems with full compliance requirements.
The EU AI Act does not ban AI. It requires that consequential AI systems can be explained, audited, and overridden by a human.
What High-Risk AI Compliance Actually Requires
High-risk AI systems must meet requirements across eight areas: risk management system, data governance, technical documentation, record-keeping, transparency and user information, human oversight (the ability for a human to override, halt, or correct the AI system), accuracy and robustness, and cybersecurity.
For most enterprises, the most demanding requirements are the risk management system and human oversight mechanisms — both require ongoing processes, not one-time documentation. The risk management system must be updated throughout the system lifecycle, meaning organizations need processes for monitoring deployed AI systems, reassessing risks when systems are updated, and documenting risk decisions with accountability.
The GPAI Model Obligations
The August 2025 deadline covers obligations for providers of general-purpose AI models — the foundation models that underlie most enterprise AI applications. OpenAI, Google, Anthropic, Meta, and Mistral are directly subject to GPAI obligations. Enterprises are indirectly affected because they must ensure the GPAI models they use comply, and they inherit documentation obligations when they fine-tune or deploy GPAI models in high-risk contexts.
GPAI models with 'systemic risk' (defined by training compute above 10^25 FLOPs) face additional obligations: adversarial testing, incident reporting to the European AI Office, and cybersecurity measures. Practical implication for enterprises: verify that AI service providers have completed their GPAI compliance and ensure procurement contracts include AI Act compliance representations and audit rights.
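The 10^25 FLOPs threshold can be sanity-checked against a model's scale. The sketch below uses the common 6 × parameters × tokens approximation for training compute; this rule of thumb and the example model sizes are illustrative assumptions, not a measurement method defined by the Act.

```python
# Rough check of whether a model's training compute crosses the AI Act's
# 'systemic risk' threshold of 10^25 FLOPs. Uses the common approximation
# that training compute ~= 6 * parameters * training tokens.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25


def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate total training compute in FLOPs (6ND rule of thumb)."""
    return 6 * parameters * training_tokens


def exceeds_systemic_risk_threshold(parameters: float, training_tokens: float) -> bool:
    """True if the estimate meets or exceeds the Act's 10^25 FLOPs line."""
    return estimated_training_flops(parameters, training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS


# Hypothetical 70B-parameter model trained on 15T tokens:
# 6 * 7e10 * 1.5e13 = 6.3e24 FLOPs, below the threshold.
print(exceeds_systemic_risk_threshold(70e9, 15e12))   # False
# Hypothetical 500B-parameter model trained on 20T tokens:
# 6 * 5e11 * 2e13 = 6e25 FLOPs, above the threshold.
print(exceeds_systemic_risk_threshold(500e9, 20e12))  # True
```

In practice the provider's own compute accounting governs; an estimate like this only flags which vendor questions to ask during procurement.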
Your Compliance Action Plan
The four immediate actions every enterprise with EU market exposure should take: First, complete an AI inventory — document every AI system in use, its function, the data it processes, and the decisions it influences. Second, classify each system against the EU AI Act risk tiers. Third, for any system classified as high-risk, begin gap analysis against the eight compliance requirements. Fourth, establish an AI governance function with a designated owner and a policy framework.
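The inventory-and-classify steps above can be sketched as a simple record plus a first-pass triage rule. Field names, the domain list, and the keyword heuristic below are illustrative assumptions, not the Act's legal definitions; every system still needs legal review.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


# Decision domains the Act treats as high-risk (abbreviated, illustrative list).
HIGH_RISK_DOMAINS = {
    "employment", "credit", "insurance", "education",
    "critical_infrastructure", "law_enforcement", "justice",
}


@dataclass
class AISystemRecord:
    """One inventory entry: what the system does and what it decides."""
    name: str
    function: str
    data_processed: list
    decision_domains: set = field(default_factory=set)
    is_chatbot_or_deepfake: bool = False


def classify(record: AISystemRecord) -> RiskTier:
    """First-pass triage against the Act's tiers; not a legal determination."""
    if record.decision_domains & HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    if record.is_chatbot_or_deepfake:
        return RiskTier.LIMITED  # transparency/disclosure obligations apply
    return RiskTier.MINIMAL


resume_screener = AISystemRecord(
    name="CV screening assistant",
    function="Ranks applicant resumes for recruiters",
    data_processed=["resumes", "job descriptions"],
    decision_domains={"employment"},
)
print(classify(resume_screener).value)  # high
```

A triage pass like this is useful precisely because routine-seeming deployments (resume screening, credit scoring) land in the high-risk tier and trigger the full gap analysis.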
Klevrworks helps enterprises navigate EU AI Act compliance: conducting AI inventories and risk classifications, building compliance documentation for high-risk systems, designing human oversight mechanisms that satisfy regulatory requirements, and preparing for conformity assessments. Contact our compliance team for an EU AI Act readiness assessment before the August 2025 GPAI deadline.