The EU AI Act Is Already Partly in Force. Most Enterprises Are Not Ready.

Key Takeaways
  • The EU AI Act’s high-risk provisions are already binding — most enterprises have not completed the required conformity assessments or documentation.
  • GPAI model rules now apply to foundation model providers, including non-EU companies that offer services in Europe.
  • Non-compliance fines can reach €35 million or 7% of global annual turnover — whichever is higher.

Key Claim: The EU AI Act's high-risk obligations take full effect on 2 August 2026, but earlier obligation waves already bind, and most enterprises remain unprepared.

The framing that most enterprise AI teams are operating under — “we have until August 2026 to sort this out” — is already out of date. The EU Artificial Intelligence Act (AI Act) entered into force on 1 August 2024, and enforcement has been staged ever since. Two of its three major obligation waves have already hit. The third arrives in August 2026, and according to a February 2026 readiness report from Vision Compliance, 78% of enterprises are not prepared for it.


What Is Already in Force

The first wave landed on 2 February 2025: prohibitions on AI practices deemed to carry unacceptable societal risk. These include systems that use subliminal manipulation, exploit vulnerable groups, enable social scoring by public authorities, or deploy real-time biometric identification in public spaces in most contexts. Any organisation deploying AI in the EU that has not audited against these prohibitions is already in violation.

The second wave — applying from 2 August 2025 — covers providers of general-purpose AI (GPAI) models: the foundation models that underpin most enterprise AI products. Providers must maintain technical documentation, publish training data summaries, comply with EU copyright law, and — for models that pose systemic risks — notify the European AI Office directly. The Commission published its GPAI Code of Practice on 10 July 2025 and formally approved it on 1 August 2025. Adherence is not mandatory, but the AI Office has made clear that non-adherents will face a greater volume of information requests and less favourable treatment when fines are calculated. These GPAI obligations sit at the core of several of 2025’s most significant AI developments, as the models now subject to oversight are the same systems that reshaped the field.

The August 2026 Threshold

The deadline that most compliance teams are focused on is 2 August 2026, when obligations for high-risk AI systems come into full effect. Under Annex III of the Act, “high-risk” covers AI used in employment screening, access to education, creditworthiness assessment, insurance pricing, law enforcement, and border management, among others. These are not edge cases: they describe a large proportion of enterprise AI deployments already running in production — including the agentic systems that have moved from experimental to operational in the past year.

The requirements for high-risk operators are substantial: quality management systems, complete technical documentation, conformity assessments, and EU database registrations. For large enterprises operating high-risk systems, compliance cost estimates range from $8 million to $15 million in initial investment, with annual ongoing costs between $500,000 and $2 million.

For mid-size companies the numbers are lower but not trivial, and many of those costs are already being incurred whether organisations recognise them as such or not. AI governance and data management spending is projected to reach $492 million in 2026, per Gartner — driven largely by compliance preparation.

The penalty structure provides the compliance incentive: up to €35 million or 7% of global annual turnover for the most serious violations, and up to €15 million or 3% for non-compliance with high-risk obligations. For large enterprises, the 7% figure is the relevant number.
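The "whichever is higher" mechanics can be made concrete with a short sketch. This is a minimal illustration of the cap structure described above, not legal advice; the function name and figures are assumptions drawn from the article's stated tiers, and the "higher of" rule applies to large enterprises (the Act treats SMEs differently).

```python
def fine_cap(fixed_cap_eur: float, turnover_pct: float, global_turnover_eur: float) -> float:
    """Maximum applicable fine for a large enterprise: the higher of the
    fixed cap and the percentage of global annual turnover."""
    return max(fixed_cap_eur, turnover_pct * global_turnover_eur)

# Most serious violations: up to EUR 35M or 7% of global turnover.
# For an enterprise with EUR 2bn turnover, the 7% figure dominates:
fine_cap(35_000_000, 0.07, 2_000_000_000)   # EUR 140M cap

# High-risk non-compliance tier: up to EUR 15M or 3%:
fine_cap(15_000_000, 0.03, 2_000_000_000)   # EUR 60M cap
```

For any company with turnover above €500 million, the percentage term exceeds the €35 million fixed cap, which is why the 7% figure is the relevant number for large enterprises.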

The Digital Omnibus Wildcard

In November 2025, the European Commission proposed a “Digital Omnibus” package that included a delay of high-risk AI obligations to December 2027 — a 16-month extension from the August 2026 deadline. The motivation was explicit: reduce administrative burden on businesses, with a stated goal of cutting compliance costs by at least 25% for all companies and 35% for SMEs. The European Parliament voted in March 2026 on a conditional mechanism — rather than a blanket delay, the Parliament proposed tying the extension to a Commission decision confirming adequate compliance support is available. If that decision does not come, backstop dates of December 2027 (Annex III systems) and August 2028 (Annex I systems) apply.

The proposal has not passed into law. It still requires full approval through the ordinary legislative procedure. Planning around it as a given would be a costly gamble.

What Enterprises Should Be Doing Now

Three actions have clear value regardless of whether the Digital Omnibus delay passes. First, completing an AI inventory: organisations without a systematic record of AI systems in production cannot classify risk, let alone achieve compliance. Second, understanding role classification under the Act — whether the organisation is a provider, deployer, or modifier of an AI system determines which obligations apply. Third, for any system that plausibly falls under Annex III, starting the technical documentation process now. That work takes time, and the audit trail it creates has value beyond regulatory compliance.
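The inventory and classification steps above can be sketched as a simple record structure. This is an illustrative data model, not a legal classification tool: the class names, role labels, and the use-case strings are hypothetical, and the Annex III list here is limited to the categories the article names.

```python
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    PROVIDER = "provider"   # develops the system and places it on the market
    DEPLOYER = "deployer"   # uses the system under its own authority
    MODIFIER = "modifier"   # substantially modifies an existing system

# Annex III categories mentioned in the article (the Act's full list is longer)
ANNEX_III_USES = {
    "employment_screening", "education_access", "creditworthiness",
    "insurance_pricing", "law_enforcement", "border_management",
}

@dataclass
class AISystemRecord:
    name: str
    role: Role
    use_case: str

    def plausibly_high_risk(self) -> bool:
        # A flag for starting the documentation workstream,
        # not a legal determination.
        return self.use_case in ANNEX_III_USES

inventory = [
    AISystemRecord("cv-ranker", Role.DEPLOYER, "employment_screening"),
    AISystemRecord("support-chatbot", Role.DEPLOYER, "customer_service"),
]
flagged = [r.name for r in inventory if r.plausibly_high_risk()]
```

Even a record this minimal forces the two questions the Act turns on: what role the organisation plays, and whether the use case falls under Annex III.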

The August 2026 deadline may slip. The regulatory direction will not.

This article was produced with AI assistance and reviewed by the editorial team.

Source Trail

Marcus Webb, policy and regulation correspondent at Next Waves Insight

About Marcus Webb

Marcus Webb covers AI policy, regulation, and geopolitics — from EU legislation to DARPA programmes to US-China technology competition. He has a background in technology law and previously worked as a policy analyst at a nonpartisan technology policy institute. He tracks standards bodies, government procurement signals, and legislative developments that others miss.

Meet the team →

The NextWave Signal

Enjoyed this analysis?

One AI market analysis + one emerging-tech signal, every Tuesday and Friday — written for engineers, PMs, and CTOs tracking what shifts before it goes mainstream.
