The White House released a National AI Policy Framework on 20 March, laying out seven pillars for federal AI governance and calling on Congress to preempt state laws that “impose undue burdens.” Two weeks later, at least four more state AI bills have been signed into law, eight more have cleared at least one chamber, and the total for 2026 has passed 1,500 bills across 45 states. The framework promises regulatory unity. State legislatures are delivering the opposite.
For enterprise teams deploying AI systems across multiple states, the gap between the federal promise and the state-level reality is the compliance problem that matters right now.
The Federal Framework: Seven Pillars, No Force of Law
The framework, built on a December 2025 Executive Order, organises its recommendations around seven pillars: child protection, AI infrastructure, intellectual property, free speech, innovation, workforce development, and — crucially — federal preemption of state AI laws.
The preemption pillar targets state laws that regulate AI development (deemed “inherently interstate”), restrict otherwise lawful AI activity, or impose liability on developers for third-party uses. It preserves state authority over child protection, fraud prevention, zoning, and public-sector procurement.
Two features define the framework’s practical impact — or lack of it. First, it recommends no new federal regulator. AI oversight would remain with existing sector-specific agencies (FTC, SEC, FDA, etc.). Second, and more importantly, the framework is a set of legislative recommendations to Congress. It has no force of law. Until Congress acts, it changes nothing about the compliance obligations enterprises face today.
The administration created an AI Litigation Task Force to challenge state laws viewed as inconsistent with federal policy. But litigation takes years. Colorado’s AI Act takes effect before the end of Q2.
1,500 Bills and Counting
The scale of state AI legislation in 2026 is difficult to overstate. According to Multistate.ai’s tracker, more than 1,500 AI-related bills have been introduced across 45 states — up from 1,208 across all 50 states in 2025 and 635 across 45 states in 2024. With the 2026 count already above the full-year 2025 total this early in the legislative session, the pace is clearly accelerating.
In 2025, 145 AI bills were enacted into law, up from 99 in 2024. The 2026 numbers are tracking above both years.
The pace in the two weeks since the federal framework illustrates the dynamic. Washington Governor Bob Ferguson signed two AI bills on 24 March: HB 2225 (chatbot safety, including minor protections and suicide-prevention protocols) and HB 1170 (AI content provenance requiring watermarks or metadata). Utah signed HB 276 (provenance). New York signed the RAISE Act amendments.
Bills are also advancing fast in chambers that haven’t yet reached the governor’s desk. Maryland’s HB 952 passed the House 123–4. Oklahoma’s HB 3546, which addresses AI personhood, passed 94–2.
These are not close votes on controversial measures. They are supermajority endorsements of AI regulation at the state level. Whether states are racing to legislate before federal preemption arrives or simply continuing existing momentum, the effect is the same: the compliance burden is growing while the federal framework remains a set of recommendations.
Colorado: The Compliance Benchmark
The law enterprise teams should be watching most closely is Colorado’s SB-205 — the most comprehensive state AI law in the country. Originally scheduled to take effect on 1 February 2026, it was delayed to 30 June 2026 after a special legislative session in August 2025. It now takes effect in less than three months.
SB-205 requires deployers of high-risk AI systems — those making or substantially influencing “consequential decisions” about consumers — to conduct impact assessments annually and within 90 days of any system modification. Developers must publish statements describing their systems and how algorithmic risks are managed. Violations carry penalties of up to $20,000 each, enforced by the Colorado Attorney General under the Consumer Protection Act.
The law’s scope is broad. “Consequential decisions” include employment, education, housing, insurance, financial services, and healthcare. Any enterprise using AI to screen job applicants, assess loan eligibility, or triage support requests in Colorado is likely in scope.
The Common Sense Institute projects the law will cost Colorado 40,000 jobs and $7 billion in economic output by 2030 — though those figures come from industry-funded research and should be read accordingly. What is less disputed: compliance requires meaningful investment in AI governance, documentation, and ongoing monitoring.
One concession matters. Organisations that comply with the NIST AI Risk Management Framework have an affirmative defence. For enterprises already aligned with NIST AI RMF, Colorado’s requirements are manageable. For those without a governance framework, the 30 June deadline is close.
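One way to make that safe-harbour alignment auditable is to track documented evidence against the NIST AI RMF’s four core functions. The function names below are the framework’s real ones (GOVERN, MAP, MEASURE, MANAGE); the evidence-register scheme around them is an illustrative sketch, not a compliance tool:

```python
# The four core function names are taken from NIST AI RMF 1.0.
# The evidence items and file names are hypothetical placeholders.
RMF_FUNCTIONS = ("GOVERN", "MAP", "MEASURE", "MANAGE")

def rmf_gaps(evidence: dict[str, list[str]]) -> list[str]:
    """Return the core functions with no documented evidence recorded yet."""
    return [fn for fn in RMF_FUNCTIONS if not evidence.get(fn)]

# Hypothetical evidence register for one AI system.
register = {
    "GOVERN": ["ai-use-policy-v2.pdf", "risk-committee-charter.pdf"],
    "MAP": ["hiring-tool-context-assessment.md"],
    "MEASURE": [],
}
```

Running `rmf_gaps(register)` flags MEASURE and MANAGE as undocumented — exactly the gap an affirmative-defence claim would need to close before 30 June.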
The Five-State Hiring Problem
The practical burden of fragmentation becomes concrete when applied to a single use case. According to a U.S. Chamber of Commerce analysis, a company using AI-assisted hiring tools across California, Colorado, Illinois, New York, and Texas must now navigate five different compliance frameworks — each with different definitions of algorithmic discrimination, different audit timelines, and different disclosure obligations.
California’s requirements are the most mature. Colorado’s are the most comprehensive. Illinois was among the first to regulate AI in hiring (the AI Video Interview Act). New York City’s Local Law 144 requires annual bias audits for automated employment decision tools.
No two of these laws define “high-risk AI system” the same way. No two have the same audit frequency. No two agree on what constitutes adequate disclosure to candidates. The result is that a single HR-tech product must be documented, audited, and disclosed under five different regimes — or the enterprise must build a compliance superset that satisfies all of them simultaneously.
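The “compliance superset” idea is itself mechanical: take the strictest audit cadence across jurisdictions and the union of every disclosure obligation. The jurisdiction keys below are real, but every requirement value is a placeholder for illustration, not the actual statutory term of any of these laws:

```python
# Placeholder requirement values for illustration only; do not read these as
# the real audit cadences or disclosure lists of any statute.
STATE_REQUIREMENTS = {
    "CA":  {"audit_every_days": 365, "disclosures": {"candidate_notice", "logic_summary"}},
    "CO":  {"audit_every_days": 365, "disclosures": {"consumer_notice", "impact_assessment"}},
    "IL":  {"audit_every_days": 730, "disclosures": {"candidate_notice", "consent"}},
    "NYC": {"audit_every_days": 365, "disclosures": {"bias_audit_summary", "candidate_notice"}},
    "TX":  {"audit_every_days": 730, "disclosures": {"consumer_notice"}},
}

def compliance_superset(reqs: dict) -> dict:
    """Strictest audit cadence plus the union of every disclosure obligation."""
    return {
        "audit_every_days": min(r["audit_every_days"] for r in reqs.values()),
        "disclosures": set().union(*(r["disclosures"] for r in reqs.values())),
    }
```

The superset approach trades over-compliance in lenient states for a single audit and disclosure pipeline — usually the cheaper option once the jurisdiction count passes two or three.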
Small businesses in California face approximately $16,000 in annual compliance costs from privacy and AI rules alone. Scale that across multiple states and use cases, and the compliance burden becomes a material line item.
Federal Enforcement Without Federal Law
Adding complexity: federal agencies have signalled intent to enforce AI-related violations using existing statutes, even without a comprehensive AI law — and some have already acted.
The FTC is using Section 5 to pursue misleading claims about AI capabilities; its 2023 action against Rite Aid for deploying AI facial recognition without safeguards remains the clearest precedent. The SEC settled charges against two investment advisers in 2024 for “AI washing” — overstating AI capabilities in investor disclosures. The DOJ is pursuing False Claims Act violations in government-funded programmes that use AI. State attorneys general are scrutinising algorithmic pricing tools.
Enterprise teams now face a three-layer enforcement reality: state AI-specific laws with specific deadlines, federal enforcement under existing statutes without specific AI guidance, and the uncertain prospect of federal preemption that could render some state compliance work moot — or not.
What to Watch
30 June 2026 is the first hard deadline. Colorado’s SB-205 takes effect, and any enterprise deploying high-risk AI affecting Colorado consumers needs an impact assessment and public disclosure ready. NIST AI RMF alignment provides a safe harbour.
The federal preemption timeline is unpredictable. Congress has not introduced legislation based on the framework’s recommendations. Even with bipartisan support, comprehensive AI legislation would take 12–18 months minimum. State laws will continue to accumulate in the interim.
The NIST AI RMF is emerging as the strongest candidate for a compliance baseline. Colorado is the first state to codify it as a safe harbour. Whether other states follow will determine if this becomes the national standard — but for now, it’s the closest thing enterprises have to a common denominator across jurisdictions.
The EU AI Act becomes fully applicable on 2 August 2026. Enterprises operating in both the US and EU face the most complex compliance matrix in AI governance history. Those building for EU compliance may find their governance infrastructure transfers well to the US patchwork — the structural patterns (risk classification, impact assessment, transparency documentation) are converging even if the specific requirements differ.
Some policy experts argue that state-level experimentation is a feature, not a bug — different approaches get tested before federal standardisation, much as EU member-state regulation informed the AI Act. That argument has merit on a policy timeline. It does not help an enterprise compliance team that needs to ship a hiring product in five states by Q3.
For enterprise teams, the strategic calculation is straightforward: build AI governance infrastructure now, align with NIST AI RMF as the best available baseline, and treat state compliance as the floor — not the ceiling.
Further Reading
- The US Has No Federal AI Law. Here’s What the Regulatory Vacuum Means for Enterprise Teams. — Next Waves Insight
- The EU AI Act Is Already Partly in Force. Most Enterprises Are Not Ready. — Next Waves Insight
This article was produced with AI assistance and reviewed by the editorial team.

