The US Has No Federal AI Law. Here’s What the Regulatory Vacuum Means for Enterprise Teams.

Key Claim

The United States has no federal AI law, hundreds of competing state bills across 40+ states with conflicting requirements, and executive orders that can be reversed by the next administration — creating a compliance environment where enterprises building AI products face genuine legal uncertainty that cannot be resolved by waiting for federal clarity.

Key Takeaways

  • As of Q1 2026, no comprehensive federal AI legislation has passed Congress; over 40 states have introduced AI-related bills with varying and sometimes contradictory requirements
  • The Biden-era AI Executive Order anchored federal AI governance in reporting requirements and voluntary safety commitments from frontier AI developers — the Trump administration rescinded it on day one, January 20, 2025
  • California’s SB 1047 veto signalled that state-level AI liability legislation faces significant political resistance; Colorado’s SB 205 passed and takes effect in 2026
  • Organisations deploying AI in hiring, credit, and healthcare face the most immediate legal exposure under existing anti-discrimination law, not new AI-specific regulation

The US AI regulatory environment is frequently described as a “vacuum” — implying an absence of rules that will eventually be filled. That framing understates the complexity of the current situation. There is no vacuum: there are existing laws (anti-discrimination, consumer protection, privacy, securities regulation) that apply to AI systems in ways that courts and regulators are actively interpreting. There are also hundreds of state-level bills across 40+ states, state laws already on the books, and a federal executive branch that has oscillated between aggressive AI governance and explicit deregulation within a two-year window. For enterprise legal and compliance teams, this is not a vacuum. It is an unstable and fragmented regulatory landscape.

What the Federal Government Has — and Has Not — Done

The Biden administration’s Executive Order on Safe, Secure, and Trustworthy AI, signed in October 2023, established the most comprehensive federal AI governance framework the US has produced. It required developers of the largest frontier models to report safety test results to the government before public release, directed agencies to develop AI risk frameworks for their sectors, and built on voluntary commitments from major AI companies around watermarking, safety testing, and responsible scaling policies. Outside the reporting provisions it had little enforcement muscle — much of the framework relied on voluntary cooperation — but it created institutional infrastructure and signalled the direction of federal intent.

The Trump administration rescinded the EO on January 20, 2025. The replacement executive order framed AI governance as primarily a competitiveness issue, directing agencies to remove “unnecessary barriers” to AI development. The voluntary safety commitments from the major AI labs remain in place as company policy, but they are no longer backed by any federal requirement or expectation. The US AI Safety Institute, established at NIST in the wake of the Biden EO, has had its mandate and staffing reduced.

The State Bill Landscape

In the absence of federal action, states have moved. Across all 50 states, more than 400 AI-related bills were introduced in 2024–2025, covering areas from AI-generated content disclosure to algorithmic hiring tools to autonomous vehicle liability. The problem for enterprise compliance teams is not the volume but the inconsistency. A hiring algorithm that complies with Illinois’s AI Video Interview Act may not comply with New York City’s Local Law 144, which has different audit and bias-testing requirements. A generative AI content disclosure requirement in California may conflict with the technical implementation of a different requirement in Texas.

The most significant state law to date is Colorado’s SB 205, the Colorado Artificial Intelligence Act, which passed in May 2024 and takes effect in February 2026. It imposes obligations on deployers of “high-risk AI systems” — systems that make or materially influence consequential decisions in employment, education, financial services, healthcare, and housing — including impact assessments, bias testing, and consumer notification requirements. It is the closest the US has to the EU AI Act in scope and structure, though more limited in coverage.

Where the Real Legal Exposure Is Now

Regardless of new AI-specific legislation, enterprises using AI in regulated domains face legal exposure under existing law. The Equal Credit Opportunity Act and Fair Housing Act prohibit discriminatory lending and housing decisions — AI systems that produce disparate impact on protected classes can violate these laws regardless of whether the discrimination was intentional. The EEOC has issued guidance making clear that AI-driven hiring decisions are subject to Title VII analysis. The FTC has brought enforcement actions against companies using AI in ways that it characterised as deceptive or unfair under existing consumer protection authority.
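
Disparate impact under these frameworks is usually screened with the EEOC’s four-fifths rule: a selection rate for any group below 80% of the highest group’s rate is treated as evidence of adverse impact. A minimal sketch of that screening check in Python follows — the record format and group labels are illustrative assumptions, and failing the check is a signal for further analysis, not a legal conclusion.

```python
from collections import Counter

def selection_rates(records):
    """Selection rate (selected / applicants) per group.
    `records` is an iterable of (group, selected) pairs —
    an illustrative schema, not a prescribed format."""
    applicants, selected = Counter(), Counter()
    for group, was_selected in records:
        applicants[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / applicants[g] for g in applicants}

def four_fifths_check(rates):
    """Compare each group's rate to the highest group's rate;
    ratios below 0.8 fail the EEOC four-fifths screening rule."""
    benchmark = max(rates.values())
    return {g: round(r / benchmark, 2) for g, r in rates.items()}

# Hypothetical outcomes: 50 applicants per group, 30 vs 18 selected.
records = ([("group_a", i < 30) for i in range(50)]
           + [("group_b", i < 18) for i in range(50)])
print(four_fifths_check(selection_rates(records)))
# {'group_a': 1.0, 'group_b': 0.6} — group_b falls below 0.8
```

Versions of this ratio are what NYC Local Law 144 bias audits and Colorado SB 205 impact assessments expect deployers to compute and document, at varying levels of formality.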

This means that the “we’ll wait for federal regulation before worrying about AI compliance” posture is legally untenable for any organisation deploying AI in hiring, credit, insurance, healthcare, or consumer-facing contexts. The regulatory framework already exists — it is just fragmented across multiple agencies and bodies of law rather than consolidated in a single AI statute.

What Enterprise Teams Should Actually Be Doing

The practical compliance posture that holds up in the current environment has three components. First, maintain documentation: for every consequential AI system, keep records of training data provenance, model selection decisions, bias testing results, and performance monitoring. This documentation is what regulators and plaintiffs’ attorneys will request. Having it positions an organisation to respond; not having it is itself a source of additional liability.
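
As a concrete starting point, the record can be as simple as one structured object per system. The sketch below is a minimal illustration in Python — every field name is a hypothetical choice, not a regulatory schema; the point is that each item above has a documented, dated answer with an owner.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """Illustrative compliance record for one consequential AI system.
    Field names are hypothetical, not drawn from any statute."""
    system_name: str
    decision_domain: str              # e.g. "hiring", "credit", "housing"
    training_data_provenance: str     # sources, licences, consent basis
    model_selection_rationale: str    # why this model over alternatives
    bias_test_reports: list[str] = field(default_factory=list)  # report IDs
    monitoring_cadence: str = "quarterly"
    last_reviewed: date | None = None

record = AISystemRecord(
    system_name="resume-screener-v3",
    decision_domain="hiring",
    training_data_provenance="licensed ATS archive 2019-2023; consent on file",
    model_selection_rationale="gradient-boosted trees; auditable feature weights",
    bias_test_reports=["audit-2025-Q4"],
    last_reviewed=date(2026, 1, 15),
)
```

Whether this lives in a dataclass, a register spreadsheet, or a GRC platform matters less than that it exists, stays current, and names an owner.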

Second, track Colorado SB 205 compliance requirements as a proxy for the direction federal regulation will eventually take. The EU AI Act — which does have extraterritorial reach for organisations serving EU users — provides the same function at higher stringency. Organisations that build toward the more demanding frameworks now are not over-investing; they are building toward the likely convergence point.

Third, treat AI governance as a legal and compliance function, not a communications or ethics function. The reputational risk from visible AI failures is real, but it is secondary to the legal risk from deploying AI systems that produce discriminatory outcomes in regulated domains. The governance infrastructure needs to be built to satisfy legal and regulatory scrutiny, not primarily to satisfy public relations requirements.

Source Trail

Colorado SB 205 (2024) full text · Biden AI Executive Order (Oct 2023) · Trump AI Executive Order (Jan 2025) · EEOC AI and Title VII guidance · FTC AI enforcement actions database · NCSL state AI legislation tracker 2025

Marcus Webb, policy and regulation correspondent at Next Waves Insight

About Marcus Webb

Marcus Webb covers AI policy, regulation, and geopolitics — from EU legislation to DARPA programmes to US-China technology competition. He has a background in technology law and previously worked as a policy analyst at a nonpartisan technology policy institute. He tracks standards bodies, government procurement signals, and legislative developments that others miss.
