What Regulators Actually Require When AI Runs the Grid or the Trading Desk

Key Takeaways
  • AI already operates inside critical infrastructure under existing mandates — FERC Order 881 mandates hourly ambient-adjusted line ratings, which operators predominantly implement with AI, and SR 11-7 applies to AI models in banking — but neither framework addresses AI-specific risks such as model drift, adversarial manipulation, or retraining cadence.
  • FINRA’s 2026 Regulatory Oversight Report provides the most specific AI governance checklist in financial regulation: prompt and output logging, model version tracking, pre-deployment testing, and human-in-the-loop review are supervisory expectations, not voluntary guidance.
  • NERC has acknowledged its Critical Infrastructure Protection standards were not designed for self-learning AI systems, and no binding AI-specific reliability standard has been finalised — meaning AI in grid control rooms operates under a framework that predates it.

Key Claim: The compliance floor for AI in critical infrastructure already exists — FERC Order 881, SR 11-7, FINRA’s 2026 report, and NIS2 all apply — but none of these frameworks addresses the AI-specific risks that distinguish machine learning systems from the conventional software they were designed to govern.

AI is already operating inside critical infrastructure — managing transmission line ratings, running trading algorithms, screening financial transactions. The question enterprise teams in energy and finance are beginning to ask is not whether to comply with AI regulation. It is which rules already apply, which are coming, and where the coverage map has genuine holes.

The answers are more concrete than most commentary suggests, and more fragmented than any single framework admits.

The Compliance Floor That Already Exists

Start with what is in force.

In energy, FERC Order 881 required full compliance by July 2025. The order mandated that transmission providers replace static line ratings with Ambient-Adjusted Ratings (AARs) — dynamic ratings updated at least once per hour using real-time temperature and environmental data. The dominant implementation path is AI-powered dynamic line rating systems. That means AI tools are now embedded in a federally mandated transmission process. The gap: FERC Order 881 says nothing about how those AI systems should be validated, certified, or monitored for model drift. The mandate is for the output (hourly updated ratings). The mechanism for producing it is left to operators.
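
To make the mechanism concrete, here is a minimal sketch of the temperature term in an ambient-adjusted rating, assuming a simplified IEEE 738-style thermal balance. The conductor limit, assumed ambient, and square-root scaling are illustrative simplifications; production systems also model wind, solar radiation, and conductor geometry.

```python
import math
from datetime import datetime, timezone

# Simplified ambient-adjusted rating (AAR) sketch. Real deployments use
# full IEEE 738 conductor thermal models with wind, solar, and elevation
# inputs; the square-root scaling below captures only the temperature term.

MAX_CONDUCTOR_TEMP_C = 75.0      # illustrative conductor design limit
STATIC_ASSUMED_AMBIENT_C = 40.0  # ambient assumed by the static rating

def ambient_adjusted_rating(static_rating_amps: float, ambient_c: float) -> float:
    """Scale a static line rating by the actual temperature headroom.

    Joule heating grows with I^2, so rated current scales with the
    square root of the ratio of actual to assumed temperature headroom.
    """
    if ambient_c >= MAX_CONDUCTOR_TEMP_C:
        return 0.0  # no thermal headroom left
    headroom_ratio = (MAX_CONDUCTOR_TEMP_C - ambient_c) / (
        MAX_CONDUCTOR_TEMP_C - STATIC_ASSUMED_AMBIENT_C
    )
    return static_rating_amps * math.sqrt(headroom_ratio)

# Hourly refresh, as Order 881 requires ratings updated at least hourly.
rating = ambient_adjusted_rating(static_rating_amps=1200.0, ambient_c=28.0)
print(f"{datetime.now(timezone.utc).isoformat()} AAR: {rating:.0f} A")
```

A cooler hour yields a higher rating than the static assumption allowed, which is exactly the congestion-relieving behaviour the order targets; an ML forecaster typically supplies the ambient input, which is where the unvalidated model enters the federally mandated process.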

In finance, the foundational rule is SR 11-7, the Federal Reserve and OCC’s 2011 joint supervisory guidance on model risk management. SR 11-7 defines “model” as any quantitative method or system producing quantitative outputs — a definition broad enough to include modern machine learning systems. Under SR 11-7, banks must maintain a model inventory, validate each model through an independent function, document purpose and limitations, and monitor performance on an ongoing basis. This framework applies to AI models used in credit scoring, trading, fraud detection, and AML compliance today, even though it was written more than a decade before large language models existed.
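
In practice, the inventory obligation reduces to a structured record per model. The sketch below is our illustration of the minimum fields such a record might carry; the schema and field names are an assumption, not regulatory text.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative SR 11-7-style inventory record. SR 11-7 requires an
# inventory, independent validation, documented purpose and limitations,
# and ongoing monitoring; the fields here are our own sketch.

@dataclass
class ModelInventoryRecord:
    model_id: str
    purpose: str                      # documented use, per SR 11-7
    owner: str                        # accountable business owner
    validator: str                    # independent validation function
    last_validated: date
    known_limitations: list[str] = field(default_factory=list)
    monitoring_metrics: list[str] = field(default_factory=list)

record = ModelInventoryRecord(
    model_id="aml-screening-v4",
    purpose="Transaction screening for AML alerts",
    owner="financial-crime-ops",
    validator="model-risk-management",  # must be independent of the owner
    last_validated=date(2025, 11, 3),
    known_limitations=["degrades on low-volume correspondent accounts"],
    monitoring_metrics=["alert precision", "population stability index"],
)
```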

The OCC made this explicit. In April 2025, Acting Comptroller Rodney E. Hood stated that AI should be governed by “the same risk-based, technology-neutral principles that apply to other banking activities.” The OCC followed with Bulletin 2025-26 in September 2025 — described as “a first step” in a broader review of model risk guidance — clarifying validation frequency expectations for community banks. A comprehensive SR 11-7 update for modern AI has not been issued.

FINRA’s 2026 Report: The Most Specific Financial-Sector AI Framework to Date

FINRA’s 2026 Annual Regulatory Oversight Report, published in December 2025, is the clearest departure from generic risk-management language in the financial sector. FINRA’s position is that existing technology-neutral rules apply to GenAI use — supervision, communications, recordkeeping, and fair-dealing standards do not change because the underlying technology changed. But the 2026 report goes further by specifying operational expectations that amount to a governance checklist.

FINRA expects member firms to:

  • Establish formal governance frameworks and review processes before deploying generative AI
  • Maintain prompt and output logs for accountability and troubleshooting
  • Track which model version was used and when
  • Conduct pre-deployment testing across privacy, integrity, reliability, and accuracy dimensions
  • Implement ongoing human-in-the-loop review of model outputs, including regular checks for errors or bias

These are supervisory expectations, not voluntary best practice. FINRA examiners assess firms against the supervisory framework, and a firm that cannot demonstrate governance and logging practices for a deployed AI system has a gap that can be cited.
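
What a minimal prompt and output log might look like in code: the wrapper, schema, and field names below are our assumptions, not a FINRA-prescribed format.

```python
import json
import uuid
from datetime import datetime, timezone
from typing import Callable

# Sketch of a logging wrapper around any text-generation call. FINRA's
# 2026 report expects prompt/output logs and model-version tracking;
# the JSONL schema and function signature here are our own assumptions.

def logged_generation(model_call: Callable[[str], str], model_version: str,
                      prompt: str, user_id: str, log_path: str) -> str:
    output = model_call(prompt)
    entry = {
        "request_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "model_version": model_version,  # which version answered, and when
        "prompt": prompt,
        "output": output,
        "human_reviewed": False,         # flipped by downstream HITL review
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return output

# Usage with any callable model client (names hypothetical):
# answer = logged_generation(client.generate, "model-v2026.01",
#                            prompt, user_id="rep-1138",
#                            log_path="genai_audit.jsonl")
```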

The 2026 report also flags agentic AI — autonomous AI systems that take sequences of actions — as an emerging risk requiring novel oversight, including restrictions on system access and tracking of agent actions. This signal matters: as financial firms move from GenAI copilots to autonomous trading and compliance agents, the oversight gap will widen if governance frameworks do not keep pace.
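
The two agentic controls the report names, restricted system access and tracking of agent actions, translate naturally into an allow-list plus an append-only action log. The sketch below uses hypothetical tool names and is our illustration, not a prescribed control design.

```python
from datetime import datetime, timezone

# Sketch of the two agentic-AI controls FINRA flags: restrict which
# systems an agent can touch, and record every action it takes.
# Tool names and the log structure are illustrative assumptions.

ALLOWED_TOOLS = {"read_positions", "draft_order"}  # no autonomous execution
action_log: list[dict] = []

def dispatch(agent_id: str, tool: str, args: dict) -> None:
    if tool not in ALLOWED_TOOLS:
        raise PermissionError(f"{agent_id} may not call {tool}")
    action_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "tool": tool,
        "args": args,
    })
    # ... actual tool invocation would happen here ...
```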

FERC and NERC: Probing AI Without Mandating It

The energy sector’s regulatory picture is more active at the inquiry stage than at the enforcement stage.

FERC has opened Docket No. AD25-8-000, a technical conference proceeding examining how AI data centres — with their rapid on/off demand cycles unlike any historical load profile — should be incorporated into grid operations and load forecasting. FERC Chairman Rosner also sent letters to the six major RTOs and ISOs requesting standardised data on large-load forecasting practices. A binding rule has not yet emerged from this docket, but the structure of the inquiry makes rulemaking the likely next step.

CSIS analysis estimates that fewer than 25% of U.S. grid operators currently deploy AI in daily operations such as load forecasting, despite the technical maturity of available tools, and puts annual transmission congestion costs above $20 billion. The implication is that pressure to adopt AI in grid operations will intensify — and the regulatory framework for governing that adoption lags behind adoption itself.

NERC, which sets the reliability standards that FERC enforces, published a July 2025 white paper acknowledging that its existing Critical Infrastructure Protection (CIP) standards were not designed for self-learning or frequently updating AI systems. NERC is evaluating options for modernising its standards body to address AI, cloud computing, and the electrification of transportation and hydrogen production. No new AI-specific reliability standard has been finalised. Until one is, AI systems deployed in grid control rooms operate under a framework that was not written for them.

The SEC’s Withdrawn Rule and What It Leaves Behind

One of the more consequential AI regulatory developments of 2025 was a withdrawal rather than a publication. On 17 June 2025, the SEC formally withdrew its proposed rule on “Conflicts of Interest Associated with the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers” — one of 14 Gensler-era proposals dropped by SEC Chair Paul Atkins’ commission.

The proposed rule would have required broker-dealers and investment advisers using AI to identify any conflict of interest between firm and investor created by that AI’s optimisation, and to eliminate or neutralise that conflict. It drew significant industry opposition, in part because its definition of “covered technology” was broad enough to encompass algorithmic tools well beyond AI. The SEC’s 2026 Examination Priorities, issued under the new commission, retain a focus on how firms identify and mitigate AI-related risks, but the examination approach relies on existing securities law obligations rather than AI-specific rules.

The practical effect: trading firms and investment advisers face AI governance expectations from examiners but no prescriptive AI rulebook. The FINRA supervisory framework provides the most specific operational guidance available in the financial sector.

CISA’s International Coalition: Guidance Without Teeth

In December 2025, CISA, the NSA, and counterpart agencies from Australia, Germany, the UK, Canada, and the Netherlands jointly published “Principles for the Secure Integration of Artificial Intelligence in Operational Technology.” The guidance addresses machine learning, LLM-based AI, and AI agents deployed in systems that directly control physical processes — power generation, water treatment, manufacturing.

The four principles are:

  • Understand AI: educate personnel on AI-specific risks, including model drift and adversarial manipulation
  • Assess AI Use in OT: evaluate business cases and manage data security
  • Establish AI Governance: build governance frameworks with continuous testing
  • Embed Safety and Security: maintain human oversight and ensure transparency
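
The continuous-testing element of the governance principle implies concrete drift checks. One common metric is the population stability index (PSI); the sketch below uses conventional, not mandated, bin counts and alert thresholds.

```python
import numpy as np

# Minimal drift check using the population stability index (PSI), one
# common way to operationalise "continuous testing". The 10-bin layout
# and the 0.2 alert threshold are industry conventions, not mandates.

def psi(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf        # cover out-of-range values
    b = np.histogram(baseline, edges)[0] / len(baseline)
    c = np.histogram(current, edges)[0] / len(current)
    b, c = np.clip(b, 1e-6, None), np.clip(c, 1e-6, None)  # avoid log(0)
    return float(np.sum((c - b) * np.log(c / b)))

rng = np.random.default_rng(0)
drift = psi(rng.normal(0, 1, 10_000), rng.normal(0.5, 1, 10_000))
if drift > 0.2:  # conventional "significant shift" threshold
    print(f"PSI {drift:.2f}: model inputs have drifted; trigger review")
```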

This guidance is not legally binding in any jurisdiction. It represents international regulatory consensus on what responsible AI deployment in operational technology looks like — and it provides a template that sector-specific rulemakers are likely to reference when enforceable standards do arrive. For enterprise teams, it is a preview of where mandatory requirements are heading.

The EU’s Functional Approach: NIS2 Already Applies

European organisations operating in energy, finance, health, or digital infrastructure sectors face a harder compliance edge. The EU’s NIS2 Directive does not name AI explicitly, but it applies to all information systems underpinning essential services. Under its functional definition, an AI system producing outputs that inform or direct essential service operations qualifies as an information system in scope.

As of 5 January 2026, 19 of 27 EU member states had transposed NIS2 into national law. NIS2’s obligations are technology-neutral: entities operating essential services must maintain an inventory of information system assets and their supply-chain provenance, conduct risk assessments addressing threats to those systems, and report significant incidents within 24 hours of awareness. Non-compliance penalties reach €10 million or 2% of global annual turnover, whichever is higher.

Applied to AI systems specifically, these general obligations have concrete operational implications. An AI model producing outputs that direct grid operations or financial transactions is an information system asset that must be inventoried. Risk assessments must cover threats relevant to the technology in use — for machine learning systems, that includes risks such as adversarial manipulation, data poisoning, and model drift, even though NIS2 does not name those threats explicitly. The directive’s breadth is its enforcement mechanism: it does not need to mention AI to regulate it.
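
As a sketch of what that inventory obligation might look like when applied to an ML asset, the record below is our own illustration; the field names and threat taxonomy are assumptions about how NIS2’s technology-neutral obligations map onto machine learning systems, not directive text.

```python
from dataclasses import dataclass, field

# Illustrative NIS2-style asset record for an in-scope AI system.
# NIS2 is technology-neutral; the fields and threat list here are
# our own sketch of how its general obligations apply to ML assets.

@dataclass
class AIAssetRecord:
    asset_id: str
    essential_service: str            # which service the outputs direct
    supplier: str                     # supply-chain provenance
    training_data_sources: list[str]
    assessed_threats: list[str] = field(default_factory=lambda: [
        "adversarial manipulation",   # crafted inputs skewing outputs
        "data poisoning",             # corrupted training/feedback data
        "model drift",                # silent degradation over time
    ])
    incident_contact: str = "soc@example.eu"  # owns the 24h early warning

asset = AIAssetRecord(
    asset_id="load-forecast-lstm-v7",
    essential_service="transmission grid balancing",
    supplier="in-house / open-weights base model",
    training_data_sources=["SCADA telemetry 2019-2025", "weather feeds"],
)
```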

The EU AI Act (Regulation (EU) 2024/1689) adds a separate overlay for high-risk AI systems in critical infrastructure. Under Article 113, obligations for high-risk AI systems are scheduled to apply from 2 August 2026, though the European Commission has acknowledged implementation challenges that may affect certain provisions’ effective dates. For a detailed breakdown of those obligations, see our analysis of EU AI Act high-risk systems compliance. Organisations operating across jurisdictions face the most complex compliance surface: NIS2 cybersecurity obligations apply now, AI Act obligations are coming into force through 2026, and U.S. equivalents remain sector-specific and unevenly defined.

What Enterprise Teams in Energy and Finance Need to Do Now

The following reflects our assessment of practical steps given the current regulatory landscape, not a restatement of binding requirements.

The asymmetry in the regulatory landscape creates a specific operational problem. Obligations exist — SR 11-7 for AI models in banking, FERC Order 881 compliance processes, FINRA’s supervisory expectations — but the exam criteria for AI systems specifically have not been spelled out in most cases. That ambiguity does not reduce legal exposure; it increases operational risk because teams cannot be certain their current practices would satisfy an examiner.

For energy sector teams: The most concrete step is a model inventory audit for AI systems currently embedded in FERC Order 881 compliance (dynamic line rating). Document the validation process, model versioning, and monitoring cadence for each system. This creates the baseline needed when NERC’s standards modernisation produces enforceable requirements — and it is the kind of documentation NERC would examine under a future CIP standard.

For financial sector teams: FINRA’s 2026 governance checklist is the most actionable framework available. Implement prompt and output log retention, model version tracking, and documented pre-deployment testing for every AI system that interacts with customers or executes financial decisions. For banks, SR 11-7’s model inventory and validation requirements apply to AI models now — the absence of a 2026 update does not reduce the obligation.

Across both sectors: The CISA/NSA joint guidance principles provide a structured internal governance framework that maps closely to what sector-specific regulators are signalling. Using it as a governance architecture now reduces the retrofitting cost when binding rules arrive. This pattern, in which enforceable obligations arrive before sector-specific AI rules are written, mirrors the broader U.S. AI governance gap that technology-neutral enforcement is being used to bridge.

The timeline pressure is real. FERC’s technical conference docket, NERC’s standards modernisation process, and FINRA’s escalating specificity on AI governance all point in the same direction. The question for enterprise teams is not whether AI-specific obligations are coming. It is whether the governance infrastructure they are building today will need to be rebuilt when those obligations land.

This article was produced with AI assistance and reviewed by the editorial team.

Marcus Webb, policy and regulation correspondent at Next Waves Insight

About Marcus Webb

Marcus Webb covers AI policy, regulation, and geopolitics — from EU legislation to DARPA programmes to US-China technology competition. He has a background in technology law and previously worked as a policy analyst at a nonpartisan technology policy institute. He tracks standards bodies, government procurement signals, and legislative developments that others miss.
