Intel Foundry’s external revenue in Q1 2026 was $174 million — against total foundry revenue of $5.4 billion. That single number describes the commercial gap that Intel’s most significant process technology milestone in a decade has yet to close. The 18A node entered high-volume manufacturing in January 2026 at Fab 52 in Chandler, Arizona, ahead of TSMC’s competing N2 node. The technical execution is real. The external customer pipeline is not.
What 18A actually delivers — and why that has not translated into a hyperscaler queue at Intel Foundry — is the most practically useful question for anyone evaluating the semiconductor supply chain over the next two to three years.
What RibbonFET and PowerVia Actually Do
Intel 18A is built on two structural innovations that distinguish it from every other node in volume production today.
RibbonFET is Intel’s implementation of gate-all-around (GAA) transistor architecture. Where a FinFET transistor controls current flow on three sides of a fin-shaped channel, a GAA transistor wraps the gate around all four sides of a stacked nanosheet “ribbon” of semiconductor material. The result is tighter electrostatic control, lower leakage current at small gate lengths, and better scalability — the same architectural leap TSMC makes with the nanosheet transistors on N2, marketed under its NanoFlex design-flexibility branding.
PowerVia is more novel. It relocates the power delivery network — the metal layers that carry current from the package to the transistors — from the front of the wafer to the back. This frees the front-side metal layers entirely for signal routing, improving logic cell utilisation by 5–10% and reducing the resistive voltage droop that degrades performance at high clock speeds. Intel holds a 6–12 month lead in backside power delivery over TSMC, whose equivalent technology, Super Power Rail, is not expected until the A16 node in late 2026 to 2027 — TSMC has dropped backside power from the N2P derivative.
The combination positions 18A as a performance-per-watt leader, even if it trails TSMC N2 on raw transistor density.
The Density Trade-Off
Tom’s Hardware’s comparison of the two nodes is the clearest public analysis available: Intel 18A achieves approximately 238 million transistors per square millimetre (MTr/mm²) in high-density logic; TSMC N2 reaches approximately 313 MTr/mm². TSMC is roughly 1.3 times denser. For SRAM — the memory cells embedded throughout a chip that directly determine how much cache can fit on a die — the gap is similar: TSMC N2 delivers approximately 38 Mb/mm² at a 0.0175 µm² bit cell, versus Intel 18A’s 31.8 Mb/mm² at 0.021 µm².
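The quoted figures are internally consistent and worth sanity-checking. A short back-of-envelope script (a sketch: the array-efficiency figures it derives are inferred from the quoted numbers, not published by either foundry):

```python
# Sanity-check the public density figures quoted above.
MM2_IN_UM2 = 1e6  # 1 mm^2 = 10^6 um^2

# Logic density (million transistors per mm^2), as quoted.
intel_18a_mtr, tsmc_n2_mtr = 238, 313
density_ratio = tsmc_n2_mtr / intel_18a_mtr  # ~1.32x in TSMC's favour

def raw_mb_per_mm2(bitcell_um2: float) -> float:
    """Raw SRAM bit density implied by the bit-cell area alone."""
    return MM2_IN_UM2 / bitcell_um2 / 1e6  # megabits per mm^2

# Raw density vs the quoted effective macro density; the gap is array
# overhead (sense amps, decoders, redundancy).
tsmc_raw = raw_mb_per_mm2(0.0175)   # ~57.1 Mb/mm^2 raw
intel_raw = raw_mb_per_mm2(0.021)   # ~47.6 Mb/mm^2 raw
tsmc_eff, intel_eff = 38.0, 31.8    # quoted effective densities

print(f"logic ratio: {density_ratio:.2f}x")
print(f"TSMC array efficiency:  ~{tsmc_eff / tsmc_raw:.2f}")
print(f"Intel array efficiency: ~{intel_eff / intel_raw:.2f}")
```

Both foundries land at roughly two-thirds array efficiency, which suggests the quoted effective densities are measured on comparable macro assumptions rather than flattering one side.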
Intel’s counter-argument is that the PowerVia backside power architecture partially recaptures that density penalty by eliminating power routing from the front-side stack, and that performance-per-watt matters more than transistors-per-square-millimetre for many AI workloads. Intel claims 25% higher performance or 36% lower power consumption versus its own Intel 3 node. Those are internal comparisons against Intel’s previous generation, not head-to-head measurements against TSMC N2 on the same workload — a distinction that procurement engineers evaluating second-source options will need to verify through independent benchmarks.
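Intel’s recapture argument has a hard ceiling that is easy to see in the numbers. Assuming the 5–10% utilisation gain translates one-for-one into effective logic density (a simplification; real layouts vary), even the top of that range does not close the gap:

```python
# Sketch: apply the 5-10% PowerVia utilisation gain quoted above to
# Intel's logic density and compare against TSMC N2. Simplified --
# utilisation gains do not map one-for-one onto density in practice.
intel_hd = 238  # Intel 18A high-density logic, MTr/mm^2
tsmc_hd = 313   # TSMC N2, MTr/mm^2

for gain in (0.05, 0.10):
    effective = intel_hd * (1 + gain)
    ratio = tsmc_hd / effective
    print(f"+{gain:.0%} utilisation: {effective:.0f} MTr/mm^2 effective, "
          f"TSMC still {ratio:.2f}x denser")
```

Even granting the full 10% gain, TSMC retains roughly a 1.2x effective density edge, which is part of why the gap matters for cache-heavy accelerator designs.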
The honest summary: Intel is faster and more power-efficient at a given clock speed; TSMC fits more transistors and more SRAM into the same die area. For AI accelerator designs where memory bandwidth and on-die cache dominate performance, TSMC’s density advantage is structurally significant.
Intel’s Own Products: The 18A Volume Base
The overwhelming majority of 18A wafer starts are for Intel’s own products. Two chips anchor the node’s initial volume.
Panther Lake, Intel’s Core Ultra 300 series unveiled on 5 January 2026, is a multi-chiplet AI PC platform delivering up to 180 platform TOPS (tera-operations per second) across its CPU, GPU, and integrated neural processing unit (NPU). It pairs up to 16 performance and efficiency cores with a next-generation Arc GPU with up to 12 Xe3 cores. This is Intel’s answer to Qualcomm’s Snapdragon X Elite in the premium laptop segment and its most direct commercial expression of what 18A can do at volume.
Clearwater Forest (Xeon 6+), introduced at MWC 2026 on 3 March 2026, is Intel’s first 18A server CPU — a many-core, efficiency-oriented design with up to 288 E-cores and a 17% instructions-per-cycle (IPC) improvement over its predecessor. It targets cloud and hyperscaler rack deployments. Both chips are manufactured at Fab 52 in Arizona.
These two product lines are why 18A is in high-volume production. The Intel Foundry business is, for now, primarily serving its own customer.
The External Customer Picture
Intel Foundry’s external customer pipeline on 18A has three confirmed or reported elements, each with different levels of certainty.
Microsoft Maia. Multiple credible outlets — TechPowerUp and Tom’s Hardware among them — have reported that Microsoft will manufacture its Maia AI accelerator (codenamed “Griffin”) on Intel’s 18A or 18A-P process for Azure data centre use. Neither Microsoft nor Intel has disclosed this as a confirmed order in official communications, volume terms, or financial guidance. It is the most credible external 18A lead in the pipeline; it is not a confirmed hyperscaler commitment at volume.
Pentagon Secure Enclave. This is the one fully confirmed external 18A engagement. In late 2024, Intel was awarded up to $3 billion by the Department of Defense under its Secure Enclave programme to manufacture leading-edge semiconductors for US government and defence customers. This is a separate appropriation from the $7.86 billion CHIPS Act award Intel received from the Commerce Department. The programme, which builds on earlier Rapid Assured Microelectronics Prototypes – Commercial (RAMP-C) and State-of-the-Art Heterogeneous Integration Prototype (SHIP) contracts, has onboarded defence industrial base customers including Boeing, Northrop Grumman, Microsoft, IBM, and Nvidia for design and prototype work on 18A. These are prototyping and design engagements, not full-rate production orders — but the Secure Enclave relationship is the clearest evidence that Intel Foundry is a viable path for US government customers who cannot use TSMC’s Taiwan-based fabs for classified or export-controlled designs.
18A-P inbound interest. Intel Chief Executive Officer (CEO) Lip-Bu Tan cited growing inbound interest in the 18A-P (performance-optimised) variant as a foundry node. Intel 18A-P is the process derivative positioned for external customers who want the PowerVia advantage on a different performance-power operating point. “Inbound interest” is pre-commercial engagement — design win conversions have not been announced.
What is absent from this picture is meaningful: no confirmed volume order from Google, Amazon, Apple, or any major fabless semiconductor company has been publicly announced for 18A. The foundry’s $174 million external revenue in Q1 2026 is consistent with small-volume prototype and early-stage design work, not the scaled production runs that define a viable merchant foundry.
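Expressed as a share, the gap is stark. A one-line check using the figures quoted above:

```python
# External foundry revenue as a share of total, Q1 2026 (figures as quoted).
external_usd, total_usd = 174e6, 5.4e9
share = external_usd / total_usd
print(f"External share of Intel Foundry revenue: {share:.1%}")  # 3.2%
```

Roughly 97% of Intel Foundry’s output, in revenue terms, is still Intel selling to itself.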
The Structural Problem: Ecosystem Lock-In
The technology milestone is genuine. The commercial conversion gap is structural, not simply a matter of Intel needing more time to demonstrate yield maturity.
TSMC’s advantage over any challenger is not primarily a matter of transistor density or yield rates. It is the depth of the process design kit (PDK) ecosystem: certified design tool flows, intellectual property (IP) libraries, packaging options, and customer engineering teams who have spent years optimising designs for TSMC’s specific process characteristics. A hyperscaler’s application-specific integrated circuit (ASIC) team choosing Intel 18A over TSMC N2 is not making a node-to-node technical comparison. It is absorbing the cost of porting or re-qualifying its entire design stack, training engineers on Intel Foundry’s PDKs, and accepting the supply-chain risk of a foundry with one leading-edge fab versus TSMC’s multi-fab, multi-geography capacity.
Intel Foundry’s $10.318 billion operating loss in 2025 on $17.826 billion revenue reflects the capital cost of building the manufacturing infrastructure. It does not yet reflect meaningful external customer scale. Breakeven is not projected until 2027 at the earliest.
Tan’s decision to gate the 14A node ramp — including the Ohio fab investment — on confirmed customer commitments is a direct acknowledgement of this problem. Intel will no longer build capacity ahead of demand. The 14A roadmap, with risk production in 2028 and volume production in 2029, is contingent on customer commitments expected to materialise in the second half of 2026 and into 2027. Two unnamed customers are evaluating 14A test chips; reported evaluations include Tesla for an AI chip project and Apple for potential A-series or M-series production at the 2029 horizon.
Where 18A Stands
- Intel 18A entered high-volume production in January 2026 at Fab 52, Arizona — ahead of TSMC N2 on backside power delivery by 6–12 months.
- Yields are in the 55–65% range, improving faster than expected per Q1 2026 earnings; cost-level yield targets expected by end of 2026.
- External foundry revenue was $174 million in Q1 2026, against $5.4 billion total foundry revenue — the commercial gap is large.
- Confirmed external engagements: Pentagon Secure Enclave ($3B programme, defence prototyping). Reported but unconfirmed: Microsoft Maia AI accelerator on 18A/18A-P.
- CEO Lip-Bu Tan has gated the 14A ramp on customer commitments — a structural change from Intel’s historical “build and they will come” approach.
What to Watch
The Microsoft Maia confirmation. If Microsoft discloses an Intel Foundry manufacturing relationship in an Azure infrastructure announcement or in Intel’s earnings guidance, that shifts the conversation from “reported” to “confirmed.” A disclosed volume commitment would be the clearest external validation that 18A is genuinely competitive for AI accelerator designs.
18A yield trajectory through H2 2026. Intel’s own cost-level yield targets are projected to be met by the end of 2026. Tom’s Hardware’s earlier analysis suggested industry-standard yield levels by 2027. The delta between those two timelines matters: a node at commercial cost-level yields is one external customers can commit wafer volumes to; a node still burning through yield learning is not. Quarterly earnings will be the primary readout.
TSMC N2 external customer announcements. TSMC’s N2 is expected to enter production for external customers in H2 2026. If Apple, Nvidia, or a major hyperscaler confirms N2 production this year, it narrows the window in which Intel’s backside-power timing advantage is commercially relevant.
The 14A commitment gate. Tan has been explicit: Intel will not build Ohio at full scale without customer commitments. The H2 2026–H1 2027 window for those decisions is also the window in which Intel Foundry either demonstrates that 18A external revenue is growing or confirms that the merchant foundry thesis requires a longer timeline. Watch Intel’s Q2 and Q3 2026 earnings calls for any language on “design wins” or “confirmed customer engagements” — those are the leading indicators, not the quarterly revenue figures.
Intel has built a node that works. It has produced chips at scale. It has beaten TSMC to market on backside power delivery. The harder problem — convincing customers who have spent decades optimising for TSMC to absorb the cost and risk of switching — is not a technical problem. It is a commercial and ecosystem one. And in Q1 2026, $174 million in external revenue against $5.4 billion in total foundry sales is what that problem looks like in dollar terms.
This article was produced with AI assistance and reviewed by the editorial team.