By People's Voice Editorial · Deep Dive · April 30, 2026, 5:36 PM

OpenAI Says Stargate Has Cleared 10 Gigawatt U.S. Compute Goal Years Early

Photo by Carl Lender via Wikimedia Commons (CC BY)


The frontier AI race is now an industrial buildout: chips, power, permits, and skilled trades.

ABILENE, Texas - OpenAI said this week that its Stargate AI infrastructure program has surpassed the 10-gigawatt U.S. capacity target it set in January 2025, clearing the milestone more than two years ahead of its 2029 deadline. The company said more than 3 gigawatts of that capacity was added in the last 90 days alone.

Server racks inside a hyperscale data center hall. Photo by Carl Lender via Wikimedia Commons (CC BY)

The announcement reframes frontier artificial intelligence as a physical industry. Building stronger models no longer reduces to writing better code. It now depends on gigawatt-scale electrical interconnections, NVIDIA accelerator allocations, water-stewardship plans, county-level permits, and skilled-trades workforce pipelines. OpenAI said the buildout requires coordination across local communities, utilities, energy providers, chipmakers, cloud providers, neoclouds, construction firms, investors, and public-sector partners.

For the United States, the stakes extend past commercial product launches. Compute capacity inside U.S. borders increasingly determines where the next generation of frontier models is trained, where the resulting jobs land, and which national jurisdiction sets the operating rules for advanced AI systems.

The Story So Far

OpenAI announced Stargate in January 2025 as a multi-year commitment to develop AI compute capacity in the United States, with an initial target of 10 gigawatts secured by 2029. The flagship site is in Abilene, Texas. Subsequent updates added five new U.S. sites, an expanded Oracle partnership covering 4.5 gigawatts of additional capacity, and a Wisconsin campus developed with Vantage Data Centers and Oracle.

In its latest update, OpenAI wrote:

"When we announced Stargate in January 2025, we committed to securing 10GW of AI infrastructure in the United States by 2029. Just over a year later, we have already surpassed that milestone, with more than 3GW added in the last 90 days alone, as demand for AI continues to accelerate."

OpenAI

The 10-gigawatt figure refers to capacity OpenAI says it has secured, a total that spans planned, under-development, and operational stages. The company has not published a site-by-site breakdown of how much of that capacity is already powered on and serving training or inference workloads.

In a separate prior update on the Oracle expansion, OpenAI wrote:

"Oracle and OpenAI have entered an agreement to develop 4.5 gigawatts of additional Stargate data center capacity in the U.S. This investment will create new jobs, accelerate America's reindustrialization, and help advance U.S. AI leadership."

OpenAI

In its five-new-sites update, the company said the combined capacity from those sites, the Abilene flagship, and ongoing projects with CoreWeave brings Stargate to nearly 7 gigawatts of planned capacity and over $400 billion in investment over the next three years.

What's Happening Now

OpenAI tied the infrastructure milestone directly to a model claim. The company said GPT-5.5, which it described as its latest and smartest model, was trained at the Abilene flagship Stargate site, which operates on Oracle Cloud Infrastructure and runs NVIDIA GB200 systems.

"Our latest and smartest model yet, GPT-5.5, was trained at our flagship Stargate site in Abilene, Texas. The site operates on Oracle Cloud Infrastructure and runs NVIDIA GB200 systems."

OpenAI

That sentence connects four layers of the AI stack into one supply chain. NVIDIA's GB200 platform supplies the accelerators. Oracle Cloud Infrastructure supplies the cloud orchestration. The Abilene site supplies the physical building, power, and cooling. OpenAI supplies the training runs and the resulting model weights. The Stargate buildout is the connective tissue.

Exterior view of a hyperscale data center facility. Photo by Rsparks3 via Wikimedia Commons (public domain)

OpenAI also said the Abilene site uses closed-loop cooling rather than traditional evaporative cooling towers. The company said the one-time initial fill for each building is roughly the volume of two Olympic-sized swimming pools, and that annual cooling-system water use at full buildout is expected to be comparable to that of a medium-sized office building, or about four average households. Those figures are OpenAI's claims, not third-party measurements.

On the Wisconsin campus, Vantage Data Centers said:

"The campus will feature four cutting-edge data centers providing close to a gigawatt of AI capacity. Construction will begin soon and is scheduled for completion in 2028."

Vantage Data Centers

OpenAI said it launched a community engagement program with a donation to the Port Washington-Saukville Education Foundation in Wisconsin alongside Vantage and Oracle. Vantage said its Wisconsin campus plan includes a dedicated electricity rate, zero-emission energy matching, and infrastructure upgrades intended to avoid raising rates for other customers. Those are claims from Vantage, not independently verified outcomes.

The Conservative View

For conservatives, Stargate fits a long-running argument that American industrial capacity, not Silicon Valley software alone, is what keeps the United States ahead of strategic competitors. A 10-gigawatt domestic compute buildout means work for American electricians, equipment operators, concrete crews, utility planners, and steel suppliers rather than offshore workforces. OpenAI's framing of Stargate as accelerating "America's reindustrialization" maps onto a policy worldview that prizes physical investment on American soil over offshored manufacturing.

The Oracle partnership and the GB200 hardware also keep the AI training stack inside U.S.-headquartered companies. Conservatives have argued for years that semiconductor sovereignty, including U.S. fab buildouts under the CHIPS and Science Act and U.S. Bureau of Industry and Security export controls on advanced chips to China, is a national-security necessity. A flagship frontier model trained on American power and American silicon supports that argument with a specific, dated example.

The cautionary note from this side is local. Gigawatt-scale campuses change county property tax bases, water tables, transmission planning, and traffic patterns. Conservatives skeptical of unaccountable corporate scale will want county commissioners and state utility regulators making the final calls on rate structures, water permits, and zoning, not deals negotiated in private between hyperscalers and utility executives.

The Progressive View

Progressives focus on the externalities. Gigawatt-scale AI campuses pull electrical load on a scale comparable to mid-sized cities. Even with Vantage's stated commitments to dedicated rates and zero-emission energy matching in Wisconsin, the broader question is whether the U.S. grid can absorb tens of gigawatts of new AI demand in a few years without raising costs for residential ratepayers or extending the operating life of fossil-fuel generation.

OpenAI's water claims, while modest in framing, also invite scrutiny. The company said annual cooling water use at the Abilene buildout will be comparable to four average households. That claim covers the closed-loop cooling system itself but does not necessarily include water associated with electricity generation feeding the site, which depends on the upstream generation mix.

NVIDIA GB200 server hardware on display at COMPUTEX 2024. Photo by Geekerwan via Wikimedia Commons (CC BY)

Progressives will also press on labor. Construction phases bring jobs, but operating phases at hyperscale data centers are notoriously thin on permanent headcount per megawatt. The question is whether community benefits agreements, project labor agreements, and apprenticeship pipelines are enforceable, not just rhetorical.

Other Perspectives

National-security analysts treat domestic compute capacity as a strategic asset on par with energy reserves and shipyard tonnage. From that lens, the central data point is not GPT-5.5 itself but the speed of the buildout: more than 3 gigawatts in 90 days, on U.S. soil, integrated with U.S. cloud infrastructure and U.S.-designed accelerators.

Capital-markets analysts treat the announcement as a signal about hyperscaler capital expenditure trajectories, NVIDIA accelerator demand, Oracle Cloud Infrastructure revenue, and electricity-sector capital planning. The over-$400 billion three-year investment figure OpenAI cited for the seven-gigawatt Stargate footprint, taken at face value, implies sustained high spending across construction, hardware, and energy infrastructure.

Skeptics inside the AI research community note that compute is necessary but not sufficient. Larger training runs improve model capability up to a point, but data quality, post-training methods, evaluation rigor, and deployment design all shape whether new models actually deliver claimed gains. OpenAI's framing of GPT-5.5 as "our latest and smartest model" is a company claim, and rigorous third-party evaluation typically takes months.

Economic Implications

OpenAI cited over $400 billion of Stargate-related investment over three years across nearly seven gigawatts of planned capacity. That implies a capital intensity well above $50 billion per gigawatt once data centers, hardware, power, and ancillary infrastructure are combined, though OpenAI did not publish a line-item breakdown.
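The per-gigawatt figure is back-of-the-envelope arithmetic on the totals OpenAI cited publicly; the division is this article's inference, not a company disclosure. A minimal sketch of the calculation:

```python
# Back-of-the-envelope check of the implied Stargate capital intensity.
# Inputs are the totals OpenAI cited; the result is this article's inference.
investment_usd_bn = 400     # "over $400 billion" over three years (lower bound)
planned_capacity_gw = 7     # "nearly 7 gigawatts" of planned capacity

capital_intensity = investment_usd_bn / planned_capacity_gw
print(f"Implied capital intensity: ${capital_intensity:.1f}B per gigawatt (lower bound)")
# → Implied capital intensity: $57.1B per gigawatt (lower bound)
```

Because both inputs are rounded lower bounds ("over" $400 billion, "nearly" 7 gigawatts), the true figure could sit above or below $57 billion per gigawatt, but it comfortably clears the $50 billion threshold cited above.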

The capital expenditure breaks into roughly four buckets:

  1. Hardware: NVIDIA GB200 systems, networking, storage, and racks. NVIDIA has disclosed in recent quarterly filings that data center revenue dominates its overall business, with hyperscaler spending as the primary driver.

  2. Real estate and construction: Land, shells, electrical rooms, cooling plants, and security. These are union-trades-heavy phases that flow into local construction and supply economies.

  3. Power and transmission: Substations, on-site generation where applicable, transmission upgrades, and long-term power purchase agreements. Vantage said its Wisconsin campus plan includes infrastructure upgrades intended to avoid raising rates for other customers.

  4. Operations: Permanent technical staff, security, maintenance contracts, and water and utility services.

For semiconductor sovereignty, the supply chain matters as much as the dollar figure. NVIDIA designs the GB200 in the United States, but advanced packaging and leading-edge fabrication still depend heavily on Taiwan Semiconductor Manufacturing Company. The CHIPS and Science Act subsidies for U.S. fab capacity, and BIS export controls limiting advanced AI chip shipments to China, are the policy instruments aimed at narrowing that exposure. Stargate's pace pulls forward the demand for both.

For energy, gigawatt-scale AI campuses are moving the U.S. grid toward a new structural load. Where that power comes from, who pays for the upgrades, and how interconnection queues are managed will be decided at state public utility commissions and regional transmission organizations, not in Silicon Valley.

For labor, the construction phase is the headline jobs story. The operating phase concentrates fewer, more technical roles per megawatt, which is why community benefits, apprenticeship programs, and local procurement commitments tend to dominate negotiations with counties and school districts.

By the Numbers

  • 10 gigawatts: Stargate U.S. capacity OpenAI says has been secured, ahead of the 2029 target.
  • 3 gigawatts: Capacity OpenAI says was added in the last 90 days.
  • 4.5 gigawatts: Additional U.S. data center capacity OpenAI said its Oracle partnership will develop.
  • ~7 gigawatts: Combined planned capacity OpenAI cited across the five new sites, Abilene, and CoreWeave projects.
  • Over $400 billion: Stargate-related investment OpenAI cited over the next three years.
  • Close to 1 gigawatt: AI capacity Vantage said its Port Washington, Wisconsin campus will provide across four data centers.
  • 2028: Year Vantage said its Wisconsin campus is scheduled for completion.
  • GPT-5.5: Model OpenAI said was trained at the Abilene flagship site on Oracle Cloud Infrastructure using NVIDIA GB200 systems.
  • ~2 Olympic pools: One-time initial cooling-water fill OpenAI cited for each Abilene building.
  • ~4 average households: Annual cooling-system water use OpenAI cited for the Abilene full buildout.

What People Are Saying

OpenAI on the central role of compute:

"Compute is the critical input that makes advanced AI possible. It is what allows us to train better models, serve them reliably, improve performance, lower costs over time, and bring more powerful tools to more people."

OpenAI

OpenAI on the operational difficulty of the buildout:

"These projects are complex, and they require the right combination of power, land, permitting, transmission, workforce, community support, and partner readiness."

OpenAI

OpenAI on the strategic framing of the Oracle expansion:

"This investment will create new jobs, accelerate America's reindustrialization, and help advance U.S. AI leadership."

OpenAI

Vantage Data Centers on the Wisconsin campus:

"The campus will feature four cutting-edge data centers providing close to a gigawatt of AI capacity. Construction will begin soon and is scheduled for completion in 2028."

Vantage Data Centers

Skyline of Abilene, Texas, the location of OpenAI's flagship Stargate site. Photo by Michael Barera via Wikimedia Commons (CC BY)

The Big Picture

The Stargate update marks the point at which frontier AI in the United States stops being a software story and becomes an industrial one. Land acquisition, county permits, transmission queues, electrical apprenticeship pipelines, and accelerator allocations are now the binding constraints on model capability.

That shift has three consequences worth tracking. First, U.S. AI leadership now runs through utility commissions and county commissions, not just through research labs. Second, semiconductor sovereignty is no longer an abstract slogan; it is a per-gigawatt demand signal flowing through CHIPS Act-funded fabs and BIS-regulated export controls. Third, the political economy of AI is moving from coastal tech debates to interior counties hosting gigawatt campuses, and from venture funding rounds to public utility rate cases.

OpenAI's claim of clearing the 10-gigawatt mark years early should be treated as exactly that: a claim. Independent verification will come from grid interconnection filings, county permit records, NVIDIA shipment data, Oracle Cloud Infrastructure revenue disclosures in earnings reports, and on-the-ground reporting from places like Abilene and Port Washington. The headline number is the start of the audit, not the end of it.

Either way, the direction is set. American AI capability is now being poured in concrete, copper, and silicon on American soil. Whether the United States stays ahead of China in frontier AI will be decided by how fast that pour continues, and whether the local communities hosting it come out ahead.