AI Infrastructure Supercycle · April 2026 update

The Tectonic Shift
in Compute.

Eighteen months on from the original thesis, the shape has sharpened, not softened. Three trillion dollars of committed hyperscaler capital expenditure against a thirty-seven billion dollar application-layer revenue pool.1 Depreciation schedules stretched out while product cycles collapse inward.2 An industry bending around a single company losing nine billion dollars a year. This is my reading of where the capital is going, what is being booked as income that is not income, and how the positioning has to change when the thesis works too well.

I · Macro

Three trillion dollars of spend against a thirty-seven billion revenue pool.

The five horsemen have committed to over three trillion dollars of data-centre capital expenditure across the next three years. That is more than twice their combined operating cash flow. Application-layer generative AI revenue for the full year is thirty-seven billion dollars.1 Even a linear extrapolation struggles to reach one hundred billion by 2027. The buildout is not responding to demand. It is responding to the cost of losing the decade.

~$3T · Hyperscaler capex, three-year commitment
$37B · Application-layer GenAI revenue, full year
>$500B · Nvidia Blackwell + Rubin backlog
~14% · Yield on GPU-backed junk credit
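The arithmetic behind that mismatch can be sketched directly: what the committed capex implies as an annual straight-line depreciation charge against the current application-layer revenue pool. The useful-life range (3, 6 and 15 years) is an illustrative assumption, not a reported figure:

```python
# Back-of-envelope: annual straight-line depreciation implied by the
# committed capex, versus the application-layer revenue pool.
# The useful-life range (3/6/15 years) is an illustrative assumption.

CAPEX_COMMITTED = 3_000e9  # ~$3T hyperscaler capex, three-year commitment
APP_REVENUE = 37e9         # $37B application-layer GenAI revenue, full year

for life_years in (3, 6, 15):
    annual_dep = CAPEX_COMMITTED / life_years  # straight-line, zero salvage
    cover = APP_REVENUE / annual_dep           # revenue / depreciation charge
    print(f"{life_years:>2}-year life: ${annual_dep / 1e9:,.0f}B/yr depreciation, "
          f"covered {cover:.3f}x by current app revenue")
```

Even on a generous fifteen-year life, the implied charge is roughly two hundred billion dollars a year against a thirty-seven billion dollar pool.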

2025 CapEx estimates

Four companies, one direction. The prisoner's dilemma priced in.

Alphabet · $93B
Amazon · $73B
Microsoft · $65B
Meta · $65B

Silicon is not enough. It needs electrons.

A chip sale books revenue on delivery. An AI factory only books revenue when the data centre is energised. Grid queues in the core US markets stretch multi-year. A fifteen-year depreciation life on data-centre buildings sits alongside a two-year chip obsolescence cycle. The valuation premium belongs to entities that have secured both silicon and electrons. Everything else is inventory dressed as infrastructure.

II · The Hegemon

Software margins on hardware.
An inventory book the size of a year's cash flow.

Nvidia's third fiscal quarter of 2026 does not look like a semiconductor quarter. Gross margin sits between seventy-three and seventy-five per cent. Free cash flow in a single quarter is twenty-two billion dollars. The backlog stretches past five hundred billion. But read the filings past the income statement and a different picture emerges. Non-cancellable purchase obligations have moved from sixteen billion dollars to ninety-five. Total supply commitments run to one hundred and seventeen billion, within arm's reach of a full year's operating cash flow.3 This is Cisco in 2000 with better optics.4

F3Q26 revenue · $57.0B · +62.5% year-on-year. Consensus beat.
Free cash flow · $22.0B · Single quarter. Fortress balance sheet.
Gross margin · 73–75% · Excess-demand pricing. Reverts fast.
Supply obligations · $117B · Nearly a full year's operating cash flow locked in.

Purchase obligations moved from sixteen to ninety-five billion in one year.

TSMC demanded longer-term non-cancellable contracts and upfront cash to build custom fab and packaging capacity. That is structural, not temporary. The days-inventory-outstanding line is extending permanently. Cisco did the same thing in 2001, locking in capacity for a fifty per cent growth rate that never arrived. It wrote down roughly forty per cent of its supply-chain obligations the following year. Lucent, JDS Uniphase and Intel all ran the same play, all with the same result.3

Vendor financing dressed as demand.

Nvidia takes an equity stake in an AI customer. The customer raises GPU-backed debt at fourteen per cent. Proceeds are used to buy Nvidia chips. Nvidia books the revenue; lenders hold the depreciation risk. Rinse. Oracle's three hundred billion dollar supercloud deal. Microsoft's Azure routing of OpenAI workloads back into the anchor tenant. Private credit pouring into data-centre SPVs dressed as operating companies. The DOJ has been issuing subpoenas into these arrangements since September 2024.

Customer concentration

Four customers. Sixty-one per cent of revenue.

Customer A · 22%
Customer B · 15%
Customer C · 13%
Customer D · 11%

Nvidia is a derivative on four boardrooms' capital-allocation decisions. The thesis holds while those four plans hold. The implication is not that the four will change their minds. The implication is that the four can be made to change their minds by a single tail event in Taiwan, a single shock to private credit, or a single earnings print that admits the backlog was double-counted.

III · First principles

Language before reason.
The whole paradigm is upside down.

The trillion-dollar buildout rests on a single unexamined assumption: that larger models, trained on more tokens, running on more chips, converge asymptotically on intelligence. That assumption inverts the order in which cognition actually works. Reason comes first. Language is the wrapping you put on reason so it can travel between people. Large language models do the opposite. They put language first and try to bootstrap reason out of statistical regularities in tokens. This is not a tuning problem to be fixed by the next generation. It is a foundational architectural error, and every additional dollar of capital expenditure compounds it.6

The parameter trap.

Brute-force language processing, simulated by ever-larger stacks of ever-hotter chips, produces an increasingly sophisticated mirror of human linguistic output. It does not produce understanding. Understanding is the capacity to reason in the absence of language, and reason has to exist first for language to be layered on top of it coherently. When the architecture is built the other way around, hallucination is not a bug. Hallucination is the feature doing exactly what it was designed to do, generating plausible next-token sequences at the ragged edges of the training distribution. The system lives permanently in the middle of that distribution. It is able to mimic reason because the humans who wrote the training corpus were reasoning. It is not itself reasoning.

The working test case is nearly a century and a half old. In 1880 the Smithsonian documented a deaf man reasoning about mortality, causation and cosmology before he had acquired a single word. Reason demonstrated in the absence of language. That capacity is precisely what the current paradigm cannot build and will not buy with three trillion dollars of graphics processors.

An entity does not possess the capacity for understanding until reason is demonstrated in the absence of language.6

What this does to the valuation.

Every model release is a bet that the next trillion parameters delivers the step change in capability the last ten did not. Scaling-law curves are visibly flattening. The data wall is real. The energy ceiling is real. The cost per useful inference is moving in the wrong direction as capability gains slow. Competing reason-first approaches (System Two compression, neural-symbolic hybrids, AlphaGeometry-style integrations of search with language) do not require the three-trillion-dollar buildout to succeed. They need a reason to work. If any one of them lands inside the current capital-expenditure cycle, the scaling myth underwriting Nvidia, the hyperscalers and OpenAI is not compressed by fifty per cent. It is redundant.

IV · The gravity well

The industry is bending around a business losing nine billion dollars a year.

OpenAI has pledged to spend roughly one point four trillion dollars over eight years against ten to fifteen billion of revenue and nine to fourteen billion of losses. It is valued at around five hundred billion dollars. Nine hundred million weekly users of ChatGPT, five per cent of whom are willing to pay.1 The global software market is not one trillion dollars. The entire commitment schedule sits on top of two strategic backstops. Without them, it does not clear.

Committed spend · ~$1.4T · Over eight years. Against roughly fifteen billion of revenue.
Pay conversion · 5% · Of nine hundred million weekly users. The rest are a cost centre.
True agents · 16–27% · Of enterprise and start-up AI deployments. The remainder is prompt engineering.

The commitment stack

Oracle "Supercloud" · $300 billion
Microsoft Azure extension · $250 billion
AWS commitment · $38 billion

Three agreements. No other comparable customer. This is a commitment schedule, not a diversified revenue base. Oracle alone is borrowing twenty-five billion to fund the deal; Meta is borrowing thirty; Alphabet is lining up fifteen. Borrowing to fund capex, not buybacks. The buyback pillar that anchored the whole complex is already cracking: hyperscaler combined buybacks collapsed seventy-four per cent year-on-year in the final quarter of 2025.1

V · Depreciation

Accelerating obsolescence.
Decelerating depreciation.

Nvidia ships Hopper, Blackwell, Rubin and Rubin Ultra in twelve-month increments. Vera Rubin claims, at launch, five times the inference throughput of Blackwell, one year on. In the same window, every major hyperscaler has stretched the reported useful life of its servers and GPUs from three years in 2020 to five and a half or six years in 2025.2 Physics and accounting cannot both be right. One of them is fraud, and it is not physics.

Company   | 2020 life | 2025 life          | Implication
Meta      | 3 years   | 5.5 years          | Reported earnings overstated across 2024–27.
Alphabet  | 3 years   | 6 years            | $3.9B added to 2023 pre-tax income on the schedule change alone.
Microsoft | 3 years   | 6 years            | Nadella has publicly disclaimed the schedule he continues to use.
Amazon    | 4 years   | 5 years (cut back) | Cited "increased pace of technology development." The only honest entry.
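The earnings mechanics behind those schedule changes are straight-line arithmetic. A minimal sketch: the $3.9 billion Alphabet figure above is the reported effect; the $100 billion asset base here is a hypothetical round number for illustration only.

```python
# A minimal sketch of why stretching useful life flatters earnings.
# Straight-line depreciation: annual expense = cost / useful life.
# The asset base is a hypothetical round number, not a reported figure.

def annual_depreciation(cost: float, life_years: float) -> float:
    """Straight-line depreciation with zero salvage value."""
    return cost / life_years

asset_base = 100e9  # hypothetical $100B of servers and GPUs

old = annual_depreciation(asset_base, 3)  # 2020 schedule
new = annual_depreciation(asset_base, 6)  # 2025 schedule
print(f"3-year life: ${old / 1e9:.1f}B/yr expense")
print(f"6-year life: ${new / 1e9:.1f}B/yr expense")
print(f"Pre-tax income boost from the change: ${(old - new) / 1e9:.1f}B/yr")
```

Doubling the life halves the annual charge, and the difference drops straight into pre-tax income with no change in cash flow. That is the whole manoeuvre.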

Construction-in-progress does not depreciate.

Alphabet carries fifty-one billion dollars of construction-in-progress on its balance sheet. Meta twenty-seven. Amazon twenty-nine. Oracle seventeen. CoreWeave nearly seven. A data centre does not begin depreciating until it is placed in service. A slowdown in utilisation can be mothballed in plain sight simply by slow-rolling the date of commissioning. Microsoft does not break CIP out at all. That is itself a disclosure.2

Baidu already ran the playbook.

November 2025. Baidu wrote down eleven point two billion renminbi of fixed assets, more than a third of the total fixed-asset base. The CFO cited chips that "no longer meet today's computing efficiency requirements." Baidu had previously extended useful life from five to six years, which had added one point two billion to reported net income out of a total three point three billion. Same playbook. Same outcome. WorldCom ran the same manoeuvre in 2002 and went from investment grade to bankrupt overnight.2

A six-year depreciation life fails any common-sense test against chips whose successor is three generations away inside three years.2

VI · Neoclouds

Two balance sheets. One thesis. Only one survives being wrong.

Both companies tell themselves the same story about AI infrastructure demand. One is financing it with junk-rated debt at fourteen per cent. The other was financing it with net cash and a seventeen billion dollar Microsoft anchor, and was sold into strength on 15 April 2026 after delivering a one hundred and thirty per cent return in nine months. The thesis might be right for both. Survivability was never equal.

Levered beta

CoreWeave

A brilliant operating model strapped to a ticking time bomb of debt. Effectively a leveraged project-finance vehicle dressed as an operating business.

Debt load · $1.5B → ~$14.0B
Cost of capital · ~14% (junk)
Logic · Priced for perfection. Equity paid last.

Trade closed · +130%

Nebius

A restructuring story with a fortress balance sheet, Microsoft validation, and a pre-set price target. The target hit on 15 April 2026. Goldman Sachs raised its target to two hundred and five dollars on the same day the position was closed at one hundred and sixty. That is the discipline that matters.

Entry → Exit · ~$70 → $160
Return over hold · +130% in 9 months
Forensic tail · $8.2B convertible stack. Adverse ICFR opinion.

The circular financing loop

01 · Nvidia takes an equity stake in a Neocloud operator.
02 · That operator issues GPU-backed debt at roughly fourteen per cent.
03 · Proceeds are used to buy GPUs from Nvidia.
04 · Nvidia books revenue. Lenders hold the depreciation risk.

Vendor-financed growth works until credit reprices, or silicon obsolescence accelerates. Both conditions are rising in probability. Meta has quietly built an escape clause into a recent data-centre build that triggers after four years. That is a principal admitting to itself what the accounting will not.
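The four-step loop above can be sketched as a simple cash ledger. All amounts are hypothetical round numbers; only the roughly fourteen per cent coupon comes from the figures cited above.

```python
# The circular financing loop as a cash ledger. Amounts are hypothetical
# round numbers; the ~14% coupon is the figure cited in the text.

equity_stake = 1e9  # 1. vendor takes an equity stake in the operator
debt_raised = 5e9   # 2. operator issues GPU-backed debt
coupon = 0.14       #    ...at roughly 14%
chip_order = equity_stake + debt_raised  # 3. proceeds buy chips

vendor_revenue_booked = chip_order       # 4. vendor books the full sale
vendor_net_cash_in = chip_order - equity_stake  # net of the stake it funded
operator_annual_interest = debt_raised * coupon  # carried by the lenders' borrower

print(f"vendor revenue booked:  ${vendor_revenue_booked / 1e9:.1f}B")
print(f"vendor net cash in:     ${vendor_net_cash_in / 1e9:.1f}B")
print(f"operator interest bill: ${operator_annual_interest / 1e9:.2f}B/yr")
```

The vendor recognises the full order as revenue while part of the cash is its own equity round-tripped; the depreciation and credit risk sit with the operator and its lenders.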

VII · Structural risks

Five-year debt against two-year silicon.

The edifice holds only if GPUs are long-dated assets. They are not. Debt maturities sit at five to seven years. Economic useful life for a given chip generation is closer to two to three before the next architecture reprices the installed base. The mismatch has a name. It is the same one that took subprime from AAA to zero inside eighteen months.

The duration mismatch

Debt maturity · 5–7 years · Fixed obligation. Fixed schedule. Fixed rate.
Asset economic life · 2–3 years · Hopper → Blackwell → Rubin → Rubin Ultra.
Result · Melting collateral · Under-water assets against unchanged obligations.
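The melting-collateral result can be made concrete with a toy model: an interest-only loan written against a GPU whose value decays straight-line over a three-year economic life, with principal due at a six-year maturity. The eighty per cent initial loan-to-value, the decay schedule and the maturity midpoint are all assumptions for illustration.

```python
# Toy model of the duration mismatch: a GPU-backed, interest-only loan
# against collateral on a short economic life. Initial LTV, decay
# schedule and maturity are illustrative assumptions.

principal = 1.0              # loan principal, normalised
gpu_value = principal / 0.8  # assume the loan was written at 80% LTV
econ_life = 3.0              # years before the chip generation reprices to zero
maturity = 6                 # mid-point of the 5-7-year debt schedule

for year in range(maturity + 1):
    value = gpu_value * max(0.0, 1 - year / econ_life)  # straight-line decay
    if value > 0:
        print(f"year {year}: collateral {value:.2f}x principal "
              f"(LTV {principal / value:.0%})")
    else:
        print(f"year {year}: collateral exhausted, "
              f"{maturity - year} year(s) of obligations remain")
```

Under these assumptions the loan is roughly 2.4 times the collateral value by year two, and from year three onward the lender is effectively unsecured with years of fixed obligations still outstanding.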

Blackwell thermal reality

Blackwell draws forty-three to seventy-one per cent more power per package than Hopper. Documented overheating in early production forced Microsoft, Amazon, Alphabet and Meta to pull rack orders in January 2025. Nvidia backfilled with last-generation product.2

First-year GPU failure

Meta's internal 2024 H100 study recorded a nine per cent first-year failure rate; faulty GPUs and memory accounted for roughly half. Oak Ridge's Titan logged mean-time-between-failures degrading twelve-fold after two to three years of service. That is a depreciation schedule.2

The innovation trap

A five-year-old H100 in 2028 is economically irrelevant. If lease rates collapse as Blackwell supply scales, operators cannot service fixed debt. A hardware price cut from Nvidia becomes a credit event for every leveraged operator downstream.

Warranty exclusion

Nvidia's consumer warranty explicitly excludes "large-scale datacenter use or GPU cluster commercial deployments." That language is not drafting sloppiness. It is a liability disclosure that has not yet met a large-scale failure event.2

VIII · Geopolitics

The entire buildout runs through one island.

Three trillion dollars of AI infrastructure spend over three years. Every leading-edge chip in that stack is fabricated in Taiwan. A thirteen-step escalation framework for great-power conflict currently sits at step nine, the stage described as multi-theatre conflicts increasingly happening simultaneously.5 This is not an abstract geopolitical concern. It is a supply-chain concentration risk priced at zero.

Current stage · Step 9 of 13 · Pre-fighting to fighting. Analogous to 1913–14 and 1938–39.
Taiwan conflict, five years · 30–40% · Peak-risk window centres on 2028.
Any major conflict, five years · >50% · Ukraine, North Korea, South China Sea, Taiwan. Aggregate.
US base footprint · 750–800 · Across seventy to eighty countries. China has one.5

TSMC is the single point of failure.

The entire thesis priced into Nvidia, Alphabet, Meta, Microsoft, Amazon and Oracle assumes uninterrupted access to Taiwan's foundry capacity. There is no substitute within the five-year horizon these stocks are priced on. A single incident that pushes the probability curve forward by twelve months is not a ten per cent revaluation. It is a multiple reset across the entire AI complex. The honest position is not to predict the date. It is to hold the percentage of cash that can be deployed into the other side of that reset.

IX · Scenarios

Three paths from here.

The honest thing to do with a thesis this large is to say out loud what each outcome looks like. The supercycle keeps running. The market digests in 2026. The application layer fails to deliver. All three are live. Positioning is about which you size for, and which you merely survive.

01

Supercycle

Upside anchored by backlog and power scarcity through 2027.

  • Nvidia: two hundred and fifty to two hundred and seventy-five dollar objective. Backlog intact. Margin intact.
  • Hardware infrastructure theme delivers: optical, memory, testing and specialty compute re-rate.
  • CoreWeave: equity survives only if spot GPU rates stay firm.

02

Digestion

Hyperscalers pause to optimise utilisation in 2026.

  • Nvidia: growth slows to twenty to thirty per cent. Multiple drifts toward twenty-five times. Cash remains strong.
  • Depreciation extensions start being rolled back. Hidden overstatement is written off quarter by quarter.
  • CoreWeave: debt service tightens. Leverage magnifies any utilisation dip.

03

Winter

Application-layer ROI disappoints. Or Taiwan shifts the timetable.

  • Nvidia: Cisco-moment risk. A fifty per cent drawdown as scarcity premium fades and Taiwan risk re-prices.
  • Hyperscalers write down CIP and extended-life assets. Earnings revised down for six to eight quarters.
  • CoreWeave: restructuring. Equity impaired. Melting collateral meets fixed debt.

About

Etienne Chen.

I run a long-only equity portfolio for my family. The book is concentrated in US-listed businesses at cash-flow inflection points, biased towards smaller names where institutional coverage is thin and the narrative is still catching up to the numbers.

What I look for is straightforward. Accelerating free cash flow. Honest capital allocation. A moat that fits in one sentence. A price that does not require the future to be generous. The writing above is my own synthesis, updated whenever the facts move enough to change the trade. I use it to pressure-test the book I run.


Sources

  1. Michael J. Burry, “Unicorns and Cockroaches,” Cassandra Unchained. Cited for the three trillion dollar hyperscaler commitment, the thirty-seven billion dollar application-layer revenue estimate, OpenAI’s $1.4 trillion spend pledge, and hyperscaler buyback contraction data.
  2. Michael J. Burry, “The Blessed Fraud Recurrence,” Cassandra Unchained. Cited for useful-life extensions at Meta, Alphabet, Microsoft and Amazon, Alphabet’s $3.9 billion 2023 pre-tax boost, construction-in-progress balances, Baidu’s RMB 11.2 billion impairment, WorldCom parallel, Nadella depreciation quote, Blackwell thermals, and Meta / Oak Ridge GPU failure data.
  3. Michael J. Burry, “Short Thought: Nvidia Ratchets Up the Risk,” Cassandra Unchained. Cited for Nvidia’s $95.2 billion non-cancellable purchase obligations, $117 billion total supply commitments, the TSMC capacity dynamic, and the Lucent / Intel / JDS Uniphase precedents.
  4. Michael J. Burry, “The Supply-Side Gluttony Recurrence,” Cassandra Unchained. Cited for the Cisco 2000 parallel, the capital-cycle framing, and the fibre-buildout rhyme.
  5. Ray Dalio, “The Big Thing: We Are In A World War That Isn’t Going To End Anytime Soon.” Cited for the thirteen-step escalation framework, the step-nine diagnosis, Taiwan conflict probability estimates over the next five years, the aggregate >50% probability of at least one major conflict, and the US military base footprint comparison.
  6. Michael J. Burry, “History Rhymes: Large Language Models Off to a Bad Start,” Cassandra Unchained. Cited for the architectural critique of the LLM paradigm, the parameter-trap framing, the 1880 Smithsonian case study, and the test that understanding requires reason demonstrated in the absence of language.

All market data and position-level commentary reflect my own synthesis and analysis. The citations above acknowledge specific factual claims drawn from the named publications; the framing, interpretation and positioning are mine.