There is a 24-year-old who once worked inside OpenAI’s Superalignment team, was fired after, by his own account, raising security concerns, published Situational Awareness, a 165-page essay arguing AGI could arrive by 2027, and then quietly built one of the most aggressive hedge funds in AI. His name is Leopold Aschenbrenner. And his thesis is one of the clearest pieces of strategic thinking I have encountered in years.
Not because he predicted AI would need a lot of electricity. That is not the insight. The insight is in the specificity of the constraint.
You can order 100,000 GPUs and take delivery in six months. You cannot add 500 megawatts to the grid in six months. Grid interconnection takes two to three years. These are not bottlenecks you can outspend. You cannot throw a Stripe cheque at a transformer shortage and make it go away.
That asymmetry is the trade. Leopold saw it from the inside. He did not read about AI infrastructure. He built it, alongside the people trying to align superintelligent systems. When he left OpenAI, he translated that knowledge into a portfolio: sold every share of Nvidia and Broadcom, bought fuel cells, Bitcoin miners, and power companies instead.
The math
The power required to train frontier AI models has been rising roughly tenfold every two years, without interruption. In 2022, training GPT-4 required around 10 megawatts and cost approximately $500 million. By 2024, the largest clusters had reached 100 megawatts. Right now, in 2026, the leading training infrastructure requires a full gigawatt of continuous power: the output of a large nuclear reactor.
By 2028, the projection reaches 10 gigawatts, more than most US states generate in total. By 2030, the model points to a 100-gigawatt training cluster: a single installation consuming over 20% of all US electricity production. Aschenbrenner calls it the trillion-dollar cluster.
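The arithmetic is easy to check. Here is a back-of-envelope sketch assuming the tenfold-every-two-years curve above; the ~4,200 TWh figure for annual US generation is my assumption for the comparison, not a number from the essay:

```python
# Power curve: 10x every two years, starting from ~10 MW in 2022.
# US annual generation of ~4,200 TWh is an assumed reference point.

US_ANNUAL_GENERATION_TWH = 4_200
us_average_gw = US_ANNUAL_GENERATION_TWH * 1_000 / 8_760  # TWh/year -> average GW (~480)

cluster_mw = 10  # roughly the GPT-4 training cluster, 2022
for year in range(2022, 2031, 2):
    share = (cluster_mw / 1_000) / us_average_gw  # cluster's share of US average load
    print(f"{year}: {cluster_mw:>7,} MW  ({share:.1%} of US average generation)")
    cluster_mw *= 10
```

On those assumptions, the 2030 cluster lands at about 21% of average US generation, which is where the “over 20%” figure comes from.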
And that is just training. Inference sits on top. Running AI products for hundreds of millions of users continuously requires multiples of the training demand. Every query, every agent action, every generated response, running 24 hours a day. The training cluster is the headline. The inference infrastructure is the ongoing cost.
Meanwhile, US electricity production has grown approximately 5% over the last decade. The grid was not designed for this. Transformer shortages. Switchgear backorders. Half of all planned US data centres are currently stalled because they cannot get the power they need. These are the first visible symptoms of a wall that Aschenbrenner saw coming years before the market did.
The position
His largest holding is Bloom Energy: solid oxide fuel cells that generate electricity directly at the data centre site, bypassing the grid entirely. While grid-connected projects wait two to three years for interconnection, Bloom can deliver a fully operational system in 55 days.
He entered 2026 with $876 million in Bloom Energy, 15% of his fund. The Oracle deal confirmed the thesis: 1.2 gigawatts contracted immediately, a pipeline to 2.8 gigawatts, the stock up 24% in a single session. Oracle chose Bloom because it delivered ahead of schedule on a previous installation. They needed power yesterday. The grid could not provide it.
His fund went from $225 million to $5.5 billion in equity exposure in a year. Returned 47% in its first six months, several times the S&P’s gain over the same period. Backed by the Collison brothers, Nat Friedman, and Daniel Gross. He is 24 years old with zero prior fund management experience.
Why this matters beyond the trade
This is not just a finance story. It is a strategy story. And it has direct implications for anyone building AI-native products, advising organisations, or thinking seriously about where value will accrue in the next five years.
The first implication: infrastructure is becoming a genuine moat. Not the model. Not the application layer. The infrastructure. The companies that can access power at scale, reliably, will run AI at a cost and speed others cannot match. That advantage compounds.
The second: the “just use the API” era has a ceiling. If you are building on shared compute, you are subject to shared constraints. The frontier labs are not. They are building private infrastructure at gigawatt scale, because the capacity is not available any other way.
The third is about Aschenbrenner’s timeline, which matters regardless of whether you think his AGI date is right. His model has 2027 as the year AI reaches expert-level capability across professional domains: not as a chatbot, but as what he calls a drop-in remote worker, running autonomously for hours. 2028 is the “unhobbling”: agents able to use computers freely, run long-horizon tasks without checkpoints, and unlock economic value almost overnight. 2030 is the trillion-dollar cluster and, in his model, the threshold of superintelligence.
That timeline may be aggressive. But the power demand it describes is already visible and already constrained. You do not have to believe in AGI by 2027 to understand that AI data centres will require more electricity than the current grid can supply.
GPU supply is expanding on a known curve. Electricity supply is not. The gap between those two curves is the most important infrastructure fact in technology right now. It was always going to be the constraint. One person looked at the math, saw it clearly, and acted on it before the rest of the market caught up.
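To make that gap concrete, here is the same back-of-envelope approach applied to both curves at once, using the article’s own growth figures; the ~480 GW average US generation is again my assumption:

```python
# The two curves: the grid adds ~5% per decade while cluster demand
# rises 10x every two years. ~480 GW US average generation is assumed.

us_average_gw = 480
annual_growth_gw = us_average_gw * 0.05 / 10  # 5% per decade -> ~2.4 GW added per year

for year, cluster_gw in [(2026, 1), (2028, 10), (2030, 100)]:
    years_needed = cluster_gw / annual_growth_gw
    print(f"{year}: a {cluster_gw:>3} GW cluster equals "
          f"{years_needed:.1f} years of total US grid growth")
```

At the current pace, the 2030 cluster alone would represent roughly four decades of US grid growth.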
Keep your eyes on the grid.
I’ve put the full thesis together as a presentation at mikelitman.me/megawatts: 18 slides covering the power curve, the Oracle deal, the timeline, and the two curves that define the race.