This week I went back and listened to gmoney's newsletter The Early Signal. He opened with a piece built around Leopold Aschenbrenner's conversation with Dwarkesh Patel. If you don't know Aschenbrenner, you should: former OpenAI researcher, wrote the Situational Awareness paper in 2024 calling the shot on where AI was heading before most people were paying attention. The interview is from 2024 but it holds up, maybe more now than when it was recorded.

The frame gmoney pulled out is the one I can't stop thinking about. So I turned it into a deck. But first, the argument.

Standard Oil and the light bulb

Standard Oil was founded in 1870 for one product: kerosene. Kerosene lamps were how America lit its homes. Standard Oil built an empire around that single use case and became one of the most powerful companies in history.

Then the light bulb was invented. Cheaper, cleaner, safer. By every reasonable analysis, Standard Oil should have been finished. The primary use case for their product was being replaced by something objectively better.

The opposite happened. New applications emerged: cars, trucks, industrial machinery, plastics. They consumed vastly more oil than kerosene lamps ever did. The demand that was supposed to die turned out to be a rounding error next to the demand that came after it.

1000x — more oil consumed by cars and industry than kerosene lamps ever demanded
6-8x — AI inference memory compressed by TurboQuant (Google Research, 2026)
-19.5% — Micron's share price over five sessions on the "peak capex" thesis

That's Jevons Paradox

When you make a resource more efficient to use, you don't use less of it. You use more. Because efficiency unlocks use cases that weren't possible before. Lower cost expands the market beyond what anyone could model.

It's a 160-year-old principle, named after the economist William Stanley Jevons, who observed in 1865 that improvements to the steam engine didn't reduce coal consumption. They accelerated it, because suddenly more industries could afford to run on steam. Every efficiency gain creates new demand. Every single time.
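The mechanism can be sketched with a toy constant-elasticity demand model (my illustration, not from the newsletter or the interview): when demand for a resource is price-elastic (elasticity above 1), making each unit cheaper increases not just the units consumed but the total spend on the resource.

```python
def quantity(price, elasticity, k=1.0):
    """Constant-elasticity demand curve: quantity = k * price**(-elasticity)."""
    return k * price ** (-elasticity)

def spend(price, elasticity, k=1.0):
    """Total spend on the resource at a given unit price."""
    return price * quantity(price, elasticity, k)

# Efficiency makes each unit of the resource 6x cheaper.
old_price, new_price = 1.0, 1.0 / 6

# Below elasticity 1, cheaper units shrink total spend (the bear case).
# Above elasticity 1, cheaper units GROW total spend (Jevons).
for eps in (0.5, 1.0, 1.5):
    q_ratio = quantity(new_price, eps) / quantity(old_price, eps)
    s_ratio = spend(new_price, eps) / spend(old_price, eps)
    print(f"elasticity={eps}: quantity x{q_ratio:.1f}, spend x{s_ratio:.2f}")
```

The whole Jevons argument is a bet that demand for intelligence is deep in the elastic regime: every price drop unlocks use cases that were uneconomic a moment earlier.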

The current moment

Google Research published TurboQuant. It compresses the memory footprint of AI inference by 6 to 8 times. A 16GB MacBook can now run 100K-token conversations that previously required cloud APIs. The market's immediate reaction: AI needs less memory. Less compute. Peak capex. Sell the infrastructure stocks. Micron dropped 19.5% in five sessions on this thesis alone.
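For a sense of scale on the memory claim, here's a back-of-envelope KV-cache calculation. All the model dimensions below are my assumptions (a Llama-7B-class architecture), not figures from the TurboQuant paper; the point is only that 100K-token contexts at fp16 sit well beyond a 16GB laptop, and a ~6x compression pulls them back into range.

```python
# Rough KV-cache memory for long-context inference.
# Model dimensions are assumed (Llama-7B-class), not taken from TurboQuant.
layers, kv_heads, head_dim = 32, 32, 128
tokens = 100_000
bytes_fp16 = 2

# Two cached tensors (K and V) per layer, per head, per token.
kv_cache_gb = 2 * layers * kv_heads * head_dim * tokens * bytes_fp16 / 1e9
print(f"fp16 KV cache at 100K tokens: {kv_cache_gb:.0f} GB")  # ~52 GB: cloud-only

compressed_gb = kv_cache_gb / 6  # at the low end of the claimed 6-8x compression
print(f"6x-compressed KV cache: {compressed_gb:.1f} GB")      # ~8.7 GB: laptop range
```

Under these assumptions the uncompressed cache alone is roughly 52 GB; compressed 6x it drops under 9 GB, which, alongside quantized weights, plausibly fits a 16GB machine.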

When inference gets 6x cheaper, you don't run the same amount of inference for less money. You run inference everywhere. On every device. In every app. On every agent. In every workflow.

Jevons says the bears have it backwards. The people selling infrastructure stocks on an efficiency thesis are making the same argument people would have made about oil refineries in 1890, a decade after the light bulb arrived. Efficiency doesn't reduce demand. It explodes it.

The bigger frame

Aschenbrenner's point, and the one gmoney carries forward, is even larger than the efficiency argument. The current use cases for AI (chatbots, coding assistants, search) are the kerosene lamps. They're useful. They're real. They're generating real value. But they are not what AI is ultimately for.

The car hasn't been invented yet. We're building an empire around lamps and haven't even imagined what the car looks like. Meanwhile, everybody's arguing about the price of kerosene.

Aschenbrenner goes further still in the interview: a 2027 AGI timeline, the case for a trillion-dollar nationalised compute cluster, the CCP espionage risk at AI labs, the parallels to the Manhattan Project. Heavy material. I'm not going to compress it here because the nuances matter. Go read gmoney, then go listen to the episode.

My take

The infrastructure bet is still early. Not because I'm bullish on any particular company, but because the analytical frame the bears are using has been wrong every single time in the history of technology. Steam engines. Electricity. Bandwidth. Compute. Storage. Every efficiency gain created more demand, not less. The pattern is not ambiguous.

What gmoney does well is translate a complex macro argument into a legible historical analogy. The kerosene lamp frame is simple enough to hold in your head and specific enough to be genuinely useful. That's rare. Most AI takes are either too abstract to act on or too specific to survive contact with the next news cycle.

This one feels durable. Worth keeping in mind the next time there's a breakthrough that makes AI 6x more efficient and the market decides that means we need 6x less infrastructure.

📖 Source: This post is my commentary on gmoney's The Early Signal newsletter (@gmoneyNFT), which drew on Leopold Aschenbrenner's conversation with Dwarkesh Patel. The facts and analogies are theirs. The editorial framing is mine.

The deck

I turned the argument into a 17-slide talk. Covers the Standard Oil origin story, Jevons Paradox, TurboQuant, Micron's reaction, the kerosene lamp frame, and Aschenbrenner's bigger thesis. Includes my take at the end.

View the deck →