Some AI projects felt hollow. Others felt like the most alive I'd been at work in years. For a long time I couldn't explain the difference.

It wasn't quality. Some of the hollow things were technically impressive. It wasn't usage. Some of the meaningful ones had almost none. It wasn't time spent, or complexity, or how clever the prompt was. I could build something in an afternoon that felt like it mattered. I could spend two weeks on something that felt like generating outputs into a void.

Same tools. Same stack. Roughly the same effort. Completely different feeling.

First Order

The clearest example was a voice agent I built called First Order. The idea: an AI agent that calls London restaurants ahead of your visit and asks about dish recommendations, kitchen flexibility, the things that are awkward to raise on the night itself.

I called 500 restaurants. Most hung up. Some were confused. A few were rude. One chef described his specials for four minutes to a robot that couldn't eat any of them. The success rate was 15%.

I kept going.

Not because the numbers were good. Not because I had users or a business model or any rational reason to continue. The problem was genuinely mine. I actually wanted to know if this would work. The outcome was genuinely uncertain. There was something in the gap between each call, something about not knowing what would come back, that made the whole thing feel like it mattered.

Compare that to some of the things I built around the same time: polished, capable, functional things I could predict would work before I started. I'd define the brief, generate the output, ship it, feel almost nothing. Professionally fine. Personally hollow.

The essay that named it

I read Sam Lessin's essay "AI Is Not a Labor Crisis. It Is a Meaning Crisis" and something clicked into place. His argument is civilisational rather than personal, but it described something I'd been experiencing in miniature across 20-odd projects over two years.

His starting point: meaning isn't optional. It is, as he puts it, "the critical input. The cost of our greatest gift: imagination." Because humans can imagine futures, we can't live without a reason to inhabit one. Civilisations have always solved this through what he calls retail meaning: mass-produced systems of purpose that scale to billions.

Religion told people their suffering had weight beyond the grave. Industrial modernity told them their labour would improve the lives of their children, or eventually their own. Both were operating systems for sacrifice, duty, and hope. Both worked.

AI is breaking the second one.

Two bad stories

Lessin sees two versions of the AI story. Both are catastrophic for meaning.

The optimist version: AI brings abundance without effort. Life gets better just by staying alive. The problem is that "don't die" doesn't tell anyone what to do on a Tuesday morning. Passive abundance isn't a narrative. It's a void with good lighting.

The pessimist version: AI finally, fully devalues human labour. The strong person with a shovel lost their narrative a century ago. The knowledge worker is next. When the mechanism for upward movement disappears, it isn't just inequality. It's the end of the story people were living inside.

Someone always raises the creation argument here: "But AI lets you create anything you can imagine." Wrong. Creation is only meaningful when it's uniquely yours. The novelty of generating a poem or an image lasts two nanoseconds. "Yes, it is fun to make software with Claude for now," Lessin writes, "but just for now." Pleasure habituates even faster. All the abundance in the world doesn't solve the meaning problem once you're fed and housed.

What AI is actually breaking, he argues, is the effort-to-value link. "This is more psychologically damaging than inequality by itself. People can survive unfairness better than they can survive the feeling that their striving is irrelevant."

That sentence stopped me.

"Civilizations are not mainly threatened by discomfort; they are threatened by superfluity. A society full of people who feel unnecessary is more dangerous than a society full of people who are merely poor." — Sam Lessin

When intelligence and production cost almost nothing, what becomes scarce is trust, belonging, and costly commitment. Things that are hard to fake precisely because they require something of you. The things AI cannot generate on your behalf.

The builder's answer

This is where I found my way back to the question I'd been sitting with.

The hollow projects had no genuine uncertainty. I could predict the output before I started. The problem wasn't mine in any real sense. The AI did the hard parts. I directed traffic.

The meaningful ones had stakes. Something rode on them. There was always something the AI couldn't do for me: the decision about what actually mattered, the willingness to keep going when 85% of the calls got hung up, the judgment call about whether this was worth pursuing at all.

The effort-to-value link is still intact when the idea is genuinely yours and the outcome is genuinely uncertain. That's the condition AI can't collapse, not yet. And building is how you stay inside it.

I've worked all of this out in longer form at mikelitman.me/meaning: 20 slides, the full Lessin argument, and where I think it lands. It's the most honest thing I've made in a while.

The conclusion I keep coming back to: building is the only activity that still demands you. In a world where everything can be generated, the things that require your genuine struggle and uncertainty are the last available source of meaning. Protect that. Build from obsession. Keep the link intact.