I built CultureTerminal to solve a reading problem: too many tabs, too many sources, no way to know what mattered. It worked. 110+ publications, one scored feed. But after months of running it, a different problem emerged. I had the raw material. I didn't have the analysis.
CultureTerminal tells you what's happening. It doesn't tell you what it means. It doesn't connect Tuesday's fashion story to Thursday's tech deal. It doesn't spot the pattern running through apparently unrelated signals. That's the gap a human analyst fills. The question was whether I could build a system that does it daily, automatically, and doesn't sound like a robot.
The Pattern is that system. Every morning at 7am, it publishes a culture intelligence briefing at thepattern.media. One headline. Five signals with analysis. A connecting thesis. A prediction. Conversation starters. Recommended reads. And a full podcast episode narrated in my cloned voice. All generated, synthesised, and deployed without me touching a thing.
The pipeline
The Pattern sits on top of CultureTerminal's feed infrastructure. That matters, because the inputs are already curated. I'm not feeding AI the entire internet and hoping for the best. I'm feeding it the output of an opinionated scoring algorithm that has already surfaced the day's most culturally significant stories from 110+ sources across fashion, design, tech, brands, music, art, and lifestyle.
Here's what happens every morning:
1. Fetch and score. The pipeline pulls CultureTerminal's scored articles. The top 25 by relevance score become today's raw material. Each one has already been categorised and scored on freshness, authority, brand signal, and depth.
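The selection step is simple enough to sketch. The field names and weights below are my illustrative guesses, not CultureTerminal's actual scoring algorithm:

```python
# Hypothetical sketch of the fetch-and-score step. The sub-score names
# and weights are assumptions, not the real CultureTerminal schema.

def composite_score(article: dict) -> float:
    """Blend the four sub-scores into one relevance number (weights assumed)."""
    return (0.35 * article["freshness"]
            + 0.25 * article["authority"]
            + 0.25 * article["brand_signal"]
            + 0.15 * article["depth"])

def top_articles(articles: list[dict], n: int = 25) -> list[dict]:
    """Today's raw material: the n highest-scoring articles."""
    return sorted(articles, key=composite_score, reverse=True)[:n]
```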
2. AI synthesis. The 25 articles go to Claude Sonnet with a detailed prompt. Not "summarise these articles." That would produce a boring digest. The prompt asks for something specific: a headline that connects the day's signals, analysis of why each story matters, a connecting pattern across all of them, and a bold, falsifiable prediction. The prompt encodes an editorial voice. Opinionated, insider, sharp. No hedging, no "in a rapidly evolving landscape" filler.
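In code, the synthesis step looks roughly like this. The prompt wording, model id, and JSON response shape are my assumptions; only the Anthropic messages API shape is real:

```python
# Hedged sketch of the synthesis step. EDITORIAL_PROMPT is a stand-in for
# the real prompt; the model id and JSON output contract are assumptions.
import json

EDITORIAL_PROMPT = """You are a culture analyst with a sharp, opinionated,
insider voice. From the articles below, return JSON with: a headline that
connects the day's signals, five signals each with a 'why it matters' take,
the connecting pattern across all of them, and one bold, falsifiable
prediction with a confidence score and deadline. No hedging, no filler."""

def build_prompt(articles: list[dict]) -> str:
    """Combine the editorial instructions with today's top articles."""
    return EDITORIAL_PROMPT + "\n\nTODAY'S ARTICLES:\n" + json.dumps(articles, indent=2)

def synthesise(articles: list[dict]) -> dict:
    import anthropic  # local import so the sketch runs without the SDK installed
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumed model id
        max_tokens=4096,
        messages=[{"role": "user", "content": build_prompt(articles)}],
    )
    # Assumes the model honours the JSON instruction in the prompt.
    return json.loads(message.content[0].text)
```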
3. Audio generation. The synthesis includes a full podcast script. That script goes to ElevenLabs, which generates the audio using a voice clone trained on thirty minutes of my own speech. The result is a three-to-four-minute daily podcast episode that sounds like me, analysing stories I didn't read, making connections I didn't make.
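The audio step is a single call to ElevenLabs' text-to-speech endpoint. This sketch uses the raw REST API via the standard library; the model id and error handling are assumptions:

```python
# Hedged sketch of the audio step against ElevenLabs' REST API.
# The model_id is an assumption; the endpoint and xi-api-key header are real.
import json
import os
import urllib.request

TTS_URL = "https://api.elevenlabs.io/v1/text-to-speech/{voice_id}"

def build_tts_request(script: str, voice_id: str) -> urllib.request.Request:
    """Build the POST request that turns the podcast script into speech."""
    body = json.dumps({"text": script, "model_id": "eleven_multilingual_v2"})
    return urllib.request.Request(
        TTS_URL.format(voice_id=voice_id),
        data=body.encode(),
        headers={
            "xi-api-key": os.environ.get("ELEVENLABS_API_KEY", ""),
            "Content-Type": "application/json",
        },
        method="POST",
    )

def generate_audio(script: str, voice_id: str, out_path: str) -> None:
    """Call the API and write the returned MP3 bytes to disk."""
    with urllib.request.urlopen(build_tts_request(script, voice_id)) as resp:
        with open(out_path, "wb") as f:
            f.write(resp.read())
```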
4. Generate everything. One Python pipeline produces: the day's edition page, an updated homepage, an archive page, a brand index (every brand mentioned across all editions), a predictions ledger, trend analysis, category breakdowns, an RSS podcast feed, an Atom feed, social sharing cards, and the audio file. Around 6,500 lines of Python generating a complete publication.
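To give a flavour of the generation step, here is a minimal sketch of just one of those artifacts, the podcast RSS feed: one item per edition, with the audio attached as an enclosure. The edition field names are illustrative, not the pipeline's actual data model:

```python
# Hypothetical sketch of the podcast feed generator. Edition dict keys
# (headline, date, audio_url, audio_bytes) are illustrative assumptions.
import xml.etree.ElementTree as ET

def podcast_feed(editions: list[dict]) -> str:
    """Render a minimal RSS 2.0 podcast feed, newest editions included as items."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = "The Pattern"
    ET.SubElement(channel, "link").text = "https://thepattern.media"
    for e in editions:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = e["headline"]
        ET.SubElement(item, "pubDate").text = e["date"]
        # The enclosure tag is what makes podcast apps fetch the audio.
        ET.SubElement(item, "enclosure", url=e["audio_url"],
                      length=str(e["audio_bytes"]), type="audio/mpeg")
    return ET.tostring(rss, encoding="unicode")
```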
5. Deploy. Everything pushes to Netlify via their REST API. GitHub Actions commits the edition data back to the repo. If anything fails, I get a push notification. Most mornings, I wake up, check The Pattern, and read my own culture briefing over hot chocolate.
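The deploy step can be sketched against Netlify's zip-upload endpoint: bundle the built site into a zip, POST it, and let any non-2xx response raise. The directory layout and token handling here are assumptions:

```python
# Hedged sketch of the deploy step using Netlify's zip deploy endpoint.
# Site id and token handling are assumptions about the real pipeline.
import io
import os
import urllib.request
import zipfile

DEPLOY_URL = "https://api.netlify.com/api/v1/sites/{site_id}/deploys"

def zip_site(build_dir: str) -> bytes:
    """Bundle the generated site into an in-memory zip for upload."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(build_dir):
            for name in files:
                path = os.path.join(root, name)
                zf.write(path, os.path.relpath(path, build_dir))
    return buf.getvalue()

def deploy(build_dir: str, site_id: str, token: str) -> None:
    """POST the zipped site; urlopen raises on failure, triggering the alert."""
    req = urllib.request.Request(
        DEPLOY_URL.format(site_id=site_id),
        data=zip_site(build_dir),
        headers={"Content-Type": "application/zip",
                 "Authorization": f"Bearer {token}"},
        method="POST",
    )
    urllib.request.urlopen(req)
```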
The editorial layer
This is the part that makes The Pattern different from every other AI news digest.
Most AI summarisation products treat all content equally. Feed in articles, get back summaries. The output is technically accurate and editorially dead. No voice, no opinion, no point of view.
The Pattern has a point of view because I built one into it. The synthesis prompt doesn't just ask "what happened?" It asks "what does this mean?" and "what's the pattern connecting these signals?" and "what bold prediction can you make based on today's evidence?" The AI is being asked to think like a culture analyst, not a headline aggregator.
The five signals don't just describe stories. Each one has a "why it matters" line that's deliberately opinionated. When Demna's Gucci debut gets covered, The Pattern doesn't say "major fashion event." It says "when luxury's biggest turnaround bet refuses easy legibility, it's either genius or the industry admitting it doesn't know what sells anymore." That's a take. A specific one. From a system that generates it every day.
The predictions are the accountability mechanism. Every edition includes a specific, falsifiable claim with a confidence score and a deadline. These get tracked in a public ledger. If I'm going to build a system that makes daily calls about culture, it should be held to account.
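The ledger itself is a simple data structure. This is a hypothetical sketch, not the pipeline's actual schema, but it shows the accountability mechanism: every prediction carries a confidence score and a deadline, and anything past its deadline without a verdict is flagged:

```python
# Hypothetical sketch of the predictions ledger; field names are assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass
class Prediction:
    claim: str             # the specific, falsifiable call
    confidence: float      # 0-1, as published in the edition
    deadline: date         # the date by which it must resolve
    outcome: str = "open"  # "open", "right", or "wrong"

def overdue(ledger: list[Prediction], today: date) -> list[Prediction]:
    """Predictions past their deadline that still need a public verdict."""
    return [p for p in ledger if p.outcome == "open" and p.deadline < today]
```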
The voice clone
The podcast was the moment this project went from interesting to genuinely unsettling. I recorded thirty minutes of myself talking about culture, brands, and design. ElevenLabs trained a voice model on it. Now, every morning, a version of me narrates the culture briefing.
It sounds like me. Not perfectly, but close enough that people who know me do a double-take. The cadence, the rhythm, the way I emphasise certain words. The AI writes the script. Another AI reads it in my voice. I'm the editorial director of a daily publication that runs without me.
There's something uncomfortable about that, and I think the discomfort is the point. This is where AI and media production are heading. The question isn't whether this will happen at scale. It's who builds the editorial layer that makes it worth listening to.
What I'm actually building
The Pattern is a product. But it's also a proof of concept for something bigger: the idea that one person with taste and the right tools can produce a daily publication that competes with editorial teams of ten.
I'm not a developer. I'm a strategy director who spent fifteen years in advertising and is now building products with AI tools. The Pattern's pipeline is 6,500 lines of Python that I couldn't have written a year ago. The voice clone uses an API I learned about two weeks before recording. The entire infrastructure sits on free tiers and open-source tools.
The constraint is taste, not technology. Anyone can point AI at news feeds and generate summaries. The difference is which sources you choose, what you ask the AI to look for, how you frame the analysis, and whether you have a point of view worth encoding. Those are editorial decisions. Strategy decisions. Taste decisions.
CultureTerminal showed me that algorithms and taste aren't opposites. The Pattern takes that further. It shows that you can build a system that thinks like you, sounds like you, and publishes like you. Every morning. Before the rest of the industry wakes up.
What it taught me
Three things.
First: the editorial layer is everything. The difference between a good AI product and a bad one is the quality of decisions encoded into the prompt. Garbage in, garbage out is the old version. Now it's taste in, taste out. My fifteen years of reading about culture, tracking trends, and building strategic narratives aren't replaced by AI. They're the only thing that makes the AI output worth reading.
Second: automation reveals your standards. When something runs daily without intervention, every flaw is amplified. Bad formatting shows up every day. Weak analysis shows up every day. Boring headlines show up every day. You can't hide behind manual polish. The system reflects exactly how well you built it.
Third: the future of media production looks like this. Not AI replacing journalists. But one person with strong editorial instincts building a system that does the heavy lifting, so they can focus on the judgment calls that actually matter. One person, 110+ sources, one daily briefing. That's The Pattern.