What happens when an AI calls every cafe in London and asks: are you pram-friendly? Everything we built, broke, fixed, and found.
Every parent with a pushchair knows the feeling. You approach a cafe. You can't see through the window. You don't know if the door is wide enough, if there's a step, if there's space, if they'll make you feel welcome or like a problem. So you guess. Sometimes you're wrong. Sometimes you fold the buggy in the rain.
I had a one-year-old and a buggy and no data. So I built the data.
This information doesn't exist anywhere.
No Michelin star for pram-friendliness. No Google Maps layer. No TripAdvisor filter. Venues say they're "family-friendly" on their website. Parents find out the hard way. The data gap is complete and nobody had noticed because nobody had tried to fill it.
One question: what's the price of a pint of Guinness? A simple question, asked at scale, produced the most honest pub price data in Ireland. Nobody had done it before. Not because the idea was hard, but because making 3,000 phone calls was impossible. Until it wasn't.
Same logic. Different question. Different city. The technology made it possible. The opportunity was the same: a dataset that has never existed, collected by a machine, verified by voice.
"Hi, quick question — are you pram-friendly?"
That's it. One question. Under 20 seconds when it works. The answer — yes, no, difficult, or silence — goes on the map. No survey. No form. No self-reported data. An AI called them and they said what they said.
ElevenLabs voice agent. Calls venues directly from a London number. Asks one question. Handles IVR menus with DTMF tones. Listens for vague answers and follows up — "so a pushchair can get through the door OK?" — before closing.
It sounded convincingly human in over 99.9% of calls: in only 8 of 10,000+ conversations did the person on the line realise they were talking to an AI.
Only 8% of venues gave an outright no. The other 92% are accessible in some form — confirmed yes, or accessible with some nuance. Most cafes want pram customers. The hostility isn't policy, it's architecture. Steps and narrow doorways, not attitude.
The amber category is where the honesty lives. "It depends when you come." "It's a bit tight but we try." That nuance is split almost 50/50 with green — and it didn't exist anywhere before Buggy Smart.
"We're a basement venue, so there's about fifteen steps down. Once you're in, it's all on the flat."
London restaurant — classified amber
This venue would have shown as "family-friendly" in any self-reported survey. The AI call got the actual answer. That's the data that doesn't exist elsewhere.
The pause before the answer. The hesitation. The "ooh, good question." Venues were genuinely surprised to be asked. Some consulted a colleague. Some looked around the room. The question made them think about accessibility in a way they never had to articulate before.
That's not a quirk. That's the proof of concept. If venues had to think, then parents had to guess. The gap between those two states is the entire value proposition of Buggy Smart.
Most just answered the question. A few chatted. One took a full reservation. The calls are indistinguishable from a curious parent calling ahead. That's not a trick — it's just what a simple, honest question sounds like.
The Claude Haiku classifier was too conservative. When a human answered with anything ambiguous — "I think so," "probably," "we try" — Haiku returned "unclear" instead of inferring the most likely answer. The result: thousands of real conversations buried in a fog of uncertainty, contributing nothing to the map.
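The fix was a second pass over anything the classifier marked unclear: map hedged human answers to the most likely label at lower confidence, and only keep "unclear" for genuine ambiguity. A minimal sketch, with the phrase lists, labels, and confidence values as assumptions rather than the production rules:

```python
# Second-pass heuristic for answers the main classifier marked "unclear".
# Phrase lists and scores are illustrative, not the production prompt.
LEANS_NO = ("probably not", "i doubt it", "not really")
LEANS_YES = ("i think so", "probably", "we try", "should be fine")

def resolve_unclear(transcript: str) -> tuple[str, float]:
    """Map a hedged human answer to the most likely label, at lower confidence."""
    text = transcript.lower()
    # Check negatives first: "probably not" contains "probably".
    if any(phrase in text for phrase in LEANS_NO):
        return ("no", 0.6)
    if any(phrase in text for phrase in LEANS_YES):
        return ("yes", 0.6)      # likely yes, flagged as lower confidence
    return ("unclear", 0.0)      # genuinely ambiguous stays unclear
```

"I think so" becomes a low-confidence yes instead of noise; the fog lifts without pretending the hedge wasn't there.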
GitHub Actions was cancelling all three pipeline workflows — morning, refresh, retry — at a 60-minute job timeout, daily, mid-batch. The watchdog that should have alerted us was failing silently too: a YAML heredoc parsing bug made it exit instantly on every push.
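The timeout fix is one line per job. A sketch with a placeholder job name and command; `timeout-minutes` is the real GitHub Actions key, and hosted runners allow values up to 360:

```yaml
# .github/workflows/morning.yml — illustrative job and command names
jobs:
  call-batch:
    runs-on: ubuntu-latest
    timeout-minutes: 180   # raise the per-job ceiling above the batch's real runtime
    steps:
      - uses: actions/checkout@v4
      - run: python run_pipeline.py --batch morning   # placeholder command
```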
69% of calls go to voicemail or no answer. Without answering machine detection, the ElevenLabs agent connected to every voicemail and spoke into it for the full call duration. One endpoint had a 90-second timeLimit. Every voicemail cost 90 seconds of AI time.
One day. 12 things fixed. The pipeline came out the other side genuinely humming.
184 venues appeared on the map this morning.
No calls were made.
The evening pipeline re-evaluated 10,000+ past call transcripts overnight and corrected its own earlier decisions. Calls made months ago, re-scored while no one was watching. Most datasets are frozen the moment they're collected. This one improves retroactively.
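The retroactive pass is a plain loop over stored transcripts. A minimal sketch, with the record shape and classifier as hypothetical stand-ins for the real store and model call:

```python
from typing import Callable

# Illustrative record shape: the real store holds venue IDs, transcripts,
# classifications, and confidence scores (the field names here are assumptions).
def rescore(records: list[dict], classify: Callable[[str], str]) -> int:
    """Re-run the current classifier over past transcripts; count corrections."""
    corrected = 0
    for record in records:
        new_label = classify(record["transcript"])
        if new_label != record["label"]:
            record["label"] = new_label   # old call, new decision
            corrected += 1
    return corrected
```

Swap in an improved classifier and every call ever made gets re-scored, without redialling a single venue.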
A venue that can get a pushchair in can also get a wheelchair in. Or a walking frame. Or a delivery trolley. Pram-friendliness is a proxy for step-free access, door width, ground-floor service — data that benefits anyone with a mobility need, not just new parents.
No London borough holds this data. No accessibility database has it. The NHS doesn't have it. Health visitors recommend cafes for postnatal groups without knowing if they're physically accessible. Buggy Smart fills a civic gap as much as a parenting one.
The press hook writes itself: "AI makes 10,000 calls to rank London's venues on buggy access." The supermarket chain comparison is the hero data point.
Once the London dataset is complete, the underlying data — venue IDs, coordinates, access classifications, call transcripts, confidence scores — becomes a structured feed licensable via API.
Once the technology is named in a Guardian article, the category is owned.
The press launch target: Times, Guardian, Telegraph family desk, Which?, Mumsnet News. The hook is the supermarket chain comparison — every editor wants to know which supermarket is worst. The mechanism (AI calling 10,000 venues) is the story above the story. Set a launch date. The pipeline runs itself.
Built by a London dad who got tired of guessing. Started it for my 2-year-old. Turns out every buggy-wielding parent in London needed it too.
This data should have existed a long time ago. I made it for myself. Now it does.