The gap between companies moving fast with AI and companies that are not is not theoretical. It is not a lab benchmark or an analyst projection. It is operational, visible, and widening every month.

Shopify: 14,000 employees, most of whom are not engineers. In April 2025, CEO Tobi Lütke published his internal memo: before requesting new headcount, every team must prove the work cannot be done with AI. That is not a policy update. That is a restructure of how the company thinks about labour.

Block: 10,000 employees down to 6,000 in February 2026. The 40% cut was explicitly attributed to AI. They built an internal coding agent called Goose. One engineer reported 90% of his code output now comes from it. Production code per engineer is up 40-plus percent since September 2025. Jack Dorsey’s framing: this is what AI-first actually looks like.

Revolut: grew its customer base 34% in 2024 while limiting customer support headcount growth to 5%. The gap between those two numbers is AI operating at scale. That is not an efficiency saving. That is a business model restructured.

These are not AI-native startups. They are not companies that have always operated this way. They are companies that decided to change, and then changed.

What actually separates them

Most analysis of the velocity gap focuses on tools. Which models. Which integrations. How much they are spending. This misses the point entirely.

The tools are available to everyone. Every developer in your organisation already has Copilot. Most of them are using it to autocomplete lines they would have written anyway. Tool access is not the differentiator. It has not been the differentiator for at least a year.

The companies moving fastest have done three things that are harder than buying a seat licence.

The first is permission. Not implicit permission. Explicit, public permission from leadership to let AI be the primary author. There is a specific psychological shift that has to happen before the velocity follows: you move from reviewing every AI output line by line to trusting the system and intervening on exceptions. The middle manager who reviews every AI-generated line as if they wrote it themselves is not a quality gate. They are the bottleneck. The companies winning have given that permission explicitly, and from the top.

The second is process redesign. When cycle time drops from weeks to hours, every downstream process breaks. Code review. Stand-ups. QA. Sprint planning. Amazon learned this the hard way: they mandated 80% usage of their internal AI coding tool and tracked adoption as a corporate OKR, but did not redesign the safety and review infrastructure to match. The result: 6.3 million lost orders from cascading failures in AI-generated code. You cannot bolt AI onto a two-week sprint. The winners rebuilt the process around what AI can actually do, not around the process they inherited from a different era.

The third is measurement. Most companies track vanity metrics: seat licences activated, PRs up 20-30%, training hours completed. These are inputs. They tell you nothing about velocity. The companies winning track something different: production code per engineer per week, cycle time from idea to shipped feature, token spend as a percentage of build cost. The DORA benchmarks are useful here: elite engineering teams ship with a change lead time under one hour. Industry average is one to four weeks. Most companies do not know where they sit. Until you measure the actual gap, you cannot close it.
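If you want to see where you sit against that benchmark, the calculation is not complicated. A minimal sketch, assuming you can export commit and production-deploy timestamps from your CI system; the function name and sample data are illustrative, not from any particular tool:

```python
from datetime import datetime, timedelta
from statistics import median

def lead_times_hours(changes):
    """DORA change lead time: first commit to production deploy, in hours."""
    return [(deployed - committed) / timedelta(hours=1)
            for committed, deployed in changes]

# Illustrative data: (commit time, deploy time) pairs exported from CI.
changes = [
    (datetime(2025, 9, 1, 9, 0),  datetime(2025, 9, 1, 9, 40)),
    (datetime(2025, 9, 2, 14, 0), datetime(2025, 9, 4, 10, 0)),
    (datetime(2025, 9, 3, 11, 0), datetime(2025, 9, 3, 12, 30)),
]

hours = lead_times_hours(changes)
print(f"median lead time: {median(hours):.1f}h")  # elite benchmark: under 1h
```

One afternoon of pulling timestamps gives you a number you can track weekly. Most teams that run this for the first time discover their median is measured in days, not hours.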

The uncomfortable part

The companies winning are not the ones with the best AI tools. They are the ones where the CEO is in the code base, where there is no sentimentality about processes or structures from a previous era.

"No sentimentality" is the hardest thing to operationalise. It means letting go of the 15-year engineer who reviews every line as if they wrote it. The process from 2018. The team structure built for a different era of software development. Block eliminated a middle management layer when Goose shipped. They did not renegotiate it. They just stopped hiring for it.

There is a reason most companies are still in pilot mode 18 months into this. The incentive structure punishes visible failure harder than invisible mediocrity. Moving fast and breaking something is visible. Staying slow and quietly ceding ground is not. Amazon’s 6.3 million lost orders made headlines. Nobody writes about the companies that moved too cautiously and found themselves a year behind.

But the ground is being ceded either way. The bell curve is just a comfortable place to wait while someone else builds the lead.

What to actually do

Five things appear consistently in the companies moving fastest. Find people embedded across your organisation who are genuinely obsessed with the frontier (not an AI team; obsessives who spread through every function). Give them a budget that would make your CFO uncomfortable, treated as infrastructure rather than experiment. Start with internal tools, not client-facing products: EAs building platforms, finance building dashboards, HR running their own systems. Measure cycle time, not seat licences. And say out loud, from the top, that AI is the author.

That last one sounds obvious. It is not. Most leadership teams have implicitly sanctioned AI use while maintaining the expectation that outputs are still human-authored, human-reviewed, human-accountable. That ambiguity is where velocity dies.

The question worth sitting with: what did your team ship this week that AI wrote?

Not co-wrote. Not suggested. Wrote. The answer to that question is a more honest measure of where you actually sit in the distribution than anything on your product roadmap.

I’ve mapped this out in a full presentation at mikelitman.me/velocity: 15 slides, the company examples, the nine differentiators, and the five moves to cross the gap.