For decades, Nordic classrooms were the standard everyone else was trying to meet. High scores. Happy students. A national commitment to education that most countries could only admire from a distance. The world didn’t just respect Finland and Norway. It copied them.
So when those same countries decided to go all-in on classroom technology, the rest of the world paid attention.
In 2016, Norway gave a free iPad to every child on their first day of school. No parental controls. No usage limits. Parents who raised concerns were told they were dinosaurs. Books quietly left classrooms. It felt like the future.
Then the data came back.
The numbers
Norway’s PISA reading score fell from 499 in 2018 to 477 in 2022. That is not a rounding error. It represents a country that once led the OECD in literacy slipping to roughly average. Sweden ran the same experiment and found the same thing: by 2021, fourth-graders were reading worse than they had in 2016, before the screens arrived.
What followed was something rare in policy: an honest reversal.
Norway banned screens from preschools and curtailed them across the first four years of primary school. Prime Minister Støre publicly acknowledged that the model had failed. Sweden allocated €104 million to restore physical textbooks and removed the mandate for digital tools in early years entirely. Play-based learning came back. Handwriting returned. Read-aloud sessions resumed. English and other subjects were deferred until children had mastered reading, writing, and arithmetic.
The countries that led the world in going digital became the first to publicly say: we got this wrong. That is not failure. That is intellectual honesty at scale.
What the science confirmed
By the time the policy reversal was underway, the research had caught up. A 2024 meta-analysis of 49 studies found that students reading on paper consistently outperformed those reading the same text on screen. Eye-tracking research explains the mechanism: screen readers skim. Print readers stop and re-read. One optimises for speed. The other builds understanding.
Researchers call it the screen inferiority effect. It is not about nostalgia. It is not technophobia. It is a documented cognitive difference in how the brain processes information depending on the medium it arrives through. The medium is never neutral. Nordic educators assumed it was. They were wrong.
We are running this again
Here is the uncomfortable part.
Right now, AI tools are being rolled into classrooms, agencies, and organisations across the world. The language is identical to 2016: “AI natives,” “efficiency gains,” “the future of work.” The optimism is genuine. The ambition is real. And the absence of controls is almost total.
Norway in 2016 was not reckless. It was optimistic. The theory seemed reasonable: children growing up with screens would learn differently, and better, through them. It was also untested at scale. Nobody built in a measurement framework. Nobody defined what “working” would look like before they started. Nobody set a date to check.
That is the actual failure. Not the screens. The absence of a control.
The same pattern is playing out now with AI. Organisations are adopting tools at speed, which is fine. But very few are defining what they will measure, when they will review it, or what “it’s not working” would look like before they press go. The adoption curve is steep. The evaluation infrastructure is almost nonexistent.
What the reversal is actually about
The story is not “screens are bad.” Finland still uses screens. The point is not the technology. The point is the discipline.
The Nordic countries ran a national-scale experiment. They collected data. When the data told them something was wrong, they acted on it. Eight years from adoption to reversal. That sounds slow until you consider that most organisations never reverse anything.
Three questions before you press go
What will you measure? Not “productivity” in the abstract, but specifically: which outputs, which behaviours, which outcomes. Define it now, while the answer is still easy to give honestly.
When will you check? Set a date before you start. Six months. Twelve months. Put it in the calendar while the optimism is high and the political will exists to actually look.
What would “it’s not working” look like? This is the hardest question. Most rollouts never answer it, which means they never end. The screen experiment continued for six years after the first warning signs appeared, because nobody had defined in advance what failure looked like.
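One way to make those three questions concrete is to write the answers down as data before the rollout begins, so the review cannot quietly drift. The sketch below is purely illustrative: the class, the metric names, and the failure threshold are all hypothetical, not a prescribed tool.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RolloutReview:
    """A pre-registered evaluation plan, answered before go-live."""
    tool: str
    metrics: list[str]        # Q1: what will you measure?
    review_date: date         # Q2: when will you check?
    failure_condition: str    # Q3: what does "not working" look like?

    def is_due(self, today: date) -> bool:
        # The review is due once the pre-committed date has passed.
        return today >= self.review_date

# Hypothetical example: an AI writing-assistant rollout
plan = RolloutReview(
    tool="ai-writing-assistant",
    metrics=["error rate in published copy", "time to first draft"],
    review_date=date(2026, 6, 1),
    failure_condition="error rate >10% above pre-rollout baseline",
)

print(plan.is_due(date(2026, 7, 1)))  # True: the check is now overdue
```

The point of the structure is not the code. It is that the review date and the failure condition are fixed while optimism is still high, instead of being negotiated after the results arrive.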
The Nordic schools didn’t fail because they used technology. They failed because they didn’t measure what they changed. That is a solvable problem. It requires intention, not intelligence. And it is entirely within reach for every organisation making a bet on AI right now.
The organisations that build in the measurement are the ones that will know what to do next. Everyone else will find out the hard way, a few years from now, when someone starts asking where the evidence went.
I’ve put the full evidence base and policy timeline into a presentation: mikelitman.me/unplug