Beta Stories ■ Episode 5
In 2023, Klarna replaced 700 customer support agents with AI. Costs down 25 per cent. Efficiency up. Shareholders thoroughly delighted.
In 2025, Klarna's CEO told Fortune they "went too far" and started rehiring humans. One rather admires the brevity of the lesson.
The Promise
AI would make everything better. Faster support. Smarter search. Cleaner code. Sharper content. The pitch was enhancement: your people, but augmented. The human does the thinking, the machine does the lifting. Both improve. Costs fall. Quality rises. Simultaneously. Marvellous.
What arrived was replacement: your people, but gone.
The distinction matters, because the pitch and the practice diverged at precisely the point where it became clear that replacing a human is cheaper than augmenting one. Enhancement requires the human to remain. Replacement removes the cost entirely. The spreadsheet does not distinguish between "better output with AI" and "output without humans." Both show the same reduction in headcount. One produces quality. The other produces margin. The quarterly report cannot tell the difference, which is rather the problem.
The Decay
The data is remarkably consistent across every domain where AI replaced rather than augmented human expertise.
Support
Gartner surveyed 5,728 customers. 64 per cent prefer companies that do not use AI in customer service. Not "prefer less AI." Prefer none. Five9 found that 81 per cent would rather wait in a queue for a human than get an immediate response from AI. The product got faster. The customers got unhappier. Quite the return on investment.
Air Canada's chatbot hallucinated a bereavement fare policy that did not exist. A customer booked a full-price ticket based on the chatbot's advice and demanded the discount. Air Canada argued the chatbot was a separate legal entity. The tribunal disagreed. The airline was liable for what its chatbot said, regardless of whether the chatbot knew what it was saying. One does hope someone in legal found this informative.
Search
Google launched AI Overviews. It recommended gluing cheese to pizza (sourced from a Reddit joke) and eating rocks for minerals (sourced from a satirical geology article). Result: 46 per cent fewer clicks on search results. The world's largest information product became measurably less reliable. The AI was confident. The information was wrong. The users ate fewer rocks than recommended, which is perhaps the one bright spot.
Content
CNET published 77 AI-generated articles. 53 per cent required corrections. Wikipedia downgraded CNET as a reliable source. Credibility built over decades, eroded in months by a cost-saving exercise that nobody bothered to quality-check before publishing.
Adobe Stock: 48 per cent of images now AI-generated, up from 2.5 per cent in 2023. The images look polished. They also look identical. LinkedIn: 54 per cent of long-form posts are now AI-generated. They receive 30 per cent less reach. The algorithm can tell. The readers can tell. The authors apparently cannot.
Code
GitClear analysed 211 million lines of code and found that AI-assisted code has a 41 per cent higher churn rate: code rewritten or reverted within two weeks of being committed. The machine writes it. The human rewrites it. Both bill for their time.
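For readers who want the metric made concrete, here is a minimal sketch of what a churn calculation looks like. GitClear's actual methodology is its own; the two-week window matches the definition above, but the data model and the `churn_rate` helper are illustrative assumptions.

```python
from datetime import datetime, timedelta

# Illustrative definition, assumed here: a line "churns" if it is
# rewritten or deleted within two weeks of being committed.
CHURN_WINDOW = timedelta(days=14)

def churn_rate(commits):
    """commits: list of (timestamp, added_lines, removed_lines) tuples,
    where each line is identified by its content string.
    Returns the fraction of added lines removed again within CHURN_WINDOW."""
    added_at = {}                # line content -> timestamp it was added
    churned = total_added = 0
    for ts, added_lines, removed_lines in sorted(commits):
        for line in removed_lines:
            t = added_at.pop(line, None)
            if t is not None and ts - t <= CHURN_WINDOW:
                churned += 1     # removed soon after being written
        for line in added_lines:
            added_at[line] = ts
            total_added += 1
    return churned / total_added if total_added else 0.0

# Toy history: three lines added in total, one rewritten after three days.
t0 = datetime(2024, 1, 1)
history = [
    (t0, ["def f():", "    return 1"], []),
    (t0 + timedelta(days=3), ["    return 2"], ["    return 1"]),
]
# churn_rate(history) -> 1/3: one of three added lines churned
```

On a real repository one would feed this from `git log --numstat` or similar; the point is only that churn is a mechanical, measurable quantity, not a matter of taste.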
METR ran a randomised controlled trial with experienced open-source developers. Those using AI tools were 19 per cent slower. Not faster. Slower. This is not an opinion survey. This is a randomised controlled trial, the gold standard of empirical evidence. The assumption that AI improves developer productivity was never validated. It was budgeted.
CodeRabbit analysed 470 pull requests and found AI-generated code produced 1.7 times more issues than human-written code. The issues are not exotic edge cases. They are the ordinary kind: missing error handling, incorrect assumptions, untested paths. The kind a senior developer catches in review. The kind that nobody catches when the senior developer was the one who got replaced.
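To make the "ordinary kind" of issue concrete, here is a hedged illustration, not drawn from the CodeRabbit data: a config reader of the sort a code generator plausibly produces, assuming the happy path, next to the version a reviewer would insist on. The function names and the config shape are hypothetical.

```python
import json

# Plausibly machine-written: assumes the file exists, parses cleanly,
# and contains a sane "timeout" key. Three untested paths in one line.
def get_timeout_naive(path):
    return json.load(open(path))["timeout"]

# What review adds: the ordinary error handling the naive version skips.
def get_timeout_reviewed(path, default=30):
    try:
        with open(path) as f:
            config = json.load(f)
    except (OSError, json.JSONDecodeError):
        return default           # missing or malformed config: fall back
    value = config.get("timeout", default)
    if not isinstance(value, (int, float)) or value <= 0:
        return default           # guard the incorrect-assumption case too
    return value
```

Nothing here is exotic. The reviewed version is just the naive one with its assumptions written down, which is precisely the work that disappears when the reviewer does.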
The Mechanism
AI did not cause this. Cost-cutting did. AI is the instrument.
A tool lies on a table. Various shapes. Various qualities. It does nothing whatsoever until someone picks it up. The trouble begins when the person reaching for it cannot tell a chisel from a crowbar. They lack the expertise to judge what they are holding and the humility to ask someone who does. But the label says "AI" and the consultancy says "transformational," so one does rather press ahead.
The pattern is remarkably consistent: replace what you do not understand with something else you do not understand. Keep the price. Pocket the difference. When quality collapses, blame the tool. Never the decision. Never the decider.
AI amplifies what is already there. Companies that valued quality use AI to enhance it. Companies that valued margins use AI to cut deeper. The tool is neutral. The incentive is not.
The Signal
There are five symptoms. They always appear together.
- Support replies arrive in seconds but resolve nothing
- Content reads fluently but says nothing specific
- Images look polished but feel interchangeable
- Code passes review but churns within weeks
- Prices hold steady whilst headcount drops 40 per cent
AI is a magnificent instrument in the hands of someone who understands both the tool and the craft. In the hands of someone who understands neither, it is a margin play dressed as innovation.
When the product gets cheaper to produce but not cheaper to buy, you are not the beneficiary.
You are the margin.
Klarna replaced 700 agents. Costs down 25 per cent. Then rehired humans. 64 per cent of customers prefer no AI in support. 46 per cent fewer clicks on AI search. 53 per cent of AI articles needed corrections. 41 per cent higher code churn. 19 per cent slower with AI tools. The tool is neutral. The incentive is not. When the product gets cheaper to produce but not cheaper to buy, you are the margin.