Over the past eight episodes of this series, we have dissected individual performance problems: the event loop, the dependency graph, the virtual DOM, the cloud invoice. Each one appeared to be its own isolated pathology. But stand back far enough and a shape emerges. The same shape, every time.
Today we name it.
Four Exhibits
Node.js is built around a single-threaded event loop, a concurrency model that was already considered a pragmatic compromise in the 1980s. The prescribed remedy is horizontal scaling: spin up more processes, more containers, more machines. This remedy, in turn, spawned an entire orchestration industry (Kubernetes, service meshes, observability platforms), each piece solving a problem that exists only because the original tool chose not to.
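The constraint itself fits in a dozen lines. A minimal sketch, in plain Node-style JavaScript with a hypothetical CPU-bound workload standing in for a real request handler:

```javascript
// Why "spin up more processes" is the prescribed remedy: a timer due
// immediately still cannot fire while synchronous work holds the only thread.
let timerFired = false;
setTimeout(() => { timerFired = true; }, 0);

// Stand-in for any CPU-bound request handler (hypothetical workload).
let sum = 0;
for (let i = 0; i < 10_000_000; i++) sum += i;

// By the end of the synchronous script, the callback is still queued:
console.log(timerFired); // false — the event loop never got control back
```

One blocked loop, and every other request waits. Hence the second process, the second container, and eventually the orchestration layer to manage them all.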
npm ships without deterministic dependency resolution, a problem most package managers solved comfortably in the 1990s. The workarounds are lock files, security scanners, and audit tooling. The resulting gap created a lucrative security-as-a-service market (Snyk, Dependabot, and their many competitors), all sustained by the dependency chaos they were hired to tame.
React's Virtual DOM introduces a diffing layer to solve a DOM update problem that browsers already handle natively. The prescribed remedy is memoisation, profiling, and increasingly baroque build pipelines. This spawned an ecosystem of DevTools extensions, framework consultancies, and conference circuits dedicated to explaining why your application is slow and what you might purchase to make it less so.
Cloud pricing is deliberately opaque: forty-seven-page invoices that require a new professional discipline, FinOps, merely to decode. The workaround is cost-optimisation tooling and dedicated specialists. Gartner estimates that 30% of cloud spend is pure waste. Naturally, the cure for this waste is yet more tooling.
One might notice a pattern emerging.
The Mechanism
Strip away the product names and release dates, and each case follows the same four-step cycle:
- Introduce a tool with a fundamental limitation.
- Present the workaround as industry best practice.
- Build an ecosystem that profits from the resulting complexity.
- Repeat until sunk cost makes switching unthinkable.
The limitation is not a bug. It is rather good business, actually.
This is not a conspiracy theory. No one sat in a room and planned it. The mechanism is subtler than that: a design trade-off becomes an industry default, the default creates demand for remediation, and remediation creates stakeholders who have no interest in the problem being solved at the root. Incentives do the rest. No malice required.
The Second Loop
The cycle does not stop at lock-in. It compounds. Consider what happens when the workaround for a single-threaded server is to decompose everything into microservices:
More services means more network communication. More network communication means more attack surface. More attack surface demands more security tooling. More security tooling means more complexity. More complexity means... one hardly needs to finish the sentence.
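The compounding is not rhetorical; it is combinatorial. A back-of-the-envelope sketch, with purely illustrative service counts:

```javascript
// Decomposing a monolith into n services creates up to n*(n-1)/2 potential
// service-to-service links — each one a network hop to encrypt, monitor,
// retry, and eventually pay someone to secure.
const links = (n) => (n * (n - 1)) / 2;

console.log(links(3));  // 3 links between 3 services
console.log(links(30)); // 435 links between 30 services
// A 10x increase in services yields a ~145x increase in links.
```

The attack surface grows quadratically while the product grows, at best, linearly. The security vendor's revenue tracks the former.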
Spot the beneficiary.
Gartner expects global cybersecurity spending to reach approximately €197 billion in 2025. An industry that grows precisely because the architecture it protects was designed to be indefensible. Rather elegant, if one is selling the cure.
The Invoice
Consider the past eight years alone. According to Deloitte's benchmarks, IT spending as a percentage of revenue has nearly doubled: from 3.28% in 2016 to 5.85% in 2024, a 78% increase. The tooling, we are assured, got better. Marvellous.
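The arithmetic behind that figure, for the sceptical:

```javascript
// Checking the Deloitte benchmark cited above.
const share2016 = 3.28; // IT spend as % of revenue, 2016
const share2024 = 5.85; // IT spend as % of revenue, 2024

const growth = (share2024 / share2016 - 1) * 100;
console.log(growth.toFixed(0)); // "78" — a 78% increase in eight years
```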
Every workaround demands headcount. Every headcount demands coordination. Every coordination layer demands its own tooling. Eventually, you are not a product company with IT support. You are an IT company that occasionally ships product.
One does wonder at what percentage the board might begin to notice.
The Counter-Evidence
If this cycle were inevitable, if every tool must spawn an industry of apologists, then we would expect no exceptions. But exceptions are precisely what we find, and they have been hiding in plain sight for decades.
PostgreSQL has been in continuous development for 35 years. It does not require a constellation of third-party tools to compensate for design decisions it got wrong in 1989. It simply got them right.
SQLite consists of three files and zero external dependencies. It is the most widely deployed database engine in the world. No one has built a consultancy around making SQLite bearable.
FreeBSD Jails have provided kernel-native process isolation since the year 2000, three years before the term "containerisation" acquired its current meaning and a full thirteen years before Docker appeared. No orchestration layer required. No YAML liturgy.
Go and Rust handle concurrency at the language level: goroutines, ownership, lifetimes. No framework gymnastics. No horizontal scaling to paper over a single-threaded design.
None of these spawned billion-euro industries dedicated to patching their shortcomings. They simply work. Boring technology, it turns out, does not require an industry to support it.
That is rather the point.
The Uncomfortable Question
The pattern is not difficult to see. It is difficult to act upon, because acting upon it means questioning sunk costs, retraining teams, and disappointing vendors whose revenue depends on the status quo.
But the arithmetic is patient. A tool that solves the problem it creates is not a tool. It is a subscription. And the difference between engineering and subsidy is whether the spend converges or diverges.
Thirty-five years of PostgreSQL converge. Eight years of doubling IT budgets diverge. The maths is not ambiguous. Only the incentives are.