Vivian Voss

JavaScript Bloat: The Battery Tax

javascript web performance

Performance-Fresser ■ Episode 14

The median web page in 2026 ships 558 KB of JavaScript. Not the worst offenders. The median. Half of all pages deliver more. And of that 558 KB, 44 per cent is never executed. Not eventually. Not under rare conditions. Never. Two hundred and forty-five kilobytes of code that travels across the network, gets parsed by the engine, occupies memory, and does precisely nothing. Shipped, paid for, and thrown away on every single page load.

On the developer’s MacBook Pro, this is invisible. The M-series chip parses the bundle in under 200 milliseconds, the fan stays silent, and the Lighthouse score glows a reassuring green. The developer commits, deploys, and moves on to the next sprint.

On the user’s phone, the same bundle takes five seconds.

The 25× Gap

In 2019, the Chrome team at Google published an analysis on the V8 blog that the industry has spent seven years politely ignoring. The Cost of JavaScript demonstrated that mobile devices parse JavaScript roughly 25 times slower than a high-end laptop. Not two times. Not five. Twenty-five.

The arithmetic is merciless. A 500 KB bundle that takes 200 milliseconds on a developer’s workstation demands five full seconds on a mid-range Android handset, the kind of device that represents the majority of the world’s web traffic. Five seconds of a locked main thread. Five seconds of an unresponsive screen. Five seconds during which the user stares at a layout that looks finished but refuses to obey their fingers.

[Chart: parse time for a 500 KB JavaScript bundle. Dev laptop: 200 ms. Mobile: 5,000 ms, 25× slower. Same code. Same bundle. Different planet. Source: v8.dev, 2019]
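The arithmetic is easy to check. A minimal sketch in JavaScript, using per-kilobyte parse rates derived from the 200 ms and 5,000 ms figures above (the rates are illustrative back-of-envelope numbers, not benchmarks):

```javascript
// Illustrative parse-cost model from the figures above:
// a dev laptop parses ~500 KB in 200 ms (0.4 ms/KB),
// a mid-range phone in 5,000 ms (10 ms/KB) -- the 25x gap.
const MS_PER_KB = { laptop: 0.4, phone: 10 };

function parseTimeMs(bundleKB, device) {
  return bundleKB * MS_PER_KB[device];
}

console.log(parseTimeMs(500, "laptop")); // 200 ms on the dev machine
console.log(parseTimeMs(500, "phone"));  // 5000 ms on the handset
console.log(parseTimeMs(558, "phone"));  // the 2026 median: 5580 ms
```

The 558 KB median lands at roughly five and a half seconds on the device most of the world actually uses.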

The reason is physics, not software. A laptop runs an x86 or ARM chip at sustained multi-gigahertz clock speeds, backed by active cooling and a power supply that does not care about thermals. A phone runs a thermally constrained ARM SoC that throttles itself the moment it gets warm. And parsing half a megabyte of JavaScript makes it warm rather quickly.

The Throttling Spiral

This is where the story turns from regrettable to perverse. JavaScript execution on mobile is not merely slow. It is self-defeatingly slow, because of a feedback loop that the industry appears determined not to discuss.

Heavy JavaScript demands sustained main-thread CPU time. Sustained CPU time generates heat. Heat triggers thermal throttling. Throttling reduces clock speed. Reduced clock speed means the same JavaScript takes longer. Longer execution generates more heat. More heat triggers deeper throttling. The device enters a spiral where the payload actively degrades its own execution environment.
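The loop can be sketched as a toy simulation. Every constant here is invented for illustration; only the shape of the output matters, not the values:

```javascript
// Toy model of the throttling spiral. All constants are invented;
// the point is that each round of work takes longer than the last.
function simulateSpiral(rounds) {
  let clock = 1.0;      // relative clock speed (1.0 = cool, unthrottled)
  let heat = 0;         // accumulated heat units
  const durations = [];
  for (let i = 0; i < rounds; i++) {
    const duration = 100 / clock;   // same work takes longer when throttled
    heat += duration * 0.01;        // sustained CPU time generates heat
    clock = Math.max(0.4, 1.0 - heat * 0.05); // heat triggers throttling
    durations.push(duration);
  }
  return durations;
}

const d = simulateSpiral(10);
console.log(d.map((x) => x.toFixed(0)).join(" ")); // monotonically climbing
```

The first round costs 100 units; every subsequent round costs more, because the previous round's heat has already lowered the clock. That monotonic climb is the spiral.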

The CatchJS 2024 study quantified this with uncomfortable precision. React-based sites exhibited 431 per cent more main-thread time on mobile compared to desktop. Not 31. Not 131. Four hundred and thirty-one per cent. The same study measured a 146 per cent increase in frame defects (dropped frames, janky scrolling, input latency) directly attributable to thermal throttling under sustained JavaScript load.

[Diagram: the throttling spiral. Heavy JavaScript → CPU load → heat → thermal throttle → slower execution → longer duration → more heat. +431% main-thread time (mobile vs desktop); +146% frame defects under sustained load.]

The laptop never enters this spiral. It has a fan, a heat sink, and a power supply plugged into the wall. The phone has a lithium cell and whatever heat dissipation the chassis permits. The development environment and the deployment environment share a language and absolutely nothing else.

The Battery Asymmetry

There is a further indignity. Images and video, the assets that actually constitute the majority of page weight, are decoded by dedicated hardware. The GPU handles image decompression. A hardware video decoder handles media playback. These circuits are purpose-built, thermally efficient, and draw a fraction of the power that general-purpose computation requires.

JavaScript has no such luxury. Every byte of it demands the main CPU. Every parse cycle, every function call, every garbage collection pass runs on the same general-purpose cores that the operating system is trying to clock down to preserve battery life. A 500 KB image is decoded by a hardware unit that barely registers on the power budget. A 500 KB JavaScript bundle monopolises the main thread and drains the battery whilst doing so.

The user notices this. Not consciously, not as a technical observation, but as the vague, persistent awareness that certain websites make their phone warm and their battery percentage drop. They do not diagnose the cause. They simply leave. And in the analytics dashboard, this registers as a bounce, attributed to content, to design, to anything except the 558 KB of JavaScript that turned their phone into a hand warmer.

The Application Obesity Epidemic

Nikita Prokopov, better known as Tonsky, in his 2024 survey of application sizes, catalogued the dimensions of the crisis with admirable precision. Slack: 55 MB. LinkedIn: 31 MB. Gmail: 20 MB. These are client-side footprints, the weight your device must download, store, and execute to perform tasks that are, at their core, astonishingly simple.

A chat message is roughly 100 bytes of text. The Slack client required to display that message weighs 55 megabytes. The ratio is 550,000 to 1. Five hundred and fifty thousand bytes of client infrastructure for every byte of actual content. One might call this comprehensive engineering. One might also call it an extraordinary failure of proportionality.

[Callout: a 100-byte message, a 55 MB client, a 550,000:1 overhead ratio. If physical post worked this way, every letter would arrive in a shipping container.]

Gmail requires 20 MB to display an email. The email itself (headers, body, the lot) might be 15 KB. LinkedIn demands 31 MB to show a feed of text posts that average perhaps 500 bytes each. The clients have become several orders of magnitude larger than the content they exist to present. The envelope has consumed the letter.
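The overhead arithmetic, using the client sizes quoted above and decimal units (1 MB = 1,000,000 bytes, so Slack's figure matches the 550,000:1 in the text; the per-item content sizes are the article's own estimates):

```javascript
// Overhead ratio: client footprint divided by the content it displays.
// Decimal units, matching the article's 550,000:1 figure for Slack.
const MB = 1e6;
const KB = 1e3;

const overheadRatio = (clientBytes, contentBytes) =>
  Math.round(clientBytes / contentBytes);

console.log(overheadRatio(55 * MB, 100));      // Slack vs a 100-byte message: 550000
console.log(overheadRatio(31 * MB, 500));      // LinkedIn vs a 500-byte post: 62000
console.log(overheadRatio(20 * MB, 15 * KB));  // Gmail vs a 15 KB email: 1333
```

Even the best case in the list, Gmail, carries more than a thousand bytes of envelope for every byte of letter.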

“Mobile First”

The phrase “mobile first” entered the design lexicon around 2011, when Luke Wroblewski argued that designing for the smallest screen forces clarity of purpose. It was a design principle. By 2026, it has been reduced to a CSS strategy: the same bundle, the same JavaScript, the same framework overhead, delivered to a viewport that is merely narrower.

This is not mobile first. This is desktop-with-media-queries. The device that is slowest at parsing JavaScript, most constrained in memory, most vulnerable to thermal throttling, and most sensitive to battery drain receives the identical payload as a workstation with active cooling and mains power. “Mobile first” in 2026 means the mobile device gets the desktop’s JavaScript bill and a responsive grid to make it look intentional.

To ship 558 KB of JavaScript to a device you know parses it 25 times slower, throttles under the load, and drains its battery in the process is not a technical oversight. It is contempt, expressed through negligence.

The Alternative

The solution is not complicated. It is merely unfashionable.

Server-side device intelligence: the server knows the client. The User-Agent header, the Client Hints API, the Save-Data header. The information is there. A template engine that builds per-device pages can deliver precisely what each client needs: full interactivity for the workstation, a minimal client for the phone, zero JavaScript for the reader who simply wants to read.
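A minimal sketch of that dispatch, using only the headers named above plus the Sec-CH-UA-Mobile client hint. Header keys are assumed lowercased, as Node's http module delivers them; the tier names ("full", "minimal", "none") are invented for illustration, not a standard:

```javascript
// Pick a payload tier from request headers. Tier names are illustrative.
function choosePayload(headers) {
  const ua = (headers["user-agent"] || "").toLowerCase();

  // Save-Data: the user has explicitly asked for less. Honour it.
  if (headers["save-data"] === "on") return "none";

  // Client Hints: modern browsers declare mobile directly.
  if (headers["sec-ch-ua-mobile"] === "?1") return "minimal";

  // Fallback: a coarse User-Agent sniff for older clients.
  if (/mobile|android|iphone/.test(ua)) return "minimal";

  return "full";
}

console.log(choosePayload({ "save-data": "on" }));                 // "none"
console.log(choosePayload({ "sec-ch-ua-mobile": "?1" }));          // "minimal"
console.log(choosePayload({ "user-agent": "Mozilla/5.0 (X11)" })); // "full"
```

Twenty lines of dispatch on the server can spare the phone half a megabyte of parsing.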

The pattern is old. It predates the SPA era. The server renders HTML. The browser displays it. Interactive elements are enhanced progressively, not reconstructed from a JavaScript monolith. The approach has a name, progressive enhancement, and it fell out of favour not because it stopped working, but because it was insufficiently complex to sustain a conference circuit.

[Chart: JavaScript payload, what ships vs what runs. Median website: 558 KB (56% executed, 44% waste). Server rendered: 8.8 KB gzipped (0% waste; direct DOM, no virtual DOM). Sources: httparchive.org, web.dev]

htm/a, the server-side reactivity framework from the Min2Max ecosystem, ships 8.8 KB gzipped. Direct DOM mutation. No virtual DOM. No reconciliation. No hydration ceremony. The server renders the page. The browser displays it. Interactive elements respond without downloading a quarter-megabyte of unused abstractions first.
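Stripped of any framework, server rendering is just string assembly. A generic sketch of the pattern (this is not htm/a's actual API; the function name and escaping helper are invented):

```javascript
// Generic server-side rendering: build the HTML on the server, send it
// down as text. No hydration, no client-side reconstruction.
function renderArticle({ title, paragraphs }) {
  // Escape the three characters that matter in HTML text content.
  const esc = (s) =>
    s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
  const body = paragraphs.map((p) => `<p>${esc(p)}</p>`).join("\n");
  return `<!doctype html>
<html>
<head><title>${esc(title)}</title></head>
<body>
<h1>${esc(title)}</h1>
${body}
</body>
</html>`;
}

const html = renderArticle({
  title: "The Battery Tax",
  paragraphs: ["The median page ships 558 KB of JavaScript."],
});
console.log(html.includes("<h1>The Battery Tax</h1>")); // true
```

The browser can paint this incrementally, before the document has even finished arriving. No bundle has to be parsed before the first word is readable.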

8.8 KB is not a boast. It is an engineering constraint. When the framework must justify every byte, the byte count stays honest. When the framework is permitted to grow without consequence, you get 558 KB and a warm phone.

The Invoice

Let us itemise what the user pays.

- 558 KB of JavaScript, median.
- 245 KB of it never executes.
- A 25× parse-time penalty on mobile.
- 431% more main-thread time for framework-heavy sites on phones.
- 146% more frame defects from thermal throttling.
- A battery that drains because JavaScript has no hardware decoder and every cycle runs on the main CPU.
- Application clients that outweigh their content by a factor of 550,000.

Every one of these numbers is measured. Every one is published. And every one is absorbed by the user, not as a statistic, but as heat in their palm, percentage points draining from their battery, and the quiet frustration of a page that will not respond.

The web was designed to be lightweight. HTML is lightweight. CSS is lightweight. Both can be parsed incrementally, rendered progressively, and displayed before the full document has arrived. JavaScript is the exception. It demands synchronous parsing, monopolises the main thread, generates heat, triggers throttling, and consumes battery. And the industry ships 558 KB of it, median, to devices that were designed to last a day on a single charge.

The invoice is not in your build output. It is in your user’s battery settings, listed under “Safari” or “Chrome,” quietly draining a resource they did not volunteer.