Lean Web · Episode 01
“Minimise HTTP requests. Bundle everything into one file. Otherwise your page gets too slow!”
This was excellent advice. In 2010.
In 2007, Steve Souders published High Performance Web Sites, the book that codified front-end optimisation as a discipline. Rule number one, printed in bold, underlined by every conference talk that followed: Minimise HTTP Requests. Concatenate your scripts. Merge your stylesheets. Sprite your images. Every request is a round trip. Every round trip is latency. Every millisecond is a user you might lose.
The rule was correct. The protocol made it so. HTTP/1.1 imposed a strict constraint: one request in flight per connection at a time, processed sequentially, with a practical ceiling of six parallel connections per origin in most browsers. Six files could load simultaneously. The seventh queued. The eighth queued behind the seventh. A page with forty assets was a waterfall of serial waiting, and the only sensible response was to reduce the number of waterfalls by reducing the number of files.
Bundling was not a choice. It was engineering under constraint.
Then the constraint was removed. And the industry kept bundling.
The Protocol That Changed the Equation
In May 2015, the IETF published RFC 7540, the specification for HTTP/2. The headline feature was multiplexing: the ability to send and receive multiple requests and responses simultaneously over a single TCP connection, without head-of-line blocking at the application layer.
The implications were immediate and structural. Under HTTP/1.1, each request occupied an entire connection for its duration. A browser loading a CSS file could not use that connection for anything else until the response completed. Six connections, six files, everything else in the queue. Under HTTP/2, a single connection carries hundreds of concurrent streams. Each stream is an independent request-response pair, interleaved at the frame level. There is no queue. There is no waiting. The connection simply handles what you give it.
Alongside multiplexing, HTTP/2 introduced HPACK header compression. HTTP/1.1 headers are plain text, repeated in full on every request. A typical request carries 500 to 800 bytes of headers, cookies included. Multiply that by forty assets and the overhead is considerable. HPACK compresses headers using a static table of common fields and a dynamic table of previously seen values. The second request for the same origin carries a fraction of the header cost. By the tenth request, the headers are virtually free.
The result is a protocol that actively rewards what HTTP/1.1 punished. Under the old regime, forty files meant queueing, latency, and wasted connections. Under the new one, forty files arrive in parallel, compressed, over a single connection. The tax on small files is gone. The incentive to concatenate disappeared with it.
The Rule That Inverted
Souders’s first rule was not wrong. It was correct for its era, and spectacularly so. HTTP/1.1 was the dominant protocol from 1997 to roughly 2020, and during those two decades, minimising requests was the single most effective performance optimisation a front-end developer could make. Sprite sheets, concatenated scripts, inlined CSS: all of it made measurable differences on real connections for real users.
But protocols change. And when the underlying constraint disappears, the rules derived from that constraint do not become neutral. They invert.
Under HTTP/2, many small files outperform one large bundle. Not marginally. Structurally. The reason is not the protocol alone. It is what the protocol enables downstream: granular caching.
The Cache Invalidation Problem
Consider a typical bundled application. Your CSS, JavaScript, and possibly some templates are concatenated into a single file. It might be 350 KB. It might be 500 KB. The number hardly matters; what matters is the unit of invalidation.
You deploy a fix. One line of CSS changes. A border radius. A colour value. Two kilobytes of actual difference. But the browser does not know that. The browser knows one thing: the hash of the bundle changed. The entire file is stale. The entire file must be downloaded again. Your user, on a mobile connection, on a train, on the underground, re-downloads 500 KB because you changed two.
Now consider the same application split into thirty modules. You deploy the same fix. The same border radius. The same colour value. One file changes. One file gets a new hash. Twenty-nine files remain cached. The browser fetches 2 KB. The other 498 KB never leave the cache.
This is not a marginal improvement. It is a 250x reduction in transfer size for the most common deployment scenario: a small fix to a large application. And it requires no tooling. No configuration. No plugin. Just the file structure the protocol was designed to serve.
The Adoption Nobody Noticed
HTTP/2 browser support stands at 97% globally. Every major browser has supported it since 2015. Chrome, Firefox, Safari, Edge: all of them negotiate HTTP/2 automatically when the server supports it, which, in 2026, virtually every server does.
Nginx has served HTTP/2 since version 1.9.5, released in September 2015. Caddy has served HTTP/2 by default since its first release. Apache supports it via mod_http2. Cloudflare enables it automatically for every domain behind its proxy. AWS CloudFront, Fastly, Akamai: all of them speak HTTP/2 without configuration.
The protocol is not a proposal. It is not behind a flag. It is not experimental. It has been the default transport layer of the web for the better part of a decade. The transition happened so quietly that most developers never noticed. They certainly never revisited the assumptions built on the protocol it replaced.
What Bundlers Still Do Well
This is not, to be clear, a funeral notice for Webpack. Bundlers perform three operations that remain genuinely valuable regardless of the protocol.
Tree-shaking. If your application imports a utility library of two hundred functions and uses three, a bundler can eliminate the other 197 from the output. The browser cannot. Dead code elimination at build time is a real optimisation, and HTTP/2 does nothing to replicate it.
Minification. Removing whitespace, shortening variable names, and stripping comments reduces file size. Gzip and Brotli handle compression at the transport layer, but minification reduces the input to the compressor. The gains compound.
Transpilation. If your codebase uses syntax that not all target browsers understand, a build step transforms it. This is increasingly rare (ES2015 support is universal, and most modern syntax has shipped in all evergreen browsers) but it remains a legitimate use case for organisations supporting older environments.
These are real capabilities. They are not, however, the reason most teams configure a bundler. Most teams configure a bundler because “that is how it is done.” The boilerplate is copied from a starter template. The configuration is inherited from a previous project. The assumption is never questioned because the assumption was correct once, and once is apparently sufficient.
What Bundlers No Longer Need to Do
Reduce request count. This was the original purpose. This was the reason webpack.config.js exists. This was Rule #1 in the book that defined the discipline. And it is obsolete. HTTP/2 multiplexing handles hundreds of parallel requests over a single connection. The protocol solved the problem. The tooling that solved it before the protocol did has not noticed.
Concatenate for performance. Under HTTP/1.1, fewer files meant fewer connections, fewer round trips, and faster page loads. Under HTTP/2, the relationship has inverted. More files (within reason) means better caching, smaller invalidation units, and faster subsequent visits. Concatenation is no longer an optimisation. It is a de-optimisation, a deliberate choice to make cache invalidation coarser than the protocol requires.
The Archaeology of Best Practices
The industry has a particular talent for preserving advice long after its expiration date. CSS sprites survived until 2020 in production codebases, a full five years after HTTP/2 made them unnecessary. Inline CSS for “above-the-fold” content persists in Lighthouse recommendations despite HTTP/2 server push (now deprecated, admittedly) and preload hints making the technique redundant for most cases.
Bundling follows the same pattern. The practice originated from a genuine constraint. The constraint was removed. The practice persisted. Not because it was re-evaluated and found still useful. Because it was never re-evaluated at all.
Souders wrote his book in 2007. The web he optimised ran on HTTP/1.1, IE6, and dial-up connections that charged by the minute. The web of 2026 runs on HTTP/2 (and increasingly HTTP/3), evergreen browsers with native ES Module support, and connections that handle hundreds of parallel streams without blinking. Applying 2007 rules to 2026 infrastructure is not cautious engineering. It is cargo culting.
The Invoice
What bundling costs in 2026: a build step that adds seconds or minutes to every deployment; a configuration file that nobody fully understands; a cache invalidation strategy that punishes users for your deployment frequency; a node_modules directory that exists primarily to support the concatenation of files that the protocol can serve in parallel.
What HTTP/2 provides for free: multiplexed loading of as many files as you need, HPACK header compression, granular caching with per-file invalidation, and a protocol that every server, every CDN, and every browser has spoken natively for over a decade.
The fastest bundle might be no bundle at all.
HTTP/2 has been the default since 2015. The rule inverted itself a decade ago. The question is not whether your server supports it. The question is why your build pipeline still assumes it does not.