Vivian Voss

Containerise Everything

docker kubernetes devops cloud

15 March 2013. PyCon, Santa Clara. Solomon Hykes steps onto the stage for a five-minute lightning talk entitled “The Future of Linux Containers.”

The problem was genuine. Shipping code between environments broke things. Different library versions, different kernel configurations, different ideas of what /usr/local was for. Docker wrapped Linux containers into a portable format: build once, ship anywhere. For deployment, it was, and remains, genuinely brilliant.

Then the industry decided it was equally brilliant for development.

One does wish they had checked.

The Original

Package your application. Ship the image. The server runs precisely what you built. No surprises from the host, no dependency drift, no “works on my machine” followed by the predictable silence of someone who knows it does not work on anyone else’s.

Key assumption: the target is a Linux server. Docker uses kernel features (namespaces, cgroups) that exist natively in the Linux kernel. Runs directly on the host. No overhead worth mentioning.
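The deployment contract is small enough to sketch in full. Assuming a Node.js service (the filenames and port here are illustrative, not from any particular project), the whole of "build once, ship anywhere" fits in a dozen lines:

```dockerfile
# Build once, on any machine; the image carries its own userland.
FROM node:20-slim

WORKDIR /app

# Install exactly the dependencies recorded in the lockfile.
COPY package.json package-lock.json ./
RUN npm ci --omit=dev

# Copy the application source into the image.
COPY . .

EXPOSE 3000
CMD ["node", "server.js"]
```

Ship that image to a Linux host and it runs against the host's kernel directly. This is the case where the abstraction costs essentially nothing.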

The Copy

By 2016, docker-compose up had become the default onboarding instruction. Clone the repo. Run compose. Make tea. Wait rather longer than the tea requires.
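The onboarding ritual usually looked something like this. A representative docker-compose.yml (service names and versions invented for illustration):

```yaml
services:
  app:
    build: .
    ports:
      - "3000:3000"
    volumes:
      # Bind mount: source code shared into the container,
      # which on macOS means across a VM boundary.
      - .:/app
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: dev
```

One command, two containers, and every file read in your project now has an opinion about virtualisation.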

Docker was no longer for shipping to servers. Docker was for developing on your MacBook. Which does not run Linux.

The Missing Context

This is where the pattern fractures. The original context (a Linux server) and the copied context (a developer’s laptop running macOS) are architecturally incompatible in ways that compound with every layer of abstraction.

[Diagram: two stacks compared. Linux server: Application → Docker (native) → Linux kernel; direct, no overhead, filesystem latency measured in nanoseconds. macOS developer laptop: Application → Docker Engine → Linux VM → macOS / Apple Silicon; a VM boundary, 2.5x slower.]

On Linux, Docker runs natively. The filesystem is local. Latency is measured in nanoseconds. On macOS, Docker runs a Linux virtual machine, first HyperKit, then Apple Virtualisation Framework, now Docker VMM. Bind mounts cross a virtualisation boundary. The filesystem is no longer local; it is negotiated.

The numbers, courtesy of Paolo Mainardi’s 2025 benchmarks on an M4 Pro:

npm install with a bind mount (M4 Pro, 2025):

  Native macOS   3.4s
  Docker VMM     8.5s  (2.5x)
  Docker VZ      9.5s  (2.8x)

Source: paolomainardi.com

And then there is the matter of what you are actually downloading.

node:20 image sizes versus the actual runtime:

  Full (Debian)    1.1 GB
  Slim             200 MB
  Alpine           130 MB
  Node.js tarball  28 MB

Source: snyk.io, nodejs.org

node:20 ships an entire Debian installation at 1.1 GB. The Node.js tarball from nodejs.org is 28 MB. An entire operating system to run JavaScript. On your own machine. Where JavaScript already runs.

The Cascade

The truly instructive part is not any single cost. It is the sequence: each step presented as the natural, reasonable consequence of the one before, each one adding friction that was not in the original estimate.

"Use Docker for consistent environments"
→ "Also for local development"
→ "On macOS, it runs a Linux VM" (architecturally required)
→ "The VM reserves 50% of RAM by default" (4 GB idle, zero containers)
→ "Bind mounts are 2.5x slower" (hot reload requires patience)
→ "Images are 1.1 GB" (the internet, downloaded twice)
→ "Use Alpine to shrink them" (musl breaks native modules)
→ "Use multi-stage builds" (40 lines of workarounds)
→ "Docker Desktop is now paid" (250+ employees, August 2021)
→ "Switch to Podman / OrbStack" (alternatives to the alternative)
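For the curious, the multi-stage patch in the cascade looks roughly like this. A sketch, not anyone's production file: build with the full toolchain, then copy only the artefacts into a slim image.

```dockerfile
# Stage 1: build with the full image (compilers, headers, the lot).
FROM node:20 AS build
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: ship only the runtime and the built output.
FROM node:20-slim
WORKDIR /app
COPY --from=build /app/node_modules ./node_modules
COPY --from=build /app/dist ./dist
CMD ["node", "dist/server.js"]
```

A genuinely useful feature, and also a workaround for bloat the tool introduced in the first place.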

You are running a Linux VM on your Mac to execute a process that runs natively on your Mac. The engineering equivalent of driving to your neighbour’s house via Heathrow.

The Timeline

Context matters not merely technically, but historically. Docker did not arrive as a development tool. It arrived as a deployment tool and was gradually reinterpreted, a pattern this series has encountered rather frequently.

2013: PyCon lightning talk. Five minutes.
2014: Docker acquires Orchard. Fig becomes Compose.
2017: Multi-stage builds. A patch for image bloat.
2018: Solomon Hykes leaves Docker.
2021: Docker Desktop licensing change. Paid for 250+ employees.

The Irony

Nobody questioned whether the tool for shipping to servers was also the right tool for writing code on a laptop. It said “consistent environments” on the tin. That was apparently sufficient.

Kelsey Hightower put it rather well: “You haven’t mastered a tool until you understand when it should not be used.”

Docker for deployment: genuine solution, genuine problem. Docker for local development on non-Linux machines: a deployment tool cosplaying as a development environment.

The README That Could Have Been

The README could say:

npm install && npm start

But that would require trusting developers to install software on their own machines. And we stopped doing that the moment “consistency” became more important than “working.”
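What npm install && npm start actually assumes is nothing more exotic than a start script. A minimal, hypothetical package.json:

```json
{
  "name": "my-app",
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "express": "^4.18.0"
  }
}
```

No VM, no image, no bind mount. The runtime you already have, running the code you already wrote.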

Docker is a marvellous deployment tool. The critique is not Docker itself. It is the assumption that a deployment tool is automatically a development tool. Use Docker for what it was designed for: shipping to Linux servers. For local development: install your runtime. Run your code. Trust your machine.