Vivian Voss

The Pipe

unix architecture tooling

Technical Beauty ■ Episode 27

In 1964, Douglas McIlroy wrote a memo at Bell Labs. It contained one analogy that would reshape how programmes communicate:

We should have some ways of coupling programs like garden hose: screw in another segment when it becomes necessary to massage data in another way.

Nine years passed. Then Ken Thompson finally implemented it, in a single night. One character: |

One does appreciate a nine-year requirements phase followed by a one-night delivery. Most organisations achieve the inverse.

Before the Pipe

Before February 1973, connecting two programmes meant writing the output of the first to a temporary file, then reading that file as input to the second. Every composition required an intermediary. Every intermediary required disk space. Every temporary file required cleanup. Every cleanup was, naturally, forgotten.

[Diagram: before pipes (pre-1973). Programme A writes to disk, producing a temporary file such as /tmp/data.txt; Programme B reads it back. Two steps. One intermediary. Cleanup optional. One suspects the temp files are still on someone's server.]

The process was cumbersome, error-prone, and utterly at odds with the Unix instinct for economy. McIlroy had seen the problem in 1964. He wanted something simpler. Thompson gave him simpler.

The Overnight Implementation

On the evening of 10 February 1973, Thompson sat down and added pipes to Unix Version 3. By the next morning, they worked. McIlroy immediately rewrote every programme in /bin to read from standard input and write to standard output, making them composable. The reaction was unanimous:

The next day, there were pipes, and there were several of us just sitting there saying, "God! Why didn't we think of this before?"

One night. One system call. The result: the most enduring composition model in computing.

The Mechanism

A pipe is a kernel-managed buffer. The writer puts bytes in. The reader takes bytes out. When the buffer is full, the writer waits. When the buffer is empty, the reader waits. When the writer closes its end, the reader receives EOF. That is the entire protocol.
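The protocol is small enough to demonstrate end to end. A minimal sketch in Python, whose os.pipe wraps the pipe(2) system call:

```python
import os

# Create a kernel pipe: r is the read end, w is the write end.
r, w = os.pipe()

os.write(w, b"hello")    # bytes go into the kernel buffer
os.close(w)              # writer closes its end

print(os.read(r, 1024))  # b'hello' -- the buffered bytes
print(os.read(r, 1024))  # b'' -- EOF: the writer is gone
os.close(r)
```

Write, read, close, EOF. There is no step four.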

[Diagram: after pipes (1973). Programme A's stdout feeds a kernel buffer (64 KB on Linux, 16 KB on FreeBSD), which feeds Programme B's stdin. Buffer full: writer waits. Buffer empty: reader waits. Writer closes: reader gets EOF. No temporary files. No disk I/O. No configuration. API surface: read() and write(). That is all.]

Consider a practical example:

cat access.log | grep 404 | sort | uniq -c | sort -rn

Five programmes. No temporary files. No configuration. No API contract. Each programme reads stdin, writes stdout. The pipe connects them. The kernel handles the buffer. The entire interface specification is: bytes in, bytes out.

One rather struggles to find the documentation. There is nothing to document.
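The wiring the shell performs with | can be reproduced by hand. A sketch using Python's subprocess module, feeding a few hypothetical log lines through the tail of the pipeline (it assumes grep, sort, and uniq are on the PATH):

```python
import subprocess

# Recreate `grep 404 | sort | uniq -c` by wiring each child's stdout
# to the next child's stdin -- exactly what the shell does with |.
log = b"GET /a 404\nGET /b 200\nGET /a 404\n"

grep = subprocess.Popen(["grep", "404"],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE)
sort = subprocess.Popen(["sort"],
                        stdin=grep.stdout, stdout=subprocess.PIPE)
uniq = subprocess.Popen(["uniq", "-c"],
                        stdin=sort.stdout, stdout=subprocess.PIPE)

grep.stdin.write(log)
grep.stdin.close()    # grep sees EOF and exits
grep.stdout.close()   # so sort sees EOF once grep is gone
sort.stdout.close()   # so uniq sees EOF once sort is gone

# The two 404 lines collapse to a single counted line.
print(uniq.communicate()[0].decode())
```

Each Popen call is fork plus pipe plus exec; the shell merely spares you the typing.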

The Contract

A pipe enforces one rule: bytes in, bytes out. No objects. No serialisation format. No schema negotiation. No version compatibility. Each tool is entirely ignorant of its neighbours. grep does not know that sort follows. sort does not care what preceded it.

[Diagram: five tools, zero knowledge of each other. cat | grep | sort | uniq | sort — bytes flow left to right, unmodified by the pipe. The thinnest interface produces the widest reuse.]

This constraint is the design. The less a tool assumes about its neighbours, the more tools it composes with. McIlroy understood in 1964 what microservices are still learning in 2026: the thinnest possible interface produces the widest possible reuse. A tool that expects JSON cannot compose with one that emits CSV. A tool that expects plain text composes with everything.
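What such a tool looks like from the inside can be sketched in a few lines of Python (the filter_lines helper is illustrative, not any standard API):

```python
import sys

def filter_lines(lines, needle):
    # The entire contract: lines in, matching lines out.
    # No knowledge of what produced the input or what consumes the output.
    return (line for line in lines if needle in line)

if __name__ == "__main__":
    # Wired into a pipeline, stdin and stdout are the only interface.
    sys.stdout.writelines(filter_lines(sys.stdin, "404"))
```

Drop this between any two pipes and it composes, because it assumes nothing.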

What It Replaced

Monolithic programmes that did everything: read, parse, transform, filter, format, write. Each one a kingdom. None of them composable. Rather like enterprise software, come to think of it.

The pipe turned programmes into functions. Small, single-purpose, stateless. The shell became the orchestrator. The filesystem became the only state. No dependency injection, no service mesh, no API gateway. Just bytes flowing between tools that each do one thing well.

Named Pipes

Named pipes (FIFOs) arrived with System III in 1982. An anonymous pipe exists only between a parent process and its children. A named pipe exists as a file in the filesystem. Any process that can open the file can read from it or write to it. Same mechanism. Same bytes. No shared ancestor required.

mkfifo /tmp/stream
tail -f /var/log/syslog > /tmp/stream &
grep error < /tmp/stream

A persistent channel between unrelated processes, implemented as a file. One does admire the Unix instinct: when in doubt, make it a file.
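The rendezvous semantics are worth seeing: opening a FIFO blocks until the other side shows up. A sketch in Python (POSIX-only; the path and message are made up for the example):

```python
import os
import tempfile
import threading

# A named pipe is the same mechanism given a filesystem name.
path = os.path.join(tempfile.mkdtemp(), "stream")
os.mkfifo(path)

def producer():
    # open() for writing blocks until some reader opens the FIFO
    with open(path, "w") as w:
        w.write("error: disk full\n")

t = threading.Thread(target=producer)
t.start()

# open() for reading blocks until some writer opens the FIFO;
# read() returns once the writer closes its end (EOF).
with open(path) as r:
    print(r.read(), end="")

t.join()
os.remove(path)
```

Two parties that share nothing but a pathname, synchronised by the kernel.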

The Numbers

Pipe internals:

  Kernel buffer      64 KB (Linux), 16 KB (FreeBSD)
  Context switches   zero while the buffer has space
  API surface        read() and write()
  Protocol           bytes in, bytes out

Latency is kernel-mediated: no intermediate files, no disk I/O.

The kernel buffer on Linux defaults to 64 KB. On FreeBSD, 16 KB. Context switches happen only when the buffer is full or empty. The system call interface is read() and write(). Nothing else. No configuration file. No flags. No negotiation phase. The pipe is not a feature with options. It is a constraint with consequences.
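The Linux buffer size is not folklore; the kernel will tell you directly. A sketch using the Linux-specific F_GETPIPE_SZ fcntl (the numeric constant is from <linux/fcntl.h>; this will not work on other systems):

```python
import fcntl
import os

F_GETPIPE_SZ = 1032   # Linux-only fcntl command, from <linux/fcntl.h>

r, w = os.pipe()
print(fcntl.fcntl(w, F_GETPIPE_SZ))   # typically 65536 on Linux
os.close(r)
os.close(w)
```

A companion F_SETPIPE_SZ (1031) exists for resizing, which is as close to configuration as a pipe gets.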

Fifty-Three Years

Every shell script. Every CI pipeline. Every data transformation that chains two tools together. The pipe is not a feature of Unix. It is the idea of Unix, made visible. Small tools, loosely joined, communicating through the simplest possible interface.

Fifty-three years later, this is still the most elegant composition model in computing. No framework has improved upon "bytes in, bytes out." Several have tried. Their node_modules folders speak for themselves.

McIlroy's garden hose from 1964, Thompson's implementation from 1973, your terminal from this morning. Same character. Same mechanism. Same bytes. Quite reassuring in an industry that rewrites everything every eighteen months.

Technical beauty emerges from reduction.