The Replacement · Episode 9
“Write once, run on any database!”
A marvellous promise. One rather spoilt by a single follow-up question: when did you last switch databases?
Not “considered switching.” Not “discussed switching in a meeting that ran forty minutes over.” Actually switched. Migrated the schema, rewrote the queries, verified the data, deployed to production, went home. The answer, for the overwhelming majority of projects, is never. You chose PostgreSQL three years ago. You use JSONB, array fields, full-text search. You are not switching. The ORM’s headline feature, database portability, is insurance against an event that will not occur.
The Pitch
The Object-Relational Mapper arrives with four promises, each delivered with the quiet confidence of a salesman who has memorised the brochure but never driven the car:
No SQL knowledge required. You learn a domain-specific language instead. A DSL that works with this ORM and this ORM alone. Prisma’s syntax is not Sequelize’s syntax is not TypeORM’s syntax is not SQLAlchemy’s syntax. SQL, meanwhile, is SQL everywhere. It has been the same language since Chamberlin and Boyce published SEQUEL in 1974. You learn it once. It works in every relational database on the planet.
Database-agnostic. See above. You are not switching.
Safer queries. Parameterised queries prevent SQL injection. This is true, and it is also true of every modern database driver without an ORM. The postgres.js tagged template literal parameterises automatically. Safety is not an ORM feature. It is a driver feature.
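The mechanism is worth seeing once. The sketch below uses a stand-in `sql` tag function, not the real postgres.js driver (which sends parameters separately over the wire protocol rather than building placeholder text in JavaScript); the point is that the tagged template keeps values out of the query text entirely, with no ORM in sight.

```javascript
// Stand-in for a driver's tagged template (illustrative, not the
// real postgres.js implementation): values never get concatenated
// into the query text, so injection has nowhere to land.
function sql(strings, ...values) {
  // Rebuild the query with positional placeholders: $1, $2, ...
  const text = strings.reduce(
    (query, part, i) => query + (i > 0 ? `$${i}` : "") + part,
    ""
  );
  return { text, values };
}

const name = "Robert'); DROP TABLE users;--";
const query = sql`SELECT id FROM users WHERE name = ${name}`;

console.log(query.text);   // SELECT id FROM users WHERE name = $1
console.log(query.values); // the hostile string travels as data, not SQL
```

The hostile string rides along in `values`, where the database treats it as an opaque parameter; the query text itself never changes shape.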
Less code. This one deserves a diagram.
Eight lines for a single table lookup in Prisma: the model definition, the schema file, the generated client. Three lines in raw SQL. Add a relation and the ORM balloons to fifteen lines plus tooling; the SQL grows to seven. “Less code” is less code only if you do not count the code the ORM generates, the code the ORM requires you to configure, and the code you will eventually write in .raw() when the abstraction fails.
The Abstraction Leak
Every ORM follows the same trajectory. The first week is beautiful. Simple queries, clean syntax, no SQL in sight. The second week, you need a JOIN. The ORM handles it, with some additional ceremony. The third week, you need a subquery. The documentation grows vague. The fourth week, you need a window function, a recursive CTE, or a database-specific feature. The ORM offers .raw(), a method whose existence is an admission that the abstraction has limits, and whose usage grows monotonically with the complexity of your application.
The layering is not metaphorical; this is the actual execution path. Three layers of indirection sit between your code and the query the database actually runs. Each layer adds overhead, cognitive and computational. Each layer obscures what is happening. And the database, blissfully unaware of your architectural preferences, sees only the SQL at the centre.
The Hidden Costs
An ORM does not merely abstract the query. It abstracts the consequences.
N+1 queries. The most common ORM performance problem, and the most invisible. You fetch a list of orders. For each order, the ORM lazily fetches the customer. One query becomes one hundred and one queries. The code looks clean. The database logs tell a different story. An EXPLAIN ANALYZE would have caught it instantly. But you are not writing SQL, so you are not running EXPLAIN ANALYZE.
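The pattern is easier to see with the query counter made explicit. This is an in-memory stand-in for lazy loading, not any particular ORM’s API; the function names and data are illustrative.

```javascript
// In-memory stand-in for ORM lazy loading, with the query count
// made visible. (fetchOrders/fetchCustomer are illustrative names,
// not a real ORM API.)
let queryCount = 0;

const customers = { 1: "Ada", 2: "Grace" };
const orders = [
  { id: 101, customerId: 1 },
  { id: 102, customerId: 2 },
  { id: 103, customerId: 1 },
];

function fetchOrders() {        // stands in for: SELECT * FROM orders
  queryCount++;
  return orders;
}
function fetchCustomer(id) {    // SELECT * FROM customers WHERE id = $1
  queryCount++;
  return customers[id];
}

// The lazy-loading shape: one query for the list, then one more
// per row. The code reads cleanly; the count does not.
for (const order of fetchOrders()) {
  fetchCustomer(order.customerId);
}
console.log(queryCount); // 4 queries for 3 orders: 1 + N
```

One JOIN would have answered the same question in a single query; the ORM’s clean-looking loop issued four, and would issue a hundred and one for a hundred orders.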
Missing indices. The ORM generates queries you did not write. You cannot optimise queries you have not seen. A missing index on a foreign key column turns a millisecond lookup into a full table scan, and nothing in the ORM’s API tells you this is happening.
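The foreign-key case is concrete enough to spell out. PostgreSQL automatically indexes primary keys and unique constraints, but not the referencing column of a foreign key; the fix, once you can see the query, is one line of DDL (table and column names here are illustrative):

```sql
-- PostgreSQL indexes the referenced side of a foreign key (the
-- primary key), not the referencing column. Without this index,
-- every lookup of a customer's orders scans the whole table.
-- (Names are illustrative.)
CREATE INDEX orders_customer_id_idx ON orders (customer_id);
```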
Full table scans. Abstracted away behind a method call that looks identical whether it touches ten rows or ten million. The ORM treats .findAll() the same regardless of table size. The database does not.
These are not edge cases. They are the ordinary, predictable consequences of hiding the query from the person responsible for its performance.
The Portability Lie
Let us examine the central claim with the respect it deserves, which is to say: briefly.
Database portability assumes you are writing database-agnostic SQL. You are not. You are using PostgreSQL’s JSONB operators. You are using its array fields. You are using ILIKE, RETURNING, ON CONFLICT, and half a dozen other PostgreSQL-specific features that your ORM either wraps imperfectly or exposes through dialect-specific escape hatches.
The ORM’s portability layer works precisely until you need a feature that differentiates one database from another, which is to say, precisely until you need a feature worth having. At that point, you write raw SQL inside the ORM, defeating the portability that justified the ORM’s existence.
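ON CONFLICT is a fair specimen of a feature worth having. A PostgreSQL upsert does in one statement, atomically, what portable SQL needs a read-then-write round trip for (table and column names here are illustrative):

```sql
-- PostgreSQL-specific upsert: ON CONFLICT and RETURNING in one
-- atomic statement. Portable SQL requires a SELECT followed by a
-- conditional INSERT or UPDATE. (Names are illustrative.)
INSERT INTO users (email, name)
VALUES ('ada@example.com', 'Ada')
ON CONFLICT (email) DO UPDATE SET name = EXCLUDED.name
RETURNING id, email;
```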
The curve is not speculative. Every ORM project follows it. The abstraction holds for simple CRUD. The moment complexity arrives (and complexity always arrives) the ORM’s effort curve steepens while SQL’s remains linear. Past the .raw() threshold, you are maintaining two query languages: the ORM’s DSL for the simple bits, and SQL for the bits that actually matter.
The Replacement
SQL. That is the replacement. Not a framework. Not a library. Not a DSL that generates what you could have written directly.
EXPLAIN ANALYZE works. You see the query plan. You see the sequential scans. You see the index hits, the row estimates, the actual execution time. Nothing is hidden. Nothing is abstracted. The database tells you exactly what it is doing, and you make informed decisions based on evidence rather than hope.
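Using it costs one keyword, typed in front of the query you already have (the query itself is illustrative):

```sql
-- Prepend EXPLAIN ANALYZE to any query to execute it and see the
-- real plan: scan type, estimated vs actual rows, execution time.
-- (The query is illustrative.)
EXPLAIN ANALYZE
SELECT id, name, email
FROM users
WHERE active = true;
```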
Modern database drivers have made raw SQL ergonomic. postgres.js uses tagged template literals for automatic parameterisation:
const users = await sql`
  SELECT id, name, email
  FROM users
  WHERE active = ${true}
`
Three lines. Parameterised. Type-safe at the protocol level. No model definition. No schema file. No code generation step. No build tooling. No migration framework that generates SQL you then have to debug anyway. Just the query, visible, auditable, and running exactly as written.
For migrations, dbmate handles schema versioning in plain SQL files. For connection pooling, PgBouncer has done the job for two decades. The Unix philosophy applies: small tools, each doing one thing well, composed as needed. The ORM bundles all of these into one tool that does each of them less well.
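A dbmate migration, for instance, is nothing more than a SQL file with two comment markers (the filename convention is dbmate’s; the schema itself is illustrative):

```sql
-- db/migrations/20240101000000_create_users.sql
-- A dbmate migration is a plain SQL file split into up and down
-- sections by comment markers. (The schema is illustrative.)

-- migrate:up
CREATE TABLE users (
  id     bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  email  text NOT NULL UNIQUE,
  name   text NOT NULL,
  active boolean NOT NULL DEFAULT true
);

-- migrate:down
DROP TABLE users;
```

No DSL, no code generation: the SQL you review in the migration is the SQL the database runs.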
The Fair Concession
ORMs reduce boilerplate for CRUD-heavy applications with simple data models. If your application is a todo list with five tables and no joins beyond a foreign key, an ORM will serve you adequately. The abstraction holds because the complexity never arrives.
The problem is scope. ORMs are not marketed as “suitable for simple CRUD applications with predictable query patterns.” They are marketed as database abstraction layers for production applications, applications that will, inevitably, outgrow the abstraction. The ORM does not scale down gracefully. It accumulates .raw() calls like barnacles on a hull, each one a quiet confession that the abstraction was not sufficient.
The Verdict
ORMs hide complexity instead of eliminating it. They promise database portability you will never use. They obscure queries you should understand. They generate code you cannot optimise. They introduce N+1 problems you cannot see. They grow in effort faster than the SQL they replace.
SQL has been the interface to relational data since 1974. It is declarative, portable across every relational database, and understood by every database tool ever written. EXPLAIN ANALYZE works. The query plan is visible. The performance is measurable. The language is stable.
Your ORM will be deprecated before your queries are.