The Last Honest Database
After three decades of churn, Postgres is still the only one willing to tell you the truth — and the quiet reasons why that won’t change anytime soon.
There is a particular kind of fatigue that sets in around year nine of writing software for money. You stop being delighted by features. You stop being persuaded by benchmarks. The marketing decks all blur together — the same diagrams, the same pastel arrows, the same promises about throughput that nobody intends to keep. What you start craving instead is honesty: a piece of software that does what it says, fails the way it promised, and refuses to lecture you about it.
Postgres is that software. It has been that software, quietly, for almost as long as I have been alive. While entire generations of databases came and went — riding waves of NoSQL, NewSQL, distributed-this, serverless-that — Postgres has done the unfashionable work of just continuing to exist, accruing features the way a coral reef accrues calcium: slowly, patiently, irreversibly.
It is not the fastest. It is not the most novel. It is not the database your CTO will write a thinkpiece about. But when the deadline arrives and the data must be correct, somehow it is always Postgres at the end of the wire.
What does honesty mean in a database?
When I say a database is honest, I do not mean it tells you what you want to hear. I mean the opposite. An honest database tells you when a write is durable and when it is not. It tells you when a transaction conflicts. It tells you when your query is going to take ninety seconds, not nine. It does not silently degrade reads, or mask split-brain failures behind a green dashboard, or pretend an eventual consistency window of forty seconds is the same as a fresh read.
The most expensive database in the world is the one you didn’t know was lying to you.
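That honesty is even configurable, and the trade is documented plainly. A minimal sketch — synchronous_commit is a real Postgres setting; the events table is made up for illustration:

SET synchronous_commit = off;
-- COMMIT now returns before the WAL reaches disk; a crash can lose
-- the last few transactions, and the documentation says so outright
INSERT INTO events (payload) VALUES ('fast, not yet durable');

SET synchronous_commit = on;
-- the default: when COMMIT returns, the write has been flushed
INSERT INTO events (payload) VALUES ('durable when COMMIT returns');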
Postgres is honest about its costs. It will not pretend that adding a column to a billion-row table is free; it will let you know, with characteristic understatement, that you are about to take a long lock and that you might want to think about doing it differently. It will not pretend a JOIN over six tables with no indexes is wise. It will tell you, by being slow, that you have made a choice.
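You can even make that choice explicit up front. One common pattern — lock_timeout is a real Postgres setting, the table name here is hypothetical — is to refuse to queue indefinitely behind other sessions:

SET lock_timeout = '2s';
-- fail fast if the lock can't be taken, instead of stalling all readers
ALTER TABLE big_table ADD COLUMN note text;

If the lock cannot be acquired within two seconds, the statement errors out and you retry off-peak — the database told you the cost, and you declined to pay it right then.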
A short, unfair history
POSTGRES — the all-caps ancestor — emerged from Berkeley in the mid-1980s as the second draft of what Michael Stonebraker had started with Ingres. Its premise was unfashionable then and is unfashionable now: relational schemas, transactions, and extensibility, all in one place. SQL came later, pragmatically. Open source came later, gracefully. Replication, JSON, partitioning, generated columns, logical decoding, ICU collations — each arrived without fanfare, often years after the trend that demanded them had already moved on.
What you have to understand is that almost every other database’s release notes read like a startup’s pitch deck. Postgres release notes read like the minutes of a parish council. They describe, with cheerful precision, exactly what changed and why. There is no marketing in them. No one has ever written a Postgres release note that says “blazing fast.”
Things Postgres has, quietly:
- ACID transactions, including for DDL (try ALTER TABLE inside a transaction; it works).
- JSON and JSONB with proper indexing — five years before half its competitors.
- Logical replication that does not require a separate license, vendor, or pact.
- Foreign data wrappers — query MySQL, MongoDB, S3 buckets, anything you can write a wrapper for.
- Extensions — TimescaleDB, PostGIS, pgvector, pg_partman, and a thousand smaller things.
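The first bullet is easy to demonstrate. DDL participates in transactions like any other statement, so a schema change you regret simply never happened (the table name here is hypothetical):

BEGIN;
ALTER TABLE invoices ADD COLUMN due_date date;
-- change your mind: roll back, and the column vanishes with no cleanup
ROLLBACK;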
Each of those things, on its own, would be a reason to pick Postgres. Together, they form a kind of quiet gravitational well that pulls projects back, even after long detours through more fashionable systems.
The migration that doesn’t happen
There is a pattern I have watched play out at maybe a dozen companies. A team adopts something exotic — a graph database, a column store, a global, multi-master, eventually-consistent, geo-replicated wonder. Six months later, they discover that the new system does not handle one of the boring parts of their workload. Twelve months later, they have moved that part to Postgres. Twenty-four months later, the rest follows.
BEGIN;
-- amounts are in cents throughout; debit only if the balance covers it
WITH affected AS (
    UPDATE accounts
       SET balance = balance - 10000
     WHERE id = $1
       AND balance >= 10000
 RETURNING id
)
INSERT INTO transfers (from_id, to_id, cents)
SELECT id, $2, 10000 FROM affected;
COMMIT;

There is nothing in that snippet you couldn’t do twenty years ago. That is the point. Twenty years from now, that snippet will still work, on a Postgres written by people who were children when it was first deployed, on hardware that did not exist when this article was written. Honest software is patient. It outlasts its users.
Why it endures
The shorter answer is governance. Postgres has a development model — small core team, broad contributor base, mailing-list culture, no benevolent dictator — that has aged into a kind of municipal civility. Decisions are slow. Decisions are reversible. Almost nobody is famous, by software standards. Almost nobody is paid by anyone with a strong commercial axe to grind.
It is not the database your CTO will write a thinkpiece about. It is the one waiting at the end of the wire when the deadline arrives.
The longer answer is that Postgres takes correctness seriously in a way that other databases reserve for marketing. There is a culture, traceable in commit messages and IRC archives, of refusing to ship something that might be quietly wrong, even if shipping it would be popular. That culture is rare. It is also why your data is still there in the morning.
I am not telling you to use Postgres for everything. There are workloads where it is wrong, and people who know more than me have written about those at length. I am telling you that when you are tired — when the deadline is real and the data must be correct — you can put your trust in it. It will not lie to you. It will not lecture you. It will just sit there, on a small server in a small room, and tell you the truth.
That, in 2026, is rarer than it should be.
Database engineer in Brooklyn. Writes about the unfashionable parts of data systems. Currently writing a book on Postgres internals.