Choosing PostgreSQL over MongoDB for a wholesale operations platform
We considered MongoDB early in the design phase. Here's the specific reasoning that led us to PostgreSQL, and the one area where we think the MongoDB crowd has a genuine point.
Early in the design phase for Baseframe, we had a real debate about the database. The data model has some genuinely document-like elements: invoice line items, audit log snapshots, customer contact metadata. MongoDB would have been a defensible choice. We chose PostgreSQL anyway.
The domain is relational. Full stop.
A wholesale operations platform has a data model that is deeply, unavoidably relational.
An invoice belongs to a customer. An invoice has line items. Each line item references a SKU. Each SKU has a cost and pricing tiers. A payment is applied to an invoice. A return references line items on an invoice. A ticket is associated with a customer and may reference a specific invoice. An audit log entry references the user who made the change.
These aren't loose associations. They're foreign key relationships with referential integrity requirements. An invoice line item must reference a valid SKU. A payment must reference a valid invoice. An audit record must reference a valid user.
In PostgreSQL, you declare these constraints in the schema and the database enforces them. You can't orphan an invoice line item by deleting its SKU. You can't apply a payment to an invoice that doesn't exist.
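As a sketch of what that looks like (table and column names here are illustrative, not our actual schema):

```sql
-- Illustrative sketch, not the actual Baseframe schema.
CREATE TABLE invoices (
  id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
  customer_id uuid NOT NULL REFERENCES customers(id)
);

CREATE TABLE payments (
  id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
  -- A payment cannot reference a nonexistent invoice, and an
  -- invoice with payments cannot be deleted out from under them.
  invoice_id uuid NOT NULL REFERENCES invoices(id) ON DELETE RESTRICT,
  amount_cents bigint NOT NULL CHECK (amount_cents > 0)
);
```

The `REFERENCES` and `CHECK` clauses are the whole point: the invariants live in the database, not in whichever code path happens to do the insert.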
In MongoDB, you can express relationships — you'd store an invoice ID in the payment document — but the database doesn't enforce them. Nothing prevents you from creating a payment that references an invoice ID that doesn't exist. Nothing prevents you from deleting a customer who has outstanding invoices. The application has to enforce referential integrity, which means the application has to be perfect in a way that application code rarely is.
For a financial system, that's not a theoretical concern. It's a data integrity guarantee we'd have to write ourselves in application code, test exhaustively, and maintain forever. PostgreSQL gives us that guarantee for free.
Transactions matter when money is involved
When a customer pays an invoice, several things need to happen atomically:
1. The payment record is created
2. The invoice status changes to "paid"
3. The inventory adjustment is committed (reserved stock becomes depleted stock)
4. An audit log entry is written
Either all of these happen, or none of them do. If the server crashes between steps 2 and 3, the invoice shows as paid but the inventory is still reserved rather than depleted. That's a data integrity problem that compounds over time.
PostgreSQL has first-class transaction support. Wrap those four operations in a transaction and the database guarantees atomicity. All or nothing.
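A minimal sketch of that flow, with illustrative table and column names (the parameters `$1` through `$5` stand in for application-supplied values):

```sql
-- Hypothetical payment flow; table and column names are illustrative.
BEGIN;

INSERT INTO payments (invoice_id, amount_cents)
VALUES ($1, $2);

UPDATE invoices SET status = 'paid' WHERE id = $1;

UPDATE inventory
SET reserved = reserved - $3,
    depleted = depleted + $3
WHERE sku_id = $4;

INSERT INTO audit_log (actor_id, resource, resource_id, action)
VALUES ($5, 'invoice', $1, 'paid');

COMMIT;  -- all four writes become visible together, or none do
```

If anything fails before `COMMIT`, a `ROLLBACK` leaves the invoice, inventory, and audit log exactly as they were.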
MongoDB has had multi-document transactions since version 4.0 (2018). They work. But they come with performance caveats and are widely considered a grudging addition to a database that wasn't designed for them. The MongoDB documentation still recommends denormalizing data to avoid needing transactions, which is fine advice for a content management system and bad advice for a financial system.
JSONB gives us the flexible parts without sacrificing the structured parts
The honest version of the MongoDB pitch is: your data will have parts that don't fit neatly into a rigid schema. Customer metadata varies. Audit log snapshots need to capture the exact state of arbitrary records. Invoice PDFs have rendering-specific data that doesn't belong in the main schema.
This is true. And PostgreSQL's JSONB column type handles it well.
Our audit log table looks roughly like this:
CREATE TABLE audit_log (
  id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
  created_at timestamptz NOT NULL DEFAULT now(),
  actor_id uuid NOT NULL REFERENCES users(id),
  resource text NOT NULL,
  resource_id uuid NOT NULL,
  action text NOT NULL,
  before jsonb,
  after jsonb
);
The before and after columns are JSONB — they can store any shape of data, indexed efficiently, queryable with PostgreSQL's JSON operators. When we log an invoice update, we snapshot the entire invoice record as JSON. When we log a user role change, we snapshot the user record.
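For example, a hypothetical query over those snapshots (the operators are standard PostgreSQL JSONB; the query itself is illustrative):

```sql
-- Find audit entries where an invoice transitioned to 'paid'.
SELECT id, created_at, actor_id
FROM audit_log
WHERE resource = 'invoice'
  AND before ->> 'status' IS DISTINCT FROM 'paid'
  AND after ->> 'status' = 'paid';

-- A GIN index makes containment queries on the snapshots cheap.
CREATE INDEX audit_log_after_gin ON audit_log USING gin (after);

SELECT id FROM audit_log WHERE after @> '{"status": "paid"}';
```

The `->>` operator extracts a field as text, and `@>` tests JSON containment against the GIN index, so the flexible columns stay queryable at scale.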
This gives us the document-like flexibility for audit snapshots while keeping the foreign key constraint on actor_id (an audit log entry must reference a real user) and the structured schema for the fields we care about querying.
JSONB is also how we handle per-client configuration differences. Different clients might have different metadata fields on customer records. We store that in a metadata jsonb column and let each client's application layer define what goes there.
The result is a schema that's relational where the domain demands it and flexible where flexibility is genuinely useful.
The MongoDB crowd has a genuine point about one thing
Schema migrations are harder in PostgreSQL than in MongoDB.
In MongoDB, you can add a new field to a document without a migration — documents don't all need to have the same shape. In PostgreSQL, adding a column requires an ALTER TABLE statement. On a large table, that can lock the table for seconds or minutes if not done carefully.
PostgreSQL has gotten better at this over time. Since version 11, adding a column with a constant default is nearly instant (the default is stored in the catalog rather than written into every existing row). But complex migrations (renaming a column, changing a column type, splitting one column into two) still require careful planning and often application-level compatibility periods.
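The difference between the cheap and the expensive cases is visible in the DDL itself (table and column names below are illustrative):

```sql
-- Nearly instant on PostgreSQL 11+: the default lives in the
-- catalog and is not written into every existing row.
ALTER TABLE invoices ADD COLUMN currency text NOT NULL DEFAULT 'USD';

-- Forces a full table rewrite under an exclusive lock; this is
-- the kind of statement we plan around or run in phases.
ALTER TABLE invoices
  ALTER COLUMN amount TYPE numeric(12, 2) USING amount::numeric;
```

Reviewing migration SQL before it runs is largely about spotting statements in the second category.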
We manage this with Drizzle Kit, which generates migration SQL from schema changes. We review the generated SQL before running it, which catches the expensive operations. For large tables, we use the pg_repack extension or perform migrations in phases to avoid downtime.
It's more operational overhead than MongoDB's schemaless approach. We think that's the right trade-off for a financial system — the strictness that makes migrations harder is the same strictness that makes the data trustworthy.
Drizzle over Prisma, and why
We use Drizzle ORM rather than Prisma. This comes up often enough that it's worth addressing here.
Prisma generates a client from a schema definition file. The generated client is convenient and has good TypeScript inference. But it adds a code generation step to the development workflow, and the generated SQL is not always what you'd write by hand. For complex queries — say, an invoice list with aggregated payment totals and latest ticket status — Prisma's relational query API can produce surprising query plans.
Drizzle defines the schema in TypeScript directly. No code generation. The query builder produces explicit SQL that's readable in code review. When a query is slow, you can read the Drizzle code and understand exactly what SQL it's generating.
The practical difference in day-to-day development is smaller than the debates suggest. Both work. We prefer Drizzle because explicit SQL that can be reviewed and reasoned about is more important to us than convenience features.
What we'd reconsider
If we were building a platform where the primary use case was storing highly variable document structures — think a CMS, a research database, or an event log with hundreds of distinct event shapes — we'd have a real conversation about MongoDB. Its aggregation pipeline is genuinely good, and the developer experience for document-heavy workloads is better.
For a financial operations platform with well-defined entities and strict integrity requirements, PostgreSQL was the right call. The data has stayed clean. The migrations have been manageable. The transactions have been reliable.
That's what matters.