Full-stack distributed e-commerce: architecture decisions
A walkthrough of the architecture behind a distributed e-commerce platform — from the data model to the deployment pipeline.
When you're building an e-commerce platform that needs to handle real traffic, the naive monolith approach breaks down quickly. Here's how I approached the architecture for a system that serves thousands of concurrent sessions.
Data model: PostgreSQL as the source of truth
The product catalog, orders, and user accounts live in PostgreSQL. The key decision was to use a JSONB column for product attributes — this avoids the schema migration nightmare when merchants have wildly different product structures, while keeping the core relational model clean.
CREATE TABLE products (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
merchant_id UUID NOT NULL REFERENCES merchants(id),
slug TEXT NOT NULL,
price_cents INTEGER NOT NULL,
attributes JSONB DEFAULT '{}',
created_at TIMESTAMPTZ DEFAULT now()
);
CREATE INDEX ON products USING GIN (attributes);
Redis for sessions and inventory locks
Redis earns its keep in two places: session storage (sub-millisecond reads) and inventory reservation. When a user adds an item to their cart, a TTL-backed Redis key reserves the stock for 15 minutes; if the cart is abandoned, the reservation expires and the stock is released. This prevents overselling without requiring a database transaction on every page view.
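The reservation flow can be sketched in Python. The `InMemoryStore` below is a stand-in for a redis-py client — only `decrby`, `incrby`, and `set(nx=..., ex=...)` are used, each of which is a single atomic command in real Redis. The key names and the background release job are illustrative assumptions, not part of the original design.

```python
RESERVATION_TTL = 15 * 60  # the 15-minute cart hold described above


class InMemoryStore:
    """Tiny stand-in exposing the redis-py methods the sketch needs."""

    def __init__(self):
        self.data = {}

    def decrby(self, key, amount):
        self.data[key] = self.data.get(key, 0) - amount
        return self.data[key]

    def incrby(self, key, amount):
        self.data[key] = self.data.get(key, 0) + amount
        return self.data[key]

    def set(self, key, value, nx=False, ex=None):
        # Mirrors redis-py: with nx=True, returns None if the key exists.
        if nx and key in self.data:
            return None
        self.data[key] = value
        return True


def reserve(client, sku: str, qty: int, cart_id: str,
            ttl: int = RESERVATION_TTL) -> bool:
    """Atomically reserve `qty` units of `sku` for a cart.

    DECRBY is atomic in Redis, so two concurrent carts cannot both take
    the last unit: whichever request drives the counter negative rolls
    its claim back and fails.
    """
    remaining = client.decrby(f"stock:{sku}", qty)
    if remaining < 0:
        client.incrby(f"stock:{sku}", qty)  # roll back the failed claim
        return False
    # Record the hold with a TTL; a background job (not shown) returns
    # expired reservations to stock:{sku}.
    client.set(f"reservation:{sku}:{cart_id}", qty, nx=True, ex=ttl)
    return True


store = InMemoryStore()
store.set("stock:sku-123", 2)
assert reserve(store, "sku-123", 1, "cart-a")      # first unit held
assert reserve(store, "sku-123", 1, "cart-b")      # second unit held
assert not reserve(store, "sku-123", 1, "cart-c")  # oversell prevented
```

Against a real server, `InMemoryStore` would be replaced by `redis.Redis(...)`; the reservation logic itself is unchanged.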
The deployment
The app runs in Kubernetes with a horizontal pod autoscaler tied to CPU and request queue depth. The PostgreSQL connection pool (via PgBouncer) is the main scaling bottleneck — something I'd address with read replicas in a production deployment.
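A minimal sketch of that autoscaler, assuming a Deployment named `storefront` and a custom-metrics adapter (e.g. Prometheus Adapter) exposing a `request_queue_depth` metric; the replica bounds and targets here are illustrative, not the production values.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: storefront
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: storefront
  minReplicas: 3
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Pods   # requires a custom metrics adapter serving this metric
      pods:
        metric:
          name: request_queue_depth
        target:
          type: AverageValue
          averageValue: "30"
```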
The right architecture for an e-commerce platform is the simplest one that handles your actual traffic, not your aspirational traffic.