Database Query Cache Strategy: Where to Put What
Caching is the second-largest performance lever after right-sized infrastructure. The patterns are well known; the mistakes are common.
Why cache
A cache replaces a slow operation, typically a database query, with a fast in-memory lookup.
The wrong cache pattern gives no benefit, or worse, serves stale data without warning.
Four caching patterns
- Memoize: in-process, per-pod; fastest, but not shared across replicas.
- Cache-aside: the app reads the cache first, falls back to the DB on a miss, then populates the cache.
- Read-through: the cache layer itself populates from the DB on a miss.
- Write-through: writes go to both the cache and the DB, keeping them in sync.
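Cache-aside, the most common of the four, can be sketched as below. This is a minimal illustration, not a specific library's API: the dict stands in for Redis, and `db_query` is a hypothetical placeholder for the real database call.

```python
import time

CACHE = {}         # stands in for Redis: key -> (value, expires_at)
TTL_SECONDS = 300  # illustrative TTL

def db_query(user_id):
    # placeholder for the real database call (assumption)
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id):
    """Cache-aside: try the cache, fall back to the DB on a miss, then populate."""
    key = f"user:{user_id}"
    entry = CACHE.get(key)
    if entry is not None:
        value, expires_at = entry
        if time.time() < expires_at:
            return value                 # cache hit
    value = db_query(user_id)            # cache miss: read from the DB
    CACHE[key] = (value, time.time() + TTL_SECONDS)
    return value
```

The design choice that distinguishes cache-aside: the application owns the miss logic, so the cache can disappear entirely and reads still work, just slower.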
Where to put cache
App-tier (Redis): low latency, shared across pods; the biggest win for most workloads.
Database-tier (Postgres buffer cache): zero app changes, but bounded by DB memory.
CDN: only for HTTP-cacheable content.
Cache stampede prevention
A cache stampede: a hot key expires, many concurrent requests miss at once and all hit the DB, and the DB collapses under the load.
Prevention: probabilistic early refresh, the single-flight pattern, or lock-on-miss.
Antipatterns
- Cache without a TTL: entries stay stale forever.
- Manual cache invalidation: scattered call sites breed bugs.
- No stampede prevention: one hot-key miss under load becomes a DB outage.
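A minimal guard against the first antipattern: always attach a TTL, and add jitter so keys written together don't all expire together (which would itself invite a stampede). The base TTL and jitter fraction below are illustrative assumptions:

```python
import random
import time

BASE_TTL = 300          # seconds; illustrative
JITTER_FRACTION = 0.1   # +/-10% spread so co-written keys expire apart

def ttl_with_jitter(base_ttl=BASE_TTL, jitter=JITTER_FRACTION):
    """Return a TTL randomized around base_ttl to avoid synchronized expiry."""
    spread = base_ttl * jitter
    return base_ttl + random.uniform(-spread, spread)

def set_with_ttl(cache, key, value):
    # every write carries an expiry; nothing lives in the cache forever
    cache[key] = (value, time.time() + ttl_with_jitter())
```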
What to do this week
Three moves. (1) Apply the right pattern to your slowest production endpoint. (2) Measure p99 latency before and after. (3) Document the win and ship the runbook so the team can reproduce it.