Cache-Aside (Lazy Loading)

The application checks the cache first; on a miss it reads from the database, then populates the cache before returning the result.

Cache-aside is the most common caching pattern. The application owns all cache interactions: it checks the cache on reads, fetches from the database on a miss, and writes the result back into the cache. On writes, the application typically invalidates or updates the cache entry. Because the application controls both paths, you get maximum flexibility but also bear the complexity of keeping cache and database in sync.

Read/Write Pattern

Reads check cache first, then database on miss; writes go to database then invalidate cache. The application explicitly manages both paths.
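As a minimal sketch, the two paths can be written with plain dicts standing in for a real cache client (such as Redis) and a datastore; all names here (get_user, update_user) are illustrative, not part of any library:

```python
cache = {}     # stands in for the cache tier
database = {}  # stands in for the system of record

def get_user(user_id):
    """Read path: check cache, fall back to the database, populate cache."""
    value = cache.get(user_id)
    if value is not None:
        return value                 # cache hit
    value = database.get(user_id)    # cache miss: read from the database
    if value is not None:
        cache[user_id] = value       # populate the cache before returning
    return value

def update_user(user_id, value):
    """Write path: write to the database first, then invalidate the cache."""
    database[user_id] = value
    cache.pop(user_id, None)         # invalidate rather than update
```

Invalidating on write (rather than updating the cache in place) keeps the write path simple and avoids writing a value that may immediately be overwritten by a concurrent writer; the next read repopulates the entry.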

Consistency

Eventual consistency. There is a brief window after a database write and before cache invalidation where the cache may serve stale data. A subtle race condition can also cause stale data to be written into the cache after invalidation.
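The race can be illustrated by manually interleaving the steps of a reader and a writer (a sketch with dicts standing in for the cache and database):

```python
cache = {}
database = {"k": "v1"}

# Reader: misses the cache and reads the old value from the database...
read_value = database["k"]           # reader sees "v1"

# ...but before the reader populates the cache, a writer completes both
# of its steps: database write followed by cache invalidation.
database["k"] = "v2"
cache.pop("k", None)                 # invalidation is a no-op: nothing cached yet

# The reader now finishes, writing the stale value into the cache.
cache["k"] = read_value

# The cache now disagrees with the database until the entry expires
# or is invalidated again.
assert cache["k"] == "v1" and database["k"] == "v2"
```

This is one reason cache-aside entries usually carry a TTL: the window in which a stale entry can be served is bounded by the expiry rather than lasting indefinitely.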

Failure Mode

If the cache is unavailable, all reads fall through to the database. This increases database load and latency but does not cause data loss or incorrect results. If the database is unavailable, cache hits still succeed but misses fail.
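Degrading gracefully when the cache tier is down amounts to treating cache errors as misses. A sketch, assuming a client that raises a connection error when unreachable (the class and function names here are hypothetical):

```python
database = {"k": "v"}

class DownCache:
    """Stand-in for an unreachable cache tier: every call raises."""
    def get(self, key):
        raise ConnectionError("cache unavailable")
    def set(self, key, value):
        raise ConnectionError("cache unavailable")

cache = DownCache()

def get_value(key):
    """Treat any cache error as a miss so reads fall through to the database."""
    try:
        value = cache.get(key)
        if value is not None:
            return value
    except ConnectionError:
        pass                          # cache down: fall through to the database
    value = database.get(key)
    try:
        cache.set(key, value)         # best-effort repopulation
    except ConnectionError:
        pass                          # ignore; a later read will retry
    return value
```

With this wrapping, a cache outage shows up as higher database load and latency rather than as request failures, matching the failure mode described above.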

Likely Follow-Up Questions

  • How would you handle a thundering herd after a cache restart?
  • What happens if the cache invalidation fails after a successful database write?
  • How do you choose an appropriate TTL for cache-aside?
  • Compare cache-aside with read-through: when would you pick one over the other?
  • How would you implement cache-aside in a microservices architecture with multiple services reading the same data?

Source: editorial — Synthesized from standard distributed systems caching literature and production engineering best practices.
