
Read-Through Cache

The cache itself is responsible for loading data from the database on a miss, presenting a single unified read interface to the application.

In a read-through cache, the application always reads from the cache. On a miss, the cache -- not the application -- fetches the data from the backing store, populates itself, and returns the result. This simplifies application code because the caller never interacts with the database directly for reads. The trade-off is tighter coupling between the cache layer and the data store, and you still need a separate strategy for writes.
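The mechanics above can be sketched in a few lines. This is a minimal single-process illustration, not a production cache; the `ReadThroughCache` name, the dict-backed store, and the TTL handling are all assumptions made for the example:

```python
import time
from typing import Any, Callable, Dict, Tuple


class ReadThroughCache:
    """Minimal read-through cache: the cache, not the caller, loads on a miss."""

    def __init__(self, loader: Callable[[str], Any], ttl_seconds: float = 60.0):
        self._loader = loader  # fetches from the backing store on a miss
        self._ttl = ttl_seconds
        self._store: Dict[str, Tuple[Any, float]] = {}  # key -> (value, expiry)

    def get(self, key: str) -> Any:
        entry = self._store.get(key)
        if entry is not None and entry[1] > time.monotonic():
            return entry[0]  # hit: serve directly from the cache
        value = self._loader(key)  # miss: the cache itself loads the data
        self._store[key] = (value, time.monotonic() + self._ttl)
        return value


# The application only ever calls cache.get(); it never queries the
# database directly for reads.
db = {"user:1": "alice"}
cache = ReadThroughCache(loader=lambda k: db[k])
print(cache.get("user:1"))  # miss -> loader fetches from db -> "alice"
print(cache.get("user:1"))  # hit -> "alice", loader not called again
```

Note that the loader is injected at construction time: this is where the tighter coupling mentioned above shows up, since the cache layer must know how to reach the backing store.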

Read/Write Pattern

All reads go through the cache. On a miss, the cache itself loads data from the database via a configured loader function. Writes require a separate strategy (write-through, write-behind, or direct database writes with invalidation).
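One of the write strategies mentioned above, write-through, pairs naturally with the read path: writes go to the database first, then update the cache, so subsequent reads hit fresh data. A hedged sketch, with the `WriteThroughCache` name and dict-backed "database" chosen purely for illustration:

```python
class WriteThroughCache:
    """Read-through reads paired with write-through writes (sketch)."""

    def __init__(self, db):
        self._db = db       # stands in for the backing database
        self._cache = {}

    def get(self, key):
        if key not in self._cache:
            self._cache[key] = self._db[key]  # read-through load on a miss
        return self._cache[key]

    def put(self, key, value):
        self._db[key] = value     # durable write to the database first
        self._cache[key] = value  # then keep the cache coherent
```

Writing to the database before the cache means a crash between the two steps leaves the cache stale rather than the database missing a write, which is usually the safer failure.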

Consistency

Eventual consistency, same as cache-aside. Staleness window depends on TTL. Consistency improves when paired with write-through. Built-in request coalescing prevents multiple concurrent loads for the same key.
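The request coalescing mentioned above (often called "single-flight") can be sketched with a per-key event: the first thread to miss becomes the leader and runs the loader; concurrent callers for the same key wait for its result instead of issuing duplicate loads. The `CoalescingLoader` name is an assumption, and the result map here grows without bound (a real cache would evict):

```python
import threading


class CoalescingLoader:
    """Single-flight sketch: concurrent misses for one key trigger one load."""

    def __init__(self, loader):
        self._loader = loader
        self._lock = threading.Lock()
        self._inflight = {}  # key -> Event the leader will set when done
        self._results = {}   # key -> loaded value (never evicted; a sketch)

    def load(self, key):
        with self._lock:
            if key in self._results:
                return self._results[key]
            event = self._inflight.get(key)
            leader = event is None
            if leader:
                event = threading.Event()
                self._inflight[key] = event
        if leader:
            value = self._loader(key)   # only the leader hits the database
            with self._lock:
                self._results[key] = value
                del self._inflight[key]
            event.set()                 # wake the waiters
            return value
        event.wait()                    # followers block until the leader finishes
        with self._lock:
            return self._results[key]
```

This is the structural reason read-through resists thundering herds better than cache-aside: the coalescing lives inside the cache, so every caller benefits without coordinating.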

Failure Mode

If the cache is unavailable, reads fail unless the application has a fallback path to the database. If the database is unavailable, cache hits succeed but misses fail. Loader failures should be handled with timeouts and fallback policies.
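One common timeout-and-fallback policy is "serve stale on loader failure": run the loader with a deadline, and if it fails or times out, return the expired entry when one exists rather than erroring. A minimal sketch under those assumptions (the `ResilientReadThrough` name is illustrative; note that a timed-out loader thread keeps running in the pool, which a real implementation would have to manage):

```python
import concurrent.futures
import time


class ResilientReadThrough:
    """Read-through with a load timeout and stale-on-failure fallback (sketch)."""

    def __init__(self, loader, ttl=60.0, load_timeout=0.5):
        self._loader = loader
        self._ttl = ttl
        self._timeout = load_timeout
        self._store = {}  # key -> (value, expiry)
        self._pool = concurrent.futures.ThreadPoolExecutor(max_workers=4)

    def get(self, key):
        entry = self._store.get(key)
        now = time.monotonic()
        if entry and entry[1] > now:
            return entry[0]  # fresh hit
        try:
            # Bound the loader with a deadline so a slow database
            # cannot stall every read.
            value = self._pool.submit(self._loader, key).result(timeout=self._timeout)
        except Exception:
            if entry is not None:
                return entry[0]  # loader failed: serve the stale value
            raise                # true miss with no fallback: propagate
        self._store[key] = (value, now + self._ttl)
        return value
```

Serving stale data trades consistency for availability; whether that is acceptable depends on the key (a product description, probably; an account balance, probably not).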

Likely Follow-Up Questions

  • How does read-through handle the thundering herd problem differently than cache-aside?
  • What write strategy would you pair with read-through and why?
  • How would you implement read-through when different keys come from different data sources?
  • What happens if the loader function throws an exception -- how should the cache behave?
  • How would you warm a read-through cache on deployment?

Source: editorial — Synthesized from standard distributed systems caching literature and production engineering best practices.
