Tutorial: Getting Started with layercache

A step-by-step guide to setting up and operating layercache in production.

Table of Contents

  1. Create a Cache Stack
  2. Basic Read-Through Caching
  3. Warm Critical Keys at Startup
  4. Wrap Service Methods
  5. Use Namespaces for Organization
  6. Set Up Tag-Based Invalidation
  7. Configure Stale Serving
  8. Add Resilience
  9. Monitor with Stats & Metrics
  10. Snapshot Before Deploys

1. Create a Cache Stack

Start with a two-layer setup: fast in-memory L1 and shared Redis L2.

import { CacheStack, MemoryLayer, RedisLayer } from 'layercache'
import Redis from 'ioredis'

const cache = new CacheStack([
  new MemoryLayer({ ttl: 60, maxSize: 5_000 }),
  new RedisLayer({
    client: new Redis(),
    ttl: 300,
    prefix: 'myapp:cache:',
    compression: 'gzip'
  })
], {
  gracefulDegradation: { retryAfterMs: 10_000 },
  stampedePrevention: true  // on by default
})

Why this setup?

  • Memory (L1) handles repeated reads with ~0.01ms latency
  • Redis (L2) provides shared state across instances with ~0.5ms latency
  • Compression reduces Redis memory usage for large values
  • Graceful degradation keeps the cache working even if Redis goes down
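
The fall-through behavior can be sketched independently of layercache: each read tries layers in order, and a hit in a slower layer backfills the faster layers above it. A minimal two-Map model of that idea (an illustration, not the library's internals):

```typescript
// Each layer is modeled as a plain Map; index 0 is the fastest (L1).
type Layer = Map<string, string>

function layeredGet(layers: Layer[], key: string): string | undefined {
  for (let i = 0; i < layers.length; i++) {
    const value = layers[i].get(key)
    if (value !== undefined) {
      // Backfill every faster layer above the one that hit.
      for (let j = 0; j < i; j++) layers[j].set(key, value)
      return value
    }
  }
  return undefined
}

const l1: Layer = new Map()
const l2: Layer = new Map([['user:123', 'alice']])
const hit = layeredGet([l1, l2], 'user:123')  // found in L2, copied into L1
```

After this read, `l1` holds `user:123` too, so the next lookup never reaches L2.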

2. Basic Read-Through Caching

The simplest pattern: fetch on miss, cache automatically.

// Fetcher runs once on miss, result fills all layers
const user = await cache.get<User>('user:123', () => db.findUser(123))

// Subsequent calls hit L1 (memory) - no DB or Redis call
const sameUser = await cache.get<User>('user:123')

With options:

const user = await cache.get<User>('user:123', () => db.findUser(123), {
  ttl: { memory: 30, redis: 600 },  // short L1, longer L2
  tags: ['user', 'user:123'],       // for bulk invalidation later
  ttlJitter: 5                      // prevent synchronized expiry
})
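
Under the hood, read-through is just "check, fetch on miss, store", and TTL jitter adds a small random offset so keys written together don't all expire in the same instant. A standalone sketch of both ideas (not layercache's implementation):

```typescript
const store = new Map<string, { value: string; expiresAt: number }>()

function jitteredTtl(ttlSeconds: number, jitterSeconds: number): number {
  // Spread expiry across [ttl, ttl + jitter] to avoid synchronized misses.
  return ttlSeconds + Math.random() * jitterSeconds
}

async function readThrough(
  key: string,
  fetcher: () => Promise<string>,
  ttl: number,
  jitter = 0
): Promise<string> {
  const entry = store.get(key)
  if (entry && entry.expiresAt > Date.now()) return entry.value
  const value = await fetcher()  // runs only on miss
  store.set(key, { value, expiresAt: Date.now() + jitteredTtl(ttl, jitter) * 1000 })
  return value
}
```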

3. Warm Critical Keys at Startup

Pre-populate the cache before traffic arrives:

await cache.warm(
  [
    { key: 'config:flags',    fetcher: () => fetchFlags(),    priority: 10 },
    { key: 'catalog:top-100', fetcher: () => fetchCatalog(),  priority: 5 },
    { key: 'pricing:matrix',  fetcher: () => fetchPricing(),  priority: 5 },
  ],
  { concurrency: 4, continueOnError: true }
)

Higher priority values load first. continueOnError ensures one failed fetch doesn't block the rest.
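
The warming loop can be modeled as: sort entries by priority descending, run each fetcher, and swallow individual failures when continueOnError is set. A simplified sketch (sequential here for clarity; layercache itself runs fetchers concurrently):

```typescript
type WarmEntry = { key: string; fetcher: () => Promise<string>; priority: number }

async function warmSequential(
  entries: WarmEntry[],
  cache: Map<string, string>,
  continueOnError = true
): Promise<string[]> {
  const ordered = [...entries].sort((a, b) => b.priority - a.priority)
  const loaded: string[] = []
  for (const { key, fetcher } of ordered) {
    try {
      cache.set(key, await fetcher())
      loaded.push(key)
    } catch (err) {
      if (!continueOnError) throw err  // otherwise skip this entry and keep going
    }
  }
  return loaded
}
```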


4. Wrap Service Methods

Turn any async function into a cached function with automatic key derivation:

const getUser = cache.wrap('user', (id: number) => db.findUser(id), {
  ttl: 60,
  tags: ['users']
})

// Calls are automatically cached with key "user:123"
const user = await getUser(123)

With a custom key resolver:

const searchProducts = cache.wrap(
  'search',
  (query: string, page: number) => db.search(query, page),
  { keyResolver: (query, page) => `${query}:p${page}`, ttl: 30 }
)
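
Automatic key derivation amounts to joining the prefix with the call arguments, and a custom keyResolver replaces only the argument-joining part. A plausible sketch of the scheme (consistent with the "user:123" example above, but not necessarily the library's exact rules):

```typescript
function deriveKey(
  prefix: string,
  args: unknown[],
  keyResolver?: (...args: any[]) => string
): string {
  // Default: stringify and join arguments; custom resolver overrides the suffix.
  const suffix = keyResolver ? keyResolver(...args) : args.map(String).join(':')
  return `${prefix}:${suffix}`
}

deriveKey('user', [123])                                   // "user:123"
deriveKey('search', ['shoes', 2], (q, p) => `${q}:p${p}`)  // "search:shoes:p2"
```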

5. Use Namespaces for Organization

Scope cache operations to avoid key collisions:

const users = cache.namespace('users')
const posts = cache.namespace('posts')

await users.set('123', userData)   // stored as "users:123"
await posts.set('456', postData)   // stored as "posts:456"

// Clear only user cache
await users.clear()                // deletes "users:*" only

// Nested namespaces for multi-tenancy
const tenant = cache.namespace('tenant:acme')
const tenantUsers = tenant.namespace('users')
await tenantUsers.set('1', data)   // stored as "tenant:acme:users:1"
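
A namespace is essentially a view that prepends its prefix on every operation, and nesting just stacks prefixes. A minimal model of that behavior, including a prefix-scoped clear (hypothetical, to show the mechanism):

```typescript
class Namespaced {
  constructor(private store: Map<string, string>, private prefix = '') {}

  namespace(name: string): Namespaced {
    // Nesting appends to the existing prefix: "tenant:acme:" + "users:" ...
    return new Namespaced(this.store, `${this.prefix}${name}:`)
  }

  set(key: string, value: string): void {
    this.store.set(`${this.prefix}${key}`, value)
  }

  clear(): void {
    // Delete only keys under this namespace's prefix.
    for (const k of [...this.store.keys()]) {
      if (k.startsWith(this.prefix)) this.store.delete(k)
    }
  }
}
```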

6. Set Up Tag-Based Invalidation

Tag keys when writing, invalidate groups when data changes:

// Tag related data together
await cache.set('user:123',         user,    { tags: ['user:123'] })
await cache.set('user:123:posts',   posts,   { tags: ['user:123', 'posts'] })
await cache.set('user:123:profile', profile, { tags: ['user:123'] })

// When user 123 updates their profile, invalidate everything related
await cache.invalidateByTag('user:123')
// All three keys are deleted across all layers

// Batch invalidation
await cache.invalidateByTags(['users', 'posts'], 'any')   // either tag
await cache.invalidateByTags(['tenant:a', 'users'], 'all') // both tags
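
Conceptually, a tag index is a reverse map from tag to key set; invalidation looks up the affected keys, taking the union across tags for 'any' and the intersection for 'all'. A standalone sketch of that structure (not the library's internal class):

```typescript
class TagIndex {
  private byTag = new Map<string, Set<string>>()

  tag(key: string, tags: string[]): void {
    for (const t of tags) {
      if (!this.byTag.has(t)) this.byTag.set(t, new Set())
      this.byTag.get(t)!.add(key)
    }
  }

  keysFor(tags: string[], mode: 'any' | 'all'): Set<string> {
    const sets = tags.map(t => this.byTag.get(t) ?? new Set<string>())
    if (mode === 'any') return new Set(sets.flatMap(s => [...s]))  // union
    // 'all': keep only keys present under every tag
    return new Set([...sets[0]].filter(k => sets.every(s => s.has(k))))
  }
}
```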

For multi-instance deployments, use RedisTagIndex so all servers share the same tag state:

import { RedisTagIndex } from 'layercache'

const tagIndex = new RedisTagIndex({ client: redis, prefix: 'myapp:tags' })
const cache = new CacheStack([...], { tagIndex })

7. Configure Stale Serving

Keep serving cached data even after expiry while refreshing in the background:

const config = await cache.get('app:config', fetchConfig, {
  ttl: 60,
  staleWhileRevalidate: 30,  // serve stale for 30s while refreshing
  staleIfError: 300           // serve stale for 5min if refresh fails
})

Combine this with refresh-ahead to refresh proactively before expiry:

const leaderboard = await cache.get('leaderboard', fetchLeaderboard, {
  ttl: 120,
  refreshAhead: 30    // start refreshing when <= 30s remain
})
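
The decision behind these options can be sketched as a pure function of an entry's age: fresh within ttl, served stale while a background refresh runs within ttl + staleWhileRevalidate, and a hard miss beyond that, with refreshAhead moving the refresh trigger earlier. A simplified model under those assumptions:

```typescript
type Freshness = 'fresh' | 'refresh-ahead' | 'stale-revalidate' | 'miss'

function classify(
  ageSeconds: number,
  opts: { ttl: number; staleWhileRevalidate?: number; refreshAhead?: number }
): Freshness {
  const { ttl, staleWhileRevalidate = 0, refreshAhead = 0 } = opts
  if (ageSeconds <= ttl - refreshAhead) return 'fresh'
  if (ageSeconds <= ttl) return 'refresh-ahead'        // still fresh, refresh in background
  if (ageSeconds <= ttl + staleWhileRevalidate) return 'stale-revalidate'
  return 'miss'                                        // too old: block on the fetcher
}
```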

8. Add Resilience

Protect your app from cascading failures:

const cache = new CacheStack([...], {
  // Skip failed layers temporarily
  gracefulDegradation: { retryAfterMs: 10_000 },

  // Stop hammering broken upstreams
  circuitBreaker: { failureThreshold: 5, cooldownMs: 30_000 },

  // Rate limit fetcher calls
  fetcherRateLimit: { maxConcurrent: 10 },

  // Don't fail writes if one layer is down
  writePolicy: 'best-effort'
})
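
The circuit breaker here follows the standard pattern: count consecutive failures, and once the threshold is reached, reject calls outright until the cooldown elapses. A minimal sketch of that state machine (an illustration, not layercache's internal class):

```typescript
class CircuitBreaker {
  private failures = 0
  private openUntil = 0

  constructor(private failureThreshold: number, private cooldownMs: number) {}

  canCall(now = Date.now()): boolean {
    // Closed (callable) once the cooldown deadline has passed.
    return now >= this.openUntil
  }

  recordSuccess(): void {
    this.failures = 0
  }

  recordFailure(now = Date.now()): void {
    this.failures++
    if (this.failures >= this.failureThreshold) {
      this.openUntil = now + this.cooldownMs  // open: stop calling the upstream
      this.failures = 0
    }
  }
}
```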

9. Monitor with Stats & Metrics

Quick stats check

const stats = cache.getStats()
console.log(stats.metrics)  // { hits, misses, fetches, staleHits, ... }
console.log(stats.layers)   // [{ name, isLocal, degradedUntil }]
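
A derived number worth alerting on is the hit ratio. Assuming a metrics object with the hits and misses counters shown above, it is simply hits / (hits + misses):

```typescript
// Hypothetical shape matching the stats.metrics fields above.
type CacheMetrics = { hits: number; misses: number }

function hitRatio({ hits, misses }: CacheMetrics): number {
  const total = hits + misses
  return total === 0 ? 0 : hits / total  // guard against division by zero on a cold cache
}

hitRatio({ hits: 900, misses: 100 })  // 0.9
```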

HTTP stats endpoint

import { createCacheStatsHandler } from 'layercache'

app.get('/cache/stats', createCacheStatsHandler(cache))

Health checks

const health = await cache.healthCheck()
// [{ layer: 'memory', healthy: true, latencyMs: 0.03 },
//  { layer: 'redis',  healthy: true, latencyMs: 0.41 }]

Event-based monitoring

cache.on('hit',   ({ key, layer }) => metrics.inc('cache.hit', { layer }))
cache.on('miss',  ({ key })        => metrics.inc('cache.miss'))
cache.on('error', ({ event, ctx }) => logger.error(event, ctx))

Admin CLI

npx layercache stats --redis redis://localhost:6379
npx layercache keys  --redis redis://localhost:6379 --pattern "user:*"

10. Snapshot Before Deploys

Save cache state before restarting:

// Before shutdown
await cache.persistToFile('./cache-snapshot.json')

// After restart
await cache.restoreFromFile('./cache-snapshot.json')

Or transfer between instances in-memory:

const snapshot = await cache.exportState()
await anotherCache.importState(snapshot)
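
A snapshot generally needs to preserve each entry's remaining lifetime, not its original TTL, so keys don't come back with a full lifetime after a slow deploy. A sketch of export/import under that assumption (layercache's actual snapshot format is not specified here):

```typescript
type Entry = { value: string; expiresAt: number }

function exportState(store: Map<string, Entry>, now = Date.now()): string {
  // Drop already-expired keys; keep absolute expiry timestamps.
  const live = [...store].filter(([, e]) => e.expiresAt > now)
  return JSON.stringify(live)
}

function importState(json: string, now = Date.now()): Map<string, Entry> {
  const entries: [string, Entry][] = JSON.parse(json)
  // Skip anything that expired while the snapshot sat on disk.
  return new Map(entries.filter(([, e]) => e.expiresAt > now))
}
```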

Next Steps