# Tutorial: Getting Started with layercache
A step-by-step guide to setting up and operating layercache in production.
## Table of Contents
- Create a Cache Stack
- Basic Read-Through Caching
- Warm Critical Keys at Startup
- Wrap Service Methods
- Use Namespaces for Organization
- Set Up Tag-Based Invalidation
- Configure Stale Serving
- Add Resilience
- Monitor with Stats & Metrics
- Snapshot Before Deploys
## 1. Create a Cache Stack

Start with a two-layer setup: fast in-memory L1 and shared Redis L2.

```ts
import { CacheStack, MemoryLayer, RedisLayer } from 'layercache'
import Redis from 'ioredis'

const cache = new CacheStack([
  new MemoryLayer({ ttl: 60, maxSize: 5_000 }),
  new RedisLayer({
    client: new Redis(),
    ttl: 300,
    prefix: 'myapp:cache:',
    compression: 'gzip'
  })
], {
  gracefulDegradation: { retryAfterMs: 10_000 },
  stampedePrevention: true // on by default
})
```
**Why this setup?**
- Memory (L1) handles repeated reads with ~0.01ms latency
- Redis (L2) provides shared state across instances with ~0.5ms latency
- Compression reduces Redis memory usage for large values
- Graceful degradation keeps the cache working even if Redis goes down
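To make the layered read path concrete, here is a minimal, self-contained sketch of the L1 → L2 → fetcher fallback (plain `Map`s stand in for memory and Redis; this is illustrative, not layercache's implementation):

```ts
type Fetcher<T> = () => Promise<T>

// On a get: check L1, then L2 (back-filling L1), then run the fetcher
// and fill both layers -- the same flow the CacheStack above performs.
class TwoLayerSketch {
  private l1 = new Map<string, unknown>() // per-process "memory" layer
  private l2 = new Map<string, unknown>() // shared "Redis" layer stand-in

  async get<T>(key: string, fetcher: Fetcher<T>): Promise<T> {
    if (this.l1.has(key)) return this.l1.get(key) as T // fastest path
    if (this.l2.has(key)) {
      const v = this.l2.get(key) as T
      this.l1.set(key, v) // promote to L1 for the next read
      return v
    }
    const v = await fetcher() // full miss: fetch once
    this.l1.set(key, v)
    this.l2.set(key, v) // result fills all layers
    return v
  }
}
```

A second `get` for the same key never reaches the fetcher, which is the behavior the real stack provides on top of TTLs, compression, and degradation handling.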
## 2. Basic Read-Through Caching

The simplest pattern: fetch on miss, cache automatically.

```ts
// Fetcher runs once on miss, result fills all layers
const user = await cache.get<User>('user:123', () => db.findUser(123))

// Subsequent calls hit L1 (memory) - no DB or Redis call
const sameUser = await cache.get<User>('user:123')
```

With options:

```ts
const user = await cache.get<User>('user:123', () => db.findUser(123), {
  ttl: { memory: 30, redis: 600 }, // short L1, longer L2
  tags: ['user', 'user:123'],      // for bulk invalidation later
  ttlJitter: 5                     // prevent synchronized expiry
})
```
## 3. Warm Critical Keys at Startup

Pre-populate the cache before traffic arrives:

```ts
await cache.warm(
  [
    { key: 'config:flags', fetcher: () => fetchFlags(), priority: 10 },
    { key: 'catalog:top-100', fetcher: () => fetchCatalog(), priority: 5 },
    { key: 'pricing:matrix', fetcher: () => fetchPricing(), priority: 5 },
  ],
  { concurrency: 4, continueOnError: true }
)
```

Keys with higher `priority` values load first, and `continueOnError` ensures one failed fetch doesn't block the rest.
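The warming behavior can be sketched as a priority-sorted queue drained by a fixed-size worker pool. This is a stand-in (names like `warmSketch` and `WarmEntry` are hypothetical, and results land in a plain `Map`), not the library's internals:

```ts
type WarmEntry = { key: string; fetcher: () => Promise<unknown>; priority: number }

async function warmSketch(
  entries: WarmEntry[],
  opts: { concurrency: number; continueOnError: boolean },
  store: Map<string, unknown>
): Promise<void> {
  // Highest priority first, then drained by `concurrency` parallel workers.
  const queue = [...entries].sort((a, b) => b.priority - a.priority)
  const worker = async () => {
    let entry: WarmEntry | undefined
    while ((entry = queue.shift())) {
      try {
        store.set(entry.key, await entry.fetcher())
      } catch (err) {
        if (!opts.continueOnError) throw err // abort remaining warming
        // otherwise: skip this key and keep warming the rest
      }
    }
  }
  await Promise.all(Array.from({ length: opts.concurrency }, worker))
}
```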
## 4. Wrap Service Methods

Turn any async function into a cached function with automatic key derivation:

```ts
const getUser = cache.wrap('user', (id: number) => db.findUser(id), {
  ttl: 60,
  tags: ['users']
})

// Calls are automatically cached with key "user:123"
const user = await getUser(123)
```

With a custom key resolver:

```ts
const searchProducts = cache.wrap(
  'search',
  (query: string, page: number) => db.search(query, page),
  { keyResolver: (query, page) => `${query}:p${page}`, ttl: 30 }
)
```
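Conceptually, `wrap` is a memoizer that derives the cache key from the prefix plus the arguments (or a custom resolver). A minimal sketch of that idea, with hypothetical names and none of layercache's TTL or layer handling:

```ts
type KeyResolver<A extends unknown[]> = (...args: A) => string

function wrapSketch<A extends unknown[], R>(
  prefix: string,
  fn: (...args: A) => Promise<R>,
  keyResolver: KeyResolver<A> = (...args) => args.join(':')
): (...args: A) => Promise<R> {
  const store = new Map<string, Promise<R>>()
  return (...args: A) => {
    const key = `${prefix}:${keyResolver(...args)}` // e.g. "user:123"
    let hit = store.get(key)
    if (!hit) {
      hit = fn(...args)   // caching the promise (not the value) also
      store.set(key, hit) // dedupes concurrent calls for the same key
    }
    return hit
  }
}
```

Storing the promise rather than the resolved value is one simple way to get stampede prevention for free: two concurrent calls with the same arguments share one in-flight fetch.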
## 5. Use Namespaces for Organization

Scope cache operations to avoid key collisions:

```ts
const users = cache.namespace('users')
const posts = cache.namespace('posts')

await users.set('123', userData) // stored as "users:123"
await posts.set('456', postData) // stored as "posts:456"

// Clear only user cache
await users.clear() // deletes "users:*" only

// Nested namespaces for multi-tenancy
const tenant = cache.namespace('tenant:acme')
const tenantUsers = tenant.namespace('users')
await tenantUsers.set('1', data) // stored as "tenant:acme:users:1"
```
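Under the hood a namespace is essentially a prefix applied to every operation. A self-contained sketch over a shared `Map` (illustrative names, not the library's code):

```ts
// Each namespace prepends its prefix on every operation, and clear()
// only touches keys under its own prefix.
class NamespaceSketch {
  constructor(
    private store: Map<string, unknown>,
    private prefix = ''
  ) {}

  namespace(name: string): NamespaceSketch {
    // Nesting just extends the prefix: "tenant:acme:" -> "tenant:acme:users:"
    return new NamespaceSketch(this.store, `${this.prefix}${name}:`)
  }
  set(key: string, value: unknown): void {
    this.store.set(this.prefix + key, value)
  }
  get(key: string): unknown {
    return this.store.get(this.prefix + key)
  }
  clear(): void {
    for (const k of [...this.store.keys()]) {
      if (k.startsWith(this.prefix)) this.store.delete(k)
    }
  }
}
```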
## 6. Set Up Tag-Based Invalidation

Tag keys when writing, invalidate groups when data changes:

```ts
// Tag related data together
await cache.set('user:123', user, { tags: ['user:123'] })
await cache.set('user:123:posts', posts, { tags: ['user:123', 'posts'] })
await cache.set('user:123:profile', profile, { tags: ['user:123'] })

// When user 123 updates their profile, invalidate everything related
await cache.invalidateByTag('user:123')
// All three keys are deleted across all layers

// Batch invalidation
await cache.invalidateByTags(['users', 'posts'], 'any')    // either tag
await cache.invalidateByTags(['tenant:a', 'users'], 'all') // both tags
```

For multi-instance deployments, use `RedisTagIndex` so all servers share the same tag state:

```ts
import { RedisTagIndex } from 'layercache'

const tagIndex = new RedisTagIndex({ client: redis, prefix: 'myapp:tags' })
const cache = new CacheStack([...], { tagIndex })
```
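The role a tag index plays can be shown with an in-memory sketch: each tag maps to the set of keys written under it, and invalidating a tag deletes exactly those keys. This is an illustration of the concept, not `RedisTagIndex` itself:

```ts
class TagIndexSketch {
  private store = new Map<string, unknown>()
  private tags = new Map<string, Set<string>>() // tag -> keys written under it

  set(key: string, value: unknown, tags: string[] = []): void {
    this.store.set(key, value)
    for (const tag of tags) {
      if (!this.tags.has(tag)) this.tags.set(tag, new Set())
      this.tags.get(tag)!.add(key)
    }
  }
  get(key: string): unknown {
    return this.store.get(key)
  }
  invalidateByTag(tag: string): void {
    // Delete every key recorded under this tag, then drop the tag entry.
    for (const key of this.tags.get(tag) ?? []) this.store.delete(key)
    this.tags.delete(tag)
  }
}
```

Keeping this index in Redis (rather than per-process memory) is what lets every instance see the same tag-to-key mapping, so an invalidation issued on one server affects keys written by another.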
## 7. Configure Stale Serving

Keep serving cached data even after expiry while refreshing in the background:

```ts
const config = await cache.get('app:config', fetchConfig, {
  ttl: 60,
  staleWhileRevalidate: 30, // serve stale for 30s while refreshing
  staleIfError: 300         // serve stale for 5min if refresh fails
})
```

Combine with refresh-ahead to proactively refresh before expiry:

```ts
const leaderboard = await cache.get('leaderboard', fetchLeaderboard, {
  ttl: 120,
  refreshAhead: 30 // start refreshing when <= 30s remain
})
```
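The stale-while-revalidate decision boils down to comparing the current time against the entry's expiry plus a stale window. A single-key sketch with explicit timestamps (layercache tracks these internally; every name here is illustrative):

```ts
type Entry<T> = { value: T; expiresAt: number }

class SwrSketch<T> {
  private entry?: Entry<T>
  constructor(
    private ttlMs: number,
    private staleWindowMs: number,
    private fetcher: () => Promise<T>
  ) {}

  async get(now = Date.now()): Promise<T> {
    const e = this.entry
    if (e && now < e.expiresAt) return e.value // fresh hit
    if (e && now < e.expiresAt + this.staleWindowMs) {
      // Stale but inside the window: serve immediately, refresh in background.
      void this.refresh()
      return e.value
    }
    return this.refresh() // hard miss (or past the stale window): block on fetch
  }

  private async refresh(): Promise<T> {
    const value = await this.fetcher()
    this.entry = { value, expiresAt: Date.now() + this.ttlMs }
    return value
  }
}
```

The payoff is latency: inside the stale window the caller never waits on the fetcher, yet the next caller sees fresh data.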
## 8. Add Resilience

Protect your app from cascading failures:

```ts
const cache = new CacheStack([...], {
  // Skip failed layers temporarily
  gracefulDegradation: { retryAfterMs: 10_000 },

  // Stop hammering broken upstreams
  circuitBreaker: { failureThreshold: 5, cooldownMs: 30_000 },

  // Rate limit fetcher calls
  fetcherRateLimit: { maxConcurrent: 10 },

  // Don't fail writes if one layer is down
  writePolicy: 'best-effort'
})
```
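To show what `failureThreshold` and `cooldownMs` control, here is a minimal circuit-breaker sketch (illustrative, not layercache's code): after enough consecutive failures the breaker "opens" and calls fail fast until the cooldown elapses, sparing the broken upstream.

```ts
class BreakerSketch {
  private failures = 0
  private openedAt = 0

  constructor(
    private failureThreshold: number,
    private cooldownMs: number
  ) {}

  async call<T>(fn: () => Promise<T>): Promise<T> {
    const open = this.failures >= this.failureThreshold &&
      Date.now() - this.openedAt < this.cooldownMs
    if (open) throw new Error('circuit open: failing fast') // upstream not called
    try {
      const result = await fn()
      this.failures = 0 // any success closes the circuit
      return result
    } catch (err) {
      this.failures++
      if (this.failures >= this.failureThreshold) this.openedAt = Date.now()
      throw err
    }
  }
}
```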
## 9. Monitor with Stats & Metrics

### Quick stats check

```ts
const stats = cache.getStats()
console.log(stats.metrics) // { hits, misses, fetches, staleHits, ... }
console.log(stats.layers)  // [{ name, isLocal, degradedUntil }]
```

### HTTP stats endpoint

```ts
import { createCacheStatsHandler } from 'layercache'

app.get('/cache/stats', createCacheStatsHandler(cache))
```

### Health checks

```ts
const health = await cache.healthCheck()
// [{ layer: 'memory', healthy: true, latencyMs: 0.03 },
//  { layer: 'redis', healthy: true, latencyMs: 0.41 }]
```

### Event-based monitoring

```ts
cache.on('hit', ({ key, layer }) => metrics.inc('cache.hit', { layer }))
cache.on('miss', ({ key }) => metrics.inc('cache.miss'))
cache.on('error', ({ event, ctx }) => logger.error(event, ctx))
```

### Admin CLI

```sh
npx layercache stats --redis redis://localhost:6379
npx layercache keys --redis redis://localhost:6379 --pattern "user:*"
```
## 10. Snapshot Before Deploys

Save cache state before restarting:

```ts
// Before shutdown
await cache.persistToFile('./cache-snapshot.json')

// After restart
await cache.restoreFromFile('./cache-snapshot.json')
```

Or transfer between instances in-memory:

```ts
const snapshot = await cache.exportState()
await anotherCache.importState(snapshot)
```
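The snapshot round trip is essentially serialize-entries-then-reload. A stand-in sketch over plain `Map`s (hypothetical helper names; only JSON-serializable values survive the trip):

```ts
// Serialize all entries to a JSON string -- the shape a file snapshot
// or in-memory transfer could carry between instances.
function exportStateSketch(store: Map<string, unknown>): string {
  return JSON.stringify([...store.entries()])
}

// Rebuild a fresh store from a previously exported snapshot.
function importStateSketch(json: string): Map<string, unknown> {
  return new Map(JSON.parse(json) as [string, unknown][])
}
```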
## Next Steps
- API Reference - Full API documentation
- Migration Guide - Switching from another library
- Comparison - Feature comparison with alternatives
- Benchmarking - Performance measurement guide