API Reference

Complete API documentation for layercache.

CacheStack

The main class that orchestrates reads, writes, and invalidation across multiple cache layers.

Constructor

```ts
import { CacheStack, MemoryLayer, RedisLayer } from 'layercache'

const cache = new CacheStack(layers, options?)
```

Parameters:

  • layers - CacheLayer[] - Array of cache layers, ordered from fastest (L1) to slowest (Ln)
  • options - CacheStackOptions - Optional configuration (see CacheStackOptions)

Read Operations

cache.get<T>(key, fetcher?, options?): Promise<T | null>

Reads through all layers in order. On a partial hit (found in L2 but not L1), backfills the upper layers automatically. On a full miss, runs the fetcher if provided.

```ts
// Without fetcher - returns null on miss
const user = await cache.get<User>('user:123')

// With fetcher - runs once on miss, fills all layers
const fetchedUser = await cache.get<User>('user:123', () => db.findUser(123))

// With full options
const userWithOptions = await cache.get<User>('user:123', () => db.findUser(123), {
  ttl: { memory: 30, redis: 600 },
  tags: ['user', 'user:123'],
  negativeCache: true,
  negativeTtl: 15,
  staleWhileRevalidate: 30,
  staleIfError: 300,
  ttlJitter: 5
})
```

cache.getOrThrow<T>(key, fetcher?, options?): Promise<T>

Like get(), but throws CacheMissError instead of returning null.

```ts
import { CacheMissError } from 'layercache'

try {
  const config = await cache.getOrThrow<Config>('app:config')
} catch (err) {
  if (err instanceof CacheMissError) {
    console.error(`Missing key: ${err.key}`)
  }
}
```

cache.mget<T>(entries): Promise<Array<T | null>>

Concurrent multi-key fetch. Uses layer-level getMany() fast paths when all entries are simple reads.

```ts
const [user1, user2] = await cache.mget([
  { key: 'user:1', fetch: () => db.findUser(1) },
  { key: 'user:2', fetch: () => db.findUser(2) },
])
```

cache.has(key): Promise<boolean>

Check if a key exists in any layer.

cache.ttl(key): Promise<number | null>

Get the remaining TTL in seconds.
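
Both helpers are useful for guard logic before expensive work. A minimal sketch (the `session:abc` key and `fetchSession` helper are illustrative):

```ts
// Only re-warm the key if no layer still holds it
if (!(await cache.has('session:abc'))) {
  await cache.set('session:abc', await fetchSession(), { ttl: 300 })
}

// Log keys that are about to expire
const remaining = await cache.ttl('session:abc')
if (remaining !== null && remaining < 10) {
  console.warn(`session:abc expires in ${remaining}s`)
}
```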

cache.inspect(key): Promise<CacheInspectResult | null>

Returns detailed metadata about a cache key for debugging.

```ts
const info = await cache.inspect('user:123')
// {
//   key: 'user:123',
//   foundInLayers: ['memory', 'redis'],
//   freshTtlSeconds: 45,
//   staleTtlSeconds: 75,
//   errorTtlSeconds: 345,
//   isStale: false,
//   tags: ['user', 'user:123']
// }
```

Write Operations

cache.set<T>(key, value, options?): Promise<void>

Writes to all layers simultaneously.

```ts
await cache.set('user:123', user, {
  ttl: { memory: 60, redis: 600 },
  tags: ['user', 'user:123'],
  staleWhileRevalidate: { redis: 30 },
  staleIfError: { redis: 120 },
  ttlJitter: { redis: 5 }
})

// Uniform TTL across all layers
await cache.set('user:123', user, { ttl: 120, tags: ['user'] })
```

cache.mset<T>(entries): Promise<void>

Concurrent multi-key write.
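
A sketch of a bulk write. The entry shape shown here (key, value, plus the usual per-operation options) is assumed to mirror set() and may differ in detail:

```ts
// Entry shape assumed to mirror set(key, value, options)
await cache.mset([
  { key: 'user:1', value: user1, options: { ttl: 300, tags: ['user'] } },
  { key: 'user:2', value: user2, options: { ttl: 300, tags: ['user'] } },
])
```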

cache.delete(key): Promise<void>

Delete from all layers.

cache.mdelete(keys): Promise<void>

Bulk delete.

cache.clear(): Promise<void>

Delete all keys from all layers.
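
The three deletion methods side by side:

```ts
await cache.delete('user:123')            // single key, all layers
await cache.mdelete(['user:1', 'user:2']) // bulk delete, all layers
await cache.clear()                       // wipes every key - use with care
```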


Invalidation

cache.invalidateByTag(tag): Promise<void>

Deletes every key stored with this tag across all layers.

```ts
await cache.set('user:123',       user,  { tags: ['user:123'] })
await cache.set('user:123:posts', posts, { tags: ['user:123'] })

await cache.invalidateByTag('user:123') // both keys gone
```

cache.invalidateByTags(tags, mode?): Promise<void>

Delete keys matching any or all of a set of tags.

```ts
await cache.invalidateByTags(['tenant:a', 'users'], 'all') // keys tagged with both
await cache.invalidateByTags(['users', 'posts'], 'any')    // keys tagged with either
```

cache.invalidateByPattern(pattern): Promise<void>

Glob-style deletion. Patterns must be non-empty, at most 1024 characters, and free of control characters.

```ts
await cache.invalidateByPattern('user:*')
```

cache.invalidateByPrefix(prefix): Promise<void>

Hierarchical prefix-based invalidation. Prefer this over glob when keys are hierarchical.

```ts
await cache.invalidateByPrefix('user:123:') // deletes user:123:profile, user:123:posts, ...
```

Wrapping & Namespaces

cache.wrap(prefix, fetcher, options?)

Wraps an async function so every call is transparently cached. The key is derived from function arguments unless you supply a keyResolver.

```ts
const getUser = cache.wrap('user', (id: number) => db.findUser(id))
const user = await getUser(123) // key -> "user:123"

// Custom key resolver
const getUserById = cache.wrap(
  'user',
  (id: number) => db.findUser(id),
  { keyResolver: (id) => String(id), ttl: 300 }
)
```

cache.namespace(prefix): CacheNamespace

Returns a scoped view with the same full API. clear() only touches prefix:* keys.

```ts
const users = cache.namespace('users')
const posts = cache.namespace('posts')

await users.set('123', userData)    // stored as "users:123"
await users.clear()                 // only deletes "users:*"

// Nested namespaces
const tenant = cache.namespace('tenant:abc')
const tenantPosts = tenant.namespace('posts')
await tenantPosts.set('1', data)   // stored as "tenant:abc:posts:1"
```

Namespace prefixes must be non-empty, at most 256 characters, and free of control characters.


Warming & Persistence

cache.warm(entries, options?)

Pre-populate layers at startup. Higher priority values run first.

```ts
await cache.warm(
  [
    { key: 'config',  fetcher: () => db.getConfig(),  priority: 10 },
    { key: 'user:1',  fetcher: () => db.findUser(1),  priority: 5 },
    { key: 'user:2',  fetcher: () => db.findUser(2),  priority: 5 },
  ],
  { concurrency: 4, continueOnError: true }
)
```

cache.exportState() / cache.importState(snapshot)

In-memory snapshot transfer.

```ts
const snapshot = await cache.exportState()
await anotherCache.importState(snapshot)
```

cache.persistToFile(path) / cache.restoreFromFile(path)

Disk-based snapshot persistence. Restricted to process.cwd() by default (configurable via snapshotBaseDir).

```ts
await cache.persistToFile('./cache-snapshot.json')
await cache.restoreFromFile('./cache-snapshot.json')
```

Observability

cache.getMetrics(): CacheMetricsSnapshot

```ts
const { hits, misses, fetches, staleHits, refreshes, writeFailures } = cache.getMetrics()
```

cache.getStats(): CacheStatsSnapshot

Returns metrics, per-layer degradation state, and background refresh count.

```ts
const { metrics, layers, backgroundRefreshes } = cache.getStats()
// layers: [{ name, isLocal, degradedUntil }]
```

cache.getHitRate()

Computed hit rate overall and per-layer.
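
A sketch of reading the hit rate; the exact shape of the returned snapshot (an overall figure plus a per-layer map) is assumed here:

```ts
const hitRate = cache.getHitRate()
// e.g. { overall: 0.92, byLayer: { memory: 0.85, redis: 0.61 } } (shape assumed)
console.log(`overall hit rate: ${(hitRate.overall * 100).toFixed(1)}%`)
```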

cache.healthCheck(): Promise<CacheHealthCheckResult[]>

```ts
const health = await cache.healthCheck()
// [{ layer: 'memory', healthy: true, latencyMs: 0.03 }, ...]
```

cache.resetMetrics(): void

Resets all counters to zero.


Generation Management

Add a generation prefix to every key and rotate it for bulk invalidation without scanning.

```ts
const cache = new CacheStack([...], { generation: 1 })

await cache.set('user:123', user)
cache.bumpGeneration() // now reads use v2:user:123

// Optional: auto-cleanup old generation keys
const cacheWithCleanup = new CacheStack([...], {
  generation: 1,
  generationCleanup: { batchSize: 500 }
})
```

cache.bumpGeneration()

Rotate cache namespace by incrementing generation.

cache.getGeneration()

Get current generation number.


Lifecycle

cache.disconnect(): Promise<void>

Graceful shutdown (unsubscribes from invalidation bus, etc.).
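
Typically wired into the process shutdown path:

```ts
process.on('SIGTERM', async () => {
  await cache.disconnect() // unsubscribe and flush before exit
  process.exit(0)
})
```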


Cache Layers

All layers implement the CacheLayer interface:

```ts
interface CacheLayer {
  readonly name: string
  readonly defaultTtl: number
  readonly isLocal: boolean

  get<T>(key: string): Promise<T | null>
  getEntry?<T>(key: string): Promise<unknown | null>
  getMany?<T>(keys: string[]): Promise<Array<unknown | null>>
  set(key: string, value: unknown, ttl?: number): Promise<void>
  setMany?(entries: Array<{ key: string; value: unknown; ttl?: number }>): Promise<void>
  delete(key: string): Promise<void>
  deleteMany?(keys: string[]): Promise<void>
  clear(): Promise<void>
  keys?(): Promise<string[]>
  has?(key: string): Promise<boolean>
  ttl?(key: string): Promise<number | null>
  size?(): Promise<number>
  ping?(): Promise<boolean>
  dispose?(): Promise<void>
}
```

MemoryLayer

In-process LRU/LFU/FIFO eviction with configurable max size.

```ts
new MemoryLayer({
  ttl: 60,
  maxSize: 5_000,
  name: 'memory'    // default
})
```

RedisLayer

Distributed caching via ioredis with compression, serializers, and optional prefix.

```ts
new RedisLayer({
  client: redis,
  ttl: 300,
  prefix: 'myapp:cache:',
  compression: 'gzip',
  compressionThreshold: 1_024,
  serializer: new MsgpackSerializer(),
  name: 'redis',
  allowUnprefixedClear: false
})
```

DiskLayer

Persistent file-based caching with atomic writes.

```ts
import { resolve } from 'node:path'

new DiskLayer({
  directory: resolve('./var/cache/layercache'),
  maxFiles: 10_000,
  name: 'disk'
})
```

MemcachedLayer

Memcached support with pluggable serializers and bulk operations.

```ts
new MemcachedLayer({
  client: memcachedClient,
  ttl: 300,
  name: 'memcached'
})
```

Custom Layers

Implement CacheLayer to plug in any backend:

```ts
class MyCustomLayer implements CacheLayer {
  readonly name = 'custom'
  readonly defaultTtl = 300
  readonly isLocal = false

  async get<T>(key: string): Promise<T | null> { /* ... */ }
  async set(key: string, value: unknown, ttl?: number): Promise<void> { /* ... */ }
  async delete(key: string): Promise<void> { /* ... */ }
  async clear(): Promise<void> { /* ... */ }
}
```

Options Reference

CacheStackOptions

| Option | Type | Default | Description |
|---|---|---|---|
| logger | Logger \| boolean | false | Pluggable logger interface or boolean |
| metrics | boolean | true | Enable/disable metrics collection |
| stampedePrevention | boolean | true | In-process request deduplication |
| invalidationBus | RedisInvalidationBus | - | Distributed L1 invalidation |
| tagIndex | TagIndex \| RedisTagIndex | in-memory | Custom tag tracking |
| generation | number | - | Generation prefix for bulk invalidation |
| generationCleanup | { batchSize: number } | - | Auto-prune stale generation keys |
| broadcastL1Invalidation | boolean | false | Publish writes to peer memory layers |
| negativeCaching | boolean | false | Cache nulls globally |
| staleWhileRevalidate | number | - | Global stale-while-revalidate window (seconds) |
| staleIfError | number | - | Global stale-if-error window (seconds) |
| adaptiveTtl | AdaptiveTtlOptions | - | Auto-ramp TTLs for hot keys |
| circuitBreaker | CircuitBreakerOptions | - | Per-fetcher failure tracking |
| gracefulDegradation | { retryAfterMs: number } | - | Skip failed layers temporarily |
| writePolicy | 'strict' \| 'best-effort' | 'strict' | Write failure behavior |
| writeStrategy | 'write-through' \| 'write-behind' | 'write-through' | Write batching strategy |
| writeBehind | WriteBehindOptions | - | Batch size, flush interval, max queue |
| fetcherRateLimit | RateLimitOptions | - | Global rate limiting |
| backgroundRefreshTimeoutMs | number | 30000 | Max time for stale refresh attempts |
| singleFlightCoordinator | RedisSingleFlightCoordinator | - | Distributed deduplication |
| singleFlightLeaseMs | number | 30000 | Distributed lock duration |
| singleFlightTimeoutMs | number | 5000 | Wait timeout for distributed lock |
| singleFlightPollMs | number | 50 | Polling interval |
| singleFlightRenewIntervalMs | number | - | Lease renewal cadence |
| snapshotBaseDir | string \| false | process.cwd() | Base directory for file snapshots |
| maxProfileEntries | number | 100000 | Max size before pruning internal maps |

Per-Operation Options

| Option | Type | Description |
|---|---|---|
| tags | string[] | Tags for tag-based invalidation |
| ttl | number \| LayerTtlMap | TTL in seconds, or per-layer overrides |
| ttlPolicy | string \| object \| function | 'until-midnight', 'next-hour', { alignTo }, or custom |
| negativeCache | boolean | Cache null results |
| negativeTtl | number | Short TTL for misses |
| staleWhileRevalidate | number \| LayerTtlMap | Return stale and refresh in background |
| staleIfError | number \| LayerTtlMap | Keep serving stale if refresh fails |
| ttlJitter | number \| LayerTtlMap | +/- random jitter on expiry |
| slidingTtl | boolean | Reset TTL on every read |
| refreshAhead | number | Trigger background refresh when TTL drops below threshold |
| adaptiveTtl | AdaptiveTtlOptions | Auto-ramp TTL for hot keys |
| circuitBreaker | CircuitBreakerOptions | Per-operation circuit breaker |
| fetcherRateLimit | RateLimitOptions | Per-operation rate limiting |
| shouldCache | (value: T) => boolean | Predicate to skip caching specific results |


Invalidation Strategies

Tag Invalidation

```ts
await cache.set('user:123', user, { tags: ['user', 'user:123'] })
await cache.invalidateByTag('user:123')
```

Batch Tag Invalidation

```ts
await cache.invalidateByTags(['tenant:a', 'users'], 'all')
await cache.invalidateByTags(['users', 'posts'], 'any')
```

Wildcard Invalidation

```ts
await cache.invalidateByPattern('user:*')
```

Prefix Invalidation

```ts
await cache.invalidateByPrefix('user:123:')
```

Generation-Based Invalidation

```ts
cache.bumpGeneration() // instant bulk invalidation without scanning
```

Freshness Strategies

Stale-While-Revalidate

```ts
await cache.set('config', config, {
  ttl: 60,
  staleWhileRevalidate: 30,  // serve stale for 30s while refreshing
  staleIfError: 300           // serve stale for 5min if refresh fails
})
```

Sliding TTL

```ts
await cache.get('session:abc', fetchSession, { slidingTtl: true })
```

Adaptive TTL

```ts
await cache.get('popular-post', fetchPost, {
  adaptiveTtl: { hotAfter: 5, step: 60, maxTtl: 3600 }
})
```

Refresh-Ahead

```ts
await cache.get('leaderboard', fetchLeaderboard, {
  ttl: 120,
  refreshAhead: 30 // refresh when <= 30s remain
})
```

TTL Policies

```ts
await cache.set('daily-report', report, { ttlPolicy: 'until-midnight' })
await cache.set('hourly-rollup', rollup, { ttlPolicy: 'next-hour' })
await cache.set('aligned', value, { ttlPolicy: { alignTo: 300 } })
await cache.set('custom', value, {
  ttlPolicy: ({ key }) => key.startsWith('hot:') ? 30 : 300
})
```

Per-Layer TTL Overrides

```ts
await cache.set('session:abc', data, {
  ttl: { memory: 30, redis: 3600 }
})
```

Conditional Caching

```ts
const data = await cache.get('api:response', fetchFromApi, {
  shouldCache: (value) => (value as any).status === 200
})
```

Resilience

Graceful Degradation

```ts
new CacheStack([...], {
  gracefulDegradation: { retryAfterMs: 10_000 }
})
```

Circuit Breaker

```ts
new CacheStack([...], {
  circuitBreaker: { failureThreshold: 5, cooldownMs: 30_000 }
})

// Per-operation
await cache.get('fragile-key', fetch, {
  circuitBreaker: { failureThreshold: 3, cooldownMs: 10_000 }
})
```

Write Policies

```ts
// Strict (default): fail if any layer fails
new CacheStack([...], { writePolicy: 'strict' })

// Best-effort: only fail if every layer fails
new CacheStack([...], { writePolicy: 'best-effort' })
```

Scoped Fetcher Rate Limiting

```ts
await cache.get('user:123', fetchUser, {
  fetcherRateLimit: { maxConcurrent: 1, scope: 'key' }
})
```

Compression & Serialization

Compression

```ts
new RedisLayer({
  client: redis,
  ttl: 300,
  compression: 'gzip',         // or 'brotli'
  compressionThreshold: 1_024  // skip compression for small values
})
```

MessagePack Serializer

```ts
import { MsgpackSerializer } from 'layercache'

new RedisLayer({
  client: redis,
  ttl: 300,
  serializer: new MsgpackSerializer()
})
```

Distributed Features

Distributed Single-Flight

```ts
import { RedisSingleFlightCoordinator } from 'layercache'

const coordinator = new RedisSingleFlightCoordinator({ client: redis })

new CacheStack([...], {
  singleFlightCoordinator: coordinator,
  singleFlightLeaseMs: 30_000,
  singleFlightRenewIntervalMs: 10_000,
})
```

Cross-Server L1 Invalidation

```ts
import Redis from 'ioredis'
import { RedisInvalidationBus } from 'layercache'

const bus = new RedisInvalidationBus({ publisher: redis, subscriber: new Redis() })

new CacheStack([...], {
  invalidationBus: bus,
  broadcastL1Invalidation: true
})
```

Distributed Tag Index

```ts
import { RedisTagIndex } from 'layercache'

const tagIndex = new RedisTagIndex({
  client: redis,
  prefix: 'myapp:tag-index',
  knownKeysShards: 8
})

new CacheStack([...], { tagIndex })
```

Event Hooks

CacheStack extends EventEmitter:

| Event | Payload |
|---|---|
| hit | { key, layer } |
| miss | { key } |
| set | { key } |
| delete | { key } |
| stale-serve | { key, state, layer } |
| stampede-dedupe | { key } |
| backfill | { key, fromLayer, toLayer } |
| warm | { key } |
| error | { event, context } |

```ts
cache.on('hit',   ({ key, layer }) => metrics.inc('cache.hit', { layer }))
cache.on('miss',  ({ key })        => metrics.inc('cache.miss'))
cache.on('error', ({ event, context }) => logger.error(event, context))
```

Framework Integrations

Express

```ts
import { createExpressCacheMiddleware } from 'layercache'

app.get('/api/users', createExpressCacheMiddleware(cache, {
  ttl: 30,
  tags: ['users'],
  keyResolver: (req) => `user:${req.url}`
}), handler)
```

Fastify

```ts
import { createFastifyLayercachePlugin } from 'layercache/integrations/fastify'

await fastify.register(createFastifyLayercachePlugin(cache, {
  statsPath: '/cache/stats'
}))
```

Hono

```ts
import { createHonoCacheMiddleware } from 'layercache/integrations/hono'

app.use('/api/*', createHonoCacheMiddleware(cache, { ttl: 60 }))
```

tRPC

```ts
import { createTrpcCacheMiddleware } from 'layercache/integrations/trpc'

const cacheMiddleware = createTrpcCacheMiddleware(cache, 'trpc', { ttl: 60 })
export const cachedProcedure = t.procedure.use(cacheMiddleware)
```

GraphQL

```ts
import { cacheGraphqlResolver } from 'layercache/integrations/graphql'

const resolvers = {
  Query: {
    user: cacheGraphqlResolver(cache, 'user', (_root, { id }) => db.findUser(id), {
      keyResolver: (_root, { id }) => id,
      ttl: 300
    })
  }
}
```

NestJS

```bash
npm install @cachestack/nestjs
```

```ts
import { CacheStackModule } from '@cachestack/nestjs'

@Module({
  imports: [
    CacheStackModule.forRoot({
      layers: [new MemoryLayer({ ttl: 20 }), new RedisLayer({ client: redis, ttl: 300 })]
    })
  ]
})
export class AppModule {}
```

OpenTelemetry

```ts
import { createOpenTelemetryPlugin } from 'layercache/integrations/opentelemetry'

createOpenTelemetryPlugin(cache, tracer)
```

Stats HTTP Handler

```ts
import { createCacheStatsHandler } from 'layercache'
import http from 'node:http'

const statsHandler = createCacheStatsHandler(cache)
http.createServer(statsHandler).listen(9090)
```

Admin CLI

Inspect and manage Redis-backed caches from the terminal.

```bash
npx layercache stats      --redis redis://localhost:6379
npx layercache keys       --redis redis://localhost:6379 --pattern "user:*"
npx layercache invalidate --redis redis://localhost:6379 --tag user:123
npx layercache invalidate --redis redis://localhost:6379 --pattern "session:*"
```

Debug Logging

```bash
DEBUG=layercache:debug node server.js
```

Or pass a logger instance:

```ts
new CacheStack([...], {
  logger: {
    debug(message, context) { myLogger.debug(message, context) }
  }
})
```