layercache 1.0.0

package/LICENSE ADDED
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2026

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
package/README.md ADDED
@@ -0,0 +1,492 @@
# layercache

**Multi-layer caching for Node.js — memory → Redis → your DB, unified in one API.**

[![npm version](https://img.shields.io/npm/v/layercache)](https://www.npmjs.com/package/layercache)
[![npm downloads](https://img.shields.io/npm/dw/layercache)](https://www.npmjs.com/package/layercache)
[![license](https://img.shields.io/npm/l/layercache)](LICENSE)
[![TypeScript](https://img.shields.io/badge/TypeScript-first-blue)](https://www.typescriptlang.org/)

```
L1 hit   ~0.01 ms  ← served from memory, zero network
L2 hit   ~0.5 ms   ← served from Redis, backfilled to memory
miss     ~20 ms    ← fetcher runs once, all layers filled
```

---

## Why layercache?

Most Node.js services end up with the same problem:

- **Memory-only** → fast, but not shared across servers
- **Redis-only** → shared, but every read pays a network round-trip
- **Hand-rolled layers** → works, but you rewrite stampede prevention, backfill logic, and tag invalidation in every project

layercache solves all three. You declare your layers once and call `get`. Everything else is handled.

```ts
const user = await cache.get('user:123', () => db.findUser(123))
// ↑ only called on a full miss
```

On a hit, the value is returned from the fastest layer that has it, and automatically backfilled into any faster layers that missed it. On a miss, the fetcher runs exactly once — even under 100 concurrent requests for the same key.

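The hit, backfill, and single-fetch flow above can be pictured as a plain loop over the layer list. This is an illustrative sketch, not layercache's actual implementation (the `Layer` type here is a cut-down stand-in for the real `CacheLayer` interface):

```typescript
// Illustrative sketch of a layered read with backfill (not the library's code).
type Layer = {
  get(key: string): Promise<unknown | null>
  set(key: string, value: unknown): Promise<void>
}

async function readThrough<T>(
  layers: Layer[],
  key: string,
  fetcher?: () => Promise<T>
): Promise<T | null> {
  // Walk layers from fastest to slowest.
  for (const [i, layer] of layers.entries()) {
    const hit = await layer.get(key)
    if (hit !== null) {
      // Backfill every faster layer that missed.
      await Promise.all(layers.slice(0, i).map((upper) => upper.set(key, hit)))
      return hit as T
    }
  }
  // Full miss: run the fetcher once and fill all layers.
  if (!fetcher) return null
  const value = await fetcher()
  await Promise.all(layers.map((layer) => layer.set(key, value)))
  return value
}
```
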
---

## Features

- **Layered reads & automatic backfill** — hits in slower layers propagate up
- **Cache stampede prevention** — mutex-based deduplication per key
- **Tag-based invalidation** — `set('user:123:posts', posts, { tags: ['user:123'] })` then `invalidateByTag('user:123')`
- **Pattern invalidation** — `invalidateByPattern('user:*')`
- **Per-layer TTL overrides** — different TTLs for memory vs. Redis in one call
- **Distributed tag index** — `RedisTagIndex` keeps tag state consistent across multiple servers
- **Cross-server L1 invalidation** — Redis pub/sub bus flushes stale memory on other instances when you write or delete
- **Metrics** — hit/miss/fetch/backfill counters built in
- **MessagePack serializer** — drop-in replacement for lower Redis memory usage
- **NestJS module** — `CacheStackModule.forRoot(...)` with `@InjectCacheStack()`
- **Custom layers** — implement the small `CacheLayer` interface (`get` / `set` / `delete` / `clear`) to plug in Memcached, DynamoDB, or anything else
- **ESM + CJS** — works with both module systems, Node.js ≥ 18

---

## Installation

```bash
npm install layercache
# Redis support (optional)
npm install ioredis
```

---

## Quick start

```ts
import { CacheStack, MemoryLayer, RedisLayer } from 'layercache'
import Redis from 'ioredis'

const cache = new CacheStack([
  new MemoryLayer({ ttl: 60, maxSize: 1_000 }),      // L1 — local memory
  new RedisLayer({ client: new Redis(), ttl: 3600 }) // L2 — Redis
])

// Fetch pattern — cache miss runs the fetcher, hit skips it entirely
const user = await cache.get<User>('user:123', () => db.findUser(123))

// Manual set / delete
await cache.set('user:123', user)
await cache.delete('user:123')
```

Memory-only setup (no Redis required):

```ts
const cache = new CacheStack([
  new MemoryLayer({ ttl: 60 })
])
```

---

## Core API

### `cache.get<T>(key, fetcher?, options?): Promise<T | null>`

Reads through all layers in order. On a partial hit (found in L2 but not L1), backfills the upper layers automatically. On a full miss, runs the fetcher — if one was provided.

```ts
// Without fetcher — returns null on miss
const user = await cache.get<User>('user:123')

// With fetcher — runs once on miss, fills all layers
const user = await cache.get<User>('user:123', () => db.findUser(123))

// With options
const user = await cache.get<User>('user:123', () => db.findUser(123), {
  ttl: { memory: 30, redis: 600 }, // per-layer TTL
  tags: ['user', 'user:123']       // tag this key for bulk invalidation
})
```

### `cache.set<T>(key, value, options?): Promise<void>`

Writes to all layers simultaneously.

```ts
await cache.set('user:123', user, {
  ttl: { memory: 60, redis: 600 }, // per-layer TTL (seconds)
  tags: ['user', 'user:123']
})

await cache.set('user:123', user, {
  ttl: 120, // uniform TTL across all layers
  tags: ['user', 'user:123']
})
```

### `cache.invalidateByTag(tag): Promise<void>`

Deletes every key that was stored with this tag across all layers.

```ts
await cache.set('user:123', user, { tags: ['user:123'] })
await cache.set('user:123:posts', posts, { tags: ['user:123'] })

await cache.invalidateByTag('user:123') // both keys gone
```
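
Conceptually, the tag bookkeeping is a map from each tag to the set of keys stored under it. A simplified sketch of the idea (not the library's actual `TagIndex`):

```typescript
// Sketch: the tag-to-keys bookkeeping behind tag invalidation.
const tagToKeys = new Map<string, Set<string>>()

function recordTags(key: string, tags: string[]): void {
  for (const tag of tags) {
    if (!tagToKeys.has(tag)) tagToKeys.set(tag, new Set())
    tagToKeys.get(tag)!.add(key)
  }
}

// invalidateByTag would then delete every key in this set, in every layer.
function keysForTag(tag: string): string[] {
  return [...(tagToKeys.get(tag) ?? [])]
}
```
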

### `cache.invalidateByPattern(pattern): Promise<void>`

Glob-style deletion against the tracked key set.

```ts
await cache.invalidateByPattern('user:*') // deletes user:1, user:2, …
```
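
Glob matching of this kind can be pictured as compiling the pattern's `*` wildcards into a regular expression. A sketch of the idea (the library's exact matching rules may differ):

```typescript
// Sketch: compile a glob pattern like 'user:*' into a RegExp.
function globToRegExp(pattern: string): RegExp {
  // Escape regex metacharacters except '*', then turn '*' into '.*'.
  const escaped = pattern.replace(/[.+?^${}()|[\]\\]/g, '\\$&')
  return new RegExp(`^${escaped.replace(/\*/g, '.*')}$`)
}

globToRegExp('user:*').test('user:42') // matches
```
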

### `cache.mget<T>(entries): Promise<Array<T | null>>`

Concurrent multi-key fetch; each entry may carry its own optional fetcher.

```ts
const [user1, user2] = await cache.mget([
  { key: 'user:1', fetch: () => db.findUser(1) },
  { key: 'user:2', fetch: () => db.findUser(2) },
])
```

### `cache.getMetrics(): CacheMetricsSnapshot`

```ts
const { hits, misses, fetches, backfills } = cache.getMetrics()
```
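
A common use for the snapshot is deriving a hit rate. A small helper (illustrative, using only the counter fields shown above):

```typescript
// Derive an overall hit rate from the metrics counters.
function hitRate(snapshot: { hits: number; misses: number }): number {
  const total = snapshot.hits + snapshot.misses
  return total === 0 ? 0 : snapshot.hits / total
}

hitRate({ hits: 90, misses: 10 }) // 0.9
```
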

---

## Cache stampede prevention

When 100 requests arrive simultaneously for an uncached key, only one fetcher runs. The rest wait and share the result.

```ts
const cache = new CacheStack([...])
// stampedePrevention is true by default

// 100 concurrent requests → fetcher executes exactly once
const results = await Promise.all(
  Array.from({ length: 100 }, () =>
    cache.get('hot-key', expensiveFetch)
  )
)
```
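
The mechanism can be pictured as an in-flight promise map: the first request for a key stores its fetch promise, and later requests reuse it until it settles. A sketch (not the library's actual mutex code):

```typescript
// Sketch: share one in-flight fetch per key.
const inFlight = new Map<string, Promise<unknown>>()

function dedupe<T>(key: string, fetch: () => Promise<T>): Promise<T> {
  const existing = inFlight.get(key)
  if (existing) return existing as Promise<T>
  // First caller starts the fetch; everyone else awaits the same promise.
  const p = fetch().finally(() => inFlight.delete(key))
  inFlight.set(key, p)
  return p
}
```
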

Disable it if you prefer independent fetches:

```ts
new CacheStack([...], { stampedePrevention: false })
```

---

## Distributed deployments

### Cross-server L1 invalidation

When one server writes or deletes a key, other servers' memory layers go stale. The `RedisInvalidationBus` propagates invalidation events over Redis pub/sub so every instance stays consistent.

```ts
import { RedisInvalidationBus } from 'layercache'

const publisher = new Redis()
const subscriber = new Redis()
const bus = new RedisInvalidationBus({ publisher, subscriber })

const cache = new CacheStack(
  [new MemoryLayer({ ttl: 60 }), new RedisLayer({ client: publisher, ttl: 300 })],
  { invalidationBus: bus }
)

await cache.disconnect() // unsubscribes cleanly on shutdown
```

By default, every `set` also broadcasts an invalidation so other servers evict stale memory immediately. To suppress broadcasts on writes (high write-volume services):

```ts
new CacheStack([...], { invalidationBus: bus, publishSetInvalidation: false })
```

### Distributed tag invalidation

The default `TagIndex` lives in process memory — `invalidateByTag` on server A only knows about keys *that server A wrote*. For full cross-server tag invalidation, use `RedisTagIndex`:

```ts
import { RedisTagIndex } from 'layercache'

const sharedTagIndex = new RedisTagIndex({
  client: redis,
  prefix: 'myapp:tag-index' // namespaced so it doesn't collide with other data
})

// Every CacheStack instance should use the same Redis-backed tag index config
const cache = new CacheStack(
  [new MemoryLayer({ ttl: 60 }), new RedisLayer({ client: redis, ttl: 300 })],
  { invalidationBus: bus, tagIndex: sharedTagIndex }
)
```

Now `invalidateByTag('user:123')` on any server deletes every tagged key, regardless of which server originally wrote it.

### Safe Redis clearing

`RedisLayer.clear()` is intentionally conservative. Without a `prefix`, it throws instead of deleting the whole Redis database.

```ts
const cache = new CacheStack([
  new RedisLayer({
    client: redis,
    prefix: 'myapp:cache:' // recommended for safe clear() and key scans
  })
])
```

If you really want to clear an unprefixed namespace, you must opt in explicitly:

```ts
new RedisLayer({
  client: redis,
  allowUnprefixedClear: true
})
```

---

## Per-layer TTL overrides

Layer names match the `name` option on each layer (`'memory'` and `'redis'` by default).

```ts
await cache.set('session:abc', sessionData, {
  ttl: { memory: 30, redis: 3600 } // 30s in RAM, 1h in Redis
})

// Same override works on get (applied to backfills)
await cache.get('session:abc', fetchSession, {
  ttl: { memory: 30, redis: 3600 }
})
```

Custom layer names:

```ts
new MemoryLayer({ name: 'local', ttl: 60 })
new RedisLayer({ name: 'shared', client: redis, ttl: 300 })

// then
await cache.set('key', value, { ttl: { local: 15, shared: 600 } })
```

---

## MessagePack serialization

Reduces Redis memory usage and speeds up serialization for large values:

```ts
import { MsgpackSerializer } from 'layercache'

new RedisLayer({
  client: redis,
  ttl: 300,
  serializer: new MsgpackSerializer()
})
```
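
At its core, a serializer is an encode/decode pair. The shape below is purely illustrative (the names `ValueCodec`, `encode`, and `decode` are assumptions, not layercache's real serializer interface; check the package's exported types before writing your own):

```typescript
// Hypothetical codec shape; check layercache's exported types for the real interface.
interface ValueCodec {
  encode(value: unknown): string
  decode(raw: string): unknown
}

// A JSON codec, for comparison with the MessagePack one above.
const jsonCodec: ValueCodec = {
  encode: (value) => JSON.stringify(value),
  decode: (raw) => JSON.parse(raw)
}
```
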

---

## Custom layers

Implement `CacheLayer` to plug in any backend:

```ts
import type { CacheLayer } from 'layercache'

class MemcachedLayer implements CacheLayer {
  readonly name = 'memcached'
  readonly defaultTtl = 300
  readonly isLocal = false

  async get<T>(key: string): Promise<T | null> { /* … */ }
  async set(key: string, value: unknown, ttl?: number): Promise<void> { /* … */ }
  async delete(key: string): Promise<void> { /* … */ }
  async clear(): Promise<void> { /* … */ }
}

const cache = new CacheStack([
  new MemoryLayer({ ttl: 60 }),
  new MemcachedLayer()
])
```

---

## NestJS

```bash
npm install @cachestack/nestjs
```

```ts
// app.module.ts
import { CacheStackModule } from '@cachestack/nestjs'

@Module({
  imports: [
    CacheStackModule.forRoot({
      layers: [
        new MemoryLayer({ ttl: 20 }),
        new RedisLayer({ client: redis, ttl: 300 })
      ]
    })
  ]
})
export class AppModule {}
```

```ts
// your.service.ts
import { InjectCacheStack } from '@cachestack/nestjs'
import { CacheStack } from 'layercache'

@Injectable()
export class UserService {
  constructor(@InjectCacheStack() private readonly cache: CacheStack) {}

  async getUser(id: number) {
    return this.cache.get(`user:${id}`, () => this.db.findUser(id))
  }
}
```

---

## Express / Next.js

```ts
// Express
app.get('/users/:id', async (req, res) => {
  const user = await cache.get(`user:${req.params.id}`,
    () => db.findUser(Number(req.params.id)),
    { tags: [`user:${req.params.id}`] }
  )
  res.json(user)
})

// Next.js App Router
export async function GET(_req: Request, { params }: { params: { id: string } }) {
  const data = await cache.get(`user:${params.id}`, () => db.findUser(Number(params.id)))
  return Response.json(data)
}
```

---

## Environment-based configuration

```ts
export const cache = process.env.NODE_ENV === 'production'
  ? new CacheStack([
      new MemoryLayer({ ttl: 60 }),
      new RedisLayer({ client: redis, ttl: 3600 })
    ])
  : new CacheStack([
      new MemoryLayer({ ttl: 60 }) // no Redis needed in dev
    ])
```

---

## Benchmarks

```bash
npm run bench:latency
npm run bench:stampede
```

These scripts use `ioredis-mock` and a synthetic no-cache delay, so treat the numbers as a quick sanity check rather than a production benchmark.

Example output from a local run:

| Scenario | avg latency |
|---|---|
| L1 memory hit | ~0.006 ms |
| L2 Redis hit | ~0.020 ms |
| No cache (simulated DB) | ~1.08 ms |

```
┌─────────────────────┬────────┐
│ concurrentRequests  │ 100    │
│ fetcherExecutions   │ 1      │ ← stampede prevention in action
└─────────────────────┴────────┘
```

---

## Comparison

| Feature | node-cache | ioredis | cache-manager | **layercache** |
|---|:---:|:---:|:---:|:---:|
| Multi-layer | ❌ | ❌ | △ | ✅ |
| Auto backfill | ❌ | ❌ | ❌ | ✅ |
| Stampede prevention | ❌ | ❌ | ❌ | ✅ |
| Tag invalidation | ❌ | ❌ | ❌ | ✅ |
| Distributed tags | ❌ | ❌ | ❌ | ✅ |
| Cross-server L1 flush | ❌ | ❌ | ❌ | ✅ |
| TypeScript-first | ❌ | ✅ | △ | ✅ |
| NestJS module | ❌ | ❌ | ✅ | ✅ |
| Custom layers | ❌ | — | △ | ✅ |

---

## Debug logging

```bash
DEBUG=cachestack:debug node server.js
```

Or pass a logger instance:

```ts
new CacheStack([...], {
  logger: {
    debug(message, context) { myLogger.debug(message, context) }
  }
})
```

---

## Requirements

- Node.js ≥ 18
- TypeScript ≥ 5.0 (optional — fully typed, ships `.d.ts`)
- ioredis ≥ 5 (optional peer dependency — only needed for `RedisLayer` / `RedisTagIndex`)

---

## Contributing

```bash
git clone https://github.com/flyingsquirrel0419/cachestack
cd cachestack
npm install
npm test          # vitest
npm run build:all # esm + cjs + nestjs package
```

PRs and issues welcome.

---

## License

MIT
@@ -0,0 +1,45 @@
import Redis from 'ioredis-mock'
import { performance } from 'node:perf_hooks'
import { CacheStack, MemoryLayer, RedisLayer } from '../src'

async function main(): Promise<void> {
  const iterations = 5_000
  const redis = new Redis()
  const cache = new CacheStack([
    new MemoryLayer({ ttl: 60, maxSize: 10_000 }),
    new RedisLayer({ client: redis, ttl: 300 })
  ])

  await cache.set('bench:key', { ok: true })

  const memoryStart = performance.now()
  for (let index = 0; index < iterations; index += 1) {
    await cache.get('bench:key')
  }
  const memoryElapsed = performance.now() - memoryStart

  await redis.del('bench:key')
  await redis.set('bench:key', JSON.stringify({ ok: true }))

  const redisOnlyStart = performance.now()
  for (let index = 0; index < iterations; index += 1) {
    await cache.get('bench:key')
    await cache.delete('bench:key')
    await redis.set('bench:key', JSON.stringify({ ok: true }))
  }
  const redisOnlyElapsed = performance.now() - redisOnlyStart

  const noCacheStart = performance.now()
  for (let index = 0; index < iterations; index += 1) {
    await new Promise((resolve) => setTimeout(resolve, 1))
  }
  const noCacheElapsed = performance.now() - noCacheStart

  console.table({
    l1MemoryAvgMs: memoryElapsed / iterations,
    l2RedisAvgMs: redisOnlyElapsed / iterations,
    noCacheAvgMs: noCacheElapsed / iterations
  })
}

void main()
@@ -0,0 +1,29 @@
import Redis from 'ioredis-mock'
import { CacheStack, MemoryLayer, RedisLayer } from '../src'

async function main(): Promise<void> {
  const redis = new Redis()
  const cache = new CacheStack([
    new MemoryLayer({ ttl: 60 }),
    new RedisLayer({ client: redis, ttl: 300 })
  ])

  let executions = 0

  await Promise.all(
    Array.from({ length: 100 }, () =>
      cache.get('stampede:key', async () => {
        executions += 1
        await new Promise((resolve) => setTimeout(resolve, 10))
        return { ok: true }
      })
    )
  )

  console.table({
    concurrentRequests: 100,
    fetcherExecutions: executions
  })
}

void main()