layercache 1.0.0 → 1.0.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +85 -6
- package/dist/index.cjs +437 -59
- package/dist/index.d.cts +64 -4
- package/dist/index.d.ts +64 -4
- package/dist/index.js +436 -59
- package/package.json +5 -1
- package/packages/nestjs/dist/index.cjs +368 -57
- package/packages/nestjs/dist/index.d.cts +23 -0
- package/packages/nestjs/dist/index.d.ts +23 -0
- package/packages/nestjs/dist/index.js +368 -57
package/README.md
CHANGED
@@ -41,7 +41,13 @@ On a hit, the value is returned from the fastest layer that has it, and automati
 - **Tag-based invalidation** — `set('user:123:posts', posts, { tags: ['user:123'] })` then `invalidateByTag('user:123')`
 - **Pattern invalidation** — `invalidateByPattern('user:*')`
 - **Per-layer TTL overrides** — different TTLs for memory vs. Redis in one call
+- **Negative caching** — cache known misses for a short TTL to protect the database
+- **Stale strategies** — `staleWhileRevalidate` and `staleIfError` as opt-in read behavior
+- **TTL jitter** — spread expirations to avoid synchronized stampedes
+- **Best-effort writes** — tolerate partial layer write failures when desired
+- **Bulk reads** — `mget` uses layer-level `getMany()` when available
 - **Distributed tag index** — `RedisTagIndex` keeps tag state consistent across multiple servers
+- **Optional distributed single-flight** — plug in a coordinator to dedupe misses across instances
 - **Cross-server L1 invalidation** — Redis pub/sub bus flushes stale memory on other instances when you write or delete
 - **Metrics** — hit/miss/fetch/backfill counters built in
 - **MessagePack serializer** — drop-in replacement for lower Redis memory usage
@@ -106,7 +112,12 @@ const user = await cache.get<User>('user:123', () => db.findUser(123))
 // With options
 const user = await cache.get<User>('user:123', () => db.findUser(123), {
   ttl: { memory: 30, redis: 600 }, // per-layer TTL
-  tags: ['user', 'user:123']
+  tags: ['user', 'user:123'],  // tag this key for bulk invalidation
+  negativeCache: true,         // cache null fetches
+  negativeTtl: 15,             // short TTL for misses
+  staleWhileRevalidate: 30,    // serve stale and refresh in background
+  staleIfError: 300,           // serve stale if refresh fails
+  ttlJitter: 5                 // +/- 5s expiry spread
 })
 ```
 
@@ -117,7 +128,10 @@ Writes to all layers simultaneously.
 ```ts
 await cache.set('user:123', user, {
   ttl: { memory: 60, redis: 600 }, // per-layer TTL (seconds)
-  tags: ['user', 'user:123']
+  tags: ['user', 'user:123'],
+  staleWhileRevalidate: { redis: 30 },
+  staleIfError: { redis: 120 },
+  ttlJitter: { redis: 5 }
 })
 
 await cache.set('user:123', user, {
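The `ttlJitter` option in the hunk above spreads expirations so keys written together do not all expire at once. A minimal sketch of how such jitter is typically computed — this is an illustrative function, not the package's actual implementation:

```typescript
// Illustrative: pick an effective TTL uniformly within ±jitter seconds of the
// requested TTL, clamped so it never drops below 1 second.
function jitteredTtl(ttl: number, jitter: number): number {
  const offset = (Math.random() * 2 - 1) * jitter // uniform in [-jitter, jitter)
  return Math.max(1, Math.round(ttl + offset))
}
```

With `ttl: 600, ttlJitter: 5`, each write would land somewhere in roughly 595–605 seconds, which is enough to desynchronize a burst of writes that happened in the same instant.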
@@ -149,6 +163,8 @@ await cache.invalidateByPattern('user:*') // deletes user:1, user:2, …
 
 Concurrent multi-key fetch, each with its own optional fetcher.
 
+If every entry is a simple read (`{ key }` only), `CacheStack` will use layer-level `getMany()` fast paths when the layer implements one.
+
 ```ts
 const [user1, user2] = await cache.mget([
   { key: 'user:1', fetch: () => db.findUser(1) },
@@ -159,11 +175,50 @@ const [user1, user2] = await cache.mget([
 ### `cache.getMetrics(): CacheMetricsSnapshot`
 
 ```ts
-const { hits, misses, fetches,
+const { hits, misses, fetches, staleHits, refreshes, writeFailures } = cache.getMetrics()
 ```
 
 ---
 
+## Negative + stale caching
+
+`negativeCache` stores fetcher misses for a short TTL, which is useful for "user not found" or "feature flag absent" style lookups.
+
+```ts
+const user = await cache.get(`user:${id}`, () => db.findUser(id), {
+  negativeCache: true,
+  negativeTtl: 15
+})
+```
+
+`staleWhileRevalidate` returns the last cached value immediately after expiry and refreshes it in the background. `staleIfError` keeps serving the stale value if the refresh fails.
+
+```ts
+await cache.set('config', currentConfig, {
+  ttl: 60,
+  staleWhileRevalidate: 30,
+  staleIfError: 300
+})
+```
+
+---
+
+## Write failure policy
+
+Default writes are strict: if any layer write fails, the operation throws.
+
+If you prefer "at least one layer succeeds", enable best-effort mode:
+
+```ts
+const cache = new CacheStack([...], {
+  writePolicy: 'best-effort'
+})
+```
+
+`best-effort` logs the failed layers, increments `writeFailures`, and only throws if *every* layer failed.
+
+---
+
 ## Cache stampede prevention
 
 When 100 requests arrive simultaneously for an uncached key, only one fetcher runs. The rest wait and share the result.
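The expanded metrics snapshot above adds `staleHits`, `refreshes`, and `writeFailures` to the existing counters. A derived hit-rate helper, sketched against an assumed snapshot shape — the field names follow the README, but the interface and helper here are reconstructions, not layercache exports:

```typescript
// Assumed shape of the snapshot, using the fields named in the README.
interface CacheMetricsSnapshot {
  hits: number
  misses: number
  fetches: number
  staleHits: number
  refreshes: number
  writeFailures: number
}

// Fraction of reads served from cache; stale hits count as hits here,
// since the caller received a cached value either way.
function hitRate(m: CacheMetricsSnapshot): number {
  const total = m.hits + m.staleHits + m.misses
  return total === 0 ? 0 : (m.hits + m.staleHits) / total
}
```

A ratio like this is what you would typically export to a dashboard rather than the raw counters.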
@@ -190,6 +245,28 @@ new CacheStack([...], { stampedePrevention: false })
 
 ## Distributed deployments
 
+### Distributed single-flight
+
+Local stampede prevention only deduplicates requests inside one Node.js process. To dedupe cross-instance misses, configure a shared coordinator.
+
+```ts
+import { RedisSingleFlightCoordinator } from 'layercache'
+
+const coordinator = new RedisSingleFlightCoordinator({ client: redis })
+
+const cache = new CacheStack(
+  [new MemoryLayer({ ttl: 60 }), new RedisLayer({ client: redis, ttl: 300 })],
+  {
+    singleFlightCoordinator: coordinator,
+    singleFlightLeaseMs: 30_000,
+    singleFlightTimeoutMs: 5_000,
+    singleFlightPollMs: 50
+  }
+)
+```
+
+When another instance already owns the miss, the current process waits for the value to appear in the shared layer instead of running the fetcher again.
+
 ### Cross-server L1 invalidation
 
 When one server writes or deletes a key, other servers' memory layers go stale. The `RedisInvalidationBus` propagates invalidation events over Redis pub/sub so every instance stays consistent.
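The follower side of distributed single-flight described above ("waits for the value to appear in the shared layer") amounts to a bounded poll loop. A self-contained sketch of that idea under stated assumptions — `waitForValue`, its parameters, and the fallback behavior are illustrative, not layercache API:

```typescript
// Illustrative follower loop: poll a shared store until the lease owner has
// written the value, or until the timeout elapses. On timeout the caller
// would fall back to running the fetcher itself.
async function waitForValue<T>(
  read: () => Promise<T | null>, // e.g. a read against the shared Redis layer
  timeoutMs: number,             // analogous to singleFlightTimeoutMs
  pollMs: number                 // analogous to singleFlightPollMs
): Promise<T | null> {
  const deadline = Date.now() + timeoutMs
  while (Date.now() < deadline) {
    const value = await read()
    if (value !== null) return value
    await new Promise((resolve) => setTimeout(resolve, pollMs))
  }
  return null
}
```

The lease (`singleFlightLeaseMs` in the example above) bounds how long a crashed owner can block other instances: once it expires, another instance can claim the miss.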
@@ -316,6 +393,8 @@ class MemcachedLayer implements CacheLayer {
   readonly isLocal = false
 
   async get<T>(key: string): Promise<T | null> { /* … */ }
+  async getEntry?(key: string): Promise<unknown | null> { /* optional raw access */ }
+  async getMany?(keys: string[]): Promise<Array<unknown | null>> { /* optional bulk read */ }
   async set(key: string, value: unknown, ttl?: number): Promise<void> { /* … */ }
   async delete(key: string): Promise<void> { /* … */ }
   async clear(): Promise<void> { /* … */ }
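To make the new optional `getMany()` hook in the layer interface above concrete, here is a minimal in-memory layer sketch. Only the method shapes mirror the README's `MemcachedLayer` example; the class itself is hypothetical and ignores TTLs for brevity:

```typescript
// Hypothetical Map-backed layer implementing the core methods plus the
// optional getMany() bulk read. TTLs are accepted but not enforced here.
class MapLayer {
  readonly isLocal = true
  private store = new Map<string, unknown>()

  async get<T>(key: string): Promise<T | null> {
    const v = this.store.get(key)
    return v === undefined ? null : (v as T)
  }
  // Bulk read: a real remote layer would issue one round trip for all keys;
  // for an in-memory map, per-key lookups are already cheap.
  async getMany(keys: string[]): Promise<Array<unknown | null>> {
    return Promise.all(keys.map((k) => this.get(k)))
  }
  async set(key: string, value: unknown, _ttl?: number): Promise<void> {
    this.store.set(key, value)
  }
  async delete(key: string): Promise<void> {
    this.store.delete(key)
  }
  async clear(): Promise<void> {
    this.store.clear()
  }
}
```

The contract implied by the README is positional: `getMany(keys)` returns one slot per key, with `null` marking misses, which is what lets `mget` map results back without extra bookkeeping.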
@@ -450,7 +529,7 @@ Example output from a local run:
 ## Debug logging
 
 ```bash
-DEBUG=
+DEBUG=layercache:debug node server.js
 ```
 
 Or pass a logger instance:
@@ -476,8 +555,8 @@ new CacheStack([...], {
 ## Contributing
 
 ```bash
-git clone https://github.com/flyingsquirrel0419/
-cd
+git clone https://github.com/flyingsquirrel0419/layercache
+cd layercache
 npm install
 npm test # vitest
 npm run build:all # esm + cjs + nestjs package