cacheable 1.10.4 → 2.0.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +58 -159
- package/dist/index.cjs +1 -1908
- package/dist/index.d.cts +106 -489
- package/dist/index.d.ts +106 -489
- package/dist/index.js +1 -1860
- package/package.json +9 -5
package/README.md
CHANGED
@@ -26,6 +26,7 @@
 
 # Table of Contents
 * [Getting Started](#getting-started)
+* [v1 to v2 Changes](#v1-to-v2-changes)
 * [Basic Usage](#basic-usage)
 * [Hooks and Events](#hooks-and-events)
 * [Storage Tiering and Caching](#storage-tiering-and-caching)
@@ -130,6 +131,37 @@ cache.onHook(CacheableHooks.BEFORE_SECONDARY_SETS_PRIMARY, (data) => {
 ```
 This is called when the secondary store sets the value in the primary store. This is useful if you want to do something before the value is set in the primary store such as manipulating the ttl or the value.
 
+The following events are provided:
+
+- `error`: Emitted when an error occurs.
+- `cache:hit`: Emitted when a cache hit occurs.
+- `cache:miss`: Emitted when a cache miss occurs.
+
+Here is an example of using the `error` event:
+
+```javascript
+import { Cacheable, CacheableEvents } from 'cacheable';
+
+const cacheable = new Cacheable();
+cacheable.on(CacheableEvents.ERROR, (error) => {
+  console.error(`Cacheable error: ${error.message}`);
+});
+```
+
+We also offer `cache:hit` and `cache:miss` events. These events are emitted when a cache hit or miss occurs, respectively. Here is how to use them:
+
+```javascript
+import { Cacheable, CacheableEvents } from 'cacheable';
+
+const cacheable = new Cacheable();
+cacheable.on(CacheableEvents.CACHE_HIT, (data) => {
+  console.log(`Cache hit: ${data.key} ${data.value} ${data.store}`); // the store will say primary or secondary
+});
+cacheable.on(CacheableEvents.CACHE_MISS, (data) => {
+  console.log(`Cache miss: ${data.key} ${data.store}`); // the store will say primary or secondary
+});
+```
+
 # Storage Tiering and Caching
 
 `cacheable` is built as a layer 1 and layer 2 caching engine by default. The purpose is to have your layer 1 be fast and your layer 2 be more persistent. The primary store is the layer 1 cache and the secondary store is the layer 2 cache. By adding the secondary store you are enabling layer 2 caching. By default the operations are blocking but fault tolerant:
@@ -255,7 +287,16 @@ raws.forEach((entry, idx) => {
 
 # Non-Blocking Operations
 
-If you want your layer 2 (secondary) store to be non-blocking you can set the `nonBlocking` property to `true` in the options. This will make the secondary store non-blocking and will not wait for the secondary store to respond on `setting data`, `deleting data`, or `clearing data`. This is useful if you want to have a faster response time and not wait for the secondary store to respond.
+If you want your layer 2 (secondary) store to be non-blocking you can set the `nonBlocking` property to `true` in the options. This will make the secondary store non-blocking and will not wait for the secondary store to respond on `setting data`, `deleting data`, or `clearing data`. This is useful if you want to have a faster response time and not wait for the secondary store to respond. Here is a full list of what each method does in nonBlocking mode:
+
+* `set` - in non-blocking mode it will set at the `primary` storage and then in the background update `secondary`
+* `get` - in non-blocking mode it will only check the primary storage but then in the background look to see if there is a value in the `secondary` and update the primary
+
+* `getMany` - in non-blocking mode it will only check the primary storage but then in the background look to see if there is a value in the `secondary` and update the primary
+
+* `getRaw` - in non-blocking mode it will only check the primary storage but then in the background look to see if there is a value in the `secondary` and update the primary
+
+* `getManyRaw` - in non-blocking mode it will only check the primary storage but then in the background look to see if there is a value in the `secondary` and update the primary
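The nonBlocking behavior listed in the added lines above can be sketched with two plain `Map` objects standing in for the primary and secondary stores. This is an illustration of the pattern only (the `TieredCache` class is a hypothetical name, not cacheable's implementation): `get` answers from the primary immediately and repairs a miss in the background.

```javascript
// Illustrative sketch only: a tiny two-tier cache showing the nonBlocking
// get/set flow described above. Not cacheable's implementation.
class TieredCache {
  constructor() {
    this.primary = new Map();   // layer 1: fast, in-memory
    this.secondary = new Map(); // layer 2: stands in for a persistent store
  }

  // nonBlocking set: write the primary now, update the secondary in the background
  set(key, value) {
    this.primary.set(key, value);
    Promise.resolve().then(() => this.secondary.set(key, value));
  }

  // nonBlocking get: return whatever the primary has; on a miss, check the
  // secondary in the background and backfill the primary for the next read
  get(key) {
    if (this.primary.has(key)) return this.primary.get(key);
    Promise.resolve().then(() => {
      if (this.secondary.has(key)) {
        this.primary.set(key, this.secondary.get(key));
      }
    });
    return undefined;
  }
}

const cache = new TieredCache();
cache.secondary.set('a', 1);     // value only present in layer 2
const first = cache.get('a');    // miss: the primary does not have it yet
setTimeout(() => {
  const second = cache.get('a'); // hit: backfilled from the secondary
  console.log(first, second);    // undefined 1
}, 0);
```

The key property is that `get` never awaits the secondary store; a miss is repaired asynchronously so a later read can hit the primary.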
 
 # Non-Blocking with @keyv/redis
 
@@ -395,7 +436,7 @@ _This does not enable statistics for your layer 2 cache as that is a distributed
 
 # CacheableMemory - In-Memory Cache
 
-`cacheable` comes with a built-in in-memory cache called `CacheableMemory`. This is a simple in-memory cache that is used as the primary store for `cacheable`. You can use this as a standalone cache or as a primary store for `cacheable`. Here is an example of how to use `CacheableMemory`:
+`cacheable` comes with a built-in in-memory cache called `CacheableMemory` from `@cacheable/memory`. This is a simple in-memory cache that is used as the primary store for `cacheable`. You can use this as a standalone cache or as a primary store for `cacheable`. Here is an example of how to use `CacheableMemory`:
 
 ```javascript
 import { CacheableMemory } from 'cacheable';
@@ -409,165 +450,11 @@ cache.set('key', 'value');
 const value = cache.get('key'); // value
 ```
 
-
-
-This simple in-memory cache uses multiple Map objects and a with `expiration` and `lru` policies if set to manage the in memory cache at scale.
-
-By default we use lazy expiration deletion which means on `get` and `getMany` type functions we look if it is expired and then delete it. If you want to have a more aggressive expiration policy you can set the `checkInterval` property to a value greater than `0` which will check for expired keys at the interval you set.
-
-Here are some of the main features of `CacheableMemory`:
-* High performance in-memory cache with a robust API and feature set. 🚀
-* Can scale past the `16,777,216 (2^24) keys` limit of a single `Map` via `hashStoreSize`. Default is `16` Map objects.
-* LRU (Least Recently Used) cache feature to limit the number of keys in the cache via `lruSize`. Limit to `16,777,216 (2^24) keys` total.
-* Expiration policy to delete expired keys with lazy deletion or aggressive deletion via `checkInterval`.
-* `Wrap` feature to memoize `sync` and `async` functions with stampede protection.
-* Ability to do many operations at once such as `setMany`, `getMany`, `deleteMany`, and `takeMany`.
-* Supports `raw` data retrieval with `getRaw` and `getManyRaw` methods to get the full metadata of the cache entry.
-
-## CacheableMemory Store Hashing
-
-`CacheableMemory` uses `Map` objects to store the keys and values. To make this scale past the `16,777,216 (2^24) keys` limit of a single `Map` we use a hash to balance the data across multiple `Map` objects. This is done by hashing the key and using the hash to determine which `Map` object to use. The default hashing algorithm is `djb2Hash` but you can change it by setting the `storeHashAlgorithm` property in the options. By default we set the amount of `Map` objects to `16`.
-
-NOTE: if you are using the LRU cache feature the `lruSize` no matter how many `Map` objects you have it will be limited to the `16,777,216 (2^24) keys` limit of a single `Map` object. This is because we use a double linked list to manage the LRU cache and it is not possible to have more than `16,777,216 (2^24) keys` in a single `Map` object.
-
-Here is an example of how to set the number of `Map` objects and the hashing algorithm:
-
-```javascript
-import { CacheableMemory } from 'cacheable';
-const cache = new CacheableMemory({
-  storeSize: 32, // set the number of Map objects to 32
-});
-cache.set('key', 'value');
-const value = cache.get('key'); // value
-```
-
-Here is an example of how to use the `storeHashAlgorithm` property:
-
-```javascript
-import { CacheableMemory } from 'cacheable';
-const cache = new CacheableMemory({ storeHashAlgorithm: 'sha256' });
-cache.set('key', 'value');
-const value = cache.get('key'); // value
-```
-
-If you want to provide your own hashing function you can set the `storeHashAlgorithm` property to a function that takes an object and returns a `number` that is in the range of the amount of `Map` stores you have.
-
-```javascript
-import { CacheableMemory } from 'cacheable';
-/**
- * Custom hash function that takes a key and the size of the store
- * and returns a number between 0 and storeHashSize - 1.
- * @param {string} key - The key to hash.
- * @param {number} storeHashSize - The size of the store (number of Map objects).
- * @returns {number} - A number between 0 and storeHashSize - 1.
- */
-const customHash = (key, storeHashSize) => {
-  // custom hashing logic
-  return key.length % storeHashSize; // returns a number between 0 and 31 for 32 Map objects
-};
-const cache = new CacheableMemory({ storeHashAlgorithm: customHash, storeSize: 32 });
-cache.set('key', 'value');
-const value = cache.get('key'); // value
-```
-
-## CacheableMemory LRU Feature
-
-You can enable the LRU (Least Recently Used) feature in `CacheableMemory` by setting the `lruSize` property in the options. This will limit the number of keys in the cache to the size you set. When the cache reaches the limit it will remove the least recently used keys from the cache. This is useful if you want to limit the memory usage of the cache.
-
-When you set the `lruSize` we use a double linked list to manage the LRU cache and also set the `hashStoreSize` to `1` which means we will only use a single `Map` object for the LRU cache. This is because the LRU cache is managed by the double linked list and it is not possible to have more than `16,777,216 (2^24) keys` in a single `Map` object.
-
-```javascript
-import { CacheableMemory } from 'cacheable';
-const cache = new CacheableMemory({ lruSize: 1 }); // sets the LRU cache size to 1000 keys and hashStoreSize to 1
-cache.set('key1', 'value1');
-cache.set('key2', 'value2');
-const value1 = cache.get('key1');
-console.log(value1); // undefined if the cache is full and key1 is the least recently used
-const value2 = cache.get('key2');
-console.log(value2); // value2 if key2 is still in the cache
-console.log(cache.size()); // 1
-```
-
-NOTE: if you set the `lruSize` property to `0` after it was enabled it will disable the LRU cache feature and will not limit the number of keys in the cache. This will remove the `16,777,216 (2^24) keys` limit of a single `Map` object and will allow you to store more keys in the cache.
-
-## CacheableMemory Performance
-
-Our goal with `cacheable` and `CacheableMemory` is to provide a high performance caching engine that is simple to use and has a robust API. We test it against other cacheing engines such that are less feature rich to make sure there is little difference. Here are some of the benchmarks we have run:
-
-*Memory Benchmark Results:*
-| name | summary | ops/sec | time/op | margin | samples |
-|------------------------------------------|:---------:|----------:|----------:|:--------:|----------:|
-| Map (v22) - set / get | 🥇 | 117K | 9µs | ±1.29% | 110K |
-| Cacheable Memory (v1.10.0) - set / get | -1.3% | 116K | 9µs | ±0.77% | 110K |
-| Node Cache - set / get | -4.1% | 112K | 9µs | ±1.34% | 107K |
-| bentocache (v1.4.0) - set / get | -45% | 65K | 17µs | ±1.10% | 100K |
-
-*Memory LRU Benchmark Results:*
-| name | summary | ops/sec | time/op | margin | samples |
-|------------------------------------------|:---------:|----------:|----------:|:--------:|----------:|
-| quick-lru (v7.0.1) - set / get | 🥇 | 118K | 9µs | ±0.85% | 112K |
-| Map (v22) - set / get | -0.56% | 117K | 9µs | ±1.35% | 110K |
-| lru.min (v1.1.2) - set / get | -1.7% | 116K | 9µs | ±0.90% | 110K |
-| Cacheable Memory (v1.10.0) - set / get | -3.3% | 114K | 9µs | ±1.16% | 108K |
-
-As you can see from the benchmarks `CacheableMemory` is on par with other caching engines such as `Map`, `Node Cache`, and `bentocache`. We have also tested it against other LRU caching engines such as `quick-lru` and `lru.min` and it performs well against them too.
-
-## CacheableMemory Options
-
-* `ttl`: The time to live for the cache in milliseconds. Default is `undefined` which is means indefinitely.
-* `useClones`: If the cache should use clones for the values. Default is `true`.
-* `lruSize`: The size of the LRU cache. Default is `0` which is unlimited.
-* `checkInterval`: The interval to check for expired keys in milliseconds. Default is `0` which is disabled.
-* `storeHashSize`: The number of `Map` objects to use for the cache. Default is `16`.
-* `storeHashAlgorithm`: The hashing algorithm to use for the cache. Default is `djb2Hash`.
-
-## CacheableMemory - API
-
-* `set(key, value, ttl?)`: Sets a value in the cache.
-* `setMany([{key, value, ttl?}])`: Sets multiple values in the cache from `CacheableItem`.
-* `get(key)`: Gets a value from the cache.
-* `getMany([keys])`: Gets multiple values from the cache.
-* `getRaw(key)`: Gets a value from the cache as `CacheableStoreItem`.
-* `getManyRaw([keys])`: Gets multiple values from the cache as `CacheableStoreItem`.
-* `has(key)`: Checks if a value exists in the cache.
-* `hasMany([keys])`: Checks if multiple values exist in the cache.
-* `delete(key)`: Deletes a value from the cache.
-* `deleteMany([keys])`: Deletes multiple values from the cache.
-* `take(key)`: Takes a value from the cache and deletes it.
-* `takeMany([keys])`: Takes multiple values from the cache and deletes them.
-* `wrap(function, WrapSyncOptions)`: Wraps a `sync` function in a cache.
-* `clear()`: Clears the cache.
-* `ttl`: The default time to live for the cache in milliseconds. Default is `undefined` which is disabled.
-* `useClones`: If the cache should use clones for the values. Default is `true`.
-* `lruSize`: The size of the LRU cache. Default is `0` which is unlimited.
-* `size`: The number of keys in the cache.
-* `checkInterval`: The interval to check for expired keys in milliseconds. Default is `0` which is disabled.
-* `storeHashSize`: The number of `Map` objects to use for the cache. Default is `16`.
-* `storeHashAlgorithm`: The hashing algorithm to use for the cache. Default is `djb2Hash`.
-* `keys`: Get the keys in the cache. Not able to be set.
-* `items`: Get the items in the cache as `CacheableStoreItem` example `{ key, value, expires? }`.
-* `store`: The hash store for the cache which is an array of `Map` objects.
-* `checkExpired()`: Checks for expired keys in the cache. This is used by the `checkInterval` property.
-* `startIntervalCheck()`: Starts the interval check for expired keys if `checkInterval` is above 0 ms.
-* `stopIntervalCheck()`: Stops the interval check for expired keys.
-
-# Keyv Storage Adapter - KeyvCacheableMemory
-
-`cacheable` comes with a built-in storage adapter for Keyv called `KeyvCacheableMemory`. This takes `CacheableMemory` and creates a storage adapter for Keyv. This is useful if you want to use `CacheableMemory` as a storage adapter for Keyv. Here is an example of how to use `KeyvCacheableMemory`:
-
-```javascript
-import { Keyv } from 'keyv';
-import { KeyvCacheableMemory } from 'cacheable';
-
-const keyv = new Keyv({ store: new KeyvCacheableMemory() });
-await keyv.set('foo', 'bar');
-const value = await keyv.get('foo');
-console.log(value); // bar
-```
+To learn more go to [@cacheable/memory](https://cacheable.org/docs/memory/)
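The removed docs in the hunk above describe `CacheableMemory`'s lazy expiration: an expired entry is noticed and deleted on read rather than by a background timer. As a rough illustration of that idea only (the `LazyTtlCache` class is a hypothetical name, not cacheable's implementation):

```javascript
// Illustrative sketch only: lazy ttl expiration, where a stale entry is
// deleted when it is read rather than by a timer. Not cacheable's code.
class LazyTtlCache {
  constructor() {
    this.store = new Map(); // key -> { value, expires }
  }

  set(key, value, ttlMs) {
    const expires = ttlMs === undefined ? undefined : Date.now() + ttlMs;
    this.store.set(key, { value, expires });
  }

  get(key) {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    // lazy expiration: only on access do we notice the entry is stale
    if (entry.expires !== undefined && entry.expires <= Date.now()) {
      this.store.delete(key);
      return undefined;
    }
    return entry.value;
  }
}

const cache = new LazyTtlCache();
cache.set('session', 'abc123', 50); // expires in 50ms
console.log(cache.get('session'));  // 'abc123'
```

A `checkInterval`-style aggressive policy would add a `setInterval` that sweeps `store` for stale entries; the lazy approach above trades that background work for a check on each read.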
 
 # Wrap / Memoization for Sync and Async Functions
 
-`Cacheable` and `CacheableMemory` has a feature called `wrap` that allows you to wrap a function in a cache. This is useful for memoization and caching the results of a function. You can wrap a `sync` or `async` function in a cache. Here is an example of how to use the `wrap` function:
+`Cacheable` and `CacheableMemory` has a feature called `wrap` that comes from [@cacheable/memoize](https://cacheable.org/docs/memoize/) and allows you to wrap a function in a cache. This is useful for memoization and caching the results of a function. You can wrap a `sync` or `async` function in a cache. Here is an example of how to use the `wrap` function:
 
 ```javascript
 import { Cacheable } from 'cacheable';
@@ -660,9 +547,11 @@ If you would like to generate your own key for the wrapped function you can set
 
 We will pass in the `function` that is being wrapped, the `arguments` passed to the function, and the `options` used to wrap the function. You can then use these to generate a custom key for the cache.
 
+To learn more visit [@cacheable/memoize](https://cacheable.org/docs/memoize/)
+
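The `wrap` behavior this section describes, memoization keyed on the function's arguments with stampede protection, can be sketched in plain JavaScript. This is an illustration of the pattern under assumed semantics, not the @cacheable/memoize API; `memoizeAsync` and `keyFor` are hypothetical names:

```javascript
// Illustrative sketch only: async memoization with stampede protection --
// concurrent callers with the same key share one in-flight promise instead
// of each invoking the wrapped function. Not the @cacheable/memoize API.
function memoizeAsync(fn, keyFor = (...args) => JSON.stringify(args)) {
  const results = new Map();  // settled values, keyed by serialized arguments
  const inFlight = new Map(); // pending promises shared by concurrent callers
  return async (...args) => {
    const key = keyFor(...args);
    if (results.has(key)) return results.get(key);
    if (inFlight.has(key)) return inFlight.get(key); // stampede protection
    const promise = Promise.resolve(fn(...args)).then((value) => {
      results.set(key, value);
      inFlight.delete(key);
      return value;
    });
    inFlight.set(key, promise);
    return promise;
  };
}

let calls = 0;
const slowDouble = async (n) => { calls += 1; return n * 2; };
const cachedDouble = memoizeAsync(slowDouble);

Promise.all([cachedDouble(21), cachedDouble(21)]).then(([a, b]) => {
  console.log(a, b, calls); // 42 42 1 -- the second caller reused the promise
});
```

A custom key generator slots in where `keyFor` does here, which mirrors how the section describes generating your own key for the wrapped function.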
 # Get Or Set Memoization Function
 
-The `getOrSet`
+The `getOrSet` method that comes from [@cacheable/memoize](https://cacheable.org/docs/memoize/) provides a convenient way to implement the cache-aside pattern. It attempts to retrieve a value from cache, and if not found, calls the provided function to compute the value and store it in cache before returning it. Here are the options:
 
 ```typescript
 export type GetOrSetFunctionOptions = {
@@ -698,7 +587,17 @@ const function_ = async () => Math.random() * 100;
 const value = await cache.getOrSet(generateKey(), function_, { ttl: '1h' });
 ```
 
+To learn more go to [@cacheable/memoize](https://cacheable.org/docs/memoize/)
+
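As an illustration of the cache-aside pattern the `getOrSet` section describes, and not cacheable's implementation, a minimal helper could look like this (the `getOrSet` helper and its `Map`-backed store here are assumptions for the sketch):

```javascript
// Illustrative sketch only: cache-aside behind a getOrSet-style helper --
// try the cache first, otherwise compute the value once, store it, return it.
const store = new Map();

async function getOrSet(key, compute) {
  if (store.has(key)) return store.get(key); // cache hit
  const value = await compute();             // cache miss: compute the value
  store.set(key, value);                     // store it for later callers
  return value;
}

let computes = 0;
const loadConfig = async () => { computes += 1; return { retries: 3 }; };

getOrSet('config', loadConfig)
  .then(() => getOrSet('config', loadConfig)) // second call is a cache hit
  .then((config) => {
    console.log(config.retries, computes); // 3 1
  });
```

The real method layers ttl and other options (see `GetOrSetFunctionOptions` above) on top of this basic flow.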
+# v1 to v2 Changes
+
+`cacheable` is now using `@cacheable/utils`, `@cacheable/memoize`, and `@cacheable/memory` for its core functionality as we are moving to this modular architecture and plan to eventually have these modules across `cache-manager` and `flat-cache`. In addition there are some breaking changes:
 
+* `get()` and `getMany()` no longer have the `raw` option but instead we have built out `getRaw()` and `getManyRaw()` to use.
+* All `get` related functions now support `nonBlocking` which means if `nonBlocking: true` the primary store will return what it has and then in the background will work to sync from secondary storage for any misses. You can disable this by setting at the `get` function level the option `nonBlocking: false` which will look for any missing keys in the secondary.
+* `Keyv` v5.5+ is now the recommended supported version as we are using its native `getMany*` and `getRaw*`
+* `Wrap` and `getOrSet` have been updated with more robust options including the ability to use your own `serialize` function for creating the key in `wrap`.
+* `hash` has now been updated with robust options and also an enum for setting the algorithm.
 
 # How to Contribute
 