cacheable 1.8.10 → 1.10.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +257 -20
- package/dist/index.cjs +257 -163
- package/dist/index.d.cts +107 -46
- package/dist/index.d.ts +107 -46
- package/dist/index.js +258 -164
- package/package.json +9 -9
package/README.md
CHANGED
@@ -29,6 +29,7 @@
 * [Basic Usage](#basic-usage)
 * [Hooks and Events](#hooks-and-events)
 * [Storage Tiering and Caching](#storage-tiering-and-caching)
+* [TTL Propagation and Storage Tiering](#ttl-propagation-and-storage-tiering)
 * [Shorthand for Time to Live (ttl)](#shorthand-for-time-to-live-ttl)
 * [Non-Blocking Operations](#non-blocking-operations)
 * [CacheSync - Distributed Updates](#cachesync---distributed-updates)
@@ -36,10 +37,13 @@
 * [Cacheable Statistics (Instance Only)](#cacheable-statistics-instance-only)
 * [Cacheable - API](#cacheable---api)
 * [CacheableMemory - In-Memory Cache](#cacheablememory---in-memory-cache)
+* [CacheableMemory Store Hashing](#cacheablememory-store-hashing)
+* [CacheableMemory LRU Feature](#cacheablememory-lru-feature)
+* [CacheableMemory Performance](#cacheablememory-performance)
 * [CacheableMemory Options](#cacheablememory-options)
 * [CacheableMemory - API](#cacheablememory---api)
-* [Wrap / Memoization for Sync and Async Functions](#wrap--memoization-for-sync-and-async-functions)
 * [Keyv Storage Adapter - KeyvCacheableMemory](#keyv-storage-adapter---keyvcacheablememory)
+* [Wrap / Memoization for Sync and Async Functions](#wrap--memoization-for-sync-and-async-functions)
 * [How to Contribute](#how-to-contribute)
 * [License and Copyright](#license-and-copyright)
 
@@ -98,6 +102,7 @@ The following hooks are available for you to extend the functionality of `cachea
 * `AFTER_GET`: This is called after the `get()` method is called.
 * `BEFORE_GET_MANY`: This is called before the `getMany()` method is called.
 * `AFTER_GET_MANY`: This is called after the `getMany()` method is called.
+* `BEFORE_SECONDARY_SETS_PRIMARY`: This is called before the secondary store sets the value in the primary store.
 
 An example of how to use these hooks:
 
@@ -110,6 +115,19 @@ cacheable.onHook(CacheableHooks.BEFORE_SET, (data) => {
 });
 ```
 
+Here is an example of how to use the `BEFORE_SECONDARY_SETS_PRIMARY` hook:
+
+```javascript
+import { Cacheable, CacheableHooks } from 'cacheable';
+import KeyvRedis from '@keyv/redis';
+
+const secondary = new KeyvRedis('redis://user:pass@localhost:6379');
+const cache = new Cacheable({ secondary });
+cache.onHook(CacheableHooks.BEFORE_SECONDARY_SETS_PRIMARY, (data) => {
+  console.log(`before secondary sets primary: ${data.key} ${data.value} ${data.ttl}`);
+});
+```
+
+This hook fires when the secondary store sets the value in the primary store. It is useful if you want to do something before the value is set in the primary store, such as manipulating the ttl or the value.
+
 # Storage Tiering and Caching
 
 `cacheable` is built as a layer 1 and layer 2 caching engine by default. The purpose is to have your layer 1 be fast and your layer 2 be more persistent. The primary store is the layer 1 cache and the secondary store is the layer 2 cache. By adding the secondary store you are enabling layer 2 caching. By default the operations are blocking but fault tolerant:
@@ -119,6 +137,48 @@ cacheable.onHook(CacheableHooks.BEFORE_SET, (data) => {
 * `Deleting Data`: Deletes the value from the primary store and secondary store at the same time, waiting for both to respond.
 * `Clearing Data`: Clears the primary store and secondary store at the same time, waiting for both to respond.
 
+When `Getting Data`, if the value does not exist in the primary store it will try to get it from the secondary store. If the secondary store returns the value, it will be set in the primary store. Because we use [TTL Propagation](#ttl-propagation-and-storage-tiering) the value will be set in the primary store with the TTL of the secondary store, unless the time to live (TTL) is greater than the primary store's, in which case the TTL of the primary store is used. An example of this is:
+
+```javascript
+import { Cacheable } from 'cacheable';
+import KeyvRedis from '@keyv/redis';
+const secondary = new KeyvRedis('redis://user:pass@localhost:6379', { ttl: 1000 });
+const cache = new Cacheable({ secondary, ttl: 100 });
+
+await cache.set('key', 'value'); // sets the value in the primary store with a ttl of 100 ms and the secondary store with a ttl of 1000 ms
+
+await sleep(500); // wait for .5 seconds
+
+const value = await cache.get('key'); // gets the value from the secondary store and sets it in the primary store with a ttl of 500 ms, which is what is left from the secondary store
+```
+
+In this example the primary store has a ttl of `100 ms` and the secondary store has a ttl of `1000 ms`. Because the remaining ttl from the secondary store is greater, it is used when setting the value in the primary store.
+
+```javascript
+import { Cacheable } from 'cacheable';
+import { Keyv } from 'keyv';
+import KeyvRedis from '@keyv/redis';
+const primary = new Keyv({ ttl: 200 });
+const secondary = new KeyvRedis('redis://user:pass@localhost:6379', { ttl: 1000 });
+const cache = new Cacheable({ primary, secondary });
+
+await cache.set('key', 'value'); // sets the value in the primary store with a ttl of 200 ms and the secondary store with a ttl of 1000 ms
+
+await sleep(200); // wait for .2 seconds
+
+const value = await cache.get('key'); // gets the value from the secondary store and sets it in the primary store with a ttl of 200 ms, which is what the primary store is set to
+```
+
+# TTL Propagation and Storage Tiering
+
+Cacheable TTL propagation is a feature that allows you to set a time to live (TTL) for the cache. By default the TTL is resolved in the following order:
+
+```
+ttl = set at the function ?? storage adapter ttl ?? cacheable ttl
+```
+
+This means that if you set a TTL at the function level, it will override the storage adapter TTL and the cacheable TTL. If you do not set a TTL at the function level, it will use the storage adapter TTL and then the cacheable TTL. If you do not set a TTL at all, it will use the default of `undefined`, which means expiration is disabled.
+
 # Shorthand for Time to Live (ttl)
 
 By default in `Cacheable` and `CacheableMemory` the `ttl` is in milliseconds, but you can use shorthand for the time to live. Here are the supported shorthand values:
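As a rough sketch of what the shorthand conversion involves, here is an illustrative standalone function (not cacheable's actual parser; the helper name `shorthandToMs` and the suffix set `ms`/`s`/`m`/`h`/`d` are assumptions — see the README's shorthand table for the exact values supported):

```javascript
// Convert a ttl shorthand string such as '30m' into milliseconds.
// Plain numbers are treated as milliseconds, matching the default behavior.
function shorthandToMs(ttl) {
  if (typeof ttl === 'number') return ttl; // already milliseconds
  const match = /^([\d.]+)\s*(ms|s|m|h|d)$/.exec(ttl);
  if (!match) throw new Error(`unsupported ttl shorthand: ${ttl}`);
  const units = { ms: 1, s: 1000, m: 60_000, h: 3_600_000, d: 86_400_000 };
  return Number(match[1]) * units[match[2]];
}

console.log(shorthandToMs('30m')); // 1800000 - e.g. for { ttl: '30m' }
console.log(shorthandToMs('1h'));  // 3600000
console.log(shorthandToMs(250));   // 250
```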
@@ -157,10 +217,79 @@ cache.ttl = -1; // sets the default ttl to undefined which is disabled
 console.log(cache.ttl); // undefined
 ```
 
+## Retrieving raw cache entries
+
+The `get` and `getMany` methods support a `raw` option, which returns the full stored metadata (`StoredDataRaw<T>`) instead of just the value:
+
+```typescript
+import { Cacheable } from 'cacheable';
+
+const cache = new Cacheable();
+
+// store a value
+await cache.set('user:1', { name: 'Alice' });
+
+// default: only the value
+const user = await cache.get<{ name: string }>('user:1');
+console.log(user); // { name: 'Alice' }
+
+// with raw: full record including expiration
+const raw = await cache.get<{ name: string }>('user:1', { raw: true });
+console.log(raw.value); // { name: 'Alice' }
+console.log(raw.expires); // e.g. 1677628495000 or null
+```
+
+```typescript
+// getMany with raw option
+await cache.set('a', 1);
+await cache.set('b', 2);
+
+const raws = await cache.getMany<number>(['a', 'b'], { raw: true });
+raws.forEach((entry, idx) => {
+  console.log(`key=${['a', 'b'][idx]}, value=${entry?.value}, expires=${entry?.expires}`);
+});
+```
+
 # Non-Blocking Operations
 
 If you want your layer 2 (secondary) store to be non-blocking, you can set the `nonBlocking` property to `true` in the options. This makes the secondary store non-blocking: it will not wait for the secondary store to respond on `setting data`, `deleting data`, or `clearing data`. This is useful if you want a faster response time and do not need to wait for the secondary store to respond.
 
+# GetOrSet
+
+The `getOrSet` method provides a convenient way to implement the cache-aside pattern. It attempts to retrieve a value from cache and, if it is not found, calls the provided function to compute the value and store it in cache before returning it.
+
+```typescript
+import { Cacheable } from 'cacheable';
+
+// Create a new Cacheable instance
+const cache = new Cacheable();
+
+// Use getOrSet to fetch user data
+async function getUserData(userId: string) {
+  return await cache.getOrSet(
+    `user:${userId}`,
+    async () => {
+      // This function only runs if the data isn't in the cache
+      console.log('Fetching user from database...');
+      // Simulate database fetch
+      return { id: userId, name: 'John Doe', email: 'john@example.com' };
+    },
+    { ttl: '30m' } // Cache for 30 minutes
+  );
+}
+
+// First call - will fetch from "database"
+const user1 = await getUserData('123');
+console.log(user1); // { id: '123', name: 'John Doe', email: 'john@example.com' }
+
+// Second call - will retrieve from cache
+const user2 = await getUserData('123');
+console.log(user2); // Same data, but retrieved from cache
+```
+
 ```javascript
 import { Cacheable } from 'cacheable';
 import { KeyvRedis } from '@keyv/redis';
@@ -215,7 +344,9 @@ _This does not enable statistics for your layer 2 cache as that is a distributed
 * `set(key, value, ttl?)`: Sets a value in the cache.
 * `setMany([{key, value, ttl?}])`: Sets multiple values in the cache.
 * `get(key)`: Gets a value from the cache.
+* `get(key, { raw: true })`: Gets a raw value from the cache.
 * `getMany([keys])`: Gets multiple values from the cache.
+* `getMany([keys], { raw: true })`: Gets multiple raw values from the cache.
 * `has(key)`: Checks if a value exists in the cache.
 * `hasMany([keys])`: Checks if multiple values exist in the cache.
 * `take(key)`: Takes a value from the cache and deletes it.
@@ -224,6 +355,7 @@ _This does not enable statistics for your layer 2 cache as that is a distributed
 * `deleteMany([keys])`: Deletes multiple values from the cache.
 * `clear()`: Clears the cache stores. Be careful with this as it will clear both layer 1 and layer 2.
 * `wrap(function, WrapOptions)`: Wraps an `async` function in a cache.
+* `getOrSet(key, valueFunction, ttl?)`: Gets a value from the cache or, if not found, sets it using the provided function.
 * `disconnect()`: Disconnects from the cache stores.
 * `onHook(hook, callback)`: Sets a hook.
 * `removeHook(hook)`: Removes a hook.
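The cache-aside flow behind a `getOrSet`-style method can be sketched in a few lines. This is an illustrative model over a plain `Map`, not cacheable's implementation (which also handles ttl and stampede protection):

```javascript
// Minimal cache-aside sketch: return the cached value if present,
// otherwise compute it, store it, and return it.
const store = new Map();

async function getOrSet(key, valueFunction) {
  if (store.has(key)) {
    return store.get(key); // cache hit: skip the value function entirely
  }
  const value = await valueFunction(); // cache miss: compute...
  store.set(key, value);               // ...store...
  return value;                        // ...and return
}

let calls = 0;
const load = async () => { calls++; return 'expensive result'; };
console.log(await getOrSet('k', load)); // 'expensive result' (computed)
console.log(await getOrSet('k', load)); // 'expensive result' (from cache)
console.log(calls); // 1 - the value function only ran once
```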
@@ -258,12 +390,111 @@ This simple in-memory cache uses multiple Map objects with `expiration`
 
 By default we use lazy expiration deletion, which means on `get` and `getMany` type functions we check whether a key is expired and then delete it. If you want a more aggressive expiration policy, you can set the `checkInterval` property to a value greater than `0`, which will check for expired keys at the interval you set.
 
+Here are some of the main features of `CacheableMemory`:
+* High performance in-memory cache with a robust API and feature set. 🚀
+* Can scale past the `16,777,216 (2^24) keys` limit of a single `Map` via `storeHashSize`. Default is `16` `Map` objects.
+* LRU (Least Recently Used) cache feature to limit the number of keys in the cache via `lruSize`. Limited to `16,777,216 (2^24) keys` total.
+* Expiration policy to delete expired keys with lazy deletion or aggressive deletion via `checkInterval`.
+* `Wrap` feature to memoize `sync` and `async` functions with stampede protection.
+* Ability to do many operations at once such as `setMany`, `getMany`, `deleteMany`, and `takeMany`.
+* Supports `raw` data retrieval with the `getRaw` and `getManyRaw` methods to get the full metadata of a cache entry.
+
+## CacheableMemory Store Hashing
+
+`CacheableMemory` uses `Map` objects to store the keys and values. To scale past the `16,777,216 (2^24) keys` limit of a single `Map`, we use a hash to balance the data across multiple `Map` objects. This is done by hashing the key and using the hash to determine which `Map` object to use. The default hashing algorithm is `djb2Hash`, but you can change it by setting the `storeHashAlgorithm` property in the options. By default we set the number of `Map` objects to `16`.
+
+NOTE: if you are using the LRU cache feature, the `lruSize` is limited to the `16,777,216 (2^24) keys` limit of a single `Map` object, no matter how many `Map` objects you have. This is because we use a doubly linked list to manage the LRU cache, and it is not possible to have more than `16,777,216 (2^24) keys` in a single `Map` object.
+
+Here is an example of how to set the number of `Map` objects and the hashing algorithm:
+
+```javascript
+import { CacheableMemory } from 'cacheable';
+const cache = new CacheableMemory({
+  storeHashSize: 32, // set the number of Map objects to 32
+});
+cache.set('key', 'value');
+const value = cache.get('key'); // value
+```
+
+Here is an example of how to use the `storeHashAlgorithm` property:
+
+```javascript
+import { CacheableMemory } from 'cacheable';
+const cache = new CacheableMemory({ storeHashAlgorithm: 'sha256' });
+cache.set('key', 'value');
+const value = cache.get('key'); // value
+```
+
+If you want to provide your own hashing function, you can set the `storeHashAlgorithm` property to a function that takes a key and the store size and returns a `number` in the range of the number of `Map` stores you have.
+
+```javascript
+import { CacheableMemory } from 'cacheable';
+/**
+ * Custom hash function that takes a key and the size of the store
+ * and returns a number between 0 and storeHashSize - 1.
+ * @param {string} key - The key to hash.
+ * @param {number} storeHashSize - The size of the store (number of Map objects).
+ * @returns {number} - A number between 0 and storeHashSize - 1.
+ */
+const customHash = (key, storeHashSize) => {
+  // custom hashing logic
+  return key.length % storeHashSize; // returns a number between 0 and 31 for 32 Map objects
+};
+const cache = new CacheableMemory({ storeHashAlgorithm: customHash, storeHashSize: 32 });
+cache.set('key', 'value');
+const value = cache.get('key'); // value
+```
+
+## CacheableMemory LRU Feature
+
+You can enable the LRU (Least Recently Used) feature in `CacheableMemory` by setting the `lruSize` property in the options. This limits the number of keys in the cache to the size you set. When the cache reaches the limit, it removes the least recently used keys. This is useful if you want to limit the memory usage of the cache.
+
+When you set the `lruSize`, we use a doubly linked list to manage the LRU cache and also set the `storeHashSize` to `1`, which means a single `Map` object is used for the LRU cache. This is because the LRU cache is managed by the doubly linked list, and it is not possible to have more than `16,777,216 (2^24) keys` in a single `Map` object.
+
+```javascript
+import { CacheableMemory } from 'cacheable';
+const cache = new CacheableMemory({ lruSize: 1 }); // sets the LRU cache size to 1 key and storeHashSize to 1
+cache.set('key1', 'value1');
+cache.set('key2', 'value2');
+const value1 = cache.get('key1');
+console.log(value1); // undefined if the cache is full and key1 is the least recently used
+const value2 = cache.get('key2');
+console.log(value2); // value2 if key2 is still in the cache
+console.log(cache.size()); // 1
+```
+
+NOTE: if you set the `lruSize` property to `0` after it was enabled, the LRU cache feature is disabled and the number of keys in the cache is no longer limited. This removes the `16,777,216 (2^24) keys` limit of a single `Map` object and allows you to store more keys in the cache.
+
+## CacheableMemory Performance
+
+Our goal with `cacheable` and `CacheableMemory` is to provide a high performance caching engine that is simple to use and has a robust API. We test it against other caching engines, including less feature-rich ones, to make sure there is little difference. Here are some of the benchmarks we have run:
+
+*Memory Benchmark Results:*
+| name | summary | ops/sec | time/op | margin | samples |
+|------------------------------------------|:---------:|----------:|----------:|:--------:|----------:|
+| Map (v22) - set / get | 🥇 | 117K | 9µs | ±1.29% | 110K |
+| Cacheable Memory (v1.10.0) - set / get | -1.3% | 116K | 9µs | ±0.77% | 110K |
+| Node Cache - set / get | -4.1% | 112K | 9µs | ±1.34% | 107K |
+| bentocache (v1.4.0) - set / get | -45% | 65K | 17µs | ±1.10% | 100K |
+
+*Memory LRU Benchmark Results:*
+| name | summary | ops/sec | time/op | margin | samples |
+|------------------------------------------|:---------:|----------:|----------:|:--------:|----------:|
+| quick-lru (v7.0.1) - set / get | 🥇 | 118K | 9µs | ±0.85% | 112K |
+| Map (v22) - set / get | -0.56% | 117K | 9µs | ±1.35% | 110K |
+| lru.min (v1.1.2) - set / get | -1.7% | 116K | 9µs | ±0.90% | 110K |
+| Cacheable Memory (v1.10.0) - set / get | -3.3% | 114K | 9µs | ±1.16% | 108K |
+
+As you can see from the benchmarks, `CacheableMemory` is on par with other caching engines such as `Map`, `Node Cache`, and `bentocache`. We have also tested it against other LRU caching engines such as `quick-lru` and `lru.min`, and it performs well against them too.
+
 ## CacheableMemory Options
 
 * `ttl`: The time to live for the cache in milliseconds. Default is `undefined`, which means keys live indefinitely.
 * `useClones`: If the cache should use clones for the values. Default is `true`.
 * `lruSize`: The size of the LRU cache. Default is `0` which is unlimited.
 * `checkInterval`: The interval to check for expired keys in milliseconds. Default is `0` which is disabled.
+* `storeHashSize`: The number of `Map` objects to use for the cache. Default is `16`.
+* `storeHashAlgorithm`: The hashing algorithm to use for the cache. Default is `djb2Hash`.
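The hash-based store selection described above can be sketched as follows. This is illustrative code, not the library's internals; the standalone `djb2Hash` and `storeIndex` helpers here are assumptions based on the documented default algorithm:

```javascript
// A common djb2 variant: start at 5381, mix in each character,
// and keep the result as an unsigned 32-bit integer.
function djb2Hash(key) {
  let hash = 5381;
  for (let i = 0; i < key.length; i++) {
    hash = ((hash * 33) ^ key.charCodeAt(i)) >>> 0;
  }
  return hash;
}

// Map a key onto one of `storeHashSize` Map objects.
function storeIndex(key, storeHashSize) {
  return djb2Hash(key) % storeHashSize;
}

const storeHashSize = 16;
const stores = Array.from({ length: storeHashSize }, () => new Map());
stores[storeIndex('user:1', storeHashSize)].set('user:1', 'Alice');
console.log(stores[storeIndex('user:1', storeHashSize)].get('user:1')); // Alice
```

Because the same key always hashes to the same index, every `get`/`set` for a key touches exactly one of the `Map` objects, which is what lets the total key count scale past a single `Map`'s limit.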
 
 ## CacheableMemory - API
 
@@ -281,13 +512,33 @@ By default we use lazy expiration deletion which means on `get` and `getMany` ty
 * `takeMany([keys])`: Takes multiple values from the cache and deletes them.
 * `wrap(function, WrapSyncOptions)`: Wraps a `sync` function in a cache.
 * `clear()`: Clears the cache.
-* `
-* `
-* `
+* `ttl`: The default time to live for the cache in milliseconds. Default is `undefined` which is disabled.
+* `useClones`: If the cache should use clones for the values. Default is `true`.
+* `lruSize`: The size of the LRU cache. Default is `0` which is unlimited.
+* `size`: The number of keys in the cache.
+* `checkInterval`: The interval to check for expired keys in milliseconds. Default is `0` which is disabled.
+* `storeHashSize`: The number of `Map` objects to use for the cache. Default is `16`.
+* `storeHashAlgorithm`: The hashing algorithm to use for the cache. Default is `djb2Hash`.
+* `keys`: Gets the keys in the cache. Read-only.
+* `items`: Gets the items in the cache as `CacheableStoreItem`, for example `{ key, value, expires? }`.
+* `store`: The hash store for the cache, which is an array of `Map` objects.
 * `checkExpired()`: Checks for expired keys in the cache. This is used by the `checkInterval` property.
 * `startIntervalCheck()`: Starts the interval check for expired keys if `checkInterval` is above 0 ms.
 * `stopIntervalCheck()`: Stops the interval check for expired keys.
-
+
+# Keyv Storage Adapter - KeyvCacheableMemory
+
+`cacheable` comes with a built-in storage adapter for Keyv called `KeyvCacheableMemory`. This takes `CacheableMemory` and creates a storage adapter for Keyv. This is useful if you want to use `CacheableMemory` as a storage adapter for Keyv. Here is an example of how to use `KeyvCacheableMemory`:
+
+```javascript
+import { Keyv } from 'keyv';
+import { KeyvCacheableMemory } from 'cacheable';
+
+const keyv = new Keyv({ store: new KeyvCacheableMemory() });
+await keyv.set('foo', 'bar');
+const value = await keyv.get('foo');
+console.log(value); // bar
+```
@@ -363,23 +614,9 @@ console.log(wrappedFunction()); // error
 console.log(wrappedFunction()); // error from cache
 ```
 
-# Keyv Storage Adapter - KeyvCacheableMemory
-
-`cacheable` comes with a built-in storage adapter for Keyv called `KeyvCacheableMemory`. This takes `CacheableMemory` and creates a storage adapter for Keyv. This is useful if you want to use `CacheableMemory` as a storage adapter for Keyv. Here is an example of how to use `KeyvCacheableMemory`:
-
-```javascript
-import { Keyv } from 'keyv';
-import { KeyvCacheableMemory } from 'cacheable';
-
-const keyv = new Keyv({ store: new KeyvCacheableMemory() });
-await keyv.set('foo', 'bar');
-const value = await keyv.get('foo');
-console.log(value); // bar
-```
-
 # How to Contribute
 
 You can contribute by forking the repo and submitting a pull request. Please make sure to add tests and update the documentation. To learn more about how to contribute, go to our main README [https://github.com/jaredwray/cacheable](https://github.com/jaredwray/cacheable). This will talk about how to `Open a Pull Request`, `Ask a Question`, or `Post an Issue`.
 
 # License and Copyright
-[MIT © Jared Wray](./LICENSE)
+[MIT © Jared Wray](./LICENSE)
|