cacheable 1.8.0 → 1.8.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -8,23 +8,25 @@
  [![tests](https://github.com/jaredwray/cacheable/actions/workflows/tests.yml/badge.svg)](https://github.com/jaredwray/cacheable/actions/workflows/tests.yml)
  [![npm](https://img.shields.io/npm/dm/cacheable.svg)](https://www.npmjs.com/package/cacheable)
  [![npm](https://img.shields.io/npm/v/cacheable)](https://www.npmjs.com/package/cacheable)
+ [![license](https://img.shields.io/github/license/jaredwray/cacheable)](https://github.com/jaredwray/cacheable/blob/main/LICENSE)

  `cacheable` is a high performance layer 1 / layer 2 caching engine that is focused on distributed caching with enterprise features such as `CacheSync` (coming soon). It is built on top of the robust storage engine [Keyv](https://keyv.org) and provides a simple API to cache and retrieve data.

  * Simple to use with robust API
  * Not bloated with additional modules
- * Extendable to your own caching engine
  * Scalable and trusted storage engine by Keyv
  * Memory Caching with LRU and Expiration `CacheableMemory`
  * Resilient to failures with try/catch and offline
+ * Wrap / Memoization for Sync and Async Functions
  * Hooks and Events to extend functionality
- * Comprehensive testing and code coverage
  * Shorthand for ttl in milliseconds `(1m = 60000) (1h = 3600000) (1d = 86400000)`
+ * Non-blocking operations for layer 2 caching
  * Distributed Caching Sync via Pub/Sub (coming soon)
- * ESM and CommonJS support with TypeScript
+ * Comprehensive testing and code coverage
+ * ESM and CommonJS support with TypeScript
  * Maintained and supported regularly

- ## Table of Contents
+ # Table of Contents
  * [Getting Started](#getting-started)
  * [Basic Usage](#basic-usage)
  * [Hooks and Events](#hooks-and-events)
@@ -34,13 +36,16 @@
  * [CacheSync - Distributed Updates](#cachesync---distributed-updates)
  * [Cacheable Options](#cacheable-options)
  * [Cacheable Statistics (Instance Only)](#cacheable-statistics-instance-only)
- * [API](#api)
+ * [Cacheable - API](#cacheable---api)
  * [CacheableMemory - In-Memory Cache](#cacheablememory---in-memory-cache)
+ * [CacheableMemory Options](#cacheablememory-options)
+ * [CacheableMemory - API](#cacheablememory---api)
  * [Wrap / Memoization for Sync and Async Functions](#wrap--memoization-for-sync-and-async-functions)
+ * [Keyv Storage Adapter - KeyvCacheableMemory](#keyv-storage-adapter---keyvcacheablememory)
  * [How to Contribute](#how-to-contribute)
  * [License and Copyright](#license-and-copyright)

- ## Getting Started
+ # Getting Started

  `cacheable` is primarily used as an extension to your caching engine with a robust storage backend [Keyv](https://keyv.org), Memoization (Wrap), Hooks, Events, and Statistics.

@@ -48,7 +53,7 @@
  npm install cacheable
  ```

- ## Basic Usage
+ # Basic Usage

  ```javascript
  import { Cacheable } from 'cacheable';
@@ -83,7 +88,7 @@ const cache = new Cacheable({primary, secondary});

  This is a more advanced example and not needed for most use cases.

- ## Hooks and Events
+ # Hooks and Events

  The following hooks are available for you to extend the functionality of `cacheable` via `CacheableHooks` enum:

@@ -107,7 +112,7 @@ cacheable.onHook(CacheableHooks.BEFORE_SET, (data) => {
  });
  ```

- ## Storage Tiering and Caching
+ # Storage Tiering and Caching

  `cacheable` is built as a layer 1 and layer 2 caching engine by default. The purpose is to have your layer 1 be fast and your layer 2 be more persistent. The primary store is the layer 1 cache and the secondary store is the layer 2 cache. By adding the secondary store you are enabling layer 2 caching. By default the operations are blocking but fault tolerant:

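For illustration, a minimal two-tier setup along these lines might look like the following sketch, assuming the `Keyv` and `KeyvRedis` stores used elsewhere in the README (the Redis URL and keys are placeholders):

```javascript
import { Cacheable } from 'cacheable';
import { Keyv } from 'keyv';
import KeyvRedis from '@keyv/redis';

// Layer 1: fast in-memory store. Layer 2: persistent Redis store.
const primary = new Keyv();
const secondary = new KeyvRedis('redis://user:pass@localhost:6379');
const cache = new Cacheable({ primary, secondary });

await cache.set('user:1', { name: 'Ada' }); // written through both layers (blocking by default)
const user = await cache.get('user:1');     // served from the primary (layer 1) store when present
```
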
@@ -154,7 +159,7 @@ cache.ttl = -1; // sets the default ttl to 0 which is disabled
  console.log(cache.ttl); // undefined
  ```

- ## Non-Blocking Operations
+ # Non-Blocking Operations

  If you want your layer 2 (secondary) store to be non-blocking you can set the `nonBlocking` property to `true` in the options. This will make the secondary store non-blocking and will not wait for the secondary store to respond on `setting data`, `deleting data`, or `clearing data`. This is useful if you want to have a faster response time and not wait for the secondary store to respond.

@@ -166,7 +171,7 @@ const secondary = new KeyvRedis('redis://user:pass@localhost:6379');
  const cache = new Cacheable({secondary, nonBlocking: true});
  ```

- ## CacheSync - Distributed Updates
+ # CacheSync - Distributed Updates

  `cacheable` has a feature called `CacheSync` that is coming soon. This feature will allow you to have distributed caching with Pub/Sub. This will allow you to have multiple instances of `cacheable` running and when a value is set, deleted, or cleared it will update all instances of `cacheable` with the same value. Current plan is to support the following:

@@ -178,7 +183,7 @@ const cache = new Cacheable({secondary, nonBlocking: true});

  This feature should be live by end of year.

- ## Cacheable Options
+ # Cacheable Options

  The following options are available for you to configure `cacheable`:

@@ -187,8 +192,9 @@ The following options are available for you to configure `cacheable`:
  * `nonBlocking`: If the secondary store is non-blocking. Default is `false`.
  * `stats`: To enable statistics for this instance. Default is `false`.
  * `ttl`: The default time to live for the cache in milliseconds. Default is `undefined` which is disabled.
+ * `namespace`: The namespace for the cache. Default is `undefined`.

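A configuration sketch using the options listed above might look like this (the values are placeholders, not defaults):

```javascript
import { Cacheable } from 'cacheable';

const cache = new Cacheable({
  ttl: 60000,          // default time to live, in milliseconds
  namespace: 'my-app', // applied to the primary and secondary stores
  stats: true,         // enable statistics for this instance
  nonBlocking: false,  // wait on the secondary store (the default)
});
```
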
- ## Cacheable Statistics (Instance Only)
+ # Cacheable Statistics (Instance Only)

  If you want to enable statistics for your instance you can set the `.stats.enabled` property to `true` in the options. This will enable statistics for your instance and you can get the statistics by calling the `stats` property. Here are the following property statistics:

@@ -206,7 +212,7 @@ You can clear / reset the stats by calling the `.stats.reset()` method.

  _This does not enable statistics for your layer 2 cache as that is a distributed cache_.

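A possible way to read those per-instance statistics, assuming the property names listed under `stats` in the API section below, is sketched here:

```javascript
import { Cacheable } from 'cacheable';

const cache = new Cacheable({ stats: true });

await cache.set('answer', 42);
await cache.get('answer');  // recorded as a hit
await cache.get('missing'); // recorded as a miss

console.log(cache.stats.hits, cache.stats.misses, cache.stats.sets);
cache.stats.reset(); // clear / reset the statistics
```
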
- ## API
+ # Cacheable - API

  * `set(key, value, ttl?)`: Sets a value in the cache.
  * `setMany([{key, value, ttl?}])`: Sets multiple values in the cache.
@@ -228,10 +234,11 @@ _This does not enable statistics for your layer 2 cache as that is a distributed
  * `hash(object: any, algorithm = 'sha256'): string`: Hashes an object with the algorithm. Default is `sha256`.
  * `primary`: The primary store for the cache (layer 1) defaults to in-memory by Keyv.
  * `secondary`: The secondary store for the cache (layer 2) usually a persistent cache by Keyv.
+ * `namespace`: The namespace for the cache. Default is `undefined`. This will set the namespace for the primary and secondary stores.
  * `nonBlocking`: If the secondary store is non-blocking. Default is `false`.
  * `stats`: The statistics for this instance which includes `hits`, `misses`, `sets`, `deletes`, `clears`, `errors`, `count`, `vsize`, `ksize`.

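A short usage sketch of a few of the listed methods and properties (keys and values are placeholders) might look like:

```javascript
import { Cacheable } from 'cacheable';

const cache = new Cacheable({ namespace: 'orders' });

// setMany takes an array of { key, value, ttl? } items.
await cache.setMany([
  { key: 'order:1', value: { total: 10 } },
  { key: 'order:2', value: { total: 25 }, ttl: 60000 },
]);

// hash() defaults to the sha256 algorithm and returns a string.
console.log(cache.hash({ id: 'order:1' }));
```
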
- ## CacheableMemory - In-Memory Cache
+ # CacheableMemory - In-Memory Cache

  `cacheable` comes with a built-in in-memory cache called `CacheableMemory`. This is a simple in-memory cache that is used as the primary store for `cacheable`. You can use this as a standalone cache or as a primary store for `cacheable`. Here is an example of how to use `CacheableMemory`:

@@ -253,22 +260,23 @@ This simple in-memory cache uses multiple Map objects and a with `expiration` an

  By default we use lazy expiration deletion which means on `get` and `getMany` type functions we check if the value is expired and then delete it. If you want to have a more aggressive expiration policy you can set the `checkInterval` property to a value greater than `0` which will check for expired keys at the interval you set.

- ### CacheableMemory Options
+ ## CacheableMemory Options

  * `ttl`: The time to live for the cache in milliseconds. Default is `undefined` which means indefinitely.
  * `useClones`: If the cache should use clones for the values. Default is `true`.
  * `lruSize`: The size of the LRU cache. Default is `0` which is unlimited.
  * `checkInterval`: The interval to check for expired keys in milliseconds. Default is `0` which is disabled.

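Combining these options with the interval check described above, a sketch might look like the following (the numbers are arbitrary):

```javascript
import { CacheableMemory } from 'cacheable';

const cache = new CacheableMemory({
  ttl: 300000,          // default ttl of 5 minutes, in milliseconds
  useClones: true,      // store clones of values (the default)
  lruSize: 1000,        // cap the cache at 1000 entries via LRU
  checkInterval: 60000, // actively sweep expired keys every 60 seconds
});

cache.set('foo', 'bar');
console.log(cache.get('foo')); // 'bar'

cache.stopIntervalCheck(); // stop the periodic expiration sweep when finished
```
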
- ### CacheableMemory API
+ ## CacheableMemory - API

  * `set(key, value, ttl?)`: Sets a value in the cache.
- * `setMany([{key, value, ttl?}])`: Sets multiple values in the cache from `CachableItem`.
+ * `setMany([{key, value, ttl?}])`: Sets multiple values in the cache from `CacheableItem`.
  * `get(key)`: Gets a value from the cache.
  * `getMany([keys])`: Gets multiple values from the cache.
  * `getRaw(key)`: Gets a value from the cache as `CacheableStoreItem`.
  * `getManyRaw([keys])`: Gets multiple values from the cache as `CacheableStoreItem`.
  * `has(key)`: Checks if a value exists in the cache.
+ * `hasMany([keys])`: Checks if multiple values exist in the cache.
  * `delete(key)`: Deletes a value from the cache.
  * `deleteMany([keys])`: Deletes multiple values from the cache.
  * `take(key)`: Takes a value from the cache and deletes it.
@@ -283,18 +291,24 @@ By default we use lazy expiration deletion which means on `get` and `getMany` ty
  * `stopIntervalCheck()`: Stops the interval check for expired keys.
  * `hash(object: any, algorithm = 'sha256'): string`: Hashes an object with the algorithm. Default is `sha256`.

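A brief sketch exercising a few of the listed methods, assuming the synchronous behavior shown in the README's other `CacheableMemory` examples:

```javascript
import { CacheableMemory } from 'cacheable';

const cache = new CacheableMemory();
cache.setMany([
  { key: 'a', value: 1 },
  { key: 'b', value: 2 },
]);

console.log(cache.hasMany(['a', 'b'])); // whether each key exists
console.log(cache.getRaw('a'));         // the stored CacheableStoreItem rather than just the value

const taken = cache.take('b'); // returns the value and deletes the key
console.log(taken, cache.has('b'));
```
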
- ## Wrap / Memoization for Sync and Async Functions
+ # Wrap / Memoization for Sync and Async Functions

  `Cacheable` and `CacheableMemory` have a feature called `wrap` that allows you to wrap a function in a cache. This is useful for memoization and caching the results of a function. You can wrap a `sync` or `async` function in a cache. Here is an example of how to use the `wrap` function:

  ```javascript
  import { Cacheable } from 'cacheable';
  const asyncFunction = async (value: number) => {
-   return value * 2;
+   return Math.random() * value;
  };

  const cache = new Cacheable();
- const wrappedFunction = cache.wrap(asyncFunction, { ttl: '1h' });
+ const options = {
+   ttl: '1h', // 1 hour
+   keyPrefix: 'p1', // key prefix. This is used if you have multiple functions and need to set a unique prefix.
+ };
+ const wrappedFunction = cache.wrap(asyncFunction, options);
+ console.log(await wrappedFunction(2)); // a random number based on 2
+ console.log(await wrappedFunction(2)); // the same number, served from cache
  ```

  In this example we are wrapping an `async` function in a cache with a `ttl` of `1 hour`. This will cache the result of the function for `1 hour` and then expire the value. You can also wrap a `sync` function in a cache:
@@ -306,14 +320,31 @@ const syncFunction = (value: number) => {
  };

  const cache = new CacheableMemory();
- const wrappedFunction = cache.wrap(syncFunction, { ttl: '1h' });
+ const wrappedFunction = cache.wrap(syncFunction, { ttl: '1h', key: 'syncFunction' });
+ console.log(wrappedFunction(2)); // 4
+ console.log(wrappedFunction(2)); // 4 from cache
+ console.log(cache.get('syncFunction')); // 4
  ```

- In this example we are wrapping a `sync` function in a cache with a `ttl` of `1 hour`. This will cache the result of the function for `1 hour` and then expire the value.
+ In this example we are wrapping a `sync` function in a cache with a `ttl` of `1 hour`. This will cache the result of the function for `1 hour` and then expire the value. You can also set the `key` property in the `wrap()` options to set a custom key for the cache.
+
+ # Keyv Storage Adapter - KeyvCacheableMemory
+
+ `cacheable` comes with a built-in storage adapter for Keyv called `KeyvCacheableMemory`. This takes `CacheableMemory` and creates a storage adapter for Keyv. This is useful if you want to use `CacheableMemory` as a storage adapter for Keyv. Here is an example of how to use `KeyvCacheableMemory`:
+
+ ```javascript
+ import { Keyv } from 'keyv';
+ import { KeyvCacheableMemory } from 'cacheable';
+
+ const keyv = new Keyv({ store: new KeyvCacheableMemory() });
+ await keyv.set('foo', 'bar');
+ const value = await keyv.get('foo');
+ console.log(value); // bar
+ ```

- ## How to Contribute
+ # How to Contribute

  You can contribute by forking the repo and submitting a pull request. Please make sure to add tests and update the documentation. To learn more about how to contribute go to our main README [https://github.com/jaredwray/cacheable](https://github.com/jaredwray/cacheable). This will talk about how to `Open a Pull Request`, `Ask a Question`, or `Post an Issue`.

- ## License and Copyright
+ # License and Copyright
  [MIT © Jared Wray](./LICENSE)