memcache 1.2.0 → 1.3.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -44,6 +44,21 @@ Nodejs Memcache Client
  - [decr(key, value?)](#decrkey-value)
  - [touch(key, exptime)](#touchkey-exptime)
  - [Hook Examples](#hook-examples)
+ - [Distribution Algorithms](#distribution-algorithms)
+ - [KetamaHash (Default)](#ketamahash-default)
+ - [ModulaHash](#modulahash)
+ - [Choosing an Algorithm](#choosing-an-algorithm)
+ - [Retry Configuration](#retry-configuration)
+ - [Basic Retry Setup](#basic-retry-setup)
+ - [Backoff Strategies](#backoff-strategies)
+ - [Idempotent Safety](#idempotent-safety)
+ - [Methods Without Retry Support](#methods-without-retry-support)
+ - [SASL Authentication](#sasl-authentication)
+ - [Enabling SASL Authentication](#enabling-sasl-authentication)
+ - [SASL Options](#sasl-options)
+ - [Per-Node SASL Configuration](#per-node-sasl-configuration)
+ - [Authentication Events](#authentication-events)
+ - [Server Configuration](#server-configuration)
  - [Contributing](#contributing)
  - [License and Copyright](#license-and-copyright)
 
@@ -179,6 +194,10 @@ const client = new Memcache({
  - `keepAlive?: boolean` - Keep connection alive (default: true)
  - `keepAliveDelay?: number` - Keep alive delay in milliseconds (default: 1000)
  - `hash?: HashProvider` - Hash provider for consistent hashing (default: KetamaHash)
+ - `retries?: number` - Number of retry attempts for failed commands (default: 0)
+ - `retryDelay?: number` - Base delay in milliseconds between retries (default: 100)
+ - `retryBackoff?: RetryBackoffFunction` - Function to calculate backoff delay (default: fixed delay)
+ - `retryOnlyIdempotent?: boolean` - Only retry commands marked as idempotent (default: true)

  ## Properties
 
@@ -200,6 +219,18 @@ Get or set the keepAlive setting. Updates all existing nodes. Requires `reconnec
  ### `keepAliveDelay: number`
  Get or set the keep alive delay in milliseconds. Updates all existing nodes. Requires `reconnect()` to apply changes.

+ ### `retries: number`
+ Get or set the number of retry attempts for failed commands (default: 0).
+
+ ### `retryDelay: number`
+ Get or set the base delay in milliseconds between retry attempts (default: 100).
+
+ ### `retryBackoff: RetryBackoffFunction`
+ Get or set the backoff function for calculating retry delays.
+
+ ### `retryOnlyIdempotent: boolean`
+ Get or set whether retries are restricted to idempotent commands only (default: true).
+
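+ A short sketch of adjusting these properties on an existing client; `exponentialRetryBackoff` is the built-in export described under Backoff Strategies below:
+
+ ```javascript
+ import { Memcache, exponentialRetryBackoff } from 'memcache';
+
+ const client = new Memcache({ nodes: ['localhost:11211'] });
+
+ // Tune retry behaviour at runtime through the properties above
+ client.retries = 3;
+ client.retryDelay = 50;
+ client.retryBackoff = exponentialRetryBackoff;
+ client.retryOnlyIdempotent = true; // keep the default safety behaviour
+ ```
+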
  ## Connection Management

  ### `connect(nodeId?: string): Promise<void>`
@@ -473,9 +504,382 @@ client.onHook('after:set', async (context) => {
  });
  ```

+ # Distribution Algorithms
+
+ Memcache supports pluggable distribution algorithms to determine how keys are distributed across nodes. You can configure the algorithm using the `hash` option.
+
+ ## KetamaHash (Default)
+
+ KetamaHash uses the Ketama consistent hashing algorithm, which minimizes key redistribution when nodes are added or removed. This is the default and recommended algorithm for production environments with dynamic scaling.
+
+ ```javascript
+ import { Memcache } from 'memcache';
+
+ // KetamaHash is used by default
+ const client = new Memcache({
+   nodes: ['server1:11211', 'server2:11211', 'server3:11211']
+ });
+ ```
+
+ **Characteristics:**
+ - Minimal key redistribution (~1/n keys move when adding/removing nodes)
+ - Uses virtual nodes for better distribution
+ - Supports weighted nodes
+ - Best for production environments with dynamic scaling
+
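+ The algorithm can also be passed explicitly. A minimal sketch, assuming `KetamaHash` is exported from the package the same way `ModulaHash` is (verify the export in your installed version):
+
+ ```javascript
+ import { Memcache, KetamaHash } from 'memcache';
+
+ // Equivalent to the default behaviour, but spelled out
+ const client = new Memcache({
+   nodes: ['server1:11211', 'server2:11211', 'server3:11211'],
+   hash: new KetamaHash()
+ });
+ ```
+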
+ ## ModulaHash
+
+ ModulaHash uses a simple modulo-based hashing algorithm (`hash(key) % nodeCount`). Because the mapping depends directly on the node count, all keys may be redistributed when nodes are added or removed.
+
+ ```javascript
+ import { Memcache, ModulaHash } from 'memcache';
+
+ // Use ModulaHash for distribution
+ const client = new Memcache({
+   nodes: ['server1:11211', 'server2:11211', 'server3:11211'],
+   hash: new ModulaHash()
+ });
+
+ // With a custom hash algorithm (default is sha1)
+ const client2 = new Memcache({
+   nodes: ['server1:11211', 'server2:11211'],
+   hash: new ModulaHash('md5')
+ });
+ ```
+
+ **Characteristics:**
+ - Simple and fast algorithm
+ - All keys may be redistributed when nodes are added or removed
+ - Supports weighted nodes (nodes with higher weight appear more in the distribution)
+ - Best for fixed-size clusters or testing environments
+
+ ### Weighted Nodes with ModulaHash
+
+ ModulaHash supports weighted nodes, where nodes with higher weights receive proportionally more keys:
+
+ ```javascript
+ import { Memcache, ModulaHash, createNode } from 'memcache';
+
+ // Create nodes with different weights
+ const node1 = createNode('server1', 11211, { weight: 3 }); // 3x traffic
+ const node2 = createNode('server2', 11211, { weight: 1 }); // 1x traffic
+
+ const client = new Memcache({
+   nodes: [node1, node2],
+   hash: new ModulaHash()
+ });
+
+ // server1 will receive approximately 75% of keys
+ // server2 will receive approximately 25% of keys
+ ```
+
+ ## Choosing an Algorithm
+
+ | Feature | KetamaHash | ModulaHash |
+ |---------|------------|------------|
+ | Key redistribution on node change | Minimal (~1/n keys) | All keys may move |
+ | Complexity | Higher (virtual nodes) | Lower (simple modulo) |
+ | Performance | Slightly slower | Faster |
+ | Best for | Dynamic scaling | Fixed clusters |
+ | Weighted nodes | Yes | Yes |
+
+ **Use KetamaHash (default) when:**
+ - Your cluster size may change dynamically
+ - You want to minimize cache invalidation during scaling
+ - You're running in production
+
+ **Use ModulaHash when:**
+ - Your cluster size is fixed
+ - You prefer simplicity over minimal redistribution
+ - You're in a testing or development environment
+
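+ As an illustration only (the environment check below is hypothetical, not part of the library), the choice can be made once at construction time:
+
+ ```javascript
+ import { Memcache, ModulaHash } from 'memcache';
+
+ // Use the simpler ModulaHash outside production; omit `hash` elsewhere
+ // so the default KetamaHash is used.
+ const options = { nodes: ['server1:11211', 'server2:11211'] };
+ if (process.env.NODE_ENV !== 'production') {
+   options.hash = new ModulaHash();
+ }
+ const client = new Memcache(options);
+ ```
+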
+ # Retry Configuration
+
+ The Memcache client supports automatic retry of failed commands with configurable backoff strategies.
+
+ ## Basic Retry Setup
+
+ Enable retries by setting the `retries` option:
+
+ ```javascript
+ import { Memcache } from 'memcache';
+
+ const client = new Memcache({
+   nodes: ['localhost:11211'],
+   retries: 3, // Retry up to 3 times
+   retryDelay: 100 // 100ms between retries
+ });
+ ```
+
+ You can also modify retry settings at runtime:
+
+ ```javascript
+ client.retries = 5;
+ client.retryDelay = 200;
+ ```
+
+ ## Backoff Strategies
+
+ The client includes two built-in backoff functions:
+
+ ### Fixed Delay (Default)
+
+ ```javascript
+ import { Memcache, defaultRetryBackoff } from 'memcache';
+
+ const client = new Memcache({
+   retries: 3,
+   retryDelay: 100,
+   retryBackoff: defaultRetryBackoff // 100ms, 100ms, 100ms
+ });
+ ```
+
+ ### Exponential Backoff
+
+ ```javascript
+ import { Memcache, exponentialRetryBackoff } from 'memcache';
+
+ const client = new Memcache({
+   retries: 3,
+   retryDelay: 100,
+   retryBackoff: exponentialRetryBackoff // 100ms, 200ms, 400ms
+ });
+ ```
+
+ ### Custom Backoff Function
+
+ You can provide your own backoff function:
+
+ ```javascript
+ const client = new Memcache({
+   retries: 3,
+   retryDelay: 100,
+   retryBackoff: (attempt, baseDelay) => {
+     // Exponential backoff with jitter
+     const delay = baseDelay * Math.pow(2, attempt);
+     return delay + Math.random() * delay * 0.1;
+   }
+ });
+ ```
+
+ The backoff function receives:
+ - `attempt` - The current attempt number (0-indexed)
+ - `baseDelay` - The configured `retryDelay` value
+
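+ For example, a capped linear backoff built on this signature (the 1000ms cap is arbitrary):
+
+ ```javascript
+ import { Memcache } from 'memcache';
+
+ const client = new Memcache({
+   nodes: ['localhost:11211'],
+   retries: 5,
+   retryDelay: 100,
+   // attempt is 0-indexed, so delays are 100ms, 200ms, 300ms, ... capped at 1000ms
+   retryBackoff: (attempt, baseDelay) => Math.min(baseDelay * (attempt + 1), 1000)
+ });
+ ```
+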
+ ## Idempotent Safety
+
+ **Important:** By default, retries are only performed for commands explicitly marked as idempotent. This prevents accidental double-execution of non-idempotent operations like `incr`, `decr`, `append`, and `prepend`.
+
+ ### Why This Matters
+
+ If a network timeout occurs after the server applies a mutation but before the client receives the response, retrying would apply the mutation twice:
+ - Counter incremented twice instead of once
+ - Data appended twice instead of once
+
+ ### Safe Usage Patterns
+
+ **For read operations (always safe to retry):**
+
+ ```javascript
+ // Mark read operations as idempotent
+ await client.execute('get mykey', nodes, { idempotent: true });
+ ```
+
+ **For idempotent writes (safe to retry):**
+
+ ```javascript
+ // SET with the same value is idempotent
+ await client.execute('set mykey 0 0 5\r\nhello', nodes, { idempotent: true });
+ ```
+
+ **Disable safety for all commands (use with caution):**
+
+ ```javascript
+ const client = new Memcache({
+   retries: 3,
+   retryOnlyIdempotent: false // Allow retries for ALL commands
+ });
+ ```
+
+ ### Behavior Summary
+
+ | `retryOnlyIdempotent` | `idempotent` flag | Retries enabled? |
+ |-----------------------|-------------------|------------------|
+ | `true` (default) | `false` (default) | No |
+ | `true` (default) | `true` | Yes |
+ | `false` | (any) | Yes |
+
+ ### Methods Without Retry Support
+
+ The following methods do not use the retry mechanism and have their own error handling:
+
+ - `get()` - Returns `undefined` on failure
+ - `gets()` - Returns partial results on node failure
+ - `flush()` - Operates directly on nodes
+ - `stats()` - Operates directly on nodes
+ - `version()` - Operates directly on nodes
+
+ To use retries with read operations, use the `execute()` method directly:
+
+ ```javascript
+ const nodes = await client.getNodesByKey('mykey');
+ const results = await client.execute('get mykey', nodes, { idempotent: true });
+ ```
+
+ # SASL Authentication
+
+ The Memcache client supports SASL (Simple Authentication and Security Layer) authentication using the PLAIN mechanism. This allows you to connect to memcached servers that require authentication.
+
+ ## Enabling SASL Authentication
+
+ ```javascript
+ import { Memcache } from 'memcache';
+
+ const client = new Memcache({
+   nodes: ['localhost:11211'],
+   sasl: {
+     username: 'myuser',
+     password: 'mypassword',
+   },
+ });
+
+ await client.connect();
+ // Client is now authenticated and ready to use
+ ```
+
+ ## SASL Options
+
+ The `sasl` option accepts an object with the following properties:
+
+ - `username: string` - The username for authentication (required)
+ - `password: string` - The password for authentication (required)
+ - `mechanism?: 'PLAIN'` - The SASL mechanism to use (default: 'PLAIN')
+
+ Currently, only the PLAIN mechanism is supported.
+
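+ For example, spelling the mechanism out explicitly (equivalent to omitting it, since PLAIN is the default and only option):
+
+ ```javascript
+ import { Memcache } from 'memcache';
+
+ const client = new Memcache({
+   nodes: ['localhost:11211'],
+   sasl: {
+     username: 'myuser',
+     password: 'mypassword',
+     mechanism: 'PLAIN', // optional; PLAIN is currently the only supported mechanism
+   },
+ });
+ ```
+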
+ ## Binary Protocol Methods
+
+ **Important:** Memcached servers with SASL enabled (`-S` flag) require the binary protocol for all operations after authentication. The standard text-based methods (`client.get()`, `client.set()`, etc.) will not work on SASL-enabled servers.
+
+ Use the `binary*` methods on nodes for SASL-enabled servers:
+
+ ```javascript
+ import { Memcache } from 'memcache';
+
+ const client = new Memcache({
+   nodes: ['localhost:11211'],
+   sasl: { username: 'user', password: 'pass' },
+ });
+
+ await client.connect();
+
+ // Access the node directly for binary operations
+ const node = client.nodes[0];
+
+ // Binary protocol operations
+ await node.binarySet('mykey', 'myvalue', 3600); // Set with 1 hour expiry
+ const value = await node.binaryGet('mykey'); // Get value
+ await node.binaryDelete('mykey'); // Delete key
+
+ // Other binary operations
+ await node.binaryAdd('newkey', 'value'); // Add (only if not exists)
+ await node.binaryReplace('existingkey', 'newvalue'); // Replace (only if exists)
+ await node.binaryIncr('counter', 1); // Increment
+ await node.binaryDecr('counter', 1); // Decrement
+ await node.binaryAppend('mykey', '-suffix'); // Append to value
+ await node.binaryPrepend('mykey', 'prefix-'); // Prepend to value
+ await node.binaryTouch('mykey', 7200); // Update expiration
+ await node.binaryFlush(); // Flush all
+ const version = await node.binaryVersion(); // Get server version
+ const stats = await node.binaryStats(); // Get server stats
+ ```
+
+ ## Per-Node SASL Configuration
+
+ You can also configure SASL credentials when creating individual nodes:
+
+ ```javascript
+ import { createNode } from 'memcache';
+
+ // Create a node with SASL credentials
+ const node = createNode('localhost', 11211, {
+   sasl: { username: 'user', password: 'pass' },
+ });
+
+ // Connect and use binary methods
+ await node.connect();
+ await node.binarySet('mykey', 'hello');
+ const value = await node.binaryGet('mykey');
+ ```
+
+ ## Authentication Events
+
+ You can listen for authentication events on both nodes and the client:
+
+ ```javascript
+ import { Memcache, MemcacheNode } from 'memcache';
+
+ // Node-level events
+ const node = new MemcacheNode('localhost', 11211, {
+   sasl: { username: 'user', password: 'pass' },
+ });
+
+ node.on('authenticated', () => {
+   console.log('Node authenticated successfully');
+ });
+
+ node.on('error', (error) => {
+   if (error.message.includes('SASL authentication failed')) {
+     console.error('Authentication failed:', error.message);
+   }
+ });
+
+ await node.connect();
+
+ // Client-level events (forwarded from nodes)
+ const client = new Memcache({
+   nodes: ['localhost:11211'],
+   sasl: { username: 'user', password: 'pass' },
+ });
+
+ client.on('authenticated', () => {
+   console.log('Client authenticated');
+ });
+
+ await client.connect();
+ ```
+
+ ### Node Properties
+
+ - `node.hasSaslCredentials` - Returns `true` if SASL credentials are configured
+ - `node.isAuthenticated` - Returns `true` if the node has successfully authenticated
+
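+ A brief sketch of how these flags can be checked (assuming a node configured as in the examples above):
+
+ ```javascript
+ import { createNode } from 'memcache';
+
+ const node = createNode('localhost', 11211, {
+   sasl: { username: 'user', password: 'pass' },
+ });
+
+ if (node.hasSaslCredentials) {
+   await node.connect(); // SASL authentication happens during connect
+   console.log('Authenticated:', node.isAuthenticated); // true on success
+ }
+ ```
+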
+ ## Server Configuration
+
+ To use SASL authentication, your memcached server must be configured with SASL support:
+
+ 1. **Build memcached with SASL support** - Ensure memcached was compiled with `--enable-sasl`
+
+ 2. **Create SASL users** - Use `saslpasswd2` to create users:
+    ```bash
+    saslpasswd2 -a memcached -c username
+    ```
+
+ 3. **Configure SASL mechanism** - Create `/etc/sasl2/memcached.conf`:
+    ```
+    mech_list: plain
+    ```
+
+ 4. **Start memcached with SASL** - Use the `-S` flag:
+    ```bash
+    memcached -S -m 64 -p 11211
+    ```
+
+ For more details, see the [memcached SASL documentation](https://github.com/memcached/memcached/wiki/SASLHowto).
+
  # Contributing

- Please read our [Contributing Guidelines](./CONTRIBUTING.md) and also our [Code of Conduct](./CODE_OF_CONDUCT.md).
+ Please read our [Contributing Guidelines](./CONTRIBUTING.md) and also our [Code of Conduct](./CODE_OF_CONDUCT.md).

  # License and Copyright