ioredis 4.19.4 → 4.23.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/Changelog.md CHANGED
@@ -1,3 +1,38 @@
+ # [4.23.0](https://github.com/luin/ioredis/compare/v4.22.0...v4.23.0) (2021-02-25)
+
+
+ ### Features
+
+ * add support for DNS SRV records ([#1283](https://github.com/luin/ioredis/issues/1283)) ([13a8614](https://github.com/luin/ioredis/commit/13a861432c2331ca25038f6b4eb060ba7b865b47))
+
+ # [4.22.0](https://github.com/luin/ioredis/compare/v4.21.0...v4.22.0) (2021-02-06)
+
+
+ ### Features
+
+ * add type support for scanStream ([#1287](https://github.com/luin/ioredis/issues/1287)) ([ad8ffa0](https://github.com/luin/ioredis/commit/ad8ffa06d68788de3c0703a70fe4c5b64ab4ac5b)), closes [#1279](https://github.com/luin/ioredis/issues/1279)
+
+ # [4.21.0](https://github.com/luin/ioredis/compare/v4.20.0...v4.21.0) (2021-02-06)
+
+
+ ### Features
+
+ * upgrade command list to Redis 6.2 ([#1286](https://github.com/luin/ioredis/issues/1286)) ([6ef9c6e](https://github.com/luin/ioredis/commit/6ef9c6e839dee8be021bcd43a57eaee56ec2f573))
+
+ # [4.20.0](https://github.com/luin/ioredis/compare/v4.19.5...v4.20.0) (2021-02-05)
+
+
+ ### Features
+
+ * support username in URI ([#1284](https://github.com/luin/ioredis/issues/1284)) ([cbc5421](https://github.com/luin/ioredis/commit/cbc54218e26bd20ac3725df2e70b810599112ef8))
+
+ ## [4.19.5](https://github.com/luin/ioredis/compare/v4.19.4...v4.19.5) (2021-01-14)
+
+
+ ### Bug Fixes
+
+ * password contains colons ([#1274](https://github.com/luin/ioredis/issues/1274)) ([37c6daf](https://github.com/luin/ioredis/commit/37c6dafafd51d817a3dfe4b4ca722fb709a209e7))
+
  ## [4.19.4](https://github.com/luin/ioredis/compare/v4.19.3...v4.19.4) (2020-12-13)
 
 
package/README.md CHANGED
@@ -25,13 +25,14 @@ used in the world's biggest online commerce company [Alibaba](http://www.alibaba
  4. Transparent key prefixing.
  5. Abstraction for Lua scripting, allowing you to define custom commands.
  6. Support for binary data.
- 7. Support for TLS.
+ 7. Support for TLS 🔒.
  8. Support for offline queue and ready checking.
  9. Support for ES6 types, such as `Map` and `Set`.
- 10. Support for GEO commands (Redis 3.2 Unstable).
- 11. Sophisticated error handling strategy.
- 12. Support for NAT mapping.
- 13. Support for autopipelining
+ 10. Support for GEO commands 📍.
+ 11. Support for Redis ACL.
+ 12. Sophisticated error handling strategy.
+ 13. Support for NAT mapping.
+ 14. Support for autopipelining
 
  # Links
 
@@ -134,6 +135,13 @@ You can also specify connection options as a [`redis://` URL](http://www.iana.or
  ```javascript
  // Connect to 127.0.0.1:6380, db 4, using password "authpassword":
  new Redis("redis://:authpassword@127.0.0.1:6380/4");
+
+ // A username can also be passed via the URI.
+ // Note that for compatibility reasons `allowUsernameInURI`
+ // needs to be provided; otherwise the username part will be ignored.
+ new Redis(
+   "redis://username:authpassword@127.0.0.1:6380/4?allowUsernameInURI=true"
+ );
  ```
 
  See [API Documentation](API.md#new_Redis) for all available options.
@@ -204,7 +212,7 @@ redis.getBuffer("foo", (err, result) => {
  ## Pipelining
 
  If you want to send a batch of commands (e.g. > 5), you can use pipelining to queue
- the commands in memory and then send them to Redis all at once. This way the performance improves by 50%~300% (See [benchmark section](#benchmark)).
+ the commands in memory and then send them to Redis all at once. This way the performance improves by 50%~300% (See [benchmark section](#benchmarks)).
 
  `redis.pipeline()` creates a `Pipeline` instance. You can call any Redis
  commands on it just like the `Redis` instance. The commands are queued in memory
@@ -575,12 +583,15 @@ stream.on("end", () => {
  });
  ```
 
- `scanStream` accepts an option, with which you can specify the `MATCH` pattern and the `COUNT` argument:
+ `scanStream` accepts an option, with which you can specify the `MATCH` pattern, the `TYPE` filter, and the `COUNT` argument:
 
  ```javascript
  const stream = redis.scanStream({
    // only returns keys following the pattern of `user:*`
    match: "user:*",
+   // only return objects that match a given type
+   // (requires Redis >= 6.0)
+   type: "zset",
    // returns approximately 100 elements per call
    count: 100,
  });
@@ -662,7 +673,7 @@ Set maxRetriesPerRequest to `null` to disable this behavior, and every command w
 
  ### Reconnect on error
 
- Besides auto-reconnect when the connection is closed, ioredis supports reconnecting on the specified errors by the `reconnectOnError` option. Here's an example that will reconnect when receiving `READONLY` error:
+ Besides auto-reconnect when the connection is closed, ioredis supports reconnecting on certain Redis errors using the `reconnectOnError` option. Here's an example that will reconnect when receiving a `READONLY` error:
 
  ```javascript
  const redis = new Redis({
@@ -676,9 +687,9 @@ const redis = new Redis({
  });
  ```
 
- This feature is useful when using Amazon ElastiCache. Once failover happens, Amazon ElastiCache switches the master we are currently connected to into a slave, causing subsequent writes to fail with the error `READONLY`. Using `reconnectOnError`, we can force the connection to reconnect on this error in order to connect to the new master.
+ This feature is useful when using Amazon ElastiCache instances with Auto-failover disabled. On these instances, test your `reconnectOnError` handler by manually promoting the replica node to the primary role using the AWS console. The following writes fail with the error `READONLY`. Using `reconnectOnError`, we can force the connection to reconnect on this error in order to connect to the new master. Furthermore, if `reconnectOnError` returns `2`, ioredis will resend the failed command after reconnecting.
 
- Furthermore, if the `reconnectOnError` returns `2`, ioredis will resend the failed command after reconnecting.
+ On ElastiCache instances with Auto-failover enabled, `reconnectOnError` does not execute. Instead of returning a Redis error, AWS closes all connections to the master endpoint until the new primary node is ready. ioredis reconnects via `retryStrategy` instead of `reconnectOnError` after about a minute. On ElastiCache instances with Auto-failover enabled, test failover events with the `Failover primary` option in the AWS console.
 
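The return-value contract described above can be sketched as a minimal standalone handler (hypothetical code, not taken from the package):

```javascript
// Sketch of a reconnectOnError handler, assuming the documented contract:
// a falsy return means "don't reconnect", true/1 means "reconnect", and 2
// means "reconnect and resend the failed command after reconnecting".
function reconnectOnError(err) {
  if (err.message.includes("READONLY")) {
    return 2; // reconnect, then retry the command that failed with READONLY
  }
  return false;
}

// Would be wired up as: new Redis({ reconnectOnError })
```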
  ## Connection Events
 
@@ -922,13 +933,17 @@ Sometimes you may want to send a command to multiple nodes (masters or slaves) o
  ```javascript
  // Send `FLUSHDB` command to all slaves:
  const slaves = cluster.nodes("slave");
- Promise.all(slaves.map(node => node.flushdb()))
+ Promise.all(slaves.map((node) => node.flushdb()));
 
  // Get keys of all the masters:
  const masters = cluster.nodes("master");
- Promise.all(masters.map(node => node.keys()).then(keys => {
-   // keys: [['key1', 'key2'], ['key3', 'key4']]
- }));
+ Promise.all(masters.map((node) => node.keys())).then((keys) => {
+   // keys: [['key1', 'key2'], ['key3', 'key4']]
+ });
  ```
 
  ### NAT Mapping
@@ -1063,7 +1078,7 @@ const cluster = new Redis.Cluster(
 
  ## Autopipelining
 
- In standard mode, when you issue multiple commands, ioredis sends them to the server one by one. As described in Redis pipeline documentation, this is a suboptimal use of the network link, especially when such link is not very performant.
+ In standard mode, when you issue multiple commands, ioredis sends them to the server one by one. As described in Redis pipeline documentation, this is a suboptimal use of the network link, especially when such link is not very performant.
 
  The TCP and network overhead negatively affects performance. Commands are stuck in the send queue until the previous ones are correctly delivered to the server. This is a problem known as Head-Of-Line blocking (HOL).
 
@@ -1075,38 +1090,39 @@ This feature can dramatically improve throughput and avoids HOL blocking. In our
 
  While an automatic pipeline is executing, all new commands will be enqueued in a new pipeline which will be executed as soon as the previous finishes.
 
- When using Redis Cluster, one pipeline per node is created. Commands are assigned to pipelines according to which node serves the slot.
+ When using Redis Cluster, one pipeline per node is created. Commands are assigned to pipelines according to which node serves the slot.
 
- A pipeline will thus contain commands using different slots but that ultimately are assigned to the same node.
+ A pipeline will thus contain commands using different slots but that ultimately are assigned to the same node.
 
  Note that the same slot limitation within a single command still holds, as it is a Redis limitation.
 
-
  ### Example of automatic pipeline enqueuing
 
  This sample code uses ioredis with automatic pipeline enabled.
 
  ```javascript
- const Redis = require('./built');
- const http = require('http');
+ const Redis = require("./built");
+ const http = require("http");
 
  const db = new Redis({ enableAutoPipelining: true });
 
  const server = http.createServer((request, response) => {
-   const key = new URL(request.url, 'https://localhost:3000/').searchParams.get('key');
+   const key = new URL(request.url, "https://localhost:3000/").searchParams.get(
+     "key"
+   );
 
    db.get(key, (err, value) => {
-     response.writeHead(200, { 'Content-Type': 'text/plain' });
+     response.writeHead(200, { "Content-Type": "text/plain" });
      response.end(value);
    });
- })
+ });
 
  server.listen(3000);
  ```
 
  When Node receives requests, it schedules them to be processed in one or more iterations of the events loop.
 
- All commands issued by requests processing during one iteration of the loop will be wrapped in a pipeline automatically created by ioredis.
+ All commands issued by requests processing during one iteration of the loop will be wrapped in a pipeline automatically created by ioredis.
 
  In the example above, the pipeline will have the following contents:
 
@@ -1128,24 +1144,22 @@ This approach increases the utilization of the network link, reduces the TCP ove
 
  ### Benchmarks
 
- Here's some of the results of our tests for a single node.
+ Here's some of the results of our tests for a single node.
 
  Each iteration of the test runs 1000 random commands on the server.
 
  | | Samples | Result | Tolerance |
- |---------------------------|---------|---------------|-----------|
+ | ------------------------- | ------- | ------------- | --------- |
  | default | 1000 | 174.62 op/sec | ± 0.45 % |
  | enableAutoPipelining=true | 1500 | 233.33 op/sec | ± 0.88 % |
 
-
  And here's the same test for a cluster of 3 masters and 3 replicas:
 
  | | Samples | Result | Tolerance |
- |---------------------------|---------|---------------|-----------|
+ | ------------------------- | ------- | ------------- | --------- |
  | default | 1000 | 164.05 op/sec | ± 0.42 % |
  | enableAutoPipelining=true | 3000 | 235.31 op/sec | ± 0.94 % |
 
-
  # Error Handling
 
  All the errors returned by the Redis server are instances of `ReplyError`, which can be accessed via `Redis`:
@@ -27,6 +27,9 @@ class ScanStream extends stream_1.Readable {
  if (this.opt.match) {
    args.push("MATCH", this.opt.match);
  }
+ if (this.opt.type) {
+   args.push("TYPE", this.opt.type);
+ }
  if (this.opt.count) {
    args.push("COUNT", String(this.opt.count));
  }
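The argument assembly above can be illustrated with a standalone sketch (a hypothetical helper mirroring the `_read` logic, not an ioredis export):

```javascript
// Assemble SCAN arguments the way ScanStream does: cursor first, then the
// optional MATCH / TYPE / COUNT modifiers. TYPE requires Redis >= 6.0.
function buildScanArgs(cursor, opt = {}) {
  const args = [String(cursor)];
  if (opt.match) args.push("MATCH", opt.match);
  if (opt.type) args.push("TYPE", opt.type);
  if (opt.count) args.push("COUNT", String(opt.count));
  return args;
}

console.log(buildScanArgs(0, { match: "user:*", type: "zset", count: 100 }));
// [ '0', 'MATCH', 'user:*', 'TYPE', 'zset', 'COUNT', '100' ]
```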
@@ -12,6 +12,8 @@ exports.DEFAULT_CLUSTER_OPTIONS = {
  retryDelayOnTryAgain: 100,
  slotsRefreshTimeout: 1000,
  slotsRefreshInterval: 5000,
+ useSRVRecords: false,
+ resolveSrv: dns_1.resolveSrv,
  dnsLookup: dns_1.lookup,
  enableAutoPipelining: false,
  autoPipeliningIgnoredCommands: [],
@@ -726,6 +726,30 @@ class Cluster extends events_1.EventEmitter {
      }
    });
  }
+ resolveSrv(hostname) {
+   return new Promise((resolve, reject) => {
+     this.options.resolveSrv(hostname, (err, records) => {
+       if (err) {
+         return reject(err);
+       }
+       const self = this, groupedRecords = util_1.groupSrvRecords(records), sortedKeys = Object.keys(groupedRecords).sort((a, b) => parseInt(a) - parseInt(b));
+       function tryFirstOne(err) {
+         if (!sortedKeys.length) {
+           return reject(err);
+         }
+         const key = sortedKeys[0], group = groupedRecords[key], record = util_1.weightSrvRecords(group);
+         if (!group.records.length) {
+           sortedKeys.shift();
+         }
+         self.dnsLookup(record.name).then((host) => resolve({
+           host,
+           port: record.port,
+         }), tryFirstOne);
+       }
+       tryFirstOne();
+     });
+   });
+ }
  dnsLookup(hostname) {
    return new Promise((resolve, reject) => {
      this.options.dnsLookup(hostname, (err, address) => {
@@ -758,11 +782,20 @@ class Cluster extends events_1.EventEmitter {
  if (hostnames.length === 0) {
    return Promise.resolve(startupNodes);
  }
- return Promise.all(hostnames.map((hostname) => this.dnsLookup(hostname))).then((ips) => {
-   const hostnameToIP = utils_2.zipMap(hostnames, ips);
-   return startupNodes.map((node) => hostnameToIP.has(node.host)
-     ? Object.assign({}, node, { host: hostnameToIP.get(node.host) })
-     : node);
+ return Promise.all(hostnames.map((this.options.useSRVRecords ? this.resolveSrv : this.dnsLookup).bind(this))).then((configs) => {
+   const hostnameToConfig = utils_2.zipMap(hostnames, configs);
+   return startupNodes.map((node) => {
+     const config = hostnameToConfig.get(node.host);
+     if (!config) {
+       return node;
+     }
+     else if (this.options.useSRVRecords) {
+       return Object.assign({}, node, config);
+     }
+     else {
+       return Object.assign({}, node, { host: config });
+     }
+   });
  });
  }
  }
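The overall resolution flow above can be sketched with an injectable resolver, in the spirit of the `resolveSrv` option (names and the simplified selection are illustrative; the real implementation additionally groups records by priority, weights them, and falls back through the remaining records on lookup failure):

```javascript
// Simplified sketch: resolve an SRV name to a { host, port } pair using a
// caller-supplied resolver, preferring the lowest `priority` value.
function resolveSrvRecord(resolver, hostname) {
  return new Promise((resolve, reject) => {
    resolver(hostname, (err, records) => {
      if (err) return reject(err);
      // Lower `priority` value wins, so try that record first.
      const sorted = [...records].sort((a, b) => a.priority - b.priority);
      resolve({ host: sorted[0].name, port: sorted[0].port });
    });
  });
}

// A fake resolver standing in for dns.resolveSrv:
const fakeResolver = (hostname, cb) =>
  cb(null, [
    { name: "replica.example.com", port: 6380, priority: 20, weight: 10 },
    { name: "primary.example.com", port: 6379, priority: 10, weight: 10 },
  ]);

resolveSrvRecord(fakeResolver, "_redis._tcp.example.com").then(console.log);
// { host: 'primary.example.com', port: 6379 }
```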
@@ -57,3 +57,38 @@ function getUniqueHostnamesFromOptions(nodes) {
  return Object.keys(uniqueHostsMap).filter((host) => !net_1.isIP(host));
  }
  exports.getUniqueHostnamesFromOptions = getUniqueHostnamesFromOptions;
+ function groupSrvRecords(records) {
+   const recordsByPriority = {};
+   for (const record of records) {
+     if (!recordsByPriority.hasOwnProperty(record.priority)) {
+       recordsByPriority[record.priority] = {
+         totalWeight: record.weight,
+         records: [record],
+       };
+     }
+     else {
+       recordsByPriority[record.priority].totalWeight += record.weight;
+       recordsByPriority[record.priority].records.push(record);
+     }
+   }
+   return recordsByPriority;
+ }
+ exports.groupSrvRecords = groupSrvRecords;
+ function weightSrvRecords(recordsGroup) {
+   if (recordsGroup.records.length === 1) {
+     recordsGroup.totalWeight = 0;
+     return recordsGroup.records.shift();
+   }
+   // + `recordsGroup.records.length` to support `weight` 0
+   const random = Math.floor(Math.random() * (recordsGroup.totalWeight + recordsGroup.records.length));
+   let total = 0;
+   for (const [i, record] of recordsGroup.records.entries()) {
+     total += 1 + record.weight;
+     if (total > random) {
+       recordsGroup.totalWeight -= record.weight;
+       recordsGroup.records.splice(i, 1);
+       return record;
+     }
+   }
+ }
+ exports.weightSrvRecords = weightSrvRecords;
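To make the data shape concrete, here is a standalone copy of the grouping step above with sample records (the record data is illustrative):

```javascript
// Bucket SRV records by `priority`, tracking a running totalWeight per
// bucket -- the same shape groupSrvRecords above produces. The weighting
// step then draws records from the lowest-priority bucket in proportion
// to their weight.
function groupSrvRecords(records) {
  const byPriority = {};
  for (const record of records) {
    if (!byPriority[record.priority]) {
      byPriority[record.priority] = { totalWeight: record.weight, records: [record] };
    } else {
      byPriority[record.priority].totalWeight += record.weight;
      byPriority[record.priority].records.push(record);
    }
  }
  return byPriority;
}

const grouped = groupSrvRecords([
  { name: "a.example.com", port: 6379, priority: 10, weight: 5 },
  { name: "b.example.com", port: 6379, priority: 10, weight: 10 },
  { name: "c.example.com", port: 6379, priority: 20, weight: 1 },
]);
console.log(grouped[10].totalWeight); // 15
console.log(grouped[10].records.length); // 2
```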
@@ -38,9 +38,9 @@ const debug = utils_1.Debug("redis");
  * it to reduce the latency.
  * @param {string} [options.connectionName=null] - Connection name.
  * @param {number} [options.db=0] - Database index to use.
- * @param {string} [options.username=null] - If set, client will send AUTH command with this user and password when connected.
  * @param {string} [options.password=null] - If set, client will send AUTH command
  * with the value of this option when connected.
+ * @param {string} [options.username=null] - Similar to `password`. Provide this for Redis ACL support.
  * @param {boolean} [options.dropBufferSupport=false] - Drop the buffer support for better performance.
  * This option is recommended to be enabled when
  * handling large array response and you don't need the buffer support.
@@ -254,10 +254,17 @@ function parseURL(url) {
    url = "//" + url;
    parsed = url_1.parse(url, true, true);
  }
+ const options = parsed.query || {};
+ const allowUsernameInURI = options.allowUsernameInURI && options.allowUsernameInURI !== "false";
+ delete options.allowUsernameInURI;
  const result = {};
  if (parsed.auth) {
-   const parsedAuth = parsed.auth.split(":");
-   result.password = parsedAuth[1];
+   const index = parsed.auth.indexOf(":");
+   if (allowUsernameInURI) {
+     result.username =
+       index === -1 ? parsed.auth : parsed.auth.slice(0, index);
+   }
+   result.password = index === -1 ? "" : parsed.auth.slice(index + 1);
  }
  if (parsed.pathname) {
    if (parsed.protocol === "redis:" || parsed.protocol === "rediss:") {
@@ -275,7 +282,7 @@ function parseURL(url) {
  if (parsed.port) {
    result.port = parsed.port;
  }
- lodash_1.defaults(result, parsed.query);
+ lodash_1.defaults(result, options);
  return result;
  }
  exports.parseURL = parseURL;
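The auth handling above (split on the first colon, plus the opt-in username) can be sketched as a standalone helper (hypothetical, not an ioredis export):

```javascript
// Split the auth part of a redis:// URL on the FIRST colon, so passwords
// containing colons survive intact (the 4.19.5 fix); expose the username
// only when the allowUsernameInURI query flag was passed (the 4.20.0 feature).
function parseAuth(auth, allowUsernameInURI) {
  const index = auth.indexOf(":");
  const result = {};
  if (allowUsernameInURI) {
    result.username = index === -1 ? auth : auth.slice(0, index);
  }
  result.password = index === -1 ? "" : auth.slice(index + 1);
  return result;
}

console.log(parseAuth("user:p:a:ss", true)); // { username: 'user', password: 'p:a:ss' }
console.log(parseAuth("user:p:a:ss", false)); // { password: 'p:a:ss' }
```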
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "ioredis",
-   "version": "4.19.4",
+   "version": "4.23.0",
    "description": "A robust, performance-focused and full-featured Redis client for Node.js.",
    "main": "built/index.js",
    "files": [
@@ -39,7 +39,7 @@
    "lodash.defaults": "^4.2.0",
    "lodash.flatten": "^4.4.0",
    "p-map": "^2.1.0",
-   "redis-commands": "1.6.0",
+   "redis-commands": "1.7.0",
    "redis-errors": "^1.2.0",
    "redis-parser": "^3.0.0",
    "standard-as-callback": "^2.0.1"