power-queues 2.0.14 → 2.0.16

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,15 +1,21 @@
1
- # power-queues
2
- ## High-Performance Redis Streams Queue for Node.js
1
+ # power-queues - High‑Performance Redis Streams Queue Engine for Node.js
3
2
 
4
- Ultra-fast, fault-tolerant, Lua-optimized distributed task queue built on Redis Streams.
5
- Supports **bulk XADD**, **idempotent jobs**, **retries**, **DLQ**, **stuck-task recovery**, **batching**, and **consumer groups**.
6
- Designed for large-scale microservices, telemetry pipelines, and high-load systems.
3
+ Production‑ready, lightweight and highly scalable
4
+ queue engine built directly on **Redis Streams + Lua scripts**.
5
+ It is designed for real‑world distributed systems that require **high
6
+ throughput**, **idempotent task execution**, **automatic recovery**, and
7
+ **predictable performance under heavy load**.
8
+
9
+ Unlike traditional Redis‑based queues that rely on lists or heavy
10
+ abstractions, power-queues focuses on **low‑level control**, **atomic
11
+ operations**, and **minimal overhead**, making it ideal for high‑load
12
+ backends, microservices, schedulers, telemetry pipelines, and data
13
+ processing clusters.
7
14
 
8
15
  <p align="center">
9
16
  <img src="https://img.shields.io/badge/redis-streams-red?logo=redis" />
10
17
  <img src="https://img.shields.io/badge/nodejs-queue-green?logo=node.js" />
11
18
  <img src="https://img.shields.io/badge/typescript-ready-blue?logo=typescript" />
12
- <img src="https://img.shields.io/badge/nestjs-support-ea2845?logo=nestjs" />
13
19
  <img src="https://img.shields.io/badge/license-MIT-lightgrey" />
14
20
  <img src="https://img.shields.io/badge/status-production-success" />
15
21
  </p>
@@ -23,134 +29,278 @@ Full documentation is available here:
23
29
 
24
30
  ---
25
31
 
26
- ## 🚀 Features
32
+ ## 🚀 Key Features
33
+
34
+ ### **1. Ultra‑Fast Bulk XADD (Lua‑Powered)**
35
+
36
+ - Adds thousands of messages per second using optimized Lua scripts.
37
+ - Minimizes round‑trips to Redis.
38
+ - Supports batching based on:
39
+   - number of tasks
40
+   - number of Redis arguments (safe upper bound)
41
+ - Outperforms typical list‑based queues and generic abstractions.
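
For a rough sense of the bulk path, the sketch below enqueues a large array in a single call, given a `queue` instance like the one in the Quick Start further down. The trimming options (`approx`, `minidWindowMs`, `maxlen`) are taken from the previous revision of this README and may have changed, so treat them as illustrative.

```ts
// Illustrative bulk enqueue; option names follow the earlier README and may differ.
const largeArray = events.map((e) => ({ payload: e })); // "events" is a placeholder array
await queue.addTasks("telemetry", largeArray, {
  approx: true,               // approximate stream trimming (MAXLEN ~)
  minidWindowMs: 30_000,      // keep roughly the last 30 seconds of entries
  maxlen: largeArray.length,  // cap the stream length near the batch size
});
```
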
42
+
43
+ ---
44
+
45
+ ### **2. Built‑in Idempotent Workers**
46
+
47
+ Every task can carry an `idemKey`, guaranteeing **exactly‑once
48
+ execution** even under:
+ - worker crashes
49
+ - network interruptions
50
+ - duplicate task submissions
51
+ - process restarts
52
+
53
+ Idempotency includes:
+ - Lock key
54
+ - Start key
55
+ - Done key
56
+ - TTL‑managed execution lock
57
+ - Automatic release on failure
58
+ - Heartbeat mechanism
59
+ - Waiting on TTL for contended executions
60
+
61
+ This makes the engine ideal for (a short sketch follows this list):
+ - payment processing
62
+ - external API calls
63
+ - high‑value jobs
64
+ - distributed pipelines
65
+
66
+ ---
67
+
68
+ ### **3. Stuck Task Recovery (Advanced Stream Scanning)**
69
+
70
+ If a worker crashes mid‑execution, power-queues automatically detects:
71
+ - abandoned tasks
72
+ - stalled locks
73
+ - unfinished start keys
74
+
75
+ The engine then recovers these tasks back to active processing safely
76
+ and efficiently.
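
power-queues performs this recovery internally, but the underlying Redis primitive is worth knowing. A conceptual sketch using a plain ioredis client and `XAUTOCLAIM` (not the library's internal code):

```ts
import Redis from "ioredis";

const redis = new Redis();

// Reclaim entries pending for more than 60s in the "workers" consumer group.
// Redis 6.2 returns [cursor, entries]; Redis 7 adds a third element with deleted IDs.
const result = await redis.call(
  "XAUTOCLAIM",
  "email",       // stream
  "workers",     // consumer group
  "recovery-1",  // consumer that takes ownership of abandoned entries
  60_000,        // min idle time in milliseconds
  "0-0",         // scan cursor
);
console.log(result);
```
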
77
+
78
+ ---
79
+
80
+ ### **4. High‑Throughput Workers**
81
+
82
+ - Batch execution support
83
+ - Parallel or sequential processing mode
84
+ - Configurable worker loop interval
85
+ - Individual and batch‑level error hooks
86
+ - Safe retry flow with per‑task attempt counters
87
+
88
+ ---
89
+
90
+ ### **5. Native DLQ (Dead‑Letter Queue)**
91
+
92
+ When retries reach the configured limit:
+ - the task is moved into
93
+ `${stream}:dlq`
94
+ - includes: payload, attempt count, job, timestamp, error text
95
+ - fully JSON‑safe
96
+
97
+ Perfect for monitoring or later re‑processing.
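
A small sketch for inspecting the dead-letter stream with a plain ioredis client, assuming the `email` stream from the Quick Start; the field names follow the description above:

```ts
import Redis from "ioredis";

const redis = new Redis();

// Entries land in `<stream>:dlq`; each entry's fields arrive as a flat [field, value, ...] array.
const entries = await redis.xrange("email:dlq", "-", "+", "COUNT", 50);
for (const [id, fields] of entries) {
  console.log("dead task", id, fields); // payload, attempt, job, error, createdAt, ...
}
```
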
98
+
99
+ ---
100
+
101
+ ### **6. Zero‑Overhead Serialization**
102
+
103
+ power-queues uses:
+ - safe JSON encoding
104
+ - optional "flat" key/value task format
105
+ - predictable and optimized payload transformation
106
+
107
+ This keeps Redis memory layout clean and eliminates overhead.
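
The two task shapes look roughly like this, inferred from the build output's handling of `payload` vs `flat`; treat the exact field names as assumptions and verify against the shipped typings:

```ts
// JSON-encoded payload: the object is serialized for you.
const asPayload = { payload: { userId: 42, type: "welcome" } };

// "Flat" format: raw field/value pairs written to the stream entry as-is.
const asFlat = { flat: ["userId", "42", "type", "welcome"] };

await queue.addTasks("email", [asPayload, asFlat]);
```
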
108
+
109
+ ---
110
+
111
+ ### **7. Complete Set of Lifecycle Hooks**
112
+
113
+ You can extend any part of the execution flow:
114
+
115
+ - `onSelected`
116
+ - `onExecute`
117
+ - `onSuccess`
118
+ - `onError`
119
+ - `onRetry`
120
+ - `onBatchError`
121
+ - `onReady`
122
+
123
+ This allows full integration with (see the sketch below):
+ - monitoring systems
124
+ - logging pipelines
125
+ - external APM tools
126
+ - domain logic
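
A minimal sketch of a worker overriding a few hooks. The hook names come from the list above; the argument lists shown here are assumptions, so check the shipped typings before relying on them:

```ts
class BillingWorker extends PowerQueues {
  async onExecute(id: string, payload: any) {
    await chargeCustomer(payload);                        // hypothetical domain helper
  }
  async onError(id: string, payload: any, err: unknown) {
    console.error("billing task failed", id, err);        // plug in your logger/APM here
  }
  async onRetry(id: string, payload: any, attempt: number) {
    console.warn("retrying billing task", id, "attempt", attempt);
  }
}
```
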
127
+
128
+ ---
129
+
130
+ ### **8. Atomic Script Loading + NOSCRIPT Recovery**
131
+
132
+ Scripts are:
+ - loaded once
133
+ - cached
134
+ - auto‑reloaded if Redis restarts
135
+ - executed safely via SHA‑based calls
27
136
 
28
- - **Bulk XADD** — send thousands of tasks in a single Redis call
29
- - 🔁 **Retries & attempt tracking**
30
- - 🧠 **Idempotent job execution** (Lua locks, TTL, start/done keys)
31
- - 🧹 **Stuck task recovery** (XAUTOCLAIM + Lua-based recovery)
32
- - 🌀 **Consumer groups + batching**
33
- - 📥 **Dead Letter Queue (DLQ)**
34
- - 🔐 **Stream trimming, approx/exact maxlen, minid window**
35
- - 🧱 **Fully async, high-throughput, production-ready**
137
+ Ensures resilience in failover scenarios.
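
The general pattern behind SHA-based execution with `NOSCRIPT` recovery looks like the sketch below (a generic illustration with ioredis, not the library's internal implementation):

```ts
import { createHash } from "node:crypto";
import Redis from "ioredis";

const redis = new Redis();
const script = "return redis.call('PTTL', KEYS[1])";
const sha = createHash("sha1").update(script).digest("hex");

async function runCached(key: string): Promise<number> {
  try {
    return Number(await redis.evalsha(sha, 1, key));
  } catch (err: any) {
    if (String(err?.message).includes("NOSCRIPT")) {
      await redis.script("LOAD", script); // re-register after a Redis restart or failover
      return Number(await redis.evalsha(sha, 1, key));
    }
    throw err;
  }
}
```
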
138
+
139
+ ---
140
+
141
+ ### **9. Job Progress Tracking**
142
+
143
+ Optional per‑job counters:
+ - `job:ok`
+ - `job:err`
+ - `job:ready`
144
+
145
+ Useful for UI dashboards and real‑time job progress visualization.
36
146
 
37
147
  ---
38
148
 
39
149
  ## 📦 Installation
40
150
 
41
- ```bash
151
+ ``` bash
42
152
  npm install power-queues
43
153
  ```
44
-
154
+ OR
155
+ ```bash
156
+ yarn add power-queues
157
+ ```
45
158
  ---
46
159
 
47
160
  ## 🧪 Quick Start
48
161
 
49
- ```ts
50
- import { QueueService } from './queue.service';
162
+ ``` ts
163
+ import { PowerQueues } from "power-queues";
+
+ const queue = new PowerQueues({
164
+ stream: "email",
165
+ group: "workers",
166
+ });
51
167
 
52
- const queue = new QueueService();
168
+ await queue.loadScripts(true);
53
169
 
54
- // Add tasks
55
- await queue.addTasks('my_queue', [
56
- { payload: { foo: 'bar' } },
57
- { payload: { a: 1, b: 2 } },
170
+ await queue.addTasks("email", [
171
+ { payload: { type: "welcome", userId: 42 } },
58
172
  ]);
173
+ ```
174
+
175
+ Worker:
59
176
 
60
- // Run worker
61
- queue.runQueue();
177
+ ``` ts
178
+ class EmailWorker extends PowerQueues {
179
+ async onExecute(id, payload) {
180
+ await sendEmail(payload);
181
+ }
182
+ }
62
183
  ```
63
184
 
64
185
  ---
65
186
 
66
- ## 🔧 Add Tasks (Bulk)
187
+ ## power-queues vs Existing Solutions
67
188
 
68
- ```ts
69
- await queue.addTasks('mass_polling', largeArray, {
70
- approx: true,
71
- minidWindowMs: 30000,
72
- maxlen: largeArray.length,
73
- });
74
- ```
189
+ |Feature |power-queues |BullMQ |Bee-Queue |Custom Streams|
190
+ |----------------------|----------------|----------- |------------|--------------|
191
+ |Bulk XADD (Lua) |✅ Yes |❌ No |❌ No |Rare |
192
+ |Idempotent workers |✅ Built-in |Partial |❌ No |❌ No |
193
+ |Stuck-task recovery |✅ Advanced |Basic |❌ No |Manual |
194
+ |Heartbeats |✅ Yes |Limited |❌ No |Manual |
195
+ |Retry logic |✅ Flexible |Good |Basic |Manual |
196
+ |DLQ |✅ Native |Basic |❌ No |Manual |
197
+ |Pure Streams |✅ Yes |Partial |❌ No |Yes |
198
+ |Lua optimization |✅ Strong |Minimal |❌ No |Manual |
199
+ |Throughput |🔥 Very high |High |Medium |Depends |
200
+ |Overhead |Low |Medium |Low |Very high |
75
201
 
76
- ---
202
+ ## 🛠 When to Choose power-queues
77
203
 
78
- ## 🏗️ Worker Hooks
204
+ Use this engine if you need:
79
205
 
80
- You can override:
206
+ ### **✔ High performance under load**
81
207
 
82
- - `onExecute`
83
- - `onSuccess`
84
- - `onError`
85
- - `onRetry`
86
- - `onBatchError`
87
- - `onSelected`
88
- - `onReady`
208
+ Millions of tasks per hour? No problem.
89
209
 
90
- Example:
210
+ ### **✔ Strong idempotent guarantees**
91
211
 
92
- ```ts
93
- async onExecute(id, payload) {
94
- console.log('executing', id, payload);
95
- }
96
- ```
212
+ Exactly‑once processing for critical operations.
97
213
 
98
- ---
214
+ ### **✔ Low‑level control without heavy abstractions**
99
215
 
100
- ## 🧱 Architecture Overview
216
+ No magic, no hidden states - everything is explicit.
101
217
 
102
- ```
103
- Producer → Redis Stream → Consumer Group → Worker → DLQ (optional)
104
- ```
218
+ ### **✔ Predictable behavior in distributed environments**
105
219
 
106
- - Redis Streams store tasks
107
- - Lua scripts handle trimming, idempotency, stuck recovery
108
- - Workers fetch tasks via XREADGROUP or Lua select
109
- - Tasks executed, ACKed, or sent to DLQ
220
+ Even with frequent worker restarts.
221
+
222
+ ### **✔ Production‑grade reliability**
223
+
224
+ Backpressure, recovery, retries, dead-lettering - all included.
110
225
 
111
226
  ---
112
227
 
113
- ## 🗄️ Dead Letter Queue (DLQ)
228
+ ## 🏗️ Project Structure & Architecture
114
229
 
115
- Failed tasks after `workerMaxRetries` automatically go to:
230
+ - Redis Streams for messaging
231
+ - Lua scripts for atomic operations
232
+ - JavaScript/TypeScript API
233
+ - Full worker lifecycle management
234
+ - Configurable backpressure & contention handling
235
+ - Optional job‑level progress tracking
116
236
 
117
- ```
118
- <stream>:dlq
119
- ```
237
+ ---
238
+
239
+ ## 🧩 Extensibility
240
+
241
+ power-queues is ideal for building:
242
+
243
+ - task schedulers
244
+ - distributed cron engines
245
+ - ETL pipelines
246
+ - telemetry processors
247
+ - notification workers
248
+ - device monitoring systems
249
+ - AI job pipelines
250
+ - high-frequency background jobs
120
251
 
121
252
  ---
122
253
 
123
- ## 🧩 Idempotency
254
+ ## 🧱 Reliability First
124
255
 
125
- Guaranteed by 3 keys:
256
+ Every part of the engine is designed to prevent:
126
257
 
127
- - `doneKey`
128
- - `lockKey`
129
- - `startKey`
258
+ - double execution
259
+ - stuck tasks
260
+ - orphan locks
261
+ - lost messages
262
+ - zombie workers
263
+ - script desynchronization
130
264
 
131
- This prevents double-execution during retries, crashes, or concurrency.
265
+ The heartbeat + TTL strategy guarantees that no task is "lost" even in
266
+ chaotic cluster environments.
132
267
 
133
268
  ---
134
269
 
135
- ## 🚀 Performance
270
+ ## 🏷️ SEO‑Optimized Keywords (Non‑Spam)
271
+
272
+ power-queues is relevant for:
136
273
 
137
- - 10,000+ XADDs/sec
138
- - Bulk mode: 50,000 operations in one request
139
- - Extremely low CPU usage due to Lua trimming
274
+ - Redis Streams queue engine
275
+ - Node.js stream-based queue
276
+ - idempotent task processing
277
+ - high‑performance job queue for Node.js
278
+ - Redis Lua queue
279
+ - distributed worker engine
280
+ - scalable background jobs
281
+ - enterprise-grade Redis queue
282
+ - microservices task runner
283
+ - fault-tolerant queue for Node.js
140
284
 
141
285
  ---
142
286
 
143
- ## 🏷️ SEO Keywords
287
+ ## 📝 License
144
288
 
145
- ```
146
- redis streams, redis queue, task queue, job queue, nodejs queue, nestjs queue,
147
- bulk xadd, distributed queue system, background jobs, retries, dlq,
148
- idempotency, redis lua scripts, microservices, high-performance queue,
149
- high-throughput, batching, concurrency control
150
- ```
289
+ MIT - free for commercial and private use.
151
290
 
152
291
  ---
153
292
 
154
- ## 📜 License
293
+ ## Why This Project Exists
294
+
295
+ Most Node.js queue libraries are:
+ - too slow
296
+ - too abstract
297
+ - not idempotent
298
+ - not safe for financial or mission‑critical workloads
299
+
300
+ power-queues was built to solve real production problems where:
301
+ - *duplicate tasks cost money*,
302
+ - *workers are unstable*,
303
+ - *tasks must survive restarts*,
304
+ - *performance matters at scale*.
155
305
 
156
- MIT
306
+ If these things matter to you - this engine will feel like home.
package/dist/index.cjs CHANGED
@@ -163,13 +163,21 @@ var IdempotencyDone = `
163
163
  local doneKey = KEYS[1]
164
164
  local lockKey = KEYS[2]
165
165
  local startKey = KEYS[3]
166
+
166
167
  redis.call('SET', doneKey, 1)
167
- local ttlSec = tonumber(ARGV[1]) or 0
168
- if ttlSec > 0 then redis.call('EXPIRE', doneKey, ttlSec) end
168
+
169
+ local ttlMs = tonumber(ARGV[1]) or 0
170
+ if ttlMs > 0 then
171
+ redis.call('PEXPIRE', doneKey, ttlMs)
172
+ end
173
+
169
174
  if redis.call('GET', lockKey) == ARGV[2] then
170
175
  redis.call('DEL', lockKey)
171
- if startKey then redis.call('DEL', startKey) end
176
+ if startKey then
177
+ redis.call('DEL', startKey)
178
+ end
172
179
  end
180
+
173
181
  return 1
174
182
  `;
175
183
  var IdempotencyFree = `
@@ -260,12 +268,11 @@ var PowerQueues = class extends import_power_redis.PowerRedis {
260
268
  constructor() {
261
269
  super(...arguments);
262
270
  this.abort = new AbortController();
263
- this.strictCheckingConnection = ["true", "on", "yes", "y", "1"].includes(String(process.env.REDIS_STRICT_CHECK_CONNECTION ?? "").trim().toLowerCase());
264
271
  this.scripts = {};
265
272
  this.addingBatchTasksCount = 800;
266
273
  this.addingBatchKeysLimit = 1e4;
267
274
  this.workerExecuteLockTimeoutMs = 18e4;
268
- this.workerCacheTaskTimeoutMs = 60;
275
+ this.workerCacheTaskTimeoutMs = 6e4;
269
276
  this.approveBatchTasksCount = 2e3;
270
277
  this.removeOnExecuted = false;
271
278
  this.executeBatchAtOnce = false;
@@ -311,7 +318,7 @@ var PowerQueues = class extends import_power_redis.PowerRedis {
311
318
  }
312
319
  const tasksP = await this.onSelected(tasks);
313
320
  const ids = await this.execute((0, import_full_utils.isArrFilled)(tasksP) ? tasksP : tasks);
314
- if ((0, import_full_utils.isArrFilled)(tasks)) {
321
+ if ((0, import_full_utils.isArrFilled)(ids)) {
315
322
  await this.approve(ids);
316
323
  }
317
324
  } catch (err) {
@@ -454,7 +461,7 @@ var PowerQueues = class extends import_power_redis.PowerRedis {
454
461
  }
455
462
  const pairs = flat.length / 2;
456
463
  if ((0, import_full_utils.isNumNZ)(pairs)) {
457
- throw new Error('Task must have "payload" or "flat".');
464
+ throw new Error('Task "flat" must contain at least one field/value pair.');
458
465
  }
459
466
  argv.push(String(id));
460
467
  argv.push(String(pairs));
@@ -507,7 +514,13 @@ var PowerQueues = class extends import_power_redis.PowerRedis {
507
514
  return batches;
508
515
  }
509
516
  keysLength(task) {
510
- return 2 + ("flat" in task && Array.isArray(task.flat) && task.flat.length ? task.flat.length : Object.keys(task).length * 2);
517
+ if ("flat" in task && Array.isArray(task.flat) && task.flat.length) {
518
+ return 2 + task.flat.length;
519
+ }
520
+ if ("payload" in task && (0, import_full_utils.isObj)(task.payload)) {
521
+ return 2 + Object.keys(task.payload).length * 2;
522
+ }
523
+ return 2 + Object.keys(task).length * 2;
511
524
  }
512
525
  attemptsKey(id) {
513
526
  const safeStream = this.stream.replace(/[^\w:\-]/g, "_");
@@ -674,7 +687,8 @@ var PowerQueues = class extends import_power_redis.PowerRedis {
674
687
  error: String(err?.message || err),
675
688
  createdAt,
676
689
  job,
677
- id
690
+ id,
691
+ attempt: 0
678
692
  }
679
693
  }]);
680
694
  await this.clearAttempts(id);
@@ -777,42 +791,63 @@ var PowerQueues = class extends import_power_redis.PowerRedis {
777
791
  if (signal?.aborted) {
778
792
  return resolve();
779
793
  }
794
+ let delay;
795
+ if (ttl > 0) {
796
+ const base = Math.max(25, Math.min(ttl, 5e3));
797
+ const jitter = Math.floor(Math.min(base, 200) * Math.random());
798
+ delay = base + jitter;
799
+ } else {
800
+ delay = 5 + Math.floor(Math.random() * 15);
801
+ }
780
802
  const t = setTimeout(() => {
781
803
  if (signal) {
782
804
  signal.removeEventListener("abort", onAbort);
783
805
  }
784
806
  resolve();
785
- }, ttl > 0 ? 25 + Math.floor(Math.random() * 50) : 5 + Math.floor(Math.random() * 15));
807
+ }, delay);
786
808
  t.unref?.();
787
809
  function onAbort() {
788
810
  clearTimeout(t);
789
811
  resolve();
790
812
  }
791
- signal?.addEventListener("abort", onAbort, { once: true });
813
+ signal?.addEventListener?.("abort", onAbort, { once: true });
792
814
  });
793
815
  }
816
+ async sendHeartbeat(keys) {
817
+ try {
818
+ const r1 = await this.redis.pexpire(keys.lockKey, this.workerExecuteLockTimeoutMs);
819
+ const r2 = await this.redis.pexpire(keys.startKey, this.workerExecuteLockTimeoutMs);
820
+ const ok1 = Number(r1 || 0) === 1;
821
+ const ok2 = Number(r2 || 0) === 1;
822
+ return ok1 || ok2;
823
+ } catch {
824
+ return false;
825
+ }
826
+ }
794
827
  heartbeat(keys) {
795
828
  if (this.workerExecuteLockTimeoutMs <= 0) {
796
829
  return;
797
830
  }
798
- let timer, alive = true, hbFails = 0;
799
831
  const workerHeartbeatTimeoutMs = Math.max(1e3, Math.floor(Math.max(5e3, this.workerExecuteLockTimeoutMs | 0) / 4));
832
+ let timer;
833
+ let alive = true;
834
+ let hbFails = 0;
800
835
  const stop = () => {
801
836
  alive = false;
802
837
  if (timer) {
803
838
  clearTimeout(timer);
804
839
  }
805
840
  };
806
- const onAbort = () => stop();
807
841
  const signal = this.signal();
842
+ const onAbort = () => stop();
808
843
  signal?.addEventListener?.("abort", onAbort, { once: true });
809
844
  const tick = async () => {
810
845
  if (!alive) {
811
846
  return;
812
847
  }
813
848
  try {
814
- const r = await this.heartbeat(keys);
815
- hbFails = r ? 0 : hbFails + 1;
849
+ const ok = await this.sendHeartbeat(keys);
850
+ hbFails = ok ? 0 : hbFails + 1;
816
851
  if (hbFails >= 3) {
817
852
  throw new Error("Heartbeat lost.");
818
853
  }
@@ -823,9 +858,11 @@ var PowerQueues = class extends import_power_redis.PowerRedis {
823
858
  return;
824
859
  }
825
860
  }
826
- timer = setTimeout(tick, workerHeartbeatTimeoutMs).unref?.();
861
+ timer = setTimeout(tick, workerHeartbeatTimeoutMs);
862
+ timer.unref?.();
827
863
  };
828
- timer = setTimeout(tick, workerHeartbeatTimeoutMs).unref?.();
864
+ timer = setTimeout(tick, workerHeartbeatTimeoutMs);
865
+ timer.unref?.();
829
866
  return () => {
830
867
  signal?.removeEventListener?.("abort", onAbort);
831
868
  stop();
package/dist/index.d.cts CHANGED
@@ -34,7 +34,6 @@ type Task = {
34
34
  declare class PowerQueues extends PowerRedis {
35
35
  abort: AbortController;
36
36
  redis: IORedisLike;
37
- readonly strictCheckingConnection: boolean;
38
37
  readonly scripts: Record<string, SavedScript>;
39
38
  readonly addingBatchTasksCount: number;
40
39
  readonly addingBatchKeysLimit: number;
@@ -94,6 +93,7 @@ declare class PowerQueues extends PowerRedis {
94
93
  private selectStuck;
95
94
  private selectFresh;
96
95
  private waitAbortable;
96
+ private sendHeartbeat;
97
97
  private heartbeat;
98
98
  private normalizeEntries;
99
99
  private values;
package/dist/index.d.ts CHANGED
@@ -34,7 +34,6 @@ type Task = {
34
34
  declare class PowerQueues extends PowerRedis {
35
35
  abort: AbortController;
36
36
  redis: IORedisLike;
37
- readonly strictCheckingConnection: boolean;
38
37
  readonly scripts: Record<string, SavedScript>;
39
38
  readonly addingBatchTasksCount: number;
40
39
  readonly addingBatchKeysLimit: number;
@@ -94,6 +93,7 @@ declare class PowerQueues extends PowerRedis {
94
93
  private selectStuck;
95
94
  private selectFresh;
96
95
  private waitAbortable;
96
+ private sendHeartbeat;
97
97
  private heartbeat;
98
98
  private normalizeEntries;
99
99
  private values;
package/dist/index.js CHANGED
@@ -146,13 +146,21 @@ var IdempotencyDone = `
146
146
  local doneKey = KEYS[1]
147
147
  local lockKey = KEYS[2]
148
148
  local startKey = KEYS[3]
149
+
149
150
  redis.call('SET', doneKey, 1)
150
- local ttlSec = tonumber(ARGV[1]) or 0
151
- if ttlSec > 0 then redis.call('EXPIRE', doneKey, ttlSec) end
151
+
152
+ local ttlMs = tonumber(ARGV[1]) or 0
153
+ if ttlMs > 0 then
154
+ redis.call('PEXPIRE', doneKey, ttlMs)
155
+ end
156
+
152
157
  if redis.call('GET', lockKey) == ARGV[2] then
153
158
  redis.call('DEL', lockKey)
154
- if startKey then redis.call('DEL', startKey) end
159
+ if startKey then
160
+ redis.call('DEL', startKey)
161
+ end
155
162
  end
163
+
156
164
  return 1
157
165
  `;
158
166
  var IdempotencyFree = `
@@ -243,12 +251,11 @@ var PowerQueues = class extends PowerRedis {
243
251
  constructor() {
244
252
  super(...arguments);
245
253
  this.abort = new AbortController();
246
- this.strictCheckingConnection = ["true", "on", "yes", "y", "1"].includes(String(process.env.REDIS_STRICT_CHECK_CONNECTION ?? "").trim().toLowerCase());
247
254
  this.scripts = {};
248
255
  this.addingBatchTasksCount = 800;
249
256
  this.addingBatchKeysLimit = 1e4;
250
257
  this.workerExecuteLockTimeoutMs = 18e4;
251
- this.workerCacheTaskTimeoutMs = 60;
258
+ this.workerCacheTaskTimeoutMs = 6e4;
252
259
  this.approveBatchTasksCount = 2e3;
253
260
  this.removeOnExecuted = false;
254
261
  this.executeBatchAtOnce = false;
@@ -294,7 +301,7 @@ var PowerQueues = class extends PowerRedis {
294
301
  }
295
302
  const tasksP = await this.onSelected(tasks);
296
303
  const ids = await this.execute(isArrFilled(tasksP) ? tasksP : tasks);
297
- if (isArrFilled(tasks)) {
304
+ if (isArrFilled(ids)) {
298
305
  await this.approve(ids);
299
306
  }
300
307
  } catch (err) {
@@ -437,7 +444,7 @@ var PowerQueues = class extends PowerRedis {
437
444
  }
438
445
  const pairs = flat.length / 2;
439
446
  if (isNumNZ(pairs)) {
440
- throw new Error('Task must have "payload" or "flat".');
447
+ throw new Error('Task "flat" must contain at least one field/value pair.');
441
448
  }
442
449
  argv.push(String(id));
443
450
  argv.push(String(pairs));
@@ -490,7 +497,13 @@ var PowerQueues = class extends PowerRedis {
490
497
  return batches;
491
498
  }
492
499
  keysLength(task) {
493
- return 2 + ("flat" in task && Array.isArray(task.flat) && task.flat.length ? task.flat.length : Object.keys(task).length * 2);
500
+ if ("flat" in task && Array.isArray(task.flat) && task.flat.length) {
501
+ return 2 + task.flat.length;
502
+ }
503
+ if ("payload" in task && isObj(task.payload)) {
504
+ return 2 + Object.keys(task.payload).length * 2;
505
+ }
506
+ return 2 + Object.keys(task).length * 2;
494
507
  }
495
508
  attemptsKey(id) {
496
509
  const safeStream = this.stream.replace(/[^\w:\-]/g, "_");
@@ -657,7 +670,8 @@ var PowerQueues = class extends PowerRedis {
657
670
  error: String(err?.message || err),
658
671
  createdAt,
659
672
  job,
660
- id
673
+ id,
674
+ attempt: 0
661
675
  }
662
676
  }]);
663
677
  await this.clearAttempts(id);
@@ -760,42 +774,63 @@ var PowerQueues = class extends PowerRedis {
760
774
  if (signal?.aborted) {
761
775
  return resolve();
762
776
  }
777
+ let delay;
778
+ if (ttl > 0) {
779
+ const base = Math.max(25, Math.min(ttl, 5e3));
780
+ const jitter = Math.floor(Math.min(base, 200) * Math.random());
781
+ delay = base + jitter;
782
+ } else {
783
+ delay = 5 + Math.floor(Math.random() * 15);
784
+ }
763
785
  const t = setTimeout(() => {
764
786
  if (signal) {
765
787
  signal.removeEventListener("abort", onAbort);
766
788
  }
767
789
  resolve();
768
- }, ttl > 0 ? 25 + Math.floor(Math.random() * 50) : 5 + Math.floor(Math.random() * 15));
790
+ }, delay);
769
791
  t.unref?.();
770
792
  function onAbort() {
771
793
  clearTimeout(t);
772
794
  resolve();
773
795
  }
774
- signal?.addEventListener("abort", onAbort, { once: true });
796
+ signal?.addEventListener?.("abort", onAbort, { once: true });
775
797
  });
776
798
  }
799
+ async sendHeartbeat(keys) {
800
+ try {
801
+ const r1 = await this.redis.pexpire(keys.lockKey, this.workerExecuteLockTimeoutMs);
802
+ const r2 = await this.redis.pexpire(keys.startKey, this.workerExecuteLockTimeoutMs);
803
+ const ok1 = Number(r1 || 0) === 1;
804
+ const ok2 = Number(r2 || 0) === 1;
805
+ return ok1 || ok2;
806
+ } catch {
807
+ return false;
808
+ }
809
+ }
777
810
  heartbeat(keys) {
778
811
  if (this.workerExecuteLockTimeoutMs <= 0) {
779
812
  return;
780
813
  }
781
- let timer, alive = true, hbFails = 0;
782
814
  const workerHeartbeatTimeoutMs = Math.max(1e3, Math.floor(Math.max(5e3, this.workerExecuteLockTimeoutMs | 0) / 4));
815
+ let timer;
816
+ let alive = true;
817
+ let hbFails = 0;
783
818
  const stop = () => {
784
819
  alive = false;
785
820
  if (timer) {
786
821
  clearTimeout(timer);
787
822
  }
788
823
  };
789
- const onAbort = () => stop();
790
824
  const signal = this.signal();
825
+ const onAbort = () => stop();
791
826
  signal?.addEventListener?.("abort", onAbort, { once: true });
792
827
  const tick = async () => {
793
828
  if (!alive) {
794
829
  return;
795
830
  }
796
831
  try {
797
- const r = await this.heartbeat(keys);
798
- hbFails = r ? 0 : hbFails + 1;
832
+ const ok = await this.sendHeartbeat(keys);
833
+ hbFails = ok ? 0 : hbFails + 1;
799
834
  if (hbFails >= 3) {
800
835
  throw new Error("Heartbeat lost.");
801
836
  }
@@ -806,9 +841,11 @@ var PowerQueues = class extends PowerRedis {
806
841
  return;
807
842
  }
808
843
  }
809
- timer = setTimeout(tick, workerHeartbeatTimeoutMs).unref?.();
844
+ timer = setTimeout(tick, workerHeartbeatTimeoutMs);
845
+ timer.unref?.();
810
846
  };
811
- timer = setTimeout(tick, workerHeartbeatTimeoutMs).unref?.();
847
+ timer = setTimeout(tick, workerHeartbeatTimeoutMs);
848
+ timer.unref?.();
812
849
  return () => {
813
850
  signal?.removeEventListener?.("abort", onAbort);
814
851
  stop();
package/package.json CHANGED
@@ -1,7 +1,7 @@
1
1
  {
2
2
  "name": "power-queues",
3
- "version": "2.0.14",
4
- "description": "Base classes for implementing custom queues in redis under high load conditions based on nestjs.",
3
+ "version": "2.0.16",
4
+ "description": "High-performance Redis Streams queue for Node.js with Lua-powered bulk XADD, idempotent workers, heartbeat locks, stuck-task recovery, retries, DLQ, and distributed processing.",
5
5
  "author": "ihor-bielchenko",
6
6
  "license": "MIT",
7
7
  "repository": {
@@ -81,8 +81,8 @@
81
81
  "power-redis"
82
82
  ],
83
83
  "dependencies": {
84
- "full-utils": "^2.0.3",
85
- "power-redis": "^2.0.6",
84
+ "full-utils": "^2.0.5",
85
+ "power-redis": "^2.0.8",
86
86
  "uuid": "^13.0.0"
87
87
  }
88
88
  }