power-queues 2.0.13 → 2.0.15

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,15 +1,21 @@
1
- # power-queues
2
- ## High-Performance Redis Streams Queue for Node.js
1
+ # power-queues - High‑Performance Redis Streams Queue Engine for Node.js
3
2
 
4
- Ultra-fast, fault-tolerant, Lua-optimized distributed task queue built on Redis Streams.
5
- Supports **bulk XADD**, **idempotent jobs**, **retries**, **DLQ**, **stuck-task recovery**, **batching**, and **consumer groups**.
6
- Designed for large-scale microservices, telemetry pipelines, and high-load systems.
3
+ A production‑ready, lightweight, and highly scalable
4
+ queue engine built directly on **Redis Streams + Lua scripts**.
5
+ It is designed for real‑world distributed systems that require **high
6
+ throughput**, **idempotent task execution**, **automatic recovery**, and
7
+ **predictable performance under heavy load**.
8
+
9
+ Unlike traditional Redis‑based queues that rely on lists or heavy
10
+ abstractions, power-queues focuses on **low‑level control**, **atomic
11
+ operations**, and **minimal overhead**, making it ideal for high‑load
12
+ backends, microservices, schedulers, telemetry pipelines, and data
13
+ processing clusters.
7
14
 
8
15
  <p align="center">
9
16
  <img src="https://img.shields.io/badge/redis-streams-red?logo=redis" />
10
17
  <img src="https://img.shields.io/badge/nodejs-queue-green?logo=node.js" />
11
18
  <img src="https://img.shields.io/badge/typescript-ready-blue?logo=typescript" />
12
- <img src="https://img.shields.io/badge/nestjs-support-ea2845?logo=nestjs" />
13
19
  <img src="https://img.shields.io/badge/license-MIT-lightgrey" />
14
20
  <img src="https://img.shields.io/badge/status-production-success" />
15
21
  </p>
@@ -23,134 +29,278 @@ Full documentation is available here:
23
29
 
24
30
  ---
25
31
 
26
- ## 🚀 Features
32
+ ## 🚀 Key Features
33
+
34
+ ### **1. Ultra‑Fast Bulk XADD (Lua‑Powered)**
35
+
36
+ - Adds thousands of messages per second using optimized Lua scripts.
37
+ - Minimizes round‑trips to Redis.
38
+ - Supports batching based on:
39
+ - number of tasks
40
+ - number of Redis arguments (safe upper bound)
41
+ - Outperforms typical list‑based queues and generic abstractions.
42
+
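+ A minimal bulk‑enqueue sketch, using the `queue` instance from the Quick Start
+ below and the trimming options from the 2.0.13 README (`approx`, `maxlen`,
+ `minidWindowMs`); treat those option names as carried over, not re‑verified:
+
+ ```ts
+ // Hypothetical telemetry batch; each element becomes one XADD entry.
+ const readings = Array.from({ length: 5000 }, (_, i) => ({
+   payload: { deviceId: `dev-${i % 50}`, value: Math.random() },
+ }));
+
+ await queue.addTasks("telemetry", readings, {
+   approx: true,            // approximate trimming (MAXLEN ~)
+   maxlen: readings.length, // keep roughly the latest batch
+   minidWindowMs: 30_000,   // drop entries older than ~30 seconds
+ });
+ ```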
43
+ ---
44
+
45
+ ### **2. Built‑in Idempotent Workers**
46
+
47
+ Every task can carry an `idemKey`, guaranteeing **exactly‑once
48
+ execution** even under:
+
+ - worker crashes
49
+ - network interruptions
50
+ - duplicate task submissions
51
+ - process restarts
52
+
53
+ Idempotency includes:
+
+ - Lock key
54
+ - Start key
55
+ - Done key
56
+ - TTL‑managed execution lock
57
+ - Automatic release on failure
58
+ - Heartbeat mechanism
59
+ - Waiting on TTL for contended executions
60
+
61
+ This makes the engine ideal for:
+
+ - payment processing
62
+ - external API calls
63
+ - high‑value jobs
64
+ - distributed pipelines
65
+
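+ A minimal sketch, assuming `idemKey` is accepted as a top‑level task field next
+ to `payload` (its exact placement is not spelled out above):
+
+ ```ts
+ // Two submissions with the same idemKey should execute at most once.
+ await queue.addTasks("payments", [
+   { payload: { orderId: "ord_1", amountCents: 4999 }, idemKey: "charge:ord_1" },
+   { payload: { orderId: "ord_1", amountCents: 4999 }, idemKey: "charge:ord_1" },
+ ]);
+ ```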
66
+ ---
67
+
68
+ ### **3. Stuck Task Recovery (Advanced Stream Scanning)**
69
+
70
+ If a worker crashes mid‑execution, power-queues automatically detects:
+
71
+ - abandoned tasks
72
+ - stalled locks
73
+ - unfinished start keys
74
+
75
+ The engine then recovers these tasks back to active processing safely
76
+ and efficiently.
77
+
78
+ ---
79
+
80
+ ### **4. High‑Throughput Workers**
81
+
82
+ - Batch execution support
83
+ - Parallel or sequential processing mode
84
+ - Configurable worker loop interval
85
+ - Individual and batch‑level error hooks
86
+ - Safe retry flow with per‑task attempt counters
87
+
88
+ ---
89
+
90
+ ### **5. Native DLQ (Dead‑Letter Queue)**
91
+
92
+ When retries reach the configured limit:
+
+ - the task is moved into
93
+ `${stream}:dlq`
94
+ - includes: payload, attempt count, job, timestamp, error text
95
+ - fully JSON‑safe
96
+
97
+ Perfect for monitoring or later re‑processing.
98
+
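+ For example, a hedged way to peek at dead‑lettered entries with a raw Redis
+ call (assumes the `queue.redis` client is ioredis‑compatible and the stream is
+ named `email`):
+
+ ```ts
+ // Each entry is [streamId, [field, value, ...]] carrying the payload,
+ // attempt count, job, timestamp, and error text listed above.
+ const entries = await queue.redis.xrange("email:dlq", "-", "+", "COUNT", 10);
+ for (const [id, fields] of entries) {
+   console.log(id, fields);
+ }
+ ```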
99
+ ---
100
+
101
+ ### **6. Zero‑Overhead Serialization**
102
+
103
+ power-queues uses:
+
+ - safe JSON encoding
104
+ - optional "flat" key/value task format
105
+ - predictable and optimized payload transformation
106
+
107
+ This keeps Redis memory layout clean and eliminates overhead.
108
+
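+ A short sketch of both task shapes (the `flat` array alternates field and value
+ strings, matching the pair validation in the 2.0.15 build):
+
+ ```ts
+ await queue.addTasks("telemetry", [
+   { payload: { deviceId: "dev-8", temp: 19.9 } },  // JSON payload
+   { flat: ["deviceId", "dev-7", "temp", "21.4"] }, // flat field/value pairs
+ ]);
+ ```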
109
+ ---
110
+
111
+ ### **7. Complete Set of Lifecycle Hooks**
112
+
113
+ You can extend any part of the execution flow:
114
+
115
+ - `onSelected`
116
+ - `onExecute`
117
+ - `onSuccess`
118
+ - `onError`
119
+ - `onRetry`
120
+ - `onBatchError`
121
+ - `onReady`
122
+
123
+ This allows full integration with:
+
+ - monitoring systems
124
+ - logging pipelines
125
+ - external APM tools
126
+ - domain logic
127
+
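+ A minimal sketch of a hook‑based worker; `onExecute(id, payload)` matches the
+ Quick Start below, while the other hook signatures are assumptions made for
+ illustration only:
+
+ ```ts
+ class MonitoredWorker extends PowerQueues {
+   async onExecute(id: string, payload: Record<string, unknown>) {
+     console.log("executing", id, payload); // domain logic goes here
+   }
+   async onSuccess(id: string) {
+     console.log("done", id);                // e.g. push a metric
+   }
+   async onError(id: string, err: unknown) {
+     console.error("failed", id, err);       // e.g. forward to your APM tool
+   }
+ }
+ ```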
128
+ ---
129
+
130
+ ### **8. Atomic Script Loading + NOSCRIPT Recovery**
131
+
132
+ Scripts are:
+
+ - loaded once
133
+ - cached
134
+ - auto‑reloaded if Redis restarts
135
+ - executed safely via SHA‑based calls
27
136
 
28
- - **Bulk XADD** — send thousands of tasks in a single Redis call
29
- - 🔁 **Retries & attempt tracking**
30
- - 🧠 **Idempotent job execution** (Lua locks, TTL, start/done keys)
31
- - 🧹 **Stuck task recovery** (XAUTOCLAIM + Lua-based recovery)
32
- - 🌀 **Consumer groups + batching**
33
- - 📥 **Dead Letter Queue (DLQ)**
34
- - 🔐 **Stream trimming, approx/exact maxlen, minid window**
35
- - 🧱 **Fully async, high-throughput, production-ready**
137
+ Ensures resilience in failover scenarios.
138
+
139
+ ---
140
+
141
+ ### **9. Job Progress Tracking**
142
+
143
+ Optional per‑job counters:
+
+ - `job:ok`
+ - `job:err`
+ - `job:ready`
144
+
145
+ Useful for UI dashboards and real‑time job progress visualization.
36
146
 
37
147
  ---
38
148
 
39
149
  ## 📦 Installation
40
150
 
41
- ```bash
151
+ ```bash
42
152
  npm install power-queues
43
153
  ```
44
-
154
+ Or with Yarn:
155
+ ```bash
156
+ yarn add power-queues
157
+ ```
45
158
  ---
46
159
 
47
160
  ## 🧪 Quick Start
48
161
 
49
- ```ts
50
- import { QueueService } from './queue.service';
162
+ ```ts
+ import { PowerQueues } from "power-queues";
+
163
+ const queue = new PowerQueues({
164
+ stream: "email",
165
+ group: "workers",
166
+ });
51
167
 
52
- const queue = new QueueService();
168
+ await queue.loadScripts(true);
53
169
 
54
- // Add tasks
55
- await queue.addTasks('my_queue', [
56
- { payload: { foo: 'bar' } },
57
- { payload: { a: 1, b: 2 } },
170
+ await queue.addTasks("email", [
171
+ { payload: { type: "welcome", userId: 42 } },
58
172
  ]);
173
+ ```
174
+
175
+ Worker:
59
176
 
60
- // Run worker
61
- queue.runQueue();
177
+ ```ts
178
+ class EmailWorker extends PowerQueues {
179
+ async onExecute(id, payload) {
180
+ await sendEmail(payload); // hypothetical delivery helper
181
+ }
182
+ }
62
183
  ```
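+
+ The 2.0.13 README started the worker loop with `runQueue()`; assuming that
+ method is unchanged in 2.x, the worker above can be wired up like this:
+
+ ```ts
+ const worker = new EmailWorker({ stream: "email", group: "workers" });
+ await worker.loadScripts(true);
+ worker.runQueue(); // method name taken from the previous README
+ ```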
63
184
 
64
185
  ---
65
186
 
66
- ## 🔧 Add Tasks (Bulk)
187
+ ## power-queues vs Existing Solutions
67
188
 
68
- ```ts
69
- await queue.addTasks('mass_polling', largeArray, {
70
- approx: true,
71
- minidWindowMs: 30000,
72
- maxlen: largeArray.length,
73
- });
74
- ```
189
+ |Feature |power-queues |BullMQ |Bee-Queue |Custom Streams|
190
+ |----------------------|----------------|----------- |------------|--------------|
191
+ |Bulk XADD (Lua) |✅ Yes |❌ No |❌ No |Rare |
192
+ |Idempotent workers |✅ Built-in |Partial |❌ No |❌ No |
193
+ |Stuck-task recovery |✅ Advanced |Basic |❌ No |Manual |
194
+ |Heartbeats |✅ Yes |Limited |❌ No |Manual |
195
+ |Retry logic |✅ Flexible |Good |Basic |Manual |
196
+ |DLQ |✅ Native |Basic |❌ No |Manual |
197
+ |Pure Streams |✅ Yes |Partial |❌ No |Yes |
198
+ |Lua optimization |✅ Strong |Minimal |❌ No |Manual |
199
+ |Throughput |🔥 Very high |High |Medium |Depends |
200
+ |Overhead |Low |Medium |Low |Very high |
75
201
 
76
- ---
202
+ ## 🛠 When to Choose power-queues
77
203
 
78
- ## 🏗️ Worker Hooks
204
+ Use this engine if you need:
79
205
 
80
- You can override:
206
+ ### **✔ High performance under load**
81
207
 
82
- - `onExecute`
83
- - `onSuccess`
84
- - `onError`
85
- - `onRetry`
86
- - `onBatchError`
87
- - `onSelected`
88
- - `onReady`
208
+ Millions of tasks per hour? No problem.
89
209
 
90
- Example:
210
+ ### **✔ Strong idempotent guarantees**
91
211
 
92
- ```ts
93
- async onExecute(id, payload) {
94
- console.log('executing', id, payload);
95
- }
96
- ```
212
+ Exactly‑once processing for critical operations.
97
213
 
98
- ---
214
+ ### **✔ Low‑level control without heavy abstractions**
99
215
 
100
- ## 🧱 Architecture Overview
216
+ No magic, no hidden states - everything is explicit.
101
217
 
102
- ```
103
- Producer → Redis Stream → Consumer Group → Worker → DLQ (optional)
104
- ```
218
+ ### **✔ Predictable behavior in distributed environments**
105
219
 
106
- - Redis Streams store tasks
107
- - Lua scripts handle trimming, idempotency, stuck recovery
108
- - Workers fetch tasks via XREADGROUP or Lua select
109
- - Tasks executed, ACKed, or sent to DLQ
220
+ Even with frequent worker restarts.
221
+
222
+ ### **✔ Production‑grade reliability**
223
+
224
+ Backpressure, recovery, retries, dead-lettering - all included.
110
225
 
111
226
  ---
112
227
 
113
- ## 🗄️ Dead Letter Queue (DLQ)
228
+ ## 🏗️ Project Structure & Architecture
114
229
 
115
- Failed tasks after `workerMaxRetries` automatically go to:
230
+ - Redis Streams for messaging
231
+ - Lua scripts for atomic operations
232
+ - JavaScript/TypeScript API
233
+ - Full worker lifecycle management
234
+ - Configurable backpressure & contention handling
235
+ - Optional job‑level progress tracking
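+
+ At a glance, the task flow matches the diagram from the 2.0.13 README:
+
+ ```
+ Producer → Redis Stream → Consumer Group → Worker → DLQ (optional)
+ ```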
116
236
 
117
- ```
118
- <stream>:dlq
119
- ```
237
+ ---
238
+
239
+ ## 🧩 Extensibility
240
+
241
+ power-queues is ideal for building:
242
+
243
+ - task schedulers
244
+ - distributed cron engines
245
+ - ETL pipelines
246
+ - telemetry processors
247
+ - notification workers
248
+ - device monitoring systems
249
+ - AI job pipelines
250
+ - high-frequency background jobs
120
251
 
121
252
  ---
122
253
 
123
- ## 🧩 Idempotency
254
+ ## 🧱 Reliability First
124
255
 
125
- Guaranteed by 3 keys:
256
+ Every part of the engine is designed to prevent:
126
257
 
127
- - `doneKey`
128
- - `lockKey`
129
- - `startKey`
258
+ - double execution
259
+ - stuck tasks
260
+ - orphan locks
261
+ - lost messages
262
+ - zombie workers
263
+ - script desynchronization
130
264
 
131
- This prevents double-execution during retries, crashes, or concurrency.
265
+ The heartbeat + TTL strategy guarantees that no task is "lost" even in
266
+ chaotic cluster environments.
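+
+ Per the 2.0.15 build shown later in this diff, a heartbeat tick is essentially
+ two TTL refreshes; a simplified sketch of that routine:
+
+ ```ts
+ // Simplified from dist/index.js: each tick refreshes both idempotency keys.
+ async function sendHeartbeat(
+   redis: { pexpire(key: string, ms: number): Promise<number> },
+   lockKey: string,
+   startKey: string,
+   ttlMs: number,
+ ): Promise<boolean> {
+   const r1 = await redis.pexpire(lockKey, ttlMs);
+   const r2 = await redis.pexpire(startKey, ttlMs);
+   return r1 === 1 || r2 === 1; // true while at least one key is still alive
+ }
+ ```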
132
267
 
133
268
  ---
134
269
 
135
- ## 🚀 Performance
270
+ ## 🏷️ SEO‑Optimized Keywords (Non‑Spam)
271
+
272
+ power-queues is relevant for:
136
273
 
137
- - 10,000+ XADDs/sec
138
- - Bulk mode: 50,000 operations in one request
139
- - Extremely low CPU usage due to Lua trimming
274
+ - Redis Streams queue engine
275
+ - Node.js stream-based queue
276
+ - idempotent task processing
277
+ - high‑performance job queue for Node.js
278
+ - Redis Lua queue
279
+ - distributed worker engine
280
+ - scalable background jobs
281
+ - enterprise-grade Redis queue
282
+ - microservices task runner
283
+ - fault-tolerant queue for Node.js
140
284
 
141
285
  ---
142
286
 
143
- ## 🏷️ SEO Keywords
287
+ ## 📝 License
144
288
 
145
- ```
146
- redis streams, redis queue, task queue, job queue, nodejs queue, nestjs queue,
147
- bulk xadd, distributed queue system, background jobs, retries, dlq,
148
- idempotency, redis lua scripts, microservices, high-performance queue,
149
- high-throughput, batching, concurrency control
150
- ```
289
+ MIT - free for commercial and private use.
151
290
 
152
291
  ---
153
292
 
154
- ## 📜 License
293
+ ## Why This Project Exists
294
+
295
+ Most Node.js queue libraries are:
+
+ - too slow
296
+ - too abstract
297
+ - not idempotent
298
+ - not safe for financial or mission‑critical workloads
299
+
300
+ power-queues was built to solve real production problems where:
+
301
+ - *duplicate tasks cost money*,
302
+ - *workers are unstable*,
303
+ - *tasks must survive restarts*,
304
+ - *performance matters at scale*.
155
305
 
156
- MIT
306
+ If these things matter to you - this engine will feel like home.
package/dist/index.cjs CHANGED
@@ -260,7 +260,6 @@ var PowerQueues = class extends import_power_redis.PowerRedis {
260
260
  constructor() {
261
261
  super(...arguments);
262
262
  this.abort = new AbortController();
263
- this.strictCheckingConnection = ["true", "on", "yes", "y", "1"].includes(String(process.env.REDIS_STRICT_CHECK_CONNECTION ?? "").trim().toLowerCase());
264
263
  this.scripts = {};
265
264
  this.addingBatchTasksCount = 800;
266
265
  this.addingBatchKeysLimit = 1e4;
@@ -311,7 +310,7 @@ var PowerQueues = class extends import_power_redis.PowerRedis {
311
310
  }
312
311
  const tasksP = await this.onSelected(tasks);
313
312
  const ids = await this.execute((0, import_full_utils.isArrFilled)(tasksP) ? tasksP : tasks);
314
- if ((0, import_full_utils.isArrFilled)(tasks)) {
313
+ if ((0, import_full_utils.isArrFilled)(ids)) {
315
314
  await this.approve(ids);
316
315
  }
317
316
  } catch (err) {
@@ -454,7 +453,7 @@ var PowerQueues = class extends import_power_redis.PowerRedis {
454
453
  }
455
454
  const pairs = flat.length / 2;
456
455
  if ((0, import_full_utils.isNumNZ)(pairs)) {
457
- throw new Error('Task must have "payload" or "flat".');
456
+ throw new Error('Task "flat" must contain at least one field/value pair.');
458
457
  }
459
458
  argv.push(String(id));
460
459
  argv.push(String(pairs));
@@ -507,7 +506,13 @@ var PowerQueues = class extends import_power_redis.PowerRedis {
507
506
  return batches;
508
507
  }
509
508
  keysLength(task) {
510
- return 2 + ("flat" in task && Array.isArray(task.flat) && task.flat.length ? task.flat.length : Object.keys(task).length * 2);
509
+ if ("flat" in task && Array.isArray(task.flat) && task.flat.length) {
510
+ return 2 + task.flat.length;
511
+ }
512
+ if ("payload" in task && (0, import_full_utils.isObj)(task.payload)) {
513
+ return 2 + Object.keys(task.payload).length * 2;
514
+ }
515
+ return 2 + Object.keys(task).length * 2;
511
516
  }
512
517
  attemptsKey(id) {
513
518
  const safeStream = this.stream.replace(/[^\w:\-]/g, "_");
@@ -674,7 +679,8 @@ var PowerQueues = class extends import_power_redis.PowerRedis {
674
679
  error: String(err?.message || err),
675
680
  createdAt,
676
681
  job,
677
- id
682
+ id,
683
+ attempt: 0
678
684
  }
679
685
  }]);
680
686
  await this.clearAttempts(id);
@@ -777,42 +783,63 @@ var PowerQueues = class extends import_power_redis.PowerRedis {
777
783
  if (signal?.aborted) {
778
784
  return resolve();
779
785
  }
786
+ let delay;
787
+ if (ttl > 0) {
788
+ const base = Math.max(25, Math.min(ttl, 5e3));
789
+ const jitter = Math.floor(Math.min(base, 200) * Math.random());
790
+ delay = base + jitter;
791
+ } else {
792
+ delay = 5 + Math.floor(Math.random() * 15);
793
+ }
780
794
  const t = setTimeout(() => {
781
795
  if (signal) {
782
796
  signal.removeEventListener("abort", onAbort);
783
797
  }
784
798
  resolve();
785
- }, ttl > 0 ? 25 + Math.floor(Math.random() * 50) : 5 + Math.floor(Math.random() * 15));
799
+ }, delay);
786
800
  t.unref?.();
787
801
  function onAbort() {
788
802
  clearTimeout(t);
789
803
  resolve();
790
804
  }
791
- signal?.addEventListener("abort", onAbort, { once: true });
805
+ signal?.addEventListener?.("abort", onAbort, { once: true });
792
806
  });
793
807
  }
808
+ async sendHeartbeat(keys) {
809
+ try {
810
+ const r1 = await this.redis.pexpire(keys.lockKey, this.workerExecuteLockTimeoutMs);
811
+ const r2 = await this.redis.pexpire(keys.startKey, this.workerExecuteLockTimeoutMs);
812
+ const ok1 = Number(r1 || 0) === 1;
813
+ const ok2 = Number(r2 || 0) === 1;
814
+ return ok1 || ok2;
815
+ } catch {
816
+ return false;
817
+ }
818
+ }
794
819
  heartbeat(keys) {
795
820
  if (this.workerExecuteLockTimeoutMs <= 0) {
796
821
  return;
797
822
  }
798
- let timer, alive = true, hbFails = 0;
799
823
  const workerHeartbeatTimeoutMs = Math.max(1e3, Math.floor(Math.max(5e3, this.workerExecuteLockTimeoutMs | 0) / 4));
824
+ let timer;
825
+ let alive = true;
826
+ let hbFails = 0;
800
827
  const stop = () => {
801
828
  alive = false;
802
829
  if (timer) {
803
830
  clearTimeout(timer);
804
831
  }
805
832
  };
806
- const onAbort = () => stop();
807
833
  const signal = this.signal();
834
+ const onAbort = () => stop();
808
835
  signal?.addEventListener?.("abort", onAbort, { once: true });
809
836
  const tick = async () => {
810
837
  if (!alive) {
811
838
  return;
812
839
  }
813
840
  try {
814
- const r = await this.heartbeat(keys);
815
- hbFails = r ? 0 : hbFails + 1;
841
+ const ok = await this.sendHeartbeat(keys);
842
+ hbFails = ok ? 0 : hbFails + 1;
816
843
  if (hbFails >= 3) {
817
844
  throw new Error("Heartbeat lost.");
818
845
  }
@@ -823,9 +850,11 @@ var PowerQueues = class extends import_power_redis.PowerRedis {
823
850
  return;
824
851
  }
825
852
  }
826
- timer = setTimeout(tick, workerHeartbeatTimeoutMs).unref?.();
853
+ timer = setTimeout(tick, workerHeartbeatTimeoutMs);
854
+ timer.unref?.();
827
855
  };
828
- timer = setTimeout(tick, workerHeartbeatTimeoutMs).unref?.();
856
+ timer = setTimeout(tick, workerHeartbeatTimeoutMs);
857
+ timer.unref?.();
829
858
  return () => {
830
859
  signal?.removeEventListener?.("abort", onAbort);
831
860
  stop();
package/dist/index.d.cts CHANGED
@@ -34,7 +34,6 @@ type Task = {
34
34
  declare class PowerQueues extends PowerRedis {
35
35
  abort: AbortController;
36
36
  redis: IORedisLike;
37
- readonly strictCheckingConnection: boolean;
38
37
  readonly scripts: Record<string, SavedScript>;
39
38
  readonly addingBatchTasksCount: number;
40
39
  readonly addingBatchKeysLimit: number;
@@ -94,6 +93,7 @@ declare class PowerQueues extends PowerRedis {
94
93
  private selectStuck;
95
94
  private selectFresh;
96
95
  private waitAbortable;
96
+ private sendHeartbeat;
97
97
  private heartbeat;
98
98
  private normalizeEntries;
99
99
  private values;
package/dist/index.d.ts CHANGED
@@ -34,7 +34,6 @@ type Task = {
34
34
  declare class PowerQueues extends PowerRedis {
35
35
  abort: AbortController;
36
36
  redis: IORedisLike;
37
- readonly strictCheckingConnection: boolean;
38
37
  readonly scripts: Record<string, SavedScript>;
39
38
  readonly addingBatchTasksCount: number;
40
39
  readonly addingBatchKeysLimit: number;
@@ -94,6 +93,7 @@ declare class PowerQueues extends PowerRedis {
94
93
  private selectStuck;
95
94
  private selectFresh;
96
95
  private waitAbortable;
96
+ private sendHeartbeat;
97
97
  private heartbeat;
98
98
  private normalizeEntries;
99
99
  private values;
package/dist/index.js CHANGED
@@ -243,7 +243,6 @@ var PowerQueues = class extends PowerRedis {
243
243
  constructor() {
244
244
  super(...arguments);
245
245
  this.abort = new AbortController();
246
- this.strictCheckingConnection = ["true", "on", "yes", "y", "1"].includes(String(process.env.REDIS_STRICT_CHECK_CONNECTION ?? "").trim().toLowerCase());
247
246
  this.scripts = {};
248
247
  this.addingBatchTasksCount = 800;
249
248
  this.addingBatchKeysLimit = 1e4;
@@ -294,7 +293,7 @@ var PowerQueues = class extends PowerRedis {
294
293
  }
295
294
  const tasksP = await this.onSelected(tasks);
296
295
  const ids = await this.execute(isArrFilled(tasksP) ? tasksP : tasks);
297
- if (isArrFilled(tasks)) {
296
+ if (isArrFilled(ids)) {
298
297
  await this.approve(ids);
299
298
  }
300
299
  } catch (err) {
@@ -437,7 +436,7 @@ var PowerQueues = class extends PowerRedis {
437
436
  }
438
437
  const pairs = flat.length / 2;
439
438
  if (isNumNZ(pairs)) {
440
- throw new Error('Task must have "payload" or "flat".');
439
+ throw new Error('Task "flat" must contain at least one field/value pair.');
441
440
  }
442
441
  argv.push(String(id));
443
442
  argv.push(String(pairs));
@@ -490,7 +489,13 @@ var PowerQueues = class extends PowerRedis {
490
489
  return batches;
491
490
  }
492
491
  keysLength(task) {
493
- return 2 + ("flat" in task && Array.isArray(task.flat) && task.flat.length ? task.flat.length : Object.keys(task).length * 2);
492
+ if ("flat" in task && Array.isArray(task.flat) && task.flat.length) {
493
+ return 2 + task.flat.length;
494
+ }
495
+ if ("payload" in task && isObj(task.payload)) {
496
+ return 2 + Object.keys(task.payload).length * 2;
497
+ }
498
+ return 2 + Object.keys(task).length * 2;
494
499
  }
495
500
  attemptsKey(id) {
496
501
  const safeStream = this.stream.replace(/[^\w:\-]/g, "_");
@@ -657,7 +662,8 @@ var PowerQueues = class extends PowerRedis {
657
662
  error: String(err?.message || err),
658
663
  createdAt,
659
664
  job,
660
- id
665
+ id,
666
+ attempt: 0
661
667
  }
662
668
  }]);
663
669
  await this.clearAttempts(id);
@@ -760,42 +766,63 @@ var PowerQueues = class extends PowerRedis {
760
766
  if (signal?.aborted) {
761
767
  return resolve();
762
768
  }
769
+ let delay;
770
+ if (ttl > 0) {
771
+ const base = Math.max(25, Math.min(ttl, 5e3));
772
+ const jitter = Math.floor(Math.min(base, 200) * Math.random());
773
+ delay = base + jitter;
774
+ } else {
775
+ delay = 5 + Math.floor(Math.random() * 15);
776
+ }
763
777
  const t = setTimeout(() => {
764
778
  if (signal) {
765
779
  signal.removeEventListener("abort", onAbort);
766
780
  }
767
781
  resolve();
768
- }, ttl > 0 ? 25 + Math.floor(Math.random() * 50) : 5 + Math.floor(Math.random() * 15));
782
+ }, delay);
769
783
  t.unref?.();
770
784
  function onAbort() {
771
785
  clearTimeout(t);
772
786
  resolve();
773
787
  }
774
- signal?.addEventListener("abort", onAbort, { once: true });
788
+ signal?.addEventListener?.("abort", onAbort, { once: true });
775
789
  });
776
790
  }
791
+ async sendHeartbeat(keys) {
792
+ try {
793
+ const r1 = await this.redis.pexpire(keys.lockKey, this.workerExecuteLockTimeoutMs);
794
+ const r2 = await this.redis.pexpire(keys.startKey, this.workerExecuteLockTimeoutMs);
795
+ const ok1 = Number(r1 || 0) === 1;
796
+ const ok2 = Number(r2 || 0) === 1;
797
+ return ok1 || ok2;
798
+ } catch {
799
+ return false;
800
+ }
801
+ }
777
802
  heartbeat(keys) {
778
803
  if (this.workerExecuteLockTimeoutMs <= 0) {
779
804
  return;
780
805
  }
781
- let timer, alive = true, hbFails = 0;
782
806
  const workerHeartbeatTimeoutMs = Math.max(1e3, Math.floor(Math.max(5e3, this.workerExecuteLockTimeoutMs | 0) / 4));
807
+ let timer;
808
+ let alive = true;
809
+ let hbFails = 0;
783
810
  const stop = () => {
784
811
  alive = false;
785
812
  if (timer) {
786
813
  clearTimeout(timer);
787
814
  }
788
815
  };
789
- const onAbort = () => stop();
790
816
  const signal = this.signal();
817
+ const onAbort = () => stop();
791
818
  signal?.addEventListener?.("abort", onAbort, { once: true });
792
819
  const tick = async () => {
793
820
  if (!alive) {
794
821
  return;
795
822
  }
796
823
  try {
797
- const r = await this.heartbeat(keys);
798
- hbFails = r ? 0 : hbFails + 1;
824
+ const ok = await this.sendHeartbeat(keys);
825
+ hbFails = ok ? 0 : hbFails + 1;
799
826
  if (hbFails >= 3) {
800
827
  throw new Error("Heartbeat lost.");
801
828
  }
@@ -806,9 +833,11 @@ var PowerQueues = class extends PowerRedis {
806
833
  return;
807
834
  }
808
835
  }
809
- timer = setTimeout(tick, workerHeartbeatTimeoutMs).unref?.();
836
+ timer = setTimeout(tick, workerHeartbeatTimeoutMs);
837
+ timer.unref?.();
810
838
  };
811
- timer = setTimeout(tick, workerHeartbeatTimeoutMs).unref?.();
839
+ timer = setTimeout(tick, workerHeartbeatTimeoutMs);
840
+ timer.unref?.();
812
841
  return () => {
813
842
  signal?.removeEventListener?.("abort", onAbort);
814
843
  stop();
package/package.json CHANGED
@@ -1,7 +1,7 @@
1
1
  {
2
2
  "name": "power-queues",
3
- "version": "2.0.13",
4
- "description": "Base classes for implementing custom queues in redis under high load conditions based on nestjs.",
3
+ "version": "2.0.15",
4
+ "description": "High-performance Redis Streams queue for Node.js with Lua-powered bulk XADD, idempotent workers, heartbeat locks, stuck-task recovery, retries, DLQ, and distributed processing.",
5
5
  "author": "ihor-bielchenko",
6
6
  "license": "MIT",
7
7
  "repository": {
@@ -81,11 +81,8 @@
81
81
  "power-redis"
82
82
  ],
83
83
  "dependencies": {
84
- "@nestjs-labs/nestjs-ioredis": "^11.0.4",
85
- "@nestjs/common": "^11.1.8",
86
- "full-utils": "^2.0.3",
87
- "ioredis": "^5.8.2",
88
- "power-redis": "^2.0.6",
84
+ "full-utils": "^2.0.5",
85
+ "power-redis": "^2.0.8",
89
86
  "uuid": "^13.0.0"
90
87
  }
91
88
  }