glide-mq 0.3.0 → 0.4.1

This diff shows the published contents of the two package versions as they appear in the public registry, and is provided for informational purposes only.
package/README.md CHANGED
@@ -4,21 +4,41 @@ High-performance message queue for Node.js, built on Valkey/Redis Streams with [
 
  ## Performance
 
+ ### Processing throughput
+
  | Concurrency | Throughput |
  |-------------|-----------|
  | c=1 | 4,376 jobs/s |
- | c=10 | 20,979 jobs/s |
- | c=50 | 44,643 jobs/s |
+ | c=5 | 14,925 jobs/s |
+ | c=10 | 15,504 jobs/s |
+ | c=50 | 48,077 jobs/s |
+
+ ### Bulk add (addBulk with Batch API)
+
+ | Jobs | Serial | Batch | Speedup |
+ |------|--------|-------|---------|
+ | 200 | 76ms | 14ms | 5.4x |
+ | 1,000 | 228ms | 18ms | 12.7x |
+
+ ### Payload compression
+
+ | Mode | Stored size (15KB payload) | Savings |
+ |------|--------------------------|---------|
+ | Plain | 15,327 bytes | - |
+ | Gzip | 331 bytes | 98% |
 
  No-op processor, Valkey 8.0, single node.
 
  ## Why
 
- - **Streams-first** - uses Redis Streams + consumer groups + PEL instead of Lists + BRPOPLPUSH. Fewer moving parts, built-in at-least-once delivery.
+ - **Streams-first** - Redis Streams + consumer groups + PEL instead of Lists + BRPOPLPUSH. Fewer moving parts, built-in at-least-once delivery.
  - **Server Functions** - single `FUNCTION LOAD` instead of dozens of EVAL scripts. Persistent across restarts, no NOSCRIPT cache-miss errors.
  - **1 RTT per job** - `completeAndFetchNext` combines job completion + next job fetch + activation in a single FCALL round trip.
  - **Cluster-native** - hash-tagged keys from day one. No afterthought `{braces}` requirement.
  - **Native bindings** - built on [@glidemq/speedkey](https://github.com/avifenesh/speedkey) (valkey-glide with Rust core + NAPI).
+ - **AZ-Affinity** - route reads to same-AZ replicas, reducing cross-AZ latency and AWS costs by up to 75%.
+ - **Batch pipelining** - `addBulk` uses GLIDE's Batch API for single round-trip bulk operations (12.7x faster than serial).
+ - **Transparent compression** - gzip payloads with zero-config decompression on workers (98% savings on repetitive data).
 
  ## Install
 
@@ -26,7 +46,7 @@ No-op processor, Valkey 8.0, single node.
  npm install glide-mq
  ```
 
- Requires Node.js 20+ and a running Valkey (7.2+) or Redis (6.2+) instance.
+ Requires Node.js 20+ and a running Valkey (7.0+) or Redis (7.0+) instance for FUNCTION support.
 
  ## Quick Start
 
@@ -68,6 +88,7 @@ worker.on('failed', (job, err) => console.log(`Job ${job.id} failed: ${err.messa
  - **Global concurrency** - limit active jobs across all workers
  - **Job retention** - removeOnComplete/removeOnFail (count, age-based)
  - **Priorities** - encoded in sorted set scores, FIFO within same priority
+ - **Compression** - transparent gzip for job payloads (Node.js zlib, zero deps)
 
  ### Workflows
  - **FlowProducer** - atomic parent-child job trees with nested flows
@@ -79,7 +100,13 @@ worker.on('failed', (job, err) => console.log(`Job ${job.id} failed: ${err.messa
  - **QueueEvents** - stream-based event subscription (added, completed, failed, stalled, etc.)
  - **Job schedulers** - cron patterns and fixed intervals for repeatable jobs
  - **Metrics** - getJobCounts, getMetrics
- - **OpenTelemetry** - optional spans for Queue.add, Worker.process, FlowProducer.add
+ - **OpenTelemetry** - automatic tracing spans for Queue.add and FlowProducer.add operations (optional peer dependency)
+
+ ### Cloud-Native (GLIDE-exclusive)
+ - **AZ-Affinity routing** - route reads to same-AZ replicas for lower latency and reduced cross-AZ costs
+ - **IAM authentication** - native AWS ElastiCache/MemoryDB auth with auto-token refresh
+ - **Batch API** - single round-trip bulk operations via GLIDE's non-atomic pipeline
+ - **Multiplexed connections** - single connection per node instead of connection pools
 
  ### Operations
  - **Graceful shutdown** - SIGTERM/SIGINT handler, waits for active jobs
@@ -93,8 +120,16 @@ worker.on('failed', (job, err) => console.log(`Job ${job.id} failed: ${err.messa
 
  ```typescript
  const queue = new Queue('name', {
-   connection: { addresses: [{ host, port }], clusterMode: false },
-   prefix: 'glide', // key prefix (default: 'glide')
+   connection: {
+     addresses: [{ host, port }],
+     clusterMode: false,
+     readFrom: ReadFrom.AZAffinity, // route reads to same-AZ replicas
+     clientAz: 'us-east-1a',
+     credentials: { password: 'secret' },
+     // or IAM: { type: 'iam', serviceType: 'elasticache', region: 'us-east-1', userId: 'user', clusterName: 'my-cluster' }
+   },
+   prefix: 'glide',
+   compression: 'gzip', // transparent payload compression
  });
 
  await queue.add('jobName', data, {
@@ -107,6 +142,12 @@ await queue.add('jobName', data, {
    deduplication: { id: 'unique-key', mode: 'simple' },
  });
 
+ // Bulk add - 12.7x faster than serial via Batch API
+ await queue.addBulk([
+   { name: 'job1', data: { a: 1 } },
+   { name: 'job2', data: { a: 2 } },
+ ]);
+
  await queue.pause();
  await queue.resume();
  await queue.getJobCounts(); // { waiting, active, delayed, completed, failed }
@@ -118,6 +159,7 @@ await queue.close();
 
  ```typescript
  const worker = new Worker('name', async (job) => {
+   await job.log('Starting processing');
    await job.updateProgress(50);
    return result;
  }, {
@@ -127,6 +169,10 @@ const worker = new Worker('name', async (job) => {
    stalledInterval: 30000,
    lockDuration: 30000,
    limiter: { max: 100, duration: 60000 },
+   deadLetterQueue: { name: 'failed-jobs' },
+   backoffStrategies: {
+     custom: (attemptsMade, err) => attemptsMade * 1000,
+   },
  });
 
  worker.on('completed', (job, result) => {});
@@ -149,17 +195,23 @@ await flow.add({
    ],
  });
 
- // Chain: A -> B -> C (sequential)
- await chain(connection, 'tasks', [
+ // Chain: A -> B -> C (sequential, each step receives previous result)
+ await chain('tasks', [
    { name: 'step1', data: {} },
    { name: 'step2', data: {} },
- ]);
+ ], connection);
 
- // Group: A + B + C (parallel)
- await group(connection, 'tasks', [
+ // Group: A + B + C (parallel, parent completes when all children done)
+ await group('tasks', [
    { name: 'task1', data: {} },
    { name: 'task2', data: {} },
- ]);
+ ], connection);
+
+ // Chord: run group in parallel, then callback with all results
+ await chord('tasks', [
+   { name: 'task1', data: {} },
+   { name: 'task2', data: {} },
+ ], { name: 'aggregate', data: {} }, connection);
  ```
 
  ### Events
@@ -173,18 +225,149 @@ events.on('failed', ({ jobId, failedReason }) => {});
  events.on('stalled', ({ jobId }) => {});
  ```
 
+ ### OpenTelemetry
+
+ ```typescript
+ // Install optional peer dependency
+ // npm install @opentelemetry/api
+
+ // Automatic tracing for Queue.add and FlowProducer.add
+ const queue = new Queue('tasks', { connection });
+ await queue.add('job', data); // Creates span: glide-mq.queue.add
+
+ // Custom tracer (optional)
+ import { setTracer } from 'glide-mq';
+ setTracer(customTracerInstance);
+ ```
+
+ ### Graceful Shutdown
+
+ ```typescript
+ import { gracefulShutdown } from 'glide-mq';
+
+ const queue = new Queue('tasks', { connection });
+ const worker = new Worker('tasks', processor, { connection });
+ const events = new QueueEvents('tasks', { connection });
+
+ // Registers SIGTERM/SIGINT handlers and resolves when all components are closed
+ await gracefulShutdown([queue, worker, events]);
+ ```
+
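The `gracefulShutdown` helper added above is easy to picture: register SIGTERM/SIGINT once, close every component, resolve when all are closed. A dependency-free sketch of that pattern follows; it is an illustration, not glide-mq's actual implementation, and it assumes only that each component exposes a `close()` method:

```javascript
// Sketch of a gracefulShutdown-style helper: one signal handler,
// close all components, resolve once everything has shut down.
// Illustrative only - the real helper may differ in details.
function gracefulShutdownSketch(components, proc = process) {
  return new Promise((resolve, reject) => {
    const shutdown = async () => {
      try {
        // Close everything in parallel; each component exposes close().
        await Promise.all(components.map((c) => c.close()));
        resolve();
      } catch (err) {
        reject(err);
      }
    };
    proc.once('SIGTERM', shutdown);
    proc.once('SIGINT', shutdown);
  });
}
```

Taking `proc` as a parameter makes the handler testable with a plain EventEmitter instead of the real process object.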
  ## Cluster Mode
 
  ```typescript
  const connection = {
    addresses: [{ host: 'cluster-node', port: 7000 }],
    clusterMode: true,
+   readFrom: ReadFrom.AZAffinity,
+   clientAz: 'us-east-1a',
  };
 
  // Everything works the same - keys are hash-tagged automatically
  const queue = new Queue('tasks', { connection });
  ```
 
+ ## Testing Mode
+
+ glide-mq ships a built-in in-memory backend so you can unit-test your job processors **without a running Valkey instance**.
+
+ ```typescript
+ import { TestQueue, TestWorker } from 'glide-mq/testing';
+
+ const queue = new TestQueue('tasks');
+
+ const worker = new TestWorker(queue, async (job) => {
+   return { processed: job.data };
+ });
+
+ worker.on('completed', (job, result) => {
+   console.log(`Job ${job.id} done:`, result);
+ });
+
+ worker.on('failed', (job, err) => {
+   console.error(`Job ${job.id} failed:`, err.message);
+ });
+
+ await queue.add('send-email', { to: 'user@example.com' });
+
+ // Inspect state without touching Valkey
+ const counts = await queue.getJobCounts();
+ // { waiting: 0, active: 0, delayed: 0, completed: 1, failed: 0 }
+
+ // Search jobs by name or data fields
+ const jobs = await queue.searchJobs({ name: 'send-email', state: 'completed' });
+
+ await worker.close();
+ await queue.close();
+ ```
+
+ `TestQueue` and `TestWorker` mirror the real `Queue` / `Worker` public API (add, addBulk, getJob, getJobs, getJobCounts, pause, resume, events, retries, concurrency), making it straightforward to swap implementations between test and production code.
+
+ ## Dashboard
+
+ glide-mq exposes a REST + Server-Sent Events API that can be consumed by the [`@glidemq/dashboard`](https://github.com/avifenesh/glidemq-dashboard) UI package.
+
+ ### Quick start with the built-in demo server
+
+ ```bash
+ cd demo
+ npm install
+ npm run dashboard # starts http://localhost:3000
+ ```
+
+ ### REST endpoints
+
+ | Method | Path | Description |
+ |--------|------|-------------|
+ | `GET` | `/api/queues` | List all queues with counts and metrics |
+ | `GET` | `/api/queues/:name` | Queue details + recent jobs |
+ | `GET` | `/api/queues/:name/jobs/:id` | Single job details, state, logs |
+ | `POST` | `/api/queues/:name/jobs` | Add a new job |
+ | `POST` | `/api/queues/:name/pause` | Pause a queue |
+ | `POST` | `/api/queues/:name/resume` | Resume a queue |
+ | `POST` | `/api/queues/:name/jobs/:id/retry` | Retry a failed job |
+ | `DELETE` | `/api/queues/:name/jobs/:id` | Remove a job |
+ | `POST` | `/api/queues/:name/drain` | Drain all waiting jobs |
+ | `POST` | `/api/queues/:name/obliterate` | Obliterate queue and all data |
+ | `GET` | `/api/events` | SSE stream for real-time job events |
+
+ ### Real-time events via SSE
+
+ ```javascript
+ const es = new EventSource('http://localhost:3000/api/events');
+ es.onmessage = ({ data }) => {
+   const { queue, event, jobId } = JSON.parse(data);
+   // event: 'added' | 'completed' | 'failed' | 'progress' | 'stalled' | 'heartbeat'
+   console.log(`[${queue}] ${event} – job ${jobId}`);
+ };
+ ```
+
+ ### Embedding the dashboard server in your own Express app
+
+ ```typescript
+ import express from 'express';
+ import { Queue, QueueEvents } from 'glide-mq';
+
+ const app = express();
+ app.use(express.json());
+
+ const queues: Record<string, Queue> = {
+   orders: new Queue('orders', { connection }),
+   payments: new Queue('payments', { connection }),
+ };
+
+ app.get('/api/queues', async (_req, res) => {
+   const data = await Promise.all(
+     Object.entries(queues).map(async ([name, q]) => ({
+       name,
+       counts: await q.getJobCounts(),
+       isPaused: await q.isPaused(),
+     })),
+   );
+   res.json(data);
+ });
+ ```
+
  ## License
 
  Apache-2.0
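The 5.4x and 12.7x bulk-add speedups quoted in the README diff above are simply the ratios of the measured serial and batch timings; the gap grows with job count because serial `add` pays one round trip per job while `addBulk` pays roughly one in total. The arithmetic, using the table's own numbers:

```javascript
// Speedup figures in the bulk-add table are serial/batch timing ratios.
const measurements = [
  { jobs: 200, serialMs: 76, batchMs: 14 },   // table row 1
  { jobs: 1000, serialMs: 228, batchMs: 18 }, // table row 2
];

for (const { jobs, serialMs, batchMs } of measurements) {
  const speedup = (serialMs / batchMs).toFixed(1);
  console.log(`${jobs} jobs: ${serialMs}ms serial vs ${batchMs}ms batch -> ${speedup}x`);
}
// -> 5.4x and 12.7x, matching the table
```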
package/demo/README.md ADDED
@@ -0,0 +1,169 @@
+ # glide-mq Demo Application
+
+ A comprehensive demonstration of all glide-mq features, simulating a complete e-commerce platform.
+
+ ## Features Demonstrated
+
+ ### Core Queue Operations
+ - Job processing with progress tracking
+ - Bulk operations with the Batch API
+ - Priority queues with FIFO ordering
+ - Delayed/scheduled jobs
+ - Job retries with exponential backoff
+ - Dead letter queue handling
+
+ ### Advanced Features
+ - Deduplication (simple, throttle, debounce)
+ - Rate limiting (sliding window)
+ - Global concurrency limits
+ - Job retention policies
+ - Transparent gzip compression
+ - Timeout handling
+
+ ### Workflows
+ - FlowProducer for parent-child job trees
+ - Chain pattern for sequential pipelines
+ - Group pattern for parallel execution
+ - Complex nested workflows
+
+ ### Observability
+ - Real-time event streaming
+ - Progress tracking
+ - Metrics and job counts
+ - Job logs and state tracking
+ - Dashboard API integration
+
+ ## Setup
+
+ 1. Install dependencies:
+    ```bash
+    cd demo
+    npm install
+    ```
+
+ 2. Ensure Valkey/Redis is running:
+    ```bash
+    # Single node on port 6379
+    valkey-server
+
+    # Or for cluster mode (optional): ports 7000-7005
+    ```
+
+ ## Running the Demo
+
+ ### Option 1: Full Demo with All Features
+ ```bash
+ npm start
+ ```
+
+ This launches:
+ - 10 different queue types (orders, payments, inventory, etc.)
+ - 10 specialized workers with different configurations
+ - 12 demo scenarios showcasing various features
+ - A real-time metrics display refreshed every 5 seconds
+
+ ### Option 2: Dashboard Server
+ ```bash
+ npm run dashboard
+ ```
+
+ Starts the dashboard API server on http://localhost:3000.
+
+ Features:
+ - REST API for queue management
+ - Server-Sent Events for real-time updates
+ - Queue pause/resume controls
+ - Job retry/remove operations
+ - Metrics and monitoring
+
+ ## Demo Scenarios
+
+ 1. **Simple Order Processing** - Basic job with progress tracking
+ 2. **Bulk Inventory Update** - 20 jobs added via addBulk
+ 3. **Priority Tasks** - Jobs with different priority levels
+ 4. **Scheduled Notifications** - Delayed job execution
+ 5. **Payment with Retries** - Automatic retry on failure
+ 6. **Deduplicated Analytics** - Prevents duplicate events
+ 7. **E-commerce Workflow** - Complex parent-child job tree
+ 8. **Sequential Pipeline** - Chain pattern for data processing
+ 9. **Parallel Broadcast** - Group pattern for notifications
+ 10. **Large Report Generation** - Compression for big payloads
+ 11. **Rate-Limited Recommendations** - Throttled processing
+ 12. **Timeout Handling** - Job timeout demonstration
+
+ ## Queue Types
+
+ - **orders** - Order processing with progress tracking
+ - **payments** - Payment processing with retry logic
+ - **inventory** - Rate-limited inventory updates
+ - **shipping** - Shipping label generation
+ - **notifications** - Multi-channel notifications (email/SMS/push)
+ - **analytics** - Event aggregation and metrics
+ - **recommendations** - ML recommendation generation
+ - **reports** - Long-running report generation
+ - **priority-tasks** - Priority-based task execution
+ - **dead-letter** - Failed job investigation
+
+ ## API Endpoints (Dashboard Server)
+
+ - `GET /api/queues` - List all queues with counts
+ - `GET /api/queues/:name` - Queue details and jobs
+ - `GET /api/queues/:name/jobs/:id` - Specific job details
+ - `POST /api/queues/:name/jobs` - Add new job
+ - `POST /api/queues/:name/pause` - Pause queue
+ - `POST /api/queues/:name/resume` - Resume queue
+ - `POST /api/queues/:name/jobs/:id/retry` - Retry failed job
+ - `DELETE /api/queues/:name/jobs/:id` - Remove job
+ - `POST /api/queues/:name/drain` - Drain queue
+ - `POST /api/queues/:name/obliterate` - Obliterate queue
+ - `GET /api/events` - SSE stream for real-time updates
+
+ ## Monitoring
+
+ The demo displays real-time metrics including:
+ - Queue counts (waiting, active, delayed, completed, failed)
+ - Job progress updates
+ - Success/failure events
+ - Stalled job detection
+ - Worker status
+
+ ## Architecture
+
+ ```
+ ┌─────────────┐     ┌─────────────┐     ┌─────────────┐
+ │  Producer   │────▶│    Queue    │────▶│   Worker    │
+ │ (Demo App)  │     │  (Valkey)   │     │ (Processor) │
+ └─────────────┘     └─────────────┘     └─────────────┘
+        │
+        ▼
+ ┌─────────────┐
+ │  Dashboard  │
+ │  (API/UI)   │
+ └─────────────┘
+ ```
+
+ ## Customization
+
+ Edit `index.ts` to:
+ - Add new queue types
+ - Modify worker configurations
+ - Create custom scenarios
+ - Adjust timing and concurrency
+ - Change retry strategies
+
+ ## Troubleshooting
+
+ - Ensure Valkey/Redis is running on localhost:6379
+ - Check that Node.js is version 20+
+ - Verify all dependencies are installed
+ - Monitor the console for error messages
+ - Use the dashboard API to inspect job details
+
+ ## Performance Tips
+
+ - Increase worker concurrency for higher throughput
+ - Use bulk operations for batch processing
+ - Enable compression for large payloads
+ - Configure appropriate retention policies
+ - Use priority queues for critical tasks
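Several of the demo scenarios above lean on retry strategies ("Payment with Retries", "Job retries with exponential backoff"). In glide-mq a custom backoff strategy is a plain `(attemptsMade, err) => delayMs` function, as shown in the worker options in the README diff. A sketch of the linear strategy from the diff next to a capped-exponential variant; the exponential one is an illustrative addition, not part of the package:

```javascript
// Backoff strategies map attempt number to a delay in milliseconds.
// `linear` matches the example in the README diff; `cappedExponential`
// is an illustrative extra, not a strategy shipped by glide-mq.
const strategies = {
  linear: (attemptsMade) => attemptsMade * 1000,
  // 1s, 2s, 4s, ... capped at 30s to avoid unbounded waits
  cappedExponential: (attemptsMade) => Math.min(1000 * 2 ** (attemptsMade - 1), 30000),
};

for (let attempt = 1; attempt <= 6; attempt += 1) {
  console.log(attempt, strategies.linear(attempt), strategies.cappedExponential(attempt));
}
```

Because the strategy receives the error as its second argument, it can also branch on error type, e.g. backing off longer for rate-limit errors than for transient network failures.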