flashq 0.3.0 → 0.3.2

This diff shows the changes between publicly released versions of this package, as published to one of the supported registries. It is provided for informational purposes only.
Files changed (72)
  1. package/README.md +340 -239
  2. package/dist/client/connection.d.ts +27 -3
  3. package/dist/client/connection.d.ts.map +1 -1
  4. package/dist/client/connection.js +308 -56
  5. package/dist/client/connection.js.map +1 -1
  6. package/dist/client/http/request.d.ts +4 -0
  7. package/dist/client/http/request.d.ts.map +1 -1
  8. package/dist/client/http/request.js +135 -42
  9. package/dist/client/http/request.js.map +1 -1
  10. package/dist/client/http/response.d.ts.map +1 -1
  11. package/dist/client/http/response.js +3 -1
  12. package/dist/client/http/response.js.map +1 -1
  13. package/dist/client/index.d.ts +15 -7
  14. package/dist/client/index.d.ts.map +1 -1
  15. package/dist/client/index.js +105 -16
  16. package/dist/client/index.js.map +1 -1
  17. package/dist/client/methods/advanced.d.ts.map +1 -1
  18. package/dist/client/methods/advanced.js.map +1 -1
  19. package/dist/client/methods/core.d.ts +24 -1
  20. package/dist/client/methods/core.d.ts.map +1 -1
  21. package/dist/client/methods/core.js +105 -0
  22. package/dist/client/methods/core.js.map +1 -1
  23. package/dist/client/methods/cron.d.ts.map +1 -1
  24. package/dist/client/methods/cron.js.map +1 -1
  25. package/dist/client/methods/dlq.d.ts.map +1 -1
  26. package/dist/client/methods/dlq.js.map +1 -1
  27. package/dist/client/methods/flows.d.ts.map +1 -1
  28. package/dist/client/methods/flows.js.map +1 -1
  29. package/dist/client/methods/jobs.d.ts.map +1 -1
  30. package/dist/client/methods/jobs.js.map +1 -1
  31. package/dist/client/methods/metrics.d.ts.map +1 -1
  32. package/dist/client/methods/metrics.js.map +1 -1
  33. package/dist/client/methods/queue.d.ts.map +1 -1
  34. package/dist/client/methods/queue.js.map +1 -1
  35. package/dist/client/types.d.ts +10 -3
  36. package/dist/client/types.d.ts.map +1 -1
  37. package/dist/errors.d.ts +105 -0
  38. package/dist/errors.d.ts.map +1 -0
  39. package/dist/errors.js +223 -0
  40. package/dist/errors.js.map +1 -0
  41. package/dist/events/subscriber.d.ts +1 -0
  42. package/dist/events/subscriber.d.ts.map +1 -1
  43. package/dist/events/subscriber.js +48 -7
  44. package/dist/events/subscriber.js.map +1 -1
  45. package/dist/events/types.d.ts +2 -0
  46. package/dist/events/types.d.ts.map +1 -1
  47. package/dist/hooks.d.ts +166 -0
  48. package/dist/hooks.d.ts.map +1 -0
  49. package/dist/hooks.js +73 -0
  50. package/dist/hooks.js.map +1 -0
  51. package/dist/index.d.ts +8 -1
  52. package/dist/index.d.ts.map +1 -1
  53. package/dist/index.js +35 -1
  54. package/dist/index.js.map +1 -1
  55. package/dist/queue.d.ts.map +1 -1
  56. package/dist/queue.js +9 -5
  57. package/dist/queue.js.map +1 -1
  58. package/dist/types.d.ts +53 -0
  59. package/dist/types.d.ts.map +1 -1
  60. package/dist/utils/logger.d.ts +53 -0
  61. package/dist/utils/logger.d.ts.map +1 -0
  62. package/dist/utils/logger.js +150 -0
  63. package/dist/utils/logger.js.map +1 -0
  64. package/dist/utils/retry.d.ts +70 -0
  65. package/dist/utils/retry.d.ts.map +1 -0
  66. package/dist/utils/retry.js +149 -0
  67. package/dist/utils/retry.js.map +1 -0
  68. package/dist/worker.d.ts +26 -3
  69. package/dist/worker.d.ts.map +1 -1
  70. package/dist/worker.js +159 -56
  71. package/dist/worker.js.map +1 -1
  72. package/package.json +11 -1
package/README.md CHANGED
@@ -1,29 +1,37 @@
  # flashQ TypeScript SDK

- **Drop-in BullMQ replacement. No Redis required.**
+ [![npm version](https://img.shields.io/npm/v/flashq)](https://www.npmjs.com/package/flashq)
+ [![npm downloads](https://img.shields.io/npm/dm/flashq)](https://www.npmjs.com/package/flashq)
+ [![GitHub stars](https://img.shields.io/github/stars/egeominotti/flashq)](https://github.com/egeominotti/flashq)
+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

- Same API. Single binary. 10x faster. Built with Rust.
+ > **High-performance job queue with BullMQ-compatible API. No Redis required.**

- **Perfect for AI workloads:** LLM pipelines, RAG, agents, batch inference.
+ flashQ is a drop-in replacement for BullMQ that runs on a single Rust binary. It's designed for AI/ML workloads with support for 10MB payloads, job dependencies, and 300K+ jobs/sec throughput.

- [![npm](https://img.shields.io/npm/v/flashq)](https://www.npmjs.com/package/flashq)
- [![GitHub](https://img.shields.io/github/stars/egeominotti/flashq)](https://github.com/egeominotti/flashq)
+ ## Features
+
+ - **BullMQ-Compatible API** - Migrate with minimal code changes
+ - **No Redis Required** - Single binary, zero infrastructure
+ - **10x Faster** - Rust + io_uring + lock-free data structures
+ - **AI/ML Ready** - 10MB payloads, job dependencies, progress tracking
+ - **Production Ready** - Typed errors, retry logic, graceful shutdown, observability hooks

  ## Installation

  ```bash
- bun add flashq
- # or
  npm install flashq
+ # or
+ yarn add flashq
+ # or
+ bun add flashq
  ```

- ## Start the Server
+ ## Quick Start

- ```bash
- # Pull from GitHub Container Registry (multi-arch: amd64 + arm64)
- docker pull ghcr.io/egeominotti/flashq:latest
+ ### 1. Start the Server

- # Run with HTTP/Dashboard enabled
+ ```bash
  docker run -d --name flashq \
    -p 6789:6789 \
    -p 6790:6790 \
@@ -31,320 +39,413 @@ docker run -d --name flashq \
    ghcr.io/egeominotti/flashq:latest
  ```

- Dashboard: http://localhost:6790
+ Dashboard available at http://localhost:6790

- ## Quick Start
+ ### 2. Create a Queue and Worker

  ```typescript
  import { Queue, Worker } from 'flashq';

- // Create queue
+ // Create a queue
  const queue = new Queue('emails');

- // Add job
- await queue.add('send', { to: 'user@example.com' });
+ // Add a job
+ const job = await queue.add('send-welcome', {
+   to: 'user@example.com',
+   subject: 'Welcome!',
+ });

- // Process jobs (auto-starts)
+ // Process jobs
  const worker = new Worker('emails', async (job) => {
-   console.log('Processing:', job.data);
-   return { sent: true };
+   console.log(`Sending email to ${job.data.to}`);
+   // ... send email
+   return { sent: true, timestamp: Date.now() };
  });
- ```

- ---
+ // Handle events
+ worker.on('completed', (job, result) => {
+   console.log(`Job ${job.id} completed:`, result);
+ });

- ## Built for AI Workloads
+ worker.on('failed', (job, error) => {
+   console.error(`Job ${job.id} failed:`, error.message);
+ });
+ ```

- flashQ is designed for modern AI/ML pipelines with **10MB payload support** for embeddings, images, and large contexts.
+ ## API Reference

- | Use Case | How flashQ Helps |
- |----------|------------------|
- | **LLM API Calls** | Rate limiting to control OpenAI/Anthropic costs |
- | **Batch Inference** | 300K jobs/sec throughput for high-volume inference |
- | **AI Agents** | Job dependencies for multi-step workflows |
- | **RAG Pipelines** | Chain jobs: embed → search → generate |
- | **Training Jobs** | Progress tracking, long timeouts, retries |
+ ### Queue

  ```typescript
- // AI Agent workflow example
- const agent = new Queue('ai-agent');
-
- // Step 1: Parse user intent
- const parse = await agent.add('parse', { prompt: userInput });
+ import { Queue } from 'flashq';

- // Step 2: Retrieve context (waits for step 1)
- const retrieve = await agent.add('retrieve', { query }, {
-   depends_on: [parse.id]
+ const queue = new Queue('my-queue', {
+   host: 'localhost',
+   port: 6789,
  });

- // Step 3: Generate response (waits for step 2)
- const generate = await agent.add('generate', { context }, {
-   depends_on: [retrieve.id],
-   priority: 10
+ // Add a single job
+ const job = await queue.add('job-name', { data: 'value' }, {
+   priority: 10,       // Higher = processed first
+   delay: 5000,        // Delay in ms
+   attempts: 3,        // Max retry attempts
+   backoff: 1000,      // Exponential backoff base (ms)
+   timeout: 30000,     // Processing timeout (ms)
+   jobId: 'unique-id', // Custom ID for idempotency
+   depends_on: [1, 2], // Wait for these jobs to complete
  });

- // Wait for the final result
- const result = await agent.finished(generate.id);
- ```
+ // Add multiple jobs
+ await queue.addBulk([
+   { name: 'task', data: { id: 1 } },
+   { name: 'task', data: { id: 2 }, opts: { priority: 10 } },
+ ]);
+
+ // Wait for job completion
+ const result = await queue.finished(job.id, 30000); // timeout in ms

- ---
+ // Queue control
+ await queue.pause();
+ await queue.resume();
+ await queue.drain();      // Remove all waiting jobs
+ await queue.obliterate(); // Remove ALL queue data

- ## ⚡ Performance Benchmark: flashQ vs BullMQ
+ // Cleanup
+ await queue.close();
+ ```

- > **flashQ is 3x to 10x faster than BullMQ** in real-world benchmarks.
+ ### Worker

- ### Test Environment
+ ```typescript
+ import { Worker } from 'flashq';

- | Component | Version | Configuration |
- |-----------|---------|---------------|
- | **flashQ Server** | 0.1.0 | Docker with `io_uring` enabled, Rust + tokio async runtime |
- | **BullMQ** | 5.66.5 | npm package |
- | **Redis** | 7.4.7 | Docker (`redis:7-alpine`), jemalloc allocator |
- | **Bun** | 1.3.6 | TypeScript runtime |
- | **Platform** | Linux/macOS | Docker containers |
+ const worker = new Worker('my-queue', async (job) => {
+   // Process job
+   console.log('Processing:', job.id, job.data);

- ### Benchmark Configuration
+   // Update progress
+   await worker.updateProgress(job.id, 50, 'Halfway done');

- ```
- Workers: 8
- Concurrency/worker: 50
- Total concurrency: 400
- Batch size: 1,000 jobs
- Data verification: Enabled (input === output)
+   // Return result (auto-acknowledged)
+   return { processed: true };
+ }, {
+   concurrency: 10,     // Parallel job processing
+   autostart: true,     // Start automatically (default: true)
+   closeTimeout: 30000, // Graceful shutdown timeout (ms)
+ });
+
+ // Events
+ worker.on('ready', () => console.log('Worker ready'));
+ worker.on('active', (job) => console.log('Job started:', job.id));
+ worker.on('completed', (job, result) => console.log('Job done:', result));
+ worker.on('failed', (job, error) => console.log('Job failed:', error));
+ worker.on('stopping', () => console.log('Worker stopping...'));
+ worker.on('stopped', () => console.log('Worker stopped'));
+
+ // Graceful shutdown
+ await worker.close();     // Wait for current jobs
+ await worker.close(true); // Force close immediately
  ```

- ### Results: No-op Jobs (100,000 jobs)
+ ### Low-Level Client

- Minimal job processing to measure pure queue overhead.
+ For advanced use cases, use the `FlashQ` client directly:

- | Metric | flashQ | BullMQ | Speedup |
- |--------|-------:|-------:|--------:|
- | **Push Rate** | 307,692 jobs/sec | 43,649 jobs/sec | **7.0x** |
- | **Process Rate** | 292,398 jobs/sec | 27,405 jobs/sec | **10.7x** |
- | **Total Time** | 0.67s | 5.94s | **8.9x** |
+ ```typescript
+ import { FlashQ } from 'flashq';

- ### Results: CPU-Bound Jobs (100,000 jobs)
+ const client = new FlashQ({
+   host: 'localhost',
+   port: 6789,
+   timeout: 5000,
+ });

- Each job performs realistic CPU work:
- - JSON serialize/deserialize
- - 10x SHA256 hash rounds
- - Array sort/filter/reduce (100 elements)
- - String manipulation
+ await client.connect();

- | Metric | flashQ | BullMQ | Speedup |
- |--------|-------:|-------:|--------:|
- | **Push Rate** | 220,751 jobs/sec | 43,422 jobs/sec | **5.1x** |
- | **Process Rate** | 62,814 jobs/sec | 23,923 jobs/sec | **2.6x** |
- | **Total Time** | 2.04s | 6.48s | **3.2x** |
+ // Push/Pull operations
+ const job = await client.push('queue', { data: 'value' });
+ const pulled = await client.pull('queue', 5000);
+ await client.ack(pulled.id, { result: 'done' });

- ### Results: 1 Million Jobs (flashQ only)
+ // Job management
+ const state = await client.getState(job.id);
+ const counts = await client.getJobCounts('queue');
+ await client.cancel(job.id);

- | Scenario | Push Rate | Process Rate | Total Time | Data Integrity |
- |----------|----------:|-------------:|-----------:|:--------------:|
- | **No-op** | 266,809/s | 262,536/s | 7.56s | ✅ 100% |
- | **CPU-bound** | 257,334/s | 65,240/s | 19.21s | ✅ 100% |
+ // Cron jobs
+ await client.addCron('daily-cleanup', {
+   queue: 'maintenance',
+   schedule: '0 0 * * *',
+   data: { task: 'cleanup' },
+ });

- ### Why flashQ is Faster
+ await client.close();
+ ```

- | Optimization | Description |
- |--------------|-------------|
- | **Rust + tokio** | Zero-cost abstractions, no GC pauses |
- | **io_uring** | Linux kernel async I/O (when available) |
- | **32 Shards** | Lock-free concurrent access via DashMap |
- | **MessagePack** | 40% smaller payloads vs JSON |
- | **Batch Operations** | Amortized network overhead |
- | **No Redis Dependency** | Direct TCP protocol, no intermediary |
+ ## Error Handling

- ### Run Benchmarks
+ flashQ provides typed error classes for precise error handling:

- ```bash
- # flashQ benchmarks
- bun run examples/heavy-benchmark.ts # No-op 100K
- bun run examples/cpu-benchmark.ts # CPU-bound 100K
- bun run examples/million-benchmark.ts # 1M jobs
-
- # BullMQ comparison (requires Redis)
- docker run -d -p 6379:6379 redis:7-alpine
- bun run examples/bullmq-benchmark.ts # No-op 100K
- bun run examples/bullmq-cpu-benchmark.ts # CPU-bound 100K
+ ```typescript
+ import {
+   FlashQError,
+   ConnectionError,
+   TimeoutError,
+   ValidationError,
+   ServerError,
+   AuthenticationError,
+ } from 'flashq';
+
+ try {
+   await client.push('queue', data);
+ } catch (error) {
+   if (error instanceof ConnectionError) {
+     console.log('Connection failed, retrying...');
+   } else if (error instanceof TimeoutError) {
+     console.log(`Timeout after ${error.timeout}ms`);
+   } else if (error instanceof ValidationError) {
+     console.log(`Invalid ${error.field}: ${error.message}`);
+   } else if (error instanceof ServerError) {
+     console.log(`Server error: ${error.serverCode}`);
+   }
+
+   // Check if error is retryable
+   if (error instanceof FlashQError && error.retryable) {
+     // Safe to retry
+   }
+ }
  ```

- ---
+ ## Retry Logic

- ## Queue
+ Built-in retry utilities with exponential backoff:

  ```typescript
- const queue = new Queue('emails', {
-   host: 'localhost',
-   port: 6789,
- });
+ import { withRetry, retryable, RetryPresets } from 'flashq';
+
+ // Wrap a single operation
+ const result = await withRetry(
+   () => client.push('queue', data),
+   {
+     maxRetries: 3,
+     initialDelay: 100,
+     maxDelay: 5000,
+     backoffMultiplier: 2,
+     jitter: true,
+     onRetry: (error, attempt, delay) => {
+       console.log(`Retry ${attempt} after ${delay}ms: ${error.message}`);
+     },
+   }
+ );
+
+ // Create a retryable function
+ const retryablePush = retryable(
+   (queue: string, data: unknown) => client.push(queue, data),
+   RetryPresets.standard
+ );
+
+ await retryablePush('emails', { to: 'user@example.com' });
+
+ // Available presets
+ RetryPresets.fast       // 2 retries, 50ms initial, 500ms max
+ RetryPresets.standard   // 3 retries, 100ms initial, 5s max
+ RetryPresets.aggressive // 5 retries, 200ms initial, 30s max
+ RetryPresets.none       // No retries
+ ```

- // Add single job
- await queue.add('send', data, {
-   priority: 10,
-   delay: 5000,
-   attempts: 3,
-   backoff: { type: 'exponential', delay: 1000 },
- });
+ ## Observability Hooks

- // Add bulk
- await queue.addBulk([
-   { name: 'send', data: { to: 'a@test.com' } },
-   { name: 'send', data: { to: 'b@test.com' }, opts: { priority: 10 } },
- ]);
+ Integrate with OpenTelemetry, DataDog, or any observability platform:

- // Control
- await queue.pause();
- await queue.resume();
- await queue.drain(); // remove waiting
- await queue.obliterate(); // remove all
+ ```typescript
+ import { FlashQ, ClientHooks } from 'flashq';
+
+ const hooks: ClientHooks = {
+   onPush: (ctx) => {
+     console.log(`Pushing to ${ctx.queue}`, ctx.data);
+   },
+   onPushComplete: (ctx) => {
+     console.log(`Pushed job ${ctx.job?.id} in ${ctx.duration}ms`);
+   },
+   onPushError: (ctx, error) => {
+     console.error(`Push failed: ${error.message}`);
+   },
+   onConnect: (ctx) => {
+     console.log('Connected to flashQ');
+   },
+   onDisconnect: (ctx) => {
+     console.log(`Disconnected: ${ctx.reason}`);
+   },
+ };
+
+ const client = new FlashQ({ hooks });
+ ```
+
+ Worker hooks for job processing:

- // Wait for job completion (synchronous workflow)
- const job = await queue.add('process', data);
- const result = await queue.finished(job.id); // blocks until done
+ ```typescript
+ import { Worker, WorkerHooks } from 'flashq';
+
+ const workerHooks: WorkerHooks = {
+   onProcess: (ctx) => {
+     console.log(`Processing job ${ctx.job.id}`);
+   },
+   onProcessComplete: (ctx) => {
+     console.log(`Job ${ctx.job.id} completed in ${ctx.duration}ms`);
+   },
+   onProcessError: (ctx, error) => {
+     console.error(`Job ${ctx.job.id} failed: ${error.message}`);
+   },
+ };
+
+ const worker = new Worker('queue', processor, { workerHooks });
  ```

- ## Worker
+ ## Logging
+
+ Configurable logging with request ID tracking:

  ```typescript
- // Auto-starts by default (like BullMQ)
- const worker = new Worker('emails', async (job) => {
-   return { done: true };
- }, {
-   concurrency: 10,
+ import { FlashQ, Logger, createLogger } from 'flashq';
+
+ // Use built-in logger
+ const client = new FlashQ({
+   logLevel: 'debug', // trace | debug | info | warn | error | silent
  });

- // Events
- worker.on('completed', (job, result) => {});
- worker.on('failed', (job, error) => {});
+ // Custom logger
+ const logger = createLogger({
+   level: 'info',
+   prefix: 'my-app',
+   timestamps: true,
+   handler: (entry) => {
+     // Send to your logging service
+     myLoggingService.log(entry);
+   },
+ });

- // Shutdown
- await worker.close();
+ // Request ID tracking for distributed tracing
+ logger.setRequestId('req-12345');
+ logger.info('Processing request', { userId: 123 });
+ // Output: [2024-01-15T10:30:00.000Z] [INFO] [my-app] [req-12345] Processing request {"userId":123}
  ```

- ## Job Options
+ ## Performance

- | Option | Type | Description |
- |--------|------|-------------|
- | `priority` | number | Higher = first (default: 0) |
- | `delay` | number | Delay in ms |
- | `attempts` | number | Retry count |
- | `backoff` | number \| object | Backoff config |
- | `timeout` | number | Processing timeout |
- | `jobId` | string | Custom ID for idempotency |
- | `depends_on` | number[] | Wait for these job IDs to complete |
+ flashQ is **3-10x faster** than BullMQ in real-world benchmarks:

- ## Key-Value Storage
-
- Redis-like KV store with TTL support and batch operations.
+ | Metric | flashQ | BullMQ | Speedup |
+ |--------|-------:|-------:|--------:|
+ | Push Rate | 307,692/s | 43,649/s | **7.0x** |
+ | Process Rate | 292,398/s | 27,405/s | **10.7x** |
+ | CPU-Bound Processing | 62,814/s | 23,923/s | **2.6x** |

- ```typescript
- import { FlashQ } from 'flashq';
+ ### Why flashQ is Faster

- const client = new FlashQ();
+ | Optimization | Description |
+ |--------------|-------------|
+ | **Rust + tokio** | Zero-cost abstractions, no GC pauses |
+ | **io_uring** | Linux kernel async I/O |
+ | **32 Shards** | Lock-free concurrent access |
+ | **MessagePack** | 40% smaller payloads |
+ | **No Redis** | Direct TCP protocol |

- // Basic operations
- await client.kvSet('user:123', { name: 'John', email: 'john@example.com' });
- const user = await client.kvGet('user:123');
- await client.kvDel('user:123');
+ ## AI/ML Workloads

- // With TTL (milliseconds)
- await client.kvSet('session:abc', { token: 'xyz' }, { ttl: 3600000 }); // 1 hour
+ flashQ is designed for AI pipelines with large payloads and complex workflows:

- // TTL operations
- await client.kvExpire('user:123', 60000); // Set TTL
- const ttl = await client.kvTtl('user:123'); // Get remaining TTL
+ ```typescript
+ // AI Agent with job dependencies
+ const agent = new Queue('ai-agent');

- // Batch operations (10-100x faster!)
- await client.kvMset([
-   { key: 'user:1', value: { name: 'Alice' } },
-   { key: 'user:2', value: { name: 'Bob' } },
-   { key: 'user:3', value: { name: 'Charlie' }, ttl: 60000 },
- ]);
+ // Step 1: Parse user intent
+ const parse = await agent.add('parse', { prompt: userInput });

- const users = await client.kvMget(['user:1', 'user:2', 'user:3']);
+ // Step 2: Retrieve context (waits for step 1)
+ const retrieve = await agent.add('retrieve', { query }, {
+   depends_on: [parse.id],
+ });

- // Pattern matching
- const userKeys = await client.kvKeys('user:*');
- const sessionKeys = await client.kvKeys('session:???');
+ // Step 3: Generate response (waits for step 2)
+ const generate = await agent.add('generate', { context }, {
+   depends_on: [retrieve.id],
+   priority: 10,
+ });

- // Atomic counters
- await client.kvIncr('page:views'); // +1
- await client.kvIncr('user:123:score', 10); // +10
- await client.kvDecr('stock:item:456'); // -1
+ // Wait for the final result
+ const result = await agent.finished(generate.id, 60000);
  ```

- ### KV Performance
-
- | Operation | Throughput |
- |-----------|------------|
- | Sequential SET/GET | ~30K ops/sec |
- | **Batch MSET** | **640K ops/sec** |
- | **Batch MGET** | **1.2M ops/sec** |
+ ## Configuration

- > Use batch operations (MSET/MGET) for best performance!
+ ### Client Options

- ## Pub/Sub
+ ```typescript
+ interface ClientOptions {
+   host?: string;                 // Default: 'localhost'
+   port?: number;                 // Default: 6789
+   httpPort?: number;             // Default: 6790
+   token?: string;                // Auth token
+   timeout?: number;              // Connection timeout (ms)
+   useHttp?: boolean;             // Use HTTP instead of TCP
+   useBinary?: boolean;           // Use MessagePack (40% smaller)
+   logLevel?: LogLevel;           // Logging level
+   compression?: boolean;         // Enable gzip compression
+   compressionThreshold?: number; // Min size to compress (bytes)
+   hooks?: ClientHooks;           // Observability hooks
+ }
+ ```

- Redis-like publish/subscribe messaging.
+ ### Worker Options

  ```typescript
- import { FlashQ } from 'flashq';
-
- const client = new FlashQ();
+ interface WorkerOptions {
+   concurrency?: number;      // Parallel jobs (default: 1)
+   autostart?: boolean;       // Auto-start (default: true)
+   closeTimeout?: number;     // Graceful shutdown timeout (ms)
+   workerHooks?: WorkerHooks; // Processing hooks
+ }
+ ```

- // Publish messages to a channel
- const receivers = await client.publish('notifications', { type: 'alert', text: 'Hello!' });
- console.log(`Message sent to ${receivers} subscribers`);
+ ## Examples

- // Subscribe to channels
- await client.pubsubSubscribe(['notifications', 'alerts']);
+ Run examples with:

- // Pattern subscribe (e.g., "events:*" matches "events:user:signup")
- await client.pubsubPsubscribe(['events:*', 'logs:*']);
+ ```bash
+ bun run examples/01-basic.ts
+ ```

- // List active channels
- const allChannels = await client.pubsubChannels();
- const eventChannels = await client.pubsubChannels('events:*');
+ | Example | Description |
+ |---------|-------------|
+ | `01-basic.ts` | Queue and Worker basics |
+ | `02-job-options.ts` | Priority, delay, retry |
+ | `03-bulk-jobs.ts` | Batch operations |
+ | `04-events.ts` | Worker events |
+ | `05-queue-control.ts` | Pause, resume, drain |
+ | `06-delayed.ts` | Scheduled jobs |
+ | `07-retry.ts` | Retry with backoff |
+ | `08-priority.ts` | Priority ordering |
+ | `09-concurrency.ts` | Parallel processing |
+ | `ai-workflow.ts` | AI agent with dependencies |

- // Get subscriber counts
- const counts = await client.pubsubNumsub(['notifications', 'alerts']);
- // [['notifications', 5], ['alerts', 2]]
+ ## Migration from BullMQ

- // Unsubscribe
- await client.pubsubUnsubscribe(['notifications']);
- await client.pubsubPunsubscribe(['events:*']);
- ```
+ flashQ provides a BullMQ-compatible API. Most code works with minimal changes:

- ## Examples
+ ```typescript
+ // Before (BullMQ)
+ import { Queue, Worker } from 'bullmq';
+ const queue = new Queue('my-queue', { connection: { host: 'redis' } });

- ```bash
- bun run examples/01-basic.ts
+ // After (flashQ)
+ import { Queue, Worker } from 'flashq';
+ const queue = new Queue('my-queue', { host: 'flashq-server' });
  ```

- | File | Description |
- |------|-------------|
- | 01-basic.ts | Queue + Worker basics |
- | 02-job-options.ts | Priority, delay, retry |
- | 03-bulk-jobs.ts | Add multiple jobs |
- | 04-events.ts | Worker events |
- | 05-queue-control.ts | Pause, resume, drain |
- | 06-delayed.ts | Scheduled jobs |
- | 07-retry.ts | Retry with backoff |
- | 08-priority.ts | Priority ordering |
- | 09-concurrency.ts | Parallel processing |
- | 10-benchmark.ts | Basic performance test |
- | **heavy-benchmark.ts** | 100K no-op benchmark |
- | **cpu-benchmark.ts** | 100K CPU-bound benchmark |
- | **million-benchmark.ts** | 1M jobs with verification |
- | **benchmark-full.ts** | Memory + latency + throughput |
- | **bullmq-benchmark.ts** | BullMQ comparison (no-op) |
- | **bullmq-cpu-benchmark.ts** | BullMQ comparison (CPU) |
- | **bullmq-benchmark-full.ts** | BullMQ memory + latency |
- | kv-benchmark.ts | KV store benchmark |
- | pubsub-example.ts | Pub/Sub messaging |
- | **ai-workflow.ts** | AI agent with job dependencies |
- | **ai-workflow-manual.ts** | Manual AI workflow control |
+ Key differences:
+ - No Redis connection required
+ - `connection` option replaced with `host`/`port`
+ - Some advanced BullMQ features may have different behavior

  ## License