flashq 0.3.1 → 0.3.3

Files changed (2)
  1. package/README.md +348 -238
  2. package/package.json +1 -1
package/README.md CHANGED
@@ -1,29 +1,39 @@
  # flashQ TypeScript SDK

- **Drop-in BullMQ replacement. No Redis required.**
+ [![npm version](https://img.shields.io/npm/v/flashq)](https://www.npmjs.com/package/flashq)
+ [![npm downloads](https://img.shields.io/npm/dm/flashq)](https://www.npmjs.com/package/flashq)
+ [![GitHub stars](https://img.shields.io/github/stars/egeominotti/flashq)](https://github.com/egeominotti/flashq)
+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

- Same API. Single binary. 10x faster. Built with Rust.
+ **[Website](https://flashq.dev)** · **[Documentation](https://flashq.dev/docs/)** · **[GitHub](https://github.com/egeominotti/flashq)**

- **Perfect for AI workloads:** LLM pipelines, RAG, agents, batch inference.
+ > **High-performance job queue with BullMQ-compatible API. No Redis required.**

- [![npm](https://img.shields.io/npm/v/flashq)](https://www.npmjs.com/package/flashq)
- [![GitHub](https://img.shields.io/github/stars/egeominotti/flashq)](https://github.com/egeominotti/flashq)
+ flashQ is a drop-in replacement for BullMQ that runs on a single Rust binary. It's designed for AI/ML workloads with support for 10MB payloads, job dependencies, and 300K+ jobs/sec throughput.
+
+ ## Features
+
+ - **BullMQ-Compatible API** - Migrate with minimal code changes
+ - **No Redis Required** - Single binary, zero infrastructure
+ - **10x Faster** - Rust + io_uring + lock-free data structures
+ - **AI/ML Ready** - 10MB payloads, job dependencies, progress tracking
+ - **Production Ready** - Typed errors, retry logic, graceful shutdown, observability hooks

  ## Installation

  ```bash
- bun add flashq
- # or
  npm install flashq
+ # or
+ yarn add flashq
+ # or
+ bun add flashq
  ```

- ## Start the Server
+ ## Quick Start

- ```bash
- # Pull from GitHub Container Registry (multi-arch: amd64 + arm64)
- docker pull ghcr.io/egeominotti/flashq:latest
+ ### 1. Start the Server

- # Run with HTTP/Dashboard enabled
+ ```bash
  docker run -d --name flashq \
  -p 6789:6789 \
  -p 6790:6790 \
@@ -31,320 +41,420 @@ docker run -d --name flashq \
  ghcr.io/egeominotti/flashq:latest
  ```

- Dashboard: http://localhost:6790
+ Dashboard available at http://localhost:6790

- ## Quick Start
+ ### 2. Create a Queue and Worker

  ```typescript
  import { Queue, Worker } from 'flashq';

- // Create queue
+ // Create a queue
  const queue = new Queue('emails');

- // Add job
- await queue.add('send', { to: 'user@example.com' });
+ // Add a job
+ const job = await queue.add('send-welcome', {
+ to: 'user@example.com',
+ subject: 'Welcome!',
+ });

- // Process jobs (auto-starts)
+ // Process jobs
  const worker = new Worker('emails', async (job) => {
- console.log('Processing:', job.data);
- return { sent: true };
+ console.log(`Sending email to ${job.data.to}`);
+ // ... send email
+ return { sent: true, timestamp: Date.now() };
  });
- ```

- ---
+ // Handle events
+ worker.on('completed', (job, result) => {
+ console.log(`Job ${job.id} completed:`, result);
+ });

- ## Built for AI Workloads
+ worker.on('failed', (job, error) => {
+ console.error(`Job ${job.id} failed:`, error.message);
+ });
+ ```

- flashQ is designed for modern AI/ML pipelines with **10MB payload support** for embeddings, images, and large contexts.
+ ## API Reference

- | Use Case | How flashQ Helps |
- |----------|------------------|
- | **LLM API Calls** | Rate limiting to control OpenAI/Anthropic costs |
- | **Batch Inference** | 300K jobs/sec throughput for high-volume inference |
- | **AI Agents** | Job dependencies for multi-step workflows |
- | **RAG Pipelines** | Chain jobs: embed → search → generate |
- | **Training Jobs** | Progress tracking, long timeouts, retries |
+ ### Queue

  ```typescript
- // AI Agent workflow example
- const agent = new Queue('ai-agent');
-
- // Step 1: Parse user intent
- const parse = await agent.add('parse', { prompt: userInput });
+ import { Queue } from 'flashq';

- // Step 2: Retrieve context (waits for step 1)
- const retrieve = await agent.add('retrieve', { query }, {
- depends_on: [parse.id]
+ const queue = new Queue('my-queue', {
+ host: 'localhost',
+ port: 6789,
  });

- // Step 3: Generate response (waits for step 2)
- const generate = await agent.add('generate', { context }, {
- depends_on: [retrieve.id],
- priority: 10
+ // Add a single job
+ const job = await queue.add('job-name', { data: 'value' }, {
+ priority: 10, // Higher = processed first
+ delay: 5000, // Delay in ms
+ attempts: 3, // Max retry attempts
+ backoff: 1000, // Exponential backoff base (ms)
+ timeout: 30000, // Processing timeout (ms)
+ jobId: 'unique-id', // Custom ID for idempotency
+ depends_on: [1, 2], // Wait for these jobs to complete
  });

- // Wait for the final result
- const result = await agent.finished(generate.id);
- ```
+ // Add multiple jobs
+ await queue.addBulk([
+ { name: 'task', data: { id: 1 } },
+ { name: 'task', data: { id: 2 }, opts: { priority: 10 } },
+ ]);
+
+ // Wait for job completion
+ const result = await queue.finished(job.id, 30000); // timeout in ms

- ---
+ // Queue control
+ await queue.pause();
+ await queue.resume();
+ await queue.drain(); // Remove all waiting jobs
+ await queue.obliterate(); // Remove ALL queue data

- ## ⚡ Performance Benchmark: flashQ vs BullMQ
+ // Cleanup
+ await queue.close();
+ ```
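The `priority` option above means higher values are pulled first. As a standalone sketch of that ordering rule (illustrative only, with the assumption that equal-priority jobs keep insertion order; this is not flashQ's actual scheduler):

```typescript
// Illustrative sketch of the ordering rule described above: higher
// priority first, FIFO within the same priority. Not flashQ internals.
interface PendingJob {
  id: number;       // insertion order
  priority: number; // higher = processed first
}

function nextJobOrder(jobs: PendingJob[]): number[] {
  return [...jobs]
    .sort((a, b) => b.priority - a.priority || a.id - b.id)
    .map((j) => j.id);
}

const order = nextJobOrder([
  { id: 1, priority: 0 },
  { id: 2, priority: 10 },
  { id: 3, priority: 0 },
]);
console.log(order); // [2, 1, 3]
```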

- > **flashQ is 3x to 10x faster than BullMQ** in real-world benchmarks.
+ ### Worker

- ### Test Environment
+ ```typescript
+ import { Worker } from 'flashq';

- | Component | Version | Configuration |
- |-----------|---------|---------------|
- | **flashQ Server** | 0.1.0 | Docker with `io_uring` enabled, Rust + tokio async runtime |
- | **BullMQ** | 5.66.5 | npm package |
- | **Redis** | 7.4.7 | Docker (`redis:7-alpine`), jemalloc allocator |
- | **Bun** | 1.3.6 | TypeScript runtime |
- | **Platform** | Linux/macOS | Docker containers |
+ const worker = new Worker('my-queue', async (job) => {
+ // Process job
+ console.log('Processing:', job.id, job.data);

- ### Benchmark Configuration
+ // Update progress
+ await worker.updateProgress(job.id, 50, 'Halfway done');

- ```
- Workers: 8
- Concurrency/worker: 50
- Total concurrency: 400
- Batch size: 1,000 jobs
- Data verification: Enabled (input === output)
+ // Return result (auto-acknowledged)
+ return { processed: true };
+ }, {
+ concurrency: 10, // Parallel job processing
+ autostart: true, // Start automatically (default: true)
+ closeTimeout: 30000, // Graceful shutdown timeout (ms)
+ });
+
+ // Events
+ worker.on('ready', () => console.log('Worker ready'));
+ worker.on('active', (job) => console.log('Job started:', job.id));
+ worker.on('completed', (job, result) => console.log('Job done:', result));
+ worker.on('failed', (job, error) => console.log('Job failed:', error));
+ worker.on('stopping', () => console.log('Worker stopping...'));
+ worker.on('stopped', () => console.log('Worker stopped'));
+
+ // Graceful shutdown
+ await worker.close(); // Wait for current jobs
+ await worker.close(true); // Force close immediately
  ```
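The `concurrency` option above caps how many jobs one worker processes in parallel. A minimal, self-contained sketch of such a cap (assumed semantics for illustration, not flashQ's scheduler; `processWithConcurrency` is a hypothetical helper, not part of the SDK):

```typescript
// Sketch of a concurrency cap like the `concurrency` option above:
// at most `limit` handlers run at once. Hypothetical helper, not flashq code.
async function processWithConcurrency<T, R>(
  items: T[],
  limit: number,
  handler: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  // Each "lane" synchronously claims the next unclaimed index, then awaits it.
  const lanes = Array.from({ length: Math.min(limit, items.length) }, async () => {
    while (next < items.length) {
      const i = next++;
      results[i] = await handler(items[i]);
    }
  });
  await Promise.all(lanes);
  return results;
}

const doubled = await processWithConcurrency([1, 2, 3, 4], 2, async (n) => n * 2);
console.log(doubled); // [2, 4, 6, 8]
```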

- ### Results: No-op Jobs (100,000 jobs)
+ ### Low-Level Client

- Minimal job processing to measure pure queue overhead.
+ For advanced use cases, use the `FlashQ` client directly:

- | Metric | flashQ | BullMQ | Speedup |
- |--------|-------:|-------:|--------:|
- | **Push Rate** | 307,692 jobs/sec | 43,649 jobs/sec | **7.0x** |
- | **Process Rate** | 292,398 jobs/sec | 27,405 jobs/sec | **10.7x** |
- | **Total Time** | 0.67s | 5.94s | **8.9x** |
+ ```typescript
+ import { FlashQ } from 'flashq';

- ### Results: CPU-Bound Jobs (100,000 jobs)
+ const client = new FlashQ({
+ host: 'localhost',
+ port: 6789,
+ timeout: 5000,
+ });

- Each job performs realistic CPU work:
- - JSON serialize/deserialize
- - 10x SHA256 hash rounds
- - Array sort/filter/reduce (100 elements)
- - String manipulation
+ await client.connect();

- | Metric | flashQ | BullMQ | Speedup |
- |--------|-------:|-------:|--------:|
- | **Push Rate** | 220,751 jobs/sec | 43,422 jobs/sec | **5.1x** |
- | **Process Rate** | 62,814 jobs/sec | 23,923 jobs/sec | **2.6x** |
- | **Total Time** | 2.04s | 6.48s | **3.2x** |
+ // Push/Pull operations
+ const job = await client.push('queue', { data: 'value' });
+ const pulled = await client.pull('queue', 5000);
+ await client.ack(pulled.id, { result: 'done' });

- ### Results: 1 Million Jobs (flashQ only)
+ // Job management
+ const state = await client.getState(job.id);
+ const counts = await client.getJobCounts('queue');
+ await client.cancel(job.id);

- | Scenario | Push Rate | Process Rate | Total Time | Data Integrity |
- |----------|----------:|-------------:|-----------:|:--------------:|
- | **No-op** | 266,809/s | 262,536/s | 7.56s | ✅ 100% |
- | **CPU-bound** | 257,334/s | 65,240/s | 19.21s | ✅ 100% |
+ // Cron jobs
+ await client.addCron('daily-cleanup', {
+ queue: 'maintenance',
+ schedule: '0 0 * * *',
+ data: { task: 'cleanup' },
+ });

- ### Why flashQ is Faster
+ await client.close();
+ ```

- | Optimization | Description |
- |--------------|-------------|
- | **Rust + tokio** | Zero-cost abstractions, no GC pauses |
- | **io_uring** | Linux kernel async I/O (when available) |
- | **32 Shards** | Lock-free concurrent access via DashMap |
- | **MessagePack** | 40% smaller payloads vs JSON |
- | **Batch Operations** | Amortized network overhead |
- | **No Redis Dependency** | Direct TCP protocol, no intermediary |
+ ## Error Handling

- ### Run Benchmarks
+ flashQ provides typed error classes for precise error handling:

- ```bash
- # flashQ benchmarks
- bun run examples/heavy-benchmark.ts # No-op 100K
- bun run examples/cpu-benchmark.ts # CPU-bound 100K
- bun run examples/million-benchmark.ts # 1M jobs
-
- # BullMQ comparison (requires Redis)
- docker run -d -p 6379:6379 redis:7-alpine
- bun run examples/bullmq-benchmark.ts # No-op 100K
- bun run examples/bullmq-cpu-benchmark.ts # CPU-bound 100K
+ ```typescript
+ import {
+ FlashQError,
+ ConnectionError,
+ TimeoutError,
+ ValidationError,
+ ServerError,
+ AuthenticationError,
+ } from 'flashq';
+
+ try {
+ await client.push('queue', data);
+ } catch (error) {
+ if (error instanceof ConnectionError) {
+ console.log('Connection failed, retrying...');
+ } else if (error instanceof TimeoutError) {
+ console.log(`Timeout after ${error.timeout}ms`);
+ } else if (error instanceof ValidationError) {
+ console.log(`Invalid ${error.field}: ${error.message}`);
+ } else if (error instanceof ServerError) {
+ console.log(`Server error: ${error.serverCode}`);
+ }
+
+ // Check if error is retryable
+ if (error instanceof FlashQError && error.retryable) {
+ // Safe to retry
+ }
+ }
  ```

- ---
+ ## Retry Logic

- ## Queue
+ Built-in retry utilities with exponential backoff:

  ```typescript
- const queue = new Queue('emails', {
- host: 'localhost',
- port: 6789,
- });
+ import { withRetry, retryable, RetryPresets } from 'flashq';
+
+ // Wrap a single operation
+ const result = await withRetry(
+ () => client.push('queue', data),
+ {
+ maxRetries: 3,
+ initialDelay: 100,
+ maxDelay: 5000,
+ backoffMultiplier: 2,
+ jitter: true,
+ onRetry: (error, attempt, delay) => {
+ console.log(`Retry ${attempt} after ${delay}ms: ${error.message}`);
+ },
+ }
+ );
+
+ // Create a retryable function
+ const retryablePush = retryable(
+ (queue: string, data: unknown) => client.push(queue, data),
+ RetryPresets.standard
+ );
+
+ await retryablePush('emails', { to: 'user@example.com' });
+
+ // Available presets
+ RetryPresets.fast // 2 retries, 50ms initial, 500ms max
+ RetryPresets.standard // 3 retries, 100ms initial, 5s max
+ RetryPresets.aggressive // 5 retries, 200ms initial, 30s max
+ RetryPresets.none // No retries
+ ```
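For intuition, the schedule those options (`initialDelay`, `backoffMultiplier`, `maxDelay`) describe can be computed directly. This is an illustrative formula inferred from the option names, not flashQ's exact implementation (and `jitter: true` would add randomness on top):

```typescript
// Delay before retry attempt n (1-based), per the options above:
// initialDelay * backoffMultiplier^(n-1), capped at maxDelay.
// Illustrative only; flashQ's withRetry may differ (jitter adds randomness).
function backoffDelay(
  attempt: number,
  initialDelay: number,
  backoffMultiplier: number,
  maxDelay: number,
): number {
  return Math.min(initialDelay * backoffMultiplier ** (attempt - 1), maxDelay);
}

// The options from the example above: 100ms initial, 2x multiplier, 5s cap.
const schedule = [1, 2, 3, 4, 5, 6, 7].map((n) => backoffDelay(n, 100, 2, 5000));
console.log(schedule); // [100, 200, 400, 800, 1600, 3200, 5000]
```

Note how the seventh attempt would be 6400ms but is clamped to `maxDelay`.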

- // Add single job
- await queue.add('send', data, {
- priority: 10,
- delay: 5000,
- attempts: 3,
- backoff: { type: 'exponential', delay: 1000 },
- });
+ ## Observability Hooks

- // Add bulk
- await queue.addBulk([
- { name: 'send', data: { to: 'a@test.com' } },
- { name: 'send', data: { to: 'b@test.com' }, opts: { priority: 10 } },
- ]);
+ Integrate with OpenTelemetry, DataDog, or any observability platform:

- // Control
- await queue.pause();
- await queue.resume();
- await queue.drain(); // remove waiting
- await queue.obliterate(); // remove all
+ ```typescript
+ import { FlashQ, ClientHooks } from 'flashq';
+
+ const hooks: ClientHooks = {
+ onPush: (ctx) => {
+ console.log(`Pushing to ${ctx.queue}`, ctx.data);
+ },
+ onPushComplete: (ctx) => {
+ console.log(`Pushed job ${ctx.job?.id} in ${ctx.duration}ms`);
+ },
+ onPushError: (ctx, error) => {
+ console.error(`Push failed: ${error.message}`);
+ },
+ onConnect: (ctx) => {
+ console.log('Connected to flashQ');
+ },
+ onDisconnect: (ctx) => {
+ console.log(`Disconnected: ${ctx.reason}`);
+ },
+ };
+
+ const client = new FlashQ({ hooks });
+ ```
+
+ Worker hooks for job processing:

- // Wait for job completion (synchronous workflow)
- const job = await queue.add('process', data);
- const result = await queue.finished(job.id); // blocks until done
+ ```typescript
+ import { Worker, WorkerHooks } from 'flashq';
+
+ const workerHooks: WorkerHooks = {
+ onProcess: (ctx) => {
+ console.log(`Processing job ${ctx.job.id}`);
+ },
+ onProcessComplete: (ctx) => {
+ console.log(`Job ${ctx.job.id} completed in ${ctx.duration}ms`);
+ },
+ onProcessError: (ctx, error) => {
+ console.error(`Job ${ctx.job.id} failed: ${error.message}`);
+ },
+ };
+
+ const worker = new Worker('queue', processor, { workerHooks });
  ```

- ## Worker
+ ## Logging
+
+ Configurable logging with request ID tracking:

  ```typescript
- // Auto-starts by default (like BullMQ)
- const worker = new Worker('emails', async (job) => {
- return { done: true };
- }, {
- concurrency: 10,
+ import { FlashQ, Logger, createLogger } from 'flashq';
+
+ // Use built-in logger
+ const client = new FlashQ({
+ logLevel: 'debug', // trace | debug | info | warn | error | silent
  });

- // Events
- worker.on('completed', (job, result) => {});
- worker.on('failed', (job, error) => {});
+ // Custom logger
+ const logger = createLogger({
+ level: 'info',
+ prefix: 'my-app',
+ timestamps: true,
+ handler: (entry) => {
+ // Send to your logging service
+ myLoggingService.log(entry);
+ },
+ });

- // Shutdown
- await worker.close();
+ // Request ID tracking for distributed tracing
+ logger.setRequestId('req-12345');
+ logger.info('Processing request', { userId: 123 });
+ // Output: [2024-01-15T10:30:00.000Z] [INFO] [my-app] [req-12345] Processing request {"userId":123}
  ```

- ## Job Options
+ ## Performance

- | Option | Type | Description |
- |--------|------|-------------|
- | `priority` | number | Higher = first (default: 0) |
- | `delay` | number | Delay in ms |
- | `attempts` | number | Retry count |
- | `backoff` | number \| object | Backoff config |
- | `timeout` | number | Processing timeout |
- | `jobId` | string | Custom ID for idempotency |
- | `depends_on` | number[] | Wait for these job IDs to complete |
+ flashQ is **3-10x faster** than BullMQ in real-world benchmarks:

- ## Key-Value Storage
-
- Redis-like KV store with TTL support and batch operations.
+ | Metric | flashQ | BullMQ | Speedup |
+ |--------|-------:|-------:|--------:|
+ | Push Rate | 307,692/s | 43,649/s | **7.0x** |
+ | Process Rate | 292,398/s | 27,405/s | **10.7x** |
+ | CPU-Bound Processing | 62,814/s | 23,923/s | **2.6x** |

- ```typescript
- import { FlashQ } from 'flashq';
+ ### Why flashQ is Faster

- const client = new FlashQ();
+ | Optimization | Description |
+ |--------------|-------------|
+ | **Rust + tokio** | Zero-cost abstractions, no GC pauses |
+ | **io_uring** | Linux kernel async I/O |
+ | **32 Shards** | Lock-free concurrent access |
+ | **MessagePack** | 40% smaller payloads |
+ | **No Redis** | Direct TCP protocol |

- // Basic operations
- await client.kvSet('user:123', { name: 'John', email: 'john@example.com' });
- const user = await client.kvGet('user:123');
- await client.kvDel('user:123');
+ ## AI/ML Workloads

- // With TTL (milliseconds)
- await client.kvSet('session:abc', { token: 'xyz' }, { ttl: 3600000 }); // 1 hour
+ flashQ is designed for AI pipelines with large payloads and complex workflows:

- // TTL operations
- await client.kvExpire('user:123', 60000); // Set TTL
- const ttl = await client.kvTtl('user:123'); // Get remaining TTL
+ ```typescript
+ // AI Agent with job dependencies
+ const agent = new Queue('ai-agent');

- // Batch operations (10-100x faster!)
- await client.kvMset([
- { key: 'user:1', value: { name: 'Alice' } },
- { key: 'user:2', value: { name: 'Bob' } },
- { key: 'user:3', value: { name: 'Charlie' }, ttl: 60000 },
- ]);
+ // Step 1: Parse user intent
+ const parse = await agent.add('parse', { prompt: userInput });

- const users = await client.kvMget(['user:1', 'user:2', 'user:3']);
+ // Step 2: Retrieve context (waits for step 1)
+ const retrieve = await agent.add('retrieve', { query }, {
+ depends_on: [parse.id],
+ });

- // Pattern matching
- const userKeys = await client.kvKeys('user:*');
- const sessionKeys = await client.kvKeys('session:???');
+ // Step 3: Generate response (waits for step 2)
+ const generate = await agent.add('generate', { context }, {
+ depends_on: [retrieve.id],
+ priority: 10,
+ });

- // Atomic counters
- await client.kvIncr('page:views'); // +1
- await client.kvIncr('user:123:score', 10); // +10
- await client.kvDecr('stock:item:456'); // -1
+ // Wait for the final result
+ const result = await agent.finished(generate.id, 60000);
  ```
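The `depends_on` chains above form a small dependency graph: a job becomes runnable only when every job it depends on has completed. A standalone sketch of that resolution order (illustrative only, not the server's scheduler; `completionOrder` is a hypothetical helper):

```typescript
// Sketch of depends_on resolution: a job runs only after all jobs it
// depends on have completed. Illustrative only, not flashQ's scheduler.
interface DepJob {
  id: number;
  depends_on: number[];
}

function completionOrder(jobs: DepJob[]): number[] {
  const done = new Set<number>();
  const order: number[] = [];
  const pending = [...jobs];
  while (pending.length > 0) {
    // Pick the first job whose dependencies are all satisfied.
    const i = pending.findIndex((j) => j.depends_on.every((d) => done.has(d)));
    if (i === -1) throw new Error("unresolvable dependency cycle");
    const job = pending.splice(i, 1)[0];
    done.add(job.id);
    order.push(job.id);
  }
  return order;
}

// parse -> retrieve -> generate, as in the workflow above
const order = completionOrder([
  { id: 3, depends_on: [2] }, // generate
  { id: 1, depends_on: [] },  // parse
  { id: 2, depends_on: [1] }, // retrieve
]);
console.log(order); // [1, 2, 3]
```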

- ### KV Performance
-
- | Operation | Throughput |
- |-----------|------------|
- | Sequential SET/GET | ~30K ops/sec |
- | **Batch MSET** | **640K ops/sec** |
- | **Batch MGET** | **1.2M ops/sec** |
+ ## Configuration

- > Use batch operations (MSET/MGET) for best performance!
+ ### Client Options

- ## Pub/Sub
+ ```typescript
+ interface ClientOptions {
+ host?: string; // Default: 'localhost'
+ port?: number; // Default: 6789
+ httpPort?: number; // Default: 6790
+ token?: string; // Auth token
+ timeout?: number; // Connection timeout (ms)
+ useHttp?: boolean; // Use HTTP instead of TCP
+ useBinary?: boolean; // Use MessagePack (40% smaller)
+ logLevel?: LogLevel; // Logging level
+ compression?: boolean; // Enable gzip compression
+ compressionThreshold?: number; // Min size to compress (bytes)
+ hooks?: ClientHooks; // Observability hooks
+ }
+ ```

- Redis-like publish/subscribe messaging.
+ ### Worker Options

  ```typescript
- import { FlashQ } from 'flashq';
+ interface WorkerOptions {
+ concurrency?: number; // Parallel jobs (default: 1)
+ autostart?: boolean; // Auto-start (default: true)
+ closeTimeout?: number; // Graceful shutdown timeout (ms)
+ workerHooks?: WorkerHooks; // Processing hooks
+ }
+ ```

- const client = new FlashQ();
+ ## Examples

- // Publish messages to a channel
- const receivers = await client.publish('notifications', { type: 'alert', text: 'Hello!' });
- console.log(`Message sent to ${receivers} subscribers`);
+ Run examples with:
+
+ ```bash
+ bun run examples/01-basic.ts
+ ```

- // Subscribe to channels
- await client.pubsubSubscribe(['notifications', 'alerts']);
+ | Example | Description |
+ |---------|-------------|
+ | `01-basic.ts` | Queue and Worker basics |
+ | `02-job-options.ts` | Priority, delay, retry |
+ | `03-bulk-jobs.ts` | Batch operations |
+ | `04-events.ts` | Worker events |
+ | `05-queue-control.ts` | Pause, resume, drain |
+ | `06-delayed.ts` | Scheduled jobs |
+ | `07-retry.ts` | Retry with backoff |
+ | `08-priority.ts` | Priority ordering |
+ | `09-concurrency.ts` | Parallel processing |
+ | `ai-workflow.ts` | AI agent with dependencies |

- // Pattern subscribe (e.g., "events:*" matches "events:user:signup")
- await client.pubsubPsubscribe(['events:*', 'logs:*']);
+ ## Migration from BullMQ

- // List active channels
- const allChannels = await client.pubsubChannels();
- const eventChannels = await client.pubsubChannels('events:*');
+ flashQ provides a BullMQ-compatible API. Most code works with minimal changes:

- // Get subscriber counts
- const counts = await client.pubsubNumsub(['notifications', 'alerts']);
- // [['notifications', 5], ['alerts', 2]]
+ ```typescript
+ // Before (BullMQ)
+ import { Queue, Worker } from 'bullmq';
+ const queue = new Queue('my-queue', { connection: { host: 'redis' } });

- // Unsubscribe
- await client.pubsubUnsubscribe(['notifications']);
- await client.pubsubPunsubscribe(['events:*']);
+ // After (flashQ)
+ import { Queue, Worker } from 'flashq';
+ const queue = new Queue('my-queue', { host: 'flashq-server' });
  ```

- ## Examples
+ Key differences:
+ - No Redis connection required
+ - `connection` option replaced with `host`/`port`
+ - Some advanced BullMQ features may have different behavior
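Since `connection` is replaced by `host`/`port`, that part of a migration can be mechanized. The adapter below is a hypothetical helper (not part of the flashq package) mapping a BullMQ-style `connection` object to the flashQ-style option shape shown above; the fallback defaults (`localhost`, `6789`) come from this README:

```typescript
// Hypothetical helper (not part of flashq): map BullMQ-style options
// to the flashQ option shape described above.
interface BullMQStyleOpts {
  connection?: { host?: string; port?: number };
}

interface FlashQStyleOpts {
  host?: string;
  port?: number;
}

function toFlashQOptions(opts: BullMQStyleOpts): FlashQStyleOpts {
  return {
    host: opts.connection?.host ?? "localhost", // flashQ default host
    port: opts.connection?.port ?? 6789,        // flashQ default TCP port
  };
}

console.log(toFlashQOptions({ connection: { host: "redis" } }));
// → { host: 'redis', port: 6789 }
```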

- ```bash
- bun run examples/01-basic.ts
- ```
+ ## Resources

- | File | Description |
- |------|-------------|
- | 01-basic.ts | Queue + Worker basics |
- | 02-job-options.ts | Priority, delay, retry |
- | 03-bulk-jobs.ts | Add multiple jobs |
- | 04-events.ts | Worker events |
- | 05-queue-control.ts | Pause, resume, drain |
- | 06-delayed.ts | Scheduled jobs |
- | 07-retry.ts | Retry with backoff |
- | 08-priority.ts | Priority ordering |
- | 09-concurrency.ts | Parallel processing |
- | 10-benchmark.ts | Basic performance test |
- | **heavy-benchmark.ts** | 100K no-op benchmark |
- | **cpu-benchmark.ts** | 100K CPU-bound benchmark |
- | **million-benchmark.ts** | 1M jobs with verification |
- | **benchmark-full.ts** | Memory + latency + throughput |
- | **bullmq-benchmark.ts** | BullMQ comparison (no-op) |
- | **bullmq-cpu-benchmark.ts** | BullMQ comparison (CPU) |
- | **bullmq-benchmark-full.ts** | BullMQ memory + latency |
- | kv-benchmark.ts | KV store benchmark |
- | pubsub-example.ts | Pub/Sub messaging |
- | **ai-workflow.ts** | AI agent with job dependencies |
- | **ai-workflow-manual.ts** | Manual AI workflow control |
+ - **Website:** [flashq.dev](https://flashq.dev)
+ - **Documentation:** [flashq.dev/docs](https://flashq.dev/docs/)
+ - **GitHub:** [github.com/egeominotti/flashq](https://github.com/egeominotti/flashq)
+ - **npm:** [npmjs.com/package/flashq](https://www.npmjs.com/package/flashq)

  ## License

package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "flashq",
- "version": "0.3.1",
+ "version": "0.3.3",
  "description": "Official TypeScript SDK for flashQ - High-Performance Job Queue (BullMQ-compatible API)",
  "author": "Egeo Minotti",
  "license": "MIT",