flashq 0.3.1 → 0.3.3
- package/README.md +348 -238
- package/package.json +1 -1
package/README.md
CHANGED
# flashQ TypeScript SDK

[npm](https://www.npmjs.com/package/flashq) · [GitHub](https://github.com/egeominotti/flashq) · [MIT License](https://opensource.org/licenses/MIT)

**[Website](https://flashq.dev)** · **[Documentation](https://flashq.dev/docs/)** · **[GitHub](https://github.com/egeominotti/flashq)**

> **High-performance job queue with BullMQ-compatible API. No Redis required.**

flashQ is a drop-in replacement for BullMQ that runs on a single Rust binary. It's designed for AI/ML workloads with support for 10MB payloads, job dependencies, and 300K+ jobs/sec throughput.

## Features

- **BullMQ-Compatible API** - Migrate with minimal code changes
- **No Redis Required** - Single binary, zero infrastructure
- **10x Faster** - Rust + io_uring + lock-free data structures
- **AI/ML Ready** - 10MB payloads, job dependencies, progress tracking
- **Production Ready** - Typed errors, retry logic, graceful shutdown, observability hooks

## Installation

```bash
npm install flashq
# or
yarn add flashq
# or
bun add flashq
```

## Quick Start

### 1. Start the Server

```bash
docker run -d --name flashq \
  -p 6789:6789 \
  -p 6790:6790 \
  ghcr.io/egeominotti/flashq:latest
```

Dashboard available at http://localhost:6790

### 2. Create a Queue and Worker

```typescript
import { Queue, Worker } from 'flashq';

// Create a queue
const queue = new Queue('emails');

// Add a job
const job = await queue.add('send-welcome', {
  to: 'user@example.com',
  subject: 'Welcome!',
});

// Process jobs
const worker = new Worker('emails', async (job) => {
  console.log(`Sending email to ${job.data.to}`);
  // ... send email
  return { sent: true, timestamp: Date.now() };
});

// Handle events
worker.on('completed', (job, result) => {
  console.log(`Job ${job.id} completed:`, result);
});

worker.on('failed', (job, error) => {
  console.error(`Job ${job.id} failed:`, error.message);
});
```

## API Reference

### Queue

```typescript
import { Queue } from 'flashq';

const queue = new Queue('my-queue', {
  host: 'localhost',
  port: 6789,
});

// Add a single job
const job = await queue.add('job-name', { data: 'value' }, {
  priority: 10,       // Higher = processed first
  delay: 5000,        // Delay in ms
  attempts: 3,        // Max retry attempts
  backoff: 1000,      // Exponential backoff base (ms)
  timeout: 30000,     // Processing timeout (ms)
  jobId: 'unique-id', // Custom ID for idempotency
  depends_on: [1, 2], // Wait for these jobs to complete
});

// Add multiple jobs
await queue.addBulk([
  { name: 'task', data: { id: 1 } },
  { name: 'task', data: { id: 2 }, opts: { priority: 10 } },
]);

// Wait for job completion
const result = await queue.finished(job.id, 30000); // timeout in ms

// Queue control
await queue.pause();
await queue.resume();
await queue.drain();      // Remove all waiting jobs
await queue.obliterate(); // Remove ALL queue data

// Cleanup
await queue.close();
```

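As a mental model for the ordering options above, higher `priority` wins and jobs with equal priority run first-in, first-out. A tiny illustrative sketch of that selection rule — hypothetical types, not the server's actual scheduler:

```typescript
// Illustrative model of "higher priority = processed first";
// PendingJob and nextJob are hypothetical, for explanation only.
interface PendingJob {
  id: number;
  priority: number;  // default 0; higher runs first
  enqueuedAt: number;
}

function nextJob(waiting: PendingJob[]): PendingJob | undefined {
  // Sort by priority descending, then FIFO within the same priority.
  return [...waiting].sort(
    (a, b) => b.priority - a.priority || a.enqueuedAt - b.enqueuedAt
  )[0];
}

const waiting: PendingJob[] = [
  { id: 1, priority: 0, enqueuedAt: 1 },
  { id: 2, priority: 10, enqueuedAt: 2 },
  { id: 3, priority: 10, enqueuedAt: 3 },
];
console.log(nextJob(waiting)?.id); // → 2 (highest priority, earliest enqueue)
```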
### Worker

```typescript
import { Worker } from 'flashq';

const worker = new Worker('my-queue', async (job) => {
  // Process job
  console.log('Processing:', job.id, job.data);

  // Update progress
  await worker.updateProgress(job.id, 50, 'Halfway done');

  // Return result (auto-acknowledged)
  return { processed: true };
}, {
  concurrency: 10,     // Parallel job processing
  autostart: true,     // Start automatically (default: true)
  closeTimeout: 30000, // Graceful shutdown timeout (ms)
});

// Events
worker.on('ready', () => console.log('Worker ready'));
worker.on('active', (job) => console.log('Job started:', job.id));
worker.on('completed', (job, result) => console.log('Job done:', result));
worker.on('failed', (job, error) => console.log('Job failed:', error));
worker.on('stopping', () => console.log('Worker stopping...'));
worker.on('stopped', () => console.log('Worker stopped'));

// Graceful shutdown
await worker.close();     // Wait for current jobs
await worker.close(true); // Force close immediately
```

### Low-Level Client

For advanced use cases, use the `FlashQ` client directly:

```typescript
import { FlashQ } from 'flashq';

const client = new FlashQ({
  host: 'localhost',
  port: 6789,
  timeout: 5000,
});

await client.connect();

// Push/Pull operations
const job = await client.push('queue', { data: 'value' });
const pulled = await client.pull('queue', 5000);
await client.ack(pulled.id, { result: 'done' });

// Job management
const state = await client.getState(job.id);
const counts = await client.getJobCounts('queue');
await client.cancel(job.id);

// Cron jobs
await client.addCron('daily-cleanup', {
  queue: 'maintenance',
  schedule: '0 0 * * *',
  data: { task: 'cleanup' },
});

await client.close();
```

## Error Handling

flashQ provides typed error classes for precise error handling:

```typescript
import {
  FlashQError,
  ConnectionError,
  TimeoutError,
  ValidationError,
  ServerError,
  AuthenticationError,
} from 'flashq';

try {
  await client.push('queue', data);
} catch (error) {
  if (error instanceof ConnectionError) {
    console.log('Connection failed, retrying...');
  } else if (error instanceof TimeoutError) {
    console.log(`Timeout after ${error.timeout}ms`);
  } else if (error instanceof ValidationError) {
    console.log(`Invalid ${error.field}: ${error.message}`);
  } else if (error instanceof ServerError) {
    console.log(`Server error: ${error.serverCode}`);
  }

  // Check if error is retryable
  if (error instanceof FlashQError && error.retryable) {
    // Safe to retry
  }
}
```

## Retry Logic

Built-in retry utilities with exponential backoff:

```typescript
import { withRetry, retryable, RetryPresets } from 'flashq';

// Wrap a single operation
const result = await withRetry(
  () => client.push('queue', data),
  {
    maxRetries: 3,
    initialDelay: 100,
    maxDelay: 5000,
    backoffMultiplier: 2,
    jitter: true,
    onRetry: (error, attempt, delay) => {
      console.log(`Retry ${attempt} after ${delay}ms: ${error.message}`);
    },
  }
);

// Create a retryable function
const retryablePush = retryable(
  (queue: string, data: unknown) => client.push(queue, data),
  RetryPresets.standard
);

await retryablePush('emails', { to: 'user@example.com' });

// Available presets
RetryPresets.fast       // 2 retries, 50ms initial, 500ms max
RetryPresets.standard   // 3 retries, 100ms initial, 5s max
RetryPresets.aggressive // 5 retries, 200ms initial, 30s max
RetryPresets.none       // No retries
```

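The delay schedule these options imply can be sketched numerically. This is a hypothetical helper mirroring `initialDelay`, `backoffMultiplier`, and `maxDelay` (with `jitter` omitted for determinism), not the SDK's actual implementation:

```typescript
// Hypothetical sketch of an exponential backoff schedule:
// attempt n waits initialDelay * backoffMultiplier^n, capped at maxDelay.
function backoffSchedule(opts: {
  maxRetries: number;
  initialDelay: number;
  maxDelay: number;
  backoffMultiplier: number;
}): number[] {
  const delays: number[] = [];
  for (let attempt = 0; attempt < opts.maxRetries; attempt++) {
    const delay = opts.initialDelay * Math.pow(opts.backoffMultiplier, attempt);
    delays.push(Math.min(delay, opts.maxDelay));
  }
  return delays;
}

// Settings like RetryPresets.standard: 3 retries, 100ms initial, 5s max
console.log(backoffSchedule({
  maxRetries: 3, initialDelay: 100, maxDelay: 5000, backoffMultiplier: 2,
})); // delays of 100, 200, 400 ms
```

With `jitter: true`, a real implementation would additionally randomize each delay to avoid thundering-herd retries.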
## Observability Hooks

Integrate with OpenTelemetry, DataDog, or any observability platform:

```typescript
import { FlashQ, ClientHooks } from 'flashq';

const hooks: ClientHooks = {
  onPush: (ctx) => {
    console.log(`Pushing to ${ctx.queue}`, ctx.data);
  },
  onPushComplete: (ctx) => {
    console.log(`Pushed job ${ctx.job?.id} in ${ctx.duration}ms`);
  },
  onPushError: (ctx, error) => {
    console.error(`Push failed: ${error.message}`);
  },
  onConnect: (ctx) => {
    console.log('Connected to flashQ');
  },
  onDisconnect: (ctx) => {
    console.log(`Disconnected: ${ctx.reason}`);
  },
};

const client = new FlashQ({ hooks });
```

Worker hooks for job processing:

```typescript
import { Worker, WorkerHooks } from 'flashq';

const workerHooks: WorkerHooks = {
  onProcess: (ctx) => {
    console.log(`Processing job ${ctx.job.id}`);
  },
  onProcessComplete: (ctx) => {
    console.log(`Job ${ctx.job.id} completed in ${ctx.duration}ms`);
  },
  onProcessError: (ctx, error) => {
    console.error(`Job ${ctx.job.id} failed: ${error.message}`);
  },
};

const worker = new Worker('queue', processor, { workerHooks });
```

## Logging

Configurable logging with request ID tracking:

```typescript
import { FlashQ, Logger, createLogger } from 'flashq';

// Use built-in logger
const client = new FlashQ({
  logLevel: 'debug', // trace | debug | info | warn | error | silent
});

// Custom logger
const logger = createLogger({
  level: 'info',
  prefix: 'my-app',
  timestamps: true,
  handler: (entry) => {
    // Send to your logging service
    myLoggingService.log(entry);
  },
});

// Request ID tracking for distributed tracing
logger.setRequestId('req-12345');
logger.info('Processing request', { userId: 123 });
// Output: [2024-01-15T10:30:00.000Z] [INFO] [my-app] [req-12345] Processing request {"userId":123}
```

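The output line above follows a fixed shape: bracketed timestamp, level, prefix, and request ID, then the message and JSON context. A hypothetical formatter reproducing that shape (the real `createLogger` internals may differ):

```typescript
// Hypothetical reconstruction of the log line format shown above;
// not the actual flashq logger implementation.
function formatLogLine(
  timestamp: string, // ISO-8601, e.g. new Date().toISOString()
  level: string,
  prefix: string,
  requestId: string | undefined,
  message: string,
  context?: Record<string, unknown>
): string {
  const parts = [`[${timestamp}]`, `[${level.toUpperCase()}]`, `[${prefix}]`];
  if (requestId) parts.push(`[${requestId}]`); // request ID is optional
  parts.push(message);
  if (context) parts.push(JSON.stringify(context));
  return parts.join(' ');
}

console.log(formatLogLine(
  '2024-01-15T10:30:00.000Z', 'info', 'my-app', 'req-12345',
  'Processing request', { userId: 123 }
));
// → [2024-01-15T10:30:00.000Z] [INFO] [my-app] [req-12345] Processing request {"userId":123}
```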
## Performance

flashQ is **3-10x faster** than BullMQ in real-world benchmarks:

| Metric | flashQ | BullMQ | Speedup |
|--------|-------:|-------:|--------:|
| Push Rate | 307,692/s | 43,649/s | **7.0x** |
| Process Rate | 292,398/s | 27,405/s | **10.7x** |
| CPU-Bound Processing | 62,814/s | 23,923/s | **2.6x** |

### Why flashQ is Faster

| Optimization | Description |
|--------------|-------------|
| **Rust + tokio** | Zero-cost abstractions, no GC pauses |
| **io_uring** | Linux kernel async I/O |
| **32 Shards** | Lock-free concurrent access |
| **MessagePack** | 40% smaller payloads |
| **No Redis** | Direct TCP protocol |

## AI/ML Workloads

flashQ is designed for AI pipelines with large payloads and complex workflows:

```typescript
// AI Agent with job dependencies
const agent = new Queue('ai-agent');

// Step 1: Parse user intent
const parse = await agent.add('parse', { prompt: userInput });

// Step 2: Retrieve context (waits for step 1)
const retrieve = await agent.add('retrieve', { query }, {
  depends_on: [parse.id],
});

// Step 3: Generate response (waits for step 2)
const generate = await agent.add('generate', { context }, {
  depends_on: [retrieve.id],
  priority: 10,
});

// Wait for the final result
const result = await agent.finished(generate.id, 60000);
```

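The `depends_on` semantics used above — a job becomes runnable only once every job it depends on has completed — can be modeled in miniature. This is a hypothetical illustration of the gating rule, not the server's scheduler:

```typescript
// Miniature model of depends_on gating, for illustration only.
interface DepJob {
  id: number;
  depends_on: number[]; // job ids that must complete first
}

// Returns the ids of jobs that are not yet done but whose
// dependencies have all completed.
function runnable(jobs: DepJob[], completed: Set<number>): number[] {
  return jobs
    .filter((j) => !completed.has(j.id))
    .filter((j) => j.depends_on.every((d) => completed.has(d)))
    .map((j) => j.id);
}

// The parse → retrieve → generate chain from the example above:
const pipeline: DepJob[] = [
  { id: 1, depends_on: [] },  // parse
  { id: 2, depends_on: [1] }, // retrieve
  { id: 3, depends_on: [2] }, // generate
];

console.log(runnable(pipeline, new Set()));       // → [1]
console.log(runnable(pipeline, new Set([1])));    // → [2]
console.log(runnable(pipeline, new Set([1, 2]))); // → [3]
```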
## Configuration

### Client Options

```typescript
interface ClientOptions {
  host?: string;                 // Default: 'localhost'
  port?: number;                 // Default: 6789
  httpPort?: number;             // Default: 6790
  token?: string;                // Auth token
  timeout?: number;              // Connection timeout (ms)
  useHttp?: boolean;             // Use HTTP instead of TCP
  useBinary?: boolean;           // Use MessagePack (40% smaller)
  logLevel?: LogLevel;           // Logging level
  compression?: boolean;         // Enable gzip compression
  compressionThreshold?: number; // Min size to compress (bytes)
  hooks?: ClientHooks;           // Observability hooks
}
```

### Worker Options

```typescript
interface WorkerOptions {
  concurrency?: number;      // Parallel jobs (default: 1)
  autostart?: boolean;       // Auto-start (default: true)
  closeTimeout?: number;     // Graceful shutdown timeout (ms)
  workerHooks?: WorkerHooks; // Processing hooks
}
```

## Examples

Run examples with:

```bash
bun run examples/01-basic.ts
```

| Example | Description |
|---------|-------------|
| `01-basic.ts` | Queue and Worker basics |
| `02-job-options.ts` | Priority, delay, retry |
| `03-bulk-jobs.ts` | Batch operations |
| `04-events.ts` | Worker events |
| `05-queue-control.ts` | Pause, resume, drain |
| `06-delayed.ts` | Scheduled jobs |
| `07-retry.ts` | Retry with backoff |
| `08-priority.ts` | Priority ordering |
| `09-concurrency.ts` | Parallel processing |
| `ai-workflow.ts` | AI agent with dependencies |

## Migration from BullMQ

flashQ provides a BullMQ-compatible API. Most code works with minimal changes:

```typescript
// Before (BullMQ)
import { Queue, Worker } from 'bullmq';
const queue = new Queue('my-queue', { connection: { host: 'redis' } });

// After (flashQ)
import { Queue, Worker } from 'flashq';
const queue = new Queue('my-queue', { host: 'flashq-server' });
```

Key differences:

- No Redis connection required
- `connection` option replaced with `host`/`port`
- Some advanced BullMQ features may have different behavior

## Resources

- **Website:** [flashq.dev](https://flashq.dev)
- **Documentation:** [flashq.dev/docs](https://flashq.dev/docs/)
- **GitHub:** [github.com/egeominotti/flashq](https://github.com/egeominotti/flashq)
- **npm:** [npmjs.com/package/flashq](https://www.npmjs.com/package/flashq)

## License