bunqueue 1.9.3 → 1.9.5
This diff shows the changes between two publicly released versions of the package as they appear in their public registry. It is provided for informational purposes only.
- package/README.md +27 -732
- package/dist/application/operations/push.d.ts.map +1 -1
- package/dist/application/operations/push.js +48 -9
- package/dist/application/operations/push.js.map +1 -1
- package/dist/application/queueManager.d.ts +25 -0
- package/dist/application/queueManager.d.ts.map +1 -1
- package/dist/application/queueManager.js +122 -2
- package/dist/application/queueManager.js.map +1 -1
- package/dist/cli/commands/cron.js +3 -0
- package/dist/cli/commands/cron.js.map +1 -1
- package/dist/cli/commands/server.d.ts.map +1 -1
- package/dist/cli/commands/server.js +6 -0
- package/dist/cli/commands/server.js.map +1 -1
- package/dist/client/queue/queue.d.ts +22 -0
- package/dist/client/queue/queue.d.ts.map +1 -1
- package/dist/client/queue/queue.js +77 -0
- package/dist/client/queue/queue.js.map +1 -1
- package/dist/domain/queue/shard.d.ts +15 -4
- package/dist/domain/queue/shard.d.ts.map +1 -1
- package/dist/domain/queue/shard.js +70 -6
- package/dist/domain/queue/shard.js.map +1 -1
- package/dist/domain/types/command.d.ts +12 -1
- package/dist/domain/types/command.d.ts.map +1 -1
- package/dist/domain/types/cron.d.ts +4 -0
- package/dist/domain/types/cron.d.ts.map +1 -1
- package/dist/domain/types/cron.js +1 -0
- package/dist/domain/types/cron.js.map +1 -1
- package/dist/domain/types/deduplication.d.ts +44 -0
- package/dist/domain/types/deduplication.d.ts.map +1 -0
- package/dist/domain/types/deduplication.js +27 -0
- package/dist/domain/types/deduplication.js.map +1 -0
- package/dist/domain/types/job.d.ts +9 -0
- package/dist/domain/types/job.d.ts.map +1 -1
- package/dist/domain/types/job.js.map +1 -1
- package/dist/infrastructure/persistence/schema.d.ts +1 -1
- package/dist/infrastructure/persistence/schema.d.ts.map +1 -1
- package/dist/infrastructure/persistence/schema.js +2 -1
- package/dist/infrastructure/persistence/schema.js.map +1 -1
- package/dist/infrastructure/persistence/sqlite.d.ts.map +1 -1
- package/dist/infrastructure/persistence/sqlite.js +2 -1
- package/dist/infrastructure/persistence/sqlite.js.map +1 -1
- package/dist/infrastructure/persistence/statements.d.ts +1 -0
- package/dist/infrastructure/persistence/statements.d.ts.map +1 -1
- package/dist/infrastructure/persistence/statements.js +2 -2
- package/dist/infrastructure/scheduler/cronParser.d.ts +4 -2
- package/dist/infrastructure/scheduler/cronParser.d.ts.map +1 -1
- package/dist/infrastructure/scheduler/cronParser.js +6 -4
- package/dist/infrastructure/scheduler/cronParser.js.map +1 -1
- package/dist/infrastructure/scheduler/cronScheduler.js +3 -3
- package/dist/infrastructure/scheduler/cronScheduler.js.map +1 -1
- package/dist/infrastructure/server/handler.d.ts.map +1 -1
- package/dist/infrastructure/server/handler.js +6 -2
- package/dist/infrastructure/server/handler.js.map +1 -1
- package/dist/infrastructure/server/handlers/cron.d.ts.map +1 -1
- package/dist/infrastructure/server/handlers/cron.js +3 -0
- package/dist/infrastructure/server/handlers/cron.js.map +1 -1
- package/dist/infrastructure/server/handlers/dlq.d.ts +4 -0
- package/dist/infrastructure/server/handlers/dlq.d.ts.map +1 -1
- package/dist/infrastructure/server/handlers/dlq.js +10 -0
- package/dist/infrastructure/server/handlers/dlq.js.map +1 -1
- package/dist/infrastructure/server/handlers/query.d.ts +7 -3
- package/dist/infrastructure/server/handlers/query.d.ts.map +1 -1
- package/dist/infrastructure/server/handlers/query.js +14 -5
- package/dist/infrastructure/server/handlers/query.js.map +1 -1
- package/dist/main.js +6 -0
- package/dist/main.js.map +1 -1
- package/package.json +1 -1
package/README.md
CHANGED
````diff
@@ -6,772 +6,67 @@
   <a href="https://github.com/egeominotti/bunqueue/actions"><img src="https://github.com/egeominotti/bunqueue/actions/workflows/ci.yml/badge.svg" alt="CI"></a>
   <a href="https://github.com/egeominotti/bunqueue/releases"><img src="https://img.shields.io/github/v/release/egeominotti/bunqueue" alt="Release"></a>
   <a href="https://github.com/egeominotti/bunqueue/blob/main/LICENSE"><img src="https://img.shields.io/github/license/egeominotti/bunqueue" alt="License"></a>
+  <a href="https://www.npmjs.com/package/bunqueue"><img src="https://img.shields.io/npm/v/bunqueue" alt="npm"></a>
 </p>
 
 <p align="center">
-  <
-  <a href="#features">Features</a> •
-  <a href="#quick-start">Quick Start</a> •
-  <a href="#embedded-mode">Embedded</a> •
-  <a href="#server-mode">Server</a> •
-  <a href="#docker">Docker</a>
+  <strong>High-performance job queue for Bun. Zero external dependencies.</strong>
 </p>
 
 <p align="center">
-  <a href="https://
-  <a href="https://www.npmjs.com/package/bunqueue"><img src="https://img.shields.io/npm/dm/bunqueue" alt="npm downloads"></a>
+  <a href="https://egeominotti.github.io/bunqueue/"><strong>Documentation</strong></a>
 </p>
 
 ---
 
 ## Why bunqueue?
 
-> ⚠️ **Bun only** – bunqueue requires [Bun](https://bun.sh) runtime. Node.js is not supported.
-
-**Every other job queue requires external infrastructure.** bunqueue doesn't.
-
 | Library | Requires |
 |---------|----------|
-| BullMQ |
-| Agenda |
-|
-|
-| Celery | ❌ Redis/RabbitMQ |
-| **bunqueue** | ✅ **Nothing. Zero. Nada.** |
+| BullMQ | Redis |
+| Agenda | MongoDB |
+| pg-boss | PostgreSQL |
+| **bunqueue** | **Nothing** |
 
-
-- **
-- **
-- **
-- **100K+ jobs/sec** – Faster than Redis-based queues
-- **Single file deployment** – Just your app, that's it
+- **BullMQ-compatible API** – Same `Queue`, `Worker`, `QueueEvents`
+- **Zero dependencies** – No Redis, no MongoDB
+- **SQLite persistence** – Survives restarts
+- **100K+ jobs/sec** – Built on Bun
 
-
-# Others: Install Redis, configure connection, manage infrastructure...
-# bunqueue:
-bun add bunqueue
-```
-
-```typescript
-import { Queue, Worker } from 'bunqueue/client';
-// That's it. You're done. Start queuing.
-```
-
----
-
-## Quick Install
+## Install
 
 ```bash
-# Requires Bun runtime (https://bun.sh)
 bun add bunqueue
 ```
 
-
-
-| Mode | Description | Use Case |
-|------|-------------|----------|
-| **Embedded** | In-process, no server needed | Monolith, scripts, serverless |
-| **Server** | Standalone TCP/HTTP server | Microservices, multi-process |
-
----
+> Requires [Bun](https://bun.sh) runtime. Node.js is not supported.
 
-## Quick
-
-### Embedded Mode (Recommended)
-
-No server required. BullMQ-compatible API.
+## Quick Example
 
 ```typescript
 import { Queue, Worker } from 'bunqueue/client';
 
-
-const queue = new Queue('emails');
+const queue = new Queue('emails', { embedded: true });
 
-// Create worker
 const worker = new Worker('emails', async (job) => {
-  console.log('Sending email to:', job.data.to);
-  await job.updateProgress(50);
-  return { sent: true };
-}, { concurrency: 5 });
-
-// Handle events
-worker.on('completed', (job, result) => {
-  console.log(`Job ${job.id} completed:`, result);
-});
-
-worker.on('failed', (job, err) => {
-  console.error(`Job ${job.id} failed:`, err.message);
-});
-
-// Add jobs
-await queue.add('send-welcome', { to: 'user@example.com' });
-```
-
-### Server Mode
-
-For multi-process or microservice architectures.
-
-**Terminal 1 - Start server:**
-```bash
-bunqueue start
-```
-
-<img src=".github/terminal.png" alt="bunqueue server running" width="600" />
-
-**Terminal 2 - Producer:**
-```typescript
-const res = await fetch('http://localhost:6790/push', {
-  method: 'POST',
-  headers: { 'Content-Type': 'application/json' },
-  body: JSON.stringify({
-    queue: 'emails',
-    data: { to: 'user@example.com' }
-  })
-});
-```
-
-**Terminal 3 - Consumer:**
-```typescript
-while (true) {
-  const res = await fetch('http://localhost:6790/pull', {
-    method: 'POST',
-    body: JSON.stringify({ queue: 'emails', timeout: 5000 })
-  });
-
-  const job = await res.json();
-  if (job.id) {
-    console.log('Processing:', job.data);
-    await fetch('http://localhost:6790/ack', {
-      method: 'POST',
-      body: JSON.stringify({ id: job.id })
-    });
-  }
-}
-```
-
----
-
-## Features
-
-- **Blazing Fast** – 500K+ jobs/sec, built on Bun runtime
-- **Dual Mode** – Embedded (in-process) or Server (TCP/HTTP)
-- **BullMQ-Compatible API** – Easy migration with `Queue`, `Worker`, `QueueEvents`
-- **Persistent Storage** – SQLite with WAL mode
-- **Sandboxed Workers** – Isolated processes for crash protection
-- **Priority Queues** – FIFO, LIFO, and priority-based ordering
-- **Delayed Jobs** – Schedule jobs for later
-- **Repeatable Jobs** – Recurring jobs with interval and limit
-- **Cron Scheduling** – Recurring jobs with cron expressions
-- **Queue Groups** – Organize queues in namespaces
-- **Flow/Pipelines** – Chain jobs A → B → C with result passing
-- **Retry & Backoff** – Automatic retries with exponential backoff
-- **Dead Letter Queue** – Failed jobs preserved for inspection
-- **Job Dependencies** – Parent-child relationships
-- **Progress Tracking** – Real-time progress updates
-- **Rate Limiting** – Per-queue rate limits
-- **Webhooks** – HTTP callbacks on job events
-- **Real-time Events** – WebSocket and SSE support
-- **Prometheus Metrics** – Built-in monitoring
-- **Full CLI** – Manage queues from command line
-
----
-
-## Embedded Mode
-
-### Queue API
-
-```typescript
-import { Queue } from 'bunqueue/client';
-
-const queue = new Queue('my-queue');
-
-// Add job
-const job = await queue.add('task-name', { data: 'value' });
-
-// Add with options
-await queue.add('task', { data: 'value' }, {
-  priority: 10,        // Higher = processed first
-  delay: 5000,         // Delay in ms
-  attempts: 3,         // Max retries
-  backoff: 1000,       // Backoff base (ms)
-  timeout: 30000,      // Processing timeout
-  jobId: 'unique-id',  // Custom ID
-  removeOnComplete: true,
-  removeOnFail: false,
-});
-
-// Bulk add
-await queue.addBulk([
-  { name: 'task1', data: { id: 1 } },
-  { name: 'task2', data: { id: 2 } },
-]);
-
-// Get job
-const job = await queue.getJob('job-id');
-
-// Remove job
-await queue.remove('job-id');
-
-// Get counts
-const counts = await queue.getJobCounts();
-// { waiting: 10, active: 2, completed: 100, failed: 5 }
-
-// Queue control
-await queue.pause();
-await queue.resume();
-await queue.drain();      // Remove waiting jobs
-await queue.obliterate(); // Remove ALL data
-```
-
-### Worker API
-
-```typescript
-import { Worker } from 'bunqueue/client';
-
-const worker = new Worker('my-queue', async (job) => {
-  console.log('Processing:', job.name, job.data);
-
-  // Update progress
-  await job.updateProgress(50, 'Halfway done');
-
-  // Add log
-  await job.log('Processing step completed');
-
-  // Return result
-  return { success: true };
-}, {
-  concurrency: 10,     // Parallel jobs
-  autorun: true,       // Start automatically
-});
-
-// Events
-worker.on('active', (job) => {
-  console.log(`Job ${job.id} started`);
-});
-
-worker.on('completed', (job, result) => {
-  console.log(`Job ${job.id} completed:`, result);
-});
-
-worker.on('failed', (job, err) => {
-  console.error(`Job ${job.id} failed:`, err.message);
-});
-
-worker.on('progress', (job, progress) => {
-  console.log(`Job ${job.id} progress:`, progress);
-});
-
-worker.on('error', (err) => {
-  console.error('Worker error:', err);
-});
-
-// Control
-worker.pause();
-worker.resume();
-await worker.close();     // Graceful shutdown
-await worker.close(true); // Force close
-```
-
-### SandboxedWorker
-
-Run job processors in **isolated Bun Worker processes**. Perfect for:
-- CPU-intensive tasks that would block the event loop
-- Processing untrusted code/data
-- Jobs that might crash or have memory leaks
-- Workloads requiring process-level isolation
-
-```typescript
-import { Queue, SandboxedWorker } from 'bunqueue/client';
-
-const queue = new Queue('image-processing');
-
-// Create sandboxed worker pool
-const worker = new SandboxedWorker('image-processing', {
-  processor: './imageProcessor.ts', // Runs in separate process
-  concurrency: 4,                   // 4 parallel worker processes
-  timeout: 60000,                   // 60s timeout per job
-  maxMemory: 256,                   // MB per worker (uses smol mode if ≤64)
-  maxRestarts: 10,                  // Auto-restart crashed workers
-});
-
-worker.start();
-
-// Add jobs normally
-await queue.add('resize', {
-  image: 'photo.jpg',
-  width: 800
-});
-
-// Check worker stats
-const stats = worker.getStats();
-// { total: 4, busy: 2, idle: 2, restarts: 0 }
-
-// Graceful shutdown
-await worker.stop();
-```
-
-**Processor file** (`imageProcessor.ts`):
-```typescript
-// This runs in an isolated Bun Worker process
-export default async (job: {
-  id: string;
-  data: any;
-  queue: string;
-  attempts: number;
-  progress: (value: number) => void;
-}) => {
-  job.progress(10);
-
-  // CPU-intensive work - won't block main process
-  const result = await processImage(job.data.image, job.data.width);
-
-  job.progress(100);
-  return { processed: true, path: result };
-};
-```
-
-**Comparison:**
-
-| Feature | Worker | SandboxedWorker |
-|---------|--------|-----------------|
-| Execution | In-process | Separate process |
-| Latency | ~0.002ms | ~2-5ms (IPC overhead) |
-| Crash isolation | ❌ | ✅ |
-| Memory leak protection | ❌ | ✅ |
-| CPU-bound safety | ❌ Blocks event loop | ✅ Isolated |
-| Use case | Fast I/O tasks | Heavy computation |
-
-### QueueEvents
-
-Listen to queue events without processing jobs.
-
-```typescript
-import { QueueEvents } from 'bunqueue/client';
-
-const events = new QueueEvents('my-queue');
-
-events.on('waiting', ({ jobId }) => {
-  console.log(`Job ${jobId} waiting`);
-});
-
-events.on('active', ({ jobId }) => {
-  console.log(`Job ${jobId} active`);
-});
-
-events.on('completed', ({ jobId, returnvalue }) => {
-  console.log(`Job ${jobId} completed:`, returnvalue);
-});
-
-events.on('failed', ({ jobId, failedReason }) => {
-  console.log(`Job ${jobId} failed:`, failedReason);
-});
-
-events.on('progress', ({ jobId, data }) => {
-  console.log(`Job ${jobId} progress:`, data);
-});
-
-await events.close();
-```
-
-### Repeatable Jobs
-
-Jobs that repeat automatically at fixed intervals.
-
-```typescript
-import { Queue, Worker } from 'bunqueue/client';
-
-const queue = new Queue('heartbeat');
-
-// Repeat every 5 seconds, max 10 times
-await queue.add('ping', { timestamp: Date.now() }, {
-  repeat: {
-    every: 5000, // 5 seconds
-    limit: 10,   // max 10 repetitions
-  }
-});
-
-// Infinite repeat (no limit)
-await queue.add('health-check', {}, {
-  repeat: { every: 60000 } // every minute, forever
-});
-
-const worker = new Worker('heartbeat', async (job) => {
-  console.log('Heartbeat:', job.data);
-  return { ok: true };
-});
-```
-
-### Queue Groups
-
-Organize queues with namespaces.
-
-```typescript
-import { QueueGroup } from 'bunqueue/client';
-
-// Create a group with namespace
-const billing = new QueueGroup('billing');
-
-// Get queues (automatically prefixed)
-const invoices = billing.getQueue('invoices'); // → "billing:invoices"
-const payments = billing.getQueue('payments'); // → "billing:payments"
-
-// Get workers for the group
-const worker = billing.getWorker('invoices', async (job) => {
-  console.log('Processing invoice:', job.data);
-  return { processed: true };
-});
-
-// List all queues in the group
-const queues = billing.listQueues(); // ['invoices', 'payments']
-
-// Bulk operations on the group
-billing.pauseAll();
-billing.resumeAll();
-billing.drainAll();
-```
-
-### FlowProducer (Pipelines)
-
-Chain jobs with dependencies and result passing.
-
-```typescript
-import { FlowProducer, Worker } from 'bunqueue/client';
-
-const flow = new FlowProducer();
-
-// Chain: A → B → C (sequential execution)
-const { jobIds } = await flow.addChain([
-  { name: 'fetch', queueName: 'pipeline', data: { url: 'https://api.example.com' } },
-  { name: 'process', queueName: 'pipeline', data: {} },
-  { name: 'store', queueName: 'pipeline', data: {} },
-]);
-
-// Parallel then merge: [A, B, C] → D
-const result = await flow.addBulkThen(
-  [
-    { name: 'fetch-1', queueName: 'parallel', data: { source: 'api1' } },
-    { name: 'fetch-2', queueName: 'parallel', data: { source: 'api2' } },
-    { name: 'fetch-3', queueName: 'parallel', data: { source: 'api3' } },
-  ],
-  { name: 'merge', queueName: 'parallel', data: {} }
-);
-
-// Tree structure
-await flow.addTree({
-  name: 'root',
-  queueName: 'tree',
-  data: { level: 0 },
-  children: [
-    { name: 'child1', queueName: 'tree', data: { level: 1 } },
-    { name: 'child2', queueName: 'tree', data: { level: 1 } },
-  ],
-});
-
-// Worker with parent result access
-const worker = new Worker('pipeline', async (job) => {
-  if (job.name === 'fetch') {
-    const data = await fetchData(job.data.url);
-    return { data };
-  }
-
-  if (job.name === 'process' && job.data.__flowParentId) {
-    // Get result from previous job in chain
-    const parentResult = flow.getParentResult(job.data.__flowParentId);
-    return { processed: transform(parentResult.data) };
-  }
-
-  return { done: true };
-});
-```
-
-### Shutdown
-
-```typescript
-import { shutdownManager } from 'bunqueue/client';
-
-// Cleanup when done
-shutdownManager();
-```
-
----
-
-## Server Mode
-
-### Start Server
-
-```bash
-# Basic
-bunqueue start
-
-# With options
-bunqueue start --tcp-port 6789 --http-port 6790 --data-path ./data/queue.db
-
-# With environment variables
-DATA_PATH=./data/bunqueue.db AUTH_TOKENS=secret bunqueue start
-```
-
-### Environment Variables
-
-```env
-TCP_PORT=6789
-HTTP_PORT=6790
-HOST=0.0.0.0
-DATA_PATH=./data/bunqueue.db
-AUTH_TOKENS=token1,token2
-```
-
-### HTTP API
-
-```bash
-# Push job
-curl -X POST http://localhost:6790/push \
-  -H "Content-Type: application/json" \
-  -d '{"queue":"emails","data":{"to":"user@test.com"},"priority":10}'
-
-# Pull job
-curl -X POST http://localhost:6790/pull \
-  -H "Content-Type: application/json" \
-  -d '{"queue":"emails","timeout":5000}'
-
-# Acknowledge
-curl -X POST http://localhost:6790/ack \
-  -H "Content-Type: application/json" \
-  -d '{"id":"job-id","result":{"sent":true}}'
-
-# Fail
-curl -X POST http://localhost:6790/fail \
-  -H "Content-Type: application/json" \
-  -d '{"id":"job-id","error":"Failed to send"}'
-
-# Stats
-curl http://localhost:6790/stats
-
-# Health
-curl http://localhost:6790/health
-
-# Prometheus metrics
-curl http://localhost:6790/prometheus
-```
-
-### TCP Protocol
-
-```bash
-nc localhost 6789
-
-# Commands (JSON)
-{"cmd":"PUSH","queue":"tasks","data":{"action":"process"}}
-{"cmd":"PULL","queue":"tasks","timeout":5000}
-{"cmd":"ACK","id":"1","result":{"done":true}}
-{"cmd":"FAIL","id":"1","error":"Something went wrong"}
-```
-
----
-
-## CLI
-
-```bash
-# Server
-bunqueue start
-bunqueue start --tcp-port 6789 --http-port 6790
-
-# Jobs
-bunqueue push emails '{"to":"user@test.com"}'
-bunqueue push tasks '{"action":"sync"}' --priority 10 --delay 5000
-bunqueue pull emails --timeout 5000
-bunqueue ack <job-id>
-bunqueue fail <job-id> --error "Failed"
-
-# Job management
-bunqueue job get <id>
-bunqueue job progress <id> 50 --message "Processing"
-bunqueue job cancel <id>
-
-# Queue control
-bunqueue queue list
-bunqueue queue pause emails
-bunqueue queue resume emails
-bunqueue queue drain emails
-
-# Cron
-bunqueue cron list
-bunqueue cron add cleanup -q maintenance -d '{}' -s "0 * * * *"
-bunqueue cron delete cleanup
-
-# DLQ
-bunqueue dlq list emails
-bunqueue dlq retry emails
-bunqueue dlq purge emails
-
-# Monitoring
-bunqueue stats
-bunqueue metrics
-bunqueue health
-
-# Backup (S3)
-bunqueue backup now
-bunqueue backup list
-bunqueue backup restore <key> --force
-```
-
----
-
-## Docker
-
-```bash
-# Run
-docker run -p 6789:6789 -p 6790:6790 ghcr.io/egeominotti/bunqueue
-
-# With persistence
-docker run -p 6789:6789 -p 6790:6790 \
-  -v bunqueue-data:/app/data \
-  -e DATA_PATH=/app/data/bunqueue.db \
-  ghcr.io/egeominotti/bunqueue
-
-# With auth
-docker run -p 6789:6789 -p 6790:6790 \
-  -e AUTH_TOKENS=secret \
-  ghcr.io/egeominotti/bunqueue
-```
-
-### Docker Compose
-
-```yaml
-version: "3.8"
-services:
-  bunqueue:
-    image: ghcr.io/egeominotti/bunqueue
-    ports:
-      - "6789:6789"
-      - "6790:6790"
-    volumes:
-      - bunqueue-data:/app/data
-    environment:
-      - DATA_PATH=/app/data/bunqueue.db
-      - AUTH_TOKENS=your-secret-token
-
-volumes:
-  bunqueue-data:
-```
-
----
-
-## S3 Backup
-
-```env
-S3_BACKUP_ENABLED=1
-S3_ACCESS_KEY_ID=your-key
-S3_SECRET_ACCESS_KEY=your-secret
-S3_BUCKET=my-bucket
-S3_REGION=us-east-1
-S3_BACKUP_INTERVAL=21600000 # 6 hours
-S3_BACKUP_RETENTION=7
-```
-
-Supported providers: AWS S3, Cloudflare R2, MinIO, DigitalOcean Spaces.
-
----
-
-## When to Use What?
-
-| Scenario | Mode |
-|----------|------|
-| Single app, monolith | **Embedded** |
-| Scripts, CLI tools | **Embedded** |
-| Serverless (with persistence) | **Embedded** |
-| Microservices | **Server** |
-| Multiple languages | **Server** (HTTP API) |
-| Horizontal scaling | **Server** |
-
-### Server Mode SDK
-
-For communicating with bunqueue server from **separate processes**, use the [flashq](https://www.npmjs.com/package/flashq) SDK:
-
-```bash
-bun add flashq
-```
-
-```typescript
-import { FlashQ } from 'flashq';
-
-const client = new FlashQ({ host: 'localhost', port: 6789 });
-
-// Push job
-await client.push('emails', { to: 'user@test.com' });
-
-// Pull and process
-const job = await client.pull('emails');
-if (job) {
   console.log('Processing:', job.data);
-
-}
-```
-
-| Package | Use Case |
-|---------|----------|
-| `bunqueue/client` | Same process (embedded) |
-| `flashq` | Different process (TCP client) |
-
-```
-┌─────────────────┐         ┌─────────────────┐
-│    Your App     │         │    Your App     │
-│                 │         │                 │
-│ bunqueue/client │         │     flashq      │
-│   (embedded)    │         │  (TCP client)   │
-└────────┬────────┘         └────────┬────────┘
-         │                           │
-         │                           ▼
-         │                  ┌─────────────────┐
-         │                  │ bunqueue server │
-         │                  │   (port 6789)   │
-         │                  └─────────────────┘
-         │
-   Same process              Different process
-```
-
----
-
-## Architecture
+  return { sent: true };
+}, { embedded: true });
 
-
-┌───────────────────────────────────────────────────────────────┐
-│                           bunqueue                            │
-├───────────────────────────────────────────────────────────────┤
-│  Embedded Mode               │  Server Mode                   │
-│  (bunqueue/client)           │  (bunqueue start)              │
-│                              │                                │
-│  Queue, Worker               │  TCP (6789) + HTTP (6790)      │
-│  in-process                  │  multi-process                 │
-├───────────────────────────────────────────────────────────────┤
-│                          Core Engine                          │
-│  ┌───────────┐  ┌───────────┐  ┌───────────┐  ┌───────────┐   │
-│  │  Queues   │  │  Workers  │  │ Scheduler │  │    DLQ    │   │
-│  │(32 shards)│  │           │  │  (Cron)   │  │           │   │
-│  └───────────┘  └───────────┘  └───────────┘  └───────────┘   │
-├───────────────────────────────────────────────────────────────┤
-│                 SQLite (WAL mode, 256MB mmap)                 │
-└───────────────────────────────────────────────────────────────┘
+await queue.add('welcome', { to: 'user@example.com' });
 ```
 
-
-
-## Contributing
+## Documentation
 
-
-bun install
-bun test
-bun run lint
-bun run format
-bun run check
-```
+**[Read the full documentation →](https://egeominotti.github.io/bunqueue/)**
 
-
+- [Quick Start](https://egeominotti.github.io/bunqueue/guide/quickstart/)
+- [Queue API](https://egeominotti.github.io/bunqueue/guide/queue/)
+- [Worker API](https://egeominotti.github.io/bunqueue/guide/worker/)
+- [Server Mode](https://egeominotti.github.io/bunqueue/guide/server/)
+- [CLI Reference](https://egeominotti.github.io/bunqueue/guide/cli/)
+- [Environment Variables](https://egeominotti.github.io/bunqueue/guide/env-vars/)
 
 ## License
 
-MIT
-
----
-
-<p align="center">
-  Built with <a href="https://bun.sh">Bun</a> 🔥
-</p>
+MIT
````
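For reference, the Quick Example introduced by the new README can be read as one runnable snippet. This is assembled only from the added (`+`) lines in the diff above; nothing here goes beyond what the updated README itself shows, and it assumes the Bun runtime.

```typescript
// Quick Example from the updated README: embedded mode, no external broker.
import { Queue, Worker } from 'bunqueue/client';

// The updated example passes { embedded: true } to both Queue and Worker,
// so producer and worker run in-process.
const queue = new Queue('emails', { embedded: true });

const worker = new Worker('emails', async (job) => {
  console.log('Processing:', job.data);
  return { sent: true };
}, { embedded: true });

await queue.add('welcome', { to: 'user@example.com' });
```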