bunqueue 1.2.1 → 1.3.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -10,254 +10,205 @@
 
 <p align="center">
 <a href="#features">Features</a> •
- <a href="#sdk">SDK</a> •
 <a href="#quick-start">Quick Start</a> •
- <a href="#installation">Installation</a> •
+ <a href="#embedded-mode">Embedded</a> •
+ <a href="#server-mode">Server</a> •
 <a href="#api-reference">API</a> •
 <a href="#docker">Docker</a>
 </p>
 
 <p align="center">
 <a href="https://www.npmjs.com/package/bunqueue"><img src="https://img.shields.io/npm/v/bunqueue?label=bunqueue" alt="bunqueue npm"></a>
- <a href="https://www.npmjs.com/package/flashq"><img src="https://img.shields.io/npm/v/flashq?label=flashq" alt="flashq npm"></a>
- <a href="https://www.npmjs.com/package/flashq"><img src="https://img.shields.io/npm/dm/flashq" alt="npm downloads"></a>
+ <a href="https://www.npmjs.com/package/bunqueue"><img src="https://img.shields.io/npm/dm/bunqueue" alt="npm downloads"></a>
 </p>
 
 ---
 
 ## Quick Install
 
- bunqueue requires two packages: the **server** and the **SDK**.
-
 ```bash
- # Install both packages
- bun add bunqueue flashq
+ bun add bunqueue
 ```
 
- | Package | Description |
- |---------|-------------|
- | [bunqueue](https://www.npmjs.com/package/bunqueue) | Job queue server |
- | [flashq](https://www.npmjs.com/package/flashq) | TypeScript SDK for clients |
+ bunqueue works in **two modes**:
 
- ### Start Server
+ | Mode | Description | Use Case |
+ |------|-------------|----------|
+ | **Embedded** | In-process, no server needed | Monolith, scripts, serverless |
+ | **Server** | Standalone TCP/HTTP server | Microservices, multi-process |
 
- ```bash
- # Option 1: Run via npx (recommended)
- npx bunqueue
+ ---
 
- # Option 2: Run via bun
- bunx bunqueue
+ ## Quick Start
 
- # Option 3: Run locally after install
- ./node_modules/.bin/bunqueue
+ ### Embedded Mode (Recommended)
 
- # Option 4: Global install
- bun add -g bunqueue
- bunqueue
+ No server required. BullMQ-compatible API.
 
- # Option 5: Docker
- docker run -p 6789:6789 -p 6790:6790 ghcr.io/egeominotti/bunqueue
- ```
+ ```typescript
+ import { Queue, Worker } from 'bunqueue/client';
 
- <img src=".github/terminal.png" alt="bunqueue server running" width="600" />
+ // Create queue
+ const queue = new Queue('emails');
 
- ### Production Setup
+ // Create worker
+ const worker = new Worker('emails', async (job) => {
+   console.log('Sending email to:', job.data.to);
+   await job.updateProgress(50);
+   return { sent: true };
+ }, { concurrency: 5 });
 
- For production, enable **persistence** and **authentication**:
+ // Handle events
+ worker.on('completed', (job, result) => {
+   console.log(`Job ${job.id} completed:`, result);
+ });
 
- ```bash
- # With environment variables
- DATA_PATH=./data/bunqueue.db AUTH_TOKENS=your-secret-token bunqueue
+ worker.on('failed', (job, err) => {
+   console.error(`Job ${job.id} failed:`, err.message);
+ });
 
- # Or with custom ports
- TCP_PORT=6789 HTTP_PORT=6790 DATA_PATH=./data/bunqueue.db AUTH_TOKENS=token1,token2 bunqueue
+ // Add jobs
+ await queue.add('send-welcome', { to: 'user@example.com' });
 ```
 
- Or create a `.env` file:
-
- ```env
- # Server ports
- TCP_PORT=6789
- HTTP_PORT=6790
- HOST=0.0.0.0
-
- # Persistence (required for production)
- DATA_PATH=./data/bunqueue.db
-
- # Authentication (recommended for production)
- AUTH_TOKENS=your-secret-token-1,your-secret-token-2
-
- # Optional: Protect metrics endpoint
- METRICS_AUTH=true
- ```
+ ### Server Mode
 
- Then run:
+ For multi-process or microservice architectures.
 
+ **Terminal 1 - Start server:**
 ```bash
- bunqueue
+ bunqueue start
 ```
 
- Output:
- ```
- bunqueue server running
+ <img src=".github/terminal.png" alt="bunqueue server running" width="600" />
 
- TCP: 0.0.0.0:6789
- HTTP: 0.0.0.0:6790
- Data: ./data/bunqueue.db
- Auth: enabled (2 tokens)
+ **Terminal 2 - Producer:**
+ ```typescript
+ const res = await fetch('http://localhost:6790/push', {
+   method: 'POST',
+   headers: { 'Content-Type': 'application/json' },
+   body: JSON.stringify({
+     queue: 'emails',
+     data: { to: 'user@example.com' }
+   })
+ });
 ```
 
- ### Use SDK
-
+ **Terminal 3 - Consumer:**
 ```typescript
- import { Queue, Worker } from 'flashq';
-
- // Producer: add jobs
- const queue = new Queue('emails');
- await queue.add('send-welcome', { to: 'user@example.com' });
-
- // Consumer: process jobs
- const worker = new Worker('emails', async (job) => {
-   console.log('Sending email to:', job.data.to);
-   return { sent: true };
- });
+ while (true) {
+   const res = await fetch('http://localhost:6790/pull', {
+     method: 'POST',
+     body: JSON.stringify({ queue: 'emails', timeout: 5000 })
+   });
+
+   const job = await res.json();
+   if (job.id) {
+     console.log('Processing:', job.data);
+     await fetch('http://localhost:6790/ack', {
+       method: 'POST',
+       body: JSON.stringify({ id: job.id })
+     });
+   }
+ }
 ```
 
 ---
 
 ## Features
 
- - **Blazing Fast** — Built on Bun runtime with native SQLite, optimized for maximum throughput
- - **Persistent Storage** — SQLite with WAL mode for durability and concurrent access
- - **Priority Queues** — FIFO, LIFO, and priority-based job ordering
- - **Delayed Jobs** — Schedule jobs to run at specific times
- - **Cron Scheduling** — Recurring jobs with cron expressions or fixed intervals
+ - **Blazing Fast** — 500K+ jobs/sec, built on Bun runtime
+ - **Dual Mode** — Embedded (in-process) or Server (TCP/HTTP)
+ - **BullMQ-Compatible API** — Easy migration with `Queue`, `Worker`, `QueueEvents`
+ - **Persistent Storage** — SQLite with WAL mode
+ - **Priority Queues** — FIFO, LIFO, and priority-based ordering
+ - **Delayed Jobs** — Schedule jobs for later
+ - **Cron Scheduling** — Recurring jobs with cron expressions
 - **Retry & Backoff** — Automatic retries with exponential backoff
- - **Dead Letter Queue** — Failed jobs preserved for inspection and retry
- - **Job Dependencies** — Define parent-child relationships and execution order
- - **Progress Tracking** — Real-time progress updates for long-running jobs
- - **Rate Limiting** — Per-queue rate limits and concurrency control
+ - **Dead Letter Queue** — Failed jobs preserved for inspection
+ - **Job Dependencies** — Parent-child relationships
+ - **Progress Tracking** — Real-time progress updates
+ - **Rate Limiting** — Per-queue rate limits
 - **Webhooks** — HTTP callbacks on job events
- - **Real-time Events** — WebSocket and Server-Sent Events (SSE) support
- - **Prometheus Metrics** — Built-in metrics endpoint for monitoring
- - **Authentication** — Token-based auth for secure access
- - **Dual Protocol** — TCP (high performance) and HTTP/REST (compatibility)
- - **Full-Featured CLI** — Manage queues, jobs, cron, and more from the command line
-
- ## CLI
-
- bunqueue includes a powerful CLI for managing the server and executing commands.
-
- ### Server Mode
-
- ```bash
- # Start server with defaults
- bunqueue
-
- # Start with options
- bunqueue start --tcp-port 6789 --http-port 6790 --data-path ./data/queue.db
- ```
-
- ### Client Commands
+ - **Real-time Events** — WebSocket and SSE support
+ - **Prometheus Metrics** — Built-in monitoring
+ - **Full CLI** — Manage queues from command line
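
*Editor's note:* the "Retry & Backoff" feature above pairs with the `backoff` base shown in the Queue API options (`backoff: 1000 // Backoff base (ms)`); the 1.2.x docs described this base as doubling on each retry. A minimal sketch of that doubling schedule, assuming `retryDelays` is a hypothetical helper and not part of bunqueue's API:

```typescript
// Illustrative only: derive the delay before each retry from a backoff base,
// assuming the delay doubles on every attempt (base, 2*base, 4*base, ...).
function retryDelays(attempts: number, backoffBaseMs: number): number[] {
  return Array.from({ length: attempts }, (_, i) => backoffBaseMs * 2 ** i);
}

console.log(retryDelays(3, 1000)); // [1000, 2000, 4000]
```

With `attempts: 3` and `backoff: 1000`, a job that keeps failing would wait roughly 1s, 2s, then 4s between attempts under this schedule.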
 
- ```bash
- # Push a job
- bunqueue push emails '{"to":"user@test.com","subject":"Hello"}'
- bunqueue push tasks '{"action":"sync"}' --priority 10 --delay 5000
+ ---
 
- # Pull and process jobs
- bunqueue pull emails --timeout 5000
- bunqueue ack 12345 --result '{"sent":true}'
- bunqueue fail 12345 --error "SMTP timeout"
+ ## Embedded Mode
 
- # Job management
- bunqueue job get 12345
- bunqueue job progress 12345 50 --message "Processing..."
- bunqueue job cancel 12345
+ ### Queue API
 
- # Queue control
- bunqueue queue list
- bunqueue queue pause emails
- bunqueue queue resume emails
- bunqueue queue drain emails
+ ```typescript
+ import { Queue } from 'bunqueue/client';
 
- # Cron jobs
- bunqueue cron list
- bunqueue cron add hourly-cleanup -q maintenance -d '{"task":"cleanup"}' -s "0 * * * *"
- bunqueue cron delete hourly-cleanup
+ const queue = new Queue('my-queue');
 
- # DLQ management
- bunqueue dlq list emails
- bunqueue dlq retry emails
- bunqueue dlq purge emails
+ // Add job
+ const job = await queue.add('task-name', { data: 'value' });
 
- # Monitoring
- bunqueue stats
- bunqueue metrics
- bunqueue health
- ```
+ // Add with options
+ await queue.add('task', { data: 'value' }, {
+   priority: 10,       // Higher = processed first
+   delay: 5000,        // Delay in ms
+   attempts: 3,        // Max retries
+   backoff: 1000,      // Backoff base (ms)
+   timeout: 30000,     // Processing timeout
+   jobId: 'unique-id', // Custom ID
+   removeOnComplete: true,
+   removeOnFail: false,
+ });
 
- ### Global Options
+ // Bulk add
+ await queue.addBulk([
+   { name: 'task1', data: { id: 1 } },
+   { name: 'task2', data: { id: 2 } },
+ ]);
 
- ```bash
- -H, --host <host>    # Server host (default: localhost)
- -p, --port <port>    # TCP port (default: 6789)
- -t, --token <token>  # Authentication token
- --json               # Output as JSON
- --help               # Show help
- --version            # Show version
- ```
+ // Get job
+ const existing = await queue.getJob('job-id');
 
- ## SDK (flashq)
+ // Remove job
+ await queue.remove('job-id');
 
- The [flashq](https://www.npmjs.com/package/flashq) SDK provides a type-safe TypeScript interface for bunqueue.
+ // Get counts
+ const counts = await queue.getJobCounts();
+ // { waiting: 10, active: 2, completed: 100, failed: 5 }
 
- ```bash
- bun add flashq
+ // Queue control
+ await queue.pause();
+ await queue.resume();
+ await queue.drain();      // Remove waiting jobs
+ await queue.obliterate(); // Remove ALL data
 ```
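
*Editor's note:* the `priority: 10 // Higher = processed first` option above, together with the "FIFO, LIFO, and priority-based ordering" feature, implies a selection rule of highest priority first with insertion order breaking ties. An illustrative sketch of that rule (not bunqueue's actual scheduler; it assumes numeric ids that grow in insertion order):

```typescript
// Illustrative only: pick the next waiting job by highest priority,
// falling back to FIFO (lowest id = inserted earliest) on priority ties.
interface PendingJob { id: number; priority: number; }

function nextJob(waiting: PendingJob[]): PendingJob | undefined {
  return [...waiting].sort(
    (a, b) => b.priority - a.priority || a.id - b.id
  )[0];
}

const waiting: PendingJob[] = [
  { id: 1, priority: 0 },
  { id: 2, priority: 10 },
  { id: 3, priority: 10 },
];
console.log(nextJob(waiting)?.id); // 2
```

Job 2 wins: it shares the highest priority with job 3 but was enqueued first.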
 
- > **Prerequisites:** A running bunqueue server (see [Quick Install](#quick-install))
-
- ### Basic Usage
+ ### Worker API
 
 ```typescript
- import { Queue, Worker } from 'flashq';
+ import { Worker } from 'bunqueue/client';
 
- // Create a queue
- const queue = new Queue('my-queue', {
-   connection: { host: 'localhost', port: 6789 }
- });
-
- // Add a job
- await queue.add('process-data', { userId: 123, action: 'sync' });
-
- // Add with options
- await queue.add('send-email',
-   { to: 'user@example.com', subject: 'Hello' },
-   {
-     priority: 10,
-     delay: 5000, // 5 seconds
-     attempts: 3,
-     backoff: { type: 'exponential', delay: 1000 }
-   }
- );
-
- // Create a worker
 const worker = new Worker('my-queue', async (job) => {
   console.log('Processing:', job.name, job.data);
 
   // Update progress
-   await job.updateProgress(50);
+   await job.updateProgress(50, 'Halfway done');
 
-   // Do work...
+   // Add log
+   await job.log('Processing step completed');
 
+   // Return result
   return { success: true };
 }, {
-   connection: { host: 'localhost', port: 6789 },
-   concurrency: 5
+   concurrency: 10, // Parallel jobs
+   autorun: true,   // Start automatically
+ });
+
+ // Events
+ worker.on('active', (job) => {
+   console.log(`Job ${job.id} started`);
 });
 
- // Handle events
 worker.on('completed', (job, result) => {
   console.log(`Job ${job.id} completed:`, result);
 });
@@ -265,667 +216,290 @@ worker.on('completed', (job, result) => {
 worker.on('failed', (job, err) => {
   console.error(`Job ${job.id} failed:`, err.message);
 });
- ```
-
- ### Cron Jobs
 
- ```typescript
- import { Queue } from 'flashq';
+ worker.on('progress', (job, progress) => {
+   console.log(`Job ${job.id} progress:`, progress);
+ });
 
- const queue = new Queue('scheduled', {
-   connection: { host: 'localhost', port: 6789 }
+ worker.on('error', (err) => {
+   console.error('Worker error:', err);
 });
 
- // Every hour
- await queue.upsertJobScheduler('hourly-report',
-   { pattern: '0 * * * *' },
-   { name: 'generate-report', data: { type: 'hourly' } }
- );
-
- // Every 5 minutes
- await queue.upsertJobScheduler('health-check',
-   { every: 300000 },
-   { name: 'ping', data: {} }
- );
+ // Control
+ worker.pause();
+ worker.resume();
+ await worker.close();     // Graceful shutdown
+ await worker.close(true); // Force close
 ```
 
- ### Job Dependencies (Flows)
+ ### QueueEvents
+
+ Listen to queue events without processing jobs.
 
 ```typescript
- import { FlowProducer } from 'flashq';
+ import { QueueEvents } from 'bunqueue/client';
 
- const flow = new FlowProducer({
-   connection: { host: 'localhost', port: 6789 }
- });
+ const events = new QueueEvents('my-queue');
 
- // Create a flow with parent-child dependencies
- await flow.add({
-   name: 'final-step',
-   queueName: 'pipeline',
-   data: { step: 'aggregate' },
-   children: [
-     {
-       name: 'step-1',
-       queueName: 'pipeline',
-       data: { step: 'fetch' }
-     },
-     {
-       name: 'step-2',
-       queueName: 'pipeline',
-       data: { step: 'transform' }
-     }
-   ]
+ events.on('waiting', ({ jobId }) => {
+   console.log(`Job ${jobId} waiting`);
 });
- ```
-
- ### Real-time Events
-
- ```typescript
- import { QueueEvents } from 'flashq';
 
- const events = new QueueEvents('my-queue', {
-   connection: { host: 'localhost', port: 6789 }
+ events.on('active', ({ jobId }) => {
+   console.log(`Job ${jobId} active`);
 });
 
 events.on('completed', ({ jobId, returnvalue }) => {
-   console.log(`Job ${jobId} completed with:`, returnvalue);
+   console.log(`Job ${jobId} completed:`, returnvalue);
 });
 
 events.on('failed', ({ jobId, failedReason }) => {
-   console.error(`Job ${jobId} failed:`, failedReason);
+   console.log(`Job ${jobId} failed:`, failedReason);
 });
 
 events.on('progress', ({ jobId, data }) => {
   console.log(`Job ${jobId} progress:`, data);
 });
- ```
-
- For more examples, see the [SDK documentation](https://www.npmjs.com/package/flashq).
-
- ## Quick Start
-
- ### Start the Server
 
- ```bash
- # Using Bun directly
- bun run src/main.ts
-
- # Or with Docker
- docker run -p 6789:6789 -p 6790:6790 ghcr.io/egeominotti/bunqueue
+ await events.close();
 ```
 
- ### Push a Job (HTTP)
-
- ```bash
- curl -X POST http://localhost:6790/queues/emails/jobs \
-   -H "Content-Type: application/json" \
-   -d '{"data": {"to": "user@example.com", "subject": "Hello"}}'
- ```
-
- ### Pull a Job (HTTP)
-
- ```bash
- curl http://localhost:6790/queues/emails/jobs
- ```
-
- ### Acknowledge Completion
-
- ```bash
- curl -X POST http://localhost:6790/jobs/1/ack \
-   -H "Content-Type: application/json" \
-   -d '{"result": {"sent": true}}'
- ```
-
- ## Installation
-
- ### Server + SDK
-
- bunqueue is composed of two packages:
-
- | Package | Description | Install |
- |---------|-------------|---------|
- | **bunqueue** | Job queue server | `bun add bunqueue` |
- | **flashq** | TypeScript SDK | `bun add flashq` |
-
- ```bash
- # Install both
- bun add bunqueue flashq
- ```
-
- ### Quick Setup
-
- ```bash
- # 1. Start the server
- bunqueue
-
- # 2. Use the SDK in your app
- ```
+ ### Shutdown
 
 ```typescript
- import { Queue, Worker } from 'flashq';
+ import { shutdownManager } from 'bunqueue/client';
 
- const queue = new Queue('tasks');
- await queue.add('my-job', { data: 'hello' });
-
- const worker = new Worker('tasks', async (job) => {
-   console.log(job.data);
-   return { done: true };
- });
+ // Cleanup when done
+ shutdownManager();
 ```
 
- ### From Source
-
- ```bash
- git clone https://github.com/egeominotti/bunqueue.git
- cd bunqueue
- bun install
- bun run start
- ```
-
- ### Build Binary
+ ---
 
- ```bash
- bun run build
- ./dist/bunqueue
- ```
+ ## Server Mode
 
- ### Docker
+ ### Start Server
 
 ```bash
- docker pull ghcr.io/egeominotti/bunqueue
- docker run -d \
-   -p 6789:6789 \
-   -p 6790:6790 \
-   -v bunqueue-data:/app/data \
-   ghcr.io/egeominotti/bunqueue
- ```
-
- ### Docker Compose
+ # Basic
+ bunqueue start
 
- ```yaml
- version: "3.8"
- services:
-   bunqueue:
-     image: ghcr.io/egeominotti/bunqueue
-     ports:
-       - "6789:6789"
-       - "6790:6790"
-     volumes:
-       - bunqueue-data:/app/data
-     environment:
-       - AUTH_TOKENS=your-secret-token
+ # With options
+ bunqueue start --tcp-port 6789 --http-port 6790 --data-path ./data/queue.db
 
- volumes:
-   bunqueue-data:
+ # With environment variables
+ DATA_PATH=./data/bunqueue.db AUTH_TOKENS=secret bunqueue start
 ```
 
- ## Usage
-
- ### TCP Protocol (High Performance)
-
- Connect via TCP for maximum throughput. Commands are newline-delimited JSON.
-
- ```bash
- # Connect with netcat
- nc localhost 6789
-
- # Push a job
- {"cmd":"PUSH","queue":"tasks","data":{"action":"process"}}
-
- # Pull a job
- {"cmd":"PULL","queue":"tasks"}
+ ### Environment Variables
 
- # Acknowledge
- {"cmd":"ACK","id":"1"}
+ ```env
+ TCP_PORT=6789
+ HTTP_PORT=6790
+ HOST=0.0.0.0
+ DATA_PATH=./data/bunqueue.db
+ AUTH_TOKENS=token1,token2
 ```
 
- ### HTTP REST API
+ ### HTTP API
 
 ```bash
 # Push job
- curl -X POST http://localhost:6790/queues/tasks/jobs \
+ curl -X POST http://localhost:6790/push \
   -H "Content-Type: application/json" \
-   -d '{
-     "data": {"action": "process"},
-     "priority": 10,
-     "delay": 5000,
-     "maxAttempts": 5
-   }'
+   -d '{"queue":"emails","data":{"to":"user@test.com"},"priority":10}'
 
- # Pull job (with timeout)
- curl "http://localhost:6790/queues/tasks/jobs?timeout=30000"
+ # Pull job
+ curl -X POST http://localhost:6790/pull \
+   -H "Content-Type: application/json" \
+   -d '{"queue":"emails","timeout":5000}'
 
- # Get job by ID
- curl http://localhost:6790/jobs/123
+ # Acknowledge
+ curl -X POST http://localhost:6790/ack \
+   -H "Content-Type: application/json" \
+   -d '{"id":"job-id","result":{"sent":true}}'
 
- # Fail a job
- curl -X POST http://localhost:6790/jobs/123/fail \
+ # Fail
+ curl -X POST http://localhost:6790/fail \
   -H "Content-Type: application/json" \
-   -d '{"error": "Processing failed"}'
+   -d '{"id":"job-id","error":"Failed to send"}'
 
- # Get stats
+ # Stats
 curl http://localhost:6790/stats
- ```
-
- ### WebSocket (Real-time)
 
- ```javascript
- const ws = new WebSocket('ws://localhost:6790/ws');
-
- ws.onmessage = (event) => {
-   const job = JSON.parse(event.data);
-   console.log('Job event:', job);
- };
-
- // Subscribe to specific queue
- const wsQueue = new WebSocket('ws://localhost:6790/ws/queues/emails');
- ```
-
- ### Server-Sent Events (SSE)
-
- ```javascript
- const events = new EventSource('http://localhost:6790/events');
-
- events.onmessage = (event) => {
-   const data = JSON.parse(event.data);
-   console.log('Event:', data);
- };
-
- // Filter by queue
- const queueEvents = new EventSource('http://localhost:6790/events/queues/emails');
- ```
-
- ### Job Options
-
- | Option | Type | Default | Description |
- |--------|------|---------|-------------|
- | `data` | any | required | Job payload |
- | `priority` | number | 0 | Higher = processed first |
- | `delay` | number | 0 | Delay in milliseconds |
- | `maxAttempts` | number | 3 | Max retry attempts |
- | `backoff` | number | 1000 | Initial backoff (ms), doubles each retry |
- | `ttl` | number | null | Time-to-live in milliseconds |
- | `timeout` | number | null | Job processing timeout |
- | `uniqueKey` | string | null | Deduplication key |
- | `jobId` | string | null | Custom job identifier |
- | `dependsOn` | string[] | [] | Job IDs that must complete first |
- | `tags` | string[] | [] | Tags for filtering |
- | `groupId` | string | null | Group identifier |
- | `lifo` | boolean | false | Last-in-first-out ordering |
- | `removeOnComplete` | boolean | false | Auto-delete on completion |
- | `removeOnFail` | boolean | false | Auto-delete on failure |
-
- ### Cron Jobs
-
- ```bash
- # Cron expression (every hour)
- curl -X POST http://localhost:6790/cron \
-   -d '{"cmd":"Cron","name":"hourly-cleanup","queue":"maintenance","data":{"task":"cleanup"},"schedule":"0 * * * *"}'
-
- # Fixed interval (every 5 minutes)
- curl -X POST http://localhost:6790/cron \
-   -d '{"cmd":"Cron","name":"health-check","queue":"monitoring","data":{"check":"ping"},"repeatEvery":300000}'
+ # Health
+ curl http://localhost:6790/health
 
- # With execution limit
- curl -X POST http://localhost:6790/cron \
-   -d '{"cmd":"Cron","name":"one-time-migration","queue":"migrations","data":{},"repeatEvery":0,"maxLimit":1}'
+ # Prometheus metrics
+ curl http://localhost:6790/prometheus
 ```
 
- ## API Reference
-
- ### Core Operations
-
- | Command | Description |
- |---------|-------------|
- | `PUSH` | Add a job to a queue |
- | `PUSHB` | Batch push multiple jobs |
- | `PULL` | Get the next job from a queue |
- | `PULLB` | Batch pull multiple jobs |
- | `ACK` | Mark job as completed |
- | `ACKB` | Batch acknowledge jobs |
- | `FAIL` | Mark job as failed |
-
- ### Query Operations
-
- | Command | Description |
- |---------|-------------|
- | `GetJob` | Get job by ID |
- | `GetState` | Get job state |
- | `GetResult` | Get job result |
- | `GetJobs` | List jobs with filters |
- | `GetJobCounts` | Count jobs by state |
- | `GetJobByCustomId` | Find job by custom ID |
- | `GetProgress` | Get job progress |
- | `GetLogs` | Get job logs |
-
- ### Job Management
-
- | Command | Description |
- |---------|-------------|
- | `Cancel` | Cancel a pending job |
- | `Progress` | Update job progress |
- | `Update` | Update job data |
- | `ChangePriority` | Change job priority |
- | `Promote` | Move delayed job to waiting |
- | `MoveToDelayed` | Delay a waiting job |
- | `Discard` | Discard a job |
- | `Heartbeat` | Send job heartbeat |
- | `AddLog` | Add log entry to job |
-
- ### Queue Control
-
- | Command | Description |
- |---------|-------------|
- | `Pause` | Pause queue processing |
- | `Resume` | Resume queue processing |
- | `IsPaused` | Check if queue is paused |
- | `Drain` | Remove all waiting jobs |
- | `Obliterate` | Remove all queue data |
- | `Clean` | Remove old jobs |
- | `ListQueues` | List all queues |
- | `RateLimit` | Set queue rate limit |
- | `SetConcurrency` | Set max concurrent jobs |
-
- ### Dead Letter Queue
-
- | Command | Description |
- |---------|-------------|
- | `Dlq` | Get failed jobs |
- | `RetryDlq` | Retry failed jobs |
- | `PurgeDlq` | Clear failed jobs |
-
- ### Scheduling
-
- | Command | Description |
- |---------|-------------|
- | `Cron` | Create/update cron job |
- | `CronDelete` | Delete cron job |
- | `CronList` | List all cron jobs |
-
- ### Workers & Webhooks
-
- | Command | Description |
- |---------|-------------|
- | `RegisterWorker` | Register a worker |
- | `UnregisterWorker` | Unregister a worker |
- | `ListWorkers` | List active workers |
- | `AddWebhook` | Add webhook endpoint |
- | `RemoveWebhook` | Remove webhook |
- | `ListWebhooks` | List webhooks |
-
- ### Monitoring
-
- | Command | Description |
- |---------|-------------|
- | `Stats` | Get server statistics |
- | `Metrics` | Get job metrics |
- | `Prometheus` | Prometheus format metrics |
-
- ## Configuration
+ ### TCP Protocol
 
- ### Environment Variables
-
- | Variable | Default | Description |
- |----------|---------|-------------|
- | `TCP_PORT` | 6789 | TCP protocol port |
- | `HTTP_PORT` | 6790 | HTTP/WebSocket port |
- | `HOST` | 0.0.0.0 | Bind address |
- | `AUTH_TOKENS` | - | Comma-separated auth tokens |
- | `DATA_PATH` | - | SQLite database path (in-memory if not set) |
- | `CORS_ALLOW_ORIGIN` | * | Allowed CORS origins |
-
- ### S3 Backup Configuration
-
- | Variable | Default | Description |
- |----------|---------|-------------|
- | `S3_BACKUP_ENABLED` | 0 | Enable automated S3 backups (1/true) |
- | `S3_ACCESS_KEY_ID` | - | S3 access key |
- | `S3_SECRET_ACCESS_KEY` | - | S3 secret key |
- | `S3_BUCKET` | - | S3 bucket name |
- | `S3_REGION` | us-east-1 | S3 region |
- | `S3_ENDPOINT` | - | Custom endpoint for S3-compatible services |
- | `S3_BACKUP_INTERVAL` | 21600000 | Backup interval in ms (default: 6 hours) |
- | `S3_BACKUP_RETENTION` | 7 | Number of backups to retain |
- | `S3_BACKUP_PREFIX` | backups/ | Prefix for backup files |
-
- **Supported S3 Providers:**
- - AWS S3
- - Cloudflare R2: `S3_ENDPOINT=https://<account-id>.r2.cloudflarestorage.com`
- - MinIO: `S3_ENDPOINT=http://localhost:9000`
- - DigitalOcean Spaces: `S3_ENDPOINT=https://<region>.digitaloceanspaces.com`
-
- **CLI Commands:**
 ```bash
- # Create backup immediately
- bunqueue backup now
-
- # List available backups
- bunqueue backup list
-
- # Restore from backup (requires --force)
- bunqueue backup restore backups/bunqueue-2024-01-15.db --force
+ nc localhost 6789
 
- # Show backup status
- bunqueue backup status
+ # Commands (JSON)
+ {"cmd":"PUSH","queue":"tasks","data":{"action":"process"}}
+ {"cmd":"PULL","queue":"tasks","timeout":5000}
+ {"cmd":"ACK","id":"1","result":{"done":true}}
+ {"cmd":"FAIL","id":"1","error":"Something went wrong"}
```
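
*Editor's note:* the 1.2.x README described these TCP commands as newline-delimited JSON, and a raw TCP read can end in the middle of a message. A sketch of the framing a client would need, assuming `makeLineDecoder` is a hypothetical helper and not part of bunqueue:

```typescript
// Illustrative only: buffer incoming TCP chunks and emit one parsed JSON
// message per '\n'-terminated line, tolerating frames split across reads.
function makeLineDecoder(onMessage: (msg: unknown) => void) {
  let buffer = '';
  return (chunk: string) => {
    buffer += chunk;
    let idx: number;
    while ((idx = buffer.indexOf('\n')) !== -1) {
      const line = buffer.slice(0, idx);
      buffer = buffer.slice(idx + 1);
      if (line.trim()) onMessage(JSON.parse(line));
    }
  };
}

const seen: unknown[] = [];
const feed = makeLineDecoder((m) => seen.push(m));
feed('{"cmd":"ACK","i');        // partial frame: buffered
feed('d":"1"}\n{"cmd":"PULL"'); // completes the first frame
feed(',"queue":"tasks"}\n');    // completes the second frame
console.log(seen.length); // 2
```

In a real client the `feed` function would be wired to the socket's data event, with outgoing commands written as `JSON.stringify(cmd) + '\n'`.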
 
- ### Authentication
-
- Enable authentication by setting `AUTH_TOKENS`:
+ ---
 
- ```bash
- AUTH_TOKENS=token1,token2 bun run start
- ```
+ ## CLI
 
- **HTTP:**
 ```bash
- curl -H "Authorization: Bearer token1" http://localhost:6790/stats
- ```
+ # Server
+ bunqueue start
+ bunqueue start --tcp-port 6789 --http-port 6790
 
- **TCP:**
- ```json
- {"cmd":"Auth","token":"token1"}
- ```
-
- **WebSocket:**
- ```json
- {"cmd":"Auth","token":"token1"}
- ```
-
- ## Monitoring
-
- ### Health Check
+ # Jobs
+ bunqueue push emails '{"to":"user@test.com"}'
+ bunqueue push tasks '{"action":"sync"}' --priority 10 --delay 5000
+ bunqueue pull emails --timeout 5000
+ bunqueue ack <job-id>
+ bunqueue fail <job-id> --error "Failed"
 
- ```bash
- curl http://localhost:6790/health
- # {"ok":true,"status":"healthy"}
- ```
+ # Job management
+ bunqueue job get <id>
+ bunqueue job progress <id> 50 --message "Processing"
+ bunqueue job cancel <id>
 
- ### Prometheus Metrics
+ # Queue control
+ bunqueue queue list
+ bunqueue queue pause emails
+ bunqueue queue resume emails
+ bunqueue queue drain emails
 
- ```bash
- curl http://localhost:6790/prometheus
- ```
+ # Cron
+ bunqueue cron list
+ bunqueue cron add cleanup -q maintenance -d '{}' -s "0 * * * *"
+ bunqueue cron delete cleanup
 
- Metrics include:
- - `bunqueue_jobs_total{queue,state}` Job counts by state
- - `bunqueue_jobs_processed_total{queue}` Total processed jobs
- - `bunqueue_jobs_failed_total{queue}` Total failed jobs
- - `bunqueue_queue_latency_seconds{queue}` — Processing latency
+ # DLQ
+ bunqueue dlq list emails
+ bunqueue dlq retry emails
+ bunqueue dlq purge emails
 
- ### Statistics
+ # Monitoring
+ bunqueue stats
+ bunqueue metrics
+ bunqueue health
 
- ```bash
- curl http://localhost:6790/stats
+ # Backup (S3)
+ bunqueue backup now
+ bunqueue backup list
+ bunqueue backup restore <key> --force
 ```
395
 
762
- ```json
763
- {
764
- "ok": true,
765
- "stats": {
766
- "waiting": 150,
767
- "active": 10,
768
- "delayed": 25,
769
- "completed": 10000,
770
- "failed": 50,
771
- "dlq": 5,
772
- "totalPushed": 10235,
773
- "totalPulled": 10085,
774
- "totalCompleted": 10000,
775
- "totalFailed": 50
776
- }
777
- }
778
- ```
396
+ ---
779

  ## Docker

- ### Build
-
  ```bash
- docker build -t bunqueue .
- ```
-
- ### Run
-
- ```bash
- # Basic
- docker run -p 6789:6789 -p 6790:6790 bunqueue
+ # Run
+ docker run -p 6789:6789 -p 6790:6790 ghcr.io/egeominotti/bunqueue

  # With persistence
  docker run -p 6789:6789 -p 6790:6790 \
    -v bunqueue-data:/app/data \
    -e DATA_PATH=/app/data/bunqueue.db \
-   bunqueue
+   ghcr.io/egeominotti/bunqueue

- # With authentication
+ # With auth
  docker run -p 6789:6789 -p 6790:6790 \
-   -e AUTH_TOKENS=secret1,secret2 \
-   bunqueue
+   -e AUTH_TOKENS=secret \
+   ghcr.io/egeominotti/bunqueue
  ```
 
  ### Docker Compose

- ```bash
- # Production
- docker compose up -d
+ ```yaml
+ version: "3.8"
+ services:
+   bunqueue:
+     image: ghcr.io/egeominotti/bunqueue
+     ports:
+       - "6789:6789"
+       - "6790:6790"
+     volumes:
+       - bunqueue-data:/app/data
+     environment:
+       - DATA_PATH=/app/data/bunqueue.db
+       - AUTH_TOKENS=your-secret-token

- # Development (hot reload)
- docker compose --profile dev up bunqueue-dev
+ volumes:
+   bunqueue-data:
  ```

- ## Deployment
-
- bunqueue requires a **persistent server** with filesystem access for SQLite. It is **not compatible** with serverless platforms like Vercel or Cloudflare Workers.
-
- ### Compatible Platforms
-
- | Platform | Bun | SQLite | TCP | Notes |
- |----------|:---:|:------:|:---:|-------|
- | [Fly.io](https://fly.io) | ✅ | ✅ | ✅ | Recommended - persistent volumes, global deployment |
- | [Railway](https://railway.app) | ✅ | ✅ | ✅ | Easy deploy from GitHub |
- | [Render](https://render.com) | ✅ | ✅ | ✅ | Docker support, persistent disks |
- | [DigitalOcean](https://digitalocean.com) | ✅ | ✅ | ✅ | App Platform or Droplets |
- | Any VPS | ✅ | ✅ | ✅ | Full control |
-
- ### Fly.io (Recommended)
-
- ```bash
- # Install flyctl
- curl -L https://fly.io/install.sh | sh
-
- # Launch (uses existing Dockerfile)
- fly launch
-
- # Create persistent volume for SQLite
- fly volumes create bunqueue_data --size 1
-
- # Set secrets
- fly secrets set AUTH_TOKENS=your-secret-token
-
- # Deploy
- fly deploy
- ```
+ ---

- Add to `fly.toml`:
- ```toml
- [mounts]
- source = "bunqueue_data"
- destination = "/app/data"
+ ## S3 Backup

- [env]
- DATA_PATH = "/app/data/bunqueue.db"
+ ```env
+ S3_BACKUP_ENABLED=1
+ S3_ACCESS_KEY_ID=your-key
+ S3_SECRET_ACCESS_KEY=your-secret
+ S3_BUCKET=my-bucket
+ S3_REGION=us-east-1
+ S3_BACKUP_INTERVAL=21600000  # 6 hours
+ S3_BACKUP_RETENTION=7
  ```

- ### Railway
+ Supported providers: AWS S3, Cloudflare R2, MinIO, DigitalOcean Spaces.

- [![Deploy on Railway](https://railway.app/button.svg)](https://railway.app/template)
+ ---

- ```bash
- # Or via CLI
- railway login
- railway init
- railway up
- ```
+ ## When to Use What?

- ### Not Compatible
+ | Scenario | Mode |
+ |----------|------|
+ | Single app, monolith | **Embedded** |
+ | Scripts, CLI tools | **Embedded** |
+ | Serverless (with persistence) | **Embedded** |
+ | Microservices | **Server** |
+ | Multiple languages | **Server** (HTTP API) |
+ | Horizontal scaling | **Server** |

- | Platform | Reason |
- |----------|--------|
- | Vercel | Serverless functions, no persistent filesystem, no TCP |
- | Cloudflare Workers | V8 isolates (not Bun), no filesystem, no TCP |
- | AWS Lambda | Serverless, no persistent storage |
- | Netlify Functions | Serverless, no filesystem |
+ ---
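To make the **Embedded** rows above concrete, here is a rough sketch of in-process usage. The `Queue` and `Worker` names and the `bunqueue/client` entry point come from the architecture notes in this README; the method names, options, and job shape below are illustrative assumptions and have not been verified against the SDK:

```ts
// Hypothetical embedded-mode sketch, not the documented API.
// `Queue`/`Worker` and "bunqueue/client" appear in this README's
// architecture diagram; `add` and the job fields are assumptions.
import { Queue, Worker } from "bunqueue/client";

const emails = new Queue("emails");
await emails.add("welcome", { to: "user@example.com" });

// Runs in the same process as the producer; no server required.
new Worker("emails", async (job) => {
  console.log(job.name, job.data);
});
```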

  ## Architecture

  ```
  ┌─────────────────────────────────────────────────────────────┐
- │                       bunqueue Server                       │
+ │                          bunqueue                           │
  ├─────────────────────────────────────────────────────────────┤
- │    HTTP/WS (Bun.serve)         TCP Protocol (Bun.listen)   │
+ │        Embedded Mode         │         Server Mode          │
+ │      (bunqueue/client)       │      (bunqueue start)        │
+ │                              │                              │
+ │        Queue, Worker         │   TCP (6789) + HTTP (6790)   │
+ │          in-process          │        multi-process         │
  ├─────────────────────────────────────────────────────────────┤
- │                         Core Engine                         │
- │  ┌───────────┐  ┌───────────┐  ┌───────────┐  ┌───────────┐ │
- │  │  Queues   │  │  Workers  │  │ Scheduler │  │    DLQ    │ │
- │  │(32 shards)│  │           │  │  (Cron)   │  │           │ │
- │  └───────────┘  └───────────┘  └───────────┘  └───────────┘ │
+ │                         Core Engine                         │
+ │  ┌───────────┐  ┌───────────┐  ┌───────────┐  ┌───────────┐ │
+ │  │  Queues   │  │  Workers  │  │ Scheduler │  │    DLQ    │ │
+ │  │(32 shards)│  │           │  │  (Cron)   │  │           │ │
+ │  └───────────┘  └───────────┘  └───────────┘  └───────────┘ │
  ├─────────────────────────────────────────────────────────────┤
- │              SQLite (WAL mode, 256MB mmap)                  │
+ │              SQLite (WAL mode, 256MB mmap)                  │
  └─────────────────────────────────────────────────────────────┘
  ```

- ### Performance Optimizations
-
- - **32 Shards** — Lock contention minimized with FNV-1a hash distribution
- - **WAL Mode** — Concurrent reads during writes
- - **Memory-mapped I/O** — 256MB mmap for fast access
- - **Batch Operations** — Bulk inserts and updates
- - **Bounded Collections** — Automatic memory cleanup
+ ---

  ## Contributing

- Contributions are welcome! Please read our contributing guidelines before submitting PRs.
-
  ```bash
- # Install dependencies
  bun install
-
- # Run tests
  bun test
-
- # Run linter
  bun run lint
-
- # Format code
  bun run format
-
- # Type check
- bun run typecheck
-
- # Run all checks
  bun run check
  ```

+ ---
+
  ## License

  MIT License — see [LICENSE](LICENSE) for details.