glide-mq 0.11.0 → 0.11.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (2)
  1. package/README.md +22 -25
  2. package/package.json +1 -1
package/README.md CHANGED
@@ -4,6 +4,7 @@
  [![license](https://img.shields.io/npm/l/glide-mq)](https://github.com/avifenesh/glide-mq/blob/main/LICENSE)
  [![CI](https://github.com/avifenesh/glide-mq/actions/workflows/ci.yml/badge.svg)](https://github.com/avifenesh/glide-mq/actions/workflows/ci.yml)
  [![node](https://img.shields.io/node/v/glide-mq)](https://nodejs.org/)
+ [![changelog](https://img.shields.io/badge/changelog-CHANGELOG.md-blue)](CHANGELOG.md)
 
  High-performance message queue for Node.js built on Valkey/Redis Streams with 1-RTT job operations and cluster-native design.
 
@@ -13,7 +14,7 @@ glide-mq is for anyone building background jobs, task queues, or workflow orches
 
  ## Why glide-mq
 
- - Use this when you need **throughput**: 48k jobs/s at concurrency=50, 4x faster than BullMQ on the same hardware.
+ - Use this when you need **throughput**: 25,000+ jobs/s single-node with 1 RTT per job via Valkey Server Functions.
  - Use this when you run **Valkey/Redis clusters**: all keys hash-tagged out of the box, no `{braces}` workarounds.
  - Use this when you need **workflows**: parent-child trees, DAGs with fan-in, step jobs, batch processing, and cron scheduling in one library.
  - Use this when you deploy to **serverless**: lightweight `Producer` and `ServerlessPool` cache connections across warm invocations.
@@ -46,32 +47,26 @@ worker.on('completed', (job) => console.log(`Job ${job.id} done`));
  worker.on('failed', (job, err) => console.error(`Job ${job.id} failed:`, err.message));
  ```
 
- ## Benchmarks
+ ## Performance
 
- | Concurrency | Throughput |
- |-------------|-----------|
- | c=1 | 4,376 jobs/s |
- | c=5 | 14,925 jobs/s |
- | c=10 | 15,504 jobs/s |
- | c=50 | 48,077 jobs/s |
+ - **1 RTT per job** -- `completeAndFetchNext` completes the current job and fetches the next in a single FCALL
+ - **25,000+ jobs/s** single-node floor (bare-metal Linux, localhost); scales higher with network pipelining
+ - **addBulk**: 10,000 jobs in 350 ms
+ - **Gzip compression**: 98% payload reduction on 15 KB payloads
 
- `addBulk` batch API: **1,000 jobs in 18 ms** (12.7x faster than serial).
- Gzip compression: **98% payload reduction** on 15 KB payloads.
+ Throughput scales with concurrency up to the Valkey single-thread FCALL execution ceiling. Deployments with network latency between app and Valkey benefit from glide's auto-pipelining -- higher concurrency batches more commands per wire write. Run `npm run bench` to measure your environment.
 
- *Valkey 8.0, single node, no-op processor. Run `npm run bench` to reproduce.*
+ ## How it's different
 
- ## Comparison
-
- | | glide-mq | BullMQ | Bee Queue |
- |---|---|---|---|
- | **Network per job** | 1 RTT (`completeAndFetchNext`) | 4-7 RTTs (lock + complete + fetch) | 2-3 RTTs |
- | **Client** | Rust NAPI ([valkey-glide](https://github.com/valkey-io/valkey-glide)) | ioredis (pure JS) | node_redis (pure JS) |
- | **Server logic** | 1 Valkey Function library (persistent, named) | 53 EVAL scripts (cache-miss prone) | Lua scripts |
- | **Cluster** | Hash-tagged keys, zero config | Manual `{braces}` or workarounds | Not supported |
- | **Workflows** | FlowProducer trees, DAG, chain/group/chord | FlowProducer trees | Not supported |
- | **Pub/sub** | Native Broadcast with subject filtering | Not supported | Not supported |
- | **Serverless** | Producer + ServerlessPool | Not supported | Not supported |
- | **Throughput** | 48k jobs/s (c=50) | ~12k jobs/s (c=50) | ~5k jobs/s (c=50) |
+ | Aspect | glide-mq approach |
+ |--------|-------------------|
+ | **Network per job** | 1 RTT -- complete current job + fetch next in a single FCALL |
+ | **Client** | Rust NAPI bindings ([valkey-glide](https://github.com/valkey-io/valkey-glide)) -- no JS protocol parsing |
+ | **Server logic** | 1 persistent Valkey Function library (FUNCTION LOAD + FCALL) -- no per-call EVAL recompilation |
+ | **Cluster** | Hash-tagged keys (`glide:{queueName}:*`) -- all queue data routes to the same slot automatically |
+ | **Workflows** | FlowProducer trees, DAGs with fan-in, chain/group/chord, step jobs, dynamic children |
+ | **Pub/sub** | Broadcast with NATS-style subject filtering, independent subscriber retries |
+ | **Serverless** | Lightweight `Producer` + `ServerlessPool` for Lambda/Edge with connection reuse |
 
  ## Core concepts
 
@@ -156,7 +151,7 @@ Gzip compression: **98% payload reduction** on 15 KB payloads.
  | [`@glidemq/fastify`](https://github.com/avifenesh/glidemq-fastify) | `npm i @glidemq/fastify` | `app.register(glideMQPlugin, { connection, queues: { ... } })` |
  | [`@glidemq/nestjs`](https://github.com/avifenesh/glidemq-nestjs) | `npm i @glidemq/nestjs` | `GlideMQModule.forRoot({ connection, queues: { ... } })` |
  | [`@glidemq/dashboard`](https://github.com/avifenesh/glidemq-dashboard) | `npm i @glidemq/dashboard` | `app.use('/dashboard', createDashboard([queue1, queue2]))` |
- | @glidemq/hapi | coming soon | Hapi plugin with the same REST + SSE surface |
+ | [`@glidemq/hapi`](https://github.com/avifenesh/glidemq-hapi) | `npm i @glidemq/hapi` | `await server.register({ plugin: glideMQPlugin, options: { connection, queues } })` |
 
  All framework packages provide REST endpoints, SSE events, and serverless Producer support. See each package's README for full documentation.
 
@@ -188,7 +183,9 @@ For zero-overhead integration, call Valkey Server Functions directly from any la
 
  | Guide | Topics |
  |-------|--------|
- | [Usage](docs/USAGE.md) | Queue, Worker, Broadcast, Producer, batch, request-reply, step jobs, graceful shutdown, cluster mode |
+ | [Usage](docs/USAGE.md) | Queue, Worker, Producer, batch, request-reply, graceful shutdown, cluster mode |
+ | [Broadcast](docs/BROADCAST.md) | Pub/sub fan-out, BroadcastWorker, subject filtering |
+ | [Step Jobs](docs/STEP_JOBS.md) | `moveToDelayed`, `moveToWaitingChildren`, multi-step processors |
  | [Advanced](docs/ADVANCED.md) | Schedulers, rate limiting, dedup, compression, retries, DLQ, custom IDs, LIFO, TTL, serializers |
  | [Workflows](docs/WORKFLOWS.md) | FlowProducer, DAG, `chain`, `group`, `chord`, dynamic children |
  | [Observability](docs/OBSERVABILITY.md) | OpenTelemetry, time-series metrics, job logs, dashboard |
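The README's cluster claim rests on the standard Valkey/Redis hash-tag rule: when a key contains `{...}`, only the substring inside the braces is hashed, so every `glide:{queueName}:*` key maps to the same slot. A standalone sketch of that slotting rule (CRC16-XMODEM mod 16384, per the cluster specification; this is not glide-mq code):

```javascript
// CRC16-CCITT (XMODEM), the variant the cluster spec mandates for key slots:
// polynomial 0x1021, initial value 0, MSB-first.
function crc16(buf) {
  let crc = 0;
  for (const byte of buf) {
    crc ^= byte << 8;
    for (let i = 0; i < 8; i++) {
      crc = crc & 0x8000 ? ((crc << 1) ^ 0x1021) & 0xffff : (crc << 1) & 0xffff;
    }
  }
  return crc;
}

// slot = CRC16(hashed part) mod 16384. If the key contains a non-empty
// "{tag}", only the tag between the first '{' and the next '}' is hashed.
function keySlot(key) {
  const open = key.indexOf('{');
  if (open !== -1) {
    const close = key.indexOf('}', open + 1);
    if (close > open + 1) key = key.slice(open + 1, close);
  }
  return crc16(Buffer.from(key)) % 16384;
}

// Both keys hash only "orders", so they land on one slot -- which is what
// lets a single FCALL touch all of a queue's data atomically in cluster mode.
console.log(keySlot('glide:{orders}:stream'), keySlot('glide:{orders}:meta'));
```

Because the slot depends only on the tag, adding more per-queue keys (`:meta`, `:events`, ...) never spreads a queue across slots; the trade-off is that one very hot queue cannot be sharded across nodes.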
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "glide-mq",
- "version": "0.11.0",
+ "version": "0.11.1",
  "description": "High-performance message queue for Node.js built on Valkey/Redis with native NAPI bindings",
  "main": "dist/index.js",
  "types": "dist/index.d.ts",