glide-mq 0.11.0 → 0.11.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +22 -25
- package/package.json +1 -1
package/README.md
CHANGED
````diff
@@ -4,6 +4,7 @@
 [](https://github.com/avifenesh/glide-mq/blob/main/LICENSE)
 [](https://github.com/avifenesh/glide-mq/actions/workflows/ci.yml)
 [](https://nodejs.org/)
+[](CHANGELOG.md)
 
 High-performance message queue for Node.js built on Valkey/Redis Streams with 1-RTT job operations and cluster-native design.
 
````
````diff
@@ -13,7 +14,7 @@ glide-mq is for anyone building background jobs, task queues, or workflow orches
 
 ## Why glide-mq
 
-- Use this when you need **throughput**:
+- Use this when you need **throughput**: 25,000+ jobs/s single-node with 1 RTT per job via Valkey Server Functions.
 - Use this when you run **Valkey/Redis clusters**: all keys hash-tagged out of the box, no `{braces}` workarounds.
 - Use this when you need **workflows**: parent-child trees, DAGs with fan-in, step jobs, batch processing, and cron scheduling in one library.
 - Use this when you deploy to **serverless**: lightweight `Producer` and `ServerlessPool` cache connections across warm invocations.
````
````diff
@@ -46,32 +47,26 @@ worker.on('completed', (job) => console.log(`Job ${job.id} done`));
 worker.on('failed', (job, err) => console.error(`Job ${job.id} failed:`, err.message));
 ```
 
-##
+## Performance
 
-
-
-
-
-| c=10 | 15,504 jobs/s |
-| c=50 | 48,077 jobs/s |
+- **1 RTT per job** -- `completeAndFetchNext` completes the current job and fetches the next in a single FCALL
+- **25,000+ jobs/s** single-node floor (bare-metal Linux, localhost); scales higher with network pipelining
+- **addBulk**: 10,000 jobs in 350 ms
+- **Gzip compression**: 98% payload reduction on 15 KB payloads
 
-
-Gzip compression: **98% payload reduction** on 15 KB payloads.
+Throughput scales with concurrency up to the Valkey single-thread FCALL execution ceiling. Deployments with network latency between app and Valkey benefit from glide's auto-pipelining -- higher concurrency batches more commands per wire write. Run `npm run bench` to measure your environment.
 
-
+## How it's different
 
-
-
-
-
-| **
-| **
-| **
-| **
-| **
-| **Pub/sub** | Native Broadcast with subject filtering | Not supported | Not supported |
-| **Serverless** | Producer + ServerlessPool | Not supported | Not supported |
-| **Throughput** | 48k jobs/s (c=50) | ~12k jobs/s (c=50) | ~5k jobs/s (c=50) |
+| Aspect | glide-mq approach |
+|--------|-------------------|
+| **Network per job** | 1 RTT -- complete current job + fetch next in a single FCALL |
+| **Client** | Rust NAPI bindings ([valkey-glide](https://github.com/valkey-io/valkey-glide)) -- no JS protocol parsing |
+| **Server logic** | 1 persistent Valkey Function library (FUNCTION LOAD + FCALL) -- no per-call EVAL recompilation |
+| **Cluster** | Hash-tagged keys (`glide:{queueName}:*`) -- all queue data routes to the same slot automatically |
+| **Workflows** | FlowProducer trees, DAGs with fan-in, chain/group/chord, step jobs, dynamic children |
+| **Pub/sub** | Broadcast with NATS-style subject filtering, independent subscriber retries |
+| **Serverless** | Lightweight `Producer` + `ServerlessPool` for Lambda/Edge with connection reuse |
 
 ## Core concepts
 
````
````diff
@@ -156,7 +151,7 @@ Gzip compression: **98% payload reduction** on 15 KB payloads.
 | [`@glidemq/fastify`](https://github.com/avifenesh/glidemq-fastify) | `npm i @glidemq/fastify` | `app.register(glideMQPlugin, { connection, queues: { ... } })` |
 | [`@glidemq/nestjs`](https://github.com/avifenesh/glidemq-nestjs) | `npm i @glidemq/nestjs` | `GlideMQModule.forRoot({ connection, queues: { ... } })` |
 | [`@glidemq/dashboard`](https://github.com/avifenesh/glidemq-dashboard) | `npm i @glidemq/dashboard` | `app.use('/dashboard', createDashboard([queue1, queue2]))` |
-
+| [`@glidemq/hapi`](https://github.com/avifenesh/glidemq-hapi) | `npm i @glidemq/hapi` | `await server.register({ plugin: glideMQPlugin, options: { connection, queues } })` |
 
 All framework packages provide REST endpoints, SSE events, and serverless Producer support. See each package's README for full documentation.
 
````
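The serverless Producer support these packages advertise depends on reusing a connection across warm invocations. A minimal sketch of the module-scope caching pattern such a pool relies on — `connect`, `getClient`, and `handler` are illustrative stand-ins, not glide-mq's API:

```javascript
// Module scope survives across warm serverless invocations in the same
// container, so a cached client is created once per container, not per request.
let cachedClient = null;
let connectCount = 0;

function connect() {
  // Stand-in for opening a real Valkey connection (e.g. via valkey-glide).
  connectCount += 1;
  return { id: connectCount };
}

function getClient() {
  if (!cachedClient) cachedClient = connect(); // pay the cost on cold start only
  return cachedClient;
}

function handler(event) {
  const client = getClient();
  return `sent ${event} on connection ${client.id}`;
}

// Two warm invocations of the same container reuse one connection:
handler('job-a');
handler('job-b');
console.log(`connections opened: ${connectCount}`); // connections opened: 1
```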
````diff
@@ -188,7 +183,9 @@ For zero-overhead integration, call Valkey Server Functions directly from any la
 
 | Guide | Topics |
 |-------|--------|
-| [Usage](docs/USAGE.md) | Queue, Worker,
+| [Usage](docs/USAGE.md) | Queue, Worker, Producer, batch, request-reply, graceful shutdown, cluster mode |
+| [Broadcast](docs/BROADCAST.md) | Pub/sub fan-out, BroadcastWorker, subject filtering |
+| [Step Jobs](docs/STEP_JOBS.md) | `moveToDelayed`, `moveToWaitingChildren`, multi-step processors |
 | [Advanced](docs/ADVANCED.md) | Schedulers, rate limiting, dedup, compression, retries, DLQ, custom IDs, LIFO, TTL, serializers |
 | [Workflows](docs/WORKFLOWS.md) | FlowProducer, DAG, `chain`, `group`, `chord`, dynamic children |
 | [Observability](docs/OBSERVABILITY.md) | OpenTelemetry, time-series metrics, job logs, dashboard |
````