glide-mq 0.12.0 → 0.14.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +64 -2
- package/README.md +161 -197
- package/dist/base-worker.d.ts +21 -3
- package/dist/base-worker.d.ts.map +1 -1
- package/dist/base-worker.js +215 -6
- package/dist/base-worker.js.map +1 -1
- package/dist/broadcast-worker.d.ts.map +1 -1
- package/dist/broadcast-worker.js +3 -12
- package/dist/broadcast-worker.js.map +1 -1
- package/dist/connection.d.ts.map +1 -1
- package/dist/connection.js +3 -0
- package/dist/connection.js.map +1 -1
- package/dist/errors.d.ts +12 -0
- package/dist/errors.d.ts.map +1 -1
- package/dist/errors.js +16 -1
- package/dist/errors.js.map +1 -1
- package/dist/flow-producer.d.ts +18 -2
- package/dist/flow-producer.d.ts.map +1 -1
- package/dist/flow-producer.js +59 -2
- package/dist/flow-producer.js.map +1 -1
- package/dist/functions/index.d.ts +27 -2
- package/dist/functions/index.d.ts.map +1 -1
- package/dist/functions/index.js +342 -37
- package/dist/functions/index.js.map +1 -1
- package/dist/index.d.ts +3 -2
- package/dist/index.d.ts.map +1 -1
- package/dist/index.js +2 -1
- package/dist/index.js.map +1 -1
- package/dist/job.d.ts +93 -1
- package/dist/job.d.ts.map +1 -1
- package/dist/job.js +211 -0
- package/dist/job.js.map +1 -1
- package/dist/proxy/routes.d.ts.map +1 -1
- package/dist/proxy/routes.js +67 -0
- package/dist/proxy/routes.js.map +1 -1
- package/dist/queue-events.d.ts.map +1 -1
- package/dist/queue-events.js +1 -4
- package/dist/queue-events.js.map +1 -1
- package/dist/queue.d.ts +89 -1
- package/dist/queue.d.ts.map +1 -1
- package/dist/queue.js +440 -4
- package/dist/queue.js.map +1 -1
- package/dist/scheduler.d.ts +5 -0
- package/dist/scheduler.d.ts.map +1 -1
- package/dist/scheduler.js +15 -1
- package/dist/scheduler.js.map +1 -1
- package/dist/telemetry.js +2 -2
- package/dist/testing.d.ts +178 -3
- package/dist/testing.d.ts.map +1 -1
- package/dist/testing.js +472 -3
- package/dist/testing.js.map +1 -1
- package/dist/types.d.ts +219 -1
- package/dist/types.d.ts.map +1 -1
- package/dist/types.js.map +1 -1
- package/dist/utils.d.ts +18 -1
- package/dist/utils.d.ts.map +1 -1
- package/dist/utils.js +75 -4
- package/dist/utils.js.map +1 -1
- package/dist/worker.d.ts.map +1 -1
- package/dist/worker.js +3 -12
- package/dist/worker.js.map +1 -1
- package/package.json +24 -5
package/CHANGELOG.md
CHANGED

@@ -6,6 +6,70 @@ The format follows [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
 
 ---
 
+## [0.14.0] - 2026-03-28
+
+### Breaking Changes
+
+- **JobUsage redesigned**: `inputTokens`/`outputTokens` replaced with `tokens: Record<string, number>` for extensible category tracking (input, output, reasoning, cachedInput, etc.)
+- **Cost tracking redesigned**: `costUsd` replaced with `costs: Record<string, number>` + `costUnit` for currency-agnostic per-category cost tracking
+- **BudgetOptions expanded**: `maxCostUsd` replaced with `maxTotalCost`. Added `maxTokens` (per-category caps), `tokenWeights` (weighted totals), `maxCosts` (per-category cost caps), `costUnit`
+- **getFlowUsage return type changed**: `totalInputTokens`/`totalOutputTokens`/`totalCostUsd` replaced with `tokens`/`costs` maps + `totalTokens`/`totalCost`
+
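The reshaped `JobUsage` above can be illustrated with a small standalone sketch. The types mirror the changelog entry; `legacyToTokens` and `totalTokens` are illustrative helpers written for this note, not glide-mq exports:

```typescript
// Sketch of the 0.14.0 usage shape: tokens as an open category map instead of
// two fixed fields. New categories (reasoning, cachedInput, ...) just add keys.
type LegacyUsage = { inputTokens: number; outputTokens: number };
type JobUsage = { tokens: Record<string, number> };

// Hypothetical migration helper, not part of the glide-mq API.
function legacyToTokens(u: LegacyUsage): JobUsage {
  return { tokens: { input: u.inputTokens, output: u.outputTokens } };
}

// Derives the total by summing every category.
function totalTokens(u: JobUsage): number {
  return Object.values(u.tokens).reduce((sum, n) => sum + n, 0);
}

const migrated = legacyToTokens({ inputTokens: 50, outputTokens: 200 });
console.log(totalTokens(migrated)); // 250
console.log(totalTokens({ tokens: { input: 50, output: 200, reasoning: 800 } })); // 1050
```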
+### Added
+
+- `job.streamChunk(type, content?)` - typed streaming convenience for reasoning vs content chunks
+- Per-category budget enforcement with independent limits per token/cost category
+- Weighted token budgets - reasoning tokens can count 4x toward budget
+- `ConnectionOptions.requestTimeout` - configurable command timeout (was hardcoded 500ms)
+- 9 new examples: thinking-model, cost-breakdown, budget-weighted, reasoning-stream, agent-budget-loop, multi-model-cost, fallback-usage, streaming-sse, batch-embed-tpm
+- Upgraded to valkey-search 1.2 in test infrastructure (compose.yaml)
+- Bumped speedkey to 0.3.0-rc1
+
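The weighted-budget entry above implies a simple accounting rule: multiply each category's count by its weight (defaulting to 1) before comparing against a single cap. A standalone sketch, with `weightedTotal` as a hypothetical helper rather than the library's implementation:

```typescript
// Weighted token accounting: each category contributes count * weight,
// with a default weight of 1 for unlisted categories.
function weightedTotal(
  tokens: Record<string, number>,
  tokenWeights: Record<string, number> = {},
): number {
  return Object.entries(tokens).reduce(
    (sum, [category, count]) => sum + count * (tokenWeights[category] ?? 1),
    0,
  );
}

const usage = { input: 1000, output: 500, reasoning: 2000 };
const weights = { reasoning: 4 }; // reasoning tokens count 4x toward the budget

console.log(weightedTotal(usage, weights)); // 1000 + 500 + 8000 = 9500
console.log(weightedTotal(usage)); // unweighted: 3500
```

A budget of, say, 9000 would pass on raw counts (3500) but trip once reasoning is weighted (9500).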
+### Fixed
+
+- Budget bypass when only `totalTokens` was reported without a `tokens` breakdown
+- `JSON.parse` null safety in budget and usage parsing
+- Prototype pollution prevention with `Object.create(null)` in aggregation maps
+- DAG cluster test flaky timeouts (15s -> 30s)
+- `TestJobRecord` missing the `usage` field, causing empty `getFlowUsage()` results in testing mode
+
+---
+
+## [0.13.0] - 2026-03-27
+
+### Added
+
+- **Structured AI metadata** (#168): `job.reportUsage({ model, tokens: { input, output }, costs: { total } })` records LLM usage on any job. `queue.getFlowUsage(flowId)` aggregates token counts and cost across an entire flow.
+- **Per-job streaming channel** (#169): `job.stream(chunk)` publishes incremental data (LLM tokens, progress events) to a dedicated channel. `queue.readStream(jobId, opts?)` consumes chunks in real time, with blocking reads via XREAD BLOCK.
+- **Suspend/resume with signals** (#170): `job.suspend(opts?)` pauses a job mid-processor; `queue.signal(jobId, name, data?)` resumes it with an external event. Enables human-in-the-loop approval gates, webhook callbacks, and any pattern requiring external input before a job can continue.
+  - `SuspendOptions`: `reason` (label), `timeout` (auto-fail after N ms)
+  - `onResume` callback: best-effort same-worker continuation, called with `signals[]` on resume
+  - `queue.getSuspendInfo(jobId)`: returns suspension metadata and the signals delivered so far
+  - `glidemq_suspend` FCALL: moves an active job to the suspended sorted set, releases its group slot
+  - `glidemq_signal` FCALL: appends a signal, re-queues the job to the stream
+  - `glidemq_sweepSuspended` FCALL: fails timed-out suspended jobs on each stalled-recovery tick
+  - Proxy: `POST /queues/:name/jobs/:id/signal` endpoint
+  - Testing: `TestJob.suspend()` and `TestQueue.signal()` with full parity (no Valkey)
+- **Per-job lockDuration override** (#172): set `lockDuration` per job to control the heartbeat interval and stall-detection timeout independently of the worker default.
+- **Fallback chains** (#173): ordered list of model/provider alternatives via `opts.fallbacks`. On processor failure, the job automatically retries with the next fallback entry. Each fallback can override `data` and `metadata`.
+- **Budget middleware** (#174): flow-level token and cost caps. Set `budget: { maxTokens, maxCost }` on a flow; jobs that would exceed the budget are failed before execution.
+- **Dual-axis rate limiting (RPM + TPM)** (#175): enforce both requests-per-minute and tokens-per-minute limits on a queue. Designed for LLM API compliance, where providers impose separate request and token ceilings.
+- **18 real-world AI examples** (#176): framework integrations covering LangChain, Vercel AI SDK, OpenAI, Anthropic, multi-model routing, RAG pipelines, and more.
+- **Valkey Search integration** (#177): vector search over jobs using the Valkey Search module. `queue.createIndex(schema, opts?)` defines indexes; `queue.search(query, opts?)` runs hybrid vector + filter queries. `IndexCreateOptions` and `SearchQueryOptions` types are decoupled from speedkey.
+- `SuspendError`, `SuspendOptions`, `SignalEntry` exported from the public API.
+- Stress tests: 38 tests for correctness under concurrent load and edge-case pressure.
+- Docker: `compose.yaml` uses the `valkey-bundle` image (search + json + bloom modules).
+- CI: `test-search` job with `valkey-bundle` for search integration tests.
+
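The fallback-chain behavior described in #173 can be modeled in isolation. This sketch only illustrates the ordered-retry control flow; in glide-mq the retry and fallback selection happen server-side, and `runWithFallbacks` is not a real API:

```typescript
// Walk an ordered fallbacks list: on each failure, try the next entry until
// the list is exhausted, then rethrow the last error.
type Fallback = { model: string; provider: string };

async function runWithFallbacks<T>(
  attempts: Fallback[],
  call: (f: Fallback) => Promise<T>,
): Promise<T> {
  let lastError: unknown;
  for (const f of attempts) {
    try {
      return await call(f);
    } catch (err) {
      lastError = err; // fall through to the next fallback entry
    }
  }
  throw lastError;
}

// Primary model fails (e.g. rate limited); the fallback answers.
const chain: Fallback[] = [
  { model: 'primary', provider: 'a' },
  { model: 'backup', provider: 'b' },
];
runWithFallbacks(chain, async (f) => {
  if (f.model === 'primary') throw new Error('rate limited');
  return `answered by ${f.model}`;
}).then((r) => console.log(r)); // answered by backup
```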
+### Fixed
+
+- OTel `SpanStatusCode` values corrected (OK=1, ERROR=2) - previously swapped.
+- Signal data auto-deserialization: signals received via `onResume` are now parsed from JSON automatically.
+- Fallback type uses explicit `metadata` field instead of index signature.
+- `glidemq_clean` and `glidemq_drain` now delete `signals:{id}` LIST keys when removing jobs, preventing a key leak when suspended jobs time out or are cleaned after failure.
+
+---
+
 ## [0.12.0] - 2026-03-20
 
 ### Added
@@ -44,8 +108,6 @@ The format follows [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
 
 - `groupq` key type changed from LIST to ZSET. Existing groups with queued jobs need migration (drain before upgrade). Pre-stable, acceptable.
 
-## [Unreleased]
-
 ---
 
 ## [0.11.0] - 2026-03-10
package/README.md
CHANGED

@@ -3,247 +3,211 @@
 [](https://www.npmjs.com/package/glide-mq)
 [](https://github.com/avifenesh/glide-mq/blob/main/LICENSE)
 [](https://github.com/avifenesh/glide-mq/actions/workflows/ci.yml)
-[](https://nodejs.org/)
-[](CHANGELOG.md)
-[](https://avifenesh.github.io/glide-mq.dev/)
 
-High-performance message queue for Node.js
+High-performance message queue for Node.js with first-class AI orchestration. Built on Valkey/Redis Streams with a Rust NAPI core.
 
-
-
-> If glide-mq is useful to you, consider giving it a star on [GitHub](https://github.com/avifenesh/glide-mq). It helps others discover the project.
-
-## Why glide-mq
-
-- Use this when you need **throughput**: 18,000+ jobs/s on production infrastructure with TLS -- up to 36% faster than alternatives at typical concurrency, 1 RTT per job via Valkey Server Functions.
-- Use this when you run **Valkey/Redis clusters**: all keys hash-tagged out of the box, no `{braces}` workarounds.
-- Use this when you need **workflows**: parent-child trees, DAGs with fan-in, step jobs, batch processing, and cron scheduling in one library.
-- Use this when you deploy to **serverless**: lightweight `Producer` and `ServerlessPool` cache connections across warm invocations.
-- Use this when you want **pub/sub with durability**: `Broadcast` delivers to all subscribers with retries, backpressure, and NATS-style subject filtering.
-
-## Install
+Completes and fetches the next job in a single server-side function call (1 RTT per job), hash-tags every key for zero-config clustering, and ships seven built-in primitives for LLM orchestration - cost tracking, token streaming, human-in-the-loop, model failover, TPM rate limiting, budget caps, and vector search.
 
 ```bash
 npm install glide-mq
 ```
 
-
-
-## Quick start
+### General Usage
 
 ```typescript
 import { Queue, Worker } from 'glide-mq';
 
 const connection = { addresses: [{ host: 'localhost', port: 6379 }] };
-
 const queue = new Queue('tasks', { connection });
-await queue.add('send-email', { to: 'user@example.com', subject: 'Hello' });
 
-const worker = new Worker('tasks', async (job) => {
-  console.log(`Processing ${job.name}:`, job.data);
-  return { sent: true };
-}, { connection, concurrency: 10 });
+await queue.add('send-email', { to: 'user@example.com', subject: 'Welcome' });
 
-worker
-
+const worker = new Worker(
+  'tasks',
+  async (job) => {
+    await sendEmail(job.data.to, job.data.subject);
+    return { sent: true };
+  },
+  { connection, concurrency: 10 },
+);
 ```
 
-
-
-Benchmarked on production infrastructure -- not localhost, where all queues look similar.
-
-**Setup**: AWS ElastiCache Valkey 8.2 (r7g.large), TLS enabled, EC2 client in the same region. No-op processor, warmup jobs excluded from measurement.
-
-| Concurrency | glide-mq | Leading alternative | Delta |
-|:-----------:|------------:|--------------------:|:-----------:|
-| c=1 | 2,479 j/s | 2,535 j/s | -2% |
-| c=5 | 10,754 j/s | 9,866 j/s | +9% |
-| c=10 | **18,218 j/s** | 13,541 j/s | **+35%** |
-| c=15 | **19,583 j/s** | 14,162 j/s | **+38%** |
-| c=20 | 19,408 j/s | 16,085 j/s | +21% |
-| c=50 | 19,768 j/s | 19,159 j/s | +3% |
-
-Most production deployments run workers with concurrency between 5 and 20 -- exactly where glide-mq's architecture pays off the most. The advantage comes from completing and fetching the next job in a single server-side function call (1 RTT per job). When the network between your application and Valkey/Redis is real -- as it is in every production deployment where app servers and data stores run on separate machines -- that round-trip savings compounds across concurrent workers.
+### AI Usage
 
-
+```typescript
+import { Queue, Worker } from 'glide-mq';
 
-
+const queue = new Queue('ai', { connection });
+
+await queue.add(
+  'inference',
+  { prompt: 'Explain message queues' },
+  {
+    fallbacks: [{ model: 'gpt-5.4-nano', provider: 'openai' }],
+    lockDuration: 120000,
+  },
+);
+
+const worker = new Worker(
+  'ai',
+  async (job) => {
+    const result = await callLLM(job.data.prompt);
+    await job.reportUsage({
+      model: 'gpt-5.4',
+      tokens: { input: 50, output: 200 },
+      costs: { total: 0.003 },
+    });
+    await job.stream({ type: 'token', content: result });
+    return result;
+  },
+  { connection, tokenLimiter: { maxTokens: 100000, duration: 60000 } },
+);
+```
 
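The `reportUsage`/`getFlowUsage` pair shown above amounts to merging per-job category maps into flow-level totals. A standalone sketch of that aggregation (the types and `aggregate` are illustrative, not the exported API; `Object.create(null)` mirrors the prototype-pollution guard noted in the 0.14.0 changelog):

```typescript
// Merge per-job token and cost maps into flow totals, category by category.
type Usage = { tokens: Record<string, number>; costs: Record<string, number> };

function aggregate(jobs: Usage[]): Usage {
  // Null-prototype objects so keys like "__proto__" cannot pollute prototypes.
  const tokens: Record<string, number> = Object.create(null);
  const costs: Record<string, number> = Object.create(null);
  for (const job of jobs) {
    for (const [k, v] of Object.entries(job.tokens)) tokens[k] = (tokens[k] ?? 0) + v;
    for (const [k, v] of Object.entries(job.costs)) costs[k] = (costs[k] ?? 0) + v;
  }
  return { tokens, costs };
}

const flow = aggregate([
  { tokens: { input: 50, output: 200 }, costs: { total: 0.003 } },
  { tokens: { input: 80, output: 120, reasoning: 400 }, costs: { total: 0.002 } },
]);
console.log(flow.tokens.input, flow.tokens.output, flow.tokens.reasoning); // 130 320 400
console.log(flow.costs.total.toFixed(3)); // 0.005
```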
-
-- **Gzip compression**: 98% payload reduction on 15 KB payloads
+## When to use glide-mq
 
-
+- **Background jobs and task processing** - email, image processing, data pipelines, webhooks, any async work.
+- **Scheduled and recurring work** - cron jobs, interval tasks, bounded schedulers.
+- **Distributed workflows** - parent-child trees, DAGs, fan-in/fan-out, step jobs, dynamic children.
+- **High-throughput queues over real networks** - 1 RTT per job via Valkey Server Functions, up to 38% faster than alternatives.
+- **LLM pipelines and model orchestration** - cost tracking, token streaming, model failover, budget caps without external middleware.
+- **Valkey/Redis clusters** - hash-tagged keys out of the box with zero configuration.
 
 ## How it's different
 
-| Aspect
-|
-| **Network per job** | 1 RTT
-| **Client**
-| **Server logic**
-| **Cluster**
-| **
-| **
-| **Serverless** | Lightweight `Producer` + `ServerlessPool` for Lambda/Edge with connection reuse |
-
-## Core concepts
-
-- **Queue** -- stores jobs in Valkey Streams. Handles enqueue, delay, priority, pause, drain, and bulk operations.
-- **Worker** -- processes jobs with configurable concurrency, prefetch, lock duration, and stalled-job recovery.
-- **Job** -- a unit of work with name, data, options (retries, backoff, priority, TTL), and lifecycle events.
-- **FlowProducer** -- creates parent-child job trees and DAGs. A parent waits for all children before processing.
-- **Producer** -- lightweight enqueue-only client. No EventEmitter, no Job instances, returns plain string IDs. Built for serverless.
-- **Broadcast** -- fan-out pub/sub. Each message is delivered to every subscriber group with independent retries and backpressure.
-- **QueueEvents** -- real-time stream of job lifecycle events (completed, failed, delayed, waiting, etc.).
-
-## Features
-
-### Core
-
-- **Queues and workers** with configurable concurrency, prefetch, and lock duration ([Usage](docs/USAGE.md))
-- **Delayed, priority, and bulk enqueue** for scheduling and high-throughput ingestion ([Usage](docs/USAGE.md))
-- **Batch processing** -- process multiple jobs at once via `batch: { size, timeout? }` ([Usage](docs/USAGE.md#batch-processing))
-- **Request-reply** -- `queue.addAndWait(name, data, { waitTimeout })` for synchronous RPC ([Usage](docs/USAGE.md#request-reply-with-addandwait))
-- **LIFO mode** -- `lifo: true` processes newest jobs first ([Advanced](docs/ADVANCED.md#lifo-mode))
-- **Job TTL** -- auto-expire jobs after a time-to-live window ([Advanced](docs/ADVANCED.md#job-ttl))
-- **Custom job IDs** -- deterministic, idempotent enqueue; duplicates return `null` ([Advanced](docs/ADVANCED.md#custom-job-ids))
-- **Pluggable serializers** -- swap JSON for any `{ serialize, deserialize }` implementation ([Advanced](docs/ADVANCED.md#pluggable-serializers))
-- **Transparent compression** -- gzip payloads at the queue level ([Advanced](docs/ADVANCED.md#transparent-compression))
-
-### Reliability
-
-- **Retries with exponential, fixed, or custom backoff** and dead-letter queues ([Advanced](docs/ADVANCED.md#retries-and-backoff))
-- **UnrecoverableError** -- skip all retries and fail permanently ([Usage](docs/USAGE.md#unrecoverableerror))
-- **Stalled recovery** -- auto-reclaim stuck jobs via consumer group PEL and `XAUTOCLAIM` ([Usage](docs/USAGE.md#worker))
-- **Job revocation** -- cooperative cancellation with `AbortSignal` ([Advanced](docs/ADVANCED.md#job-revocation))
-- **Deduplication** -- simple, throttle, and debounce modes with configurable TTL ([Advanced](docs/ADVANCED.md#deduplication))
-- **Per-key ordering** -- sequential processing per ordering key with configurable group concurrency ([Advanced](docs/ADVANCED.md#ordering-and-group-concurrency))
-- **Rate limiting** -- per-group sliding window, token bucket, and global queue-wide limits ([Advanced](docs/ADVANCED.md#global-rate-limiting))
-- **Sandboxed processors** -- run processors in worker threads or child processes ([Architecture](docs/ARCHITECTURE.md))
-
-### Orchestration
+| Aspect | glide-mq |
+| ------------------- | --------------------------------------------------------------------------------------------------------- |
+| **Network per job** | 1 RTT - complete + fetch next in a single FCALL |
+| **Client** | Rust NAPI bindings via [valkey-glide](https://github.com/valkey-io/valkey-glide) - no JS protocol parsing |
+| **Server logic** | Persistent Valkey Function library (FUNCTION LOAD + FCALL) - no per-call EVAL |
+| **Cluster** | Hash-tagged keys (`glide:{queueName}:*`) route to the same slot automatically |
+| **AI-native** | Cost tracking, token streaming, suspend/resume, fallback chains, TPM limits, budget caps |
+| **Vector search** | KNN similarity queries over job data via Valkey Search |
 
-
-- **DAG workflows** -- arbitrary dependency graphs with `FlowProducer.addDAG()` and `dag()` helper; multi-parent fan-in, diamond patterns, cycle detection ([Workflows](docs/WORKFLOWS.md))
-- **Step jobs** -- `job.moveToDelayed(timestamp, nextStep)` suspends a job mid-processor and resumes later ([Usage](docs/USAGE.md#pause-and-resume-a-job-later-step-jobs))
-- **Dynamic children** -- `job.moveToWaitingChildren()` pauses a parent to add children mid-execution ([Workflows](docs/WORKFLOWS.md))
-- **Batch processing** -- process multiple jobs at once for bulk I/O ([Usage](docs/USAGE.md#batch-processing))
+## AI-native primitives
 
-
+Seven primitives for LLM and agent workflows, built into the core API.
 
-- **
-- **
+- **Cost tracking** - `job.reportUsage()` records model, tokens, cost, latency per job. `queue.getFlowUsage()` aggregates across flows.
+- **Token streaming** - `job.stream(chunk)` pushes LLM output tokens in real time. `queue.readStream(jobId)` consumes them with optional long-polling.
+- **Suspend/resume** - `job.suspend()` pauses mid-processor for human approval or webhook callback. `queue.signal(jobId, name, data)` resumes with external input.
+- **Fallback chains** - ordered `fallbacks` array on job options. On failure, the next retry reads `job.currentFallback` for the alternate model/provider.
+- **TPM rate limiting** - `tokenLimiter` on worker options enforces tokens-per-minute caps. Combine with the RPM `limiter` for dual-axis rate control.
+- **Budget caps** - `FlowProducer.add(flow, { budget })` sets `maxTotalTokens` and `maxTotalCost` across all jobs in a flow. Jobs fail or pause when the budget is exceeded.
+- **Per-job lock duration** - override `lockDuration` per job for adaptive stall detection. Short for classifiers, long for multi-minute LLM calls.
 
+See [Usage - AI-native primitives](docs/USAGE.md#ai-native-primitives) for full examples.
 
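The dual-axis idea behind combining `tokenLimiter` with the RPM `limiter` can be sketched as an in-memory sliding window: a job may start only if both the request count and the token count in the window have headroom. `DualAxisLimiter` is purely illustrative; glide-mq enforces the real limits server-side in Valkey:

```typescript
// In-memory sliding-window sketch of dual-axis (RPM + TPM) admission control.
class DualAxisLimiter {
  private events: { at: number; tokens: number }[] = [];
  constructor(
    private maxRequests: number, // requests allowed per window
    private maxTokens: number,   // tokens allowed per window
    private windowMs = 60_000,
  ) {}

  tryAcquire(tokens: number, now = Date.now()): boolean {
    // Drop events that have slid out of the window.
    this.events = this.events.filter((e) => now - e.at < this.windowMs);
    const usedTokens = this.events.reduce((s, e) => s + e.tokens, 0);
    if (this.events.length >= this.maxRequests) return false; // RPM exhausted
    if (usedTokens + tokens > this.maxTokens) return false;   // TPM exhausted
    this.events.push({ at: now, tokens });
    return true;
  }
}

const limiter = new DualAxisLimiter(3, 1000); // 3 requests / 1000 tokens per minute
console.log(limiter.tryAcquire(400)); // true
console.log(limiter.tryAcquire(400)); // true
console.log(limiter.tryAcquire(400)); // false - would exceed 1000 tokens
console.log(limiter.tryAcquire(100)); // true
console.log(limiter.tryAcquire(100)); // false - 3 requests already in window
```

Either axis alone admits the rejected calls; only the combination models provider limits like "500 RPM and 200k TPM".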
-
-- **BroadcastWorker** -- independent consumer groups with own retries, concurrency, and backpressure ([Usage](docs/USAGE.md#broadcast--broadcastworker))
-- **Subject filtering** -- NATS-style patterns (`*` one segment, `>` trailing wildcard) for topic-based routing ([Usage](docs/USAGE.md#broadcast--broadcastworker))
-
-### Serverless
-
-- **Producer** -- enqueue without EventEmitter overhead, returns plain string IDs ([Usage](docs/USAGE.md))
-- **ServerlessPool** -- connection caching across warm Lambda/Edge invocations ([Serverless](docs/SERVERLESS.md))
-
-### Observability
-
-- **QueueEvents** -- real-time stream-based lifecycle events ([Observability](docs/OBSERVABILITY.md))
-- **Time-series metrics** -- per-minute throughput and latency retained 24h, recorded server-side ([Observability](docs/OBSERVABILITY.md))
-- **OpenTelemetry** -- automatic span emission; bring your own tracer or auto-detect `@opentelemetry/api` ([Observability](docs/OBSERVABILITY.md))
-- **Job logs** -- append structured log entries per job with pagination ([Observability](docs/OBSERVABILITY.md))
-- **Job mutations** -- `changePriority()`, `changeDelay()`, `promote()` after enqueue; `retryJobs()` and `clean()` in bulk ([Usage](docs/USAGE.md))
-- **Graceful shutdown** -- `gracefulShutdown()` helper registers SIGTERM/SIGINT handlers ([Usage](docs/USAGE.md#graceful-shutdown))
-- **In-memory testing** -- `TestQueue` and `TestWorker` with zero Valkey dependency ([Testing](docs/TESTING.md))
-
-### Cloud
-
-- **Cluster-native** -- hash-tagged keys `glide:{queueName}:*` route all queue data to the same slot ([Usage](docs/USAGE.md#cluster-mode))
-- **IAM authentication** -- native SigV4 auth for AWS ElastiCache and MemoryDB ([Usage](docs/USAGE.md#cluster-mode))
-- **AZ-affinity routing** -- `readFrom: 'AZAffinity'` routes reads to same-AZ replicas ([Usage](docs/USAGE.md#cluster-mode))
-
-## Framework integrations
-
-| Package | Install | Setup |
-|---------|---------|-------|
-| [`@glidemq/hono`](https://github.com/avifenesh/glidemq-hono) | `npm i @glidemq/hono` | `app.use(glideMQ({ connection, queues: { ... } }))` |
-| [`@glidemq/fastify`](https://github.com/avifenesh/glidemq-fastify) | `npm i @glidemq/fastify` | `app.register(glideMQPlugin, { connection, queues: { ... } })` |
-| [`@glidemq/nestjs`](https://github.com/avifenesh/glidemq-nestjs) | `npm i @glidemq/nestjs` | `GlideMQModule.forRoot({ connection, queues: { ... } })` |
-| [`@glidemq/dashboard`](https://github.com/avifenesh/glidemq-dashboard) | `npm i @glidemq/dashboard` | `app.use('/dashboard', createDashboard([queue1, queue2]))` |
-| [`@glidemq/hapi`](https://github.com/avifenesh/glidemq-hapi) | `npm i @glidemq/hapi` | `await server.register({ plugin: glideMQPlugin, options: { connection, queues } })` |
-
-All framework packages provide REST endpoints, SSE events, and serverless Producer support. See each package's README for full documentation.
-
-## Cross-language
-
-Non-Node.js services can enqueue jobs into glide-mq queues using the HTTP proxy or direct FCALL:
-
-```typescript
-import { createProxyServer } from 'glide-mq/proxy';
-
-const proxy = createProxyServer({
-  connection: { addresses: [{ host: 'localhost', port: 6379 }] },
-  queues: ['emails', 'reports'],
-});
-proxy.app.listen(3000);
-```
+## Features
-
-
-
-
-
+- **1 RTT per job** - complete current + fetch next in a single server-side function call
+- **Cluster-native** - hash-tagged keys, zero cluster configuration
+- **Workflows** - FlowProducer trees, DAGs with fan-in, chain/group/chord, step jobs, dynamic children
+- **Scheduling** - 5-field cron with timezone, fixed intervals, bounded schedulers
+- **Retries** - exponential, fixed, or custom backoff with dead-letter queues
+- **Rate limiting** - per-group sliding window, token bucket, global queue-wide limits
+- **Broadcast** - fan-out pub/sub with NATS-style subject filtering and independent subscriber retries
+- **Batch processing** - process multiple jobs at once for bulk I/O
+- **Request-reply** - `queue.addAndWait()` for synchronous RPC patterns
+- **Deduplication** - simple, throttle, and debounce modes
+- **Compression** - transparent gzip at the queue level
+- **Serverless** - lightweight `Producer` and `ServerlessPool` for Lambda/Edge
+- **OpenTelemetry** - automatic span emission with bring-your-own tracer
+- **In-memory testing** - `TestQueue` and `TestWorker` with zero Valkey dependency
+- **Cross-language** - HTTP proxy and wire protocol for non-Node.js services
 
-
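Among the features above, suspend/resume is the least conventional: conceptually, a job parks itself until an external signal arrives instead of polling. A minimal in-memory model of that control flow (all names here are illustrative; glide-mq's `job.suspend()`/`queue.signal()` implement this durably in Valkey):

```typescript
// Park a pending promise per job ID; an external caller resolves it later.
const waiters = new Map<string, (data: unknown) => void>();

// A processor would await this to pause until an external event arrives.
function suspend(jobId: string): Promise<unknown> {
  return new Promise((resolve) => waiters.set(jobId, resolve));
}

// A webhook or approval UI would call this to resume the job with data.
function signal(jobId: string, data: unknown): void {
  waiters.get(jobId)?.(data);
  waiters.delete(jobId);
}

const pending = suspend('job-42');      // processor parks here
signal('job-42', { approved: true });   // human approval arrives
pending.then((d) => console.log(d));    // { approved: true }
```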
+## Performance
 
-
+Benchmarked on AWS ElastiCache Valkey 8.2 (r7g.large) with TLS, EC2 client in the same region.
+
+| Concurrency | glide-mq | BullMQ | Delta |
+| :---------: | ---------: | ---------: | :---: |
+| c=5 | 10,754 j/s | 9,866 j/s | +9% |
+| c=10 | 18,218 j/s | 13,541 j/s | +35% |
+| c=15 | 19,583 j/s | 14,162 j/s | +38% |
+| c=20 | 19,408 j/s | 16,085 j/s | +21% |
+
+The advantage comes from completing and fetching the next job in a single FCALL. The savings compound over real network latency - exactly the conditions in every production deployment. At high concurrency both libraries converge toward the Valkey single-thread ceiling.
+
+Reproduce with `npm run bench` or `npx tsx benchmarks/elasticache-head-to-head.ts` against your own infrastructure.
+
+## Examples
+
+27 runnable examples in `examples/`. Run any with `npx tsx examples/<name>.ts`.
+
+| Example | What it shows |
+| ----------------------- | ----------------------------------------------- |
+| `usage-tracking.ts` | Token and cost tracking across multi-step flows |
+| `token-streaming.ts` | Real-time LLM token streaming to clients |
+| `human-approval.ts` | Suspend/resume with editorial review gate |
+| `model-failover.ts` | Fallback chains across providers |
+| `tpm-throttle.ts` | Dual-axis RPM + TPM rate limiting |
+| `budget-cap.ts` | Flow-level token and cost caps |
+| `vector-search.ts` | KNN similarity search with pre-filters |
+| `with-langchain.ts` | LangChain integration with token tracking |
+| `with-vercel-ai-sdk.ts` | Vercel AI SDK integration with streaming |
+| `rag-pipeline.ts` | RAG with embedding, indexing, retrieval |
+| `ai-agent-loop.ts` | Autonomous agent loop with budget enforcement |
+| `testing-mode.ts` | In-memory testing without Valkey |
+| `agent-budget-loop.ts` | Agent loop with per-step budget tracking |
+| `multi-model-cost.ts` | Cost breakdown across multiple models |
+| `fallback-usage.ts` | Usage tracking through fallback chains |
+| `streaming-sse.ts` | Server-sent events with token streaming |
+| `batch-embed-tpm.ts` | Batch embeddings with TPM rate limiting |
+| `thinking-model.ts` | Thinking/reasoning model token tracking |
+| `cost-breakdown.ts` | Detailed per-category cost breakdown |
+| `budget-weighted.ts` | Weighted budget allocation across flow steps |
+| `reasoning-stream.ts` | Streaming reasoning/chain-of-thought tokens |
+| `adaptive-timeout.ts` | Adaptive lock duration based on model complexity |
+| `broadcast-events.ts` | Fan-out event publishing with subject filtering |
+| `agent-memory.ts` | Multi-turn agent with persistent memory |
+| `search-dashboard.ts` | Job search and monitoring dashboard |
+| `embedding-pipeline.ts` | Batch document embedding with rate limiting |
+| `content-pipeline.ts` | Content moderation with streaming and approval |
+
+## When NOT to use glide-mq
+
+- **You need a log-based event streaming platform.** glide-mq is a job/task queue, not a partitioned event log. It does not provide Kafka-style topic partitions, consumer offset management, or event replay.
+- **You need browser support.** The Rust NAPI client requires a server-side runtime (Node.js 20+, Bun, or Deno with NAPI support).
+- **You need exactly-once semantics.** glide-mq provides at-least-once delivery. Duplicate processing is rare but possible - design processors to be idempotent.
+- **You need to run without Valkey or Redis.** Production use requires Valkey 7.0+ or Redis 7.0+. For dev/testing, `TestQueue`/`TestWorker` run fully in-memory.
 
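The at-least-once caveat above is the reason processors should be idempotent. A minimal sketch of the usual guard, a check-and-set on an idempotency key before the side effect (in-memory here for illustration; in production the set would live in Valkey, e.g. `SET key NX`, or in your database):

```typescript
// Idempotency guard: record the job ID before performing the side effect,
// so a redelivered job is recognized and skipped.
const processed = new Set<string>();
let sent = 0;

function processEmail(jobId: string): void {
  if (processed.has(jobId)) return; // duplicate delivery - skip side effect
  processed.add(jobId);
  sent += 1; // stand-in for the real side effect (sending the email)
}

processEmail('job-1');
processEmail('job-1'); // redelivery after a crash or stall - no double send
console.log(sent); // 1
```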
|
|
|
200
175
|
## Documentation
|
|
201
176
|
|
|
202
|
-
| Guide
|
|
203
|
-
|
|
204
|
-
| [Usage](docs/USAGE.md)
|
|
205
|
-
| [
|
|
206
|
-
| [
|
|
207
|
-
| [
|
|
208
|
-
| [
|
|
209
|
-
| [
|
|
210
|
-
| [
|
|
211
|
-
| [
|
|
212
|
-
| [
|
|
213
|
-
| [
|
|
214
|
-
| [
|
|
215
|
-
| [Migration](docs/MIGRATION.md)
|
|
216
|
-
|
|
- ## Limitations
-
- - Requires a running Valkey 7.0+ or Redis 7.0+ instance. There is no embedded mode.
- - Node.js only. The Rust-native NAPI client (`@valkey/valkey-glide`) does not run in browsers or Deno.
- - At-least-once delivery semantics. Jobs may be processed more than once after crashes or stalled recovery.
- - Not a streaming platform. glide-mq is a job/task queue, not a replacement for Kafka or NATS JetStream.
- - Single dependency on `@glidemq/speedkey` (which wraps `@valkey/valkey-glide`). Native addon compilation is required on install.
+ | Guide                                  | Topics                                                      |
+ | -------------------------------------- | ----------------------------------------------------------- |
+ | [Usage](docs/USAGE.md)                 | Queue, Worker, Producer, batch, request-reply, cluster mode |
+ | [Workflows](docs/WORKFLOWS.md)         | FlowProducer, DAG, chain/group/chord, dynamic children      |
+ | [Advanced](docs/ADVANCED.md)           | Schedulers, rate limiting, dedup, compression, retries, DLQ |
+ | [Broadcast](docs/BROADCAST.md)         | Pub/sub fan-out, subject filtering                          |
+ | [Observability](docs/OBSERVABILITY.md) | OpenTelemetry, metrics, job logs, dashboard                 |
+ | [Serverless](docs/SERVERLESS.md)       | Producer, ServerlessPool, Lambda/Edge                       |
+ | [Testing](docs/TESTING.md)             | In-memory TestQueue and TestWorker                          |
+ | [Wire Protocol](docs/WIRE_PROTOCOL.md) | Cross-language FCALL specs, Python/Go examples              |
+ | [Step Jobs](docs/STEP_JOBS.md)         | Step-job workflows with moveToDelayed                       |
+ | [Durability](docs/DURABILITY.md)       | Durability guarantees, persistence, delivery semantics      |
+ | [Architecture](docs/ARCHITECTURE.md)   | Internal architecture and design reference                  |
+ | [Migration](docs/MIGRATION.md)         | Coming from BullMQ - API mapping guide                      |

  ## Ecosystem

|
-
| Package
|
|
228
|
-
|
|
229
|
-
| [
|
|
230
|
-
| [@glidemq/
|
|
231
|
-
| [@glidemq/
|
|
232
|
-
| [@glidemq/
|
|
233
|
-
| [@glidemq/
|
|
234
|
-
| [@glidemq/hapi](https://github.com/avifenesh/glidemq-hapi)
|
|
235
|
-
| [
|
|
236
|
-
| [glidemq-examples](https://github.com/avifenesh/glidemq-examples) | 40+ runnable examples across frameworks and use cases | [GitHub](https://github.com/avifenesh/glidemq-examples) |
|
|
237
|
-
| [glide-mq.dev](https://avifenesh.github.io/glide-mq.dev/) | Full documentation, guides, API reference | [Website](https://avifenesh.github.io/glide-mq.dev/) |
|
|
238
|
-
|
|
239
|
-
> If glide-mq is useful to you, consider [starring the repo](https://github.com/avifenesh/glide-mq). It helps others find the project.
|
|
+ | Package                                                              | Description                                   |
+ | -------------------------------------------------------------------- | --------------------------------------------- |
+ | [@glidemq/speedkey](https://github.com/avifenesh/speedkey)           | Valkey GLIDE client with native NAPI bindings |
+ | [@glidemq/dashboard](https://github.com/avifenesh/glidemq-dashboard) | Web UI for metrics, schedulers, job mutations |
+ | [@glidemq/hono](https://github.com/avifenesh/glidemq-hono)           | Hono middleware                               |
+ | [@glidemq/fastify](https://github.com/avifenesh/glidemq-fastify)     | Fastify plugin                                |
+ | [@glidemq/nestjs](https://github.com/avifenesh/glidemq-nestjs)       | NestJS module                                 |
+ | [@glidemq/hapi](https://github.com/avifenesh/glidemq-hapi)           | Hapi plugin                                   |
+ | [glide-mq.dev](https://avifenesh.github.io/glide-mq.dev/)            | Full documentation site                       |

  ## Contributing

- Bug reports, feature requests, and pull requests are welcome.
+ Bug reports, feature requests, and pull requests are welcome.

  - [Open an issue](https://github.com/avifenesh/glide-mq/issues)
  - [Discussions](https://github.com/avifenesh/glide-mq/discussions)
+ - [Changelog](CHANGELOG.md)

  ## License

package/dist/base-worker.d.ts
CHANGED

@@ -1,10 +1,10 @@
  import { EventEmitter } from 'events';
- import type { WorkerOptions, Processor, BatchProcessor, Client, Serializer } from './types';
+ import type { WorkerOptions, Processor, BatchProcessor, Client, Serializer, SignalEntry } from './types';
  import { Job } from './job';
  import { buildKeys } from './utils';
  import type { QueueKeys } from './functions/index';
  import { Scheduler } from './scheduler';
- export type WorkerEvent = 'completed' | 'failed' | 'error' | 'stalled' | 'closing' | 'closed' | 'active' | 'drained';
+ export type WorkerEvent = 'completed' | 'failed' | 'error' | 'stalled' | 'closing' | 'closed' | 'active' | 'drained' | 'budget-exceeded';
  /**
   * Configuration that differs between Worker and BroadcastWorker.
   * Passed from the subclass constructor to BaseWorker.
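
The new `'budget-exceeded'` member of the `WorkerEvent` union suggests workers surface budget exhaustion through their `EventEmitter` interface. A hedged sketch of subscribing - the event arguments used here are an assumption for illustration, not the library's documented payload:

```typescript
import { EventEmitter } from 'events';

// Sketch only: the diff adds 'budget-exceeded' to the WorkerEvent union, and
// BaseWorker extends EventEmitter, so consumers subscribe like any other
// worker event. The payload (a queue name) is assumed for illustration.
const worker = new EventEmitter(); // stand-in for a glide-mq Worker

const exceededQueues: string[] = [];
worker.on('budget-exceeded', (queueName: string) => {
  // e.g. pause producers, alert, or switch to a cheaper model
  exceededQueues.push(queueName);
});

worker.emit('budget-exceeded', 'llm-jobs');
console.log(exceededQueues.length); // 1
```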
@@ -59,9 +59,12 @@ export declare abstract class BaseWorker<D = any, R = any> extends EventEmitter
    protected globalRateLimitEnabled: boolean;
    protected cachedRateLimitMax: number;
    protected cachedRateLimitDuration: number;
+   protected tpmLocalCounter: number;
+   protected tpmWindowStart: number;
    protected sandboxClose?: (force?: boolean) => Promise<void>;
    protected workerHeartbeatTimer: ReturnType<typeof setInterval> | null;
    protected pollLoopPromise: Promise<void> | null;
+   protected suspendContinuations: Map<string, (signals: SignalEntry[]) => Promise<any>>;
    protected readonly startedAt: number;
    protected readonly hostname: string;
    protected serializer: Serializer;
@@ -217,7 +220,7 @@ export declare abstract class BaseWorker<D = any, R = any> extends EventEmitter
     * Returns true if the job was found and aborted, false if not currently active.
     */
    abortJob(jobId: string): boolean;
-   protected startHeartbeat(jobId: string): void;
+   protected startHeartbeat(jobId: string, jobLockDuration?: number): void;
    protected stopHeartbeat(jobId: string): void;
    protected moveToDLQ(job: Job<D, R>, error: Error): Promise<void>;
    /**
@@ -225,8 +228,23 @@ export declare abstract class BaseWorker<D = any, R = any> extends EventEmitter
     * Also respects any manual rate limit set via rateLimit(ms).
     */
    protected waitForRateLimit(): Promise<void>;
+   /**
+    * Check the TPM (token-per-minute) rate limit and wait if either the local or
+    * per-queue counter exceeds the configured maxTokens for the current window.
+    */
+   protected waitForTokenLimit(): Promise<void>;
+   /**
+    * Increment the TPM counter after a job completes (or reports tokens).
+    * Called from the completion path when tokenLimiter is configured.
+    */
+   protected incrementTpmCounter(tokens: number): Promise<void>;
    /** Refresh cached meta flags from Valkey. Called on init and each scheduler tick. */
    private refreshMetaFlags;
+   /**
+    * Read the onExceeded policy from a budget hash.
+    * Returns 'fail' (default) or 'pause'.
+    */
+   private getBudgetOnExceeded;
    /**
     * Register this worker in Valkey with a TTL-based heartbeat key.
     * The key expires after stalledInterval ms; a periodic timer refreshes it at half that interval.
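
The new `tpmLocalCounter`/`tpmWindowStart` fields and the `waitForTokenLimit`/`incrementTpmCounter` methods describe a tokens-per-minute limiter. A local fixed-window sketch of that mechanism follows; the real `waitForTokenLimit()` also consults a shared per-queue counter in Valkey, and the `maxTokens` option and injectable clock here are assumptions for illustration:

```typescript
// Sketch of a local fixed-window tokens-per-minute (TPM) limiter, mirroring
// the tpmLocalCounter/tpmWindowStart fields in the new declarations. The
// library version additionally checks a shared per-queue counter in Valkey;
// this models only the local half. Clock is injectable for deterministic demo.
const WINDOW_MS = 60_000;

class TpmLimiter {
  private tpmLocalCounter = 0;
  private tpmWindowStart: number;

  constructor(private maxTokens: number, private now: () => number = Date.now) {
    this.tpmWindowStart = this.now();
  }

  private rollWindow(): void {
    if (this.now() - this.tpmWindowStart >= WINDOW_MS) {
      this.tpmWindowStart = this.now();
      this.tpmLocalCounter = 0; // new minute, fresh token budget
    }
  }

  /** Milliseconds to wait before the next job may start (0 = go now). */
  waitTimeMs(): number {
    this.rollWindow();
    if (this.tpmLocalCounter < this.maxTokens) return 0;
    return this.tpmWindowStart + WINDOW_MS - this.now(); // sleep to window end
  }

  /** Record tokens consumed by a completed job. */
  increment(tokens: number): void {
    this.rollWindow();
    this.tpmLocalCounter += tokens;
  }
}

// Deterministic clock for illustration:
let t = 0;
const limiter = new TpmLimiter(1000, () => t);
limiter.increment(900);
console.log(limiter.waitTimeMs()); // 0 - still under budget
limiter.increment(200);            // counter now 1100 >= 1000
console.log(limiter.waitTimeMs()); // 60000 - wait until the window rolls
t = 60_001;
console.log(limiter.waitTimeMs()); // 0 - new window, counter reset
```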
@@ -1 +1 @@
|
|
|
1
|
-
{"version":3,"file":"base-worker.d.ts","sourceRoot":"","sources":["../src/base-worker.ts"],"names":[],"mappings":"AAAA,OAAO,EAAE,YAAY,EAAE,MAAM,QAAQ,CAAC;AAItC,OAAO,KAAK,
|
|
1
|
+
{"version":3,"file":"base-worker.d.ts","sourceRoot":"","sources":["../src/base-worker.ts"],"names":[],"mappings":"AAAA,OAAO,EAAE,YAAY,EAAE,MAAM,QAAQ,CAAC;AAItC,OAAO,KAAK,EACV,aAAa,EACb,SAAS,EACT,cAAc,EACd,MAAM,EACN,UAAU,EAEV,WAAW,EACZ,MAAM,SAAS,CAAC;AAEjB,OAAO,EAAE,GAAG,EAAE,MAAM,OAAO,CAAC;AAC5B,OAAO,EACL,SAAS,EASV,MAAM,SAAS,CAAC;AAkCjB,OAAO,KAAK,EAAE,SAAS,EAAE,MAAM,mBAAmB,CAAC;AACnD,OAAO,EAAE,SAAS,EAAE,MAAM,aAAa,CAAC;AAExC,MAAM,MAAM,WAAW,GACnB,WAAW,GACX,QAAQ,GACR,OAAO,GACP,SAAS,GACT,SAAS,GACT,QAAQ,GACR,QAAQ,GACR,SAAS,GACT,iBAAiB,CAAC;AAEtB;;;GAGG;AACH,MAAM,WAAW,gBAAgB;IAC/B,4DAA4D;IAC5D,aAAa,EAAE,MAAM,CAAC;IACtB,gEAAgE;IAChE,aAAa,EAAE,OAAO,CAAC;IACvB,gEAAgE;IAChE,SAAS,EAAE,MAAM,CAAC;CACnB;AAED;;;;;;;GAOG;AACH,8BAAsB,UAAU,CAAC,CAAC,GAAG,GAAG,EAAE,CAAC,GAAG,GAAG,CAAE,SAAQ,YAAY;IACrE,QAAQ,CAAC,IAAI,EAAE,MAAM,CAAC;IACtB,SAAS,CAAC,IAAI,EAAE,aAAa,CAAC;IAC9B,SAAS,CAAC,SAAS,EAAE,SAAS,CAAC,CAAC,EAAE,CAAC,CAAC,CAAC;IACrC,SAAS,CAAC,aAAa,EAAE,MAAM,GAAG,IAAI,CAAQ;IAC9C,SAAS,CAAC,kBAAkB,UAAQ;IACpC,SAAS,CAAC,cAAc,EAAE,MAAM,GAAG,IAAI,CAAQ;IAC/C,SAAS,CAAC,OAAO,UAAS;IAC1B,SAAS,CAAC,MAAM,UAAS;IACzB,SAAS,CAAC,OAAO,UAAS;IAC1B,SAAS,CAAC,MAAM,UAAS;IACzB,SAAS,CAAC,SAAS,EAAE,UAAU,CAAC,OAAO,SAAS,CAAC,CAAC;IAClD,SAAS,CAAC,UAAU,EAAE,MAAM,CAAC;IAC7B,SAAS,CAAC,WAAW,SAAK;IAC1B,SAAS,CAAC,cAAc,EAAE,GAAG,CAAC,OAAO,CAAC,IAAI,CAAC,CAAC,CAAa;IACzD,SAAS,CAAC,sBAAsB,EAAE,GAAG,CAAC,MAAM,EAAE,eAAe,CAAC,CAAa;IAC3E,SAAS,CAAC,SAAS,EAAE,SAAS,GAAG,IAAI,CAAQ;IAC7C,SAAS,CAAC,WAAW,EAAE,OAAO,CAAC,IAAI,CAAC,CAAC;IACrC,SAAS,CAAC,cAAc,SAAK;IAC7B,SAAS,CAAC,SAAS,UAAQ;IAC3B,SAAS,CAAC,gBAAgB,SAAK;IAC/B,SAAS,CAAC,cAAc,wBAAsB;IAG9C,SAAS,CAAC,WAAW,EAAE,MAAM,CAAC;IAC9B,SAAS,CAAC,QAAQ,EAAE,MAAM,CAAC;IAC3B,SAAS,CAAC,YAAY,EAAE,MAAM,CAAC;IAC/B,SAAS,CAAC,eAAe,EAAE,MAAM,CAAC;IAClC,SAAS,CAAC,eAAe,EAAE,MAAM,CAAC;IAClC,SAAS,CAAC,YAAY,EAAE,MAAM,CAAC;IAC/B,SAAS,CAAC,kBAAkB,EAAE,GAAG,CAAC,MAAM,EAAE,UAAU,CAAC,OAAO,WAAW,CAAC,CAAC,CAAa;IACtF,SAAS,CAAC,YAAY,EAAE,MAAM,CAAC,MAAM,EAAE,MAAM,CAAC,CAAuB;IACrE,SAAS,CAAC,wBAAwB,UAAS;IAC3C,SAAS,CAAC
,sBAAsB,UAAS;IACzC,SAAS,CAAC,kBAAkB,SAAK;IACjC,SAAS,CAAC,uBAAuB,SAAK;IAGtC,SAAS,CAAC,eAAe,SAAK;IAC9B,SAAS,CAAC,cAAc,SAAK;IAC7B,SAAS,CAAC,YAAY,CAAC,EAAE,CAAC,KAAK,CAAC,EAAE,OAAO,KAAK,OAAO,CAAC,IAAI,CAAC,CAAC;IAC5D,SAAS,CAAC,oBAAoB,EAAE,UAAU,CAAC,OAAO,WAAW,CAAC,GAAG,IAAI,CAAQ;IAC7E,SAAS,CAAC,eAAe,EAAE,OAAO,CAAC,IAAI,CAAC,GAAG,IAAI,CAAQ;IACvD,SAAS,CAAC,oBAAoB,wBAA6B,WAAW,EAAE,KAAK,OAAO,CAAC,GAAG,CAAC,EAAI;IAC7F,SAAS,CAAC,QAAQ,CAAC,SAAS,SAAc;IAC1C,SAAS,CAAC,QAAQ,CAAC,QAAQ,SAAiB;IAC5C,SAAS,CAAC,UAAU,EAAE,UAAU,CAAC;IACjC,SAAS,CAAC,QAAQ,CAAC,SAAS,EAAE,OAAO,CAAC;IACtC,SAAS,CAAC,QAAQ,CAAC,SAAS,EAAE,MAAM,CAAC;IACrC,SAAS,CAAC,QAAQ,CAAC,YAAY,EAAE,MAAM,CAAC;IACxC,SAAS,CAAC,QAAQ,CAAC,cAAc,EAAE,cAAc,CAAC,CAAC,EAAE,CAAC,CAAC,GAAG,IAAI,CAAC;IAG/D,SAAS,CAAC,QAAQ,CAAC,aAAa,EAAE,MAAM,CAAC;IACzC,SAAS,CAAC,QAAQ,CAAC,aAAa,EAAE,OAAO,CAAC;IAC1C,SAAS,CAAC,QAAQ,CAAC,SAAS,EAAE,MAAM,CAAC;IACrC,SAAS,CAAC,QAAQ,CAAC,UAAU,EAAE,OAAO,CAAC;IACvC,SAAS,CAAC,QAAQ,CAAC,WAAW,EAAE,OAAO,CAAC;IAGxC,OAAO,CAAC,qBAAqB,CAAS;IACtC,OAAO,CAAC,kBAAkB,CAAS;IACnC,OAAO,CAAC,kBAAkB,CAAS;IAEnC,SAAS,aACP,IAAI,EAAE,MAAM,EACZ,SAAS,EAAE,SAAS,CAAC,CAAC,EAAE,CAAC,CAAC,GAAG,cAAc,CAAC,CAAC,EAAE,CAAC,CAAC,GAAG,MAAM,EAC1D,IAAI,EAAE,aAAa,EACnB,MAAM,EAAE,gBAAgB;IAgG1B;;OAEG;IACG,cAAc,IAAI,OAAO,CAAC,IAAI,CAAC;YAIvB,IAAI;IAoDlB;;;;OAIG;cACa,QAAQ,IAAI,OAAO,CAAC,IAAI,CAAC;IAqBzC,OAAO,CAAC,YAAY,CASlB;IAEF;;OAEG;YACW,kBAAkB;cAqFhB,WAAW,IAAI,OAAO,CAAC,IAAI,CAAC;IAoB5C;;;;OAIG;IACH,SAAS,CAAC,QAAQ,CAAC,QAAQ,IAAI,OAAO,CAAC,IAAI,CAAC;IAE5C;;;OAGG;IACH,SAAS,CAAC,WAAW,CAAC,KAAK,EAAE,MAAM,EAAE,OAAO,EAAE,MAAM,GAAG,IAAI;IAsB3D;;OAEG;IACH,SAAS,CAAC,aAAa,CAAC,KAAK,EAAE;QAAE,KAAK,EAAE,MAAM,CAAC;QAAC,OAAO,EAAE,MAAM,CAAC;QAAC,GAAG,EAAE,GAAG,CAAC,CAAC,EAAE,CAAC,CAAC,CAAA;KAAE,EAAE,GAAG,IAAI;IAsB1F;;;OAGG;cACa,uBAAuB,CAAC,SAAS,EAAE;QAAE,KAAK,EAAE,MAAM,CAAC;QAAC,OAAO,EAAE,MAAM,CAAA;KAAE,EAAE,GAAG,OAAO,CAAC,IAAI,CAAC;IAyDvG;;;OAGG;cACa,YAAY,CAAC,KAAK,EAAE;QAAE,KAAK,EAAE,MAAM,CAAC;QAAC,OAAO,EAAE,MAAM,CAAC;QAAC,GAAG,EAAE,GAAG,CAAC,CAAC,EAAE,CAAC,CAAC,CAAA;KAAE
,EAAE,GAAG,OAAO,CAAC,IAAI,CAAC;IAiNxG;;;OAGG;cACa,0BAA0B,CACxC,UAAU,EACN,MAAM,CAAC,MAAM,EAAE,MAAM,CAAC,GACtB,SAAS,GACT,SAAS,GACT,YAAY,GACZ,oBAAoB,GACpB,qBAAqB,GACrB,eAAe,GACf,2BAA2B,GAC3B,IAAI,EACR,KAAK,EAAE,MAAM,EACb,OAAO,EAAE,MAAM,GACd,OAAO,CAAC,OAAO,CAAC;IAyDnB;;;OAGG;cACa,YAAY,CAC1B,GAAG,EAAE,GAAG,CAAC,CAAC,EAAE,CAAC,CAAC,EACd,KAAK,EAAE,MAAM,GACZ,OAAO,CAAC;QAAE,MAAM,CAAC,EAAE,CAAC,CAAC;QAAC,KAAK,CAAC,EAAE,KAAK,CAAC;QAAC,OAAO,EAAE,OAAO,CAAA;KAAE,CAAC;IAuC3D;;;;OAIG;cACa,eAAe,CAAC,GAAG,EAAE,GAAG,CAAC,CAAC,EAAE,CAAC,CAAC,EAAE,MAAM,EAAE,MAAM,GAAG,OAAO,CAAC,MAAM,CAAC;IAIhF;;;OAGG;cACa,gBAAgB,CAAC,GAAG,EAAE,GAAG,CAAC,CAAC,EAAE,CAAC,CAAC,EAAE,KAAK,EAAE,MAAM,EAAE,OAAO,EAAE,MAAM,EAAE,KAAK,EAAE,KAAK,GAAG,OAAO,CAAC,OAAO,CAAC;IAuEhH;;OAEG;cACa,mBAAmB,CACjC,GAAG,EAAE,GAAG,CAAC,CAAC,EAAE,CAAC,CAAC,EACd,KAAK,EAAE,MAAM,EACb,OAAO,EAAE,MAAM,EACf,OAAO,EAAE;QAAE,YAAY,EAAE,MAAM,CAAC;QAAC,cAAc,CAAC,EAAE,MAAM,CAAC;QAAC,QAAQ,CAAC,EAAE,CAAC,CAAA;KAAE,GACvE,OAAO,CAAC,IAAI,CAAC;IAyBhB;;;;;;;;;;;;;;;;;;;;;OAqBG;cACa,4BAA4B,CAAC,aAAa,EAAE,MAAM,EAAE,GAAG,EAAE,MAAM,GAAG,OAAO,CAAC,IAAI,CAAC;IAgC/F;;OAEG;cACa,eAAe,CAC7B,GAAG,EAAE,GAAG,CAAC,CAAC,EAAE,CAAC,CAAC,EACd,KAAK,EAAE,MAAM,GACZ,OAAO,CAAC;QAAE,UAAU,EAAE,MAAM,CAAC;QAAC,QAAQ,EAAE,MAAM,CAAC;QAAC,UAAU,EAAE,SAAS,CAAA;KAAE,GAAG,SAAS,CAAC;IAyBvF,SAAS,CAAC,iBAAiB,CAAC,GAAG,EAAE,GAAG,CAAC,CAAC,EAAE,CAAC,CAAC,GAAG,MAAM,GAAG,IAAI;IAK1D;;;OAGG;cACa,cAAc,CAAC,GAAG,EAAE,GAAG,CAAC,CAAC,EAAE,CAAC,CAAC,GAAG,OAAO,CAAC,OAAO,CAAC;IAWhE;;OAEG;cACa,kBAAkB,CAAC,KAAK,EAAE,MAAM,EAAE,OAAO,EAAE,MAAM,GAAG,OAAO,CAAC,IAAI,CAAC;IAcjF;;;;OAIG;cACa,UAAU,CAAC,KAAK,EAAE,MAAM,EAAE,OAAO,EAAE,MAAM,GAAG,OAAO,CAAC,IAAI,CAAC;IA0VzE;;;;OAIG;IACH,QAAQ,CAAC,KAAK,EAAE,MAAM,GAAG,OAAO;IAShC,SAAS,CAAC,cAAc,CAAC,KAAK,EAAE,MAAM,EAAE,eAAe,CAAC,EAAE,MAAM,GAAG,IAAI;IAkBvE,SAAS,CAAC,aAAa,CAAC,KAAK,EAAE,MAAM,GAAG,IAAI;cAQ5B,SAAS,CAAC,GAAG,EAAE,GAAG,CAAC,CAAC,EAAE,CAAC,CAAC,EAAE,KAAK,EAAE,KAAK,GAAG,OAAO,CAAC,IAAI,CAAC;IAsBtE;;;OAGG;cACa,gBAAgB,IAAI,OAAO,CAAC,IAAI,CAAC;IAoCjD;;;OAGG;cACa,iBAAiB,
IAAI,OAAO,CAAC,IAAI,CAAC;IA8ClD;;;OAGG;cACa,mBAAmB,CAAC,MAAM,EAAE,MAAM,GAAG,OAAO,CAAC,IAAI,CAAC;IA8BlE,qFAAqF;YACvE,gBAAgB;IAqB9B;;;OAGG;YACW,mBAAmB;IAUjC;;;;OAIG;YACW,cAAc;IAkB5B;;OAEG;IACH,SAAS,IAAI,OAAO;IAIpB;;OAEG;IACH,QAAQ,IAAI,OAAO;IAInB;;;OAGG;IACG,SAAS,CAAC,EAAE,EAAE,MAAM,GAAG,OAAO,CAAC,IAAI,CAAC;IAI1C;;OAEG;IACG,KAAK,CAAC,KAAK,CAAC,EAAE,OAAO,GAAG,OAAO,CAAC,IAAI,CAAC;IAO3C;;OAEG;IACG,MAAM,IAAI,OAAO,CAAC,IAAI,CAAC;IAQ7B;;;;OAIG;cACa,eAAe,IAAI,OAAO,CAAC,OAAO,CAAC;IAOnD;;;OAGG;IACG,KAAK,IAAI,OAAO,CAAC,IAAI,CAAC;IAoB5B;;;OAGG;IACG,KAAK,CAAC,KAAK,CAAC,EAAE,OAAO,GAAG,OAAO,CAAC,IAAI,CAAC;cA4E3B,iBAAiB,IAAI,OAAO,CAAC,IAAI,CAAC;IAMlD,MAAM,CAAC,gBAAgB,CAAC,KAAK,EAAE,KAAK,GAAG,OAAO;IAI9C,MAAM,CAAC,cAAc;;;;;;;;;;MAKnB;CACH"}
|