@gravito/zenith 1.0.1 → 1.1.0
- package/CHANGELOG.md +17 -0
- package/ECOSYSTEM_EXPANSION_RFC.md +130 -0
- package/dist/bin.js +105 -83
- package/dist/server/index.js +105 -83
- package/package.json +1 -1
- package/DEMO.md +0 -156
package/CHANGELOG.md
CHANGED

@@ -1,5 +1,22 @@
 # @gravito/zenith
 
+## 1.1.0
+
+### Minor Changes
+
+- Implement several more examples and fix module issues, including:
+  - Support middleware in core route definitions.
+  - Improve Atlas driver loading and dependency injection.
+  - Add PostgreSQL support to Ecommerce MVC example.
+  - Fix internal type resolution issues across packages.
+
+### Patch Changes
+
+- Updated dependencies
+  - @gravito/atlas@1.2.0
+  - @gravito/quasar@1.2.0
+  - @gravito/stream@1.0.2
+
 ## 1.0.1
 
 ### Patch Changes
package/ECOSYSTEM_EXPANSION_RFC.md
ADDED

@@ -0,0 +1,130 @@
+# Zenith Ecosystem Expansion RFC
+
+**Status**: Draft
+**Date**: 2026-01-10
+**Goal**: Expand Zenith monitoring capabilities beyond Gravito/Laravel to Python, Node.js, and Go ecosystems.
+
+---
+
+## 1. Executive Summary
+
+Gravito Zenith (Flux Console) is a unified control plane for background job processing. Currently, it supports **Gravito Stream** (Native) and **Laravel Queues** (via `laravel-zenith`). To become a true polyglot observability platform, we need to implement connectors for other popular queue systems.
+
+This RFC defines the **Universal Zenith Protocol (UZP)** and proposes implementation roadmaps for Python (Celery) and Node.js (BullMQ).
+
+---
+
+## 2. The Universal Zenith Protocol (UZP)
+
+Any background job system can be monitored by Zenith if it implements the following Redis-based interfaces.
+
+### 2.1. Discovery (Heartbeat)
+Workers must announce their presence every 30 seconds to avoid being marked as "Offline".
+
+* **Command**: `SETEX flux_console:worker:<worker_id> 60 <payload>`
+* **Payload (JSON)**:
+```json
+{
+  "id": "celery@worker-1",
+  "hostname": "pod-xyz",
+  "pid": 1234,
+  "uptime": 3600,
+  "queues": ["high", "default"],
+  "concurrency": 4,
+  "memory": { "rss": "50MB", "heapUsed": "N/A" },
+  "framework": "celery", // "laravel", "bullmq", "asynq"
+  "language": "python", // "php", "typescript", "go"
+  "timestamp": "2026-01-10T12:00:00Z"
+}
+```
+
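The discovery contract above is small enough to sketch end to end. A minimal TypeScript sketch follows; the `WorkerHeartbeat` type and `buildHeartbeatCommand` helper are illustrative names for this RFC, not a published Zenith API. Note the 60-second TTL is deliberately twice the 30-second beat interval, so a worker only goes "Offline" after two missed beats.

```typescript
// Illustrative sketch of the UZP heartbeat (assumed names, not a published API).
// A connector serializes this payload and issues:
//   SETEX flux_console:worker:<worker_id> 60 <payload>
// every 30 seconds.

interface WorkerHeartbeat {
  id: string;
  hostname: string;
  pid: number;
  uptime: number; // seconds
  queues: string[];
  concurrency: number;
  memory: { rss: string; heapUsed: string };
  framework: "celery" | "laravel" | "bullmq" | "asynq";
  language: "python" | "php" | "typescript" | "go";
  timestamp: string; // ISO-8601
}

function buildHeartbeatCommand(hb: WorkerHeartbeat): [string, string, number, string] {
  // [command, key, ttlSeconds, payload]; TTL (60s) = 2x the beat interval (30s).
  return ["SETEX", `flux_console:worker:${hb.id}`, 60, JSON.stringify(hb)];
}

const [cmd, key, ttl, payload] = buildHeartbeatCommand({
  id: "celery@worker-1",
  hostname: "pod-xyz",
  pid: 1234,
  uptime: 3600,
  queues: ["high", "default"],
  concurrency: 4,
  memory: { rss: "50MB", heapUsed: "N/A" },
  framework: "celery",
  language: "python",
  timestamp: new Date().toISOString(),
});
```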
+### 2.2. Event Stream (Logs)
+Workers publish lifecycle events to a shared Pub/Sub channel.
+
+* **Command**: `PUBLISH flux_console:logs <payload>`
+* **Payload (JSON)**:
+```json
+{
+  "level": "info", // "info" (start), "success", "error"
+  "message": "Processing Task: tasks.send_email",
+  "workerId": "celery@worker-1",
+  "queue": "default",
+  "jobId": "uuid-v4",
+  "timestamp": "2026-01-10T12:00:01Z",
+  "metadata": {
+    "attempt": 1,
+    "latency": 45 // ms (for success/error events)
+  }
+}
+```
+
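A connector shapes each job lifecycle transition into the payload above before publishing. A hedged sketch of that shaping step, with `jobFinished` as an assumed helper name (the real connectors may structure this differently):

```typescript
// Illustrative sketch: shaping a finished job into the UZP log payload
// that a connector would PUBLISH on flux_console:logs.
type LogLevel = "info" | "success" | "error";

interface UzpLogEvent {
  level: LogLevel;
  message: string;
  workerId: string;
  queue: string;
  jobId: string;
  timestamp: string;
  metadata: { attempt: number; latency?: number };
}

function jobFinished(
  workerId: string,
  queue: string,
  jobId: string,
  taskName: string,
  attempt: number,
  startedAtMs: number,
  error?: Error,
): UzpLogEvent {
  const latency = Date.now() - startedAtMs; // ms, attached to success/error events
  return {
    level: error ? "error" : "success",
    message: error
      ? `Failed Task: ${taskName}: ${error.message}`
      : `Finished Task: ${taskName}`,
    workerId,
    queue,
    jobId,
    timestamp: new Date().toISOString(),
    metadata: { attempt, latency },
  };
}
// The connector then runs: PUBLISH flux_console:logs JSON.stringify(event)
```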
+### 2.3. Metrics (Optional but Recommended)
+Connectors should increment counters for throughput aggregation.
+
+* `INCR flux_console:metrics:processed`
+* `INCR flux_console:metrics:failed`
+
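Those two counters are all a dashboard needs for a success-rate gauge. A small sketch with an in-memory stand-in for Redis (the key names come from this RFC; everything else is illustrative):

```typescript
// In-memory stand-in for the two UZP counters, mirroring Redis INCR
// semantics: a missing key counts from 0 and INCR returns the new value.
const counters = new Map<string, number>();

function incr(key: string): number {
  const next = (counters.get(key) ?? 0) + 1;
  counters.set(key, next);
  return next;
}

function successRate(): number {
  const processed = counters.get("flux_console:metrics:processed") ?? 0;
  const failed = counters.get("flux_console:metrics:failed") ?? 0;
  const total = processed + failed;
  return total === 0 ? 1 : processed / total; // no traffic reads as healthy
}
```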
+---
+
+## 3. Implementation Plan: Python (Celery)
+
+**Target**: `gravito/zenith-celery` (PyPI Package)
+
+### Architecture
+Celery has a rich Signal system. We can hook into `worker_ready`, `task_prerun`, `task_success`, and `task_failure`.
+
+### Component Design
+1. **ZenithMonitor**: A Celery Bootstep that starts a background thread for Heartbeats.
+2. **SignalHandlers**:
+   * `task_prerun`: Publish `level: info` log.
+   * `task_success`: Publish `level: success` log + metrics.
+   * `task_failure`: Publish `level: error` log with traceback.
+
+### Configuration
+```python
+# celery.py
+app.conf.zenith_redis_url = "redis://localhost:6379/0"
+app.conf.zenith_enabled = True
+```
+
+---
+
+## 4. Implementation Plan: Node.js (BullMQ)
+
+**Target**: `@gravito/zenith-bullmq` (NPM Package)
+
+*Note: Gravito Stream is based on BullMQ principles but internal. This adapter allows **standard** BullMQ instances (e.g., in a NestJS app) to report to Zenith.*
+
+### Architecture
+BullMQ uses `QueueEvents` (which listens to Redis streams). A separate "Monitor" process is the best approach to avoid modifying the worker code too much.
+
+### Component Design
+1. **ZenithMonitor Class**:
+```typescript
+const monitor = new ZenithMonitor({
+  connection: redisOptions,
+  queues: ['email', 'reports']
+});
+monitor.start();
+```
+2. It listens to BullMQ global events (completed, failed) and bridges them to UZP.
+3. **Heartbeat**: Since BullMQ workers don't have a central registry, the Monitor acts as a "Virtual Worker" or we require users to instantiate a `ZenithWorker` wrapper.
+
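The bridging step in point 2 is essentially a mapping from BullMQ event names to UZP log levels. A minimal sketch, assuming the adapter attaches these handlers to `QueueEvents` instances internally (the mapping table and `bridge` function are illustrative, not the adapter's actual code):

```typescript
// Illustrative sketch: translating BullMQ global queue events into UZP log
// levels. "active", "completed", and "failed" are standard BullMQ
// QueueEvents event names; events UZP does not model are ignored.
type UzpLevel = "info" | "success" | "error";

const BULLMQ_EVENT_TO_UZP: Record<string, UzpLevel> = {
  active: "info",      // job picked up -> "start" log
  completed: "success",
  failed: "error",
};

function bridge(event: string, jobId: string, queue: string) {
  const level = BULLMQ_EVENT_TO_UZP[event];
  if (!level) return null; // e.g. "stalled", "drained" are not bridged
  return {
    level,
    jobId,
    queue,
    message: `Job ${jobId} ${event}`,
    timestamp: new Date().toISOString(),
  };
}
```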
+---
+
+## 5. Implementation Plan: Go (Asynq)
+
+**Target**: `github.com/gravito-framework/zenith-asynq`
+
+### Architecture
+Asynq provides `Server` middleware.
+
+### Component Design
+1. **Middleware**: `zenith.NewMiddleware(redisClient)`.
+2. Wraps handler execution to capture Start/Success/Fail times.
+3. Publishes to Redis asynchronously.
+
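The middleware in point 2 is a timing wrapper around the handler. The same shape, sketched in TypeScript for illustration (the Go connector would use Asynq's middleware interface; `zenithMiddleware` and `Reporter` here are hypothetical names):

```typescript
// Illustrative sketch of the wrap-and-time pattern the Asynq middleware uses:
// run the handler, report latency on success or error, and rethrow failures
// so the queue's retry semantics are preserved.
type Handler = (payload: unknown) => Promise<void>;
type Reporter = (level: "success" | "error", latencyMs: number) => void;

function zenithMiddleware(report: Reporter, next: Handler): Handler {
  return async (payload) => {
    const start = Date.now();
    try {
      await next(payload);
      report("success", Date.now() - start); // fire-and-forget in the real connector
    } catch (err) {
      report("error", Date.now() - start);
      throw err; // keep the queue's retry behavior intact
    }
  };
}
```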
+---
+
+## 6. Future Work: Rust (Faktory?)
+(To be determined based on demand)
package/dist/bin.js
CHANGED

@@ -44,6 +44,59 @@ var __export = (target, all) => {
 var __esm = (fn, res) => () => (fn && (res = fn(fn = 0)), res);
 var __require = import.meta.require;
 
+// ../atlas/src/errors/index.ts
+var DatabaseError, ConstraintViolationError, UniqueConstraintError, ForeignKeyConstraintError, NotNullConstraintError, TableNotFoundError, ConnectionError;
+var init_errors = __esm(() => {
+  DatabaseError = class DatabaseError extends Error {
+    originalError;
+    query;
+    bindings;
+    constructor(message, originalError, query, bindings) {
+      super(message);
+      this.name = "DatabaseError";
+      this.originalError = originalError;
+      this.query = query;
+      this.bindings = bindings;
+    }
+  };
+  ConstraintViolationError = class ConstraintViolationError extends DatabaseError {
+    constructor(message, originalError, query, bindings) {
+      super(message, originalError, query, bindings);
+      this.name = "ConstraintViolationError";
+    }
+  };
+  UniqueConstraintError = class UniqueConstraintError extends ConstraintViolationError {
+    constructor(message, originalError, query, bindings) {
+      super(message, originalError, query, bindings);
+      this.name = "UniqueConstraintError";
+    }
+  };
+  ForeignKeyConstraintError = class ForeignKeyConstraintError extends ConstraintViolationError {
+    constructor(message, originalError, query, bindings) {
+      super(message, originalError, query, bindings);
+      this.name = "ForeignKeyConstraintError";
+    }
+  };
+  NotNullConstraintError = class NotNullConstraintError extends ConstraintViolationError {
+    constructor(message, originalError, query, bindings) {
+      super(message, originalError, query, bindings);
+      this.name = "NotNullConstraintError";
+    }
+  };
+  TableNotFoundError = class TableNotFoundError extends DatabaseError {
+    constructor(message, originalError, query, bindings) {
+      super(message, originalError, query, bindings);
+      this.name = "TableNotFoundError";
+    }
+  };
+  ConnectionError = class ConnectionError extends DatabaseError {
+    constructor(message, originalError) {
+      super(message, originalError);
+      this.name = "ConnectionError";
+    }
+  };
+});
+
 // ../../node_modules/.bun/bson@6.10.4/node_modules/bson/lib/bson.cjs
 var require_bson = __commonJS((exports) => {
 var TypedArrayPrototypeGetSymbolToStringTag = (() => {

@@ -33364,71 +33417,20 @@ var require_lib3 = __commonJS((exports) => {
 } });
 });
 
-// ../atlas/src/errors/index.ts
-var DatabaseError, ConstraintViolationError, UniqueConstraintError, ForeignKeyConstraintError, NotNullConstraintError, TableNotFoundError, ConnectionError;
-var init_errors = __esm(() => {
-  DatabaseError = class DatabaseError extends Error {
-    originalError;
-    query;
-    bindings;
-    constructor(message, originalError, query, bindings) {
-      super(message);
-      this.name = "DatabaseError";
-      this.originalError = originalError;
-      this.query = query;
-      this.bindings = bindings;
-    }
-  };
-  ConstraintViolationError = class ConstraintViolationError extends DatabaseError {
-    constructor(message, originalError, query, bindings) {
-      super(message, originalError, query, bindings);
-      this.name = "ConstraintViolationError";
-    }
-  };
-  UniqueConstraintError = class UniqueConstraintError extends ConstraintViolationError {
-    constructor(message, originalError, query, bindings) {
-      super(message, originalError, query, bindings);
-      this.name = "UniqueConstraintError";
-    }
-  };
-  ForeignKeyConstraintError = class ForeignKeyConstraintError extends ConstraintViolationError {
-    constructor(message, originalError, query, bindings) {
-      super(message, originalError, query, bindings);
-      this.name = "ForeignKeyConstraintError";
-    }
-  };
-  NotNullConstraintError = class NotNullConstraintError extends ConstraintViolationError {
-    constructor(message, originalError, query, bindings) {
-      super(message, originalError, query, bindings);
-      this.name = "NotNullConstraintError";
-    }
-  };
-  TableNotFoundError = class TableNotFoundError extends DatabaseError {
-    constructor(message, originalError, query, bindings) {
-      super(message, originalError, query, bindings);
-      this.name = "TableNotFoundError";
-    }
-  };
-  ConnectionError = class ConnectionError extends DatabaseError {
-    constructor(message, originalError) {
-      super(message, originalError);
-      this.name = "ConnectionError";
-    }
-  };
-});
-
 // ../atlas/src/drivers/MongoDBDriver.ts
 class MongoDBDriver {
   config;
   client = null;
   db = null;
   MongoClientCtor;
-  constructor(config, deps
+  constructor(config, deps) {
     if (config.driver !== "mongodb") {
-      throw new Error(`Invalid driver type '
+      throw new Error(`Invalid driver type '${config.driver}' for MongoDBDriver`);
     }
     this.config = config;
-
+    if (deps?.MongoClient) {
+      this.MongoClientCtor = deps.MongoClient;
+    }
   }
   getDriverName() {
     return "mongodb";

@@ -33438,13 +33440,22 @@ class MongoDBDriver {
       return;
     }
     try {
-
+      const Ctor = this.MongoClientCtor || (await this.loadMongoModule()).MongoClient;
+      this.client = new Ctor(this.config.uri ?? `mongodb://${this.config.host}:${this.config.port}/${this.config.database}`);
       await this.client.connect();
      this.db = this.client.db(this.config.database);
    } catch (error) {
      throw new ConnectionError("Could not connect to MongoDB cluster", error);
    }
  }
+  async loadMongoModule() {
+    try {
+      const mongodb = await Promise.resolve().then(() => __toESM(require_lib3(), 1));
+      return mongodb;
+    } catch (e) {
+      throw new Error(`MongoDB driver requires the "mongodb" package. Please install it: bun add mongodb. Original Error: ${e}`);
+    }
+  }
  async disconnect() {
    if (this.client) {
      await this.client.close();

@@ -33537,10 +33548,8 @@ class MongoDBDriver {
     return doc;
   }
 }
-var import_mongodb;
 var init_MongoDBDriver = __esm(() => {
   init_errors();
-  import_mongodb = __toESM(require_lib3(), 1);
 });
 
 // ../../node_modules/.bun/sqlstring@2.3.3/node_modules/sqlstring/lib/SqlString.js

@@ -67480,12 +67489,14 @@ class RedisDriver {
   config;
   client = null;
   RedisCtor;
-  constructor(config, deps
+  constructor(config, deps) {
     if (config.driver !== "redis") {
       throw new Error(`Invalid driver type '${config.driver}' for RedisDriver`);
     }
     this.config = config;
-
+    if (deps?.Redis) {
+      this.RedisCtor = deps.Redis;
+    }
   }
   getDriverName() {
     return "redis";

@@ -67495,7 +67506,8 @@ class RedisDriver {
       return;
     }
     try {
-      this.
+      const Ctor = this.RedisCtor || (await this.loadRedisModule()).default;
+      this.client = new Ctor({
         host: this.config.host,
         port: this.config.port ?? 6379,
         password: this.config.password,

@@ -67507,6 +67519,14 @@ class RedisDriver {
       throw new ConnectionError("Could not connect to Redis host", error);
     }
   }
+  async loadRedisModule() {
+    try {
+      const ioredis = await Promise.resolve().then(() => __toESM(require_built3(), 1));
+      return ioredis;
+    } catch (e) {
+      throw new Error(`Redis driver requires the "ioredis" package. Please install it: bun add ioredis. Original Error: ${e}`);
+    }
+  }
   async disconnect() {
     if (this.client) {
       await this.client.quit();

@@ -67578,10 +67598,8 @@ class RedisDriver {
     return false;
   }
 }
-var import_ioredis;
 var init_RedisDriver = __esm(() => {
   init_errors();
-  import_ioredis = __toESM(require_built3(), 1);
 });
 
 // ../../node_modules/.bun/better-sqlite3@11.10.0/node_modules/better-sqlite3/lib/util.js

@@ -68376,11 +68394,15 @@ class SQLiteDriver {
       });
       this.client.exec("PRAGMA journal_mode = WAL;");
     } else {
-
-
-
-
-
+      try {
+        const { default: Database } = await Promise.resolve().then(() => __toESM(require_lib10(), 1));
+        this.client = new Database(this.config.database, {
+          readonly: this.config.readonly ?? false
+        });
+        this.client.pragma("journal_mode = WAL");
+      } catch (e) {
+        throw new Error(`SQLite driver requires "better-sqlite3" when running in Node.js. Please install it: bun add better-sqlite3. Original Error: ${e}`);
+      }
     }
   } catch (error) {
     throw new ConnectionError("Could not connect to SQLite database", error);

@@ -70644,7 +70666,7 @@ var init_Connection = __esm(() => {
     return new Proxy(this, {
       get(target, prop) {
         if (prop in target) {
-          return target
+          return Reflect.get(target, prop);
         }
         if (typeof prop === "string" && target.driver && typeof target.driver[prop] === "function") {
           return target.driver[prop].bind(target.driver);

@@ -116028,7 +116050,7 @@ class NodeProbe {
   }
 }
 // ../quasar/src/QuasarAgent.ts
-var
+var import_ioredis = __toESM(require_built3(), 1);
 
 // ../quasar/src/probes/BullProbe.ts
 class BullProbe {

@@ -116147,7 +116169,7 @@ class QuasarAgent {
       this.transportRedis = options.transport.client;
     } else {
       const url = options.transport?.url || options.redisUrl || "redis://localhost:6379";
-      this.transportRedis = new
+      this.transportRedis = new import_ioredis.Redis(url, {
         lazyConnect: true,
         ...options.transport?.options || {}
       });

@@ -116156,7 +116178,7 @@ class QuasarAgent {
     if (options.monitor.client) {
       this.monitorRedis = options.monitor.client;
     } else if (options.monitor.url) {
-      this.monitorRedis = new
+      this.monitorRedis = new import_ioredis.Redis(options.monitor.url, {
         lazyConnect: true,
         ...options.monitor.options || {}
       });

@@ -116252,7 +116274,7 @@ class QuasarAgent {
       return true;
     }
     const redisUrl = this.transportRedis.options?.host ? `redis://${this.transportRedis.options.host}:${this.transportRedis.options.port || 6379}` : "redis://localhost:6379";
-    this.subscriberRedis = new
+    this.subscriberRedis = new import_ioredis.Redis(redisUrl, {
       lazyConnect: true
     });
     try {

@@ -118148,12 +118170,12 @@ import path from "path";
 import { fileURLToPath } from "url";
 
 // src/server/services/CommandService.ts
-var
+var import_ioredis2 = __toESM(require_built3(), 1);
 
 class CommandService {
   redis;
   constructor(redisUrl) {
-    this.redis = new
+    this.redis = new import_ioredis2.Redis(redisUrl, {
       lazyConnect: true
     });
   }

@@ -118211,13 +118233,13 @@ class CommandService {
 }
 
 // src/server/services/PulseService.ts
-var
+var import_ioredis3 = __toESM(require_built3(), 1);
 
 class PulseService {
   redis;
   prefix = "gravito:quasar:node:";
   constructor(redisUrl) {
-    this.redis = new
+    this.redis = new import_ioredis3.Redis(redisUrl, {
       lazyConnect: true
     });
   }

@@ -118269,10 +118291,10 @@ class PulseService {
 
 // src/server/services/QueueService.ts
 import { EventEmitter as EventEmitter2 } from "events";
-var
+var import_ioredis5 = __toESM(require_built3(), 1);
 
 // src/server/services/AlertService.ts
-var
+var import_ioredis4 = __toESM(require_built3(), 1);
 var import_nodemailer = __toESM(require_nodemailer(), 1);
 import { EventEmitter } from "events";
 

@@ -118285,7 +118307,7 @@ class AlertService {
   RULES_KEY = "gravito:zenith:alerts:rules";
   CONFIG_KEY = "gravito:zenith:alerts:config";
   constructor(redisUrl) {
-    this.redis = new
+    this.redis = new import_ioredis4.Redis(redisUrl, {
       lazyConnect: true
     });
     this.rules = [

@@ -118537,10 +118559,10 @@ class QueueService {
   manager;
   alerts;
   constructor(redisUrl, prefix = "queue:", persistence) {
-    this.redis = new
+    this.redis = new import_ioredis5.Redis(redisUrl, {
       lazyConnect: true
     });
-    this.subRedis = new
+    this.subRedis = new import_ioredis5.Redis(redisUrl, {
       lazyConnect: true
     });
     this.prefix = prefix;
package/dist/server/index.js
CHANGED
|
@@ -43,6 +43,59 @@ var __export = (target, all) => {
|
|
|
43
43
|
var __esm = (fn, res) => () => (fn && (res = fn(fn = 0)), res);
|
|
44
44
|
var __require = import.meta.require;
|
|
45
45
|
|
|
46
|
+
// ../atlas/src/errors/index.ts
|
|
47
|
+
var DatabaseError, ConstraintViolationError, UniqueConstraintError, ForeignKeyConstraintError, NotNullConstraintError, TableNotFoundError, ConnectionError;
|
|
48
|
+
var init_errors = __esm(() => {
|
|
49
|
+
DatabaseError = class DatabaseError extends Error {
|
|
50
|
+
originalError;
|
|
51
|
+
query;
|
|
52
|
+
bindings;
|
|
53
|
+
constructor(message, originalError, query, bindings) {
|
|
54
|
+
super(message);
|
|
55
|
+
this.name = "DatabaseError";
|
|
56
|
+
this.originalError = originalError;
|
|
57
|
+
this.query = query;
|
|
58
|
+
this.bindings = bindings;
|
|
59
|
+
}
|
|
60
|
+
};
|
|
61
|
+
ConstraintViolationError = class ConstraintViolationError extends DatabaseError {
|
|
62
|
+
constructor(message, originalError, query, bindings) {
|
|
63
|
+
super(message, originalError, query, bindings);
|
|
64
|
+
this.name = "ConstraintViolationError";
|
|
65
|
+
}
|
|
66
|
+
};
|
|
67
|
+
UniqueConstraintError = class UniqueConstraintError extends ConstraintViolationError {
|
|
68
|
+
constructor(message, originalError, query, bindings) {
|
|
69
|
+
super(message, originalError, query, bindings);
|
|
70
|
+
this.name = "UniqueConstraintError";
|
|
71
|
+
}
|
|
72
|
+
};
|
|
73
|
+
ForeignKeyConstraintError = class ForeignKeyConstraintError extends ConstraintViolationError {
|
|
74
|
+
constructor(message, originalError, query, bindings) {
|
|
75
|
+
super(message, originalError, query, bindings);
|
|
76
|
+
this.name = "ForeignKeyConstraintError";
|
|
77
|
+
}
|
|
78
|
+
};
|
|
79
|
+
NotNullConstraintError = class NotNullConstraintError extends ConstraintViolationError {
|
|
80
|
+
constructor(message, originalError, query, bindings) {
|
|
81
|
+
super(message, originalError, query, bindings);
|
|
82
|
+
this.name = "NotNullConstraintError";
|
|
83
|
+
}
|
|
84
|
+
};
|
|
85
|
+
TableNotFoundError = class TableNotFoundError extends DatabaseError {
|
|
86
|
+
constructor(message, originalError, query, bindings) {
|
|
87
|
+
super(message, originalError, query, bindings);
|
|
88
|
+
this.name = "TableNotFoundError";
|
|
89
|
+
}
|
|
90
|
+
};
|
|
91
|
+
ConnectionError = class ConnectionError extends DatabaseError {
|
|
92
|
+
constructor(message, originalError) {
|
|
93
|
+
super(message, originalError);
|
|
94
|
+
this.name = "ConnectionError";
|
|
95
|
+
}
|
|
96
|
+
};
|
|
97
|
+
});
|
|
98
|
+
|
|
46
99
|
// ../../node_modules/.bun/bson@6.10.4/node_modules/bson/lib/bson.cjs
|
|
47
100
|
var require_bson = __commonJS((exports) => {
|
|
48
101
|
var TypedArrayPrototypeGetSymbolToStringTag = (() => {
|
|
@@ -33363,71 +33416,20 @@ var require_lib3 = __commonJS((exports) => {
|
|
|
33363
33416
|
} });
|
|
33364
33417
|
});
|
|
33365
33418
|
|
|
33366
|
-
// ../atlas/src/errors/index.ts
|
|
33367
|
-
var DatabaseError, ConstraintViolationError, UniqueConstraintError, ForeignKeyConstraintError, NotNullConstraintError, TableNotFoundError, ConnectionError;
|
|
33368
|
-
var init_errors = __esm(() => {
|
|
33369
|
-
DatabaseError = class DatabaseError extends Error {
|
|
33370
|
-
originalError;
|
|
33371
|
-
query;
|
|
33372
|
-
bindings;
|
|
33373
|
-
constructor(message, originalError, query, bindings) {
|
|
33374
|
-
super(message);
|
|
33375
|
-
this.name = "DatabaseError";
|
|
33376
|
-
this.originalError = originalError;
|
|
33377
|
-
this.query = query;
|
|
33378
|
-
this.bindings = bindings;
|
|
33379
|
-
}
|
|
33380
|
-
};
|
|
33381
|
-
ConstraintViolationError = class ConstraintViolationError extends DatabaseError {
|
|
33382
|
-
constructor(message, originalError, query, bindings) {
|
|
33383
|
-
super(message, originalError, query, bindings);
|
|
33384
|
-
this.name = "ConstraintViolationError";
|
|
33385
|
-
}
|
|
33386
|
-
};
|
|
33387
|
-
UniqueConstraintError = class UniqueConstraintError extends ConstraintViolationError {
|
|
33388
|
-
constructor(message, originalError, query, bindings) {
|
|
33389
|
-
super(message, originalError, query, bindings);
|
|
33390
|
-
this.name = "UniqueConstraintError";
|
|
33391
|
-
}
|
|
33392
|
-
};
|
|
33393
|
-
ForeignKeyConstraintError = class ForeignKeyConstraintError extends ConstraintViolationError {
|
|
33394
|
-
constructor(message, originalError, query, bindings) {
|
|
33395
|
-
super(message, originalError, query, bindings);
|
|
33396
|
-
this.name = "ForeignKeyConstraintError";
|
|
33397
|
-
}
|
|
33398
|
-
};
|
|
33399
|
-
NotNullConstraintError = class NotNullConstraintError extends ConstraintViolationError {
|
|
33400
|
-
constructor(message, originalError, query, bindings) {
|
|
33401
|
-
super(message, originalError, query, bindings);
|
|
33402
|
-
this.name = "NotNullConstraintError";
|
|
33403
|
-
}
|
|
33404
|
-
};
|
|
33405
|
-
TableNotFoundError = class TableNotFoundError extends DatabaseError {
|
|
33406
|
-
constructor(message, originalError, query, bindings) {
|
|
33407
|
-
super(message, originalError, query, bindings);
|
|
33408
|
-
this.name = "TableNotFoundError";
|
|
33409
|
-
}
|
|
33410
|
-
};
|
|
33411
|
-
ConnectionError = class ConnectionError extends DatabaseError {
|
|
33412
|
-
constructor(message, originalError) {
|
|
33413
|
-
super(message, originalError);
|
|
33414
|
-
this.name = "ConnectionError";
|
|
33415
|
-
}
|
|
33416
|
-
};
|
|
33417
|
-
});
|
|
33418
|
-
|
|
33419
33419
|
// ../atlas/src/drivers/MongoDBDriver.ts
|
|
33420
33420
|
class MongoDBDriver {
|
|
33421
33421
|
config;
|
|
33422
33422
|
client = null;
|
|
33423
33423
|
db = null;
|
|
33424
33424
|
MongoClientCtor;
|
|
33425
|
-
constructor(config, deps
|
|
33425
|
+
constructor(config, deps) {
|
|
33426
33426
|
if (config.driver !== "mongodb") {
|
|
33427
|
-
throw new Error(`Invalid driver type '
|
|
33427
|
+
throw new Error(`Invalid driver type '${config.driver}' for MongoDBDriver`);
|
|
33428
33428
|
}
|
|
33429
33429
|
this.config = config;
|
|
33430
|
-
|
|
33430
|
+
if (deps?.MongoClient) {
|
|
33431
|
+
this.MongoClientCtor = deps.MongoClient;
|
|
33432
|
+
}
|
|
33431
33433
|
}
|
|
33432
33434
|
getDriverName() {
|
|
33433
33435
|
return "mongodb";
|
|
@@ -33437,13 +33439,22 @@ class MongoDBDriver {
|
|
|
33437
33439
|
return;
|
|
33438
33440
|
}
|
|
33439
33441
|
try {
|
|
33440
|
-
|
|
33442
|
+
const Ctor = this.MongoClientCtor || (await this.loadMongoModule()).MongoClient;
|
|
33443
|
+
this.client = new Ctor(this.config.uri ?? `mongodb://${this.config.host}:${this.config.port}/${this.config.database}`);
|
|
33441
33444
|
await this.client.connect();
|
|
33442
33445
|
this.db = this.client.db(this.config.database);
|
|
33443
33446
|
} catch (error) {
|
|
33444
33447
|
throw new ConnectionError("Could not connect to MongoDB cluster", error);
|
|
33445
33448
|
}
|
|
33446
33449
|
}
|
|
33450
|
+
async loadMongoModule() {
|
|
33451
|
+
try {
|
|
33452
|
+
const mongodb = await Promise.resolve().then(() => __toESM(require_lib3(), 1));
|
|
33453
|
+
return mongodb;
|
|
33454
|
+
} catch (e) {
|
|
33455
|
+
throw new Error(`MongoDB driver requires the "mongodb" package. Please install it: bun add mongodb. Original Error: ${e}`);
|
|
33456
|
+
}
|
|
33457
|
+
}
|
|
33447
33458
|
async disconnect() {
|
|
33448
33459
|
if (this.client) {
|
|
33449
33460
|
await this.client.close();
|
|
@@ -33536,10 +33547,8 @@ class MongoDBDriver {
|
|
|
33536
33547
|
return doc;
|
|
33537
33548
|
}
|
|
33538
33549
|
}
|
|
33539
|
-
var import_mongodb;
|
|
33540
33550
|
var init_MongoDBDriver = __esm(() => {
|
|
33541
33551
|
init_errors();
|
|
33542
|
-
import_mongodb = __toESM(require_lib3(), 1);
|
|
33543
33552
|
});
|
|
33544
33553
|
|
|
33545
33554
|
// ../../node_modules/.bun/sqlstring@2.3.3/node_modules/sqlstring/lib/SqlString.js
|
|
@@ -67479,12 +67488,14 @@ class RedisDriver {
|
|
|
67479
67488
|
config;
|
|
67480
67489
|
client = null;
|
|
67481
67490
|
RedisCtor;
|
|
67482
|
-
constructor(config, deps
|
|
67491
|
+
constructor(config, deps) {
|
|
67483
67492
|
if (config.driver !== "redis") {
|
|
67484
67493
|
throw new Error(`Invalid driver type '${config.driver}' for RedisDriver`);
|
|
67485
67494
|
}
|
|
67486
67495
|
this.config = config;
|
|
67487
|
-
|
|
67496
|
+
if (deps?.Redis) {
|
|
67497
|
+
this.RedisCtor = deps.Redis;
|
|
67498
|
+
}
|
|
67488
67499
|
}
|
|
67489
67500
|
getDriverName() {
|
|
67490
67501
|
return "redis";
|
|
@@ -67494,7 +67505,8 @@ class RedisDriver {
|
|
|
67494
67505
|
return;
|
|
67495
67506
|
}
|
|
67496
67507
|
try {
|
|
67497
|
-
this.
|
|
67508
|
+
const Ctor = this.RedisCtor || (await this.loadRedisModule()).default;
|
|
67509
|
+
this.client = new Ctor({
|
|
67498
67510
|
host: this.config.host,
|
|
67499
67511
|
port: this.config.port ?? 6379,
|
|
67500
67512
|
password: this.config.password,
|
|
@@ -67506,6 +67518,14 @@ class RedisDriver {
       throw new ConnectionError("Could not connect to Redis host", error);
     }
   }
+  async loadRedisModule() {
+    try {
+      const ioredis = await Promise.resolve().then(() => __toESM(require_built3(), 1));
+      return ioredis;
+    } catch (e) {
+      throw new Error(`Redis driver requires the "ioredis" package. Please install it: bun add ioredis. Original Error: ${e}`);
+    }
+  }
   async disconnect() {
     if (this.client) {
       await this.client.quit();
@@ -67577,10 +67597,8 @@ class RedisDriver {
     return false;
   }
 }
-var import_ioredis;
 var init_RedisDriver = __esm(() => {
   init_errors();
-  import_ioredis = __toESM(require_built3(), 1);
 });

 // ../../node_modules/.bun/better-sqlite3@11.10.0/node_modules/better-sqlite3/lib/util.js
@@ -68375,11 +68393,15 @@ class SQLiteDriver {
       });
       this.client.exec("PRAGMA journal_mode = WAL;");
     } else {
-
-
-
-
-
+      try {
+        const { default: Database } = await Promise.resolve().then(() => __toESM(require_lib10(), 1));
+        this.client = new Database(this.config.database, {
+          readonly: this.config.readonly ?? false
+        });
+        this.client.pragma("journal_mode = WAL");
+      } catch (e) {
+        throw new Error(`SQLite driver requires "better-sqlite3" when running in Node.js. Please install it: bun add better-sqlite3. Original Error: ${e}`);
+      }
     }
   } catch (error) {
     throw new ConnectionError("Could not connect to SQLite database", error);
@@ -70643,7 +70665,7 @@ var init_Connection = __esm(() => {
     return new Proxy(this, {
       get(target, prop) {
         if (prop in target) {
-          return target
+          return Reflect.get(target, prop);
         }
         if (typeof prop === "string" && target.driver && typeof target.driver[prop] === "function") {
          return target.driver[prop].bind(target.driver);
@@ -116027,7 +116049,7 @@ class NodeProbe {
   }
 }
 // ../quasar/src/QuasarAgent.ts
-var
+var import_ioredis = __toESM(require_built3(), 1);

 // ../quasar/src/probes/BullProbe.ts
 class BullProbe {
@@ -116146,7 +116168,7 @@ class QuasarAgent {
       this.transportRedis = options.transport.client;
     } else {
       const url = options.transport?.url || options.redisUrl || "redis://localhost:6379";
-      this.transportRedis = new
+      this.transportRedis = new import_ioredis.Redis(url, {
         lazyConnect: true,
         ...options.transport?.options || {}
       });
@@ -116155,7 +116177,7 @@ class QuasarAgent {
     if (options.monitor.client) {
       this.monitorRedis = options.monitor.client;
     } else if (options.monitor.url) {
-      this.monitorRedis = new
+      this.monitorRedis = new import_ioredis.Redis(options.monitor.url, {
         lazyConnect: true,
         ...options.monitor.options || {}
       });
@@ -116251,7 +116273,7 @@ class QuasarAgent {
       return true;
     }
     const redisUrl = this.transportRedis.options?.host ? `redis://${this.transportRedis.options.host}:${this.transportRedis.options.port || 6379}` : "redis://localhost:6379";
-    this.subscriberRedis = new
+    this.subscriberRedis = new import_ioredis.Redis(redisUrl, {
       lazyConnect: true
     });
     try {
@@ -118147,12 +118169,12 @@ import path from "path";
 import { fileURLToPath } from "url";

 // src/server/services/CommandService.ts
-var
+var import_ioredis2 = __toESM(require_built3(), 1);

 class CommandService {
   redis;
   constructor(redisUrl) {
-    this.redis = new
+    this.redis = new import_ioredis2.Redis(redisUrl, {
       lazyConnect: true
     });
   }
@@ -118210,13 +118232,13 @@ class CommandService {
 }

 // src/server/services/PulseService.ts
-var
+var import_ioredis3 = __toESM(require_built3(), 1);

 class PulseService {
   redis;
   prefix = "gravito:quasar:node:";
   constructor(redisUrl) {
-    this.redis = new
+    this.redis = new import_ioredis3.Redis(redisUrl, {
       lazyConnect: true
     });
   }
@@ -118268,10 +118290,10 @@ class PulseService {

 // src/server/services/QueueService.ts
 import { EventEmitter as EventEmitter2 } from "events";
-var
+var import_ioredis5 = __toESM(require_built3(), 1);

 // src/server/services/AlertService.ts
-var
+var import_ioredis4 = __toESM(require_built3(), 1);
 var import_nodemailer = __toESM(require_nodemailer(), 1);
 import { EventEmitter } from "events";
@@ -118284,7 +118306,7 @@ class AlertService {
   RULES_KEY = "gravito:zenith:alerts:rules";
   CONFIG_KEY = "gravito:zenith:alerts:config";
   constructor(redisUrl) {
-    this.redis = new
+    this.redis = new import_ioredis4.Redis(redisUrl, {
       lazyConnect: true
     });
     this.rules = [
@@ -118536,10 +118558,10 @@ class QueueService {
   manager;
   alerts;
   constructor(redisUrl, prefix = "queue:", persistence) {
-    this.redis = new
+    this.redis = new import_ioredis5.Redis(redisUrl, {
       lazyConnect: true
     });
-    this.subRedis = new
+    this.subRedis = new import_ioredis5.Redis(redisUrl, {
       lazyConnect: true
     });
     this.prefix = prefix;
package/package.json
CHANGED
package/DEMO.md
DELETED
@@ -1,156 +0,0 @@
# 🎮 Flux Console - Live Demo Walkthrough

This guide provides a step-by-step script for demonstrating the capabilities of **Flux Console**. It simulates a real-world production environment with traffic spikes, worker processing, and real-time monitoring.

## 🏗️ Architecture Setup

In this demo, we will run four components locally:
1. **Redis**: The message broker (must be running on `localhost:6379`).
2. **Flux Console**: The monitoring dashboard.
3. **Demo Worker**: A simulated worker that processes jobs from queues (`orders`, `reports`, etc.).
4. **Traffic Generator**: A script to flood the queues with jobs.

---

## 🗄️ Persistence & History (Optional)

To test the **Job Archive**, **Operational Logs**, and **Search** features, you need a database. Flux Console supports two modes:

### A. Zero-Config (SQLite) - **Recommended for Quick Tests**
Simply set the `DB_DRIVER` and `DB_NAME` environment variables. It will create a local `.sqlite` file.
```bash
export DB_DRIVER=sqlite
export DB_NAME=flux.sqlite
export PERSIST_ARCHIVE_COMPLETED=true # Archive successful jobs too
```

### B. Full Stack (MySQL + Redis) - **Using Docker**
If you have Docker installed, you can spin up a production-ready environment:
```bash
cd packages/flux-console
docker-compose up -d
```
Then set your env variables to match:
```bash
export DB_HOST=localhost
export DB_USER=root
export DB_PASSWORD=root
export DB_NAME=flux
```

---

## 🎬 Step-by-Step Demo Script

### Step 1: Start the Flux Console 🖥️

Open your first terminal window and launch the console. This starts both the web server and the SSE (Server-Sent Events) stream.

```bash
cd packages/flux-console
bun run start
```

> **Verify**: Open [http://localhost:3000](http://localhost:3000) in your browser. You should see the dashboard. It might be empty or show "No Data" initially.

### Step 2: Start the Worker 👷

We need a worker to "eat" the jobs. Without this, jobs will just pile up in the queue.
Open a **second terminal window**:

```bash
cd packages/flux-console
bun run scripts/demo-worker.ts
```

> **Observe**:
> - You should see `[Consumer] Started`.
> - The console output will show it's watching queues: `orders`, `notifications`, `billing`, etc.
> - **In the Browser**: Go to the **Workers** page. You should see `worker-xxxxx` appear as "Online". Note the **Cluster RAM** and **Load** metrics which reflect your actual machine's status.

### Step 3: Unleash the Traffic! 🚀

Now, let's simulate a traffic spike (e.g., a Black Friday sale).
Open a **third terminal window**:

```bash
cd packages/flux-console
bun run scripts/generate-random-traffic.ts
```

This script will:
- Push **50 jobs** randomly distributed to different queues.
- Some jobs are designed to **fail** (to test error handling).
- Some jobs are **delayed**.

> **Pro Tip**: Run this command multiple times rapidly to simulate a higher load spike!

---

## 🧪 Understanding Test Job Behavior

The demo worker uses a special `TestJob` class that simulates different real-world scenarios:

### Intentional Failures (DLQ Testing)
Jobs with IDs containing `"fail"` (e.g., `job-fail-1767244949663-25`) are **designed to always throw an error**. This is intentional and serves to demonstrate:

1. **Retry Mechanism**: You'll see these jobs attempt multiple times (`Attempt: 1, 2, 3...`).
2. **Exponential Backoff**: Each retry waits longer than the previous one (2s, 6s, 18s...).
3. **Dead Letter Queue (DLQ)**: After max attempts (default: 3), the job moves to the **Failed** queue.
4. **Error Handling UI**: You can see these in the Console's "Failed" tab with full error stack traces.

**This is expected behavior!** These jobs represent scenarios like:
- Invalid order IDs
- Malformed email addresses
- External API permanently rejecting a request

### Normal Jobs
Jobs without `"fail"` in their ID will:
- Process successfully after a simulated 50ms delay
- Update the throughput metrics
- Disappear from the queue

### The `default` Queue
When you click **"Retry All Failed"** in the Console, failed jobs are moved back to the queue. Due to how the retry mechanism works, they may be placed in the `default` queue instead of their original queue. This is why the worker monitors both specific queues (`orders`, `email`, etc.) **and** the `default` queue.

---

## 🎬 Step 4: The Showcase (What to show in the UI) ✨

Now, switch to the browser window and walk through these views:

#### 1. 📊 Dashboard (Overview)
- **Throughput Chart**: You will see a sudden spike in the green line (Processed/min).
- **Active Queues**: You'll see numbers jumping in the `Waiting` and `Active` columns.
- **Top Right Live Logs**: Watch the logs stream in real-time as the worker processes jobs.
- **Log Search**: Click on **"Search Archive"** in the logs panel to open the historical log browser. This allows querying through millions of past events stored in SQL.

#### 2. 🧱 Queues Page
- Navigate to the **Queues** tab.
- Click on `queue:orders` or `queue:email`.
- **Action**: You can see jobs moving from **Waiting** to **Active**.
- **Inspection**: Click the "Eye" icon (Inspector) on a queue to see the JSON payload of waiting jobs.

#### 3. 🚨 Retry Handling (The "Oh No!" Moment)
- Go to the **Queues** page and look for the **Failed** tab (red badge).
- You should see jobs with an error like `Simulated permanent failure`.
- **Action**: Click the "Retry All" button specifically for the failed jobs.
- **Result**: Watch the "Failed" count drop to 0 and the "Waiting" count go up. The worker will pick them up again.

#### 4. ⚙️ Workers Page
- Refresh or stay on the **Workers** page.
- Observe the **Avg Load** bar changing colors (green -> amber) depending on your CPU usage.
- Explain that this demonstrates the **Real-time Health Monitoring** of the infrastructure.

---

## 🧹 Cleanup

To reset the demo environment (purge all queues):

```bash
# In the third terminal
bun run scripts/debug-redis.ts
# OR manually flush redis if you have redis-cli installed
# redis-cli flushall
```