@gravito/stream 2.0.0 → 2.0.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,351 +1,167 @@
 # @gravito/stream
 
- Lightweight, high-performance queueing for Gravito. Supports multiple storage drivers, embedded and standalone workers, and flexible job serialization.
+ > Lightweight, high-performance queue and background job system for Galaxy Architecture.
 
- **Status**: v0.1.0 - core features complete with Memory, Database, Redis, Kafka, and SQS drivers.
+ [![npm version](https://img.shields.io/npm/v/@gravito/stream.svg)](https://www.npmjs.com/package/@gravito/stream)
+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
+ [![TypeScript](https://img.shields.io/badge/TypeScript-5.0+-blue.svg)](https://www.typescriptlang.org/)
+ [![Bun](https://img.shields.io/badge/Bun-1.0+-black.svg)](https://bun.sh/)
 
- ## Features
+ **@gravito/stream** is the standard background processing unit for Gravito applications. Built on the **Orbit** pattern, it provides a unified abstraction over various message brokers and queue systems, allowing you to scale from simple in-memory tasks to distributed event-driven architectures with zero friction.
 
- - **Zero runtime overhead**: Thin wrappers that delegate to drivers
- - **Multi-driver support**: Memory, Database, Redis, Kafka, SQS, RabbitMQ
- - **Modular**: Install only the driver you need (core < 50KB)
- - **Embedded or standalone workers**: Run in-process during development or standalone in production
- - **AI-friendly**: Strong typing, clear JSDoc, and predictable APIs
- - **Custom Retry Strategies**: Built-in exponential backoff with per-job overrides
- - **Dead Letter Queue (DLQ)**: Automatic handling of permanently failed jobs, with retry and clear operations
- - **Priority Queues**: Assign priority (critical, high, low) to any job
- - **Rate Limiting**: Control job consumption rate per queue (requires Redis)
+ ## ✨ Features
 
- ## Installation
+ - 🪐 **Orbit Integration** - Native integration with the PlanetCore micro-kernel and dependency injection.
+ - 🔌 **Multi-Broker Support** - Built-in drivers for **Redis**, **SQS**, **Kafka**, **RabbitMQ**, **Database** (SQL), and **Memory**.
+ - 🛠️ **Job-Based API** - Clean, class-based job definitions with built-in serialization and failure handling.
+ - 🚀 **High Throughput** - Optimized for **Bun**, supporting batch consumption, concurrent processing, and adaptive polling.
+ - 🛡️ **Reliability** - Built-in exponential backoff retries, Dead Letter Queues (DLQ), and sequential job grouping.
+ - 📝 **Audit & Persistence** - Optional SQL-based persistence layer for archiving job history and providing complete audit trails.
+ - 🕒 **Scheduler** - Built-in CRON-based task scheduling for recurring jobs.
+ - 🏢 **Worker Modes** - Run embedded workers during development or standalone worker processes in production.
+
+ ## 📦 Installation
 
 ```bash
 bun add @gravito/stream
 ```
 
- ## Quick Start
+ ## 🚀 Quick Start
+
+ ### 1. Define a Job
 
- ### 1. Define a job
+ Create a class extending `Job` and implement the `handle` logic:
 
 ```typescript
- import { Job } from '@gravito/stream'
+ import { Job } from '@gravito/stream';
 
- export class SendWelcomeEmail extends Job {
-   constructor(private userId: string) {
-     super()
+ export class ProcessOrder extends Job {
+   constructor(private orderId: string) {
+     super();
   }
 
   async handle(): Promise<void> {
-     const user = await User.find(this.userId)
-     await mail.send(new WelcomeEmail(user))
+     // Business logic: process the order
+     console.log(`Processing order: ${this.orderId}`);
   }
- }
- ```
-
- ### 3. Rate Limit & Priority (Optional)
-
- ```typescript
- const queue = c.get('queue')
-
- // High priority job
- await queue.push(new SendWelcomeEmail(user.id))
-   .onQueue('emails')
-   .withPriority('high') // 'critical' | 'high' | 'default' | 'low'
 
- // Configure rate limits in Consumer
- const consumer = new Consumer(manager, {
-   rateLimits: {
-     emails: { limit: 10, window: 60 } // Max 10 jobs per minute
+   async failed(error: Error): Promise<void> {
+     // Optional: clean up or notify on permanent failure
+     console.error(`Order ${this.orderId} failed: ${error.message}`);
   }
- })
- ```
-
- ### 4. Concurrency & Groups
-
- ```typescript
- const consumer = new Consumer(manager, {
-   queues: ['default'],
-   concurrency: 5, // Process up to 5 jobs concurrently
-   groupJobsSequential: true // Ensure jobs with the same groupId run sequentially (default: true)
- })
+ }
 ```
 
- Jobs with the same `groupId` will always be processed in order, even with high concurrency. Jobs from different groups (or no group) will run in parallel.
+ ### 2. Initialize OrbitStream
 
- ### 5. Polling & Batching Optimization
+ Register the orbit in your application bootstrap:
 
 ```typescript
- const consumer = new Consumer(manager, {
-   queues: ['default'],
-   // Polling Strategy
-   pollInterval: 1000, // Initial poll interval
-   minPollInterval: 100, // Adaptive: reduce to 100ms when jobs found
-   maxPollInterval: 5000, // Adaptive: backoff up to 5s when idle
-   backoffMultiplier: 1.5, // Exponential backoff factor
-
-   // Batch Consumption
-   batchSize: 10, // Fetch 10 jobs at once (requires concurrency > 1 for parallel processing)
-   concurrency: 10,
-
-   // Blocking Pop (Redis/SQS)
-   useBlocking: true, // Use BLPOP when batchSize=1 (reduces CPU usage)
-   blockingTimeout: 5 // Block for 5 seconds
- })
- ```
+ import { PlanetCore } from '@gravito/core';
+ import { OrbitStream } from '@gravito/stream';
 
- ### 6. Monitoring & Stats
+ const core = new PlanetCore();
 
- ```typescript
- const stats = consumer.getStats()
- console.log(`Processed: ${stats.processed}, Failed: ${stats.failed}`)
+ core.addOrbit(OrbitStream.configure({
+   default: 'redis',
+   connections: {
+     redis: {
+       driver: 'redis',
+       host: 'localhost',
+       port: 6379
+     }
+   },
+   autoStartWorker: process.env.NODE_ENV === 'development',
+   workerOptions: { queues: ['default'] }
+ }));
 
- // Metrics are also included in the heartbeat if monitor is enabled
+ await core.bootstrap();
 ```
 
- ### 2. Enqueue a job
-
- ```typescript
- const queue = c.get('queue')
-
- await queue.push(new SendWelcomeEmail(user.id))
-   .onQueue('emails')
-   .delay(60)
- ```
+ ### 3. Enqueue Jobs
 
- ### 3. Configure OrbitStream (Memory driver)
+ Access the `queue` service from the request context or container:
 
 ```typescript
- import { OrbitStream } from '@gravito/stream'
-
- const core = await PlanetCore.boot({
-   orbits: [
-     OrbitStream.configure({
-       default: 'memory',
-       connections: {
-         memory: { driver: 'memory' }
-       },
-       autoStartWorker: true,
-       workerOptions: {
-         queues: ['default', 'emails']
-       }
-     })
-   ]
- })
- ```
+ core.app.post('/orders', async (c) => {
+   const { id } = await c.req.json();
+   const queue = c.get('queue');
 
- ### 5. Polling & Batching Optimization
+   // Push with fluent configuration
+   await queue.push(new ProcessOrder(id))
+     .onQueue('high-priority')
+     .delay(30)
+     .backoff(5, 2); // Start with 5s delay, then double for each retry
 
- ```typescript
- const consumer = new Consumer(manager, {
-   queues: ['default'],
-   // Polling Strategy
-   pollInterval: 1000, // Initial poll interval
-   minPollInterval: 100, // Adaptive: reduce to 100ms when jobs found
-   maxPollInterval: 5000, // Adaptive: backoff up to 5s when idle
-   backoffMultiplier: 1.5, // Exponential backoff factor
-
-   // Batch Consumption
-   batchSize: 10, // Fetch 10 jobs at once (requires concurrency > 1 for parallel processing)
-   concurrency: 10,
-
-   // Blocking Pop (Redis/SQS)
-   useBlocking: true, // Use BLPOP when batchSize=1 (reduces CPU usage)
-   blockingTimeout: 5 // Block for 5 seconds
- })
+   return c.json({ success: true });
+ });
 ```
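The `backoff(5, 2)` chain above configures exponential retry delays. As a back-of-the-envelope sketch of that schedule (`retryDelays` is a hypothetical helper, not the package's internal code), the wait before retry attempt *n* would be `seconds * multiplier^(n - 1)`:

```typescript
// Hypothetical helper illustrating the retry schedule implied by backoff(seconds, multiplier).
// The first retry waits `seconds`, and each subsequent retry multiplies the previous wait.
function retryDelays(seconds: number, multiplier: number, attempts: number): number[] {
  const delays: number[] = [];
  for (let attempt = 1; attempt <= attempts; attempt++) {
    delays.push(seconds * Math.pow(multiplier, attempt - 1));
  }
  return delays;
}

// backoff(5, 2) over three retries: wait 5s, then 10s, then 20s.
console.log(retryDelays(5, 2, 3)); // [ 5, 10, 20 ]
```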
 
- ## Database Driver Example
-
- ```typescript
- import { OrbitStream } from '@gravito/stream'
-
- // Create a database service adapter that implements the DatabaseService interface
- const dbService = {
-   execute: async (sql, bindings) => yourDbClient.query(sql, bindings),
-   transaction: async (callback) => yourDbClient.transaction(callback),
- }
+ ## 🔧 Advanced Configuration
 
- const core = await PlanetCore.boot({
-   orbits: [
-     OrbitStream.configure({
-       default: 'database',
-       connections: {
-         database: {
-           driver: 'database',
-           table: 'jobs',
-           dbService: dbService // Pass your database service
-         }
-       }
-     })
-   ]
- })
- ```
+ ### Multi-Queue & Concurrency
 
- ## RabbitMQ Driver Example
+ Configure the consumer to handle multiple queues with different priorities and concurrency levels:
 
 ```typescript
- import { OrbitStream } from '@gravito/stream'
- import amqp from 'amqplib'
-
- const connection = await amqp.connect('amqp://localhost')
-
- const core = await PlanetCore.boot({
-   orbits: [
-     OrbitStream.configure({
-       default: 'rabbitmq',
-       connections: {
-         rabbitmq: {
-           driver: 'rabbitmq',
-           client: connection,
-           exchange: 'gravito.events',
-           exchangeType: 'fanout'
-         }
-       }
-     })
-   ]
- })
- ```
-
- ## Database Schema
-
- ```sql
- CREATE TABLE jobs (
-   id BIGSERIAL PRIMARY KEY,
-   queue VARCHAR(255) NOT NULL,
-   payload TEXT NOT NULL,
-   attempts INT DEFAULT 0,
-   reserved_at TIMESTAMP,
-   available_at TIMESTAMP NOT NULL,
-   created_at TIMESTAMP NOT NULL DEFAULT NOW()
- );
-
- -- Optimized index for batch popping with SKIP LOCKED
- CREATE INDEX idx_jobs_queue_available_reserved ON jobs(queue, available_at, reserved_at);
- CREATE INDEX idx_jobs_reserved ON jobs(reserved_at);
+ const consumer = new Consumer(manager, {
+   queues: ['critical', 'default', 'low'],
+   concurrency: 10, // Max 10 concurrent jobs
+   groupJobsSequential: true, // Process jobs with the same groupId in strict order
+   batchSize: 5, // Fetch 5 jobs per poll
+ });
 ```
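The `groupJobsSequential` option above guarantees ordering within a group while separate groups still run in parallel. A simplified sketch of that scheduling idea (the `runGrouped` helper is hypothetical; the actual `Consumer` internals may differ):

```typescript
interface QueuedJob {
  id: string;
  groupId?: string;
}

// Each groupId gets its own promise chain, so jobs sharing a group run strictly
// in order; jobs from different groups (or with no group) proceed concurrently.
async function runGrouped(
  jobs: QueuedJob[],
  handle: (job: QueuedJob) => Promise<void>
): Promise<void> {
  const chains = new Map<string, Promise<void>>();
  const ungrouped: Promise<void>[] = [];

  for (const job of jobs) {
    if (job.groupId === undefined) {
      ungrouped.push(handle(job)); // no ordering constraint
    } else {
      const tail = chains.get(job.groupId) ?? Promise.resolve();
      chains.set(job.groupId, tail.then(() => handle(job)));
    }
  }

  await Promise.all([...chains.values(), ...ungrouped]);
}
```

Even if the first job of a group is slow, the next job of that group waits for it, while other groups keep processing.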
 
- ## Persistence and Audit Mode
-
- The `@gravito/stream` package supports an optional persistence layer (using SQLite or MySQL) for archiving job history and providing an audit trail.
+ ### Persistence & Audit Trail
 
- ### Configuration
+ Keep a history of all jobs (completed, failed, or enqueued):
 
 ```typescript
 OrbitStream.configure({
-   // ... other config
+   // ... connections
   persistence: {
-     adapter: new SQLitePersistence(DB), // or MySQLPersistence
-     archiveCompleted: true, // Archive jobs when they complete successfully
-     archiveFailed: true, // Archive jobs when they fail permanently
-     archiveEnqueued: true, // (Audit Mode) Archive jobs immediately when pushed
-     bufferSize: 100, // (Optional) Batch size for buffered writes. Recommended: 50-200. Default: 0 (disabled)
-     flushInterval: 1000 // (Optional) Max time (ms) to wait before flushing buffer. Default: 0
+     adapter: new SQLitePersistence(db),
+     archiveCompleted: true,
+     archiveFailed: true,
+     archiveEnqueued: true, // Audit Mode: log immediately when pushed
+     bufferSize: 100 // Batch writes for performance
   }
- })
- ```
-
- ### Audit Mode (`archiveEnqueued: true`)
-
- When Audit Mode is enabled, every job pushed to the queue is immediately written to the SQL archive with a `waiting` status. This happens in parallel with the main queue operation (fire-and-forget).
-
- - **Benefit**: Provides a complete audit trail. Even if the queue driver (e.g., Redis) crashes and loses data, the SQL archive will contain the record of the job being enqueued.
- - **Performance**: Designed to be non-blocking. The SQL write happens asynchronously and does not delay the `push()` operation.
-
- ## Standalone Worker
-
- ```bash
- bun run packages/stream/cli/queue-worker.ts \
-   --connection=database \
-   --queues=default,emails \
-   --workers=4
+ });
 ```
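The `bufferSize` option above trades write latency for throughput by batching archive inserts. A minimal sketch of the buffering idea (the `ArchiveBuffer` class is hypothetical, not the package's persistence code):

```typescript
// Collects archive records in memory and hands them to the adapter in batches,
// so bufferSize: 100 turns up to 100 single-row writes into one batched write.
class ArchiveBuffer<T> {
  private buffer: T[] = [];

  constructor(
    private readonly bufferSize: number,
    private readonly writeBatch: (records: T[]) => void
  ) {}

  push(record: T): void {
    this.buffer.push(record);
    if (this.buffer.length >= this.bufferSize) this.flush();
  }

  flush(): void {
    if (this.buffer.length === 0) return;
    this.writeBatch(this.buffer);
    this.buffer = [];
  }
}
```

A real implementation would also flush on a timer (the old `flushInterval` option) so records are not stranded in a half-full buffer.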
 
- ## Debugging & Monitoring
-
- ### Enable Debug Mode
+ ## 📖 API Reference
 
- Enable verbose logging to see detailed information about job lifecycle events (enqueue, process, complete, fail).
+ ### `QueueManager`
 
- ```typescript
- OrbitStream.configure({
-   debug: true, // Enable debug logging
-   // ...
- })
- ```
+ Accessed via `c.get('queue')` or `core.container.make('queue')`.
 
- ```typescript
- const consumer = new Consumer(manager, {
-   debug: true, // Enable consumer debug logging
-   // ...
- })
- ```
+ - **`push(job)`**: Dispatch a job to the queue.
+ - **`pushMany(jobs)`**: Dispatch multiple jobs efficiently.
+ - **`size(queue?)`**: Get the number of jobs in a queue.
+ - **`clear(queue?)`**: Remove all jobs from a queue.
 
- ### Monitoring
+ ### `Job` Fluent Methods
 
- The Consumer emits events that you can listen to for custom monitoring:
-
- ```typescript
- consumer.on('job:started', ({ job, queue }) => {
-   console.log(`Job ${job.id} started on ${queue}`)
- })
-
- consumer.on('job:failed', ({ job, error }) => {
-   console.error(`Job ${job.id} failed: ${error.message}`)
- })
- ```
-
- ## API Reference
-
- ### Job
-
- ```typescript
- abstract class Job implements Queueable {
-   abstract handle(): Promise<void>
-   async failed(error: Error): Promise<void>
-
-   onQueue(queue: string): this
-   onConnection(connection: string): this
-   delay(seconds: number): this
-
-   /**
-    * Set retry backoff strategy.
-    * @param seconds - Initial delay in seconds
-    * @param multiplier - Multiplier for each subsequent attempt (default: 2)
-    */
-   backoff(seconds: number, multiplier = 2): this
- }
- ```
-
- ### QueueManager
-
- ```typescript
- class QueueManager {
-   async push<T extends Job>(job: T): Promise<T>
-   async pushMany<T extends Job>(jobs: T[]): Promise<void>
-   async pop(queue?: string, connection?: string): Promise<Job | null>
-   async size(queue?: string, connection?: string): Promise<number>
-   async clear(queue?: string, connection?: string): Promise<void>
-   async complete(job: Job): Promise<void>
-   async fail(job: Job, error: Error): Promise<void>
-   registerJobClasses(jobClasses: Array<new (...args: unknown[]) => Job>): void
- }
- ```
+ - **`onQueue(name)`**: Specify target queue.
+ - **`onConnection(name)`**: Use a specific broker connection.
+ - **`delay(seconds)`**: Set initial delay.
+ - **`backoff(seconds, multiplier?)`**: Configure retry strategy.
+ - **`withPriority(priority)`**: Set job priority.
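Each fluent method above returns the job itself, which is what allows chains like `push(job).onQueue(...).delay(...)`. A minimal, self-contained illustration of the pattern (the `FluentJob` class and its `options` field are hypothetical, not the package's real internals):

```typescript
// Fluent setters mutate the job's dispatch options and return `this`,
// so configuration calls can be chained in any order.
class FluentJob {
  options: { queue?: string; delaySeconds?: number; priority?: string } = {};

  onQueue(name: string): this {
    this.options.queue = name;
    return this;
  }

  delay(seconds: number): this {
    this.options.delaySeconds = seconds;
    return this;
  }

  withPriority(priority: string): this {
    this.options.priority = priority;
    return this;
  }
}

const job = new FluentJob().onQueue('emails').delay(60).withPriority('high');
console.log(job.options); // { queue: 'emails', delaySeconds: 60, priority: 'high' }
```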
 
- ## Implemented Drivers
+ ## 🔌 Supported Drivers
 
- - **MemoryDriver** - in-memory (development)
- - **DatabaseDriver** - PostgreSQL/MySQL/SQLite
- - **RedisDriver** - delayed jobs, priority queues, rate limiting, and DLQ support
- - **KafkaDriver** - topics and consumer groups
- - **SQSDriver** - standard/FIFO queues and long polling
- - **RabbitMQDriver** - exchanges, queues, and advanced confirm mode
+ - **Redis** - Feature-rich (DLQ, rate limiting, priorities).
+ - **SQS** - AWS managed queue (Standard/FIFO).
+ - **Kafka** - High-throughput distributed streams.
+ - **RabbitMQ** - Traditional AMQP broker.
+ - **Database** - Simple SQL-based persistence (PostgreSQL, MySQL, SQLite).
+ - **Memory** - Fast, zero-config driver for local development and testing.
 
- ## Best Practices
+ ## 🤝 Contributing
 
- 1. **Idempotency**: Ensure your jobs are idempotent. Jobs may be retried if they fail or if the worker crashes.
- 2. **Granularity**: Keep jobs small and focused. Large jobs can block workers and increase memory usage.
- 3. **Timeouts**: Set appropriate timeouts for your jobs to prevent them from hanging indefinitely.
- 4. **Error Handling**: Use the `failed` method or throw errors to trigger retries. Avoid swallowing errors unless you want to suppress retries.
+ Contributions, issues, and feature requests are welcome!
+ Feel free to check the [issues page](https://github.com/gravito-framework/gravito/issues).
 
- ## License
+ ## 📝 License
 
- MIT
+ MIT © [Carl Lee](https://github.com/gravito-framework/gravito)
package/README.zh-TW.md CHANGED
@@ -1,34 +1,167 @@
 # @gravito/stream
 
- > Queue module for Gravito, with support for multiple drivers and standalone workers.
+ > Lightweight, high-performance queue and background job system for Galaxy Architecture.
 
- ## Installation
+ [![npm version](https://img.shields.io/npm/v/@gravito/stream.svg)](https://www.npmjs.com/package/@gravito/stream)
+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
+ [![TypeScript](https://img.shields.io/badge/TypeScript-5.0+-blue.svg)](https://www.typescriptlang.org/)
+ [![Bun](https://img.shields.io/badge/Bun-1.0+-black.svg)](https://bun.sh/)
+
+ **@gravito/stream** is the standard background processing unit for Gravito applications. Built on the **Orbit** pattern, it provides a unified abstraction over various message brokers and queue systems, letting you scale seamlessly from simple in-memory tasks to distributed event-driven architectures.
+
+ ## ✨ Features
+
+ - 🪐 **Orbit Integration** - Native integration with the PlanetCore micro-kernel and dependency injection.
+ - 🔌 **Multi-Broker Support** - Built-in **Redis**, **SQS**, **Kafka**, **RabbitMQ**, **Database** (SQL), and **Memory** drivers.
+ - 🛠️ **Job-Based API** - Clean, class-based job definitions with built-in serialization and error handling.
+ - 🚀 **High Throughput** - Optimized for **Bun**, supporting batch consumption, concurrent processing, and adaptive polling.
+ - 🛡️ **Reliability** - Built-in exponential backoff retries, dead letter queues (DLQ), and sequential job grouping.
+ - 📝 **Audit & Persistence** - Optional SQL persistence layer for archiving job history and providing a complete audit trail.
+ - 🕒 **Scheduler** - Built-in CRON-based scheduling for recurring tasks.
+ - 🏢 **Worker Modes** - Run embedded workers in development or standalone worker processes in production.
+
+ ## 📦 Installation
 
 ```bash
 bun add @gravito/stream
 ```
 
- ## Quick Start
+ ## 🚀 Quick Start
+
+ ### 1. Define a Job
+
+ Create a class extending `Job` and implement the `handle` logic:
 
 ```typescript
- import { Job } from '@gravito/stream'
+ import { Job } from '@gravito/stream';
 
- export class SendWelcomeEmail extends Job {
-   constructor(private userId: string) {
-     super()
+ export class ProcessOrder extends Job {
+   constructor(private orderId: string) {
+     super();
   }
 
   async handle(): Promise<void> {
-     const user = await User.find(this.userId)
-     await mail.send(new WelcomeEmail(user))
+     // Business logic: process the order
+     console.log(`Processing order: ${this.orderId}`);
+   }
+
+   async failed(error: Error): Promise<void> {
+     // Optional: clean up or notify on permanent failure
+     console.error(`Order ${this.orderId} failed: ${error.message}`);
   }
 }
 ```
 
+ ### 2. Initialize OrbitStream
+
+ Register the orbit in your application bootstrap:
+
+ ```typescript
+ import { PlanetCore } from '@gravito/core';
+ import { OrbitStream } from '@gravito/stream';
+
+ const core = new PlanetCore();
+
+ core.addOrbit(OrbitStream.configure({
+   default: 'redis',
+   connections: {
+     redis: {
+       driver: 'redis',
+       host: 'localhost',
+       port: 6379
+     }
+   },
+   autoStartWorker: process.env.NODE_ENV === 'development',
+   workerOptions: { queues: ['default'] }
+ }));
+
+ await core.bootstrap();
+ ```
+
+ ### 3. Enqueue Jobs
+
+ Access the `queue` service from the request context or container:
+
 ```typescript
- const queue = c.get('queue')
+ core.app.post('/orders', async (c) => {
+   const { id } = await c.req.json();
+   const queue = c.get('queue');
+
+   // Configure with the fluent interface
+   await queue.push(new ProcessOrder(id))
+     .onQueue('high-priority') // Target queue
+     .delay(30) // Delay execution by 30 seconds
+     .backoff(5, 2); // Retry strategy: start with a 5s delay, then double each time
 
- await queue.push(new SendWelcomeEmail(user.id))
-   .onQueue('emails')
-   .delay(60)
+   return c.json({ success: true });
+ });
 ```
+
+ ## 🔧 Advanced Configuration
+
+ ### Multi-Queue & Concurrency
+
+ Configure the consumer to handle queues with different priorities and concurrency levels:
+
+ ```typescript
+ const consumer = new Consumer(manager, {
+   queues: ['critical', 'default', 'low'],
+   concurrency: 10, // Run at most 10 jobs concurrently
+   groupJobsSequential: true, // Jobs with the same groupId run in strict order
+   batchSize: 5, // Fetch 5 jobs per poll
+ });
+ ```
+
+ ### Persistence & Audit Trail
+
+ Keep a history of all jobs (completed, failed, or enqueued):
+
+ ```typescript
+ OrbitStream.configure({
+   // ... connection config
+   persistence: {
+     adapter: new SQLitePersistence(db),
+     archiveCompleted: true,
+     archiveFailed: true,
+     archiveEnqueued: true, // Audit mode: log immediately when pushed
+     bufferSize: 100 // Batch writes for performance
+   }
+ });
+ ```
+
+ ## 📖 API Reference
+
+ ### `QueueManager`
+
+ Accessed via `c.get('queue')` or `core.container.make('queue')`.
+
+ - **`push(job)`**: Dispatch a job to the queue.
+ - **`pushMany(jobs)`**: Dispatch multiple jobs efficiently.
+ - **`size(queue?)`**: Get the number of jobs in a queue.
+ - **`clear(queue?)`**: Remove all jobs from a queue.
+
+ ### `Job` Fluent Methods
+
+ - **`onQueue(name)`**: Specify the target queue.
+ - **`onConnection(name)`**: Use a specific connection.
+ - **`delay(seconds)`**: Set an initial delay.
+ - **`backoff(seconds, multiplier?)`**: Configure the retry strategy.
+ - **`withPriority(priority)`**: Set the job priority.
+
+ ## 🔌 Supported Drivers
+
+ - **Redis** - Feature-rich (DLQ, rate limiting, priorities).
+ - **SQS** - AWS managed queue (Standard/FIFO).
+ - **Kafka** - High-throughput distributed streams.
+ - **RabbitMQ** - Traditional AMQP broker.
+ - **Database** - Simple SQL-based persistence (PostgreSQL, MySQL, SQLite).
+ - **Memory** - Fast, zero-config for local development and testing.
+
+ ## 🤝 Contributing
+
+ Contributions, issues, and feature requests are welcome!
+ Feel free to check the [issues page](https://github.com/gravito-framework/gravito/issues).
+
+ ## 📝 License
+
+ MIT © [Carl Lee](https://github.com/gravito-framework/gravito)