@gravito/stream 2.0.1 → 2.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (83)
  1. package/README.md +127 -285
  2. package/README.zh-TW.md +146 -13
  3. package/dist/BatchConsumer.d.ts +81 -0
  4. package/dist/Consumer.d.ts +215 -0
  5. package/dist/DashboardProvider.d.ts +20 -0
  6. package/dist/Job.d.ts +183 -0
  7. package/dist/OrbitStream.d.ts +151 -0
  8. package/dist/QueueManager.d.ts +319 -0
  9. package/dist/Queueable.d.ts +91 -0
  10. package/dist/Scheduler.d.ts +214 -0
  11. package/dist/StreamEventBackend.d.ts +114 -0
  12. package/dist/SystemEventJob.d.ts +33 -0
  13. package/dist/Worker.d.ts +139 -0
  14. package/dist/benchmarks/PerformanceReporter.d.ts +99 -0
  15. package/dist/consumer/ConcurrencyGate.d.ts +55 -0
  16. package/dist/consumer/ConsumerStrategy.d.ts +41 -0
  17. package/dist/consumer/GroupSequencer.d.ts +57 -0
  18. package/dist/consumer/HeartbeatManager.d.ts +65 -0
  19. package/dist/consumer/JobExecutor.d.ts +61 -0
  20. package/dist/consumer/JobSourceGenerator.d.ts +31 -0
  21. package/dist/consumer/PollingStrategy.d.ts +42 -0
  22. package/dist/consumer/ReactiveStrategy.d.ts +41 -0
  23. package/dist/consumer/StreamingConsumer.d.ts +88 -0
  24. package/dist/consumer/index.d.ts +13 -0
  25. package/dist/consumer/types.d.ts +102 -0
  26. package/dist/drivers/BinaryJobFrame.d.ts +78 -0
  27. package/dist/drivers/BullMQDriver.d.ts +186 -0
  28. package/dist/drivers/DatabaseDriver.d.ts +131 -0
  29. package/dist/drivers/GrpcDriver.d.ts +16 -0
  30. package/dist/drivers/KafkaDriver.d.ts +148 -0
  31. package/dist/drivers/MemoryDriver.d.ts +108 -0
  32. package/dist/drivers/QueueDriver.d.ts +250 -0
  33. package/dist/drivers/RabbitMQDriver.d.ts +102 -0
  34. package/dist/drivers/RedisDriver.d.ts +294 -0
  35. package/dist/drivers/SQSDriver.d.ts +111 -0
  36. package/dist/drivers/kafka/BackpressureController.d.ts +60 -0
  37. package/dist/drivers/kafka/BatchProcessor.d.ts +50 -0
  38. package/dist/drivers/kafka/ConsumerLifecycleManager.d.ts +80 -0
  39. package/dist/drivers/kafka/ErrorCategorizer.d.ts +39 -0
  40. package/dist/drivers/kafka/ErrorRecoveryManager.d.ts +100 -0
  41. package/dist/drivers/kafka/HeartbeatManager.d.ts +57 -0
  42. package/dist/drivers/kafka/KafkaDriver.d.ts +138 -0
  43. package/dist/drivers/kafka/KafkaMetrics.d.ts +88 -0
  44. package/dist/drivers/kafka/KafkaNotifier.d.ts +54 -0
  45. package/dist/drivers/kafka/MessageBuffer.d.ts +71 -0
  46. package/dist/drivers/kafka/OffsetTracker.d.ts +63 -0
  47. package/dist/drivers/kafka/PerformanceMonitor.d.ts +88 -0
  48. package/dist/drivers/kafka/RateLimiter.d.ts +52 -0
  49. package/dist/drivers/kafka/RebalanceHandler.d.ts +104 -0
  50. package/dist/drivers/kafka/RingBuffer.d.ts +63 -0
  51. package/dist/drivers/kafka/index.d.ts +22 -0
  52. package/dist/drivers/kafka/types.d.ts +553 -0
  53. package/dist/drivers/prepareJobForTransport.d.ts +10 -0
  54. package/dist/index.cjs +6274 -3777
  55. package/dist/index.cjs.map +71 -0
  56. package/dist/index.d.ts +60 -2233
  57. package/dist/index.js +6955 -4446
  58. package/dist/index.js.map +71 -0
  59. package/dist/locks/DistributedLock.d.ts +175 -0
  60. package/dist/persistence/BufferedPersistence.d.ts +130 -0
  61. package/dist/persistence/BunBufferedPersistence.d.ts +173 -0
  62. package/dist/persistence/MySQLPersistence.d.ts +134 -0
  63. package/dist/persistence/SQLitePersistence.d.ts +133 -0
  64. package/dist/serializers/BinarySerializer.d.ts +42 -0
  65. package/dist/serializers/CachedSerializer.d.ts +38 -0
  66. package/dist/serializers/CborNativeSerializer.d.ts +56 -0
  67. package/dist/serializers/ClassNameSerializer.d.ts +58 -0
  68. package/dist/serializers/JobSerializer.d.ts +33 -0
  69. package/dist/serializers/JsonSerializer.d.ts +28 -0
  70. package/dist/serializers/JsonlSerializer.d.ts +90 -0
  71. package/dist/serializers/MessagePackSerializer.d.ts +29 -0
  72. package/dist/types.d.ts +653 -0
  73. package/dist/workers/BinaryWorkerProtocol.d.ts +77 -0
  74. package/dist/workers/BunWorker.d.ts +179 -0
  75. package/dist/workers/SandboxedWorker.d.ts +132 -0
  76. package/dist/workers/WorkerFactory.d.ts +128 -0
  77. package/dist/workers/WorkerPool.d.ts +186 -0
  78. package/dist/workers/bun-job-executor.d.ts +14 -0
  79. package/dist/workers/index.d.ts +13 -0
  80. package/dist/workers/job-executor.d.ts +9 -0
  81. package/package.json +13 -6
  82. package/proto/queue.proto +101 -0
  83. package/dist/index.d.cts +0 -2242
package/README.md CHANGED
@@ -1,351 +1,193 @@
  # @gravito/stream

- Lightweight, high-performance queueing for Gravito. Supports multiple storage drivers, embedded and standalone workers, and flexible job serialization.
+ > Lightweight, high-performance queue and background job system for Galaxy Architecture.

- **Status**: v0.1.0 - core features complete with Memory, Database, Redis, Kafka, and SQS drivers.
+ [![npm version](https://img.shields.io/npm/v/@gravito/stream.svg)](https://www.npmjs.com/package/@gravito/stream)
+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
+ [![TypeScript](https://img.shields.io/badge/TypeScript-5.0+-blue.svg)](https://www.typescriptlang.org/)
+ [![Bun](https://img.shields.io/badge/Bun-1.0+-black.svg)](https://bun.sh/)

- ## Features
+ **@gravito/stream** is the standard background processing unit for Gravito applications. Built on the **Orbit** pattern, it provides a unified abstraction for various message brokers and queue systems, allowing you to scale from simple in-memory tasks to distributed event-driven architectures with zero friction.

- - **Zero runtime overhead**: Thin wrappers that delegate to drivers
- - **Multi-driver support**: Memory, Database, Redis, Kafka, SQS, RabbitMQ
- - **Modular**: Install only the driver you need (core < 50KB)
- - **Embedded or standalone workers**: Run in-process during development or standalone in production
- - **AI-friendly**: Strong typing, clear JSDoc, and predictable APIs
- - **Custom Retry Strategies**: Built-in exponential backoff with per-job overrides
- - **Dead Letter Queue (DLQ)**: Automatic handling of permanently failed jobs, with retry and clear operations
- - **Priority Queues**: Assign priority (critical, high, low) to any job
- - **Rate Limiting**: Control job consumption rate per queue (requires Redis)
+ ## Features

- ## Installation
+ - 🪐 **Galaxy-Ready Orbit** - Native integration with PlanetCore micro-kernel and dependency injection.
+ - 🔌 **Multi-Broker Support** - Built-in drivers for **Redis**, **SQS**, **Kafka**, **RabbitMQ**, **Database** (SQL), and **Memory**.
+ - 🛠️ **Job-Based API** - Clean, class-based job definitions with built-in serialization and failure handling.
+ - 🚀 **High Throughput** - Optimized for **Bun**, supporting batch consumption, concurrent processing, and adaptive polling.
+ - 📡 **Cross-Satellite Event Streaming** - Enable loosely coupled communication between isolated Satellites.
+ - 🛡️ **Reliability** - Built-in exponential backoff retries, Dead Letter Queues (DLQ), and sequential job grouping.
+ - 📝 **Audit & Persistence** - Optional SQL-based persistence layer for archiving job history and providing complete audit trails.
+ - 🕒 **Scheduler** - Built-in CRON-based task scheduling for recurring jobs.
+ - 🏢 **Worker Modes** - Run embedded workers during development or standalone worker processes in production.
+
+ ## 🌌 Role in Galaxy Architecture
+
+ In the **Gravito Galaxy Architecture**, Stream acts as the **Async Engine (Background Processing)**.
+
+ - **Satellite Decoupling**: Instead of Satellite A calling Satellite B synchronously and waiting for a response, Satellite A fires an event/job into Stream, and Satellite B consumes it asynchronously.
+ - **Resilience Backbone**: Works hand-in-hand with `@gravito/resilience` to ensure that failed web requests or heavy tasks are retried in the background without affecting the user experience.
+ - **Distributed State**: Facilitates event-driven architecture (EDA), allowing multiple Satellites to react to the same domain event reliably.
+
+ ```mermaid
+ graph TD
+   SatA[Satellite: Order] -- "Push Job" --> Stream{Stream Orbit}
+   Stream -- "Queue: emails" --> Worker1[Worker Pool]
+   Worker1 --> SatB[Satellite: Notification]
+   Stream -- "Queue: payments" --> Worker2[Worker Pool]
+   Worker2 --> SatC[Satellite: Finance]
+ ```
+
+ ## 📦 Installation

  ```bash
  bun add @gravito/stream
  ```

- ## Quick Start
+ ## 🚀 Quick Start
+
+ ### 1. Define a Job

- ### 1. Define a job
+ Create a class extending `Job` and implement the `handle` logic:

  ```typescript
- import { Job } from '@gravito/stream'
+ import { Job } from '@gravito/stream';

- export class SendWelcomeEmail extends Job {
-   constructor(private userId: string) {
-     super()
+ export class ProcessOrder extends Job {
+   constructor(private orderId: string) {
+     super();
    }

    async handle(): Promise<void> {
-     const user = await User.find(this.userId)
-     await mail.send(new WelcomeEmail(user))
+     // Business logic: process the order
+     console.log(`Processing order: ${this.orderId}`);
    }
- }
- ```

- ### 3. Rate Limit & Priority (Optional)
-
- ```typescript
- const queue = c.get('queue')
-
- // High priority job
- await queue.push(new SendWelcomeEmail(user.id))
-   .onQueue('emails')
-   .withPriority('high') // 'critical' | 'high' | 'default' | 'low'
-
- // Configure rate limits in Consumer
- const consumer = new Consumer(manager, {
-   rateLimits: {
-     emails: { limit: 10, window: 60 } // Max 10 jobs per minute
+   async failed(error: Error): Promise<void> {
+     // Optional: cleanup or notify on permanent failure
+     console.error(`Order ${this.orderId} failed: ${error.message}`);
    }
- })
- ```
-
- ### 4. Concurrency & Groups
-
- ```typescript
- const consumer = new Consumer(manager, {
-   queues: ['default'],
-   concurrency: 5, // Process up to 5 jobs concurrently
-   groupJobsSequential: true // Ensure jobs with same groupId run sequentially (default: true)
- })
+ }
  ```

- Jobs with the same `groupId` will always be processed in order, even with high concurrency. Jobs from different groups (or no group) will run in parallel.
+ ### 2. Initialize OrbitStream

- ### 5. Polling & Batching Optimization
+ Register the orbit in your application bootstrap:

  ```typescript
- const consumer = new Consumer(manager, {
-   queues: ['default'],
-   // Polling Strategy
-   pollInterval: 1000, // Initial poll interval
-   minPollInterval: 100, // Adaptive: reduce to 100ms when jobs found
-   maxPollInterval: 5000, // Adaptive: backoff up to 5s when idle
-   backoffMultiplier: 1.5, // Exponential backoff factor
-
-   // Batch Consumption
-   batchSize: 10, // Fetch 10 jobs at once (requires concurrency > 1 for parallel processing)
-   concurrency: 10,
-
-   // Blocking Pop (Redis/SQS)
-   useBlocking: true, // Use BLPOP when batchSize=1 (reduces CPU usage)
-   blockingTimeout: 5 // Block for 5 seconds
- })
- ```
+ import { PlanetCore } from '@gravito/core';
+ import { OrbitStream } from '@gravito/stream';

- ### 6. Monitoring & Stats
+ const core = new PlanetCore();

- ```typescript
- const stats = consumer.getStats()
- console.log(`Processed: ${stats.processed}, Failed: ${stats.failed}`)
+ core.addOrbit(OrbitStream.configure({
+   default: 'redis',
+   connections: {
+     redis: {
+       driver: 'redis',
+       host: 'localhost',
+       port: 6379
+     }
+   },
+   autoStartWorker: process.env.NODE_ENV === 'development',
+   workerOptions: { queues: ['default'] }
+ }));

- // Metrics are also included in the heartbeat if monitor is enabled
+ await core.bootstrap();
  ```

- ### 2. Enqueue a job
-
- ```typescript
- const queue = c.get('queue')
-
- await queue.push(new SendWelcomeEmail(user.id))
-   .onQueue('emails')
-   .delay(60)
- ```
+ ### 3. Enqueue Jobs

- ### 3. Configure OrbitStream (Memory driver)
+ Access the `queue` service from the request context or container:

  ```typescript
- import { OrbitStream } from '@gravito/stream'
-
- const core = await PlanetCore.boot({
-   orbits: [
-     OrbitStream.configure({
-       default: 'memory',
-       connections: {
-         memory: { driver: 'memory' }
-       },
-       autoStartWorker: true,
-       workerOptions: {
-         queues: ['default', 'emails']
-       }
-     })
-   ]
- })
- ```
+ core.app.post('/orders', async (c) => {
+   const { id } = await c.req.json();
+   const queue = c.get('queue');

- ### 5. Polling & Batching Optimization
+   // Push with fluent configuration
+   await queue.push(new ProcessOrder(id))
+     .onQueue('high-priority')
+     .delay(30)
+     .backoff(5, 2); // Start with 5s delay, then double for each retry

- ```typescript
- const consumer = new Consumer(manager, {
-   queues: ['default'],
-   // Polling Strategy
-   pollInterval: 1000, // Initial poll interval
-   minPollInterval: 100, // Adaptive: reduce to 100ms when jobs found
-   maxPollInterval: 5000, // Adaptive: backoff up to 5s when idle
-   backoffMultiplier: 1.5, // Exponential backoff factor
-
-   // Batch Consumption
-   batchSize: 10, // Fetch 10 jobs at once (requires concurrency > 1 for parallel processing)
-   concurrency: 10,
-
-   // Blocking Pop (Redis/SQS)
-   useBlocking: true, // Use BLPOP when batchSize=1 (reduces CPU usage)
-   blockingTimeout: 5 // Block for 5 seconds
- })
+   return c.json({ success: true });
+ });
  ```

- ## Database Driver Example
+ ## 🔧 Advanced Configuration

- ```typescript
- import { OrbitStream } from '@gravito/stream'
+ ### Multi-Queue & Concurrency

- // Create a database service adapter that implements DatabaseService interface
- const dbService = {
-   execute: async (sql, bindings) => yourDbClient.query(sql, bindings),
-   transaction: async (callback) => yourDbClient.transaction(callback),
- }
-
- const core = await PlanetCore.boot({
-   orbits: [
-     OrbitStream.configure({
-       default: 'database',
-       connections: {
-         database: {
-           driver: 'database',
-           table: 'jobs',
-           dbService: dbService // Pass your database service
-         }
-       }
-     })
-   ]
- })
- ```
-
- ## RabbitMQ Driver Example
+ Configure the consumer to handle multiple queues with different priorities and concurrency levels:

  ```typescript
- import { OrbitStream } from '@gravito/stream'
- import amqp from 'amqplib'
-
- const connection = await amqp.connect('amqp://localhost')
-
- const core = await PlanetCore.boot({
-   orbits: [
-     OrbitStream.configure({
-       default: 'rabbitmq',
-       connections: {
-         rabbitmq: {
-           driver: 'rabbitmq',
-           client: connection,
-           exchange: 'gravito.events',
-           exchangeType: 'fanout'
-         }
-       }
-     })
-   ]
- })
- ```
-
- ## Database Schema
-
- ```sql
- CREATE TABLE jobs (
-   id BIGSERIAL PRIMARY KEY,
-   queue VARCHAR(255) NOT NULL,
-   payload TEXT NOT NULL,
-   attempts INT DEFAULT 0,
-   reserved_at TIMESTAMP,
-   available_at TIMESTAMP NOT NULL,
-   created_at TIMESTAMP NOT NULL DEFAULT NOW()
- );
-
- -- Optimized index for batch popping with SKIP LOCKED
- CREATE INDEX idx_jobs_queue_available_reserved ON jobs(queue, available_at, reserved_at);
- CREATE INDEX idx_jobs_reserved ON jobs(reserved_at);
+ const consumer = new Consumer(manager, {
+   queues: ['critical', 'default', 'low'],
+   concurrency: 10, // Max 10 concurrent jobs
+   groupJobsSequential: true, // Process jobs with same groupId in strict order
+   batchSize: 5, // Fetch 5 jobs per poll
+ });
  ```

- ## Persistence and Audit Mode
-
- The `@gravito/stream` package supports an optional persistence layer (using SQLite or MySQL) for archiving job history and providing an audit trail.
+ ### Persistence & Audit Trail

- ### Configuration
+ Keep a history of all jobs (completed, failed, or enqueued):

  ```typescript
  OrbitStream.configure({
-   // ... other config
+   // ... connections
    persistence: {
-     adapter: new SQLitePersistence(DB), // or MySQLPersistence
-     archiveCompleted: true, // Archive jobs when they complete successfully
-     archiveFailed: true, // Archive jobs when they fail permanently
-     archiveEnqueued: true, // (Audit Mode) Archive jobs immediately when pushed
-     bufferSize: 100, // (Optional) Batch size for buffered writes. Recommended: 50-200. Default: 0 (disabled)
-     flushInterval: 1000 // (Optional) Max time (ms) to wait before flushing buffer. Default: 0
+     adapter: new SQLitePersistence(db),
+     archiveCompleted: true,
+     archiveFailed: true,
+     archiveEnqueued: true, // Audit Mode: Log immediately when pushed
+     bufferSize: 100 // Batch writes for performance
    }
- })
- ```
-
- ### Audit Mode (`archiveEnqueued: true`)
-
- When Audit Mode is enabled, every job pushed to the queue is immediately written to the SQL archive with a `waiting` status. This happens in parallel with the main queue operation (Fire-and-Forget).
-
- - **Benefit**: Provides a complete audit trail. Even if the queue driver (e.g., Redis) crashes and loses data, the SQL archive will contain the record of the job being enqueued.
- - **Performance**: Designed to be non-blocking. The SQL write happens asynchronously and does not delay the `push()` operation.
-
- ## Standalone Worker
-
- ```bash
- bun run packages/stream/cli/queue-worker.ts \
-   --connection=database \
-   --queues=default,emails \
-   --workers=4
- ```
-
- ## Debugging & Monitoring
-
- ### Enable Debug Mode
-
- Enable verbose logging to see detailed information about job lifecycle events (enqueue, process, complete, fail).
-
- ```typescript
- OrbitStream.configure({
-   debug: true, // Enable debug logging
-   // ...
- })
- ```
-
- ```typescript
- const consumer = new Consumer(manager, {
-   debug: true, // Enable consumer debug logging
-   // ...
- })
+ });
  ```

- ### Monitoring
+ ## 📖 API Reference

- The Consumer emits events that you can listen to for custom monitoring:
+ ### `QueueManager`

- ```typescript
- consumer.on('job:started', ({ job, queue }) => {
-   console.log(`Job ${job.id} started on ${queue}`)
- })
+ Accessed via `c.get('queue')` or `core.container.make('queue')`.

- consumer.on('job:failed', ({ job, error }) => {
-   console.error(`Job ${job.id} failed: ${error.message}`)
- })
- ```
+ - **`push(job)`**: Dispatch a job to the queue.
+ - **`pushMany(jobs)`**: Dispatch multiple jobs efficiently.
+ - **`size(queue?)`**: Get the number of jobs in a queue.
+ - **`clear(queue?)`**: Remove all jobs from a queue.

- ## API Reference
+ ### `Job` Fluent Methods

- ### Job
+ - **`onQueue(name)`**: Specify target queue.
+ - **`onConnection(name)`**: Use a specific broker connection.
+ - **`delay(seconds)`**: Set initial delay.
+ - **`backoff(seconds, multiplier?)`**: Configure retry strategy.
+ - **`withPriority(priority)`**: Set job priority.

- ```typescript
- abstract class Job implements Queueable {
-   abstract handle(): Promise<void>
-   async failed(error: Error): Promise<void>
-
-   onQueue(queue: string): this
-   onConnection(connection: string): this
-   delay(seconds: number): this
-
-   /**
-    * Set retry backoff strategy.
-    * @param seconds - Initial delay in seconds
-    * @param multiplier - Multiplier for each subsequent attempt (default: 2)
-    */
-   backoff(seconds: number, multiplier = 2): this
- }
- ```
+ ## 📚 Documentation

- ### QueueManager
+ Detailed guides and references for the Galaxy Architecture:

- ```typescript
- class QueueManager {
-   async push<T extends Job>(job: T): Promise<T>
-   async pushMany<T extends Job>(jobs: T[]): Promise<void>
-   async pop(queue?: string, connection?: string): Promise<Job | null>
-   async size(queue?: string, connection?: string): Promise<number>
-   async clear(queue?: string, connection?: string): Promise<void>
-   async complete(job: Job): Promise<void>
-   async fail(job: Job, error: Error): Promise<void>
-   registerJobClasses(jobClasses: Array<new (...args: unknown[]) => Job>): void
- }
- ```
+ - [🏗️ **Architecture Overview**](./README.md) — Multi-broker queue and job system.
+ - [📡 **Event-Driven Architecture**](./doc/EVENT_DRIVEN_ARCHITECTURE.md) — **NEW**: Cross-Satellite communication and pub/sub.
+ - [⚙️ **Worker Configuration**](#-worker-modes) — Embedded vs standalone background workers.

- ## Implemented Drivers
+ ## 🔌 Supported Drivers

- - **MemoryDriver** - in-memory (development)
- - **DatabaseDriver** - PostgreSQL/MySQL/SQLite
- - **RedisDriver** - delayed jobs, priority queues, rate limiting, and DLQ support
- - **KafkaDriver** - topics and consumer groups
- - **SQSDriver** - standard/FIFO queues and long polling
- - **RabbitMQDriver** - exchanges, queues, and advanced confirm mode
+ - **Redis** - Feature-rich (DLQ, Rate limiting, Priorities).
+ - **SQS** - AWS managed queue (Standard/FIFO).
+ - **Kafka** - High-throughput distributed streams.
+ - **RabbitMQ** - Traditional AMQP broker.
+ - **Database** - Simple SQL-based persistence (PostgreSQL, MySQL, SQLite).
+ - **Memory** - Fast, zero-config for local development/testing.

- ## Best Practices
+ ## 🤝 Contributing

- 1. **Idempotency**: Ensure your jobs are idempotent. Jobs may be retried if they fail or if the worker crashes.
- 2. **Granularity**: Keep jobs small and focused. Large jobs can block workers and increase memory usage.
- 3. **Timeouts**: Set appropriate timeouts for your jobs to prevent them from hanging indefinitely.
- 4. **Error Handling**: Use the `failed` method or throw errors to trigger retries. Avoid swallowing errors unless you want to suppress retries.
+ Contributions, issues and feature requests are welcome!
+ Feel free to check the [issues page](https://github.com/gravito-framework/gravito/issues).

- ## License
+ ## 📝 License

- MIT
+ MIT © [Carl Lee](https://github.com/gravito-framework/gravito)
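The updated README's Quick Start chains `.backoff(5, 2)` with the comment "Start with 5s delay, then double for each retry". The retry schedule this implies can be sketched in a few lines; note that `backoffDelay` is a hypothetical helper written here for illustration only, not something exported by `@gravito/stream`:

```typescript
// Exponential backoff as described by `backoff(seconds, multiplier)`:
// attempt 1 waits `seconds`, and every later attempt multiplies the
// previous delay by `multiplier` (the package default multiplier is 2).
// `backoffDelay` is an illustrative helper, not part of the package API.
function backoffDelay(seconds: number, multiplier: number, attempt: number): number {
  return seconds * Math.pow(multiplier, attempt - 1);
}

// .backoff(5, 2) → delays for attempts 1..4
const delays = [1, 2, 3, 4].map((n) => backoffDelay(5, 2, n));
console.log(delays); // [ 5, 10, 20, 40 ]
```

This matches the removed `Job` API docs in the diff above, where `backoff(seconds: number, multiplier = 2)` is documented as "Initial delay in seconds" and "Multiplier for each subsequent attempt".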
package/README.zh-TW.md CHANGED
@@ -1,34 +1,167 @@
  # @gravito/stream

- > Gravito 的佇列模組,支援多種驅動與獨立 Worker。
+ > Galaxy 架構的高效能輕量化隊列與背景任務系統。

- ## 安裝
+ [![npm version](https://img.shields.io/npm/v/@gravito/stream.svg)](https://www.npmjs.com/package/@gravito/stream)
+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
+ [![TypeScript](https://img.shields.io/badge/TypeScript-5.0+-blue.svg)](https://www.typescriptlang.org/)
+ [![Bun](https://img.shields.io/badge/Bun-1.0+-black.svg)](https://bun.sh/)
+
+ **@gravito/stream** 是 Gravito 應用程式的標準背景處理單元。基於 **Orbit** 模式構建,它為各種訊息代理(Broker)和隊列系統提供了統一的抽象層,讓您能夠從簡單的記憶體任務無縫擴展到分佈式事件驅動架構。
+
+ ## ✨ 特性
+
+ - 🪐 **Orbit 整合** - 與 PlanetCore 微核心及依賴注入系統原生整合。
+ - 🔌 **多 Broker 支援** - 內建支援 **Redis**、**SQS**、**Kafka**、**RabbitMQ**、**資料庫** (SQL) 與 **記憶體** (Memory) 驅動。
+ - 🛠️ **基於 Job 的 API** - 簡潔的類別式(Class-based)任務定義,內建序列化與錯誤處理。
+ - 🚀 **高吞吐量** - 針對 **Bun** 進行優化,支援批量消費(Batching)、並發處理與自適應輪詢(Polling)。
+ - 🛡️ **可靠性** - 內建指數退避重試(Exponential Backoff)、死信隊列(DLQ)與順序任務分組。
+ - 📝 **審計與持久化** - 可選的 SQL 持久化層,用於存檔任務歷史並提供完整的審計追蹤(Audit Trail)。
+ - 🕒 **排程器** - 內建基於 CRON 的排程功能,支援週期性任務。
+ - 🏢 **Worker 模式** - 開發環境可運行嵌入式 Worker,生產環境可運行獨立 Worker 進程。
+
+ ## 📦 安裝

  ```bash
  bun add @gravito/stream
  ```

- ## 快速開始
+ ## 🚀 快速上手
+
+ ### 1. 定義任務 (Job)
+
+ 建立一個繼承自 `Job` 的類別並實作 `handle` 邏輯:

  ```typescript
- import { Job } from '@gravito/stream'
+ import { Job } from '@gravito/stream';

- export class SendWelcomeEmail extends Job {
-   constructor(private userId: string) {
-     super()
+ export class ProcessOrder extends Job {
+   constructor(private orderId: string) {
+     super();
    }

    async handle(): Promise<void> {
-     const user = await User.find(this.userId)
-     await mail.send(new WelcomeEmail(user))
+     // 業務邏輯:處理訂單
+     console.log(`正在處理訂單: ${this.orderId}`);
+   }
+
+   async failed(error: Error): Promise<void> {
+     // 選配:在永久失敗時進行清理或通知
+     console.error(`訂單 ${this.orderId} 失敗: ${error.message}`);
    }
  }
  ```

+ ### 2. 初始化 OrbitStream
+
+ 在應用程式啟動時註冊 Orbit:
+
+ ```typescript
+ import { PlanetCore } from '@gravito/core';
+ import { OrbitStream } from '@gravito/stream';
+
+ const core = new PlanetCore();
+
+ core.addOrbit(OrbitStream.configure({
+   default: 'redis',
+   connections: {
+     redis: {
+       driver: 'redis',
+       host: 'localhost',
+       port: 6379
+     }
+   },
+   autoStartWorker: process.env.NODE_ENV === 'development',
+   workerOptions: { queues: ['default'] }
+ }));
+
+ await core.bootstrap();
+ ```
+
+ ### 3. 將任務推入隊列
+
+ 從請求上下文或容器中獲取 `queue` 服務:
+
  ```typescript
- const queue = c.get('queue')
+ core.app.post('/orders', async (c) => {
+   const { id } = await c.req.json();
+   const queue = c.get('queue');
+
+   // 使用流暢介面進行配置
+   await queue.push(new ProcessOrder(id))
+     .onQueue('high-priority') // 指定隊列
+     .delay(30)                // 延遲 30 秒執行
+     .backoff(5, 2);           // 重試策略:初始延遲 5s,之後每次翻倍

- await queue.push(new SendWelcomeEmail(user.id))
-   .onQueue('emails')
-   .delay(60)
+   return c.json({ success: true });
+ });
  ```
+
+ ## 🔧 進階配置
+
+ ### 多隊列與並發處理
+
+ 配置消費者以處理不同優先級的隊列與並發等級:
+
+ ```typescript
+ const consumer = new Consumer(manager, {
+   queues: ['critical', 'default', 'low'],
+   concurrency: 10,           // 最大同時執行 10 個任務
+   groupJobsSequential: true, // 相同 groupId 的任務將嚴格依序執行
+   batchSize: 5,              // 每次輪詢獲取 5 個任務
+ });
+ ```
+
+ ### 持久化與審計追蹤
+
+ 保留所有任務(成功、失敗或排隊中)的歷史記錄:
+
+ ```typescript
+ OrbitStream.configure({
+   // ... 連線配置
+   persistence: {
+     adapter: new SQLitePersistence(db),
+     archiveCompleted: true,
+     archiveFailed: true,
+     archiveEnqueued: true, // 審計模式:推入隊列時立即記錄
+     bufferSize: 100        // 批量寫入以提升效能
+   }
+ });
+ ```
+
+ ## 📖 API 參考
+
+ ### `QueueManager`
+
+ 透過 `c.get('queue')` 或 `core.container.make('queue')` 獲取。
+
+ - **`push(job)`**: 將任務派發至隊列。
+ - **`pushMany(jobs)`**: 高效派發多個任務。
+ - **`size(queue?)`**: 獲取隊列中的任務數量。
+ - **`clear(queue?)`**: 清空隊列中的所有任務。
+
+ ### `Job` 流暢方法
+
+ - **`onQueue(name)`**: 指定目標隊列。
+ - **`onConnection(name)`**: 使用特定連線。
+ - **`delay(seconds)`**: 設置初始延遲。
+ - **`backoff(seconds, multiplier?)`**: 配置重試策略。
+ - **`withPriority(priority)`**: 設置任務優先級。
+
+ ## 🔌 支援的驅動 (Drivers)
+
+ - **Redis** - 功能豐富(支援 DLQ、限流、優先級)。
+ - **SQS** - AWS 託管隊列(Standard/FIFO)。
+ - **Kafka** - 高吞吐量分佈式串流。
+ - **RabbitMQ** - 傳統 AMQP 代理。
+ - **Database** - 簡單的 SQL 持久化方案(PostgreSQL, MySQL, SQLite)。
+ - **Memory** - 快速、開發/測試環境零配置。
+
+ ## 🤝 貢獻
+
+ 歡迎提交貢獻、問題與功能請求!
+ 請隨時查看 [Issues 頁面](https://github.com/gravito-framework/gravito/issues)。
+
+ ## 📝 授權
+
+ MIT © [Carl Lee](https://github.com/gravito-framework/gravito)
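The English README removed in this release documented an adaptive polling strategy for the `Consumer` (`pollInterval`, `minPollInterval`, `maxPollInterval`, `backoffMultiplier`): drop to the minimum interval when a poll finds jobs, and back off exponentially toward the maximum when idle. That behavior can be sketched as follows; `nextPollInterval` and `PollOptions` are illustrative names for this sketch, not part of the `@gravito/stream` API:

```typescript
// Adaptive polling as described by the removed Consumer options:
// a successful poll resets the interval to `minPollInterval`; an idle
// poll multiplies the current interval by `backoffMultiplier`, capped
// at `maxPollInterval`. Names here are hypothetical, for illustration.
interface PollOptions {
  minPollInterval: number;   // e.g. 100 (ms)
  maxPollInterval: number;   // e.g. 5000 (ms)
  backoffMultiplier: number; // e.g. 1.5
}

function nextPollInterval(current: number, foundJobs: boolean, opts: PollOptions): number {
  if (foundJobs) return opts.minPollInterval;
  return Math.min(current * opts.backoffMultiplier, opts.maxPollInterval);
}

const opts: PollOptions = { minPollInterval: 100, maxPollInterval: 5000, backoffMultiplier: 1.5 };
let interval = 1000;                                // pollInterval: initial value
interval = nextPollInterval(interval, false, opts); // idle poll → 1500
interval = nextPollInterval(interval, true, opts);  // jobs found → back to 100
```

This keeps CPU usage low while the queue is empty, yet reacts quickly once jobs start arriving, which is also why the old README recommended `useBlocking` (BLPOP) for `batchSize: 1` on Redis.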