@kaapi/kafka-messaging 0.0.39 → 0.0.41

package/CHANGELOG.md ADDED
@@ -0,0 +1,48 @@
+ # Changelog
+
+ All notable changes to this project will be documented in this file.
+
+ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
+ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
+
+ ## [Unreleased]
+
+ ## [0.0.41] - 2026-01-17
+
+ ### Added
+ - `publishBatch()` method for efficient multi-message publishing
+ - `onError` callback in `subscribe()` for custom error handling
+ - `groupId` and `groupIdPrefix` options in `subscribe()` for custom consumer group IDs
+ - `logOffsets` option in `subscribe()` (opt-in, avoids admin overhead)
+ - `fetchTopicOffsets()` public method
+ - Lazy initialization for shared admin client
+ - Race condition protection in `getProducer()` and `getSharedAdmin()`
+ - Comprehensive JSDoc documentation
+ - Mock test suite (no Kafka broker required)
+ - Integration test suite
+ - Support for Buffer, string, and null message values in `publish()` and `publishBatch()`
+
+ ### Changed
+ - Default group ID separator changed from `---` to `.` (e.g., `my-service.topic`)
+ - Offset logging now disabled by default for performance
+ - Test scripts reorganized: `test` (mock), `test:integration`, `test:all`
+
+ ### Fixed
+ - Potential race condition when `getProducer()` is called concurrently
+ - Added try/catch in `publish()` to properly handle and re-throw errors
+ - Graceful handling of non-JSON message payloads in `subscribe()`
+
+ ## [0.0.1] - 2025-XX-XX
+
+ ### Added
+ - Initial release
+ - `KafkaMessaging` class implementing `IMessaging` interface
+ - `publish()` and `subscribe()` methods
+ - `createProducer()`, `createConsumer()`, `createAdmin()` methods
+ - `createTopic()` and `waitForTopicReady()` utilities
+ - `shutdown()` for graceful disconnection
+ - `safeDisconnect()` with timeout protection
+ - Integration with Kaapi's `ILogger`
+
+ [Unreleased]: https://github.com/demingongo/kaapi/compare/v0.0.41...HEAD
+ [0.0.41]: https://github.com/demingongo/kaapi/commits/v0.0.41
package/README.md CHANGED
@@ -2,23 +2,27 @@
 
  `@kaapi/kafka-messaging` is a lightweight wrapper around [`kafkajs`](https://github.com/tulios/kafkajs) that integrates with the [`Kaapi`](https://github.com/demingongo/kaapi) framework to provide a clean and consistent **message publishing and consuming interface**.
 
- It abstracts Kafkas producer/consumer logic and provides a simple interface to:
+ It abstracts Kafka's producer/consumer logic and provides a simple interface to:
 
- * ✅ Publish messages
- * ✅ Subscribe to topics
+ * ✅ Publish messages (single or batch)
+ * ✅ Subscribe to topics with flexible consumer group options
  * ✅ Support structured logging via Kaapi's logger
  * ✅ Handle offsets and message metadata
- * ✅ Reuse Kafka producers/consumers
+ * ✅ Reuse Kafka producers/consumers with race-condition protection
+ * ✅ Custom error handling for failed message handlers
 
  ---
 
  ## ✨ Features
 
- * Simple `publish(topic, message)` API
- * Flexible `subscribe(topic, handler, config)` with offset tracking
+ * Simple `publish(topic, message)` and `publishBatch(topic, messages)` APIs
+ * Flexible `subscribe(topic, handler, config)` with custom groupId, error handling, and offset tracking
+ * Singleton producer with race-condition safe initialization
+ * Lazy admin initialization to minimize connections
  * KafkaJS-compatible configuration
- * Structured logging via Kaapis `ILogger`
+ * Structured logging via Kaapi's `ILogger`
  * Typed message handling with TypeScript
+ * Graceful shutdown with detailed summary
 
  ---
 
@@ -80,7 +84,9 @@ await messaging.waitForTopicReady('my-topic', timeoutMs, checkIntervalMs);
 
  ---
 
- ### Publishing a Message
+ ### Publishing Messages
+
+ #### Single Message
 
  `publish(topic, message)` sends a message to a given Kafka topic.
 
@@ -91,8 +97,30 @@ await messaging.publish('my-topic', {
  });
  ```
 
- * `topic`: The Kafka topic name
- * `message`: Any serializable object
+ Messages can be:
+ - **Objects** → automatically JSON-serialized
+ - **Strings** → sent as-is
+ - **Buffers** → sent as-is (for binary data)
+ - **null** → sent as null (tombstone messages)
+
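+ For example, each supported value type can be passed straight to `publish()` (a sketch reusing the `messaging` instance from the examples above):
+
+ ```ts
+ // Object: JSON-serialized automatically
+ await messaging.publish('my-topic', { event: 'user.created', userId: 123 });
+
+ // String: sent as-is
+ await messaging.publish('my-topic', 'plain-text payload');
+
+ // Buffer: sent as-is, useful for binary data
+ await messaging.publish('my-topic', Buffer.from([0x01, 0x02, 0x03]));
+
+ // null: tombstone message (e.g., for log-compacted topics)
+ await messaging.publish('my-topic', null);
+ ```
+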
+ #### Batch Publishing
+
+ `publishBatch(topic, messages)` sends multiple messages in a single request for better throughput.
+
+ ```ts
+ await messaging.publishBatch('user-events', [
+   { value: { event: 'user.created', userId: '1' } },
+   { value: { event: 'user.created', userId: '2' } },
+   { value: { event: 'user.updated', userId: '3' }, key: 'user-3' },
+   { value: { event: 'user.deleted', userId: '4' }, headers: { priority: 'high' } },
+ ]);
+ ```
+
+ Each message in the batch can include:
+ - `value` — the message payload (required)
+ - `key` — optional partition key
+ - `partition` — optional specific partition (see the sketch below)
+ - `headers` — optional custom headers
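+
+ When partition placement matters, `partition` can pin related messages together. A small sketch (the partition index here is an assumption about your topic's layout, not part of the package docs):
+
+ ```ts
+ // Keep all events for one user on the same partition
+ await messaging.publishBatch('user-events', [
+   { value: { event: 'user.created', userId: '5' }, partition: 0 },
+   { value: { event: 'user.updated', userId: '5' }, partition: 0 },
+ ]);
+ ```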
 
  ---
 
@@ -103,28 +131,115 @@ await messaging.publish('my-topic', {
  ```ts
  await messaging.subscribe('my-topic', async (message, context) => {
    console.log('Received:', message);
-   console.log('From:', context.name, context.address);
    console.log('Offset:', context.offset);
+   console.log('Timestamp:', context.timestamp);
  }, {
    fromBeginning: true
  });
  ```
 
- * `topic`: The Kafka topic name
- * `handler`: `(message, context) => void | Promise<void>`
- * `config?`: `KafkaMessagingSubscribeConfig` (extends `ConsumerConfig`)
- * `groupId?`: Kafka consumer group ID
- * `fromBeginning?`: boolean - Start consuming from beginning of topic
+ #### Subscribe Configuration
+
+ | Option          | Type       | Description                                                                                        |
+ | --------------- | ---------- | -------------------------------------------------------------------------------------------------- |
+ | `groupId`       | `string`   | Custom consumer group ID. Overrides the auto-generated ID.                                          |
+ | `groupIdPrefix` | `string`   | Prefix for the auto-generated group ID (default: service `name`).                                   |
+ | `fromBeginning` | `boolean`  | Start consuming from the beginning of the topic.                                                    |
+ | `logOffsets`    | `boolean`  | Log partition offsets on subscribe (adds admin overhead). Default: `false`. See the example below.  |
+ | `onReady`       | `function` | Callback invoked when the consumer is ready.                                                        |
+ | `onError`       | `function` | Callback invoked when a message handler throws an error.                                            |
+
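+ For instance, offset logging can be switched on per subscription while debugging (a sketch using the options above; `handler` is any message handler, as in the later examples):
+
+ ```ts
+ await messaging.subscribe('user-events', handler, {
+   fromBeginning: true,
+   logOffsets: true // one extra admin call at subscribe time
+ });
+ ```
+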
+ #### Consumer Group ID Resolution
+
+ The consumer group ID is resolved in this order:
+ 1. `groupId` if provided
+ 2. `{groupIdPrefix}.{topic}` if a prefix is provided
+ 3. `{name}.{topic}` using the service name from config
+ 4. `group.{topic}` as a fallback
+
+ ```ts
+ // Using a custom group ID
+ await messaging.subscribe('user-events', handler, {
+   groupId: 'my-custom-consumer-group'
+ });
+
+ // Using a custom prefix → "analytics.user-events"
+ await messaging.subscribe('user-events', handler, {
+   groupIdPrefix: 'analytics'
+ });
+ ```
+
+ #### Error Handling
+
+ Use the `onError` callback to handle errors from message handlers without crashing the consumer:
+
+ ```ts
+ await messaging.subscribe('user-events', async (message) => {
+   await processMessage(message); // might throw
+ }, {
+   onError: async (error, message, context) => {
+     console.error('Failed to process message:', error);
+     console.error('Message:', message);
+     console.error('Offset:', context.offset);
+
+     // Log to an external service, send to a DLQ, etc.
+     await alertService.notify(error);
+   }
+ });
+ ```
+
+ The `onError` callback receives:
+ - `error` — the error thrown by the handler
+ - `message` — the parsed message that failed
+ - `context` — the message context (offset, headers, timestamp, etc.)
+
+ #### Consumer Ready Callback
+
+ ```ts
+ await messaging.subscribe('my-topic', handler, {
+   onReady: (consumer) => {
+     console.log('Consumer is ready!');
+     // Access the raw KafkaJS consumer if needed
+   }
+ });
+ ```
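+
+ Since `onReady` exposes the raw KafkaJS consumer, one possible use is attaching instrumentation listeners. A sketch, assuming the standard KafkaJS consumer events API is available on that object:
+
+ ```ts
+ await messaging.subscribe('my-topic', handler, {
+   onReady: (consumer) => {
+     // KafkaJS instrumentation event (assumed available on the raw consumer)
+     consumer.on(consumer.events.CRASH, (event) => {
+       console.error('Consumer crashed:', event.payload.error);
+     });
+   }
+ });
+ ```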
+
+ ---
+
+ ### Fetching Topic Offsets
+
+ ```ts
+ const offsets = await messaging.fetchTopicOffsets('my-topic');
+
+ offsets?.forEach((partition) => {
+   console.log(`Partition ${partition.partition}: offset=${partition.offset}, high=${partition.high}, low=${partition.low}`);
+ });
+ ```
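+
+ Building on `offsets` from the snippet above, one possible use is a rough count of messages currently retained, per the broker's low/high watermarks (a sketch; KafkaJS returns offsets as strings, hence the `Number()` conversion):
+
+ ```ts
+ const retained = offsets?.reduce(
+   (total, p) => total + (Number(p.high) - Number(p.low)),
+   0
+ );
+ console.log(`Messages retained across partitions: ${retained}`);
+ ```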
 
  ---
 
  ### Graceful Shutdown
 
  ```ts
- await messaging.shutdown();
+ const result = await messaging.shutdown();
+
+ console.log(`Disconnected ${result.successProducers} producers`);
+ console.log(`Disconnected ${result.successConsumers} consumers`);
+ console.log(`Disconnected ${result.successAdmins} admins`);
+ console.log(`Errors: ${result.errorCount}`);
  ```
+
  This will disconnect all tracked producers, consumers, and admin clients safely.
 
+ ```ts
+ // Example: graceful shutdown on SIGTERM
+ process.on('SIGTERM', async () => {
+   const result = await messaging.shutdown();
+   console.log(`Shutdown complete: ${result.errorCount} errors`);
+   process.exit(0);
+ });
+ ```
+
  ---
 
  ## 🧱 Example Usage
@@ -132,7 +247,7 @@ This will disconnect all tracked producers, consumers, and admin clients safely.
  ```ts
  // messaging.ts
 
- import { Kaapi, createLogger } from '@kaapi/kaapi'
+ import { Kaapi } from '@kaapi/kaapi'
  import { KafkaMessaging } from '@kaapi/kafka-messaging';
 
  const messaging = new KafkaMessaging({
@@ -174,11 +289,22 @@ async function runExample(): Promise<void> {
  // Publish a message
  await messaging.publish('my-topic', { event: 'user.created', userId: 123 });
 
- // Subscribe to messages
+ // Subscribe with error handling
  await messaging.subscribe('my-topic', async (message, context) => {
    console.log('Received:', message);
    console.log('Offset:', context.offset);
+ }, {
+   fromBeginning: true,
+   onError: (error, message, context) => {
+     console.error('Handler failed:', error);
+   }
  });
+
+ // Batch publish
+ await messaging.publishBatch('my-topic', [
+   { value: { event: 'user.created', userId: 1 } },
+   { value: { event: 'user.created', userId: 2 } },
+ ]);
  }
 
  runExample().catch((err) => {
@@ -194,36 +320,50 @@ The `KafkaMessaging` class provides a safe and resilient interface for interacti
 
  ### Public Methods
 
- | Method | Purpose |
- |-----------------------------------|-------------------------------------------------------------------------|
- | `createProducer()` | Creates and connects a Kafka producer. Automatically tracked and cleaned up. |
- | `createConsumer(groupId, config?)`| Creates and connects a Kafka consumer. Automatically tracked and cleaned up. |
- | `createAdmin(config?)` | Creates and connects a Kafka admin client. Tracked for shutdown. |
- | `publish(topic, message)` | Sends a message to the specified topic using the managed producer. |
- | `subscribe(topic, handler, config?)` | Subscribes to a topic and processes messages with the given handler. |
- | `shutdown()` | Gracefully disconnects all tracked producers, consumers, and admins. |
- | `safeDisconnect(client, timeoutMs?)` | Disconnects a Kafka client with timeout protection. |
- | `createTopic(topicConfig, options?)` | Creates a Kafka topic with optional validation and leader wait. |
- | `waitForTopicReady(topic, timeoutMs?, checkIntervalMs?)` | Ensures the topic is ready. |
-
- ### Internal Methods (Not Public)
-
- | Method | Status | Reason for Restriction |
- |----------------|------------|-------------------------------------------------|
- | `getKafka()` | Protected | Used internally to instantiate Kafka clients. Avoid direct access to prevent unmanaged connections. |
+ | Method                                              | Purpose                                                               |
+ | --------------------------------------------------- | --------------------------------------------------------------------- |
+ | `createProducer(config?)`                           | Creates and connects a Kafka producer. Automatically tracked.          |
+ | `createConsumer(groupId, config?)`                  | Creates and connects a Kafka consumer. Automatically tracked.          |
+ | `createAdmin(config?)`                              | Creates and connects a Kafka admin client. Tracked for shutdown.       |
+ | `getProducer()`                                     | Gets or creates the singleton producer (race-condition safe).          |
+ | `publish(topic, message)`                           | Sends a message to the specified topic.                                |
+ | `publishBatch(topic, messages)`                     | Sends multiple messages in a single batch.                             |
+ | `subscribe(topic, handler, config?)`                | Subscribes to a topic and processes messages with the given handler.   |
+ | `fetchTopicOffsets(topic)`                          | Fetches partition offsets for a topic.                                 |
+ | `createTopic(topicConfig, options?)`                | Creates a Kafka topic with optional validation and leader wait.        |
+ | `waitForTopicReady(topic, timeoutMs?, intervalMs?)` | Waits for a topic to be ready (has partitions).                        |
+ | `shutdown()`                                        | Gracefully disconnects all tracked clients. Returns a summary.         |
+ | `safeDisconnect(client, timeoutMs?)`                | Disconnects a Kafka client with timeout protection.                    |
+ | `disconnectProducer()`                              | Disconnects the singleton producer.                                    |
+
+ ### Read-only Properties
+
+ | Property          | Type                    | Description                  |
+ | ----------------- | ----------------------- | ---------------------------- |
+ | `activeProducers` | `ReadonlySet<Producer>` | Currently tracked producers. |
+ | `activeConsumers` | `ReadonlySet<Consumer>` | Currently tracked consumers. |
+
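+ These can be handy for health checks or metrics, for example (a sketch; `ReadonlySet` exposes `size`):
+
+ ```ts
+ // e.g., report client counts from a health endpoint
+ console.log(`Active producers: ${messaging.activeProducers.size}`);
+ console.log(`Active consumers: ${messaging.activeConsumers.size}`);
+ ```
+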
+ ### Internal/Protected Methods
+
+ | Method             | Status    | Reason                                                  |
+ | ------------------ | --------- | -------------------------------------------------------- |
+ | `getKafka()`       | Protected | Used internally to instantiate Kafka clients.             |
+ | `getSharedAdmin()` | Protected | Lazy-initialized shared admin for internal operations.    |
 
  ### Best Practices
 
  - Always use `createProducer`, `createConsumer`, or `createAdmin` to ensure proper tracking.
+ - Use `getProducer()` for the singleton producer pattern (recommended for most use cases; see the sketch below).
  - Avoid accessing the raw Kafka instance directly.
  - Call `shutdown()` during application teardown to release resources.
  - Use `createTopic()` and `waitForTopicReady()` in tests or dynamic topic scenarios.
+ - Use the `onError` callback in `subscribe()` to handle message processing failures gracefully.
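+
+ For instance, the singleton producer can be used directly when you need raw KafkaJS calls. A sketch, assuming `getProducer()` resolves to a connected KafkaJS `Producer`:
+
+ ```ts
+ const producer = await messaging.getProducer();
+
+ // Raw KafkaJS send, e.g., to set per-send options not exposed by publish()
+ await producer.send({
+   topic: 'my-topic',
+   messages: [{ key: 'user-1', value: JSON.stringify({ event: 'user.created' }) }],
+ });
+ ```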
 
  ---
 
  ## 🛠️ Requirements
 
- * Node.js 16+
+ * Node.js 18+
  * A running Kafka instance
  * Optional: integrate into a [Kaapi](https://github.com/demingongo/kaapi) service lifecycle
 
@@ -239,7 +379,24 @@ The `KafkaMessaging` class provides a safe and resilient interface for interacti
 
  ## 🧪 Testing
 
- You can mock Kafka in tests or point to a local dev broker. Integration testing can be done using Docker or services like Redpanda.
+ ```bash
+ # Run mock tests (no Kafka required)
+ pnpm test
+
+ # Run integration tests (requires a Kafka broker)
+ pnpm test:integration
+
+ # Run all tests
+ pnpm test:all
+ ```
+
+ You can run Kafka locally using Docker:
+
+ ```bash
+ docker run -d --name kafka \
+   -p 9092:9092 \
+   apache/kafka:latest
+ ```
 
  ---