@drarzter/kafka-client 0.7.4 → 0.9.2
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +282 -5
- package/dist/{chunk-BVWRZTMD.mjs → chunk-Z2DOJQRI.mjs} +2948 -2134
- package/dist/chunk-Z2DOJQRI.mjs.map +1 -0
- package/dist/core.d.mts +100 -279
- package/dist/core.d.ts +100 -279
- package/dist/core.js +2947 -2133
- package/dist/core.js.map +1 -1
- package/dist/core.mjs +1 -1
- package/dist/index.d.mts +2 -2
- package/dist/index.d.ts +2 -2
- package/dist/index.js +2945 -2131
- package/dist/index.js.map +1 -1
- package/dist/index.mjs +1 -1
- package/dist/otel.d.mts +1 -1
- package/dist/otel.d.ts +1 -1
- package/dist/testing.d.mts +215 -2
- package/dist/testing.d.ts +215 -2
- package/dist/testing.js +298 -2
- package/dist/testing.js.map +1 -1
- package/dist/testing.mjs +293 -2
- package/dist/testing.mjs.map +1 -1
- package/dist/{types-Db7qSbZP.d.mts → types-4XNxkici.d.mts} +1024 -14
- package/dist/{types-Db7qSbZP.d.ts → types-4XNxkici.d.ts} +1024 -14
- package/package.json +1 -1
- package/dist/chunk-BVWRZTMD.mjs.map +0 -1
package/README.md
CHANGED
@@ -45,6 +45,14 @@ Type-safe Kafka client for Node.js. Framework-agnostic core with a first-class N
 - [Seek to timestamp](#seek-to-timestamp)
 - [Message TTL](#message-ttl)
 - [DLQ replay](#dlq-replay)
+- [Read snapshot](#read-snapshot)
+- [Offset checkpointing](#offset-checkpointing)
+- [checkpointOffsets](#checkpointoffsets)
+- [restoreFromCheckpoint](#restorefromcheckpoint)
+- [Windowed batch consumer](#windowed-batch-consumer)
+- [Header-based routing](#header-based-routing)
+- [Lag-based producer throttling](#lag-based-producer-throttling)
+- [Transactional consumer](#transactional-consumer)
 - [Admin API](#admin-api)
 - [Graceful shutdown](#graceful-shutdown)
 - [Consumer handles](#consumer-handles)
@@ -953,6 +961,8 @@ Options for `sendMessage()` — the third argument:
 | `batch` | `false` | (decorator only) Use `startBatchConsumer` instead of `startConsumer` |
 | `partitionAssigner` | `'cooperative-sticky'` | Partition assignment strategy: `'cooperative-sticky'` (minimal movement on rebalance, best for horizontal scaling), `'roundrobin'` (even distribution), `'range'` (contiguous partition ranges) |
 | `onTtlExpired` | — | Per-consumer override of the client-level `onTtlExpired` callback; takes precedence when set. Receives `TtlExpiredContext` — same shape as the client-level hook |
+| `onMessageLost` | — | Per-consumer override of the client-level `onMessageLost` callback; takes precedence when set. Use for consumer-specific dead-message alerting or structured logging |
+| `onRetry` | — | Per-consumer retry callback; fires **in addition to** the built-in metrics hook (does not replace it). Same signature as `KafkaInstrumentation.onRetry` |
 | `subscribeRetry.retries` | `5` | Max attempts for `consumer.subscribe()` when topic doesn't exist yet |
 | `subscribeRetry.backoffMs` | `5000` | Delay between subscribe retry attempts (ms) |
 
@@ -1414,6 +1424,7 @@ Options:
 | `targetTopic` | `x-dlq-original-topic` header | Override the destination topic |
 | `dryRun` | `false` | Count messages without sending |
 | `filter` | — | `(headers) => boolean` — skip messages where the callback returns `false` |
+| `fromBeginning` | `true` | `true` = full replay every call; `false` = incremental (only new messages since the last call) |
 
 ```typescript
 // Dry run — see how many messages would be replayed
@@ -1426,9 +1437,259 @@ const result = await kafka.replayDlq('orders', { targetTopic: 'orders.v2' });
 const filtered = await kafka.replayDlq('orders', {
   filter: (headers) => headers['x-correlation-id'] === 'corr-123',
 });
+
+// Incremental replay — only messages added since the last call
+const incremental = await kafka.replayDlq('orders', { fromBeginning: false });
+```
+
+`replayDlq` reads the DLQ topic up to the high-watermark at the time of the call — messages published after replay starts are not included. By default (`fromBeginning: true`) an ephemeral group ID is used on each call so all messages are always replayed; the group is deleted from the broker after use. With `fromBeginning: false` a stable group ID persists committed offsets between calls, enabling incremental replay. DLQ metadata headers (`x-dlq-original-topic`, `x-dlq-error-message`, `x-dlq-error-stack`, `x-dlq-failed-at`, `x-dlq-attempt-count`) are stripped from the replayed messages; all other headers (e.g. `x-correlation-id`) are preserved.
+
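The header-stripping behavior described in the hunk above can be sketched as a pure function over a headers map. This is an illustrative sketch only; `stripDlqHeaders` and `DLQ_META_HEADERS` are hypothetical names, not part of the package's API:

```typescript
// Illustrative sketch, not the package's internal implementation.
// Drops the x-dlq-* metadata headers that replayDlq removes; keeps everything else.
const DLQ_META_HEADERS = new Set([
  'x-dlq-original-topic',
  'x-dlq-error-message',
  'x-dlq-error-stack',
  'x-dlq-failed-at',
  'x-dlq-attempt-count',
]);

function stripDlqHeaders(headers: Record<string, string>): Record<string, string> {
  return Object.fromEntries(
    Object.entries(headers).filter(([key]) => !DLQ_META_HEADERS.has(key)),
  );
}
```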
+## Read snapshot
+
+Read any topic from the beginning to its current high-watermark and return a `Map<key, EventEnvelope<T>>` with the **latest value per key**. Useful for bootstrapping in-memory state at service startup without an external cache:
+
+```typescript
+// Build a key → latest-value index for a compacted topic
+const orders = await kafka.readSnapshot('orders.state');
+orders.get('order-123'); // EventEnvelope with the latest payload for that key
+```
+
+Tombstone records (null-value messages) remove the key from the map, consistent with log-compaction semantics:
+
+```typescript
+const snapshot = await kafka.readSnapshot('orders.state', {
+  onTombstone: (key) => console.log(`Key deleted: ${key}`),
+});
+```
+
+Optional schema validation skips invalid messages with a warning instead of throwing:
+
+```typescript
+import { z } from 'zod';
+
+const OrderSchema = z.object({ orderId: z.string(), amount: z.number() });
+
+const snapshot = await kafka.readSnapshot('orders.state', {
+  schema: OrderSchema,
+});
+```
+
+`readSnapshot` uses a short-lived temporary consumer that is **not** registered in the client's consumer map — it disconnects as soon as all partitions reach their high-watermark. The call resolves with the complete snapshot; it does not stream.
+
+| Option | Description |
+| ------ | ----------- |
+| `schema` | Zod / Valibot / ArkType (any `.parse()` shape) — invalid messages are skipped with a warning |
+| `onTombstone` | Called for each tombstone key before it is removed from the map |
+
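The latest-value-per-key and tombstone semantics described in this section amount to a left fold over the log. A minimal sketch (the `SnapshotRecord` type and `foldSnapshot` helper are illustrative, not the library's internals):

```typescript
// Conceptual sketch of readSnapshot's reduce step, not the package source.
type SnapshotRecord<T> = { key: string; value: T | null }; // null value = tombstone

function foldSnapshot<T>(
  log: SnapshotRecord<T>[],
  onTombstone?: (key: string) => void,
): Map<string, T> {
  const state = new Map<string, T>();
  for (const rec of log) {
    if (rec.value === null) {
      onTombstone?.(rec.key); // notified before removal, as documented
      state.delete(rec.key);  // log-compaction semantics
    } else {
      state.set(rec.key, rec.value); // later records win
    }
  }
  return state;
}
```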
+## Offset checkpointing
+
+Save and restore consumer group offsets via a dedicated Kafka topic. Useful for point-in-time recovery, blue/green deployments, and disaster recovery without resetting to `earliest`/`latest`.
+
+### checkpointOffsets
+
+Snapshot the current committed offsets of a consumer group into a Kafka topic:
+
+```typescript
+// Checkpoint the default group
+const result = await kafka.checkpointOffsets(undefined, 'checkpoints');
+// {
+//   groupId: 'orders-group',
+//   topics: ['orders', 'payments'],
+//   partitionCount: 4,
+//   savedAt: 1710000000000,
+// }
+
+// Checkpoint a specific group
+await kafka.checkpointOffsets('payments-group', 'checkpoints');
 ```
 
-
+Each call appends a new record to the checkpoint topic keyed by `groupId`, with `x-checkpoint-timestamp` and `x-checkpoint-group-id` headers. The checkpoint topic acts as an append-only audit log — use a **non-compacted** topic to retain history.
+
+Requires `connectProducer()` to have been called before checkpointing.
+
+### restoreFromCheckpoint
+
+Restore a consumer group's committed offsets from the nearest checkpoint:
+
+```typescript
+// Restore to the latest checkpoint
+const result = await kafka.restoreFromCheckpoint(undefined, 'checkpoints');
+// {
+//   groupId: 'orders-group',
+//   offsets: [{ topic: 'orders', partition: 0, offset: '1500' }, ...],
+//   restoredAt: 1710000000000,
+//   checkpointAge: 3600000, // ms since the checkpoint was saved
+// }
+
+// Restore to the nearest checkpoint before a specific timestamp
+const ts = new Date('2024-06-01T12:00:00Z').getTime();
+await kafka.restoreFromCheckpoint(undefined, 'checkpoints', { timestamp: ts });
+```
+
+Checkpoint selection rules:
+
+1. If `timestamp` is omitted — the **latest** checkpoint is selected.
+2. If `timestamp` is given — the newest checkpoint whose `savedAt ≤ timestamp` is selected.
+3. If all checkpoints are newer than `timestamp` — falls back to the **oldest** checkpoint with a warning.
+4. Throws if no checkpoint exists for the group.
+
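The four selection rules above can be sketched as a pure function. This is illustrative only; `selectCheckpoint` is not an exported API, and the real implementation reads checkpoint records from the checkpoint topic:

```typescript
// Sketch of the four selection rules, not the package's code.
type Checkpoint = { savedAt: number };

function selectCheckpoint(checkpoints: Checkpoint[], timestamp?: number): Checkpoint {
  if (checkpoints.length === 0) {
    throw new Error('No checkpoint found for group'); // rule 4
  }
  const sorted = [...checkpoints].sort((a, b) => a.savedAt - b.savedAt);
  if (timestamp === undefined) {
    return sorted[sorted.length - 1]; // rule 1: latest
  }
  const candidates = sorted.filter((c) => c.savedAt <= timestamp);
  if (candidates.length === 0) {
    console.warn('All checkpoints are newer than the target — using the oldest'); // rule 3
    return sorted[0];
  }
  return candidates[candidates.length - 1]; // rule 2: newest with savedAt <= timestamp
}
```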
+**Important:** the consumer group must be stopped before calling `restoreFromCheckpoint`. An error is thrown if any consumer in the group is currently running.
+
+`restoreFromCheckpoint` uses a short-lived temporary consumer to read all checkpoint records up to the current high-watermark, then calls `admin.setOffsets` for every topic-partition in the selected checkpoint.
+
+| Option | Description |
+| ------ | ----------- |
+| `timestamp` | Target Unix ms. Omit to restore the latest checkpoint |
+
+## Windowed batch consumer
+
+Accumulate messages into a buffer and flush a handler when either a **size** or **time** trigger fires — whichever comes first. Gives explicit control over both batch size and processing latency, unlike `startBatchConsumer` which delivers broker-sized batches of unpredictable size:
+
+```typescript
+const handle = await kafka.startWindowConsumer(
+  'orders',
+  async (envelopes, meta) => {
+    console.log(`Flushing ${envelopes.length} orders (trigger: ${meta.trigger})`);
+    await db.bulkInsert(envelopes.map((e) => e.payload));
+  },
+  {
+    maxMessages: 100, // flush when 100 messages accumulate
+    maxMs: 5_000, // or after 5 s, whichever fires first
+  },
+);
+```
+
+`WindowMeta` is passed to the handler on every flush:
+
+| Field | Description |
+| ----- | ----------- |
+| `trigger` | `"size"` — buffer reached `maxMessages`; `"time"` — `maxMs` elapsed |
+| `windowStart` | Unix ms of the first message in the flushed window |
+| `windowEnd` | Unix ms when the flush was initiated |
+
+On `handle.stop()` any buffered messages are flushed before the consumer disconnects — no messages are lost on clean shutdown.
+
+`retryTopics: true` is rejected at startup with a clear error — the retry topic chain is incompatible with windowed accumulation.
+
+| Option | Default | Description |
+| ------ | ------- | ----------- |
+| `maxMessages` | required | Flush when the buffer reaches this many messages |
+| `maxMs` | required | Flush after this many ms since the first buffered message |
+| All `ConsumerOptions` fields | — | Standard consumer options apply (`retry`, `dlq`, `deduplication`, etc.) |
+
+## Header-based routing
+
+Dispatch messages to different handlers based on the value of a Kafka header — no `if/switch` boilerplate in a catch-all handler. Useful when one topic carries multiple event types distinguished by a header like `x-event-type`:
+
+```typescript
+await kafka.startRoutedConsumer(['events'], {
+  header: 'x-event-type',
+  routes: {
+    'order.created': async (e) => handleOrderCreated(e.payload),
+    'order.cancelled': async (e) => handleOrderCancelled(e.payload),
+    'order.shipped': async (e) => handleOrderShipped(e.payload),
+  },
+  fallback: async (e) => logger.warn('Unknown event type', e.headers),
+});
+```
+
+Messages are dispatched to the handler whose key matches `envelope.headers[header]`. If the header is absent or its value has no matching route:
+
+- The `fallback` handler is called if provided.
+- The message is silently skipped if `fallback` is omitted.
+
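The dispatch rule above can be sketched as a pure function (illustrative; `dispatch` is not an exported API):

```typescript
// Sketch of the routing rule, not the package's code.
type Handler = (payload: unknown) => void;

function dispatch(
  headers: Record<string, string | undefined>,
  headerName: string,
  routes: Record<string, Handler>,
  payload: unknown,
  fallback?: Handler,
): 'routed' | 'fallback' | 'skipped' {
  const value = headers[headerName];
  const route = value !== undefined ? routes[value] : undefined;
  if (route) {
    route(payload);
    return 'routed';
  }
  if (fallback) {
    fallback(payload);
    return 'fallback';
  }
  return 'skipped'; // silently skipped when no fallback is configured
}
```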
+All standard `ConsumerOptions` apply uniformly across every route — retry, DLQ, deduplication, circuit breaker, interceptors, etc.:
+
+```typescript
+await kafka.startRoutedConsumer(
+  ['events'],
+  {
+    header: 'x-event-type',
+    routes: {
+      'payment.processed': async (e) => processPayment(e.payload),
+      'payment.failed': async (e) => handleFailure(e.payload),
+    },
+  },
+  {
+    retry: { maxRetries: 3, backoffMs: 500 },
+    dlq: true,
+    deduplication: { strategy: 'drop' },
+  },
+);
+```
+
+The returned `ConsumerHandle` works the same as `startConsumer` — `handle.stop()` stops the consumer cleanly.
+
+## Lag-based producer throttling
+
+Delay `sendMessage`, `sendBatch`, and `sendTombstone` automatically when a consumer group falls behind. Provides backpressure without an external store — the lag is measured via the built-in admin API:
+
+```typescript
+const kafka = new KafkaClient('my-service', 'orders-group', brokers, {
+  lagThrottle: {
+    maxLag: 10_000, // delay sends when lag exceeds 10 000 messages
+    pollIntervalMs: 5_000, // check lag every 5 s (default)
+    maxWaitMs: 30_000, // give up waiting after 30 s and send anyway (default)
+  },
+});
+
+await kafka.connectProducer(); // starts the background polling loop
+```
+
+While the observed lag exceeds `maxLag`, every send waits in a `100 ms` spin-loop until the lag drops or `maxWaitMs` is reached. When `maxWaitMs` is exceeded a warning is logged and the send proceeds — this is best-effort throttling, not hard backpressure.
+
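The spin-loop described above can be modeled as a per-tick decision plus a loop over lag observations. A conceptual sketch under the documented thresholds, not the package's code (`throttleDecision` and `simulateWait` are hypothetical helpers):

```typescript
// Per-tick decision: send now, or spin again after ~100 ms.
function throttleDecision(
  lag: number,
  maxLag: number,
  waitedMs: number,
  maxWaitMs: number,
): 'send' | 'wait' {
  if (lag <= maxLag) return 'send';         // lag dropped below the threshold
  if (waitedMs >= maxWaitMs) return 'send'; // best-effort: give up and send anyway
  return 'wait';
}

// Simulate the 100 ms spin-loop without real timers; returns total ms waited.
function simulateWait(
  lagSamples: number[],
  maxLag: number,
  maxWaitMs: number,
  stepMs = 100,
): number {
  let waited = 0;
  for (const lag of lagSamples) {
    if (throttleDecision(lag, maxLag, waited, maxWaitMs) === 'send') return waited;
    waited += stepMs;
  }
  return waited;
}
```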
+| Option | Default | Description |
+| ------ | ------- | ----------- |
+| `maxLag` | required | Total lag threshold (sum across all partitions) |
+| `groupId` | client default group | Consumer group whose lag is monitored |
+| `pollIntervalMs` | `5000` | How often to call `getConsumerLag()` in the background |
+| `maxWaitMs` | `30000` | Maximum time (ms) a single send waits before proceeding anyway |
+
+The polling timer is started by `connectProducer()` and cleared by `disconnect()` or `disconnectProducer()`. Poll errors are silently ignored — a failing admin call never blocks sends.
+
+## Transactional consumer
+
+Consume messages with **exactly-once semantics** for read-process-write pipelines. Each message is processed inside a Kafka transaction: outgoing sends and the source offset commit succeed or fail atomically — no partial writes, no duplicates on restart:
+
+```typescript
+await kafka.startTransactionalConsumer(
+  ['orders'],
+  async (envelope, tx) => {
+    // Both sends and the offset commit are part of one atomic transaction
+    await tx.send('invoices', { orderId: envelope.payload.orderId, amount: envelope.payload.amount });
+    await tx.send('notifications', { userId: envelope.payload.userId, message: 'Order confirmed' });
+    // tx commits automatically when this function returns
+  },
+);
+```
+
+The handler receives a `TransactionalHandlerContext` with two methods:
+
+| Method | Description |
+| ------ | ----------- |
+| `tx.send(topic, message, options?)` | Stage a single message inside the transaction |
+| `tx.sendBatch(topic, messages, options?)` | Stage multiple messages inside the transaction |
+
+**On handler success** — staged sends + source offset commit are committed atomically via `tx.sendOffsets()` + `tx.commit()`. Downstream consumers only see the messages after the commit.
+
+**On handler failure** — `tx.abort()` is called automatically. No staged sends become visible. The source message offset is not committed, so Kafka redelivers the message on the next poll.
+
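The commit/abort flow above can be modeled with an in-memory stand-in for the transaction context. Purely illustrative: `SketchTransaction` is not the package's `TransactionalHandlerContext`, it only mimics the visibility rules (staged sends become visible on commit, vanish on abort, and the source offset is committed only on success):

```typescript
// In-memory model of the success/failure paths, not the package's code.
type Staged = { topic: string; message: unknown };

class SketchTransaction {
  private staged: Staged[] = [];
  committed: Staged[] = []; // "visible to downstream consumers" only after commit
  offsetCommitted = false;

  send(topic: string, message: unknown): void {
    this.staged.push({ topic, message }); // buffered, not yet visible
  }

  commit(): void {
    this.committed = this.staged; // sends + source offset become visible atomically
    this.offsetCommitted = true;
  }

  abort(): void {
    this.staged = []; // nothing becomes visible; offset not committed → redelivery
  }
}

async function runHandler(
  handler: (tx: SketchTransaction) => Promise<void>,
): Promise<SketchTransaction> {
  const tx = new SketchTransaction();
  try {
    await handler(tx);
    tx.commit(); // success path
  } catch {
    tx.abort(); // failure path
  }
  return tx;
}
```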
+```typescript
+await kafka.startTransactionalConsumer(
+  ['payments'],
+  async (envelope, tx) => {
+    const result = await processPayment(envelope.payload);
+    // Only route to the audit topic if payment succeeded
+    await tx.send('payments.audit', { paymentId: result.id, status: 'ok' });
+  },
+  {
+    groupId: 'payments-eos',
+    deduplication: { strategy: 'drop' }, // standard ConsumerOptions apply
+  },
+);
+```
+
+`retryTopics: true` is rejected at startup — EOS redelivery on failure is already guaranteed by the transaction. `autoCommit` is always `false` (managed internally).
 
 ## Admin API
 
@@ -1506,19 +1767,25 @@ await kafka.disconnect(10_000); // wait up to 10 s, then force disconnect
 
 ## Consumer handles
 
-`startConsumer()` and `startBatchConsumer()` return a `ConsumerHandle` instead of `void`. Use it to stop a specific consumer
+`startConsumer()` and `startBatchConsumer()` return a `ConsumerHandle` instead of `void`. Use it to stop a specific consumer or wait for it to be ready:
 
 ```typescript
-const handle = await kafka.startConsumer(['orders'], handler);
+const handle = await kafka.startConsumer(['orders'], handler, { fromBeginning: false });
 
 console.log(handle.groupId); // e.g. "my-group"
 
+// Wait until Kafka has assigned at least one partition to this consumer.
+// Safe to call before sending test messages — eliminates fixed setTimeout delays.
+await handle.ready();
+
 // Later — stop only this consumer, producer stays connected
 await handle.stop();
 ```
 
 `handle.stop()` is equivalent to `kafka.stopConsumer(handle.groupId)`. Useful in lifecycle methods or when you need to conditionally stop one consumer while others keep running.
 
+`handle.ready()` resolves once the broker fires the first partition-assignment event. For `fromBeginning: false` consumers it adds a 500 ms settle window so librdkafka can complete the async `latest` offset fetch before you send; for `fromBeginning: true` consumers it resolves immediately on assignment. In unit tests with `FakeTransport`, `subscribe()` fires the assignment synchronously, so `handle.ready()` resolves in the same tick.
+
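The latch-on-first-assignment behavior of `handle.ready()` can be sketched as a one-shot promise. Illustrative only; in the real client the assignment callback comes from the broker's rebalance event, and `makeReadySignal` is a hypothetical helper:

```typescript
// One-shot ready latch: resolves on the first assignment, ignores repeats.
function makeReadySignal(): { ready: () => Promise<void>; onAssigned: () => void } {
  let resolveReady!: () => void;
  const readyPromise = new Promise<void>((resolve) => {
    resolveReady = resolve;
  });
  let fired = false;
  return {
    ready: () => readyPromise,
    onAssigned: () => {
      if (!fired) {
        fired = true;
        resolveReady(); // only the first assignment event resolves ready()
      }
    },
  };
}
```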
 ## onMessageLost
 
 By default, if a consumer handler throws and `dlq` is not enabled, the message is logged and dropped. Use `onMessageLost` to catch these silent losses:
@@ -1874,6 +2141,7 @@ src/
 ├── client/ # Core library — zero framework dependencies
 │ ├── types.ts # All public interfaces: KafkaClientOptions, ConsumerOptions,
 │ │ # SendOptions, EventEnvelope, ConsumerHandle, BatchMeta,
+│ │ # RoutingOptions, TransactionalHandlerContext,
 │ │ # KafkaInstrumentation, ConsumerInterceptor, SchemaLike, …
 │ ├── errors.ts # KafkaProcessingError, KafkaRetryExhaustedError, KafkaValidationError
 │ │
@@ -1885,7 +2153,10 @@ src/
 │ ├── kafka.client/
 │ │ ├── index.ts # KafkaClient class — public API, producer/consumer lifecycle,
 │ │ │ # Lamport clock, ALS correlation ID, graceful shutdown,
-│ │ │ #
+│ │ │ # clockRecovery, readSnapshot(), checkpointOffsets(),
+│ │ │ # restoreFromCheckpoint(), startWindowConsumer(),
+│ │ │ # startRoutedConsumer(), startTransactionalConsumer(),
+│ │ │ # lagThrottle poller, waitIfThrottled()
 │ │ │
 │ │ ├── admin/
 │ │ │ └── ops.ts # AdminOps: listConsumerGroups(), describeTopics(),
@@ -1938,6 +2209,11 @@ src/
 │ │ ├── deduplication.spec.ts # Lamport clock dedup, strategies (drop/dlq/topic)
 │ │ ├── interceptors.spec.ts # ConsumerInterceptor before/after/onError hooks
 │ │ ├── dlq-replay.spec.ts # replayDlq(), dryRun, filter, targetTopic
+│ │ ├── read-snapshot.spec.ts # readSnapshot(), tombstones, multi-partition, schema, HWM
+│ │ ├── checkpoint.spec.ts # checkpointOffsets(), restoreFromCheckpoint(), timestamp selection
+│ │ ├── window-consumer.spec.ts # startWindowConsumer(), size/time triggers, shutdown flush
+│ │ ├── router.spec.ts # startRoutedConsumer(), route dispatch, fallback, skip
+│ │ ├── transactional-consumer.spec.ts # startTransactionalConsumer(), EOS commit/abort, tx.send
 │ │ ├── ttl.spec.ts # messageTtlMs, onTtlExpired, TTL→DLQ routing
 │ │ ├── message-lost.spec.ts # onMessageLost — handler error, validation, DLQ failure
 │ │ ├── handler-timeout.spec.ts # handlerTimeoutMs warning
@@ -1946,7 +2222,8 @@ src/
 │ │ ├── producer.spec.ts # sendMessage(), sendBatch(), sendTombstone(), compression
 │ │ ├── transaction.spec.ts # transaction(), tx.send(), tx.sendBatch(), rollback
 │ │ ├── schema.spec.ts # Schema validation on send/consume, strictSchemas
-│ │
+│ │ ├── topic.spec.ts # topic() descriptor, TopicsFrom, schema registry
+│ │ └── lag-throttle.spec.ts # lagThrottle option, threshold, maxWaitMs, poll errors
 │ ├── admin/
 │ │ ├── admin.spec.ts # listConsumerGroups(), describeTopics(), deleteRecords(),
 │ │ │ # resetOffsets(), seekToOffset(), seekToTimestamp()