@durable-streams/client 0.1.2 → 0.1.4

package/README.md CHANGED
@@ -1,17 +1,26 @@
  # @durable-streams/client
 
- TypeScript client for the Electric Durable Streams protocol.
+ TypeScript client for the Durable Streams protocol.
+
+ ## Installation
+
+ ```bash
+ npm install @durable-streams/client
+ ```
 
  ## Overview
 
- The Durable Streams client provides two main APIs:
+ The Durable Streams client provides three main APIs:
 
  1. **`stream()` function** - A fetch-like read-only API for consuming streams
  2. **`DurableStream` class** - A handle for read/write operations on a stream
+ 3. **`IdempotentProducer` class** - High-throughput producer with exactly-once write semantics (recommended for writes)
 
  ## Key Features
 
- - **Automatic Batching**: Multiple `append()` calls are automatically batched together when a POST is in-flight, significantly improving throughput for high-frequency writes
+ - **Exactly-Once Writes**: `IdempotentProducer` provides Kafka-style exactly-once semantics with automatic deduplication
+ - **Automatic Batching**: Multiple writes are automatically batched together for high throughput
+ - **Pipelining**: Up to 5 concurrent batches in flight by default for maximum throughput
  - **Streaming Reads**: `stream()` and `DurableStream.stream()` provide rich consumption options (promises, ReadableStreams, subscribers)
  - **Resumable**: Offset-based reads let you resume from any point
  - **Real-time**: Long-poll and SSE modes for live tailing with catch-up from any offset
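The byte-budget batching called out above can be pictured with a small stand-alone sketch. This is illustrative only, not the library's implementation: appends accumulate in a pending batch until adding another message would exceed a byte budget (mirroring the `maxBatchBytes` option documented later in this README), at which point the pending batch is sealed.

```typescript
// Illustrative micro-batcher (NOT the library's code): seals the pending
// batch when the next message would exceed maxBatchBytes.
class MicroBatcher {
  private pending: string[] = []
  private pendingBytes = 0
  readonly batches: string[][] = []

  constructor(private maxBatchBytes: number) {}

  append(msg: string): void {
    const size = new TextEncoder().encode(msg).length
    // Seal the current batch if this message would blow the byte budget
    if (this.pendingBytes + size > this.maxBatchBytes && this.pending.length > 0) {
      this.flush()
    }
    this.pending.push(msg)
    this.pendingBytes += size
  }

  flush(): void {
    if (this.pending.length > 0) {
      this.batches.push(this.pending)
      this.pending = []
      this.pendingBytes = 0
    }
  }
}

const b = new MicroBatcher(10) // 10-byte budget
for (const m of ["aaaa", "bbbb", "cccc"]) b.append(m)
b.flush()
// b.batches is now [["aaaa", "bbbb"], ["cccc"]]
```

The real producer adds a time bound on top of the byte bound (see `lingerMs` below), so a partially filled batch is still sent promptly.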
@@ -75,9 +84,56 @@ const unsubscribe3 = res.subscribeText(async (chunk) => {
  })
  ```
 
+ ### High-Throughput Writes: Using `IdempotentProducer` (Recommended)
+
+ For reliable, high-throughput writes with exactly-once semantics, use `IdempotentProducer`:
+
+ ```typescript
+ import { DurableStream, IdempotentProducer } from "@durable-streams/client"
+
+ const stream = await DurableStream.create({
+   url: "https://streams.example.com/events",
+   contentType: "application/json",
+ })
+
+ const producer = new IdempotentProducer(stream, "event-processor-1", {
+   autoClaim: true,
+   onError: (err) => console.error("Batch failed:", err), // Errors reported here
+ })
+
+ // Fire-and-forget - don't await, errors go to onError callback
+ for (const event of events) {
+   producer.append(event) // Objects serialized automatically for JSON streams
+ }
+
+ // IMPORTANT: Always flush before shutdown to ensure delivery
+ await producer.flush()
+ await producer.close()
+ ```
+ **Why use IdempotentProducer?**
+
+ - **Exactly-once delivery**: Server deduplicates using `(producerId, epoch, seq)` tuple
+ - **Automatic batching**: Multiple writes batched into single HTTP requests
+ - **Pipelining**: Multiple batches in flight concurrently
+ - **Zombie fencing**: Stale producers are rejected, preventing split-brain scenarios
+ - **Network resilience**: Safe to retry on network errors (server deduplicates)
+
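The deduplication and fencing bullets above can be sketched as a toy server-side table. This is an assumption-laden illustration of the `(producerId, epoch, seq)` idea, not the server's actual code: a retried batch with an already-seen sequence is a duplicate, and a write from an older epoch is fenced.

```typescript
// Illustrative sketch (NOT the server's implementation) of dedup/fencing
// keyed on the (producerId, epoch, seq) tuple described above.
type ProducerState = { epoch: number; lastSeq: number }
type Verdict = "accept" | "duplicate" | "fenced"

class DedupTable {
  private producers = new Map<string, ProducerState>()

  check(producerId: string, epoch: number, seq: number): Verdict {
    const state = this.producers.get(producerId)
    if (state && epoch < state.epoch) return "fenced" // stale epoch: zombie producer
    if (state && epoch === state.epoch && seq <= state.lastSeq) return "duplicate" // retried batch
    this.producers.set(producerId, { epoch, lastSeq: seq })
    return "accept"
  }
}

const table = new DedupTable()
table.check("p1", 0, 0) // "accept"
table.check("p1", 0, 0) // "duplicate" - a network retry is safe
table.check("p1", 1, 0) // "accept" - new epoch after a restart
table.check("p1", 0, 1) // "fenced" - old epoch rejected
```

This is why retrying on network errors is safe: a batch that already landed is recognized by its sequence number and discarded rather than applied twice.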
  ### Read/Write: Using `DurableStream`
 
- For write operations or when you need a persistent handle:
+ For simple write operations or when you need a persistent handle:
 
  ```typescript
  import { DurableStream } from "@durable-streams/client"
@@ -92,7 +148,7 @@ const handle = await DurableStream.create({
    ttlSeconds: 3600,
  })
 
- // Append data
+ // Append data (simple API without exactly-once guarantees)
  await handle.append(JSON.stringify({ type: "message", text: "Hello" }), {
    seq: "writer-1-000001",
  })
@@ -781,6 +837,117 @@ res.subscribeJson(async (batch) => {
 
  ---
 
+ ## IdempotentProducer
+
+ The `IdempotentProducer` class provides Kafka-style exactly-once write semantics with automatic batching and pipelining.
+
+ ### Constructor
+
+ ```typescript
+ new IdempotentProducer(stream: DurableStream, producerId: string, opts?: IdempotentProducerOptions)
+ ```
+
+ **Parameters:**
+
+ - `stream` - The DurableStream to write to
+ - `producerId` - Stable identifier for this producer (e.g., "order-service-1")
+ - `opts` - Optional configuration
+
+ **Options:**
+
+ ```typescript
+ interface IdempotentProducerOptions {
+   epoch?: number // Starting epoch (default: 0)
+   autoClaim?: boolean // On 403, retry with epoch+1 (default: false)
+   maxBatchBytes?: number // Max bytes before sending batch (default: 1MB)
+   lingerMs?: number // Max time to wait for more messages (default: 5ms)
+   maxInFlight?: number // Concurrent batches in flight (default: 5)
+   signal?: AbortSignal // Cancellation signal
+   fetch?: typeof fetch // Custom fetch implementation
+   onError?: (error: Error) => void // Error callback for batch failures
+ }
+ ```
+
+ ### Methods
+
+ #### `append(body): void`
+
+ Append data to the stream (fire-and-forget). For JSON streams, you can pass objects directly.
+ Returns immediately after adding to the internal batch. Errors are reported via the `onError` callback.
+
+ ```typescript
+ // For JSON streams - pass objects directly
+ producer.append({ event: "click", x: 100 })
+
+ // Or strings/bytes
+ producer.append("message data")
+ producer.append(new Uint8Array([1, 2, 3]))
+
+ // All appends are fire-and-forget - use flush() to wait for delivery
+ await producer.flush()
+ ```
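One way to picture how `append()` can take objects, strings, or bytes is a normalization step like the following. This is a hypothetical sketch for illustration, not the library's actual serialization code:

```typescript
// Hypothetical normalization (an assumption, NOT the library's code):
// bytes pass through, strings are UTF-8 encoded, objects are JSON-serialized.
function toBytes(body: unknown): Uint8Array {
  if (body instanceof Uint8Array) return body // raw bytes pass through
  if (typeof body === "string") return new TextEncoder().encode(body) // UTF-8 encode
  return new TextEncoder().encode(JSON.stringify(body)) // JSON-serialize objects
}

const bytes = toBytes({ event: "click", x: 100 })
// Decoding back yields the JSON text of the object
const text = new TextDecoder().decode(bytes)
```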
+
+ #### `flush(): Promise<void>`
+
+ Send any pending batch immediately and wait for all in-flight batches to complete.
+
+ ```typescript
+ // Always call before shutdown
+ await producer.flush()
+ ```
+
+ #### `close(): Promise<void>`
+
+ Flush pending messages and close the producer. Further `append()` calls will throw.
+
+ ```typescript
+ await producer.close()
+ ```
+
+ #### `restart(): Promise<void>`
+
+ Increment the epoch and reset the sequence. Call this when restarting the producer to establish a new session.
+
+ ```typescript
+ await producer.restart()
+ ```
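The epoch/sequence bookkeeping behind `restart()` can be sketched with a toy model (illustrative only, not the library's code): each append takes the next sequence number in the current epoch, and a restart bumps the epoch and resets the sequence to 0, which is what lets the server fence writes from a stale session.

```typescript
// Toy model (NOT the library's implementation) of epoch/seq bookkeeping.
class ProducerSession {
  epoch = 0
  nextSeq = 0

  append(): { epoch: number; seq: number } {
    // Each message is stamped with the current epoch and the next sequence
    return { epoch: this.epoch, seq: this.nextSeq++ }
  }

  restart(): void {
    this.epoch += 1 // new session supersedes the old one
    this.nextSeq = 0 // sequence restarts within the new epoch
  }
}

const s = new ProducerSession()
s.append() // { epoch: 0, seq: 0 }
s.append() // { epoch: 0, seq: 1 }
s.restart()
s.append() // { epoch: 1, seq: 0 }
```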
+
+ ### Properties
+
+ - `epoch: number` - Current epoch for this producer
+ - `nextSeq: number` - Next sequence number to be assigned
+ - `pendingCount: number` - Messages in the current pending batch
+ - `inFlightCount: number` - Batches currently in flight
+
+ ### Error Handling
+
+ Errors are delivered via the `onError` callback since `append()` is fire-and-forget:
+
+ ```typescript
+ import {
+   IdempotentProducer,
+   StaleEpochError,
+   SequenceGapError,
+ } from "@durable-streams/client"
+
+ const producer = new IdempotentProducer(stream, "my-producer", {
+   onError: (error) => {
+     if (error instanceof StaleEpochError) {
+       // Another producer has a higher epoch - this producer is "fenced"
+       console.log(`Fenced by epoch ${error.currentEpoch}`)
+     } else if (error instanceof SequenceGapError) {
+       // Sequence gap detected (should not happen with proper usage)
+       console.log(`Expected seq ${error.expectedSeq}, got ${error.receivedSeq}`)
+     }
+   },
+ })
+
+ producer.append("data") // Fire-and-forget, errors go to onError
+ await producer.flush() // Wait for all batches to complete
+ ```
+
+ ---
+
  ## Types
 
  Key types exported from the package:
@@ -791,6 +958,9 @@ Key types exported from the package:
  - `JsonBatch<T>` - `{ items: T[], offset: Offset, upToDate: boolean, cursor?: string }`
  - `TextChunk` - `{ text: string, offset: Offset, upToDate: boolean, cursor?: string }`
  - `HeadResult` - Metadata from HEAD requests
+ - `IdempotentProducer` - Exactly-once producer class
+ - `StaleEpochError` - Thrown when producer epoch is stale (zombie fencing)
+ - `SequenceGapError` - Thrown when sequence numbers are out of order
  - `DurableStreamError` - Protocol-level errors with codes
  - `FetchError` - Transport/network errors