ai-stream-utils 1.5.0 → 2.0.0

package/README.md CHANGED
@@ -9,327 +9,224 @@
 
  </div>
 
- This library provides composable stream transformation and filter utilities for UI message streams created by [`streamText()`](https://ai-sdk.dev/docs/reference/ai-sdk-core/stream-text) in the AI SDK.
+ This library provides composable filter and transformation utilities for UI message streams created by [`streamText()`](https://ai-sdk.dev/docs/reference/ai-sdk-core/stream-text) in the AI SDK.
 
  ### Why?
 
  The AI SDK UI message stream created by [`toUIMessageStream()`](https://ai-sdk.dev/docs/reference/ai-sdk-core/stream-text#to-ui-message-stream) streams all parts (text, tools, reasoning, etc.) to the client by default. However, you may want to:
 
- - **Filter**: Tool calls like database queries often contain large amounts of data or sensitive information that should not be visible on the client
+ - **Filter**: Tool calls like database queries often contain large amounts of data or sensitive information that should not be streamed to the client
  - **Transform**: Modify text or tool outputs while they are streamed to the client
+ - **Observe**: Log stream lifecycle events, tool calls, or other chunks without consuming or modifying the stream
 
  This library provides type-safe, composable utilities for all these use cases.
 
  ### Installation
 
- This library only supports AI SDK v5.
+ This library supports AI SDK v5 and v6.
 
  ```bash
  npm install ai-stream-utils
  ```
 
- ## Overview
-
- | Function | Input | Returns | Use Case |
- |----------|-------------|---------|----------|
- | [`mapUIMessageStream`](#mapuimessagestream) | [UIMessageChunk](https://github.com/vercel/ai/blob/main/packages/ai/src/ui-message-stream/ui-message-chunks.ts) | `chunk \| chunk[] \| null` | Transform or filter chunks in real-time (e.g., smooth streaming) |
- | [`flatMapUIMessageStream`](#flatmapuimessagestream) | [UIMessagePart](https://ai-sdk.dev/docs/reference/ai-sdk-core/ui-message#uimessagepart-types) | `part \| part[] \| null` | Buffer until complete, then transform (e.g., redact tool output) |
- | [`filterUIMessageStream`](#filteruimessagestream) | [UIMessageChunk](https://github.com/vercel/ai/blob/main/packages/ai/src/ui-message-stream/ui-message-chunks.ts) | `boolean` | Include/exclude parts by type (e.g., hide reasoning) |
-
  ## Usage
 
- ### `mapUIMessageStream`
+ The `pipe` function provides a composable pipeline API for filtering, transforming, and observing UI message streams. Operators can be chained, and type guards automatically narrow chunk and part types, enabling type-safe stream transformations with autocomplete.
 
- The `mapUIMessageStream` function operates on chunks and can be used to transform or filter individual chunks as they stream through. It receives the current chunk and the partial part representing all already processed chunks.
+ ### `.filter()`
+
+ Filter chunks by returning `true` to keep a chunk or `false` to exclude it.
 
  ```typescript
- import { mapUIMessageStream } from 'ai-stream-utils';
+ const stream = pipe(result.toUIMessageStream())
+   .filter(({ chunk, part }) => {
+     // chunk.type: "text-delta" | "text-start" | "tool-input-available" | ...
+     // part.type: "text" | "reasoning" | "tool-weather" | ...
 
- const stream = mapUIMessageStream(
-   result.toUIMessageStream<MyUIMessage>(),
-   ({ chunk, part }) => {
-     // Transform: modify the chunk
-     if (chunk.type === 'text-delta') {
-       return { ...chunk, delta: chunk.delta.toUpperCase() };
-     }
-     // Filter: return null to exclude chunks
-     if (part.type === 'tool-weather') {
-       return null;
+     if (chunk.type === "data-weather") {
+       return false; // exclude chunk
      }
-     return chunk;
-   }
- );
+
+     return true; // keep chunk
+   })
+   .toStream();
  ```
 
- ### `flatMapUIMessageStream`
+ **Type guards** provide a simpler API for common filtering patterns:
 
- The `flatMapUIMessageStream` function operates on parts. It buffers all chunks of a particular type (e.g. text parts) until the part is complete and then transforms or filters the complete part. The optional predicate `partTypeIs()` can be used to selectively buffer only specific parts while streaming others through immediately.
+ - `includeChunks("text-delta")` or `includeChunks(["text-delta", "text-end"])`: Include specific chunk types
+ - `excludeChunks("text-delta")` or `excludeChunks(["text-delta", "text-end"])`: Exclude specific chunk types
+ - `includeParts("text")` or `includeParts(["text", "reasoning"])`: Include specific part types
+ - `excludeParts("reasoning")` or `excludeParts(["reasoning", "tool-database"])`: Exclude specific part types
 
- ```typescript
- import { flatMapUIMessageStream, partTypeIs } from 'ai-stream-utils';
+ **Example:** Exclude tool calls from the client.
 
- const stream = flatMapUIMessageStream(
-   result.toUIMessageStream<MyUIMessage>(),
-   // Predicate to only buffer tool-weather parts and pass through other parts
-   partTypeIs('tool-weather'),
-   ({ part }) => {
-     // Transform: modify the complete part
-     if (part.state === 'output-available') {
-       return { ...part, output: { ...part.output, temperature: toFahrenheit(part.output.temperature) } };
-     }
-     // Filter: return null to exclude parts
-     return part;
-   }
- );
+ ```typescript
+ const stream = pipe(result.toUIMessageStream())
+   .filter(excludeParts(["tool-weather", "tool-database"]))
+   .toStream();
  ```
 
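For intuition, the filtering behavior above can be sketched with web-standard streams alone. Everything below (the minimal `Chunk` shape and the `filterStream` helper) is illustrative, not part of this library:

```typescript
// Minimal stand-in for a UI message chunk: only `type` matters here.
type Chunk = { type: string; delta?: string };

// Build a TransformStream that drops chunks failing the predicate.
function filterStream(predicate: (chunk: Chunk) => boolean): TransformStream<Chunk, Chunk> {
  return new TransformStream({
    transform(chunk, controller) {
      if (predicate(chunk)) controller.enqueue(chunk);
    },
  });
}

// Source stream emitting a few fake chunks.
const source = new ReadableStream<Chunk>({
  start(controller) {
    controller.enqueue({ type: "text-delta", delta: "hi" });
    controller.enqueue({ type: "data-weather" });
    controller.enqueue({ type: "text-delta", delta: "!" });
    controller.close();
  },
});

const filtered = source.pipeThrough(filterStream((c) => c.type !== "data-weather"));

// Collect the surviving chunks.
const kept: Chunk[] = [];
for await (const chunk of filtered) {
  kept.push(chunk);
}
console.log(kept.map((c) => c.type)); // ["text-delta", "text-delta"]
```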
- ### `filterUIMessageStream`
+ ### `.map()`
 
- The `filterUIMessageStream` function is a convenience function around `mapUIMessageStream` with a simpler API to filter chunks by part type. It provides the `includeParts()` and `excludeParts()` predicates for common patterns.
+ Transform chunks by returning a chunk, an array of chunks, or `null` to exclude.
 
  ```typescript
- import { filterUIMessageStream, includeParts, excludeParts } from 'ai-stream-utils';
+ const stream = pipe(result.toUIMessageStream())
+   .map(({ chunk, part }) => {
+     // chunk.type: "text-delta" | "text-start" | "tool-input-available" | ...
+     // part.type: "text" | "reasoning" | "tool-weather" | ...
 
- // Include only specific parts
- const stream = filterUIMessageStream(
-   result.toUIMessageStream<MyUIMessage>(),
-   includeParts(['text', 'tool-weather'])
- );
+     if (chunk.type === "text-start") {
+       return chunk; // pass through unchanged
+     }
 
- // Exclude specific parts
- const stream = filterUIMessageStream(
-   result.toUIMessageStream<MyUIMessage>(),
-   excludeParts(['reasoning', 'tool-database'])
- );
+     if (chunk.type === "text-delta") {
+       return { ...chunk, delta: "modified" }; // transform chunk
+     }
 
- // Custom filter function
- const stream = filterUIMessageStream(
-   result.toUIMessageStream<MyUIMessage>(),
-   ({ part, chunk }) => {
-     if (part.type === 'text') return true;
-     if (chunk.type === 'tool-input-available') return true;
-     return false;
-   }
- );
+     if (chunk.type === "data-weather") {
+       return [chunk1, chunk2]; // emit multiple chunks
+     }
+
+     return null; // exclude chunk (same as filter)
+   })
+   .toStream();
  ```
 
- ## Examples
+ **Example:** Convert text to uppercase.
+
+ ```typescript
+ const stream = pipe(result.toUIMessageStream())
+   .map(({ chunk }) => {
+     if (chunk.type === "text-delta") {
+       return { ...chunk, delta: chunk.delta.toUpperCase() };
+     }
+
+     return chunk;
+   })
+   .toStream();
+ ```
 
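The chunk-or-array-or-`null` contract of a mapper can be modeled as a flattening transform. The sketch below is a simplified stand-in (hypothetical `mapStream` helper, plain objects instead of real `UIMessageChunk`s), not the library's implementation:

```typescript
type Chunk = { type: string; delta?: string };
type MapResult = Chunk | Chunk[] | null;

// Map each input chunk to zero, one, or many output chunks.
function mapStream(fn: (chunk: Chunk) => MapResult): TransformStream<Chunk, Chunk> {
  return new TransformStream({
    transform(chunk, controller) {
      const result = fn(chunk);
      if (result === null) return; // null drops the chunk
      for (const out of Array.isArray(result) ? result : [result]) {
        controller.enqueue(out);
      }
    },
  });
}

const source = new ReadableStream<Chunk>({
  start(controller) {
    controller.enqueue({ type: "text-delta", delta: "a" });
    controller.enqueue({ type: "noise" });
    controller.close();
  },
});

// Duplicate text chunks, drop everything else.
const mapped = source.pipeThrough(
  mapStream((c) => (c.type === "noise" ? null : [c, c])),
);

const out: Chunk[] = [];
for await (const c of mapped) out.push(c);
console.log(out.length); // 2
```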
- ### Smooth Streaming
+ ### `.on()`
 
- Buffers multiple text chunks into a string, splits at word boundaries and re-emits each word as a separate chunk for smoother UI rendering. See [examples/smooth-streaming.ts](./examples/smooth-streaming.ts) for the full implementation.
+ Observe chunks without modifying the stream. The callback is invoked for each matching chunk.
 
  ```typescript
- import { mapUIMessageStream } from 'ai-stream-utils';
+ const stream = pipe(result.toUIMessageStream())
+   .on(
+     ({ chunk, part }) => {
+       // return true to invoke callback, false to skip
+       return chunk.type === "text-delta";
+     },
+     ({ chunk, part }) => {
+       // callback invoked for matching chunks
+       console.log(chunk, part);
+     },
+   )
+   .toStream();
+ ```
 
- const WORD_REGEX = /\S+\s+/m;
- let buffer = '';
+ **Type guards** provide a type-safe way to observe specific chunk types:
 
- const smoothedStream = mapUIMessageStream(
-   result.toUIMessageStream(),
-   ({ chunk }) => {
-     if (chunk.type !== 'text-delta') {
-       // Flush buffer on non-text chunks
-       if (buffer.length > 0) {
-         const flushed = { type: 'text-delta' as const, id: chunk.id, delta: buffer };
-         buffer = '';
-         return [flushed, chunk];
-       }
-       return chunk;
-     }
+ - `chunkType("text-delta")` or `chunkType(["start", "finish"])`: Observe specific chunk types
+ - `partType("text")` or `partType(["text", "reasoning"])`: Observe chunks belonging to specific part types
 
-     // Append the text delta to the buffer
-     buffer += chunk.delta;
-     const chunks = [];
-
-     let match;
-     while ((match = WORD_REGEX.exec(buffer)) !== null) {
-       chunks.push({ type: 'text-delta', id: chunk.id, delta: buffer.slice(0, match.index + match[0].length) });
-       buffer = buffer.slice(match.index + match[0].length);
-     }
-     // Emit the word-by-word chunks
-     return chunks;
-   }
- );
+ > [!NOTE]
+ > The `partType` type guard still operates on chunks. That means `partType("text")` will match any text chunks such as `text-start`, `text-delta`, and `text-end`.
 
- // Output: word-by-word streaming
- // { type: 'text-delta', delta: 'Why ' }
- // { type: 'text-delta', delta: "don't " }
- // { type: 'text-delta', delta: 'scientists ' }
+ **Example:** Log stream lifecycle events.
+
+ ```typescript
+ const stream = pipe(result.toUIMessageStream())
+   .on(chunkType("start"), () => {
+     console.log("Stream started");
+   })
+   .on(chunkType("finish"), ({ chunk }) => {
+     console.log("Stream finished:", chunk.finishReason);
+   })
+   .on(chunkType("tool-input-available"), ({ chunk }) => {
+     console.log("Tool called:", chunk.toolName, chunk.input);
+   })
+   .toStream();
  ```
 
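An observer like this can be thought of as a pass-through transform that fires a callback on matching chunks while leaving the stream itself untouched. A minimal sketch with web-standard streams (the `onChunk` helper and `Chunk` shape are hypothetical, not the library's code):

```typescript
type Chunk = { type: string; delta?: string };

// Pass every chunk through unchanged, invoking the callback on matches.
function onChunk(
  match: (chunk: Chunk) => boolean,
  callback: (chunk: Chunk) => void,
): TransformStream<Chunk, Chunk> {
  return new TransformStream({
    transform(chunk, controller) {
      if (match(chunk)) callback(chunk);
      controller.enqueue(chunk); // the stream itself is never modified
    },
  });
}

const source = new ReadableStream<Chunk>({
  start(controller) {
    controller.enqueue({ type: "start" });
    controller.enqueue({ type: "text-delta", delta: "hi" });
    controller.enqueue({ type: "finish" });
    controller.close();
  },
});

const seen: string[] = [];
const observed = source.pipeThrough(
  onChunk((c) => c.type === "finish", (c) => seen.push(c.type)),
);

const all: Chunk[] = [];
for await (const chunk of observed) all.push(chunk);
console.log(seen, all.length); // ["finish"] 3
```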
- ### Redacting Sensitive Data
+ ### `.toStream()`
 
- Buffer tool calls until complete, then redact sensitive fields before streaming to the client. See [examples/order-lookup.ts](./examples/order-lookup.ts) for the full example.
+ Convert the pipeline back to an `AsyncIterableStream<InferUIMessageChunk<UI_MESSAGE>>` that can be returned to the client or consumed directly.
 
  ```typescript
- import { flatMapUIMessageStream, partTypeIs } from 'ai-stream-utils';
-
- const tools = {
-   lookupOrder: tool({
-     description: 'Look up order details by order ID',
-     inputSchema: z.object({
-       orderId: z.string().describe('The order ID to look up'),
-     }),
-     execute: ({ orderId }) => ({
-       orderId,
-       status: 'shipped',
-       items: ['iPhone 15'],
-       total: 1299.99,
-       email: 'customer@example.com', // Sensitive
-       address: '123 Main St, SF, CA 94102', // Sensitive
-     }),
-   }),
- };
+ const stream = pipe(result.toUIMessageStream())
+   .filter(({ chunk }) => true /* ... */)
+   .map(({ chunk }) => chunk /* ... */)
+   .toStream();
 
- const result = streamText({
-   model: openai('gpt-4o'),
-   prompt: 'Where is my order #12345?',
-   tools,
- });
+ // Iterate with for await...of
+ for await (const chunk of stream) {
+   console.log(chunk);
+ }
 
- // Buffer tool-lookupOrder parts, stream text parts immediately
- const redactedStream = flatMapUIMessageStream(
-   result.toUIMessageStream<MyUIMessage>(),
-   partTypeIs('tool-lookupOrder'),
-   ({ part }) => {
-     if (part.state === 'output-available') {
-       return {
-         ...part,
-         output: {
-           ...part.output,
-           email: '[REDACTED]',
-           address: '[REDACTED]',
-         },
-       };
-     }
-     return part;
-   },
- );
+ // Read assembled messages with readUIMessageStream()
+ for await (const message of readUIMessageStream({ stream })) {
+   console.log(message);
+ }
 
- // Text streams immediately, tool output is redacted:
- // { type: 'text-delta', delta: 'Let me look that up...' }
- // { type: 'tool-output-available', output: { orderId: '12345', email: '[REDACTED]', address: '[REDACTED]' } }
+ // Or return the stream to the client for useChat()
+ return stream;
  ```
 
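For readers curious how a `ReadableStream` becomes `for await`-able, here is one possible sketch using only standard APIs. The `toAsyncIterable` helper is illustrative; the library's `AsyncIterableStream` may be built differently:

```typescript
type AsyncIterableStream<T> = ReadableStream<T> & AsyncIterable<T>;

// Attach the async iterator protocol to a ReadableStream by reading
// from its reader until the stream is done.
function toAsyncIterable<T>(stream: ReadableStream<T>): AsyncIterableStream<T> {
  const iterable = stream as AsyncIterableStream<T>;
  iterable[Symbol.asyncIterator] = async function* () {
    const reader = stream.getReader();
    try {
      while (true) {
        const { done, value } = await reader.read();
        if (done) return;
        yield value;
      }
    } finally {
      reader.releaseLock();
    }
  };
  return iterable;
}

const stream = new ReadableStream<number>({
  start(controller) {
    [1, 2, 3].forEach((n) => controller.enqueue(n));
    controller.close();
  },
});

const collected: number[] = [];
for await (const n of toAsyncIterable(stream)) {
  collected.push(n);
}
console.log(collected); // [1, 2, 3]
```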
- ### Conditional Part Injection
+ ### Chaining and Type Narrowing
 
- Inspect previously streamed parts to conditionally inject new parts. This example creates a text part from a tool call message if the model didn't generate one. See [examples/ask-permission.ts](./examples/ask-permission.ts) for the full example.
+ Multiple operators can be chained together. After filtering with type guards, chunk and part types are narrowed automatically.
 
  ```typescript
- import { flatMapUIMessageStream, partTypeIs } from 'ai-stream-utils';
+ const stream = pipe<MyUIMessage>(result.toUIMessageStream())
+   .filter(includeParts("text"))
+   .map(({ chunk, part }) => {
+     // chunk is narrowed to text chunks only
+     // part.type is narrowed to "text"
+     return chunk;
+   })
+   .toStream();
+ ```
 
- const tools = {
-   askForPermission: tool({
-     description: 'Ask for permission to access current location',
-     inputSchema: z.object({
-       message: z.string().describe('The message to ask for permission'),
-     }),
-   }),
- };
+ ### Control Chunks
 
- const result = streamText({
-   model: openai('gpt-4o'),
-   prompt: 'Is it sunny today?',
-   tools,
- });
+ [Control chunks](https://github.com/vercel/ai/blob/main/packages/ai/src/ui-message-stream/ui-message-chunks.ts#L278-L293) always pass through regardless of filter/transform settings:
 
- // Buffer askForPermission tool calls, check if text was already generated
- const stream = flatMapUIMessageStream(
-   result.toUIMessageStream<MyUIMessage>(),
-   partTypeIs('tool-askForPermission'),
-   (current, context) => {
-     if (current.part.state === 'input-available') {
-       // Check if a text part was already streamed
-       const hasTextPart = context.parts.some((p) => p.type === 'text');
-
-       if (!hasTextPart) {
-         // Inject a text part from the tool call message
-         return [
-           { type: 'text', text: current.part.input.message },
-           current.part,
-         ];
-       }
-     }
-     return current.part;
-   },
- );
+ - `start`: Stream start marker
+ - `finish`: Stream finish marker
+ - `abort`: Stream abort marker
+ - `message-metadata`: Message metadata updates
+ - `error`: Error messages
 
- // If model only generated tool call, we inject the text:
- // { type: 'text', text: 'May I access your location?' }
- // { type: 'tool-askForPermission', input: { message: 'May I access your location?' } }
- ```
+ ## Stream Utilities
+
+ Helper functions for consuming streams and converting between streams, arrays, and async iterables.
 
- ### Transform Tool Output
+ ### `consumeUIMessageStream`
 
- Transform tool outputs on-the-fly, such as converting temperature units. See [examples/weather.ts](./examples/weather.ts) for the full example.
+ Consumes a UI message stream by fully reading it and returns the final assembled message. Useful for server-side processing without streaming to the client.
 
  ```typescript
- import { flatMapUIMessageStream, partTypeIs } from 'ai-stream-utils';
-
- const toFahrenheit = (celsius: number) => (celsius * 9) / 5 + 32;
-
- const tools = {
-   weather: tool({
-     description: 'Get the weather in a location',
-     inputSchema: z.object({ location: z.string() }),
-     execute: ({ location }) => ({
-       location,
-       temperature: 22, // Celsius from API
-       unit: 'C',
-     }),
-   }),
- };
+ import { consumeUIMessageStream } from "ai-stream-utils";
 
  const result = streamText({
-   model: openai('gpt-4o'),
-   prompt: 'What is the weather in Tokyo?',
-   tools,
+   model: openai("gpt-4o"),
+   prompt: "Tell me a joke",
  });
 
- // Convert Celsius to Fahrenheit before streaming to client
- const stream = flatMapUIMessageStream(
-   result.toUIMessageStream<MyUIMessage>(),
-   partTypeIs('tool-weather'),
-   ({ part }) => {
-     if (part.state === 'output-available') {
-       return {
-         ...part,
-         output: {
-           ...part.output,
-           temperature: toFahrenheit(part.output.temperature),
-           unit: 'F',
-         },
-       };
-     }
-     return part;
-   },
- );
+ const message = await consumeUIMessageStream(result.toUIMessageStream<MyUIMessage>());
 
- // Output is converted:
- // { type: 'tool-output-available', output: { location: 'Tokyo', temperature: 71.6, unit: 'F' } }
+ console.log(message.parts); // All parts fully assembled
  ```
 
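Conceptually, consuming a stream into a final message is a fold over its chunks. The toy `assemble` function below only handles text parts and is not the library's actual logic:

```typescript
type Chunk =
  | { type: "text-start"; id: string }
  | { type: "text-delta"; id: string; delta: string }
  | { type: "text-end"; id: string };

type TextPart = { type: "text"; text: string };

// Fold the chunk stream into completed text parts.
async function assemble(stream: ReadableStream<Chunk>): Promise<TextPart[]> {
  const open = new Map<string, string>(); // in-progress text, keyed by chunk id
  const parts: TextPart[] = [];
  for await (const chunk of stream) {
    if (chunk.type === "text-start") open.set(chunk.id, "");
    if (chunk.type === "text-delta") open.set(chunk.id, (open.get(chunk.id) ?? "") + chunk.delta);
    if (chunk.type === "text-end") {
      parts.push({ type: "text", text: open.get(chunk.id) ?? "" });
      open.delete(chunk.id);
    }
  }
  return parts;
}

const chunks: Chunk[] = [
  { type: "text-start", id: "1" },
  { type: "text-delta", id: "1", delta: "Hello, " },
  { type: "text-delta", id: "1", delta: "world" },
  { type: "text-end", id: "1" },
];

const stream = new ReadableStream<Chunk>({
  start(controller) {
    chunks.forEach((c) => controller.enqueue(c));
    controller.close();
  },
});

const parts = await assemble(stream);
console.log(parts); // [{ type: "text", text: "Hello, world" }]
```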
- ## Stream Utilities
-
- Helper functions for converting between streams, arrays, and async iterables.
-
- | Function | Converts | To |
- |----------|----------|-----|
- | `createAsyncIterableStream` | `ReadableStream<T>` | `AsyncIterableStream<T>` |
- | `convertArrayToStream` | `Array<T>` | `ReadableStream<T>` |
- | `convertAsyncIterableToStream` | `AsyncIterable<T>` | `ReadableStream<T>` |
- | `convertAsyncIterableToArray` | `AsyncIterable<T>` | `Promise<Array<T>>` |
- | `convertStreamToArray` | `ReadableStream<T>` | `Promise<Array<T>>` |
- | `convertUIMessageToSSEStream` | `ReadableStream<UIMessageChunk>` | `ReadableStream<string>` |
- | `convertSSEToUIMessageStream` | `ReadableStream<string>` | `ReadableStream<UIMessageChunk>` |
-
  ### `createAsyncIterableStream`
 
  Adds async iterator protocol to a `ReadableStream`, enabling `for await...of` loops.
 
  ```typescript
- import { createAsyncIterableStream } from 'ai-stream-utils/utils';
+ import { createAsyncIterableStream } from "ai-stream-utils";
 
  const asyncStream = createAsyncIterableStream(readableStream);
  for await (const chunk of asyncStream) {
@@ -342,7 +239,7 @@ for await (const chunk of asyncStream) {
  Converts an array to a `ReadableStream` that emits each element.
 
  ```typescript
- import { convertArrayToStream } from 'ai-stream-utils/utils';
+ import { convertArrayToStream } from "ai-stream-utils";
 
  const stream = convertArrayToStream([1, 2, 3]);
  ```
@@ -352,7 +249,7 @@ const stream = convertArrayToStream([1, 2, 3]);
  Converts an async iterable (e.g., async generator) to a `ReadableStream`.
 
  ```typescript
- import { convertAsyncIterableToStream } from 'ai-stream-utils/utils';
+ import { convertAsyncIterableToStream } from "ai-stream-utils";
 
  async function* generator() {
    yield 1;
@@ -366,7 +263,7 @@ const stream = convertAsyncIterableToStream(generator());
  Collects all values from an async iterable into an array.
 
  ```typescript
- import { convertAsyncIterableToArray } from 'ai-stream-utils/utils';
+ import { convertAsyncIterableToArray } from "ai-stream-utils";
 
  const array = await convertAsyncIterableToArray(asyncIterable);
  ```
@@ -376,7 +273,7 @@ const array = await convertAsyncIterableToArray(asyncIterable);
  Consumes a `ReadableStream` and collects all chunks into an array.
 
  ```typescript
- import { convertStreamToArray } from 'ai-stream-utils/utils';
+ import { convertStreamToArray } from "ai-stream-utils";
 
  const array = await convertStreamToArray(readableStream);
  ```
@@ -386,7 +283,7 @@ const array = await convertStreamToArray(readableStream);
  Converts a UI message stream to an SSE (Server-Sent Events) stream. Useful for sending UI message chunks over HTTP as SSE-formatted text.
 
  ```typescript
- import { convertUIMessageToSSEStream } from 'ai-stream-utils/utils';
+ import { convertUIMessageToSSEStream } from "ai-stream-utils";
 
  const uiStream = result.toUIMessageStream();
  const sseStream = convertUIMessageToSSEStream(uiStream);
@@ -399,181 +296,94 @@ const sseStream = convertUIMessageToSSEStream(uiStream);
  Converts an SSE stream back to a UI message stream. Useful for parsing SSE-formatted responses on the client.
 
  ```typescript
- import { convertSSEToUIMessageStream } from 'ai-stream-utils/utils';
+ import { convertSSEToUIMessageStream } from "ai-stream-utils";
 
- const response = await fetch('/api/chat');
+ const response = await fetch("/api/chat");
  const sseStream = response.body.pipeThrough(new TextDecoderStream());
  const uiStream = convertSSEToUIMessageStream(sseStream);
  ```
 
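To illustrate the SSE round trip, a pair of toy converters can serialize chunks as `data:` events and parse them back. This is a simplified sketch (real SSE parsing also handles `event:`/`id:` fields and CRLF line endings), not the library's implementation:

```typescript
type Chunk = { type: string; [key: string]: unknown };

// Encode each chunk as one SSE event: `data: <json>\n\n`.
function toSSE(chunks: ReadableStream<Chunk>): ReadableStream<string> {
  return chunks.pipeThrough(
    new TransformStream<Chunk, string>({
      transform(chunk, controller) {
        controller.enqueue(`data: ${JSON.stringify(chunk)}\n\n`);
      },
    }),
  );
}

// Parse SSE text back into chunks, buffering across event boundaries.
function fromSSE(sse: ReadableStream<string>): ReadableStream<Chunk> {
  let buffer = "";
  return sse.pipeThrough(
    new TransformStream<string, Chunk>({
      transform(text, controller) {
        buffer += text;
        let index: number;
        while ((index = buffer.indexOf("\n\n")) !== -1) {
          const event = buffer.slice(0, index);
          buffer = buffer.slice(index + 2);
          if (event.startsWith("data: ")) {
            controller.enqueue(JSON.parse(event.slice(6)));
          }
        }
      },
    }),
  );
}

const source = new ReadableStream<Chunk>({
  start(controller) {
    controller.enqueue({ type: "text-delta", delta: "hello" });
    controller.enqueue({ type: "finish" });
    controller.close();
  },
});

const roundTripped: Chunk[] = [];
for await (const chunk of fromSSE(toSSE(source))) {
  roundTripped.push(chunk);
}
console.log(roundTripped.map((c) => c.type)); // ["text-delta", "finish"]
```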
- ## Type Safety
+ ## Deprecated Functions
 
- The [`toUIMessageStream()`](https://ai-sdk.dev/docs/reference/ai-sdk-core/stream-text#to-ui-message-stream) from [`streamText()`](https://ai-sdk.dev/docs/reference/ai-sdk-core/stream-text) returns a generic `ReadableStream<UIMessageChunk>`, which means the part types cannot be inferred automatically.
+ > [!WARNING]
+ > These functions are deprecated and will be removed in a future version. Use `pipe()` instead.
 
- To enable autocomplete and type-safety, pass your [`UIMessage`](https://ai-sdk.dev/docs/reference/ai-sdk-core/ui-message#creating-your-own-uimessage-type) type as a generic parameter:
+ ### `mapUIMessageStream`
 
  ```typescript
- import type { UIMessage, InferUITools } from 'ai';
+ import { mapUIMessageStream } from "ai-stream-utils";
 
- type MyUIMessageMetadata = {};
- type MyDataPart = {};
- type MyTools = InferUITools<typeof tools>;
+ const stream = mapUIMessageStream(result.toUIMessageStream(), ({ chunk }) => {
+   if (chunk.type === "text-delta") {
+     return { ...chunk, delta: chunk.delta.toUpperCase() };
+   }
+   return chunk;
+ });
+ ```
 
- type MyUIMessage = UIMessage<
-   MyUIMessageMetadata,
-   MyDataPart,
-   MyTools
- >;
+ ### `filterUIMessageStream`
 
- // Use MyUIMessage type when creating the UI message stream
- const uiStream = result.toUIMessageStream<MyUIMessage>();
+ ```typescript
+ import { filterUIMessageStream, includeParts } from "ai-stream-utils";
 
- // Type-safe filtering with autocomplete
  const stream = filterUIMessageStream(
-   uiStream,
-   includeParts(['text', 'tool-weather']) // Autocomplete works!
- );
-
- // Type-safe chunk mapping
- const stream = mapUIMessageStream(
-   uiStream,
-   ({ chunk, part }) => {
-     // part.type is typed based on MyUIMessage
-     return chunk;
-   }
+   result.toUIMessageStream(),
+   includeParts(["text", "tool-weather"]),
  );
  ```
 
- ## Client-Side Usage
-
- The transformed stream has the same type as the original UI message stream. You can consume it with [`useChat()`](https://ai-sdk.dev/docs/reference/ai-sdk-ui/use-chat) or [`readUIMessageStream()`](https://ai-sdk.dev/docs/reference/ai-sdk-ui/read-ui-message-stream).
-
- Since message parts may be different on the client vs. the server, you may need to reconcile message parts when the client sends messages back to the server.
-
- If you save messages to a database and configure `useChat()` to [only send the last message](https://ai-sdk.dev/docs/ai-sdk-ui/chatbot-message-persistence#sending-only-the-last-message), you can read existing messages from the database. This means the model will have access to all message parts, including filtered parts not available on the client.
-
- ## Part Type Mapping
-
- The transformations operate on [UIMessagePart](https://ai-sdk.dev/docs/reference/ai-sdk-core/ui-message#uimessagepart-types) types, which are derived from [UIMessageChunk](https://github.com/vercel/ai/blob/main/packages/ai/src/ui-message-stream/ui-message-chunks.ts) types:
-
- | Part Type | Chunk Types |
- | ----------------- | ------------------------------------- |
- | [`text`](https://ai-sdk.dev/docs/reference/ai-sdk-core/ui-message#textuipart) | `text-start`, `text-delta`, `text-end` |
- | [`reasoning`](https://ai-sdk.dev/docs/reference/ai-sdk-core/ui-message#reasoninguipart) | `reasoning-start`, `reasoning-delta`, `reasoning-end` |
- | [`tool-{name}`](https://ai-sdk.dev/docs/reference/ai-sdk-core/ui-message#tooluipart) | `tool-input-start`, `tool-input-delta`, `tool-input-available`, `tool-input-error`, `tool-output-available`, `tool-output-error` |
- | [`data-{name}`](https://ai-sdk.dev/docs/reference/ai-sdk-core/ui-message#datauipart) | `data-{name}` |
- | [`step-start`](https://ai-sdk.dev/docs/reference/ai-sdk-core/ui-message#stepstartuipart) | `start-step` |
- | [`file`](https://ai-sdk.dev/docs/reference/ai-sdk-core/ui-message#fileuipart) | `file` |
- | [`source-url`](https://ai-sdk.dev/docs/reference/ai-sdk-core/ui-message#sourceurluipart) | `source-url` |
- | [`source-document`](https://ai-sdk.dev/docs/reference/ai-sdk-core/ui-message#sourcedocumentuipart) | `source-document` |
-
- ### Control Chunks
-
- [Control chunks](https://github.com/vercel/ai/blob/main/packages/ai/src/ui-message-stream/ui-message-chunks.ts#L278-L293) always pass through regardless of filter/transform settings:
-
- - `start`: Stream start marker
- - `finish`: Stream finish marker
- - `abort`: Stream abort marker
- - `message-metadata`: Message metadata updates
- - `error`: Error messages
-
- ### Step Boundary Handling
-
- Step boundaries are handled automatically:
-
- 1. `start-step` is buffered until the first content chunk is encountered
- 2. If the first content chunk passes through, `start-step` is included
- 3. If the first content chunk is filtered out, `start-step` is also filtered out
- 4. `finish-step` is only included if the corresponding `start-step` was included
-
- ## API Reference
-
- ### `mapUIMessageStream`
+ ### `flatMapUIMessageStream`
 
  ```typescript
- function mapUIMessageStream<UI_MESSAGE extends UIMessage>(
-   stream: ReadableStream<UIMessageChunk>,
-   mapFn: MapUIMessageStreamFn<UI_MESSAGE>,
- ): AsyncIterableStream<InferUIMessageChunk<UI_MESSAGE>>
-
- type MapUIMessageStreamFn<UI_MESSAGE extends UIMessage> = (
-   input: MapInput<UI_MESSAGE>,
- ) => InferUIMessageChunk<UI_MESSAGE> | InferUIMessageChunk<UI_MESSAGE>[] | null;
-
- type MapInput<UI_MESSAGE extends UIMessage> = {
-   chunk: InferUIMessageChunk<UI_MESSAGE>;
-   part: InferUIMessagePart<UI_MESSAGE>;
- };
+ import { flatMapUIMessageStream, partTypeIs } from "ai-stream-utils";
+
+ const stream = flatMapUIMessageStream(
+   result.toUIMessageStream(),
+   partTypeIs("tool-weather"),
+   ({ part }) => {
+     if (part.state === "output-available") {
+       return {
+         ...part,
+         output: { ...part.output, temperature: toFahrenheit(part.output.temperature) },
+       };
+     }
+     return part;
+   },
+ );
  ```
 
- ### `flatMapUIMessageStream`
+ ## Type Safety
 
- ```typescript
- // Without predicate - buffer all parts
- function flatMapUIMessageStream<UI_MESSAGE extends UIMessage>(
-   stream: ReadableStream<UIMessageChunk>,
-   flatMapFn: FlatMapUIMessageStreamFn<UI_MESSAGE>,
- ): AsyncIterableStream<InferUIMessageChunk<UI_MESSAGE>>
-
- // With predicate - buffer only matching parts, pass through others
- function flatMapUIMessageStream<UI_MESSAGE extends UIMessage, PART extends InferUIMessagePart<UI_MESSAGE>>(
-   stream: ReadableStream<UIMessageChunk>,
-   predicate: FlatMapUIMessageStreamPredicate<UI_MESSAGE, PART>,
-   flatMapFn: FlatMapUIMessageStreamFn<UI_MESSAGE, PART>,
- ): AsyncIterableStream<InferUIMessageChunk<UI_MESSAGE>>
-
- type FlatMapUIMessageStreamFn<UI_MESSAGE extends UIMessage, PART = InferUIMessagePart<UI_MESSAGE>> = (
-   input: FlatMapInput<UI_MESSAGE, PART>,
-   context: FlatMapContext<UI_MESSAGE>,
- ) => InferUIMessagePart<UI_MESSAGE> | InferUIMessagePart<UI_MESSAGE>[] | null;
-
- type FlatMapInput<UI_MESSAGE extends UIMessage, PART = InferUIMessagePart<UI_MESSAGE>> = {
-   part: PART;
- };
-
- type FlatMapContext<UI_MESSAGE extends UIMessage> = {
-   index: number;
-   parts: InferUIMessagePart<UI_MESSAGE>[];
- };
- ```
+ The [`toUIMessageStream()`](https://ai-sdk.dev/docs/reference/ai-sdk-core/stream-text#to-ui-message-stream) from [`streamText()`](https://ai-sdk.dev/docs/reference/ai-sdk-core/stream-text) returns a generic `ReadableStream<UIMessageChunk>`, which means the part types cannot be inferred automatically.
 
- #### `partTypeIs`
+ To enable autocomplete and type-safety, pass your [`UIMessage`](https://ai-sdk.dev/docs/reference/ai-sdk-core/ui-message#creating-your-own-uimessage-type) type as a generic parameter:
 
  ```typescript
- function partTypeIs<UI_MESSAGE extends UIMessage, T extends InferUIMessagePartType<UI_MESSAGE>>(
-   type: T | T[],
- ): FlatMapUIMessageStreamPredicate<UI_MESSAGE, Extract<InferUIMessagePart<UI_MESSAGE>, { type: T }>>
+ import type { UIMessage, InferUITools } from "ai";
 
- type FlatMapUIMessageStreamPredicate<UI_MESSAGE extends UIMessage, PART extends InferUIMessagePart<UI_MESSAGE>> =
-   (part: InferUIMessagePart<UI_MESSAGE>) => boolean;
- ```
+ type MyUIMessageMetadata = {};
+ type MyDataPart = {};
+ type MyTools = InferUITools<typeof tools>;
 
- ### `filterUIMessageStream`
+ type MyUIMessage = UIMessage<MyUIMessageMetadata, MyDataPart, MyTools>;
 
- ```typescript
- function filterUIMessageStream<UI_MESSAGE extends UIMessage>(
-   stream: ReadableStream<UIMessageChunk>,
-   filterFn: FilterUIMessageStreamPredicate<UI_MESSAGE>,
- ): AsyncIterableStream<InferUIMessageChunk<UI_MESSAGE>>
-
- type FilterUIMessageStreamPredicate<UI_MESSAGE extends UIMessage> = (
-   input: MapInput<UI_MESSAGE>,
-   context: MapContext<UI_MESSAGE>,
- ) => boolean;
+ // Use MyUIMessage type when creating the UI message stream
+ const uiStream = result.toUIMessageStream<MyUIMessage>();
+
+ // Type-safe filtering with autocomplete
+ const stream = pipe<MyUIMessage>(uiStream)
+   .filter(includeParts(["text", "tool-weather"])) // Autocomplete works!
+   .map(({ chunk, part }) => {
+     // part.type is typed based on MyUIMessage
+     return chunk;
+   })
+   .toStream();
  ```
 
- #### `includeParts`
+ ## Client-Side Usage
 
- ```typescript
- function includeParts<UI_MESSAGE extends UIMessage>(
-   partTypes: Array<InferUIMessagePartType<UI_MESSAGE>>,
- ): FilterUIMessageStreamPredicate<UI_MESSAGE>
- ```
+ The transformed stream has the same type as the original UI message stream. You can consume it with [`useChat()`](https://ai-sdk.dev/docs/reference/ai-sdk-ui/use-chat) or [`readUIMessageStream()`](https://ai-sdk.dev/docs/reference/ai-sdk-ui/read-ui-message-stream).
 
- #### `excludeParts`
+ Since message parts may be different on the client vs. the server, you may need to reconcile message parts when the client sends messages back to the server.
 
- ```typescript
- function excludeParts<UI_MESSAGE extends UIMessage>(
-   partTypes: Array<InferUIMessagePartType<UI_MESSAGE>>,
- ): FilterUIMessageStreamPredicate<UI_MESSAGE>
- ```
+ If you save messages to a database and configure `useChat()` to [only send the last message](https://ai-sdk.dev/docs/ai-sdk-ui/chatbot-message-persistence#sending-only-the-last-message), you can read existing messages from the database. This means the model will have access to all message parts, including filtered parts not available on the client.