pipeai 0.1.0

# pipeai

A typed multi-agent workflow pipeline built on top of the [Vercel AI SDK v6](https://sdk.vercel.ai/). It provides two core primitives — **Agent** and **Workflow** — that compose into declarative, streamable AI pipelines with shared context and typed outputs.

Agents are pure AI SDK wrappers that return native `GenerateTextResult` / `StreamTextResult`. Workflows chain agents into pipelines with automatic stream merging, deterministic agent routing, and typed output extraction.

The library is ~1000 lines across 4 files. It's designed to be read, understood, and modified — a thin composition layer over the AI SDK, not a framework to learn around.

## Core Concepts

| Primitive | Purpose |
| ------------ | ------------------------------------------------------------------------------------------------------------------ |
| `Agent` | A pure AI SDK wrapper. Supports `generate()`, `stream()`, `asTool()`, and `asToolProvider()`. |
| `Workflow` | A typed pipeline that chains agents with `step()`, `branch()`, `foreach()`, `repeat()`, `catch()`, and `finally()`. |
| `defineTool` | A context-aware tool factory — injects runtime context into tool `execute` calls. |

## Installation

```bash
npm install pipeai
```

Peer dependencies:

```json
{
  "peerDependencies": {
    "ai": "^6.0.0",
    "zod": "^3.0.0 || ^4.0.0"
  }
}
```

## Agent

An `Agent` wraps AI SDK's `generateText` / `streamText` with typed context, input, and output. It returns native AI SDK result types — no custom wrappers to learn.

### Defining an agent

```ts
import { Agent } from "pipeai";
import { openai } from "@ai-sdk/openai";

type Ctx = {
  userId: string;
  db: Database;
};

const assistant = new Agent<Ctx>({
  id: "assistant",
  model: openai("gpt-4o"),
  system: "You are a helpful assistant.",
  prompt: (ctx, input) => input,
  tools: { search, writeFile },
});
```

### Running an agent

```ts
// Non-streaming — returns a native GenerateTextResult
const result = await assistant.generate(ctx, "Help me refactor the auth module");
result.text; // string
result.usage; // LanguageModelUsage
result.steps; // step history
result.toolCalls; // tools that were called

// Streaming — returns a native StreamTextResult
const streamed = await assistant.stream(ctx, "Explain quantum computing");
for await (const chunk of streamed.textStream) {
  process.stdout.write(chunk);
}
```

### Structured output

```ts
import { Output } from "ai";
import { z } from "zod";

const classificationSchema = z.object({
  priority: z.enum(["low", "medium", "high", "critical"]),
  category: z.string(),
  summary: z.string(),
});

const classifier = new Agent<Ctx>({
  id: "classifier",
  input: z.object({ title: z.string(), body: z.string() }),
  output: Output.object({ schema: classificationSchema }),
  model: openai("gpt-4o-mini"),
  system: "Classify support tickets.",
  prompt: (ctx, input) => `Title: ${input.title}\n\nBody: ${input.body}`,
});

const result = await classifier.generate(ctx, { title: "App crash", body: "Crashes on save" });
result.output; // { priority: "high", category: "bug", summary: "..." }
```

### Dynamic configuration (Resolvable)

Most config fields accept a static value or a `(ctx, input) => value` function:

```ts
const agent = new Agent<Ctx>({
  id: "adaptive",
  model: (ctx) => ctx.isPremium ? openai("gpt-4o") : openai("gpt-4o-mini"),
  system: (ctx) => `You assist ${ctx.userName}. Role: ${ctx.role}.`,
  tools: (ctx) => {
    const base = { search: searchTool };
    if (ctx.isAdmin) return { ...base, deleteUser: deleteUserTool };
    return base;
  },
  prompt: (ctx, input) => input,
});
```
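
The resolution rule behind this is small enough to sketch. Here is a hypothetical plain-TypeScript illustration of the pattern (`Resolvable` and `resolve` below are illustrative stand-ins, not pipeai's actual internals):

```typescript
// Illustrative sketch: a config field is either a static value or a
// (ctx, input) => value function that is called at run time.
type Resolvable<T, Ctx, In> = T | ((ctx: Ctx, input: In) => T);

function resolve<T, Ctx, In>(
  value: Resolvable<T, Ctx, In>,
  ctx: Ctx,
  input: In
): T {
  // A function is treated as a resolver and invoked with the runtime context;
  // anything else passes through unchanged. (This check is ambiguous if T is
  // itself a function type, which is fine for a sketch.)
  return typeof value === "function"
    ? (value as (ctx: Ctx, input: In) => T)(ctx, input)
    : value;
}

const ctx = { isPremium: true };
const model = resolve<string, typeof ctx, string>(
  (c) => (c.isPremium ? "gpt-4o" : "gpt-4o-mini"),
  ctx,
  "hi"
);
// model === "gpt-4o"
const system = resolve<string, typeof ctx, string>("You are helpful.", ctx, "hi");
// system === "You are helpful."
```

The same rule applies uniformly to `model`, `system`, `tools`, and the other `Resolvable` fields listed below.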

### AI SDK callbacks

Same callback names as AI SDK v6, extended with `ctx` and `input`. The AI SDK event payload is available as `result`:

```ts
const agent = new Agent<Ctx>({
  id: "monitored",
  model: openai("gpt-4o"),
  prompt: (ctx, input) => input,
  onStepFinish: ({ result, ctx }) => {
    console.log(`Step done, used ${result.usage.totalTokens} tokens`);
  },
  onFinish: ({ result, ctx }) => {
    console.log(`Total: ${result.totalUsage.totalTokens} tokens`);
  },
  onError: ({ error, ctx }) => {
    ctx.logger.error("Agent failed", error);
  },
});
```

### Configuration options

| Option | Type | Description |
| -------------- | -------------------------- | ------------------------------------------------------------------ |
| `id` | `string` | Agent identifier. |
| `description` | `string` | Agent description (used by `asTool()` for the tool description). |
| `input` | `ZodType` | Input schema. Required for `asTool()`. Infers `TInput`. |
| `output` | `Output` | AI SDK Output (e.g. `Output.object({ schema })`). Infers `TOutput`. |
| `model` | `Resolvable` | Language model. Static or `(ctx, input) => model`. |
| `system` | `Resolvable` | System prompt. |
| `prompt` | `Resolvable` | String prompt. Mutually exclusive with `messages`. |
| `messages` | `Resolvable` | Message array. Mutually exclusive with `prompt`. |
| `tools` | `Resolvable` | Tool map. Supports `Tool`, `ToolProvider`, and `agent.asTool()`. |
| `activeTools` | `Resolvable` | Subset of tool names to enable. |
| `toolChoice` | `Resolvable` | Tool choice strategy. Static or `(ctx, input) => toolChoice`. |
| `stopWhen` | `Resolvable` | Condition for stopping the tool loop. Static or `(ctx, input) => condition`. |
| `onStepFinish` | `({ result, ctx, input })` | Called after each step. |
| `onFinish` | `({ result, ctx, input })` | Called when all steps complete. |
| `onError` | `({ error, ctx, input })` | Called on error. |
| `...` | AI SDK options | All other `streamText` / `generateText` options pass through (e.g. `temperature`, `maxTokens`, `maxRetries`, `headers`, `prepareStep`, `onChunk`). |

## `asTool()` — Agent as Tool

`asTool()` compiles an agent into a standard AI SDK `Tool`. The parent agent's LLM tool loop handles routing — no dedicated router needed.

```ts
const codingAgent = new Agent<Ctx>({
  id: "coding",
  description: "Writes and modifies code.",
  input: z.object({
    task: z.string().describe("What code to write"),
    language: z.string().optional(),
  }),
  model: openai("gpt-4o"),
  prompt: (ctx, input) => `Task: ${input.task}`,
  tools: { writeFile, readFile },
});

const qaAgent = new Agent<Ctx>({
  id: "qa",
  description: "Answers technical questions.",
  input: z.object({ question: z.string() }),
  model: openai("gpt-4o"),
  prompt: (ctx, input) => input.question,
  tools: { readFile, search },
});

// Parent agent uses sub-agents as tools
const orchestrator = new Agent<Ctx>({
  id: "orchestrator",
  model: openai("gpt-4o"),
  system: "Delegate work to the right specialist.",
  prompt: (ctx, input) => input,
  tools: (ctx) => ({
    coding: codingAgent.asTool(ctx),
    qa: qaAgent.asTool(ctx),
  }),
});

const result = await orchestrator.generate(ctx, "Write a fizzbuzz function in Python");
```

Custom output extraction:

```ts
codingAgent.asTool(ctx, {
  mapOutput: (result) => ({
    text: result.text,
    files: result.steps
      .flatMap(s => s.toolResults)
      .filter(tr => tr.toolName === "writeFile")
      .map(tr => tr.args.path),
  }),
});
```

**Note:** `asTool()` uses `generate()` internally — sub-agent execution is non-streaming. This is an AI SDK tool loop constraint. For streaming multi-agent workflows, use `step()` with `branch()` instead.

## `asToolProvider()` — Deferred Context

`asTool(ctx)` bakes the context in at call time. `asToolProvider()` defers context resolution — the tool is created with the correct context when another agent's tool resolution runs:

```ts
const orchestrator = new Agent<Ctx>({
  id: "orchestrator",
  model: openai("gpt-4o"),
  system: "Delegate work to the right specialist.",
  prompt: (ctx, input) => input,
  tools: {
    // Context resolved when the orchestrator's tools are resolved
    coding: codingAgent.asToolProvider(),
    qa: qaAgent.asToolProvider(),
  },
});
```

This is useful when the agent is defined at module scope but the context isn't available until runtime. `asToolProvider()` returns an `IToolProvider` — the same interface used by `defineTool`.

## `defineTool` — Context-Aware Tools

`defineTool` wraps a tool definition so the agent's runtime context is injected into every `execute` call. The `input` field maps to AI SDK's `parameters`:

```ts
import { defineTool } from "pipeai";
import { z } from "zod";

type Ctx = { db: Database; userId: string };

const define = defineTool<Ctx>();

const searchOrders = define({
  description: "Search user orders",
  input: z.object({ query: z.string() }),
  execute: async ({ query }, ctx) => {
    return ctx.db.orders.search(ctx.userId, query);
  },
});

const cancelOrder = define({
  description: "Cancel an order by ID",
  input: z.object({ orderId: z.string() }),
  execute: async ({ orderId }, ctx) => {
    return ctx.db.orders.cancel(ctx.userId, orderId);
  },
});

// Mix with plain AI SDK tools freely
const agent = new Agent<Ctx>({
  id: "support",
  model: openai("gpt-4o"),
  prompt: (ctx, input) => input,
  tools: { searchOrders, cancelOrder, calculator: plainTool },
});
```
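
The injection itself is just a closure over the context. Here is a hypothetical sketch of the idea in plain TypeScript (`CtxTool` and `bindCtx` are illustrative names, not the library's real implementation):

```typescript
// Illustrative sketch: a two-argument execute(input, ctx) is bound to a
// runtime ctx, producing the usual one-argument tool shape the model sees.
type CtxTool<Ctx, In, Out> = {
  description: string;
  execute: (input: In, ctx: Ctx) => Out;
};

function bindCtx<Ctx, In, Out>(tool: CtxTool<Ctx, In, Out>, ctx: Ctx) {
  return {
    description: tool.description,
    // The context is captured here; the model only ever supplies `input`.
    execute: (input: In) => tool.execute(input, ctx),
  };
}

const searchOrders: CtxTool<{ userId: string }, { query: string }, string> = {
  description: "Search user orders",
  execute: ({ query }, ctx) => `${ctx.userId}:${query}`,
};

const bound = bindCtx(searchOrders, { userId: "u1" });
// bound.execute({ query: "shoes" }) === "u1:shoes"
```

In pipeai the binding happens when the agent resolves its tools, so each run sees that run's context.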

## Workflow

A `Workflow` chains agents and transformation steps into a typed pipeline. Context is read-only — agents communicate through outputs.

### Building a workflow

```ts
import { Workflow } from "pipeai";

const pipeline = Workflow.create<Ctx>()
  .step(classifier)
  .step("build-prompt", ({ input }) => {
    return `Handle this ${input.priority} ${input.category} ticket: ${input.summary}`;
  })
  .step(supportAgent)
  .step("save", async ({ input, ctx }) => {
    await ctx.db.responses.save(input);
    return input;
  });
```

### Running a workflow

```ts
// Non-streaming — calls agent.generate() at each step
const { output } = await pipeline.generate(ctx, initialInput);
```

```ts
// Streaming — calls agent.stream() at each step and merges everything
// into a single ReadableStream
const { stream, output } = pipeline.stream(ctx, initialInput);

// `output` resolves when the pipeline completes
output.then((finalOutput) => console.log(finalOutput));

return new Response(stream);
```

### Nested workflows

Workflows can be passed as steps into other workflows. The nested workflow's steps execute within the parent's runtime state — streams merge naturally, and errors propagate to the parent's `catch()`:

```ts
// A reusable sub-workflow
const classifyAndRoute = Workflow.create<Ctx>()
  .step(classifier, {
    handleStream: async ({ result }) => { await result.text; },
  })
  .branch({
    select: ({ input }) => input.agent,
    agents: { bug: bugAgent, feature: featureAgent },
  });

// Compose into a larger pipeline
const pipeline = Workflow.create<Ctx>()
  .step(classifyAndRoute) // nested workflow as a step
  .step("save", async ({ input, ctx }) => {
    await ctx.db.save(input);
    return input;
  })
  .catch("fallback", () => "Something went wrong.");
```

Nested workflows can be arbitrarily deep — a workflow step can contain another workflow that itself contains nested workflows.

### Predicate branching via `branch()`

Route to different agents based on runtime conditions. The first matching `when` wins. A case without `when` acts as the default:

```ts
const pipeline = Workflow.create<Ctx>()
  .step(classifier)
  .branch([
    { when: ({ ctx }) => ctx.isPremium, agent: premiumAgent },
    { agent: standardAgent }, // default
  ]);
```

All branches must produce the same output type — enforced at compile time. This eliminates the type-safety holes that per-step conditionals create.
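
First-match-wins selection can be pictured in plain TypeScript. A hypothetical sketch (`selectCase` is an illustrative name, not pipeai's internals):

```typescript
// Illustrative sketch: walk the cases in order, return the first whose
// `when` predicate passes; a case without `when` always matches (the default).
type Case<Ctx, T> = { when?: (args: { ctx: Ctx }) => boolean; agent: T };

function selectCase<Ctx, T>(cases: Case<Ctx, T>[], ctx: Ctx): T {
  for (const c of cases) {
    if (!c.when || c.when({ ctx })) return c.agent;
  }
  throw new Error("no branch matched and no default case was given");
}

const picked = selectCase(
  [
    { when: ({ ctx }) => ctx.isPremium, agent: "premiumAgent" },
    { agent: "standardAgent" }, // default
  ],
  { isPremium: false }
);
// picked === "standardAgent"
```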

### Key-based routing via `branch()`

Route to different agents based on the previous step's output. Type-safe — the `select` return type must match the `agents` keys:

```ts
const classifierOutput = z.object({
  agent: z.enum(["bug", "feature", "question"]),
  reasoning: z.string(),
});

const classifier = new Agent<Ctx>({
  id: "classifier",
  output: Output.object({ schema: classifierOutput }),
  model: openai("gpt-4o-mini"),
  system: "Classify the user's request. Pick the best agent.",
  messages: (ctx) => ctx.chatHistory,
});

const pipeline = Workflow.create<Ctx>()
  .step(classifier)
  .branch({
    select: ({ input }) => input.agent, // must return "bug" | "feature" | "question"
    agents: {
      bug: bugAgent,
      feature: featureAgent,
      question: questionAgent,
    },
  })
  .step("save", async ({ input, ctx }) => {
    await ctx.db.save(input);
    return input;
  });

const { stream } = pipeline.stream(ctx);
return new Response(stream);
```

### Custom output extraction

Separate callbacks for `generate()` vs `stream()` — each receives the correct result type:

```ts
const pipeline = Workflow.create<Ctx>()
  .step(codingAgent, {
    // Called during workflow.generate() — GenerateTextResult (sync access)
    mapGenerateResult: ({ result }) => ({
      text: result.text,
      files: result.steps
        .flatMap(s => s.toolResults)
        .filter(tr => tr.toolName === "writeFile")
        .map(tr => tr.args.path),
    }),
    // Called during workflow.stream() — StreamTextResult (async access)
    mapStreamResult: async ({ result }) => ({
      text: await result.text,
      files: [],
    }),
  });
```

### Per-step result access

Access the full AI SDK result at each step — useful for persistence, logging, or analytics without coupling that logic to agent definitions:

```ts
const pipeline = Workflow.create<Ctx>()
  .step(supportAgent, {
    // Called during workflow.generate()
    onGenerateResult: async ({ result, ctx, input }) => {
      await ctx.db.conversations.save(ctx.userId, {
        role: "assistant",
        content: result.text,
        toolCalls: result.toolCalls,
      });
    },
    // Called during workflow.stream()
    onStreamResult: async ({ result, ctx, input }) => {
      await ctx.db.conversations.save(ctx.userId, {
        role: "assistant",
        content: await result.text,
      });
    },
  });
```

### Fine-grained stream control

By default, every agent's output is merged into the workflow stream via `writer.merge(result.toUIMessageStream())`. Use `handleStream` to override this — for example, to suppress intermediate agents so only the final response streams to the client:

```ts
const pipeline = Workflow.create<Ctx>()
  // Suppress the classifier's stream — the user shouldn't see
  // the structured classification output, only the final response
  .step(classifier, {
    handleStream: async ({ result }) => {
      await result.text; // consume the stream without forwarding it
    },
  })
  .branch({
    select: ({ input }) => input.agent,
    agents: { bug: bugAgent, feature: featureAgent, question: questionAgent },
  });
// Only the selected agent's response streams to the client
```
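
The consume-versus-forward distinction can be sketched without any streaming machinery. A hypothetical illustration in plain TypeScript, where an array stands in for the client-bound stream:

```typescript
// Illustrative sketch: each step either forwards its chunks to the shared
// client output or drains them silently. Either way the step fully consumes
// its own stream so the pipeline can continue.
function runStep(chunks: string[], clientOut: string[], forward: boolean): string {
  let text = "";
  for (const c of chunks) {
    text += c;                      // always consume
    if (forward) clientOut.push(c); // only forwarded steps reach the client
  }
  return text;
}

const client: string[] = [];
runStep(["{", '"agent":"bug"', "}"], client, false); // classifier: suppressed
runStep(["Here is ", "the fix."], client, true);     // specialist: streamed
// client is ["Here is ", "the fix."]
```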

### Array iteration via `foreach()`

`foreach()` maps each element of an array output through an agent or workflow. Items run in generate mode to avoid interleaved streams:

```ts
const summarizer = new Agent<Ctx, string, string>({
  id: "summarizer",
  model: openai("gpt-4o-mini"),
  prompt: (ctx, input) => `Summarize: ${input}`,
});

const pipeline = Workflow.create<Ctx>()
  .step("fetch-articles", async ({ ctx }) => {
    return ctx.db.articles.getRecent(10); // string[]
  })
  .foreach(summarizer) // output: string[]
  .step("combine", ({ input }) => input.join("\n\n"));
```

Concurrent processing with batched parallelism:

```ts
// Process 3 items at a time
.foreach(summarizer, { concurrency: 3 })
```

Works with nested workflows too:

```ts
const processItem = Workflow.create<Ctx, string>()
  .step(analyzeAgent)
  .step(enrichAgent);

pipeline.foreach(processItem, { concurrency: 5 });
```

**Type safety:** `foreach()` uses `ElementOf<TOutput>` to extract the array element type. If the previous step doesn't produce an array, the call is rejected at compile time.

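The batching behavior can be sketched in plain TypeScript. A hypothetical illustration of batched parallelism and the element-type extraction (illustrative code, not pipeai's internals):

```typescript
// Illustrative element-type extraction, in the spirit of ElementOf<TOutput>:
type ElementOf<T> = T extends readonly (infer E)[] ? E : never;

// Illustrative batched parallelism: process `concurrency` items at a time,
// preserving input order in the results. Batches run one after another.
async function mapWithConcurrency<In, Out>(
  items: In[],
  fn: (item: In) => Promise<Out>,
  concurrency: number
): Promise<Out[]> {
  const results: Out[] = [];
  for (let i = 0; i < items.length; i += concurrency) {
    const batch = items.slice(i, i + concurrency);
    results.push(...(await Promise.all(batch.map(fn))));
  }
  return results;
}

const summaries = await mapWithConcurrency(
  ["a", "bb", "ccc"],
  async (s) => s.length, // stand-in for an agent call per item
  2
);
// summaries is [1, 2, 3]
```
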
### Conditional loops via `repeat()`

`repeat()` runs an agent or workflow in a loop until a condition is met. The body's output feeds back as input — same type in, same type out:

```ts
const refiner = new Agent<Ctx, string, string>({
  id: "refiner",
  model: openai("gpt-4o"),
  system: "Improve the given text. Make it clearer and more concise.",
  prompt: (ctx, input) => input,
});

const pipeline = Workflow.create<Ctx>()
  .step("draft", ({ ctx }) => ctx.initialDraft)
  .repeat(refiner, {
    until: ({ output, iterations }) => {
      // Stop when quality is good enough or after 3 iterations
      return output.length < 500 || iterations >= 3;
    },
  });
```

Use `while` for the opposite condition (repeat while true, stop when false):

```ts
.repeat(refiner, {
  while: ({ output }) => output.includes("TODO"), // keep going while TODOs remain
  maxIterations: 5, // safety limit (default: 10)
})
```

The `until` and `while` options are mutually exclusive — TypeScript enforces this at compile time.

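The loop semantics can be pictured as a small driver function. A hypothetical plain-TypeScript sketch (illustrative only; it shadows the library's `WorkflowLoopError` with a local stand-in):

```typescript
// Local stand-in for the library's error class, for illustration only.
class WorkflowLoopError extends Error {}

// Illustrative sketch: feed the output back as input, stop when `until`
// passes, and throw once the safety limit is exceeded.
async function repeatLoop<T>(
  body: (input: T) => Promise<T>,
  input: T,
  until: (args: { output: T; iterations: number }) => boolean,
  maxIterations = 10
): Promise<T> {
  let current = input;
  for (let i = 1; i <= maxIterations; i++) {
    current = await body(current);
    if (until({ output: current, iterations: i })) return current;
  }
  throw new WorkflowLoopError(`exceeded ${maxIterations} iterations`);
}

const refined = await repeatLoop(
  async (s: string) => s.replace("TODO", "done"), // one "refinement" pass
  "a TODO and another TODO",
  ({ output }) => !output.includes("TODO")
);
// refined === "a done and another done"
```
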
When `maxIterations` is exceeded, a `WorkflowLoopError` is thrown — catchable by `.catch()`:

```ts
.repeat(agent, { until: () => false, maxIterations: 3 })
.catch("loop-safety", ({ error }) => {
  if (error instanceof WorkflowLoopError) {
    return "Reached iteration limit, returning best result.";
  }
  throw error;
})
```

In stream mode, each iteration streams to the client — the user sees the refinement in real time.

### Error handling

```ts
const pipeline = Workflow.create<Ctx>()
  .step(classifier)
  .step(supportAgent)
  .catch("fallback", ({ error, ctx, stepId }) => {
    ctx.logger.error(`Step "${stepId}" failed`, error);
    return "Sorry, something went wrong.";
  })
  .finally("cleanup", ({ ctx }) => {
    ctx.metrics.recordPipelineRun();
  });
```

### Stream callbacks

`stream()` accepts the same callbacks as AI SDK's `createUIMessageStream` — `onError` for custom error messages and `onFinish` for post-stream cleanup:

```ts
const { stream, output } = pipeline.stream(ctx, initialInput, {
  onError: (error) => {
    // Return a user-facing error message (default: generic error string)
    console.error("Stream error", error);
    return "An error occurred while processing your request.";
  },
  onFinish: async () => {
    // Called when the stream closes — useful for analytics, cleanup
    await analytics.track("workflow-stream-complete");
  },
});
```

### Builder methods

| Method | Description |
| ----------------------------- | --------------------------------------------------------------------------- |
| `.step(agent, options?)` | Execute an agent. Options: `mapGenerateResult`, `mapStreamResult`, `onGenerateResult`, `onStreamResult`, `handleStream`. |
| `.step(workflow)` | Execute a nested workflow. Its steps run within the parent's runtime state. |
| `.step(id, fn)` | Transform the output. `fn` receives `{ ctx, input }` and returns the new output. |
| `.branch([...cases])` | Predicate routing. The first `when` match wins; a case without `when` is the default. |
| `.branch({ select, agents })` | Key routing. `select` returns a key; the matching agent runs. |
| `.foreach(target, opts?)` | Map each array element through an agent or workflow. `opts.concurrency` controls parallelism (default: 1). |
| `.repeat(target, opts)` | Loop an agent or workflow. Use `{ until }` or `{ while }` (mutually exclusive). `maxIterations` defaults to 10. |
| `.catch(id, fn)` | Handle errors. `fn` receives `{ error, ctx, lastOutput, stepId }` and returns a recovery value. |
| `.finally(id, fn)` | Always runs. `fn` receives `{ ctx }`. |

### Output flow

Output flows through the pipeline: each `step()` or `branch()` produces a new output that becomes the next step's `input`. `finally()` preserves the existing output.

Auto-extraction priority for `step()` with an agent:

1. Explicit `mapGenerateResult` / `mapStreamResult` on the step options
2. `result.output`, if the agent has a structured `output` set
3. `result.text` as a fallback

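The priority order can be written out as a small function. A hypothetical plain-TypeScript illustration (`extractOutput` and `StepResult` are illustrative names, not pipeai's real code):

```typescript
// Illustrative sketch of the extraction priority: an explicit mapper wins,
// then a structured `output`, then the plain `text` fallback.
type StepResult = { text: string; output?: unknown };

function extractOutput(
  result: StepResult,
  mapResult?: (args: { result: StepResult }) => unknown
): unknown {
  if (mapResult) return mapResult({ result });           // 1. explicit mapper
  if (result.output !== undefined) return result.output; // 2. structured output
  return result.text;                                    // 3. text fallback
}

const r = { text: "raw text", output: { priority: "high" } };
// extractOutput(r)                              → { priority: "high" }
// extractOutput({ text: "hi" })                 → "hi"
// extractOutput(r, ({ result }) => result.text) → "raw text"
```
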
## Composition Patterns

| Pattern | Who decides? | Streaming? | Use case |
| ---------------- | ------------------ | ------------------------ | -------------------------------------------------------------- |
| `asTool()` | LLM (tool loop) | Sub-agents don't stream | LLM picks which agent(s) to call, can loop |
| `branch()` | Deterministic | Full streaming | Previous output or runtime conditions determine the next agent |
| `step(workflow)` | Deterministic | Full streaming | Compose reusable sub-workflows into larger pipelines |
| `foreach()` | Deterministic | Items don't stream | Process each element of an array through an agent or workflow |
| `repeat()` | Condition function | Each iteration streams | Iterative refinement until a quality threshold is met |

## Full Example

```ts
import { Agent, Workflow, defineTool } from "pipeai";
import { Output, type ModelMessage } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

type Ctx = {
  chatHistory: ModelMessage[];
  db: Database;
  userId: string;
};

// 1. Define context-aware tools
const define = defineTool<Ctx>();

const searchLogs = define({
  description: "Search application logs",
  input: z.object({ query: z.string() }),
  execute: async ({ query }, ctx) => ctx.db.logs.search(query),
});

const createTicket = define({
  description: "Create a support ticket",
  input: z.object({ title: z.string(), body: z.string() }),
  execute: async ({ title, body }, ctx) => ctx.db.tickets.create(ctx.userId, title, body),
});

// 2. Define classifier
const classifier = new Agent<Ctx>({
  id: "classifier",
  output: Output.object({
    schema: z.object({
      agent: z.enum(["bug", "feature", "question"]),
      reasoning: z.string(),
    }),
  }),
  model: openai("gpt-4o-mini"),
  system: "Classify the user's request. Pick the best agent.",
  messages: (ctx) => ctx.chatHistory,
});

// 3. Define specialist agents
const bugAgent = new Agent<Ctx>({
  id: "bug",
  model: openai("gpt-4o"),
  system: "You help users debug issues.",
  messages: (ctx) => ctx.chatHistory,
  tools: { searchLogs, createTicket },
});

const featureAgent = new Agent<Ctx>({
  id: "feature",
  model: openai("gpt-4o"),
  system: "You help with feature requests.",
  messages: (ctx) => ctx.chatHistory,
});

const questionAgent = new Agent<Ctx>({
  id: "question",
  model: openai("gpt-4o"),
  system: "You answer general questions.",
  messages: (ctx) => ctx.chatHistory,
});

// 4. Compose workflow
const pipeline = Workflow.create<Ctx>()
  // Classify silently — don't stream the structured JSON to the client
  .step(classifier, {
    handleStream: async ({ result }) => {
      await result.text;
    },
  })
  // Route to the right specialist based on classification
  .branch({
    select: ({ input }) => input.agent,
    agents: { bug: bugAgent, feature: featureAgent, question: questionAgent },
    // Persist the agent's full result for conversation history
    onGenerateResult: async ({ result, ctx }) => {
      await ctx.db.conversations.append(ctx.userId, {
        role: "assistant",
        content: result.text,
        toolCalls: result.toolCalls,
      });
    },
    onStreamResult: async ({ result, ctx }) => {
      await ctx.db.conversations.append(ctx.userId, {
        role: "assistant",
        content: await result.text,
      });
    },
  })
  .catch("fallback", ({ error, ctx, stepId }) => {
    console.error(`Step "${stepId}" failed`, error);
    return "Sorry, something went wrong. Please try again.";
  })
  .finally("cleanup", ({ ctx }) => {
    ctx.db.audit.log(ctx.userId, "pipeline-complete");
  });

// 5. Execute with streaming
const ctx = { chatHistory: messages, db: myDb, userId: "user-123" };

const { stream, output } = pipeline.stream(ctx, undefined, {
  onError: (error) => {
    console.error("Stream error", error);
    return "Something went wrong.";
  },
});
return new Response(stream);
```

## License

MIT