blazen 0.1.96 → 0.1.98

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (4)
  1. package/README.md +355 -121
  2. package/index.d.ts +607 -20
  3. package/index.js +58 -52
  4. package/package.json +3 -2
package/README.md CHANGED
@@ -4,7 +4,7 @@
  [![Node >= 18](https://img.shields.io/badge/node-%3E%3D18-brightgreen.svg)](https://nodejs.org/)
  [![License: AGPL-3.0](https://img.shields.io/badge/License-AGPL--3.0-blue.svg)](https://opensource.org/licenses/AGPL-3.0)

- Event-driven AI workflow engine for Node.js, powered by a Rust core via napi-rs.
+ Event-driven AI workflow engine for Node.js and TypeScript, powered by a Rust core via napi-rs. Define workflows as a graph of async steps connected by typed events. Built-in LLM integration, streaming, pause/resume, and fan-out.

  ---

@@ -16,248 +16,482 @@ pnpm add blazen
  npm install blazen
  ```

+ No native compilation required -- prebuilt binaries are provided for Linux (x86_64, aarch64) and macOS (x86_64, Apple Silicon).
+
  ---

  ## Quick Start

- Build a workflow by registering steps, each of which handles one or more event types and returns the next event. The reserved event types `blazen::StartEvent` and `blazen::StopEvent` mark the entry and exit points.
+ A workflow is a directed graph of **steps**. Each step listens for one or more event types and returns the next event. The reserved types `"blazen::StartEvent"` and `"blazen::StopEvent"` mark the entry and exit points.
+
+ Events are plain objects with a `type` field. All other fields are your data.

  ```typescript
- import { Workflow } from "blazen";
+ import { Workflow, Context } from "blazen";
+ import type { JsWorkflowResult } from "blazen";

- const workflow = new Workflow("hello-world");
+ const wf = new Workflow("hello");

- workflow.addStep("greet", ["blazen::StartEvent"], async (event, ctx) => {
-   const name = event.name ?? "world";
-   await ctx.set("greeted", name);
-   return { type: "blazen::StopEvent", result: `Hello, ${name}!` };
+ wf.addStep("parse", ["blazen::StartEvent"], async (event: Record<string, any>, ctx: Context) => {
+   return { type: "GreetEvent", name: event.name || "World" };
  });

- const result = await workflow.run({ name: "Blazen" });
- console.log(result.data); // "Hello, Blazen!"
+ wf.addStep("greet", ["GreetEvent"], async (event: Record<string, any>, ctx: Context) => {
+   return { type: "blazen::StopEvent", result: { greeting: `Hello, ${event.name}!` } };
+ });
+
+ const result: JsWorkflowResult = await wf.run({ name: "Blazen" });
+ console.log(result.type); // "blazen::StopEvent"
+ console.log(result.data); // { greeting: "Hello, Blazen!" }
  ```

+ **Key concepts:**
+
+ - `addStep(name, eventTypes, handler)` -- `eventTypes` is a `string[]` of event types this step handles.
+ - The handler receives `(event, ctx)` and returns the next event object, an array of events, or `null`.
+ - `result.type` is the final event type (typically `"blazen::StopEvent"`).
+ - `result.data` is the payload you passed as `result` inside the `StopEvent`.
+
  ---

  ## Multi-Step Workflows

- Steps communicate by emitting events. Any step whose `eventTypes` list includes a given event type will be invoked when that event is fired.
+ Steps communicate by emitting custom events. Any step whose `eventTypes` list includes a given event type will be invoked when that event fires.

  ```typescript
  import { Workflow } from "blazen";

- const workflow = new Workflow("pipeline");
+ const wf = new Workflow("pipeline");
+
+ wf.addStep("extract", ["blazen::StartEvent"], async (event, ctx) => {
+   const raw = event.text;
+   await ctx.set("raw", raw);
+   return { type: "CleanEvent", text: raw.trim().toLowerCase() };
+ });

- workflow.addStep("extract", ["blazen::StartEvent"], async (event, ctx) => {
-   await ctx.set("input", event.text);
-   return { type: "analyze", payload: event.text };
+ wf.addStep("analyze", ["CleanEvent"], async (event, ctx) => {
+   const wordCount = event.text.split(/\s+/).length;
+   return { type: "SummarizeEvent", text: event.text, wordCount };
  });

- workflow.addStep("analyze", ["analyze"], async (event, ctx) => {
-   const input = await ctx.get("input");
-   const summary = `Analyzed: ${input}`;
-   return { type: "blazen::StopEvent", result: summary };
+ wf.addStep("summarize", ["SummarizeEvent"], async (event, ctx) => {
+   const raw = await ctx.get("raw");
+   return {
+     type: "blazen::StopEvent",
+     result: {
+       original: raw,
+       cleaned: event.text,
+       wordCount: event.wordCount,
+     },
+   };
  });

- const result = await workflow.run({ text: "some content" });
+ const result = await wf.run({ text: " Hello World " });
  console.log(result.data);
+ // { original: " Hello World ", cleaned: "hello world", wordCount: 2 }
  ```

  ---

- ## LLM Integration
+ ## Event Streaming

- `CompletionModel` provides a unified interface to a wide range of LLM providers. Pass a model instance into your step's closure through a shared variable or via `ctx`.
+ Steps can push intermediate events to external consumers via `ctx.writeEventToStream()`. Use `runStreaming(input, callback)` to receive them as they arrive.

  ```typescript
- import { Workflow, CompletionModel } from "blazen";
+ import { Workflow } from "blazen";

- const model = CompletionModel.openai(process.env.OPENAI_API_KEY!);
+ const wf = new Workflow("streaming");

- const workflow = new Workflow("llm-workflow");
+ wf.addStep("process", ["blazen::StartEvent"], async (event, ctx) => {
+   await ctx.writeEventToStream({ type: "Progress", message: "Starting..." });

- workflow.addStep("chat", ["blazen::StartEvent"], async (event, ctx) => {
-   const response = await model.complete([
-     { role: "system", content: "You are a helpful assistant." },
-     { role: "user", content: event.question },
-   ]);
+   // ... do work ...
+
+   await ctx.writeEventToStream({ type: "Progress", message: "Halfway done." });

-   return { type: "blazen::StopEvent", result: response.content };
+   // ... more work ...
+
+   await ctx.writeEventToStream({ type: "Progress", message: "Complete." });
+
+   return { type: "blazen::StopEvent", result: { status: "done" } };
  });

- const result = await workflow.run({ question: "What is 2 + 2?" });
- console.log(result.data);
+ const result = await wf.runStreaming({}, (event) => {
+   // Called for every event published via ctx.writeEventToStream()
+   console.log(`[stream] ${event.type}: ${event.message}`);
+ });
+
+ console.log(result.data); // { status: "done" }
+ ```
+
+ `writeEventToStream` publishes to external consumers only. It does **not** route events through the internal step registry. Use `ctx.sendEvent()` for internal routing.
+
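To make the distinction concrete, here is a minimal sketch assuming the step and `Context` APIs shown above; the `"Parsed"` event type and its `doc` field are illustrative only:

```typescript
wf.addStep("ingest", ["blazen::StartEvent"], async (event, ctx) => {
  // Observers attached via runStreaming()/streamEvents() see this event...
  await ctx.writeEventToStream({ type: "Progress", message: "ingesting" });

  // ...but only this call routes an event to the step registered for "Parsed".
  await ctx.sendEvent({ type: "Parsed", doc: event.doc });
  return null; // nothing further is emitted from the return value
});
```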
+ ---
+
+ ## LLM Integration
+
+ `CompletionModel` provides a unified interface to 15 LLM providers. Create a model instance with a static factory method and call `complete()` or `completeWithOptions()`. All messages and responses are fully typed.
+
+ ### ChatMessage and Role
+
+ Build messages with the `ChatMessage` class and `Role` enum:
+
+ ```typescript
+ import { CompletionModel, ChatMessage, Role } from "blazen";
+ import type { CompletionResponse, ToolCall, TokenUsage } from "blazen";
+
+ const model = CompletionModel.openrouter(process.env.OPENROUTER_API_KEY!);
+
+ // Using static factory methods (recommended)
+ const response: CompletionResponse = await model.complete([
+   ChatMessage.system("You are helpful."),
+   ChatMessage.user("What is 2+2?"),
+ ]);
+
+ console.log(response.content); // "4"
+ console.log(response.model); // model name used
+ console.log(response.usage); // TokenUsage: { promptTokens, completionTokens, totalTokens }
+ console.log(response.finishReason); // "stop", "tool_calls", etc.
+ console.log(response.toolCalls); // ToolCall[] | undefined
  ```

- ### Supported Providers
-
- | Factory method | Provider |
- |---------------------------------------------------|------------------|
- | `CompletionModel.openai(apiKey)` | OpenAI |
- | `CompletionModel.anthropic(apiKey)` | Anthropic |
- | `CompletionModel.gemini(apiKey)` | Google Gemini |
- | `CompletionModel.azure(apiKey, resource, deploy)` | Azure OpenAI |
- | `CompletionModel.openrouter(apiKey)` | OpenRouter |
- | `CompletionModel.groq(apiKey)` | Groq |
- | `CompletionModel.together(apiKey)` | Together AI |
- | `CompletionModel.mistral(apiKey)` | Mistral AI |
- | `CompletionModel.deepseek(apiKey)` | DeepSeek |
- | `CompletionModel.fireworks(apiKey)` | Fireworks AI |
- | `CompletionModel.perplexity(apiKey)` | Perplexity |
- | `CompletionModel.xai(apiKey)` | xAI / Grok |
- | `CompletionModel.cohere(apiKey)` | Cohere |
- | `CompletionModel.bedrock(apiKey, region)` | AWS Bedrock |
- | `CompletionModel.fal(apiKey)` | fal.ai |
-
- You can also pass additional options such as `temperature`, `maxTokens`, `topP`, a model override, or tool definitions:
+ You can also construct messages with the `ChatMessage` constructor:

  ```typescript
- const response = await model.completeWithOptions(messages, {
-   model: "gpt-4o",
-   temperature: 0.7,
-   maxTokens: 1024,
- });
+ const msg = new ChatMessage({ role: Role.User, content: "Hello" });
  ```

- ---
+ ### Multimodal Messages
+
+ Send images alongside text using multimodal factory methods:
+
+ ```typescript
+ // Image from URL
+ const msg = ChatMessage.userImageUrl("https://example.com/photo.jpg", "What's in this image?");
+
+ // Image from base64
+ const msg = ChatMessage.userImageBase64(base64Data, "image/png", "Describe this.");
+
+ // Multiple content parts
+ import type { ContentPart } from "blazen";
+ const msg = ChatMessage.userParts([
+   { type: "text", text: "Compare these two images:" },
+   { type: "image_url", imageUrl: { url: "https://example.com/a.jpg" } },
+   { type: "image_url", imageUrl: { url: "https://example.com/b.jpg" } },
+ ]);
+ ```

- ## Streaming
+ ### Advanced Options

- Steps can push intermediate events to external consumers via `ctx.writeEventToStream()`. Use `runStreaming` to receive them as they arrive.
+ Use `completeWithOptions` to control temperature, token limits, model selection, and tool definitions:

  ```typescript
- import { Workflow, CompletionModel } from "blazen";
+ import type { CompletionOptions } from "blazen";
+
+ const options: CompletionOptions = {
+   temperature: 0.9,
+   maxTokens: 256,
+   topP: 0.95,
+   model: "anthropic/claude-sonnet-4-20250514",
+   tools: [/* tool definitions */],
+ };
+
+ const response = await model.completeWithOptions(
+   [
+     ChatMessage.system("You are a creative writer."),
+     ChatMessage.user("Write a haiku about Rust."),
+   ],
+   options,
+ );
+ ```

- const model = CompletionModel.anthropic(process.env.ANTHROPIC_API_KEY!);
+ ### All 15 Providers
+
+ | Factory Method | Provider |
+ |---|---|
+ | `CompletionModel.openai(apiKey)` | OpenAI |
+ | `CompletionModel.anthropic(apiKey)` | Anthropic |
+ | `CompletionModel.gemini(apiKey)` | Google Gemini |
+ | `CompletionModel.azure(apiKey, resourceName, deploymentName)` | Azure OpenAI |
+ | `CompletionModel.openrouter(apiKey)` | OpenRouter |
+ | `CompletionModel.groq(apiKey)` | Groq |
+ | `CompletionModel.together(apiKey)` | Together AI |
+ | `CompletionModel.mistral(apiKey)` | Mistral AI |
+ | `CompletionModel.deepseek(apiKey)` | DeepSeek |
+ | `CompletionModel.fireworks(apiKey)` | Fireworks AI |
+ | `CompletionModel.perplexity(apiKey)` | Perplexity |
+ | `CompletionModel.xai(apiKey)` | xAI / Grok |
+ | `CompletionModel.cohere(apiKey)` | Cohere |
+ | `CompletionModel.bedrock(apiKey, region)` | AWS Bedrock |
+ | `CompletionModel.fal(apiKey)` | fal.ai |
+
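Because every factory returns the same `CompletionModel` interface, switching providers is a one-line change. A minimal sketch, assuming keys are supplied through environment variables of your choosing:

```typescript
import { CompletionModel, ChatMessage } from "blazen";

// Pick a provider at startup; the calling code stays identical.
const model = process.env.GROQ_API_KEY
  ? CompletionModel.groq(process.env.GROQ_API_KEY)
  : CompletionModel.anthropic(process.env.ANTHROPIC_API_KEY!);

const reply = await model.complete([ChatMessage.user("Ping?")]);
console.log(reply.content);
```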
+ ### Using LLMs Inside Workflows

- const workflow = new Workflow("streaming-workflow");
+ ```typescript
+ import { Workflow, CompletionModel, ChatMessage } from "blazen";

- workflow.addStep("stream-tokens", ["blazen::StartEvent"], async (event, ctx) => {
-   // Publish progress events consumers can observe in real time.
-   await ctx.writeEventToStream({ type: "progress", message: "Starting..." });
+ const model = CompletionModel.openai(process.env.OPENAI_API_KEY!);

+ const wf = new Workflow("llm-workflow");
+
+ wf.addStep("ask", ["blazen::StartEvent"], async (event, ctx) => {
    const response = await model.complete([
-     { role: "user", content: event.prompt },
+     ChatMessage.system("You are a helpful assistant."),
+     ChatMessage.user(event.question),
    ]);
+   return { type: "blazen::StopEvent", result: { answer: response.content } };
+ });
+
+ const result = await wf.run({ question: "What is the capital of France?" });
+ console.log(result.data.answer);
+ ```
+
+ ---
+
+ ## Branching / Fan-Out
+
+ Return an array of events from a step handler to dispatch multiple events simultaneously. Each event routes to the step that handles its type.
+
+ ```typescript
+ import { Workflow } from "blazen";

-   await ctx.writeEventToStream({ type: "progress", message: "Done." });
+ const wf = new Workflow("fan-out");

-   return { type: "blazen::StopEvent", result: response.content };
+ wf.addStep("split", ["blazen::StartEvent"], async (event, ctx) => {
+   // Return an array to fan out into parallel branches
+   return [
+     { type: "BranchA", value: event.input },
+     { type: "BranchB", value: event.input },
+   ];
  });

- const result = await workflow.runStreaming(
-   { prompt: "Tell me a story." },
-   (event) => {
-     // Called for every event published via ctx.writeEventToStream().
-     console.log("[stream]", event);
-   }
- );
+ wf.addStep("handle_a", ["BranchA"], async (event, ctx) => {
+   return { type: "blazen::StopEvent", result: { branch: "a", value: event.value } };
+ });

+ wf.addStep("handle_b", ["BranchB"], async (event, ctx) => {
+   return { type: "blazen::StopEvent", result: { branch: "b", value: event.value } };
+ });
+
+ const result = await wf.run({ input: "data" });
+ // The first branch to produce a StopEvent wins
  console.log(result.data);
  ```

  ---

+ ## Side-Effect Steps
+
+ Return `null` from a step to perform side effects without emitting a return event. Use `ctx.sendEvent()` to manually route the next event through the internal step registry.
+
+ ```typescript
+ import { Workflow } from "blazen";
+
+ const wf = new Workflow("side-effect");
+
+ wf.addStep("log_and_continue", ["blazen::StartEvent"], async (event, ctx) => {
+   // Perform side effects
+   await ctx.set("processed", true);
+
+   // Manually send the next event
+   await ctx.sendEvent({ type: "NextStep", data: event.input });
+
+   // Return null -- no event emitted from the return value
+   return null;
+ });
+
+ wf.addStep("finish", ["NextStep"], async (event, ctx) => {
+   const processed = await ctx.get("processed");
+   return { type: "blazen::StopEvent", result: { processed, data: event.data } };
+ });
+
+ const result = await wf.run({ input: "hello" });
+ console.log(result.data); // { processed: true, data: "hello" }
+ ```
+
+ ---
+
  ## Pause and Resume

- `runWithHandler` gives you a `WorkflowHandler` that lets you pause execution and serialize the full workflow state to JSON. You can store the snapshot and resume it later -- on a different machine if needed.
+ `runWithHandler` returns a `WorkflowHandler` that gives you control over execution. Pause a workflow to serialize its full state as a JSON string, then resume it later -- even on a different machine.

  ```typescript
  import { Workflow } from "blazen";
  import { writeFileSync, readFileSync } from "fs";

- const workflow = new Workflow("pausable");
+ const wf = new Workflow("pausable");

- workflow.addStep("long-task", ["blazen::StartEvent"], async (event, ctx) => {
-   // ... expensive work ...
-   return { type: "blazen::StopEvent", result: "done" };
+ wf.addStep("work", ["blazen::StartEvent"], async (event, ctx) => {
+   // ... expensive computation ...
+   return { type: "blazen::StopEvent", result: { answer: 42 } };
  });

- // Start the workflow and immediately pause it.
- const handler = await workflow.runWithHandler({ input: "data" });
+ // Start the workflow and get a handler
+ const handler = await wf.runWithHandler({ input: "data" });
+
+ // Pause and serialize the snapshot
  const snapshot = await handler.pause();
  writeFileSync("snapshot.json", snapshot);

- // Later: restore the workflow and resume from the snapshot.
- const snapshot2 = readFileSync("snapshot.json", "utf-8");
- const resumedHandler = await workflow.resume(snapshot2);
+ // Later: resume from the snapshot
+ const saved = readFileSync("snapshot.json", "utf-8");
+ const resumedHandler = await wf.resume(saved);
  const result = await resumedHandler.result();
- console.log(result.data);
+ console.log(result.data); // { answer: 42 }
  ```

- ---
+ **Important:** `handler.result()` and `handler.pause()` each consume the handler. You can only call one of them, and only once.
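In other words, pick one path per handler. A minimal sketch of the two mutually exclusive uses, assuming the same `wf` as above:

```typescript
// Path A: run to completion
const h1 = await wf.runWithHandler({ input: "data" });
const finished = await h1.result(); // h1 is now consumed

// Path B: pause for later
const h2 = await wf.runWithHandler({ input: "data" });
const snapshot = await h2.pause(); // h2 is now consumed; continue later via wf.resume(snapshot)
```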
 
- ## Human-in-the-Loop
+ ### Human-in-the-Loop

- Pause/resume is the foundation for human-in-the-loop workflows. Pause after a step completes to wait for human review, inject updated data into the context, then call `resume` to continue:
+ Pause/resume is the foundation for human-in-the-loop workflows. Pause after a step to wait for human review, then resume when approved:

  ```typescript
- const handler = await workflow.runWithHandler({ document: rawText });
+ const handler = await wf.runWithHandler({ document: rawText });

- // Pause and persist state until a human reviews/approves.
+ // Pause and persist until a human reviews
  const snapshot = await handler.pause();
  await db.saveSnapshot(jobId, snapshot);

- // ... human reviews and approves via your UI ...
+ // ... human reviews via UI ...

- // Resume with the same workflow instance (steps must be re-registered).
- const savedSnapshot = await db.loadSnapshot(jobId);
- const resumedHandler = await workflow.resume(savedSnapshot);
+ // Resume
+ const saved = await db.loadSnapshot(jobId);
+ const resumedHandler = await wf.resume(saved);
  const result = await resumedHandler.result();
  ```

+ ### Streaming with Handler
+
+ Use `handler.streamEvents()` to subscribe to intermediate events before calling `result()`:
+
+ ```typescript
+ const handler = await wf.runWithHandler({ prompt: "Tell me a story." });
+
+ // Subscribe to stream events (must be called before result() or pause())
+ await handler.streamEvents((event) => {
+   console.log("[stream]", event);
+ });
+
+ // Then await the final result
+ const result = await handler.result();
+ ```
+
  ---

  ## Context API

- Every step receives a `ctx` object for shared state and event routing.
+ Every step handler receives a `ctx` (Context) object. All methods are **async** and must be `await`ed.

  ```typescript
- // Store and retrieve values across steps.
- await ctx.set("key", { any: "json-serializable value" });
+ // Store a JSON-serializable value
+ await ctx.set("key", { any: "value" });
+
+ // Retrieve a stored value (returns null if not found)
  const value = await ctx.get("key");

- // Route an event to a registered step by type.
- await ctx.sendEvent({ type: "my-event", data: "..." });
+ // Store raw binary data (no serialization requirement)
+ await ctx.setBytes("model-weights", buffer);
+
+ // Retrieve raw binary data (returns null if not found)
+ const data: Buffer | null = await ctx.getBytes("model-weights");
+
+ // Send an event through the internal step registry
+ await ctx.sendEvent({ type: "MyEvent", data: "..." });

- // Publish an event to external streaming consumers.
- await ctx.writeEventToStream({ type: "progress", pct: 50 });
+ // Publish an event to external streaming consumers (does NOT route internally)
+ await ctx.writeEventToStream({ type: "Progress", percent: 50 });

- // Get the current run ID.
+ // Get the unique run ID for this workflow execution
  const runId = await ctx.runId();
  ```

+ ### Binary Storage
+
+ `setBytes` / `getBytes` let you store raw binary data in the context with no serialization requirement. Store any type by converting to bytes yourself (e.g., MessagePack, protobuf, or raw buffers). Binary data persists through pause/resume/checkpoint.
+
+ ```typescript
+ // Store a raw buffer
+ const pixels = Buffer.from([0xff, 0x00, 0x00, 0xff]);
+ await ctx.setBytes("image-pixels", pixels);
+
+ // Retrieve it later in another step
+ const restored = await ctx.getBytes("image-pixels");
+ ```
+
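For structured values, any encoding works as long as you do the conversion yourself. A minimal sketch using plain JSON over a `Buffer` (MessagePack or protobuf would follow the same pattern):

```typescript
// Inside one step: encode an object to bytes before storing it
const stats = { mean: 0.42, samples: 10_000 };
await ctx.setBytes("stats", Buffer.from(JSON.stringify(stats), "utf-8"));

// Inside a later step: decode it back
const bytes = await ctx.getBytes("stats");
const restoredStats = bytes ? JSON.parse(bytes.toString("utf-8")) : null;
```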
  ---

- ## TypeScript Support
+ ## Timeout

- Full TypeScript type definitions are included -- no `@types` package needed. All classes (`Workflow`, `WorkflowHandler`, `Context`, `CompletionModel`) and the `JsWorkflowResult` interface are exported from `index.d.ts`.
+ Set a workflow timeout in seconds. The default is 300 seconds (5 minutes). Set to 0 or negative to disable.

  ```typescript
- import type { JsWorkflowResult } from "blazen";
+ const wf = new Workflow("my-workflow");
+ wf.setTimeout(60); // 60 second timeout
+ ```

- const result: JsWorkflowResult = await workflow.run({ text: "hello" });
- console.log(result.type); // "blazen::StopEvent"
- console.log(result.data); // your result payload
+ ---
+
+ ## TypeScript Support
+
+ Full TypeScript type definitions ship with the package -- no `@types` needed. All classes and interfaces are exported.
+
+ ```typescript
+ import {
+   Workflow, WorkflowHandler, Context, CompletionModel,
+   ChatMessage, Role, version,
+ } from "blazen";
+ import type {
+   JsWorkflowResult, CompletionResponse, CompletionOptions,
+   ToolCall, TokenUsage, ContentPart, ImageContent, ImageSource,
+ } from "blazen";
  ```

  ---

  ## API Summary

- | Export | Description |
- |-------------------|----------------------------------------------------------|
- | `Workflow` | Build and run event-driven workflows |
- | `WorkflowHandler` | Control handle returned by `runWithHandler` (pause/resume/stream) |
- | `Context` | Per-run key/value store, event routing, and stream sink |
- | `CompletionModel` | Unified LLM client with provider factory methods |
- | `version()` | Returns the blazen library version string |
+ | Export | Description |
+ |---|---|
+ | `Workflow` | Build and run event-driven workflows |
+ | `Workflow.addStep(name, eventTypes, handler)` | Register a step that handles specific event types |
+ | `Workflow.run(input)` | Run the workflow, returns `Promise<JsWorkflowResult>` |
+ | `Workflow.runStreaming(input, callback)` | Run with streaming, callback receives intermediate events |
+ | `Workflow.runWithHandler(input)` | Run and return a `WorkflowHandler` for pause/resume control |
+ | `Workflow.resume(snapshotJson)` | Resume a paused workflow from a JSON snapshot |
+ | `Workflow.setTimeout(seconds)` | Set workflow timeout in seconds |
+ | `WorkflowHandler` | Control handle for a running workflow |
+ | `WorkflowHandler.result()` | Await the final workflow result |
+ | `WorkflowHandler.pause()` | Pause and get a serialized snapshot string |
+ | `WorkflowHandler.streamEvents(callback)` | Subscribe to intermediate stream events |
+ | `Context` | Per-run shared state, event routing, and stream output |
+ | `Context.set(key, value)` | Store a JSON-serializable value (async) |
+ | `Context.get(key)` | Retrieve a value (async, returns null if missing) |
+ | `Context.setBytes(key, buffer)` | Store raw binary data (async) |
+ | `Context.getBytes(key)` | Retrieve raw binary data (async, returns null if missing) |
+ | `Context.sendEvent(event)` | Route an event to matching steps (async) |
+ | `Context.writeEventToStream(event)` | Publish to external stream consumers (async) |
+ | `Context.runId()` | Get the workflow run ID (async) |
+ | `CompletionModel` | Unified LLM client with 15 provider factory methods |
+ | `CompletionModel.complete(messages)` | Chat completion with typed `ChatMessage[]` input, returns `CompletionResponse` (async) |
+ | `CompletionModel.completeWithOptions(messages, opts)` | Chat completion with `CompletionOptions` (async) |
+ | `CompletionModel.modelId` | Getter for the current model ID |
+ | `ChatMessage` | Chat message class with static factories: `.system()`, `.user()`, `.assistant()`, `.tool()`, `.userImageUrl()`, `.userImageBase64()`, `.userParts()` |
+ | `Role` | String enum: `Role.System`, `Role.User`, `Role.Assistant`, `Role.Tool` |
+ | `CompletionResponse` | Interface: `{ content, toolCalls, usage, model, finishReason }` |
+ | `ToolCall` | Interface: `{ id, name, arguments }` |
+ | `TokenUsage` | Interface: `{ promptTokens, completionTokens, totalTokens }` |
+ | `CompletionOptions` | Interface: `{ temperature?, maxTokens?, topP?, model?, tools? }` |
+ | `ContentPart` / `ImageContent` / `ImageSource` | Types for multimodal message content |
+ | `JsWorkflowResult` | Interface: `{ type: string, data: any }` |
+ | `version()` | Returns the blazen library version string |
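As a quick sanity check after installing, the two introspection points from the table can be read directly. A minimal sketch, assuming an OpenAI key is configured in the environment:

```typescript
import { CompletionModel, version } from "blazen";

console.log(version()); // library version string, e.g. "0.1.98"

const model = CompletionModel.openai(process.env.OPENAI_API_KEY!);
console.log(model.modelId); // current model ID for this client
```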
 
  ---

- ## Full Documentation
+ ## Links

- See the [Blazen GitHub repository](https://github.com/ZachHandley/Blazen) for full documentation, advanced examples, and the Rust core source.
+ - [GitHub](https://github.com/ZachHandley/Blazen) -- source, issues, and advanced examples
+ - [blazen.dev](https://blazen.dev) -- documentation and guides

  ---