@bratsos/workflow-engine 0.1.0 → 0.2.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (37)
  1. package/README.md +270 -513
  2. package/dist/chunk-HL3OJG7W.js +1033 -0
  3. package/dist/chunk-HL3OJG7W.js.map +1 -0
  4. package/dist/{chunk-7IITBLFY.js → chunk-NYKMT46J.js} +268 -25
  5. package/dist/chunk-NYKMT46J.js.map +1 -0
  6. package/dist/chunk-SPXBCZLB.js +17 -0
  7. package/dist/chunk-SPXBCZLB.js.map +1 -0
  8. package/dist/{client-5vz5Vv4A.d.ts → client-D4PoxADF.d.ts} +3 -143
  9. package/dist/client.d.ts +3 -2
  10. package/dist/{index-DmR3E8D7.d.ts → index-DAzCfO1R.d.ts} +20 -1
  11. package/dist/index.d.ts +234 -601
  12. package/dist/index.js +46 -2034
  13. package/dist/index.js.map +1 -1
  14. package/dist/{interface-Cv22wvLG.d.ts → interface-MMqhfQQK.d.ts} +69 -2
  15. package/dist/kernel/index.d.ts +26 -0
  16. package/dist/kernel/index.js +3 -0
  17. package/dist/kernel/index.js.map +1 -0
  18. package/dist/kernel/testing/index.d.ts +44 -0
  19. package/dist/kernel/testing/index.js +85 -0
  20. package/dist/kernel/testing/index.js.map +1 -0
  21. package/dist/persistence/index.d.ts +2 -2
  22. package/dist/persistence/index.js +2 -1
  23. package/dist/persistence/prisma/index.d.ts +2 -2
  24. package/dist/persistence/prisma/index.js +2 -1
  25. package/dist/plugins-BCnDUwIc.d.ts +415 -0
  26. package/dist/ports-tU3rzPXJ.d.ts +245 -0
  27. package/dist/stage-BPw7m9Wx.d.ts +144 -0
  28. package/dist/testing/index.d.ts +23 -1
  29. package/dist/testing/index.js +156 -13
  30. package/dist/testing/index.js.map +1 -1
  31. package/package.json +11 -1
  32. package/skills/workflow-engine/SKILL.md +234 -348
  33. package/skills/workflow-engine/references/03-runtime-setup.md +111 -426
  34. package/skills/workflow-engine/references/05-persistence-setup.md +32 -0
  35. package/skills/workflow-engine/references/07-testing-patterns.md +141 -474
  36. package/skills/workflow-engine/references/08-common-patterns.md +118 -431
  37. package/dist/chunk-7IITBLFY.js.map +0 -1
@@ -4,33 +4,23 @@ description: Guide for @bratsos/workflow-engine - a type-safe workflow engine wi
  license: MIT
  metadata:
  author: bratsos
- version: "0.1.0"
+ version: "0.2.0"
  repository: https://github.com/bratsos/workflow-engine
  ---
 
  # @bratsos/workflow-engine Skill
 
- Type-safe workflow engine for building AI-powered, multi-stage pipelines with persistence and batch processing support.
+ Type-safe workflow engine for building AI-powered, multi-stage pipelines with persistence and batch processing support. Uses a **command kernel** architecture with environment-agnostic design.
 
- ## ⚠️ CRITICAL: Required Prisma Models
+ ## Architecture Overview
 
- **Before using this library, your Prisma schema MUST include ALL of these models with EXACT field names:**
+ The engine follows a **kernel + host** pattern:
 
- | Model | Required | Purpose |
- |-------|----------|---------|
- | `WorkflowRun` | Yes | Workflow execution records |
- | `WorkflowStage` | ✅ Yes | Stage execution state |
- | `WorkflowLog` | ✅ Yes | Stage logging |
- | `WorkflowArtifact` | ✅ Yes | Stage output storage |
- | `JobQueue` | ✅ Yes | Job scheduling |
- | `AICall` | Optional | AI call tracking |
+ - **Core library** (`@bratsos/workflow-engine`) - Command kernel, stage/workflow definitions, persistence adapters
+ - **Node Host** (`@bratsos/workflow-engine-host-node`) - Long-running worker with polling loops and signal handling
+ - **Serverless Host** (`@bratsos/workflow-engine-host-serverless`) - Stateless single-invocation for edge/lambda/workers
 
- **Common Errors:**
- - `Cannot read properties of undefined (reading 'create')` → Missing `WorkflowLog` model
- - `Cannot read properties of undefined (reading 'upsert')` → Missing `WorkflowArtifact` model
- - `Unknown argument 'duration'. Did you mean 'durationMs'?` → Field name mismatch (use `duration`, not `durationMs`)
-
- **See [05-persistence-setup.md](references/05-persistence-setup.md) for the complete schema.**
+ The **kernel** is a pure command dispatcher. All workflow operations are expressed as typed commands dispatched via `kernel.dispatch()`. Hosts wrap the kernel with environment-specific process management.
 
  ## When to Apply
 
@@ -39,20 +29,20 @@ Type-safe workflow engine for building AI-powered, multi-stage pipelines with pe
  - User is implementing workflow persistence with Prisma
  - User needs AI integration (generateText, generateObject, embeddings, batch)
  - User is building multi-stage data processing pipelines
- - User mentions workflow runtime, job queues, or stage execution
- - User wants to rerun a workflow from a specific stage (retry after failure)
- - User needs to test workflows with mocks
+ - User mentions kernel, command dispatch, or job execution
+ - User wants to set up a Node.js worker or serverless worker
+ - User wants to rerun a workflow from a specific stage
+ - User needs to test workflows with in-memory adapters
 
  ## Quick Start
 
  ```typescript
+ import { defineStage, WorkflowBuilder } from "@bratsos/workflow-engine";
+ import { createKernel } from "@bratsos/workflow-engine/kernel";
+ import { createNodeHost } from "@bratsos/workflow-engine-host-node";
  import {
- defineStage,
- WorkflowBuilder,
- createWorkflowRuntime,
  createPrismaWorkflowPersistence,
  createPrismaJobQueue,
- createPrismaAICallLogger,
  } from "@bratsos/workflow-engine";
  import { z } from "zod";
 
@@ -66,32 +56,42 @@ const processStage = defineStage({
  config: z.object({ verbose: z.boolean().default(false) }),
  },
  async execute(ctx) {
- const processed = ctx.input.data.toUpperCase();
- return { output: { result: processed } };
+ return { output: { result: ctx.input.data.toUpperCase() } };
  },
  });
 
  // 2. Build a workflow
  const workflow = new WorkflowBuilder(
- "my-workflow",
- "My Workflow",
- "Processes data",
+ "my-workflow", "My Workflow", "Processes data",
  z.object({ data: z.string() }),
  z.object({ result: z.string() })
  )
  .pipe(processStage)
  .build();
 
- // 3. Create runtime and execute
- const runtime = createWorkflowRuntime({
+ // 3. Create kernel
+ const kernel = createKernel({
  persistence: createPrismaWorkflowPersistence(prisma),
- jobQueue: createPrismaJobQueue(prisma),
- aiCallLogger: createPrismaAICallLogger(prisma),
- registry: { getWorkflow: (id) => (id === "my-workflow" ? workflow : null) },
+ blobStore: myBlobStore,
+ jobTransport: createPrismaJobQueue(prisma),
+ eventSink: myEventSink,
+ scheduler: myScheduler,
+ clock: { now: () => new Date() },
+ registry: { getWorkflow: (id) => (id === "my-workflow" ? workflow : undefined) },
  });
 
- await runtime.start();
- const { workflowRunId } = await runtime.createRun({
+ // 4. Start a Node host
+ const host = createNodeHost({
+ kernel,
+ jobTransport: createPrismaJobQueue(prisma),
+ workerId: "worker-1",
+ });
+ await host.start();
+
+ // 5. Dispatch a command
+ await kernel.dispatch({
+ type: "run.create",
+ idempotencyKey: crypto.randomUUID(),
  workflowId: "my-workflow",
  input: { data: "hello" },
  });
@@ -99,18 +99,34 @@ const { workflowRunId } = await runtime.createRun({
 
  ## Core Exports Reference
 
- | Export | Type | Purpose |
- |--------|------|---------|
- | `defineStage` | Function | Create sync stages |
- | `defineAsyncBatchStage` | Function | Create async/batch stages |
- | `WorkflowBuilder` | Class | Chain stages into workflows |
- | `Workflow` | Class | Built workflow definition |
- | `WorkflowRuntime` | Class | Execute workflows with persistence |
- | `createAIHelper` | Function | AI operations (text, object, embed, batch) |
- | `AVAILABLE_MODELS` | Object | Model configurations |
- | `registerModels` | Function | Add custom models |
- | `calculateCost` | Function | Estimate token costs |
- | `NoInputSchema` | Schema | For stages without input |
+ | Export | Type | Import Path | Purpose |
+ |--------|------|-------------|---------|
+ | `defineStage` | Function | `@bratsos/workflow-engine` | Create sync stages |
+ | `defineAsyncBatchStage` | Function | `@bratsos/workflow-engine` | Create async/batch stages |
+ | `WorkflowBuilder` | Class | `@bratsos/workflow-engine` | Chain stages into workflows |
+ | `createKernel` | Function | `@bratsos/workflow-engine/kernel` | Create command kernel |
+ | `createNodeHost` | Function | `@bratsos/workflow-engine-host-node` | Create Node.js host |
+ | `createServerlessHost` | Function | `@bratsos/workflow-engine-host-serverless` | Create serverless host |
+ | `createAIHelper` | Function | `@bratsos/workflow-engine` | AI operations (text, object, embed, batch) |
+ | `definePlugin` | Function | `@bratsos/workflow-engine/kernel` | Define kernel plugins |
+ | `createPluginRunner` | Function | `@bratsos/workflow-engine/kernel` | Create plugin event processor |
+
+ ## Kernel Commands
+
+ All operations go through `kernel.dispatch(command)`:
+
+ | Command | Description |
+ |---------|-------------|
+ | `run.create` | Create a new workflow run |
+ | `run.claimPending` | Claim pending runs, enqueue first-stage jobs |
+ | `run.transition` | Advance to next stage group or complete |
+ | `run.cancel` | Cancel a running workflow |
+ | `run.rerunFrom` | Rerun from a specific stage |
+ | `job.execute` | Execute a single stage |
+ | `stage.pollSuspended` | Poll suspended stages for readiness (returns `resumedWorkflowRunIds`) |
+ | `lease.reapStale` | Release stale job leases |
+ | `outbox.flush` | Publish pending outbox events |
+ | `plugin.replayDLQ` | Replay dead-letter queue events |
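The `outbox.flush` command added above implements a transactional outbox: events are recorded next to the state change, then published asynchronously in bounded batches. The flush step can be sketched like this (editor's illustration with invented names; the package's real implementation lives behind `kernel.dispatch({ type: "outbox.flush" })`):

```typescript
// Sketch of a transactional-outbox flush: take up to `limit` unpublished
// events, hand each to the event sink, and mark it published. All names
// here are illustrative, not the package's API.
interface OutboxEvent {
  id: string;
  published: boolean;
  payload: unknown;
}

function flushOutbox(
  outbox: OutboxEvent[],
  publish: (event: OutboxEvent) => void,
  limit = 100,
): number {
  // Unpublished events only, oldest first, bounded per flush.
  const pending = outbox.filter((e) => !e.published).slice(0, limit);
  for (const event of pending) {
    publish(event);         // hand off to the event sink
    event.published = true; // mark so the next flush skips it
  }
  return pending.length;
}

const outbox: OutboxEvent[] = [
  { id: "e1", published: true, payload: {} },
  { id: "e2", published: false, payload: {} },
  { id: "e3", published: false, payload: {} },
];
const delivered: string[] = [];
const flushed = flushOutbox(outbox, (e) => delivered.push(e.id));
```

Publishing before marking gives at-least-once delivery, so downstream consumers should deduplicate by event id.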
 
  ## Stage Definition
 
@@ -118,33 +134,27 @@ const { workflowRunId } = await runtime.createRun({
 
  ```typescript
  const myStage = defineStage({
- id: "my-stage", // Unique identifier
- name: "My Stage", // Display name
- description: "Optional", // Description
- dependencies: ["prev"], // Required previous stages
+ id: "my-stage",
+ name: "My Stage",
+ description: "Optional",
+ dependencies: ["prev"],
 
  schemas: {
  input: InputSchema, // Zod schema or "none"
- output: OutputSchema, // Zod schema
- config: ConfigSchema, // Zod schema with defaults
+ output: OutputSchema,
+ config: ConfigSchema,
  },
 
  async execute(ctx) {
- // Access input, config, workflow context
  const { input, config, workflowContext } = ctx;
+ const prevOutput = ctx.require("prev");
+ const optOutput = ctx.optional("other");
 
- // Get output from previous stages
- const prevOutput = ctx.require("prev"); // Throws if missing
- const optOutput = ctx.optional("other"); // Returns undefined if missing
-
- // Access services
  await ctx.log("INFO", "Processing...");
- await ctx.storage.save("key", data);
 
  return {
  output: { ... },
  customMetrics: { itemsProcessed: 10 },
- artifacts: { rawData: data },
  };
  },
  });
@@ -156,51 +166,26 @@ const myStage = defineStage({
  const batchStage = defineAsyncBatchStage({
  id: "batch-process",
  name: "Batch Process",
- mode: "async-batch", // Required
-
- schemas: {
- input: "none",
- output: OutputSchema,
- config: ConfigSchema,
- },
+ mode: "async-batch",
+ schemas: { input: "none", output: OutputSchema, config: ConfigSchema },
 
  async execute(ctx) {
- // Check if resuming from suspension
  if (ctx.resumeState) {
  return { output: ctx.resumeState.cachedResult };
  }
 
- // Submit batch and suspend
  const batchId = await submitBatchJob(ctx.input);
-
  return {
  suspended: true,
- state: {
- batchId,
- submittedAt: new Date().toISOString(),
- pollInterval: 60000,
- maxWaitTime: 3600000,
- },
- pollConfig: {
- pollInterval: 60000,
- maxWaitTime: 3600000,
- nextPollAt: new Date(Date.now() + 60000),
- },
+ state: { batchId, pollInterval: 60000, maxWaitTime: 3600000 },
+ pollConfig: { pollInterval: 60000, maxWaitTime: 3600000, nextPollAt: new Date(Date.now() + 60000) },
  };
  },
 
  async checkCompletion(suspendedState, ctx) {
  const status = await checkBatchStatus(suspendedState.batchId);
-
- if (status === "completed") {
- const results = await getBatchResults(suspendedState.batchId);
- return { ready: true, output: { results } };
- }
-
- if (status === "failed") {
- return { ready: false, error: "Batch failed" };
- }
-
+ if (status === "completed") return { ready: true, output: { results } };
+ if (status === "failed") return { ready: false, error: "Batch failed" };
  return { ready: false, nextCheckIn: 60000 };
  },
  });
@@ -210,235 +195,156 @@ const batchStage = defineAsyncBatchStage({
 
  ```typescript
  const workflow = new WorkflowBuilder(
- "workflow-id",
- "Workflow Name",
- "Description",
- InputSchema,
- OutputSchema
+ "workflow-id", "Workflow Name", "Description",
+ InputSchema, OutputSchema
  )
- .pipe(stage1) // Sequential
+ .pipe(stage1)
  .pipe(stage2)
- .parallel([stage3a, stage3b]) // Parallel execution
+ .parallel([stage3a, stage3b])
  .pipe(stage4)
  .build();
 
- // Workflow utilities
- workflow.getStageIds(); // ["stage1", "stage2", ...]
- workflow.getExecutionPlan(); // Grouped by execution order
- workflow.getDefaultConfig(); // Default config for all stages
- workflow.validateConfig(config); // Validate config object
+ workflow.getStageIds();
+ workflow.getExecutionPlan();
+ workflow.getDefaultConfig();
+ workflow.validateConfig(config);
  ```
 
- ## AI Integration & Cost Tracking
+ ## Kernel Setup
 
- ### Topic Convention for Cost Aggregation
+ ```typescript
+ import { createKernel } from "@bratsos/workflow-engine/kernel";
+ import type { Kernel, KernelConfig, Persistence, BlobStore, JobTransport, EventSink, Scheduler, Clock } from "@bratsos/workflow-engine/kernel";
+
+ const kernel = createKernel({
+ persistence, // Persistence port - runs, stages, logs, outbox, idempotency
+ blobStore, // BlobStore port - large payload storage
+ jobTransport, // JobTransport port - job queue
+ eventSink, // EventSink port - async event publishing
+ scheduler, // Scheduler port - deferred command triggers
+ clock, // Clock port - injectable time source
+ registry, // WorkflowRegistry - { getWorkflow(id) }
+ });
 
- Use hierarchical topics to enable cost tracking at different levels:
+ // Dispatch typed commands
+ const { workflowRunId } = await kernel.dispatch({
+ type: "run.create",
+ idempotencyKey: "unique-key",
+ workflowId: "my-workflow",
+ input: { data: "hello" },
+ });
+ ```
+
+ ### Node Host
 
  ```typescript
- // In a stage - use the standard convention
- const ai = runtime.createAIHelper(`workflow.${ctx.workflowRunId}.stage.${ctx.stageId}`);
+ import { createNodeHost } from "@bratsos/workflow-engine-host-node";
+
+ const host = createNodeHost({
+ kernel,
+ jobTransport,
+ workerId: "worker-1",
+ orchestrationIntervalMs: 10_000,
+ jobPollIntervalMs: 1_000,
+ staleLeaseThresholdMs: 60_000,
+ });
 
- // Later, query costs by prefix:
- const workflowCost = await aiLogger.getStats(`workflow.${workflowRunId}`); // All stages
- const stageCost = await aiLogger.getStats(`workflow.${workflowRunId}.stage.extraction`); // One stage
+ await host.start(); // Starts polling loops + signal handlers
+ await host.stop(); // Graceful shutdown
+ host.getStats(); // { workerId, jobsProcessed, orchestrationTicks, isRunning, uptimeMs }
  ```
 
- **Note:** When a workflow completes, `WorkflowRun.totalCost` and `totalTokens` are automatically populated.
-
- ### AI Methods
+ ### Serverless Host
 
  ```typescript
- // Text generation
- const { text, cost } = await ai.generateText("gemini-2.5-flash", prompt, {
- temperature: 0.7,
- maxTokens: 1000,
+ import {
+ createServerlessHost,
+ type ServerlessHost,
+ type ServerlessHostConfig,
+ type JobMessage,
+ type JobResult,
+ type ProcessJobsResult,
+ type MaintenanceTickResult,
+ } from "@bratsos/workflow-engine-host-serverless";
+
+ const host = createServerlessHost({
+ kernel,
+ jobTransport,
+ workerId: "my-worker",
+ // Optional tuning (same defaults as Node host)
+ staleLeaseThresholdMs: 60_000,
+ maxClaimsPerTick: 10,
+ maxSuspendedChecksPerTick: 10,
+ maxOutboxFlushPerTick: 100,
  });
+ ```
 
- // Structured output
- const { object } = await ai.generateObject(
- "gemini-2.5-flash",
- prompt,
- z.object({ items: z.array(z.string()) })
- );
+ #### `handleJob(msg: JobMessage): Promise<JobResult>`
 
- // Embeddings
- const { embedding, embeddings } = await ai.embed(
- "text-embedding-004",
- ["text1", "text2"],
- { dimensions: 768 }
- );
+ Execute a single pre-dequeued job. Consumers wire platform-specific ack/retry around the result.
 
- // Batch operations (50% cost savings)
- const batch = ai.batch("claude-sonnet-4-20250514", "anthropic");
- const handle = await batch.submit([
- { id: "req1", prompt: "..." },
- { id: "req2", prompt: "...", schema: OutputSchema },
- ]);
+ ```typescript
+ // JobMessage shape (matches queue message body)
+ interface JobMessage {
+ jobId: string;
+ workflowRunId: string;
+ workflowId: string;
+ stageId: string;
+ attempt: number;
+ maxAttempts?: number;
+ payload: Record<string, unknown>;
+ }
 
- // Check status and get results
- const status = await batch.getStatus(handle.id);
- const results = await batch.getResults(handle.id);
+ // JobResult
+ interface JobResult {
+ outcome: "completed" | "suspended" | "failed";
+ error?: string;
+ }
 
- // Get aggregated stats
- const stats = await ai.getStats();
- console.log(`Cost: $${stats.totalCost.toFixed(4)}, Tokens: ${stats.totalInputTokens + stats.totalOutputTokens}`);
+ const result = await host.handleJob(msg);
+ if (result.outcome === "completed") msg.ack();
+ else if (result.outcome === "suspended") msg.ack();
+ else msg.retry();
  ```
 
- **See [04-ai-integration.md](references/04-ai-integration.md) for complete topic convention and cost tracking docs.**
+ #### `processAvailableJobs(opts?): Promise<ProcessJobsResult>`
 
- ## Persistence Setup
+ Dequeue and process jobs from the job transport. Defaults to 1 job (safe for edge runtimes with CPU limits).
 
- ### ⚠️ Required Prisma Models (ALL are required)
+ ```typescript
+ const result = await host.processAvailableJobs({ maxJobs: 5 });
+ // { processed: number, succeeded: number, failed: number }
+ ```
 
- Copy this complete schema. Missing models or wrong field names will cause runtime errors.
+ #### `runMaintenanceTick(): Promise<MaintenanceTickResult>`
 
- ```prisma
- // Required enums
- enum Status {
- PENDING
- RUNNING
- SUSPENDED
- COMPLETED
- FAILED
- CANCELLED
- SKIPPED
- }
+ Run one bounded maintenance cycle: claim pending, poll suspended, reap stale, flush outbox.
 
- enum LogLevel {
- DEBUG
- INFO
- WARN
- ERROR
- }
+ ```typescript
+ const tick = await host.runMaintenanceTick();
+ // { claimed: number, suspendedChecked: number, staleReleased: number, eventsFlushed: number }
+ // Note: resumed suspended stages are automatically followed by run.transition.
+ ```
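The "reap stale" step of the maintenance cycle (and the `staleLeaseThresholdMs` option above) corresponds to the kernel's `lease.reapStale` command: RUNNING jobs whose lease is older than the threshold are returned to PENDING so another worker can claim them. An in-memory sketch of the idea (editor's illustration; field names echo the `JobQueue` model, but the reaping logic here is an assumption, not the package's code):

```typescript
// Sketch of stale-lease reaping over an in-memory job table.
interface LeasedJob {
  id: string;
  status: "PENDING" | "RUNNING";
  workerId?: string;
  lockedAt?: Date;
}

function reapStale(jobs: LeasedJob[], now: Date, thresholdMs: number): string[] {
  const released: string[] = [];
  for (const job of jobs) {
    if (job.status !== "RUNNING" || !job.lockedAt) continue;
    if (now.getTime() - job.lockedAt.getTime() > thresholdMs) {
      job.status = "PENDING";   // make the job claimable again
      job.workerId = undefined; // drop the dead worker's lease
      job.lockedAt = undefined;
      released.push(job.id);
    }
  }
  return released;
}

const now = new Date("2024-01-01T00:02:00Z");
const jobs: LeasedJob[] = [
  { id: "a", status: "RUNNING", workerId: "w1", lockedAt: new Date("2024-01-01T00:00:00Z") },
  { id: "b", status: "RUNNING", workerId: "w2", lockedAt: new Date("2024-01-01T00:01:30Z") },
  { id: "c", status: "PENDING" },
];
const released = reapStale(jobs, now, 60_000);
```

Only job `a` (locked two minutes ago, threshold one minute) is released; the fresh lease on `b` and the unleased `c` are untouched.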
 
- enum ArtifactType {
- STAGE_OUTPUT
- ARTIFACT
- METADATA
- }
+ ## AI Integration & Cost Tracking
 
- // ✅ REQUIRED: WorkflowRun
- model WorkflowRun {
- id String @id @default(cuid())
- createdAt DateTime @default(now())
- updatedAt DateTime @updatedAt
- workflowId String
- workflowName String
- workflowType String
- status Status @default(PENDING)
- startedAt DateTime?
- completedAt DateTime?
- duration Int? // ⚠️ Must be "duration", not "durationMs"
- input Json
- output Json?
- config Json @default("{}")
- totalCost Float @default(0)
- totalTokens Int @default(0)
- priority Int @default(5)
-
- stages WorkflowStage[]
- logs WorkflowLog[]
- artifacts WorkflowArtifact[]
-
- @@index([status])
- @@map("workflow_runs")
- }
+ ```typescript
+ const ai = createAIHelper(
+ `workflow.${ctx.workflowRunId}.stage.${ctx.stageId}`,
+ aiCallLogger,
+ );
 
- // REQUIRED: WorkflowStage
- model WorkflowStage {
- id String @id @default(cuid())
- createdAt DateTime @default(now())
- updatedAt DateTime @updatedAt
- workflowRunId String
- stageId String
- stageName String
- stageNumber Int
- executionGroup Int
- status Status @default(PENDING)
- startedAt DateTime?
- completedAt DateTime?
- duration Int? // ⚠️ Must be "duration", not "durationMs"
- inputData Json?
- outputData Json?
- config Json?
- suspendedState Json?
- resumeData Json?
- nextPollAt DateTime?
- pollInterval Int?
- maxWaitUntil DateTime?
- metrics Json?
- embeddingInfo Json?
- errorMessage String?
-
- workflowRun WorkflowRun @relation(fields: [workflowRunId], references: [id], onDelete: Cascade)
- logs WorkflowLog[]
- artifacts WorkflowArtifact[]
-
- @@unique([workflowRunId, stageId])
- @@map("workflow_stages")
- }
+ const { text, cost } = await ai.generateText("gemini-2.5-flash", prompt);
+ const { object } = await ai.generateObject("gemini-2.5-flash", prompt, schema);
+ const { embedding } = await ai.embed("text-embedding-004", ["text1"], { dimensions: 768 });
+ ```
 
- // REQUIRED: WorkflowLog (missing = "Cannot read 'create'" error)
- model WorkflowLog {
- id String @id @default(cuid())
- createdAt DateTime @default(now())
- workflowRunId String?
- workflowStageId String?
- level LogLevel
- message String
- metadata Json?
-
- workflowRun WorkflowRun? @relation(fields: [workflowRunId], references: [id], onDelete: Cascade)
- workflowStage WorkflowStage? @relation(fields: [workflowStageId], references: [id], onDelete: Cascade)
-
- @@index([workflowRunId])
- @@map("workflow_logs")
- }
+ ## Persistence Setup
 
- // REQUIRED: WorkflowArtifact (missing = "Cannot read 'upsert'" error)
- model WorkflowArtifact {
- id String @id @default(cuid())
- createdAt DateTime @default(now())
- updatedAt DateTime @updatedAt
- workflowRunId String
- workflowStageId String?
- key String
- type ArtifactType
- data Json
- size Int
- metadata Json?
-
- workflowRun WorkflowRun @relation(fields: [workflowRunId], references: [id], onDelete: Cascade)
- workflowStage WorkflowStage? @relation(fields: [workflowStageId], references: [id], onDelete: Cascade)
-
- @@unique([workflowRunId, key])
- @@map("workflow_artifacts")
- }
+ ### Required Prisma Models (ALL are required)
 
- // REQUIRED: JobQueue
- model JobQueue {
- id String @id @default(cuid())
- createdAt DateTime @default(now())
- updatedAt DateTime @updatedAt
- workflowRunId String
- stageId String
- status Status @default(PENDING)
- priority Int @default(5)
- workerId String?
- lockedAt DateTime?
- startedAt DateTime?
- completedAt DateTime?
- attempt Int @default(0)
- maxAttempts Int @default(3)
- lastError String?
- nextPollAt DateTime?
- payload Json @default("{}")
-
- @@index([status, nextPollAt])
- @@map("job_queue")
- }
- ```
+ Copy the complete schema from the [package README](../../README.md#1-database-setup). This includes:
+ WorkflowRun, WorkflowStage, WorkflowLog, WorkflowArtifact, AICall, JobQueue, OutboxEvent, IdempotencyKey.
 
  ### Create Persistence
 
@@ -449,91 +355,71 @@ import {
  createPrismaAICallLogger,
  } from "@bratsos/workflow-engine/persistence/prisma";
 
- // PostgreSQL (default)
  const persistence = createPrismaWorkflowPersistence(prisma);
  const jobQueue = createPrismaJobQueue(prisma);
+ const aiCallLogger = createPrismaAICallLogger(prisma);
 
  // SQLite - MUST pass databaseType option
  const persistence = createPrismaWorkflowPersistence(prisma, { databaseType: "sqlite" });
  const jobQueue = createPrismaJobQueue(prisma, { databaseType: "sqlite" });
-
- const aiCallLogger = createPrismaAICallLogger(prisma);
- ```
-
- ## Runtime Configuration
-
- ```typescript
- const runtime = createWorkflowRuntime({
- persistence,
- jobQueue,
- aiCallLogger,
- registry: {
- getWorkflow: (id) => workflowMap[id] ?? null,
- },
-
- // Optional configuration
- pollIntervalMs: 10000, // Orchestration poll interval
- jobPollIntervalMs: 1000, // Job dequeue interval
- staleJobThresholdMs: 60000, // Stale job timeout
- workerId: "worker-1", // Custom worker ID
- });
-
- // Lifecycle
- await runtime.start(); // Start processing
- runtime.stop(); // Graceful shutdown
-
- // Manual operations
- await runtime.createRun({ workflowId, input });
- await runtime.transitionWorkflow(runId);
- await runtime.pollSuspendedStages();
  ```
 
  ## Testing
 
  ```typescript
+ // In-memory persistence and job queue
  import {
- TestWorkflowPersistence,
- TestJobQueue,
- MockAIHelper,
+ InMemoryWorkflowPersistence,
+ InMemoryJobQueue,
+ InMemoryAICallLogger,
  } from "@bratsos/workflow-engine/testing";
 
- const persistence = new TestWorkflowPersistence();
- const jobQueue = new TestJobQueue();
- const mockAI = new MockAIHelper();
-
- // Configure mock responses
- mockAI.mockGenerateText("Expected response");
- mockAI.mockGenerateObject({ items: ["a", "b"] });
-
- // Test stage execution
- const result = await myStage.execute({
- input: { data: "test" },
- config: { verbose: true },
- workflowContext: {},
- workflowRunId: "test-run",
- stageId: "my-stage",
- log: async () => {},
- storage: persistence.createStorage("test-run"),
+ // Kernel-specific test adapters
+ import {
+ FakeClock,
+ InMemoryBlobStore,
+ CollectingEventSink,
+ NoopScheduler,
+ } from "@bratsos/workflow-engine/kernel/testing";
+
+ // Create kernel with all in-memory adapters
+ const persistence = new InMemoryWorkflowPersistence();
+ const jobQueue = new InMemoryJobQueue();
+ const kernel = createKernel({
+ persistence,
+ blobStore: new InMemoryBlobStore(),
+ jobTransport: jobQueue,
+ eventSink: new CollectingEventSink(),
+ scheduler: new NoopScheduler(),
+ clock: new FakeClock(),
+ registry: { getWorkflow: (id) => workflows.get(id) },
  });
+
+ // Test a full workflow lifecycle
+ await kernel.dispatch({ type: "run.create", idempotencyKey: "test", workflowId: "my-wf", input: {} });
+ await kernel.dispatch({ type: "run.claimPending", workerId: "test-worker" });
+ const job = await jobQueue.dequeue();
+ await kernel.dispatch({ type: "job.execute", workflowRunId: job.workflowRunId, workflowId: job.workflowId, stageId: job.stageId, config: {} });
+ await kernel.dispatch({ type: "run.transition", workflowRunId: job.workflowRunId });
  ```
 
  ## Reference Files
 
- For detailed documentation, see the reference files:
-
  - [01-stage-definitions.md](references/01-stage-definitions.md) - Complete stage API
  - [02-workflow-builder.md](references/02-workflow-builder.md) - WorkflowBuilder patterns
- - [03-runtime-setup.md](references/03-runtime-setup.md) - Runtime configuration
+ - [03-kernel-host-setup.md](references/03-runtime-setup.md) - Kernel & host configuration
  - [04-ai-integration.md](references/04-ai-integration.md) - AI helper methods
  - [05-persistence-setup.md](references/05-persistence-setup.md) - Database setup
  - [06-async-batch-stages.md](references/06-async-batch-stages.md) - Async operations
- - [07-testing-patterns.md](references/07-testing-patterns.md) - Testing utilities
- - [08-common-patterns.md](references/08-common-patterns.md) - Best practices
+ - [07-testing-patterns.md](references/07-testing-patterns.md) - Testing with kernel
+ - [08-common-patterns.md](references/08-common-patterns.md) - Kernel patterns & best practices
 
  ## Key Principles
 
  1. **Type Safety**: All schemas are Zod - types flow through the entire pipeline
- 2. **Context Access**: Use `ctx.require()` and `ctx.optional()` for type-safe stage output access
- 3. **Unified Status**: Single `Status` enum for workflows, stages, and jobs
- 4. **Cost Tracking**: All AI calls automatically track tokens and costs
- 5. **Batch Savings**: Use async-batch stages for 50% cost savings on large operations
+ 2. **Command Kernel**: All operations are typed commands dispatched through `kernel.dispatch()`
+ 3. **Environment-Agnostic**: Kernel has no timers, no signals, no global state
+ 4. **Context Access**: Use `ctx.require()` and `ctx.optional()` for type-safe stage output access
+ 5. **Transactional Outbox**: Events written to outbox, published via `outbox.flush` command
+ 6. **Idempotency**: `run.create` and `job.execute` replay cached results by key; concurrent same-key dispatch throws `IdempotencyInProgressError`
+ 7. **Cost Tracking**: All AI calls automatically track tokens and costs
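The idempotency contract in principle 6 can be illustrated with a toy dispatcher (editor's sketch; `MiniDispatcher` is invented and only mimics the documented behavior: a completed key replays its cached result, an in-flight key throws):

```typescript
// Toy idempotency-key dispatcher. Not the kernel's code - an
// illustration of replay-by-key and in-progress rejection.
class IdempotencyInProgressError extends Error {}

type Entry =
  | { state: "in-progress" }
  | { state: "done"; result: unknown };

class MiniDispatcher {
  private entries = new Map<string, Entry>();

  async dispatch(key: string, handler: () => Promise<unknown>): Promise<unknown> {
    const existing = this.entries.get(key);
    if (existing?.state === "done") return existing.result; // replay cached result
    if (existing?.state === "in-progress") throw new IdempotencyInProgressError(key);
    this.entries.set(key, { state: "in-progress" });
    const result = await handler();
    this.entries.set(key, { state: "done", result });
    return result;
  }
}

let executions = 0;
const dispatcher = new MiniDispatcher();
const create = async () => ({ workflowRunId: `run-${++executions}` });
const first = await dispatcher.dispatch("create-order-42", create);
const second = await dispatcher.dispatch("create-order-42", create);
```

The handler runs once; the second dispatch returns the exact cached result, which is why retrying a `run.create` with the same `idempotencyKey` is safe.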