@lhi/n8m 0.2.1 → 0.2.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (37)
  1. package/README.md +105 -6
  2. package/dist/agentic/graph.d.ts +50 -0
  3. package/dist/agentic/graph.js +3 -11
  4. package/dist/agentic/nodes/architect.d.ts +5 -0
  5. package/dist/agentic/nodes/architect.js +8 -22
  6. package/dist/agentic/nodes/engineer.d.ts +15 -0
  7. package/dist/agentic/nodes/engineer.js +25 -4
  8. package/dist/agentic/nodes/qa.d.ts +1 -0
  9. package/dist/agentic/nodes/qa.js +280 -45
  10. package/dist/agentic/nodes/reviewer.d.ts +4 -0
  11. package/dist/agentic/nodes/reviewer.js +71 -13
  12. package/dist/agentic/nodes/supervisor.js +2 -3
  13. package/dist/agentic/state.d.ts +1 -0
  14. package/dist/agentic/state.js +4 -0
  15. package/dist/commands/create.js +162 -95
  16. package/dist/commands/doc.js +1 -1
  17. package/dist/commands/fixture.d.ts +12 -0
  18. package/dist/commands/fixture.js +258 -0
  19. package/dist/commands/test.d.ts +63 -4
  20. package/dist/commands/test.js +1179 -90
  21. package/dist/fixture-schema.json +162 -0
  22. package/dist/resources/node-definitions-fallback.json +185 -8
  23. package/dist/resources/node-test-hints.json +188 -0
  24. package/dist/resources/workflow-test-fixtures.json +42 -0
  25. package/dist/services/ai.service.d.ts +42 -0
  26. package/dist/services/ai.service.js +271 -21
  27. package/dist/services/node-definitions.service.d.ts +1 -0
  28. package/dist/services/node-definitions.service.js +4 -11
  29. package/dist/utils/config.js +2 -0
  30. package/dist/utils/fixtureManager.d.ts +28 -0
  31. package/dist/utils/fixtureManager.js +41 -0
  32. package/dist/utils/n8nClient.d.ts +27 -0
  33. package/dist/utils/n8nClient.js +169 -5
  34. package/dist/utils/spinner.d.ts +17 -0
  35. package/dist/utils/spinner.js +52 -0
  36. package/oclif.manifest.json +49 -1
  37. package/package.json +2 -2
package/README.md CHANGED
@@ -142,23 +142,120 @@ n8m doc
 
 ### `n8m test` — Validate and auto-repair a workflow
 
-Deploys a workflow ephemerally to your instance, validates it, and purges it
-when done. If validation fails, the repair loop kicks in automatically.
+Validates a workflow against your n8n instance. If it fails, the AI repair loop
+kicks in: analyzing the error, applying fixes, and retrying automatically.
 
 ```bash
-# Test a local file
+# Test a local file or remote workflow (browse to select)
 n8m test ./workflows/my-flow.json
-
-# Browse and pick from local files + instance workflows
 n8m test
+
+# Generate 3 diverse AI test scenarios (happy path, edge case, error)
+n8m test --ai-scenarios
+
+# Use a specific fixture file for offline testing
+n8m test --fixture .n8m/fixtures/abc123.json
+n8m test -f ./my-fixture.json
 ```
 
 - Resolves and deploys sub-workflow dependencies automatically
-- Patches node IDs after ephemeral deployment
+- After a passing live test, prompts to **save a fixture** for future offline runs
+- When a fixture exists for a workflow, prompts to **run offline** (no n8n calls)
 - After a passing test, prompts to deploy or save the validated/repaired version
 - **Auto-documents**: Generates or updates the project `README.md` upon saving.
 - All temporary assets are deleted on exit
 
+#### Offline testing with fixtures
+
+n8m can capture real execution data from n8n and replay it offline — no live
+instance, credentials, or external API calls needed.
+
+**First run — capture a fixture:**
+```bash
+n8m test   # runs live against your n8n instance
+# → Save fixture for future offline runs? [Y/n]   ← answer Y
+# → .n8m/fixtures/<workflowId>.json created
+```
+
+**Subsequent runs — replay offline:**
+```bash
+n8m test
+# → Fixture found from Mar 4, 2026, 10:30 AM. Run offline? [Y/n]
+```
+
+The offline mode uses your real node-by-node execution data, so the AI evaluator
+works with actual production outputs rather than mocked data. The AI healing loop
+still runs — if the captured execution shows an error, n8m will try to fix it and
+evaluate the fix against the real fixture data.
+
+---
+
+### `n8m fixture` — Manage test fixtures
+
+Two ways to create a fixture:
+
+```bash
+# Pull real execution data from n8n (no test run required)
+n8m fixture capture <workflowId>
+
+# Scaffold an empty template to fill in by hand
+n8m fixture init <workflowId>
+```
+
+**`capture`** connects to your n8n instance, fetches the 25 most recent
+executions for the workflow, and presents an interactive menu to pick which one
+to save as a fixture — no tests run. Use this when you have a workflow that
+already ran successfully in n8n and you want to lock in that execution data for
+offline testing going forward.
+
+```bash
+n8m fixture capture abc123
+# → Fetching executions for workflow abc123...
+# → ? Select an execution to capture:
+# →     #177916  success  3/4/2026, 10:48:47 AM
+# →     #177914  success  3/4/2026, 10:48:23 AM
+# →   ❯ #177913  error    3/4/2026, 10:47:59 AM
+# → Selected execution 177913
+# → Fixture saved to .n8m/fixtures/abc123.json
+# →   Workflow:  My Workflow
+# →   Execution: error · 5 node(s) captured
+```
+
+**`init`** creates an empty template when you want to define the fixture data
+yourself, without needing a live execution first.
+
+```json
+{
+  "$schema": "../../node_modules/n8m/dist/fixture-schema.json",
+  "version": "1.0",
+  "workflowId": "abc123",
+  "workflowName": "My Workflow",
+  "workflow": { "name": "My Workflow", "nodes": [], "connections": {} },
+  "execution": {
+    "status": "success",
+    "data": {
+      "resultData": {
+        "error": null,
+        "runData": {
+          "Your Node Name": [{ "json": { "key": "value" } }]
+        }
+      }
+    }
+  }
+}
+```
+
+Fill in `execution.data.resultData.runData` with the actual output of each node
+(keyed by exact node name). Then test against it:
+
+```bash
+n8m test --fixture .n8m/fixtures/abc123.json
+```
+
+Fixture files are project-local (`.n8m/fixtures/`) and should be committed to
+your repo so your team can run the same offline tests. Add the `$schema` field to
+get autocomplete and validation in any editor that supports JSON Schema.
+
 ---
 
 ### `n8m deploy` — Push a workflow to n8n
@@ -306,5 +403,7 @@ npm run dev
 - [x] AI-driven test scenario generation (`--ai-scenarios`)
 - [x] Static node type reference & fallback mechanism
 - [x] Multi-workflow project generation support
+- [x] Fixture record & replay — offline testing with real execution data
+- [x] Hand-crafted fixture scaffolding (`n8m fixture init`) with JSON Schema
 - [ ] Native n8n canvas integration
 - [ ] Multi-agent collaboration on a single goal
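The fixture layout documented in the README changes above can be sanity-checked with a few lines of plain Node. A minimal sketch against an inline fixture object — the `unknownRunDataKeys` helper is hypothetical and not part of n8m:

```javascript
// Minimal fixture object following the shape shown by `n8m fixture init`
// (fields per the README's documented template).
const fixture = {
  version: "1.0",
  workflowId: "abc123",
  workflowName: "My Workflow",
  workflow: {
    name: "My Workflow",
    nodes: [{ name: "Your Node Name" }],
    connections: {},
  },
  execution: {
    status: "success",
    data: {
      resultData: {
        error: null,
        runData: { "Your Node Name": [{ json: { key: "value" } }] },
      },
    },
  },
};

// Hypothetical check: every runData key should name a node in the workflow,
// so offline replay can line captured output up with the right node.
function unknownRunDataKeys(fix) {
  const nodeNames = new Set(fix.workflow.nodes.map((n) => n.name));
  return Object.keys(fix.execution.data.resultData.runData).filter(
    (name) => !nodeNames.has(name)
  );
}

console.log(unknownRunDataKeys(fixture)); // []
```

In a real project you would `JSON.parse` a file from `.n8m/fixtures/` instead of inlining the object; the shipped `fixture-schema.json` gives you the same guarantee in the editor via `$schema`.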
package/dist/agentic/graph.d.ts CHANGED
@@ -15,6 +15,7 @@ export declare const graph: import("@langchain/langgraph").CompiledStateGraph<{
 collaborationLog: string[];
 userFeedback: string;
 testScenarios: any[];
+maxRevisions: number;
 }, {
 userGoal?: string | undefined;
 spec?: any;
@@ -31,6 +32,7 @@ export declare const graph: import("@langchain/langgraph").CompiledStateGraph<{
 collaborationLog?: string[] | undefined;
 userFeedback?: string | undefined;
 testScenarios?: any[] | undefined;
+maxRevisions?: number | undefined;
 }, "__start__" | "architect" | "engineer" | "reviewer" | "supervisor" | "qa", {
 userGoal: {
 (): import("@langchain/langgraph").LastValue<string>;
@@ -91,6 +93,7 @@ export declare const graph: import("@langchain/langgraph").CompiledStateGraph<{
 (annotation: import("@langchain/langgraph").SingleReducer<any[], any[]>): import("@langchain/langgraph").BinaryOperatorAggregate<any[], any[]>;
 Root: <S extends import("@langchain/langgraph").StateDefinition>(sd: S) => import("@langchain/langgraph").AnnotationRoot<S>;
 };
+maxRevisions: import("@langchain/langgraph").BinaryOperatorAggregate<number, number>;
 }, {
 userGoal: {
 (): import("@langchain/langgraph").LastValue<string>;
@@ -151,8 +154,14 @@ export declare const graph: import("@langchain/langgraph").CompiledStateGraph<{
 (annotation: import("@langchain/langgraph").SingleReducer<any[], any[]>): import("@langchain/langgraph").BinaryOperatorAggregate<any[], any[]>;
 Root: <S extends import("@langchain/langgraph").StateDefinition>(sd: S) => import("@langchain/langgraph").AnnotationRoot<S>;
 };
+maxRevisions: import("@langchain/langgraph").BinaryOperatorAggregate<number, number>;
 }, import("@langchain/langgraph").StateDefinition, {
 architect: {
+spec?: undefined;
+strategies?: undefined;
+needsClarification?: undefined;
+collaborationLog?: undefined;
+} | {
 spec: import("../services/ai.service.js").WorkflowSpec;
 strategies: {
 strategyName: string;
@@ -171,13 +180,28 @@ export declare const graph: import("@langchain/langgraph").CompiledStateGraph<{
 collaborationLog: string[];
 };
 engineer: {
+revisionCount: number;
+validationStatus: "failed";
+validationErrors: string[];
+workflowJson?: undefined;
+candidates?: undefined;
+} | {
 workflowJson: any;
+revisionCount: number;
+validationStatus?: undefined;
+validationErrors?: undefined;
 candidates?: undefined;
 } | {
+revisionCount?: undefined;
+validationStatus?: undefined;
+validationErrors?: undefined;
 workflowJson?: undefined;
 candidates?: undefined;
 } | {
 candidates: any[];
+revisionCount?: undefined;
+validationStatus?: undefined;
+validationErrors?: undefined;
 workflowJson?: undefined;
 };
 reviewer: import("@langchain/langgraph").UpdateType<{
@@ -240,6 +264,7 @@ export declare const graph: import("@langchain/langgraph").CompiledStateGraph<{
 (annotation: import("@langchain/langgraph").SingleReducer<any[], any[]>): import("@langchain/langgraph").BinaryOperatorAggregate<any[], any[]>;
 Root: <S extends import("@langchain/langgraph").StateDefinition>(sd: S) => import("@langchain/langgraph").AnnotationRoot<S>;
 };
+maxRevisions: import("@langchain/langgraph").BinaryOperatorAggregate<number, number>;
 }>;
 supervisor: {
 workflowJson?: undefined;
@@ -308,6 +333,7 @@ export declare const graph: import("@langchain/langgraph").CompiledStateGraph<{
 (annotation: import("@langchain/langgraph").SingleReducer<any[], any[]>): import("@langchain/langgraph").BinaryOperatorAggregate<any[], any[]>;
 Root: <S extends import("@langchain/langgraph").StateDefinition>(sd: S) => import("@langchain/langgraph").AnnotationRoot<S>;
 };
+maxRevisions: import("@langchain/langgraph").BinaryOperatorAggregate<number, number>;
 }>;
 }, unknown, unknown>;
 /**
@@ -376,6 +402,7 @@ export declare const runAgenticWorkflow: (goal: string, initialState?: Partial<t
 (annotation: import("@langchain/langgraph").SingleReducer<any[], any[]>): import("@langchain/langgraph").BinaryOperatorAggregate<any[], any[]>;
 Root: <S extends import("@langchain/langgraph").StateDefinition>(sd: S) => import("@langchain/langgraph").AnnotationRoot<S>;
 };
+maxRevisions: import("@langchain/langgraph").BinaryOperatorAggregate<number, number>;
 }>>;
 /**
 * Run the Agentic Workflow with Streaming
@@ -384,6 +411,11 @@ export declare const runAgenticWorkflow: (goal: string, initialState?: Partial<t
 */
 export declare const runAgenticWorkflowStream: (goal: string, threadId?: string) => Promise<import("@langchain/core/utils/stream").IterableReadableStream<{
 architect?: {
+spec?: undefined;
+strategies?: undefined;
+needsClarification?: undefined;
+collaborationLog?: undefined;
+} | {
 spec: import("../services/ai.service.js").WorkflowSpec;
 strategies: {
 strategyName: string;
@@ -402,13 +434,28 @@ export declare const runAgenticWorkflowStream: (goal: string, threadId?: string)
 collaborationLog: string[];
 } | undefined;
 engineer?: {
+revisionCount: number;
+validationStatus: "failed";
+validationErrors: string[];
+workflowJson?: undefined;
+candidates?: undefined;
+} | {
 workflowJson: any;
+revisionCount: number;
+validationStatus?: undefined;
+validationErrors?: undefined;
 candidates?: undefined;
 } | {
+revisionCount?: undefined;
+validationStatus?: undefined;
+validationErrors?: undefined;
 workflowJson?: undefined;
 candidates?: undefined;
 } | {
 candidates: any[];
+revisionCount?: undefined;
+validationStatus?: undefined;
+validationErrors?: undefined;
 workflowJson?: undefined;
 } | undefined;
 reviewer?: import("@langchain/langgraph").UpdateType<{
@@ -471,6 +518,7 @@ export declare const runAgenticWorkflowStream: (goal: string, threadId?: string)
 (annotation: import("@langchain/langgraph").SingleReducer<any[], any[]>): import("@langchain/langgraph").BinaryOperatorAggregate<any[], any[]>;
 Root: <S extends import("@langchain/langgraph").StateDefinition>(sd: S) => import("@langchain/langgraph").AnnotationRoot<S>;
 };
+maxRevisions: import("@langchain/langgraph").BinaryOperatorAggregate<number, number>;
 }> | undefined;
 supervisor?: {
 workflowJson?: undefined;
@@ -539,6 +587,7 @@ export declare const runAgenticWorkflowStream: (goal: string, threadId?: string)
 (annotation: import("@langchain/langgraph").SingleReducer<any[], any[]>): import("@langchain/langgraph").BinaryOperatorAggregate<any[], any[]>;
 Root: <S extends import("@langchain/langgraph").StateDefinition>(sd: S) => import("@langchain/langgraph").AnnotationRoot<S>;
 };
+maxRevisions: import("@langchain/langgraph").BinaryOperatorAggregate<number, number>;
 }> | undefined;
 }>>;
 /**
@@ -604,4 +653,5 @@ export declare const resumeAgenticWorkflow: (threadId: string, input?: any) => P
 (annotation: import("@langchain/langgraph").SingleReducer<any[], any[]>): import("@langchain/langgraph").BinaryOperatorAggregate<any[], any[]>;
 Root: <S extends import("@langchain/langgraph").StateDefinition>(sd: S) => import("@langchain/langgraph").AnnotationRoot<S>;
 };
+maxRevisions: import("@langchain/langgraph").BinaryOperatorAggregate<number, number>;
 }>>;
package/dist/agentic/graph.js CHANGED
@@ -1,4 +1,4 @@
-import { StateGraph, START, END, Send } from "@langchain/langgraph";
+import { StateGraph, START, END } from "@langchain/langgraph";
 import { checkpointer } from "./checkpointer.js";
 import { architectNode } from "./nodes/architect.js";
 import { engineerNode } from "./nodes/engineer.js";
@@ -15,14 +15,8 @@ const workflow = new StateGraph(TeamState)
 .addNode("qa", qaNode)
 // Edges
 .addEdge(START, "architect")
-// Parallel Fan-Out: Architect -> Multiple Engineers (via Send)
-.addConditionalEdges("architect", (state) => {
-if (state.strategies && state.strategies.length > 0) {
-return state.strategies.map(s => new Send("engineer", { spec: s }));
-}
-// Fallback for linear path
-return "engineer";
-}, ["engineer"]) // We must declare valid destination nodes for visualization/compilation
+// Architect -> Engineer (spec is chosen interactively in create.ts before resuming)
+.addEdge("architect", "engineer")
 // Fan-In: Engineer -> Supervisor (Wait for all to finish) or route to Reviewer (if fixing)
 .addConditionalEdges("engineer", (state) => {
 // If we have errors, we are in "Repair Mode" -> Skip Supervisor (which only handles fresh candidates)
@@ -62,7 +56,6 @@ export const graph = workflow.compile({
 * @returns The final state of the graph
 */
 export const runAgenticWorkflow = async (goal, initialState = {}, threadId = "default_session") => {
-console.log(`🚀 Starting Agentic Workflow for goal: "${goal}" (Thread: ${threadId})`);
 const result = await graph.invoke({
 userGoal: goal,
 messages: [],
@@ -93,7 +86,6 @@ export const runAgenticWorkflowStream = async (goal, threadId = "default_session
 * Resume the Agentic Workflow from an interrupted state
 */
 export const resumeAgenticWorkflow = async (threadId, input) => {
-console.log(`▶️ Resuming Agentic Workflow (Thread: ${threadId})`);
 return await graph.invoke(input, {
 configurable: { thread_id: threadId }
 });
package/dist/agentic/nodes/architect.d.ts CHANGED
@@ -1,5 +1,10 @@
 import { TeamState } from "../state.js";
 export declare const architectNode: (state: typeof TeamState.State) => Promise<{
+spec?: undefined;
+strategies?: undefined;
+needsClarification?: undefined;
+collaborationLog?: undefined;
+} | {
 spec: import("../../services/ai.service.js").WorkflowSpec;
 strategies: {
 strategyName: string;
package/dist/agentic/nodes/architect.js CHANGED
@@ -4,27 +4,14 @@ export const architectNode = async (state) => {
 if (!state.userGoal) {
 throw new Error("User goal is missing from state.");
 }
-// Pass-through if we already have a workflow (Repairs/Testing mode)
-// BUT if we have a goal that implies modification, we should probably still generate a spec?
-// For now, let's allow spec generation even if workflowJson exists, so the Engineer can use the spec + old workflow to make new one.
-// The logic in Architect assumes "generateSpec" creates a NEW spec from scratch.
-// We might need a "modifySpec" or just rely on the Engineer to interpret the goal + existing workflow.
-// If we skip the architect, we go straight to Engineer?
-// The graph edges are: START -> architect -> engineer.
-// If we return empty here, 'spec' is undefined in state.
-// Engineer checks state.spec.
-// If we want to support modification, the Architect should probably analyze the request vs the current workflow.
-// However, for the first MVP, if we return empty, the Engineer will run.
-// Does Engineer handle "no spec" but "has workflowJson" + "userGoal"?
-// Let's assume we want the Architect to generate a plan (Spec) for the modification.
-// So we REMOVE this early return, or condition it on "isRepair" vs "isModify".
-// Since we don't have an explicit flag, we can just let it run.
-// The prompt for generateSpec might need to know about the existing workflow?
-// Currently generateSpec only sees the goal.
-// Let's comment it out for now to allow Architect to run.
-// if (state.workflowJson) {
-// return {};
-// }
+// Validation / repair mode: an existing workflow was supplied.
+// Skip spec generation entirely: the engineer and reviewer operate directly
+// on the existing workflowJson. Generating a brand-new spec here causes the
+// parallel engineers (via Send) to rebuild the workflow from scratch, which
+// produces very large JSON that is error-prone and throws away the user's work.
+if (state.workflowJson) {
+return {};
+}
 try {
 const spec = await aiService.generateSpec(state.userGoal);
 // Check if the spec requires clarification
@@ -47,7 +34,6 @@ export const architectNode = async (state) => {
 },
 ];
 const logEntry = `Architect: Generated 2 strategies — "${strategies[0].suggestedName}" (primary) and "${strategies[1].suggestedName}" (alternative)`;
-console.log(`[Architect] ${logEntry}`);
 return {
 spec,
 strategies,
package/dist/agentic/nodes/engineer.d.ts CHANGED
@@ -1,11 +1,26 @@
 import { TeamState } from "../state.js";
 export declare const engineerNode: (state: typeof TeamState.State) => Promise<{
+revisionCount: number;
+validationStatus: "failed";
+validationErrors: string[];
+workflowJson?: undefined;
+candidates?: undefined;
+} | {
 workflowJson: any;
+revisionCount: number;
+validationStatus?: undefined;
+validationErrors?: undefined;
 candidates?: undefined;
 } | {
+revisionCount?: undefined;
+validationStatus?: undefined;
+validationErrors?: undefined;
 workflowJson?: undefined;
 candidates?: undefined;
 } | {
 candidates: any[];
+revisionCount?: undefined;
+validationStatus?: undefined;
+validationErrors?: undefined;
 workflowJson?: undefined;
 }>;
package/dist/agentic/nodes/engineer.js CHANGED
@@ -1,6 +1,7 @@
 import { AIService } from "../../services/ai.service.js";
 import { NodeDefinitionsService } from "../../services/node-definitions.service.js";
 import { jsonrepair } from "jsonrepair";
+import { theme } from "../../utils/theme.js";
 export const engineerNode = async (state) => {
 const aiService = AIService.getInstance();
 // RAG: Load and Search Node Definitions
@@ -14,12 +15,31 @@ export const engineerNode = async (state) => {
 const ragContext = (relevantDefs.length > 0 || staticRef)
 ? `\n\n[N8N NODE REFERENCE GUIDE]\n${staticRef}\n\n[AVAILABLE NODE SCHEMAS - USE THESE EXACT PARAMETERS]\n${nodeService.formatForLLM(relevantDefs)}`
 : "";
-if (relevantDefs.length > 0) {
-console.log(`[Engineer] RAG: Found ${relevantDefs.length} relevant node schemas.`);
-}
 // Self-Correction Loop Check
 if (state.validationErrors && state.validationErrors.length > 0) {
-console.log("🔧 Engineer is fixing the workflow based on QA feedback...");
+const currentRevision = (state.revisionCount || 0) + 1;
+const maxRevisions = state.maxRevisions || 3;
+if (currentRevision > maxRevisions) {
+console.log(theme.fail(`Max self-healing revisions (${maxRevisions}) reached. Manual intervention required.`));
+return {
+revisionCount: currentRevision,
+validationStatus: 'failed',
+validationErrors: [
+`Self-healing limit (${maxRevisions} revisions) exceeded. Remaining issues:`,
+...state.validationErrors,
+],
+};
+}
+const errCount = state.validationErrors.length;
+console.log(theme.agent(`Repairing workflow — revision ${currentRevision}/${maxRevisions} (${errCount} issue${errCount === 1 ? '' : 's'})...`));
+const MAX_SHOWN = 4;
+state.validationErrors.slice(0, MAX_SHOWN).forEach(e => {
+const truncated = e.length > 110 ? e.substring(0, 110) + '…' : e;
+console.log(theme.muted(`   ↳ ${truncated}`));
+});
+if (errCount > MAX_SHOWN) {
+console.log(theme.muted(`   ↳ +${errCount - MAX_SHOWN} more`));
+}
 try {
 // We pass the entire list of errors as context
 const errorContext = state.validationErrors.join('\n');
@@ -28,6 +48,7 @@ export const engineerNode = async (state) => {
 false, state.availableNodeTypes || []);
 return {
 workflowJson: fixedWorkflow,
+revisionCount: currentRevision,
 // validationErrors will be overwritten by next QA run
 };
 }
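The self-healing cap added to `engineerNode` above can be reduced to a small pure function. A sketch of the same decision logic — `nextRevision` is an illustrative name, not one of the package's exports:

```javascript
// Decide whether the repair loop may attempt another revision.
// Mirrors the guard in the engineer.js diff: revisions are 1-based and
// capped at state.maxRevisions (default 3, as in the patch).
function nextRevision(state) {
  const current = (state.revisionCount || 0) + 1;
  const max = state.maxRevisions || 3;
  if (current > max) {
    // Cap exceeded: report a hard failure carrying the unresolved errors.
    return {
      revisionCount: current,
      validationStatus: "failed",
      validationErrors: [
        `Self-healing limit (${max} revisions) exceeded.`,
        ...(state.validationErrors || []),
      ],
    };
  }
  // Still within budget: record the attempt and let the repair proceed.
  return { revisionCount: current, proceed: true };
}

console.log(nextRevision({ revisionCount: 3, maxRevisions: 3 }).validationStatus);
// → "failed"  (a 4th attempt exceeds the cap of 3)
```

Keeping the counter in graph state (rather than a local variable) is what lets the cap survive the engineer → qa → engineer round trips, since each node invocation only sees the state it is handed.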
package/dist/agentic/nodes/qa.d.ts CHANGED
@@ -1,5 +1,6 @@
 import { TeamState } from "../state.js";
 export declare const qaNode: (state: typeof TeamState.State) => Promise<{
+workflowJson?: any;
 validationStatus: string;
 validationErrors: string[];
 }>;