@lhi/n8m 0.1.3 → 0.2.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,12 +1,15 @@
1
1
  # n8m: The Agentic CLI for n8n
2
2
 
3
- > Generate, modify, test, and deploy n8n workflows from the command line using AI.
3
+ > Generate, modify, test, and deploy n8n workflows from the command line using
4
+ > AI.
4
5
 
5
6
  [![TypeScript](https://badgen.net/badge/Built%20with/TypeScript/blue)](https://typescriptlang.org/)
6
7
  [![oclif](https://badgen.net/badge/CLI/oclif/purple)](https://oclif.io/)
7
8
  [![n8n](https://badgen.net/badge/n8n/Compatible/orange)](https://n8n.io)
8
9
 
9
- **Stop clicking. Start shipping.** `n8m` is an open-source CLI that wraps your n8n instance with an agentic AI layer. Describe what you want in plain English — the agent designs, builds, validates, and deploys it.
10
+ **Stop clicking. Start shipping.** `n8m` is an open-source CLI that wraps your
11
+ n8n instance with an agentic AI layer. Describe what you want in plain English —
12
+ the agent designs, builds, validates, and deploys it.
10
13
 
11
14
  No account. No server. Bring your own AI key and your n8n instance.
12
15
 
@@ -26,7 +29,8 @@ npm install -g n8m
26
29
 
27
30
  ### 1. Configure your AI provider
28
31
 
29
- `n8m` stores credentials in `~/.n8m/config.json` so they persist across sessions — including `npx` invocations.
32
+ `n8m` stores credentials in `~/.n8m/config.json` so they persist across sessions
33
+ — including `npx` invocations.
30
34
 
31
35
  ```bash
32
36
  # OpenAI
@@ -42,13 +46,14 @@ npx n8m config --ai-provider gemini --ai-key AIza...
42
46
  npx n8m config --ai-base-url http://localhost:11434/v1 --ai-key ollama --ai-model llama3
43
47
  ```
44
48
 
45
- You can also use environment variables or a `.env` file — env vars take priority over stored config:
49
+ You can also use environment variables or a `.env` file — env vars take priority
50
+ over stored config:
46
51
 
47
- | Variable | Description |
48
- |---|---|
49
- | `AI_PROVIDER` | Preset: `openai`, `anthropic`, or `gemini` |
50
- | `AI_API_KEY` | API key for your provider |
51
- | `AI_MODEL` | Override the model (optional) |
52
+ | Variable | Description |
53
+ | ------------- | -------------------------------------------------------- |
54
+ | `AI_PROVIDER` | Preset: `openai`, `anthropic`, or `gemini` |
55
+ | `AI_API_KEY` | API key for your provider |
56
+ | `AI_MODEL` | Override the model (optional) |
52
57
  | `AI_BASE_URL` | Custom base URL for any OpenAI-compatible API (optional) |
53
58
 
54
59
  Default models per provider: `gpt-4o` · `claude-sonnet-4-6` · `gemini-2.5-flash`
@@ -59,7 +64,8 @@ Default models per provider: `gpt-4o` · `claude-sonnet-4-6` · `gemini-2.5-flas
59
64
  npx n8m config --n8n-url https://your-n8n.example.com --n8n-key <your-n8n-api-key>
60
65
  ```
61
66
 
62
- Credentials are saved locally to `~/.n8m/config.json`. You can also use environment variables `N8N_API_URL` and `N8N_API_KEY` instead.
67
+ Credentials are saved locally to `~/.n8m/config.json`. You can also use
68
+ environment variables `N8N_API_URL` and `N8N_API_KEY` instead.
63
69
 
64
70
  ---
65
71
 
@@ -67,7 +73,8 @@ Credentials are saved locally to `~/.n8m/config.json`. You can also use environm
67
73
 
68
74
  ### `n8m create` — Generate a workflow
69
75
 
70
- Describe what you want and the agentic pipeline designs, builds, and validates it.
76
+ Describe what you want and the agentic pipeline designs, builds, and validates
77
+ it.
71
78
 
72
79
  ```bash
73
80
  n8m create "Send a Slack message whenever a new row is added to a Google Sheet"
@@ -80,11 +87,17 @@ n8m create --multiline
80
87
  ```
81
88
 
82
89
  The agent runs through three stages:
90
+
83
91
  1. **Architect** — designs the blueprint and identifies required nodes
84
92
  2. **Engineer** — generates the workflow JSON
85
93
  3. **QA** — validates the result; loops back to Engineer if issues are found
86
94
 
87
- The finished workflow is saved as a local JSON file (default: `./workflows/`).
95
+ The finished workflow is saved to an organized project folder (default:
96
+ `./workflows/<project-slug>/`). Each project folder contains:
97
+
98
+ - `workflow.json`: The generated n8n workflow.
99
+ - `README.md`: Automatic documentation including a Mermaid.js diagram and an
100
+ AI-generated summary.
88
101
 
89
102
  ---
90
103
 
@@ -103,13 +116,34 @@ n8m modify
103
116
  n8m modify --multiline
104
117
  ```
105
118
 
106
- After modification you'll be prompted to save locally, deploy to your instance, or run a test.
119
+ After modification you'll be prompted to save locally (organized into its
120
+ project folder), deploy to your instance, or run a test.
121
+
122
+ ---
123
+
124
+ ### `n8m doc` — Generate documentation
125
+
126
+ Generate visual and text documentation for existing local or remote workflows.
127
+
128
+ ```bash
129
+ # Document a local workflow file
130
+ n8m doc ./workflows/my-workflow.json
131
+
132
+ # Browse and select from local files + remote instance
133
+ n8m doc
134
+ ```
135
+
136
+ - Generates a `README.md` in the workflow's project directory.
137
+ - Includes a **Mermaid.js** flowchart of the workflow logic.
138
+ - Includes an **AI-generated summary** of the nodes and execution flow.
139
+ - Automatically organizes loose `.json` files into project folders.
107
140
 
108
141
  ---
109
142
 
110
143
  ### `n8m test` — Validate and auto-repair a workflow
111
144
 
112
- Deploys a workflow ephemerally to your instance, validates it, and purges it when done. If validation fails, the repair loop kicks in automatically.
145
+ Deploys a workflow ephemerally to your instance, validates it, and purges it
146
+ when done. If validation fails, the repair loop kicks in automatically.
113
147
 
114
148
  ```bash
115
149
  # Test a local file
@@ -122,6 +156,7 @@ n8m test
122
156
  - Resolves and deploys sub-workflow dependencies automatically
123
157
  - Patches node IDs after ephemeral deployment
124
158
  - After a passing test, prompts to deploy or save the validated/repaired version
159
+ - **Auto-documents**: Generates or updates the project `README.md` upon saving.
125
160
  - All temporary assets are deleted on exit
126
161
 
127
162
  ---
@@ -141,19 +176,22 @@ n8m deploy ./workflows/my-flow.json --activate
141
176
 
142
177
  ### `n8m resume` — Resume a paused session
143
178
 
144
- The agent can pause mid-run for human review (HITL). Resume it with its thread ID.
179
+ The agent can pause mid-run for human review (HITL). Resume it with its thread
180
+ ID.
145
181
 
146
182
  ```bash
147
183
  n8m resume <thread-id>
148
184
  ```
149
185
 
150
- Sessions are persisted to a local SQLite database, so they survive crashes and restarts.
186
+ Sessions are persisted to a local SQLite database, so they survive crashes and
187
+ restarts.
151
188
 
152
189
  ---
153
190
 
154
191
  ### `n8m prune` — Clean up your instance
155
192
 
156
- Removes duplicate workflows and leftover test artifacts (`[n8m:test:*]` prefixed names).
193
+ Removes duplicate workflows and leftover test artifacts (`[n8m:test:*]` prefixed
194
+ names).
157
195
 
158
196
  ```bash
159
197
  # Preview what would be deleted
@@ -167,7 +205,8 @@ n8m prune --force
167
205
 
168
206
  ### `n8m config` — Manage configuration
169
207
 
170
- All credentials are saved to `~/.n8m/config.json` and persist across sessions (including `npx` invocations).
208
+ All credentials are saved to `~/.n8m/config.json` and persist across sessions
209
+ (including `npx` invocations).
171
210
 
172
211
  ```bash
173
212
  # Set AI provider
@@ -209,13 +248,23 @@ Developer → n8m create "..."
209
248
  └──────┬──────┘ └─────────────┘
210
249
  │ passed
211
250
 
212
- ./workflows/output.json
251
+
252
+ ./workflows/<slug>/
253
+ ├── workflow.json
254
+ └── README.md (with Mermaid diagram)
213
255
  ```
214
256
 
215
257
  - **Local first**: credentials and workflow files live on your machine
258
+ - **Organized Projects**: Workflows are grouped into folders with auto-generated
259
+ documentation
216
260
  - **SQLite persistence**: session state survives interruptions
217
261
  - **HITL pauses**: the agent stops for your review before committing
218
- - **Bring your own AI**: works with OpenAI, Claude, Gemini, Ollama, or any OpenAI-compatible API
262
+ - **Bring your own AI**: works with OpenAI, Claude, Gemini, Ollama, or any
263
+ OpenAI-compatible API
264
+
265
+ > **For developers**: See the [Developer Guide](docs/DEVELOPER_GUIDE.md) for a
266
+ > deep-dive into the agentic graph internals, RAG implementation, how to add new
267
+ > agent nodes, and how to extend the CLI.
219
268
 
220
269
  ---
221
270
 
@@ -250,6 +299,12 @@ npm run dev
250
299
  - [x] HITL interrupts and resume
251
300
  - [x] Sub-workflow dependency resolution in tests
252
301
  - [x] Open source — no account required
253
- - [x] Multi-provider AI support (OpenAI, Claude, Gemini, Ollama, any OpenAI-compatible API)
302
+ - [x] Multi-provider AI support (OpenAI, Claude, Gemini, Ollama, any
303
+ OpenAI-compatible API)
304
+ - [x] Automatic documentation generation (Mermaid + AI Summary)
305
+ - [x] Project-based folder organization
306
+ - [x] AI-driven test scenario generation (`--ai-scenarios`)
307
+ - [x] Static node type reference & fallback mechanism
308
+ - [x] Multi-workflow project generation support
254
309
  - [ ] Native n8n canvas integration
255
310
  - [ ] Multi-agent collaboration on a single goal
@@ -13,6 +13,8 @@ export declare const graph: import("@langchain/langgraph").CompiledStateGraph<{
13
13
  candidates: any[];
14
14
  customTools: Record<string, string>;
15
15
  collaborationLog: string[];
16
+ userFeedback: string;
17
+ testScenarios: any[];
16
18
  }, {
17
19
  userGoal?: string | undefined;
18
20
  spec?: any;
@@ -27,6 +29,8 @@ export declare const graph: import("@langchain/langgraph").CompiledStateGraph<{
27
29
  candidates?: any[] | undefined;
28
30
  customTools?: Record<string, string> | undefined;
29
31
  collaborationLog?: string[] | undefined;
32
+ userFeedback?: string | undefined;
33
+ testScenarios?: any[] | undefined;
30
34
  }, "__start__" | "architect" | "engineer" | "reviewer" | "supervisor" | "qa", {
31
35
  userGoal: {
32
36
  (): import("@langchain/langgraph").LastValue<string>;
@@ -77,6 +81,16 @@ export declare const graph: import("@langchain/langgraph").CompiledStateGraph<{
77
81
  candidates: import("@langchain/langgraph").BinaryOperatorAggregate<any[], any[]>;
78
82
  customTools: import("@langchain/langgraph").BinaryOperatorAggregate<Record<string, string>, Record<string, string>>;
79
83
  collaborationLog: import("@langchain/langgraph").BinaryOperatorAggregate<string[], string[]>;
84
+ userFeedback: {
85
+ (): import("@langchain/langgraph").LastValue<string>;
86
+ (annotation: import("@langchain/langgraph").SingleReducer<string, string>): import("@langchain/langgraph").BinaryOperatorAggregate<string, string>;
87
+ Root: <S extends import("@langchain/langgraph").StateDefinition>(sd: S) => import("@langchain/langgraph").AnnotationRoot<S>;
88
+ };
89
+ testScenarios: {
90
+ (): import("@langchain/langgraph").LastValue<any[]>;
91
+ (annotation: import("@langchain/langgraph").SingleReducer<any[], any[]>): import("@langchain/langgraph").BinaryOperatorAggregate<any[], any[]>;
92
+ Root: <S extends import("@langchain/langgraph").StateDefinition>(sd: S) => import("@langchain/langgraph").AnnotationRoot<S>;
93
+ };
80
94
  }, {
81
95
  userGoal: {
82
96
  (): import("@langchain/langgraph").LastValue<string>;
@@ -127,6 +141,16 @@ export declare const graph: import("@langchain/langgraph").CompiledStateGraph<{
127
141
  candidates: import("@langchain/langgraph").BinaryOperatorAggregate<any[], any[]>;
128
142
  customTools: import("@langchain/langgraph").BinaryOperatorAggregate<Record<string, string>, Record<string, string>>;
129
143
  collaborationLog: import("@langchain/langgraph").BinaryOperatorAggregate<string[], string[]>;
144
+ userFeedback: {
145
+ (): import("@langchain/langgraph").LastValue<string>;
146
+ (annotation: import("@langchain/langgraph").SingleReducer<string, string>): import("@langchain/langgraph").BinaryOperatorAggregate<string, string>;
147
+ Root: <S extends import("@langchain/langgraph").StateDefinition>(sd: S) => import("@langchain/langgraph").AnnotationRoot<S>;
148
+ };
149
+ testScenarios: {
150
+ (): import("@langchain/langgraph").LastValue<any[]>;
151
+ (annotation: import("@langchain/langgraph").SingleReducer<any[], any[]>): import("@langchain/langgraph").BinaryOperatorAggregate<any[], any[]>;
152
+ Root: <S extends import("@langchain/langgraph").StateDefinition>(sd: S) => import("@langchain/langgraph").AnnotationRoot<S>;
153
+ };
130
154
  }, import("@langchain/langgraph").StateDefinition, {
131
155
  architect: {
132
156
  spec: import("../services/ai.service.js").WorkflowSpec;
@@ -206,6 +230,16 @@ export declare const graph: import("@langchain/langgraph").CompiledStateGraph<{
206
230
  candidates: import("@langchain/langgraph").BinaryOperatorAggregate<any[], any[]>;
207
231
  customTools: import("@langchain/langgraph").BinaryOperatorAggregate<Record<string, string>, Record<string, string>>;
208
232
  collaborationLog: import("@langchain/langgraph").BinaryOperatorAggregate<string[], string[]>;
233
+ userFeedback: {
234
+ (): import("@langchain/langgraph").LastValue<string>;
235
+ (annotation: import("@langchain/langgraph").SingleReducer<string, string>): import("@langchain/langgraph").BinaryOperatorAggregate<string, string>;
236
+ Root: <S extends import("@langchain/langgraph").StateDefinition>(sd: S) => import("@langchain/langgraph").AnnotationRoot<S>;
237
+ };
238
+ testScenarios: {
239
+ (): import("@langchain/langgraph").LastValue<any[]>;
240
+ (annotation: import("@langchain/langgraph").SingleReducer<any[], any[]>): import("@langchain/langgraph").BinaryOperatorAggregate<any[], any[]>;
241
+ Root: <S extends import("@langchain/langgraph").StateDefinition>(sd: S) => import("@langchain/langgraph").AnnotationRoot<S>;
242
+ };
209
243
  }>;
210
244
  supervisor: {
211
245
  workflowJson?: undefined;
@@ -264,6 +298,16 @@ export declare const graph: import("@langchain/langgraph").CompiledStateGraph<{
264
298
  candidates: import("@langchain/langgraph").BinaryOperatorAggregate<any[], any[]>;
265
299
  customTools: import("@langchain/langgraph").BinaryOperatorAggregate<Record<string, string>, Record<string, string>>;
266
300
  collaborationLog: import("@langchain/langgraph").BinaryOperatorAggregate<string[], string[]>;
301
+ userFeedback: {
302
+ (): import("@langchain/langgraph").LastValue<string>;
303
+ (annotation: import("@langchain/langgraph").SingleReducer<string, string>): import("@langchain/langgraph").BinaryOperatorAggregate<string, string>;
304
+ Root: <S extends import("@langchain/langgraph").StateDefinition>(sd: S) => import("@langchain/langgraph").AnnotationRoot<S>;
305
+ };
306
+ testScenarios: {
307
+ (): import("@langchain/langgraph").LastValue<any[]>;
308
+ (annotation: import("@langchain/langgraph").SingleReducer<any[], any[]>): import("@langchain/langgraph").BinaryOperatorAggregate<any[], any[]>;
309
+ Root: <S extends import("@langchain/langgraph").StateDefinition>(sd: S) => import("@langchain/langgraph").AnnotationRoot<S>;
310
+ };
267
311
  }>;
268
312
  }, unknown, unknown>;
269
313
  /**
@@ -322,6 +366,16 @@ export declare const runAgenticWorkflow: (goal: string, initialState?: Partial<t
322
366
  candidates: import("@langchain/langgraph").BinaryOperatorAggregate<any[], any[]>;
323
367
  customTools: import("@langchain/langgraph").BinaryOperatorAggregate<Record<string, string>, Record<string, string>>;
324
368
  collaborationLog: import("@langchain/langgraph").BinaryOperatorAggregate<string[], string[]>;
369
+ userFeedback: {
370
+ (): import("@langchain/langgraph").LastValue<string>;
371
+ (annotation: import("@langchain/langgraph").SingleReducer<string, string>): import("@langchain/langgraph").BinaryOperatorAggregate<string, string>;
372
+ Root: <S extends import("@langchain/langgraph").StateDefinition>(sd: S) => import("@langchain/langgraph").AnnotationRoot<S>;
373
+ };
374
+ testScenarios: {
375
+ (): import("@langchain/langgraph").LastValue<any[]>;
376
+ (annotation: import("@langchain/langgraph").SingleReducer<any[], any[]>): import("@langchain/langgraph").BinaryOperatorAggregate<any[], any[]>;
377
+ Root: <S extends import("@langchain/langgraph").StateDefinition>(sd: S) => import("@langchain/langgraph").AnnotationRoot<S>;
378
+ };
325
379
  }>>;
326
380
  /**
327
381
  * Run the Agentic Workflow with Streaming
@@ -407,6 +461,16 @@ export declare const runAgenticWorkflowStream: (goal: string, threadId?: string)
407
461
  candidates: import("@langchain/langgraph").BinaryOperatorAggregate<any[], any[]>;
408
462
  customTools: import("@langchain/langgraph").BinaryOperatorAggregate<Record<string, string>, Record<string, string>>;
409
463
  collaborationLog: import("@langchain/langgraph").BinaryOperatorAggregate<string[], string[]>;
464
+ userFeedback: {
465
+ (): import("@langchain/langgraph").LastValue<string>;
466
+ (annotation: import("@langchain/langgraph").SingleReducer<string, string>): import("@langchain/langgraph").BinaryOperatorAggregate<string, string>;
467
+ Root: <S extends import("@langchain/langgraph").StateDefinition>(sd: S) => import("@langchain/langgraph").AnnotationRoot<S>;
468
+ };
469
+ testScenarios: {
470
+ (): import("@langchain/langgraph").LastValue<any[]>;
471
+ (annotation: import("@langchain/langgraph").SingleReducer<any[], any[]>): import("@langchain/langgraph").BinaryOperatorAggregate<any[], any[]>;
472
+ Root: <S extends import("@langchain/langgraph").StateDefinition>(sd: S) => import("@langchain/langgraph").AnnotationRoot<S>;
473
+ };
410
474
  }> | undefined;
411
475
  supervisor?: {
412
476
  workflowJson?: undefined;
@@ -465,6 +529,16 @@ export declare const runAgenticWorkflowStream: (goal: string, threadId?: string)
465
529
  candidates: import("@langchain/langgraph").BinaryOperatorAggregate<any[], any[]>;
466
530
  customTools: import("@langchain/langgraph").BinaryOperatorAggregate<Record<string, string>, Record<string, string>>;
467
531
  collaborationLog: import("@langchain/langgraph").BinaryOperatorAggregate<string[], string[]>;
532
+ userFeedback: {
533
+ (): import("@langchain/langgraph").LastValue<string>;
534
+ (annotation: import("@langchain/langgraph").SingleReducer<string, string>): import("@langchain/langgraph").BinaryOperatorAggregate<string, string>;
535
+ Root: <S extends import("@langchain/langgraph").StateDefinition>(sd: S) => import("@langchain/langgraph").AnnotationRoot<S>;
536
+ };
537
+ testScenarios: {
538
+ (): import("@langchain/langgraph").LastValue<any[]>;
539
+ (annotation: import("@langchain/langgraph").SingleReducer<any[], any[]>): import("@langchain/langgraph").BinaryOperatorAggregate<any[], any[]>;
540
+ Root: <S extends import("@langchain/langgraph").StateDefinition>(sd: S) => import("@langchain/langgraph").AnnotationRoot<S>;
541
+ };
468
542
  }> | undefined;
469
543
  }>>;
470
544
  /**
@@ -520,4 +594,14 @@ export declare const resumeAgenticWorkflow: (threadId: string, input?: any) => P
520
594
  candidates: import("@langchain/langgraph").BinaryOperatorAggregate<any[], any[]>;
521
595
  customTools: import("@langchain/langgraph").BinaryOperatorAggregate<Record<string, string>, Record<string, string>>;
522
596
  collaborationLog: import("@langchain/langgraph").BinaryOperatorAggregate<string[], string[]>;
597
+ userFeedback: {
598
+ (): import("@langchain/langgraph").LastValue<string>;
599
+ (annotation: import("@langchain/langgraph").SingleReducer<string, string>): import("@langchain/langgraph").BinaryOperatorAggregate<string, string>;
600
+ Root: <S extends import("@langchain/langgraph").StateDefinition>(sd: S) => import("@langchain/langgraph").AnnotationRoot<S>;
601
+ };
602
+ testScenarios: {
603
+ (): import("@langchain/langgraph").LastValue<any[]>;
604
+ (annotation: import("@langchain/langgraph").SingleReducer<any[], any[]>): import("@langchain/langgraph").BinaryOperatorAggregate<any[], any[]>;
605
+ Root: <S extends import("@langchain/langgraph").StateDefinition>(sd: S) => import("@langchain/langgraph").AnnotationRoot<S>;
606
+ };
523
607
  }>>;
@@ -53,7 +53,7 @@ const workflow = new StateGraph(TeamState)
53
53
  // Compile the graph with persistence and interrupts
54
54
  export const graph = workflow.compile({
55
55
  checkpointer: checkpointer,
56
- interruptBefore: ["qa"],
56
+ interruptBefore: ["engineer", "qa"],
57
57
  });
58
58
  /**
59
59
  * Run the Agentic Workflow
@@ -52,6 +52,7 @@ export const engineerNode = async (state) => {
52
52
  Specification:
53
53
  ${JSON.stringify(state.spec, null, 2)}
54
54
  ${ragContext}
55
+ ${state.userFeedback ? `\n\nUSER FEEDBACK / REFINEMENTS:\n${state.userFeedback}\n(Incorporate this feedback into the generation process)` : ""}
55
56
 
56
57
  IMPORTANT:
57
58
  1. Descriptive Naming: Name nodes descriptively (e.g. "Fetch Bitcoin Price" instead of "HTTP Request").
@@ -92,7 +93,7 @@ export const engineerNode = async (state) => {
92
93
  throw new Error("AI generated invalid JSON for workflow from spec");
93
94
  }
94
95
  if (result.workflows && Array.isArray(result.workflows)) {
95
- result.workflows = result.workflows.map((wf) => fixHallucinatedNodes(wf));
96
+ result.workflows = result.workflows.map((wf) => aiService.fixHallucinatedNodes(wf));
96
97
  }
97
98
  return {
98
99
  // Only push to candidates — the Supervisor sets workflowJson after fan-in.
@@ -106,84 +107,4 @@ export const engineerNode = async (state) => {
106
107
  throw error;
107
108
  }
108
109
  };
109
- /**
110
- * Auto-correct common n8n node type hallucinations
111
- */
112
- function fixHallucinatedNodes(workflow) {
113
- if (!workflow.nodes || !Array.isArray(workflow.nodes))
114
- return workflow;
115
- const corrections = {
116
- "n8n-nodes-base.rssFeed": "n8n-nodes-base.rssFeedRead",
117
- "rssFeed": "n8n-nodes-base.rssFeedRead",
118
- "n8n-nodes-base.gpt": "n8n-nodes-base.openAi",
119
- "n8n-nodes-base.openai": "n8n-nodes-base.openAi",
120
- "openai": "n8n-nodes-base.openAi",
121
- "n8n-nodes-base.openAiChat": "n8n-nodes-base.openAi",
122
- "n8n-nodes-base.openAIChat": "n8n-nodes-base.openAi",
123
- "n8n-nodes-base.openaiChat": "n8n-nodes-base.openAi",
124
- "n8n-nodes-base.gemini": "n8n-nodes-base.googleGemini",
125
- "n8n-nodes-base.cheerioHtml": "n8n-nodes-base.htmlExtract",
126
- "cheerioHtml": "n8n-nodes-base.htmlExtract",
127
- "n8n-nodes-base.schedule": "n8n-nodes-base.scheduleTrigger",
128
- "schedule": "n8n-nodes-base.scheduleTrigger",
129
- "n8n-nodes-base.cron": "n8n-nodes-base.scheduleTrigger",
130
- "n8n-nodes-base.googleCustomSearch": "n8n-nodes-base.googleGemini",
131
- "googleCustomSearch": "n8n-nodes-base.googleGemini"
132
- };
133
- workflow.nodes = workflow.nodes.map((node) => {
134
- if (node.type && corrections[node.type]) {
135
- node.type = corrections[node.type];
136
- }
137
- // Ensure base prefix if missing
138
- if (node.type && !node.type.startsWith('n8n-nodes-base.') && !node.type.includes('.')) {
139
- node.type = `n8n-nodes-base.${node.type}`;
140
- }
141
- return node;
142
- });
143
- return fixN8nConnections(workflow);
144
- }
145
- /**
146
- * Force-fix connection structure to prevent "object is not iterable" errors
147
- */
148
- function fixN8nConnections(workflow) {
149
- if (!workflow.connections || typeof workflow.connections !== 'object')
150
- return workflow;
151
- const fixedConnections = {};
152
- for (const [sourceNode, targets] of Object.entries(workflow.connections)) {
153
- if (!targets || typeof targets !== 'object')
154
- continue;
155
- const targetObj = targets;
156
- // 2. Ensure "main" exists and is an array
157
- if (targetObj.main) {
158
- let mainArr = targetObj.main;
159
- if (!Array.isArray(mainArr))
160
- mainArr = [[{ node: String(mainArr), type: 'main', index: 0 }]];
161
- const fixedMain = mainArr.map((segment) => {
162
- if (!segment)
163
- return [];
164
- if (!Array.isArray(segment)) {
165
- // Wrap in array if it's a single object
166
- return [segment];
167
- }
168
- return segment.map((conn) => {
169
- if (!conn)
170
- return { node: 'Unknown', type: 'main', index: 0 };
171
- if (typeof conn === 'string')
172
- return { node: conn, type: 'main', index: 0 };
173
- return {
174
- node: String(conn.node || 'Unknown'),
175
- type: conn.type || 'main',
176
- index: conn.index || 0
177
- };
178
- });
179
- });
180
- fixedConnections[sourceNode] = { main: fixedMain };
181
- }
182
- else {
183
- // If it's just raw data like { "Source": { "node": "Target" } }, wrap it
184
- fixedConnections[sourceNode] = targetObj;
185
- }
186
- }
187
- workflow.connections = fixedConnections;
188
- return workflow;
189
- }
110
 + // Local helpers removed; the equivalent AIService methods are used instead.
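For reference, the correction logic deleted above (now assumed to live on `aiService.fixHallucinatedNodes`) can be sketched in a few lines. This is a condensed rewrite of the removed helper, keeping a subset of its correction map; it is not the exact `AIService` implementation.

```typescript
// Sketch of the node-type correction step that moved into AIService
// (method name assumed from the diff above; map entries copied from the
// deleted helper).
type N8nNode = { type?: string; [key: string]: unknown };
type Workflow = { nodes?: N8nNode[] };

const CORRECTIONS: Record<string, string> = {
  "rssFeed": "n8n-nodes-base.rssFeedRead",
  "n8n-nodes-base.gpt": "n8n-nodes-base.openAi",
  "schedule": "n8n-nodes-base.scheduleTrigger",
};

function fixHallucinatedNodes(workflow: Workflow): Workflow {
  if (!Array.isArray(workflow.nodes)) return workflow;
  workflow.nodes = workflow.nodes.map((node) => {
    // Map known hallucinated type names to real n8n node types.
    if (node.type && CORRECTIONS[node.type]) node.type = CORRECTIONS[node.type];
    // Prepend the base package prefix when the type has no namespace at all.
    if (node.type && !node.type.includes(".")) node.type = `n8n-nodes-base.${node.type}`;
    return node;
  });
  return workflow;
}

const fixed = fixHallucinatedNodes({ nodes: [{ type: "rssFeed" }, { type: "webhook" }] });
console.log(fixed.nodes?.map((n) => n.type).join(","));
// → n8n-nodes-base.rssFeedRead,n8n-nodes-base.webhook
```

Note the ordering: the correction map runs first, so a corrected type already contains a `.` and is not double-prefixed.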
@@ -61,70 +61,69 @@ export const qaNode = async (state) => {
61
61
  console.log(theme.agent(`Deploying ephemeral root: ${rootPayload.name}...`));
62
62
  const result = await client.createWorkflow(rootPayload.name, rootPayload);
63
63
  createdWorkflowId = result.id;
64
- // 4. Generate Mock Data
64
+ // 4. Determine Test Scenarios
65
+ let scenarios = state.testScenarios;
66
+ if (!scenarios || scenarios.length === 0) {
67
+ // Fallback to generating a single mock payload for efficiency if no scenarios provided
68
+ const nodeNames = targetWorkflow.nodes.map((n) => n.name).join(', ');
69
+ const context = `Workflow Name: "${targetWorkflow.name}"
70
+ Nodes: ${nodeNames}
71
+ Goal: "${state.userGoal}"
72
+ Generate a SINGLE JSON object payload that effectively tests this workflow.`;
73
+ const mockPayload = await aiService.generateMockData(context);
74
+ scenarios = [{ name: "Default Test", payload: mockPayload }];
75
+ }
65
76
  const webhookNode = rootPayload.nodes.find((n) => n.type === 'n8n-nodes-base.webhook');
66
- let triggerSuccess = false;
67
77
  if (webhookNode) {
68
78
  const path = webhookNode.parameters?.path;
69
79
  if (path) {
70
80
  // Activate for webhook testing
71
81
  await client.activateWorkflow(createdWorkflowId);
72
- const nodeNames = targetWorkflow.nodes.map((n) => n.name).join(', ');
73
- const context = `Workflow Name: "${targetWorkflow.name}"
74
- Nodes: ${nodeNames}
75
- Goal: "${state.userGoal}"
76
- Generate a SINGLE JSON object payload that effectively tests this workflow.`;
77
- const mockPayload = await aiService.generateMockData(context);
78
82
  const baseUrl = new URL(n8nUrl).origin;
79
83
  const webhookUrl = `${baseUrl}/webhook/${path}`;
80
- const response = await fetch(webhookUrl, {
81
- method: 'POST',
82
- headers: { 'Content-Type': 'application/json' },
83
- body: JSON.stringify(mockPayload)
84
- });
85
- if (response.ok) {
86
- triggerSuccess = true;
87
- }
88
- else {
89
- throw new Error(`Webhook trigger failed with status ${response.status}`);
84
+ for (const scenario of scenarios) {
85
+ console.log(theme.info(`🧪 Running Scenario: ${theme.value(scenario.name)}...`));
86
+ const response = await fetch(webhookUrl, {
87
+ method: 'POST',
88
+ headers: { 'Content-Type': 'application/json' },
89
+ body: JSON.stringify(scenario.payload)
90
+ });
91
+ if (!response.ok) {
92
+ validationErrors.push(`Scenario "${scenario.name}" failed to trigger: ${response.status}`);
93
+ continue;
94
+ }
95
+ // 5. Verify Execution for this scenario
96
+ const executionStartTime = Date.now();
97
+ let executionFound = false;
98
+ const maxPoll = 15;
99
+ for (let i = 0; i < maxPoll; i++) {
100
+ await new Promise(r => setTimeout(r, 2000));
101
+ const executions = await client.getWorkflowExecutions(createdWorkflowId);
102
+ const recentExec = executions.find((e) => new Date(e.startedAt).getTime() > (executionStartTime - 5000));
103
+ if (recentExec) {
104
+ executionFound = true;
105
+ const fullExec = await client.getExecution(recentExec.id);
106
+ if (fullExec.status === 'success') {
107
+ console.log(theme.success(` ✔ Passed`));
108
+ }
109
+ else {
110
+ const errorMsg = fullExec.data?.resultData?.error?.message || "Unknown flow failure";
111
+ validationErrors.push(`Scenario "${scenario.name}" Failed: ${errorMsg}`);
112
+ console.log(theme.error(` ✘ Failed: ${errorMsg}`));
113
+ }
114
+ break;
115
+ }
116
+ }
117
+ if (!executionFound) {
118
+ validationErrors.push(`Scenario "${scenario.name}": No execution detected after trigger.`);
119
+ console.log(theme.warn(` ⚠ No execution detected.`));
120
+ }
90
121
  }
91
122
  }
92
123
  }
93
124
  else {
94
125
  // Just execute if no webhook (manual trigger)
95
126
  await client.executeWorkflow(createdWorkflowId);
96
- triggerSuccess = true;
97
- }
98
- // 5. Verify Execution
99
- // Wait for execution to appear
100
- if (triggerSuccess) {
101
- const executionStartTime = Date.now();
102
- let executionFound = false;
103
- const maxPoll = 20; // shorter poll for agent
104
- for (let i = 0; i < maxPoll; i++) {
105
- await new Promise(r => setTimeout(r, 2000));
106
- const executions = await client.getWorkflowExecutions(createdWorkflowId);
107
- const recentExec = executions.find((e) => new Date(e.startedAt).getTime() > (executionStartTime - 5000));
108
- if (recentExec) {
109
- executionFound = true;
110
- const fullExec = await client.getExecution(recentExec.id);
111
- if (fullExec.status === 'success') {
112
- return {
113
- validationStatus: 'passed',
114
- validationErrors: [],
115
- };
116
- }
117
- else {
118
- const errorMsg = fullExec.data?.resultData?.error?.message || "Unknown flow failure";
119
- validationErrors.push(`Execution Failed: ${errorMsg}`);
120
- console.log(theme.error(`Execution Failed: ${errorMsg}`));
121
- break;
122
- }
123
- }
124
- }
125
- if (!executionFound) {
126
- validationErrors.push("No execution detected after trigger.");
127
- }
128
127
  }
129
128
  // 6. Dynamic Tool Execution (Sandbox)
130
129
  // If the Agent has defined a custom validation script, run it now.
@@ -157,7 +156,7 @@ export const qaNode = async (state) => {
157
156
  }
158
157
  }
159
158
  return {
160
- validationStatus: 'failed',
159
+ validationStatus: validationErrors.length === 0 ? 'passed' : 'failed',
161
160
  validationErrors,
162
161
  };
163
162
  };
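The scenario objects the QA loop above iterates over are assumed to carry a display name and a JSON payload; a minimal sketch of that shape (the interface name and payload values are hypothetical, inferred from the loop body):

```typescript
// Assumed shape of entries in state.testScenarios, inferred from the QA loop:
// `name` is printed before each run, `payload` is POSTed to the webhook as JSON.
interface TestScenario {
  name: string;
  payload: Record<string, unknown>;
}

// Mirrors the fallback path: a single default scenario when none are supplied.
const scenarios: TestScenario[] = [
  { name: "Default Test", payload: { event: "row_added", row: { id: 1 } } },
];

console.log(scenarios[0].name); // → Default Test
```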
@@ -49,4 +49,14 @@ export declare const TeamState: import("@langchain/langgraph").AnnotationRoot<{
49
49
  candidates: import("@langchain/langgraph").BinaryOperatorAggregate<any[], any[]>;
50
50
  customTools: import("@langchain/langgraph").BinaryOperatorAggregate<Record<string, string>, Record<string, string>>;
51
51
  collaborationLog: import("@langchain/langgraph").BinaryOperatorAggregate<string[], string[]>;
52
+ userFeedback: {
53
+ (): import("@langchain/langgraph").LastValue<string>;
54
+ (annotation: import("@langchain/langgraph").SingleReducer<string, string>): import("@langchain/langgraph").BinaryOperatorAggregate<string, string>;
55
+ Root: <S extends import("@langchain/langgraph").StateDefinition>(sd: S) => import("@langchain/langgraph").AnnotationRoot<S>;
56
+ };
57
+ testScenarios: {
58
+ (): import("@langchain/langgraph").LastValue<any[]>;
59
+ (annotation: import("@langchain/langgraph").SingleReducer<any[], any[]>): import("@langchain/langgraph").BinaryOperatorAggregate<any[], any[]>;
60
+ Root: <S extends import("@langchain/langgraph").StateDefinition>(sd: S) => import("@langchain/langgraph").AnnotationRoot<S>;
61
+ };
52
62
  }>;
@@ -28,4 +28,6 @@ export const TeamState = Annotation.Root({
28
28
  reducer: (x, y) => x.concat(y),
29
29
  default: () => [],
30
30
  }),
31
+ userFeedback: (Annotation),
32
+ testScenarios: (Annotation),
31
33
  });
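Taken together, the two new state channels can be seeded when starting a run. A hypothetical invocation sketch: the channel names come from the declarations above, while the goal string and payload values are invented for illustration.

```typescript
// Hypothetical initial state for the new channels added in this release.
const initialState = {
  userFeedback: "Use a Slack thread reply instead of a new message",
  testScenarios: [
    { name: "Happy path", payload: { text: "hello" } },
    { name: "Empty body", payload: {} },
  ],
};

// Assumed call, matching the runAgenticWorkflow(goal, initialState) signature
// declared above (left commented out since it needs a live configuration):
// const result = await runAgenticWorkflow("Notify Slack on new sheet rows", initialState);

console.log(Object.keys(initialState).join(","));
// → userFeedback,testScenarios
```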