@leclabs/agent-flow-navigator-mcp 1.5.2 → 1.7.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +81 -8
- package/catalog/workflows/bug-hunt.json +242 -0
- package/copier.js +7 -7
- package/diagram.js +12 -2
- package/engine.js +133 -16
- package/index.js +9 -3
- package/package.json +3 -3
- package/store.js +67 -0
- package/types.d.ts +29 -1
package/README.md
CHANGED

@@ -2,7 +2,7 @@
 
 [](https://npmjs.com/package/@leclabs/agent-flow-navigator-mcp)
 
-A workflow state machine MCP server that navigates agents through
+A workflow state machine MCP server that navigates agents through graph-based workflows.
 
 Navigator tracks task state and evaluates graph edges -- it tells the orchestrator _where to go next_, but doesn't drive. Think of it like a GPS: you tell it where you are and what happened, it tells you where to go.
 
@@ -145,13 +145,13 @@ Lists workflows available in the built-in catalog. No parameters.
 
 ### Key Concepts
 
-| Concept | Description
-| ----------------------- |
-| **Workflow Definition** | A
-| **Navigate 3-Mode API** | Start a workflow, get current state, or advance -- one tool, three calling patterns
-| **Write-Through** | State transitions are persisted to the task file atomically on every advance
-| **Conditional Edges** | Edges with `on` condition (passed/failed) -- retry logic is on nodes via `maxRetries`
-| **HITL Escalation** | When retries are exhausted, tasks route to end nodes with `escalation: "hitl"`
+| Concept                 | Description                                                                            |
+| ----------------------- | -------------------------------------------------------------------------------------- |
+| **Workflow Definition** | A graph blueprint describing how to execute a type of work (nodes + conditional edges) |
+| **Navigate 3-Mode API** | Start a workflow, get current state, or advance -- one tool, three calling patterns    |
+| **Write-Through**       | State transitions are persisted to the task file atomically on every advance           |
+| **Conditional Edges**   | Edges with `on` condition (passed/failed) -- retry logic is on nodes via `maxRetries`  |
+| **HITL Escalation**     | When retries are exhausted, tasks route to end nodes with `escalation: "hitl"`         |
 
 ## Workflow Definition Schema
 
@@ -163,6 +163,8 @@ Lists workflows available in the built-in catalog. No parameters.
 | `end` | Exit point with `result` (success/failure/blocked/cancelled) |
 | `task` | Executable work unit |
 | `gate` | Quality gate / review checkpoint |
+| `fork` | Fan out into parallel branches (paired with a `join`) |
+| `join` | Collect parallel branches back together |
 | `subflow` | Connector to another workflow |
 
 ### End Node Properties
@@ -193,6 +195,77 @@ Lists workflows available in the built-in catalog. No parameters.
 | `label` | Human-readable edge description |
 | `condition` | Expression for future conditional routing (informational) |
 
+### Fork/Join (Parallel Branches)
+
+Fork/join lets a workflow fan out into parallel investigation tracks that converge at a join point. Each fork must be paired 1:1 with a join, and nested forks are not supported.
+
+#### Fork Node
+
+```json
+{
+  "type": "fork",
+  "name": "Fork Investigation",
+  "join": "join_investigate",
+  "branches": {
+    "reproduce": {
+      "entryStep": "reproduce",
+      "description": "Try to trigger the bug"
+    },
+    "code_archaeology": {
+      "entryStep": "code_archaeology",
+      "description": "Trace code paths related to symptoms"
+    }
+  }
+}
+```
+
+| Property   | Description                                                    |
+| ---------- | -------------------------------------------------------------- |
+| `join`     | ID of the paired join node (required)                          |
+| `branches` | Map of branch name to `{ entryStep, description? }` (required) |
+
+Each branch's `entryStep` must reference an existing task/gate node in the workflow. The orchestrator creates a child task per branch and runs them from their entry step.
+
+#### Join Node
+
+```json
+{
+  "type": "join",
+  "name": "Join Investigation",
+  "fork": "fork_investigate",
+  "strategy": "all-pass"
+}
+```
+
+| Property   | Description                                                                                  |
+| ---------- | -------------------------------------------------------------------------------------------- |
+| `fork`     | ID of the paired fork node (required)                                                        |
+| `strategy` | `"all-pass"` (all branches must pass) or `"any-pass"` (one is enough). Default: `"all-pass"` |
+
+#### Edges
+
+Wire the fork and join like any other nodes. Branches run from their `entryStep` until they reach the join node via normal edges:
+
+```json
+{ "from": "triage", "to": "fork_investigate" },
+{ "from": "fork_investigate", "to": "reproduce" },
+{ "from": "fork_investigate", "to": "code_archaeology" },
+{ "from": "reproduce", "to": "join_investigate" },
+{ "from": "code_archaeology", "to": "join_investigate" },
+{ "from": "join_investigate", "to": "synthesize", "on": "passed" },
+{ "from": "join_investigate", "to": "hitl_inconclusive", "on": "failed" }
+```
+
+The join node supports conditional edges (`on: "passed"` / `on: "failed"`) so the workflow can route to different paths depending on whether the branches succeeded.
+
+#### Validation Rules
+
+- Fork and join must reference each other (1:1 pairing)
+- All branch `entryStep` values must reference existing nodes
+- Branch entry steps cannot be fork nodes (no nesting)
+
+See `catalog/workflows/bug-hunt.json` for a complete working example.
+
 ## Testing
 
 ```bash
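The fork response documented above only carries metadata (`action: "fork"`, `fork.branches`, `fork.joinStep`); spawning branch tasks is left to the orchestrator. A minimal sketch of that consumer side, assuming the response shape shown in this diff (`spawnBranchTasks` is a hypothetical helper, not part of the package):

```javascript
// Sketch: turn a fork-style Navigate response into child-task descriptors.
// `spawnBranchTasks` is hypothetical; the response shape follows buildForkJoinResponse.
function spawnBranchTasks(navigateResponse) {
  if (navigateResponse.action !== "fork") return [];
  const { branches, joinStep } = navigateResponse.fork;
  // One child task per branch, each starting at its entryStep and rejoining at the join node.
  return Object.entries(branches).map(([name, branch]) => ({
    branch: name,
    startAt: branch.entryStep,
    rejoinAt: joinStep,
  }));
}

const tasks = spawnBranchTasks({
  action: "fork",
  fork: {
    joinStep: "join_investigate",
    branches: {
      reproduce: { entryStep: "reproduce" },
      code_archaeology: { entryStep: "code_archaeology" },
    },
  },
});
console.log(tasks.map((t) => t.branch)); // → ["reproduce", "code_archaeology"]
```

Each descriptor could then be run via Navigate's `stepId` start mode, which the engine.js changes below add for exactly this kind of mid-flow entry.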
package/catalog/workflows/bug-hunt.json
ADDED

@@ -0,0 +1,242 @@
+{
+  "id": "bug-hunt",
+  "name": "Bug Hunt",
+  "description": "Parallel investigation workflow for vague bug reports. Fans out into reproduction, code archaeology, and git forensics tracks, then synthesizes findings into a root cause analysis and fix.",
+  "nodes": {
+    "start": {
+      "type": "start",
+      "name": "Start",
+      "description": "Bug hunt begins"
+    },
+    "triage": {
+      "type": "task",
+      "name": "Triage Report",
+      "description": "Parse the vague report. Extract symptoms, affected area, timing, severity. Form 2-3 hypotheses to test.",
+      "agent": "Investigator",
+      "stage": "planning"
+    },
+    "fork_investigate": {
+      "type": "fork",
+      "name": "Fork Investigation",
+      "join": "join_investigate",
+      "branches": {
+        "reproduce": {
+          "entryStep": "reproduce",
+          "description": "Try to trigger the bug"
+        },
+        "code_archaeology": {
+          "entryStep": "code_archaeology",
+          "description": "Trace code paths related to symptoms"
+        },
+        "git_forensics": {
+          "entryStep": "git_forensics",
+          "description": "Check recent commits and git blame"
+        }
+      }
+    },
+    "reproduce": {
+      "type": "task",
+      "name": "Reproduce Bug",
+      "description": "Try to trigger the bug. Document exact reproduction steps, environment, and observed vs expected behavior.",
+      "agent": "Tester",
+      "stage": "investigation"
+    },
+    "code_archaeology": {
+      "type": "task",
+      "name": "Code Archaeology",
+      "description": "Trace the code paths related to the reported symptoms. Map data flow, identify suspect modules, check edge cases.",
+      "agent": "Investigator",
+      "stage": "investigation"
+    },
+    "git_forensics": {
+      "type": "task",
+      "name": "Git Forensics",
+      "description": "Check recent commits touching affected areas. Run git blame on suspect files. Look for correlated changes or regressions.",
+      "agent": "Investigator",
+      "stage": "investigation"
+    },
+    "join_investigate": {
+      "type": "join",
+      "name": "Join Investigation",
+      "fork": "fork_investigate",
+      "strategy": "all-pass"
+    },
+    "synthesize": {
+      "type": "task",
+      "name": "Synthesize Findings",
+      "description": "Combine findings from all investigation tracks into a root cause analysis. Identify the most likely cause, supporting evidence, and a fix strategy.",
+      "agent": "Architect",
+      "stage": "planning"
+    },
+    "write_fix": {
+      "type": "task",
+      "name": "Write Fix",
+      "description": "Implement the fix with minimal changes",
+      "agent": "Developer",
+      "stage": "development"
+    },
+    "add_regression_test": {
+      "type": "task",
+      "name": "Add Regression Test",
+      "description": "Write a test that would have caught this bug",
+      "agent": "Tester",
+      "stage": "development"
+    },
+    "verify_fix": {
+      "type": "gate",
+      "name": "Verify Fix",
+      "description": "Run tests, verify fix addresses root cause",
+      "agent": "Tester",
+      "stage": "verification",
+      "maxRetries": 3
+    },
+    "lint_format": {
+      "type": "gate",
+      "name": "Lint & Format",
+      "description": "Run lint and format checks. Auto-fix issues where possible.",
+      "agent": "Developer",
+      "stage": "delivery",
+      "maxRetries": 3
+    },
+    "commit": {
+      "type": "task",
+      "name": "Commit Changes",
+      "description": "Commit the fix and regression test with a descriptive message",
+      "agent": "Developer",
+      "stage": "delivery"
+    },
+    "end_success": {
+      "type": "end",
+      "result": "success",
+      "name": "Bug Fixed",
+      "description": "Bug hunted down and fixed"
+    },
+    "hitl_inconclusive": {
+      "type": "end",
+      "result": "blocked",
+      "escalation": "hitl",
+      "name": "Inconclusive",
+      "description": "Investigation tracks couldn't determine root cause - needs human context"
+    },
+    "hitl_fix_failed": {
+      "type": "end",
+      "result": "blocked",
+      "escalation": "hitl",
+      "name": "Fix Failed",
+      "description": "Unable to fix the bug - needs human help"
+    }
+  },
+  "edges": [
+    {
+      "from": "start",
+      "to": "triage"
+    },
+    {
+      "from": "triage",
+      "to": "fork_investigate"
+    },
+    {
+      "from": "fork_investigate",
+      "to": "reproduce",
+      "label": "Branch: reproduce"
+    },
+    {
+      "from": "fork_investigate",
+      "to": "code_archaeology",
+      "label": "Branch: code archaeology"
+    },
+    {
+      "from": "fork_investigate",
+      "to": "git_forensics",
+      "label": "Branch: git forensics"
+    },
+    {
+      "from": "reproduce",
+      "to": "join_investigate"
+    },
+    {
+      "from": "code_archaeology",
+      "to": "join_investigate"
+    },
+    {
+      "from": "git_forensics",
+      "to": "join_investigate"
+    },
+    {
+      "from": "join_investigate",
+      "to": "synthesize",
+      "on": "passed",
+      "label": "All tracks complete"
+    },
+    {
+      "from": "join_investigate",
+      "to": "hitl_inconclusive",
+      "on": "failed",
+      "label": "Investigation couldn't determine root cause"
+    },
+    {
+      "from": "synthesize",
+      "to": "write_fix"
+    },
+    {
+      "from": "write_fix",
+      "to": "add_regression_test"
+    },
+    {
+      "from": "add_regression_test",
+      "to": "verify_fix"
+    },
+    {
+      "from": "verify_fix",
+      "to": "lint_format",
+      "on": "passed",
+      "label": "Fix verified, run lint checks"
+    },
+    {
+      "from": "verify_fix",
+      "to": "write_fix",
+      "on": "failed",
+      "label": "Fix didn't work, try again"
+    },
+    {
+      "from": "verify_fix",
+      "to": "hitl_fix_failed",
+      "on": "failed",
+      "label": "Cannot fix the bug"
+    },
+    {
+      "from": "lint_format",
+      "to": "commit",
+      "on": "passed",
+      "label": "Lint passes, commit changes"
+    },
+    {
+      "from": "lint_format",
+      "to": "write_fix",
+      "on": "failed",
+      "label": "Fix lint/format issues"
+    },
+    {
+      "from": "lint_format",
+      "to": "hitl_fix_failed",
+      "on": "failed",
+      "label": "Lint issues persist"
+    },
+    {
+      "from": "commit",
+      "to": "end_success"
+    },
+    {
+      "from": "hitl_inconclusive",
+      "to": "triage",
+      "on": "passed",
+      "label": "Human provided context, re-triage"
+    },
+    {
+      "from": "hitl_fix_failed",
+      "to": "write_fix",
+      "on": "passed",
+      "label": "Human resolved issue, resume fix"
+    }
+  ]
+}
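A quick structural sanity check one could run over a definition shaped like bug-hunt.json is to confirm every edge endpoint names an existing node (the package's own `validateWorkflow` in store.js below checks fork/join pairing; `checkEdges` here is a hypothetical helper, not part of the package):

```javascript
// Sketch: report edges whose endpoints don't exist in the nodes map.
// `checkEdges` is hypothetical, illustrating the node/edge shape used by bug-hunt.json.
function checkEdges(def) {
  return def.edges.filter((e) => !def.nodes[e.from] || !def.nodes[e.to]);
}

const def = {
  nodes: { start: { type: "start" }, triage: { type: "task" } },
  edges: [
    { from: "start", to: "triage" },
    { from: "triage", to: "missing_step" }, // dangling on purpose
  ],
};
const dangling = checkEdges(def);
console.log(dangling.length); // → 1
```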
package/copier.js
CHANGED

@@ -12,7 +12,7 @@
 export function generateFlowReadme() {
   return `# Flow Plugin
 
-
+Graph-based workflow orchestration for AI agents.
 
 ## Quick Start
 
@@ -65,7 +65,7 @@ This directory contains workflow definitions for the flow plugin.
 .flow/workflows/
 ├── README.md          # This file
 └── {workflow}/
-    └── workflow.json  #
+    └── workflow.json  # Workflow definition (steps + edges)
 \`\`\`
 
 ## How Step Instructions Work
@@ -105,11 +105,11 @@ To add custom instructions for a step, add an \`instructions\` field to the node
 
 ### workflow.json
 
-Defines the workflow
+Defines the workflow graph:
 - \`nodes\`: Map of node definitions (type, name, description, agent, maxRetries)
 - \`edges\`: Array of transitions between nodes (from, to, on, label)
 
-Do NOT edit edge logic unless you understand
+Do NOT edit edge logic unless you understand workflow flow control.
 
 ## Adding Custom Workflows
 
@@ -135,8 +135,8 @@ export function isValidWorkflowForCopy(content) {
  * @returns {string[]} IDs to copy
  */
 export function computeWorkflowsToCopy(requestedIds, availableIds) {
-  if (requestedIds
-
+  if (!requestedIds || requestedIds.length === 0) {
+    throw new Error("workflowIds is required. Use ListCatalog to see available workflows, then pass specific IDs.");
   }
-  return
+  return requestedIds;
 }
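The behavioral change in `computeWorkflowsToCopy` is that an empty or missing `workflowIds` now throws instead of (presumably) falling back to some default. Exercised standalone, with the function body copied from the new version of the diff (note `availableIds` is accepted but the new guard only inspects `requestedIds`):

```javascript
// Copied from the diff for illustration; error text matches the new copier.js.
function computeWorkflowsToCopy(requestedIds, availableIds) {
  if (!requestedIds || requestedIds.length === 0) {
    throw new Error("workflowIds is required. Use ListCatalog to see available workflows, then pass specific IDs.");
  }
  return requestedIds;
}

console.log(computeWorkflowsToCopy(["bug-hunt"], ["bug-hunt", "bug-fix"])); // → ["bug-hunt"]
try {
  computeWorkflowsToCopy([], ["bug-hunt"]);
} catch (e) {
  console.log("rejected empty list"); // the new guard fires here
}
```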
package/diagram.js
CHANGED

@@ -58,6 +58,8 @@ export function generateDiagram(workflowDef, currentStep = null) {
       lines.push(`  ${stepId}[["${label}"]]`);
     } else if (termType === "hitl" || termType === "failure") {
       lines.push(`  ${stepId}{{"${label}"}}`);
+    } else if (step.type === "fork" || step.type === "join") {
+      lines.push(`  ${stepId}{{{"${label}"}}}`);
     } else if (step.type === "gate") {
       lines.push(`  ${stepId}{"${label}"}`);
     } else {
@@ -82,6 +84,7 @@ export function generateDiagram(workflowDef, currentStep = null) {
   lines.push("  classDef successStep fill:#87CEEB,stroke:#4169E1");
   lines.push("  classDef hitlStep fill:#FFB6C1,stroke:#DC143C");
   lines.push("  classDef gateStep fill:#E6E6FA,stroke:#9370DB");
+  lines.push("  classDef forkJoinStep fill:#FFEAA7,stroke:#FDCB6E");
   lines.push("  classDef currentStep fill:#FFD700,stroke:#FF8C00,stroke-width:3px");
 
   const startSteps = Object.entries(nodes)
@@ -97,10 +100,15 @@ export function generateDiagram(workflowDef, currentStep = null) {
     .filter(([, s]) => s.type === "gate")
     .map(([id]) => id);
 
+  const forkJoinSteps = Object.entries(nodes)
+    .filter(([, s]) => s.type === "fork" || s.type === "join")
+    .map(([id]) => id);
+
   if (startSteps.length) lines.push(`  class ${startSteps.join(",")} startStep`);
   if (successSteps.length) lines.push(`  class ${successSteps.join(",")} successStep`);
   if (hitlSteps.length) lines.push(`  class ${hitlSteps.join(",")} hitlStep`);
   if (gateSteps.length) lines.push(`  class ${gateSteps.join(",")} gateStep`);
+  if (forkJoinSteps.length) lines.push(`  class ${forkJoinSteps.join(",")} forkJoinStep`);
 
   if (currentStep && nodes[currentStep]) {
     lines.push(`  class ${currentStep} currentStep`);
@@ -111,8 +119,10 @@ export function generateDiagram(workflowDef, currentStep = null) {
   tableRows.push("| Stage | Step | Name | Agent | Instructions |");
   tableRows.push("|-------|------|------|-------|--------------|");
 
-  // Group steps by stage for organized display - filter out terminal/start/end nodes
-  const stepEntries = Object.entries(nodes).filter(
+  // Group steps by stage for organized display - filter out terminal/start/end and fork/join nodes
+  const stepEntries = Object.entries(nodes).filter(
+    ([, step]) => !isTerminalNode(step) && step.type !== "fork" && step.type !== "join"
+  );
 
   for (const [stepId, step] of stepEntries) {
     const stage = step.stage || "-";
package/engine.js
CHANGED

@@ -1,7 +1,7 @@
 /**
  * Workflow Engine
  *
- * Evaluates
+ * Evaluates graph transitions based on node outputs and retry state.
  *
  * Schema:
  * - nodes: { [id]: { type, name, maxRetries?, ... } }
@@ -32,12 +32,21 @@ export function readTaskFile(taskFilePath) {
 
 /**
  * Check if a node is a terminal node (start or end)
+ * Fork/join are control-flow nodes, not terminal.
 */
 export function isTerminalNode(node) {
   if (!node) return false;
   return node.type === "start" || node.type === "end";
 }
 
+/**
+ * Check if a node is a fork or join control-flow node
+ */
+export function isForkJoinNode(node) {
+  if (!node) return false;
+  return node.type === "fork" || node.type === "join";
+}
+
 /**
  * Get terminal type for a node
  * Returns: "start" | "success" | "hitl" | "failure" | null
@@ -67,6 +76,7 @@ export function toSubagentRef(agentId) {
 const WORKFLOW_EMOJIS = {
   "feature-development": "✨",
   "bug-fix": "🐛",
+  "bug-hunt": "🔍",
   "agile-task": "📋",
   "context-optimization": "🔧",
   "quick-task": "⚡",
@@ -253,6 +263,47 @@ ${description || "{task description}"}`;
   return result;
 }
 
+/**
+ * Build response for fork/join control-flow nodes.
+ * These nodes are declarative — the engine returns metadata for the orchestrator to act on.
+ */
+function buildForkJoinResponse(workflowType, stepId, stepDef, action, retryCount = 0, sourceRoot = null) {
+  const base = {
+    currentStep: stepId,
+    stage: null,
+    subagent: null,
+    stepInstructions: null,
+    terminal: null,
+    action,
+    retriesIncremented: false,
+    autonomyContinued: false,
+    maxRetries: 0,
+    orchestratorInstructions: null,
+    metadata: {
+      workflowType,
+      currentStep: stepId,
+      retryCount,
+    },
+    sourceRoot,
+  };
+
+  if (stepDef.type === "fork") {
+    base.action = "fork";
+    base.fork = {
+      branches: stepDef.branches,
+      joinStep: stepDef.join,
+    };
+  } else if (stepDef.type === "join") {
+    base.action = "join";
+    base.join = {
+      forkStep: stepDef.fork,
+      strategy: stepDef.strategy || "all-pass",
+    };
+  }
+
+  return base;
+}
+
 /**
  * Build unified response shape for Navigate
  * Minimal output: only what Orchestrator needs for control flow and delegation
@@ -481,9 +532,10 @@ export class WorkflowEngine {
    * @param {string} [options.workflowType] - Workflow ID (for start only)
    * @param {string} [options.result] - Step result: "passed" | "failed" (for advance)
    * @param {string} [options.description] - User's task description
+   * @param {string} [options.stepId] - Start at a specific step (mid-flow recovery). Only used when starting a workflow (no currentStep).
    * @returns {Object} Navigation response with currentStep, stepInstructions, terminal, action, metadata, etc.
    */
-  navigate({ taskFilePath, workflowType, result, description, projectRoot, autonomy } = {}) {
+  navigate({ taskFilePath, workflowType, result, description, projectRoot, autonomy, stepId } = {}) {
     let currentStep = null;
     let retryCount = 0;
 
@@ -533,28 +585,52 @@ export class WorkflowEngine {
 
     const { nodes } = wfDef;
 
-    // Case 1: No currentStep - start at first work step
+    // Case 1: No currentStep - start at first work step (or stepId if provided)
     if (!currentStep) {
-
-
-
-
-
+      let targetStepId;
+      let targetStepDef;
+
+      if (stepId) {
+        // Mid-flow start: validate and use the provided stepId
+        targetStepDef = nodes[stepId];
+        if (!targetStepDef) {
+          throw new Error(`Step '${stepId}' not found in workflow '${workflowType}'`);
+        }
+        if (isTerminalNode(targetStepDef)) {
+          throw new Error(
+            `Cannot start at ${targetStepDef.type} node '${stepId}'. Provide a task, gate, or fork/join step.`
+          );
+        }
+        targetStepId = stepId;
+      } else {
+        // Default: find first step after start node
+        const startEntry = Object.entries(nodes).find(([, node]) => node.type === "start");
+        if (!startEntry) {
+          throw new Error(`Workflow '${workflowType}' has no start node`);
+        }
+        const startNodeId = startEntry[0];
 
-
-
-
+        const firstEdge = wfDef.edges.find((e) => e.from === startNodeId);
+        if (!firstEdge) {
+          throw new Error(`No edge from start step in workflow '${workflowType}'`);
+        }
+
+        targetStepDef = nodes[firstEdge.to];
+        if (!targetStepDef) {
+          throw new Error(`First step '${firstEdge.to}' not found in workflow`);
+        }
+        targetStepId = firstEdge.to;
       }
 
-
-      if (
-
+      // Fork/join at start position — return control-flow response
+      if (isForkJoinNode(targetStepDef)) {
+        return buildForkJoinResponse(workflowType, targetStepId, targetStepDef, "start", 0, sourceRoot);
      }
 
      return buildNavigateResponse(
        workflowType,
-
-
+        targetStepId,
+        targetStepDef,
        "start",
        false,
        0,
@@ -572,6 +648,11 @@ export class WorkflowEngine {
      throw new Error(`Step '${currentStep}' not found in workflow '${workflowType}'`);
    }
 
+    // Fork/join — return control-flow response
+    if (isForkJoinNode(stepDef)) {
+      return buildForkJoinResponse(workflowType, currentStep, stepDef, "current", retryCount, sourceRoot);
+    }
+
    return buildNavigateResponse(
      workflowType,
      currentStep,
@@ -602,6 +683,42 @@ export class WorkflowEngine {
      throw new Error(`Next step '${evaluation.nextStep}' not found in workflow`);
    }
 
+    // Fork/join on advance — return control-flow response
+    if (isForkJoinNode(nextStepDef)) {
+      const forkJoinResponse = buildForkJoinResponse(
+        workflowType,
+        evaluation.nextStep,
+        nextStepDef,
+        nextStepDef.type, // "fork" or "join"
+        0,
+        sourceRoot
+      );
+
+      // Write-through for fork/join advance
+      if (taskFilePath) {
+        const task = readTaskFile(taskFilePath);
+        if (task) {
+          const filenameId = basename(taskFilePath, ".json");
+          if (task.id !== filenameId) {
+            console.error(
+              `[Navigator] WARNING: task.id "${task.id}" does not match filename "${filenameId}.json" — auto-correcting`
+            );
+          }
+          const canonicalId = filenameId;
+          task.metadata = { ...task.metadata, ...forkJoinResponse.metadata };
+          task.id = canonicalId;
+          writeFileSync(taskFilePath, JSON.stringify(task, null, 2));
+        }
+      }
+
+      // Persist autonomy in metadata
+      if (autonomy) {
+        forkJoinResponse.metadata.autonomy = true;
+      }
+
+      return forkJoinResponse;
+    }
+
    // Determine action and whether retries incremented
    const currentStepDef = nodes[currentStep];
    const isHitlResume = getTerminalType(currentStepDef) === "hitl";
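The engine returns the join's `strategy` (`"all-pass"` defaulted) but, per the comment in `buildForkJoinResponse`, leaves the actual evaluation to the orchestrator. A sketch of how that evaluation could look once all branch tasks finish (`evaluateJoin` is hypothetical; this diff does not show the orchestrator side):

```javascript
// Sketch: decide a join's outcome from per-branch results, honoring the
// "all-pass" / "any-pass" semantics documented in the README section above.
// `evaluateJoin` is a hypothetical orchestrator helper, not package code.
function evaluateJoin(strategy, branchResults) {
  const results = Object.values(branchResults);
  const passedCount = results.filter((r) => r === "passed").length;
  // "any-pass": one passing branch is enough; otherwise require all ("all-pass" default).
  return strategy === "any-pass" ? passedCount > 0 : passedCount === results.length;
}

const mixed = { reproduce: "passed", git_forensics: "failed" };
console.log(evaluateJoin("all-pass", mixed)); // → false, routes via on: "failed"
console.log(evaluateJoin("any-pass", mixed)); // → true, routes via on: "passed"
```

The boolean then maps onto the join node's conditional edges (`on: "passed"` / `on: "failed"`) when advancing past the join.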
package/index.js
CHANGED

@@ -169,6 +169,11 @@ server.setRequestHandler(ListToolsRequestSchema, async () => {
           description:
             "When true, auto-continue through stage boundary end nodes (non-HITL end nodes with outgoing edges).",
         },
+        stepId: {
+          type: "string",
+          description:
+            "Start at a specific step instead of the beginning (mid-flow recovery). Only used when starting a workflow (no taskFilePath). Ignored during advance.",
+        },
       },
     },
   },
@@ -221,7 +226,7 @@ server.setRequestHandler(ListToolsRequestSchema, async () => {
         workflowIds: {
           type: "array",
           items: { type: "string" },
-          description: "Workflow IDs to copy.
+          description: "Workflow IDs to copy. Required. Use ListCatalog to see available workflows.",
         },
       },
     },
@@ -277,6 +282,7 @@ server.setRequestHandler(CallToolRequestSchema, async (request) => {
       description: args.description,
       projectRoot: PROJECT_ROOT,
       autonomy: args.autonomy,
+      stepId: args.stepId,
     });
     return jsonResponse(result);
   }
@@ -365,7 +371,7 @@ server.setRequestHandler(CallToolRequestSchema, async (request) => {
       writeFileSync(join(workflowDir, "workflow.json"), JSON.stringify(content, null, 2));
 
       // Load into memory
-      store.loadDefinition(id, content);
+      store.loadDefinition(id, content, "project", PROJECT_ROOT);
      copied.push(id);
    } catch (e) {
      errors.push({ id, error: e.message });
@@ -435,7 +441,7 @@ server.setRequestHandler(CallToolRequestSchema, async (request) => {
    try {
      const content = JSON.parse(readFileSync(wfFile, "utf-8"));
      if (validateWorkflow(id, content)) {
-        store.loadDefinition(id, content, "project");
+        store.loadDefinition(id, content, "project", PROJECT_ROOT);
        loaded.push(id);
      } else {
        errors.push({ id, error: "invalid schema" });
package/package.json
CHANGED

@@ -1,7 +1,7 @@
 {
   "name": "@leclabs/agent-flow-navigator-mcp",
-  "version": "1.
-  "description": "MCP server that navigates agents through
+  "version": "1.7.0",
+  "description": "MCP server that navigates agents through graph-based workflows",
   "license": "MIT",
   "author": "leclabs",
   "homepage": "https://github.com/leclabs/agent-toolkit",
@@ -15,7 +15,7 @@
   "type": "module",
   "scripts": {
     "start": "node index.js",
-    "test": "node --test engine.test.js diagram.test.js store.test.js dialog.test.js copier.test.js catalog.test.js refactor-workflow.test.js build-review-workflow.test.js"
+    "test": "node --test engine.test.js diagram.test.js store.test.js dialog.test.js copier.test.js catalog.test.js refactor-workflow.test.js build-review-workflow.test.js bug-hunt-workflow.test.js"
   },
   "keywords": [
     "mcp",
package/store.js
CHANGED
@@ -20,6 +20,73 @@ export function validateWorkflow(id, content) {
     console.error(`Invalid workflow ${id}: missing 'edges' array`);
     return false;
   }
+
+  // Validate fork/join nodes
+  const nodes = content.nodes;
+  const forkNodes = Object.entries(nodes).filter(([, n]) => n.type === "fork");
+  const joinNodes = Object.entries(nodes).filter(([, n]) => n.type === "join");
+
+  for (const [forkId, forkDef] of forkNodes) {
+    // Fork's join field must reference an existing join node
+    if (!forkDef.join || !nodes[forkDef.join]) {
+      console.error(`Invalid workflow ${id}: fork '${forkId}' references missing join node '${forkDef.join}'`);
+      return false;
+    }
+    if (nodes[forkDef.join].type !== "join") {
+      console.error(`Invalid workflow ${id}: fork '${forkId}' join field '${forkDef.join}' is not a join node`);
+      return false;
+    }
+
+    // All branch entryStep values must reference existing nodes
+    if (!forkDef.branches || typeof forkDef.branches !== "object") {
+      console.error(`Invalid workflow ${id}: fork '${forkId}' missing branches`);
+      return false;
+    }
+    for (const [branchName, branchDef] of Object.entries(forkDef.branches)) {
+      if (!branchDef.entryStep || !nodes[branchDef.entryStep]) {
+        console.error(
+          `Invalid workflow ${id}: fork '${forkId}' branch '${branchName}' references missing node '${branchDef.entryStep}'`
+        );
+        return false;
+      }
+    }
+
+    // No nested forks in v1: branch paths must not contain fork nodes
+    for (const [branchName, branchDef] of Object.entries(forkDef.branches)) {
+      if (nodes[branchDef.entryStep]?.type === "fork") {
+        console.error(
+          `Invalid workflow ${id}: fork '${forkId}' branch '${branchName}' entry step '${branchDef.entryStep}' is a nested fork (not supported in v1)`
+        );
+        return false;
+      }
+    }
+  }
+
+  for (const [joinId, joinDef] of joinNodes) {
+    // Join's fork field must reference an existing fork node
+    if (!joinDef.fork || !nodes[joinDef.fork]) {
+      console.error(`Invalid workflow ${id}: join '${joinId}' references missing fork node '${joinDef.fork}'`);
+      return false;
+    }
+    if (nodes[joinDef.fork].type !== "fork") {
+      console.error(`Invalid workflow ${id}: join '${joinId}' fork field '${joinDef.fork}' is not a fork node`);
+      return false;
+    }
+  }
+
+  // Fork-join pairs must be matched 1:1
+  const forkToJoin = new Map(forkNodes.map(([id, def]) => [id, def.join]));
+  const joinToFork = new Map(joinNodes.map(([id, def]) => [id, def.fork]));
+
+  for (const [forkId, joinId] of forkToJoin) {
+    if (joinToFork.get(joinId) !== forkId) {
+      console.error(
+        `Invalid workflow ${id}: fork '${forkId}' → join '${joinId}' pair mismatch (join points to '${joinToFork.get(joinId)}')`
+      );
+      return false;
+    }
+  }
+
   return true;
 }
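The 1:1 fork/join pairing rule above can be exercised in isolation. A minimal sketch, re-implemented outside the package for illustration — `matchForkJoinPairs` is a hypothetical helper, not an export of store.js, and the node IDs are made up:

```javascript
// Hypothetical standalone check mirroring validateWorkflow's pairing rule:
// every fork must name a join whose `fork` field points back at it.
function matchForkJoinPairs(nodes) {
  const forks = Object.entries(nodes).filter(([, n]) => n.type === "fork");
  const joins = Object.entries(nodes).filter(([, n]) => n.type === "join");
  const joinToFork = new Map(joins.map(([id, def]) => [id, def.fork]));
  return forks.every(([forkId, def]) => joinToFork.get(def.join) === forkId);
}

// A matched pair passes...
const ok = matchForkJoinPairs({
  split: { type: "fork", name: "Split", branches: { a: { entryStep: "stepA" } }, join: "merge" },
  merge: { type: "join", name: "Merge", fork: "split" },
  stepA: { type: "task", name: "Step A" },
});

// ...while a join that points at a different fork fails.
const bad = matchForkJoinPairs({
  split: { type: "fork", name: "Split", branches: { a: { entryStep: "stepA" } }, join: "merge" },
  merge: { type: "join", name: "Merge", fork: "other" },
  stepA: { type: "task", name: "Step A" },
});

console.log(ok, bad); // true false
```

The shipped validator reports each failure with a specific error message before returning false; this sketch only captures the boolean outcome of the pairing check.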
package/types.d.ts
CHANGED
@@ -13,7 +13,7 @@
 // Node Types (Discriminated Union)
 // =============================================================================
 
-export type Node = StartNode | EndNode | TaskNode | GateNode | SubflowNode;
+export type Node = StartNode | EndNode | TaskNode | GateNode | SubflowNode | ForkNode | JoinNode;
 
 export interface StartNode {
   type: "start";
@@ -81,6 +81,30 @@ export interface SubflowNode {
   inputs?: Record<string, string>;
 }
 
+/**
+ * Fork node - declares parallel branches within the same workflow.
+ * Each branch names an entry step. Links to a join node that collects results.
+ */
+export interface ForkNode {
+  type: "fork";
+  name: string;
+  description?: string;
+  branches: Record<string, { entryStep: string; description?: string }>;
+  join: string; // join node ID
+}
+
+/**
+ * Join node - collects results from parallel branches.
+ * Strategy determines how branch results are evaluated.
+ */
+export interface JoinNode {
+  type: "join";
+  name: string;
+  description?: string;
+  fork: string; // fork node ID
+  strategy?: "all-pass" | "any-pass"; // default: "all-pass"
+}
+
 export type Stage = "planning" | "development" | "verification" | "delivery";
 
 // =============================================================================
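At runtime these declarations describe plain objects discriminated on `type`. A sketch of conforming node values — the IDs and step names are illustrative, not taken from the built-in catalog:

```javascript
// Hypothetical node values matching the new ForkNode / JoinNode shapes.
const fanOut = {
  type: "fork",
  name: "Parallel review",
  branches: {
    lint: { entryStep: "run-lint", description: "Static analysis" },
    tests: { entryStep: "run-tests", description: "Unit tests" },
  },
  join: "collect-results",
};

const fanIn = {
  type: "join",
  name: "Collect results",
  fork: "fan-out",
  strategy: "all-pass", // the default when omitted
};

// Discriminated-union style dispatch on the `type` field.
function describe(node) {
  switch (node.type) {
    case "fork":
      return `fork -> ${Object.keys(node.branches).length} branches, joins at '${node.join}'`;
    case "join":
      return `join <- '${node.fork}' (${node.strategy ?? "all-pass"})`;
    default:
      return node.type;
  }
}

console.log(describe(fanOut)); // fork -> 2 branches, joins at 'collect-results'
console.log(describe(fanIn));  // join <- 'fan-out' (all-pass)
```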
@@ -145,6 +169,8 @@ export interface NavigateOptions {
   description?: string;
   projectRoot?: string;
   autonomy?: boolean;
+  /** Start at a specific step instead of the beginning (mid-flow recovery). Only used when starting a workflow (no taskFilePath). Ignored during advance. */
+  stepId?: string;
 }
 
 export interface NavigateResponse {
@@ -158,6 +184,8 @@ export interface NavigateResponse {
   autonomyContinued: boolean;
   maxRetries: number;
   orchestratorInstructions: string | null;
+  fork?: { branches: Record<string, { entryStep: string; description?: string }>; joinStep: string };
+  join?: { forkStep: string; strategy: string };
   metadata: {
     workflowType: string;
     currentStep: string;
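An orchestrator consuming a navigate response can fan out on the optional `fork` field. A sketch under assumptions: the response object below is hand-built for illustration (real responses come from the navigate tool), and `branchEntries` is a hypothetical helper, not part of the package:

```javascript
// Flatten a response's fork payload into dispatchable branch descriptors.
function branchEntries(response) {
  if (!response.fork) return []; // non-fork steps carry no branches
  return Object.entries(response.fork.branches).map(([name, branch]) => ({
    name,
    entryStep: branch.entryStep,
    joinStep: response.fork.joinStep, // where all branches reconvene
  }));
}

// Illustrative response shaped like NavigateResponse with a fork field.
const response = {
  autonomyContinued: false,
  maxRetries: 2,
  orchestratorInstructions: null,
  fork: {
    branches: {
      lint: { entryStep: "run-lint" },
      tests: { entryStep: "run-tests" },
    },
    joinStep: "collect-results",
  },
};

console.log(branchEntries(response).map((b) => b.name).join(",")); // lint,tests
```

Each descriptor tells the orchestrator where a branch starts and which join step to report back to; how branches actually run in parallel is left to the caller.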