e2e-ai 1.4.2 → 1.5.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +54 -29
- package/dist/cli-hjczkpxm.js +117 -0
- package/dist/cli.js +39 -25
- package/dist/mcp.js +464 -10
- package/package.json +1 -1
- package/templates/workflow.md +35 -1
- /package/agents/{init-agent.md → 0.init-agent.md} +0 -0
- /package/agents/{transcript-agent.md → 1_1.transcript-agent.md} +0 -0
- /package/agents/{scenario-agent.md → 1_2.scenario-agent.md} +0 -0
- /package/agents/{playwright-generator-agent.md → 2.playwright-generator-agent.md} +0 -0
- /package/agents/{refactor-agent.md → 3.refactor-agent.md} +0 -0
- /package/agents/{self-healing-agent.md → 4.self-healing-agent.md} +0 -0
- /package/agents/{qa-testcase-agent.md → 5.qa-testcase-agent.md} +0 -0
- /package/agents/{feature-analyzer-agent.md → 6_1.feature-analyzer-agent.md} +0 -0
- /package/agents/{scenario-planner-agent.md → 6_2.scenario-planner-agent.md} +0 -0
package/README.md
CHANGED
@@ -505,25 +505,21 @@ E2E_AI_API_KEY=key-... # API key for push command
 
 ## AI Agents
 
-
-
-
-
-
-
-
-
-
-| `
-| `
-| `
-| `
-
-
-| `feature-analyzer-agent` | AST scan result | Features, workflows, components JSON | `analyze` |
-| `scenario-planner-agent` | Features + workflows | Complete QA map with scenarios JSON | `analyze` |
-
-You can customize agent behavior by editing the `.md` files directly. The frontmatter `model` field is the default model for that agent (overridable via `--model` or `config.llm.agentModels`).
+Nine specialized agents live in `agents/*.md`, numbered by pipeline order. Each has a system prompt, input/output schemas, rules, and examples.
+
+| # | File | Input | Output | Used by |
+|---|------|-------|--------|---------|
+| 0 | `0.init-agent` | Codebase scan | `.e2e-ai/context.md` | `init` (AI chat) |
+| 1.1 | `1_1.transcript-agent` | codegen + transcript JSON | Structured narrative with intent mapping | `scenario` |
+| 1.2 | `1_2.scenario-agent` | narrative + issue context | YAML test scenario | `scenario` |
+| 2 | `2.playwright-generator-agent` | scenario + project context | `.test.ts` file | `generate` |
+| 3 | `3.refactor-agent` | test + project context | Improved test file | `refine` |
+| 4 | `4.self-healing-agent` | failing test + error output | Diagnosis + patched test | `heal` |
+| 5 | `5.qa-testcase-agent` | test + scenario + issue data | QA markdown + test case JSON | `qa` |
+| 6.1 | `6_1.feature-analyzer-agent` | AST scan result | Features, workflows, components JSON | `analyze` |
+| 6.2 | `6_2.scenario-planner-agent` | Features + workflows | Complete QA map with scenarios JSON | `analyze` |
+
+Agents are loaded by bare name (e.g., `loadAgent('scenario-agent')`) — the numbered prefix is resolved automatically. You can customize agent behavior by editing the `.md` files in `.e2e-ai/agents/`.
 
 ## Output Directory Structure
 
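The bare-name loading described in the README can be sketched as a pure lookup. This is a simplified reimplementation for illustration only; the shipped version also checks a local `.e2e-ai/agents/` directory before falling back to the package directory, and reads the matched file from disk:

```javascript
// Numbered-prefix resolution: an exact "<name>.md" match wins, otherwise any
// file ending in ".<name>.md" (e.g. "1_2.scenario-agent.md") matches.
function resolveAgentFile(files, agentName) {
  const exact = `${agentName}.md`;
  if (files.includes(exact)) return exact;
  return files.find((f) => f.endsWith(`.${agentName}.md`)) ?? null;
}

const files = ["0.init-agent.md", "1_2.scenario-agent.md", "custom.md"];
console.log(resolveAgentFile(files, "scenario-agent")); // "1_2.scenario-agent.md"
console.log(resolveAgentFile(files, "custom"));         // "custom.md"
console.log(resolveAgentFile(files, "missing"));        // null
```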
@@ -598,6 +594,16 @@ If e2e-ai is installed globally or as a project dependency, you can use the bina
 
 ### Available Tools
 
+#### Orchestration (workflow automation)
+
+| Tool | Description | Input |
+|------|-------------|-------|
+| `e2e_ai_plan_workflow` | Plan an automation workflow — returns an ordered todo list of steps | `goal`, `key?`, `from?`, `skip?`, `voice?`, `trace?`, `scanDir?` |
+| `e2e_ai_execute_step` | Execute a single pipeline step | `step`, `key?`, `voice?`, `trace?`, `scanDir?`, `output?`, `extraArgs?` |
+| `e2e_ai_get_workflow_guide` | Get the full workflow guide explaining how the pipeline works | (none) |
+
+#### Project setup
+
 | Tool | Description | Input |
 |------|-------------|-------|
 | `e2e_ai_scan_codebase` | Scan project for test files, configs, fixtures, path aliases, and sample test content | `projectRoot?` (defaults to cwd) |
@@ -605,16 +611,35 @@ If e2e-ai is installed globally or as a project dependency, you can use the bina
 | `e2e_ai_read_agent` | Load an agent prompt by name — returns system prompt + config | `agentName` (e.g. `scenario-agent`) |
 | `e2e_ai_get_example` | Get the example context markdown template | (none) |
 
-###
-
-
-
-1. **
-2. **
-3. **
-
-
-
+### How AI Orchestration Works
+
+The MCP server includes built-in orchestration instructions that teach AI assistants (Claude Code, Cursor, etc.) how to run e2e-ai workflows autonomously. The protocol is:
+
+1. **Plan** — The AI calls `e2e_ai_plan_workflow` with your goal. It returns an ordered step list.
+2. **Approve** — The AI presents the plan to you for review. You can adjust steps before proceeding.
+3. **Execute** — The AI runs each step one at a time via `e2e_ai_execute_step`, reporting results between steps. If a step fails, it stops and asks you how to proceed.
+
+Each step is executed as a separate job (ideally a subagent) to keep context clean. The AI never runs multiple pipeline steps at once.
+
+**Example interaction:**
+
+> **You:** "Run the full test pipeline for PROJ-101"
+>
+> **AI:** *Calls `e2e_ai_plan_workflow`*, then presents:
+> 1. `record` — Launch browser codegen + voice recording
+> 2. `transcribe` — Transcribe voice via Whisper
+> 3. `scenario` — Generate YAML test scenario
+> 4. `generate` — Generate Playwright test
+> 5. `refine` — Refactor test with AI
+> 6. `test` — Run Playwright test
+> 7. `heal` — Self-heal if failing (can skip if test passes)
+> 8. `qa` — Generate QA documentation
+>
+> "Does this look right? Ready to start?"
+>
+> **You:** "Skip voice, go ahead"
+>
+> **AI:** *Removes transcribe, executes each step sequentially*
 
 ## Library API
 
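The plan → approve → execute loop described in the README can be sketched in miniature. Here `callTool` stands in for an MCP client call; `ready`, `missingPrerequisites`, and `steps` follow the plan fields documented above, while the execute-step response shape (`ok`) is an assumption made for this sketch:

```javascript
// Minimal orchestration loop: plan once, then execute one step at a time,
// stopping on the first failure, as the protocol above requires.
async function runWorkflow(callTool, goal) {
  const plan = await callTool("e2e_ai_plan_workflow", { goal });
  if (!plan.ready) {
    return { status: "blocked", missing: plan.missingPrerequisites };
  }
  const results = [];
  for (const step of plan.steps) {
    const res = await callTool("e2e_ai_execute_step", { step: step.name });
    results.push({ step: step.name, ok: res.ok });
    if (!res.ok) return { status: "failed", at: step.name, results };
  }
  return { status: "done", results };
}

// Simulated client: a two-step plan where the second step fails.
const fakeTool = async (name, args) =>
  name === "e2e_ai_plan_workflow"
    ? { ready: true, steps: [{ name: "test" }, { name: "heal" }] }
    : { ok: args.step === "test" };

runWorkflow(fakeTool, "test and heal").then((r) => console.log(r.status, r.at)); // failed heal
```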
package/dist/cli-hjczkpxm.js
ADDED
@@ -0,0 +1,117 @@
+import {
+  getPackageRoot,
+  getProjectRoot
+} from "./cli-kx32qnf3.js";
+
+// src/agents/loadAgent.ts
+import { readFileSync, existsSync, readdirSync } from "node:fs";
+import { join } from "node:path";
+function resolveAgentFile(dir, agentName) {
+  const exact = join(dir, `${agentName}.md`);
+  if (existsSync(exact))
+    return exact;
+  try {
+    const files = readdirSync(dir);
+    const suffix = `.${agentName}.md`;
+    const match = files.find((f) => f.endsWith(suffix));
+    if (match)
+      return join(dir, match);
+  } catch {}
+  return null;
+}
+function loadAgent(agentName, config) {
+  const localDir = join(getProjectRoot(), ".e2e-ai", "agents");
+  const packageDir = join(getPackageRoot(), "agents");
+  const filePath = resolveAgentFile(localDir, agentName) ?? resolveAgentFile(packageDir, agentName);
+  if (!filePath) {
+    throw new Error(`Agent file not found for "${agentName}" in ${localDir} or ${packageDir}`);
+  }
+  let content;
+  try {
+    content = readFileSync(filePath, "utf-8");
+  } catch {
+    throw new Error(`Agent file not readable: ${filePath}`);
+  }
+  const { frontmatter, body } = parseFrontmatter(content);
+  const agentConfig = extractConfig(frontmatter);
+  let systemPrompt = body;
+  if (config) {
+    const contextPath = join(getProjectRoot(), ".e2e-ai", "context.md");
+    if (existsSync(contextPath)) {
+      const projectContext = readFileSync(contextPath, "utf-8").trim();
+      if (projectContext) {
+        systemPrompt = `${body}
+
+## Project Context
+
+${projectContext}`;
+      }
+    }
+    if (config.llm.agentModels[agentName]) {
+      agentConfig.model = config.llm.agentModels[agentName];
+    }
+  }
+  const sections = parseSections(body);
+  return {
+    name: frontmatter.agent ?? agentName,
+    systemPrompt,
+    inputSchema: sections["Input Schema"],
+    outputSchema: sections["Output Schema"],
+    rules: sections["Rules"],
+    example: sections["Example"],
+    config: agentConfig
+  };
+}
+function parseFrontmatter(content) {
+  const match = content.match(/^---\n([\s\S]*?)\n---\n([\s\S]*)$/);
+  if (!match)
+    return { frontmatter: {}, body: content };
+  const frontmatter = {};
+  for (const line of match[1].split(`
+`)) {
+    const colonIdx = line.indexOf(":");
+    if (colonIdx === -1)
+      continue;
+    const key = line.slice(0, colonIdx).trim();
+    let value = line.slice(colonIdx + 1).trim();
+    if (value.startsWith('"') && value.endsWith('"'))
+      value = value.slice(1, -1);
+    if (value === "true")
+      value = true;
+    if (value === "false")
+      value = false;
+    if (!isNaN(Number(value)) && value !== "")
+      value = Number(value);
+    frontmatter[key] = value;
+  }
+  return { frontmatter, body: match[2] };
+}
+function extractConfig(frontmatter) {
+  return {
+    model: frontmatter.model,
+    maxTokens: frontmatter.max_tokens ?? 4096,
+    temperature: frontmatter.temperature ?? 0.2
+  };
+}
+function parseSections(body) {
+  const sections = {};
+  const headingRegex = /^##\s+(.+)$/gm;
+  const headings = [];
+  let match;
+  while ((match = headingRegex.exec(body)) !== null) {
+    headings.push({ title: match[1].trim(), index: match.index });
+  }
+  const systemMatch = body.match(/^#\s+System Prompt\n([\s\S]*?)(?=\n##\s|$)/m);
+  if (systemMatch) {
+    sections["System Prompt"] = systemMatch[1].trim();
+  }
+  for (let i = 0; i < headings.length; i++) {
+    const start = headings[i].index + body.slice(headings[i].index).indexOf(`
+`) + 1;
+    const end = i + 1 < headings.length ? headings[i + 1].index : body.length;
+    sections[headings[i].title] = body.slice(start, end).trim();
+  }
+  return sections;
+}
+
+export { loadAgent };
package/dist/cli.js
CHANGED
@@ -1,7 +1,7 @@
 #!/usr/bin/env node
 import {
   loadAgent
-} from "./cli-
+} from "./cli-hjczkpxm.js";
 import {
   getPackageRoot,
   getProjectRoot,
@@ -9057,33 +9057,35 @@ function registerInit(program2) {
   program2.command("init").description("Initialize e2e-ai configuration for your project").option("--non-interactive", "Skip interactive prompts, use defaults").action(async (cmdOpts) => {
     const projectRoot = getProjectRoot();
     const e2eDir = join13(projectRoot, ".e2e-ai");
-
-    const answers = cmdOpts?.nonInteractive ? getDefaultAnswers() : await askConfigQuestions();
-    const config = buildConfigFromAnswers(answers);
+    const nonInteractive = !!cmdOpts?.nonInteractive;
     const configPath = join13(e2eDir, "config.ts");
-
-
-
-
-
-
-    success(`Config written: ${configPath}`);
-    }
+    const isReInit = fileExists(configPath);
+    header("e2e-ai init");
+    if (isReInit) {
+      info(`Existing .e2e-ai/ detected — preserving config and context.
+`);
+      await copyAgentsToLocal(projectRoot, nonInteractive);
+      await copyWorkflowGuide(projectRoot, nonInteractive);
     } else {
+      const answers = nonInteractive ? getDefaultAnswers() : await askConfigQuestions();
+      const config = buildConfigFromAnswers(answers);
       writeFile(configPath, generateConfigFile(config));
       success(`Config written: ${configPath}`);
+      await copyAgentsToLocal(projectRoot, nonInteractive);
+      await copyWorkflowGuide(projectRoot, nonInteractive);
     }
-    await copyAgentsToLocal(projectRoot, !!cmdOpts?.nonInteractive);
-    copyWorkflowGuide(projectRoot);
     console.log("");
     success(`Initialization complete!
 `);
-
-
-
-
-
+    if (!isReInit) {
+      console.log(import_picocolors2.default.bold("Next steps:"));
+      console.log(` 1. Use the ${import_picocolors2.default.cyan("init-agent")} in your AI tool to generate ${import_picocolors2.default.cyan(".e2e-ai/context.md")}`);
+      console.log(` (or use the MCP server: ${import_picocolors2.default.cyan("e2e_ai_scan_codebase")} + ${import_picocolors2.default.cyan("e2e_ai_read_agent")})`);
+      console.log(` 2. Review the generated ${import_picocolors2.default.cyan(".e2e-ai/context.md")}`);
+      console.log(` 3. Run: ${import_picocolors2.default.cyan("e2e-ai run --key PROJ-101")}`);
+    } else {
+      console.log(import_picocolors2.default.dim("Config and context.md were preserved. Only agents and workflow were checked."));
+    }
   });
 }
 function getDefaultAnswers() {
@@ -9199,8 +9201,8 @@ async function copyAgentsToLocal(projectRoot, nonInteractive) {
     return 0;
   }
   const overwrite = await dist_default4({
-    message: `
-    default:
+    message: `Update agents to latest version? (${agentFiles.length} files, currently ${existingFiles.length} in .e2e-ai/agents/)`,
+    default: true
   });
   if (!overwrite) {
     info("Skipping agent copy");
@@ -9216,14 +9218,26 @@ async function copyAgentsToLocal(projectRoot, nonInteractive) {
   success(`Agents copied to .e2e-ai/agents/ (${agentFiles.length} files)`);
   return agentFiles.length;
 }
-function copyWorkflowGuide(projectRoot) {
+async function copyWorkflowGuide(projectRoot, nonInteractive) {
   const packageRoot = getPackageRoot();
   const source = join13(packageRoot, "templates", "workflow.md");
   const target = join13(projectRoot, ".e2e-ai", "workflow.md");
   if (!existsSync2(source))
     return;
-  if (existsSync2(target))
-
+  if (existsSync2(target)) {
+    if (nonInteractive) {
+      info("Workflow guide already exists, skipping");
+      return;
+    }
+    const overwrite = await dist_default4({
+      message: "Update workflow.md to latest version?",
+      default: true
+    });
+    if (!overwrite) {
+      info("Skipping workflow guide update");
+      return;
+    }
+  }
   const content = readFileSync2(source, "utf-8");
   writeFile(target, content);
   success("Workflow guide written to .e2e-ai/workflow.md");
package/dist/mcp.js
CHANGED
@@ -1,7 +1,7 @@
 #!/usr/bin/env node
 import {
   loadAgent
-} from "./cli-
+} from "./cli-hjczkpxm.js";
 import {
   getPackageRoot
 } from "./cli-kx32qnf3.js";
@@ -14856,7 +14856,8 @@ class StdioServerTransport {
 }
 
 // src/mcp.ts
-import {
+import { execSync } from "node:child_process";
+import { existsSync as existsSync2, readFileSync as readFileSync2 } from "node:fs";
 import { join as join2 } from "node:path";
 
 // src/utils/scan.ts
@@ -14957,13 +14958,364 @@ function validateContext(content) {
 }
 
 // src/mcp.ts
-var
-
-
-
+var SERVER_INSTRUCTIONS = `
+# e2e-ai — Orchestration Guide
+
+You have access to e2e-ai, an AI-powered E2E test automation tool. Follow this protocol when the user asks you to perform any e2e-ai automation.
+
+## Core Principle: Plan → Approve → Execute Step-by-Step
+
+NEVER run multiple pipeline steps at once. Each step is a separate job with its own context.
+
+## Protocol
+
+1. **Plan first.** Call \`e2e_ai_plan_workflow\` with the user's goal. This returns a structured todo list of steps.
+2. **Check prerequisites.** The plan includes a \`ready\` boolean and \`missingPrerequisites\` array. If \`ready\` is false, show the user what's missing (API keys, config, etc.) and **wait for them to fix it** before proceeding. Do NOT attempt to execute any step while prerequisites are missing.
+3. **Present the plan.** Show the user the ordered step list with descriptions. Ask for confirmation or adjustments before proceeding.
+4. **Execute one step at a time.** For each step in the approved plan:
+a. Tell the user which step you're about to run and why.
+b. Call \`e2e_ai_execute_step\` with the step name and parameters.
+c. Report the result to the user (success, key output, any warnings).
+d. If the step fails, stop and discuss with the user before continuing.
+e. Move to the next step only after the current one succeeds.
+5. **Use subagents when available.** If your AI platform supports subagents (e.g., Claude Code Agent tool), dispatch each step as a dedicated subagent to preserve context. Each subagent should:
+- Receive only the context it needs (step name, key, relevant file paths)
+- Call \`e2e_ai_execute_step\` to do its work
+- Return the result to the orchestrator
+
+## Step Dependencies
+
+Steps produce artifacts that feed into later steps. The pipeline handles this automatically — each step picks up where the previous one left off. Do not skip steps unless the plan says a step can be skipped.
+
+## Interactive Steps
+
+The \`record\` step opens a browser and requires user interaction. When the plan includes \`record\`:
+- Tell the user they need to interact with the browser window
+- The step will block until they close the codegen window
+- After recording completes, proceed with the next step
+
+## When Things Fail
+
+- If \`test\` fails and \`heal\` is in the plan, that's expected — heal will attempt to fix it
+- If \`heal\` exhausts all retries, stop and show the user the last error output
+- For any other failure, stop and ask the user how to proceed
+
+## Available Workflows
+
+- **Full test pipeline**: record → transcribe → scenario → generate → refine → test → heal → qa
+- **From existing recording**: transcribe → scenario → generate → refine → test → heal → qa
+- **AI-only (no recording)**: scenario → generate → refine → test → heal → qa
+- **Generate from scenario**: generate → refine → test → heal → qa
+- **Test + heal loop**: test → heal
+- **Scanner pipeline**: scan → analyze → push
+- **Single step**: any individual command
+
+Always use \`e2e_ai_plan_workflow\` to determine the right steps — don't guess.
+`.trim();
+var TEST_PIPELINE_STEPS = [
+  {
+    name: "record",
+    description: "Launch Playwright codegen in the browser. Optionally records voice narration for richer test scenarios.",
+    produces: "codegen .ts file + optional .wav voice recording",
+    requires: "none",
+    interactive: true
+  },
+  {
+    name: "transcribe",
+    description: "Transcribe the voice recording via OpenAI Whisper. Merges timestamped voice comments into the codegen file.",
+    produces: "transcript JSON + annotated codegen file",
+    requires: "voice recording from record step",
+    interactive: false,
+    canSkip: "No voice recording exists or voice is disabled"
+  },
+  {
+    name: "scenario",
+    description: "AI analyzes the codegen + transcript and generates a structured YAML test scenario with semantic steps and expected results.",
+    produces: "YAML test scenario file",
+    requires: "codegen file (+ optional transcript)",
+    interactive: false
+  },
+  {
+    name: "generate",
+    description: "AI converts the YAML scenario into a complete Playwright .test.ts file using project conventions from context.md.",
+    produces: "Playwright .test.ts file",
+    requires: "YAML scenario file",
+    interactive: false
+  },
+  {
+    name: "refine",
+    description: "AI refactors the test: replaces raw selectors with semantic alternatives, adds proper timeouts, uses project helpers.",
+    produces: "improved .test.ts file (in-place)",
+    requires: "Playwright .test.ts file",
+    interactive: false
+  },
+  {
+    name: "test",
+    description: "Run the Playwright test with trace/video/screenshot capture. Reports pass/fail status.",
+    produces: "test results + trace files",
+    requires: "Playwright .test.ts file",
+    interactive: false
+  },
+  {
+    name: "heal",
+    description: "If the test failed, AI diagnoses the failure and patches the test. Retries up to 3 times with different strategies.",
+    produces: "patched .test.ts file (if test was failing)",
+    requires: "failing test + error output",
+    interactive: false,
+    canSkip: "Test already passes"
+  },
+  {
+    name: "qa",
+    description: "Generate formal QA documentation: markdown test case with preconditions, steps table, and optional Zephyr XML export.",
+    produces: "QA markdown + optional Zephyr XML",
+    requires: "Playwright .test.ts file + scenario",
+    interactive: false
+  }
+];
+var SCANNER_PIPELINE_STEPS = [
+  {
+    name: "scan",
+    description: "Scan the codebase AST: extract routes, components, hooks, imports, and dependency graph.",
+    produces: "ast-scan.json with full codebase structure",
+    requires: "none",
+    interactive: false
+  },
+  {
+    name: "analyze",
+    description: "AI analyzes the AST scan to identify features, workflows, components, and generate test scenarios.",
+    produces: "qa-map.json with features, workflows, scenarios",
+    requires: "ast-scan.json from scan step",
+    interactive: false
+  },
+  {
+    name: "push",
+    description: "Push the QA map to a remote API endpoint for integration with external tools.",
+    produces: "push confirmation with version info",
+    requires: "qa-map.json from analyze step + API config",
+    interactive: false
+  }
+];
+var ALL_STEPS = [...TEST_PIPELINE_STEPS, ...SCANNER_PIPELINE_STEPS];
+var STEP_REQUIREMENTS = {
+  record: { envVars: [] },
+  transcribe: { envVars: [{ name: "OPENAI_API_KEY", reason: "Whisper transcription requires OpenAI API key" }] },
+  scenario: { envVars: [
+    { name: "OPENAI_API_KEY", reason: "LLM calls require OpenAI API key", onlyIf: () => getProvider() === "openai" },
+    { name: "ANTHROPIC_API_KEY", reason: "LLM calls require Anthropic API key", onlyIf: () => getProvider() === "anthropic" }
+  ] },
+  generate: { envVars: [
+    { name: "OPENAI_API_KEY", reason: "LLM calls require OpenAI API key", onlyIf: () => getProvider() === "openai" },
+    { name: "ANTHROPIC_API_KEY", reason: "LLM calls require Anthropic API key", onlyIf: () => getProvider() === "anthropic" }
+  ] },
+  refine: { envVars: [
+    { name: "OPENAI_API_KEY", reason: "LLM calls require OpenAI API key", onlyIf: () => getProvider() === "openai" },
+    { name: "ANTHROPIC_API_KEY", reason: "LLM calls require Anthropic API key", onlyIf: () => getProvider() === "anthropic" }
+  ] },
+  test: { envVars: [] },
+  heal: { envVars: [
+    { name: "OPENAI_API_KEY", reason: "LLM calls require OpenAI API key", onlyIf: () => getProvider() === "openai" },
+    { name: "ANTHROPIC_API_KEY", reason: "LLM calls require Anthropic API key", onlyIf: () => getProvider() === "anthropic" }
+  ] },
+  qa: { envVars: [
+    { name: "OPENAI_API_KEY", reason: "LLM calls require OpenAI API key", onlyIf: () => getProvider() === "openai" },
+    { name: "ANTHROPIC_API_KEY", reason: "LLM calls require Anthropic API key", onlyIf: () => getProvider() === "anthropic" }
+  ] },
+  scan: { envVars: [] },
+  analyze: { envVars: [
+    { name: "OPENAI_API_KEY", reason: "LLM calls require OpenAI API key", onlyIf: () => getProvider() === "openai" },
+    { name: "ANTHROPIC_API_KEY", reason: "LLM calls require Anthropic API key", onlyIf: () => getProvider() === "anthropic" }
+  ] },
+  push: { envVars: [
+    { name: "E2E_AI_API_URL", reason: "Push requires API URL (set E2E_AI_API_URL or push.apiUrl in config)" },
+    { name: "E2E_AI_API_KEY", reason: "Push requires API key (set E2E_AI_API_KEY or push.apiKey in config)" }
+  ] }
+};
+function getProvider() {
+  return process.env.AI_PROVIDER ?? "openai";
+}
+function checkPrerequisites(stepNames) {
+  const issueMap = new Map;
+  for (const stepName of stepNames) {
+    const reqs = STEP_REQUIREMENTS[stepName];
+    if (!reqs)
+      continue;
+    for (const envReq of reqs.envVars) {
+      if (envReq.onlyIf && !envReq.onlyIf())
+        continue;
+      if (!process.env[envReq.name]) {
+        const key = `env:${envReq.name}`;
+        if (issueMap.has(key)) {
+          issueMap.get(key).stepsAffected.push(stepName);
+        } else {
+          issueMap.set(key, {
+            type: "env_var",
+            name: envReq.name,
+            reason: envReq.reason,
+            stepsAffected: [stepName]
+          });
+        }
+      }
+    }
+    if (reqs.files) {
+      for (const fileReq of reqs.files) {
+        if (!existsSync2(fileReq.path)) {
+          const key = `file:${fileReq.path}`;
+          if (issueMap.has(key)) {
+            issueMap.get(key).stepsAffected.push(stepName);
+          } else {
+            issueMap.set(key, {
+              type: "file",
+              name: fileReq.label,
+              reason: `File not found: ${fileReq.path}`,
+              stepsAffected: [stepName]
+            });
+          }
+        }
+      }
+    }
+  }
+  const missing = Array.from(issueMap.values());
+  return { ready: missing.length === 0, missing };
+}
+function planWorkflow(goal, options) {
+  const goalLower = goal.toLowerCase();
+  const notes = [];
+  const isScannerGoal = /\b(scan|analyze|qa.?map|feature.?analy|push.?qa|codebase.?scan)\b/.test(goalLower);
+  const isSingleStep = ALL_STEPS.some((s) => goalLower === s.name || goalLower === `run ${s.name}`);
+  let stepDefs;
+  if (isScannerGoal && !isSingleStep) {
+    stepDefs = [...SCANNER_PIPELINE_STEPS];
+    if (!/\bpush\b/.test(goalLower)) {
+      stepDefs = stepDefs.filter((s) => s.name !== "push");
+      notes.push("Push step excluded — add it if you want to upload the QA map to a remote API.");
+    }
+  } else if (isSingleStep) {
+    const stepName = ALL_STEPS.find((s) => goalLower.includes(s.name)).name;
+    stepDefs = ALL_STEPS.filter((s) => s.name === stepName);
+  } else {
+    stepDefs = [...TEST_PIPELINE_STEPS];
+    if (options.from) {
+      const fromIdx = stepDefs.findIndex((s) => s.name === options.from);
+      if (fromIdx > 0) {
+        const skipped = stepDefs.slice(0, fromIdx).map((s) => s.name);
+        stepDefs = stepDefs.slice(fromIdx);
+        notes.push(`Starting from "${options.from}" — skipping: ${skipped.join(", ")}`);
+      }
+    } else {
+      if (/\b(from recording|existing recording|already recorded)\b/.test(goalLower)) {
+        stepDefs = stepDefs.filter((s) => s.name !== "record");
+        notes.push("Skipping record — using existing recording files.");
+      }
+      if (/\b(from scenario|existing scenario|manual scenario|yaml)\b/.test(goalLower)) {
+        stepDefs = stepDefs.filter((s) => !["record", "transcribe", "scenario"].includes(s.name));
+        notes.push("Starting from generate — using existing scenario YAML.");
+      }
+      if (/\b(generate.?only|just.?generate|no.?record)\b/.test(goalLower)) {
+        stepDefs = stepDefs.filter((s) => !["record", "transcribe"].includes(s.name));
+      }
+      if (/\b(test.?and.?heal|test.?heal|heal.?loop|fix.?test|self.?heal)\b/.test(goalLower)) {
+        stepDefs = stepDefs.filter((s) => ["test", "heal"].includes(s.name));
+      }
+      if (/\b(refine|refactor)\b/.test(goalLower) && !/\brun\b/.test(goalLower)) {
+        stepDefs = stepDefs.filter((s) => s.name === "refine");
+      }
+      if (/\bqa\b/.test(goalLower) && /\b(doc|only|generate)\b/.test(goalLower)) {
+        stepDefs = stepDefs.filter((s) => s.name === "qa");
+      }
+    }
+  }
+  if (options.skip?.length) {
+    stepDefs = stepDefs.filter((s) => !options.skip.includes(s.name));
+    notes.push(`Skipping: ${options.skip.join(", ")}`);
+  }
+  if (options.voice === false) {
+    stepDefs = stepDefs.filter((s) => s.name !== "transcribe");
+    notes.push("Voice disabled — transcribe step removed.");
+  }
+  const cliBase = "e2e-ai";
+  const steps = stepDefs.map((s, i) => {
+    const args = [s.name];
+    if (options.key && !["scan", "analyze", "push"].includes(s.name)) {
+      args.push("--key", options.key);
+    }
+    if (s.name === "record") {
+      if (options.voice === false)
+        args.push("--no-voice");
+      if (options.trace === false)
+        args.push("--no-trace");
+    }
+    if (s.name === "scan" && options.scanDir) {
+      args.push("--scan-dir", options.scanDir);
+    }
+    return {
+      order: i + 1,
+      name: s.name,
+      description: s.description,
+      command: `${cliBase} ${args.join(" ")}`,
+      produces: s.produces,
+      interactive: s.interactive,
+      canSkip: s.canSkip
+    };
+  });
+  const pipeline2 = isScannerGoal ? "scanner" : isSingleStep ? "single" : "test";
+  if (!options.key && pipeline2 === "test" && steps.length > 1) {
+    notes.push("No --key provided. Use --key <ISSUE-KEY> to organize files by issue.");
+  }
+  const prereqs = checkPrerequisites(steps.map((s) => s.name));
+  return { goal, pipeline: pipeline2, ready: prereqs.ready, missingPrerequisites: prereqs.missing, steps, notes };
|
|
15266
|
+
}
|
|
15267
|
+
function executeStep(stepName, options) {
|
|
15268
|
+
const args = [stepName];
|
|
15269
|
+
if (options.key && !["scan", "analyze", "push"].includes(stepName)) {
|
|
15270
|
+
args.push("--key", options.key);
|
|
15271
|
+
}
|
|
15272
|
+
if (stepName === "record") {
|
|
15273
|
+
if (options.voice === false)
|
|
15274
|
+
args.push("--no-voice");
|
|
15275
|
+
if (options.trace === false)
|
|
15276
|
+
args.push("--no-trace");
|
|
15277
|
+
}
|
|
15278
|
+
if (stepName === "scan" && options.scanDir) {
|
|
15279
|
+
args.push("--scan-dir", options.scanDir);
|
|
15280
|
+
}
|
|
15281
|
+
if (options.output) {
|
|
15282
|
+
args.push("--output", options.output);
|
|
15283
|
+
}
|
|
15284
|
+
if (options.extraArgs?.length) {
|
|
15285
|
+
args.push(...options.extraArgs);
|
|
15286
|
+
}
|
|
15287
|
+
const pkgRoot = getPackageRoot();
|
|
15288
|
+
const cliBin = join2(pkgRoot, "dist", "cli.js");
|
|
15289
|
+
const command = `node ${cliBin} ${args.join(" ")}`;
|
|
15290
|
+
try {
|
|
15291
|
+
const stdout = execSync(command, {
|
|
15292
|
+
cwd: process.cwd(),
|
|
15293
|
+
encoding: "utf-8",
|
|
15294
|
+
timeout: 300000,
|
|
15295
|
+
env: { ...process.env },
|
|
15296
|
+
stdio: ["pipe", "pipe", "pipe"]
|
|
15297
|
+
});
|
|
15298
|
+
return { success: true, output: stdout, command };
|
|
15299
|
+
} catch (err) {
|
|
15300
|
+
const stderr = err.stderr?.toString() ?? "";
|
|
15301
|
+
const stdout = err.stdout?.toString() ?? "";
|
|
15302
|
+
return {
|
|
15303
|
+
success: false,
|
|
15304
|
+
output: `EXIT CODE: ${err.status ?? "unknown"}
|
|
15305
|
+
|
|
15306
|
+
STDOUT:
|
|
15307
|
+
${stdout}
|
|
15308
|
+
|
|
15309
|
+
STDERR:
|
|
15310
|
+
${stderr}`,
|
|
15311
|
+
command
|
|
15312
|
+
};
|
|
15313
|
+
}
|
|
15314
|
+
}
|
|
15315
|
+
var server = new McpServer({ name: "e2e-ai", version: "1.5.0" }, { instructions: SERVER_INSTRUCTIONS });
|
|
14964
15316
|
server.registerTool("e2e_ai_scan_codebase", {
|
|
14965
15317
|
title: "Scan Codebase",
|
|
14966
|
-
description: "Scan a project directory for test files, configs, fixtures, path aliases, and sample test content",
|
|
15318
|
+
description: "Scan a project directory for test files, configs, fixtures, path aliases, and sample test content. Use this during project setup or to understand test infrastructure.",
|
|
14967
15319
|
inputSchema: exports_external.object({
|
|
14968
15320
|
projectRoot: exports_external.string().optional().describe("Project root directory (defaults to cwd)")
|
|
14969
15321
|
})
|
|
@@ -14976,7 +15328,7 @@ server.registerTool("e2e_ai_scan_codebase", {
|
|
|
14976
15328
|
});
|
|
14977
15329
|
server.registerTool("e2e_ai_validate_context", {
|
|
14978
15330
|
title: "Validate Context",
|
|
14979
|
-
description: "Validate that a context markdown file contains all required sections",
|
|
15331
|
+
description: "Validate that a context markdown file contains all required sections (Application, Test Infrastructure, Feature Methods, Import Conventions, Selector Conventions, Test Structure Template, Utility Patterns).",
|
|
14980
15332
|
inputSchema: exports_external.object({
|
|
14981
15333
|
content: exports_external.string().describe("The markdown content of the context file to validate")
|
|
14982
15334
|
})
|
|
@@ -14988,7 +15340,7 @@ server.registerTool("e2e_ai_validate_context", {
|
|
|
14988
15340
|
});
|
|
14989
15341
|
server.registerTool("e2e_ai_read_agent", {
|
|
14990
15342
|
title: "Read Agent",
|
|
14991
|
-
description: "Read an agent prompt definition by name. Returns the agent
|
|
15343
|
+
description: "Read an agent prompt definition by name. Returns the agent system prompt and config. Agents: transcript-agent, scenario-agent, playwright-generator-agent, refactor-agent, self-healing-agent, qa-testcase-agent, feature-analyzer-agent, scenario-planner-agent, init-agent.",
|
|
14992
15344
|
inputSchema: exports_external.object({
|
|
14993
15345
|
agentName: exports_external.string().describe("Agent name (e.g. scenario-agent, playwright-generator-agent)")
|
|
14994
15346
|
})
|
|
@@ -15014,7 +15366,7 @@ server.registerTool("e2e_ai_read_agent", {
|
|
|
15014
15366
|
});
|
|
15015
15367
|
server.registerTool("e2e_ai_get_example", {
|
|
15016
15368
|
title: "Get Example Context",
|
|
15017
|
-
description: "Returns the full example context markdown file that shows the expected format for .e2e-ai/context.md",
|
|
15369
|
+
description: "Returns the full example context markdown file that shows the expected format for .e2e-ai/context.md.",
|
|
15018
15370
|
inputSchema: exports_external.object({})
|
|
15019
15371
|
}, async () => {
|
|
15020
15372
|
try {
|
|
@@ -15030,6 +15382,108 @@ server.registerTool("e2e_ai_get_example", {
|
|
|
15030
15382
|
};
|
|
15031
15383
|
}
|
|
15032
15384
|
});
|
|
15385
|
+
server.registerTool("e2e_ai_plan_workflow", {
|
|
15386
|
+
title: "Plan Workflow",
|
|
15387
|
+
description: "Plan an e2e-ai automation workflow. Call this FIRST when the user asks to run any automation. " + "Returns an ordered list of steps (todo list) that should be executed one at a time. " + "Present the plan to the user for approval before executing any step.",
|
|
15388
|
+
inputSchema: exports_external.object({
|
|
15389
|
+
goal: exports_external.string().describe('What the user wants to achieve. Examples: "run full pipeline for PROJ-101", ' + '"generate test from existing recording", "scan codebase and analyze features", ' + '"heal failing test PROJ-101", "refactor test PROJ-101"'),
|
|
15390
|
+
key: exports_external.string().optional().describe("Issue key (e.g. PROJ-101, LIN-42)"),
|
|
15391
|
+
from: exports_external.string().optional().describe("Start from a specific step (skip all prior steps)"),
|
|
15392
|
+
skip: exports_external.array(exports_external.string()).optional().describe('Steps to skip (e.g. ["transcribe", "heal"])'),
|
|
15393
|
+
voice: exports_external.boolean().optional().describe("Enable voice recording (default: true)"),
|
|
15394
|
+
trace: exports_external.boolean().optional().describe("Enable trace capture (default: true)"),
|
|
15395
|
+
scanDir: exports_external.string().optional().describe("Directory to scan (for scanner pipeline)")
|
|
15396
|
+
})
|
|
15397
|
+
}, async ({ goal, key, from, skip, voice, trace, scanDir }) => {
|
|
15398
|
+
const plan = planWorkflow(goal, { key, from, skip, voice, trace, scanDir });
|
|
15399
|
+
return {
|
|
15400
|
+
content: [{
|
|
15401
|
+
type: "text",
|
|
15402
|
+
text: JSON.stringify(plan, null, 2)
|
|
15403
|
+
}]
|
|
15404
|
+
};
|
|
15405
|
+
});
|
|
15406
|
+
server.registerTool("e2e_ai_execute_step", {
|
|
15407
|
+
title: "Execute Pipeline Step",
|
|
15408
|
+
description: "Execute a single e2e-ai pipeline step. Call this ONE STEP AT A TIME from an approved plan. " + "Each step produces artifacts consumed by later steps. " + "If your AI platform supports subagents, run each step in a dedicated subagent to preserve context. " + 'The "record" step is interactive and will open a browser window — the user must interact with it.',
|
|
15409
|
+
inputSchema: exports_external.object({
|
|
15410
|
+
step: exports_external.string().describe("Step name: record, transcribe, scenario, generate, refine, test, heal, qa, scan, analyze, push"),
|
|
15411
|
+
key: exports_external.string().optional().describe("Issue key (e.g. PROJ-101)"),
|
|
15412
|
+
voice: exports_external.boolean().optional().describe("Enable voice recording (record step only)"),
|
|
15413
|
+
trace: exports_external.boolean().optional().describe("Enable trace capture (record step only)"),
|
|
15414
|
+
scanDir: exports_external.string().optional().describe("Directory to scan (scan step only)"),
|
|
15415
|
+
output: exports_external.string().optional().describe("Custom output path (scan/analyze steps)"),
|
|
15416
|
+
extraArgs: exports_external.array(exports_external.string()).optional().describe("Additional CLI arguments")
|
|
15417
|
+
})
|
|
15418
|
+
}, async ({ step, key, voice, trace, scanDir, output, extraArgs }) => {
|
|
15419
|
+
const validSteps = ALL_STEPS.map((s) => s.name);
|
|
15420
|
+
if (!validSteps.includes(step)) {
|
|
15421
|
+
return {
|
|
15422
|
+
content: [{
|
|
15423
|
+
type: "text",
|
|
15424
|
+
text: `Error: Unknown step "${step}". Valid steps: ${validSteps.join(", ")}`
|
|
15425
|
+
}],
|
|
15426
|
+
isError: true
|
|
15427
|
+
};
|
|
15428
|
+
}
|
|
15429
|
+
const prereqs = checkPrerequisites([step]);
|
|
15430
|
+
if (!prereqs.ready) {
|
|
15431
|
+
const lines = prereqs.missing.map((m) => `- ${m.type === "env_var" ? `Set ${m.name}` : m.name}: ${m.reason}`);
|
|
15432
|
+
return {
|
|
15433
|
+
content: [{
|
|
15434
|
+
type: "text",
|
|
15435
|
+
text: JSON.stringify({
|
|
15436
|
+
step,
|
|
15437
|
+
success: false,
|
|
15438
|
+
blocked: true,
|
|
15439
|
+
missingPrerequisites: prereqs.missing,
|
|
15440
|
+
message: `Cannot run "${step}" — missing prerequisites:
|
|
15441
|
+
${lines.join(`
|
|
15442
|
+
`)}
|
|
15443
|
+
|
|
15444
|
+
Ask the user to provide these before retrying.`
|
|
15445
|
+
}, null, 2)
|
|
15446
|
+
}],
|
|
15447
|
+
isError: true
|
|
15448
|
+
};
|
|
15449
|
+
}
|
|
15450
|
+
const result = executeStep(step, { key, voice, trace, scanDir, output, extraArgs });
|
|
15451
|
+
return {
|
|
15452
|
+
content: [{
|
|
15453
|
+
type: "text",
|
|
15454
|
+
text: JSON.stringify({
|
|
15455
|
+
step,
|
|
15456
|
+
success: result.success,
|
|
15457
|
+
command: result.command,
|
|
15458
|
+
output: result.output
|
|
15459
|
+
}, null, 2)
|
|
15460
|
+
}]
|
|
15461
|
+
};
|
|
15462
|
+
});
|
|
15463
|
+
server.registerTool("e2e_ai_get_workflow_guide", {
|
|
15464
|
+
title: "Get Workflow Guide",
|
|
15465
|
+
description: "Returns the e2e-ai workflow guide explaining how the pipeline works, step by step. Useful for understanding what each step does and how they connect.",
|
|
15466
|
+
inputSchema: exports_external.object({})
|
|
15467
|
+
}, async () => {
|
|
15468
|
+
try {
|
|
15469
|
+
const guidePath = join2(getPackageRoot(), "templates", "workflow.md");
|
|
15470
|
+
if (!existsSync2(guidePath)) {
|
|
15471
|
+
return {
|
|
15472
|
+
content: [{ type: "text", text: "Error: workflow.md not found in templates" }],
|
|
15473
|
+
isError: true
|
|
15474
|
+
};
|
|
15475
|
+
}
|
|
15476
|
+
const content = readFileSync2(guidePath, "utf-8");
|
|
15477
|
+
return {
|
|
15478
|
+
content: [{ type: "text", text: content }]
|
|
15479
|
+
};
|
|
15480
|
+
} catch (err) {
|
|
15481
|
+
return {
|
|
15482
|
+
content: [{ type: "text", text: `Error: ${err.message}` }],
|
|
15483
|
+
isError: true
|
|
15484
|
+
};
|
|
15485
|
+
}
|
|
15486
|
+
});
|
|
15033
15487
|
async function main() {
|
|
15034
15488
|
const transport = new StdioServerTransport;
|
|
15035
15489
|
await server.connect(transport);
|
package/package.json
CHANGED
package/templates/workflow.md
CHANGED
@@ -14,6 +14,10 @@ record → transcribe → scenario → generate → refine → test → heal →
 
 **In short:** You record yourself testing in the browser (optionally narrating what you're doing), and e2e-ai turns that into a production-ready Playwright test with QA documentation.
 
+**Two ways to run it:**
+- **CLI**: Run commands directly (`e2e-ai run --key PROJ-101`)
+- **AI assistant**: Ask your AI tool (Claude Code, Cursor, etc.) — the MCP server guides it through the pipeline step by step, asking for your approval before starting
+
 ---
 
 ## Setup
@@ -198,6 +202,27 @@ This is independent from the test pipeline — use it to get an overview of your
 
 ---
 
+## AI-Assisted Workflow (MCP)
+
+If you have the e2e-ai MCP server configured, you can ask your AI assistant to run the pipeline for you. The MCP server teaches the AI how to orchestrate the workflow:
+
+1. **You say:** "Run the full test pipeline for PROJ-101" (or any variation)
+2. **AI plans:** Calls `e2e_ai_plan_workflow` → gets an ordered step list
+3. **AI shows plan:** Presents the steps and asks for your approval
+4. **You adjust:** "Skip voice" / "Start from generate" / "Looks good, go"
+5. **AI executes:** Runs each step one at a time via `e2e_ai_execute_step`, reporting results between steps
+
+Each step runs as a separate subagent (when supported by the AI platform) to keep context clean and focused. If a step fails, the AI stops and asks you what to do.
+
+**Example prompts you can give your AI assistant:**
+- "Run the full pipeline for PROJ-101"
+- "Generate a test from the existing recording for PROJ-101, skip voice"
+- "Just run test and heal for PROJ-101"
+- "Scan the codebase and analyze features"
+- "Refactor the test for PROJ-101"
+
+---
+
 ## File Structure
 
 After running the pipeline for `PROJ-101`:
@@ -207,7 +232,16 @@ After running the pipeline for `PROJ-101`:
   config.ts ← your configuration
   context.md ← project context (teach AI your conventions)
   workflow.md ← this file
-  agents/ ← AI agent prompts (
+  agents/ ← AI agent prompts (numbered by pipeline order)
+    0.init-agent.md
+    1_1.transcript-agent.md
+    1_2.scenario-agent.md
+    2.playwright-generator-agent.md
+    3.refactor-agent.md
+    4.self-healing-agent.md
+    5.qa-testcase-agent.md
+    6_1.feature-analyzer-agent.md
+    6_2.scenario-planner-agent.md
 PROJ-101/ ← working files (codegen, recordings)
 
 e2e/
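The plan object that `e2e_ai_plan_workflow` hands back to the assistant is plain JSON. An illustrative sketch — the field names mirror `planWorkflow`'s return value in `dist/mcp.js`, but the `description` and `produces` strings here are invented for the example:

```javascript
// Hypothetical plan for the goal "generate test from existing scenario".
// Field names match planWorkflow's return shape; string values are made up.
const plan = {
  goal: "generate test from existing scenario for PROJ-101",
  pipeline: "test",
  ready: true,
  missingPrerequisites: [],
  steps: [
    { order: 1, name: "generate", description: "Generate Playwright test", command: "e2e-ai generate --key PROJ-101", produces: "test file", interactive: false, canSkip: false },
    { order: 2, name: "refine", description: "Refactor the generated test", command: "e2e-ai refine --key PROJ-101", produces: "refined test", interactive: false, canSkip: true }
  ],
  notes: ["Starting from generate — using existing scenario YAML."]
};

// Every step carries the exact CLI command the MCP server will run,
// so the assistant can show the full plan before executing anything.
console.log(plan.steps.map((s) => s.command).join("\n"));
```

Because each step embeds its own `command`, the assistant (or a curious user) can always fall back to running the same pipeline by hand with the CLI.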
|
|
File without changes
|
|
File without changes
|
|
File without changes
|
|
File without changes
|
|
File without changes
|
|
File without changes
|
|
File without changes
|
|
File without changes
|
|
File without changes
|