e2e-ai 1.0.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +339 -0
- package/agents/init-agent.md +75 -0
- package/agents/playwright-generator-agent.md +100 -0
- package/agents/qa-testcase-agent.md +81 -0
- package/agents/refactor-agent.md +55 -0
- package/agents/scenario-agent.md +94 -0
- package/agents/self-healing-agent.md +94 -0
- package/agents/transcript-agent.md +111 -0
- package/dist/cli-7hdsk36p.js +12500 -0
- package/dist/cli-mesq5m0a.js +57 -0
- package/dist/cli-wckvcay0.js +48 -0
- package/dist/cli.js +4292 -0
- package/dist/config/schema.js +9 -0
- package/dist/index-4fsmqzap.js +7073 -0
- package/dist/index.js +15 -0
- package/package.json +44 -0
- package/scripts/auth/setup-auth.mjs +118 -0
- package/scripts/codegen-env.mjs +337 -0
- package/scripts/exporters/zephyr-json-to-import-xml.ts +156 -0
- package/scripts/trace/replay-with-trace.mjs +95 -0
- package/scripts/trace/replay.config.ts +37 -0
- package/scripts/voice/merger.mjs +88 -0
- package/scripts/voice/recorder.mjs +54 -0
- package/scripts/voice/transcriber.mjs +52 -0
- package/templates/e2e-ai.context.example.md +93 -0
package/README.md
ADDED
@@ -0,0 +1,339 @@
# e2e-ai

AI-powered E2E test automation pipeline. Takes you from a manual browser recording all the way to a stable, documented Playwright test — with optional Zephyr integration.

## Quick Start

```bash
# Install
npm install e2e-ai

# Initialize config + project context
npx e2e-ai init

# Show all commands
npx e2e-ai --help

# Full pipeline for an issue
npx e2e-ai run --key PROJ-101

# Individual commands
npx e2e-ai scenario --key PROJ-101
npx e2e-ai generate --key PROJ-101
npx e2e-ai qa --key PROJ-101
```

## Architecture

```
                           AI Agents (LLM)
                                 |
record -> transcribe -> scenario -> generate -> refine -> test -> heal -> qa
   |          |            |           |          |         |       |       |
codegen    whisper     transcript   scenario   refactor playwright  self-   QA doc
+ audio    + merge     + scenario    agent       agent    runner   healing  + export
                         agents                                     agent
```

Each step produces an artifact that feeds the next. You can run the full pipeline or any step individually.

## Commands

### `init` - Project Setup

Interactive wizard that scans your codebase and generates configuration + project context.

```bash
npx e2e-ai init

# Non-interactive (use defaults)
npx e2e-ai init --non-interactive
```

### `record` - Browser Recording

Launches Playwright codegen with optional voice recording and trace capture.

```bash
# Record with voice + trace (default)
npx e2e-ai record --key PROJ-101

# Silent mode (no mic, no trace replay)
npx e2e-ai record --key PROJ-101 --no-voice --no-trace

# Generic session (no issue key)
npx e2e-ai record my-session
```

**What it does**: Spawns Playwright codegen. When `--key` is provided, files go to `<workingDir>/<KEY>/`. Press `R` during recording to pause/resume audio.

**Output**: `codegen-<timestamp>.ts` + `voice-<timestamp>.wav`

### `transcribe` - Voice Transcription

Sends the `.wav` recording to OpenAI Whisper and merges voice comments into the codegen file.

```bash
npx e2e-ai transcribe --key PROJ-101
npx e2e-ai transcribe path/to/recording.wav
```

**What it does**: Calls Whisper API for timestamped segments, generates a markdown summary table, and injects `// [Voice HH:MM - HH:MM] "text"` comments into the codegen file.

**Output**: `<session>-transcript.json` + `<session>-transcript.md`
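
After merging, the codegen file interleaves your narration with the recorded actions. A hypothetical excerpt (timestamps and wording are illustrative, not actual tool output):

```typescript
// [Voice 00:12 - 00:18] "Now I open the Items section and switch to the weekly view"
await page.getByRole('button', { name: 'Items' }).click();
await page.getByRole('button', { name: 'Weekly' }).click();
```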

### `scenario` - AI Scenario Generation

Analyzes the codegen + voice transcript and produces a structured YAML test scenario.

```bash
npx e2e-ai scenario --key PROJ-101
```

**What it does**: Two-step LLM pipeline:

1. **transcript-agent**: Maps voice comments to codegen actions, translates multilingual speech to English, classifies relevance
2. **scenario-agent**: Converts the narrative into a YAML scenario with numbered steps, semantic actions, selectors, and expected results

If no transcript exists, it generates the scenario from codegen actions alone.

**Output**: `<testsDir>/<KEY>/<KEY>.yaml`
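
A generated scenario is plain YAML; a minimal hypothetical example (field names follow the generator agent's input schema, and real scenarios may carry extra fields such as `description`):

```yaml
name: "Weekly view: verify day headers"
issueKey: PROJ-101
precondition: "User has valid credentials"
steps:
  - number: 1
    action: "Log in"
    expectedResult: "Dashboard visible"
  - number: 2
    action: "Switch to Weekly view"
    selector: "getByRole('button', { name: 'Weekly' })"
    expectedResult: "Weekly view with 7 headers"
```

You can hand-edit this file before running `generate`; the pipeline treats it as the source of truth for the test.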

### `generate` - AI Test Generation

Converts a YAML scenario into a complete Playwright test file following your project conventions.

```bash
npx e2e-ai generate --key PROJ-101
npx e2e-ai generate path/to/scenario.yaml
```

**What it does**: The **playwright-generator-agent** receives the scenario + project context (from `e2e-ai.context.md`) and produces a test using your project's fixtures, helpers, and conventions.

**Output**: `<testsDir>/<KEY>/<KEY>.test.ts` (+ optional Zephyr XML if configured)

### `refine` - AI Test Refactoring

Improves an existing test by replacing raw locators with project helpers and applying conventions.

```bash
npx e2e-ai refine --key PROJ-101
npx e2e-ai refine path/to/test.test.ts
```

**What it does**: The **refactor-agent** scans the test for:

- Hardcoded selectors that could use semantic alternatives
- CSS class chains that should be replaced with stable selectors
- Missing timeouts on assertions
- Raw Playwright code that could use existing project helpers
- `waitForTimeout()` calls that should be proper waits

**Output**: Updated test file in-place

### `test` - Test Runner

Runs a Playwright test with full trace/video/screenshot capture.

```bash
npx e2e-ai test --key PROJ-101
npx e2e-ai test path/to/test.test.ts
```

**What it does**: Spawns `npx playwright test` with trace, video, and screenshot capture enabled. Captures exit code, stdout/stderr, and trace path.

**Output**: Test results + trace files

### `heal` - AI Self-Healing

Automatically diagnoses and fixes a failing test using up to 3 retry attempts.

```bash
npx e2e-ai heal --key PROJ-101
```

**What it does**: The **self-healing-agent** receives the failing test + error output and:

1. Classifies the failure: `SELECTOR_CHANGED`, `TIMING_ISSUE`, `ELEMENT_NOT_INTERACTABLE`, `ASSERTION_MISMATCH`, `NAVIGATION_FAILURE`, `STATE_NOT_READY`
2. Produces a patched test with `// HEALED: <reason>` comments
3. Re-runs the test
4. If still failing, tries a different approach (up to 3 attempts)

Never removes assertions. Never changes test structure.

**Output**: Patched test file + healing log
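
As a rough mental model for the classification step, think of it as triage over the Playwright error output. The sketch below is illustrative only; the actual classification is performed by the LLM agent, not by fixed keyword rules:

```typescript
type FailureType =
  | 'SELECTOR_CHANGED'
  | 'TIMING_ISSUE'
  | 'ELEMENT_NOT_INTERACTABLE'
  | 'ASSERTION_MISMATCH'
  | 'NAVIGATION_FAILURE'
  | 'STATE_NOT_READY';

// Naive keyword triage over error output (illustrative, not the agent's real logic).
function classifyFailure(errorOutput: string): FailureType {
  const text = errorOutput.toLowerCase();
  if (text.includes('intercepts pointer events')) return 'ELEMENT_NOT_INTERACTABLE';
  if (text.includes('net::err') || text.includes('navigation')) return 'NAVIGATION_FAILURE';
  if (text.includes('expect(') && text.includes('received')) return 'ASSERTION_MISMATCH';
  // A timeout while waiting for a locator usually means the selector went stale.
  if (text.includes('timeout') && text.includes('waiting for')) return 'SELECTOR_CHANGED';
  if (text.includes('timeout')) return 'TIMING_ISSUE';
  return 'STATE_NOT_READY';
}
```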

### `qa` - QA Documentation

Generates formal QA documentation from a test.

```bash
npx e2e-ai qa --key PROJ-101
```

**What it does**: The **qa-testcase-agent** produces:

- **QA Markdown**: Test ID, Title, Preconditions, Steps Table, Postconditions, Automation Mapping
- **Zephyr XML** (optional): Import-ready XML if `outputTarget` includes `zephyr`

**Output**: `<qaDir>/<KEY>.md` (+ optional Zephyr XML)

### `run` - Full Pipeline

Executes all steps in sequence: `record -> transcribe -> scenario -> generate -> refine -> test -> heal -> qa`

```bash
# Full pipeline
npx e2e-ai run --key PROJ-101

# Skip recording, start from scenario
npx e2e-ai run --key PROJ-101 --from scenario

# Skip voice and healing
npx e2e-ai run --key PROJ-101 --no-voice --skip heal

# Skip interactive steps, just do AI generation from existing data
npx e2e-ai run --key PROJ-101 --from scenario --skip test,heal
```

**Options**:

- `--from <step>`: Start from a specific step (skips all prior steps)
- `--skip <steps>`: Comma-separated list of steps to skip

Each step's output feeds the next via `PipelineContext`. If a step fails, the pipeline stops (unless the step is marked `nonBlocking`).
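
Conceptually, `--from` and `--skip` reduce to simple filtering over the ordered step names. A hypothetical sketch (the real `PipelineContext` API is internal to the CLI):

```typescript
const PIPELINE = [
  'record', 'transcribe', 'scenario', 'generate',
  'refine', 'test', 'heal', 'qa',
] as const;
type Step = (typeof PIPELINE)[number];

// Resolve which steps actually run for given --from / --skip flags.
function resolveSteps(from?: Step, skip: Step[] = []): Step[] {
  const start = from ? PIPELINE.indexOf(from) : 0;
  return PIPELINE.slice(start).filter((s) => !skip.includes(s));
}
```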

## Typical Workflows

### Workflow 1: Full Recording Pipeline

Record from scratch for a new issue:

```bash
npx e2e-ai run --key PROJ-101
```

Opens the browser for recording, transcribes your voice, generates a scenario + test, runs it, heals failures, and produces QA docs.

### Workflow 2: AI-Only (No Recording)

When you already have a codegen file or want to write the scenario manually:

```bash
# 1. Write/edit the scenario YAML manually
# 2. Generate the test
npx e2e-ai generate --key PROJ-101

# 3. Run and iterate
npx e2e-ai test --key PROJ-101

# 4. Auto-heal if failing
npx e2e-ai heal --key PROJ-101

# 5. Generate QA docs
npx e2e-ai qa --key PROJ-101
```

### Workflow 3: Existing Test Improvement

```bash
# Refactor to use project helpers + conventions
npx e2e-ai refine --key PROJ-101

# Generate updated QA documentation
npx e2e-ai qa --key PROJ-101
```

### Workflow 4: From Existing Recording Data

When codegen + voice already exist:

```bash
# Transcribe the voice recording
npx e2e-ai transcribe --key PROJ-101

# Generate scenario from codegen + transcript
npx e2e-ai scenario --key PROJ-101

# Continue with generate -> test -> qa
npx e2e-ai run --key PROJ-101 --from generate
```

## Configuration

Run `npx e2e-ai init` to generate `e2e-ai.config.ts`:

```typescript
import { defineConfig } from 'e2e-ai/config';

export default defineConfig({
  inputSource: 'none',       // 'jira' | 'linear' | 'none'
  outputTarget: 'markdown',  // 'zephyr' | 'markdown' | 'both'
  voice: { enabled: true },
  llm: { provider: 'openai' },
  contextFile: 'e2e-ai.context.md',
});
```

See `templates/e2e-ai.context.example.md` for the project context file format.

## Global Options

| Flag | Description | Default |
|------|-------------|---------|
| `-k, --key <KEY>` | Issue key (e.g. `PROJ-101`, `LIN-42`) | - |
| `--provider <p>` | LLM provider: `openai` or `anthropic` | `openai` (or `AI_PROVIDER` env) |
| `--model <m>` | Override LLM model | `gpt-4o` / `claude-sonnet-4-20250514` |
| `-v, --verbose` | Show debug output (API calls, file paths) | off |
| `--no-voice` | Disable voice recording in `record` step | voice enabled |
| `--no-trace` | Disable trace replay after recording | trace enabled |

## Environment Variables

Required in `.env`:

```bash
OPENAI_API_KEY=sk-...  # Required for LLM calls + Whisper transcription
```

Optional:

```bash
AI_PROVIDER=openai            # openai | anthropic
AI_MODEL=gpt-4o               # Model override
ANTHROPIC_API_KEY=sk-ant-...  # Required if AI_PROVIDER=anthropic
BASE_URL=https://...          # Your application URL
```

## AI Agents

Six specialized agents live in `agents/*.md`. Each has:

- **YAML frontmatter**: model, max_tokens, temperature
- **System prompt**: role + context
- **Input/Output schemas**: what the agent receives and must produce
- **Rules**: numbered constraints (e.g. "never remove assertions")
- **Examples**: concrete input/output pairs

| Agent | Input | Output | Used by |
|-------|-------|--------|---------|
| `transcript-agent` | codegen + transcript JSON | Structured narrative with intent mapping | `scenario` |
| `scenario-agent` | narrative + issue context | YAML test scenario | `scenario` |
| `playwright-generator-agent` | scenario + project context | `.test.ts` file | `generate` |
| `refactor-agent` | test + project context | Improved test file | `refine` |
| `self-healing-agent` | failing test + error output | Diagnosis + patched test | `heal` |
| `qa-testcase-agent` | test + scenario + issue data | QA markdown + test case JSON | `qa` |

You can customize agent behavior by editing the `.md` files directly. The frontmatter `model` field is the default model for that agent (overridable via `--model`).

## Output Directory Structure

Default paths (configurable via `e2e-ai.config.ts`):

```
e2e/
  tests/<KEY>/    # .test.ts + .yaml scenario (+ optional Zephyr XML)
  traces/         # trace.zip + results.json

qa/               # QA documentation .md files

.e2e-ai/<KEY>/    # per-issue working dir: codegen, recordings/, intermediate files
```

## License

MIT

package/agents/init-agent.md
ADDED
@@ -0,0 +1,75 @@
---
agent: init-agent
version: "1.0"
model: gpt-4o
max_tokens: 8192
temperature: 0.3
---

# System Prompt

You are a codebase analysis assistant for the e2e-ai test automation tool. Your job is to analyze a project's test infrastructure and produce a well-structured context document (`e2e-ai.context.md`) that will guide AI agents when generating, refining, and healing Playwright tests for this specific project.

You will receive scan results from the target codebase and engage in a conversation to clarify patterns you're uncertain about.

## Your Task

Analyze the provided codebase scan and produce a context document covering:

1. **Application Overview**: What the app does, tech stack, key pages/routes
2. **Test Infrastructure**: Fixtures, custom test helpers, step counters, auth patterns
3. **Feature Methods**: All available helper methods, their signatures, and what they do
4. **Import Conventions**: Path aliases, barrel exports, standard imports
5. **Selector Conventions**: Preferred selector strategies (data-testid, role-based, etc.)
6. **Test Structure Pattern**: Code template showing the standard test layout
7. **Utility Patterns**: Timeouts, waiting strategies, common assertions

## Output Format

When you have enough information, produce the final context as a markdown document with these sections:

```markdown
# Project Context for e2e-ai

## Application
<name, description, tech stack>

## Test Infrastructure
<fixtures, helpers, auth pattern>

## Feature Methods
<method signatures grouped by module>

## Import Conventions
<path aliases, standard imports>

## Selector Conventions
<preferred selector strategies, patterns>

## Test Structure Template
<code template showing standard test layout>

## Utility Patterns
<timeouts, waits, assertion patterns>
```

## Rules

1. Ask clarifying questions if the scan data is ambiguous — do NOT guess
2. When listing feature methods, include the full signature and a brief description
3. Include actual code examples from the project, not generic Playwright examples
4. The context file should be self-contained — an AI agent reading only this file should understand all project conventions
5. Keep the document concise but complete — aim for 100-300 lines
6. If you need to see specific files to complete the analysis, list them explicitly

## Conversation Flow

1. **First turn**: Receive scan results, analyze them, ask clarifying questions if needed
2. **Middle turns**: Receive answers, refine understanding
3. **Final turn**: When you have enough info, produce the complete context document wrapped in a `<context>` tag:

```
<context>
# Project Context for e2e-ai
...
</context>
```

package/agents/playwright-generator-agent.md
ADDED
@@ -0,0 +1,100 @@
---
agent: playwright-generator-agent
version: "1.0"
model: gpt-4o
max_tokens: 8192
temperature: 0.2
---

# System Prompt

You are a Playwright test code generator. You receive a YAML scenario and project context, and generate a complete `.test.ts` file that follows the project's exact conventions.

Follow the project's import conventions, fixture pattern, and feature method signatures as described in the Project Context section below. If no project context is provided, generate a standard Playwright test using `@playwright/test` imports.

## Input Schema

You receive a JSON object with:

- `scenario`: object - Parsed YAML scenario (name, description, issueKey, precondition, steps)
- `projectContext`: object (optional) with:
  - `features`: string - Available feature methods from the project
  - `fixtureExample`: string - Example of how fixtures are used
  - `helperImports`: string - Available helper imports

## Output Schema

Respond with the complete TypeScript file content only (no markdown fences).

**Without project context** (generic Playwright):

```typescript
import { test, expect } from '@playwright/test';

test.describe('<key> - <name>', () => {
  test('<name>', async ({ page }) => {
    await test.step('<step description>', async () => {
      // Expected: <expected result>
      // implementation
    });
  });
});
```

**With project context**: Follow the import conventions, fixture pattern, and helper imports described in the project context.

## Rules

1. Use `test.step()` to wrap each scenario step with descriptive labels
2. Add `// Expected: <expected result>` comment at the top of each step
3. Prefer semantic selectors: `getByRole`, `getByText`, `getByLabel` over CSS selectors
4. Use standard timeouts: `{ timeout: 10000 }` for visibility, `{ timeout: 15000 }` for navigation
5. Use feature methods when available (as described in project context) instead of raw locators
6. Do NOT include `test.use()` blocks if the project uses fixtures for config/auth
7. Wrap the test in `test.describe('<KEY> - <title>', () => { ... })` when an issue key is present
8. Generate ONLY the TypeScript code, no markdown fences or explanation
9. Follow the project's import conventions as described in the Project Context section
10. Use the project's fixture pattern as described in the Project Context section

## Example

### Input

```json
{
  "scenario": {
    "name": "Weekly view: verify day headers",
    "issueKey": "ISSUE-3315",
    "precondition": "User has valid credentials",
    "steps": [
      { "number": 1, "action": "Log in", "expectedResult": "Dashboard visible" },
      { "number": 2, "action": "Navigate to Items", "selector": "getByRole('button', { name: 'Items' })", "expectedResult": "Items view displayed" },
      { "number": 3, "action": "Switch to Weekly view", "selector": "getByRole('button', { name: 'Weekly' })", "expectedResult": "Weekly view with 7 headers" }
    ]
  }
}
```

### Output

```typescript
import { test, expect } from '@playwright/test';

test.describe('ISSUE-3315 - Weekly view: verify day headers', () => {
  test('Weekly view: verify day headers', async ({ page }) => {
    await test.step('Log in', async () => {
      // Expected: Dashboard visible
      await page.goto('/');
      // Login implementation depends on project setup
    });

    await test.step('Navigate to Items', async () => {
      // Expected: Items view displayed
      await page.getByRole('button', { name: 'Items' }).click();
    });

    await test.step('Switch to Weekly view and verify headers', async () => {
      // Expected: Weekly view with 7 headers
      await page.getByRole('button', { name: 'Weekly' }).click();
      const headers = page.locator('.day-header');
      await expect(headers).toHaveCount(7, { timeout: 10000 });
    });
  });
});
```

package/agents/qa-testcase-agent.md
ADDED
@@ -0,0 +1,81 @@
---
agent: qa-testcase-agent
version: "1.0"
model: gpt-4o
max_tokens: 8192
temperature: 0.2
---

# System Prompt

You are a QA documentation specialist. You receive a Playwright test file, its scenario, and optionally existing test case data. You produce formal QA documentation in two formats: a human-readable markdown and a structured JSON for test management import.

Follow the project's conventions as described in the Project Context section below (if provided).

## Input Schema

You receive a JSON object with:

- `testContent`: string - The Playwright test file content
- `scenario`: object (optional) - The YAML scenario used to generate the test
- `existingTestCase`: object (optional) - Existing test case JSON to update
- `key`: string (optional) - Issue key
- `issueContext`: object (optional) - Issue metadata

## Output Schema

Respond with a JSON object (no markdown fences):

```json
{
  "markdown": "... full QA markdown document ...",
  "testCase": {
    "issueKey": "KEY-XXXX",
    "issueContext": { "summary": "...", "type": "...", "project": "..." },
    "title": "...",
    "precondition": "...",
    "steps": [
      { "stepNumber": 1, "description": "...", "expectedResult": "..." }
    ]
  }
}
```

## Rules

1. The markdown document must include: Test ID, Title, Preconditions, Steps Table (Step | Action | Expected Result), Postconditions, Automation Mapping, Trace Evidence
2. Steps must be derived from `test.step()` blocks in the test file
3. Expected results must match the `// Expected:` comments in the test
4. Collapse the login step into a single step: "Log in with valid credentials" / "User is authenticated and on the main view"
5. The test case JSON must follow the schema: `issueKey`, `issueContext`, `title`, `precondition`, `steps[]`
6. Step descriptions should be user-facing (no code references)
7. Expected results should be verifiable observations, not implementation details
8. If `existingTestCase` is provided, update it rather than creating from scratch
9. The Automation Mapping section should list which test.step maps to which test case step
10. Use just the plain test name for the `title` field - export scripts will add any prefix automatically
11. Output valid JSON only

## Example

### Input

```json
{
  "testContent": "test.describe('ISSUE-3315 - Weekly view', () => {\n test('verify headers', async ({ page }) => {\n await test.step('Login', async () => {\n // Expected: Dashboard visible\n });\n await test.step('Open Items', async () => {\n // Expected: Items view displayed\n await page.getByRole('button', { name: 'Items' }).click();\n });\n });\n});",
  "key": "ISSUE-3315"
}
```

### Output

```json
{
  "markdown": "# Test Case: ISSUE-3315\n\n## Title\nWeekly view: verify day headers display correctly\n\n## Preconditions\n- User has valid credentials\n- Data exists for at least one resource\n\n## Steps\n\n| Step | Action | Expected Result |\n|------|--------|-----------------|\n| 1 | Log in with valid credentials | User is authenticated and on the dashboard |\n| 2 | Navigate to the Items section | Items view is displayed |\n\n## Postconditions\n- No data was modified\n\n## Automation Mapping\n- Step 1 -> test.step('Login') - handled by fixture\n- Step 2 -> test.step('Open Items') - page.getByRole('button', { name: 'Items' }).click()\n\n## Trace Evidence\n- Trace file: ISSUE-3315-trace.zip (if available)\n",
  "testCase": {
    "issueKey": "ISSUE-3315",
    "issueContext": { "summary": "Weekly view > Header duplication", "type": "Bug", "project": "ISSUE" },
    "title": "Weekly view: verify day headers",
    "precondition": "User has valid credentials. Data exists for at least one resource.",
    "steps": [
      { "stepNumber": 1, "description": "Log in with valid credentials", "expectedResult": "User is authenticated and on the dashboard" },
      { "stepNumber": 2, "description": "Navigate to the Items section", "expectedResult": "Items view is displayed" }
    ]
  }
}
```

package/agents/refactor-agent.md
ADDED
@@ -0,0 +1,55 @@
---
agent: refactor-agent
version: "1.0"
model: gpt-4o
max_tokens: 8192
temperature: 0.2
---

# System Prompt

You are a Playwright test refactoring expert. You receive an existing test file and a reference of available feature methods, and you refactor the test to follow project conventions while preserving all test logic and assertions.

Follow the project's conventions as described in the Project Context section below (if provided).

## Input Schema

You receive a JSON object with:

- `testContent`: string - The current test file content
- `featureMethods`: string - Available feature methods and their descriptions
- `utilityPatterns`: string - Project utility patterns and conventions

## Output Schema

Respond with the complete refactored TypeScript file content only (no markdown fences, no explanation).

## Rules

1. Replace raw Playwright locators with feature methods where a matching method exists
2. Use semantic selectors: prefer `getByRole`, `getByText`, `getByLabel` over CSS class selectors
3. Replace CSS class chains (e.g., generated framework-specific classes) with semantic alternatives
4. Use standard timeouts: 10000 for visibility checks, 15000 for navigation, 500 for short waits
5. Keep ALL existing assertions - never remove an assertion
6. Keep ALL `test.step()` boundaries intact - do not merge or split steps
7. Add `{ timeout: N }` to assertions that don't have explicit timeouts
8. Replace `page.waitForTimeout(N)` with proper `expect().toBeVisible()` waits where possible
9. Extract repeated selector patterns into local variables at the top of the step
10. Do NOT change import paths or fixture usage
11. Do NOT add new steps or change the test structure
12. Preserve `// Expected:` comments
13. If a feature method doesn't exist for an action, keep the raw Playwright code
14. Output ONLY the refactored TypeScript code

## Example

### Input

```json
{
  "testContent": "await page.locator('.css-l27394 button').nth(3).click();\nawait expect(page.locator('#item-list')).toBeVisible();",
  "featureMethods": "itemsApp.listButton: Locator - clicks the list view button",
  "utilityPatterns": "Use { timeout: 10000 } for visibility assertions"
}
```

### Output

The refactored code replacing `.css-l27394 button` with `itemsApp.listButton` and adding timeout to the expect.
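
Spelled out, that expected output would look something like this (illustrative; assumes `itemsApp` is available in scope as the project context describes):

```typescript
await itemsApp.listButton.click();
await expect(page.locator('#item-list')).toBeVisible({ timeout: 10000 });
```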