levante 0.1.0
- package/agents/0.init-agent.md +83 -0
- package/agents/1_1.transcript-agent.md +107 -0
- package/agents/1_2.scenario-agent.md +90 -0
- package/agents/2.playwright-generator-agent.md +96 -0
- package/agents/3.refactor-agent.md +51 -0
- package/agents/4.self-healing-agent.md +90 -0
- package/agents/5.qa-testcase-agent.md +77 -0
- package/dist/cli-039axkk7.js +251 -0
- package/dist/cli-0kkk77d6.js +251 -0
- package/dist/cli-69cp23fq.js +76 -0
- package/dist/cli-6wrk5ptg.js +250 -0
- package/dist/cli-akwt7nqw.js +67 -0
- package/dist/cli-hdefnftg.js +13610 -0
- package/dist/cli-w0v11cvq.js +13610 -0
- package/dist/cli-wckvcay0.js +48 -0
- package/dist/cli.js +9214 -0
- package/dist/config/schema.js +9 -0
- package/dist/index-2qecm8mk.js +7073 -0
- package/dist/index.js +15 -0
- package/dist/mcp.js +19647 -0
- package/package.json +43 -0
- package/scripts/auth/setup-auth.mjs +118 -0
- package/scripts/codegen-env.mjs +377 -0
- package/scripts/exporters/zephyr-json-to-import-xml.ts +156 -0
- package/scripts/trace/replay-with-trace.mjs +95 -0
- package/scripts/trace/replay.config.ts +37 -0
- package/scripts/voice/merger.mjs +119 -0
- package/scripts/voice/recorder.mjs +54 -0
- package/scripts/voice/transcriber.mjs +52 -0
- package/templates/e2e-ai.context.example.md +93 -0
- package/templates/workflow.md +289 -0
@@ -0,0 +1,289 @@

# levante Workflow Guide

This file explains how levante works and how to use it. Keep it as a reference in your `.qai/levante/` folder.

---

## How It Works

levante converts manual browser recordings into stable, documented Playwright tests. An AI pipeline processes your recording through multiple stages, each producing an artifact that feeds the next.

```
record → transcribe → scenario → generate → refine → test → heal → qa
```

**In short:** You record yourself testing in the browser (optionally narrating what you're doing), and levante turns that into a production-ready Playwright test with QA documentation.

**Two ways to run it:**
- **CLI**: Run commands directly (`levante run --key PROJ-101`)
- **AI assistant**: Ask your AI tool (Claude Code, Cursor, etc.) — the MCP server guides it through the pipeline step by step, asking for your approval before starting

---

## Setup

After running `levante init`, you need a **context file** (`.qai/levante/context.md`) that teaches the AI about your project's test conventions — fixtures, helpers, selectors, login flows, etc.

**How to create it:** Use the `init-agent` in your AI tool (Claude Code, Cursor, etc.). If you have the MCP server configured, the AI can scan your codebase automatically with `e2e_ai_scan_codebase`.
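
For orientation, a context file might look something like this. The fixture and helper names below are invented for illustration; yours will come from your own codebase:

```markdown
# Project Test Context

## Fixtures
- Tests import `test` from `e2e/fixtures.ts`, which provides a logged-in `page`.

## Selectors
- Prefer `getByRole` / `getByTestId`; `data-testid` attributes are stable.

## Helpers
- `loginAs(page, role)` in `e2e/helpers/auth.ts` handles authentication.
```

The more concrete this file is, the closer generated tests will match your existing suite.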

---

## The Standard Workflow

### 1. Record (`record`)

Opens Playwright codegen in your browser. You interact with your app while codegen captures every action.

```bash
levante record --key PROJ-101
```

**With voice** (default): Records your microphone while you narrate what you're testing. Press `R` to pause/resume audio. Your voice comments become test documentation.

**Without voice:**
```bash
levante record --key PROJ-101 --no-voice
```

**Output:** `codegen-<timestamp>.ts` + `voice-<timestamp>.wav` (if voice enabled)

### 2. Transcribe (`transcribe`)

Sends the voice recording to OpenAI Whisper. Gets back timestamped text segments and injects them as comments into the codegen file:

```typescript
// [Voice 00:12 - 00:15] "Now I'm checking the item list loads correctly"
await page.getByRole('button', { name: 'Items' }).click();
```

**Skipped automatically** if no voice recording exists.

### 3. Scenario (`scenario`)

Two AI agents process the codegen + transcript:

1. **transcript-agent** — Maps your voice comments to codegen actions, translates non-English speech, classifies what's test-relevant vs. noise
2. **scenario-agent** — Converts everything into a structured YAML test scenario with semantic steps and expected results

```yaml
name: "Items list: verify weekly view headers"
steps:
  - number: 1
    action: "Log in with valid credentials"
    expectedResult: "User is redirected to dashboard"
  - number: 2
    action: "Navigate to Items section"
    selector: "getByRole('button', { name: 'Items' })"
    expectedResult: "Items list is displayed"
```

**Without voice:** The scenario is generated from codegen actions alone (the AI infers intent from selectors and page structure).
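
As a rough sketch, the scenario shape implied by the YAML above can be written as a TypeScript type. This mirrors the example's fields only; it is not levante's authoritative schema:

```typescript
// Sketch of the scenario structure seen in the YAML example above.
// Field names (name, steps, number, action, selector, expectedResult)
// are taken from that example; treat this as illustrative, not canonical.
interface ScenarioStep {
  number: number;          // 1-based step order
  action: string;          // semantic description of what the user does
  expectedResult: string;  // what should be observable afterwards
  selector?: string;       // present only when a concrete locator was captured
}

interface Scenario {
  name: string;
  steps: ScenarioStep[];
}

const scenario: Scenario = {
  name: "Items list: verify weekly view headers",
  steps: [
    {
      number: 1,
      action: "Log in with valid credentials",
      expectedResult: "User is redirected to dashboard",
    },
    {
      number: 2,
      action: "Navigate to Items section",
      selector: "getByRole('button', { name: 'Items' })",
      expectedResult: "Items list is displayed",
    },
  ],
};
```

Steps without a `selector` (like step 1 here) leave locator choice to the generator, which resolves them against your context file.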

### 4. Generate (`generate`)

The **playwright-generator-agent** takes the YAML scenario + your project context (`.qai/levante/context.md`) and writes a complete `.test.ts` file using your project's fixtures, helpers, and conventions.

### 5. Refine (`refine`)

The **refactor-agent** improves the generated test:
- Replaces raw CSS selectors with semantic alternatives (`getByRole`, `getByText`)
- Uses your project's helper methods where available
- Adds proper timeouts to assertions
- Replaces `waitForTimeout()` with proper waits
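
As an illustration of the kind of rewrite these rules produce (a hypothetical before/after; the button name and success message are invented):

```typescript
// before: brittle CSS selector, fixed sleep
await page.click('#app > div.row > button.btn-primary');
await page.waitForTimeout(3000);

// after: semantic selector, web-first assertion with an explicit timeout
await page.getByRole('button', { name: 'Submit' }).click();
await expect(page.getByText('Saved')).toBeVisible({ timeout: 10_000 });
```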

### 6. Test (`test`)

Runs the test with Playwright, capturing traces, video, and screenshots.

- **If it passes** → moves to QA documentation
- **If it fails** → moves to self-healing

### 7. Heal (`heal`)

The **self-healing-agent** diagnoses the failure and patches the test. Up to 3 attempts, each trying a different fix strategy:

| Failure Type | Fix Strategy |
|---|---|
| Selector changed | Try semantic selectors, stable attributes |
| Timing issue | Add waits, increase timeouts |
| Element not interactable | Wait for enabled state, scroll into view |
| Assertion mismatch | Update expected values |
| Navigation failure | Add `waitForURL`, `waitForLoadState` |

Never removes assertions. Never changes test structure. Adds `// HEALED: <reason>` comments.
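
A healed line might end up looking like this (hypothetical snippet; the old selector is invented):

```typescript
// HEALED: selector changed; '#items-btn' no longer exists, switched to role-based locator
await page.getByRole('button', { name: 'Items' }).click();
```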

**Skipped automatically** if the test passes.

### 8. QA (`qa`)

The **qa-testcase-agent** generates formal QA documentation:
- Markdown test case (ID, preconditions, steps table, postconditions)
- Zephyr XML (optional, if configured)

---

## Running the Full Pipeline

```bash
# Everything in one command
levante run --key PROJ-101

# Without voice recording
levante run --key PROJ-101 --no-voice

# Start from a specific step (skip prior steps)
levante run --key PROJ-101 --from scenario

# Skip specific steps
levante run --key PROJ-101 --skip heal

# Common: generate from existing recording data
levante run --key PROJ-101 --from generate
```

---

## Workflow Variations

### With Issue Tracker (Jira / Linear)

Set `inputSource: 'jira'` or `'linear'` in config. The scenario step will fetch issue context (summary, acceptance criteria, labels) and use it to align the test scenario with the ticket.

```bash
levante run --key PROJ-101   # fetches Jira/Linear issue automatically
```
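
In `.qai/levante/config.ts`, this is a single setting. The shape below is a sketch; check your generated config file for the exact structure:

```typescript
export default {
  inputSource: 'jira', // 'jira' | 'linear' | 'none'
  // ...rest of your generated config
};
```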

### Without Issue Tracker

Set `inputSource: 'none'` (default). Use any identifier as the key, or omit it entirely:

```bash
levante run --key login-flow
levante run my-session
```

### AI-Only (No Recording)

Write the YAML scenario manually or have it generated from an existing codegen file, then run the AI pipeline:

```bash
levante generate --key PROJ-101
levante test --key PROJ-101
levante heal --key PROJ-101
levante qa --key PROJ-101
```

### Existing Test Improvement

Refactor an existing test to follow project conventions:

```bash
levante refine --key PROJ-101
```

---

## Scanner Pipeline (Separate Workflow)

Scans your codebase to build a QA map of features, workflows, and test scenarios:

```bash
# 1. Extract AST (routes, components, hooks)
levante scan

# 2. AI analysis → features, workflows, scenarios
levante analyze

# 3. Push QA map to remote API (optional)
levante push
```

This is independent from the test pipeline — use it to get an overview of your app's testable surface.

---

## AI-Assisted Workflow (MCP)

If you have the levante MCP server configured, you can ask your AI assistant to run the pipeline for you. The MCP server teaches the AI how to orchestrate the workflow:

1. **You say:** "Run the full test pipeline for PROJ-101" (or any variation)
2. **AI plans:** Calls `e2e_ai_plan_workflow` → gets an ordered step list
3. **AI shows plan:** Presents the steps and asks for your approval
4. **You adjust:** "Skip voice" / "Start from generate" / "Looks good, go"
5. **AI executes:** Runs each step one at a time via `e2e_ai_execute_step`, reporting results between steps

Each step runs as a separate subagent (when supported by the AI platform) to keep context clean and focused. If a step fails, the AI stops and asks you what to do.

**Example prompts you can give your AI assistant:**
- "Run the full pipeline for PROJ-101"
- "Generate a test from the existing recording for PROJ-101, skip voice"
- "Just run test and heal for PROJ-101"
- "Scan the codebase and analyze features"
- "Refactor the test for PROJ-101"

---

## File Structure

After running the pipeline for `PROJ-101`:

```
.qai/levante/
  config.ts          ← your configuration
  context.md         ← project context (teach AI your conventions)
  workflow.md        ← this file
  agents/            ← AI agent prompts (numbered by pipeline order)
    0.init-agent.md
    1_1.transcript-agent.md
    1_2.scenario-agent.md
    2.playwright-generator-agent.md
    3.refactor-agent.md
    4.self-healing-agent.md
    5.qa-testcase-agent.md
    6_1.feature-analyzer-agent.md
    6_2.scenario-planner-agent.md
  PROJ-101/          ← working files (codegen, recordings)

e2e/
  tests/PROJ-101/
    PROJ-101.yaml    ← generated scenario
    PROJ-101.test.ts ← generated Playwright test
  qa/
    PROJ-101.md      ← QA documentation
```

---

## Environment Variables

```bash
# Required
OPENAI_API_KEY=sk-...          # For LLM calls + Whisper transcription

# Optional
AI_PROVIDER=openai             # openai | anthropic
AI_MODEL=gpt-4o                # Model override
ANTHROPIC_API_KEY=sk-ant-...   # If using Anthropic
BASE_URL=https://your-app.com  # Your application URL
```

---

## Quick Reference

| Command | What it does |
|---|---|
| `levante init` | Create config + copy agents |
| `levante record --key X` | Record browser session |
| `levante transcribe --key X` | Transcribe voice recording |
| `levante scenario --key X` | Generate YAML test scenario |
| `levante generate --key X` | Generate Playwright test |
| `levante refine --key X` | Refactor test with AI |
| `levante test --key X` | Run Playwright test |
| `levante heal --key X` | Auto-fix failing test |
| `levante qa --key X` | Generate QA documentation |
| `levante run --key X` | Run full pipeline |
| `levante scan` | Scan codebase AST |
| `levante analyze` | AI feature/scenario analysis |
| `levante push` | Push QA map to API |