assuremind 1.2.0 → 1.2.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
# Assuremind Studio

**AI-powered codeless UI & API test automation framework**

[![npm version](https://img.shields.io/npm/v/assuremind.svg)](https://www.npmjs.com/package/assuremind)
[![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](LICENSE)
[![Node.js](https://img.shields.io/badge/node-%3E%3D18-brightgreen.svg)](https://nodejs.org)
[![Playwright](https://img.shields.io/badge/powered%20by-Playwright-2EAD33.svg)](https://playwright.dev)

Describe tests in plain English. AI generates Playwright code. Run anywhere.

---

## Why Assuremind?

| Capability | What it does |
|-----------|-------------|
| **Zero coding** | Write steps in plain English — AI generates Playwright code automatically |
| **MCP-sighted generation** | AI sees real page elements via Playwright MCP accessibility snapshots (~90-95% accuracy) |
| **3 suite types** | **UI** (browser automation) · **API** (HTTP tests) · **Audit** (Playwright + Lighthouse) |
| **5-level self-healing** | Broken selectors are auto-fixed by AI during runs — smart retry → AI regen → multi-selector → visual → decompose |
| **12 AI providers** | Anthropic · OpenAI · Gemini · Groq · DeepSeek · Together · Qwen · Perplexity · Ollama · Bedrock · Azure OpenAI · Custom |
| **Device emulation** | iPhone, Pixel, iPad, Galaxy — full Playwright device descriptors from the UI or the `--device` CLI flag |
| **Studio UI** | Browser-based editor, run dashboard, reports, healing review, git control center — with dark mode |
| **RAG memory** | AI learns from every run — retrieves similar past steps & healing fixes for smarter generation |
| **Test Recorder** | Record tests by clicking in a real browser — locators verified against Playwright's accessibility tree, zero AI cost |
| **Cost-optimised** | Template engine + code cache + RAG handle ~80% of steps with zero AI calls |
| **CI-ready** | `npx assuremind run --all --ci` — exit code 0/1, works with any pipeline |
| **File-based** | Plain JSON storage, fully Git-friendly, no database |

---

## Quick Start

```bash
npm install assuremind
npx assuremind init     # folders, config, Playwright browsers
npx assuremind studio   # opens http://localhost:4400
```

> **Requirements:** Node.js >= 18 · macOS / Linux / Windows

### Configure AI Provider

Edit `.env` — pick one provider:

```bash
# Anthropic
AI_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-...
ANTHROPIC_MODEL=claude-sonnet..

# OpenAI
AI_PROVIDER=openai
OPENAI_API_KEY=sk-...
OPENAI_MODEL=gpt-4o

# Google
AI_PROVIDER=google
GOOGLE_API_KEY=AIza...
GOOGLE_MODEL=gemini-2.5-pro
```

See `.env.example` for configuration details for all 12 providers, including Ollama (local, free).

---

## CLI

```bash
npx assuremind run --all                                # run everything
npx assuremind run --type ui --tag smoke                # filter by type + tag
npx assuremind run --suite "Login" --browser chromium   # run a suite
npx assuremind run --all --device "iPhone 15 Pro" --ci  # mobile + CI mode
npx assuremind generate --story "User resets password"  # AI generates full suite
npx assuremind apply-healing --yes                      # accept all healed selectors
npx assuremind validate                                 # check config health
npx assuremind doctor                                   # system diagnostics
```

| Flag | Description |
|------|-------------|
| `--all` | Run every suite |
| `--type <ui\|api\|audit>` | Filter by suite type |
| `--suite <name>` | Partial name match |
| `--tag <tag>` | Filter by tag |
| `--device <name>` | Emulate device (e.g. `"iPhone 15 Pro"`, `"Pixel 7"`) |
| `--browser <list>` | `chromium` · `firefox` · `webkit` |
| `--ci` | CI mode — exit code reflects pass/fail |
| `--headed` | Show browser window |
| `--no-healing` | Disable self-healing |

Full reference → [docs/CLI-REFERENCE.md](docs/CLI-REFERENCE.md)

---

## Studio UI

Start with `npx assuremind studio` — opens at `http://localhost:4400`.

**Dashboard** · **Smart Tests** · **Test Editor** · **Run Config** · **Reports** · **Variables** · **Self-Healing** · **Step Library** · **Git Control** · **Settings** · **Docs**

Full walkthrough → [docs/STUDIO.md](docs/STUDIO.md)

---

## MCP Integration

AI sees **real page elements** during code generation via the official `@playwright/mcp` server. Enabled by default.

| Mode | Accuracy | Latency | Config |
|------|----------|---------|--------|
| Blind (MCP off) | ~50-70% | Fastest | `mcp.enabled: false` |
| **Snapshot-driven** | ~90-95% | +2-5s first page | `mcp.enabled: true` (default) |
| Act-then-script | ~98-100% | +5-10s/step | `mcp.actThenScript: true` |

- MCP is **only used during code generation** — test execution is never affected
- Silent fallback — if MCP fails, generation continues blindly without error
- Configure in **Settings → MCP Integration** or `autotest.config.json`
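
The config-file toggles from the table above can be written in `autotest.config.json` roughly like this (a sketch: the key names come from the table, but the surrounding file structure is an assumption):

```json
{
  "mcp": {
    "enabled": true,
    "actThenScript": false
  }
}
```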

---

## Test Recorder

Record tests by interacting with your application in a real browser — **zero AI, zero cost, zero guesswork**.

Click **Record** in the Test Editor, perform your actions, and stop. Each interaction becomes a step with verified Playwright code, ready to run.

### How it works

1. A headed Chromium browser opens your app's URL
2. Every click, fill, and navigation is captured in real time — **including inside iframes**
3. Locators are resolved against Playwright's **accessibility tree** — the recorder tries 6 strategies (data-testid, getByRole, getByLabel, getByPlaceholder, getByText, CSS) and verifies each with `count() === 1`
4. **Iframe-aware** — elements inside iframes automatically generate `page.frameLocator('#iframe').getByRole(...)` code with the correct frame chain
5. Assertions via keyboard shortcuts: **Shift+Click** (element visible), **Ctrl+Shift+U** (URL), **Ctrl+Shift+T** (page title)
6. On stop, each action is added as a step with **pre-generated Playwright code** — no AI call needed
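
The strategy loop in step 3 can be sketched as a tiny pure function (illustrative only: the real recorder runs inside Assuremind and checks candidates with Playwright's `locator.count()`, which is injected here as a callback so the selection logic stands alone):

```javascript
// Ordered locator strategies; each builds candidate Playwright code from a
// recorded element, or returns a falsy value when the attribute is absent.
const STRATEGIES = [
  ['data-testid', el => el.testId && `page.getByTestId('${el.testId}')`],
  ['role',        el => el.role && `page.getByRole('${el.role}', { name: '${el.name}' })`],
  ['label',       el => el.label && `page.getByLabel('${el.label}')`],
  ['placeholder', el => el.placeholder && `page.getByPlaceholder('${el.placeholder}')`],
  ['text',        el => el.text && `page.getByText('${el.text}')`],
  ['css',         el => `page.locator('${el.css}')`],
];

// countMatches stands in for a live locator.count() call against the page;
// the first strategy that matches exactly one element wins.
function resolveLocator(el, countMatches) {
  for (const [strategy, build] of STRATEGIES) {
    const code = build(el);
    if (code && countMatches(code) === 1) return { strategy, code };
  }
  return null; // no unique match: the recorder would fall back or flag the step
}
```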

### What makes it stand out vs other recorders

| Feature | Selenium IDE | Playwright Codegen | Assuremind Recorder |
|---------|-------------|-------------------|---------------------|
| Locator quality | CSS/XPath | Good | Best — 6 strategies, verified against the live page |
| Accessibility tree | No | Partial | Full — every locator checked via the Playwright API |
| **Iframe support** | Partial | Manual | **Auto** — detects iframes, generates `frameLocator()` code |
| Assertions | Manual | Manual | Shift+Click (hard), Ctrl+Shift+Click (soft), URL & title shortcuts |
| Plain-English steps | No | No | Yes — human-readable instructions auto-generated |
| Self-healing after | No | No | Yes — 5-level AI healing cascade |
| RAG memory | No | No | Yes — recorded steps feed the learning loop |
| Cost | Free | Free | Free |

### Biggest pain points in test automation — solved

| Pain Point | How the Recorder Solves It |
|-----------|---------------------------|
| Writing tests is slow | Record a full test in 30 seconds |
| Selectors break constantly | Locators verified against Playwright's accessibility tree in real time |
| AI costs money | Recording + code generation = $0, zero AI calls |
| Non-technical testers can't write tests | Anyone who can click a browser can create tests |
| Assertions are hard to write | Shift+Click for hard, Ctrl+Shift+Click for soft, Ctrl+Shift+U for URL, Ctrl+Shift+T for title |
| Hard vs soft assertions | Soft assertions (`expect.soft()`) let the test continue — all failures reported at the end |
| Recorded tests are fragile | 6-strategy locator resolution + post-run 5-level self-healing |
| Apps use iframes (SAP, Salesforce) | Auto-detects iframe context, generates `frameLocator()` chains — no manual frame handling |

---

## RAG Memory (Retrieval-Augmented Generation)

The AI learns from every test run, building semantic memory that improves accuracy over time — **zero setup required**:

| Corpus | What it stores | When it's used |
|--------|---------------|----------------|
| **Code Corpus** | Instruction-to-code mappings from successful runs | During generation — similar past steps are retrieved as AI examples or used directly (score >= 0.90) |
| **Healing Corpus** | Past healing events (error + fix pairs) | During self-healing — proven past fixes are injected into the repair prompt |
| **Error Catalog** | Recurring error patterns per URL | During generation — the AI is warned about known-bad selectors to avoid |

**Zero cost, zero database** — uses local TF-IDF embeddings and file-based JSON storage (`results/.rag/`). Enabled by default — works automatically from the very first run.
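
To make "local TF-IDF embeddings" concrete, here is a minimal bag-of-words similarity lookup (a sketch under stated assumptions: Assuremind's actual embedder, scoring, and storage are internal; only the 0.90 direct-use threshold comes from the table above):

```javascript
// Turn an instruction into a term-frequency vector (plain object).
function vectorize(text) {
  const counts = {};
  for (const tok of text.toLowerCase().match(/[a-z0-9]+/g) || []) {
    counts[tok] = (counts[tok] || 0) + 1;
  }
  return counts;
}

// Cosine similarity between two term-frequency vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (const t in a) { na += a[t] * a[t]; if (b[t]) dot += a[t] * b[t]; }
  for (const t in b) nb += b[t] * b[t];
  return dot ? dot / (Math.sqrt(na) * Math.sqrt(nb)) : 0;
}

// Return a stored corpus entry only when it clears the reuse threshold;
// a null result means the caller falls back to a real AI call.
function ragLookup(instruction, corpus, threshold = 0.90) {
  const query = vectorize(instruction);
  let best = null, bestScore = 0;
  for (const entry of corpus) {
    const score = cosine(query, vectorize(entry.instruction));
    if (score > bestScore) { best = entry; bestScore = score; }
  }
  return bestScore >= threshold ? best : null;
}
```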

### How it improves over time

- **Run 1** — memory is empty, AI generates code normally
- **Run 2+** — RAG kicks in silently: similar instructions are retrieved instead of making API calls (free + faster), healing uses proven past fixes
- **Run 10+** — most common steps are served from RAG memory at zero cost, self-healing resolves issues on the first attempt

### Consumer FAQ

| Question | Answer |
|----------|--------|
| Do I need to configure anything? | No. RAG is ON by default with zero setup. |
| Does it cost anything? | No. The TF-IDF embedder runs locally, and RAG direct hits replace paid AI calls. |
| Does it slow down my tests? | No. A RAG lookup takes <1ms and actually speeds up generation. |
| Does it work in CI/CD? | Yes. Cache `results/.rag/` between CI runs to persist memory. |
| How do I share memory across a team? | Commit `results/.rag/` to Git or use a CI cache step. |
| How do I reset memory? | Delete the `results/.rag/` folder. |
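
The FAQ's CI caching advice can be expressed in GitHub Actions with the standard cache action (a sketch: the cache key naming is illustrative):

```yaml
- name: Persist Assuremind RAG memory
  uses: actions/cache@v4
  with:
    path: results/.rag
    key: assuremind-rag-${{ github.ref_name }}
    restore-keys: assuremind-rag-
```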

### When to use Settings → RAG Memory

Most users never need to touch RAG settings. The Settings card exists for power-user scenarios:

| Scenario | Action |
|----------|--------|
| Debugging a flaky test | Turn OFF Code Corpus — forces fresh AI generation |
| Healing keeps suggesting a bad fix | Turn OFF Healing Corpus — clears the bad fix's influence |
| Major app redesign | Turn OFF RAG entirely — old memory is now misleading |
| Error warnings are outdated | Turn OFF Error Catalog — stops avoiding selectors that are fine now |
| Want deterministic CI runs | Disable RAG in the CI config, keep it ON locally |

---

## Self-Healing

When your app's UI changes (button renamed, element moved, DOM restructured), tests break. Instead of failing immediately, Assuremind automatically attempts to fix the broken selector through a 5-level cascade — **fully automated, no manual intervention needed**:

| Level | What happens | Example | AI Cost |
|-------|-------------|---------|---------|
| 1 | **Smart retry** — waits for the element with exponential backoff | Element was loading slowly; retry finds it after 2s | Free |
| 2 | **AI regeneration** — AI rewrites the Playwright code using current page context | `#login-btn` removed → AI generates `page.getByRole('button', { name: 'Sign In' })` | 1 call |
| 3 | **Multi-selector** — AI generates 5 alternative selector strategies, tries each | Tries role → label → placeholder → text → CSS until one works | 1 call |
| 4 | **Visual analysis** — takes a screenshot, AI visually locates the element | Button has no text/role but AI sees it in the screenshot | 1 call |
| 5 | **Decompose** — breaks the failing step into 3-5 simpler micro-actions | "Fill login form and submit" → separate fill email + fill password + click submit | 1 call |

If all 5 levels fail, the step is marked **failed** and saved to the healing report for your review.
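
Control flow of the cascade can be sketched as a short loop (hypothetical: the real healing engine is internal to Assuremind, and each level stub here stands in for the retry, regeneration, multi-selector, visual, or decompose logic):

```javascript
// Try each healing level in order; stop at the first one that produces a fix.
async function heal(step, levels) {
  for (const level of levels) {
    const fixedCode = await level.attempt(step); // fixed Playwright code, or null
    if (fixedCode) return { healed: true, level: level.name, code: fixedCode };
  }
  return { healed: false }; // all levels exhausted: step is reported as failed
}
```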

### How you use it

1. **During test runs** — healing happens automatically. If Level 2 fixes a broken `#login-btn`, your test **passes** and continues.
2. **After the run** — healed selectors are saved as **pending suggestions** (not auto-applied to source files).
3. **Review & accept** — in Studio → **Self-Healing** page, or from the CLI:
   ```bash
   npx assuremind apply-healing        # interactive review: accept/reject each fix
   npx assuremind apply-healing --yes  # accept all in CI
   ```
4. **Accepted fixes** are written back to your `.test.json` files — the next run uses the healed code permanently.

> **CI/CD tip:** Add `npx assuremind apply-healing --yes` as a post-test step so healed selectors are committed back automatically. Enable `healing.autoPR` in Settings to auto-create a GitHub PR with the fixes.

---

## CI/CD

```yaml
# GitHub Actions
- name: Run tests
  env:
    AI_PROVIDER: google
    GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
    GOOGLE_MODEL: gemini-2.5-pro
  run: npx assuremind run --all --ci
```

Also supports **GitLab CI** and **Jenkins** — or use the built-in **CI Config Generator** in Studio (Run Config → Generate CI Config).

Exit code `0` = all passed · `1` = failures.
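
A GitLab CI equivalent of the step above might look like this (a sketch: the image tag is illustrative, and `GOOGLE_API_KEY` should be supplied as a masked CI/CD variable rather than written into the file):

```yaml
# .gitlab-ci.yml
e2e:
  image: mcr.microsoft.com/playwright:v1.48.0-jammy  # any image with Node >= 18 works
  variables:
    AI_PROVIDER: google
    GOOGLE_MODEL: gemini-2.5-pro
  script:
    - npm ci
    - npx assuremind run --all --ci
```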

---

## Documentation

| Resource | Location |
|----------|----------|
| Getting started | [docs/GETTING-STARTED.md](docs/GETTING-STARTED.md) |
| Studio walkthrough | [docs/STUDIO.md](docs/STUDIO.md) |
| CLI reference | [docs/CLI-REFERENCE.md](docs/CLI-REFERENCE.md) |
| Contributing | [CONTRIBUTING.md](CONTRIBUTING.md) |
| All AI providers | `.env.example` |
| Built-in docs | Studio → **Docs** page |

---

## License

MIT — see [LICENSE](LICENSE) for details.

---

*Built by DVH*