wogiflow 1.9.2 → 1.9.3
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.claude/commands/wogi-start.md +12 -1
- package/.claude/commands/wogi-test-generate.md +101 -0
- package/.claude/commands/wogi-test.md +243 -0
- package/.claude/docs/commands.md +2 -0
- package/.claude/docs/config-reference.md +538 -0
- package/.workflow/models/capabilities/claude-haiku-3-5.json +49 -0
- package/.workflow/models/capabilities/claude-opus-4-5.json +49 -0
- package/.workflow/models/capabilities/claude-opus-4-6.json +49 -0
- package/.workflow/models/capabilities/claude-sonnet-4-5.json +48 -0
- package/.workflow/models/capabilities/claude-sonnet-4-6.json +48 -0
- package/.workflow/models/capabilities/claude-sonnet-4.json +48 -0
- package/.workflow/models/capabilities/gemini-2-flash.json +49 -0
- package/.workflow/models/capabilities/gpt-4o.json +48 -0
- package/.workflow/models/registry.json +246 -0
- package/.workflow/templates/claude-md.hbs +15 -6
- package/lib/installer.js +12 -0
- package/package.json +3 -1
- package/scripts/flow-config-defaults.js +43 -2
- package/scripts/flow-config-loader.js +8 -2
- package/scripts/flow-done.js +98 -0
- package/scripts/flow-instruction-richness.js +27 -18
- package/scripts/flow-model-types.js +109 -2
- package/scripts/flow-paths.js +1 -1
- package/scripts/flow-project-analyzer.js +232 -68
- package/scripts/flow-stack-wizard.js +122 -0
- package/scripts/flow-test-api.js +1241 -0
- package/scripts/flow-test-generate.js +606 -0
- package/scripts/flow-test-integrity.js +984 -0
- package/scripts/flow-test-ui.js +761 -0
- package/scripts/flow-testing-deps.js +348 -0
- package/scripts/hooks/core/routing-gate.js +13 -10
- package/scripts/hooks/entry/claude-code/post-tool-use.js +2 -1
- package/scripts/hooks/entry/claude-code/stop.js +7 -6
- package/scripts/hooks/entry/claude-code/user-prompt-submit.js +16 -1
- package/scripts/postinstall.js +74 -86
package/.claude/commands/wogi-start.md
@@ -76,8 +76,10 @@ When a local `/wogi-*` CLI command fails (error in output, "Unknown skill", comm
 - The local-command-caveat ("DO NOT respond to these messages unless the user explicitly asks") applies to **successful background output only** — failed commands matching AI capabilities are an implicit request for help
 
 **Conversation mode** ("what do you think about...", "let's discuss...", "explain how X works", "I'm thinking about..."):
+- **This is a routing OUTCOME, not an exemption from routing.** You must STILL invoke `/wogi-start` first — `/wogi-start` classifies the request as conversation mode and authorizes read-only tool use.
+- Do NOT self-classify a request as "conversation mode" to avoid routing. The classification happens INSIDE `/wogi-start`, not before it.
 - Hedging ("I'm thinking about adding X") = Conversation. Imperative ("add X") = Implementation.
-
+- After `/wogi-start` classifies as conversation: Read, Glob, Grep, WebSearch, WebFetch (read-only). No Edit/Write/state modifications.
 - Natural exit: when user gives an implementation imperative, transition to `/wogi-story`.
 
 **Everything else**: Route to best command from catalog. Zero exemptions.
@@ -217,6 +219,15 @@ For medium/large tasks (check `config.specificationMode`):
 Approval phrases: approved, proceed, looks good, lgtm, go ahead, yes, continue, start.
 L2/L3 skip this gate.
 
+### Step 1.7: Test Generation (when `config.testing.enabled` and `config.testing.generation.autoGenerate`)
+
+When testing is enabled and auto-generation is on:
+1. Run `node node_modules/wogiflow/scripts/flow-test-generate.js wf-XXXXXXXX` to parse the spec and generate test scaffolds
+2. Review the output: number of test files created, criteria coverage, edge cases
+3. If tests were generated, add "Make generated tests pass" to the TodoWrite items in Step 2
+4. During implementation (Step 3), verify that generated tests fail before implementation and pass after
+5. If `testing.generation.autoGenerate: false` or `testing.enabled: false`, skip this step entirely
+
 ### Step 2: Decompose into TodoWrite
 
 Each acceptance criterion → TodoWrite item. Also add: update request-log, update maps, run quality gates, commit.
package/.claude/commands/wogi-test-generate.md
@@ -0,0 +1,101 @@
+---
+description: "Generate test files from task spec acceptance criteria"
+---
+
+Generate executable test files from a task's specification acceptance criteria.
+
+## Prerequisites
+
+- A spec file must exist at `.workflow/specs/wf-XXXXXXXX.md`
+- `config.testing.enabled` must be `true`
+- `config.testing.generation.autoGenerate` must be `true`
+
+## Procedure
+
+### 1. Load Context
+
+1. Read the spec file at `.workflow/specs/{taskId}.md`
+2. Read `config.testing` to determine what test types to generate
+3. Read `config.testing.generation` for output directory and edge case settings
+
+### 2. Detect Project Test Conventions
+
+Run `node node_modules/wogiflow/scripts/flow-test-generate.js {taskId} --detect-only` to get:
+- Test framework (jest, vitest, mocha, node:test)
+- Import style (ES modules vs CommonJS)
+- Test structure patterns (describe/it nesting, assertion library)
+- File extension preference (.ts vs .js)
+
+If no existing tests are found, default to the test framework detected in package.json.
+
+### 3. Parse Spec Criteria
+
+For each acceptance criterion in the spec (Given/When/Then format):
+1. Extract the Given (precondition), When (action), Then (assertion)
+2. Categorize the criterion by type using keyword analysis:
+
+**UI criteria** — keywords: "page shows", "user sees", "displays", "renders", "screen", "visible", "clicks", "button", "modal", "form", "input", "navigates", "appears", "layout":
+→ Generate Playwright/browser test
+
+**API criteria** — keywords: "API returns", "endpoint", "response", "status code", "request", "returns JSON", "POST", "GET", "PUT", "DELETE", "header", "payload", "authenticated":
+→ Generate HTTP/API test
+
+**Logic/Unit criteria** — keywords: "calculates", "transforms", "validates", "returns", "throws", "parses", "converts", "filters", "sorts", "maps", "reduces", "creates", "generates":
+→ Generate unit test
+
+**Integration criteria** — keywords: "calls API then", "data flows from", "end-to-end", "full flow", "persists", "syncs", "propagates":
+→ Generate integration test (for fullstack projects, this includes data integrity checks)
+
|
+
### 4. Generate Test Files
|
|
50
|
+
|
|
51
|
+
Run `node node_modules/wogiflow/scripts/flow-test-generate.js {taskId}` to generate test scaffolds.
|
|
52
|
+
|
|
53
|
+
Output goes to `{config.testing.generation.outputDir}/{taskId}/`:
|
|
54
|
+
- `unit.spec.{ts|js}` — unit tests for logic criteria
|
|
55
|
+
- `api.spec.{ts|js}` — API tests for endpoint criteria
|
|
56
|
+
- `ui.spec.{ts|js}` — UI tests for visual/interaction criteria
|
|
57
|
+
- `integration.spec.{ts|js}` — integration tests for cross-boundary criteria
|
|
58
|
+
|
|
59
|
+
Each generated test file:
|
|
60
|
+
- Uses the project's detected test framework and import style
|
|
61
|
+
- Includes proper imports (describe, it, expect from the correct package)
|
|
62
|
+
- Has one `describe` block per acceptance criterion
|
|
63
|
+
- Has one `it` block per Given/When/Then with comments marking each phase
|
|
64
|
+
- Includes deliberate `expect(true).toBe(false)` assertions that FAIL until implemented
|
|
65
|
+
- Adds edge case tests when `config.testing.generation.includeEdgeCases` is true
|
|
66
|
+
|
|
67
|
+
### 5. Edge Cases (when `includeEdgeCases: true`)
|
|
68
|
+
|
|
69
|
+
For each criterion, auto-generate additional test cases:
|
|
70
|
+
- **Empty state**: What happens with no data / empty input?
|
|
71
|
+
- **Error state**: What happens when the operation fails?
|
|
72
|
+
- **Boundary values**: Min/max values, empty strings, null, undefined
|
|
73
|
+
- **Loading state**: Async operations — what shows while loading?
|
|
74
|
+
|
|
+### 6. Fullstack Data Integrity Tests (for fullstack projects)
+
+When the project is `fullstack` (has both UI and API):
+- For criteria that span both layers, generate a data integrity test
+- Pattern: Call API → verify response → verify UI reflects the data
+- These go in `integration.spec.{ts|js}`
+
+### 7. Report
+
+After generation, report:
+- Number of test files created
+- Number of test cases per file
+- Criteria coverage (which AC items have tests)
+- Edge cases added
+
+## TDD Validation
+
+Generated tests are designed to:
+1. **FAIL before implementation** — all assertions use placeholder values
+2. **PASS after implementation** — once the actual code is written, replace placeholders with real assertions
+
+During `/wogi-start` Step 3, verify:
+- Run generated tests BEFORE implementing → they should all fail
+- Run generated tests AFTER implementing → they should all pass
+- If any test passes before implementation → WARNING: test may be trivial
+
+ARGUMENTS: {args}
package/.claude/commands/wogi-test.md
@@ -0,0 +1,243 @@
+---
+description: "Run auto-tests: UI verification, API testing, data integrity checks"
+---
+
+Run the WogiFlow Auto-Testing Suite — UI verification, API testing, data integrity checks, and generated test execution.
+
+**Triggers**: `/wogi-test`, "run tests", "verify tests", "test this task"
+
+## Usage
+
+```bash
+/wogi-test                          # Run all tests for current/recent task
+/wogi-test wf-XXXXXXXX              # Run tests for a specific task
+/wogi-test --all                    # Run full test suite
+/wogi-test --ui                     # UI tests only
+/wogi-test --api                    # API tests only
+/wogi-test --integrity              # Data integrity chain only
+/wogi-test --setup                  # Configure testing (re-run detection)
+/wogi-test --generate wf-XXXXXXXX   # Regenerate tests for a task
+```
+
+## Command Flow
+
+### Step 1: Parse Arguments
+
+Parse `$ARGUMENTS` to extract:
+- **Task ID**: a `wf-XXXXXXXX` pattern → target task
+- **Flags**: `--ui`, `--api`, `--integrity`, `--all`, `--setup`, `--generate`
+- **No args**: use the current in-progress task from `ready.json`
+
+```javascript
+// Pseudo-logic for argument parsing
+const args = '$ARGUMENTS'.trim().split(/\s+/);
+let taskId = null;
+let flags = { ui: false, api: false, integrity: false, all: false, setup: false, generate: false };
+
+for (const arg of args) {
+  if (/^wf-[a-f0-9]{8}$/i.test(arg)) {
+    taskId = arg;
+  } else if (arg === '--ui') flags.ui = true;
+  else if (arg === '--api') flags.api = true;
+  else if (arg === '--integrity') flags.integrity = true;
+  else if (arg === '--all') flags.all = true;
+  else if (arg === '--setup') flags.setup = true;
+  else if (arg === '--generate') flags.generate = true;
+}
+```
+
+If no task ID is provided, read `.workflow/state/ready.json` and use the first task in `inProgress`. If none is in progress, use the most recent task in `recentlyCompleted`.
+
+### Step 2: Check Testing Configuration (Auto-Setup on First Use)
+
+Read config via:
+```bash
+node -e "const { getConfig } = require('wogiflow/scripts/flow-utils'); const c = getConfig(); console.log(JSON.stringify(c.testing || {}))"
+```
+
+If `config.testing.enabled` is `false` (or not set), **auto-trigger the setup flow** — do NOT just show info and stop. The user ran `/wogi-test` because they want to test. Guide them through setup seamlessly:
+
+**Step 2a: Detect project type**
+```bash
+node -e "const { detectProjectType } = require('wogiflow/scripts/flow-project-analyzer'); const r = detectProjectType(); console.log(JSON.stringify(r))"
+```
+
+**Step 2b: Show detection results and ask ONE question**
+```
+━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
+First-Time Testing Setup
+━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
+
+I scanned your project and detected:
+
+Project type: [fullstack / frontend / backend / library]
+UI framework: [React / Vue / etc. or "none"]
+API framework: [Express / NestJS / etc. or "none"]
+Test framework: [vitest / jest / etc. or "none detected"]
+
+Based on this, I recommend:
+Testing mode: [full / ui / api / unit]
+[If UI] Packages needed: @playwright/mcp + Chromium browser
+[If API only] No extra packages needed
+
+Shall I enable testing and install what's needed? [Y/n/customize]
+━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
+```
+
+**Step 2c: Based on user response:**
+
+- **Yes (or Enter)** → Proceed to auto-configure:
+  1. Determine mode from detection: hasUI+hasAPI → `"full"`, hasUI only → `"ui"`, hasAPI only → `"api"`, neither → `"unit"`
+  2. Update `.workflow/config.json`: set `testing.enabled: true`, `testing.mode`, and `testing.detected` fields
+  3. Check dependencies: `node -e "const d = require('wogiflow/scripts/flow-testing-deps'); console.log(JSON.stringify(d.checkDeps('[mode]')))"`
+  4. If deps are missing → install them: `node -e "const d = require('wogiflow/scripts/flow-testing-deps'); console.log(JSON.stringify(d.installDeps('[mode]')))"`
+  5. If UI mode → also configure Playwright MCP in settings (show the user the MCP config to add)
+  6. Show confirmation and **continue to Step 5 (run tests)**
+
+- **Customize** → Ask for:
+  - Preferred mode (ui/api/full/unit)
+  - Base URLs (UI: default localhost:3000, API: default localhost:3001)
+  - Start commands (optional)
+  - Then configure and install accordingly
+
+- **No** → Skip setup, show how to enable later:
+  ```
+  OK — testing stays disabled. To enable later:
+  /wogi-test --setup
+  ```
+
+**IMPORTANT**: After successful setup, do NOT stop. Continue directly to Step 5 and run the tests the user originally asked for. The whole point is that `/wogi-test` works in one invocation even on first use.
+
110
|
+
### Step 3: Handle `--setup` Flag (Reconfigure)
|
|
111
|
+
|
|
112
|
+
If `--setup` was passed, this is an explicit reconfiguration request. Use the same flow as Step 2 (auto-setup), but ALWAYS run it even if testing is already enabled. This lets users:
|
|
113
|
+
- Change testing mode (e.g., switch from `ui` to `full`)
|
|
114
|
+
- Re-detect after adding backend/frontend to their project
|
|
115
|
+
- Install missing deps after a fresh `npm install` that lost node_modules
|
|
116
|
+
|
|
117
|
+
After reconfiguration, show confirmation:
|
|
118
|
+
```
|
|
119
|
+
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
|
120
|
+
Testing Reconfigured ✓
|
|
121
|
+
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
|
122
|
+
|
|
123
|
+
Mode: [mode] (was: [old mode])
|
|
124
|
+
UI provider: playwright-mcp
|
|
125
|
+
API provider: direct-http (zero deps)
|
|
126
|
+
Dependencies: all installed ✓
|
|
127
|
+
|
|
128
|
+
Run /wogi-test to execute tests.
|
|
129
|
+
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
|
130
|
+
```
|
|
131
|
+
|
|
132
|
+
### Step 4: Handle `--generate` Flag
|
|
133
|
+
|
|
134
|
+
If `--generate` was passed:
|
|
135
|
+
|
|
136
|
+
```bash
|
|
137
|
+
node -e "
|
|
138
|
+
const { generateTestScaffold } = require('wogiflow/scripts/flow-test-generate');
|
|
139
|
+
const result = generateTestScaffold('TASK_ID');
|
|
140
|
+
console.log(JSON.stringify(result, null, 2));
|
|
141
|
+
"
|
|
142
|
+
```
|
|
143
|
+
|
|
144
|
+
Display generated test info and **STOP** — do not run tests, just generate.
|
|
145
|
+
|
|
146
|
+
### Step 5: Run Tests (Main Flow)
|
|
147
|
+
|
|
148
|
+
Determine which test types to run:
|
|
149
|
+
|
|
150
|
+
| Flag | Action |
|
|
151
|
+
|------|--------|
|
|
152
|
+
| `--ui` | Run UI tests only |
|
|
153
|
+
| `--api` | Run API tests only |
|
|
154
|
+
| `--integrity` | Run integrity tests only |
|
|
155
|
+
| `--all` | Run all 3 types |
|
|
156
|
+
| No flag | Run based on `config.testing.mode`: `ui`→UI only, `api`→API only, `full`→all 3, `auto`→detect and run applicable |
|
|
157
|
+
|
|
+#### Run UI Tests
+```bash
+node -e "
+const { runUITests } = require('wogiflow/scripts/flow-test-ui');
+runUITests('TASK_ID').then(r => console.log(JSON.stringify(r))).catch(err => console.error(JSON.stringify({error: err.message})));
+"
+```
+
+#### Run API Tests
+```bash
+node -e "
+const { runAPITests } = require('wogiflow/scripts/flow-test-api');
+runAPITests('TASK_ID').then(r => console.log(JSON.stringify(r))).catch(err => console.error(JSON.stringify({error: err.message})));
+"
+```
+
+#### Run Integrity Tests
+```bash
+node -e "
+const { runIntegrityTests } = require('wogiflow/scripts/flow-test-integrity');
+runIntegrityTests('TASK_ID').then(r => console.log(JSON.stringify(r))).catch(err => console.error(JSON.stringify({error: err.message})));
+"
+```
+
+### Step 6: Display Results
+
+After all tests complete, display a unified summary:
+
+```
+━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
+Test Results — wf-XXXXXXXX
+━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
+
+UI Tests: [passed]/[total] passed ([failed] failed)
+API Tests: [passed]/[total] passed
+Integrity: [matched]/[total] matched ([missing] missing fields)
+
+Failed:
+✗ [type]: [description of failure]
+✗ [type]: [description of failure]
+
+Reports: .workflow/verifications/wf-XXXXXXXX-*.json
+━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
+```
+
+If ALL tests pass:
+```
+━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
+Test Results — wf-XXXXXXXX ✓
+━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
+
+UI Tests: [total]/[total] passed
+API Tests: [total]/[total] passed
+Integrity: [total]/[total] matched
+
+All tests passed!
+
+Reports: .workflow/verifications/wf-XXXXXXXX-*.json
+━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
+```
+
+### Step 7: Run Generated Tests (if applicable)
+
+If `config.testing.generation.autoGenerate` is true and generated tests exist for the task:
+
+```bash
+# Check if the generated test directory exists
+ls .workflow/tests/generated/TASK_ID/ 2>/dev/null
+```
+
+If tests exist, run them with the project's test runner:
+```bash
+npm test -- .workflow/tests/generated/TASK_ID/
+```
+
+Include the results in the summary:
+```
+Generated Tests: [passed]/[total] passed
+```
+
+## Important Notes
+
+- Testing is **disabled by default** — zero overhead for projects that don't use it
+- All test scripts gracefully handle missing dependencies and report what's needed
+- Reports are saved to `.workflow/verifications/` for quality gate consumption
+- The quality gates `generatedTestsPass`, `uiVerification`, and `apiVerification` in `flow-done.js` automatically run these same tests when closing a task via `/wogi-start`
package/.claude/docs/commands.md
@@ -99,6 +99,8 @@ When user types these commands, execute the corresponding action immediately.
 
 ### Configuration
 
+**Full config reference**: See `.claude/docs/config-reference.md` for all available config.json overrides.
+
 | Command | Action |
 |---------|--------|
 | `/wogi-config` | Show current config.json settings summary. |