ai-workflow-init 3.4.0 → 4.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,9 +1,20 @@
+ ---
+ name: generate-standards
+ description: Generates/updates code convention and project structure docs from the codebase.
+ ---
+
  ## Goal

  Generate or update `docs/ai/project/CODE_CONVENTIONS.md` and `PROJECT_STRUCTURE.md` from the current codebase with brief Q&A refinement.

  ## Step 1: Detect Project Context

+ **Tools:**
+ - search for files matching `**/package.json`
+ - search for files matching `**/pyproject.toml`
+ - search for files matching `**/*.config.{js,ts}`
+ - Read(file_path=...) for each config file found
+
  Detect project languages/frameworks/libraries from repository metadata:

  - Analyze `package.json`, `pyproject.toml`, lockfiles, config files
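
The detection pass above is easy to picture as code. A minimal sketch in Node.js, illustrative only and not the package's implementation; it handles just `package.json`, and `detectContext` and the dependency lists are assumptions:

```js
// Illustrative sketch of Step 1 detection (not the package's actual code).
// Handles only package.json; pyproject.toml and lockfiles are omitted for brevity.
const fs = require('node:fs');

function detectContext(pkgPath = 'package.json') {
  if (!fs.existsSync(pkgPath)) return null; // no config found -> ask the user (see error handling)
  let pkg;
  try {
    pkg = JSON.parse(fs.readFileSync(pkgPath, 'utf8'));
  } catch {
    return null; // invalid JSON -> skip this file, continue with others
  }
  const deps = { ...pkg.dependencies, ...pkg.devDependencies };
  return {
    language: 'typescript' in deps ? 'typescript' : 'javascript',
    frameworks: ['react', 'vue', '@angular/core'].filter((d) => d in deps),
    tooling: ['eslint', 'prettier', 'vitest', 'jest'].filter((d) => d in deps),
  };
}

console.log(detectContext());
```
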
@@ -12,10 +23,17 @@ Detect project languages/frameworks/libraries from repository metadata:
  - Identify libraries and tooling (ESLint, Prettier, testing frameworks, etc.)
  - Note any build tools or bundlers in use

+ **Error handling:**
+ - No config files found: Ask user for project type manually
+ - Invalid JSON/TOML: Skip broken file, continue with others
+ - Multiple conflicting configs: Prioritize root-level files
+
  **Output**: The detected project context will be written to `CODE_CONVENTIONS.md` as the foundation for code conventions.

  ## Step 2: Clarify Scope (3–6 questions max)

+ **Tool:** ask user for clarification
+
  Quick classification and targeted questions:

  - Languages/frameworks detected: confirm correct? (a/b/other)
@@ -27,8 +45,22 @@ Quick classification and targeted questions:

  Keep questions short and single-purpose. Stop once sufficient info is gathered.

+ **Error handling:**
+ - User skips questions: Use detected defaults from Step 1 only
+ - Conflicting answers: Ask follow-up clarification
+ - Max 2 retry attempts before proceeding with defaults
+
  ## Step 3: Auto-Discovery

+ **Automated process:**
+ - Use workspace search and analysis to identify:
+   - Import patterns and ordering style
+   - Folder structure organization (feature/layer/mixed)
+   - Test file locations and naming patterns
+   - Common architectural patterns (Repository, Factory, Strategy, etc.)
+ - Return a concise summary with 2–3 examples for each category
+
  Analyze repository to infer:

  - Dominant naming patterns:
@@ -50,26 +82,30 @@ Analyze repository to infer:
  - Strategy patterns
  - Other architectural patterns

+ **Error handling:**
+ - Explore agent timeout: Fall back to quick Grep-based pattern detection
+ - No patterns found: Generate minimal standards template
+ - Inconsistent patterns: Document most frequent pattern as primary
+
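
To make auto-discovery concrete, here is a minimal sketch of dominant-naming-pattern inference; `tallyNamingStyles`, the regexes, and the two-style classification are illustrative assumptions rather than the package's code:

```js
// Illustrative sketch of Step 3 naming-pattern inference (not the package's code).
// Tallies camelCase vs snake_case identifiers and reports the dominant style.
const fs = require('node:fs');

function tallyNamingStyles(files) {
  const counts = { camelCase: 0, snake_case: 0 };
  for (const file of files) {
    const src = fs.readFileSync(file, 'utf8');
    for (const [, name] of src.matchAll(/\b(?:const|let|function)\s+([A-Za-z_$][\w$]*)/g)) {
      if (/^[a-z]+(?:[A-Z][a-z\d]*)+$/.test(name)) counts.camelCase++;
      else if (/^[a-z\d]+(?:_[a-z\d]+)+$/.test(name)) counts.snake_case++;
    }
  }
  // Inconsistent patterns: document the most frequent pattern as primary.
  return Object.entries(counts).sort((a, b) => b[1] - a[1])[0][0];
}

console.log(tallyNamingStyles(['src/index.js'])); // e.g. "camelCase"
```
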
  ## Step 4: Draft Standards

- Generate two documents (with template preload):
+ **Tools:**
+ - search for files matching `docs/ai/project/template-convention/*.md` to find templates
+ - Read(file_path=...) for each matching template
+ - create file `docs/ai/project/CODE_CONVENTIONS.md`
+ - create file `docs/ai/project/PROJECT_STRUCTURE.md`

- ### CODE_CONVENTIONS.md
+ Generate two documents:

- - Template preload (flexible matching based on detected project context):
+ ### CODE_CONVENTIONS.md

- 1. Read all files in `docs/ai/project/template-convention/` directory
- 2. For each template file, determine if it matches the detected project context:
-    - Match by language (e.g., `javascript.md`, `typescript.md`, `python.md`)
-    - Match by framework (e.g., `react.md`, `vue.md`, `angular.md`)
-    - Match by common patterns (e.g., `common.md` always include if present)
- 3. Load and merge all matching templates in a logical order:
-    - `common.md` first (if present)
-    - Language-specific templates (e.g., `javascript.md`, `typescript.md`)
-    - Framework-specific templates (e.g., `react.md`, `vue.md`)
-    - Other relevant templates based on detected tooling/patterns
- 4. These templates take precedence and should appear at the top of the generated document, followed by auto-discovered rules from codebase analysis.
+ **Content order** (see Notes for template matching details):
+ 1. Project context from Step 1
+ 2. User preferences from Step 2
+ 3. Auto-discovered rules from Step 3
+ 4. Matching templates (merged and applied on top)

+ **Sections to include:**
  - Naming conventions (variables, functions, classes, constants)
  - Import order and grouping
  - Formatting tools (ESLint/Prettier/etc.) if detected
@@ -81,62 +117,84 @@ Generate two documents (with template preload):

  ### PROJECT_STRUCTURE.md

- - Folder layout summary:
-   - `src/`: source code organization
-   - `docs/ai/**`: documentation structure
+ **Sections to include:**
+ - Folder layout summary (`src/`, `docs/ai/**`, etc.)
  - Module boundaries and dependency direction
  - Design patterns actually observed in codebase
  - Test placement and naming conventions
  - Config/secrets handling summary

+ **Error handling:**
+ - Template directory not found: Use hardcoded minimal template
+ - Template parse error: Skip broken template, log warning
+ - No matching templates: Generate standards from Step 3 discovery only
+
  ## Step 5: Persist (Update-in-place, Non-destructive)

- - Target files:
-   - `docs/ai/project/CODE_CONVENTIONS.md`
-   - `docs/ai/project/PROJECT_STRUCTURE.md`
- - Add header note: "This document is auto-generated from codebase analysis + brief Q&A. Edit manually as needed."
- - Do NOT blindly overwrite entire files. Update only the managed blocks using markers:
+ **Tools:**
+ - Read(file_path=...) to check existing content
+ - Edit(file_path=..., old_string=..., new_string=...) for updates within managed blocks
+ - Write(file_path=...) if file doesn't exist
+
+ **Target files:**
+ - `docs/ai/project/CODE_CONVENTIONS.md`
+ - `docs/ai/project/PROJECT_STRUCTURE.md`
+
+ **Update strategy:**
+ - Use managed block markers to wrap generated content (see Notes for marker format)
+ - If file exists with markers: Update content between START/END markers only
+ - If file exists without markers: Append managed block to end
+ - If file doesn't exist: Create with header note and managed block
+ - Never modify content outside managed blocks
+
+ **Error handling:**
+ - File write permission denied: Notify user, suggest manual save
+ - File locked by another process: Retry once, then notify user
+ - Invalid marker format in existing file: Append new managed block instead of updating
+
+ ## Step 6: Next Actions
+
+ - Suggest running `code-review` to validate new standards are being followed
+ - Inform user they can manually edit these files anytime
+
+ ## Notes
+
+ ### General Guidelines

- ### Managed Block Markers
+ - Focus on patterns actually present in codebase, not ideal patterns
+ - Keep generated docs concise and actionable
+ - User can refine standards manually after generation

- Use these exact markers to wrap generated content:
+ ### Template Matching Details (Step 4)

+ **Template loading logic:**
+ 1. Glob all files in `docs/ai/project/template-convention/`
+ 2. Match templates by filename:
+    - `common.md` → Always include if present
+    - `javascript.md`, `typescript.md`, `python.md` → Match by language
+    - `react.md`, `vue.md`, `angular.md` → Match by framework
+    - Other files → Match by detected tooling
+ 3. Merge order: `common.md` → language → framework → tooling
+ 4. Templates appear at top of generated doc, followed by auto-discovered rules
+
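
The template loading logic above amounts to a filter-and-sort over filenames. A minimal sketch, assuming the merge order just described; `loadTemplates` and the context shape are invented for illustration:

```js
// Illustrative sketch of the template matching/merge order (not the package's code).
const fs = require('node:fs');
const path = require('node:path');

function loadTemplates(dir, ctx) {
  // Lower rank merges earlier: common -> language -> framework -> tooling.
  const rank = (f) =>
    f === 'common.md' ? 0
    : ctx.languages.includes(path.basename(f, '.md')) ? 1
    : ctx.frameworks.includes(path.basename(f, '.md')) ? 2
    : ctx.tooling.includes(path.basename(f, '.md')) ? 3
    : -1; // no match -> excluded
  return fs.readdirSync(dir)
    .filter((f) => f.endsWith('.md') && rank(f) >= 0)
    .sort((a, b) => rank(a) - rank(b))
    .map((f) => fs.readFileSync(path.join(dir, f), 'utf8'))
    .join('\n\n');
}

const ctx = { languages: ['typescript'], frameworks: ['react'], tooling: ['eslint'] };
console.log(loadTemplates('docs/ai/project/template-convention', ctx));
```
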
+ ### Managed Block Markers (Step 5)
+
+ **Marker format for CODE_CONVENTIONS.md:**
  ```
  <!-- GENERATED: CODE_CONVENTIONS:START -->
  ... generated content ...
  <!-- GENERATED: CODE_CONVENTIONS:END -->
  ```

+ **Marker format for PROJECT_STRUCTURE.md:**
  ```
  <!-- GENERATED: PROJECT_STRUCTURE:START -->
  ... generated content ...
  <!-- GENERATED: PROJECT_STRUCTURE:END -->
  ```

- ### Update Rules
-
- 1. If the target file exists and contains the corresponding markers, replace only the content between START/END with the newly generated content.
- 2. If the file exists but does not contain markers, append a new managed block to the end of the file (preserve all existing manual content).
- 3. If the file does not exist, create it with the header note and a single managed block.
- 4. Never modify content outside the managed blocks.
-
- ### Generated Content Order (for CODE_CONVENTIONS)
-
- 1. **Project Context Section**: Write the detected project context from Step 1 (languages, frameworks, libraries, tooling)
- 2. **Template Rules Section**: Merge all matching templates from `docs/ai/project/template-convention/` directory:
-    - Read all files in the directory
-    - Filter templates that match the detected project context (by language, framework, or common patterns)
-    - Merge in logical order: common → language → framework → tooling-specific
- 3. **Auto-Discovered Rules Section**: Append auto-discovered rules from codebase analysis (Step 3)
- 4. **Project-Specific Rules Section**: Include any project-specific conventions from Q&A (Step 2)
-
- ## Step 6: Next Actions
-
- - Suggest running `code-review` to validate new standards are being followed
- - Inform user they can manually edit these files anytime
-
- ## Notes
-
- - Focus on patterns actually present in codebase, not ideal patterns
- - Keep generated docs concise and actionable
- - User can refine standards manually after generation
+ **Header note template:**
+ ```
+ This document is auto-generated from codebase analysis + brief Q&A.
+ Edit manually as needed.
+ ```
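
The Step 5 update strategy plus the marker format above reduce to a small, idempotent string transformation. A minimal sketch; `upsertManagedBlock` is an illustrative name, not part of this package:

```js
// Illustrative sketch of the Step 5 update rules (not the package's code).
const fs = require('node:fs');

function upsertManagedBlock(file, tag, body) {
  const start = `<!-- GENERATED: ${tag}:START -->`;
  const end = `<!-- GENERATED: ${tag}:END -->`;
  const block = `${start}\n${body}\n${end}`;
  if (!fs.existsSync(file)) {
    // File doesn't exist: create with header note and a single managed block.
    const note = 'This document is auto-generated from codebase analysis + brief Q&A.\nEdit manually as needed.';
    fs.writeFileSync(file, `${note}\n\n${block}\n`);
    return;
  }
  const text = fs.readFileSync(file, 'utf8');
  const i = text.indexOf(start);
  const j = text.indexOf(end);
  if (i !== -1 && j > i) {
    // Markers present: replace only the content between START/END.
    fs.writeFileSync(file, text.slice(0, i) + block + text.slice(j + end.length));
  } else {
    // No (or invalid) markers: append a new managed block, preserve manual content.
    fs.writeFileSync(file, `${text.trimEnd()}\n\n${block}\n`);
  }
}

upsertManagedBlock('docs/ai/project/CODE_CONVENTIONS.md', 'CODE_CONVENTIONS', '... generated content ...');
```
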
@@ -1,3 +1,8 @@
+ ---
+ name: init-chat
+ description: Initializes chat by loading and aligning to AGENTS.md project rules.
+ ---
+
  ## Goal

  Initialize a new chat by loading and aligning to `AGENTS.md` so the AI agent consistently follows project rules (workflow, tooling, communication, language mirroring) from the first message.
@@ -1,24 +1,34 @@
+ ---
+ name: writing-test
+ description: Generates comprehensive test files with edge cases, parameter variations, and coverage analysis.
+ ---
+
  Use `docs/ai/testing/feature-{name}.md` as the source of truth.

  ## Workflow Alignment
+
  - Provide brief status updates (1–3 sentences) before/after important actions.
  - For medium/large tasks, create todos (≤14 words, verb-led). Keep only one `in_progress` item.
  - Update todos immediately after progress; mark completed upon finish.
  - Default to parallel execution for independent operations; use sequential steps only when dependencies exist.

  ## Step 1: Gather Context (minimal)
+
  - Ask for feature name if not provided (must be kebab-case).
  - **Load template:** Read `docs/ai/testing/feature-template.md` to understand required structure.
- - Then locate docs by convention:
-   - Planning: `docs/ai/planning/feature-{name}.md`
-   - Implementation (optional): `docs/ai/implementation/feature-{name}.md`
- - **Detect test framework:** Check `package.json` and project structure to identify test framework (Vitest, Jest, Mocha, pytest, etc.). Auto-detect from `package.json` first; if no test runner found, create skeleton test and report missing runner.
- - **Load standards:** Read `docs/ai/project/PROJECT_STRUCTURE.md` for test file placement rules
- - **Load conventions:** Read `docs/ai/project/CODE_CONVENTIONS.md` for coding standards
+ - **Load planning doc:** Read `docs/ai/planning/feature-{name}.md` for:
+   - Acceptance criteria (test alignment)
+   - Implementation plan (files to test)
+   - Functions/components that need coverage
+ - **Detect test framework:** Check `package.json` to identify test framework (Vitest, Jest, Mocha, pytest, etc.). If no test runner found, create skeleton test and report missing runner.
+ - **Load standards:**
+   - `docs/ai/project/PROJECT_STRUCTURE.md` for test file placement rules
+   - `docs/ai/project/CODE_CONVENTIONS.md` for coding standards

- Always align test cases with acceptance criteria from the planning doc. If implementation notes are missing, treat planning as the single source of truth.
+ **Source of truth:** Planning doc acceptance criteria + actual source code implementation.

  ## Step 2: Scope (logic and component tests only)
+
  - **Focus on:**
    - Pure function logic: test with various input parameters (happy path, edge cases, invalid inputs)
    - Component logic: test component behavior with different props/parameters (without complex rendering)
@@ -34,161 +44,74 @@ Always align test cases with acceptance criteria from the planning doc. If imple
  - E2E flows or end-to-end user journey tests
  - Tests requiring external API calls, database connections, or network mocking
  - Performance/load testing
+ - **Dependency isolation (mocking):**
+   - Mock external dependencies (API clients, database connections, file system)
+   - Mock imported utilities/hooks if they have side effects
+   - Keep internal logic unmocked (test actual implementation)
+   - Use test framework's mocking utilities (vi.mock, jest.mock, unittest.mock)
  - Keep tests simple, fast, and deterministic.
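
As an illustration of the dependency-isolation rule, a minimal Vitest sketch follows; the module paths, `fetchUser`, and `getUserName` are hypothetical:

```js
// Hypothetical example of dependency isolation with vi.mock (paths are illustrative).
import { describe, it, expect, vi } from 'vitest';

// Mock the external API client; internal logic stays unmocked.
vi.mock('./api-client', () => ({
  fetchUser: vi.fn().mockResolvedValue({ id: 1, name: 'Ada' }),
}));

import { getUserName } from './get-user-name'; // internally calls fetchUser()

describe('getUserName', () => {
  it('returns the name from the mocked client', async () => {
    await expect(getUserName(1)).resolves.toBe('Ada');
  });
});
```
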

- ## Step 3: Analyze Code to Test
- Read implementation files from `docs/ai/implementation/feature-{name}.md` or scan source code files:
- - Identify all functions/components that need testing:
-   - List each function with its signature (parameters, return type, exceptions)
-   - List each component with its props interface
-   - List each class with its methods and properties
- - For each function/component, identify:
-   - Input parameter types and possible values
-   - Expected outputs for different inputs
-   - Edge cases (null, undefined, empty, boundaries, min/max values)
-   - Error cases (invalid inputs, exceptions, error messages)
-   - Control flow branches (if/else, switch cases, loops)
-   - Mathematical properties (if applicable): commutativity, associativity, idempotency, invariants
-
- ## Step 4: Generate Test Code (automatic)
- For each function/component identified:
-
- 1. **Create test file** according to `PROJECT_STRUCTURE.md`:
-    - Colocated `*.spec.*` or `*.test.*` files with source, OR
-    - Under `__tests__/` or `tests/` mirroring source structure
-    - Follow naming conventions from `CODE_CONVENTIONS.md`
-
- 2. **Write complete test code** with:
-    - Test framework setup (imports, describe blocks, test utilities)
-    - Multiple test cases per function/component:
-      - **Happy path**: normal parameters → expected output
-      - **Edge cases**: empty/null/undefined inputs, boundary values (min/max, 0, -1, empty strings/arrays)
-      - **Parameter combinations**: systematically test different combinations of parameters (cartesian product when practical)
-      - **Error cases**: invalid inputs, exceptions, error messages
-      - **Type safety**: wrong types, missing properties, extra properties
-      - **Property-based**: verify mathematical properties if applicable
-    - Clear, descriptive test names describing what is being tested
-    - Assertions using appropriate test framework syntax
-
- 3. **Focus on logic testing:**
-    - For functions: test return values, side effects (if any), error handling
-    - For components: test output/logic with different props (avoid complex rendering - use lightweight snapshots or logic checks)
-    - For classes: test methods independently with various inputs
-    - Test parameter combinations systematically
-
- 4. **Generate edge cases systematically:**
-    - For each parameter, generate: null, undefined, empty, zero, negative, max/min values
-    - Test type mismatches: wrong types, missing required properties
-    - Test boundary conditions: off-by-one errors, overflow/underflow
-
- 5. **Property-based testing (when applicable):**
-    - Identify mathematical/algorithmic properties
-    - Generate property-based tests: commutativity, associativity, idempotency, invariants
-    - Test with random inputs following constraints
-
- **Example test structure:**
- ```javascript
- import { describe, it, expect } from 'vitest';
- import { functionToTest } from './source-file';
-
- describe('functionToTest', () => {
-   it('should return correct value with normal parameters', () => {
-     // Test with standard inputs
-   });
-
-   it('should handle edge case with empty input', () => {
-     // Test boundary
-   });
-
-   it('should handle null input', () => {
-     // Test null handling
-   });
-
-   it('should handle different parameter combinations', () => {
-     // Test various param combinations
-   });
-
-   it('should throw error with invalid input', () => {
-     // Test error handling
-   });
-
-   it('should maintain idempotency property', () => {
-     // Property-based test
-   });
- });
- ```
-
- ## Step 5: Run Tests with Logging (automatic)
- After generating test code:
-
- 1. **Execute test command** based on detected framework (use non-interactive flags):
-    - Vitest: `npm test` or `npx vitest run --run`
-    - Jest: `npm test` or `npx jest --ci`
-    - Mocha: `npm test` or `npx mocha --reporter spec`
-    - pytest: `pytest` or `python -m pytest -v`
-    - Other: detect from `package.json` scripts, add `--no-interactive` or equivalent flags
-
- 2. **Capture and display output** in terminal with detailed logging:
-    - Show test execution progress (which tests are running)
-    - Show test results (pass/fail) for each test case with execution time
-    - Show test summary (total tests, passed, failed, skipped)
-    - Show any error messages, stack traces, or assertion failures for failures
-    - Show coverage report if available (lines, branches, functions coverage)
-    - Format output clearly for readability
-
- 3. **Log format example:**
-    ```
-    === Running tests for feature: {name} ===
-
-    RUN test-file-1.spec.js
-    ✓ should handle normal case (2ms)
-    ✓ should handle edge case (1ms)
-    ✗ should handle invalid input (3ms)
-
-    Error: Expected error to be thrown
-      at test-file-1.spec.js:25
-
-    ✓ should handle parameter combinations (5ms)
-    ✓ should maintain property (1ms)
-
-    Test Files: 1 passed | 0 failed | 1 total
-    Tests: 4 passed | 1 failed | 5 total
-    Time: 0.012s
-
-    Coverage:
-    - Lines: 82% (lines covered / total lines)
-    - Branches: 75% (branches covered / total branches)
-    - Functions: 90% (functions covered / total functions)
-    ```
-
- 4. **If tests fail:**
-    - Analyze failure reason from logs
-    - Fix test code or implementation as needed
-    - Re-run tests until all pass
-    - Update implementation doc if code changes were needed
-
- ## Step 6: Coverage Gap Analysis (automatic)
- After initial test generation and execution:
-
- 1. **Analyze coverage report:**
-    - Identify untested branches/conditions
-    - Identify untested lines/paths
-    - Identify untested functions/methods
-
- 2. **Generate additional tests automatically:**
-    - For each uncovered branch: create test case
-    - For each uncovered line: ensure it's reachable by existing tests or add new test
-    - For each uncovered function: create test cases
-
- 3. **Re-run tests and verify coverage:**
-    - Execute tests again with coverage
-    - Verify coverage targets are met (default: 80% lines, 70% branches)
-    - Continue generating tests until targets are met or all practical cases are covered
-
- ## Step 7: Update Testing Doc
+ ## Step 3: Analyze Code & Generate Tests (automatic)
+
+ **Analyze implementation:**
+ - Read files from `docs/ai/implementation/feature-{name}.md` or scan source
+ - Identify functions/components: signatures, parameters, return types, exceptions
+ - Map: input types → expected outputs → edge cases → error cases
+
+ **Generate test files:**
+ - Create test file per `PROJECT_STRUCTURE.md` (colocated `*.spec.*` or `__tests__/`)
+ - Follow naming conventions from `CODE_CONVENTIONS.md`
+
+ **Test coverage strategy:**
+ - **Happy path**: normal parameters → expected output
+ - **Edge cases**: null, undefined, empty, boundaries (min/max, 0, -1)
+ - **Parameter combinations**: systematically test input variations
+ - **Error handling**: invalid inputs, exceptions, error messages
+ - **Type safety**: wrong types, missing properties
+ - **Property-based** (if applicable): commutativity, associativity, idempotency
+
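
The coverage strategy above maps naturally onto table-driven tests. A minimal Vitest sketch; `clamp` is a stand-in function invented for illustration:

```js
// Hypothetical table-driven edge-case test (the function under test is invented).
import { describe, it, expect } from 'vitest';

const clamp = (n, min, max) => Math.min(Math.max(n, min), max);

describe('clamp', () => {
  it.each([
    [5, 0, 10, 5],   // happy path
    [-1, 0, 10, 0],  // below min
    [11, 0, 10, 10], // above max
    [0, 0, 10, 0],   // boundary value
  ])('clamp(%i, %i, %i) -> %i', (n, min, max, expected) => {
    expect(clamp(n, min, max)).toBe(expected);
  });
});
```
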
+ **For complex test suites (>15 test cases):**
+ - Use Task tool with `subagent_type='general-purpose'`
+ - Task prompt: "Generate comprehensive test cases for [function] covering happy path, edge cases, error handling, and parameter combinations"
+
+ ## Step 4: Run Tests with Logging (automatic)
+
+ **Execute test command** (non-interactive):
+ - Vitest: `npx vitest run --passWithNoTests`
+ - Jest: `npx jest --ci --passWithNoTests`
+ - Mocha: `npx mocha --reporter spec`
+ - pytest: `pytest -v --tb=short`
+
+ **Display output:**
+ - Test execution progress and results (pass/fail, timing)
+ - Test summary (total/passed/failed/skipped)
+ - Error messages and stack traces for failures
+ - Coverage report (lines, branches, functions %)
+
+ **If tests fail:**
+ - Analyze failure reason (test logic vs implementation bug)
+ - **Fix test logic** if test expectations are incorrect
+ - **Report implementation bug** if source code violates acceptance criteria (do NOT auto-fix source code)
+ - Re-run tests after fixing test logic
+ - Update testing doc with results
+
+ ## Step 5: Coverage Gap Analysis (automatic)
+
+ **Analyze coverage report:**
+ - Identify untested branches/lines/functions
+
+ **Generate additional tests:**
+ - Create test cases for uncovered branches/lines/functions
+ - Re-run tests with coverage
+ - Verify targets met (default: 80% lines, 70% branches)
+ - Continue until targets met or all practical cases covered
+
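
Coverage targets like these are usually enforced in the runner config rather than re-checked by hand. A hypothetical Vitest sketch, assuming a recent version where thresholds live under `test.coverage.thresholds`; this file is not shipped by the package:

```js
// vitest.config.js — hypothetical config enforcing the coverage targets above.
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    coverage: {
      provider: 'v8', // requires @vitest/coverage-v8
      thresholds: { lines: 80, branches: 70 }, // mirror the defaults above
    },
  },
});
```
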
+ ## Step 6: Update Testing Doc
+
  Use structure from `docs/ai/testing/feature-template.md` to populate `docs/ai/testing/feature-{name}.md`.

  Fill with:
+
  - **Test Files Created**: List all test files generated with paths
  - **Test Cases Summary**: Brief summary of what was tested (function/component + key scenarios)
    - Happy path scenarios
@@ -205,14 +128,17 @@ Fill with:

  Ensure all required sections from template are present. Keep the document brief and actionable.

- ## Step 8: Verify Tests Pass
+ ## Step 7: Verify Tests Pass
+
  - Ensure all generated tests pass successfully
  - Ensure coverage targets are met (default: 80% lines, 70% branches)
- - If any test fails, debug and fix issues
- - Update implementation doc if code changes were needed
- - Re-run tests to confirm fixes
+ - If any test fails:
+   - Debug test logic and fix test expectations
+   - **OR** report implementation bug to user (if source code violates acceptance criteria)
+   - Re-run tests after fixing test logic

  ## Notes
+
  - Tests should be simple enough to run quickly and verify logic correctness
  - Focus on catching logic errors and edge cases through systematic parameter variation
  - Avoid complex test setup or mocking unless necessary for logic isolation
@@ -220,5 +146,10 @@ Ensure all required sections from template are present. Keep the document brief
  - Test execution must show clear, detailed logging in terminal
  - AI excels at: edge case generation, parameter combinations, property-based testing, coverage analysis
  - Keep tests deterministic (avoid external dependencies, random values without seeds, time-dependent logic)
- - Creates test files only; does not edit non-test source files.
- - Idempotent: safe to re-run; appends entries or updates test doc deterministically.
+
+ **Scope boundaries:**
+ - **Creates:** Test files only (`*.spec.*`, `*.test.*`)
+ - **Modifies:** Test files only when fixing test logic
+ - **Does NOT edit:** Non-test source files (implementation code)
+ - **If implementation bug found:** Report to user; do NOT auto-fix source code
+ - Idempotent: safe to re-run; updates test doc deterministically