ai-workflow-init 1.1.0 → 1.1.1

@@ -43,9 +43,8 @@ Then collect inputs (after Q&A):
  ## Step 2: Load Templates
  **Before creating docs, read the following files:**
  - `docs/ai/planning/feature-template.md` - Template structure to follow
- - `docs/ai/testing/feature-template.md` - Template structure for test plan skeleton
 
- These templates define the required structure and format. Use them as the baseline for creating feature docs.
+ This template defines the required structure and format. Use it as the baseline for creating the planning doc.
 
  ## Step 3: Draft the Plan (auto-generate)
  Using the Q&A results and templates, immediately generate the plan without asking for confirmation.
@@ -55,11 +54,10 @@ Auto-name feature:
  - Example: "Login Page (HTML/CSS)" → `feature-login-page`.
  - If a file with the same name already exists, append a numeric suffix: `feature-{name}-2`, `feature-{name}-3`, ...
 
- Create the following files automatically and populate initial content:
+ Create the following file automatically and populate initial content:
  - `docs/ai/planning/feature-{name}.md` - Use structure from `feature-template.md`
- - `docs/ai/testing/feature-{name}.md` - Use structure from testing `feature-template.md` (skeleton)
 
- Do NOT create the implementation file at this step.
+ Do NOT create the implementation or testing files at this step.
 
  Notify the user when done.
 
  Produce a Markdown doc following the template structure:
@@ -69,3 +67,4 @@ Produce a Markdown doc following the template structure:
 
  ## Step 4: Next Actions
  Suggest running `execute-plan` to begin task execution.
+ Note: Test documentation will be created separately using the `writing-test` command.
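
For illustration only (not part of the package diff), a minimal sketch of the auto-naming rule from Step 3 above; the helper name is invented:

```javascript
// Hypothetical helper: "Login Page (HTML/CSS)" -> "feature-login-page"
function featureSlug(title) {
  const name = title
    .toLowerCase()
    .replace(/\(.*?\)/g, '')      // drop parenthetical qualifiers
    .trim()
    .replace(/[^a-z0-9]+/g, '-')  // non-alphanumerics become hyphens
    .replace(/^-+|-+$/g, '');     // trim stray hyphens
  return `feature-${name}`;
}

console.log(featureSlug('Login Page (HTML/CSS)')); // "feature-login-page"
```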
@@ -6,49 +6,211 @@ Use `docs/ai/testing/feature-{name}.md` as the source of truth.
  - Then locate docs by convention:
    - Planning: `docs/ai/planning/feature-{name}.md`
    - Implementation (optional): `docs/ai/implementation/feature-{name}.md`
+ - **Detect test framework:** Check `package.json` and project structure to identify test framework (Vitest, Jest, Mocha, pytest, etc.)
+ - **Load standards:** Read `docs/ai/project/PROJECT_STRUCTURE.md` for test file placement rules
+ - **Load conventions:** Read `docs/ai/project/CODE_CONVENTIONS.md` for coding standards
 
  Always align test cases with acceptance criteria from the planning doc. If implementation notes are missing, treat planning as the single source of truth.
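
As a rough sketch of the framework-detection step added above, reading `package.json` dependencies (the helper name and the fallback order are assumptions, not part of the package):

```javascript
// Hypothetical helper: infer the test framework from package.json.
const fs = require('fs');

function detectTestFramework(pkgPath = './package.json') {
  const pkg = JSON.parse(fs.readFileSync(pkgPath, 'utf8'));
  const deps = { ...pkg.dependencies, ...pkg.devDependencies };
  if (deps.vitest) return 'vitest';
  if (deps.jest) return 'jest';
  if (deps.mocha) return 'mocha';
  return null; // fall back to pkg.scripts.test or project structure
}

console.log(detectTestFramework()); // e.g. "vitest"
```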
 
- ## Step 2: Scope (simple tests only)
- - Focus on pure functions, small utilities, and isolated component logic.
- - Test edge cases and logic branches.
- - **Do NOT** write complex rendering tests, E2E flows, or integration tests requiring heavy setup.
- - Keep tests simple and fast to run via command line.
-
- ## Step 3: Generate Unit Tests
- - List scenarios: happy path, edge cases, error cases.
- - Propose concrete test cases/snippets per main function/module.
- - Ensure tests are deterministic (avoid external IO when possible).
- - Focus on logic validation, not complex UI rendering or user flows.
-
- ## Step 4: Placement & Commands
- - Place tests per `PROJECT_STRUCTURE.md`:
-   - Colocated `*.spec.*` files with source, or
-   - Under `__tests__/` mirroring source structure
- - Provide run commands consistent with project:
-   - Example: `npm test -- --run <pattern>` or language-specific runner
- - Ensure tests can be run individually for quick iteration
- - Keep each test file small and focused.
-
- ## Step 5: Coverage Strategy (lightweight)
- - Suggest coverage command if available (e.g., `npm test -- --coverage`).
- - Default targets (adjust if project-specific):
-   - Lines: 80%
-   - Branches: 70%
- - Highlight files still lacking coverage for critical paths.
-
- ## Step 6: Update Testing Doc
- - Use structure from `docs/ai/testing/feature-template.md` to populate `docs/ai/testing/feature-{name}.md`.
- - Fill cases/snippets/coverage notes following the template sections:
-   - `## Unit Tests`
-   - `## Integration Tests`
-   - `## Manual Checklist`
-   - `## Coverage Targets`
- - Keep the document brief and actionable.
- - Include run commands for quick verification.
- - Ensure all required sections from template are present.
+ ## Step 2: Scope (logic and component tests only)
+ - **Focus on:**
+   - Pure function logic: test with various input parameters (happy path, edge cases, invalid inputs)
+   - Component logic: test component behavior with different props/parameters (without complex rendering)
+   - Utility functions: test return values, error handling, boundary conditions
+   - Parameter variations: systematically test different combinations of parameters
+   - Edge cases: null, undefined, empty values, boundary values
+   - Error handling: invalid inputs, exceptions, error messages
+   - Type safety: type mismatches, missing properties, contract violations
+   - Property-based testing: mathematical properties (commutativity, associativity, idempotency)
+ - **DO NOT write:**
+   - Integration tests between multiple modules/services (requires heavy setup)
+   - Complex UI rendering tests requiring heavy DOM setup
+   - E2E flows or end-to-end user journey tests
+   - Tests requiring external API calls, database connections, or network mocking
+   - Performance/load testing
+ - Keep tests simple, fast, and deterministic.
+
+ ## Step 3: Analyze Code to Test
+ Read implementation files from `docs/ai/implementation/feature-{name}.md` or scan source code files:
+ - Identify all functions/components that need testing:
+   - List each function with its signature (parameters, return type, exceptions)
+   - List each component with its props interface
+   - List each class with its methods and properties
+ - For each function/component, identify:
+   - Input parameter types and possible values
+   - Expected outputs for different inputs
+   - Edge cases (null, undefined, empty, boundaries, min/max values)
+   - Error cases (invalid inputs, exceptions, error messages)
+   - Control flow branches (if/else, switch cases, loops)
+   - Mathematical properties (if applicable): commutativity, associativity, idempotency, invariants
+
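
A naive sketch of the inventory idea in Step 3, using a regex scan over source files (a real pass would more likely read the implementation doc or use an AST parser; this helper is hypothetical):

```javascript
// List exported function names per file, as a starting inventory.
const fs = require('fs');
const path = require('path');

function listExportedFunctions(dir) {
  const found = [];
  for (const file of fs.readdirSync(dir)) {
    if (!/\.(js|ts)$/.test(file)) continue;
    const src = fs.readFileSync(path.join(dir, file), 'utf8');
    for (const m of src.matchAll(/export\s+(?:async\s+)?function\s+(\w+)/g)) {
      found.push({ file, name: m[1] });
    }
  }
  return found;
}

console.log(listExportedFunctions('./src'));
```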
+ ## Step 4: Generate Test Code (automatic)
+ For each function/component identified:
+
+ 1. **Create test file** according to `PROJECT_STRUCTURE.md`:
+    - Colocated `*.spec.*` or `*.test.*` files with source, OR
+    - Under `__tests__/` or `tests/` mirroring source structure
+    - Follow naming conventions from `CODE_CONVENTIONS.md`
+
+ 2. **Write complete test code** with:
+    - Test framework setup (imports, describe blocks, test utilities)
+    - Multiple test cases per function/component:
+      - **Happy path**: normal parameters → expected output
+      - **Edge cases**: empty/null/undefined inputs, boundary values (min/max, 0, -1, empty strings/arrays)
+      - **Parameter combinations**: systematically test different combinations of parameters (cartesian product when practical)
+      - **Error cases**: invalid inputs, exceptions, error messages
+      - **Type safety**: wrong types, missing properties, extra properties
+      - **Property-based**: verify mathematical properties if applicable
+    - Clear, descriptive test names describing what is being tested
+    - Assertions using appropriate test framework syntax
+
+ 3. **Focus on logic testing:**
+    - For functions: test return values, side effects (if any), error handling
+    - For components: test output/logic with different props (avoid complex rendering - use lightweight snapshots or logic checks)
+    - For classes: test methods independently with various inputs
+    - Test parameter combinations systematically
+
+ 4. **Generate edge cases systematically:**
+    - For each parameter, generate: null, undefined, empty, zero, negative, max/min values
+    - Test type mismatches: wrong types, missing required properties
+    - Test boundary conditions: off-by-one errors, overflow/underflow
+
+ 5. **Property-based testing (when applicable):**
+    - Identify mathematical/algorithmic properties
+    - Generate property-based tests: commutativity, associativity, idempotency, invariants
+    - Test with random inputs following constraints
+
+ **Example test structure:**
+ ```javascript
+ import { describe, it, expect } from 'vitest';
+ import { functionToTest } from './source-file';
+
+ describe('functionToTest', () => {
+   it('should return correct value with normal parameters', () => {
+     // Test with standard inputs
+   });
+
+   it('should handle edge case with empty input', () => {
+     // Test boundary
+   });
+
+   it('should handle null input', () => {
+     // Test null handling
+   });
+
+   it('should handle different parameter combinations', () => {
+     // Test various param combinations
+   });
+
+   it('should throw error with invalid input', () => {
+     // Test error handling
+   });
+
+   it('should maintain idempotency property', () => {
+     // Property-based test
+   });
+ });
+ ```
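
To make the "parameter combinations" and "systematic edge cases" items above concrete, a sketch using Vitest's `it.each`; the `clamp(value, min, max)` function under test is invented for the example:

```javascript
import { describe, it, expect } from 'vitest';
import { clamp } from './clamp'; // hypothetical: clamp(value, min, max)

describe('clamp - systematic parameter variation', () => {
  // Representative inputs including boundaries and off-by-one values.
  const cases = [
    [-1, 0],  // below min
    [0, 0],   // at min
    [5, 5],   // inside range
    [10, 10], // at max
    [11, 10], // above max
  ];

  it.each(cases)('clamp(%i, 0, 10) -> %i', (input, expected) => {
    expect(clamp(input, 0, 10)).toBe(expected);
  });

  it('throws on a null value', () => {
    // Assumes clamp validates its input.
    expect(() => clamp(null, 0, 10)).toThrow();
  });
});
```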
+
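
And for the property-based bullet, a sketch using the `fast-check` library (an assumption; the command doc names no property-testing tool), checking idempotency of a hypothetical `normalize` function:

```javascript
import { it } from 'vitest';
import fc from 'fast-check';
import { normalize } from './normalize'; // hypothetical pure function

it('normalize is idempotent', () => {
  fc.assert(
    fc.property(fc.string(), (s) => {
      // Applying normalize twice must equal applying it once.
      return normalize(normalize(s)) === normalize(s);
    })
  );
});
```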
+ ## Step 5: Run Tests with Logging (automatic)
+ After generating test code:
+
+ 1. **Execute test command** based on detected framework:
+    - Vitest: `npm test` or `npx vitest run`
+    - Jest: `npm test` or `npx jest`
+    - Mocha: `npm test` or `npx mocha`
+    - pytest: `pytest` or `python -m pytest`
+    - Other: detect from `package.json` scripts
+
+ 2. **Capture and display output** in terminal with detailed logging:
+    - Show test execution progress (which tests are running)
+    - Show test results (pass/fail) for each test case with execution time
+    - Show test summary (total tests, passed, failed, skipped)
+    - Show any error messages, stack traces, or assertion failures for failures
+    - Show coverage report if available (lines, branches, functions coverage)
+    - Format output clearly for readability
+
+ 3. **Log format example:**
+ ```
+ === Running tests for feature: {name} ===
+
+ RUN test-file-1.spec.js
+   ✓ should handle normal case (2ms)
+   ✓ should handle edge case (1ms)
+   ✗ should handle invalid input (3ms)
+
+     Error: Expected error to be thrown
+       at test-file-1.spec.js:25
+
+   ✓ should handle parameter combinations (5ms)
+   ✓ should maintain property (1ms)
+
+ Test Files: 1 passed | 0 failed | 1 total
+ Tests: 4 passed | 1 failed | 5 total
+ Time: 0.012s
+
+ Coverage:
+ - Lines: 82% (lines covered / total lines)
+ - Branches: 75% (branches covered / total branches)
+ - Functions: 90% (functions covered / total functions)
+ ```
+
+ 4. **If tests fail:**
+    - Analyze failure reason from logs
+    - Fix test code or implementation as needed
+    - Re-run tests until all pass
+    - Update implementation doc if code changes were needed
+
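
A minimal sketch of how steps 1 and 2 might be wired together in Node (the command and arguments are illustrative, not prescribed by the package):

```javascript
// Run the detected test command; inherited stdio streams logs live.
const { spawnSync } = require('child_process');

const result = spawnSync('npx', ['vitest', 'run', '--coverage'], {
  stdio: 'inherit', // progress, failures and coverage print directly
});

process.exit(result.status ?? 1);
```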
+ ## Step 6: Coverage Gap Analysis (automatic)
+ After initial test generation and execution:
+
+ 1. **Analyze coverage report:**
+    - Identify untested branches/conditions
+    - Identify untested lines/paths
+    - Identify untested functions/methods
+
+ 2. **Generate additional tests automatically:**
+    - For each uncovered branch: create test case
+    - For each uncovered line: ensure it's reachable by existing tests or add new test
+    - For each uncovered function: create test cases
+
+ 3. **Re-run tests and verify coverage:**
+    - Execute tests again with coverage
+    - Verify coverage targets are met (default: 80% lines, 70% branches)
+    - Continue generating tests until targets are met or all practical cases are covered
+
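
A sketch of the gap-analysis idea in step 1, assuming the coverage run emits an Istanbul-style `coverage/coverage-summary.json` (the reporter configuration is an assumption):

```javascript
// Flag files whose coverage falls below the default targets.
const fs = require('fs');

const summary = JSON.parse(
  fs.readFileSync('./coverage/coverage-summary.json', 'utf8')
);

for (const [file, metrics] of Object.entries(summary)) {
  if (file === 'total') continue;
  const { lines, branches } = metrics;
  if (lines.pct < 80 || branches.pct < 70) {
    console.log(`${file}: lines ${lines.pct}%, branches ${branches.pct}%`);
  }
}
```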
+ ## Step 7: Update Testing Doc
+ Use structure from `docs/ai/testing/feature-template.md` to populate `docs/ai/testing/feature-{name}.md`.
+
+ Fill with:
+ - **Test Files Created**: List all test files generated with paths
+ - **Test Cases Summary**: Brief summary of what was tested (function/component + key scenarios)
+   - Happy path scenarios
+   - Edge cases covered
+   - Parameter variations tested
+   - Error scenarios handled
+   - Property-based tests (if any)
+ - **Run Command**: Command to execute tests
+ - **Last Run Results**: Summary of test execution
+   - Total/passed/failed count
+   - Coverage percentages (lines, branches, functions)
+   - Any notable findings or issues
+ - **Coverage Targets**: Coverage goals and current status
+
+ Ensure all required sections from template are present. Keep the document brief and actionable.
+
+ ## Step 8: Verify Tests Pass
+ - Ensure all generated tests pass successfully
+ - Ensure coverage targets are met (default: 80% lines, 70% branches)
+ - If any test fails, debug and fix issues
+ - Update implementation doc if code changes were needed
+ - Re-run tests to confirm fixes
 
  ## Notes
- - Tests should be simple enough to run quickly and verify logic correctness.
- - Avoid complex test setup or mocking unless necessary.
- - Focus on catching logic errors and edge cases, not testing frameworks or flows.
+ - Tests should be simple enough to run quickly and verify logic correctness
+ - Focus on catching logic errors and edge cases through systematic parameter variation
+ - Avoid complex test setup or mocking unless necessary for logic isolation
+ - All tests must be automatically runnable via command line
+ - Test execution must show clear, detailed logging in terminal
+ - AI excels at: edge case generation, parameter combinations, property-based testing, coverage analysis
+ - Keep tests deterministic (avoid external dependencies, random values without seeds, time-dependent logic)
@@ -7,36 +7,77 @@ description: Feature test plans and testing guidelines
  # Testing Documentation
 
  ## Purpose
- This directory contains test plans for individual features. These docs focus on simple, fast-running tests to verify logic correctness.
+ This directory contains test plans for individual features. These docs focus on simple, fast-running tests to verify logic correctness. Tests are automatically generated and executed by AI agents.
 
  ## Testing Workflow
 
  ### Creating Test Plans
- Use the `writing-test` command to generate test plans:
+ Use the `writing-test` command to generate test plans and test code:
  - Command: `.cursor/commands/writing-test.md`
  - Output: `docs/ai/testing/feature-{name}.md`
  - Template: `docs/ai/testing/feature-template.md`
 
+ ### Automatic Test Generation
+ The `writing-test` command automatically:
+ 1. **Analyzes code** from implementation notes or source files
+ 2. **Generates test code** with comprehensive test cases:
+    - Happy path scenarios
+    - Edge cases (null, undefined, empty, boundaries)
+    - Parameter combinations
+    - Error handling
+    - Type safety checks
+    - Property-based tests (when applicable)
+ 3. **Runs tests** automatically with detailed logging
+ 4. **Analyzes coverage** and generates additional tests to fill gaps
+ 5. **Updates test documentation** with results and coverage
+
  ### Test Plan Structure
  Each test plan follows the template structure:
- - **Unit Tests**: Simple test cases for functions/components
+ - **Test Files Created**: List of all generated test files
+ - **Unit Tests**: Test cases organized by function/component
    - Happy path scenarios
-   - Edge cases
+   - Edge cases (null, undefined, empty, boundaries)
+   - Parameter variations
    - Error cases
- - **Integration Tests**: Simple component interaction tests (if needed)
- - **Manual Checklist**: Steps for manual verification
- - **Coverage Targets**: Coverage goals and gaps
+   - Type safety checks
+   - Property-based tests
+ - **Run Command**: Command to execute tests
+ - **Last Run Results**: Test execution summary with coverage
+ - **Coverage Targets**: Coverage goals and current status
 
  ### Testing Philosophy
- - **Focus**: Pure functions, small utilities, isolated component logic
+ - **Focus**: Pure functions, small utilities, isolated component logic, parameter variations
  - **Speed**: Tests must run quickly via command line
  - **Simplicity**: Avoid complex rendering tests, E2E flows, or heavy setup
- - **Purpose**: Catch logic errors and edge cases, not test frameworks
+ - **Purpose**: Catch logic errors and edge cases through systematic testing
+ - **Automation**: AI agents excel at generating comprehensive test cases and running them automatically
+
+ ### Test Types (AI Agent Strengths)
+ AI agents excel at automatically generating:
+ 1. **Logic Testing**: Function behavior with various parameters
+ 2. **Edge Cases**: Null, undefined, empty, boundary values systematically
+ 3. **Parameter Combinations**: Cartesian products and systematic variations
+ 4. **Error Handling**: Invalid inputs, exceptions, error messages
+ 5. **Type Safety**: Type mismatches, missing properties, contract violations
+ 6. **Property-Based Testing**: Mathematical properties (commutativity, associativity, idempotency)
+ 7. **Coverage Analysis**: Identifying and filling coverage gaps
+ 8. **Regression Testing**: Test generation from code changes and bug reports
+
+ ### Test Types (Not in Scope)
+ The workflow focuses on simple, fast tests. Complex tests should be handled separately:
+ - Integration tests between multiple services (requires heavy setup)
+ - Complex UI rendering tests (requires heavy DOM setup)
+ - E2E flows or end-to-end user journey tests
+ - Performance/load testing
+ - Tests requiring external API calls or database connections
 
  ### Coverage Targets
  Default targets (adjust if project-specific):
  - Lines: 80%
  - Branches: 70%
+ - Functions: 90%
+
+ Coverage is automatically analyzed and gaps are filled by generating additional tests.
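
For a Vitest project, these targets could also be enforced in config; a sketch, assuming Vitest 1+ with a coverage provider installed:

```javascript
// vitest.config.js - fail the run when coverage drops below target.
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    coverage: {
      thresholds: { lines: 80, branches: 70, functions: 90 },
    },
  },
});
```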
 
  ## Template Reference
  See `feature-template.md` for the exact structure required for test plans.
@@ -48,4 +89,4 @@ See `feature-template.md` for the exact structure required for test plans.
 
  ---
 
- **Note**: For complex E2E tests, performance testing, or bug tracking strategies, document these separately or in project-level documentation.
+ **Note**: For complex E2E tests, performance testing, or bug tracking strategies, document these separately or in project-level documentation. The `writing-test` command focuses on automated, logic-focused testing that AI agents excel at.
@@ -1,16 +1,48 @@
  # Test Plan: {Feature Name}
 
+ ## Test Files Created
+ - `path/to/test-file.spec.js` - Tests for [component/function]
+ - `path/to/another-test.spec.js` - Tests for [component/function]
+
  ## Unit Tests
- - Case 1: ...
- - Case 2: ...
- - Edge: ...
 
- ## Integration Tests
- - Main flow: ...
- - Failure modes: ...
+ ### Function/Component Name
+ - ✓ Happy path: normal parameters → expected output
+ - ✓ Edge case: empty/null input → handled correctly
+ - ✓ Edge case: boundary values (min/max, 0, -1) → handled correctly
+ - ✓ Parameter variation: [param1]=X, [param2]=Y → output Z
+ - ✓ Error case: invalid input → throws/returns error
+ - ✓ Type safety: wrong type input → handled correctly
+ - ✓ Property-based: [property] maintained → verified
+
+ ### Another Function/Component
+ - [Similar test cases...]
 
- ## Manual Checklist
- - Steps for manual verification
+ ## Run Command
+ ```bash
+ npm test
+ # or
+ npx vitest run
+ # or language-specific command
+ ```
+
+ ## Last Run Results
+ - Total: 15 tests
+ - Passed: 14
+ - Failed: 1
+ - Skipped: 0
+ - Coverage:
+   - Lines: 82% (123/150)
+   - Branches: 75% (30/40)
+   - Functions: 90% (18/20)
 
  ## Coverage Targets
- - Coverage goals and remaining gaps
+ - Target: 80% (lines), 70% (branches)
+ - Current: 82% (lines), 75% (branches)
+ - Status: ✓ Targets met
+
+ ## Notes
+ - All tests run automatically via command line
+ - Test results logged to terminal with detailed output
+ - Focus on logic testing, edge cases, and parameter variations
+ - No complex integration or UI rendering tests
package/package.json CHANGED
@@ -1,11 +1,11 @@
  {
    "name": "ai-workflow-init",
-   "version": "1.1.0",
+   "version": "1.1.1",
    "description": "Initialize AI workflow docs & commands into any repo with one command",
    "bin": {
      "ai-workflow-init": "./cli.js"
    },
-   "author": "your-name",
+   "author": "tuanpa",
    "license": "MIT",
    "repository": {
      "type": "git",
@@ -24,5 +24,18 @@
    },
    "engines": {
      "node": ">=14"
-   }
+   },
+   "scripts": {
+     "release": "node scripts/release.js",
+     "release:patch": "node scripts/release.js patch",
+     "release:minor": "node scripts/release.js minor",
+     "release:major": "node scripts/release.js major"
+   },
+   "files": [
+     "cli.js",
+     "docs/",
+     ".cursor/",
+     "AGENTS.md",
+     "README.md"
+   ]
  }
package/.npmignore DELETED
@@ -1,2 +0,0 @@
- node_modules
- .git