ai-workflow-init 3.4.0 → 3.6.0

@@ -3,29 +3,141 @@ name: check-implementation
  description: Validates implementation against planning doc.
  ---
 
- Compare current implementation against planning doc.
+ Compare the current implementation against the planning doc to ensure all requirements are met and completed tasks have corresponding code.
 
  ## Workflow Alignment
 
  - Provide brief status updates (1–3 sentences) before/after important actions.
+ - For medium/large validations, create todos (≤14 words, verb-led). Keep only one `in_progress` item.
+ - Update todos immediately after progress; mark completed upon finish.
  - Provide a high-signal summary at completion highlighting key mismatches and next steps.
 
- 1. Ask me for:
+ ---
+
+ ## Step 1: Load Planning Doc
+
+ **Tools:**
+ - ask user for clarification if feature name not provided
+ - read `docs/ai/planning/feature-{name}.md`
+
+ **Purpose:** Load planning doc to extract:
+ - Acceptance criteria (Given-When-Then scenarios)
+ - Implementation plan tasks (with `[ ]` or `[x]` status)
+ - Expected file changes mentioned in tasks
+
+ **Error handling:**
+ - Planning doc not found: Cannot proceed, notify user and exit
+ - Invalid format: Parse available content, warn about missing sections
+ - No completed tasks: Nothing to validate, notify user
+
+ ---
+
+ ## Step 2: Discover Implementation Files
+
+ **Strategy (in order):**
+ 1. Extract file paths from planning doc task list
+ 2. If no file paths found: Read implementation doc `docs/ai/implementation/feature-{name}.md`
+ 3. If no implementation doc: Use git diff to find changed files
+ 4. If no git changes: Ask user for file paths
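Strategy step 1 — pulling file paths out of the planning doc's task list — can be sketched as below. The assumption that tasks reference files as backticked paths is mine, not the package's; adjust the pattern to your docs.

```python
import re

def extract_paths(planning_md: str) -> list[str]:
    """Collect file paths mentioned in backticks, e.g. `src/api/user.ts`.

    Assumes the planning doc's tasks wrap file paths in backticks.
    """
    seen: dict[str, None] = {}
    for path in re.findall(r"`([\w/.-]+\.[A-Za-z]+)`", planning_md):
        seen.setdefault(path)  # dedupe while preserving order
    return list(seen)
```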
 
- - Feature name (if not provided)
- - Then locate planning doc by feature name:
- - Planning: `docs/ai/planning/feature-{name}.md`
+ **Tools:**
+ - read `docs/ai/planning/feature-{name}.md` - extract file mentions
+ - read `docs/ai/implementation/feature-{name}.md` - fallback
+ - run command: `git diff --name-only main` - find changed files
+ - search for files matching `src/**/*.{js,ts,py}` - last resort full scan
+ - AskUserQuestion - ask user if all else fails
 
- 2. Validation Scope (no inference):
+ **Output:** List of files to validate against planning doc
 
- - Verify code follows the acceptance criteria from the planning doc
- - Verify code matches the steps/changes in the implementation plan phases
- - Check that completed tasks (marked `[x]`) have corresponding code changes
- - Do NOT invent or infer alternative logic beyond what the docs specify
+ **Error handling:**
+ - No files found: Cannot validate, notify user
+ - File paths invalid: Skip broken paths, continue with valid ones
+ - Git not available: Fall back to asking user
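The git fallback in this step can be sketched as a small helper — a sketch only, assuming `git` is on PATH and `main` is the comparison branch:

```python
import subprocess

def changed_files(base: str = "main") -> list[str]:
    """Return files changed relative to `base` via `git diff --name-only`.

    An empty list signals "git not available / nothing changed", in which
    case the workflow falls back to asking the user.
    """
    try:
        result = subprocess.run(
            ["git", "diff", "--name-only", base],
            capture_output=True, text=True, check=True,
        )
    except (OSError, subprocess.CalledProcessError):
        return []  # git missing or not a repo: caller asks the user
    return [line for line in result.stdout.splitlines() if line.strip()]
```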
+
+ ---
+
+ ## Step 3: Validate Implementation vs Planning
+
+ **Automated process:**
+ - Use workspace search and analysis to map each completed task to code
+
+ **Alternative approach:** If the Explore agent is unavailable, manually:
+ 1. Read each file from Step 2
+ 2. For each completed task `[x]`, search for related code
+ 3. Compare against acceptance criteria
+ 4. Document mismatches
+
+ **Validation scope (no inference):**
+ - Verify code follows the acceptance criteria from the planning doc
+ - Verify code matches the implementation plan task descriptions
+ - Check completed tasks `[x]` have corresponding code changes
+ - **Do NOT** invent or infer alternative logic beyond what the docs specify
+
+ **Error handling:**
+ - Agent timeout: Retry, then fall back to the manual approach
+ - No completed tasks: Report "Nothing to validate"
+ - Ambiguous results: Flag for manual review
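The manual fallback's task-matching steps can be sketched as below — parse the `[x]` items, then search the Step 2 files for each. The checkbox regex assumes the standard markdown task-list format:

```python
import re

def completed_tasks(planning_md: str) -> list[str]:
    """Return descriptions of tasks marked [x] in a markdown task list."""
    pattern = re.compile(r"^\s*-\s*\[[xX]\]\s*(.+?)\s*$", re.MULTILINE)
    return [m.group(1) for m in pattern.finditer(planning_md)]
```

Each returned description can then be grepped for in the implementation files to find (or fail to find) corresponding code.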
+
+ ---
 
- 3. Output
+ ## Step 4: Generate Validation Report
+
+ **Report structure:**
+
+ ### Summary
+ - Total tasks in planning: `X`
+ - Completed tasks `[x]`: `Y`
+ - Validated successfully: `Z`
+ - Mismatches found: `N` (critical: `C`, minor: `M`)
+ - Missing implementations: `P`
+
+ ### Completed Tasks Without Implementation
+
+ For each task marked `[x]` with no corresponding code:
+
+ ```
+ - [ ] Task: [Task description from planning]
+   - Expected file: [file mentioned in task or "not specified"]
+   - Status: Missing or Partial
+   - Action: Implement missing code or update task to [ ]
+ ```
+
+ ### Mismatches (Code ≠ Planning)
+
+ For each discrepancy between planning and code:
+
+ ```
+ - File: path/to/file.ext
+   - Planning requirement: [what planning doc says]
+   - Current implementation: [what code actually does]
+   - Severity: Critical / Minor
+   - Action: Update code to match planning OR revise planning doc
+ ```
+
+ ### Acceptance Criteria Status
+
+ For each acceptance criterion:
+
+ ```
+ - [x] AC1: [Description] → ✅ Verified (code satisfies the criterion)
+ - [ ] AC2: [Description] → ❌ Not implemented
+ - [x] AC3: [Description] → ⚠️ Partial (missing edge cases)
+ ```
+
+ ### Next Steps (Prioritized)
+
+ 1. **Critical issues** (blocking):
+    - [ ] Fix mismatch in [file]: [specific issue]
+    - [ ] Implement missing task: [task name]
+
+ 2. **Minor issues** (non-blocking):
+    - [ ] Address partial implementation in [file]
+    - [ ] Update planning doc to reflect changes
+
+ 3. **Follow-up**:
+    - [ ] Re-run `/check-implementation` after fixes
+    - [ ] Run `/code-review` for standards compliance
+
+ ---
 
- - List concrete mismatches between code and planning doc
- - List missing pieces the planning doc requires but code lacks
- - List tasks marked complete `[x]` but code not implemented
- - Short actionable next steps
+ **Note:** This validation checks implementation completeness, not code quality. Run `/code-review` separately for standards compliance.
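The Summary counters `X` (total tasks) and `Y` (completed) can be derived mechanically from the planning doc's checkboxes; a sketch:

```python
import re

def task_counts(planning_md: str) -> dict[str, int]:
    """Compute total (X) and completed (Y) task counts from markdown
    checkboxes; the remaining counters come from the validation itself."""
    total = len(re.findall(r"^\s*-\s*\[[ xX]\]", planning_md, re.MULTILINE))
    done = len(re.findall(r"^\s*-\s*\[[xX]\]", planning_md, re.MULTILINE))
    return {"total": total, "completed": done}
```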
@@ -1,80 +1,186 @@
+ ---
+ name: code-review
+ description: Performs a local code review strictly for standards conformance.
+ ---
+
  You are helping me perform a local code review **before** I push changes. This review is restricted to standards conformance only.
 
  ## Workflow Alignment
+
  - Provide brief status updates (1–3 sentences) before/after important actions.
+ - For medium/large reviews, create todos (≤14 words, verb-led). Keep only one `in_progress` item.
+ - Update todos immediately after progress; mark completed upon finish.
  - Provide a high-signal summary at completion highlighting key findings and impact.
 
- ## Step 1: Gather Context (minimal)
- - Ask for feature name if not provided (must be kebab-case).
- - Then locate and read:
- - Planning doc: `docs/ai/planning/feature-{name}.md` (for file list context only)
- - Implementation doc: `docs/ai/implementation/feature-{name}.md` (for file list context only)
-
- ## Step 2: Standards Focus (only)
- - Load project standards:
- - `docs/ai/project/CODE_CONVENTIONS.md`
- - `docs/ai/project/PROJECT_STRUCTURE.md`
- - Review code strictly for violations against these two documents.
- - **Do NOT** provide design opinions, performance guesses, or alternative architectures.
- - **Do NOT** infer requirements beyond what standards explicitly state.
-
- ### Automated Checks (recommended)
- - Detect tools from project config and run non-interactive checks:
- - JS/TS: `npx eslint .` (or `pnpm eslint .`), consider `--max-warnings=0`; type checks with `npx tsc --noEmit`
- - Python: `ruff .` or `flake8 .`; type checks with `mypy .` or `pyright`
- - Go: `golangci-lint run` or `go vet ./...`
- - Rust: `cargo clippy -- -D warnings`
- - Java: `./gradlew check` or `mvn -q -DskipTests=false -Dspotbugs.failOnError=true verify`
- - Use results to focus the review; still report only clear violations per standards.
-
- ## Step 3: File-by-File Review (standards violations only)
- For every relevant file, report ONLY standards violations per the two docs above. Do not assess broader design or run git commands.
-
- ## Step 4: Cross-Cutting Concerns (standards only)
- - Naming consistency and adherence to CODE_CONVENTIONS
- - Structure/module boundaries per PROJECT_STRUCTURE
-
- ### Standards Conformance Report (required)
- - After reviewing `CODE_CONVENTIONS.md` and `PROJECT_STRUCTURE.md`, list violations in this exact format:
+ ---
+
+ ## Step 1: Determine Review Scope & Gather Context
+
+ **Scope detection (in order):**
+ 1. If feature name provided: Review files from planning/implementation docs
+ 2. If no feature name but git repo: Review changed files from git diff
+ 3. If no git changes: Ask user for file paths or review entire src/
+
+ **Tools:**
+ - ask user for clarification if feature name not provided
+ - read `docs/ai/planning/feature-{name}.md` for file list
+ - read `docs/ai/implementation/feature-{name}.md` for file list
+ - run command: `git diff --name-only HEAD` to find changed files (fallback)
+ - search for files matching `src/**/*.{js,ts,py,go,rs,java}` for full project scan (last resort)
+
+ **Error handling:**
+ - Feature docs not found: Fall back to git diff
+ - Git not available: Ask user for file paths
+ - No files to review: Notify user and exit
+
+ ## Step 2: Load Standards & Run Quality Checks
+
+ **Tools:**
+ - read `docs/ai/project/CODE_CONVENTIONS.md`
+ - read `docs/ai/project/PROJECT_STRUCTURE.md`
+
+ **Standards review scope:**
+ - Review code strictly for violations against CODE_CONVENTIONS and PROJECT_STRUCTURE only
+ - **Do NOT** provide design opinions, performance guesses, or alternative architectures
+ - **Do NOT** infer requirements beyond what standards explicitly state
+
+ **Quality checks (automated):**
+
+ Follow the steps in `.claude/skills/quality/code-check/SKILL.md` for automated validation:
+ - **Linting**: Code style and best practices (ESLint, Ruff, golangci-lint, Clippy)
+ - **Type checking**: Type safety validation (tsc, MyPy, Pyright)
+ - **Build verification**: Compilation and packaging checks
+
+ See the Notes section for manual commands by language if needed.
+
+ Use results to focus the manual review; report only clear violations per standards.
+
+ **Error handling:**
+ - Standards docs not found: Notify user; cannot proceed without standards
+ - Skill not available: Fall back to manual commands (see Notes)
+ - Quality checks fail: Report errors as violations, fix and retry up to 3 times
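The fix-and-retry rule can be sketched as a loop — a sketch only; in the real workflow the agent fixes reported violations between attempts rather than blindly re-running:

```python
import subprocess

def run_check(cmd: list[str], retries: int = 3) -> bool:
    """Run a non-interactive quality check up to `retries` times.

    Returns True on the first zero exit status, False if every attempt
    fails (at which point failures are reported as violations).
    """
    for _ in range(retries):
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode == 0:
            return True
        # A real agent would apply fixes here before retrying.
    return False
```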
+
+ ## Step 3: Scan for Standards Violations
+
+ **Automated process:**
+ - Use workspace search and analysis to check:
+   - Import order and grouping
+   - Folder structure and module boundaries
+   - Test placement and naming
+   - Cross-file consistency (naming patterns, module boundaries)
+ - Return violations with file:line, the rule violated, and a brief description
+
+ **Alternative approach:** If the Explore agent is unavailable, manually Read each file and check against standards.
+
+ **Review scope:** Files from Step 1 only. Report ONLY standards violations, not design opinions.
+
+ **Output format (Standards Conformance Report):**
+
  ```
  - path/to/file.ext — [Rule]: short description of the violated rule
  ```
- - Only include clear violations. Group similar violations by file when helpful.
 
- ## Step 5: Summarize Findings (rules-focused)
- Provide results in this structure:
- ```
- ### Summary
- - Blocking issues: [count]
- - Important follow-ups: [count]
- - Nice-to-have improvements: [count]
+ Only include clear violations. Group similar violations by file when helpful.
 
- Severity criteria:
- - Blocking: Violates CODE_CONVENTIONS or PROJECT_STRUCTURE causing build/test failure, security risk, or clear architectural breach.
- - Important: Violations that don't break build but degrade maintainability/performance or developer ergonomics.
- - Nice-to-have: Style/consistency improvements with low impact.
+ **Error handling:**
+ - Agent timeout: Retry once, then fall back to manual review
+ - No violations found: Report clean bill of health
+ - Too many violations (>50): Group by file and summarize patterns
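The >50-violations rule — group by file and summarize patterns — can be sketched as:

```python
from collections import defaultdict

def group_violations(violations: list[tuple[str, str]]) -> dict[str, list[str]]:
    """Group (file, rule) pairs by file so large result sets can be
    summarized per file instead of listed one by one."""
    grouped: defaultdict[str, list[str]] = defaultdict(list)
    for path, rule in violations:
        grouped[path].append(rule)
    return dict(grouped)
```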
 
- ### Detailed Notes
- 1. **[File or Component]**
- - Issue/Observation: ...
- - Impact: (blocking / important / nice-to-have)
- - Rule violated: [Rule name from CODE_CONVENTIONS or PROJECT_STRUCTURE]
- - Recommendation: Fix to comply with [Rule]
+ ## Step 4: Summarize Findings (rules-focused)
 
- 2. ... (repeat per finding)
+ **Report structure:**
 
- ### Recommended Next Steps
- - [ ] Address blocking issues
- - [ ] Fix important violations
- - [ ] Consider nice-to-have improvements
- - [ ] Re-run code review command after fixes
- ```
+ 1. **Summary**: Violation counts by severity (Blocking / Important / Nice-to-have)
+ 2. **Detailed Findings**: For each violation:
+    - File path and line number
+    - Rule violated (from CODE_CONVENTIONS or PROJECT_STRUCTURE)
+    - Brief description and impact
+    - Recommended fix
+ 3. **Next Steps**: Prioritized action items
+
+ **Severity criteria:**
+ - **Blocking**: Build/test failure, security risk, architectural breach
+ - **Important**: Degrades maintainability, not immediately breaking
+ - **Nice-to-have**: Style/consistency improvements, low impact
+
+ (See Notes for a detailed report format example)
+
+ ## Step 5: Final Checklist (rules-focused)
 
- ## Step 6: Final Checklist (rules-focused)
  Confirm whether each item is complete (yes/no/needs follow-up):
+
  - Naming and formatting adhere to CODE_CONVENTIONS
  - Structure and boundaries adhere to PROJECT_STRUCTURE
 
  ---
 
+ ## Notes
+
+ ### Automated Checks Complete List (Step 2)
+
+ **JavaScript/TypeScript:**
+ - Linting: `npx eslint . --max-warnings=0` or `pnpm eslint .`
+ - Type checking: `npx tsc --noEmit`
+ - Alternative linters: `npx biome check .`
+
+ **Python:**
+ - Linting: `ruff check .` or `flake8 .`
+ - Type checking: `mypy .` or `pyright`
+ - Formatting: `ruff format --check .` or `black --check .`
+
+ **Go:**
+ - Linting: `golangci-lint run`
+ - Vet: `go vet ./...`
+ - Formatting: `gofmt -l .`
+
+ **Rust:**
+ - Linting: `cargo clippy -- -D warnings`
+ - Formatting: `cargo fmt --check`
+
+ **Java:**
+ - Gradle: `./gradlew check`
+ - Maven: `mvn -q -DskipTests=false -Dspotbugs.failOnError=true verify`
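Picking which of these toolchains applies can be sketched via marker files. The marker→command mapping below is illustrative only, not part of the package:

```python
from pathlib import Path
from typing import Optional

# Assumed marker files per ecosystem; extend as needed.
LINT_COMMANDS = {
    "package.json": "npx eslint . --max-warnings=0",
    "pyproject.toml": "ruff check .",
    "go.mod": "golangci-lint run",
    "Cargo.toml": "cargo clippy -- -D warnings",
    "build.gradle": "./gradlew check",
    "pom.xml": "mvn -q -DskipTests=false -Dspotbugs.failOnError=true verify",
}

def pick_linter(root: Path) -> Optional[str]:
    """Return the lint command for the first marker file found in `root`."""
    for marker, cmd in LINT_COMMANDS.items():
        if (root / marker).is_file():
            return cmd
    return None
```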
+
+ ### Detailed Report Format Example (Step 4)
+
+ ```markdown
+ ## Code Review Summary
+
+ ### Standards Compliance Overview
+ - ✅ Blocking issues: 0
+ - ⚠️ Important follow-ups: 3
+ - 💡 Nice-to-have improvements: 5
+
+ ### Detailed Findings
+
+ #### 1. src/services/user-service.ts
+ - **Issue**: Function name uses snake_case instead of camelCase
+ - **Rule violated**: CODE_CONVENTIONS > Naming > Functions must use camelCase
+ - **Impact**: Important (consistency)
+ - **Recommendation**: Rename `get_user_by_id` → `getUserById`
+
+ #### 2. src/utils/helpers.ts
+ - **Issue**: Missing index.ts export
+ - **Rule violated**: PROJECT_STRUCTURE > Module boundaries > All utils must export via index
+ - **Impact**: Important (architecture)
+ - **Recommendation**: Add export to `src/utils/index.ts`
+
+ #### 3. tests/unit/user.spec.ts
+ - **Issue**: Test file placed in wrong directory
+ - **Rule violated**: CODE_CONVENTIONS > Tests > Colocate with source files
+ - **Impact**: Nice-to-have (organization)
+ - **Recommendation**: Move to `src/services/user-service.spec.ts`
+
+ ### Recommended Next Steps
+ - [ ] Fix 3 important violations (estimated 15 minutes)
+ - [ ] Address 5 nice-to-have improvements (estimated 30 minutes)
+ - [ ] Re-run `/code-review` after fixes
+ - [ ] Run automated checks: `npm run lint && npm run type-check`
+ ```
+
+ ---
+
  Let me know when you're ready to begin the review.