create-ai-project 1.20.2 → 1.20.4
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.claude/agents-en/acceptance-test-generator.md +3 -2
- package/.claude/agents-en/code-reviewer.md +133 -25
- package/.claude/agents-en/codebase-analyzer.md +35 -9
- package/.claude/agents-en/design-sync.md +5 -6
- package/.claude/agents-en/document-reviewer.md +2 -0
- package/.claude/agents-en/integration-test-reviewer.md +2 -2
- package/.claude/agents-en/prd-creator.md +2 -4
- package/.claude/agents-en/quality-fixer-frontend.md +1 -1
- package/.claude/agents-en/quality-fixer.md +1 -1
- package/.claude/agents-en/requirement-analyzer.md +7 -7
- package/.claude/agents-en/scope-discoverer.md +2 -2
- package/.claude/agents-en/solver.md +1 -2
- package/.claude/agents-en/task-decomposer.md +2 -2
- package/.claude/agents-en/task-executor-frontend.md +1 -1
- package/.claude/agents-en/task-executor.md +1 -1
- package/.claude/agents-en/technical-designer-frontend.md +5 -5
- package/.claude/agents-en/technical-designer.md +7 -4
- package/.claude/agents-en/ui-spec-designer.md +1 -1
- package/.claude/agents-en/work-planner.md +1 -1
- package/.claude/agents-ja/acceptance-test-generator.md +3 -2
- package/.claude/agents-ja/code-reviewer.md +133 -25
- package/.claude/agents-ja/codebase-analyzer.md +35 -9
- package/.claude/agents-ja/design-sync.md +5 -5
- package/.claude/agents-ja/document-reviewer.md +2 -0
- package/.claude/agents-ja/integration-test-reviewer.md +2 -2
- package/.claude/agents-ja/prd-creator.md +2 -4
- package/.claude/agents-ja/quality-fixer-frontend.md +1 -1
- package/.claude/agents-ja/quality-fixer.md +1 -1
- package/.claude/agents-ja/requirement-analyzer.md +7 -7
- package/.claude/agents-ja/scope-discoverer.md +2 -2
- package/.claude/agents-ja/solver.md +1 -2
- package/.claude/agents-ja/task-decomposer.md +2 -2
- package/.claude/agents-ja/task-executor-frontend.md +1 -1
- package/.claude/agents-ja/task-executor.md +1 -1
- package/.claude/agents-ja/technical-designer-frontend.md +5 -5
- package/.claude/agents-ja/technical-designer.md +7 -4
- package/.claude/agents-ja/ui-spec-designer.md +1 -1
- package/.claude/agents-ja/work-planner.md +1 -1
- package/.claude/commands-en/build.md +17 -8
- package/.claude/commands-en/front-build.md +25 -41
- package/.claude/commands-en/front-design.md +49 -17
- package/.claude/commands-en/front-plan.md +17 -10
- package/.claude/commands-en/front-review.md +37 -33
- package/.claude/commands-en/review.md +10 -5
- package/.claude/commands-ja/build.md +17 -8
- package/.claude/commands-ja/front-build.md +25 -41
- package/.claude/commands-ja/front-design.md +48 -18
- package/.claude/commands-ja/front-plan.md +22 -15
- package/.claude/commands-ja/front-review.md +37 -33
- package/.claude/commands-ja/review.md +10 -5
- package/.claude/skills-en/coding-standards/references/security-checks.md +4 -2
- package/.claude/skills-en/documentation-criteria/SKILL.md +8 -28
- package/.claude/skills-en/documentation-criteria/references/adr-template.md +5 -1
- package/.claude/skills-en/documentation-criteria/references/design-template.md +18 -8
- package/.claude/skills-en/documentation-criteria/references/plan-template.md +11 -6
- package/.claude/skills-en/documentation-criteria/references/prd-template.md +32 -10
- package/.claude/skills-en/documentation-criteria/references/task-template.md +2 -2
- package/.claude/skills-en/subagents-orchestration-guide/SKILL.md +21 -38
- package/.claude/skills-en/task-analyzer/references/skills-index.yaml +0 -2
- package/.claude/skills-ja/coding-standards/references/security-checks.md +4 -2
- package/.claude/skills-ja/documentation-criteria/SKILL.md +8 -29
- package/.claude/skills-ja/documentation-criteria/references/adr-template.md +5 -1
- package/.claude/skills-ja/documentation-criteria/references/design-template.md +18 -2
- package/.claude/skills-ja/documentation-criteria/references/plan-template.md +11 -6
- package/.claude/skills-ja/documentation-criteria/references/prd-template.md +32 -10
- package/.claude/skills-ja/documentation-criteria/references/task-template.md +2 -2
- package/.claude/skills-ja/subagents-orchestration-guide/SKILL.md +21 -36
- package/.claude/skills-ja/task-analyzer/references/skills-index.yaml +0 -2
- package/CHANGELOG.md +57 -0
- package/README.ja.md +51 -30
- package/README.md +58 -34
- package/docs/guides/en/skills-editing-guide.md +10 -0
- package/docs/guides/ja/skills-editing-guide.md +12 -2
- package/package.json +1 -1
@@ -210,7 +210,8 @@ Upon completion, report in the following JSON format. Detailed meta information
 ## Constraints and Quality Standards
 
 **Required Compliance**:
-- Output
+- Output `it.todo` skeletons only: each skeleton contains verification points, expected results, and pass criteria as comments inside `it.todo` blocks.
+  Implementation code, assertions (`expect`), and mock setup must not be included — downstream agents (work-planner, integration-test-reviewer) parse `it.todo` presence to determine phase placement and review status.
 - Clearly state verification points, expected results, and pass criteria for each test
 - Preserve original AC statements in comments (ensure traceability)
 - Stay within budget; report to user if budget insufficient for critical tests
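The `it.todo`-only boundary above can be illustrated with a small self-contained sketch. The AC text, test names, and the presence checks are hypothetical (not from the package); the skeleton is kept as a string so the kind of check a downstream agent could run executes standalone:

```typescript
// Hypothetical skeleton as an acceptance-test-generator might emit it
// (AC wording and names are illustrative, not from the package):
const skeleton = `
describe("AC-1: user can reset password", () => {
  // AC (original): "User receives a reset link after requesting a reset"
  // Verification point: reset endpoint enqueues exactly one email job
  // Expected result: job payload contains the requesting user's address
  // Pass criteria: one email job observed per reset request
  it.todo("enqueues a reset email for the requesting user");
});
`;

// The kind of presence/absence check a downstream agent could apply:
const hasTodoSkeleton = skeleton.includes("it.todo(");
const hasForbiddenImpl = /\bexpect\(|\bjest\.mock\(|\bvi\.mock\(/.test(skeleton);
console.log(hasTodoSkeleton, hasForbiddenImpl); // → true false
```

Because the contract is purely structural (`it.todo` present, no assertions or mocks), it stays checkable without running any test framework.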
@@ -241,7 +242,7 @@ Upon completion, report in the following JSON format. Detailed meta information
 - Framework/Language: Auto-detect from existing test files
 - Placement: Identify test directory with `**/*.{test,spec}.{ts,js}` pattern using Glob
 - Naming: Follow existing file naming conventions
-- Output: `it.todo` only (
+- Output: `it.todo` skeletons only (see Constraints section for boundary)
 
 **File Operations**:
 - Existing files: Append to end, prevent duplication (check with Grep)
@@ -45,42 +45,106 @@ Operates in an independent context without CLAUDE.md principles, executing auton
 ## Verification Process
 
 ### 1. Load Baseline
-
+
+Read the Design Doc **in full** and extract:
 - Functional requirements and acceptance criteria (list each AC individually)
 - Architecture design and data flow
+- Interface contracts (function signatures, API endpoints, data structures)
+- Identifier specifications (resource names, endpoint paths, configuration keys, error codes, schema/model names)
 - Error handling policy
 - Non-functional requirements
 
-### 2. Map Implementation to
+### 2. Map Implementation to Design Doc
+
+#### 2-1. Acceptance Criteria Verification
+
 For each acceptance criterion extracted in Step 1:
 - Search implementation files for the corresponding code
 - Determine status: fulfilled / partially fulfilled / unfulfilled
 - Record the file path and relevant code location
 - Note any deviations from the Design Doc specification
 
+#### 2-2. Identifier Verification
+
+For each identifier specification extracted in Step 1 (resource names, endpoint paths, configuration keys, error codes, schema/model names):
+1. Grep for the exact string in implementation files
+2. Compare the identifier in code against the Design Doc specification
+3. Flag any discrepancy (misspelling, different naming, missing reference)
+4. Record: `{ identifier, designDocValue, codeValue, location, match: true|false }`
+
+#### 2-3. Evidence Collection
+
+For each AC and identifier verification:
+1. **Primary**: Find direct implementation using Read/Grep
+2. **Secondary**: Check test files for expected behavior
+3. **Tertiary**: Review config and type definitions
+
+Assign confidence based on evidence count:
+- **high**: 3+ sources agree
+- **medium**: 2 sources agree
+- **low**: 1 source only (implementation exists but no test or type confirmation)
+
 ### 3. Assess Code Quality
-
-
-
--
--
--
--
+
+Read each implementation file and evaluate against coding-standards skill:
+
+#### 3-1. Structural Quality
+For each function/method in implementation files, check against coding-standards skill (Single Responsibility, Function Organization):
+- Measure function length — count lines using Read tool
+- Measure nesting depth — count indentation levels in Read output
+- Assess single responsibility adherence — check if function handles multiple distinct concerns
+
+#### 3-2. Error Handling
+- Grep for error handling patterns (try/catch, error returns, Result types — adapt to project language)
+- For each entry point: verify error cases are handled, not silently swallowed
+- Check error responses do not leak internal details
+
+#### 3-3. Test Coverage for Acceptance Criteria
+- For each AC marked fulfilled: Glob/Grep for corresponding test cases
+- Record which ACs have test coverage and which do not
+
+#### Finding Classification
+
+Classify each quality finding into one of:
+
+| Category | Definition | Examples |
+|----------|-----------|----------|
+| **dd_violation** | Implementation contradicts or deviates from Design Doc specification | Wrong identifier, missing specified behavior, incorrect data flow |
+| **maintainability** | Code structure impedes future changes or comprehension | Long functions, deep nesting, multiple responsibilities, unclear naming |
+| **reliability** | Missing safeguards that could cause runtime failures | Unhandled error paths, missing validation at boundaries, silent failures |
+| **coverage_gap** | Acceptance criteria lack corresponding test verification | AC fulfilled in code but no test exercises it |
+
+Each finding must include a `rationale` field:
+
+| Category | Rationale must explain |
+|----------|----------------------|
+| **dd_violation** | What the Design Doc specifies vs what the code does, with exact references |
+| **maintainability** | What specific maintenance or comprehension risk this creates |
+| **reliability** | What failure scenario is unguarded and under what conditions it could occur |
+| **coverage_gap** | Which AC is untested and why test coverage matters for this specific case |
 
 ### 4. Check Architecture Compliance
+
 Verify against the Design Doc architecture:
 - Component dependencies match the design
 - Data flow follows the documented path
 - Responsibilities are properly separated
 - No unnecessary duplicate implementations (Pattern 5 from coding-standards skill)
-- Existing codebase analysis section includes similar functionality investigation results
 
-### 5. Calculate Compliance
-
-
+### 5. Calculate Compliance and Consolidate
+
+#### Compliance Rate
+- Compliance rate = (fulfilled ACs + 0.5 × partially fulfilled ACs) / total ACs × 100
+- Identifier match rate = matched identifiers / total identifier specifications × 100
+
+#### Consolidation
+- Compile all AC statuses with confidence levels
+- Compile all identifier verification results
+- Compile all quality findings with categories and rationale
 - Determine verdict based on compliance rate
 
 ### 6. Return JSON Result
+
 Return the JSON result as the final response. See Output Format for the schema.
 
 ## Output Format
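The identifier-verification record introduced in step 2-2 can be sketched as a small self-contained function. The types, the sample identifier, and both values are illustrative assumptions, not taken from the package; the point is the exact, character-for-character comparison the step prescribes:

```typescript
// Sketch of the step 2-2 record shape; all example values are hypothetical.
type IdentifierCheck = {
  identifier: string;
  designDocValue: string;
  codeValue: string; // or "not found"
  location: string;  // file:line
  match: boolean;
};

function compareIdentifier(
  identifier: string,
  designDocValue: string,
  codeValue: string,
  location: string,
): IdentifierCheck {
  // Exact comparison, no normalization: misspellings, case differences,
  // and renamings are all flagged as mismatches.
  const match = designDocValue === codeValue;
  return { identifier, designDocValue, codeValue, location, match };
}

const check = compareIdentifier(
  "error code",
  "ERR_USER_NOT_FOUND",
  "ERR_USER_NOTFOUND", // hypothetical value found via Grep
  "src/errors.ts:12",
);
console.log(check.match); // → false, so this goes into the mismatch report
```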
@@ -88,27 +152,58 @@ Return the JSON result as the final response. See Output Format for the schema.
 ```json
 {
   "complianceRate": "[X]%",
+  "identifierMatchRate": "[X]%",
   "verdict": "[pass/needs-improvement/needs-redesign]",
 
   "acceptanceCriteria": [
     {
       "item": "[acceptance criteria name]",
       "status": "fulfilled|partially_fulfilled|unfulfilled",
+      "confidence": "high|medium|low",
       "location": "[file:line, if implemented]",
+      "evidence": ["[source1: file:line]", "[source2: test file:line]"],
+      "evidence_source": "[tool name and result that determined status, e.g. 'Grep found handler at src/api.ts:42']",
       "gap": "[what is missing or deviating, if not fully fulfilled]",
       "suggestion": "[specific fix, if not fully fulfilled]"
     }
   ],
 
-  "
+  "identifierVerification": [
     {
-      "
-      "
+      "identifier": "[identifier name]",
+      "designDocValue": "[value specified in Design Doc]",
+      "codeValue": "[value found in code, or 'not found']",
+      "location": "[file:line]",
+      "match": true
+    }
+  ],
+
+  "qualityFindings": [
+    {
+      "category": "dd_violation|maintainability|reliability|coverage_gap",
+      "location": "[file:line or file:function]",
+      "description": "[specific issue found]",
+      "rationale": "[category-specific, see Finding Classification]",
+      "evidence_source": "[tool name and result, e.g. 'Read confirmed 85-line function at src/service.ts:10-95']",
       "suggestion": "[specific improvement]"
     }
   ],
 
-  "
+  "summary": {
+    "acsTotal": 0,
+    "acsFulfilled": 0,
+    "acsPartial": 0,
+    "acsUnfulfilled": 0,
+    "identifiersTotal": 0,
+    "identifiersMatched": 0,
+    "lowConfidenceItems": 0,
+    "findingsByCategory": {
+      "dd_violation": 0,
+      "maintainability": 0,
+      "reliability": 0,
+      "coverage_gap": 0
+    }
+  }
 }
 ```
 
@@ -118,31 +213,44 @@ Return the JSON result as the final response. See Output Format for the schema.
 - **70-89%**: needs-improvement — Critical gaps exist
 - **<70%**: needs-redesign — Major revision required
 
+Identifier mismatches automatically lower the verdict by one level (e.g., pass → needs-improvement) when any mismatch is found.
+
 ## Review Principles
 
 1. **Maintain Objectivity**
    - Evaluate independent of implementation context
    - Use Design Doc as single source of truth
 
-2. **
-   -
-   -
+2. **Evidence-Based Judgment**
+   - Every finding must cite specific file:line locations
+   - Every status determination must include the tool name and result that produced it (e.g., "Grep found X at file:line", "Read confirmed function signature at file:line")
+   - Low-confidence determinations must be explicitly noted
 
 3. **Quantitative Assessment**
    - Quantify wherever possible
    - Eliminate subjective judgment
 
-4. **
-   -
-   -
+4. **Constructive Feedback**
+   - Provide solutions, not just problems
+   - Clarify priorities via category classification
 
 ## Completion Criteria
 
-- [ ] All acceptance criteria individually evaluated
-- [ ]
+- [ ] All acceptance criteria individually evaluated with confidence levels
+- [ ] All identifier specifications verified against implementation code
+- [ ] Quality findings classified with category and rationale
+- [ ] Compliance rate and identifier match rate calculated
 - [ ] Verdict determined
 - [ ] Final response is the JSON output
 
+## Output Self-Check
+
+- [ ] Every AC status determination cites the tool name and result as evidence source
+- [ ] Identifier comparisons use exact strings from Design Doc and code (character-for-character match)
+- [ ] Each low-confidence item is explicitly noted in the output
+- [ ] Each quality finding includes category-specific rationale
+- [ ] Every finding includes a file:line location reference
+
 ## Escalation Criteria
 
 Recommend higher-level review when:
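The verdict rules added here can be sketched end to end. The hunk shows only the 70-89% and <70% bands, so the pass band (at or above 90%) is inferred rather than quoted; the function names are illustrative:

```typescript
type Verdict = "pass" | "needs-improvement" | "needs-redesign";

// Bands from the document: 70-89% needs-improvement, <70% needs-redesign;
// the >= 90% pass band is inferred from those two.
function baseVerdict(complianceRate: number): Verdict {
  if (complianceRate >= 90) return "pass";
  if (complianceRate >= 70) return "needs-improvement";
  return "needs-redesign";
}

// Any identifier mismatch lowers the verdict one level, floored at needs-redesign.
function finalVerdict(complianceRate: number, identifierMismatches: number): Verdict {
  const order: Verdict[] = ["pass", "needs-improvement", "needs-redesign"];
  const base = baseVerdict(complianceRate);
  if (identifierMismatches === 0) return base;
  return order[Math.min(order.indexOf(base) + 1, order.length - 1)];
}

// Compliance rate per step 5: (fulfilled + 0.5 * partial) / total * 100
const rate = ((8 + 0.5 * 2) / 10) * 100; // 90
console.log(finalVerdict(rate, 0)); // → pass
console.log(finalVerdict(rate, 1)); // → needs-improvement
```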
@@ -44,15 +44,19 @@ Design decisions, document creation, and solution proposals are out of scope for
 
 For each file in `affectedFiles`:
 
-1. **Read the file** and extract
-
--
-
-3. **
+1. **Read the file in full** and extract every interface, type, function signature, class definition, and method definition at all visibility levels (public, private, internal — adapt terms to project language). Record exact names, visibility, and signatures as they appear in code
+2. **Trace call chains** with these scope rules (adapt visibility terms to project language — e.g., public/private, exported/unexported, pub/pub(crate)):
+   - Same module internal functions/methods: follow every call recursively until the chain terminates (returns, delegates to external, or reaches a leaf). If a chain spans more than 10 unique functions, record the traced portion and note the remainder in `limitations`
+   - External dependencies (imported modules, other packages): read the public interface only (signatures, contracts); record as an integration point but stop tracing into the external module's internals
+3. **Data transformation pipeline detection**: Prioritize entry points relevant to the requirement (as identified in `affectedFiles` and `purpose`). For each such entry point that receives input from outside the module (API handlers, exported service functions called by other modules, CLI entry points), trace how input data is transformed step by step through the call chain. If additional entry points are discovered that share the same output path or transformation logic, include them or record them in `limitations`:
+   - Record each transformation step (what changes, what format/value mapping occurs)
+   - Record external resource lookups that modify values (master table references, configuration lookups, constant substitutions)
+   - Record intermediate data formats (if data passes through a different representation before final output)
+4. **Pattern detection** (adapt search terms to project conventions):
    - Data access: Grep for patterns indicating database operations (query, select, insert, update, delete, find, save, create, repository, model, schema, migration, table, column, entity, record)
    - External integration: Grep for patterns indicating external calls (http, fetch, client, api, endpoint, request, response)
    - Validation: Grep for patterns indicating constraints (validate, check, assert, constraint, rule, require, ensure)
-
+5. Record each discovered element with file path and line number
 
 ### Step 3: Schema and Data Model Discovery
 
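The same-module tracing rule above (recurse until the chain terminates, cap at 10 unique functions, record the rest in `limitations`) can be sketched with an invented call graph; the graph, names, and limit parameter are illustrative assumptions:

```typescript
// Minimal sketch of depth-limited call-chain tracing; the graph is hypothetical.
type CallGraph = Record<string, string[]>;

function traceChain(graph: CallGraph, entry: string, limit = 10) {
  const traced: string[] = [];
  const limitations: string[] = [];
  const visit = (fn: string) => {
    if (traced.includes(fn)) return; // already traced
    if (traced.length >= limit) {
      limitations.push(`${fn}: not traced (chain exceeds ${limit} functions)`);
      return;
    }
    traced.push(fn);
    for (const callee of graph[fn] ?? []) visit(callee); // recurse into callees
  };
  visit(entry);
  return { traced, limitations };
}

const graph: CallGraph = { a: ["b", "c"], b: ["d"], c: [], d: [] };
console.log(traceChain(graph, "a").traced); // → ["a", "b", "d", "c"]
```

With a smaller limit, the untraced remainder lands in `limitations` instead of being silently dropped, which is exactly what the completion criteria later check for.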
@@ -95,9 +99,10 @@ Return the JSON result as the final response. See Output Format for the schema.
   },
   "existingElements": [
     {
-      "category": "interface|type|function|class|constant|configuration",
+      "category": "interface|type|function|method|class|constant|configuration",
       "name": "ElementName",
       "filePath": "path/to/file:lineNumber",
+      "visibility": "public|private|internal",
       "signature": "brief signature or definition",
       "usedBy": ["path/to/consumer1"]
     }
@@ -130,6 +135,23 @@ Return the JSON result as the final response. See Output Format for the schema.
     ],
     "migrationFiles": ["path/to/migration/files"]
   },
+  "dataTransformationPipelines": [
+    {
+      "entryPoint": "ClassName.methodName (file:line)",
+      "steps": [
+        {
+          "order": 1,
+          "method": "methodName (file:line)",
+          "input": "description of input data/format",
+          "output": "description of output data/format",
+          "externalLookups": ["MasterTable.getData() for code conversion"],
+          "transformation": "what changes (e.g., raw value mapped to display value via lookup table)"
+        }
+      ],
+      "intermediateFormats": ["description of intermediate data representation if any"],
+      "finalOutput": "description of final output data/format"
+    }
+  ],
   "constraints": [
     {
       "type": "validation|business_rule|configuration|assumption",
@@ -157,8 +179,10 @@ Return the JSON result as the final response. See Output Format for the schema.
 ## Completion Criteria
 
 - [ ] Parsed requirement analysis output and identified analysis categories
-- [ ] Read all affected files and extracted public
-- [ ] Traced
+- [ ] Read all affected files in full and extracted every interface, type, function, method, and class at all visibility levels (public, private, internal) with file:line references — or recorded incomplete files in `limitations`
+- [ ] Traced call chains per scope rules (same-file: recursive; external: public interface only) — or recorded incomplete traces in `limitations`
+- [ ] Identified data transformation pipelines with step-by-step input→output mapping for each public entry point
+- [ ] Recorded every external resource lookup (master tables, config, constants) that modifies output values
 - [ ] Searched for data access, external integration, and validation patterns using Grep
 - [ ] When data access detected: traced to schema definitions and extracted field-level details
 - [ ] Extracted constraints with file:line evidence
@@ -173,4 +197,6 @@ Return the JSON result as the final response. See Output Format for the schema.
 - [ ] Schema field names match actual definitions (not inferred from similar tables)
 - [ ] Each focus area cites specific files and concrete risks
 - [ ] `dataModel.detected` accurately reflects whether data operations were found
+- [ ] `dataTransformationPipelines` populated for every entry point that transforms data (empty array only when no transformations exist)
+- [ ] Each pipeline step's `externalLookups` lists all master table / config / constant references that modify output values
 - [ ] Limitations section documents any files that could not be read or patterns that could not be traced
@@ -34,7 +34,11 @@ You operate with an independent context that does not apply CLAUDE.md principles
 1. Detect explicit conflicts between Design Docs
 2. Classify conflicts and determine severity
 3. Provide structured reports
-
+
+## Scope Distinction
+
+- **This agent**: Cross-document consistency verification between Design Docs
+- **Single-document review**: Document quality, completeness, and rule compliance
 
 ## Out of Scope
 
@@ -219,8 +223,3 @@ Integration point: UserService.login() → TokenService.generate()
 - All target files have been read
 - Structured markdown output completed
 - All quality checklist items verified
-
-## Important Notes
-
-### Do Not Perform Modifications
-design-sync **specializes in detection and reporting**. Conflict resolution is outside the scope of this agent.
@@ -106,6 +106,7 @@ For DesignDoc, additionally verify:
 - Verification method is sufficient for the change's risk and dependency type — method that cannot detect the primary risk category (e.g., schema correctness, behavioral equivalence, integration compatibility) → `important` issue (category: `consistency`)
 - Early verification point identifies a concrete first target — "TBD" or "final phase" → `important` issue (category: `completeness`)
 - When vertical slice is selected, verification timing deferred entirely to final phase → `important` issue (category: `consistency`)
+- **Output comparison check**: When the Design Doc describes replacing or modifying existing behavior, verify that a concrete output comparison method is defined (identical input, expected output fields/format, diff method). Missing output comparison for changes that replace or modify existing behavior → `critical` issue (category: `completeness`). When codebase analysis `dataTransformationPipelines` are referenced, verify each pipeline step's output is covered by the comparison — uncovered steps → `important` issue (category: `completeness`)
 
 **Perspective-specific Mode**:
 - Implement review based on specified mode and focus
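The "concrete output comparison method" the new check demands can be sketched in a few lines: run the old and new behavior on identical input and diff the fields the Design Doc declares significant. The `Order` shape, field list, and values below are invented for illustration:

```typescript
// Hypothetical sketch of an output comparison: same input, field-by-field diff.
type Order = { id: string; total: number; currency: string };

function diffOutputs(oldOut: Order, newOut: Order, fields: (keyof Order)[]) {
  // Return the names of fields whose values diverge between implementations.
  return fields.filter((f) => oldOut[f] !== newOut[f]);
}

const legacy: Order = { id: "o-1", total: 1200, currency: "JPY" };
const replacement: Order = { id: "o-1", total: 1200, currency: "USD" };
console.log(diffOutputs(legacy, replacement, ["id", "total", "currency"]));
// → ["currency"]
```

A non-empty result is exactly the kind of concrete evidence the reviewer is told to require before a behavior-replacing design can pass.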
@@ -263,6 +264,7 @@ Include in output when `prior_context_count > 0`:
 - [ ] Code verification results (if provided) reconciled with document content
 - [ ] Verification Strategy present with concrete correctness definition and early verification point
 - [ ] Verification Strategy aligns with design_type and implementation approach
+- [ ] Output comparison defined when design replaces/modifies existing behavior (covers all transformation pipeline steps)
 
 ## Review Criteria (for Comprehensive Mode)
 
@@ -62,8 +62,8 @@ Verify the following for each test case:
 | Check Item | Verification Content | Failure Condition |
 |------------|---------------------|-------------------|
 | AAA Structure | Arrange/Act/Assert comments or blank line separation | Separation unclear |
-| Independence |
-| Reproducibility |
+| Independence | Isolated state per test (reset in beforeEach) | Shared state modified across tests |
+| Reproducibility | Deterministic execution (mock time/random sources when needed) | Non-deterministic elements present |
 | Readability | Test name matches verification content | Name and content diverge |
 
 ### 4. Mock Boundary Check (Integration Tests Only)
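The completed Independence and Reproducibility rows can be illustrated framework-agnostically; the counter, the injected clock, and all names are hypothetical, standing in for beforeEach-style resets and mocked time sources:

```typescript
// Independence: state is rebuilt per test instead of shared across tests.
function makeCounter() {
  let count = 0; // isolated state, recreated for every test
  return { increment: () => ++count, value: () => count };
}

// Reproducibility: the time source is injected, so tests pass a fixed clock
// instead of depending on the non-deterministic Date.now().
type Clock = () => number;

function formatTimestamp(clock: Clock): string {
  return new Date(clock()).toISOString();
}

// "beforeEach"-style setup: each test gets its own counter.
const testA = makeCounter();
testA.increment();
const testB = makeCounter(); // unaffected by anything testA did

const fixedClock: Clock = () => 0; // mocked time
console.log(testA.value(), testB.value(), formatTimestamp(fixedClock));
// → 1 0 1970-01-01T00:00:00.000Z
```

Shared module-level state or a direct `Date.now()` call would trip the two failure conditions in the table above.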
@@ -148,7 +148,7 @@ PRDs focus solely on "what to build." Implementation phases and task decompositi
 - [ ] Is feasibility considered?
 - [ ] Is there consistency with existing systems?
 - [ ] Are important relationships clearly expressed in mermaid diagrams?
-- [ ] **
+- [ ] **Content is limited to 'what to build' (no implementation phases or work plans)**
 - [ ] **For UI features: Are accessibility requirements documented?**
 - [ ] **For UI features: Are UI quality metrics defined (completion rate, error recovery, a11y targets)?**
 
@@ -164,8 +164,7 @@ Mode for extracting specifications from existing implementation to create PRD. U
 ### Basic Principles of Reverse PRD
 **Important**: Reverse PRD creates PRD for entire product feature, not just technical improvements.
 
-- **Target Unit**: Entire product feature (e.g., entire "search feature")
-- **Scope**: PRD covers the full product feature including user-facing behavior, data flow, and integration points
+- **Target Unit**: Entire product feature (e.g., entire "search feature"), not technical improvements alone
 
 ### External Scope Handling
 
@@ -177,7 +176,6 @@ When external scope is NOT provided:
 - Execute full scope discovery independently
 
 ### Reverse PRD Execution Policy
-**Create high-quality PRD through thorough investigation**
 
 **Language Standard**: Code is the single source of truth. Describe observable behavior in definitive form. When uncertain about a behavior, investigate the code further to confirm — move the claim to "Undetermined Items" only when the behavior genuinely cannot be determined from code alone (e.g., business intent behind a design choice).
 
@@ -259,7 +259,7 @@ This is intermediate output only. The final response must be the JSON result (St
 
 ## Important Principles
 
-
+**Principles**: Follow these to maintain high-quality React code:
 - **Zero Error Principle**: Resolve all errors and warnings
 - **Type System Convention**: Follow React Props/State TypeScript type safety principles
 - **Test Fix Criteria**: Understand existing React Testing Library test intent and fix appropriately
@@ -220,7 +220,7 @@ This is intermediate output only. The final response must be the JSON result (St
 
 ## Important Principles
 
-
+**Principles**: Follow these to maintain high-quality code:
 - **Zero Error Principle**: See coding-standards skill
 - **Type System Convention**: See typescript-rules skill (especially any type alternatives)
 - **Test Fix Criteria**: See typescript-testing skill
@@ -55,15 +55,15 @@ Scale determination and required document details follow documentation-criteria
 - **Medium**: 3-5 files, spanning multiple components
 - **Large**: 6+ files, architecture-level changes
 
-
+Note: ADR conditions (type system changes, data flow changes, architecture changes, external dependency changes) require ADR regardless of scale
 
 ### Important: Clear Determination Expressions
-
+Use only the following expressions for determinations:
 - "Mandatory": Definitely required based on scale or conditions
 - "Not required": Not needed based on scale or conditions
 - "Conditionally mandatory": Required only when specific conditions are met
 
-
+These prevent ambiguity in downstream AI decision-making.
 
 ## Conditions Requiring ADR
 
@@ -86,9 +86,9 @@ Detailed ADR creation conditions follow documentation-criteria skill.
 ### Complete Self-Containment Principle
 This agent executes each analysis independently and does not maintain previous state. This ensures:
 
--
--
--
+- **Consistent determinations** - Fixed rule-based determinations guarantee same output for same input
+- **Simplified state management** - No need for inter-session state sharing, maintaining simple implementation
+- **Complete requirements analysis** - Always analyzes the entire provided information holistically
 
 #### Methods to Guarantee Determination Consistency
 1. **Strict Adherence to Fixed Rules**
@@ -150,6 +150,6 @@ This agent executes each analysis independently and does not maintain previous s
 - [ ] Do I understand the user's true purpose?
 - [ ] Have I properly estimated the impact scope?
 - [ ] Have I correctly determined ADR necessity?
-- [ ] Have I
+- [ ] Have I identified all technical risks and dependencies?
 - [ ] Have I listed scopeDependencies for uncertain scale?
 - [ ] Final response is the JSON output
@@ -247,7 +247,7 @@ Includes additional fields:
 
 ## Constraints
 
--
+- Base every claim on evidence from code, configuration, or observable behavior
 - When relying on a single source, always note weak triangulation
-- Report low-confidence
+- Report all discoveries including low-confidence ones with appropriate confidence level
 
@@ -23,8 +23,7 @@ You operate with an independent context that does not apply CLAUDE.md principles
 ## Output Scope
 
 This agent outputs **solution derivation and recommendation presentation**.
-
-If there are doubts about the conclusion, only report the need for additional verification.
+Proceed to solution derivation based on the given conclusion after verifying consistency with the user report. When the conclusion conflicts with user-reported symptoms or lacks supporting evidence, report the specific inconsistency and request additional verification.
 
 ## Core Responsibilities
 
@@ -195,7 +195,7 @@ Task 3: [Content]
 
 ### Impact Scope Management
 - Allowed change scope: [Clearly defined]
--
+- Preserved areas: [Parts that remain unchanged]
 ```
 
 ## Output Format
@@ -243,7 +243,7 @@ Please execute decomposed tasks according to the order.
 ### Basic Considerations for Task Decomposition
 
 1. **Quality Assurance Considerations**
--
+- Include test creation/updates in every implementation task
 - Overall quality check separately executed in quality assurance process after each task completion (outside task responsibility scope)
 
 2. **Dependency Clarification**
@@ -130,7 +130,7 @@ Select and execute files with pattern `docs/plans/tasks/*-task-*.md` that have u
 - Overall Design Document → Understand system-wide context
 
 ### 3. Implementation Execution
-#### Pre-implementation Verification (Pattern 5
+#### Pre-implementation Verification (Duplication Check — Pattern 5 from coding-standards)
 1. **Read relevant Design Doc sections** and understand accurately
 2. **Investigate existing implementations**: Search for similar components/hooks in same domain/responsibility
 3. **Execute determination**: Determine continue/escalation per "Mandatory Judgment Criteria" above
@@ -131,7 +131,7 @@ Select and execute files with pattern `docs/plans/tasks/*-task-*.md` that have u
 
 ### 3. Implementation Execution
 #### Pre-implementation Verification (Pattern 5 Compliant)
-1. **Read relevant Design Doc sections** and
+1. **Read relevant Design Doc sections** and extract: interface contracts, data structures, dependency constraints
 2. **Investigate existing implementations**: Search for similar functions in same domain/responsibility
 3. **Execute determination**: Determine continue/escalation per "Mandatory Judgment Criteria" above
 
@@ -243,13 +243,13 @@ Implementation sample creation checklist:
 - **Function components required** (React standard, class components deprecated)
 - **Props type definitions required** (explicit type annotations for all Props)
 - **Custom hooks recommended** (for logic reuse and testability)
-- Type safety strategies (
+- Type safety strategies (use strict types: unknown + type guards for external API responses)
 - Error handling approaches (Error Boundary, error state management)
-- Environment variables (
+- Environment variables (store secrets server-side only)
 
 **Example Implementation Sample**:
 ```typescript
-//
+// Compliant: Function component with Props type definition
 type ButtonProps = {
   label: string
   onClick: () => void
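The "unknown + type guards" strategy named in this hunk can be sketched as follows. This is a minimal illustration, not code from the package; `User`, `isUser`, and `parseUser` are hypothetical names:

```typescript
// Hypothetical sketch: narrowing an untyped external API response
// with unknown plus a user-defined type guard, instead of casting
// with `as` or reaching for `any`.
type User = { id: string; name: string }

// Type guard: returns true only when the value has the User shape.
function isUser(value: unknown): value is User {
  if (typeof value !== "object" || value === null) return false
  const v = value as Record<string, unknown>
  return typeof v.id === "string" && typeof v.name === "string"
}

// Boundary function: external data enters as unknown and leaves
// either as a verified User or as a thrown error.
function parseUser(raw: unknown): User {
  if (!isUser(raw)) {
    throw new Error("Unexpected API response shape")
  }
  return raw // narrowed to User by the guard
}
```

The compiler then refues any property access on the raw value until the guard has run, which is the point of the convention.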
@@ -264,7 +264,7 @@ export function Button({ label, onClick, disabled = false }: ButtonProps) {
   )
 }
 
-//
+// Compliant: Custom hook with type safety
 function useUserData(userId: string) {
   const [user, setUser] = useState<User | null>(null)
   const [error, setError] = useState<Error | null>(null)
@@ -291,7 +291,7 @@ function useUserData(userId: string) {
   return { user, error }
 }
 
-//
+// Non-compliant: Class component (deprecated in modern React)
 class Button extends React.Component {
   render() { return <button>...</button> }
 }
@@ -62,7 +62,7 @@ Must be performed before Design Doc creation:
 - Record and distinguish between existing implementation locations and planned new locations
 
 2. **Existing Interface Investigation** (Only when changing existing features)
-- List
+- List every public method of target service with full signatures
 - Identify call sites with `Grep: "ServiceName\." --type ts`
 
 3. **Similar Functionality Search and Decision** (Pattern 5 prevention from coding-standards skill)
@@ -153,7 +153,8 @@ Must be performed when creating Design Doc:
 - For new_feature: specify AC verification method beyond unit tests (e.g., integration test against real dependencies)
 - For extension: specify regression verification method that proves existing behavior is preserved while new behavior is added
 - For refactoring: specify behavioral equivalence verification method (e.g., output comparison with existing implementation)
--
+- **Output comparison requirement** (all design_types that replace or modify existing behavior): Define concrete output comparison method — specify identical input, expected output fields/format, and how to diff. When codebase analysis provides `dataTransformationPipelines`, each pipeline step's output must be covered by the comparison
+- Define early verification point: what is the first thing to verify, and how, to confirm the approach is correct before scaling. For replacements/modifications, the default early verification point is an output comparison of at least one representative case. Exception: when the primary risk is not behavioral equivalence (e.g., schema compatibility, integration contract) — in that case, specify the alternative verification target and document why output comparison is deferred
 
 ### Change Impact Map【Required】
 Must be included when creating Design Doc:
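The output comparison this hunk requires can be sketched concretely. The sketch below is hypothetical (the package does not ship this code); `legacyTransform` and `newTransform` stand in for the implementation being replaced and its replacement, and the diff method is a field-by-field comparison:

```typescript
// Hypothetical sketch: behavioral-equivalence check between an
// existing implementation and its replacement, run on identical input.
type Row = { id: string; total: number }

// Existing implementation (the behavior to preserve).
function legacyTransform(input: number[]): Row {
  return { id: "r1", total: input.reduce((a, b) => a + b, 0) }
}

// Replacement under verification: must produce identical output.
function newTransform(input: number[]): Row {
  let total = 0
  for (const n of input) total += n
  return { id: "r1", total }
}

// Diff method: compare every expected output field, reporting the
// names of mismatched fields rather than a bare boolean.
function diffOutputs(expected: Row, actual: Row): string[] {
  const mismatches: string[] = []
  for (const key of Object.keys(expected) as (keyof Row)[]) {
    if (expected[key] !== actual[key]) mismatches.push(String(key))
  }
  return mismatches
}

// Identical representative input fed to both implementations.
const input = [1, 2, 3]
const mismatches = diffOutputs(legacyTransform(input), newTransform(input))
```

An empty mismatch list on at least one representative case is exactly the "early verification point" the bullet above defaults to; for multi-step pipelines, the same check would be repeated per pipeline step.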
@@ -214,6 +215,7 @@ Document state definitions and transitions for stateful components.
 - `dataModel` → populate data-related sections (schema references, data contracts)
 - `focusAreas` → prioritize investigation depth on flagged areas
 - `constraints` → incorporate into design constraints and assumptions
+- `dataTransformationPipelines` → populate Verification Strategy's Output Comparison section (each pipeline step must be covered by the comparison method)
 - Conduct additional investigation only for areas not covered by the analysis or flagged in `limitations`
 - **PRD**: PRD document (if exists)
 - **Documents to Create**: ADR, Design Doc, or both
@@ -309,6 +311,7 @@ Implementation sample creation checklist:
 - [ ] **Data representation decision documented** (when new structures introduced)
 - [ ] **Field propagation map included** (when fields cross boundaries)
 - [ ] **Verification Strategy defined** (correctness definition, verification method, timing, early verification point)
+- [ ] **Output comparison defined** when replacing/modifying existing behavior (input, expected output fields, diff method; covers all transformation pipeline steps from codebase analysis)
 
 **Reverse-engineer mode only**:
 - [ ] Every architectural claim cites file:line as evidence
@@ -340,8 +343,8 @@ Implementation sample creation checklist:
 - UI presentation method (layout, styling) → Focus on information availability
 
 **Example**:
--
--
+- Implementation detail (avoid): "Data is stored using specific technology X"
+- Observable behavior (preferred): "Saved data can be retrieved after system restart"
 
 *Note: Non-functional requirements (performance, reliability, scalability) are defined in "Non-functional Requirements" section*
 
@@ -104,7 +104,7 @@ Execute file output immediately (considered approved at execution).
 - [ ] If prototype provided: AC traceability table is complete with adoption decisions
 - [ ] If prototype provided: prototype is placed in `docs/ui-spec/assets/`
 - [ ] All TBDs in Open Items have owner and deadline
-- [ ]
+- [ ] All UI Spec requirements align with PRD requirements
 
 ## Important Design Principles
 