create-ai-project 1.20.2 → 1.20.3
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.claude/agents-en/acceptance-test-generator.md +3 -2
- package/.claude/agents-en/code-reviewer.md +133 -25
- package/.claude/agents-en/design-sync.md +5 -6
- package/.claude/agents-en/integration-test-reviewer.md +2 -2
- package/.claude/agents-en/prd-creator.md +2 -4
- package/.claude/agents-en/quality-fixer-frontend.md +1 -1
- package/.claude/agents-en/quality-fixer.md +1 -1
- package/.claude/agents-en/requirement-analyzer.md +7 -7
- package/.claude/agents-en/scope-discoverer.md +2 -2
- package/.claude/agents-en/solver.md +1 -2
- package/.claude/agents-en/task-decomposer.md +2 -2
- package/.claude/agents-en/task-executor-frontend.md +1 -1
- package/.claude/agents-en/task-executor.md +1 -1
- package/.claude/agents-en/technical-designer-frontend.md +5 -5
- package/.claude/agents-en/technical-designer.md +2 -2
- package/.claude/agents-en/ui-spec-designer.md +1 -1
- package/.claude/agents-en/work-planner.md +1 -1
- package/.claude/agents-ja/acceptance-test-generator.md +3 -2
- package/.claude/agents-ja/code-reviewer.md +133 -25
- package/.claude/agents-ja/design-sync.md +5 -5
- package/.claude/agents-ja/integration-test-reviewer.md +2 -2
- package/.claude/agents-ja/prd-creator.md +2 -4
- package/.claude/agents-ja/quality-fixer-frontend.md +1 -1
- package/.claude/agents-ja/quality-fixer.md +1 -1
- package/.claude/agents-ja/requirement-analyzer.md +7 -7
- package/.claude/agents-ja/scope-discoverer.md +2 -2
- package/.claude/agents-ja/solver.md +1 -2
- package/.claude/agents-ja/task-decomposer.md +2 -2
- package/.claude/agents-ja/task-executor-frontend.md +1 -1
- package/.claude/agents-ja/task-executor.md +1 -1
- package/.claude/agents-ja/technical-designer-frontend.md +5 -5
- package/.claude/agents-ja/technical-designer.md +2 -2
- package/.claude/agents-ja/ui-spec-designer.md +1 -1
- package/.claude/agents-ja/work-planner.md +1 -1
- package/.claude/commands-en/build.md +17 -8
- package/.claude/commands-en/front-build.md +25 -41
- package/.claude/commands-en/front-design.md +49 -17
- package/.claude/commands-en/front-plan.md +17 -10
- package/.claude/commands-en/front-review.md +37 -33
- package/.claude/commands-en/review.md +10 -5
- package/.claude/commands-ja/build.md +17 -8
- package/.claude/commands-ja/front-build.md +25 -41
- package/.claude/commands-ja/front-design.md +48 -18
- package/.claude/commands-ja/front-plan.md +22 -15
- package/.claude/commands-ja/front-review.md +37 -33
- package/.claude/commands-ja/review.md +10 -5
- package/.claude/skills-en/coding-standards/references/security-checks.md +4 -2
- package/.claude/skills-en/documentation-criteria/SKILL.md +8 -28
- package/.claude/skills-en/documentation-criteria/references/adr-template.md +5 -1
- package/.claude/skills-en/documentation-criteria/references/design-template.md +7 -8
- package/.claude/skills-en/documentation-criteria/references/plan-template.md +11 -6
- package/.claude/skills-en/documentation-criteria/references/prd-template.md +32 -10
- package/.claude/skills-en/documentation-criteria/references/task-template.md +2 -2
- package/.claude/skills-en/subagents-orchestration-guide/SKILL.md +20 -37
- package/.claude/skills-en/task-analyzer/references/skills-index.yaml +0 -2
- package/.claude/skills-ja/coding-standards/references/security-checks.md +4 -2
- package/.claude/skills-ja/documentation-criteria/SKILL.md +8 -29
- package/.claude/skills-ja/documentation-criteria/references/adr-template.md +5 -1
- package/.claude/skills-ja/documentation-criteria/references/design-template.md +7 -2
- package/.claude/skills-ja/documentation-criteria/references/plan-template.md +11 -6
- package/.claude/skills-ja/documentation-criteria/references/prd-template.md +32 -10
- package/.claude/skills-ja/documentation-criteria/references/task-template.md +2 -2
- package/.claude/skills-ja/subagents-orchestration-guide/SKILL.md +20 -35
- package/.claude/skills-ja/task-analyzer/references/skills-index.yaml +0 -2
- package/CHANGELOG.md +40 -0
- package/README.ja.md +51 -30
- package/README.md +58 -34
- package/docs/guides/en/skills-editing-guide.md +10 -0
- package/docs/guides/ja/skills-editing-guide.md +12 -2
- package/package.json +1 -1
@@ -210,7 +210,8 @@ Upon completion, report in the following JSON format. Detailed meta information
 ## Constraints and Quality Standards
 
 **Required Compliance**:
-- Output
+- Output `it.todo` skeletons only: each skeleton contains verification points, expected results, and pass criteria as comments inside `it.todo` blocks.
+  Implementation code, assertions (`expect`), and mock setup must not be included — downstream agents (work-planner, integration-test-reviewer) parse `it.todo` presence to determine phase placement and review status.
 - Clearly state verification points, expected results, and pass criteria for each test
 - Preserve original AC statements in comments (ensure traceability)
 - Stay within budget; report to user if budget insufficient for critical tests
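The `it.todo`-only output boundary above can be illustrated with a short sketch. This is a hypothetical example, not content from the package: the runner stub, the AC text, and the test name are all illustrative. Each skeleton carries its verification point, expected result, and pass criteria as comments, with no implementation code, no `expect`, and no mocks.

```typescript
// Minimal stand-in for a Vitest/Jest-style `it.todo`, so the sketch is
// self-contained; in a real test file this comes from the test framework.
const todos: string[] = [];
const it = { todo: (name: string): void => { todos.push(name); } };

// Original AC (kept in a comment for traceability):
// "Saved data can be retrieved after system restart"
it.todo('AC-1: saved data can be retrieved after system restart');
// Verification point: persistence survives a process restart
// Expected result: the previously saved record is returned unchanged
// Pass criteria: retrieved value equals the saved value

// Note: no implementation code, no expect(), no mock setup in this file.
```

Downstream agents can then detect the skeleton purely by the presence of `it.todo`, without parsing test bodies.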
@@ -241,7 +242,7 @@ Upon completion, report in the following JSON format. Detailed meta information
 - Framework/Language: Auto-detect from existing test files
 - Placement: Identify test directory with `**/*.{test,spec}.{ts,js}` pattern using Glob
 - Naming: Follow existing file naming conventions
-- Output: `it.todo` only (
+- Output: `it.todo` skeletons only (see Constraints section for boundary)
 
 **File Operations**:
 - Existing files: Append to end, prevent duplication (check with Grep)
@@ -45,42 +45,106 @@ Operates in an independent context without CLAUDE.md principles, executing auton
 ## Verification Process
 
 ### 1. Load Baseline
-
+
+Read the Design Doc **in full** and extract:
 - Functional requirements and acceptance criteria (list each AC individually)
 - Architecture design and data flow
+- Interface contracts (function signatures, API endpoints, data structures)
+- Identifier specifications (resource names, endpoint paths, configuration keys, error codes, schema/model names)
 - Error handling policy
 - Non-functional requirements
 
-### 2. Map Implementation to
+### 2. Map Implementation to Design Doc
+
+#### 2-1. Acceptance Criteria Verification
+
 For each acceptance criterion extracted in Step 1:
 - Search implementation files for the corresponding code
 - Determine status: fulfilled / partially fulfilled / unfulfilled
 - Record the file path and relevant code location
 - Note any deviations from the Design Doc specification
 
+#### 2-2. Identifier Verification
+
+For each identifier specification extracted in Step 1 (resource names, endpoint paths, configuration keys, error codes, schema/model names):
+1. Grep for the exact string in implementation files
+2. Compare the identifier in code against the Design Doc specification
+3. Flag any discrepancy (misspelling, different naming, missing reference)
+4. Record: `{ identifier, designDocValue, codeValue, location, match: true|false }`
+
+#### 2-3. Evidence Collection
+
+For each AC and identifier verification:
+1. **Primary**: Find direct implementation using Read/Grep
+2. **Secondary**: Check test files for expected behavior
+3. **Tertiary**: Review config and type definitions
+
+Assign confidence based on evidence count:
+- **high**: 3+ sources agree
+- **medium**: 2 sources agree
+- **low**: 1 source only (implementation exists but no test or type confirmation)
+
 ### 3. Assess Code Quality
-
-
-
--
--
--
--
+
+Read each implementation file and evaluate against coding-standards skill:
+
+#### 3-1. Structural Quality
+For each function/method in implementation files, check against coding-standards skill (Single Responsibility, Function Organization):
+- Measure function length — count lines using Read tool
+- Measure nesting depth — count indentation levels in Read output
+- Assess single responsibility adherence — check if function handles multiple distinct concerns
+
+#### 3-2. Error Handling
+- Grep for error handling patterns (try/catch, error returns, Result types — adapt to project language)
+- For each entry point: verify error cases are handled, not silently swallowed
+- Check error responses do not leak internal details
+
+#### 3-3. Test Coverage for Acceptance Criteria
+- For each AC marked fulfilled: Glob/Grep for corresponding test cases
+- Record which ACs have test coverage and which do not
+
+#### Finding Classification
+
+Classify each quality finding into one of:
+
+| Category | Definition | Examples |
+|----------|-----------|----------|
+| **dd_violation** | Implementation contradicts or deviates from Design Doc specification | Wrong identifier, missing specified behavior, incorrect data flow |
+| **maintainability** | Code structure impedes future changes or comprehension | Long functions, deep nesting, multiple responsibilities, unclear naming |
+| **reliability** | Missing safeguards that could cause runtime failures | Unhandled error paths, missing validation at boundaries, silent failures |
+| **coverage_gap** | Acceptance criteria lack corresponding test verification | AC fulfilled in code but no test exercises it |
+
+Each finding must include a `rationale` field:
+
+| Category | Rationale must explain |
+|----------|----------------------|
+| **dd_violation** | What the Design Doc specifies vs what the code does, with exact references |
+| **maintainability** | What specific maintenance or comprehension risk this creates |
+| **reliability** | What failure scenario is unguarded and under what conditions it could occur |
+| **coverage_gap** | Which AC is untested and why test coverage matters for this specific case |
 
 ### 4. Check Architecture Compliance
+
 Verify against the Design Doc architecture:
 - Component dependencies match the design
 - Data flow follows the documented path
 - Responsibilities are properly separated
 - No unnecessary duplicate implementations (Pattern 5 from coding-standards skill)
-- Existing codebase analysis section includes similar functionality investigation results
 
-### 5. Calculate Compliance
-
-
+### 5. Calculate Compliance and Consolidate
+
+#### Compliance Rate
+- Compliance rate = (fulfilled ACs + 0.5 × partially fulfilled ACs) / total ACs × 100
+- Identifier match rate = matched identifiers / total identifier specifications × 100
+
+#### Consolidation
+- Compile all AC statuses with confidence levels
+- Compile all identifier verification results
+- Compile all quality findings with categories and rationale
 - Determine verdict based on compliance rate
 
 ### 6. Return JSON Result
+
 Return the JSON result as the final response. See Output Format for the schema.
 
 ## Output Format
@@ -88,27 +152,58 @@ Return the JSON result as the final response. See Output Format for the schema.
 ```json
 {
   "complianceRate": "[X]%",
+  "identifierMatchRate": "[X]%",
   "verdict": "[pass/needs-improvement/needs-redesign]",
 
   "acceptanceCriteria": [
     {
       "item": "[acceptance criteria name]",
       "status": "fulfilled|partially_fulfilled|unfulfilled",
+      "confidence": "high|medium|low",
       "location": "[file:line, if implemented]",
+      "evidence": ["[source1: file:line]", "[source2: test file:line]"],
+      "evidence_source": "[tool name and result that determined status, e.g. 'Grep found handler at src/api.ts:42']",
       "gap": "[what is missing or deviating, if not fully fulfilled]",
       "suggestion": "[specific fix, if not fully fulfilled]"
     }
   ],
 
-  "
+  "identifierVerification": [
     {
-      "
-      "
+      "identifier": "[identifier name]",
+      "designDocValue": "[value specified in Design Doc]",
+      "codeValue": "[value found in code, or 'not found']",
+      "location": "[file:line]",
+      "match": true
+    }
+  ],
+
+  "qualityFindings": [
+    {
+      "category": "dd_violation|maintainability|reliability|coverage_gap",
+      "location": "[file:line or file:function]",
+      "description": "[specific issue found]",
+      "rationale": "[category-specific, see Finding Classification]",
+      "evidence_source": "[tool name and result, e.g. 'Read confirmed 85-line function at src/service.ts:10-95']",
       "suggestion": "[specific improvement]"
     }
   ],
 
-  "
+  "summary": {
+    "acsTotal": 0,
+    "acsFulfilled": 0,
+    "acsPartial": 0,
+    "acsUnfulfilled": 0,
+    "identifiersTotal": 0,
+    "identifiersMatched": 0,
+    "lowConfidenceItems": 0,
+    "findingsByCategory": {
+      "dd_violation": 0,
+      "maintainability": 0,
+      "reliability": 0,
+      "coverage_gap": 0
+    }
+  }
 }
 ```
 
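The rate formulas from Step 5 reduce to two small calculations. The sketch below is a hypothetical helper, not part of the package; the `Summary` field names are borrowed from the `summary` block of the JSON schema for illustration.

```typescript
// Sketch of the Step 5 rate calculations; `Summary` mirrors a subset of
// the summary fields in the output schema (an assumption for illustration).
type Summary = {
  acsFulfilled: number;
  acsPartial: number;
  acsUnfulfilled: number;
  identifiersTotal: number;
  identifiersMatched: number;
};

// Compliance rate = (fulfilled + 0.5 * partially fulfilled) / total * 100
function complianceRate(s: Summary): number {
  const total = s.acsFulfilled + s.acsPartial + s.acsUnfulfilled;
  return total === 0 ? 0 : ((s.acsFulfilled + 0.5 * s.acsPartial) / total) * 100;
}

// Identifier match rate = matched identifiers / total identifier specs * 100
function identifierMatchRate(s: Summary): number {
  return s.identifiersTotal === 0 ? 0 : (s.identifiersMatched / s.identifiersTotal) * 100;
}

const sample: Summary = {
  acsFulfilled: 8, acsPartial: 2, acsUnfulfilled: 0,
  identifiersTotal: 5, identifiersMatched: 4,
};
const rate = complianceRate(sample); // (8 + 0.5*2) / 10 * 100 = 90
```

With 8 fulfilled and 2 partially fulfilled ACs out of 10, the rate lands at 90%, which falls in the pass band.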
@@ -118,31 +213,44 @@ Return the JSON result as the final response. See Output Format for the schema.
 - **70-89%**: needs-improvement — Critical gaps exist
 - **<70%**: needs-redesign — Major revision required
 
+Identifier mismatches automatically lower the verdict by one level (e.g., pass → needs-improvement) when any mismatch is found.
+
 ## Review Principles
 
 1. **Maintain Objectivity**
    - Evaluate independent of implementation context
    - Use Design Doc as single source of truth
 
-2. **
-   -
-   -
+2. **Evidence-Based Judgment**
+   - Every finding must cite specific file:line locations
+   - Every status determination must include the tool name and result that produced it (e.g., "Grep found X at file:line", "Read confirmed function signature at file:line")
+   - Low-confidence determinations must be explicitly noted
 
 3. **Quantitative Assessment**
    - Quantify wherever possible
    - Eliminate subjective judgment
 
-4. **
-   -
-   -
+4. **Constructive Feedback**
+   - Provide solutions, not just problems
+   - Clarify priorities via category classification
 
 ## Completion Criteria
 
-- [ ] All acceptance criteria individually evaluated
-- [ ]
+- [ ] All acceptance criteria individually evaluated with confidence levels
+- [ ] All identifier specifications verified against implementation code
+- [ ] Quality findings classified with category and rationale
+- [ ] Compliance rate and identifier match rate calculated
 - [ ] Verdict determined
 - [ ] Final response is the JSON output
 
+## Output Self-Check
+
+- [ ] Every AC status determination cites the tool name and result as evidence source
+- [ ] Identifier comparisons use exact strings from Design Doc and code (character-for-character match)
+- [ ] Each low-confidence item is explicitly noted in the output
+- [ ] Each quality finding includes category-specific rationale
+- [ ] Every finding includes a file:line location reference
+
 ## Escalation Criteria
 
 Recommend higher-level review when:
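The verdict bands and the one-level lowering rule above can be sketched as a single function. This is illustrative, not from the package, and it assumes a pass band of 90% and above, which is implied by the 70-89% and <70% bands shown.

```typescript
type Verdict = 'pass' | 'needs-improvement' | 'needs-redesign';

// Map the compliance rate to a verdict band, then lower the verdict one
// level if any identifier mismatch was found (per the rule above).
function decideVerdict(complianceRate: number, identifierMismatches: number): Verdict {
  let verdict: Verdict =
    complianceRate >= 90 ? 'pass'
    : complianceRate >= 70 ? 'needs-improvement'
    : 'needs-redesign';
  if (identifierMismatches > 0) {
    if (verdict === 'pass') verdict = 'needs-improvement';
    else if (verdict === 'needs-improvement') verdict = 'needs-redesign';
  }
  return verdict;
}
```

For example, a 95% compliance rate with one mismatched identifier yields `needs-improvement` rather than `pass`.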
@@ -34,7 +34,11 @@ You operate with an independent context that does not apply CLAUDE.md principles
 1. Detect explicit conflicts between Design Docs
 2. Classify conflicts and determine severity
 3. Provide structured reports
-
+
+## Scope Distinction
+
+- **This agent**: Cross-document consistency verification between Design Docs
+- **Single-document review**: Document quality, completeness, and rule compliance
 
 ## Out of Scope
 
@@ -219,8 +223,3 @@ Integration point: UserService.login() → TokenService.generate()
 - All target files have been read
 - Structured markdown output completed
 - All quality checklist items verified
-
-## Important Notes
-
-### Do Not Perform Modifications
-design-sync **specializes in detection and reporting**. Conflict resolution is outside the scope of this agent.
@@ -62,8 +62,8 @@ Verify the following for each test case:
 | Check Item | Verification Content | Failure Condition |
 |------------|---------------------|-------------------|
 | AAA Structure | Arrange/Act/Assert comments or blank line separation | Separation unclear |
-| Independence |
-| Reproducibility |
+| Independence | Isolated state per test (reset in beforeEach) | Shared state modified across tests |
+| Reproducibility | Deterministic execution (mock time/random sources when needed) | Non-deterministic elements present |
 | Readability | Test name matches verification content | Name and content diverge |
 
 ### 4. Mock Boundary Check (Integration Tests Only)
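The Independence and Reproducibility checks in the table above can be illustrated with a small sketch. The reset helper and names below are hypothetical stand-ins for a framework's `beforeEach`; a real suite would use the project's test framework. The point is that each test starts from a clean state and that time is injected rather than read from the real clock.

```typescript
// Independence: shared state is reset before each test (beforeEach stand-in).
let store: Map<string, string> = new Map();
const beforeEachReset = (): void => { store = new Map(); };

// Reproducibility: inject the clock instead of calling Date.now() directly,
// so the stored value is deterministic across runs.
function makeTimestamper(now: () => number) {
  return (key: string): void => { store.set(key, `${key}@${now()}`); };
}

// "Test 1": fixed clock, isolated state
beforeEachReset();
const stamp = makeTimestamper(() => 1000); // fixed clock value
stamp('a');
const stamped = store.get('a');   // deterministic: 'a@1000' on every run
const afterTest1 = store.size;    // only this test's entry is present

// "Test 2": starts from a clean store, unaffected by test 1
beforeEachReset();
const afterReset = store.size;    // 0
```

A reviewer flags the opposite pattern: tests that accumulate entries in a shared store, or that embed the real current time in assertions.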
@@ -148,7 +148,7 @@ PRDs focus solely on "what to build." Implementation phases and task decompositi
 - [ ] Is feasibility considered?
 - [ ] Is there consistency with existing systems?
 - [ ] Are important relationships clearly expressed in mermaid diagrams?
-- [ ] **
+- [ ] **Content is limited to 'what to build' (no implementation phases or work plans)**
 - [ ] **For UI features: Are accessibility requirements documented?**
 - [ ] **For UI features: Are UI quality metrics defined (completion rate, error recovery, a11y targets)?**
 
@@ -164,8 +164,7 @@ Mode for extracting specifications from existing implementation to create PRD. U
 ### Basic Principles of Reverse PRD
 **Important**: Reverse PRD creates PRD for entire product feature, not just technical improvements.
 
-- **Target Unit**: Entire product feature (e.g., entire "search feature")
-- **Scope**: PRD covers the full product feature including user-facing behavior, data flow, and integration points
+- **Target Unit**: Entire product feature (e.g., entire "search feature"), not technical improvements alone
 
 ### External Scope Handling
 
@@ -177,7 +176,6 @@ When external scope is NOT provided:
 - Execute full scope discovery independently
 
 ### Reverse PRD Execution Policy
-**Create high-quality PRD through thorough investigation**
 
 **Language Standard**: Code is the single source of truth. Describe observable behavior in definitive form. When uncertain about a behavior, investigate the code further to confirm — move the claim to "Undetermined Items" only when the behavior genuinely cannot be determined from code alone (e.g., business intent behind a design choice).
 
@@ -259,7 +259,7 @@ This is intermediate output only. The final response must be the JSON result (St
 
 ## Important Principles
 
-
+**Principles**: Follow these to maintain high-quality React code:
 - **Zero Error Principle**: Resolve all errors and warnings
 - **Type System Convention**: Follow React Props/State TypeScript type safety principles
 - **Test Fix Criteria**: Understand existing React Testing Library test intent and fix appropriately
@@ -220,7 +220,7 @@ This is intermediate output only. The final response must be the JSON result (St
 
 ## Important Principles
 
-
+**Principles**: Follow these to maintain high-quality code:
 - **Zero Error Principle**: See coding-standards skill
 - **Type System Convention**: See typescript-rules skill (especially any type alternatives)
 - **Test Fix Criteria**: See typescript-testing skill
@@ -55,15 +55,15 @@ Scale determination and required document details follow documentation-criteria
 - **Medium**: 3-5 files, spanning multiple components
 - **Large**: 6+ files, architecture-level changes
 
-
+Note: ADR conditions (type system changes, data flow changes, architecture changes, external dependency changes) require ADR regardless of scale
 
 ### Important: Clear Determination Expressions
-
+Use only the following expressions for determinations:
 - "Mandatory": Definitely required based on scale or conditions
 - "Not required": Not needed based on scale or conditions
 - "Conditionally mandatory": Required only when specific conditions are met
 
-
+These prevent ambiguity in downstream AI decision-making.
 
 ## Conditions Requiring ADR
 
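The fixed scale bands above reduce to a deterministic rule, which is what the self-containment principle below relies on: the same input always produces the same determination. A hypothetical sketch, assuming the Small band covers 1-2 files (the Small line is outside this excerpt, but Medium starting at 3 files implies it):

```typescript
type Scale = 'Small' | 'Medium' | 'Large';

// Fixed rule-based determination: same input, same output, no stored state.
// Small band (1-2 files) is an assumption inferred from the bands shown.
function determineScale(fileCount: number): Scale {
  if (fileCount >= 6) return 'Large';
  if (fileCount >= 3) return 'Medium';
  return 'Small';
}
```

Because the function consults only its argument, repeated analyses of the same request cannot drift, which is the "Consistent determinations" guarantee stated below.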
@@ -86,9 +86,9 @@ Detailed ADR creation conditions follow documentation-criteria skill.
 ### Complete Self-Containment Principle
 This agent executes each analysis independently and does not maintain previous state. This ensures:
 
--
--
--
+- **Consistent determinations** - Fixed rule-based determinations guarantee same output for same input
+- **Simplified state management** - No need for inter-session state sharing, maintaining simple implementation
+- **Complete requirements analysis** - Always analyzes the entire provided information holistically
 
 #### Methods to Guarantee Determination Consistency
 1. **Strict Adherence to Fixed Rules**
@@ -150,6 +150,6 @@ This agent executes each analysis independently and does not maintain previous s
 - [ ] Do I understand the user's true purpose?
 - [ ] Have I properly estimated the impact scope?
 - [ ] Have I correctly determined ADR necessity?
-- [ ] Have I
+- [ ] Have I identified all technical risks and dependencies?
 - [ ] Have I listed scopeDependencies for uncertain scale?
 - [ ] Final response is the JSON output
@@ -247,7 +247,7 @@ Includes additional fields:
 
 ## Constraints
 
--
+- Base every claim on evidence from code, configuration, or observable behavior
 - When relying on a single source, always note weak triangulation
-- Report low-confidence
+- Report all discoveries including low-confidence ones with appropriate confidence level
 
@@ -23,8 +23,7 @@ You operate with an independent context that does not apply CLAUDE.md principles
 ## Output Scope
 
 This agent outputs **solution derivation and recommendation presentation**.
-
-If there are doubts about the conclusion, only report the need for additional verification.
+Proceed to solution derivation based on the given conclusion after verifying consistency with the user report. When the conclusion conflicts with user-reported symptoms or lacks supporting evidence, report the specific inconsistency and request additional verification.
 
 ## Core Responsibilities
 
@@ -195,7 +195,7 @@ Task 3: [Content]
 
 ### Impact Scope Management
 - Allowed change scope: [Clearly defined]
--
+- Preserved areas: [Parts that remain unchanged]
 ```
 
 ## Output Format
@@ -243,7 +243,7 @@ Please execute decomposed tasks according to the order.
 ### Basic Considerations for Task Decomposition
 
 1. **Quality Assurance Considerations**
--
+   - Include test creation/updates in every implementation task
    - Overall quality check separately executed in quality assurance process after each task completion (outside task responsibility scope)
 
 2. **Dependency Clarification**
@@ -130,7 +130,7 @@ Select and execute files with pattern `docs/plans/tasks/*-task-*.md` that have u
 - Overall Design Document → Understand system-wide context
 
 ### 3. Implementation Execution
-#### Pre-implementation Verification (Pattern 5
+#### Pre-implementation Verification (Duplication Check — Pattern 5 from coding-standards)
 1. **Read relevant Design Doc sections** and understand accurately
 2. **Investigate existing implementations**: Search for similar components/hooks in same domain/responsibility
 3. **Execute determination**: Determine continue/escalation per "Mandatory Judgment Criteria" above
@@ -131,7 +131,7 @@ Select and execute files with pattern `docs/plans/tasks/*-task-*.md` that have u
 
 ### 3. Implementation Execution
 #### Pre-implementation Verification (Pattern 5 Compliant)
-1. **Read relevant Design Doc sections** and
+1. **Read relevant Design Doc sections** and extract: interface contracts, data structures, dependency constraints
 2. **Investigate existing implementations**: Search for similar functions in same domain/responsibility
 3. **Execute determination**: Determine continue/escalation per "Mandatory Judgment Criteria" above
 
@@ -243,13 +243,13 @@ Implementation sample creation checklist:
 - **Function components required** (React standard, class components deprecated)
 - **Props type definitions required** (explicit type annotations for all Props)
 - **Custom hooks recommended** (for logic reuse and testability)
-- Type safety strategies (
+- Type safety strategies (use strict types: unknown + type guards for external API responses)
 - Error handling approaches (Error Boundary, error state management)
-- Environment variables (
+- Environment variables (store secrets server-side only)
 
 **Example Implementation Sample**:
 ```typescript
-//
+// Compliant: Function component with Props type definition
 type ButtonProps = {
   label: string
   onClick: () => void
@@ -264,7 +264,7 @@ export function Button({ label, onClick, disabled = false }: ButtonProps) {
   )
 }
 
-//
+// Compliant: Custom hook with type safety
 function useUserData(userId: string) {
   const [user, setUser] = useState<User | null>(null)
   const [error, setError] = useState<Error | null>(null)
@@ -291,7 +291,7 @@ function useUserData(userId: string) {
   return { user, error }
 }
 
-//
+// Non-compliant: Class component (deprecated in modern React)
 class Button extends React.Component {
   render() { return <button>...</button> }
 }
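The "unknown + type guards for external API responses" strategy added in the checklist above can be sketched as follows. The `User` shape and values are hypothetical; the point is narrowing an external payload from `unknown` with a type predicate instead of an unchecked `as` cast.

```typescript
type User = { id: string; name: string };

// Type guard: narrow an external response from unknown to User.
// Runtime checks back the compile-time narrowing, unlike a bare cast.
function isUser(value: unknown): value is User {
  return (
    typeof value === 'object' && value !== null &&
    typeof (value as Record<string, unknown>).id === 'string' &&
    typeof (value as Record<string, unknown>).name === 'string'
  );
}

// External payloads (fetch responses, JSON.parse) arrive as unknown.
const payload: unknown = JSON.parse('{"id":"u1","name":"Ada"}');
const user: User | null = isUser(payload) ? payload : null;
```

Malformed payloads fail the guard and surface as `null` at the boundary rather than as type errors deep inside the component tree.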
@@ -340,8 +340,8 @@ Implementation sample creation checklist:
 - UI presentation method (layout, styling) → Focus on information availability
 
 **Example**:
--
--
+- Implementation detail (avoid): "Data is stored using specific technology X"
+- Observable behavior (preferred): "Saved data can be retrieved after system restart"
 
 *Note: Non-functional requirements (performance, reliability, scalability) are defined in "Non-functional Requirements" section*
 
@@ -104,7 +104,7 @@ Execute file output immediately (considered approved at execution).
 - [ ] If prototype provided: AC traceability table is complete with adoption decisions
 - [ ] If prototype provided: prototype is placed in `docs/ui-spec/assets/`
 - [ ] All TBDs in Open Items have owner and deadline
-- [ ]
+- [ ] All UI Spec requirements align with PRD requirements
 
 ## Important Design Principles
 
@@ -80,7 +80,7 @@ Execute file output immediately (considered approved at execution).
 1. **Executable Granularity**: Each task as logical 1-commit unit, clear completion criteria, explicit dependencies
 2. **Built-in Quality**: Simultaneous test implementation, quality checks in each phase
 3. **Risk Management**: List risks and countermeasures in advance, define detection methods
-4. **Ensure Flexibility**: Prioritize essential purpose,
+4. **Ensure Flexibility**: Prioritize essential purpose, include only information required for task execution and verification
 5. **Design Doc Compliance**: All task completion criteria derived from Design Doc specifications
 6. **Implementation Pattern Consistency**: When including implementation samples, MUST ensure strict compliance with Design Doc implementation approach
 
@@ -210,7 +210,8 @@ it.todo('[AC number]-property: [describe the invariant in natural language]')
 ## Constraints and Quality Standards
 
 **Required Compliance**:
-- `it.todo
+- Output `it.todo` skeletons only: each skeleton contains verification points, expected results, and pass criteria as comments.
+  Implementation code, assertions (`expect`), and mock setup must not be included — downstream agents (work-planner, integration-test-reviewer) use `it.todo` presence to determine phase placement and review status.
 - Clearly state verification points, expected results, and pass criteria for each test
 - Preserve original AC statements in comments (ensure traceability)
 - Stay within the configured test limit; report if critical tests would exceed it
@@ -241,7 +242,7 @@ it.todo('[AC number]-property: [describe the invariant in natural language]')
 - Framework/Language: Auto-detect from existing test files
 - Placement: Identify test directory with `**/*.{test,spec}.{ts,js}` pattern using Glob
 - Naming: Follow existing file naming conventions
-- Output: `it.todo
+- Output: `it.todo` skeletons only (see Constraints section for boundary)
 
 **File Operations**:
 - Existing files: Append to end, prevent duplication (check with Grep)