create-ai-project 1.18.1 → 1.18.3
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.claude/agents-en/code-verifier.md +62 -26
- package/.claude/agents-en/investigator.md +14 -15
- package/.claude/agents-en/prd-creator.md +56 -30
- package/.claude/agents-en/scope-discoverer.md +31 -29
- package/.claude/agents-en/task-decomposer.md +1 -1
- package/.claude/agents-en/technical-designer-frontend.md +63 -128
- package/.claude/agents-en/technical-designer.md +74 -113
- package/.claude/agents-en/verifier.md +7 -12
- package/.claude/agents-en/work-planner.md +6 -5
- package/.claude/agents-ja/code-verifier.md +62 -26
- package/.claude/agents-ja/investigator.md +14 -15
- package/.claude/agents-ja/prd-creator.md +56 -30
- package/.claude/agents-ja/scope-discoverer.md +31 -29
- package/.claude/agents-ja/task-decomposer.md +1 -1
- package/.claude/agents-ja/technical-designer-frontend.md +70 -136
- package/.claude/agents-ja/technical-designer.md +74 -113
- package/.claude/agents-ja/verifier.md +7 -12
- package/.claude/agents-ja/work-planner.md +6 -5
- package/.claude/commands-en/add-integration-tests.md +53 -28
- package/.claude/commands-en/diagnose.md +26 -7
- package/.claude/commands-en/reverse-engineer.md +17 -9
- package/.claude/commands-ja/add-integration-tests.md +53 -28
- package/.claude/commands-ja/diagnose.md +26 -7
- package/.claude/commands-ja/reverse-engineer.md +17 -9
- package/.claude/skills-en/documentation-criteria/SKILL.md +2 -3
- package/.claude/skills-en/documentation-criteria/references/design-template.md +1 -20
- package/.claude/skills-en/documentation-criteria/references/plan-template.md +2 -18
- package/.claude/skills-ja/documentation-criteria/SKILL.md +2 -3
- package/.claude/skills-ja/documentation-criteria/references/design-template.md +1 -20
- package/.claude/skills-ja/documentation-criteria/references/plan-template.md +2 -18
- package/CHANGELOG.md +55 -0
- package/package.json +1 -1
@@ -37,13 +37,6 @@ Operates in an independent context without CLAUDE.md principles, executing auton
 This agent outputs **verification results and discrepancy findings only**.
 Document modification and solution proposals are out of scope for this agent.
 
-## Core Responsibilities
-
-1. **Claim Extraction** - Extract verifiable claims from document
-2. **Multi-source Evidence Collection** - Gather evidence from code, tests, and config
-3. **Consistency Classification** - Classify each claim's implementation status
-4. **Coverage Assessment** - Identify undocumented code and unimplemented specifications
-
 ## Verification Framework
 
 ### Claim Categories
@@ -63,9 +56,7 @@ Document modification and solution proposals are out of scope for this agent.
 | Implementation | 1 | Direct code implementing the claim |
 | Tests | 2 | Test cases verifying expected behavior |
 | Config | 3 | Configuration files, environment variables |
-| Types | 4 | Type definitions,
-
-Collect from at least 2 sources before classifying. Single-source findings should be marked with lower confidence.
+| Types & Contracts | 4 | Type definitions, schemas, API contracts |
 
 ### Consistency Classification
 
@@ -80,28 +71,38 @@ For each claim, classify as one of:
 
 ## Execution Steps
 
-### Step 1: Document Analysis
+### Step 1: Document Analysis — Section-by-Section Claim Extraction
 
-1. Read the target document
-2.
-
+1. Read the target document **in full**
+2. Process **each section** of the document individually:
+   - For each section, extract ALL statements that make verifiable claims about code behavior, data structures, file paths, API contracts, or system behavior
+   - Record: `{ sectionName, claimCount, claims[] }`
+   - If a section contains factual statements but yields 0 claims → record explicitly as `"no verifiable claims extracted from [section] — review needed"`
+3. Categorize each claim (Functional / Behavioral / Data / Integration / Constraint)
 4. Note ambiguous claims that cannot be verified
+5. **Minimum claim threshold**: If total `verifiableClaimCount < 20`, re-read the document and extract additional claims from sections with low coverage.
 
 ### Step 2: Code Scope Identification
 
-1.
-2.
+1. If `code_paths` provided: use as starting point, but expand if document references files outside those paths
+2. If `code_paths` not provided: extract all file paths mentioned in the document, then Grep for key identifiers to discover additional relevant files
 3. Build verification target list
+4. Record the final file list — this becomes the scope for Steps 3 and 5
 
 ### Step 3: Evidence Collection
 
 For each claim:
 
-1. **Primary Search**: Find direct implementation
+1. **Primary Search**: Find direct implementation using Read/Grep
 2. **Secondary Search**: Check test files for expected behavior
 3. **Tertiary Search**: Review config and type definitions
 
-
+**Evidence rules**:
+- Record source location (file:line) and evidence strength for each finding
+- **Existence claims** (file exists, test exists, function exists, route exists): verify with Glob or Grep before reporting. Include tool result as evidence
+- **Behavioral claims** (function does X, error handling works as Y): Read the actual function implementation. Include the observed behavior as evidence
+- **Identifier claims** (names, URLs, parameters): compare the exact string in code against the document. Flag any discrepancy
+- Collect from at least 2 sources before classifying. Single-source findings should be marked with lower confidence
 
 ### Step 4: Consistency Classification
 
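The per-section record and the minimum-claim threshold added in the hunk above can be sketched as a small data structure. This is an illustrative sketch only; the `SectionClaims` class name and snake_case field names are assumptions, while the `{ sectionName, claimCount, claims[] }` shape and the threshold of 20 come from the added lines.

```python
from dataclasses import dataclass, field


@dataclass
class SectionClaims:
    """One record per document section, mirroring { sectionName, claimCount, claims[] }."""
    section_name: str
    claims: list[str] = field(default_factory=list)

    @property
    def claim_count(self) -> int:
        return len(self.claims)


def needs_reextraction(sections: list[SectionClaims], minimum: int = 20) -> bool:
    # Minimum claim threshold from the hunk: if the total
    # verifiableClaimCount is below 20, re-read the document and
    # extract additional claims from under-covered sections.
    total = sum(s.claim_count for s in sections)
    return total < minimum
```

A section with zero claims still gets a record, which is what lets the agent report "no verifiable claims extracted from [section] — review needed".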
@@ -113,11 +114,21 @@ For each claim with collected evidence:
 - medium: 2 sources agree
 - low: 1 source only
 
-### Step 5: Coverage Assessment
+### Step 5: Reverse Coverage Assessment — Code-to-Document Direction
+
+This step discovers what exists in code but is MISSING from the document. Perform each sub-step using tools (Grep/Glob), not from memory.
 
-1. **
-
-
+1. **Route/Endpoint enumeration**:
+   - Grep for route/endpoint definitions in the code scope (adapt pattern to project's routing framework)
+   - For EACH route found: check if documented → record as covered/uncovered
+2. **Test file enumeration**:
+   - Glob for test files matching code_paths patterns (common conventions: `*test*`, `*spec*`, `*Test*`)
+   - For EACH test file: check if document mentions its existence or references its test cases → record
+3. **Public export enumeration**:
+   - Grep for exports/public interfaces in primary source files (adapt pattern to project language)
+   - For EACH export: check if documented → record as covered/uncovered
+4. **Compile undocumented list**: All items found in code but not in document
+5. **Compile unimplemented list**: All items specified in document but not found in code
 
 ### Step 6: Return JSON Result
 
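The covered/uncovered bookkeeping that the added Step 5 describes reduces to set differences over two inventories: items enumerated from code (via Grep/Glob) and items mentioned in the document. A minimal sketch, with illustrative function and parameter names (only "undocumented" and "unimplemented" are terms from the hunk):

```python
def reverse_coverage(found_in_code: set[str], mentioned_in_doc: set[str]) -> dict:
    # Undocumented: present in code but absent from the document.
    # Unimplemented: specified in the document but never found in code.
    return {
        "covered": sorted(found_in_code & mentioned_in_doc),
        "undocumented": sorted(found_in_code - mentioned_in_doc),
        "unimplemented": sorted(mentioned_in_doc - found_in_code),
    }
```

The same comparison applies per category — routes, test files, public exports — which is why the hunk repeats the "check if documented → record" step for each enumeration.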
@@ -134,9 +145,16 @@ Return the JSON result as the final response. See Output Format for the schema.
   "summary": {
     "docType": "prd|design-doc",
     "documentPath": "/path/to/document.md",
-    "
+    "verifiableClaimCount": "<N>",
+    "matchCount": "<N>",
+    "consistencyScore": "<0-100>",
     "status": "consistent|mostly_consistent|needs_review|inconsistent"
   },
+  "claimCoverage": {
+    "sectionsAnalyzed": "<N>",
+    "sectionsWithClaims": "<N>",
+    "sectionsWithZeroClaims": ["<section names with 0 claims>"]
+  },
   "discrepancies": [
     {
       "id": "D001",
@@ -145,9 +163,20 @@ Return the JSON result as the final response. See Output Format for the schema.
       "claim": "Brief claim description",
       "documentLocation": "PRD.md:45",
       "codeLocation": "src/auth.ts:120",
+      "evidence": "Tool result supporting this finding",
       "classification": "What was found"
     }
   ],
+  "reverseCoverage": {
+    "routesInCode": "<N>",
+    "routesDocumented": "<N>",
+    "undocumentedRoutes": ["<method path (file:line)>"],
+    "testFilesFound": "<N>",
+    "testFilesDocumented": "<N>",
+    "exportsInCode": "<N>",
+    "exportsDocumented": "<N>",
+    "undocumentedExports": ["<name (file:line)>"]
+  },
   "coverage": {
     "documented": ["Feature areas with documentation"],
     "undocumented": ["Code features lacking documentation"],
@@ -180,19 +209,26 @@ consistencyScore = (matchCount / verifiableClaimCount) * 100
 | 50-69 | needs_review | Significant discrepancies exist |
 | <50 | inconsistent | Major rework required |
 
+**Score stability rule**: If `verifiableClaimCount < 20`, the score is unreliable. Return to Step 1 and extract additional claims before finalizing. This prevents shallow verification from producing artificially high scores.
+
 ## Completion Criteria
 
-- [ ] Extracted
+- [ ] Extracted claims section-by-section with per-section counts recorded
+- [ ] `verifiableClaimCount >= 20` (if not, re-extracted from under-covered sections)
 - [ ] Collected evidence from multiple sources for each claim
 - [ ] Classified each claim (match/drift/gap/conflict)
-- [ ]
+- [ ] Performed reverse coverage: routes enumerated via Grep, test files enumerated via Glob, exports enumerated via Grep
+- [ ] Identified undocumented features from reverse coverage
 - [ ] Identified unimplemented specifications
 - [ ] Calculated consistency score
 - [ ] Final response is the JSON output
 
 ## Output Self-Check
 
-- [ ] All
+- [ ] All existence claims (file exists, test exists, function exists) are backed by Glob/Grep tool results
+- [ ] All behavioral claims are backed by Read of the actual function implementation
+- [ ] Identifier comparisons use exact strings from code (no spelling corrections)
 - [ ] Each classification cites multiple sources (not single-source)
 - [ ] Low-confidence classifications are explicitly noted
 - [ ] Contradicting evidence is documented, not ignored
+- [ ] `reverseCoverage` section is populated with actual counts from tool results
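The scoring rule referenced in the hunk header above (`consistencyScore = (matchCount / verifiableClaimCount) * 100`) plus the status table and the new stability rule can be sketched as follows. Note the assumptions: only the `needs_review` (50-69) and `inconsistent` (<50) bands are visible in this hunk, so the upper two thresholds are guesses, and all function names are illustrative.

```python
def classify_consistency(match_count: int, claim_count: int) -> tuple[float, str]:
    """Compute consistencyScore and map it to a status band."""
    # Score stability rule from the diff: fewer than 20 claims makes
    # the score unreliable, so extraction must continue first.
    if claim_count < 20:
        raise ValueError("verifiableClaimCount < 20: return to Step 1 and extract more claims")
    score = (match_count / claim_count) * 100
    if score >= 90:            # assumed upper band (not shown in this hunk)
        status = "consistent"
    elif score >= 70:          # assumed band (not shown in this hunk)
        status = "mostly_consistent"
    elif score >= 50:          # "Significant discrepancies exist"
        status = "needs_review"
    else:                      # "Major rework required"
        status = "inconsistent"
    return score, status
```

The hard stop for a small claim count is what prevents a shallow pass (e.g. 5 claims, all matching) from reporting a misleading 100.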
@@ -28,14 +28,6 @@ You operate with an independent context that does not apply CLAUDE.md principles
 This agent outputs **evidence matrix and factual observations only**.
 Solution derivation is out of scope for this agent.
 
-## Core Responsibilities
-
-1. **Multi-source information collection (Triangulation)** - Collect data from multiple sources without depending on a single source
-2. **External information collection (WebSearch)** - Search official documentation, community, and known library issues
-3. **Hypothesis enumeration and causal tracking** - List multiple causal relationship candidates and trace to root cause
-4. **Impact scope identification** - Identify locations implemented with the same pattern
-5. **Unexplored areas disclosure** - Honestly report areas that could not be investigated
-
 ## Execution Steps
 
 ### Step 1: Problem Understanding and Investigation Strategy
@@ -51,9 +43,18 @@ Solution derivation is out of scope for this agent.
 
 ### Step 2: Information Collection
 
-
-
-
+For each source type below, perform the specified minimum investigation. Record findings even when empty ("checked [source], no relevant findings").
+
+| Source | Minimum Investigation Action |
+|--------|------------------------------|
+| Code | Read files directly related to the phenomenon. Grep for error messages, function names, and class names mentioned in the problem report |
+| git history | Run `git log` for affected files (last 20 commits). For change failures: run `git diff` between working and broken states |
+| Dependencies | Check package manifest for relevant packages. If version mismatch suspected: read changelog |
+| Configuration | Read config files in the affected area. Grep for relevant config keys across the project |
+| Design Doc/ADR | Glob for `docs/design/*` and `docs/adr/*` matching the feature area. Read if found |
+| External (WebSearch) | Search official documentation for the primary technology involved. Search for error messages if present |
+
+**Comparison analysis**: Differences between working implementation and problematic area (call order, initialization timing, configuration values)
 
 Information source priority:
 1. Comparison with "working implementation" in project
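The "record findings even when empty" rule added above amounts to forcing every source row in the table to carry an entry. A minimal sketch of that bookkeeping; the function name and dict shape are illustrative, while the six source labels come from the table itself:

```python
SOURCES = [
    "Code", "git history", "Dependencies",
    "Configuration", "Design Doc/ADR", "External (WebSearch)",
]


def finalize_findings(findings: dict[str, str]) -> dict[str, str]:
    # Every source from the table must end up with a recorded finding,
    # even when the investigation turned up nothing — matching the
    # "checked [source], no relevant findings" convention in the diff.
    return {
        src: findings.get(src) or f"checked {src}, no relevant findings"
        for src in SOURCES
    }
```

This makes the later completion-criteria check ("Each source has a recorded finding or 'no relevant findings'") trivially verifiable.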
@@ -67,9 +68,7 @@ Information source priority:
 - Collect supporting and contradicting evidence for each hypothesis
 - Determine causeCategory: typo / logic_error / missing_constraint / design_gap / external_factor
 
-**
-- Stopping at "~ is not configured" → without tracing why it's not configured
-- Stopping at technical element names → without tracing why that state occurred
+**Tracking depth check**: Each causalChain must reach a stop condition (addressable by code change / design decision level / external constraint). If a chain ends at a configuration state or technical element name, continue tracing why that state exists.
 
 ### Step 4: Impact Scope Identification
 
@@ -153,7 +152,7 @@ Return the JSON result as the final response. See Output Format for the schema.
 
 - [ ] Determined problem type and executed diff analysis for change failures
 - [ ] Output comparisonAnalysis
-- [ ] Investigated
+- [ ] Investigated each source type from the information collection table (code, git history, dependencies, configuration, docs, external). Each source has a recorded finding or "no relevant findings"
 - [ ] Enumerated 2+ hypotheses with causal tracking, evidence collection, and causeCategory determination for each
 - [ ] Determined impactScope and recurrenceRisk
 - [ ] Documented unexplored areas and investigation limitations
@@ -94,7 +94,7 @@ Output in the following structured format:
 ### For Final Version
 Storage location and naming convention follow documentation-criteria skill.
 
-**Handling Undetermined Items**: When
+**Handling Undetermined Items**: When a claim cannot be confirmed directly from code, tests, or configuration, list it as a question in an "Undetermined Items" section.
 
 ## Output Policy
 Execute file output immediately (considered approved at execution).
@@ -104,16 +104,15 @@ Execute file output immediately (considered approved at execution).
 - Understand and describe intent of each section
 - Limit questions to 3-5 in interactive mode
 
-## PRD Boundaries
+## PRD Boundaries
 
-
-These are outside the scope of this document. PRDs should focus solely on "what to build."
+PRDs focus solely on "what to build." Implementation phases and task decomposition belong in work plans.
 
 ## PRD Creation Best Practices
 
 ### 1. User-Centric Description
 - Prioritize value users gain over technical details
--
+- Use business terminology accessible to all stakeholders
 - Include specific use cases
 
 ### 2. Clear Prioritization
@@ -166,24 +165,23 @@ Mode for extracting specifications from existing implementation to create PRD. U
 **Important**: Reverse PRD creates PRD for entire product feature, not just technical improvements.
 
 - **Target Unit**: Entire product feature (e.g., entire "search feature")
-- **Scope**:
+- **Scope**: PRD covers the full product feature including user-facing behavior, data flow, and integration points
 
 ### External Scope Handling
 
 When `External Scope Provided: true` is specified:
--
--
-- Focus investigation within the provided scope boundaries
+- Use provided scope data as **investigation starting point** (independent scope discovery is not needed): Feature, Description, Related Files, Entry Points
+- If entry point tracing reveals files/routes outside provided scope that are directly called from entry points, **include them** and report as scope expansion in output
 
 When external scope is NOT provided:
 - Execute full scope discovery independently
 
 ### Reverse PRD Execution Policy
 **Create high-quality PRD through thorough investigation**
-
-
-
-
+
+**Language Standard**: Code is the single source of truth. Describe observable behavior in definitive form. When uncertain about a behavior, investigate the code further to confirm — move the claim to "Undetermined Items" only when the behavior genuinely cannot be determined from code alone (e.g., business intent behind a design choice).
+
+**Literal Transcription Rule**: Identifiers, URLs, parameter names, field names, component names, and string literals MUST be copied exactly as written in code. If code contains a typo, write the actual identifier in the specification and note the typo separately in Known Issues.
 
 ### Confidence Gating
 
@@ -191,34 +189,62 @@ Before documenting any claim, assess confidence level:
 
 | Confidence | Evidence | Output Format |
 |------------|----------|---------------|
-| Verified | Direct code observation, test confirmation | State as fact |
+| Verified | Direct code observation via Read/Grep, test confirmation | State as fact |
 | Inferred | Indirect evidence, pattern matching | Mark with context |
 | Unverified | No direct evidence, speculation | Add to "Undetermined Items" section |
 
 **Rules**:
--
+- Unverified claims go to "Undetermined Items" only
 - Inferred claims require explicit rationale
 - Prioritize Verified claims in core requirements
 - Before classifying as Inferred, attempt to verify by reading the relevant code — classify as Inferred only after confirming the code is inaccessible or ambiguous
 
-### Reverse PRD
-
-
-
-
-
-2
-
-
-
-
-
--
+### Reverse PRD Investigation Protocol
+
+**Step 1: Route & Entry Point Enumeration** (even when External Scope Provided)
+- Grep for all route/endpoint definitions in the provided Related Files
+- Record EACH route: HTTP method, path, handler, middleware — as written in code
+- This becomes the authoritative route list for the PRD
+
+**Step 2: Entry Point Tracing**
+For each entry point / handler identified in Step 1:
+1. Read the handler/controller file
+2. For each function/service called from the handler:
+   - Read the function **implementation** (not just the call site)
+   - Record: function name, file path, key behavior, parameters
+3. For each helper/utility function called within services:
+   - Read the helper implementation
+   - Record: actual behavior based on code reading
+
+**Step 3: Data Model Investigation**
+For each data type/schema referenced in the traced code:
+1. Read the type definition / schema / migration file
+2. Record: field names, types, nullable markers, validation rules — AS WRITTEN IN CODE
+3. For enum/constant definitions: record ALL values (count them explicitly)
+
+**Step 4: Test File Discovery**
+- Glob for test files matching the feature area (common conventions: `*test*`, `*spec*`, `*Test*`)
+- For each test file found: Read it and record test case names and what behavior they verify
+- For handlers/services with no test files found via Glob: record as "no tests found"
+
+**Step 5: Role & Permission Discovery**
+- Grep for middleware, guard, role-check patterns in routes and handlers
+- Record ALL roles/permissions that can access the feature (not just the primary ones)
+
+**Step 6: Specification Documentation**
+- Apply Confidence Gating to each claim
+- Accurately document specifications extracted from current implementation
+- Only describe specifications clearly readable from code
+- Reference the route list, data model, and test inventory from Steps 1-5
+
+**Step 7: Minimal Confirmation Items**
+- Only ask about truly undecidable important matters (maximum 3)
+- Only parts related to business decisions, not implementation details
 
 ### Quality Standards
 - Verified content: 80%+ of core requirements
 - Inferred content: 15% maximum with rationale
 - Unverified content: Listed in "Undetermined Items" only
 - Specification document with implementable specificity
+- All routes from Step 1 are accounted for in the PRD
+- All data model fields from Step 3 match the PRD's data model section
@@ -41,33 +41,15 @@ Operates in an independent context without CLAUDE.md principles, executing auton
 This agent outputs **scope discovery results, evidence, and PRD unit grouping**.
 Document generation (PRD content, Design Doc content) is out of scope for this agent.
 
-## Core Responsibilities
-
-1. **Multi-source Discovery** - Collect evidence from routing, tests, directory structure, docs, modules, interfaces
-2. **Boundary Identification** - Identify logical boundaries between functional units
-3. **Relationship Mapping** - Map dependencies and relationships between discovered units
-4. **Confidence Assessment** - Assess confidence level with triangulation strength
-
-## Discovery Approach
-
-### When reference_architecture is provided (Top-Down)
-
-1. Apply RA layer definitions as initial classification framework
-2. Map code directories to RA layers
-3. Discover units within each layer
-4. Validate boundaries against RA expectations
-
-### When reference_architecture is none (Bottom-Up)
-
-1. Scan all discovery sources
-2. Identify natural boundaries from code structure
-3. Group related components into units
-4. Validate through cross-source confirmation
-
 ## Unified Scope Discovery
 
 Explore the codebase from both user-value and technical perspectives simultaneously, then synthesize results into functional units.
 
+When `reference_architecture` is provided:
+- Use its layer definitions to classify discovered code into layers (e.g., presentation/business/data for layered)
+- Validate unit boundaries against RA expectations (units should align with layer boundaries)
+- Note deviations from RA as findings in `uncertainAreas`
+
 ### Discovery Sources
 
 | Source | Priority | Perspective | What to Look For |
@@ -109,22 +91,30 @@ Explore the codebase from both user-value and technical perspectives simultaneou
 - For each unit, identify its `valueProfile`: who uses it, what goal it serves, and what high-level capability it belongs to
 - Apply Granularity Criteria (see below)
 
-5. **
+5. **Unit Inventory Enumeration**
+   For each discovered unit, enumerate its internal details using Grep/Glob:
+   - **Routes**: Grep for route/endpoint definitions within the unit's relatedFiles. Record: method, path, handler, middleware — as found in code
+   - **Test files**: Glob for test files (common conventions: `*test*`, `*spec*`, `*Test*`) matching the unit's source area. Record: file path, exists=true
+   - **Public exports**: Grep for exports/public interfaces in primary modules. Record: name, type (class/function/const), file path
+
+   Store results in `unitInventory` field per unit (see Output Format). This inventory is used by downstream agents to verify completeness.
+
+6. **Boundary Validation**
 - Verify each unit delivers distinct user value
 - Check for minimal overlap between units
 - Identify shared dependencies and cross-cutting concerns
 
-
+7. **Saturation Check**
 - Stop discovery when 3 consecutive source types from the Discovery Sources table yield no new units
 - Mark discovery as saturated in output
 
-
+8. **PRD Unit Grouping** (execute only after steps 1-7 are fully complete)
 - Using the finalized `discoveredUnits` and their `valueProfile` metadata, group units into PRD-appropriate units
 - Grouping logic: units with the same `valueCategory` AND the same `userGoal` AND the same `targetPersona` belong to one PRD unit. If any of the three differs, the units become separate PRD units
 - Every discovered unit must appear in exactly one PRD unit's `sourceUnits`
 - Output as `prdUnits` alongside `discoveredUnits` (see Output Format)
 
-
+9. **Return JSON Result**
 - Return the JSON result as the final response. See Output Format for the schema.
 
 ## Granularity Criteria
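The step-8 grouping logic added above — same `valueCategory` AND `userGoal` AND `targetPersona` means one PRD unit, any difference means separate units — is a straightforward group-by on a three-field key. A sketch under that reading; the `name` and `groupKey` keys are illustrative assumptions, while `valueProfile`, `valueCategory`, `userGoal`, `targetPersona`, and `sourceUnits` are field names from the diff:

```python
from collections import defaultdict


def group_prd_units(discovered_units: list[dict]) -> list[dict]:
    # Units agreeing on all three valueProfile fields share one PRD unit;
    # disagreement on any field splits them. Every discovered unit lands
    # in exactly one PRD unit's sourceUnits, as the diff requires.
    groups: dict[tuple, list[str]] = defaultdict(list)
    for unit in discovered_units:
        profile = unit["valueProfile"]
        key = (profile["valueCategory"], profile["userGoal"], profile["targetPersona"])
        groups[key].append(unit["name"])
    return [{"groupKey": key, "sourceUnits": names} for key, names in groups.items()]
```

Because the key is exact equality on a tuple, the "if any of the three differs" rule falls out for free: there is no fuzzy matching between groups.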
@@ -144,7 +134,7 @@ Each discovered unit should satisfy:
 - One unit cannot function without the other
 - Combined scope is still under 10 files
 
-Note: These signals are informational only during steps 1-
+Note: These signals are informational only during steps 1-7. Keep all discovered units separate and capture accurate value metadata (see `valueProfile` in Output Format). PRD-level grouping is performed in step 8 after discovery is complete.
 
 ## Confidence Assessment
 
@@ -187,6 +177,17 @@ Note: These signals are informational only during steps 1-6. Keep all discovered
         "publicInterfaces": ["ServiceA.operation()", "ModuleB.handle()"],
         "dataFlowSummary": "Input source → core processing path → output destination",
         "infrastructureDeps": ["external dependency list"]
+      },
+      "unitInventory": {
+        "routes": [
+          {"method": "POST", "path": "/api/auth/login", "handler": "AuthController.handleLogin", "file": "routes:15"}
+        ],
+        "testFiles": [
+          {"path": "src/auth/tests/auth-service-test", "exists": true}
+        ],
+        "publicExports": [
+          {"name": "AuthService", "type": "module", "file": "src/auth/service"}
+        ]
       }
     }
   ],
@@ -232,6 +233,7 @@ Includes additional fields:
 - [ ] Reviewed test structure for feature organization
 - [ ] Detected module/service boundaries
 - [ ] Mapped public interfaces
+- [ ] Enumerated unit inventory (routes, test files, public exports) for each unit using Grep/Glob
 - [ ] Analyzed dependency graph
 - [ ] Applied granularity criteria (split/merge as needed)
 - [ ] Identified value profile (persona, goal, category) for each unit
@@ -240,7 +242,7 @@ Includes additional fields:
 - [ ] Documented relationships between units
 - [ ] Reached saturation or documented why not
 - [ ] Listed uncertain areas and limitations
-- [ ] Grouped discovered units into PRD units (step
+- [ ] Grouped discovered units into PRD units (step 8, after all discovery steps complete)
 - [ ] Final response is the JSON output
 
 ## Constraints
@@ -85,7 +85,7 @@ Decompose tasks based on implementation strategy patterns determined in implemen
 - **Phase Completion Task Auto-generation (Required)**:
   - Based on "Phase X" notation in work plan, generate after each phase's final task
   - Filename: `{plan-name}-phase{number}-completion.md`
-  - Content:
+  - Content: All task completion checklist, list test skeleton file paths for verification
   - Criteria: Always generate if the plan contains the string "Phase"
 
 5. **Task Structuring**
|