create-ai-project 1.17.1 → 1.18.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (62)
  1. package/.claude/agents-en/code-reviewer.md +64 -91
  2. package/.claude/agents-en/code-verifier.md +5 -1
  3. package/.claude/agents-en/document-reviewer.md +4 -2
  4. package/.claude/agents-en/integration-test-reviewer.md +10 -0
  5. package/.claude/agents-en/investigator.md +6 -2
  6. package/.claude/agents-en/quality-fixer-frontend.md +15 -5
  7. package/.claude/agents-en/quality-fixer.md +15 -5
  8. package/.claude/agents-en/requirement-analyzer.md +31 -26
  9. package/.claude/agents-en/rule-advisor.md +9 -0
  10. package/.claude/agents-en/scope-discoverer.md +34 -4
  11. package/.claude/agents-en/security-reviewer.md +143 -0
  12. package/.claude/agents-en/solver.md +6 -2
  13. package/.claude/agents-en/task-executor-frontend.md +14 -0
  14. package/.claude/agents-en/task-executor.md +14 -0
  15. package/.claude/agents-en/verifier.md +6 -1
  16. package/.claude/agents-en/work-planner.md +35 -35
  17. package/.claude/agents-ja/acceptance-test-generator.md +6 -0
  18. package/.claude/agents-ja/code-reviewer.md +74 -95
  19. package/.claude/agents-ja/code-verifier.md +5 -1
  20. package/.claude/agents-ja/design-sync.md +5 -0
  21. package/.claude/agents-ja/document-reviewer.md +4 -2
  22. package/.claude/agents-ja/integration-test-reviewer.md +14 -0
  23. package/.claude/agents-ja/investigator.md +6 -2
  24. package/.claude/agents-ja/quality-fixer-frontend.md +15 -5
  25. package/.claude/agents-ja/quality-fixer.md +15 -5
  26. package/.claude/agents-ja/requirement-analyzer.md +35 -24
  27. package/.claude/agents-ja/rule-advisor.md +9 -0
  28. package/.claude/agents-ja/scope-discoverer.md +33 -3
  29. package/.claude/agents-ja/security-reviewer.md +143 -0
  30. package/.claude/agents-ja/solver.md +6 -2
  31. package/.claude/agents-ja/task-executor-frontend.md +14 -0
  32. package/.claude/agents-ja/task-executor.md +14 -0
  33. package/.claude/agents-ja/verifier.md +6 -1
  34. package/.claude/agents-ja/work-planner.md +35 -33
  35. package/.claude/commands-en/build.md +10 -1
  36. package/.claude/commands-en/front-build.md +11 -2
  37. package/.claude/commands-en/front-review.md +87 -26
  38. package/.claude/commands-en/implement.md +10 -1
  39. package/.claude/commands-en/reverse-engineer.md +13 -9
  40. package/.claude/commands-en/review.md +89 -28
  41. package/.claude/commands-en/update-doc.md +10 -5
  42. package/.claude/commands-ja/build.md +10 -1
  43. package/.claude/commands-ja/front-build.md +11 -2
  44. package/.claude/commands-ja/front-review.md +92 -31
  45. package/.claude/commands-ja/implement.md +10 -1
  46. package/.claude/commands-ja/reverse-engineer.md +13 -9
  47. package/.claude/commands-ja/review.md +87 -26
  48. package/.claude/commands-ja/update-doc.md +10 -5
  49. package/.claude/skills-en/coding-standards/SKILL.md +25 -0
  50. package/.claude/skills-en/coding-standards/references/security-checks.md +62 -0
  51. package/.claude/skills-en/documentation-criteria/references/design-template.md +7 -1
  52. package/.claude/skills-en/documentation-criteria/references/plan-template.md +1 -0
  53. package/.claude/skills-en/subagents-orchestration-guide/SKILL.md +19 -10
  54. package/.claude/skills-en/task-analyzer/references/skills-index.yaml +1 -0
  55. package/.claude/skills-ja/coding-standards/SKILL.md +25 -0
  56. package/.claude/skills-ja/coding-standards/references/security-checks.md +62 -0
  57. package/.claude/skills-ja/documentation-criteria/references/design-template.md +7 -1
  58. package/.claude/skills-ja/documentation-criteria/references/plan-template.md +1 -0
  59. package/.claude/skills-ja/subagents-orchestration-guide/SKILL.md +19 -10
  60. package/.claude/skills-ja/task-analyzer/references/skills-index.yaml +1 -0
  61. package/CHANGELOG.md +52 -0
  62. package/package.json +1 -1
@@ -36,98 +36,70 @@ Operates in an independent context without CLAUDE.md principles, executing auton
  - Clear identification of gaps
  - Concrete improvement suggestions
 
- ## Required Information
-
- - **Design Doc Path**: Design Document path for validation baseline
- - **Implementation Files**: List of files to review
- - **Work Plan Path** (optional): For completed task verification
- - **Review Mode**:
- - `full`: Complete validation (default)
- - `acceptance`: Acceptance criteria only
- - `architecture`: Architecture compliance only
-
- ## Validation Process
-
- ### 1. Load Baseline Documents
- ```
- 1. Load Design Doc and extract:
- - Functional requirements and acceptance criteria
- - Architecture design
- - Data flow
- - Error handling policy
- ```
-
- ### 2. Implementation Validation
- ```
- 2. Validate each implementation file:
- - Acceptance criteria implementation
- - Interface compliance
- - Error handling implementation
- - Test case existence
- ```
-
- ### 3. Code Quality Check
- ```
- 3. Check key quality metrics:
- - Function length (ideal: <50 lines, max: 200 lines)
- - Nesting depth (ideal: ≤3 levels, max: 4 levels)
- - Single responsibility principle
- - Appropriate error handling
- ```
-
- ### 4. Compliance Calculation
- ```
- 4. Overall evaluation:
- Compliance rate = (fulfilled items / total acceptance criteria) × 100
- *Critical items flagged separately
- ```
-
- ## Validation Checklist
-
- ### Functional Requirements
- - [ ] All acceptance criteria have corresponding implementations
- - [ ] Happy path scenarios implemented
- - [ ] Error scenarios handled
- - [ ] Edge cases considered
-
- ### Architecture Validation
- - [ ] Implementation matches Design Doc architecture
- - [ ] Data flow follows design
- - [ ] Component dependencies correct
- - [ ] Responsibilities properly separated
- - [ ] Existing codebase analysis section includes similar functionality investigation results
- - [ ] No unnecessary duplicate implementations (Pattern 5 from coding-standards skill)
-
- ### Quality Validation
- - [ ] Comprehensive error handling
- - [ ] Appropriate logging
- - [ ] Tests cover acceptance criteria
- - [ ] Type definitions match Design Doc
-
- ### Code Quality Items
- - [ ] **Function length**: Appropriate (ideal: <50 lines, max: 200)
- - [ ] **Nesting depth**: Not too deep (ideal: ≤3 levels)
- - [ ] **Single responsibility**: One function/class = one responsibility
- - [ ] **Error handling**: Properly implemented
- - [ ] **Test coverage**: Tests exist for acceptance criteria
+ ## Input Parameters
+
+ - **designDoc**: Path to the Design Doc (or multiple paths for fullstack features)
+ - **implementationFiles**: List of files to review (or git diff range)
+ - **reviewMode**: `full` (default) | `acceptance` | `architecture`
+
+ ## Verification Process
+
+ ### 1. Load Baseline
+ Read the Design Doc and extract:
+ - Functional requirements and acceptance criteria (list each AC individually)
+ - Architecture design and data flow
+ - Error handling policy
+ - Non-functional requirements
+
+ ### 2. Map Implementation to Acceptance Criteria
+ For each acceptance criterion extracted in Step 1:
+ - Search implementation files for the corresponding code
+ - Determine status: fulfilled / partially fulfilled / unfulfilled
+ - Record the file path and relevant code location
+ - Note any deviations from the Design Doc specification
+
+ ### 3. Assess Code Quality
+ Read each implementation file and check:
+ - Function length (ideal: <50 lines, max: 200 lines)
+ - Nesting depth (ideal: ≤3 levels, max: 4 levels)
+ - Single responsibility adherence
+ - Error handling implementation
+ - Appropriate logging
+ - Test coverage for acceptance criteria
+
+ ### 4. Check Architecture Compliance
+ Verify against the Design Doc architecture:
+ - Component dependencies match the design
+ - Data flow follows the documented path
+ - Responsibilities are properly separated
+ - No unnecessary duplicate implementations (Pattern 5 from coding-standards skill)
+ - Existing codebase analysis section includes similar functionality investigation results
+
+ ### 5. Calculate Compliance
+ - Compliance rate = (fulfilled items + 0.5 × partially fulfilled items) / total AC items × 100
+ - Compile all AC statuses, quality issues with specific locations
+ - Determine verdict based on compliance rate
+
+ ### 6. Return JSON Result
+ Return the JSON result as the final response. See Output Format for the schema.
 
  ## Output Format
 
- ### Concise Structured Report
-
  ```json
  {
  "complianceRate": "[X]%",
  "verdict": "[pass/needs-improvement/needs-redesign]",
-
- "unfulfilledItems": [
+
+ "acceptanceCriteria": [
  {
  "item": "[acceptance criteria name]",
- "priority": "[high/medium/low]",
- "solution": "[specific implementation approach]"
+ "status": "fulfilled|partially_fulfilled|unfulfilled",
+ "location": "[file:line, if implemented]",
+ "gap": "[what is missing or deviating, if not fully fulfilled]",
+ "suggestion": "[specific fix, if not fully fulfilled]"
  }
  ],
-
+
  "qualityIssues": [
  {
  "type": "[long-function/deep-nesting/multiple-responsibilities]",
@@ -135,22 +107,16 @@ Operates in an independent context without CLAUDE.md principles, executing auton
  "suggestion": "[specific improvement]"
  }
  ],
-
+
  "nextAction": "[highest priority action needed]"
  }
  ```
 
  ## Verdict Criteria
 
- ### Compliance-based Verdict
- - **90%+**: Excellent - Minor adjustments only
- - **70-89%**: ⚠️ Needs improvement - Critical gaps exist
- - **<70%**: ❌ Needs redesign - Major revision required
-
- ### Critical Item Handling
- - **Missing requirements**: Flag individually
- - **Insufficient error handling**: Mark as improvement item
- - **Missing tests**: Suggest additions
+ - **90%+**: pass — Minor adjustments only
+ - **70-89%**: needs-improvement — Critical gaps exist
+ - **<70%**: needs-redesign — Major revision required
 
  ## Review Principles
 
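The Step 5 formula and the verdict thresholds introduced in this version can be sketched together. A minimal illustration in Python (the function names are ours, not part of the package):

```python
def compliance_rate(fulfilled: int, partially_fulfilled: int, total_ac: int) -> float:
    """Compliance rate = (fulfilled + 0.5 * partially fulfilled) / total AC items * 100."""
    if total_ac == 0:
        return 0.0  # assumption: no acceptance criteria yields 0%
    return (fulfilled + 0.5 * partially_fulfilled) / total_ac * 100

def verdict(rate: float) -> str:
    """Map the compliance rate to the verdict values used in the JSON schema."""
    if rate >= 90:
        return "pass"
    if rate >= 70:
        return "needs-improvement"
    return "needs-redesign"

# 7 fulfilled, 2 partially fulfilled, 1 unfulfilled out of 10 ACs:
print(verdict(compliance_rate(7, 2, 10)))  # needs-improvement (80%)
```

The 0.5 weighting for partially fulfilled items is the substantive change here; the old formula counted only fully fulfilled items.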
@@ -170,6 +136,13 @@ Operates in an independent context without CLAUDE.md principles, executing auton
  - Acknowledge good implementations
  - Present improvements as actionable items
 
+ ## Completion Criteria
+
+ - [ ] All acceptance criteria individually evaluated
+ - [ ] Compliance rate calculated
+ - [ ] Verdict determined
+ - [ ] Final response is the JSON output
+
  ## Escalation Criteria
 
  Recommend higher-level review when:
@@ -119,6 +119,10 @@ For each claim with collected evidence:
  2. **Implementation Coverage**: What percentage of specs are implemented?
  3. List undocumented features and unimplemented specs
 
+ ### Step 6: Return JSON Result
+
+ Return the JSON result as the final response. See Output Format for the schema.
+
  ## Output Format
 
  **JSON format is mandatory.**
@@ -184,7 +188,7 @@ consistencyScore = (matchCount / verifiableClaimCount) * 100
  - [ ] Identified undocumented features in code
  - [ ] Identified unimplemented specifications
  - [ ] Calculated consistency score
- - [ ] Output in specified format
+ - [ ] Final response is the JSON output
 
  ## Output Self-Check
 
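The `consistencyScore` formula quoted in the hunk header above is a plain ratio. A minimal sketch (the zero-claim fallback is our assumption):

```python
def consistency_score(match_count: int, verifiable_claim_count: int) -> float:
    # consistencyScore = (matchCount / verifiableClaimCount) * 100
    if verifiable_claim_count == 0:
        return 0.0  # assumption: nothing verifiable scores 0
    return match_count / verifiable_claim_count * 100

print(consistency_score(18, 20))  # 90.0
```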
@@ -112,13 +112,15 @@ Checklist:
  - [ ] If prior_context_count > 0: Each item has resolution status
  - [ ] If prior_context_count > 0: `prior_context_check` object prepared
  - [ ] Output is valid JSON
+ - [ ] Final response is the JSON output
 
  Complete all items before proceeding to output.
 
- ### Step 6: Review Result Report
- - Output results in JSON format according to perspective
+ ### Step 6: Return JSON Result
+ - Use the JSON schema according to review mode (comprehensive or perspective-specific)
  - Clearly classify problem importance
  - Include `prior_context_check` object if prior_context_count > 0
+ - Return the JSON result as the final response. See Output Format for the schema.
 
  ## Output Format
 
@@ -74,6 +74,9 @@ Verify the following for each test case:
  | Internal Components | Use actual | Unnecessary mocking |
  | Log Output Verification | Use vi.fn() | Mock without verification |
 
+ ### 4. Return JSON Result
+ Return the JSON result as the final response. See Output Format for the schema.
+
  ## Output Format
 
  ### Structured Response
@@ -194,3 +197,10 @@ When needs_revision decision, output fix instructions usable in subsequent proce
  - IF `@dependency: full-system` → mock usage is FAILURE
  - Verify execution timing: AFTER all components are implemented
  - Verify critical user journey coverage is COMPLETE
+
+ ## Completion Criteria
+
+ - [ ] All skeleton comments verified against implementation
+ - [ ] Implementation quality evaluated
+ - [ ] Mock boundaries verified (integration tests)
+ - [ ] Final response is the JSON output
@@ -71,12 +71,15 @@ Information source priority:
  - Stopping at "~ is not configured" → without tracing why it's not configured
  - Stopping at technical element names → without tracing why that state occurred
 
- ### Step 4: Impact Scope Identification and Output
+ ### Step 4: Impact Scope Identification
 
  - Search for locations implemented with the same pattern (impactScope)
  - Determine recurrenceRisk: low (isolated) / medium (2 or fewer locations) / high (3+ locations or design_gap)
  - Disclose unexplored areas and investigation limitations
- - Output in JSON format
+
+ ### Step 5: Return JSON Result
+
+ Return the JSON result as the final response. See Output Format for the schema.
 
  ## Evidence Strength Classification
 
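The recurrenceRisk rule in Step 4 reads as a three-way branch. A sketch under one interpretation, treating "isolated" as a single occurrence and `design_gap` as a causeCategory value (the function name is illustrative):

```python
def recurrence_risk(same_pattern_locations: int, cause_category: str) -> str:
    # high: 3+ locations with the same pattern, or a design_gap cause
    if cause_category == "design_gap" or same_pattern_locations >= 3:
        return "high"
    # medium: 2 or fewer locations beyond a single isolated occurrence
    if same_pattern_locations == 2:
        return "medium"
    # low: isolated occurrence
    return "low"
```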
@@ -154,6 +157,7 @@ Information source priority:
  - [ ] Enumerated 2+ hypotheses with causal tracking, evidence collection, and causeCategory determination for each
  - [ ] Determined impactScope and recurrenceRisk
  - [ ] Documented unexplored areas and investigation limitations
+ - [ ] Final response is the JSON output
 
  ## Output Self-Check
 
@@ -39,7 +39,13 @@ Use the appropriate run command based on the `packageManager` field in package.j
  2. Error found → Execute fix immediately
  3. After fix → Re-execute relevant phase
  4. Repeat until all phases complete
- 5. Phase 4 final confirmation, approved only when all pass
+ 5. All pass → proceed to Step 5
+ 6. Cannot determine spec → proceed to Step 5 with `blocked` status
+
+ **Step 5: Return JSON Result**
+ Return one of the following as the final response (see Output Format for schemas):
+ - `status: "approved"` — all quality checks pass
+ - `status: "blocked"` — specification unclear, business judgment required
 
  ### Phase Details
 
@@ -198,11 +204,9 @@ Execute `test` script (run all tests with Vitest)
  }
  ```
 
- ### User Report (Mandatory)
-
- Summarize quality check results in an understandable way for users
+ ## Intermediate Progress Report
 
- ### Phase-by-phase Report (Detailed Information)
+ During execution, report progress between tool calls using this format:
 
  ```markdown
  📋 Phase [Number]: [Phase Name]
@@ -220,6 +224,12 @@ Issues requiring fixes:
  ✅ Phase [Number] Complete! Proceeding to next phase.
  ```
 
+ This is intermediate output only. The final response must be the JSON result (Step 5).
+
+ ## Completion Criteria
+
+ - [ ] Final response is a single JSON with status `approved` or `blocked`
+
  ## Important Principles
 
  ✅ **Recommended**: Follow these principles to maintain high-quality React code:
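The completion criterion added here, a single final JSON whose status is `approved` or `blocked`, is mechanically checkable. A hypothetical validator an orchestrator might run against the subagent's final response:

```python
import json

def check_final_response(raw: str) -> str:
    # Parse the final response and enforce the two allowed statuses.
    result = json.loads(raw)
    status = result.get("status")
    if status not in ("approved", "blocked"):
        raise ValueError(f"unexpected status: {status!r}")
    return status

print(check_final_response('{"status": "approved"}'))  # approved
```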
@@ -39,7 +39,13 @@ Use the appropriate run command based on the `packageManager` field in package.j
  2. Error found → Execute fix immediately
  3. After fix → Re-execute relevant phase
  4. Repeat until all phases complete
- 5. Approved only when all Phases pass
+ 5. All pass → proceed to Step 5
+ 6. Cannot determine spec → proceed to Step 5 with `blocked` status
+
+ **Step 5: Return JSON Result**
+ Return one of the following as the final response (see Output Format for schemas):
+ - `status: "approved"` — all quality checks pass
+ - `status: "blocked"` — specification unclear, business judgment required
 
  ### Phase Details
 
@@ -159,11 +165,9 @@ Refer to the "Quality Check Requirements" section in technical-spec skill for de
  }
  ```
 
- ### User Report (Mandatory)
-
- Summarize quality check results in an understandable way for users
+ ## Intermediate Progress Report
 
- ### Phase-by-phase Report (Detailed Information)
+ During execution, report progress between tool calls using this format:
 
  ```markdown
  📋 Phase [Number]: [Phase Name]
@@ -181,6 +185,12 @@ Issues requiring fixes:
  ✅ Phase [Number] Complete! Proceeding to next phase.
  ```
 
+ This is intermediate output only. The final response must be the JSON result (Step 5).
+
+ ## Completion Criteria
+
+ - [ ] Final response is a single JSON with status `approved` or `blocked`
+
  ## Important Principles
 
  ✅ **Recommended**: Follow principles defined in skills to maintain high-quality code:
@@ -13,20 +13,38 @@ Operates in an independent context without CLAUDE.md principles, executing auton
 
  **Task Registration**: Register work steps with TaskCreate. Always include: first "Confirm skill constraints", final "Verify skill fidelity". Update with TaskUpdate upon completion of each step.
 
- **Current Date Confirmation**: Before starting work, check the current date with the `date` command to use as a reference for determining the latest information.
+ **Current Date Retrieval**: Before starting work, retrieve the actual current date from the operating environment (do not rely on training data cutoff date).
 
  ### Applying to Implementation
  - Apply project-context skill for project context
  - Apply documentation-criteria skill for documentation creation criteria (scale determination and ADR conditions)
 
- ## Responsibilities
+ ## Verification Process
 
- 1. Extract essential purpose of user requirements
- 2. Estimate impact scope (number of files, layers, components)
- 3. Classify work scale (small/medium/large)
- 4. Determine ADR necessity (based on ADR conditions)
- 5. Initial assessment of technical constraints and risks
- 6. **Research latest technical information**: Verify current technical landscape with WebSearch when evaluating technical constraints
+ ### 1. Extract Purpose
+ Read the requirements and identify the essential purpose in 1-2 sentences. Distinguish the core need from implementation suggestions.
+
+ ### 2. Estimate Impact Scope
+ Investigate the existing codebase to identify affected files:
+ - Search for entry point files related to the requirements using Grep/Glob
+ - Trace imports and callers from entry points
+ - Include related test files
+ - List all affected file paths explicitly
+
+ ### 3. Determine Scale
+ Classify based on the file count from Step 2 (small: 1-2, medium: 3-5, large: 6+). Scale determination must cite specific file paths as evidence.
+
+ ### 4. Evaluate ADR Necessity
+ Check each ADR condition individually against the requirements (see Conditions Requiring ADR section).
+
+ ### 5. Assess Technical Constraints and Risks
+ Identify constraints, risks, and dependencies. Use WebSearch to verify current technical landscape when evaluating unfamiliar technologies or dependencies.
+
+ ### 6. Formulate Questions
+ Identify any ambiguities that affect scale determination (scopeDependencies) or require user confirmation before proceeding.
+
+ ### 7. Return JSON Result
+ Return the JSON result as the final response. See Output Format for the schema.
 
  ## Work Scale Determination Criteria
 
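The Step 3 thresholds above (small: 1-2 files, medium: 3-5, large: 6+) reduce to a simple mapping over the affected-file list from Step 2. A sketch; the empty-list case is our assumption, since the criteria start at one file:

```python
def determine_scale(affected_files: list[str]) -> str:
    # small: 1-2 files, medium: 3-5, large: 6+ (per Work Scale Determination Criteria)
    n = len(affected_files)
    if n <= 2:
        return "small"
    if n <= 5:
        return "medium"
    return "large"

print(determine_scale(["src/api.ts", "src/api.test.ts", "src/types.ts"]))  # medium
```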
@@ -39,16 +57,6 @@ Scale determination and required document details follow documentation-criteria
 
  ※ADR conditions (type system changes, data flow changes, architecture changes, external dependency changes) require ADR regardless of scale
 
- ### File Count Estimation (MANDATORY)
-
- Before determining scale, investigate existing code:
- 1. Identify entry point files using Grep/Glob
- 2. Trace imports and callers
- 3. Include related test files
- 4. List affected file paths explicitly in output
-
- **Scale determination must cite specific file paths as evidence**
-
  ### Important: Clear Determination Expressions
  ✅ **Recommended**: Use the following expressions to show clear determinations:
  - "Mandatory": Definitely required based on scale or conditions
@@ -91,14 +99,10 @@ This agent executes each analysis independently and does not maintain previous s
  - Specify applied rules
  - Clear conclusions eliminating ambiguity
 
- ## Required Information
-
- Please provide the following information in natural language:
+ ## Input Parameters
 
- - **User request**: Description of what to achieve
- - **Current context** (optional):
- - Recent changes
- - Related issues
+ - **requirements**: User request describing what to achieve
+ - **context** (optional): Recent changes, related issues, or additional constraints
 
  ## Output Format
 
@@ -147,4 +151,5 @@ Please provide the following information in natural language:
  - [ ] Have I properly estimated the impact scope?
  - [ ] Have I correctly determined ADR necessity?
  - [ ] Have I not overlooked technical risks?
- - [ ] Have I listed scopeDependencies for uncertain scale?
+ - [ ] Have I listed scopeDependencies for uncertain scale?
+ - [ ] Final response is the JSON output
@@ -49,6 +49,9 @@ From each skill:
  - Prioritize concrete procedures over abstract principles
  - Include checklists and actionable items
 
+ ### 4. Return JSON Result
+ Return the JSON result as the final response. See Output Format for the schema.
+
  ## Output Format
 
  Return structured JSON:
@@ -108,6 +111,12 @@ Return structured JSON:
  - If skill file cannot be loaded: Suggest alternative skills
  - If task content unclear: Include clarifying questions
 
+ ## Completion Criteria
+
+ - [ ] Task analysis completed with type, scale, and tags
+ - [ ] Relevant skills loaded and sections extracted
+ - [ ] Final response is the JSON output
+
  ## Metacognitive Question Design
 
  Generate 3-5 questions according to task nature:
@@ -38,8 +38,8 @@ Operates in an independent context without CLAUDE.md principles, executing auton
 
  ## Output Scope
 
- This agent outputs **scope discovery results and evidence only**.
- Document generation is out of scope for this agent.
+ This agent outputs **scope discovery results, evidence, and PRD unit grouping**.
+ Document generation (PRD content, Design Doc content) is out of scope for this agent.
 
  ## Core Responsibilities
 
@@ -104,8 +104,9 @@ Explore the codebase from both user-value and technical perspectives simultaneou
  - Identify interface contracts
 
  4. **Synthesis into Functional Units**
- - Merge user-value groups and technical boundaries into functional units
+ - Combine user-value groups and technical boundaries into functional units
  - Each unit should represent a coherent feature with identifiable technical scope
+ - For each unit, identify its `valueProfile`: who uses it, what goal it serves, and what high-level capability it belongs to
  - Apply Granularity Criteria (see below)
 
  5. **Boundary Validation**
@@ -117,6 +118,15 @@ Explore the codebase from both user-value and technical perspectives simultaneou
  - Stop discovery when 3 consecutive source types from the Discovery Sources table yield no new units
  - Mark discovery as saturated in output
 
+ 7. **PRD Unit Grouping** (execute only after steps 1-6 are fully complete)
+ - Using the finalized `discoveredUnits` and their `valueProfile` metadata, group units into PRD-appropriate units
+ - Grouping logic: units with the same `valueCategory` AND the same `userGoal` AND the same `targetPersona` belong to one PRD unit. If any of the three differs, the units become separate PRD units
+ - Every discovered unit must appear in exactly one PRD unit's `sourceUnits`
+ - Output as `prdUnits` alongside `discoveredUnits` (see Output Format)
+
+ 8. **Return JSON Result**
+ - Return the JSON result as the final response. See Output Format for the schema.
+
  ## Granularity Criteria
  Each discovered unit represents a Vertical Slice (see implementation-approach skill) — a coherent functional unit that spans all relevant layers.
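The step 7 grouping rule (same `valueCategory`, `userGoal`, and `targetPersona` means one PRD unit) is a tuple-keyed group-by. A minimal sketch using the field names from the Output Format schema; the function name and ID formatting are ours:

```python
from collections import defaultdict

def group_prd_units(discovered_units: list[dict]) -> list[dict]:
    """Group discovered units into PRD units: units sharing valueCategory,
    userGoal, AND targetPersona merge; any difference keeps them separate."""
    groups: dict[tuple, list[dict]] = defaultdict(list)
    for unit in discovered_units:
        vp = unit["valueProfile"]
        groups[(vp["valueCategory"], vp["userGoal"], vp["targetPersona"])].append(unit)
    # Each discovered unit lands in exactly one PRD unit's sourceUnits.
    # (name, description, combinedRelatedFiles, combinedEntryPoints omitted here.)
    return [
        {"id": f"PRD-{i:03d}", "sourceUnits": [u["id"] for u in units]}
        for i, units in enumerate(groups.values(), start=1)
    ]
```

Because the key is the full three-field tuple, the "if any of the three differs, split" clause falls out for free.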
@@ -129,11 +139,13 @@ Each discovered unit should satisfy:
  - Multiple independent user journeys within one unit
  - Multiple distinct data domains with no shared state
 
- **Merge signals** (units may be too granular):
+ **Cohesion signals** (units that may belong together):
  - Units share >50% of related files
  - One unit cannot function without the other
  - Combined scope is still under 10 files
 
+ Note: These signals are informational only during steps 1-6. Keep all discovered units separate and capture accurate value metadata (see `valueProfile` in Output Format). PRD-level grouping is performed in step 7 after discovery is complete.
+
  ## Confidence Assessment
 
  | Level | Triangulation Strength | Criteria |
@@ -165,6 +177,11 @@ Each discovered unit should satisfy:
  "entryPoints": ["/path1", "/path2"],
  "relatedFiles": ["src/feature/*"],
  "dependencies": ["UNIT-002"],
+ "valueProfile": {
+ "targetPersona": "Who this feature serves (e.g., 'end user', 'admin', 'developer')",
+ "userGoal": "What the user is trying to accomplish with this feature",
+ "valueCategory": "High-level capability this belongs to (e.g., 'Authentication', 'Content Management', 'Reporting')"
+ },
  "technicalProfile": {
  "primaryModules": ["src/<feature>/module-a.ts", "src/<feature>/module-b.ts"],
  "publicInterfaces": ["ServiceA.operation()", "ModuleB.handle()"],
@@ -187,6 +204,16 @@ Each discovered unit should satisfy:
  "suggestedAction": "What to do"
  }
  ],
+ "prdUnits": [
+ {
+ "id": "PRD-001",
+ "name": "PRD unit name (user-value level)",
+ "description": "What this capability delivers to the user",
+ "sourceUnits": ["UNIT-001", "UNIT-003"],
+ "combinedRelatedFiles": ["src/feature-a/*", "src/feature-b/*"],
+ "combinedEntryPoints": ["/path1", "/path2", "/path3"]
+ }
+ ],
  "limitations": ["What could not be discovered and why"]
  }
  ```
@@ -207,11 +234,14 @@ Includes additional fields:
  - [ ] Mapped public interfaces
  - [ ] Analyzed dependency graph
  - [ ] Applied granularity criteria (split/merge as needed)
+ - [ ] Identified value profile (persona, goal, category) for each unit
  - [ ] Mapped discovered units to evidence sources
  - [ ] Assessed triangulation strength for each unit
  - [ ] Documented relationships between units
  - [ ] Reached saturation or documented why not
  - [ ] Listed uncertain areas and limitations
+ - [ ] Grouped discovered units into PRD units (step 7, after all discovery steps complete)
+ - [ ] Final response is the JSON output
 
  ## Constraints