create-ai-project 1.20.0 → 1.20.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (35)
  1. package/.claude/agents-en/codebase-analyzer.md +2 -2
  2. package/.claude/agents-en/document-reviewer.md +12 -4
  3. package/.claude/agents-en/solver.md +1 -1
  4. package/.claude/agents-en/task-decomposer.md +16 -0
  5. package/.claude/agents-en/technical-designer-frontend.md +11 -2
  6. package/.claude/agents-en/technical-designer.md +11 -2
  7. package/.claude/agents-en/ui-spec-designer.md +1 -1
  8. package/.claude/agents-en/verifier.md +1 -1
  9. package/.claude/agents-en/work-planner.md +12 -5
  10. package/.claude/agents-ja/document-reviewer.md +12 -4
  11. package/.claude/agents-ja/solver.md +1 -1
  12. package/.claude/agents-ja/task-decomposer.md +17 -1
  13. package/.claude/agents-ja/task-executor.md +1 -1
  14. package/.claude/agents-ja/technical-designer-frontend.md +11 -2
  15. package/.claude/agents-ja/technical-designer.md +12 -3
  16. package/.claude/agents-ja/ui-spec-designer.md +1 -1
  17. package/.claude/agents-ja/verifier.md +1 -1
  18. package/.claude/agents-ja/work-planner.md +14 -7
  19. package/.claude/commands-en/update-doc.md +4 -4
  20. package/.claude/commands-ja/update-doc.md +4 -4
  21. package/.claude/skills-en/documentation-criteria/SKILL.md +21 -4
  22. package/.claude/skills-en/documentation-criteria/references/design-template.md +20 -0
  23. package/.claude/skills-en/documentation-criteria/references/plan-template.md +68 -17
  24. package/.claude/skills-en/documentation-criteria/references/task-template.md +8 -1
  25. package/.claude/skills-en/subagents-orchestration-guide/SKILL.md +57 -17
  26. package/.claude/skills-en/task-analyzer/references/skills-index.yaml +3 -2
  27. package/.claude/skills-ja/documentation-criteria/SKILL.md +24 -1
  28. package/.claude/skills-ja/documentation-criteria/references/design-template.md +20 -0
  29. package/.claude/skills-ja/documentation-criteria/references/plan-template.md +68 -17
  30. package/.claude/skills-ja/documentation-criteria/references/task-template.md +8 -1
  31. package/.claude/skills-ja/implementation-approach/SKILL.md +1 -1
  32. package/.claude/skills-ja/subagents-orchestration-guide/SKILL.md +59 -17
  33. package/.claude/skills-ja/task-analyzer/references/skills-index.yaml +4 -3
  34. package/CHANGELOG.md +35 -0
  35. package/package.json +1 -1
@@ -41,8 +41,8 @@ description: Review existing design documents (Design Doc / PRD / ADR)
  - Consistency verification via design-sync (Design Doc only)

  **Does not execute**:
- - New requirement analysis (use /design for new documents)
- - Work planning or implementation (use /plan or /task after this command)
+ - New requirement analysis
+ - Work planning or implementation

  **Responsibility boundary**: This command's responsibility is complete once the updated document is approved.

@@ -64,7 +64,7 @@ ls docs/design/*.md docs/prd/*.md docs/adr/*.md 2>/dev/null | grep -v template
  | $ARGUMENTS specifies a path | Use the specified document |
  | $ARGUMENTS describes a topic | Search for documents matching the topic |
  | Multiple candidates found | Present options via AskUserQuestion |
- | Document not found | Report and exit (recommend /design instead) |
+ | Document not found | Report and exit (new document creation is out of scope for this command) |

  ### Step 2: Determine the document type and layer

@@ -194,7 +194,7 @@ prompt: |

  | Error | Action |
  |--------|-----------|
- | Target document not found | Report and exit (recommend /design instead) |
+ | Target document not found | Report and exit (new document creation is out of scope for this command) |
  | Subagent update fails | Log the failure, present the error to the user, retry once |
  | Review still rejected after 2 fixes | Stop the loop, flag for human intervention |
  | design-sync detects a conflict | Present to the user for a resolution decision |
@@ -133,6 +133,9 @@ description: Guides PRD, ADR, Design Doc, and Work Plan creation. Use when creat
  - **Data representation decision** (when introducing new structures)
  - **Applicable standards** (explicit/implicit classification)
  - **Prerequisite ADRs** (including common ADRs)
+ - **Verification Strategy** (required)
+ - Correctness proof method (what "correct" means for this change, how it's verified, when)
+ - Early verification point (first target to prove the approach works, success criteria, failure response)

  **Excludes**:
  - Why that technology was chosen (->Reference ADR)
@@ -146,19 +149,33 @@ description: Guides PRD, ADR, Design Doc, and Work Plan creation. Use when creat
  **Includes**:
  - Task breakdown and dependencies (maximum 2 levels)
  - Schedule and duration estimates
- - **Include test skeleton file paths from acceptance-test-generator** (integration and E2E)
- - **Phase 4 Quality Assurance Phase (required)**
+ - **Include test skeleton file paths** (integration and E2E)
+ - **Verification Strategy summary** (extracted from Design Doc)
+ - **Final Quality Assurance Phase (required)**
  - Progress records (checkbox format)

  **Excludes**:
  - Technical rationale (->ADR)
  - Design details (->Design Doc)

- **Phase Division Criteria**:
+ **Phase Division Criteria** (adapt to implementation approach from Design Doc):
+
+ **When Vertical Slice selected**:
+ - Each phase = one value unit (feature, component, or migration target)
+ - Each phase includes its own implementation + verification per Verification Strategy
+ - Final Phase: Quality Assurance (cross-cutting verification, all tests passing)
+
+ **When Horizontal Slice selected**:
  1. **Phase 1: Foundation Implementation** - Type definitions, interfaces, test preparation
  2. **Phase 2: Core Feature Implementation** - Business logic, unit tests
  3. **Phase 3: Integration Implementation** - External connections, presentation layer
- 4. **Phase 4: Quality Assurance (Required)** - Acceptance criteria achievement, all tests passing, quality checks
+ 4. **Final Phase: Quality Assurance (Required)** - Acceptance criteria achievement, all tests passing, quality checks
+
+ **When Hybrid selected**:
+ - Combine vertical and horizontal as defined in Design Doc implementation approach
+ - Final Phase: Quality Assurance (Required)
+
+ **All approaches**: Final phase is always Quality Assurance. Each phase's verification method follows Verification Strategy from Design Doc.

  **Three Elements of Task Completion Definition**:
  1. **Implementation Complete**: Code is functional
@@ -286,6 +286,26 @@ Mark as N/A with brief rationale when the feature has no data layer dependencies

  - [List critical integration points that require testing beyond unit-level mocks]

+ ## Verification Strategy
+
+ Verification Strategy defines what correctness means and how to prove it at design time. L1/L2/L3 (from implementation-approach skill) define completion verification granularity at task execution time.
+
+ ### Correctness Proof Method
+
+ How will this change's correctness be demonstrated?
+
+ - **Correctness definition**: [What "correct" means for this change — e.g., "output matches existing behavior", "all ACs pass in production-equivalent environment", "generated queries execute without error on target DB"]
+ - **Verification method**: [Specific technique — e.g., "compare new implementation output against existing implementation", "run against staging DB", "contract test with real API"]
+ - **Verification timing**: [When verification occurs — e.g., "after first vertical slice", "per repository", "at integration phase"]
+
+ ### Early Verification Point
+
+ What is verified first, and how, to confirm the approach is correct before scaling?
+
+ - **First verification target**: [The smallest unit that proves the approach works — e.g., "first repository migration", "single API endpoint", "one screen flow"]
+ - **Success criteria**: [Observable outcome — e.g., "CSV download produces identical output to legacy", "API returns 200 with expected schema"]
+ - **Failure response**: [What to do if early verification fails — e.g., "reassess approach before proceeding", "escalate to user"]
+
  ## Alternative Solutions

  ### Alternative 1
@@ -12,6 +12,18 @@ Related Issue/PR: #XXX (if any)
  - ADR: [docs/adr/ADR-XXXX.md] (if any)
  - PRD: [docs/prd/XXX.md] (if any)

+ ## Verification Strategy (from Design Doc)
+
+ ### Correctness Proof Method
+ - **Correctness definition**: [extracted from Design Doc]
+ - **Verification method**: [extracted from Design Doc]
+ - **Verification timing**: [extracted from Design Doc]
+
+ ### Early Verification Point
+ - **First verification target**: [extracted from Design Doc]
+ - **Success criteria**: [extracted from Design Doc]
+ - **Failure response**: [extracted from Design Doc]
+
  ## Objective
  [Why this change is necessary, what problem it solves]

@@ -31,51 +43,90 @@ Related Issue/PR: #XXX (if any)
  - [ ] Design Doc update needed
  - [ ] README update needed

- ## Implementation Plan
+ ## Implementation Phases

- (Note: Phase structure is determined based on Design Doc technical dependencies and implementation approach)
+ Select ONE phase structure based on implementation approach from Design Doc.
+ See documentation-criteria skill for detailed Phase Division Criteria.
+ **Delete the unused Option entirely from the final plan.** For hybrid approach, use Option A as the base and add horizontal foundation phases where needed.

- ### Phase 1: [Phase Name] (Estimated commits: X)
- **Purpose**: [What this phase aims to achieve]
+ ### Option A: Vertical Slice Phase Structure

- #### Tasks
+ Use when implementation approach is Vertical Slice. Each phase = one value unit with verification.
+
+ #### Phase 1: [Value Unit 1 Name] (Estimated commits: X)
+ **Purpose**: [First vertical slice — proves approach works]
+ **Verification**: [From Verification Strategy: early verification point]
+
+ ##### Tasks
+ - [ ] Task 1: Implementation
+ - [ ] Task 2: Verification per Verification Strategy
+ - [ ] Quality check: Implement staged quality checks (refer to technical-spec skill)
+
+ ##### Phase Completion Criteria
+ - [ ] Early verification point passed
+ - [ ] [Functional criteria]
+
+ #### Phase 2: [Value Unit 2 Name] (Estimated commits: X)
+ **Purpose**: [Subsequent value unit]
+ **Verification**: [From Verification Strategy]
+
+ ##### Tasks
+ - [ ] Task 1: Implementation
+ - [ ] Task 2: Verification per Verification Strategy
+ - [ ] Quality check
+
+ ##### Phase Completion Criteria
+ - [ ] [Functional criteria]
+ - [ ] [Quality criteria]
+
+ ### Option B: Horizontal Slice Phase Structure
+
+ Use when implementation approach is Horizontal Slice. Phases follow Foundation → Core → Integration → QA.
+
+ #### Phase 1: [Foundation] (Estimated commits: X)
+ **Purpose**: Contract definitions, interfaces, test preparation
+
+ ##### Tasks
  - [ ] Task 1: Specific work content
  - [ ] Task 2: Specific work content
  - [ ] Quality check: Implement staged quality checks (refer to technical-spec skill)
  - [ ] Unit tests: All related tests pass

- #### Phase Completion Criteria
+ ##### Phase Completion Criteria
  - [ ] [Functional completion criteria]
  - [ ] [Quality completion criteria]

- ### Phase 2: [Phase Name] (Estimated commits: X)
- **Purpose**: [What this phase aims to achieve]
+ #### Phase 2: [Core Feature] (Estimated commits: X)
+ **Purpose**: Business logic, unit tests

- #### Tasks
+ ##### Tasks
  - [ ] Task 1: Specific work content
  - [ ] Task 2: Specific work content
- - [ ] Quality check: Implement staged quality checks (refer to technical-spec skill)
+ - [ ] Quality check
  - [ ] Integration tests: Verify overall feature functionality

- #### Phase Completion Criteria
+ ##### Phase Completion Criteria
  - [ ] [Functional completion criteria]
  - [ ] [Quality completion criteria]

- ### Phase 3: [Phase Name] (Estimated commits: X)
- **Purpose**: [What this phase aims to achieve]
+ #### Phase 3: [Integration] (Estimated commits: X)
+ **Purpose**: External connections, presentation layer

- #### Tasks
+ ##### Tasks
  - [ ] Task 1: Specific work content
  - [ ] Task 2: Specific work content
- - [ ] Quality check: Implement staged quality checks (refer to technical-spec skill)
+ - [ ] Quality check
  - [ ] Integration tests: Verify component coordination

- #### Phase Completion Criteria
+ ##### Phase Completion Criteria
  - [ ] [Functional completion criteria]
  - [ ] [Quality completion criteria]

  ### Final Phase: Quality Assurance (Required) (Estimated commits: 1)
- **Purpose**: Overall quality assurance and Design Doc consistency verification
+
+ This phase is required for ALL implementation approaches.
+
+ **Purpose**: Cross-cutting quality assurance and Design Doc consistency verification

  #### Tasks
  - [ ] Verify all Design Doc acceptance criteria achieved
@@ -33,9 +33,16 @@ Files to read before starting implementation (file path, with optional search hi
  - [ ] Improve code (maintain passing tests)
  - [ ] Confirm added tests still pass

+ ## Operation Verification Methods
+ (Derived from Verification Strategy in work plan)
+ - **Verification method**: [What to verify and how — e.g., "compare new implementation output against existing implementation at src/legacy/order_calc", "run endpoint against test database and verify response matches contract"]
+ - **Success criteria**: [Observable outcome that proves correctness — e.g., "output matches existing implementation for all input combinations", "API returns 200 with expected schema"]
+ - **Failure response**: [What to do if verification fails — e.g., "reassess approach before proceeding", "escalate to user"]
+ - **Verification level**: [L1/L2/L3, per implementation-approach skill]
+
  ## Completion Criteria
  - [ ] All added tests pass
- - [ ] Operation verified (select L1/L2/L3, per implementation-approach skill)
+ - [ ] Operation verified per Operation Verification Methods above
  - [ ] Deliverables created (for research/design tasks)

  ## Notes
@@ -16,12 +16,9 @@ This document provides practical behavioral guidelines for me (Claude) to effici
  - **During flow execution**: STRICTLY follow scale-based flow
  - **Each phase**: DELEGATE to appropriate subagent
  - **Stop points**: ALWAYS wait for user approval
-
- ### Prohibited Actions
- - Executing investigation directly with Grep/Glob/Read
- - Performing analysis or design without subagent delegation
- - Saying "Let me first investigate" then starting work directly
- - Skipping or postponing requirement-analyzer
+ - **Investigation**: Delegate all investigation to requirement-analyzer or codebase-analyzer (Grep/Glob/Read are specialist-internal tools)
+ - **Analysis/Design**: Delegate to the appropriate specialist subagent
+ - **First action**: Pass user requirements to requirement-analyzer before any other step

  **First Action Rule**: To accurately analyze user requirements, pass them directly to requirement-analyzer and determine the workflow based on its analysis results.

@@ -68,6 +65,36 @@ graph TD

  ## My Orchestration Principles

+ ### Delegation Boundary: What vs How
+
+ I pass **what to accomplish** and **where to work**. Each specialist determines **how to execute** autonomously.
+
+ **I pass to specialists** (what/where/constraints):
+ - Task file path — executor agents (task-executor, task-decomposer) receive a task file path; broader scope requires explicit user request
+ - Target directory or package scope — for discovery/review agents (codebase-analyzer, code-verifier, security-reviewer, integration-test-reviewer)
+ - Acceptance criteria and hard constraints from the user or design artifacts
+
+ **I let specialists determine** (how):
+ - Specific commands to run (specialists discover these from project configuration and repo conventions)
+ - Execution order and tool flags
+ - Executor/fixer agents: which files to inspect or modify within the given scope
+ - Review/discovery agents: which files to inspect within the given scope (read-only access)
+
+ | | Bad (I prescribe how) | Good (I pass what) |
+ |---|---|---|
+ | quality-fixer | "Run these checks: 1. lint 2. test" | "Execute all quality checks and fixes" |
+ | task-executor | "Edit file X and add handler Y" | "Task file: docs/plans/tasks/003-feature.md" |
+
+ **Decision precedence when outputs conflict**:
+ 1. User instructions (explicit requests or constraints)
+ 2. Task files and design artifacts (Design Doc, PRD, work plan)
+ 3. Objective repo state (git status, file system, project configuration)
+ 4. Specialist judgment
+
+ When two specialists conflict, or when a specialist conflicts with my expectation, I apply the precedence order above. I verify against objective repo state (item 3). I follow specialist output when it aligns with items 1 and 2. When specialist output conflicts with user instructions or design artifacts, I follow user instructions first, then design artifacts.
+
+ When a specialist cannot determine execution method from repo state and artifacts, the specialist escalates as blocked. I then escalate to the user with the specialist's blocked details.
+
  ### Task Assignment with Responsibility Separation

  I understand each subagent's responsibilities and assign work appropriately:
@@ -109,16 +136,25 @@ I repeat this cycle for each task to ensure quality.
  ## Structured Response Specifications

  Subagents respond in JSON format. Key fields for orchestrator decisions:
- - **requirement-analyzer**: scale, confidence, adrRequired, crossLayerScope, scopeDependencies, questions
- - **codebase-analyzer**: analysisScope.categoriesDetected, dataModel.detected, focusAreas[], existingElements count, limitations
- - **code-verifier**: (in design flow) consistencyScore, discrepancies[], reverseCoverage (including dataOperationsInCode, testBoundariesSectionPresent)
- - **task-executor**: status (escalation_needed/completed), escalation_type (design_compliance_violation/similar_function_found/similar_component_found/investigation_target_not_found/out_of_scope_file), testsAdded, requiresTestReview
- - **quality-fixer**: status (approved/blocked). Discriminate blocked type by `reason` field: `"Cannot determine due to unclear specification"` → read `blockingIssues[]` for specification details; `"Execution prerequisites not met"` → read `missingPrerequisites[]` with `resolutionSteps` — present these to the user as actionable next steps
- - **document-reviewer**: approvalReady (true/false)
- - **design-sync**: sync_status (synced/conflicts_found)
- - **integration-test-reviewer**: status (approved/needs_revision/blocked), requiredFixes
- - **security-reviewer**: status (approved/approved_with_notes/needs_revision/blocked), findings, notes, requiredFixes
- - **acceptance-test-generator**: status, generatedFiles
+
+ | Agent | Key Fields | Decision Logic |
+ |-------|-----------|----------------|
+ | requirement-analyzer | scale, confidence, adrRequired, crossLayerScope, scopeDependencies, questions | Select flow by scale; check adrRequired for ADR step |
+ | codebase-analyzer | analysisScope.categoriesDetected, dataModel.detected, focusAreas[], existingElements count, limitations | Pass focusAreas to technical-designer as context |
+ | code-verifier | consistencyScore, discrepancies[], reverseCoverage (dataOperationsInCode, testBoundariesSectionPresent) | Flag discrepancies for document-reviewer |
+ | task-executor | status (escalation_needed/completed), escalation_type, testsAdded, requiresTestReview | On escalation_needed: handle by escalation_type (design_compliance_violation, similar_function_found, similar_component_found, investigation_target_not_found, out_of_scope_file) |
+ | quality-fixer | status (approved/blocked), reason, blockingIssues[], missingPrerequisites[] | On blocked: see quality-fixer blocked handling below |
+ | document-reviewer | approvalReady (true/false) | Proceed to next step on true; request fixes on false |
+ | design-sync | sync_status (synced/conflicts_found) | On conflicts_found: present conflicts to user before proceeding |
+ | integration-test-reviewer | status (approved/needs_revision/blocked), requiredFixes | On needs_revision: pass requiredFixes back to task-executor |
+ | security-reviewer | status (approved/approved_with_notes/needs_revision/blocked), findings, notes, requiredFixes | On needs_revision: pass requiredFixes back to task-executor |
+ | acceptance-test-generator | status, generatedFiles | Pass generatedFiles to work-planner |
+
+ ### quality-fixer Blocked Handling
+
+ When quality-fixer returns `status: "blocked"`, discriminate by `reason`:
+ - `"Cannot determine due to unclear specification"` → read `blockingIssues[]` for specification details
+ - `"Execution prerequisites not met"` → read `missingPrerequisites[]` with `resolutionSteps` and present to user as actionable next steps

  ## My Basic Flow for Work Planning

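The quality-fixer blocked handling described above is a small discriminated-union decision rule, and can be sketched in code. This is an illustrative sketch, not code from the package: the `status` values, `reason` strings, and field names (`blockingIssues`, `missingPrerequisites`, `resolutionSteps`) come from the diff, while the exact interface shape (in particular the `item` field nested inside each missing prerequisite) and the function name `handleQualityFixer` are assumptions.

```typescript
// Hypothetical sketch of the quality-fixer blocked handling described above.
// status values, reason strings, and field names come from the skill text;
// the interface nesting and function name are assumptions for illustration.
interface MissingPrerequisite {
  item: string; // assumed field; the diff only names resolutionSteps
  resolutionSteps: string[];
}

interface QualityFixerResponse {
  status: "approved" | "blocked";
  reason?: string;
  blockingIssues?: string[];
  missingPrerequisites?: MissingPrerequisite[];
}

// Returns the lines to present to the user as actionable next steps.
function handleQualityFixer(res: QualityFixerResponse): string[] {
  if (res.status === "approved") return [];
  if (res.reason === "Cannot determine due to unclear specification") {
    // Specification details live in blockingIssues[].
    return res.blockingIssues ?? [];
  }
  if (res.reason === "Execution prerequisites not met") {
    // Each missing prerequisite carries its own resolutionSteps.
    return (res.missingPrerequisites ?? []).flatMap((p) => [
      p.item,
      ...p.resolutionSteps,
    ]);
  }
  return [`Unrecognized blocked reason: ${String(res.reason)}`];
}
```

Under these assumptions, an approved response yields nothing to escalate, while each blocked variant surfaces its own detail array to the user.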
@@ -272,6 +308,10 @@ Construct the prompt from the agent's Input Parameters section and the deliverab
  **Pass to code-verifier**: Design Doc path (doc_type: design-doc). `code_paths` is intentionally omitted — the verifier independently discovers code scope from the document.
  **Pass to document-reviewer**: code-verifier JSON output as `code_verification` parameter.

+ #### technical-designer → work-planner
+
+ **Pass to work-planner**: Design Doc path. Work-planner extracts Verification Strategy from the Design Doc and includes it in the work plan header.
+
  #### *1 acceptance-test-generator → work-planner

  **Pass to acceptance-test-generator**:
@@ -297,7 +337,7 @@ Construct the prompt from the agent's Input Parameters section and the deliverab
  - **Structured response is MANDATORY**: Information transmission between subagents MUST use JSON format
  - **Approval management**: Document creation -> Execute document-reviewer -> Get user approval BEFORE proceeding
  - **Flow confirmation**: After getting approval, ALWAYS check next step with work planning flow (large/medium/small scale)
- - **Consistency verification**: IF subagent determinations contradict -> prioritize these guidelines
+ - **Consistency verification**: When subagent outputs conflict, apply Decision precedence (see Delegation Boundary section)

  ## Required Dialogue Points with Humans

@@ -102,7 +102,7 @@ skills:

  documentation-criteria:
  skill: "documentation-criteria"
- tags: [documentation, adr, prd, ui-spec, design-doc, work-plan, decision-matrix]
+ tags: [documentation, adr, prd, ui-spec, design-doc, work-plan, decision-matrix, verification-strategy]
  typical-use: "Implementation start scale assessment, document creation decisions, ADR/PRD/Design Doc/work plan creation criteria"
  size: medium
  key-references:
@@ -119,11 +119,12 @@
  - "AI Automation Rules"
  - "Diagram Requirements"
  - "Common ADR Relationships"
+ - "Phase Division Criteria"
  - "Templates"

  implementation-approach:
  skill: "implementation-approach"
- tags: [architecture, implementation, task-decomposition, strategy-patterns, strangler-pattern, facade-pattern, design, planning, confirmation-levels]
+ tags: [architecture, implementation, task-decomposition, strategy-patterns, strangler-pattern, facade-pattern, design, planning, verification-levels]
  typical-use: "Implementation strategy selection, task decomposition, design decisions, large-scale change planning"
  size: medium
  key-references:
@@ -133,6 +133,9 @@ description: Supports creation of PRD, ADR, Design Doc, and work plans. Technical
  - **Data structure adoption decision** (when introducing new structures)
  - **Applicable standards** (explicit/implicit classification)
  - **Prerequisite ADRs** (including common ADRs)
+ - **Verification Strategy** (required)
+ - Correctness proof method (what "correct" means for this change, how it's verified, when)
+ - Early verification point (first target to prove the approach works, success criteria, failure response)

  **Required structural elements**:
  ```yaml
@@ -162,7 +165,8 @@ description: Supports creation of PRD, ADR, Design Doc, and work plans. Technical
  - **Phase structure** (created based on the Design Doc's technical dependencies)
  - Task breakdown and dependencies (maximum 2 levels)
  - Schedule and duration estimates
- - **Include test skeleton file paths from acceptance-test-generator** (integration and E2E)
+ - **Include test skeleton file paths** (integration and E2E)
+ - **Verification Strategy summary** (extracted from Design Doc)
  - **Include Quality Assurance in the final phase** (required)
  - Progress records (checkbox format)

@@ -171,6 +175,25 @@ description: Supports creation of PRD, ADR, Design Doc, and work plans. Technical
  - Design details (→Design Doc)
  - Technical dependency decisions (→Design Doc)

+ **Phase Division Criteria** (apply according to the implementation approach in the Design Doc):
+
+ **When Vertical Slice selected**:
+ - Each phase = one value unit (feature, component, or migration target)
+ - Each phase includes implementation + verification per Verification Strategy
+ - Final Phase: Quality Assurance (cross-cutting verification, all tests passing)
+
+ **When Horizontal Slice selected**:
+ 1. **Phase 1: Foundation Implementation** - Type definitions, interfaces, test preparation
+ 2. **Phase 2: Core Feature Implementation** - Business logic, unit tests
+ 3. **Phase 3: Integration Implementation** - External connections, presentation layer
+ 4. **Final Phase: Quality Assurance (Required)** - Acceptance criteria achievement, all tests passing, quality checks
+
+ **When Hybrid selected**:
+ - Combine vertical and horizontal as defined in the Design Doc implementation approach
+ - Final Phase: Quality Assurance (Required)
+
+ **All approaches**: The final phase is always Quality Assurance. Each phase's verification method follows the Verification Strategy from the Design Doc.
+
  **Three Elements of Task Completion Definition**:
  1. **Implementation Complete**: Code is functional
  2. **Quality Complete**: Tests, type checks, and lint pass
@@ -286,6 +286,26 @@ unknowns:

  - [List critical integration points that require testing beyond unit-level mocks]

+ ## Verification Strategy
+
+ Verification Strategy defines what correctness means and how to prove it at design time. L1/L2/L3 (from implementation-approach skill) define completion verification granularity at task execution time.
+
+ ### Correctness Proof Method
+
+ How will this change's correctness be demonstrated?
+
+ - **Correctness definition**: [What "correct" means for this change — e.g., "output matches existing behavior", "all ACs pass in production-equivalent environment", "generated queries execute without error on target DB"]
+ - **Verification method**: [Specific technique — e.g., "compare output of old and new implementations", "run against staging DB", "contract test with real API"]
+ - **Verification timing**: [When verification occurs — e.g., "after first vertical slice completes", "per repository", "at integration phase"]
+
+ ### Early Verification Point
+
+ What is verified first, and how, to confirm the approach is valid before full rollout?
+
+ - **First verification target**: [The smallest unit that proves the approach works — e.g., "first repository migration", "single API endpoint", "one screen flow"]
+ - **Success criteria**: [Observable outcome — e.g., "CSV download produces identical output to the existing one", "API returns 200 with expected schema"]
+ - **Failure response**: [What to do if early verification fails — e.g., "reassess approach before proceeding", "escalate to user"]
+
  ## Alternative Solutions

  ### Alternative 1
@@ -12,6 +12,18 @@
  - ADR: [docs/adr/ADR-XXXX.md] (if any)
  - PRD: [docs/prd/XXX.md] (if any)

+ ## Verification Strategy (from Design Doc)
+
+ ### Correctness Proof Method
+ - **Correctness definition**: [extracted from Design Doc]
+ - **Verification method**: [extracted from Design Doc]
+ - **Verification timing**: [extracted from Design Doc]
+
+ ### Early Verification Point
+ - **First verification target**: [extracted from Design Doc]
+ - **Success criteria**: [extracted from Design Doc]
+ - **Failure response**: [extracted from Design Doc]
+
  ## Objective
  [Why this change is necessary, what problem it solves]

@@ -31,51 +43,90 @@
  - [ ] Design doc update needed
  - [ ] README update needed

- ## Implementation Plan
+ ## Implementation Phases

- (Note: Phase structure is determined based on the Design Doc's technical dependencies and implementation approach)
+ Select ONE phase structure based on the implementation approach from the Design Doc.
+ See documentation-criteria skill for detailed Phase Division Criteria.
+ **Delete the unused Option entirely from the final plan.** For hybrid approach, use Option A as the base and add horizontal foundation phases where needed.

- ### Phase 1: [Phase Name] (Estimated commits: X)
- **Purpose**: [What this phase aims to achieve]
+ ### Option A: Vertical Slice Phase Structure

- #### Tasks
+ Use when implementation approach is Vertical Slice. Each phase = one value unit with verification.
+
+ #### Phase 1: [Value Unit 1 Name] (Estimated commits: X)
+ **Purpose**: [First vertical slice — proves approach works]
+ **Verification**: [From Verification Strategy: early verification point]
+
+ ##### Tasks
+ - [ ] Task 1: Implementation
+ - [ ] Task 2: Verification per Verification Strategy
+ - [ ] Quality check: Implement staged quality checks (refer to technical-spec skill)
+
+ ##### Phase Completion Criteria
+ - [ ] Early verification point passed
+ - [ ] [Functional criteria]
+
+ #### Phase 2: [Value Unit 2 Name] (Estimated commits: X)
+ **Purpose**: [Subsequent value unit]
+ **Verification**: [From Verification Strategy]
+
+ ##### Tasks
+ - [ ] Task 1: Implementation
+ - [ ] Task 2: Verification per Verification Strategy
+ - [ ] Quality check
+
+ ##### Phase Completion Criteria
+ - [ ] [Functional criteria]
+ - [ ] [Quality criteria]
+
+ ### Option B: Horizontal Slice Phase Structure
+
+ Use when implementation approach is Horizontal Slice. Phases follow Foundation → Core → Integration → QA.
+
+ #### Phase 1: [Foundation] (Estimated commits: X)
+ **Purpose**: Contract definitions, interfaces, test preparation
+
+ ##### Tasks
  - [ ] Task 1: Specific work content
  - [ ] Task 2: Specific work content
  - [ ] Quality check: Implement staged quality checks (refer to technical-spec skill)
  - [ ] Unit tests: All related tests pass

- #### Phase Completion Criteria
+ ##### Phase Completion Criteria
  - [ ] [Functional completion criteria]
  - [ ] [Quality completion criteria]

- ### Phase 2: [Phase Name] (Estimated commits: X)
- **Purpose**: [What this phase aims to achieve]
+ #### Phase 2: [Core Feature] (Estimated commits: X)
+ **Purpose**: Business logic, unit tests

- #### Tasks
+ ##### Tasks
  - [ ] Task 1: Specific work content
  - [ ] Task 2: Specific work content
- - [ ] Quality check: Implement staged quality checks (refer to technical-spec skill)
+ - [ ] Quality check
  - [ ] Integration tests: Verify overall feature functionality

- #### Phase Completion Criteria
+ ##### Phase Completion Criteria
  - [ ] [Functional completion criteria]
  - [ ] [Quality completion criteria]

- ### Phase 3: [Phase Name] (Estimated commits: X)
- **Purpose**: [What this phase aims to achieve]
+ #### Phase 3: [Integration] (Estimated commits: X)
+ **Purpose**: External connections, presentation layer

- #### Tasks
+ ##### Tasks
  - [ ] Task 1: Specific work content
  - [ ] Task 2: Specific work content
- - [ ] Quality check: Implement staged quality checks (refer to technical-spec skill)
+ - [ ] Quality check
  - [ ] Integration tests: Verify component coordination

- #### Phase Completion Criteria
+ ##### Phase Completion Criteria
  - [ ] [Functional completion criteria]
  - [ ] [Quality completion criteria]

  ### Final Phase: Quality Assurance (Required) (Estimated commits: 1)
- **Purpose**: Overall quality assurance and Design Doc consistency verification
+
+ This phase is required for ALL implementation approaches.
+
+ **Purpose**: Cross-cutting quality assurance and Design Doc consistency verification

  #### Tasks
  - [ ] Verify all Design Doc acceptance criteria achieved
@@ -33,9 +33,16 @@
  - [ ] Improve code (maintain passing tests)
  - [ ] Confirm added tests still pass

+ ## Operation Verification Methods
+ (Derived from Verification Strategy in work plan)
+ - **Verification method**: [What to verify and how — e.g., "compare new implementation output against existing implementation at src/legacy/order_calc", "run endpoint against test database and verify response matches contract"]
+ - **Success criteria**: [Observable outcome that proves correctness — e.g., "output matches existing implementation for all input patterns", "API returns 200 with expected schema"]
+ - **Failure response**: [What to do if verification fails — e.g., "reassess approach before proceeding", "escalate to user"]
+ - **Verification level**: [L1/L2/L3, per implementation-approach skill]
+
  ## Completion Criteria
  - [ ] All added tests pass
- - [ ] Operation verified (select L1/L2/L3, per implementation-approach skill)
+ - [ ] Operation verified per Operation Verification Methods above
  - [ ] Deliverables created (for research/design tasks)

  ## Notes
@@ -105,7 +105,7 @@ description: Implementation strategies (vertical slice, horizontal, hybrid)

  **Noted in the Design Doc**: State the reason and rationale for the chosen implementation strategy.

- ## Confirmation Level Definitions
+ ## Verification Level Definitions

  Priority order for completion confirmation of each task:
