create-ai-project 1.19.0 → 1.20.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.claude/agents-en/acceptance-test-generator.md +9 -2
- package/.claude/agents-en/code-verifier.md +14 -4
- package/.claude/agents-en/codebase-analyzer.md +176 -0
- package/.claude/agents-en/document-reviewer.md +8 -0
- package/.claude/agents-en/integration-test-reviewer.md +2 -2
- package/.claude/agents-en/quality-fixer-frontend.md +32 -5
- package/.claude/agents-en/quality-fixer.md +32 -5
- package/.claude/agents-en/task-decomposer.md +23 -2
- package/.claude/agents-en/task-executor-frontend.md +48 -3
- package/.claude/agents-en/task-executor.md +48 -3
- package/.claude/agents-en/technical-designer-frontend.md +7 -0
- package/.claude/agents-en/technical-designer.md +7 -0
- package/.claude/agents-en/work-planner.md +37 -14
- package/.claude/agents-ja/acceptance-test-generator.md +9 -2
- package/.claude/agents-ja/code-verifier.md +14 -4
- package/.claude/agents-ja/codebase-analyzer.md +176 -0
- package/.claude/agents-ja/document-reviewer.md +8 -0
- package/.claude/agents-ja/integration-test-reviewer.md +2 -2
- package/.claude/agents-ja/quality-fixer-frontend.md +32 -6
- package/.claude/agents-ja/quality-fixer.md +32 -6
- package/.claude/agents-ja/task-decomposer.md +23 -2
- package/.claude/agents-ja/task-executor-frontend.md +48 -3
- package/.claude/agents-ja/task-executor.md +48 -3
- package/.claude/agents-ja/technical-designer-frontend.md +7 -0
- package/.claude/agents-ja/technical-designer.md +7 -0
- package/.claude/agents-ja/work-planner.md +37 -14
- package/.claude/commands-en/design.md +17 -6
- package/.claude/commands-en/front-design.md +11 -3
- package/.claude/commands-en/implement.md +2 -0
- package/.claude/commands-en/reverse-engineer.md +2 -6
- package/.claude/commands-en/update-doc.md +16 -2
- package/.claude/commands-ja/design.md +17 -6
- package/.claude/commands-ja/front-design.md +11 -3
- package/.claude/commands-ja/implement.md +2 -0
- package/.claude/commands-ja/reverse-engineer.md +2 -6
- package/.claude/commands-ja/update-doc.md +16 -2
- package/.claude/skills-en/documentation-criteria/references/design-template.md +20 -0
- package/.claude/skills-en/documentation-criteria/references/task-template.md +5 -0
- package/.claude/skills-en/integration-e2e-testing/SKILL.md +4 -2
- package/.claude/skills-en/integration-e2e-testing/references/e2e-environment-prerequisites.md +70 -0
- package/.claude/skills-en/subagents-orchestration-guide/SKILL.md +64 -32
- package/.claude/skills-en/task-analyzer/references/skills-index.yaml +2 -0
- package/.claude/skills-en/typescript-testing/SKILL.md +39 -0
- package/.claude/skills-ja/documentation-criteria/references/design-template.md +20 -0
- package/.claude/skills-ja/documentation-criteria/references/task-template.md +5 -0
- package/.claude/skills-ja/integration-e2e-testing/SKILL.md +4 -0
- package/.claude/skills-ja/integration-e2e-testing/references/e2e-environment-prerequisites.md +70 -0
- package/.claude/skills-ja/subagents-orchestration-guide/SKILL.md +64 -32
- package/.claude/skills-ja/task-analyzer/references/skills-index.yaml +2 -0
- package/.claude/skills-ja/typescript-testing/SKILL.md +39 -0
- package/CHANGELOG.md +50 -0
- package/README.ja.md +1 -1
- package/README.md +1 -1
- package/package.json +3 -4
- package/.madgerc +0 -14
```diff
--- a/package/.claude/skills-en/documentation-criteria/references/task-template.md
+++ b/package/.claude/skills-en/documentation-criteria/references/task-template.md
@@ -13,8 +13,13 @@ Metadata:
 - [ ] [Implementation file path]
 - [ ] [Test file path]
 
+## Investigation Targets
+Files to read before starting implementation (file path, with optional search hint):
+- [e.g., src/orders/checkout (processOrder function) — determined by task-decomposer based on task nature]
+
 ## Implementation Steps (TDD: Red-Green-Refactor)
 ### 1. Red Phase
+- [ ] Read all Investigation Targets and record key observations
 - [ ] Review dependency deliverables (if any)
 - [ ] Verify/create contract definitions
 - [ ] Write failing tests
```
```diff
--- a/package/.claude/skills-en/integration-e2e-testing/SKILL.md
+++ b/package/.claude/skills-en/integration-e2e-testing/SKILL.md
@@ -8,6 +8,7 @@ description: Designs integration and E2E tests with mock boundaries and behavior
 ## References
 
 - **[references/e2e-design.md](references/e2e-design.md)** - E2E test design principles with Playwright (candidate sources, selection criteria, UI Spec mapping)
+- **[references/e2e-environment-prerequisites.md](references/e2e-environment-prerequisites.md)** - E2E environment prerequisites (seed data, auth fixtures, environment checklist)
 
 ## Test Types and Limits
 
```
```diff
--- a/package/.claude/skills-en/integration-e2e-testing/SKILL.md
+++ b/package/.claude/skills-en/integration-e2e-testing/SKILL.md
@@ -37,6 +38,8 @@ description: Designs integration and E2E tests with mock boundaries and behavior
 
 ### Required Comment Format
 
+Each test MUST include the following annotations.
+
 ```typescript
 // AC: "[Acceptance criteria original text]"
 // ROI: [0-100] | Business Value: [0-10] | Frequency: [0-10]
```
```diff
--- a/package/.claude/skills-en/integration-e2e-testing/SKILL.md
+++ b/package/.claude/skills-en/integration-e2e-testing/SKILL.md
@@ -44,7 +47,7 @@ description: Designs integration and E2E tests with mock boundaries and behavior
 // @category: core-functionality | integration | edge-case | ux | e2e
 // @dependency: none | [component name] | full-system
 // @complexity: low | medium | high
-
+// @real-dependency: [component name] (optional, when Test Boundaries specify non-mock setup)
 ```
 
 ### Property Annotations
```
```diff
--- a/package/.claude/skills-en/integration-e2e-testing/SKILL.md
+++ b/package/.claude/skills-en/integration-e2e-testing/SKILL.md
@@ -52,7 +55,6 @@ it.todo('[AC number]: [Test name]')
 ```typescript
 // Property: `[Verification expression]`
 // fast-check: fc.property(fc.[arbitrary], (input) => [invariant])
-it.todo('[AC number]-property: [Invariant description]')
 ```
 
 ### ROI Calculation
```
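The annotation format in the hunks above is mechanical enough to sketch as a small builder. A minimal sketch follows; the `SkeletonAnnotations` type, `buildAnnotationHeader`, and the sample values are illustrative only and not part of the package:

```typescript
// Illustrative sketch (not from the package): compose the required
// annotation header for a generated test skeleton.
type SkeletonAnnotations = {
  ac: string                 // acceptance criteria original text
  roi: number                // 0-100
  businessValue: number      // 0-10
  frequency: number          // 0-10
  category: 'core-functionality' | 'integration' | 'edge-case' | 'ux' | 'e2e'
  dependency: string         // 'none' | component name | 'full-system'
  complexity: 'low' | 'medium' | 'high'
  realDependency?: string    // only when Test Boundaries specify non-mock setup
}

function buildAnnotationHeader(a: SkeletonAnnotations): string {
  const lines = [
    `// AC: "${a.ac}"`,
    `// ROI: ${a.roi} | Business Value: ${a.businessValue} | Frequency: ${a.frequency}`,
    `// @category: ${a.category}`,
    `// @dependency: ${a.dependency}`,
    `// @complexity: ${a.complexity}`,
  ]
  // @real-dependency is emitted only when the Test Boundaries call for it
  if (a.realDependency) lines.push(`// @real-dependency: ${a.realDependency}`)
  return lines.join('\n')
}

const header = buildAnnotationHeader({
  ac: 'User can complete checkout with a saved card',
  roi: 80, businessValue: 9, frequency: 7,
  category: 'integration', dependency: 'payment-service', complexity: 'medium',
  realDependency: 'database',
})
```

With `realDependency` set, the header contains six comment lines, matching the updated format that adds `@real-dependency` after `@complexity`.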
```diff
--- /dev/null
+++ b/package/.claude/skills-en/integration-e2e-testing/references/e2e-environment-prerequisites.md
@@ -0,0 +1,70 @@
+# E2E Environment Prerequisites
+
+E2E tests require a running application with real data state. Unlike unit/integration tests, environment setup is part of E2E test implementation scope.
+
+## Seed Data Strategy
+
+Prepare test data via API calls or database seeding — never through UI interaction:
+
+```typescript
+// fixtures/seed.fixture.ts
+import { test as base } from '@playwright/test'
+
+export const test = base.extend<{ seededData: SeedResult }>({
+  seededData: async ({ request }, use) => {
+    // Arrange: Create test data via API before test
+    // Example: adjust to the project's actual seeding mechanism
+    const result = await request.post('/api/test/seed', {
+      data: { scenario: 'e2e-user-with-subscription' }
+    })
+    const seedData = await result.json()
+
+    await use(seedData)
+
+    // Cleanup: Remove test data after test
+    await request.delete(`/api/test/seed/${seedData.id}`)
+  },
+})
+```
+
+**Principles**:
+- Use the application's existing seeding mechanism if present; create new seed endpoints only when no alternative exists
+- Seed data setup belongs to test fixtures, not to a separate manual step
+- Each test must be self-contained: create its own data, clean up after
+- Use API endpoints or direct DB access for seeding — not UI flows
+
+## Authentication Fixture
+
+Implement auth fixtures that match the application's actual login flow:
+
+```typescript
+// fixtures/auth.fixture.ts
+export const test = base.extend<{ playerPage: Page }>({
+  playerPage: async ({ page, request }, use) => {
+    // Use the application's existing auth endpoint — not admin backdoors
+    // Example: adjust the URL and payload to match the project's actual login flow
+    await request.post('/api/login', {
+      data: { loginId: E2E_LOGIN_ID, password: E2E_PASSWORD }
+    })
+    // Transfer session to browser context
+    await page.goto('/')
+    await use(page)
+  },
+})
+```
+
+**Principles**:
+- Use the application's existing authentication flow; auth fixtures must follow the same path that real users use
+- Store test credentials in environment variables, never hardcoded
+- If the auth flow requires specific user records, seed them in the fixture
+
+## Environment Checklist
+
+Before E2E tests can pass, verify:
+- [ ] Application is running and accessible at `baseURL`
+- [ ] Database has required seed data (test users, subscriptions, content)
+- [ ] Authentication flow works with test credentials
+- [ ] Environment variables are set (`E2E_*` prefixed)
+- [ ] External services are either available or mocked via `page.route()`
+
+When the work plan includes dedicated environment setup tasks (Phase 0), follow those tasks. When no setup tasks exist in the plan, address missing prerequisites as part of the E2E test implementation task itself.
```
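The environment checklist lends itself to an automated preflight. A minimal sketch, assuming a helper and variable names (`checkE2EEnv`, `E2E_BASE_URL`) that are not in the package; only the `E2E_*` prefix convention comes from the document:

```typescript
// Illustrative sketch (not from the package): fail fast when required E2E_*
// environment variables from the checklist are unset, before any test runs.
function checkE2EEnv(env: Record<string, string | undefined>, required: string[]): string[] {
  // Return the names of required variables that are unset or empty
  return required.filter((name) => !env[name])
}

const missing = checkE2EEnv(
  { E2E_BASE_URL: 'http://localhost:3000', E2E_LOGIN_ID: 'test-user' },
  ['E2E_BASE_URL', 'E2E_LOGIN_ID', 'E2E_PASSWORD'],
)
// missing -> ['E2E_PASSWORD']
```

Running such a check in a global setup step turns a cryptic mid-suite login failure into an actionable "missing E2E_PASSWORD" report.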
```diff
--- a/package/.claude/skills-en/subagents-orchestration-guide/SKILL.md
+++ b/package/.claude/skills-en/subagents-orchestration-guide/SKILL.md
@@ -56,13 +56,15 @@ graph TD
 
 ### Document Creation Agents
 6. **requirement-analyzer**: Requirement analysis and work scale determination (WebSearch enabled, latest technical information research)
-7. **
-8. **
-9. **
-10. **
-11. **
-12. **
-13. **
+7. **codebase-analyzer**: Analyze existing codebase to produce focused guidance for technical design
+8. **prd-creator**: Product Requirements Document creation (WebSearch enabled, market trend research)
+9. **ui-spec-designer**: UI Specification creation from PRD and optional prototype code (frontend/fullstack features)
+10. **technical-designer**: ADR/Design Doc creation (latest technology research, Property annotation assignment)
+11. **work-planner**: Work plan creation from Design Doc and test skeletons
+12. **document-reviewer**: Single document quality, completeness, and rule compliance check
+13. **code-verifier**: Verify Design Doc claims against existing codebase (used pre-review in design flow)
+14. **design-sync**: Design Doc consistency verification (detects explicit conflicts only)
+15. **acceptance-test-generator**: Generate separate integration and E2E test skeletons from Design Doc ACs and optional UI Spec
 
 ## My Orchestration Principles
 
```
```diff
--- a/package/.claude/skills-en/subagents-orchestration-guide/SKILL.md
+++ b/package/.claude/skills-en/subagents-orchestration-guide/SKILL.md
@@ -108,8 +110,10 @@ I repeat this cycle for each task to ensure quality.
 
 Subagents respond in JSON format. Key fields for orchestrator decisions:
 - **requirement-analyzer**: scale, confidence, adrRequired, crossLayerScope, scopeDependencies, questions
-- **
-- **
+- **codebase-analyzer**: analysisScope.categoriesDetected, dataModel.detected, focusAreas[], existingElements count, limitations
+- **code-verifier**: (in design flow) consistencyScore, discrepancies[], reverseCoverage (including dataOperationsInCode, testBoundariesSectionPresent)
+- **task-executor**: status (escalation_needed/completed), escalation_type (design_compliance_violation/similar_function_found/similar_component_found/investigation_target_not_found/out_of_scope_file), testsAdded, requiresTestReview
+- **quality-fixer**: status (approved/blocked). Discriminate blocked type by `reason` field: `"Cannot determine due to unclear specification"` → read `blockingIssues[]` for specification details; `"Execution prerequisites not met"` → read `missingPrerequisites[]` with `resolutionSteps` — present these to the user as actionable next steps
 - **document-reviewer**: approvalReady (true/false)
 - **design-sync**: sync_status (synced/conflicts_found)
 - **integration-test-reviewer**: status (approved/needs_revision/blocked), requiredFixes
```
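The quality-fixer branching rule described in that hunk can be sketched as orchestrator pseudologic. The field names (`status`, `reason`, `blockingIssues`, `missingPrerequisites`, `resolutionSteps`) come from the documented response shape; the `nextAction` helper and the return strings are hypothetical:

```typescript
// Illustrative sketch: branch on the documented quality-fixer response fields.
// The helper itself and its return values are not part of the package.
type QualityFixerResponse = {
  status: 'approved' | 'blocked'
  reason?: string
  blockingIssues?: string[]
  missingPrerequisites?: { name: string; resolutionSteps: string[] }[]
}

function nextAction(res: QualityFixerResponse): string {
  if (res.status === 'approved') return 'commit'
  if (res.reason === 'Cannot determine due to unclear specification') {
    // Surface specification gaps to the user
    return `escalate-spec: ${(res.blockingIssues ?? []).join('; ')}`
  }
  if (res.reason === 'Execution prerequisites not met') {
    // Present resolution steps as actionable next steps
    const steps = (res.missingPrerequisites ?? []).flatMap((p) => p.resolutionSteps)
    return `escalate-prereq: ${steps.join('; ')}`
  }
  return 'escalate-unknown'
}
```

For example, `nextAction({ status: 'approved' })` maps to the documented "approved → git commit" path, while a blocked response with missing prerequisites surfaces their resolution steps.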
```diff
--- a/package/.claude/skills-en/subagents-orchestration-guide/SKILL.md
+++ b/package/.claude/skills-en/subagents-orchestration-guide/SKILL.md
@@ -121,7 +125,7 @@ Subagents respond in JSON format. Key fields for orchestrator decisions:
 When receiving new features or change requests, I first request requirement analysis from requirement-analyzer.
 According to scale determination:
 
-### Large Scale (6+ Files) -
+### Large Scale (6+ Files) - 13 Steps (backend) / 15 Steps (frontend/fullstack)
 
 1. requirement-analyzer → Requirement analysis + Check existing PRD **[Stop]**
 2. prd-creator → PRD creation
```
```diff
--- a/package/.claude/skills-en/subagents-orchestration-guide/SKILL.md
+++ b/package/.claude/skills-en/subagents-orchestration-guide/SKILL.md
@@ -130,24 +134,28 @@ According to scale determination:
 5. **(frontend/fullstack only)** document-reviewer → UI Spec review **[Stop: UI Spec Approval]**
 6. technical-designer → ADR creation (if architecture/technology/data flow changes)
 7. document-reviewer → ADR review (if ADR created) **[Stop: ADR Approval]**
-8.
-9.
-10.
-11.
-12.
-13.
+8. codebase-analyzer → Codebase analysis (pass requirement-analyzer output + PRD path)
+9. technical-designer → Design Doc creation (pass codebase-analyzer output as additional context; cross-layer: per layer, see Cross-Layer Orchestration)
+10. code-verifier → Verify Design Doc against existing code (doc_type: design-doc)
+11. document-reviewer → Design Doc review (pass code-verifier results as code_verification; cross-layer: per Design Doc)
+12. design-sync → Consistency verification **[Stop: Design Doc Approval]**
+13. acceptance-test-generator → Test skeleton generation, pass to work-planner (*1)
+14. work-planner → Work plan creation **[Stop: Batch approval]**
+15. task-decomposer → Autonomous execution → Completion report
 
-### Medium Scale (3-5 Files) -
+### Medium Scale (3-5 Files) - 9 Steps (backend) / 11 Steps (frontend/fullstack)
 
 1. requirement-analyzer → Requirement analysis **[Stop]**
-2.
-3. **(frontend/fullstack only)**
-4.
-5.
-6.
-7.
-8.
-9.
+2. codebase-analyzer → Codebase analysis (pass requirement-analyzer output)
+3. **(frontend/fullstack only)** Ask user for prototype code → ui-spec-designer → UI Spec creation
+4. **(frontend/fullstack only)** document-reviewer → UI Spec review **[Stop: UI Spec Approval]**
+5. technical-designer → Design Doc creation (pass codebase-analyzer output as additional context; cross-layer: per layer, see Cross-Layer Orchestration)
+6. code-verifier → Verify Design Doc against existing code (doc_type: design-doc)
+7. document-reviewer → Design Doc review (pass code-verifier results as code_verification; cross-layer: per Design Doc)
+8. design-sync → Consistency verification **[Stop: Design Doc Approval]**
+9. acceptance-test-generator → Test skeleton generation, pass to work-planner (*1)
+10. work-planner → Work plan creation **[Stop: Batch approval]**
+11. task-decomposer → Autonomous execution → Completion report
 
 ### Small Scale (1-2 Files) - 2 Steps
 
```
```diff
--- a/package/.claude/skills-en/subagents-orchestration-guide/SKILL.md
+++ b/package/.claude/skills-en/subagents-orchestration-guide/SKILL.md
@@ -164,14 +172,18 @@ Replace the standard Design Doc creation step with per-layer creation:
 
 | Step | Agent | Purpose |
 |------|-------|---------|
-|
-|
-|
-|
+| 8 | codebase-analyzer ×2 | Codebase analysis per layer (pass req-analyzer output, filtered to layer) |
+| 9a | technical-designer | Backend Design Doc (with backend codebase-analyzer context) |
+| 9b | technical-designer-frontend | Frontend Design Doc (with frontend codebase-analyzer context + backend Integration Points) |
+| 10 | code-verifier ×2 | Verify each Design Doc against existing code |
+| 11 | document-reviewer ×2 | Review each Design Doc (with code-verifier results as code_verification) |
+| 12 | design-sync | Cross-layer consistency verification **[Stop]** |
+
+Steps marked with ×2 invoke the agent once per layer. These invocations are independent and can run in parallel when the orchestrator supports concurrent Agent tool calls.
 
 **Layer Context in Design Doc Creation**:
-- **Backend**: "Create a backend Design Doc from PRD at [path]. Focus on: API contracts, data layer, business logic, service architecture."
-- **Frontend**: "Create a frontend Design Doc from PRD at [path]. Reference backend Design Doc at [path] for API contracts and Integration Points. Focus on: component hierarchy, state management, UI interactions, data fetching."
+- **Backend**: "Create a backend Design Doc from PRD at [path]. Codebase analysis: [JSON from codebase-analyzer for backend layer]. Focus on: API contracts, data layer, business logic, service architecture."
+- **Frontend**: "Create a frontend Design Doc from PRD at [path]. Codebase analysis: [JSON from codebase-analyzer for frontend layer]. Reference backend Design Doc at [path] for API contracts and Integration Points. Focus on: component hierarchy, state management, UI interactions, data fetching."
 
 **design-sync**: Use frontend Design Doc as source. design-sync auto-discovers other Design Docs in `docs/design/` for comparison.
 
```
```diff
--- a/package/.claude/skills-en/subagents-orchestration-guide/SKILL.md
+++ b/package/.claude/skills-en/subagents-orchestration-guide/SKILL.md
@@ -230,6 +242,16 @@ Every subagent prompt must include:
 
 Construct the prompt from the agent's Input Parameters section and the deliverables available at that point in the flow.
 
+### Call Example (codebase-analyzer)
+- subagent_type: "codebase-analyzer"
+- description: "Codebase analysis"
+- prompt: "requirement_analysis: [JSON from requirement-analyzer]. prd_path: [path if exists]. requirements: [original user requirements]. Analyze the existing codebase and produce design guidance."
+
+### Call Example (code-verifier — design flow)
+- subagent_type: "code-verifier"
+- description: "Design Doc verification"
+- prompt: "doc_type: design-doc document_path: [Design Doc path] Verify Design Doc against existing code."
+
 ## My Main Roles as Orchestrator
 
 1. **State Management**: Grasp current phase, each subagent's state, and next action
```
```diff
--- a/package/.claude/skills-en/subagents-orchestration-guide/SKILL.md
+++ b/package/.claude/skills-en/subagents-orchestration-guide/SKILL.md
@@ -240,6 +262,16 @@ Construct the prompt from the agent's Input Parameters section and the deliverab
 - Compose commit messages from changeSummary -> **Execute git commit with Bash**
 - Explicitly integrate initial and additional requirements when requirements change
 
+#### codebase-analyzer → technical-designer
+
+**Pass to codebase-analyzer**: requirement-analyzer JSON output, PRD path (if exists), original user requirements
+**Pass to technical-designer**: codebase-analyzer JSON output as additional context in the Design Doc creation prompt. The designer uses `focusAreas` and `dataModel` to inform the Existing Codebase Analysis section.
+
+#### code-verifier → document-reviewer (Design Doc review)
+
+**Pass to code-verifier**: Design Doc path (doc_type: design-doc). `code_paths` is intentionally omitted — the verifier independently discovers code scope from the document.
+**Pass to document-reviewer**: code-verifier JSON output as `code_verification` parameter.
+
 #### *1 acceptance-test-generator → work-planner
 
 **Pass to acceptance-test-generator**:
```
```diff
--- a/package/.claude/skills-en/subagents-orchestration-guide/SKILL.md
+++ b/package/.claude/skills-en/subagents-orchestration-guide/SKILL.md
@@ -255,7 +287,7 @@ Construct the prompt from the agent's Input Parameters section and the deliverab
 - E2E test file: [path] (execute only in final phase)
 
 **On error**: Escalate to user if files are not generated
-3. **Quality Assurance and Commit Execution**: After confirming approved
+3. **Quality Assurance and Commit Execution**: After confirming `status: "approved"`, immediately execute git commit
 4. **Autonomous Execution Mode Management**: Start/stop autonomous execution after approval, escalation decisions
 5. **ADR Status Management**: Update ADR status after user decision (Accepted/Rejected)
 
```
```diff
--- a/package/.claude/skills-en/task-analyzer/references/skills-index.yaml
+++ b/package/.claude/skills-en/task-analyzer/references/skills-index.yaml
@@ -69,6 +69,7 @@ skills:
 - "Test Implementation Conventions"
 - "Test Quality Criteria"
 - "Mock Type Safety Enforcement"
+- "Data Layer Testing"
 - "Basic Vitest Example"
 
 technical-spec:
```
```diff
--- a/package/.claude/skills-en/task-analyzer/references/skills-index.yaml
+++ b/package/.claude/skills-en/task-analyzer/references/skills-index.yaml
@@ -147,6 +148,7 @@ skills:
 - "Node.js Testing Best Practices - Yoni Goldberg"
 - "Property-Based Testing - 2025 Practices"
 - "references/e2e-design.md - E2E test design principles"
+- "references/e2e-environment-prerequisites.md - E2E environment prerequisites"
 sections:
 - "References"
 - "Test Types and Limits"
```
```diff
--- a/package/.claude/skills-en/typescript-testing/SKILL.md
+++ b/package/.claude/skills-en/typescript-testing/SKILL.md
@@ -135,6 +135,45 @@ const sdkMock = {
 } as unknown as ExternalSDK // Complex external SDK type structure
 ```
 
+## Data Layer Testing
+
+### Mock Limitations for Data Layer
+
+Mocks validate call patterns but cannot verify data layer correctness. The following pass through undetected with mock-only testing:
+- Schema mismatches (table names, column names, data types)
+- Query correctness (joins, filters, aggregations, grouping)
+- Database constraints (NOT NULL, UNIQUE, foreign keys)
+- Migration drift (schema changes that make code out of sync)
+
+### When Mocks Are Appropriate for Data Access
+
+- Testing business logic that receives data from the data layer (mock the repository, test the service)
+- Testing error handling paths (simulating connection failures, timeouts)
+- Unit tests where data access is a dependency, not the subject under test
+
+### When Mocks Are Insufficient for Data Access
+
+- Testing repository or data access implementations themselves
+- Verifying query correctness (joins, filters, aggregations, grouping)
+- Testing data integrity constraints
+- Testing migration compatibility
+
+### Real Database Testing (Environment-Dependent)
+
+Options for verifying data layer correctness against a real database engine:
+- **Containerized databases** for CI environments
+- **In-memory databases** for fast feedback (note: dialect differences may mask issues)
+- **Dedicated test databases** with seed data
+
+The appropriate approach depends on project environment and CI/CD capabilities.
+
+### AI-Generated Code and Schema Awareness
+
+- AI-generated data access code has heightened schema hallucination risk
+- Generated queries may use correct syntax but reference nonexistent schema elements
+- Mock-based tests pass regardless of schema accuracy
+- Mitigation: Design Docs should include explicit schema references; code-verifier reverse coverage verifies data operations against documented schemas
+
 ## Basic Vitest Example
 
 ```typescript
```
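The "mock the repository, test the service" case from that hunk can be sketched in plain TypeScript with a hand-rolled mock. All names here (`OrderRepository`, `DiscountService`, the discount rule) are hypothetical, not from the package:

```typescript
// Illustrative sketch of "mock the repository, test the service".
// Names and business rule are hypothetical, not from the package.
interface OrderRepository {
  findTotal(orderId: string): Promise<number>
}

class DiscountService {
  constructor(private readonly repo: OrderRepository) {}

  // Business rule under test: orders over 100 get a flat 15 off
  async discountedTotal(orderId: string): Promise<number> {
    const total = await this.repo.findTotal(orderId)
    return total > 100 ? total - 15 : total
  }
}

// The mock exercises the service's logic, but says nothing about whether a
// real findTotal query references existing tables and columns — exactly the
// gap the Data Layer Testing section above warns about.
const repoMock: OrderRepository = { findTotal: async () => 200 }
const service = new DiscountService(repoMock)
```

With this mock, `await service.discountedTotal('o-1')` returns 185; verifying the real query itself would need one of the real-database options listed in the hunk.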
```diff
--- a/package/.claude/skills-ja/documentation-criteria/references/design-template.md
+++ b/package/.claude/skills-ja/documentation-criteria/references/design-template.md
@@ -266,6 +266,26 @@ unknowns:
 
 機能に関連する信頼境界がない場合は、簡潔な理由とともにN/Aと記載。
 
+## テスト境界
+
+### モック境界決定
+
+| コンポーネント/依存先 | モック化? | 根拠 |
+|---------------------|----------|------|
+| [外部API / DB / ファイルシステム 等] | [Yes/No] | [この境界を選択した理由] |
+
+### データ層テスト戦略
+
+- **スキーマ依存先**: [この機能が読み書きするテーブル/モデルの一覧と定義ファイルパス]
+- **テストデータ手法**: [テストデータの提供方法 — fixtures, factories, seed scripts, 実DB]
+- **モックの限界**: [この機能でモックだけでは信頼性のあるテストができない箇所]
+
+機能にデータ層の依存がない場合は、簡潔な理由とともにN/Aと記載。
+
+### 統合検証ポイント
+
+- [ユニットレベルのモックを超えるテストが必要な重要な統合ポイントの一覧]
+
 ## 代替案
 
 ### 代替案1
```
```diff
--- a/package/.claude/skills-ja/documentation-criteria/references/task-template.md
+++ b/package/.claude/skills-ja/documentation-criteria/references/task-template.md
@@ -13,8 +13,13 @@
 - [ ] [実装ファイルパス]
 - [ ] [テストファイルパス]
 
+## 調査対象
+実装開始前に読むべきファイル(ファイルパス、任意でサーチヒント付き):
+- [例: src/orders/checkout (processOrder関数) — タスクの性質に基づきtask-decomposerが決定]
+
 ## 実装ステップ(TDD: Red-Green-Refactor)
 ### 1. Redフェーズ
+- [ ] 全ての調査対象を読み、主要な所見を記録
 - [ ] 依存関係の成果物を確認(ある場合)
 - [ ] 契約定義を確認・作成
 - [ ] 失敗するテストを書く
```
```diff
--- a/package/.claude/skills-ja/integration-e2e-testing/SKILL.md
+++ b/package/.claude/skills-ja/integration-e2e-testing/SKILL.md
@@ -8,6 +8,7 @@ description: 統合テストとE2Eテストを設計。モック境界と振る
 ## References
 
 - **[references/e2e-design.md](references/e2e-design.md)** — E2Eテスト設計原則(候補ソース、選定基準、UI Specからのマッピング)
+- **[references/e2e-environment-prerequisites.md](references/e2e-environment-prerequisites.md)** — E2E環境前提条件(seed data、auth fixture、環境チェックリスト)
 
 ## テスト種別と上限
 
```
```diff
--- a/package/.claude/skills-ja/integration-e2e-testing/SKILL.md
+++ b/package/.claude/skills-ja/integration-e2e-testing/SKILL.md
@@ -37,6 +38,8 @@ description: 統合テストとE2Eテストを設計。モック境界と振る
 
 ### 必須コメント形式
 
+各テストに以下のアノテーションを含めること。
+
 ```typescript
 // AC: "[受入条件原文]"
 // ROI: [0-100] | ビジネス価値: [0-10] | 頻度: [0-10]
```
```diff
--- a/package/.claude/skills-ja/integration-e2e-testing/SKILL.md
+++ b/package/.claude/skills-ja/integration-e2e-testing/SKILL.md
@@ -44,6 +47,7 @@ description: 統合テストとE2Eテストを設計。モック境界と振る
 // @category: core-functionality | integration | edge-case | ux | e2e
 // @dependency: none | [コンポーネント名] | full-system
 // @complexity: low | medium | high
+// @real-dependency: [コンポーネント名](任意、テスト境界で非モックセットアップが指定された場合)
 it.todo('[AC番号]: [テスト名]')
 ```
 
```
```diff
--- /dev/null
+++ b/package/.claude/skills-ja/integration-e2e-testing/references/e2e-environment-prerequisites.md
@@ -0,0 +1,70 @@
+# E2E環境前提条件
+
+E2Eテストはリアルなデータ状態で動作するアプリケーションが必要です。ユニット/統合テストと異なり、環境セットアップはE2Eテスト実装スコープの一部です。
+
+## Seed Data Strategy
+
+テストデータはAPI callまたはdatabase seedingで準備する — UI操作によるデータ作成は行わない:
+
+```typescript
+// fixtures/seed.fixture.ts
+import { test as base } from '@playwright/test'
+
+export const test = base.extend<{ seededData: SeedResult }>({
+  seededData: async ({ request }, use) => {
+    // Arrange: テスト前にAPI経由でテストデータを作成
+    // 例: プロジェクトの実際のseeding機構に合わせて調整
+    const result = await request.post('/api/test/seed', {
+      data: { scenario: 'e2e-user-with-subscription' }
+    })
+    const seedData = await result.json()
+
+    await use(seedData)
+
+    // Cleanup: テスト後にテストデータを削除
+    await request.delete(`/api/test/seed/${seedData.id}`)
+  },
+})
+```
+
+**原則**:
+- アプリケーションに既存のseeding機構がある場合はそれを使用する。代替手段がない場合のみ新規seedエンドポイントを作成
+- seed dataのセットアップはtest fixturesに属する。手動ステップとして分離しない
+- 各テストは自己完結: 自身のデータを作成し、テスト後にクリーンアップ
+- seedingにはAPIエンドポイントまたは直接DB操作を使用 — UIフローは使わない
+
+## Authentication Fixture
+
+アプリケーションの実際のログインフローに合わせたauth fixtureを実装:
+
+```typescript
+// fixtures/auth.fixture.ts
+export const test = base.extend<{ playerPage: Page }>({
+  playerPage: async ({ page, request }, use) => {
+    // アプリケーションの既存認証エンドポイントを使用 — admin backdoorは使わない
+    // 例: プロジェクトの実際のログインフローに合わせてURL・payloadを調整
+    await request.post('/api/login', {
+      data: { loginId: E2E_LOGIN_ID, password: E2E_PASSWORD }
+    })
+    // セッションをブラウザコンテキストに移行
+    await page.goto('/')
+    await use(page)
+  },
+})
+```
+
+**原則**:
+- アプリケーションの既存認証フローを使用する。auth fixtureは実ユーザーと同じ経路を通ること
+- テスト認証情報は環境変数に格納し、ハードコードしない
+- 認証フローに特定のユーザーレコードが必要な場合はfixture内でseedする
+
+## 環境チェックリスト
+
+E2Eテストがパスするために、以下を確認:
+- [ ] アプリケーションが`baseURL`で起動・アクセス可能
+- [ ] データベースに必要なseed dataがある(テストユーザー、サブスクリプション、コンテンツ)
+- [ ] テスト認証情報で認証フローが動作する
+- [ ] 環境変数が設定されている(`E2E_*`プレフィックス)
+- [ ] 外部サービスが利用可能、または`page.route()`でモック済み
+
+作業計画に専用の環境セットアップタスク(Phase 0)が含まれる場合はそれに従う。計画にセットアップタスクがない場合は、E2Eテスト実装タスクの一部として不足する前提条件に対応する。
```