create-claude-webapp 1.0.0

# Quick Start Guide for AI Development

This guide walks you through setting up the AI Coding Project Boilerplate and implementing your first feature. We'll skip the complex parts for now and just get it running.

## Setup (about 5 minutes)

### For New Projects

Open your terminal and run the following commands. Feel free to change the project name to whatever you like.

```bash
npx github:svenmalvik/claude-boilerplate-4-web my-awesome-project
cd my-awesome-project
npm install
```

That's it! You now have a `my-awesome-project` folder with all the necessary files ready to use.

### For Existing Projects

If you already have a TypeScript project, you can copy the necessary files. First, download the boilerplate to a temporary location.

```bash
# Download the boilerplate temporarily
npx github:svenmalvik/claude-boilerplate-4-web temp-boilerplate
```

Then copy the following files into the root directory of your existing project. The `mkdir -p` line ensures the destination directories exist before copying.

```bash
# Run in your existing project directory
mkdir -p .claude docs/guides
cp -r temp-boilerplate/.claude/agents .claude/agents
cp -r temp-boilerplate/.claude/commands .claude/commands
cp -r temp-boilerplate/docs/rules docs/rules
cp -r temp-boilerplate/docs/adr docs/
cp -r temp-boilerplate/docs/design docs/
cp -r temp-boilerplate/docs/plans docs/
cp -r temp-boilerplate/docs/prd docs/
cp -r temp-boilerplate/docs/guides/en/sub-agents.md docs/guides/sub-agents.md
cp temp-boilerplate/CLAUDE.md .
```

If you want to change the directory structure, you'll need to adjust the paths starting with `@docs/rules/` in the sub-agent and command definition files, and update `docs/rules-index.yaml` accordingly.
Because this can get complicated, we recommend sticking with the default structure unless you have specific needs.
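If you do relocate the rules, a recursive grep helps you find every reference that needs updating. The snippet below is a self-contained sketch that builds a throwaway example tree under `/tmp` instead of touching your project; in a real project you would run the final `grep` from your repo root against `.claude/`.

```bash
# Illustrative only: build a tiny example tree, then locate files
# that reference @docs/rules/ paths
mkdir -p /tmp/ref-demo/.claude/commands
printf 'Load @docs/rules/typescript.md before editing.\n' \
  > /tmp/ref-demo/.claude/commands/task.md
# -r: recurse, -l: print each matching file once
grep -rl '@docs/rules/' /tmp/ref-demo/.claude
```

The output is a checklist of definition files to edit after moving the rules.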

## Launch Claude Code and Initial Setup

Launch Claude Code in your project directory.

```bash
claude
```

Once launched, set up your project-specific information. This is a crucial step: it's how the AI comes to understand your project.

Run the following custom slash command inside Claude Code.

```bash
/project-inject
```

You'll be guided through an interactive dialog that clarifies your project information. Feel free to answer casually; you can change this information later.
Once you finish answering, your project context is saved to `docs/rules/project-context.md`. This enables the AI to understand your project's purpose and generate more appropriate code.
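The exact contents depend on your answers, but the saved file might look roughly like this (every value below is invented for illustration):

```markdown
# Project Context

## Purpose
Internal dashboard for tracking subscription churn.

## Target Users
Customer-success team (non-technical, desktop browsers).

## Key Constraints
- Must integrate with the existing PostgreSQL schema
- Ship an MVP first; polish comes later
```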

After reviewing the file and confirming it looks good, complete this step with the following command.

```bash
/sync-rules
```

## Let's Implement Your First Feature

Now let's create your first feature. Specify it with the following command in Claude Code.

```bash
/implement Create a simple API that returns "Hello"
```

The `/implement` command guides you through the entire workflow, from design to implementation.

First, it analyzes your requirements to determine the feature's scale. It might say something like "This is a medium-scale feature requiring about 3 files." Based on that scale, it creates the necessary design documents.

For small features, it creates just a simple work plan. For large features, it creates a complete flow from PRD (Product Requirements Document) to Design Doc (technical design document). After each design document is created, an automatic review is performed.

Once the design is complete, the AI will ask something like "Here's the design I've created. Could you review it?" Read through it and request changes if needed, or approve it if it looks good.
Depending on the Claude Code model you're using, it may automatically proceed to the next design document when your review is positive. It will ask for your approval at some point, so you can provide batch feedback then.

After design approval, integration test skeletons and a work plan are created. Once you review the implementation steps (requesting changes if needed) and approve them, the actual implementation begins.

The AI breaks the work plan down into single-commit task units and autonomously implements each task using a TDD approach. After each task, it performs a defined 6-step quality check, fixes any errors, and creates a commit if everything passes. You can simply watch the progress.

When all tasks are complete, it will report "Implementation complete." Check `git log` to see a clean series of commits.

## Development Philosophy of This Boilerplate

Let me explain the thinking behind this boilerplate.

To maximize throughput with AI assistance, it's crucial to minimize human intervention. However, achieving high execution accuracy with AI requires careful context management: providing the right information at the right time. Without proper systems, humans have to guide the implementation process, which prevents throughput from being maximized.

This boilerplate solves this through systematic approaches: selecting appropriate context (rules, requirements, specifications) for each phase and excluding unnecessary information during task execution.

The workflow is: create design documents, review them with users to align understanding, then use those documents as context for planning and task creation. Tasks are executed by sub-agents with dedicated contexts, which removes unnecessary information and stabilizes implementation. This system helps maintain quality even for projects too large to fit within a single coding agent's context window (Claude Opus 4.1 supports 200K tokens; Claude Sonnet 4 supports up to 1M tokens in beta, but the standard limit is 200K).

## Troubleshooting

If things aren't working, check the following.

**If implementation stops midway**
Describe the current state to Claude Code and ask it to resume, for example: "You've completed up to the Design Doc creation, so please continue from there and complete the implementation."

---

# Rule Editing Guide

This guide explains the core concepts and best practices for writing effective rules that maximize LLM execution accuracy, based on how LLMs work.

## Project Philosophy and the Importance of Rule Files

This boilerplate is designed around the concepts of "Agentic Coding" and "Context Engineering":
- **Agentic Coding**: LLMs autonomously making decisions and carrying out implementation tasks
- **Context Engineering**: Building mechanisms that provide appropriate context at the right time so LLMs can make proper decisions

Proper rule management and [sub-agents](https://docs.anthropic.com/en/docs/claude-code/sub-agents) are crucial to realizing these concepts.

Rule files are written to maximize LLM execution accuracy, as described below.

Sub-agents have dedicated contexts separate from the main agent. They are designed to load only the rule files necessary to fulfill their specific responsibilities.
When the main agent executes tasks, it uses "metacognition" (reflecting on and analyzing its own reasoning process) to understand the task context, select the necessary rules from the rule file collection, and execute the task.
This approach maximizes execution accuracy by retrieving the right rules at the right time, without excess or deficiency.

While it's impossible to completely control LLM output, it is possible to maximize execution accuracy by establishing proper systems.
Conversely, LLM execution accuracy can easily degrade depending on rule file content.

With the premise that complete control is impossible, executing tasks, reflecting on issues that arise, and feeding the results back into the system lets you maintain and improve execution accuracy.
When you use this in real projects and results don't match expectations, consider improving the rule files.

## Determining Where to Document Rules

### File Roles and Scope

| File | Scope | When Applied | Example Content |
|------|-------|--------------|-----------------|
| **CLAUDE.md** | All tasks | Always | Approval required before Edit/Write, stop at 5+ file changes |
| **Rule files** | Specific technical domains | When using that technology | Use specific types, error handling required, functions under 30 lines |
| **Guidelines** | Specific workflows | When performing that workflow | Sub-agent selection strategies |
| **Design Docs** | Specific features | When developing that feature | Feature requirements, API specifications, security constraints |

### Decision Flow

```
When is this rule needed?
├─ Always → CLAUDE.md
├─ Only for specific feature development → Design Doc
├─ When using specific technology → Rule files
└─ When performing specific workflow → Guidelines
```

## 9 Rule Principles for Maximizing LLM Execution Accuracy

Here are 9 rule-creation principles based on LLM characteristics and this boilerplate's design philosophy.
While we provide a `/refine-rule` custom slash command to assist with rule modifications, we ultimately recommend interactive rule editing through dialogue rather than commands, because LLMs have difficulty spotting issues unless they compare the generated output against their reasoning after generation.

### 1. Achieve Maximum Accuracy with Minimum Description (Context Pressure vs. Execution Accuracy)

Context is a precious resource. Avoid redundant explanations and include only essential information.
However, it's not just about being short: it must be the minimum description that doesn't cause decision hesitation.

```markdown
❌ Redundant description (13 words)
Please make sure to record all errors in the log when they occur

✅ Concise description (5 words)
All errors must be logged

❌ Over-abbreviated description (3 words)
Record all errors
```

Aim for concise expressions that keep the same meaning, but don't shorten so much that ambiguity creeps in.

### 2. Completely Unify Notation

Always use the same terms for the same concepts. Notation inconsistencies hinder LLM understanding.

```markdown
# Term Definitions (Unified across project)
- API response/return value → Unified as `response`
- User/customer → Unified as `user`
- Error/abnormality → Unified as `error` (exception/failure may be used depending on context)
```

### 3. Thoroughly Eliminate Duplication

Repeating the same content across multiple files wastes context capacity. Consolidate it in one place.

```markdown
❌ Same content in multiple locations
# docs/rules/base.md
Standard error format: { success: false, error: string, code: number }

# docs/rules/api.md
Error responses follow standard error format `{ success: false, error: string, code: number }`

✅ Consolidated in one location
# docs/rules/base.md
Standard error format: { success: false, error: string, code: number }
```

Check for duplication between files and eliminate contradictions and redundancy.
Eliminating duplication also reduces maintenance costs by preventing the notation inconsistencies that arise when one copy is updated and another is missed.
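A quick way to surface verbatim duplicates is a whole-line `grep -Fx` pass between rule files. The snippet below is a self-contained illustration using throwaway files under `/tmp`; in a real project you would point it at your `docs/rules/*.md` files.

```bash
# Illustrative only: flag identical lines that appear in two rule files
mkdir -p /tmp/dup-demo
printf 'Standard error format: { success: false }\nBase-only rule\n' > /tmp/dup-demo/base.md
printf 'Standard error format: { success: false }\nAPI-only rule\n'  > /tmp/dup-demo/api.md
# -F: fixed strings, -x: whole-line match, -f: take patterns from base.md
grep -Fx -f /tmp/dup-demo/base.md /tmp/dup-demo/api.md
```

Exact-line matching only catches verbatim duplication; paraphrased duplicates still need a human (or LLM) review pass.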

### 4. Appropriately Aggregate Responsibilities

Consolidating related content in one file maintains single responsibility and prevents unnecessary context from mixing into tasks.

```markdown
# ✅ Authentication consolidated in one file
docs/rules/auth.md
├── JWT Specification
├── Authentication Flow
├── Error Handling
└── Security Requirements

# ❌ Dispersed responsibilities
docs/rules/auth.md
├── JWT Specification
├── Error Handling
└── Security Requirements
docs/rules/flow.md
├── User Registration Flow
└── Authentication Flow
```

However, if a file becomes too large, reading costs increase, so aim for logical division (or selective rule loading) at around 250 lines (approximately 1,500 tokens).
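A plain `wc -l` makes the 250-line guideline checkable. The snippet below is a self-contained sketch using a generated demo file; against a real project you would simply run `wc -l docs/rules/*.md`.

```bash
# Illustrative only: generate a 300-line demo rule file and check its length
mkdir -p /tmp/len-demo
seq 1 300 | sed 's/^/- rule line /' > /tmp/len-demo/big.md
# 300 lines: over the ~250-line guideline, so a split candidate
wc -l < /tmp/len-demo/big.md
```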

### 5. Set Measurable Decision Criteria

Ambiguous instructions cause inconsistent interpretation. Make criteria explicit with numbers and concrete conditions.

```markdown
✅ Measurable criteria
- Function length: 30 lines or less
- Cyclomatic complexity: 10 or less
- Response time: within 200ms at p95
- Test coverage: 80% or more

❌ Ambiguous criteria
- Readable code
- Fast processing
- Sufficient tests
```

Note that LLMs cannot track elapsed time, so instructions like "break down tasks to complete within 30 minutes" are not effective.

### 6. Show NG Patterns as Recommendations with Background

Showing recommended patterns with reasons is more effective than listing prohibitions. ("NG" here means "no good": a pattern to avoid.)

```markdown
✅ Description in recommended format
【State Management】
Recommended: Use Zustand or Context API
Reason: Global variables are difficult to test and state tracking is complex
NG Example: window.globalState = { ... }

❌ List of prohibitions
- Don't use global variables
- Don't save values to window object
```

If prohibitions are needed, present them as background context rather than as the main rule.

### 7. Verbalize Implicit Assumptions

Even things that are obvious to human developers must be explicitly stated for LLMs to understand.

```markdown
## Prerequisites
- Execution environment: Node.js 20.x on AWS Lambda
- Maximum execution time: 15 minutes (Lambda limit)
- Memory limit: 3GB
- Concurrent executions: 1000 (account limit)
- Timezone: All UTC
- Character encoding: UTF-8 only
```

Use the `/project-inject` command at project start, or whenever project assumptions change, to document project context as rules.

### 8. Arrange Descriptions by Importance

LLMs pay more attention to information at the beginning. Place the most important rules first and exceptional cases last.

```markdown
# API Rules

## Critical Principles (Must follow)
1. All APIs require JWT authentication
2. Rate limit: 100 requests/minute
3. Timeout: 30 seconds

## Standard Specifications
- Methods: Follow REST principles
- Body: JSON format
- Character encoding: UTF-8

## Exceptional Cases (Only for special situations)
- multipart/form-data allowed only for file uploads
- WebSocket connections only at /ws endpoint
```

### 9. Clarify Scope Boundaries

Explicitly stating what is and isn't covered prevents unnecessary processing and misunderstandings.

```markdown
## Scope of This Rule

### Covered
- REST APIs in general
- GraphQL endpoints
- WebSocket communication

### Not Covered
- Static file delivery
- Health check endpoint (/health)
- Metrics endpoint (/metrics)
```

## Reference: Efficient Rule Writing

The rule files under `docs/rules` are created with these principles in mind.
Each is written with no duplication, a single responsibility, and minimal description, so they serve as references when extending them or creating new rules.

### Correspondence Between Rule Files and Applied Principles

| Rule File | Main Content | Examples of Applied Principles |
|-----------|-------------|--------------------------------|
| **typescript.md** | TypeScript code creation/modification/refactoring, modern type features | **Principle 2**: Unified notation (consistent terms like "any type completely prohibited")<br>**Principle 5**: Measurable criteria (20 fields max, 3 nesting levels max) |
| **typescript-testing.md** | Test creation, quality checks, development steps | **Principle 5**: Measurable criteria (coverage 70% or more)<br>**Principle 8**: Arrangement by importance (quality requirements at the top) |
| **ai-development-guide.md** | Technical decision criteria, anti-pattern detection, best practices | **Principle 6**: Show NG patterns in recommended format (anti-pattern collection)<br>**Principle 3**: Eliminate duplication (Rule of Three for consolidation decisions) |
| **technical-spec.md** | Technical design, environment setup, documentation process | **Principle 4**: Aggregate responsibilities (technical design in one file)<br>**Principle 7**: Verbalize implicit assumptions (security rules documented) |
| **project-context.md** | Project-specific information, implementation principles | **Principle 7**: Verbalize implicit assumptions (project characteristics documented)<br>**Principle 1**: Maximum accuracy with minimum description (concise bullet format) |
| **documentation-criteria.md** | Scale determination, document creation criteria | **Principle 5**: Measurable criteria (creation decision matrix)<br>**Principle 9**: Clarify scope boundaries (clearly state what's included/excluded) |
| **implementation-approach.md** | Implementation strategy selection, task breakdown, large-scale change planning | **Principle 8**: Arrangement by importance (Phase-ordered structure)<br>**Principle 6**: Show NG patterns in recommended format (risk analysis) |

All 9 principles are practiced across these files, which makes them practical references for rule creation.

## Troubleshooting

### Problem: Rules are too long and overload the context window

**Solutions**
1. Find and remove duplications
2. Minimize examples
3. Use reference format (link to a single source instead of restating)
4. Move low-priority rules to separate files

### Problem: Inconsistent generation results

**Solutions**
1. Unify terms and notation
2. Quantify decision criteria
3. Clarify priorities
4. Eliminate contradictory rules

### Problem: Important rules are not followed

**Solutions**
1. Move them to the beginning of the file
2. Add 【Required】【Important】 tags
3. Add one concrete example
4. Convert negative phrasing to positive phrasing
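For the last item, a prohibition can usually be restated as a recommendation, in the same ❌/✅ style used above (the rule text is invented for illustration):

```markdown
❌ Negative form
Don't use the any type

✅ Positive form
Type all values explicitly; use `unknown` plus narrowing when the type is not yet known
```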

## Summary

Well-written rules stabilize LLM output. By following the 9 principles and continuously refining your rules, you can maximize LLM capabilities. Build the optimal rule set for your project through regular review and improvement of your implementation results.

---

# Use Cases Quick Reference

New here? Start with the [Quick Start Guide](./quickstart.md). This page serves as your daily development cheat sheet.

## Top 5 Commands (Learn These First)

| Command | Purpose | Example |
|---------|---------|---------|
| `/implement` | End-to-end feature implementation (from requirements to completion) | `/implement Add rate limiting to API` |
| `/task` | Single task with rule-based precision | `/task Fix bug` |
| `/design` | Design docs only (no implementation) | `/design Design payment system` |
| `/review` | Code review and auto-fix | `/review auth-system` |
| `/build` | Execute implementation from plan | `/build` |

## Overall Flow

```mermaid
graph LR
    A[Requirements] --> B[Scale Detection]
    B -->|Small: 1-2 files| C[Direct Implementation]
    B -->|Medium: 3-5 files| D[Design Doc→Implementation]
    B -->|Large: 6+ files| E[PRD→ADR→Design Doc→Implementation]
```

## Inside the /implement Command

```mermaid
graph TD
    Start["/implement requirements"] --> RA["requirement-analyzer scale detection"]
    RA -->|Small| Direct[Direct implementation]
    RA -->|Medium| TD["technical-designer Design Doc"]
    RA -->|Large| PRD["prd-creator PRD"]

    PRD --> ADR["technical-designer ADR"]
    ADR --> TD
    TD --> WP["work-planner Work plan"]
    WP --> TE["task-executor Execute tasks"]
    Direct --> QF["quality-fixer Quality checks"]
    TE --> QF
    QF --> End[Complete]

    style Start fill:#e1f5fe
    style End fill:#c8e6c9
```

---

# Detailed Use Cases

## Want to add a feature?

```bash
/implement Add webhook API with retry logic and signature verification
```

The LLM automatically detects the scale, creates the necessary documentation, and completes the implementation.

## Want to fix a bug?

```bash
/task Fix email validation bug with "+" character
```

Clarifies the applicable rules before fixing the issue.
`/task` triggers a process of metacognition (self-reflection on reasoning). It helps the LLM clarify the situation, retrieve relevant rules, build task lists, and understand the work context, improving execution accuracy.

## Want design only?

```bash
/design Design large-scale batch processing system
```

Creates design documents, conducts an LLM self-review, requests user review as needed, and finalizes the design docs. Does not implement anything.

## Want to work step by step?

Execute Design → Plan → Build individually. You can work more incrementally by specifying phases directly in the command arguments.

```bash
/design                   # Create design docs
/plan                     # Create work plan
/build implement phase 1  # Execute implementation (with phase specification)
```

## Want to resume work?

```bash
# Check progress
ls docs/plans/tasks/*.md | head -5
git log --oneline -5

# Resume with the build command
/build auth-implementation
# Or simply continue where you left off
/build
```

Tasks are marked complete with Markdown checkmarks (`- [x]`).
Some Claude Code models may not automatically mark tasks as completed. In that case, you can instruct: "Please mark completed tasks by reviewing the commit history."
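A task file's checklist therefore ends up looking roughly like this (the task names are invented for illustration):

```markdown
# Task: webhook-endpoint

- [x] 1. Add webhook route and request schema
- [x] 2. Implement signature verification
- [ ] 3. Add retry logic with backoff
- [ ] 4. Integration tests
```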

## Want code review?

```bash
/review    # Check Design Doc compliance
```

An auto-fix is suggested if compliance is below 70%.
Fixes are created as task files under `docs/plans/tasks` and executed by sub-agents.

## Want to initialize or customize project settings?

```bash
/project-inject    # Set project context
/sync-rules        # Sync metadata
```

---

# Command Reference

## Scale Detection Criteria

| Scale | Files | Examples | Generated Docs |
|-------|-------|----------|----------------|
| Small | 1-2 | Bug fixes, refactoring | None |
| Medium | 3-5 | API additions, rate limiting | Design Doc + Work plan |
| Large | 6+ | Auth system, payment system | PRD + ADR + Design Doc + Work plan |

## Command Details

### /implement
**Purpose**: Full automation from requirements to implementation
**Args**: Requirements description
**Process**:
1. requirement-analyzer detects scale
2. Generate docs based on scale
3. task-executor implements
4. quality-fixer ensures quality
5. Commit per task

Helps clarify requirements and creates design documents. It then creates work plans and task files from the design docs and completes the implementation according to the plan.
The goal is complete Agentic Coding (LLMs autonomously making decisions and executing implementation tasks): automatic execution through the whole flow, with human intervention limited to design clarification and issues beyond LLM judgment.

### /task
**Purpose**: Rule-based, high-precision task execution
**Args**: Task description
**Process**:
1. Clarify applicable rules
2. Determine initial action
3. Confirm restrictions
4. Execute task

Encourages metacognition (self-reflection on reasoning), understands the task's essence and the applicable rules, then carries out the specified task. Uses the `rule-advisor` sub-agent to retrieve and apply appropriate rules from the rule files under `docs/rules`.

### /design
**Purpose**: Design doc creation (no implementation)
**Args**: What to design
**Process**:
1. Requirements analysis (requirement-analyzer)
2. PRD creation (if large scale)
3. ADR creation (if tech choices needed)
4. Design Doc creation
5. End with approval

Interacts with the user to organize requirements and create the various design documents. Determines which documents are necessary based on implementation scale, then finalizes the design docs through creation, self-review, and incorporation of user review.
Use this when not adopting the full design-to-implementation process via `/implement`.

### /plan
**Purpose**: Create work plan
**Args**: [design doc name] (optional)
**Prerequisite**: Design Doc must exist
**Process**:
1. Select design doc
2. Confirm E2E test generation
3. work-planner creates plan
4. Get approval

Creates a work plan from the Design Doc. Also creates the integration/E2E tests required for implementation.
Use this when not adopting the full design-to-implementation process via `/implement`.

### /build
**Purpose**: Automated implementation execution
**Args**: [task file name] (optional)
**Prerequisite**: Task files or work plan must exist
**Process**:
1. Check task files
2. Generate them with task-decomposer if missing
3. Execute with task-executor
4. quality-fixer checks quality
5. Commit per task

Executes the implementation tasks described in the specified task files. If only a work plan exists without task files, it uses `task-decomposer` to break down the tasks before executing.
Use this when not adopting the full design-to-implementation process via `/implement`.

Unless instructed otherwise, it automatically executes until the implementation described in the plan is complete. If you want work done in phases or task units, clearly state the desired phase in the arguments. Be careful: explicitly interrupting implementation midway may leave the code in a non-runnable state.

**Example for phase-based implementation**
```bash
/build Refer to docs/plans/tasks and complete phase 1 tasks
```

### /review
**Purpose**: Design Doc compliance and code quality verification
**Args**: [Design Doc name] (optional)
**Process**:
1. code-reviewer calculates compliance
2. List unmet items
3. Suggest auto-fixes
4. Execute fixes with task-executor after approval

Conducts a code review. Primarily checks whether the implementation complies with the Design Doc and meets rule-based code quality standards, then provides feedback. On user instruction, it creates task files and uses sub-agents like `task-executor` to fix the issues.
Use this when not adopting the full design-to-implementation process via `/implement`.

### /refine-rule
**Purpose**: Rule improvement
**Args**: What to change
**Process**:
1. Select rule file
2. Create change proposal
3. 3-pass review process
4. Apply

Assists with rule file editing. Since rules must be optimized for LLMs to maintain execution accuracy, creating optimal rules with this command alone is difficult. Refer to the [Rule Editing Guide](./rule-editing-guide.md) and refine rules through the command or through direct dialogue with LLMs.

### /sync-rules
**Purpose**: Sync rule metadata
**Args**: None
**When**: After rule file edits

Updates the metadata files that the `rule-advisor` sub-agent uses to find rules to reference. Must be executed after changing rules; not needed if rules haven't changed.

Common behavior patterns:
- "9 files checked, all synchronized, no updates needed" → This is normal
- "3 improvement suggestions: [specific suggestions]" → Approve as needed
- Forcing changes every time → This is inappropriate behavior; please report it

### /project-inject
**Purpose**: Set project context
**Args**: None
**Process**: Interactive project information collection

**When to use**:
- Initial setup (required)
- When project direction changes significantly
- When target users change
- When business requirements change significantly

This command captures project background information as rule files to maximize the probability that work is done with an understanding of context. It doesn't need to be run daily: use it only at initial setup and when fundamental project assumptions change.

---

# Troubleshooting

## Task Files

Task files live under `docs/plans/tasks`. Implementation is performed in units of these task files, and completed tasks are marked with Markdown checkmarks (`- [x]`).
Some Claude Code models may not automatically mark tasks as completed. In that case, you can instruct: "Please mark completed tasks by reviewing the commit history."

## When implementation is interrupted

Use the `/implement` or `/build` commands to resume work.

```bash
/implement Resume from task 3 and complete the work
/build Search for incomplete tasks from docs/plans/tasks and resume implementation
```

| Issue | Check Command | Solution |
|-------|---------------|----------|
| Repeating same error | `npm run check:all` | Check environment, fix with `/task` |
| Code differs from design | `/review` | Check compliance, auto-fix |
| Task stuck | `ls docs/plans/tasks/` | Identify blocker, check task file |
| Command not recognized | `ls .claude/commands/` | Check for typos |

---

# Examples

## Webhook Feature (Medium scale – about 4 files)

```bash
/implement External system webhook API
```

**Generated files**:
- docs/design/webhook-system.md
- src/services/webhook.service.ts
- src/services/retry.service.ts
- src/controllers/webhook.controller.ts

## Auth System (Large scale – 10+ files)

```bash
/implement JWT auth with RBAC system
```

**Generated files**:
- docs/prd/auth-system.md
- docs/adr/auth-architecture.md
- docs/design/auth-system.md
- src/auth/ (implementation files)

---

## Next Steps

Once you understand the basics, start applying them in practice. As you gain experience and see room for improvement, try customizing the rules.

→ **[Rule Editing Guide](./rule-editing-guide.md)** - How to understand LLM characteristics and create effective rules

See the command definitions in `.claude/commands/` for details.
Having issues? Check [GitHub Issues](https://github.com/svenmalvik/claude-boilerplate-4-web/issues).