sdlc-subagents 0.1.1 → 0.1.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,7 @@
+ Create a new commit for all of our uncommitted changes
+ Run `git status && git diff HEAD && git status --porcelain` to see which files are uncommitted
+ Add the untracked and changed files
+
+ Write an atomic commit with an appropriate message
+
+ Add a type prefix such as "feat", "fix", "docs", etc. that reflects our work
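The steps above can be demonstrated end to end in a throwaway repository; the file name and commit message below are illustrative, not part of the command itself:

```bash
# Demonstrate the commit flow in a throwaway repo (file and message are illustrative)
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo "hello" > notes.txt            # an uncommitted change

# 1. See what is uncommitted
git status --porcelain

# 2. Stage untracked and changed files
git add -A

# 3. One atomic commit, message prefixed with a type reflecting the work
git commit -q -m "docs: add notes file"
git log -1 --oneline
```

After the commit, `git status --porcelain` prints nothing, confirming the working tree is clean.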
@@ -0,0 +1,151 @@
+ ---
+ description: Create a Product Requirements Document from conversation
+ argument-hint: [output-filename]
+ ---
+
+ # Create PRD: Generate Product Requirements Document
+
+ ## Overview
+
+ Generate a comprehensive Product Requirements Document (PRD) based on the current conversation context and requirements discussed. Use the structure and sections defined below to create a thorough, professional PRD.
+
+ ## Output File
+
+ Write the PRD to: `$ARGUMENTS` (default: `PRD.md`)
+
+ ## PRD Structure
+
+ Create a well-structured PRD with the following sections. Adapt depth and detail based on available information:
+
+ ### Required Sections
+
+ **1. Executive Summary**
+ - Concise product overview (2-3 paragraphs)
+ - Core value proposition
+ - MVP goal statement
+
+ **2. Mission**
+ - Product mission statement
+ - Core principles (3-5 key principles)
+
+ **3. Target Users**
+ - Primary user personas
+ - Technical comfort level
+ - Key user needs and pain points
+
+ **4. MVP Scope**
+ - **In Scope:** Core functionality for MVP (use ✅ checkboxes)
+ - **Out of Scope:** Features deferred to future phases (use ❌ checkboxes)
+ - Group by categories (Core Functionality, Technical, Integration, Deployment)
+
+ **5. User Stories**
+ - Primary user stories (5-8 stories) in format: "As a [user], I want to [action], so that [benefit]"
+ - Include concrete examples for each story
+ - Add technical user stories if relevant
+
+ **6. Core Architecture & Patterns**
+ - High-level architecture approach
+ - Directory structure (if applicable)
+ - Key design patterns and principles
+ - Technology-specific patterns
+
+ **7. Tools/Features**
+ - Detailed feature specifications
+ - If building an agent: Tool designs with purpose, operations, and key features
+ - If building an app: Core feature breakdown
+
+ **8. Technology Stack**
+ - Backend/Frontend technologies with versions
+ - Dependencies and libraries
+ - Optional dependencies
+ - Third-party integrations
+
+ **9. Security & Configuration**
+ - Authentication/authorization approach
+ - Configuration management (environment variables, settings)
+ - Security scope (in-scope and out-of-scope)
+ - Deployment considerations
+
+ **10. API Specification** (if applicable)
+ - Endpoint definitions
+ - Request/response formats
+ - Authentication requirements
+ - Example payloads
+
+ **11. Success Criteria**
+ - MVP success definition
+ - Functional requirements (use ✅ checkboxes)
+ - Quality indicators
+ - User experience goals
+
+ **12. Implementation Phases**
+ - Break down into 3-4 phases
+ - Each phase includes: Goal, Deliverables (✅ checkboxes), Validation criteria
+ - Realistic timeline estimates
+
+ **13. Future Considerations**
+ - Post-MVP enhancements
+ - Integration opportunities
+ - Advanced features for later phases
+
+ **14. Risks & Mitigations**
+ - 3-5 key risks with specific mitigation strategies
+
+ **15. Appendix** (if applicable)
+ - Related documents
+ - Key dependencies with links
+ - Repository/project structure
+
+ ## Instructions
+
+ ### 1. Extract Requirements
+ - Review the entire conversation history
+ - Identify explicit requirements and implicit needs
+ - Note technical constraints and preferences
+ - Capture user goals and success criteria
+
+ ### 2. Synthesize Information
+ - Organize requirements into appropriate sections
+ - Fill in reasonable assumptions where details are missing
+ - Maintain consistency across sections
+ - Ensure technical feasibility
+
+ ### 3. Write the PRD
+ - Use clear, professional language
+ - Include concrete examples and specifics
+ - Use markdown formatting (headings, lists, code blocks, checkboxes)
+ - Add code snippets for technical sections where helpful
+ - Keep Executive Summary concise but comprehensive
+
+ ### 4. Quality Checks
+ - ✅ All required sections present
+ - ✅ User stories have clear benefits
+ - ✅ MVP scope is realistic and well-defined
+ - ✅ Technology choices are justified
+ - ✅ Implementation phases are actionable
+ - ✅ Success criteria are measurable
+ - ✅ Consistent terminology throughout
+
+ ## Style Guidelines
+
+ - **Tone:** Professional, clear, action-oriented
+ - **Format:** Use markdown extensively (headings, lists, code blocks, tables)
+ - **Checkboxes:** Use ✅ for in-scope items, ❌ for out-of-scope
+ - **Specificity:** Prefer concrete examples over abstract descriptions
+ - **Length:** Comprehensive but scannable (typically 30-60 sections worth of content)
+
+ ## Output Confirmation
+
+ After creating the PRD:
+ 1. Confirm the file path where it was written
+ 2. Provide a brief summary of the PRD contents
+ 3. Highlight any assumptions made due to missing information
+ 4. Suggest next steps (e.g., review, refinement, planning)
+
+ ## Notes
+
+ - If critical information is missing, ask clarifying questions before generating
+ - Adapt section depth based on available details
+ - For highly technical products, emphasize architecture and technical stack
+ - For user-facing products, emphasize user stories and experience
+ - This command contains the complete PRD template structure - no external references needed
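The "All required sections present" quality check above can be backed by a mechanical grep pass. A minimal sketch; the heading names come from the Required Sections list, and the helper name `check_prd` is hypothetical:

```bash
# check_prd FILE — verify the required top-level sections exist in a generated PRD
# (heading names assume the Required Sections template above)
check_prd() {
  missing=0
  for section in "Executive Summary" "Mission" "Target Users" "MVP Scope" \
                 "User Stories" "Technology Stack" "Success Criteria"; do
    grep -q "$section" "$1" || { echo "missing: $section"; missing=1; }
  done
  return "$missing"
}
```

Run as `check_prd PRD.md`; a non-zero exit status means at least one required section is absent.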
@@ -0,0 +1,156 @@
+ ---
+ description: Create global rules (CLAUDE.md) from codebase analysis
+ ---
+
+ # Create Global Rules
+
+ Generate a CLAUDE.md file by analyzing the codebase and extracting patterns.
+
+ ---
+
+ ## Objective
+
+ Create project-specific global rules that give Claude context about:
+ - What this project is
+ - Technologies used
+ - How the code is organized
+ - Patterns and conventions to follow
+ - How to build, test, and validate
+
+ ---
+
+ ## Phase 1: DISCOVER
+
+ ### Identify Project Type
+
+ First, determine what kind of project this is:
+
+ | Type | Indicators |
+ |------|------------|
+ | Web App (Full-stack) | Separate client/server dirs, API routes |
+ | Web App (Frontend) | React/Vue/Svelte, no server code |
+ | API/Backend | Express/Fastify/etc, no frontend |
+ | Library/Package | `main`/`exports` in package.json, publishable |
+ | CLI Tool | `bin` in package.json, command-line interface |
+ | Monorepo | Multiple packages, workspaces config |
+ | Script/Automation | Standalone scripts, task-focused |
+
+ ### Analyze Configuration
+
+ Look at root configuration files:
+
+ ```
+ package.json → dependencies, scripts, type
+ tsconfig.json → TypeScript settings
+ vite.config.* → Build tool
+ *.config.js/ts → Various tool configs
+ ```
+
+ ### Map Directory Structure
+
+ Explore the codebase to understand organization:
+ - Where does source code live?
+ - Where are tests?
+ - Any shared code?
+ - Configuration locations?
+
+ ---
+
+ ## Phase 2: ANALYZE
+
+ ### Extract Tech Stack
+
+ From package.json and config files, identify:
+ - Runtime/Language (Node, Bun, Deno, browser)
+ - Framework(s)
+ - Database (if any)
+ - Testing tools
+ - Build tools
+ - Linting/formatting
+
+ ### Identify Patterns
+
+ Study existing code for:
+ - **Naming**: How are files, functions, classes named?
+ - **Structure**: How is code organized within files?
+ - **Errors**: How are errors created and handled?
+ - **Types**: How are types/interfaces defined?
+ - **Tests**: How are tests structured?
+
+ ### Find Key Files
+
+ Identify files that are important to understand:
+ - Entry points
+ - Configuration
+ - Core business logic
+ - Shared utilities
+ - Type definitions
+
+ ---
+
+ ## Phase 3: GENERATE
+
+ ### Create CLAUDE.md
+
+ Use the template at `.agents/CLAUDE-template.md` as a starting point.
+
+ **Output path**: `CLAUDE.md` (project root)
+
+ **Adapt to the project:**
+ - Remove sections that don't apply
+ - Add sections specific to this project type
+ - Keep it concise - focus on what's useful
+
+ **Key sections to include:**
+
+ 1. **Project Overview** - What is this and what does it do?
+ 2. **Tech Stack** - What technologies are used?
+ 3. **Commands** - How to dev, build, test, lint?
+ 4. **Structure** - How is the code organized?
+ 5. **Patterns** - What conventions should be followed?
+ 6. **Key Files** - What files are important to know?
+
+ **Optional sections (add if relevant):**
+ - Architecture (for complex apps)
+ - API endpoints (for backends)
+ - Component patterns (for frontends)
+ - Database patterns (if using a DB)
+ - On-demand context references
+
+ ---
+
+ ## Phase 4: OUTPUT
+
+ ```markdown
+ ## Global Rules Created
+
+ **File**: `CLAUDE.md`
+
+ ### Project Type
+
+ {Detected project type}
+
+ ### Tech Stack Summary
+
+ {Key technologies detected}
+
+ ### Structure
+
+ {Brief structure overview}
+
+ ### Next Steps
+
+ 1. Review the generated `CLAUDE.md`
+ 2. Add any project-specific notes
+ 3. Remove any sections that don't apply
+ 4. Optionally create reference docs in `.agents/reference/`
+ ```
+
+ ---
+
+ ## Tips
+
+ - Keep CLAUDE.md focused and scannable
+ - Don't duplicate information that's in other docs (link instead)
+ - Focus on patterns and conventions, not exhaustive documentation
+ - Update it as the project evolves
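The indicator table in Phase 1 can be probed mechanically. A rough sketch using only `grep` on package.json; the fields checked (`workspaces`, `bin`, `exports`, `main`) are standard npm fields, but the heuristics are deliberate simplifications of the table, and `guess_type` is a hypothetical helper name:

```bash
# guess_type FILE — rough project-type heuristic from a package.json
# (a simplification of the indicator table above; check order matters,
#  since a monorepo's root package.json may also declare "main")
guess_type() {
  pkg="$1"
  if grep -q '"workspaces"' "$pkg"; then echo "Monorepo"
  elif grep -q '"bin"' "$pkg"; then echo "CLI Tool"
  elif grep -q '"exports"' "$pkg" || grep -q '"main"' "$pkg"; then echo "Library/Package"
  else echo "App or script"
  fi
}
```

A real pass would also inspect directory layout (client/ and server/ dirs, framework dependencies) as the table describes.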
@@ -0,0 +1,13 @@
+ ---
+ description: Run comprehensive end-to-end testing using agent-browser
+ ---
+
+ # E2E Test: Full Application Testing
+
+ ## Objective
+
+ Run comprehensive end-to-end testing of the application. This launches parallel research agents, then systematically tests every user journey using the browser, takes screenshots, validates database records, and fixes any issues found.
+
+ ## Instructions
+
+ Load the `e2e-test` skill and follow its instructions exactly, starting from the Pre-flight Check through to the final Report.
@@ -0,0 +1,101 @@
+ ---
+ description: Execute an implementation plan
+ argument-hint: [path-to-plan]
+ ---
+
+ # Execute: Implement from Plan
+
+ ## Plan to Execute
+
+ Read plan file: `$ARGUMENTS`
+
+ ## Execution Instructions
+
+ ### 1. Read and Understand
+
+ - Read the ENTIRE plan carefully
+ - Understand all tasks and their dependencies
+ - Note the validation commands to run
+ - Review the testing strategy
+
+ ### 2. Execute Tasks in Order
+
+ For EACH task in "Step by Step Tasks":
+
+ #### a. Navigate to the task
+ - Identify the file and action required
+ - Read existing related files if modifying
+
+ #### b. Implement the task
+ - Follow the detailed specifications exactly
+ - Maintain consistency with existing code patterns
+ - Include proper type hints and documentation
+ - Add structured logging where appropriate
+
+ #### c. Verify as you go
+ - After each file change, check syntax
+ - Ensure imports are correct
+ - Verify types are properly defined
+
+ ### 3. Implement Testing Strategy
+
+ After completing implementation tasks:
+
+ - Create all test files specified in the plan
+ - Implement all test cases mentioned
+ - Follow the testing approach outlined
+ - Ensure tests cover edge cases
+
+ ### 4. Run Validation Commands
+
+ Execute ALL validation commands from the plan in order:
+
+ ```bash
+ # Run each command exactly as specified in plan
+ ```
+
+ If any command fails:
+ - Fix the issue
+ - Re-run the command
+ - Continue only when it passes
+
+ ### 5. Final Verification
+
+ Before completing:
+
+ - ✅ All tasks from plan completed
+ - ✅ All tests created and passing
+ - ✅ All validation commands pass
+ - ✅ Code follows project conventions
+ - ✅ Documentation added/updated as needed
+
+ ## Output Report
+
+ Provide summary:
+
+ ### Completed Tasks
+ - List of all tasks completed
+ - Files created (with paths)
+ - Files modified (with paths)
+
+ ### Tests Added
+ - Test files created
+ - Test cases implemented
+ - Test results
+
+ ### Validation Results
+ ```bash
+ # Output from each validation command
+ ```
+
+ ### Ready for Commit
+ - Confirm all changes are complete
+ - Confirm all validations pass
+ - Ready for `/commit` command
+
+ ## Notes
+
+ - If you encounter issues not addressed in the plan, document them
+ - If you need to deviate from the plan, explain why
+ - If tests fail, fix implementation until they pass
+ - Don't skip validation steps
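The fail-fix-retry rule in step 4 ("continue only when it passes") amounts to a fail-fast runner. A minimal sketch; `run_validations` is a hypothetical helper, and the commands passed to it would come from the plan:

```bash
# Run validation commands in order; stop at the first failure so it can be
# fixed and the command re-run before continuing (the commands themselves
# come from the plan — those shown in the test are placeholders)
run_validations() {
  for cmd in "$@"; do
    echo "running: $cmd"
    if ! sh -c "$cmd"; then
      echo "FAILED: $cmd — fix and re-run before continuing"
      return 1
    fi
  done
  echo "all validations passed"
}
```

Stopping at the first failure keeps the fix scoped to one command, matching the step's intent.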
@@ -0,0 +1,61 @@
+ # Initialize Project
+
+ Run the following commands to set up and start the project locally:
+
+ ## 1. Create Environment File
+ ```bash
+ cp .env.example .env
+ ```
+ Creates your local environment configuration from the example template.
+
+ ## 2. Install Dependencies
+ ```bash
+ uv sync
+ ```
+ Installs all Python packages defined in pyproject.toml.
+
+ ## 3. Start Database
+ ```bash
+ docker-compose up -d db
+ ```
+ Starts PostgreSQL 18 in a Docker container on port 5433.
+
+ ## 4. Run Database Migrations
+ ```bash
+ uv run alembic upgrade head
+ ```
+ Applies all pending database migrations.
+
+ ## 5. Start Development Server
+ ```bash
+ uv run uvicorn app.main:app --reload --port 8123
+ ```
+ Starts the FastAPI server with hot-reload on port 8123.
+
+ ## 6. Validate Setup
+
+ Check that everything is working:
+
+ ```bash
+ # Test API health
+ curl -s http://localhost:8123/health
+
+ # Test database connection
+ curl -s http://localhost:8123/health/db
+ ```
+
+ Both should return `{"status":"healthy"}` responses.
+
+ ## Access Points
+
+ - Swagger UI: http://localhost:8123/docs
+ - Health Check: http://localhost:8123/health
+ - Database: localhost:5433
+
+ ## Cleanup
+
+ To stop services:
+ ```bash
+ # Stop dev server: Ctrl+C
+ # Stop database: docker-compose down
+ ```
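The expected response in step 6 can be checked mechanically rather than by eye. A small sketch; `is_healthy` is a hypothetical helper that only assumes the `{"status":"healthy"}` shape shown above:

```bash
# is_healthy JSON — succeed if a health-endpoint response reports healthy
# (assumes the {"status":"healthy"} shape shown above; tolerates whitespace)
is_healthy() {
  printf '%s' "$1" | grep -q '"status"[[:space:]]*:[[:space:]]*"healthy"'
}

# Usage against the running server (illustrative):
#   is_healthy "$(curl -s http://localhost:8123/health)" && echo "API ok"
#   is_healthy "$(curl -s http://localhost:8123/health/db)" && echo "DB ok"
```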
@@ -0,0 +1,433 @@
+ ---
+ description: "Create comprehensive feature plan with deep codebase analysis and research"
+ ---
+
+ # Plan a new task
+
+ ## Feature: $ARGUMENTS
+
+ ## Mission
+
+ Transform a feature request into a **comprehensive implementation plan** through systematic codebase analysis, external research, and strategic planning.
+
+ **Core Principle**: We do NOT write code in this phase. Our goal is to create a context-rich implementation plan that enables one-pass implementation success for AI agents.
+
+ **Key Philosophy**: Context is King. The plan must contain ALL information needed for implementation - patterns, mandatory reading, documentation, validation commands - so the execution agent succeeds on the first attempt.
+
+ ## Planning Process
+
+ ### Phase 1: Feature Understanding
+
+ **Deep Feature Analysis:**
+
+ - Extract the core problem being solved
+ - Identify user value and business impact
+ - Determine feature type: New Capability/Enhancement/Refactor/Bug Fix
+ - Assess complexity: Low/Medium/High
+ - Map affected systems and components
+
+ **Create A User Story (Or Refine One If Provided By The User):**
+
+ ```
+ As a <type of user>
+ I want to <action/goal>
+ So that <benefit/value>
+ ```
+
+ ### Phase 2: Codebase Intelligence Gathering
+
+ **Use specialized agents and parallel analysis:**
+
+ **1. Project Structure Analysis**
+
+ - Detect primary language(s), frameworks, and runtime versions
+ - Map directory structure and architectural patterns
+ - Identify service/component boundaries and integration points
+ - Locate configuration files (pyproject.toml, package.json, etc.)
+ - Find environment setup and build processes
+
+ **2. Pattern Recognition** (Use specialized subagents when beneficial)
+
+ - Search for similar implementations in the codebase
+ - Identify coding conventions:
+   - Naming patterns (CamelCase, snake_case, kebab-case)
+   - File organization and module structure
+   - Error handling approaches
+   - Logging patterns and standards
+ - Extract common patterns for the feature's domain
+ - Document anti-patterns to avoid
+ - Check CLAUDE.md for project-specific rules and conventions
+
+ **3. Dependency Analysis**
+
+ - Catalog external libraries relevant to the feature
+ - Understand how libraries are integrated (check imports, configs)
+ - Find relevant documentation in docs/, ai_docs/, .agents/reference, or ai-wiki if available
+ - Note library versions and compatibility requirements
+
+ **4. Testing Patterns**
+
+ - Identify test framework and structure (pytest, jest, etc.)
+ - Find similar test examples for reference
+ - Understand test organization (unit vs integration)
+ - Note coverage requirements and testing standards
+
+ **5. Integration Points**
+
+ - Identify existing files that need updates
+ - Determine new files that need creation and their locations
+ - Map router/API registration patterns
+ - Understand database/model patterns if applicable
+ - Identify authentication/authorization patterns if relevant
+
+ **Clarify Ambiguities:**
+
+ - If requirements are unclear at this point, ask the user to clarify before you continue
+ - Get specific implementation preferences (libraries, approaches, patterns)
+ - Resolve architectural decisions before proceeding
+
+ ### Phase 3: External Research & Documentation
+
+ **Use specialized subagents when beneficial for external research:**
+
+ **Documentation Gathering:**
+
+ - Research latest library versions and best practices
+ - Find official documentation with specific section anchors
+ - Locate implementation examples and tutorials
+ - Identify common gotchas and known issues
+ - Check for breaking changes and migration guides
+
+ **Technology Trends:**
+
+ - Research current best practices for the technology stack
+ - Find relevant blog posts, guides, or case studies
+ - Identify performance optimization patterns
+ - Document security considerations
+
+ **Compile Research References:**
+
+ ```markdown
+ ## Relevant Documentation
+
+ - [Library Official Docs](https://example.com/docs#section)
+   - Specific feature implementation guide
+   - Why: Needed for X functionality
+ - [Framework Guide](https://example.com/guide#integration)
+   - Integration patterns section
+   - Why: Shows how to connect components
+ ```
+
+ ### Phase 4: Deep Strategic Thinking
+
+ **Think Harder About:**
+
+ - How does this feature fit into the existing architecture?
+ - What are the critical dependencies and order of operations?
+ - What could go wrong? (Edge cases, race conditions, errors)
+ - How will this be tested comprehensively?
+ - What performance implications exist?
+ - Are there security considerations?
+ - How maintainable is this approach?
+
+ **Design Decisions:**
+
+ - Choose between alternative approaches with clear rationale
+ - Design for extensibility and future modifications
+ - Plan for backward compatibility if needed
+ - Consider scalability implications
+
+ ### Phase 5: Plan Structure Generation
+
+ **Create a comprehensive plan with the following structure:**
+
+ What's below is a template for you to fill in for the implementation agent:
+
+ ```markdown
+ # Feature: <feature-name>
+
+ The following plan should be complete, but it's important that you validate documentation, codebase patterns, and task sanity before you start implementing.
+
+ Pay special attention to the naming of existing utils, types, and models. Import from the right files, etc.
+
+ ## Feature Description
+
+ <Detailed description of the feature, its purpose, and value to users>
+
+ ## User Story
+
+ As a <type of user>
+ I want to <action/goal>
+ So that <benefit/value>
+
+ ## Problem Statement
+
+ <Clearly define the specific problem or opportunity this feature addresses>
+
+ ## Solution Statement
+
+ <Describe the proposed solution approach and how it solves the problem>
+
+ ## Feature Metadata
+
+ **Feature Type**: [New Capability/Enhancement/Refactor/Bug Fix]
+ **Estimated Complexity**: [Low/Medium/High]
+ **Primary Systems Affected**: [List of main components/services]
+ **Dependencies**: [External libraries or services required]
+
+ ---
+
+ ## CONTEXT REFERENCES
+
+ ### Relevant Codebase Files (IMPORTANT: YOU MUST READ THESE FILES BEFORE IMPLEMENTING!)
+
+ <List files with line numbers and relevance>
+
+ - `path/to/file.py` (lines 15-45) - Why: Contains pattern for X that we'll mirror
+ - `path/to/model.py` (lines 100-120) - Why: Database model structure to follow
+ - `path/to/test.py` - Why: Test pattern example
+
+ ### New Files to Create
+
+ - `path/to/new_service.py` - Service implementation for X functionality
+ - `path/to/new_model.py` - Data model for Y resource
+ - `tests/path/to/test_new_service.py` - Unit tests for new service
+
+ ### Relevant Documentation (YOU SHOULD READ THESE BEFORE IMPLEMENTING!)
+
+ - [Documentation Link 1](https://example.com/doc1#section)
+   - Specific section: Authentication setup
+   - Why: Required for implementing secure endpoints
+ - [Documentation Link 2](https://example.com/doc2#integration)
+   - Specific section: Database integration
+   - Why: Shows proper async database patterns
+
+ ### Patterns to Follow
+
+ <Specific patterns extracted from the codebase - include actual code examples from the project>
+
+ **Naming Conventions:** (for example)
+
+ **Error Handling:** (for example)
+
+ **Logging Pattern:** (for example)
+
+ **Other Relevant Patterns:** (for example)
+
+ ---
+
+ ## IMPLEMENTATION PLAN
+
+ ### Phase 1: Foundation
+
+ <Describe foundational work needed before main implementation>
+
+ **Tasks:**
+
+ - Set up base structures (schemas, types, interfaces)
+ - Configure necessary dependencies
+ - Create foundational utilities or helpers
+
+ ### Phase 2: Core Implementation
+
+ <Describe the main implementation work>
+
+ **Tasks:**
+
+ - Implement core business logic
+ - Create service layer components
+ - Add API endpoints or interfaces
+ - Implement data models
+
+ ### Phase 3: Integration
+
+ <Describe how the feature integrates with existing functionality>
+
+ **Tasks:**
+
+ - Connect to existing routers/handlers
+ - Register new components
+ - Update configuration files
+ - Add middleware or interceptors if needed
+
+ ### Phase 4: Testing & Validation
+
+ <Describe testing approach>
+
+ **Tasks:**
+
+ - Implement unit tests for each component
+ - Create integration tests for the feature workflow
+ - Add edge case tests
+ - Validate against acceptance criteria
+
+ ---
+
+ ## STEP-BY-STEP TASKS
+
+ IMPORTANT: Execute every task in order, top to bottom. Each task is atomic and independently testable.
+
+ ### Task Format Guidelines
+
+ Use information-dense keywords for clarity:
+
+ - **CREATE**: New files or components
+ - **UPDATE**: Modify existing files
+ - **ADD**: Insert new functionality into existing code
+ - **REMOVE**: Delete deprecated code
+ - **REFACTOR**: Restructure without changing behavior
+ - **MIRROR**: Copy pattern from elsewhere in codebase
+
+ ### {ACTION} {target_file}
+
+ - **IMPLEMENT**: {Specific implementation detail}
+ - **PATTERN**: {Reference to existing pattern - file:line}
+ - **IMPORTS**: {Required imports and dependencies}
+ - **GOTCHA**: {Known issues or constraints to avoid}
+ - **VALIDATE**: `{executable validation command}`
+
+ <Continue with all tasks in dependency order...>
+
+ ---
+
+ ## TESTING STRATEGY
+
+ <Define testing approach based on the project's test framework and patterns discovered during research>
+
+ ### Unit Tests
+
+ <Scope and requirements based on project standards>
+
+ Design unit tests with fixtures and assertions following existing testing approaches
+
+ ### Integration Tests
+
+ <Scope and requirements based on project standards>
+
+ ### Edge Cases
+
+ <List specific edge cases that must be tested for this feature>
+
+ ---
+
+ ## VALIDATION COMMANDS
+
+ <Define validation commands based on the project's tools discovered in Phase 2>
+
+ Execute every command to ensure zero regressions and 100% feature correctness.
+
+ ### Level 1: Syntax & Style
+
+ <Project-specific linting and formatting commands>
+
+ ### Level 2: Unit Tests
+
+ <Project-specific unit test commands>
+
+ ### Level 3: Integration Tests
+
+ <Project-specific integration test commands>
+
+ ### Level 4: Manual Validation
+
+ <Feature-specific manual testing steps - API calls, UI testing, etc.>
+
+ ### Level 5: Additional Validation (Optional)
+
+ <MCP servers or additional CLI tools if available>
+
+ ---
+
+ ## ACCEPTANCE CRITERIA
+
+ <List specific, measurable criteria that must be met for completion>
+
+ - [ ] Feature implements all specified functionality
+ - [ ] All validation commands pass with zero errors
+ - [ ] Unit test coverage meets requirements (80%+)
+ - [ ] Integration tests verify end-to-end workflows
+ - [ ] Code follows project conventions and patterns
+ - [ ] No regressions in existing functionality
+ - [ ] Documentation is updated (if applicable)
+ - [ ] Performance meets requirements (if applicable)
+ - [ ] Security considerations addressed (if applicable)
+
+ ---
+
+ ## COMPLETION CHECKLIST
+
+ - [ ] All tasks completed in order
+ - [ ] Each task validation passed immediately
+ - [ ] All validation commands executed successfully
+ - [ ] Full test suite passes (unit + integration)
+ - [ ] No linting or type checking errors
+ - [ ] Manual testing confirms feature works
+ - [ ] Acceptance criteria all met
+ - [ ] Code reviewed for quality and maintainability
+
+ ---
+
+ ## NOTES
+
+ <Additional context, design decisions, trade-offs>
+ ```
+
+ ## Output Format
+
+ **Filename**: `.agents/plans/{kebab-case-descriptive-name}.md`
+
+ - Replace `{kebab-case-descriptive-name}` with a short, descriptive feature name
+ - Examples: `add-user-authentication.md`, `implement-search-api.md`, `refactor-database-layer.md`
+
+ **Directory**: Create `.agents/plans/` if it doesn't exist
+
+ ## Quality Criteria
+
+ ### Context Completeness ✓
+
+ - [ ] All necessary patterns identified and documented
+ - [ ] External library usage documented with links
+ - [ ] Integration points clearly mapped
+ - [ ] Gotchas and anti-patterns captured
+ - [ ] Every task has an executable validation command
+
+ ### Implementation Ready ✓
+
+ - [ ] Another developer could execute without additional context
+ - [ ] Tasks ordered by dependency (can execute top-to-bottom)
+ - [ ] Each task is atomic and independently testable
+ - [ ] Pattern references include specific file:line numbers
+
+ ### Pattern Consistency ✓
+
+ - [ ] Tasks follow existing codebase conventions
+ - [ ] New patterns justified with clear rationale
+ - [ ] No reinvention of existing patterns or utils
+ - [ ] Testing approach matches project standards
+
+ ### Information Density ✓
+
+ - [ ] No generic references (all specific and actionable)
+ - [ ] URLs include section anchors when applicable
+ - [ ] Task descriptions use codebase keywords
+ - [ ] Validation commands are non-interactively executable
+
+ ## Success Metrics
+
+ **One-Pass Implementation**: Execution agent can complete the feature without additional research or clarification
+
+ **Validation Complete**: Every task has at least one working validation command
+
+ **Context Rich**: The Plan passes the "No Prior Knowledge Test" - someone unfamiliar with the codebase can implement using only the Plan's content
+
+ **Confidence Score**: #/10 that execution will succeed on the first attempt
+
+ ## Report
+
+ After creating the Plan, provide:
+
+ - Summary of the feature and approach
+ - Full path to the created Plan file
+ - Complexity assessment
+ - Key implementation risks or considerations
+ - Estimated confidence score for one-pass success
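The output-format rules (kebab-case filename under `.agents/plans/`, directory created if missing) can be sketched as a small helper; `plan_path` is a hypothetical name:

```bash
# plan_path "Feature Title" — derive .agents/plans/<kebab-case>.md
# and ensure the directory exists (per the Output Format rules above)
plan_path() {
  # lowercase, then collapse every run of non-alphanumerics into a single dash
  name=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]' | tr -cs 'a-z0-9' '-')
  name=${name#-}   # strip a leading dash, if any
  name=${name%-}   # strip a trailing dash, if any
  mkdir -p .agents/plans
  printf '.agents/plans/%s.md' "$name"
}
```

For example, `plan_path "Add User Authentication"` yields `.agents/plans/add-user-authentication.md`, matching the first example above.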
@@ -0,0 +1,75 @@
+ ---
+ description: Prime agent with codebase understanding
+ ---
+
+ # Prime: Load Project Context
+
+ ## Objective
+
+ Build comprehensive understanding of the codebase by analyzing structure, documentation, and key files.
+
+ ## Process
+
+ ### 1. Analyze Project Structure
+
+ List all tracked files:
+ !`git ls-files`
+
+ Show directory structure:
+ On Linux, run: `tree -L 3 -I 'node_modules|__pycache__|.git|dist|build'`
+
+ ### 2. Read Core Documentation
+
+ - Read the PRD.md or similar spec file
+ - Read CLAUDE.md or similar global rules file
+ - Read README files at the project root and major directories
+ - Read any architecture documentation
+ - Read the Drizzle config so you understand the database schema
+
+ ### 3. Identify Key Files
+
+ Based on the structure, identify and read:
+ - Main entry points (main.py, index.ts, app.py, etc.)
+ - Core configuration files (pyproject.toml, package.json, tsconfig.json)
+ - Key model/schema definitions
+ - Important service or controller files
+
+ ### 4. Understand Current State
+
+ Check recent activity:
+ !`git log -10 --oneline`
+
+ Check current branch and status:
+ !`git status`
+
+ ## Output Report
+
+ Provide a concise summary covering:
+
+ ### Project Overview
+ - Purpose and type of application
+ - Primary technologies and frameworks
+ - Current version/state
+
+ ### Architecture
+ - Overall structure and organization
+ - Key architectural patterns identified
+ - Important directories and their purposes
+
+ ### Tech Stack
+ - Languages and versions
+ - Frameworks and major libraries
+ - Build tools and package managers
+ - Testing frameworks
+
+ ### Core Principles
+ - Code style and conventions observed
+ - Documentation standards
+ - Testing approach
+
+ ### Current State
+ - Active branch
+ - Recent changes or development focus
+ - Any immediate observations or concerns
+
+ **Make this summary easy to scan - use bullet points and clear headers.**
package/dist/index.js CHANGED
@@ -1,7 +1,7 @@
  #!/usr/bin/env node

  // src/index.ts
- import { existsSync, mkdirSync, readFileSync, writeFileSync } from "fs";
+ import { existsSync, mkdirSync, readFileSync, readdirSync, writeFileSync } from "fs";
  import { dirname, join, resolve } from "path";
  import { execSync } from "child_process";
  import { fileURLToPath } from "url";
@@ -121,6 +121,7 @@ function sleep(ms) {
  var __filename = fileURLToPath(import.meta.url);
  var __dirname = dirname(__filename);
  var TEMPLATES_DIR = resolve(__dirname, "..", "templates");
+ var COMMANDS_DIR = resolve(__dirname, "..", "commands");
  function isCommandAvailable(command) {
  try {
  const cmd = process.platform === "win32" ? `where ${command}` : `which ${command}`;
@@ -141,6 +142,13 @@ function readTemplate(relativePath) {
  const fullPath = join(TEMPLATES_DIR, relativePath);
  return readFileSync(fullPath, "utf-8");
  }
+ function readCommand(filename) {
+ const fullPath = join(COMMANDS_DIR, filename);
+ return readFileSync(fullPath, "utf-8");
+ }
+ function listCommands() {
+ return readdirSync(COMMANDS_DIR).filter((f) => f.endsWith(".md"));
+ }
  function writeFile(targetDir, relativePath, content) {
  const fullPath = join(targetDir, relativePath);
  const dir = dirname(fullPath);
@@ -261,27 +269,45 @@ async function main() {
  summaryParts.push(bgYellow(` ${missing.length} missing `));
  console.log(` ${summaryParts.join(" ")}`);
  console.log();
- const totalFiles = AGENTS.length + 1;
+ const totalAgentFiles = AGENTS.length + 1;
  let fileCount = 0;
- const skillsDir = ".agents/skills";
- const fileSpinner = createSpinner("Writing skill files...");
+ const agentsDir = ".opencode/agents";
+ const fileSpinner = createSpinner("Writing agent skill files...");
  fileSpinner.start();
  const orchestratorContent = readTemplate("skills/sdlc-orchestrator/SKILL.md");
- writeFile(targetDir, `${skillsDir}/sdlc-orchestrator/SKILL.md`, orchestratorContent);
+ writeFile(targetDir, `${agentsDir}/sdlc-orchestrator/SKILL.md`, orchestratorContent);
  fileCount++;
  await sleep(200);
  for (const agent of AGENTS) {
  const content = readTemplate(`skills/${agent.id}/SKILL.md`);
- writeFile(targetDir, `${skillsDir}/${agent.id}/SKILL.md`, content);
+ writeFile(targetDir, `${agentsDir}/${agent.id}/SKILL.md`, content);
  fileCount++;
  fileSpinner.stop();
  process.stdout.write(
- `\r ${cyan(SPINNER_FRAMES[fileCount % SPINNER_FRAMES.length])} Writing skill files... ${progressBar(fileCount, totalFiles)}`
+ `\r ${cyan(SPINNER_FRAMES[fileCount % SPINNER_FRAMES.length])} Writing agent skill files... ${progressBar(fileCount, totalAgentFiles)}`
  );
  await sleep(200);
  }
  process.stdout.write(`\r\x1B[K`);
- console.log(` ${green("\u2714")} Created ${bold(String(totalFiles))} skill files in ${cyan(skillsDir + "/")}`);
+ console.log(` ${green("\u2714")} Created ${bold(String(totalAgentFiles))} agent skill files in ${cyan(agentsDir + "/")}`);
+ console.log();
+ const commandFiles = listCommands();
+ const commandsDir = ".opencode/commands";
+ let cmdCount = 0;
+ const cmdSpinner = createSpinner("Writing command files...");
+ cmdSpinner.start();
+ for (const cmdFile of commandFiles) {
+ const content = readCommand(cmdFile);
+ writeFile(targetDir, `${commandsDir}/${cmdFile}`, content);
+ cmdCount++;
+ cmdSpinner.stop();
+ process.stdout.write(
+ `\r ${cyan(SPINNER_FRAMES[cmdCount % SPINNER_FRAMES.length])} Writing command files... ${progressBar(cmdCount, commandFiles.length)}`
+ );
+ await sleep(150);
+ }
+ process.stdout.write(`\r\x1B[K`);
+ console.log(` ${green("\u2714")} Created ${bold(String(cmdCount))} command files in ${cyan(commandsDir + "/")}`);
  console.log();
  const configSpinner = createSpinner("Configuring opencode.json...");
  configSpinner.start();
@@ -296,7 +322,7 @@ async function main() {
  console.log();
  console.log(dim(" \u2500\u2500\u2500 Files Created \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500"));
  console.log();
- console.log(` ${cyan("\u{1F4C1} .agents/skills/")}`);
+ console.log(` ${cyan("\u{1F4C1} .opencode/agents/")}`);
  console.log(` ${dim("\u251C\u2500\u2500")} ${cyan("sdlc-orchestrator/")} ${dim("\u2190 master routing skill")}`);
  for (let i = 0; i < AGENTS.length; i++) {
  const agent = AGENTS[i];
@@ -308,6 +334,15 @@ async function main() {
  );
  }
  console.log();
+ console.log(` ${cyan("\u{1F4C1} .opencode/commands/")}`);
+ const cmdFileList = listCommands();
+ for (let i = 0; i < cmdFileList.length; i++) {
+ const isLast = i === cmdFileList.length - 1;
+ const prefix = isLast ? "\u2514\u2500\u2500" : "\u251C\u2500\u2500";
+ const name = cmdFileList[i].replace(".md", "");
+ console.log(` ${dim(prefix)} ${cyan(cmdFileList[i])} ${dim(`\u2190 /${name}`)}`);
+ }
+ console.log();
  console.log(` ${cyan("\u{1F4C4} opencode.json")} ${dim("\u2190 permissions + /delegate commands")}`);
  console.log();
  console.log(dim(" \u2500\u2500\u2500 Usage \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500"));
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "sdlc-subagents",
- "version": "0.1.1",
+ "version": "0.1.2",
  "description": "Scaffold OpenCode as an orchestrator of multiple coding agent CLIs (Gemini, Copilot, Claude, Aider, Kimi, Cursor)",
  "bin": {
  "sdlc-subagents": "dist/index.js"
@@ -28,7 +28,8 @@
  "license": "MIT",
  "files": [
  "dist",
- "templates"
+ "templates",
+ "commands"
  ],
  "engines": {
  "node": ">=18"