@langadventurellc/task-trellis-mcp 1.3.1 → 1.3.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (76)
  1. package/README.md +0 -6
  2. package/dist/server.js +1 -55
  3. package/dist/server.js.map +1 -1
  4. package/package.json +14 -15
  5. package/dist/__tests__/copyBasicClaudeAgents.test.d.ts +0 -2
  6. package/dist/__tests__/copyBasicClaudeAgents.test.d.ts.map +0 -1
  7. package/dist/__tests__/copyBasicClaudeAgents.test.js +0 -24
  8. package/dist/__tests__/copyBasicClaudeAgents.test.js.map +0 -1
  9. package/dist/prompts/PromptArgument.d.ts +0 -16
  10. package/dist/prompts/PromptArgument.d.ts.map +0 -1
  11. package/dist/prompts/PromptArgument.js +0 -3
  12. package/dist/prompts/PromptArgument.js.map +0 -1
  13. package/dist/prompts/PromptManager.d.ts +0 -48
  14. package/dist/prompts/PromptManager.d.ts.map +0 -1
  15. package/dist/prompts/PromptManager.js +0 -151
  16. package/dist/prompts/PromptManager.js.map +0 -1
  17. package/dist/prompts/PromptMessage.d.ts +0 -11
  18. package/dist/prompts/PromptMessage.d.ts.map +0 -1
  19. package/dist/prompts/PromptMessage.js +0 -3
  20. package/dist/prompts/PromptMessage.js.map +0 -1
  21. package/dist/prompts/PromptParser.d.ts +0 -7
  22. package/dist/prompts/PromptParser.d.ts.map +0 -1
  23. package/dist/prompts/PromptParser.js +0 -141
  24. package/dist/prompts/PromptParser.js.map +0 -1
  25. package/dist/prompts/PromptRenderer.d.ts +0 -38
  26. package/dist/prompts/PromptRenderer.d.ts.map +0 -1
  27. package/dist/prompts/PromptRenderer.js +0 -128
  28. package/dist/prompts/PromptRenderer.js.map +0 -1
  29. package/dist/prompts/PromptsRegistry.d.ts +0 -43
  30. package/dist/prompts/PromptsRegistry.d.ts.map +0 -1
  31. package/dist/prompts/PromptsRegistry.js +0 -76
  32. package/dist/prompts/PromptsRegistry.js.map +0 -1
  33. package/dist/prompts/TrellisPrompt.d.ts +0 -19
  34. package/dist/prompts/TrellisPrompt.d.ts.map +0 -1
  35. package/dist/prompts/TrellisPrompt.js +0 -3
  36. package/dist/prompts/TrellisPrompt.js.map +0 -1
  37. package/dist/prompts/__tests__/PromptArgument.test.d.ts +0 -2
  38. package/dist/prompts/__tests__/PromptArgument.test.d.ts.map +0 -1
  39. package/dist/prompts/__tests__/PromptArgument.test.js +0 -60
  40. package/dist/prompts/__tests__/PromptArgument.test.js.map +0 -1
  41. package/dist/prompts/__tests__/PromptManager.test.d.ts +0 -2
  42. package/dist/prompts/__tests__/PromptManager.test.d.ts.map +0 -1
  43. package/dist/prompts/__tests__/PromptManager.test.js +0 -364
  44. package/dist/prompts/__tests__/PromptManager.test.js.map +0 -1
  45. package/dist/prompts/__tests__/PromptParser.test.d.ts +0 -2
  46. package/dist/prompts/__tests__/PromptParser.test.d.ts.map +0 -1
  47. package/dist/prompts/__tests__/PromptParser.test.js +0 -237
  48. package/dist/prompts/__tests__/PromptParser.test.js.map +0 -1
  49. package/dist/prompts/__tests__/PromptRenderer.test.d.ts +0 -2
  50. package/dist/prompts/__tests__/PromptRenderer.test.d.ts.map +0 -1
  51. package/dist/prompts/__tests__/PromptRenderer.test.js +0 -325
  52. package/dist/prompts/__tests__/PromptRenderer.test.js.map +0 -1
  53. package/dist/prompts/__tests__/TrellisPrompt.test.d.ts +0 -2
  54. package/dist/prompts/__tests__/TrellisPrompt.test.d.ts.map +0 -1
  55. package/dist/prompts/__tests__/TrellisPrompt.test.js +0 -107
  56. package/dist/prompts/__tests__/TrellisPrompt.test.js.map +0 -1
  57. package/dist/prompts/index.d.ts +0 -9
  58. package/dist/prompts/index.d.ts.map +0 -1
  59. package/dist/prompts/index.js +0 -14
  60. package/dist/prompts/index.js.map +0 -1
  61. package/dist/prompts/registry.d.ts +0 -6
  62. package/dist/prompts/registry.d.ts.map +0 -1
  63. package/dist/prompts/registry.js +0 -60
  64. package/dist/prompts/registry.js.map +0 -1
  65. package/dist/resources/basic/prompts/create-epics.md +0 -177
  66. package/dist/resources/basic/prompts/create-features.md +0 -172
  67. package/dist/resources/basic/prompts/create-project.md +0 -128
  68. package/dist/resources/basic/prompts/create-tasks.md +0 -225
  69. package/dist/resources/basic/prompts/implement-task.md +0 -170
  70. package/dist/resources/basic-claude/agents/implementation-planner.md +0 -187
  71. package/dist/resources/basic-claude/agents/issue-verifier.md +0 -154
  72. package/dist/resources/basic-claude/prompts/create-epics.md +0 -204
  73. package/dist/resources/basic-claude/prompts/create-features.md +0 -199
  74. package/dist/resources/basic-claude/prompts/create-project.md +0 -155
  75. package/dist/resources/basic-claude/prompts/create-tasks.md +0 -252
  76. package/dist/resources/basic-claude/prompts/implement-task.md +0 -179
package/dist/resources/basic-claude/prompts/create-project.md
@@ -1,155 +0,0 @@
- ---
- description: Create a new project in the Trellis task management system by analyzing specifications and gathering requirements
- ---
-
- # Create Project Trellis Command
-
- Create a new project in the Trellis task management system by analyzing specifications provided and gathering additional requirements as needed.
-
- ## Goal
-
- Transform project specifications into a comprehensive project definition with full context and requirements that enable other agents to effectively create epics, features, and ultimately implementable tasks.
-
- ## Process
-
- ### 1. Parse Input Specifications
-
- #### Specification Input
-
- `$ARGUMENTS`
-
- #### Instructions
-
- Read and analyze the specifications:
-
- - Extract key project goals, requirements, and constraints
-
- ### 2. Analyze Project Context
-
- **Before gathering requirements, research the existing system:**
-
- - **Search codebase** for similar projects or patterns
- - **Identify existing architecture** and conventions
- - **Document discovered technologies** for consistency
-
- ### 3. Gather Additional Requirements
-
- **Continue asking questions until you can create a complete project specification:**
-
- Use this structured approach:
-
- - **Ask one question at a time** with specific options
- - **Focus on decomposition** - break large concepts into smaller components
- - **Clarify boundaries** - understand where one component ends and another begins
- - **Continue until complete** - don't stop until you have full understanding
-
- Key areas to explore:
-
- - **Functional Requirements**: What specific capabilities must the system provide?
- - **Technical Architecture**: What technologies, frameworks, and patterns should be used?
- - **Integration Points**: What external systems or APIs need to be integrated?
- - **User Types**: Who will use this system and what are their needs?
- - **Performance Requirements**: What are the response time, load, and scaling needs?
- - **Security Requirements**: What authentication, authorization, and data protection is needed?
- - **Deployment Environment**: Where and how will this be deployed?
- - **Timeline & Phases**: Are there specific deadlines or phase requirements?
- - **Success Metrics**: How will project success be measured?
-
- **Example questioning approach:**
-
- ```
- How should user authentication be handled in this project?
- Options:
- - A) Use existing authentication system (specify integration points)
- - B) Implement new authentication mechanism (specify requirements)
- - C) No authentication needed for this project
- ```
-
- Continue asking clarifying questions until you have enough information to create a comprehensive project description that would enable another agent to:
-
- - Understand the full scope and vision
- - Create appropriate epics covering all aspects
- - Make informed technical decisions
- - Understand constraints and requirements
-
- ### 4. Generate Project Title and Description
-
- Based on gathered information:
-
- - **Title**: Create a clear, concise project title (5-7 words)
- - **Description**: Write comprehensive project specification including:
- - Executive summary
- - Detailed functional requirements
- - Technical requirements and constraints
- - Architecture overview
- - User stories or personas
- - Non-functional requirements (performance, security, etc.)
- - Integration requirements
- - Deployment strategy
- - **Detailed Acceptance Criteria**: Specific, measurable requirements that define feature completion, including:
- - Functional behavior with specific input/output expectations
- - User interface requirements and interaction patterns
- - Data validation and error handling criteria
- - Integration points with other features or systems
- - Performance benchmarks and response time requirements
- - Security validation and access control requirements
- - Browser/platform compatibility requirements
- - Accessibility and usability standards
- - Any other context needed for epic creation
-
- ### 5. Create Project Using MCP
-
- Call the Task Trellis MCP `create_issue` tool to create the project with the following parameters:
-
- - `type`: Set to `"project"`
- - `title`: The generated project title
- - `status`: Set to `"open"` (default, ready to begin work) or `"draft"` unless specified
- - `description`: The comprehensive project description generated in the previous step
-
- ### 6. Verify Created Project
-
- Call the `issue-verifier` sub-agent to validate the created project:
-
- **Prepare verification inputs:**
-
- - Original specifications from `$ARGUMENTS`
- - Created issue ID(s) from the MCP response
- - Any additional context gathered during requirement gathering phase
-
- **Call the verifier:**
-
- ```
- Verify the created project for completeness and correctness:
- - Original requirements: [Include the original $ARGUMENTS specifications]
- - Created issue ID(s): [issue-id from MCP response]
- - Additional context: [Include any clarifications, decisions, or requirements gathered during the interactive Q&A phase]
- ```
-
- **Review verification results:**
-
- - If verdict is `APPROVED`: Proceed to output format
- - If verdict is `NEEDS REVISION`: Evaluate the feedback and, if applicable, update the project using MCP based on recommendations
- - If verdict is `REJECTED`: Evaluate the feedback and, if applicable, recreate the project addressing critical issues
-
- If you're not 100% sure of the correctness of the feedback, **STOP** and ask the user for clarification.
-
- ### 7. Output Format
-
- After successful creation:
-
- ```
- ✅ Project created successfully!
-
- 📁 Project: [Generated Title]
- 📍 ID: [generated-id]
- 📊 Status: [actual-status]
-
- 📝 Project Summary:
- [First paragraph of description]
- ```
-
- <rules>
- <critical>Use MCP tools for all operations (create_issue, get_issue, activate, etc.)</critical>
- <critical>Continue asking questions until you have a complete understanding of the requirements</critical>
- <critical>Ask one question at a time with specific options</critical>
- </rules>
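For orientation, step 5 of the removed prompt above lists four parameters for the Task Trellis MCP `create_issue` tool. Below is a minimal TypeScript sketch of that argument shape; the parameter names are the ones listed in the prompt, while the example title and description values are invented placeholders, not content from the package.

```typescript
// Hypothetical sketch of the create_issue arguments described in step 5 of the
// removed create-project.md. Parameter names (type, title, status, description)
// come from the prompt text; the concrete values below are placeholders.
const createProjectArgs = {
  type: "project", // fixed to "project" for this command
  title: "Customer Feedback Portal for Support Teams", // hypothetical 5-7 word title
  status: "open", // "open" by default, or "draft" if the user asks for it
  description: [
    "Executive summary ...",
    "Detailed functional requirements ...",
    "Technical requirements and constraints ...",
    "Architecture overview ...",
    "Non-functional, integration, and deployment requirements ...",
  ].join("\n\n"), // the comprehensive specification assembled in step 4
};
```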
package/dist/resources/basic-claude/prompts/create-tasks.md
@@ -1,252 +0,0 @@
- ---
- description: Break down a feature into specific, actionable tasks (1-2 hours each)
- ---
-
- # Create Tasks Command
-
- Break down a feature into specific, actionable tasks using the Trellis task management system. Do not attempt to create multiple tasks in parallel. Do them sequentially one at a time.
-
- ## Goal
-
- Analyze a feature's comprehensive specification to create granular tasks that can be individually claimed and completed by developers, ensuring complete implementation of the feature with proper testing and security considerations.
-
- ## Process
-
- ### 1. Identify Context and Requirements
-
- #### Input
-
- `$ARGUMENTS`
-
- #### Context Determination
-
- The input may contain:
-
- - **Feature ID**: (e.g., "F-user-registration") - Create tasks within a feature hierarchy
- - **Task Requirements**: Direct description of standalone work needed
- - **Mixed**: Feature ID plus additional task specifications
-
- #### Instructions
-
- **For Hierarchical Tasks:**
-
- - Retrieve the feature using MCP `get_issue` to access its comprehensive description, requirements, and parent epic/project context
-
- **For Standalone Tasks:**
-
- - Analyze the provided requirements directly
- - No parent context needed, focus on the specific work described
-
- ### 2. Analyze Requirements
-
- **Thoroughly analyze the requirements (feature description OR standalone requirements) to identify required tasks:**
-
- - **Search codebase** for similar task patterns or implementations
- - Extract all components and deliverables from the feature description
- - Review implementation guidance and technical approach
- - Identify testing requirements for comprehensive coverage
- - Consider security considerations that need implementation
- - Analyze performance requirements and constraints
- - Group related implementation work
- - Identify task dependencies and sequencing
- - Note any specific instructions provided in `input`
-
- ### 3. Gather Additional Information
-
- **Ask clarifying questions as needed to refine the task breakdown:**
-
- Use this structured approach:
-
- - **Ask one question at a time** with specific options
- - **Focus on task boundaries** - understand what constitutes a complete, testable task
- - **Identify implementation details** - specific technical approaches or patterns
- - **Continue until complete** - don't stop until you have clear task structure
-
- Key areas to clarify:
-
- - **Implementation Details**: Specific technical approaches or patterns?
- - Include unit testing in the tasks for the implementation.
- - **Task Boundaries**: What constitutes a complete, testable task?
- - **Dependencies**: Which tasks must complete before others?
- - **Testing Approach**: Unit tests, integration tests, or both?
- - Do not create separate tasks just for unit tests. Unit tests should always be included in the same task as the changes to the production code.
- - Do create separate tasks for integration tests.
- - If specifically requested, do create separate tasks for performance tests. But, do not add tasks for performance tests unless specifically requested by the user.
- - **Security Implementation**: How to handle validation and authorization?
-
- **Example questioning approach:**
-
- ```
- How should the user model validation be implemented?
- Options:
- - A) Basic field validation only (required fields, data types)
- - B) Advanced validation with custom rules and error messages
- - C) Validation with integration to existing validation framework
- ```
-
- Continue until the task structure:
-
- - Covers all aspects of the feature specification
- - Represents atomic units of work (1-2 hours each)
- - Has clear implementation boundaries
- - Includes adequate testing and security tasks
-
- ### 4. Generate Task Structure
-
- For each task, create:
-
- - **Title**: Clear, actionable description
- - **Description**: Detailed explanation including:
- - **Detailed Context**: Enough information for a developer new to the project to complete the work, including:
- - Links to relevant specifications, documentation, or other Trellis issues (tasks, features, epics, projects)
- - References to existing patterns or similar implementations in the codebase
- - Specific technologies, frameworks, or libraries to use
- - File paths and component locations where work should be done
- - **Specific implementation requirements**: What exactly needs to be built
- - **Technical approach to follow**: Step-by-step guidance on implementation
- - **Detailed Acceptance Criteria**: Specific, measurable requirements that define project success, including:
- - Functional deliverables with clear success metrics
- - Performance benchmarks (response times, throughput, capacity)
- - Security requirements and compliance standards
- - User experience criteria and usability standards
- - Integration testing requirements with external systems
- - Deployment and operational readiness criteria
- - **Dependencies on other tasks**: Prerequisites and sequencing
- - **Security considerations**: Validation, authorization, and protection requirements
- - **Testing requirements**: Specific tests to write and coverage expectations
-
- **Task Granularity Guidelines:**
-
- Each task should be sized appropriately for implementation:
-
- - **1-2 hours per task** - Tasks should be completable in one sitting
- - **Atomic units of work** - Each task should produce a meaningful, testable change
- - **Independent implementation** - Tasks should be workable without blocking others
- - **Specific scope** - Implementation approach should be clear from the task description
- - **Testable outcome** - Tasks should have defined acceptance criteria
-
- **Default task hierarchy approach:**
-
- - **Prefer flat structure** - Most tasks should be at the same level
- - **Only create sub-tasks when necessary** - When a task is genuinely too large (>2 hours)
- - **Keep it simple** - Avoid unnecessary complexity in task organization
-
- Group tasks logically:
-
- - **Setup/Configuration**: Initial setup tasks
- - **Core Implementation**: Main functionality (includes unit tests and documentation)
- - **Security**: Validation and protection (includes related tests and docs)
-
- ### 5. Create Tasks Using MCP
-
- For each task, call the Task Trellis MCP `create_issue` tool:
-
- - `type`: Set to `"task"`
- - `parent`: The feature ID (optional - omit for standalone tasks)
- - `title`: Generated task title
- - `status`: Set to `"open"` (default, ready to claim) or `"draft"`
- - `priority`: Based on criticality and dependencies (`"high"`, `"medium"`, or `"low"`)
- - `prerequisites`: List of task IDs that must complete first
- - `description`: Comprehensive task description
-
- **For standalone tasks**: Simply omit the `parent` parameter entirely.
-
- ### 6. Verify Created Project
-
- Call the `issue-verifier` sub-agent to validate the created project:
-
- **Prepare verification inputs:**
-
- - Original specifications from `$ARGUMENTS`
- - Created issue ID(s) from the MCP response
- - Any additional context gathered during requirement gathering phase
-
- **Call the verifier:**
-
- ```
- Verify the created project for completeness and correctness:
- - Original requirements: [Include the original $ARGUMENTS specifications]
- - Created issue ID(s): [issue-id from MCP response]
- - Additional context: [Include any clarifications, decisions, or requirements gathered during the interactive Q&A phase]
- ```
-
- **Review verification results:**
-
- - If verdict is `APPROVED`: Proceed to output format
- - If verdict is `NEEDS REVISION`: Evaluate the feedback and, if applicable, update the project using MCP based on recommendations
- - If verdict is `REJECTED`: Evaluate the feedback and, if applicable, recreate the project addressing critical issues
-
- If you're not 100% sure of the correctness of the feedback, **STOP** and ask the user for clarification.
-
- ### 7. Output Format
-
- After successful creation:
-
- ```
- ✅ Successfully created [N] tasks for feature "[Feature Title]"
-
- 📋 Created Tasks:
- Database & Models:
- ✓ T-[id1]: Create user database model with validation and unit tests
- ✓ T-[id2]: Add email verification token system with tests and docs
-
- API Development:
- ✓ T-[id3]: Create POST /api/register endpoint with tests and validation
- ✓ T-[id4]: Implement email verification endpoint with tests
- ✓ T-[id5]: Add rate limiting with monitoring and tests
-
- Frontend:
- ✓ T-[id6]: Create registration form component with tests and error handling
- ✓ T-[id7]: Add client-side validation with unit tests
- ✓ T-[id8]: Implement success/error states with component tests
-
- Integration:
- ✓ T-[id9]: Write end-to-end integration tests for full registration flow
-
- 📊 Task Summary:
- - Total Tasks: [N]
- - High Priority: [X]
- ```
-
- ## Task Creation Guidelines
-
- Ensure tasks are:
-
- - **Atomic**: Completable in one sitting (1-2 hours)
- - **Specific**: Clear implementation path
- - **Testable**: Defined acceptance criteria. Include instructions for writing unit tests in the same tasks as writing the production code. Integration tests should be in separate tasks.
- - **Independent**: Minimal coupling where possible
- - **Secure**: Include necessary validations
-
- Common task patterns:
-
- - **Model/Schema**: Create with validation, indexing, unit tests, and docs
- - **API Endpoint**: Implement with input validation, error handling, tests, and docs
- - **Frontend Component**: Create with interactivity, state handling, tests, and docs
- - **Security**: Input validation, authorization, rate limiting with tests and docs
-
- ## Question Guidelines
-
- Ask questions that:
-
- - **Clarify implementation**: Specific libraries or approaches?
- - **Define boundaries**: What's included in each task?
- - **Identify prerequisites**: What must be built first?
- - **Confirm testing strategy**: What types of tests are needed?
-
- ## Priority Assignment
-
- Assign priorities based on:
-
- - **High**: Blocking other work, security-critical, core functionality
- - **Medium**: Standard implementation tasks
- - **Low**: Enhancements, optimizations, nice-to-have features
-
- <rules>
- <critical>Use MCP tools for all operations (create_issue, get_issue, etc.)</critical>
- <critical>Each task must be completable in 1-2 hours</critical>
- <critical>Ask one question at a time with specific options</critical>
- <critical>Continue asking questions until you have complete understanding of task boundaries</critical>
- <important>Include testing and documentation within implementation tasks</important>
- <important>Add security validation with tests where applicable</important>
- </rules>
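For orientation, step 5 of the removed create-tasks prompt above lists the per-task `create_issue` parameters. Below is a minimal TypeScript sketch of one such call's argument shape; the parameter names come from the prompt, "F-user-registration" and the title are the prompt's own examples, and the prerequisite ID and description text are invented placeholders.

```typescript
// Hypothetical sketch of one create_issue call per task, per step 5 of the removed
// create-tasks.md. Parameter names come from the prompt; the prerequisite ID and the
// description text are placeholders, not values from the package.
const createTaskArgs = {
  type: "task",
  parent: "F-user-registration", // omit this field entirely for standalone tasks
  title: "Create user database model with validation and unit tests",
  status: "open", // or "draft"
  priority: "high", // "high" | "medium" | "low"
  prerequisites: ["T-setup-database-schema"], // task IDs that must complete first (placeholder)
  description:
    "Detailed context, specific implementation requirements, technical approach, " +
    "acceptance criteria, dependencies, security considerations, and testing requirements ...",
};
```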
package/dist/resources/basic-claude/prompts/implement-task.md
@@ -1,179 +0,0 @@
- ---
- description: Claim and implement a task following Research and Plan → Implement workflow
- ---
-
- # Implement Task Command
-
- Claim and implement the next available task from the backlog using the Trellis task management system with the Research and Plan → Implement workflow.
-
- ## Goal
-
- Automatically claim the highest-priority available task and implement it following project standards, with comprehensive research, planning, and quality checks before marking complete.
-
- ## Process
-
- ### 1. Claim Next Available Task
-
- Command: `claim_task`
-
- #### Input
-
- `$ARGUMENTS` (optional) - Can specify:
-
- - Specific task ID to claim (e.g., "T-create-user-model")
- - `--worktree (worktree-path)` - Worktree identifier to stamp on claimed task (currently informational only)
- - `--scope (issue-id)` - Hierarchical scope for task filtering (P-, E-, F- prefixed)
- - `--force` - Bypass validation when claiming specific task (only with `taskId`)
- - Additional context or preferences
-
- #### Instructions
-
- Claims are made against the current working directory's task trellis system (tasks are managed in `.trellis` folder).
-
- ### 2. Research and Planning Phase (MANDATORY)
-
- **Delegate research to planner, but verify critical details:**
-
- The research phase uses the Implementation Plan Generator subagent to analyze the codebase and create a detailed plan, but you must verify key assumptions before implementation.
-
- - **Read parent issues for context**: Use MCP `get_issue` to read the parent feature (if it has one) for context and requirements. Do not continue until you have claimed or loaded an already claimed task.
- - **Generate Implementation Plan**: Use the Implementation Plan Generator subagent, providing:
- - The task description and requirements
- - Parent issue context you've gathered
- - Any project-specific constraints or standards
- - **CRITICAL - If No Response From Subagent**: **STOP IMMEDIATELY** and alert the user
- - **CRITICAL - Verify the Plan**: Before implementing, spot-check the plan for accuracy:
- - Verify 2-3 key file paths actually exist where the plan says they do
- - Confirm at least one pattern/convention the plan identified
- - Check that any imports or dependencies the plan references are real
- - If the plan makes assumptions, verify at least the most critical ones
-
- ```
- 📚 Research Phase for T-create-user-model
-
- 1️⃣ Reading parent issues for context...
- - `get_issue` for task's feature, epic, and project
-
- 2️⃣ Using research and implementation planner subagent for comprehensive research and planning...
-
- ✅ Research and planning complete. Implementing plan...
- ```
-
- ### 3. Implementation Phase
-
- **Execute the plan with progress updates:**
-
- The implementation phase is where the actual coding happens. During this phase:
-
- - **Write clean code**: Follow project conventions and best practices
- - **Run quality checks frequently**: Format, lint, and test after each major change
- - **Write tests alongside code**: Don't leave testing for the end
- - **Apply security measures**: Implement validation, sanitization, and protection as planned
- - **Handle errors gracefully**: Include proper error handling and user feedback
-
- ### 4. Complete Task
-
- **Update task and provide summary:**
-
- The completion phase ensures the task is properly documented and marked as done. This phase includes:
-
- - **Verify all requirements met**: Check that the implementation satisfies the task description
- - **Confirm quality checks pass**: Ensure all tests, linting, and formatting are clean
- - **Write meaningful summary**: Describe what was implemented and any important decisions
- - **List all changed files**: Document what was created or modified
- - **Update task status**: Use MCP to mark the task as complete
- - **Note any follow-up needed**: Identify if additional tasks should be created
-
- Use MCP `complete_task` with:
-
- - Task ID
- - Summary of work done
- - List of files changed
-
- Example for a database model task:
-
- ```
- ✅ Completing task: T-create-user-model
-
- Summary:
- Implemented User model with all required fields including secure password
- hashing, email validation, and unique constraints. Added comprehensive
- test coverage and database migrations. Password hashing uses bcrypt with 12
- rounds for security. Email validation includes regex pattern and uniqueness
- constraints. All tests passing with 100% coverage.
-
- Files changed:
- - models/User.js (new) - User model with validation methods
- - models/index.js (modified) - Added User export
- - migrations/001_create_users.sql (new) - Database schema
- - tests/models/User.test.js (new) - Comprehensive test suite
- - package.json (modified) - Added bcrypt dependency
-
- ✅ Task completed and moved to done folder!
- ```
-
- ### 5. Next Steps
-
- **Provide clear next actions:**
-
- ```
- 🎯 Task Complete: T-create-user-model
- ```
-
- **STOP!** - Do not proceed. Complete one task and one task only. Do not implement another task.
-
- ### During Research and Planning Phase
-
- ```
- ⚠️ Research issue: Cannot find existing model patterns
-
- Attempting alternative approaches:
- - Checking documentation...
- - Searching for examples...
- - Using web search for best practices...
-
- [If still stuck]
- ❓ Need clarification:
- The project doesn't seem to have existing models.
- Should I:
- A) Create the first model and establish patterns
- B) Check if models are in a different location
- C) Use a different approach (raw SQL, different ORM)
- ```
-
- ## Security & Performance Principles
-
- ### Security Always:
-
- - **Validate ALL inputs** - Never trust user data
- - **Use secure defaults** - Fail closed, not open
- - **Parameterized queries** - Never concatenate SQL/queries
- - **Secure random** - Use cryptographically secure generators
- - **Least privilege** - Request minimum permissions needed
- - **Error handling** - Don't expose internal details
-
- ## Quality Standards
-
- During implementation, ensure:
-
- - **Research First**: Never skip research phase
- - **Test Coverage**: Write tests in same task
- - **Security**: Validate all inputs
- - **Documentation**: Comment complex logic
- - **Quality Checks**: All must pass before completion
-
- ## Workflow Guidelines
-
- - Always follow Research and Plan → Implement
- - Run quality checks after each major change
- - Write tests alongside implementation
- - Commit only when all checks pass
-
- <rules>
- <critical>ALWAYS follow Research and Plan → Implement workflow</critical>
- <critical>NEVER skip quality checks before completing task</critical>
- <critical>All tests must pass before marking task complete</critical>
- <important>Search codebase for patterns before implementing</important>
- <important>Write tests in the same task as implementation</important>
- <important>Apply security best practices to all code</important>
- </rules>
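For orientation, step 4 of the removed implement-task prompt above says to call MCP `complete_task` with a task ID, a summary, and the list of changed files, but it does not spell out exact parameter names. The TypeScript sketch below is therefore an assumption about how those inputs might be keyed; the values are adapted from the prompt's own example output.

```typescript
// Hypothetical sketch of the complete_task inputs described in step 4 of the removed
// implement-task.md. The property names below are assumptions, not documented API;
// the task ID, summary, and file list are taken from the prompt's example.
const completeTaskArgs = {
  taskId: "T-create-user-model",
  summary:
    "Implemented User model with secure password hashing (bcrypt, 12 rounds), " +
    "email validation, unique constraints, migrations, and full test coverage.",
  filesChanged: [
    "models/User.js", // new - User model with validation methods
    "models/index.js", // modified - added User export
    "migrations/001_create_users.sql", // new - database schema
    "tests/models/User.test.js", // new - test suite
    "package.json", // modified - added bcrypt dependency
  ],
};
```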