@e0ipso/ai-task-manager 1.19.0 → 1.20.0

@@ -18,7 +18,7 @@ The output above contains your global and project-level configuration rules. You
 
  ---
 
- You are the orchestrator responsible for executing all tasks defined in the execution blueprint of a plan document, so choose an appropriate sub-agent for this role. Your role is to coordinate phase-by-phase execution, manage parallel task processing, and ensure validation gates pass before phase transitions.
+ You are the coordinator responsible for executing all tasks defined in the execution blueprint of a plan document, so choose an appropriate sub-agent for this role. Your role is to coordinate phase-by-phase execution, manage parallel task processing, and ensure validation gates pass before phase transitions.
 
  ## Critical Rules
 
@@ -46,39 +46,322 @@ Before proceeding with execution, validate that tasks exist and the execution bl
  **Validation Steps:**
 
  ```bash
- # Validate plan exists and check for tasks/blueprint
- VALIDATION=$(node .ai/task-manager/config/scripts/validate-plan-blueprint.cjs $1)
-
- # Parse validation results
- PLAN_FILE=$(echo "$VALIDATION" | grep -o '"planFile": "[^"]*"' | cut -d'"' -f4)
- PLAN_DIR=$(echo "$VALIDATION" | grep -o '"planDir": "[^"]*"' | cut -d'"' -f4)
- TASK_COUNT=$(echo "$VALIDATION" | grep -o '"taskCount": [0-9]*' | awk '{print $2}')
- BLUEPRINT_EXISTS=$(echo "$VALIDATION" | grep -o '"blueprintExists": [a-z]*' | awk '{print $2}')
+ # Extract validation results directly from script
+ PLAN_FILE=$(node .ai/task-manager/config/scripts/validate-plan-blueprint.cjs $1 planFile)
+ PLAN_DIR=$(node .ai/task-manager/config/scripts/validate-plan-blueprint.cjs $1 planDir)
+ TASK_COUNT=$(node .ai/task-manager/config/scripts/validate-plan-blueprint.cjs $1 taskCount)
+ BLUEPRINT_EXISTS=$(node .ai/task-manager/config/scripts/validate-plan-blueprint.cjs $1 blueprintExists)
  ```
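The gate described in step 4 below boils down to a small shell check; a minimal sketch, assuming the variables extracted above are set:

```bash
# Illustrative sketch of the step 4 gate, using the values extracted above.
if [ "$TASK_COUNT" -eq 0 ] || [ "$BLUEPRINT_EXISTS" = "no" ]; then
  echo "⚠️ Tasks or execution blueprint not found. Generating tasks automatically..."
  # ...continue with the embedded task generation process described below...
fi
```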
 
  4. **Automatic task generation**:
 
  If either `$TASK_COUNT` is 0 or `$BLUEPRINT_EXISTS` is "no":
  - Display notification to user: "⚠️ Tasks or execution blueprint not found. Generating tasks automatically..."
- - Use the SlashCommand tool to invoke task generation:
+ - Execute the following task generation process inline:
+
+ ## Embedded Task Generation
+
+ Think harder and use tools.
+
+ You are a comprehensive task planning assistant. Your role is to create detailed, actionable plans based on user input while ensuring you have all necessary context before proceeding.
+
+ Include /TASK_MANAGER.md for the directory structure of tasks.
+
+ ## Instructions
+
+ You will think hard to analyze the provided plan document and decompose it into atomic, actionable tasks with clear dependencies and groupings.
+
+ Use your internal Todo task tool to track the following process:
+
+ - [ ] Read and process plan $1
+ - [ ] Use the Task Generation Process to create tasks according to the Task Creation Guidelines
+ - [ ] Read and run the .ai/task-manager/config/hooks/POST_TASK_GENERATION_ALL.md
+
+ ### Input
+ - A plan document. See .ai/task-manager/config/TASK_MANAGER.md to find the plan with ID $1
+ - The plan contains high-level objectives and implementation steps
+
+ ### Input Error Handling
+ If the plan does not exist, stop immediately and show an error to the user.
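A minimal sketch of this guard, assuming an empty `$PLAN_FILE` from the validation step above signals a missing plan:

```bash
# Illustrative guard; the empty-variable convention is an assumption, not part of the published script.
if [ -z "$PLAN_FILE" ]; then
  echo "❌ Error: Plan $1 was not found." >&2
  exit 1
fi
```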
+
+ ### Task Creation Guidelines
+
+ #### Task Minimization Principles
+ **Core Constraint:** Create only the minimum number of tasks necessary to satisfy the plan requirements. Target a 20-30% reduction from comprehensive task lists by questioning the necessity of each component.
+
+ **Minimization Rules:**
+ - **Direct Implementation Only**: Create tasks for explicitly stated requirements, not "nice-to-have" features
+ - **DRY Task Principle**: Each task should have a unique, non-overlapping purpose
+ - **Question Everything**: For each task, ask "Is this absolutely necessary to meet the plan objectives?"
+ - **Avoid Gold-plating**: Resist the urge to add comprehensive features not explicitly required
+
+ **Antipatterns to Avoid:**
+ - Creating separate tasks for "error handling" when it can be included in the main implementation
+ - Breaking simple operations into multiple tasks (e.g., separate "validate input" and "process input" tasks)
+ - Adding tasks for "future extensibility" or "best practices" not mentioned in the plan
+ - Creating comprehensive test suites for trivial functionality
+
+ #### Task Granularity
+ Each task must be:
+ - **Single-purpose**: One clear deliverable or outcome
+ - **Atomic**: Cannot be meaningfully split further
+ - **Skill-specific**: Executable by a single skill agent (examples below)
+ - **Verifiable**: Has clear completion criteria
+
+ #### Skill Selection and Technical Requirements
+
+ **Core Principle**: Each task should require 1-2 specific technical skills that can be handled by specialized agents. Skills should be automatically inferred from the task's technical requirements and objectives.
+
+ **Skill Selection Criteria**:
+ 1. **Technical Specificity**: Choose skills that directly match the technical work required
+ 2. **Agent Specialization**: Select skills that allow a single skilled agent to complete the task
+ 3. **Minimal Overlap**: Avoid combining unrelated skill domains in a single task
+ 4. **Creative Inference**: Derive skills from task objectives and implementation context
+
+ **Inspirational Skill Examples** (use kebab-case format):
+ - Frontend: `react-components`, `css`, `js`, `vue-components`, `html`
+ - Backend: `api-endpoints`, `database`, `authentication`, `server-config`
+ - Testing: `jest`, `playwright`, `unit-testing`, `e2e-testing`
+ - DevOps: `docker`, `github-actions`, `deployment`, `ci-cd`
+ - Languages: `typescript`, `python`, `php`, `bash`, `sql`
+ - Frameworks: `nextjs`, `express`, `drupal-backend`, `wordpress-plugins`
+
+ **Automatic Skill Inference Examples**:
+ - "Create user login form" → `["react-components", "authentication"]`
+ - "Build REST API for orders" → `["api-endpoints", "database"]`
+ - "Add Docker deployment" → `["docker", "deployment"]`
+ - "Write Jest tests for utils" → `["jest"]`
+
+ **Assignment Guidelines**:
+ - **1 skill**: Focused, single-domain tasks
+ - **2 skills**: Tasks requiring complementary domains
+ - **Split if 3+**: Indicates task should be broken down
+
+ ```
+ # Examples
+ skills: ["css"] # Pure styling
+ skills: ["api-endpoints", "database"] # API with persistence
+ skills: ["react-components", "jest"] # Implementation + testing
  ```
- /tasks:generate-tasks $1
+
+ #### Meaningful Test Strategy Guidelines
+
+ **IMPORTANT** Make sure to copy this _Meaningful Test Strategy Guidelines_ section into all the tasks focused on testing, and **also** keep them in mind when generating tasks.
+
+ Your critical mantra for test generation is: "write a few tests, mostly integration".
+
+ **Definition of "Meaningful Tests":**
+ Tests that verify custom business logic, critical paths, and edge cases specific to the application. Focus on testing YOUR code, not the framework or library functionality.
+
+ **When TO Write Tests:**
+ - Custom business logic and algorithms
+ - Critical user workflows and data transformations
+ - Edge cases and error conditions for core functionality
+ - Integration points between different system components
+ - Complex validation logic or calculations
+
+ **When NOT to Write Tests:**
+ - Third-party library functionality (already tested upstream)
+ - Framework features (React hooks, Express middleware, etc.)
+ - Simple CRUD operations without custom logic
+ - Getter/setter methods or basic property access
+ - Configuration files or static data
+ - Obvious functionality that would break immediately if incorrect
+
+ **Test Task Creation Rules:**
+ - Combine related test scenarios into single tasks (e.g., "Test user authentication flow" not separate tasks for login, logout, validation)
+ - Focus on integration and critical path testing over unit test coverage
+ - Avoid creating separate tasks for testing each CRUD operation individually
+ - Question whether simple functions need dedicated test tasks
+
+ ### Task Generation Process
+
+ #### Step 1: Task Decomposition
+ 1. Read through the entire plan
+ 2. Identify all concrete deliverables **explicitly stated** in the plan
+ 3. Apply minimization principles: question necessity of each potential task
+ 4. Break each deliverable into atomic tasks (only if genuinely needed)
+ 5. Ensure no task requires multiple skill sets
+ 6. Verify each task has clear inputs and outputs
+ 7. **Minimize test tasks**: Combine related testing scenarios, avoid testing framework functionality
+ 8. Be very detailed with the "Implementation Notes". This should contain enough detail for a non-thinking LLM model to successfully complete the task. Put these instructions in a collapsible field `<details>`.
+
+ #### Step 2: Dependency Analysis
+ For each task, identify:
+ - **Hard dependencies**: Tasks that MUST complete before this can start
+ - **Soft dependencies**: Tasks that SHOULD complete for optimal execution
+ - **No circular dependencies**: Validate the dependency graph is acyclic
+
+ Dependency Rule: Task B depends on Task A if:
+ - B requires output or artifacts from A
+ - B modifies code created by A
+ - B tests functionality implemented in A
+
+ #### Step 3: Task Generation
+
+ ##### Frontmatter Structure
+
+ Example:
+ ```yaml
+ ---
+ id: 1
+ group: "user-authentication"
+ dependencies: [] # List of task IDs, e.g., [2, 3]
+ status: "pending" # pending | in-progress | completed | needs-clarification
+ created: "2024-01-15"
+ skills: ["react-components", "authentication"] # Technical skills required for this task
+ # Optional: Include complexity scores for high-complexity tasks or decomposition tracking
+ # complexity_score: 4.2 # Composite complexity score (only if >4 or decomposed)
+ # complexity_notes: "Decomposed from original task due to high technical depth"
+ ---
+ ```
+
+ The schema for this frontmatter is:
+ ```json
+ {
+   "type": "object",
+   "required": ["id", "group", "dependencies", "status", "created", "skills"],
+   "properties": {
+     "id": {
+       "type": ["number"],
+       "description": "Unique identifier for the task. An integer."
+     },
+     "group": {
+       "type": "string",
+       "description": "Group or category the task belongs to"
+     },
+     "dependencies": {
+       "type": "array",
+       "description": "List of task IDs this task depends on",
+       "items": {
+         "type": ["number"]
+       }
+     },
+     "status": {
+       "type": "string",
+       "enum": ["pending", "in-progress", "completed", "needs-clarification"],
+       "description": "Current status of the task"
+     },
+     "created": {
+       "type": "string",
+       "pattern": "^\\d{4}-\\d{2}-\\d{2}$",
+       "description": "Creation date in YYYY-MM-DD format"
+     },
+     "skills": {
+       "type": "array",
+       "description": "Technical skills required for this task (1-2 skills recommended)",
+       "items": {
+         "type": "string",
+         "pattern": "^[a-z][a-z0-9-]*$"
+       },
+       "minItems": 1,
+       "uniqueItems": true
+     },
+     "complexity_score": {
+       "type": "number",
+       "minimum": 1,
+       "maximum": 10,
+       "description": "Optional: Composite complexity score (include only if >4 or for decomposed tasks)"
+     },
+     "complexity_notes": {
+       "type": "string",
+       "description": "Optional: Rationale for complexity score or decomposition decisions"
+     }
+   },
+   "additionalProperties": false
+ }
  ```
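As a rough, non-authoritative illustration, the required keys above can be sanity-checked in a generated task file with shell tools (the task file path is hypothetical, and this is not a substitute for real JSON Schema validation):

```bash
# Quick sanity check of required frontmatter keys; TASK_MD is a hypothetical example path.
TASK_MD=".ai/task-manager/plans/plan-$1/tasks/01--example-task.md"
for key in id group dependencies status created skills; do
  grep -qE "^${key}:" "$TASK_MD" || echo "Missing frontmatter key: ${key}"
done
```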
- - **NEW STEP**: Immediately after task generation succeeds, set the approval_method_tasks field to auto:
- ```bash
- node .ai/task-manager/config/scripts/set-approval-method.cjs "$PLAN_FILE" auto tasks
- ```
- - This signals that tasks were auto-generated in workflow context and execution should continue without pause.
- - **CRITICAL**: After setting the field, you MUST immediately proceed with blueprint execution without waiting for user input. The workflow should continue seamlessly.
- - If generation fails: Halt execution with clear error message:
- ```
- ❌ Error: Automatic task generation failed.
-
- Please run the following command manually to generate tasks:
- /tasks:generate-tasks $1
- ```
-
- **After successful validation or generation**, immediately proceed with the execution process below without pausing.
+
+ ##### Task Body Structure
+
+ Use the task template in .ai/task-manager/config/templates/TASK_TEMPLATE.md
+
+ ##### Task ID Generation
+
+ When creating tasks, you need to determine the next available task ID for the specified plan. Use this bash command to automatically generate the correct ID:
+
+ ```bash
+ node .ai/task-manager/config/scripts/get-next-task-id.cjs $1
+ ```
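For illustration, the printed ID can be captured and used when creating the new task file from the template mentioned above (the destination path here is hypothetical):

```bash
# Capture the next available task ID for plan $1 and seed the new task file from the template.
NEXT_ID=$(node .ai/task-manager/config/scripts/get-next-task-id.cjs $1)
cp .ai/task-manager/config/templates/TASK_TEMPLATE.md "tasks/task-${NEXT_ID}.md"  # destination path is illustrative
```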
+
+ ### Validation Checklist
+ Before finalizing, ensure:
+
+ **Core Task Requirements:**
+ - [ ] Each task has 1-2 appropriate technical skills assigned
+ - [ ] Skills are automatically inferred from task objectives and technical requirements
+ - [ ] All dependencies form an acyclic graph
+ - [ ] Task IDs are unique and sequential
+ - [ ] Groups are consistent and meaningful
+ - [ ] Every **explicitly stated** task from the plan is covered
+ - [ ] No redundant or overlapping tasks
+
+ **Complexity Analysis & Controls:**
+ - [ ] **Complexity Analysis Complete**: All tasks assessed using 5-dimension scoring
+ - [ ] **Decomposition Applied**: Tasks with composite score ≥6 have been decomposed or justified
+ - [ ] **Final Task Complexity**: All final tasks have composite score ≤5 (target ≤4)
+ - [ ] **Iteration Limits Respected**: No task exceeded 3 decomposition rounds
+ - [ ] **Minimum Viability**: No tasks decomposed below complexity threshold of 3
+ - [ ] **Quality Gates Passed**: All decomposed tasks meet enhanced quality criteria
+ - [ ] **Dependency Integrity**: No circular dependencies or orphaned tasks exist
+ - [ ] **Error Handling Complete**: All edge cases resolved or escalated appropriately
+
+ **Complexity Documentation Requirements:**
+ - [ ] **Complexity Scores Documented**: Individual dimension scores recorded for complex tasks
+ - [ ] **Decomposition History**: Iteration tracking included in `complexity_notes` for decomposed tasks
+ - [ ] **Validation Status**: All tasks marked with appropriate validation outcomes
+ - [ ] **Escalation Documentation**: High-complexity tasks have clear escalation notes
+ - [ ] **Consistency Validated**: Complexity scores align with task descriptions and skills
+
+ **Scope & Quality Control:**
+ - [ ] **Minimization Applied**: Each task is absolutely necessary (20-30% reduction target)
+ - [ ] **Test Tasks are Meaningful**: Focus on business logic, not framework functionality
+ - [ ] **No Gold-plating**: Only plan requirements are addressed
+ - [ ] **Total Task Count**: Represents minimum viable implementation
+ - [ ] **Scope Preservation**: Decomposed tasks collectively match original requirements
+
+ **System Reliability:**
+ - [ ] **Error Conditions Resolved**: No unresolved error states remain
+ - [ ] **Manual Intervention Flagged**: Complex edge cases properly escalated
+ - [ ] **Quality Checkpoints**: All validation gates completed successfully
+ - [ ] **Dependency Graph Validated**: Full dependency analysis confirms acyclic, logical relationships
+
+ ### Error Handling
+ If the plan lacks sufficient detail:
+ - Note areas needing clarification
+ - Create placeholder tasks marked with `status: "needs-clarification"`
+ - Document assumptions made
+
+ #### Step 4: POST_TASK_GENERATION_ALL hook
+
+ Read and run the .ai/task-manager/config/hooks/POST_TASK_GENERATION_ALL.md
+
+ ### Output Requirements
+
+ **Output Behavior:**
+
+ Provide a concise completion message with task count:
+ - Example: "Tasks generated for plan [id]: [count] tasks created"
+
+ **CRITICAL - Structured Output for Command Coordination:**
+
+ Always end your output with a standardized summary in this exact format:
+
+ ```
+ ---
+ Task Generation Summary:
+ - Plan ID: [numeric-id]
+ - Tasks: [count]
+ - Status: Ready for execution
+ ```
+
+ This structured output enables automated workflow coordination and must be included even when running standalone.
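As an illustrative sketch, a coordinating step could recover the task count from that summary, assuming the generation output has been captured in a `$GENERATION_OUTPUT` variable:

```bash
# Parse the "Tasks: [count]" line from the captured summary; $GENERATION_OUTPUT is hypothetical.
TASKS_CREATED=$(echo "$GENERATION_OUTPUT" | grep -o 'Tasks: [0-9]*' | awk '{print $2}')
echo "Generated ${TASKS_CREATED} tasks; resuming blueprint execution."
```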
+
+ ## Resume Blueprint Execution
+
+ After task generation completes, continue with the execution process below.
+
+ If tasks already exist, proceed directly to execution.
 
  ## Execution Process
 
@@ -140,45 +423,10 @@ Read and execute .ai/task-manager/config/hooks/POST_ERROR_DETECTION.md
 
  ### Output Requirements
 
- **Context-Aware Output Behavior:**
-
- **Extract approval method from plan metadata:**
-
- First, extract both approval method fields from the plan document:
-
- ```bash
- # Extract approval methods from plan metadata
- APPROVAL_METHODS=$(node .ai/task-manager/config/scripts/get-approval-methods.cjs $1)
-
- APPROVAL_METHOD_PLAN=$(echo "$APPROVAL_METHODS" | grep -o '"approval_method_plan": "[^"]*"' | cut -d'"' -f4)
- APPROVAL_METHOD_TASKS=$(echo "$APPROVAL_METHODS" | grep -o '"approval_method_tasks": "[^"]*"' | cut -d'"' -f4)
-
- # Defaults to "manual" if fields don't exist
- APPROVAL_METHOD_PLAN=${APPROVAL_METHOD_PLAN:-manual}
- APPROVAL_METHOD_TASKS=${APPROVAL_METHOD_TASKS:-manual}
- ```
-
- Then adjust output based on the extracted approval methods:
-
- - **If `APPROVAL_METHOD_PLAN="auto"` (automated workflow mode)**:
- - During task auto-generation phase: Provide minimal progress updates
- - Do NOT instruct user to review the plan or tasks being generated
- - Do NOT add any prompts that would pause execution
-
- - **If `APPROVAL_METHOD_TASKS="auto"` (tasks auto-generated in workflow)**:
- - During task execution phase: Provide minimal progress updates at phase boundaries
- - Do NOT instruct user to review implementation details
- - Example output: "Phase 1/3 completed. Proceeding to Phase 2."
-
- - **If `APPROVAL_METHOD_PLAN="manual"` or `APPROVAL_METHOD_TASKS="manual"` (standalone mode)**:
- - Provide detailed execution summary with phase results
- - List completed tasks and any noteworthy events
- - Instruct user to review the execution summary in the plan document
- - Example output: "Execution completed. Review summary: `.ai/task-manager/archive/[plan]/plan-[id].md`"
+ **Output Behavior:**
 
- **Note**: This command respects both approval method fields:
- - `approval_method_plan`: Used during auto-generation to determine if we're in automated workflow
- - `approval_method_tasks`: Used during execution to determine output verbosity
+ Provide a concise execution summary:
+ - Example: "Execution completed. Review summary: `.ai/task-manager/archive/[plan]/plan-[id].md`"
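To locate that archived summary afterwards, a sketch like the following works, assuming the archive layout shown in the example path:

```bash
# List archived plan summaries matching the example layout above.
ls .ai/task-manager/archive/*/plan-*.md
```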
 
  **CRITICAL - Structured Output for Command Coordination:**