@calmo/task-runner 3.8.3 → 4.0.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
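For reference, a diff like the one below can be regenerated locally with npm's built-in `npm diff` command (npm 7+). This invocation is a sketch: it assumes npm is installed and that both versions are still published to the configured registry.

```shell
# Compare the two published versions of the package directly from the registry.
# Output is a unified diff of the packed tarball contents.
npm diff --diff=@calmo/task-runner@3.8.3 --diff=@calmo/task-runner@4.0.4
```

Piping the output through a pager (e.g. `| less`) is useful for a changeset of this size.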
Files changed (94)
  1. package/.agent/workflows/openspec-apply.md +20 -0
  2. package/.agent/workflows/openspec-archive.md +24 -0
  3. package/.agent/workflows/openspec-proposal.md +25 -0
  4. package/.github/workflows/release-please.yml +46 -0
  5. package/.husky/commit-msg +0 -0
  6. package/.husky/pre-commit +0 -0
  7. package/.release-please-manifest.json +3 -0
  8. package/AGENTS.md +2 -4
  9. package/CHANGELOG.md +109 -0
  10. package/README.md +1 -1
  11. package/dist/TaskGraphValidator.js +2 -4
  12. package/dist/TaskGraphValidator.js.map +1 -1
  13. package/openspec/changes/adopt-release-pr/design.md +40 -0
  14. package/openspec/changes/adopt-release-pr/proposal.md +47 -0
  15. package/openspec/changes/adopt-release-pr/specs/release-pr/spec.md +34 -0
  16. package/openspec/changes/adopt-release-pr/tasks.md +14 -0
  17. package/openspec/changes/archive/2026-01-18-add-concurrency-control/specs/task-runner/spec.md +26 -0
  18. package/openspec/changes/archive/2026-01-18-add-external-task-cancellation/specs/task-runner/spec.md +63 -0
  19. package/openspec/changes/archive/2026-01-18-add-integration-tests/specs/task-runner/spec.md +22 -0
  20. package/openspec/changes/archive/2026-01-18-add-task-retry-policy/specs/task-runner/spec.md +40 -0
  21. package/openspec/changes/archive/2026-01-18-add-workflow-preview/specs/task-runner/spec.md +25 -0
  22. package/openspec/changes/archive/2026-01-18-refactor-core-architecture/specs/task-runner/spec.md +31 -0
  23. package/openspec/changes/feat-continue-on-error/proposal.md +20 -0
  24. package/openspec/changes/feat-continue-on-error/tasks.md +17 -0
  25. package/openspec/changes/feat-per-task-timeout/specs/task-runner/spec.md +34 -0
  26. package/openspec/changes/feat-task-metrics/specs/001-generic-task-runner/spec.md +13 -0
  27. package/openspec/specs/task-runner/spec.md +162 -0
  28. package/package.json +11 -20
  29. package/release-please-config.json +9 -0
  30. package/src/TaskGraphValidator.ts +2 -4
  31. package/.gemini/commands/speckit.analyze.toml +0 -188
  32. package/.gemini/commands/speckit.checklist.toml +0 -298
  33. package/.gemini/commands/speckit.clarify.toml +0 -185
  34. package/.gemini/commands/speckit.constitution.toml +0 -86
  35. package/.gemini/commands/speckit.implement.toml +0 -139
  36. package/.gemini/commands/speckit.plan.toml +0 -93
  37. package/.gemini/commands/speckit.specify.toml +0 -262
  38. package/.gemini/commands/speckit.tasks.toml +0 -141
  39. package/.gemini/commands/speckit.taskstoissues.toml +0 -34
  40. package/.github/workflows/release.yml +0 -46
  41. package/.releaserc.json +0 -27
  42. package/coverage/base.css +0 -224
  43. package/coverage/block-navigation.js +0 -87
  44. package/coverage/coverage-final.json +0 -15
  45. package/coverage/favicon.png +0 -0
  46. package/coverage/index.html +0 -146
  47. package/coverage/lcov-report/base.css +0 -224
  48. package/coverage/lcov-report/block-navigation.js +0 -87
  49. package/coverage/lcov-report/favicon.png +0 -0
  50. package/coverage/lcov-report/index.html +0 -146
  51. package/coverage/lcov-report/prettify.css +0 -1
  52. package/coverage/lcov-report/prettify.js +0 -2
  53. package/coverage/lcov-report/sort-arrow-sprite.png +0 -0
  54. package/coverage/lcov-report/sorter.js +0 -210
  55. package/coverage/lcov-report/src/EventBus.ts.html +0 -379
  56. package/coverage/lcov-report/src/ExecutionConstants.ts.html +0 -121
  57. package/coverage/lcov-report/src/TaskGraphValidationError.ts.html +0 -130
  58. package/coverage/lcov-report/src/TaskGraphValidator.ts.html +0 -649
  59. package/coverage/lcov-report/src/TaskRunner.ts.html +0 -706
  60. package/coverage/lcov-report/src/TaskRunnerBuilder.ts.html +0 -337
  61. package/coverage/lcov-report/src/TaskRunnerExecutionConfig.ts.html +0 -154
  62. package/coverage/lcov-report/src/TaskStateManager.ts.html +0 -529
  63. package/coverage/lcov-report/src/WorkflowExecutor.ts.html +0 -712
  64. package/coverage/lcov-report/src/contracts/ErrorTypes.ts.html +0 -103
  65. package/coverage/lcov-report/src/contracts/RunnerEvents.ts.html +0 -217
  66. package/coverage/lcov-report/src/contracts/index.html +0 -131
  67. package/coverage/lcov-report/src/index.html +0 -236
  68. package/coverage/lcov-report/src/strategies/DryRunExecutionStrategy.ts.html +0 -178
  69. package/coverage/lcov-report/src/strategies/RetryingExecutionStrategy.ts.html +0 -373
  70. package/coverage/lcov-report/src/strategies/StandardExecutionStrategy.ts.html +0 -190
  71. package/coverage/lcov-report/src/strategies/index.html +0 -146
  72. package/coverage/lcov.info +0 -671
  73. package/coverage/prettify.css +0 -1
  74. package/coverage/prettify.js +0 -2
  75. package/coverage/sort-arrow-sprite.png +0 -0
  76. package/coverage/sorter.js +0 -210
  77. package/coverage/src/EventBus.ts.html +0 -379
  78. package/coverage/src/ExecutionConstants.ts.html +0 -121
  79. package/coverage/src/TaskGraphValidationError.ts.html +0 -130
  80. package/coverage/src/TaskGraphValidator.ts.html +0 -649
  81. package/coverage/src/TaskRunner.ts.html +0 -706
  82. package/coverage/src/TaskRunnerBuilder.ts.html +0 -337
  83. package/coverage/src/TaskRunnerExecutionConfig.ts.html +0 -154
  84. package/coverage/src/TaskStateManager.ts.html +0 -529
  85. package/coverage/src/WorkflowExecutor.ts.html +0 -712
  86. package/coverage/src/contracts/ErrorTypes.ts.html +0 -103
  87. package/coverage/src/contracts/RunnerEvents.ts.html +0 -217
  88. package/coverage/src/contracts/index.html +0 -131
  89. package/coverage/src/index.html +0 -236
  90. package/coverage/src/strategies/DryRunExecutionStrategy.ts.html +0 -178
  91. package/coverage/src/strategies/RetryingExecutionStrategy.ts.html +0 -373
  92. package/coverage/src/strategies/StandardExecutionStrategy.ts.html +0 -190
  93. package/coverage/src/strategies/index.html +0 -146
  94. package/test-report.xml +0 -299
package/.gemini/commands/speckit.implement.toml
@@ -1,139 +0,0 @@
- description = "Execute the implementation plan by processing and executing all tasks defined in tasks.md"
-
- prompt = """
- ---
- description: Execute the implementation plan by processing and executing all tasks defined in tasks.md
- ---
-
- ## User Input
-
- ```text
- $ARGUMENTS
- ```
-
- You **MUST** consider the user input before proceeding (if not empty).
-
- ## Outline
-
- 1. Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g 'I'\\''m Groot' (or double-quote if possible: "I'm Groot").
-
- 2. **Check checklists status** (if FEATURE_DIR/checklists/ exists):
- - Scan all checklist files in the checklists/ directory
- For each checklist, count:
- Total items: All lines matching `- [ ]` or `- [X]` or `- [x]`
- Completed items: Lines matching `- [X]` or `- [x]`
- Incomplete items: Lines matching `- [ ]`
- Create a status table:
-
- ```text
- | Checklist | Total | Completed | Incomplete | Status |
- |-----------|-------|-----------|------------|--------|
- | ux.md | 12 | 12 | 0 | ✓ PASS |
- | test.md | 8 | 5 | 3 | ✗ FAIL |
- | security.md | 6 | 6 | 0 | ✓ PASS |
- ```
-
- Calculate overall status:
- **PASS**: All checklists have 0 incomplete items
- **FAIL**: One or more checklists have incomplete items
-
- **If any checklist is incomplete**:
- Display the table with incomplete item counts
- **STOP** and ask: "Some checklists are incomplete. Do you want to proceed with implementation anyway? (yes/no)"
- Wait for user response before continuing
- If user says "no" or "wait" or "stop", halt execution
- If user says "yes" or "proceed" or "continue", proceed to step 3
-
- **If all checklists are complete**:
- Display the table showing all checklists passed
- Automatically proceed to step 3
-
- 3. Load and analyze the implementation context:
- **REQUIRED**: Read tasks.md for the complete task list and execution plan
- **REQUIRED**: Read plan.md for tech stack, architecture, and file structure
- **IF EXISTS**: Read data-model.md for entities and relationships
- **IF EXISTS**: Read contracts/ for API specifications and test requirements
- **IF EXISTS**: Read research.md for technical decisions and constraints
- **IF EXISTS**: Read quickstart.md for integration scenarios
-
- 4. **Project Setup Verification**:
- **REQUIRED**: Create/verify ignore files based on actual project setup:
-
- **Detection & Creation Logic**:
- Check if the following command succeeds to determine if the repository is a git repo (create/verify .gitignore if so):
-
- ```sh
- git rev-parse --git-dir 2>/dev/null
- ```
-
- Check if Dockerfile* exists or Docker in plan.md → create/verify .dockerignore
- Check if .eslintrc* exists → create/verify .eslintignore
- Check if eslint.config.* exists → ensure the config's `ignores` entries cover required patterns
- Check if .prettierrc* exists → create/verify .prettierignore
- Check if .npmrc or package.json exists → create/verify .npmignore (if publishing)
- Check if terraform files (*.tf) exist → create/verify .terraformignore
- Check if .helmignore needed (helm charts present) → create/verify .helmignore
-
- **If ignore file already exists**: Verify it contains essential patterns, append missing critical patterns only
- **If ignore file missing**: Create with full pattern set for detected technology
-
- **Common Patterns by Technology** (from plan.md tech stack):
- **Node.js/JavaScript/TypeScript**: `node_modules/`, `dist/`, `build/`, `*.log`, `.env*`
- **Python**: `__pycache__/`, `*.pyc`, `.venv/`, `venv/`, `dist/`, `*.egg-info/`
- **Java**: `target/`, `*.class`, `*.jar`, `.gradle/`, `build/`
- **C#/.NET**: `bin/`, `obj/`, `*.user`, `*.suo`, `packages/`
- **Go**: `*.exe`, `*.test`, `vendor/`, `*.out`
- **Ruby**: `.bundle/`, `log/`, `tmp/`, `*.gem`, `vendor/bundle/`
- **PHP**: `vendor/`, `*.log`, `*.cache`, `*.env`
- **Rust**: `target/`, `debug/`, `release/`, `*.rs.bk`, `*.rlib`, `*.prof*`, `.idea/`, `*.log`, `.env*`
- **Kotlin**: `build/`, `out/`, `.gradle/`, `.idea/`, `*.class`, `*.jar`, `*.iml`, `*.log`, `.env*`
- **C++**: `build/`, `bin/`, `obj/`, `out/`, `*.o`, `*.so`, `*.a`, `*.exe`, `*.dll`, `.idea/`, `*.log`, `.env*`
- **C**: `build/`, `bin/`, `obj/`, `out/`, `*.o`, `*.a`, `*.so`, `*.exe`, `Makefile`, `config.log`, `.idea/`, `*.log`, `.env*`
- **Swift**: `.build/`, `DerivedData/`, `*.swiftpm/`, `Packages/`
- **R**: `.Rproj.user/`, `.Rhistory`, `.RData`, `.Ruserdata`, `*.Rproj`, `packrat/`, `renv/`
- **Universal**: `.DS_Store`, `Thumbs.db`, `*.tmp`, `*.swp`, `.vscode/`, `.idea/`
-
- **Tool-Specific Patterns**:
- **Docker**: `node_modules/`, `.git/`, `Dockerfile*`, `.dockerignore`, `*.log*`, `.env*`, `coverage/`
- **ESLint**: `node_modules/`, `dist/`, `build/`, `coverage/`, `*.min.js`
- **Prettier**: `node_modules/`, `dist/`, `build/`, `coverage/`, `package-lock.json`, `yarn.lock`, `pnpm-lock.yaml`
- **Terraform**: `.terraform/`, `*.tfstate*`, `*.tfvars`, `.terraform.lock.hcl`
- **Kubernetes/k8s**: `*.secret.yaml`, `secrets/`, `.kube/`, `kubeconfig*`, `*.key`, `*.crt`
-
- 5. Parse tasks.md structure and extract:
- **Task phases**: Setup, Tests, Core, Integration, Polish
- **Task dependencies**: Sequential vs parallel execution rules
- **Task details**: ID, description, file paths, parallel markers [P]
- **Execution flow**: Order and dependency requirements
-
- 6. Execute implementation following the task plan:
- **Phase-by-phase execution**: Complete each phase before moving to the next
- **Respect dependencies**: Run sequential tasks in order, parallel tasks [P] can run together
- **Follow TDD approach**: Execute test tasks before their corresponding implementation tasks
- **File-based coordination**: Tasks affecting the same files must run sequentially
- **Validation checkpoints**: Verify each phase completion before proceeding
-
- 7. Implementation execution rules:
- **Setup first**: Initialize project structure, dependencies, configuration
- **Tests before code**: If you need to write tests for contracts, entities, and integration scenarios
- **Core development**: Implement models, services, CLI commands, endpoints
- **Integration work**: Database connections, middleware, logging, external services
- **Polish and validation**: Unit tests, performance optimization, documentation
-
- 8. Progress tracking and error handling:
- Report progress after each completed task
- Halt execution if any non-parallel task fails
- For parallel tasks [P], continue with successful tasks, report failed ones
- Provide clear error messages with context for debugging
- Suggest next steps if implementation cannot proceed
- **IMPORTANT** For completed tasks, make sure to mark the task off as [X] in the tasks file.
-
- 9. Completion validation:
- Verify all required tasks are completed
- Check that implemented features match the original specification
- Validate that tests pass and coverage meets requirements
- Confirm the implementation follows the technical plan
- Report final status with summary of completed work
-
- Note: This command assumes a complete task breakdown exists in tasks.md. If tasks are incomplete or missing, suggest running `/speckit.tasks` first to regenerate the task list.
- """
package/.gemini/commands/speckit.plan.toml
@@ -1,93 +0,0 @@
- description = "Execute the implementation planning workflow using the plan template to generate design artifacts."
-
- prompt = """
- ---
- description: Execute the implementation planning workflow using the plan template to generate design artifacts.
- handoffs:
- - label: Create Tasks
- agent: speckit.tasks
- prompt: Break the plan into tasks
- send: true
- - label: Create Checklist
- agent: speckit.checklist
- prompt: Create a checklist for the following domain...
- ---
-
- ## User Input
-
- ```text
- $ARGUMENTS
- ```
-
- You **MUST** consider the user input before proceeding (if not empty).
-
- ## Outline
-
- 1. **Setup**: Run `.specify/scripts/bash/setup-plan.sh --json` from repo root and parse JSON for FEATURE_SPEC, IMPL_PLAN, SPECS_DIR, BRANCH. For single quotes in args like "I'm Groot", use escape syntax: e.g 'I'\\''m Groot' (or double-quote if possible: "I'm Groot").
-
- 2. **Load context**: Read FEATURE_SPEC and `.specify/memory/constitution.md`. Load IMPL_PLAN template (already copied).
-
- 3. **Execute plan workflow**: Follow the structure in IMPL_PLAN template to:
- Fill Technical Context (mark unknowns as "NEEDS CLARIFICATION")
- Fill Constitution Check section from constitution
- Evaluate gates (ERROR if violations unjustified)
- Phase 0: Generate research.md (resolve all NEEDS CLARIFICATION)
- Phase 1: Generate data-model.md, contracts/, quickstart.md
- Phase 1: Update agent context by running the agent script
- Re-evaluate Constitution Check post-design
-
- 4. **Stop and report**: Command ends after Phase 2 planning. Report branch, IMPL_PLAN path, and generated artifacts.
-
- ## Phases
-
- ### Phase 0: Outline & Research
-
- 1. **Extract unknowns from Technical Context** above:
- For each NEEDS CLARIFICATION → research task
- For each dependency → best practices task
- For each integration → patterns task
-
- 2. **Generate and dispatch research agents**:
-
- ```text
- For each unknown in Technical Context:
- Task: "Research {unknown} for {feature context}"
- For each technology choice:
- Task: "Find best practices for {tech} in {domain}"
- ```
-
- 3. **Consolidate findings** in `research.md` using format:
- Decision: [what was chosen]
- Rationale: [why chosen]
- Alternatives considered: [what else evaluated]
-
- **Output**: research.md with all NEEDS CLARIFICATION resolved
-
- ### Phase 1: Design & Contracts
-
- **Prerequisites:** `research.md` complete
-
- 1. **Extract entities from feature spec** → `data-model.md`:
- Entity name, fields, relationships
- Validation rules from requirements
- State transitions if applicable
-
- 2. **Generate API contracts** from functional requirements:
- For each user action → endpoint
- Use standard REST/GraphQL patterns
- Output OpenAPI/GraphQL schema to `/contracts/`
-
- 3. **Agent context update**:
- Run `.specify/scripts/bash/update-agent-context.sh gemini`
- These scripts detect which AI agent is in use
- Update the appropriate agent-specific context file
- Add only new technology from current plan
- Preserve manual additions between markers
-
- **Output**: data-model.md, /contracts/*, quickstart.md, agent-specific file
-
- ## Key rules
-
- Use absolute paths
- ERROR on gate failures or unresolved clarifications
- """
package/.gemini/commands/speckit.specify.toml
@@ -1,262 +0,0 @@
- description = "Create or update the feature specification from a natural language feature description."
-
- prompt = """
- ---
- description: Create or update the feature specification from a natural language feature description.
- handoffs:
- - label: Build Technical Plan
- agent: speckit.plan
- prompt: Create a plan for the spec. I am building with...
- - label: Clarify Spec Requirements
- agent: speckit.clarify
- prompt: Clarify specification requirements
- send: true
- ---
-
- ## User Input
-
- ```text
- $ARGUMENTS
- ```
-
- You **MUST** consider the user input before proceeding (if not empty).
-
- ## Outline
-
- The text the user typed after `/speckit.specify` in the triggering message **is** the feature description. Assume you always have it available in this conversation even if `{{args}}` appears literally below. Do not ask the user to repeat it unless they provided an empty command.
-
- Given that feature description, do this:
-
- 1. **Generate a concise short name** (2-4 words) for the branch:
- Analyze the feature description and extract the most meaningful keywords
- Create a 2-4 word short name that captures the essence of the feature
- Use action-noun format when possible (e.g., "add-user-auth", "fix-payment-bug")
- Preserve technical terms and acronyms (OAuth2, API, JWT, etc.)
- Keep it concise but descriptive enough to understand the feature at a glance
- Examples:
- "I want to add user authentication" → "user-auth"
- "Implement OAuth2 integration for the API" → "oauth2-api-integration"
- "Create a dashboard for analytics" → "analytics-dashboard"
- "Fix payment processing timeout bug" → "fix-payment-timeout"
-
- 2. **Check for existing branches before creating new one**:
-
- a. First, fetch all remote branches to ensure we have the latest information:
-
- ```bash
- git fetch --all --prune
- ```
-
- b. Find the highest feature number across all sources for the short-name:
- Remote branches: `git ls-remote --heads origin | grep -E 'refs/heads/[0-9]+-<short-name>$'`
- Local branches: `git branch | grep -E '^[* ]*[0-9]+-<short-name>$'`
- Specs directories: Check for directories matching `specs/[0-9]+-<short-name>`
-
- c. Determine the next available number:
- Extract all numbers from all three sources
- Find the highest number N
- Use N+1 for the new branch number
-
- d. Run the script `.specify/scripts/bash/create-new-feature.sh --json "{{args}}"` with the calculated number and short-name:
- Pass `--number N+1` and `--short-name "your-short-name"` along with the feature description
- Bash example: `.specify/scripts/bash/create-new-feature.sh --json "{{args}}" --json --number 5 --short-name "user-auth" "Add user authentication"`
- PowerShell example: `.specify/scripts/bash/create-new-feature.sh --json "{{args}}" -Json -Number 5 -ShortName "user-auth" "Add user authentication"`
-
- **IMPORTANT**:
- Check all three sources (remote branches, local branches, specs directories) to find the highest number
- Only match branches/directories with the exact short-name pattern
- If no existing branches/directories found with this short-name, start with number 1
- You must only ever run this script once per feature
- The JSON is provided in the terminal as output - always refer to it to get the actual content you're looking for
- The JSON output will contain BRANCH_NAME and SPEC_FILE paths
- For single quotes in args like "I'm Groot", use escape syntax: e.g 'I'\\''m Groot' (or double-quote if possible: "I'm Groot")
-
- 3. Load `.specify/templates/spec-template.md` to understand required sections.
-
- 4. Follow this execution flow:
-
- 1. Parse user description from Input
- If empty: ERROR "No feature description provided"
- 2. Extract key concepts from description
- Identify: actors, actions, data, constraints
- 3. For unclear aspects:
- Make informed guesses based on context and industry standards
- Only mark with [NEEDS CLARIFICATION: specific question] if:
- The choice significantly impacts feature scope or user experience
- Multiple reasonable interpretations exist with different implications
- No reasonable default exists
- **LIMIT: Maximum 3 [NEEDS CLARIFICATION] markers total**
- Prioritize clarifications by impact: scope > security/privacy > user experience > technical details
- 4. Fill User Scenarios & Testing section
- If no clear user flow: ERROR "Cannot determine user scenarios"
- 5. Generate Functional Requirements
- Each requirement must be testable
- Use reasonable defaults for unspecified details (document assumptions in Assumptions section)
- 6. Define Success Criteria
- Create measurable, technology-agnostic outcomes
- Include both quantitative metrics (time, performance, volume) and qualitative measures (user satisfaction, task completion)
- Each criterion must be verifiable without implementation details
- 7. Identify Key Entities (if data involved)
- 8. Return: SUCCESS (spec ready for planning)
-
- 5. Write the specification to SPEC_FILE using the template structure, replacing placeholders with concrete details derived from the feature description (arguments) while preserving section order and headings.
-
- 6. **Specification Quality Validation**: After writing the initial spec, validate it against quality criteria:
-
- a. **Create Spec Quality Checklist**: Generate a checklist file at `FEATURE_DIR/checklists/requirements.md` using the checklist template structure with these validation items:
-
- ```markdown
- # Specification Quality Checklist: [FEATURE NAME]
-
- **Purpose**: Validate specification completeness and quality before proceeding to planning
- **Created**: [DATE]
- **Feature**: [Link to spec.md]
-
- ## Content Quality
-
- - [ ] No implementation details (languages, frameworks, APIs)
- - [ ] Focused on user value and business needs
- - [ ] Written for non-technical stakeholders
- - [ ] All mandatory sections completed
-
- ## Requirement Completeness
-
- - [ ] No [NEEDS CLARIFICATION] markers remain
- - [ ] Requirements are testable and unambiguous
- - [ ] Success criteria are measurable
- - [ ] Success criteria are technology-agnostic (no implementation details)
- - [ ] All acceptance scenarios are defined
- - [ ] Edge cases are identified
- - [ ] Scope is clearly bounded
- - [ ] Dependencies and assumptions identified
-
- ## Feature Readiness
-
- - [ ] All functional requirements have clear acceptance criteria
- - [ ] User scenarios cover primary flows
- - [ ] Feature meets measurable outcomes defined in Success Criteria
- - [ ] No implementation details leak into specification
-
- ## Notes
-
- - Items marked incomplete require spec updates before `/speckit.clarify` or `/speckit.plan`
- ```
-
- b. **Run Validation Check**: Review the spec against each checklist item:
- For each item, determine if it passes or fails
- Document specific issues found (quote relevant spec sections)
-
- c. **Handle Validation Results**:
-
- **If all items pass**: Mark checklist complete and proceed to step 6
-
- **If items fail (excluding [NEEDS CLARIFICATION])**:
- 1. List the failing items and specific issues
- 2. Update the spec to address each issue
- 3. Re-run validation until all items pass (max 3 iterations)
- 4. If still failing after 3 iterations, document remaining issues in checklist notes and warn user
-
- **If [NEEDS CLARIFICATION] markers remain**:
- 1. Extract all [NEEDS CLARIFICATION: ...] markers from the spec
- 2. **LIMIT CHECK**: If more than 3 markers exist, keep only the 3 most critical (by scope/security/UX impact) and make informed guesses for the rest
- 3. For each clarification needed (max 3), present options to user in this format:
-
- ```markdown
- ## Question [N]: [Topic]
-
- **Context**: [Quote relevant spec section]
-
- **What we need to know**: [Specific question from NEEDS CLARIFICATION marker]
-
- **Suggested Answers**:
-
- | Option | Answer | Implications |
- |--------|--------|--------------|
- | A | [First suggested answer] | [What this means for the feature] |
- | B | [Second suggested answer] | [What this means for the feature] |
- | C | [Third suggested answer] | [What this means for the feature] |
- | Custom | Provide your own answer | [Explain how to provide custom input] |
-
- **Your choice**: _[Wait for user response]_
- ```
-
- 4. **CRITICAL - Table Formatting**: Ensure markdown tables are properly formatted:
- Use consistent spacing with pipes aligned
- Each cell should have spaces around content: `| Content |` not `|Content|`
- Header separator must have at least 3 dashes: `|--------|`
- Test that the table renders correctly in markdown preview
- 5. Number questions sequentially (Q1, Q2, Q3 - max 3 total)
- 6. Present all questions together before waiting for responses
- 7. Wait for user to respond with their choices for all questions (e.g., "Q1: A, Q2: Custom - [details], Q3: B")
- 8. Update the spec by replacing each [NEEDS CLARIFICATION] marker with the user's selected or provided answer
- 9. Re-run validation after all clarifications are resolved
-
- d. **Update Checklist**: After each validation iteration, update the checklist file with current pass/fail status
-
- 7. Report completion with branch name, spec file path, checklist results, and readiness for the next phase (`/speckit.clarify` or `/speckit.plan`).
-
- **NOTE:** The script creates and checks out the new branch and initializes the spec file before writing.
-
- ## General Guidelines
-
- ## Quick Guidelines
-
- Focus on **WHAT** users need and **WHY**.
- Avoid HOW to implement (no tech stack, APIs, code structure).
- Written for business stakeholders, not developers.
- DO NOT create any checklists that are embedded in the spec. That will be a separate command.
-
- ### Section Requirements
-
- **Mandatory sections**: Must be completed for every feature
- **Optional sections**: Include only when relevant to the feature
- When a section doesn't apply, remove it entirely (don't leave as "N/A")
-
- ### For AI Generation
-
- When creating this spec from a user prompt:
-
- 1. **Make informed guesses**: Use context, industry standards, and common patterns to fill gaps
- 2. **Document assumptions**: Record reasonable defaults in the Assumptions section
- 3. **Limit clarifications**: Maximum 3 [NEEDS CLARIFICATION] markers - use only for critical decisions that:
- Significantly impact feature scope or user experience
- Have multiple reasonable interpretations with different implications
- Lack any reasonable default
- 4. **Prioritize clarifications**: scope > security/privacy > user experience > technical details
- 5. **Think like a tester**: Every vague requirement should fail the "testable and unambiguous" checklist item
- 6. **Common areas needing clarification** (only if no reasonable default exists):
- Feature scope and boundaries (include/exclude specific use cases)
- User types and permissions (if multiple conflicting interpretations possible)
- Security/compliance requirements (when legally/financially significant)
-
- **Examples of reasonable defaults** (don't ask about these):
-
- Data retention: Industry-standard practices for the domain
- Performance targets: Standard web/mobile app expectations unless specified
- Error handling: User-friendly messages with appropriate fallbacks
- Authentication method: Standard session-based or OAuth2 for web apps
- Integration patterns: RESTful APIs unless specified otherwise
-
- ### Success Criteria Guidelines
-
- Success criteria must be:
-
- 1. **Measurable**: Include specific metrics (time, percentage, count, rate)
- 2. **Technology-agnostic**: No mention of frameworks, languages, databases, or tools
- 3. **User-focused**: Describe outcomes from user/business perspective, not system internals
- 4. **Verifiable**: Can be tested/validated without knowing implementation details
-
- **Good examples**:
-
- "Users can complete checkout in under 3 minutes"
- "System supports 10,000 concurrent users"
- "95% of searches return results in under 1 second"
- "Task completion rate improves by 40%"
-
- **Bad examples** (implementation-focused):
-
- "API response time is under 200ms" (too technical, use "Users see results instantly")
- "Database can handle 1000 TPS" (implementation detail, use user-facing metric)
- "React components render efficiently" (framework-specific)
- "Redis cache hit rate above 80%" (technology-specific)
- """
@@ -1,141 +0,0 @@
- description = "Generate an actionable, dependency-ordered tasks.md for the feature based on available design artifacts."
-
- prompt = """
- ---
- description: Generate an actionable, dependency-ordered tasks.md for the feature based on available design artifacts.
- handoffs:
-   - label: Analyze For Consistency
-     agent: speckit.analyze
-     prompt: Run a project analysis for consistency
-     send: true
-   - label: Implement Project
-     agent: speckit.implement
-     prompt: Start the implementation in phases
-     send: true
- ---
-
- ## User Input
-
- ```text
- $ARGUMENTS
- ```
-
- You **MUST** consider the user input before proceeding (if not empty).
-
- ## Outline
-
- 1. **Setup**: Run `.specify/scripts/bash/check-prerequisites.sh --json` from repo root and parse FEATURE_DIR and the AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax, e.g. 'I'\\''m Groot' (or double-quote if possible: "I'm Groot").
-
- 2. **Load design documents**: Read from FEATURE_DIR:
-    - **Required**: plan.md (tech stack, libraries, structure), spec.md (user stories with priorities)
-    - **Optional**: data-model.md (entities), contracts/ (API endpoints), research.md (decisions), quickstart.md (test scenarios)
-    - Note: Not all projects have all documents. Generate tasks based on what's available.
-
- 3. **Execute task generation workflow**:
-    - Load plan.md and extract tech stack, libraries, project structure
-    - Load spec.md and extract user stories with their priorities (P1, P2, P3, etc.)
-    - If data-model.md exists: Extract entities and map to user stories
-    - If contracts/ exists: Map endpoints to user stories
-    - If research.md exists: Extract decisions for setup tasks
-    - Generate tasks organized by user story (see Task Generation Rules below)
-    - Generate a dependency graph showing user story completion order
-    - Create parallel execution examples per user story
-    - Validate task completeness (each user story has all needed tasks, independently testable)
-
- 4. **Generate tasks.md**: Use `.specify/templates/tasks-template.md` as the structure, fill with:
-    - Correct feature name from plan.md
-    - Phase 1: Setup tasks (project initialization)
-    - Phase 2: Foundational tasks (blocking prerequisites for all user stories)
-    - Phase 3+: One phase per user story (in priority order from spec.md)
-    - Each phase includes: story goal, independent test criteria, tests (if requested), implementation tasks
-    - Final Phase: Polish & cross-cutting concerns
-    - All tasks must follow the strict checklist format (see Task Generation Rules below)
-    - Clear file paths for each task
-    - Dependencies section showing story completion order
-    - Parallel execution examples per story
-    - Implementation strategy section (MVP first, incremental delivery)
-
- 5. **Report**: Output the path to the generated tasks.md and a summary:
-    - Total task count
-    - Task count per user story
-    - Parallel opportunities identified
-    - Independent test criteria for each story
-    - Suggested MVP scope (typically just User Story 1)
-    - Format validation: Confirm ALL tasks follow the checklist format (checkbox, ID, labels, file paths)
-
- Context for task generation: {{args}}
-
- The tasks.md should be immediately executable - each task must be specific enough that an LLM can complete it without additional context.
-
- ## Task Generation Rules
-
- **CRITICAL**: Tasks MUST be organized by user story to enable independent implementation and testing.
-
- **Tests are OPTIONAL**: Only generate test tasks if explicitly requested in the feature specification or if the user requests a TDD approach.
-
- ### Checklist Format (REQUIRED)
-
- Every task MUST strictly follow this format:
-
- ```text
- - [ ] [TaskID] [P?] [Story?] Description with file path
- ```
-
- **Format Components**:
-
- 1. **Checkbox**: ALWAYS start with `- [ ]` (markdown checkbox)
- 2. **Task ID**: Sequential number (T001, T002, T003...) in execution order
- 3. **[P] marker**: Include ONLY if the task is parallelizable (different files, no dependencies on incomplete tasks)
- 4. **[Story] label**: REQUIRED for user story phase tasks only
-    - Format: [US1], [US2], [US3], etc. (maps to user stories from spec.md)
-    - Setup phase: NO story label
-    - Foundational phase: NO story label
-    - User Story phases: MUST have story label
-    - Polish phase: NO story label
- 5. **Description**: Clear action with exact file path
-
- **Examples**:
-
- - ✅ CORRECT: `- [ ] T001 Create project structure per implementation plan`
- - ✅ CORRECT: `- [ ] T005 [P] Implement authentication middleware in src/middleware/auth.py`
- - ✅ CORRECT: `- [ ] T012 [P] [US1] Create User model in src/models/user.py`
- - ✅ CORRECT: `- [ ] T014 [US1] Implement UserService in src/services/user_service.py`
- - ❌ WRONG: `- [ ] Create User model` (missing ID and Story label)
- - ❌ WRONG: `T001 [US1] Create model` (missing checkbox)
- - ❌ WRONG: `- [ ] [US1] Create User model` (missing Task ID)
- - ❌ WRONG: `- [ ] T001 [US1] Create model` (missing file path)
-
- ### Task Organization
-
- 1. **From User Stories (spec.md)** - PRIMARY ORGANIZATION:
-    - Each user story (P1, P2, P3...) gets its own phase
-    - Map all related components to their story:
-      - Models needed for that story
-      - Services needed for that story
-      - Endpoints/UI needed for that story
-      - If tests requested: Tests specific to that story
-    - Mark story dependencies (most stories should be independent)
-
- 2. **From Contracts**:
-    - Map each contract/endpoint to the user story it serves
-    - If tests requested: Each contract → contract test task [P] before implementation in that story's phase
-
- 3. **From Data Model**:
-    - Map each entity to the user story(ies) that need it
-    - If an entity serves multiple stories: Put it in the earliest story or the Setup phase
-    - Relationships → service layer tasks in the appropriate story phase
-
- 4. **From Setup/Infrastructure**:
-    - Shared infrastructure → Setup phase (Phase 1)
-    - Foundational/blocking tasks → Foundational phase (Phase 2)
-    - Story-specific setup → within that story's phase
-
- ### Phase Structure
-
- - **Phase 1**: Setup (project initialization)
- - **Phase 2**: Foundational (blocking prerequisites - MUST complete before user stories)
- - **Phase 3+**: User Stories in priority order (P1, P2, P3...)
-   - Within each story: Tests (if requested) → Models → Services → Endpoints → Integration
-   - Each phase should be a complete, independently testable increment
- - **Final Phase**: Polish & Cross-Cutting Concerns
- """
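The strict checklist format the prompt enforces lends itself to mechanical validation. A minimal sketch in Python (the regex and helper name are illustrative, not part of this package):

```python
import re

# Pattern for the strict checklist format described above:
#   - [ ] T001 [P] [US1] Description with file path
# The [P] marker and [USn] label are optional; the Task ID is T + 3 digits.
TASK_LINE = re.compile(
    r"^- \[ \] "          # markdown checkbox
    r"T\d{3} "            # sequential task ID (T001, T002, ...)
    r"(?:\[P\] )?"        # optional parallelizable marker
    r"(?:\[US\d+\] )?"    # optional user-story label
    r"\S.*$"              # non-empty description
)

def is_valid_task(line: str) -> bool:
    return bool(TASK_LINE.match(line))

# The CORRECT/WRONG examples from the prompt above:
assert is_valid_task("- [ ] T001 Create project structure per implementation plan")
assert is_valid_task("- [ ] T012 [P] [US1] Create User model in src/models/user.py")
assert not is_valid_task("- [ ] Create User model")        # missing Task ID
assert not is_valid_task("T001 [US1] Create model")        # missing checkbox
assert not is_valid_task("- [ ] [US1] Create User model")  # missing Task ID
```

Note the regex cannot verify that the description contains a file path, or that story labels appear only in user-story phases; those checks need phase context.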
@@ -1,34 +0,0 @@
- description = "Convert existing tasks into actionable, dependency-ordered GitHub issues for the feature based on available design artifacts."
-
- prompt = """
- ---
- description: Convert existing tasks into actionable, dependency-ordered GitHub issues for the feature based on available design artifacts.
- tools: ['github/github-mcp-server/issue_write']
- ---
-
- ## User Input
-
- ```text
- $ARGUMENTS
- ```
-
- You **MUST** consider the user input before proceeding (if not empty).
-
- ## Outline
-
- 1. Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` from repo root and parse FEATURE_DIR and the AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax, e.g. 'I'\\''m Groot' (or double-quote if possible: "I'm Groot").
- 1. From the executed script, extract the path to **tasks**.
- 1. Get the Git remote by running:
-
-    ```bash
-    git config --get remote.origin.url
-    ```
-
-    > [!CAUTION]
-    > ONLY PROCEED TO THE NEXT STEPS IF THE REMOTE IS A GITHUB URL
-
- 1. For each task in the list, use the GitHub MCP server to create a new issue in the repository that the Git remote points to.
-
-    > [!CAUTION]
-    > UNDER NO CIRCUMSTANCES EVER CREATE ISSUES IN REPOSITORIES THAT DO NOT MATCH THE REMOTE URL
- """
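The "only proceed if the remote is a GitHub URL" guard in the removed prompt can be expressed as a small shell predicate. A sketch (the function name is an illustrative assumption, not part of this package):

```shell
# Illustrative guard: decide whether a git remote URL points at github.com,
# covering the HTTPS, SSH, and ssh:// remote forms.
is_github_remote() {
  case "$1" in
    https://github.com/*|git@github.com:*|ssh://git@github.com/*) return 0 ;;
    *) return 1 ;;
  esac
}

# Intended use before the issue-creation step, e.g.:
#   is_github_remote "$(git config --get remote.origin.url)" || exit 1
```

Matching on the URL prefix rather than a substring avoids false positives such as a non-GitHub host with "github.com" in its path.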