@ifi/pi-spec 0.2.14

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,84 @@
1
+ ---
2
+ description: Create or update the project constitution from interactive or provided principle inputs, ensuring all dependent templates stay in sync.
3
+ handoffs:
4
+ - label: Build Specification
5
+ agent: speckit.specify
6
+ prompt: Implement the feature specification based on the updated constitution. I want to build...
7
+ ---
8
+
9
+ ## User Input
10
+
11
+ ```text
12
+ $ARGUMENTS
13
+ ```
14
+
15
+ You **MUST** consider the user input before proceeding (if not empty).
16
+
17
+ ## Outline
18
+
19
+ You are updating the project constitution at `.specify/memory/constitution.md`. This file is a TEMPLATE containing placeholder tokens in square brackets (e.g. `[PROJECT_NAME]`, `[PRINCIPLE_1_NAME]`). Your job is to (a) collect/derive concrete values, (b) fill the template precisely, and (c) propagate any amendments across dependent artifacts.
20
+
21
+ **Note**: If `.specify/memory/constitution.md` does not exist yet, it should have been initialized from `.specify/templates/constitution-template.md` during project setup. If it's missing, copy the template first.
22
+
23
+ Follow this execution flow:
24
+
25
+ 1. Load the existing constitution at `.specify/memory/constitution.md`.
26
+ - Identify every placeholder token of the form `[ALL_CAPS_IDENTIFIER]`.
27
+ **IMPORTANT**: The user might require fewer or more principles than the template provides. If a number is specified, respect it while following the general template structure, and update the document accordingly.
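As a minimal sketch of the placeholder scan (the helper name is ours, not part of the toolkit), the tokens can be located with grep:

```shell
# List the distinct [ALL_CAPS_IDENTIFIER] placeholder tokens remaining in a file.
list_placeholders() {
  grep -oE '\[[A-Z0-9_]+\]' "$1" | sort -u
}
```

For example, `list_placeholders .specify/memory/constitution.md` should print nothing once the template is fully filled in.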
28
+
29
+ 2. Collect/derive values for placeholders:
30
+ - If user input (conversation) supplies a value, use it.
31
+ - Otherwise infer from existing repo context (README, docs, prior constitution versions if embedded).
32
+ - For governance dates: `RATIFICATION_DATE` is the original adoption date (if unknown ask or mark TODO), `LAST_AMENDED_DATE` is today if changes are made, otherwise keep previous.
33
+ - `CONSTITUTION_VERSION` must increment according to semantic versioning rules:
34
+ - MAJOR: Backward incompatible governance/principle removals or redefinitions.
35
+ - MINOR: New principle/section added or materially expanded guidance.
36
+ - PATCH: Clarifications, wording, typo fixes, non-semantic refinements.
37
+ - If the version bump type is ambiguous, state the reasoning for the proposed bump before finalizing.
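Once the bump type is decided, applying it is mechanical; a sketch (the helper name is illustrative, not part of the toolkit):

```shell
# Apply a semantic-version bump (major|minor|patch) to an X.Y.Z version string.
bump_version() {
  major=${1%%.*}
  rest=${1#*.}
  minor=${rest%%.*}
  patch=${rest#*.}
  case "$2" in
    major) echo "$((major + 1)).0.0" ;;
    minor) echo "$major.$((minor + 1)).0" ;;
    patch) echo "$major.$minor.$((patch + 1))" ;;
  esac
}
```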
38
+
39
+ 3. Draft the updated constitution content:
40
+ - Replace every placeholder with concrete text (no bracketed tokens left except intentionally retained template slots that the project has chosen not to define yet—explicitly justify any left).
41
+ - Preserve the heading hierarchy; template comments may be removed once replaced unless they still add clarifying guidance.
42
+ - Ensure each Principle section has: a succinct name line, a paragraph (or bullet list) capturing non-negotiable rules, and an explicit rationale where not obvious.
43
+ - Ensure Governance section lists amendment procedure, versioning policy, and compliance review expectations.
44
+
45
+ 4. Consistency propagation checklist (convert prior checklist into active validations):
46
+ - Read `.specify/templates/plan-template.md` and ensure any "Constitution Check" or rules align with updated principles.
47
+ - Read `.specify/templates/spec-template.md` for scope/requirements alignment—update if constitution adds/removes mandatory sections or constraints.
48
+ - Read `.specify/templates/tasks-template.md` and ensure task categorization reflects new or removed principle-driven task types (e.g., observability, versioning, testing discipline).
49
+ - Read each command file in `.specify/templates/commands/*.md` (including this one) to verify no outdated references (e.g., agent-specific names such as CLAUDE) remain where generic guidance is required.
50
+ - Read any runtime guidance docs (e.g., `README.md`, `docs/quickstart.md`, or agent-specific guidance files if present). Update references to principles changed.
51
+
52
+ 5. Produce a Sync Impact Report (prepend as an HTML comment at top of the constitution file after update):
53
+ - Version change: old → new
54
+ - List of modified principles (old title → new title if renamed)
55
+ - Added sections
56
+ - Removed sections
57
+ - Templates requiring updates (✅ updated / ⚠ pending) with file paths
58
+ - Follow-up TODOs if any placeholders intentionally deferred.
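As an illustrative sketch of the prepended report (all values here are hypothetical), the HTML comment might look like:

```markdown
<!--
Sync Impact Report
- Version change: 1.2.0 → 1.3.0
- Modified principles: "Testing" → "Test-First Development"
- Added sections: Observability
- Removed sections: none
- Templates: ✅ .specify/templates/plan-template.md updated; ⚠ .specify/templates/tasks-template.md pending
- Follow-up TODOs: TODO(RATIFICATION_DATE): original adoption date unknown
-->
```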
59
+
60
+ 6. Validation before final output:
61
+ - No remaining unexplained bracket tokens.
62
+ - Version line matches report.
63
+ - Dates ISO format YYYY-MM-DD.
64
+ - Principles are declarative, testable, and free of vague language (replace a bare "should" with MUST/SHOULD plus rationale where appropriate).
65
+
66
+ 7. Write the completed constitution back to `.specify/memory/constitution.md` (overwrite).
67
+
68
+ 8. Output a final summary to the user with:
69
+ - New version and bump rationale.
70
+ - Any files flagged for manual follow-up.
71
+ - Suggested commit message (e.g., `docs: amend constitution to vX.Y.Z (principle additions + governance update)`).
72
+
73
+ Formatting & Style Requirements:
74
+
75
+ - Use Markdown headings exactly as in the template (do not demote/promote levels).
76
+ - Wrap long rationale lines to keep readability (<100 chars ideally) but do not hard enforce with awkward breaks.
77
+ - Keep a single blank line between sections.
78
+ - Avoid trailing whitespace.
79
+
80
+ If the user supplies partial updates (e.g., only one principle revision), still perform validation and version decision steps.
81
+
82
+ If critical info missing (e.g., ratification date truly unknown), insert `TODO(<FIELD_NAME>): explanation` and include in the Sync Impact Report under deferred items.
83
+
84
+ Do not create a new template; always operate on the existing `.specify/memory/constitution.md` file.
@@ -0,0 +1,201 @@
1
+ ---
2
+ description: Execute the implementation plan by processing and executing all tasks defined in tasks.md
3
+ scripts:
4
+ sh: scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks
5
+ ps: scripts/powershell/check-prerequisites.ps1 -Json -RequireTasks -IncludeTasks
6
+ ---
7
+
8
+ ## User Input
9
+
10
+ ```text
11
+ $ARGUMENTS
12
+ ```
13
+
14
+ You **MUST** consider the user input before proceeding (if not empty).
15
+
16
+ ## Pre-Execution Checks
17
+
18
+ **Check for extension hooks (before implementation)**:
19
+ - Check if `.specify/extensions.yml` exists in the project root.
20
+ - If it exists, read it and look for entries under the `hooks.before_implement` key
21
+ - If the YAML cannot be parsed or is invalid, skip hook checking silently and continue normally
22
+ - Filter to only hooks where `enabled: true`
23
+ - For each remaining hook, do **not** attempt to interpret or evaluate hook `condition` expressions:
24
+ - If the hook has no `condition` field, or it is null/empty, treat the hook as executable
25
+ - If the hook defines a non-empty `condition`, skip the hook and leave condition evaluation to the HookExecutor implementation
26
+ - For each executable hook, output the following based on its `optional` flag:
27
+ - **Optional hook** (`optional: true`):
28
+ ```
29
+ ## Extension Hooks
30
+
31
+ **Optional Pre-Hook**: {extension}
32
+ Command: `/{command}`
33
+ Description: {description}
34
+
35
+ Prompt: {prompt}
36
+ To execute: `/{command}`
37
+ ```
38
+ - **Mandatory hook** (`optional: false`):
39
+ ```
40
+ ## Extension Hooks
41
+
42
+ **Automatic Pre-Hook**: {extension}
43
+ Executing: `/{command}`
44
+ EXECUTE_COMMAND: {command}
45
+
46
+ Wait for the result of the hook command before proceeding to the Outline.
47
+ ```
48
+ - If no hooks are registered or `.specify/extensions.yml` does not exist, skip silently
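The schema of `.specify/extensions.yml` is not shown here, so the following shape is inferred from the fields referenced above (`enabled`, `optional`, `condition`, `{command}`, `{prompt}`) and should be treated as a hypothetical example only:

```yaml
hooks:
  before_implement:
    - extension: security-scan        # {extension}
      command: speckit.security-scan  # {command}
      description: Run a static scan before implementation  # {description}
      prompt: Scan the feature branch for known issues      # {prompt}
      enabled: true
      optional: false
      condition: null   # null/empty condition => treat the hook as executable
```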
49
+
50
+ ## Outline
51
+
52
+ 1. Run `{SCRIPT}` from repo root and parse the FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g., 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
53
+
54
+ 2. **Check checklists status** (if FEATURE_DIR/checklists/ exists):
55
+ - Scan all checklist files in the checklists/ directory
56
+ - For each checklist, count:
57
+ - Total items: All lines matching `- [ ]` or `- [X]` or `- [x]`
58
+ - Completed items: Lines matching `- [X]` or `- [x]`
59
+ - Incomplete items: Lines matching `- [ ]`
60
+ - Create a status table:
61
+
62
+ ```text
63
+ | Checklist | Total | Completed | Incomplete | Status |
64
+ |-----------|-------|-----------|------------|--------|
65
+ | ux.md | 12 | 12 | 0 | ✓ PASS |
66
+ | test.md | 8 | 5 | 3 | ✗ FAIL |
67
+ | security.md | 6 | 6 | 0 | ✓ PASS |
68
+ ```
69
+
70
+ - Calculate overall status:
71
+ - **PASS**: All checklists have 0 incomplete items
72
+ - **FAIL**: One or more checklists have incomplete items
73
+
74
+ - **If any checklist is incomplete**:
75
+ - Display the table with incomplete item counts
76
+ - **STOP** and ask: "Some checklists are incomplete. Do you want to proceed with implementation anyway? (yes/no)"
77
+ - Wait for user response before continuing
78
+ - If user says "no" or "wait" or "stop", halt execution
79
+ - If user says "yes" or "proceed" or "continue", proceed to step 3
80
+
81
+ - **If all checklists are complete**:
82
+ - Display the table showing all checklists passed
83
+ - Automatically proceed to step 3
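The item counting above can be sketched with grep (the helper name is ours, not part of the toolkit):

```shell
# Print "total completed incomplete" counts for one markdown checklist file.
checklist_counts() {
  total=$(grep -cE '^- \[[ xX]\]' "$1")
  incomplete=$(grep -cE '^- \[ \]' "$1")
  echo "$total $((total - incomplete)) $incomplete"
}
```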
84
+
85
+ 3. Load and analyze the implementation context:
86
+ - **REQUIRED**: Read tasks.md for the complete task list and execution plan
87
+ - **REQUIRED**: Read plan.md for tech stack, architecture, and file structure
88
+ - **IF EXISTS**: Read data-model.md for entities and relationships
89
+ - **IF EXISTS**: Read contracts/ for API specifications and test requirements
90
+ - **IF EXISTS**: Read research.md for technical decisions and constraints
91
+ - **IF EXISTS**: Read quickstart.md for integration scenarios
92
+
93
+ 4. **Project Setup Verification**:
94
+ - **REQUIRED**: Create/verify ignore files based on actual project setup:
95
+
96
+ **Detection & Creation Logic**:
97
+ - Check if the following command succeeds to determine if the repository is a git repo (create/verify .gitignore if so):
98
+
99
+ ```sh
100
+ git rev-parse --git-dir 2>/dev/null
101
+ ```
102
+
103
+ - Check if Dockerfile* exists or Docker in plan.md → create/verify .dockerignore
104
+ - Check if .eslintrc* exists → create/verify .eslintignore
105
+ - Check if eslint.config.* exists → ensure the config's `ignores` entries cover required patterns
106
+ - Check if .prettierrc* exists → create/verify .prettierignore
107
+ - Check if .npmrc or package.json exists → create/verify .npmignore (if publishing)
108
+ - Check if terraform files (*.tf) exist → create/verify .terraformignore
109
+ - Check if .helmignore needed (helm charts present) → create/verify .helmignore
110
+
111
+ - **If the ignore file already exists**: Verify it contains the essential patterns; append only missing critical patterns
112
+ - **If the ignore file is missing**: Create it with the full pattern set for the detected technology
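The append-only-missing behavior can be sketched as follows (the helper name is illustrative, not part of the toolkit):

```shell
# Append each pattern to an ignore file only when it is not already present verbatim.
ensure_patterns() {
  file=$1; shift
  touch "$file"
  for pattern in "$@"; do
    grep -qxF "$pattern" "$file" || printf '%s\n' "$pattern" >> "$file"
  done
}
```

Running it twice with the same patterns leaves the file unchanged, so it is safe on existing ignore files.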
113
+
114
+ **Common Patterns by Technology** (from plan.md tech stack):
115
+ - **Node.js/JavaScript/TypeScript**: `node_modules/`, `dist/`, `build/`, `*.log`, `.env*`
116
+ - **Python**: `__pycache__/`, `*.pyc`, `.venv/`, `venv/`, `dist/`, `*.egg-info/`
117
+ - **Java**: `target/`, `*.class`, `*.jar`, `.gradle/`, `build/`
118
+ - **C#/.NET**: `bin/`, `obj/`, `*.user`, `*.suo`, `packages/`
119
+ - **Go**: `*.exe`, `*.test`, `vendor/`, `*.out`
120
+ - **Ruby**: `.bundle/`, `log/`, `tmp/`, `*.gem`, `vendor/bundle/`
121
+ - **PHP**: `vendor/`, `*.log`, `*.cache`, `*.env`
122
+ - **Rust**: `target/`, `debug/`, `release/`, `*.rs.bk`, `*.rlib`, `*.prof*`, `.idea/`, `*.log`, `.env*`
123
+ - **Kotlin**: `build/`, `out/`, `.gradle/`, `.idea/`, `*.class`, `*.jar`, `*.iml`, `*.log`, `.env*`
124
+ - **C++**: `build/`, `bin/`, `obj/`, `out/`, `*.o`, `*.so`, `*.a`, `*.exe`, `*.dll`, `.idea/`, `*.log`, `.env*`
125
+ - **C**: `build/`, `bin/`, `obj/`, `out/`, `*.o`, `*.a`, `*.so`, `*.exe`, `*.dll`, `autom4te.cache/`, `config.status`, `config.log`, `.idea/`, `*.log`, `.env*`
126
+ - **Swift**: `.build/`, `DerivedData/`, `*.swiftpm/`, `Packages/`
127
+ - **R**: `.Rproj.user/`, `.Rhistory`, `.RData`, `.Ruserdata`, `*.Rproj`, `packrat/`, `renv/`
128
+ - **Universal**: `.DS_Store`, `Thumbs.db`, `*.tmp`, `*.swp`, `.vscode/`, `.idea/`
129
+
130
+ **Tool-Specific Patterns**:
131
+ - **Docker**: `node_modules/`, `.git/`, `Dockerfile*`, `.dockerignore`, `*.log*`, `.env*`, `coverage/`
132
+ - **ESLint**: `node_modules/`, `dist/`, `build/`, `coverage/`, `*.min.js`
133
+ - **Prettier**: `node_modules/`, `dist/`, `build/`, `coverage/`, `package-lock.json`, `yarn.lock`, `pnpm-lock.yaml`
134
+ - **Terraform**: `.terraform/`, `*.tfstate*`, `*.tfvars`, `.terraform.lock.hcl`
135
+ - **Kubernetes/k8s**: `*.secret.yaml`, `secrets/`, `.kube/`, `kubeconfig*`, `*.key`, `*.crt`
136
+
137
+ 5. Parse tasks.md structure and extract:
138
+ - **Task phases**: Setup, Tests, Core, Integration, Polish
139
+ - **Task dependencies**: Sequential vs parallel execution rules
140
+ - **Task details**: ID, description, file paths, parallel markers [P]
141
+ - **Execution flow**: Order and dependency requirements
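Assuming the conventional task-line shape `- [ ] T001 [P] Description` (an assumption, since tasks.md's exact format is defined elsewhere), the parallel set can be extracted like this:

```shell
# Print the IDs of tasks carrying the [P] parallel marker.
parallel_task_ids() {
  sed -n 's/^- \[[ xX]\] \(T[0-9]*\) \[P\].*/\1/p' "$1"
}
```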
142
+
143
+ 6. Execute implementation following the task plan:
144
+ - **Phase-by-phase execution**: Complete each phase before moving to the next
145
+ - **Respect dependencies**: Run sequential tasks in order, parallel tasks [P] can run together
146
+ - **Follow TDD approach**: Execute test tasks before their corresponding implementation tasks
147
+ - **File-based coordination**: Tasks affecting the same files must run sequentially
148
+ - **Validation checkpoints**: Verify each phase completion before proceeding
149
+
150
+ 7. Implementation execution rules:
151
+ - **Setup first**: Initialize project structure, dependencies, configuration
152
+ - **Tests before code**: Write any needed tests for contracts, entities, and integration scenarios before the code that satisfies them
153
+ - **Core development**: Implement models, services, CLI commands, endpoints
154
+ - **Integration work**: Database connections, middleware, logging, external services
155
+ - **Polish and validation**: Unit tests, performance optimization, documentation
156
+
157
+ 8. Progress tracking and error handling:
158
+ - Report progress after each completed task
159
+ - Halt execution if any non-parallel task fails
160
+ - For parallel tasks [P], continue with successful tasks, report failed ones
161
+ - Provide clear error messages with context for debugging
162
+ - Suggest next steps if implementation cannot proceed
163
+ - **IMPORTANT**: For completed tasks, mark the task as [X] in the tasks file.
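Marking a task complete can be sketched with sed (helper name, task ID, and line shape are illustrative assumptions):

```shell
# Flip "- [ ] <id>" to "- [X] <id>" for one task ID in a tasks file.
# The trailing space after $2 guards against matching longer IDs (T1 vs T10).
mark_task_done() {
  sed -i.bak "s/^- \[ \] $2 /- [X] $2 /" "$1" && rm -f "$1.bak"
}
```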
164
+
165
+ 9. Completion validation:
166
+ - Verify all required tasks are completed
167
+ - Check that implemented features match the original specification
168
+ - Validate that tests pass and coverage meets requirements
169
+ - Confirm the implementation follows the technical plan
170
+ - Report final status with summary of completed work
171
+
172
+ Note: This command assumes a complete task breakdown exists in tasks.md. If tasks are incomplete or missing, suggest running `/speckit.tasks` first to regenerate the task list.
173
+
174
+ 10. **Check for extension hooks**: After completion validation, check if `.specify/extensions.yml` exists in the project root.
175
+ - If it exists, read it and look for entries under the `hooks.after_implement` key
176
+ - If the YAML cannot be parsed or is invalid, skip hook checking silently and continue normally
177
+ - Filter to only hooks where `enabled: true`
178
+ - For each remaining hook, do **not** attempt to interpret or evaluate hook `condition` expressions:
179
+ - If the hook has no `condition` field, or it is null/empty, treat the hook as executable
180
+ - If the hook defines a non-empty `condition`, skip the hook and leave condition evaluation to the HookExecutor implementation
181
+ - For each executable hook, output the following based on its `optional` flag:
182
+ - **Optional hook** (`optional: true`):
183
+ ```
184
+ ## Extension Hooks
185
+
186
+ **Optional Hook**: {extension}
187
+ Command: `/{command}`
188
+ Description: {description}
189
+
190
+ Prompt: {prompt}
191
+ To execute: `/{command}`
192
+ ```
193
+ - **Mandatory hook** (`optional: false`):
194
+ ```
195
+ ## Extension Hooks
196
+
197
+ **Automatic Hook**: {extension}
198
+ Executing: `/{command}`
199
+ EXECUTE_COMMAND: {command}
200
+ ```
201
+ - If no hooks are registered or `.specify/extensions.yml` does not exist, skip silently
@@ -0,0 +1,96 @@
1
+ ---
2
+ description: Execute the implementation planning workflow using the plan template to generate design artifacts.
3
+ handoffs:
4
+ - label: Create Tasks
5
+ agent: speckit.tasks
6
+ prompt: Break the plan into tasks
7
+ send: true
8
+ - label: Create Checklist
9
+ agent: speckit.checklist
10
+ prompt: Create a checklist for the following domain...
11
+ scripts:
12
+ sh: scripts/bash/setup-plan.sh --json
13
+ ps: scripts/powershell/setup-plan.ps1 -Json
14
+ agent_scripts:
15
+ sh: scripts/bash/update-agent-context.sh __AGENT__
16
+ ps: scripts/powershell/update-agent-context.ps1 -AgentType __AGENT__
17
+ ---
18
+
19
+ ## User Input
20
+
21
+ ```text
22
+ $ARGUMENTS
23
+ ```
24
+
25
+ You **MUST** consider the user input before proceeding (if not empty).
26
+
27
+ ## Outline
28
+
29
+ 1. **Setup**: Run `{SCRIPT}` from repo root and parse JSON for FEATURE_SPEC, IMPL_PLAN, SPECS_DIR, BRANCH. For single quotes in args like "I'm Groot", use escape syntax: e.g., 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
30
+
31
+ 2. **Load context**: Read FEATURE_SPEC and `.specify/memory/constitution.md`. Load IMPL_PLAN template (already copied).
32
+
33
+ 3. **Execute plan workflow**: Follow the structure in IMPL_PLAN template to:
34
+ - Fill Technical Context (mark unknowns as "NEEDS CLARIFICATION")
35
+ - Fill Constitution Check section from constitution
36
+ - Evaluate gates (ERROR if violations unjustified)
37
+ - Phase 0: Generate research.md (resolve all NEEDS CLARIFICATION)
38
+ - Phase 1: Generate data-model.md, contracts/, quickstart.md
39
+ - Phase 1: Update agent context by running the agent script
40
+ - Re-evaluate Constitution Check post-design
41
+
42
+ 4. **Stop and report**: The command ends after the planning phases above (Phases 0 and 1). Report the branch, IMPL_PLAN path, and generated artifacts.
43
+
44
+ ## Phases
45
+
46
+ ### Phase 0: Outline & Research
47
+
48
+ 1. **Extract unknowns from Technical Context** above:
49
+ - For each NEEDS CLARIFICATION → research task
50
+ - For each dependency → best practices task
51
+ - For each integration → patterns task
52
+
53
+ 2. **Generate and dispatch research agents**:
54
+
55
+ ```text
56
+ For each unknown in Technical Context:
57
+ Task: "Research {unknown} for {feature context}"
58
+ For each technology choice:
59
+ Task: "Find best practices for {tech} in {domain}"
60
+ ```
61
+
62
+ 3. **Consolidate findings** in `research.md` using format:
63
+ - Decision: [what was chosen]
64
+ - Rationale: [why chosen]
65
+ - Alternatives considered: [what else evaluated]
66
+
67
+ **Output**: research.md with all NEEDS CLARIFICATION resolved
68
+
69
+ ### Phase 1: Design & Contracts
70
+
71
+ **Prerequisites:** `research.md` complete
72
+
73
+ 1. **Extract entities from feature spec** → `data-model.md`:
74
+ - Entity name, fields, relationships
75
+ - Validation rules from requirements
76
+ - State transitions if applicable
77
+
78
+ 2. **Define interface contracts** (if project has external interfaces) → `/contracts/`:
79
+ - Identify what interfaces the project exposes to users or other systems
80
+ - Document the contract format appropriate for the project type
81
+ - Examples: public APIs for libraries, command schemas for CLI tools, endpoints for web services, grammars for parsers, UI contracts for applications
82
+ - Skip if project is purely internal (build scripts, one-off tools, etc.)
83
+
84
+ 3. **Agent context update**:
85
+ - Run `{AGENT_SCRIPT}`
86
+ - These scripts detect which AI agent is in use
87
+ - Update the appropriate agent-specific context file
88
+ - Add only new technology from current plan
89
+ - Preserve manual additions between markers
90
+
91
+ **Output**: data-model.md, /contracts/*, quickstart.md, agent-specific file
92
+
93
+ ## Key rules
94
+
95
+ - Use absolute paths
96
+ - ERROR on gate failures or unresolved clarifications
@@ -0,0 +1,242 @@
1
+ ---
2
+ description: Create or update the feature specification from a natural language feature description.
3
+ handoffs:
4
+ - label: Build Technical Plan
5
+ agent: speckit.plan
6
+ prompt: Create a plan for the spec. I am building with...
7
+ - label: Clarify Spec Requirements
8
+ agent: speckit.clarify
9
+ prompt: Clarify specification requirements
10
+ send: true
11
+ scripts:
12
+ sh: scripts/bash/create-new-feature.sh "{ARGS}"
13
+ ps: scripts/powershell/create-new-feature.ps1 "{ARGS}"
14
+ ---
15
+
16
+ ## User Input
17
+
18
+ ```text
19
+ $ARGUMENTS
20
+ ```
21
+
22
+ You **MUST** consider the user input before proceeding (if not empty).
23
+
24
+ ## Outline
25
+
26
+ The text the user typed after `/speckit.specify` in the triggering message **is** the feature description. Assume you always have it available in this conversation even if `{ARGS}` appears literally below. Do not ask the user to repeat it unless they provided an empty command.
27
+
28
+ Given that feature description, do this:
29
+
30
+ 1. **Generate a concise short name** (2-4 words) for the branch:
31
+ - Analyze the feature description and extract the most meaningful keywords
32
+ - Create a 2-4 word short name that captures the essence of the feature
33
+ - Use action-noun format when possible (e.g., "add-user-auth", "fix-payment-bug")
34
+ - Preserve technical terms and acronyms (OAuth2, API, JWT, etc.)
35
+ - Keep it concise but descriptive enough to understand the feature at a glance
36
+ - Examples:
37
+ - "I want to add user authentication" → "user-auth"
38
+ - "Implement OAuth2 integration for the API" → "oauth2-api-integration"
39
+ - "Create a dashboard for analytics" → "analytics-dashboard"
40
+ - "Fix payment processing timeout bug" → "fix-payment-timeout"
41
+
42
+ 2. **Create the feature branch** by running the script with `--short-name` (and `--json`), and do NOT pass `--number` (the script auto-detects the next globally available number across all branches and spec directories):
43
+
44
+ - Bash example: `{SCRIPT} --json --short-name "user-auth" "Add user authentication"`
45
+ - PowerShell example: `{SCRIPT} -Json -ShortName "user-auth" "Add user authentication"`
46
+
47
+ **IMPORTANT**:
48
+ - Do NOT pass `--number` — the script determines the correct next number automatically
49
+ - Always include the JSON flag (`--json` for Bash, `-Json` for PowerShell) so the output can be parsed reliably
50
+ - You must only ever run this script once per feature
51
+ - The JSON is printed to the terminal as output; always refer to it for the actual values you need
52
+ - The JSON output will contain BRANCH_NAME and SPEC_FILE paths
53
+ - For single quotes in args like "I'm Groot", use escape syntax: e.g., 'I'\''m Groot' (or double-quote if possible: "I'm Groot")
54
+
55
+ 3. Load `templates/spec-template.md` to understand required sections.
56
+
57
+ 4. Follow this execution flow:
58
+
59
+ 1. Parse user description from Input
60
+ If empty: ERROR "No feature description provided"
61
+ 2. Extract key concepts from description
62
+ Identify: actors, actions, data, constraints
63
+ 3. For unclear aspects:
64
+ - Make informed guesses based on context and industry standards
65
+ - Only mark with [NEEDS CLARIFICATION: specific question] if:
66
+ - The choice significantly impacts feature scope or user experience
67
+ - Multiple reasonable interpretations exist with different implications
68
+ - No reasonable default exists
69
+ - **LIMIT: Maximum 3 [NEEDS CLARIFICATION] markers total**
70
+ - Prioritize clarifications by impact: scope > security/privacy > user experience > technical details
71
+ 4. Fill User Scenarios & Testing section
72
+ If no clear user flow: ERROR "Cannot determine user scenarios"
73
+ 5. Generate Functional Requirements
74
+ Each requirement must be testable
75
+ Use reasonable defaults for unspecified details (document assumptions in Assumptions section)
76
+ 6. Define Success Criteria
77
+ Create measurable, technology-agnostic outcomes
78
+ Include both quantitative metrics (time, performance, volume) and qualitative measures (user satisfaction, task completion)
79
+ Each criterion must be verifiable without implementation details
80
+ 7. Identify Key Entities (if data involved)
81
+ 8. Return: SUCCESS (spec ready for planning)
82
+
83
+ 5. Write the specification to SPEC_FILE using the template structure, replacing placeholders with concrete details derived from the feature description (arguments) while preserving section order and headings.
84
+
85
+ 6. **Specification Quality Validation**: After writing the initial spec, validate it against quality criteria:
86
+
87
+ a. **Create Spec Quality Checklist**: Generate a checklist file at `FEATURE_DIR/checklists/requirements.md` using the checklist template structure with these validation items:
88
+
89
+ ```markdown
90
+ # Specification Quality Checklist: [FEATURE NAME]
91
+
92
+ **Purpose**: Validate specification completeness and quality before proceeding to planning
93
+ **Created**: [DATE]
94
+ **Feature**: [Link to spec.md]
95
+
96
+ ## Content Quality
97
+
98
+ - [ ] No implementation details (languages, frameworks, APIs)
99
+ - [ ] Focused on user value and business needs
100
+ - [ ] Written for non-technical stakeholders
101
+ - [ ] All mandatory sections completed
102
+
103
+ ## Requirement Completeness
104
+
105
+ - [ ] No [NEEDS CLARIFICATION] markers remain
106
+ - [ ] Requirements are testable and unambiguous
107
+ - [ ] Success criteria are measurable
108
+ - [ ] Success criteria are technology-agnostic (no implementation details)
109
+ - [ ] All acceptance scenarios are defined
110
+ - [ ] Edge cases are identified
111
+ - [ ] Scope is clearly bounded
112
+ - [ ] Dependencies and assumptions identified
113
+
114
+ ## Feature Readiness
115
+
116
+ - [ ] All functional requirements have clear acceptance criteria
117
+ - [ ] User scenarios cover primary flows
118
+ - [ ] Feature meets measurable outcomes defined in Success Criteria
119
+ - [ ] No implementation details leak into specification
120
+
121
+ ## Notes
122
+
123
+ - Items marked incomplete require spec updates before `/speckit.clarify` or `/speckit.plan`
124
+ ```
125
+
126
+ b. **Run Validation Check**: Review the spec against each checklist item:
127
+ - For each item, determine if it passes or fails
128
+ - Document specific issues found (quote relevant spec sections)
129
+
130
+ c. **Handle Validation Results**:
131
+
132
+ - **If all items pass**: Mark the checklist complete and proceed to step 7
133
+
134
+ - **If items fail (excluding [NEEDS CLARIFICATION])**:
135
+ 1. List the failing items and specific issues
136
+ 2. Update the spec to address each issue
137
+ 3. Re-run validation until all items pass (max 3 iterations)
138
+ 4. If still failing after 3 iterations, document remaining issues in checklist notes and warn user
139
+
140
+ - **If [NEEDS CLARIFICATION] markers remain**:
141
+ 1. Extract all [NEEDS CLARIFICATION: ...] markers from the spec
142
+ 2. **LIMIT CHECK**: If more than 3 markers exist, keep only the 3 most critical (by scope/security/UX impact) and make informed guesses for the rest
143
+ 3. For each clarification needed (max 3), present options to user in this format:
144
+
145
+ ```markdown
146
+ ## Question [N]: [Topic]
147
+
148
+ **Context**: [Quote relevant spec section]
149
+
150
+ **What we need to know**: [Specific question from NEEDS CLARIFICATION marker]
151
+
152
+ **Suggested Answers**:
153
+
154
+ | Option | Answer | Implications |
155
+ |--------|--------|--------------|
156
+ | A | [First suggested answer] | [What this means for the feature] |
157
+ | B | [Second suggested answer] | [What this means for the feature] |
158
+ | C | [Third suggested answer] | [What this means for the feature] |
159
+ | Custom | Provide your own answer | [Explain how to provide custom input] |
160
+
161
+ **Your choice**: _[Wait for user response]_
162
+ ```
163
+
164
+ 4. **CRITICAL - Table Formatting**: Ensure markdown tables are properly formatted:
165
+ - Use consistent spacing with pipes aligned
166
+ - Each cell should have spaces around content: `| Content |` not `|Content|`
167
+ - Header separator must have at least 3 dashes: `|--------|`
168
+ - Test that the table renders correctly in markdown preview
169
+ 5. Number questions sequentially (Q1, Q2, Q3 - max 3 total)
170
+ 6. Present all questions together before waiting for responses
171
+ 7. Wait for user to respond with their choices for all questions (e.g., "Q1: A, Q2: Custom - [details], Q3: B")
172
+ 8. Update the spec by replacing each [NEEDS CLARIFICATION] marker with the user's selected or provided answer
173
+ 9. Re-run validation after all clarifications are resolved
174
+
175
+ d. **Update Checklist**: After each validation iteration, update the checklist file with current pass/fail status
176
+
177
+ 7. Report completion with branch name, spec file path, checklist results, and readiness for the next phase (`/speckit.clarify` or `/speckit.plan`).
178
+
179
+ **NOTE:** The script creates and checks out the new branch and initializes the spec file before writing.
180
+
181
+ ## General Guidelines
184
+
185
+ - Focus on **WHAT** users need and **WHY**.
186
+ - Avoid HOW to implement (no tech stack, APIs, code structure).
187
+ - Written for business stakeholders, not developers.
188
+ - DO NOT create any checklists that are embedded in the spec. That will be a separate command.
189
+
190
+ ### Section Requirements
191
+
192
+ - **Mandatory sections**: Must be completed for every feature
193
+ - **Optional sections**: Include only when relevant to the feature
194
+ - When a section doesn't apply, remove it entirely (don't leave as "N/A")
195
+
196
+ ### For AI Generation
197
+
198
+ When creating this spec from a user prompt:
199
+
200
+ 1. **Make informed guesses**: Use context, industry standards, and common patterns to fill gaps
201
+ 2. **Document assumptions**: Record reasonable defaults in the Assumptions section
202
+ 3. **Limit clarifications**: Maximum 3 [NEEDS CLARIFICATION] markers - use only for critical decisions that:
203
+ - Significantly impact feature scope or user experience
204
+ - Have multiple reasonable interpretations with different implications
205
+ - Lack any reasonable default
206
+ 4. **Prioritize clarifications**: scope > security/privacy > user experience > technical details
207
+ 5. **Think like a tester**: Every vague requirement should fail the "testable and unambiguous" checklist item
208
+ 6. **Common areas needing clarification** (only if no reasonable default exists):
209
+ - Feature scope and boundaries (include/exclude specific use cases)
210
+ - User types and permissions (if multiple conflicting interpretations possible)
211
+ - Security/compliance requirements (when legally/financially significant)
212
+
213
+ **Examples of reasonable defaults** (don't ask about these):
214
+
215
+ - Data retention: Industry-standard practices for the domain
216
+ - Performance targets: Standard web/mobile app expectations unless specified
217
+ - Error handling: User-friendly messages with appropriate fallbacks
218
+ - Authentication method: Standard session-based or OAuth2 for web apps
219
+ - Integration patterns: Use project-appropriate patterns (REST/GraphQL for web services, function calls for libraries, CLI args for tools, etc.)
220
+
221
+ ### Success Criteria Guidelines
222
+
223
+ Success criteria must be:
224
+
225
+ 1. **Measurable**: Include specific metrics (time, percentage, count, rate)
226
+ 2. **Technology-agnostic**: No mention of frameworks, languages, databases, or tools
227
+ 3. **User-focused**: Describe outcomes from user/business perspective, not system internals
228
+ 4. **Verifiable**: Can be tested/validated without knowing implementation details
229
+
230
+ **Good examples**:
231
+
232
+ - "Users can complete checkout in under 3 minutes"
233
+ - "System supports 10,000 concurrent users"
234
+ - "95% of searches return results in under 1 second"
235
+ - "Task completion rate improves by 40%"
236
+
237
+ **Bad examples** (implementation-focused):
238
+
239
+ - "API response time is under 200ms" (too technical, use "Users see results instantly")
240
+ - "Database can handle 1000 TPS" (implementation detail, use user-facing metric)
241
+ - "React components render efficiently" (framework-specific)
242
+ - "Redis cache hit rate above 80%" (technology-specific)