@shipfast-ai/shipfast 0.6.1 → 1.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,76 @@
+ ---
+ name: sf:check-plan
+ description: "Verify planned tasks before execution. Checks scope, consumers, dependencies, must-haves coverage."
+ allowed-tools:
+ - Read
+ - Bash
+ - Glob
+ - Grep
+ ---
+
+ <objective>
+ Plan-checker: verifies tasks in brain.db are safe to execute before /sf-do runs them.
+ Catches scope creep, missing consumers, broken dependencies, and uncovered must-haves.
+ </objective>
+
+ <process>
+
+ ## Step 1: Load tasks and must-haves
+
+ ```bash
+ sqlite3 -json .shipfast/brain.db "SELECT id, description, plan_text FROM tasks WHERE status = 'pending' ORDER BY created_at;" 2>/dev/null
+ sqlite3 -json .shipfast/brain.db "SELECT value FROM context WHERE key LIKE 'must_haves:%' LIMIT 1;" 2>/dev/null
+ ```
+
+ ## Step 2: Check each task
+
+ For each task, verify:
+
+ **Files exist**: every file path mentioned in plan_text exists on disk (or is marked as "create")
+ **Consumers checked**: if the task removes/modifies exports, grep confirms the consumer list is complete
+ **Dependencies**: if the task depends on another task, that task is also pending (not missing)
+ **Scope**: no banned words ("v1", "simplified", "placeholder", "hardcoded for now")
+ **Verify command**: the task has a concrete verify command (not vague)
+
+ ## Step 3: Check must-haves coverage
+
+ Every truth in must-haves must be addressed by at least one task.
+ Every artifact must have a task that creates/modifies it.
+ Flag orphaned must-haves (not covered by any task).
+
+ ## Step 4: STRIDE threat check (Feature #1)
+
+ For tasks that create new endpoints, auth flows, or data access:
+
+ | Threat | Check |
+ |---|---|
+ | **S**poofing | Does the task include auth/identity verification? |
+ | **T**ampering | Is input validated before processing? |
+ | **R**epudiation | Are actions logged/auditable? |
+ | **I**nformation disclosure | Are errors sanitized (no stack traces to users)? |
+ | **D**enial of service | Is there rate limiting or input size validation? |
+ | **E**levation of privilege | Are permissions checked before sensitive operations? |
+
+ For each applicable threat, output: `THREAT: [S/T/R/I/D/E] [component] — [mitigation needed]`
+
+ ## Step 5: Report
+
+ ```
+ Plan Check: [PASS / ISSUES FOUND]
+
+ Tasks: [N] pending
+ Must-haves: [N]/[M] covered
+ Threats: [N] flagged
+
+ [If issues:]
+ ISSUE: Task [id] — [file not found / missing consumer / scope creep / etc.]
+ THREAT: [S/T/R/I/D/E] [component] — [what's needed]
+
+ Fix plan with /sf-plan, then re-check with /sf-check-plan.
+ ```
+
+ </process>
+
+ <context>
+ $ARGUMENTS
+ </context>
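
The Step 2 scope check is pure string scanning, so it costs no model tokens. A minimal shell sketch, assuming the banned-word list documented above; `scope_flags` is a hypothetical helper (the real flow would read `plan_text` out of `.shipfast/brain.db` first):

```bash
# Print every banned scope-creep phrase found in a task's plan text.
scope_flags() {
  printf '%s\n' "$1" | grep -o -i -E 'v1|simplified|placeholder|hardcoded for now' | sort -u
}

scope_flags "Add login form (simplified v1, error handling hardcoded for now)"
```

A task whose plan text triggers any of these phrases should be reported as an ISSUE rather than silently executed.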
package/commands/sf/do.md CHANGED
@@ -123,31 +123,65 @@ Before execution:
 
  ## STEP 6: EXECUTE (2-30K tokens)
 
- ### Trivial workflow (no agent spawn):
- Execute inline in this context. No separate Builder agent.
- - Read the relevant file
- - Make the change
- - Run build/test if applicable
- - Commit with conventional format
+ **CRITICAL RULE FOR ALL WORKFLOWS:**
+ Before removing, deleting, or modifying any function/type/selector/export/component:
+ 1. `grep -r "[name]" --include="*.ts" --include="*.tsx" .` to find ALL consumers
+ 2. If other files use it, update them or keep it
+ 3. NEVER remove without checking consumers first
+ 4. Run build/typecheck AFTER each task, BEFORE committing
+
+ ### Trivial workflow (fast-mode, ≤3 edits, no agent spawn):
+ Execute inline. No planning, no Scout, no Architect, no Critic.
+ 1. Read the file(s) + grep for consumers of what you'll change
+ 2. Make the change (match existing patterns)
+ 3. Run build: `tsc --noEmit` / `npm run build` / `cargo check`
+ 4. If build fails, fix (up to 3 attempts)
+ 5. Commit with conventional format
+ 6. Done. No SUMMARY, no verification.
+ **Redirect**: if work exceeds 3 file edits or needs research → upgrade to the medium workflow.
 
  ### Medium workflow (1 Builder agent):
  Launch ONE Builder agent with ALL tasks batched:
- - Agent gets: base prompt (~1.5K) + brain context (~500) + all task descriptions
+ - Agent gets: base prompt + brain context + all task descriptions
  - Agent executes tasks sequentially within its context
- - One agent call instead of one per task = massive token savings
+ - One agent call instead of one per task = token savings
  - If Critic is not skipped, launch Critic after Builder completes
 
- ### Complex workflow (wave-based):
- Group tasks into dependency waves:
- - Wave 1: all independent tasks (can run in parallel)
- - Wave 2: tasks that depend on Wave 1
- - etc.
- For each wave:
- - Launch Builder agent with wave tasks + shared brain context
- - If a task fails after 3 attempts: document as DEFERRED, continue next task
- - Save state to brain.db after each wave (enables resume)
- - Launch Critic after all waves complete
- - Launch Scribe to record decisions and prepare PR description
+ ### Complex workflow (per-task agents, fresh context each):
+
+ **Check brain.db first.** If `/sf-plan` was run, tasks already exist:
+ ```bash
+ sqlite3 -json .shipfast/brain.db "SELECT id, description, plan_text FROM tasks WHERE status = 'pending' ORDER BY created_at;" 2>/dev/null
+ ```
+
+ If tasks are found in brain.db, execute them. If not, run inline planning first.
+
+ **Per-task execution (fresh context per task):**
+ For each pending task in brain.db:
+ 1. Launch a SEPARATE sf-builder agent with ONLY that task + brain context
+ 2. Builder gets fresh context — no accumulated garbage from previous tasks
+ 3. Builder executes: read → grep consumers → implement → build → verify → commit
+ 4. After Builder completes, update the task status in brain.db:
+ ```bash
+ sqlite3 .shipfast/brain.db "UPDATE tasks SET status='passed', commit_sha='[sha]' WHERE id='[id]';"
+ ```
+ 5. If Builder fails after 3 attempts:
+ ```bash
+ sqlite3 .shipfast/brain.db "UPDATE tasks SET status='failed', error='[error]' WHERE id='[id]';"
+ ```
+ 6. Continue to the next task regardless
+
+ **Wave grouping:**
+ - Independent tasks (no `depends`) → same wave → launch Builder agents in parallel
+ - Dependent tasks → later wave → wait for dependencies to complete
+ - Tasks touching the same files → sequential (never parallel)
+
+ **After all tasks:**
+ - Launch Critic agent (fresh context) to review ALL changes: `git diff HEAD~N`
+ - Launch Scribe agent (fresh context) to record decisions + learnings to brain.db
+ - Save session state for `/sf-resume`
+
+ **After execution, run `/sf-verify` for thorough verification.**
 
  ### Builder behavior:
  - Follows deviation tiers: auto-fix bugs (T1-3), STOP for architecture changes (T4)
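
The consumer rule above ("grep before you remove") can be exercised against a throwaway fixture. A sketch, assuming GNU-style `grep -r --include`; the fixture file names and the `consumers_of` helper are made up for illustration:

```bash
# "Find ALL consumers before removing an export", demonstrated on a fixture.
set -e
dir="$(mktemp -d)"
printf 'export function formatDate() {}\n' > "$dir/date.ts"
printf "import { formatDate } from './date';\nformatDate();\n" > "$dir/app.ts"

consumers_of() {
  # $1 symbol, $2 defining file (basename), $3 search root
  grep -rl "$1" --include='*.ts' --include='*.tsx' "$3" | grep -v "$2" || true
}

consumers_of formatDate date.ts "$dir"   # non-empty output -> update consumers or keep the export
```

Empty output means no other file references the symbol, so removal is safe; any listed file must be updated first.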
@@ -17,39 +17,47 @@ ShipFast — Autonomous Context-Engineered Development
  =====================================================
 
  CORE
- /sf-do <task>          The one command. Describe what you want in natural language.
-                        Auto-detects complexity: trivial (3K tokens), medium (15K), complex (40K)
+ /sf-do <task>          Execute a task. Auto-detects complexity.
+                        Trivial: inline (~3K) | Medium: 1 agent (~15K) | Complex: per-task (~40K)
 
  PLANNING
- /sf-discuss <task>     Detect ambiguity and ask targeted questions before planning.
-                        Stores answers as locked decisions in brain.db.
- /sf-project <desc>     Decompose a large project into phases with REQ-ID tracing.
-                        Each phase runs through /sf-do independently.
+ /sf-discuss <task>     Detect ambiguity, ask questions, lock decisions.
+ /sf-plan <task>        Research (Scout) + Plan (Architect). Stores tasks in brain.db.
+ /sf-check-plan         Verify plan before execution: scope, consumers, STRIDE threats.
+ /sf-project <desc>     Decompose a large project into phases with REQ-ID tracing.
+
+ EXECUTION
+ /sf-do                 Execute tasks from brain.db. Per-task fresh context for complex.
+ /sf-verify             Verify: 3-level artifacts, data flow, stubs, build, consumers.
 
  SHIPPING
- /sf-ship [branch]      Create branch, push, output PR link with auto-generated description.
+ /sf-ship [branch]      Create branch, push, output PR link.
+ /sf-milestone          Complete or start a milestone.
 
  SESSION
- /sf-status             Show brain stats, tasks, checkpoints.
- /sf-resume             Resume work from a previous session. Loads state from brain.db.
- /sf-undo [task-id]     Rollback a completed task via git revert or stash.
+ /sf-status             Brain stats, tasks, checkpoints, version.
+ /sf-resume             Resume from a previous session.
+ /sf-undo [task-id]     Rollback a completed task.
 
  KNOWLEDGE
- /sf-brain <query>      Query the codebase knowledge graph directly.
-                        Examples: "files like auth", "decisions", "hot files", "stats"
- /sf-learn <pattern>    Teach a reusable pattern. Persists across sessions.
-                        Example: /sf-learn tailwind-v4: Use @import not @tailwind
+ /sf-brain <query>      Query knowledge graph: files, decisions, learnings, hot files.
+ /sf-learn <pattern>    Teach a reusable pattern.
+ /sf-map                Generate codebase report from brain.db.
 
- CONFIG
- /sf-config [key val]   View or set model tiers and preferences.
- /sf-help               Show this help message.
+ PARALLEL WORK
+ /sf-workstream list      Show all workstreams.
+ /sf-workstream create    Create a namespaced workstream with branch.
+ /sf-workstream switch    Switch the active workstream.
+ /sf-workstream complete  Complete and merge a workstream.
 
- WORKFLOW
- /sf-do runs a 9-step pipeline that adapts to task complexity:
+ CONFIG
+ /sf-config             View or set model tiers and preferences.
+ /sf-help               Show this help.
 
- TRIVIAL (fix typo)   → Builder only, no planning, ~3K tokens
- MEDIUM (add feature) → Scout → Architect → Builder → Critic, ~15K tokens
- COMPLEX (new system) → Full pipeline + discussion + verification, ~40K tokens
+ WORKFLOWS
+ Simple:   /sf-do fix the typo in header
+ Standard: /sf-plan add dark mode → /sf-check-plan → /sf-do → /sf-verify
+ Complex:  /sf-project → /sf-discuss → /sf-plan → /sf-check-plan → /sf-do → /sf-verify → /sf-ship
 
  Steps: Analyze → Init Brain → Discuss → Plan → Checkpoint → Execute → Verify → Learn → Report
  Each step is skippable — the system only runs what's needed.
@@ -0,0 +1,84 @@
+ ---
+ name: sf:map
+ description: "Generate human-readable codebase report from brain.db. Shows architecture, structure, hot files, conventions."
+ allowed-tools:
+ - Bash
+ ---
+
+ <objective>
+ Generate a readable codebase summary from brain.db data.
+ Unlike GSD's 7 markdown mapper agents, this queries the existing SQLite brain directly — zero LLM tokens for data retrieval.
+ </objective>
+
+ <process>
+
+ Run these queries and format the output. Do NOT modify the queries.
+
+ ## File structure
+ ```bash
+ sqlite3 .shipfast/brain.db "SELECT file_path FROM nodes WHERE kind = 'file' ORDER BY file_path;" 2>/dev/null | head -50
+ ```
+
+ ## Symbol counts by kind
+ ```bash
+ sqlite3 .shipfast/brain.db "SELECT kind, COUNT(*) as count FROM nodes GROUP BY kind ORDER BY count DESC;" 2>/dev/null
+ ```
+
+ ## Top functions (most connected)
+ ```bash
+ sqlite3 .shipfast/brain.db "SELECT n.name, n.file_path, n.signature, COUNT(e.target) as connections FROM nodes n LEFT JOIN edges e ON n.id = e.source WHERE n.kind = 'function' GROUP BY n.id ORDER BY connections DESC LIMIT 15;" 2>/dev/null
+ ```
+
+ ## Hot files (most changed)
+ ```bash
+ sqlite3 .shipfast/brain.db "SELECT file_path, change_count FROM hot_files ORDER BY change_count DESC LIMIT 15;" 2>/dev/null
+ ```
+
+ ## Import graph (top connections)
+ ```bash
+ sqlite3 .shipfast/brain.db "SELECT REPLACE(source,'file:','') as from_file, REPLACE(target,'file:','') as to_file, kind FROM edges WHERE kind = 'imports' LIMIT 20;" 2>/dev/null
+ ```
+
+ ## Co-change clusters
+ ```bash
+ sqlite3 .shipfast/brain.db "SELECT REPLACE(source,'file:','') as file_a, REPLACE(target,'file:','') as file_b, weight FROM edges WHERE kind = 'co_changes' AND weight > 0.3 ORDER BY weight DESC LIMIT 15;" 2>/dev/null
+ ```
+
+ ## Decisions made
+ ```bash
+ sqlite3 .shipfast/brain.db "SELECT question, decision, phase FROM decisions ORDER BY created_at DESC LIMIT 10;" 2>/dev/null
+ ```
+
+ ## Learnings
+ ```bash
+ sqlite3 .shipfast/brain.db "SELECT pattern, problem, solution, confidence FROM learnings WHERE confidence > 0.3 ORDER BY confidence DESC LIMIT 10;" 2>/dev/null
+ ```
+
+ Format as:
+
+ ```
+ Codebase Map
+ ============
+
+ Structure: [N] files | [N] functions | [N] types | [N] components
+
+ Top Functions (most connected):
+ [name] in [file] — [connections] deps
+
+ Hot Files (most changed):
+ [file] — [N] changes
+
+ Co-Change Clusters (files that change together):
+ [file_a] ↔ [file_b] ([weight])
+
+ Decisions: [N] recorded
+ Learnings: [N] stored ([N] high confidence)
+ ```
+
+ STOP after output. No analysis.
+
+ </process>
+
+ <context>
+ $ARGUMENTS
+ </context>
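
sqlite3's default pipe-separated output maps directly onto the report lines. A sketch for the Hot Files section, assuming the `file_path|change_count` row shape produced by the query above (`format_hot_files` is an illustrative helper, not part of the package):

```bash
# Turn `file_path|change_count` rows into "Hot Files" report lines.
format_hot_files() {
  awk -F'|' '{ printf "  %s — %s changes\n", $1, $2 }'
}

printf 'src/auth.ts|12\nsrc/db.ts|7\n' | format_hot_files
```

In real use the rows would come from the `hot_files` query piped straight into the formatter.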
@@ -0,0 +1,106 @@
+ ---
+ name: sf:plan
+ description: "Research and plan a phase. Scout gathers findings, Architect creates task list. Stores tasks in brain.db."
+ argument-hint: "<describe what to build>"
+ allowed-tools:
+ - Read
+ - Bash
+ - Glob
+ - Grep
+ - Agent
+ - AskUserQuestion
+ ---
+
+ <objective>
+ Dedicated planning command. Produces a precise task list stored in brain.db.
+ Does NOT execute — that's /sf-do's job.
+
+ Separation matters: planning uses different context than execution.
+ Fresh context for each phase = no degradation.
+ </objective>
+
+ <process>
+
+ ## Step 1: Analyze
+
+ Classify intent and complexity (same as /sf-do Step 1):
+ - fix/feature/refactor/test/ship/perf/security/style/data/remove
+ - trivial/medium/complex
+
+ If trivial: skip planning. Tell the user to run `/sf-do` directly.
+
+ ## Step 2: Scout (fresh agent)
+
+ Launch the sf-scout agent to research the task:
+ - Provide: task description + brain.db context (decisions, learnings, hot files)
+ - Scout returns: files, functions, consumers, conventions, risks, recommendation
+ - Scout tags findings with confidence: [VERIFIED], [CITED], [ASSUMED]
+
+ **Scout runs in its own agent = fresh context, no pollution.**
+
+ Wait for Scout to complete before proceeding.
+
+ ## Step 3: Discuss (if complex or ambiguous)
+
+ Check for ambiguity (rule-based, zero tokens):
+ - WHERE: no file paths mentioned
+ - WHAT: no behavior described
+ - HOW: multiple approaches possible
+ - RISK: touches auth/payment/data
+ - SCOPE: >30 words with conjunctions
+
+ If ambiguous: ask 2-5 targeted questions. Store answers as locked decisions in brain.db:
+ ```bash
+ sqlite3 .shipfast/brain.db "INSERT INTO decisions (question, decision, reasoning, phase) VALUES ('[Q]', '[A]', '[why]', '[phase]');"
+ ```
+
+ ## Step 4: Architect (fresh agent)
+
+ Launch the sf-architect agent to create the task list:
+ - Provide: task description + Scout findings + locked decisions from brain.db
+ - Architect returns: must-haves (truths/artifacts/links) + ordered task list
+
+ **Architect runs in its own agent = fresh context, no pollution.**
+
+ Architect's output must include for EACH task:
+ - Exact file paths
+ - Consumer list (who uses what's being changed)
+ - Specific action instructions
+ - Verify command
+ - Measurable done criteria
+
+ ## Step 5: Store tasks in brain.db
+
+ For each task from Architect, store in brain.db:
+ ```bash
+ sqlite3 .shipfast/brain.db "INSERT INTO tasks (id, phase, description, plan_text, status) VALUES ('[id]', '[phase]', '[description]', '[full task details]', 'pending');"
+ ```
+
+ Also store must-haves:
+ ```bash
+ sqlite3 .shipfast/brain.db "INSERT OR REPLACE INTO context (id, scope, key, value, version, updated_at) VALUES ('phase:[name]:must_haves', 'phase', 'must_haves:[name]', '[JSON must-haves]', 1, strftime('%s', 'now'));"
+ ```
+
+ ## Step 6: Report
+
+ ```
+ Plan ready: [N] tasks stored in brain.db
+
+ Must-haves:
+ Truths: [list]
+ Artifacts: [list]
+ Key links: [list]
+
+ Tasks:
+ 1. [description] — [files] — [size]
+ 2. [description] — [files] — [size]
+ ...
+
+ Run /sf-do to execute. Tasks will run with fresh context per task.
+ ```
+
+ </process>
+
+ <context>
+ $ARGUMENTS
+ </context>
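
Step 3's ambiguity detection is rule-based, which is why it costs zero tokens: it is plain string inspection. A rough shell sketch; the regexes and thresholds below are simplified stand-ins (assumptions), not the package's actual detector:

```bash
# Emit ambiguity signals for a task description (subset of Step 3's rules).
ambiguity_signals() {
  desc="$1"
  printf '%s' "$desc" | grep -q -E '\.[A-Za-z]+' || echo WHERE         # no file path/extension mentioned
  printf '%s' "$desc" | grep -q -i -E 'auth|payment|data' && echo RISK  # touches sensitive areas
  [ "$(printf '%s' "$desc" | wc -w)" -gt 30 ] && echo SCOPE             # long, likely compound request
  true
}

ambiguity_signals "refactor the auth flow"
```

Each printed signal is a reason to ask one of the 2-5 targeted questions before locking decisions.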
@@ -30,6 +30,22 @@ Read the user's project description. If brain.db exists, load:
 
  If the project description is ambiguous, run the ambiguity detection from /sf-discuss first.
 
+ ## Step 1.5: Parallel Domain Research (for new/complex projects)
+
+ If the project involves unfamiliar technology or external integrations, launch **up to 4 Scout agents in parallel** to research:
+
+ 1. **Stack Scout** — What's the standard stack for this domain? Libraries, versions, frameworks.
+ 2. **Architecture Scout** — How are similar systems typically structured? Patterns, tiers, boundaries.
+ 3. **Pitfalls Scout** — What do projects like this commonly get wrong? Gotchas, anti-patterns.
+ 4. **Integration Scout** — What external services/APIs are needed? Auth, webhooks, SDKs.
+
+ Each Scout runs in its own context. Findings are stored in brain.db:
+ ```bash
+ sqlite3 .shipfast/brain.db "INSERT OR REPLACE INTO context (id, scope, key, value, version, updated_at) VALUES ('project:research:[topic]', 'project', 'research:[topic]', '[findings JSON]', 1, strftime('%s', 'now'));"
+ ```
+
+ Skip this step for simple projects or projects where brain.db already has relevant decisions.
+
  ### Multi-Repo Detection
  Check if the workspace contains multiple git repositories (submodules, monorepo packages):
  ```bash
@@ -0,0 +1,140 @@
+ ---
+ name: sf:verify
+ description: "Verify completed work against must-haves. Checks artifacts, data flow, stubs, build, consumers."
+ allowed-tools:
+ - Read
+ - Bash
+ - Glob
+ - Grep
+ - AskUserQuestion
+ ---
+
+ <objective>
+ Dedicated verification command. Runs AFTER /sf-do completes.
+ Checks that the codebase delivers what was planned — not just "tests pass".
+
+ Separation matters: verification needs fresh context to see the code objectively,
+ without the biases accumulated during execution.
+ </objective>
+
+ <process>
+
+ ## Step 1: Load must-haves from brain.db
+
+ ```bash
+ sqlite3 -json .shipfast/brain.db "SELECT value FROM context WHERE key LIKE 'must_haves:%' ORDER BY updated_at DESC LIMIT 1;" 2>/dev/null
+ ```
+
+ If no must-haves are stored, extract them from the task descriptions:
+ ```bash
+ sqlite3 -json .shipfast/brain.db "SELECT description FROM tasks WHERE status = 'passed' ORDER BY created_at;" 2>/dev/null
+ ```
+
+ ## Step 2: Check observable truths
+
+ For each truth in must-haves, verify:
+ - Does the code actually implement this?
+ - Grep for the function/component/route that delivers it
+ - Is it wired (imported and used), not just existing?
+
+ Score: VERIFIED / FAILED / NEEDS_HUMAN
+
+ ## Step 3: 3-Level artifact validation
+
+ For each artifact in must-haves:
+
+ **Level 1 — Exists**: `[ -f path ] && echo OK || echo MISSING`
+ **Level 2 — Substantive**: file has >3 non-comment lines (not empty/stub)
+ **Level 3 — Wired**: `grep -r "basename" --include="*.ts" .` shows imports from other files
+
+ Score per artifact: L1/L2/L3 or MISSING
+
+ ## Step 4: Data flow check
+
+ For new components/APIs, check that they receive real data:
+ - Not hardcoded empty arrays: `grep "data: \[\]" [file]`
+ - Not returning null: `grep "return null" [file]`
+ - Not empty handlers: `grep "() => {}" [file]`
+ - Fetch calls have response handling
+
+ ## Step 5: Stub detection (deep)
+
+ Scan all files changed in this session:
+ ```bash
+ git diff --name-only HEAD~[N]
+ ```
+
+ Check each for:
+ - TODO, FIXME, HACK, "not implemented", "placeholder"
+ - Empty click/submit handlers
+ - console.log debug statements
+ - debugger statements
+ - Commented-out code blocks
+
+ ## Step 6: Build verification
+
+ ```bash
+ npm run build 2>&1 | tail -5
+ # or: tsc --noEmit
+ # or: cargo check
+ ```
+
+ ## Step 7: Consumer integrity
+
+ For every function/type/export that was modified or removed:
+ ```bash
+ grep -r "removed_function_name" --include="*.ts" --include="*.tsx" .
+ ```
+ Any remaining consumers = CRITICAL failure.
+
+ ## Step 8: Score and report
+
+ ```
+ Verification Results
+ ====================
+
+ Truths: [N]/[M] verified
+ Artifacts: [N]/[M] at Level 3 (wired)
+ Data flow: [PASS/ISSUES]
+ Stubs: [N] found
+ Build: [PASS/FAIL]
+ Consumers: [CLEAN/BROKEN]
+
+ Verdict: PASS | PASS_WITH_WARNINGS | FAIL
+
+ [If FAIL:]
+ Failed items:
+ - [truth/artifact]: [what's wrong]
+ - [truth/artifact]: [what's wrong]
+
+ Fix with: /sf-do [fix description]
+ ```
+
+ ## Step 9: Store results
+
+ ```bash
+ sqlite3 .shipfast/brain.db "INSERT OR REPLACE INTO context (id, scope, key, value, version, updated_at) VALUES ('verify:latest', 'session', 'verification', '[JSON results]', 1, strftime('%s', 'now'));"
+ ```
+
+ ## Step 10: Interactive UAT (if complex)
+
+ For complex features, offer manual testing:
+ ```
+ Manual checks (answer pass/issue/skip):
+
+ Test 1: [what to test]
+ Expected: [behavior]
+ Result?
+
+ Test 2: [what to test]
+ Expected: [behavior]
+ Result?
+ ```
+
+ For each issue reported, generate a fix task and store it in brain.db.
+
+ </process>
+
+ <context>
+ $ARGUMENTS
+ </context>
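
Step 3's three levels compose into one small classifier. A fixture-based sketch, assuming the documented thresholds (>3 non-comment lines, import grep); the `.ts`-only scoping, fixture names, and `artifact_level` helper are assumptions for illustration:

```bash
# Classify an artifact as MISSING / L1 / L2 / L3 per the 3-level check.
set -e
dir="$(mktemp -d)"
printf 'export function pick() {\n  return 1;\n}\n// helper\nexport const N = 2;\n' > "$dir/pick.ts"
printf "import { pick } from './pick';\npick();\n" > "$dir/main.ts"

artifact_level() {
  f="$1"; root="$2"
  [ -f "$f" ] || { echo MISSING; return; }
  # Level 2 gate: more than 3 non-comment, non-blank lines
  lines="$(grep -c -v -E '^[[:space:]]*(//|$)' "$f")"
  [ "$lines" -gt 3 ] || { echo L1; return; }
  # Level 3 gate: some other file imports it
  base="$(basename "$f" .ts)"
  if grep -rq "from './$base'" --include='*.ts' "$root"; then echo L3; else echo L2; fi
}

artifact_level "$dir/pick.ts" "$dir"   # wired: main.ts imports it
```

Only artifacts scoring L3 count as delivered; L1/L2 results feed the "Failed items" list in Step 8.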
@@ -0,0 +1,51 @@
+ ---
+ name: sf:workstream
+ description: "Manage parallel workstreams — create, list, switch, complete."
+ argument-hint: "list | create <name> | switch <name> | complete <name>"
+ allowed-tools:
+ - Bash
+ - AskUserQuestion
+ ---
+
+ <objective>
+ Workstreams let you work on multiple features in parallel, each with its own branch and task tracking.
+ Each workstream gets a namespaced set of tasks in brain.db.
+ </objective>
+
+ <process>
+
+ ## Parse subcommand from $ARGUMENTS
+
+ ### list
+ ```bash
+ sqlite3 -json .shipfast/brain.db "SELECT key, value FROM context WHERE scope = 'workstream' ORDER BY updated_at DESC;" 2>/dev/null
+ git branch --list "sf/*" 2>/dev/null
+ ```
+ Show all workstreams with status (active/complete) and branch name.
+
+ ### create <name>
+ 1. Create git branch: `git checkout -b sf/[name]`
+ 2. Store in brain.db:
+ ```bash
+ sqlite3 .shipfast/brain.db "INSERT OR REPLACE INTO context (id, scope, key, value, version, updated_at) VALUES ('workstream:[name]', 'workstream', '[name]', '{\"status\":\"active\",\"branch\":\"sf/[name]\",\"created\":\"[timestamp]\"}', 1, strftime('%s', 'now'));"
+ ```
+ 3. Report: `Workstream [name] created on branch sf/[name]`
+
+ ### switch <name>
+ 1. `git checkout sf/[name]`
+ 2. Report: `Switched to workstream [name]`
+
+ ### complete <name>
+ 1. Ask: "Merge sf/[name] into current branch? [y/n]"
+ 2. If yes: `git merge sf/[name]` then `git branch -d sf/[name]`
+ 3. Update brain.db:
+ ```bash
+ sqlite3 .shipfast/brain.db "UPDATE context SET value = replace(value, 'active', 'complete') WHERE id = 'workstream:[name]';"
+ ```
+ 4. Report: `Workstream [name] completed and merged.`
+
+ </process>
+
+ <context>
+ $ARGUMENTS
+ </context>