@shipfast-ai/shipfast 1.0.0 → 1.0.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -4,7 +4,7 @@
 
  **Autonomous context-engineered development system with SQLite brain.**
 
- **5 agents. 14 commands. Per-task fresh context. 70-90% fewer tokens.**
+ **5 agents. 17 commands. Per-task fresh context. 70-90% fewer tokens.**
 
  Claude Code, OpenCode, Gemini CLI, Kilo, Codex, Copilot, Cursor, Windsurf, Antigravity, Augment, Trae, Qwen Code, CodeBuddy, Cline
 
@@ -65,7 +65,9 @@ Updates the package and re-detects runtimes (catches newly installed AI tools).
  ```bash
  shipfast init           # Index current repo into .shipfast/brain.db
  shipfast init --fresh   # Full reindex (clears existing brain)
- shipfast status         # Show installed runtimes + brain stats
+ shipfast link <path>    # Link another repo for cross-repo search
+ shipfast unlink [path]  # Unlink a repo (or all)
+ shipfast status         # Show installed runtimes + brain + links
  shipfast update         # Update + re-detect runtimes
  shipfast uninstall      # Remove from all AI tools
  shipfast help           # Show all commands
@@ -208,10 +210,35 @@ All state lives in `.shipfast/brain.db`. Zero markdown files.
  | `requirements` | REQ-IDs mapped to phases for tracing |
  | `checkpoints` | Git stash refs for rollback |
  | `hot_files` | Most frequently changed files from git history |
+ | `architecture` | Auto-computed layers from import graph (zero hardcoding) |
+ | `folders` | Directory roles auto-detected from import patterns |
 
  **Incremental indexing**: only re-indexes changed files (~300ms). Deleted files auto-cleaned.
 
- **MCP Server**: brain.db is exposed as structured MCP tools — `brain_stats`, `brain_search`, `brain_decisions`, `brain_learnings`, etc. LLMs call these instead of improvising SQL.
+ **MCP Server**: brain.db is exposed as 17 structured MCP tools. LLMs call these instead of improvising SQL.
+
+ ---
+
+ ## Architecture Intelligence
+
+ ShipFast auto-derives architecture layers from the import graph — **zero hardcoded folder patterns**. Works with any project structure, any language.
+
+ **How it works**:
+ 1. BFS from entry points (files nothing imports) assigns layer depth
+ 2. Fuzzy import resolution handles `@/`, `~/`, and alias paths
+ 3. Folder roles detected from aggregate import/export ratios
+ 4. Recomputed on every `shipfast init` (instant)
+
+ **What it produces**:
+
+ - **Layer 0** (entry): files nothing imports — pages, routes, App.tsx
+ - **Layer 1-N** (deeper): each layer imported by the layer above
+ - **Leaf layer**: files that import nothing — types, constants
+ - **Folder roles**: entry (imports many), shared (imported by many), consumer, leaf, foundation
+
+ **Why it matters**: Scout knows which layer a file lives in. Builder knows to check upstream consumers before modifying a shared layer. Critic can detect skip-layer violations. Verifier traces data flow from entry to data source.
+
+ All exposed as MCP tools: `brain_arch_layers`, `brain_arch_folders`, `brain_arch_file`, `brain_arch_data_flow`, `brain_arch_most_connected`.
 
  ---
 
@@ -247,6 +274,7 @@ If other files use it → update them or keep it. **NEVER remove without checkin
  |---|---|
  | `/sf-do <task>` | Execute a task. Auto-detects complexity: trivial → medium → complex |
  | `/sf-plan <task>` | Research (Scout) + Plan (Architect). Stores tasks in brain.db |
+ | `/sf-check-plan` | Verify plan before execution: scope, consumers, deps, STRIDE threats |
  | `/sf-verify` | Verify completed work: artifacts, data flow, stubs, build, consumers |
  | `/sf-discuss <task>` | Detect ambiguity, ask targeted questions, lock decisions |
 
@@ -254,8 +282,9 @@ If other files use it → update them or keep it. **NEVER remove without checkin
 
  | Command | What it does |
  |---|---|
- | `/sf-project <desc>` | Decompose large project into phases with REQ-ID tracing |
+ | `/sf-project <desc>` | Decompose large project into phases with REQ-ID tracing + 4 parallel researchers |
  | `/sf-milestone [complete\|new]` | Complete current milestone or start next version |
+ | `/sf-workstream <action>` | Parallel feature branches: create, list, switch, complete |
 
  ### Shipping
 
@@ -277,6 +306,7 @@ If other files use it → update them or keep it. **NEVER remove without checkin
  |---|---|
  | `/sf-brain <query>` | Query knowledge graph: files, decisions, learnings, hot files |
  | `/sf-learn <pattern>` | Teach a reusable pattern (persists across sessions) |
+ | `/sf-map` | Generate codebase report: architecture layers, hot files, co-change clusters |
 
  ### Config
 
@@ -291,8 +321,8 @@ If other files use it → update them or keep it. **NEVER remove without checkin
 
  ```
  Simple:   /sf-do fix the typo in header
- Standard: /sf-plan add dark mode → /sf-do → /sf-verify
- Complex:  /sf-project Build billing → /sf-discuss → /sf-plan → /sf-do → /sf-verify → /sf-ship
+ Standard: /sf-plan add dark mode → /sf-check-plan → /sf-do → /sf-verify
+ Complex:  /sf-project Build billing → /sf-discuss → /sf-plan → /sf-check-plan → /sf-do → /sf-verify → /sf-ship
  ```
 
  ---
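The BFS layering the new Architecture Intelligence section describes can be sketched in a few lines; `assignLayers` and the sample `graph` below are hypothetical illustrations, not the package's actual indexer code.

```javascript
// Sketch: assign architecture layers by BFS from entry points.
// An entry point is a file that no other file imports (layer 0);
// each subsequent layer is whatever the previous layer imports.
function assignLayers(importGraph) {
  // importGraph: { file: [importedFiles...] }
  const imported = new Set(Object.values(importGraph).flat());
  const entries = Object.keys(importGraph).filter(f => !imported.has(f));
  const layers = {};
  let frontier = entries;
  let depth = 0;
  while (frontier.length) {
    const next = [];
    for (const file of frontier) {
      if (layers[file] !== undefined) continue; // keep shallowest depth
      layers[file] = depth;
      next.push(...(importGraph[file] || []));
    }
    frontier = next;
    depth++;
  }
  return layers;
}

const graph = {
  'App.tsx': ['orderService.ts'],
  'orderService.ts': ['types.ts'],
  'types.ts': []
};
// App.tsx is layer 0 (nothing imports it); types.ts, importing
// nothing, ends up in the deepest ("leaf") layer.
console.log(assignLayers(graph));
```

Note this naive version does not handle import cycles or alias resolution (`@/`, `~/`), which the README says the real indexer does fuzzily.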
@@ -17,7 +17,7 @@ Plan BACKWARD from the goal:
 
  1. **State the goal** as an outcome: "Working auth with JWT refresh" (not "build auth")
  2. **Derive observable truths** (3-7): What must be TRUE when done?
- - "Valid credentials return 200 + JWT cookie"
+ - "Specific testable outcome from this feature"
  - "Invalid credentials return 401"
  - "Expired token auto-refreshes"
  3. **Derive required artifacts**: What files must EXIST for each truth?
@@ -38,9 +38,9 @@ Must-haves:
 
  Every task MUST have:
 
- **Files**: EXACT paths. `src/services/api/venueApi.ts` — NOT "the venue service file"
+ **Files**: EXACT paths from Scout findings — never vague like "the service file"
  **Action**: Specific instructions. Testable: could a different AI implement without asking?
- **Verify**: Concrete command: `npx tsc --noEmit`, `npm test -- auth`, `grep -r "functionName" src/`
+ **Verify**: Concrete command that proves the task works (build check, grep, test run)
  **Done**: Measurable criteria: "Returns 200 with JWT" — NOT "auth works"
 
  ## Sizing
@@ -149,7 +149,7 @@ Key links: [what must be CONNECTED]
  - [specific instruction with function names]
  - [specific instruction]
  - Update consumers: `file1.ts` line 15 (change import)
- - **Verify**: `npx tsc --noEmit` and `grep -r "functionName" src/`
+ - **Verify**: [concrete command from project's build/test tooling]
  - **Done**: [measurable criterion]
  - **Size**: small | medium | large
  - **Depends**: none | Task N
package/agents/builder.md CHANGED
@@ -16,7 +16,7 @@ You are BUILDER. You implement tasks precisely and safely. You NEVER remove, ren
 
  Before deleting, removing, renaming, or modifying ANY function, type, selector, export, or component:
 
- 1. `grep -r "functionName" --include="*.ts" --include="*.tsx" --include="*.js" --include="*.jsx" .`
+ 1. `grep -r "<name-being-changed>" --include="*.ts" --include="*.tsx" --include="*.js" --include="*.jsx" .`
  2. Count results. If OTHER files use it → update those files too, or keep the original
  3. NEVER remove without checking. This is the #1 cause of cascading breaks.
 
package/agents/scout.md CHANGED
@@ -1,101 +1,111 @@
  ---
  name: sf-scout
- description: Reconnaissance agent. Reads code, finds files, fetches docs. Gathers precisely what's needed — nothing more.
+ description: Reconnaissance agent. Finds EVERY relevant file for a task across repos, across layers, across runtime boundaries.
  model: haiku
  tools: Read, Glob, Grep, Bash, WebSearch, WebFetch
  ---
 
  <role>
- You are SCOUT. Gather precisely the information needed for a task — nothing more. Every extra token is budget stolen from Builder.
+ You are SCOUT. Your job is to find EVERY file relevant to a task — not just the obvious ones. You trace the complete flow: UI state → API → backend → database. You search linked repos. You never miss a file.
  </role>
 
- <search_strategy>
- ## Search narrow → wide
- 1. Grep exact function/component/type name
- 2. Glob for likely file paths
- 3. Read first 50 lines of promising files (imports + exports only)
- 4. Follow brain.db `related_code` if provided
- 5. Wide search ONLY if steps 1-4 found nothing
+ <flow_tracing>
+ ## Complete Flow Discovery (the core of what you do)
 
- ## Hard limits
- - Max 12 tool calls total. If 5 consecutive searches find nothing, STOP.
- - Max 80 lines read per file (use offset/limit)
- - NEVER read entire files. Signatures + imports only.
- - Prefer Grep over Read. Prefer Glob over Bash ls.
- </search_strategy>
+ For any task, trace the FULL flow by searching in 6 directions:
 
- <confidence_levels>
- ## Tag every finding (gaps #28, #30, #34)
+ **1. Direct matches** — files with the feature name
+ ```bash
+ grep -rl "order" --include="*.ts" --include="*.tsx" --include="*.js" --include="*.rs" --include="*.py" . | head -20
+ ```
 
- **[VERIFIED]** confirmed via tool output (grep found it, file exists, npm registry checked)
- **[CITED: url]** from official docs or README
- **[ASSUMED]** from training knowledge, needs user confirmation
+ **2. Upstream (who calls/renders this)**
+ - grep for imports of the found files
+ - grep for component usage: `<ComponentName` patterns
+ - grep for function calls: `<name>(` patterns
+ - grep for route definitions: path strings like `'/feature-name'`
 
- Critical claims MUST have 2+ sources. Single-source = tag as [LOW CONFIDENCE].
- Never state assumptions as facts.
- </confidence_levels>
+ **3. Downstream (what this calls/uses)**
+ - Read imports of found files
+ - Follow: service calls, API fetches, database queries, hooks
+ - grep for: `fetch(`, `axios.`, `useQuery(`, `useMutation(`
 
- <architecture_mapping>
- ## For medium/complex tasks, identify tier ownership (gap #29)
+ **4. State connections (Redux/Zustand/Context)**
+ - grep for: `dispatch(orderActions.` or `orderSlice` or `useOrderStore`
+ - grep for selectors: `selectOrder` or `makeSelectOrder` or `useSelector.*order`
+ - grep for reducers/slices that handle this state
 
- | Tier | What lives here |
- |------|-----------------|
- | Client | Components, hooks, local state, routing |
- | Server | API routes, middleware, auth, SSR |
- | Database | Models, queries, migrations, seeds |
- | External | Third-party APIs, webhooks, CDN |
+ **5. API/Backend bridge**
+ - grep for endpoint strings: `'/api/orders'` or `'/orders'`
+ - This finds BOTH frontend callers AND backend handlers
+ - In linked repos: same grep runs across all brains
 
- Output which tiers the task touches.
- </architecture_mapping>
+ **6. Data layer**
+ - grep for table/model names: `orders` in SQL, ORM, migration files
+ - grep for: `.findAll(`, `.create(`, `.update(`, `.delete(` near the feature name
+ - grep for schema/migration files: `CreateTable`, `ALTER TABLE`
+ </flow_tracing>
 
- <runtime_state>
- ## For rename/refactor tasks only (gap #31)
+ <search_strategy>
+ ## Search order
 
- Check 5 categories:
- 1. Stored data — what DBs store the renamed string?
- 2. Config — what external UIs/services reference it?
- 3. OS registrations — cron jobs, launch agents, task scheduler?
- 4. Secrets/env — what .env or CI vars reference it?
- 5. Build artifacts — compiled files, Docker images, lock files?
+ 1. **MCP brain_search** (if available) — instant results from brain.db + linked repos
+ 2. **Grep** for feature keywords across entire codebase
+ 3. **Read imports** of found files to discover downstream dependencies
+ 4. **Grep for consumers** of found files to discover upstream callers
+ 5. **Architecture query** — `brain_arch_data_flow` to see layer position + connections
+ 6. **Linked repos** — `brain_linked` to check if cross-repo search is needed
 
- If nothing in a category, state explicitly: "None — verified by [how]"
- </runtime_state>
+ ## Hard limits
+ - Max 15 tool calls. If 5 consecutive find nothing new, STOP.
+ - Max 80 lines per file read (imports + key functions only)
+ - Prefer Grep over Read. Prefer MCP tools over raw sqlite3.
+ </search_strategy>
+
+ <confidence_levels>
+ **[VERIFIED]** — grep found it, file confirmed to exist
+ **[CITED: source]** — from docs or official source
+ **[ASSUMED]** — training knowledge, needs confirmation
+ **[LINKED: repo-name]** — found in a linked repo
+
+ Critical claims need 2+ sources. Single-source = [LOW CONFIDENCE].
+ </confidence_levels>
 
  <output_format>
  ## Findings
 
- ### Files (with confidence)
- - `path/to/file.ts` — [purpose, 5 words] [VERIFIED]
+ ### Flow Map
+ Build a tree showing how files connect from what you actually found via grep/read.
+ Show each file with its role (entry/state/service/api/data) and how it connects to the next.
+ Tag linked repo files. Show the ACTUAL chain, not a generic template.
 
- ### Key Functions
- - `functionName(params)` in `file.ts:42` [what it does] [VERIFIED]
-
- ### Consumers (CRITICAL for refactors)
- - `functionName` is imported by: `file1.ts`, `file2.ts`, `file3.ts` [VERIFIED]
+ ### Files
+ Group every found file by its role in the flow. Tag with [VERIFIED] or [LINKED: repo].
 
- ### Types
- - `TypeName` in `file.ts:10` { field1, field2 } [VERIFIED]
+ ### Key Functions
+ List function signatures with file:line for anything Builder will need to modify.
 
- ### Architecture
- - Tiers touched: [Client, Server, Database]
+ ### Consumers
+ For every function/type/export that might be changed: list ALL files that import/use it.
+ This is the MOST IMPORTANT section — missing a consumer causes cascading breaks.
 
- ### Conventions
- - [import style, error handling, state management pattern]
+ ### Config/Env
+ List any env vars, feature flags, or config referenced by the found files.
 
  ### Risks
- - [gotchas, deprecated APIs, version quirks] [confidence level]
+ Gotchas, deprecated APIs, version-specific behavior found during search.
 
  ### Recommendation
- [2-3 sentences: what to change, which files, what consumers to update]
+ What to change, which files, which consumers to update, cross-repo impact.
  </output_format>
 
  <anti_patterns>
- - Reading entire directories "to understand the project"
- - Reading config files "just in case"
- - Searching for broad patterns ("how is error handling done")
- - Reading the same file twice
- - Continuing after finding the answer — STOP immediately
- - Stating unverified claims without [ASSUMED] tag
+ - Stopping after finding the "main" file — ALWAYS trace the full flow
+ - Missing linked repo files — ALWAYS check brain_linked
+ - Ignoring state management connections — grep for dispatch/selector/store
+ - Ignoring API string matches — the string '/api/orders' bridges frontend↔backend
+ - Reading entire files — signatures + imports only
+ - Stating unverified claims without confidence tag
  </anti_patterns>
 
  <context>
@@ -103,7 +113,8 @@ $ARGUMENTS
  </context>
 
  <task>
- Research the task. Return compact, actionable findings with confidence tags.
- Include consumer list for anything Builder might modify/remove.
- Stop as soon as you have enough. Less is more.
+ Find EVERY file relevant to this task.
+ Trace the complete flow: entry state → service → API → backend → data.
+ Search linked repos. Check consumers. Map the architecture layers.
+ Output a flow map + grouped file list + consumer list.
  </task>
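Direction 5 above (the API/backend bridge) works because one endpoint string appears on both sides of the HTTP boundary, so a plain substring search links caller and handler. A minimal sketch with hypothetical file contents; `findEndpointFiles` is an illustrative name, not part of the package.

```javascript
// Sketch: one endpoint string bridges frontend and backend.
// files maps path → source text (e.g. loaded from local + linked repos).
function findEndpointFiles(files, endpoint) {
  return Object.entries(files)
    .filter(([, src]) => src.includes(endpoint))
    .map(([p]) => p);
}

const files = {
  'web/src/orderApi.ts': "fetch('/api/orders')",
  'server/routes/orders.js': "app.get('/api/orders', listOrders)",
  'web/src/userApi.ts': "fetch('/api/users')"
};
// Matches both the frontend caller and the backend handler,
// but not the unrelated users API.
console.log(findEndpointFiles(files, '/api/orders'));
```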
package/bin/install.js CHANGED
@@ -15,6 +15,25 @@ const path = require('path');
  const os = require('os');
  const { execFileSync: safeRun } = require('child_process');
 
+ // WSL + Windows detection (from GSD's 49-release edge case fixes)
+ if (process.platform === 'win32') {
+   let isWSL = false;
+   try {
+     if (process.env.WSL_DISTRO_NAME) isWSL = true;
+     else if (fs.existsSync('/proc/version')) {
+       const pv = fs.readFileSync('/proc/version', 'utf8').toLowerCase();
+       if (pv.includes('microsoft') || pv.includes('wsl')) isWSL = true;
+     }
+   } catch {}
+   if (isWSL) {
+     console.error('\nDetected WSL with Windows-native Node.js.');
+     console.error('Install a Linux-native Node.js inside WSL:');
+     console.error('  curl -fsSL https://fnm.vercel.app/install | bash');
+     console.error('  fnm install --lts\n');
+     process.exit(1);
+   }
+ }
+
  const cyan = '\x1b[36m';
  const green = '\x1b[32m';
  const yellow = '\x1b[33m';
@@ -54,12 +73,11 @@ function main() {
  switch (command) {
    case 'init':
    case 'train': return cmdInit();
+   case 'link': return cmdLink();
+   case 'unlink': return cmdUnlink();
    case 'update': return cmdUpdate();
    case 'uninstall': return cmdUninstall();
    case 'status': return cmdStatus();
-   case 'brain': return cmdBrain();
-   case 'learn': return cmdLearn();
-   case 'decide': return cmdDecide();
    case 'help':
    case 'h': return cmdHelp();
    case 'version':
@@ -229,6 +247,142 @@ function cmdInit() {
    }
  }
 
+ // ============================================================
+ // LINK — connect another repo's brain for cross-repo awareness
+ // ============================================================
+
+ function cmdLink() {
+   const targetPath = process.argv[3];
+   if (!targetPath) {
+     console.log(`${bold}Usage:${reset} shipfast link <path-to-other-repo>\n`);
+     console.log(`Links another repo's brain.db so agents can query across repos.`);
+     console.log(`Example: ${cyan}shipfast link ../backend${reset}\n`);
+
+     // Show current links
+     const cwd = process.cwd();
+     const brainDb = path.join(cwd, '.shipfast', 'brain.db');
+     if (fs.existsSync(brainDb)) {
+       try {
+         const links = safeRun('sqlite3', ['-json', brainDb, "SELECT value FROM config WHERE key = 'linked_repos';"], {
+           encoding: 'utf8', stdio: ['pipe', 'pipe', 'pipe']
+         }).trim();
+         if (links) {
+           const parsed = JSON.parse(links);
+           if (parsed.length && parsed[0].value) {
+             const repos = JSON.parse(parsed[0].value);
+             if (repos.length) {
+               console.log(`${bold}Linked repos:${reset}`);
+               repos.forEach(r => {
+                 const hasDb = fs.existsSync(path.join(r, '.shipfast', 'brain.db'));
+                 console.log(`  ${hasDb ? green : red}${r}${reset} ${hasDb ? '(brain.db found)' : '(brain.db missing — run shipfast init there)'}`);
+               });
+               console.log('');
+             }
+           }
+         }
+       } catch {}
+     }
+     return;
+   }
+
+   const cwd = process.cwd();
+   const resolved = path.resolve(cwd, targetPath);
+
+   // Validate target
+   if (!fs.existsSync(resolved)) {
+     console.log(`${red}Path not found: ${resolved}${reset}\n`);
+     return;
+   }
+
+   if (!fs.existsSync(path.join(resolved, '.git'))) {
+     console.log(`${yellow}Warning: ${resolved} is not a git repo.${reset}`);
+   }
+
+   const targetBrain = path.join(resolved, '.shipfast', 'brain.db');
+   if (!fs.existsSync(targetBrain)) {
+     console.log(`${yellow}Warning: No brain.db found at ${resolved}. Run ${cyan}shipfast init${reset}${yellow} there first.${reset}`);
+   }
+
+   // Ensure local brain exists
+   const localBrain = path.join(cwd, '.shipfast', 'brain.db');
+   if (!fs.existsSync(localBrain)) {
+     console.log(`${red}No local brain.db. Run ${cyan}shipfast init${reset}${red} first.${reset}\n`);
+     return;
+   }
+
+   // Get existing links
+   let links = [];
+   try {
+     const existing = safeRun('sqlite3', ['-json', localBrain, "SELECT value FROM config WHERE key = 'linked_repos';"], {
+       encoding: 'utf8', stdio: ['pipe', 'pipe', 'pipe']
+     }).trim();
+     if (existing) {
+       const parsed = JSON.parse(existing);
+       if (parsed.length && parsed[0].value) links = JSON.parse(parsed[0].value);
+     }
+   } catch {}
+
+   // Add if not already linked
+   if (links.includes(resolved)) {
+     console.log(`${dim}Already linked: ${resolved}${reset}\n`);
+     return;
+   }
+
+   links.push(resolved);
+   const escaped = JSON.stringify(links).replace(/'/g, "''");
+   safeRun('sqlite3', [localBrain, `INSERT OR REPLACE INTO config (key, value) VALUES ('linked_repos', '${escaped}');`], {
+     stdio: ['pipe', 'pipe', 'pipe']
+   });
+
+   console.log(`${green}Linked: ${resolved}${reset}`);
+   console.log(`${dim}Agents will now query both local and linked brains.${reset}`);
+   console.log(`${dim}Total linked repos: ${links.length}${reset}\n`);
+ }
+
+ function cmdUnlink() {
+   const targetPath = process.argv[3];
+   const cwd = process.cwd();
+   const localBrain = path.join(cwd, '.shipfast', 'brain.db');
+
+   if (!fs.existsSync(localBrain)) {
+     console.log(`${red}No local brain.db.${reset}\n`);
+     return;
+   }
+
+   // Get existing links
+   let links = [];
+   try {
+     const existing = safeRun('sqlite3', ['-json', localBrain, "SELECT value FROM config WHERE key = 'linked_repos';"], {
+       encoding: 'utf8', stdio: ['pipe', 'pipe', 'pipe']
+     }).trim();
+     if (existing) {
+       const parsed = JSON.parse(existing);
+       if (parsed.length && parsed[0].value) links = JSON.parse(parsed[0].value);
+     }
+   } catch {}
+
+   if (!targetPath) {
+     // Unlink all
+     if (links.length === 0) {
+       console.log(`${dim}No linked repos.${reset}\n`);
+       return;
+     }
+     safeRun('sqlite3', [localBrain, "DELETE FROM config WHERE key = 'linked_repos';"], {
+       stdio: ['pipe', 'pipe', 'pipe']
+     });
+     console.log(`${green}Unlinked all ${links.length} repos.${reset}\n`);
+     return;
+   }
+
+   const resolved = path.resolve(cwd, targetPath);
+   links = links.filter(l => l !== resolved);
+   const escaped = JSON.stringify(links).replace(/'/g, "''");
+   safeRun('sqlite3', [localBrain, `INSERT OR REPLACE INTO config (key, value) VALUES ('linked_repos', '${escaped}');`], {
+     stdio: ['pipe', 'pipe', 'pipe']
+   });
+   console.log(`${green}Unlinked: ${resolved}${reset}\n`);
+ }
+
  function findIndexer() {
    // Check all global runtime dirs + package source
    const paths = Object.values(RUNTIMES)
@@ -338,12 +492,14 @@ function cleanSettings(dir) {
 
  function cmdHelp() {
    console.log(`${bold}Terminal commands:${reset}\n`);
-   console.log(`  ${cyan}shipfast init${reset}          Index current repo into .shipfast/brain.db`);
-   console.log(`  ${cyan}shipfast init --fresh${reset}  Full reindex (clears existing brain.db)`);
-   console.log(`  ${cyan}shipfast status${reset}        Show installed runtimes + brain status`);
-   console.log(`  ${cyan}shipfast update${reset}        Update to latest + re-detect runtimes`);
-   console.log(`  ${cyan}shipfast uninstall${reset}     Remove from all AI tools`);
-   console.log(`  ${cyan}shipfast help${reset}          Show this help\n`);
+   console.log(`  ${cyan}shipfast init${reset}           Index current repo into .shipfast/brain.db`);
+   console.log(`  ${cyan}shipfast init --fresh${reset}   Full reindex (clears existing brain.db)`);
+   console.log(`  ${cyan}shipfast link <path>${reset}    Link another repo for cross-repo search`);
+   console.log(`  ${cyan}shipfast unlink [path]${reset}  Unlink a repo (or all)`);
+   console.log(`  ${cyan}shipfast status${reset}         Show installed runtimes + brain + links`);
+   console.log(`  ${cyan}shipfast update${reset}         Update to latest + re-detect runtimes`);
+   console.log(`  ${cyan}shipfast uninstall${reset}      Remove from all AI tools`);
+   console.log(`  ${cyan}shipfast help${reset}           Show this help\n`);
    console.log(`${bold}In your AI tool:${reset}\n`);
    console.log(`  ${cyan}/sf-do${reset} <task>      The one command — describe what you want`);
    console.log(`  ${cyan}/sf-discuss${reset} <task> Clarify ambiguity before planning`);
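The link/unlink commands above persist the repo list as a JSON array in brain.db's `config` table, doubling single quotes so the value can sit inside a `'...'` SQL string literal (SQLite's escaping rule). A minimal sketch of that round trip; `toSqlValue` and `fromConfigRow` are illustrative names.

```javascript
// Sketch: how a linked-repos list survives the SQL round trip.
// Store: JSON-encode, then double any single quotes for the literal.
// Read: sqlite3 un-doubles them, so the value comes back as plain JSON.
function toSqlValue(links) {
  return JSON.stringify(links).replace(/'/g, "''");
}
function fromConfigRow(value) {
  return value ? JSON.parse(value) : [];
}

const links = ['/home/dev/backend', "/home/dev/bob's-repo"];
const literal = toSqlValue(links);                      // safe inside '...'
const readBack = fromConfigRow(JSON.stringify(links));  // what -json returns
```

Doubling quotes only protects the string literal itself; it is why the apostrophe in a path like `bob's-repo` cannot terminate the `INSERT` statement early.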
package/brain/index.cjs CHANGED
@@ -32,7 +32,8 @@ function initBrain(cwd) {
  const dbPath = getBrainPath(cwd);
  const schemaPath = path.join(__dirname, 'schema.sql');
  const schema = fs.readFileSync(schemaPath, 'utf8');
- execFileSync('sqlite3', [dbPath], { input: schema, stdio: ['pipe', 'pipe', 'pipe'] });
+ // Enable WAL mode for corruption protection (safe against interrupted writes)
+ execFileSync('sqlite3', [dbPath], { input: 'PRAGMA journal_mode=WAL;\n' + schema, stdio: ['pipe', 'pipe', 'pipe'] });
  return dbPath;
 }
 
@@ -16,7 +16,7 @@ const path = require('path');
  const os = require('os');
 
  let input = '';
- const stdinTimeout = setTimeout(() => process.exit(0), 5000);
+ const stdinTimeout = setTimeout(() => process.exit(0), 10000); // consistent 10s timeout across all hooks
  process.stdin.setEncoding('utf8');
  process.stdin.on('data', chunk => input += chunk);
  process.stdin.on('end', () => {
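The WAL change above only touches database creation: the pragma is prepended to the schema text piped into the sqlite3 CLI, so every freshly created brain.db opens in write-ahead-log mode (readers no longer block the writer, and an interrupted write leaves the main file intact). The string construction, isolated; `withWal` is an illustrative name.

```javascript
// Sketch: prepend the WAL pragma so it executes before the schema DDL.
function withWal(schema) {
  return 'PRAGMA journal_mode=WAL;\n' + schema;
}

const input = withWal('CREATE TABLE IF NOT EXISTS nodes (name TEXT);');
// `input` is what the diff above pipes to sqlite3 via { input }
```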
package/mcp/server.cjs CHANGED
@@ -47,6 +47,41 @@ function run(sql) {
  } catch { return false; }
 }
 
+ // Query linked repos (cross-repo search)
+ function getLinkedPaths() {
+   try {
+     const rows = query("SELECT value FROM config WHERE key = 'linked_repos'");
+     if (rows.length && rows[0].value) return JSON.parse(rows[0].value);
+   } catch {}
+   return [];
+ }
+
+ function queryLinked(sql) {
+   // Query local brain first
+   const local = query(sql);
+
+   // Then query each linked repo's brain
+   const linked = getLinkedPaths();
+   const results = [...local];
+   for (const repoPath of linked) {
+     const linkedDb = path.join(repoPath, '.shipfast', 'brain.db');
+     if (!fs.existsSync(linkedDb)) continue;
+     try {
+       const r = safeRun('sqlite3', ['-json', linkedDb, sql], {
+         encoding: 'utf8', stdio: ['pipe', 'pipe', 'pipe']
+       }).trim();
+       if (r) {
+         const parsed = JSON.parse(r);
+         // Tag results with source repo
+         const repoName = path.basename(repoPath);
+         parsed.forEach(row => { row._repo = repoName; });
+         results.push(...parsed);
+       }
+     } catch {}
+   }
+   return results;
+ }
+
  function esc(s) {
    return s == null ? '' : String(s).replace(/'/g, "''");
  }
@@ -76,8 +111,24 @@ const TOOLS = {
    }
  },
 
+ brain_linked: {
+   description: 'Show linked repos and their brain.db status. Use shipfast link to connect repos for cross-repo search.',
+   inputSchema: { type: 'object', properties: {}, required: [] },
+   handler() {
+     const linked = getLinkedPaths();
+     if (!linked.length) return { linked: [], message: 'No repos linked. Use: shipfast link ../other-repo' };
+     return {
+       linked: linked.map(p => ({
+         path: p,
+         name: path.basename(p),
+         hasBrain: fs.existsSync(path.join(p, '.shipfast', 'brain.db'))
+       }))
+     };
+   }
+ },
+
  brain_search: {
-   description: 'Search the codebase knowledge graph for files, functions, types, or components by name.',
+   description: 'Search the codebase knowledge graph for files, functions, types, or components by name. Searches local + all linked repos.',
    inputSchema: {
      type: 'object',
      properties: {
@@ -88,7 +139,7 @@ },
    },
    handler({ query: q, kind }) {
      const kindFilter = kind ? `AND kind = '${esc(kind)}'` : '';
-     return query(
+     return queryLinked(
        `SELECT kind, name, file_path, signature, line_start FROM nodes ` +
        `WHERE (name LIKE '%${esc(q)}%' OR file_path LIKE '%${esc(q)}%') ${kindFilter} ` +
        `ORDER BY kind, name LIMIT 30`
@@ -415,7 +466,7 @@ function handleMessage(msg) {
    result: {
      protocolVersion: '2024-11-05',
      capabilities: { tools: {} },
-     serverInfo: { name: 'shipfast-brain', version: '0.5.0' }
+     serverInfo: { name: 'shipfast-brain', version: '1.0.0' }
    }
  });
 }
@@ -440,9 +491,14 @@ function handleMessage(msg) {
 
  try {
    const result = tool.handler(params.arguments || {});
+   let text = JSON.stringify(result, null, 2);
+   // Truncate large responses to prevent context flooding (50KB max)
+   if (text.length > 50000) {
+     text = text.slice(0, 50000) + '\n... [truncated — ' + text.length + ' chars total. Use more specific query.]';
+   }
    return send({
      jsonrpc: '2.0', id,
-     result: { content: [{ type: 'text', text: JSON.stringify(result, null, 2) }] }
+     result: { content: [{ type: 'text', text }] }
    });
  } catch (err) {
    return send({
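The truncation guard added in the last hunk can be isolated as a pure function; `capResponse` is an illustrative name for the inline logic above.

```javascript
// Sketch: cap MCP tool output at 50KB so one broad query cannot flood
// the model's context; the suffix tells the caller to narrow its query.
const LIMIT = 50000;
function capResponse(text) {
  if (text.length <= LIMIT) return text;
  return text.slice(0, LIMIT) +
    '\n... [truncated — ' + text.length + ' chars total. Use more specific query.]';
}
```

Note the cap applies to the pretty-printed JSON, so a truncated payload is no longer valid JSON; the trailing note is meant for the LLM reading it, not for a parser.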
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@shipfast-ai/shipfast",
-   "version": "1.0.0",
+   "version": "1.0.2",
    "description": "Autonomous context-engineered development system with SQLite brain. 5 agents, 14 commands, per-task fresh context, 70-90% fewer tokens.",
    "bin": {
      "shipfast": "bin/install.js"