@yemi33/squad 0.1.7 → 0.1.9

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -229,6 +229,23 @@ Agents need MCP tools to interact with your repo host (create PRs, post review c

  Manually refresh with `squad mcp-sync`.

+ ### Azure DevOps Users
+
+ For the best experience with ADO repos, install the [Azure CLI](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli) and use the [Azure DevOps MCP server](https://github.com/microsoft/azure-devops-mcp). This gives agents full access to PRs, work items, repos, and pipelines via MCP tools — no `gh` CLI needed.
+
+ ```bash
+ # Install Azure CLI
+ winget install Microsoft.AzureCLI # Windows
+ brew install azure-cli # macOS
+ curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash # Linux
+
+ # Login and set defaults
+ az login
+ az devops configure --defaults organization=https://dev.azure.com/YOUR_ORG project=YOUR_PROJECT
+ ```
+
+ Then add the ADO MCP server to your Claude Code settings (`~/.claude.json`). The engine will auto-sync it to all agents on next start.
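As a sketch of what that settings entry might look like: the `azure-devops` server key, the `npx` launch command, and the `@azure-devops/mcp` package name are assumptions drawn from the ADO MCP server's own README, not from this package, and `YOUR_ORG` is a placeholder (`~/.claude.json` is strict JSON, so the caveats live here rather than in comments):

```json
{
  "mcpServers": {
    "azure-devops": {
      "command": "npx",
      "args": ["-y", "@azure-devops/mcp", "YOUR_ORG"]
    }
  }
}
```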
+

  ## Work Items

  All work items use the shared `playbooks/work-item.md` template, which provides consistent branch naming, worktree workflow, PR creation steps, and status tracking.
@@ -24,23 +24,62 @@ Agent completes task
  ```
  Agent finishes task
  → writes notes/inbox/<agent>-<date>.md
- → engine checks inbox on each tick
- at 5+ files: consolidateInbox() runs
- items categorized (reviews, feedback, learnings, other)
- summary appended to notes.md
+ → engine checks inbox on next tick (~60s)
+ consolidateInbox() fires (threshold: 1 file)
+ spawns Claude Haiku for LLM-powered summarization
+ Haiku reads all inbox notes + existing notes.md
+ → produces deduplicated, categorized digest
+ → digest appended to notes.md
  → originals moved to notes/archive/
  → notes.md injected into every future agent prompt
  ```

- ### Smart Consolidation
+ ### LLM-Powered Consolidation (Primary)

- The engine doesn't just dump files it categorizes them:
- - **Reviews** — files containing "review" or PR references
- - **Feedback** — review feedback files for authors
- - **Learnings** — build summaries, explorations, implementation notes
- - **Other** — everything else
+ The engine spawns Claude Haiku to read all inbox notes and produce a smart digest. Haiku is chosen for speed (~10s) and low cost while being capable enough for summarization.

- Each category gets a header with item count and one-line summaries.
+ **What Haiku does:**
+ - Reads full content of every inbox note (up to 8KB each)
+ - Cross-references against existing notes.md to avoid repeating known knowledge
+ - Extracts only actionable insights: patterns, conventions, gotchas, build results, architectural decisions
+ - Deduplicates — if multiple agents report the same finding, merges into one entry
+ - Groups by category: Patterns & Conventions, Build & Test Results, PR Review Findings, Bugs & Gotchas, Architecture Notes, Action Items
+ - Attributes each insight to the source agent(s)
+ - Writes a descriptive title summarizing what was learned
+
+ **Example output:**
+ ```markdown
+ ### 2026-03-13: PR-4964594 Build & Lint Analysis
+ **By:** Engine (LLM-consolidated)
+
+ #### Build & Test Results
+ - **Build and tests passing** for progression UI feature _(rebecca)_
+
+ #### Bugs & Gotchas
+ - **Barrel file violation**: index.ts under src/features/progression/ violates no-barrel-files rule — use concrete paths _(rebecca)_
+ - **11 lint errors**: unnecessary conditionals and missing curly braces in progressionAtoms.ts and ProgressionCard.tsx _(rebecca)_
+
+ #### Patterns & Conventions
+ - **Jotai for local state**: appropriate for synchronous UI state in Bebop features _(rebecca)_
+
+ _Processed 1 note, 4 insights extracted, 1 duplicate removed._
+ ```
+
+ **Async execution:** The LLM call doesn't block the engine tick loop. A `_consolidationInFlight` flag prevents concurrent runs.
+
+ **Timeout:** 3 minutes max. If Haiku doesn't respond, the engine kills the process and falls back to regex.
+
+ ### Regex Fallback Consolidation
+
+ If the LLM call fails (timeout, error, malformed output), the engine falls back to pattern-matching extraction:
+
+ - **Numbered items**: `1. **Bold key**: explanation` — captured via regex
+ - **Bullet items**: `- **Bold key**: explanation` — captured via regex
+ - **Important lines**: standalone sentences containing must/never/always/convention/pattern/gotcha
+ - **Deduplication**: fingerprint-based (lowercased, stripped, 70% word overlap threshold)
+ - **Cross-reference**: checks against existing notes.md to skip known insights
+
+ Fallback entries are labeled `**By:** Engine (regex fallback)` so you can tell which method was used.
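The fingerprint-based deduplication described above (lowercased, stripped to alphanumerics, 70% word overlap) can be sketched in a few lines of Node-style JavaScript. This is an illustrative reimplementation, not the engine's exact code; the function names are mine:

```javascript
// Normalize an insight into a fingerprint: lowercase, strip non-alphanumerics,
// collapse whitespace, cap at 80 chars (mirrors the fallback described above).
function fingerprint(text) {
  return text.toLowerCase().replace(/[^a-z0-9 ]/g, '').replace(/\s+/g, ' ').trim().slice(0, 80);
}

// Two fingerprints are duplicates when shared words exceed 70% of the larger word set.
function isDuplicate(fpA, fpB) {
  const a = new Set(fpA.split(' '));
  const b = new Set(fpB.split(' '));
  const shared = [...a].filter(w => b.has(w)).length;
  return shared / Math.max(a.size, b.size) > 0.7;
}

// Keep only insights whose fingerprint doesn't overlap an already-kept one.
function dedupe(insights) {
  const kept = [];
  for (const text of insights) {
    const fp = fingerprint(text);
    if (!kept.some(k => isDuplicate(k.fp, fp))) kept.push({ fp, text });
  }
  return kept.map(k => k.text);
}
```

Because the comparison is word-set overlap rather than exact match, near-identical reports from different agents (extra punctuation, an added article) collapse into one entry.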

  ### Auto-Pruning

@@ -284,7 +323,9 @@ When a git merge or rebase produces conflicts in yarn.lock.

  | Setting | Default | What it controls |
  |---------|---------|-----------------|
- | `engine.inboxConsolidateThreshold` | 5 | Files needed before consolidation triggers |
+ | `engine.inboxConsolidateThreshold` | 1 | Files needed before consolidation triggers |
+ | Consolidation model | Haiku | LLM used for summarization (fast, cheap) |
+ | Consolidation timeout | 3 min | Max time for LLM call before fallback |
  | notes.md max size | 50KB | Auto-prunes old sections above this |
  | Agent history entries | 20 | Max entries kept in history.md |
  | Metrics file | `engine/metrics.json` | Auto-created on first completion |
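The threshold is read in engine.js as `config.engine?.inboxConsolidateThreshold`, so restoring batched consolidation is a one-line config change. A sketch, assuming the squad config is JSON with a top-level `engine` block (the config filename itself is not shown in this diff):

```json
{
  "engine": {
    "inboxConsolidateThreshold": 5
  }
}
```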
package/engine.js CHANGED
@@ -1966,10 +1966,14 @@ function updateMetrics(agentId, dispatchItem, result) {

  // ─── Inbox Consolidation ────────────────────────────────────────────────────

+ // Track in-flight LLM consolidation to prevent concurrent runs
+ let _consolidationInFlight = false;
+
  function consolidateInbox(config) {
- const threshold = config.engine?.inboxConsolidateThreshold || 5;
+ const threshold = config.engine?.inboxConsolidateThreshold || 1;
  const files = getInboxFiles();
  if (files.length < threshold) return;
+ if (_consolidationInFlight) return;

  log('info', `Consolidating ${files.length} inbox items into notes.md`);

@@ -1978,91 +1982,286 @@ function consolidateInbox(config) {
  content: safeRead(path.join(INBOX_DIR, f))
  }));

- // Categorize by type based on filename patterns
- const categories = {
- reviews: [],
- feedback: [],
- learnings: [],
- other: []
- };
+ const existingNotes = getNotes() || '';
+ consolidateWithLLM(items, existingNotes, files, config);
+ }

- for (const item of items) {
- const name = item.name.toLowerCase();
- if (name.includes('review') || name.includes('pr-') || name.includes('pr4')) {
- categories.reviews.push(item);
- } else if (name.includes('feedback')) {
- categories.feedback.push(item);
- } else if (name.includes('build') || name.includes('explore') || name.includes('m0') || name.includes('m1')) {
- categories.learnings.push(item);
- } else {
- categories.other.push(item);
+ // ─── LLM-Powered Consolidation ──────────────────────────────────────────────
+ // Spawns Claude (Haiku) to read all inbox notes + existing notes.md and produce
+ // a smart, deduplicated, categorized digest. Falls back to regex if LLM fails.
+
+ function consolidateWithLLM(items, existingNotes, files, config) {
+ _consolidationInFlight = true;
+
+ // Build the prompt with all inbox notes
+ const notesBlock = items.map(item =>
+ `<note file="${item.name}">\n${(item.content || '').slice(0, 8000)}\n</note>`
+ ).join('\n\n');
+
+ // Include tail of existing notes.md for dedup context
+ const existingTail = existingNotes.length > 2000
+ ? '...\n' + existingNotes.slice(-2000)
+ : existingNotes;
+
+ const prompt = `You are a knowledge manager for a software engineering squad. Your job is to consolidate agent notes into team memory.
+
+ ## Inbox Notes to Process
+
+ ${notesBlock}
+
+ ## Existing Team Notes (for deduplication — do NOT repeat what's already here)
+
+ <existing_notes>
+ ${existingTail}
+ </existing_notes>
+
+ ## Instructions
+
+ Read every inbox note carefully. Produce a consolidated digest following these rules:
+
+ 1. **Extract actionable knowledge only**: patterns, conventions, gotchas, warnings, build results, architectural decisions, review findings. Skip boilerplate (dates, filenames, task IDs).
+
+ 2. **Deduplicate aggressively**: If an insight already exists in the existing team notes, skip it entirely. If multiple agents report the same finding, merge into one entry and credit all agents.
+
+ 3. **Write concisely**: Each insight should be 1-2 sentences max. Use **bold key** at the start of each bullet.
+
+ 4. **Group by category**: Use these exact headers (only include categories that have content):
+ - \`#### Patterns & Conventions\`
+ - \`#### Build & Test Results\`
+ - \`#### PR Review Findings\`
+ - \`#### Bugs & Gotchas\`
+ - \`#### Architecture Notes\`
+ - \`#### Action Items\`
+
+ 5. **Attribute sources**: End each bullet with _(agentName)_ or _(agent1, agent2)_ if multiple.
+
+ 6. **Write a descriptive title**: First line must be a single-line title summarizing what was learned. Do NOT use generic text like "Consolidated from N items".
+
+ ## Output Format
+
+ Respond with ONLY the markdown below — no preamble, no explanation, no code fences:
+
+ ### YYYY-MM-DD: <descriptive title>
+ **By:** Engine (LLM-consolidated)
+
+ #### Category Name
+ - **Bold key**: insight text _(agent)_
+
+ _Processed N notes, M insights extracted, K duplicates removed._
+
+ Use today's date: ${dateStamp()}`;
+
+ // Write prompt to temp file
+ const promptPath = path.join(ENGINE_DIR, 'consolidate-prompt.md');
+ safeWrite(promptPath, prompt);
+
+ const sysPrompt = 'You are a concise knowledge manager. Output only markdown. No preamble. No code fences around your output.';
+ const sysPromptPath = path.join(ENGINE_DIR, 'consolidate-sysprompt.md');
+ safeWrite(sysPromptPath, sysPrompt);
+
+ // Build args — use Haiku for speed/cost, single turn, print mode
+ const spawnScript = path.join(ENGINE_DIR, 'spawn-agent.js');
+ const args = [
+ '--output-format', 'text',
+ '--max-turns', '1',
+ '--model', 'haiku',
+ '--permission-mode', 'bypassPermissions',
+ '--verbose',
+ ];
+
+ const childEnv = { ...process.env };
+ for (const key of Object.keys(childEnv)) {
+ if (key === 'CLAUDECODE' || key.startsWith('CLAUDE_CODE') || key.startsWith('CLAUDECODE_')) {
+ delete childEnv[key];
  }
  }

- let entry = `\n\n---\n\n### ${dateStamp()}: Consolidated from ${items.length} inbox items\n`;
- entry += `**By:** Engine (auto-consolidation)\n\n`;
+ log('info', 'Spawning Haiku for LLM consolidation...');
+
+ const proc = spawn(process.execPath, [spawnScript, promptPath, sysPromptPath, ...args], {
+ cwd: SQUAD_DIR,
+ stdio: ['pipe', 'pipe', 'pipe'],
+ env: childEnv
+ });
+
+ let stdout = '';
+ let stderr = '';
+ proc.stdout.on('data', d => { stdout += d.toString(); if (stdout.length > 100000) stdout = stdout.slice(-50000); });
+ proc.stderr.on('data', d => { stderr += d.toString(); if (stderr.length > 50000) stderr = stderr.slice(-25000); });
+
+ // Timeout: 3 minutes max
+ const timeout = setTimeout(() => {
+ log('warn', 'LLM consolidation timed out after 3m — killing and falling back to regex');
+ try { proc.kill('SIGTERM'); } catch {}
+ }, 180000);
+
+ proc.on('close', (code) => {
+ clearTimeout(timeout);
+ _consolidationInFlight = false;
+ try { fs.unlinkSync(promptPath); } catch {}
+ try { fs.unlinkSync(sysPromptPath); } catch {}
+
+ if (code === 0 && stdout.trim().length > 50) {
+ let digest = stdout.trim();
+ // Strip any code fences the model might add
+ digest = digest.replace(/^\`\`\`\w*\n?/gm, '').replace(/\n?\`\`\`$/gm, '').trim();
+
+ // Validate: must contain ### (our expected format)
+ if (!digest.startsWith('### ')) {
+ const sectionIdx = digest.indexOf('### ');
+ if (sectionIdx >= 0) {
+ digest = digest.slice(sectionIdx);
+ } else {
+ log('warn', 'LLM consolidation output missing expected format — falling back to regex');
+ consolidateWithRegex(items, files);
+ return;
+ }
+ }
+
+ const entry = '\n\n---\n\n' + digest;
+ const current = getNotes();
+ let newContent = current + entry;
+
+ // Prune if too large
+ if (newContent.length > 50000) {
+ const sections = newContent.split('\n---\n\n### ');
+ if (sections.length > 10) {
+ const header = sections[0];
+ const recent = sections.slice(-8);
+ newContent = header + '\n---\n\n### ' + recent.join('\n---\n\n### ');
+ log('info', `Pruned notes.md: removed ${sections.length - 9} old sections`);
+ }
+ }

- if (categories.reviews.length > 0) {
- entry += `#### Review Findings (${categories.reviews.length})\n`;
- for (const item of categories.reviews) {
- // Extract first meaningful line as summary
- const firstLine = item.content.split('\n').find(l => l.trim() && !l.startsWith('#')) || item.name;
- entry += `- **${item.name}**: ${firstLine.trim().slice(0, 150)}\n`;
+ safeWrite(NOTES_PATH, newContent);
+ archiveInboxFiles(files);
+ log('info', `LLM consolidation complete: ${files.length} notes processed by Haiku`);
+ } else {
+ log('warn', `LLM consolidation failed (code=${code}) falling back to regex`);
+ if (stderr) log('debug', `LLM stderr: ${stderr.slice(0, 500)}`);
+ consolidateWithRegex(items, files);
  }
- entry += '\n';
- }
+ });
+
+ proc.on('error', (err) => {
+ clearTimeout(timeout);
+ _consolidationInFlight = false;
+ log('warn', `LLM consolidation spawn error: ${err.message} — falling back to regex`);
+ try { fs.unlinkSync(promptPath); } catch {}
+ try { fs.unlinkSync(sysPromptPath); } catch {}
+ consolidateWithRegex(items, files);
+ });
+ }
+
+ // ─── Regex Fallback Consolidation ────────────────────────────────────────────

- if (categories.feedback.length > 0) {
- entry += `#### Review Feedback (${categories.feedback.length})\n`;
- for (const item of categories.feedback) {
- const firstLine = item.content.split('\n').find(l => l.trim() && !l.startsWith('#')) || item.name;
- entry += `- **${item.name}**: ${firstLine.trim().slice(0, 150)}\n`;
+ function consolidateWithRegex(items, files) {
+ const allInsights = [];
+ for (const item of items) {
+ const content = item.content || '';
+ const agentMatch = item.name.match(/^(\w+)-/);
+ const agent = agentMatch ? agentMatch[1] : 'unknown';
+ const lines = content.split('\n');
+ const titleLine = lines.find(l => /^#\s/.test(l));
+ const noteTitle = titleLine ? titleLine.replace(/^#+\s*/, '').trim() : item.name;
+
+ const nameLower = item.name.toLowerCase();
+ const contentLower = content.toLowerCase();
+ let category = 'learnings';
+ if (nameLower.includes('review') || nameLower.includes('pr-') || nameLower.includes('pr4')) category = 'reviews';
+ else if (nameLower.includes('feedback')) category = 'feedback';
+ else if (nameLower.includes('build') || nameLower.includes('bt-')) category = 'build-results';
+ else if (nameLower.includes('explore')) category = 'exploration';
+ else if (contentLower.includes('bug') || contentLower.includes('fix')) category = 'bugs-fixes';
+
+ const numberedPattern = /^\d+\.\s+\*\*(.+?)\*\*\s*[\u2014\u2013:-]\s*(.+)/;
+ const bulletPattern = /^[-*]\s+\*\*(.+?)\*\*[:\s]+(.+)/;
+ const sectionPattern = /^###+\s+(.+)/;
+ const importantKeywords = /\b(must|never|always|convention|pattern|gotcha|warning|important|rule|tip|note that)\b/i;
+
+ for (const line of lines) {
+ const trimmed = line.trim();
+ if (!trimmed || sectionPattern.test(trimmed)) continue;
+ let insight = null;
+ const numMatch = trimmed.match(numberedPattern);
+ if (numMatch) insight = `**${numMatch[1].trim()}**: ${numMatch[2].trim()}`;
+ if (!insight) {
+ const bulMatch = trimmed.match(bulletPattern);
+ if (bulMatch) insight = `**${bulMatch[1].trim()}**: ${bulMatch[2].trim()}`;
+ }
+ if (!insight && importantKeywords.test(trimmed) && !trimmed.startsWith('#') && trimmed.length > 30 && trimmed.length < 500) {
+ insight = trimmed;
+ }
+ if (insight) {
+ if (insight.length > 300) insight = insight.slice(0, 297) + '...';
+ const fp = insight.toLowerCase().replace(/[^a-z0-9 ]/g, '').replace(/\s+/g, ' ').trim().slice(0, 80);
+ allInsights.push({ text: insight, source: item.name, noteTitle, category, agent, fingerprint: fp });
+ }
+ }
+ if (!allInsights.some(i => i.source === item.name)) {
+ allInsights.push({ text: `See full note: ${noteTitle}`, source: item.name, noteTitle, category, agent,
+ fingerprint: noteTitle.toLowerCase().replace(/[^a-z0-9 ]/g, '').replace(/\s+/g, ' ').trim().slice(0, 80) });
  }
- entry += '\n';
  }

- if (categories.learnings.length > 0) {
- entry += `#### Learnings & Conventions (${categories.learnings.length})\n`;
- for (const item of categories.learnings) {
- const firstLine = item.content.split('\n').find(l => l.trim() && !l.startsWith('#')) || item.name;
- entry += `- **${item.name}**: ${firstLine.trim().slice(0, 150)}\n`;
+ // Dedup
+ const existingNotes = (getNotes() || '').toLowerCase();
+ const seen = new Map();
+ const deduped = [];
+ for (const insight of allInsights) {
+ const fpWords = insight.fingerprint.split(' ').filter(w => w.length > 4).slice(0, 5);
+ if (fpWords.length >= 3 && fpWords.every(w => existingNotes.includes(w))) continue;
+ const existing = seen.get(insight.fingerprint);
+ if (existing) { if (!existing.sources.includes(insight.agent)) existing.sources.push(insight.agent); continue; }
+ let isDup = false;
+ for (const [fp, entry] of seen) {
+ const a = new Set(fp.split(' ')), b = new Set(insight.fingerprint.split(' '));
+ if ([...a].filter(w => b.has(w)).length / Math.max(a.size, b.size) > 0.7) {
+ if (!entry.sources.includes(insight.agent)) entry.sources.push(insight.agent); isDup = true; break;
+ }
  }
- entry += '\n';
+ if (isDup) continue;
+ seen.set(insight.fingerprint, { insight, sources: [insight.agent] });
+ deduped.push({ ...insight, sources: seen.get(insight.fingerprint).sources });
  }

- if (categories.other.length > 0) {
- entry += `#### Other (${categories.other.length})\n`;
- for (const item of categories.other) {
- const firstLine = item.content.split('\n').find(l => l.trim() && !l.startsWith('#')) || item.name;
- entry += `- **${item.name}**: ${firstLine.trim().slice(0, 150)}\n`;
+ const agents = [...new Set(items.map(i => { const m = i.name.match(/^(\w+)-/); return m ? m[1].charAt(0).toUpperCase() + m[1].slice(1) : 'Unknown'; }))];
+ const catLabels = { reviews: 'PR Review Findings', feedback: 'Review Feedback', 'build-results': 'Build & Test Results', exploration: 'Codebase Exploration', 'bugs-fixes': 'Bugs & Gotchas', learnings: 'Patterns & Conventions' };
+ const topicHints = [...new Set(deduped.map(i => i.category))].map(c => ({ reviews: 'PR reviews', feedback: 'review feedback', 'build-results': 'build/test results', exploration: 'codebase exploration', 'bugs-fixes': 'bug findings' }[c] || 'learnings'));
+ const title = `${agents.join(', ')}: ${topicHints.join(', ')} (${deduped.length} insights from ${items.length} notes)`;
+
+ const grouped = {};
+ for (const item of deduped) { if (!grouped[item.category]) grouped[item.category] = []; grouped[item.category].push(item); }
+
+ let entry = `\n\n---\n\n### ${dateStamp()}: ${title}\n`;
+ entry += '**By:** Engine (regex fallback)\n\n';
+ for (const [cat, catItems] of Object.entries(grouped)) {
+ entry += `#### ${catLabels[cat] || cat} (${catItems.length})\n`;
+ for (const item of catItems) {
+ const src = item.sources.length > 1 ? ` _(${item.sources.join(', ')})_` : ` _(${item.agent})_`;
+ entry += `- ${item.text}${src}\n`;
  }
  entry += '\n';
  }
+ const dupCount = allInsights.length - deduped.length;
+ if (dupCount > 0) entry += `_Deduplication: ${dupCount} duplicate(s) removed._\n`;

  const current = getNotes();
-
- // Prune old consolidations if notes.md is getting too large (>50KB)
  let newContent = current + entry;
  if (newContent.length > 50000) {
  const sections = newContent.split('\n---\n\n### ');
- if (sections.length > 10) {
- // Keep header + last 8 consolidation sections
- const header = sections[0];
- const recent = sections.slice(-8);
- newContent = header + '\n---\n\n### ' + recent.join('\n---\n\n### ');
- log('info', `Pruned notes.md: removed ${sections.length - 9} old sections`);
- }
+ if (sections.length > 10) { newContent = sections[0] + '\n---\n\n### ' + sections.slice(-8).join('\n---\n\n### '); }
  }
-
  safeWrite(NOTES_PATH, newContent);
+ archiveInboxFiles(files);
+ log('info', `Regex fallback: consolidated ${files.length} notes → ${deduped.length} insights into notes.md`);
+ }

- // Archive
+ function archiveInboxFiles(files) {
  if (!fs.existsSync(ARCHIVE_DIR)) fs.mkdirSync(ARCHIVE_DIR, { recursive: true });
  for (const f of files) {
  try { fs.renameSync(path.join(INBOX_DIR, f), path.join(ARCHIVE_DIR, `${dateStamp()}-${f}`)); } catch {}
  }
-
- log('info', `Consolidated ${files.length} items into notes.md (${Object.entries(categories).map(([k,v]) => `${v.length} ${k}`).join(', ')})`);
  }

  // ─── State Snapshot ─────────────────────────────────────────────────────────
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@yemi33/squad",
- "version": "0.1.7",
+ "version": "0.1.9",
  "description": "Multi-agent AI dev team that runs from ~/.squad/ — five autonomous agents share a single engine, dashboard, and knowledge base",
  "bin": {
  "squad": "bin/squad.js"