compound-agent 1.7.3 → 1.7.4

This diff shows the changes between publicly released versions of the package as they appear in its public registry, and is provided for informational purposes only.
package/dist/cli.js CHANGED
@@ -3901,6 +3901,226 @@ For each pair, classify as one of:
3901
3901
  - **Complementary**: Related but distinct \u2014 propose related links, keep both
3902
3902
 
3903
3903
  Output a structured action plan with specific \`npx ca\` commands to execute.
3904
+ `,
3905
+ "lint-classifier.md": `---
3906
+ name: Lint Classifier
3907
+ description: Classifies compound phase insights as lintable or semantic-only, creating beads tasks for mechanically-enforceable patterns
3908
+ model: sonnet
3909
+ ---
3910
+
3911
+ # Lint Classifier
3912
+
3913
+ Classify each insight from the compound phase as LINTABLE, PARTIAL, or NOT_LINTABLE. For LINTABLE insights with HIGH confidence, create beads tasks describing lint rules to implement.
3914
+
3915
+ ## Input
3916
+
3917
+ You receive insights via SendMessage from the compound lead. Each message contains:
3918
+ - \`id\`: The lesson ID (e.g., \`Laa504c6880ba5d41\`)
3919
+ - \`insight\`: The verbatim insight text
3920
+ - \`severity\`: high, medium, or low
3921
+
3922
+ If no insights are provided via message, read the last 10 entries from \`.claude/lessons/index.jsonl\` (each line is a JSON object; parse and reverse to get newest-first, take up to 10).
3923
+
3924
+ ## Classification Definition
3925
+
3926
+ An insight is **LINTABLE** if and only if:
3927
+ - It describes a property determinable from source code alone (text, tokens, AST nodes, import paths, file names)
3928
+ - The check would return a deterministic yes/no without running the program
3929
+ - The bad pattern can be expressed as a regex, AST selector, or import constraint
3930
+ - It does NOT require: runtime state, human intent, cross-session history, deployment processes, or semantic meaning
3931
+
3932
+ ## Classes
3933
+
3934
+ | Class | Meaning |
3935
+ |-------|---------|
3936
+ | LINTABLE | Check can be written as regex / AST / import rule |
3937
+ | PARTIAL | A subset is lintable; the rest is process/semantic |
3938
+ | NOT_LINTABLE | Requires runtime state, process knowledge, or human judgment |
3939
+
3940
+ ## Confidence Levels
3941
+
3942
+ | Level | Definition |
3943
+ |-------|-----------|
3944
+ | HIGH | Named code construct + prohibitive imperative. Check is unambiguous. |
3945
+ | MEDIUM | Pattern is conditional on context, or needs AST analysis. |
3946
+ | LOW | Genuinely ambiguous. Flag for human review. |
3947
+
3948
+ ## Few-Shot Examples
3949
+
3950
+ **Example 1** -- LINTABLE, HIGH
3951
+ > "Use isModelAvailable() instead of isModelUsable() in hot paths"
3952
+ - Reasoning: Named function substitution. Can detect \`isModelUsable(\` via regex in \`src/**/*.ts\`.
3953
+ - VERDICT: LINTABLE | CONFIDENCE: HIGH | CHECK_TYPE: file-pattern
3954
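Example 1's regex check could be sketched like this (a minimal illustration of a file-pattern rule, not the package's actual engine):

```javascript
// Flag every occurrence of `isModelUsable(` in a source string,
// returning 1-based line numbers of the violations.
function findViolations(source) {
  const pattern = /\bisModelUsable\(/g;
  const hits = [];
  let m;
  while ((m = pattern.exec(source)) !== null) {
    hits.push(source.slice(0, m.index).split("\n").length);
  }
  return hits;
}
```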
+
3955
+ **Example 2** -- LINTABLE, HIGH
3956
+ > "Never use if(condition){expect()} in tests"
3957
+ - Reasoning: Structural pattern. Detect IfStatement containing expect() call via AST visitor.
3958
+ - VERDICT: LINTABLE | CONFIDENCE: HIGH | CHECK_TYPE: ast
3959
+
3960
+ **Example 3** -- PARTIAL, HIGH
3961
+ > "When adding new hook types, update ALL user-facing output to include the complete set"
3962
+ - Reasoning: Can detect hardcoded hook-type arrays in known files (lintable subset). Cannot verify "all" surfaces were updated (process). The detectable part is unambiguous.
3963
+ - VERDICT: PARTIAL | CONFIDENCE: HIGH | CHECK_TYPE: file-pattern
3964
+
3965
+ **Example 4** -- PARTIAL, MEDIUM
3966
+ > "Update ALL output surfaces when adding hook types"
3967
+ - Reasoning: A lintable subset exists (hardcoded hook-type lists), but the insight names no specific files, so which output surfaces to check is context-dependent; verifying "all" surfaces were updated remains process.
3968
+ - VERDICT: PARTIAL | CONFIDENCE: MEDIUM | CHECK_TYPE: file-pattern
3969
+
3970
+ **Example 5** -- NOT_LINTABLE, HIGH
3971
+ > "Inlining phase instructions causes context drift under compaction"
3972
+ - Reasoning: Describes a runtime/architectural effect. No code pattern to match.
3973
+ - VERDICT: NOT_LINTABLE | CONFIDENCE: HIGH | CHECK_TYPE: N/A
3974
+
3975
+ **Example 6** -- NOT_LINTABLE, HIGH
3976
+ > "Always verify git diff after subagent completes"
3977
+ - Reasoning: Human process step. Cannot be detected from source code.
3978
+ - VERDICT: NOT_LINTABLE | CONFIDENCE: HIGH | CHECK_TYPE: N/A
3979
+
3980
+ **Example 7** -- LINTABLE, LOW
3981
+ > "Avoid complex nested callbacks in async code"
3982
+ - Reasoning: "Complex" and "nested" are subjective. Could write a nesting-depth check but threshold is arbitrary. Genuinely ambiguous.
3983
+ - VERDICT: LINTABLE | CONFIDENCE: LOW | CHECK_TYPE: ast
3984
+
3985
+ ## Classification Procedure
3986
+
3987
+ For each insight, reason step by step then output:
3988
+
3989
+ \`\`\`
3990
+ VERDICT: [LINTABLE|PARTIAL|NOT_LINTABLE]
3991
+ CONFIDENCE: [HIGH|MEDIUM|LOW]
3992
+ CHECK_TYPE: [file-pattern|file-size|script|ast|N/A]
3993
+ RULE_CLASS: [A|B|N/A]
3994
+ RATIONALE: One sentence summary
3995
+ \`\`\`
3996
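A small helper shape for emitting the output block above (illustrative only; the classifier itself produces this text directly):

```javascript
// Serialize a classification result into the five-line verdict block.
function formatVerdict({ verdict, confidence, checkType, ruleClass, rationale }) {
  return [
    `VERDICT: ${verdict}`,
    `CONFIDENCE: ${confidence}`,
    `CHECK_TYPE: ${checkType}`,
    `RULE_CLASS: ${ruleClass}`,
    `RATIONALE: ${rationale}`,
  ].join("\n");
}
```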
+
3997
+ ### Rule Classes
3998
+
3999
+ - **Class A** (native \`rules.json\`): The check can be expressed as a regex/glob (\`file-pattern\`), line count (\`file-size\`), or shell command (\`script\`). These map directly to the compound-agent rule engine. No external linter needed.
4000
+ - **Class B** (external linter): The check requires AST analysis or linter-specific features. Targets the user's detected linter (ESLint, Ruff, ast-grep, etc.).
4001
+
4002
+ CHECK_TYPE must be one of: \`file-pattern\`, \`file-size\`, \`script\` (Class A -- maps to compound-agent rule engine), \`ast\` (Class B -- requires external linter), or \`N/A\` (not lintable).
4003
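A Class A rule might serialize to something like the following. This is a hypothetical shape assembled from the Detection Spec fields used later in this document (glob, pattern, mustMatch); the real `rules.json` schema may differ:

```javascript
// Hypothetical native-engine rule for the Example 1 insight.
const rule = {
  id: "no-is-model-usable",
  checkType: "file-pattern",
  glob: "src/**/*.ts",
  pattern: "\\bisModelUsable\\(", // regex source kept as a string
  mustMatch: false, // a match is a violation
  severity: "error",
  message: "Use isModelAvailable() instead of isModelUsable() in hot paths.",
};
```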
+
4004
+ ## Routing Rules
4005
+
4006
+ | Classification | Action |
4007
+ |---------------|--------|
4008
+ | LINTABLE + HIGH | Create beads task under "Linting Improvement" epic |
4009
+ | LINTABLE + MEDIUM or PARTIAL + HIGH | Create a follow-up beads task noting the partial lintability for manual triage |
4010
+ | NOT_LINTABLE or LOW confidence | No lint task; list LOW-confidence items in the final summary for human review (lesson already stored by compound flow) |
4011
+
4012
+ For LINTABLE + MEDIUM or PARTIAL + HIGH, create a triage task:
4013
+ \`\`\`bash
4014
+ bd create --title="Triage: potentially-lintable lesson <lesson-id>" --type=task --priority=3 --description="Lesson <lesson-id>: <insight>\\n\\nVerdict: <LINTABLE/PARTIAL> | Confidence: <MEDIUM/HIGH>\\nReason lintability is uncertain: <rationale>"
4015
+ \`\`\`
4016
+
4017
+ **Critical**: ALL insights are already stored as lessons by the compound pipeline (step 8). Lint task creation is purely additive. Do not re-store or modify existing lessons.
4018
+
4019
+ ## Linter Detection
4020
+
4021
+ Before creating Class B tasks, detect the project's linter by checking the repo root for config files (first match wins):
4022
+
4023
+ 1. \`eslint.config.*\` / \`.eslintrc.*\` -> eslint
4024
+ 2. \`ruff.toml\` / \`.ruff.toml\` / \`pyproject.toml\` with \`[tool.ruff]\` -> ruff
4025
+ 3. \`clippy.toml\` / \`.clippy.toml\` -> clippy
4026
+ 4. \`.golangci.yml\` / \`.golangci.yaml\` -> golangci-lint
4027
+ 5. \`sgconfig.yml\` -> ast-grep
4028
+ 6. \`.semgrep.yml\` / \`.semgrep.yaml\` -> semgrep
4029
+
4030
+ **When no linter is detected**: For Class A rules, proceed normally (they use the native rule engine). For Class B rules, set target to \`ast-grep\` or \`semgrep\` as universal YAML-based fallbacks and note that no project linter was detected.
4031
+
4032
+ ## Task Creation
4033
+
4034
+ For each LINTABLE + HIGH insight, run:
4035
+
4036
+ \`\`\`bash
4037
+ bd create --title="Lint Rule: <rule-id> (from <lesson-id>)" --type=task --priority=<N> --description="<structured markdown>"
4038
+ \`\`\`
4039
+
4040
+ ### Description structure (Class A -- native rule engine):
4041
+
4042
+ \`\`\`markdown
4043
+ ## Source Lesson
4044
+ ID: <lesson-id>
4045
+ Insight: <verbatim insight text>
4046
+
4047
+ ## Rule Identity
4048
+ - ID: <kebab-case-rule-id>
4049
+ - Class: A (native rule engine)
4050
+ - Severity: <error|warning|info>
4051
+ - Scope: <glob pattern for affected files>
4052
+
4053
+ ## Detection Spec
4054
+ Check type: <file-pattern|file-size|script>
4055
+ Glob: <glob pattern>
4056
+ Pattern: <regex> (for file-pattern)
4057
+ mustMatch: <true|false>
4058
+
4059
+ ## Violation Message
4060
+ <What the developer sees. Under 3 lines. Imperative mood.>
4061
+ <Must answer: what violated, how to fix.>
4062
+
4063
+ ## Remediation
4064
+ <Concrete fix instruction.>
4065
+
4066
+ ## Code Examples
4067
+ Bad:
4068
+ <code that violates>
4069
+
4070
+ Good:
4071
+ <code that follows the rule>
4072
+ \`\`\`
4073
+
4074
+ ### Description structure (Class B -- external linter):
4075
+
4076
+ \`\`\`markdown
4077
+ ## Source Lesson
4078
+ ID: <lesson-id>
4079
+ Insight: <verbatim insight text>
4080
+
4081
+ ## Rule Identity
4082
+ - ID: <kebab-case-rule-id>
4083
+ - Class: B (external linter)
4084
+ - Target: <detected linter>
4085
+ - Severity: <error|warning|info>
4086
+ - Scope: <glob pattern for affected files>
4087
+
4088
+ ## Detection Spec
4089
+ <Natural language description of AST pattern>
4090
+ <Suggested selector or rule YAML -- mark "suggested, verify before use">
4091
+
4092
+ ## Violation Message
4093
+ <What the developer sees. Under 3 lines. Imperative mood.>
4094
+ <Must answer: what violated, how to fix.>
4095
+
4096
+ ## Remediation
4097
+ <Concrete fix instruction.>
4098
+
4099
+ ## Code Examples
4100
+ Bad:
4101
+ <code that violates>
4102
+
4103
+ Good:
4104
+ <code that follows the rule>
4105
+ \`\`\`
4106
+
4107
+ ### Priority mapping: lesson severity high->1, medium->2, low->3 (default: 2)
4108
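The mapping above as a lookup (illustrative helper, not a package API):

```javascript
// Map lesson severity to beads task priority; unknown severities default to 2.
function priorityFor(severity) {
  return { high: 1, medium: 2, low: 3 }[severity] ?? 2;
}
```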
+
4109
+ ## Epic Management
4110
+
4111
+ 1. Check if a "Linting Improvement" epic exists: \`bd search "Linting Improvement"\`
4112
+ 2. If not found, create: \`bd create --title="Linting Improvement" --type=epic --priority=2 --description="Epic for lint rules graduated from compound phase lessons."\`
4113
+ 3. Link tasks to the epic: \`bd dep add <epic-id> <task-id>\` (epic blocks the task)
4114
+
4115
+ **Important**: Use \`bd dep add <epic-id> <task-id>\` (epic first), NOT \`<task-id> <epic-id>\`. The epic blocks the tasks, not the other way around.
4116
+
4117
+ ## Constraints
4118
+
4119
+ - AST selectors are suggestions only -- mark as "suggested, verify before use"
4120
+ - Never skip classification for any insight
4121
+ - Do not modify the existing lesson capture flow
4122
+ - CHECK_TYPE must map to an actual engine type (\`file-pattern\`, \`file-size\`, \`script\`) for Class A, or \`ast\` for Class B
4123
+ - Report a summary at the end: N insights classified, X lintable (Y Class A, Z Class B), T tasks created
3904
4124
  `
3905
4125
  };
3906
4126
 
@@ -5050,7 +5270,7 @@ $ARGUMENTS
5050
5270
 
5051
5271
  **MANDATORY FIRST STEP -- NON-NEGOTIABLE**: Use the Read tool to open and read \`.claude/skills/compound/researcher/SKILL.md\` NOW. Do NOT proceed until you have read the complete skill file.
5052
5272
 
5053
- Then: scan docs/compound/research/ for gaps, propose topics via AskUserQuestion, spawn parallel researcher subagents.
5273
+ Then: scan docs/research/ and docs/compound/research/ for gaps, propose topics via AskUserQuestion, spawn parallel researcher subagents.
5054
5274
  `,
5055
5275
  "agentic-audit.md": `---
5056
5276
  name: compound:agentic-audit
@@ -5077,6 +5297,17 @@ $ARGUMENTS
5077
5297
  **MANDATORY FIRST STEP -- NON-NEGOTIABLE**: Use the Read tool to open and read \`.claude/skills/compound/agentic/SKILL.md\` NOW. Do NOT proceed until you have read the complete skill file.
5078
5298
 
5079
5299
  Run in **setup** mode. Audit first, then propose and create files to fill gaps.
5300
+ `,
5301
+ "architect.md": `---
5302
+ name: compound:architect
5303
+ description: Decompose a large system specification into cook-it-ready epic beads
5304
+ argument-hint: "<system spec epic ID, file path, or description>"
5305
+ ---
5306
+ $ARGUMENTS
5307
+
5308
+ # Architect
5309
+
5310
+ **MANDATORY FIRST STEP -- NON-NEGOTIABLE**: Use the Read tool to open and read \`.claude/skills/compound/architect/SKILL.md\` NOW. Do NOT proceed until you have read the complete skill file. It contains the full workflow you must follow.
5080
5311
  `,
5081
5312
  // =========================================================================
5082
5313
  // Utility commands (kept: learn-that, check-that, prime)
@@ -5555,9 +5786,9 @@ Skills are instructions that Claude reads before executing each phase. They live
5555
5786
 
5556
5787
  **Purpose**: Conduct deep, PhD-level research to build knowledge for working subagents.
5557
5788
 
5558
- **When invoked**: When agents need domain knowledge not yet covered in \`docs/compound/research/\`.
5789
+ **When invoked**: When agents need domain knowledge not yet covered in \`docs/research/\`.
5559
5790
 
5560
- **What it does**: Analyzes beads epics for knowledge gaps, checks existing docs coverage, proposes research topics for user confirmation, spawns parallel researcher subagents, and stores output at \`docs/compound/research/<topic>/<slug>.md\`.
5791
+ **What it does**: Analyzes beads epics for knowledge gaps, checks existing docs coverage, proposes research topics for user confirmation, spawns parallel researcher subagents, and stores output at \`docs/research/<topic>/<slug>.md\`.
5561
5792
 
5562
5793
  ### \`/compound:agentic-audit\`
5563
5794
 
@@ -5748,12 +5979,8 @@ Work is not complete until \`git push\` succeeds.
5748
5979
  };
5749
5980
 
5750
5981
  // src/setup/templates/skills.ts
5751
- var _bc = "\x1B[96m";
5752
- var _cn = "\x1B[36m";
5753
- var _gr = "\x1B[32m";
5754
- var _mg = "\x1B[35m";
5755
- var _yl = "\x1B[33m";
5756
- var _rs = "\x1B[0m";
5982
+ var cn = "\x1B[36m";
5983
+ var rs = "\x1B[0m";
5757
5984
  var PHASE_SKILLS = {
5758
5985
  "spec-dev": `---
5759
5986
  name: Spec Dev
@@ -5787,7 +6014,10 @@ Scale formality to risk: skip for trivial (<1h), lightweight (EARS + epic) for s
5787
6014
  2. Use Mermaid diagrams (\`sequenceDiagram\`, \`stateDiagram-v2\`) to expose hidden structure
5788
6015
  3. Detect ambiguities: vague adjectives, unclear pronouns, passive voice, compound requirements. See \`references/spec-guide.md\` for full checklist
5789
6016
  4. Build a domain glossary for ambiguous terms
5790
- 5. Use \`AskUserQuestion\` to resolve each ambiguity
6017
+ 5. **Change volatility**: rate each capability stable/moderate/high. Flag high-volatility areas for modularity investment in architect phase.
6018
+ 6. **Cynefin classification**: classify each requirement as Clear (apply known solution), Complicated (analyze tradeoffs), or Complex (needs safe-to-fail experiments). Complex requirements need experimental validation, not just analysis.
6019
+ 7. For composed systems, add **composition EARS**: \`When <A> times out, <B> shall...\`, \`If <A> retries, <B> shall...\`
6020
+ 8. Use \`AskUserQuestion\` to resolve each ambiguity
5791
6021
 
5792
6022
  **Iteration trigger**: If specifying reveals missing knowledge, loop back to Explore.
5793
6023
 
@@ -5844,6 +6074,7 @@ Read \`.claude/skills/compound/spec-dev/references/spec-guide.md\` on demand for
5844
6074
  - Not creating the beads epic
5845
6075
  - Specifying implementation instead of requirements
5846
6076
  - Skipping scenario table generation after EARS requirements
6077
+ - Not classifying requirements by Cynefin domain (Complex needs experiments)
5847
6078
 
5848
6079
  ## Quality Criteria
5849
6080
  - [ ] Requirements use EARS notation
@@ -5855,6 +6086,7 @@ Read \`.claude/skills/compound/spec-dev/references/spec-guide.md\` on demand for
5855
6086
  - [ ] Scenario table generated from EARS requirements and diagrams
5856
6087
  - [ ] Spec and scenario table stored in beads epic description
5857
6088
  - [ ] ADRs created for significant decisions
6089
+ - [ ] Cynefin classification applied, volatility assessed
5858
6090
  `,
5859
6091
  plan: `---
5860
6092
  name: Plan
@@ -5877,11 +6109,14 @@ Create a concrete implementation plan by decomposing work into small, testable t
5877
6109
  5. Synthesize research findings into a coherent approach. Flag conflicts between ADRs and proposed plan.
5878
6110
  6. Use \`AskUserQuestion\` to resolve ambiguities, conflicting constraints, or priority trade-offs before decomposing
5879
6111
  7. Decompose into tasks small enough to verify individually
5880
- 8. Define acceptance criteria for each task
5881
- 9. Ensure each task traces back to a spec requirement for traceability
5882
- 10. Map dependencies between tasks
5883
- 11. Create beads issues: \`bd create --title="..." --type=task\`
5884
- 12. Create review and compound blocking tasks (\`bd create\` + \`bd dep add\`) that depend on work tasks \u2014 these survive compaction and surface via \`bd ready\` after work completes
6112
+ 8. **Boundary stability check**: verify each task stays within its epic's scope boundary. If a task crosses epic boundaries, split it or expand scope.
6113
+ 9. **Last Responsible Moment**: for each task, assess if it can be deferred for information gain. Mark deferrable tasks with rationale: "defer until <milestone> because <information needed>".
6114
+ 10. **Change coupling check**: if files A and B change together >50% of the time (git log), they should be in the same task, not separate ones.
6115
+ 11. Define acceptance criteria for each task
6116
+ 12. Ensure each task traces back to a spec requirement for traceability
6117
+ 13. Map dependencies between tasks
6118
+ 14. Create beads issues: \`bd create --title="..." --type=task\`
6119
+ 15. Create review and compound blocking tasks (\`bd create\` + \`bd dep add\`) that depend on work tasks \u2014 these survive compaction and surface via \`bd ready\` after work completes
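The change-coupling heuristic in step 10 can be sketched as a pure function over parsed `git log --name-only` output (hypothetical helper; the commit parsing itself is left out):

```javascript
// commits: array of file-path arrays, one per commit.
// Returns the fraction of commits touching fileA that also touch fileB;
// a rate above 0.5 suggests grouping A and B into one task.
function coChangeRate(commits, fileA, fileB) {
  const withA = commits.filter((files) => files.includes(fileA));
  if (withA.length === 0) return 0;
  const both = withA.filter((files) => files.includes(fileB)).length;
  return both / withA.length;
}
```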
5885
6120
 
5886
6121
  ## Memory Integration
5887
6122
  - Run \`npx ca search\` and \`npx ca knowledge "relevant topic"\` for patterns related to the feature area
@@ -5901,6 +6136,8 @@ Create a concrete implementation plan by decomposing work into small, testable t
5901
6136
  - Not reviewing existing ADRs and docs for constraints
5902
6137
  - Making architectural decisions without research backing (use the researcher skill for complex domains)
5903
6138
  - Planning implementation details too early (stay at task level)
6139
+ - Not checking tasks against epic boundaries (boundary drift)
6140
+ - Splitting change-coupled files into separate tasks
5904
6141
 
5905
6142
  ## Quality Criteria
5906
6143
  - Each task has clear acceptance criteria
@@ -5911,6 +6148,8 @@ Create a concrete implementation plan by decomposing work into small, testable t
5911
6148
  - Ambiguities resolved via \`AskUserQuestion\` before decomposing
5912
6149
  - Complexity estimates are realistic (no "should be quick")
5913
6150
  - Each task traces back to a spec requirement
6151
+ - Tasks respect epic scope boundaries (no cross-boundary work)
6152
+ - Change-coupled files grouped together
5914
6153
 
5915
6154
  ## POST-PLAN VERIFICATION -- MANDATORY
5916
6155
  After creating all tasks, verify review and compound tasks exist:
@@ -5976,11 +6215,21 @@ for complex changes. For all changes, \`/implementation-reviewer\` is the minimu
5976
6215
  - Run \`npx ca knowledge "TDD test-first"\` for indexed knowledge on testing methodology
5977
6216
  - Run \`npx ca search "testing"\` for lessons from past TDD cycles
5978
6217
 
6218
+ ## Technical Debt Protocol
6219
+ When shortcuts are proposed, classify using Fowler's quadrant: only **Prudent/Deliberate** debt is rational (conscious choice, known trade-off, explicit repayment plan). Reckless or Inadvertent debt must be fixed immediately. Document debt decisions in epic notes.
6220
+
6221
+ ## Composition Boundary Verification
6222
+ If work touches a composition boundary (inter-epic or inter-service interface):
6223
+ - Verify implementation matches the interface contracts (explicit + implicit) from architect phase
6224
+ - Write tests for implicit contracts: timeout interactions, retry behavior, backpressure
6225
+ - Check for metastable failure risk: feedback loops (retry amplification, cache storms)
6226
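One implicit contract from the list above that is directly checkable is the timeout interaction: each caller's timeout must exceed its dependency's. A sketch, with a hypothetical chain shape (outermost caller first, not a package API):

```javascript
// chain: [{ name, timeoutMs }, ...], outermost caller first.
// Returns false if any dependency's timeout is not strictly below its caller's.
function timeoutBudgetOk(chain) {
  for (let i = 1; i < chain.length; i++) {
    if (chain[i].timeoutMs >= chain[i - 1].timeoutMs) return false;
  }
  return true;
}
```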
+
5979
6227
  ## Common Pitfalls
5980
6228
  - Lead writing code instead of delegating to agents
5981
6229
  - Not injecting memory context into agent prompts
5982
6230
  - Modifying tests to make them pass instead of fixing implementation
5983
6231
  - Not running the full test suite after agent work completes
6232
+ - Accumulating reckless/inadvertent tech debt without classification
5984
6233
 
5985
6234
  ## Quality Criteria
5986
6235
  - Tests existed before implementation code
@@ -5990,6 +6239,8 @@ for complex changes. For all changes, \`/implementation-reviewer\` is the minimu
5990
6239
  - All tests pass after refactoring
5991
6240
  - Task lifecycle tracked via beads (\`bd\`)
5992
6241
  - Implementation aligns with spec requirements from epic
6242
+ - Technical debt classified (only Prudent/Deliberate accepted)
6243
+ - Composition boundaries verified against interface contracts
5993
6244
 
5994
6245
  ## PHASE GATE 3 -- MANDATORY
5995
6246
  Before starting Review, verify ALL work tasks are closed:
@@ -6014,7 +6265,8 @@ Perform thorough code review by spawning specialized reviewers in parallel, cons
6014
6265
  4. Select reviewer tier based on diff size:
6015
6266
  - **Small** (<100 lines): 4 core -- security, test-coverage, simplicity, cct-reviewer
6016
6267
  - **Medium** (100-500): add architecture, performance, edge-case, scenario-coverage (8 total)
6017
- - **Large** (500+): all 12 reviewers including docs, consistency, error-handling, pattern-matcher
6268
+ - **Large** (500+): all 12+ reviewers including docs, consistency, error-handling, pattern-matcher
6269
+ - **Composition changes**: add boundary-reviewer (coupling within epic boundaries?), control-structure-reviewer (STPA mitigations implemented?), observability-reviewer (metrics at boundaries?)
6018
6270
  5. Spawn reviewers in an **AgentTeam** (TeamCreate + Task with \`team_name\`):
6019
6271
  - Role skills: \`.claude/skills/compound/agents/{security-reviewer,architecture-reviewer,performance-reviewer,test-coverage-reviewer,simplicity-reviewer,scenario-coverage-reviewer}/SKILL.md\`
6020
6272
  - Security specialist skills (on-demand, spawned by security-reviewer): \`.claude/skills/compound/agents/{security-injection,security-secrets,security-auth,security-data,security-deps}/SKILL.md\`
@@ -6024,7 +6276,7 @@ Perform thorough code review by spawning specialized reviewers in parallel, cons
6024
6276
  8. Classify by severity: P0 (blocks merge), P1 (critical/blocking), P2 (important), P3 (minor)
6025
6277
  9. Use \`AskUserQuestion\` when severity is ambiguous or fix has multiple valid options
6026
6278
  10. Create beads issues for P1 findings: \`bd create --title="P1: ..."\`
6027
- 11. Verify spec alignment: flag unmet EARS requirements as P1, flag requirements met but missing from acceptance criteria as gaps
6279
+ 11. Verify spec alignment: flag unmet EARS requirements as P1. Verify assumptions from architect phase still hold. Check change coupling: do modified files cluster within epic boundaries or leak across?
6028
6280
  12. Fix all P1 findings before proceeding
6029
6281
  13. Run \`/implementation-reviewer\` as mandatory gate
6030
6282
  14. Capture novel findings with \`npx ca learn\`; pattern-matcher auto-reinforces recurring issues
@@ -6052,6 +6304,7 @@ Perform thorough code review by spawning specialized reviewers in parallel, cons
6052
6304
  - Skipping quality gates before review
6053
6305
  - Bypassing the implementation-reviewer gate
6054
6306
  - Not checking CCT patterns for known Claude mistakes
6307
+ - Not verifying architect assumptions still hold after implementation
6055
6308
 
6056
6309
  ## Quality Criteria
6057
6310
  - All quality gates pass (\`pnpm test\`, lint)
@@ -6090,6 +6343,9 @@ Lessons go to \`.claude/lessons/index.jsonl\` through the CLI. MEMORY.md is a di
6090
6343
  ## Methodology
6091
6344
  1. Review what happened during this cycle (git diff, test results, plan context)
6092
6345
  2. Detect spec drift: compare final implementation against original EARS requirements in the epic description (\`bd show <epic>\`). Note any divergences -- what changed, why, was it justified. If drift reveals a spec was wrong or incomplete, flag that for lesson extraction.
6346
+ 3. **Decomposition quality assessment**: compare actual implementation against predicted boundaries from architect. Did files cluster as predicted? Were assumptions valid? Rate boundary quality: "This boundary [succeeded/failed] because..." Capture as lesson for future architect runs.
6347
+ 4. **Assumption tracking**: for each assumption from architect phase, record predicted vs actual volatility. Store with \`npx ca learn\` for calibration.
6348
+ 5. **Emergence analysis**: if unexpected system behaviors occurred, classify root cause: incomplete interface contract (Garlan), control structure inadequacy (STPA), or scale-induced phase transition. Capture preventive lesson.
6093
6349
  6. Spawn the analysis pipeline in an **AgentTeam** (TeamCreate + Task with \`team_name\`):
6094
6350
  - Role skills: \`.claude/skills/compound/agents/{context-analyzer,lesson-extractor,pattern-matcher,solution-writer,compounding}/SKILL.md\`
6095
6351
  - For large diffs, deploy MULTIPLE context-analyzers and lesson-extractors
@@ -6123,6 +6379,8 @@ Lessons go to \`.claude/lessons/index.jsonl\` through the CLI. MEMORY.md is a di
6123
6379
  - Requiring user confirmation for every item (only high-severity needs it)
6124
6380
  - Not classifying items by type (lesson/solution/pattern/preference)
6125
6381
  - Capturing vague lessons ("be careful with X") -- be specific and concrete
6382
+ - Not assessing decomposition boundary quality against architect predictions
6383
+ - Not tracking assumption accuracy for future calibration
6126
6384
 
6127
6385
  ## Quality Criteria
6128
6386
  - Analysis team was spawned and agents coordinated via pipeline
@@ -6135,6 +6393,9 @@ Lessons go to \`.claude/lessons/index.jsonl\` through the CLI. MEMORY.md is a di
6135
6393
  - Beads checked for related issues (\`bd\`)
6136
6394
  - Each item gives clear, concrete guidance for future sessions
6137
6395
  - Spec drift analyzed and captured
6396
+ - Decomposition boundary quality assessed
6397
+ - Architect assumptions tracked (predicted vs actual)
6398
+ - Emergent failures classified by root cause if any occurred
6138
6399
 
6139
6400
  ## FINAL GATE -- EPIC CLOSURE
6140
6401
  Before closing the epic:
@@ -6151,7 +6412,7 @@ description: Deep research producing structured survey documents for informed de
6151
6412
  # Researcher Skill
6152
6413
 
6153
6414
  ## Overview
6154
- Conduct deep research on a topic and produce a structured survey document following the project's research template. This skill spawns parallel research subagents to gather comprehensive information, then synthesizes findings into a PhD-depth document stored in \`docs/compound/research/\`.
6415
+ Conduct deep research on a topic and produce a structured survey document following the project's research template. This skill spawns parallel research subagents to gather comprehensive information, then synthesizes findings into a PhD-depth document stored in \`docs/research/\`.
6155
6416
 
6156
6417
  ## Methodology
6157
6418
  1. Identify the research question, scope, and exclusions
@@ -6172,16 +6433,16 @@ Conduct deep research on a topic and produce a structured survey document follow
6172
6433
  - Conclusion
6173
6434
  - References (full citations)
6174
6435
  - Practitioner Resources (annotated tools/repos)
6175
- 6. Store output at \`docs/compound/research/<topic-slug>.md\` (kebab-case filename)
6436
+ 6. Store output at \`docs/research/<topic-slug>.md\` (kebab-case filename)
6176
6437
  7. Report key findings back for upstream skill (spec-dev/plan) to act on
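The kebab-case filename in step 6 could be produced by a slug helper like this (illustrative only):

```javascript
// Lowercase, replace runs of non-alphanumerics with "-", trim stray hyphens.
function slugify(topic) {
  return topic
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-+|-+$/g, "");
}
```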
6177
6438
 
6178
6439
  ## Memory Integration
6179
6440
  - Run \`npx ca search\` with topic keywords before starting research
6180
- - Check for existing research docs in \`docs/compound/research/\` that overlap
6441
+ - Check for existing research docs in \`docs/research/\` and \`docs/compound/research/\` that overlap
6181
6442
  - After completion, key findings can be captured via \`npx ca learn\`
6182
6443
 
6183
6444
  ## Docs Integration
6184
- - Scan \`docs/compound/research/\` for prior survey documents on related topics
6445
+ - Scan \`docs/research/\` and \`docs/compound/research/\` for prior survey documents on related topics
6185
6446
  - Check \`docs/decisions/\` for ADRs that inform or constrain the research scope
6186
6447
  - Reference existing project docs as primary sources where relevant
6187
6448
 
@@ -6272,25 +6533,17 @@ Before starting EACH phase, you MUST use the Read tool to open its skill file:
6272
6533
  Do NOT proceed from memory. Read the skill, then follow it exactly.
6273
6534
 
6274
6535
  ## Session Start
6275
- When a cooking session begins, IMMEDIATELY print the brain banner below (copy it verbatim):
6276
-
6277
- ___ ___
6278
- .."\`) "\`.." "(\`\`.
6279
- .'; _..=. ${_bc}::${_rs} \`-'._ ;\`.
6280
- : ) ;"\`':._. ${_bc}::${_rs}_. ( :.
6281
- .:-" _. \`"${_cn}##${_rs}"\` "._ \`-:\\
6282
- /." -"\` ._.${_bc}::${_rs}._. .'" ".:
6283
- : : ( -: \`" ${_bc}::${_rs} "\` :- ) : )
6284
- ( .":==._ ' \`'=._${_cn}##${_rs}_.=' ' _.==: .'
6285
- (: \`, \`"\` \`"${_cn}##${_rs}"\` \`"\` .'\`".)
6286
- \\\`' \`"--. "- )( -" ..--"\` \`-/
6287
- (" (_."\` ="""-..."\`...-""= "._) ")
6288
- "..__..-" )${_gr}%${_rs}\`..'\`${_gr}%${_rs}( "-..__.."
6289
- (#"...'\\\\${_gr}%%%%${_rs}/\`..."#)
6290
- \`${_mg}######${_rs}\`--'${_mg}######${_rs}"
6291
- "${_mg}###${_rs}")${_yl}@@${_rs}(\`${_mg}###${_rs}"
6292
- \\${_yl}@@${_rs}/
6293
- Claw'd
6536
+ When a cooking session begins, IMMEDIATELY print the banner below (copy it verbatim):
6537
+
6538
+ ${cn} o
6539
+ /|\\
6540
+ o-o-o
6541
+ /|\\ /|\\
6542
+ o-o-o-o-o
6543
+ \\|/ \\|/
6544
+ o-o-o
6545
+ \\|/
6546
+ o${rs}
6294
6547
 
6295
6548
  Then proceed with the protocol below.
6296
6549
 
@@ -6664,6 +6917,110 @@ After all approved actions are applied, verify:
 - Setup mode ran audit first
 - No files overwritten without approval
 - Generated content based on actual codebase analysis
+ `,
+ "architect": `---
+ name: Architect
+ description: Decompose a large system specification into cook-it-ready epic beads via DDD bounded contexts
+ ---
+
+ # Architect Skill
+
+ ## Overview
+ Take a large system specification and decompose it into naturally-scoped epic beads that the infinity loop can process via cook-it. Each output epic is sized for one cook-it cycle.
+
+ 4 phases with 3 human gates. Runs BEFORE spec-dev -- each decomposed epic then goes through full cook-it (including spec-dev to refine its EARS subset).
+
+ ## Input
+ - Beads epic ID: read epic description as input
+ - File path: read markdown file as input
+ - Neither: use \`AskUserQuestion\` to gather the system description
+
+ ## Phase 1: Socratic
+ **Goal**: Understand the system domain before decomposing.
+ 1. Search memory: \`npx ca search\` for past features, constraints, decisions
+ 2. Search knowledge: \`npx ca knowledge "relevant terms"\`
+ 3. Ask "why" before "how" -- understand the real need
+ 4. Build a **domain glossary** (ubiquitous language) from the dialogue
+ 5. Produce a **discovery mindmap** (Mermaid \`mindmap\`) to expose assumptions
+ 6. **Reversibility analysis**: classify decisions as irreversible (schema, public API, service boundary), moderate (framework), or reversible (library, config). Spend effort proportional to irreversibility.
+ 7. **Change volatility**: rate each boundary stable/moderate/high. High-volatility justifies modularity investment.
+ 8. Use \`AskUserQuestion\` to clarify scope and preferences
+
+ **Gate 1**: Use \`AskUserQuestion\` to confirm the understanding is complete before proceeding to Spec.
+
+ ## Phase 2: Spec
+ **Goal**: Produce a system-level specification.
+ 1. Write **system-level EARS requirements** (Ubiquitous/Event/State/Unwanted/Optional patterns)
+ 2. Produce **architecture diagrams**: C4Context, sequenceDiagram, stateDiagram-v2
+ 3. Generate a **scenario table** from the EARS requirements
+ 4. Write the spec to \`docs/specs/<name>.md\` and create a **meta-epic bead**
+
+ **Gate 2**: Use \`AskUserQuestion\` to get human approval of the system-level spec.
+
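For reference, the five EARS patterns named above (Ubiquitous, Event-driven, State-driven, Unwanted behaviour, Optional) each follow a fixed sentence template; the sketch below shows the generic templates, not requirement text from this package:

```
Ubiquitous:   The <system> shall <response>.
Event-driven: WHEN <trigger>, the <system> shall <response>.
State-driven: WHILE <state>, the <system> shall <response>.
Unwanted:     IF <undesired condition>, THEN the <system> shall <response>.
Optional:     WHERE <feature is included>, the <system> shall <response>.
```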
+ ## Phase 3: Decompose
+ **Goal**: Break the system into naturally-scoped epics using DDD bounded contexts.
+
+ Spawn **6 parallel subagents** (via Task tool):
+ 1. **Bounded context mapper**: Identify natural domain boundaries and propose candidate epics
+ 2. **Dependency analyst**: Structural + change coupling (git history entropy), dependency graph, processing order
+ 3. **Scope sizer**: "One cook-it cycle" heuristic, cognitive load check (7+/-2 concepts per epic)
+ 4. **Interface designer**: Explicit contracts (API/data) + implicit contracts (threading, delivery guarantees, timeout/retry, backpressure, resource ownership, failure modes)
+ 5. **Control structure analyst** (STPA): Identify hazards at composition boundaries, unsafe control actions (commission/omission/timing), propose mitigations
+ 6. **Structural-semantic gap analyst**: Compare dependency graph partition vs DDD semantic partition, flag disagreements
+
+ **Synthesis**: Merge subagent findings into a proposed epic structure. For each epic:
+ - Title and scope boundaries (what is in, what is out)
+ - Relevant EARS subset from the system spec
+ - Interface contracts: explicit (API/data) + implicit (timing, threading, failure modes)
+ - Assumptions that must hold for this boundary to remain valid
+ - Org alignment: which team type owns this (stream-aligned/platform/enabling/complicated-subsystem)?
+ - Pointer to the master spec file
+
+ **Multi-criteria validation** before Gate 3 -- for each epic:
+ - [ ] Structural: low change coupling, acyclic dependencies
+ - [ ] Semantic: stable bounded context, coherent ubiquitous language
+ - [ ] Organizational: single team owner, within cognitive budget
+ - [ ] Economic: modularity benefit > coordination overhead
+
+ **Gate 3**: Use \`AskUserQuestion\` to get human approval of the epic structure, dependency graph, and interface contracts.
+
+ ## Phase 4: Materialize
+ **Goal**: Create the actual beads.
+ 1. Create epic beads via \`bd create --title="..." --type=epic --priority=<N>\` for each approved epic
+ 2. Store scope, EARS subset, interface contracts (explicit + implicit), and key assumptions in each epic description
+ 3. Define **fitness functions** per epic to monitor assumptions. Document re-decomposition trigger.
+ 4. Wire dependencies via \`bd dep add\` for all relationships
+ 5. Store processing order as notes on the meta-epic
+ 6. Capture lessons via \`npx ca learn\`
+
+ ## Memory Integration
+ - \`npx ca search\` before starting each phase
+ - \`npx ca knowledge\` for indexed project docs
+ - \`npx ca learn\` after corrections or discoveries
+
+ ## Common Pitfalls
+ - Jumping to decomposition without understanding the domain (skip Socratic)
+ - Micro-slicing epics too small (each epic should be a natural bounded context, not a single task)
+ - Missing interface contracts between epics (coupling will bite during implementation)
+ - Not searching memory for past decomposition patterns
+ - Skipping human gates (the 3 gates are the quality checkpoints)
+ - Creating epics without EARS subset (loses traceability to system spec)
+ - Not wiring dependencies (loop will process in wrong order)
+ - Treating complex decisions as complicated (Cynefin): service boundaries need experiments, not just analysis
+ - Ignoring implicit contracts (threading, timing, backpressure) -- Garlan's architectural mismatch
+ - Not capturing assumptions that would invalidate the decomposition if wrong
+
+ ## Quality Criteria
+ - [ ] Socratic phase completed with domain glossary and mindmap
+ - [ ] System-level EARS requirements cover all capabilities
+ - [ ] Architecture diagrams produced (C4, sequence, state)
+ - [ ] Spec written to docs/specs/ and meta-epic created
+ - [ ] 6-angle convoy executed for decomposition (DDD + STPA + gap analysis)
+ - [ ] Each epic has scope boundaries, EARS subset, interface contracts (explicit + implicit), and assumptions
+ - [ ] Dependencies wired via bd dep add
+ - [ ] Processing order stored on meta-epic
+ - [ ] 3 human gates passed via AskUserQuestion
+ - [ ] Memory searched at each phase
 `
 };
 var PHASE_SKILL_REFERENCES = {
@@ -9634,7 +9991,10 @@ async function fetchLatestVersion(packageName = "compound-agent") {
 });
 if (!res.ok) return null;
 const data = await res.json();
- return data["dist-tags"] ? data["dist-tags"].latest ?? null : null;
+ const tags = data["dist-tags"];
+ if (typeof tags !== "object" || tags === null) return null;
+ const latest = tags["latest"];
+ return typeof latest === "string" ? latest : null;
 } catch {
 return null;
 }
@@ -9647,7 +10007,7 @@ async function checkForUpdate(cacheDir) {
 return {
 current: VERSION,
 latest: cached.latest,
- updateAvailable: cached.latest !== VERSION
+ updateAvailable: semverGt(cached.latest, VERSION)
 };
 }
 const latest = await fetchLatestVersion();
@@ -9661,7 +10021,7 @@ async function checkForUpdate(cacheDir) {
 return {
 current: VERSION,
 latest,
- updateAvailable: latest !== VERSION
+ updateAvailable: semverGt(latest, VERSION)
 };
 } catch {
 return null;
@@ -9669,7 +10029,18 @@ async function checkForUpdate(cacheDir) {
 }
 function formatUpdateNotification(current, latest) {
 return `Update available: ${current} -> ${latest}
- Run: pnpm update compound-agent`;
+ Run: pnpm update --latest compound-agent`;
+ }
+ function semverGt(a, b) {
+ const parse = (v) => {
+ const parts = v.split(".").map((n) => parseInt(n, 10) || 0);
+ return [parts[0] ?? 0, parts[1] ?? 0, parts[2] ?? 0];
+ };
+ const [aMaj, aMin, aPat] = parse(a);
+ const [bMaj, bMin, bPat] = parse(b);
+ if (aMaj !== bMaj) return aMaj > bMaj;
+ if (aMin !== bMin) return aMin > bMin;
+ return aPat > bPat;
 }
 function readCache(cachePath) {
 try {
@@ -9796,16 +10167,18 @@ ${formattedLessons}
 if (cookitSection !== null) {
 output += cookitSection;
 }
- try {
- const updateResult = await checkForUpdate(join(root, ".claude", ".cache"));
- if (updateResult?.updateAvailable) {
- output += `
+ if (!process.stdout.isTTY) {
+ try {
+ const updateResult = await checkForUpdate(join(root, ".claude", ".cache"));
+ if (updateResult?.updateAvailable) {
+ output += `
 ---
 # Update Available
- compound-agent v${updateResult.latest} is available (current: v${updateResult.current}). Run \`pnpm update compound-agent\` to update.
+ compound-agent v${updateResult.latest} is available (current: v${updateResult.current}). Run \`pnpm update --latest compound-agent\` to update.
 `;
+ }
+ } catch {
 }
- } catch {
 }
 return output;
 }
@@ -10532,7 +10905,26 @@ function registerVerifyGatesCommand(program) {
 }
 
 // src/changelog-data.ts
- var CHANGELOG_RECENT = `## [1.7.3] - 2026-03-09
+ var CHANGELOG_RECENT = `## [1.7.4] - 2026-03-11
+
+ ### Added
+
+ - **Research-enriched phase skills**: Applied insights from 3 PhD-level research documents (Science of Decomposition, Architecture Under Uncertainty, Emergent Behavior in Composed Systems) across all 6 core phase skills:
+ - **Architect**: reversibility analysis (Baldwin & Clark), change volatility, 6-subagent convoy (added STPA control structure analyst + structural-semantic gap analyst), implicit interface contracts (threading, backpressure, delivery guarantees), organizational alignment (Team Topologies), multi-criteria validation gate (structural/semantic/organizational/economic), assumption capture with fitness functions and re-decomposition triggers
+ - **Spec-dev**: Cynefin classification (Clear/Complicated/Complex), composition EARS templates (timeout/retry interactions), change volatility assessment
+ - **Plan**: boundary stability check, Last Responsible Moment identification, change coupling prevention
+ - **Work**: Fowler technical debt quadrant (only Prudent/Deliberate accepted), composition boundary verification with metastable failure checks
+ - **Review**: composition-specific reviewers (boundary-reviewer, control-structure-reviewer, observability-reviewer), architect assumption validation
+ - **Compound**: decomposition quality assessment, assumption tracking (predicted vs actual), emergence root cause classification (Garlan/STPA/phase transition)
+ - **Lint graduation in compound phase**: The compound phase (step 10) now spawns a \`lint-classifier\` subagent that classifies each captured insight as LINTABLE, PARTIAL, or NOT_LINTABLE. High-confidence lintable insights are promoted to beads tasks under a "Linting Improvement" epic with self-contained rule specifications. Two rule classes: Class A (native \`rules.json\` \u2014 regex/glob) and Class B (external linter \u2014 AST analysis).
+ - **Linter detection module** (\`src/lint/\`): Scans repos for ESLint (flat + legacy configs including TypeScript variants), Ruff (including \`pyproject.toml\`), Clippy, golangci-lint, ast-grep, and Semgrep. Exported from the package as \`detectLinter()\`, \`LinterInfoSchema\`, \`LinterNameSchema\`.
+ - **Lint-classifier agent template**: Ships via \`npx ca init\` to \`.claude/agents/compound/lint-classifier.md\`. Includes 7 few-shot examples, Class A/B routing, and linter-aware task creation.
+
+ ### Fixed
+
+ - **PhD research output path**: \`/compound:get-a-phd\` now writes user-generated research to \`docs/research/\` instead of \`docs/compound/research/\`. The \`docs/compound/\` directory is reserved for shipped library content; project-specific research no longer pollutes it. Overlap scanning checks both directories.
+
+ ## [1.7.3] - 2026-03-09
 
 ### Added
 
@@ -10574,18 +10966,7 @@ var CHANGELOG_RECENT = `## [1.7.3] - 2026-03-09
 - **Hardcoded model extracted**: Five occurrences of \`'claude-opus-4-6'\` in loop.ts extracted to \`DEFAULT_MODEL\` constant.
 - **EPIC_ID_PATTERN deduplicated**: \`watch.ts\` now imports \`LOOP_EPIC_ID_PATTERN\` from \`loop.ts\` instead of maintaining a duplicate.
 - **\`warn()\` output corrected**: \`shared.ts\` warn helper now writes to \`stderr\` instead of \`stdout\`.
- - **Templates import fixed**: \`templates.ts\` now imports \`VERSION\` from \`../version.js\` instead of barrel re-export.
-
- ## [1.7.1] - 2026-03-09
-
- ### Added
-
- - **Scenario testing integration**: Spec-dev Phase 3 now generates scenario tables from EARS requirements and Mermaid diagrams with five categories (happy, error, boundary, combinatorial, adversarial). Review phase verifies coverage via a new \`scenario-coverage-reviewer\` agent using heuristic AI-driven matching.
- - **Scenario coverage reviewer**: New medium-tier AgentTeam reviewer that matches test files against epic scenario tables and flags gaps (P1) or partial coverage (P2). Spawned for diffs >100 lines.
-
- ### Fixed
-
- - **Stale reviewer count in tests**: Updated "5 reviewer perspectives" test to "6" with \`scenario-coverage\` assertion. Removed no-op \`.replace('security-', 'security-')\` in escalation wiring test.`;
+ - **Templates import fixed**: \`templates.ts\` now imports \`VERSION\` from \`../version.js\` instead of barrel re-export.`;
 
 // src/commands/about.ts
 function registerAboutCommand(program) {
@@ -10779,8 +11160,8 @@ function registerKnowledgeIndexCommand(program) {
 }
 });
 program.command("embed-worker <repoRoot>", { hidden: true }).description("Internal: background embedding worker").action(async (repoRoot) => {
- const { existsSync: existsSync24, statSync: statSync6 } = await import('fs');
- if (!existsSync24(repoRoot) || !statSync6(repoRoot).isDirectory()) {
+ const { existsSync: existsSync24, statSync: statSync7 } = await import('fs');
+ if (!existsSync24(repoRoot) || !statSync7(repoRoot).isDirectory()) {
 out.error(`Invalid repoRoot: "${repoRoot}" is not a directory`);
 process.exitCode = 1;
 return;
@@ -11169,6 +11550,21 @@ init_embeddings();
 init_search2();
 init_sqlite_knowledge();
 init_compound();
+ var LinterNameSchema = z.enum([
+ "eslint",
+ "ruff",
+ "clippy",
+ "golangci-lint",
+ "ast-grep",
+ "semgrep",
+ "unknown"
+ ]);
+ z.object({
+ linter: LinterNameSchema,
+ configPath: z.string().nullable()
+ });
+
+ // src/index.ts
 init_types();
 
 // src/commands/retrieval.ts
@@ -11493,8 +11889,50 @@ function registerManagementCommands(program) {
 }
 
 // src/commands/loop-templates.ts
- function buildEpicSelector() {
+ function buildDependencyCheck() {
 return `
+ # check_deps_closed() - Verify all depends_on for an epic are closed
+ # Returns 0 if all deps closed (or no deps), 1 if any dep is open
+ # Uses the depends_on array from bd show --json (objects with .id/.status)
+ check_deps_closed() {
+ local epic_id="$1"
+ local deps_json
+ deps_json=$(bd show "$epic_id" --json 2>/dev/null || echo "")
+ if [ -z "$deps_json" ]; then
+ return 0
+ fi
+ local blocking_dep
+ if [ "$HAS_JQ" = true ]; then
+ blocking_dep=$(echo "$deps_json" | jq -r '
+ if type == "array" then .[0] else . end |
+ (.depends_on // .dependencies // []) |
+ map(select(.status != "closed")) |
+ .[0].id // empty
+ ' 2>/dev/null || echo "")
+ else
+ blocking_dep=$(echo "$deps_json" | python3 -c "
+ import sys, json
+ data = json.load(sys.stdin)
+ if isinstance(data, list):
+ data = data[0] if data else {}
+ deps = data.get('depends_on', data.get('dependencies', []))
+ for d in deps:
+ s = d.get('status', 'open') if isinstance(d, dict) else 'open'
+ if s != 'closed':
+ print(d.get('id', d) if isinstance(d, dict) else d)
+ break
+ " 2>/dev/null || echo "")
+ fi
+ if [ -n "$blocking_dep" ]; then
+ log "Skip $epic_id: blocked by dependency $blocking_dep (not closed)"
+ return 1
+ fi
+ return 0
+ }
+ `;
+ }
+ function buildEpicSelector() {
+ return buildDependencyCheck() + `
 get_next_epic() {
 if [ -n "$EPIC_IDS" ]; then
 for epic_id in $EPIC_IDS; do
@@ -11502,6 +11940,7 @@ get_next_epic() {
 local status
 status=$(bd show "$epic_id" --json 2>/dev/null | parse_json '.status' 2>/dev/null || echo "")
 if [ "$status" = "open" ]; then
+ check_deps_closed "$epic_id" || continue
 echo "$epic_id"
 return 0
 fi
@@ -11512,18 +11951,25 @@ get_next_epic() {
 if [ "$HAS_JQ" = true ]; then
 epic_id=$(bd list --type=epic --ready --json --limit=10 2>/dev/null | jq -r '.[].id' 2>/dev/null | while read -r id; do
 case " $PROCESSED " in (*" $id "*) continue ;; esac
+ check_deps_closed "$id" || continue
 echo "$id"
 break
 done)
 else
- epic_id=$(bd list --type=epic --ready --json --limit=10 2>/dev/null | python3 -c "
+ # Emit ALL unprocessed candidates so the shell loop can check each one's deps.
+ local candidates epic_id=""
+ candidates=$(bd list --type=epic --ready --json --limit=10 2>/dev/null | python3 -c "
 import sys, json
 processed = set('$PROCESSED'.split())
 items = json.load(sys.stdin)
 for item in items:
 if item['id'] not in processed:
- print(item['id'])
- break" 2>/dev/null || echo "")
+ print(item['id'])" 2>/dev/null || echo "")
+ for cid in $candidates; do
+ check_deps_closed "$cid" || continue
+ epic_id="$cid"
+ break
+ done
 fi
 if [ -z "$epic_id" ]; then
 return 1
@@ -12625,12 +13071,19 @@ function createProgram() {
 return program;
 }
 async function runProgram(program, argv = process.argv) {
+ let updatePromise = null;
+ if (process.stdout.isTTY) {
+ try {
+ const cacheDir = join(getRepoRoot(), ".claude", ".cache");
+ updatePromise = checkForUpdate(cacheDir);
+ } catch {
+ }
+ }
 try {
 await program.parseAsync(argv);
- if (process.stdout.isTTY) {
+ if (updatePromise) {
 try {
- const cacheDir = join(getRepoRoot(), ".claude", ".cache");
- const result = await checkForUpdate(cacheDir);
+ const result = await updatePromise;
 if (result?.updateAvailable) {
 console.log(formatUpdateNotification(result.current, result.latest));
 }