compound-agent 1.1.0 → 1.2.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +36 -1
- package/README.md +13 -0
- package/dist/cli.js +433 -10
- package/dist/cli.js.map +1 -1
- package/dist/index.d.ts +6 -6
- package/dist/index.js +26 -0
- package/dist/index.js.map +1 -1
- package/dist/mcp.js +119 -95
- package/dist/mcp.js.map +1 -1
- package/package.json +21 -12
package/CHANGELOG.md
CHANGED

@@ -9,6 +9,39 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
 ## [Unreleased]
 
+## [1.2.1] - 2026-02-15
+
+### Added
+
+- **`ca verify-gates <epic-id>` command**: Verifies review and compound beads tasks exist and are closed before epic can be marked complete
+- **Phase enforcement gates in `/compound:lfg`**: Mechanical STOP markers (`PHASE GATE 3`, `PHASE GATE 4`, `FINAL GATE`) between workflow phases prevent Claude from skipping review and compound phases
+- **Per-phase MEMORY CHECK instructions**: Each of the 5 phases in lfg.md now has explicit `MEMORY CHECK` instructions for memory_search/memory_capture
+- **Phase state tracking**: lfg.md tracks phase completion via `bd update --notes` with `Phase: COMPLETE` markers, surviving context compaction
+- **SESSION CLOSE checklist in lfg.md**: Inviolable 8-step checklist at end of lfg workflow ensures bd sync and git push
+
+### Fixed
+
+- **`ca setup --update` now ensures MCP config**: Previously only regenerated templates; now also calls `configureClaudeSettings()` to ensure `.mcp.json` and hooks are current for projects upgrading from older versions
+- **`ca prime` warns when MCP server is missing**: Displays actionable warning with `Run 'npx ca setup'` when `.mcp.json` is not registered
+- **work.md verification gate strengthened**: Replaced soft "Verification Gate" with `MANDATORY VERIFICATION` section requiring `/implementation-reviewer` APPROVED status
+- **compound.md minimum capture requirement**: Added "At minimum, capture 1 lesson per significant decision" to prevent empty compound phases
+- **plan.md post-plan verification**: Added `POST-PLAN VERIFICATION` section with grep checks for review and compound task creation
+
+## [1.2.0] - 2026-02-15
+
+### Added
+
+- **`ca loop` command**: Generate autonomous infinity loop scripts that process beads epics end-to-end via chained Claude Code sessions
+- **HUMAN_REQUIRED marker**: Loop detects human-blocking issues, logs reason to beads, skips epic without stopping the loop
+- **Review+compound blocking tasks**: Plan phase now creates review and compound beads issues with dependencies, ensuring these phases survive context compaction and surface via `bd ready`
+
+### Fixed
+
+- **Loop script `set -u` crash**: `LOOP_DRY_RUN` now uses safe expansion (`${VAR:-}`) for `set -u` compatibility
+- **Infinite reprocessing**: Loop tracks processed epics to prevent re-selecting the same epic in dry-run or human-required paths
+- **Input validation**: `--max-retries` rejects non-integer values; epic IDs validated against safe pattern to prevent shell injection
+- **Exit codes**: `ca loop` now returns non-zero on errors (overwrite refusal, invalid options)
+
 ## [1.1.0] - 2026-02-15
 
 ### Added

@@ -423,7 +456,9 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 - Vitest test suite
 - tsup build configuration
 
-[Unreleased]: https://github.com/Nathandela/learning_agent/compare/v1.1
+[Unreleased]: https://github.com/Nathandela/learning_agent/compare/v1.2.1...HEAD
+[1.2.1]: https://github.com/Nathandela/learning_agent/compare/v1.2.0...v1.2.1
+[1.2.0]: https://github.com/Nathandela/learning_agent/compare/v1.1.0...v1.2.0
 [1.1.0]: https://github.com/Nathandela/learning_agent/compare/v1.0.0...v1.1.0
 [1.0.0]: https://github.com/Nathandela/learning_agent/compare/v0.2.9...v1.0.0
 [0.2.9]: https://github.com/Nathandela/learning_agent/compare/v0.2.8...v0.2.9
package/README.md
CHANGED

@@ -166,10 +166,23 @@ The CLI binary is `ca` (alias: `compound-agent`).
 | `ca export` | Export items as JSON |
 | `ca import <file>` | Import items from JSONL file |
 | `ca prime` | Load workflow context (used by hooks) |
+| `ca verify-gates <epic-id>` | Verify review + compound tasks exist and are closed |
 | `ca audit` | Run audit checks against the codebase |
 | `ca rules check` | Run repository-defined rule checks |
 | `ca test-summary` | Run tests and output a compact summary |
 
+### Automation
+
+| Command | Description |
+|---------|-------------|
+| `ca loop` | Generate infinity loop script for autonomous epic processing |
+| `ca loop --epics <ids...>` | Target specific epic IDs |
+| `ca loop -o <path>` | Custom output path (default: `./infinity-loop.sh`) |
+| `ca loop --max-retries <n>` | Max retries per epic on failure (default: 1) |
+| `ca loop --force` | Overwrite existing script |
+
+Generated scripts detect three markers: `EPIC_COMPLETE` (success), `EPIC_FAILED` (retry then stop), `HUMAN_REQUIRED: <reason>` (skip and continue). Run with `LOOP_DRY_RUN=1` to preview.
+
 ### Setup
 
 | Command | Description |
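The three markers above form a simple contract between the generated bash script and the Claude session it drives. A minimal JavaScript sketch of that classification logic, illustrative only (the real generated script does this in bash with `grep`, checking the markers in this same order; the function name here is hypothetical):

```javascript
// Sketch of the marker contract: EPIC_COMPLETE wins, then HUMAN_REQUIRED,
// then EPIC_FAILED; a session that ends with no marker is retried.
function classifySession(log) {
  if (log.includes("EPIC_COMPLETE")) return { outcome: "complete" };
  const human = log.match(/HUMAN_REQUIRED: *(.+)/);
  if (human) return { outcome: "skip", reason: human[1].trim() };
  if (log.includes("EPIC_FAILED")) return { outcome: "retry" };
  // No marker at all is treated like a failure and retried.
  return { outcome: "retry" };
}

const done = classifySession("...\nEPIC_COMPLETE\n");
const blocked = classifySession("HUMAN_REQUIRED: Need AWS credentials configured in .env");
// done.outcome === "complete"; blocked.outcome === "skip"
```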
package/dist/cli.js
CHANGED

@@ -3,9 +3,9 @@ import { Command } from 'commander';
 import { getLlama, resolveModelFile } from 'node-llama-cpp';
 import { mkdirSync, writeFileSync, statSync, existsSync, readFileSync, unlinkSync, chmodSync, readdirSync } from 'fs';
 import { homedir } from 'os';
-import { join, dirname, relative } from 'path';
+import { join, dirname, resolve, relative } from 'path';
 import * as fs from 'fs/promises';
-import { readFile, mkdir, appendFile,
+import { readFile, mkdir, appendFile, writeFile, chmod, rm, rename } from 'fs/promises';
 import { createHash } from 'crypto';
 import { z } from 'zod';
 import { createRequire } from 'module';
@@ -2966,12 +2966,28 @@ Create a structured implementation plan enriched by semantic memory and existing
 5. Synthesize research findings from all agents into a coherent plan. Flag any conflicts between ADRs and proposed approach.
 6. Use \`AskUserQuestion\` to resolve ambiguities: unclear requirements, conflicting ADRs, or priority trade-offs that need user input before decomposing.
 7. Break the goal into concrete, ordered tasks with clear acceptance criteria.
-8. Create
+8. **Create review and compound blocking tasks** so they survive compaction:
+\`\`\`bash
+bd create --title="Review: /compound:review" --type=task --priority=1
+bd create --title="Compound: /compound:compound" --type=task --priority=1
+bd dep add <review-id> <last-work-task> # review depends on work
+bd dep add <compound-id> <review-id> # compound depends on review
+\`\`\`
+These tasks surface via \`bd ready\` after work completes, ensuring review and compound phases are never skipped \u2014 even after context compaction.
+9. Create beads issues and map dependencies:
 \`\`\`bash
 bd create --title="<task>" --type=task --priority=<1-4>
 bd dep add <dependent-task> <blocking-task>
 \`\`\`
-
+10. Output the plan as a structured list with task IDs and dependency graph.
+
+## POST-PLAN VERIFICATION -- MANDATORY
+After creating all tasks, verify review and compound tasks exist:
+\`\`\`bash
+bd list --status=open | grep 'Review:' # Must show a result
+bd list --status=open | grep 'Compound:' # Must show a result
+\`\`\`
+If either is missing, CREATE THEM NOW. The plan is NOT complete without these gates.
 
 ## Memory Integration
 - Call \`memory_search\` before planning to learn from past approaches.
@@ -2999,7 +3015,7 @@ Execute implementation by delegating to an agent team. The lead coordinates and
 ## Workflow
 1. Parse task from \`$ARGUMENTS\`. If empty, run \`bd ready\` to find available tasks.
 2. Mark task in progress: \`bd update <id> --status=in_progress\`.
-3. Call \`memory_search\` with the task description to retrieve relevant lessons. Run \`memory_search\` per agent/subtask so each gets targeted context.
+3. Call \`memory_search\` with the task description to retrieve relevant lessons. Run \`memory_search\` per agent/subtask so each gets targeted context. Display retrieved lessons in your response. Do not silently discard memory results.
 4. Assess complexity to determine team strategy.
 5. If **trivial** (config changes, typos, one-line fixes): handle directly with a single subagent. No AgentTeam needed. Proceed to step 10.
 6. If **simple** or **complex**, create an AgentTeam:
@@ -3026,9 +3042,16 @@ Execute implementation by delegating to an agent team. The lead coordinates and
 14. Run the full test suite to check for regressions.
 15. Close the task: \`bd close <id>\`.
 
-##
-Before
-1.
+## MANDATORY VERIFICATION -- DO NOT CLOSE TASK WITHOUT THIS
+STOP. Before running \`bd close\`, you MUST:
+1. Run \`pnpm test && pnpm lint\` (quality gates)
+2. Run /implementation-reviewer on the changed code
+3. Wait for APPROVED status
+If /implementation-reviewer returns REJECTED: fix ALL issues, re-run tests, resubmit.
+DO NOT close the task until approved. This is INVIOLABLE per CLAUDE.md.
+
+The full 8-step pipeline (invariant-designer through implementation-reviewer) is recommended
+for complex changes. For all changes, /implementation-reviewer is the minimum required gate.
 
 ## Memory Integration
 - Call \`memory_search\` per delegated subtask with the subtask's specific description, not one shared query.
@@ -3127,6 +3150,7 @@ Multi-agent analysis to capture high-quality lessons from completed work into th
 - **Medium**: workflow changes, pattern corrections, tooling preferences
 - **Low**: style preferences, minor optimizations, reinforcements
 7. For approved items, store via \`memory_capture\` with supersedes/related linking to connect with existing memory.
+At minimum, capture 1 lesson per significant decision made during this cycle.
 8. After storing new items, delegate to the **compounding** subagent to run compounding synthesis:
 - Read all lessons from \`.claude/lessons/index.jsonl\`
 - Cluster by embedding similarity (threshold 0.75)
@@ -3156,41 +3180,76 @@ Chain all phases: brainstorm, plan, work, review, compound. End-to-end delivery.
 
 ## Workflow
 1. **Brainstorm phase**: Explore the goal from \`$ARGUMENTS\`.
+- MEMORY CHECK: Call \`memory_search\` with the current goal/task. Display results to user. If relevant items found, state which ones apply and why. If none found, state "No relevant lessons found."
 - Call \`memory_search\` with the goal.
 - \`TeamCreate\` team "brainstorm-<slug>", spawn docs-explorer + code-explorer as parallel teammates.
 - Ask clarifying questions via \`AskUserQuestion\`, explore alternatives.
 - Auto-create ADRs for significant decisions in \`docs/decisions/\`.
 - Create a beads epic from conclusions with \`bd create --type=feature\`.
 - Shut down brainstorm team before next phase.
+- Update epic phase state: \`bd update <epic-id> --notes="Phase: brainstorm COMPLETE | Next: plan"\`
 
 2. **Plan phase**: Structure the work.
+- MEMORY CHECK: Call \`memory_search\` with the current goal/task. Display results to user. If relevant items found, state which ones apply and why. If none found, state "No relevant lessons found."
 - Check for brainstorm epic via \`bd list\`.
 - \`TeamCreate\` team "plan-<slug>", spawn docs-analyst + repo-analyst + memory-analyst as parallel teammates.
 - Break into tasks with dependencies and acceptance criteria.
 - Create beads issues with \`bd create\` and map dependencies with \`bd dep add\`.
+- Create review and compound blocking tasks (\`bd create\` + \`bd dep add\`) so they survive compaction and surface via \`bd ready\` after work completes.
 - Shut down plan team before next phase.
+- Update epic phase state: \`bd update <epic-id> --notes="Phase: plan COMPLETE | Next: work"\`
 
 3. **Work phase**: Implement with adaptive TDD.
+- MEMORY CHECK: Call \`memory_search\` with the current goal/task. Display results to user. If relevant items found, state which ones apply and why. If none found, state "No relevant lessons found."
 - Assess complexity (trivial/simple/complex) to choose strategy.
 - Trivial: single subagent, no team. Simple/complex: \`TeamCreate\` team "work-<task-id>".
 - Spawn test-analyst first, then test-writer + implementer as teammates.
 - Call \`memory_search\` per subtask; \`memory_capture\` after corrections.
 - Commit incrementally. Close tasks as they complete.
 - Run verification gate before marking complete. Shut down work team.
+- Update epic phase state: \`bd update <epic-id> --notes="Phase: work COMPLETE | Next: review"\`
+
+## PHASE GATE 3->4 -- MANDATORY
+Before starting Review, verify ALL work tasks are closed:
+\`\`\`bash
+bd list --status=in_progress # Must return empty
+bd list --status=open | grep -v 'Review:\\|Compound:' # Must return empty (only review+compound should be open)
+\`\`\`
+If any work tasks remain open, DO NOT proceed. Complete them first.
+Update epic phase: \`bd update <epic-id> --notes="Phase: work COMPLETE | Next: review"\`
 
 4. **Review phase**: 11-agent review with severity classification.
+- MEMORY CHECK: Call \`memory_search\` with the current goal/task. Display results to user. If relevant items found, state which ones apply and why. If none found, state "No relevant lessons found."
 - Run quality gates first: \`pnpm test && pnpm lint\`.
 - \`TeamCreate\` team "review-<slug>", spawn all 11 reviewers as parallel teammates.
 - Classify findings as P1 (critical/blocking), P2 (important), P3 (minor).
 - P1 findings must be fixed before proceeding \u2014 they block completion.
 - Submit to \`/implementation-reviewer\` as the mandatory gate. Shut down review team.
+- Update epic phase state: \`bd update <epic-id> --notes="Phase: review COMPLETE | Next: compound"\`
+
+## PHASE GATE 4->5 -- MANDATORY
+Before starting Compound, verify review is complete:
+- /implementation-reviewer must have returned APPROVED
+- All P1 findings must be resolved
+Update epic phase: \`bd update <epic-id> --notes="Phase: review COMPLETE | Next: compound"\`
 
 5. **Compound phase**: Capture learnings.
+- MEMORY CHECK: Call \`memory_search\` with the current goal/task. Display results to user. If relevant items found, state which ones apply and why. If none found, state "No relevant lessons found."
 - \`TeamCreate\` team "compound-<slug>", spawn 6 analysis agents as parallel teammates.
 - Search first with \`memory_search\` to avoid duplicates. Apply quality filters (novelty + specificity).
 - Store novel insights via \`memory_capture\` with supersedes/related links.
 - Update outdated docs and deprecate superseded ADRs.
 - Use \`AskUserQuestion\` to confirm high-severity items. Shut down compound team.
+- Update epic phase state: \`bd update <epic-id> --notes="Phase: compound COMPLETE | Next: close"\`
+
+## FINAL GATE -- EPIC CLOSURE
+Before closing the epic, run:
+\`\`\`bash
+ca verify-gates <epic-id> # Must return PASS
+pnpm test && pnpm lint # Must pass
+\`\`\`
+If verify-gates fails, the missing phase was SKIPPED. Go back and complete it.
+CRITICAL: 3/5 phases is NOT success. All 5 phases are required.
 
 ## Agent Team Pattern
 Each phase creates its own AgentTeam via \`TeamCreate\`, spawns teammates via \`Task\` tool with \`team_name\`, coordinates via \`SendMessage\`, and shuts down with \`shutdown_request\` before the next phase starts. Use subagents (Task without team_name) only for quick lookups like \`memory_search\` or \`bd\` commands.
@@ -3199,7 +3258,7 @@ Each phase creates its own AgentTeam via \`TeamCreate\`, spawns teammates via \`
 - **Skip phases**: Parse \`$ARGUMENTS\` for "from <phase>" (e.g., "from plan"). Skip all phases before the named one.
 - **Progress**: Announce the current phase before starting it (e.g., "[Phase 2/5] Plan").
 - **Retry**: If a phase fails, report the failure and ask the user whether to retry, skip, or abort.
-- **Resume**: After interruption,
+- **Resume**: After interruption, run \`bd show <epic-id>\` and read the notes field for current phase state. Resume from that phase. If no phase state, check \`bd list --status=in_progress\` to infer.
 
 ## Stop Conditions
 - Stop if brainstorm reveals the goal is unclear (ask user).
@@ -3209,6 +3268,18 @@ Each phase creates its own AgentTeam via \`TeamCreate\`, spawns teammates via \`
 ## Memory Integration
 - \`memory_search\` is called in brainstorm, work, and compound phases.
 - \`memory_capture\` is called in work and compound phases.
+
+## SESSION CLOSE -- INVIOLABLE
+Before saying "done" or "complete", ALL of these must pass:
+1. \`ca verify-gates <epic-id>\` -- All workflow gates satisfied
+2. \`pnpm test && pnpm lint\` -- Quality gates green
+3. \`git status\` -- Review changes
+4. \`git add <specific-files>\` -- Stage (never git add .)
+5. \`bd sync\` -- Sync beads
+6. \`git commit -m "..."\` -- Commit
+7. \`bd sync\` -- Post-commit sync
+8. \`git push\` -- Push to remote
+If ANY step fails, fix it. Work is NOT done until git push succeeds.
 `,
 // =========================================================================
 // Utility commands (CLI wrappers)
@@ -3348,6 +3419,7 @@ Create a concrete implementation plan by decomposing work into small, testable t
 7. Define acceptance criteria for each task
 8. Map dependencies between tasks
 9. Create beads issues: \`bd create --title="..." --type=task\`
+10. Create review and compound blocking tasks (\`bd create\` + \`bd dep add\`) that depend on work tasks \u2014 these survive compaction and surface via \`bd ready\` after work completes
 
 ## Memory Integration
 - Call \`memory_search\` for patterns related to the feature area
@@ -3810,7 +3882,12 @@ async function runUpdate(repoRoot, dryRun) {
       }
     }
   }
-
+  let configUpdated = false;
+  if (!dryRun) {
+    const { hooks, mcpServer } = await configureClaudeSettings(repoRoot);
+    configUpdated = hooks || mcpServer;
+  }
+  return { updated, added, skipped, configUpdated };
 }
 async function runStatus(repoRoot) {
   const agentsDir = join(repoRoot, ".claude", "agents", "compound");
@@ -3861,6 +3938,7 @@ function registerSetupAllCommand(setupCommand) {
     if (result2.added > 0) console.log(` ${prefix}Added: ${result2.added} file(s)`);
   }
   if (result2.skipped > 0) console.log(` Skipped: ${result2.skipped} user-customized file(s)`);
+  if (result2.configUpdated) console.log(` ${prefix}Config: hooks/MCP updated`);
   return;
 }
 if (options.status) {
@@ -5055,6 +5133,13 @@ async function getPrimeContext(repoRoot) {
   }
   const lessons = await loadSessionLessons(root, 5);
   let output = TRUST_LANGUAGE_TEMPLATE;
+  const hasMcp = await hasMcpServerInMcpJson(root);
+  if (!hasMcp) {
+    output += `
+WARNING: MCP server not registered. Run 'npx ca setup' to enable memory_search/memory_capture tools.
+
+`;
+  }
   if (lessons.length > 0) {
     const formattedLessons = lessons.map(formatLessonForPrime).join("\n\n");
     output += `
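The `hasMcpServerInMcpJson` helper the new warning relies on is referenced but not shown in this diff. A hypothetical sketch of the check it likely performs over the parsed `.mcp.json` (both the function name and the `"compound-agent"` server key are assumptions, not confirmed by this diff):

```javascript
// Hypothetical sketch: decide whether the MCP server is registered, given the
// raw text of .mcp.json. An unparsable or empty file counts as unregistered.
function mcpServerRegistered(mcpJsonText) {
  try {
    const config = JSON.parse(mcpJsonText);
    // "compound-agent" is an assumed server key; the real key is not shown here.
    return Boolean(config.mcpServers && config.mcpServers["compound-agent"]);
  } catch {
    return false;
  }
}

mcpServerRegistered('{"mcpServers":{"compound-agent":{}}}'); // true
mcpServerRegistered("{}"); // false
```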
@@ -5371,6 +5456,79 @@ function registerTestSummaryCommand(program2) {
     process.exit(exitCode);
   });
 }
+function parseDeps(output) {
+  const deps = [];
+  const lines = output.split("\n");
+  let inDeps = false;
+  for (const line of lines) {
+    if (line.trim() === "DEPENDS ON") {
+      inDeps = true;
+      continue;
+    }
+    if (inDeps) {
+      const match = line.match(
+        /^\s+→\s+(✓|○)\s+\S+-\S+:\s+(.+?)\s+●/
+      );
+      if (match && match[1] && match[2]) {
+        deps.push({ closed: match[1] === "\u2713", title: match[2] });
+      } else if (line.trim() !== "" && !line.startsWith(" ")) {
+        break;
+      }
+    }
+  }
+  return deps;
+}
+function checkGate(deps, prefix, gateName) {
+  const task = deps.find((d) => d.title.startsWith(prefix));
+  if (!task) {
+    return { name: gateName, status: "fail", detail: `No ${gateName.toLowerCase()} found (missing)` };
+  }
+  if (!task.closed) {
+    return { name: gateName, status: "fail", detail: `${gateName} exists but is not closed` };
+  }
+  return { name: gateName, status: "pass" };
+}
+async function runVerifyGates(epicId) {
+  const raw = execSync(`bd show ${epicId}`, { encoding: "utf-8" });
+  const deps = parseDeps(raw);
+  return [
+    checkGate(deps, "Review:", "Review task"),
+    checkGate(deps, "Compound:", "Compound task")
+  ];
+}
+var STATUS_LABEL = {
+  pass: "PASS",
+  fail: "FAIL"
+};
+function registerVerifyGatesCommand(program2) {
+  program2.command("verify-gates <epic-id>").description("Verify workflow gates are satisfied before epic closure").action(async (epicId) => {
+    try {
+      const checks = await runVerifyGates(epicId);
+      console.log(`Gate checks for epic ${epicId}:
+`);
+      for (const check of checks) {
+        const label = STATUS_LABEL[check.status];
+        console.log(` [${label}] ${check.name}`);
+        if (check.detail) {
+          console.log(` ${check.detail}`);
+        }
+      }
+      const failures = checks.filter((c) => c.status === "fail");
+      console.log("");
+      if (failures.length === 0) {
+        console.log("All gates passed.");
+      } else {
+        console.log(`${failures.length} gate(s) failed.`);
+        process.exitCode = 1;
+      }
+    } catch (err) {
+      console.error(
+        `Error: ${err instanceof Error ? err.message : String(err)}`
+      );
+      process.exitCode = 1;
+    }
+  });
+}
 
 // src/commands/capture.ts
 function createLessonFromFlags(trigger, insight, confirmed) {
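The `parseDeps` function in this hunk keys off the human-readable `bd show` layout: a `DEPENDS ON` header, then indented lines where `✓`/`○` marks closed/open and `●` starts trailing metadata. A small self-contained demo of the same parsing logic (the sample output below is invented for illustration; the real `bd show` rendering may differ in detail):

```javascript
// Demo of the DEPENDS ON parsing added in this hunk, against invented sample output.
function parseDepsDemo(output) {
  const deps = [];
  let inDeps = false;
  for (const line of output.split("\n")) {
    if (line.trim() === "DEPENDS ON") { inDeps = true; continue; }
    if (!inDeps) continue;
    // ✓ marks a closed dependency, ○ an open one; ● starts trailing metadata.
    const match = line.match(/^\s+→\s+(✓|○)\s+\S+-\S+:\s+(.+?)\s+●/);
    if (match) {
      deps.push({ closed: match[1] === "✓", title: match[2] });
    } else if (line.trim() !== "" && !line.startsWith(" ")) {
      break; // left the indented dependency block
    }
  }
  return deps;
}

const sample = [
  "proj-1: Ship the feature",
  "DEPENDS ON",
  "  → ✓ proj-12: Review: /compound:review ● closed",
  "  → ○ proj-13: Compound: /compound:compound ● open",
].join("\n");

const deps = parseDepsDemo(sample);
// deps[0] = { closed: true, title: "Review: /compound:review" }
// deps[1] = { closed: false, title: "Compound: /compound:compound" }
```

`checkGate` then only has to look for a title starting with `Review:` or `Compound:` and confirm `closed` is true.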
@@ -5612,6 +5770,269 @@ function registerCaptureCommands(program2) {
|
|
|
5612
5770
|
await handleCapture(this, options);
|
|
5613
5771
|
});
|
|
5614
5772
|
}
|
|
5773
|
+
var EPIC_ID_PATTERN = /^[a-zA-Z0-9_.-]+$/;
|
|
5774
|
+
function buildScriptHeader(timestamp, maxRetries, model, epicIds) {
|
|
5775
|
+
return `#!/usr/bin/env bash
|
|
5776
|
+
# Infinity Loop - Generated by: ca loop
|
|
5777
|
+
# Date: ${timestamp}
|
|
5778
|
+
# Autonomously processes beads epics via Claude Code sessions.
|
|
5779
|
+
#
|
|
5780
|
+
# Usage:
|
|
5781
|
+
# ./infinity-loop.sh
|
|
5782
|
+
# LOOP_DRY_RUN=1 ./infinity-loop.sh # Preview without executing
|
|
5783
|
+
|
|
5784
|
+
set -euo pipefail
|
|
5785
|
+
|
|
5786
|
+
# Config
|
|
5787
|
+
MAX_RETRIES=${maxRetries}
|
|
5788
|
+
MODEL="${model}"
|
|
5789
|
+
EPIC_IDS="${epicIds}"
|
|
5790
|
+
LOG_DIR="agent_logs"
|
|
5791
|
+
|
|
5792
|
+
# Helpers
|
|
5793
|
+
timestamp() { date '+%Y-%m-%d_%H-%M-%S'; }
|
|
5794
|
+
log() { echo "[$(timestamp)] $*"; }
|
|
5795
|
+
die() { log "FATAL: $*"; exit 1; }
|
|
5796
|
+
|
|
5797
|
+
command -v python3 >/dev/null || die "python3 required for JSON parsing"
|
|
5798
|
+
command -v claude >/dev/null || die "claude CLI required"
|
|
5799
|
+
command -v bd >/dev/null || die "bd (beads) CLI required"
|
|
5800
|
+
|
|
5801
|
+
mkdir -p "$LOG_DIR"
|
|
5802
|
+
` + buildEpicSelector() + buildPromptFunction();
|
|
5803
|
+
}
|
|
5804
|
+
function buildEpicSelector() {
|
|
5805
|
+
return `
|
|
5806
|
+
get_next_epic() {
|
|
5807
|
+
if [ -n "$EPIC_IDS" ]; then
|
|
5808
|
+
# From explicit list, find first still-open epic not yet processed
|
|
5809
|
+
for epic_id in $EPIC_IDS; do
|
|
5810
|
+
case " $PROCESSED " in *" $epic_id "*) continue ;; esac
|
|
5811
|
+
local status
|
|
5812
|
+
status=$(bd show "$epic_id" --json 2>/dev/null | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('status',''))" 2>/dev/null || echo "")
|
|
5813
|
+
if [ "$status" = "open" ]; then
|
|
5814
|
+
echo "$epic_id"
|
|
5815
|
+
return 0
|
|
5816
|
+
fi
|
|
5817
|
+
done
|
|
5818
|
+
return 1
|
|
5819
|
+
else
|
|
5820
|
+
# Dynamic: get next ready epic from dependency graph, filtering processed
|
|
5821
|
+
local epic_id
|
|
5822
|
+
epic_id=$(bd list --type=epic --ready --json --limit=10 2>/dev/null | python3 -c "
|
|
5823
|
+
import sys,json
|
|
5824
|
+
processed = set('$PROCESSED'.split())
|
|
5825
|
+
items = json.load(sys.stdin)
|
|
5826
|
+
for item in items:
|
|
5827
|
+
if item['id'] not in processed:
|
|
5828
|
+
print(item['id'])
|
|
5829
|
+
break" 2>/dev/null || echo "")
|
|
5830
|
+
if [ -z "$epic_id" ]; then
|
|
5831
|
+
return 1
|
|
5832
|
+
fi
|
|
5833
|
+
echo "$epic_id"
|
|
5834
|
+
return 0
|
|
5835
|
+
fi
|
|
5836
|
+
}
|
|
5837
|
+
`;
|
|
5838
|
+
}
|
|
5839
|
+
function buildPromptFunction() {
|
|
5840
|
+
return `
|
|
5841
|
+
build_prompt() {
|
|
5842
|
+
local epic_id="$1"
|
|
5843
|
+
cat <<'PROMPT_HEADER'
|
|
5844
|
+
You are running in an autonomous infinity loop. Your task is to fully implement a beads epic.
|
|
5845
|
+
|
|
5846
|
+
## Step 1: Load context
|
|
5847
|
+
Run these commands to prime your session:
|
|
5848
|
+
PROMPT_HEADER
|
|
5849
|
+
cat <<PROMPT_BODY
|
|
5850
|
+
\\\`\\\`\\\`bash
|
|
5851
|
+
npx ca load-session
|
|
5852
|
+
bd show $epic_id
|
|
5853
|
+
\\\`\\\`\\\`
|
|
5854
|
+
|
|
5855
|
+
Read the epic details carefully. Understand scope, acceptance criteria, and sub-tasks.
|
|
5856
|
+
|
|
5857
|
+
## Step 2: Execute the workflow
|
|
5858
|
+
Run the full compound workflow for this epic, starting from the plan phase
|
|
5859
|
+
(brainstorm is already done -- the epic exists):
|
|
5860
|
+
|
|
5861
|
+
/compound:lfg from plan -- Epic: $epic_id
|
|
5862
|
+
|
|
5863
|
+
Work through all phases: plan, work, review, compound.
|
|
5864
|
+
|
|
5865
|
+
## Step 3: On completion
|
|
5866
|
+
When all work is done and tests pass:
|
|
5867
|
+
1. Close the epic: \`bd close $epic_id\`
|
|
5868
|
+
2. Sync beads: \`bd sync\`
|
|
5869
|
+
3. Commit and push all changes
|
|
5870
|
+
4. Output this exact marker on its own line:
|
|
5871
|
+
|
|
5872
|
+
EPIC_COMPLETE
|
|
5873
|
+
|
|
5874
|
+
## Step 4: On failure
|
|
5875
|
+
If you cannot complete the epic after reasonable effort:
|
|
5876
|
+
1. Add a note: \`bd update $epic_id --notes "Loop failed: <reason>"\`
|
|
5877
|
+
2. Output this exact marker on its own line:
|
|
5878
|
+
|
|
5879
|
+
EPIC_FAILED
|
|
5880
|
+
|
|
5881
|
+
## Step 5: On human required
|
|
5882
|
+
If you hit a blocker that REQUIRES human action (account creation, API keys,
|
|
5883
|
+
external service setup, design decisions you cannot make, etc.):
|
|
5884
|
+
1. Add a note: \`bd update $epic_id --notes "Human required: <reason>"\`
|
|
5885
|
+
2. Output this exact marker followed by a short reason on the SAME line:
|
|
5886
|
+
|
|
5887
|
+
HUMAN_REQUIRED: <reason>
|
|
5888
|
+
|
|
5889
|
+
Example: HUMAN_REQUIRED: Need AWS credentials configured in .env
|
|
5890
|
+
|
|
5891
|
+
## Rules
|
|
5892
|
+
- Do NOT ask questions -- there is no human. Make reasonable decisions.
|
|
5893
|
+
- Do NOT stop early -- complete the full workflow.
|
|
5894
|
+
- If tests fail, fix them. Retry up to 3 times before declaring failure.
|
|
5895
|
+
- Use HUMAN_REQUIRED only for true blockers that no amount of retrying can solve.
|
|
5896
|
+
- Commit incrementally as you make progress.
|
|
5897
|
+
PROMPT_BODY
|
|
5898
|
+
}`;
|
|
5899
|
+
}
+function buildMainLoop() {
+  return `
+# Main loop
+COMPLETED=0
+FAILED=0
+SKIPPED=0
+PROCESSED=""
+
+log "Infinity loop starting"
+log "Config: max_retries=$MAX_RETRIES model=$MODEL"
+[ -n "$EPIC_IDS" ] && log "Targeting epics: $EPIC_IDS" || log "Targeting: all ready epics"
+
+while true; do
+  EPIC_ID=$(get_next_epic) || break
+
+  log "Processing epic: $EPIC_ID"
+
+  ATTEMPT=0
+  SUCCESS=false
+
+  while [ $ATTEMPT -le $MAX_RETRIES ]; do
+    ATTEMPT=$((ATTEMPT + 1))
+    LOGFILE="$LOG_DIR/loop_$EPIC_ID-$(timestamp).log"
+
+    log "Attempt $ATTEMPT/$((MAX_RETRIES + 1)) for $EPIC_ID (log: $LOGFILE)"
+
+    if [ -n "\${LOOP_DRY_RUN:-}" ]; then
+      log "[DRY RUN] Would run claude session for $EPIC_ID"
+      SUCCESS=true
+      break
+    fi
+
+    PROMPT=$(build_prompt "$EPIC_ID")
+
+    claude --dangerously-skip-permissions \\
+      --model "$MODEL" \\
+      -p "$PROMPT" \\
+      &> "$LOGFILE" || true
+
+    if grep -q "EPIC_COMPLETE" "$LOGFILE"; then
+      log "Epic $EPIC_ID completed successfully"
+      SUCCESS=true
+      break
+    elif grep -q "HUMAN_REQUIRED" "$LOGFILE"; then
+      REASON=$(grep "HUMAN_REQUIRED:" "$LOGFILE" | head -1 | sed 's/.*HUMAN_REQUIRED: *//')
+      log "Epic $EPIC_ID needs human action: $REASON"
+      bd update "$EPIC_ID" --notes "Human required: $REASON" 2>/dev/null || true
+      SKIPPED=$((SKIPPED + 1))
+      SUCCESS=skip
+      break
+    elif grep -q "EPIC_FAILED" "$LOGFILE"; then
+      log "Epic $EPIC_ID reported failure (attempt $ATTEMPT)"
+    else
+      log "Epic $EPIC_ID session ended without marker (attempt $ATTEMPT)"
+    fi
+
+    if [ $ATTEMPT -le $MAX_RETRIES ]; then
+      log "Retrying $EPIC_ID..."
+      sleep 5
+    fi
+  done
+
+  if [ "$SUCCESS" = true ]; then
+    COMPLETED=$((COMPLETED + 1))
+    log "Epic $EPIC_ID done. Completed so far: $COMPLETED"
+  elif [ "$SUCCESS" = skip ]; then
+    log "Epic $EPIC_ID skipped (human required). Continuing."
+  else
+    FAILED=$((FAILED + 1))
+    log "Epic $EPIC_ID failed after $((MAX_RETRIES + 1)) attempts. Stopping loop."
+    PROCESSED="$PROCESSED $EPIC_ID"
+    break
+  fi
+
+  PROCESSED="$PROCESSED $EPIC_ID"
+done
+
+log "Loop finished. Completed: $COMPLETED, Failed: $FAILED, Skipped: $SKIPPED"
+[ $FAILED -eq 0 ] && exit 0 || exit 1`;
+}
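One detail worth noting in the loop above: `ATTEMPT` starts at 0 and the inner `while` tests `-le $MAX_RETRIES` before incrementing, so each epic gets `MAX_RETRIES + 1` sessions in total — which matches the `$ATTEMPT/$((MAX_RETRIES + 1))` log line, and means the CLI default of 1 retry allows two sessions. A small JavaScript model of that counting (illustrative only):

```javascript
// Model the generated script's retry loop: with maxRetries = N, the body
// runs N + 1 times when no attempt ever produces a success marker.
function countAttempts(maxRetries) {
  let attempt = 0;
  let attempts = 0;
  while (attempt <= maxRetries) {
    attempt += 1;
    attempts += 1; // a claude session would run here; assume it never succeeds
  }
  return attempts;
}

console.log(countAttempts(1)); // → 2
```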
+function validateOptions(options) {
+  if (!Number.isInteger(options.maxRetries) || options.maxRetries < 0) {
+    throw new Error(`Invalid maxRetries: must be a non-negative integer, got ${options.maxRetries}`);
+  }
+  if (options.epics) {
+    for (const id of options.epics) {
+      if (!EPIC_ID_PATTERN.test(id)) {
+        throw new Error(`Invalid epic ID "${id}": must match ${EPIC_ID_PATTERN}`);
+      }
+    }
+  }
+}
+function generateLoopScript(options) {
+  validateOptions(options);
+  const epicIds = options.epics?.join(" ") ?? "";
+  const timestamp = (/* @__PURE__ */ new Date()).toISOString();
+  return buildScriptHeader(timestamp, options.maxRetries, options.model, epicIds) + buildMainLoop();
+}
+async function handleLoop(cmd, options) {
+  const outputPath = resolve(options.output ?? "./infinity-loop.sh");
+  if (existsSync(outputPath) && !options.force) {
+    out.error(`File already exists: ${outputPath}`);
+    out.info("Use --force to overwrite");
+    process.exitCode = 1;
+    return;
+  }
+  const maxRetries = Number(options.maxRetries ?? 1);
+  if (!Number.isInteger(maxRetries) || maxRetries < 0) {
+    out.error(`Invalid --max-retries: must be a non-negative integer, got "${options.maxRetries}"`);
+    process.exitCode = 1;
+    return;
+  }
+  let script;
+  try {
+    script = generateLoopScript({
+      epics: options.epics,
+      maxRetries,
+      model: options.model ?? "claude-opus-4-6"
+    });
+  } catch (err) {
+    out.error(err.message);
+    process.exitCode = 1;
+    return;
+  }
+  await mkdir(dirname(outputPath), { recursive: true });
+  await writeFile(outputPath, script, "utf-8");
+  await chmod(outputPath, 493);
+  out.success(`Generated infinity loop script: ${outputPath}`);
+  out.info("Run it with: " + outputPath);
+  out.info("Preview with: LOOP_DRY_RUN=1 " + outputPath);
+}
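The `chmod(outputPath, 493)` call reads oddly because the bundler rewrote an octal literal into decimal: 493 is `0o755` (`rwxr-xr-x`), the standard mode that makes the generated script directly runnable. The equivalence is easy to check:

```javascript
// 493 decimal is the familiar 0o755 mode used for executable scripts.
console.log((493).toString(8)); // → "755"
console.log(0o755 === 493);     // → true
```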
+function registerLoopCommands(program2) {
+  program2.command("loop")
+    .description("Generate infinity loop script for epic tasks")
+    .option("--epics <ids...>", "Specific epic IDs to process")
+    .option("-o, --output <path>", "Output script path", "./infinity-loop.sh")
+    .option("--max-retries <n>", "Max retries per epic on failure", "1")
+    .option("--model <model>", "Claude model to use", "claude-opus-4-6")
+    .option("--force", "Overwrite existing script")
+    .action(async function(options) {
+      await handleLoop(this, options);
+    });
+}
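Commander hands option values to the action as strings — the `"1"` default for `--max-retries` above is a string too — which is why `handleLoop` coerces with `Number(...)` and re-validates with `Number.isInteger`. A sketch of that coercion behavior (the helper name is illustrative, not part of the package):

```javascript
// CLI option values arrive as strings; Number() plus Number.isInteger()
// rejects the junk cases handleLoop guards against.
function parseMaxRetries(raw) {
  const n = Number(raw ?? 1);
  return Number.isInteger(n) && n >= 0 ? n : null; // null signals a usage error
}

console.log(parseMaxRetries("3"));   // → 3
console.log(parseMaxRetries("2.5")); // → null
console.log(parseMaxRetries("abc")); // → null (Number("abc") is NaN)
```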
 function parseLimitOrExit(rawLimit, optionName, commandName) {
   try {
     return parseLimit(rawLimit, optionName);
@@ -5872,6 +6293,7 @@ function registerManagementCommands(program2) {
   registerReviewerCommand(program2);
   registerRulesCommands(program2);
   registerTestSummaryCommand(program2);
+  registerVerifyGatesCommand(program2);
 }

 // src/cli.ts
@@ -5897,6 +6319,7 @@ registerRetrievalCommands(program);
 registerManagementCommands(program);
 registerSetupCommands(program);
 registerCompoundCommands(program);
+registerLoopCommands(program);
 program.parse();
 //# sourceMappingURL=cli.js.map
 //# sourceMappingURL=cli.js.map