macro-agent 0.1.8 → 0.1.11
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CLAUDE.md +263 -33
- package/README.md +781 -131
- package/dist/acp/claude-code-replay.d.ts +11 -0
- package/dist/acp/claude-code-replay.d.ts.map +1 -0
- package/dist/acp/claude-code-replay.js +190 -0
- package/dist/acp/claude-code-replay.js.map +1 -0
- package/dist/acp/macro-agent.d.ts.map +1 -1
- package/dist/acp/macro-agent.js +192 -7
- package/dist/acp/macro-agent.js.map +1 -1
- package/dist/acp/types.d.ts +9 -0
- package/dist/acp/types.d.ts.map +1 -1
- package/dist/acp/types.js.map +1 -1
- package/dist/adapters/tasks-adapter.d.ts.map +1 -1
- package/dist/adapters/tasks-adapter.js +3 -0
- package/dist/adapters/tasks-adapter.js.map +1 -1
- package/dist/adapters/types.d.ts +1 -0
- package/dist/adapters/types.d.ts.map +1 -1
- package/dist/agent/agent-manager-v2.d.ts +21 -0
- package/dist/agent/agent-manager-v2.d.ts.map +1 -1
- package/dist/agent/agent-manager-v2.js +308 -54
- package/dist/agent/agent-manager-v2.js.map +1 -1
- package/dist/agent/agent-manager.d.ts +12 -0
- package/dist/agent/agent-manager.d.ts.map +1 -1
- package/dist/agent/agent-manager.js.map +1 -1
- package/dist/agent/agent-store.d.ts +10 -0
- package/dist/agent/agent-store.d.ts.map +1 -1
- package/dist/agent/agent-store.js +22 -0
- package/dist/agent/agent-store.js.map +1 -1
- package/dist/agent/types.d.ts +15 -2
- package/dist/agent/types.d.ts.map +1 -1
- package/dist/agent/types.js.map +1 -1
- package/dist/boot-v2.d.ts +129 -1
- package/dist/boot-v2.d.ts.map +1 -1
- package/dist/boot-v2.js +359 -8
- package/dist/boot-v2.js.map +1 -1
- package/dist/cli/acp.js +4 -0
- package/dist/cli/acp.js.map +1 -1
- package/dist/cli/index.js +56 -0
- package/dist/cli/index.js.map +1 -1
- package/dist/cognitive/macro-agent-backend.d.ts.map +1 -1
- package/dist/cognitive/macro-agent-backend.js +40 -22
- package/dist/cognitive/macro-agent-backend.js.map +1 -1
- package/dist/integrations/skilltree.d.ts.map +1 -1
- package/dist/integrations/skilltree.js +1 -0
- package/dist/integrations/skilltree.js.map +1 -1
- package/dist/lifecycle/cascade.d.ts +25 -2
- package/dist/lifecycle/cascade.d.ts.map +1 -1
- package/dist/lifecycle/cascade.js +70 -2
- package/dist/lifecycle/cascade.js.map +1 -1
- package/dist/lifecycle/cleanup.d.ts +33 -2
- package/dist/lifecycle/cleanup.d.ts.map +1 -1
- package/dist/lifecycle/cleanup.js +28 -6
- package/dist/lifecycle/cleanup.js.map +1 -1
- package/dist/lifecycle/handlers-v2.d.ts +7 -0
- package/dist/lifecycle/handlers-v2.d.ts.map +1 -1
- package/dist/lifecycle/handlers-v2.js +28 -2
- package/dist/lifecycle/handlers-v2.js.map +1 -1
- package/dist/lifecycle/types.d.ts +11 -0
- package/dist/lifecycle/types.d.ts.map +1 -1
- package/dist/lifecycle/types.js.map +1 -1
- package/dist/map/acp-bridge.d.ts +9 -0
- package/dist/map/acp-bridge.d.ts.map +1 -1
- package/dist/map/acp-bridge.js +15 -2
- package/dist/map/acp-bridge.js.map +1 -1
- package/dist/map/cascade-action-handler.d.ts +24 -0
- package/dist/map/cascade-action-handler.d.ts.map +1 -0
- package/dist/map/cascade-action-handler.js +170 -0
- package/dist/map/cascade-action-handler.js.map +1 -0
- package/dist/map/cascade-bridge.d.ts +44 -0
- package/dist/map/cascade-bridge.d.ts.map +1 -0
- package/dist/map/cascade-bridge.js +294 -0
- package/dist/map/cascade-bridge.js.map +1 -0
- package/dist/map/coordination-handler.d.ts.map +1 -1
- package/dist/map/coordination-handler.js +12 -1
- package/dist/map/coordination-handler.js.map +1 -1
- package/dist/map/lifecycle-bridge.d.ts +1 -1
- package/dist/map/lifecycle-bridge.d.ts.map +1 -1
- package/dist/map/lifecycle-bridge.js +58 -23
- package/dist/map/lifecycle-bridge.js.map +1 -1
- package/dist/map/server.d.ts.map +1 -1
- package/dist/map/server.js +219 -7
- package/dist/map/server.js.map +1 -1
- package/dist/map/sidecar.d.ts.map +1 -1
- package/dist/map/sidecar.js +49 -2
- package/dist/map/sidecar.js.map +1 -1
- package/dist/map/types.d.ts +22 -0
- package/dist/map/types.d.ts.map +1 -1
- package/dist/mcp/tools/done-v2.d.ts.map +1 -1
- package/dist/mcp/tools/done-v2.js +8 -0
- package/dist/mcp/tools/done-v2.js.map +1 -1
- package/dist/teams/team-manager-v2.d.ts.map +1 -1
- package/dist/teams/team-manager-v2.js +26 -0
- package/dist/teams/team-manager-v2.js.map +1 -1
- package/dist/teams/team-runtime-v2.d.ts.map +1 -1
- package/dist/teams/team-runtime-v2.js +16 -3
- package/dist/teams/team-runtime-v2.js.map +1 -1
- package/dist/workspace/config.d.ts +10 -10
- package/dist/workspace/config.d.ts.map +1 -1
- package/dist/workspace/config.js +4 -4
- package/dist/workspace/config.js.map +1 -1
- package/dist/workspace/git-cascade-adapter.d.ts +510 -0
- package/dist/workspace/git-cascade-adapter.d.ts.map +1 -0
- package/dist/workspace/git-cascade-adapter.js +934 -0
- package/dist/workspace/git-cascade-adapter.js.map +1 -0
- package/dist/workspace/index.d.ts +3 -3
- package/dist/workspace/index.d.ts.map +1 -1
- package/dist/workspace/index.js +4 -4
- package/dist/workspace/index.js.map +1 -1
- package/dist/workspace/landing/direct-push.d.ts +20 -0
- package/dist/workspace/landing/direct-push.d.ts.map +1 -0
- package/dist/workspace/landing/direct-push.js +74 -0
- package/dist/workspace/landing/direct-push.js.map +1 -0
- package/dist/workspace/landing/index.d.ts +29 -0
- package/dist/workspace/landing/index.d.ts.map +1 -0
- package/dist/workspace/landing/index.js +37 -0
- package/dist/workspace/landing/index.js.map +1 -0
- package/dist/workspace/landing/merge-to-parent.d.ts +41 -0
- package/dist/workspace/landing/merge-to-parent.d.ts.map +1 -0
- package/dist/workspace/landing/merge-to-parent.js +186 -0
- package/dist/workspace/landing/merge-to-parent.js.map +1 -0
- package/dist/workspace/landing/optimistic-push.d.ts +16 -0
- package/dist/workspace/landing/optimistic-push.d.ts.map +1 -0
- package/dist/workspace/landing/optimistic-push.js +27 -0
- package/dist/workspace/landing/optimistic-push.js.map +1 -0
- package/dist/workspace/landing/queue-to-branch.d.ts +24 -0
- package/dist/workspace/landing/queue-to-branch.d.ts.map +1 -0
- package/dist/workspace/landing/queue-to-branch.js +79 -0
- package/dist/workspace/landing/queue-to-branch.js.map +1 -0
- package/dist/workspace/merge-queue/merge-queue.d.ts +10 -0
- package/dist/workspace/merge-queue/merge-queue.d.ts.map +1 -1
- package/dist/workspace/merge-queue/merge-queue.js +10 -0
- package/dist/workspace/merge-queue/merge-queue.js.map +1 -1
- package/dist/workspace/merge-queue/types.d.ts +16 -2
- package/dist/workspace/merge-queue/types.d.ts.map +1 -1
- package/dist/workspace/merge-queue/types.js +9 -0
- package/dist/workspace/merge-queue/types.js.map +1 -1
- package/dist/workspace/pool/types.d.ts +1 -0
- package/dist/workspace/pool/types.d.ts.map +1 -1
- package/dist/workspace/pool/worktree-pool.d.ts.map +1 -1
- package/dist/workspace/pool/worktree-pool.js +1 -0
- package/dist/workspace/pool/worktree-pool.js.map +1 -1
- package/dist/workspace/recovery/abandon.d.ts +15 -0
- package/dist/workspace/recovery/abandon.d.ts.map +1 -0
- package/dist/workspace/recovery/abandon.js +45 -0
- package/dist/workspace/recovery/abandon.js.map +1 -0
- package/dist/workspace/recovery/auto-resolve.d.ts +27 -0
- package/dist/workspace/recovery/auto-resolve.d.ts.map +1 -0
- package/dist/workspace/recovery/auto-resolve.js +99 -0
- package/dist/workspace/recovery/auto-resolve.js.map +1 -0
- package/dist/workspace/recovery/defer.d.ts +15 -0
- package/dist/workspace/recovery/defer.d.ts.map +1 -0
- package/dist/workspace/recovery/defer.js +16 -0
- package/dist/workspace/recovery/defer.js.map +1 -0
- package/dist/workspace/recovery/escalate.d.ts +16 -0
- package/dist/workspace/recovery/escalate.d.ts.map +1 -0
- package/dist/workspace/recovery/escalate.js +24 -0
- package/dist/workspace/recovery/escalate.js.map +1 -0
- package/dist/workspace/recovery/index.d.ts +32 -0
- package/dist/workspace/recovery/index.d.ts.map +1 -0
- package/dist/workspace/recovery/index.js +45 -0
- package/dist/workspace/recovery/index.js.map +1 -0
- package/dist/workspace/recovery/spawn-resolver.d.ts +45 -0
- package/dist/workspace/recovery/spawn-resolver.d.ts.map +1 -0
- package/dist/workspace/recovery/spawn-resolver.js +118 -0
- package/dist/workspace/recovery/spawn-resolver.js.map +1 -0
- package/dist/workspace/recovery/types.d.ts +63 -0
- package/dist/workspace/recovery/types.d.ts.map +1 -0
- package/dist/workspace/recovery/types.js +12 -0
- package/dist/workspace/recovery/types.js.map +1 -0
- package/dist/workspace/topology/index.d.ts +9 -0
- package/dist/workspace/topology/index.d.ts.map +1 -0
- package/dist/workspace/topology/index.js +8 -0
- package/dist/workspace/topology/index.js.map +1 -0
- package/dist/workspace/topology/no-workspace.d.ts +18 -0
- package/dist/workspace/topology/no-workspace.d.ts.map +1 -0
- package/dist/workspace/topology/no-workspace.js +25 -0
- package/dist/workspace/topology/no-workspace.js.map +1 -0
- package/dist/workspace/topology/types.d.ts +97 -0
- package/dist/workspace/topology/types.d.ts.map +1 -0
- package/dist/workspace/topology/types.js +20 -0
- package/dist/workspace/topology/types.js.map +1 -0
- package/dist/workspace/topology/yaml-driven.d.ts +69 -0
- package/dist/workspace/topology/yaml-driven.d.ts.map +1 -0
- package/dist/workspace/topology/yaml-driven.js +273 -0
- package/dist/workspace/topology/yaml-driven.js.map +1 -0
- package/dist/workspace/types-v3.d.ts +117 -0
- package/dist/workspace/types-v3.d.ts.map +1 -0
- package/dist/workspace/types-v3.js +20 -0
- package/dist/workspace/types-v3.js.map +1 -0
- package/dist/workspace/types.d.ts +162 -17
- package/dist/workspace/types.d.ts.map +1 -1
- package/dist/workspace/workspace-manager.d.ts +101 -13
- package/dist/workspace/workspace-manager.d.ts.map +1 -1
- package/dist/workspace/workspace-manager.js +416 -13
- package/dist/workspace/workspace-manager.js.map +1 -1
- package/dist/workspace/yaml-schema.d.ts +254 -0
- package/dist/workspace/yaml-schema.d.ts.map +1 -0
- package/dist/workspace/yaml-schema.js +170 -0
- package/dist/workspace/yaml-schema.js.map +1 -0
- package/docs/conflict-recovery.md +472 -0
- package/docs/design/task-dispatcher.md +880 -0
- package/docs/git-cascade-integration-gaps.md +678 -0
- package/docs/workspace-interfaces.md +731 -0
- package/docs/workspace-redesign-plan.md +302 -0
- package/package.json +6 -5
- package/src/__tests__/boot-v2.test.ts +435 -0
- package/src/__tests__/e2e/acp-over-map.e2e.test.ts +92 -0
- package/src/__tests__/e2e/auto-sync.e2e.test.ts +257 -0
- package/src/__tests__/e2e/bootstrap.e2e.test.ts +319 -0
- package/src/__tests__/e2e/cascade-rebase.e2e.test.ts +254 -0
- package/src/__tests__/e2e/cli-run.e2e.test.ts +167 -0
- package/src/__tests__/e2e/dispatch-coordination.e2e.test.ts +495 -0
- package/src/__tests__/e2e/dispatch-live.e2e.test.ts +564 -0
- package/src/__tests__/e2e/dispatch-opentasks.e2e.test.ts +496 -0
- package/src/__tests__/e2e/dispatch-phase2-live.e2e.test.ts +456 -0
- package/src/__tests__/e2e/dispatch-phase2.e2e.test.ts +386 -0
- package/src/__tests__/e2e/dispatch.e2e.test.ts +376 -0
- package/src/__tests__/e2e/self-driving-v3.e2e.test.ts +197 -0
- package/src/__tests__/e2e/spawn-resolver.e2e.test.ts +200 -0
- package/src/__tests__/e2e/workspace-lifecycle.e2e.test.ts +30 -22
- package/src/__tests__/e2e/workspace-v3.e2e.test.ts +413 -0
- package/src/acp/__tests__/claude-code-replay.test.ts +225 -0
- package/src/acp/__tests__/macro-agent.test.ts +39 -1
- package/src/acp/claude-code-replay.ts +208 -0
- package/src/acp/macro-agent.ts +203 -10
- package/src/acp/types.ts +10 -0
- package/src/adapters/__tests__/tasks-adapter.test.ts +1 -0
- package/src/adapters/tasks-adapter.ts +3 -0
- package/src/adapters/types.ts +1 -0
- package/src/agent/__tests__/agent-manager-topology.test.ts +73 -0
- package/src/agent/__tests__/agent-manager-v2.test.ts +66 -0
- package/src/agent/__tests__/agent-store.test.ts +52 -0
- package/src/agent/__tests__/task-ref-resolution.test.ts +231 -0
- package/src/agent/agent-manager-v2.ts +372 -59
- package/src/agent/agent-manager.ts +14 -0
- package/src/agent/agent-store.ts +24 -0
- package/src/agent/types.ts +16 -2
- package/src/boot-v2.ts +589 -35
- package/src/cli/acp.ts +4 -0
- package/src/cli/index.ts +61 -0
- package/src/cognitive/macro-agent-backend.ts +45 -29
- package/src/integrations/skilltree.ts +1 -0
- package/src/lifecycle/__tests__/cascade-consolidation.test.ts +240 -0
- package/src/lifecycle/cascade.ts +77 -2
- package/src/lifecycle/cleanup.ts +52 -3
- package/src/lifecycle/handlers-v2.ts +40 -3
- package/src/lifecycle/types.ts +12 -0
- package/src/map/__tests__/cascade-bridge.test.ts +229 -0
- package/src/map/__tests__/emit-event.test.ts +71 -0
- package/src/map/__tests__/lifecycle-bridge.test.ts +86 -10
- package/src/map/acp-bridge.ts +26 -3
- package/src/map/cascade-action-handler.ts +205 -0
- package/src/map/cascade-bridge.ts +339 -0
- package/src/map/coordination-handler.ts +13 -1
- package/src/map/lifecycle-bridge.ts +52 -17
- package/src/map/server.ts +225 -7
- package/src/map/sidecar.ts +48 -1
- package/src/map/types.ts +23 -0
- package/src/mcp/tools/done-v2.ts +9 -0
- package/src/teams/team-manager-v2.ts +37 -0
- package/src/teams/team-runtime-v2.ts +23 -3
- package/src/workspace/__tests__/{dataplane-adapter.test.ts → git-cascade-adapter.test.ts} +209 -14
- package/src/workspace/__tests__/land-dispatch.test.ts +214 -0
- package/src/workspace/__tests__/self-driving-yaml.test.ts +114 -0
- package/src/workspace/__tests__/shared-worktree-refcount.test.ts +154 -0
- package/src/workspace/__tests__/standalone-mode.test.ts +118 -0
- package/src/workspace/__tests__/workspace-manager-v3.test.ts +245 -0
- package/src/workspace/__tests__/yaml-schema.test.ts +210 -0
- package/src/workspace/config.ts +11 -11
- package/src/workspace/git-cascade-adapter.ts +1213 -0
- package/src/workspace/index.ts +11 -11
- package/src/workspace/landing/__tests__/strategies.test.ts +184 -0
- package/src/workspace/landing/direct-push.ts +91 -0
- package/src/workspace/landing/index.ts +40 -0
- package/src/workspace/landing/merge-to-parent.ts +229 -0
- package/src/workspace/landing/optimistic-push.ts +36 -0
- package/src/workspace/landing/queue-to-branch.ts +108 -0
- package/src/workspace/merge-queue/merge-queue.ts +10 -0
- package/src/workspace/merge-queue/types.ts +16 -2
- package/src/workspace/pool/__tests__/worktree-pool.integration.test.ts +5 -5
- package/src/workspace/pool/types.ts +1 -0
- package/src/workspace/pool/worktree-pool.ts +1 -0
- package/src/workspace/recovery/__tests__/auto-resolve-integration.test.ts +127 -0
- package/src/workspace/recovery/__tests__/spawn-resolver.test.ts +139 -0
- package/src/workspace/recovery/__tests__/strategies.test.ts +145 -0
- package/src/workspace/recovery/abandon.ts +51 -0
- package/src/workspace/recovery/auto-resolve.ts +119 -0
- package/src/workspace/recovery/defer.ts +23 -0
- package/src/workspace/recovery/escalate.ts +30 -0
- package/src/workspace/recovery/index.ts +58 -0
- package/src/workspace/recovery/spawn-resolver.ts +152 -0
- package/src/workspace/recovery/types.ts +54 -0
- package/src/workspace/topology/__tests__/yaml-driven.test.ts +345 -0
- package/src/workspace/topology/index.ts +18 -0
- package/src/workspace/topology/no-workspace.ts +39 -0
- package/src/workspace/topology/types.ts +116 -0
- package/src/workspace/topology/yaml-driven.ts +316 -0
- package/src/workspace/types-v3.ts +162 -0
- package/src/workspace/types.ts +211 -20
- package/src/workspace/workspace-manager.ts +533 -19
- package/src/workspace/yaml-schema.ts +216 -0
- package/dist/workspace/dataplane-adapter.d.ts +0 -260
- package/dist/workspace/dataplane-adapter.d.ts.map +0 -1
- package/dist/workspace/dataplane-adapter.js +0 -416
- package/dist/workspace/dataplane-adapter.js.map +0 -1
- package/src/workspace/dataplane-adapter.ts +0 -546
@@ -0,0 +1,880 @@
# Task Dispatcher Design

## Implementation Status

**Extracted to standalone package: [`swarm-dispatch`](https://www.npmjs.com/package/swarm-dispatch).**

The dispatch logic originally described in this document was first implemented inside macro-agent's `trigger/dispatch/` directory, then extracted to a runtime-agnostic npm package. macro-agent now consumes `swarm-dispatch` via two thin adapters in `boot-v2.ts`:

- **DispatchTaskSource** — wraps `TasksAdapter` (opentasks IPC)
- **DispatchAgentRuntime** — wraps `AgentManagerV2` (spawn, terminate, onStopped)

The dispatcher is exposed on `MacroAgentSystemV2.taskDispatcher` (optional). Dispatch events are bridged to MAP via `mapSidecar.emitEvent()` for observability.

**What moved to swarm-dispatch:**

- Dispatch tracker (concurrency, retry, state reconstruction)
- Eligibility checker (static filters + heuristic scoring)
- Prompt builder (default markdown template)
- Reconciliation (external state change detection)
- The dispatch loop itself (poll → claim → spawn → monitor)
- OpenTasks adapter (`createOpenTasksSource`)

**What stays in macro-agent:**

- Boot wiring (~40 lines in `boot-v2.ts`)
- AgentManagerV2 adapter (spawn with `parent: null`, lifecycle events)
- MAP event bridge (dispatch events → MAP sidecar)
- E2E tests (3 files: mocked, live agent, live agent + opentasks)

The rest of this document is the original design that informed the implementation.

---

## Overview (Original Design)

A dispatch mode for macro-agent's trigger system that polls opentasks for ready work and spawns agents to execute it. It turns the existing event-driven trigger architecture into an autonomous work processor — the swarmkit equivalent of Symphony's daemon loop, built on primitives that already exist.

## Problem

Swarmkit can coordinate agents and track tasks across systems, but it has no "point at a backlog, walk away" mode. Today, agents must be manually spawned or externally triggered. There is no continuous loop that:

1. Watches for ready tasks
2. Claims and dispatches them to agents
3. Manages concurrency, retries, and cleanup

## Architecture

The dispatcher is **not a new system** — it's three components wired into the existing trigger pipeline:

```
┌───────────────────────────────────────────────────────────────┐
│                       trigger system v2                       │
│                                                               │
│  ┌──────────┐     ┌──────────────┐     ┌───────────────────┐  │
│  │ CronJob  │────▶│ TriggerEvent │────▶│ TaskDispatch      │  │
│  │ "poll"   │     │ (internal)   │     │ RoutingStrategy   │  │
│  │ every N  │     └──────────────┘     │                   │  │
│  └──────────┘                          │ query ready ──────┼──▶ opentasks
│               ┌────────────────────────│ check capacity    │  │
│  ┌──────────┐ │                        │ claim + spawn ────┼──▶ agentManager
│  │ Reconcile│─┘                        │ reconcile state   │  │
│  │ CronJob  │   (separate cadence)     │ track dispatch    │  │
│  │ every M  │                          └───────────────────┘  │
│  └──────────┘                                                 │
│                                                               │
│  ┌───────────────────┐     ┌───────────────────────────────┐  │
│  │ DispatchLifecycle │────▶│ onLifecycleEvent() callback   │  │
│  │ Listener          │     │ + inbox signal filter         │  │
│  │                   │     │ + retry on failure            │  │
│  └───────────────────┘     └───────────────────────────────┘  │
│                                                               │
│  ┌───────────────────┐                                        │
│  │ DispatchTracker   │  in-memory state: active dispatches,  │
│  │                   │  retry queue, concurrency counts      │
│  │                   │  + reconstruction from opentasks      │
│  │                   │  on boot                              │
│  └───────────────────┘                                        │
└───────────────────────────────────────────────────────────────┘
```
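The poll → claim → spawn portion of this loop can be sketched as follows. The `TaskSource` and `AgentRuntime` shapes here are illustrative stand-ins, not the actual swarm-dispatch interfaces:

```typescript
// Illustrative shapes — not the real swarm-dispatch interfaces.
interface Task { id: string; prompt: string; }

interface TaskSource {
  listReady(): Promise<Task[]>;
  claim(taskId: string): Promise<boolean>; // false if another dispatcher won
}

interface AgentRuntime {
  spawn(task: Task): Promise<{ agentId: string }>;
}

// One dispatch cycle: query ready tasks, respect capacity, claim, spawn.
async function dispatchCycle(
  source: TaskSource,
  runtime: AgentRuntime,
  active: Set<string>,
  maxConcurrent: number,
): Promise<number> {
  let spawned = 0;
  for (const task of await source.listReady()) {
    if (active.size >= maxConcurrent) break;      // capacity check
    if (active.has(task.id)) continue;            // already dispatched
    if (!(await source.claim(task.id))) continue; // lost the claim race
    const { agentId } = await runtime.spawn(task);
    active.add(task.id);
    spawned++;
    void agentId; // monitored by the lifecycle listener in the real design
  }
  return spawned;
}
```

The "monitor" half of the loop lives in the lifecycle listener, which is why the cycle only records the spawn here.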
---

## Design Decisions

### 1. Parentless Agents

**Decision: Dispatched agents spawn as root agents (`parent: null`).**

AgentManagerV2 already supports parentless agents — they're treated as "head managers" with `isHeadManager: true`. The lifecycle works without a parent:

- `spawn()` accepts `parent: undefined` with no validation error (agent-manager-v2.ts:348)
- Signal emission is skipped when `!context.parentId` (handlers-v2.ts:59)
- Cascade termination works regardless of parent (agent-manager-v2.ts:801-822)

This means dispatched agents are **peers, not children**. They don't report upward via signals — the dispatcher tracks them directly via lifecycle events (see §7 below).

**Why not a synthetic coordinator parent?** A headless coordinator that nobody interacts with adds complexity for no benefit. The dispatcher itself is the coordination layer — it tracks state, manages retries, and handles lifecycle. A parent agent would just be a proxy for logic that already lives in the dispatch strategy.

**Implication:** Dispatched agents can't use the `done()` signal path to notify a parent. Instead, the dispatcher listens to `onLifecycleEvent()` callbacks (type `"stopped"`) and inbox signals via `addSignalFilter()`. See §7.
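A minimal sketch of that dispatcher-side completion path — since dispatched agents have no parent to signal, the dispatcher observes lifecycle events directly. The event and handler shapes below are assumptions for illustration, not the actual `onLifecycleEvent()` types:

```typescript
// Hypothetical lifecycle event shape for illustration.
interface LifecycleEvent { type: "started" | "stopped"; agentId: string; exitOk: boolean; }

function makeStoppedHandler(
  dispatches: Map<string, string>, // agentId -> taskId
  onDone: (taskId: string) => void,
  onFailed: (taskId: string) => void,
): (event: LifecycleEvent) => void {
  return (event) => {
    if (event.type !== "stopped") return;
    const taskId = dispatches.get(event.agentId);
    if (!taskId) return; // not one of our dispatches
    dispatches.delete(event.agentId);
    (event.exitOk ? onDone : onFailed)(taskId);
  };
}
```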
---

### 2. Hybrid Push/Pull Dispatch

**Decision: Support both modes, configurable per dispatch config. Default to push.**

The dispatcher supports three modes:

```typescript
export type DispatchMode =
  | "push"    // Dispatcher assigns task, spawns dedicated agent
  | "pull"    // Dispatcher maintains a pool of idle workers that self-claim
  | "hybrid"; // Dispatcher pushes high-priority, workers pull the rest
```

#### Push Mode (default)

Dispatcher claims task → spawns agent with task prompt → agent works on assigned task → done.

- **Pros:** Predictable, simple lifecycle, one agent per task
- **Cons:** Cold start per task (agent spawn overhead)
- **Best for:** Heavy tasks, tasks needing specific roles/prompts

#### Pull Mode

Dispatcher maintains N idle worker agents. Workers call `claim_task` / `list_claimable_tasks` to self-select work. The dispatcher respawns workers when they terminate or the pool drops below threshold.

```typescript
export interface PullModeConfig {
  /** Target number of idle workers to maintain */
  poolSize: number;
  /** Role for pool workers */
  workerRole: string;
  /** How long a worker can be idle before termination (ms) */
  idleTimeoutMs?: number;
  /** Whether workers should loop (claim next task after completing one) */
  workerLoop: boolean;
}
```

- **Pros:** Amortizes spawn cost, workers self-select based on capability
- **Cons:** Pool management complexity, workers may compete for same tasks
- **Best for:** Many small tasks, fast throughput
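The pool top-up described above can be sketched as a deficit calculation against `poolSize`. The `WorkerPool` shape is illustrative, not the real agent-manager API:

```typescript
// Illustrative pool interface — not the actual agent-manager API.
interface WorkerPool {
  idleCount(): number;
  spawnWorker(role: string): Promise<void>;
}

async function maintainPool(
  pool: WorkerPool,
  config: { poolSize: number; workerRole: string },
): Promise<number> {
  // Snapshot the deficit once, so workers spawned during this pass
  // (not yet counted as idle) don't cause over-spawning.
  const deficit = Math.max(0, config.poolSize - pool.idleCount());
  for (let i = 0; i < deficit; i++) {
    await pool.spawnWorker(config.workerRole);
  }
  return deficit;
}
```

Running this on the same poll cadence as dispatch keeps the pool near its target without a separate timer.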
#### Hybrid Mode

High-priority tasks (priority >= threshold) get push-dispatched. Everything else is available for pool workers to pull.

```typescript
export interface HybridConfig {
  push: DispatchConfig;
  pull: PullModeConfig;
  /** Tasks at or above this priority get push-dispatched */
  pushPriorityThreshold: number;
}
```
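The hybrid split is just a partition of the ready batch by `pushPriorityThreshold` — a minimal sketch, with the `priority` field and task shape assumed for illustration:

```typescript
// Illustrative task shape with an assumed numeric priority field.
interface ReadyTask { id: string; priority: number; }

function partitionByPriority(
  tasks: ReadyTask[],
  pushPriorityThreshold: number,
): { push: ReadyTask[]; pull: ReadyTask[] } {
  const push: ReadyTask[] = [];
  const pull: ReadyTask[] = [];
  for (const t of tasks) {
    // At or above the threshold → push-dispatch; below → leave for the pool.
    (t.priority >= pushPriorityThreshold ? push : pull).push(t);
  }
  return { push, pull };
}
```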
---

### 3. Workspace Lifecycle Across Retries

**Decision: Configurable per dispatch config. Three strategies:**

```typescript
export type RetryWorkspaceStrategy =
  | "reuse"   // Keep worktree, agent resumes from existing state
  | "fresh"   // Delete worktree, agent starts clean
  | "branch"; // Keep worktree but create a new branch from the pre-failure state
```

```yaml
dispatch:
  retry:
    maxRetries: 3
    workspaceStrategy: reuse    # or "fresh" or "branch"
    preserveOnExhaustion: true  # keep workspace for inspection after final failure
    cleanupDelayMs: 300000      # wait 5 min before cleaning completed workspaces
```

**`reuse` (default):** The agent gets the workspace as-is. Its prompt includes retry context (attempt number, previous error). This matches Symphony's behavior — the workspace persists across turns/retries.

**`fresh`:** The worktree is deleted and recreated. Appropriate when failures leave corrupted state (bad merges, broken dependencies).

**`branch`:** Creates a new branch from the current worktree state before retrying. Preserves progress while giving the agent a clean commit history to work from.

**Cleanup:** On final completion, worktree cleanup happens after `cleanupDelayMs` (default: 5 min, configurable to 0 for immediate). On retry exhaustion with `preserveOnExhaustion: true`, the workspace is preserved and a MAP event is emitted so operators can inspect it.
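Applying a strategy before a retry attempt reduces to a small dispatch on the enum — a sketch under the assumption of a hypothetical `Worktree` interface (the real workspace-manager API differs):

```typescript
// Hypothetical worktree operations for illustration only.
type RetryWorkspaceStrategy = "reuse" | "fresh" | "branch";

interface Worktree {
  recreate(): Promise<void>;
  createBranch(name: string): Promise<void>;
}

async function prepareRetryWorkspace(
  worktree: Worktree,
  strategy: RetryWorkspaceStrategy,
  attempt: number,
): Promise<void> {
  switch (strategy) {
    case "reuse":
      return; // hand the workspace over as-is
    case "fresh":
      await worktree.recreate(); // delete + recreate, clean slate
      return;
    case "branch":
      await worktree.createBranch(`retry-${attempt}`); // keep state, new branch
      return;
  }
}
```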
---

### 4. Task-to-Prompt Mapping via Opentasks

**Decision: Extend opentasks with a `context` field and a prompt assembly pipeline.**

The current opentasks `TaskRecord` has `title`, `content`, and `metadata` — not enough for rich agent prompts. Rather than building prompt logic into the dispatcher, extend opentasks to carry structured context that any consumer (dispatcher, agent, dashboard) can use:

```typescript
// Extension to opentasks TaskRecord
export interface TaskContext {
  /** Structured description (markdown) */
  description?: string;
  /** File paths relevant to this task */
  files?: string[];
  /** Related task IDs for cross-reference */
  related?: string[];
  /** Acceptance criteria */
  criteria?: string[];
  /** Labels/categories from the source tracker */
  labels?: string[];
  /** Source tracker URL (e.g., Linear issue URL) */
  sourceUrl?: string;
  /** Free-form key-value context from the tracker */
  extra?: Record<string, unknown>;
}
```

The dispatcher's prompt pipeline then becomes composable:

```typescript
export interface PromptPipeline {
  /** Ordered list of prompt builders — each appends context */
  stages: PromptStage[];
}

export interface PromptStage {
  name: string;
  build(task: TaskRecord, context: PromptContext): Promise<string | null>;
}

// Built-in stages:
// 1. "task-core"     — title, description, criteria, files
// 2. "retry-context" — attempt number, previous error, workspace state
// 3. "playbook"      — query cognitive-core for relevant playbooks (opt-in)
// 4. "role-prompt"   — append role-specific instructions from openteams
// 5. "custom"        — user-provided function
```

This keeps the dispatcher thin (it calls the pipeline) while making prompt assembly extensible. The playbook stage is opt-in — only active if cognitive-core is configured.
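Running the pipeline is a fold over the stages: each stage contributes a section or returns `null` to opt out. A minimal sketch, with `TaskRecord` and `PromptContext` simplified to placeholder shapes:

```typescript
// Simplified placeholder shapes for illustration.
interface TaskRecord { id: string; title: string; }
interface PromptContext { attempt: number; }

interface PromptStage {
  name: string;
  build(task: TaskRecord, context: PromptContext): Promise<string | null>;
}

async function assemblePrompt(
  stages: PromptStage[],
  task: TaskRecord,
  context: PromptContext,
): Promise<string> {
  const sections: string[] = [];
  for (const stage of stages) {
    const section = await stage.build(task, context);
    if (section !== null) sections.push(section); // null = stage opted out
  }
  return sections.join("\n\n");
}
```

Opt-in stages like "playbook" simply return `null` when their backing service isn't configured, so the pipeline needs no conditional wiring.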
---

### 5. Task Eligibility: Heuristic + Configurable + Agent-Driven

**Decision: Three-layer eligibility check before dispatch.**

"Ready" (no blockers in opentasks) is necessary but not sufficient. The dispatcher applies:

#### Layer 1: Static Filters (config-driven)

```yaml
dispatch:
  eligibility:
    tags: [backend, auto]          # Only tasks with these tags
    excludeTags: [manual, blocked] # Skip tasks with these tags
    trackers: [linear, github]     # Only from these tracker types
    minPriority: 2                 # Skip low-priority tasks
    maxAge: 86400000               # Skip tasks older than 24h (ms)
    requireFields: [description]   # Skip tasks missing required fields
```
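The static layer is a pure predicate over the task. A sketch mirroring the YAML field names above, with the task shape simplified for illustration:

```typescript
// Simplified task shape for illustration; field names mirror the YAML config.
interface FilterableTask {
  tags: string[];
  priority: number;
  createdAt: number; // epoch ms
}

interface StaticFilters {
  tags?: string[];
  excludeTags?: string[];
  minPriority?: number;
  maxAge?: number;
}

function passesStaticFilters(
  task: FilterableTask,
  f: StaticFilters,
  now: number = Date.now(),
): boolean {
  if (f.tags && !f.tags.some((t) => task.tags.includes(t))) return false;
  if (f.excludeTags && f.excludeTags.some((t) => task.tags.includes(t))) return false;
  if (f.minPriority !== undefined && task.priority < f.minPriority) return false;
  if (f.maxAge !== undefined && now - task.createdAt > f.maxAge) return false;
  return true;
}
```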
|
|
256
|
+
|
|
257
|
+
#### Layer 2: Heuristic Scoring (built-in)
|
|
258
|
+
|
|
259
|
+
Tasks that pass static filters get a dispatch score:
|
|
260
|
+
|
|
261
|
+
```typescript
|
|
262
|
+
export interface EligibilityScore {
|
|
263
|
+
taskId: string;
|
|
264
|
+
score: number; // 0-1, higher = more eligible
|
|
265
|
+
reasons: string[]; // Why this score
|
|
266
|
+
}
|
|
267
|
+
|
|
268
|
+
function scoreTask(task: TaskRecord): EligibilityScore {
|
|
269
|
+
let score = 1.0;
|
|
270
|
+
const reasons: string[] = [];
|
|
271
|
+
|
|
272
|
+
// Penalize tasks with no description
|
|
273
|
+
if (!task.content && !task.metadata?.description) {
|
|
274
|
+
score *= 0.3;
|
|
275
|
+
reasons.push("no description — agent may lack context");
|
|
276
|
+
}
|
|
277
|
+
|
|
278
|
+
// Penalize tasks with too many prior failures
|
|
279
|
+
const failures = task.metadata?.failureCount as number ?? 0;
|
|
280
|
+
if (failures > 0) {
|
|
281
|
+
score *= Math.pow(0.7, failures);
|
|
282
|
+
reasons.push(`${failures} prior failures`);
|
|
283
|
+
}
|
|
284
|
+
|
|
285
|
+
// Boost tasks with acceptance criteria
|
|
286
|
+
if (task.metadata?.criteria) {
|
|
287
|
+
score *= 1.2;
|
|
288
|
+
reasons.push("has acceptance criteria");
|
|
289
|
+
}
|
|
290
|
+
|
|
291
|
+
// Boost tasks with file references
|
|
292
|
+
if (task.metadata?.files) {
|
|
293
|
+
score *= 1.1;
|
|
294
|
+
reasons.push("has file references");
|
|
295
|
+
}
|
|
296
|
+
|
|
297
|
+
return { taskId: task.id, score: Math.min(score, 1), reasons };
|
|
298
|
+
}
|
|
299
|
+
```
|
|
300
|
+
|
|
301
|
+
Tasks below a configurable `minScore` threshold (default: 0.3) are skipped. They stay in opentasks as ready but aren't dispatched until they gain more context.
|
|
302
|
+
|
|
303
|
+
#### Layer 3: Agent-Driven Triage (opt-in)
|
|
304
|
+
|
|
305
|
+
For teams that want smarter triage, the dispatcher can spawn a lightweight triage agent that evaluates borderline tasks:
|
|
306
|
+
|
|
307
|
+
```yaml
|
|
308
|
+
dispatch:
|
|
309
|
+
eligibility:
|
|
310
|
+
agentTriage:
|
|
311
|
+
enabled: true
|
|
312
|
+
role: triage # Role from openteams
|
|
313
|
+
minScoreForTriage: 0.3 # Only triage tasks in this range
|
|
314
|
+
maxScoreForTriage: 0.7
|
|
315
|
+
maxTriagePerCycle: 3 # Don't triage too many per poll
|
|
316
|
+
```
|
|
317
|
+
|
|
318
|
+
The triage agent gets a batch of borderline tasks and returns a verdict per task: `dispatch`, `skip`, or `needs-context` (which creates an opentasks annotation requesting more info from the human).
|
|
319
|
+
|
|
320
|
+
This is the AI router pattern applied to task eligibility — same trade-off (expensive but intelligent).
|
|
321
|
+
|
|
322
|
+
---

### 6. Multi-Instance Safety

**Decision: Opentasks daemon is the coordination point. Add atomic claim to opentasks.**

macro-agent is single-process-per-project by design. But multiple instances (different machines, CI environments) may share the same opentasks task pool. The current `claimTask` is not atomic (query → assign is TOCTOU).

#### Required: Atomic Claim in Opentasks

```typescript
// New opentasks operation: atomic claim-if-unclaimed
interface AtomicClaimRequest {
  action: "claim";
  taskId: string;
  claimant: string;  // Unique claimant ID (instance + agent)
  ttlMs?: number;    // Claim expires if not renewed (heartbeat)
}

interface AtomicClaimResponse {
  success: boolean;
  claimedBy?: string;  // Who currently holds the claim (if failed)
}
```

This must be atomic at the opentasks daemon level — a single IPC operation that checks and sets. If two dispatchers race, one gets `success: false` and moves on.
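
A minimal sketch of the daemon-side check-and-set, assuming an in-memory claims map (the real daemon would back this with its own store and wire format):

```typescript
// Illustrative daemon-side claim table (an assumption, not the real store).
interface Claim {
  claimant: string;
  expiresAt: number;
}
const claims = new Map<string, Claim>();

// The whole function runs within one IPC request, so check-and-set is
// atomic with respect to other claim requests: Node processes each
// message to completion, so no locking is needed inside the daemon.
function atomicClaim(
  taskId: string,
  claimant: string,
  ttlMs = 300_000,
  now = Date.now()
): { success: boolean; claimedBy?: string } {
  const existing = claims.get(taskId);
  if (existing && existing.expiresAt > now && existing.claimant !== claimant) {
    return { success: false, claimedBy: existing.claimant };
  }
  claims.set(taskId, { claimant, expiresAt: now + ttlMs });
  return { success: true };
}
```

Note that a re-claim by the same claimant succeeds, which is what lets the poll loop double as a heartbeat.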

#### Instance Identity

Each dispatcher registers with a unique claimant prefix:

```typescript
const claimantId = `${hostname}:${pid}:${instanceId}`;
// e.g., "dev-laptop:12345:inst_a1b2c3d4"
```

This uses the existing `stable-instance-id.ts` (path-derived hash) plus hostname/pid for uniqueness.

#### Claim TTL + Heartbeat

Claims have a TTL (default: 5 min). The dispatch poll loop doubles as a heartbeat — each cycle renews claims for active dispatches. If an instance crashes, its claims expire and other instances can pick up the work.

```typescript
// In the dispatch strategy's route(), after spawning:
tracker.track(taskId, spawned.id, attempt);

// In every poll cycle, renew claims for active dispatches:
for (const record of tracker.listActive()) {
  await tasksAdapter.renewClaim(record.taskId, claimantId);
}
```
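
TTL enforcement implies a renewal path on the daemon side as well. A self-contained sketch with illustrative names (`renewClaim` here is a standalone function over an assumed in-memory claims map, not the adapter method):

```typescript
// Illustrative claim shape (an assumption, mirroring the claim request fields).
interface Claim {
  claimant: string;
  expiresAt: number;
}

// Renew succeeds only while the claimant still holds an unexpired claim;
// a crashed instance's claims simply age out and become claimable again.
function renewClaim(
  claims: Map<string, Claim>,
  taskId: string,
  claimant: string,
  ttlMs = 300_000,
  now = Date.now()
): boolean {
  const existing = claims.get(taskId);
  if (!existing || existing.claimant !== claimant || existing.expiresAt <= now) {
    return false; // Lost the claim: the dispatcher should stop its agent
  }
  existing.expiresAt = now + ttlMs;
  return true;
}
```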

#### What NOT to Add

- No leader election — dispatchers are peers, not primary/secondary
- No distributed lock service — opentasks daemon's atomic claim is sufficient
- No shared state beyond opentasks — each instance has its own DispatchTracker (in-memory), reconstructed from opentasks on boot

---

### 7. Lifecycle Integration via Existing Signals

**Decision: Use `onLifecycleEvent()` callback + inbox signal filter. No new event surface.**

AgentManagerV2 already emits lifecycle events via `onLifecycleEvent()`:

- `{ type: "spawned", agent }` — on spawn (agent-manager-v2.ts:629)
- `{ type: "started", agent }` — on session start (agent-manager-v2.ts:630)
- `{ type: "stopped", agent, reason }` — on terminate (agent-manager-v2.ts:799)

And agents emit inbox signals via done handlers:

- `WORKER_DONE` — worker completed (handlers-v2.ts:49-80)
- `HELP_NEEDED` — worker blocked
- `WORKER_DEFERRED` — worker deferred

The dispatcher hooks into both:

```typescript
// trigger/dispatch/dispatch-lifecycle.ts

export function createDispatchLifecycleListener(
  tracker: DispatchTracker,
  tasksAdapter: TasksAdapter,
  agentManager: AgentManager,
  inboxAdapter: InboxAdapter,
  claimantId: string
): DispatchLifecycleListener {

  // Hook 1: AgentManager lifecycle callback
  // Catches all terminations (normal, crash, external kill)
  const unsubscribe = agentManager.onLifecycleEvent((event) => {
    if (event.type !== "stopped") return;

    const taskId = tracker.findTaskForAgent(event.agent.id);
    if (!taskId) return; // Not a dispatched agent

    const reason = event.reason;
    if (reason === "done" || reason === "completed") {
      tracker.complete(taskId);
      tasksAdapter.transitionTask(taskId, "complete");
      tasksAdapter.releaseClaim(taskId, claimantId);
    } else {
      tracker.fail(taskId, `agent stopped: ${reason}`);
      if (!tracker.isTracked(taskId)) {
        // Retries exhausted
        tasksAdapter.transitionTask(taskId, "fail");
        tasksAdapter.releaseClaim(taskId, claimantId);
      }
      // If still tracked (retry queued), claim is kept — retry will reuse it
    }
  });

  // Hook 2: Inbox signal filter for richer status
  // Captures HELP_NEEDED, WORKER_DEFERRED for status tracking
  inboxAdapter.addSignalFilter("dispatch-lifecycle", (message) => {
    const signal = message.content?.event;
    if (!signal) return true; // Pass through

    const agentId = message.from;
    const taskId = tracker.findTaskForAgent(agentId);
    if (!taskId) return true; // Not dispatched, pass through

    if (signal === "HELP_NEEDED") {
      tracker.updateStatus(taskId, "blocked");
      // Emit MAP event for observability
    }

    return true; // Always pass through — we're observing, not filtering
  });

  return { unsubscribe };
}
```

**Why not new events?** The lifecycle callback handles the critical path (agent stopped → update tracker). Inbox signals provide richer status (blocked, deferred) but are supplementary. No changes needed to AgentManagerV2 or the handler chain.

**Edge case: agent crash without done().** The `"stopped"` lifecycle event fires on all terminations, including crashes. The stop reason distinguishes normal completion from crashes. If an agent crashes, the dispatcher treats it as a failure and queues a retry.
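
The retry delay computation is not spelled out in this design; a typical exponential-backoff sketch, consistent with the `retry` config keys (`baseDelayMs`, `maxDelayMs`, `maxRetries`) but with an illustrative function name:

```typescript
// Mirrors the dispatch.retry config keys; the function name is an assumption.
interface RetryConfig {
  maxRetries: number;
  baseDelayMs: number;
  maxDelayMs: number;
}

// attempt is 1-based: the first retry waits baseDelayMs, then doubles,
// capped at maxDelayMs. Returns null once retries are exhausted.
function nextRetryDelay(attempt: number, cfg: RetryConfig): number | null {
  if (attempt > cfg.maxRetries) return null;
  return Math.min(cfg.baseDelayMs * 2 ** (attempt - 1), cfg.maxDelayMs);
}
```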

---

### 8. Team-Aware Dispatch

**Decision: Task metadata specifies spawn mode. Dispatcher supports single agent, team template, or custom topology.**

```typescript
export type SpawnMode =
  | { type: "agent"; role?: string }                     // Single agent (default)
  | { type: "team"; template: string; config?: object }  // Full team from openteams
  | { type: "custom"; spawn: (task: TaskRecord, context: RoutingContext) => Promise<string[]> };
```

In opentasks, task metadata carries the spawn hint:

```json
{
  "id": "task-123",
  "title": "Security audit for auth module",
  "metadata": {
    "spawn": {
      "type": "team",
      "template": "security-audit"
    }
  }
}
```

The dispatch strategy checks `task.metadata.spawn` and delegates:

```typescript
// In the dispatch strategy
async function spawnForTask(
  task: TaskRecord,
  attempt: number,
  context: RoutingContext,
  config: DispatchConfig
): Promise<string[]> {
  const spawnMode = (task.metadata?.spawn as SpawnMode)
    ?? { type: "agent", role: config.defaultRole };

  switch (spawnMode.type) {
    case "agent": {
      const spawned = await context.agentManager.spawn({
        task: buildPrompt(task, attempt),
        task_id: task.id,
        role: spawnMode.role ?? config.defaultRole,
        parent: null,
      });
      return [spawned.id];
    }

    case "team": {
      // Use TeamManagerV2 to start a team instance for this task
      const team = await teamManager.startTeam(spawnMode.template, {
        taskId: task.id,
        config: spawnMode.config,
      });
      return team.agents.map(a => a.id);
    }

    case "custom":
      return spawnMode.spawn(task, context);
  }
}
```

**Lifecycle for teams:** When a team is dispatched, the tracker records all agent IDs. The team's coordinator handles internal lifecycle. The dispatcher watches for the coordinator's `"stopped"` event as the signal that the team is done.
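
How the listener distinguishes the coordinator's stop from a worker's is not specified; one sketch, where `coordinatorId` is an assumed extra field on the record, not part of the design above:

```typescript
// Illustrative record shape; `coordinatorId` is an assumption for this sketch.
interface TeamDispatchRecord {
  taskId: string;
  agentIds: string[];
  coordinatorId?: string;
}

// A team dispatch completes only when its coordinator stops;
// worker stops within the team are internal churn and are ignored.
function isTeamDone(record: TeamDispatchRecord, stoppedAgentId: string): boolean {
  if (record.coordinatorId) return stoppedAgentId === record.coordinatorId;
  // Single-agent dispatch: the only agent stopping means done.
  return record.agentIds.length === 1 && record.agentIds[0] === stoppedAgentId;
}
```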

---

### 9. Configurable Concurrency Scoping

**Decision: Concurrency limits are hierarchical and composable.**

```yaml
dispatch:
  concurrency:
    # Global cap across everything
    global: 10

    # Per-project limits (project = opentasks project context)
    perProject:
      backend-api: 5
      frontend: 3

    # Per-tracker limits
    perTracker:
      linear: 8
      github: 4

    # Per-role limits
    perRole:
      worker: 8
      security-auditor: 2

    # Per-tag limits (useful for resource-bound work)
    perTag:
      gpu: 1
      database-migration: 1
```

Enforcement is **most restrictive wins** — a dispatch only happens if ALL applicable limits have available slots:

```typescript
function hasCapacity(task: TaskRecord, tracker: DispatchTracker, config: ConcurrencyConfig): boolean {
  const checks = [
    tracker.activeCount() < config.global,
    tracker.activeByProject(task.project) < (config.perProject?.[task.project] ?? Infinity),
    tracker.activeByTracker(task.tracker) < (config.perTracker?.[task.tracker] ?? Infinity),
    tracker.activeByRole(task.role) < (config.perRole?.[task.role] ?? Infinity),
    ...task.tags.map(tag =>
      tracker.activeByTag(tag) < (config.perTag?.[tag] ?? Infinity)
    ),
  ];
  return checks.every(Boolean);
}
```

The DispatchTracker is extended with indexed counts:

```typescript
export interface DispatchTracker {
  // ... existing methods ...
  activeByProject(project: string): number;
  activeByTracker(tracker: string): number;
  activeByRole(role: string): number;
  activeByTag(tag: string): number;
}
```
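
These counts can be derived by scanning the active records rather than maintaining separate indexes. A minimal sketch over a trimmed record shape (standalone functions for illustration, not the tracker methods themselves):

```typescript
// Trimmed to the fields these counts need; the real record has more.
interface ActiveRecord {
  status: string;
  project?: string;
  tags?: string[];
}

// O(n) scans are fine at these scales (tens of active dispatches);
// switch to incremental counters only if profiling says otherwise.
function activeByProject(records: ActiveRecord[], project: string): number {
  return records.filter(r => r.status === "running" && r.project === project).length;
}

function activeByTag(records: ActiveRecord[], tag: string): number {
  return records.filter(r => r.status === "running" && (r.tags ?? []).includes(tag)).length;
}
```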

---

### 10. External State Reconciliation

**Decision: Separate reconciliation cron job, following Symphony's approach.**

Symphony handles this in its orchestrator tick loop (orchestrator.ex:275-298):

1. Each tick, fetches current state from Linear for all running issues
2. Compares against configured `active_states` / `terminal_states`
3. If issue moved to terminal state externally → stop agent + cleanup workspace
4. If issue reassigned → stop agent (no cleanup)
5. If issue moved to non-active state (e.g., "blocked") → stop agent (no cleanup)

**Our equivalent:** A second cron job (separate from the dispatch poll) that reconciles external tracker state:

```typescript
// Reconciliation strategy — registered alongside dispatch strategy
export function createReconcileStrategy(
  tasksAdapter: TasksAdapter,
  tracker: DispatchTracker,
  agentManager: AgentManager,
  config: ReconcileConfig
): RoutingStrategy {
  // A dispatch record may hold several agents (team dispatch)
  const terminateAll = (record: DispatchRecord, reason: string) =>
    Promise.all(record.agentIds.map((id) => agentManager.terminate(id, reason)));

  return {
    name: "task-reconcile",
    canHandle: (e) => e.source.type === "cron" && e.source.jobName === "task-reconcile",

    async route(_event, context): Promise<RoutingDecision> {
      const active = tracker.listActive();

      for (const record of active) {
        const task = await tasksAdapter.getTask(record.taskId);

        // Task disappeared (deleted from tracker); must run before any task.field access
        if (!task) {
          await terminateAll(record, "task_deleted");
          tracker.complete(record.taskId);
          continue;
        }

        // Task was closed/completed externally
        if (task.status === "closed") {
          await terminateAll(record, "external_completion");
          tracker.complete(record.taskId);
          continue;
        }

        // Task was reassigned to someone else
        if (task.assignee && task.assignee !== record.claimantId) {
          await terminateAll(record, "reassigned");
          tracker.complete(record.taskId); // Don't retry — human took over
          continue;
        }

        // Task moved to blocked state
        if (task.status === "blocked") {
          await terminateAll(record, "blocked_externally");
          // Don't retry immediately — wait for unblock via normal poll
          tracker.complete(record.taskId);
          continue;
        }
      }

      return { targetAgents: [], reason: "reconciliation complete" };
    },
  };
}
```

```yaml
dispatch:
  reconcile:
    enabled: true
    intervalMs: 60000  # Check every 60s (slower than dispatch poll)
```

**Why a separate cron job?** Reconciliation is read-heavy (it fetches current state from the external tracker per active task) and less time-sensitive than dispatch. Running it at a slower cadence (60s vs 15s) reduces API load on Linear/GitHub/Jira.

**Agent-cooperative check:** In addition to dispatcher-side reconciliation, agents should also check task state between turns (like Symphony's `continue_with_issue?()` check). This can be added as a standard instruction in the prompt pipeline: "Before starting a new turn, verify your task is still active via the `task` tool."
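
As a prompt-pipeline stage, that instruction might look like the following; the `PromptStage` shape is an assumption, since the pipeline interface isn't shown here:

```typescript
// Illustrative stage shape; the actual PromptPipeline interface may differ.
interface PromptStage {
  name: string;
  render: (task: { id: string }) => string;
}

// Standard instruction injected into every dispatched agent's prompt.
const taskLivenessStage: PromptStage = {
  name: "task-liveness",
  render: (task) =>
    `Before starting a new turn, verify task ${task.id} is still active ` +
    `via the \`task\` tool. If it has been closed, reassigned, or blocked, ` +
    `stop work and report via done().`,
};
```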

---

## Updated Component Design

### Component 1: DispatchTracker

Extended from the original design with multi-dimensional concurrency, claim management, and state reconstruction.

```typescript
// trigger/dispatch/dispatch-tracker.ts

export interface DispatchRecord {
  taskId: string;
  agentIds: string[];  // Multiple for team dispatch
  spawnedAt: number;
  attempt: number;
  status: "running" | "completed" | "failed" | "retrying" | "blocked";
  project?: string;
  tracker?: string;
  role?: string;
  tags?: string[];
  claimantId: string;
  workspacePath?: string;
}

export interface DispatchTracker {
  track(record: Omit<DispatchRecord, "spawnedAt" | "status">): void;
  complete(taskId: string): void;
  fail(taskId: string, error?: string): void;
  updateStatus(taskId: string, status: DispatchRecord["status"]): void;
  getRetryReady(): RetryEntry[];
  isTracked(taskId: string): boolean;
  findTaskForAgent(agentId: string): string | undefined;

  // Concurrency queries
  activeCount(): number;
  activeByProject(project: string): number;
  activeByTracker(tracker: string): number;
  activeByRole(role: string): number;
  activeByTag(tag: string): number;
  availableSlots(config: ConcurrencyConfig, task?: TaskRecord): number;

  // Observability
  listActive(): DispatchRecord[];
  listRetries(): RetryEntry[];

  // Reconstruction
  reconstructFromTasks(tasks: TaskRecord[], claimantId: string): void;
}
```

**Reconstruction on boot:** When the dispatcher starts, it queries opentasks for all tasks claimed by this instance's `claimantId` that are still `in_progress`. These become the initial `active` set. This handles process restarts without losing track of running agents.
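
A sketch of that reconstruction, assuming the opentasks record exposes the claimant and spawned agent IDs in metadata (field names are illustrative):

```typescript
// Illustrative task shape; the real TaskRecord/metadata layout may differ.
interface BootTask {
  id: string;
  status: string;
  metadata?: { claimant?: string; agentIds?: string[] };
}

// Rebuild the active set from tasks this instance still holds claims on.
function reconstructActive(tasks: BootTask[], claimantId: string) {
  return tasks
    .filter(t => t.status === "in_progress" && t.metadata?.claimant === claimantId)
    .map(t => ({
      taskId: t.id,
      agentIds: t.metadata?.agentIds ?? [],
      status: "running" as const,
    }));
}
```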

### Component 2: TaskDispatch Routing Strategy

Updated to support push/pull/hybrid modes, team dispatch, eligibility checking, and atomic claiming.

```typescript
// trigger/strategies/task-dispatch.ts

export interface TaskDispatchStrategyDeps {
  tasksAdapter: TasksAdapter;
  tracker: DispatchTracker;
  agentManager: AgentManager;
  teamManager?: TeamManagerV2;  // Optional, for team dispatch
  promptPipeline: PromptPipeline;
  eligibility: EligibilityChecker;
}

export interface TaskDispatchStrategyConfig {
  mode: DispatchMode;
  concurrency: ConcurrencyConfig;
  retry: RetryConfig;
  claimantId: string;
  push?: { defaultRole: string; tags?: string[] };
  pull?: PullModeConfig;
  hybrid?: HybridConfig;
}
```

### Component 3: Dispatch Lifecycle Listener

Updated to use `onLifecycleEvent()` + inbox signal filter (see §7 above).

### Component 4: Reconciliation Strategy

New component (see §10 above).

---

## Configuration

Full dispatch config:

```yaml
dispatch:
  enabled: true

  mode: push  # push | pull | hybrid

  poll:
    intervalMs: 15000  # Dispatch poll cadence

  reconcile:
    enabled: true
    intervalMs: 60000  # External state check cadence

  concurrency:
    global: 10
    perProject: { backend: 5 }
    perTracker: { linear: 8 }
    perRole: { worker: 8 }
    perTag: { gpu: 1 }

  retry:
    maxRetries: 3
    baseDelayMs: 10000
    maxDelayMs: 300000
    workspaceStrategy: reuse  # reuse | fresh | branch
    preserveOnExhaustion: true
    cleanupDelayMs: 300000

  eligibility:
    tags: [auto]
    excludeTags: [manual]
    minPriority: 2
    minScore: 0.3
    requireFields: [description]
    agentTriage:
      enabled: false
      role: triage
      minScoreForTriage: 0.3
      maxScoreForTriage: 0.7

  prompt:
    stages: [task-core, retry-context, role-prompt]
    # Add "playbook" to include cognitive-core context

  push:
    defaultRole: worker

  pull:
    poolSize: 3
    workerRole: worker
    idleTimeoutMs: 300000
    workerLoop: true

  hybrid:
    pushPriorityThreshold: 3
```

---

## What This Reuses (Not New)

| Concern | Existing Component | How Dispatch Uses It |
|---|---|---|
| Scheduling | CronService | `every` jobs for poll + reconcile |
| Routing | TriggerRouter + RoutingStrategy | Two strategies (dispatch + reconcile) |
| Agent spawn | AgentManagerV2.spawn() | Parentless root agents |
| Team spawn | TeamManagerV2.startTeam() | Team-aware dispatch |
| Workspace isolation | WorkspaceManager | createWorkspaceForRole() called by spawn |
| Task state | TasksAdapter + opentasks | Atomic claim, transition, release |
| Lifecycle events | onLifecycleEvent() | Agent stopped → update tracker |
| Inbox signals | addSignalFilter() | WORKER_DONE, HELP_NEEDED observation |
| Event delivery | SystemEventQueue + WakeManager | Cron → event → route → dispatch |

## What This Adds

| File | ~Lines | Purpose |
|---|---|---|
| `trigger/dispatch/dispatch-tracker.ts` | 200 | Multi-dimensional concurrency + retry + reconstruction |
| `trigger/strategies/task-dispatch.ts` | 200 | Routing strategy (push/pull/hybrid + eligibility) |
| `trigger/strategies/task-reconcile.ts` | 80 | External state reconciliation |
| `trigger/dispatch/dispatch-lifecycle.ts` | 80 | Lifecycle + signal listener |
| `trigger/dispatch/eligibility.ts` | 120 | Scoring + filtering + triage |
| `trigger/dispatch/prompt-pipeline.ts` | 100 | Composable prompt assembly |
| `trigger/dispatch/types.ts` | 80 | All dispatch-specific types |
| Boot wiring | 40 | Opt-in init |
| **Total** | **~900** | |

## What Needs to Change in Opentasks

| Change | Scope | Purpose |
|---|---|---|
| Atomic `claim` operation | opentasks daemon | Prevent TOCTOU race in multi-instance |
| Claim TTL + heartbeat renewal | opentasks daemon | Auto-release on instance crash |
| `TaskContext` extension | opentasks schema | Richer task metadata for prompts |
| `releaseClaim` operation | opentasks daemon | Explicit claim release on completion |

## Comparison to Symphony

| Feature | Symphony | This Design |
|---|---|---|
| Polling | GenServer tick | CronService `every` job |
| Dispatch | Orchestrator.dispatch | TaskDispatch routing strategy |
| Workspace isolation | Per-issue directory + git clone | WorkspaceManager worktrees |
| Retry | In-orchestrator backoff queue | DispatchTracker retry map |
| Concurrency | max_concurrent_agents | Multi-dimensional (global/project/tracker/role/tag) |
| Agent hierarchy | None (flat) | None (parentless root agents) |
| Reconciliation | In tick loop, checks Linear | Separate reconcile cron, checks opentasks |
| Multi-instance | Disjoint issue sets via Linear assignee | Atomic claims + TTL in opentasks |
| Agent runtime | Codex only | Any AgentFactory (Claude Code, Codex, etc.) |
| Configuration | WORKFLOW.md (single file) | macro-agent config (composable) |
| Observability | Phoenix LiveView dashboard | MAP protocol events |
| Task source | Linear only | Any tracker via opentasks federation |
| Memory/learning | None | cognitive-core + minimem (opt-in) |
| Team dispatch | No | Yes (openteams templates) |
| Push/pull | Push only | Push, pull, or hybrid |
| Task eligibility | None (all active issues) | Configurable filters + scoring + AI triage |