opencode-swarm-plugin 0.12.4 → 0.12.6

package/README.md CHANGED
@@ -108,12 +108,11 @@ The `/swarm` command is defined in `~/.config/opencode/command/swarm.md`:
  description: Decompose task into parallel subtasks and coordinate agents
  ---

- You are a swarm coordinator. Take a complex task, break it into beads,
- and unleash parallel agents.
+ You are a swarm coordinator. Decompose the task into beads and spawn parallel agents.

- ## Usage
+ ## Task

- /swarm <task description or bead-id>
+ $ARGUMENTS

  ## Workflow

@@ -136,83 +135,72 @@ and unleash parallel agents.
  Begin decomposition now.
  ```

- ### @swarm-planner Agent
+ > **Note**: The `$ARGUMENTS` placeholder captures everything you type after `/swarm`. This is how your task description gets passed to the agent.
+
+ ### Agents

- The `@swarm-planner` agent is defined in `~/.config/opencode/agent/swarm-planner.md`:
+ The setup wizard creates two agents with your chosen models:

- ````markdown
+ **@swarm-planner** (`~/.config/opencode/agent/swarm-planner.md`) - Coordinator that decomposes tasks:
+
+ ```yaml
  ---
  name: swarm-planner
  description: Strategic task decomposition for swarm coordination
- model: claude-sonnet-4-5
+ model: anthropic/claude-sonnet-4-5 # Your chosen coordinator model
  ---
+ ```

- You are a swarm planner. Decompose tasks into optimal parallel subtasks.
-
- ## Workflow
+ **@swarm-worker** (`~/.config/opencode/agent/swarm-worker.md`) - Fast executor for subtasks:

- 1. Call `swarm_select_strategy` to analyze the task
- 2. Call `swarm_plan_prompt` to get strategy-specific guidance
- 3. Create a BeadTree following the guidelines
- 4. Return ONLY valid JSON - no markdown, no explanation
-
- ## Output Format
-
- ```json
- {
- "epic": { "title": "...", "description": "..." },
- "subtasks": [
- {
- "title": "...",
- "description": "...",
- "files": ["src/..."],
- "dependencies": [],
- "estimated_complexity": 2
- }
- ]
- }
+ ```yaml
+ ---
+ name: swarm-worker
+ description: Executes subtasks in a swarm - fast, focused, cost-effective
+ model: anthropic/claude-haiku-4-5 # Your chosen worker model
+ ---
  ```
- ````
-
- ## Rules

- - 2-7 subtasks (too few = not parallel, too many = overhead)
- - No file overlap between subtasks
- - Include tests with the code they test
- - Order by dependency (if B needs A, A comes first)
+ ### Decomposition Rules

- ````
+ - **2-7 subtasks** - Too few = not parallel, too many = coordination overhead
+ - **No file overlap** - Each file appears in exactly one subtask
+ - **Include tests** - Put test files with the code they test
+ - **Order by dependency** - If B needs A's output, A comes first (lower index)

  Edit these files to customize behavior. Run `swarm setup` to regenerate defaults.

  ## Dependencies

- | Dependency | Purpose | Required |
- |------------|---------|----------|
- | [OpenCode](https://opencode.ai) | Plugin host | Yes |
- | [Beads](https://github.com/steveyegge/beads) | Git-backed issue tracking | Yes |
- | [Go](https://go.dev) | Required for Agent Mail | No |
- | [MCP Agent Mail](https://github.com/Dicklesworthstone/mcp_agent_mail) | Multi-agent coordination, file reservations | No |
- | [CASS (Coding Agent Session Search)](https://github.com/Dicklesworthstone/coding_agent_session_search) | Historical context from past sessions | No |
- | [UBS (Ultimate Bug Scanner)](https://github.com/Dicklesworthstone/ultimate_bug_scanner) | Pre-completion bug scanning using AI-powered static analysis | No |
- | [semantic-memory](https://github.com/joelhooks/semantic-memory) | Learning persistence | No |
- | [Redis](https://redis.io) | Rate limiting (SQLite fallback available) | No |
+ | Dependency | Purpose | Required |
+ | ------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------ | -------- |
+ | [OpenCode](https://opencode.ai) | Plugin host | Yes |
+ | [Beads](https://github.com/steveyegge/beads) | Git-backed issue tracking | Yes |
+ | [Go](https://go.dev) | Required for Agent Mail | No |
+ | [MCP Agent Mail](https://github.com/Dicklesworthstone/mcp_agent_mail) | Multi-agent coordination, file reservations | No |
+ | [CASS (Coding Agent Session Search)](https://github.com/Dicklesworthstone/coding_agent_session_search) | Historical context from past sessions | No |
+ | [UBS (Ultimate Bug Scanner)](https://github.com/Dicklesworthstone/ultimate_bug_scanner) | Pre-completion bug scanning using AI-powered static analysis | No |
+ | [semantic-memory](https://github.com/joelhooks/semantic-memory) | Learning persistence | No |
+ | [Redis](https://redis.io) | Rate limiting (SQLite fallback available) | No |

  All dependencies are checked and can be installed via `swarm setup`.

  ### Installing Optional Dependencies

  **UBS (Ultimate Bug Scanner)** - Scans code for bugs before task completion:
+
  ```bash
  curl -fsSL "https://raw.githubusercontent.com/Dicklesworthstone/ultimate_bug_scanner/master/install.sh" | bash
  ```

  **CASS (Coding Agent Session Search)** - Indexes and searches AI coding agent history:
+
  ```bash
  curl -fsSL https://raw.githubusercontent.com/Dicklesworthstone/coding_agent_session_search/main/install.sh | bash -s -- --easy-mode
  ```

  **MCP Agent Mail** - Multi-agent coordination and file reservations:
+
  ```bash
  curl -fsSL "https://raw.githubusercontent.com/Dicklesworthstone/mcp_agent_mail/main/scripts/install.sh" | bash -s -- --yes
  ```
@@ -221,57 +209,57 @@ curl -fsSL "https://raw.githubusercontent.com/Dicklesworthstone/mcp_agent_mail/m

  ### Swarm

- | Tool | Description |
- |------|-------------|
- | `swarm_init` | Initialize swarm session |
- | `swarm_select_strategy` | Analyze task, recommend decomposition strategy (file/feature/risk-based) |
- | `swarm_plan_prompt` | Generate strategy-specific planning prompt with CASS history |
- | `swarm_decompose` | Generate decomposition prompt |
- | `swarm_validate_decomposition` | Validate response, detect file conflicts |
- | `swarm_spawn_subtask` | Generate worker agent prompt with Agent Mail/beads instructions |
- | `swarm_status` | Get swarm progress by epic ID |
- | `swarm_progress` | Report subtask progress to coordinator |
- | `swarm_complete` | Complete subtask - runs UBS (Ultimate Bug Scanner), releases reservations |
- | `swarm_record_outcome` | Record outcome for learning (duration, errors, retries) |
+ | Tool | Description |
+ | ------------------------------ | ------------------------------------------------------------------------- |
+ | `swarm_init` | Initialize swarm session |
+ | `swarm_select_strategy` | Analyze task, recommend decomposition strategy (file/feature/risk-based) |
+ | `swarm_plan_prompt` | Generate strategy-specific planning prompt with CASS history |
+ | `swarm_decompose` | Generate decomposition prompt |
+ | `swarm_validate_decomposition` | Validate response, detect file conflicts |
+ | `swarm_spawn_subtask` | Generate worker agent prompt with Agent Mail/beads instructions |
+ | `swarm_status` | Get swarm progress by epic ID |
+ | `swarm_progress` | Report subtask progress to coordinator |
+ | `swarm_complete` | Complete subtask - runs UBS (Ultimate Bug Scanner), releases reservations |
+ | `swarm_record_outcome` | Record outcome for learning (duration, errors, retries) |

  ### Beads

- | Tool | Description |
- |------|-------------|
- | `beads_create` | Create bead with type-safe validation |
- | `beads_create_epic` | Create epic + subtasks atomically |
- | `beads_query` | Query beads with filters (status, type, ready) |
- | `beads_update` | Update status/description/priority |
- | `beads_close` | Close bead with reason |
- | `beads_start` | Mark bead as in-progress |
- | `beads_ready` | Get next unblocked bead |
- | `beads_sync` | Sync to git and push |
- | `beads_link_thread` | Link bead to Agent Mail thread |
+ | Tool | Description |
+ | ------------------- | ---------------------------------------------- |
+ | `beads_create` | Create bead with type-safe validation |
+ | `beads_create_epic` | Create epic + subtasks atomically |
+ | `beads_query` | Query beads with filters (status, type, ready) |
+ | `beads_update` | Update status/description/priority |
+ | `beads_close` | Close bead with reason |
+ | `beads_start` | Mark bead as in-progress |
+ | `beads_ready` | Get next unblocked bead |
+ | `beads_sync` | Sync to git and push |
+ | `beads_link_thread` | Link bead to Agent Mail thread |

  ### Agent Mail

- | Tool | Description |
- |------|-------------|
- | `agentmail_init` | Initialize session, register agent |
- | `agentmail_send` | Send message to agents |
- | `agentmail_inbox` | Fetch inbox (max 5, no bodies - context safe) |
- | `agentmail_read_message` | Fetch single message body by ID |
+ | Tool | Description |
+ | ---------------------------- | ---------------------------------------------- |
+ | `agentmail_init` | Initialize session, register agent |
+ | `agentmail_send` | Send message to agents |
+ | `agentmail_inbox` | Fetch inbox (max 5, no bodies - context safe) |
+ | `agentmail_read_message` | Fetch single message body by ID |
  | `agentmail_summarize_thread` | Summarize thread (preferred over fetching all) |
- | `agentmail_reserve` | Reserve file paths for exclusive editing |
- | `agentmail_release` | Release file reservations |
- | `agentmail_ack` | Acknowledge message |
- | `agentmail_search` | Search messages by keyword |
- | `agentmail_health` | Check if Agent Mail server is running |
+ | `agentmail_reserve` | Reserve file paths for exclusive editing |
+ | `agentmail_release` | Release file reservations |
+ | `agentmail_ack` | Acknowledge message |
+ | `agentmail_search` | Search messages by keyword |
+ | `agentmail_health` | Check if Agent Mail server is running |

  ### Structured Output

- | Tool | Description |
- |------|-------------|
- | `structured_extract_json` | Extract JSON from markdown/text (multiple strategies) |
- | `structured_validate` | Validate response against schema |
- | `structured_parse_evaluation` | Parse self-evaluation response |
- | `structured_parse_decomposition` | Parse task decomposition response |
- | `structured_parse_bead_tree` | Parse bead tree for epic creation |
+ | Tool | Description |
+ | -------------------------------- | ----------------------------------------------------- |
+ | `structured_extract_json` | Extract JSON from markdown/text (multiple strategies) |
+ | `structured_validate` | Validate response against schema |
+ | `structured_parse_evaluation` | Parse self-evaluation response |
+ | `structured_parse_decomposition` | Parse task decomposition response |
+ | `structured_parse_bead_tree` | Parse bead tree for epic creation |

  ## Decomposition Strategies

@@ -309,32 +297,32 @@ Best for: bug fixes, security issues

  The plugin learns from outcomes:

- | Mechanism | How It Works |
- |-----------|--------------|
- | Confidence decay | Criteria weights fade unless revalidated (90-day half-life) |
- | Implicit feedback | Fast + success = helpful signal, slow + errors = harmful |
- | Pattern maturity | candidate → established → proven (or deprecated) |
- | Anti-patterns | Patterns with >60% failure rate auto-invert |
+ | Mechanism | How It Works |
+ | ----------------- | ----------------------------------------------------------- |
+ | Confidence decay | Criteria weights fade unless revalidated (90-day half-life) |
+ | Implicit feedback | Fast + success = helpful signal, slow + errors = harmful |
+ | Pattern maturity | candidate → established → proven (or deprecated) |
+ | Anti-patterns | Patterns with >60% failure rate auto-invert |

  ## Context Preservation

  Hard limits to prevent context exhaustion:

- | Constraint | Default | Reason |
- |------------|---------|--------|
- | Inbox limit | 5 messages | Prevents token burn |
- | Bodies excluded | Always | Fetch individually when needed |
- | Summarize preferred | Yes | Key points, not raw dump |
+ | Constraint | Default | Reason |
+ | ------------------- | ---------- | ------------------------------ |
+ | Inbox limit | 5 messages | Prevents token burn |
+ | Bodies excluded | Always | Fetch individually when needed |
+ | Summarize preferred | Yes | Key points, not raw dump |

  ## Rate Limiting

  Client-side limits (Redis primary, SQLite fallback):

  | Endpoint | Per Minute | Per Hour |
- |----------|------------|----------|
- | send | 20 | 200 |
- | reserve | 10 | 100 |
- | inbox | 60 | 600 |
+ | -------- | ---------- | -------- |
+ | send | 20 | 200 |
+ | reserve | 10 | 100 |
+ | inbox | 60 | 600 |

  Configure via `OPENCODE_RATE_LIMIT_{ENDPOINT}_PER_MIN` env vars.

@@ -345,7 +333,7 @@ bun install
  bun run typecheck
  bun test
  bun run build
- ````
+ ```

  ## License

package/bin/swarm.ts CHANGED
@@ -363,7 +363,8 @@ const DEPENDENCIES: Dependency[] = [
  required: false,
  install: "https://github.com/Dicklesworthstone/mcp_agent_mail",
  installType: "manual",
- description: "Multi-agent coordination & file reservations (like Gmail for coding agents)",
+ description:
+ "Multi-agent coordination & file reservations (like Gmail for coding agents)",
  },
  {
  name: "CASS (Coding Agent Session Search)",
@@ -488,11 +489,11 @@ const SWARM_COMMAND = `---
  description: Decompose task into parallel subtasks and coordinate agents
  ---

- You are a swarm coordinator. Take a complex task, break it into beads, and unleash parallel agents.
+ You are a swarm coordinator. Decompose the task into beads and spawn parallel agents.

- ## Usage
+ ## Task

- /swarm <task description or bead-id>
+ $ARGUMENTS

  ## Workflow

@@ -500,14 +501,12 @@ You are a swarm coordinator. Take a complex task, break it into beads, and unlea
  2. **Decompose**: Use \`swarm_select_strategy\` then \`swarm_plan_prompt\` to break down the task
  3. **Create beads**: \`beads_create_epic\` with subtasks and file assignments
  4. **Reserve files**: \`agentmail_reserve\` for each subtask's files
- 5. **Spawn agents**: Use Task tool with \`swarm_spawn_subtask\` prompts (or use @swarm-worker for sequential/single-file tasks)
+ 5. **Spawn agents**: Use Task tool with \`swarm_spawn_subtask\` prompts
  6. **Monitor**: Check \`agentmail_inbox\` for progress, use \`agentmail_summarize_thread\` for overview
  7. **Complete**: \`swarm_complete\` when done, then \`beads_sync\` to push

  ## Strategy Selection

- The plugin auto-selects decomposition strategy based on task keywords:
-
  | Strategy | Best For | Keywords |
  | ------------- | ----------------------- | -------------------------------------- |
  | file-based | Refactoring, migrations | refactor, migrate, rename, update all |
package/dist/index.js CHANGED
@@ -21560,15 +21560,15 @@ var BeadDependencySchema = exports_external.object({
  type: exports_external.enum(["blocks", "blocked-by", "related", "discovered-from"])
  });
  var BeadSchema = exports_external.object({
- id: exports_external.string().regex(/^[a-z0-9-]+-[a-z0-9]+(\.\d+)?$/, "Invalid bead ID format"),
+ id: exports_external.string().regex(/^[a-z0-9]+(-[a-z0-9]+)+(\.\d+)?$/, "Invalid bead ID format"),
  title: exports_external.string().min(1, "Title required"),
  description: exports_external.string().optional().default(""),
  status: BeadStatusSchema.default("open"),
  priority: exports_external.number().int().min(0).max(3).default(2),
  issue_type: BeadTypeSchema.default("task"),
- created_at: exports_external.string(),
- updated_at: exports_external.string().optional(),
- closed_at: exports_external.string().optional(),
+ created_at: exports_external.string().datetime({ offset: true }),
+ updated_at: exports_external.string().datetime({ offset: true }).optional(),
+ closed_at: exports_external.string().datetime({ offset: true }).optional(),
  parent_id: exports_external.string().optional(),
  dependencies: exports_external.array(BeadDependencySchema).optional().default([]),
  metadata: exports_external.record(exports_external.string(), exports_external.unknown()).optional()
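The tightened bead ID regex above is stricter than the old one: every hyphen-separated segment must now be non-empty, so IDs with leading, trailing, or doubled hyphens are rejected. A quick sketch of the difference (standalone, not the plugin's code):

```typescript
// Old pattern: `[a-z0-9-]+` lets stray hyphens hide inside the prefix.
const oldBeadId = /^[a-z0-9-]+-[a-z0-9]+(\.\d+)?$/;
// New pattern: alphanumeric segments joined by single hyphens, optional .N suffix.
const newBeadId = /^[a-z0-9]+(-[a-z0-9]+)+(\.\d+)?$/;

for (const id of ["swarm-abc123", "swarm-abc123.2", "-swarm-abc", "swarm--abc"]) {
  console.log(id, "old:", oldBeadId.test(id), "new:", newBeadId.test(id));
}
// "-swarm-abc" and "swarm--abc" pass the old pattern but fail the new one.
```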
@@ -21641,7 +21641,7 @@ var EvaluationSchema = exports_external.object({
  criteria: exports_external.record(exports_external.string(), CriterionEvaluationSchema),
  overall_feedback: exports_external.string(),
  retry_suggestion: exports_external.string().nullable(),
- timestamp: exports_external.string().optional()
+ timestamp: exports_external.string().datetime({ offset: true }).optional()
  });
  var DEFAULT_CRITERIA = [
  "type_safe",
@@ -21659,7 +21659,7 @@ var WeightedEvaluationSchema = exports_external.object({
  criteria: exports_external.record(exports_external.string(), WeightedCriterionEvaluationSchema),
  overall_feedback: exports_external.string(),
  retry_suggestion: exports_external.string().nullable(),
- timestamp: exports_external.string().optional(),
+ timestamp: exports_external.string().datetime({ offset: true }).optional(),
  average_weight: exports_external.number().min(0).max(1).optional(),
  raw_score: exports_external.number().min(0).max(1).optional(),
  weighted_score: exports_external.number().min(0).max(1).optional()
@@ -21732,7 +21732,7 @@ var SwarmSpawnResultSchema = exports_external.object({
  coordinator_name: exports_external.string(),
  thread_id: exports_external.string(),
  agents: exports_external.array(SpawnedAgentSchema),
- started_at: exports_external.string()
+ started_at: exports_external.string().datetime({ offset: true })
  });
  var AgentProgressSchema = exports_external.object({
  bead_id: exports_external.string(),
@@ -21742,7 +21742,7 @@ var AgentProgressSchema = exports_external.object({
  message: exports_external.string().optional(),
  files_touched: exports_external.array(exports_external.string()).optional(),
  blockers: exports_external.array(exports_external.string()).optional(),
- timestamp: exports_external.string()
+ timestamp: exports_external.string().datetime({ offset: true })
  });
  var SwarmStatusSchema = exports_external.object({
  epic_id: exports_external.string(),
@@ -21752,7 +21752,7 @@ var SwarmStatusSchema = exports_external.object({
  failed: exports_external.number().int().min(0),
  blocked: exports_external.number().int().min(0),
  agents: exports_external.array(SpawnedAgentSchema),
- last_update: exports_external.string()
+ last_update: exports_external.string().datetime({ offset: true })
  });
  // src/beads.ts
  class BeadError extends Error {
@@ -21890,18 +21890,32 @@ var beads_create_epic = tool({
  };
  return JSON.stringify(result, null, 2);
  } catch (error45) {
- const rollbackHint = created.map((b) => `bd close ${b.id} --reason "Rollback partial epic"`).join(`
- `);
- const result = {
- success: false,
- epic: created[0] || {},
- subtasks: created.slice(1),
- rollback_hint: rollbackHint
- };
- return JSON.stringify({
- ...result,
- error: error45 instanceof Error ? error45.message : String(error45)
- }, null, 2);
+ const rollbackCommands = [];
+ for (const bead of created) {
+ try {
+ const closeCmd = [
+ "bd",
+ "close",
+ bead.id,
+ "--reason",
+ "Rollback partial epic",
+ "--json"
+ ];
+ await Bun.$`${closeCmd}`.quiet().nothrow();
+ rollbackCommands.push(`bd close ${bead.id} --reason "Rollback partial epic"`);
+ } catch (rollbackError) {
+ console.error(`Failed to rollback bead ${bead.id}:`, rollbackError);
+ }
+ }
+ const errorMsg = error45 instanceof Error ? error45.message : String(error45);
+ const rollbackInfo = rollbackCommands.length > 0 ? `
+
+ Rolled back ${rollbackCommands.length} bead(s):
+ ${rollbackCommands.join(`
+ `)}` : `
+
+ No beads to rollback.`;
+ throw new BeadError(`Epic creation failed: ${errorMsg}${rollbackInfo}`, "beads_create_epic", 1);
  }
  }
  });
@@ -22028,17 +22042,22 @@ var beads_sync = tool({
  },
  async execute(args, ctx) {
  const autoPull = args.auto_pull ?? true;
+ const TIMEOUT_MS = 30000;
+ const withTimeout = async (promise2, timeoutMs, operation) => {
+ const timeoutPromise = new Promise((_, reject) => setTimeout(() => reject(new BeadError(`Operation timed out after ${timeoutMs}ms`, operation)), timeoutMs));
+ return Promise.race([promise2, timeoutPromise]);
+ };
  if (autoPull) {
- const pullResult = await Bun.$`git pull --rebase`.quiet().nothrow();
+ const pullResult = await withTimeout(Bun.$`git pull --rebase`.quiet().nothrow(), TIMEOUT_MS, "git pull --rebase");
  if (pullResult.exitCode !== 0) {
  throw new BeadError(`Failed to pull: ${pullResult.stderr.toString()}`, "git pull --rebase", pullResult.exitCode);
  }
  }
- const syncResult = await Bun.$`bd sync`.quiet().nothrow();
+ const syncResult = await withTimeout(Bun.$`bd sync`.quiet().nothrow(), TIMEOUT_MS, "bd sync");
  if (syncResult.exitCode !== 0) {
  throw new BeadError(`Failed to sync beads: ${syncResult.stderr.toString()}`, "bd sync", syncResult.exitCode);
  }
- const pushResult = await Bun.$`git push`.quiet().nothrow();
+ const pushResult = await withTimeout(Bun.$`git push`.quiet().nothrow(), TIMEOUT_MS, "git push");
  if (pushResult.exitCode !== 0) {
  throw new BeadError(`Failed to push: ${pushResult.stderr.toString()}`, "git push", pushResult.exitCode);
  }
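The `withTimeout` helper added above is a plain `Promise.race` wrapper; a self-contained sketch of the same pattern, with an ordinary `Error` standing in for `BeadError`. One caveat worth noting: the losing promise is not cancelled, so a hung `git push` keeps running in the background and the race only stops the caller from waiting on it.

```typescript
// Promise.race timeout pattern, as used around `git pull`, `bd sync`, and
// `git push` in beads_sync. The underlying work is NOT aborted on timeout.
const withTimeout = <T>(
  promise: Promise<T>,
  timeoutMs: number,
  operation: string
): Promise<T> => {
  const timeout = new Promise<never>((_, reject) =>
    setTimeout(
      () => reject(new Error(`${operation} timed out after ${timeoutMs}ms`)),
      timeoutMs
    )
  );
  return Promise.race([promise, timeout]);
};
```

Usage mirrors the diff: `await withTimeout(somePromise, 30000, "git push")` either forwards the result or rejects after 30 seconds.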
@@ -22347,9 +22366,11 @@ function getLimitsForEndpoint(endpoint) {
  const upperEndpoint = endpoint.toUpperCase();
  const perMinuteEnv = process.env[`OPENCODE_RATE_LIMIT_${upperEndpoint}_PER_MIN`];
  const perHourEnv = process.env[`OPENCODE_RATE_LIMIT_${upperEndpoint}_PER_HOUR`];
+ const parsedPerMinute = perMinuteEnv ? parseInt(perMinuteEnv, 10) : NaN;
+ const parsedPerHour = perHourEnv ? parseInt(perHourEnv, 10) : NaN;
  return {
- perMinute: perMinuteEnv ? parseInt(perMinuteEnv, 10) : defaults.perMinute,
- perHour: perHourEnv ? parseInt(perHourEnv, 10) : defaults.perHour
+ perMinute: Number.isNaN(parsedPerMinute) ? defaults.perMinute : parsedPerMinute,
+ perHour: Number.isNaN(parsedPerHour) ? defaults.perHour : parsedPerHour
  };
  }

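The `NaN` guard above means a malformed value such as `OPENCODE_RATE_LIMIT_SEND_PER_MIN=abc` now falls back to the default instead of poisoning the limiter, since the old code would have returned `NaN` as the limit. The rule, distilled into a hypothetical standalone helper:

```typescript
// Fall back to the default when the env value is unset or non-numeric.
// parseInt("abc", 10) is NaN, and NaN fails every numeric comparison.
function parseLimit(envValue: string | undefined, fallback: number): number {
  const parsed = envValue ? parseInt(envValue, 10) : NaN;
  return Number.isNaN(parsed) ? fallback : parsed;
}

console.log(parseLimit("30", 20));      // 30 - valid override wins
console.log(parseLimit("abc", 20));     // 20 - garbage falls back
console.log(parseLimit(undefined, 20)); // 20 - unset falls back
```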
@@ -22388,6 +22409,7 @@ class RedisRateLimiter {
  const pipeline = this.redis.pipeline();
  pipeline.zremrangebyscore(key, 0, windowStart);
  pipeline.zcard(key);
+ pipeline.zrange(key, 0, 0, "WITHSCORES");
  const results = await pipeline.exec();
  if (!results) {
  return { allowed: true, remaining: limit, resetAt: now + windowDuration };
@@ -22397,7 +22419,7 @@ class RedisRateLimiter {
  const allowed = count < limit;
  let resetAt = now + windowDuration;
  if (!allowed) {
- const oldest = await this.redis.zrange(key, 0, 0, "WITHSCORES");
+ const oldest = results[2]?.[1] || [];
  if (oldest.length >= 2) {
  const oldestTimestamp = parseInt(oldest[1], 10);
  resetAt = oldestTimestamp + windowDuration;
@@ -22638,8 +22660,11 @@ function saveSessionState(sessionID, state) {
  }
  const path = getSessionStatePath(sessionID);
  writeFileSync(path, JSON.stringify(state, null, 2));
+ return true;
  } catch (error45) {
- console.warn(`[agent-mail] Could not save session state: ${error45}`);
+ console.error(`[agent-mail] CRITICAL: Could not save session state: ${error45}`);
+ console.error(`[agent-mail] Session state will not persist across CLI invocations!`);
+ return false;
  }
  }
  var sessionStates = new Map;
@@ -22919,6 +22944,7 @@ async function mcpCall(toolName, args) {
  } catch (error45) {
  lastError = error45 instanceof Error ? error45 : new Error(String(error45));
  consecutiveFailures++;
+ const retryable = isRetryableError(error45);
  if (consecutiveFailures >= RECOVERY_CONFIG.failureThreshold && RECOVERY_CONFIG.enabled) {
  console.warn(`[agent-mail] ${consecutiveFailures} consecutive failures, checking server health...`);
  const healthy = await isServerFunctional();
@@ -22927,12 +22953,14 @@ async function mcpCall(toolName, args) {
  const restarted = await restartServer();
  if (restarted) {
  agentMailAvailable = null;
- attempt--;
- continue;
+ if (retryable) {
+ attempt--;
+ continue;
+ }
  }
  }
  }
- if (!isRetryableError(error45)) {
+ if (!retryable) {
  console.warn(`[agent-mail] Non-retryable error for ${toolName}: ${lastError.message}`);
  throw lastError;
  }
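The change above evaluates `isRetryableError` once per failure and reuses the result, so a successful server restart only replays the attempt when the error itself is retryable; previously any restart earned a free retry, even for errors like bad arguments that would fail again. A condensed sketch of that gating decision (names and the retryability heuristic are hypothetical, not the plugin's actual predicate):

```typescript
// Hypothetical predicate: transient network-style failures are retryable.
function isRetryable(message: string): boolean {
  return /timeout|ECONNRESET|ECONNREFUSED/i.test(message);
}

// The fixed behavior: replay after a restart only for retryable errors.
function shouldReplayAfterRestart(message: string, restarted: boolean): boolean {
  const retryable = isRetryable(message); // computed once, reused
  return restarted && retryable;
}

console.log(shouldReplayAfterRestart("connect timeout", true));        // true
console.log(shouldReplayAfterRestart("invalid tool arguments", true)); // false
```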
@@ -23065,12 +23093,12 @@ var agentmail_read_message = tool({
  const messages = await mcpCall("fetch_inbox", {
  project_key: state.projectKey,
  agent_name: state.agentName,
- limit: 1,
+ limit: 50,
  include_bodies: true
  });
  const message = messages.find((m) => m.id === args.message_id);
  if (!message) {
- return `Message ${args.message_id} not found`;
+ return `Message ${args.message_id} not found in recent 50 messages. Try using agentmail_search to locate it.`;
  }
  await recordRateLimitedRequest(state.agentName, "read_message");
  return JSON.stringify(message, null, 2);
@@ -24567,6 +24595,13 @@ var swarm_validate_decomposition = tool({
  for (let i = 0;i < validated.subtasks.length; i++) {
  const deps = validated.subtasks[i].dependencies;
  for (const dep of deps) {
+ if (dep < 0 || dep >= validated.subtasks.length) {
+ return JSON.stringify({
+ valid: false,
+ error: `Invalid dependency: subtask ${i} depends on ${dep}, but only ${validated.subtasks.length} subtasks exist (indices 0-${validated.subtasks.length - 1})`,
+ hint: "Dependency index is out of bounds"
+ }, null, 2);
+ }
  if (dep >= i) {
  return JSON.stringify({
  valid: false,
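The new bounds check above runs before the existing forward-reference check, so an out-of-range index is reported as such rather than as an ordering violation. Both rules together can be summarized in a small standalone helper (hypothetical, simplified from the tool's JSON-returning version):

```typescript
// Two dependency rules: indices must be in range, and a subtask may only
// depend on subtasks that come earlier in the list (lower index).
function validateDependencies(
  subtasks: { dependencies: number[] }[]
): string | null {
  for (let i = 0; i < subtasks.length; i++) {
    for (const dep of subtasks[i].dependencies) {
      if (dep < 0 || dep >= subtasks.length) {
        return `subtask ${i} depends on ${dep}, which is out of bounds`;
      }
      if (dep >= i) {
        return `subtask ${i} depends on ${dep}, which is not an earlier subtask`;
      }
    }
  }
  return null; // valid
}

console.log(validateDependencies([{ dependencies: [] }, { dependencies: [0] }])); // null
console.log(validateDependencies([{ dependencies: [5] }])); // out-of-bounds error
```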
@@ -25232,16 +25267,29 @@ async function resolveSemanticMemoryCommand() {
  return cachedCommand;
  }
  async function execSemanticMemory(args) {
- const cmd = await resolveSemanticMemoryCommand();
- const fullCmd = [...cmd, ...args];
- const proc = Bun.spawn(fullCmd, {
- stdout: "pipe",
- stderr: "pipe"
- });
- const stdout = Buffer.from(await new Response(proc.stdout).arrayBuffer());
- const stderr = Buffer.from(await new Response(proc.stderr).arrayBuffer());
- const exitCode = await proc.exited;
- return { exitCode, stdout, stderr };
+ try {
+ const cmd = await resolveSemanticMemoryCommand();
+ const fullCmd = [...cmd, ...args];
+ const proc = Bun.spawn(fullCmd, {
+ stdout: "pipe",
+ stderr: "pipe"
+ });
+ try {
+ const stdout = Buffer.from(await new Response(proc.stdout).arrayBuffer());
+ const stderr = Buffer.from(await new Response(proc.stderr).arrayBuffer());
+ const exitCode = await proc.exited;
+ return { exitCode, stdout, stderr };
+ } finally {
+ proc.kill();
+ }
+ } catch (error45) {
+ const errorMessage = error45 instanceof Error ? error45.message : String(error45);
+ return {
+ exitCode: 1,
+ stdout: Buffer.from(""),
+ stderr: Buffer.from(`Error executing semantic-memory: ${errorMessage}`)
+ };
+ }
  }
  var DEFAULT_STORAGE_CONFIG = {
  backend: "semantic-memory",
@@ -25503,20 +25551,26 @@ async function createStorageWithFallback(config2 = {}) {
  return new InMemoryStorage;
  }
  var globalStorage = null;
+ var globalStoragePromise = null;
  async function getStorage() {
- if (!globalStorage) {
- globalStorage = await createStorageWithFallback();
+ if (!globalStoragePromise) {
+ globalStoragePromise = createStorageWithFallback().then((storage) => {
+ globalStorage = storage;
+ return storage;
+ });
  }
- return globalStorage;
+ return globalStoragePromise;
  }
  function setStorage(storage) {
  globalStorage = storage;
+ globalStoragePromise = Promise.resolve(storage);
  }
  async function resetStorage() {
  if (globalStorage) {
  await globalStorage.close();
  globalStorage = null;
  }
+ globalStoragePromise = null;
  }

  // src/index.ts
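The `getStorage` change above caches the in-flight promise instead of the resolved value, closing a race where two concurrent callers both saw `globalStorage === null` and each ran `createStorageWithFallback`. A minimal reproduction of the pattern with a counter (names are illustrative, not the plugin's):

```typescript
// Promise-caching singleton: concurrent callers share one initialization.
let initCount = 0;
let cachedPromise: Promise<string> | null = null;

async function createStorage(): Promise<string> {
  initCount++; // counts how many times initialization actually starts
  await new Promise((resolve) => setTimeout(resolve, 10)); // simulate async setup
  return "storage";
}

function getStorage(): Promise<string> {
  // The null check and assignment happen synchronously, so the second
  // concurrent caller always sees the promise the first caller stored.
  if (!cachedPromise) {
    cachedPromise = createStorage();
  }
  return cachedPromise;
}

// With value-caching (`if (!cachedValue) cachedValue = await create()`),
// two concurrent calls would each start createStorage before either awaited.
Promise.all([getStorage(), getStorage()]).then(() => console.log(initCount)); // 1
```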