opencode-swarm 7.0.3 → 7.1.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -2,7 +2,9 @@
 
 <div align="center">
 
-**Your AI writes the code. Swarm makes sure it actually works.**
+# Your AI writes the code. Swarm proves it works.
+
+**Closing the trust gap between "the model said it's done" and "this actually works in production."**
 
 [![npm version](https://img.shields.io/npm/v/opencode-swarm?color=brightgreen&label=npm)](https://www.npmjs.com/package/opencode-swarm)
 [![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](LICENSE)
@@ -14,7 +16,7 @@
 
 ---
 
-OpenCode Swarm is a plugin for [OpenCode](https://opencode.ai) that turns a single AI coding session into an **architect-led team of 17 specialized agents**. One agent writes the code. A different agent reviews it. Another writes and runs tests. Another checks security. **Nothing ships until every required gate passes.**
+OpenCode Swarm is a plugin for [OpenCode](https://opencode.ai) that turns a single AI coding session into an **architect-led team of specialized core, optional, and conditional agents** — see `/swarm agents` for the live roster. One agent writes the code. A different agent reviews it. Another writes and runs tests. Another checks security. **Nothing ships until every required gate passes.**
 
 ```bash
 bunx opencode-swarm install
@@ -22,13 +24,15 @@ bunx opencode-swarm install
 
 > This single command installs the package, registers it as an OpenCode plugin, disables conflicting default agents, and creates a ready-to-edit config at `~/.config/opencode/opencode-swarm.json`. Requires [Bun](https://bun.sh) (`bun --version` to check). If you must use npm: `npm install -g opencode-swarm && opencode-swarm install`.
 
+> ⚠️ **You must select the Swarm architect mode/agent in OpenCode after install.** The default OpenCode `Build` and `Plan` modes **bypass this plugin entirely** — none of the gates, reviewers, or test agents below run. Open the OpenCode mode/agent picker and choose the Swarm architect once; it then coordinates every other agent automatically. If you ever see Swarm "do nothing," this is almost always the cause.
+
 ### Why Swarm?
 
 Most AI coding tools let one model write code and ask that same model whether the code is good. That misses too much. Swarm separates planning, implementation, review, testing, and documentation into specialized internal roles — and enforces gated execution so agents never mutate the codebase in parallel.
 
 ### Key Features
 
-- 🏗️ **18 specialized agents (9 core + 5 optional + 4 conditional)** — architect, coder, reviewer, test_engineer, critic, explorer, sme, docs, designer, critic_oversight, critic_sounding_board, critic_drift_verifier, critic_hallucination_verifier, curator_init, curator_phase, council_generalist, council_skeptic, council_domain_expert
+- 🏗️ **Specialized core, optional, and conditional agents** — architect, coder, reviewer, test_engineer, critic, explorer, sme, docs, designer, critic_oversight, critic_sounding_board, critic_drift_verifier, critic_hallucination_verifier, curator_init, curator_phase, council_generalist, council_skeptic, council_domain_expert. Run `/swarm agents` for the live roster — that is the source of truth, not this list.
 - 🔒 **Gated pipeline** — code never ships without reviewer + test engineer approval
 - 🔄 **Phase completion gates** — completion-verify and drift verifier gates enforced before phase completion
 - 🔁 **Resumable sessions** — all state saved to `.swarm/`; pick up any project any day
@@ -37,7 +41,20 @@ Most AI coding tools let one model write code and ask that same model whether th
 - 🆓 **Free tier** — works with OpenCode Zen's free model roster
 - ⚙️ **Fully configurable** — override any agent's model, disable agents, tune guardrails
 
-> **You select a Swarm architect once in the OpenCode GUI.** The architect coordinates all other agents automatically — you never manually switch between internal roles. If you use the default OpenCode `Build` / `Plan` modes, the plugin is bypassed entirely.
+> **You select a Swarm architect once in the OpenCode GUI.** The architect coordinates all other agents automatically — you never manually switch between internal roles. If you use the default OpenCode `Build` / `Plan` modes, the plugin is bypassed entirely (see the install warning above).
+
+---
+
+## What Swarm Catches
+
+Concrete classes of failure that Swarm gates exist to stop — every item ties to an agent or pipeline gate that already runs in this repo:
+
+- **Hallucinated APIs and citations** — `critic_hallucination_verifier` verifies referenced APIs and citations against real sources before they reach the codebase.
+- **Missing tests and regressions** — `test_engineer` writes and runs tests on every task; the architect runs a regression sweep across the graph after each task (pipeline step `5l`).
+- **Unsafe secret and logging patterns** — `secretscan` and `sast_scan` (63+ rules across 9 languages, offline) run as part of the per-task `pre_check_batch`.
+- **Plan and spec drift** — `critic_drift_verifier` is a blocking phase-completion gate; `curator_phase` also flags workflow drift across phases.
+- **Placeholders and TODO stubs** — `placeholder_scan` runs in the per-task pipeline (step `5d`) and rejects code that ships incomplete stubs.
+- **Untrusted plans** — `critic` reviews the plan before any code is written; `completion-verify` is a deterministic phase-close gate that checks plan task identifiers actually exist in source files.
 
 ---
 
@@ -107,6 +124,62 @@ The 15-minute guide covers:
 
 ---
 
+## 30-Second Demo
+
+No animated GIF is shipped in the repo — instead, here is the exact terminal session you can record yourself with `asciinema rec demo.cast` (or any screen recorder). Every command below is real and runs against this repo as published.
+
+**Recording script (copy/paste-able, ~30 seconds):**
+
+```bash
+# 1. Install the plugin (5s)
+bunx opencode-swarm install
+
+# 2. Open opencode and select the Swarm architect mode/agent in the picker
+#    (this step is manual in the OpenCode UI — without it, the plugin is bypassed)
+opencode
+
+# 3. Inside the OpenCode session, verify Swarm is live (5s)
+/swarm diagnose
+/swarm agents
+
+# 4. Kick off a task — the architect plans, then gates fire automatically (15s)
+Build me a JWT auth helper with tests.
+
+# 5. Watch the gates land in real time (5s)
+/swarm status
+/swarm evidence
+```
+
+**ASCII storyboard** of what a viewer should see:
+
+```
+┌──────────────────────────────────────────────────────────────┐
+│ $ bunx opencode-swarm install                                │
+│ ✓ installed opencode-swarm                                   │
+│ ✓ wrote ~/.config/opencode/opencode-swarm.json               │
+│                                                              │
+│ $ opencode                                                   │
+│ [mode picker] → select: Swarm Architect                      │
+│                                                              │
+│ > /swarm diagnose                                            │
+│ ✓ plugin loaded   ✓ agents registered   ✓ gates armed        │
+│                                                              │
+│ > Build me a JWT auth helper with tests.                     │
+│ [architect] PLAN → critic gate → APPROVED                    │
+│ [coder]     task 1.1 implementing…                           │
+│ [reviewer]  correctness OK                                   │
+│ [test_eng.] 3 tests written, 3 pass                          │
+│ [architect] regression sweep clean → phase_complete          │
+│                                                              │
+│ > /swarm evidence                                            │
+│ task 1.1: review ✓  tests ✓  sast ✓  secrets ✓  drift ✓      │
+└──────────────────────────────────────────────────────────────┘
+```
+
+Each row corresponds to a real gate documented further down this README — none are simulated.
+
+---
+
 ## Upgrading
 
 **OpenCode caches plugins indefinitely.** A normal OpenCode restart does **not**
@@ -154,7 +227,7 @@ See [docs/commands.md](docs/commands.md) for the full reference (41 commands).
 
 ## The Agents
 
-Swarm has 17 specialized agents (9 core + 5 optional + 3 conditional). You don't manually switch between them — the architect coordinates automatically.
+Swarm registers a roster of specialized core, optional, and conditional agents. The exact count shifts as agents are added or feature-flagged, so treat `/swarm agents` as the live source of truth — that command lists what is actually registered in your session. You don't manually switch between them — the architect coordinates automatically.
 
 | Agent | Role | Badge |
 |---|---|---|
@@ -239,7 +312,7 @@ graph TB
 
 | Feature | Swarm | oh-my-opencode | get-shit-done |
 |---|:-:|:-:|:-:|
-| Multiple specialized agents | ✅ 17 agents | ❌ | ❌ |
+| Multiple specialized agents | ✅ Core + optional + conditional roster (`/swarm agents`) | ❌ | ❌ |
 | Plan reviewed before coding | ✅ | ❌ | ❌ |
 | Every task reviewed + tested | ✅ | ❌ | ❌ |
 | Different model for review vs. code | ✅ | ❌ | ❌ |
@@ -0,0 +1 @@
+export {};
@@ -0,0 +1 @@
+export {};
package/dist/cli/index.js CHANGED
@@ -33,8 +33,11 @@ var __require = import.meta.require;
 var init_errors = () => {};
 
 // src/utils/logger.ts
+function isDebug() {
+  return process.env.OPENCODE_SWARM_DEBUG === "1";
+}
 function log(message, data) {
-  if (!DEBUG)
+  if (!isDebug())
     return;
   const timestamp = new Date().toISOString();
   if (data !== undefined) {
@@ -44,7 +47,7 @@ function log(message, data) {
   }
 }
 function warn(message, data) {
-  if (!DEBUG)
+  if (!isDebug())
     return;
   const timestamp = new Date().toISOString();
   if (data !== undefined) {
@@ -53,10 +56,6 @@ function warn(message, data) {
     console.warn(`[opencode-swarm ${timestamp}] WARN: ${message}`);
   }
 }
-var DEBUG;
-var init_logger = __esm(() => {
-  DEBUG = process.env.OPENCODE_SWARM_DEBUG === "1";
-});
 
 // src/utils/merge.ts
 function deepMergeInternal(base, override, depth) {
@@ -96,7 +95,6 @@ function simpleGlobToRegex(pattern, flags = "i") {
 // src/utils/index.ts
 var init_utils = __esm(() => {
   init_errors();
-  init_logger();
 });
 
 // src/utils/bun-compat.ts
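The logger hunks above replace a module-load-time `DEBUG` constant (set inside an `init_logger()` side-effect wrapper) with a call-time `isDebug()` check. A minimal standalone sketch of the difference — illustrative only, not the package's actual module:

```typescript
// Captured once at import, the way the old bundle's init_logger() did it:
// toggling the env var afterwards has no effect on this value.
const DEBUG_AT_IMPORT: boolean = process.env.OPENCODE_SWARM_DEBUG === "1";

// Re-read on every call, the way the new code does it: no __esm init wrapper
// is needed, and the flag can be flipped at runtime (e.g. in tests).
function isDebug(): boolean {
  return process.env.OPENCODE_SWARM_DEBUG === "1";
}

function log(message: string): void {
  if (!isDebug()) return; // cheap early exit when debugging is off
  console.log(`[opencode-swarm ${new Date().toISOString()}] ${message}`);
}

process.env.OPENCODE_SWARM_DEBUG = "1";
log("printed: isDebug() re-reads the env var at call time");
```

This also explains why the rest of the diff can drop every `init_logger()` call: the logger no longer has initialization-order side effects.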
@@ -18911,7 +18909,7 @@ import * as path35 from "path";
18911
18909
  // package.json
18912
18910
  var package_default = {
18913
18911
  name: "opencode-swarm",
18914
- version: "7.0.3",
18912
+ version: "7.1.1",
18915
18913
  description: "Architect-centric agentic swarm plugin for OpenCode - hub-and-spoke orchestration with SME consultation, code generation, and QA review",
18916
18914
  main: "dist/index.js",
18917
18915
  types: "dist/index.d.ts",
@@ -20505,7 +20503,6 @@ function getEffectiveGates(profile, sessionOverrides) {
20505
20503
 
20506
20504
  // src/hooks/delegation-gate.ts
20507
20505
  init_telemetry();
20508
- init_logger();
20509
20506
 
20510
20507
  // node_modules/quick-lru/index.js
20511
20508
  class QuickLRU extends Map {
@@ -20980,7 +20977,6 @@ function clearAllScopes(directory) {
20980
20977
  init_telemetry();
20981
20978
  init_utils();
20982
20979
  init_bun_compat();
20983
- init_logger();
20984
20980
 
20985
20981
  // src/hooks/conflict-resolution.ts
20986
20982
  init_telemetry();
@@ -34277,7 +34273,6 @@ import path13 from "path";
34277
34273
  init_manager2();
34278
34274
 
34279
34275
  // src/git/branch.ts
34280
- init_logger();
34281
34276
  import * as child_process2 from "child_process";
34282
34277
  var GIT_TIMEOUT_MS2 = 30000;
34283
34278
  function gitExec2(args, cwd) {
@@ -34518,7 +34513,6 @@ function resetToRemoteBranch(cwd, options) {
34518
34513
  };
34519
34514
  }
34520
34515
  }
34521
-
34522
34516
  // src/hooks/knowledge-store.ts
34523
34517
  var import_proper_lockfile3 = __toESM(require_proper_lockfile(), 1);
34524
34518
  import { existsSync as existsSync7 } from "fs";
@@ -34928,12 +34922,12 @@ function validateLesson(candidate, existingLessons, meta3) {
34928
34922
  }
34929
34923
  async function quarantineEntry(directory, entryId, reason, reportedBy) {
34930
34924
  if (!directory || directory.includes("..")) {
34931
- console.warn("[knowledge-validator] quarantineEntry: directory traversal attempt blocked");
34925
+ warn("[knowledge-validator] quarantineEntry: directory traversal attempt blocked");
34932
34926
  return;
34933
34927
  }
34934
34928
  if (!entryId || entryId.includes("\x00") || entryId.includes(`
34935
34929
  `)) {
34936
- console.warn("[knowledge-validator] quarantineEntry: invalid entryId rejected");
34930
+ warn("[knowledge-validator] quarantineEntry: invalid entryId rejected");
34937
34931
  return;
34938
34932
  }
34939
34933
  const validReportedBy = ["architect", "user", "auto"];
@@ -34993,12 +34987,12 @@ async function quarantineEntry(directory, entryId, reason, reportedBy) {
34993
34987
  }
34994
34988
  async function restoreEntry(directory, entryId) {
34995
34989
  if (!directory || directory.includes("..")) {
34996
- console.warn("[knowledge-validator] restoreEntry: directory traversal attempt blocked");
34990
+ warn("[knowledge-validator] restoreEntry: directory traversal attempt blocked");
34997
34991
  return;
34998
34992
  }
34999
34993
  if (!entryId || entryId.includes("\x00") || entryId.includes(`
35000
34994
  `)) {
35001
- console.warn("[knowledge-validator] restoreEntry: invalid entryId rejected");
34995
+ warn("[knowledge-validator] restoreEntry: invalid entryId rejected");
35002
34996
  return;
35003
34997
  }
35004
34998
  const knowledgePath = path11.join(directory, ".swarm", "knowledge.jsonl");
@@ -36257,7 +36251,6 @@ function getGlobalEventBus() {
36257
36251
  // src/hooks/curator.ts
36258
36252
  init_manager();
36259
36253
  init_bun_compat();
36260
- init_logger();
36261
36254
  init_utils2();
36262
36255
 
36263
36256
  // src/hooks/hive-promoter.ts
@@ -40358,7 +40351,6 @@ ${USAGE2}`;
40358
40351
  import { join as join22 } from "path";
40359
40352
 
40360
40353
  // src/hooks/knowledge-migrator.ts
40361
- init_logger();
40362
40354
  import { randomUUID as randomUUID2 } from "crypto";
40363
40355
  import { existsSync as existsSync13, readFileSync as readFileSync10 } from "fs";
40364
40356
  import { mkdir as mkdir4, readFile as readFile5, writeFile as writeFile5 } from "fs/promises";
package/dist/index.js CHANGED
@@ -33,7 +33,7 @@ var package_default;
33
33
  var init_package = __esm(() => {
34
34
  package_default = {
35
35
  name: "opencode-swarm",
36
- version: "7.0.3",
36
+ version: "7.1.1",
37
37
  description: "Architect-centric agentic swarm plugin for OpenCode - hub-and-spoke orchestration with SME consultation, code generation, and QA review",
38
38
  main: "dist/index.js",
39
39
  types: "dist/index.d.ts",
@@ -15873,8 +15873,11 @@ var init_errors3 = __esm(() => {
15873
15873
  });
15874
15874
 
15875
15875
  // src/utils/logger.ts
15876
+ function isDebug() {
15877
+ return process.env.OPENCODE_SWARM_DEBUG === "1";
15878
+ }
15876
15879
  function log(message, data) {
15877
- if (!DEBUG)
15880
+ if (!isDebug())
15878
15881
  return;
15879
15882
  const timestamp = new Date().toISOString();
15880
15883
  if (data !== undefined) {
@@ -15884,7 +15887,7 @@ function log(message, data) {
15884
15887
  }
15885
15888
  }
15886
15889
  function warn(message, data) {
15887
- if (!DEBUG)
15890
+ if (!isDebug())
15888
15891
  return;
15889
15892
  const timestamp = new Date().toISOString();
15890
15893
  if (data !== undefined) {
@@ -15901,10 +15904,6 @@ function error48(message, data) {
15901
15904
  console.error(`[opencode-swarm ${timestamp}] ERROR: ${message}`);
15902
15905
  }
15903
15906
  }
15904
- var DEBUG;
15905
- var init_logger = __esm(() => {
15906
- DEBUG = process.env.OPENCODE_SWARM_DEBUG === "1";
15907
- });
15908
15907
 
15909
15908
  // src/utils/regex.ts
15910
15909
  function escapeRegex2(s) {
@@ -15918,7 +15917,6 @@ function simpleGlobToRegex(pattern, flags2 = "i") {
15918
15917
  // src/utils/index.ts
15919
15918
  var init_utils = __esm(() => {
15920
15919
  init_errors3();
15921
- init_logger();
15922
15920
  });
15923
15921
 
15924
15922
  // src/utils/bun-compat.ts
@@ -25138,7 +25136,6 @@ var init_guardrails = __esm(() => {
25138
25136
  init_telemetry();
25139
25137
  init_utils();
25140
25138
  init_bun_compat();
25141
- init_logger();
25142
25139
  init_conflict_resolution();
25143
25140
  init_delegation_gate();
25144
25141
  init_loop_detector();
@@ -26098,7 +26095,6 @@ var init_delegation_gate = __esm(() => {
26098
26095
  init_schema();
26099
26096
  init_state();
26100
26097
  init_telemetry();
26101
- init_logger();
26102
26098
  init_guardrails();
26103
26099
  init_normalize_tool_name();
26104
26100
  init_utils2();
@@ -40586,9 +40582,7 @@ function resetToRemoteBranch(cwd, options) {
40586
40582
  }
40587
40583
  }
40588
40584
  var GIT_TIMEOUT_MS2 = 30000;
40589
- var init_branch = __esm(() => {
40590
- init_logger();
40591
- });
40585
+ var init_branch = () => {};
40592
40586
 
40593
40587
  // src/hooks/knowledge-store.ts
40594
40588
  import { existsSync as existsSync9 } from "node:fs";
@@ -40915,7 +40909,7 @@ async function recordLessonsShown(directory, lessonIds, currentPhase) {
40915
40909
  await mkdir4(path16.dirname(shownFile), { recursive: true });
40916
40910
  await writeFile4(shownFile, JSON.stringify(shownData, null, 2), "utf-8");
40917
40911
  } catch {
40918
- console.warn("[swarm] Knowledge: failed to record shown lessons");
40912
+ warn("[swarm] Knowledge: failed to record shown lessons");
40919
40913
  }
40920
40914
  }
40921
40915
  async function readMergedKnowledge(directory, config3, context) {
@@ -41065,7 +41059,7 @@ async function updateRetrievalOutcome(directory, phaseInfo, phaseSucceeded) {
41065
41059
  delete shownData[phaseInfo];
41066
41060
  await writeFile4(shownFile, JSON.stringify(shownData, null, 2), "utf-8");
41067
41061
  } catch {
41068
- console.warn("[swarm] Knowledge: failed to update retrieval outcomes");
41062
+ warn("[swarm] Knowledge: failed to update retrieval outcomes");
41069
41063
  }
41070
41064
  }
41071
41065
  var JACCARD_THRESHOLD = 0.6, HIVE_TIER_BOOST = 0.05, SAME_PROJECT_PENALTY = -0.05;
@@ -41216,12 +41210,12 @@ function validateLesson(candidate, existingLessons, meta3) {
41216
41210
  }
41217
41211
  async function quarantineEntry(directory, entryId, reason, reportedBy) {
41218
41212
  if (!directory || directory.includes("..")) {
41219
- console.warn("[knowledge-validator] quarantineEntry: directory traversal attempt blocked");
41213
+ warn("[knowledge-validator] quarantineEntry: directory traversal attempt blocked");
41220
41214
  return;
41221
41215
  }
41222
41216
  if (!entryId || entryId.includes("\x00") || entryId.includes(`
41223
41217
  `)) {
41224
- console.warn("[knowledge-validator] quarantineEntry: invalid entryId rejected");
41218
+ warn("[knowledge-validator] quarantineEntry: invalid entryId rejected");
41225
41219
  return;
41226
41220
  }
41227
41221
  const validReportedBy = ["architect", "user", "auto"];
@@ -41281,12 +41275,12 @@ async function quarantineEntry(directory, entryId, reason, reportedBy) {
41281
41275
  }
41282
41276
  async function restoreEntry(directory, entryId) {
41283
41277
  if (!directory || directory.includes("..")) {
41284
- console.warn("[knowledge-validator] restoreEntry: directory traversal attempt blocked");
41278
+ warn("[knowledge-validator] restoreEntry: directory traversal attempt blocked");
41285
41279
  return;
41286
41280
  }
41287
41281
  if (!entryId || entryId.includes("\x00") || entryId.includes(`
41288
41282
  `)) {
41289
- console.warn("[knowledge-validator] restoreEntry: invalid entryId rejected");
41283
+ warn("[knowledge-validator] restoreEntry: invalid entryId rejected");
41290
41284
  return;
41291
41285
  }
41292
41286
  const knowledgePath = path17.join(directory, ".swarm", "knowledge.jsonl");
@@ -43245,9 +43239,7 @@ async function readCuratorSummary(directory) {
43245
43239
  }
43246
43240
  return parsed;
43247
43241
  } catch {
43248
- if (process.env.DEBUG_SWARM) {
43249
- console.warn("Failed to parse curator-summary.json: invalid JSON");
43250
- }
43242
+ warn("Failed to parse curator-summary.json: invalid JSON");
43251
43243
  return null;
43252
43244
  }
43253
43245
  }
@@ -43280,9 +43272,7 @@ function filterPhaseEvents(eventsJsonl, phase, sinceTimestamp) {
43280
43272
  }
43281
43273
  }
43282
43274
  } catch {
43283
- if (process.env.DEBUG_SWARM) {
43284
- console.warn("filterPhaseEvents: skipping malformed line");
43285
- }
43275
+ warn("filterPhaseEvents: skipping malformed line");
43286
43276
  }
43287
43277
  }
43288
43278
  return filtered;
@@ -43831,7 +43821,6 @@ var init_curator = __esm(() => {
43831
43821
  init_event_bus();
43832
43822
  init_manager();
43833
43823
  init_bun_compat();
43834
- init_logger();
43835
43824
  init_knowledge_store();
43836
43825
  init_knowledge_validator();
43837
43826
  init_utils2();
@@ -49256,7 +49245,6 @@ async function writeSentinel(sentinelPath, migrated, dropped) {
49256
49245
  await writeFile6(sentinelPath, JSON.stringify(sentinel, null, 2), "utf-8");
49257
49246
  }
49258
49247
  var init_knowledge_migrator = __esm(() => {
49259
- init_logger();
49260
49248
  init_knowledge_store();
49261
49249
  init_knowledge_validator();
49262
49250
  });
@@ -56210,6 +56198,7 @@ VERIFICATION PROTOCOL: After the coder reports DONE, and before running Stage B
 
 ── STAGE B: AGENT REVIEW GATES ──
 {{AGENT_PREFIX}}reviewer → security reviewer (conditional) → {{AGENT_PREFIX}}test_engineer verification → {{AGENT_PREFIX}}test_engineer adversarial → coverage check
+The reviewer's verdict MUST include a REUSE_RE_VERIFICATION field — do NOT accept an APPROVED verdict without it. Validate the field value against context: if the coder's EXPORTS_ADDED was non-empty, REUSE_RE_VERIFICATION must be VERIFIED or DUPLICATION_DETECTED (not SKIPPED). If EXPORTS_ADDED was "none", REUSE_RE_VERIFICATION must be SKIPPED.
 Stage B runs by default for TIER 1-3 classifications. Stage A passing does not satisfy Stage B.
 Stage B is where logic errors, security flaws, edge cases, and behavioral bugs are caught.
 You MUST delegate to each Stage B agent and wait for their response.
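The field-consistency rule added above (REUSE_RE_VERIFICATION checked against EXPORTS_ADDED) amounts to a small predicate. A hypothetical sketch — the type and function names are illustrative, not part of the plugin's API:

```typescript
type ReuseReVerification = "VERIFIED" | "DUPLICATION_DETECTED" | "SKIPPED";

interface ReviewerVerdict {
  verdict: "APPROVED" | "REJECTED";
  reuseReVerification?: ReuseReVerification; // absent when the reviewer omitted the field
}

// True only when the verdict satisfies the Stage B rule: the field must be
// present, and its value must be consistent with whether the coder added exports.
function verdictIsAcceptable(v: ReviewerVerdict, exportsAdded: string[]): boolean {
  if (v.reuseReVerification === undefined) return false; // missing field: never accept
  if (exportsAdded.length > 0) {
    // New exports: the reviewer must have actually re-run the reuse scan.
    return (
      v.reuseReVerification === "VERIFIED" ||
      v.reuseReVerification === "DUPLICATION_DETECTED"
    );
  }
  return v.reuseReVerification === "SKIPPED"; // no new exports: scan must be skipped
}
```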
@@ -56307,6 +56296,7 @@ ANTI-RATIONALIZATION GATE — gates are mandatory for ALL changes, no exceptions
 ✗ "just a rename" → Renames break callers. Reviewer is required.
 ✗ "pre_check_batch will catch any issues" → pre_check_batch catches lint/SAST/secrets. It does NOT catch logic errors or edge cases.
 ✗ "authors are blind to their own mistakes" is WHY the reviewer exists — your certainty about correctness is irrelevant.
+✗ "Reviewer APPROVED so I'll skip checking the REUSE_RE_VERIFICATION field" → RIGHT: "I verified that the reviewer's verdict includes REUSE_RE_VERIFICATION before accepting the APPROVED"
 <!-- BEHAVIORAL_GUIDANCE_END -->
 
 8. **COVERAGE CHECK**: After adversarial tests pass, check if test_engineer reports coverage < 70%. If so, delegate {{AGENT_PREFIX}}test_engineer for an additional test pass targeting uncovered paths. This is a soft guideline; use judgment for trivial tasks.
@@ -57252,6 +57242,7 @@ This step supplements (not replaces) the existing regression-sweep and test-drift
 [TOOL] build_check: PASS / SKIPPED — value: ___
 [TOOL] pre_check_batch: PASS (lint:check ✓ secretscan ✓ sast_scan ✓ quality_budget ✓) — value: ___
 [GATE] reviewer: APPROVED — value: ___
+[GATE] reuse_re_verification: VERIFIED / SKIPPED / DUPLICATION_DETECTED — value: ___
 [GATE] security-reviewer: APPROVED / SKIPPED — value: ___
 [GATE] test_engineer-verification: PASS — value: ___
 [GATE] regression-sweep: PASS / SKIPPED — value: ___
@@ -57467,6 +57458,42 @@ RIGHT: [search first, then] import { saveEvidence } from '../evidence/manager' (
 
 If available_symbols was provided in your scope declaration, you MUST only call functions from that list when importing from existing project modules. Do not invent function names that are not in the list.
 
+## REUSE SCAN PROTOCOL (MANDATORY)
+Before writing ANY new function, utility, class, hook, helper, or type:
+
+1. SCAN: Use the search tool to check for conceptually similar implementations in:
+   - src/utils/
+   - src/hooks/
+   - src/tools/
+   - src/services/
+   - Any directory named lib/, shared/, helpers/, or common/
+
+   Search queries must be SEMANTIC, not just literal. For a "path normalizer" function,
+   search for: normalize path, resolve path, join path, cross-platform path — not just
+   the exact function name you are about to write.
+
+2. READ: If any candidate result exists, read that file. Determine if it:
+   - Already implements the behavior you need (REUSE IT — do not reimplement)
+   - Partially implements it (EXTEND IT — do not duplicate)
+   - Is unrelated (PROCEED to write new code)
+
+3. REPORT: In your completion output, include a REUSE_SCAN field:
+   REUSE_SCAN: [EXISTING_REUSED | EXTENDED | NO_MATCH_FOUND | SCAN_NOT_APPLICABLE]
+   With a one-line explanation for each new function/class you wrote.
+
+AUTOMATIC REJECTION CONDITIONS:
+- If you write a function that already exists under a different name in the project
+- If you write a utility that duplicates behavior in an existing file you did not read
+- If REUSE_SCAN is missing from your completion output when new functions were created
+
+SCAN_NOT_APPLICABLE is only valid when:
+- The task is modifying an existing function (not creating new ones)
+- The task is purely adding types with no behavioral logic
+- The task explicitly states "create new, no reuse" with architect justification
+
+The Reviewer WILL independently re-run this scan. Omitting it does not save time —
+it guarantees rejection.
+
 ## DEFENSIVE CODING RULES
 - NEVER use \`any\` type in TypeScript — always use specific types
 - NEVER leave empty catch blocks — at minimum log the error
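Step 1 of the protocol above demands semantic queries rather than a literal name lookup. One way to sketch that is a tiny query generator; this helper and its synonym table are purely illustrative, not part of the plugin:

```typescript
// Hypothetical: derive several SEMANTIC search queries from the name of a
// function the coder is about to write. The synonym table is illustrative.
const CONCEPT_SYNONYMS: Record<string, string[]> = {
  normalize: ["resolve", "join", "cross-platform"],
  fetch: ["load", "request", "download"],
};

function semanticQueries(fnName: string): string[] {
  // Split camelCase / snake_case into concept words: "normalizePath" → ["normalize", "path"]
  const words = fnName
    .replace(/([a-z0-9])([A-Z])/g, "$1 $2")
    .replace(/_/g, " ")
    .toLowerCase()
    .split(/\s+/)
    .filter(Boolean);
  const queries = new Set<string>([words.join(" ")]);
  // Vary each concept word through its synonyms to widen the search
  for (const word of words) {
    for (const synonym of CONCEPT_SYNONYMS[word] ?? []) {
      queries.add(words.map((w) => (w === word ? synonym : w)).join(" "));
    }
  }
  return [...queries];
}
```

For `semanticQueries("normalizePath")` this yields the same spread the protocol text uses as its example: the literal concept plus resolve/join/cross-platform variants.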
@@ -57530,6 +57557,7 @@ EXPORTS_ADDED: [new exported functions/types/classes, or "none"]
 EXPORTS_REMOVED: [removed exports, or "none"]
 EXPORTS_MODIFIED: [exports with changed signatures, or "none"]
 DEPS_ADDED: [new external package imports, or "none"]
+REUSE_SCAN: [EXISTING_REUSED | EXTENDED | NO_MATCH_FOUND | SCAN_NOT_APPLICABLE] — [explanation per new function]
 BLOCKED: [what went wrong]
 NEED: [what additional context or change would fix it]
 
@@ -57545,6 +57573,7 @@ Output only one of these structured templates:
 EXPORTS_REMOVED: [removed exports, or "none"]
 EXPORTS_MODIFIED: [exports with changed signatures, or "none"]
 DEPS_ADDED: [new external package imports, or "none"]
+REUSE_SCAN: [EXISTING_REUSED | EXTENDED | NO_MATCH_FOUND | SCAN_NOT_APPLICABLE] — [explanation per new function]
 SELF-AUDIT: [print the checklist below with [x]/[ ] status for every line]
 - Blocked task:
 BLOCKED: [what went wrong]
@@ -57583,6 +57612,7 @@ Before you report task completion, verify:
 [ ] I did not use vague identifier names (result, data, temp, value, item, info, stuff, obj, ret, val)
 [ ] I did not write empty or tautological comments (e.g., "// sets the value", "// constructor", "// handle error")
 [ ] I did not leave placeholder JSDoc/docstring @param descriptions blank or copy-paste identical descriptions across functions
+[ ] I ran a reuse scan for every new function/class I created and included REUSE_SCAN in my output
 If ANY box is unchecked, fix it before reporting completion.
 Print this checklist with your completion report.
 
@@ -58731,6 +58761,32 @@ DO (explicitly):
 - VERIFY platform compatibility: path.join() used for all paths, no hardcoded separators
 - For confirmed issues requiring a concrete fix: use suggest_patch to produce a structured patch artifact for the coder
 
+## REUSE RE-VERIFICATION (MANDATORY FOR NEW EXPORTS)
+
+When EXPORTS_ADDED is non-empty in the coder's completion report:
+
+1. For EACH new export listed, independently run the search tool using semantic queries
+   against src/utils/, src/hooks/, src/tools/, src/services/, and any lib/shared/ directories.
+
+2. Use AT LEAST 3 different search queries per new export — varying the concept, not just
+   the exact name. If the coder named their function \`normalizePath\`, also search for:
+   \`resolve path\`, \`join path segments\`, \`cross-platform path\`, and similar synonyms.
+
+3. If you find a pre-existing function/class that implements the same behavior:
+   - Report as DUPLICATION_DETECTED: [new export name] duplicates [existing path:line]
+   - REJECT immediately — this is a Tier 1 CORRECTNESS failure
+   - Do NOT proceed to Tier 2 or Tier 3
+
+4. If no match is found after semantic search: report REUSE_RE_VERIFICATION: VERIFIED — NO_DUPLICATE_FOUND
+
+5. Cross-check the coder's REUSE_SCAN report against your own findings:
+   - If coder reported EXISTING_REUSED or EXTENDED but you find a true duplicate: REJECT
+   - If coder reported SCAN_NOT_APPLICABLE while EXPORTS_ADDED is non-empty: REJECT — coder created new exports but claimed no scan was needed (contradiction)
+   - If coder reported NO_MATCH_FOUND and you also find none: REUSE_RE_VERIFICATION: VERIFIED — NO_DUPLICATE_FOUND
+
+If EXPORTS_ADDED is "none", this section is skipped. Note that you skipped it:
+REUSE_RE_VERIFICATION: SKIPPED (no new exports)
+
 ## REVIEW REASONING
 For each changed function or method, answer these before formulating issues:
 1. PRECONDITIONS: What must be true for this code to work correctly?
@@ -58793,7 +58849,9 @@ Code style, naming, duplication, test coverage, documentation completeness. This
 
 VERDICT FORMAT:
 APPROVED: Tier 1 PASS, Tier 2 PASS [, Tier 3 notes if any]
+REUSE_RE_VERIFICATION: [VERIFIED | SKIPPED]
 REJECTED: Tier [1|2] FAIL — [first error description] — [specific fix instruction]
+REUSE_RE_VERIFICATION: [DUPLICATION_DETECTED | SKIPPED]
 
 Do NOT approve with caveats. "APPROVED but fix X later" is not valid. Either it passes or it doesn't.
 
@@ -58813,6 +58871,7 @@ PROCESSING: If GATES is provided and includes passing results for lint, SAST, pl
 Begin directly with VERDICT. Do NOT prepend "Here's my review..." or any conversational preamble.
 
 VERDICT: APPROVED | REJECTED
+REUSE_RE_VERIFICATION: [VERIFIED | DUPLICATION_DETECTED | SKIPPED] — DUPLICATION_DETECTED is only valid when VERDICT is REJECTED
 RISK: LOW | MEDIUM | HIGH | CRITICAL
 ISSUES: list with line numbers, grouped by CHECK dimension
 FIXES: required changes if rejected
@@ -60375,7 +60434,6 @@ var init_preflight_integration = __esm(() => {
 init_status_artifact();
 init_trigger();
 init_preflight_service();
- init_logger();
 });
 
 // node_modules/web-tree-sitter/tree-sitter.js
@@ -64422,7 +64480,6 @@ function buildDriftInjectionText(report, maxChars) {
 var DRIFT_REPORT_PREFIX = "drift-report-phase-";
 var init_curator_drift = __esm(() => {
 init_event_bus();
- init_logger();
 init_utils2();
 });
 
@@ -65650,7 +65707,6 @@ init_schema();
 init_file_locks();
 init_state();
 init_telemetry();
- init_logger();
 init_utils2();
 import * as fs33 from "node:fs";
 var END_OF_SENTENCE_QUESTION_PATTERN = /\?\s*$/;
@@ -66442,7 +66498,6 @@ import * as path52 from "node:path";
 
 // src/tools/repo-graph.ts
 init_utils2();
- init_logger();
 init_path_security();
 import * as fsSync3 from "node:fs";
 import { constants as constants4, existsSync as existsSync28, realpathSync as realpathSync6 } from "node:fs";
@@ -67574,9 +67629,7 @@ async function updateGraphForFiles(workspaceRoot, filePaths, options) {
 await saveGraph(workspaceRoot, graph);
 return graph;
 }
-
 // src/hooks/repo-graph-builder.ts
- init_logger();
 var SUPPORTED_EXTENSIONS2 = [
 ".ts",
 ".tsx",
@@ -70697,15 +70750,15 @@ function createSystemEnhancerHook(config3, directory) {
 const existingLessons = new Set(existingEntries.map((e) => e.lesson));
 const newEntries = knowledgeEntries.filter((e) => !existingLessons.has(e.lesson));
 if (newEntries.length === 0) {
- console.warn(`[system-enhancer] No new knowledge entries (all duplicates)`);
+ warn(`[system-enhancer] No new knowledge entries (all duplicates)`);
 } else {
 for (const entry of newEntries) {
 await appendKnowledge(knowledgePath, entry);
 }
- console.warn(`[system-enhancer] Created ${newEntries.length} new knowledge entries (${knowledgeEntries.length - newEntries.length} duplicates skipped)`);
+ warn(`[system-enhancer] Created ${newEntries.length} new knowledge entries (${knowledgeEntries.length - newEntries.length} duplicates skipped)`);
 }
 } catch (e) {
- console.warn(`[system-enhancer] Failed to create knowledge entries: ${e}`);
+ warn(`[system-enhancer] Failed to create knowledge entries: ${e}`);
 }
 }
 }
@@ -72406,7 +72459,7 @@ function createKnowledgeInjectorHook(directory, config3) {
 const headroomChars = MODEL_LIMIT_CHARS - existingChars;
 const MIN_INJECT_CHARS = config3.context_budget_threshold ?? 300;
 if (headroomChars < MIN_INJECT_CHARS) {
- console.warn(`[knowledge-injector] Skipping: only ${headroomChars} chars of headroom remain (existing: ${existingChars}, limit: ${MODEL_LIMIT_CHARS})`);
+ warn(`[knowledge-injector] Skipping: only ${headroomChars} chars of headroom remain (existing: ${existingChars}, limit: ${MODEL_LIMIT_CHARS})`);
 return;
 }
 const maxInjectChars = config3.inject_char_budget ?? 2000;
@@ -72784,6 +72837,50 @@ function checkStaleImports(content, threshold) {
 detail: `${staleImports.length} unused import identifier(s): ${staleImports.slice(0, 3).join(", ")}${staleImports.length > 3 ? "..." : ""}. Remove stale imports.`
 };
 }
+ function checkDuplicateUtility(content, projectDir, startTime, targetFile) {
+ const exportMatches = content.matchAll(/^\+(?:export)\s+(?:function|class|const|type|interface)\s+(\w{3,})/gm);
+ const newExports = [];
+ for (const match of exportMatches) {
+ if (match[1])
+ newExports.push(match[1]);
+ }
+ if (newExports.length === 0)
+ return null;
+ const deadline = startTime + 480;
+ const utilityDirs = [
+ "src/utils/",
+ "src/hooks/",
+ "src/tools/",
+ "src/services/"
+ ];
+ for (const utilDir of utilityDirs) {
+ if (Date.now() > deadline)
+ break;
+ const utilPath = path66.join(projectDir, utilDir);
+ if (!fs47.existsSync(utilPath))
+ continue;
+ const files = walkFiles(utilPath, [".ts", ".tsx", ".js", ".jsx"], deadline);
+ for (const file3 of files) {
+ if (Date.now() > deadline)
+ break;
+ if (targetFile && path66.resolve(file3) === path66.resolve(targetFile))
+ continue;
+ try {
+ const text = fs47.readFileSync(file3, "utf-8");
+ for (const name2 of newExports) {
+ const exportPattern = new RegExp(`\\bexport\\s+(?:function|class|const|type|interface)\\s+${name2}\\b`);
+ if (exportPattern.test(text)) {
+ return {
+ type: "duplicate_utility",
+ detail: `New export "${name2}" already exists in ${utilDir}. Reuse the existing utility instead of duplicating it.`
+ };
+ }
+ }
+ } catch {}
+ }
+ }
+ return null;
+ }
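The `checkDuplicateUtility` hunk above extracts the names of newly added exports from diff-style content, then greps candidate utility files for an existing export with the same name. Its two regexes can be exercised in isolation; the sample diff text and "existing file" contents below are invented for illustration and are not taken from the package.

```javascript
// Sample diff-style content: one added export plus unrelated lines.
// (Invented input, for demonstrating the scan only.)
const diffContent = [
  "+export function slugify(input) {",
  "+  return input.toLowerCase();",
  "+}",
  " const unrelated = 1;",
].join("\n");

// Same shape as the bundled pattern: a '+' diff prefix, an export keyword,
// a declaration kind, then an identifier of 3+ word characters.
const exportPattern = /^\+(?:export)\s+(?:function|class|const|type|interface)\s+(\w{3,})/gm;
const newExports = [...diffContent.matchAll(exportPattern)].map((m) => m[1]);

// Simulated existing utility file that already exports the same name.
const existingFile = "export function slugify(s) { return s.trim(); }";
const duplicates = newExports.filter((name) =>
  new RegExp(`\\bexport\\s+(?:function|class|const|type|interface)\\s+${name}\\b`).test(existingFile)
);
```

Note that the first regex anchors on `+export` with no space after the `+`, so only lines where the export declaration starts immediately after the diff marker are counted as new exports.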
 function createSlopDetectorHook(config3, projectDir, injectSystemMessage) {
 return {
 toolAfter: async (input, output) => {
@@ -72805,6 +72902,7 @@ function createSlopDetectorHook(config3, projectDir, injectSystemMessage) {
 })();
 if (!content || content.length < 10)
 return;
+ const targetFilePath = typeof args2?.filePath === "string" ? args2.filePath : undefined;
 const taskDescription = typeof args2?.description === "string" ? args2.description : typeof args2?.task === "string" ? args2.task : "";
 const startTime = Date.now();
 const findings = [];
@@ -72825,6 +72923,11 @@ function createSlopDetectorHook(config3, projectDir, injectSystemMessage) {
 if (dead)
 findings.push(dead);
 } catch {}
+ try {
+ const dupUtil = checkDuplicateUtility(content, projectDir, startTime, targetFilePath);
+ if (dupUtil)
+ findings.push(dupUtil);
+ } catch {}
 }
 try {
 const stale = checkStaleImports(content, config3.importHygieneThreshold ?? 2);
@@ -78136,7 +78239,7 @@ init_create_tool();
 var GITINGEST_TIMEOUT_MS = 1e4;
 var GITINGEST_MAX_RESPONSE_BYTES = 5242880;
 var GITINGEST_MAX_RETRIES = 2;
- var delay = (ms) => new Promise((resolve29) => setTimeout(resolve29, ms));
+ var delay = (ms) => new Promise((resolve30) => setTimeout(resolve30, ms));
 async function fetchGitingest(args2) {
 for (let attempt = 0;attempt <= GITINGEST_MAX_RETRIES; attempt++) {
 try {
@@ -80219,7 +80322,7 @@ async function runNpmAudit(directory) {
 stderr: "pipe",
 cwd: directory
 });
- const timeoutPromise = new Promise((resolve30) => setTimeout(() => resolve30("timeout"), AUDIT_TIMEOUT_MS));
+ const timeoutPromise = new Promise((resolve31) => setTimeout(() => resolve31("timeout"), AUDIT_TIMEOUT_MS));
 const result = await Promise.race([
 Promise.all([proc.stdout.text(), proc.stderr.text()]).then(([stdout2, stderr2]) => ({ stdout: stdout2, stderr: stderr2 })),
 timeoutPromise
@@ -80339,7 +80442,7 @@ async function runPipAudit(directory) {
 stderr: "pipe",
 cwd: directory
 });
- const timeoutPromise = new Promise((resolve30) => setTimeout(() => resolve30("timeout"), AUDIT_TIMEOUT_MS));
+ const timeoutPromise = new Promise((resolve31) => setTimeout(() => resolve31("timeout"), AUDIT_TIMEOUT_MS));
 const result = await Promise.race([
 Promise.all([proc.stdout.text(), proc.stderr.text()]).then(([stdout2, stderr2]) => ({ stdout: stdout2, stderr: stderr2 })),
 timeoutPromise
@@ -80467,7 +80570,7 @@ async function runCargoAudit(directory) {
 stderr: "pipe",
 cwd: directory
 });
- const timeoutPromise = new Promise((resolve30) => setTimeout(() => resolve30("timeout"), AUDIT_TIMEOUT_MS));
+ const timeoutPromise = new Promise((resolve31) => setTimeout(() => resolve31("timeout"), AUDIT_TIMEOUT_MS));
 const result = await Promise.race([
 Promise.all([proc.stdout.text(), proc.stderr.text()]).then(([stdout2, stderr]) => ({ stdout: stdout2, stderr })),
 timeoutPromise
@@ -80591,7 +80694,7 @@ async function runGoAudit(directory) {
 stderr: "pipe",
 cwd: directory
 });
- const timeoutPromise = new Promise((resolve30) => setTimeout(() => resolve30("timeout"), AUDIT_TIMEOUT_MS));
+ const timeoutPromise = new Promise((resolve31) => setTimeout(() => resolve31("timeout"), AUDIT_TIMEOUT_MS));
 const result = await Promise.race([
 Promise.all([proc.stdout.text(), proc.stderr.text()]).then(([stdout2, stderr]) => ({ stdout: stdout2, stderr })),
 timeoutPromise
@@ -80724,7 +80827,7 @@ async function runDotnetAudit(directory) {
 stderr: "pipe",
 cwd: directory
 });
- const timeoutPromise = new Promise((resolve30) => setTimeout(() => resolve30("timeout"), AUDIT_TIMEOUT_MS));
+ const timeoutPromise = new Promise((resolve31) => setTimeout(() => resolve31("timeout"), AUDIT_TIMEOUT_MS));
 const result = await Promise.race([
 Promise.all([proc.stdout.text(), proc.stderr.text()]).then(([stdout2, stderr]) => ({ stdout: stdout2, stderr })),
 timeoutPromise
@@ -80840,7 +80943,7 @@ async function runBundleAudit(directory) {
 stderr: "pipe",
 cwd: directory
 });
- const timeoutPromise = new Promise((resolve30) => setTimeout(() => resolve30("timeout"), AUDIT_TIMEOUT_MS));
+ const timeoutPromise = new Promise((resolve31) => setTimeout(() => resolve31("timeout"), AUDIT_TIMEOUT_MS));
 const result = await Promise.race([
 Promise.all([proc.stdout.text(), proc.stderr.text()]).then(([stdout2, stderr]) => ({ stdout: stdout2, stderr })),
 timeoutPromise
@@ -80985,7 +81088,7 @@ async function runDartAudit(directory) {
 stderr: "pipe",
 cwd: directory
 });
- const timeoutPromise = new Promise((resolve30) => setTimeout(() => resolve30("timeout"), AUDIT_TIMEOUT_MS));
+ const timeoutPromise = new Promise((resolve31) => setTimeout(() => resolve31("timeout"), AUDIT_TIMEOUT_MS));
 const result = await Promise.race([
 Promise.all([proc.stdout.text(), proc.stderr.text()]).then(([stdout2, stderr]) => ({ stdout: stdout2, stderr })),
 timeoutPromise
@@ -81100,7 +81203,7 @@ async function runComposerAudit(directory) {
 stderr: "pipe",
 cwd: directory
 });
- const timeoutPromise = new Promise((resolve30) => setTimeout(() => resolve30("timeout"), AUDIT_TIMEOUT_MS));
+ const timeoutPromise = new Promise((resolve31) => setTimeout(() => resolve31("timeout"), AUDIT_TIMEOUT_MS));
 const result = await Promise.race([
 Promise.all([proc.stdout.text(), proc.stderr.text()]).then(([stdout2, stderr]) => ({ stdout: stdout2, stderr })),
 timeoutPromise
@@ -82740,7 +82843,7 @@ function mapSemgrepSeverity(severity) {
 }
 }
 async function executeWithTimeout(command, args2, options) {
- return new Promise((resolve31) => {
+ return new Promise((resolve32) => {
 const child = child_process9.spawn(command, args2, {
 shell: false,
 cwd: options.cwd
@@ -82749,7 +82852,7 @@ async function executeWithTimeout(command, args2, options) {
 let stderr = "";
 const timeout = setTimeout(() => {
 child.kill("SIGTERM");
- resolve31({
+ resolve32({
 stdout,
 stderr: "Process timed out",
 exitCode: 124
@@ -82763,7 +82866,7 @@ async function executeWithTimeout(command, args2, options) {
 });
 child.on("close", (code) => {
 clearTimeout(timeout);
- resolve31({
+ resolve32({
 stdout,
 stderr,
 exitCode: code ?? 0
@@ -82771,7 +82874,7 @@ async function executeWithTimeout(command, args2, options) {
 });
 child.on("error", (err2) => {
 clearTimeout(timeout);
- resolve31({
+ resolve32({
 stdout,
 stderr: err2.message,
 exitCode: 1
@@ -82956,7 +83059,7 @@ async function acquireLock(lockPath) {
 };
 } catch {
 if (attempt < LOCK_RETRY_DELAYS_MS.length) {
- await new Promise((resolve32) => setTimeout(resolve32, LOCK_RETRY_DELAYS_MS[attempt]));
+ await new Promise((resolve33) => setTimeout(resolve33, LOCK_RETRY_DELAYS_MS[attempt]));
 }
 }
 }
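The `acquireLock` hunk above shows a retry loop driven by a fixed delay table: on failure, wait for the next entry in `LOCK_RETRY_DELAYS_MS` before retrying, and give up once the table is exhausted. A self-contained sketch of that loop, with invented delay values and a hypothetical `acquireWithRetry` wrapper standing in for the bundled lock logic:

```javascript
// Invented delay table; the real LOCK_RETRY_DELAYS_MS values are not shown here.
const LOCK_RETRY_DELAYS_MS = [5, 10, 20];

// Attempt the operation up to (table length + 1) times, sleeping between
// failures according to the delay table, as in the acquireLock loop above.
async function acquireWithRetry(tryAcquire) {
  for (let attempt = 0; attempt <= LOCK_RETRY_DELAYS_MS.length; attempt++) {
    try {
      return await tryAcquire(attempt);
    } catch {
      if (attempt < LOCK_RETRY_DELAYS_MS.length) {
        await new Promise((resolve) => setTimeout(resolve, LOCK_RETRY_DELAYS_MS[attempt]));
      }
    }
  }
  return null; // all attempts exhausted
}
```

A fixed table (rather than computed exponential backoff) keeps the worst-case wait bounded and easy to audit: the total added latency is just the sum of the table entries.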
@@ -86782,7 +86885,7 @@ async function ripgrepSearch(opts) {
 stderr: "pipe",
 cwd: opts.workspace
 });
- const timeout = new Promise((resolve38) => setTimeout(() => resolve38("timeout"), REGEX_TIMEOUT_MS));
+ const timeout = new Promise((resolve39) => setTimeout(() => resolve39("timeout"), REGEX_TIMEOUT_MS));
 const exitPromise = proc.exited;
 const result = await Promise.race([exitPromise, timeout]);
 if (result === "timeout") {
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
 "name": "opencode-swarm",
- "version": "7.0.3",
+ "version": "7.1.1",
 "description": "Architect-centric agentic swarm plugin for OpenCode - hub-and-spoke orchestration with SME consultation, code generation, and QA review",
 "main": "dist/index.js",
 "types": "dist/index.d.ts",