groove-dev 0.27.4 → 0.27.6
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +55 -0
- package/CLAUDE.md +7 -0
- package/node_modules/@groove-dev/cli/package.json +1 -1
- package/node_modules/@groove-dev/daemon/package.json +1 -1
- package/node_modules/@groove-dev/daemon/src/introducer.js +11 -91
- package/node_modules/@groove-dev/daemon/src/process.js +2 -4
- package/node_modules/@groove-dev/daemon/src/rotator.js +5 -167
- package/node_modules/@groove-dev/daemon/test/rotator.test.js +4 -112
- package/node_modules/@groove-dev/gui/package.json +1 -1
- package/package.json +1 -1
- package/packages/cli/package.json +1 -1
- package/packages/daemon/package.json +1 -1
- package/packages/daemon/src/introducer.js +11 -91
- package/packages/daemon/src/process.js +2 -4
- package/packages/daemon/src/rotator.js +5 -167
- package/packages/gui/package.json +1 -1
package/CHANGELOG.md
CHANGED

@@ -1,5 +1,60 @@
 # Changelog

+## v0.27.6 — Revert rotator safety triggers — kill switch was killing planners (2026-04-12)
+
+A user noticed rotations happening at 25-35% of the context window — well below the 75% threshold. Root cause: quality-based rotation was firing during planner thinking phases. The chain:
+
+1. Planner explores (grep, find, cat with `2>/dev/null`) — normal planning work
+2. Classifier sees non-zero exits as "errors"; the quality score tanks
+3. Planner pauses to synthesize the plan — `lastActivity` doesn't update during thinking
+4. Rotator sees idle > 10s + score < 55 + events > 30 — and kills the planner mid-thought
+5. New instance spawns but doesn't recover the plan output
+
+I raised `QUALITY_THRESHOLD` from 40 to 55 in v0.27.0, thinking it would catch degradation earlier. Instead it made the rotator trigger-happy on exactly the agents whose normal work looks "bad" to the classifier (planners doing exploration, fullstack auditors reviewing code).
+
+**Reverted `rotator.js` entirely to v0.26.39** — the rotator that worked through 275M tokens without killing planners. This removes:
+
+- Safety ceiling (`token_limit_exceeded`) and role multipliers — added in v0.27.0
+- Velocity trigger (already removed in v0.27.2)
+- Rotation cooldown — added in v0.27.0
+- Converged-profile gate — added in v0.27.0
+- Pre/post velocity measurement — added in v0.27.0
+- Handoff-chain write on rotation — added in v0.27.0
+- Specialization update from the rotator — added in v0.27.0
+- The raised quality threshold (back to 40, from 55) and min events (back to 10, from 30)
+
+Also removed the associated test cases.
+
+**What's preserved from v0.27.x**
+
+- MemoryStore module (data still accumulates from agent completion via process.js)
+- Dashboard additions (token panel, memory tab, overhead section, cache fix)
+- Journalist token tracking under reserved IDs
+- `__negotiator__` tracking on task negotiation
+- Handoff brief v0.27.4 wording (rotation-only, shouldn't fire often now)
+
+The only safety net is the rotator's original context-threshold and quality-threshold triggers. If a truly runaway agent burns 50M tokens, it will hit context rotation naturally — no ceiling needed.
+
+## v0.27.5 — Revert planner/introducer changes (2026-04-12)
+
+The planner flow stopped producing output after the v0.27.x intro changes — agents would do partial exploration and stop without outputting a plan. After several failed targeted fixes (v0.27.3, v0.27.4), `introducer.js` and the planner role prompt in `process.js` are reverted to their v0.26.39 state rather than continuing to iterate on a broken premise. The planner flow that worked through 275M tokens is what should ship.
+
+**What's reverted**
+
+- `packages/daemon/src/introducer.js` — fully restored to v0.26.39. This removes:
+  - Project Memory injection at spawn (memory still accumulates, it just isn't injected)
+  - The "ready to resume" team section enhancement from v0.27.3
+  - The HTTP-based coordination protocol rewrite (back to the `.groove/coordination.md` advisory)
+  - The Memory API contribution note
+- `packages/daemon/src/process.js` planner role prompt — fully restored to v0.26.39.
+
+**What's preserved**
+
+- All backend infrastructure: MemoryStore module, safety token ceiling + role multipliers, token tracking, cache formula fix, dashboard, tests.
+- Journalist handoff brief v0.27.4 rewrite (only affects rotations; the safety ceiling is 50M for planners, so rotations should be extremely rare).
+- Specialization updates on agent completion (re-added to process.js after the revert).
+- `__negotiator__` token tracking on task negotiation calls (re-added).
+
+Memory accumulation still works — the rotator writes handoff chains, and agent completion updates specializations. The data is captured for future use; it's just not injected into every new agent's intro context. If the injection experiment is worth retrying, it'll be a separate release with careful A/B testing, not a surprise change.
+
+**Apologies.** I should have done this revert two versions ago when the bug persisted. Patching the symptoms kept the planner in a broken state longer than necessary.
+
 ## v0.27.4 — Fix rotated agents abandoning mid-task work (2026-04-12)

 **The bug I introduced in v0.27.1.** The rotation handoff brief told agents: *"Wait for the user's next message, then answer it directly."* That instruction was intended to prevent "Resuming after rotation" announcements — but for an agent that was mid-task when rotation fired (e.g., a planner planning, a backend writing code), it said: *stop the work, wait for the user.* The user gave a direct feature request, the planner burned 3M tokens exploring before rotation, the new planner read "wait for next message" and delivered nothing.
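The v0.27.6 failure chain above reduces to a single predicate. A minimal sketch, illustrative only: the function name, the `agent` shape, and the option names are assumptions for this example, not the actual rotator.js source; the default values are the v0.27.0 thresholds named in the changelog entry.

```javascript
// Sketch of the v0.27.0 quality trigger as described above (assumed names):
// rotate when the agent has accumulated enough classifier events, its score
// has tanked, and it has gone "idle" — which is exactly the profile of a
// planner pausing to synthesize after noisy exploration.
function shouldRotateForQuality(agent, opts = {}) {
  const { threshold = 55, minEvents = 30, idleMs = 10_000 } = opts;
  return (
    agent.events >= minEvents &&              // > 30 events accumulated
    agent.score < threshold &&                // score tanked by "error-looking" exploration
    Date.now() - agent.lastActivity > idleMs  // thinking pause reads as idleness
  );
}

// A planner mid-thought: non-zero exits tanked its score, and lastActivity
// stopped updating while it synthesizes the plan.
const planner = { events: 42, score: 48, lastActivity: Date.now() - 15_000 };
console.log(shouldRotateForQuality(planner));                    // → true  (v0.27.0: killed mid-thought)
console.log(shouldRotateForQuality(planner, { threshold: 40 })); // → false (v0.26.39: survives)
```

With the threshold back at 40, a score of 48 no longer qualifies, so the same pause is harmless.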
package/CLAUDE.md
CHANGED

@@ -263,3 +263,10 @@ Audit-driven release. Multi-agent orchestration system with 7 coordination layers
 - Dashboard: routing donut, cache panel, context health gauges
 - Monitor/QC agent mode (stay active, loop)
 - Distribution: demo video, HN launch, Twitter content
+
+<!-- GROOVE:START -->
+## GROOVE Orchestration (auto-injected)
+Active agents: 0
+See AGENTS_REGISTRY.md for full agent state.
+**Memory policy:** Ignore auto-memory. Do not read or write MEMORY.md. GROOVE manages all context.
+<!-- GROOVE:END -->
@@ -16,30 +16,10 @@ export class Introducer {
   generateContext(newAgent, options = {}) {
     const { taskNegotiation } = options;
     const agents = this.daemon.registry.getAll();
-
-    //
-
-
-    // until resumed. Hiding them from the new agent's context makes planners
-    // falsely conclude "I'm alone" and spawn duplicate roles.
-    //
-    // Scope to the same team so one team's agents don't leak into another's
-    // context. Completed teammates get a 1-hour freshness cutoff so truly
-    // stale ones don't clutter the intro.
-    const COMPLETED_WINDOW_MS = 60 * 60 * 1000;
-    const sameTeam = (a) =>
-      a.id !== newAgent.id &&
-      (!newAgent.teamId || a.teamId === newAgent.teamId);
-    const activeOthers = agents.filter((a) =>
-      sameTeam(a) && (a.status === 'running' || a.status === 'starting')
-    );
-    const recentCompleted = agents.filter((a) => {
-      if (!sameTeam(a)) return false;
-      if (a.status !== 'completed') return false;
-      const ts = a.lastActivity ? new Date(a.lastActivity).getTime() : 0;
-      return Date.now() - ts < COMPLETED_WINDOW_MS;
-    });
-    const others = [...activeOthers, ...recentCompleted];
+    // Only include ACTIVE agents — not completed/killed ones from previous sessions
+    // Completed agents' work is captured in the journalist's project map, not here
+    const others = agents.filter((a) => a.id !== newAgent.id &&
+      (a.status === 'running' || a.status === 'starting'));

     const lines = [
       `# GROOVE Agent Context`,

@@ -62,17 +42,8 @@ export class Introducer {
     if (others.length === 0) {
       lines.push('You are the only agent on this project right now.');
     } else {
-
-      const readyCount = recentCompleted.length;
-      const parts = [];
-      if (activeCount > 0) parts.push(`${activeCount} active`);
-      if (readyCount > 0) parts.push(`${readyCount} ready to resume`);
-      lines.push(`## Team (${others.length} teammate${others.length > 1 ? 's' : ''} — ${parts.join(', ')})`);
+      lines.push(`## Team (${others.length} other agent${others.length > 1 ? 's' : ''})`);
       lines.push('');
-      if (readyCount > 0) {
-        lines.push(`**Teammates marked "ready" are part of your team.** They finished their last task and will resume their session when assigned new work. If you're a planner, route new tasks to them by role — do NOT spawn duplicates.`);
-        lines.push('');
-      }

       // Collect all files created by teammates for the project files section
       const allTeamFiles = [];

@@ -80,8 +51,7 @@ export class Introducer {
     for (const other of others) {
       const scope = other.scope?.length > 0 ? other.scope.join(', ') : 'unrestricted';
       const dir = other.workingDir ? ` — dir: ${other.workingDir}` : '';
-
-      lines.push(`- **${other.name}** (${other.role}) — scope: ${scope}${dir} — ${statusLabel}`);
+      lines.push(`- **${other.name}** (${other.role}) — scope: ${scope}${dir} — ${other.status}`);

       // Get files this agent created/modified
       const files = this.daemon.journalist?.getAgentFiles(other) || [];

@@ -137,46 +107,6 @@ export class Introducer {
       }
     }

-    // Project memory (Layer 7) — accumulated wisdom across all prior rotations.
-    // Constraints, recent role handoffs, known error→fix patterns. Total cap ~12K chars.
-    if (this.daemon.memory) {
-      const constraints = this.daemon.memory.getConstraintsMarkdown(4000);
-      const recentChain = this.daemon.memory.getRecentHandoffMarkdown(newAgent.role, 3, 4000);
-      const discoveries = this.daemon.memory.getDiscoveriesMarkdown(newAgent.role, 20, 4000);
-
-      if (constraints || recentChain || discoveries) {
-        lines.push('');
-        lines.push(`## Project Memory`);
-        lines.push('');
-        lines.push(`This is accumulated knowledge from prior agents working on this project. Read carefully — it will save you from rediscovering what others already learned.`);
-
-        if (constraints) {
-          lines.push('');
-          lines.push(`### Constraints`);
-          lines.push('');
-          lines.push(constraints);
-        }
-
-        if (recentChain) {
-          lines.push('');
-          lines.push(`### Recent ${newAgent.role} handoffs`);
-          lines.push('');
-          lines.push(recentChain);
-        }
-
-        if (discoveries) {
-          lines.push('');
-          lines.push(`### Known patterns (from prior ${newAgent.role} agents)`);
-          lines.push('');
-          lines.push(discoveries);
-        }
-
-        // Contributing to memory is opt-in. Only mention if the agent
-        // explicitly needs to record something — no proactive prompting.
-        // (Optional: `POST /api/memory/discoveries` or `POST /api/memory/constraints`)
-      }
-    }
-
     // Project files section — tell the new agent what exists and what to read
     if (allTeamFiles.length > 0) {
       lines.push('');

@@ -219,21 +149,11 @@ export class Introducer {
     lines.push('');
     lines.push(`## Coordination Protocol`);
     lines.push('');
-    lines.push(`Before performing shared/destructive actions (restart server, npm install/build, modify package.json, modify shared config),
-    lines.push(
-    lines.push(`
-    lines.push(
-    lines.push(`
-    lines.push(`{ "agentId": "${newAgent.id}", "operation": "npm install", "resources": ["package.json", "node_modules"] }`);
-    lines.push('```');
-    lines.push('');
-    lines.push(`Complete (always call this when done, even on failure):`);
-    lines.push('```');
-    lines.push(`POST http://127.0.0.1:31415/api/coordination/complete`);
-    lines.push(`{ "agentId": "${newAgent.id}" }`);
-    lines.push('```');
-    lines.push('');
-    lines.push(`Operations auto-expire after 10 minutes to prevent deadlock.`);
+    lines.push(`Before performing shared/destructive actions (restart server, npm install/build, modify package.json, modify shared config), coordinate with your team:`);
+    lines.push(`1. Read \`.groove/coordination.md\` to check for active operations`);
+    lines.push(`2. Write your intent to \`.groove/coordination.md\` (e.g., "backend-1: restarting server")`);
+    lines.push(`3. Proceed only if no conflicting operations are active`);
+    lines.push(`4. Clear your entry from \`.groove/coordination.md\` when done`);
   }

   // File safety — prevent agents from deleting files they didn't create
@@ -196,10 +196,8 @@ Do NOT re-explore the entire codebase. You already know it from team creation.
 Just read the specific files related to the bug/feature, decide which existing agent should handle it, and write the routing config. This should be FAST — under 5 tool calls.

 HOW TO DETECT WHICH MODE:
--
-- If no
-- Teammates listed as "ready to resume" are REAL agents on your team. They finished their last task and await new instructions. They WILL pick up new work when you route it to them via recommended-team.json. Do NOT treat them as absent.
-- NEVER spawn a new agent of a role that already exists in your team. A second backend when a backend already exists is always a bug.
+- Read AGENTS_REGISTRY.md. If it lists agents with roles matching your team (frontend, backend, fullstack), you are in MODE 2.
+- If no agents exist or only a planner exists, you are in MODE 1.

 After completing your plan, you MUST write .groove/recommended-team.json — EVERY TIME, no exceptions.
@@ -7,11 +7,10 @@ import { resolve } from 'path';

 const DEFAULT_THRESHOLD = 0.75;
 const CHECK_INTERVAL = 15_000;
-const QUALITY_THRESHOLD =
-const MIN_EVENTS =
+const QUALITY_THRESHOLD = 40; // Score below this triggers quality rotation
+const MIN_EVENTS = 10; // Minimum classifier events before scoring
 const MIN_AGE_SEC = 120; // Minimum agent age before quality rotation
 const SCORE_HISTORY_MAX = 40; // ~10 min at 15s intervals
-const ROTATION_COOLDOWN_MS = 5 * 60 * 1000; // 5 min between rotations per agent — prevents churn on persistent low quality

 export class Rotator extends EventEmitter {
   constructor(daemon) {

@@ -97,18 +96,6 @@ export class Rotator extends EventEmitter {
       : Infinity;
   }

-  // Check if this agent rotated recently. Prevents back-to-back rotation
-  // churn when quality score stays low post-rotation (e.g. genuinely hard task).
-  // Safety triggers bypass cooldown — pathological burn must be stopped.
-  _isInCooldown(agent) {
-    const last = [...this.rotationHistory]
-      .reverse()
-      .find((r) => r.newAgentId === agent.id || r.agentId === agent.id);
-    if (!last) return false;
-    const elapsed = Date.now() - new Date(last.timestamp).getTime();
-    return elapsed < ROTATION_COOLDOWN_MS;
-  }
-
   scoreLiveSession(agent) {
     const events = this.daemon.classifier.agentWindows[agent.id] || [];
     const ageSec = (Date.now() - new Date(agent.spawnedAt).getTime()) / 1000;
@@ -136,99 +123,13 @@ export class Rotator extends EventEmitter {
     return result;
   }

-  // Per-role safety multiplier for the token ceiling. Exploration-heavy
-  // roles legitimately burn tokens fast on big codebases — multiplier
-  // scales their ceiling so the safety net catches truly runaway agents
-  // without false-positiving legitimate heavy work. User-overridable via
-  // config.safety.roleMultipliers.
-  _getRoleMultiplier(role) {
-    const safety = this.daemon.config?.safety;
-    const overrides = safety?.roleMultipliers || {};
-    if (overrides[role] != null) return overrides[role];
-    const defaults = {
-      planner: 10, // heavy exploration by design
-      fullstack: 4, // QC auditors read broadly
-      analyst: 5,
-      security: 4,
-      docs: 1,
-    };
-    return defaults[role] || 1;
-  }
-
-  // Safety trigger — runaway agent detection. One check only: per-instance
-  // token ceiling scoped to `spawnedAt` so rotations don't re-trigger on
-  // inherited cumulative tokens. Velocity-based triggers were removed in
-  // v0.27.2 — they produced too many false positives on legitimate heavy
-  // exploration. If a pattern emerges from real usage that warrants an
-  // earlier-warning signal, re-add it gated on quality-degradation signals
-  // (repetitions, errors, file churn) — not velocity alone.
-  _checkSafetyTriggers(agent) {
-    const safety = this.daemon.config?.safety;
-    if (!safety || safety.autoRotate === false) return null;
-    if (!this.daemon.tokens || !agent.spawnedAt) return null;
-
-    const baseCeiling = safety.tokenCeilingPerAgent;
-    if (!baseCeiling || baseCeiling <= 0) return null;
-
-    const multiplier = this._getRoleMultiplier(agent.role);
-    const ceiling = Math.round(baseCeiling * multiplier);
-    const spawnedAtMs = new Date(agent.spawnedAt).getTime();
-    const instanceTokens = this.daemon.tokens.getTokensInWindow(agent.id, spawnedAtMs);
-
-    if (instanceTokens >= ceiling) {
-      return {
-        reason: 'token_limit_exceeded',
-        instanceTokens,
-        ceiling,
-        multiplier,
-      };
-    }
-    return null;
-  }
-
-  // Compute post-rotation velocity for rotations that are old enough to
-  // have meaningful data. Replaces hardcoded savings assumptions with
-  // measured deltas. Positive velocityDelta = rotation reduced burn rate.
-  _finalizeRotationMeasurements() {
-    if (!this.daemon.tokens?.getVelocity) return;
-    const now = Date.now();
-    let modified = false;
-    for (const record of this.rotationHistory) {
-      if (record.postRotationVelocity != null) continue;
-      if (record.preRotationVelocity == null) continue;
-      if (!record.newAgentId) continue;
-      const rotatedAt = new Date(record.timestamp).getTime();
-      if (now - rotatedAt < 600_000) continue; // need 10 min of post-data
-      const postVelocity = this.daemon.tokens.getVelocity(record.newAgentId, 600_000);
-      record.postRotationVelocity = postVelocity;
-      record.velocityDelta = record.preRotationVelocity - postVelocity;
-      modified = true;
-    }
-    if (modified) this._saveHistory();
-  }
-
   async check() {
-    this._finalizeRotationMeasurements();
-
     const agents = this.daemon.registry.getAll();
     const running = agents.filter((a) => a.status === 'running');

     for (const agent of running) {
       if (this.rotating.has(agent.id)) continue;

-      // Safety triggers — highest priority, pathological behavior.
-      // Bypasses cooldown: pathological burn must be stopped immediately.
-      const safety = this._checkSafetyTriggers(agent);
-      if (safety) {
-        console.log(`  Rotator: ${agent.name} ${safety.reason} (${safety.instanceTokens} tokens >= ${safety.ceiling} ceiling, ${safety.multiplier}x role mult) — auto-rotating`);
-        await this.rotate(agent.id, safety);
-        continue;
-      }
-
-      // Cooldown check — skip threshold-based rotations if agent just rotated.
-      // Gives the new instance time to stabilize before another judgment.
-      if (this._isInCooldown(agent)) continue;
-
       const threshold = this.daemon.adaptive
         ? this.daemon.adaptive.getThreshold(agent.provider, agent.role)
         : DEFAULT_THRESHOLD;
@@ -242,17 +143,11 @@ export class Rotator extends EventEmitter {
       }
     }

-      // Quality-based rotation — detects degradation before tokens are wasted
-      // Converged provider:role profiles have stable thresholds already, so
-      // skip quality rotation there unless score is catastrophically low.
+      // Quality-based rotation — detects degradation before tokens are wasted
       const quality = this.scoreLiveSession(agent);
       if (quality.hasEnoughData && quality.score < QUALITY_THRESHOLD) {
-
-
-        // If converged, require a deeper score drop before rotating
-        const floor = converged ? QUALITY_THRESHOLD - 15 : QUALITY_THRESHOLD;
-        if (quality.score < floor && this._idleMs(agent) > 10_000) {
-          console.log(`  Rotator: ${agent.name} quality=${quality.score}${converged ? ' (converged profile)' : ''} — rotating (quality)`);
+        if (this._idleMs(agent) > 10_000) {
+          console.log(`  Rotator: ${agent.name} quality=${quality.score} — rotating (quality)`);
           await this.rotate(agent.id, {
             reason: 'quality_degradation',
             qualityScore: quality.score,

@@ -296,13 +191,6 @@ export class Rotator extends EventEmitter {
       brief = brief + '\n\n## User Instruction\n\n' + options.additionalPrompt;
     }

-    // Capture pre-rotation velocity (tokens/10min) so we can later measure
-    // whether the rotation actually improved token efficiency. Stored in
-    // history; finalized by _finalizeRotationMeasurements() on later ticks.
-    const preRotationVelocity = this.daemon.tokens?.getVelocity
-      ? this.daemon.tokens.getVelocity(agent.id, 600_000)
-      : null;
-
     const record = {
       agentId: agent.id,
       agentName: agent.name,

@@ -312,22 +200,9 @@ export class Rotator extends EventEmitter {
       contextUsage: agent.contextUsage,
       reason: options.reason || 'manual',
       qualityScore: options.qualityScore || null,
-      instanceTokens: options.instanceTokens || null,
-      velocity: options.velocity || null,
-      preRotationVelocity,
-      postRotationVelocity: null,
-      velocityDelta: null,
       timestamp: new Date().toISOString(),
     };

-    // Capture per-session signals for specialization tracking before we clear
-    const sessionSignals = classifierEvents.length > 0
-      ? this.daemon.adaptive.extractSignals(classifierEvents, agent.scope)
-      : null;
-    const sessionScore = sessionSignals
-      ? this.daemon.adaptive.scoreSession(sessionSignals)
-      : null;
-
     await processes.kill(agentId);

     const routingMode = this.daemon.router.getMode(agentId);
@@ -361,35 +236,6 @@ export class Rotator extends EventEmitter {
     }
     this._saveHistory();

-    // Append to persistent handoff chain (Layer 7 memory)
-    // so agent #50 knows what agent #1 struggled with.
-    if (this.daemon.memory) {
-      this.daemon.memory.appendHandoffBrief(agent.role, {
-        agentId: agent.id,
-        newAgentId: newAgent.id,
-        reason: record.reason,
-        oldTokens: agent.tokensUsed,
-        contextUsage: agent.contextUsage,
-        brief,
-        timestamp: record.timestamp,
-      });
-
-      // Update per-agent + per-role specialization profile
-      const files = Array.from(new Set(
-        classifierEvents
-          .map((e) => e.input || e.file || e.path)
-          .filter((f) => typeof f === 'string' && f.length > 0)
-          .slice(-20)
-      ));
-      this.daemon.memory.updateSpecialization(agent.id, {
-        role: agent.role,
-        qualityScore: sessionScore,
-        filesTouched: files,
-        signals: sessionSignals,
-        threshold: this.daemon.adaptive?.getThreshold(agent.provider, agent.role),
-      });
-    }
-
     if (this.daemon.timeline) {
       this.daemon.timeline.recordEvent('rotate', {
         agentId: newAgent.id, oldAgentId: agentId,

@@ -397,8 +243,6 @@ export class Rotator extends EventEmitter {
         tokensBefore: agent.tokensUsed,
         reason: record.reason,
         qualityScore: record.qualityScore,
-        instanceTokens: record.instanceTokens,
-        velocity: record.velocity,
       });
     }

@@ -488,10 +332,6 @@ export class Rotator extends EventEmitter {
     const qualityRotations = this.rotationHistory.filter((r) => r.reason === 'quality_degradation').length;
     const contextRotations = this.rotationHistory.filter((r) => r.reason === 'context_threshold').length;
     const naturalCompactions = this.rotationHistory.filter((r) => r.reason === 'natural_compaction').length;
-    const tokenLimitRotations = this.rotationHistory.filter((r) => r.reason === 'token_limit_exceeded').length;
-    // Legacy: velocity rotations are no longer triggered (removed v0.27.2)
-    // but historical entries may remain in saved history.
-    const velocityRotations = this.rotationHistory.filter((r) => r.reason === 'runaway_velocity').length;
     return {
       enabled: this.enabled,
       totalRotations,

@@ -499,8 +339,6 @@ export class Rotator extends EventEmitter {
       qualityRotations,
       contextRotations,
       naturalCompactions,
-      tokenLimitRotations,
-      velocityRotations,
       rotating: Array.from(this.rotating),
       liveScores: this.liveScores,
       scoreHistory: this.scoreHistory,
@@ -163,116 +163,8 @@ describe('Rotator', () => {
     assert.equal(stats.totalTokensSaved, 8000);
   });

-
-
-
-
-    return {
-      id: 'a1', name: 'backend-1', role: 'backend',
-      provider: 'claude-code', scope: [], model: null,
-      tokensUsed: 0, contextUsage: 0.1, workingDir: '/tmp',
-      spawnedAt: SPAWNED, status: 'running',
-      ...overrides,
-    };
-  }
-
-  it('returns null when safety config is missing', () => {
-    mockDaemon.config = undefined;
-    const trigger = rotator._checkSafetyTriggers(mkAgent());
-    assert.equal(trigger, null);
-  });
-
-  it('returns null when autoRotate is disabled', () => {
-    mockDaemon.config = { safety: { autoRotate: false, tokenCeilingPerAgent: 100 } };
-    mockDaemon.tokens.getTokensInWindow = () => 1000;
-    const trigger = rotator._checkSafetyTriggers(mkAgent());
-    assert.equal(trigger, null);
-  });
-
-  it('fires token_limit_exceeded when instance tokens hit ceiling', () => {
-    mockDaemon.config = {
-      safety: {
-        autoRotate: true,
-        tokenCeilingPerAgent: 1_000_000,
-        velocityWindowSeconds: 300,
-        velocityTokenThreshold: 2_000_000,
-      },
-    };
-    mockDaemon.tokens.getTokensInWindow = () => 1_200_000;
-    mockDaemon.tokens.getVelocity = () => 0;
-
-    const trigger = rotator._checkSafetyTriggers(mkAgent());
-    assert.equal(trigger.reason, 'token_limit_exceeded');
-    assert.equal(trigger.instanceTokens, 1_200_000);
-    assert.equal(trigger.ceiling, 1_000_000);
-  });
-
-  it('returns null when ceiling not hit', () => {
-    mockDaemon.config = {
-      safety: { autoRotate: true, tokenCeilingPerAgent: 5_000_000 },
-    };
-    mockDaemon.tokens.getTokensInWindow = () => 100_000;
-    const trigger = rotator._checkSafetyTriggers(mkAgent());
-    assert.equal(trigger, null);
-  });
-
-  it('planner gets a 10x ceiling — normal heavy exploration does not trigger', () => {
-    mockDaemon.config = {
-      safety: { autoRotate: true, tokenCeilingPerAgent: 5_000_000 },
-    };
-    // A planner reading a big codebase at 3M tokens would have tripped
-    // the old 5M ceiling but has 50M headroom under the role multiplier.
-    mockDaemon.tokens.getTokensInWindow = () => 3_000_000;
-    const trigger = rotator._checkSafetyTriggers(mkAgent({ role: 'planner' }));
-    assert.equal(trigger, null, 'planner should NOT trigger at 3M when base ceiling is 5M');
-  });
-
-  it('planner still triggers on genuinely runaway burn (>50M instance tokens)', () => {
-    mockDaemon.config = {
-      safety: { autoRotate: true, tokenCeilingPerAgent: 5_000_000 },
-    };
-    mockDaemon.tokens.getTokensInWindow = () => 60_000_000;
-    const trigger = rotator._checkSafetyTriggers(mkAgent({ role: 'planner' }));
-    assert.equal(trigger.reason, 'token_limit_exceeded');
-    assert.equal(trigger.ceiling, 50_000_000, 'planner ceiling = 5M × 10');
-  });
-
-  it('role multipliers are config-overridable', () => {
-    mockDaemon.config = {
-      safety: {
-        autoRotate: true,
-        tokenCeilingPerAgent: 1_000_000,
-        roleMultipliers: { backend: 2 },
-      },
-    };
-    mockDaemon.tokens.getTokensInWindow = () => 1_500_000; // above base ceiling, under 2x
-    const trigger = rotator._checkSafetyTriggers(mkAgent({ role: 'backend' }));
-    assert.equal(trigger, null, 'backend with 2x multiplier should allow 2M ceiling');
-  });
-
-  it('does not trigger on velocity (velocity rotation removed in v0.27.2)', () => {
-    mockDaemon.config = {
-      safety: { autoRotate: true, tokenCeilingPerAgent: 10_000_000 },
-    };
-    // Even with huge velocity, no rotation if under ceiling
-    mockDaemon.tokens.getTokensInWindow = () => 500_000;
-    mockDaemon.tokens.getVelocity = () => 99_999_999;
-    const trigger = rotator._checkSafetyTriggers(mkAgent());
-    assert.equal(trigger, null, 'velocity alone should never trigger a rotation');
-  });
-
-  it('stats track safety-triggered rotations separately', async () => {
-    mockDaemon.registry.agents = [mkAgent({ tokensUsed: 1_200_000 })];
-    await rotator.rotate('a1', {
-      reason: 'token_limit_exceeded',
-      instanceTokens: 1_200_000,
-      ceiling: 1_000_000,
-    });
-
-    const stats = rotator.getStats();
-    assert.equal(stats.tokenLimitRotations, 1);
-    assert.equal(stats.velocityRotations, 0);
-    assert.equal(stats.totalRotations, 1);
-  });
-  });
+  // Safety triggers (token ceiling, velocity, role multipliers) removed in
+  // v0.27.6 — they produced false positives on legitimate heavy exploration
+  // and killed planners mid-task. The v0.26.39 rotator (context + quality +
+  // natural compaction only) is the known-good behavior.
 });
package/package.json
CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "groove-dev",
-  "version": "0.27.4",
+  "version": "0.27.6",
   "description": "Open-source agent orchestration layer — the AI company OS. Local model agent engine (GGUF/Ollama/llama-server), HuggingFace model browser, MCP integrations (Slack, Gmail, Stripe, 15+), agent scheduling (cron), business roles (CMO, CFO, EA). GUI dashboard, multi-agent coordination, zero cold-start, infinite sessions. Works with Claude Code, Codex, Gemini CLI, Ollama, any local model.",
   "license": "FSL-1.1-Apache-2.0",
   "author": "Groove Dev <hello@groovedev.ai> (https://groovedev.ai)",
@@ -16,30 +16,10 @@ export class Introducer {
   generateContext(newAgent, options = {}) {
     const { taskNegotiation } = options;
     const agents = this.daemon.registry.getAll();
-
-    //
-
-
-    // until resumed. Hiding them from the new agent's context makes planners
-    // falsely conclude "I'm alone" and spawn duplicate roles.
-    //
-    // Scope to the same team so one team's agents don't leak into another's
-    // context. Completed teammates get a 1-hour freshness cutoff so truly
-    // stale ones don't clutter the intro.
-    const COMPLETED_WINDOW_MS = 60 * 60 * 1000;
-    const sameTeam = (a) =>
-      a.id !== newAgent.id &&
-      (!newAgent.teamId || a.teamId === newAgent.teamId);
-    const activeOthers = agents.filter((a) =>
-      sameTeam(a) && (a.status === 'running' || a.status === 'starting')
-    );
-    const recentCompleted = agents.filter((a) => {
-      if (!sameTeam(a)) return false;
-      if (a.status !== 'completed') return false;
-      const ts = a.lastActivity ? new Date(a.lastActivity).getTime() : 0;
-      return Date.now() - ts < COMPLETED_WINDOW_MS;
-    });
-    const others = [...activeOthers, ...recentCompleted];
+    // Only include ACTIVE agents — not completed/killed ones from previous sessions
+    // Completed agents' work is captured in the journalist's project map, not here
+    const others = agents.filter((a) => a.id !== newAgent.id &&
+      (a.status === 'running' || a.status === 'starting'));
 
     const lines = [
       `# GROOVE Agent Context`,
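In isolation, the replacement filter in this hunk is a plain status check. A minimal sketch of its behavior, using hypothetical registry entries with only the `id` and `status` fields the filter actually reads:

```javascript
// Hypothetical registry snapshot. Shapes mirror the fields the filter reads.
const newAgent = { id: 'a1' };
const agents = [
  { id: 'a1', status: 'running' },   // the new agent itself: always excluded
  { id: 'a2', status: 'running' },   // active teammate: included
  { id: 'a3', status: 'starting' },  // still booting: included
  { id: 'a4', status: 'completed' }, // finished in a prior session: excluded
  { id: 'a5', status: 'killed' },    // excluded
];

// v0.27.6 behavior: only currently-active teammates reach the intro context.
const others = agents.filter((a) => a.id !== newAgent.id &&
  (a.status === 'running' || a.status === 'starting'));

console.log(others.map((a) => a.id)); // → ['a2', 'a3']
```

Note this drops the reverted version's team scoping and 1-hour completed-teammate window entirely: completed agents are expected to surface through the journalist's project map instead.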
@@ -62,17 +42,8 @@ export class Introducer {
     if (others.length === 0) {
       lines.push('You are the only agent on this project right now.');
     } else {
-
-      const readyCount = recentCompleted.length;
-      const parts = [];
-      if (activeCount > 0) parts.push(`${activeCount} active`);
-      if (readyCount > 0) parts.push(`${readyCount} ready to resume`);
-      lines.push(`## Team (${others.length} teammate${others.length > 1 ? 's' : ''} — ${parts.join(', ')})`);
+      lines.push(`## Team (${others.length} other agent${others.length > 1 ? 's' : ''})`);
       lines.push('');
-      if (readyCount > 0) {
-        lines.push(`**Teammates marked "ready" are part of your team.** They finished their last task and will resume their session when assigned new work. If you're a planner, route new tasks to them by role — do NOT spawn duplicates.`);
-        lines.push('');
-      }
 
       // Collect all files created by teammates for the project files section
       const allTeamFiles = [];
@@ -80,8 +51,7 @@ export class Introducer {
       for (const other of others) {
         const scope = other.scope?.length > 0 ? other.scope.join(', ') : 'unrestricted';
         const dir = other.workingDir ? ` — dir: ${other.workingDir}` : '';
-
-        lines.push(`- **${other.name}** (${other.role}) — scope: ${scope}${dir} — ${statusLabel}`);
+        lines.push(`- **${other.name}** (${other.role}) — scope: ${scope}${dir} — ${other.status}`);
 
         // Get files this agent created/modified
         const files = this.daemon.journalist?.getAgentFiles(other) || [];
@@ -137,46 +107,6 @@ export class Introducer {
       }
     }
 
-    // Project memory (Layer 7) — accumulated wisdom across all prior rotations.
-    // Constraints, recent role handoffs, known error→fix patterns. Total cap ~12K chars.
-    if (this.daemon.memory) {
-      const constraints = this.daemon.memory.getConstraintsMarkdown(4000);
-      const recentChain = this.daemon.memory.getRecentHandoffMarkdown(newAgent.role, 3, 4000);
-      const discoveries = this.daemon.memory.getDiscoveriesMarkdown(newAgent.role, 20, 4000);
-
-      if (constraints || recentChain || discoveries) {
-        lines.push('');
-        lines.push(`## Project Memory`);
-        lines.push('');
-        lines.push(`This is accumulated knowledge from prior agents working on this project. Read carefully — it will save you from rediscovering what others already learned.`);
-
-        if (constraints) {
-          lines.push('');
-          lines.push(`### Constraints`);
-          lines.push('');
-          lines.push(constraints);
-        }
-
-        if (recentChain) {
-          lines.push('');
-          lines.push(`### Recent ${newAgent.role} handoffs`);
-          lines.push('');
-          lines.push(recentChain);
-        }
-
-        if (discoveries) {
-          lines.push('');
-          lines.push(`### Known patterns (from prior ${newAgent.role} agents)`);
-          lines.push('');
-          lines.push(discoveries);
-        }
-
-        // Contributing to memory is opt-in. Only mention if the agent
-        // explicitly needs to record something — no proactive prompting.
-        // (Optional: `POST /api/memory/discoveries` or `POST /api/memory/constraints`)
-      }
-    }
-
     // Project files section — tell the new agent what exists and what to read
     if (allTeamFiles.length > 0) {
       lines.push('');
@@ -219,21 +149,11 @@ export class Introducer {
     lines.push('');
     lines.push(`## Coordination Protocol`);
     lines.push('');
-    lines.push(`Before performing shared/destructive actions (restart server, npm install/build, modify package.json, modify shared config),
-    lines.push(
-    lines.push(`
-    lines.push(
-    lines.push(`
-    lines.push(`{ "agentId": "${newAgent.id}", "operation": "npm install", "resources": ["package.json", "node_modules"] }`);
-    lines.push('```');
-    lines.push('');
-    lines.push(`Complete (always call this when done, even on failure):`);
-    lines.push('```');
-    lines.push(`POST http://127.0.0.1:31415/api/coordination/complete`);
-    lines.push(`{ "agentId": "${newAgent.id}" }`);
-    lines.push('```');
-    lines.push('');
-    lines.push(`Operations auto-expire after 10 minutes to prevent deadlock.`);
+    lines.push(`Before performing shared/destructive actions (restart server, npm install/build, modify package.json, modify shared config), coordinate with your team:`);
+    lines.push(`1. Read \`.groove/coordination.md\` to check for active operations`);
+    lines.push(`2. Write your intent to \`.groove/coordination.md\` (e.g., "backend-1: restarting server")`);
+    lines.push(`3. Proceed only if no conflicting operations are active`);
+    lines.push(`4. Clear your entry from \`.groove/coordination.md\` when done`);
   }
 
   // File safety — prevent agents from deleting files they didn't create
@@ -196,10 +196,8 @@ Do NOT re-explore the entire codebase. You already know it from team creation.
 Just read the specific files related to the bug/feature, decide which existing agent should handle it, and write the routing config. This should be FAST — under 5 tool calls.
 
 HOW TO DETECT WHICH MODE:
--
-- If no
-- Teammates listed as "ready to resume" are REAL agents on your team. They finished their last task and await new instructions. They WILL pick up new work when you route it to them via recommended-team.json. Do NOT treat them as absent.
-- NEVER spawn a new agent of a role that already exists in your team. A second backend when a backend already exists is always a bug.
+- Read AGENTS_REGISTRY.md. If it lists agents with roles matching your team (frontend, backend, fullstack), you are in MODE 2.
+- If no agents exist or only a planner exists, you are in MODE 1.
 
 After completing your plan, you MUST write .groove/recommended-team.json — EVERY TIME, no exceptions.
 
@@ -7,11 +7,10 @@ import { resolve } from 'path';
 
 const DEFAULT_THRESHOLD = 0.75;
 const CHECK_INTERVAL = 15_000;
-const QUALITY_THRESHOLD = 55;
-const MIN_EVENTS = 30;
+const QUALITY_THRESHOLD = 40; // Score below this triggers quality rotation
+const MIN_EVENTS = 10; // Minimum classifier events before scoring
 const MIN_AGE_SEC = 120; // Minimum agent age before quality rotation
 const SCORE_HISTORY_MAX = 40; // ~10 min at 15s intervals
-const ROTATION_COOLDOWN_MS = 5 * 60 * 1000; // 5 min between rotations per agent — prevents churn on persistent low quality
 
 export class Rotator extends EventEmitter {
   constructor(daemon) {
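The revert from 55 back to 40 changes which scores fire the quality trigger. Using the changelog's failure case (a planner whose exploration scores in the 40-55 band because non-zero exits from `grep`/`find` read as errors to the classifier), a toy comparison; the score value is illustrative:

```javascript
// Quality rotation fires when score < QUALITY_THRESHOLD (plus idle/age gates,
// omitted here). A planner mid-exploration commonly lands between 40 and 55.
const plannerScore = 48; // illustrative, not a measured value

const fires = (threshold) => plannerScore < threshold;

console.log(fires(55)); // true:  v0.27.0 through v0.27.5 rotate the planner mid-thought
console.log(fires(40)); // false: v0.27.6 revert leaves the planner alone
```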
@@ -97,18 +96,6 @@ export class Rotator extends EventEmitter {
       : Infinity;
   }
 
-  // Check if this agent rotated recently. Prevents back-to-back rotation
-  // churn when quality score stays low post-rotation (e.g. genuinely hard task).
-  // Safety triggers bypass cooldown — pathological burn must be stopped.
-  _isInCooldown(agent) {
-    const last = [...this.rotationHistory]
-      .reverse()
-      .find((r) => r.newAgentId === agent.id || r.agentId === agent.id);
-    if (!last) return false;
-    const elapsed = Date.now() - new Date(last.timestamp).getTime();
-    return elapsed < ROTATION_COOLDOWN_MS;
-  }
-
   scoreLiveSession(agent) {
     const events = this.daemon.classifier.agentWindows[agent.id] || [];
     const ageSec = (Date.now() - new Date(agent.spawnedAt).getTime()) / 1000;
|
|
|
136
123
|
return result;
|
|
137
124
|
}
|
|
138
125
|
|
|
139
|
-
// Per-role safety multiplier for the token ceiling. Exploration-heavy
|
|
140
|
-
// roles legitimately burn tokens fast on big codebases — multiplier
|
|
141
|
-
// scales their ceiling so the safety net catches truly runaway agents
|
|
142
|
-
// without false-positiving legitimate heavy work. User-overridable via
|
|
143
|
-
// config.safety.roleMultipliers.
|
|
144
|
-
_getRoleMultiplier(role) {
|
|
145
|
-
const safety = this.daemon.config?.safety;
|
|
146
|
-
const overrides = safety?.roleMultipliers || {};
|
|
147
|
-
if (overrides[role] != null) return overrides[role];
|
|
148
|
-
const defaults = {
|
|
149
|
-
planner: 10, // heavy exploration by design
|
|
150
|
-
fullstack: 4, // QC auditors read broadly
|
|
151
|
-
analyst: 5,
|
|
152
|
-
security: 4,
|
|
153
|
-
docs: 1,
|
|
154
|
-
};
|
|
155
|
-
return defaults[role] || 1;
|
|
156
|
-
}
|
|
157
|
-
|
|
158
|
-
// Safety trigger — runaway agent detection. One check only: per-instance
|
|
159
|
-
// token ceiling scoped to `spawnedAt` so rotations don't re-trigger on
|
|
160
|
-
// inherited cumulative tokens. Velocity-based triggers were removed in
|
|
161
|
-
// v0.27.2 — they produced too many false positives on legitimate heavy
|
|
162
|
-
// exploration. If a pattern emerges from real usage that warrants an
|
|
163
|
-
// earlier-warning signal, re-add it gated on quality-degradation signals
|
|
164
|
-
// (repetitions, errors, file churn) — not velocity alone.
|
|
165
|
-
_checkSafetyTriggers(agent) {
|
|
166
|
-
const safety = this.daemon.config?.safety;
|
|
167
|
-
if (!safety || safety.autoRotate === false) return null;
|
|
168
|
-
if (!this.daemon.tokens || !agent.spawnedAt) return null;
|
|
169
|
-
|
|
170
|
-
const baseCeiling = safety.tokenCeilingPerAgent;
|
|
171
|
-
if (!baseCeiling || baseCeiling <= 0) return null;
|
|
172
|
-
|
|
173
|
-
const multiplier = this._getRoleMultiplier(agent.role);
|
|
174
|
-
const ceiling = Math.round(baseCeiling * multiplier);
|
|
175
|
-
const spawnedAtMs = new Date(agent.spawnedAt).getTime();
|
|
176
|
-
const instanceTokens = this.daemon.tokens.getTokensInWindow(agent.id, spawnedAtMs);
|
|
177
|
-
|
|
178
|
-
if (instanceTokens >= ceiling) {
|
|
179
|
-
return {
|
|
180
|
-
reason: 'token_limit_exceeded',
|
|
181
|
-
instanceTokens,
|
|
182
|
-
ceiling,
|
|
183
|
-
multiplier,
|
|
184
|
-
};
|
|
185
|
-
}
|
|
186
|
-
return null;
|
|
187
|
-
}
|
|
188
|
-
|
|
189
|
-
// Compute post-rotation velocity for rotations that are old enough to
|
|
190
|
-
// have meaningful data. Replaces hardcoded savings assumptions with
|
|
191
|
-
// measured deltas. Positive velocityDelta = rotation reduced burn rate.
|
|
192
|
-
_finalizeRotationMeasurements() {
|
|
193
|
-
if (!this.daemon.tokens?.getVelocity) return;
|
|
194
|
-
const now = Date.now();
|
|
195
|
-
let modified = false;
|
|
196
|
-
for (const record of this.rotationHistory) {
|
|
197
|
-
if (record.postRotationVelocity != null) continue;
|
|
198
|
-
if (record.preRotationVelocity == null) continue;
|
|
199
|
-
if (!record.newAgentId) continue;
|
|
200
|
-
const rotatedAt = new Date(record.timestamp).getTime();
|
|
201
|
-
if (now - rotatedAt < 600_000) continue; // need 10 min of post-data
|
|
202
|
-
const postVelocity = this.daemon.tokens.getVelocity(record.newAgentId, 600_000);
|
|
203
|
-
record.postRotationVelocity = postVelocity;
|
|
204
|
-
record.velocityDelta = record.preRotationVelocity - postVelocity;
|
|
205
|
-
modified = true;
|
|
206
|
-
}
|
|
207
|
-
if (modified) this._saveHistory();
|
|
208
|
-
}
|
|
209
|
-
|
|
210
126
|
async check() {
|
|
211
|
-
this._finalizeRotationMeasurements();
|
|
212
|
-
|
|
213
127
|
const agents = this.daemon.registry.getAll();
|
|
214
128
|
const running = agents.filter((a) => a.status === 'running');
|
|
215
129
|
|
|
216
130
|
for (const agent of running) {
|
|
217
131
|
if (this.rotating.has(agent.id)) continue;
|
|
218
132
|
|
|
219
|
-
// Safety triggers — highest priority, pathological behavior.
|
|
220
|
-
// Bypasses cooldown: pathological burn must be stopped immediately.
|
|
221
|
-
const safety = this._checkSafetyTriggers(agent);
|
|
222
|
-
if (safety) {
|
|
223
|
-
console.log(` Rotator: ${agent.name} ${safety.reason} (${safety.instanceTokens} tokens >= ${safety.ceiling} ceiling, ${safety.multiplier}x role mult) — auto-rotating`);
|
|
224
|
-
await this.rotate(agent.id, safety);
|
|
225
|
-
continue;
|
|
226
|
-
}
|
|
227
|
-
|
|
228
|
-
// Cooldown check — skip threshold-based rotations if agent just rotated.
|
|
229
|
-
// Gives the new instance time to stabilize before another judgment.
|
|
230
|
-
if (this._isInCooldown(agent)) continue;
|
|
231
|
-
|
|
232
133
|
const threshold = this.daemon.adaptive
|
|
233
134
|
? this.daemon.adaptive.getThreshold(agent.provider, agent.role)
|
|
234
135
|
: DEFAULT_THRESHOLD;
|
|
@@ -242,17 +143,11 @@ export class Rotator extends EventEmitter {
       }
     }
 
-    // Quality-based rotation — detects degradation before tokens are wasted
-    // Converged provider:role profiles have stable thresholds already, so
-    // skip quality rotation there unless score is catastrophically low.
+    // Quality-based rotation — detects degradation before tokens are wasted
     const quality = this.scoreLiveSession(agent);
     if (quality.hasEnoughData && quality.score < QUALITY_THRESHOLD) {
-
-
-      // If converged, require a deeper score drop before rotating
-      const floor = converged ? QUALITY_THRESHOLD - 15 : QUALITY_THRESHOLD;
-      if (quality.score < floor && this._idleMs(agent) > 10_000) {
-        console.log(` Rotator: ${agent.name} quality=${quality.score}${converged ? ' (converged profile)' : ''} — rotating (quality)`);
+      if (this._idleMs(agent) > 10_000) {
+        console.log(` Rotator: ${agent.name} quality=${quality.score} — rotating (quality)`);
       await this.rotate(agent.id, {
         reason: 'quality_degradation',
         qualityScore: quality.score,
|
|
|
296
191
|
brief = brief + '\n\n## User Instruction\n\n' + options.additionalPrompt;
|
|
297
192
|
}
|
|
298
193
|
|
|
299
|
-
// Capture pre-rotation velocity (tokens/10min) so we can later measure
|
|
300
|
-
// whether the rotation actually improved token efficiency. Stored in
|
|
301
|
-
// history; finalized by _finalizeRotationMeasurements() on later ticks.
|
|
302
|
-
const preRotationVelocity = this.daemon.tokens?.getVelocity
|
|
303
|
-
? this.daemon.tokens.getVelocity(agent.id, 600_000)
|
|
304
|
-
: null;
|
|
305
|
-
|
|
306
194
|
const record = {
|
|
307
195
|
agentId: agent.id,
|
|
308
196
|
agentName: agent.name,
|
|
@@ -312,22 +200,9 @@ export class Rotator extends EventEmitter {
       contextUsage: agent.contextUsage,
       reason: options.reason || 'manual',
       qualityScore: options.qualityScore || null,
-      instanceTokens: options.instanceTokens || null,
-      velocity: options.velocity || null,
-      preRotationVelocity,
-      postRotationVelocity: null,
-      velocityDelta: null,
       timestamp: new Date().toISOString(),
     };
 
-    // Capture per-session signals for specialization tracking before we clear
-    const sessionSignals = classifierEvents.length > 0
-      ? this.daemon.adaptive.extractSignals(classifierEvents, agent.scope)
-      : null;
-    const sessionScore = sessionSignals
-      ? this.daemon.adaptive.scoreSession(sessionSignals)
-      : null;
-
     await processes.kill(agentId);
 
     const routingMode = this.daemon.router.getMode(agentId);
@@ -361,35 +236,6 @@ export class Rotator extends EventEmitter {
     }
     this._saveHistory();
 
-    // Append to persistent handoff chain (Layer 7 memory)
-    // so agent #50 knows what agent #1 struggled with.
-    if (this.daemon.memory) {
-      this.daemon.memory.appendHandoffBrief(agent.role, {
-        agentId: agent.id,
-        newAgentId: newAgent.id,
-        reason: record.reason,
-        oldTokens: agent.tokensUsed,
-        contextUsage: agent.contextUsage,
-        brief,
-        timestamp: record.timestamp,
-      });
-
-      // Update per-agent + per-role specialization profile
-      const files = Array.from(new Set(
-        classifierEvents
-          .map((e) => e.input || e.file || e.path)
-          .filter((f) => typeof f === 'string' && f.length > 0)
-          .slice(-20)
-      ));
-      this.daemon.memory.updateSpecialization(agent.id, {
-        role: agent.role,
-        qualityScore: sessionScore,
-        filesTouched: files,
-        signals: sessionSignals,
-        threshold: this.daemon.adaptive?.getThreshold(agent.provider, agent.role),
-      });
-    }
-
     if (this.daemon.timeline) {
       this.daemon.timeline.recordEvent('rotate', {
         agentId: newAgent.id, oldAgentId: agentId,
@@ -397,8 +243,6 @@ export class Rotator extends EventEmitter {
         tokensBefore: agent.tokensUsed,
         reason: record.reason,
         qualityScore: record.qualityScore,
-        instanceTokens: record.instanceTokens,
-        velocity: record.velocity,
       });
     }
 
@@ -488,10 +332,6 @@ export class Rotator extends EventEmitter {
     const qualityRotations = this.rotationHistory.filter((r) => r.reason === 'quality_degradation').length;
     const contextRotations = this.rotationHistory.filter((r) => r.reason === 'context_threshold').length;
    const naturalCompactions = this.rotationHistory.filter((r) => r.reason === 'natural_compaction').length;
-    const tokenLimitRotations = this.rotationHistory.filter((r) => r.reason === 'token_limit_exceeded').length;
-    // Legacy: velocity rotations are no longer triggered (removed v0.27.2)
-    // but historical entries may remain in saved history.
-    const velocityRotations = this.rotationHistory.filter((r) => r.reason === 'runaway_velocity').length;
     return {
       enabled: this.enabled,
       totalRotations,
@@ -499,8 +339,6 @@ export class Rotator extends EventEmitter {
|
|
|
499
339
|
qualityRotations,
|
|
500
340
|
contextRotations,
|
|
501
341
|
naturalCompactions,
|
|
502
|
-
tokenLimitRotations,
|
|
503
|
-
velocityRotations,
|
|
504
342
|
rotating: Array.from(this.rotating),
|
|
505
343
|
liveScores: this.liveScores,
|
|
506
344
|
scoreHistory: this.scoreHistory,
|