mcp-coordinator 0.5.0 → 0.6.1

package/README.md CHANGED
<div align="center">

# mcp-coordinator

**Embedded MQTT broker + MCP server for multi-agent coordination. Zero conflicts, everyone aligned.**

[![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](LICENSE)
[![npm](https://img.shields.io/npm/v/mcp-coordinator.svg)](https://www.npmjs.com/package/mcp-coordinator)
[![Tests](https://github.com/swoofer/mcp-coordinator/actions/workflows/test.yml/badge.svg)](https://github.com/swoofer/mcp-coordinator/actions)

[Getting started](#getting-started) · [Problem](#the-problem) · [How It Works](#how-it-works) · [MQTT Layer](#mqtt-communication-layer) · [Scoring](#impact-scoring) · [MCP Tools](#mcp-tools) · [CLI](#cli) · [Standalone use](#standalone-use--without-an-orchestrator) · [Quota](#anthropic-quota-pre-flight) · [Dashboard](#dashboard) · [Config](#configuration) · [Auth](#authentication)

</div>

---

## The Problem

When multiple developers each use an AI coding agent in parallel on the same repo, things break:

- **Regressions** — Agent A rewrites a module that Agent B was depending on
- **Duplicated work** — Two agents implement the same feature from different directions
- **Architectural drift** — Agents make local decisions that conflict with each other's designs
- **Wasted reconciliation time** — Developers spend hours untangling what the agents did

Each agent works in isolation. None of them knows what the others are doing.

mcp-coordinator fixes this by giving agents a **shared nervous system over MQTT** — they announce intentions before coding, conflicts are detected before a single line is written, and agents see each other's actions in real time so they can agree on an approach.

It works **with or without** an orchestrator on top. Use it standalone with any MCP client (Claude Code, Cursor, Cline, Aider) — see [Standalone use](#standalone-use--without-an-orchestrator). Or pair it with [essaim](https://github.com/swoofer/essaim) when you want pre-composed agent profiles, work-stealing templates, and a behavior catalog.

---

## Getting started

```bash
# 1. Install
npm install -g mcp-coordinator

# 2. First-time setup — creates ~/.mcp-coordinator/, writes a default config,
#    and prints a .mcp.json snippet for your MCP client.
mcp-coordinator init

# 3. Start the server (foreground, or --daemon for background)
mcp-coordinator server start --daemon

# 4. Verify
mcp-coordinator server status
mcp-coordinator dashboard   # opens http://localhost:3100/dashboard
```

Step 2 is idempotent — re-running `init` won't overwrite an existing config. The snippet it prints goes into your MCP client's config (e.g., `~/.claude/.mcp.json` for Claude Code). If you'd rather not copy-paste, run `mcp-coordinator init --write-mcp-config <project-path>` and the snippet is written to `<project-path>/.mcp.json` (merging if the file already exists).

After step 4, every Claude Code (or other MCP-compatible) session connected to this coordinator can call all 26 tools (`register_agent`, `announce_work`, `post_to_thread`, `coordinator_status`, ...). For the full multi-Claude or team setup, see [Standalone use](#standalone-use--without-an-orchestrator).

---

## How It Works

```
 Agent A                          Agent B
    │                                │
    │ announce_work                  │ announce_work
    ▼                                ▼
┌──────────────┐                ┌──────────────┐
│  MCP client  │ ◄── MQTT ────► │  MCP client  │
│ (any vendor) │   push-based   │ (any vendor) │
└──────┬───────┘                └──────┬───────┘
       │       MCP HTTP / SSE         │
       └───────────────┬───────────────┘
                       │
             ┌─────────▼──────────┐
             │  mcp-coordinator   │
             │ 26 MCP tools + DB  │
             │ Aedes MQTT broker  │
             └─────────┬──────────┘
                       │ SSE
             ┌─────────▼──────────┐
             │     Dashboard      │
             │ live events/quota  │
             └────────────────────┘
```

The **consultation cycle** has four steps:

1. **Announce** — A client calls `announce_work` with target files, `depends_on_files`, and target modules before coding.
2. **Detect** — The coordinator scores impact against all online agents and opens a thread if a score ≥ 90 matches.
3. **Consult** — MQTT pushes the new thread to every affected agent. Each agent posts context, constraints, or proposes a resolution.
4. **Resolve** — Agents approve, contest, or propose again. The thread closes when consensus is reached, or auto-resolves after a timeout / in gray zones.

The server is **client-agnostic**: any MCP-compatible agent (Claude Code, Cursor, Cline, Aider, custom scripts) can connect over HTTP/SSE or stdio.

---

## MQTT Communication Layer

The coordinator ships with an **embedded [Aedes](https://github.com/moscajs/aedes) MQTT broker**. Agents subscribe once and receive every coordination event in real time — no polling, no extra infrastructure.

### Broker

| Transport | Port | Use case |
|-----------|------|----------|
| TCP | `1883` (binds `127.0.0.1` by default) | Local / LAN agents, best latency |
| WebSocket | `/mqtt` on the coordinator HTTP port (default `3100`) | Bun binary, remote agents, firewall-friendly |

One coordinator = one broker. Nothing external to install.

### Topic map

Every coordinator event is published on a well-known topic. Clients subscribe to the full set on connect.

| Topic | Emitted when | Payload highlights |
|-------|--------------|--------------------|
| `coordinator/consultations/new` | A thread is opened | `thread_id`, `subject`, `initiator_id`, `target_modules`, `target_files` |
| `coordinator/consultations/{id}/messages` | Anyone posts to a thread | `agent_id`, `name`, `content`, `type` (warning/context/proposal) |
| `coordinator/consultations/{id}/status` | Thread transitions state | `status` ∈ `open` / `resolving` / `resolved` / `timeout` |
| `coordinator/consultations/{id}/claimed` | An agent atomically claims a task (work-stealing) | `claimed_by`, `thread_id` |
| `coordinator/consultations/{id}/completed` | Claimed task finishes | `agent_id`, `thread_id`, `resolution` |
| `coordinator/agents/{id}/status` | Agent goes online / offline | `status`, `name`, `modules` |
| `coordinator/broadcast` | System-wide announcements | arbitrary JSON |
| `coordinator/quota/update` | Anthropic quota refresh | `usage`, `limit`, `utilization_pct` |

### Push delivery flow

```
COORDINATOR               BROKER (Aedes)            CLIENT
───────────               ──────────────            ──────

announce_work() ────────► publish                   subscribe
                          coordinator/                  │
                          consultations/new ──────► event
                                                    classify topic
                                                    self-msg filter
                                                    ─► handler
```

Key guarantees:

- **Self-filter** — clients drop messages where `payload.agent_id` equals the local agent's id, so agents never wake on their own actions.
- **Bun compatibility** — when consumed from a Bun-compiled client, a Duplex stream bridges the `mqtt` client to the native WebSocket API (the `ws` package receiver doesn't work under Bun).
- **Backpressure-free** — messages are small JSON envelopes.
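The self-filter can be sketched as a pure function over the incoming envelope. This is a hypothetical client-side helper, not the package's API; the payload shape (`agent_id` at the top level) is an assumption based on the topic table above:

```typescript
// Hypothetical sketch of the client-side self-filter: drop any MQTT payload
// whose agent_id matches the local agent, so handlers never fire on the
// agent's own publishes. Events without an agent_id (system broadcasts)
// pass through.
interface CoordinationEvent {
  agent_id?: string;
  [key: string]: unknown;
}

function shouldDeliver(localAgentId: string, raw: string): boolean {
  let event: CoordinationEvent;
  try {
    event = JSON.parse(raw);
  } catch {
    return false; // malformed envelope — ignore rather than crash the handler
  }
  return event.agent_id === undefined || event.agent_id !== localAgentId;
}
```

A client would call this once per message, before routing the event to its topic handler.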

---

## Impact Scoring

Every `announce_work` call scores all online agents across multiple detection layers. The highest matching layer wins.

| Layer | Signal | Score | Trigger |
|-------|--------|------:|---------|
| 0a | Same file announced in active thread | 100 | `target_files` ∩ their `target_files` |
| 0b | They modify a file you depend on | 80 | `depends_on_files` ∩ their `target_files` |
| 0c | You modify a file they depend on | 80 | `target_files` ∩ their `depends_on_files` |
| 1 | Same file recently edited | 100 | File tracker conflict (last 60s) |
| 2 | Dependency file recently edited | 80 | `depends_on_files` recently touched |
| 3 | Same module prefix | 30 | `target_modules` overlap |

Scores are categorized into three outcomes:

| Score | Category | Action |
|-------|----------|--------|
| ≥ 90 | `concerned` | Thread opened, consultation required |
| 30–89 | `gray_zone` | Thread auto-resolved, introspection recommended |
| < 30 | `pass` | No conflict, proceed immediately |

> **Layer 0 is critical.** Without announced intentions, a two-agent scenario where both work in `src/auth/` would score only 30 (gray zone, auto-resolved). With `announce_work`, the same scenario scores 100 and triggers a full consultation.
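The two tables above can be sketched as pure functions. This is an illustrative reduction, not the coordinator's implementation: Layer 0 is simplified to set intersection on file lists, and the recency-based layers (1, 2) and module-prefix layer (3) are omitted:

```typescript
// Score → category mapping from the outcomes table.
type Category = "concerned" | "gray_zone" | "pass";

function categorize(score: number): Category {
  if (score >= 90) return "concerned";
  if (score >= 30) return "gray_zone";
  return "pass";
}

// Layers 0a/0b/0c as set intersections over announced intentions
// (hypothetical helper; field names follow the announce_work parameters).
interface Announcement {
  target_files: string[];
  depends_on_files: string[];
}

function layer0Score(mine: Announcement, theirs: Announcement): number {
  const overlap = (a: string[], b: string[]) => a.some((f) => b.includes(f));
  if (overlap(mine.target_files, theirs.target_files)) return 100;    // 0a
  if (overlap(mine.depends_on_files, theirs.target_files)) return 80; // 0b
  if (overlap(mine.target_files, theirs.depends_on_files)) return 80; // 0c
  return 0;
}
```

Two agents announcing the same `target_files` thus land at 100 → `concerned`, which is exactly the Layer 0 scenario described in the note above.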

---

## MCP Tools

26 tools organized by function. All are registered under one HTTP/SSE transport at `/mcp` (and stdio for stdio-mode clients).

### Agent registry

| Tool | Description |
|------|-------------|
| `register_agent` | Register as online with a name and module list |
| `list_agents` | List all registered online agents |
| `heartbeat` | Update last-seen and derive activity status |
| `agent_activity` | Get activity status for all online agents |
| `wait_for_peers` | Block until N peers are online, or timeout (prevents a race before the first announce) |

### Consultation

| Tool | Description |
|------|-------------|
| `announce_work` | Open a consultation thread — the main entry point before coding |
| `post_to_thread` | Post a message (warning, context, question) to an open thread |
| `propose_resolution` | Submit a resolution proposal for participants to approve |
| `approve_resolution` | Approve the current resolution proposal |
| `contest_resolution` | Reject the proposal with a reason; the thread resets to `open` |
| `close_thread` | Close a thread after work is complete |
| `cancel_thread` | Cancel a thread (work abandoned or no longer relevant) |
| `get_thread` | Get a thread with all messages and current status |
| `get_thread_updates` | Poll for new messages since a timestamp |
| `list_threads` | List threads, filterable by status or agent |
| `log_action_summary` | Log a one-liner action summary for the dashboard timeline |

### File tracking

| Tool | Description |
|------|-------------|
| `hot_files` | List files being edited by multiple agents |
| `get_session_files` | Get all files edited by an agent in the current session |
| `check_file_conflict` | Check whether another agent edited a given file recently |

### Dependency map

| Tool | Description |
|------|-------------|
| `set_dependency_map` | Load a module dependency graph (JSON) |
| `get_blast_radius` | Calculate which other modules are affected by changes |
| `get_module_info` | Get dependency and dependent info for a module |
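Conceptually, a blast-radius query is a transitive-dependents walk over the loaded graph. The sketch below is illustrative only — the map shape (`module → modules it depends on`) and function name are assumptions, not the tool's actual schema:

```typescript
// Given a dependency map, find every module transitively affected by a
// change to `changed`: invert the map to a dependents index, then BFS.
type DepMap = Record<string, string[]>;

function blastRadius(deps: DepMap, changed: string): string[] {
  // Invert: for each dependency edge mod -> dep, record mod as a dependent of dep.
  const dependents = new Map<string, string[]>();
  for (const [mod, uses] of Object.entries(deps)) {
    for (const dep of uses) {
      dependents.set(dep, [...(dependents.get(dep) ?? []), mod]);
    }
  }
  // BFS outward from the changed module through its dependents.
  const affected = new Set<string>();
  const queue = [changed];
  while (queue.length > 0) {
    const mod = queue.shift()!;
    for (const d of dependents.get(mod) ?? []) {
      if (!affected.has(d)) {
        affected.add(d);
        queue.push(d);
      }
    }
  }
  return [...affected].sort();
}
```

For example, with `ui → api → auth → models`, a change to `models` affects all three upstream modules, while a change to `ui` affects nothing.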

### MQTT

| Tool | Description |
|------|-------------|
| `wait_for_message` | Block until a coordination message arrives on the agent's topic |
| `get_queued_messages` | Drain all queued messages without blocking |
| `mqtt_publish` | Publish a raw message to any MQTT topic |

### Status

| Tool | Description |
|------|-------------|
| `coordinator_status` | Full system status: agents, threads, file activity, MQTT, quota |
The in-server `introspection` tool returns the full schema for every tool — point any MCP client at it for live discovery.

---

## CLI

Two distribution channels:

- **npm** — `npm install -g mcp-coordinator`. Requires Node.js 20+.
- **Single-file binary** — Bun-compiled, no Node required. Download the matching tarball from a [GitHub Release](https://github.com/swoofer/mcp-coordinator/releases).

### Commands

| Command | Description |
|---------|-------------|
| `mcp-coordinator init [--url <url>] [--write-mcp-config <path>] [--write-claude-md <path>]` | First-time setup — create the config dir and default `config.json`, print/write the `.mcp.json` snippet, optionally scaffold a sample `CLAUDE.md` |
| `mcp-coordinator uninstall [--mcp-config <path>] [--claude-md <path>] [--purge] [--force]` | Remove integrations: drop the `coordinator` entry from a `.mcp.json`, strip the coordination section from a `CLAUDE.md`, or `--purge` the `~/.mcp-coordinator/` directory entirely |
| `mcp-coordinator server start [--port N] [--data-dir PATH] [--daemon]` | Start the coordinator (foreground or daemon) |
| `mcp-coordinator server stop` | Stop the coordinator |
| `mcp-coordinator server status` | PID, port, online agents, open threads |
| `mcp-coordinator server logs [-n N] [-f]` | Tail the daemon log at `~/.mcp-coordinator/logs/server.log` |
| `mcp-coordinator dashboard` | Open `http://localhost:3100/dashboard` |
| `mcp-coordinator doctor [--host H] [--port P] [--mqtt-port P]` | Health check: config, server liveness, `/health`, `/mcp` initialize, dashboard, MQTT broker |
| `mcp-coordinator --version` | Print the installed version |

### Quick start

```bash
# Start the coordinator (embedded MQTT + dashboard)
mcp-coordinator server start --daemon

# Open the dashboard
mcp-coordinator dashboard

# Stop when done
mcp-coordinator server stop
```

### In-process from your own Node app

```ts
import { startServer } from "mcp-coordinator";

await startServer({
  port: 3100,
  dataDir: "./coordinator-data",
});
```

---

## Standalone use — without an orchestrator

You don't need an orchestrator. mcp-coordinator works on its own with any MCP-compatible client — Claude Code, Cursor, Cline, Aider, custom scripts. The two most common setups:

### Solo developer, multiple Claude Code sessions

You're running 2–3 Claude Code sessions in parallel on the same repo and want them to see each other's work. One coordinator instance handles all of them.

```bash
# In one terminal: start the coordinator
mcp-coordinator server start --daemon
```

Then add the coordinator to each Claude Code session's `.mcp.json` (located at `~/.claude/.mcp.json` for the global config, or `<your-project>/.mcp.json` for per-project):

```json
{
  "mcpServers": {
    "coordinator": {
      "type": "http",
      "url": "http://localhost:3100/mcp"
    }
  }
}
```

Each Claude session now has access to all 26 coordination tools (`register_agent`, `announce_work`, `post_to_thread`, etc.). Open `mcp-coordinator dashboard` in a browser to watch real-time activity across your sessions.

### Team setup — shared coordinator on LAN

One person hosts the coordinator on a shared machine; teammates point their Claude at it.

Host:

```bash
# Bind to all interfaces; the default is 127.0.0.1
COORDINATOR_BIND=0.0.0.0 mcp-coordinator server start --daemon
```

Each teammate's `.mcp.json` points to the host's IP:

```json
{
  "mcpServers": {
    "coordinator": {
      "type": "http",
      "url": "http://192.168.1.42:3100/mcp"
    }
  }
}
```

For internet-facing or multi-tenant deployments, enable JWT auth (see [Authentication](#authentication)). Each teammate registers via `POST /api/auth/register` with the team's `COORDINATOR_REGISTRATION_SECRET`, gets a Bearer token, and adds it to their `.mcp.json`:

```json
{
  "mcpServers": {
    "coordinator": {
      "type": "http",
      "url": "https://coordinator.example.com/mcp",
      "headers": { "Authorization": "Bearer <your-token>" }
    }
  }
}
```

### Telling Claude to use the coordinator tools

Without a behavior catalog (which is what [essaim](https://github.com/swoofer/essaim) ships), you instruct Claude manually. The easiest path:

```bash
# In your project root — scaffolds CLAUDE.md with coordinator instructions
mcp-coordinator init --write-claude-md ~/my-repo --write-mcp-config ~/my-repo
```

This appends a clearly marked `mcp-coordinator:coordination-section` block to `~/my-repo/CLAUDE.md` (creating it if absent, replacing the section if it already exists). Combined with `--write-mcp-config`, your project is fully wired in one command.

If you'd rather embed the instructions yourself (or you're not using Claude Code), the section reads roughly:

> Before modifying any source file, register with the coordinator MCP server:
>
> 1. Call `register_agent` with your name and the modules you'll touch
> 2. Call `announce_work` describing what you'll do, listing target files (and `depends_on_files` if applicable)
> 3. If a thread is created (consultation triggered), wait for the resolution before writing code
> 4. After a meaningful change, call `log_action_summary` to update the dashboard timeline
> 5. If another agent is already working on a file you need to touch, post a question to the thread via `post_to_thread` and wait for their response before proceeding
>
> Use the `coordinator_status` tool to see current activity at any time.

That's all you need to start coordinating. The dashboard shows live who's doing what; the SQLite database persists threads across sessions; conflicts are detected before code is written.

### Push vs polling — important architectural note

Vanilla Claude Code talks to mcp-coordinator over MCP (HTTP/stdio request-response). It **does not subscribe to MQTT**. That means events the coordinator publishes on MQTT (`coordinator/consultations/new`, etc.) are not auto-delivered to a Claude Code session — Claude has to **poll** the coordinator to discover new activity. The polling pattern is:

- `announce_work` returns the thread ID immediately if a conflict is detected — that's the most important checkpoint
- After that, periodic calls to `coordinator_status` / `list_threads` / `get_thread_updates` surface new posts on threads you're a participant in
- The CLAUDE.md scaffolded by `mcp-coordinator init --write-claude-md` instructs Claude to do exactly this polling
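The polling loop can be sketched as follows. This is a hypothetical client-side helper, transport-agnostic by design: the coordinator call is injected, and the thread shape is an assumption based on the statuses documented in the topic map:

```typescript
// Poll a thread until it reaches a terminal state. `listThreads` stands in
// for however your client issues the list_threads tool call.
interface ThreadSummary {
  thread_id: string;
  status: "open" | "resolving" | "resolved" | "timeout";
}

async function pollUntilResolved(
  threadId: string,
  listThreads: () => Promise<ThreadSummary[]>,
  opts = { intervalMs: 5_000, maxAttempts: 60 },
): Promise<ThreadSummary["status"]> {
  for (let i = 0; i < opts.maxAttempts; i++) {
    const thread = (await listThreads()).find((t) => t.thread_id === threadId);
    if (thread && (thread.status === "resolved" || thread.status === "timeout")) {
      return thread.status;
    }
    await new Promise((r) => setTimeout(r, opts.intervalMs));
  }
  throw new Error(`thread ${threadId} not resolved after ${opts.maxAttempts} polls`);
}
```

In practice an MCP client would back `listThreads` with the `list_threads` tool and pick an interval coarse enough not to waste turns.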

If you want **real-time push** (every coordination event interrupting Claude between turns instead of waiting for a poll), use [essaim](https://github.com/swoofer/essaim). essaim ships an agent-loop wrapper that subscribes to the MQTT broker and injects events into the turn flow automatically. mcp-coordinator alone supports the polling model, which is sufficient for most use cases (2–3 Claude sessions on a small team) and zero-config to set up.

### End-to-end example: two Claudes coordinating (polling model)

Two terminals, same repo, both Claude Code sessions wired to the same local coordinator. Both sessions have a `CLAUDE.md` scaffolded by `mcp-coordinator init --write-claude-md`, which instructs Claude to register, announce, and poll. The conversation below is what each Claude does — the human user just asks each Claude to make a change.

```
TERMINAL 1 (Alice)                       TERMINAL 2 (Bob)

$ claude                                 $ claude
> "Add updated_at to User type in        > "Migrate User schema"
   src/models/user.ts"                      (touches src/models/user.ts)

[Alice's Claude]                         [Bob's Claude]
register_agent(name="Alice", ...)        register_agent(name="Bob", ...)
announce_work(
  target_files: ["src/models/user.ts"]
)
→ response: { thread_id: null,
    concerned_agents: [] }               announce_work(
                                           target_files: ["src/models/user.ts",
                                             "migrations/004.sql"]
                                         )
                                         → response: { thread_id: "T-1",
                                             concerned_agents: ["alice"],
                                             score: 100, layer: "0a" }
                                         [Bob sees the conflict in the response]
                                         get_thread("T-1")
                                         post_to_thread("T-1", type: "context",
                                           content: "full schema migration; can
                                           wait for your field to land first")

[Alice writes the field, then before
 the next major action the CLAUDE.md
 says "poll coordinator_status"]
coordinator_status()
→ response: shows T-1 with Bob's post
get_thread("T-1")
post_to_thread("T-1", type: "context",
  content: "adding 1 field at line 42,
  no rename. Done in 5 min.")
propose_resolution("T-1",
  content: "Alice's field first,
  Bob runs migration after")

                                         [Bob's CLAUDE.md polling step]
                                         coordinator_status()
                                         → shows T-1 in 'resolving' state
                                         get_thread("T-1")
                                         approve_resolution("T-1")

[Alice's next poll]
coordinator_status()
→ T-1 status = 'resolved'
[Alice writes the field]                 [Bob writes the migration]
log_action_summary(...)                  log_action_summary(...)
```

The dashboard at `http://localhost:3100/dashboard/` shows the entire timeline live. `mcp-coordinator server logs -f` (in a third terminal) tails the daemon log if you want to see the protocol-level events. If the polling cadence is too coarse and you find Claude missing posts, switch to essaim's agent loop, which delivers MQTT events automatically.

### Team setup walkthrough — shared coordinator with JWT

Full step-by-step for a team running a coordinator on a shared host with internet-facing or multi-tenant access. Adjust to your network/TLS reality.

**Step 1 (host) — generate secrets**

```bash
# 32+ char shared secrets; put them in your secrets manager and inject as env vars
JWT_SECRET=$(openssl rand -hex 32)
REGISTRATION_SECRET=$(openssl rand -hex 32)
ADMIN_SECRET=$(openssl rand -hex 32)
```
**Step 2 (host) — start the coordinator with auth enabled**

```bash
COORDINATOR_AUTH_ENABLED=true \
COORDINATOR_JWT_SECRET="$JWT_SECRET" \
COORDINATOR_REGISTRATION_SECRET="$REGISTRATION_SECRET" \
COORDINATOR_ADMIN_SECRET="$ADMIN_SECRET" \
COORDINATOR_BIND=0.0.0.0 \
mcp-coordinator server start --daemon --port 3100
```

(Front the server with TLS via nginx/Caddy/etc. for internet exposure. A local LAN can use plain HTTP.)

**Step 3 (each teammate) — request a token**

```bash
curl -X POST https://coordinator.example.com/api/auth/register \
  -H "Content-Type: application/json" \
  -d '{"agent_name":"alice","registration_secret":"<REGISTRATION_SECRET shared via team channel>"}'
# Response: { "agent_id": "alice-abc123", "token": "eyJ...", "expires_at": "...", "role": "agent" }
```

**Step 4 (each teammate) — wire `.mcp.json`**

```json
{
  "mcpServers": {
    "coordinator": {
      "type": "http",
      "url": "https://coordinator.example.com/mcp",
      "headers": { "Authorization": "Bearer <paste-token-here>" }
    }
  }
}
```

**Step 5 (each teammate) — run `init --write-claude-md` to scaffold project instructions**, or add the coordination section to an existing `CLAUDE.md`.

**Step 6 (each teammate) — verify**: `mcp-coordinator doctor --host coordinator.example.com --port 443` should show all checks green from any laptop.

**Token rotation**: tokens expire per `COORDINATOR_JWT_EXPIRY` (default 24h). Refresh via `POST /api/auth/refresh` with the current Bearer token. The admin can revoke a specific agent via `POST /api/auth/revoke` (admin token required).

### Logs and debugging

The daemon writes to `~/.mcp-coordinator/logs/server.log`. Tail it:

```bash
mcp-coordinator server logs          # last 50 lines
mcp-coordinator server logs -n 200   # last 200 lines
mcp-coordinator server logs -f       # follow (Ctrl+C to stop)
```

For a one-shot check that everything is wired up correctly (config valid, server up, MCP responds, dashboard reachable, MQTT accepting connections), use the doctor:

```bash
mcp-coordinator doctor
```

`doctor` exits non-zero if any check fails and prints actionable hints next to each failure. Probe a remote coordinator with `--host` and `--port`:

```bash
mcp-coordinator doctor --host coordinator.example.com --port 443 --mqtt-port 1883
```
Logging level is controlled by `LOG_LEVEL` (`debug`, `info`, `warn`, `error`; default `info`). Set `NODE_ENV=development` for human-readable pretty logs:

```bash
NODE_ENV=development LOG_LEVEL=debug mcp-coordinator server start
```

### Removing the integration (per-project or globally)

Symmetric to `init`, the `uninstall` command undoes what was added without touching anything you wrote yourself.

```bash
# Remove coordinator from a project's .mcp.json AND strip its section from CLAUDE.md
mcp-coordinator uninstall --mcp-config ~/my-repo --claude-md ~/my-repo

# Wipe the global config dir (~/.mcp-coordinator/) entirely — config + data + logs + PID file
mcp-coordinator uninstall --purge          # asks for confirmation
mcp-coordinator uninstall --purge --force  # skip the prompt, useful in scripts
```

`--mcp-config <path>` reads `<path>/.mcp.json`, removes only the `coordinator` server entry (other servers untouched), and deletes the file if it ends up empty. `--claude-md <path>` removes only the block delimited by the `mcp-coordinator:coordination-section` sentinels (rendered as HTML comments around the section) — it never touches text you authored. Combine flags as needed; if the resulting `CLAUDE.md` is empty, it's deleted.
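Sentinel-based removal can be sketched as a pure string transform. The exact comment text below is an assumption (only the `mcp-coordinator:coordination-section` name comes from the docs), but it shows why user-authored text is safe: only the span between the markers is dropped:

```typescript
// Drop the block between the start/end sentinels, leaving everything else
// intact; if either marker is missing, return the input unchanged.
const START = "<!-- mcp-coordinator:coordination-section:start -->";
const END = "<!-- mcp-coordinator:coordination-section:end -->";

function stripCoordinationSection(claudeMd: string): string {
  const start = claudeMd.indexOf(START);
  const end = claudeMd.indexOf(END);
  if (start === -1 || end === -1 || end < start) return claudeMd; // nothing to strip
  const before = claudeMd.slice(0, start);
  const after = claudeMd.slice(end + END.length);
  // Collapse the blank-line gap the removal leaves behind.
  return (before + after).replace(/\n{3,}/g, "\n\n").trim() + "\n";
}
```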

To remove the npm package itself: `npm uninstall -g mcp-coordinator`.

### Running multiple coordinators on the same machine

Useful for per-project isolation — every project gets its own ephemeral coordinator with no cross-contamination. Pick distinct ports + data dirs:

```bash
# Project A
PORT=3110 \
COORDINATOR_MQTT_TCP_PORT=11883 \
mcp-coordinator server start --daemon --data-dir ./.mcp-coordinator-A

# Project B (different terminal)
PORT=3120 \
COORDINATOR_MQTT_TCP_PORT=12883 \
mcp-coordinator server start --daemon --data-dir ./.mcp-coordinator-B
```
The default `~/.mcp-coordinator/server.pid` only tracks ONE daemon at a time. For multi-instance runs, pass `--data-dir` explicitly to each instance — the PID file lives next to the data dir, so multiple instances don't fight over the same file. To stop a specific instance, `cd` to its data dir's parent and run `mcp-coordinator server stop` from there, or `kill $(cat ./.mcp-coordinator-A/../server.pid)`.

In each project's `.mcp.json`, point at the project's coordinator:

```json
{
  "mcpServers": {
    "coordinator": {
      "type": "http",
      "url": "http://localhost:3110/mcp"
    }
  }
}
```

This pattern works well alongside `essaim`, which uses Strategy A (in-process) and starts its own ephemeral coordinator per `essaim run` — there's no port conflict because essaim picks an isolated dir by default.

---

## Anthropic Quota Pre-flight

The coordinator tracks Anthropic workspace quota live and exposes it on MQTT, the dashboard, and the `coordinator_status` MCP tool — so MCP clients can decide whether to abort, throttle, or proceed before launching expensive turns.

- Reads usage from the Anthropic API using the key in the environment.
- Threshold set via the `MAX_QUOTA_PCT` env var (default `95`).
- Backs off when the usage endpoint itself returns 429.
- Live widget in the dashboard with manual refresh + historical buckets.
- `coordinator/quota/update` MQTT events stream into the timeline by default.

Orchestrators that spawn N agents at once can read `coordinator_status.quota` and abort their run if utilization is over a configured threshold — the [essaim](https://github.com/swoofer/essaim) reference orchestrator does exactly this.

---

## Token Observability

Every MCP tool call and agent turn is logged with a token breakdown.
- **Logs** — the `tokens` component logger emits `input_tokens`, `output_tokens`, `cache_read`, `cache_creation`, `thinking`, the model id, and the turn index.
- **Dashboard** — live per-agent token gauge, cumulative session total, quota widget.

Aggregating across runs (e.g., `reports/YYYY-MM-DD-<run-id>.md`) is an orchestrator responsibility — the coordinator emits the events, the orchestrator consumes them.

---

## Dashboard

`http://localhost:3100/dashboard` (or `/dashboard` on whichever port the coordinator is bound to).

- **Timeline** — all threads + `quota_update` events with scores and resolution types
- **Agent panel** — online/offline, working/idle/waiting, current file, thread being waited on. Resizable drag handle.
- **Scoring breakdown** — which detection layer triggered each conflict
- **Quota widget** — live utilization %, stacked buckets, manual refresh button
- **Version banner** — server version shown in the header (dynamic, not hardcoded)
- **Consensus metrics** — per session: consensus / timeout / auto-resolved split, token totals

All events arrive via SSE on `/api/events`. No polling.

---

## Agent Activity States

| Status | Indicator | Meaning |
|--------|-----------|---------|
| working | pulsing blue | Actively editing files |
| idle | solid green | Online, no recent activity |
| waiting | pulsing yellow | Blocked on a consultation thread |
| offline | solid red | Disconnected or session ended |

Activity is derived from heartbeats, enriched with the current file/thread context from the file tracker.

---

## Configuration

### Local data

```
~/.mcp-coordinator/
├── config.json        # persistent configuration
├── data/
│   └── coordinator.db # SQLite database
├── server.pid         # PID file (when daemonized)
└── logs/
    └── server.log     # daemon logs
```

### config.json

```json
{
  "server": { "port": 3100, "data_dir": "~/.mcp-coordinator/data" },
  "defaults": { "coordinator_url": "http://localhost:3100" }
}
```

Resolution priority (highest to lowest): CLI flag → env var → config.json → default.
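The precedence chain can be sketched as a first-defined-wins lookup. This is an illustrative helper (the function name and parameters are hypothetical), shown here for the `port` setting:

```typescript
// Resolve a setting in priority order: CLI flag → env var → config.json →
// built-in default. The first defined value wins.
function resolvePort(
  cliFlag: number | undefined,
  env: Record<string, string | undefined>,
  configJson: { server?: { port?: number } },
  fallback = 3100,
): number {
  if (cliFlag !== undefined) return cliFlag;
  if (env.PORT !== undefined) return Number(env.PORT);
  if (configJson.server?.port !== undefined) return configJson.server.port;
  return fallback;
}
```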

### Server env vars

| Variable | Default | Description |
|----------|---------|-------------|
| `PORT` | `3100` | HTTP port (also serves MQTT-over-WebSocket on `/mqtt`) |
| `COORDINATOR_DATA_DIR` | `~/.mcp-coordinator/data` | Directory for the SQLite database |
| `COORDINATOR_MQTT_TCP_PORT` | `1883` | TCP port for the embedded broker |
| `COORDINATOR_MQTT_WS_PATH` | `/mqtt` | WebSocket path on the same HTTP port |
| `LOG_LEVEL` | `info` | `debug` / `info` / `warn` / `error` |
| `NODE_ENV` | — | `development` for pretty logs |
| `COORDINATOR_AUTH_ENABLED` | `false` | Enable JWT authentication |
| `COORDINATOR_JWT_SECRET` | — | HMAC signing key (min 32 chars) |
| `COORDINATOR_JWT_EXPIRY` | `24h` | Token lifetime (e.g., `1h`, `7d`) |
| `COORDINATOR_REGISTRATION_SECRET` | — | Shared secret for agent auto-register |
| `COORDINATOR_ADMIN_SECRET` | — | Separate secret for admin token creation |
| `MAX_QUOTA_PCT` | `95` | Pre-flight abort threshold for Anthropic quota |

### Environment variables (v0.6)

| Variable | Default | Effect |
|----------|---------|--------|
| `COORDINATOR_REPO_ROOT` | (unset → team mode) | Repo root for the path guard, FS fallback, and Layer 4 |
| `COORDINATOR_MAX_BODY_BYTES` | `1048576` | Hard cap on parsed request bodies |
| `COORDINATOR_LAYER4_DENYLIST` | (uses defaults) | Comma-separated globs appended to the denylist |
| `COORDINATOR_LAYER4_SINCE_DAYS` | `7` | `git log --since` window |
| `COORDINATOR_LAYER4_MAX_COMMITS` | `2000` | `git log --max-count` |
| `COORDINATOR_LAYER4_REFRESH_INTERVAL_MS` | `1800000` | Refresh interval on success |
| `COORDINATOR_LAYER4_RETRY_MS` | `300000` | Retry interval on timeout |
| `COORDINATOR_WORKING_FILES_TTL_MIN` | `30` | `working_files` claim TTL |
| `COORDINATOR_WORKING_FILES_SWEEP_INTERVAL_MS` | `60000` | TTL sweeper tick |

---

## Structured Logging

[Pino](https://getpino.io/) emits JSON per subsystem. Component loggers: `http`, `mcp`, `mqtt`, `consultation`, `conflict`, `auth`, `tokens`, `quota`.

Production (default):

```json
{"level":"info","time":1712345678901,"component":"http","msg":"Server started","port":3100}
```

Dev (`NODE_ENV=development`):

```
[14:21:03.456] INFO (http): Server started
    port: 3100
```

Levels are controlled via `LOG_LEVEL`.

---

## Authentication

Opt-in JWT (HS256 via [jose](https://github.com/panva/jose)). Set `COORDINATOR_AUTH_ENABLED=true` plus the required secrets to enable.
709
-
710
- ### Setup
711
-
712
- ```bash
713
- export COORDINATOR_AUTH_ENABLED=true
714
- export COORDINATOR_JWT_SECRET="your-secret-at-least-32-characters-long"
715
- export COORDINATOR_REGISTRATION_SECRET="team-shared-secret"
716
- export COORDINATOR_ADMIN_SECRET="admin-only-secret"
717
- ```
718
-
719
- ### Agent self-register
720
-
721
- ```bash
722
- curl -X POST http://localhost:3100/api/auth/register \
723
- -H "Content-Type: application/json" \
724
- -d '{"agent_name":"my-agent","registration_secret":"team-shared-secret"}'
725
- # { agent_id, token, expires_at, role }
726
- ```
727
-
728
- ### Refresh
729
-
730
- ```bash
731
- curl -X POST http://localhost:3100/api/auth/refresh \
732
- -H "Authorization: Bearer <current-token>"
733
- ```
734
-
735
- ### Revoke (admin)
736
-
737
- ```bash
738
- curl -X POST http://localhost:3100/api/auth/revoke \
739
- -H "Authorization: Bearer <admin-token>" \
740
- -H "Content-Type: application/json" \
741
- -d '{"agent_id":"agent-to-revoke"}'
742
- ```
743
-
744
- ### Exempt routes
745
-
746
- `GET /health`, `POST /api/auth/register`, `POST /api/auth/refresh`, `GET /api/events` (SSE).
747
-
748
- ---
749
-
750
- ## Test Results
751
-
752
- All four coordination scenarios are validated end-to-end by the test suite:
753
-
754
- | Scenario | Layer | Score | Category | Outcome |
755
- |----------|-------|------:|----------|---------|
756
- | S1 — Same file | 0a | 100 | concerned | Thread opened → consensus |
757
- | S2 — Same module | 3 | 30 | gray_zone | Auto-resolved, introspection |
758
- | S3 — Dependency | 0b | 80 | gray_zone | Auto-resolved, introspection |
759
- | S4 — No overlap | — | 0 | pass | Auto-resolved immediately |
760
-
761
- **Performance:**
762
-
763
- | Component | Time |
764
- |-----------|------|
765
- | Conflict detection (no LLM) | < 5 ms |
766
- | MQTT push delivery | < 50 ms end-to-end |
767
- | Full consultation cycle (S1) | 30–45 s |
768
-
769
- ---
770
-
771
- ## Integration patterns
772
-
773
- ### Any MCP client
774
-
775
- Connect to `http://localhost:3100/mcp` (HTTP/SSE) or stdio. The server speaks MCP 2024-11-05.
776
-
777
- ### Custom orchestrator
778
-
779
- Spawn agents that connect to the MQTT broker and register via the MCP `register_agent` tool. The orchestrator decides spawn count, lifecycle, and quota gating; the coordinator handles the protocol. See [essaim](https://github.com/swoofer/essaim) for a reference implementation, or write your own.
780
-
781
- ### Reference catalog of coordinator-aware behaviors
782
-
783
- The behaviors that make agents announce-before-write, resolve conflicts, and participate in work-stealing are YAML configs assembled by [@swoofer/promptweave](https://github.com/swoofer/promptweave). See [essaim's behaviors](https://github.com/swoofer/essaim/tree/main/behaviors) for a curated catalog.
784
-
785
- ---
786
-
787
- ## Development
788
-
789
- ```bash
790
- # Tests (216 passing across 18 files)
791
- npm test
792
- npm run test:watch
793
-
794
- # Dev coordinator (tsx, hot reload)
795
- npm run dev # HTTP / SSE on port 3100
796
- npm run dev:stdio # stdio mode
797
-
798
- # CLI in dev
799
- npm run cli -- server start
800
- npm run cli -- dashboard
801
-
802
- # TypeScript build → dist/
803
- npm run build
804
-
805
- # Standalone binary (requires Bun)
806
- bun build --compile cli/index.ts --outfile bin/mcp-coordinator
807
- ```
808
-
809
- ### Project structure
810
-
811
- ```
812
- src/ # Coordinator (npm package surface)
813
- serve-http.ts # HTTP/SSE/MCP server entry
814
- server-setup.ts # 26 MCP tool registrations
815
- impact-scorer.ts # multi-layer conflict detection
816
- consultation.ts # Thread lifecycle
817
- agent-registry.ts # Online agents
818
- file-tracker.ts # File edit history
819
- dependency-map.ts # Module graph
820
- agent-activity.ts # working/idle/waiting/offline
821
- mqtt-broker.ts # Embedded Aedes (TCP + WS)
822
- mqtt-bridge.ts # Coordinator → broker fanout
823
- quota/ # Anthropic quota pre-flight + refresh
824
- auth.ts # Optional JWT
825
- index.ts # Stdio entry + programmatic re-exports
826
-
827
- cli/ # CLI binary (mcp-coordinator)
828
- index.ts # Entry point
829
- server/ # start / stop / status
830
- dashboard.ts # Open dashboard URL
831
- config.ts # Config loader
832
- version.ts # package.json version helper
833
-
834
- tests/unit/ # Vitest — 216 tests, 18 files
835
- dashboard/public/ # Single-file web dashboard
836
- ```
837
-
838
- ---
839
-
840
- ## Related projects
841
-
842
- - **[@swoofer/promptweave](https://github.com/swoofer/promptweave)** — YAML composer for assembling agent prompts, hooks, and MCP configs. Use it with mcp-coordinator-aware behaviors from essaim.
843
- - **[essaim](https://github.com/swoofer/essaim)** — end-to-end orchestrator that spawns N coordinated agents using `@swoofer/promptweave` + `mcp-coordinator`. Ships the reference catalog of coordinator-aware behaviors.
844
-
845
- ---
846
-
847
- ## Support
848
-
849
- Solo maintainer. If this project saves you time, consider supporting development:
850
-
851
- - [GitHub Sponsors](https://github.com/sponsors/swoofer)
852
- - [Buy Me A Coffee](https://buymeacoffee.com/swoofer)
853
-
854
- A star on the repo also helps surface the project to other developers.
855
-
856
- ---
857
-
858
- ## License
859
-
860
- MIT
1
+ <div align="center">
2
+
3
+ # mcp-coordinator
4
+
5
+ **Embedded MQTT broker + MCP server for multi-agent coordination. Zero conflicts, everyone aligned.**
6
+
7
+ [![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](LICENSE)
8
+ [![npm](https://img.shields.io/npm/v/mcp-coordinator.svg)](https://www.npmjs.com/package/mcp-coordinator)
9
+ [![Tests](https://github.com/swoofer/mcp-coordinator/actions/workflows/test.yml/badge.svg)](https://github.com/swoofer/mcp-coordinator/actions)
10
+
11
+ [Getting started](#getting-started) · [Problem](#the-problem) · [How It Works](#how-it-works) · [MQTT Layer](#mqtt-communication-layer) · [Scoring](#impact-scoring) · [MCP Tools](#mcp-tools) · [CLI](#cli) · [Standalone use](#standalone-use--without-an-orchestrator) · [Quota](#anthropic-quota-pre-flight) · [Dashboard](#dashboard) · [Config](#configuration) · [Auth](#authentication)
12
+
13
+ </div>
14
+
15
+ ---
16
+
17
+ ## The Problem
18
+
19
+ When multiple developers each use an AI coding agent in parallel on the same repo, things break:
20
+
21
+ - **Regressions** — Agent A rewrites a module that Agent B was depending on
22
+ - **Duplicated work** — Two agents implement the same feature from different directions
23
+ - **Architectural drift** — Agents make local decisions that conflict with each other's designs
24
+ - **Wasted reconciliation time** — Developers spend hours untangling what the agents did
25
+
26
+ Each agent works in isolation. None of them know what the others are doing.
27
+
28
+ mcp-coordinator fixes this by giving agents a **shared nervous system over MQTT** — they announce intentions before coding, conflicts are detected before a single line is written, and agents see each other's actions in real-time to agree on an approach.
29
+
30
+ It works **with or without** an orchestrator on top. Use it standalone with any MCP client (Claude Code, Cursor, Cline, Aider) — see [Standalone use](#standalone-use--without-an-orchestrator). Or pair it with [essaim](https://github.com/swoofer/essaim) when you want pre-composed agent profiles, work-stealing templates, and a behavior catalog.
31
+
32
+ ---
33
+
34
+ ## Getting started
35
+
36
+ ```bash
37
+ # 1. Install
38
+ npm install -g mcp-coordinator
39
+
40
+ # 2. First-time setup — creates ~/.mcp-coordinator/, writes a default config,
41
+ # and prints a .mcp.json snippet for your MCP client.
42
+ mcp-coordinator init
43
+
44
+ # 3. Start the server (foreground or --daemon for background)
45
+ mcp-coordinator server start --daemon
46
+
47
+ # 4. Verify
48
+ mcp-coordinator server status
49
+ mcp-coordinator dashboard # opens http://localhost:3100/dashboard
50
+ ```
51
+
52
+ Step 2 is idempotent — re-running `init` won't overwrite an existing config. The snippet it prints goes into your MCP client's config (e.g., `~/.claude/.mcp.json` for Claude Code). If you'd rather not copy-paste, run `mcp-coordinator init --write-mcp-config <project-path>` and the snippet is written to `<project-path>/.mcp.json` (merging if the file already exists).
53
+
54
+ After step 4, every Claude Code (or other MCP-compatible) session connected to this coordinator can call all 26 tools (`register_agent`, `announce_work`, `post_to_thread`, `coordinator_status`, ...). For the full multi-Claude or team setup, see [Standalone use](#standalone-use--without-an-orchestrator).
55
+
56
+ ---
57
+
58
+ ## How It Works
59
+
60
+ ```
61
+ Agent A Agent B
62
+ │ │
63
+ │ announce_work │ announce_work
64
+ ▼ ▼
65
+ ┌──────────────┐ ┌──────────────┐
66
+ │ MCP client │ ◄── MQTT ────► │ MCP client │
67
+ │ (any vendor) │ push-based │ (any vendor) │
68
+ └──────┬───────┘ └──────┬───────┘
69
+ │ MCP HTTP / SSE │
70
+ └──────────────┬────────────────┘
71
+
72
+ ┌─────────▼──────────┐
73
+ │ mcp-coordinator │
74
+ │ 26 MCP tools + DB │
75
+ │ Aedes MQTT broker │
76
+ └─────────┬──────────┘
77
+ │ SSE
78
+ ┌─────────▼──────────┐
79
+ │ Dashboard │
80
+ │ live events/quota │
81
+ └────────────────────┘
82
+ ```
83
+
84
+ The **consultation cycle** has four steps:
85
+
86
+ 1. **Announce** — A client calls `announce_work` with target files, `depends_on_files`, and target modules before coding.
87
+ 2. **Detect** — The coordinator scores impact against all online agents and opens a consultation thread when any score reaches ≥ 90.
88
+ 3. **Consult** — MQTT pushes the new thread to every affected agent. Each agent posts context, constraints, or proposes a resolution.
89
+ 4. **Resolve** — Agents approve, contest, or propose again. The thread closes when consensus is reached, or auto-resolves after timeout / in gray zones.
90
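The thread lifecycle behind steps 3–4 can be sketched as a small state machine. The statuses come from the topic map below; the reducer shape and action names are assumptions for illustration, not the server's code:

```typescript
type ThreadStatus = "open" | "resolving" | "resolved" | "timeout";
type ThreadAction = "propose" | "approve_all" | "contest" | "timeout";

// open -(propose)-> resolving -(all participants approve)-> resolved;
// a contest resets to open; timeout closes any still-live thread.
function nextStatus(current: ThreadStatus, action: ThreadAction): ThreadStatus {
  if (current === "resolved" || current === "timeout") return current; // terminal
  switch (action) {
    case "propose":     return "resolving";
    case "approve_all": return current === "resolving" ? "resolved" : current;
    case "contest":     return "open";
    case "timeout":     return "timeout";
  }
}
```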
+
91
+ The server is **client-agnostic**: any MCP-compatible agent (Claude Code, Cursor, Cline, Aider, custom scripts) can connect over HTTP/SSE or stdio.
92
+
93
+ ---
94
+
95
+ ## MQTT Communication Layer
96
+
97
+ The coordinator ships with an **embedded [Aedes](https://github.com/moscajs/aedes) MQTT broker**. Agents subscribe once and receive every coordination event in real-time — no polling, no extra infrastructure.
98
+
99
+ ### Broker
100
+
101
+ | Transport | Port | Use case |
102
+ |-----------|------|----------|
103
+ | TCP | `1883` (bind `127.0.0.1` by default) | Local / LAN agents, best latency |
104
+ | WebSocket | `/mqtt` on the coordinator HTTP port (default `3100`) | Bun binary, remote agents, firewall-friendly |
105
+
106
+ One coordinator = one broker. Nothing external to install.
107
+
108
+ ### Topic map
109
+
110
+ Every coordinator event is published on a well-known topic. Clients subscribe to the full set on connect.
111
+
112
+ | Topic | Emitted when | Payload highlights |
113
+ |-------|--------------|--------------------|
114
+ | `coordinator/consultations/new` | A thread is opened | `thread_id`, `subject`, `initiator_id`, `target_modules`, `target_files` |
115
+ | `coordinator/consultations/{id}/messages` | Anyone posts to a thread | `agent_id`, `name`, `content`, `type` (warning/context/proposal) |
116
+ | `coordinator/consultations/{id}/status` | Thread transitions state | `status` ∈ `open` / `resolving` / `resolved` / `timeout` |
117
+ | `coordinator/consultations/{id}/claimed` | An agent atomically claims a task (work-stealing) | `claimed_by`, `thread_id` |
118
+ | `coordinator/consultations/{id}/completed` | Claimed task finishes | `agent_id`, `thread_id`, `resolution` |
119
+ | `coordinator/agents/{id}/status` | Agent goes online / offline | `status`, `name`, `modules` |
120
+ | `coordinator/broadcast` | System-wide announcements | arbitrary JSON |
121
+ | `coordinator/quota/update` | Anthropic quota refresh | `usage`, `limit`, `utilization_pct` |
122
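A client-side dispatcher over this topic map can be sketched like so. The topic strings come straight from the table; the return shape is an assumption for illustration, not the package's API:

```typescript
// Classify an incoming MQTT topic into a handler category, extracting the
// thread or agent id from parameterized topics.
function classifyTopic(topic: string): { kind: string; threadId?: string; agentId?: string } {
  if (topic === "coordinator/consultations/new") return { kind: "new_consultation" };
  if (topic === "coordinator/broadcast") return { kind: "broadcast" };
  if (topic === "coordinator/quota/update") return { kind: "quota_update" };
  let m = topic.match(/^coordinator\/consultations\/([^/]+)\/(messages|status|claimed|completed)$/);
  if (m) return { kind: `thread_${m[2]}`, threadId: m[1] };
  m = topic.match(/^coordinator\/agents\/([^/]+)\/status$/);
  if (m) return { kind: "agent_status", agentId: m[1] };
  return { kind: "unknown" };
}
```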
+
123
+ ### Push delivery flow
124
+
125
+ ```
126
+ COORDINATOR BROKER (Aedes) CLIENT
127
+ ─────────── ────────────── ──────
128
+
129
+ announce_work() ──────────► publish subscribe
130
+ coordinator/ ─► event
131
+ consultations/new ─────────► classify topic
132
+ self-msg filter
133
+ ─► handler
134
+ ```
135
+
136
+ Key guarantees:
137
+
138
+ - **Self-filter** — clients drop messages where `payload.agent_id` equals the local agent's id, so agents never wake on their own actions.
139
+ - **Bun compatibility** — when consumed from a Bun-compiled client, a Duplex stream bridges the `mqtt` client to the native WebSocket API (the `ws` package receiver doesn't work under Bun).
140
+ - **Backpressure-free** — messages are small JSON envelopes.
141
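The self-filter is one line of client code — a minimal sketch, assuming the `agent_id` payload field shown in the topic map above:

```typescript
// Drop messages the local agent published itself, so handlers never wake
// on their own actions.
function shouldHandle(payload: { agent_id?: string }, localAgentId: string): boolean {
  return payload.agent_id !== localAgentId;
}
```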
+
142
+ ---
143
+
144
+ ## Impact Scoring
145
+
146
+ Every `announce_work` call scores all online agents across multiple detection layers. The highest matching layer wins.
147
+
148
+ | Layer | Signal | Score | Trigger |
149
+ |-------|--------|------:|---------|
150
+ | 0a | Same file announced in active thread | 100 | `target_files` ∩ their `target_files` |
151
+ | 0b | They modify a file you depend on | 80 | `depends_on_files` ∩ their `target_files` |
152
+ | 0c | You modify a file they depend on | 80 | `target_files` ∩ their `depends_on_files` |
153
+ | 1 | Same file recently edited | 100 | File tracker conflict (last 60s) |
154
+ | 2 | Dependency file recently edited | 80 | `depends_on_files` recently touched |
155
+ | 3 | Same module prefix | 30 | `target_modules` overlap |
156
+
157
+ Scores are categorized into three outcomes:
158
+
159
+ | Score | Category | Action |
160
+ |-------|----------|--------|
161
+ | ≥ 90 | `concerned` | Thread opened, consultation required |
162
+ | 30–89 | `gray_zone` | Thread auto-resolved, introspection recommended |
163
+ | < 30 | `pass` | No conflict, proceed immediately |
164
+
165
+ > **Layer 0 is critical.** Without announced intentions, a two-agent scenario where both work in `src/auth/` would score only 30 (gray zone, auto-resolved). With `announce_work`, the same scenario scores 100 and triggers a full consultation.
166
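The thresholds above reduce to a few lines — a sketch using the scores straight from the two tables (function names are illustrative):

```typescript
type Category = "concerned" | "gray_zone" | "pass";

// Thresholds from the categorization table above.
function categorize(score: number): Category {
  if (score >= 90) return "concerned";
  if (score >= 30) return "gray_zone";
  return "pass";
}

// Layers fire independently; the highest matching layer wins.
function overallScore(layerScores: number[]): number {
  return layerScores.length ? Math.max(...layerScores) : 0;
}
```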
+
167
+ ---
168
+
169
+ ## What's New in v0.5.0
170
+
171
+ Released 2026-05-10.
172
+
173
+ ### A. In-flight tracking (server-anchored)
174
+
175
+ - `WorkingFilesTracker` ([`src/working-files-tracker.ts`](src/working-files-tracker.ts)) with TTL sweeper (default 30 min claim, 60 s sweep tick).
176
+ - `POST /api/working-files/start` and `POST /api/working-files/stop` REST endpoints.
177
+ - essaim hooks: `scripts/pre_track_activity.sh` + extended `track_activity.sh` via PreToolUse / PostToolUse pipeline.
178
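The claim-plus-sweeper mechanics can be sketched in a few lines. The defaults (30 min claim, 60 s tick) come from the bullet above; the class shape is an illustration, not the actual `WorkingFilesTracker` code:

```typescript
// In-memory file-claim map with TTL expiry, swept periodically.
class WorkingFiles {
  private claims = new Map<string, { agentId: string; expiresAt: number }>();
  private ttlMs: number;

  constructor(ttlMs = 30 * 60_000) {
    this.ttlMs = ttlMs;
  }
  start(file: string, agentId: string, now: number): void {
    this.claims.set(file, { agentId, expiresAt: now + this.ttlMs });
  }
  stop(file: string): void {
    this.claims.delete(file);
  }
  // Called on each sweep tick; returns how many stale claims were dropped.
  sweep(now: number): number {
    let removed = 0;
    for (const [file, c] of this.claims) {
      if (c.expiresAt <= now) {
        this.claims.delete(file);
        removed++;
      }
    }
    return removed;
  }
  owner(file: string): string | undefined {
    return this.claims.get(file)?.agentId;
  }
}
```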
+
179
+ ### B. Symbol-aware annotations (tree-sitter)
180
+
181
+ - 15 languages via `optionalDependencies`: TypeScript, TSX, JavaScript, JSX, Python, Go, Rust, Java, C#, C, C++, Ruby, PHP, Kotlin, Swift, Bash.
182
+ - Strategy registry in [`src/tree-sitter-extractor.ts`](src/tree-sitter-extractor.ts) — adding a 16th language is ~5 LOC.
183
+ - `POST /api/file-activity` now accepts an optional `content` field (capped at 256 KB) for server-side symbol extraction.
184
+ - New DB columns: `file_activity.symbols_touched` (JSON) + `content_hash`.
185
+
186
+ ### C. Layer 0.5 annotation — disjoint-symbol enrichment
187
+
188
+ - Same file with disjoint symbols: score stays 100, reason text enriched with `disjoint symbols: you=[X], them=[Y] — verify shared module state`.
189
+ - New optional `target_symbols?: string[]` on `announce_work` (cap 200 elements, max 256 chars each).
190
+
191
+ ### D. Layer 4 — git co-change scoring
192
+
193
+ - [`src/git-cochange-builder.ts`](src/git-cochange-builder.ts): bounded `git log` (max 2000 commits, `--since=7d` default), 5 s timeout, denylist (lockfiles, dist, snapshots), dynamic 40% predictor cap.
194
+ - Score 60 if co-change ratio > 0.5, score 40 if > 0.2; canonical-pair lookup.
195
+ - Background scheduler with 5-min retry on timeout, 30-min refresh on success.
196
+ - Requires `COORDINATOR_REPO_ROOT` to enable + `git` on PATH.
197
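The canonical-pair lookup and thresholds from the bullets above can be sketched like this (the key format and function names are illustrative; the real builder is `src/git-cochange-builder.ts`):

```typescript
// Order-independent key so (a, b) and (b, a) hit the same map entry.
function canonicalPair(a: string, b: string): string {
  return a < b ? `${a}|${b}` : `${b}|${a}`;
}

// Score 60 if co-change ratio > 0.5, 40 if > 0.2, else no signal.
function layer4Score(ratios: Map<string, number>, yourFile: string, theirFile: string): number {
  const ratio = ratios.get(canonicalPair(yourFile, theirFile)) ?? 0;
  if (ratio > 0.5) return 60;
  if (ratio > 0.2) return 40;
  return 0;
}
```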
+
198
+ ### E. Dashboard "Conflict signals" panel
199
+
200
+ - New panel in `dashboard/public/index.html` backed by `GET /api/scoring-stats?since=<dur>`.
201
+ - Populated by `runCommonAnnounceFlow` writing per-firing rows to the `layer_firings` table.
202
+
203
+ ### Hardening & observability
204
+
205
+ - `parseBody` 1 MB cap (HTTP 413; env: `COORDINATOR_MAX_BODY_BYTES`).
206
+ - `PRAGMA user_version = 6` schema marker; daemon refuses to start on a newer DB (downgrade guard).
207
+ - [`src/path-normalize.ts`](src/path-normalize.ts) — symmetric Windows/POSIX path canonicalization.
208
+ - 5 new Prometheus metrics: `mcp_coordinator_working_files_active` (gauge), `mcp_coordinator_working_files_starts_total{result}`, `mcp_coordinator_tree_sitter_parse_failures_total`, `mcp_coordinator_git_cochange_builds_total{outcome}`, `mcp_coordinator_git_cochange_pairs_total`.
209
+ - `/readyz` extended with `tree_sitter` and `git_cochange` blocks (both `optional: true`, non-gating).
210
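A hedged sketch of what symmetric Windows/POSIX canonicalization typically means — lowercase drive letter, forward slashes, collapsed separators. This is an assumption for illustration, not necessarily what `src/path-normalize.ts` does:

```typescript
function normalizePath(p: string): string {
  let out = p.replace(/\\/g, "/");       // backslashes → forward slashes
  out = out.replace(/\/{2,}/g, "/");     // collapse duplicate separators
  const drive = out.match(/^([A-Za-z]):\//);
  if (drive) out = drive[1].toLowerCase() + ":/" + out.slice(3); // c:/...
  return out;
}
```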
+
211
+ ### Bundled v0.4.1 hotfix
212
+
213
+ `/livez`, `/readyz`, and `/metrics` were wired in v0.4.0 but not routed — fixed.
214
+
215
+ ---
216
+
217
+ ## Roadmap
218
+
219
+ ### v0.5.0 (shipped 2026-05-10)
220
+
221
+ Working-files in-flight tracking, tree-sitter symbol annotations across 15 languages, git co-change Layer 4 scoring, dashboard Conflict signals panel, schema downgrade guard, 5 new Prometheus metrics. 392 tests across 35+ files.
222
+
223
+ See [CHANGELOG.md](./CHANGELOG.md) for the full list.
224
+
225
+ ### Next: LLM Reasoner (v0.6 — opt-in, gated)
226
+
227
+ Design lives in `docs/superpowers/specs/2026-05-10-v0.6-semantic-conflict-design.md` section "v0.6.1 LLM Reasoner OPT-IN, gray-zone".
228
+
229
+ - Default OFF — zero impact on users without an Anthropic API key.
230
+ - Activates only on gray-zone scores [30, 89] (~5–10% of announces in current telemetry).
231
+ - `COORDINATOR_REASONER=claude` env gate; `reasoner_cache` table + singleflight dedup.
232
+ - Kill criterion: verdict change < 25% / p50 latency > 100 ms / cost > $10/dev/month.
233
+ - **Implementation plan will be drafted once v0.5.0 has 1 month of gray-zone telemetry to validate ROI.**
234
+
235
+ ### Open items / known issues
236
+
237
+ - v0.5.0 dashboard Conflict signals widget shows aggregated counts only; per-layer outcome breakdown (`auto_resolved`, `consensus`, `timeout`, `cancelled`) currently returns zeros — the `layer_firings` table is not yet joined against the `events` table (decorative, not blocking).
238
+ - CHANGELOG has both an auto-generated `## [0.5.0]` block (from release-please) and a stale manual `## [0.6.0]` heading from the original plan — cosmetic, will be reconciled in a doc-only commit.
239
+
240
+ ---
241
+
242
+ ## MCP Tools
243
+
244
+ 26 tools organized by function. All registered under one HTTP/SSE transport at `/mcp` (and stdio for stdio-mode clients).
245
+
246
+ ### Agent registry
247
+
248
+ | Tool | Description |
249
+ |------|-------------|
250
+ | `register_agent` | Register as online with name and module list |
251
+ | `list_agents` | List all registered online agents |
252
+ | `heartbeat` | Update last-seen and derive activity status |
253
+ | `agent_activity` | Get activity status for all online agents |
254
+ | `wait_for_peers` | Block until N peers online, or timeout (prevents race before first announce) |
255
+
256
+ ### Consultation
257
+
258
+ | Tool | Description |
259
+ |------|-------------|
260
+ | `announce_work` | Open a consultation thread — the main entry point before coding |
261
+ | `post_to_thread` | Post a message (warning, context, question) to an open thread |
262
+ | `propose_resolution` | Submit a resolution proposal for participants to approve |
263
+ | `approve_resolution` | Approve the current resolution proposal |
264
+ | `contest_resolution` | Reject the proposal with a reason — resets to `open` |
265
+ | `close_thread` | Close a thread after work is complete |
266
+ | `cancel_thread` | Cancel a thread (work abandoned or no longer relevant) |
267
+ | `get_thread` | Get a thread with all messages and current status |
268
+ | `get_thread_updates` | Poll for new messages since a timestamp |
269
+ | `list_threads` | List threads, filterable by status or agent |
270
+ | `log_action_summary` | Log a one-liner action summary for the dashboard timeline |
271
+
272
+ ### File tracking
273
+
274
+ | Tool | Description |
275
+ |------|-------------|
276
+ | `hot_files` | List files being edited by multiple agents |
277
+ | `get_session_files` | Get all files edited by an agent in the current session |
278
+ | `check_file_conflict` | Check whether another agent edited a given file recently |
279
+
280
+ ### Dependency map
281
+
282
+ | Tool | Description |
283
+ |------|-------------|
284
+ | `set_dependency_map` | Load a module dependency graph (JSON) |
285
+ | `get_blast_radius` | Calculate which other modules are affected by changes |
286
+ | `get_module_info` | Get dependency and dependent info for a module |
287
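The blast-radius calculation is a transitive walk over reverse dependency edges. A minimal sketch, assuming a module → dependents adjacency map (the real graph lives in `src/dependency-map.ts`):

```typescript
// Collect every module transitively affected when `changed` is modified.
function blastRadius(dependents: Map<string, string[]>, changed: string): string[] {
  const affected = new Set<string>();
  const queue = [changed];
  while (queue.length) {
    const mod = queue.shift()!;
    for (const dep of dependents.get(mod) ?? []) {
      if (!affected.has(dep)) {   // visited-set also guards against cycles
        affected.add(dep);
        queue.push(dep);
      }
    }
  }
  return [...affected].sort();
}
```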
+
288
+ ### MQTT
289
+
290
+ | Tool | Description |
291
+ |------|-------------|
292
+ | `wait_for_message` | Block until a coordination message arrives on the agent's topic |
293
+ | `get_queued_messages` | Drain all queued messages without blocking |
294
+ | `mqtt_publish` | Publish a raw message to any MQTT topic |
295
+
296
+ ### Status
297
+
298
+ | Tool | Description |
299
+ |------|-------------|
300
+ | `coordinator_status` | Full system status: agents, threads, file activity, MQTT, quota |
301
+
302
+ The in-server `introspection` tool returns the full schema for every tool — point any MCP client at it for live discovery.
303
+
304
+ ---
305
+
306
+ ## CLI
307
+
308
+ Two distribution channels:
309
+
310
+ - **npm** — `npm install -g mcp-coordinator`. Requires Node.js 20+.
311
+ - **Single-file binary** — Bun-compiled, no Node required. Download the matching tarball from a [GitHub Release](https://github.com/swoofer/mcp-coordinator/releases).
312
+
313
+ ### Commands
314
+
315
+ | Command | Description |
316
+ |---------|-------------|
317
+ | `mcp-coordinator init [--url <url>] [--write-mcp-config <path>] [--write-claude-md <path>]` | First-time setup — create config dir, default `config.json`, print/write the `.mcp.json` snippet, optionally scaffold a sample `CLAUDE.md` |
318
+ | `mcp-coordinator uninstall [--mcp-config <path>] [--claude-md <path>] [--purge] [--force]` | Remove integrations: drop `coordinator` entry from a `.mcp.json`, strip the coordination section from a `CLAUDE.md`, or `--purge` the `~/.mcp-coordinator/` directory entirely |
319
+ | `mcp-coordinator server start [--port N] [--data-dir PATH] [--daemon]` | Start the coordinator (foreground or daemon) |
320
+ | `mcp-coordinator server stop` | Stop the coordinator |
321
+ | `mcp-coordinator server status` | PID, port, online agents, open threads |
322
+ | `mcp-coordinator server logs [-n N] [-f]` | Tail the daemon log at `~/.mcp-coordinator/logs/server.log` |
323
+ | `mcp-coordinator dashboard` | Open `http://localhost:3100/dashboard` |
324
+ | `mcp-coordinator doctor [--host H] [--port P] [--mqtt-port P]` | Health check: config, server liveness, `/health`, `/mcp` initialize, dashboard, MQTT broker |
325
+ | `mcp-coordinator --version` | Print the installed version |
326
+
327
+ ### Quick start
328
+
329
+ ```bash
330
+ # Start the coordinator (embedded MQTT + dashboard)
331
+ mcp-coordinator server start --daemon
332
+
333
+ # Open the dashboard
334
+ mcp-coordinator dashboard
335
+
336
+ # Stop when done
337
+ mcp-coordinator server stop
338
+ ```
339
+
340
+ ### In-process from your own Node app
341
+
342
+ ```ts
343
+ import { startServer } from "mcp-coordinator";
344
+
345
+ await startServer({
346
+ port: 3100,
347
+ dataDir: "./coordinator-data",
348
+ });
349
+ ```
350
+
351
+ ---
352
+
353
+ ## Standalone use — without an orchestrator
354
+
355
+ You don't need an orchestrator. mcp-coordinator works on its own with any MCP-compatible client — Claude Code, Cursor, Cline, Aider, custom scripts. The two most common setups:
356
+
357
+ ### Solo developer, multiple Claude Code sessions
358
+
359
+ You're running 2-3 Claude Code sessions in parallel on the same repo and want them to see each other's work. One coordinator instance handles all of them.
360
+
361
+ ```bash
362
+ # In one terminal: start the coordinator
363
+ mcp-coordinator server start --daemon
364
+ ```
365
+
366
+ Then add the coordinator to each Claude Code session's `.mcp.json` (located at `~/.claude/.mcp.json` for the global config, or `<your-project>/.mcp.json` for per-project):
367
+
368
+ ```json
369
+ {
370
+ "mcpServers": {
371
+ "coordinator": {
372
+ "type": "http",
373
+ "url": "http://localhost:3100/mcp"
374
+ }
375
+ }
376
+ }
377
+ ```
378
+
379
+ Each Claude session now has access to all 26 coordination tools (`register_agent`, `announce_work`, `post_to_thread`, etc.). Open `mcp-coordinator dashboard` in a browser to watch real-time activity across your sessions.
380
+
381
+ ### Team setup — shared coordinator on LAN
382
+
383
+ One person hosts the coordinator on a shared machine; teammates point their Claude at it.
384
+
385
+ Host:
386
+
387
+ ```bash
388
+ # Bind to all interfaces; default is 127.0.0.1
389
+ COORDINATOR_BIND=0.0.0.0 mcp-coordinator server start --daemon
390
+ ```
391
+
392
+ Each teammate's `.mcp.json` points to the host's IP:
393
+
394
+ ```json
395
+ {
396
+ "mcpServers": {
397
+ "coordinator": {
398
+ "type": "http",
399
+ "url": "http://192.168.1.42:3100/mcp"
400
+ }
401
+ }
402
+ }
403
+ ```
404
+
405
+ For internet-facing or multi-tenant deployments, enable JWT auth (see [Authentication](#authentication)). Each teammate registers via `POST /api/auth/register` with the team's `COORDINATOR_REGISTRATION_SECRET`, gets a Bearer token, and adds it to their `.mcp.json`:
406
+
407
+ ```json
408
+ {
409
+ "mcpServers": {
410
+ "coordinator": {
411
+ "type": "http",
412
+ "url": "https://coordinator.example.com/mcp",
413
+ "headers": { "Authorization": "Bearer <your-token>" }
414
+ }
415
+ }
416
+ }
417
+ ```
418
+
419
+ ### Telling Claude to use the coordinator tools
420
+
421
+ Without a behavior catalog (which is what [essaim](https://github.com/swoofer/essaim) ships), you instruct Claude manually. Easiest path:
422
+
423
+ ```bash
424
+ # In your project root — scaffolds CLAUDE.md with coordinator instructions
425
+ mcp-coordinator init --write-claude-md ~/my-repo --write-mcp-config ~/my-repo
426
+ ```
427
+
428
+ This appends a clearly-marked `mcp-coordinator:coordination-section` block to `~/my-repo/CLAUDE.md` (creating it if absent, replacing the section if it already exists). Combined with `--write-mcp-config`, your project is fully wired in one command.
429
+
430
+ If you'd rather embed the instructions yourself (or you're not using Claude Code), the section reads roughly:
431
+
432
+ > Before modifying any source file, register with the coordinator MCP server:
433
+ >
434
+ > 1. Call `register_agent` with your name and the modules you'll touch
435
+ > 2. Call `announce_work` describing what you'll do, listing target files (and `depends_on_files` if applicable)
436
+ > 3. If a thread is created (consultation triggered), wait for the resolution before writing code
437
+ > 4. After a meaningful change, call `log_action_summary` to update the dashboard timeline
438
+ > 5. If another agent is already working on a file you need to touch, post a question to the thread via `post_to_thread` and wait for their response before proceeding
439
+ >
440
+ > Use the `coordinator_status` tool to see current activity at any time.
441
+
442
+ That's all you need to start coordinating. The dashboard shows live who's doing what; the SQLite database persists threads across sessions; conflicts are detected before code is written.
443
+
444
+ ### Push vs polling — important architectural note
445
+
446
+ Vanilla Claude Code talks to mcp-coordinator over MCP (HTTP/stdio request-response). It **does not subscribe to MQTT**. That means events the coordinator publishes on MQTT (`coordinator/consultations/new`, etc.) are not auto-delivered to a Claude Code session — Claude has to **poll** the coordinator to discover new activity. The polling pattern is:
447
+
448
+ - `announce_work` returns the thread ID immediately if a conflict is detected — that's the most important checkpoint
449
+ - After that, periodic calls to `coordinator_status` / `list_threads` / `get_thread_updates` surface new posts on threads you're a participant in
450
+ - The CLAUDE.md scaffolded by `mcp-coordinator init --write-claude-md` instructs Claude to do exactly this polling
451
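The client-side bookkeeping behind this polling pattern is simple: remember the last-seen timestamp per thread and surface only newer posts from peers. A sketch under assumed field names (`ts`, `agent_id`, `content`), not the server's actual payload schema:

```typescript
interface ThreadMsg {
  ts: number;        // message timestamp
  agent_id: string;  // author
  content: string;
}

// Filter a get_thread_updates-style message list down to what the agent
// hasn't seen yet, skipping its own posts, oldest first.
function newMessages(all: ThreadMsg[], lastSeenTs: number, selfId: string): ThreadMsg[] {
  return all
    .filter((m) => m.ts > lastSeenTs && m.agent_id !== selfId)
    .sort((a, b) => a.ts - b.ts);
}
```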
+
452
+ If you want **real-time push** (every coordination event interrupting Claude between turns instead of waiting for a poll), use [essaim](https://github.com/swoofer/essaim). essaim ships an agent-loop wrapper that subscribes to the MQTT broker and injects events into the turn flow automatically. mcp-coordinator alone supports the polling model — which is sufficient for most use cases (2-3 Claude sessions on a small team) and zero-config to set up.
453
+
454
+ ### End-to-end example: two Claudes coordinating (polling model)
455
+
456
+ Two terminals, same repo, both Claude Code sessions wired to the same local coordinator. Both sessions have a `CLAUDE.md` scaffolded by `mcp-coordinator init --write-claude-md`, which instructs Claude to register, announce, and poll. The conversation below is what each Claude does — the human user just asks each Claude to make a change.
457
+
458
+ ```
TERMINAL 1 (Alice)                        TERMINAL 2 (Bob)

$ claude                                  $ claude
> "Add updated_at to User type in         > "Migrate User schema"
  src/models/user.ts"                       (touches src/models/user.ts)

[Alice's Claude]                          [Bob's Claude]
register_agent(name="Alice", ...)         register_agent(name="Bob", ...)
announce_work(
  target_files: ["src/models/user.ts"]
)
response: { thread_id: null,
  concerned_agents: [] }                  announce_work(
                                            target_files: ["src/models/user.ts",
                                                           "migrations/004.sql"]
                                          )
                                          → response: { thread_id: "T-1",
                                              concerned_agents: ["alice"],
                                              score: 100, layer: "0a" }
                                          [Bob sees the conflict in the response]
                                          get_thread("T-1")
                                          post_to_thread("T-1", type: "context",
                                            content: "full schema migration; can
                                            wait for your field to land first")

[Alice writes the field, then before
next major action the CLAUDE.md says
"poll coordinator_status"]
coordinator_status()
response: shows T-1 with Bob's post
get_thread("T-1")
post_to_thread("T-1", type: "context",
  content: "adding 1 field at line 42,
  no rename. Done in 5 min.")
propose_resolution("T-1",
  content: "Alice's field first,
  Bob runs migration after")

                                          [Bob's CLAUDE.md polling step]
                                          coordinator_status()
                                          shows T-1 in 'resolving' state
                                          get_thread("T-1")
                                          approve_resolution("T-1")

[Alice's next poll]
coordinator_status()
→ T-1 status = 'resolved'
[Alice writes the field]                  [Bob writes the migration]
log_action_summary(...)                   log_action_summary(...)
```

The dashboard at `http://localhost:3100/dashboard/` plays the entire timeline live. `mcp-coordinator server logs -f` (in a third terminal) tails the daemon log if you want to see the protocol-level events. If polling cadence is too coarse and you find Claude missing posts, switch to essaim's agent-loop, which delivers MQTT events automatically.

### Team setup walkthrough — shared coordinator with JWT

Full step-by-step for a team running a coordinator on a shared host with internet-facing or multi-tenant access. Adjust to your network/TLS reality.

**Step 1 (host) — generate secrets**

```bash
# 32+ char shared secret; put in your secrets manager and inject as env vars
JWT_SECRET=$(openssl rand -hex 32)
REGISTRATION_SECRET=$(openssl rand -hex 32)
ADMIN_SECRET=$(openssl rand -hex 32)
```

**Step 2 (host) — start the coordinator with auth enabled**

```bash
COORDINATOR_AUTH_ENABLED=true \
COORDINATOR_JWT_SECRET="$JWT_SECRET" \
COORDINATOR_REGISTRATION_SECRET="$REGISTRATION_SECRET" \
COORDINATOR_ADMIN_SECRET="$ADMIN_SECRET" \
COORDINATOR_BIND=0.0.0.0 \
mcp-coordinator server start --daemon --port 3100
```

(Front the server with TLS via nginx/Caddy/etc. for internet exposure. Local LAN can use plain HTTP.)

**Step 3 (each teammate) — request a token**

```bash
curl -X POST https://coordinator.example.com/api/auth/register \
  -H "Content-Type: application/json" \
  -d '{"agent_name":"alice","registration_secret":"<REGISTRATION_SECRET shared via team channel>"}'
# Response: { "agent_id": "alice-abc123", "token": "eyJ...", "expires_at": "...", "role": "agent" }
```

**Step 4 (each teammate) — wire `.mcp.json`**

```json
{
  "mcpServers": {
    "coordinator": {
      "type": "http",
      "url": "https://coordinator.example.com/mcp",
      "headers": { "Authorization": "Bearer <paste-token-here>" }
    }
  }
}
```

**Step 5 (each teammate) — run `init --write-claude-md` to scaffold project instructions**, OR add the coordination section to their existing `CLAUDE.md`.

**Step 6 (each teammate) — verify**: `mcp-coordinator doctor --host coordinator.example.com --port 443` should show all checks green from any laptop.

**Token rotation**: tokens expire per `COORDINATOR_JWT_EXPIRY` (default 24h). Refresh via `POST /api/auth/refresh` with the current Bearer token. The admin can revoke a specific agent via `POST /api/auth/revoke` (admin token required).
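
Between rotations, a client can check expiry locally instead of waiting for a 401. A sketch that decodes the JWT payload only — no signature verification (the server owns that), and the 5-minute margin is an arbitrary example value:

```typescript
// Decide locally whether a refresh is due, by decoding the JWT payload.
// Illustration only: no signature verification (the server owns that), and
// the 5-minute margin is an arbitrary example value.
function tokenExpiresSoon(token: string, marginSec = 300): boolean {
  const payloadB64 = token.split(".")[1];
  const payload = JSON.parse(
    Buffer.from(payloadB64, "base64url").toString("utf8"),
  );
  // `exp` is seconds since the Unix epoch (RFC 7519)
  return payload.exp - Date.now() / 1000 < marginSec;
}
```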

### Logs and debugging

The daemon writes to `~/.mcp-coordinator/logs/server.log`. Tail it:

```bash
mcp-coordinator server logs        # last 50 lines
mcp-coordinator server logs -n 200 # last 200 lines
mcp-coordinator server logs -f     # follow (Ctrl+C to stop)
```

For a one-shot check that everything is wired up correctly (config valid, server up, MCP responds, dashboard reachable, MQTT accepting connections), use the doctor:

```bash
mcp-coordinator doctor
```

`doctor` exits non-zero if any check fails and prints actionable hints next to each failure. Probe a remote coordinator with `--host` and `--port`:

```bash
mcp-coordinator doctor --host coordinator.example.com --port 443 --mqtt-port 1883
```

Logging level is controlled by `LOG_LEVEL` (`debug`, `info`, `warn`, `error` — default `info`). Set `NODE_ENV=development` for human-readable pretty logs:

```bash
NODE_ENV=development LOG_LEVEL=debug mcp-coordinator server start
```

### Removing the integration (per-project or globally)

Symmetric to `init`, the `uninstall` command undoes what was added without touching anything you wrote yourself.

```bash
# Remove coordinator from a project's .mcp.json AND strip its section from CLAUDE.md
mcp-coordinator uninstall --mcp-config ~/my-repo --claude-md ~/my-repo

# Wipe the global config dir (~/.mcp-coordinator/) entirely — config + data + logs + pid file
mcp-coordinator uninstall --purge         # asks for confirmation
mcp-coordinator uninstall --purge --force # skip the prompt, useful in scripts
```

`--mcp-config <path>` reads `<path>/.mcp.json`, removes only the `coordinator` server entry (other servers untouched), and deletes the file if it ends up empty. `--claude-md <path>` removes only the block delimited by the `mcp-coordinator:coordination-section` sentinels (rendered as HTML comments around the section) — it never touches text you authored. Combine flags as needed; if the resulting `CLAUDE.md` is empty, it's deleted.

To remove the npm package itself: `npm uninstall -g mcp-coordinator`.

### Running multiple coordinators on the same machine

Useful for per-project isolation — every project gets its own ephemeral coordinator with no cross-contamination. Pick distinct ports + data dirs:

```bash
# Project A
PORT=3110 \
COORDINATOR_MQTT_TCP_PORT=11883 \
mcp-coordinator server start --daemon --data-dir ./.mcp-coordinator-A

# Project B (different terminal)
PORT=3120 \
COORDINATOR_MQTT_TCP_PORT=12883 \
mcp-coordinator server start --daemon --data-dir ./.mcp-coordinator-B
```

The default `~/.mcp-coordinator/server.pid` only tracks ONE daemon at a time. For multi-instance runs, pass `--data-dir` explicitly to each instance — the PID file lives next to the data dir, so multiple instances don't fight over the same file. To stop a specific instance, `cd` to its data dir's parent and run `mcp-coordinator server stop` from there, OR `kill $(cat ./.mcp-coordinator-A/../server.pid)`.

In each project's `.mcp.json`, point at the project's coordinator:

```json
{
  "mcpServers": {
    "coordinator": {
      "type": "http",
      "url": "http://localhost:3110/mcp"
    }
  }
}
```

This pattern works well alongside `essaim`, which uses Strategy A (in-process) and starts its own ephemeral coordinator per `essaim run` — there's no port conflict because essaim picks an isolated dir by default.

---

## Anthropic Quota Pre-flight

The coordinator tracks Anthropic workspace quota live and exposes it on MQTT, the dashboard, and the `coordinator_status` MCP tool — so MCP clients can decide whether to abort, throttle, or proceed before launching expensive turns.

- Reads usage from the Anthropic API using the key in the environment.
- Threshold via `MAX_QUOTA_PCT` env var (default `95`).
- Back-off when the usage endpoint itself returns 429.
- Live widget in the dashboard with manual refresh + historical buckets.
- `coordinator/quota/update` MQTT events stream into the timeline by default.

Orchestrators that spawn N agents at once can read `coordinator_status.quota` and abort their run if utilization is over a configured threshold — the [essaim](https://github.com/swoofer/essaim) reference orchestrator does exactly this.
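
That gate is one comparison. A sketch in TypeScript — the `utilization_pct` field name is an assumption for illustration; inspect the actual `coordinator_status.quota` payload for the real shape:

```typescript
// Pre-flight gate for an orchestrator run. The payload field name is an
// assumption for illustration; inspect coordinator_status.quota for the
// real shape.
interface QuotaSnapshot {
  utilization_pct: number; // 0-100
}

// Mirrors the MAX_QUOTA_PCT pre-flight: abort at or above the threshold.
function shouldAbortRun(quota: QuotaSnapshot, maxPct = 95): boolean {
  return quota.utilization_pct >= maxPct;
}
```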

---

## Token Observability

Every MCP tool call and agent turn is logged with token breakdown.

- **Logs** — component logger `tokens` emits `input_tokens`, `output_tokens`, `cache_read`, `cache_creation`, `thinking`, model id, turn index.
- **Dashboard** — live per-agent token gauge, cumulative session total, quota widget.

Aggregating across runs (e.g., `reports/YYYY-MM-DD-<run-id>.md`) is an orchestrator responsibility — the coordinator emits the events, the orchestrator consumes them.

---

## Dashboard

`http://localhost:3100/dashboard` (or `/dashboard` on whichever port the coordinator is bound to).

- **Timeline** — all threads + `quota_update` events with scores and resolution types
- **Agent panel** — online/offline, working/idle/waiting, current file, thread being waited on. Resizable drag handle.
- **Scoring breakdown** — which detection layer triggered each conflict
- **Quota widget** — live utilization %, stacked buckets, manual refresh button
- **Version banner** — server version shown in the header (dynamic, not hardcoded)
- **Consensus metrics** — per session: consensus / timeout / auto-resolved split, token totals

All events arrive via SSE on `/api/events`. No polling.

---

## Agent Activity States

| Status | Indicator | Meaning |
|--------|-----------|---------|
| working | pulsing blue | Actively editing files |
| idle | solid green | Online, no recent activity |
| waiting | pulsing yellow | Blocked on a consultation thread |
| offline | solid red | Disconnected or session ended |

Activity is derived from heartbeats enriched with the current file/thread context from the file tracker.
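
A sketch of that derivation — the thresholds and heartbeat shape here are illustrative assumptions, not the coordinator's actual values:

```typescript
// Derive an activity state from a heartbeat, in the spirit of the table
// above. Thresholds and input shape are illustrative assumptions.
type ActivityState = "working" | "idle" | "waiting" | "offline";

interface Heartbeat {
  ageMs: number; // time since last heartbeat
  editingFile: boolean; // file tracker reports an active edit
  blockedOnThread: boolean; // waiting on a consultation thread
}

function deriveState(hb: Heartbeat, offlineAfterMs = 60_000): ActivityState {
  if (hb.ageMs > offlineAfterMs) return "offline"; // stale heartbeat wins
  if (hb.blockedOnThread) return "waiting"; // blocked beats working
  return hb.editingFile ? "working" : "idle";
}
```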

---

## Configuration

### Local data

```
~/.mcp-coordinator/
├── config.json        # persistent configuration
├── data/
│   └── coordinator.db # SQLite database
├── server.pid         # PID file (when daemonized)
└── logs/
    └── server.log     # daemon logs
```

### config.json

```json
{
  "server": { "port": 3100, "data_dir": "~/.mcp-coordinator/data" },
  "defaults": { "coordinator_url": "http://localhost:3100" }
}
```

Resolution priority (highest to lowest): CLI flag → env var → config.json → default.
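
The precedence chain maps directly onto nullish coalescing. A sketch with a hypothetical `resolvePort` helper (the real loader lives in `cli/config.ts`):

```typescript
// Resolve an option with the documented precedence:
// CLI flag → env var → config.json → built-in default.
// resolvePort is a hypothetical helper for illustration only.
function resolvePort(
  cliFlag: number | undefined,
  envVar: string | undefined,
  configValue: number | undefined,
): number {
  return (
    cliFlag ??
    (envVar !== undefined ? Number(envVar) : undefined) ??
    configValue ??
    3100 // built-in default
  );
}
```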

### Server env vars

| Variable | Default | Description |
|----------|---------|-------------|
| `PORT` | `3100` | HTTP port (also serves MQTT-over-WebSocket on `/mqtt`) |
| `COORDINATOR_DATA_DIR` | `~/.mcp-coordinator/data` | Directory for the SQLite database |
| `COORDINATOR_MQTT_TCP_PORT` | `1883` | TCP port for the embedded broker |
| `COORDINATOR_MQTT_WS_PATH` | `/mqtt` | WebSocket path on the same HTTP port |
| `LOG_LEVEL` | `info` | `debug` / `info` / `warn` / `error` |
| `NODE_ENV` | — | `development` for pretty logs |
| `COORDINATOR_AUTH_ENABLED` | `false` | Enable JWT authentication |
| `COORDINATOR_JWT_SECRET` | — | HMAC signing key (min 32 chars) |
| `COORDINATOR_JWT_EXPIRY` | `24h` | Token lifetime (e.g., `1h`, `7d`) |
| `COORDINATOR_REGISTRATION_SECRET` | — | Shared secret for agent auto-register |
| `COORDINATOR_ADMIN_SECRET` | — | Separate secret for admin token creation |
| `MAX_QUOTA_PCT` | `95` | Pre-flight abort threshold for Anthropic quota |

### Environment variables (v0.5+)

| Variable | Default | Effect |
|---|---|---|
| `COORDINATOR_REPO_ROOT` | (unset in team mode) | Repo root for path-guard, FS fallback, Layer 4 |
| `COORDINATOR_MAX_BODY_BYTES` | `1048576` | parseBody hard cap |
| `COORDINATOR_LAYER4_DENYLIST` | (uses defaults) | Comma-separated globs appended to denylist |
| `COORDINATOR_LAYER4_SINCE_DAYS` | `7` | git log --since window |
| `COORDINATOR_LAYER4_MAX_COMMITS` | `2000` | git log --max-count |
| `COORDINATOR_LAYER4_REFRESH_INTERVAL_MS` | `1800000` | Refresh on success |
| `COORDINATOR_LAYER4_RETRY_MS` | `300000` | Retry on timeout |
| `COORDINATOR_WORKING_FILES_TTL_MIN` | `30` | working_files claim TTL |
| `COORDINATOR_WORKING_FILES_SWEEP_INTERVAL_MS` | `60000` | TTL sweeper tick |

---

## Structured Logging

[Pino](https://getpino.io/) emits JSON per subsystem. Component loggers: `http`, `mcp`, `mqtt`, `consultation`, `conflict`, `auth`, `tokens`, `quota`.

Production (default):

```json
{"level":"info","time":1712345678901,"component":"http","msg":"Server started","port":3100}
```

Dev (`NODE_ENV=development`):

```
[14:21:03.456] INFO (http): Server started
    port: 3100
```

Levels controlled via `LOG_LEVEL`.
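
Because each production line is a single JSON object, ad-hoc filtering needs nothing beyond a JSON parser. A sketch that keeps only one component's lines — `filterByComponent` is illustrative, not part of the CLI:

```typescript
// Filter Pino JSON lines by component, e.g. keep only the `mqtt` logger.
// Non-JSON lines (such as pretty-printed dev output) are skipped.
function filterByComponent(logText: string, component: string): string[] {
  return logText.split("\n").filter((line) => {
    try {
      return JSON.parse(line).component === component;
    } catch {
      return false; // not a JSON log line
    }
  });
}
```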

---

## Authentication

Opt-in JWT (HS256 via [jose](https://github.com/panva/jose)). Set `COORDINATOR_AUTH_ENABLED=true` plus the required secrets to enable.

### Setup

```bash
export COORDINATOR_AUTH_ENABLED=true
export COORDINATOR_JWT_SECRET="your-secret-at-least-32-characters-long"
export COORDINATOR_REGISTRATION_SECRET="team-shared-secret"
export COORDINATOR_ADMIN_SECRET="admin-only-secret"
```

### Agent self-register

```bash
curl -X POST http://localhost:3100/api/auth/register \
  -H "Content-Type: application/json" \
  -d '{"agent_name":"my-agent","registration_secret":"team-shared-secret"}'
# { agent_id, token, expires_at, role }
```

### Refresh

```bash
curl -X POST http://localhost:3100/api/auth/refresh \
  -H "Authorization: Bearer <current-token>"
```

### Revoke (admin)

```bash
curl -X POST http://localhost:3100/api/auth/revoke \
  -H "Authorization: Bearer <admin-token>" \
  -H "Content-Type: application/json" \
  -d '{"agent_id":"agent-to-revoke"}'
```

### Exempt routes

`GET /health`, `POST /api/auth/register`, `POST /api/auth/refresh`, `GET /api/events` (SSE).

---

## Test Results

All four coordination scenarios are validated end-to-end by the test suite:

| Scenario | Layer | Score | Category | Outcome |
|----------|-------|------:|----------|---------|
| S1 — Same file | 0a | 100 | concerned | Thread opened → consensus |
| S2 — Same module | 3 | 30 | gray_zone | Auto-resolved, introspection |
| S3 — Dependency | 0b | 80 | gray_zone | Auto-resolved, introspection |
| S4 — No overlap | — | 0 | pass | Auto-resolved immediately |

**Performance:**

| Component | Time |
|-----------|------|
| Conflict detection (no LLM) | < 5 ms |
| MQTT push delivery | < 50 ms end-to-end |
| Full consultation cycle (S1) | 30–45 s |

---

## Integration patterns

### Any MCP client

Connect to `http://localhost:3100/mcp` (HTTP/SSE) or stdio. The server speaks MCP 2024-11-05.

### Custom orchestrator

Spawn agents that connect to the MQTT broker and register via the MCP `register_agent` tool. The orchestrator decides spawn count, lifecycle, and quota gating; the coordinator handles the protocol. See [essaim](https://github.com/swoofer/essaim) for a reference implementation, or write your own.

### Reference catalog of coordinator-aware behaviors

The behaviors that make agents announce-before-write, resolve conflicts, and participate in work-stealing are YAML configs assembled by [@swoofer/promptweave](https://github.com/swoofer/promptweave). See [essaim's behaviors](https://github.com/swoofer/essaim/tree/main/behaviors) for a curated catalog.

---

## Development

```bash
# Tests (392 passing across 35+ files)
npm test
npm run test:watch

# Dev coordinator (tsx, hot reload)
npm run dev       # HTTP / SSE on port 3100
npm run dev:stdio # stdio mode

# CLI in dev
npm run cli -- server start
npm run cli -- dashboard

# TypeScript build → dist/
npm run build

# Standalone binary (requires Bun)
bun build --compile cli/index.ts --outfile bin/mcp-coordinator
```

### Project structure

```
src/                       # Coordinator (npm package surface)
  serve-http.ts            # HTTP/SSE/MCP server entry
  server-setup.ts          # 26 MCP tool registrations
  impact-scorer.ts         # multi-layer conflict detection
  consultation.ts          # Thread lifecycle
  agent-registry.ts        # Online agents
  file-tracker.ts          # File edit history
  dependency-map.ts        # Module graph
  agent-activity.ts        # working/idle/waiting/offline
  mqtt-broker.ts           # Embedded Aedes (TCP + WS)
  mqtt-bridge.ts           # Coordinator → broker fanout
  quota/                   # Anthropic quota pre-flight + refresh
  auth.ts                  # Optional JWT
  path-normalize.ts        # Symmetric Windows/POSIX path canonicalization
  working-files-tracker.ts # In-flight claim tracking + TTL sweeper
  git-cochange-builder.ts  # Layer 4 co-change scorer (bounded git log)
  tree-sitter-extractor.ts # Symbol-aware annotations (15 languages)
  http/handle-health.ts    # /livez, /readyz, /metrics routing
  index.ts                 # Stdio entry + programmatic re-exports

cli/                       # CLI binary (mcp-coordinator)
  index.ts                 # Entry point
  server/                  # start / stop / status
  dashboard.ts             # Open dashboard URL
  config.ts                # Config loader
  version.ts               # package.json version helper

tests/unit/                # Vitest — 392 tests, 35+ files
dashboard/public/          # Single-file web dashboard
```

---

## Related projects

- **[@swoofer/promptweave](https://github.com/swoofer/promptweave)** — YAML composer for assembling agent prompts, hooks, and MCP configs. Use it with mcp-coordinator-aware behaviors from essaim.
- **[essaim](https://github.com/swoofer/essaim)** — end-to-end orchestrator that spawns N coordinated agents using `@swoofer/promptweave` + `mcp-coordinator`. Ships the reference catalog of coordinator-aware behaviors.

---

## Support

Solo maintainer. If this project saves you time, consider supporting development:

- [GitHub Sponsors](https://github.com/sponsors/swoofer)
- [Buy Me A Coffee](https://buymeacoffee.com/swoofer)

A star on the repo also helps surface the project to other developers.

---

## License

MIT