switchroom 0.5.0 → 0.7.9
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +142 -121
- package/bin/autoaccept.exp +29 -6
- package/dist/agent-scheduler/index.js +12261 -0
- package/dist/cli/autoaccept-poll.js +10 -0
- package/dist/cli/switchroom.js +27250 -25324
- package/dist/vault/approvals/kernel-server.js +12709 -0
- package/dist/vault/broker/server.js +15724 -0
- package/package.json +4 -3
- package/profiles/_base/start.sh.hbs +133 -0
- package/profiles/_shared/telegram-style.md.hbs +3 -3
- package/profiles/default/CLAUDE.md +3 -3
- package/profiles/default/CLAUDE.md.hbs +2 -2
- package/profiles/default/workspace/CLAUDE.md.hbs +9 -0
- package/skills/docx/VENDORED.md +1 -1
- package/skills/mcp-builder/VENDORED.md +1 -1
- package/skills/pdf/VENDORED.md +1 -1
- package/skills/pptx/VENDORED.md +1 -1
- package/skills/skill-creator/VENDORED.md +1 -1
- package/skills/switchroom-architecture/SKILL.md +8 -7
- package/skills/switchroom-cli/SKILL.md +23 -15
- package/skills/switchroom-health/SKILL.md +7 -7
- package/skills/switchroom-install/SKILL.md +36 -39
- package/skills/switchroom-manage/SKILL.md +4 -4
- package/skills/switchroom-status/SKILL.md +1 -1
- package/skills/webapp-testing/VENDORED.md +1 -1
- package/skills/xlsx/VENDORED.md +1 -1
- package/telegram-plugin/admin-commands/dispatch.test.ts +119 -1
- package/telegram-plugin/admin-commands/index.ts +71 -0
- package/telegram-plugin/ask-user.ts +1 -0
- package/telegram-plugin/card-event-log.ts +138 -0
- package/telegram-plugin/dist/bridge/bridge.js +178 -31
- package/telegram-plugin/dist/foreman/foreman.js +6875 -6526
- package/telegram-plugin/dist/gateway/gateway.js +13862 -11834
- package/telegram-plugin/dist/server.js +202 -40
- package/telegram-plugin/fleet-state.ts +25 -10
- package/telegram-plugin/foreman/foreman.ts +38 -3
- package/telegram-plugin/gateway/approval-callback.ts +126 -0
- package/telegram-plugin/gateway/approval-card.test.ts +90 -0
- package/telegram-plugin/gateway/approval-card.ts +127 -0
- package/telegram-plugin/gateway/approvals-commands.ts +126 -0
- package/telegram-plugin/gateway/boot-card.ts +31 -6
- package/telegram-plugin/gateway/boot-probes.ts +510 -72
- package/telegram-plugin/gateway/gateway.ts +822 -94
- package/telegram-plugin/gateway/ipc-protocol.ts +34 -1
- package/telegram-plugin/gateway/ipc-server.ts +35 -0
- package/telegram-plugin/gateway/startup-mutex.ts +110 -2
- package/telegram-plugin/hooks/hooks.json +19 -0
- package/telegram-plugin/hooks/tool-label-pretool.mjs +216 -0
- package/telegram-plugin/hooks/tool-label-stop.mjs +63 -0
- package/telegram-plugin/package.json +4 -1
- package/telegram-plugin/plugin-logger.ts +20 -1
- package/telegram-plugin/progress-card-driver.ts +202 -13
- package/telegram-plugin/progress-card.ts +2 -2
- package/telegram-plugin/quota-check.ts +1 -0
- package/telegram-plugin/registry/subagents-schema.ts +37 -0
- package/telegram-plugin/registry/subagents.test.ts +64 -0
- package/telegram-plugin/session-tail.ts +58 -5
- package/telegram-plugin/shared/bot-runtime.ts +48 -2
- package/telegram-plugin/subagent-watcher.ts +139 -7
- package/telegram-plugin/tests/_progress-card-harness.ts +4 -0
- package/telegram-plugin/tests/bg-agent-progress-card-757.test.ts +201 -0
- package/telegram-plugin/tests/boot-card-probe-target.test.ts +10 -34
- package/telegram-plugin/tests/boot-card-render.test.ts +6 -5
- package/telegram-plugin/tests/boot-probes.test.ts +564 -0
- package/telegram-plugin/tests/card-event-log.test.ts +145 -0
- package/telegram-plugin/tests/gateway-startup-mutex.test.ts +102 -0
- package/telegram-plugin/tests/ipc-server-validate-inject-inbound.test.ts +134 -0
- package/telegram-plugin/tests/progress-card-delay-842.test.ts +160 -0
- package/telegram-plugin/tests/quota-check.test.ts +37 -1
- package/telegram-plugin/tests/subagent-registry-bugs.test.ts +5 -0
- package/telegram-plugin/tests/subagent-watcher-stall-notification.test.ts +104 -1
- package/telegram-plugin/tests/subagent-watcher.test.ts +5 -0
- package/telegram-plugin/tests/tool-label-sidecar.test.ts +114 -0
- package/telegram-plugin/tests/two-zone-bg-done-when-all-terminal.test.ts +5 -3
- package/telegram-plugin/tests/two-zone-card-header-phases.test.ts +10 -0
- package/telegram-plugin/tests/two-zone-snapshot-extras.test.ts +58 -14
- package/telegram-plugin/tests/welcome-text.test.ts +57 -0
- package/telegram-plugin/tool-label-sidecar.ts +140 -0
- package/telegram-plugin/tool-labels.ts +55 -0
- package/telegram-plugin/two-zone-card.ts +27 -7
- package/telegram-plugin/uat/SETUP.md +160 -0
- package/telegram-plugin/uat/assertions.ts +140 -0
- package/telegram-plugin/uat/driver.ts +174 -0
- package/telegram-plugin/uat/harness.ts +161 -0
- package/telegram-plugin/uat/login.ts +134 -0
- package/telegram-plugin/uat/port-allocator.ts +71 -0
- package/telegram-plugin/uat/scenarios/smoke-clerk-reply.test.ts +61 -0
- package/telegram-plugin/welcome-text.ts +44 -2
- package/bin/bridge-watchdog.sh +0 -967
package/package.json
CHANGED

@@ -1,7 +1,7 @@
 {
   "name": "switchroom",
-  "version": "0.
-  "description": "Run Claude Code 24/7 on your Claude Pro/Max subscription over Telegram. Open-source alternative to OpenClaw and NanoClaw — no API keys
+  "version": "0.7.9",
+  "description": "Run Claude Code 24/7 on your Claude Pro/Max subscription over Telegram. Open-source alternative to OpenClaw and NanoClaw — no API keys.",
   "type": "module",
   "bin": {
     "switchroom": "./dist/cli/switchroom.js"
@@ -19,9 +19,10 @@
   "scripts": {
     "dev": "bun bin/switchroom.ts",
     "build": "node scripts/build.mjs",
+    "build:cli": "node scripts/build.mjs && bun build --compile --target=bun-linux-x64 --minify bin/switchroom.ts --outfile switchroom-linux-amd64",
     "test": "vitest run && bun test telegram-plugin/tests/history.test.ts telegram-plugin/tests/ipc-server-client.test.ts telegram-plugin/tests/ipc-server-race.test.ts telegram-plugin/tests/gateway-bridge.test.ts telegram-plugin/tests/gateway-startup-mutex.test.ts telegram-plugin/tests/gateway-clean-shutdown-marker.test.ts telegram-plugin/tests/foreman-state.test.ts telegram-plugin/tests/boot-card-dedupe.test.ts telegram-plugin/tests/boot-card-reason.test.ts telegram-plugin/tests/progress-update.test.ts telegram-plugin/tests/quota-cache.test.ts telegram-plugin/tests/silent-reply-guard.test.ts telegram-plugin/tests/unhandled-rejection-policy.test.ts telegram-plugin/tests/registry-turns.test.ts telegram-plugin/registry/subagents.test.ts telegram-plugin/tests/turns-writer.test.ts telegram-plugin/registry/api-registry.test.ts telegram-plugin/registry/turns-schema.test.ts telegram-plugin/tests/idle-footer-wiring.test.ts telegram-plugin/tests/subagent-tracker-hooks.test.ts telegram-plugin/tests/resolve-calling-subagent.test.ts telegram-plugin/tests/gateway-update-placeholder-dispatch.test.ts",
     "test:vitest": "vitest run",
-    "test:bun": "bun test src/vault/grants.test.ts src/vault/broker/server-grants.test.ts src/vault/broker/client-token.test.ts src/vault/broker/server-unlock.test.ts src/vault/broker/auto-unlock.test.ts tests/vault-broker-passphrase.test.ts src/cli/vault-get-broker.test.ts src/vault/resolver-via-broker.test.ts src/vault/broker/scope.test.ts src/vault/broker/server.test.ts telegram-plugin/tests/boot-probes.test.ts telegram-plugin/tests/setup-state.test.ts telegram-plugin/tests/history.test.ts telegram-plugin/tests/ipc-server-client.test.ts telegram-plugin/tests/ipc-server-race.test.ts telegram-plugin/tests/gateway-bridge.test.ts telegram-plugin/tests/gateway-startup-mutex.test.ts telegram-plugin/tests/gateway-clean-shutdown-marker.test.ts telegram-plugin/tests/foreman-state.test.ts telegram-plugin/tests/boot-card-dedupe.test.ts telegram-plugin/tests/boot-card-reason.test.ts telegram-plugin/tests/progress-update.test.ts telegram-plugin/tests/quota-cache.test.ts telegram-plugin/tests/silent-reply-guard.test.ts telegram-plugin/tests/unhandled-rejection-policy.test.ts telegram-plugin/tests/registry-turns.test.ts telegram-plugin/registry/subagents.test.ts telegram-plugin/tests/turns-writer.test.ts telegram-plugin/tests/resolve-calling-subagent.test.ts telegram-plugin/tests/gateway-update-placeholder-dispatch.test.ts",
+    "test:bun": "bun test src/watchdog/state.test.ts src/watchdog/policy.test.ts src/vault/grants.test.ts src/vault/broker/server-grants.test.ts src/vault/broker/client-token.test.ts src/vault/broker/server-unlock.test.ts src/vault/broker/auto-unlock.test.ts tests/vault-broker-passphrase.test.ts src/cli/vault-get-broker.test.ts src/vault/resolver-via-broker.test.ts src/vault/broker/scope.test.ts src/vault/broker/server.test.ts src/drive/disconnect.test.ts src/drive/grants.test.ts src/drive/oauth.test.ts src/drive/onboarding.test.ts src/drive/reconciler.test.ts src/drive/vault-slots.test.ts src/drive/wrapper.test.ts src/vault/approvals/kernel.test.ts src/vault/broker/server-approvals.test.ts telegram-plugin/tests/boot-probes.test.ts telegram-plugin/tests/setup-state.test.ts telegram-plugin/tests/history.test.ts telegram-plugin/tests/ipc-server-client.test.ts telegram-plugin/tests/ipc-server-race.test.ts telegram-plugin/tests/gateway-bridge.test.ts telegram-plugin/tests/gateway-startup-mutex.test.ts telegram-plugin/tests/gateway-clean-shutdown-marker.test.ts telegram-plugin/tests/foreman-state.test.ts telegram-plugin/tests/boot-card-dedupe.test.ts telegram-plugin/tests/boot-card-reason.test.ts telegram-plugin/tests/progress-update.test.ts telegram-plugin/tests/quota-cache.test.ts telegram-plugin/tests/silent-reply-guard.test.ts telegram-plugin/tests/unhandled-rejection-policy.test.ts telegram-plugin/tests/registry-turns.test.ts telegram-plugin/registry/subagents.test.ts telegram-plugin/tests/turns-writer.test.ts telegram-plugin/tests/resolve-calling-subagent.test.ts telegram-plugin/tests/gateway-update-placeholder-dispatch.test.ts",
     "test:watch": "vitest",
     "lint": "tsc --noEmit && node scripts/check-plugin-references.mjs",
     "lint:tsc": "tsc --noEmit",
package/profiles/_base/start.sh.hbs
CHANGED

@@ -1,7 +1,140 @@
 #!/bin/bash
+# --- Docker-mode tmux supervisor (#793 §2 / v0.7.5) ---
+# Under v0.6 systemd the unit's ExecStart is `tmux new-session -d ...
+# bash -l start.sh`, ExecStartPost spawns autoaccept-poll on the host,
+# and a sibling unit `switchroom-<name>-gateway.service` runs the
+# telegram-plugin gateway daemon — three pieces sitting OUTSIDE the
+# agent process. Under v0.7 docker the container's CMD is start.sh
+# directly under tini, with no tmux wrapper, no host-side
+# ExecStartPost, and no sibling gateway unit. Without this preamble
+# the first-run dev-channels acknowledgement prompt blocks claude
+# forever (no autoaccept), `switchroom agent attach` fails (no tmux
+# server with the expected socket name), and the telegram MCP sidecar
+# exits at boot with "no gateway socket; check `systemctl --user
+# status switchroom-telegram-gateway`" because the gateway daemon
+# isn't running anywhere.
+#
+# Fix: when we detect docker mode AND we're not already inside the
+# inner tmux pane (the SWITCHROOM_DOCKER_TMUX_INNER marker), spawn
+# the gateway daemon AND autoaccept-poll as supervised sidecars then
+# re-exec into tmux with this same script as the inner command.
+# Inside tmux the marker is set, so the preamble is a no-op and the
+# rest of the script runs normally.
+#
+# Socket name `switchroom-<agent>` and session name `<agent>` match
+# what `src/agents/autoaccept.ts:151` and
+# `src/agents/lifecycle.ts:attachAgent` expect — the contract is the
+# same one v0.6 systemd has always honored, just enforced inside the
+# container instead of by the host's user systemd manager.
+if [ "$SWITCHROOM_RUNTIME" = "docker" ] && [ -z "$SWITCHROOM_DOCKER_TMUX_INNER" ]; then
+  # Hoist TELEGRAM_STATE_DIR up here so the gateway daemon (forked
+  # below) finds gateway.sock / gateway.pid.json / history.db at the
+  # same path the rest of start.sh + the MCP sidecar expects.
+  export TELEGRAM_STATE_DIR="{{agentDir}}/telegram"
+
+  # Tiny in-process supervisor: runs cmd in a respawn loop. Caps at
+  # 10 restarts in 60s before giving up — protects against tight
+  # crash loops that would otherwise burn CPU and obscure the root
+  # cause in logs. The sidecar's own structured logging is written
+  # directly to its log file; this wrapper only handles process
+  # lifecycle. Ampersand-backgrounded by callers below.
+  _switchroom_supervise() {
+    local _name="$1"; local _logfile="$2"; shift 2
+    local _restarts=0
+    local _window_start=$SECONDS
+    while true; do
+      "$@" >> "$_logfile" 2>&1
+      local _exit=$?
+      local _now=$SECONDS
+      if [ $((_now - _window_start)) -ge 60 ]; then
+        _restarts=0
+        _window_start=$_now
+      fi
+      _restarts=$((_restarts + 1))
+      echo "[supervise] $_name exited (status=$_exit, restart=$_restarts in $((_now - _window_start))s window)" >> "$_logfile"
+      if [ $_restarts -ge 10 ]; then
+        echo "[supervise] $_name hit 10 restarts in <60s — giving up" >> "$_logfile"
+        return 1
+      fi
+      sleep 1
+    done
+  }
+
+  # 1) Gateway daemon — the long-running Telegram bot client.
+  #    Polls Telegram, writes gateway.sock for the in-claude MCP
+  #    sidecar to bridge through. Mirrors the v0.6 sibling
+  #    switchroom-<name>-gateway.service unit. Talks to the broker
+  #    over SWITCHROOM_VAULT_BROKER_SOCK (set by compose) for the bot
+  #    token. Failure modes: vault locked → gateway boots, fails to
+  #    fetch token, exits non-zero, supervisor respawns; bot token
+  #    invalid → 401 from Telegram, gateway exits, same loop. The
+  #    cap avoids an infinite vault-locked respawn storm.
+  _gateway_bundle=/opt/switchroom/telegram-plugin/dist/gateway/gateway.js
+  if [ -f "$_gateway_bundle" ] && command -v bun >/dev/null 2>&1; then
+    _switchroom_supervise gateway /var/log/switchroom/gateway-supervisor.log \
+      bun "$_gateway_bundle" &
+  fi
+
+  # 2) autoaccept-poll — first-run TUI prompt dispatcher. Single-shot
+  #    by design (exits cleanly after idle-timeout once prompts have
+  #    fired); supervisor's restart cap means a flaky autoaccept won't
+  #    masquerade as a tight loop.
+  if [ -f /opt/switchroom/autoaccept-poll.js ] && command -v bun >/dev/null 2>&1; then
+    _switchroom_supervise autoaccept /var/log/switchroom/autoaccept.log \
+      bun /opt/switchroom/autoaccept-poll.js "{{name}}" &
+  fi
+
+  # 3) agent-scheduler (cron-fold-in cutover, default-on since Phase 4).
+  #    Long-running. The singleton switchroom-cron container is gone;
+  #    every agent runs cron in-container as a sibling of the gateway,
+  #    delivering fires through the SAME inbound path as Telegram
+  #    messages (synthesized turns tagged meta.source="cron"). The
+  #    bundle connects to the gateway socket above, so the gateway
+  #    must be up before fires can deliver; the supervisor's respawn
+  #    loop handles the early-boot race naturally.
+  #
+  #    Kill switch: an operator can opt OUT by setting
+  #    SWITCHROOM_INLINE_SCHEDULER=0 at the container level (compose
+  #    env or `docker run -e ...`). Default behaviour is enabled —
+  #    we used to gate on `=1`, the gate is now `!=0`.
+  if [ "$SWITCHROOM_INLINE_SCHEDULER" != "0" ] \
+    && [ -f /opt/switchroom/agent-scheduler/index.js ] \
+    && command -v bun >/dev/null 2>&1; then
+    _switchroom_supervise agent-scheduler /var/log/switchroom/agent-scheduler.log \
+      bun /opt/switchroom/agent-scheduler/index.js &
+  fi
+
+  export SWITCHROOM_DOCKER_TMUX_INNER=1
+  exec tmux -L "switchroom-{{name}}" \
+    new-session -A -s "{{name}}" -x 400 -y 50 \
+    bash -l "$0"
+fi
+
+{{#if hostHomeQ}}
+# Host ~/.switchroom symlink (#910). Container HOME=/state/agent/home,
+# but operator yaml prompts (cron, hooks, ad-hoc tool calls) widely use
+# ~/.switchroom/skills/..., ~/.switchroom/credentials/..., and
+# ~/.switchroom/agents/<self>/... — those expand against $HOME inside the
+# container. Bind mounts land at the host's absolute path
+# (/home/<user>/.switchroom/...), not under HOME. Symlinking
+# $HOME/.switchroom → <host-home>/.switchroom makes tilde paths resolve
+# to the bind-mounted location. Idempotent: ln -sfn refreshes the link
+# without following an existing symlink. Guard refuses to clobber a
+# real directory at $HOME/.switchroom.
+if [ ! -e "$HOME/.switchroom" ] || [ -L "$HOME/.switchroom" ]; then
+  ln -sfn {{{hostHomeQ}}}/.switchroom "$HOME/.switchroom" 2>/dev/null || true
+fi
+{{/if}}
+
 export NVM_DIR="$HOME/.nvm"
 [ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"
 export PATH="$HOME/.bun/bin:$PATH"
+# Layer 1 persistent-HOME PATH additions: user-space binary install dirs.
+# `pip install --user`, `npm install -g` (with NPM_CONFIG_PREFIX set in
+# compose.ts), `cargo install --root $HOME`, and manual `~/bin` drops
+# all land in these paths; survival across container restart is via the
+# /state/agent bind mount (HOME=/state/agent/home).
+export PATH="$HOME/.local/bin:$HOME/bin:$HOME/.npm-global/bin:$PATH"
 export CLAUDE_CONFIG_DIR="{{agentDir}}/.claude"
 unset CLAUDE_CODE_OAUTH_TOKEN
 if [ -f "$CLAUDE_CONFIG_DIR/.oauth-token" ]; then
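The restart-cap pattern in `_switchroom_supervise` above is worth seeing in isolation. A minimal POSIX-sh sketch of the same logic (names, the configurable sleep knob, and the demo log file are illustrative, not the shipped script; the cap and window values mirror the ones above):

```shell
# Sketch of the respawn-cap pattern: rerun "$@" until it has failed
# _max times inside one _window-second span, then give up with status 1.
supervise_sketch() {
  _name="$1"; _logfile="$2"; shift 2
  _max=10 _window=60 _restarts=0 _window_start=$(date +%s)
  while true; do
    "$@" >>"$_logfile" 2>&1
    _status=$?
    _now=$(date +%s)
    # A quiet window forgives earlier crashes and restarts the count.
    if [ $((_now - _window_start)) -ge "$_window" ]; then
      _restarts=0; _window_start=$_now
    fi
    _restarts=$((_restarts + 1))
    echo "[supervise] $_name exited (status=$_status, restart=$_restarts)" >>"$_logfile"
    if [ "$_restarts" -ge "$_max" ]; then
      echo "[supervise] $_name hit $_max restarts - giving up" >>"$_logfile"
      return 1
    fi
    sleep "${SUPERVISE_SLEEP:-1}"
  done
}

# A command that always fails trips the cap after exactly 10 attempts
# (SUPERVISE_SLEEP=0 so the demo crash loop finishes instantly).
demo_log=$(mktemp)
SUPERVISE_SLEEP=0 supervise_sketch demo "$demo_log" false || echo "gave up (status $?)"
```

The log ends up with ten "exited" lines plus the "giving up" line, which is exactly the signature the comment above says to look for when diagnosing a tight crash loop.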
package/profiles/default/CLAUDE.md
CHANGED

@@ -73,7 +73,7 @@ If `SWITCHROOM_PENDING_TURN` is unset or empty, do nothing special — the previ
 
 **When stickers / GIFs land badly**: in lieu of an actual answer, decorating routine acknowledgements ("got it 👍 [+sticker]"), peppering a long thread, or any time the user is task-focused. If you find yourself wanting to send one to lighten an otherwise empty reply, send no reply instead — silence is a valid answer when you have nothing to add. Two stickers in a row is always wrong.
 
-**`!` interrupt marker.** The gateway treats a Telegram message starting with `!` (single bang, not `!!` or `!!!`) as a deliberate interrupt: SIGINT to the active turn, strip the `!`, deliver the rest as a fresh turn. Under tmux-default, the SIGINT is delivered via `tmux send-keys C-c` to whatever has focus in the agent's pane (typically the claude REPL, but if claude has spawned a child Bash for a tool call, the child gets the C-c — which usually matches operator intent); a `systemctl kill --signal=INT`
+**`!` interrupt marker.** The gateway treats a Telegram message starting with `!` (single bang, not `!!` or `!!!`) as a deliberate interrupt: SIGINT to the active turn, strip the `!`, deliver the rest as a fresh turn. Under tmux-default, the SIGINT is delivered via `tmux send-keys C-c` to whatever has focus in the agent's pane (typically the claude REPL, but if claude has spawned a child Bash for a tool call, the child gets the C-c — which usually matches operator intent); a cgroup-wide kill fallback (legacy systemd: `systemctl kill --signal=INT`) fires only if send-keys fails. If the user sends `! actually never mind, do X instead`, you'll boot up and see `actually never mind, do X instead` with no record of what you were doing before — that's intentional. **If a user asks how to stop you mid-turn, tell them: "Start your message with `!` — it interrupts whatever I'm doing and treats the rest as a fresh request."** Doubled `!!` (typo / emphasis) reaches you verbatim. Empty `!` gets a "Send your replacement instruction now" reply from the gateway and never reaches you. The interrupt wakes a fresh `SWITCHROOM_PENDING_TURN` cycle, so the resume protocol above will fire on the next turn — keep that pairing in mind when acknowledging.
 
 **Wake audit — every fresh boot, check what you owe before responding.** When `start.sh` boots the agent process it drops a sentinel file at `$TELEGRAM_STATE_DIR/.wake-audit-pending`. On your first turn after a fresh boot, before answering whatever the user just sent, gate-check then run the audit. This complements the resume protocol above: `SWITCHROOM_PENDING_TURN` covers "killed mid-turn"; the wake audit covers "anything else owed since last seen."
 
@@ -123,8 +123,8 @@ The marker's mtime defines "audit complete for this conversation up to now" —
 **"Why did you restart?" — read the audit trail, don't guess.** The `SWITCHROOM_PENDING_*` env vars are one-shot (cleared by start.sh on first read), so by the time a user asks "why did you restart?" they're long gone. Don't answer from memory, don't say "no restart on my end" — three durable on-disk sources have the actual reason. Check them in this order:
 
 1. **`$TELEGRAM_STATE_DIR/clean-shutdown.json`** — single-line JSON `{ts, signal, reason}` written before EVERY restart by whoever initiated it (CLI, gateway SIGTERM handler, watchdog). Fastest answer for "what was THIS boot's reason." Example: `cat "$TELEGRAM_STATE_DIR/clean-shutdown.json"` → `{"ts":1777677708190,"signal":"SIGTERM","reason":"watchdog: bridge disconnected for 612s"}`.
-2.
-3.
+2. **Container/unit history** — under v0.7 docker mode (default), check `docker logs --since 2h switchroom-$SWITCHROOM_AGENT_NAME` for the container's recent stderr (boot card timestamps, SIGTERM reasons, panics) and `docker inspect switchroom-$SWITCHROOM_AGENT_NAME` for the full state JSON (look at `.State.StartedAt` for the last start time and `.State.RestartCount` for cumulative restarts). Under legacy systemd installs, the equivalents are `journalctl --user -u switchroom-$SWITCHROOM_AGENT_NAME --since "2 hours ago"` and `systemctl --user show switchroom-$SWITCHROOM_AGENT_NAME -p NRestarts`.
+3. **Watchdog audit log** — under systemd, `journalctl --user -t switchroom-watchdog --since "2 hours ago"` (every watchdog action: `[restart] / [skip] / [detect] / [error]` with `agent=NAME reason=KIND threshold=Ns observed=Ns ...`). Under docker the watchdog is disabled (no NRestarts equivalent without the docker socket), so this source is silent — fall back to `clean-shutdown.json` plus the container logs above.
 
 Quote the reason field verbatim when answering — don't paraphrase. If `clean-shutdown.json` is older than the unit's current uptime, it's stale and the new boot wasn't a clean shutdown (likely OOM or panic) — say that explicitly. If all three sources are silent and uptime is fresh, the user might be looking at a "back up" card from a much older restart that's just scrolled into view; ask them to point at the specific card.
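The "quote the reason field verbatim" step is a one-line extraction, since the record is single-line JSON. A sketch, using a synthetic `clean-shutdown.json` in a temp dir (real agents read `$TELEGRAM_STATE_DIR/clean-shutdown.json`; the sample record is the one from the example above, and the jq-free sed capture assumes that single-line shape):

```shell
# Extract fields of the single-line clean-shutdown.json record so the
# `reason` can be quoted back to the user verbatim.
state_dir=$(mktemp -d)
cat > "$state_dir/clean-shutdown.json" <<'EOF'
{"ts":1777677708190,"signal":"SIGTERM","reason":"watchdog: bridge disconnected for 612s"}
EOF

# No jq dependency: the record is one line, so a sed capture suffices.
json_field() { sed -n "s/.*\"$1\":\"\([^\"]*\)\".*/\1/p" "$2"; }

reason=$(json_field reason "$state_dir/clean-shutdown.json")
signal=$(json_field signal "$state_dir/clean-shutdown.json")
echo "last shutdown: signal=$signal reason=\"$reason\""
```

Printing the captured field directly is what keeps the answer verbatim rather than paraphrased.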
package/profiles/default/CLAUDE.md.hbs
CHANGED

@@ -95,7 +95,7 @@ If `SWITCHROOM_PENDING_TURN` is unset or empty, do nothing special — the previ
 
 **When stickers / GIFs land badly**: in lieu of an actual answer, decorating routine acknowledgements ("got it 👍 [+sticker]"), peppering a long thread, or any time the user is task-focused. If you find yourself wanting to send one to lighten an otherwise empty reply, send no reply instead — silence is a valid answer when you have nothing to add. Two stickers in a row is always wrong.
 
-**`!` interrupt marker.** The gateway treats a Telegram message starting with `!` (single bang, not `!!` or `!!!`) as a deliberate interrupt: SIGINT to the active turn, strip the `!`, deliver the rest as a fresh turn. Under tmux-default, the SIGINT is delivered via `tmux send-keys C-c` to whatever has focus in the agent's pane (typically the claude REPL, but if claude has spawned a child Bash for a tool call, the child gets the C-c — which usually matches operator intent); a `systemctl kill --signal=INT`
+**`!` interrupt marker.** The gateway treats a Telegram message starting with `!` (single bang, not `!!` or `!!!`) as a deliberate interrupt: SIGINT to the active turn, strip the `!`, deliver the rest as a fresh turn. Under tmux-default, the SIGINT is delivered via `tmux send-keys C-c` to whatever has focus in the agent's pane (typically the claude REPL, but if claude has spawned a child Bash for a tool call, the child gets the C-c — which usually matches operator intent); a cgroup-wide kill fallback (legacy systemd: `systemctl kill --signal=INT`) fires only if send-keys fails. If the user sends `! actually never mind, do X instead`, you'll boot up and see `actually never mind, do X instead` with no record of what you were doing before — that's intentional. **If a user asks how to stop you mid-turn, tell them: "Start your message with `!` — it interrupts whatever I'm doing and treats the rest as a fresh request."** Doubled `!!` (typo / emphasis) reaches you verbatim. Empty `!` gets a "Send your replacement instruction now" reply from the gateway and never reaches you. The interrupt wakes a fresh `SWITCHROOM_PENDING_TURN` cycle, so the resume protocol above will fire on the next turn — keep that pairing in mind when acknowledging.
 
 **Wake audit — every fresh boot, check what you owe before responding.** When `start.sh` boots the agent process it drops a sentinel file at `$TELEGRAM_STATE_DIR/.wake-audit-pending`. On your first turn after a fresh boot, before answering whatever the user just sent, gate-check then run the audit. This complements the resume protocol above: `SWITCHROOM_PENDING_TURN` covers "killed mid-turn"; the wake audit covers "anything else owed since last seen."
 
@@ -145,8 +145,8 @@ The marker's mtime defines "audit complete for this conversation up to now" —
 **"Why did you restart?" — read the audit trail, don't guess.** The `SWITCHROOM_PENDING_*` env vars are one-shot (cleared by start.sh on first read), so by the time a user asks "why did you restart?" they're long gone. Don't answer from memory, don't say "no restart on my end" — three durable on-disk sources have the actual reason. Check them in this order:
 
 1. **`$TELEGRAM_STATE_DIR/clean-shutdown.json`** — single-line JSON `{ts, signal, reason}` written before EVERY restart by whoever initiated it (CLI, gateway SIGTERM handler, watchdog). Fastest answer for "what was THIS boot's reason." Example: `cat "$TELEGRAM_STATE_DIR/clean-shutdown.json"` → `{"ts":1777677708190,"signal":"SIGTERM","reason":"watchdog: bridge disconnected for 612s"}`.
-2.
-3.
+2. **Container/unit history** — under v0.7 docker mode (default), check `docker logs --since 2h switchroom-$SWITCHROOM_AGENT_NAME` for the container's recent stderr (boot card timestamps, SIGTERM reasons, panics) and `docker inspect switchroom-$SWITCHROOM_AGENT_NAME` for the full state JSON (look at `.State.StartedAt` for the last start time and `.State.RestartCount` for cumulative restarts). Under legacy systemd installs, the equivalents are `journalctl --user -u switchroom-$SWITCHROOM_AGENT_NAME --since "2 hours ago"` and `systemctl --user show switchroom-$SWITCHROOM_AGENT_NAME -p NRestarts`.
+3. **Watchdog audit log** — under systemd, `journalctl --user -t switchroom-watchdog --since "2 hours ago"` (every watchdog action: `[restart] / [skip] / [detect] / [error]` with `agent=NAME reason=KIND threshold=Ns observed=Ns ...`). Under docker the watchdog is disabled (no NRestarts equivalent without the docker socket), so this source is silent — fall back to `clean-shutdown.json` plus the container logs above.
 
 Quote the reason field verbatim when answering — don't paraphrase. If `clean-shutdown.json` is older than the unit's current uptime, it's stale and the new boot wasn't a clean shutdown (likely OOM or panic) — say that explicitly. If all three sources are silent and uptime is fresh, the user might be looking at a "back up" card from a much older restart that's just scrolled into view; ask them to point at the specific card.
package/profiles/default/workspace/CLAUDE.md.hbs
CHANGED

@@ -107,7 +107,7 @@ Use your available tools when appropriate. If you lack the right tool for a task
 
 {{#if schedule}}
 ## Scheduled Tasks
-You have scheduled tasks configured. These run independently as one-shot `claude -p` calls
+You have scheduled tasks configured. These run independently as one-shot `claude -p` calls on a schedule that fires across reboots. They don't use your session or context, they fire on their own (typically Sonnet for cost efficiency) and send output directly to Telegram.
 
-You don't need to manage them
+You don't need to manage them. If the user asks about scheduled tasks, explain that they run automatically and are configured in switchroom.yaml under `schedule:`.
 {{/if}}
@@ -116,6 +116,15 @@ When operating in a group chat (more than one person in the room):
 Quality > quantity. If you wouldn't send it in a real group chat with
 friends, don't send it.
 
+## Effort estimates
+
+When estimating implementation, research, or refactor work, estimate in
+**agent minutes** — wall-clock time for a current-generation Claude
+agent (the actual worker, not a human) to complete the work end-to-end
+including tests. A "12 dev hours" plan is the wrong unit; "~25 agent
+minutes" is the right one. Reserve human-time estimates only for work
+that explicitly requires the user's review or input.
+
 ## Scope
 
 {{#if agentConfig.purpose}}
package/skills/docx/VENDORED.md
CHANGED

@@ -7,7 +7,7 @@ Pinned to commit: 5128e1865d670f5d6c9cef000e6dfc4e951fb5b9
 ## Why vendored
 
 Switchroom ships this skill as a built-in default so every agent gets it
-on scaffold (and on `switchroom
+on scaffold (and on `switchroom apply` for pre-existing agents).
 Vendoring keeps the skill content available offline and version-pinned.
 
 Opt out with:
package/skills/mcp-builder/VENDORED.md
CHANGED

@@ -7,7 +7,7 @@ Pinned to commit: 5128e1865d670f5d6c9cef000e6dfc4e951fb5b9
 ## Why vendored
 
 Switchroom ships this skill as a built-in default so every agent gets it
-on scaffold (and on `switchroom
+on scaffold (and on `switchroom apply` for pre-existing agents).
 Vendoring keeps the skill content available offline and version-pinned.
 
 Opt out with:
package/skills/pdf/VENDORED.md
CHANGED

@@ -7,7 +7,7 @@ Pinned to commit: 5128e1865d670f5d6c9cef000e6dfc4e951fb5b9
 ## Why vendored
 
 Switchroom ships this skill as a built-in default so every agent gets it
-on scaffold (and on `switchroom
+on scaffold (and on `switchroom apply` for pre-existing agents).
 Vendoring keeps the skill content available offline and version-pinned.
 
 Opt out with:
package/skills/pptx/VENDORED.md
CHANGED
|
@@ -7,7 +7,7 @@ Pinned to commit: 5128e1865d670f5d6c9cef000e6dfc4e951fb5b9
|
|
|
7
7
|
## Why vendored
|
|
8
8
|
|
|
9
9
|
Switchroom ships this skill as a built-in default so every agent gets it
|
|
10
|
-
on scaffold (and on `switchroom
|
|
10
|
+
on scaffold (and on `switchroom apply` for pre-existing agents).
|
|
11
11
|
Vendoring keeps the skill content available offline and version-pinned.
|
|
12
12
|
|
|
13
13
|
Opt out with:
|
package/skills/skill-creator/VENDORED.md
CHANGED
|
@@ -7,7 +7,7 @@ Pinned to commit: 5128e1865d670f5d6c9cef000e6dfc4e951fb5b9
|
|
|
7
7
|
## Why vendored
|
|
8
8
|
|
|
9
9
|
Switchroom ships this skill as a built-in default so every agent gets it
|
|
10
|
-
on scaffold (and on `switchroom
|
|
10
|
+
on scaffold (and on `switchroom apply` for pre-existing agents).
|
|
11
11
|
Vendoring keeps the skill content available offline and version-pinned.
|
|
12
12
|
|
|
13
13
|
Opt out with:
|
package/skills/switchroom-architecture/SKILL.md
CHANGED
|
@@ -12,7 +12,7 @@ Switchroom is a multi-agent orchestrator built on Claude Code. It manages multip
|
|
|
12
12
|
|
|
13
13
|
**One `switchroom.yaml` to rule them all.** All agents are configured from a single file using a three-layer cascade. See [cascade.md](cascade.md) for full merge semantics.
|
|
14
14
|
|
|
15
|
-
**Agents as
|
|
15
|
+
**Agents as Docker containers.** Each agent runs as a long-lived `claude` process inside its own container (`switchroom-<name>`), supervised by Docker Compose with `restart: unless-stopped` and a healthcheck. The `start.sh` script sets environment variables and execs into `claude`. Claude Code handles session persistence and tool execution.
|
|
16
16
|
|
|
17
17
|
**Telegram as the primary interface.** The `switchroom-telegram` MCP plugin connects Claude Code to Telegram, providing 10 tools for message handling. See [telegram.md](telegram.md) for details.
|
|
18
18
|
|
|
@@ -46,12 +46,13 @@ Switchroom is a multi-agent orchestrator built on Claude Code. It manages multip
|
|
|
46
46
|
## Lifecycle
|
|
47
47
|
|
|
48
48
|
1. `switchroom agent create <name>` — scaffold agent from switchroom.yaml
|
|
49
|
-
2. `
|
|
50
|
-
3.
|
|
51
|
-
4.
|
|
52
|
-
5.
|
|
53
|
-
6.
|
|
54
|
-
7.
|
|
49
|
+
2. `switchroom apply` — write `~/.switchroom/compose/docker-compose.yml`
|
|
50
|
+
3. `docker compose -p switchroom -f ~/.switchroom/compose/docker-compose.yml up -d` — start the container
|
|
51
|
+
4. Claude Code boots, loads CLAUDE.md + skills + .mcp.json
|
|
52
|
+
5. MCP servers connect (Hindsight, switchroom-telegram, others)
|
|
53
|
+
6. Telegram plugin polls for messages
|
|
54
|
+
7. User sends message → plugin fires `UserPromptSubmit` hook → Claude responds
|
|
55
|
+
8. `switchroom agent reconcile <name>` — re-apply switchroom.yaml (no CLAUDE.md touch)
|
|
55
56
|
|
|
56
57
|
## Deep dives
|
|
57
58
|
|
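The numbered lifecycle can be walked as one shell sequence. A minimal sketch, assuming the paths shown in steps 2-3; the `run` dry-run guard and the `demo` agent name are illustrative, not switchroom features:

```shell
#!/bin/sh
# Dry-run sketch of lifecycle steps 1-3. With DRY_RUN=1 (the default
# here) each command is printed instead of executed; set DRY_RUN=0 on a
# real host to actually perform the bring-up.
COMPOSE_FILE="$HOME/.switchroom/compose/docker-compose.yml"
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "$@"; else "$@"; fi; }

run switchroom agent create demo                            # step 1: scaffold
run switchroom apply                                        # step 2: write compose file
run docker compose -p switchroom -f "$COMPOSE_FILE" up -d   # step 3: start containers
```

Steps 4-8 then happen inside the containers once they are up.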
package/skills/switchroom-cli/SKILL.md
CHANGED
|
@@ -1,7 +1,7 @@
|
|
|
1
1
|
---
|
|
2
2
|
name: switchroom-cli
|
|
3
3
|
description: "Run switchroom CLI operations on existing agents: logs, update, restart, version, config inspection, scheduled tasks, and Telegram plugin reference. Use when the user wants to: show logs (\"logs\", \"what happened\", \"check the journal\", \"why did it crash\"); update agents (\"update\", \"pull latest\", \"get new code\", \"upgrade\"); restart agents (\"restart\", \"reboot\", \"bounce\", \"kick\", \"it's stuck\"); check what's running (\"version\", \"what sha\", \"are agents up\", \"health summary\"); apply config changes (\"apply\", \"sync my config\", \"I just edited switchroom.yaml\"); inspect an agent's effective config (\"what model is X using\", \"how is <agent> configured\", \"show the cascade\"); list scheduled tasks (\"cron\", \"timers\", \"what runs automatically\", \"scheduled tasks\"); or ask about Telegram-plugin features (\"what MCP tools does the bot have\", \"how does reply work\"). Do NOT use for adding/removing agents (switchroom-manage), bootstrapping switchroom from scratch (switchroom-install), or \"something is broken\" diagnostics (switchroom-health).
|
|
4
|
-
allowed-tools: Bash(switchroom *) Bash(
|
|
4
|
+
allowed-tools: Bash(switchroom *) Bash(docker *) Bash(docker compose *)
|
|
5
5
|
---
|
|
6
6
|
|
|
7
7
|
# Switchroom CLI operations
|
|
@@ -9,7 +9,7 @@ allowed-tools: Bash(switchroom *) Bash(systemctl --user *) Bash(journalctl *)
|
|
|
9
9
|
This skill is the reference for running `switchroom` CLI commands against existing agents. Each section below is triggered by a distinct user intent — jump to the relevant one rather than walking top-to-bottom.
|
|
10
10
|
|
|
11
11
|
**Three commands to know:**
|
|
12
|
-
- `switchroom
|
|
12
|
+
- `switchroom apply` — reconcile every agent + (re)write `~/.switchroom/compose/docker-compose.yml`. Bring the fleet up afterwards with `docker compose -p switchroom -f ~/.switchroom/compose/docker-compose.yml up -d`. (Replaces the v0.6 `switchroom update` flow.)
|
|
13
13
|
- `switchroom restart [agent]` — bounces a stuck or wedged agent
|
|
14
14
|
- `switchroom version` — shows what's running (versions + health summary)
|
|
15
15
|
|
|
@@ -31,12 +31,12 @@ switchroom agent list
|
|
|
31
31
|
|
|
32
32
|
### Step 2 — Tail the logs
|
|
33
33
|
|
|
34
|
-
Default is the last 20 lines. User can specify a number. Use the CLI if available; fall back to `
|
|
34
|
+
Default is the last 20 lines. User can specify a number. Use the CLI if available; fall back to `docker compose logs` when it's not:
|
|
35
35
|
|
|
36
36
|
```bash
|
|
37
37
|
switchroom agent logs <name> [--lines 50]
|
|
38
38
|
# or, when switchroom CLI isn't reachable:
|
|
39
|
-
|
|
39
|
+
docker compose -p switchroom -f ~/.switchroom/compose/docker-compose.yml logs --tail 50 switchroom-<name>
|
|
40
40
|
```
|
|
41
41
|
|
|
42
42
|
### Step 3 — Present output
|
|
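The Step 2 CLI-or-compose fallback can be sketched as a small selector. `tail_logs` is an illustrative helper, not a switchroom command; it echoes the command it would run so the selection logic is visible:

```shell
# Prefer the switchroom CLI when it is on PATH; otherwise fall back to
# raw docker compose against the generated compose file. Prints the
# chosen command instead of executing it.
tail_logs() {
  name=$1
  lines=${2:-20}   # default matches the skill: last 20 lines
  if command -v switchroom >/dev/null 2>&1; then
    echo "switchroom agent logs $name --lines $lines"
  else
    echo "docker compose -p switchroom -f $HOME/.switchroom/compose/docker-compose.yml logs --tail $lines switchroom-$name"
  fi
}
tail_logs demo 50
```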
@@ -47,21 +47,26 @@ Include the last ~20 lines verbatim, then summarise what you see (crash, stall,
|
|
|
47
47
|
|
|
48
48
|
## Update — "update", "pull latest", "get new code", "upgrade"
|
|
49
49
|
|
|
50
|
-
Pull the latest switchroom source,
|
|
50
|
+
Pull the latest switchroom source, then re-apply config and bring the fleet back up via docker compose.
|
|
51
51
|
|
|
52
52
|
```bash
|
|
53
|
-
switchroom
|
|
53
|
+
cd ~/code/switchroom
|
|
54
|
+
git pull
|
|
55
|
+
bun install
|
|
56
|
+
bun run build
|
|
57
|
+
switchroom apply
|
|
58
|
+
docker compose -p switchroom -f ~/.switchroom/compose/docker-compose.yml pull
|
|
59
|
+
docker compose -p switchroom -f ~/.switchroom/compose/docker-compose.yml up -d
|
|
54
60
|
```
|
|
55
61
|
|
|
56
|
-
|
|
57
|
-
|
|
58
|
-
|
|
59
|
-
|
|
60
|
-
4. Reconciles all agent config from switchroom.yaml
|
|
61
|
-
5. Restarts all agents that need it
|
|
62
|
-
6. Prints a one-line health summary when done
|
|
62
|
+
`switchroom apply` reconciles every agent declared in `switchroom.yaml`
|
|
63
|
+
(scaffolding any missing workspaces, refreshing bootstrap files), then
|
|
64
|
+
writes `~/.switchroom/compose/docker-compose.yml`. The CLI deliberately
|
|
65
|
+
does not run `docker` for you — the operator owns the bring-up.
|
|
63
66
|
|
|
64
|
-
|
|
67
|
+
The v0.6 `switchroom update` verb is removed in v0.7+; calling it now
|
|
68
|
+
prints an upgrade hint and exits 1. The shim is slated for full removal
|
|
69
|
+
in v0.8.
|
|
65
70
|
|
|
66
71
|
---
|
|
67
72
|
|
|
@@ -230,8 +235,11 @@ List cron jobs and scheduled tasks.
|
|
|
230
235
|
|
|
231
236
|
### Step 1 — Show live timers
|
|
232
237
|
|
|
238
|
+
Cron timers in v0.7+ run inside the per-agent scheduler container. Inspect
|
|
239
|
+
its log to see fired jobs:
|
|
240
|
+
|
|
233
241
|
```bash
|
|
234
|
-
|
|
242
|
+
docker compose -p switchroom -f ~/.switchroom/compose/docker-compose.yml logs switchroom-<agent>-scheduler --tail 100
|
|
235
243
|
```
|
|
236
244
|
|
|
237
245
|
### Step 2 — Show declared schedule entries
|
package/skills/switchroom-health/SKILL.md
CHANGED
|
@@ -31,11 +31,11 @@ switchroom auth status 2>/dev/null || echo "FAIL: auth check failed"
|
|
|
31
31
|
# and per-account health (healthy / quota-exhausted / expired / missing-refresh-token).
|
|
32
32
|
switchroom auth account list 2>/dev/null || echo "INFO: no Anthropic accounts configured (legacy per-agent slot model in use)"
|
|
33
33
|
|
|
34
|
-
# Check
|
|
35
|
-
|
|
34
|
+
# Check docker-compose service health
|
|
35
|
+
docker compose -p switchroom -f ~/.switchroom/compose/docker-compose.yml ps 2>/dev/null || echo "no switchroom docker fleet"
|
|
36
36
|
|
|
37
|
-
# Check for
|
|
38
|
-
|
|
37
|
+
# Check for unhealthy or exited containers
|
|
38
|
+
docker compose -p switchroom -f ~/.switchroom/compose/docker-compose.yml ps --status exited --status unhealthy 2>/dev/null
|
|
39
39
|
|
|
40
40
|
# Check MCP config exists for each agent
|
|
41
41
|
for dir in ~/.switchroom/agents/*/; do
|
|
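The two `ps` checks above can be collapsed into one summary count. A sketch, assuming the NDJSON shape recent compose v2 releases emit for `ps --format json` (one object per line, `State`/`Health` fields); it runs on a canned sample so no daemon is needed:

```shell
# Count services that are exited, dead, or failing their healthcheck.
# The sample stands in for real `docker compose -p switchroom ... ps --format json` output.
sample='{"Name":"switchroom-demo","State":"running","Health":"healthy"}
{"Name":"switchroom-demo-scheduler","State":"exited","Health":""}'
bad=$(printf '%s\n' "$sample" | grep -cE '"State":"(exited|dead)"|"Health":"unhealthy"')
echo "unhealthy/exited services: $bad"
```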
@@ -78,7 +78,7 @@ For each check, report:
|
|
|
78
78
|
|
|
79
79
|
Group findings by category:
|
|
80
80
|
1. **CLI & Auth** — switchroom installed, authenticated
|
|
81
|
-
2. **
|
|
81
|
+
2. **Docker fleet** — containers running, no unhealthy/exited services
|
|
82
82
|
3. **Agent files** — start.sh, .mcp.json, settings.json present
|
|
83
83
|
4. **Bot tokens** — Telegram credentials resolved
|
|
84
84
|
5. **Memory backend** — Hindsight reachable
|
|
@@ -93,8 +93,8 @@ For common failures, give the exact fix:
|
|
|
93
93
|
| Per-agent auth expired (slot model) | `switchroom auth login <agent>` |
|
|
94
94
|
| Account expired (new model — `auth account list` shows red ✗) | `switchroom auth refresh-accounts` (one tick); if no refresh-token, the account needs re-adding |
|
|
95
95
|
| Account quota-exhausted (yellow ⊘ in `auth account list`) | Auto-fallback handles it if the agent has multiple accounts; otherwise wait for the reset window or `switchroom auth enable <other-account> <agent>` |
|
|
96
|
-
|
|
|
97
|
-
| Missing .mcp.json | `switchroom
|
|
96
|
+
| Container unhealthy | `docker compose -p switchroom -f ~/.switchroom/compose/docker-compose.yml restart switchroom-<name>` |
|
|
97
|
+
| Missing .mcp.json | `switchroom apply` (full reconcile + rewrite compose; bring up via `docker compose ... up -d`) or `switchroom agent reconcile <name>` (targeted) |
|
|
98
98
|
| Bot token unresolved | Check vault: `switchroom vault list` |
|
|
99
99
|
| Memory unreachable | Check Hindsight MCP server is running |
|
|
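The fix table lends itself to a lookup helper. A sketch; the category names and the `fix_for` helper are illustrative, only the echoed commands come from the table:

```shell
# Map a finding category to the table's one-line fix for agent "$2".
fix_for() {
  case "$1" in
    container-unhealthy)  echo "docker compose -p switchroom -f ~/.switchroom/compose/docker-compose.yml restart switchroom-$2" ;;
    missing-mcp-json)     echo "switchroom agent reconcile $2" ;;
    bot-token-unresolved) echo "switchroom vault list" ;;
    *)                    echo "switchroom doctor" ;;   # unknown: full diagnosis
  esac
}
fix_for container-unhealthy demo
```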
100
100
|
|
package/skills/switchroom-install/SKILL.md
CHANGED
|
@@ -1,13 +1,13 @@
|
|
|
1
1
|
---
|
|
2
2
|
name: switchroom-install
|
|
3
|
-
description: Install switchroom and its dependencies (
|
|
3
|
+
description: Install switchroom and its dependencies (docker, claude CLI, switchroom binary) on a fresh machine. Use for onboarding and first-time setup — when the user says 'install switchroom on this machine', 'set up switchroom for the first time', 'bootstrap switchroom from scratch', 'get switchroom running', 'how do I get started with switchroom', "I'm new to switchroom, where do I begin", or asks about switchroom dependencies or prerequisites. This is the onboarding entry point, not for managing existing agents.
|
|
4
4
|
---
|
|
5
5
|
|
|
6
6
|
# Install Switchroom
|
|
7
7
|
|
|
8
8
|
When the user asks to install, set up, bootstrap, or get started with switchroom — or when they're new to switchroom and want to know where to begin — walk them through this flow. Switchroom turns a Linux server + their Claude Pro/Max subscription into always-on Claude Code agents reachable from Telegram.
|
|
9
9
|
|
|
10
|
-
Switchroom
|
|
10
|
+
Switchroom v0.7+ ships as a self-contained static binary (no host bun or node runtime required) and runs the agent fleet in Docker containers pulled from GHCR. The two host dependencies are: **docker** (engine 24+ with the compose v2 plugin) and the **claude** Code CLI (used for OAuth login against your Pro/Max subscription).
|
|
11
11
|
|
|
12
12
|
## Step 0 — Detect existing install
|
|
13
13
|
|
|
@@ -17,11 +17,11 @@ Before doing anything, check whether switchroom is already installed:
|
|
|
17
17
|
command -v switchroom && switchroom --version 2>/dev/null
|
|
18
18
|
```
|
|
19
19
|
|
|
20
|
-
If switchroom is present, tell the user it's already installed and then — regardless — run the dependency audit in Step 2 so they see the state of **
|
|
20
|
+
If switchroom is present, tell the user it's already installed and then — regardless — run the dependency audit in Step 2 so they see the state of **docker** and **claude**. After the audit, offer `switchroom setup` (re-run the wizard), `switchroom doctor` (diagnose), or `switchroom agent list` (see what's running). Do not reinstall switchroom itself without explicit confirmation.
|
|
21
21
|
|
|
22
22
|
## Step 1 — Verify prerequisites
|
|
23
23
|
|
|
24
|
-
Switchroom requires Ubuntu 24.04 LTS
|
|
24
|
+
Switchroom requires Linux with Docker (Ubuntu 24.04 LTS canonical; ≥4GB RAM):
|
|
25
25
|
|
|
26
26
|
```bash
|
|
27
27
|
. /etc/os-release && echo "$PRETTY_NAME"
|
|
@@ -29,56 +29,40 @@ free -h | awk '/^Mem:/ {print $2}'
|
|
|
29
29
|
uname -m
|
|
30
30
|
```
|
|
31
31
|
|
|
32
|
-
If the user is on macOS or Windows, stop and explain: switchroom
|
|
32
|
+
If the user is on macOS or Windows, stop and explain: switchroom's release-validated production runtime is Linux. macOS (Docker Desktop) works for development but isn't yet release-gated. Windows users need WSL2.
|
|
33
33
|
|
|
34
|
-
## Step 2 — Install
|
|
34
|
+
## Step 2 — Install host dependencies
|
|
35
35
|
|
|
36
36
|
Only install what's missing. Check each first:
|
|
37
37
|
|
|
38
38
|
```bash
|
|
39
|
-
#
|
|
40
|
-
|
|
41
|
-
|
|
42
|
-
done
|
|
39
|
+
# Docker
|
|
40
|
+
command -v docker || echo "MISSING: docker"
|
|
41
|
+
docker compose version >/dev/null 2>&1 || echo "MISSING: docker compose v2"
|
|
43
42
|
|
|
44
|
-
#
|
|
45
|
-
command -v bun || echo "MISSING: bun"
|
|
46
|
-
|
|
47
|
-
# Node 22+ (via nvm)
|
|
48
|
-
node -v 2>/dev/null || echo "MISSING: node"
|
|
49
|
-
|
|
50
|
-
# Claude Code CLI
|
|
43
|
+
# Claude Code CLI (needed for switchroom auth login)
|
|
51
44
|
command -v claude || echo "MISSING: claude"
|
|
52
45
|
```
|
|
53
46
|
|
|
54
|
-
For anything missing
|
|
47
|
+
For anything missing:
|
|
55
48
|
|
|
56
49
|
```bash
|
|
57
|
-
#
|
|
58
|
-
sudo apt update && sudo apt install -y
|
|
59
|
-
|
|
60
|
-
# bun
|
|
61
|
-
curl -fsSL https://bun.sh/install | bash
|
|
50
|
+
# Docker (Ubuntu/Debian)
|
|
51
|
+
sudo apt update && sudo apt install -y docker.io docker-compose-v2
|
|
52
|
+
sudo usermod -aG docker "$USER" # log out/in or `newgrp docker` to apply
|
|
62
53
|
|
|
63
|
-
#
|
|
64
|
-
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh | bash
|
|
65
|
-
source ~/.bashrc
|
|
66
|
-
nvm install 22
|
|
67
|
-
|
|
68
|
-
# claude code
|
|
54
|
+
# Claude Code CLI (needs Node 20.11+)
|
|
69
55
|
npm install -g @anthropic-ai/claude-code
|
|
70
|
-
|
|
71
|
-
# docker group (user needs to log out/in or newgrp)
|
|
72
|
-
sudo usermod -aG docker "$USER"
|
|
73
56
|
```
|
|
74
57
|
|
|
75
58
|
**Important:** After `usermod -aG docker`, the user needs a new shell for group membership to apply. Mention this explicitly.
|
|
76
59
|
|
|
77
|
-
## Step 3 —
|
|
60
|
+
## Step 3 — Install the switchroom binary
|
|
61
|
+
|
|
62
|
+
The recommended path is the static-binary one-liner — auto-detects platform/arch, downloads the matching pre-built binary from the latest GitHub release, verifies its SHA256, and installs to `/usr/local/bin` (or `~/.local/bin` if not writable):
|
|
78
63
|
|
|
79
64
|
```bash
|
|
80
|
-
|
|
81
|
-
cd ~/code/switchroom && bun install && bun link
|
|
65
|
+
curl -fsSL https://github.com/switchroom/switchroom/raw/main/install.sh | sh
|
|
82
66
|
```
|
|
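The installer's SHA256 step can also be reproduced by hand when piping curl to sh is unwanted. A portable sketch; the demo file and digest are illustrative, not a real release asset:

```shell
# Digest check that works on Linux (sha256sum) and macOS (shasum -a 256).
sha256_of() { { sha256sum "$1" 2>/dev/null || shasum -a 256 "$1"; } | awk '{print $1}'; }

printf 'hello\n' > /tmp/switchroom-demo.bin
want=5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03   # sha256 of "hello\n"
[ "$(sha256_of /tmp/switchroom-demo.bin)" = "$want" ] && echo "digest ok"
```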
83
67
|
|
|
84
68
|
Verify:
|
|
@@ -87,15 +71,27 @@ Verify:
|
|
|
87
71
|
switchroom --version
|
|
88
72
|
```
|
|
89
73
|
|
|
74
|
+
(For development against a source checkout, `git clone` + `bun install` + `bun link` still works — see `docs/operators/install.md`. Don't suggest the source path for first-time users.)
|
|
75
|
+
|
|
90
76
|
## Step 4 — Run setup wizard
|
|
91
77
|
|
|
92
|
-
`switchroom setup` is an interactive wizard that
|
|
78
|
+
`switchroom setup` is an interactive wizard that wires the operator's Telegram bot token, sets up the vault, and scaffolds a first agent. DM-only by default — no forum chat ID required up front. **It requires a terminal the user controls** — if you're running inside an agent session, you cannot drive it yourself. Tell the user:
|
|
93
79
|
|
|
94
80
|
> Run `switchroom setup` in your own terminal. It'll ask for your Telegram bot token and walk you through creating your first agent. Come back when it finishes and I can verify with `switchroom doctor`.
|
|
95
81
|
|
|
96
|
-
## Step 5 —
|
|
82
|
+
## Step 5 — Apply and bring up the fleet
|
|
83
|
+
|
|
84
|
+
After `switchroom setup` completes, three commands take you from config to a running fleet:
|
|
85
|
+
|
|
86
|
+
```bash
|
|
87
|
+
switchroom apply
|
|
88
|
+
docker compose -p switchroom -f ~/.switchroom/compose/docker-compose.yml pull
|
|
89
|
+
docker compose -p switchroom -f ~/.switchroom/compose/docker-compose.yml up -d --remove-orphans
|
|
90
|
+
```
|
|
91
|
+
|
|
92
|
+
`switchroom apply` reconciles every agent declared in `switchroom.yaml` and writes `~/.switchroom/compose/docker-compose.yml`. The CLI deliberately does not run `docker` for you — operators own the bring-up. The first `pull` fetches the 5 GHCR images (~1-2 GB total); subsequent pulls are layer-only.
|
|
97
93
|
|
|
98
|
-
|
|
94
|
+
## Step 6 — Verify
|
|
99
95
|
|
|
100
96
|
```bash
|
|
101
97
|
switchroom doctor
|
|
@@ -112,5 +108,6 @@ Once the first agent is up and authenticated, the user can promote that agent's
|
|
|
112
108
|
|
|
113
109
|
- **Do not** run `switchroom setup` non-interactively or pipe input to it — it's designed for a human.
|
|
114
110
|
- **Do not** edit `~/.switchroom/vault.enc` or any file under `~/.switchroom/` directly. Use the CLI.
|
|
115
|
-
- **Do not**
|
|
111
|
+
- **Do not** run `docker build` on the operator's host. The 5 fleet images are published on GHCR; `switchroom apply` writes a compose file that pulls them.
|
|
112
|
+
- **Do not** suggest the legacy `switchroom up` / `switchroom init` / `switchroom update` verbs — they were removed in v0.7. The current flow is `switchroom apply && docker compose pull && docker compose up -d`.
|
|
116
113
|
- **Do not** reinstall over an existing install without asking. If the user wants a clean slate, have them run `switchroom uninstall` first (or confirm they want to blow away `~/.switchroom/`).
|