@openrig/cli 0.1.3 → 0.1.5

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (106)
  1. package/daemon/assets/guidance/openrig-start.md +16 -1
  2. package/daemon/dist/adapters/claude-code-adapter.d.ts +12 -0
  3. package/daemon/dist/adapters/claude-code-adapter.d.ts.map +1 -1
  4. package/daemon/dist/adapters/claude-code-adapter.js +92 -3
  5. package/daemon/dist/adapters/claude-code-adapter.js.map +1 -1
  6. package/daemon/dist/adapters/codex-runtime-adapter.d.ts +5 -0
  7. package/daemon/dist/adapters/codex-runtime-adapter.d.ts.map +1 -1
  8. package/daemon/dist/adapters/codex-runtime-adapter.js +82 -2
  9. package/daemon/dist/adapters/codex-runtime-adapter.js.map +1 -1
  10. package/daemon/dist/domain/native-resume-probe.d.ts.map +1 -1
  11. package/daemon/dist/domain/native-resume-probe.js +24 -1
  12. package/daemon/dist/domain/native-resume-probe.js.map +1 -1
  13. package/daemon/dist/domain/runtime-adapter.d.ts +1 -0
  14. package/daemon/dist/domain/runtime-adapter.d.ts.map +1 -1
  15. package/daemon/dist/domain/runtime-adapter.js.map +1 -1
  16. package/daemon/dist/domain/spec-library-service.d.ts.map +1 -1
  17. package/daemon/dist/domain/spec-library-service.js +10 -0
  18. package/daemon/dist/domain/spec-library-service.js.map +1 -1
  19. package/daemon/dist/domain/startup-orchestrator.d.ts.map +1 -1
  20. package/daemon/dist/domain/startup-orchestrator.js +10 -1
  21. package/daemon/dist/domain/startup-orchestrator.js.map +1 -1
  22. package/daemon/specs/agents/design/{agent.yaml → product-designer/agent.yaml} +4 -3
  23. package/daemon/specs/agents/design/{guidance → product-designer/guidance}/role.md +13 -0
  24. package/daemon/specs/agents/{impl → development/implementer}/agent.yaml +4 -3
  25. package/daemon/specs/agents/development/implementer/guidance/role.md +47 -0
  26. package/daemon/specs/agents/{qa → development/qa}/agent.yaml +3 -2
  27. package/daemon/specs/agents/development/qa/guidance/role.md +78 -0
  28. package/daemon/specs/agents/{lead → orchestration/orchestrator}/agent.yaml +4 -3
  29. package/daemon/specs/agents/{lead → orchestration/orchestrator}/guidance/role.md +18 -0
  30. package/daemon/specs/agents/{analyst → research/analyst}/agent.yaml +2 -1
  31. package/daemon/specs/agents/{synthesizer → research/synthesizer}/agent.yaml +2 -1
  32. package/daemon/specs/agents/{reviewer → review/independent-reviewer}/agent.yaml +4 -3
  33. package/daemon/specs/agents/{reviewer → review/independent-reviewer}/guidance/role.md +13 -0
  34. package/daemon/specs/agents/shared/agent.yaml +29 -1
  35. package/daemon/specs/agents/shared/skills/core/openrig-user/SKILL.md +468 -0
  36. package/daemon/specs/agents/shared/skills/pods/development-team/SKILL.md +149 -0
  37. package/daemon/specs/agents/shared/skills/pods/orchestration-team/SKILL.md +234 -0
  38. package/daemon/specs/agents/shared/skills/pods/review-team/SKILL.md +210 -0
  39. package/daemon/specs/agents/shared/skills/process/agent-browser/LOCAL-INSIGHTS.md +189 -0
  40. package/daemon/specs/agents/shared/skills/process/agent-browser/SKILL.md +417 -0
  41. package/daemon/specs/agents/shared/skills/process/brainstorming/SKILL.md +96 -0
  42. package/daemon/specs/agents/shared/skills/process/containerized-e2e/SKILL.md +256 -0
  43. package/daemon/specs/agents/shared/skills/process/containerized-e2e/scripts/Dockerfile +39 -0
  44. package/daemon/specs/agents/shared/skills/process/containerized-e2e/scripts/build-e2e-image.sh +37 -0
  45. package/daemon/specs/agents/shared/skills/process/containerized-e2e/templates/control-plane-test.yaml +40 -0
  46. package/daemon/specs/agents/shared/skills/process/containerized-e2e/templates/e2e-report-template.md +94 -0
  47. package/daemon/specs/agents/shared/skills/process/containerized-e2e/templates/expansion-collision-fragment.yaml +13 -0
  48. package/daemon/specs/agents/shared/skills/process/containerized-e2e/templates/expansion-pod-fragment.yaml +14 -0
  49. package/daemon/specs/agents/shared/skills/process/dogfood/SKILL.md +220 -0
  50. package/daemon/specs/agents/shared/skills/process/dogfood/references/issue-taxonomy.md +109 -0
  51. package/daemon/specs/agents/shared/skills/process/dogfood/templates/dogfood-report-template.md +53 -0
  52. package/daemon/specs/agents/shared/skills/process/executing-plans/SKILL.md +84 -0
  53. package/daemon/specs/agents/shared/skills/process/frontend-design/LICENSE.txt +177 -0
  54. package/daemon/specs/agents/shared/skills/process/frontend-design/SKILL.md +42 -0
  55. package/daemon/specs/agents/shared/skills/process/systematic-debugging/CREATION-LOG.md +119 -0
  56. package/daemon/specs/agents/shared/skills/process/systematic-debugging/SKILL.md +296 -0
  57. package/daemon/specs/agents/shared/skills/process/systematic-debugging/condition-based-waiting-example.ts +158 -0
  58. package/daemon/specs/agents/shared/skills/process/systematic-debugging/condition-based-waiting.md +115 -0
  59. package/daemon/specs/agents/shared/skills/process/systematic-debugging/defense-in-depth.md +122 -0
  60. package/daemon/specs/agents/shared/skills/process/systematic-debugging/find-polluter.sh +63 -0
  61. package/daemon/specs/agents/shared/skills/process/systematic-debugging/root-cause-tracing.md +169 -0
  62. package/daemon/specs/agents/shared/skills/process/systematic-debugging/test-academic.md +14 -0
  63. package/daemon/specs/agents/shared/skills/process/systematic-debugging/test-pressure-1.md +58 -0
  64. package/daemon/specs/agents/shared/skills/process/systematic-debugging/test-pressure-2.md +68 -0
  65. package/daemon/specs/agents/shared/skills/process/systematic-debugging/test-pressure-3.md +69 -0
  66. package/daemon/specs/agents/shared/skills/process/test-driven-development/SKILL.md +371 -0
  67. package/daemon/specs/agents/shared/skills/process/test-driven-development/testing-anti-patterns.md +299 -0
  68. package/daemon/specs/agents/shared/skills/process/using-superpowers/SKILL.md +95 -0
  69. package/daemon/specs/agents/shared/skills/process/verification-before-completion/SKILL.md +139 -0
  70. package/daemon/specs/agents/shared/skills/process/writing-plans/SKILL.md +116 -0
  71. package/daemon/specs/{adversarial-review.yaml → rigs/focused/adversarial-review/rig.yaml} +3 -3
  72. package/daemon/specs/{research-team.yaml → rigs/focused/research-team/rig.yaml} +3 -3
  73. package/daemon/specs/rigs/launch/demo/CULTURE.md +92 -0
  74. package/daemon/specs/{product-team.yaml → rigs/launch/demo/rig.yaml} +13 -12
  75. package/daemon/specs/{implementation-pair.yaml → rigs/launch/implementation-pair/rig.yaml} +5 -5
  76. package/daemon/specs/rigs/preview/product-team/CULTURE.md +137 -0
  77. package/daemon/specs/rigs/preview/product-team/rig.yaml +91 -0
  78. package/dist/client.d.ts +17 -7
  79. package/dist/client.d.ts.map +1 -1
  80. package/dist/client.js +33 -23
  81. package/dist/client.js.map +1 -1
  82. package/dist/commands/bootstrap.d.ts.map +1 -1
  83. package/dist/commands/bootstrap.js +2 -1
  84. package/dist/commands/bootstrap.js.map +1 -1
  85. package/dist/commands/daemon.d.ts.map +1 -1
  86. package/dist/commands/daemon.js +5 -1
  87. package/dist/commands/daemon.js.map +1 -1
  88. package/dist/commands/up.d.ts.map +1 -1
  89. package/dist/commands/up.js +4 -3
  90. package/dist/commands/up.js.map +1 -1
  91. package/dist/daemon-lifecycle.d.ts.map +1 -1
  92. package/dist/daemon-lifecycle.js +54 -7
  93. package/dist/daemon-lifecycle.js.map +1 -1
  94. package/dist/fetch-with-timeout.d.ts +9 -0
  95. package/dist/fetch-with-timeout.d.ts.map +1 -0
  96. package/dist/fetch-with-timeout.js +41 -0
  97. package/dist/fetch-with-timeout.js.map +1 -0
  98. package/dist/mcp-server.d.ts.map +1 -1
  99. package/dist/mcp-server.js +2 -1
  100. package/dist/mcp-server.js.map +1 -1
  101. package/package.json +1 -1
  102. package/daemon/specs/agents/impl/guidance/role.md +0 -27
  103. package/daemon/specs/agents/qa/guidance/role.md +0 -26
  104. package/daemon/specs/agents/shared/skills/openrig-user/SKILL.md +0 -264
  105. package/daemon/specs/agents/{analyst → research/analyst}/guidance/role.md +0 -0
  106. package/daemon/specs/agents/{synthesizer → research/synthesizer}/guidance/role.md +0 -0
package/daemon/specs/agents/shared/skills/core/openrig-user/SKILL.md
@@ -0,0 +1,468 @@
---
name: openrig-user
description: Use when operating OpenRig with the `rig` CLI and you need the shipped command surface for identity, inventory, communication, lifecycle, specs, recovery, or agent-facing JSON output.
---

# OpenRig User

This is an as-built guide to the shipped `rig` CLI.
Use current code and `rig ... --help` as ground truth if anything here ever conflicts with older planning docs.

## Core Loop

Most work in OpenRig reduces to this loop:
- recover identity: `rig whoami --json`
- inspect inventory: `rig ps --nodes --json`
- read context: `rig transcript ...`, `rig ask ...`, `rig chatroom history ...`
- act: `rig send`, `rig capture`, `rig broadcast`, lifecycle commands

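A minimal sketch of driving this loop from a script, assuming a hypothetical `rig whoami --json` payload shape (the `identity`, `peers`, and `contextUsage.percent` field names are illustrative guesses, not the shipped schema — check real output first):

```python
import json

# Hypothetical `rig whoami --json` payload -- field names are assumptions,
# not the shipped schema; verify against real CLI output before relying on them.
payload = json.loads("""
{
  "identity": {"rig": "demo", "logicalId": "dev1.impl",
               "sessionName": "dev1.impl@demo", "runtime": "claude-code"},
  "peers": [{"logicalId": "dev1.qa", "direction": "outbound"}],
  "contextUsage": {"percent": 82}
}
""")

me = payload["identity"]
print(f"I am {me['logicalId']} in rig {me['rig']} ({me['runtime']})")

# High context usage is the cue to run the after-compaction checklist early.
usage = payload.get("contextUsage") or {}
if usage.get("percent", 0) >= 80:
    print("context nearly full: re-read transcript and chatroom before acting")
```

The same pattern (parse, decide, act) applies to the other `--json` surfaces below.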
## Identity and Recovery

Start here after launch, compaction, or confusion:

```bash
rig whoami --json
```

What it gives you today:
- identity: rig, logical ID, pod/member, session name, runtime
- peers and directional edges
- transcript info
- `contextUsage` when available

Flags:
```bash
rig whoami --session <name>
rig whoami --node-id <id>
```

If the daemon is unreachable but identity can still be inferred, `--json` may return a partial result instead of crashing.

## Inventory and Monitoring

```bash
rig ps
rig ps --json
rig ps --nodes
rig ps --nodes --json
```

Use `rig ps --nodes --json` for the current node inventory across rigs. It is the best machine-readable operator surface for:
- session name
- runtime
- session/startup status
- restore outcome
- attach/resume commands
- latest error

Other health surfaces:

```bash
rig status
rig daemon status
rig config
rig preflight
rig doctor
```

## Transcript and Communication

### Transcript access

```bash
rig transcript <session> --tail 100
rig transcript <session> --grep "pattern"
rig transcript <session> --json
```

### Send to one session

```bash
rig send <session> "message"
rig send <session> "message" --verify
rig send <session> "message" --force
rig send <session> "message" --json
```

Use `--verify` when you want delivery evidence. Use `--force` only when you intentionally want to bypass activity-risk checks.

### Capture terminal output

```bash
rig capture <session>
rig capture <session> --lines 50
rig capture --rig <name>
rig capture --pod <name> --rig <name>
rig capture --rig <name> --json
```

### Broadcast

```bash
rig broadcast --rig <name> "message"
rig broadcast --pod <name> "message"
rig broadcast "message"
rig broadcast --rig <name> "message" --json
```

Without `--rig` or `--pod`, broadcast targets all running sessions.

### Chatroom

```bash
rig chatroom send <rig> <message> [--sender <name>]
rig chatroom history <rig> [--topic <name>] [--after <id>] [--since <ts>] [--sender <name>] [--limit <n>] [--json]
rig chatroom wait <rig> [--after <id>] [--topic <name>] [--sender <name>] [--timeout <seconds>] [--json]
rig chatroom clear <rig>
rig chatroom topic <rig> <topic-name> [--body <text>] [--sender <name>]
rig chatroom watch <rig> [--tmux]
```

**Key commands:**
- `send` — post a message
- `history` — retrieve with composable filters (sender, since, after, topic)
- `wait` — block until new matching messages arrive (polls history, times out honestly)
- `clear` — delete all messages for the rig (destructive, rig-scoped)
- `topic` — set a topic marker
- `watch` — SSE or tmux-based live stream

**Roundtable protocol:**
1. Inspect old room: `rig chatroom history my-rig --limit 5`
2. Save if needed: `rig chatroom history my-rig --json > /tmp/old-room.json`
3. Clear if needed: `rig chatroom clear my-rig`
4. Set topic: `rig chatroom topic my-rig "ROUND START"`
5. Post: `rig chatroom send my-rig "position..." --sender <session>`
6. Monitor: `rig chatroom wait my-rig --timeout 120`
7. Close: `rig chatroom topic my-rig "ROUND CLOSED"`

See `docs/planning/roadmaps/chatroom-roundtable-protocol.md` for the full protocol.

### `rig ask`

```bash
rig ask <rig> "question"
rig ask <rig> "question" --json
```

Current shipped behavior:
- queries the daemon for evidence
- returns rig summary
- returns transcript excerpts
- may return chat excerpts
- returns insufficiency state and optional guidance

This is an evidence/context command. It is not a hidden second-LLM call.

## Lifecycle

### Bring a rig up

```bash
rig up <source>
rig up <source> --plan
rig up <source> --yes
rig up <source> --json
```

`<source>` can be:
- a rig spec path
- a `.rigbundle` path
- a bare name

Bare names are special:
- if they match a library spec, `rig up` launches from the spec library
- if they do not match a library spec, `rig up` treats the name as an existing-rig restore/power-on target
- if both exist, `rig up` fails loudly on ambiguity

Current behavior notes:
- `--target <root>` is only for `.rigbundle` / package installation. It does not change agent cwd.
- `local:` `agent_ref` values resolve relative to the rig spec directory, not your shell cwd.
- if you copy a built-in spec elsewhere, keep its `agents/` tree beside the YAML or rewrite those refs to `path:/absolute/path`
- there is no shipped `rig up --cwd` override yet

Legacy/spec-specific surfaces still ship too:

```bash
rig bootstrap <spec> [--plan] [--yes] [--json]
rig requirements <spec> [--json]
```

### Tear a rig down

```bash
rig down <rigId>
rig down <rigId> --snapshot
rig down <rigId> --delete
rig down <rigId> --force
rig down <rigId> --json
```

If `--snapshot` succeeds, human output includes the restore hint.

### Release management without killing live claimed sessions

```bash
rig release <rigId>
rig release <rigId> --delete
rig release <rigId> --json
```

Use `rig release` for adopted/claimed-session rigs when you want OpenRig to stop managing the rig but leave the tmux sessions alive.
This is the safe recovery/reset surface for the "sessions still exist, management is broken or stale" case.
If the rig contains OpenRig-launched nodes, `rig release` refuses loudly instead of pretending the mixed rig is safe to detach.

### Snapshots and restore

```bash
rig snapshot <rigId>
rig snapshot list <rigId>
rig restore <snapshotId> --rig <rigId>
```

`rig restore` requires `--rig <rigId>`.

Claude Code autonomy note:
- unattended `rig whoami` on boot may require the local permission allow list to include `Bash(rig:*)`

### Import/export and bundles

```bash
rig export <rigId> -o rig.yaml
rig import <path> [--instantiate] [--materialize-only] [--preflight] [--target-rig <rigId>] [--rig-root <root>]
rig bundle create <spec> -o out.rigbundle
rig bundle inspect <bundle>
rig bundle install <bundle> [--plan] [--yes] [--target <root>] [--json]
```

### Legacy package surface

This still ships, but is explicitly marked legacy:

```bash
rig package validate <path>
rig package plan <path> [--target <dir>] [--runtime <runtime>] [--role <name>]
rig package install <path> [--target <dir>] [--runtime <runtime>] [--role <name>] [--allow-merge]
rig package list
rig package rollback <installId>
```

## Discovery and Topology Mutation

### Discover unmanaged tmux sessions

```bash
rig discover
rig discover --json
rig discover --draft
```

### Bind a discovered session

```bash
rig bind <discoveredId> --rig <rigId> --node <logicalId>
rig bind <discoveredId> --rig <rigId> --pod <namespace> --member <name>
```

There is no shipped top-level `rig claim` command.
The current adoption surface is `discover`, `bind`, `adopt`, and `unclaim`.

### Self-attach the current shell or agent

```bash
rig attach --self --rig <rigId> --node <logicalId>
rig attach --self --rig <rigId> --node <logicalId> --print-env
rig attach --self --rig <rigId> --pod <namespace> --member <name> --runtime <runtime>
```

Use `rig attach --self` when the current agent should attach itself directly instead of going through `discover` + `bind`.

Current proven behavior:
- inside `tmux`: attaches as a normal tmux-backed node, preserving inbound `rig send` / `rig capture`
- outside `tmux`: attaches as `external_cli`
- `--print-env` prints the `OPENRIG_NODE_ID` and `OPENRIG_SESSION_NAME` exports for the current shell

Recommended flow:

```bash
rig attach --self --rig <rigId> --node <logicalId> --print-env > /tmp/openrig-self-attach.env
. /tmp/openrig-self-attach.env
rig whoami --json
```

Notes:
- for tmux-backed self-attach, `rig whoami --json` is the right verification
- for raw/external self-attach, `rig ps --nodes --json` is currently the more reliable verification surface
- if the current shell is outside tmux, pass `--display-name <name>` when you want a stable human session label recorded

### Adopt a topology and bind live sessions

```bash
rig adopt <path> --bind <logicalId=tmuxSessionOrDiscoveryId>
rig adopt <path> --bind <logicalId=...> --bind <logicalId=...> --json
rig adopt <path> --bindings-file <bindings.yaml>
rig adopt <path> --bind <logicalId=...> --target-rig <rigId> --rig-root <root>
```

Use `rig adopt` when the sessions already exist and you want OpenRig to start managing them.

A bindings file is the durable map from authored logical IDs to live sessions. Shape:

```yaml
bindings:
  dev1.impl2: dev1.impl2@rigged-buildout
  dev1.qa: dev1.qa@rigged-buildout
```

Spec + bindings is the proven recovery pair for adopted rigs.
The spec gives OpenRig the intended topology. The bindings file tells OpenRig which discovered live session belongs to each logical node.

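The bindings-file shape is flat enough to generate mechanically, for example from names you collected out of `rig discover --json`. A hedged sketch (the session names reuse the example shape; no YAML library is needed):

```python
# Hypothetical logical-ID -> live-session map, e.g. assembled by matching
# `rig discover --json` output against the spec's logical IDs.
bindings = {
    "dev1.impl2": "dev1.impl2@rigged-buildout",
    "dev1.qa": "dev1.qa@rigged-buildout",
}

# Emit the flat `bindings:` mapping by hand; the shape needs no YAML library.
lines = ["bindings:"]
for logical_id, session in sorted(bindings.items()):
    lines.append(f"  {logical_id}: {session}")
doc = "\n".join(lines)
print(doc)  # write this to bindings.yaml for `rig adopt --bindings-file`
```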
### Proven adopted-rig recovery workflow

This workflow is proven for the case where the external tmux sessions are still alive:

```bash
rig release <rigId> --delete
rig discover --json
rig adopt <spec.yaml> --bindings-file <bindings.yaml>
```

What this does:
- removes OpenRig management without killing the sessions
- re-discovers those same sessions as unmanaged
- re-attaches them to the topology defined by the spec + bindings

Important limits:
- this is for the "sessions still alive" case only
- spec alone is not enough for adopted rigs; you also need bindings
- this does not yet mean OpenRig can recreate dead external sessions from nothing

### Add unmanaged pods into an existing rig

This is the proven workflow when a rig is already managed, but a new pod was created outside OpenRig and you want to add it later:

```bash
rig adopt <pod-fragment.yaml> --bindings-file <pod.bindings.yaml> --target-rig <rigId>
```

Use this when:
- the target rig already exists
- the new sessions are live and visible in `rig discover --json`
- you want additive topology growth, not a full rebuild

What to prepare:
- a pod fragment spec with only the new pod
- a bindings file mapping the new logical IDs to the live session names

Verification loop:

```bash
rig discover --json
rig adopt <fragment.yaml> --bindings-file <bindings.yaml> --target-rig <rigId>
rig ps --nodes --json
rig export <rigId> -o rig.yaml
```

Success looks like:
- the new sessions stop appearing in `rig discover`
- the new logical IDs appear in `rig ps --nodes --json`
- `rig export` includes the new pod

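The `rig ps --nodes --json` check in that loop can be scripted. A sketch that compares expected logical IDs against a node inventory, assuming an illustrative flat node shape (`logicalId` as a top-level field is a guess, not the shipped schema):

```python
import json

# Assumed `rig ps --nodes --json` shape -- illustrative only; the shipped
# CLI may nest or name these fields differently.
ps_output = json.loads("""
[
  {"logicalId": "dev1.impl", "sessionName": "dev1.impl@demo", "status": "running"},
  {"logicalId": "dev2.impl", "sessionName": "dev2.impl@demo", "status": "running"}
]
""")

# The logical IDs the pod fragment was supposed to add.
expected_new = {"dev2.impl", "dev2.qa"}
present = {node["logicalId"] for node in ps_output}
missing = sorted(expected_new - present)
if missing:
    print(f"adopt incomplete, missing: {missing}")
else:
    print("all new logical IDs present")
```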
### Mixed-origin rigs are allowed

One rig can contain both:
- adopted nodes bound from already-running sessions
- OpenRig-launched nodes created later with `rig expand` / `rig launch`

Current safety rule:
- `rig release` is for claimed/adopted-only rigs
- if a rig contains launched nodes, `rig release` fails with `contains_launched_nodes`

### Manager-assisted recovery

The proven operator pattern is:
- keep one OpenRig manager session outside the rig it manages
- address the target by rig name, not cached rig ID
- resolve the current owner from fresh `rig ps --nodes --json`
- send the manager the spec path, bindings path, and verification steps with `rig send`

This lets ordinary agents ask the manager for OpenRig help instead of every agent needing to be an OpenRig expert.

### Add/remove running topology parts

```bash
rig expand <rig-id> <pod-fragment-path> [--rig-root <path>] [--json]
rig launch <rigId> <nodeRef> [--json]
rig remove <rigId> <nodeRef> [--json]
rig shrink <rigId> <podRef> [--json]
rig unclaim <sessionRef> [--json]
```

## Specs and Validation

### Validate specs

```bash
rig spec validate <path> [--json]
rig spec preflight <path> [--rig-root <root>] [--json]
rig agent validate <path> [--json]
```

### Spec library

```bash
rig specs ls [--kind <kind>] [--json]
rig specs show <name-or-id> [--json]
rig specs preview <name-or-id> [--json]
rig specs add <path> [--json]
rig specs sync [--json]
rig specs remove <name-or-id> [--json]
rig specs rename <name-or-id> <new-name> [--json]
```

## MCP

```bash
rig mcp serve [--port <port>]
```

Current shipped MCP tools:
- `rig_up`
- `rig_down`
- `rig_ps`
- `rig_status`
- `rig_snapshot_create`
- `rig_snapshot_list`
- `rig_restore`
- `rig_discover`
- `rig_bind`
- `rig_bundle_inspect`
- `rig_agent_validate`
- `rig_rig_validate`
- `rig_rig_nodes`
- `rig_send`
- `rig_capture`
- `rig_chatroom_send`
- `rig_chatroom_watch`

## JSON and Error Posture

Design assumptions that hold in the shipped CLI:
- many operator commands support `--json`
- error messages are intended to say what happened, why it matters, and what to do next
- daemon-backed commands fail loudly when the daemon is stopped or unhealthy
- restore failure is not something you should silently reinterpret as success

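When scripting against this posture, treat anything other than clean JSON on a zero exit as a hard failure. A hedged sketch of such a wrapper (the stub command stands in for a real daemon-backed call such as `rig status --json`):

```python
import json
import subprocess
import sys

def run_json(cmd: list[str]) -> dict:
    """Run a --json command; fail loudly on nonzero exit or non-JSON output."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    if proc.returncode != 0:
        raise RuntimeError(f"{cmd[0]} failed ({proc.returncode}): {proc.stderr.strip()}")
    try:
        return json.loads(proc.stdout)
    except json.JSONDecodeError as err:
        raise RuntimeError(f"{cmd[0]} did not return JSON: {err}") from err

# Stub that prints a JSON document -- swap in the real `rig ... --json` command.
stub = [sys.executable, "-c", "import json; print(json.dumps({'daemon': 'running'}))"]
print(run_json(stub)["daemon"])
```

Do not wrap this in a fallback that invents a default result; a loud exception matches the CLI's own error posture.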
## After-Compaction Recovery Checklist

1. `rig whoami --json`
2. `rig transcript <your-session> --tail 100`
3. `rig ps --nodes --json`
4. `rig chatroom history <rig> --limit 50`

## Commands That Do Not Exist

Do not assume these exist unless the shipped help starts listing them:
- `rig claim`
- `rig env`
- `rig blame`
- `rig replay`
package/daemon/specs/agents/shared/skills/pods/development-team/SKILL.md
@@ -0,0 +1,149 @@
---
name: development-team
description: How the development pod coordinates implementation, QA, and design without skipping gates.
---

# Development Team

You are part of the development pod. Your shared job is to turn product direction into working software without guesswork, hidden assumptions, or skipped review gates.

## Startup sequence

Before the pod starts real implementation:
- load the packaged skills named in your role startup checklist
- run `rig whoami --json`
- confirm who is playing implementer, QA, and design in this run
- wait for the orchestrator's real assignment instead of freelancing off a partial guess

The development pod should feel like a real working pod, not three isolated agents improvising alone.

## Pod shape

The development pod may include:
- an implementer who writes the change
- a QA partner who gates every edit
- a designer who clarifies product behavior and UX before implementation fills in the blanks

Some starters only launch the implementer and QA. Others also launch a designer. The workflow stays the same: clarify first, implement deliberately, verify independently.

## Shared loop

This is the default loop for product work:

```
1. Clarify the work and the acceptance criteria
2. Implementer sends a pre-edit proposal to QA
3. QA approves or rejects with specifics
4. Implementer changes code with TDD
5. Implementer sends the diff and verification output back to QA
6. QA approves or rejects with specifics
7. If commit authority is enabled, the implementer may commit
8. If commit authority is not enabled, stop at a QA-approved working tree and report that state clearly
```

Do not skip any gates. If the task is ambiguous, resolve the ambiguity before editing.

## What the implementer must hand QA

The pre-edit proposal should include:
- the files expected to change
- the behavior or acceptance criteria being targeted
- the first failing test or verification step
- any likely edge cases or invariants

The post-edit review bundle should include:
- what changed
- the actual verification commands run
- the result of those commands
- any remaining uncertainty or follow-up risk

QA should not have to reverse-engineer what the implementer thought they were doing.

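The proposal fields above can be packed into a single `rig send` payload. An illustrative sketch (the `PRE-EDIT:` tag comes from the Communication section; the field layout itself is an assumption, not a shipped format):

```python
# Hypothetical helper for composing the pre-edit proposal message.
def pre_edit_proposal(files, criteria, first_test, risks):
    fields = [
        "files: " + ", ".join(files),
        "targets: " + criteria,
        "first failing test: " + first_test,
        "risks: " + (", ".join(risks) if risks else "none identified"),
    ]
    return "PRE-EDIT: " + " | ".join(fields)

msg = pre_edit_proposal(
    files=["src/client.ts"],
    criteria="timeouts on daemon fetch surface as loud errors",
    first_test="client.test.ts: 'fetch times out'",
    risks=["retry semantics unclear"],
)
print(msg)  # then: rig send <qa-session> "<msg>" --verify
```

Any structured layout works; what matters is that all four fields arrive in one message.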
## Implementer

Before proposing:
- read the task fully
- inspect the relevant code before promising a solution
- name the files, tests, and acceptance criteria in the proposal

After QA rejection:
- read the exact feedback
- fix the issue instead of arguing around it
- resubmit with the changes called out explicitly

## QA

QA is not a rubber stamp. QA is a product voice — not just a test gate.

When reviewing a proposal:
- reject if the scope is wrong
- check whether the planned tests actually prove the contract
- flag hidden risks and missing failure cases

When reviewing a diff:
- read the actual code, not just the summary
- verify independently when possible
- if you cannot verify independently, require real output in the review bundle and inspect it critically

If the implementer stalls on a permission or approval prompt, call that out immediately. Do not treat a blocked pane as finished implementation.

### QA dogfood mode

When QA is dogfooding (testing existing features rather than gating new code), QA works solo with full autonomy:
- find issues AND fix them in a loop
- test the fix, then move to the next issue
- only escalate architecture-level concerns to the orchestrator
- do not wait for approval to fix obvious bugs during dogfood
- report findings to the chatroom so the rig has visibility

### QA as a product voice

QA sees the product from the user's perspective. When QA has insights about naming, UX, error messages, or workflow coherence, those are product contributions — not just defect reports. The orchestrator should treat QA as a source of architecture input, not limit QA to test gating.

## Designer

When present, the designer should work ahead of implementation:
- turn vague goals into concrete flows, states, copy, and interaction choices
- surface edge cases before engineering has to guess
- review built results for coherence, not just visual polish

The designer is part of the development pod, not a decorative sidecar.

## Browser testing and dogfood tools

The development pod has access to browser automation and structured dogfood testing tools:

- **`agent-browser`** — browser automation CLI. Navigate to the daemon UI, snapshot interactive elements, take annotated screenshots, record repro videos. Use `agent-browser open <url>`, `agent-browser snapshot -i`, `agent-browser screenshot --annotate`.
- **`dogfood`** — structured exploratory testing workflow. Produces a report with screenshots, repro videos, and step-by-step evidence for every finding.
- **`containerized-e2e`** — Docker-based clean-install testing. Simulates a fresh user environment.

QA typically drives browser and dogfood testing, but both the implementer and QA should know these tools exist and can use them. When dogfooding UI:
1. Load `/agent-browser` and `/dogfood`
2. Open the daemon UI: `agent-browser open http://127.0.0.1:7433`
3. Systematically explore surfaces, take screenshots as proof
4. Report findings using the PASS/FAIL/GAP format to the chatroom

## When the pod is blocked

If the blocker is:
- ambiguity: pull in design or ask the orchestrator for clarification
- failing tests / unexpected behavior: use `systematic-debugging`
- code changes: use `test-driven-development`
- completion claims: use `verification-before-completion`

Do not hand-wave around blockers. Name them and route them.

## Communication

- Pre-edit proposal: `rig send <qa-session> "PRE-EDIT: ..." --verify`
- Review bundle: `rig send <qa-session> "REVIEW BUNDLE: ..." --verify`
- Design clarification: `rig send <design-session> "Need product/design input on ..." --verify`

## When permissions block work

If permissions block tests, file access, or commits:
1. identify the exact blocked command
2. tell the human what that prevents
3. continue with the work you can still do

Do not silently stall. Do not pretend blocked verification is complete.