@calltelemetry/openclaw-linear 0.9.0 → 0.9.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,119 +1,139 @@
1
+ <p align="center">
2
+ <img src="docs/logo.jpeg" alt="OpenClaw Linear Plugin" width="720" />
3
+ </p>
4
+
1
5
  # @calltelemetry/openclaw-linear
2
6
 
7
+ [![CI](https://github.com/calltelemetry/openclaw-linear-plugin/actions/workflows/ci.yml/badge.svg)](https://github.com/calltelemetry/openclaw-linear-plugin/actions/workflows/ci.yml)
8
+ [![codecov](https://codecov.io/gh/calltelemetry/openclaw-linear-plugin/graph/badge.svg)](https://codecov.io/gh/calltelemetry/openclaw-linear-plugin)
9
+ [![npm](https://img.shields.io/npm/v/@calltelemetry/openclaw-linear)](https://www.npmjs.com/package/@calltelemetry/openclaw-linear)
3
10
  [![OpenClaw](https://img.shields.io/badge/OpenClaw-v2026.2+-blue)](https://github.com/calltelemetry/openclaw)
4
11
  [![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](LICENSE)
5
12
 
6
13
  Connect Linear to AI agents. Issues get triaged, implemented, and audited — automatically.
7
14
 
8
- ---
9
-
10
- ## What It Does
11
-
12
- - **New issue?** Agent estimates story points, adds labels, sets priority.
13
- - **Assign to agent?** A worker implements it, an independent auditor verifies it, done.
14
- - **Comment anything?** The bot understands natural language — no magic commands needed.
15
- - **Say "close this" or "mark as done"?** Agent writes a closure report and transitions the issue to completed.
16
- - **Say "let's plan the features"?** A planner interviews you, writes user stories, and builds your full issue hierarchy.
17
- - **Plan looks good?** A different AI model automatically audits the plan before dispatch.
18
- - **Agent goes silent?** A watchdog kills it and retries automatically.
19
- - **Linear guidance?** Workspace and team-level guidance from Linear flows into every agent prompt triage, dispatch, worker, audit.
20
- - **Want updates?** Get notified on Discord, Slack, Telegram, or Signal.
15
+ > **Real human here.** I'm actively building this and beta testing it on real projects.
16
+ > Looking for feedback, bug reports, and fellow mad scientists.
17
+ > [Open an issue](https://github.com/calltelemetry/openclaw-linear-plugin/issues) with anything you find.
18
+
19
+ ### Project Status
20
+
21
+ - [x] Cloudflare tunnel setup (webhook ingress, no inbound ports)
22
+ - [x] Linear webhook sync (Comment + Issue events)
23
+ - [x] Linear API integration (issues, comments, labels, state transitions)
24
+ - [x] Agent routing (`@mentions`, natural language intent classifier)
25
+ - [ ] Linear OAuth app webhook (AgentSessionEvent created/prompted)
26
+ - [x] Auto-triage (story points, labels, priority; read-only)
27
+ - [x] Complexity-tier dispatch (small → Haiku, medium → Sonnet, high → Opus)
28
+ - [x] Isolated git worktrees per dispatch
29
+ - [x] Worker → Auditor pipeline (hard-enforced, not LLM-mediated)
30
+ - [ ] Audit rework loop (gaps fed back, automatic retry)
31
+ - [ ] Watchdog timeout + escalation
32
+ - [x] Webhook deduplication (60s sliding window across session/comment/assignment)
33
+ - [ ] Multi-repo worktree support
34
+ - [ ] Project planner (interview → user stories → sub-issues → DAG dispatch)
35
+ - [ ] Cross-model plan review (Claude ↔ Codex ↔ Gemini)
36
+ - [ ] Issue closure with summary report
37
+ - [ ] Sub-issue decomposition (orchestrator-level only)
38
+ - [x] `spawn_agent` / `ask_agent` sub-agent tools
39
+ - [ ] **Worktree → PR merge** — `createPullRequest()` exists but is not wired into the pipeline. After an audit passes, commits sit on a `codex/{identifier}` branch. You create the PR manually.
40
+ - [ ] **Sub-agent worktree sharing** — Sub-agents spawned via `spawn_agent`/`ask_agent` do not inherit the parent worktree. They run in their own session without code access.
41
+ - [ ] **Parallel worktree conflict resolution** — DAG dispatch runs up to 3 issues concurrently in separate worktrees, but there's no merge conflict detection across them.
21
42
 
22
43
  ---
23
44
 
24
- ## Quick Start
45
+ ## Why This Exists
25
46
 
26
- ### 1. Install the plugin
47
+ Linear is a great project tracker. But it doesn't orchestrate AI agents — it just gives you issues, comments, and sessions. Without something bridging that gap, every stage of an AI-driven workflow requires a human in the loop: copy the issue context, start an agent, wait, read the output, decide what's next, start another agent, paste in the feedback, repeat. That's not autonomous — that's babysitting.
27
48
 
28
- ```bash
29
- openclaw plugins install @calltelemetry/openclaw-linear
30
- ```
49
+ This plugin makes the full lifecycle hands-off:
31
50
 
32
- ### 2. Expose the gateway (Cloudflare Tunnel)
51
+ ```
52
+ You create an issue
53
+          │
54
+          ▼
55
+ Agent triages it ──── estimate, labels, priority
56
+          │
57
+          ▼
58
+ You assign it
59
+          │
60
+          ▼
61
+ Plugin dispatches ─── picks model tier, creates worktree
62
+          │
63
+          ▼
64
+ Worker implements ─── code, tests, commits
65
+          │
66
+          ▼
67
+ Auditor verifies ──── independent, hard-enforced
68
+
69
+      ┌───┴───┐
70
+      ▼       ▼
71
+    Done    Rework ──── gaps fed back, retry automatic
72
+ ```
33
73
 
34
- Linear sends webhook events over the public internet, so the gateway must be reachable via HTTPS. A [Cloudflare Tunnel](https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/) is the recommended approach — no open ports, no TLS cert management, no static IP required.
74
+ You work in Linear. The agents handle the rest.
35
75
 
36
- ```mermaid
37
- flowchart TB
38
- subgraph Internet
39
- LW["Linear Webhooks<br/><i>Comment, Issue, AgentSession</i>"]
40
- LO["Linear OAuth<br/><i>callback redirect</i>"]
41
- You["You<br/><i>browser, curl</i>"]
42
- end
76
+ **What Linear can't do on its own — and what this plugin handles:**
43
77
 
44
- subgraph CF["Cloudflare Edge"]
45
- TLS["TLS termination<br/>DDoS protection"]
46
- end
78
+ | Gap | What the plugin does |
79
+ |---|---|
80
+ | **No agent orchestration** | Assesses complexity, picks the right model tier, creates isolated worktrees, runs workers, triggers audits, processes verdicts — all from a single issue assignment |
81
+ | **No independent verification** | Hard-enforces a worker → auditor boundary in plugin code. The worker cannot mark its own work done. The audit is not optional and not LLM-mediated. |
82
+ | **No failure recovery** | Watchdog kills hung agents after configurable silence. Feeds audit failures back as rework context. Escalates when retries are exhausted. |
83
+ | **No multi-agent routing** | Routes `@mentions` and natural language ("hey kaylee look at this") to specific agents. Intent classifier handles plan requests, questions, close commands, and work requests. |
84
+ | **No project-scale planning** | Planner interviews you, creates issues with user stories and acceptance criteria, runs a cross-model review, then dispatches the full dependency graph — up to 3 issues in parallel. |
47
85
 
48
- subgraph Server["Your Server"]
49
- CD["cloudflared<br/><i>outbound-only tunnel</i>"]
50
- GW["openclaw-gateway<br/><i>localhost:18789</i>"]
51
- end
86
+ The end result: you create issues, assign them, and comment in plain English. The agents do the rest — or tell you when they can't.
52
87
 
53
- LW -- "POST /linear/webhook" --> TLS
54
- LO -- "GET /linear/oauth/callback" --> TLS
55
- You -- "HTTPS" --> TLS
56
- TLS -- "tunnel" --> CD
57
- CD -- "HTTP" --> GW
58
- ```
88
+ ---
59
89
 
60
- **How it works:** `cloudflared` opens an outbound connection to Cloudflare's edge and keeps it alive. Cloudflare routes incoming HTTPS requests for your hostname back through the tunnel to `localhost:18789`. No inbound firewall rules needed.
90
+ ## Features
61
91
 
62
- #### Setup
92
+ ### Core Pipeline
63
93
 
64
- ```bash
65
- # Install cloudflared
66
- # RHEL/AlmaLinux:
67
- sudo dnf install cloudflared
68
- # Or download: https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/downloads/
94
+ - **Auto-triage** — New issues get story point estimates, labels, and priority within seconds. Read-only mode — no side effects.
95
+ - **Worker → Auditor pipeline** — Assign an issue and a worker implements it in an isolated git worktree. An independent auditor verifies the work. The worker cannot self-certify — the audit is hard-enforced in plugin code.
96
+ - **Complexity-tier dispatch** — The plugin assesses each issue and picks the right model. Simple typo? Haiku. Multi-service refactor? Opus. Saves cost and latency without manual intervention.
97
+ - **Automatic rework** — Failed audits feed gaps back to the worker as context. Retries up to N times before escalating. No human needed until the agents are stuck.
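
The tier logic above can be sketched as a simple lookup. This is illustrative only — the tier names and the Haiku model id are assumptions; the Sonnet and Opus ids follow the dispatch table later in this README, and the plugin's real internals may differ.

```typescript
// Hypothetical sketch of complexity-tier model selection.
type Tier = "low" | "medium" | "high";

const MODEL_BY_TIER: Record<Tier, string> = {
  low: "claude-haiku-4-5",     // hypothetical id — simple typos, one-liners
  medium: "claude-sonnet-4-6", // standard features, multi-file changes
  high: "claude-opus-4-6",     // complex refactors, architecture changes
};

function pickModel(tier: Tier): string {
  return MODEL_BY_TIER[tier];
}
```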
69
98
 
70
- # Authenticate (opens browser, saves cert to ~/.cloudflared/)
71
- cloudflared tunnel login
99
+ ### Planning & Closure
72
100
 
73
- # Create a named tunnel
74
- cloudflared tunnel create openclaw-linear
75
- # Note the tunnel UUID from the output (e.g., da1f21bf-856e-49ea-83c2-d210092d96be)
76
- ```
101
+ - **Project planner** — Comment "plan this project" and the agent interviews you, builds user stories with acceptance criteria, creates the full issue hierarchy, and dispatches in dependency order — up to 3 issues in parallel.

102
+ - **Cross-model review** — Plans are automatically audited by a different AI model (Claude ↔ Codex ↔ Gemini) before dispatch. Two perspectives, one plan.
103
+ - **Issue closure** — Say "close this" or "mark as done" and the agent generates a closure report and transitions the issue to completed.
104
+ - **Sub-issue decomposition** — Orchestrators and the planner break complex work into sub-issues via `linear_issues`. Sub-issues inherit team and project from the parent automatically.
77
105
 
78
- #### Configure the tunnel
106
+ ### Multi-Agent & Routing
79
107
 
80
- Create `/etc/cloudflared/config.yml` (system-wide) or `~/.cloudflared/config.yml` (user):
108
+ - **Named agents** — Define agents with different roles and expertise. Route work by `@mention` or natural language ("hey kaylee look at this").
109
+ - **Intent classification** — An LLM classifier (~300 tokens, ~2s) understands what you want from any comment. Regex fallback if the classifier fails.
110
+ - **One-time detour** — `@mention` a different agent in a session and it handles that single interaction. The session stays with the original agent.
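
The classifier-with-fallback behavior described above can be sketched like this. Everything here is hypothetical — the intent labels, function names, and regex rules are illustrative, not the plugin's actual API.

```typescript
// Sketch: try the LLM classifier first, fall back to regex rules.
type Intent = "plan" | "question" | "close" | "work";

const FALLBACK_RULES: Array<[RegExp, Intent]> = [
  [/\b(plan|roadmap)\b/i, "plan"],
  [/\b(close|done|resolved)\b/i, "close"],
  [/\?\s*$/, "question"],
];

function classifyWithFallback(
  comment: string,
  llmClassify?: (text: string) => Intent | undefined,
): Intent {
  try {
    // LLM classifier (~300 tokens, ~2s in the real plugin)
    const llm = llmClassify?.(comment);
    if (llm) return llm;
  } catch {
    // classifier failed — fall through to regex
  }
  for (const [re, intent] of FALLBACK_RULES) {
    if (re.test(comment)) return intent;
  }
  return "work"; // default: treat the comment as a work request
}
```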
81
111
 
82
- ```yaml
83
- tunnel: <your-tunnel-uuid>
84
- credentials-file: /home/<user>/.cloudflared/<your-tunnel-uuid>.json
112
+ ### Multi-Backend & Multi-Repo
85
113
 
86
- ingress:
87
- - hostname: your-domain.com
88
- service: http://localhost:18789
89
- - service: http_status:404 # catch-all, reject unmatched requests
90
- ```
114
+ - **Three coding backends** — Codex (OpenAI), Claude (Anthropic), Gemini (Google). Configurable globally or per-agent. The agent writes the prompt; the plugin handles backend selection.
115
+ - **Multi-repo dispatch** — Tag an issue with `<!-- repos: api, frontend -->` and the worker gets isolated worktrees for each repo. One issue, multiple codebases, one agent session.
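
Parsing the `<!-- repos: api, frontend -->` tag amounts to one regex over the issue description. The comment syntax matches this README; the function name and behavior are an illustrative sketch, not the plugin's real code.

```typescript
// Hypothetical parser for the multi-repo issue tag.
function parseRepoTag(description: string): string[] {
  const m = description.match(/<!--\s*repos:\s*([^>]+?)\s*-->/i);
  if (!m) return []; // no tag: dispatch against the default repo
  return m[1].split(",").map((r) => r.trim()).filter(Boolean);
}
```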
91
116
 
92
- #### DNS
117
+ ### Operations
93
118
 
94
- Point your hostname to the tunnel:
119
+ - **Linear Guidance** — Workspace and team-level guidance configured in Linear's admin UI flows into every agent prompt — triage, dispatch, worker, audit. Admins steer agent behavior without touching config files.
120
+ - **Watchdog** — Kills agents that go silent after configurable inactivity. Retries once, then escalates. Covers LLM hangs, API timeouts, and CLI lockups.
121
+ - **Notifications** — Dispatch lifecycle events (started, auditing, done, stuck) to Discord, Slack, Telegram, or Signal. Rich formatting optional.
122
+ - **Webhook deduplication** — Two-tier guard (in-memory set + 60s TTL map) prevents double-processing across Linear's two webhook systems.
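
The two-tier guard can be sketched as a small class: an in-memory set for events currently in flight, plus a TTL map for events seen in the last 60 seconds. Names and shape are illustrative, not the plugin's actual implementation.

```typescript
// Sketch of the two-tier webhook dedup guard described above.
class WebhookDedup {
  private inFlight = new Set<string>();
  private seen = new Map<string, number>(); // key -> expiry timestamp (ms)

  constructor(private ttlMs = 60_000) {}

  /** Returns true if the event should be processed, false if it's a duplicate. */
  claim(key: string, now = Date.now()): boolean {
    // Tier 1: currently being processed
    if (this.inFlight.has(key)) return false;
    // Tier 2: processed within the sliding TTL window
    const expiry = this.seen.get(key);
    if (expiry !== undefined && expiry > now) return false;
    this.inFlight.add(key);
    this.seen.set(key, now + this.ttlMs);
    return true;
  }

  release(key: string): void {
    this.inFlight.delete(key);
  }
}
```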
95
123
 
96
- ```bash
97
- cloudflared tunnel route dns <your-tunnel-uuid> your-domain.com
98
- ```
124
+ ---
99
125
 
100
- This creates a CNAME record in Cloudflare DNS. You can also do this manually in the Cloudflare dashboard.
126
+ ## Quick Start
101
127
 
102
- #### Run as a service
128
+ ### 1. Install the plugin
103
129
 
104
130
  ```bash
105
- # Install as system service (recommended for production)
106
- sudo cloudflared service install
107
- sudo systemctl enable --now cloudflared
108
-
109
- # Verify
110
- curl -s https://your-domain.com/linear/webhook \
111
- -X POST -H "Content-Type: application/json" \
112
- -d '{"type":"test","action":"ping"}'
113
- # Should return: "ok"
131
+ openclaw plugins install @calltelemetry/openclaw-linear
114
132
  ```
115
133
 
116
- > **Tip:** Keep the tunnel running at all times. If `cloudflared` stops, Linear webhook deliveries will fail silently — the gateway won't know about new issues, comments, or agent sessions until the tunnel is restored.
134
+ ### 2. Expose the gateway
135
+
136
+ Linear delivers webhooks over the public internet, so the gateway needs a public HTTPS URL. See [Tunnel Setup (Cloudflare)](#tunnel-setup-cloudflare) for the recommended approach. Any reverse proxy or tunnel that forwards HTTPS to `localhost:18789` will work.
117
137
 
118
138
  ### 3. Create a Linear OAuth app
119
139
 
@@ -180,69 +200,217 @@ That's it. Create an issue in Linear and watch the agent respond.
180
200
 
181
201
  ---
182
202
 
183
- ## How It Works — Step by Step
203
+ ## Tunnel Setup (Cloudflare)
184
204
 
185
- Every issue moves through a clear pipeline. Here's the full interaction flow between you, Linear, the plugin, and the agents:
205
+ Linear delivers webhooks over the public internet. The gateway listens on `localhost:18789` and needs a public HTTPS endpoint. A [Cloudflare Tunnel](https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/) is the recommended approach — no open ports, no TLS cert management, no static IP required.
186
206
 
187
207
  ```mermaid
188
- sequenceDiagram
189
- participant You
190
- participant Linear
191
- participant Plugin
192
- participant Agents
208
+ flowchart TB
209
+ subgraph Internet
210
+ LW["Linear Webhooks<br/><i>Comment, Issue, AgentSession</i>"]
211
+ LO["Linear OAuth<br/><i>callback redirect</i>"]
212
+ end
213
+
214
+ subgraph CF["Cloudflare Edge"]
215
+ TLS["TLS termination + DDoS protection"]
216
+ end
217
+
218
+ subgraph Server["Your Server"]
219
+ CD["cloudflared<br/><i>outbound-only tunnel</i>"]
220
+ GW["openclaw-gateway<br/><i>localhost:18789</i>"]
221
+ end
222
+
223
+ LW -- "POST /linear/webhook" --> TLS
224
+ LO -- "GET /linear/oauth/callback" --> TLS
225
+ TLS -- "tunnel" --> CD
226
+ CD -- "HTTP" --> GW
227
+ ```
228
+
229
+ **How it works:** `cloudflared` opens an outbound connection to Cloudflare's edge and keeps it alive. Cloudflare routes incoming HTTPS requests for your hostname back through the tunnel to `localhost:18789`. No inbound firewall rules needed.
230
+
231
+ ### Install cloudflared
232
+
233
+ ```bash
234
+ # RHEL / Rocky / Alma
235
+ sudo dnf install -y cloudflared
236
+
237
+ # Debian / Ubuntu
238
+ curl -fsSL https://pkg.cloudflare.com/cloudflare-main.gpg | sudo tee /usr/share/keyrings/cloudflare-main.gpg >/dev/null
239
+ echo "deb [signed-by=/usr/share/keyrings/cloudflare-main.gpg] https://pkg.cloudflare.com/cloudflared $(lsb_release -cs) main" \
240
+ | sudo tee /etc/apt/sources.list.d/cloudflared.list
241
+ sudo apt update && sudo apt install -y cloudflared
242
+
243
+ # macOS
244
+ brew install cloudflare/cloudflare/cloudflared
245
+ ```
246
+
247
+ ### Authenticate with Cloudflare
248
+
249
+ ```bash
250
+ cloudflared tunnel login
251
+ ```
252
+
253
+ This opens your browser. You must:
254
+ 1. Log in to your Cloudflare account
255
+ 2. **Select the domain** (zone) for the tunnel (e.g., `yourdomain.com`)
256
+ 3. Click **Authorize**
257
+
258
+ Cloudflare writes an origin certificate to `~/.cloudflared/cert.pem`. This cert grants `cloudflared` permission to create tunnels and DNS records under that domain.
259
+
260
+ > **Prerequisite:** Your domain must already be on Cloudflare (nameservers pointed to Cloudflare). If it's not, add it in the Cloudflare dashboard first.
261
+
262
+ ### Create a tunnel
263
+
264
+ ```bash
265
+ cloudflared tunnel create openclaw-linear
266
+ ```
267
+
268
+ This outputs a **Tunnel ID** (UUID like `da1f21bf-856e-...`) and writes credentials to `~/.cloudflared/<TUNNEL_ID>.json`.
269
+
270
+ ### DNS — point your hostname to the tunnel
271
+
272
+ ```bash
273
+ cloudflared tunnel route dns openclaw-linear linear.yourdomain.com
274
+ ```
275
+
276
+ This creates a CNAME record in Cloudflare DNS: `linear.yourdomain.com → <TUNNEL_ID>.cfargotunnel.com`. You can verify it in the Cloudflare dashboard under **DNS > Records**. You can also create this record manually.
277
+
278
+ The hostname you choose here is what you'll use for **both** webhook URLs and the OAuth redirect URI in Linear. Make sure they all match.
279
+
280
+ ### Configure the tunnel
281
+
282
+ Create `/etc/cloudflared/config.yml` (system-wide) or `~/.cloudflared/config.yml` (user):
283
+
284
+ ```yaml
285
+ tunnel: <TUNNEL_ID>
286
+ credentials-file: /home/<user>/.cloudflared/<TUNNEL_ID>.json
287
+
288
+ ingress:
289
+ - hostname: linear.yourdomain.com
290
+ service: http://localhost:18789
291
+ - service: http_status:404 # catch-all, reject unmatched requests
292
+ ```
293
+
294
+ The `ingress` rule routes all traffic for your hostname to the gateway on localhost. The catch-all `http_status:404` rejects requests for any other hostname.
295
+
296
+ ### Run as a service
297
+
298
+ ```bash
299
+ # Install as system service (recommended for production)
300
+ sudo cloudflared service install
301
+ sudo systemctl enable --now cloudflared
302
+ ```
303
+
304
+ To test without installing as a service:
193
305
 
194
- You->>Linear: Create issue
195
- Linear->>Plugin: Webhook (Issue.create)
196
- Plugin->>Agents: Triage agent
197
- Agents-->>Plugin: Estimate + labels
198
- Plugin-->>Linear: Update issue
199
- Plugin-->>Linear: Post assessment
306
+ ```bash
307
+ cloudflared tunnel run openclaw-linear
308
+ ```
200
309
 
201
- You->>Linear: Assign to agent
202
- Linear->>Plugin: Webhook (Issue.update)
203
- Plugin->>Agents: Worker agent
204
- Agents-->>Linear: Streaming status
205
- Plugin->>Agents: Audit agent (automatic)
206
- Agents-->>Plugin: JSON verdict
207
- Plugin-->>Linear: Result comment
310
+ ### Verify end-to-end
208
311
 
209
- You->>Linear: Comment "@kaylee review"
210
- Linear->>Plugin: Webhook (Comment)
211
- Plugin->>Agents: Kaylee agent
212
- Agents-->>Plugin: Response
213
- Plugin-->>Linear: Branded comment
312
+ ```bash
313
+ curl -s https://linear.yourdomain.com/linear/webhook \
314
+ -X POST -H "Content-Type: application/json" \
315
+ -d '{"type":"test","action":"ping"}'
316
+ # Should return: "ok"
214
317
  ```
215
318
 
216
- Here's what each stage does, and what you'll see in Linear:
319
+ > **Tip:** Keep the tunnel running at all times. If `cloudflared` stops, Linear webhook deliveries will fail silently — the gateway won't know about new issues, comments, or agent sessions until the tunnel is restored.
320
+
321
+ ---
322
+
323
+ ## How It Works — Step by Step
324
+
325
+ A project goes through a complete lifecycle — from planning to implementation to closure. Here's every phase, what triggers it, and what you'll see in Linear.
217
326
 
218
327
  ```mermaid
219
328
  flowchart LR
220
- A["Triage<br/><i>(auto)</i>"] --> B["Dispatch<br/><i>(you assign)</i>"]
221
- B --> C["Worker<br/><i>(auto)</i>"]
222
- C --> D["Audit<br/><i>(auto)</i>"]
223
- D --> E["Done ✔"]
224
- D --> F["Rework<br/><i>(auto retry)</i>"]
225
- D --> G["Needs Your<br/>Help ⚠<br/><i>(escalated)</i>"]
226
- F --> C
329
+ P["Plan<br/><i>(optional)</i>"] --> T["Triage<br/><i>(auto)</i>"]
330
+ T --> D["Dispatch<br/><i>(assign)</i>"]
331
+ D --> W["Worker<br/><i>(auto)</i>"]
332
+ W --> A["Audit<br/><i>(auto)</i>"]
333
+ A --> Done["Done ✔"]
334
+ A --> R["Rework<br/><i>(auto retry)</i>"]
335
+ A --> S["Escalate ⚠"]
336
+ R --> W
337
+ Done --> CL["Close<br/><i>(comment or auto)</i>"]
227
338
  ```
228
339
 
229
- ### Stage 1: Triage (automatic)
340
+ ### Phase 1: Planning (optional)
341
+
342
+ **Trigger:** Comment "let's plan the features" on a project issue.
343
+
344
+ For larger work, the planner breaks a project into issues before any code is written. It enters **interview mode** — asking questions, creating issues with user stories and acceptance criteria, and building a dependency graph in real time.
345
+
346
+ ```mermaid
347
+ sequenceDiagram
348
+ actor Human
349
+ participant Linear
350
+ participant Plugin
351
+ participant Planner as Planner Agent
352
+ participant Reviewer as Cross-Model Reviewer
353
+
354
+ Human->>Linear: "plan this project"
355
+ Plugin->>Planner: start interview
356
+ loop Until plan is complete
357
+ Planner-->>Linear: question
358
+ Human->>Linear: reply
359
+ Planner-->>Linear: create/update issues
360
+ end
361
+ Human->>Linear: "looks good"
362
+ Plugin->>Plugin: validate DAG + descriptions
363
+ Plugin->>Reviewer: cross-model audit
364
+ Reviewer-->>Plugin: recommendations
365
+ Plugin-->>Linear: summary + ask for approval
366
+ Human->>Linear: "approve plan"
367
+ Plugin-->>Linear: dispatch issues in dependency order
368
+ ```
369
+
370
+ The planner proactively asks for:
371
+ - **User stories** — "As a [role], I want [feature] so that [benefit]"
372
+ - **Acceptance criteria** — Given/When/Then format
373
+ - **UAT test scenarios** — How to manually verify the feature
374
+
375
+ **What you'll see in Linear:**
376
+
377
+ > I've created 3 issues:
378
+ > - **PROJ-2:** Build search API endpoint (3 pts, blocks PROJ-3)
379
+ > - **PROJ-3:** Search results page (2 pts, blocked by PROJ-2)
380
+ > - **PROJ-4:** Autocomplete suggestions (1 pt, independent)
381
+ >
382
+ > Does that cover it? Should the autocomplete call a separate endpoint or share the search API?
383
+
384
+ When you say "looks good", the planner validates the plan (descriptions, estimates, no circular deps) and sends it to a **different AI model** for a cross-model review:
385
+
386
+ | Your primary model | Auto-reviewer |
387
+ |---|---|
388
+ | Claude / Anthropic | Codex |
389
+ | Codex / OpenAI | Gemini |
390
+ | Gemini / Google | Codex |
391
+ | Other (Kimi, Mistral, etc.) | Gemini |
392
+
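The table above reduces to a small lookup. This sketch matches the table row for row; the function name and the backend identifiers are assumptions, not the plugin's config keys.

```typescript
// Illustrative reviewer selection: always the complement of your primary model.
function pickReviewer(primary: string): "codex" | "gemini" {
  const p = primary.toLowerCase();
  if (p.includes("claude") || p.includes("anthropic")) return "codex";
  if (p.includes("codex") || p.includes("openai")) return "gemini";
  if (p.includes("gemini") || p.includes("google")) return "codex";
  return "gemini"; // Other (Kimi, Mistral, etc.)
}
```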
393
+ After approval, issues are dispatched automatically in dependency order — up to 3 in parallel.
394
+
395
+ > `📊 Search Feature: 2/3 complete`
396
+
397
+ ### Phase 2: Triage (automatic)
230
398
 
231
- **Trigger:** You create a new issue.
399
+ **Trigger:** A new issue is created (manually or by the planner).
232
400
 
233
- The agent reads your issue, estimates story points, adds labels, sets priority, and posts an assessment comment — all within seconds. Triage runs in **read-only mode** (no file writes, no code execution) to prevent side effects.
401
+ The agent reads the issue, estimates story points, adds labels, sets priority, and posts an assessment comment — all within seconds. Triage runs in **read-only mode** (no file writes, no code execution) to prevent side effects.
234
402
 
235
403
  **What you'll see in Linear:**
236
404
 
237
405
  > **[Mal]** This looks like a medium complexity change — the search API integration touches both the backend GraphQL schema and the frontend query layer. I've estimated 3 points and tagged it `backend` + `frontend`.
238
406
 
239
- The estimate, labels, and priority are applied silently to the issue fields. You don't need to do anything.
407
+ The estimate, labels, and priority are applied silently to the issue fields.
240
408
 
241
- ### Stage 2: Dispatch (you assign the issue)
409
+ ### Phase 3: Dispatch (assign the issue)
242
410
 
243
- **Trigger:** You assign the issue to the agent (or it gets auto-assigned after planning).
411
+ **Trigger:** The issue is assigned to the agent (manually or auto-assigned after planning).
244
412
 
245
- The agent assesses complexity, picks an appropriate model, creates an isolated git worktree, and starts working.
413
+ The plugin assesses complexity, picks an appropriate model tier, creates an isolated git worktree, and starts the worker.
246
414
 
247
415
  **What you'll see in Linear:**
248
416
 
@@ -267,25 +435,23 @@ The agent assesses complexity, picks an appropriate model, creates an isolated g
267
435
  | Medium | claude-sonnet-4-6 | Standard features, multi-file changes |
268
436
  | High | claude-opus-4-6 | Complex refactors, architecture changes |
269
437
 
270
- ### Stage 3: Implementation (automatic)
438
+ ### Phase 4: Implementation (automatic)
271
439
 
272
- The worker agent reads the issue, plans its approach, writes code, and runs tests — all in the isolated worktree. You don't need to do anything during this stage.
440
+ The worker agent reads the issue, plans its approach, writes code, and runs tests — all in the isolated worktree.
273
441
 
274
442
  If this is a **retry** after a failed audit, the worker gets the previous audit feedback as context so it knows exactly what to fix.
275
443
 
276
- **Notifications you'll receive:**
277
- > `ENG-100 working on it (attempt 1)`
444
+ **Notification:** `ENG-100 working on it (attempt 1)`
278
445
 
279
- ### Stage 4: Audit (automatic)
446
+ ### Phase 5: Audit (automatic)
280
447
 
281
- After the worker finishes, a separate auditor agent independently verifies the work. The auditor checks the issue requirements against what was actually implemented.
448
+ After the worker finishes, a separate auditor agent independently verifies the work — checking issue requirements against what was actually implemented, running tests, and reviewing the diff.
282
449
 
283
450
  This is **not optional** — the worker cannot mark its own work as done. The audit is triggered by the plugin, not by the AI.
284
451
 
285
- **Notifications you'll receive:**
286
- > `ENG-100 checking the work...`
452
+ **Notification:** `ENG-100 checking the work...`
287
453
 
288
- ### Stage 5: Verdict
454
+ ### Phase 6: Verdict
289
455
 
290
456
  The audit produces one of three outcomes:
291
457
 
@@ -313,7 +479,7 @@ The issue is marked done automatically. A summary is posted.
313
479
 
314
480
  #### Fail (retries left) — Automatic rework
315
481
 
316
- The worker gets the audit feedback and tries again. You don't need to do anything.
482
+ The worker gets the audit feedback and tries again automatically.
317
483
 
318
484
  **What you'll see in Linear:**
319
485
 
@@ -331,9 +497,9 @@ The worker gets the audit feedback and tries again. You don't need to do anythin
331
497
 
332
498
  **Notification:** `ENG-100 needs more work (attempt 1). Issues: missing validation, no empty query test`
333
499
 
334
- #### Fail (no retries left) — Needs your help
500
+ #### Fail (no retries left) — Escalation
335
501
 
336
- After all retries are exhausted (default: 3 attempts), the issue is escalated to you.
502
+ After all retries are exhausted (default: 3 attempts), the issue is escalated.
337
503
 
338
504
  **What you'll see in Linear:**
339
505
 
@@ -354,26 +520,52 @@ After all retries are exhausted (default: 3 attempts), the issue is escalated to
354
520
 
355
521
  **Notification:** `🚨 ENG-100 needs your help — couldn't fix it after 3 tries`
356
522
 
357
- **What you can do:**
523
+ **Options:**
358
524
  1. **Clarify the issue** — Add more detail to the description, then re-assign to try again
359
525
  2. **Fix it yourself** — The agent's work is in the worktree, ready to edit
360
526
  3. **Force retry** — `/dispatch retry ENG-100`
361
527
  4. **Check logs** — Worker output in `.claw/worker-*.md`, audit verdicts in `.claw/audit-*.json`
362
528
 
363
- ### Stage 6: Timeout (if the agent goes silent)
529
+ ### Phase 7: Closure
364
530
 
365
- If the agent produces no output for 2 minutes (configurable), the watchdog kills it and retries once. If the retry also times out, the issue is escalated.
531
+ **Trigger:** Comment "close this", "mark as done", or "this is resolved" on any issue.
532
+
533
+ The plugin generates a closure report and transitions the issue to completed. This is a **static action** — the plugin orchestrates the API calls directly; the agent only writes the report text.
534
+
535
+ ```mermaid
536
+ flowchart LR
537
+ A["'close this'"] --> B["Fetch issue details"]
538
+ B --> C["Generate closure report<br/><i>(read-only agent)</i>"]
539
+ C --> D["Transition → completed"]
540
+ D --> E["Post report to issue"]
541
+ ```
366
542
 
367
543
  **What you'll see in Linear:**
368
544
 
369
- > ## Agent Timed Out
545
+ > ## Closed
370
546
  >
371
- > The agent stopped responding for over 120s and was automatically restarted, but the retry also failed.
547
+ > This issue has been reviewed and closed.
372
548
  >
373
- > **What to do:** Re-assign this issue to try again. If it keeps timing out, the issue might be too complex try breaking it into smaller issues.
549
+ > **Summary:** The search API endpoint was implemented with pagination, input validation, and error handling. All 14 tests pass. The frontend search page renders results correctly.
550
+
551
+ ### Timeout recovery
552
+
553
+ If an agent produces no output for 2 minutes (configurable), the watchdog kills it and retries once. If the retry also times out, the issue is escalated.
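
The kill-once-then-escalate decision can be sketched as a pure function: silence past the timeout triggers one retry, and a second timeout escalates. The function name and signature are hypothetical; the 120-second default matches the text above.

```typescript
// Illustrative watchdog decision logic (not the plugin's actual API).
type WatchdogVerdict = "ok" | "retry" | "escalate";

function watchdogCheck(
  lastOutputAt: number, // ms timestamp of the agent's last output
  now: number,
  attempt: number,      // 0 on the first run, 1 after the automatic retry
  timeoutMs = 120_000,
): WatchdogVerdict {
  if (now - lastOutputAt < timeoutMs) return "ok";
  return attempt === 0 ? "retry" : "escalate";
}
```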
374
554
 
375
555
  **Notification:** `⚡ ENG-100 timed out (no activity for 120s). Will retry.`
376
556
 
557
+ ### Project-level progress
558
+
559
+ When issues are dispatched from a plan, you get project-level progress tracking:
560
+
561
+ > `📊 Search Feature: 2/3 complete`
562
+
563
+ When everything is done:
564
+
565
+ > `✅ Search Feature: complete (3/3 issues)`
566
+
567
+ If an issue gets stuck, dependent issues are blocked and you're notified.
568
+
377
569
  ### What's in the worktree
378
570
 
379
571
  Every dispatch creates a `.claw/` folder inside the worktree with everything the agent did:
@@ -486,73 +678,16 @@ The webhook handler prevents double-processing through a two-tier guard system:

  ---

- ## Planning a Project
-
- For larger work, the planner helps you break a project into issues with dependencies, then dispatches them automatically.
-
- ### Start planning
-
- Comment on any issue that belongs to a Linear project — use natural language:
-
- > "let's plan out the features for this project"
-
- The planner enters **interview mode** and asks you questions one at a time:
-
- > I'm entering planning mode for **Search Feature**. I'll interview you about the features you want to build, then structure everything into Linear issues.
- >
- > Let's start — what is this project about, and what are the main feature areas?
-
- ### Build the plan
-
- Reply with your ideas. The planner creates issues with **user stories** and **acceptance criteria**, sets dependencies, and asks follow-up questions:
-
- > I've created 3 issues:
- > - **PROJ-2:** Build search API endpoint (3 pts, blocks PROJ-3)
- > - **PROJ-3:** Search results page (2 pts, blocked by PROJ-2)
- > - **PROJ-4:** Autocomplete suggestions (1 pt, independent)
- >
- > For PROJ-2, here's what I wrote for acceptance criteria:
- > - *Given* a user sends a search query, *When* results exist, *Then* they are returned with pagination
- >
- > Does that cover it? Should the autocomplete call a separate endpoint or share the search API?
-
- The planner proactively asks for:
- - **User stories** — "As a [role], I want [feature] so that [benefit]"
- - **Acceptance criteria** — Given/When/Then format
- - **UAT test scenarios** — How to manually verify the feature
+ ## Planning Validation Details

- Keep replying until the plan looks right. The planner updates issues in real time.
+ See [Phase 1: Planning](#phase-1-planning-optional) for the full walkthrough. This section covers the validation rules that run when you say "finalize plan".

- ### Finalize & Cross-Model Review
+ ### Validation checks

- When you're happy, say something like "looks good" or "finalize plan". The planner runs a validation check:
  - Every issue has a description (50+ characters) with acceptance criteria
  - Every non-epic issue has an estimate and priority
  - No circular dependencies in the DAG
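The circular-dependency check amounts to cycle detection over the issue dependency graph. A minimal sketch of how such a check could work — the `PlannedIssue` shape and `findCycle` helper are hypothetical illustrations, not the plugin's internals:

```typescript
// Illustrative only — hypothetical issue shape, not the plugin's real types.
type PlannedIssue = { id: string; blockedBy: string[] };

// Depth-first search over `blockedBy` edges; a back edge means a cycle.
function findCycle(issues: PlannedIssue[]): string[] | null {
  const edges = new Map(
    issues.map((i): [string, string[]] => [i.id, i.blockedBy]),
  );
  const state = new Map<string, "visiting" | "done">();
  const stack: string[] = [];

  const visit = (id: string): string[] | null => {
    if (state.get(id) === "done") return null;
    if (state.get(id) === "visiting") {
      // Back edge found — the cycle is the stack suffix starting at `id`.
      return [...stack.slice(stack.indexOf(id)), id];
    }
    state.set(id, "visiting");
    stack.push(id);
    for (const dep of edges.get(id) ?? []) {
      const cycle = visit(dep);
      if (cycle) return cycle;
    }
    stack.pop();
    state.set(id, "done");
    return null;
  };

  for (const issue of issues) {
    const cycle = visit(issue.id);
    if (cycle) return cycle;
  }
  return null;
}
```

A valid plan returns `null`; a plan where PROJ-2 blocks PROJ-3 and PROJ-3 blocks PROJ-2 returns the offending chain.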
 
- **If validation passes, a cross-model review runs automatically:**
-
- > ## Plan Passed Checks
- >
- > **3 issues** with valid dependency graph.
- >
- > Let me have **Codex** audit this and make recommendations.
-
- A different AI model (always the complement of your primary model) reviews the plan for gaps:
-
- | Your primary model | Auto-reviewer |
- |---|---|
- | Claude / Anthropic | Codex |
- | Codex / OpenAI | Gemini |
- | Gemini / Google | Codex |
- | Other (Kimi, Mistral, etc.) | Gemini |
-
- After the review, the planner summarizes recommendations and asks you to approve:
-
- > Codex suggested adding error handling scenarios to PROJ-2 and noted PROJ-4 could be split into frontend/backend. I've updated PROJ-2's acceptance criteria. The PROJ-4 split is optional — your call.
- >
- > If you're happy with this plan, say **approve plan** to start dispatching.
-
  **If validation fails:**

  > ## Plan Audit Failed
@@ -566,19 +701,7 @@ After the review, the planner summarizes recommendations and asks you to approve
  >
  > Please address these issues, then say "finalize plan" again.

- Fix the issues and try again. You can also say "cancel" or "stop planning" to exit without dispatching.
-
- ### DAG dispatch progress
-
- After approval, issues are assigned to the agent automatically in dependency order. Up to 3 issues run in parallel.
-
- > `📊 Search Feature: 2/3 complete`
-
- When everything is done:
-
- > `✅ Search Feature: complete (3/3 issues)`
-
- If an issue gets stuck (all retries failed), dependent issues are blocked and you'll be notified.
+ Fix the issues and try again. Say "cancel" or "stop planning" to exit without dispatching.

  ---

@@ -1027,7 +1150,7 @@ For programmatic access, the plugin registers these RPC methods:
  If an agent goes silent (LLM timeout, API hang, CLI lockup), the watchdog handles it automatically:

  1. No output for `inactivitySec` → kill and retry once
- 2. Second silence → escalate to stuck (you get notified, see [Stage 6](#stage-6-timeout-if-the-agent-goes-silent) above)
+ 2. Second silence → escalate to stuck (you get notified, see [Timeout recovery](#timeout-recovery) above)

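The two-strike behavior can be sketched as a reset-on-output timer. This is an illustrative sketch only; the `killAndRetry` and `escalateToStuck` callbacks are hypothetical placeholders, not the plugin's real API:

```typescript
// Illustrative two-strike watchdog — not the plugin's actual implementation.
class InactivityWatchdog {
  private strikes = 0;
  private timer?: ReturnType<typeof setTimeout>;

  constructor(
    private readonly inactivitySec: number,
    private readonly killAndRetry: () => void,
    private readonly escalateToStuck: () => void,
  ) {}

  // Call on every line of agent output — resets the silence countdown.
  activity(): void {
    this.stop();
    this.timer = setTimeout(() => this.onSilence(), this.inactivitySec * 1000);
  }

  // Fired when the countdown elapses with no output.
  onSilence(): void {
    this.strikes += 1;
    if (this.strikes === 1) {
      this.killAndRetry(); // first silence: kill the agent and retry once
      this.activity();     // restart the countdown for the retry
    } else {
      this.escalateToStuck(); // second silence: mark stuck and notify
    }
  }

  stop(): void {
    if (this.timer) clearTimeout(this.timer);
  }
}
```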
  | Setting | Default | What it controls |
  |---|---|---|
@@ -1060,26 +1183,35 @@ Agents call `linear_issues` with typed JSON parameters. The tool wraps the Linea
  | `list_states` | Get available workflow states for a team | `teamId` |
  | `list_labels` | Get available labels for a team | `teamId` |

- **Sub-issues:** Use `action="create"` with `parentIssueId` to create sub-issues under an existing issue. The new issue inherits `teamId` and `projectId` from its parent automatically. Agents are instructed to break large work into sub-issues for granular tracking; any task with multiple distinct deliverables should be decomposed. Auditors can also create sub-issues for remaining work when an implementation is partial.
+ **Sub-issues:** Use `action="create"` with `parentIssueId` to create sub-issues under an existing issue. The new issue inherits `teamId` and `projectId` from its parent automatically. Only orchestrators on triaged issues have `create` access; workers and auditors cannot create issues.

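As an illustration, an orchestrator's sub-issue creation call might carry a payload shaped like this. Only `action` and `parentIssueId` are documented above; the `title`/`description` fields and all values are made up for the example:

```typescript
// Hypothetical `linear_issues` payload — identifiers and fields are illustrative.
const createSubIssue = {
  action: "create",
  parentIssueId: "PROJ-2", // new issue nests under this existing issue
  title: "Add pagination to search API",
  description: "Part of PROJ-2: return paginated results from the search endpoint.",
  // teamId and projectId are omitted — inherited from the parent automatically.
};
```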
  ### `spawn_agent` / `ask_agent` — Multi-agent orchestration

  Delegate work to other crew agents. `spawn_agent` is fire-and-forget (parallel), `ask_agent` waits for a reply (synchronous). Disabled with `enableOrchestration: false`.

+ Sub-agents run in their own context — they do **not** share the parent's worktree or get `code_run` access. They're useful for reasoning, research, and coordination (e.g., "ask Inara how to phrase this error message") but cannot directly modify code. To give a sub-agent code context, include the relevant snippets in the task message.
+
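The contrast between the two calls can be sketched as follows. The stub signatures are hypothetical, shown only to illustrate fire-and-forget versus waiting; they are not the plugin's real tool API:

```typescript
// Hypothetical stubs standing in for the real orchestration tools.
function spawnAgent(agent: string, task: string): void {
  // Fire-and-forget: returns immediately; any reply arrives later.
  void agent;
  void task;
}

async function askAgent(agent: string, task: string): Promise<string> {
  // Synchronous: resolves only once the sub-agent replies.
  return `(${agent}'s reply to: ${task})`;
}

async function coordinate(): Promise<string> {
  spawnAgent("inara", "Draft release notes for the search feature"); // runs in parallel
  return askAgent("inara", "How should we phrase this error message?"); // blocks until reply
}
```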
  ### `dispatch_history` — Recent dispatch context

  Returns recent dispatch activity. Agents use this for situational awareness when working on related issues.

  ### Access model

- Not all agents get write access. The webhook prompts enforce this:
+ Tool access varies by context. Orchestrators get the full toolset; workers and auditors are restricted:

- | Context | `linear_issues` access | `code_run` |
- |---|---|---|
- | Triaged issue (In Progress, etc.) | Full (read + create + update + comment) | Yes |
- | Untriaged issue (Backlog, Triage) | Read only | Yes |
- | Auditor | Full (read + create + update + comment) | Yes |
- | Worker (inside `code_run`) | None | N/A |
+ | Context | `linear_issues` | `code_run` | `spawn_agent` / `ask_agent` | Filesystem |
+ |---|---|---|---|---|
+ | Orchestrator (triaged issue) | Full (read, create, update, comment) | Yes | Yes | Read + write |
+ | Orchestrator (untriaged issue) | Read only | Yes | Yes | Read + write |
+ | Worker | None | None | None | Read + write |
+ | Auditor | Prompt-constrained (has tool, instructed to verify only) | None | None | Read only (by prompt) |
+ | Sub-agent (spawn/ask) | None | None | Yes (can chain) | Inherited from parent |
+
+ **Workers** run inside the coding backend (Codex, Claude, Gemini) — they have full filesystem access to the worktree but no Linear tools and no orchestration. Their only job is to write code and return a summary.
+
+ **Auditors** have access to `linear_issues` (the tool is registered) but are instructed via prompt to verify only — they return a JSON verdict, not code or issue mutations. Write access is not enforced at the tool level.
+
+ **Sub-agents** spawned via `spawn_agent`/`ask_agent` run in their own session with no worktree access and no `code_run`. They're information workers — useful for reasoning and coordination, not code execution.
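The access matrix above, restated as data. This is an illustrative sketch only — the role names and record shape are invented for the example, not the plugin's configuration format:

```typescript
// Hypothetical restatement of the access table — not the plugin's real config.
type Role =
  | "orchestratorTriaged"
  | "orchestratorUntriaged"
  | "worker"
  | "auditor"
  | "subAgent";

interface Access {
  linearIssues: "full" | "read-only" | "prompt-constrained" | "none";
  codeRun: boolean;
  orchestration: boolean; // spawn_agent / ask_agent
}

const toolAccess: Record<Role, Access> = {
  orchestratorTriaged:   { linearIssues: "full",               codeRun: true,  orchestration: true },
  orchestratorUntriaged: { linearIssues: "read-only",          codeRun: true,  orchestration: true },
  worker:                { linearIssues: "none",               codeRun: false, orchestration: false },
  auditor:               { linearIssues: "prompt-constrained", codeRun: false, orchestration: false },
  subAgent:              { linearIssues: "none",               codeRun: false, orchestration: true },
};
```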

  ---

@@ -1216,14 +1348,14 @@ The full dispatch flow for implementing an issue:

  ```mermaid
  flowchart TD
- A["Issue assigned to app user"] --> B["1. Assess complexity tier<br/><i>junior / medior / senior</i>"]
+ A["Issue assigned to app user"] --> B["1. Assess complexity tier<br/><i>small / medium / high</i>"]
  B --> C["2. Create isolated git worktree"]
  C --> D["3. Register dispatch in state file"]
  D --> E["4. Write .claw/manifest.json"]
  E --> F["5. Notify: dispatched as tier"]

- F --> W["6. Worker phase<br/><i>code_run: YES, linear_issues: NO</i><br/>Build prompt → implement → save to .claw/"]
- W -->|"plugin code — automatic"| AU["7. Audit phase<br/><i>code_run: YES, linear_issues: READ+WRITE</i><br/>Verify criteria → run tests → JSON verdict"]
+ F --> W["6. Worker phase<br/><i>filesystem: full, linear_issues: NO</i><br/>Build prompt → implement → save to .claw/"]
+ W -->|"plugin code — automatic"| AU["7. Audit phase<br/><i>filesystem: read, linear_issues: prompt-constrained</i><br/>Verify criteria → inspect diff → JSON verdict"]

  AU --> V{"8. Verdict"}
  V -->|PASS| DONE["Done ✔<br/>updateIssue → notify"]