@exaudeus/workrail 3.38.0 → 3.40.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/dist/cli-worktrain.js +231 -0
- package/dist/console-ui/assets/{index-BtOJj6Xy.js → index-CXWCAonr.js} +1 -1
- package/dist/console-ui/index.html +1 -1
- package/dist/coordinators/pr-review.d.ts +62 -0
- package/dist/coordinators/pr-review.js +575 -0
- package/dist/daemon/workflow-runner.d.ts +3 -2
- package/dist/daemon/workflow-runner.js +6 -3
- package/dist/manifest.json +58 -34
- package/dist/mcp/output-schemas.d.ts +10 -10
- package/dist/mcp/tools.d.ts +12 -12
- package/dist/trigger/trigger-router.js +9 -2
- package/dist/types/workflow-source.d.ts +0 -1
- package/dist/types/workflow-source.js +3 -6
- package/dist/types/workflow.d.ts +1 -1
- package/dist/types/workflow.js +1 -2
- package/dist/v2/durable-core/domain/artifact-contract-validator.js +66 -0
- package/dist/v2/durable-core/schemas/artifacts/coordinator-signal.d.ts +25 -0
- package/dist/v2/durable-core/schemas/artifacts/coordinator-signal.js +31 -0
- package/dist/v2/durable-core/schemas/artifacts/index.d.ts +3 -1
- package/dist/v2/durable-core/schemas/artifacts/index.js +14 -1
- package/dist/v2/durable-core/schemas/artifacts/review-verdict.d.ts +41 -0
- package/dist/v2/durable-core/schemas/artifacts/review-verdict.js +30 -0
- package/dist/v2/durable-core/schemas/export-bundle/index.d.ts +236 -236
- package/dist/v2/durable-core/schemas/session/events.d.ts +50 -50
- package/dist/v2/durable-core/schemas/session/gaps.d.ts +2 -2
- package/dist/v2/durable-core/schemas/session/manifest.d.ts +4 -4
- package/dist/v2/durable-core/schemas/session/outputs.d.ts +8 -8
- package/dist/v2/usecases/console-routes.js +178 -0
- package/docs/design/coordinator-artifact-protocol-design-candidates.md +155 -0
- package/docs/design/coordinator-artifact-protocol-design-review.md +103 -0
- package/docs/design/coordinator-artifact-protocol-implementation-plan.md +259 -0
- package/docs/discovery/coordinator-design-review.md +73 -0
- package/docs/discovery/coordinator-script-design.md +96 -679
- package/docs/discovery/hypothesis-challenge-report.md +44 -0
- package/docs/discovery/simulation-report.md +85 -0
- package/docs/ideas/backlog.md +158 -100
- package/package.json +1 -1
- package/workflows/mr-review-workflow.agentic.v2.json +5 -1
package/docs/discovery/hypothesis-challenge-report.md
ADDED
@@ -0,0 +1,44 @@
+# Hypothesis Challenge: PR Review Coordinator HTTP-First Design
+
+*Generated: 2026-04-18*
+
+## Target Claim
+
+The PR review coordinator's HTTP-first design (Candidate B) is sound: the 2-call HTTP notes extraction is reliable, the two-tier keyword parser correctly classifies review severity, and the 5 robustness rules are sufficient.
+
+## Strongest Counter-Argument
+
+`phase-6-final-handoff` has `requireConfirmation: true`. The preferredTipNodeId may point to a checkpoint/confirmation node whose recapMarkdown is sparse or absent, causing the keyword scanner to misclassify clean PRs as 'unknown' and escalate them unnecessarily.
+
+**Verdict:** Mitigated. In autonomous mode, the agent calls `continue_workflow` with substantive notes before the confirmation gate. WorkRail stores these notes as the RECAP payload for the node. The recapMarkdown IS populated for autonomous sessions completing phase-6. The requireConfirmation flag only blocks advancement until notes are written -- which the autonomous agent does.
+
+## Weak Assumptions / Evidence Gaps
+
+1. **`runs[0]` is always the most recent run:** True in practice. WorkRail appends new runs; index 0 is the most recent. Confirmed in worktrain-await.ts pollSession(), which uses `runs[0]`.
+
+2. **Keyword scan is context-unaware:** 'blocking' appearing in negative context ('this is not blocking') would trigger a false positive. Mitigation: require the BLOCKING keyword to be present without a preceding negation. Simpler: use priority order -- any blocking keyword -> blocking, regardless of context. The conservative default is acceptable.
+
+3. **go/no-go time check needs adaptation:** Rule 3 (don't spawn if remaining time < 20 minutes) was designed for daemon sessions with a known maxSessionMinutes. A CLI coordinator has no such limit. Adaptation: track wall-clock time since coordinator start; refuse to spawn new sessions if elapsed > coordinator_max_minutes - 20.
+
+## Likely Failure Modes
+
+1. **recapMarkdown null for final step** -> 'unknown' severity -> escalate (conservative, correct)
+2. **Fix-agent loop max 3 passes exceeded** -> escalate after 3 (loop counter enforces this)
+3. **ECONNREFUSED on daemon calls** -> early exit with clear error message
+4. **Keyword false positive** -> PR escalated as blocking when actually clean (false negative on merge, acceptable)
+5. **Merge conflict at merge time** -> `gh pr merge` fails, coordinator reports error and escalates
+
+## Critical Tests
+
+- `parseFindingsFromNotes(null)` -> returns err, classifies as 'unknown'
+- `parseFindingsFromNotes(markdown with 'not blocking but...')` -> must NOT return 'blocking'
+- Loop counter: 3 passes with persistent minor -> escalate on pass 3, NOT pass 4
+- ECONNREFUSED: spawnSession failure propagates cleanly to stderr and exit code 1
+
+## Verdict: Keep
+
+The design is sound. The 2-call HTTP extraction works for autonomous sessions. The two-tier parser with conservative defaults is sufficient. The 5 robustness rules need one adaptation: Rule 3 (go/no-go time check) should use wall-clock time since coordinator start, not daemon session remaining time.
+
+## Next Action
+
+Proceed with Candidate B implementation. Add adaptation to Rule 3: track coordinator wall-clock start time; refuse new spawns if `now() - startTime > (coordinatorMaxMs - 20*60*1000)`. Default coordinatorMaxMs = 90 minutes.
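The two-tier keyword parser and the negation guard exercised by the critical tests above can be sketched in a few lines. This is an illustrative TypeScript reconstruction, not the package's shipped code; `parseSeverity`, `hasKeyword`, and the keyword lists are assumptions based on the report's description:

```typescript
// Illustrative sketch of the two-tier severity parser (assumed names, not the
// package's actual API). Blocking keywords take absolute precedence; null or
// empty notes classify as 'unknown' so the coordinator escalates conservatively.
type Severity = 'blocking' | 'minor' | 'clean' | 'unknown';

const BLOCKING_KEYWORDS = ['blocking', 'critical', 'request changes'];
const MINOR_KEYWORDS = ['minor'];
const CLEAN_KEYWORDS = ['approve', 'clean', 'lgtm'];

// Match a keyword unless it is immediately preceded by a negation ("not blocking").
// A fuller implementation would handle more negation forms than this.
function hasKeyword(text: string, keyword: string): boolean {
  return new RegExp(`(?<!not\\s)\\b${keyword}\\b`, 'i').test(text);
}

function parseSeverity(notes: string | null): Severity {
  if (!notes || notes.trim() === '') return 'unknown'; // conservative: escalate
  const text = notes.toLowerCase();
  if (BLOCKING_KEYWORDS.some((k) => hasKeyword(text, k))) return 'blocking';
  if (MINOR_KEYWORDS.some((k) => hasKeyword(text, k))) return 'minor';
  if (CLEAN_KEYWORDS.some((k) => hasKeyword(text, k))) return 'clean';
  return 'unknown';
}
```

The priority ordering means a note containing both APPROVE and BLOCKING still classifies as blocking, matching the conservative default the report accepts.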
package/docs/discovery/simulation-report.md
ADDED
@@ -0,0 +1,85 @@
+# Execution Simulation Report: PR Review Coordinator Failure Paths
+
+*Generated: 2026-04-18*
+
+## Summary
+
+Three failure paths simulated for the PR review coordinator design. All three produce correct outcomes under the proposed design. One gap identified: Rule 3 (go/no-go time check) needs adaptation for CLI context.
+
+## Scenario 1: recapMarkdown is Null
+
+**Setup:** Session completes successfully, but `GET /api/v2/sessions/:id/nodes/:nodeId` returns `recapMarkdown: null`.
+
+**Trace:**
+```
+getAgentResult('handle-419')
+  GET /api/v2/sessions/handle-419 -> runs[0].preferredTipNodeId = 'node-xyz'
+  GET /api/v2/sessions/handle-419/nodes/node-xyz -> recapMarkdown = null
+  returns null
+
+parseFindingsFromNotes(null) -> err('notes is null or empty')
+classifySeverity(err) -> 'unknown'
+route('unknown') -> escalate
+```
+
+**Divergence from expected:** None -- conservative escalation is the designed behavior.
+
+**Outcome:** PR escalated, not merged. No crash. Clear escalation note written.
+
+## Scenario 2: Fix-Agent Loop Exhaustion (3 Passes, Persistent Minor)
+
+**Setup:** PR #406 has minor findings. Fix agent runs 3 times but each re-review still returns minor.
+
+**Trace:**
+```
+Pass 1: passCount=0 -> review: 'minor' -> passCount becomes 1 -> spawn fix agent -> re-review
+Pass 2: passCount=1 -> review: 'minor' -> passCount becomes 2 -> spawn fix agent -> re-review
+Pass 3: passCount=2 -> review: 'minor' -> passCount becomes 3 -> CHECK: 3 >= 3 -> STOP
+  -> escalate, write: 'PR #406: 3 fix passes exhausted, still minor. Manual review required.'
+```
+
+**Divergence from expected:** None -- loop terminates correctly at pass 3.
+
+**Key invariant verified:** `passCount >= MAX_FIX_PASSES` check happens BEFORE spawning fix agent, not after. This prevents a 4th spawn.
+
+**Outcome:** PR escalated after exactly 3 passes. Not merged.
+
+## Scenario 3: Daemon Not Running (ECONNREFUSED)
+
+**Setup:** No daemon running on port 3456. No lock files present.
+
+**Trace:**
+```
+discoverPort() -> no lock files -> falls back to 3456
+spawnSession('mr-review-workflow-agentic', 'Review PR #419...', '/workspace')
+  POST http://127.0.0.1:3456/api/v2/auto/dispatch
+  -> fetch throws Error: ECONNREFUSED 127.0.0.1:3456
+  -> spawnSession catches -> returns err('Could not connect to WorkTrain daemon on port 3456')
+runPrReviewCoordinator() receives err
+  -> deps.stderr('Could not connect to WorkTrain daemon on port 3456. Start with: worktrain daemon')
+  -> returns { kind: 'failure', exitCode: 1 }
+```
+
+**Divergence from expected:** None.
+
+**Outcome:** Clear error message, exit code 1, no hang, no partial state.
+
+## Divergence Analysis
+
+No divergences found. All 3 scenarios produce correct outcomes.
+
+## Gap Identified: Rule 3 Adaptation for CLI Context
+
+**Problem:** The original Rule 3 (go/no-go time check: don't spawn if remaining time < 20 minutes) was written for daemon sessions that have a `maxSessionMinutes` parameter. A CLI coordinator script has no such parameter.
+
+**Adaptation required:** Track wall-clock time since coordinator script start (`const startTimeMs = deps.now()` at beginning). Before spawning any new child session (review OR fix agent), check: `if (deps.now() - startTimeMs > coordinatorMaxMs - 20 * 60 * 1000)` -> refuse to spawn.
+
+**Default:** `coordinatorMaxMs = 90 * 60 * 1000` (90 minutes). Configurable via `--max-runtime` flag if needed.
+
+## Recommendations
+
+1. Implement `parseFindingsFromNotes(null)` path explicitly -- don't rely on empty string check.
+2. Keyword parser priority: BLOCKING/CRITICAL/REQUEST CHANGES takes absolute precedence over APPROVE/CLEAN/LGTM (blocking wins even if both present).
+3. Fix-agent loop: check `passCount >= MAX_FIX_PASSES` BEFORE spawning, not after.
+4. Add `coordinatorStartMs` tracking and Rule 3 go/no-go check adapted for CLI (wall-clock elapsed time).
+5. `gh pr merge` failures: catch, write to stderr, escalate -- do NOT retry automatically.
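The Rule 3 adaptation both reports converge on (wall-clock elapsed time with a 20-minute reserve) is small enough to sketch directly. `canSpawn` and `coordinatorStartMs` are assumed names; the constants follow the reports' stated defaults:

```typescript
// Illustrative sketch of the adapted Rule 3 go/no-go check: refuse to spawn a
// new child session once less than a 20-minute reserve remains of the
// coordinator's wall-clock budget. Not the shipped implementation.
const RESERVE_MS = 20 * 60 * 1000;                 // 20-minute reserve
const DEFAULT_COORDINATOR_MAX_MS = 90 * 60 * 1000; // 90-minute default budget

function canSpawn(
  nowMs: number,
  coordinatorStartMs: number,
  coordinatorMaxMs: number = DEFAULT_COORDINATOR_MAX_MS,
): boolean {
  // Refuse when elapsed > coordinatorMaxMs - 20 minutes, per the report.
  return nowMs - coordinatorStartMs <= coordinatorMaxMs - RESERVE_MS;
}
```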
package/docs/ideas/backlog.md
CHANGED
@@ -5700,136 +5700,194 @@ Tested empirically today. This is what actually works, not what's specced.
 
 ---
 
-###
+### Autonomous feature development: scope → breakdown → parallel execution → merge (Apr 18, 2026)
 
-**The
+**The vision:** give WorkTrain a feature scope -- from a vague idea to a fully groomed ticket -- and it figures out the rest. Discovery if needed, design if needed, breakdown into parallel slices, execution across worktrees, context management across agents, bringing it all back together.
 
-**
+**The four pillars the user cares about:**
+1. **Autonomy** -- WorkTrain takes a scope and figures out the work breakdown without hand-holding
+2. **Quality** -- comes FROM autonomy + workflow enforcement + coordination. Each slice goes through the right phases.
+3. **Throughput** -- parallel slices across worktrees simultaneously. N agents working while you focus elsewhere.
+4. **Visibility** -- one coherent work unit you can track at a glance, not N unrelated sessions in a flat list.
 
-**
+**The pipeline for a scope:**
 
-
-
-
-
--
+```
+Input: "add GitHub polling support" (any level of definition -- idea to full spec)
+│
+├── [if vague] ideation + spec authoring → output: BRD / acceptance criteria
+├── classify-task → taskComplexity, hasUI, touchesArchitecture, taskMaturity
+├── [if Medium/Large] discovery → context bundle, invariants, candidate files
+├── [if touchesArchitecture] design → candidates, review, selected approach
+├── breakdown → parallel slices with dependency graph
+│   ├── Slice 1: types + schema (worktree A)
+│   ├── Slice 2: polling adapter (worktree B, depends: 1)
+│   ├── Slice 3: scheduler integration (worktree C, depends: 2)
+│   └── Slice 4: tests (worktree D, depends: 1-3)
+├── [parallel execution] each slice: implement → review → (fix if needed) → approved
+├── [serial integration] merge slices in dependency order, verify after each
+└── [final] integration test → PR created → notification to user
+```
 
-**
--
--
--
--
+**Context management across agents:**
+- Coordinator maintains a "work unit manifest": current phase, slice status, shared invariants, decisions made in design phase
+- Each spawned agent receives a context bundle: relevant portion of the manifest + files it needs + decisions from upstream phases
+- Agents don't rediscover what the coordinator already knows
+- After each agent completes, its findings update the manifest (new invariants found, scope changes, follow-up tickets)
 
-**
--
--
--
+**Worktree coordination:**
+- Each slice gets its own worktree (already done via `--isolation worktree`)
+- Coordinator tracks which files each slice touches -- detects conflicts before they happen
+- Independent slices run in parallel; dependent slices queue automatically
+- Merge order follows the dependency graph, not wall-clock completion time
 
-**
--
--
--
-- All of this happens automatically, without user intervention
+**Knowing when to spawn a new main agent:**
+- When a slice is too large or discovers unexpected scope, it requests a breakdown from the coordinator
+- When a review finds a Critical finding, the coordinator spawns a dedicated fix agent with the finding + relevant context
+- When integration reveals a regression, coordinator spawns an investigation agent before retrying the merge
 
-**
--
--
--
--
+**The coordinator's job (what stays in scripts, not LLM):**
+- Maintain the manifest (JSON file, append-only)
+- Compute the dependency graph
+- Decide parallelism vs serialization
+- Route: clean → merge, minor findings → fix agent, critical → escalate
+- Track worktrees, detect conflicts
+- Sequence the merge order
 
-**
--
--
--
+**What requires LLM cognition:**
+- Discovery (what are the invariants, which files matter)
+- Design (which approach, what tradeoffs)
+- Implementation (write the code)
+- Review (is this correct and complete)
+- Breakdown (what are the right slice boundaries)
 
-**
--
-- Phase 2: kill on detection -- stop agents immediately when tooling failure detected. No more unverified output reaching main.
-- Phase 3: auto-resume -- restart and resume for recoverable failures.
-- Phase 4: full self-heal loop -- diagnose, fix, reboot, resume automatically.
+**The minimum viable version:**
+A coordinator that handles a Medium/Small scoped task (already classified, no need for ideation or design). Takes 2-4 parallel slices, runs them, reviews each, merges when clean. No escalation handling in v1 -- if anything fails, notify the user.
 
-
+This is the thing that makes WorkTrain feel like a senior engineer taking ownership of a task, not a tool you have to supervise step by step.
 
 ---
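The merge sequencing the backlog entry above insists on ("merge order follows the dependency graph, not wall-clock completion time") reduces to a topological ordering of slices. A minimal sketch, with `Slice` and `mergeOrder` as illustrative names rather than the backlog's actual types:

```typescript
// Illustrative sketch of dependency-ordered merging: a slice merges only after
// every slice it depends on has merged, regardless of which agent finished first.
interface Slice {
  id: number;
  dependsOn: number[];
}

function mergeOrder(slices: Slice[]): number[] {
  const order: number[] = [];
  const merged = new Set<number>();
  const pending = [...slices];
  while (pending.length > 0) {
    // Pick any slice whose dependencies have all merged.
    const readyIdx = pending.findIndex((s) => s.dependsOn.every((d) => merged.has(d)));
    if (readyIdx === -1) throw new Error('dependency cycle detected');
    const [next] = pending.splice(readyIdx, 1);
    merged.add(next.id);
    order.push(next.id);
  }
  return order;
}
```

With the four example slices from the pipeline sketch, completion order is irrelevant: types merge first, then the adapter, then scheduler integration, then tests.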
 
-###
+### Coordinator design decision: MVP-first, generalize after (Apr 18, 2026)
 
-
+**Decision:** Build the first coordinator as a PR review-specific script. Generalize to a reusable coordinator framework after proving it works end-to-end.
 
-**
+**Rationale:** Three discovery runs all converged on the architecture (TypeScript script, `CoordinatorDeps` interface, 2-call HTTP for notes). The risk is over-engineering for hypothetical pipelines before validating the real one. PR review is the highest-value first use case with a clear success criterion.
 
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+**The generic coordinator architecture is already designed** (see `docs/discovery/coordinator-script-design.md`). The `CoordinatorDeps` interface and `AgentResult` bridge type make migration to a generic coordinator trivial -- the PR review script uses these types, so generalizing is additive, not a rewrite.
+
+**Migration path:** once PR review coordinator is proven in production, extract the routing logic (`parseFindings`, `routeByFindings`) and `CoordinatorDeps` interface into `src/coordinators/base.ts`. The PR review coordinator becomes one implementation of the base pattern.
+
+---
+
+### Architecture decisions from Apr 17-18 sessions (to record before files are cleaned up)
+
+**Decision 1: Structured output + tool calls can coexist (Apr 18)**
+Validated empirically via integration test. The beta API (`client.beta.messages.create()`) supports both JSON schema enforcement AND tool calls in the same request. Schema enforcement applies at `end_turn` only. Bedrock is more consistent than direct Anthropic API for system-prompt fallback behavior. This opens a future path for replacing `complete_step` with structured output, but `complete_step` remains the chosen primitive for now.
+
+**Decision 2: `complete_step` is the preferred daemon workflow-control primitive (Apr 18)**
+PR #569 merged. The daemon holds the continueToken in a closure; LLM calls `complete_step(notes)` and never handles the token directly. Structured output (`beta.messages.create` with JSON schema) was evaluated as an alternative and deferred -- it's a viable migration path for a future version but adds API complexity today. Follow-up: track a structured output migration as a future improvement, not a current priority.
+
+**Decision 3: AgentLoop error handling contract -- FatalToolError (Apr 16)**
+`FatalToolError` subclass selected for distinguishing recoverable from non-recoverable tool failures in the AgentLoop. The contract: user-facing tools (Bash, Read, Write) catch failures and return `isError: true` in the tool_result (loop continues, LLM can retry). Coordination tools with unrecoverable failures (session store corruption, token decode failure) throw `FatalToolError` -- `_executeTools` instanceof-checks this and kills the session rather than surfacing a confusing error to the LLM. This contract is part of the AgentLoop architecture and must be followed by any new tool implementations.
+
+**Decision 4: Use `wr.discovery` for discovery-only tasks, not `coding-task-workflow-agentic` (Apr 17)**
+Discovered from a broken session: `coding-task-workflow-agentic` dispatched with "do discovery only, no code" ran 11 step advances then stopped without `run_completed`. The workflow's implementation phases fired even with explicit instructions not to code. Lesson: when a trigger or coordinator wants pure discovery/research, use `wr.discovery` as the workflowId. `coding-task-workflow-agentic` should only be dispatched when implementation is the actual goal.
+
+**Decision 5: Bug -- MCP server EPIPE crash (Apr 18)**
+Root cause confirmed with 15 production crash log entries: `process.stderr` is missing an `'error'` event handler in `registerFatalHandlers()`. When an MCP client disconnects, Node.js emits `EPIPE` on stderr which crashes the process with an unhandled error. `process.stdout` already has equivalent protection via `wireStdoutShutdown()`. Fix: mirror the stdout protection for stderr. One-line fix being implemented in PR `fix/mcp-stderr-epipe-crash`.
+
+---
+
+### worktrain status → console integration (Apr 18, 2026)
+
+The `worktrain status` CLI command is Phase 1. Phase 2: the same data and rendering lives inside the console as the default landing view when you open it -- not the sessions list, the overview. Same `StatusDataPacket` type, two surfaces. The console overview replaces the need to run a CLI command; it auto-refreshes and stays live.
 
-
+---
+
+### WorkTrain as a native macOS app (Apr 18, 2026)
+
+Long-term vision: WorkTrain becomes a full native Mac app -- not just a CLI + web console, but a proper macOS application with a menubar icon, system notifications, windows, and native UX.
 
-**
+**What this unlocks:**
+- Always-on menubar presence showing daemon status at a glance
+- Native macOS notifications (already built via osascript -- the app version uses UserNotifications framework directly)
+- The `worktrain status` overview as a native window, not a browser tab
+- Message queue and inbox as a native interface (type a message from anywhere on your Mac, not just the terminal)
+- Background daemon management -- start/stop/restart from the menubar without terminal
+- Deep system integration: file system events, calendar, Contacts, native share sheet
 
-**
+**Tech stack options:**
+- Swift/SwiftUI: full native, best macOS integration, steeper learning curve from TypeScript
+- Electron + existing console UI: fastest path, same TypeScript codebase, but heavy
+- Tauri: Rust core + existing web frontend, lighter than Electron, good macOS support
+- React Native macOS: reuses React knowledge, not quite native feel
 
-**
-
--
-- Console code reads the session store directly -- no IPC with MCP or daemon needed
-- These are separate processes. A crash in one does not affect the others.
+**Recommended path:** Tauri wrapping the existing console UI. The console is already a React/Vite app. Tauri gives native menubar, notifications, and system APIs without rewriting the frontend. The WorkTrain daemon stays as a separate process managed by the app.
+
+**This is a post-v1 platform decision** -- not a near-term priority, but worth designing toward. Don't make architectural decisions that would make the Tauri wrapper hard later.
 
 ---
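The FatalToolError contract recorded in Decision 3 above can be sketched compactly. The class name comes from the backlog entry; `executeTool` and the outcome shape are illustrative stand-ins for the AgentLoop's `_executeTools` internals, not the package's real signatures:

```typescript
// Illustrative sketch of the AgentLoop error-handling contract from Decision 3.
// Recoverable tool failures become isError tool_results the LLM can retry;
// FatalToolError (unrecoverable coordination failure) kills the session.
class FatalToolError extends Error {
  constructor(message: string) {
    super(message);
    this.name = 'FatalToolError';
  }
}

type ToolOutcome =
  | { kind: 'result'; isError: boolean; content: string }
  | { kind: 'session_killed'; reason: string };

function executeTool(run: () => string): ToolOutcome {
  try {
    return { kind: 'result', isError: false, content: run() };
  } catch (err) {
    if (err instanceof FatalToolError) {
      // Kill the session rather than surfacing a confusing error to the LLM.
      return { kind: 'session_killed', reason: err.message };
    }
    // Ordinary tool failure: loop continues, LLM can retry.
    return { kind: 'result', isError: true, content: String(err) };
  }
}
```

The instanceof check is the whole contract: new tools choose which failure class to throw, and the loop's behavior follows from that choice.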
 
-###
+### Long-running sessions: stay open across agent handoffs (Apr 18, 2026)
+
+**The problem:** today when an MR review session completes, it writes its findings and exits. If the findings require fixes, a new fix agent starts from scratch with no shared context. When the fix is done, a new re-review agent also starts from scratch. Three sessions that are logically one unit of work are isolated from each other.
 
-
+**The vision:** a session can stay open and wait -- dormant but alive -- while another agent does work. When that work completes, the waiting session resumes with full context continuity.
 
-**
+**The MR review example:**
 
 ```
-
-
-
-
-
-
-
-
-
-│ WorkRail MCP │ │ WorkTrain │ │ WorkRail │
-│ Server │ │ Daemon │ │ Console │
-│ workrail start │ │ worktrain │ │ worktrain │
-│ src/mcp/ │ │ daemon │ │ console │
-│ │ │ src/daemon/│ │ src/console/ │
-│ Claude Code │ │ src/trigger│ │ │
-│ connects here │ │ │ │ Shows BOTH │
-│ via stdio │ │ autonomous │ │ MCP + daemon │
-│ │ │ agent loop │ │ sessions │
-└────────────────┘ └────────────┘ └──────────────┘
+[MR review session] finds: 2 critical, 3 minor
+  → stays open, waiting for fixes
+
+[Fix agent session] addresses all 5 findings
+  → completes, signals "fixes ready"
+
+[MR review session resumes] re-reads the diff, re-evaluates
+  → all 5 verified fixed, 0 new findings
+  → completes with APPROVE verdict
 ```
 
-
+The same session that found the issues verifies the fixes. No context reconstruction. No risk of re-review missing something the original reviewer knew.
+
+**Other use cases for waiting sessions:**
+
+- **Architecture review waiting for approval:** architect session identifies a design gap, waits for the human to decide on direction, resumes when the decision is recorded
+- **Discovery session waiting for data:** a research session identifies that it needs a specific file or API response, signals "blocked on: fetch X", waits for a retrieval agent to deliver it, resumes with the data injected
+- **Coordinator waiting on child completion:** instead of a coordinator script polling `worktrain await`, the coordinator session can yield and be resumed by the daemon when child sessions complete -- same session, same context, no polling overhead
+- **Spec authoring waiting for stakeholder input:** a spec session writes a draft, flags "needs: human review of acceptance criteria", waits, resumes when the human adds a comment
+- **Integration test waiting for deployment:** a test coordination session waits for a deploy to complete before running integration tests
+
+**The key insight: the LLM doesn't experience waiting.**
 
-
+LLMs have no concept of time. Between one turn and the next, zero time passes from the agent's perspective. This means "waiting" is not a thing that happens to the agent -- it just doesn't receive its next turn until the coordinator has something to give it.
 
-
+The session is paused at the engine level (DAG holds at a node, no new turns issued). The agent submitted its output and simply hasn't received a response yet. When the coordinator is ready -- fix agent completed, human reviewed, deployment finished -- it advances the session with a turn that contains the new context. From the agent's perspective: it submitted findings and immediately received "here are the fixes, verify them."
+
+**No `wait_for` primitive needed at the workflow level.** The coordinator is the timing mechanism. This is the coordinator's job: know when each session is ready for its next input, and deliver that input at the right time.
+
+```
+Coordinator logic:
+
+1. Advance review session to "findings complete" node
+2. Read findings from session output
+3. Spawn fix agent with those findings
+4. Wait for fix agent to complete (worktrain await)
+5. Inject fix summary into review session's next turn
+6. Advance review session: "Here are the fixes. Verify them."
+   → LLM receives this as the natural next step, no time gap perceived
+```
+
+**Why this is more powerful than re-running a fresh session:**
+
+- **Context continuity:** the reviewer remembers what it found, why it flagged it, what invariants it was checking. A fresh session has to re-discover all of that.
+- **Relational memory:** "does this fix address the root cause I identified, or just the symptom?" -- only the original session knows the root cause reasoning.
+- **Efficiency:** no redundant context gathering. The resumed session picks up exactly where it left off.
+- **The agent doesn't know it's coordinating:** from the agent's view, it's a continuous workflow. The coordinator manages the timing externally.
+
+**Implementation path:**
 
-
--
--
-- Console reads the session store directly -- no IPC with either needed
-- These are separate processes. A crash in one does not affect the others.
+- Phase 1: coordinator scripts withhold `complete_step` advancement until the condition is met. This already works today -- the coordinator just doesn't advance the session until the fix agent is done.
+- Phase 2: the coordinator passes structured context when advancing: `complete_step(session, { injectedContext: fixSummary })`. The session receives it as part of the next step's prompt.
+- Phase 3: declarative pipelines -- workflow JSON declares that step N waits for an external condition before proceeding. The coordinator reads this and manages the timing automatically. No hand-coded coordinator script needed for common patterns.
package/workflows/mr-review-workflow.agentic.v2.json
CHANGED
@@ -312,7 +312,11 @@
     {
       "id": "phase-6-final-handoff",
       "title": "Phase 6: Final Handoff",
-      "prompt": "Provide the final MR review handoff.\n\nInclude:\n- MR title and purpose\n- review mode used\n- final recommendation and confidence band\n- confidence assessment summary, including the most important reason confidence was capped if it was not High\n- counts of Critical / Major / Minor / Nit findings\n- top findings with rationale\n- strongest remaining areas of uncertainty, if any\n- summary of the coverage ledger, especially any still-uncertain domains\n- ready-to-post MR comments summary\n- any validation outcomes a human reviewer should see\n- review environment status:\n - what review target/context sources were successfully used\n - what important sources were missing or ambiguous\n - boundary confidence and context confidence\n - how those limits affected the review\n- path to the full human-facing review artifact (`reviewDocPath`) only if one was created\n\nRules:\n- the final recommendation assists a human reviewer; it does not replace them\n- if `reviewDocPath` exists, treat it as a human-facing companion artifact only\n- be explicit when missing PR/ticket/doc/boundary context limited confidence\n- do not post comments, approve, reject, or merge unless the user explicitly asks",
+      "prompt": "Provide the final MR review handoff.\n\nInclude:\n- MR title and purpose\n- review mode used\n- final recommendation and confidence band\n- confidence assessment summary, including the most important reason confidence was capped if it was not High\n- counts of Critical / Major / Minor / Nit findings\n- top findings with rationale\n- strongest remaining areas of uncertainty, if any\n- summary of the coverage ledger, especially any still-uncertain domains\n- ready-to-post MR comments summary\n- any validation outcomes a human reviewer should see\n- review environment status:\n - what review target/context sources were successfully used\n - what important sources were missing or ambiguous\n - boundary confidence and context confidence\n - how those limits affected the review\n- path to the full human-facing review artifact (`reviewDocPath`) only if one was created\n\nRules:\n- the final recommendation assists a human reviewer; it does not replace them\n- if `reviewDocPath` exists, treat it as a human-facing companion artifact only\n- be explicit when missing PR/ticket/doc/boundary context limited confidence\n- do not post comments, approve, reject, or merge unless the user explicitly asks\n\nIMPORTANT: After writing your notes, emit a structured verdict via complete_step's artifacts[] parameter using EXACTLY this schema (no extra fields):\n{\n \"kind\": \"wr.review_verdict\",\n \"verdict\": \"clean\" | \"minor\" | \"blocking\",\n \"confidence\": \"high\" | \"medium\" | \"low\",\n \"findings\": [ { \"severity\": \"critical\" | \"major\" | \"minor\" | \"nit\", \"summary\": \"one-line description\" } ],\n \"summary\": \"one-line overall verdict summary\"\n}\nFor a clean review with no findings, use findings: []. The verdict field maps to severity: clean = no blocking issues, minor = small issues only, blocking = critical or major issues found.",
+      "outputContract": {
+        "contractRef": "wr.contracts.review_verdict",
+        "required": false
+      },
       "requireConfirmation": true
     }
   ]
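The verdict mapping spelled out at the end of the new phase-6 prompt ("clean = no blocking issues, minor = small issues only, blocking = critical or major issues found") can be expressed directly. This is one reading of that mapping, with illustrative names rather than the package's schema types; nits are treated as non-blocking:

```typescript
// Illustrative sketch of the verdict-from-findings mapping described in the
// new phase-6 prompt. Critical/major findings are blocking; minor findings
// alone yield 'minor'; nits or an empty findings list yield 'clean'.
type FindingSeverity = 'critical' | 'major' | 'minor' | 'nit';
type Verdict = 'clean' | 'minor' | 'blocking';

interface Finding {
  severity: FindingSeverity;
  summary: string;
}

function verdictFromFindings(findings: Finding[]): Verdict {
  if (findings.some((f) => f.severity === 'critical' || f.severity === 'major')) {
    return 'blocking';
  }
  if (findings.some((f) => f.severity === 'minor')) return 'minor';
  return 'clean';
}
```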
|