@exaudeus/workrail 3.73.2 → 3.74.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/dist/cli-worktrain.js +126 -1
- package/dist/console-ui/assets/{index-CfI4I3OX.js → index-BmDxs-a5.js} +1 -1
- package/dist/console-ui/index.html +1 -1
- package/dist/coordinators/pr-review.d.ts +11 -1
- package/dist/coordinators/types.d.ts +15 -0
- package/dist/coordinators/types.js +2 -0
- package/dist/manifest.json +17 -9
- package/dist/trigger/coordinator-deps.js +203 -36
- package/docs/authoring.md +23 -0
- package/docs/ideas/backlog.md +299 -60
- package/docs/planning/README.md +6 -9
- package/docs/roadmap/archive/README.md +8 -0
- package/docs/tickets/next-up.md +6 -1
- package/docs/vision.md +115 -0
- package/package.json +1 -1
- package/spec/authoring-spec.json +36 -1
- /package/docs/roadmap/{now-next-later.md → archive/now-next-later.md} +0 -0
- /package/docs/roadmap/{open-work-inventory.md → archive/open-work-inventory.md} +0 -0
package/docs/ideas/backlog.md
CHANGED
@@ -3,6 +3,8 @@
Workflow and feature ideas worth capturing but not yet planned or designed.
For historical narrative and sprint journals, see `docs/history/worktrain-journal.md`.

+**Before reading this backlog, read the vision:** `docs/vision.md` -- what WorkTrain is, what success looks like, and the principles every decision is held against. Every item in this backlog should serve that vision. If it doesn't, it shouldn't be here.
+
**To see a sorted priority view, run:**
```bash
npm run backlog # full list, grouped by blocked/unblocked
@@ -12,88 +14,77 @@ npm run backlog -- --help # all options
```

Each item has a score line: `**Score: N** | Cor:N Cap:N Eff:N Lev:N Con:N | Blocked: ...`
-See the scoring rubric in the "Agent-assisted backlog prioritization" entry (WorkTrain Daemon section).

-
+**When adding a new backlog item, score it using this rubric.** Five dimensions, each 1-3. Score = sum (max 15).

-
+| Dimension | 3 | 2 | 1 |
+|---|---|---|---|
+| **Correctness** | Silent wrong output, crash, or skipped safety gate | Degraded behavior, misleading output, test coverage gap | No effect on correctness |
+| **Capability** | Meaningfully expands what WorkTrain can do or who can use it | Reduces friction for an *active* use case today | Polish, internal quality, or nothing anyone is actively blocked by right now |
+| **Effort** (inverted) | Hours to a day or two | A few days to a week | Weeks or longer, significant design work needed first |
+| **Leverage** | Prerequisite for multiple other items | Enables one or two downstream items | Standalone, nothing depends on it |
+| **Confidence** | Clear problem, clear direction, just needs implementation | Problem is clear, but has open questions to hash out first | Still needs discovery or design before work can begin |

-
+**Blocked flag:** annotate with *what* the item is blocked by -- "Blocked: needs knowledge graph" vs "Blocked: needs dispatchCondition" carry very different timelines. Blocked items are listed separately regardless of score.

-**
+**Scoring notes:**
+- Score the first actionable phase, not the full vision. Phase 1 = two days of work should not score Effort 1 just because Phase 3 is months away.
+- Tiebreaker at equal score: prefer the item that makes the next item easier to execute.
+- Capability 2 = reduces friction for an *active* use case today (not something hypothetical).
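The rubric and score-line format above can be captured as a small helper. This is an illustrative sketch only: `BacklogScore` and `formatScoreLine` are invented names, not part of the package.

```typescript
// Hypothetical helper illustrating the scoring rubric above; not part of the codebase.
type Dim = 1 | 2 | 3;

interface BacklogScore {
  cor: Dim; // Correctness
  cap: Dim; // Capability
  eff: Dim; // Effort (inverted: 3 = hours, 1 = weeks)
  lev: Dim; // Leverage
  con: Dim; // Confidence
}

// Score = sum of the five dimensions (max 15), rendered in the backlog's score-line format.
function formatScoreLine(s: BacklogScore, blockedBy?: string): string {
  const total = s.cor + s.cap + s.eff + s.lev + s.con;
  const blocked = blockedBy ? `yes (blocked by ${blockedBy})` : 'no';
  return `**Score: ${total}** | Cor:${s.cor} Cap:${s.cap} Eff:${s.eff} ` +
    `Lev:${s.lev} Con:${s.con} | Blocked: ${blocked}`;
}
```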

-
+---

-
+**How to write a backlog item.** Every entry should follow this shape:

-
+```
+### Title (Date)

-**
+**Status: idea | bug | partial | done** | Priority: high/medium/low

-**
-- Is the bug in the workflow JSON (slices not wired to currentSlice tracking), in the engine (loop_control artifact evaluation), or in the way context variables are threaded between passes?
-- Does the issue affect all loops with `wr.loop_control`, or only the implementation loop in `wr.coding-task` specifically?
-- Is there a workaround agents can use today (e.g. setting a specific context variable that the loop decision gate does check)?
-- Should the loop decision gate fire after every pass regardless of `currentSlice.name` state, or only when the slice tracking is valid?
+**Score: N** | Cor:N Cap:N Eff:N Lev:N Con:N | Blocked: no / yes (blocked by X)

-
-
-### Intent gap: agent builds what it understood, not what the user meant (Apr 30, 2026)
+[2-4 sentences stating the problem plainly. What is wrong or missing? Why does it matter?
+No proposed solutions here -- just the problem.]

-**
+**Things to hash out:**
+- [Open question that needs a decision before design can begin]
+- [Another open question -- constraint, tradeoff, interaction with other systems]
+- [Keep these honest -- don't fill this section with questions you already know the answer to]
+```

-**
+**Rules for writing entries:**
+- **State the problem, not the solution.** "There is no way to invoke a routine directly" not "We should add a `worktrain invoke` command."
+- **No steering.** Don't tell future implementers how to build it. Capture what needs to exist, not how to make it exist.
+- **Things to hash out = genuine open questions.** Only include questions that actually need to be answered before design can start. If you know the answer, state it in the problem description.
+- **Relationships matter.** If this item depends on another, or would be superseded by another, name it explicitly.
+- **Be specific about what "done" looks like** when it's not obvious -- e.g. "done means an operator can invoke any routine by name from the CLI without writing a workflow."

-
+---

-
+## P0 / Critical (blocks WorkTrain from working correctly)

-
+### wr.coding-task forEach loop exposes broken agent-facing state (Apr 30, 2026)

-**
-- Agent fixes the symptom instead of the root cause because the task description named the symptom
-- Agent implements feature X when the user wanted feature Y that happens to use X
-- Agent interprets "add support for Z" as extending the existing system when the user wanted a new abstraction
-- Agent makes a local fix when the user wanted an architectural change
-- Agent's implementation is technically correct but violates unstated invariants the user assumed were obvious
-
-**Things to hash out:**
-- Where in the workflow should intent validation happen? Before the agent writes any code (Phase 0), the agent should be required to state its interpretation back in plain English. The user (or a validation step) confirms or corrects it before implementation begins. But this requires a human confirmation gate -- does that break the autonomous use case?
-- For fully autonomous sessions (no human in the loop), is there a way to detect a likely intent gap before the agent commits? Signals might include: the task description is short or vague, the agent's interpretation involves a significant architectural decision, the agent is about to delete or restructure existing code.
-- What is the right escalation path when the agent detects ambiguity itself? Currently `report_issue` handles task obstacles; there is no structured way for the agent to surface "I am not sure I understood this correctly" before acting.
-- The `wr.shaping` workflow exists precisely to close this gap for planned features -- the issue is urgent/reactive tasks that skip shaping entirely. How do we get intent validation without requiring a full shaping pass for every small task?
-- Can historical session notes help? If previous sessions have established what "X" means in this codebase (design decisions, naming conventions, architectural invariants), injecting that context before Phase 0 reduces the gap. This points toward the knowledge graph and persistent project memory as partial solutions.
-- Should WorkTrain have an explicit "confirm interpretation" step as a configurable option per trigger? A `requireIntentConfirmation: true` flag on the trigger that blocks autonomous start until the operator approves the agent's stated interpretation via the console or CLI.
+**Status: bug** | Priority: high

-
+**Score: 13** | Cor:3 Cap:1 Eff:2 Lev:2 Con:3 | Blocked: no

-
+The `phase-6-implement-slices` loop (forEach over `slices`) ran correctly mechanically -- it iterated all 8 slices and stopped. But the agent-facing representation was broken in ways that violate WorkRail's promise of consistency and determinism:

-
+1. **`currentSlice.name` showed `[unset]`** -- the agent was inside a forEach loop over `slices` with `itemVar: "currentSlice"`, but the template variable wasn't being projected into sessionContext before rendering. The agent couldn't see which slice it was on. This is an engine rendering issue in `buildLoopRenderContext` / `prompt-renderer.ts`.

-**
+2. **Agent emitted `wr.loop_control` artifacts that had no effect** -- the forEach loop silently ignores these. The agent did useless work the engine discarded without signaling that this was happening. A correct system should either prevent the agent from emitting artifacts that can't affect the loop, or tell the agent explicitly that artifact-based exit isn't available in this loop type.

-
+3. **Loop presented as "Pass N of 20" not "Slice 3 of 8"** -- the framing confused the agent about what was happening. The agent should be told it's iterating over concrete slices, not burning through a budget.

-
+The forEach loop *worked* but the agent experience was wrong. This matters because WorkRail's value is that agents should not be confused about their own loop state. An agent that emits useless artifacts, can't see its own iteration variable, and misunderstands whether the loop is progress-based or budget-based is not operating under the deterministic, correct framework WorkRail promises.

-**
-
-**Known manifestations:**
-- Agent correctly fixes a bug but the fix changes a public API contract, breaking callers it didn't check
-- Agent refactors a module for clarity but silently changes behavior in an edge case it considered minor
-- Agent adds a feature but disables or degrades an existing feature as a side effect, judging the tradeoff acceptable on its own
-- Agent's change passes all tests but the tests don't cover the degraded behavior
-- Agent notes a downstream impact in session notes but does not block, escalate, or file a follow-up ticket
-- **Agent reframes a bug as "a key tradeoff to document."** This is a specific and common failure: the agent detects a real problem it caused, correctly identifies that it's a problem, and instead of filing it as a bug or escalating, reclassifies it as an "accepted design decision" or "known limitation" in documentation. The bug is real. Documenting it is not fixing it. This pattern actively buries bugs.
+**GitHub issue:** https://github.com/EtienneBBeaulac/workrail/issues/920

**Things to hash out:**
--
-- Should the
--
-- Test coverage is the obvious mitigation -- if Y has tests, the agent's change would fail them. But not everything has tests, and agents can rationalize skipping test runs for "unrelated" paths.
-- Is there a way to detect likely collateral damage statically before the agent acts? A pre-commit check that measures what changed beyond the declared `filesChanged` list, for example, could surface unexpected side effects automatically.
-- The knowledge graph and architectural invariant rules (pattern and architecture validation) are partial solutions -- they can flag when a change violates a declared constraint. But they only work for constraints that have been explicitly codified.
+- Is `currentSlice.name = [unset]` a bug in `buildLoopRenderContext` (engine fix needed), or is it a workflow authoring issue (the slices array items don't have a `name` property)?
+- Should the engine prevent agents from emitting `wr.loop_control` artifacts inside forEach loops, or simply document that they have no effect?
+- Should forEach loops surface iteration progress ("slice 3 of 8") differently than while loops ("pass 3 of 20") in the step header text?
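The missing `itemVar` projection described in item 1 can be sketched as follows. This is an assumption-laden illustration: the types and the function name are invented here and do not reproduce the engine's actual `buildLoopRenderContext` internals.

```typescript
// Illustrative sketch of the missing projection -- invented types, not engine internals.
interface ForEachState {
  items: Array<Record<string, unknown>>;
  index: number;   // 0-based iteration counter
  itemVar: string; // e.g. "currentSlice"
}

function projectLoopContext(
  sessionContext: Record<string, unknown>,
  loop: ForEachState,
): Record<string, unknown> {
  return {
    ...sessionContext,
    // Without this projection, a template like {{currentSlice.name}} renders [unset].
    [loop.itemVar]: loop.items[loop.index],
    // Surface progress as "Slice 3 of 8", not a pass budget.
    loopIteration: loop.index + 1,
    loopTotal: loop.items.length,
  };
}
```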

---

@@ -177,9 +168,101 @@ The delivery pipeline was extracted into `delivery-pipeline.ts` with explicit st

## WorkTrain Daemon

+### Intent gap: agent builds what it understood, not what the user meant (Apr 30, 2026)
+
+**Status: idea** | Priority: medium
+
+**Score: 13** | Cor:3 Cap:3 Eff:2 Lev:3 Con:2 | Blocked: no
+
+This is one of the most fundamental failure modes for autonomous WorkTrain sessions and a blocker for production viability. An agent receives a task description, forms an interpretation of what's needed, and executes flawlessly against that interpretation -- but the interpretation was wrong. The code is correct for what the agent thought was asked. It is not what the user actually wanted. The user only discovers this after reviewing the PR, sometimes after it has already merged.
+
+This is categorically different from bugs (the agent implemented the right thing incorrectly) and scope creep (the agent did extra things). This is the agent solving the wrong problem well.
+
+**Why it's hard:** the agent's interpretation feels reasonable from the task description. The user's description was ambiguous, underspecified, or relied on context the agent didn't have. Neither party made an obvious mistake -- the gap is structural.
+
+**Known manifestations:**
+- Agent fixes the symptom instead of the root cause because the task description named the symptom
+- Agent implements feature X when the user wanted feature Y that happens to use X
+- Agent interprets "add support for Z" as extending the existing system when the user wanted a new abstraction
+- Agent makes a local fix when the user wanted an architectural change
+- Agent's implementation is technically correct but violates unstated invariants the user assumed were obvious
+
+**Done looks like:** a WorkTrain session that receives an ambiguous or underspecified task either (a) states its interpretation explicitly before acting and the coordinator can gate on approval, or (b) has access to enough prior context (from the knowledge graph or living work context) that the interpretation is reliably correct. A session that builds the wrong thing well should be detectable before it merges, not after.
+
+**Things to hash out:**
+- Where in the workflow should intent validation happen? Before the agent writes any code (Phase 0), the agent should be required to state its interpretation back in plain English. The user (or a validation step) confirms or corrects it before implementation begins. But this requires a human confirmation gate -- does that break the autonomous use case?
+- For fully autonomous sessions (no human in the loop), is there a way to detect a likely intent gap before the agent commits? Signals might include: the task description is short or vague, the agent's interpretation involves a significant architectural decision, the agent is about to delete or restructure existing code.
+- What is the right escalation path when the agent detects ambiguity itself? Currently `report_issue` handles task obstacles; there is no structured way for the agent to surface "I am not sure I understood this correctly" before acting.
+- The `wr.shaping` workflow exists precisely to close this gap for planned features -- the issue is urgent/reactive tasks that skip shaping entirely. How do we get intent validation without requiring a full shaping pass for every small task?
+- Can historical session notes help? If previous sessions have established what "X" means in this codebase (design decisions, naming conventions, architectural invariants), injecting that context before Phase 0 reduces the gap. This points toward the knowledge graph and persistent project memory as partial solutions.
+- Should WorkTrain have an explicit "confirm interpretation" step as a configurable option per trigger? A `requireIntentConfirmation: true` flag on the trigger that blocks autonomous start until the operator approves the agent's stated interpretation via the console or CLI.
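The per-trigger flag floated in the last question could gate session start roughly like this. Everything here is hypothetical: neither `requireIntentConfirmation` nor these type names exist in WorkTrain today; the sketch only makes the proposal concrete.

```typescript
// Hypothetical gate for the proposed per-trigger flag; nothing here exists yet.
interface TriggerOptions {
  workflow: string;
  requireIntentConfirmation?: boolean;
}

type StartDecision =
  | { start: true }
  | { start: false; awaiting: 'operator-approval'; interpretation: string };

function gateAutonomousStart(
  opts: TriggerOptions,
  statedInterpretation: string,
  operatorApproved: boolean,
): StartDecision {
  if (opts.requireIntentConfirmation && !operatorApproved) {
    // Block autonomous start until the operator approves the agent's stated interpretation.
    return { start: false, awaiting: 'operator-approval', interpretation: statedInterpretation };
  }
  return { start: true };
}
```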
+
+---
+
+### Scope rationalization: agent silently accepts collateral damage (Apr 30, 2026)
+
+**Status: idea** | Priority: medium
+
+**Score: 13** | Cor:3 Cap:3 Eff:2 Lev:3 Con:2 | Blocked: no
+
+When an agent makes a change that breaks or degrades something outside its immediate task scope, it often recognizes the impact but rationalizes it as acceptable because "that's not in scope for this task." The reasoning feels locally valid -- the agent was asked to do X, X is done correctly, the side effect on Y is noted but deprioritized. This produces a PR that is correct for X and silently broken for Y.
+
+This is exactly what happened with the commit SHA change: setting `agentCommitShas` to always empty correctly fixes the faked SHA bug, but degrades the console's SHA display for all sessions going forward. A scoped agent might note "this makes the console show empty SHAs" and proceed anyway because fixing the console display is "a separate ticket."
+
+**Why this is insidious:** the agent's reasoning is locally coherent. It did not make a mistake within its scope. The problem is that autonomous agents operating in isolation cannot always see when a locally correct change has unacceptable global consequences -- and even when they can see it, they lack a good mechanism to stop, escalate, and surface the impact rather than proceeding.
+
+**Known manifestations:**
+- Agent correctly fixes a bug but the fix changes a public API contract, breaking callers it didn't check
+- Agent refactors a module for clarity but silently changes behavior in an edge case it considered minor
+- Agent adds a feature but disables or degrades an existing feature as a side effect, judging the tradeoff acceptable on its own
+- Agent's change passes all tests but the tests don't cover the degraded behavior
+- Agent notes a downstream impact in session notes but does not block, escalate, or file a follow-up ticket
+- **Agent reframes a bug as "a key tradeoff to document."** This is a specific and common failure: the agent detects a real problem it caused, correctly identifies that it's a problem, and instead of filing it as a bug or escalating, reclassifies it as an "accepted design decision" or "known limitation" in documentation. The bug is real. Documenting it is not fixing it. This pattern actively buries bugs.
+
+**Done looks like:** when an agent makes a change that degrades something outside its scope, it surfaces the degradation explicitly before the PR merges -- either by blocking (filing a follow-up issue as a condition of the current PR merging) or escalating to the coordinator for a decision. A PR that silently buries a regression in a comment or documentation should not pass review.
+
+**Things to hash out:**
+- How does an agent distinguish "acceptable tradeoff within scope" from "collateral damage that must be escalated"? The line is fuzzy and context-dependent. A hard rule ("never degrade existing behavior") is too strict for refactors; a soft heuristic ("if it affects other code, escalate") is too broad.
+- Should the agent be required to enumerate side effects as part of the verification phase, and should the coordinator review that list before merging? This is the proof record concept applied to impact assessment rather than just correctness.
+- What is the right mechanism for the agent to pause and escalate? Currently `report_issue` is for task obstacles; `signal_coordinator` is for coordinator events. There is no structured "I need a decision on whether this tradeoff is acceptable" signal.
+- Test coverage is the obvious mitigation -- if Y has tests, the agent's change would fail them. But not everything has tests, and agents can rationalize skipping test runs for "unrelated" paths.
+- Is there a way to detect likely collateral damage statically before the agent acts? A pre-commit check that measures what changed beyond the declared `filesChanged` list, for example, could surface unexpected side effects automatically.
+- The knowledge graph and architectural invariant rules (pattern and architecture validation) are partial solutions -- they can flag when a change violates a declared constraint. But they only work for constraints that have been explicitly codified.
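The static pre-commit check floated above could start as a plain set difference. A minimal sketch with an invented helper name, assuming the agent declares a `filesChanged` list:

```typescript
// Hypothetical pre-commit check: surface files changed beyond the declared list.
function undeclaredChanges(declared: string[], actuallyChanged: string[]): string[] {
  const allowed = new Set(declared);
  return actuallyChanged.filter((file) => !allowed.has(file));
}
```

Anything this returns is a candidate side effect the agent should escalate rather than rationalize away.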
+
+---
+
The autonomous workflow runner (`worktrain daemon`). Completely separate from the MCP server -- calls the engine directly in-process.


+### Subagent context package: project vision and task goal baked into spawning (Apr 30, 2026)
+
+**Status: idea** | Priority: high
+
+**Score: 12** | Cor:2 Cap:3 Eff:2 Lev:3 Con:3 | Blocked: no
+
+When WorkTrain spawns a subagent today, the operator (or the main agent) must manually write out all context: what the project is, what WorkTrain's vision is, what the task is trying to accomplish, what documents exist, what the end goal is. Subagents know nothing -- no conversation history, no project familiarity, no awareness of the vision. If the context briefing is thin or missing, the subagent works in the dark and produces generic output.
+
+Two things need to be baked into the spawning infrastructure:
+
+1. **Project-level context package**: every spawned subagent automatically receives a synthesized briefing about the WorkTrain project -- what it is, what it is trying to become, the architectural layers (daemon vs MCP server vs console), the coding philosophy, and pointers to key docs (AGENTS.md, backlog.md, relevant design docs). This should not require the spawning agent to manually write it out each time.
+
+2. **Task-level context package**: every spawned subagent automatically receives the vision and end goal of the specific task -- not just the technical instructions, but WHY the task matters, what it enables, and how it fits into the larger picture. A subagent that understands the goal can adapt when it hits unexpected situations; one that only has instructions cannot.
+
+This is related to the "Coordinator context injection standard" and "Context budget per spawned agent" backlog entries, but is broader -- it applies to all subagent spawning, not just coordinator-spawned child sessions.
+
+**Critical design constraint:** WorkTrain may not always have a "main" agent assembling context dynamically. A pure coordinator pipeline is deterministic TypeScript code -- it knows the goal it was given and the results it gets back, but has no ambient understanding of the project vision and cannot synthesize what context a subagent needs at runtime. This means context packages cannot be assembled dynamically by the spawning agent; they must be **pre-built and attached as structured data**, assembled by the daemon from configured sources before the session starts. This is closer to the trigger-derived knowledge configuration idea than to runtime context assembly.
+
+**Things to hash out:**
+- Where does the project-level context package live and how is it kept current? A static template in `~/.workrail/daemon-soul.md` covers behavioral rules but not project vision -- these are different concerns.
+- In a pure coordinator pipeline (no main agent), who decides what goes in the context package for each session type? Must be declared configuration, not runtime synthesis.
+- Should context profiles be declared per workflow, per trigger type, or per session role (coding vs review vs discovery)?
+- What is the right size for an auto-injected context package? Too small loses signal; too large crowds out the actual task prompt.
+- Should the package be structured (JSON/YAML) for programmatic injection, or prose for human readability?
+- How does this interact with the existing workspace context injection (CLAUDE.md, AGENTS.md, daemon-soul.md)?
+- Whether a "main" orchestrating agent is needed at all, or whether pure coordinator scripts plus well-configured context packages are sufficient -- this is an open question that requires real pipeline testing to answer.
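The "pre-built, attached as structured data" constraint above could take roughly this shape. Illustrative only: no `ContextPackage` type exists in WorkTrain today, and the fields are assumptions drawn from the entry's two-package split.

```typescript
// Illustrative only: a context package assembled by the daemon from declared
// configuration before the session starts, never synthesized at runtime.
interface ContextPackage {
  project: {
    summary: string;        // what WorkTrain is and is trying to become
    architecture: string[]; // e.g. daemon vs MCP server vs console
    docs: string[];         // pointers: AGENTS.md, backlog.md, design docs
  };
  task: {
    goal: string; // what the task accomplishes
    why: string;  // why it matters and what it enables
  };
}

// A deterministic coordinator can only attach what was declared up front.
function renderBriefing(pkg: ContextPackage, instructions: string): string {
  return [
    `# Project\n${pkg.project.summary}`,
    `# Task\n${pkg.task.goal}\n${pkg.task.why}`,
    `# Instructions\n${instructions}`,
  ].join('\n\n');
}
```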
+
+---
+
### Agent-assisted backlog and issue enrichment (Apr 28, 2026)

**Status: idea** | Priority: medium
@@ -248,6 +331,40 @@ Five dimensions, each scored 1-3. Score = sum (max 15). Items marked **Blocked**

---

+### `delivery_failed` unreachable in `getChildSessionResult` -- type promises more than code delivers (Apr 30, 2026)
+
+**Status: bug** | Priority: medium
+
+**Score: 10** | Cor:3 Cap:1 Eff:2 Lev:2 Con:2 | Blocked: no
+
+`ChildSessionResult` has `reason: 'delivery_failed'` as a variant of `kind: 'failed'`. However `fetchChildSessionResult` in `coordinator-deps.ts` reads session status through `ConsoleService.getSessionDetail`, which returns statuses like `complete`/`blocked`/`in_progress` -- it never returns a `delivery_failed` status. `delivery_failed` is a `TriggerRouter`-level concept (callbackUrl POST failure) that is not stored as a session status in the event log. Child sessions spawned via `spawnSession`/`spawnAndAwait` have no `callbackUrl` and cannot produce it through this code path.
+
+The result: coordinators using `getChildSessionResult` can never observe `reason: 'delivery_failed'`, even though the type says they might. This violates the "make illegal states unrepresentable" principle -- the type union promises a variant the implementation cannot produce on this path.
+
+**Architectural fix (not a comment):** surface `delivery_failed` through session status. When `TriggerRouter` records a `delivery_failed` outcome, write a corresponding session event or status that `ConsoleService.getSessionDetail` returns. Then `fetchChildSessionResult` can map it correctly. This closes the gap between what the type promises and what the infrastructure delivers.
+
+Alternative: if `spawnSession`/`spawnAndAwait` child sessions genuinely cannot have `delivery_failed` outcomes by design, remove `reason: 'delivery_failed'` from `ChildSessionResult` entirely and document that it only exists in `spawn_agent`'s direct outcome mapping.
+
+**Things to hash out:**
+- Should `delivery_failed` be surfaced through ConsoleService (requires touching session status storage), or removed from `ChildSessionResult` since the `spawnSession` path provably cannot produce it?
+- If surfaced: what event or field in the session store carries this status, and how does ConsoleService project it?
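The mismatch can be seen in a reduced sketch. The real types live in `coordinators/types.d.ts`; the variant names other than `delivery_failed` and the status mapping below are placeholders, not the actual union or implementation.

```typescript
// Reduced sketch of the gap described above; variant names besides
// 'delivery_failed' are placeholders, not the real union.
type ChildSessionResult =
  | { kind: 'completed' }
  | { kind: 'failed'; reason: 'error' | 'timed_out' | 'delivery_failed' };

// Statuses ConsoleService.getSessionDetail can actually report on this path.
type SessionStatus = 'complete' | 'blocked' | 'in_progress';

function toChildResult(status: SessionStatus): ChildSessionResult {
  switch (status) {
    case 'complete':
      return { kind: 'completed' };
    case 'blocked':
    case 'in_progress':
      // No session status maps to 'delivery_failed' -- the variant is
      // unreachable here, even though the type says coordinators might see it.
      return { kind: 'failed', reason: 'error' };
  }
}
```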
+
+---
+
+### `spawnAndAwait` duplicates ~90 lines of polling logic from `awaitSessions` (Apr 30, 2026)
+
+**Status: tech debt** | Priority: low
+
+**Score: 8** | Cor:1 Cap:1 Eff:2 Lev:1 Con:3 | Blocked: no
+
+`spawnAndAwait` in `coordinator-deps.ts` contains an inline polling loop (~90 lines) that duplicates the logic in `awaitSessions`. The WHY comment explains a real construction-time constraint: object literals cannot reference sibling methods by name during construction. But this constraint applies to methods on the returned object -- it does not apply to closure-level functions, which are already used for `fetchAgentResult` and `fetchChildSessionResult`.
+
+**Fix:** extract a `pollUntilTerminal(handles: string[], timeoutMs: number): Promise<'completed' | 'timed_out' | 'degraded'>` closure-level function (before the `return {}` block). Have both `awaitSessions` and `spawnAndAwait` call it. This eliminates the duplication without violating the construction-time constraint.
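A sketch of that extraction follows. The `getStatus` parameter, the status model, and the polling interval are stand-ins: the real coordinator-deps internals are not reproduced here, only the proposed shape.

```typescript
// Sketch of the proposed closure-level helper; status model and interval are
// stand-ins, not the real coordinator-deps internals.
type AwaitOutcome = 'completed' | 'timed_out' | 'degraded';
type PollStatus = 'running' | 'terminal' | 'failed';

async function pollUntilTerminal(
  handles: string[],
  timeoutMs: number,
  getStatus: (handle: string) => Promise<PollStatus>,
  pollIntervalMs = 1_000,
): Promise<AwaitOutcome> {
  const deadline = Date.now() + timeoutMs;
  let sawFailure = false;
  do {
    const statuses = await Promise.all(handles.map(getStatus));
    sawFailure = sawFailure || statuses.includes('failed');
    if (!statuses.includes('running')) {
      // Every session reached a terminal state before the deadline.
      return sawFailure ? 'degraded' : 'completed';
    }
    await new Promise((resolve) => setTimeout(resolve, pollIntervalMs));
  } while (Date.now() < deadline);
  return 'timed_out';
}
```

Defined as a plain closure before the `return {}` block, both `awaitSessions` and `spawnAndAwait` can call it, sidestepping the sibling-method constraint the WHY comment describes.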
|
|
363
|
+
|
|
364
|
+
**GitHub issue:** https://github.com/EtienneBBeaulac/workrail/issues/921
|
|
365
|
+
|
|
366
|
+
---
|
|
367
|
+
|
|
251
368
|
### Daemon architecture: remaining migrations (Apr 29, 2026)
|
|
252
369
|
|
|
253
370
|
**Status: partial** | A9 shipped Apr 29, 2026.
|
|
@@ -922,6 +1039,31 @@ Combined with the `DEFAULT_MAX_TURNS` cap, this provides defense-in-depth agains

 The durable session store, v2 engine, and workflow authoring features shared by all three systems.

+### WorkTrain as the canonical workflow author -- MCP as a derived runtime (Apr 30, 2026)
+
+**Status: idea** | Priority: high
+
+**Score: 13** | Cor:2 Cap:3 Eff:1 Lev:3 Con:2 | Blocked: no
+
+Today workflows are authored once and expected to work identically in both runtimes: the WorkRail MCP server (human-in-the-loop, Claude Code) and the WorkTrain daemon (fully autonomous, coordinator-driven). In practice they don't -- a workflow authored for human use has `requireConfirmation` gates that block autonomous execution, step prompts that assume the human is reading them, and phase structures that assume a single continuous session. Conversely, a workflow good for autonomous use has no natural pause points, produces typed structured outputs that humans find hard to read mid-session, and chains phases that a human might want to interrupt.
+
+The current response is to author separate "agentic variants" (`wr.coding-task` vs `coding-task-workflow.agentic.v2`). This is the wrong direction: it creates duplicate maintenance burden, improvements to one don't propagate to the other, and it means there is no single source of truth for what a workflow does.
+
+There should be one version of each workflow, not two. Improvements to one should benefit the other automatically. The self-improvement loop WorkTrain runs on its own workflows should produce better workflows for everyone, not just daemon sessions. The question is how to structure authorship and any adaptation layer so this is possible without forcing workflows into an awkward compromise that works poorly in both contexts.
+
+**What this enables:** WorkTrain can autonomously improve workflows using `wr.workflow-for-workflows`, and those improvements automatically benefit MCP users. The self-improvement loop produces better workflows for everyone, not just daemon sessions. Workflow quality compounds because there is only one version to improve.
+
+**Relationship to existing entries:**
+- "Workflow runtime adapter: one spec, two runtimes" (Shared/Engine) is a narrower version of this idea focused on parallelism and `requireConfirmation` gates. This entry is about the authoring philosophy and source-of-truth question, not just the adapter mechanics.
+- `wr.workflow-for-workflows` is how WorkTrain improves workflows autonomously -- this entry determines what it improves toward.
+
+**Things to hash out:**
+- What does the MCP conversion layer actually do? Adding pause points is straightforward. Adapting output formats (structured JSON → human-readable prose) may require active LLM translation, not just structural transformation.
+- Some workflow steps are genuinely different between runtimes -- a step that spawns parallel child sessions in the daemon doesn't have a clean MCP equivalent. Does the conversion layer skip those, simulate them sequentially, or require the author to declare a fallback?
+- If WorkTrain is the authoring target, existing workflows authored for MCP need migration. What is the migration path and who does it -- the author, WorkTrain itself, or a one-time script?
+- How do `requireConfirmation` gates fit? In the daemon they are removed or auto-satisfied by the coordinator. In MCP they pause for the human. Does the workflow declare them or does the conversion layer infer them?
+- Is the conversion layer purely structural (rearranging/omitting steps) or does it require understanding the semantic intent of each step?
+

 ### Improve commit SHA gathering consistency in wr.coding-task

@@ -1356,7 +1498,7 @@ Routing by `finding.category` from `wr.review_verdict`:

 ### Workflow execution time tracking and prediction

-**Status:
+**Status: partial** | Tracking shipped; prediction/calibration layer not yet built

 **Score: 11** | Cor:1 Cap:2 Eff:3 Lev:2 Con:3 | Blocked: no

@@ -1834,10 +1976,14 @@ A proof record contains: `prNumber`, `goal`, `verificationChain` (array of `{ ki

 ### Scripts-first coordinator: avoid the main agent wherever possible (Apr 15, 2026)

-**Status:
+**Status: partial** | Foundation shipped PR #908 (Apr 30, 2026)

 **Score: 12** | Cor:1 Cap:3 Eff:2 Lev:3 Con:3 | Blocked: no

+**What shipped:** `ChildSessionResult` discriminated union, `getChildSessionResult()`, `spawnAndAwait()`, `parentSessionId` threading, `wr.coordinator_result` artifact schema. The typed coordinator primitives that enable in-process coordinator scripts are now available.
+
+**What's still needed:** the actual coordinator scripts (full development pipeline, bug-fix coordinator, grooming coordinator) and the `worktrain spawn`/`await` CLI commands that wrap these primitives for shell scripts.
+
 **The insight:** In a coordinator workflow, the main agent spends most of its time on mechanical work -- reading PR lists, checking CI status, deciding whether findings are blocking, sequencing merges. That's all deterministic logic. An LLM is expensive, slow, and inconsistent for deterministic work.

 **The principle:** the scripts-over-agent rule applies at the coordinator level too. The coordinator's job is to drive a DAG of child sessions. The DAG structure, routing decisions, and termination conditions should be scripts, not LLM reasoning.
@@ -2003,7 +2149,7 @@ WorkTrain notices things without being asked. After a batch of work lands, it sc

 ### Native multi-agent orchestration: coordinator sessions + session DAG (Apr 15, 2026)

-**Status:
+**Status: partial** | Typed primitives shipped PR #908 (Apr 30, 2026)

 **Score: 10** | Cor:1 Cap:3 Eff:1 Lev:3 Con:2 | Blocked: no

@@ -2322,6 +2468,99 @@ A workflow that aggregates activity across git history, GitLab/GitHub MRs and re

 ## Platform Vision (longer-term)

+### Move backlog to a dedicated worktrain-meta repo with version control (Apr 30, 2026)
+
+**Status: idea** | Priority: high
+
+**Score: 11** | Cor:2 Cap:2 Eff:2 Lev:3 Con:3 | Blocked: no
+
+The backlog (`docs/ideas/backlog.md`) lives in the code repo. Every feature branch has its own version. Ideas added mid-session on a feature branch are held hostage until that PR merges. If two branches modify the backlog simultaneously, merge conflicts occur. There is no single authoritative place to capture an idea that immediately applies everywhere.
+
+A dedicated `worktrain-meta` repo (e.g. `~/git/personal/worktrain-meta/`) would hold the backlog as the only concern. No feature branches -- ideas are committed directly to main. Full git history preserved. No code PR ever touches it.
+
+Done means: an operator or agent can add a backlog idea from any branch or context, commit directly, and it is immediately visible on all other branches and in all other sessions.
+
+**Note on format:** when this migration happens, one-file-per-item with YAML frontmatter becomes viable. Frontmatter makes scores, status, dates, and blocked-by machine-readable without prose parsing. The `npm run backlog` script would read frontmatter instead of regex-parsing Score lines. This is the right time to adopt that format -- in the current single-file structure frontmatter would require a custom delimiter scheme, but one-file-per-item makes it natural.
+
+**Things to hash out:**
+- Should the worktrain-meta repo also hold the roadmap docs, now-next-later, open-work-inventory? Or just the backlog?
+- How do subagents spawned in a worktree find the backlog? They need a configured path, not relative to the code workspace.
+- When native structured backlog operations are built (SQLite), does the storage backend live in worktrain-meta (git-tracked history) or `~/.workrail/data/` (local queryable)? Both have merit.
+
+---
+
+### Invocable routines: dispatch an existing routine directly as a task (Apr 30, 2026)
+
+**Status: idea** | Priority: high
+
+**Score: 12** | Cor:1 Cap:3 Eff:2 Lev:3 Con:3 | Blocked: no
+
+WorkRail has a routines system (`workflows/routines/`) for reusable workflow fragments. But routines can only be used embedded inside a larger workflow -- there is no way to invoke a routine directly as a standalone task. Many useful repeat tasks are process-shaped (same steps every time, structured output) and could be expressed as short 1-2 step workflows or existing routines. Today an operator who wants to run "context gathering" or "hypothesis challenge" on demand has to either build a wrapper workflow or do it manually.
+
+There is no dispatch surface for standalone routine invocation. Done means: an operator can invoke any routine by name from the CLI or a trigger, and the result is durable in the session store.
+
+**Relationship to existing ideas:** this is one half of the lightweight agents gap (the process-shaped half). The ad-hoc query half is a separate entry below.
+
+**Things to hash out:**
+- Should this be a new CLI command (`worktrain invoke <routineId> --goal "..."`) or a trigger type, or both?
+- Do routines need output contracts defined before they can be invoked standalone, or is free-form output acceptable?
+- How does the session store record a routine-only run vs a full workflow run? Should they be distinguished?
+
+---
+
+### Ad-hoc query agents: answer questions about the workspace without a full workflow (Apr 30, 2026)
+
+**Status: idea** | Priority: high
+
+**Score: 11** | Cor:1 Cap:3 Eff:2 Lev:2 Con:2 | Blocked: yes (needs knowledge graph for efficient context)
+
+There is a class of tasks that are question-shaped rather than process-shaped: "why does the session store use a manifest file?", "what would break if I changed this function?", "summarize what shipped this week." These don't have fixed steps, don't produce structured output contracts, and don't benefit from workflow phase gating. Running a full `wr.coding-task` session for them wastes 10 minutes on overhead. Not supporting them means the operator has to context-switch to Claude Code or do them manually.
+
+These tasks need a capable agent with workspace context but no workflow structure. They are stateless, single-purpose, and short-lived.
+
+Examples of what this enables:
+- `worktrain ask "why does the session store use a manifest file?"`
+- `worktrain explain pr/908`
+- `worktrain impact src/trigger/coordinator-deps.ts`
+- `worktrain diff-since "last week"`
+
+Done means: an operator can ask a natural-language question about the workspace and get a grounded answer within seconds, without starting a full session.
+
+**Relationship to existing ideas:** `worktrain talk` (interactive ideation) is the conversational, stateful version of this. Standup status generator is a scheduled instance of the same pattern. Invocable routines (entry above) are the process-shaped complement. This entry covers the unstructured query case.
+
+**Things to hash out:**
+- Without the knowledge graph, these queries require full file-scanning on every invocation -- too slow to be useful. Is there a minimum viable version before the KG is built, or does this wait?
+- What is the boundary between "this is a quick query" and "this actually needs a full discovery session"? Who decides -- the operator, or WorkTrain itself?
+- Should outputs be ephemeral (printed to terminal, not stored) or durable (in session store)? Durability adds value for audit but adds overhead.
+
+---
+
+### Self-restart after shipping changes to itself (Apr 30, 2026)
+
+**Status: idea** | Priority: medium
+
+**Score: 11** | Cor:2 Cap:3 Eff:2 Lev:2 Con:2 | Blocked: yes (needs self-improvement loop operational)
+
+If WorkTrain can build and ship changes to itself autonomously, the natural next step is that it also restarts itself with those changes. Today, after a WorkTrain daemon session ships a change to the workrail repo, the daemon continues running the old binary. The operator has to manually run `worktrain daemon --stop && worktrain daemon --start` to pick up the new version. In a self-improving system running overnight, this is a human intervention point that should not exist.
+
+**What this requires:**
+1. After a session that modifies WorkTrain itself merges to main, the daemon detects it was running on this repo
+2. The daemon rebuilds (`npm run build`) and restarts itself cleanly -- completing any in-flight sessions first, then performing a graceful restart with the new binary
+3. After restart, the daemon logs what changed so the operator can review
+
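The detection in step 1 could be sketched as a staleness check. Everything here is an assumption for illustration -- the bundle path, the idea of comparing the built artifact's mtime against the newest commit touching `src/`, and the helper names are hypothetical, not the real daemon implementation:

```typescript
import { execSync } from 'node:child_process';
import { statSync } from 'node:fs';

// Pure comparison so the policy is testable without git or a filesystem.
export function binaryIsStale(builtAtMs: number, lastSrcCommitMs: number): boolean {
  return lastSrcCommitMs > builtAtMs;
}

// Wiring: newest commit on the current branch touching src/, vs the built bundle.
export function needsRestart(bundlePath: string): boolean {
  const builtAtMs = statSync(bundlePath).mtimeMs;
  // %ct = committer date as a unix epoch (seconds).
  const lastCommitEpoch = execSync('git log -1 --format=%ct -- src/', {
    encoding: 'utf8',
  }).trim();
  return binaryIsStale(builtAtMs, Number(lastCommitEpoch) * 1000);
}
```

A heartbeat could call `needsRestart` periodically and, when it returns true, drain in-flight sessions before respawning.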
+This is related to the "daemon binary stale after rebuild" P0 gap, but goes further: not just warning about staleness, but actually handling the upgrade cycle automatically.
+
+**Why this matters for the self-improvement loop:** if WorkTrain ships 5 improvements to itself in a day but the operator has to manually restart it 5 times, the loop isn't truly autonomous. Full autonomy requires the restart to be part of the pipeline.
+
+**Things to hash out:**
+- What triggers the restart check? After every merge to main that touches `src/`? After a successful `npm run build`? On a heartbeat that detects binary staleness?
+- How does the daemon ensure in-flight sessions complete before restarting? Does it drain the active session set or hard-stop?
+- What is the rollback path if the new binary fails to start (startup crash, broken build)? The daemon needs to detect this and either roll back or alert the operator.
+- Should the restart happen immediately or at a configurable "quiet period" (e.g. 2am) to avoid disrupting active sessions during the day?
+- Self-modification is inherently risky -- a buggy change to the daemon's restart logic could make the daemon unable to restart at all. What safeguards prevent this?
+
+---
+
 ### WorkTrain as a first-class project participant: ideal backlog and planning capabilities (Apr 30, 2026)

 **Status: idea** | Priority: high (long-term)
package/docs/planning/README.md
CHANGED

@@ -54,9 +54,9 @@ Not every roadmap item must become a ticket immediately.

 ## Status ownership

-**Status lives in
+**Status lives in `docs/ideas/backlog.md`**. Each entry has a `Status:` line (idea / partial / done / bug). Use `npm run backlog` to see a scored, sorted view.

-Plan docs in `docs/plans/` describe **design and intent** -- not current status. When work ships, update the
+Plan docs in `docs/plans/` describe **design and intent** -- not current status. When work ships, update the backlog entry status, not the plan doc.

 ## Rules of thumb

@@ -95,10 +95,7 @@ Existing feature-specific plans in `docs/plans/` still matter. Treat them as **f

 ## Starting points

-- `docs/
-- `docs/
-- `docs/roadmap/
-- `docs/
-- `docs/planning/docs-taxonomy-and-migration-plan.md`
-- `docs/tickets/README.md`
-- `docs/tickets/next-up.md`
+- `docs/vision.md` -- what WorkTrain is and where it's going (read this first)
+- `docs/ideas/backlog.md` -- the backlog (`npm run backlog` for priority view)
+- `docs/roadmap/legacy-planning-status.md` -- status map for older planning docs
+- `docs/tickets/next-up.md` -- scratch space for near-term tickets
@@ -0,0 +1,8 @@
+# Archive
+
+These docs were superseded by `docs/ideas/backlog.md` + `npm run backlog`.
+
+- `now-next-later.md` -- manual roadmap curation; replaced by backlog scoring and `npm run backlog`
+- `open-work-inventory.md` -- normalized list of partial/unimplemented work; replaced by `Status: partial` entries in the backlog
+
+Kept for historical reference only. Do not update.
package/docs/tickets/next-up.md
CHANGED

@@ -1,6 +1,11 @@
 # Next Up

-
+Scratch space for grooming near-term tickets before they become GitHub issues.
+For the current priority ordering, run `npm run backlog -- --min-score 11 --unblocked-only` or see `docs/ideas/backlog.md`.
+
+---
+
+> The tickets below are historical. Active work is tracked via GitHub issues and the backlog.

 ---
