@exaudeus/workrail 3.76.0 → 3.77.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/dist/console-ui/assets/{index-DFZjlsUM.js → index-D9pYbwS0.js} +1 -1
- package/dist/console-ui/index.html +1 -1
- package/dist/daemon/context-loader.d.ts +1 -1
- package/dist/daemon/core/agent-client.d.ts +7 -0
- package/dist/daemon/core/agent-client.js +31 -0
- package/dist/daemon/core/index.d.ts +6 -0
- package/dist/daemon/core/index.js +19 -0
- package/dist/daemon/core/session-context.d.ts +14 -0
- package/dist/daemon/core/session-context.js +24 -0
- package/dist/daemon/core/session-result.d.ts +10 -0
- package/dist/daemon/core/session-result.js +92 -0
- package/dist/daemon/core/system-prompt.d.ts +6 -0
- package/dist/daemon/core/system-prompt.js +151 -0
- package/dist/daemon/io/conversation-log.d.ts +2 -0
- package/dist/daemon/io/conversation-log.js +45 -0
- package/dist/daemon/io/execution-stats.d.ts +7 -0
- package/dist/daemon/io/execution-stats.js +86 -0
- package/dist/daemon/io/index.d.ts +5 -0
- package/dist/daemon/io/index.js +24 -0
- package/dist/daemon/io/session-notes-loader.d.ts +4 -0
- package/dist/daemon/io/session-notes-loader.js +45 -0
- package/dist/daemon/io/soul-loader.d.ts +3 -0
- package/dist/daemon/io/soul-loader.js +68 -0
- package/dist/daemon/io/workspace-context-loader.d.ts +17 -0
- package/dist/daemon/io/workspace-context-loader.js +137 -0
- package/dist/daemon/runner/agent-loop-runner.d.ts +28 -0
- package/dist/daemon/runner/agent-loop-runner.js +250 -0
- package/dist/daemon/runner/construct-tools.d.ts +5 -0
- package/dist/daemon/runner/construct-tools.js +30 -0
- package/dist/daemon/runner/finalize-session.d.ts +3 -0
- package/dist/daemon/runner/finalize-session.js +75 -0
- package/dist/daemon/runner/index.d.ts +8 -0
- package/dist/daemon/runner/index.js +18 -0
- package/dist/daemon/runner/pre-agent-session.d.ts +7 -0
- package/dist/daemon/runner/pre-agent-session.js +227 -0
- package/dist/daemon/runner/runner-types.d.ts +73 -0
- package/dist/daemon/runner/runner-types.js +39 -0
- package/dist/daemon/runner/tool-schemas.d.ts +1 -0
- package/dist/daemon/runner/tool-schemas.js +151 -0
- package/dist/daemon/session-scope.d.ts +1 -1
- package/dist/daemon/startup-recovery.d.ts +20 -0
- package/dist/daemon/startup-recovery.js +323 -0
- package/dist/daemon/state/index.d.ts +6 -0
- package/dist/daemon/state/index.js +14 -0
- package/dist/daemon/state/session-state.d.ts +23 -0
- package/dist/daemon/state/session-state.js +44 -0
- package/dist/daemon/state/stuck-detection.d.ts +22 -0
- package/dist/daemon/state/stuck-detection.js +25 -0
- package/dist/daemon/state/terminal-signal.d.ts +9 -0
- package/dist/daemon/state/terminal-signal.js +10 -0
- package/dist/daemon/tools/file-tools.d.ts +1 -1
- package/dist/daemon/turn-end/detect-stuck.d.ts +2 -2
- package/dist/daemon/turn-end/detect-stuck.js +2 -2
- package/dist/daemon/turn-end/step-injector.d.ts +1 -1
- package/dist/daemon/types.d.ts +105 -0
- package/dist/daemon/types.js +11 -0
- package/dist/daemon/workflow-enricher.d.ts +16 -0
- package/dist/daemon/workflow-enricher.js +58 -0
- package/dist/daemon/workflow-runner.d.ts +13 -277
- package/dist/daemon/workflow-runner.js +63 -1421
- package/dist/manifest.json +231 -31
- package/dist/trigger/coordinator-deps.d.ts +1 -1
- package/dist/trigger/delivery-client.d.ts +1 -1
- package/dist/trigger/delivery-pipeline.d.ts +1 -1
- package/dist/trigger/notification-service.d.ts +1 -1
- package/dist/trigger/trigger-listener.js +6 -2
- package/dist/trigger/trigger-router.d.ts +2 -2
- package/docs/ideas/backlog.md +249 -25
- package/docs/reference/worktrain-daemon-invariants.md +33 -49
- package/docs/vision.md +5 -15
- package/package.json +2 -2
package/docs/ideas/backlog.md
CHANGED
|
@@ -192,6 +192,108 @@ The delivery pipeline was extracted into `delivery-pipeline.ts` with explicit st
|
|
|
192
192
|
|
|
193
193
|
## WorkTrain Daemon
|
|
194
194
|
|
|
195
|
+
### Context injection bugs: double-injection, byte-slice truncation, workspaceRules[0] drop (Apr 30, 2026)
|
|
196
|
+
|
|
197
|
+
**Status: idea** | Priority: high
|
|
198
|
+
|
|
199
|
+
**Score: 13** | Cor:3 Cap:1 Eff:3 Lev:3 Con:3 | Blocked: no
|
|
200
|
+
|
|
201
|
+
Three active bugs in the context injection pipeline that waste tokens, produce incorrect truncation, and silently discard workspace context. Confirmed by codebase audit (Apr 30, 2026).
|
|
202
|
+
|
|
203
|
+
1. **Double-injection (`session-context.ts:117-119`):** `trigger.context` is JSON-serialized in full into the initial user message. Since coordinators write `assembledContextSummary` *into* `trigger.context`, the assembled context appears twice -- once in the system prompt (8KB cap applied) and once in the initial user message (uncapped). These diverge when the content exceeds 8KB.
|
|
204
|
+
|
|
205
|
+
2. **Byte-slice truncation (`system-prompt.ts:200-202`):** `assembledContextSummary` is truncated by raw byte index (`ctxStr.slice(0, 8192)`), which splits mid-sentence, mid-section, and can produce malformed UTF-8. The section-aware `buildBudgetedOutput()` pattern already exists in `src/coordinators/context-assembly.ts` and handles this correctly.
|
|
206
|
+
|
|
207
|
+
3. **`workspaceRules[0]` silent drop (`session-context.ts:106`):** `ContextBundle.workspaceRules` is typed as `ContextRule[]` but only `[0]` is consumed. All additional workspace context rules are silently dropped. The type implies per-file rules are supported; the consumer silently ignores them.
|
|
208
|
+
|
|
209
|
+
**Also in scope:** introduce `WorkflowContextSlots` typed fields on `WorkflowTrigger` (or a companion type) for system-managed context fields (`assembledContextSummary`, `priorSessionNotes`, `gitDiffStat`). This eliminates the stringly-typed `trigger.context['assembledContextSummary']` access pattern and is a prerequisite for the universal enricher (see next item). Scope Phase 0 changes to consumption sites only (`buildSystemPrompt`, `buildSessionContext`); coordinator write sites migrate in Phase 1.
|
|
210
|
+
|
|
211
|
+
**Done looks like:** no `trigger.context` JSON dump in `initialPrompt`; `assembledContextSummary` truncated at section boundaries; all `workspaceRules` entries injected; `WorkflowContextSlots` typed fields replace stringly-typed access in consumption sites.
|
|
212
|
+
|
|
213
|
+
---
|
|
214
|
+
|
|
215
|
+
### Universal context enricher for all session entry points (Apr 30, 2026)
|
|
216
|
+
|
|
217
|
+
**Status: idea** | Priority: high
|
|
218
|
+
|
|
219
|
+
**Score: 11** | Cor:1 Cap:3 Eff:2 Lev:3 Con:2 | Blocked: yes (needs context injection bugs fixed first)
|
|
220
|
+
|
|
221
|
+
Today 4 of 6 session entry points receive zero assembled context: raw webhook triggers, direct dispatch, `spawn_agent` children, and crash-recovered sessions never get cross-session notes or git diff state. Only coordinator-spawned sessions (via `pr-review.ts` or the adaptive pipeline) get assembled context -- and even then only through opt-in coordinator logic, not structural injection.
|
|
222
|
+
|
|
223
|
+
There is no single layer that all dispatch paths share where assembly can run universally. Coordinators that care must call assembly explicitly; everything else gets nothing. This means every new entry point or coordinator is another opportunity to forget assembly.
|
|
224
|
+
|
|
225
|
+
**Design (from Apr 30 discovery):** A `WorkflowEnricher` service injected into `runWorkflow()` that fires for root sessions only (`spawnDepth === 0`). Provides prior workspace session notes (max 3, newest-first, workspace-scoped) and `git diff HEAD~1 --stat` to all entry points. Injected via `WorkflowContextSlots` typed fields (see context injection bugs item). When a coordinator has already set `assembledContextSummary`, the enricher skips prior-notes injection (coordinator's richer context takes precedence) but still provides git diff stat if absent.
|
|
226
|
+
|
|
227
|
+
**Critical gate:** before this ships, run a pilot test -- one session with `assembledContextSummary` injected, inspect turn-1 reasoning for citation. If agents don't reference pre-loaded context, the investment in universal enrichment adds tokens without improving outcomes.
|
|
228
|
+
|
|
229
|
+
**Things to hash out:**
|
|
230
|
+
- Where exactly does the enricher inject: inside `runWorkflow()` before `buildPreAgentSession()`, or inside `buildPreAgentSession()` itself? The latter is cleaner but changes the pre-agent phase boundary.
|
|
231
|
+
- `listRecentSessions` must have a 1s wall-clock timeout with partial-result fallback. Without it, large session stores silently slow all session startups. This is a spec requirement, not optional.
|
|
232
|
+
- `spawn_agent` children don't get enriched (they'd trigger redundant assembly for deeply nested trees). Is there a case where children should optionally enrich? Candidate: an `inheritParentContext: boolean` flag in the `spawn_agent` tool schema.
|
|
233
|
+
|
|
234
|
+
---
|
|
235
|
+
|
|
236
|
+
### MemoryStore: indexed session history and mid-session query_memory tool (Apr 30, 2026)
|
|
237
|
+
|
|
238
|
+
**Status: idea** | Priority: medium
|
|
239
|
+
|
|
240
|
+
**Score: 10** | Cor:1 Cap:3 Eff:1 Lev:3 Con:2 | Blocked: yes (needs universal enricher first)
|
|
241
|
+
|
|
242
|
+
The session event log is rich -- it records goals, step notes, artifacts, delivered commits, git state, and phase handoffs. But querying it requires a full directory scan and per-session event projection on every call. `LocalSessionSummaryProviderV2` does this today and is used in exactly one place (the PR-review coordinator). Every other consumer either skips it or re-implements a slower version.
|
|
243
|
+
|
|
244
|
+
There is no mid-session memory query capability at all. An agent mid-session cannot ask "what did we decide about this module last week" and get an answer from persistent memory -- it can only use what was pre-loaded at session start.
|
|
245
|
+
|
|
246
|
+
**Design (from Apr 30 discovery):** A `MemoryStore` port backed by `~/.workrail/memory.db` (SQLite, WAL mode) indexed by `finalizeSession()` as fire-and-forget after each session completes. Query kinds v1: `recent_sessions` (by workspace path hash), `sessions_by_goal_keywords`. A `query_memory` tool added to the daemon tool set. Replaces the slow `listRecentSessions` scan in the universal enricher.
|
|
247
|
+
|
|
248
|
+
Phase 2b (separate): index phase artifacts via a new `phase_artifact_appended` session event kind -- bridges the current PipelineRunContext silo into the session event log so phase artifacts are queryable alongside session notes. Requires engine schema review before implementation.
|
|
249
|
+
|
|
250
|
+
**Things to hash out:**
|
|
251
|
+
- SQLite native compilation may fail in some deployment environments (Docker, Alpine Linux). Mitigation: use `@sqlite.org/sqlite-wasm` (pure WASM) or make `MemoryStore` fully optional -- daemon works without it, just no indexed queries.
|
|
252
|
+
- `phase_artifact_appended` event schema change is the highest-risk part of Phase 2b. Should it reuse the existing artifact channel with a new content type, or be a new event kind? Each has different backward-compatibility implications.
|
|
253
|
+
- Should `query_memory` be a general-purpose tool or typed with specific query kinds? A typed discriminated union prevents agents from inventing unsupported query shapes.
|
|
254
|
+
|
|
255
|
+
---
|
|
256
|
+
|
|
257
|
+
### worktrain session analyze: verify agents actually use pre-loaded context (Apr 30, 2026)
|
|
258
|
+
|
|
259
|
+
**Status: idea** | Priority: medium
|
|
260
|
+
|
|
261
|
+
**Score: 8** | Cor:1 Cap:2 Eff:2 Lev:2 Con:2 | Blocked: no
|
|
262
|
+
|
|
263
|
+
There is no way to verify whether agents actually use pre-loaded context (soul, workspace context, `assembledContextSummary`, session notes) in their reasoning. The entire memory architecture investment (universal enricher, MemoryStore, knowledge graph) assumes agents reference pre-loaded context at turn 1 -- but this assumption is unvalidated. If agents receive 32KB of workspace context and `assembledContextSummary` but don't cite them in their reasoning before acting, richer pre-loading adds token cost without improving outcomes.
|
|
264
|
+
|
|
265
|
+
Today, validating this requires manually reading raw session transcripts, which is impractical at scale. A `worktrain session analyze <sessionId>` command that reads the agent turn events and reports whether any pre-loaded context fields were cited in turn-1 reasoning would make this automatable and support data-driven decisions about context loading investment.
|
|
266
|
+
|
|
267
|
+
**Done looks like:** `worktrain session analyze <sessionId>` reads the session event log, extracts turn-1 assistant message content, checks for citations of injected fields (workspace context file names, goal text, prior step note content), and reports a structured summary: fields injected, fields cited, fields ignored.
|
|
268
|
+
|
|
269
|
+
**Things to hash out:**
|
|
270
|
+
- "Citation" is hard to define precisely -- the agent might paraphrase rather than quote. Does substring matching suffice, or does this need an LLM similarity check?
|
|
271
|
+
- Should this be a CLI command or a console feature? The console already reads session data; this could be a "context audit" view.
|
|
272
|
+
- The primary use case is a one-time validation gate (before shipping the universal enricher). Does this justify a permanent command, or is it a one-off script?
|
|
273
|
+
|
|
274
|
+
---
|
|
275
|
+
|
|
276
|
+
### Per-run retrospective: structured learning from pipeline outcomes (Apr 30, 2026)
|
|
277
|
+
|
|
278
|
+
**Status: idea** | Priority: medium
|
|
279
|
+
|
|
280
|
+
**Score: 9** | Cor:1 Cap:2 Eff:2 Lev:2 Con:2 | Blocked: no
|
|
281
|
+
|
|
282
|
+
After a pipeline run completes -- whether it merged, escalated, or failed -- there is no structured mechanism for WorkTrain to record what it learned. Mistakes that occurred in one run (wrong interpretation, missed edge case, collateral damage rationalized as a tradeoff) are not surfaced to future sessions. Each run starts with the same baseline.
|
|
283
|
+
|
|
284
|
+
A per-run retrospective is a lightweight post-completion step that answers: what went wrong or unexpectedly, what assumption turned out to be false, what should the next session starting on this codebase know that this session didn't? The output would be a structured record written to the session store and made available as Tier 0 context for future sessions on the same workspace.
|
|
285
|
+
|
|
286
|
+
This is distinct from the per-step `report_issue` mechanism (which records obstacles mid-session) and from the `wr.coding-task` phase-8 retrospective workflow (which is an agent-facing step prompt). This is a coordinator-level mechanism that runs after the pipeline exits, regardless of which workflows ran.
|
|
287
|
+
|
|
288
|
+
**Things to hash out:**
|
|
289
|
+
- Who runs the retrospective -- the coordinator (deterministic, reads phase results and produces structured output), a lightweight LLM step, or the agent in a final workflow phase?
|
|
290
|
+
- What is the output format? A structured `RetrospectiveArtifactV1` that feeds Tier 0 context injection, or freeform notes that accumulate in a `workspace-knowledge.md` file?
|
|
291
|
+
- Where does the output live? Per-run (alongside `PipelineRunContext`), per-workspace (accumulated knowledge store), or per-session in the session store?
|
|
292
|
+
- When a retrospective records "assumption X was wrong," how does that fact reach future sessions? It needs to be injected as Tier 0 context -- which requires the context loading path to know where to look.
|
|
293
|
+
- Should the retrospective run on every pipeline outcome (merge, escalate, timeout, error), or only on non-merge outcomes where something went wrong?
|
|
294
|
+
|
|
295
|
+
---
|
|
296
|
+
|
|
195
297
|
### Phase quality gate policy: partial vs escalate (May 5, 2026)
|
|
196
298
|
|
|
197
299
|
**Status: idea** | Priority: medium
|
|
@@ -585,12 +687,20 @@ The autonomous workflow runner (`worktrain daemon`). Completely separate from th
|
|
|
585
687
|
|
|
586
688
|
### Living work context: shared knowledge document that accumulates across the full pipeline (Apr 30, 2026)
|
|
587
689
|
|
|
588
|
-
**Status:
|
|
690
|
+
**Status: partial** | Core infra shipped May 5, 2026 (PR #939). Three gaps remain.
|
|
589
691
|
|
|
590
692
|
**Score: 13** | Cor:3 Cap:3 Eff:2 Lev:3 Con:2 | Blocked: no
|
|
591
693
|
|
|
592
694
|
**Shipped (PR #939):** `ShapingHandoffArtifactV1` + `CodingHandoffArtifactV1` + enriched `DiscoveryHandoffArtifactV1`, `PhaseHandoffArtifact` union, `buildContextSummary()` pure function with per-phase selection, `PipelineRunContext` per-run JSON with `PhaseResult<T>`, crash recovery via `active-run.json` pointer, phase quality gates (fallback escalates, partial warns), persistence failure escalation, 4 workflow authoring changes, adversarial behavioral test (AC 21), `contractRef` validation test. Deferred: `buildSystemPrompt()` named semantic slots, console visualization, retry logic, epic-mode task graph, extensible contract registration, per-workflow lifecycle artifact tests.
|
|
593
695
|
|
|
696
|
+
**Remaining gaps (not tracked elsewhere):**
|
|
697
|
+
|
|
698
|
+
1. **No end-to-end validation that context reaches downstream agents.** The `assembledContextSummary` is wired through `trigger.context` → `buildSystemPrompt()` → system prompt, but there is no test that runs a full pipeline (discovery → shaping → coding) and asserts that the coding agent's system prompt actually contains the discovery context. The adversarial behavioral test (AC 21) proves the pipeline structure -- it does not prove the context content is meaningful to the downstream agent.
|
|
699
|
+
|
|
700
|
+
2. **Not all coordinator pipeline modes populate `assembledContextSummary`.** Some modes (e.g. quick-review) may exit without writing a full `PipelineRunContext`. When context is absent, `buildSystemPrompt()` silently injects nothing -- the downstream agent gets no prior context with no warning. There is no check that the coordinator always writes context before dispatching a downstream session.
|
|
701
|
+
|
|
702
|
+
3. **No operator visibility into injected context.** The "Prior Context" section in an agent's system prompt is invisible from the console. An operator has no way to see what context was injected into a session without reading raw conversation logs. The console should surface this -- at minimum, whether the session had prior context and how many bytes.
|
|
703
|
+
|
|
594
704
|
When a multi-agent pipeline runs -- discovery → shaping → coding → review → fix → re-review -- no agent has a complete picture of what came before it. The coding agent has the goal. The review agent has the code. The fix agent has the findings. None of them have the accumulated context from the full pipeline: why this approach was chosen over alternatives, what was ruled out, what constraints were discovered, what architectural decisions were made, what edge cases were handled, what the review found and why.
|
|
595
705
|
|
|
596
706
|
Each agent reconstructs intent from incomplete context, which is why review finds things coding missed (review doesn't know what the coding agent was trying to do), why fix sessions address symptoms without understanding causes (no access to the architectural reasoning), and why agents repeat work that earlier agents already did.
|
|
@@ -1007,6 +1117,25 @@ The daemon reads `triggers.yml` once at startup. Any change requires a full daem
|
|
|
1007
1117
|
|
|
1008
1118
|
---
|
|
1009
1119
|
|
|
1120
|
+
### External task tracker integrations: Jira, Linear, Notion, and beyond (Apr 30, 2026)
|
|
1121
|
+
|
|
1122
|
+
**Status: idea** | Priority: medium
|
|
1123
|
+
|
|
1124
|
+
**Score: 11** | Cor:1 Cap:3 Eff:1 Lev:3 Con:2 | Blocked: no
|
|
1125
|
+
|
|
1126
|
+
WorkTrain currently picks up work from GitHub and GitLab. Most engineering teams track work in Jira, Linear, Notion, or similar systems -- not in GitHub issues. Without native trigger adapters for these systems, WorkTrain cannot be used as the default development workflow for teams that don't use GitHub Issues as their primary tracker.
|
|
1127
|
+
|
|
1128
|
+
The vision says WorkTrain picks up tasks "from external systems (GitHub issues, GitLab MRs, Jira tickets, webhooks)." The webhook trigger (`provider: generic`) handles anything with a POST endpoint, but it requires the operator to wire up field extraction manually and provides no assignee filtering, label filtering, or status-transition detection out of the box. A first-class adapter for each tracker would handle the integration details and give operators a clean configuration surface.
|
|
1129
|
+
|
|
1130
|
+
**Things to hash out:**
|
|
1131
|
+
- What is the right abstraction boundary? A generic polling adapter with per-tracker field mapping (same pattern as `github_issues_poll` / `gitlab_poll`) vs. a more opinionated per-tracker adapter that understands Jira workflow states, Linear priorities, etc.
|
|
1132
|
+
- Jira's API requires OAuth or API token; Linear uses API keys; Notion uses integration tokens. Is secret resolution via `$ENV_VAR_NAME` sufficient, or is a richer credentials model needed?
|
|
1133
|
+
- For Jira specifically: issue assignment events are not available via webhook without Jira admin access to configure webhooks. Does WorkTrain need a polling adapter (`jira_poll`) as the primary path, with webhook as an optional enhancement?
|
|
1134
|
+
- What context does each tracker inject into the workflow session? Jira issues have epics, acceptance criteria, sprint context, labels. Linear issues have priority, team, estimate, project. The context mapping needs to capture what's useful without overwhelming the session.
|
|
1135
|
+
- How does deduplication work across tracker adapters? A Jira issue that was already picked up and is in-flight should not be dispatched again on the next poll cycle, even if it was updated.
|
|
1136
|
+
|
|
1137
|
+
---
|
|
1138
|
+
|
|
1010
1139
|
### GitHub webhook trigger with assignee/event filtering (Apr 20, 2026)
|
|
1011
1140
|
|
|
1012
1141
|
**Status: idea** | Priority: medium-high
|
|
@@ -1891,7 +2020,7 @@ Each file is injected only into sessions running the matching pipeline phase. Re
|
|
|
1891
2020
|
|
|
1892
2021
|
**Status: idea** | Priority: medium
|
|
1893
2022
|
|
|
1894
|
-
**Score: 9** | Cor:1 Cap:2 Eff:2 Lev:2 Con:2 | Blocked:
|
|
2023
|
+
**Score: 9** | Cor:1 Cap:2 Eff:2 Lev:2 Con:2 | Blocked: no (unblocked by Apr 30 discovery -- context assembly does not require the knowledge graph)
|
|
1895
2024
|
|
|
1896
2025
|
**Problem:** `src/coordinators/pr-review.ts` is already ~500 LOC doing session dispatch, result aggregation, finding classification, merge routing, message queue drain, and outbox writes. Adding knowledge graph queries, context bundle assembly, and prior session lookups would create a god class.
|
|
1897
2026
|
|
|
@@ -1899,18 +2028,17 @@ Each file is injected only into sessions running the matching pipeline phase. Re
|
|
|
1899
2028
|
```
|
|
1900
2029
|
Trigger layer src/trigger/ receives events, validates, enqueues
|
|
1901
2030
|
Dispatch layer (TBD) decides which workflow + what goal
|
|
1902
|
-
Context assembly
|
|
2031
|
+
Context assembly src/daemon/ enriches trigger before runWorkflow() fires
|
|
1903
2032
|
Orchestration layer src/coordinators/ spawns, awaits, routes, retries, escalates
|
|
1904
2033
|
Delivery layer src/trigger/delivery posts results back to origin systems
|
|
1905
2034
|
```
|
|
1906
2035
|
|
|
1907
|
-
**Context assembly
|
|
2036
|
+
**Resolution from Apr 30 discovery:** Context assembly does NOT require the knowledge graph as a prerequisite. The universal enricher (Phase 1 of the memory architecture) provides a structural context assembly layer via `WorkflowEnricher` injected into `runWorkflow()` -- this IS the missing layer. The orchestration scripts (coordinators) continue to add task-specific richer context on top (phase artifacts, git diff for PRs) via the existing `assembledContextSummary` mechanism. The two layers compose: universal enricher provides the floor, coordinators provide the ceiling.
|
|
1908
2037
|
|
|
1909
|
-
**
|
|
1910
|
-
|
|
1911
|
-
|
|
1912
|
-
-
|
|
1913
|
-
- Who owns the context assembly API contract -- the engine (as a new primitive), the daemon (as an infrastructure capability), or user-authored scripts?
|
|
2038
|
+
**The Dispatch layer question** is resolved by the adaptive pipeline coordinator (`src/coordinators/adaptive-pipeline.ts`) -- it IS the dispatch layer for queue-polled tasks. For webhook-triggered tasks, `TriggerRouter.route()` performs dispatch. The layering is already present; it just isn't documented as such.
|
|
2039
|
+
|
|
2040
|
+
**Remaining open question:**
|
|
2041
|
+
- When a coordinator calls `spawnSession()` with an `assembledContextSummary`, should the universal enricher's prior-notes injection be suppressed (coordinator already covered it) or additive (both run)? The discovery recommends suppression -- enricher skips prior notes when `assembledContextSummary` is already set.
|
|
1914
2042
|
|
|
1915
2043
|
---
|
|
1916
2044
|
|
|
@@ -2339,6 +2467,42 @@ When an MR review session (run by a WorkTrain agent) finds issues in a coding se
|
|
|
2339
2467
|
|
|
2340
2468
|
---
|
|
2341
2469
|
|
|
2470
|
+
### wr.discovery lacks domain-specific ideation guidance (May 6, 2026)
|
|
2471
|
+
|
|
2472
|
+
**Status: idea** | Priority: medium
|
|
2473
|
+
|
|
2474
|
+
**Score: 9** | Cor:1 Cap:2 Eff:2 Lev:2 Con:2 | Blocked: no
|
|
2475
|
+
|
|
2476
|
+
`wr.discovery` classifies `problemDomain` (software / product / ux / personal / general) and uses it for a few things -- philosophy source lookup, vision doc location, and `decisionCriteria` examples. But candidate generation, challenge framing, and resolution path guidance do not adapt to domain at all. A personal career decision, a product strategy question, and a software architecture problem have meaningfully different ideation patterns, different failure modes in candidate generation, different challenge rubrics, and different resolution artifacts. The workflow currently treats them all identically after `problemDomain` is set.
|
|
2477
|
+
|
|
2478
|
+
The result is that `problemDomain` is a classification that carries almost no behavioral weight past phase-0 and phase-2. It reads well but does not change the actual work.
|
|
2479
|
+
|
|
2480
|
+
**Things to hash out:**
|
|
2481
|
+
- Where is domain-specific guidance most needed? Candidate generation (different ideation patterns per domain) and challenge framing (different adversarial angles) are the clearest gaps. Are there others -- resolution mode selection, confidence dimensions, handoff format?
|
|
2482
|
+
- What is the right mechanism -- `promptFragments` conditioned on `problemDomain`, a domain-specific routine injected via `templateCall`, or richer domain context blocks injected at workflow start? The answer probably varies by where in the workflow the guidance applies.
|
|
2483
|
+
- How much domain specificity is enough? Software vs non-software is the biggest gap. Within non-software, personal vs product vs ux are also meaningfully different. Is a two-level split (software / general) sufficient for now, or is the full five-way split worth tackling immediately?
|
|
2484
|
+
- Are there domain-specific output formats worth considering? A personal decision probably ends with a different handoff shape than a software architecture decision -- different fields, different confidence dimensions, different "next actions" structure.
|
|
2485
|
+
|
|
2486
|
+
---
|
|
2487
|
+
|
|
2488
|
+
### wr.discovery anchors candidates to existing infrastructure instead of the ideal solution (Apr 30, 2026)
|
|
2489
|
+
|
|
2490
|
+
**Status: idea** | Priority: high
|
|
2491
|
+
|
|
2492
|
+
**Score: 11** | Cor:1 Cap:3 Eff:2 Lev:3 Con:2 | Blocked: no
|
|
2493
|
+
|
|
2494
|
+
`wr.discovery` produces candidates bounded by what already exists. The landscape step grounds the agent in the current codebase, which anchors candidate generation to what is buildable today rather than what would be best. On a discovery run for context-passing, for example, candidates are shaped by the current pre-load architecture instead of questioning whether pre-load is the right model at all. Decisions that should be challenged by the discovery process are instead silently inherited from it.
|
|
2495
|
+
|
|
2496
|
+
The result is that discovery optimizes within the current design space rather than finding the edge of it. Problems that require restructuring existing code -- not just adding to it -- tend to produce timid candidates that paper over the root cause instead of addressing it. Discovery is supposed to find the best answer; it is currently finding the best answer that doesn't require changing much.
|
|
2497
|
+
|
|
2498
|
+
**Things to hash out:**
|
|
2499
|
+
- Should the ideal-first reasoning happen before or after the landscape pass? Before risks ignoring hard constraints; after risks being anchored by them. What is the right sequencing, and is it always the same or does it depend on the problem type?
|
|
2500
|
+
- How do non-negotiable constraints (e.g. "must not change the engine API", "must work without a running daemon") get introduced without becoming the excuse for avoiding the best answer? There's a real difference between a hard constraint and an inherited assumption that could be challenged.
|
|
2501
|
+
- Is "what would the ideal look like, and what's the migration path from here?" a step inside discovery, or does it belong in `wr.shaping`? Shaping already produces an appetite and scope cut -- is ideal-first reasoning a discovery concern or a shaping concern, or does each need it independently?
|
|
2502
|
+
- When the ideal requires multi-sprint groundwork (e.g. "first build the KG, then build context assembly on top of it"), how should discovery represent that? As a sequenced multi-phase candidate? As a separate "phase 1" item that gets its own discovery?
|
|
2503
|
+
|
|
2504
|
+
---
|
|
2505
|
+
|
|
2342
2506
|
### Workflow previewer for compiled and runtime behavior
|
|
2343
2507
|
|
|
2344
2508
|
**Status: idea** | Priority: medium
|
|
@@ -3170,33 +3334,33 @@ openclaw is worth studying deeply before building out the platform layer. Draw i
|
|
|
3170
3334
|
|
|
3171
3335
|
**Status: idea** | Priority: medium
|
|
3172
3336
|
|
|
3173
|
-
**Score: 10** | Cor:1 Cap:3 Eff:1 Lev:3 Con:2 | Blocked:
|
|
3337
|
+
**Score: 10** | Cor:1 Cap:3 Eff:1 Lev:3 Con:2 | Blocked: yes (needs MemoryStore first as Phase 2 prerequisite)
|
|
3338
|
+
|
|
3339
|
+
**Problem:** Every session starts with a full repo sweep. Context gathering subagents re-read the same files, re-trace the same call chains, re-identify the same invariants. And cross-session semantic queries ("what did we find about this module last week") cannot be answered without a vector index.
|
|
3174
3340
|
|
|
3175
|
-
**
|
|
3341
|
+
**Position in the phased memory architecture (from Apr 30 discovery):** This is Phase 3 in a four-phase sequence. Phase 0 (bug fixes) → Phase 1 (universal enricher) → Phase 2 (MemoryStore SQLite) → Phase 3 (knowledge graph). The MemoryStore SQLite from Phase 2 answers 6 of 8 memory queries without a vector model. The knowledge graph adds the remaining two: code-structure traversal (Q8) and semantic similarity ("what is related to X"). Phase 3a (structural layer) extends the existing spike; Phase 3b (vector layer) is a feature flag.
**Design -- two-layer hybrid:**
**Layer 1: Structural graph (hard edges, deterministic) -- Phase 3a**
Extends existing `src/knowledge-graph/` spike (DuckDB + ts-morph, already in `dependencies`). New node kinds: `session`, `pipeline_run`, `workspace_convention`. New edge kinds: `produced_by` (session → file), `applies_to_workspace`. Current spike only tracks import edges and CLI commands; session data from Phase 2 MemoryStore migrates here. Answers: "what imports trigger-router.ts?", "what files did session X touch?", "what sessions ran in this workspace?"
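The node and edge kinds above can be sketched as a small in-memory graph. This is illustrative only -- the real Phase 3a layer would live in DuckDB, and `StructuralGraph`, `filesProducedBy`, and `importersOf` are hypothetical names, not the spike's actual API:

```typescript
// Illustrative in-memory model of the structural layer. The node and edge
// kinds come from the entry above; class and method names are hypothetical.
type NodeKind = "file" | "session" | "pipeline_run" | "workspace_convention";
type EdgeKind = "imports" | "produced_by" | "applies_to_workspace";

interface GraphNode { id: string; kind: NodeKind; }
interface GraphEdge { from: string; to: string; kind: EdgeKind; }

class StructuralGraph {
  private nodes = new Map<string, GraphNode>();
  private edges: GraphEdge[] = [];

  addNode(node: GraphNode): void { this.nodes.set(node.id, node); }
  addEdge(edge: GraphEdge): void { this.edges.push(edge); }

  // "what files did session X touch?" -- produced_by edges run session -> file
  filesProducedBy(sessionId: string): string[] {
    return this.edges
      .filter((e) => e.kind === "produced_by" && e.from === sessionId)
      .map((e) => e.to);
  }

  // "what imports trigger-router.ts?" -- incoming import edges
  importersOf(fileId: string): string[] {
    return this.edges
      .filter((e) => e.kind === "imports" && e.to === fileId)
      .map((e) => e.from);
  }
}
```

In DuckDB the same traversals become joins over a nodes table and an edges table; the sketch only pins down the shape of the questions the layer must answer.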
**Layer 2: Vector similarity (soft weights, semantic) -- Phase 3b (feature flag)**
LanceDB (embedded, TypeScript-native, local-first). Embeddings over session recaps and workspace conventions. Off by default (`WORKRAIL_VECTOR_SEARCH=1` to enable). Answers: "what sessions are semantically related to this bug?", "what workspace conventions mention authentication?"
**Technology:**
- Structural: `ts-morph` + DuckDB (existing spike, already in dependencies)
- Vector: LanceDB + local embedding model -- `@xenova/transformers` (in-process, no external dep) preferred over Ollama (better quality but requires external process)
- Unified query: `query_knowledge(intent, workspacePath)` replaces `query_memory` tool when Phase 3a lands
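A rough sketch of how `query_knowledge(intent, workspacePath)` might dispatch between the two layers. Only the tool name and the two-layer split come from this entry; the routing heuristic and the layer interfaces are illustrative assumptions:

```typescript
// Hypothetical dispatch for query_knowledge. Only the tool name and the
// structural/vector split are from the entry; everything else is a sketch.
interface KnowledgeLayers {
  structural: (intent: string, workspacePath: string) => string[]; // DuckDB-backed, Phase 3a
  vector?: (intent: string, workspacePath: string) => string[];    // LanceDB, Phase 3b feature flag
}

function queryKnowledge(
  intent: string,
  workspacePath: string,
  layers: KnowledgeLayers,
): string[] {
  // Hard structural questions (imports, sessions, files) go to the graph.
  const structuralHint = /\b(import|session|file|touch)/i.test(intent);
  if (structuralHint || !layers.vector) {
    return layers.structural(intent, workspacePath);
  }
  // Semantic "what is related to X" questions go to the vector layer,
  // which is off unless WORKRAIL_VECTOR_SEARCH=1 enabled it.
  return layers.vector(intent, workspacePath);
}
```

The interesting design question the sketch surfaces: whether routing is keyword-based as here, caller-declared (an explicit intent enum), or always fanned out to both layers and merged.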
**Build decision (from Apr 15 research):** ts-morph + DuckDB wins. Cognee: Python-only. GraphRAG/LightRAG: use LLMs to build graph (violates scripts-over-agent). Mem0/Zep: conversational memory, not code graphs. Sourcegraph: enterprise weight, overkill.
**Things to hash out:**
- Phase 3a scope: should the structural layer replace the Phase 2 SQLite MemoryStore (same data, different engine) or exist alongside it? Replacing is cleaner; coexisting avoids a migration.
- `@xenova/transformers` vs Ollama for Phase 3b: @xenova runs in-process (no setup friction) but has lower embedding quality. Ollama is better quality but adds an external process dependency. Which matters more for the target user base?
- The incremental update strategy (re-index only `filesChanged` after each session) requires accurate change tracking. What is the fallback when `filesChanged` is unavailable?
- DuckDB is in-process -- WAL mode handles read concurrency but writes are serialized. Is the concurrency story acceptable when 3 sessions complete simultaneously?
- Is the KG per-workspace or global? Per-workspace is simpler; global enables cross-workspace queries but adds federation complexity.
---

@@ -4682,3 +4846,63 @@ WorkTrain has no tooling to surface the state of worktrees and branches relative

- Common-ground `make sync` distributing the script reliably
**Priority:** Medium. The shared scripts work and have been tested. Main remaining work is the shell wrapper, token storage, and integration with common-ground's team config.
---
### Cross-system blind benchmark: compare AI coding tools/models on the same tasks (May 6, 2026)
**Status: idea** | Priority: medium
**Score: 9** | Cor:1 Cap:3 Eff:1 Lev:2 Con:2 | Blocked: no
There is no reproducible way to compare WorkTrain against other AI coding systems (Cursor, Copilot, raw Claude Code, competing agent frameworks) or to compare model families within WorkTrain on the same real tasks. Without this, claims about WorkTrain's quality are anecdotal and there is no principled way to understand where WorkTrain adds value versus where it falls short.
**Things to hash out:**
- What constitutes a valid "task" for comparison? Real GitHub issues from a well-understood repo are higher quality than synthetic benchmarks, but may not reproduce cleanly across different tool setups. What is the minimum reproducibility requirement?
- How do you grade fairly? A grader that can see code style, comments, or formatting may infer which system produced the output. What does true blind evaluation look like here, and how blind is "blind enough"?
- Should the rubric be global (same for all task types) or per-task-type (refactor vs feature vs bug fix)?
- Token usage comparison requires accurate per-system accounting. Not all tools expose this. Is a cost-adjusted comparison feasible, or does this reduce to a quality-only benchmark?
- Is this a one-time study or a continuous regression benchmark? The demo-repo benchmark entry covers regression -- this is specifically about cross-system comparative evaluation.
**Relationship to existing entries:** the demo-repo benchmark (existing entry) runs the same tasks after each WorkRail release to track regression. This entry is about comparing WorkTrain vs other systems, not WorkTrain past vs present.
---
### WorkTrain as a full software team: design, PM, data science, opex, and everything in between (May 6, 2026)
**Status: idea** | Priority: high
**Score: 13** | Cor:2 Cap:3 Eff:1 Lev:3 Con:2 | Blocked: no
The current vision defines WorkTrain as an autonomous *software development* system. But shipping software requires more than coding -- product management, design, data science, operations, release engineering, and the feedback loop from production back into ideas are all necessary to deliver something that works and keeps working. WorkTrain currently handles only the coding-and-review slice of this. Everything before "write the code" (discovering what to build, analyzing what users actually need) and everything after "merge the PR" (instrumentation, metrics analysis, idea generation, rollout management, incident response) is done manually.
The result is that the value loop -- PR → metrics → insight → idea → spec → PR -- is only partially automated. Humans still have to bridge the analysis → idea and metrics → iteration gaps. An autonomous system that stops at "ship a PR" requires continuous human intervention to keep it pointed at the right work.
The constraint on idea generation specifically: ideas grounded in vague intuition are not useful. The gap is not that WorkTrain can't generate suggestions -- it can. The gap is that those suggestions are not grounded in specific, verifiable facts about the actual system and its users. An idea like "23% of users who reach step 3 abandon, and the median time on that step is 47 seconds, and here is what the error logs show" is categorically different from "users might want X."
**Relationship to existing entries:** Many existing backlog entries are partial implementations of this broader capability -- monitoring loops, analytics integration, feature flag management, opex, the blind benchmark entry. This entry captures the full frame so those entries can be understood as steps toward it rather than isolated features.
**Things to hash out:**
- The vision.md defines WorkTrain as "autonomous software development." Does this require a vision revision, or is design/PM/data science/opex a natural extension of "everything that ships software"?
- Design and PM work requires product domain knowledge -- not just technical knowledge. There is no obvious equivalent of AGENTS.md for product context. What is the right mechanism for WorkTrain to acquire and maintain that context?
- Data science work requires access to event logs, metrics stores, and potentially sensitive user data. What is the authorization model? What is the minimum access needed to produce useful insights without exposing sensitive data?
- Release management requires write access to production systems (feature flag platforms, deployment infrastructure). What safeguards are necessary before WorkTrain can act autonomously there?
- Opex (incident response, SLO management) has a different urgency profile than coding work. How does it fit into the existing pipeline model, which is designed for hours-to-days timescales?
---
### Task completion enforcement: detect and prevent deferred work within tasks (May 6, 2026)
**Status: idea** | Priority: high
**Score: 12** | Cor:3 Cap:2 Eff:2 Lev:2 Con:2 | Blocked: no
Agents routinely defer work within tasks rather than completing it. Common patterns: "I'll file a ticket for this later," "this is out of scope, leaving for a follow-up," "TODO: handle this edge case," "I noticed X but didn't address it to stay focused." These deferral patterns are individually plausible but collectively mean tasks are never actually finished -- they transition from "in progress" to "apparently done" while work accumulates in a long tail of unfiled tickets and unresolved TODOs.
There is no mechanism to distinguish "this genuinely needs a separate session with different scope" from "I could have done this but chose not to." There is no enforcement that deferred items are tracked and eventually completed. There is no way to prove a task is actually done versus claimed done. A task that leaves TODOs in the code, or that defers 3 of its 5 acceptance criteria, is not done -- but the system currently has no way to detect or prevent this.
**Things to hash out:**
- What does "done" mean in a provable sense? What evidence would allow a coordinator to conclude that a task is complete rather than merely that an agent has stopped working on it?
- How do you distinguish legitimate scope decisions from avoidance? A session on a performance bug that surfaces an unrelated security issue is right to defer the security issue. A session that addresses only 2 of 3 acceptance criteria is not. What is the principled distinction?
- TODO comments in code are not always deferred work -- some are architectural notes, some are pre-existing. How do you identify TODOs that represent deferred task-scope work versus incidental notes?
- How does this interact with the existing stuck detection system? A stuck agent and a "done-claiming but not actually done" agent are different failure modes. How does the system tell them apart?

@@ -14,7 +14,7 @@ See also: `tests/unit/workflow-runner-outcome-invariants.test.ts` -- the test fi

**Why:** `'unknown'` in `execution-stats.jsonl` is silent data loss. Operators calibrate session timeouts and monitor health from this data.
**How it breaks:** `writeExecutionStats()` takes `outcome` by value. If called with an unassigned variable, it silently records `'unknown'`. All result paths go through `finalizeSession()`, which calls `tagToStatsOutcome()` to derive the outcome -- there are no direct `writeExecutionStats()` calls outside `finalizeSession()`.
### 1.2 `delivery_failed` is never returned by `runWorkflow()` directly

@@ -32,13 +32,13 @@ See also: `tests/unit/workflow-runner-outcome-invariants.test.ts` -- the test fi

| `'stuck'` | `'stuck'` |
| `'delivery_failed'` | `'success'` (workflow succeeded; only the POST failed) |
This mapping is exhaustive. `tagToStatsOutcome()` is a pure function in `workflow-runner.ts` that uses `assertNever` on the default case -- the compiler enforces exhaustiveness when new `_tag` variants are added.
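The pattern reads roughly like this sketch. Only the two `_tag` variants visible in the mapping table are included here; the real union and return types are wider:

```typescript
// Sketch of the assertNever exhaustiveness pattern described above, with an
// illustrative subset of the real `_tag` union.
type WorkflowRunResultTag = "stuck" | "delivery_failed";

function assertNever(value: never): never {
  throw new Error(`Unhandled variant: ${JSON.stringify(value)}`);
}

function tagToStatsOutcome(tag: WorkflowRunResultTag): string {
  switch (tag) {
    case "stuck":
      return "stuck";
    case "delivery_failed":
      return "success"; // workflow succeeded; only the POST failed
    default:
      // Adding a new tag without a case above makes `tag` non-never here,
      // so the compiler rejects the call to assertNever.
      return assertNever(tag);
  }
}
```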
### 1.4 Outcome priority when multiple signals fire
`stuck` takes priority over `timeout`. This is enforced structurally by `TerminalSignal` and `setTerminalSignal()`: `setTerminalSignal()` is first-writer-wins -- the first signal to set `state.terminalSignal` wins, and subsequent calls are silent no-ops. Because stuck detection fires inside the turn-end subscriber (which runs before the wall-clock timeout handler), stuck always sets `terminalSignal` first when both conditions are present in the same turn.
**Code location:** `setTerminalSignal()` in `workflow-runner.ts`. `buildSessionResult()` reads `state.terminalSignal` after the loop exits.
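The first-writer-wins semantics can be sketched in a few lines. The shapes are simplified relative to the real `TerminalSignal`; the silent no-op for later writers is the point:

```typescript
// First-writer-wins setter, as described in 1.4. Types are simplified.
type TerminalSignal = { kind: "stuck" | "timeout"; reason?: string };

interface RunnerState { terminalSignal?: TerminalSignal; }

function setTerminalSignal(state: RunnerState, signal: TerminalSignal): void {
  if (state.terminalSignal !== undefined) return; // later writers are silent no-ops
  state.terminalSignal = signal;
}
```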
### 1.5 stepCount reflects agent-loop advances only

@@ -56,7 +56,7 @@ Each `runWorkflow()` call writes a per-session sidecar file at ~/.workrail/daem

`persistTokens()` returns `Promise<Result<void, PersistTokensError>>` (not throws). Callers in the setup phase treat `err` as fatal (abort); callers inside tool closures treat `err` as degraded-but-continue (log and still call `onAdvance`/`onTokenUpdate` -- see invariant 4.3).
**Exception:** If `continueToken` is undefined (instant single-step completion, or a `pre_allocated` `SessionSource` with no token), `persistTokens()` is skipped. There is nothing to recover.
### 2.2 Sidecar is deleted on every non-worktree terminal path

@@ -88,33 +88,34 @@ Since Phase B crash recovery (PR #811), `persistTokens()` also writes `workflowI

## 3. Registry invariants
Two registries track in-flight daemon sessions:
| Registry | Key | Value | Purpose |
|---|---|---|---|
| `DaemonRegistry` | `workrailSessionId` | `{ workflowId, lastHeartbeatMs }` | Console `isLive` display |
| `ActiveSessionSet` | `workrailSessionId` | `SessionHandle` | Steer injection + SIGTERM abort |
`ActiveSessionSet` + `SessionHandle` (in `src/daemon/active-sessions.ts`) replaced the former separate `SteerRegistry` and `AbortRegistry` maps. A `SessionHandle` exposes `steer()`, `setAgent()`, `abort()`, and `dispose()` -- all session lifecycle operations on a single object.
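A minimal sketch of the handle surface, modeling only the abort path (the internals, the class name, and the `aborted` flag are illustrative; the null check in `abort()` is the safety property this doc calls out in 3.4):

```typescript
// Illustrative SessionHandle: abort() before setAgent() is a safe no-op.
interface AgentLike { abort(): void; }

class SessionHandleSketch {
  private _agent: AgentLike | null = null;
  aborted = false; // illustrative flag, not in the real handle

  setAgent(agent: AgentLike): void { this._agent = agent; }

  abort(): void {
    if (this._agent === null) return; // pre-setAgent abort: safe no-op
    this._agent.abort();
    this.aborted = true;
  }
}
```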
### 3.1 Registry registration and deregistration
**Registration** happens in two phases:
- `DaemonRegistry` and `ActiveSessionSet` are registered inside `buildPreAgentSession()` -- AFTER all potentially-failing I/O (executeStartWorkflow, persistTokens, worktree creation). Error paths that return before this point have nothing to clean up. The single-step completion path goes through `finalizeSession()` (which calls `daemonRegistry.unregister()`) and returns the handle via `PreAgentSessionResult` so the caller (`runWorkflow()`) can call `handle.dispose()`.
- `handle.setAgent(agent)` is called in `buildAgentReadySession()` immediately after `const agent = new AgentLoop(...)`. This wires in abort capability. `abort()` before `setAgent()` is a safe no-op -- the TDZ hazard is eliminated by the null check inside `SessionHandleImpl.abort()`.
**Deregistration**:
- `handle.dispose()` is called in the `finally` block of `runAgentLoop()`. This removes the handle from `ActiveSessionSet` so `size` decrements correctly and shutdown drain terminates.
- `daemonRegistry.unregister()` is called via `finalizeSession()` at both result paths (early-exit and post-agent-loop). It is NOT in `finally` because the completion status ('completed' vs 'failed') differs by result.
**Why stale entries are bugs:** A stale steer handle on a dead session makes `POST /sessions/:id/steer` return 200 instead of 404. A stale abort handle makes the shutdown handler call `abort()` on an already-exited session. Both are silent correctness bugs.
### 3.2 `DaemonRegistry` is unregistered at every result path
`daemonRegistry.unregister(workrailSessionId, 'completed' | 'failed')` is called via `finalizeSession()` at both the early-exit path and the post-agent-loop path. It is NOT in `finally` because the completion status differs by result.
### 3.3 `workrailSessionId` is available before registry operations

@@ -124,11 +125,11 @@ If `parseContinueTokenOrFail()` fails (unusual -- the token just came from `exec

### 3.4 Registration gap is documented
**Steer gap (~50ms):** There is a ~50ms window between `executeStartWorkflow()` returning and `activeSessionSet.register()` being called (after `parseContinueTokenOrFail()` completes). A `POST /sessions/:id/steer` call in this window receives 404. Coordinators should retry once on 404 during session startup.
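One way a coordinator could implement the retry-once advice. The endpoint path is from this doc; `post` is injected so the sketch stays transport-agnostic, and the delay and single-retry count are illustrative choices:

```typescript
// Retry-once-on-404 steer, to ride out the ~50ms registration gap.
// `post` returns the HTTP status of a POST to the given path.
async function steerWithRetry(
  sessionId: string,
  post: (path: string) => Promise<number>,
  delayMs = 100,
): Promise<number> {
  const path = `/sessions/${sessionId}/steer`;
  const first = await post(path);
  if (first !== 404) return first;
  // 404 during startup may just be the registration gap -- retry once.
  await new Promise((resolve) => setTimeout(resolve, delayMs));
  return post(path);
}
```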
**Abort gap (~200-500ms):** `handle.setAgent(agent)` is called after `const agent = new AgentLoop(...)` is constructed, which happens after the context-loading phase (`loadDaemonSoul`, `loadWorkspaceContext`, `loadSessionNotes` in parallel). During this window, `handle.abort()` is a safe no-op -- SIGTERM will not abort the session. Sessions in this window run to completion or hit the wall-clock timeout.
**Why the abort gap is wider than the steer gap:** `setAgent()` must be called after `agent` construction. Calling it before would be a TDZ hazard. The `SessionHandleImpl.abort()` null-checks `_agent`, making pre-`setAgent()` abort a safe no-op rather than a crash.
---

@@ -160,7 +161,7 @@ Both are guarded by the sequential tool execution invariant (no concurrent token

All three stuck detection signals (`repeated_tool_call`, `no_progress`, `timeout_imminent`) emit `agent_stuck` events via `emitter?.emit()`, which is fire-and-forget. An event write failure never affects the session.
Signals 1 and 2 call `setTerminalSignal(state, { kind: 'stuck', reason: ... })` subject to `stuckAbortPolicy`. Signal 3 (`timeout_imminent`) is purely observational -- the abort has already been triggered by the timeout handler.
### 4.5 `spawn_agent` depth is enforced at the call site

@@ -204,7 +205,7 @@ On failure/timeout/stuck paths, the worktree is left in place for debugging. `ru

### 6.3 Sessions with >= 1 step advance are resumed if sidecar has trigger context
`evaluateRecovery({ stepAdvances: >= 1 })` returns `'resume'`. If the sidecar contains `workflowId` and `workspacePath`, `runStartupRecovery()` calls `executeContinueWorkflow({ intent: 'rehydrate' })` to get the current step prompt, builds a minimal `WorkflowTrigger` and a `pre_allocated` `SessionSource`, and calls `runWorkflow()` fire-and-forget.
**Old-format sidecars** (missing `workflowId`/`workspacePath`) fall through to discard regardless of step count.
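The decision paths of 6.3 can be written as a pure function. Only the paths this doc states are modeled; other inputs (for example zero step advances) are collapsed to `'discard'` here for illustration, and `recoveryAction` is a hypothetical name, not `evaluateRecovery()` itself:

```typescript
// Illustrative pure-function form of the 6.3 recovery decision.
interface Sidecar { workflowId?: string; workspacePath?: string; }

function recoveryAction(stepAdvances: number, sidecar: Sidecar): "resume" | "discard" {
  // Old-format sidecars fall through to discard regardless of step count.
  if (sidecar.workflowId === undefined || sidecar.workspacePath === undefined) {
    return "discard";
  }
  return stepAdvances >= 1 ? "resume" : "discard";
}
```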

@@ -214,32 +215,15 @@ Worktree sessions that are resumed set `branchStrategy: 'none'` and use the pers

---
## 7. Structural enforcement summary
The invariants above are enforced by a combination of type system guarantees and code structure:
- `tagToStatsOutcome()` -- pure function with `assertNever` default; compiler error on unhandled `_tag`
- `sidecardLifecycleFor()` -- pure function with `assertNever` default; compiler error on unhandled `_tag`
- `buildSessionResult()` -- pure function; reads `state.terminalSignal` after loop exits
- `finalizeSession()` -- single cleanup site for all result paths (event emission, registry cleanup, stats, sidecar deletion)
- `setTerminalSignal()` -- first-writer-wins; structurally prevents dual stuck+timeout state
- `SessionHandle` -- encapsulates steer/abort lifecycle; `abort()` before `setAgent()` is a safe no-op
Adding a new `WorkflowRunResult` variant requires updating `tagToStatsOutcome()` and `sidecardLifecycleFor()` -- the compiler enforces both via `assertNever`. No I/O needs to be added at the new return site.

package/docs/vision.md CHANGED

@@ -14,7 +14,7 @@ WorkTrain runs the workrail repository as one of its own workspaces. It picks up

This creates a direct feedback loop: if WorkTrain's development pipeline is flawed, it will produce flawed changes to itself and catch them in review. If its context injection is thin, it will miss things in its own codebase that a well-briefed agent would catch. The quality of WorkTrain's output is the quality of WorkTrain.
The self-improvement loop is not fully operational today, but it is the north star. If WorkTrain cannot build WorkTrain well, it cannot be trusted to build anything else.
## What success looks like

@@ -34,7 +34,7 @@ WorkTrain earns trust over time by doing this correctly, repeatedly, at scale --

**Zero LLM turns for routing.** Coordinator decisions -- what workflow to run next, whether findings are blocking, when to merge -- are deterministic TypeScript code. LLM turns are used for cognitive work: understanding code, writing code, evaluating findings. Never for deciding "what do I do next?".
**Structured outputs at every boundary.** Each phase produces a typed result. The next phase reads that result. Free-text scraping between phases is a design smell. Typed contracts at phase boundaries are what make phases composable without a main agent holding context.
**Correctness over speed.** WorkTrain does not merge changes it is not confident in. Review findings are addressed. Tests pass. The right next step is not always the fastest one.

@@ -88,18 +88,6 @@ WorkTrain does not pause for: implementation decisions within a well-specified t

This boundary is still being tested and refined through real usage. Where exactly "genuine ambiguity" begins is an open question.
## Open questions
These are genuinely unresolved. Any agent operating in this system should know they exist and not assume they are answered.

@@ -112,4 +100,6 @@ These are genuinely unresolved. Any agent operating in this system should know t

- **What is the right granularity of tasks?** WorkTrain is being designed for ticket-sized work. Whether it handles epics (by decomposing them), hotfixes (by moving fast and deferring thoroughness), and architectural changes (which may require multiple sessions across multiple days) the same way is untested.
- **Is typed-artifact-per-phase the right abstraction for inter-phase context?** The current model threads structured handoff artifacts between pipeline phases. Whether this is sufficient long-term, or whether a queryable per-workspace knowledge store (indexed by topic, accessible across pipeline runs and across tasks) is needed for things like codebase-specific priors and accumulated project memory, is an open question. See `docs/ideas/backlog.md`: "Knowledge graph".
For current priorities and status, run `npm run backlog` or read `docs/ideas/backlog.md`.