@snipcodeit/mgw 0.3.0 → 0.6.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -22,6 +22,29 @@ CROSS_REFS=$(cat ${REPO_ROOT}/.mgw/cross-refs.json 2>/dev/null)
  # Progress table for PR details section
  PROGRESS_TABLE=$(node ~/.claude/get-shit-done/bin/gsd-tools.cjs progress table --raw 2>/dev/null || echo "")
 
+ **Verify execution evidence exists before creating PR:**
+ ```bash
+ SUMMARY_COUNT=$(ls ${gsd_artifacts_path}/*SUMMARY* 2>/dev/null | wc -l)
+ if [ "$SUMMARY_COUNT" -eq 0 ]; then
+   echo "MGW ERROR: No SUMMARY files found at ${gsd_artifacts_path}. Cannot create PR without execution evidence."
+   echo "This usually means the executor agent failed silently. Check the execution logs."
+   # Update pipeline_stage to "failed"
+   node -e "
+   const fs = require('fs'), path = require('path');
+   const activeDir = path.join(process.cwd(), '.mgw', 'active');
+   const files = fs.readdirSync(activeDir);
+   const file = files.find(f => f.startsWith('${ISSUE_NUMBER}-') && f.endsWith('.json'));
+   if (file) {
+     const filePath = path.join(activeDir, file);
+     const state = JSON.parse(fs.readFileSync(filePath, 'utf-8'));
+     state.pipeline_stage = 'failed';
+     fs.writeFileSync(filePath, JSON.stringify(state, null, 2));
+   }
+   " 2>/dev/null || true
+   exit 1
+ fi
+ ```
+
  # Milestone/phase context for PR body
  MILESTONE_TITLE=""
  PHASE_INFO=""
@@ -71,10 +94,37 @@ p = json.load(open('${REPO_ROOT}/.mgw/project.json'))
  print(p.get('project', {}).get('project_board', {}).get('url', ''))
  " 2>/dev/null || echo "")
  fi
+
+ # Phase Context from GitHub issue comments (multi-developer context sharing)
+ PHASE_CONTEXT=$(node -e "
+ const ic = require('${REPO_ROOT}/lib/issue-context.cjs');
+ ic.assembleIssueContext(${ISSUE_NUMBER})
+   .then(ctx => process.stdout.write(ctx));
+ " 2>/dev/null || echo "")
  ```
 
  Read issue state for context.
 
+ <!-- mgw:criticality=critical spawn_point=pr-creator -->
+ <!-- Critical: PR creation is the pipeline's final output. Without it,
+ the entire pipeline run produces no deliverable. On failure: the
+ pipeline marks the issue as failed (no retry — PR creation errors
+ are typically permanent: branch conflicts, permissions, etc.). -->
+
+ **Pre-spawn diagnostic hook (PR creator):**
+ ```bash
+ DIAG_PR_CREATOR=$(node -e "
+ const dh = require('${REPO_ROOT}/lib/diagnostic-hooks.cjs');
+ const id = dh.beforeAgentSpawn({
+   agentType: 'general-purpose',
+   issueNumber: ${ISSUE_NUMBER},
+   prompt: 'Create PR for #${ISSUE_NUMBER}',
+   repoRoot: '${REPO_ROOT}'
+ });
+ process.stdout.write(id);
+ " 2>/dev/null || echo "")
+ ```
+
  ```
  Task(
  prompt="
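The `assembleIssueContext()` call above resolves to a preformatted context block. As a rough illustration only (the function name appears in this diff, but the formatting below is hypothetical and not the shipped `lib/issue-context.cjs`), a pure formatter over already-fetched comments might look like:

```javascript
// Hypothetical formatter: turns fetched issue comments into the PHASE_CONTEXT
// block interpolated into the PR body. Field names (author, createdAt, body)
// mirror the gh --json fields used elsewhere in this pipeline.
function formatIssueContext(issueNumber, comments) {
  if (!Array.isArray(comments) || comments.length === 0) return '';
  const blocks = comments.map(
    (c) => `> **${c.author}** (${c.createdAt}):\n> ${c.body.replace(/\n/g, '\n> ')}`
  );
  return `### Context for #${issueNumber}\n\n${blocks.join('\n\n')}`;
}

module.exports = { formatIssueContext };
```

Returning an empty string when there are no comments matches the "(Skip if PHASE_CONTEXT is empty)" convention used in the PR body template.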
@@ -122,6 +172,10 @@ ${COMMITS}
  ${CROSS_REFS}
  </cross_refs>
 
+ <phase_context>
+ ${PHASE_CONTEXT}
+ </phase_context>
+
  <instructions>
  1. Build PR title: short, prefixed with fix:/feat:/refactor: based on issue labels. Under 70 characters.
 
@@ -141,6 +195,14 @@ Closes #${ISSUE_NUMBER}
  ## Changes
  - File-level changes grouped by module (use key_files from summary_structured)
 
+ ## Phase Context
+ <details>
+ <summary>Planning & execution context from GitHub issue comments</summary>
+
+ ${PHASE_CONTEXT}
+ </details>
+ (Skip if PHASE_CONTEXT is empty)
+
  ## Test Plan
  - Verification checklist from VERIFICATION artifact
 
@@ -165,8 +227,34 @@ ${PROGRESS_TABLE}
  )
  ```
 
+ **Post-spawn diagnostic hook (PR creator):**
+ ```bash
+ EXIT_REASON=$([ -n "$PR_NUMBER" ] && echo "success" || echo "error")
+ node -e "
+ const dh = require('${REPO_ROOT}/lib/diagnostic-hooks.cjs');
+ dh.afterAgentSpawn({
+   diagId: '${DIAG_PR_CREATOR}',
+   exitReason: '${EXIT_REASON}',
+   repoRoot: '${REPO_ROOT}'
+ });
+ " 2>/dev/null || true
+ ```
+
  Parse PR number and URL from agent response.
 
+ **Checkpoint: record PR creation (atomic write):**
+ ```bash
+ # Checkpoint: record PR creation — final checkpoint before pipeline completion.
+ node -e "
+ const { updateCheckpoint } = require('${REPO_ROOT}/lib/state.cjs');
+ updateCheckpoint(${ISSUE_NUMBER}, {
+   pipeline_step: 'pr',
+   step_progress: { branch_pushed: true, pr_number: ${PR_NUMBER}, pr_url: '${PR_URL}' },
+   step_history: [{ step: 'pr', completed_at: new Date().toISOString(), agent_type: 'general-purpose', output_path: '${PR_URL}' }],
+   resume: { action: 'cleanup', context: { pr_number: ${PR_NUMBER}, pr_url: '${PR_URL}' } }
+ });
+ " 2>/dev/null || true
+ ```
+
  Update state (at `${REPO_ROOT}/.mgw/active/`):
  - linked_pr = PR number
  - pipeline_stage = "pr-created"
@@ -38,7 +38,7 @@ If no state file exists → issue not triaged yet. Run triage inline:
  - Execute the mgw:issue triage flow (steps from issue.md) inline.
  - After triage, reload state file.
 
- If state file exists → load it. **Run migrateProjectState() to ensure retry fields exist:**
+ If state file exists → load it. **Run migrateProjectState() to ensure retry and checkpoint fields exist:**
  ```bash
  node -e "
  const { migrateProjectState } = require('./lib/state.cjs');
@@ -46,6 +46,161 @@ migrateProjectState();
  " 2>/dev/null || true
  ```
 
+ **Checkpoint detection — check for resumable progress before stage routing:**
+
+ After loading state and running migration, detect whether a prior pipeline run left
+ a checkpoint with meaningful progress (beyond triage). If found, present the user
+ with Resume/Fresh/Skip options before proceeding.
+
+ ```bash
+ # Detect checkpoint with progress beyond triage
+ CHECKPOINT_DATA=$(node -e "
+ const { detectCheckpoint, resumeFromCheckpoint } = require('./lib/state.cjs');
+ const cp = detectCheckpoint(${ISSUE_NUMBER});
+ if (!cp) {
+   console.log('none');
+ } else {
+   const resume = resumeFromCheckpoint(${ISSUE_NUMBER});
+   console.log(JSON.stringify(resume));
+ }
+ " 2>/dev/null || echo "none")
+ ```
+
+ If checkpoint is found (`CHECKPOINT_DATA !== "none"`):
+
+ Parse the checkpoint data and display to the user:
+ ```bash
+ CHECKPOINT_STEP=$(echo "$CHECKPOINT_DATA" | node -e "
+ const d=JSON.parse(require('fs').readFileSync('/dev/stdin','utf-8'));
+ console.log(d.checkpoint.pipeline_step);
+ ")
+ RESUME_ACTION=$(echo "$CHECKPOINT_DATA" | node -e "
+ const d=JSON.parse(require('fs').readFileSync('/dev/stdin','utf-8'));
+ console.log(d.resumeAction);
+ ")
+ RESUME_STAGE=$(echo "$CHECKPOINT_DATA" | node -e "
+ const d=JSON.parse(require('fs').readFileSync('/dev/stdin','utf-8'));
+ console.log(d.resumeStage);
+ ")
+ COMPLETED_STEPS=$(echo "$CHECKPOINT_DATA" | node -e "
+ const d=JSON.parse(require('fs').readFileSync('/dev/stdin','utf-8'));
+ console.log(d.completedSteps.join(', '));
+ ")
+ ARTIFACTS_COUNT=$(echo "$CHECKPOINT_DATA" | node -e "
+ const d=JSON.parse(require('fs').readFileSync('/dev/stdin','utf-8'));
+ console.log(d.checkpoint.artifacts.length);
+ ")
+ STARTED_AT=$(echo "$CHECKPOINT_DATA" | node -e "
+ const d=JSON.parse(require('fs').readFileSync('/dev/stdin','utf-8'));
+ console.log(d.checkpoint.started_at || 'unknown');
+ ")
+ UPDATED_AT=$(echo "$CHECKPOINT_DATA" | node -e "
+ const d=JSON.parse(require('fs').readFileSync('/dev/stdin','utf-8'));
+ console.log(d.checkpoint.updated_at || 'unknown');
+ ")
+ ```
+
+ Display checkpoint state and prompt user:
+ ```
+ AskUserQuestion(
+   header: "Checkpoint Detected for #${ISSUE_NUMBER}",
+   question: "A prior pipeline run left progress at step '${CHECKPOINT_STEP}'.
+
+ | | |
+ |---|---|
+ | **Last step** | ${CHECKPOINT_STEP} |
+ | **Completed steps** | ${COMPLETED_STEPS} |
+ | **Artifacts** | ${ARTIFACTS_COUNT} file(s) |
+ | **Resume action** | ${RESUME_ACTION} → stage: ${RESUME_STAGE} |
+ | **Started** | ${STARTED_AT} |
+ | **Last updated** | ${UPDATED_AT} |
+
+ How would you like to proceed?",
+   options: [
+     { label: "Resume", description: "Resume from checkpoint — skip completed steps (${COMPLETED_STEPS}), jump to ${RESUME_STAGE}" },
+     { label: "Fresh", description: "Discard checkpoint and re-run pipeline from scratch" },
+     { label: "Skip", description: "Skip this issue entirely" }
+   ]
+ )
+ ```
+
+ Handle user choice:
+
+ | Choice | Action |
+ |--------|--------|
+ | **Resume** | Load checkpoint context. Set `pipeline_stage` in state to `${RESUME_STAGE}`. Log: "MGW: Resuming #${ISSUE_NUMBER} from checkpoint (step: ${CHECKPOINT_STEP}, action: ${RESUME_ACTION})." Skip triage/worktree stages that already completed and jump directly to the resume stage in the pipeline. The `resume.context` object carries step-specific data (e.g., `quick_dir`, `plan_num`, `phase_number`) needed by the target stage. |
+ | **Fresh** | Clear checkpoint via `clearCheckpoint()`. Reset `pipeline_stage` to `"triaged"`. Log: "MGW: Checkpoint cleared for #${ISSUE_NUMBER}. Starting fresh." Continue with normal pipeline flow. |
+ | **Skip** | Log: "MGW: Skipping #${ISSUE_NUMBER} per user request." STOP pipeline. |
+
+ ```bash
+ case "$USER_CHOICE" in
+   Resume)
+     # Load resume context and jump to the appropriate stage
+     node -e "
+     const fs = require('fs'), path = require('path');
+     const activeDir = path.join(process.cwd(), '.mgw', 'active');
+     const files = fs.readdirSync(activeDir);
+     const file = files.find(f => f.startsWith('${ISSUE_NUMBER}-') && f.endsWith('.json'));
+     const filePath = path.join(activeDir, file);
+     const state = JSON.parse(fs.readFileSync(filePath, 'utf-8'));
+     // The pipeline_stage already reflects prior progress — do not overwrite
+     // unless the resume target is more advanced than current stage
+     console.log('Resuming from checkpoint: ' + JSON.stringify(state.checkpoint.resume));
+     " 2>/dev/null || true
+     # Set RESUME_MODE=true — downstream stages check this flag to skip completed work
+     RESUME_MODE=true
+     RESUME_CONTEXT="${CHECKPOINT_DATA}"
+     ;;
+   Fresh)
+     node -e "
+     const { clearCheckpoint } = require('./lib/state.cjs');
+     clearCheckpoint(${ISSUE_NUMBER});
+     console.log('Checkpoint cleared for #${ISSUE_NUMBER}');
+     " 2>/dev/null || true
+     # Reset pipeline_stage to triaged for fresh start
+     node -e "
+     const fs = require('fs'), path = require('path');
+     const activeDir = path.join(process.cwd(), '.mgw', 'active');
+     const files = fs.readdirSync(activeDir);
+     const file = files.find(f => f.startsWith('${ISSUE_NUMBER}-') && f.endsWith('.json'));
+     const filePath = path.join(activeDir, file);
+     const state = JSON.parse(fs.readFileSync(filePath, 'utf-8'));
+     state.pipeline_stage = 'triaged';
+     fs.writeFileSync(filePath, JSON.stringify(state, null, 2));
+     " 2>/dev/null || true
+     RESUME_MODE=false
+     ;;
+   Skip)
+     echo "MGW: Skipping #${ISSUE_NUMBER} per user request."
+     exit 0
+     ;;
+ esac
+ ```
+
+ If no checkpoint found (or checkpoint is at triage step only), continue with
+ normal pipeline stage routing below.
+
+ **Initialize checkpoint** when pipeline first transitions past triage:
+ ```bash
+ # Checkpoint initialization — called once when pipeline execution begins.
+ # Sets pipeline_step to "triage" with route selection progress.
+ # Subsequent stages update the checkpoint via updateCheckpoint().
+ # All checkpoint writes are atomic (write to .tmp then rename).
+ node -e "
+ const { updateCheckpoint } = require('./lib/state.cjs');
+ updateCheckpoint(${ISSUE_NUMBER}, {
+   pipeline_step: 'triage',
+   step_progress: {
+     comment_check_done: true,
+     route_selected: '${GSD_ROUTE}'
+   },
+   resume: {
+     action: 'begin-execution',
+     context: { gsd_route: '${GSD_ROUTE}', branch: '${BRANCH_NAME}' }
+   }
+ });
+ " 2>/dev/null || true
+ ```
  Check pipeline_stage:
  - "triaged" → proceed to GSD execution
  - "planning" / "executing" → resume from where we left off
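For orientation, the parsing block above assumes a specific shape from `resumeFromCheckpoint()`. The sketch below derives the same fields the bash snippets read (`checkpoint`, `completedSteps`, `resumeAction`, `resumeStage`); the step names and next-stage mapping are guesses from context, not the actual `lib/state.cjs` implementation:

```javascript
// Assumed pipeline step order (inferred from this diff; not authoritative).
const STEP_ORDER = ['triage', 'plan', 'execute', 'verify', 'pr'];

// Derives the resume payload the bash parsing block expects.
function deriveResume(checkpoint) {
  if (!checkpoint || !STEP_ORDER.includes(checkpoint.pipeline_step)) return null;
  const idx = STEP_ORDER.indexOf(checkpoint.pipeline_step);
  return {
    checkpoint,
    completedSteps: STEP_ORDER.slice(0, idx + 1),
    resumeAction: (checkpoint.resume && checkpoint.resume.action) || 'begin-execution',
    resumeStage: STEP_ORDER[idx + 1] || 'cleanup',
  };
}

module.exports = { deriveResume, STEP_ORDER };
```

A checkpoint left after `execute` would then resume at `verify`, with `triage`, `plan`, and `execute` listed as completed.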
@@ -359,7 +514,37 @@ NEW_COMMENTS=$(gh issue view $ISSUE_NUMBER --json comments \
  --jq "[.comments[-${NEW_COUNT}:]] | .[] | {author: .author.login, body: .body, createdAt: .createdAt}" 2>/dev/null)
  ```
 
- 2. **Spawn classification agent:**
+ 2. **Spawn classification agent (with diagnostic capture):**
+
+ <!-- mgw:criticality=advisory spawn_point=comment-classifier -->
+ <!-- Advisory: comment classification failure does not block the pipeline.
+ If this agent fails, log a warning and treat all new comments as
+ informational (safe default — pipeline continues with stale data).
+
+ Graceful degradation pattern:
+ ```
+ CLASSIFICATION_RESULT=$(wrapAdvisoryAgent(Task(...), 'comment-classifier', {
+   issueNumber: ISSUE_NUMBER,
+   fallback: '{"classification":"informational","reasoning":"comment classifier unavailable","new_requirements":[],"blocking_reason":""}'
+ }))
+ ```
+ -->
+
+ **Pre-spawn diagnostic hook:**
+ ```bash
+ CLASSIFIER_PROMPT="<full classifier prompt assembled above>"
+ DIAG_CLASSIFIER=$(node -e "
+ const dh = require('${REPO_ROOT}/lib/diagnostic-hooks.cjs');
+ const id = dh.beforeAgentSpawn({
+   agentType: 'general-purpose',
+   issueNumber: ${ISSUE_NUMBER},
+   prompt: process.argv[1],
+   repoRoot: '${REPO_ROOT}'
+ });
+ process.stdout.write(id);
+ " "$CLASSIFIER_PROMPT" 2>/dev/null || echo "")
+ ```
+
  ```
  Task(
  prompt="
@@ -414,6 +599,18 @@ Return ONLY valid JSON:
  )
  ```
 
+ **Post-spawn diagnostic hook:**
+ ```bash
+ EXIT_REASON=$([ -n "$CLASSIFICATION_RESULT" ] && echo "success" || echo "error")
+ node -e "
+ const dh = require('${REPO_ROOT}/lib/diagnostic-hooks.cjs');
+ dh.afterAgentSpawn({
+   diagId: '${DIAG_CLASSIFIER}',
+   exitReason: '${EXIT_REASON}',
+   repoRoot: '${REPO_ROOT}'
+ });
+ " 2>/dev/null || true
+ ```
+
  3. **React based on classification:**
 
  | Classification | Action |
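The `wrapAdvisoryAgent` helper named in the advisory comment is not itself shown in this diff. A minimal sketch of the graceful-degradation contract it implies (the function name and option shape are assumptions) would be:

```javascript
// Sketch: run an advisory agent, fall back to a safe default on any failure,
// so the pipeline continues with stale but safe data instead of blocking.
function wrapAdvisoryAgent(spawnFn, agentName, { fallback }) {
  try {
    const result = spawnFn();
    if (result === undefined || result === null || result === '') {
      throw new Error('empty result');
    }
    return result;
  } catch (err) {
    // Advisory agents must never abort the run: warn and degrade.
    console.warn(`MGW WARN: ${agentName} failed (${err.message}); using fallback.`);
    return fallback;
  }
}

module.exports = { wrapAdvisoryAgent };
```

Here `fallback` would be the `"informational"` classification JSON from the comment above, so a dead classifier is indistinguishable from "nothing actionable in the new comments."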
@@ -12,6 +12,47 @@ BRANCH_NAME="issue/${ISSUE_NUMBER}-${slug}"
  WORKTREE_DIR="${REPO_ROOT}/.worktrees/${BRANCH_NAME}"
  ```
 
+ **Check for active work on this issue by another developer:**
+ ```bash
+ # Check 1: Remote branch already exists (another machine pushed work)
+ REMOTE_BRANCHES=$(git ls-remote --heads origin "issue/${ISSUE_NUMBER}-*" 2>/dev/null | awk '{print $2}' | sed 's|refs/heads/||')
+ if [ -n "$REMOTE_BRANCHES" ]; then
+   echo "WARNING: Remote branch(es) already exist for issue #${ISSUE_NUMBER}:"
+   echo "$REMOTE_BRANCHES" | sed 's/^/ /'
+   echo ""
+   AskUserQuestion(
+     header: "Active Work Detected",
+     question: "Remote branch exists for #${ISSUE_NUMBER}. Another developer may be working on this. Proceed anyway?",
+     options: [
+       { label: "Proceed", description: "Create a new worktree anyway (may cause conflicts)" },
+       { label: "Abort", description: "Stop pipeline — coordinate with the other developer first" }
+     ]
+   )
+   if [ "$USER_CHOICE" = "Abort" ]; then
+     echo "Pipeline aborted — coordinate with the developer who owns the existing branch."
+     exit 1
+   fi
+ fi
+
+ # Check 2: Open PR already exists for this issue
+ EXISTING_PR=$(gh pr list --search "issue/${ISSUE_NUMBER}-" --state open --json number,headRefName --jq '.[0].number' 2>/dev/null || echo "")
+ if [ -n "$EXISTING_PR" ]; then
+   echo "WARNING: Open PR #${EXISTING_PR} already exists for issue #${ISSUE_NUMBER}."
+   echo "Creating a new worktree will produce a conflicting PR."
+   AskUserQuestion(
+     header: "Open PR Exists",
+     question: "PR #${EXISTING_PR} is already open for this issue. Proceed with a new worktree?",
+     options: [
+       { label: "Proceed", description: "Create new worktree anyway" },
+       { label: "Abort", description: "Stop — review the existing PR first" }
+     ]
+   )
+   if [ "$USER_CHOICE" = "Abort" ]; then
+     exit 1
+   fi
+ fi
+ ```
+
  Ensure .worktrees/ is gitignored:
  ```bash
  mkdir -p "$(dirname "${WORKTREE_DIR}")"
package/commands/sync.md CHANGED
@@ -33,6 +33,18 @@ Run periodically or when starting a new session to get a clean view.
 
  <process>
 
+ <step name="parse_arguments">
+ **Parse sync flags:**
+ ```bash
+ FULL_SYNC=false
+ for arg in "$@"; do
+   case "$arg" in
+     --full) FULL_SYNC=true ;;
+   esac
+ done
+ ```
+ </step>
+
  <step name="pull_board_state">
  **Pull board state to reconstruct missing local .mgw/active/ files.**
 
@@ -348,6 +360,63 @@ rebuilt from board data — triage results and GSD artifacts will need to be re-
  issue advances to `planning` or beyond.
  </step>
 
+ <step name="rebuild_from_committed">
+ **Rebuild project context from committed planning files (--full only):**
+
+ Only runs when `--full` flag is passed. This is the multi-machine context rebuild:
+ after `git pull`, a developer runs `mgw:sync --full` to reconstruct full project
+ context from committed `.planning/` files and the board.
+
+ ```bash
+ if [ "$FULL_SYNC" = "true" ]; then
+   echo "Full sync: rebuilding project context from committed planning files..."
+
+   # 1. Check for committed ROADMAP.md
+   if [ -f ".planning/ROADMAP.md" ]; then
+     echo " Found committed ROADMAP.md"
+     ROADMAP_ANALYSIS=$(node ~/.claude/get-shit-done/bin/gsd-tools.cjs roadmap analyze 2>/dev/null || echo '{}')
+     PHASE_COUNT=$(echo "$ROADMAP_ANALYSIS" | python3 -c "import json,sys; print(len(json.load(sys.stdin).get('phases',[])))" 2>/dev/null || echo "0")
+     echo " ROADMAP.md contains ${PHASE_COUNT} phases"
+   else
+     echo " No committed ROADMAP.md found — planning context unavailable"
+     echo " (This is expected if the project hasn't run a milestone yet)"
+   fi
+
+   # 2. Scan for committed phase artifacts
+   COMMITTED_PLANS=$(find .planning -name '*-PLAN.md' -not -path '.planning/quick/*' 2>/dev/null | wc -l)
+   COMMITTED_SUMMARIES=$(find .planning -name '*-SUMMARY.md' -not -path '.planning/quick/*' 2>/dev/null | wc -l)
+   echo " Committed phase plans: ${COMMITTED_PLANS}"
+   echo " Committed phase summaries: ${COMMITTED_SUMMARIES}"
+
+   # 3. If project.json is missing but board + ROADMAP exist, offer to reconstruct
+   if [ ! -f "${MGW_DIR}/project.json" ] && [ -f ".planning/ROADMAP.md" ] && [ -n "$BOARD_NODE_ID" ]; then
+     echo ""
+     echo " project.json is missing but ROADMAP.md and board are available."
+     echo " Run /mgw:project to reconstruct full project state."
+     echo " (Board state has been pulled — active issues are available)"
+   fi
+
+   # 4. Rebuild context cache from GitHub issue comments
+   echo " Rebuilding context cache from GitHub issue comments..."
+   node -e "
+   const ic = require('${REPO_ROOT}/lib/issue-context.cjs');
+   ic.rebuildContextCache()
+     .then(stats => console.log(' Cached summaries for', stats.issueCount, 'issues across', stats.milestoneCount, 'milestones'))
+     .catch(e => console.error(' Cache rebuild failed:', e.message));
+   " 2>/dev/null || echo " Context cache rebuild skipped (issue-context module not available)"
+
+   # 5. Refresh Project README with current milestone progress
+   echo " Refreshing Project README..."
+   node -e "
+   const ic = require('${REPO_ROOT}/lib/issue-context.cjs');
+   ic.updateProjectReadme()
+     .then(ok => console.log(ok ? ' Project README updated' : ' Project README skipped (no board configured)'))
+     .catch(() => console.log(' Project README refresh skipped'));
+   " 2>/dev/null || true
+ fi
+ ```
+ </step>
+
  <step name="scan_active">
  **Scan all active issue states:**
 
@@ -355,6 +355,77 @@ The agent is read-only (general-purpose, no code execution). It reads project st
  and codebase to classify, then MGW presents the result and offers follow-up actions
  (file new issue, post comment on related issue, etc.).
 
+ ## Diagnostic Capture Hooks
+
+ Every Task() spawn in the mgw:run pipeline SHOULD be instrumented with diagnostic
+ capture hooks using `lib/diagnostic-hooks.cjs`. This provides per-agent telemetry
+ (timing, prompt hash, exit reason, failure classification) without blocking pipeline
+ execution.
+
+ **Required modules:**
+ - `lib/diagnostic-hooks.cjs` — Before/after hooks for Task() spawns
+ - `lib/agent-diagnostics.cjs` — Underlying diagnostic logger (writes to `.mgw/diagnostics/`)
+
+ **Pattern — wrap every Task() spawn:**
+
+ ```bash
+ # 1. Before spawning: record start time and hash prompt
+ DIAG_ID=$(node -e "
+ const dh = require('${REPO_ROOT}/lib/diagnostic-hooks.cjs');
+ const id = dh.beforeAgentSpawn({
+   agentType: '${AGENT_TYPE}',   // gsd-planner, gsd-executor, etc.
+   issueNumber: ${ISSUE_NUMBER},
+   prompt: '${PROMPT_SUMMARY}',  // short description, not full prompt
+   repoRoot: '${REPO_ROOT}'
+ });
+ process.stdout.write(id);
+ " 2>/dev/null || echo "")
+
+ # 2. Spawn Task() agent (unchanged)
+ Task(
+   prompt="...",
+   subagent_type="${AGENT_TYPE}",
+   description="..."
+ )
+
+ # 3. After agent completes: record exit reason and write diagnostic entry
+ EXIT_REASON=$( <check artifact exists> && echo "success" || echo "error" )
+ node -e "
+ const dh = require('${REPO_ROOT}/lib/diagnostic-hooks.cjs');
+ dh.afterAgentSpawn({
+   diagId: '${DIAG_ID}',
+   exitReason: '${EXIT_REASON}',
+   repoRoot: '${REPO_ROOT}'
+ });
+ " 2>/dev/null || true
+ ```
+
+ **Key design principles:**
+ - **Non-blocking:** All hook calls are wrapped in `2>/dev/null || true` (bash) or
+   try/catch (JS). If diagnostic capture fails, the pipeline continues normally.
+ - **No prompt storage:** Only a hash of the prompt is stored, not the full text.
+   Use `shortHash()` from `agent-diagnostics.cjs` for longer prompts.
+ - **Exit reason detection:** Use artifact existence checks (e.g., `PLAN.md` exists
+   means the planner succeeded) rather than relying on Task() return values.
+ - **Graceful degradation:** If `agent-diagnostics.cjs` is not available (dependency
+   PR not merged), `diagnostic-hooks.cjs` logs a warning and returns empty handles.
+
+ **Diagnostic entries are written to:** `.mgw/diagnostics/<issueNumber>-<timestamp>.json`
+
+ **Instrumented agent spawns in mgw:run:**
+
+ | Agent | File | Step |
+ |-------|------|------|
+ | Comment classifier | `run/triage.md` | preflight_comment_check |
+ | Planner (quick) | `run/execute.md` | execute_gsd_quick step 3 |
+ | Plan-checker (quick) | `run/execute.md` | execute_gsd_quick step 6 |
+ | Executor (quick) | `run/execute.md` | execute_gsd_quick step 7 |
+ | Verifier (quick) | `run/execute.md` | execute_gsd_quick step 9 |
+ | Planner (milestone) | `run/execute.md` | execute_gsd_milestone step b |
+ | Executor (milestone) | `run/execute.md` | execute_gsd_milestone step d |
+ | Verifier (milestone) | `run/execute.md` | execute_gsd_milestone step e |
+ | PR creator | `run/pr-create.md` | create_pr |
+
  ## Anti-Patterns
 
  - **NEVER** use Skill invocation from within a Task() agent — Skills don't resolve inside subagents