@snipcodeit/mgw 0.1.1 → 0.1.3
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +20 -9
- package/commands/board.md +75 -0
- package/commands/issue.md +9 -10
- package/commands/milestone.md +180 -15
- package/commands/project.md +55 -1651
- package/commands/run.md +319 -20
- package/commands/sync.md +409 -1
- package/commands/workflows/github.md +19 -4
- package/dist/bin/mgw.cjs +2 -2
- package/dist/{claude-Vp9qvImH.cjs → claude-Dk1oVsaG.cjs} +156 -0
- package/dist/lib/index.cjs +237 -12
- package/package.json +1 -1
package/commands/run.md
CHANGED
````diff
@@ -57,7 +57,7 @@ REPO_ROOT=$(git rev-parse --show-toplevel)
 DEFAULT=$(gh repo view --json defaultBranchRef -q .defaultBranchRef.name)
 ```
 
-Parse $ARGUMENTS for issue number. If missing:
+Parse $ARGUMENTS for issue number and flags. If issue number missing:
 ```
 AskUserQuestion(
 header: "Issue Number Required",
@@ -66,6 +66,16 @@ AskUserQuestion(
 )
 ```
 
+Extract flags from $ARGUMENTS:
+```bash
+RETRY_FLAG=false
+for ARG in $ARGUMENTS; do
+  case "$ARG" in
+    --retry) RETRY_FLAG=true ;;
+  esac
+done
+```
+
 Check for existing state: `${REPO_ROOT}/.mgw/active/${ISSUE_NUMBER}-*.json`
 
 If no state file exists → issue not triaged yet. Run triage inline:
````
````diff
@@ -73,11 +83,71 @@ If no state file exists → issue not triaged yet. Run triage inline:
 - Execute the mgw:issue triage flow (steps from issue.md) inline.
 - After triage, reload state file.
 
-If state file exists → load it.
+If state file exists → load it. **Run migrateProjectState() to ensure retry fields exist:**
+```bash
+node -e "
+const { migrateProjectState } = require('./lib/state.cjs');
+migrateProjectState();
+" 2>/dev/null || true
+```
+
+Check pipeline_stage:
 - "triaged" → proceed to GSD execution
 - "planning" / "executing" → resume from where we left off
 - "blocked" → "Pipeline for #${ISSUE_NUMBER} is blocked by a stakeholder comment. Review the issue comments, resolve the blocker, then re-run."
 - "pr-created" / "done" → "Pipeline already completed for #${ISSUE_NUMBER}. Run /mgw:sync to reconcile."
+- "failed" → Check for --retry flag:
+  - If --retry NOT present:
+    ```
+    Pipeline for #${ISSUE_NUMBER} has failed (failure class: ${last_failure_class || "unknown"}).
+    dead_letter: ${dead_letter}
+
+    To retry: /mgw:run ${ISSUE_NUMBER} --retry
+    To inspect: /mgw:issue ${ISSUE_NUMBER}
+    ```
+    STOP.
+  - If --retry present and dead_letter === true:
+    ```bash
+    # Clear dead_letter and reset retry state via resetRetryState()
+    node -e "
+    const { loadActiveIssue } = require('./lib/state.cjs');
+    const { resetRetryState } = require('./lib/retry.cjs');
+    const fs = require('fs'), path = require('path');
+    const activeDir = path.join(process.cwd(), '.mgw', 'active');
+    const files = fs.readdirSync(activeDir);
+    const file = files.find(f => f.startsWith('${ISSUE_NUMBER}-') && f.endsWith('.json'));
+    if (!file) { console.error('No state file for #${ISSUE_NUMBER}'); process.exit(1); }
+    const filePath = path.join(activeDir, file);
+    const state = JSON.parse(fs.readFileSync(filePath, 'utf-8'));
+    const reset = resetRetryState(state);
+    reset.pipeline_stage = 'triaged';
+    fs.writeFileSync(filePath, JSON.stringify(reset, null, 2));
+    console.log('Retry state cleared for #${ISSUE_NUMBER}');
+    "
+    # Remove pipeline-failed label
+    gh issue edit ${ISSUE_NUMBER} --remove-label "pipeline-failed" 2>/dev/null || true
+    ```
+    Log: "MGW: dead_letter cleared for #${ISSUE_NUMBER} via --retry flag. Re-queuing."
+    Continue pipeline (treat as triaged).
+  - If --retry present and dead_letter !== true (manual retry of non-dead-lettered failure):
+    ```bash
+    node -e "
+    const { resetRetryState } = require('./lib/retry.cjs');
+    const fs = require('fs'), path = require('path');
+    const activeDir = path.join(process.cwd(), '.mgw', 'active');
+    const files = fs.readdirSync(activeDir);
+    const file = files.find(f => f.startsWith('${ISSUE_NUMBER}-') && f.endsWith('.json'));
+    if (!file) { console.error('No state file'); process.exit(1); }
+    const filePath = path.join(activeDir, file);
+    const state = JSON.parse(fs.readFileSync(filePath, 'utf-8'));
+    const reset = resetRetryState(state);
+    reset.pipeline_stage = 'triaged';
+    fs.writeFileSync(filePath, JSON.stringify(reset, null, 2));
+    console.log('Retry state reset for #${ISSUE_NUMBER}');
+    "
+    gh issue edit ${ISSUE_NUMBER} --remove-label "pipeline-failed" 2>/dev/null || true
+    ```
+    Continue pipeline.
 - "needs-info" → Check for --force flag in $ARGUMENTS:
 If --force NOT present:
 ```
````
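The hunks above call `resetRetryState()` from `lib/retry.cjs` without showing its body. A minimal sketch of its apparent contract, assuming only the field names that appear in this diff (`retry_count`, `dead_letter`, `last_failure_class`); the shipped implementation may differ:

```javascript
// Sketch of the assumed resetRetryState() contract: zero out retry bookkeeping
// while preserving all other state fields. The field names come from the state
// reads/writes visible in this diff; everything else here is an assumption.
function resetRetryState(state) {
  return {
    ...state,                 // keep issue metadata, labels, etc. untouched
    retry_count: 0,           // restart the retry budget
    dead_letter: false,       // issue is eligible for the pipeline again
    last_failure_class: null, // no failure on record
  };
}

const before = { issue: 42, retry_count: 3, dead_letter: true, last_failure_class: 'transient' };
console.log(resetRetryState(before));
```

The callers above then separately set `pipeline_stage = 'triaged'`, so the reset itself stays agnostic about pipeline position.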
````diff
@@ -107,6 +177,24 @@ If state file exists → load it. Check pipeline_stage:
 Update state: pipeline_stage = "triaged", add override_log entry.
 Continue pipeline.
 
+**Route selection via gsd-adapter (runs after loading issue state):**
+
+Use `selectGsdRoute()` from `lib/gsd-adapter.cjs` to determine the GSD execution
+path. This centralizes the routing decision so it is auditable and consistent
+across all pipeline commands:
+
+```bash
+GSD_ROUTE=$(node -e "
+const { selectGsdRoute } = require('./lib/gsd-adapter.cjs');
+const issue = $(cat ${REPO_ROOT}/.mgw/active/${STATE_FILE});
+const { loadProjectState } = require('./lib/state.cjs');
+const projectState = loadProjectState() || {};
+const route = selectGsdRoute(issue, projectState);
+console.log(route);
+")
+# GSD_ROUTE is one of: quick | plan-phase | diagnose | execute-only | verify-only
+```
+
 **Cross-milestone detection (runs after loading issue state):**
 
 Check if this issue belongs to a non-active GSD milestone:
````
````diff
@@ -330,9 +418,7 @@ Add cross-ref (at `${REPO_ROOT}/.mgw/cross-refs.json`): issue → branch.
 
 **Apply in-progress label:**
 ```bash
-
-gh issue edit ${ISSUE_NUMBER} --remove-label "mgw:triaged" 2>/dev/null
-gh issue edit ${ISSUE_NUMBER} --add-label "mgw:in-progress" 2>/dev/null
+remove_mgw_labels_and_apply ${ISSUE_NUMBER} "mgw:in-progress"
 ```
 
 **PATH CONVENTION for remaining steps:**
````
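This release repeatedly swaps the raw `gh issue edit --remove-label/--add-label` pairs for a single `remove_mgw_labels_and_apply` helper whose definition is not in these hunks. Its label arithmetic (strip every `mgw:`-prefixed label, then apply the requested one, where an empty string means apply nothing) can be modeled as a pure function; the helper name and behavior are inferred from the call sites in this diff:

```javascript
// Models the assumed label arithmetic of remove_mgw_labels_and_apply:
// given the issue's current labels and the desired mgw label, compute which
// labels to remove and which to add. Pure function, so it is easy to test;
// the real helper presumably drives `gh issue edit` with the result.
function planLabelSwap(currentLabels, newLabel) {
  const remove = currentLabels.filter(l => l.startsWith('mgw:') && l !== newLabel);
  const add = newLabel && !currentLabels.includes(newLabel) ? [newLabel] : [];
  return { remove, add };
}

console.log(planLabelSwap(['bug', 'mgw:triaged'], 'mgw:in-progress'));
// { remove: [ 'mgw:triaged' ], add: [ 'mgw:in-progress' ] }
```

Centralizing the swap this way explains why the diff can later pass `""` to clear all MGW labels at completion with the same helper.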
````diff
@@ -443,8 +529,7 @@ fi
 
 **When blocking comment detected — apply label:**
 ```bash
-
-gh issue edit ${ISSUE_NUMBER} --add-label "mgw:blocked" 2>/dev/null
+remove_mgw_labels_and_apply ${ISSUE_NUMBER} "mgw:blocked"
 ```
 
 If no new comments detected, continue normally.
````
````diff
@@ -465,7 +550,7 @@ SYSTEM_LIST="${triage.scope.systems}"
 FILE_LIST="${triage.scope.files}"
 CONFLICTS="${triage.conflicts}"
 ROUTE_REASONING="${triage.route_reasoning}"
-TIMESTAMP=$(node
+TIMESTAMP=$(node -e "try{process.stdout.write(require('./lib/gsd-adapter.cjs').getTimestamp())}catch(e){process.stdout.write(new Date().toISOString().replace(/\\.\\d{3}Z$/,'Z'))}")
 
 # Load milestone/phase context from project.json if available
 MILESTONE_CONTEXT=""
````
````diff
@@ -524,6 +609,24 @@ Log comment in state file (at `${REPO_ROOT}/.mgw/active/`).
 
 Only run this step if gsd_route is "gsd:quick" or "gsd:quick --full".
 
+**Retry loop initialization:**
+```bash
+# Load retry state from .mgw/active/ state file
+RETRY_COUNT=$(node -e "
+const { loadActiveIssue } = require('./lib/state.cjs');
+const state = loadActiveIssue(${ISSUE_NUMBER});
+console.log((state && typeof state.retry_count === 'number') ? state.retry_count : 0);
+" 2>/dev/null || echo "0")
+EXECUTION_SUCCEEDED=false
+```
+
+**Begin retry loop** — wraps the GSD quick execution (steps 1–11 below) with transient-failure retry:
+
+```
+RETRY_LOOP:
+while canRetry(issue_state) AND NOT EXECUTION_SUCCEEDED:
+```
+
 Update pipeline_stage to "executing" in state file (at `${REPO_ROOT}/.mgw/active/`).
 
 Determine flags:
````
````diff
@@ -725,6 +828,77 @@ node ~/.claude/get-shit-done/bin/gsd-tools.cjs commit "docs(quick-${next_num}):
 ```
 
 Update state (at `${REPO_ROOT}/.mgw/active/`): gsd_artifacts.path = $QUICK_DIR, pipeline_stage = "verifying".
+
+**Retry loop — on execution failure:**
+
+If any step above fails (executor or verifier agent returns error, summary missing, etc.), capture the error and apply retry logic:
+
+```bash
+# On failure — classify and decide whether to retry
+FAILURE_CLASS=$(node -e "
+const { classifyFailure, canRetry, incrementRetry, getBackoffMs } = require('./lib/retry.cjs');
+const { loadActiveIssue } = require('./lib/state.cjs');
+const fs = require('fs'), path = require('path');
+
+const activeDir = path.join(process.cwd(), '.mgw', 'active');
+const files = fs.readdirSync(activeDir);
+const file = files.find(f => f.startsWith('${ISSUE_NUMBER}-') && f.endsWith('.json'));
+const filePath = path.join(activeDir, file);
+let issueState = JSON.parse(fs.readFileSync(filePath, 'utf-8'));
+
+// Classify the failure from the error context
+const error = { message: '${EXECUTION_ERROR_MESSAGE}' };
+const result = classifyFailure(error);
+console.error('Failure classified as: ' + result.class + ' — ' + result.reason);
+
+// Persist failure class to state
+issueState.last_failure_class = result.class;
+
+if (result.class === 'transient' && canRetry(issueState)) {
+  const backoff = getBackoffMs(issueState.retry_count || 0);
+  issueState = incrementRetry(issueState);
+  fs.writeFileSync(filePath, JSON.stringify(issueState, null, 2));
+  // Output: backoff ms so shell can sleep
+  console.log('retry:' + backoff + ':' + result.class);
+} else {
+  // Permanent failure or retries exhausted — dead-letter
+  issueState.dead_letter = true;
+  fs.writeFileSync(filePath, JSON.stringify(issueState, null, 2));
+  console.log('dead_letter:' + result.class);
+}
+")
+
+case "$FAILURE_CLASS" in
+  retry:*)
+    BACKOFF_MS=$(echo "$FAILURE_CLASS" | cut -d':' -f2)
+    BACKOFF_SEC=$(( (BACKOFF_MS + 999) / 1000 ))
+    echo "MGW: Transient failure detected — retrying in ${BACKOFF_SEC}s (retry ${RETRY_COUNT})..."
+    sleep "$BACKOFF_SEC"
+    RETRY_COUNT=$((RETRY_COUNT + 1))
+    # Loop back to retry
+    ;;
+  dead_letter:*)
+    FAILURE_CLASS_NAME=$(echo "$FAILURE_CLASS" | cut -d':' -f2)
+    EXECUTION_SUCCEEDED=false
+    # Break out of retry loop — handled in post_execution_update
+    break
+    ;;
+esac
+```
+
+On successful execution (EXECUTION_SUCCEEDED=true): break out of retry loop, clear last_failure_class:
+```bash
+node -e "
+const fs = require('fs'), path = require('path');
+const activeDir = path.join(process.cwd(), '.mgw', 'active');
+const files = fs.readdirSync(activeDir);
+const file = files.find(f => f.startsWith('${ISSUE_NUMBER}-') && f.endsWith('.json'));
+const filePath = path.join(activeDir, file);
+const state = JSON.parse(fs.readFileSync(filePath, 'utf-8'));
+state.last_failure_class = null;
+fs.writeFileSync(filePath, JSON.stringify(state, null, 2));
+" 2>/dev/null || true
+```
 </step>
 
 <step name="execute_gsd_milestone">
````
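The retry loop above leans on four `lib/retry.cjs` functions whose implementations are not shown here. A sketch of their likely semantics: the cap of 3 attempts is taken from the "Retries attempted N of 3" table later in this diff, while the base delay and doubling schedule are assumptions, not confirmed package behavior:

```javascript
// Sketch of the assumed lib/retry.cjs retry primitives. MAX_RETRIES comes from
// the "N of 3" wording in this diff; BASE_MS and the power-of-two schedule are
// illustrative guesses at "exponential backoff".
const MAX_RETRIES = 3;
const BASE_MS = 5000; // assumed base delay

function canRetry(state) {
  // Dead-lettered issues never retry, regardless of remaining budget.
  return (state.retry_count || 0) < MAX_RETRIES && state.dead_letter !== true;
}

function incrementRetry(state) {
  return { ...state, retry_count: (state.retry_count || 0) + 1 };
}

function getBackoffMs(retryCount) {
  return BASE_MS * 2 ** retryCount; // 5s, 10s, 20s for retries 0, 1, 2
}

let state = { retry_count: 0 };
while (canRetry(state)) {
  console.log(`retry ${state.retry_count}: wait ${getBackoffMs(state.retry_count)}ms`);
  state = incrementRetry(state);
}
// retry 0: wait 5000ms
// retry 1: wait 10000ms
// retry 2: wait 20000ms
```

Returning the backoff to the shell as milliseconds, as the diff does, keeps the sleep decision in one place while letting bash do the waiting.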
````diff
@@ -732,13 +906,25 @@ Update state (at `${REPO_ROOT}/.mgw/active/`): gsd_artifacts.path = $QUICK_DIR,
 
 Only run this step if gsd_route is "gsd:new-milestone".
 
+**Retry loop initialization** (same pattern as execute_gsd_quick):
+```bash
+RETRY_COUNT=$(node -e "
+const { loadActiveIssue } = require('./lib/state.cjs');
+const state = loadActiveIssue(${ISSUE_NUMBER});
+console.log((state && typeof state.retry_count === 'number') ? state.retry_count : 0);
+" 2>/dev/null || echo "0")
+EXECUTION_SUCCEEDED=false
+```
+
+**Begin retry loop** — wraps the phase-execution loop (steps 2b–2e below) with transient-failure retry. Step 2 (milestone roadmap creation) is NOT wrapped in the retry loop — roadmap creation failures are always treated as permanent (require human intervention).
+
 This is the most complex path. The orchestrator needs to:
 
 **Resolve models for milestone agents:**
 ```bash
-PLANNER_MODEL=$(node
-EXECUTOR_MODEL=$(node
-VERIFIER_MODEL=$(node
+PLANNER_MODEL=$(node -e "process.stdout.write(require('./lib/gsd-adapter.cjs').resolveModel('gsd-planner'))")
+EXECUTOR_MODEL=$(node -e "process.stdout.write(require('./lib/gsd-adapter.cjs').resolveModel('gsd-executor'))")
+VERIFIER_MODEL=$(node -e "process.stdout.write(require('./lib/gsd-adapter.cjs').resolveModel('gsd-verifier'))")
 ```
 
 1. **Discussion phase trigger for large-scope issues:**
````
````diff
@@ -747,7 +933,7 @@ If the issue was triaged with large scope and `gsd_route == "gsd:new-milestone"`
 a scope proposal comment and set the discussing stage before proceeding to phase execution:
 
 ```bash
-DISCUSS_TIMESTAMP=$(node
+DISCUSS_TIMESTAMP=$(node -e "try{process.stdout.write(require('./lib/gsd-adapter.cjs').getTimestamp())}catch(e){process.stdout.write(new Date().toISOString().replace(/\\.\\d{3}Z$/,'Z'))}")
 
 # Build scope breakdown from triage data
 SCOPE_SIZE="${triage.scope.size}"
````
````diff
@@ -760,8 +946,7 @@ PHASE_COUNT="TBD (determined by roadmapper)"
 
 Set pipeline_stage to "discussing" and apply "mgw:discussing" label:
 ```bash
-
-gh issue edit ${ISSUE_NUMBER} --add-label "mgw:discussing" 2>/dev/null
+remove_mgw_labels_and_apply ${ISSUE_NUMBER} "mgw:discussing"
 ```
 
 Present to user:
````
````diff
@@ -970,19 +1155,99 @@ COMMENTEOF
 gh issue comment ${ISSUE_NUMBER} --body "$PHASE_BODY" 2>/dev/null || true
 ```
 
+**Retry loop — on phase execution failure** (apply same pattern as execute_gsd_quick):
+
+If a phase's executor or verifier fails, capture the error and apply retry logic via `classifyFailure()`, `canRetry()`, `incrementRetry()`, and `getBackoffMs()` from `lib/retry.cjs`. Only the failing phase is retried (restart from step 2b for that phase). If the failure is transient and `canRetry()` is true: sleep backoff, call `incrementRetry()`, loop. If permanent or retries exhausted: set `dead_letter = true`, set `last_failure_class`, break the retry loop.
+
+On successful completion of all phases: clear `last_failure_class`, set `EXECUTION_SUCCEEDED=true`.
+
 After ALL phases complete → update pipeline_stage to "verifying" (at `${REPO_ROOT}/.mgw/active/`).
 </step>
 
 <step name="post_execution_update">
-**Post execution-complete comment on issue:**
+**Post execution-complete comment on issue (or failure comment if dead_letter):**
+
+Read `dead_letter` and `last_failure_class` from current issue state:
+```bash
+DEAD_LETTER=$(node -e "
+const { loadActiveIssue } = require('./lib/state.cjs');
+const state = loadActiveIssue(${ISSUE_NUMBER});
+console.log(state && state.dead_letter === true ? 'true' : 'false');
+" 2>/dev/null || echo "false")
+
+LAST_FAILURE_CLASS=$(node -e "
+const { loadActiveIssue } = require('./lib/state.cjs');
+const state = loadActiveIssue(${ISSUE_NUMBER});
+console.log((state && state.last_failure_class) ? state.last_failure_class : 'unknown');
+" 2>/dev/null || echo "unknown")
+```
+
+**If dead_letter === true — post failure comment and halt:**
+```bash
+if [ "$DEAD_LETTER" = "true" ]; then
+FAIL_TIMESTAMP=$(node ~/.claude/get-shit-done/bin/gsd-tools.cjs current-timestamp --raw 2>/dev/null || date -u +"%Y-%m-%dT%H:%M:%SZ")
+RETRY_COUNT_CURRENT=$(node -e "
+const { loadActiveIssue } = require('./lib/state.cjs');
+const state = loadActiveIssue(${ISSUE_NUMBER});
+console.log((state && typeof state.retry_count === 'number') ? state.retry_count : 0);
+" 2>/dev/null || echo "0")
+
+FAIL_BODY=$(cat <<COMMENTEOF
+> **MGW** · \`pipeline-failed\` · ${FAIL_TIMESTAMP}
+> ${MILESTONE_CONTEXT}
+
+### Pipeline Failed
 
-
+Issue #${ISSUE_NUMBER} — ${issue_title}
+
+| | |
+|---|---|
+| **Failure class** | \`${LAST_FAILURE_CLASS}\` |
+| **Retries attempted** | ${RETRY_COUNT_CURRENT} of 3 |
+| **Status** | Dead-lettered — requires human intervention |
+
+**Failure class meaning:**
+- \`transient\` — retry exhausted (rate limit, network, or overload)
+- \`permanent\` — unrecoverable (auth, missing deps, bad config)
+- \`needs-info\` — issue is ambiguous or incomplete
+
+**To retry after resolving root cause:**
+\`\`\`
+/mgw:run ${ISSUE_NUMBER} --retry
+\`\`\`
+COMMENTEOF
+)
+
+gh issue comment ${ISSUE_NUMBER} --body "$FAIL_BODY" 2>/dev/null || true
+gh issue edit ${ISSUE_NUMBER} --add-label "pipeline-failed" 2>/dev/null || true
+gh label create "pipeline-failed" --description "Pipeline execution failed" --color "d73a4a" --force 2>/dev/null || true
+
+# Update pipeline_stage to failed
+node -e "
+const fs = require('fs'), path = require('path');
+const activeDir = path.join(process.cwd(), '.mgw', 'active');
+const files = fs.readdirSync(activeDir);
+const file = files.find(f => f.startsWith('${ISSUE_NUMBER}-') && f.endsWith('.json'));
+const filePath = path.join(activeDir, file);
+const state = JSON.parse(fs.readFileSync(filePath, 'utf-8'));
+state.pipeline_stage = 'failed';
+fs.writeFileSync(filePath, JSON.stringify(state, null, 2));
+" 2>/dev/null || true
+
+echo "MGW: Pipeline dead-lettered for #${ISSUE_NUMBER} (class: ${LAST_FAILURE_CLASS}). Use --retry after fixing root cause."
+exit 1
+fi
+```
+
+**Otherwise — post execution-complete comment:**
+
+After GSD execution completes successfully, post a structured update before creating the PR:
 
 ```bash
 COMMIT_COUNT=$(git rev-list ${DEFAULT_BRANCH}..HEAD --count 2>/dev/null || echo "0")
 TEST_STATUS=$(npm test 2>&1 >/dev/null && echo "passing" || echo "failing")
 FILE_CHANGES=$(git diff --stat ${DEFAULT_BRANCH}..HEAD 2>/dev/null | tail -1)
-EXEC_TIMESTAMP=$(node
+EXEC_TIMESTAMP=$(node -e "try{process.stdout.write(require('./lib/gsd-adapter.cjs').getTimestamp())}catch(e){process.stdout.write(new Date().toISOString().replace(/\\.\\d{3}Z$/,'Z'))}")
 ```
 
 Post the execution-complete comment directly (no sub-agent — guarantees it happens):
````
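The failure-comment template above distinguishes three classes produced by `classifyFailure()`, whose body is also not shown in these hunks. A sketch of plausible message-pattern classification, assuming the three class names from the template; the actual matching rules in `lib/retry.cjs` are unknown:

```javascript
// Sketch of an assumed classifyFailure(): map an error message onto the three
// classes named in the failure-comment template (transient / needs-info /
// permanent). The regex patterns are illustrative guesses, not package code.
function classifyFailure(error) {
  const msg = ((error && error.message) || '').toLowerCase();
  if (/rate limit|timeout|econnreset|overload|503/.test(msg)) {
    return { class: 'transient', reason: 'likely recoverable infrastructure error' };
  }
  if (/ambiguous|incomplete|missing requirement/.test(msg)) {
    return { class: 'needs-info', reason: 'issue lacks required detail' };
  }
  // Default to permanent: unknown errors should not burn the retry budget.
  return { class: 'permanent', reason: 'unrecognized error, do not retry' };
}

console.log(classifyFailure({ message: 'API rate limit exceeded' }).class); // transient
console.log(classifyFailure({ message: 'auth token invalid' }).class);      // permanent
```

Defaulting unknown errors to `permanent` matches the dead-letter-first posture of the pipeline: only failures positively identified as transient are retried.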
````diff
@@ -1192,9 +1457,38 @@ rmdir "${REPO_ROOT}/.worktrees/issue" 2>/dev/null
 rmdir "${REPO_ROOT}/.worktrees" 2>/dev/null
 ```
 
-
+Clear MGW labels at completion:
+```bash
+# Pass empty string — removes all mgw: labels without applying a new one
+remove_mgw_labels_and_apply ${ISSUE_NUMBER} ""
+```
+
+Post-completion label reconciliation:
 ```bash
-
+# Post-completion label reconciliation — verify no stray MGW labels remain
+LIVE_LABELS=$(gh issue view ${ISSUE_NUMBER} --json labels --jq '[.labels[].name]' 2>/dev/null || echo "[]")
+STRAY_MGW=$(echo "$LIVE_LABELS" | python3 -c "
+import json, sys
+labels = json.load(sys.stdin)
+stray = [l for l in labels if l.startswith('mgw:')]
+print('\n'.join(stray))
+" 2>/dev/null || echo "")
+
+if [ -n "$STRAY_MGW" ]; then
+  echo "MGW WARNING: unexpected MGW labels still on issue after completion: $STRAY_MGW" >&2
+fi
+
+# Sync live labels back to .mgw/active state file
+LIVE_LABELS_LIST=$(gh issue view ${ISSUE_NUMBER} --json labels --jq '[.labels[].name]' 2>/dev/null || echo "[]")
+# Update labels field in ${REPO_ROOT}/.mgw/active/${STATE_FILE} using python3 json patch:
+python3 -c "
+import json, sys
+path = sys.argv[1]
+live = json.loads(sys.argv[2])
+with open(path) as f: state = json.load(f)
+state['labels'] = live
+with open(path, 'w') as f: json.dump(state, f, indent=2)
+" "${REPO_ROOT}/.mgw/active/${STATE_FILE}" "$LIVE_LABELS_LIST" 2>/dev/null || true
 ```
 
 Extract one-liner summary for concise comment:
````
````diff
@@ -1205,7 +1499,7 @@ ONE_LINER=$(node ~/.claude/get-shit-done/bin/gsd-tools.cjs summary-extract "${gs
 Post structured PR-ready comment directly (no sub-agent — guarantees it happens):
 
 ```bash
-DONE_TIMESTAMP=$(node
+DONE_TIMESTAMP=$(node -e "try{process.stdout.write(require('./lib/gsd-adapter.cjs').getTimestamp())}catch(e){process.stdout.write(new Date().toISOString().replace(/\\.\\d{3}Z$/,'Z'))}")
 
 PR_READY_BODY=$(cat <<COMMENTEOF
 > **MGW** · \`pr-ready\` · ${DONE_TIMESTAMP}
````
````diff
@@ -1265,12 +1559,17 @@ Next:
 - [ ] Issue number validated and state loaded (or triage run first)
 - [ ] Pipeline refuses needs-info without --force
 - [ ] Pipeline refuses needs-security-review without --security-ack
+- [ ] --retry flag clears dead_letter state, removes pipeline-failed label, and re-queues issue
+- [ ] migrateProjectState() called at load time to ensure retry fields exist on active issue files
 - [ ] Isolated worktree created (.worktrees/ gitignored)
 - [ ] mgw:in-progress label applied during execution
 - [ ] Pre-flight comment check performed (new comments classified before execution)
 - [ ] mgw:blocked label applied when blocking comments detected
 - [ ] Work-starting comment posted on issue (route, scope, branch)
 - [ ] GSD pipeline executed in worktree (quick or milestone route)
+- [ ] Transient execution failures retried up to 3 times with exponential backoff
+- [ ] Failure comment includes failure_class from classifyFailure()
+- [ ] dead_letter=true set when retries exhausted or failure is permanent
 - [ ] New-milestone route triggers discussion phase with mgw:discussing label
 - [ ] Execution-complete comment posted on issue (commits, changes, test status)
 - [ ] PR created with summary, milestone context, testing procedures, cross-refs
````