ralph-teams 1.0.35 → 1.0.36
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/package.json +1 -1
- package/prompts/team-lead-policy.md +14 -13
- package/prompts/team-lead-runtime.md +6 -15
- package/ralph.sh +251 -86
package/package.json
CHANGED

package/prompts/team-lead-policy.md
CHANGED

@@ -16,13 +16,13 @@ You coordinate epic execution. For clearly easy, low-risk mechanical tasks, you
  
  ## Command Inference
  
- - Ralph does not centrally bootstrap project dependencies for the …
+ - Ralph does not centrally bootstrap project dependencies for the epic workspace. You must infer the right setup, build, and test commands from the repository itself.
  - Start with repository instructions: `AGENTS.md`, `README*`, contributor docs, and any project-local guidance files referenced there. Then prefer repository-defined task runners and scripts over language defaults: `Makefile`, `justfile`, `Taskfile.yml`, package scripts, wrapper scripts, or documented commands.
  - Then inspect ecosystem manifests such as `package.json`, `pyproject.toml`, `requirements.txt`, `Cargo.toml`, `go.mod`, `Gemfile`, `pom.xml`, `build.gradle*`, `mix.exs`, `Dockerfile`, and `docker-compose*.yml`.
  - Prefer explicit repository commands over generic ecosystem defaults even when the language is obvious.
- - Before the first Builder for an epic, prepare the epic …
- - Start by checking whether the source checkout path provided in the runtime prompt already contains reusable dependency or generated setup artifacts that the epic …
- - If safe reuse is possible, materialize that reuse inside the epic …
+ - Before the first Builder for an epic, prepare the epic workspace environment once. Do not make each Builder rediscover basic setup from scratch.
+ - Start by checking whether the source checkout path provided in the runtime prompt already contains reusable dependency or generated setup artifacts that the epic workspace can safely reuse.
+ - If safe reuse is possible, materialize that reuse inside the epic workspace and continue. If reuse is not possible, run the repository's native bootstrap or install command inside the epic workspace before delegating implementation.
  - Once the bootstrap, build, and test commands are established, pass the exact commands and any required environment-prep steps to every Builder for that epic.
  - Only use generic defaults when the repository is unambiguous.
  - If setup or verification remains ambiguous after inspection, do not guess wildly. Mark the attempt failed with a short concrete reason describing the ambiguity.
@@ -51,7 +51,7 @@ You coordinate epic execution. For clearly easy, low-risk mechanical tasks, you
  
  - Before starting a story, check the epic state file. If the story has `passes: true`, skip it.
  - For clearly easy, low-risk mechanical stories, you may implement directly in the Team Lead session when delegation overhead would exceed the work. Keep the change narrowly scoped and still run the required verification yourself before counting the story complete.
- - Before delegating the first story, ensure the epic …
+ - Before delegating the first story, ensure the epic workspace environment is actually runnable.
  - Before delegating any story, pass the exact bootstrap, build, and test commands already established for this epic to the Builder.
  - If an epic plan exists, give the Builder the story, acceptance criteria, relevant plan section, and especially the story's planned test design.
  - If a story planner was used, give the Builder the story planner output too.
@@ -102,13 +102,14 @@ You coordinate epic execution. For clearly easy, low-risk mechanical tasks, you
  
  ## Merge Completion
  
- - …
+ - The Team Lead always owns the first merge attempt before exiting.
  - Use the loop-branch name, repository-root path, epic branch name, and merge-result artifact path provided by the runtime prompt.
- - You may leave the epic …
+ - You may leave the epic workspace only for this final merge attempt. Keep all other work inside the epic workspace.
  - Attempt the merge on the repository root branch with `git merge <epic-branch> --no-commit --no-ff` so you can resolve conflicts before the final commit.
- - Before the merge attempt, check the repository root for uncommitted changes. If it is dirty, create a checkpoint commit so the merge can proceed cleanly.
- - If the merge is clean, commit it and write a merge-result artifact with `status: "merged"` and mode `clean` or `projected-prd` as appropriate.
- - If there are conflicts, resolve them yourself in this same session. Do not delegate to another merger role or any other teammate. If you resolve the conflicts, commit the merge and write `status: "merged"` with mode `conflict-resolved`.
- - If you cannot resolve …
- - …
- - …
+ - Before the Team Lead-owned merge attempt, check the repository root for uncommitted changes. If it is dirty, create a checkpoint commit so the merge can proceed cleanly.
+ - If the Team Lead-owned merge is clean, commit it and write a merge-result artifact with `status: "merged"` and mode `clean` or `projected-prd` as appropriate.
+ - If there are conflicts during a Team Lead-owned merge, resolve them yourself in this same session. Do not delegate to another merger role or any other teammate. If you resolve the conflicts, commit the merge and write `status: "merged"` with mode `conflict-resolved`.
+ - If you cannot resolve a Team Lead-owned merge safely, abort the merge, write a merge-result artifact with `status: "merge-failed"`, include a short concrete `details` string, and then exit.
+ - Write the merge-result artifact atomically to the exact path provided by the runtime prompt before you print the final DONE line.
+ - Ralph will verify the merge result after your session exits and may attempt a scripted fallback merge if the branch is still unmerged.
+ - Only print `DONE: X/Y stories passed` after the required final action for this runtime mode is finished.
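The merge flow these policy bullets describe (stage with `--no-commit --no-ff`, resolve or abort, then record a status) can be sketched in a throwaway repository; the branch names, file names, and git identity flags below are illustrative stand-ins, not part of the package:

```shell
# Minimal sketch of the Team Lead-owned merge attempt (assumed names).
set -eu
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.email=t@x -c user.name=t commit -q --allow-empty -m init
base=$(git -C "$repo" branch --show-current)

git -C "$repo" checkout -q -b epic/EPIC-001
echo change > "$repo/feature.txt"
git -C "$repo" add feature.txt
git -C "$repo" -c user.email=t@x -c user.name=t commit -q -m "epic work"
git -C "$repo" checkout -q "$base"

# --no-commit --no-ff stages the merge but stops before committing, so
# conflicts could be resolved here before the final commit.
if git -C "$repo" merge epic/EPIC-001 --no-commit --no-ff >/dev/null 2>&1; then
  git -C "$repo" -c user.email=t@x -c user.name=t commit -q -m "merge epic/EPIC-001"
  merge_status=merged
else
  git -C "$repo" merge --abort
  merge_status=merge-failed
fi
```

The abort path mirrors the `status: "merge-failed"` outcome the policy requires the Team Lead to record.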
package/prompts/team-lead-runtime.md
CHANGED

@@ -9,17 +9,17 @@ Do NOT modify files outside this directory, except for the epic state file below
  
  ## Source Checkout
  - Source checkout path: {{SOURCE_ROOT_DIR}}
- - You may inspect this source checkout read-only to understand repo-level setup and to reuse existing dependency or build artifacts when that is safer or faster than reinstalling inside the epic …
- - Do NOT modify the source checkout during normal implementation. Any reuse should materialize inside the epic …
+ - You may inspect this source checkout read-only to understand repo-level setup and to reuse existing dependency or build artifacts when that is safer or faster than reinstalling inside the epic workspace.
+ - Do NOT modify the source checkout during normal implementation. Any reuse should materialize inside the epic workspace, for example by creating a symlink there or copying a cacheable artifact into the workspace.
  
  ## Project Setup Strategy
  - Ralph does not preinstall dependencies or preselect build/test commands for this repo.
- - Before delegating implementation, establish the epic …
+ - Before delegating implementation, establish the epic workspace environment once for this epic: determine the setup, build, and test commands, then make the workspace runnable before the first Builder starts.
  - Check repo instructions first: 'AGENTS.md', 'README*', contributor docs, and project-local guidance files. Then check repo-defined task runners or scripts such as 'Makefile', 'justfile', 'Taskfile.yml', package scripts, wrapper scripts, or documented commands.
  - Then inspect ecosystem manifests such as 'package.json', 'pyproject.toml', 'requirements.txt', 'Cargo.toml', 'go.mod', 'Gemfile', 'pom.xml', 'build.gradle*', 'mix.exs', 'Dockerfile', and 'docker-compose*.yml'.
  - Prefer explicit repository commands over generic ecosystem defaults.
- - If the epic …
- - Prefer safe reuse from the source checkout when the repository structure and lockfiles make that reuse trustworthy; otherwise run the repository's native bootstrap/install step inside the epic …
+ - If the epic workspace is missing dependencies or other generated setup artifacts, first check whether the source checkout already has reusable artifacts that can be safely reused from the workspace.
+ - Prefer safe reuse from the source checkout when the repository structure and lockfiles make that reuse trustworthy; otherwise run the repository's native bootstrap/install step inside the epic workspace.
  - After you determine the correct bootstrap, build, and test commands, pass those exact commands to every Builder for this epic and tell Builders not to rediscover them unless the provided commands fail.
  - Only fall back to generic defaults when the repository is unambiguous.
  - If setup remains ambiguous after inspection, stop guessing and fail the story attempt with a short concrete reason describing what you found.
@@ -31,16 +31,7 @@ Do NOT modify files outside this directory, except for the epic state file below
  {{WORKTREE_PRD_PATH}}
  
  ## Merge Responsibility
- 
- - Loop branch to merge into: {{LOOP_BRANCH}}
- - Source epic branch: {{EPIC_BRANCH}}
- - Repository root for the merge attempt: {{ROOT_DIR}}
- - Write the final merge result artifact to: {{WORKTREE_MERGE_RESULT_FILE}}
- - The merge result artifact must be valid JSON with fields: epicId, status, mode, details, timestamp.
- - Allowed status values: merged, merge-failed.
- - Allowed mode values: clean, projected-prd, conflict-resolved, unknown.
- - When all stories pass, do not print DONE until after you have attempted the merge and written the merge result artifact.
- - During the merge attempt you may operate in the repository root path above even though normal implementation work stays inside the epic worktree.
+ {{MERGE_RESPONSIBILITY}}
  
  ## Epic
  {{EPIC_JSON}}
package/ralph.sh
CHANGED

@@ -686,8 +686,15 @@ ensure_repo_has_initial_commit
  
  mkdir -p "$RALPH_RUNTIME_DIR" "$PLANS_DIR" "$LOGS_DIR" "$STATE_DIR" "$WORKTREES_DIR"
  
+ loop_worktree_path() {
+   local loop_suffix="${LOOP_BRANCH#ralph/loop/}"
+   loop_suffix=$(printf '%s' "$loop_suffix" | tr '/:' '--')
+   echo "${WORKTREES_DIR}/loop-${loop_suffix}"
+ }
+ 
  ensure_loop_worktree_ready() {
-   local loop_worktree_path …
+   local loop_worktree_path
+   loop_worktree_path="$(loop_worktree_path)"
    local add_output=""
    local retry_output=""
    local existing_worktree=""
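The `tr '/:' '--'` step in the new `loop_worktree_path` maps each `/` and `:` in the branch suffix to `-`; the sanitization can be exercised in isolation (the sample branch name is made up):

```shell
# Reproduces loop_worktree_path's suffix sanitization in isolation:
# strip the "ralph/loop/" prefix, then turn '/' and ':' into '-'.
sanitize_loop_suffix() {
  branch="$1"
  suffix="${branch#ralph/loop/}"
  printf '%s' "$suffix" | tr '/:' '--'
}

suffix=$(sanitize_loop_suffix "ralph/loop/2024-05/run:7")
```

so the loop worktree lands at a flat, colon-free directory like `.worktrees/loop-2024-05-run-7` rather than a nested path.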
@@ -804,6 +811,10 @@ fail_on_pending_epic_git_state_mismatch() {
  fail_on_pending_epic_git_state_mismatch
  
  # --- Worktree Management ---
+ use_dedicated_epic_worktrees() {
+   [ -n "$PARALLEL" ] && [ "$PARALLEL" -gt 1 ]
+ }
+ 
  # Creates a git worktree at .worktrees/<epic_id> on a run-scoped branch under
  # the current loop branch name, rooted from the loop branch for this run.
  # rooted from the loop branch for this run.
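`use_dedicated_epic_worktrees` succeeds only when `PARALLEL` is set and greater than 1, which is what routes epics into per-epic worktrees versus the shared sequential workspace; sampled directly with illustrative values:

```shell
use_dedicated_epic_worktrees() {
  # Non-empty AND numerically > 1; the && short-circuit keeps the -gt
  # test from ever seeing an empty string.
  [ -n "$PARALLEL" ] && [ "$PARALLEL" -gt 1 ]
}

PARALLEL="";  use_dedicated_epic_worktrees && mode_unset=worktrees || mode_unset=sequential
PARALLEL="1"; use_dedicated_epic_worktrees && mode_one=worktrees   || mode_one=sequential
PARALLEL="4"; use_dedicated_epic_worktrees && mode_four=worktrees  || mode_four=sequential
```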
@@ -843,6 +854,56 @@ create_epic_worktree() {
    echo "$worktree_path"
  }
  
+ restore_loop_worktree_branch() {
+   local current_branch=""
+   current_branch=$(git -C "$LOOP_ROOT_DIR" branch --show-current 2>/dev/null || echo "")
+ 
+   if [ -z "$current_branch" ] || [ "$current_branch" = "$LOOP_BRANCH" ]; then
+     return 0
+   fi
+ 
+   local dirty_status=""
+   dirty_status=$(git -C "$LOOP_ROOT_DIR" status --porcelain 2>/dev/null || true)
+   if [ -n "$dirty_status" ]; then
+     git -C "$LOOP_ROOT_DIR" add -A
+     (
+       cd "$LOOP_ROOT_DIR"
+       unstage_runtime_artifacts
+     )
+     if git -C "$LOOP_ROOT_DIR" commit -m "chore: checkpoint ${current_branch} before returning to loop branch" >/dev/null 2>&1; then
+       echo " [${current_branch}] Auto-committed dirty workspace before returning to ${LOOP_BRANCH}"
+     fi
+   fi
+ 
+   git -C "$LOOP_ROOT_DIR" checkout "$LOOP_BRANCH" >/dev/null 2>&1
+ }
+ 
+ prepare_sequential_epic_workspace() {
+   local epic_id="$1"
+   local branch_name
+   branch_name=$(epic_branch_name "$epic_id")
+ 
+   restore_loop_worktree_branch
+ 
+   if git show-ref --verify --quiet "refs/heads/${branch_name}"; then
+     git -C "$LOOP_ROOT_DIR" checkout "$branch_name" >/dev/null 2>&1
+   else
+     git -C "$LOOP_ROOT_DIR" checkout -b "$branch_name" "$LOOP_BRANCH" >/dev/null 2>&1
+   fi
+ 
+   echo "$LOOP_ROOT_DIR"
+ }
+ 
+ prepare_epic_workspace() {
+   local epic_id="$1"
+ 
+   if use_dedicated_epic_worktrees; then
+     create_epic_worktree "$epic_id"
+   else
+     prepare_sequential_epic_workspace "$epic_id"
+   fi
+ }
+ 
  compute_file_checksum() {
    local file_path="$1"
    cksum "$file_path" | awk '{print $1 ":" $2}'
@@ -884,6 +945,16 @@ cleanup_epic_worktree() {
    git worktree remove "${WORKTREES_DIR}/${epic_id}" --force 2>/dev/null || true
  }
  
+ cleanup_epic_workspace() {
+   local epic_id="$1"
+ 
+   if use_dedicated_epic_worktrees; then
+     cleanup_epic_worktree "$epic_id"
+   else
+     restore_loop_worktree_branch
+   fi
+ }
+ 
  # Removes ALL .worktrees/* entries (used on EXIT).
  cleanup_all_worktrees() {
    for dir in "${WORKTREES_DIR}"/*/; do
@@ -1405,6 +1476,7 @@ render_team_lead_prompt() {
    TEAM_LEAD_TEMPLATE_EPIC_PLANNED="$EPIC_PLANNED" \
    TEAM_LEAD_TEMPLATE_WORKTREE_PLAN_EXISTS="$WORKTREE_PLAN_EXISTS" \
    TEAM_LEAD_TEMPLATE_PENDING_STORIES_JSON="$PENDING_STORIES_JSON" \
+   TEAM_LEAD_TEMPLATE_MERGE_RESPONSIBILITY="$MERGE_RESPONSIBILITY" \
    TEAM_LEAD_TEMPLATE_TEAM_LEAD_POLICY="$TEAM_LEAD_POLICY" \
    TEAM_LEAD_TEMPLATE_STORY_PLANNING_ENABLED="$STORY_PLANNING_ENABLED" \
    TEAM_LEAD_TEMPLATE_STORY_VALIDATION_ENABLED="$STORY_VALIDATION_ENABLED" \
@@ -1432,7 +1504,7 @@ NODE
  process_epic_result() {
    local epic_index="$1"
    local epic_id
-   epic_id=$(rjq read "$PRD_FILE" ".epics[$epic_index].id")
+   epic_id=$(rjq read "$PRD_FILE" ".epics[$epic_index].id" 2>/dev/null) || epic_id="EPIC-???"
    local merge_status=""
    merge_status=$(read_epic_merge_result_field "$epic_id" .status "" 2>/dev/null || true)
    local merge_mode=""
@@ -1441,48 +1513,36 @@ process_epic_result() {
    merge_details=$(read_epic_merge_result_field "$epic_id" .details "" 2>/dev/null || true)
  
    local total_stories
-   total_stories=$(rjq length "$PRD_FILE" ".epics[$epic_index].userStories")
+   total_stories=$(rjq length "$PRD_FILE" ".epics[$epic_index].userStories" 2>/dev/null) || total_stories=0
    local passed_stories
-   passed_stories=$(rjq count-where "$PRD_FILE" ".epics[$epic_index].userStories" "passes=true")
+   passed_stories=$(rjq count-where "$PRD_FILE" ".epics[$epic_index].userStories" "passes=true" 2>/dev/null) || passed_stories=0
  
    if [ "$passed_stories" -eq "$total_stories" ] && [ "$total_stories" -gt 0 ]; then
-     if [ "$merge_status" = "merge-failed" ]; then
-       echo ""
-       echo " [$epic_id] MERGE FAILED — all stories passed but merge did not complete"
-       [ -n "$merge_details" ] && echo " [$epic_id] Merge details: $merge_details"
-       rjq set "$PRD_FILE" ".epics[$epic_index].status" '"merge-failed"'
-       FAILED=$((FAILED + 1))
-       echo "[$epic_id] MERGE FAILED (${merge_details:-team lead merge failed}) — $(date)" >> "$PROGRESS_FILE"
-       return
-     fi
- 
-     if [ "$merge_status" != "merged" ]; then
-       echo ""
-       echo " [$epic_id] MERGE FAILED — all stories passed but Team Lead did not record a merge result"
-       rjq set "$PRD_FILE" ".epics[$epic_index].status" '"merge-failed"'
-       FAILED=$((FAILED + 1))
-       echo "[$epic_id] MERGE FAILED (missing team lead merge result artifact) — $(date)" >> "$PROGRESS_FILE"
-       return
-     fi
- 
      echo ""
      echo " [$epic_id] PASSED — all stories completed ($passed_stories/$total_stories)"
-     rjq set "$PRD_FILE" ".epics[$epic_index].status" '"completed"'
+     rjq set "$PRD_FILE" ".epics[$epic_index].status" '"completed"' 2>/dev/null || echo " [WARN] Failed to set epic $epic_index status" >&2
      COMPLETED=$((COMPLETED + 1))
      echo "[$epic_id] PASSED — $(date)" >> "$PROGRESS_FILE"
      if [ "$merge_status" = "merged" ]; then
        echo " [$epic_id] Merge successful (team lead, ${merge_mode:-unknown})"
        echo "[$epic_id] MERGED (team lead ${merge_mode:-unknown}) — $(date)" >> "$PROGRESS_FILE"
+     elif [ "$merge_status" = "merge-failed" ]; then
+       echo " [$epic_id] Team Lead merge did not complete — Ralph will verify and may retry on ${LOOP_BRANCH}"
+       [ -n "$merge_details" ] && echo " [$epic_id] Merge details: $merge_details"
+       echo "[$epic_id] PENDING MERGE VERIFICATION (${merge_details:-team lead merge failed}) — $(date)" >> "$PROGRESS_FILE"
+     else
+       echo " [$epic_id] Merge result missing — Ralph will verify and may retry on ${LOOP_BRANCH}"
+       echo "[$epic_id] PENDING MERGE VERIFICATION (missing team lead merge result artifact) — $(date)" >> "$PROGRESS_FILE"
      fi
    elif [ "$passed_stories" -gt 0 ]; then
      echo ""
      echo " [$epic_id] PARTIAL — $passed_stories/$total_stories stories passed"
-     rjq set "$PRD_FILE" ".epics[$epic_index].status" '"partial"'
+     rjq set "$PRD_FILE" ".epics[$epic_index].status" '"partial"' 2>/dev/null || echo " [WARN] Failed to set epic $epic_index status" >&2
      echo "[$epic_id] PARTIAL ($passed_stories/$total_stories) — $(date)" >> "$PROGRESS_FILE"
    else
      echo ""
      echo " [$epic_id] FAILED — 0/$total_stories stories passed"
-     rjq set "$PRD_FILE" ".epics[$epic_index].status" '"failed"'
+     rjq set "$PRD_FILE" ".epics[$epic_index].status" '"failed"' 2>/dev/null || echo " [WARN] Failed to set epic $epic_index status" >&2
      FAILED=$((FAILED + 1))
      echo "[$epic_id] FAILED (0/$total_stories) — $(date)" >> "$PROGRESS_FILE"
    fi
@@ -1532,6 +1592,31 @@ read_epic_merge_result_field() {
    rjq read "$result_file" "$field_path" "$default_value"
  }
  
+ write_epic_merge_result_file() {
+   local epic_id="$1"
+   local status="$2"
+   local mode="$3"
+   local details="$4"
+   local result_file="${STATE_DIR}/merge-${epic_id}.json"
+   local tmp_file
+   tmp_file=$(mktemp "${STATE_DIR}/.merge-${epic_id}.json.XXXXXX")
+ 
+   node - "$tmp_file" "$epic_id" "$status" "$mode" "$details" <<'NODE'
+ const fs = require('fs');
+ const [filePath, epicId, status, mode, details] = process.argv.slice(2);
+ const payload = {
+   epicId,
+   status,
+   mode,
+   details,
+   timestamp: new Date().toISOString(),
+ };
+ fs.writeFileSync(filePath, `${JSON.stringify(payload, null, 2)}\n`);
+ NODE
+ 
+   mv "$tmp_file" "$result_file"
+ }
+ 
  delete_epic_branch() {
    local epic_id="$1"
    local branch_name
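`write_epic_merge_result_file` gets its atomicity from writing a temp file in the same directory and renaming it into place; a reduced sketch with hypothetical paths (unlike the real function, this `printf` does no JSON escaping of its arguments):

```shell
state_dir=$(mktemp -d)
result_file="${state_dir}/merge-EPIC-001.json"

# Write to a sibling temp file first; readers polling result_file never
# observe a partially written artifact.
tmp_file=$(mktemp "${state_dir}/.merge-EPIC-001.json.XXXXXX")
printf '{"epicId":"%s","status":"%s","mode":"%s"}\n' \
  "EPIC-001" "merged" "clean" > "$tmp_file"
mv "$tmp_file" "$result_file"   # rename is atomic within one filesystem
```

Because the temp file lives in the same directory as the final path, the `mv` is a rename rather than a copy, which is what makes the publish step atomic.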
@@ -1564,7 +1649,7 @@ NODE
  
  read_epic_prd_passed() {
    local epic_index="$1"
-   rjq count-where "$PRD_FILE" ".epics[$epic_index].userStories" "passes=true"
+   rjq count-where "$PRD_FILE" ".epics[$epic_index].userStories" "passes=true" 2>/dev/null || echo 0
  }
  
  project_state_to_prd() {
@@ -1612,26 +1697,36 @@ NODE
  
  # spawn_epic_bg: create worktree, build prompt, and run team lead in the background.
  # Sets LAST_SPAWN_PID to the background PID.
- # Callers must wait on that PID and then call …
+ # Callers must wait on that PID and then call cleanup_epic_workspace + process_epic_result.
  spawn_epic_bg() {
    local EPIC_INDEX="$1"
    local EPIC_ID
-   EPIC_ID=$(rjq read "$PRD_FILE" ".epics[$EPIC_INDEX].id")
+   EPIC_ID=$(rjq read "$PRD_FILE" ".epics[$EPIC_INDEX].id" 2>/dev/null) || {
+     echo " [ERROR] Cannot read epic id for index $EPIC_INDEX — skipping spawn" >&2
+     LAST_SPAWN_PID=0
+     LAST_SPAWN_LOG="/dev/null"
+     return 1
+   }
    local EPIC_TITLE
-   EPIC_TITLE=$(rjq read "$PRD_FILE" ".epics[$EPIC_INDEX].title")
+   EPIC_TITLE=$(rjq read "$PRD_FILE" ".epics[$EPIC_INDEX].title" 2>/dev/null) || EPIC_TITLE="(unknown)"
    local EPIC_JSON
-   EPIC_JSON=$(rjq read "$PRD_FILE" ".epics[$EPIC_INDEX]")
+   EPIC_JSON=$(rjq read "$PRD_FILE" ".epics[$EPIC_INDEX]" 2>/dev/null) || {
+     echo " [ERROR] Cannot read epic JSON for $EPIC_ID — skipping spawn" >&2
+     LAST_SPAWN_PID=0
+     LAST_SPAWN_LOG="/dev/null"
+     return 1
+   }
    local EPIC_PLANNED
-   EPIC_PLANNED=$(rjq read "$PRD_FILE" ".epics[$EPIC_INDEX].planned" "false")
+   EPIC_PLANNED=$(rjq read "$PRD_FILE" ".epics[$EPIC_INDEX].planned" "false" 2>/dev/null) || EPIC_PLANNED="false"
    local PENDING_STORIES_JSON
-   PENDING_STORIES_JSON=$(rjq read "$PRD_FILE" ".epics[$EPIC_INDEX].userStories" | \
-     node -e 'const fs=require("fs"); const stories=JSON.parse(fs.readFileSync(0,"utf8")); process.stdout.write(JSON.stringify(stories.filter(s => s.passes !== true)));')
+   PENDING_STORIES_JSON=$(rjq read "$PRD_FILE" ".epics[$EPIC_INDEX].userStories" 2>/dev/null | \
+     node -e 'const fs=require("fs"); const stories=JSON.parse(fs.readFileSync(0,"utf8")); process.stdout.write(JSON.stringify(stories.filter(s => s.passes !== true)));') || PENDING_STORIES_JSON="[]"
  
    local EPIC_LOG="${LOGS_DIR}/epic-${EPIC_ID}-$(date +%s).log"
  
-   # Create …
+   # Create or reuse the workspace for this epic.
    local WORKTREE_PATH
-   WORKTREE_PATH=$( …
+   WORKTREE_PATH=$(prepare_epic_workspace "$EPIC_ID")
    local WORKTREE_ABS_PATH
    WORKTREE_ABS_PATH="$WORKTREE_PATH"
    local EPIC_BRANCH
@@ -1645,11 +1740,30 @@ spawn_epic_bg() {
    if [ -f "$WORKTREE_PLAN_FILE" ]; then
      WORKTREE_PLAN_EXISTS="true"
    fi
+   local MERGE_RESPONSIBILITY
+   MERGE_RESPONSIBILITY=$(cat <<EOF
+ - If all stories pass, this Team Lead session owns the merge attempt before exiting.
+ - Loop branch to merge into: ${LOOP_BRANCH}
+ - Source epic branch: ${EPIC_BRANCH}
+ - Repository root for the merge attempt: ${ROOT_DIR}
+ - Write the final merge result artifact to: ${WORKTREE_MERGE_RESULT_FILE}
+ - The merge result artifact must be valid JSON with fields: epicId, status, mode, details, timestamp.
+ - Allowed status values: merged, merge-failed.
+ - Allowed mode values: clean, projected-prd, conflict-resolved, unknown.
+ - When all stories pass, do not print DONE until after you have attempted the merge and written the merge result artifact.
+ - During the merge attempt you may operate in the repository root path above even though normal implementation work stays inside the epic workspace.
+ - If the merge does not complete in this session, Ralph will verify the result and may attempt a scripted fallback merge afterward.
+ EOF
+   )
  
    init_epic_state_file "$EPIC_ID" "$EPIC_INDEX"
    reset_epic_merge_result_file "$EPIC_ID"
  
- 
+   if use_dedicated_epic_worktrees; then
+     echo " Spawning [$EPIC_ID] in worktree $WORKTREE_PATH"
+   else
+     echo " Spawning [$EPIC_ID] on branch ${EPIC_BRANCH} in loop worktree $WORKTREE_PATH"
+   fi
  
    local TEAM_LEAD_POLICY
    TEAM_LEAD_POLICY="$(cat "$TEAM_LEAD_POLICY_FILE")"
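The `MERGE_RESPONSIBILITY` block above is assembled with an interpolating `cat <<EOF` inside command substitution; the pattern in miniature, with stand-in branch names:

```shell
LOOP_BRANCH="ralph/loop/demo"
EPIC_BRANCH="ralph/epic/EPIC-001"

# An unquoted EOF delimiter lets ${...} expand, so the rendered prompt
# section carries the concrete branch names for this run.
MERGE_SECTION=$(cat <<EOF
- Loop branch to merge into: ${LOOP_BRANCH}
- Source epic branch: ${EPIC_BRANCH}
EOF
)
```

Quoting the delimiter (`<<'EOF'`) would instead pass the `${...}` placeholders through literally, which is why the runtime template and this heredoc take opposite quoting choices.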
@@ -1703,7 +1817,22 @@ merge_wave() {
    local merge_failures=0
    local target_branch="$LOOP_BRANCH"
  
- …
+   local current_branch=""
+   current_branch=$(git branch --show-current 2>/dev/null || echo "")
+   if [ -n "$current_branch" ] && [ "$current_branch" != "$target_branch" ]; then
+     local current_branch_dirty=""
+     current_branch_dirty=$(git status --porcelain 2>/dev/null || true)
+     if [ -n "$current_branch_dirty" ]; then
+       git add -A
+       unstage_runtime_artifacts
+       if git commit -m "chore: checkpoint ${current_branch} before scripted merge" >/dev/null 2>&1; then
+         echo " [${current_branch}] Auto-committed dirty branch before scripted merge"
+         echo "[${current_branch}] AUTO-COMMIT before scripted merge — $(date)" >> "$PROGRESS_FILE"
+       fi
+     fi
+   fi
+ 
+   if [ "$current_branch" != "$target_branch" ]; then
      git checkout "$target_branch" >/dev/null 2>&1
    fi
  
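The checkpoint logic added to `merge_wave` hinges on `git status --porcelain` printing nothing for a clean tree; a self-contained sketch in a temp repository (file name and identity flags are illustrative):

```shell
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.email=r@x -c user.name=r commit -q --allow-empty -m init

echo pending > "$repo/notes.txt"   # untracked file makes the tree dirty
dirty=$(git -C "$repo" status --porcelain)

if [ -n "$dirty" ]; then
  # Mirror of the auto-checkpoint: stage everything and commit before
  # switching branches for the scripted merge.
  git -C "$repo" add -A
  git -C "$repo" -c user.email=r@x -c user.name=r commit -q -m "chore: checkpoint before scripted merge"
fi
clean_after=$(git -C "$repo" status --porcelain)
```

After the checkpoint commit, `--porcelain` output is empty again, so the later `git checkout "$target_branch"` cannot be blocked by local modifications.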
@@ -1743,6 +1872,7 @@ merge_wave() {
    if [ -n "$epic_index" ]; then
      rjq set "$PRD_FILE" ".epics[$epic_index].status" '"completed"'
    fi
+   write_epic_merge_result_file "$epic_id" "merged" "clean" "Scripted merge of ${branch_name} into ${target_branch}"
    echo " [$epic_id] Merge successful (clean)"
    echo "[$epic_id] MERGED (clean) — $(date)" >> "$PROGRESS_FILE"
  else
@@ -1760,6 +1890,7 @@ merge_wave() {
    echo "[$epic_id] MERGE FAILED (merge did not enter conflict state) — $(date)" >> "$PROGRESS_FILE"
    [ -n "$merge_error" ] && echo " [$epic_id] Working tree state:" && printf '%s\n' "$merge_error" | sed 's/^/ /'
    merge_failures=$((merge_failures + 1))
+   write_epic_merge_result_file "$epic_id" "merge-failed" "unknown" "Scripted merge did not enter a merge state"
  
    if [ -n "$epic_index" ]; then
      rjq set "$PRD_FILE" ".epics[$epic_index].status" '"merge-failed"'
@@ -1780,6 +1911,7 @@ merge_wave() {
    if [ -n "$epic_index" ]; then
      rjq set "$PRD_FILE" ".epics[$epic_index].status" '"completed"'
    fi
+   write_epic_merge_result_file "$epic_id" "merged" "projected-prd" "Scripted merge kept the projected prd.json from ${target_branch}"
    echo " [$epic_id] Merge successful (kept projected prd.json)"
    echo "[$epic_id] MERGED (projected prd.json) — $(date)" >> "$PROGRESS_FILE"
    git branch -d "${branch_name}" 2>/dev/null || true
@@ -1787,6 +1919,7 @@ merge_wave() {
    fi
  
    git merge --abort 2>/dev/null || true
+   write_epic_merge_result_file "$epic_id" "merge-failed" "unknown" "Scripted merge left conflicts in: ${conflicted_files}"
    echo " [$epic_id] Merge FAILED — conflicts remain; leaving ${branch_name} for a later clean retry"
    echo "[$epic_id] MERGE FAILED (conflicts remain, files: ${conflicted_files}) — $(date)" >> "$PROGRESS_FILE"
    merge_failures=$((merge_failures + 1))
@@ -2039,7 +2172,7 @@ WAVE_NUM=0
  # Count total stories across all epics for running estimate calculations
  TOTAL_STORIES=0
  for i in $(seq 0 $((TOTAL_EPICS - 1))); do
-   sc=$(rjq length "$PRD_FILE" ".epics[$i].userStories")
+   sc=$(rjq length "$PRD_FILE" ".epics[$i].userStories" 2>/dev/null) || sc=0
    TOTAL_STORIES=$((TOTAL_STORIES + sc))
  done
  
@@ -2048,20 +2181,20 @@ while true; do
    WAVE_EPICS=()
  
    for EPIC_INDEX in $(seq 0 $((TOTAL_EPICS - 1))); do
-     EPIC_STATUS=$(rjq read "$PRD_FILE" ".epics[$EPIC_INDEX].status" "pending")
+     EPIC_STATUS=$(rjq read "$PRD_FILE" ".epics[$EPIC_INDEX].status" "pending" 2>/dev/null) || EPIC_STATUS="pending"
      [ "$EPIC_STATUS" != "pending" ] && continue
  
      # Check all dependencies
      ALL_DEPS_MET=true
-     DEPS=$(rjq list "$PRD_FILE" ".epics[$EPIC_INDEX].dependsOn")
+     DEPS=$(rjq list "$PRD_FILE" ".epics[$EPIC_INDEX].dependsOn" 2>/dev/null) || DEPS=""
      for DEP in $DEPS; do
-       DEP_STATUS=$(rjq read-where "$PRD_FILE" .epics id "$DEP" status "pending")
+       DEP_STATUS=$(rjq read-where "$PRD_FILE" .epics id "$DEP" status "pending" 2>/dev/null) || DEP_STATUS="pending"
        if [ "$DEP_STATUS" = "failed" ] || [ "$DEP_STATUS" = "partial" ] || [ "$DEP_STATUS" = "merge-failed" ]; then
          # Dependency failed — skip this epic permanently
-         EPIC_ID=$(rjq read "$PRD_FILE" ".epics[$EPIC_INDEX].id")
-         EPIC_TITLE=$(rjq read "$PRD_FILE" ".epics[$EPIC_INDEX].title")
+         EPIC_ID=$(rjq read "$PRD_FILE" ".epics[$EPIC_INDEX].id" 2>/dev/null) || EPIC_ID="EPIC-???"
+         EPIC_TITLE=$(rjq read "$PRD_FILE" ".epics[$EPIC_INDEX].title" 2>/dev/null) || EPIC_TITLE="(unknown)"
          echo " [$EPIC_ID] $EPIC_TITLE — skipped (dependency $DEP has status: $DEP_STATUS)"
-         rjq set "$PRD_FILE" ".epics[$EPIC_INDEX].status" '"failed"'
+         rjq set "$PRD_FILE" ".epics[$EPIC_INDEX].status" '"failed"' 2>/dev/null || echo " [WARN] Failed to set epic $EPIC_INDEX status to failed" >&2
          FAILED=$((FAILED + 1))
          echo "[$EPIC_ID] SKIPPED (dependency $DEP failed) — $(date)" >> "$PROGRESS_FILE"
          ALL_DEPS_MET=false
@@ -2089,7 +2222,7 @@ while true; do
|
|
|
2089
2222
|
if [ -n "$PARALLEL" ]; then
|
|
2090
2223
|
echo " Wave $WAVE_NUM — ${#WAVE_EPICS[@]} epic(s), $PARALLEL at a time"
|
|
2091
2224
|
else
|
|
2092
|
-
REMAINING_EPICS=$(rjq count-where "$PRD_FILE" .epics "status=pending" --default pending)
|
|
2225
|
+
REMAINING_EPICS=$(rjq count-where "$PRD_FILE" .epics "status=pending" --default pending 2>/dev/null) || REMAINING_EPICS="?"
|
|
2093
2226
|
echo " ${REMAINING_EPICS} epic(s) remaining to run sequentially"
|
|
2094
2227
|
fi
|
|
2095
2228
|
echo "========================================================"
|
|
@@ -2102,7 +2235,7 @@ while true; do
|
|
|
2102
2235
|
echo "=== Run — $(date) ===" >> "$PROGRESS_FILE"
|
|
2103
2236
|
fi
|
|
2104
2237
|
for IDX in "${WAVE_EPICS[@]}"; do
|
|
2105
|
-
W_EPIC_ID=$(rjq read "$PRD_FILE" ".epics[$IDX].id")
|
|
2238
|
+
W_EPIC_ID=$(rjq read "$PRD_FILE" ".epics[$IDX].id" 2>/dev/null) || W_EPIC_ID="EPIC-???"
|
|
2106
2239
|
echo " $W_EPIC_ID" >> "$PROGRESS_FILE"
|
|
2107
2240
|
done
|
|
2108
2241
|
|
|
@@ -2166,7 +2299,10 @@ while true; do
|
|
|
2166
2299
|
now=$(date +%s)
|
|
2167
2300
|
for slot in "${!active_pids[@]}"; do
|
|
2168
2301
|
local finished_epic_id
|
|
2169
|
-
finished_epic_id=$(rjq read "$PRD_FILE" ".epics[${active_indices[$slot]}].id")
|
|
2302
|
+
finished_epic_id=$(rjq read "$PRD_FILE" ".epics[${active_indices[$slot]}].id" 2>/dev/null) || {
|
|
2303
|
+
echo " [WARN] rjq failed reading epic id for index ${active_indices[$slot]} — skipping slot" >&2
|
|
2304
|
+
continue
|
|
2305
|
+
}
|
|
2170
2306
|
local process_finished=false
|
|
2171
2307
|
|
|
2172
2308
|
emit_new_log_output "$finished_epic_id" "${active_logs[$slot]}" "${active_log_lines[$slot]:-0}"
|
|
@@ -2181,7 +2317,7 @@ while true; do
|
|
|
2181
2317
|
wait "${active_pids[$slot]}" 2>/dev/null || true
|
|
2182
2318
|
# Check progress and retry if possible
|
|
2183
2319
|
local _to_total _to_state_passed _to_prd_passed _to_passed
|
|
2184
|
-
_to_total=$(rjq length "$PRD_FILE" ".epics[${active_indices[$slot]}].userStories")
|
|
2320
|
+
_to_total=$(rjq length "$PRD_FILE" ".epics[${active_indices[$slot]}].userStories" 2>/dev/null) || _to_total=0
|
|
2185
2321
|
_to_state_passed=$(read_epic_state_passed "$finished_epic_id")
|
|
2186
2322
|
_to_prd_passed=$(read_epic_prd_passed "${active_indices[$slot]}")
|
|
2187
2323
|
_to_passed="$_to_state_passed"
|
|
@@ -2193,16 +2329,21 @@ while true; do
|
|
|
2193
2329
|
echo " [$finished_epic_id] Timeout with $_to_passed/$_to_total passed — retry $((_to_retry_count + 1))/$MAX_CRASH_RETRIES"
|
|
2194
2330
|
echo "[$finished_epic_id] TIMEOUT RETRY $((_to_retry_count + 1))/$MAX_CRASH_RETRIES ($_to_passed/$_to_total passed) — $(date)" >> "$PROGRESS_FILE"
|
|
2195
2331
|
project_state_to_prd "$finished_epic_id" || true
|
|
2196
|
-
|
|
2197
|
-
spawn_epic_bg "${active_indices[$slot]}"
|
|
2198
|
-
|
|
2199
|
-
|
|
2200
|
-
|
|
2201
|
-
|
|
2202
|
-
|
|
2332
|
+
cleanup_epic_workspace "$finished_epic_id"
|
|
2333
|
+
if spawn_epic_bg "${active_indices[$slot]}"; then
|
|
2334
|
+
active_pids[$slot]="$LAST_SPAWN_PID"
|
|
2335
|
+
active_start_times[$slot]="$(date +%s)"
|
|
2336
|
+
active_logs[$slot]="$LAST_SPAWN_LOG"
|
|
2337
|
+
active_log_lines[$slot]="0"
|
|
2338
|
+
continue
|
|
2339
|
+
fi
|
|
2340
|
+
echo " [$finished_epic_id] Retry spawn failed — treating as timeout failure" >&2
|
|
2203
2341
|
fi
|
|
2204
2342
|
project_state_to_prd "$finished_epic_id" || true
|
|
2205
|
-
|
|
2343
|
+
cleanup_epic_workspace "$finished_epic_id"
|
|
2344
|
+
if ! use_dedicated_epic_worktrees; then
|
|
2345
|
+
project_state_to_prd "$finished_epic_id" || true
|
|
2346
|
+
fi
|
|
2206
2347
|
echo "TIMEOUT: Epic exceeded ${EPIC_TIMEOUT}s limit" >> "${active_logs[$slot]}"
|
|
2207
2348
|
# Log timeout-specific message before generic result
|
|
2208
2349
|
echo "[$finished_epic_id] FAILED (epic timeout after ${EPIC_TIMEOUT}s) — $(date)" >> "$PROGRESS_FILE"
|
|
@@ -2239,7 +2380,7 @@ while true; do
|
|
|
2239
2380
|
wait "${active_pids[$slot]}" 2>/dev/null || true
|
|
2240
2381
|
# Check progress and retry if possible
|
|
2241
2382
|
local _it_total _it_state_passed _it_prd_passed _it_passed
|
|
2242
|
-
_it_total=$(rjq length "$PRD_FILE" ".epics[${active_indices[$slot]}].userStories")
|
|
2383
|
+
_it_total=$(rjq length "$PRD_FILE" ".epics[${active_indices[$slot]}].userStories" 2>/dev/null) || _it_total=0
|
|
2243
2384
|
_it_state_passed=$(read_epic_state_passed "$finished_epic_id")
|
|
2244
2385
|
_it_prd_passed=$(read_epic_prd_passed "${active_indices[$slot]}")
|
|
2245
2386
|
_it_passed="$_it_state_passed"
|
|
@@ -2251,16 +2392,21 @@ while true; do
|
|
|
2251
2392
|
echo " [$finished_epic_id] Idle timeout with $_it_passed/$_it_total passed — retry $((_it_retry_count + 1))/$MAX_CRASH_RETRIES"
|
|
2252
2393
|
echo "[$finished_epic_id] IDLE RETRY $((_it_retry_count + 1))/$MAX_CRASH_RETRIES ($_it_passed/$_it_total passed) — $(date)" >> "$PROGRESS_FILE"
|
|
2253
2394
|
project_state_to_prd "$finished_epic_id" || true
|
|
2254
|
-
|
|
2255
|
-
spawn_epic_bg "${active_indices[$slot]}"
|
|
2256
|
-
|
|
2257
|
-
|
|
2258
|
-
|
|
2259
|
-
|
|
2260
|
-
|
|
2395
|
+
cleanup_epic_workspace "$finished_epic_id"
|
|
2396
|
+
if spawn_epic_bg "${active_indices[$slot]}"; then
|
|
2397
|
+
active_pids[$slot]="$LAST_SPAWN_PID"
|
|
2398
|
+
active_start_times[$slot]="$(date +%s)"
|
|
2399
|
+
active_logs[$slot]="$LAST_SPAWN_LOG"
|
|
2400
|
+
active_log_lines[$slot]="0"
|
|
2401
|
+
continue
|
|
2402
|
+
fi
|
|
2403
|
+
echo " [$finished_epic_id] Retry spawn failed — treating as idle timeout failure" >&2
|
|
2261
2404
|
fi
|
|
2262
2405
|
project_state_to_prd "$finished_epic_id" || true
|
|
2263
|
-
|
|
2406
|
+
cleanup_epic_workspace "$finished_epic_id"
|
|
2407
|
+
if ! use_dedicated_epic_worktrees; then
|
|
2408
|
+
project_state_to_prd "$finished_epic_id" || true
|
|
2409
|
+
fi
|
|
2264
2410
|
# Log idle-timeout-specific message before generic result
|
|
2265
2411
|
echo "[$finished_epic_id] FAILED (idle timeout — no output for ${IDLE_TIMEOUT}s) — $(date)" >> "$PROGRESS_FILE"
|
|
2266
2412
|
process_epic_result "${active_indices[$slot]}"
|
|
@@ -2283,7 +2429,7 @@ while true; do
|
|
|
2283
2429
|
fi
|
|
2284
2430
|
|
|
2285
2431
|
local total_s passed_s_state passed_s_prd passed_s
|
|
2286
|
-
total_s=$(rjq length "$PRD_FILE" ".epics[${active_indices[$slot]}].userStories")
|
|
2432
|
+
total_s=$(rjq length "$PRD_FILE" ".epics[${active_indices[$slot]}].userStories" 2>/dev/null) || total_s=0
|
|
2287
2433
|
passed_s_state=$(read_epic_state_passed "$finished_epic_id")
|
|
2288
2434
|
passed_s_prd=$(read_epic_prd_passed "${active_indices[$slot]}")
|
|
2289
2435
|
passed_s="$passed_s_state"
|
|
@@ -2300,6 +2446,13 @@ while true; do
|
|
|
2300
2446
|
if [ "$all_done" = true ] && [ "$merge_recorded" = true ]; then
|
|
2301
2447
|
epic_flow_complete=true
|
|
2302
2448
|
fi
|
|
2449
|
+
local process_exit_complete=false
|
|
2450
|
+
if [ "$all_done" = true ]; then
|
|
2451
|
+
# Once all stories pass, accept a normal Team Lead exit even if the
|
|
2452
|
+
# merge result is still missing. Ralph will verify the merge result
|
|
2453
|
+
# and attempt a scripted fallback merge when needed.
|
|
2454
|
+
process_exit_complete=true
|
|
2455
|
+
fi
|
|
2303
2456
|
|
|
2304
2457
|
if [ "$process_finished" = true ] || [ "$epic_flow_complete" = true ]; then
|
|
2305
2458
|
# Only terminate an in-flight session early after the Team Lead has
|
|
@@ -2310,10 +2463,10 @@ while true; do
|
|
|
2310
2463
|
fi
|
|
2311
2464
|
wait "${active_pids[$slot]}" 2>/dev/null || true
|
|
2312
2465
|
|
|
2313
|
-
# If the process exited before the
|
|
2314
|
-
#
|
|
2466
|
+
# If the process exited before the Team Lead session reached the
|
|
2467
|
+
# expected completion point for this run mode, consider it a crash
|
|
2315
2468
|
# and retry when possible.
|
|
2316
|
-
if [ "$
|
|
2469
|
+
if [ "$process_exit_complete" = false ]; then
|
|
2317
2470
|
local retry_count
|
|
2318
2471
|
retry_count="$(get_crash_retry_count "$finished_epic_id")"
|
|
2319
2472
|
if [ "$retry_count" -lt "$MAX_CRASH_RETRIES" ]; then
|
|
@@ -2322,14 +2475,16 @@ while true; do
|
|
|
2322
2475
|
echo " [$finished_epic_id] CRASH DETECTED ($passed_s/$total_s passed) — retry $((retry_count + 1))/$MAX_CRASH_RETRIES"
|
|
2323
2476
|
echo "[$finished_epic_id] CRASH RETRY $((retry_count + 1))/$MAX_CRASH_RETRIES ($passed_s/$total_s passed so far) — $(date)" >> "$PROGRESS_FILE"
|
|
2324
2477
|
project_state_to_prd "$finished_epic_id" || true
|
|
2325
|
-
|
|
2326
|
-
spawn_epic_bg "${active_indices[$slot]}"
|
|
2327
|
-
|
|
2328
|
-
|
|
2329
|
-
|
|
2330
|
-
|
|
2331
|
-
|
|
2332
|
-
|
|
2478
|
+
cleanup_epic_workspace "$finished_epic_id"
|
|
2479
|
+
if spawn_epic_bg "${active_indices[$slot]}"; then
|
|
2480
|
+
active_pids[$slot]="$LAST_SPAWN_PID"
|
|
2481
|
+
active_start_times[$slot]="$(date +%s)"
|
|
2482
|
+
active_logs[$slot]="$LAST_SPAWN_LOG"
|
|
2483
|
+
active_log_lines[$slot]="0"
|
|
2484
|
+
# Don't free slot — respawned epic reuses it. Continue polling.
|
|
2485
|
+
continue
|
|
2486
|
+
fi
|
|
2487
|
+
echo " [$finished_epic_id] Retry spawn failed — treating as crash failure" >&2
|
|
2333
2488
|
fi
|
|
2334
2489
|
echo ""
|
|
2335
2490
|
echo " [$finished_epic_id] CRASH — retries exhausted ($passed_s/$total_s stories passed)"
|
|
@@ -2349,11 +2504,17 @@ while true; do
|
|
|
2349
2504
|
process_epic_result "${active_indices[$slot]}"
|
|
2350
2505
|
# Track completed epics for merge_wave only when the Team Lead did not already merge.
|
|
2351
2506
|
local post_status
|
|
2352
|
-
post_status=$(rjq read "$PRD_FILE" ".epics[${active_indices[$slot]}].status" "pending")
|
|
2353
|
-
if [ "$post_status" = "completed" ] && [ "$merge_status" != "merged" ]; then
|
|
2507
|
+
post_status=$(rjq read "$PRD_FILE" ".epics[${active_indices[$slot]}].status" "pending" 2>/dev/null) || post_status="pending"
|
|
2508
|
+
if [ "$post_status" = "completed" ] && [ "$merge_status" != "merged" ] && use_dedicated_epic_worktrees; then
|
|
2354
2509
|
wave_completed_ids+=("$finished_epic_id")
|
|
2355
2510
|
fi
|
|
2356
|
-
|
|
2511
|
+
|
|
2512
|
+
if [ "$post_status" = "completed" ] && [ "$merge_status" != "merged" ] && ! use_dedicated_epic_worktrees; then
|
|
2513
|
+
merge_wave "$finished_epic_id" || true
|
|
2514
|
+
merge_status=$(read_epic_merge_result_field "$finished_epic_id" .status "" 2>/dev/null || true)
|
|
2515
|
+
fi
|
|
2516
|
+
|
|
2517
|
+
cleanup_epic_workspace "$finished_epic_id"
|
|
2357
2518
|
if [ "$merge_status" = "merged" ]; then
|
|
2358
2519
|
delete_epic_branch "$finished_epic_id"
|
|
2359
2520
|
fi
|
|
@@ -2386,10 +2547,10 @@ while true; do
|
|
|
2386
2547
|
break
|
|
2387
2548
|
fi
|
|
2388
2549
|
|
|
2389
|
-
EPIC_STATUS=$(rjq read "$PRD_FILE" ".epics[$EPIC_INDEX].status" "pending")
|
|
2550
|
+
EPIC_STATUS=$(rjq read "$PRD_FILE" ".epics[$EPIC_INDEX].status" "pending" 2>/dev/null) || EPIC_STATUS="pending"
|
|
2390
2551
|
if [ "$EPIC_STATUS" = "completed" ]; then
|
|
2391
|
-
local_epic_id=$(rjq read "$PRD_FILE" ".epics[$EPIC_INDEX].id")
|
|
2392
|
-
local_epic_title=$(rjq read "$PRD_FILE" ".epics[$EPIC_INDEX].title")
|
|
2552
|
+
local_epic_id=$(rjq read "$PRD_FILE" ".epics[$EPIC_INDEX].id" 2>/dev/null) || local_epic_id="EPIC-???"
|
|
2553
|
+
local_epic_title=$(rjq read "$PRD_FILE" ".epics[$EPIC_INDEX].title" 2>/dev/null) || local_epic_title="(unknown)"
|
|
2393
2554
|
echo " [$local_epic_id] $local_epic_title — already completed, skipping"
|
|
2394
2555
|
COMPLETED=$((COMPLETED + 1))
|
|
2395
2556
|
continue
|
|
@@ -2401,7 +2562,11 @@ while true; do
|
|
|
2401
2562
|
done
|
|
2402
2563
|
|
|
2403
2564
|
PROCESSED=$((PROCESSED + 1))
|
|
2404
|
-
spawn_epic_bg "$EPIC_INDEX"
|
|
2565
|
+
if ! spawn_epic_bg "$EPIC_INDEX"; then
|
|
2566
|
+
echo " [WARN] Failed to spawn epic at index $EPIC_INDEX — skipping" >&2
|
|
2567
|
+
FAILED=$((FAILED + 1))
|
|
2568
|
+
continue
|
|
2569
|
+
fi
|
|
2405
2570
|
active_pids+=("$LAST_SPAWN_PID")
|
|
2406
2571
|
active_indices+=("$EPIC_INDEX")
|
|
2407
2572
|
active_start_times+=("$(date +%s)")
|
|
@@ -2417,7 +2582,7 @@ while true; do
|
|
|
2417
2582
|
|
|
2418
2583
|
# Merge completed epic branches back to starting branch
|
|
2419
2584
|
if [ ${#wave_completed_ids[@]} -gt 0 ]; then
|
|
2420
|
-
merge_wave "${wave_completed_ids[@]}"
|
|
2585
|
+
merge_wave "${wave_completed_ids[@]}" || true
|
|
2421
2586
|
fi
|
|
2422
2587
|
|
|
2423
2588
|
# If we hit --max-epics mid-wave, stop processing further waves
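The recurring change in this release is hedging every `rjq` read with `2>/dev/null` plus a fallback assignment. A minimal sketch of that pattern, with a hypothetical `flaky_read` standing in for `rjq`:

```shell
#!/usr/bin/env bash
# Sketch of the fallback pattern used throughout this diff: capture a
# command's output, and on failure substitute a default instead of
# letting the error propagate under `set -e`. "flaky_read" is a
# hypothetical stand-in for an rjq call, not part of ralph.sh.
set -euo pipefail

flaky_read() {
  # Simulate a reader that fails when the file is missing.
  [ -f "$1" ] && cat "$1"
}

demo() {
  local status
  # Declare first, assign second: "local status=$(cmd)" would mask the
  # command's exit code with local's own (always 0) exit status.
  status=$(flaky_read /nonexistent-path 2>/dev/null) || status="pending"
  echo "$status"
}

demo   # prints "pending"
```

The `|| status="pending"` also keeps `set -e` from aborting the loop on a single bad read, because a command on the left of `||` is exempt from errexit.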
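The retry branches above respawn a crashed epic into the same slot of the `active_pids`/`active_logs` arrays rather than freeing it. A minimal sketch of that slot-reuse idea, with a hypothetical `run_epic` standing in for `spawn_epic_bg`:

```shell
#!/usr/bin/env bash
# Sketch of the slot-reuse respawn pattern: parallel workers are tracked
# in indexed arrays, and a failed worker is respawned into the same slot.
# "run_epic" is a hypothetical stand-in for spawn_epic_bg.
set -u

declare -a active_pids=() active_logs=()

run_epic() {
  # Launch a background worker and record its PID, like spawn_epic_bg
  # does via LAST_SPAWN_PID.
  sleep 0.1 &
  LAST_SPAWN_PID=$!
}

run_epic
active_pids[0]=$LAST_SPAWN_PID
active_logs[0]="/tmp/epic0.log"

wait "${active_pids[0]}" || true   # worker finished (or crashed)

# On a crash retry, respawn into the same slot, overwriting its entries
# so the polling loop keeps watching slot 0 without reallocating it.
run_epic
active_pids[0]=$LAST_SPAWN_PID
active_logs[0]="/tmp/epic0.log"
wait "${active_pids[0]}"
```

Reusing the slot keeps the poll loop's bookkeeping simple: the set of slot indices never changes mid-wave, only the PID, log path, and start time stored in each one.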
|