@kbediako/codex-orchestrator 0.1.17 → 0.1.19

This diff shows the changes between publicly released versions of this package as published to a supported registry. It is provided for informational purposes only and reflects the package contents as they appear in the public registry.
package/README.md CHANGED
@@ -55,15 +55,16 @@ Use this when you want Codex to drive work inside another repo with the CO defau
  ```bash
  codex mcp add delegation -- codex-orchestrator delegate-server --repo /path/to/repo
  ```
- 3. Optional (collab JSONL parity): set up a CO-managed Codex CLI:
+ 3. Optional (managed/pinned CLI path): set up a CO-managed Codex CLI:
  ```bash
  codex-orchestrator codex setup
  ```
+ Use this when you want a pinned binary, build-from-source behavior, or a custom fork. Stock `codex` works for default flows.
  4. Optional (fast refresh helper for downstream users):
  ```bash
- scripts/codex-cli-refresh.sh --repo /path/to/codex
+ scripts/codex-cli-refresh.sh --repo /path/to/codex --align-only
  ```
- Repo-only helper (not included in npm package). Set `CODEX_REPO` or `CODEX_CLI_SOURCE` to avoid passing `--repo` each time.
+ Repo-only helper (not included in npm package). Add `--no-push` when you only want local alignment and do not want to update `origin/main`. To refresh the CO-managed CLI, run a separate command with `--force-rebuild` (without `--align-only`). Set `CODEX_REPO` or `CODEX_CLI_SOURCE` to avoid passing `--repo` each time.
 
  ## Delegation MCP server
 
@@ -88,7 +89,7 @@ Delegation guard profile:
  ## Delegation + RLM flow
 
  RLM (Recursive Language Model) is the long-horizon loop used by the `rlm` pipeline (`codex-orchestrator rlm "<goal>"` or `codex-orchestrator start rlm --goal "<goal>"`). Delegated runs only enter RLM when the child is launched with the `rlm` pipeline (or the rlm runner directly). In auto mode it resolves to symbolic when delegated, when `RLM_CONTEXT_PATH` is set, or when the context exceeds `RLM_SYMBOLIC_MIN_BYTES`; otherwise it stays iterative. The runner writes state to `.runs/<task-id>/cli/<run-id>/rlm/state.json` and stops when the validator passes or budgets are exhausted.
- Symbolic subcalls can optionally use collab tools when `RLM_SYMBOLIC_COLLAB=1` (requires a collab-enabled Codex CLI via `codex-orchestrator codex setup`). Collab tool calls parsed from `codex exec --json --enable collab` are stored in `manifest.collab_tool_calls` (bounded by `CODEX_ORCHESTRATOR_COLLAB_MAX_EVENTS`, set to `0` to disable).
+ Symbolic subcalls can optionally use collab tools when `RLM_SYMBOLIC_COLLAB=1` (requires `collab=true` in `codex features list`). Collab tool calls parsed from `codex exec --json --enable collab` are stored in `manifest.collab_tool_calls` (bounded by `CODEX_ORCHESTRATOR_COLLAB_MAX_EVENTS`, set to `0` to disable). `codex-orchestrator codex setup` remains available when you want a managed/pinned CLI path.
 
  ### Delegation flow
  ```mermaid
@@ -164,9 +165,9 @@ codex-orchestrator devtools setup
  - `codex-orchestrator plan <pipeline>` — preview pipeline stages.
  - `codex-orchestrator exec <cmd>` — run a one-off command with the exec runtime.
  - `codex-orchestrator init codex` — install starter templates (`mcp-client.json`, `AGENTS.md`) into a repo.
- - `codex-orchestrator init codex --codex-cli --yes --codex-source <path>` — also provision a CO-managed Codex CLI binary (build-from-source default; set `CODEX_CLI_SOURCE` to avoid passing `--codex-source` every time).
+ - `codex-orchestrator init codex --codex-cli --yes --codex-source <path>` — optionally provision a CO-managed Codex CLI binary (build-from-source default; set `CODEX_CLI_SOURCE` to avoid passing `--codex-source` every time).
  - `codex-orchestrator init codex --codex-cli --yes --codex-download-url <url> --codex-download-sha256 <sha>` — opt-in to a prebuilt Codex CLI download.
- - `codex-orchestrator codex setup` — plan/apply a CO-managed Codex CLI install (for collab JSONL parity; use `--download-url` + `--download-sha256` for prebuilts).
+ - `codex-orchestrator codex setup` — plan/apply a CO-managed Codex CLI install (optional managed/pinned path; use `--download-url` + `--download-sha256` for prebuilts).
  - `codex-orchestrator self-check --format json` — JSON health payload.
  - `codex-orchestrator mcp serve` — Codex MCP stdio server.
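
+ The auto-mode resolution rule described in the RLM section above can be sketched as a small decision function. This is an illustrative sketch only; `resolveRlmMode` and its option names are hypothetical and not part of the package:

```javascript
// Sketch of the documented auto-mode rule: resolve to symbolic when
// delegated, when RLM_CONTEXT_PATH is set, or when the context exceeds
// RLM_SYMBOLIC_MIN_BYTES; otherwise stay iterative.
function resolveRlmMode({ delegated, env, contextBytes }) {
  // With no threshold configured, size alone never triggers symbolic mode.
  const minBytes = Number(env.RLM_SYMBOLIC_MIN_BYTES ?? Infinity);
  if (delegated) return 'symbolic';
  if (env.RLM_CONTEXT_PATH) return 'symbolic';
  if (contextBytes > minBytes) return 'symbolic';
  return 'iterative';
}

console.log(resolveRlmMode({ delegated: false, env: {}, contextBytes: 1024 })); // iterative
```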
 
@@ -7,6 +7,7 @@ import { CodexOrchestrator } from '../orchestrator/src/cli/orchestrator.js';
  import { formatPlanPreview } from '../orchestrator/src/cli/utils/planFormatter.js';
  import { executeExecCommand } from '../orchestrator/src/cli/exec/command.js';
  import { resolveEnvironmentPaths } from '../scripts/lib/run-manifests.js';
+ import { runPrWatchMerge } from '../scripts/lib/pr-watch-merge.js';
  import { normalizeEnvironmentPaths, sanitizeTaskId } from '../orchestrator/src/cli/run/environment.js';
  import { RunEventEmitter } from '../orchestrator/src/cli/events/runEvents.js';
  import { evaluateInteractiveGate } from '../orchestrator/src/cli/utils/interactive.js';
@@ -77,6 +78,9 @@ async function main() {
          case 'mcp':
              await handleMcp(args);
              break;
+         case 'pr':
+             await handlePr(args);
+             break;
          case 'delegate-server':
          case 'delegation-server':
              await handleDelegationServer(args);
@@ -627,6 +631,20 @@ async function handleMcp(rawArgs) {
      const dryRun = Boolean(flags['dry-run']);
      await serveMcp({ repoRoot, dryRun, extraArgs: positionals });
  }
+ async function handlePr(rawArgs) {
+     if (rawArgs.length === 0 || rawArgs[0] === '--help' || rawArgs[0] === '-h' || rawArgs[0] === 'help') {
+         printPrHelp();
+         return;
+     }
+     const [subcommand, ...subcommandArgs] = rawArgs;
+     if (subcommand !== 'watch-merge') {
+         throw new Error(`Unknown pr subcommand: ${subcommand}`);
+     }
+     const exitCode = await runPrWatchMerge(subcommandArgs, { usage: 'codex-orchestrator pr watch-merge' });
+     if (exitCode !== 0) {
+         process.exitCode = exitCode;
+     }
+ }
  async function handleDelegationServer(rawArgs) {
      const { positionals, flags } = parseArgs(rawArgs);
      if (isHelpRequest(positionals, flags)) {
@@ -931,6 +949,9 @@ Commands:
      --codex-home <path>       Override the target Codex home directory.
      --format json             Emit machine-readable output.
    mcp serve [--repo <path>] [--dry-run] [-- <extra args>]
+   pr watch-merge [options]
+     Monitor PR checks/reviews with polling and optional auto-merge after a quiet window.
+     Use \`codex-orchestrator pr watch-merge --help\` for full options.
    delegate-server             Run the delegation MCP server (stdio).
      --repo <path>             Repo root for config + manifests (default cwd).
      --mode <full|question_only>  Limit tool surface for child runs.
@@ -989,3 +1010,15 @@ Options:
    --help                      Show this message.
  `);
  }
+ function printPrHelp() {
+     console.log(`Usage: codex-orchestrator pr <subcommand> [options]
+
+ Subcommands:
+   watch-merge    Monitor PR checks/reviews with polling and optional auto-merge.
+                  Supports PR_MONITOR_* env vars and standard flags (see: pr watch-merge --help).
+
+ Examples:
+   codex-orchestrator pr watch-merge --pr 211 --dry-run --quiet-minutes 10
+   codex-orchestrator pr watch-merge --pr 211 --auto-merge --merge-method squash
+ `);
+ }
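
+ For readers skimming the new `pr` wiring, the dispatch above boils down to a help fast-path plus a single known subcommand. A standalone sketch of that shape (`routePr` is illustrative, not the shipped API):

```javascript
// Minimal subcommand router mirroring the handlePr shape: help fast-path,
// dispatch to a known handler, error on anything else.
function routePr(rawArgs, handlers) {
  if (rawArgs.length === 0 || ['--help', '-h', 'help'].includes(rawArgs[0])) {
    return { action: 'help' };
  }
  const [subcommand, ...rest] = rawArgs;
  const handler = handlers[subcommand];
  if (!handler) {
    throw new Error(`Unknown pr subcommand: ${subcommand}`);
  }
  return { action: subcommand, result: handler(rest) };
}

const out = routePr(['watch-merge', '--pr', '211'], {
  'watch-merge': (args) => args.length
});
console.log(out); // { action: 'watch-merge', result: 2 }
```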
@@ -62,6 +62,6 @@ export function formatInitSummary(result, cwd) {
      }
      lines.push('Next steps (recommended):');
      lines.push(` - codex mcp add delegation -- codex-orchestrator delegate-server --repo ${cwd}`);
-     lines.push(' - codex-orchestrator codex setup # optional: CO-managed Codex CLI for collab JSONL');
+     lines.push(' - codex-orchestrator codex setup # optional: managed/pinned Codex CLI (stock CLI works by default)');
      return lines;
  }
@@ -613,6 +613,33 @@ function formatDeliberationReason(reason) {
          return 'cadence';
      }
  }
+ function attachDeliberationArtifactPaths(error, artifactPaths) {
+     const normalized = error instanceof Error ? error : new Error(String(error));
+     if (artifactPaths) {
+         normalized.artifactPaths = artifactPaths;
+     }
+     return normalized;
+ }
+ function extractDeliberationArtifactPaths(error) {
+     if (!error || typeof error !== 'object') {
+         return undefined;
+     }
+     const rawPaths = error.artifactPaths;
+     if (!rawPaths || typeof rawPaths !== 'object') {
+         return undefined;
+     }
+     const typed = rawPaths;
+     if (typeof typed.prompt !== 'string' ||
+         typeof typed.output !== 'string' ||
+         typeof typed.meta !== 'string') {
+         return undefined;
+     }
+     return {
+         prompt: typed.prompt,
+         output: typed.output,
+         meta: typed.meta
+     };
+ }
  function selectDeliberationReason(params) {
      if (params.iteration === 1) {
          return 'bootstrap';
@@ -678,27 +705,58 @@ async function runDeliberationStep(params) {
          maxSummaryBytes: params.options.maxSummaryBytes
      });
      const promptBytes = byteLength(prompt);
-     const deliberationDir = join(params.runDir, 'deliberation');
-     await mkdir(deliberationDir, { recursive: true });
-     const baseName = `iteration-${String(params.iteration).padStart(4, '0')}`;
-     const promptPath = join(deliberationDir, `${baseName}-prompt.txt`);
-     const outputPath = join(deliberationDir, `${baseName}-output.txt`);
-     const metaPath = join(deliberationDir, `${baseName}-meta.json`);
-     await writeFile(promptPath, prompt, 'utf8');
-     const output = await params.options.run(prompt, {
-         iteration: params.iteration,
-         reason: formatDeliberationReason(params.reason)
-     });
+     const shouldLogArtifacts = params.options.logArtifacts === true;
+     let artifactPaths;
+     let outputPath = null;
+     let metaPath = null;
+     if (shouldLogArtifacts) {
+         const deliberationDir = join(params.runDir, 'deliberation');
+         await mkdir(deliberationDir, { recursive: true });
+         const baseName = `iteration-${String(params.iteration).padStart(4, '0')}`;
+         const promptPath = join(deliberationDir, `${baseName}-prompt.txt`);
+         outputPath = join(deliberationDir, `${baseName}-output.txt`);
+         metaPath = join(deliberationDir, `${baseName}-meta.json`);
+         await writeFile(promptPath, prompt, 'utf8');
+         artifactPaths = {
+             prompt: relative(params.repoRoot, promptPath),
+             output: relative(params.repoRoot, outputPath),
+             meta: relative(params.repoRoot, metaPath)
+         };
+     }
+     let output;
+     try {
+         output = await params.options.run(prompt, {
+             iteration: params.iteration,
+             reason: formatDeliberationReason(params.reason)
+         });
+     }
+     catch (error) {
+         if (shouldLogArtifacts && outputPath && metaPath) {
+             const errorMessage = error instanceof Error ? error.message : String(error);
+             await writeFile(outputPath, '', 'utf8');
+             await writeFile(metaPath, JSON.stringify({
+                 iteration: params.iteration,
+                 reason: formatDeliberationReason(params.reason),
+                 strategy: params.options.strategy,
+                 prompt_bytes: promptBytes,
+                 output_bytes: 0,
+                 error: errorMessage
+             }, null, 2), 'utf8');
+         }
+         throw attachDeliberationArtifactPaths(error, artifactPaths);
+     }
      const brief = truncateUtf8ToBytes(output ?? '', params.options.maxSummaryBytes);
      const outputBytes = byteLength(brief);
-     await writeFile(outputPath, brief, 'utf8');
-     await writeFile(metaPath, JSON.stringify({
-         iteration: params.iteration,
-         reason: formatDeliberationReason(params.reason),
-         strategy: params.options.strategy,
-         prompt_bytes: promptBytes,
-         output_bytes: outputBytes
-     }, null, 2), 'utf8');
+     if (shouldLogArtifacts && outputPath && metaPath) {
+         await writeFile(outputPath, brief, 'utf8');
+         await writeFile(metaPath, JSON.stringify({
+             iteration: params.iteration,
+             reason: formatDeliberationReason(params.reason),
+             strategy: params.options.strategy,
+             prompt_bytes: promptBytes,
+             output_bytes: outputBytes
+         }, null, 2), 'utf8');
+     }
      return {
          record: {
              status: 'ran',
@@ -706,11 +764,7 @@ async function runDeliberationStep(params) {
              strategy: params.options.strategy,
              prompt_bytes: promptBytes,
              output_bytes: outputBytes,
-             artifact_paths: {
-                 prompt: relative(params.repoRoot, promptPath),
-                 output: relative(params.repoRoot, outputPath),
-                 meta: relative(params.repoRoot, metaPath)
-             }
+             artifact_paths: artifactPaths
          },
          brief
      };
@@ -814,6 +868,7 @@ export async function runSymbolicLoop(options) {
              status: 'error',
              reason: formatDeliberationReason(reason),
              strategy: deliberationOptions.strategy,
+             artifact_paths: extractDeliberationArtifactPaths(error),
              error: error instanceof Error ? error.message : String(error)
          };
          log(`Deliberation ${formatDeliberationReason(reason)} failed for iteration ${iteration}: ${deliberation.error}`);
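
+ The deliberation changes above lean on a common pattern: decorate a rethrown error with context, then defensively validate the shape on recovery. A self-contained sketch of that pattern (names simplified from the hunk above):

```javascript
// Normalize any thrown value to an Error and attach artifact paths.
function attachPaths(error, paths) {
  const normalized = error instanceof Error ? error : new Error(String(error));
  if (paths) normalized.artifactPaths = paths;
  return normalized;
}

// Recover the paths only when every expected field is a string.
function extractPaths(error) {
  const raw = error && typeof error === 'object' ? error.artifactPaths : undefined;
  if (!raw || typeof raw.prompt !== 'string' || typeof raw.output !== 'string' || typeof raw.meta !== 'string') {
    return undefined;
  }
  return { prompt: raw.prompt, output: raw.output, meta: raw.meta };
}

const err = attachPaths('boom', { prompt: 'p.txt', output: 'o.txt', meta: 'm.json' });
console.log(err.message, extractPaths(err).meta); // boom m.json
```

The strict shape check on recovery means a partially-populated decoration degrades to `undefined` instead of leaking a malformed record into the run manifest.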
@@ -235,11 +235,11 @@ async function resolveContextSource(env, fallbackText) {
  async function promptForValidator(candidates) {
      const rl = createInterface({ input: process.stdin, output: process.stdout });
      try {
-         console.log('Validator auto-detect found multiple candidates:');
+         logger.info('Validator auto-detect found multiple candidates:');
          candidates.forEach((candidate, index) => {
-             console.log(` ${index + 1}) ${candidate.command} (${candidate.reason})`);
+             logger.info(` ${index + 1}) ${candidate.command} (${candidate.reason})`);
          });
-         console.log(' n) none');
+         logger.info(' n) none');
          const answer = (await rl.question('Select validator [1-n or n for none]: ')).trim().toLowerCase();
          if (!answer || answer === 'n' || answer === 'none') {
              return null;
@@ -576,7 +576,7 @@ async function main() {
          state.final = { status, exitCode };
          await writeTerminalState(runDir, state);
          if (message) {
-             console.error(message);
+             logger.error(message);
          }
          process.exitCode = exitCode;
      };
@@ -725,7 +725,7 @@ async function main() {
      const detection = await detectValidator(repoRoot);
      if (detection.status === 'selected' && detection.command) {
          validatorCommand = detection.command;
-         console.log(`Validator: ${detection.command} (${detection.reason ?? 'auto-detect'})`);
+         logger.info(`Validator: ${detection.command} (${detection.reason ?? 'auto-detect'})`);
      }
      else if (detection.status === 'ambiguous') {
          if (isInteractive) {
@@ -743,7 +743,7 @@ async function main() {
                  mode,
                  context: contextInfo
              });
-             console.error(candidates);
+             logger.error(candidates);
              return;
          }
      }
@@ -766,10 +766,10 @@ async function main() {
          }
      }
      if (validatorCommand === null) {
-         console.log('Validator: none');
+         logger.info('Validator: none');
      }
      else {
-         console.log(`Validator: ${validatorCommand}`);
+         logger.info(`Validator: ${validatorCommand}`);
      }
      const subagentsEnabled = envFlagEnabled(env.CODEX_SUBAGENTS) || envFlagEnabled(env.RLM_SUBAGENTS);
      const symbolicCollabEnabled = envFlagEnabled(env.RLM_SYMBOLIC_COLLAB);
@@ -779,6 +779,7 @@ async function main() {
      const symbolicDeliberationIncludeInPlanner = env.RLM_SYMBOLIC_DELIBERATION_INCLUDE_IN_PLANNER === undefined
          ? true
          : envFlagEnabled(env.RLM_SYMBOLIC_DELIBERATION_INCLUDE_IN_PLANNER);
+     const symbolicDeliberationLogArtifacts = envFlagEnabled(env.RLM_SYMBOLIC_DELIBERATION_LOG);
      const nonInteractive = shouldForceNonInteractive(env);
      if (mode === 'symbolic') {
          const budgets = {
@@ -894,6 +895,7 @@ async function main() {
              maxRuns: deliberationMaxRuns,
              maxSummaryBytes: deliberationMaxSummaryBytes,
              includeInPlannerPrompt: symbolicDeliberationIncludeInPlanner,
+             logArtifacts: symbolicDeliberationLogArtifacts,
              run: (prompt, _meta) => {
                  void _meta;
                  if (!symbolicCollabEnabled) {
@@ -914,7 +916,7 @@ async function main() {
          });
          const finalStatus = result.state.final?.status ?? 'unknown';
          const iterationCount = result.state.symbolic_iterations.length;
-         console.log(`RLM completed: status=${finalStatus} symbolic_iterations=${iterationCount} exit=${result.exitCode}`);
+         logger.info(`RLM completed: status=${finalStatus} symbolic_iterations=${iterationCount} exit=${result.exitCode}`);
          process.exitCode = result.exitCode;
          return;
      }
@@ -935,11 +937,11 @@ async function main() {
      });
      const finalStatus = result.state.final?.status ?? 'unknown';
      const iterationCount = result.state.iterations.length;
-     console.log(`RLM completed: status=${finalStatus} iterations=${iterationCount} exit=${result.exitCode}`);
+     logger.info(`RLM completed: status=${finalStatus} iterations=${iterationCount} exit=${result.exitCode}`);
      const hasTimeCap = resolvedMaxMinutes !== null && resolvedMaxMinutes > 0;
      const unboundedBudgetInvalid = validatorCommand === null && maxIterations === 0 && !hasTimeCap;
      if (finalStatus === 'invalid_config' && unboundedBudgetInvalid) {
-         console.error('Invalid configuration: --validator none with unbounded iterations and --max-minutes 0 would run forever. Fix: set --max-minutes / RLM_MAX_MINUTES to a positive value (default 2880), set --max-iterations to a positive value, or provide a validator.');
+         logger.error('Invalid configuration: --validator none with unbounded iterations and --max-minutes 0 would run forever. Fix: set --max-minutes / RLM_MAX_MINUTES to a positive value (default 2880), set --max-iterations to a positive value, or provide a validator.');
      }
      process.exitCode = result.exitCode;
  }
@@ -5,7 +5,7 @@ import { setTimeout as sleep } from 'node:timers/promises';
  import { isoTimestamp } from '../cli/utils/time.js';
  const TASK_ID_PATTERN = /\btask_[a-z]_[a-f0-9]+\b/i;
  const MAX_LOG_CHARS = 32 * 1024;
- const STATUS_RETRY_LIMIT = 3;
+ const STATUS_RETRY_LIMIT = 12;
  const STATUS_RETRY_BACKOFF_MS = 1500;
  const DEFAULT_LIST_LIMIT = 20;
  export function extractCloudTaskId(text) {
@@ -129,6 +129,8 @@ export class CodexCloudTaskExecutor {
          }
          const timeoutAt = Date.now() + cloudExecution.timeout_seconds * 1000;
          let statusRetries = 0;
+         let lastKnownStatus = cloudExecution.status;
+         let loggedNonZeroStatus = false;
          while (Date.now() < timeoutAt) {
              const statusResult = await runCloudCommand(['cloud', 'status', taskId]);
              cloudExecution.last_polled_at = this.now();
@@ -145,9 +147,14 @@
                  await this.sleepFn(STATUS_RETRY_BACKOFF_MS * statusRetries);
                  continue;
              }
+             if (statusResult.exitCode !== 0 && mapped !== 'unknown' && !loggedNonZeroStatus) {
+                 notes.push(`Cloud status returned exit ${statusResult.exitCode} with remote status ${mapped}; continuing to poll.`);
+                 loggedNonZeroStatus = true;
+             }
              statusRetries = 0;
              if (mapped !== 'unknown') {
                  cloudExecution.status = mapped;
+                 lastKnownStatus = mapped;
              }
              if (mapped === 'ready') {
                  notes.push(`Cloud task completed: ${taskId}`);
@@ -161,7 +168,7 @@
          }
          if (cloudExecution.status === 'running' || cloudExecution.status === 'queued') {
              cloudExecution.status = 'failed';
-             cloudExecution.error = `Timed out waiting for cloud task completion after ${cloudExecution.timeout_seconds}s.`;
+             cloudExecution.error = `Timed out waiting for cloud task completion after ${cloudExecution.timeout_seconds}s (last remote status: ${lastKnownStatus}, polls: ${cloudExecution.poll_count}).`;
          }
          if (cloudExecution.status === 'ready') {
              const diffResult = await runCloudCommand(['cloud', 'diff', taskId]);
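
+ Raising `STATUS_RETRY_LIMIT` from 3 to 12 materially extends the retry budget: with the linear backoff shown above (`STATUS_RETRY_BACKOFF_MS * statusRetries`), worst-case time spent purely in backoff grows from 6 × 1500 ms to 78 × 1500 ms. A quick arithmetic check of that schedule (an illustration, not the shipped polling loop):

```javascript
// Sum the linear backoff delays base * 1, base * 2, ..., base * limit.
const STATUS_RETRY_BACKOFF_MS = 1500;
const totalWaitMs = (limit) =>
  Array.from({ length: limit }, (_, i) => STATUS_RETRY_BACKOFF_MS * (i + 1))
    .reduce((a, b) => a + b, 0);

console.log(totalWaitMs(3));  // 9000
console.log(totalWaitMs(12)); // 117000
```

So the new limit tolerates roughly two minutes of consecutive unreadable status responses before giving up, on top of the regular poll interval.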
@@ -0,0 +1,566 @@
+ import process from 'node:process';
+ import { spawn } from 'node:child_process';
+ import { setTimeout as sleep } from 'node:timers/promises';
+ import { hasFlag, parseArgs } from './cli-args.js';
+ const DEFAULT_INTERVAL_SECONDS = 30;
+ const DEFAULT_QUIET_MINUTES = 15;
+ const DEFAULT_TIMEOUT_MINUTES = 180;
+ const DEFAULT_MERGE_METHOD = 'squash';
+ const CHECKRUN_PASS_CONCLUSIONS = new Set(['SUCCESS', 'SKIPPED', 'NEUTRAL']);
+ const STATUS_CONTEXT_PASS_STATES = new Set(['SUCCESS']);
+ const STATUS_CONTEXT_PENDING_STATES = new Set(['EXPECTED', 'PENDING']);
+ const MERGEABLE_STATES = new Set(['CLEAN', 'HAS_HOOKS', 'UNSTABLE']);
+ const BLOCKED_REVIEW_DECISIONS = new Set(['CHANGES_REQUESTED', 'REVIEW_REQUIRED']);
+ const DO_NOT_MERGE_LABEL = /do[\s_-]*not[\s_-]*merge/i;
+ const PR_QUERY = `
+ query($owner:String!, $repo:String!, $number:Int!) {
+   repository(owner:$owner, name:$repo) {
+     pullRequest(number:$number) {
+       number
+       url
+       state
+       isDraft
+       reviewDecision
+       mergeStateStatus
+       updatedAt
+       mergedAt
+       labels(first:50) {
+         nodes {
+           name
+         }
+       }
+       reviewThreads(first:100) {
+         nodes {
+           isResolved
+           isOutdated
+         }
+       }
+       commits(last:1) {
+         nodes {
+           commit {
+             oid
+             statusCheckRollup {
+               contexts(first:100) {
+                 nodes {
+                   __typename
+                   ... on CheckRun {
+                     name
+                     status
+                     conclusion
+                     detailsUrl
+                   }
+                   ... on StatusContext {
+                     context
+                     state
+                     targetUrl
+                   }
+                 }
+               }
+             }
+           }
+         }
+       }
+     }
+   }
+ }
+ `;
+ function normalizeEnum(value) {
+     return typeof value === 'string' ? value.trim().toUpperCase() : '';
+ }
+ function formatDuration(ms) {
+     if (ms <= 0) {
+         return '0s';
+     }
+     const seconds = Math.ceil(ms / 1000);
+     const minutes = Math.floor(seconds / 60);
+     const remainder = seconds % 60;
+     if (minutes === 0) {
+         return `${remainder}s`;
+     }
+     if (remainder === 0) {
+         return `${minutes}m`;
+     }
+     return `${minutes}m${remainder}s`;
+ }
+ function log(message) {
+     console.log(`[${new Date().toISOString()}] ${message}`);
+ }
+ function parseNumber(name, rawValue, fallback) {
+     if (rawValue === undefined || rawValue === null || rawValue === '') {
+         return fallback;
+     }
+     if (typeof rawValue === 'boolean') {
+         throw new Error(`--${name} requires a value.`);
+     }
+     const parsed = Number(rawValue);
+     if (!Number.isFinite(parsed) || parsed <= 0) {
+         throw new Error(`--${name} must be a number > 0 (received: ${rawValue})`);
+     }
+     return parsed;
+ }
+ function parseInteger(name, rawValue, fallback) {
+     if (rawValue === undefined || rawValue === null || rawValue === '') {
+         return fallback;
+     }
+     if (typeof rawValue === 'boolean') {
+         throw new Error(`--${name} requires a value.`);
+     }
+     const parsed = Number(rawValue);
+     if (!Number.isInteger(parsed) || parsed <= 0) {
+         throw new Error(`--${name} must be an integer > 0 (received: ${rawValue})`);
+     }
+     return parsed;
+ }
+ function envFlagEnabled(rawValue, fallback = false) {
+     if (rawValue === undefined || rawValue === null) {
+         return fallback;
+     }
+     const normalized = String(rawValue).trim().toLowerCase();
+     if (normalized.length === 0) {
+         return fallback;
+     }
+     if (normalized === '1' || normalized === 'true' || normalized === 'yes' || normalized === 'on') {
+         return true;
+     }
+     if (normalized === '0' || normalized === 'false' || normalized === 'no' || normalized === 'off') {
+         return false;
+     }
+     return fallback;
+ }
+ function parseMergeMethod(rawValue) {
+     const normalized = (rawValue || DEFAULT_MERGE_METHOD).trim().toLowerCase();
+     if (normalized !== 'merge' && normalized !== 'squash' && normalized !== 'rebase') {
+         throw new Error(`--merge-method must be merge, squash, or rebase (received: ${rawValue})`);
+     }
+     return normalized;
+ }
+ export function printPrWatchMergeHelp(options = {}) {
+     const usageCommand = typeof options.usage === 'string' && options.usage.trim().length > 0
+         ? options.usage.trim()
+         : 'codex-orchestrator pr watch-merge';
+     console.log(`Usage: ${usageCommand} [options]
+
+ Monitor PR checks/reviews with polling and optionally merge after a quiet window.
+
+ Options:
+   --pr <number>             PR number (default: PR for current branch)
+   --owner <name>            Repo owner (default: inferred via gh repo view)
+   --repo <name>             Repo name (default: inferred via gh repo view)
+   --interval-seconds <n>    Poll interval in seconds (default: ${DEFAULT_INTERVAL_SECONDS})
+   --quiet-minutes <n>       Required quiet window after ready state (default: ${DEFAULT_QUIET_MINUTES})
+   --timeout-minutes <n>     Max monitor duration before failing (default: ${DEFAULT_TIMEOUT_MINUTES})
+   --merge-method <method>   merge|squash|rebase (default: ${DEFAULT_MERGE_METHOD})
+   --auto-merge              Merge automatically after quiet window
+   --no-auto-merge           Never merge automatically (monitor only)
+   --delete-branch           Delete remote branch when merging
+   --no-delete-branch        Keep remote branch after merge
+   --dry-run                 Never call gh pr merge (report only)
+   -h, --help                Show this help message
+
+ Environment:
+   PR_MONITOR_AUTO_MERGE=1        Default auto-merge on
+   PR_MONITOR_DELETE_BRANCH=1     Default delete branch on merge
+   PR_MONITOR_QUIET_MINUTES=<n>   Override quiet window default
+   PR_MONITOR_INTERVAL_SECONDS=<n>
+   PR_MONITOR_TIMEOUT_MINUTES=<n>
+   PR_MONITOR_MERGE_METHOD=<method>`);
+ }
+ async function runGh(args, { allowFailure = false } = {}) {
+     return await new Promise((resolve, reject) => {
+         const child = spawn('gh', args, {
+             env: {
+                 ...process.env,
+                 GH_PAGER: process.env.GH_PAGER || 'cat',
+                 // Harden all gh calls against interactive prompts (per `gh help environment`).
+                 GH_PROMPT_DISABLED: process.env.GH_PROMPT_DISABLED || '1'
+             },
+             stdio: ['ignore', 'pipe', 'pipe']
+         });
+         let stdout = '';
+         let stderr = '';
+         child.stdout?.on('data', (chunk) => {
+             stdout += chunk.toString();
+         });
+         child.stderr?.on('data', (chunk) => {
+             stderr += chunk.toString();
+         });
+         child.once('error', (error) => {
+             reject(new Error(`Failed to run gh ${args.join(' ')}: ${error.message}`));
+         });
+         child.once('close', (code) => {
+             const exitCode = typeof code === 'number' ? code : 1;
+             const result = {
+                 exitCode,
+                 stdout: stdout.trim(),
+                 stderr: stderr.trim()
+             };
+             if (exitCode === 0 || allowFailure) {
+                 resolve(result);
+                 return;
+             }
+             const detail = result.stderr || result.stdout || `exit code ${exitCode}`;
+             reject(new Error(`gh ${args.join(' ')} failed: ${detail}`));
+         });
+     });
+ }
+ async function runGhJson(args) {
+     const result = await runGh(args);
+     try {
+         return JSON.parse(result.stdout);
+     }
+     catch (error) {
+         throw new Error(`Failed to parse JSON from gh ${args.join(' ')}: ${error instanceof Error ? error.message : String(error)}`);
+     }
+ }
+ async function ensureGhAuth() {
+     const result = await runGh(['auth', 'status', '-h', 'github.com'], { allowFailure: true });
+     if (result.exitCode !== 0) {
+         throw new Error('GitHub CLI is not authenticated for github.com. Run `gh auth login` and retry.');
+     }
+ }
+ async function resolveRepo(ownerArg, repoArg) {
+     if (ownerArg && repoArg) {
+         return { owner: ownerArg, repo: repoArg };
+     }
+     if (ownerArg || repoArg) {
+         throw new Error('Provide both --owner and --repo, or neither.');
+     }
+     const response = await runGhJson(['repo', 'view', '--json', 'nameWithOwner']);
+     const nameWithOwner = response?.nameWithOwner;
+     if (typeof nameWithOwner !== 'string' || !nameWithOwner.includes('/')) {
+         throw new Error('Unable to infer repository owner/name from gh repo view.');
+     }
+     const [owner, repo] = nameWithOwner.split('/');
+     return { owner, repo };
+ }
+ async function resolvePrNumber(prArg) {
+     if (prArg !== undefined) {
+         return parseInteger('pr', prArg, null);
+     }
+     const response = await runGhJson(['pr', 'view', '--json', 'number']);
+     const number = response?.number;
+     if (!Number.isInteger(number) || number <= 0) {
+         throw new Error('Unable to infer PR number from current branch.');
+     }
+     return number;
+ }
+ function summarizeChecks(nodes) {
+     const summary = {
+         total: 0,
+         successCount: 0,
+         pending: [],
+         failed: []
+     };
+     for (const node of nodes) {
+         if (!node || typeof node !== 'object') {
+             continue;
+         }
+         const typeName = typeof node.__typename === 'string' ? node.__typename : '';
+         if (typeName === 'CheckRun') {
+             summary.total += 1;
+             const name = typeof node.name === 'string' && node.name.trim() ? node.name.trim() : 'check-run';
+             const status = normalizeEnum(node.status);
+             if (status !== 'COMPLETED') {
+                 summary.pending.push(name);
+                 continue;
+             }
+             const conclusion = normalizeEnum(node.conclusion);
+             if (CHECKRUN_PASS_CONCLUSIONS.has(conclusion)) {
+                 summary.successCount += 1;
+             }
+             else {
+                 summary.failed.push({
+                     name,
+                     state: conclusion || 'UNKNOWN',
+                     detailsUrl: typeof node.detailsUrl === 'string' ? node.detailsUrl : null
+                 });
+             }
+             continue;
+         }
+         if (typeName === 'StatusContext') {
+             summary.total += 1;
+             const name = typeof node.context === 'string' && node.context.trim() ? node.context.trim() : 'status-context';
+             const state = normalizeEnum(node.state);
+             if (STATUS_CONTEXT_PENDING_STATES.has(state)) {
+                 summary.pending.push(name);
+                 continue;
+             }
+             if (STATUS_CONTEXT_PASS_STATES.has(state)) {
+                 summary.successCount += 1;
+             }
+             else {
+                 summary.failed.push({
+                     name,
+                     state: state || 'UNKNOWN',
+                     detailsUrl: typeof node.targetUrl === 'string' ? node.targetUrl : null
+                 });
+             }
+         }
+     }
+     return summary;
+ }
+ function buildStatusSnapshot(response) {
+     const pr = response?.data?.repository?.pullRequest;
+     if (!pr) {
+         throw new Error('GraphQL response missing pullRequest payload.');
+     }
+     const labels = Array.isArray(pr.labels?.nodes)
+         ? pr.labels.nodes
+             .map((item) => (item && typeof item.name === 'string' ? item.name.trim() : ''))
+             .filter(Boolean)
+         : [];
+     const hasDoNotMergeLabel = labels.some((label) => DO_NOT_MERGE_LABEL.test(label));
+     const threads = Array.isArray(pr.reviewThreads?.nodes) ? pr.reviewThreads.nodes : [];
+     const unresolvedThreadCount = threads.filter((thread) => thread && !thread.isResolved && !thread.isOutdated).length;
+     const contexts = pr.commits?.nodes?.[0]?.commit?.statusCheckRollup?.contexts?.nodes;
+     const checkNodes = Array.isArray(contexts) ? contexts : [];
+     const checks = summarizeChecks(checkNodes);
+     const reviewDecision = normalizeEnum(pr.reviewDecision);
+     const mergeStateStatus = normalizeEnum(pr.mergeStateStatus);
+     const state = normalizeEnum(pr.state);
+     const isDraft = Boolean(pr.isDraft);
+     const gateReasons = [];
+     if (state !== 'OPEN') {
+         gateReasons.push(`state=${state || 'UNKNOWN'}`);
+     }
+     if (isDraft) {
+         gateReasons.push('draft');
+     }
+     if (hasDoNotMergeLabel) {
+         gateReasons.push('label:do-not-merge');
+     }
+     if (checks.pending.length > 0) {
+         gateReasons.push(`checks_pending=${checks.pending.length}`);
+     }
+     if (!MERGEABLE_STATES.has(mergeStateStatus)) {
+         gateReasons.push(`merge_state=${mergeStateStatus || 'UNKNOWN'}`);
+     }
+     if (BLOCKED_REVIEW_DECISIONS.has(reviewDecision)) {
+         gateReasons.push(`review=${reviewDecision}`);
+     }
+     if (unresolvedThreadCount > 0) {
+         gateReasons.push(`unresolved_threads=${unresolvedThreadCount}`);
+     }
+     return {
+         number: Number(pr.number),
+         url: typeof pr.url === 'string' ? pr.url : null,
+         state,
+         isDraft,
+         reviewDecision: reviewDecision || 'NONE',
+         mergeStateStatus: mergeStateStatus || 'UNKNOWN',
+         updatedAt: typeof pr.updatedAt === 'string' ? pr.updatedAt : null,
+         mergedAt: typeof pr.mergedAt === 'string' ? pr.mergedAt : null,
+         labels,
+         hasDoNotMergeLabel,
+         unresolvedThreadCount,
+         checks,
+         gateReasons,
+         readyToMerge: gateReasons.length === 0,
+         headOid: pr.commits?.nodes?.[0]?.commit?.oid || null
+     };
+ }
+ function formatStatusLine(snapshot, quietRemainingMs) {
+     const failedNames = snapshot.checks.failed.map((item) => `${item.name}:${item.state}`).join(', ') || '-';
+     const pendingNames = snapshot.checks.pending.join(', ') || '-';
+     const reasons = snapshot.gateReasons.join(', ') || 'none';
366
+ return [
367
+ `PR #${snapshot.number}`,
368
+ `state=${snapshot.state}`,
369
+ `merge_state=${snapshot.mergeStateStatus}`,
370
+ `review=${snapshot.reviewDecision}`,
371
+ `checks_ok=${snapshot.checks.successCount}/${snapshot.checks.total}`,
372
+ `checks_pending=${snapshot.checks.pending.length}`,
373
+ `checks_failed=${snapshot.checks.failed.length}`,
374
+ `unresolved_threads=${snapshot.unresolvedThreadCount}`,
375
+ `quiet_remaining=${formatDuration(quietRemainingMs)}`,
376
+ `blocked_by=${reasons}`,
377
+ `pending=[${pendingNames}]`,
378
+ `failed=[${failedNames}]`
379
+ ].join(' | ');
380
+ }
381
+ async function fetchSnapshot(owner, repo, prNumber) {
382
+ const response = await runGhJson([
383
+ 'api',
384
+ 'graphql',
385
+ '-f',
386
+ `query=${PR_QUERY}`,
387
+ '-f',
388
+ `owner=${owner}`,
389
+ '-f',
390
+ `repo=${repo}`,
391
+ '-F',
392
+ `number=${prNumber}`
393
+ ]);
394
+ return buildStatusSnapshot(response);
395
+ }
396
+ async function attemptMerge({ prNumber, mergeMethod, deleteBranch, headOid }) {
397
+ // gh pr merge has no --yes flag; rely on non-interactive stdio + explicit merge method.
398
+ const args = ['pr', 'merge', String(prNumber), `--${mergeMethod}`];
399
+ if (deleteBranch) {
400
+ args.push('--delete-branch');
401
+ }
402
+ if (headOid) {
403
+ args.push('--match-head-commit', headOid);
404
+ }
405
+ return await runGh(args, { allowFailure: true });
406
+ }
407
+ async function runPrWatchMergeOrThrow(argv, options) {
408
+ const { args, positionals } = parseArgs(argv);
409
+ if (hasFlag(args, 'h') || hasFlag(args, 'help')) {
410
+ printPrWatchMergeHelp(options);
411
+ return;
412
+ }
413
+ const knownFlags = new Set([
414
+ 'pr',
415
+ 'owner',
416
+ 'repo',
417
+ 'interval-seconds',
418
+ 'quiet-minutes',
419
+ 'timeout-minutes',
420
+ 'merge-method',
421
+ 'auto-merge',
422
+ 'no-auto-merge',
423
+ 'delete-branch',
424
+ 'no-delete-branch',
425
+ 'dry-run',
426
+ 'h',
427
+ 'help'
428
+ ]);
429
+ const unknownFlags = Object.keys(args).filter((key) => !knownFlags.has(key));
430
+ if (unknownFlags.length > 0 || positionals.length > 0) {
431
+ const label = unknownFlags[0] ? `--${unknownFlags[0]}` : positionals[0];
432
+ throw new Error(`Unknown option: ${label}`);
433
+ }
434
+ const intervalSeconds = parseNumber('interval-seconds', typeof args['interval-seconds'] === 'string'
435
+ ? args['interval-seconds']
436
+ : process.env.PR_MONITOR_INTERVAL_SECONDS, DEFAULT_INTERVAL_SECONDS);
437
+ const quietMinutes = parseNumber('quiet-minutes', typeof args['quiet-minutes'] === 'string'
438
+ ? args['quiet-minutes']
439
+ : process.env.PR_MONITOR_QUIET_MINUTES, DEFAULT_QUIET_MINUTES);
440
+ const timeoutMinutes = parseNumber('timeout-minutes', typeof args['timeout-minutes'] === 'string'
441
+ ? args['timeout-minutes']
442
+ : process.env.PR_MONITOR_TIMEOUT_MINUTES, DEFAULT_TIMEOUT_MINUTES);
443
+ const mergeMethod = parseMergeMethod(typeof args['merge-method'] === 'string'
444
+ ? args['merge-method']
445
+ : process.env.PR_MONITOR_MERGE_METHOD || DEFAULT_MERGE_METHOD);
446
+ const defaultAutoMerge = envFlagEnabled(process.env.PR_MONITOR_AUTO_MERGE, false);
447
+ const defaultDeleteBranch = envFlagEnabled(process.env.PR_MONITOR_DELETE_BRANCH, true);
448
+ let autoMerge = defaultAutoMerge;
449
+ if (hasFlag(args, 'auto-merge')) {
450
+ autoMerge = true;
451
+ }
452
+ if (hasFlag(args, 'no-auto-merge')) {
453
+ autoMerge = false;
454
+ }
455
+ let deleteBranch = defaultDeleteBranch;
456
+ if (hasFlag(args, 'delete-branch')) {
457
+ deleteBranch = true;
458
+ }
459
+ if (hasFlag(args, 'no-delete-branch')) {
460
+ deleteBranch = false;
461
+ }
462
+ const dryRun = hasFlag(args, 'dry-run');
463
+ await ensureGhAuth();
464
+ const { owner, repo } = await resolveRepo(typeof args.owner === 'string' ? args.owner : undefined, typeof args.repo === 'string' ? args.repo : undefined);
465
+ const prNumber = await resolvePrNumber(args.pr);
466
+ const intervalMs = Math.round(intervalSeconds * 1000);
467
+ const quietMs = Math.round(quietMinutes * 60 * 1000);
468
+ const timeoutMs = Math.round(timeoutMinutes * 60 * 1000);
469
+ const deadline = Date.now() + timeoutMs;
470
+ log(`Monitoring ${owner}/${repo}#${prNumber} every ${intervalSeconds}s (quiet window ${quietMinutes}m, timeout ${timeoutMinutes}m, auto_merge=${autoMerge ? 'on' : 'off'}, dry_run=${dryRun ? 'on' : 'off'}).`);
471
+ let quietWindowStartedAt = null;
472
+ let quietWindowAnchorUpdatedAt = null;
473
+ let quietWindowAnchorHeadOid = null;
474
+ let lastMergeAttemptHeadOid = null;
475
+ while (Date.now() <= deadline) {
476
+ let snapshot;
477
+ try {
478
+ snapshot = await fetchSnapshot(owner, repo, prNumber);
479
+ }
480
+ catch (error) {
481
+ log(`Polling error: ${error instanceof Error ? error.message : String(error)} (retrying).`);
482
+ await sleep(intervalMs);
483
+ continue;
484
+ }
485
+ if (snapshot.state === 'MERGED' || snapshot.mergedAt) {
486
+ log(`PR #${prNumber} is merged.`);
487
+ if (snapshot.url) {
488
+ log(`URL: ${snapshot.url}`);
489
+ }
490
+ return;
491
+ }
492
+ if (snapshot.state === 'CLOSED') {
493
+ throw new Error(`PR #${prNumber} was closed without merge.`);
494
+ }
495
+ if (snapshot.readyToMerge) {
496
+ const readyAnchorChanged = quietWindowStartedAt !== null &&
497
+ (snapshot.updatedAt !== quietWindowAnchorUpdatedAt || snapshot.headOid !== quietWindowAnchorHeadOid);
498
+ if (quietWindowStartedAt === null || readyAnchorChanged) {
499
+ quietWindowStartedAt = Date.now();
500
+ quietWindowAnchorUpdatedAt = snapshot.updatedAt;
501
+ quietWindowAnchorHeadOid = snapshot.headOid;
502
+ lastMergeAttemptHeadOid = null;
503
+ log(readyAnchorChanged
504
+ ? 'Ready state changed; quiet window reset.'
505
+ : `Ready state reached; quiet window started (${quietMinutes}m).`);
506
+ }
507
+ }
508
+ else if (quietWindowStartedAt !== null) {
509
+ quietWindowStartedAt = null;
510
+ quietWindowAnchorUpdatedAt = null;
511
+ quietWindowAnchorHeadOid = null;
512
+ lastMergeAttemptHeadOid = null;
513
+ log('Ready state lost; quiet window cleared.');
514
+ }
515
+ const quietElapsedMs = quietWindowStartedAt ? Date.now() - quietWindowStartedAt : 0;
516
+ const quietRemainingMs = quietWindowStartedAt ? Math.max(quietMs - quietElapsedMs, 0) : quietMs;
517
+ log(formatStatusLine(snapshot, quietRemainingMs));
518
+ if (snapshot.readyToMerge && quietWindowStartedAt !== null && quietElapsedMs >= quietMs) {
519
+ if (!autoMerge || dryRun) {
520
+ log(dryRun
521
+ ? 'Dry run: merge conditions satisfied and quiet window elapsed.'
522
+ : 'Merge conditions satisfied and quiet window elapsed.');
523
+ if (snapshot.url) {
524
+ log(`Ready to merge: ${snapshot.url}`);
525
+ }
526
+ return;
527
+ }
528
+ if (snapshot.headOid && snapshot.headOid === lastMergeAttemptHeadOid) {
529
+ log(`Merge already attempted for head ${snapshot.headOid}; waiting for PR state refresh.`);
530
+ }
531
+ else {
532
+ lastMergeAttemptHeadOid = snapshot.headOid;
533
+ log(`Attempting merge via gh pr merge --${mergeMethod}${deleteBranch ? ' --delete-branch' : ''}.`);
534
+ const mergeResult = await attemptMerge({
535
+ prNumber,
536
+ mergeMethod,
537
+ deleteBranch,
538
+ headOid: snapshot.headOid
539
+ });
540
+ if (mergeResult.exitCode === 0) {
541
+ log(`Merge command succeeded for PR #${prNumber}.`);
542
+ return;
543
+ }
544
+ const details = mergeResult.stderr || mergeResult.stdout || `exit code ${mergeResult.exitCode}`;
545
+ log(`Merge attempt failed: ${details}`);
546
+ }
547
+ }
548
+ const remainingTimeMs = deadline - Date.now();
549
+ if (remainingTimeMs <= 0) {
550
+ break;
551
+ }
552
+ await sleep(Math.min(intervalMs, remainingTimeMs));
553
+ }
554
+ throw new Error(`Timed out after ${timeoutMinutes} minute(s) while monitoring PR #${prNumber}.`);
555
+ }
556
+ export async function runPrWatchMerge(argv, options = {}) {
557
+ try {
558
+ await runPrWatchMergeOrThrow(argv, options);
559
+ return 0;
560
+ }
561
+ catch (error) {
562
+ const message = error instanceof Error ? error.message : String(error);
563
+ console.error(message);
564
+ return 1;
565
+ }
566
+ }
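The quiet-window bookkeeping in `runPrWatchMergeOrThrow` above can be isolated as a pure function, which makes the reset rule easier to verify: the window starts when the PR first becomes ready, resets when `updatedAt` or `headOid` changes while ready, and clears when readiness is lost. This is an illustrative standalone sketch, not code shipped in the package; `advanceQuietWindow` is a hypothetical name.

```javascript
// Illustrative sketch of the quiet-window rule used by the watcher loop above.
// A window is { startedAt, updatedAt, headOid } or null when not ready.
function advanceQuietWindow(window, snapshot, now) {
  if (!snapshot.readyToMerge) {
    return null; // ready state lost: clear the window
  }
  const anchorChanged = window !== null &&
    (snapshot.updatedAt !== window.updatedAt || snapshot.headOid !== window.headOid);
  if (window === null || anchorChanged) {
    // (re)start the window anchored to the current PR revision
    return { startedAt: now, updatedAt: snapshot.updatedAt, headOid: snapshot.headOid };
  }
  return window; // ready and unchanged: keep counting down
}

// Example: a new head commit while ready resets the countdown.
let w = advanceQuietWindow(null, { readyToMerge: true, updatedAt: 't1', headOid: 'a' }, 0);
w = advanceQuietWindow(w, { readyToMerge: true, updatedAt: 't1', headOid: 'a' }, 60000);
console.log(w.startedAt); // 0 (window kept)
w = advanceQuietWindow(w, { readyToMerge: true, updatedAt: 't2', headOid: 'b' }, 120000);
console.log(w.startedAt); // 120000 (window reset)
```

Keeping the anchor as the pair (`updatedAt`, `headOid`) means both new commits and PR metadata changes restart the countdown, which matches the "reset on any activity" intent of the quiet window.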
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@kbediako/codex-orchestrator",
- "version": "0.1.17",
+ "version": "0.1.19",
  "license": "MIT",
  "repository": {
  "type": "git",
@@ -50,6 +50,7 @@ Use this skill when the user asks for brainstorming, tradeoffs, option compariso
  2) Close critical context gaps.
  - Ask up to 3 targeted questions only if answers could change the recommendation.
  - If delegation is available, prefer a subagent for context gathering before asking the user.
+ - If collab spawning fails (for example `agent thread limit reached`), proceed solo and explicitly note the limitation; do not block on spawning.

  3) Generate distinct options.
  - Produce 3-5 materially different options.
@@ -83,3 +84,4 @@ Use this skill when the user asks for brainstorming, tradeoffs, option compariso
  - Do not present uncertainty as certainty.
  - Keep outputs concise and action-oriented.
  - If collab subagents are used, close lifecycle loops per id (`spawn_agent` -> `wait` -> `close_agent`) before finishing.
+ - If you cannot close collab agents (missing ids) and spawn keeps failing, restart the session and re-run deliberation; keep work moving by doing solo deliberation meanwhile.
@@ -81,6 +81,16 @@ delegate.spawn({
  })
  ```

+ ## Collab lifecycle hygiene (required)
+
+ When using collab tools (`spawn_agent` / `wait` / `close_agent`):
+
+ - Treat each spawned `agent_id` as a resource that must be closed.
+ - For every successful spawn, run `wait` then `close_agent` for the same id.
+ - Keep a local list of spawned ids and run a final cleanup pass before returning.
+ - On timeout/error paths, still close known ids before reporting failure.
+ - If you see `agent thread limit reached`, stop spawning immediately, close known ids, and retry only after cleanup.
+
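The hygiene rules above amount to try/finally resource management around spawned ids. As a minimal sketch (not part of the package; `collab` here is a hypothetical stand-in object for whatever client exposes `spawn_agent` / `wait` / `close_agent` in your environment):

```javascript
// Sketch: fan out prompts to collab subagents, guaranteeing close_agent runs
// for every spawned id even on timeout/error paths.
async function fanOut(collab, prompts) {
  const spawned = []; // local list of spawned ids, kept for the cleanup pass
  try {
    for (const prompt of prompts) {
      const { agent_id } = await collab.spawn_agent({ message: prompt });
      spawned.push(agent_id);
    }
    // wait on every id before closing
    return await Promise.all(spawned.map((id) => collab.wait({ agent_id: id })));
  } finally {
    // Final cleanup pass: close every known id; swallow close errors so the
    // original failure (if any) is the one that surfaces.
    for (const id of spawned) {
      await collab.close_agent({ agent_id: id }).catch(() => {});
    }
  }
}
```

Because the `finally` block only iterates ids that were actually returned by `spawn_agent`, a spawn failure mid-loop still closes the earlier agents, which is what prevents `agent thread limit reached` on subsequent runs.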
  ## RLM budget overrides (recommended defaults)

  If you want deeper recursion or longer wall-clock time for delegated runs, set RLM budgets on the delegation server:
@@ -112,7 +122,11 @@ Delegation MCP expects JSONL. Keep `codex-orchestrator` aligned with the current
  - Check: `codex-orchestrator --version`
  - Update global: `npm i -g @kbediako/codex-orchestrator@latest`
  - Or pin via npx: `npx -y @kbediako/codex-orchestrator@<version> delegate-server`
- - If using a custom Codex fork, fast-forward from `upstream/main` regularly and rebuild to avoid protocol drift.
+ - Stock `codex` is the default path. If using a custom Codex fork, fast-forward from `upstream/main` regularly.
+ - CO repo checkout only (helper is not shipped in npm): `scripts/codex-cli-refresh.sh --repo /path/to/codex --align-only`
+ - CO repo checkout only (managed rebuild helper): `scripts/codex-cli-refresh.sh --repo /path/to/codex --force-rebuild`
+ - Add `--no-push` only when you intentionally want local-only alignment without updating `origin/main`.
+ - npm-safe alternative (no repo helper): `codex-orchestrator codex setup --source /path/to/codex --yes --force`

  ## Common failures

@@ -123,3 +137,4 @@ Delegation MCP expects JSONL. Keep `codex-orchestrator` aligned with the current
  - **Run identifiers**: status/pause/cancel require `manifest_path`; question queue requires `parent_manifest_path`.
  - **Collab payload mismatch**: `spawn_agent` calls fail if they include both `message` and `items`.
  - **Collab depth limits**: recursive collab fan-out can fail near max depth; prefer shallow parent fan-out.
+ - **Collab lifecycle leaks**: missing `close_agent` calls can exhaust thread slots and block future spawns (`agent thread limit reached`).
@@ -20,6 +20,9 @@ Collab multi-agent mode is separate from delegation. For symbolic RLM subcalls t
  - Spawn returns an `agent_id` (thread id). Current TUI collab rendering is id-based; do not depend on custom visible agent names.
  - Subagents spawned through collab run with approval effectively set to `never`; design child tasks to avoid approval/escalation requirements.
  - Collab spawn depth is bounded. Near/at max depth, recursive delegation can fail or collab can be disabled in children; prefer shallow parent fan-out.
+ - **Lifecycle is mandatory:** for every successful `spawn_agent`, run `wait` and then `close_agent` for that same id before task completion.
+ - Keep a local list of spawned ids and run a final cleanup pass so no agent id is left unclosed on timeout/error paths.
+ - If spawn fails with `agent thread limit reached`, stop spawning, close any known ids first, then surface a concise recovery note.

  ## Quick-start workflow (canned)

@@ -78,7 +81,11 @@ For runner + delegation coordination (short `--task` flow), see `docs/delegation
  - Check installed version: `codex-orchestrator --version`
  - Preferred update path: `npm i -g @kbediako/codex-orchestrator@latest`
  - Deterministic pin path (for reproducible environments): `npx -y @kbediako/codex-orchestrator@<version> delegate-server`
- - If using a custom Codex fork, fast-forward it regularly from `upstream/main` and rebuild the managed CLI to avoid delegation/collab protocol drift.
+ - Stock `codex` is the default path. If you use a custom Codex fork, fast-forward it regularly from `upstream/main`.
+ - CO repo checkout only (helper is not shipped in npm): `scripts/codex-cli-refresh.sh --repo /path/to/codex --align-only`
+ - CO repo checkout only (managed rebuild helper): `scripts/codex-cli-refresh.sh --repo /path/to/codex --force-rebuild`
+ - Add `--no-push` only when you intentionally want local-only alignment without updating `origin/main`.
+ - npm-safe alternative (no repo helper): `codex-orchestrator codex setup --source /path/to/codex --yes --force`

  ### 0b) Background terminal bootstrap (required when MCP is disabled)

@@ -174,3 +181,4 @@ repeat:
  - **Missing control files:** delegate tools rely on `control_endpoint.json` in the run directory; older runs may not have it.
  - **Collab payload mismatch:** `spawn_agent` rejects calls that include both `message` and `items`.
  - **Collab UI assumptions:** agent rows/records are id-based today; use explicit stream role text in prompts/artifacts for operator clarity.
+ - **Collab lifecycle leaks:** missing `close_agent` calls accumulate open threads and can trigger `agent thread limit reached`; always finish `spawn -> wait -> close_agent` per id.
@@ -16,6 +16,7 @@ Use this skill when a task needs a spec-driven workflow. The objective is to cre
  - TECH_SPEC: capture technical requirements (use `.agent/task/templates/tech-spec-template.md`; stored under `tasks/specs/<id>-<slug>.md`).
  - ACTION_PLAN: capture sequencing/milestones (use `.agent/task/templates/action-plan-template.md`).
  - Depth scales with scope, but all three docs are required.
+ - For low-risk tiny edits, follow the bounded shortcut in `docs/micro-task-path.md` instead of long-form rewrites (still requires task/spec evidence).

  2) Register the TECH_SPEC and task
  - Add the TECH_SPEC to `tasks/index.json` (including `last_review`).
@@ -1,6 +1,6 @@
  ---
  name: standalone-review
- description: Use for ad-hoc/standalone reviews outside pipelines (fast checks during implementation or before handoff) using `codex review`.
+ description: Use for required periodic cross-check reviews during implementation and before handoff using `codex review`.
  ---

  # Standalone Review
@@ -10,6 +10,17 @@ description: Use for ad-hoc/standalone reviews outside pipelines (fast checks du
  Use this skill when you need a fast, ad-hoc review without running a pipeline or collecting a manifest. It is ideal during implementation or for quick pre-flight checks.
  Before implementation, use it to review the task/spec against the user’s intent and record the approval in the PRD/TECH_SPEC or task notes.

+ ## Auto-trigger policy (required)
+
+ Run this skill automatically whenever any condition is true:
+ - You made code/config/script/test edits since the last standalone review.
+ - You finished a meaningful chunk of work (default: behavior change or about 2+ files touched).
+ - You are about to report completion, propose merge, or answer "what's next?" with recommendations.
+ - You addressed external feedback (PR reviews, bot comments, or CI-fix patches).
+ - 45 minutes of active implementation elapsed without a standalone review.
+
+ If review execution is blocked, record why in task notes, then do manual diff review plus targeted tests before proceeding.
+
  ## Quick start

  Uncommitted diff:
@@ -39,6 +50,7 @@ codex review "Focus on correctness, regressions, edge cases; list missing tests.
  - Keep prompts short, specific, and test-oriented.

  2) Run the review often
+ - Follow the auto-trigger policy above (not optional).
  - Run after each meaningful chunk of work.
  - Prefer targeted focus prompts for WIP reviews.