@link-assistant/hive-mind 1.25.7 → 1.26.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -1,5 +1,32 @@
  # @link-assistant/hive-mind
 
+ ## 1.26.0
+
+ ### Minor Changes
+
+ - d96ae3b: feat: /merge command syncs ready tags between linked PRs and issues (Issue #1367)
+
+ The `/merge` Telegram bot command now syncs the `ready` label between PRs and their linked issues before building the merge queue.
+ - If a PR has the `ready` label and its body links to an issue via standard GitHub closing keywords (fixes/closes/resolves #N), the linked issue also gets the `ready` label
+ - If an issue has the `ready` label and has a clearly linked open PR (found via body search), the PR also gets the `ready` label
+ - Sync happens during `MergeQueueProcessor.initialize()`, before the final list of ready PRs is collected
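The closing-keyword matching described above can be sketched roughly as follows (a minimal illustration only; the package's actual `extractLinkedIssueNumber` helper in `github-linking.lib.mjs` may behave differently):

```javascript
// Minimal sketch of GitHub closing-keyword detection (illustrative only;
// the package's extractLinkedIssueNumber helper may be more thorough).
const extractLinkedIssue = body => {
  const match = /\b(?:fix(?:es|ed)?|close[sd]?|resolve[sd]?)\s+#(\d+)/i.exec(body || '');
  return match ? Number(match[1]) : null; // issue number, or null if no link
};

console.log(extractLinkedIssue('Fixes #1367'));   // 1367
console.log(extractLinkedIssue('Refactor only')); // null
```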
+
+ ## 1.25.8
+
+ ### Patch Changes
+
+ - fix: update system messages to use authenticated curl for private GitHub issue images
+
+ Images attached to GitHub issues/PRs (github.com/user-attachments/assets/\*) require authentication. Without auth, GitHub returns "Not Found" (9 bytes ASCII) with HTTP 200 — a silent failure. The AI would then call Read on the non-image file, encoding "Not Found" as base64, causing Anthropic API to return "Could not process image" (HTTP 400), crashing the session.
+
+ Updated system messages in all 4 prompt files (claude, agent, codex, opencode) to explicitly identify user-attachments URLs as requiring GitHub authentication and provide the exact authenticated curl command using `gh auth token`.
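The failure mode and the check the updated prompts describe can be sketched like this (URL and filename are placeholders; the real download requires an authenticated `gh` session):

```shell
# Sketch of the silent-failure check. For a real GitHub attachment the
# download would be (URL/filename are placeholders):
#   curl -L -H "Authorization: token $(gh auth token)" -o screenshot.png "<url>"
# Simulate the failure: GitHub's 9-byte "Not Found" body saved as a .png.
printf 'Not Found' > screenshot.png

# 'file' reveals the payload is text, not an image; treat the download as failed.
if file screenshot.png | grep -qE 'HTML|ASCII text'; then
  echo "download failed: got text instead of an image"
fi
rm -f screenshot.png
```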
+
+ - fix: auto-restart with --resume on "Request timed out" in --tool claude (Issue #1353)
+
+ When Claude CLI encounters a network timeout, it exhausts its own internal retries and emits a synthetic result event: `{"type":"result","is_error":true,"result":"Request timed out","session_id":"..."}`. Previously hive-mind treated this as a fatal failure and exited, losing all session context (conversation history, cached tokens, partially completed work).
+
+ This fix detects the timeout pattern and automatically retries with `--resume <session-id>` to preserve the session, using exponential backoff starting at 5 minutes (increasing to max 1 hour) — longer than regular API errors since Claude CLI has already exhausted its own retries before reporting the timeout.
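Under the stated parameters, the retry schedule works out as below (a sketch assuming a 2x backoff multiplier; the actual `retryBackoffMultiplier` value is configurable and not shown in this diff):

```javascript
// Backoff schedule sketch for the timeout retry path (assumes a 2x multiplier;
// the real retryBackoffMultiplier is configurable and may differ).
const initialDelayMs = 5 * 60 * 1000; // 5 minutes
const maxDelayMs = 60 * 60 * 1000;    // 1 hour cap
const delayFor = retry => Math.min(initialDelayMs * Math.pow(2, retry), maxDelayMs);

// Delays in minutes for retries 0..4
console.log([0, 1, 2, 3, 4].map(r => delayFor(r) / 60000)); // [ 5, 10, 20, 40, 60 ]
```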
+
  ## 1.25.7
 
  ### Patch Changes
package/README.md CHANGED
@@ -192,6 +192,28 @@ docker attach hive-mind
  # Run bot here
 
  # Press Ctrl + P, Ctrl + Q to detach without destroying the container (no stopping of main bash process)
+
+ # --- Persisting auth data across restarts ---
+
+ # Extract auth data from a running (or stopped) container to the host:
+ mkdir -p ~/.hive-mind
+ docker cp hive-mind:/home/hive/.claude ~/.hive-mind/claude
+ docker cp hive-mind:/home/hive/.claude.json ~/.hive-mind/claude.json
+ docker cp hive-mind:/home/hive/.config/gh ~/.hive-mind/gh
+
+ # Fix ownership to match the hive user inside the container:
+ HIVE_UID=$(docker exec hive-mind id -u hive)
+ chown -R $HIVE_UID:$HIVE_UID ~/.hive-mind/claude ~/.hive-mind/gh
+ chown $HIVE_UID:$HIVE_UID ~/.hive-mind/claude.json
+
+ # On subsequent runs, mount the auth data to keep it between restarts:
+ docker run -dit \
+ --name hive-mind \
+ --restart unless-stopped \
+ -v /root/.hive-mind/claude:/home/hive/.claude \
+ -v /root/.hive-mind/claude.json:/home/hive/.claude.json \
+ -v /root/.hive-mind/gh:/home/hive/.config/gh \
+ konard/hive-mind:latest
  ```
 
  **Benefits of Docker:**
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@link-assistant/hive-mind",
- "version": "1.25.7",
+ "version": "1.26.0",
  "description": "AI-powered issue solver and hive mind for collaborative problem solving",
  "main": "src/hive.mjs",
  "type": "module",
@@ -144,7 +144,7 @@ ${getExperimentsExamplesSubPrompt(argv)}
  Initial research.
  - When you start, make sure you create detailed plan for yourself and follow your todo list step by step, make sure that as many points from these guidelines are added to your todo list to keep track of everything that can help you solve the issue with highest possible quality.
  - When you read issue, read all details and comments thoroughly.
- - When you see screenshots or images in issue descriptions, pull request descriptions, comments, or discussions, use WebFetch tool to download the image first, then use Read tool to view and analyze it.
+ - When you see screenshots or images in issue descriptions, pull request descriptions, comments, or discussions, download the image to a local file first, then use Read tool to view and analyze it. IMPORTANT: Before reading downloaded images with the Read tool, verify the file is a valid image (not HTML). Use a CLI tool like 'file' command to check the actual file format. If the file command shows "HTML", "text", or "ASCII text", the download FAILED — do NOT call Read on this file. For images from GitHub issues/PRs (URLs containing "github.com/user-attachments"), these require authentication — use: curl -L -H "Authorization: token $(gh auth token)" -o <filename> "<url>"
  - When you need issue details, use gh issue view https://github.com/${owner}/${repo}/issues/${issueNumber}.
  - When you need related code, use gh search code --owner ${owner} [keywords].
  - When you need repo context, read files in your working directory.${
@@ -378,12 +378,6 @@ export const executeClaude = async params => {
  prNumber,
  });
  };
- /**
- * Calculate total token usage from a session's JSONL file
- * @param {string} sessionId - The session ID
- * @param {string} tempDir - The temporary directory where the session ran
- * @returns {Object} Token usage statistics
- */
  /**
  * Fetches model information from pricing API
  * @param {string} modelId - The model ID (e.g., "claude-sonnet-4-5-20250929")
@@ -845,6 +839,7 @@ export const executeClaudeCommand = async params => {
  let isOverloadError = false;
  let is503Error = false;
  let isInternalServerError = false; // Issue #1331: Track 500 Internal server error
+ let isRequestTimeout = false; // Issue #1353: Track "Request timed out" from Claude CLI
  let stderrErrors = [];
  let resultSuccessReceived = false; // Issue #1354: Track if result success event was received
  let anthropicTotalCostUSD = null; // Capture Anthropic's official total_cost_usd from result
@@ -1050,6 +1045,11 @@ export const executeClaudeCommand = async params => {
  if (lastMessage.includes('Internal server error') && !lastMessage.includes('Overloaded')) {
  isInternalServerError = true;
  }
+ // Issue #1353: Detect "Request timed out" — Claude CLI emits {type:"result",is_error:true,result:"Request timed out"} after exhausting retries
+ if (lastMessage === 'Request timed out' || lastMessage.includes('Request timed out')) {
+ isRequestTimeout = true;
+ await log('⏱️ Detected request timeout from Claude CLI (will retry with --resume)', { verbose: true });
+ }
  }
  }
  // Store last message for error detection
@@ -1082,6 +1082,12 @@ export const executeClaudeCommand = async params => {
  lastMessage = item.text;
  await log('⚠️ Detected 503 network error', { verbose: true });
  }
+ // Issue #1353: Detect "Request timed out" in assistant text content
+ if (item.text === 'Request timed out' || item.text.includes('Request timed out')) {
+ isRequestTimeout = true;
+ lastMessage = item.text;
+ await log('⏱️ Detected request timeout in assistant message (will retry with --resume)', { verbose: true });
+ }
  }
  }
  }
@@ -1188,13 +1194,19 @@ export const executeClaudeCommand = async params => {
  }
 
  // Issue #1331: Unified handler for all transient API errors (Overloaded, 503, Internal Server Error)
- // All use same params: 10 retries, 1min initial, 30min max, exponential backoff, session preserved
- const isTransientError = isOverloadError || isInternalServerError || is503Error || (lastMessage.includes('API Error: 500') && (lastMessage.includes('Overloaded') || lastMessage.includes('Internal server error'))) || (lastMessage.includes('api_error') && lastMessage.includes('Overloaded')) || lastMessage.includes('API Error: 503') || (lastMessage.includes('503') && (lastMessage.includes('upstream connect error') || lastMessage.includes('remote connection failure')));
+ // Issue #1353: Also handle "Request timed out", which Claude CLI emits after exhausting its own retries
+ // All use exponential backoff with session preservation via --resume
+ const isTransientError = isOverloadError || isInternalServerError || is503Error || isRequestTimeout || (lastMessage.includes('API Error: 500') && (lastMessage.includes('Overloaded') || lastMessage.includes('Internal server error'))) || (lastMessage.includes('api_error') && lastMessage.includes('Overloaded')) || lastMessage.includes('API Error: 503') || (lastMessage.includes('503') && (lastMessage.includes('upstream connect error') || lastMessage.includes('remote connection failure'))) || lastMessage === 'Request timed out' || lastMessage.includes('Request timed out');
  if ((commandFailed || isTransientError) && isTransientError) {
- if (retryCount < retryLimits.maxTransientErrorRetries) {
- const delay = Math.min(retryLimits.initialTransientErrorDelayMs * Math.pow(retryLimits.retryBackoffMultiplier, retryCount), retryLimits.maxTransientErrorDelayMs);
- const errorLabel = isOverloadError || (lastMessage.includes('API Error: 500') && lastMessage.includes('Overloaded')) ? 'API overload (500)' : isInternalServerError || lastMessage.includes('Internal server error') ? 'Internal server error (500)' : '503 network error';
- await log(`\n⚠️ ${errorLabel} detected. Retry ${retryCount + 1}/${retryLimits.maxTransientErrorRetries} in ${Math.round(delay / 60000)} min (session preserved)...`, { level: 'warning' });
+ // Issue #1353: Use timeout-specific backoff params (5min–1hr) vs general transient params (1min–30min)
+ // Timeouts indicate network instability; Claude CLI already exhausted its own retries, so we need longer waits
+ const maxRetries = isRequestTimeout ? retryLimits.maxRequestTimeoutRetries : retryLimits.maxTransientErrorRetries;
+ const initialDelay = isRequestTimeout ? retryLimits.initialRequestTimeoutDelayMs : retryLimits.initialTransientErrorDelayMs;
+ const maxDelay = isRequestTimeout ? retryLimits.maxRequestTimeoutDelayMs : retryLimits.maxTransientErrorDelayMs;
+ if (retryCount < maxRetries) {
+ const delay = Math.min(initialDelay * Math.pow(retryLimits.retryBackoffMultiplier, retryCount), maxDelay);
+ const errorLabel = isRequestTimeout ? 'Request timeout' : isOverloadError || (lastMessage.includes('API Error: 500') && lastMessage.includes('Overloaded')) ? 'API overload (500)' : isInternalServerError || lastMessage.includes('Internal server error') ? 'Internal server error (500)' : '503 network error';
+ await log(`\n⚠️ ${errorLabel} detected. Retry ${retryCount + 1}/${maxRetries} in ${Math.round(delay / 60000)} min (session preserved)...`, { level: 'warning' });
  await log(` Error: ${lastMessage.substring(0, 200)}`, { verbose: true });
  if (sessionId && !argv.resume) argv.resume = sessionId; // preserve session for resume
  await waitWithCountdown(delay, log);
@@ -1202,7 +1214,7 @@ export const executeClaudeCommand = async params => {
  retryCount++;
  return await executeWithRetry();
  } else {
- await log(`\n\n❌ Transient API error persisted after ${retryLimits.maxTransientErrorRetries} retries\n Please try again later or check https://status.anthropic.com/`, { level: 'error' });
+ await log(`\n\n❌ Transient API error persisted after ${maxRetries} retries\n Please try again later or check https://status.anthropic.com/`, { level: 'error' });
  return {
  success: false,
  sessionId,
@@ -1247,28 +1259,9 @@ export const executeClaudeCommand = async params => {
  }
  }
  }
- // Additional failure detection: if no messages were processed and there were stderr errors,
- // or if the command produced no output at all, treat it as a failure
- //
- // This is critical for detecting "silent failures" where:
- // 1. Claude CLI encounters an internal error (e.g., "kill EPERM" from timeout)
- // 2. The error is logged to stderr but exit code is 0 or exit event is never sent
- // 3. Result: messageCount=0, toolUseCount=0, but stderrErrors has content
- //
- // Common cause: sudo commands that timeout
- // - Timeout triggers process.kill() in Claude CLI
- // - If child process runs with sudo (root), parent can't kill it → EPERM error
- // - Error logged to stderr, but command doesn't properly fail
- //
- // Workaround (applied in system prompt):
- // - Instruct Claude to run sudo commands (installations) in background
- // - Background processes avoid timeout kill mechanism
- // - Prevents EPERM errors and false success reports
- //
- // See: docs/dependencies-research/claude-code-issues/README.md for full details
- // Issue #1354: Do not trigger if the result event already confirmed success.
- // A successful result event is definitive proof the command succeeded, regardless of
- // messageCount (which may be 0 if "assistant" events were counted instead of "message" type).
+ // Additional failure detection: silent failures (no messages + stderr errors).
+ // E.g., sudo timeout causing "kill EPERM" stderr error but exit code 0.
+ // Issue #1354: Skip if result event confirmed success (definitive proof regardless of messageCount).
  if (!commandFailed && !resultSuccessReceived && stderrErrors.length > 0 && messageCount === 0 && toolUseCount === 0) {
  commandFailed = true;
  const errorsPreview = stderrErrors
@@ -1377,13 +1370,19 @@ export const executeClaudeCommand = async params => {
  });
  const errorStr = error.message || error.toString();
  // Issue #1331: Unified handler for all transient API errors in exception block
- // (Overloaded, 503, Internal Server Error) - same params, all with session preservation
- const isTransientException = (errorStr.includes('API Error: 500') && (errorStr.includes('Overloaded') || errorStr.includes('Internal server error'))) || (errorStr.includes('api_error') && errorStr.includes('Overloaded')) || errorStr.includes('API Error: 503') || (errorStr.includes('503') && (errorStr.includes('upstream connect error') || errorStr.includes('remote connection failure')));
+ // Issue #1353: Also handle "Request timed out" in exception block
+ // (Overloaded, 503, Internal Server Error, Request timed out) - all with session preservation
+ const isTimeoutException = errorStr === 'Request timed out' || errorStr.includes('Request timed out');
+ const isTransientException = isTimeoutException || (errorStr.includes('API Error: 500') && (errorStr.includes('Overloaded') || errorStr.includes('Internal server error'))) || (errorStr.includes('api_error') && errorStr.includes('Overloaded')) || errorStr.includes('API Error: 503') || (errorStr.includes('503') && (errorStr.includes('upstream connect error') || errorStr.includes('remote connection failure')));
  if (isTransientException) {
- if (retryCount < retryLimits.maxTransientErrorRetries) {
- const delay = Math.min(retryLimits.initialTransientErrorDelayMs * Math.pow(retryLimits.retryBackoffMultiplier, retryCount), retryLimits.maxTransientErrorDelayMs);
- const errorLabel = errorStr.includes('Overloaded') ? 'API overload (500)' : errorStr.includes('Internal server error') ? 'Internal server error (500)' : '503 network error';
- await log(`\n⚠️ ${errorLabel} in exception. Retry ${retryCount + 1}/${retryLimits.maxTransientErrorRetries} in ${Math.round(delay / 60000)} min (session preserved)...`, { level: 'warning' });
+ // Issue #1353: Use timeout-specific backoff for request timeouts
+ const maxRetries = isTimeoutException ? retryLimits.maxRequestTimeoutRetries : retryLimits.maxTransientErrorRetries;
+ const initialDelay = isTimeoutException ? retryLimits.initialRequestTimeoutDelayMs : retryLimits.initialTransientErrorDelayMs;
+ const maxDelay = isTimeoutException ? retryLimits.maxRequestTimeoutDelayMs : retryLimits.maxTransientErrorDelayMs;
+ if (retryCount < maxRetries) {
+ const delay = Math.min(initialDelay * Math.pow(retryLimits.retryBackoffMultiplier, retryCount), maxDelay);
+ const errorLabel = isTimeoutException ? 'Request timeout' : errorStr.includes('Overloaded') ? 'API overload (500)' : errorStr.includes('Internal server error') ? 'Internal server error (500)' : '503 network error';
+ await log(`\n⚠️ ${errorLabel} in exception. Retry ${retryCount + 1}/${maxRetries} in ${Math.round(delay / 60000)} min (session preserved)...`, { level: 'warning' });
  if (sessionId && !argv.resume) argv.resume = sessionId;
  await waitWithCountdown(delay, log);
  await log('\n🔄 Retrying now...');
@@ -1476,15 +1475,5 @@ export const checkForUncommittedChanges = async (tempDir, owner, repo, branchNam
  }
  };
  // Export all functions as default object too
- export default {
- validateClaudeConnection,
- handleClaudeRuntimeSwitch,
- executeClaude,
- executeClaudeCommand,
- checkForUncommittedChanges,
- calculateSessionTokens,
- getClaudeVersion,
- setClaudeVersion,
- resolveThinkingSettings,
- checkModelVisionCapability,
- };
+ // prettier-ignore
+ export default { validateClaudeConnection, handleClaudeRuntimeSwitch, executeClaude, executeClaudeCommand, checkForUncommittedChanges, calculateSessionTokens, getClaudeVersion, setClaudeVersion, resolveThinkingSettings, checkModelVisionCapability };
@@ -171,7 +171,7 @@ Initial research.
  Initial research.
  - When you start, make sure you create detailed plan for yourself and follow your todo list step by step, make sure that as many points from these guidelines are added to your todo list to keep track of everything that can help you solve the issue with highest possible quality.
  - When user mentions CI failures or asks to investigate logs, consider adding these todos to track the investigation: (1) List recent CI runs with timestamps, (2) Download logs from failed runs to ci-logs/ directory, (3) Analyze error messages and identify root cause, (4) Implement fix, (5) Verify fix resolves the specific errors found in logs.
  - When you read issue, read all details and comments thoroughly.
- - When you see screenshots or images in issue descriptions, pull request descriptions, comments, or discussions, use WebFetch tool (or fetch tool) to download the image first, then use Read tool to view and analyze it. IMPORTANT: Before reading downloaded images with the Read tool, verify the file is a valid image (not HTML). Use a CLI tool like 'file' command to check the actual file format. Reading corrupted or non-image files (like GitHub's HTML 404 pages saved as .png) can cause "Could not process image" errors and may crash the AI solver process. If the file command shows "HTML" or "text", the download failed and you should retry or skip the image.
+ - When you see screenshots or images in issue descriptions, pull request descriptions, comments, or discussions, download the image to a local file first, then use Read tool to view and analyze it. IMPORTANT: Before reading downloaded images with the Read tool, verify the file is a valid image (not HTML). Use a CLI tool like 'file' command to check the actual file format. Reading corrupted or non-image files (like GitHub's "Not Found" pages saved as .png) can cause "Could not process image" errors and will crash the AI solver process. If the file command shows "HTML", "text", or "ASCII text", the download FAILED; do NOT call Read on this file. Instead: (1) For images from GitHub issues/PRs (URLs containing "github.com/user-attachments"), these require authentication — retry with: curl -L -H "Authorization: token $(gh auth token)" -o <filename> "<url>" (2) If retry still fails, skip the image and note it was unavailable.
  - When you need issue details, use gh issue view https://github.com/${owner}/${repo}/issues/${issueNumber}.
  - When you need related code, use gh search code --owner ${owner} [keywords].
  - When you need repo context, read files in your working directory.${
@@ -152,7 +152,7 @@ Initial research.
  Initial research.
  - When you start, make sure you create detailed plan for yourself and follow your todo list step by step, make sure that as many points from these guidelines are added to your todo list to keep track of everything that can help you solve the issue with highest possible quality.
  - When user mentions CI failures or asks to investigate logs, consider adding these todos to track the investigation: (1) List recent CI runs with timestamps, (2) Download logs from failed runs to ci-logs/ directory, (3) Analyze error messages and identify root cause, (4) Implement fix, (5) Verify fix resolves the specific errors found in logs.
  - When you read issue, read all details and comments thoroughly.
- - When you see screenshots or images in issue descriptions, pull request descriptions, comments, or discussions, use WebFetch tool (or fetch tool) to download the image first, then use Read tool to view and analyze it.
+ - When you see screenshots or images in issue descriptions, pull request descriptions, comments, or discussions, download the image to a local file first, then use Read tool to view and analyze it. IMPORTANT: Before reading downloaded images with the Read tool, verify the file is a valid image (not HTML). Use a CLI tool like 'file' command to check the actual file format. If the file command shows "HTML", "text", or "ASCII text", the download FAILED — do NOT call Read on this file. For images from GitHub issues/PRs (URLs containing "github.com/user-attachments"), these require authentication — use: curl -L -H "Authorization: token $(gh auth token)" -o <filename> "<url>"
  - When you need issue details, use gh issue view https://github.com/${owner}/${repo}/issues/${issueNumber}.
  - When you need related code, use gh search code --owner ${owner} [keywords].
  - When you need repo context, read files in your working directory.${
@@ -103,6 +103,11 @@ export const retryLimits = {
  maxTransientErrorRetries: parseIntWithDefault('HIVE_MIND_MAX_TRANSIENT_ERROR_RETRIES', 10),
  initialTransientErrorDelayMs: parseIntWithDefault('HIVE_MIND_INITIAL_TRANSIENT_ERROR_DELAY_MS', 60 * 1000), // 1 minute
  maxTransientErrorDelayMs: parseIntWithDefault('HIVE_MIND_MAX_TRANSIENT_ERROR_DELAY_MS', 30 * 60 * 1000), // 30 minutes
+ // Request timeout retry configuration (Issue #1353)
+ // Network timeouts need longer waits than API errors — Claude CLI already exhausted its own retries
+ maxRequestTimeoutRetries: parseIntWithDefault('HIVE_MIND_MAX_REQUEST_TIMEOUT_RETRIES', 10),
+ initialRequestTimeoutDelayMs: parseIntWithDefault('HIVE_MIND_INITIAL_REQUEST_TIMEOUT_DELAY_MS', 5 * 60 * 1000), // 5 minutes
+ maxRequestTimeoutDelayMs: parseIntWithDefault('HIVE_MIND_MAX_REQUEST_TIMEOUT_DELAY_MS', 60 * 60 * 1000), // 1 hour
  };
 
  // Claude Code CLI configurations
@@ -19,6 +19,9 @@ const exec = promisify(execCallback);
  // Import GitHub URL parser
  import { parseGitHubUrl } from './github.lib.mjs';
 
+ // Import linking utilities
+ import { extractLinkedIssueNumber } from './github-linking.lib.mjs';
+
  // Default label configuration
  export const READY_LABEL = {
  name: 'ready',
@@ -251,6 +254,172 @@ export async function fetchReadyIssuesWithPRs(owner, repo, verbose = false) {
251
254
  }
252
255
  }
253
256
 
257
+ /**
258
+ * Add a label to a GitHub issue or pull request
259
+ * @param {'issue'|'pr'} type - Whether to add to issue or PR
260
+ * @param {string} owner - Repository owner
261
+ * @param {string} repo - Repository name
262
+ * @param {number} number - Issue or PR number
263
+ * @param {string} labelName - Label name to add
264
+ * @param {boolean} verbose - Whether to log verbose output
265
+ * @returns {Promise<{success: boolean, error: string|null}>}
266
+ */
267
+ async function addLabel(type, owner, repo, number, labelName, verbose = false) {
268
+ const cmd = type === 'issue' ? 'issue' : 'pr';
269
+ try {
270
+ await exec(`gh ${cmd} edit ${number} --repo ${owner}/${repo} --add-label "${labelName}"`);
271
+ if (verbose) console.log(`[VERBOSE] /merge: Added '${labelName}' label to ${type} #${number}`);
272
+ return { success: true, error: null };
273
+ } catch (error) {
274
+ if (verbose) console.log(`[VERBOSE] /merge: Failed to add label to ${type} #${number}: ${error.message}`);
275
+ return { success: false, error: error.message };
276
+ }
277
+ }
278
+
279
+ /**
280
+ * Sync 'ready' tags between linked pull requests and issues
281
+ *
282
+ * Issue #1367: Before building the merge queue, ensure that:
283
+ * 1. If a PR has 'ready' label and is clearly linked to an issue (via standard GitHub
284
+ * keywords in the PR body/title), the issue also gets 'ready' label.
285
+ * 2. If an issue has 'ready' label and has a clearly linked open PR, the PR also gets
286
+ * 'ready' label.
287
+ *
288
+ * This ensures the final list of ready PRs reflects all ready work, regardless of
289
+ * where the 'ready' label was originally applied.
290
+ *
291
+ * @param {string} owner - Repository owner
292
+ * @param {string} repo - Repository name
293
+ * @param {boolean} verbose - Whether to log verbose output
294
+ * @returns {Promise<{synced: number, errors: number, details: Array<Object>}>}
295
+ */
296
+ export async function syncReadyTags(owner, repo, verbose = false) {
297
+ const synced = [];
298
+ const errors = [];
299
+
300
+ if (verbose) {
301
+ console.log(`[VERBOSE] /merge: Syncing 'ready' tags for ${owner}/${repo}...`);
302
+ }
303
+
304
+ try {
305
+ // Fetch open PRs with 'ready' label (including body for link detection)
306
+ const { stdout: prsJson } = await exec(`gh pr list --repo ${owner}/${repo} --label "${READY_LABEL.name}" --state open --json number,title,body,labels --limit 100`);
307
+ const readyPRs = JSON.parse(prsJson.trim() || '[]');
308
+
309
+ if (verbose) {
310
+ console.log(`[VERBOSE] /merge: Found ${readyPRs.length} open PRs with 'ready' label for tag sync`);
311
+ }
312
+
313
+ // Fetch open issues with 'ready' label
314
+ const { stdout: issuesJson } = await exec(`gh issue list --repo ${owner}/${repo} --label "${READY_LABEL.name}" --state open --json number,title --limit 100`);
315
+ const readyIssues = JSON.parse(issuesJson.trim() || '[]');
316
+
317
+ if (verbose) {
318
+ console.log(`[VERBOSE] /merge: Found ${readyIssues.length} open issues with 'ready' label for tag sync`);
319
+ }
320
+
321
+ // Build a set of issue numbers that already have 'ready'
322
+ const readyIssueNumbers = new Set(readyIssues.map(i => String(i.number)));
323
+
324
+ // Step 1: For each PR with 'ready', find linked issue and sync label to it
325
+ for (const pr of readyPRs) {
326
+ try {
327
+ const prBody = pr.body || '';
328
+ const linkedIssueNumber = extractLinkedIssueNumber(prBody);
329
+
330
+ if (!linkedIssueNumber) {
331
+ if (verbose) {
332
+ console.log(`[VERBOSE] /merge: PR #${pr.number} has no linked issue (no closing keyword in body)`);
333
+ }
334
+ continue;
335
+ }
336
+
337
+ if (readyIssueNumbers.has(String(linkedIssueNumber))) {
338
+ if (verbose) {
339
+ console.log(`[VERBOSE] /merge: Issue #${linkedIssueNumber} already has 'ready' label (linked from PR #${pr.number})`);
340
+ }
341
+ continue;
342
+ }
343
+
344
+ // Issue doesn't have 'ready' label yet - add it
345
+ if (verbose) {
346
+ console.log(`[VERBOSE] /merge: PR #${pr.number} has 'ready', adding to linked issue #${linkedIssueNumber}`);
347
+ }
348
+
349
+ const result = await addLabel('issue', owner, repo, linkedIssueNumber, READY_LABEL.name, verbose);
350
+ if (result.success) {
351
+ synced.push({ type: 'pr-to-issue', prNumber: pr.number, issueNumber: Number(linkedIssueNumber) });
352
+ // Mark this issue as now having 'ready' so we don't process it again
353
+ readyIssueNumbers.add(String(linkedIssueNumber));
354
+ } else {
355
+ errors.push({ type: 'pr-to-issue', prNumber: pr.number, issueNumber: Number(linkedIssueNumber), error: result.error });
356
+ }
357
+ } catch (err) {
358
+ if (verbose) {
359
+ console.log(`[VERBOSE] /merge: Error syncing label from PR #${pr.number}: ${err.message}`);
360
+ }
361
+ errors.push({ type: 'pr-to-issue', prNumber: pr.number, error: err.message });
362
+ }
363
+ }
364
+
365
+ // Build a set of PR numbers that already have 'ready'
366
+   const readyPRNumbers = new Set(readyPRs.map(p => String(p.number)));
+
+   // Step 2: For each issue with 'ready', find linked PRs and sync label to them
+   for (const issue of readyIssues) {
+     try {
+       // Search for open PRs linked to this issue via closing keywords
+       const { stdout: linkedPRsJson } = await exec(`gh pr list --repo ${owner}/${repo} --search "in:body closes #${issue.number} OR fixes #${issue.number} OR resolves #${issue.number}" --state open --json number,title,labels --limit 10`);
+       const linkedPRs = JSON.parse(linkedPRsJson.trim() || '[]');
+
+       for (const linkedPR of linkedPRs) {
+         if (readyPRNumbers.has(String(linkedPR.number))) {
+           if (verbose) {
+             console.log(`[VERBOSE] /merge: PR #${linkedPR.number} already has 'ready' label (linked from issue #${issue.number})`);
+           }
+           continue;
+         }
+
+         // PR doesn't have 'ready' label yet - add it
+         if (verbose) {
+           console.log(`[VERBOSE] /merge: Issue #${issue.number} has 'ready', adding to linked PR #${linkedPR.number}`);
+         }
+
+         const result = await addLabel('pr', owner, repo, linkedPR.number, READY_LABEL.name, verbose);
+         if (result.success) {
+           synced.push({ type: 'issue-to-pr', issueNumber: issue.number, prNumber: linkedPR.number });
+           // Mark this PR as now having 'ready'
+           readyPRNumbers.add(String(linkedPR.number));
+         } else {
+           errors.push({ type: 'issue-to-pr', issueNumber: issue.number, prNumber: linkedPR.number, error: result.error });
+         }
+       }
+     } catch (err) {
+       if (verbose) {
+         console.log(`[VERBOSE] /merge: Error syncing label from issue #${issue.number}: ${err.message}`);
+       }
+       errors.push({ type: 'issue-to-pr', issueNumber: issue.number, error: err.message });
+     }
+   }
+ } catch (error) {
+   if (verbose) {
+     console.log(`[VERBOSE] /merge: Error during tag sync: ${error.message}`);
+   }
+   errors.push({ type: 'fetch', error: error.message });
+ }
+
+ if (verbose) {
+   console.log(`[VERBOSE] /merge: Tag sync complete. Synced: ${synced.length}, Errors: ${errors.length}`);
+ }
+
+ return {
+   synced: synced.length,
+   errors: errors.length,
+   details: synced,
+   errorDetails: errors,
+ };
+ }
+
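The `gh pr list --search` query above matches GitHub's closing keywords (fixes/closes/resolves #N) in PR bodies. As a rough illustration of what that linkage means, here is a hypothetical helper (not part of the package, which delegates matching to GitHub search instead of parsing bodies itself) that extracts linked issue numbers from a PR body using the same keyword set:

```javascript
// Hypothetical helper, not the package's code: find issue numbers a PR body
// links via GitHub closing keywords (close/closes/closed, fix/fixes/fixed,
// resolve/resolves/resolved), mirroring the search terms used above.
function extractLinkedIssueNumbers(prBody) {
  const pattern = /\b(?:close[sd]?|fix(?:e[sd])?|resolve[sd]?)\s+#(\d+)/gi;
  const numbers = new Set();
  for (const match of prBody.matchAll(pattern)) {
    numbers.add(Number(match[1]));
  }
  return [...numbers];
}
```

Note that GitHub's actual linking also accepts `owner/repo#N` cross-repo references, which this sketch ignores.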
  /**
   * Get combined list of ready PRs (from both direct PR labels and issue labels)
   * @param {string} owner - Repository owner
@@ -1283,6 +1452,8 @@ export default {
  fetchReadyPullRequests,
  fetchReadyIssuesWithPRs,
  getAllReadyPRs,
+ // Issue #1367: Sync 'ready' tags between linked PRs and issues
+ syncReadyTags,
  checkPRCIStatus,
  checkPRMergeable,
  checkMergePermissions,
@@ -146,7 +146,7 @@ ${workspaceInstructions}
  Initial research.
  - When you start, make sure you create detailed plan for yourself and follow your todo list step by step, make sure that as many points from these guidelines are added to your todo list to keep track of everything that can help you solve the issue with highest possible quality.
  - When you read issue, read all details and comments thoroughly.
- - When you see screenshots or images in issue descriptions, pull request descriptions, comments, or discussions, use WebFetch tool to download the image first, then use Read tool to view and analyze it. IMPORTANT: Before reading downloaded images with the Read tool, verify the file is a valid image (not HTML). Use a CLI tool like 'file' command to check the actual file format. Reading corrupted or non-image files (like GitHub's HTML 404 pages saved as .png) can cause "Could not process image" errors and may crash the AI solver process. If the file command shows "HTML" or "text", the download failed and you should retry or skip the image.
+ - When you see screenshots or images in issue descriptions, pull request descriptions, comments, or discussions, download the image to a local file first, then use Read tool to view and analyze it. IMPORTANT: Before reading downloaded images with the Read tool, verify the file is a valid image (not HTML). Use a CLI tool like 'file' command to check the actual file format. Reading corrupted or non-image files (like GitHub's "Not Found" pages saved as .png) can cause "Could not process image" errors and will crash the AI solver process. If the file command shows "HTML", "text", or "ASCII text", the download FAILED do NOT call Read on this file. Instead: (1) For images from GitHub issues/PRs (URLs containing "github.com/user-attachments"), these require authentication — retry with: curl -L -H "Authorization: token $(gh auth token)" -o <filename> "<url>" (2) If retry still fails, skip the image and note it was unavailable.
  - When you need issue details, use gh issue view https://github.com/${owner}/${repo}/issues/${issueNumber}.
  - When you need related code, use gh search code --owner ${owner} [keywords].
  - When you need repo context, read files in your working directory.${
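The prompt change above guards against a specific silent failure: an unauthenticated download of a `github.com/user-attachments` URL returns a 9-byte ASCII "Not Found" body with HTTP 200. A minimal sketch (not the package's code) of the kind of byte-level check the prompt describes, as an alternative to shelling out to the `file` command:

```javascript
// Sketch, not the package's code: decide whether a downloaded buffer is a real
// image before handing it to an image reader. An unauthenticated GitHub
// user-attachments download can be the 9-byte ASCII body "Not Found" served
// with HTTP 200, which magic-byte checks catch immediately.
function looksLikeImage(buf) {
  // PNG: 0x89 'P' 'N' 'G'
  if (buf.length >= 8 && buf[0] === 0x89 && buf[1] === 0x50 && buf[2] === 0x4e && buf[3] === 0x47) return true;
  // JPEG: 0xFF 0xD8 0xFF
  if (buf.length >= 3 && buf[0] === 0xff && buf[1] === 0xd8 && buf[2] === 0xff) return true;
  // GIF: 'G' 'I' 'F'
  if (buf.length >= 6 && buf.slice(0, 3).toString('ascii') === 'GIF') return true;
  return false; // e.g. ASCII "Not Found" or an HTML error page
}
```

If this check fails, the retry the prompt prescribes is the authenticated download: `curl -L -H "Authorization: token $(gh auth token)" -o <filename> "<url>"`.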
@@ -16,7 +16,7 @@
   * @see https://github.com/link-assistant/hive-mind/issues/1143
   */

- import { getAllReadyPRs, checkPRCIStatus, checkPRMergeable, mergePullRequest, waitForCI, ensureReadyLabel, waitForBranchCI, getDefaultBranch, waitForCommitCI, checkBranchCIHealth, getMergeCommitSha } from './github-merge.lib.mjs';
+ import { getAllReadyPRs, checkPRCIStatus, checkPRMergeable, mergePullRequest, waitForCI, ensureReadyLabel, waitForBranchCI, getDefaultBranch, waitForCommitCI, checkBranchCIHealth, getMergeCommitSha, syncReadyTags } from './github-merge.lib.mjs';
  import { mergeQueue as mergeQueueConfig } from './config.lib.mjs';
  import { getProgressBar } from './limits.lib.mjs';

@@ -197,6 +197,16 @@ export class MergeQueueProcessor {
    this.log("Created 'ready' label in repository");
  }

+ // Issue #1367: Sync 'ready' tags between linked PRs and issues before collecting the queue
+ // This ensures the final list reflects all ready work regardless of where the tag was applied
+ const syncResult = await syncReadyTags(this.owner, this.repo, this.verbose);
+ if (syncResult.synced > 0) {
+   this.log(`Synced 'ready' tag: ${syncResult.synced} item(s) updated`);
+ }
+ if (syncResult.errors > 0) {
+   this.log(`Tag sync had ${syncResult.errors} error(s) (non-fatal, proceeding)`);
+ }
+
  // Fetch all ready PRs
  const readyPRs = await getAllReadyPRs(this.owner, this.repo, this.verbose);
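In the hunk above, `initialize()` treats the sync result as advisory: sync errors are logged but never abort queue collection. A small sketch of that reporting logic in isolation, using a hypothetical `summarizeSync` helper (not in the package) and the `{ synced, errors }` result shape returned by `syncReadyTags`:

```javascript
// Hypothetical helper, not the package's code: render the same log lines that
// MergeQueueProcessor.initialize() emits from a syncReadyTags result. Errors
// only produce a warning line; nothing here throws, so queue collection
// proceeds regardless.
function summarizeSync(syncResult) {
  const lines = [];
  if (syncResult.synced > 0) {
    lines.push(`Synced 'ready' tag: ${syncResult.synced} item(s) updated`);
  }
  if (syncResult.errors > 0) {
    lines.push(`Tag sync had ${syncResult.errors} error(s) (non-fatal, proceeding)`);
  }
  return lines;
}
```

With `{ synced: 0, errors: 0 }` the sync is silent, which matches the diff: neither `this.log` call fires on a no-op sync.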