opencodekit 0.20.7 → 0.21.0
This diff compares publicly available package versions as released to a supported registry. It is provided for informational purposes only and reflects the changes between those versions as they appear in their respective public registries.
- package/dist/index.js +1 -1
- package/dist/template/.opencode/AGENTS.md +60 -0
- package/dist/template/.opencode/agent/build.md +3 -2
- package/dist/template/.opencode/agent/explore.md +14 -14
- package/dist/template/.opencode/agent/general.md +1 -1
- package/dist/template/.opencode/agent/plan.md +1 -1
- package/dist/template/.opencode/agent/review.md +1 -1
- package/dist/template/.opencode/agent/vision.md +0 -9
- package/dist/template/.opencode/memory.db +0 -0
- package/dist/template/.opencode/memory.db-shm +0 -0
- package/dist/template/.opencode/memory.db-wal +0 -0
- package/dist/template/.opencode/opencode.json +83 -614
- package/dist/template/.opencode/opencodex-fast.jsonc +1 -1
- package/dist/template/.opencode/package.json +1 -1
- package/dist/template/.opencode/plugin/copilot-auth.ts +27 -12
- package/dist/template/.opencode/plugin/prompt-leverage.ts +193 -0
- package/dist/template/.opencode/plugin/prompt-leverage.ts.bak +228 -0
- package/dist/template/.opencode/plugin/sdk/copilot/copilot-provider.ts +14 -2
- package/dist/template/.opencode/plugin/sdk/copilot/index.ts +2 -2
- package/dist/template/.opencode/plugin/sdk/copilot/responses/convert-to-openai-responses-input.ts +335 -0
- package/dist/template/.opencode/plugin/sdk/copilot/responses/map-openai-responses-finish-reason.ts +22 -0
- package/dist/template/.opencode/plugin/sdk/copilot/responses/openai-config.ts +18 -0
- package/dist/template/.opencode/plugin/sdk/copilot/responses/openai-error.ts +22 -0
- package/dist/template/.opencode/plugin/sdk/copilot/responses/openai-responses-api-types.ts +214 -0
- package/dist/template/.opencode/plugin/sdk/copilot/responses/openai-responses-language-model.ts +1770 -0
- package/dist/template/.opencode/plugin/sdk/copilot/responses/openai-responses-prepare-tools.ts +173 -0
- package/dist/template/.opencode/plugin/sdk/copilot/responses/openai-responses-settings.ts +1 -0
- package/dist/template/.opencode/plugin/sdk/copilot/responses/tool/code-interpreter.ts +87 -0
- package/dist/template/.opencode/plugin/sdk/copilot/responses/tool/file-search.ts +127 -0
- package/dist/template/.opencode/plugin/sdk/copilot/responses/tool/image-generation.ts +114 -0
- package/dist/template/.opencode/plugin/sdk/copilot/responses/tool/local-shell.ts +64 -0
- package/dist/template/.opencode/plugin/sdk/copilot/responses/tool/web-search-preview.ts +103 -0
- package/dist/template/.opencode/plugin/sdk/copilot/responses/tool/web-search.ts +102 -0
- package/dist/template/.opencode/pnpm-lock.yaml +791 -9
- package/dist/template/.opencode/skill/api-and-interface-design/SKILL.md +162 -0
- package/dist/template/.opencode/skill/beads/SKILL.md +10 -9
- package/dist/template/.opencode/skill/beads/references/MULTI_AGENT.md +10 -10
- package/dist/template/.opencode/skill/ci-cd-and-automation/SKILL.md +202 -0
- package/dist/template/.opencode/skill/code-search-patterns/SKILL.md +253 -0
- package/dist/template/.opencode/skill/code-simplification/SKILL.md +211 -0
- package/dist/template/.opencode/skill/condition-based-waiting/SKILL.md +12 -0
- package/dist/template/.opencode/skill/defense-in-depth/SKILL.md +16 -6
- package/dist/template/.opencode/skill/deprecation-and-migration/SKILL.md +189 -0
- package/dist/template/.opencode/skill/development-lifecycle/SKILL.md +12 -48
- package/dist/template/.opencode/skill/documentation-and-adrs/SKILL.md +220 -0
- package/dist/template/.opencode/skill/gh-address-comments/SKILL.md +29 -0
- package/dist/template/.opencode/skill/gh-address-comments/scripts/fetch_comments.py +237 -0
- package/dist/template/.opencode/skill/gh-fix-ci/SKILL.md +38 -0
- package/dist/template/.opencode/skill/gh-fix-ci/scripts/inspect_pr_checks.py +509 -0
- package/dist/template/.opencode/skill/incremental-implementation/SKILL.md +191 -0
- package/dist/template/.opencode/skill/performance-optimization/SKILL.md +236 -0
- package/dist/template/.opencode/skill/prompt-leverage/SKILL.md +90 -0
- package/dist/template/.opencode/skill/prompt-leverage/references/framework.md +91 -0
- package/dist/template/.opencode/skill/prompt-leverage/scripts/augment_prompt.py +157 -0
- package/dist/template/.opencode/skill/receiving-code-review/SKILL.md +11 -0
- package/dist/template/.opencode/skill/screenshot/SKILL.md +48 -0
- package/dist/template/.opencode/skill/screenshot/scripts/ensure_macos_permissions.sh +54 -0
- package/dist/template/.opencode/skill/screenshot/scripts/macos_display_info.swift +22 -0
- package/dist/template/.opencode/skill/screenshot/scripts/macos_permissions.swift +40 -0
- package/dist/template/.opencode/skill/screenshot/scripts/macos_window_info.swift +126 -0
- package/dist/template/.opencode/skill/screenshot/scripts/take_screenshot.ps1 +163 -0
- package/dist/template/.opencode/skill/screenshot/scripts/take_screenshot.py +585 -0
- package/dist/template/.opencode/skill/security-and-hardening/SKILL.md +296 -0
- package/dist/template/.opencode/skill/security-threat-model/SKILL.md +36 -0
- package/dist/template/.opencode/skill/security-threat-model/references/prompt-template.md +255 -0
- package/dist/template/.opencode/skill/security-threat-model/references/security-controls-and-assets.md +32 -0
- package/dist/template/.opencode/skill/skill-installer/SKILL.md +58 -0
- package/dist/template/.opencode/skill/skill-installer/scripts/github_utils.py +21 -0
- package/dist/template/.opencode/skill/skill-installer/scripts/install-skill-from-github.py +313 -0
- package/dist/template/.opencode/skill/skill-installer/scripts/list-skills.py +106 -0
- package/dist/template/.opencode/skill/structured-edit/SKILL.md +10 -0
- package/dist/template/.opencode/skill/swarm-coordination/SKILL.md +66 -1
- package/package.json +1 -1
- package/dist/template/.opencode/skill/beads-bridge/SKILL.md +0 -321
- package/dist/template/.opencode/skill/code-navigation/SKILL.md +0 -130
- package/dist/template/.opencode/skill/mqdh/SKILL.md +0 -171
- package/dist/template/.opencode/skill/obsidian/SKILL.md +0 -192
- package/dist/template/.opencode/skill/obsidian/mcp.json +0 -22
- package/dist/template/.opencode/skill/pencil/SKILL.md +0 -72
- package/dist/template/.opencode/skill/ralph/SKILL.md +0 -296
- package/dist/template/.opencode/skill/tilth-cli/SKILL.md +0 -207
- package/dist/template/.opencode/skill/tool-priority/SKILL.md +0 -299

--- /dev/null
+++ b/package/dist/template/.opencode/skill/gh-fix-ci/scripts/inspect_pr_checks.py
@@ -0,0 +1,509 @@
+#!/usr/bin/env python3
+from __future__ import annotations
+
+import argparse
+import json
+import re
+import subprocess
+import sys
+from pathlib import Path
+from shutil import which
+from typing import Any, Iterable, Sequence
+
+FAILURE_CONCLUSIONS = {
+    "failure",
+    "cancelled",
+    "timed_out",
+    "action_required",
+}
+
+FAILURE_STATES = {
+    "failure",
+    "error",
+    "cancelled",
+    "timed_out",
+    "action_required",
+}
+
+FAILURE_BUCKETS = {"fail"}
+
+FAILURE_MARKERS = (
+    "error",
+    "fail",
+    "failed",
+    "traceback",
+    "exception",
+    "assert",
+    "panic",
+    "fatal",
+    "timeout",
+    "segmentation fault",
+)
+
+DEFAULT_MAX_LINES = 160
+DEFAULT_CONTEXT_LINES = 30
+PENDING_LOG_MARKERS = (
+    "still in progress",
+    "log will be available when it is complete",
+)
+
+
+class GhResult:
+    def __init__(self, returncode: int, stdout: str, stderr: str):
+        self.returncode = returncode
+        self.stdout = stdout
+        self.stderr = stderr
+
+
+def run_gh_command(args: Sequence[str], cwd: Path) -> GhResult:
+    process = subprocess.run(
+        ["gh", *args],
+        cwd=cwd,
+        text=True,
+        capture_output=True,
+    )
+    return GhResult(process.returncode, process.stdout, process.stderr)
+
+
+def run_gh_command_raw(args: Sequence[str], cwd: Path) -> tuple[int, bytes, str]:
+    process = subprocess.run(
+        ["gh", *args],
+        cwd=cwd,
+        capture_output=True,
+    )
+    stderr = process.stderr.decode(errors="replace")
+    return process.returncode, process.stdout, stderr
+
+
+def parse_args() -> argparse.Namespace:
+    parser = argparse.ArgumentParser(
+        description=(
+            "Inspect failing GitHub PR checks, fetch GitHub Actions logs, and extract a "
+            "failure snippet."
+        ),
+        formatter_class=argparse.ArgumentDefaultsHelpFormatter,
+    )
+    parser.add_argument("--repo", default=".", help="Path inside the target Git repository.")
+    parser.add_argument(
+        "--pr", default=None, help="PR number or URL (defaults to current branch PR)."
+    )
+    parser.add_argument("--max-lines", type=int, default=DEFAULT_MAX_LINES)
+    parser.add_argument("--context", type=int, default=DEFAULT_CONTEXT_LINES)
+    parser.add_argument("--json", action="store_true", help="Emit JSON instead of text output.")
+    return parser.parse_args()
+
+
+def main() -> int:
+    args = parse_args()
+    repo_root = find_git_root(Path(args.repo))
+    if repo_root is None:
+        print("Error: not inside a Git repository.", file=sys.stderr)
+        return 1
+
+    if not ensure_gh_available(repo_root):
+        return 1
+
+    pr_value = resolve_pr(args.pr, repo_root)
+    if pr_value is None:
+        return 1
+
+    checks = fetch_checks(pr_value, repo_root)
+    if checks is None:
+        return 1
+
+    failing = [c for c in checks if is_failing(c)]
+    if not failing:
+        print(f"PR #{pr_value}: no failing checks detected.")
+        return 0
+
+    results = []
+    for check in failing:
+        results.append(
+            analyze_check(
+                check,
+                repo_root=repo_root,
+                max_lines=max(1, args.max_lines),
+                context=max(1, args.context),
+            )
+        )
+
+    if args.json:
+        print(json.dumps({"pr": pr_value, "results": results}, indent=2))
+    else:
+        render_results(pr_value, results)
+
+    return 1
+
+
+def find_git_root(start: Path) -> Path | None:
+    result = subprocess.run(
+        ["git", "rev-parse", "--show-toplevel"],
+        cwd=start,
+        text=True,
+        capture_output=True,
+    )
+    if result.returncode != 0:
+        return None
+    return Path(result.stdout.strip())
+
+
+def ensure_gh_available(repo_root: Path) -> bool:
+    if which("gh") is None:
+        print("Error: gh is not installed or not on PATH.", file=sys.stderr)
+        return False
+    result = run_gh_command(["auth", "status"], cwd=repo_root)
+    if result.returncode == 0:
+        return True
+    message = (result.stderr or result.stdout or "").strip()
+    print(message or "Error: gh not authenticated.", file=sys.stderr)
+    return False
+
+
+def resolve_pr(pr_value: str | None, repo_root: Path) -> str | None:
+    if pr_value:
+        return pr_value
+    result = run_gh_command(["pr", "view", "--json", "number"], cwd=repo_root)
+    if result.returncode != 0:
+        message = (result.stderr or result.stdout or "").strip()
+        print(message or "Error: unable to resolve PR.", file=sys.stderr)
+        return None
+    try:
+        data = json.loads(result.stdout or "{}")
+    except json.JSONDecodeError:
+        print("Error: unable to parse PR JSON.", file=sys.stderr)
+        return None
+    number = data.get("number")
+    if not number:
+        print("Error: no PR number found.", file=sys.stderr)
+        return None
+    return str(number)
+
+
+def fetch_checks(pr_value: str, repo_root: Path) -> list[dict[str, Any]] | None:
+    primary_fields = ["name", "state", "conclusion", "detailsUrl", "startedAt", "completedAt"]
+    result = run_gh_command(
+        ["pr", "checks", pr_value, "--json", ",".join(primary_fields)],
+        cwd=repo_root,
+    )
+    if result.returncode != 0:
+        message = "\n".join(filter(None, [result.stderr, result.stdout])).strip()
+        available_fields = parse_available_fields(message)
+        if available_fields:
+            fallback_fields = [
+                "name",
+                "state",
+                "bucket",
+                "link",
+                "startedAt",
+                "completedAt",
+                "workflow",
+            ]
+            selected_fields = [field for field in fallback_fields if field in available_fields]
+            if not selected_fields:
+                print("Error: no usable fields available for gh pr checks.", file=sys.stderr)
+                return None
+            result = run_gh_command(
+                ["pr", "checks", pr_value, "--json", ",".join(selected_fields)],
+                cwd=repo_root,
+            )
+            if result.returncode != 0:
+                message = (result.stderr or result.stdout or "").strip()
+                print(message or "Error: gh pr checks failed.", file=sys.stderr)
+                return None
+        else:
+            print(message or "Error: gh pr checks failed.", file=sys.stderr)
+            return None
+    try:
+        data = json.loads(result.stdout or "[]")
+    except json.JSONDecodeError:
+        print("Error: unable to parse checks JSON.", file=sys.stderr)
+        return None
+    if not isinstance(data, list):
+        print("Error: unexpected checks JSON shape.", file=sys.stderr)
+        return None
+    return data
+
+
+def is_failing(check: dict[str, Any]) -> bool:
+    conclusion = normalize_field(check.get("conclusion"))
+    if conclusion in FAILURE_CONCLUSIONS:
+        return True
+    state = normalize_field(check.get("state") or check.get("status"))
+    if state in FAILURE_STATES:
+        return True
+    bucket = normalize_field(check.get("bucket"))
+    return bucket in FAILURE_BUCKETS
+
+
+def analyze_check(
+    check: dict[str, Any],
+    repo_root: Path,
+    max_lines: int,
+    context: int,
+) -> dict[str, Any]:
+    url = check.get("detailsUrl") or check.get("link") or ""
+    run_id = extract_run_id(url)
+    job_id = extract_job_id(url)
+    base: dict[str, Any] = {
+        "name": check.get("name", ""),
+        "detailsUrl": url,
+        "runId": run_id,
+        "jobId": job_id,
+    }
+
+    if run_id is None:
+        base["status"] = "external"
+        base["note"] = "No GitHub Actions run id detected in detailsUrl."
+        return base
+
+    metadata = fetch_run_metadata(run_id, repo_root)
+    log_text, log_error, log_status = fetch_check_log(
+        run_id=run_id,
+        job_id=job_id,
+        repo_root=repo_root,
+    )
+
+    if log_status == "pending":
+        base["status"] = "log_pending"
+        base["note"] = log_error or "Logs are not available yet."
+        if metadata:
+            base["run"] = metadata
+        return base
+
+    if log_error:
+        base["status"] = "log_unavailable"
+        base["error"] = log_error
+        if metadata:
+            base["run"] = metadata
+        return base
+
+    snippet = extract_failure_snippet(log_text, max_lines=max_lines, context=context)
+    base["status"] = "ok"
+    base["run"] = metadata or {}
+    base["logSnippet"] = snippet
+    base["logTail"] = tail_lines(log_text, max_lines)
+    return base
+
+
+def extract_run_id(url: str) -> str | None:
+    if not url:
+        return None
+    for pattern in (r"/actions/runs/(\d+)", r"/runs/(\d+)"):
+        match = re.search(pattern, url)
+        if match:
+            return match.group(1)
+    return None
+
+
+def extract_job_id(url: str) -> str | None:
+    if not url:
+        return None
+    match = re.search(r"/actions/runs/\d+/job/(\d+)", url)
+    if match:
+        return match.group(1)
+    match = re.search(r"/job/(\d+)", url)
+    if match:
+        return match.group(1)
+    return None
+
+
+def fetch_run_metadata(run_id: str, repo_root: Path) -> dict[str, Any] | None:
+    fields = [
+        "conclusion",
+        "status",
+        "workflowName",
+        "name",
+        "event",
+        "headBranch",
+        "headSha",
+        "url",
+    ]
+    result = run_gh_command(["run", "view", run_id, "--json", ",".join(fields)], cwd=repo_root)
+    if result.returncode != 0:
+        return None
+    try:
+        data = json.loads(result.stdout or "{}")
+    except json.JSONDecodeError:
+        return None
+    if not isinstance(data, dict):
+        return None
+    return data
+
+
+def fetch_check_log(
+    run_id: str,
+    job_id: str | None,
+    repo_root: Path,
+) -> tuple[str, str, str]:
+    log_text, log_error = fetch_run_log(run_id, repo_root)
+    if not log_error:
+        return log_text, "", "ok"
+
+    if is_log_pending_message(log_error) and job_id:
+        job_log, job_error = fetch_job_log(job_id, repo_root)
+        if job_log:
+            return job_log, "", "ok"
+        if job_error and is_log_pending_message(job_error):
+            return "", job_error, "pending"
+        if job_error:
+            return "", job_error, "error"
+        return "", log_error, "pending"
+
+    if is_log_pending_message(log_error):
+        return "", log_error, "pending"
+
+    return "", log_error, "error"
+
+
+def fetch_run_log(run_id: str, repo_root: Path) -> tuple[str, str]:
+    result = run_gh_command(["run", "view", run_id, "--log"], cwd=repo_root)
+    if result.returncode != 0:
+        error = (result.stderr or result.stdout or "").strip()
+        return "", error or "gh run view failed"
+    return result.stdout, ""
+
+
+def fetch_job_log(job_id: str, repo_root: Path) -> tuple[str, str]:
+    repo_slug = fetch_repo_slug(repo_root)
+    if not repo_slug:
+        return "", "Error: unable to resolve repository name for job logs."
+    endpoint = f"/repos/{repo_slug}/actions/jobs/{job_id}/logs"
+    returncode, stdout_bytes, stderr = run_gh_command_raw(["api", endpoint], cwd=repo_root)
+    if returncode != 0:
+        message = (stderr or stdout_bytes.decode(errors="replace")).strip()
+        return "", message or "gh api job logs failed"
+    if is_zip_payload(stdout_bytes):
+        return "", "Job logs returned a zip archive; unable to parse."
+    return stdout_bytes.decode(errors="replace"), ""
+
+
+def fetch_repo_slug(repo_root: Path) -> str | None:
+    result = run_gh_command(["repo", "view", "--json", "nameWithOwner"], cwd=repo_root)
+    if result.returncode != 0:
+        return None
+    try:
+        data = json.loads(result.stdout or "{}")
+    except json.JSONDecodeError:
+        return None
+    name_with_owner = data.get("nameWithOwner")
+    if not name_with_owner:
+        return None
+    return str(name_with_owner)
+
+
+def normalize_field(value: Any) -> str:
+    if value is None:
+        return ""
+    return str(value).strip().lower()
+
+
+def parse_available_fields(message: str) -> list[str]:
+    if "Available fields:" not in message:
+        return []
+    fields: list[str] = []
+    collecting = False
+    for line in message.splitlines():
+        if "Available fields:" in line:
+            collecting = True
+            continue
+        if not collecting:
+            continue
+        field = line.strip()
+        if not field:
+            continue
+        fields.append(field)
+    return fields


+def is_log_pending_message(message: str) -> bool:
+    lowered = message.lower()
+    return any(marker in lowered for marker in PENDING_LOG_MARKERS)
+
+
+def is_zip_payload(payload: bytes) -> bool:
+    return payload.startswith(b"PK")
+
+
+def extract_failure_snippet(log_text: str, max_lines: int, context: int) -> str:
+    lines = log_text.splitlines()
+    if not lines:
+        return ""
+
+    marker_index = find_failure_index(lines)
+    if marker_index is None:
+        return "\n".join(lines[-max_lines:])
+
+    start = max(0, marker_index - context)
+    end = min(len(lines), marker_index + context)
+    window = lines[start:end]
+    if len(window) > max_lines:
+        window = window[-max_lines:]
+    return "\n".join(window)
+
+
+def find_failure_index(lines: Sequence[str]) -> int | None:
+    for idx in range(len(lines) - 1, -1, -1):
+        lowered = lines[idx].lower()
+        if any(marker in lowered for marker in FAILURE_MARKERS):
+            return idx
+    return None
+
+
+def tail_lines(text: str, max_lines: int) -> str:
+    if max_lines <= 0:
+        return ""
+    lines = text.splitlines()
+    return "\n".join(lines[-max_lines:])
+
+
+def render_results(pr_number: str, results: Iterable[dict[str, Any]]) -> None:
+    results_list = list(results)
+    print(f"PR #{pr_number}: {len(results_list)} failing checks analyzed.")
+    for result in results_list:
+        print("-" * 60)
+        print(f"Check: {result.get('name', '')}")
+        if result.get("detailsUrl"):
+            print(f"Details: {result['detailsUrl']}")
+        run_id = result.get("runId")
+        if run_id:
+            print(f"Run ID: {run_id}")
+        job_id = result.get("jobId")
+        if job_id:
+            print(f"Job ID: {job_id}")
+        status = result.get("status", "unknown")
+        print(f"Status: {status}")
+
+        run_meta = result.get("run", {})
+        if run_meta:
+            branch = run_meta.get("headBranch", "")
+            sha = (run_meta.get("headSha") or "")[:12]
+            workflow = run_meta.get("workflowName") or run_meta.get("name") or ""
+            conclusion = run_meta.get("conclusion") or run_meta.get("status") or ""
+            print(f"Workflow: {workflow} ({conclusion})")
+            if branch or sha:
+                print(f"Branch/SHA: {branch} {sha}")
+            if run_meta.get("url"):
+                print(f"Run URL: {run_meta['url']}")
+
+        if result.get("note"):
+            print(f"Note: {result['note']}")
+
+        if result.get("error"):
+            print(f"Error fetching logs: {result['error']}")
+            continue
+
+        snippet = result.get("logSnippet") or ""
+        if snippet:
+            print("Failure snippet:")
+            print(indent_block(snippet, prefix=" "))
+        else:
+            print("No snippet available.")
+    print("-" * 60)
+
+
+def indent_block(text: str, prefix: str = " ") -> str:
+    return "\n".join(f"{prefix}{line}" for line in text.splitlines())
+
+
+if __name__ == "__main__":
+    raise SystemExit(main())
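The core of the new `inspect_pr_checks.py` script is its snippet extraction: scan the log from the end for a failure marker, then return a window of surrounding lines. A minimal, self-contained sketch of that logic, run against toy input rather than real `gh` output:

```python
# Sketch of the failure-snippet logic from inspect_pr_checks.py (toy inputs).
FAILURE_MARKERS = ("error", "fail", "traceback", "panic")

def find_failure_index(lines):
    # Scan backwards so the LAST failure marker wins.
    for idx in range(len(lines) - 1, -1, -1):
        lowered = lines[idx].lower()
        if any(marker in lowered for marker in FAILURE_MARKERS):
            return idx
    return None

def extract_failure_snippet(log_text, max_lines=160, context=30):
    lines = log_text.splitlines()
    if not lines:
        return ""
    marker_index = find_failure_index(lines)
    if marker_index is None:
        # No marker found: fall back to the log tail.
        return "\n".join(lines[-max_lines:])
    start = max(0, marker_index - context)
    end = min(len(lines), marker_index + context)
    window = lines[start:end]
    if len(window) > max_lines:
        window = window[-max_lines:]
    return "\n".join(window)

log = "\n".join(["setup ok", "running tests", "AssertionError: boom", "1 failed"])
print(extract_failure_snippet(log, max_lines=5, context=1))
```

Note that the window is centered on the last marker, so noisy "error"-shaped lines early in the log do not displace the snippet from the actual failure.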

--- /dev/null
+++ b/package/dist/template/.opencode/skill/incremental-implementation/SKILL.md
@@ -0,0 +1,191 @@
+---
+name: incremental-implementation
+description: Use when implementing features or fixes to enforce thin vertical slices with verify-after-each — prevents large, untested changes by requiring working code at every step
+version: 1.0.0
+tags: [workflow, implementation, code-quality]
+dependencies: [test-driven-development, verification-before-completion]
+---
+
+# Incremental Implementation
+
+> **Replaces** big-bang implementations where everything is built at once and tested at the end — enforces thin vertical slices with verification after each step
+
+## When to Use
+
+- Implementing any feature that touches more than 2 files
+- Working from a plan or spec with multiple tasks
+- Building something where partial progress should be demonstrable
+
+## When NOT to Use
+
+- One-line fixes or trivial changes
+- Pure refactors with no behavior change (use code-simplification instead)
+- Exploratory prototyping where you need to experiment freely
+
+## Common Rationalizations
+
+| Rationalization | Rebuttal |
+| --------------- | -------- |
+| "I'll build everything first and test at the end" | End-to-end testing after 500 lines of changes makes failures impossible to isolate |
+| "This feature can't be split into slices" | Every feature can be sliced — you're confusing "the UI needs all parts" with "the code must be written all at once" |
+| "Committing partial work creates noise" | Partial working commits are rollback points. One giant commit is a rollback cliff |
+| "It's faster to write it all at once" | It feels faster until the first bug takes 2 hours to locate in a 400-line diff |
+| "The slices are too small to be meaningful" | If a slice compiles, passes tests, and moves toward the goal, it's meaningful |
+| "I need to see the whole picture first" | Read the plan first, then implement slice by slice. Understanding ≠ building all at once |
+
+## Overview
+
+Large implementations fail because errors compound. When you write 500 lines before running anything, each line can introduce a bug that interacts with bugs from other lines. Thin vertical slices keep the error surface small.
+
+**Core principle:** Working code at every step. Never be more than one slice away from a green build.
+
+## The Cycle
+
+```
+FOR each slice:
+  1. IMPLEMENT — Write the minimal code for this slice (1-3 files max)
+  2. VERIFY — Run typecheck + lint + relevant tests
+  3. COMMIT — Create a checkpoint with descriptive message
+  4. NEXT — Move to the next slice
+
+IF verify fails:
+  Fix within the current slice before moving on
+  Do NOT proceed to the next slice with broken code
+```
+
+## Slicing Strategies
+
+### Vertical Slice (Preferred)
+
+Each slice delivers one thin path through the full stack:
+
+```
+Slice 1: API endpoint returns hardcoded data → test passes
+Slice 2: API endpoint reads from database → test passes
+Slice 3: UI calls API and renders data → test passes
+Slice 4: Add validation and error handling → test passes
+```
+
+### Contract-First
+
+Define interfaces first, then implement behind them:
+
+```
+Slice 1: Define types/interfaces → compiles
+Slice 2: Implement with stubs → tests pass (with mocked data)
+Slice 3: Replace stubs with real implementation → tests pass
+```
+
+### Risk-First
+
+Implement the hardest or most uncertain part first:
+
+```
+Slice 1: The tricky algorithm or integration → tests pass
+Slice 2: The straightforward plumbing → tests pass
+Slice 3: The UI/presentation layer → tests pass
+```
+
+## Implementation Rules
+
+### 1. Simplicity First
+
+Default to the simplest viable solution for each slice.
+
+```
+❌ "Let me add a factory pattern for extensibility"
+✅ "Direct function call works. Refactor to pattern IF a second use case appears"
+```
+
+### 2. Scope Discipline
+
+Each slice does ONE thing. If you notice something else that needs fixing:
+
+```
+NOTICED BUT NOT TOUCHING: [description of unrelated improvement]
+```
+
+Log it and continue with the current slice.
+
+### 3. One Compilable Step at a Time
+
+Never leave the codebase in a state where typecheck fails between slices.
+
+```
+❌ Add 5 function signatures, then implement all 5
+✅ Add and implement function 1, verify, then function 2
+```
+
+### 4. Keep Tests Green
+
+If existing tests break from your change, fix them in the same slice — not in a "fix tests" slice later.
+
+### 5. Feature Flags for Incomplete Features
+
+If a slice can't be hidden behind existing abstractions:
+
+```typescript
+// Temporary gate — remove when feature is complete
+if (process.env.ENABLE_NEW_FEATURE) {
+  // new code path
+} else {
+  // existing behavior
+}
+```
+
+### 6. Rollback-Friendly
+
+Each committed slice should be independently revertable without breaking the build.
+
+## Slice Size Guide
+
+| Slice Size | Signal |
+| ---------- | ------ |
+| 1-30 lines | Ideal — easy to review and verify |
+| 30-100 lines | Acceptable — still isolatable |
+| 100-200 lines | Too large — find a split point |
+| 200+ lines | Stop. You're doing big-bang implementation |
+
+## Red Flags — STOP
+
+If you catch yourself:
+
+- Writing more than 100 lines without running verification
+- Saying "I'll test this after I finish the next part"
+- Having 3+ files with uncommitted changes
+- Building a complex abstraction before the simple version works
+- Skipping verification because "this slice is trivial"
+
+**STOP.** Verify what you have. Commit if it passes. Then continue.
+
+## Verification
+
+After each slice:
+
+```bash
+# Minimum verification (must pass)
+npm run typecheck # or equivalent
+npm run lint # or equivalent
+
+# If slice changes behavior
+npm test # relevant test files
+```
+
+After all slices complete:
+
+```bash
+# Full verification
+npm run typecheck && npm run lint && npm test
+```
+
+## Integration with Other Skills
+
+- **test-driven-development** — Write the test for each slice FIRST (RED), then implement (GREEN)
+- **verification-before-completion** — Run full gates after the final slice
+- **code-simplification** — Refactor AFTER all slices pass, not during implementation
+- **systematic-debugging** — If a slice fails verification, debug systematically instead of guessing
+
+## See Also
+
+- **writing-plans** — Creates the plan that this skill executes slice-by-slice
+- **executing-plans** — Orchestrates parallel execution of independent slices
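The verify-after-each-slice loop that the new skill file describes can be sketched as a tiny helper: run the project's verification commands and refuse to move on while any of them is red. This is only an illustrative sketch; the `npm` commands are assumptions taken from the skill's own examples and would be substituted per project.

```python
# Sketch of the skill's "VERIFY" step: every command must exit 0
# before the next slice is allowed to start.
import subprocess

# Assumed project commands (from the skill's examples); substitute your own.
VERIFY_COMMANDS = [
    ["npm", "run", "typecheck"],
    ["npm", "run", "lint"],
]

def verify_slice(commands=VERIFY_COMMANDS):
    """Return True only if every verification command exits 0."""
    for cmd in commands:
        if subprocess.run(cmd).returncode != 0:
            return False  # red build: fix within this slice, do not proceed
    return True
```

A driver would call `verify_slice()` after each slice and only then create the commit checkpoint.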