@ranger1/dx 0.1.35 → 0.1.37
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/@opencode/agents/__pycache__/gh_review_harvest.cpython-314.pyc +0 -0
- package/@opencode/agents/__pycache__/pr_review_aggregate.cpython-314.pyc +0 -0
- package/@opencode/agents/gh-thread-reviewer.md +105 -0
- package/@opencode/agents/gh_review_harvest.py +269 -0
- package/@opencode/agents/pr-review-aggregate.md +2 -2
- package/@opencode/agents/pr_review_aggregate.py +95 -3
- package/@opencode/commands/pr-review-loop.md +54 -3
- package/package.json +1 -1
package/@opencode/agents/__pycache__/gh_review_harvest.cpython-314.pyc
Binary file

package/@opencode/agents/__pycache__/pr_review_aggregate.cpython-314.pyc
Binary file

package/@opencode/agents/gh-thread-reviewer.md
@@ -0,0 +1,105 @@
+---
+description: review (GitHub Harvest)
+mode: subagent
+model: openai/gpt-5.2
+temperature: 0.2
+tools:
+  write: true
+  edit: false
+  bash: true
+---
+
+# PR Reviewer (GitHub Harvest)
+
+Harvest all GitHub PR review feedback (humans + bots, including Copilot) and normalize it into a standard `reviewFile`.
+
+## Input (the prompt must include)
+
+- `PR #<number>`
+- `round: <number>`
+- `runId: <string>` (must be passed through verbatim; never generate one yourself)
+- `contextFile: <filename>`
+
+## Cache convention (mandatory)
+
+- The cache directory is fixed at `./.cache/`; always hand off paths as `./.cache/<file>` (repo-relative). Basename-only references (e.g. `foo.md`) are forbidden.
+
+## Output (mandatory)
+
+Output exactly one line:
+
+`reviewFile: ./.cache/<file>.md`
+
+## reviewFile format (mandatory)
+
+```md
+# Review (GHR)
+
+PR: <PR_NUMBER>
+Round: <ROUND>
+
+## Summary
+
+P0: <n>
+P1: <n>
+P2: <n>
+P3: <n>
+
+## Findings
+
+- id: GHR-RC-2752827557
+  priority: P1
+  category: quality|performance|security|architecture
+  file: <path>
+  line: <number|null>
+  title: <short>
+  description: <single-line text>
+  suggestion: <single-line text>
+```
+
+## ID rules (mandatory)
+
+- Inline review comments (discussion_r...): `GHR-RC-<databaseId>` (the databaseId maps back to `#discussion_r<databaseId>`)
+- PR review summaries: `GHR-RV-<reviewId>`
+- Plain PR comments: `GHR-IC-<issueCommentId>`
+
+## Execution steps (mandatory)
+
+1) Harvest (deterministic)
+
+- Invoke the script to produce the raw JSON:
+
+```bash
+python3 ~/.opencode/agents/gh_review_harvest.py \
+  --pr <PR_NUMBER> \
+  --round <ROUND> \
+  --run-id <RUN_ID>
+```
+
+- The script prints one line of JSON to stdout: `{"rawFile":"./.cache/...json"}`; take `rawFile` from it.
+
+2) Normalize (LLM classification)
+
+- After reading `rawFile` (JSON), extract the suggestions/problems and generate findings:
+  - Cover humans + bots (no author allowlist).
+  - De-noise: drop anything whose body contains `<!-- pr-review-loop-marker`.
+  - Ignore pure approvals / content-free comments such as `LGTM`, `Looks good`, `Approved`.
+- Default policy:
+  - Threads with `isResolved=true` or `isOutdated=true` are dropped during the harvest stage (they never enter the rawFile and consume no LLM tokens).
+- Classification rules (roughly):
+  - P0: clear security vulnerability / data leak / financial loss / remote code execution
+  - P1: logic bug / permission bypass / will cause production errors
+  - P2: potential bug / robustness / edge cases / major maintainability issues
+  - P3: style / naming / minor optimizations / optional suggestions
+- `category` must be one of: quality|performance|security|architecture
+
+3) Write the reviewFile
+
+- Fixed filename: `./.cache/review-GHR-pr<PR_NUMBER>-r<ROUND>-<RUN_ID>.md`
+- Important: `title/description/suggestion` must each be a single line; escape embedded newlines as `\\n`.
+
+## Forbidden actions (mandatory)
+
+- ⛔ Do not post GitHub comments (never call `gh pr comment/review`)
+- ⛔ Do not modify code (output the reviewFile only)
+- ⛔ Do not generate/fabricate a runId
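The ID rules above encode each GitHub comment kind into a stable finding id. A minimal sketch of that mapping, assuming the three kinds from the spec (the helper names are illustrative, not part of the package):

```python
# Sketch: build finding ids per the ID rules above (GHR-RC / GHR-RV / GHR-IC).
# The function names are hypothetical helpers, not part of the shipped code.

def finding_id(kind: str, numeric_id: int) -> str:
    """kind is one of 'rc' (inline thread comment), 'rv' (review), 'ic' (issue comment)."""
    prefix = {"rc": "GHR-RC", "rv": "GHR-RV", "ic": "GHR-IC"}[kind]
    return f"{prefix}-{numeric_id}"


def discussion_anchor(database_id: int) -> str:
    """An inline comment's databaseId maps back to its #discussion_r anchor."""
    return f"#discussion_r{database_id}"
```

This makes the round-trip explicit: `GHR-RC-2752827557` points at `#discussion_r2752827557` on the PR page.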

package/@opencode/agents/gh_review_harvest.py
@@ -0,0 +1,269 @@
+#!/usr/bin/env python3
+
+# Deterministic GitHub PR review harvester.
+#
+# - Fetches inline review threads via GraphQL (reviewThreads) with pagination.
+# - Fetches PR reviews and PR issue comments via REST (gh api) with pagination.
+# - Writes a raw JSON file into project cache: ./.cache/
+# - Prints exactly one JSON object to stdout: {"rawFile":"./.cache/...json"}
+
+import argparse
+import json
+import os
+import subprocess
+import sys
+from datetime import datetime, timezone
+from pathlib import Path
+
+
+def _repo_root():
+    try:
+        p = subprocess.run(
+            ["git", "rev-parse", "--show-toplevel"],
+            stdout=subprocess.PIPE,
+            stderr=subprocess.DEVNULL,
+            text=True,
+        )
+        out = (p.stdout or "").strip()
+        if p.returncode == 0 and out:
+            return Path(out)
+    except Exception:
+        pass
+    return Path.cwd()
+
+
+def _cache_dir(repo_root):
+    return (repo_root / ".cache").resolve()
+
+
+def _repo_relpath(repo_root, p):
+    try:
+        rel = p.resolve().relative_to(repo_root.resolve())
+        return "./" + rel.as_posix()
+    except Exception:
+        return os.path.basename(str(p))
+
+
+REPO_ROOT = _repo_root()
+CACHE_DIR = _cache_dir(REPO_ROOT)
+
+
+def _json_out(obj):
+    sys.stdout.write(json.dumps(obj, ensure_ascii=True))
+    sys.stdout.write("\n")
+
+
+def _run_capture(cmd):
+    try:
+        p = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
+        return p.returncode, p.stdout, p.stderr
+    except FileNotFoundError as e:
+        return 127, "", str(e)
+
+
+def _require_gh_auth():
+    rc, out, err = _run_capture(["gh", "auth", "status"])
+    if rc == 127:
+        return False, "GH_CLI_NOT_FOUND", "gh not found in PATH"
+    if rc != 0:
+        detail = (err or out or "").strip()
+        if len(detail) > 4000:
+            detail = detail[-4000:]
+        return False, "GH_NOT_AUTHENTICATED", detail
+    return True, None, None
+
+
+def _resolve_owner_repo(explicit_repo):
+    if explicit_repo:
+        s = str(explicit_repo).strip()
+        if s and "/" in s:
+            return s
+    rc, out, _ = _run_capture(["gh", "repo", "view", "--json", "nameWithOwner", "--jq", ".nameWithOwner"])
+    owner_repo = out.strip() if rc == 0 else ""
+    return owner_repo or None
+
+
+def _gh_api_json(args):
+    rc, out, err = _run_capture(["gh", "api"] + args)
+    if rc != 0:
+        raise RuntimeError(f"GH_API_FAILED: {(err or out or '').strip()}")
+    try:
+        return json.loads(out or "null")
+    except Exception:
+        raise RuntimeError("GH_API_JSON_PARSE_FAILED")
+
+
+def _gh_api_graphql(query, variables):
+    cmd = ["gh", "api", "graphql", "-f", f"query={query}"]
+    for k, v in (variables or {}).items():
+        if isinstance(v, int):
+            cmd.extend(["-F", f"{k}={v}"])
+        elif v is None:
+            cmd.extend(["-f", f"{k}="])
+        else:
+            cmd.extend(["-f", f"{k}={v}"])
+
+    rc, out, err = _run_capture(cmd)
+    if rc != 0:
+        raise RuntimeError(f"GH_GRAPHQL_FAILED: {(err or out or '').strip()}")
+    try:
+        return json.loads(out or "null")
+    except Exception:
+        raise RuntimeError("GH_GRAPHQL_JSON_PARSE_FAILED")
+
+
+def _flatten_threads(gql_data):
+    threads = []
+    pr = (((gql_data or {}).get("data") or {}).get("repository") or {}).get("pullRequest") or {}
+    conn = pr.get("reviewThreads") or {}
+    nodes = conn.get("nodes") or []
+    for t in nodes:
+        is_resolved = bool((t or {}).get("isResolved"))
+        is_outdated = bool((t or {}).get("isOutdated"))
+        if is_resolved or is_outdated:
+            continue
+        comments_conn = (t or {}).get("comments") or {}
+        comments_nodes = comments_conn.get("nodes") or []
+        comments = []
+        for c in comments_nodes:
+            author = (c or {}).get("author") or {}
+            comments.append(
+                {
+                    "id": (c or {}).get("id"),
+                    "databaseId": (c or {}).get("databaseId"),
+                    "url": (c or {}).get("url"),
+                    "author": {
+                        "login": author.get("login"),
+                        "type": author.get("__typename"),
+                    },
+                    "body": (c or {}).get("body") or "",
+                    "bodyText": (c or {}).get("bodyText") or "",
+                    "createdAt": (c or {}).get("createdAt"),
+                    "updatedAt": (c or {}).get("updatedAt"),
+                }
+            )
+        threads.append(
+            {
+                "id": (t or {}).get("id"),
+                "isResolved": False,
+                "isOutdated": False,
+                "path": (t or {}).get("path"),
+                "line": (t or {}).get("line"),
+                "originalLine": (t or {}).get("originalLine"),
+                "startLine": (t or {}).get("startLine"),
+                "originalStartLine": (t or {}).get("originalStartLine"),
+                "comments": comments,
+            }
+        )
+
+    page_info = conn.get("pageInfo") or {}
+    return threads, {
+        "hasNextPage": bool(page_info.get("hasNextPage")),
+        "endCursor": page_info.get("endCursor"),
+    }
+
+
+def _fetch_all_review_threads(owner, repo, pr_number):
+    query = (
+        "query($owner:String!,$repo:String!,$prNumber:Int!,$after:String){"
+        "repository(owner:$owner,name:$repo){"
+        "pullRequest(number:$prNumber){"
+        "reviewThreads(first:100,after:$after){"
+        "pageInfo{hasNextPage endCursor}"
+        "nodes{"
+        "id isResolved isOutdated path line originalLine startLine originalStartLine "
+        "comments(first:100){nodes{"
+        "id databaseId url body bodyText createdAt updatedAt author{login __typename}"
+        "}}"
+        "}"
+        "}"
+        "}"
+        "}"
+        "}"
+    )
+
+    after = None
+    all_threads = []
+    while True:
+        data = _gh_api_graphql(query, {"owner": owner, "repo": repo, "prNumber": pr_number, "after": after})
+        threads, page = _flatten_threads(data)
+        all_threads.extend(threads)
+        if not page.get("hasNextPage"):
+            break
+        after = page.get("endCursor")
+        if not after:
+            break
+    return all_threads
+
+
+def main(argv):
+    class _ArgParser(argparse.ArgumentParser):
+        def error(self, message):
+            raise ValueError(message)
+
+    parser = _ArgParser(add_help=False)
+    parser.add_argument("--pr", type=int, required=True)
+    parser.add_argument("--round", type=int, default=1)
+    parser.add_argument("--run-id", required=True)
+    parser.add_argument("--repo")
+
+    try:
+        args = parser.parse_args(argv)
+    except ValueError:
+        _json_out({"error": "INVALID_ARGS"})
+        return 2
+
+    ok, code, detail = _require_gh_auth()
+    if not ok:
+        _json_out({"error": code, "detail": detail})
+        return 1
+
+    owner_repo = _resolve_owner_repo(args.repo)
+    if not owner_repo:
+        _json_out({"error": "REPO_NOT_FOUND"})
+        return 1
+    if "/" not in owner_repo:
+        _json_out({"error": "INVALID_REPO"})
+        return 1
+
+    owner, repo = owner_repo.split("/", 1)
+    pr_number = int(args.pr)
+    round_num = int(args.round)
+    run_id = str(args.run_id).strip()
+    if not run_id:
+        _json_out({"error": "MISSING_RUN_ID"})
+        return 1
+
+    CACHE_DIR.mkdir(parents=True, exist_ok=True)
+    raw_basename = f"gh-review-raw-pr{pr_number}-r{round_num}-{run_id}.json"
+    raw_path = CACHE_DIR / raw_basename
+
+    try:
+        threads = _fetch_all_review_threads(owner, repo, pr_number)
+
+        reviews = _gh_api_json([f"repos/{owner_repo}/pulls/{pr_number}/reviews", "--paginate"])
+        issue_comments = _gh_api_json([f"repos/{owner_repo}/issues/{pr_number}/comments", "--paginate"])
+
+        now = datetime.now(timezone.utc).isoformat()
+        payload = {
+            "repo": owner_repo,
+            "pr": pr_number,
+            "round": round_num,
+            "runId": run_id,
+            "generatedAt": now,
+            "reviewThreads": threads,
+            "reviews": reviews if isinstance(reviews, list) else [],
+            "issueComments": issue_comments if isinstance(issue_comments, list) else [],
+        }
+
+        raw_path.write_text(json.dumps(payload, ensure_ascii=True), encoding="utf-8", newline="\n")
+    except Exception as e:
+        _json_out({"error": "HARVEST_FAILED", "detail": str(e)[:800]})
+        return 1
+
+    _json_out({"rawFile": _repo_relpath(REPO_ROOT, raw_path)})
+    return 0
+
+
+if __name__ == "__main__":
+    raise SystemExit(main(sys.argv[1:]))
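The harvester's stdout contract is a single JSON line: `{"rawFile": ...}` on success, `{"error": ...}` on failure. A minimal sketch of how a caller might consume it (the function is an illustrative helper, not part of the package):

```python
# Sketch: parse the harvester's one-line stdout contract.
import json


def parse_harvest_output(stdout_line: str) -> str:
    """Return the rawFile path, or raise carrying the script's error code."""
    obj = json.loads(stdout_line)
    if "error" in obj:
        raise RuntimeError(obj["error"])
    return obj["rawFile"]
```

Because the script prints exactly one JSON object and nothing else, the caller never needs to scrape free-form logs.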
package/@opencode/agents/pr-review-aggregate.md
@@ -21,7 +21,7 @@ tools:
 - `round: <number>`
 - `runId: <string>`
 - `contextFile: <path>` (e.g. `./.cache/pr-context-...md`)
-- `reviewFile: <path
+- `reviewFile: <path>` (multi-line, 1+ entries; e.g. `./.cache/review-...md`)
 
 ### Mode B: post the fix comment (based on fixReportFile)
 
@@ -53,7 +53,7 @@ runId: abcdef123456
 
 ## Duplicate grouping (used only as a script input)
 
-
+Based on the contents of all `reviewFile`s, decide which findings are duplicates and group them, producing **a single line of JSON** (no code block, no explanatory text, no line breaks).
 
 Note: this JSON line is **not your final output**; it is used only to build the `--duplicate-groups-b64` argument passed to the script.
 
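The duplicate-groups JSON is handed to the script base64-encoded via `--duplicate-groups-b64`. A sketch of producing that argument, assuming the groups are lists of finding ids (the group shape and ids shown are hypothetical):

```python
# Sketch: build the one-line duplicate-groups JSON and its base64 CLI argument.
# The group structure (lists of finding ids) is an assumption for illustration.
import base64
import json

groups = [["GHR-RC-123", "CODEX-7"], ["GHR-IC-456"]]   # hypothetical finding ids
one_line = json.dumps(groups, separators=(",", ":"))    # compact: single line, no spaces
b64_arg = base64.b64encode(one_line.encode("utf-8")).decode("ascii")
# pass as: --duplicate-groups-b64 <b64_arg>
```

Base64 sidesteps shell quoting of the raw JSON, which is why the "-b64" variant exists alongside `--duplicate-groups-json`.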
package/@opencode/agents/pr_review_aggregate.py
@@ -318,13 +318,66 @@ def _counts(findings):
     return c
 
 
-def
+def _check_existing_comment(pr_number, run_id, round_num, comment_type):
+    """
+    Check if a comment with same runId/round/type already exists.
+    Returns True if duplicate exists (should skip posting).
+
+    comment_type: "review-summary" or "fix-report" or "final-report"
+    """
+    try:
+        result = subprocess.run(
+            ["gh", "api", f"repos/:owner/:repo/issues/{pr_number}/comments", "--paginate"],
+            stdout=subprocess.PIPE,
+            stderr=subprocess.DEVNULL,
+            text=True,
+        )
+        if result.returncode != 0:
+            return False
+
+        comments = json.loads(result.stdout or "[]")
+
+        if comment_type == "review-summary":
+            type_header = f"## Review Summary (Round {round_num})"
+        elif comment_type == "fix-report":
+            type_header = f"## Fix Report (Round {round_num})"
+        elif comment_type == "final-report":
+            type_header = "## Final Report"
+        else:
+            return False
+
+        run_id_pattern = f"RunId: {run_id}"
+
+        for comment in comments:
+            body = comment.get("body", "")
+            if MARKER in body and type_header in body and run_id_pattern in body:
+                return True
+
+        return False
+    except Exception:
+        return False
+
+
+def _post_pr_comment(pr_number, body_ref, run_id=None, round_num=None, comment_type=None):
+    """
+    Post a PR comment with idempotency check.
+
+    If run_id, round_num, and comment_type are provided, checks for existing
+    duplicate before posting and skips if already posted.
+
+    Returns: True if posted successfully or skipped (idempotent), False on error
+    """
     if isinstance(body_ref, Path):
         p = body_ref
     else:
         p = _resolve_ref(REPO_ROOT, CACHE_DIR, body_ref)
     if not p:
         return False
+
+    if run_id and round_num and comment_type:
+        if _check_existing_comment(pr_number, run_id, round_num, comment_type):
+            return True
+
     body_path = str(p)
     rc = subprocess.run(
         ["gh", "pr", "comment", str(pr_number), "--body-file", body_path],
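The idempotency check above treats a comment as a duplicate only when it carries the loop marker, the round-specific header, and the runId all at once. A minimal sketch of that predicate (`MARKER` here is a stand-in for the module-level constant; its exact value is an assumption):

```python
# Sketch of the three-way duplicate predicate used by _check_existing_comment.
# MARKER is a stand-in for the script's module-level constant.
MARKER = "<!-- pr-review-loop-marker"


def is_duplicate(body: str, type_header: str, run_id: str) -> bool:
    """A comment is a duplicate only if marker, header, and runId all match."""
    return MARKER in body and type_header in body and f"RunId: {run_id}" in body
```

Requiring all three conditions keeps the check safe across rounds and across concurrent runs: a Round 1 summary never suppresses the Round 2 summary, and a different runId never suppresses a fresh run.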
@@ -397,6 +450,32 @@ def _render_mode_b_comment(pr_number, round_num, run_id, fix_report_md):
     return "\n".join(body)
 
 
+def _render_final_comment(pr_number, round_num, run_id, status):
+    lines = []
+    lines.append(MARKER)
+    lines.append("")
+    lines.append("## Final Report")
+    lines.append("")
+    lines.append(f"- PR: #{pr_number}")
+    lines.append(f"- Total Rounds: {round_num}")
+    lines.append(f"- RunId: {run_id}")
+    lines.append("")
+
+    if status == "RESOLVED":
+        lines.append("### Status: ✅ All issues resolved")
+        lines.append("")
+        lines.append("All P0/P1/P2 issues from the automated review have been addressed.")
+        lines.append("The PR is ready for human review and merge.")
+    else:
+        lines.append("### Status: ⚠️ Max rounds reached")
+        lines.append("")
+        lines.append("The automated review loop has completed the maximum number of rounds (3).")
+        lines.append("Some issues may still remain. Please review the PR comments above for details.")
+
+    lines.append("")
+    return "\n".join(lines)
+
+
 def main(argv):
     class _ArgParser(argparse.ArgumentParser):
         def error(self, message):
@@ -409,6 +488,7 @@ def main(argv):
     parser.add_argument("--context-file")
     parser.add_argument("--review-file", action="append", default=[])
     parser.add_argument("--fix-report-file")
+    parser.add_argument("--final-report")
     parser.add_argument("--duplicate-groups-json")
     parser.add_argument("--duplicate-groups-b64")
 
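The parser above relies on `argparse`'s `action="append"` so that `--review-file` may be repeated once per reviewer, while the new `--final-report` stays a plain optional flag. A small standalone sketch of that behavior:

```python
# Sketch: repeated --review-file flags accumulate via action="append";
# --final-report defaults to None when absent.
import argparse

p = argparse.ArgumentParser()
p.add_argument("--review-file", action="append", default=[])
p.add_argument("--final-report")

args = p.parse_args(["--review-file", "./.cache/a.md", "--review-file", "./.cache/b.md"])
# args.review_file == ["./.cache/a.md", "./.cache/b.md"]; args.final_report is None
```

This is why the loop command can pass one `reviewFile` line per reviewer and the script receives them as an ordered list.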
@@ -422,6 +502,7 @@ def main(argv):
     round_num = args.round
     run_id = str(args.run_id)
 
+    final_report = (args.final_report or "").strip() or None
     fix_report_file = (args.fix_report_file or "").strip() or None
     context_file = (args.context_file or "").strip() or None
     review_files = []
@@ -430,6 +511,17 @@ def main(argv):
         if s:
             review_files.append(s)
 
+    if final_report:
+        body = _render_final_comment(pr_number, round_num, run_id, final_report)
+        body_basename = f"review-aggregate-final-pr{pr_number}-{run_id}.md"
+        body_ref = _repo_relpath(REPO_ROOT, CACHE_DIR / body_basename)
+        _write_cache_text(body_ref, body)
+        if not _post_pr_comment(pr_number, body_ref, run_id=run_id, round_num=round_num, comment_type="final-report"):
+            _json_out({"error": "GH_PR_COMMENT_FAILED"})
+            return 1
+        _json_out({"ok": True, "final": True})
+        return 0
+
     if fix_report_file:
         fix_p = _resolve_ref(REPO_ROOT, CACHE_DIR, fix_report_file)
         if not fix_p or not fix_p.exists():
@@ -440,7 +532,7 @@ def main(argv):
         body_basename = f"review-aggregate-fix-comment-pr{pr_number}-r{round_num}-{run_id}.md"
         body_ref = _repo_relpath(REPO_ROOT, CACHE_DIR / body_basename)
         _write_cache_text(body_ref, body)
-        if not _post_pr_comment(pr_number, body_ref):
+        if not _post_pr_comment(pr_number, body_ref, run_id=run_id, round_num=round_num, comment_type="fix-report"):
             _json_out({"error": "GH_PR_COMMENT_FAILED"})
             return 1
         _json_out({"ok": True})
@@ -488,7 +580,7 @@ def main(argv):
     body_basename = f"review-aggregate-comment-pr{pr_number}-r{round_num}-{run_id}.md"
     body_ref = _repo_relpath(REPO_ROOT, CACHE_DIR / body_basename)
     _write_cache_text(body_ref, body)
-    if not _post_pr_comment(pr_number, body_ref):
+    if not _post_pr_comment(pr_number, body_ref, run_id=run_id, round_num=round_num, comment_type="review-summary"):
        _json_out({"error": "GH_PR_COMMENT_FAILED"})
         return 1
 
package/@opencode/commands/pr-review-loop.md
@@ -22,6 +22,7 @@ agent: sisyphus
 - `codex-reviewer`
 - `claude-reviewer`
 - `gemini-reviewer`
+- `gh-thread-reviewer`
 - `pr-review-aggregate`
 - `pr-fix`
 
@@ -41,6 +42,13 @@ agent: sisyphus
 
 ## Loop (at most 3 rounds)
 
+**⚠️ Strict serial-execution requirement (Critical)**:
+
+- Each Step must complete (its return value received) before the next Step may begin
+- **No steps may run in parallel** (the only exception: the reviewers in Step 2 may run in parallel with each other)
+- If any step fails or times out, the current round must be terminated immediately; never skip or retry
+- Every step's Task call must await its result; fire-and-forget is forbidden
+
 Each round runs in order:
 
 1. Task: `pr-context` **(must complete first; must not run in parallel with Step 2)**
@@ -50,9 +58,9 @@ agent: sisyphus
 - Extract: `contextFile`, `runId`, `headOid` (if present)
 - **CRITICAL**: you must wait for this Task to complete successfully and obtain `contextFile` before entering Step 2
 
-2. Task (parallel): `codex-reviewer` + `claude-reviewer` + `gemini-reviewer` **(depends on Step 1's contextFile)**
+2. Task (parallel): `codex-reviewer` + `claude-reviewer` + `gemini-reviewer` + `gh-thread-reviewer` **(depends on Step 1's contextFile)**
 
-- **DEPENDENCY**:
+- **DEPENDENCY**: these reviewers depend on the `contextFile` returned by Step 1, so they **must not be launched (in parallel) until Step 1 has completed**
 - Each reviewer prompt must include:
   - `PR #{{PR_NUMBER}}`
   - `round: <ROUND>`
@@ -67,9 +75,10 @@ agent: sisyphus
 
 3. Task: `pr-review-aggregate`
 
-- The prompt must include: `PR #{{PR_NUMBER}}`, `round: <ROUND>`, `runId: <RUN_ID>`, `contextFile: ./.cache/<file>.md
+- The prompt must include: `PR #{{PR_NUMBER}}`, `round: <ROUND>`, `runId: <RUN_ID>`, `contextFile: ./.cache/<file>.md`, plus 1+ `reviewFile: ./.cache/<file>.md` lines
 - Output: `{"stop":true}` or `{"stop":false,"fixFile":"..."}`
 - If `stop=true`: the round ends and the loop exits
+- **Uniqueness constraint**: each round may post at most one Review Summary; the script's built-in idempotency check means repeated calls will not post duplicates
 
 4. Task: `pr-fix`
 
@@ -86,7 +95,49 @@ agent: sisyphus
 
 - The prompt must include: `PR #{{PR_NUMBER}}`, `round: <ROUND>`, `runId: <RUN_ID>`, `fixReportFile: ./.cache/<file>.md`
 - Output: `{"ok":true}`
+- **Uniqueness constraint**: each round may post at most one Fix Report; the script's built-in idempotency check means repeated calls will not post duplicates
 
 6. Next round
 
 - Return to 1 (start the next round of reviewers)
+
+
+## Termination and wrap-up (mandatory)
+
+When the loop ends, a final comment must be posted to the PR, as follows:
+
+### Case A: all issues resolved (stop=true)
+
+When Step 3 returns `{"stop":true}`, invoke `pr-review-aggregate` to post the wrap-up comment:
+
+- The prompt must include:
+  - `PR #{{PR_NUMBER}}`
+  - `round: <ROUND>`
+  - `runId: <RUN_ID>`
+  - `--final-report "RESOLVED"` (new argument, indicating all issues are resolved)
+
+### Case B: max rounds reached (issues remain after 3 rounds)
+
+When the loop has completed 3 rounds without stopping, invoke `pr-review-aggregate` to post the wrap-up comment:
+
+- The prompt must include:
+  - `PR #{{PR_NUMBER}}`
+  - `round: 3`
+  - `runId: <RUN_ID>`
+  - `--final-report "MAX_ROUNDS_REACHED"` (new argument, indicating the maximum number of rounds was reached)
+
+### Final comment format (generated by the script)
+
+```markdown
+<!-- pr-review-loop-marker -->
+
+## Final Report
+
+- PR: #<PR_NUMBER>
+- Total Rounds: <N>
+- Status: ✅ All issues resolved / ⚠️ Max rounds reached (some issues may remain)
+
+### Summary
+
+[auto-generated summary]
+```
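The loop's termination logic described in pr-review-loop.md (stop early on `{"stop":true}`, otherwise cap at 3 rounds) can be sketched as follows; `aggregate_round` is a hypothetical stand-in for the Step 3 Task call:

```python
# Sketch of the loop termination rule: the return value is the status string
# handed to pr-review-aggregate via --final-report. aggregate_round is a
# hypothetical callable standing in for the Step 3 Task invocation.
def run_loop(aggregate_round, max_rounds=3):
    """Return the --final-report status for the wrap-up comment."""
    for rnd in range(1, max_rounds + 1):
        result = aggregate_round(rnd)       # expected shape: {"stop": bool, ...}
        if result.get("stop"):
            return "RESOLVED"
    return "MAX_ROUNDS_REACHED"
```

Either exit path ends with exactly one Final Report comment, matching the uniqueness constraints enforced by the script's idempotency check.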