@ranger1/dx 0.1.39 → 0.1.41
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/@opencode/agents/__pycache__/pr_review_aggregate.cpython-314.pyc +0 -0
- package/@opencode/agents/__pycache__/test_pr_review_aggregate.cpython-314-pytest-9.0.2.pyc +0 -0
- package/@opencode/agents/claude-reviewer.md +15 -0
- package/@opencode/agents/codex-reviewer.md +15 -0
- package/@opencode/agents/gemini-reviewer.md +15 -0
- package/@opencode/agents/gh-thread-reviewer.md +15 -0
- package/@opencode/agents/pr-fix.md +63 -1
- package/@opencode/agents/pr-review-aggregate.md +45 -2
- package/@opencode/agents/pr_review_aggregate.py +243 -5
- package/@opencode/agents/test_pr_review_aggregate.py +500 -0
- package/@opencode/commands/doctor.md +10 -146
- package/@opencode/commands/oh_attach.json +92 -0
- package/@opencode/commands/opencode_attach.json +26 -0
- package/@opencode/commands/opencode_attach.py +142 -0
- package/@opencode/commands/pr-review-loop.md +16 -4
- package/README.md +57 -0
- package/lib/opencode-initial.js +16 -5
- package/package.json +1 -1
package/@opencode/agents/__pycache__/pr_review_aggregate.cpython-314.pyc
Binary file

package/@opencode/agents/__pycache__/test_pr_review_aggregate.cpython-314-pytest-9.0.2.pyc
Binary file
package/@opencode/agents/claude-reviewer.md

````diff
@@ -31,6 +31,21 @@ tools:
 - Write the reviewFile: `./.cache/review-CLD-pr<PR_NUMBER>-r<ROUND>-<RUN_ID>.md`
 - findings ids must start with `CLD-`
 
+## Decision Log Constraints (mandatory)
+
+If the prompt provides a `decisionLogFile`, you must read it first and follow these rules:
+
+1. **Fixed issues**: do not re-raise issues that are essentially the same
+2. **Rejected issues**:
+   - If your finding's priority is ≥2 levels higher than the original issue (e.g., P3→P1, P2→P0), you may escalate and challenge it
+   - Otherwise, do not raise it again
+
+To judge whether an issue is "essentially the same", compare the `essence` field in the decision-log against your finding's description.
+
+### Prohibited
+- ⛔ Do not question how a fixed issue was implemented (unless the fix introduced a new bug)
+- ⛔ Do not re-raise rejected issues (unless the escalation conditions are met)
+
 ## Cache Conventions (mandatory)
 
 - The cache directory is fixed at `./.cache/`; handoffs must always pass `./.cache/<file>` (a repo-relative path); basename-only references (e.g., `foo.md`) are forbidden.
````
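All four reviewer prompts in this release share the same ≥2-level escalation threshold. A minimal sketch of that check in Python (the `PRIORITY_RANK` table and `can_escalate` helper are illustrative, not part of the package):

```python
# Priority ranks: lower number = more severe (P0 is the most severe).
PRIORITY_RANK = {"P0": 0, "P1": 1, "P2": 2, "P3": 3}

def can_escalate(rejected_priority: str, new_priority: str) -> bool:
    """Return True when the new finding is at least 2 levels more severe
    than the previously rejected issue (e.g., P3 -> P1, P2 -> P0)."""
    gap = PRIORITY_RANK[rejected_priority] - PRIORITY_RANK[new_priority]
    return gap >= 2

assert can_escalate("P3", "P1")      # P3 -> P1: may escalate
assert can_escalate("P2", "P0")      # P2 -> P0: may escalate
assert not can_escalate("P2", "P1")  # only 1 level apart: do not re-raise
```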
package/@opencode/agents/codex-reviewer.md

````diff
@@ -32,6 +32,21 @@ tools:
 - Write the reviewFile: `./.cache/review-CDX-pr<PR_NUMBER>-r<ROUND>-<RUN_ID>.md`
 - findings ids must start with `CDX-`
 
+## Decision Log Constraints (mandatory)
+
+If the prompt provides a `decisionLogFile`, you must read it first and follow these rules:
+
+1. **Fixed issues**: do not re-raise issues that are essentially the same
+2. **Rejected issues**:
+   - If your finding's priority is ≥2 levels higher than the original issue (e.g., P3→P1, P2→P0), you may escalate and challenge it
+   - Otherwise, do not raise it again
+
+To judge whether an issue is "essentially the same", compare the `essence` field in the decision-log against your finding's description.
+
+### Prohibited
+- ⛔ Do not question how a fixed issue was implemented (unless the fix introduced a new bug)
+- ⛔ Do not re-raise rejected issues (unless the escalation conditions are met)
+
 ## Cache Conventions (mandatory)
 
 - The cache directory is fixed at `./.cache/`; handoffs must always pass `./.cache/<file>` (a repo-relative path); basename-only references (e.g., `foo.md`) are forbidden.
````
package/@opencode/agents/gemini-reviewer.md

````diff
@@ -31,6 +31,21 @@ tools:
 - Write the reviewFile: `./.cache/review-GMN-pr<PR_NUMBER>-r<ROUND>-<RUN_ID>.md`
 - findings ids must start with `GMN-`
 
+## Decision Log Constraints (mandatory)
+
+If the prompt provides a `decisionLogFile`, you must read it first and follow these rules:
+
+1. **Fixed issues**: do not re-raise issues that are essentially the same
+2. **Rejected issues**:
+   - If your finding's priority is ≥2 levels higher than the original issue (e.g., P3→P1, P2→P0), you may escalate and challenge it
+   - Otherwise, do not raise it again
+
+To judge whether an issue is "essentially the same", compare the `essence` field in the decision-log against your finding's description.
+
+### Prohibited
+- ⛔ Do not question how a fixed issue was implemented (unless the fix introduced a new bug)
+- ⛔ Do not re-raise rejected issues (unless the escalation conditions are met)
+
 ## Cache Conventions (mandatory)
 
 - The cache directory is fixed at `./.cache/`; handoffs must always pass `./.cache/<file>` (a repo-relative path); basename-only references (e.g., `foo.md`) are forbidden.
````
package/@opencode/agents/gh-thread-reviewer.md

````diff
@@ -100,3 +100,18 @@ python3 ~/.opencode/agents/gh_review_harvest.py \
 - ⛔ Do not post GitHub comments (do not call `gh pr comment/review`)
 - ⛔ Do not modify code (output the reviewFile only)
 - ⛔ Do not generate or fake a runId
+
+## Decision Log Constraints (mandatory)
+
+If the prompt provides a `decisionLogFile`, you must read it first and follow these rules:
+
+1. **Fixed issues**: do not re-raise issues that are essentially the same
+2. **Rejected issues**:
+   - If your finding's priority is ≥2 levels higher than the original issue (e.g., P3→P1, P2→P0), you may escalate and challenge it
+   - Otherwise, do not raise it again
+
+To judge whether an issue is "essentially the same", compare the `essence` field in the decision-log against your finding's description.
+
+### Prohibited
+- ⛔ Do not question how a fixed issue was implemented (unless the fix introduced a new bug)
+- ⛔ Do not re-raise rejected issues (unless the escalation conditions are met)
````
package/@opencode/agents/pr-fix.md

````diff
@@ -81,7 +81,7 @@ Round: 2
 
 Parsing rules (mandatory):
 
-- Only process entries in the `## IssuesToFix` section; `## OptionalIssues`
+- Only process entries in the `## IssuesToFix` section; `## OptionalIssues` may be ignored or handled as needed (recommended: decide for yourself based on the PR's goals, risk, and time budget)
 - Each entry must contain at least: `id`, `priority`, `file`, `title`, `suggestion`
 - `line` may be `null`
 
@@ -134,6 +134,68 @@ Round: 2
 `fixReportFile: ./.cache/<file>.md`
 
 
+## Decision Log Output (mandatory)
+
+After finishing fixes, you must create or append to the Decision Log file, which persists decisions across rounds.
+
+### File path
+
+`./.cache/decision-log-pr<PR_NUMBER>.md`
+
+### Format
+
+```markdown
+# Decision Log
+
+PR: <PR_NUMBER>
+
+## Round <ROUND>
+
+### Fixed
+
+- id: <FINDING_ID>
+  commit: <SHA>
+  essence: <one-sentence description of the issue's essence>
+
+### Rejected
+
+- id: <FINDING_ID>
+  priority: <P0|P1|P2|P3>
+  reason: <rejection reason>
+  essence: <one-sentence description of the issue's essence>
+```
+
+### Append rules (mandatory)
+
+- If the file does not exist: create it with the `# Decision Log` header, the `PR: <PR_NUMBER>` field, and the first `## Round <ROUND>` section
+- If the file exists: append a new `## Round <ROUND>` section at the end of the file
+- **Never delete or overwrite records from previous rounds**
+
+### essence field requirements
+
+essence is a one-sentence description of the issue's essence, used for intelligent matching and duplicate detection in later rounds. Requirements:
+
+- Brevity: ≤ 50 characters
+- Problem-oriented: describe the core problem (not the exact code location or file line number)
+- Matchable: reviewers in later rounds can recognize the issue via keyword matching
+
+**Examples, good vs. bad:**
+
+| ✅ Good essence | ❌ Bad essence |
+|---|---|
+| "JSON.parse exceptions not caught" | "apps/backend/src/foo.ts line 42 is missing try/catch" |
+| "Missing input validation" | "the username parameter is not validated in UserController" |
+| "Passwords stored in plaintext" | "the password field on line 156 is not encrypted" |
+
+### How the Decision Log is used
+
+The Decision Log is consumed by downstream workflows:
+
+- **pr-review-loop**: checks whether a decision-log exists, to avoid re-raising rejected issues
+- **pr-review-aggregate**: uses the LLM to match `essence` fields and spot issues duplicated between this round and earlier rounds
+- **Handoff document**: read by other team members to understand past decisions
+
+
 ## fixReportFile Content Format (mandatory)
 
 The fixReportFile content must be Markdown that can be pasted directly into a GitHub comment, and it must not contain local cache file paths.
````
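A minimal sketch of the append rules above, assuming the repo-relative `./.cache/` layout from the Cache Conventions; the `append_decision_round` helper and its sample call are illustrative, not shipped by the package:

```python
from pathlib import Path

def append_decision_round(pr_number: int, round_num: int, body: str) -> Path:
    """Create the decision log if missing, otherwise append a new Round
    section; earlier rounds are never rewritten."""
    path = Path("./.cache") / f"decision-log-pr{pr_number}.md"
    if not path.exists():
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(f"# Decision Log\n\nPR: {pr_number}\n", encoding="utf-8")
    with path.open("a", encoding="utf-8") as fh:
        fh.write(f"\n## Round {round_num}\n\n{body}\n")
    return path

# Example: record one fixed and one rejected finding for round 2.
append_decision_round(123, 2, (
    "### Fixed\n\n"
    "- id: CLD-001\n"
    "  commit: abc1234\n"
    "  essence: JSON.parse exceptions not caught\n\n"
    "### Rejected\n\n"
    "- id: GMN-004\n"
    "  priority: P2\n"
    "  reason: needs product decision\n"
    "  essence: component split suggestion\n"
))
```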
package/@opencode/agents/pr-review-aggregate.md

````diff
@@ -61,9 +61,37 @@ runId: abcdef123456
 {"duplicateGroups":[["CDX-001","CLD-003"],["GMN-002","CLD-005","CDX-004"]]}
 ```
 
+## Intelligent Matching (only in Mode A when a decision-log exists)
+
+If the decision-log (`./.cache/decision-log-pr<PR_NUMBER>.md`) exists, use LLM judgment to decide whether each new finding is essentially the same as an already-decided issue, and derive the **escalation_groups** parameter from that.
+
+**Process**:
+
+1. Read the decision-log and extract the `essence` field of every rejected issue
+2. For each new finding, semantically compare it (via the LLM) against the essence of every rejected issue
+3. Decide whether the issues are "essentially the same" (even if worded differently)
+4. Collect the findings eligible for escalation (the re-challenge threshold):
+   - **Escalation threshold**: a priority gap of ≥ 2 levels
+   - e.g., rejected at P3 but the finding is P1 → may escalate
+   - e.g., rejected at P2 but the finding is P0 → may escalate
+   - e.g., rejected at P2 but the finding is P1 → no escalation (only 1 level apart)
+5. Produce **one line of JSON** (no code block, no explanatory text, no line breaks), structured as follows:
+
+```json
+{"escalationGroups":[["CDX-001"],["GMN-002","CLD-005"]]}
+```
+
+Each group is a set of finding IDs that may be raised as an escalation of an already-rejected issue. If nothing qualifies, output an empty array:
+
+```json
+{"escalationGroups":[]}
+```
+
+Note: the escalation_groups JSON is **not your final output**; it is only used to build `--escalation-groups-b64` for the script.
+
 ## Invoking the Script (mandatory)
 
-Mode A (with reviewFile +
+Mode A (with reviewFile + duplicate grouping + intelligent matching):
 
 ```bash
 python3 ~/.opencode/agents/pr_review_aggregate.py \
@@ -74,9 +102,17 @@ python3 ~/.opencode/agents/pr_review_aggregate.py \
   --review-file <REVIEW_FILE_1> \
   --review-file <REVIEW_FILE_2> \
   --review-file <REVIEW_FILE_3> \
-  --duplicate-groups-b64 <BASE64_JSON>
+  --duplicate-groups-b64 <BASE64_JSON> \
+  --decision-log-file ./.cache/decision-log-pr<PR_NUMBER>.md \
+  --escalation-groups-b64 <BASE64_JSON>
 ```
 
+**Parameters**:
+
+- `--duplicate-groups-b64`: base64-encoded JSON in the format above, e.g. `eyJkdXBsaWNhdGVHcm91cHMiOltbIkNEWC0wMDEiLCJDTEQtMDAzIl1dfQ==`
+- `--decision-log-file`: path to the decision-log file (optional; if it does not exist, the intelligent-matching logic is skipped)
+- `--escalation-groups-b64`: base64-encoded escalation-groups JSON in the format above, e.g. `eyJlc2NhbGF0aW9uR3JvdXBzIjpbWyJDRFgtMDAxIl1dfQ==`
+
 Mode B (with fixReportFile):
 
 ```bash
@@ -98,3 +134,10 @@ python3 ~/.opencode/agents/pr_review_aggregate.py \
 - **On failure/exception**:
   - If the script's stdout already contains valid JSON (with an `error` or other fields) → still **return that JSON verbatim**.
   - If the script did not output valid JSON / exited abnormally → output exactly one line of JSON: `{"error":"PR_REVIEW_AGGREGATE_AGENT_FAILED"}` (a `detail` field may be added when necessary).
+
+## fixFile Structure (supplementary)
+
+The fixFile the script generates in Mode A has two sections:
+
+- `## IssuesToFix`: only P0/P1 (must fix)
+- `## OptionalIssues`: P2/P3 (pr-fix decides on its own whether to fix, or rejects with a stated reason)
````
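Both `--*-b64` flags take base64 over the one-line JSON. A small sketch of producing those values with only the standard library (the `to_b64` helper is ours); it reproduces the two example strings documented above:

```python
import base64
import json

def to_b64(payload: dict) -> str:
    """Encode a dict as compact one-line JSON, then base64, for the --*-b64 flags."""
    one_line = json.dumps(payload, separators=(",", ":"))
    return base64.b64encode(one_line.encode("utf-8")).decode("ascii")

print(to_b64({"duplicateGroups": [["CDX-001", "CLD-003"]]}))
# -> eyJkdXBsaWNhdGVHcm91cHMiOltbIkNEWC0wMDEiLCJDTEQtMDAzIl1dfQ==
print(to_b64({"escalationGroups": [["CDX-001"]]}))
# -> eyJlc2NhbGF0aW9uR3JvdXBzIjpbWyJDRFgtMDAxIl1dfQ==
```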
package/@opencode/agents/pr_review_aggregate.py

````diff
@@ -23,7 +23,9 @@
 # - Comment body must NOT contain local filesystem paths (this script scrubs cache paths, $HOME, and repo absolute paths).
 #
 # fixFile rules:
-# - fixFile includes
+# - fixFile includes all findings, split into:
+#   - IssuesToFix: P0/P1 (must fix)
+#   - OptionalIssues: P2/P3 (pr-fix may decide)
 # - Each merged duplicate group keeps ONE canonical id; merged IDs are appended into canonical description.
 # - Do NOT rewrite id prefixes (CDX-/CLD-/GMN-); preserve reviewer-provided finding IDs.
 
@@ -220,6 +222,202 @@ def _parse_duplicate_groups_b64(s):
         return []
 
 
+def _parse_escalation_groups_json(s):
+    """Parse escalation groups JSON (same format as duplicate groups)."""
+    if not s:
+        return []
+    try:
+        data = json.loads(s)
+    except Exception:
+        return []
+
+    groups = []
+    if isinstance(data, dict) and isinstance(data.get("escalationGroups"), list):
+        groups = data.get("escalationGroups")
+    elif isinstance(data, list):
+        groups = data
+    else:
+        return []
+
+    out = []
+    for g in (groups or []):
+        if not isinstance(g, list):
+            continue
+        ids = []
+        for it in g:
+            if isinstance(it, str) and it.strip():
+                ids.append(it.strip())
+        ids = list(dict.fromkeys(ids))
+        if len(ids) >= 2:
+            out.append(ids)
+    return out
+
+
+def _parse_escalation_groups_b64(s):
+    """Decode base64 escalation groups JSON."""
+    if not s:
+        return []
+    try:
+        raw = base64.b64decode(s.encode("ascii"), validate=True)
+        return _parse_escalation_groups_json(raw.decode("utf-8", errors="replace"))
+    except Exception:
+        return []
+
+
+def _parse_decision_log(md_text):
+    """
+    Parse decision log markdown and extract fixed/rejected decisions.
+
+    Format:
+        # Decision Log
+        PR: 123
+        ## Round 1
+        ### Fixed
+        - id: CDX-001
+          commit: abc123
+          essence: JSON.parse error handling
+        ### Rejected
+        - id: GMN-004
+          priority: P2
+          reason: needs product decision
+          essence: component split suggestion
+
+    Returns: [
+        {"id": "CDX-001", "status": "fixed", "essence": "...", "commit": "..."},
+        {"id": "GMN-004", "status": "rejected", "essence": "...", "reason": "...", "priority": "P2"}
+    ]
+    """
+    if not md_text:
+        return []
+
+    lines = md_text.splitlines()
+    decisions = []
+
+    current_status = None  # "fixed" or "rejected"
+    current_entry = None
+
+    for raw in lines:
+        line = raw.rstrip("\n")
+
+        # Detect status section headers
+        if line.strip().lower() == "### fixed":
+            current_status = "fixed"
+            if current_entry:
+                decisions.append(current_entry)
+                current_entry = None
+            continue
+
+        if line.strip().lower() == "### rejected":
+            current_status = "rejected"
+            if current_entry:
+                decisions.append(current_entry)
+                current_entry = None
+            continue
+
+        # Reset on new round headers
+        if line.startswith("## Round "):
+            if current_entry:
+                decisions.append(current_entry)
+                current_entry = None
+            continue
+
+        # Start new entry
+        if line.startswith("- id:") and current_status:
+            if current_entry:
+                decisions.append(current_entry)
+            fid = line.split(":", 1)[1].strip()
+            current_entry = {"id": fid, "status": current_status}
+            continue
+
+        # Parse entry fields
+        if current_entry and line.startswith("  "):
+            m = re.match(r"^\s{2}([a-zA-Z][a-zA-Z0-9]*):\s*(.*)$", line)
+            if m:
+                k = m.group(1).strip()
+                v = m.group(2).strip()
+                current_entry[k] = v
+
+    # Don't forget last entry
+    if current_entry:
+        decisions.append(current_entry)
+
+    return decisions
+
+
+def _filter_by_decision_log(findings, prior_decisions, escalation_groups):
+    """
+    Filter findings based on decision log.
+
+    Rules:
+    1. Filter out findings matching any "fixed" decision (by escalation group)
+    2. Filter out findings matching "rejected" decisions UNLESS in escalation group
+
+    Args:
+        findings: list of finding dicts
+        prior_decisions: list from _parse_decision_log()
+        escalation_groups: list of [rejected_id, new_finding_id] pairs
+
+    Returns:
+        filtered list of findings
+    """
+    if not prior_decisions:
+        return findings
+
+    escalation_map = {}
+    for group in escalation_groups:
+        if len(group) >= 2:
+            prior_id = group[0]
+            new_finding_ids = group[1:]
+            if prior_id not in escalation_map:
+                escalation_map[prior_id] = set()
+            escalation_map[prior_id].update(new_finding_ids)
+
+    fixed_ids = set()
+    rejected_ids = set()
+
+    for dec in prior_decisions:
+        status = dec.get("status", "").lower()
+        fid = dec.get("id", "").strip()
+        if not fid:
+            continue
+
+        if status == "fixed":
+            fixed_ids.add(fid)
+        elif status == "rejected":
+            rejected_ids.add(fid)
+
+    filtered = []
+    for f in findings:
+        fid = f.get("id", "").strip()
+        if not fid:
+            continue
+
+        should_filter = False
+
+        if fid in fixed_ids:
+            should_filter = True
+
+        if not should_filter:
+            for fixed_id in fixed_ids:
+                if fixed_id in escalation_map and fid in escalation_map[fixed_id]:
+                    should_filter = True
+                    break
+
+        if not should_filter:
+            if fid in rejected_ids:
+                should_filter = True
+
+            for rejected_id in rejected_ids:
+                if rejected_id in escalation_map and fid in escalation_map[rejected_id]:
+                    should_filter = False
+                    break
+
+        if not should_filter:
+            filtered.append(f)
+
+    return filtered
+
+
 def _parse_review_findings(md_text):
     lines = md_text.splitlines()
     items = []
````
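A rough usage check for the two helpers added above, assuming they are imported from `pr_review_aggregate.py`; the decision log and finding IDs are illustrative:

```python
sample_log = """\
# Decision Log

PR: 123

## Round 1

### Rejected

- id: GMN-004
  priority: P2
  reason: needs product decision
  essence: component split suggestion
"""

decisions = _parse_decision_log(sample_log)
# -> [{"id": "GMN-004", "status": "rejected", "priority": "P2", ...}]

findings = [
    {"id": "GMN-004", "priority": "P0"},  # re-raised under the rejected id, now P0
    {"id": "CLD-009", "priority": "P1"},  # unrelated new finding
]

# With no escalation groups, the finding reusing the rejected id is suppressed.
assert [f["id"] for f in _filter_by_decision_log(findings, decisions, [])] == ["CLD-009"]

# Listing it in an escalation group ([rejected_id, finding_id, ...]) keeps it.
kept = _filter_by_decision_log(findings, decisions, [["GMN-004", "GMN-004"]])
assert [f["id"] for f in kept] == ["GMN-004", "CLD-009"]
```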
````diff
@@ -399,7 +597,7 @@ def _render_mode_a_comment(pr_number, round_num, run_id, counts, must_fix, merge
         lines.append("")
 
     if must_fix:
-        lines.append("## Must Fix (P0/P1
+        lines.append("## Must Fix (P0/P1)")
         lines.append("")
         for f in must_fix:
             fid = f.get("id") or ""
@@ -418,7 +616,7 @@ def _render_mode_a_comment(pr_number, round_num, run_id, counts, must_fix, merge
     else:
         lines.append("## Result")
         lines.append("")
-        lines.append("No P0/P1
+        lines.append("No P0/P1 issues found.")
         lines.append("")
 
     lines.append("<details>")
@@ -464,7 +662,7 @@ def _render_final_comment(pr_number, round_num, run_id, status):
     if status == "RESOLVED":
         lines.append("### Status: ✅ All issues resolved")
         lines.append("")
-        lines.append("All P0/P1
+        lines.append("All P0/P1 issues from the automated review have been addressed.")
         lines.append("The PR is ready for human review and merge.")
     else:
         lines.append("### Status: ⚠️ Max rounds reached")
@@ -491,6 +689,8 @@ def main(argv):
     parser.add_argument("--final-report")
     parser.add_argument("--duplicate-groups-json")
    parser.add_argument("--duplicate-groups-b64")
+    parser.add_argument("--decision-log-file")
+    parser.add_argument("--escalation-groups-b64")
 
     try:
         args = parser.parse_args(argv)
@@ -571,9 +771,25 @@ def main(argv):
     if not duplicate_groups:
         duplicate_groups = _parse_duplicate_groups_b64(args.duplicate_groups_b64 or "")
     merged_findings, merged_map = _merge_duplicates(all_findings, duplicate_groups)
+
+    decision_log_file = (args.decision_log_file or "").strip() or None
+    prior_decisions = []
+    if decision_log_file:
+        try:
+            decision_log_md = _read_cache_text(decision_log_file)
+            prior_decisions = _parse_decision_log(decision_log_md)
+        except Exception:
+            pass
+
+    escalation_groups = _parse_escalation_groups_b64(args.escalation_groups_b64 or "")
+
+    if prior_decisions:
+        merged_findings = _filter_by_decision_log(merged_findings, prior_decisions, escalation_groups)
+
     counts = _counts(merged_findings)
 
-    must_fix = [f for f in merged_findings if _priority_rank(f.get("priority")) <=
+    must_fix = [f for f in merged_findings if _priority_rank(f.get("priority")) <= 1]
+    optional = [f for f in merged_findings if _priority_rank(f.get("priority")) >= 2]
     stop = len(must_fix) == 0
 
     body = _render_mode_a_comment(pr_number, round_num, run_id, counts, must_fix, merged_map, raw_reviews)
@@ -616,6 +832,28 @@ def main(argv):
         lines.append(f"  description: {desc}")
         lines.append(f"  suggestion: {sugg}")
 
+    lines.append("")
+    lines.append("## OptionalIssues")
+    lines.append("")
+    for f in optional:
+        fid = f.get("id") or ""
+        pri = (f.get("priority") or "P3").strip()
+        cat = (f.get("category") or "quality").strip()
+        file = (f.get("file") or "<unknown>").strip()
+        line = (f.get("line") or "null").strip()
+        title = (f.get("title") or "").strip()
+        desc = (f.get("description") or "").replace("\n", "\\n").strip()
+        sugg = (f.get("suggestion") or "(no suggestion provided)").replace("\n", "\\n").strip()
+
+        lines.append(f"- id: {fid}")
+        lines.append(f"  priority: {pri}")
+        lines.append(f"  category: {cat}")
+        lines.append(f"  file: {file}")
+        lines.append(f"  line: {line}")
+        lines.append(f"  title: {title}")
+        lines.append(f"  description: {desc}")
+        lines.append(f"  suggestion: {sugg}")
+
     fix_ref = _repo_relpath(REPO_ROOT, CACHE_DIR / fix_file)
     _write_cache_text(fix_ref, "\n".join(lines) + "\n")
     _json_out({"stop": False, "fixFile": fix_ref})
````
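For orientation, a Mode A fixFile emitted after this change would look roughly like the following; the field layout mirrors the rendering loops above, and all values are illustrative:

```markdown
## IssuesToFix

- id: CLD-001
  priority: P0
  category: security
  file: src/auth.ts
  line: 42
  title: Passwords stored in plaintext
  description: ...
  suggestion: ...

## OptionalIssues

- id: GMN-002
  priority: P2
  category: quality
  file: src/ui/Form.tsx
  line: null
  title: Missing input validation
  description: ...
  suggestion: ...
```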