@ranger1/dx 0.1.39 → 0.1.40

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -31,6 +31,21 @@ tools:
  - Write reviewFile: `./.cache/review-CLD-pr<PR_NUMBER>-r<ROUND>-<RUN_ID>.md`
  - Finding ids must start with `CLD-`

+ ## Decision Log Constraints (mandatory)
+
+ If the prompt provides a `decisionLogFile`, you must read it first and follow these rules:
+
+ 1. **Fixed issues**: do not raise an essentially identical issue again
+ 2. **Rejected issues**:
+    - If your finding's priority is ≥2 levels above the original issue's (e.g. P3→P1, P2→P0), you may escalate and challenge it again
+    - Otherwise, do not raise it again
+
+ To judge whether two issues are "essentially the same", compare the `essence` field in the decision log with your finding's description.
+
+ ### Prohibited
+ - ⛔ Do not second-guess how a fixed issue was implemented (unless the fix introduced a new bug)
+ - ⛔ Do not re-raise a rejected issue (unless the escalation condition is met)
+
  ## Cache Conventions (mandatory)

  - The cache directory is fixed at `./.cache/`; hand-offs always pass `./.cache/<file>` (a repo-relative path); basename-only references (e.g. `foo.md`) are forbidden.
@@ -32,6 +32,21 @@ tools:
  - Write reviewFile: `./.cache/review-CDX-pr<PR_NUMBER>-r<ROUND>-<RUN_ID>.md`
  - Finding ids must start with `CDX-`

+ ## Decision Log Constraints (mandatory)
+
+ If the prompt provides a `decisionLogFile`, you must read it first and follow these rules:
+
+ 1. **Fixed issues**: do not raise an essentially identical issue again
+ 2. **Rejected issues**:
+    - If your finding's priority is ≥2 levels above the original issue's (e.g. P3→P1, P2→P0), you may escalate and challenge it again
+    - Otherwise, do not raise it again
+
+ To judge whether two issues are "essentially the same", compare the `essence` field in the decision log with your finding's description.
+
+ ### Prohibited
+ - ⛔ Do not second-guess how a fixed issue was implemented (unless the fix introduced a new bug)
+ - ⛔ Do not re-raise a rejected issue (unless the escalation condition is met)
+
  ## Cache Conventions (mandatory)

  - The cache directory is fixed at `./.cache/`; hand-offs always pass `./.cache/<file>` (a repo-relative path); basename-only references (e.g. `foo.md`) are forbidden.
@@ -31,6 +31,21 @@ tools:
  - Write reviewFile: `./.cache/review-GMN-pr<PR_NUMBER>-r<ROUND>-<RUN_ID>.md`
  - Finding ids must start with `GMN-`

+ ## Decision Log Constraints (mandatory)
+
+ If the prompt provides a `decisionLogFile`, you must read it first and follow these rules:
+
+ 1. **Fixed issues**: do not raise an essentially identical issue again
+ 2. **Rejected issues**:
+    - If your finding's priority is ≥2 levels above the original issue's (e.g. P3→P1, P2→P0), you may escalate and challenge it again
+    - Otherwise, do not raise it again
+
+ To judge whether two issues are "essentially the same", compare the `essence` field in the decision log with your finding's description.
+
+ ### Prohibited
+ - ⛔ Do not second-guess how a fixed issue was implemented (unless the fix introduced a new bug)
+ - ⛔ Do not re-raise a rejected issue (unless the escalation condition is met)
+
  ## Cache Conventions (mandatory)

  - The cache directory is fixed at `./.cache/`; hand-offs always pass `./.cache/<file>` (a repo-relative path); basename-only references (e.g. `foo.md`) are forbidden.
@@ -100,3 +100,18 @@ python3 ~/.opencode/agents/gh_review_harvest.py \
  - ⛔ Do not post GitHub comments (do not call `gh pr comment/review`)
  - ⛔ Do not modify code (only output the reviewFile)
  - ⛔ Do not generate/fabricate a runId
+
+ ## Decision Log Constraints (mandatory)
+
+ If the prompt provides a `decisionLogFile`, you must read it first and follow these rules:
+
+ 1. **Fixed issues**: do not raise an essentially identical issue again
+ 2. **Rejected issues**:
+    - If your finding's priority is ≥2 levels above the original issue's (e.g. P3→P1, P2→P0), you may escalate and challenge it again (see the sketch after this hunk)
+    - Otherwise, do not raise it again
+
+ To judge whether two issues are "essentially the same", compare the `essence` field in the decision log with your finding's description.
+
+ ### Prohibited
+ - ⛔ Do not second-guess how a fixed issue was implemented (unless the fix introduced a new bug)
+ - ⛔ Do not re-raise a rejected issue (unless the escalation condition is met)
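The ≥2-level escalation rule above appears prose-only in each reviewer prompt. As a hedged, editor-added sketch (not part of the package; the reviewers apply this judgment themselves), the check amounts to:

```python
# Editor-added sketch of the escalation threshold; `may_escalate` is hypothetical.
PRIORITY_RANK = {"P0": 0, "P1": 1, "P2": 2, "P3": 3}

def may_escalate(rejected_priority: str, new_priority: str) -> bool:
    """True when the new finding is at least two severity levels above the rejected one."""
    return PRIORITY_RANK[rejected_priority] - PRIORITY_RANK[new_priority] >= 2

assert may_escalate("P3", "P1")      # P3 -> P1: may be re-raised
assert not may_escalate("P2", "P1")  # only one level apart: stays closed
```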
@@ -81,7 +81,7 @@ Round: 2

  Parsing rules (mandatory):

- - Only process entries in the `## IssuesToFix` section; `## OptionalIssues` may be ignored or handled as needed
+ - Only process entries in the `## IssuesToFix` section; `## OptionalIssues` may be ignored or handled as needed (suggestion: decide for yourself based on the PR's goal, risk, and time budget)
  - Every entry must contain at least: `id`, `priority`, `file`, `title`, `suggestion`
  - `line` may be `null`

@@ -134,6 +134,68 @@ Round: 2
  `fixReportFile: ./.cache/<file>.md`


+ ## Decision Log Output (mandatory)
+
+ After finishing the fixes, you must create/append the Decision Log file, which persists decisions across rounds.
+
+ ### File path
+
+ `./.cache/decision-log-pr<PR_NUMBER>.md`
+
+ ### Format
+
+ ```markdown
+ # Decision Log
+
+ PR: <PR_NUMBER>
+
+ ## Round <ROUND>
+
+ ### Fixed
+
+ - id: <FINDING_ID>
+   commit: <SHA>
+   essence: <one-sentence description of the issue's essence>
+
+ ### Rejected
+
+ - id: <FINDING_ID>
+   priority: <P0|P1|P2|P3>
+   reason: <rejection reason>
+   essence: <one-sentence description of the issue's essence>
+ ```
+
+ ### Append rules (mandatory)
+
+ - If the file does not exist: create it with the `# Decision Log` header, the `PR: <PR_NUMBER>` field, and the first `## Round <ROUND>` section
+ - If the file exists: append a new `## Round <ROUND>` section at the end
+ - **Never delete or overwrite records from previous rounds**
+
+ ### Requirements for the essence field
+
+ essence is a one-sentence description of the issue's essence, used for smart matching and duplicate detection in later rounds. Requirements:
+
+ - Concise: ≤ 50 characters
+ - Problem-oriented: describe the core of the problem (not a specific code location or line number)
+ - Matchable: reviewers in later rounds should be able to recognize the issue via keyword matching
+
+ **Examples:**
+
+ | ✅ Good essence | ❌ Bad essence |
+ |---|---|
+ | "JSON.parse exceptions not caught" | "missing try/catch at apps/backend/src/foo.ts line 42" |
+ | "missing input validation" | "the username parameter is not validated in UserController" |
+ | "passwords stored in plaintext" | "the password field at line 156 is not encrypted" |
+
+ ### What the Decision Log is for
+
+ The Decision Log feeds later stages of the workflow:
+
+ - **pr-review-loop**: checks whether the decision log exists, to avoid re-raising rejected issues
+ - **pr-review-aggregate**: uses the LLM to match `essence` fields and spot issues duplicated between this round and earlier ones
+ - **Hand-off documentation**: team members can read it to understand past decisions
+
+
  ## fixReportFile Content Format (mandatory)

  The fixReportFile content must be Markdown that can be pasted directly into a GitHub comment, and it must not contain local cache file paths.
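To make the append-only rule concrete, here is a minimal, editor-added Python sketch of creating/extending the file; the helper name and entry dicts are hypothetical, since pr-fix writes the file from its prompt rather than through this code:

```python
from pathlib import Path

def append_round(pr: int, round_num: int, fixed: list, rejected: list) -> None:
    """Append one `## Round N` section; earlier rounds are never rewritten."""
    log = Path(f"./.cache/decision-log-pr{pr}.md")
    parts = [] if log.exists() else [f"# Decision Log\n\nPR: {pr}\n"]
    parts.append(f"\n## Round {round_num}\n\n### Fixed\n")
    for e in fixed:
        parts.append(f"\n- id: {e['id']}\n  commit: {e['commit']}\n  essence: {e['essence']}\n")
    parts.append("\n### Rejected\n")
    for e in rejected:
        parts.append(f"\n- id: {e['id']}\n  priority: {e['priority']}\n  reason: {e['reason']}\n  essence: {e['essence']}\n")
    with log.open("a", encoding="utf-8") as fh:  # append mode: history stays intact
        fh.write("".join(parts))
```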
@@ -61,9 +61,37 @@ runId: abcdef123456
  {"duplicateGroups":[["CDX-001","CLD-003"],["GMN-002","CLD-005","CDX-004"]]}
  ```

+ ## Smart Matching (only in mode A when the decision log exists)
+
+ If the decision log (`./.cache/decision-log-pr<PR_NUMBER>.md`) exists, use LLM judgment to decide whether each new finding is essentially the same issue as an already-decided one, and generate the **escalation_groups** parameter from that.
+
+ **Procedure**:
+
+ 1. Read the decision log and extract the `essence` field of every rejected issue
+ 2. For each new finding, semantically compare it (via the LLM) against the essence of every rejected issue
+ 3. Decide whether they are "essentially the same issue" (even if worded differently)
+ 4. Collect the findings eligible for escalation (the re-challenge threshold):
+    - **Escalation threshold**: a priority gap of ≥2 levels
+    - e.g. rejected at P3 but the finding is P1 → may escalate
+    - e.g. rejected at P2 but the finding is P0 → may escalate
+    - e.g. rejected at P2 but the finding is P1 → no escalation (only 1 level apart)
+ 5. Produce **one line of JSON** (no code fence, no explanatory text, no line breaks), structured as follows:
+
+ ```json
+ {"escalationGroups":[["CDX-001"],["GMN-002","CLD-005"]]}
+ ```
+
+ Each group is the set of finding IDs that "may be raised as an escalation of a rejected issue". If nothing is eligible, output an empty array:
+
+ ```json
+ {"escalationGroups":[]}
+ ```
+
+ Note: the escalation_groups JSON is **not your final output**; it is only used to build `--escalation-groups-b64` for the script.
+
  ## Invoking the Script (mandatory)

- Mode A (with reviewFile + duplicate grouping):
+ Mode A (with reviewFile + duplicate grouping + smart matching):

  ```bash
  python3 ~/.opencode/agents/pr_review_aggregate.py \
@@ -74,9 +102,17 @@ python3 ~/.opencode/agents/pr_review_aggregate.py \
    --review-file <REVIEW_FILE_1> \
    --review-file <REVIEW_FILE_2> \
    --review-file <REVIEW_FILE_3> \
-   --duplicate-groups-b64 <BASE64_JSON>
+   --duplicate-groups-b64 <BASE64_JSON> \
+   --decision-log-file ./.cache/decision-log-pr<PR_NUMBER>.md \
+   --escalation-groups-b64 <BASE64_JSON>
  ```

+ **Parameters**:
+
+ - `--duplicate-groups-b64`: base64-encoded JSON in the format above, e.g. `eyJkdXBsaWNhdGVHcm91cHMiOltbIkNEWC0wMDEiLCJDTEQtMDAzIl1dfQ==`
+ - `--decision-log-file`: path to the decision log (optional; if the file does not exist, the smart-matching logic is skipped)
+ - `--escalation-groups-b64`: base64-encoded escalation groups JSON in the format above, e.g. `eyJlc2NhbGF0aW9uR3JvdXBzIjpbWyJDRFgtMDAxIl1dfQ==`
+
  Mode B (with fixReportFile):

  ```bash
@@ -98,3 +134,10 @@ python3 ~/.opencode/agents/pr_review_aggregate.py \
  - **On failure/abnormal exit**:
    - If the script's stdout already contains valid JSON (with `error` or other fields) → still **return that JSON verbatim**.
    - If the script produced no valid JSON / exited abnormally → output exactly one line of JSON: `{"error":"PR_REVIEW_AGGREGATE_AGENT_FAILED"}` (a `detail` field may be added if needed).
+
+ ## fixFile Structure (supplementary note)
+
+ In mode A the script generates a fixFile with two sections:
+
+ - `## IssuesToFix`: only P0/P1 (must be fixed)
+ - `## OptionalIssues`: P2/P3 (pr-fix decides on its own whether to fix, or rejects with a stated reason)
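As a small, editor-added illustration of how a `<BASE64_JSON>` value for these flags might be produced (any equivalent encoding works):

```python
import base64
import json

# Hypothetical group: CLD-005 escalates the previously rejected GMN-002.
groups = {"escalationGroups": [["GMN-002", "CLD-005"]]}
payload = json.dumps(groups, separators=(",", ":"))  # one-line JSON, as required
b64 = base64.b64encode(payload.encode("utf-8")).decode("ascii")
print(b64)  # pass as --escalation-groups-b64
```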
@@ -23,7 +23,9 @@
  # - Comment body must NOT contain local filesystem paths (this script scrubs cache paths, $HOME, and repo absolute paths).
  #
  # fixFile rules:
- # - fixFile includes ONLY P0/P1/P2 findings.
+ # - fixFile includes all findings, split into:
+ #   - IssuesToFix: P0/P1 (must fix)
+ #   - OptionalIssues: P2/P3 (pr-fix may decide)
  # - Each merged duplicate group keeps ONE canonical id; merged IDs are appended into canonical description.
  # - Do NOT rewrite id prefixes (CDX-/CLD-/GMN-); preserve reviewer-provided finding IDs.

@@ -220,6 +222,202 @@ def _parse_duplicate_groups_b64(s):
      return []


+ def _parse_escalation_groups_json(s):
+     """Parse escalation groups JSON (same format as duplicate groups)."""
+     if not s:
+         return []
+     try:
+         data = json.loads(s)
+     except Exception:
+         return []
+
+     groups = []
+     if isinstance(data, dict) and isinstance(data.get("escalationGroups"), list):
+         groups = data.get("escalationGroups")
+     elif isinstance(data, list):
+         groups = data
+     else:
+         return []
+
+     out = []
+     for g in (groups or []):
+         if not isinstance(g, list):
+             continue
+         ids = []
+         for it in g:
+             if isinstance(it, str) and it.strip():
+                 ids.append(it.strip())
+         ids = list(dict.fromkeys(ids))
+         if len(ids) >= 2:
+             out.append(ids)
+     return out
+
+
+ def _parse_escalation_groups_b64(s):
+     """Decode base64 escalation groups JSON."""
+     if not s:
+         return []
+     try:
+         raw = base64.b64decode(s.encode("ascii"), validate=True)
+         return _parse_escalation_groups_json(raw.decode("utf-8", errors="replace"))
+     except Exception:
+         return []
+
+
+ def _parse_decision_log(md_text):
+     """
+     Parse decision log markdown and extract fixed/rejected decisions.
+
+     Format:
+         # Decision Log
+         PR: 123
+         ## Round 1
+         ### Fixed
+         - id: CDX-001
+           commit: abc123
+           essence: JSON.parse error handling
+         ### Rejected
+         - id: GMN-004
+           priority: P2
+           reason: needs product decision
+           essence: component split suggestion
+
+     Returns: [
+         {"id": "CDX-001", "status": "fixed", "essence": "...", "commit": "..."},
+         {"id": "GMN-004", "status": "rejected", "essence": "...", "reason": "...", "priority": "P2"}
+     ]
+     """
+     if not md_text:
+         return []
+
+     lines = md_text.splitlines()
+     decisions = []
+
+     current_status = None  # "fixed" or "rejected"
+     current_entry = None
+
+     for raw in lines:
+         line = raw.rstrip("\n")
+
+         # Detect status section headers
+         if line.strip().lower() == "### fixed":
+             current_status = "fixed"
+             if current_entry:
+                 decisions.append(current_entry)
+                 current_entry = None
+             continue
+
+         if line.strip().lower() == "### rejected":
+             current_status = "rejected"
+             if current_entry:
+                 decisions.append(current_entry)
+                 current_entry = None
+             continue
+
+         # Reset on new round headers
+         if line.startswith("## Round "):
+             if current_entry:
+                 decisions.append(current_entry)
+                 current_entry = None
+             continue
+
+         # Start new entry
+         if line.startswith("- id:") and current_status:
+             if current_entry:
+                 decisions.append(current_entry)
+             fid = line.split(":", 1)[1].strip()
+             current_entry = {"id": fid, "status": current_status}
+             continue
+
+         # Parse entry fields
+         if current_entry and line.startswith(" "):
+             m = re.match(r"^\s{2}([a-zA-Z][a-zA-Z0-9]*):\s*(.*)$", line)
+             if m:
+                 k = m.group(1).strip()
+                 v = m.group(2).strip()
+                 current_entry[k] = v
+
+     # Don't forget last entry
+     if current_entry:
+         decisions.append(current_entry)
+
+     return decisions
+
+
+ def _filter_by_decision_log(findings, prior_decisions, escalation_groups):
+     """
+     Filter findings based on decision log.
+
+     Rules:
+       1. Filter out findings matching any "fixed" decision (by escalation group)
+       2. Filter out findings matching "rejected" decisions UNLESS in escalation group
+
+     Args:
+         findings: list of finding dicts
+         prior_decisions: list from _parse_decision_log()
+         escalation_groups: list of [rejected_id, new_finding_id] pairs
+
+     Returns:
+         filtered list of findings
+     """
+     if not prior_decisions:
+         return findings
+
+     escalation_map = {}
+     for group in escalation_groups:
+         if len(group) >= 2:
+             prior_id = group[0]
+             new_finding_ids = group[1:]
+             if prior_id not in escalation_map:
+                 escalation_map[prior_id] = set()
+             escalation_map[prior_id].update(new_finding_ids)
+
+     fixed_ids = set()
+     rejected_ids = set()
+
+     for dec in prior_decisions:
+         status = dec.get("status", "").lower()
+         fid = dec.get("id", "").strip()
+         if not fid:
+             continue
+
+         if status == "fixed":
+             fixed_ids.add(fid)
+         elif status == "rejected":
+             rejected_ids.add(fid)
+
+     filtered = []
+     for f in findings:
+         fid = f.get("id", "").strip()
+         if not fid:
+             continue
+
+         should_filter = False
+
+         if fid in fixed_ids:
+             should_filter = True
+
+         if not should_filter:
+             for fixed_id in fixed_ids:
+                 if fixed_id in escalation_map and fid in escalation_map[fixed_id]:
+                     should_filter = True
+                     break
+
+         if not should_filter:
+             if fid in rejected_ids:
+                 should_filter = True
+
+             for rejected_id in rejected_ids:
+                 if rejected_id in escalation_map and fid in escalation_map[rejected_id]:
+                     should_filter = False
+                     break
+
+         if not should_filter:
+             filtered.append(f)
+
+     return filtered
+
+
  def _parse_review_findings(md_text):
      lines = md_text.splitlines()
      items = []
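A minimal, editor-added usage sketch of the helpers above, roughly mirroring what `main()` wires together further down (the literal inputs are hypothetical):

```python
# Hypothetical round-2 run for PR 123.
md_text = open("./.cache/decision-log-pr123.md", encoding="utf-8").read()
prior = _parse_decision_log(md_text)
groups = _parse_escalation_groups_json('{"escalationGroups":[["GMN-004","CLD-007"]]}')
findings = [{"id": "CLD-007", "priority": "P0"}, {"id": "GMN-004", "priority": "P2"}]
kept = _filter_by_decision_log(findings, prior, groups)
# If GMN-004 was rejected earlier, only the escalated CLD-007 survives the filter.
```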
@@ -399,7 +597,7 @@ def _render_mode_a_comment(pr_number, round_num, run_id, counts, must_fix, merge
      lines.append("")

      if must_fix:
-         lines.append("## Must Fix (P0/P1/P2)")
+         lines.append("## Must Fix (P0/P1)")
          lines.append("")
          for f in must_fix:
              fid = f.get("id") or ""
@@ -418,7 +616,7 @@ def _render_mode_a_comment(pr_number, round_num, run_id, counts, must_fix, merge
      else:
          lines.append("## Result")
          lines.append("")
-         lines.append("No P0/P1/P2 issues found.")
+         lines.append("No P0/P1 issues found.")
          lines.append("")

      lines.append("<details>")
@@ -464,7 +662,7 @@ def _render_final_comment(pr_number, round_num, run_id, status):
      if status == "RESOLVED":
          lines.append("### Status: ✅ All issues resolved")
          lines.append("")
-         lines.append("All P0/P1/P2 issues from the automated review have been addressed.")
+         lines.append("All P0/P1 issues from the automated review have been addressed.")
          lines.append("The PR is ready for human review and merge.")
      else:
          lines.append("### Status: ⚠️ Max rounds reached")
@@ -491,6 +689,8 @@ def main(argv):
      parser.add_argument("--final-report")
      parser.add_argument("--duplicate-groups-json")
      parser.add_argument("--duplicate-groups-b64")
+     parser.add_argument("--decision-log-file")
+     parser.add_argument("--escalation-groups-b64")

      try:
          args = parser.parse_args(argv)
@@ -571,9 +771,25 @@ def main(argv):
          if not duplicate_groups:
              duplicate_groups = _parse_duplicate_groups_b64(args.duplicate_groups_b64 or "")
          merged_findings, merged_map = _merge_duplicates(all_findings, duplicate_groups)
+
+         decision_log_file = (args.decision_log_file or "").strip() or None
+         prior_decisions = []
+         if decision_log_file:
+             try:
+                 decision_log_md = _read_cache_text(decision_log_file)
+                 prior_decisions = _parse_decision_log(decision_log_md)
+             except Exception:
+                 pass
+
+         escalation_groups = _parse_escalation_groups_b64(args.escalation_groups_b64 or "")
+
+         if prior_decisions:
+             merged_findings = _filter_by_decision_log(merged_findings, prior_decisions, escalation_groups)
+
          counts = _counts(merged_findings)

-         must_fix = [f for f in merged_findings if _priority_rank(f.get("priority")) <= 2]
+         must_fix = [f for f in merged_findings if _priority_rank(f.get("priority")) <= 1]
+         optional = [f for f in merged_findings if _priority_rank(f.get("priority")) >= 2]
          stop = len(must_fix) == 0

          body = _render_mode_a_comment(pr_number, round_num, run_id, counts, must_fix, merged_map, raw_reviews)
@@ -616,6 +832,28 @@ def main(argv):
              lines.append(f"  description: {desc}")
              lines.append(f"  suggestion: {sugg}")

+         lines.append("")
+         lines.append("## OptionalIssues")
+         lines.append("")
+         for f in optional:
+             fid = f.get("id") or ""
+             pri = (f.get("priority") or "P3").strip()
+             cat = (f.get("category") or "quality").strip()
+             file = (f.get("file") or "<unknown>").strip()
+             line = (f.get("line") or "null").strip()
+             title = (f.get("title") or "").strip()
+             desc = (f.get("description") or "").replace("\n", "\\n").strip()
+             sugg = (f.get("suggestion") or "(no suggestion provided)").replace("\n", "\\n").strip()
+
+             lines.append(f"- id: {fid}")
+             lines.append(f"  priority: {pri}")
+             lines.append(f"  category: {cat}")
+             lines.append(f"  file: {file}")
+             lines.append(f"  line: {line}")
+             lines.append(f"  title: {title}")
+             lines.append(f"  description: {desc}")
+             lines.append(f"  suggestion: {sugg}")
+
          fix_ref = _repo_relpath(REPO_ROOT, CACHE_DIR / fix_file)
          _write_cache_text(fix_ref, "\n".join(lines) + "\n")
          _json_out({"stop": False, "fixFile": fix_ref})
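For orientation, the `## OptionalIssues` section emitted by the code above looks roughly like this (editor-added sample; the values are borrowed from the package's own test fixtures):

```
## OptionalIssues

- id: GMN-004
  priority: P2
  category: quality
  file: Component.tsx
  line: 100
  title: Component split
  description: 组件拆分建议
  suggestion: Split into smaller components
```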
@@ -0,0 +1,500 @@
+ #!/usr/bin/env python3
+ """
+ Unit tests for pr_review_aggregate.py decision log parsing and filtering.
+
+ Tests cover:
+ 1. _parse_decision_log() - parsing markdown decision logs
+ 2. _filter_by_decision_log() - filtering findings based on prior decisions
+ 3. Edge cases: empty input, malformed data, cross-reviewer matching
+ """
+
+ import pytest
+ from unittest.mock import patch, MagicMock
+ import sys
+ from pathlib import Path
+
+ # Add parent directory to path for importing pr_review_aggregate
+ sys.path.insert(0, str(Path(__file__).parent))
+
+ # Import functions under test
+ from pr_review_aggregate import (
+     _parse_decision_log,
+     _filter_by_decision_log,
+     _parse_escalation_groups_json,
+     _parse_escalation_groups_b64,
+ )
+
+
+ # ============================================================
+ # Fixtures
+ # ============================================================
+
+ @pytest.fixture
+ def empty_decision_log():
+     """Empty decision log markdown."""
+     return ""
+
+
+ @pytest.fixture
+ def valid_decision_log():
+     """Valid decision log with Fixed and Rejected entries."""
+     return """# Decision Log
+
+ PR: 123
+
+ ## Round 1
+
+ ### Fixed
+ - id: CDX-001
+   commit: abc123
+   essence: JSON.parse 未捕获异常
+
+ - id: GMN-002
+   commit: def456
+   essence: 缺少错误边界处理
+
+ ### Rejected
+ - id: GMN-004
+   priority: P2
+   reason: 需要产品决策,超出 PR 范围
+   essence: 组件拆分建议
+
+ - id: CLD-003
+   priority: P3
+   reason: 性能优化非当前优先级
+   essence: 批量查询优化
+ """
+
+
+ @pytest.fixture
+ def malformed_decision_log():
+     """Malformed decision log with missing fields and bad formatting."""
+     return """# Decision Log
+
+ PR: 123
+
+ ### Fixed
+ - id: BROKEN-001
+ # Missing essence field
+
+ ### Rejected
+ - id: BROKEN-002
+   priority: P2
+ # Missing essence and reason
+
+ Some random text that should be ignored
+
+ - id: BROKEN-003
+ this is not a valid field format
+ """
+
+
+ @pytest.fixture
+ def sample_findings():
+     """Sample findings list for filter tests."""
+     return [
+         {
+             "id": "CDX-001",
+             "priority": "P1",
+             "category": "bug",
+             "file": "api.ts",
+             "line": "42",
+             "title": "JSON parse error",
+             "description": "JSON.parse 未捕获异常",
+             "suggestion": "Add try-catch"
+         },
+         {
+             "id": "GMN-004",
+             "priority": "P2",
+             "category": "quality",
+             "file": "Component.tsx",
+             "line": "100",
+             "title": "Component split",
+             "description": "组件拆分建议",
+             "suggestion": "Split into smaller components"
+         },
+         {
+             "id": "CLD-007",
+             "priority": "P0",
+             "category": "bug",
+             "file": "Component.tsx",
+             "line": "100",
+             "title": "Component split (escalated)",
+             "description": "组件拆分建议 - 升级为 P0",
+             "suggestion": "Split into smaller components - critical"
+         },
+         {
+             "id": "NEW-001",
+             "priority": "P1",
+             "category": "bug",
+             "file": "utils.ts",
+             "line": "20",
+             "title": "New issue",
+             "description": "This is a new issue",
+             "suggestion": "Fix it"
+         }
+     ]
+
+
+ @pytest.fixture
+ def prior_decisions():
+     """Sample prior decisions from _parse_decision_log."""
+     return [
+         {
+             "id": "CDX-001",
+             "status": "fixed",
+             "commit": "abc123",
+             "essence": "JSON.parse 未捕获异常"
+         },
+         {
+             "id": "GMN-004",
+             "status": "rejected",
+             "priority": "P2",
+             "reason": "需要产品决策,超出 PR 范围",
+             "essence": "组件拆分建议"
+         }
+     ]
+
+
+ # ============================================================
+ # Test: _parse_decision_log() - Empty Input
+ # ============================================================
+
+ def test_parse_decision_log_empty(empty_decision_log):
+     """
+     Test that empty decision log returns empty list.
+
+     Given: empty string
+     When: _parse_decision_log() is called
+     Then: returns []
+     """
+     result = _parse_decision_log(empty_decision_log)
+     assert result == []
+     assert isinstance(result, list)
+
+
+ # ============================================================
+ # Test: _parse_decision_log() - Valid Input
+ # ============================================================
+
+ def test_parse_decision_log_valid(valid_decision_log):
+     """
+     Test that valid decision log is parsed into structured data.
+
+     Given: valid markdown with Fixed and Rejected sections
+     When: _parse_decision_log() is called
+     Then: returns list of dicts with id, status, essence, and optional fields
+     """
+     result = _parse_decision_log(valid_decision_log)
+
+     # Should have 4 entries (2 Fixed, 2 Rejected)
+     assert len(result) == 4
+
+     # Verify first Fixed entry
+     fixed_1 = result[0]
+     assert fixed_1["id"] == "CDX-001"
+     assert fixed_1["status"] == "fixed"
+     assert fixed_1["commit"] == "abc123"
+     assert fixed_1["essence"] == "JSON.parse 未捕获异常"
+
+     # Verify second Fixed entry
+     fixed_2 = result[1]
+     assert fixed_2["id"] == "GMN-002"
+     assert fixed_2["status"] == "fixed"
+     assert fixed_2["commit"] == "def456"
+     assert fixed_2["essence"] == "缺少错误边界处理"
+
+     # Verify first Rejected entry
+     rejected_1 = result[2]
+     assert rejected_1["id"] == "GMN-004"
+     assert rejected_1["status"] == "rejected"
+     assert rejected_1["priority"] == "P2"
+     assert rejected_1["reason"] == "需要产品决策,超出 PR 范围"
+     assert rejected_1["essence"] == "组件拆分建议"
+
+     # Verify second Rejected entry
+     rejected_2 = result[3]
+     assert rejected_2["id"] == "CLD-003"
+     assert rejected_2["status"] == "rejected"
+     assert rejected_2["priority"] == "P3"
+
+
+ # ============================================================
+ # Test: _parse_decision_log() - Malformed Input
+ # ============================================================
+
+ def test_parse_decision_log_malformed(malformed_decision_log):
+     """
+     Test that malformed decision log degrades gracefully.
+
+     Given: decision log with missing required fields
+     When: _parse_decision_log() is called
+     Then: returns partial data without raising exceptions
+     """
+     # Should not raise exception
+     result = _parse_decision_log(malformed_decision_log)
+
+     # Should return some data (even if incomplete)
+     assert isinstance(result, list)
+
+     # Entries should have at least id and status
+     for entry in result:
+         assert "id" in entry
+         assert "status" in entry
+
+
+ # ============================================================
+ # Test: _filter_by_decision_log() - Fixed Issues
+ # ============================================================
+
+ def test_filter_fixed_issues(sample_findings, prior_decisions):
+     """
+     Test that findings matching Fixed decisions are filtered out.
+
+     Given: findings containing CDX-001 which is in Fixed decisions
+     When: _filter_by_decision_log() is called with empty escalation_groups
+     Then: CDX-001 is filtered out
+     """
+     escalation_groups = []
+
+     result = _filter_by_decision_log(sample_findings, prior_decisions, escalation_groups)
+
+     # CDX-001 should be filtered (it's in Fixed decisions)
+     result_ids = [f["id"] for f in result]
+     assert "CDX-001" not in result_ids
+
+     # Other findings should remain
+     assert "GMN-004" in result_ids or "CLD-007" in result_ids or "NEW-001" in result_ids
+
+
+ # ============================================================
+ # Test: _filter_by_decision_log() - Rejected Without Escalation
+ # ============================================================
+
+ def test_filter_rejected_without_escalation(sample_findings, prior_decisions):
+     """
+     Test that findings matching Rejected decisions are filtered out when NOT in escalation_groups.
+
+     Given: findings containing GMN-004 which is in Rejected decisions
+            and escalation_groups is empty
+     When: _filter_by_decision_log() is called
+     Then: GMN-004 is filtered out
+     """
+     escalation_groups = []
+
+     result = _filter_by_decision_log(sample_findings, prior_decisions, escalation_groups)
+
+     # GMN-004 should be filtered (it's Rejected and not escalated)
+     result_ids = [f["id"] for f in result]
+     assert "GMN-004" not in result_ids
+
+     # New findings should remain
+     assert "NEW-001" in result_ids
+
+
+ # ============================================================
+ # Test: _filter_by_decision_log() - Rejected With Escalation
+ # ============================================================
+
+ def test_filter_rejected_with_escalation(sample_findings, prior_decisions):
+     """
+     Test that findings matching Rejected decisions are kept when in escalation_groups.
+
+     Given: findings containing CLD-007 which is an escalation of GMN-004
+            and escalation_groups contains ["GMN-004", "CLD-007"]
+     When: _filter_by_decision_log() is called
+     Then: CLD-007 is NOT filtered (it's an escalation)
+     """
+     # GMN-004 (Rejected P2) -> CLD-007 (escalated to P0, ≥2 level jump)
+     escalation_groups = [["GMN-004", "CLD-007"]]
+
+     result = _filter_by_decision_log(sample_findings, prior_decisions, escalation_groups)
+
+     # CLD-007 should NOT be filtered (it's an escalation)
+     result_ids = [f["id"] for f in result]
+     assert "CLD-007" in result_ids
+
+     # GMN-004 itself (P2) should still be filtered
+     assert "GMN-004" not in result_ids
+
+
+ # ============================================================
+ # Test: _filter_by_decision_log() - Cross-Reviewer Match
+ # ============================================================
+
+ def test_filter_cross_reviewer_match():
+     """
+     Test that findings with different reviewer IDs but same essence are filtered.
+
+     Given: findings containing GMN-005 (different ID from CDX-001)
+            but prior decisions contain CDX-001 as Fixed
+            and escalation_groups links them: ["CDX-001", "GMN-005"]
+     When: _filter_by_decision_log() is called
+     Then: GMN-005 is filtered (matched via escalation group to Fixed decision)
+     """
+     findings = [
+         {
+             "id": "GMN-005",
+             "priority": "P1",
+             "category": "bug",
+             "file": "api.ts",
+             "line": "42",
+             "title": "JSON parse error",
+             "description": "JSON.parse 未捕获异常 (same essence as CDX-001)",
+             "suggestion": "Add try-catch"
+         },
+         {
+             "id": "NEW-002",
+             "priority": "P2",
+             "category": "quality",
+             "file": "utils.ts",
+             "line": "10",
+             "title": "Different issue",
+             "description": "Completely different",
+             "suggestion": "Fix differently"
+         }
+     ]
+
+     prior_decisions = [
+         {
+             "id": "CDX-001",
+             "status": "fixed",
+             "commit": "abc123",
+             "essence": "JSON.parse 未捕获异常"
+         }
+     ]
+
+     # Escalation group indicates GMN-005 is related to CDX-001
+     escalation_groups = [["CDX-001", "GMN-005"]]
+
+     result = _filter_by_decision_log(findings, prior_decisions, escalation_groups)
+
+     # GMN-005 should be filtered (linked to Fixed CDX-001 via escalation group)
+     result_ids = [f["id"] for f in result]
+     assert "GMN-005" not in result_ids
+
+     # NEW-002 should remain
+     assert "NEW-002" in result_ids
+
+
+ # ============================================================
+ # Test: _parse_escalation_groups_json()
+ # ============================================================
+
+ def test_parse_escalation_groups_json_valid():
+     """Test parsing valid escalation groups JSON."""
+     json_str = '{"escalationGroups": [["GMN-004", "CLD-007"], ["CDX-001", "GMN-005"]]}'
+     result = _parse_escalation_groups_json(json_str)
+
+     assert len(result) == 2
+     assert ["GMN-004", "CLD-007"] in result
+     assert ["CDX-001", "GMN-005"] in result
+
+
+ def test_parse_escalation_groups_json_empty():
+     """Test parsing empty escalation groups JSON."""
+     result = _parse_escalation_groups_json("")
+     assert result == []
+
+
+ def test_parse_escalation_groups_json_malformed():
+     """Test parsing malformed JSON returns empty list."""
+     result = _parse_escalation_groups_json("not valid json {{{")
+     assert result == []
+
+
+ # ============================================================
+ # Test: _parse_escalation_groups_b64()
+ # ============================================================
+
+ def test_parse_escalation_groups_b64_valid():
+     """Test parsing valid base64-encoded escalation groups."""
+     import base64
+     json_str = '{"escalationGroups": [["GMN-004", "CLD-007"]]}'
+     b64_str = base64.b64encode(json_str.encode("utf-8")).decode("ascii")
+
+     result = _parse_escalation_groups_b64(b64_str)
+
+     assert len(result) == 1
+     assert ["GMN-004", "CLD-007"] in result
+
+
+ def test_parse_escalation_groups_b64_empty():
+     """Test parsing empty base64 string."""
+     result = _parse_escalation_groups_b64("")
+     assert result == []
+
+
+ def test_parse_escalation_groups_b64_invalid():
+     """Test parsing invalid base64 returns empty list."""
+     result = _parse_escalation_groups_b64("not-valid-base64!!!")
+     assert result == []
+
+
+ # ============================================================
+ # Test: Integration - Full Workflow
+ # ============================================================
+
+ def test_integration_full_filter_workflow():
+     """
+     Integration test: parse decision log and filter findings.
+
+     Simulates real workflow:
+     1. Parse decision log markdown
+     2. Parse escalation groups
+     3. Filter findings based on decisions and escalations
+     """
+     decision_log_md = """# Decision Log
+
+ PR: 456
+
+ ## Round 1
+
+ ### Fixed
+ - id: CDX-010
+   commit: sha1
+   essence: 类型错误修复
+
+ ### Rejected
+ - id: GMN-020
+   priority: P3
+   reason: 低优先级优化
+   essence: 性能优化建议
+ """
+
+     findings = [
+         {"id": "CDX-010", "priority": "P1", "category": "bug", "file": "a.ts", "line": "1", "title": "Type error", "description": "类型错误修复", "suggestion": "Fix"},
+         {"id": "GMN-020", "priority": "P3", "category": "perf", "file": "b.ts", "line": "2", "title": "Perf opt", "description": "性能优化建议", "suggestion": "Optimize"},
+         {"id": "CLD-030", "priority": "P1", "category": "perf", "file": "b.ts", "line": "2", "title": "Perf opt escalated", "description": "性能优化建议 - 升级", "suggestion": "Optimize now"},
+         {"id": "NEW-100", "priority": "P2", "category": "quality", "file": "c.ts", "line": "3", "title": "New", "description": "新问题", "suggestion": "Fix new"},
+     ]
+
+     # Parse decision log
+     prior_decisions = _parse_decision_log(decision_log_md)
+     assert len(prior_decisions) == 2
+
+     # Escalation: GMN-020 (P3) -> CLD-030 (P1, ≥2 level jump)
+     escalation_groups = [["GMN-020", "CLD-030"]]
+
+     # Filter findings
+     result = _filter_by_decision_log(findings, prior_decisions, escalation_groups)
+     result_ids = [f["id"] for f in result]
+
+     # CDX-010 should be filtered (Fixed)
+     assert "CDX-010" not in result_ids
+
+     # GMN-020 should be filtered (Rejected, not escalated)
+     assert "GMN-020" not in result_ids
+
+     # CLD-030 should remain (escalation of Rejected)
+     assert "CLD-030" in result_ids
+
+     # NEW-100 should remain (new issue)
+     assert "NEW-100" in result_ids
+
+     # Final count: 2 findings remain
+     assert len(result) == 2
+
+
+ if __name__ == "__main__":
+     pytest.main([__file__, "-v"])
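Note that the suite registers a `__main__` entry point via `pytest.main`, so it can be run directly with `python3` as well as through a plain `pytest -v` invocation.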
@@ -58,24 +58,32 @@ agent: sisyphus
  - Extract: `contextFile`, `runId`, `headOid` (if present)
  - **CRITICAL**: you must wait for this Task to complete successfully and obtain `contextFile` before moving on to Step 2

- 2. Task (parallel): `codex-reviewer` + `claude-reviewer` + `gemini-reviewer` + `gh-thread-reviewer` **(depends on Step 1's contextFile)**
+ **Check for the Decision Log**:
+ - Check whether `./.cache/decision-log-pr{{PR_NUMBER}}.md` exists
+ - If it exists, record the path as `decisionLogFile` (for later steps)
+ - If it does not exist, leave `decisionLogFile` empty or omit it

- - **DEPENDENCY**: these reviewers depend on the `contextFile` returned by Step 1, so **they may only be launched in parallel after Step 1 completes**
+ 2. Task (parallel): `codex-reviewer` + `claude-reviewer` + `gemini-reviewer` + `gh-thread-reviewer` **(depends on Step 1's contextFile and decisionLogFile)**
+
+ - **DEPENDENCY**: these reviewers depend on the `contextFile` and the `decisionLogFile` (if present) returned by Step 1, so **they may only be launched in parallel after Step 1 completes**
  - Every reviewer prompt must include:
    - `PR #{{PR_NUMBER}}`
    - `round: <ROUND>`
    - `runId: <RUN_ID>` (from Step 1's output; must be passed through, never self-generated)
    - `contextFile: ./.cache/<file>.md` (from Step 1's output)
- - reviewers read `contextFile` by default; read-only `git/gh` commands may be used to fetch the diff when necessary
+   - `decisionLogFile: ./.cache/decision-log-pr{{PR_NUMBER}}.md` (if present, from the check above)
+ - reviewers read `contextFile` by default; if `decisionLogFile` exists, its path should be included in the prompt so the reviewer can consult prior rounds' decisions; read-only `git/gh` commands may be used to fetch the diff when necessary
  - Ignore: 1. noise caused by code formatting 2. formatting issues beyond what lint already checks 3. insufficient unit-test coverage
  - Pay special attention to: logic, security, performance, maintainability
  - Also watch earlier rounds of fixes and discussion on the PR; do not repeatedly raise issues that were already rejected or fixed
  - Also watch whether the fixes themselves introduced new problems
+
+ Note: the fixFile is split into `IssuesToFix` (P0/P1, must fix) and `OptionalIssues` (P2/P3, at pr-fix's discretion).
  - Every reviewer outputs: `reviewFile: ./.cache/<file>.md` (Markdown)

  3. Task: `pr-review-aggregate`

- - prompt must include: `PR #{{PR_NUMBER}}`, `round: <ROUND>`, `runId: <RUN_ID>`, `contextFile: ./.cache/<file>.md`, and 1+ `reviewFile: ./.cache/<file>.md` entries
+ - prompt must include: `PR #{{PR_NUMBER}}`, `round: <ROUND>`, `runId: <RUN_ID>`, `contextFile: ./.cache/<file>.md`, 1+ `reviewFile: ./.cache/<file>.md` entries, and `decisionLogFile: ./.cache/decision-log-pr{{PR_NUMBER}}.md` (if present)
  - Output: `{"stop":true}` or `{"stop":false,"fixFile":"..."}`
  - If `stop=true`: this round ends and the loop exits
  - **Uniqueness constraint**: only one Review Summary may be published per round; the script has a built-in idempotency check, so repeated calls will not publish twice
@@ -97,6 +105,10 @@ agent: sisyphus
  - Output: `{"ok":true}`
  - **Uniqueness constraint**: only one Fix Report may be published per round; the script has a built-in idempotency check, so repeated calls will not publish twice

+ **Decision Log update**:
+ - while fixing, the `pr-fix` agent appends this round's decisions (Fixed/Rejected) to `./.cache/decision-log-pr{{PR_NUMBER}}.md`
+ - the next review round automatically uses the updated decision log, avoiding re-raising already-decided issues
+
  6. Next round

  - Return to 1 (start the next round of reviewers)
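The decision-log existence check in Step 1 is deliberately trivial; an editor-added sketch (the PR number is a placeholder):

```python
from pathlib import Path

pr_number = 123  # hypothetical {{PR_NUMBER}}
log = Path(f"./.cache/decision-log-pr{pr_number}.md")
# Pass the path to reviewers/aggregate only when the file actually exists.
decision_log_file = str(log) if log.exists() else None
```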
package/README.md CHANGED
@@ -128,6 +128,63 @@ target(端)不写死,由 `env-policy.jsonc.targets` 定义;`commands.jso

  See `example/`: it contains a minimal, readable `dx/config` example and shows how to wire dx into a pnpm+nx monorepo.

+ ## PR Review Loop (automated review-and-fix cycle)
+
+ dx ships with an automated PR review workflow: parallel review → aggregate conclusions → generate a fix list → auto-fix → re-review, looping at most 3 rounds so that PRs converge faster.
+
+ ### When to use it
+
+ - The PR change is large and you want more systematic coverage of security/performance/maintainability issues
+ - You want review suggestions turned into an actionable fix list (fixFile) once CI passes
+ - You want to avoid the same issue being raised over and over across rounds (via the Decision Log mechanism)
+
+ ### How to run it
+
+ After creating the PR, run:
+
+ ```
+ /pr-review-loop --pr <PR_NUMBER>
+ ```
+
+ For more on the command, see `@opencode/commands/pr-review-loop.md`.
+
+ Tip: the PR-creation flow also surfaces a shortcut to this; see `@opencode/commands/git-commit-and-pr.md`.
+
+ ### Workflow overview
+
+ - Precheck (`pr-precheck`): a compile/precheck gate runs first; on failure, fix and re-check (at most 2 attempts)
+ - Fetch context (`pr-context`): produce this round's context cache, `contextFile`
+ - Parallel review (4 reviewers): `codex-reviewer` / `claude-reviewer` / `gemini-reviewer` / `gh-thread-reviewer`
+ - Aggregate (`pr-review-aggregate`): merge reviewer results, deduplicate, emit `fixFile`, and publish this round's Review Summary
+ - Fix (`pr-fix`): fix each entry in `fixFile` one by one (a separate commit + push per findingId) and emit `fixReportFile`
+ - Publish the fix report (`pr-review-aggregate` mode B): publish the Fix Report
+
+ Note: every comment the loop publishes to the PR carries `<!-- pr-review-loop-marker -->`, used for idempotency and to avoid re-harvesting bot comments.
+
+ ### Cache files (in-project ./.cache/)
+
+ All intermediate artifacts of the flow are written to the project's `./.cache/`, and repo-relative paths are passed between agents/commands:
+
+ - `./.cache/pr-context-pr<PR>-r<ROUND>-<RUN_ID>.md` (contextFile)
+ - `./.cache/review-<REVIEWER>-pr<PR>-r<ROUND>-<RUN_ID>.md` (reviewFile)
+ - `./.cache/fix-pr<PR>-r<ROUND>-<RUN_ID>.md` (fixFile)
+ - `./.cache/fix-report-pr<PR>-r<ROUND>-<RUN_ID>.md` (fixReportFile)
+
+ ### Decision Log (cross-round decision record, for convergence)
+
+ To stop issues rejected in round one from resurfacing in later rounds, the PR Review Loop persists each round's decisions in a Decision Log:
+
+ - File: `./.cache/decision-log-pr<PR_NUMBER>.md`
+ - Producer: `pr-fix` creates/appends it after finishing fixes (append-only; overwriting history is forbidden)
+ - Content: each round's Fixed/Rejected entries plus an `essence` (a one-sentence description of the issue's essence, used for smart matching later)
+
+ In later rounds:
+
+ - reviewers that receive a `decisionLogFile` must read it and comply: fixed issues are not re-raised, and rejected issues are not re-raised (unless escalated in severity)
+ - in mode A, aggregate uses the LLM to compare `essence` fields, judge whether issues are "essentially the same", and build the `escalation_groups` input for the script (an illustrative stand-in follows this hunk)
+
+ Escalation rule: a rejected issue may only be reopened when the new finding's priority is ≥2 levels higher than the rejected one (e.g. P3→P1, P2→P0).
+
  ## Commands

  dx commands are driven by `dx/config/commands.json`, with several built-in internal runners (so projects do not need to depend on any `scripts/lib/*.js`):
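The essence comparison itself is delegated to the LLM inside pr-review-aggregate; as a purely illustrative, editor-added stand-in (not the package's method), a keyword-overlap check conveys the intent:

```python
def same_essence(essence: str, finding_description: str) -> bool:
    """Naive stand-in for the LLM judgment: token overlap above a threshold."""
    a = set(essence.lower().split())
    b = set(finding_description.lower().split())
    if not a or not b:
        return False
    return len(a & b) / len(a | b) >= 0.5  # arbitrary Jaccard threshold

print(same_essence("missing input validation", "input validation missing for username"))  # True
```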
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@ranger1/dx",
-   "version": "0.1.39",
+   "version": "0.1.40",
    "type": "module",
    "license": "MIT",
    "repository": {