@qwen-code/qwen-code 0.15.6 → 0.15.7-preview.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/bundled/qc-helper/docs/configuration/model-providers.md +63 -0
- package/bundled/qc-helper/docs/configuration/settings.md +19 -12
- package/bundled/qc-helper/docs/features/code-review.md +45 -33
- package/bundled/qc-helper/docs/features/skills.md +32 -3
- package/bundled/review/DESIGN.md +151 -30
- package/bundled/review/SKILL.md +210 -79
- package/cli.js +31488 -17007
- package/locales/ca.js +3 -1
- package/locales/de.js +3 -1
- package/locales/en.js +4 -2
- package/locales/fr.js +3 -1
- package/locales/ja.js +3 -1
- package/locales/pt.js +3 -1
- package/locales/ru.js +3 -1
- package/locales/zh-TW.js +3 -1
- package/locales/zh.js +3 -1
- package/package.json +2 -2
package/bundled/review/DESIGN.md
CHANGED
@@ -2,20 +2,35 @@
 
 > Architecture decisions, trade-offs, and rejected alternatives for the `/review` skill.
 
-## Why
+## Why 9 agents + 1 verify + iterative reverse, not 1 agent?
 
 **Considered:**
 
 - **1 agent (Copilot approach):** Single agent with tool-calling, reads and reviews in one pass. Cheapest (1 LLM call). But dimensional coverage depends entirely on one prompt's attention — easy to miss performance issues while focused on security.
-- **5 parallel agents (
+- **5 parallel agents (original design):** Each agent focuses on one dimension. Higher coverage through forced diversity of perspective. Limited by the combined Correctness+Security agent and a single undirected pass — the recall ceiling left findings on the table that the user only discovered in subsequent /review rounds.
+- **9 parallel agents (current):** 6 review dimensions (Correctness, Security, Code Quality, Performance, Test Coverage, Undirected) + Build & Test. Undirected runs as 3 personas in parallel.
 
-**Decision:**
+**Decision:** 9 agents. The marginal cost (9x vs 1x) is acceptable because:
 
-1. Parallel execution means time cost is ~1x (all
+1. Parallel execution means time cost is ~1x (all 9 agents launch in one response; see the sketch after this list)
 2. Dimensional focus produces higher recall (fewer missed issues)
-3.
+3. Three undirected personas (attacker / 3am-oncall / maintainer) catch cross-dimensional issues that a single undirected agent's prompt-induced bias would miss
 4. The "Silence is better than noise" principle + verification controls precision
 
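To make the launch pattern concrete, here is a minimal TypeScript sketch of the fan-out; `runAgent` and the `Finding` shape are hypothetical stand-ins, since the shipped skill launches subagents through the model's tool calls rather than application code.

```ts
// Hypothetical sketch of the 9-agent fan-out (illustrative, not the shipped code).
type Finding = { file: string; line: number; claim: string };

const AGENTS = [
  "correctness",
  "security",
  "code-quality",
  "performance",
  "test-coverage",
  "undirected:attacker",
  "undirected:3am-oncall",
  "undirected:maintainer",
  "build-and-test", // Agent 7: skipped in cross-repo lightweight mode
];

async function runReviewAgents(
  runAgent: (agent: string) => Promise<Finding[]>,
): Promise<Finding[]> {
  // Launching all agents together keeps wall-clock cost ~1x while call cost is 9x.
  const results = await Promise.all(AGENTS.map((a) => runAgent(a)));
  return results.flat();
}
```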
+### Why split Correctness from Security
+
+A single Correctness+Security agent has split attention — empirically, one dimension dominates the output and the other stays shallow. The mindsets differ too: correctness asks "does this do what it intends," while security asks "what unintended thing can a hostile actor make this do." Splitting forces both to get full attention.
+
+### Why a dedicated Test Coverage agent
+
+Test gaps are a systematic blind spot. Review agents focused on bugs in the new code itself rarely look at whether the change came with adequate tests. A dedicated agent that asks "what scenarios in this diff are untested?" catches misses no other dimension hits.
+
+### Why three undirected personas instead of one or many
+
+A single undirected agent has prompt-induced bias and tends to find the same kinds of issues across runs. Three personas — attacker / 3am-oncall / maintainer — force completely different mental traversals, and the union of their findings is meaningfully larger than 1.5× a single agent's output.
+
+Empirically, ensemble diversity drops sharply past 3-5 sampled paths. Three is the sweet spot: enough to break single-prompt bias, few enough that the marginal cost stays bounded.
+
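The framing-delta idea can be sketched as three variants of one task; the persona texts below are invented placeholders, not the skill's actual prompts.

```ts
// Invented placeholder framings: the real persona prompts are not in this
// document. This only illustrates "same task, three mental traversals".
const PERSONAS: Record<string, string> = {
  attacker: "Assume hostility: what unintended behavior can this change be made to exhibit?",
  "3am-oncall": "You were just paged: what in this change will page you again?",
  maintainer: "You will own this code for years: what here will be painful to live with?",
};

const undirectedPrompts = Object.entries(PERSONAS).map(
  ([name, framing]) =>
    `[persona: ${name}] ${framing} Review the entire diff; no dimension restrictions.`,
);
```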
 ## Why batch verification instead of N independent agents?
 
 **Considered:**
@@ -25,16 +40,46 @@
 
 **Decision:** Batch. The quality difference is minimal — a single agent verifying 15 findings has MORE context than 15 independent agents (it sees cross-finding relationships). Cost drops from O(N) to O(1).
 
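As a sketch of the two shapes, assuming a hypothetical `verify` call that takes a batch of claims and returns one boolean per claim:

```ts
// Hypothetical sketch: N independent verifiers vs one batch verifier.
type Claim = { file: string; line: number; claim: string };
type Verify = (claims: Claim[]) => Promise<boolean[]>;

// O(N) LLM calls: each verifier sees one finding in isolation.
async function verifyIndependently(verify: Verify, findings: Claim[]): Promise<boolean[]> {
  const perFinding = await Promise.all(findings.map((f) => verify([f])));
  return perFinding.flat();
}

// O(1) LLM calls: one verifier sees all findings at once, so it can also
// notice cross-finding relationships (duplicates, shared root causes).
async function verifyBatch(verify: Verify, findings: Claim[]): Promise<boolean[]> {
  return verify(findings);
}
```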
-## Why reverse audit is a separate step,
+## Why reverse audit is a separate step, and why iterative
 
-
+### Why separate from verification
 
 - **Merge with verification:** Verification agent also looks for gaps. Saves 1 LLM call.
 - **Separate step (chosen):** Reverse audit is a full diff re-read, not a finding check. Different cognitive task.
 
-
+Verification is targeted (check specific claims at specific locations). Reverse audit is open-ended (scan the entire diff for missed issues). Combining them overloads one agent with two fundamentally different tasks, degrading both.
+
+### Why iterative (multi-round)
+
+A single reverse audit pass leaves behind whatever the reverse audit agent itself missed. Each new round receives the cumulative finding list from prior rounds, so it focuses on what is still undiscovered. Empirically, most PRs converge in 1-2 rounds; the 3-round hard cap prevents runaway cost on pathological cases.
+
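The loop shape, as a hypothetical TypeScript sketch; `reverseAudit` is an invented stand-in whose empty result corresponds to the "No issues found" termination.

```ts
// Hypothetical sketch of the iterative reverse audit (illustrative names).
type Finding = { file: string; line: number; claim: string };

const MAX_ROUNDS = 3; // hard cap: a safety net, not the common path

async function iterativeReverseAudit(
  reverseAudit: (known: Finding[]) => Promise<Finding[]>,
  confirmed: Finding[],
): Promise<Finding[]> {
  const all = [...confirmed];
  for (let round = 1; round <= MAX_ROUNDS; round++) {
    // Each round sees the cumulative list, so it hunts what is still undiscovered.
    const fresh = await reverseAudit(all);
    if (fresh.length === 0) break; // "No issues found": most PRs exit here early
    all.push(...fresh); // reverse audit findings skip verification
  }
  return all;
}
```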
+### Why cap at 3 rounds, not unlimited
+
+Diminishing returns. Past round 3, the marginal yield is low and the stuck-loop hazard rises (the model may fabricate issues to satisfy the "find more" framing). The "No issues found" termination already exits early on most PRs — the cap is a safety net, not the common path.
+
+**Optimization preserved:** Reverse audit findings skip verification (across all rounds). The agent has full context, so its output is inherently high-confidence.
 
-
+## Why low-confidence over rejection on uncertain findings
+
+**Original behavior:** When verification was uncertain, the finding was rejected. Bias toward precision.
+
+**Problem:** Uncertain findings often turn out to be real after human inspection. Rejection silently swallows valid concerns. Users discover them in the next `/review` iteration or after merging — exactly the "iterate many rounds" pain this redesign targets.
+
+**Current behavior:** Uncertain → "confirmed (low confidence)". Low-confidence findings:
+
+- Appear in terminal output under "Needs Human Review"
+- Are filtered out of PR inline comments (preserves "Silence is better than noise" for PR interactions)
+- Do not affect the verdict (Approve/Request changes/Comment is computed from high-confidence findings only)
+
+**Trade-off:** Terminal output gets noisier; PR comments stay clean. The user sees concerns without the cost of false-positive PR noise.
+
+**Reserved for outright rejection:**
+
+- Finding describes behavior the code does not actually have (factually wrong about the code)
+- Finding matches an Exclusion Criterion (pre-existing issue, formatting nitpick, etc.)
+- Vague suspicion with no concrete code reference
+
+This boundary keeps the low-confidence bucket meaningful — it's "likely real but needs human judgment," not "I have no idea."
 
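A minimal sketch of the routing rules above; the types and the verdict policy shown are assumptions for illustration, not the shipped definitions.

```ts
// Hypothetical sketch of confidence routing (illustrative only).
type Verified = { file: string; line: number; claim: string; confidence: "high" | "low" };
type Verdict = "APPROVE" | "REQUEST_CHANGES" | "COMMENT";

function route(findings: Verified[]) {
  const high = findings.filter((f) => f.confidence === "high");
  const low = findings.filter((f) => f.confidence === "low");
  return {
    terminal: { findings: high, needsHumanReview: low }, // low shows under "Needs Human Review"
    prInlineComments: high, // silence is better than noise on the PR itself
    verdict: computeVerdict(high), // low-confidence findings never move the verdict
  };
}

function computeVerdict(high: Verified[]): Verdict {
  // Assumed policy for illustration: any high-confidence finding blocks approval.
  return high.length === 0 ? "APPROVE" : "REQUEST_CHANGES";
}
```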
 ## Why worktree instead of stash + checkout
 
@@ -59,6 +104,78 @@ Applied throughout:
 - Uncertain issues → rejected, not reported
 - Pattern aggregation → same issue across N files reported once
 
+## Why classify existing Qwen Code comments instead of always prompting
+
+**Original behavior:** any existing Qwen Code review comment on the PR → inform the user and require confirmation before posting new comments.
+
+**Problem:** in real /review usage, most existing Qwen Code comments fall into one of three "no-real-conflict" cases:
+
+1. **Stale by commit**: the comment was posted against an older PR HEAD; the underlying code has changed.
+2. **Resolved by reply**: someone has replied in the thread (the original author's "fixed in abc123" or a reviewer's "ok, approved"). The conversation is closed.
+3. **No anchor overlap**: the old comment is on a different `(path, line)` from any new finding. They simply coexist.
+
+Forcing the user to confirm-or-decline every time the PR has any Qwen Code history creates prompt fatigue without protecting against the real risk — which is **commenting twice on the same line**, producing visual duplicates that look like a bug to PR readers.
+
+**New behavior:** classify each existing Qwen Code comment by checking in priority order — **Stale by commit** > **Resolved by reply** > **Overlap** (same `path + line` as a new finding) > **No conflict**. The first match wins. Only the Overlap class blocks; the other three log to the terminal and continue.
+
+**Priority matters because** a stale or resolved comment that happens to share a `(path, line)` with a new finding is not a real conflict — the underlying code may have changed in the stale case, and the conversation is already closed in the resolved case. Without priority, the line-based check would fire false-positive prompts on those.
+
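The first-match-wins order translates to a small classifier; this sketch uses assumed field names (`commitSha`, `hasReply`), not the shipped types.

```ts
// Hypothetical sketch of the first-match-wins comment classifier.
type ExistingComment = {
  path: string;
  line: number;
  commitSha: string; // the PR HEAD the comment was posted against (assumed field)
  hasReply: boolean; // someone replied in the thread (assumed field)
};

type Bucket = "stale-by-commit" | "resolved-by-reply" | "overlap" | "no-conflict";

function classify(
  c: ExistingComment,
  headSha: string,
  newAnchors: Set<string>, // "path:line" keys of the new findings
): Bucket {
  if (c.commitSha !== headSha) return "stale-by-commit"; // checked first
  if (c.hasReply) return "resolved-by-reply";
  if (newAnchors.has(`${c.path}:${c.line}`)) return "overlap"; // the only blocking class
  return "no-conflict";
}
```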
+**Trade-off:**
+
+- ✅ Common case (re-running /review on a PR after a few new commits) no longer prompts unnecessarily.
+- ✅ The terminal log keeps the user informed about what was skipped, so transparency is preserved.
+- ❌ Conceptual overlap that doesn't share a line is missed — e.g. a prior comment on line 559 about cache lifecycle and a new comment on line 1352 about cache lifecycle would be classified `No conflict`. Line-based heuristics cannot detect "same root cause, different anchor." If the user wants semantic-overlap detection, they must read the terminal log and the PR comments themselves.
+
+Line-based classification was chosen because it is deterministic, cheap, and catches the precise UX failure (a visual duplicate at the same line). Semantic-overlap detection would require an extra LLM call for what is, in practice, a rare edge case.
+
+## Why downgrade APPROVE when CI is non-green
+
+**Original behavior:** if Step 7 resolved the verdict to `APPROVE`, the API event was submitted as `APPROVE` without any check on CI status.
+
+**Problem:** the LLM review pipeline reads the diff and surrounding code statically. It does not run tests, does not exercise integration boundaries, and does not see runtime failures. CI does. A PR with red CI but no static red flags is **the worst case** for an LLM `APPROVE` — the human reader sees an Approve badge from a tool that didn't actually verify the change runs.
+
+**Current behavior:** before submitting `APPROVE`, query `check-runs` and the legacy commit `statuses` for the PR HEAD. Classify (a sketch follows the list):
+
+- All success → `APPROVE` continues.
+- Any failure → downgrade `APPROVE` to `COMMENT`; the body explains why.
+- All pending → downgrade to `COMMENT` (don't approve before CI decides); the body explains why.
+
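A minimal sketch of that classification, loosely mirroring GitHub's check-run `conclusion` values and legacy commit status `state` values; how a mixed success-plus-pending set is treated is this sketch's assumption.

```ts
// Hypothetical sketch: collapse check runs + legacy statuses into one verdict.
type CiState = "success" | "failure" | "pending";

function classifyCi(
  checkConclusions: (string | null)[], // null = check run still in progress
  legacyStates: string[], // "success" | "failure" | "error" | "pending"
): CiState {
  const all = [...checkConclusions, ...legacyStates];
  if (all.some((s) => s === "failure" || s === "error" || s === "timed_out")) {
    return "failure"; // any failure: downgrade APPROVE to COMMENT
  }
  if (all.some((s) => s === null || s === "pending")) {
    return "pending"; // don't approve before CI decides
  }
  return "success"; // APPROVE may proceed
}
```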
+**Why downgrade rather than block:** the reviewer LLM has done substantive work; throwing the review away because CI is red wastes that work. Downgrading to `COMMENT` keeps all inline findings, preserves the static review value, and lets GitHub's check status carry the "do not merge" signal naturally.
+
+**Why this stacks with self-PR downgrade:** a self-authored PR with red CI hits **both** downgrade rules. The event is `COMMENT` either way, so stacking is operationally a no-op — but the body should mention both reasons, so a future maintainer reading the review knows why an LLM that found no Critical issues did not approve.
+
+**Trade-off:**
+
+- ✅ No more "LLM approved while CI is red" embarrassments.
+- ✅ The reviewer's substantive work (inline comments) is preserved.
+- ❌ Adds two extra API calls (`check-runs` + `statuses`) per APPROVE-bound submit; this only happens on the `APPROVE` path, so the cost is negligible.
+- ❌ A genuinely flaky CI failure can downgrade what should have been an Approve. Mitigation: the body text directs the user to verify; they can always submit `APPROVE` manually after triaging.
+
+## Why the deterministic checks live as `qwen review` subcommands
+
+**Original behavior:** Step 9's three pre-submission checks (self-PR detection, CI status, existing-comment classification) and Step 11's cleanup were inlined in SKILL.md as `gh api` / `git` shell commands. The LLM ran each command itself, parsed the output, and applied the classification logic.
+
+**Problems with inlining:**
+
+1. **Token cost**: each command, jq filter, classification rule, and output schema is part of the prompt — every `/review` invocation pays this cost.
+2. **Drift risk**: the classification logic exists twice (in the prompt's English description, and in whatever the LLM internally synthesizes). When rules change (a new check_run conclusion type, a new comment bucket), both copies have to update or they drift.
+3. **Cross-platform fragility**: `/tmp/qwen-review-*` worked in the macOS shell, but Node's `os.tmpdir()` returned `/var/folders/...`. The mismatch only surfaced when the cleanup logic was tested.
+4. **Testability**: prompt text isn't unit-testable. Logic that classifies CI states or comment buckets is exactly the kind of thing that benefits from real assertions.
+
+**Current behavior:** the deterministic logic lives in `packages/cli/src/commands/review/` as TypeScript subcommands of the `qwen` CLI (the report shape is sketched after the list):
+
+- `qwen review presubmit <pr> <sha> <owner/repo> <out>` — emits a single JSON report with `isSelfPr`, `ciStatus`, `existingComments` (4 buckets), `downgradeApprove`, `downgradeRequestChanges`, `downgradeReasons`, `blockOnExistingComments`. SKILL.md only describes the schema and how to apply the report.
+- `qwen review cleanup <target>` — removes the worktree, branch ref, and per-target temp files. Idempotent.
+
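Reconstructed from the field list above rather than copied from the source, the report plausibly has a shape like:

```ts
// Hypothetical reconstruction of the presubmit report schema; the bucket
// representation (comment IDs) is an assumption.
interface PresubmitReport {
  isSelfPr: boolean;
  ciStatus: "success" | "failure" | "pending";
  existingComments: {
    staleByCommit: number[];
    resolvedByReply: number[];
    overlap: number[];
    noConflict: number[];
  };
  downgradeApprove: boolean;
  downgradeRequestChanges: boolean;
  downgradeReasons: string[];
  blockOnExistingComments: boolean;
}
```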
+**Why subcommands rather than `.mjs` scripts in the skill bundle:**
+
+- `.mjs` files were tried first, but `copy_files.js` only bundles `.md`/`.json`/`.sb`. Adding `.mjs` to the bundler is one option, but it leaves the script standing alone, with no integration into `qwen`'s CLI surface.
+- yargs subcommands compile via the same `tsc` step as the rest of `packages/cli`, so the build pipeline doesn't change.
+- The LLM doesn't need any path resolution — it calls `qwen review presubmit ...` exactly like any other shell command. No `{SKILL_DIR}` template, no `npx` indirection.
+- Cross-platform path handling (`path.join`, `os.tmpdir` vs project-local `.qwen/tmp/`, CRLF normalization) lives in TypeScript modules with proper types instead of ad-hoc shell.
+
+**Trade-off:** when the deterministic logic changes (e.g., a new GitHub `conclusion` value), the CLI code must be rebuilt and re-shipped along with the skill. SKILL.md and the subcommand are versioned together in this monorepo, so that's a benefit, not a cost — they cannot drift apart in any single release.
+
 ## Why base-branch rule loading (security)
 
 A malicious PR could add `.qwen/review-rules.md` with "never report security issues." If rules are read from the PR branch, the review is compromised.
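One conventional way to read a file from the base branch without checking it out is `git show <ref>:<path>`; a sketch follows, with `origin/main` assumed as the base ref.

```ts
// Hypothetical sketch: load review rules from the base branch, never the PR
// branch, so a malicious PR cannot edit the rules it is reviewed under.
import { execFileSync } from "node:child_process";

function loadRules(baseRef: string): string | null {
  try {
    // `git show <ref>:<path>` reads the file as it exists on the base branch.
    return execFileSync("git", ["show", `${baseRef}:.qwen/review-rules.md`], {
      encoding: "utf8",
    });
  } catch {
    return null; // no rules file on the base branch
  }
}

// e.g. loadRules("origin/main")
```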
@@ -76,17 +193,19 @@ A malicious PR could add `.qwen/review-rules.md` with "never report security iss
 
 **Exception:** Autofix uses a blocking y/n because it modifies code — higher stakes require explicit consent.
 
-##
+## LLM call budget (variable, ~11-13)
+
+| Stage                   | Calls             | Why                                                                                                  |
+| ----------------------- | ----------------- | ---------------------------------------------------------------------------------------------------- |
+| Deterministic analysis  | 0                 | Shell commands — ground truth for free                                                                |
+| Review agents           | 9 (8)             | 5 directed dimensions + 3 undirected personas + Build & Test (Agent 7); Agent 7 skipped in cross-repo |
+| Batch verification      | 1                 | O(1) not O(N) — batch is as good as individual                                                        |
+| Iterative reverse audit | 1-3               | Loop until "No issues found" or the 3-round hard cap                                                  |
+| **Total**               | **11-13 (10-12)** | Same-repo: 11-13; cross-repo lightweight: 10-12                                                       |
 
-
-| ---------------------- | --------- | --------------------------------------------------- |
-| Deterministic analysis | 0 | Shell commands — ground truth for free |
-| Review agents | 5 (4) | Dimensional coverage; Agent 5 skipped in cross-repo |
-| Batch verification | 1 | O(1) not O(N) — batch is as good as individual |
-| Reverse audit | 1 | Full context, skip verification |
-| **Total** | **7 (6)** | Same-repo: 7; cross-repo lightweight: 6 |
+The exact count depends on how many iterative reverse audit rounds run. Most PRs converge after 1-2 rounds; the cap prevents runaway cost.
 
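The totals reduce to simple arithmetic; a tiny sketch, with the function name invented:

```ts
// The call-budget arithmetic from the table above.
function callBudget(crossRepo: boolean, reverseRounds: 1 | 2 | 3): number {
  const reviewAgents = crossRepo ? 8 : 9; // Agent 7 (Build & Test) skipped cross-repo
  const batchVerification = 1;
  return reviewAgents + batchVerification + reverseRounds; // same-repo 11-13, cross-repo 10-12
}
```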
-Competitors: Copilot uses 1 call, Gemini uses 2, Claude /ultrareview uses 5-20 (cloud). Our
+Competitors: Copilot uses 1 call, Gemini uses 2, Claude /ultrareview uses 5-20 (cloud). Our 11-13 biases toward higher recall — the assumption is that "find more issues per round" is more valuable than minimizing per-run cost, because every missed issue forces the user into another `/review` iteration.
 
 ## Why cross-repo uses lightweight mode
 
@@ -118,26 +237,27 @@ Key implementation detail: Step 9 must use the owner/repo extracted from the URL
 | `gh pr checkout --detach` for worktree | It modifies the current working tree, defeating the purpose of worktree isolation. |
 | Shell-like tokenizer for argument parsing | LLM handles quoted arguments naturally from conversation context. |
 | Model attribution via LLM self-identification | Unreliable (hallucination risk). `{{model}}` template variable from `config.getModel()` is accurate. |
-| Verbose agent prompts (no length limit) |
+| Verbose agent prompts (no length limit) | 9 long prompts exceed the output token budget → the model falls back to serial launch. Each prompt must be ≤200 words to stay parallel. |
 | Relaxed parallel instruction ("if you can't fit 5, try 3+2") | Model always takes the fallback. Strict "MUST include all in one response" is required. |
 
 ## Token cost analysis
 
 For a PR with 15 findings:
 
-| Approach
-|
-| Copilot (1 agent)
-| Gemini (2 LLM tasks)
-| Our design (
-| Our design (batch verify)
-|
+| Approach                                            | LLM calls | Notes                                                         |
+| --------------------------------------------------- | --------- | ------------------------------------------------------------- |
+| Copilot (1 agent)                                   | 1         | Lowest cost, lowest coverage                                  |
+| Gemini (2 LLM tasks)                                | 2         | Good cost, medium coverage                                    |
+| Our design (5 agents, N verify)                     | 21        | 5+15+1 — too expensive                                        |
+| Our design (5 agents, batch verify, single reverse) | 7         | 5+1+1 — original design                                       |
+| Our design (9 agents, iterative reverse, current)   | 11-13     | 9+1+(1-3) — ~60-85% more calls for meaningfully higher recall |
+| Claude /ultrareview                                 | 5-20      | Cloud-hosted, cost on Anthropic                               |
 
 ## Future optimization: Fork Subagent
 
 > Dependency: [Fork Subagent proposal](https://github.com/wenshao/codeagents/blob/main/docs/comparison/qwen-code-improvement-report-p0-p1-core.md#2-fork-subagentp0)
 
-**Current problem:** Each of the
+**Current problem:** Each of the 11-13 LLM calls (9 review + 1 verify + 1-3 reverse audit rounds) creates a new subagent from scratch. The system prompt (~50K tokens) is sent independently to each, totaling ~570-680K input tokens with massive redundancy. The cost grew along with the agent count — Fork Subagent matters more under the current 9-agent design than under the original 5-agent design.
 
 **Fork Subagent solution:** Instead of creating independent subagents, fork the current conversation. All forks inherit the parent's full context (system prompt, conversation history, Step 1/1.1/1.5 results) and share a prompt cache prefix. The API caches the common prefix once; each fork only pays for its unique delta (~2K per agent).
 
@@ -145,13 +265,13 @@ For a PR with 15 findings:
 Current (independent subagents):
 Agent 1: [50K system] + [2K task] = 52K
 Agent 2: [50K system] + [2K task] = 52K
-...×
+...× 11-13 agents = ~570-680K total input tokens
 
 With Fork + prompt cache sharing:
 Cached prefix: [50K system + conversation history] (cached once)
 Fork 1: [cache hit] + [2K delta] = ~2K effective
 Fork 2: [cache hit] + [2K delta] = ~2K effective
-...×
+...× 11-13 forks = ~50K cached + ~22-26K delta = ~72-76K total
 ```
 
 **Additional benefits for /review:**
@@ -159,7 +279,8 @@ With Fork + prompt cache sharing:
 - Forked agents inherit Step 3 linter results, PR context, review rules — no need to repeat in each agent prompt
 - SKILL.md workaround "Do NOT paste the full diff into each agent's prompt" becomes unnecessary — fork already has the context
 - Verification and reverse audit agents inherit all prior findings naturally
+- Agent 6 personas can fork from a shared diff-loaded base, paying only the persona-framing delta
 
-**Estimated savings:** ~
+**Estimated savings:** ~85-90% token reduction (~620K → ~75K) with zero quality impact. The savings ratio is now even more compelling than under the 5-agent design.
 
 **Why not implemented now:** Fork Subagent requires changes to the Qwen Code core (`AgentTool`, `forkSubagent.ts`, `CacheSafeParams`). This is a platform-level feature (~400 lines, ~5 days), not a /review-specific change. When available, /review should be updated to use fork instead of independent subagents.