@qwen-code/qwen-code 0.15.6-preview.0 → 0.15.6-preview.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -481,6 +481,69 @@ When using a raw model via `--model gpt-4` (not from modelProviders, creates a R
 
  The merge strategy for `modelProviders` itself is REPLACE: the entire `modelProviders` from project settings will override the corresponding section in user settings, rather than merging the two.
 
+ ## Reasoning / thinking configuration
+
+ The optional `reasoning` field under `generationConfig` controls how aggressively the model reasons before responding. The Anthropic and Gemini converters always honor it. The OpenAI-compatible pipeline honors it **unless** `generationConfig.samplingParams` is set — see the "Interaction with `samplingParams`" caveat below.
+
+ ```jsonc
+ {
+   "modelProviders": {
+     "openai": [
+       {
+         "id": "deepseek-v4-pro",
+         "name": "DeepSeek V4 Pro",
+         "baseUrl": "https://api.deepseek.com/v1",
+         "envKey": "DEEPSEEK_API_KEY",
+         "generationConfig": {
+           // The four-tier scale:
+           //   'low' | 'medium' — server-mapped to 'high' on DeepSeek
+           //   'high' — default reasoning intensity
+           //   'max' — DeepSeek-specific extra-strong tier
+           // Or set `false` to disable reasoning entirely.
+           "reasoning": { "effort": "max" },
+         },
+       },
+     ],
+   },
+ }
+ ```
+
+ ### Per-provider behavior
+
+ | Protocol / provider | Wire shape | Notes |
+ | --- | --- | --- |
+ | **OpenAI / DeepSeek** (`api.deepseek.com`) | Flat `reasoning_effort: <effort>` body parameter | When `reasoning.effort` is set in the nested config shape, it's rewritten to flat `reasoning_effort` and `'low'`/`'medium'` are normalized to `'high'`, `'xhigh'` to `'max'` — mirroring DeepSeek's [server-side back-compat](https://api-docs.deepseek.com/zh-cn/api/create-chat-completion). Top-level `samplingParams.reasoning_effort` or `extra_body.reasoning_effort` overrides skip this normalization and ship verbatim. |
+ | **OpenAI** (other compatible servers) | `reasoning: { effort, ... }` passed through verbatim | Set via `samplingParams` (e.g. `samplingParams.reasoning_effort` for GPT-5/o-series) when the provider expects a different shape. |
+ | **Anthropic** (real `api.anthropic.com`) | `output_config: { effort }` plus the `effort-2025-11-24` beta header | Real Anthropic accepts `'low'`/`'medium'`/`'high'` only. `'max'` is **clamped to `'high'`** with a `debugLogger.warn` line (once per generator); if you want max effort, switch the baseURL to a DeepSeek-compatible endpoint that supports it. |
+ | **Anthropic** (`api.deepseek.com/anthropic`) | Same `output_config: { effort }` + beta header | `'max'` is passed through unchanged. |
+ | **Gemini** (`@google/genai`) | `thinkingConfig: { includeThoughts: true, thinkingLevel }` | `'low'` → `LOW`, `'high'`/`'max'` → `HIGH`, others → `THINKING_LEVEL_UNSPECIFIED` (Gemini has no `MAX` tier). |
+
+ ### `reasoning: false`
+
+ Setting `reasoning: false` (the literal boolean) explicitly disables thinking on every provider — useful for cheap side queries that don't benefit from reasoning. This is honored at the request level too via `request.config.thinkingConfig.includeThoughts: false` for one-off calls (e.g. suggestion generation).
+
+ On an `api.deepseek.com` baseURL, the OpenAI pipeline emits the explicit `thinking: { type: 'disabled' }` field that DeepSeek V4+ requires — the server-side default is `'enabled'`, so simply omitting `reasoning_effort` would still pay thinking latency/cost. Self-hosted DeepSeek backends (sglang/vllm) and other OpenAI-compatible servers do **not** receive this field; if you need to disable thinking on those, inject `thinking: { type: 'disabled' }` (or whatever knob your inference framework exposes) via `samplingParams`/`extra_body`.
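+ As a minimal sketch of that self-hosted case — assuming your backend understands the DeepSeek-style `thinking` field (the exact knob depends on your inference framework), pass it through `extra_body` so it ships verbatim:
+
+ ```jsonc
+ {
+   "generationConfig": {
+     "extra_body": {
+       // Self-hosted servers don't get the automatic injection, so pass the
+       // backend's own disable knob verbatim (field name varies by framework).
+       "thinking": { "type": "disabled" },
+     },
+   },
+ }
+ ```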
+
+ ### Interaction with `samplingParams` (OpenAI-compatible only)
+
+ > [!warning]
+ >
+ > When `generationConfig.samplingParams` is set on an OpenAI-compatible provider, the pipeline ships those keys to the wire **verbatim** and skips the separate `reasoning` injection entirely. So a config like `{ samplingParams: { temperature: 0.5 }, reasoning: { effort: 'max' } }` will silently drop the reasoning field on OpenAI/DeepSeek requests.
+ >
+ > If you set `samplingParams`, include the reasoning knob inside it directly — `samplingParams.reasoning_effort` (the flat field) works for both DeepSeek and GPT-5/o-series, or use `samplingParams.reasoning` (the nested object) where the server expects it. For OpenRouter and other providers the field name varies; consult the provider docs.
+ >
+ > The Anthropic and Gemini converters are unaffected — they always read `reasoning.effort` directly regardless of `samplingParams`.
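+ A hedged sketch of the corrected shape for that OpenAI/DeepSeek case (the `temperature` value is illustrative):
+
+ ```jsonc
+ {
+   "generationConfig": {
+     "samplingParams": {
+       "temperature": 0.5,
+       // Because samplingParams ships verbatim, the reasoning knob must live
+       // here — a sibling top-level `reasoning` field would be dropped.
+       "reasoning_effort": "max",
+     },
+   },
+ }
+ ```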
+
+ ### `budget_tokens`
+
+ You can pin an exact thinking-token budget by including `budget_tokens` alongside `effort`:
+
+ ```jsonc
+ "reasoning": { "effort": "high", "budget_tokens": 50000 }
+ ```
+
+ For Anthropic this becomes `thinking.budget_tokens`. For OpenAI/DeepSeek the field is preserved but currently ignored by the server — `reasoning_effort` is the load-bearing knob.
+
  ## Provider Models vs Runtime Models
 
  Qwen Code distinguishes between two types of model configurations:
@@ -73,7 +73,13 @@ When both legacy settings are present with different values, the migration follo
 
  ### Available settings in `settings.json`
 
- Settings are organized into categories. All settings should be placed within their corresponding top-level category object in your `settings.json` file.
+ Settings are organized into categories. Most settings should be placed within their corresponding top-level category object in your `settings.json` file. A few compatibility settings, such as `proxy`, are top-level keys.
+
+ #### top-level
+
+ | Setting | Type | Description | Default |
+ | --- | --- | --- | --- |
+ | `proxy` | string | Proxy URL for CLI HTTP requests. Precedence is `--proxy` > `proxy` in `settings.json` > `HTTPS_PROXY` / `https_proxy` / `HTTP_PROXY` / `http_proxy` environment variables. | `undefined` |
 
  #### general
 
@@ -134,17 +140,17 @@ Settings are organized into categories. All settings should be placed within the
 
  #### model
 
- | Setting | Type | Description | Default |
- | --- | --- | --- | --- |
- | `model.name` | string | The Qwen model to use for conversations. | `undefined` |
- | `model.maxSessionTurns` | number | Maximum number of user/model/tool turns to keep in a session. -1 means unlimited. | `-1` |
- | `model.generationConfig` | object | Advanced overrides passed to the underlying content generator. Supports request controls such as `timeout`, `maxRetries`, `enableCacheControl`, `splitToolMedia` (set `true` for strict OpenAI-compatible servers like LM Studio that reject non-text content on `role: "tool"` messages — splits media into a follow-up user message), `contextWindowSize` (override model's context window size), `modalities` (override auto-detected input modalities), `customHeaders` (custom HTTP headers for API requests), and `extra_body` (additional body parameters for OpenAI-compatible API requests only), along with fine-tuning knobs under `samplingParams` (for example `temperature`, `top_p`, `max_tokens`). Leave unset to rely on provider defaults. | `undefined` |
- | `model.chatCompression.contextPercentageThreshold` | number | Sets the threshold for chat history compression as a percentage of the model's total token limit. This is a value between 0 and 1 that applies to both automatic compression and the manual `/compress` command. For example, a value of `0.6` will trigger compression when the chat history exceeds 60% of the token limit. Use `0` to disable compression entirely. | `0.7` |
- | `model.skipNextSpeakerCheck` | boolean | Skip the next speaker check. | `false` |
- | `model.skipLoopDetection` | boolean | Disables loop detection checks. Loop detection prevents infinite loops in AI responses but can generate false positives that interrupt legitimate workflows. Enable this option if you experience frequent false positive loop detection interruptions. | `false` |
- | `model.skipStartupContext` | boolean | Skips sending the startup workspace context (environment summary and acknowledgement) at the beginning of each session. Enable this if you prefer to provide context manually or want to save tokens on startup. | `false` |
- | `model.enableOpenAILogging` | boolean | Enables logging of OpenAI API calls for debugging and analysis. When enabled, API requests and responses are logged to JSON files. | `false` |
- | `model.openAILoggingDir` | string | Custom directory path for OpenAI API logs. If not specified, defaults to `logs/openai` in the current working directory. Supports absolute paths, relative paths (resolved from current working directory), and `~` expansion (home directory). | `undefined` |
+ | Setting | Type | Description | Default |
+ | --- | --- | --- | --- |
+ | `model.name` | string | The Qwen model to use for conversations. | `undefined` |
+ | `model.maxSessionTurns` | number | Maximum number of user/model/tool turns to keep in a session. -1 means unlimited. | `-1` |
+ | `model.generationConfig` | object | Advanced overrides passed to the underlying content generator. Supports request controls such as `timeout`, `maxRetries`, `enableCacheControl`, `splitToolMedia` (set `true` for strict OpenAI-compatible servers like LM Studio that reject non-text content on `role: "tool"` messages — splits media into a follow-up user message), `contextWindowSize` (override model's context window size), `modalities` (override auto-detected input modalities), `customHeaders` (custom HTTP headers for API requests), `extra_body` (additional body parameters for OpenAI-compatible API requests only), and `reasoning` (`{ effort: 'low' \| 'medium' \| 'high' \| 'max', budget_tokens?: number }` to control thinking intensity, or `false` to disable; `'max'` is a DeepSeek extension — see [Reasoning / thinking configuration](./model-providers.md#reasoning--thinking-configuration) for per-provider behavior. **Note:** when `samplingParams` is set on an OpenAI-compatible provider, the pipeline ships those keys verbatim and the separate top-level `reasoning` field is dropped — put `reasoning_effort` inside `samplingParams` (or `extra_body`) instead in that case), along with fine-tuning knobs under `samplingParams` (for example `temperature`, `top_p`, `max_tokens`). Leave unset to rely on provider defaults. | `undefined` |
+ | `model.chatCompression.contextPercentageThreshold` | number | Sets the threshold for chat history compression as a percentage of the model's total token limit. This is a value between 0 and 1 that applies to both automatic compression and the manual `/compress` command. For example, a value of `0.6` will trigger compression when the chat history exceeds 60% of the token limit. Use `0` to disable compression entirely. | `0.7` |
+ | `model.skipNextSpeakerCheck` | boolean | Skip the next speaker check. | `false` |
+ | `model.skipLoopDetection` | boolean | Disables loop detection checks. Loop detection prevents infinite loops in AI responses but can generate false positives that interrupt legitimate workflows. Enable this option if you experience frequent false positive loop detection interruptions. | `false` |
+ | `model.skipStartupContext` | boolean | Skips sending the startup workspace context (environment summary and acknowledgement) at the beginning of each session. Enable this if you prefer to provide context manually or want to save tokens on startup. | `false` |
+ | `model.enableOpenAILogging` | boolean | Enables logging of OpenAI API calls for debugging and analysis. When enabled, API requests and responses are logged to JSON files. | `false` |
+ | `model.openAILoggingDir` | string | Custom directory path for OpenAI API logs. If not specified, defaults to `logs/openai` in the current working directory. Supports absolute paths, relative paths (resolved from current working directory), and `~` expansion (home directory). | `undefined` |
 
  **Example model.generationConfig:**
 
@@ -470,6 +476,7 @@ Here is an example of a `settings.json` file with the nested structure, new as o
 
  ```
  {
+   "proxy": "http://localhost:7890",
    "general": {
      "vimMode": true,
      "preferredEditor": "code"
@@ -29,14 +29,16 @@ The `/review` command runs a multi-stage pipeline:
  Step 1: Determine scope (local diff / PR worktree / file)
  Step 2: Load project review rules
  Step 3: Run deterministic analysis (linter, typecheck) [zero LLM cost]
- Step 4: 5 parallel review agents [5 LLM calls]
-   |-- Agent 1: Correctness & Security
-   |-- Agent 2: Code Quality
-   |-- Agent 3: Performance & Efficiency
-   |-- Agent 4: Undirected Audit
-   '-- Agent 5: Build & Test (runs shell commands)
+ Step 4: 9 parallel review agents [9 LLM calls]
+   |-- Agent 1: Correctness
+   |-- Agent 2: Security
+   |-- Agent 3: Code Quality
+   |-- Agent 4: Performance & Efficiency
+   |-- Agent 5: Test Coverage
+   |-- Agent 6: Undirected Audit (3 personas: 6a/6b/6c)
+   '-- Agent 7: Build & Test (runs shell commands)
  Step 5: Deduplicate --> Batch verify --> Aggregate [1 LLM call]
- Step 6: Reverse audit (find coverage gaps) [1 LLM call]
+ Step 6: Iterative reverse audit (1-3 rounds, gap finding) [1-3 LLM calls]
  Step 7: Present findings + verdict
  Step 8: Autofix (user-confirmed, optional)
  Step 9: Post PR inline comments (if requested)
@@ -46,15 +48,17 @@ Step 11: Clean up (remove worktree + temp files)
 
  ### Review Agents
 
- | Agent | Focus |
- | --- | --- |
- | Agent 1: Correctness & Security | Logic errors, null handling, race conditions, injection, XSS, SSRF |
- | Agent 2: Code Quality | Style consistency, naming, duplication, dead code |
- | Agent 3: Performance & Efficiency | N+1 queries, memory leaks, unnecessary re-renders, bundle size |
- | Agent 4: Undirected Audit | Business logic, boundary interactions, hidden coupling |
- | Agent 5: Build & Test | Runs build and test commands, reports failures |
+ | Agent | Focus |
+ | --- | --- |
+ | Agent 1: Correctness | Logic errors, edge cases, null handling, race conditions, type safety |
+ | Agent 2: Security | Injection, XSS, SSRF, auth bypass, sensitive data exposure |
+ | Agent 3: Code Quality | Style consistency, naming, duplication, dead code |
+ | Agent 4: Performance & Efficiency | N+1 queries, memory leaks, unnecessary re-renders, bundle size |
+ | Agent 5: Test Coverage | Untested code paths in the diff, missing branch coverage, weak assertions |
+ | Agent 6: Undirected Audit | 3 parallel personas (attacker / 3am-oncall / maintainer) — catches cross-dimensional issues |
+ | Agent 7: Build & Test | Runs build and test commands, reports failures |
 
- All agents run in parallel. Findings from Agents 1-4 are verified in a **single batch verification pass** (one agent reviews all findings at once, keeping LLM calls fixed). After verification, a **reverse audit agent** re-reads the entire diff with knowledge of all confirmed findings to catch issues that every other agent missed. Reverse audit findings skip the verification step (the agent already has full context) and are included directly as high-confidence results.
+ All agents run in parallel (Agent 6 launches 3 persona variants concurrently, totaling 9 parallel tasks for same-repo reviews). Findings from Agents 1-6 are verified in a **single batch verification pass** (one agent reviews all findings at once, keeping verification cost fixed regardless of finding count). After verification, **iterative reverse audit** runs 1-3 rounds of gap-finding — each round receives the cumulative finding list from prior rounds, so successive rounds focus on whatever remains undiscovered. The loop stops as soon as a round returns "No issues found", or after 3 rounds (hard cap). Reverse audit findings skip verification (the agent already has full context) and are included as high-confidence results.
 
  ## Deterministic Analysis
 
@@ -127,8 +131,8 @@ This runs in **lightweight mode** — no worktree, no linter, no build/test, no
 
  | Capability | Same-repo | Cross-repo |
  | --- | --- | --- |
- | LLM review (Agents 1-4 + verify + reverse audit) | ✅ | ✅ |
- | Agent 5: Build & test | ✅ | ❌ (no local codebase) |
+ | LLM review (Agents 1-6 + verify + iterative reverse audit) | ✅ | ✅ |
+ | Agent 7: Build & test | ✅ | ❌ (no local codebase) |
  | Deterministic analysis (linter/typecheck) | ✅ | ❌ |
  | Cross-file impact analysis | ✅ | ❌ |
  | Autofix | ✅ | ❌ |
@@ -157,6 +161,12 @@ Or, after running `/review 123`, type `post comments` to publish findings withou
  - Nice to have findings (including linter warnings)
  - Low-confidence findings
 
+ **Self-authored PRs:** GitHub does not allow you to submit `APPROVE` or `REQUEST_CHANGES` reviews on your own pull request — both fail with HTTP 422. When `/review` detects that the PR author matches the current authenticated user, it automatically downgrades the API event to `COMMENT` regardless of verdict, so the submission still succeeds. The terminal still shows the honest verdict ("Approve" / "Request changes" / "Comment") — only the GitHub-side review event is neutralized. The actual findings still appear as inline comments on specific lines, so substantive feedback is unchanged.
+
+ **Re-reviewing a PR with prior Qwen Code comments:** when `/review` runs on a PR that already has previous Qwen Code review comments, it classifies them before posting new ones. Only **same-line overlap** (an existing comment on the same `(path, line)` as a new finding) prompts you to confirm — that's the case where you'd see a visual duplicate on the same code line. Comments from older commits, replied-to comments (treated as resolved), and comments that simply don't overlap with any new finding are silently skipped, with a terminal log line so you know what was filtered.
+
+ **CI / build status check before APPROVE:** if the verdict is "Approve", `/review` queries the PR's check-runs and commit statuses before submitting. If any check has failed (or all checks are still pending), the API event is automatically downgraded from `APPROVE` to `COMMENT`, with the review body explaining why. Rationale: the LLM review reads code statically and cannot see runtime test failures; approving while CI is red would be misleading. The inline findings are still posted unchanged. If you want to approve anyway (e.g., a known-flaky CI failure), submit the GitHub approval manually after verifying.
+
 
  ## Follow-up Actions
 
@@ -179,7 +189,7 @@ You can customize review criteria per project. `/review` reads rules from these
  3. `AGENTS.md` — `## Code Review` section
  4. `QWEN.md` — `## Code Review` section
 
- Rules are injected into the LLM review agents (1-4) as additional criteria. For PR reviews, rules are read from the **base branch** to prevent a malicious PR from injecting bypass rules.
+ Rules are injected into the LLM review agents (1-6) as additional criteria. For PR reviews, rules are read from the **base branch** to prevent a malicious PR from injecting bypass rules.
 
  Example `.qwen/review-rules.md`:
 
@@ -246,15 +256,17 @@ For large diffs (>10 modified symbols), analysis prioritizes functions with sign
 
  ## Token Efficiency
 
- The review pipeline uses a fixed number of LLM calls regardless of how many findings are produced:
+ The review pipeline uses a bounded number of LLM calls regardless of how many findings are produced:
+
+ | Stage | LLM calls | Notes |
+ | --- | --- | --- |
+ | Deterministic analysis (Step 3) | 0 | Shell commands only |
+ | Review agents (Step 4) | 9 (or 8) | Run in parallel; Agent 7 skipped in cross-repo mode |
+ | Batch verification (Step 5) | 1 | Single agent verifies all findings at once |
+ | Iterative reverse audit (Step 6) | 1-3 | Loops until "No issues found" or 3-round cap |
+ | **Total** | **11-13 (10-12)** | Same-repo: 11-13; cross-repo: 10-12 (no Agent 7) |
 
- | Stage | LLM calls | Notes |
- | --- | --- | --- |
- | Deterministic analysis (Step 3) | 0 | Shell commands only |
- | Review agents (Step 4) | 5 (or 4) | Run in parallel; Agent 5 skipped in cross-repo mode |
- | Batch verification (Step 5) | 1 | Single agent verifies all findings at once |
- | Reverse audit (Step 6) | 1 | Finds coverage gaps; findings skip verification |
- | **Total** | **7 or 6** | Same-repo: 7; cross-repo: 6 (no Agent 5) |
+ Most PRs converge to the lower end of the range (1 reverse audit round); the cap prevents runaway cost on pathological cases.
 
  ## What's NOT Flagged
 
@@ -40,7 +40,7 @@ You need to have the language server for your programming language installed:
 
  ### .lsp.json File
 
- You can configure language servers using a `.lsp.json` file in your project root. This uses the language-keyed format described in the [Claude Code plugin LSP configuration reference](https://code.claude.com/docs/en/plugins-reference#lsp-servers).
+ You can configure language servers using a `.lsp.json` file in your project root. Each top-level key is a language identifier, and its value is the server configuration object.
 
  **Basic format:**
 
@@ -104,9 +104,9 @@ Example:
 
  #### Required Fields
 
- | Option | Type | Description |
- | --- | --- | --- |
- | `command` | string | Command to start the LSP server (must be in PATH) |
+ | Option | Type | Description |
+ | --- | --- | --- |
+ | `command` | string | Command to start the LSP server. Supports bare command names resolved via `PATH` (e.g. `clangd`) and absolute paths (e.g. `/opt/llvm/bin/clangd`) |
 
  #### Optional Fields
 
@@ -148,6 +148,8 @@ For servers that use TCP or Unix socket transport:
 
  Qwen Code exposes LSP functionality through the unified `lsp` tool. Here are the available operations:
 
+ Location-based operations (`goToDefinition`, `findReferences`, `hover`, `goToImplementation`, and `prepareCallHierarchy`) require an exact `filePath` + `line` + `character` position. If you do not know the exact position, use `workspaceSymbol` or `documentSymbol` first to locate the symbol.
+
  ### Code Navigation
 
  #### Go to Definition
@@ -315,7 +317,7 @@ LSP servers are only started in trusted workspaces by default. This is because l
  - **Trusted Workspace**: LSP servers start if configured
  - **Untrusted Workspace**: LSP servers won't start unless `trustRequired: false` is set in the server configuration
 
- To mark a workspace as trusted, use the `/trust` command or configure trusted folders in settings.
+ To mark a workspace as trusted, use the `/trust` command.
 
  ### Per-Server Trust Override
 
@@ -338,11 +340,12 @@ You can override trust requirements for specific servers in their configuration:
 
  ### Server Not Starting
 
- 1. **Check if the server is installed**: Run the command manually to verify
- 2. **Check the PATH**: Ensure the server binary is in your system PATH
- 3. **Check workspace trust**: The workspace must be trusted for LSP
- 4. **Check logs**: Look for error messages in the console output
- 5. **Verify --experimental-lsp flag**: Make sure you're using the flag when starting Qwen Code
+ 1. **Verify `--experimental-lsp` flag**: Make sure you're using the flag when starting Qwen Code
+ 2. **Check if the server is installed**: Run the command manually (e.g. `clangd --version`) to verify
+ 3. **Check the command**: The server binary must be in your system `PATH`, or specified as an absolute path (e.g. `/opt/llvm/bin/clangd`). Relative paths that escape the workspace are blocked
+ 4. **Check workspace trust**: The workspace must be trusted for LSP (use `/trust`)
+ 5. **Check logs**: Look for `[LSP]` entries in the debug log (see Debugging section below)
+ 6. **Check the process**: Run `ps aux | grep <server-name>` to verify the server process is running
 
  ### Slow Performance
 
@@ -351,41 +354,34 @@ You can override trust requirements for specific servers in their configuration:
 
  ### No Results
 
- 1. **Server not ready**: The server may still be indexing
+ 1. **Server not ready**: The server may still be indexing. For C/C++ projects with clangd, ensure `--background-index` is in the args and a `compile_commands.json` (or `compile_flags.txt`) exists in the project root or a parent directory. Use `--compile-commands-dir=<path>` if it is in a build subdirectory
  2. **File not saved**: Save your file for the server to pick up changes
  3. **Wrong language**: Check if the correct server is running for your language
+ 4. **Check the process**: Run `ps aux | grep <server-name>` to verify the server is actually running
 
  ### Debugging
 
- Enable debug logging to see LSP communication:
+ LSP debug logs are automatically written to session log files in `~/.qwen/debug/`. To check LSP-related entries:
 
  ```bash
- DEBUG=lsp* qwen --experimental-lsp
- ```
-
- Or check the LSP debugging guide at `packages/cli/LSP_DEBUGGING_GUIDE.md`.
-
- ## Claude Code Compatibility
+ # View the latest session log
+ grep '\[LSP\]' ~/.qwen/debug/latest
 
- Qwen Code supports Claude Code-style `.lsp.json` configuration files in the language-keyed format defined in the [Claude Code plugins reference](https://code.claude.com/docs/en/plugins-reference#lsp-servers). If you're migrating from Claude Code, use the language-as-key layout in your configuration.
-
- ### Configuration Format
+ # Common error messages to look for:
+ # "command path is unsafe" → relative path escapes workspace, use absolute path or add to PATH
+ # "command not found" → server binary not installed or not in PATH
+ # "requires trusted workspace" → run /trust first
+ ```
 
- The recommended format follows Claude Code's specification:
+ You can also verify the server process is running:
 
- ```json
- {
-   "go": {
-     "command": "gopls",
-     "args": ["serve"],
-     "extensionToLanguage": {
-       ".go": "go"
-     }
-   }
- }
+ ```bash
+ ps aux | grep clangd # or typescript-language-server, jdtls, etc.
  ```
 
- Claude Code LSP plugins can also supply `lspServers` in `plugin.json` (or a referenced `.lsp.json`). Qwen Code loads those configs when the extension is enabled, and they must use the same language-keyed format.
+ ## Extension LSP Configuration
+
+ Extensions can provide LSP server configurations through the `lspServers` field in their `plugin.json`. This can be either an inline object or a path to a `.lsp.json` file. Qwen Code loads these configs when the extension is enabled. The format is the same language-keyed layout used in project `.lsp.json` files.
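+ A hedged sketch of what such a `plugin.json` entry might look like — the `gopls` values mirror a typical Go setup and are illustrative, and the surrounding manifest fields (such as `name`) are assumptions, not a prescribed schema:
+
+ ```json
+ {
+   "name": "my-extension",
+   "lspServers": {
+     "go": {
+       "command": "gopls",
+       "args": ["serve"]
+     }
+   }
+ }
+ ```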
 
  ## Best Practices
 
@@ -406,7 +402,7 @@ qwen --experimental-lsp
 
  ### Q: How do I know which language servers are running?
 
- Use the `/lsp status` command to see all configured and running language servers.
+ Check the debug log for `[LSP]` entries (`grep '\[LSP\]' ~/.qwen/debug/latest`), or verify the process directly with `ps aux | grep <server-name>`.
 
  ### Q: Can I use multiple language servers for the same file type?
 
@@ -89,14 +89,35 @@ Show concrete examples of using this Skill.
  Qwen Code currently validates that:
 
- - `name` is a non-empty string
+ - `name` is a non-empty string matching `/^[\p{L}\p{N}_:.-]+$/u` — Unicode letters and digits (CJK / Cyrillic / accented Latin all OK), plus `_`, `:`, `.`, `-`. Whitespace, slashes, brackets and other structurally unsafe characters are rejected at parse time.
  - `description` is a non-empty string
 
- Recommended conventions (not strictly enforced yet):
+ Recommended conventions:
 
- - Use lowercase letters, numbers, and hyphens in `name`
+ - Prefer lowercase ASCII with hyphens for shareable names (e.g. `tsx-helper`)
  - Make `description` specific: include both **what** the Skill does and **when** to use it (key words users will naturally mention)
 
+ ### Optional: gate a Skill on file paths (`paths:`)
+
+ For Skills that only matter to specific parts of a codebase, add a `paths:` list of glob patterns. The Skill stays out of the model's available-skills listing until a tool call touches a matching file:
+
+ ```yaml
+ ---
+ name: tsx-helper
+ description: React TSX component helper
+ paths:
+   - 'src/**/*.tsx'
+   - 'packages/*/src/**/*.tsx'
+ ---
+ ```
+
+ Notes:
+
+ - Globs are matched relative to the project root with [picomatch](https://github.com/micromatch/picomatch); files outside the project root never trigger activation.
+ - A path-gated Skill **stays activated for the rest of the session** once a matching file is touched. A new session, or a `refreshCache` triggered by editing any Skill file, resets activations.
+ - `paths:` only gates **model** discovery, and only at the SkillTool listing level. You can always invoke a path-gated Skill yourself via `/<skill-name>` or the `/skills` picker — that user path runs the Skill body regardless of activation state. The model side, however, stays gated until a matching file is touched: a slash invocation does **not** unlock model-side activation, so if you want the model to chain off your invocation (calling `Skill { skill: ... }` itself), have it access a file matching the Skill's `paths:` first.
+ - Combining `paths:` with `disable-model-invocation: true` is allowed, but the gate has no effect — the Skill is hidden from the model regardless, so path activation never advertises it.
+
  ## Add supporting files
 
  Create additional files alongside `SKILL.md`:
@@ -146,6 +167,14 @@ To view available Skills, ask Qwen Code directly:
  What Skills are available?
  ```
 
+ > **Heads up — model vs. user view.** Asking the model only surfaces Skills the model can currently see. If a Skill uses `paths:` (see "Optional: gate a Skill on file paths" above), it stays out of that listing until a matching file has been touched. The full set is always visible to you via the `/skills` slash command and on disk.
+
+ Or browse the full list with the slash command (always shows every Skill, including path-gated ones that have not activated yet):
+
+ ```text
+ /skills
+ ```
+
  Or inspect the filesystem:
 
  ```bash