@protolabsai/proto 0.14.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (85)
  1. package/LICENSE +203 -0
  2. package/README.md +286 -0
  3. package/dist/bundled/adversarial-verification/SKILL.md +98 -0
  4. package/dist/bundled/brainstorming/SKILL.md +171 -0
  5. package/dist/bundled/coding-agent-standards/SKILL.md +67 -0
  6. package/dist/bundled/dispatching-parallel-agents/SKILL.md +193 -0
  7. package/dist/bundled/executing-plans/SKILL.md +77 -0
  8. package/dist/bundled/finishing-a-development-branch/SKILL.md +213 -0
  9. package/dist/bundled/loop/SKILL.md +61 -0
  10. package/dist/bundled/qc-helper/SKILL.md +151 -0
  11. package/dist/bundled/qc-helper/docs/_meta.ts +30 -0
  12. package/dist/bundled/qc-helper/docs/common-workflow.md +571 -0
  13. package/dist/bundled/qc-helper/docs/configuration/_meta.ts +10 -0
  14. package/dist/bundled/qc-helper/docs/configuration/auth.md +366 -0
  15. package/dist/bundled/qc-helper/docs/configuration/memory.md +0 -0
  16. package/dist/bundled/qc-helper/docs/configuration/model-providers.md +542 -0
  17. package/dist/bundled/qc-helper/docs/configuration/qwen-ignore.md +55 -0
  18. package/dist/bundled/qc-helper/docs/configuration/settings.md +652 -0
  19. package/dist/bundled/qc-helper/docs/configuration/themes.md +160 -0
  20. package/dist/bundled/qc-helper/docs/configuration/trusted-folders.md +61 -0
  21. package/dist/bundled/qc-helper/docs/extension/_meta.ts +9 -0
  22. package/dist/bundled/qc-helper/docs/extension/extension-releasing.md +121 -0
  23. package/dist/bundled/qc-helper/docs/extension/getting-started-extensions.md +299 -0
  24. package/dist/bundled/qc-helper/docs/extension/introduction.md +303 -0
  25. package/dist/bundled/qc-helper/docs/features/_meta.ts +18 -0
  26. package/dist/bundled/qc-helper/docs/features/approval-mode.md +263 -0
  27. package/dist/bundled/qc-helper/docs/features/arena.md +218 -0
  28. package/dist/bundled/qc-helper/docs/features/checkpointing.md +77 -0
  29. package/dist/bundled/qc-helper/docs/features/commands.md +312 -0
  30. package/dist/bundled/qc-helper/docs/features/headless.md +318 -0
  31. package/dist/bundled/qc-helper/docs/features/hooks.md +343 -0
  32. package/dist/bundled/qc-helper/docs/features/language.md +139 -0
  33. package/dist/bundled/qc-helper/docs/features/lsp.md +453 -0
  34. package/dist/bundled/qc-helper/docs/features/mcp.md +281 -0
  35. package/dist/bundled/qc-helper/docs/features/sandbox.md +241 -0
  36. package/dist/bundled/qc-helper/docs/features/scheduled-tasks.md +139 -0
  37. package/dist/bundled/qc-helper/docs/features/skills.md +289 -0
  38. package/dist/bundled/qc-helper/docs/features/sub-agents.md +307 -0
  39. package/dist/bundled/qc-helper/docs/features/token-caching.md +29 -0
  40. package/dist/bundled/qc-helper/docs/ide-integration/_meta.ts +4 -0
  41. package/dist/bundled/qc-helper/docs/ide-integration/ide-companion-spec.md +182 -0
  42. package/dist/bundled/qc-helper/docs/ide-integration/ide-integration.md +144 -0
  43. package/dist/bundled/qc-helper/docs/integration-github-action.md +241 -0
  44. package/dist/bundled/qc-helper/docs/integration-jetbrains.md +81 -0
  45. package/dist/bundled/qc-helper/docs/integration-vscode.md +39 -0
  46. package/dist/bundled/qc-helper/docs/integration-zed.md +72 -0
  47. package/dist/bundled/qc-helper/docs/overview.md +64 -0
  48. package/dist/bundled/qc-helper/docs/quickstart.md +273 -0
  49. package/dist/bundled/qc-helper/docs/reference/_meta.ts +4 -0
  50. package/dist/bundled/qc-helper/docs/reference/keyboard-shortcuts.md +72 -0
  51. package/dist/bundled/qc-helper/docs/reference/sdk-api.md +524 -0
  52. package/dist/bundled/qc-helper/docs/support/Uninstall.md +42 -0
  53. package/dist/bundled/qc-helper/docs/support/_meta.ts +6 -0
  54. package/dist/bundled/qc-helper/docs/support/tos-privacy.md +112 -0
  55. package/dist/bundled/qc-helper/docs/support/troubleshooting.md +123 -0
  56. package/dist/bundled/receiving-code-review/SKILL.md +226 -0
  57. package/dist/bundled/requesting-code-review/SKILL.md +115 -0
  58. package/dist/bundled/review/SKILL.md +123 -0
  59. package/dist/bundled/subagent-driven-development/SKILL.md +292 -0
  60. package/dist/bundled/subagent-driven-development/code-quality-reviewer-prompt.md +27 -0
  61. package/dist/bundled/subagent-driven-development/implementer-prompt.md +113 -0
  62. package/dist/bundled/subagent-driven-development/spec-reviewer-prompt.md +61 -0
  63. package/dist/bundled/systematic-debugging/SKILL.md +305 -0
  64. package/dist/bundled/test-driven-development/SKILL.md +396 -0
  65. package/dist/bundled/using-git-worktrees/SKILL.md +223 -0
  66. package/dist/bundled/using-superpowers/SKILL.md +117 -0
  67. package/dist/bundled/verification-before-completion/SKILL.md +147 -0
  68. package/dist/bundled/writing-plans/SKILL.md +159 -0
  69. package/dist/bundled/writing-skills/SKILL.md +716 -0
  70. package/dist/cli.js +483432 -0
  71. package/dist/sandbox-macos-permissive-closed.sb +32 -0
  72. package/dist/sandbox-macos-permissive-open.sb +27 -0
  73. package/dist/sandbox-macos-permissive-proxied.sb +37 -0
  74. package/dist/sandbox-macos-restrictive-closed.sb +93 -0
  75. package/dist/sandbox-macos-restrictive-open.sb +96 -0
  76. package/dist/sandbox-macos-restrictive-proxied.sb +98 -0
  77. package/dist/vendor/ripgrep/COPYING +3 -0
  78. package/dist/vendor/ripgrep/arm64-darwin/rg +0 -0
  79. package/dist/vendor/ripgrep/arm64-linux/rg +0 -0
  80. package/dist/vendor/ripgrep/x64-darwin/rg +0 -0
  81. package/dist/vendor/ripgrep/x64-linux/rg +0 -0
  82. package/dist/vendor/ripgrep/x64-win32/rg.exe +0 -0
  83. package/dist/vendor/tree-sitter/tree-sitter-bash.wasm +0 -0
  84. package/dist/vendor/tree-sitter/tree-sitter.wasm +0 -0
  85. package/package.json +143 -0
@@ -0,0 +1,542 @@
# Model Providers

Qwen Code allows you to configure multiple model providers through the `modelProviders` setting in your `settings.json`. This enables you to switch between different AI models and providers using the `/model` command.

## Overview

Use `modelProviders` to declare curated model lists per auth type that the `/model` picker can switch between. Keys must be valid auth types (`openai`, `anthropic`, `gemini`, etc.). Each entry requires an `id` and **must include `envKey`**; `name`, `description`, `baseUrl`, and `generationConfig` are optional. Credentials are never persisted in settings; the runtime reads them from `process.env[envKey]`. Qwen OAuth models remain hard-coded and cannot be overridden.
> [!note]
>
> Only the `/model` command exposes non-default auth types: Anthropic, Gemini, and the like must be defined via `modelProviders`. The `/auth` command lists Qwen OAuth, Alibaba Cloud Coding Plan, and API Key as the built-in authentication options.

> [!warning]
>
> **Duplicate model IDs within the same authType:** Defining multiple models with the same `id` under a single `authType` (e.g., two entries with `"id": "gpt-4o"` in `openai`) is currently not supported. If duplicates exist, **the first occurrence wins** and subsequent duplicates are skipped with a warning. Because the `id` field serves both as the configuration identifier and as the actual model name sent to the API, renaming entries to unique IDs (e.g., `gpt-4o-creative`, `gpt-4o-balanced`) is not a viable workaround. This is a known limitation that we plan to address in a future release.
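For instance, in a hypothetical configuration like the following, only the first `gpt-4o` entry takes effect; the second is skipped with a warning:

```json
{
  "modelProviders": {
    "openai": [
      { "id": "gpt-4o", "envKey": "OPENAI_API_KEY", "generationConfig": { "samplingParams": { "temperature": 0.2 } } },
      { "id": "gpt-4o", "envKey": "OPENAI_API_KEY", "generationConfig": { "samplingParams": { "temperature": 0.9 } } }
    ]
  }
}
```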
## Configuration Examples by Auth Type

Below are comprehensive configuration examples for different authentication types, showing the available parameters and their combinations.

### Supported Auth Types

The `modelProviders` object keys must be valid `authType` values. Currently supported auth types are:

| Auth Type    | Description                                                                             |
| ------------ | --------------------------------------------------------------------------------------- |
| `openai`     | OpenAI-compatible APIs (OpenAI, Azure OpenAI, local inference servers like vLLM/Ollama) |
| `anthropic`  | Anthropic Claude API                                                                    |
| `gemini`     | Google Gemini API                                                                       |
| `qwen-oauth` | Qwen OAuth (hard-coded, cannot be overridden in `modelProviders`)                       |

> [!warning]
> If an invalid auth type key is used (e.g., a typo like `"openai-custom"`), the configuration will be **silently skipped** and its models will not appear in the `/model` picker. Always use one of the supported auth type values listed above.
### SDKs Used for API Requests

Qwen Code uses the following official SDKs to send requests to each provider:

| Auth Type    | SDK Package                                                                                     |
| ------------ | ----------------------------------------------------------------------------------------------- |
| `openai`     | [`openai`](https://www.npmjs.com/package/openai) - Official OpenAI Node.js SDK                  |
| `anthropic`  | [`@anthropic-ai/sdk`](https://www.npmjs.com/package/@anthropic-ai/sdk) - Official Anthropic SDK |
| `gemini`     | [`@google/genai`](https://www.npmjs.com/package/@google/genai) - Official Google GenAI SDK      |
| `qwen-oauth` | [`openai`](https://www.npmjs.com/package/openai) with custom provider (DashScope-compatible)    |

This means the `baseUrl` you configure must be compatible with the corresponding SDK's expected API format. For example, when using the `openai` auth type, the endpoint must accept OpenAI API format requests.
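As a quick mental model (a sketch, assuming the standard route paths the OpenAI SDK uses), the SDK appends fixed paths to your configured `baseUrl`, so the endpoint must serve those routes in OpenAI wire format:

```shell
# The openai SDK appends standard route paths to the configured baseUrl;
# an OpenAI-compatible server must answer on these derived URLs.
BASE_URL="https://api.openai.com/v1"
echo "models endpoint: ${BASE_URL}/models"
echo "chat endpoint:   ${BASE_URL}/chat/completions"
```

The same logic applies to local servers: with `baseUrl` set to `http://localhost:11434/v1`, requests go to `http://localhost:11434/v1/chat/completions`.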
### OpenAI-compatible providers (`openai`)

This auth type supports not only OpenAI's official API but also any OpenAI-compatible endpoint, including aggregated model providers like OpenRouter.

```json
{
  "env": {
    "OPENAI_API_KEY": "sk-your-actual-openai-key-here",
    "OPENROUTER_API_KEY": "sk-or-your-actual-openrouter-key-here"
  },
  "modelProviders": {
    "openai": [
      {
        "id": "gpt-4o",
        "name": "GPT-4o",
        "envKey": "OPENAI_API_KEY",
        "baseUrl": "https://api.openai.com/v1",
        "generationConfig": {
          "timeout": 60000,
          "maxRetries": 3,
          "enableCacheControl": true,
          "contextWindowSize": 128000,
          "modalities": {
            "image": true
          },
          "customHeaders": {
            "X-Client-Request-ID": "req-123"
          },
          "extra_body": {
            "enable_thinking": true,
            "service_tier": "priority"
          },
          "samplingParams": {
            "temperature": 0.2,
            "top_p": 0.8,
            "max_tokens": 4096,
            "presence_penalty": 0.1,
            "frequency_penalty": 0.1
          }
        }
      },
      {
        "id": "gpt-4o-mini",
        "name": "GPT-4o Mini",
        "envKey": "OPENAI_API_KEY",
        "baseUrl": "https://api.openai.com/v1",
        "generationConfig": {
          "timeout": 30000,
          "samplingParams": {
            "temperature": 0.5,
            "max_tokens": 2048
          }
        }
      },
      {
        "id": "openai/gpt-4o",
        "name": "GPT-4o (via OpenRouter)",
        "envKey": "OPENROUTER_API_KEY",
        "baseUrl": "https://openrouter.ai/api/v1",
        "generationConfig": {
          "timeout": 120000,
          "maxRetries": 3,
          "samplingParams": {
            "temperature": 0.7
          }
        }
      }
    ]
  }
}
```
### Anthropic (`anthropic`)

```json
{
  "env": {
    "ANTHROPIC_API_KEY": "sk-ant-your-actual-anthropic-key-here"
  },
  "modelProviders": {
    "anthropic": [
      {
        "id": "claude-3-5-sonnet",
        "name": "Claude 3.5 Sonnet",
        "envKey": "ANTHROPIC_API_KEY",
        "baseUrl": "https://api.anthropic.com/v1",
        "generationConfig": {
          "timeout": 120000,
          "maxRetries": 3,
          "contextWindowSize": 200000,
          "samplingParams": {
            "temperature": 0.7,
            "max_tokens": 8192,
            "top_p": 0.9
          }
        }
      },
      {
        "id": "claude-3-opus",
        "name": "Claude 3 Opus",
        "envKey": "ANTHROPIC_API_KEY",
        "baseUrl": "https://api.anthropic.com/v1",
        "generationConfig": {
          "timeout": 180000,
          "samplingParams": {
            "temperature": 0.3,
            "max_tokens": 4096
          }
        }
      }
    ]
  }
}
```
### Google Gemini (`gemini`)

```json
{
  "env": {
    "GEMINI_API_KEY": "AIza-your-actual-gemini-key-here"
  },
  "modelProviders": {
    "gemini": [
      {
        "id": "gemini-2.0-flash",
        "name": "Gemini 2.0 Flash",
        "envKey": "GEMINI_API_KEY",
        "baseUrl": "https://generativelanguage.googleapis.com",
        "capabilities": {
          "vision": true
        },
        "generationConfig": {
          "timeout": 60000,
          "maxRetries": 2,
          "contextWindowSize": 1000000,
          "schemaCompliance": "auto",
          "samplingParams": {
            "temperature": 0.4,
            "top_p": 0.95,
            "max_tokens": 8192,
            "top_k": 40
          }
        }
      }
    ]
  }
}
```
### Local Self-Hosted Models (via OpenAI-compatible API)

Most local inference servers (vLLM, Ollama, LM Studio, etc.) provide an OpenAI-compatible API endpoint. Configure them using the `openai` auth type with a local `baseUrl`:

```json
{
  "env": {
    "OLLAMA_API_KEY": "ollama",
    "VLLM_API_KEY": "not-needed",
    "LMSTUDIO_API_KEY": "lm-studio"
  },
  "modelProviders": {
    "openai": [
      {
        "id": "qwen2.5-7b",
        "name": "Qwen2.5 7B (Ollama)",
        "envKey": "OLLAMA_API_KEY",
        "baseUrl": "http://localhost:11434/v1",
        "generationConfig": {
          "timeout": 300000,
          "maxRetries": 1,
          "contextWindowSize": 32768,
          "samplingParams": {
            "temperature": 0.7,
            "top_p": 0.9,
            "max_tokens": 4096
          }
        }
      },
      {
        "id": "llama-3.1-8b",
        "name": "Llama 3.1 8B (vLLM)",
        "envKey": "VLLM_API_KEY",
        "baseUrl": "http://localhost:8000/v1",
        "generationConfig": {
          "timeout": 120000,
          "maxRetries": 2,
          "contextWindowSize": 128000,
          "samplingParams": {
            "temperature": 0.6,
            "max_tokens": 8192
          }
        }
      },
      {
        "id": "local-model",
        "name": "Local Model (LM Studio)",
        "envKey": "LMSTUDIO_API_KEY",
        "baseUrl": "http://localhost:1234/v1",
        "generationConfig": {
          "timeout": 60000,
          "samplingParams": {
            "temperature": 0.5
          }
        }
      }
    ]
  }
}
```
For local servers that don't require authentication, you can use any placeholder value for the API key:

```bash
# For Ollama (no auth required)
export OLLAMA_API_KEY="ollama"

# For vLLM (if no auth is configured)
export VLLM_API_KEY="not-needed"
```

> [!note]
>
> The `extra_body` parameter is **only supported for OpenAI-compatible providers** (`openai`, `qwen-oauth`). It is ignored for the Anthropic and Gemini providers.
> [!note]
>
> **About `envKey`**: The `envKey` field specifies the **name of an environment variable**, not the actual API key value. For the configuration to work, you need to ensure the corresponding environment variable is set with your real API key. There are two ways to do this:
>
> - **Option 1: Using a `.env` file** (recommended for security):
>   ```bash
>   # ~/.qwen/.env (or project root)
>   OPENAI_API_KEY=sk-your-actual-key-here
>   ```
>   Be sure to add `.env` to your `.gitignore` to prevent accidentally committing secrets.
> - **Option 2: Using the `env` field in `settings.json`** (as shown in the examples above):
>   ```json
>   {
>     "env": {
>       "OPENAI_API_KEY": "sk-your-actual-key-here"
>     }
>   }
>   ```
>
> Each provider example includes an `env` field to illustrate how the API key should be configured.
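A common failure mode is an `envKey` that points at a variable that was never exported. Before launching, you can confirm the variable is populated (a sketch; `DEMO_API_KEY` stands in for your real variable name such as `OPENAI_API_KEY`):

```shell
# Pre-flight sketch: check that the variable named by `envKey` is populated.
# DEMO_API_KEY is a hypothetical stand-in for the real variable name.
export DEMO_API_KEY="sk-placeholder"
ENV_KEY="DEMO_API_KEY"
if [ -n "$(printenv "$ENV_KEY")" ]; then
  echo "$ENV_KEY is set"
else
  echo "$ENV_KEY is missing; models referencing it will fail to authenticate"
fi
```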
## Alibaba Cloud Coding Plan

Alibaba Cloud Coding Plan provides a pre-configured set of Qwen models optimized for coding tasks. This feature is available to users with Alibaba Cloud Coding Plan API access and offers a simplified setup experience with automatic model configuration updates.

### Overview

When you authenticate with an Alibaba Cloud Coding Plan API key using the `/auth` command, Qwen Code automatically configures the following models:

| Model ID               | Name                 | Description                            |
| ---------------------- | -------------------- | -------------------------------------- |
| `qwen3.5-plus`         | qwen3.5-plus         | Advanced model with thinking enabled   |
| `qwen3-coder-plus`     | qwen3-coder-plus     | Optimized for coding tasks             |
| `qwen3-max-2026-01-23` | qwen3-max-2026-01-23 | Latest max model with thinking enabled |

### Setup

1. Obtain an Alibaba Cloud Coding Plan API key:
   - **China**: <https://bailian.console.aliyun.com/?tab=model#/efm/coding_plan>
   - **International**: <https://modelstudio.console.alibabacloud.com/?tab=dashboard#/efm/coding_plan>
2. Run the `/auth` command in Qwen Code
3. Select **Alibaba Cloud Coding Plan**
4. Select your region
5. Enter your API key when prompted

The models will be automatically configured and added to your `/model` picker.
### Regions

Alibaba Cloud Coding Plan supports two regions:

| Region               | Endpoint                                        | Description             |
| -------------------- | ----------------------------------------------- | ----------------------- |
| China                | `https://coding.dashscope.aliyuncs.com/v1`      | Mainland China endpoint |
| Global/International | `https://coding-intl.dashscope.aliyuncs.com/v1` | International endpoint  |

The region is selected during authentication and stored in `settings.json` under `codingPlan.region`. To switch regions, re-run the `/auth` command and select a different region.

### API Key Storage

When you configure Coding Plan through the `/auth` command, the API key is stored using the reserved environment variable name `BAILIAN_CODING_PLAN_API_KEY`. By default, it is stored in the `env` field of your `settings.json` file.

> [!warning]
>
> **Security Recommendation**: For better security, move the API key from `settings.json` to a separate `.env` file and load it as an environment variable. For example:
>
> ```bash
> # ~/.qwen/.env
> BAILIAN_CODING_PLAN_API_KEY=your-api-key-here
> ```
>
> Then ensure this file is added to your `.gitignore` if you're using project-level settings.
### Automatic Updates

Coding Plan model configurations are versioned. When Qwen Code detects a newer version of the model template, you will be prompted to update. Accepting the update will:

- Replace the existing Coding Plan model configurations with the latest versions
- Preserve any custom model configurations you've added manually
- Automatically switch to the first model in the updated configuration

The update process ensures you always have access to the latest model configurations and features without manual intervention.
### Manual Configuration (Advanced)

If you prefer to configure Coding Plan models manually, you can add them to your `settings.json` like any other OpenAI-compatible provider:

```json
{
  "modelProviders": {
    "openai": [
      {
        "id": "qwen3-coder-plus",
        "name": "qwen3-coder-plus",
        "description": "Qwen3-Coder via Alibaba Cloud Coding Plan",
        "envKey": "YOUR_CUSTOM_ENV_KEY",
        "baseUrl": "https://coding.dashscope.aliyuncs.com/v1"
      }
    ]
  }
}
```

> [!note]
>
> When using manual configuration:
>
> - You can use any environment variable name for `envKey`
> - You do not need to configure `codingPlan.*`
> - **Automatic updates will not apply** to manually configured Coding Plan models

> [!warning]
>
> If you also use automatic Coding Plan configuration, automatic updates may overwrite manual configurations that share the same `envKey` and `baseUrl` as the automatic configuration. To avoid this, use a different `envKey` for your manual entries where possible.
## Resolution Layers and Atomicity

The effective auth/model/credential values are chosen per field using the following precedence (first present wins). You can combine `--auth-type` with `--model` to point directly at a provider entry; these CLI flags are applied before the other layers.

| Layer (highest → lowest)   | authType                            | model                                           | apiKey                                              | baseUrl                                              | apiKeyEnvKey           | proxy                             |
| -------------------------- | ----------------------------------- | ----------------------------------------------- | --------------------------------------------------- | ---------------------------------------------------- | ---------------------- | --------------------------------- |
| Programmatic overrides     | `/auth`                             | `/auth` input                                   | `/auth` input                                       | `/auth` input                                        | —                      | —                                 |
| Model provider selection   | —                                   | `modelProvider.id`                              | `env[modelProvider.envKey]`                         | `modelProvider.baseUrl`                              | `modelProvider.envKey` | —                                 |
| CLI arguments              | `--auth-type`                       | `--model`                                       | `--openaiApiKey` (or provider-specific equivalents) | `--openaiBaseUrl` (or provider-specific equivalents) | —                      | —                                 |
| Environment variables      | —                                   | Provider-specific mapping (e.g. `OPENAI_MODEL`) | Provider-specific mapping (e.g. `OPENAI_API_KEY`)   | Provider-specific mapping (e.g. `OPENAI_BASE_URL`)   | —                      | —                                 |
| Settings (`settings.json`) | `security.auth.selectedType`        | `model.name`                                    | `security.auth.apiKey`                              | `security.auth.baseUrl`                              | —                      | —                                 |
| Default / computed         | Falls back to `AuthType.QWEN_OAUTH` | Built-in default (OpenAI ⇒ `qwen3-coder-plus`)  | —                                                   | —                                                    | —                      | `Config.getProxy()` if configured |

When present, CLI auth flags override settings; otherwise `security.auth.selectedType` or the implicit default determines the auth type. Qwen OAuth and OpenAI are the only auth types surfaced without extra configuration.

> [!warning]
>
> **Deprecation of `security.auth.apiKey` and `security.auth.baseUrl`:** Directly configuring API credentials via `security.auth.apiKey` and `security.auth.baseUrl` in `settings.json` is deprecated. These settings were used in earlier versions for credentials entered through the UI, but the credential input flow was removed in version 0.10.1, and the fields will be removed entirely in a future release. **It is strongly recommended to migrate to `modelProviders`** for all model and credential configuration; use `envKey` in `modelProviders` to reference environment variables instead of hardcoding credentials in settings files.
## Generation Config Layering: The Impermeable Provider Layer

Configuration resolution follows a strict layering model with one crucial rule: **the modelProvider layer is impermeable**.

### How it works

1. **When a modelProvider model IS selected** (e.g., via the `/model` command choosing a provider-configured model):
   - The entire `generationConfig` from the provider is applied **atomically**
   - **The provider layer is completely impermeable** — lower layers (CLI, env, settings) do not participate in generationConfig resolution at all
   - All fields defined in `modelProviders[].generationConfig` use the provider's values
   - All fields **not defined** by the provider are set to `undefined` (not inherited from settings)
   - This ensures provider configurations act as a complete, self-contained "sealed package"

2. **When NO modelProvider model is selected** (e.g., using `--model` with a raw model ID, or using CLI/env/settings directly):
   - Resolution falls through to the lower layers
   - Fields are populated from CLI → env → settings → defaults
   - This creates a **Runtime Model** (see next section)

### Per-field precedence for `generationConfig`

| Priority | Source                                        | Behavior                                                                                                |
| -------- | --------------------------------------------- | ------------------------------------------------------------------------------------------------------- |
| 1        | Programmatic overrides                        | Runtime `/model`, `/auth` changes                                                                       |
| 2        | `modelProviders[authType][].generationConfig` | **Impermeable layer** - completely replaces all generationConfig fields; lower layers do not participate |
| 3        | `settings.model.generationConfig`             | Only used for **Runtime Models** (when no provider model is selected)                                   |
| 4        | Content-generator defaults                    | Provider-specific defaults (e.g., OpenAI vs Gemini) - only for Runtime Models                           |

### Atomic field treatment

The following fields are treated as atomic objects: provider values completely replace the entire object, and no merging occurs:

- `samplingParams` - Temperature, top_p, max_tokens, etc.
- `customHeaders` - Custom HTTP headers
- `extra_body` - Extra request body parameters
### Example

```json
// User settings (~/.qwen/settings.json)
{
  "model": {
    "generationConfig": {
      "timeout": 30000,
      "samplingParams": { "temperature": 0.5, "max_tokens": 1000 }
    }
  }
}

// modelProviders configuration
{
  "modelProviders": {
    "openai": [{
      "id": "gpt-4o",
      "envKey": "OPENAI_API_KEY",
      "generationConfig": {
        "timeout": 60000,
        "samplingParams": { "temperature": 0.2 }
      }
    }]
  }
}
```

When `gpt-4o` is selected from `modelProviders`:

- `timeout` = 60000 (from provider, overrides settings)
- `samplingParams.temperature` = 0.2 (from provider, completely replaces the settings object)
- `samplingParams.max_tokens` = **undefined** (not defined by the provider, and the provider layer does not inherit from settings — fields not provided are explicitly set to undefined)

When using a raw model via `--model gpt-4` (not from `modelProviders`, which creates a Runtime Model):

- `timeout` = 30000 (from settings)
- `samplingParams.temperature` = 0.5 (from settings)
- `samplingParams.max_tokens` = 1000 (from settings)

The merge strategy for `modelProviders` itself is REPLACE: a `modelProviders` section in project settings overrides the corresponding section in user settings entirely, rather than merging the two.
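For example, given this hypothetical pair of settings files, the project file's `modelProviders` wins wholesale; the user-level `anthropic` catalog does not survive the merge:

```json
// ~/.qwen/settings.json (user scope)
{
  "modelProviders": {
    "anthropic": [{ "id": "claude-3-5-sonnet", "envKey": "ANTHROPIC_API_KEY" }]
  }
}

// <project>/.qwen/settings.json (project scope) - replaces the user block entirely
{
  "modelProviders": {
    "openai": [{ "id": "gpt-4o", "envKey": "OPENAI_API_KEY" }]
  }
}

// Effective modelProviders: only the openai entry; the anthropic entry is gone.
```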
## Provider Models vs Runtime Models

Qwen Code distinguishes between two types of model configurations:

### Provider Model

- Defined in the `modelProviders` configuration
- Has a complete, atomic configuration package
- When selected, its configuration is applied as an impermeable layer
- Appears in the `/model` command list with full metadata (name, description, capabilities)
- Recommended for multi-model workflows and team consistency

### Runtime Model

- Created dynamically when using raw model IDs via CLI (`--model`), environment variables, or settings
- Not defined in `modelProviders`
- Configuration is built by "projecting" through the resolution layers (CLI → env → settings → defaults)
- Automatically captured as a **RuntimeModelSnapshot** when a complete configuration is detected
- Allows reuse without re-entering credentials

### RuntimeModelSnapshot lifecycle

When you configure a model without using `modelProviders`, Qwen Code automatically creates a RuntimeModelSnapshot to preserve your configuration:

```bash
# This creates a RuntimeModelSnapshot with ID: $runtime|openai|my-custom-model
qwen --auth-type openai --model my-custom-model --openaiApiKey $KEY --openaiBaseUrl https://api.example.com/v1
```

The snapshot:

- Captures the model ID, API key, base URL, and generation config
- Persists across sessions (stored in memory during runtime)
- Appears in the `/model` command list as a runtime option
- Can be switched to using `/model $runtime|openai|my-custom-model`

### Key differences

| Aspect                  | Provider Model                    | Runtime Model                              |
| ----------------------- | --------------------------------- | ------------------------------------------ |
| Configuration source    | `modelProviders` in settings      | CLI, env, settings layers                  |
| Configuration atomicity | Complete, impermeable package     | Layered, each field resolved independently |
| Reusability             | Always available in `/model` list | Captured as snapshot, appears if complete  |
| Team sharing            | Yes (via committed settings)      | No (user-local)                            |
| Credential storage      | Reference via `envKey` only       | May capture the actual key in the snapshot |

### When to use each

- **Use Provider Models** when you have standard models shared across a team, need consistent configurations, or want to prevent accidental overrides
- **Use Runtime Models** when quickly testing a new model, using temporary credentials, or working with ad-hoc endpoints

## Selection Persistence and Recommendations

> [!important]
>
> Define `modelProviders` in the user-scope `~/.qwen/settings.json` whenever possible, and avoid persisting credential overrides in any scope. Keeping the provider catalog in user settings prevents merge/override conflicts between project and user scopes and ensures `/auth` and `/model` updates always write back to a consistent scope.

- `/model` and `/auth` persist `model.name` (where applicable) and `security.auth.selectedType` to the closest writable scope that already defines `modelProviders`; otherwise they fall back to the user scope. This keeps workspace/user files in sync with the active provider catalog.
- Without `modelProviders`, the resolver mixes the CLI/env/settings layers, creating Runtime Models. This is fine for single-provider setups but cumbersome when switching frequently. Define provider catalogs whenever multi-model workflows are common so that switches stay atomic, source-attributed, and debuggable.
@@ -0,0 +1,55 @@
# Ignoring Files

This document provides an overview of the Qwen Ignore (`.qwenignore`) feature of Qwen Code.

Qwen Code can automatically ignore files, similar to `.gitignore` (used by Git). Adding paths to your `.qwenignore` file excludes them from tools that support this feature, although they remain visible to other services (such as Git).

## How it works

When you add a path to your `.qwenignore` file, tools that respect this file exclude matching files and directories from their operations. For example, when you use the [`read_many_files`](../../developers/tools/multi-file) command, any paths in your `.qwenignore` file are automatically excluded.

For the most part, `.qwenignore` follows the conventions of `.gitignore` files:

- Blank lines and lines starting with `#` are ignored.
- Standard glob patterns are supported (such as `*`, `?`, and `[]`).
- A trailing `/` matches only directories.
- A leading `/` anchors the pattern relative to the `.qwenignore` file.
- `!` negates a pattern.

You can update your `.qwenignore` file at any time. To apply the changes, you must restart your Qwen Code session.
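Combining these conventions, a `.qwenignore` might look like this (illustrative patterns only):

```
# Anchored: matches only ./build at the ignore file's level, not src/build
/build/

# Character class: matches log0.txt through log9.txt
log[0-9].txt

# Single-character wildcard: matches a.env, b.env, etc.
?.env
```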

## How to use `.qwenignore`

| Step                   | Description                                                                                   |
| ---------------------- | --------------------------------------------------------------------------------------------- |
| **Enable .qwenignore** | Create a file named `.qwenignore` in your project root directory                              |
| **Add ignore rules**   | Open the `.qwenignore` file and add paths to ignore, for example `/archive/` or `apikeys.txt` |

### `.qwenignore` examples

You can use `.qwenignore` to ignore directories and files:

```
# Exclude your /packages/ directory and all subdirectories
/packages/

# Exclude your apikeys.txt file
apikeys.txt
```

You can use wildcards in your `.qwenignore` file with `*`:

```
# Exclude all .md files
*.md
```

Finally, you can exempt files and directories from exclusion with `!`:

```
# Exclude all .md files except README.md
*.md
!README.md
```

To remove paths from your `.qwenignore` file, delete the relevant lines.