@hafez/claude-code-router 2.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md ADDED
@@ -0,0 +1,704 @@
1
+ ![](blog/images/claude-code-router-img.png)
2
+
3
+ [![](https://img.shields.io/badge/%F0%9F%87%A8%F0%9F%87%B3-%E4%B8%AD%E6%96%87%E7%89%88-ff0000?style=flat)](README_zh.md)
4
+ [![Discord](https://img.shields.io/badge/Discord-%235865F2.svg?&logo=discord&logoColor=white)](https://discord.gg/rdftVMaUcS)
5
+ [![](https://img.shields.io/github/license/musistudio/claude-code-router)](https://github.com/musistudio/claude-code-router/blob/main/LICENSE)
6
+
7
+ <hr>
8
+
9
+ ![](blog/images/sponsors/glm-en.jpg)
10
+ > This project is sponsored by Z.ai, supporting us with their GLM CODING PLAN.
11
+ > GLM CODING PLAN is a subscription service designed for AI coding, starting at just $3/month. It provides access to their flagship GLM-4.7 model across 10+ popular AI coding tools (Claude Code, Cline, Roo Code, etc.), offering developers a top-tier, fast, and stable coding experience.
12
+ > Get 10% OFF GLM CODING PLAN: https://z.ai/subscribe?ic=8JVLJQFSKB
13
+
14
+ > [Progressive Disclosure of Agent Tools from the Perspective of CLI Tool Style](/blog/en/progressive-disclosure-of-agent-tools-from-the-perspective-of-cli-tool-style.md)
15
+
16
+ > A powerful tool to route Claude Code requests to different models and customize any request.
17
+
18
+ ![](blog/images/claude-code.png)
19
+
20
+ ## ✨ Features
21
+
22
+ - **Model Routing**: Route requests to different models based on your needs (e.g., background tasks, thinking, long context).
23
+ - **Multi-Provider Support**: Supports various model providers like OpenRouter, DeepSeek, Ollama, Gemini, Volcengine, and SiliconFlow.
24
+ - **Request/Response Transformation**: Customize requests and responses for different providers using transformers.
25
+ - **Dynamic Model Switching**: Switch models on-the-fly within Claude Code using the `/model` command.
26
+ - **CLI Model Management**: Manage models and providers directly from the terminal with `ccr model`.
27
+ - **GitHub Actions Integration**: Trigger Claude Code tasks in your GitHub workflows.
28
+ - **Plugin System**: Extend functionality with custom transformers.
29
+
30
+ ## 🚀 Getting Started
31
+
32
+ ### 1. Installation
33
+
34
+ First, ensure you have [Claude Code](https://docs.anthropic.com/en/docs/claude-code/quickstart) installed:
35
+
36
+ ```shell
37
+ npm install -g @anthropic-ai/claude-code
38
+ ```
39
+
40
+ Then, install Claude Code Router:
41
+
42
+ ```shell
43
+ npm install -g @musistudio/claude-code-router
44
+ ```
45
+
46
+ ### 2. Configuration
47
+
48
+ Create and configure your `~/.claude-code-router/config.json` file. For more details, you can refer to `config.example.json`.
49
+
50
+ The `config.json` file has several key sections:
51
+
52
+ - **`PROXY_URL`** (optional): You can set a proxy for API requests, for example: `"PROXY_URL": "http://127.0.0.1:7890"`.
53
+ - **`LOG`** (optional): You can enable logging by setting it to `true`. When set to `false`, no log files will be created. Default is `true`.
54
+ - **`LOG_LEVEL`** (optional): Set the logging level. Available options are: `"fatal"`, `"error"`, `"warn"`, `"info"`, `"debug"`, `"trace"`. Default is `"debug"`.
55
+ - **Logging Systems**: The Claude Code Router uses two separate logging systems:
56
+ - **Server-level logs**: HTTP requests, API calls, and server events are logged using pino in the `~/.claude-code-router/logs/` directory with filenames like `ccr-*.log`
57
+ - **Application-level logs**: Routing decisions and business logic events are logged in `~/.claude-code-router/claude-code-router.log`
58
+ - **`APIKEY`** (optional): You can set a secret key to authenticate requests. When set, clients must provide this key in the `Authorization` header (e.g., `Bearer your-secret-key`) or the `x-api-key` header. Example: `"APIKEY": "your-secret-key"`.
59
+ - **`HOST`** (optional): You can set the host address for the server. If `APIKEY` is not set, the host will be forced to `127.0.0.1` for security reasons to prevent unauthorized access. Example: `"HOST": "0.0.0.0"`.
60
+ - **`NON_INTERACTIVE_MODE`** (optional): When set to `true`, enables compatibility with non-interactive environments like GitHub Actions, Docker containers, or other CI/CD systems. This sets appropriate environment variables (`CI=true`, `FORCE_COLOR=0`, etc.) and configures stdin handling to prevent the process from hanging in automated environments. Example: `"NON_INTERACTIVE_MODE": true`.
61
+
62
+ - **`Providers`**: Used to configure different model providers.
63
+ - **`Router`**: Used to set up routing rules. `default` specifies the default model, which will be used for all requests if no other route is configured.
64
+ - **`API_TIMEOUT_MS`**: Specifies the timeout for API calls in milliseconds.
65
+
66
+ #### Environment Variable Interpolation
67
+
68
+ Claude Code Router supports environment variable interpolation for secure API key management. You can reference environment variables in your `config.json` using either `$VAR_NAME` or `${VAR_NAME}` syntax:
69
+
70
+ ```json
71
+ {
72
+ "OPENAI_API_KEY": "$OPENAI_API_KEY",
73
+ "GEMINI_API_KEY": "${GEMINI_API_KEY}",
74
+ "Providers": [
75
+ {
76
+ "name": "openai",
77
+ "api_base_url": "https://api.openai.com/v1/chat/completions",
78
+ "api_key": "$OPENAI_API_KEY",
79
+ "models": ["gpt-5", "gpt-5-mini"]
80
+ }
81
+ ]
82
+ }
83
+ ```
84
+
85
+ This allows you to keep sensitive API keys in environment variables instead of hardcoding them in configuration files. The interpolation works recursively through nested objects and arrays.
86
+
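The recursive walk described above can be sketched in a few lines. This is an illustration of the documented behavior, not the project's actual implementation; the `interpolate` helper name is our own:

```javascript
// Sketch of the interpolation behavior described above (illustrative):
// walk a parsed config and replace $VAR / ${VAR} references with values
// from process.env, leaving unknown variables intact.
function interpolate(value) {
  if (typeof value === "string") {
    return value.replace(/\$\{(\w+)\}|\$(\w+)/g, (match, braced, bare) => {
      const resolved = process.env[braced || bare];
      return resolved !== undefined ? resolved : match;
    });
  }
  if (Array.isArray(value)) return value.map(interpolate);
  if (value !== null && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value).map(([key, v]) => [key, interpolate(v)])
    );
  }
  return value; // numbers, booleans, null pass through unchanged
}
```

Note that a reference to an unset variable is left as-is rather than replaced with an empty string, so a missing key is easy to spot in the effective configuration.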
87
+ Here is a comprehensive example:
88
+
89
+ ```json
90
+ {
91
+ "APIKEY": "your-secret-key",
92
+ "PROXY_URL": "http://127.0.0.1:7890",
93
+ "LOG": true,
94
+ "API_TIMEOUT_MS": 600000,
95
+ "NON_INTERACTIVE_MODE": false,
96
+ "Providers": [
97
+ {
98
+ "name": "openrouter",
99
+ "api_base_url": "https://openrouter.ai/api/v1/chat/completions",
100
+ "api_key": "sk-xxx",
101
+ "models": [
102
+ "google/gemini-2.5-pro-preview",
103
+ "anthropic/claude-sonnet-4",
104
+ "anthropic/claude-3.5-sonnet",
105
+ "anthropic/claude-3.7-sonnet:thinking"
106
+ ],
107
+ "transformer": {
108
+ "use": ["openrouter"]
109
+ }
110
+ },
111
+ {
112
+ "name": "deepseek",
113
+ "api_base_url": "https://api.deepseek.com/chat/completions",
114
+ "api_key": "sk-xxx",
115
+ "models": ["deepseek-chat", "deepseek-reasoner"],
116
+ "transformer": {
117
+ "use": ["deepseek"],
118
+ "deepseek-chat": {
119
+ "use": ["tooluse"]
120
+ }
121
+ }
122
+ },
123
+ {
124
+ "name": "ollama",
125
+ "api_base_url": "http://localhost:11434/v1/chat/completions",
126
+ "api_key": "ollama",
127
+ "models": ["qwen2.5-coder:latest"]
128
+ },
129
+ {
130
+ "name": "gemini",
131
+ "api_base_url": "https://generativelanguage.googleapis.com/v1beta/models/",
132
+ "api_key": "sk-xxx",
133
+ "models": ["gemini-2.5-flash", "gemini-2.5-pro"],
134
+ "transformer": {
135
+ "use": ["gemini"]
136
+ }
137
+ },
138
+ {
139
+ "name": "volcengine",
140
+ "api_base_url": "https://ark.cn-beijing.volces.com/api/v3/chat/completions",
141
+ "api_key": "sk-xxx",
142
+ "models": ["deepseek-v3-250324", "deepseek-r1-250528"],
143
+ "transformer": {
144
+ "use": ["deepseek"]
145
+ }
146
+ },
147
+ {
148
+ "name": "modelscope",
149
+ "api_base_url": "https://api-inference.modelscope.cn/v1/chat/completions",
150
+ "api_key": "",
151
+ "models": ["Qwen/Qwen3-Coder-480B-A35B-Instruct", "Qwen/Qwen3-235B-A22B-Thinking-2507"],
152
+ "transformer": {
153
+ "use": [
154
+ [
155
+ "maxtoken",
156
+ {
157
+ "max_tokens": 65536
158
+ }
159
+ ],
160
+ "enhancetool"
161
+ ],
162
+ "Qwen/Qwen3-235B-A22B-Thinking-2507": {
163
+ "use": ["reasoning"]
164
+ }
165
+ }
166
+ },
167
+ {
168
+ "name": "dashscope",
169
+ "api_base_url": "https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions",
170
+ "api_key": "",
171
+ "models": ["qwen3-coder-plus"],
172
+ "transformer": {
173
+ "use": [
174
+ [
175
+ "maxtoken",
176
+ {
177
+ "max_tokens": 65536
178
+ }
179
+ ],
180
+ "enhancetool"
181
+ ]
182
+ }
183
+ },
184
+ {
185
+ "name": "aihubmix",
186
+ "api_base_url": "https://aihubmix.com/v1/chat/completions",
187
+ "api_key": "sk-",
188
+ "models": [
189
+ "Z/glm-4.5",
190
+ "claude-opus-4-20250514",
191
+ "gemini-2.5-pro"
192
+ ]
193
+ }
194
+ ],
195
+ "Router": {
196
+ "default": "deepseek,deepseek-chat",
197
+ "background": "ollama,qwen2.5-coder:latest",
198
+ "think": "deepseek,deepseek-reasoner",
199
+ "longContext": "openrouter,google/gemini-2.5-pro-preview",
200
+ "longContextThreshold": 60000,
201
+ "webSearch": "gemini,gemini-2.5-flash"
202
+ }
203
+ }
204
+ ```
205
+
206
+ ### 3. Running Claude Code with the Router
207
+
208
+ Start Claude Code using the router:
209
+
210
+ ```shell
211
+ ccr code
212
+ ```
213
+
214
+ > **Note**: After modifying the configuration file, you need to restart the service for the changes to take effect:
215
+ >
216
+ > ```shell
217
+ > ccr restart
218
+ > ```
219
+
220
+ ### 4. UI Mode
221
+
222
+ For a more intuitive experience, you can use the UI mode to manage your configuration:
223
+
224
+ ```shell
225
+ ccr ui
226
+ ```
227
+
228
+ This will open a web-based interface where you can easily view and edit your `config.json` file.
229
+
230
+ ![UI](/blog/images/ui.png)
231
+
232
+ ### 5. CLI Model Management
233
+
234
+ For users who prefer terminal-based workflows, you can use the interactive CLI model selector:
235
+
236
+ ```shell
237
+ ccr model
238
+ ```
239
+ ![](blog/images/models.gif)
240
+
241
+ This command provides an interactive interface to:
242
+
243
+ - View current configuration: See all configured models (default, background, think, longContext, webSearch, image)
245
+ - Switch models: Quickly change which model is used for each router type
246
+ - Add new models: Add models to existing providers
247
+ - Create new providers: Set up complete provider configurations including:
248
+ - Provider name and API endpoint
249
+ - API key
250
+ - Available models
251
+ - Transformer configuration with support for:
252
+ - Multiple transformers (openrouter, deepseek, gemini, etc.)
253
+ - Transformer options (e.g., maxtoken with custom limits)
254
+ - Provider-specific routing (e.g., OpenRouter provider preferences)
255
+
256
+ The CLI tool validates all inputs and provides helpful prompts to guide you through the configuration process, making it easy to manage complex setups without editing JSON files manually.
257
+
258
+ ### 6. Presets Management
259
+
260
+ Presets allow you to save, share, and reuse configurations easily. You can export your current configuration as a preset and install presets from files or URLs.
261
+
262
+ ```shell
263
+ # Export current configuration as a preset
264
+ ccr preset export my-preset
265
+
266
+ # Export with metadata
267
+ ccr preset export my-preset --description "My OpenAI config" --author "Your Name" --tags "openai,production"
268
+
269
+ # Install a preset from local directory
270
+ ccr preset install /path/to/preset
271
+
272
+ # List all installed presets
273
+ ccr preset list
274
+
275
+ # Show preset information
276
+ ccr preset info my-preset
277
+
278
+ # Delete a preset
279
+ ccr preset delete my-preset
280
+ ```
281
+
282
+ **Preset Features:**
283
+ - **Export**: Save your current configuration as a preset directory (with manifest.json)
284
+ - **Install**: Install presets from local directories
285
+ - **Sensitive Data Handling**: API keys and other sensitive data are automatically sanitized during export (marked as `{{field}}` placeholders)
286
+ - **Dynamic Configuration**: Presets can include input schemas for collecting required information during installation
287
+ - **Version Control**: Each preset includes version metadata for tracking updates
288
+
289
+ **Preset File Structure:**
290
+ ```
291
+ ~/.claude-code-router/presets/
292
+ ├── my-preset/
293
+ │ └── manifest.json # Contains configuration and metadata
294
+ ```
295
+
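As a rough sketch (the `config` key and the exact field names here are assumptions, not a documented schema), a `manifest.json` combining the export metadata flags with a sanitized provider might look like:

```json
{
  "name": "my-preset",
  "version": "1.0.0",
  "description": "My OpenAI config",
  "author": "Your Name",
  "tags": ["openai", "production"],
  "config": {
    "Providers": [
      {
        "name": "openai",
        "api_base_url": "https://api.openai.com/v1/chat/completions",
        "api_key": "{{api_key}}",
        "models": ["gpt-5", "gpt-5-mini"]
      }
    ]
  }
}
```

The `{{api_key}}` placeholder reflects the sensitive-data sanitization described above; the real value is collected again when the preset is installed.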
296
+ ### 7. Activate Command (Environment Variables Setup)
297
+
298
+ The `activate` command allows you to set up environment variables globally in your shell, enabling you to use the `claude` command directly or integrate Claude Code Router with applications built using the Agent SDK.
299
+
300
+ To activate the environment variables, run:
301
+
302
+ ```shell
303
+ eval "$(ccr activate)"
304
+ ```
305
+
306
+ This command outputs the necessary environment variables in a shell-friendly format, which are then set in your current shell session. After activation, you can:
307
+
308
+ - **Use `claude` command directly**: Run `claude` commands without needing to use `ccr code`. The `claude` command will automatically route requests through Claude Code Router.
309
+ - **Integrate with Agent SDK applications**: Applications built with the Anthropic Agent SDK will automatically use the configured router and models.
310
+
311
+ The `activate` command sets the following environment variables:
312
+
313
+ - `ANTHROPIC_AUTH_TOKEN`: API key from your configuration
314
+ - `ANTHROPIC_BASE_URL`: The local router endpoint (default: `http://127.0.0.1:3456`)
315
+ - `NO_PROXY`: Set to `127.0.0.1` to prevent proxy interference
316
+ - `DISABLE_TELEMETRY`: Disables telemetry
317
+ - `DISABLE_COST_WARNINGS`: Disables cost warnings
318
+ - `API_TIMEOUT_MS`: API timeout from your configuration
319
+
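Concretely, `eval "$(ccr activate)"` consumes a series of `export` statements along these lines. This is only illustrative: the variable names come from the list above, while the values shown assume the defaults and the example config, and the real output is derived from your `config.json`:

```shell
# Illustrative sketch of the kind of output `ccr activate` prints; actual
# values are taken from ~/.claude-code-router/config.json.
export ANTHROPIC_AUTH_TOKEN="your-secret-key"
export ANTHROPIC_BASE_URL="http://127.0.0.1:3456"
export NO_PROXY="127.0.0.1"
export API_TIMEOUT_MS="600000"
```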
320
+ > **Note**: Make sure the Claude Code Router service is running (`ccr start`) before using the activated environment variables. The environment variables are only valid for the current shell session. To make them persistent, you can add `eval "$(ccr activate)"` to your shell configuration file (e.g., `~/.zshrc` or `~/.bashrc`).
321
+
322
+ #### Providers
323
+
324
+ The `Providers` array is where you define the different model providers you want to use. Each provider object takes the following fields:
325
+
326
+ - `name`: A unique name for the provider.
327
+ - `api_base_url`: The full API endpoint for chat completions.
328
+ - `api_key`: Your API key for the provider.
329
+ - `models`: A list of model names available from this provider.
330
+ - `transformer` (optional): Specifies transformers to process requests and responses.
331
+
332
+ #### Transformers
333
+
334
+ Transformers allow you to modify the request and response payloads to ensure compatibility with different provider APIs.
335
+
336
+ - **Global Transformer**: Apply a transformer to all models from a provider. In this example, the `openrouter` transformer is applied to all models under the `openrouter` provider.
337
+ ```json
338
+ {
339
+ "name": "openrouter",
340
+ "api_base_url": "https://openrouter.ai/api/v1/chat/completions",
341
+ "api_key": "sk-xxx",
342
+ "models": [
343
+ "google/gemini-2.5-pro-preview",
344
+ "anthropic/claude-sonnet-4",
345
+ "anthropic/claude-3.5-sonnet"
346
+ ],
347
+ "transformer": { "use": ["openrouter"] }
348
+ }
349
+ ```
350
+ - **Model-Specific Transformer**: Apply a transformer to a specific model. In this example, the `deepseek` transformer is applied to all models, and an additional `tooluse` transformer is applied only to the `deepseek-chat` model.
351
+
352
+ ```json
353
+ {
354
+ "name": "deepseek",
355
+ "api_base_url": "https://api.deepseek.com/chat/completions",
356
+ "api_key": "sk-xxx",
357
+ "models": ["deepseek-chat", "deepseek-reasoner"],
358
+ "transformer": {
359
+ "use": ["deepseek"],
360
+ "deepseek-chat": { "use": ["tooluse"] }
361
+ }
362
+ }
363
+ ```
364
+
365
+ - **Passing Options to a Transformer**: Some transformers, like `maxtoken`, accept options. To pass options, use a nested array where the first element is the transformer name and the second is an options object.
366
+ ```json
367
+ {
368
+ "name": "siliconflow",
369
+ "api_base_url": "https://api.siliconflow.cn/v1/chat/completions",
370
+ "api_key": "sk-xxx",
371
+ "models": ["moonshotai/Kimi-K2-Instruct"],
372
+ "transformer": {
373
+ "use": [
374
+ [
375
+ "maxtoken",
376
+ {
377
+ "max_tokens": 16384
378
+ }
379
+ ]
380
+ ]
381
+ }
382
+ }
383
+ ```
384
+
385
+ **Available Built-in Transformers:**
386
+
387
+ - `Anthropic`: If you use only the `Anthropic` transformer, it will preserve the original request and response parameters (you can use it to connect directly to an Anthropic endpoint).
388
+ - `deepseek`: Adapts requests/responses for DeepSeek API.
389
+ - `gemini`: Adapts requests/responses for Gemini API.
390
+ - `openrouter`: Adapts requests/responses for OpenRouter API. It can also accept a `provider` routing parameter to specify which underlying providers OpenRouter should use. For more details, refer to the [OpenRouter documentation](https://openrouter.ai/docs/features/provider-routing). See an example below:
391
+ ```json
392
+ "transformer": {
393
+ "use": ["openrouter"],
394
+ "moonshotai/kimi-k2": {
395
+ "use": [
396
+ [
397
+ "openrouter",
398
+ {
399
+ "provider": {
400
+ "only": ["moonshotai/fp8"]
401
+ }
402
+ }
403
+ ]
404
+ ]
405
+ }
406
+ }
407
+ ```
408
+ - `groq`: Adapts requests/responses for the Groq API.
409
+ - `maxtoken`: Sets a specific `max_tokens` value.
410
+ - `tooluse`: Optimizes tool usage for certain models via `tool_choice`.
411
+ - `gemini-cli` (experimental): Unofficial support for Gemini via Gemini CLI [gemini-cli.js](https://gist.github.com/musistudio/1c13a65f35916a7ab690649d3df8d1cd).
412
+ - `reasoning`: Used to process the `reasoning_content` field.
413
+ - `sampling`: Used to process sampling information fields such as `temperature`, `top_p`, `top_k`, and `repetition_penalty`.
414
+ - `enhancetool`: Adds a layer of error tolerance to the tool call parameters returned by the LLM (this will cause the tool call information to no longer be streamed).
415
+ - `cleancache`: Clears the `cache_control` field from requests.
416
+ - `vertex-gemini`: Handles the Gemini API using Vertex authentication.
417
+ - `chutes-glm`: Unofficial support for the GLM 4.5 model via Chutes [chutes-glm-transformer.js](https://gist.github.com/vitobotta/2be3f33722e05e8d4f9d2b0138b8c863).
418
+ - `qwen-cli` (experimental): Unofficial support for qwen3-coder-plus model via Qwen CLI [qwen-cli.js](https://gist.github.com/musistudio/f5a67841ced39912fd99e42200d5ca8b).
419
+ - `rovo-cli` (experimental): Unofficial support for gpt-5 via Atlassian Rovo Dev CLI [rovo-cli.js](https://gist.github.com/SaseQ/c2a20a38b11276537ec5332d1f7a5e53).
420
+
421
+ **Custom Transformers:**
422
+
423
+ You can also create your own transformers and load them via the `transformers` field in `config.json`.
424
+
425
+ ```json
426
+ {
427
+ "transformers": [
428
+ {
429
+ "path": "/User/xxx/.claude-code-router/plugins/gemini-cli.js",
430
+ "options": {
431
+ "project": "xxx"
432
+ }
433
+ }
434
+ ]
435
+ }
436
+ ```
437
+
438
+ #### Router
439
+
440
+ The `Router` object defines which model to use for different scenarios:
441
+
442
+ - `default`: The default model for general tasks.
443
+ - `background`: A model for background tasks. This can be a smaller, local model to save costs.
444
+ - `think`: A model for reasoning-heavy tasks, like Plan Mode.
445
+ - `longContext`: A model for handling long contexts (e.g., > 60K tokens).
446
+ - `longContextThreshold` (optional): The token count threshold for triggering the long context model. Defaults to 60000 if not specified.
447
+ - `webSearch`: Used for web search tasks; this requires the model itself to support the feature. If you're using OpenRouter, add the `:online` suffix after the model name.
448
+ - `image` (beta): Used for handling image-related tasks (supported by CCR’s built-in agent). If the model does not support tool calling, you need to set the `config.forceUseImageAgent` property to `true`.
449
+
450
+ - You can also switch models dynamically in Claude Code with the `/model` command:
451
+ `/model provider_name,model_name`
452
+ Example: `/model openrouter,anthropic/claude-3.5-sonnet`
453
+
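Putting the scenario keys together, a minimal configuration using an OpenRouter `:online` web-search model and the image agent forced on might look like this (model names are illustrative; check availability with your provider):

```json
{
  "forceUseImageAgent": true,
  "Router": {
    "default": "deepseek,deepseek-chat",
    "think": "deepseek,deepseek-reasoner",
    "longContext": "openrouter,google/gemini-2.5-pro-preview",
    "longContextThreshold": 60000,
    "webSearch": "openrouter,google/gemini-2.5-pro-preview:online",
    "image": "openrouter,anthropic/claude-sonnet-4"
  }
}
```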
454
+ #### Custom Router
455
+
456
+ For more advanced routing logic, you can specify a custom router script via the `CUSTOM_ROUTER_PATH` in your `config.json`. This allows you to implement complex routing rules beyond the default scenarios.
457
+
458
+ In your `config.json`:
459
+
460
+ ```json
461
+ {
462
+ "CUSTOM_ROUTER_PATH": "/User/xxx/.claude-code-router/custom-router.js"
463
+ }
464
+ ```
465
+
466
+ The custom router file must be a JavaScript module that exports an `async` function. This function receives the request object and the config object as arguments and should return the provider and model name as a string (e.g., `"provider_name,model_name"`), or `null` to fall back to the default router.
467
+
468
+ Here is an example of a `custom-router.js` based on `custom-router.example.js`:
469
+
470
+ ```javascript
471
+ // /User/xxx/.claude-code-router/custom-router.js
472
+
473
+ /**
474
+ * A custom router function to determine which model to use based on the request.
475
+ *
476
+ * @param {object} req - The request object from Claude Code, containing the request body.
477
+ * @param {object} config - The application's config object.
478
+ * @returns {Promise<string|null>} - A promise that resolves to the "provider,model_name" string, or null to use the default router.
479
+ */
480
+ module.exports = async function router(req, config) {
481
+ // The user message content may be a plain string or an array of content
+ // blocks, so normalize it to a string before matching.
+ const content = req.body.messages.find((m) => m.role === "user")?.content;
+ const userMessage = Array.isArray(content)
+ ? content.map((c) => c.text || "").join(" ")
+ : content;
482
+
483
+ if (userMessage && userMessage.includes("explain this code")) {
484
+ // Use a powerful model for code explanation
485
+ return "openrouter,anthropic/claude-3.5-sonnet";
486
+ }
487
+
488
+ // Fallback to the default router configuration
489
+ return null;
490
+ };
491
+ ```
492
+
493
+ ##### Subagent Routing
494
+
495
+ For routing within subagents, you can direct specific subagent tasks to designated models by including `<CCR-SUBAGENT-MODEL>provider,model</CCR-SUBAGENT-MODEL>` at the **beginning** of the subagent's prompt.
496
+
497
+ **Example:**
498
+
499
+ ```
500
+ <CCR-SUBAGENT-MODEL>openrouter,anthropic/claude-3.5-sonnet</CCR-SUBAGENT-MODEL>
501
+ Please help me analyze this code snippet for potential optimizations...
502
+ ```
503
+
504
+ ## Status Line (Beta)
505
+ To better monitor the status of claude-code-router at runtime, v1.0.40 adds a built-in statusline tool, which you can enable in the UI.
506
+ ![statusline-config.png](/blog/images/statusline-config.png)
507
+
508
+ The effect is as follows:
509
+ ![statusline](/blog/images/statusline.png)
510
+
511
+ ## 🤖 GitHub Actions
512
+
513
+ Integrate Claude Code Router into your CI/CD pipeline. After setting up [Claude Code Actions](https://docs.anthropic.com/en/docs/claude-code/github-actions), modify your `.github/workflows/claude.yaml` to use the router:
514
+
515
+ ```yaml
516
+ name: Claude Code
517
+
518
+ on:
519
+ issue_comment:
520
+ types: [created]
521
+ # ... other triggers
522
+
523
+ jobs:
524
+ claude:
525
+ if: |
526
+ (github.event_name == 'issue_comment' && contains(github.event.comment.body, '@claude')) ||
527
+ # ... other conditions
528
+ runs-on: ubuntu-latest
529
+ permissions:
530
+ contents: read
531
+ pull-requests: read
532
+ issues: read
533
+ id-token: write
534
+ steps:
535
+ - name: Checkout repository
536
+ uses: actions/checkout@v4
537
+ with:
538
+ fetch-depth: 1
539
+
540
+ - name: Prepare Environment
541
+ run: |
542
+ curl -fsSL https://bun.sh/install | bash
543
+ mkdir -p $HOME/.claude-code-router
544
+ cat << 'EOF' > $HOME/.claude-code-router/config.json
545
+ {
546
+ "log": true,
547
+ "NON_INTERACTIVE_MODE": true,
548
+ "OPENAI_API_KEY": "${{ secrets.OPENAI_API_KEY }}",
549
+ "OPENAI_BASE_URL": "https://api.deepseek.com",
550
+ "OPENAI_MODEL": "deepseek-chat"
551
+ }
552
+ EOF
553
+ shell: bash
554
+
555
+ - name: Start Claude Code Router
556
+ run: |
557
+ nohup ~/.bun/bin/bunx @musistudio/claude-code-router@1.0.8 start &
558
+ shell: bash
559
+
560
+ - name: Run Claude Code
561
+ id: claude
562
+ uses: anthropics/claude-code-action@beta
563
+ env:
564
+ ANTHROPIC_BASE_URL: http://localhost:3456
565
+ with:
566
+ anthropic_api_key: "any-string-is-ok"
567
+ ```
568
+
569
+ > **Note**: When running in GitHub Actions or other automation environments, make sure to set `"NON_INTERACTIVE_MODE": true` in your configuration to prevent the process from hanging due to stdin handling issues.
570
+
571
+ This setup allows for interesting automations, like running tasks during off-peak hours to reduce API costs.
572
+
573
+ ## 📝 Further Reading
574
+
575
+ - [Project Motivation and How It Works](blog/en/project-motivation-and-how-it-works.md)
576
+ - [Maybe We Can Do More with the Router](blog/en/maybe-we-can-do-more-with-the-route.md)
577
+ - [GLM-4.6 Supports Reasoning and Interleaved Thinking](blog/en/glm-4.6-supports-reasoning.md)
578
+
579
+ ## ❤️ Support & Sponsoring
580
+
581
+ If you find this project helpful, please consider sponsoring its development. Your support is greatly appreciated!
582
+
583
+ [![ko-fi](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/F1F31GN2GM)
584
+
585
+ [Paypal](https://paypal.me/musistudio1999)
586
+
587
+ <table>
588
+ <tr>
589
+ <td><img src="/blog/images/alipay.jpg" width="200" alt="Alipay" /></td>
590
+ <td><img src="/blog/images/wechat.jpg" width="200" alt="WeChat Pay" /></td>
591
+ </tr>
592
+ </table>
593
+
594
+ ### Our Sponsors
595
+
596
+ A huge thank you to all our sponsors for their generous support!
597
+
598
+
599
+ - [AIHubmix](https://aihubmix.com/)
600
+ - [BurnCloud](https://ai.burncloud.com)
601
+ - [302.AI](https://share.302.ai/ZGVF9w)
602
+ - [Z智谱](https://www.bigmodel.cn/claude-code?ic=FPF9IVAGFJ)
603
+ - @Simon Leischnig
604
+ - [@duanshuaimin](https://github.com/duanshuaimin)
605
+ - [@vrgitadmin](https://github.com/vrgitadmin)
606
+ - @\*o
607
+ - [@ceilwoo](https://github.com/ceilwoo)
608
+ - @\*说
609
+ - @\*更
610
+ - @K\*g
611
+ - @R\*R
612
+ - [@bobleer](https://github.com/bobleer)
613
+ - @\*苗
614
+ - @\*划
615
+ - [@Clarence-pan](https://github.com/Clarence-pan)
616
+ - [@carter003](https://github.com/carter003)
617
+ - @S\*r
618
+ - @\*晖
619
+ - @\*敏
620
+ - @Z\*z
621
+ - @\*然
622
+ - [@cluic](https://github.com/cluic)
623
+ - @\*苗
624
+ - [@PromptExpert](https://github.com/PromptExpert)
625
+ - @\*应
626
+ - [@yusnake](https://github.com/yusnake)
627
+ - @\*飞
628
+ - @董\*
629
+ - @\*汀
630
+ - @\*涯
631
+ - @\*:-)
632
+ - @\*\*磊
633
+ - @\*琢
634
+ - @\*成
635
+ - @Z\*o
636
+ - @\*琨
637
+ - [@congzhangzh](https://github.com/congzhangzh)
638
+ - @\*\_
639
+ - @Z\*m
640
+ - @*鑫
641
+ - @c\*y
642
+ - @\*昕
643
+ - [@witsice](https://github.com/witsice)
644
+ - @b\*g
645
+ - @\*亿
646
+ - @\*辉
647
+ - @JACK
648
+ - @\*光
649
+ - @W\*l
650
+ - [@kesku](https://github.com/kesku)
651
+ - [@biguncle](https://github.com/biguncle)
652
+ - @二吉吉
653
+ - @a\*g
654
+ - @\*林
655
+ - @\*咸
656
+ - @\*明
657
+ - @S\*y
658
+ - @f\*o
659
+ - @\*智
660
+ - @F\*t
661
+ - @r\*c
662
+ - [@qierkang](http://github.com/qierkang)
663
+ - @\*军
664
+ - [@snrise-z](http://github.com/snrise-z)
665
+ - @\*王
666
+ - [@greatheart1000](http://github.com/greatheart1000)
667
+ - @\*王
668
+ - @zcutlip
669
+ - [@Peng-YM](http://github.com/Peng-YM)
670
+ - @\*更
671
+ - @\*.
672
+ - @F\*t
673
+ - @\*政
674
+ - @\*铭
675
+ - @\*叶
676
+ - @七\*o
677
+ - @\*青
678
+ - @\*\*晨
679
+ - @\*远
680
+ - @\*霄
681
+ - @\*\*吉
682
+ - @\*\*飞
683
+ - @\*\*驰
684
+ - @x\*g
685
+ - @\*\*东
686
+ - @\*落
687
+ - @哆\*k
688
+ - @\*涛
689
+ - [@苗大](https://github.com/WitMiao)
690
+ - @\*呢
691
+ - @d\*u
692
+ - @crizcraig
693
+ - s\*s
694
+ - \*火
695
+ - \*勤
696
+ - \*\*锟
697
+ - \*涛
698
+ - \*\*明
699
+ - \*知
700
+ - \*语
701
+ - \*瓜
702
+
703
+
704
+ (If your name is masked, please contact me via my homepage email to update it with your GitHub username.)