@qwen-code/qwen-code 0.0.1-alpha.10 → 0.0.1-alpha.12

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (3)
  1. package/README.md +46 -9
  2. package/bundle/gemini.js +740 -117
  3. package/package.json +3 -3
package/README.md CHANGED
@@ -5,7 +5,7 @@
  Qwen Code is a command-line AI workflow tool adapted from [**Gemini CLI**](https://github.com/google-gemini/gemini-cli) (Please refer to [this document](./README.gemini.md) for more details), optimized for [Qwen3-Coder](https://github.com/QwenLM/Qwen3-Coder) models with enhanced parser support & tool support.

  > [!WARNING]
- > Qwen Code may issue multiple API calls per cycle, resulting in higher token usage, similar to Claude Code. We’re actively working to enhance API efficiency and improve the overall developer experience.
+ > Qwen Code may issue multiple API calls per cycle, resulting in higher token usage, similar to Claude Code. We’re actively working to enhance API efficiency and improve the overall developer experience. ModelScope offers 2,000 free API calls per day if you are in mainland China; see the [API configuration section](#api-configuration) for details.

  ## Key Features

@@ -26,7 +26,7 @@ curl -qL https://www.npmjs.com/install.sh | sh
  ### Installation

  ```bash
- npm install -g @qwen-code/qwen-code
+ npm install -g @qwen-code/qwen-code@latest
  qwen --version
  ```

@@ -45,22 +45,55 @@ npm install
  npm install -g .
  ```

+ We now support a maximum session token limit, which you can set in your `.qwen/settings.json` file to keep token usage down.
+ For example, to set the session token limit to 32000:
+
+ ```json
+ {
+   "sessionTokenLimit": 32000
+ }
+ ```
+
+ The session token limit is the maximum number of tokens that may be used in a single chat session (not the total usage across multiple tool calls). If you reach the limit, use the `/compress` command to compress the history and continue, or the `/clear` command to clear it.
+
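For concreteness, one way to create that settings file from a shell might look like this (a sketch, not part of the package README; it follows the relative `.qwen/` path used in the text above and overwrites any existing settings file):

```bash
# Sketch: write a session token limit into .qwen/settings.json
# (assumes the relative .qwen/ path mentioned above; a user-level
# ~/.qwen/settings.json may also be supported, but that is an assumption)
mkdir -p .qwen
cat > .qwen/settings.json <<'EOF'
{
  "sessionTokenLimit": 32000
}
EOF
```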
  ### API Configuration

  Set your Qwen API key (in a Qwen Code project, you can also put your API key in a `.env` file). The `.env` file should be placed in the root directory of your current project.
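For illustration only (this example is not part of the package README), such a `.env` file could simply mirror the export examples shown below, e.g. for the Bailian endpoint:

```bash
# .env in the project root (sketch; values mirror the Bailian example below)
OPENAI_API_KEY="your_api_key_here"
OPENAI_BASE_URL="https://dashscope.aliyuncs.com/compatible-mode/v1"
OPENAI_MODEL="qwen3-coder-plus"
```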

  > ⚠️ **Notice:** <br>
- > **If you are in mainland China, please go to https://bailian.console.aliyun.com/ to apply for your API key** <br>
+ > **If you are in mainland China, please go to https://bailian.console.aliyun.com/ or https://modelscope.cn/docs/model-service/API-Inference/intro to apply for your API key** <br>
  > **If you are not in mainland China, please go to https://modelstudio.console.alibabacloud.com/ to apply for your API key**

+ If you are in mainland China, you can use Qwen3-Coder through the Alibaba Cloud Bailian platform:
+
+ ```bash
+ export OPENAI_API_KEY="your_api_key_here"
+ export OPENAI_BASE_URL="https://dashscope.aliyuncs.com/compatible-mode/v1"
+ export OPENAI_MODEL="qwen3-coder-plus"
+ ```
+
+ Also in mainland China, ModelScope offers 2,000 free model inference API calls per day. Make sure you link your Aliyun account to ModelScope; otherwise requests may fail with errors such as `API Error: OpenAI API error`.
+
+ ```bash
+ export OPENAI_API_KEY="your_api_key_here"
+ export OPENAI_BASE_URL="https://api-inference.modelscope.cn/v1"
+ export OPENAI_MODEL="Qwen/Qwen3-Coder-480B-A35B-Instruct"
+ ```
+
+ If you are not in mainland China, you can use Qwen3-Coder through the Alibaba Cloud ModelStudio platform:
+
  ```bash
- # If you are in mainland China, use the following URL:
- # https://dashscope.aliyuncs.com/compatible-mode/v1
- # If you are not in mainland China, use the following URL:
- # https://dashscope-intl.aliyuncs.com/compatible-mode/v1
  export OPENAI_API_KEY="your_api_key_here"
- export OPENAI_BASE_URL="your_api_base_url_here"
- export OPENAI_MODEL="your_api_model_here"
+ export OPENAI_BASE_URL="https://dashscope-intl.aliyuncs.com/compatible-mode/v1"
+ export OPENAI_MODEL="qwen3-coder-plus"
+ ```
+
+ OpenRouter also provides free Qwen3-Coder model access:
+
+ ```bash
+ export OPENAI_API_KEY="your_api_key_here"
+ export OPENAI_BASE_URL="https://openrouter.ai/api/v1"
+ export OPENAI_MODEL="qwen/qwen3-coder:free"
  ```
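All of the base URLs above expose OpenAI-compatible endpoints, which is why the standard `OPENAI_*` variables are used. As an optional sanity check (not part of the package README), you could call the chat-completions endpoint directly with the exported values:

```bash
# Sketch: verify the exported configuration with a plain chat-completions request
curl -s "$OPENAI_BASE_URL/chat/completions" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d "{\"model\": \"$OPENAI_MODEL\", \"messages\": [{\"role\": \"user\", \"content\": \"hello\"}]}"
```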

  ## Usage Examples
@@ -148,3 +181,7 @@ This project is based on [Google Gemini CLI](https://github.com/google-gemini/ge
  ## License

  [LICENSE](./LICENSE)
+
+ ## Star History
+
+ [![Star History Chart](https://api.star-history.com/svg?repos=QwenLM/qwen-code&type=Date)](https://www.star-history.com/#QwenLM/qwen-code&Date)