@lobehub/chat 0.147.8 → 0.147.9

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -2,6 +2,31 @@
 
  # Changelog
 
+ ### [Version 0.147.9](https://github.com/lobehub/lobe-chat/compare/v0.147.8...v0.147.9)
+
+ <sup>Released on **2024-04-12**</sup>
+
+ #### 🐛 Bug Fixes
+
+ - **misc**: Fix custom model list not displaying correctly.
+
+ <br/>
+
+ <details>
+ <summary><kbd>Improvements and Fixes</kbd></summary>
+
+ #### What's fixed
+
+ - **misc**: Fix custom model list not displaying correctly, closes [#2009](https://github.com/lobehub/lobe-chat/issues/2009) ([7d0e220](https://github.com/lobehub/lobe-chat/commit/7d0e220))
+
+ </details>
+
+ <div align="right">
+
+ [![](https://img.shields.io/badge/-BACK_TO_TOP-151515?style=flat-square)](#readme-top)
+
+ </div>
+
  ### [Version 0.147.8](https://github.com/lobehub/lobe-chat/compare/v0.147.7...v0.147.8)
 
  <sup>Released on **2024-04-12**</sup>
@@ -0,0 +1,48 @@
+ ---
+ title: Customizing Provider Model List in LobeChat for Deployment
+ description: >-
+   Learn how to customize the model list in LobeChat for deployment with the
+   syntax and extension capabilities
+ tags:
+   - LobeChat
+   - model customization
+   - deployment
+   - extension capabilities
+ ---
+
+ # Model List
+
+ LobeChat supports customizing the model list during deployment. You can use `+` to add a model, `-` to hide a model, and `model name=display name<extension configuration>` to customize the display name of a model, separated by commas. The basic syntax is as follows:
+
+ ```shell
+ id=displayName<maxToken:vision:fc:file>,model2,model3
+ ```
+
+ For example: `+qwen-7b-chat,+glm-6b,-gpt-3.5-turbo,gpt-4-0125-preview=gpt-4-turbo`
+
+ The example above adds `qwen-7b-chat` and `glm-6b` to the model list, removes `gpt-3.5-turbo` from the list, and displays `gpt-4-0125-preview` under the name `gpt-4-turbo`. If you want to disable all models first and then enable specific ones, you can use `-all,+gpt-3.5-turbo`, which enables only `gpt-3.5-turbo`.
+
+ ## Extension Capabilities
+
+ Considering the diversity of model capabilities, we began adding extension configuration in version `0.147.8`, with the following rules:
+
+ ```shell
+ id=displayName<maxToken:vision:fc:file>
+ ```
+
+ The first value in angle brackets is designated as the `maxToken` for this model. The second value and beyond are the model's extension capabilities, separated by colons `:`; the order is not important.
+
+ Examples are as follows:
+
+ - `chatglm-6b=ChatGLM 6B<4096>`: ChatGLM 6B, maximum context of 4k, no advanced capabilities;
+ - `spark-v3.5=讯飞星火 v3.5<8192:fc>`: Xunfei Spark 3.5 model, maximum context of 8k, supports Function Call;
+ - `gemini-pro-vision=Gemini Pro Vision<16000:vision>`: Google Vision model, maximum context of 16k, supports image recognition;
+ - `gpt-4-all=ChatGPT Plus<128000:fc:vision:file>`: hacked version of the ChatGPT Plus web client, context of 128k, supports image recognition, Function Call, and file upload.
+
+ Currently supported extension capabilities are:
+
+ | Capability | Description |
+ | ---------- | -------------------------------------------------------- |
+ | `fc`       | Function Calling |
+ | `vision`   | Image Recognition |
+ | `file`     | File Upload (a bit hacky, not recommended for daily use) |
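The add/hide/rename and `<maxToken:capability>` rules above can be made concrete with a small parser. This is a hypothetical TypeScript sketch for illustration only — the interface and function names are assumptions, not LobeChat's actual implementation:

```typescript
// Sketch of a parser for the model-list syntax described above.
// Hypothetical: names and structure are assumptions, not LobeChat's actual code.
interface ModelEntry {
  id: string;
  displayName: string;
  maxToken?: number;
  capabilities: string[]; // e.g. ["fc", "vision", "file"]
}

interface ParsedList {
  removeAll: boolean; // set by the special `-all` entry
  removed: string[];
  added: ModelEntry[];
}

function parseModelList(input: string): ParsedList {
  const result: ParsedList = { removeAll: false, removed: [], added: [] };
  for (const raw of input.split(',').map((s) => s.trim()).filter(Boolean)) {
    if (raw === '-all') { result.removeAll = true; continue; }
    if (raw.startsWith('-')) { result.removed.push(raw.slice(1)); continue; }
    const item = raw.startsWith('+') ? raw.slice(1) : raw;
    // id, optional =displayName, optional <maxToken:cap:cap> extension block
    const match = item.match(/^([^=<]+)(?:=([^<]+))?(?:<(.+)>)?$/);
    if (!match) continue;
    const [, id, display, ext] = match;
    const entry: ModelEntry = { id, displayName: display ?? id, capabilities: [] };
    if (ext) {
      // first value is maxToken, the rest are capability flags
      const [maxToken, ...caps] = ext.split(':');
      entry.maxToken = Number(maxToken);
      entry.capabilities = caps;
    }
    result.added.push(entry);
  }
  return result;
}
```

Under this sketch, `parseModelList("-all,+gpt-3.5-turbo")` yields `removeAll: true` with a single added entry, matching the behavior described above.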
@@ -0,0 +1,46 @@
+ ---
+ title: LobeChat 自定义模型服务商模型列表及扩展能力配置
+ description: 了解如何在 LobeChat 中自定义模型列表以及扩展能力配置的基本语法和规则。
+ tags:
+   - LobeChat
+   - 自定义模型列表
+   - 扩展能力配置
+   - 模型展示名
+   - 模型能力
+ ---
+ # Model List
+
+ LobeChat 支持在部署时自定义模型列表,可以使用 `+` 增加一个模型,使用 `-` 来隐藏一个模型,使用 `模型名=展示名<扩展配置>` 来自定义模型的展示名,用英文逗号隔开。通过 `<>` 来添加扩展配置。基本语法如下:
+
+ ```shell
+ id=displayName<maxToken:vision:fc:file>,model2,model3
+ ```
+
+ 例如: `+qwen-7b-chat,+glm-6b,-gpt-3.5-turbo,gpt-4-0125-preview=gpt-4-turbo`
+
+ 上面示例表示增加 `qwen-7b-chat` 和 `glm-6b` 到模型列表,而从列表中删除 `gpt-3.5-turbo`,并将 `gpt-4-0125-preview` 模型名字展示为 `gpt-4-turbo`。如果你想先禁用所有模型,再启用指定模型,可以使用 `-all,+gpt-3.5-turbo`,则表示仅启用 `gpt-3.5-turbo`。
+
+ ## 扩展能力
+
+ 考虑到模型的能力多样性,我们在 `0.147.8` 版本开始增加扩展性配置,它的规则如下:
+
+ ```shell
+ id=displayName<maxToken:vision:fc:file>
+ ```
+
+ 尖括号第一个值约定为这个模型的 `maxToken`。第二个及以后作为模型的扩展能力,能力与能力之间用冒号 `:` 作为分隔符,顺序不重要。
+
+ 举例如下:
+
+ - `chatglm-6b=ChatGLM 6B<4096>`:ChatGLM 6B,最大上下文 4k,没有高阶能力;
+ - `spark-v3.5=讯飞星火 v3.5<8192:fc>`:讯飞星火 3.5 模型,最大上下文 8k,支持 Function Call;
+ - `gemini-pro-vision=Gemini Pro Vision<16000:vision>`:Google 视觉模型,最大上下文 16k,支持图像识别;
+ - `gpt-4-all=ChatGPT Plus<128000:fc:vision:file>`:hack 的 ChatGPT Plus 网页版,上下文 128k,支持图像识别、Function Call、文件上传。
+
+ 目前支持的扩展能力有:
+
+ | 能力 | 描述 |
+ | -------- | ------------------------------------ |
+ | `fc` | 函数调用(function calling) |
+ | `vision` | 视觉识别 |
+ | `file` | 文件上传(比较 hack,不建议日常使用) |
@@ -27,6 +27,17 @@ LobeChat provides some additional configuration options during deployment, which
  - Default: -
  - Example: `awCTe)re_r74` or `rtrt_ewee3@09!`
 
+ ### `API_KEY_SELECT_MODE`
+
+ - Type: Optional
+ - Description: Controls how an API Key is selected when multiple API Keys are configured. Currently supports `random` and `turn`.
+ - Default: `random`
+ - Example: `random` or `turn`
+
+ In `random` mode, an API Key is selected at random from the configured API Keys.
+
+ In `turn` mode, the API Keys are retrieved in round-robin order, following the order in which they were provided.
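The two modes can be sketched as a tiny selector factory (illustrative TypeScript only; LobeChat's internal implementation may differ):

```typescript
// Sketch of the `random` and `turn` API Key selection modes described above.
// Illustrative only — not LobeChat's actual code.
type SelectMode = "random" | "turn";

function createKeySelector(keys: string[], mode: SelectMode): () => string {
  let cursor = 0; // round-robin position, used only by "turn" mode
  return (): string => {
    if (mode === "random") {
      // pick any configured key with uniform probability
      return keys[Math.floor(Math.random() * keys.length)];
    }
    // "turn": walk through the keys in the order they were provided
    const key = keys[cursor];
    cursor = (cursor + 1) % keys.length;
    return key;
  };
}
```

For example, a selector created with `["k1", "k2"]` in `turn` mode hands out `k1`, `k2`, `k1`, … on successive calls.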
+
  ### `ENABLE_OAUTH_SSO`
 
  - Type: Optional
@@ -23,6 +23,17 @@ LobeChat 在部署时提供了一些额外的配置项,你可以使用环境
  - 默认值:-
  - 示例:`awCTe)re_r74` 或 `rtrt_ewee3@09!`
 
+ ### `API_KEY_SELECT_MODE`
+
+ - 类型:可选
+ - 描述:用于控制多个 API Keys 时,选择 Key 的模式,当前支持 `random` 和 `turn`
+ - 默认值:`random`
+ - 示例:`random` 或 `turn`
+
+ 使用 `random` 模式时,将在多个 API Keys 中随机获取一个 API Key。
+
+ 使用 `turn` 模式时,将按照填写的顺序轮询获取 API Key。
+
  ### `ENABLE_OAUTH_SSO`
 
  - 类型:可选
@@ -18,7 +18,7 @@ When deploying LobeChat, a rich set of environment variables related to model se
  ### `OPENAI_API_KEY`
 
  - Type: Required
- - Description: This is the API key you applied for on the OpenAI account page, you can check it out [here](openai-api-page)
+ - Description: This is the API key you applied for on the OpenAI account page; you can check it out [here](https://platform.openai.com/api-keys)
  - Default: -
  - Example: `sk-xxxxxx...xxxxxx`
 
@@ -51,7 +51,7 @@ Related discussions:
  ### `OPENAI_MODEL_LIST`
 
  - Type: Optional
- - Description: Used to control the model list, use `+` to add a model, use `-` to hide a model, use `model_name=display_name` to customize the display name of a model, separated by commas.
+ - Description: Used to control the model list, use `+` to add a model, use `-` to hide a model, use `model_name=display_name` to customize the display name of a model, separated by commas. See [model-list][model-list] for the definition syntax
  - Default: `-`
  - Example: `+qwen-7b-chat,+glm-6b,-gpt-3.5-turbo,gpt-4-0125-preview=gpt-4-turbo`
 
@@ -61,57 +61,35 @@ You can find all current model names in [modelProviders](https://github.com/lobe
 
  ## Azure OpenAI
 
- If you need to use Azure OpenAI to provide model services, you can refer to the [Deploying with Azure OpenAI](../Deployment/Deploy-with-Azure-OpenAI.en-US.md) section for detailed steps. Here, we will list the environment variables related to Azure OpenAI.
-
- ### `API_KEY_SELECT_MODE`
-
- - Type:Optional
- - Description:Controls the mode for selecting the API Key when multiple API Keys are available. Currently supports `random` and `turn`.
- - Default:`random`
- - Example:`random` or `turn`
-
- When using the `random` mode, a random API Key will be selected from the available multiple API Keys.
-
- When using the `turn` mode, the API Keys will be retrieved in a round-robin manner according to the specified order.
-
- ### `USE_AZURE_OPENAI`
-
- - Type: Optional
- - Description: Set this value to `1` to enable Azure OpenAI configuration
- - Default: -
- - Example: `1`
+ If you need to use Azure OpenAI to provide model services, you can refer to the [Deploying with Azure OpenAI](/docs/self-hosting/examples/azure-openai) section for detailed steps. Here, we will list the environment variables related to Azure OpenAI.
 
  ### `AZURE_API_KEY`
 
- - Type: Optional
+ - Type: Required
  - Description: This is the API key you applied for on the Azure OpenAI account page
  - Default: -
  - Example: `c55168be3874490ef0565d9779ecd5a6`
 
- ### `AZURE_API_VERSION`
+ ### `AZURE_ENDPOINT`
 
- - Type: Optional
- - Description: The API version of Azure, following the format YYYY-MM-DD
- - Default: `2023-08-01-preview`
- - Example: `2023-05-15`, refer to [latest version](https://docs.microsoft.com/zh-cn/azure/developer/javascript/api-reference/es-modules/azure-sdk/ai-translation/translationconfiguration?view=azure-node-latest#api-version)
-
- ## ZHIPU AI
+ - Type: Required
+ - Description: The endpoint of Azure OpenAI; you can find it on the Azure OpenAI account page
+ - Default: `-`
+ - Example: `https://docs-test-001.openai.azure.com`
 
- ### `ZHIPU_API_KEY`
+ ### `AZURE_API_VERSION`
 
  - Type: Required
- - Description: This is the API key you applied for in the ZHIPU AI service
- - Default: -
- - Example: `4582d332441a313f5c2ed9824d1798ca.rC8EcTAhgbOuAuVT`
+ - Description: The API version of Azure, following the format YYYY-MM-DD
+ - Default: `-`
+ - Example: `2023-02-01`, refer to [latest version](https://docs.microsoft.com/zh-cn/azure/developer/javascript/api-reference/es-modules/azure-sdk/ai-translation/translationconfiguration?view=azure-node-latest#api-version)
 
- ## Moonshot AI
+ ### `AZURE_MODEL_LIST`
 
- ### `MOONSHOT_API_KEY`
-
- - Type: Required
- - Description: This is the API key you applied for in the Moonshot AI service
- - Default: -
- - Example: `Y2xpdGhpMzNhZXNoYjVtdnZjMWc6bXNrLWIxQlk3aDNPaXpBWnc0V1RaMDhSRmRFVlpZUWY=`
+ - Type: Optional
+ - Description: Used to control the model list, use `+` to add a model, use `-` to hide a model, use `id->deploymentName=displayName` to customize the display name of a model, separated by commas. See [model-list][model-list] for the definition syntax
+ - Default: `-`
+ - Example: `gpt-35-turbo->my-deploy=GPT 3.5 Turbo` or `gpt-4-turbo->my-gpt4=GPT 4 Turbo<128000:vision:fc>`
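The `id->deploymentName=displayName` mapping can be illustrated with a small helper (a hypothetical sketch; `AzureModelEntry` and `parseAzureEntry` are assumed names, not LobeChat's actual code):

```typescript
// Sketch of parsing one Azure entry of the form
// `id->deploymentName=displayName<maxToken:cap:...>`.
// Hypothetical helper for illustration only.
interface AzureModelEntry {
  id: string;             // the canonical model id, e.g. "gpt-35-turbo"
  deploymentName: string; // your Azure deployment name
  displayName: string;    // the name shown in the UI
}

function parseAzureEntry(raw: string): AzureModelEntry | null {
  // Strip the optional <...> extension block first.
  const base = raw.replace(/<[^>]*>$/, "");
  const match = base.match(/^(.+?)->(.+?)=(.+)$/);
  if (!match) return null;
  const [, id, deploymentName, displayName] = match;
  return { id, deploymentName, displayName };
}
```

With this sketch, `gpt-4-turbo->my-gpt4=GPT 4 Turbo<128000:vision:fc>` splits into the id `gpt-4-turbo`, the deployment `my-gpt4`, and the display name `GPT 4 Turbo`.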
 
  ## Google AI
 
@@ -125,10 +103,26 @@ When using the `turn` mode, the API Keys will be retrieved in a round-robin mann
  ### `GOOGLE_PROXY_URL`
 
  - Type: Optional
- - Description: If you manually configure the Google interface proxy, you can use this configuration item to override the default Google API request base URL
+ - Description: If you manually configure the Google API proxy, you can use this configuration item to override the default Google API request base URL
  - Default: `https://generativelanguage.googleapis.com`
  - Example: `https://api.genai.gd.edu.kg/google`
 
+ ## Anthropic AI
+
+ ### `ANTHROPIC_API_KEY`
+
+ - Type: Required
+ - Description: This is the API key you obtained from Anthropic AI
+ - Default: -
+ - Example: `sk-ant-apixx-xxxxxxxxx-xxxxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxx`
+
+ ### `ANTHROPIC_PROXY_URL`
+
+ - Type: Optional
+ - Description: If you manually configure the Anthropic API proxy, you can use this configuration item to override the default Anthropic API request base URL
+ - Default: `https://api.anthropic.com`
+ - Example: `https://my-anthropic-proxy.com`
+
  ## AWS Bedrock
 
  ### `AWS_ACCESS_KEY_ID`
@@ -152,11 +146,43 @@ When using the `turn` mode, the API Keys will be retrieved in a round-robin mann
  - Default: `us-east-1`
  - Example: `us-east-1`
 
+ ## OpenRouter AI
+
+ ### `OPENROUTER_API_KEY`
+
+ - Type: Required
+ - Description: This is the API key you applied for in the OpenRouter AI service
+ - Default: -
+ - Example: `sk-or-v1-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx=`
+
+ ### `OPENROUTER_MODEL_LIST`
+
+ - Type: Optional
+ - Description: Used to specify a custom OpenRouter model list. See [model-list][model-list] for the model definition syntax
+ - Default: `-`
+ - Example: `-all,+01-ai/yi-34b-chat,+huggingfaceh4/zephyr-7b-beta`
+
+ ## TogetherAI
+
+ ### `TOGETHERAI_API_KEY`
+
+ - Type: Required
+ - Description: This is the API key you applied for in the TogetherAI service
+ - Default: -
+ - Example: `xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`
+
+ ### `TOGETHERAI_MODEL_LIST`
+
+ - Type: Optional
+ - Description: Used to specify a custom TogetherAI model list. See [model-list][model-list] for the model definition syntax
+ - Default: `-`
+ - Example: `01-ai/yi-34b-chat`
+
  ## Ollama
 
  ### `OLLAMA_PROXY_URL`
 
- - Type: Optional
+ - Type: Required
  - Description: Used to enable the Ollama service, setting this will display optional open-source language models in the language model list and can also specify custom language models
  - Default: -
  - Example: `http://127.0.0.1:11434/v1`
@@ -164,27 +190,27 @@ When using the `turn` mode, the API Keys will be retrieved in a round-robin mann
  ### `OLLAMA_MODEL_LIST`
 
  - Type: Optional
- - Description: Used to customize your own language models
+ - Description: Used to specify a custom Ollama language model. See [model-list][model-list] for the model definition syntax
  - Default: -
  - Example: `llama2:7B`
 
- ## Perplexity AI
+ ## Moonshot AI
 
- ### `PERPLEXITY_API_KEY`
+ ### `MOONSHOT_API_KEY`
 
  - Type: Required
- - Description: This is the API key you applied from Perplexity AI
+ - Description: This is the API key you applied for in the Moonshot AI service
  - Default: -
- - Example: `pplx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`
+ - Example: `Y2xpdGhpMzNhZXNoYjVtdnZjMWc6bXNrLWIxQlk3aDNPaXpBWnc0V1RaMDhSRmRFVlpZUWY=`
 
- ## Anthropic AI
+ ## Perplexity AI
 
- ### `ANTHROPIC_API_KEY`
+ ### `PERPLEXITY_API_KEY`
 
  - Type: Required
- - Description: This is the API key you applied from Anthropic AI
+ - Description: This is the API key you obtained from Perplexity AI
  - Default: -
- - Example: `sk-ant-apixx-xxxxxxxxx-xxxxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxx`
+ - Example: `pplx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`
 
  ## Mistral AI
 
@@ -204,21 +230,14 @@ When using the `turn` mode, the API Keys will be retrieved in a round-robin mann
  - Default: -
  - Example: `gsk_xxxxxxxxxxxxxxxxxxxxxxxxxxxxx`
 
- ## OpenRouter AI
+ ## ZHIPU AI
 
- ### `OPENROUTER_API_KEY`
+ ### `ZHIPU_API_KEY`
 
  - Type: Required
- - Description: This is the API key you applied for in the OpenRouter AI service
+ - Description: This is the API key you applied for in the ZHIPU AI service
  - Default: -
- - Example: `sk-or-v1-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx=`
-
- ### `OPENROUTER_MODEL_LIST`
-
- - Type: Optional
- - Description: Used to control the model list, use `+` to add a model, use `-` to hide a model, use `model_name=display_name` to customize the display name of a model, separated by commas.
- - Default: `-`
- - Example: `-all,+01-ai/yi-34b-chat,+huggingfaceh4/zephyr-7b-beta`
+ - Example: `4582d332441a313f5c2ed9824d1798ca.rC8EcTAhgbOuAuVT`
 
  ## 01.AI
 
@@ -229,11 +248,4 @@ When using the `turn` mode, the API Keys will be retrieved in a round-robin mann
  - Default: -
  - Example: `xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`
 
- ## TogetherAI
-
- ### `TOGETHERAI_API_KEY`
-
- - Type: Required
- - Description: This is the API key you applied for in the TogetherAI service
- - Default: -
- - Example: `xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`
+ [model-list]: /docs/self-hosting/advanced/model-list
@@ -18,7 +18,7 @@ LobeChat 在部署时提供了丰富的模型服务商相关的环境变量,
  ### `OPENAI_API_KEY`
 
  - 类型:必选
- - 描述:这是你在 OpenAI 账户页面申请的 API 密钥,可以前往 \[这里]\[openai-api-page] 查看
+ - 描述:这是你在 OpenAI 账户页面申请的 API 密钥,可以前往 [这里](https://platform.openai.com/api-keys) 查看
  - 默认值:-
  - 示例:`sk-xxxxxx...xxxxxx`
 
@@ -49,7 +49,7 @@ LobeChat 在部署时提供了丰富的模型服务商相关的环境变量,
  ### `OPENAI_MODEL_LIST`
 
  - 类型:可选
- - 描述:用来控制模型列表,使用 `+` 增加一个模型,使用 `-` 来隐藏一个模型,使用 `模型名=展示名` 来自定义模型的展示名,用英文逗号隔开。
+ - 描述:用来控制模型列表,使用 `+` 增加一个模型,使用 `-` 来隐藏一个模型,使用 `模型名=展示名<扩展配置>` 来自定义模型的展示名,用英文逗号隔开。模型定义语法规则见 [模型列表][model-list]
  - 默认值:`-`
  - 示例:`+qwen-7b-chat,+glm-6b,-gpt-3.5-turbo,gpt-4-0125-preview=gpt-4-turbo`
 
@@ -57,59 +57,37 @@ LobeChat 在部署时提供了丰富的模型服务商相关的环境变量,
 
  你可以在 [modelProviders](https://github.com/lobehub/lobe-chat/tree/main/src/config/modelProviders) 查找到当前的所有模型名。
 
- ### `API_KEY_SELECT_MODE`
-
- - 类型:可选
- - 描述:用于控制多个 API Keys 时,选择 Key 的模式,当前支持 `random` 和 `turn`
- - 默认值:`random`
- - 示例:`random` 或 `turn`
-
- 使用 `random` 模式下,将在多个 API Keys 中随机获取一个 API Key。
-
- 使用 `turn` 模式下,将按照填写的顺序,轮训获取得到 API Key。
-
  ## Azure OpenAI
 
- 如果你需要使用 Azure OpenAI 来提供模型服务,可以查阅 [使用 Azure OpenAI 部署](../Deployment/Deploy-with-Azure-OpenAI.zh-CN.md) 章节查看详细步骤,这里将列举和 Azure OpenAI 相关的环境变量。
-
- ### `USE_AZURE_OPENAI`
-
- - 类型:可选
- - 描述:设置该值为 `1` 开启 Azure OpenAI 配置
- - 默认值:-
- - 示例:`1`
+ 如果你需要使用 Azure OpenAI 来提供模型服务,可以查阅 [使用 Azure OpenAI 部署](/docs/self-hosting/examples/azure-openai) 章节查看详细步骤,这里将列举和 Azure OpenAI 相关的环境变量。
 
  ### `AZURE_API_KEY`
 
- - 类型:可选
+ - 类型:必选
  - 描述:这是你在 Azure OpenAI 账户页面申请的 API 密钥
  - 默认值:-
  - 示例:`c55168be3874490ef0565d9779ecd5a6`
 
- ### `AZURE_API_VERSION`
-
- - 类型:可选
- - 描述:Azure 的 API 版本,遵循 YYYY-MM-DD 格式
- - 默认值:`2023-08-01-preview`
- - 示例:`2023-05-15`,查阅 [最新版本][azure-api-verion-url]
+ ### `AZURE_ENDPOINT`
 
- ## 智谱 AI
+ - 类型:必选
+ - 描述:Azure API 地址,从 Azure 门户检查资源时,可在 “密钥和终结点” 部分中找到此值
+ - 默认值:`-`
+ - 示例:`https://docs-test-001.openai.azure.com`
 
- ### `ZHIPU_API_KEY`
+ ### `AZURE_API_VERSION`
 
  - 类型:必选
- - 描述:这是你在 智谱 AI 服务中申请的 API 密钥
- - 默认值:-
- - 示例:`4582d332441a313f5c2ed9824d1798ca.rC8EcTAhgbOuAuVT`
-
- ## Moonshot AI
+ - 描述:Azure API 版本,遵循 YYYY-MM-DD 格式
+ - 默认值:`-`
+ - 示例:`2024-02-01`,查阅 [最新版本](https://docs.microsoft.com/zh-cn/azure/developer/javascript/api-reference/es-modules/azure-sdk/ai-translation/translationconfiguration?view=azure-node-latest#api-version)
 
- ### `MOONSHOT_API_KEY`
+ ### `AZURE_MODEL_LIST`
 
  - 类型:必选
- - 描述:这是你在 Moonshot AI 服务中申请的 API 密钥
- - 默认值:-
- - 示例:`Y2xpdGhpMzNhZXNoYjVtdnZjMWc6bXNrLWIxQlk3aDNPaXpBWnc0V1RaMDhSRmRFVlpZUWY=`
+ - 描述:用来控制模型列表,使用 `模型名->部署名=展示名` 来自定义模型的展示名,用英文逗号隔开。支持扩展能力,其余语法规则详见 [模型列表][model-list]
+ - 默认值:`-`
+ - 示例:`gpt-35-turbo->my-deploy=GPT 3.5 Turbo` 或 `gpt-4-turbo->my-gpt4=GPT 4 Turbo<128000:vision:fc>`
 
  ## Google AI
 
@@ -127,6 +105,22 @@ LobeChat 在部署时提供了丰富的模型服务商相关的环境变量,
  - 默认值:`https://generativelanguage.googleapis.com`
  - 示例:`https://api.genai.gd.edu.kg/google`
 
+ ## Anthropic AI
+
+ ### `ANTHROPIC_API_KEY`
+
+ - 类型:必选
+ - 描述:这是你在 Anthropic AI 服务中申请的 API 密钥
+ - 默认值:-
+ - 示例:`sk-ant-apixx-xxxxxxxxx-xxxxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxx`
+
+ ### `ANTHROPIC_PROXY_URL`
+
+ - 类型:可选
+ - 描述:如果你手动配置了 Anthropic 接口代理,可以使用此配置项来覆盖默认的 Anthropic API 请求基础 URL
+ - 默认值:`https://api.anthropic.com`
+ - 示例:`https://my-anthropic-proxy.com`
+
  ## AWS Bedrock
 
  ### `AWS_ACCESS_KEY_ID`
@@ -150,11 +144,43 @@ LobeChat 在部署时提供了丰富的模型服务商相关的环境变量,
  - 默认值:`us-east-1`
  - 示例:`us-east-1`
 
+ ## OpenRouter AI
+
+ ### `OPENROUTER_API_KEY`
+
+ - 类型:必选
+ - 描述:这是你在 OpenRouter AI 服务中申请的 API 密钥
+ - 默认值:-
+ - 示例:`sk-or-v1-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx=`
+
+ ### `OPENROUTER_MODEL_LIST`
+
+ - 类型:可选
+ - 描述:用于指定自定义 OpenRouter 模型列表。模型定义语法规则见 [模型列表][model-list]
+ - 默认值:`-`
+ - 示例:`-all,+01-ai/yi-34b-chat,+huggingfaceh4/zephyr-7b-beta`
+
+ ## TogetherAI
+
+ ### `TOGETHERAI_API_KEY`
+
+ - 类型:必选
+ - 描述:这是你在 TogetherAI 服务中申请的 API 密钥
+ - 默认值:-
+ - 示例:`xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`
+
+ ### `TOGETHERAI_MODEL_LIST`
+
+ - 类型:可选
+ - 描述:用于指定自定义 TogetherAI 的模型列表。模型定义语法规则见 [模型列表][model-list]
+ - 默认值:`-`
+ - 示例:`01-ai/yi-34b-chat`
+
  ## Ollama
 
  ### `OLLAMA_PROXY_URL`
 
- - 类型:可选
+ - 类型:必选
  - 描述:用于启用 Ollama 服务,设置后可在语言模型列表内展示可选开源语言模型,也可以指定自定义语言模型
  - 默认值:-
  - 示例:`http://127.0.0.1:11434/v1`
@@ -162,27 +188,27 @@ LobeChat 在部署时提供了丰富的模型服务商相关的环境变量,
  ### `OLLAMA_MODEL_LIST`
 
  - 类型:可选
- - 描述:用于指定自定义 Ollama 语言模型
+ - 描述:用于指定自定义 Ollama 语言模型。模型定义语法规则见 [模型列表][model-list]
  - 默认值:-
  - 示例:`qwen:32B`
 
- ## Perplexity AI
+ ## Moonshot AI
 
- ### `PERPLEXITY_API_KEY`
+ ### `MOONSHOT_API_KEY`
 
  - 类型:必选
- - 描述:这是你在 Perplexity AI 服务中申请的 API 密钥
+ - 描述:这是你在 Moonshot AI 服务中申请的 API 密钥
  - 默认值:-
- - 示例:`pplx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`
+ - 示例:`Y2xpdGhpMzNhZXNoYjVtdnZjMWc6bXNrLWIxQlk3aDNPaXpBWnc0V1RaMDhSRmRFVlpZUWY=`
 
- ## Anthropic AI
+ ## Perplexity AI
 
- ### `ANTHROPIC_API_KEY`
+ ### `PERPLEXITY_API_KEY`
 
  - 类型:必选
- - 描述:这是你在 Anthropic AI 服务中申请的 API 密钥
+ - 描述:这是你在 Perplexity AI 服务中申请的 API 密钥
  - 默认值:-
- - 示例:`sk-ant-apixx-xxxxxxxxx-xxxxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxx`
+ - 示例:`pplx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`
 
  ## Mistral AI
 
@@ -202,21 +228,14 @@ LobeChat 在部署时提供了丰富的模型服务商相关的环境变量,
  - 默认值:-
  - 示例:`gsk_xxxxxxxxxxxxxxxxxxxxxxxxxxxxx`
 
- ## OpenRouter AI
+ ## 智谱 AI
 
- ### `OPENROUTER_API_KEY`
+ ### `ZHIPU_API_KEY`
 
  - 类型:必选
- - 描述:这是你在 OpenRouter AI 服务中申请的 API 密钥
+ - 描述:这是你在 智谱 AI 服务中申请的 API 密钥
  - 默认值:-
- - 示例:`sk-or-v1-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx=`
-
- ### `OPENROUTER_MODEL_LIST`
-
- - 类型:可选
- - 描述:用来控制模型列表,使用 `+` 增加一个模型,使用 `-` 来隐藏一个模型,使用 `模型名=展示名` 来自定义模型的展示名,用英文逗号隔开。
- - 默认值:`-`
- - 示例:`-all,+01-ai/yi-34b-chat,+huggingfaceh4/zephyr-7b-beta`
+ - 示例:`4582d332441a313f5c2ed9824d1798ca.rC8EcTAhgbOuAuVT`
 
  ## 01 AI
 
@@ -227,13 +246,4 @@ LobeChat 在部署时提供了丰富的模型服务商相关的环境变量,
  - 默认值:-
  - 示例:`xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`
 
- ## TogetherAI
-
- ### `TOGETHERAI_API_KEY`
-
- - 类型:必选
- - 描述:这是你在 TogetherAI 服务中申请的 API 密钥
- - 默认值:-
- - 示例:`xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`
-
- [azure-api-verion-url]: https://docs.microsoft.com/zh-cn/azure/developer/javascript/api-reference/es-modules/azure-sdk/ai-translation/translationconfiguration?view=azure-node-latest#api-version
+ [model-list]: /docs/self-hosting/advanced/model-list
@@ -1,6 +1,8 @@
  ---
  title: Configuring Azure OpenAI for LobeChat
- description: Learn how to configure Azure OpenAI for LobeChat, including interface configuration, and deployment settings.
+ description: >-
+   Learn how to configure Azure OpenAI for LobeChat, including interface
+   configuration, and deployment settings.
  tags:
    - Azure OpenAI
    - LobeChat
@@ -27,6 +29,7 @@ You can fill in the corresponding configuration items as needed:
  - **API Key**: The API key you applied for on the Azure OpenAI account page, which can be found in the "Keys and Endpoints" section.
  - **API Address**: Azure API address, which can be found in the "Keys and Endpoints" section when checking resources in the Azure portal.
  - **Azure API Version**: The API version of Azure, following the format YYYY-MM-DD. Refer to the [latest version](https://learn.microsoft.com/zh-cn/azure/ai-services/openai/reference#chat-completions).
+ - **Model List**: Set the list of models you have deployed.
 
  After completing the configuration of the above fields, click "Check". If it prompts "Check passed", it means the configuration was successful.
 
@@ -38,5 +41,6 @@ If you want the deployed version to be pre-configured with Azure OpenAI for end
  | --- | --- | --- | --- | --- |
  | `AZURE_API_KEY` | Required | This is the API key you obtained from the Azure OpenAI account page | - | `c55168be3874490ef0565d9779ecd5a6` |
  | `AZURE_ENDPOINT` | Required | Azure API address, can be found in the "Keys and Endpoints" section when checking resources in the Azure portal | - | `https://docs-test-001.openai.azure.com` |
- | `AZURE_API_VERSION` | Optional | Azure API version, following the format YYYY-MM-DD | 2023-08-01-preview | `-`, see [latest version](https://learn.microsoft.com/zh-cn/azure/ai-services/openai/reference#chat-completions) |
+ | `AZURE_API_VERSION` | Required | Azure API version, following the format YYYY-MM-DD | 2023-08-01-preview | `-`, see [latest version](https://learn.microsoft.com/zh-cn/azure/ai-services/openai/reference#chat-completions) |
+ | `AZURE_MODEL_LIST` | Required | Used to control the model list, use `+` to add a model, use `-` to hide a model, use `id->deploymentName=displayName` to customize the display name of a model, separated by commas. See [Model List](/docs/self-hosting/advanced/model-list) for the definition syntax | - | `gpt-35-turbo->my-deploy=GPT 3.5 Turbo` or `gpt-4-turbo->my-gpt4=GPT 4 Turbo<128000:vision:fc>` |
  | `ACCESS_CODE` | Optional | Add a password to access LobeChat. You can set a long password to prevent brute force attacks. When this value is separated by commas, it becomes an array of passwords | - | `awCT74` or `e3@09!` or `code1,code2,code3` |
@@ -29,6 +29,7 @@ LobeChat 支持使用 [Azure OpenAI](https://learn.microsoft.com/zh-cn/azure/ai-
  - **APIKey**:你在 Azure OpenAI 账户页面申请的 API 密钥,可在 “密钥和终结点” 部分中找到此值
  - **API 地址**:Azure API 地址,从 Azure 门户检查资源时,可在 “密钥和终结点” 部分中找到此值
  - **Azure Api Version**: Azure 的 API 版本,遵循 YYYY-MM-DD 格式,查阅[最新版本](https://learn.microsoft.com/zh-cn/azure/ai-services/openai/reference#chat-completions)
+ - **模型列表**:用来控制模型列表,你可以自行配置你的部署模型。
 
  完成上述字段配置后,点击「检查」,如果提示「检查通过」,则说明配置成功。
 
@@ -42,5 +43,6 @@ LobeChat 支持使用 [Azure OpenAI](https://learn.microsoft.com/zh-cn/azure/ai-
  | --- | --- | --- | --- | --- |
  | `AZURE_API_KEY` | 必选 | 这是你在 Azure OpenAI 账户页面申请的 API 密钥 | - | `c55168be3874490ef0565d9779ecd5a6` |
  | `AZURE_ENDPOINT` | 必选 | Azure API 地址,从 Azure 门户检查资源时,可在 “密钥和终结点” 部分中找到此值 | - | `https://docs-test-001.openai.azure.com` |
- | `AZURE_API_VERSION` | 可选 | Azure 的 API 版本,遵循 YYYY-MM-DD 格式 | 2023-08-01-preview | `-`,查阅[最新版本](https://learn.microsoft.com/zh-cn/azure/ai-services/openai/reference#chat-completions) |
+ | `AZURE_API_VERSION` | 必选 | Azure 的 API 版本,遵循 YYYY-MM-DD 格式 | 2023-08-01-preview | `-`,查阅[最新版本](https://learn.microsoft.com/zh-cn/azure/ai-services/openai/reference#chat-completions) |
+ | `AZURE_MODEL_LIST` | 必选 | 用来控制模型列表,使用 `模型名->部署名=展示名` 来自定义模型的展示名,用英文逗号隔开。支持扩展能力,其余语法规则详见 [模型列表][model-list] | - | `gpt-35-turbo->my-deploy=GPT 3.5 Turbo` 或 `gpt-4-turbo->my-gpt4=GPT 4 Turbo<128000:vision:fc>` |
  | `ACCESS_CODE` | 可选 | 添加访问 LobeChat 的密码,你可以设置一个长密码以防被爆破,该值用逗号分隔时为密码数组 | - | `awCT74` 或 `e3@09!` 或 `code1,code2,code3` |
@@ -32,10 +32,8 @@ docker run -d -p 3210:3210 -e OLLAMA_PROXY_URL=http://host.docker.internal:11434
 
  Now, you can use LobeChat to converse with the local LLM.
 
-
  For more information on using Ollama in LobeChat, please refer to [Ollama Usage](/docs/usage/providers/ollama).
 
-
  ## Accessing Ollama from Non-Local Locations
 
  When you first initiate Ollama, it is configured to allow access only from the local machine. To enable access from other domains and set up port listening, you will need to adjust the environment variables `OLLAMA_ORIGINS` and `OLLAMA_HOST` accordingly.
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@lobehub/chat",
-   "version": "0.147.8",
+   "version": "0.147.9",
    "description": "Lobe Chat - an open-source, high-performance chatbot framework that supports speech synthesis, multimodal, and extensible Function Call plugin system. Supports one-click free deployment of your private ChatGPT/LLM web application.",
    "keywords": [
      "framework",
@@ -13,13 +13,10 @@ import CustomModelOption from './CustomModelOption';
 
  const OptionRender = memo<{ displayName: string; id: string; provider: GlobalLLMProviderKey }>(
    ({ displayName, id, provider }) => {
-     const model = useGlobalStore(
-       (s) => modelProviderSelectors.getDefaultModelCardById(id)(s),
-       isEqual,
-     );
+     const model = useGlobalStore((s) => modelProviderSelectors.getModelCardById(id)(s), isEqual);
 
-     // if there is no model, it means it is a user custom model
-     if (!model) return <CustomModelOption id={id} provider={provider} />;
+     // if `isCustom` is set, it means it is a user-defined custom model
+     if (model?.isCustom) return <CustomModelOption id={id} provider={provider} />;
 
      return (
        <Flexbox align={'center'} gap={8} horizontal>