@lobehub/chat 1.68.2 → 1.68.4
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +50 -0
- package/changelog/v1.json +18 -0
- package/docs/usage/providers/azureai.mdx +69 -0
- package/docs/usage/providers/azureai.zh-CN.mdx +69 -0
- package/docs/usage/providers/deepseek.mdx +3 -3
- package/docs/usage/providers/deepseek.zh-CN.mdx +5 -4
- package/docs/usage/providers/jina.mdx +51 -0
- package/docs/usage/providers/jina.zh-CN.mdx +51 -0
- package/docs/usage/providers/lmstudio.mdx +75 -0
- package/docs/usage/providers/lmstudio.zh-CN.mdx +75 -0
- package/docs/usage/providers/nvidia.mdx +55 -0
- package/docs/usage/providers/nvidia.zh-CN.mdx +55 -0
- package/docs/usage/providers/ppio.mdx +7 -7
- package/docs/usage/providers/ppio.zh-CN.mdx +6 -6
- package/docs/usage/providers/sambanova.mdx +50 -0
- package/docs/usage/providers/sambanova.zh-CN.mdx +50 -0
- package/docs/usage/providers/tencentcloud.mdx +49 -0
- package/docs/usage/providers/tencentcloud.zh-CN.mdx +49 -0
- package/docs/usage/providers/vertexai.mdx +59 -0
- package/docs/usage/providers/vertexai.zh-CN.mdx +59 -0
- package/docs/usage/providers/vllm.mdx +98 -0
- package/docs/usage/providers/vllm.zh-CN.mdx +98 -0
- package/docs/usage/providers/volcengine.mdx +47 -0
- package/docs/usage/providers/volcengine.zh-CN.mdx +48 -0
- package/locales/ar/chat.json +29 -0
- package/locales/ar/models.json +48 -0
- package/locales/ar/providers.json +3 -0
- package/locales/bg-BG/chat.json +29 -0
- package/locales/bg-BG/models.json +48 -0
- package/locales/bg-BG/providers.json +3 -0
- package/locales/de-DE/chat.json +29 -0
- package/locales/de-DE/models.json +48 -0
- package/locales/de-DE/providers.json +3 -0
- package/locales/en-US/chat.json +29 -0
- package/locales/en-US/models.json +48 -0
- package/locales/en-US/providers.json +3 -3
- package/locales/es-ES/chat.json +29 -0
- package/locales/es-ES/models.json +48 -0
- package/locales/es-ES/providers.json +3 -0
- package/locales/fa-IR/chat.json +29 -0
- package/locales/fa-IR/models.json +48 -0
- package/locales/fa-IR/providers.json +3 -0
- package/locales/fr-FR/chat.json +29 -0
- package/locales/fr-FR/models.json +48 -0
- package/locales/fr-FR/providers.json +3 -0
- package/locales/it-IT/chat.json +29 -0
- package/locales/it-IT/models.json +48 -0
- package/locales/it-IT/providers.json +3 -0
- package/locales/ja-JP/chat.json +29 -0
- package/locales/ja-JP/models.json +48 -0
- package/locales/ja-JP/providers.json +3 -0
- package/locales/ko-KR/chat.json +29 -0
- package/locales/ko-KR/models.json +48 -0
- package/locales/ko-KR/providers.json +3 -0
- package/locales/nl-NL/chat.json +29 -0
- package/locales/nl-NL/models.json +48 -0
- package/locales/nl-NL/providers.json +3 -0
- package/locales/pl-PL/chat.json +29 -0
- package/locales/pl-PL/models.json +48 -0
- package/locales/pl-PL/providers.json +3 -0
- package/locales/pt-BR/chat.json +29 -0
- package/locales/pt-BR/models.json +48 -0
- package/locales/pt-BR/providers.json +3 -0
- package/locales/ru-RU/chat.json +29 -0
- package/locales/ru-RU/models.json +48 -0
- package/locales/ru-RU/providers.json +3 -0
- package/locales/tr-TR/chat.json +29 -0
- package/locales/tr-TR/models.json +48 -0
- package/locales/tr-TR/providers.json +3 -0
- package/locales/vi-VN/chat.json +29 -0
- package/locales/vi-VN/models.json +48 -0
- package/locales/vi-VN/providers.json +3 -0
- package/locales/zh-CN/chat.json +29 -0
- package/locales/zh-CN/models.json +51 -3
- package/locales/zh-CN/providers.json +3 -4
- package/locales/zh-TW/chat.json +29 -0
- package/locales/zh-TW/models.json +48 -0
- package/locales/zh-TW/providers.json +3 -0
- package/package.json +1 -1
- package/packages/web-crawler/src/crawImpl/__test__/jina.test.ts +169 -0
- package/packages/web-crawler/src/crawImpl/jina.ts +1 -1
- package/packages/web-crawler/src/crawImpl/naive.ts +29 -3
- package/packages/web-crawler/src/urlRules.ts +7 -1
- package/packages/web-crawler/src/utils/errorType.ts +7 -0
- package/scripts/serverLauncher/startServer.js +11 -7
- package/src/config/modelProviders/ppio.ts +1 -1
- package/src/features/Conversation/Extras/Assistant.tsx +12 -20
- package/src/features/Conversation/Extras/Usage/UsageDetail/ModelCard.tsx +130 -0
- package/src/features/Conversation/Extras/Usage/UsageDetail/TokenProgress.tsx +71 -0
- package/src/features/Conversation/Extras/Usage/UsageDetail/index.tsx +146 -0
- package/src/features/Conversation/Extras/Usage/UsageDetail/tokens.ts +94 -0
- package/src/features/Conversation/Extras/Usage/index.tsx +40 -0
- package/src/libs/agent-runtime/utils/streams/anthropic.test.ts +14 -0
- package/src/libs/agent-runtime/utils/streams/anthropic.ts +25 -0
- package/src/libs/agent-runtime/utils/streams/openai.test.ts +100 -10
- package/src/libs/agent-runtime/utils/streams/openai.ts +30 -4
- package/src/libs/agent-runtime/utils/streams/protocol.ts +4 -0
- package/src/locales/default/chat.ts +30 -1
- package/src/server/routers/tools/search.ts +1 -1
- package/src/store/aiInfra/slices/aiModel/initialState.ts +3 -1
- package/src/store/aiInfra/slices/aiModel/selectors.test.ts +1 -0
- package/src/store/aiInfra/slices/aiModel/selectors.ts +5 -0
- package/src/store/aiInfra/slices/aiProvider/action.ts +3 -1
- package/src/store/chat/slices/aiChat/actions/generateAIChat.ts +5 -1
- package/src/store/chat/slices/message/action.ts +3 -0
- package/src/store/global/initialState.ts +1 -0
- package/src/store/global/selectors/systemStatus.ts +2 -0
- package/src/types/message/base.ts +18 -0
- package/src/types/message/chat.ts +4 -3
- package/src/utils/fetch/fetchSSE.ts +24 -1
- package/src/utils/format.ts +3 -1
package/docs/usage/providers/ppio.mdx

```diff
@@ -1,9 +1,9 @@
 ---
 title: Using PPIO API Key in LobeChat
 description: >-
-  Learn how to integrate PPIO's language model APIs into LobeChat. Follow
-
-
+  Learn how to integrate PPIO's language model APIs into LobeChat. Follow the
+  steps to register, create an PPIO API key, configure settings, and chat with
+  our various AI models.
 tags:
   - PPIO
   - DeepSeek
@@ -16,16 +16,16 @@ tags:
 
 # Using PPIO in LobeChat
 
-<Image alt={'Using PPIO in LobeChat'} cover src={''} />
+<Image alt={'Using PPIO in LobeChat'} cover src={'https://github.com/user-attachments/assets/d0a5e152-160a-4862-8393-546f4e2e5387'} />
 
-[PPIO](https://ppinfra.com?
+[PPIO](https://ppinfra.com/user/register?invited_by=RQIMOC) supports stable and cost-efficient open-source LLM APIs, such as DeepSeek, Llama, Qwen etc.
 
 This document will guide you on how to integrate PPIO in LobeChat:
 
 <Steps>
 ### Step 1: Register and Log in to PPIO
 
-- Visit [PPIO](https://ppinfra.com?
+- Visit [PPIO](https://ppinfra.com/user/register?invited_by=RQIMOC) and create an account
 - Upon registration, PPIO will provide a ¥5 credit (about 5M tokens).
 
 <Image alt={'Register PPIO'} height={457} inStep src={'https://github.com/user-attachments/assets/7cb3019b-78c1-48e0-a64c-a6a4836affd9'} />
@@ -50,7 +50,7 @@ This document will guide you on how to integrate PPIO in LobeChat:
 
 <Callout type={'warning'}>
   During usage, you may need to pay the API service provider, please refer to PPIO's [pricing
-  policy](https://ppinfra.com/llm-api?utm_source=github_lobe-chat
+  policy](https://ppinfra.com/llm-api?utm_source=github_lobe-chat\&utm_medium=github_readme\&utm_campaign=link).
 </Callout>
 </Steps>
 
```
package/docs/usage/providers/ppio.zh-CN.mdx

```diff
@@ -1,8 +1,8 @@
 ---
 title: 在 LobeChat 中使用 PPIO 派欧云 API Key
 description: >-
-  学习如何将 PPIO 派欧云的 LLM API 集成到 LobeChat 中。跟随以下步骤注册 PPIO 账号、创建 API
-
+  学习如何将 PPIO 派欧云的 LLM API 集成到 LobeChat 中。跟随以下步骤注册 PPIO 账号、创建 API Key、并在 LobeChat
+  中进行设置。
 tags:
   - PPIO
   - PPInfra
@@ -15,16 +15,16 @@ tags:
 
 # 在 LobeChat 中使用 PPIO 派欧云
 
-<Image alt={'在 LobeChat 中使用 PPIO'} cover src={''} />
+<Image alt={'在 LobeChat 中使用 PPIO'} cover src={'https://github.com/user-attachments/assets/d0a5e152-160a-4862-8393-546f4e2e5387'} />
 
-[PPIO 派欧云](https://ppinfra.com?
+[PPIO 派欧云](https://ppinfra.com/user/register?invited_by=RQIMOC)提供稳定、高性价比的开源模型 API 服务,支持 DeepSeek 全系列、Llama、Qwen 等行业领先大模型。
 
 本文档将指导你如何在 LobeChat 中使用 PPIO:
 
 <Steps>
 ### 步骤一:注册 PPIO 派欧云账号并登录
 
-- 访问 [PPIO 派欧云](https://ppinfra.com?
+- 访问 [PPIO 派欧云](https://ppinfra.com/user/register?invited_by=RQIMOC) 并注册账号
 - 注册后,PPIO 会赠送 5 元(约 500 万 tokens)的使用额度
 
 <Image alt={'注册 PPIO'} height={457} inStep src={'https://github.com/user-attachments/assets/7cb3019b-78c1-48e0-a64c-a6a4836affd9'} />
@@ -48,7 +48,7 @@ tags:
 
 <Image alt={'选择并使用 PPIO 模型'} inStep src={'https://github.com/user-attachments/assets/8cf66e00-04fe-4bad-9e3d-35afc7d9aa58'} />
 
 <Callout type={'warning'}>
-  在使用过程中你可能需要向 API 服务提供商付费,PPIO 的 API 费用参考[这里](https://ppinfra.com/llm-api?utm_source=github_lobe-chat
+  在使用过程中你可能需要向 API 服务提供商付费,PPIO 的 API 费用参考[这里](https://ppinfra.com/llm-api?utm_source=github_lobe-chat\&utm_medium=github_readme\&utm_campaign=link)。
 </Callout>
 </Steps>
 
```
package/docs/usage/providers/sambanova.mdx

```diff
@@ -0,0 +1,50 @@
+---
+title: Using SambaNova API Key in LobeChat
+description: Learn how to configure and use SambaNova models in LobeChat, obtain an API key, and start a conversation.
+tags:
+  - LobeChat
+  - SambaNova
+  - API Key
+  - Web UI
+---
+
+# Using SambaNova in LobeChat
+
+<Image alt={'Using SambaNova in LobeChat'} cover src={'https://github.com/user-attachments/assets/1028aa1a-6c19-4191-b28a-2020e5637155'} />
+
+[SambaNova](https://sambanova.ai/) is a company based in Palo Alto, California, USA, focused on developing high-performance AI hardware and software solutions. It provides fast AI model training, fine-tuning, and inference capabilities, especially suitable for large-scale generative AI models.
+
+This document will guide you on how to use SambaNova in LobeChat:
+
+<Steps>
+### Step 1: Obtain a SambaNova API Key
+
+- First, you need to register and log in to [SambaNova Cloud](https://cloud.sambanova.ai/)
+- Create an API key in the `APIs` page
+
+<Image alt={'Obtain a SambaNova API Key'} inStep src={'https://github.com/user-attachments/assets/ed6965c8-6884-4adf-a457-573a96755f55'} />
+
+- Copy the obtained API key and save it securely
+
+<Callout type={'warning'}>
+  Please save the generated API Key securely, as it will only appear once. If you accidentally lose it, you will need to create a new API key.
+</Callout>
+
+### Step 2: Configure SambaNova in LobeChat
+
+- Access the `Application Settings` interface of LobeChat
+- Find the `SambaNova` setting item under `Language Model`
+
+<Image alt={'Fill in the SambaNova API Key'} inStep src={'https://github.com/user-attachments/assets/328e9755-8da9-4849-8569-e099924822fe'} />
+
+- Turn on SambaNova and fill in the obtained API key
+- Select a SambaNova model for your assistant to start the conversation
+
+<Image alt={'Select a SambaNova Model'} inStep src={'https://github.com/user-attachments/assets/6dbf4560-3f62-4b33-9f41-96e12b5087b1'} />
+
+<Callout type={'warning'}>
+  You may need to pay the API service provider during use, please refer to SambaNova's related fee policies.
+</Callout>
+</Steps>
+
+Now you can use the models provided by SambaNova in LobeChat to conduct conversations.
```
package/docs/usage/providers/sambanova.zh-CN.mdx

```diff
@@ -0,0 +1,50 @@
+---
+title: 在 LobeChat 中使用 SambaNova API Key
+description: 学习如何在 LobeChat 中配置和使用 SambaNova 模型,获取 API 密钥并开始对话。
+tags:
+  - LobeChat
+  - SambaNova
+  - API密钥
+  - Web UI
+---
+
+# 在 LobeChat 中使用 SambaNova
+
+<Image alt={'在 LobeChat 中使用 SambaNova'} cover src={'https://github.com/user-attachments/assets/1028aa1a-6c19-4191-b28a-2020e5637155'} />
+
+[SambaNova](https://sambanova.ai/) 是一家位于美国加利福尼亚州帕洛阿尔托的公司,专注于开发高性能 AI 硬件和软件解决方案,提供快速的 AI 模型训练、微调和推理能力,尤其适用于大规模生成式 AI 模型。
+
+本文档将指导你如何在 LobeChat 中使用 SambaNova:
+
+<Steps>
+### 步骤一:获取 SambaNova API 密钥
+
+- 首先,你需要注册并登录 [SambaNova Cloud](https://cloud.sambanova.ai/)
+- 在 `APIs` 页面中创建一个 API 密钥
+
+<Image alt={'获取 SambaNova API 密钥'} inStep src={'https://github.com/user-attachments/assets/ed6965c8-6884-4adf-a457-573a96755f55'} />
+
+- 复制得到的 API 密钥并妥善保存
+
+<Callout type={'warning'}>
+  请妥善保存生成的 API Key,它只会出现一次,如果不小心丢失了,你需要重新创建一个 API key
+</Callout>
+
+### 步骤二:在 LobeChat 中配置 SambaNova
+
+- 访问 LobeChat 的 `应用设置`界面
+- 在 `语言模型` 下找到 `SambaNova` 的设置项
+
+<Image alt={'填写 SambaNova API 密钥'} inStep src={'https://github.com/user-attachments/assets/328e9755-8da9-4849-8569-e099924822fe'} />
+
+- 打开 SambaNova 并填入获取的 API 密钥
+- 为你的助手选择一个 SambaNova 模型即可开始对话
+
+<Image alt={'选择 SambaNova 模型'} inStep src={'https://github.com/user-attachments/assets/6dbf4560-3f62-4b33-9f41-96e12b5087b1'} />
+
+<Callout type={'warning'}>
+  在使用过程中你可能需要向 API 服务提供商付费,请参考 SambaNova 的相关费用政策。
+</Callout>
+</Steps>
+
+至此你已经可以在 LobeChat 中使用 SambaNova 提供的模型进行对话了。
```
package/docs/usage/providers/tencentcloud.mdx

```diff
@@ -0,0 +1,49 @@
+---
+title: Using Tencent Cloud API Key in LobeChat
+description: Learn how to configure and use Tencent Cloud AI models in LobeChat, obtain an API key, and start a conversation.
+tags:
+  - LobeChat
+  - Tencent Cloud
+  - API Key
+  - Web UI
+---
+
+# Using Tencent Cloud in LobeChat
+
+<Image alt={'Using Tencent Cloud in LobeChat'} cover src={'https://github.com/user-attachments/assets/aa91ca54-65fc-4e33-8c76-999f0a5d2bee'} />
+
+[Tencent Cloud](https://cloud.tencent.com/) is the cloud computing service brand of Tencent, specializing in providing cloud computing services for enterprises and developers. Tencent Cloud provides a series of AI large model solutions, through which AI models can be connected stably and efficiently.
+
+This document will guide you on how to connect Tencent Cloud's AI models in LobeChat:
+
+<Steps>
+### Step 1: Obtain the Tencent Cloud API Key
+
+- First, visit [Tencent Cloud](https://cloud.tencent.com/) and complete the registration and login.
+- Enter the Tencent Cloud Console and navigate to [Large-scale Knowledge Engine Atomic Capability](https://console.cloud.tencent.com/lkeap).
+- Activate the Large-scale Knowledge Engine, which requires real-name authentication during the activation process.
+
+<Image alt={'Enter the Large-scale Knowledge Engine Atomic Capability Page'} inStep src={'https://github.com/user-attachments/assets/22e1a039-5e6e-4c40-8266-19821677618a'} />
+
+- In the `Access via OpenAI SDK` option, click the `Create API Key` button to create a new API Key.
+- You can view and manage the created API Keys in `API Key Management`.
+- Copy and save the created API Key.
+
+### Step 2: Configure Tencent Cloud in LobeChat
+
+- Visit the `Application Settings` and `AI Service Provider` interface of LobeChat.
+- Find the `Tencent Cloud` settings item in the list of providers.
+
+<Image alt={'Fill in the Tencent Cloud API Key'} inStep src={'https://github.com/user-attachments/assets/a9de7780-d0cb-47d5-ad9c-fcbbec14b940'} />
+
+- Open the Tencent Cloud provider and fill in the obtained API Key.
+- Select a Tencent Cloud model for your assistant to start the conversation.
+
+<Image alt={'Select Tencent Cloud Model'} inStep src={'https://github.com/user-attachments/assets/162bc64e-0d34-4a4e-815a-028247b73143'} />
+
+<Callout type={'warning'}>
+  You may need to pay the API service provider during use, please refer to Tencent Cloud's relevant fee policy.
+</Callout>
+</Steps>
+
+You can now use the models provided by Tencent Cloud in LobeChat to have conversations.
```
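The Tencent Cloud guide above creates the key under `Access via OpenAI SDK`, i.e. the key acts as a standard OpenAI-style bearer token. A minimal sketch of such a request using only the Python standard library; the base URL and model name below are assumptions for illustration, so verify them against what the lkeap console shows next to your key:

```python
import json
import urllib.request

# Assumed endpoint -- verify against the base URL shown in the lkeap console.
LKEAP_BASE_URL = "https://api.lkeap.cloud.tencent.com/v1"

def chat_request(model: str, prompt: str, api_key: str) -> urllib.request.Request:
    """Compose an OpenAI-compatible chat-completions request for lkeap."""
    payload = json.dumps({
        "model": model,  # model name exposed by lkeap (assumed, check console)
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        f"{LKEAP_BASE_URL}/chat/completions",
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # the API Key from Step 1
        },
    )

def send(req: urllib.request.Request) -> str:
    """Execute the request; needs network access and a valid key."""
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

This is the same request shape LobeChat issues once the provider is configured, which is why pasting the key into the provider settings is all Step 2 requires.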
package/docs/usage/providers/tencentcloud.zh-CN.mdx

```diff
@@ -0,0 +1,49 @@
+---
+title: 在 LobeChat 中使用腾讯云 API Key
+description: 学习如何在 LobeChat 中配置和使用腾讯云 AI 模型,获取 API 密钥并开始对话。
+tags:
+  - LobeChat
+  - 腾讯云
+  - API密钥
+  - Web UI
+---
+
+# 在 LobeChat 中使用腾讯云
+
+<Image alt={'在 LobeChat 中使用腾讯云'} cover src={'https://github.com/user-attachments/assets/aa91ca54-65fc-4e33-8c76-999f0a5d2bee'} />
+
+[腾讯云(Tencent Cloud)](https://cloud.tencent.com/)是腾讯公司旗下的云计算服务品牌,专门为企业和开发者提供云计算服务。腾讯云提供了一系列 AI 大模型解决方案,通过这些工具可以稳定高效接入 AI 模型。
+
+本文档将指导你如何在 LobeChat 中接入腾讯云的 AI 模型:
+
+<Steps>
+### 步骤一:获取腾讯云 API 密钥
+
+- 首先,访问[腾讯云](https://cloud.tencent.com/)并完成注册登录
+- 进入腾讯云控制台并导航至[知识引擎原子能力](https://console.cloud.tencent.com/lkeap)
+- 开通大模型知识引擎,开通过程需要实名认证
+
+<Image alt={'进入知识引擎原子能力页面'} inStep src={'https://github.com/user-attachments/assets/22e1a039-5e6e-4c40-8266-19821677618a'} />
+
+- 在`使用OpenAI SDK方式接入`选项中,点击 `创建 API Key` 按钮,创建一个新的 API Key
+- 在 `API key 管理` 中可以查看和管理已创建的 API Key
+- 复制并保存创建好的 API Key
+
+### 步骤二:在 LobeChat 中配置腾讯云
+
+- 访问 LobeChat 的 `应用设置` 的 `AI 服务供应商` 界面
+- 在供应商列表中找到 `腾讯云` 的设置项
+
+<Image alt={'填写腾讯云 API 密钥'} inStep src={'https://github.com/user-attachments/assets/a9de7780-d0cb-47d5-ad9c-fcbbec14b940'} />
+
+- 打开腾讯云服务商并填入获取的 API 密钥
+- 为你的助手选择一个腾讯云模型即可开始对话
+
+<Image alt={'选择腾讯云模型'} inStep src={'https://github.com/user-attachments/assets/162bc64e-0d34-4a4e-815a-028247b73143'} />
+
+<Callout type={'warning'}>
+  在使用过程中你可能需要向 API 服务提供商付费,请参考腾讯云的相关费用政策。
+</Callout>
+</Steps>
+
+至此你已经可以在 LobeChat 中使用腾讯云提供的模型进行对话了。
```
package/docs/usage/providers/vertexai.mdx

```diff
@@ -0,0 +1,59 @@
+---
+title: Using Vertex AI API Key in LobeChat
+description: Learn how to configure and use Vertex AI models in LobeChat, get an API key, and start a conversation.
+tags:
+  - LobeChat
+  - Vertex AI
+  - API Key
+  - Web UI
+---
+
+# Using Vertex AI in LobeChat
+
+<Image alt={'Using Vertex AI in LobeChat'} cover src={'https://github.com/user-attachments/assets/638dcd7c-2bff-4adb-bade-da2aaef872bf'} />
+
+[Vertex AI](https://cloud.google.com/vertex-ai) is a fully managed, integrated AI development platform from Google Cloud, designed for building and deploying generative AI. It provides easy access to Vertex AI Studio, Agent Builder, and over 160 foundational models for AI development.
+
+This document will guide you on how to connect Vertex AI models in LobeChat:
+
+<Steps>
+### Step 1: Prepare a Vertex AI Project
+
+- First, visit [Google Cloud](https://console.cloud.google.com/) and complete the registration and login process.
+- Create a new Google Cloud project or select an existing one.
+- Go to the [Vertex AI Console](https://console.cloud.google.com/vertex-ai).
+- Ensure that the Vertex AI API service is enabled for the project.
+
+<Image alt={'Accessing Vertex AI'} inStep src={'https://github.com/user-attachments/assets/c4fe4430-7860-4339-b014-4d8d264a12c0'} />
+
+### Step 2: Set Up API Access Permissions
+
+- Go to the Google Cloud [IAM Management page](https://console.cloud.google.com/iam-admin/serviceaccounts) and navigate to `Service Accounts`.
+- Create a new service account and assign a role permission to it, such as `Vertex AI User`.
+
+<Image alt={'Creating a Service Account'} inStep src={'https://github.com/user-attachments/assets/692e7c67-f173-45da-86ef-5c69e17988e4'} />
+
+- On the service account management page, find the service account you just created, click `Keys`, and create a new JSON format key.
+- After successful creation, the key file will be automatically saved to your computer in JSON format. Please keep it safe.
+
+<Image alt={'Creating a Key'} inStep src={'https://github.com/user-attachments/assets/1fb5df18-5261-483e-a445-96f52f80dd20'} />
+
+### Step 3: Configure Vertex AI in LobeChat
+
+- Visit the `App Settings` and then the `AI Service Provider` interface in LobeChat.
+- Find the settings item for `Vertex AI` in the list of providers.
+
+<Image alt={'Entering Vertex AI API Key'} inStep src={'https://github.com/user-attachments/assets/5d672e8b-566f-4f82-bdce-947168726bc0'} />
+
+- Open the Vertex AI service provider settings.
+- Fill the entire content of the JSON format key you just obtained into the API Key field.
+- Select a Vertex AI model for your assistant to start the conversation.
+
+<Image alt={'Selecting a Vertex AI Model'} inStep src={'https://github.com/user-attachments/assets/1a7e9600-cd0f-4c82-9d32-4e61bbb351cc'} />
+
+<Callout type={'warning'}>
+  You may need to pay the API service provider during usage. Please refer to Google Cloud's relevant fee policies.
+</Callout>
+</Steps>
+
+Now you can use the models provided by Vertex AI for conversations in LobeChat.
```
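The Vertex AI guide above has you paste the entire JSON key file into the API Key field. For orientation, a Google Cloud service-account key has roughly this shape (all values below are placeholders; paste your real file unmodified, and never commit it to source control):

```json
{
  "type": "service_account",
  "project_id": "my-project-id",
  "private_key_id": "0123456789abcdef",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "my-service-account@my-project-id.iam.gserviceaccount.com",
  "client_id": "123456789012345678901",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token"
}
```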
package/docs/usage/providers/vertexai.zh-CN.mdx

```diff
@@ -0,0 +1,59 @@
+---
+title: 在 LobeChat 中使用 Vertex AI API Key
+description: 学习如何在 LobeChat 中配置和使用 Vertex AI 模型,获取 API 密钥并开始对话。
+tags:
+  - LobeChat
+  - Vertex AI
+  - API密钥
+  - Web UI
+---
+
+# 在 LobeChat 中使用 Vertex AI
+
+<Image alt={'在 LobeChat 中使用 Vertex AI '} cover src={'https://github.com/user-attachments/assets/638dcd7c-2bff-4adb-bade-da2aaef872bf'} />
+
+[Vertex AI](https://cloud.google.com/vertex-ai) 是 Google Cloud 的一款全面托管、集成的 AI 开发平台,旨在构建与应用生成式 AI。你可轻松访问 Vertex AI Studio、Agent Builder 以及超过 160 种基础模型,进行 AI 开发。
+
+本文档将指导你如何在 LobeChat 中接入 Vertex AI 的模型:
+
+<Steps>
+### 步骤一:准备 Vertex AI 项目
+
+- 首先,访问[Google Cloud](https://console.cloud.google.com/)并完成注册登录
+- 创建一个新的 Google Cloud 项目,或选择一个已存在的项目
+- 进入 [Vertex AI 控制台](https://console.cloud.google.com/vertex-ai)
+- 确认该项目已开通 Vertex AI API 服务
+
+<Image alt={'进入 Vertex AI'} inStep src={'https://github.com/user-attachments/assets/c4fe4430-7860-4339-b014-4d8d264a12c0'} />
+
+### 步骤二:设置 API 访问权限
+
+- 进入 Google Cloud [IAM 管理页面](https://console.cloud.google.com/iam-admin/serviceaccounts),并导航至`服务账号`
+- 创建一个新的服务账号,并为其分配一个角色权限,例如 `Vertex AI User`
+
+<Image alt={'创建服务账号'} inStep src={'https://github.com/user-attachments/assets/692e7c67-f173-45da-86ef-5c69e17988e4'} />
+
+- 在服务账号管理页面找到刚刚创建的服务账号,点击`密钥`并创建一个新的 JSON 格式密钥
+- 创建成功后,密钥文件将会以 JSON 文件的格式自动保存到你的电脑上,请妥善保存
+
+<Image alt={'创建密钥'} inStep src={'https://github.com/user-attachments/assets/1fb5df18-5261-483e-a445-96f52f80dd20'} />
+
+### 步骤三:在 LobeChat 中配置 Vertex AI
+
+- 访问 LobeChat 的 `应用设置` 的 `AI 服务供应商` 界面
+- 在供应商列表中找到 `Vertex AI` 的设置项
+
+<Image alt={'填写 Vertex AI API 密钥'} inStep src={'https://github.com/user-attachments/assets/5d672e8b-566f-4f82-bdce-947168726bc0'} />
+
+- 打开 Vertex AI 服务供应商
+- 将刚刚获取的 JSON 格式的全部内容填入 API Key 字段中
+- 为你的助手选择一个 Vertex AI 模型即可开始对话
+
+<Image alt={'选择 Vertex AI 模型'} inStep src={'https://github.com/user-attachments/assets/1a7e9600-cd0f-4c82-9d32-4e61bbb351cc'} />
+
+<Callout type={'warning'}>
+  在使用过程中你可能需要向 API 服务提供商付费,请参考 Google Cloud 的相关费用政策。
+</Callout>
+</Steps>
+
+至此你已经可以在 LobeChat 中使用 Vertex AI 提供的模型进行对话了。
```
package/docs/usage/providers/vllm.mdx

```diff
@@ -0,0 +1,98 @@
+---
+title: Using vLLM API Key in LobeChat
+description: Learn how to configure and use the vLLM language model in LobeChat, obtain an API key, and start a conversation.
+tags:
+  - LobeChat
+  - vLLM
+  - API Key
+  - Web UI
+---
+
+# Using vLLM in LobeChat
+
+<Image alt={'Using vLLM in LobeChat'} cover src={'https://github.com/user-attachments/assets/1d77cca4-7363-4a46-9ad5-10604e111d7c'} />
+
+[vLLM](https://github.com/vllm-project/vllm) is an open-source local large language model (LLM) deployment tool that allows users to efficiently run LLM models on local devices and provides an OpenAI API-compatible service interface.
+
+This document will guide you on how to use vLLM in LobeChat:
+
+<Steps>
+### Step 1: Preparation
+
+vLLM has certain requirements for hardware and software environments. Be sure to configure according to the following requirements:
+
+| Hardware Requirements |                                                                         |
+| --------------------- | ----------------------------------------------------------------------- |
+| GPU                   | - NVIDIA CUDA <br /> - AMD ROCm <br /> - Intel XPU                      |
+| CPU                   | - Intel/AMD x86 <br /> - ARM AArch64 <br /> - Apple silicon             |
+| Other AI Accelerators | - Google TPU <br /> - Intel Gaudi <br /> - AWS Neuron <br /> - OpenVINO |
+
+| Software Requirements                   |
+| --------------------------------------- |
+| - OS: Linux <br /> - Python: 3.9 – 3.12 |
+
+### Step 2: Install vLLM
+
+If you are using an NVIDIA GPU, you can directly install vLLM using `pip`. However, it is recommended to use `uv` here, which is a very fast Python environment manager, to create and manage the Python environment. Please follow the [documentation](https://docs.astral.sh/uv/#getting-started) to install uv. After installing uv, you can use the following command to create a new Python environment and install vLLM:
+
+```shell
+uv venv myenv --python 3.12 --seed
+source myenv/bin/activate
+uv pip install vllm
+```
+
+Another method is to use `uv run` with the `--with [dependency]` option, which allows you to run commands such as `vllm serve` without creating an environment:
+
+```shell
+uv run --with vllm vllm --help
+```
+
+You can also use [conda](https://docs.conda.io/projects/conda/en/latest/user-guide/getting-started.html) to create and manage your Python environment.
+
+```shell
+conda create -n myenv python=3.12 -y
+conda activate myenv
+pip install vllm
+```
+
+<Callout type={"note"}>
+  For non-CUDA platforms, please refer to the [official documentation](https://docs.vllm.ai/en/latest/getting_started/installation/index.html#installation-index) to learn how to install vLLM.
+</Callout>
+
+### Step 3: Start Local Service
+
+vLLM can be deployed as an OpenAI API protocol-compatible server. By default, it will start the server at `http://localhost:8000`. You can specify the address using the `--host` and `--port` parameters. The server currently runs only one model at a time.
+
+The following command will start a vLLM server and run the `Qwen2.5-1.5B-Instruct` model:
+
+```shell
+vllm serve Qwen/Qwen2.5-1.5B-Instruct
+```
+
+You can enable the server to check the API key in the header by passing the parameter `--api-key` or the environment variable `VLLM_API_KEY`. If not set, no API Key is required to access.
+
+<Callout type={'note'}>
+  For more detailed vLLM server configuration, please refer to the [official documentation](https://docs.vllm.ai/en/latest/).
+</Callout>
+
+### Step 4: Configure vLLM in LobeChat
+
+- Access the `Application Settings` interface of LobeChat.
+- Find the `vLLM` settings item under `Language Model`.
+
+<Image alt={'Fill in the vLLM API Key'} inStep src={'https://github.com/user-attachments/assets/669c68bf-3f85-4a6f-bb08-d0d7fb7f7417'} />
+
+- Open the vLLM service provider and fill in the API service address and API Key.
+
+<Callout type={"warning"}>
+  * If your vLLM is not configured with an API Key, please leave the API Key blank.
+  * If your vLLM is running locally, please make sure to turn on `Client Request Mode`.
+</Callout>
+
+- Add the model you are running to the model list below.
+- Select a vLLM model to run for your assistant and start the conversation.
+
+<Image alt={'Select vLLM Model'} inStep src={'https://github.com/user-attachments/assets/fcdfb9c5-819a-488f-b28d-0857fe861219'} />
+</Steps>
+
+Now you can use the models provided by vLLM in LobeChat to have conversations.
```
@@ -0,0 +1,98 @@
|
|
1
|
+
---
|
2
|
+
title: 在 LobeChat 中使用 vLLM API Key
|
3
|
+
description: 学习如何在 LobeChat 中配置和使用 vLLM 语言模型,获取 API 密钥并开始对话。
|
4
|
+
tags:
|
5
|
+
- LobeChat
|
6
|
+
- vLLM
|
7
|
+
- API密钥
|
8
|
+
- Web UI
|
9
|
+
---
|
10
|
+
|
11
|
+
# 在 LobeChat 中使用 vLLM
|
12
|
+
|
13
|
+
<Image alt={'在 LobeChat 中使用 vLLM'} cover src={'https://github.com/user-attachments/assets/1d77cca4-7363-4a46-9ad5-10604e111d7c'} />
|
14
|
+
|
15
|
+
[vLLM](https://github.com/vllm-project/vllm)是一个开源的本地大型语言模型(LLM)部署工具,允许用户在本地设备上高效运行 LLM 模型,并提供兼容 OpenAI API 的服务接口。
|
16
|
+
|
17
|
+
本文档将指导你如何在 LobeChat 中使用 vLLM:
|
18
|
+
|
19
|
+
<Steps>
|
20
|
+
### 步骤一:准备工作
|
21
|
+
|
22
|
+
vLLM 对于硬件和软件环境均有一定要求,请无比根据以下要求进行配置:
|
23
|
+
|
24
|
+
| 硬件需求 | |
|
25
|
+
| --------- | ----------------------------------------------------------------------- |
|
26
|
+
| GPU | - NVIDIA CUDA <br /> - AMD ROCm <br /> - Intel XPU |
|
27
|
+
| CPU | - Intel/AMD x86 <br /> - ARM AArch64 <br /> - Apple silicon |
|
28
|
+
| 其他 AI 加速器 | - Google TPU <br /> - Intel Gaudi <br /> - AWS Neuron <br /> - OpenVINO |
|
29
|
+
|
30
|
+
| 软件需求 |
|
31
|
+
| --------------------------------------- |
|
32
|
+
| - OS: Linux <br /> - Python: 3.9 – 3.12 |

### Step 2: Install vLLM

If you are using an NVIDIA GPU, you can install vLLM directly with `pip`. However, we recommend using `uv`, a very fast Python environment manager, to create and manage Python environments. Follow the [documentation](https://docs.astral.sh/uv/#getting-started) to install uv. Once uv is installed, you can create a new Python environment and install vLLM with the following commands:

```shell
uv venv myenv --python 3.12 --seed
source myenv/bin/activate
uv pip install vllm
```

Another option is to use `uv run` with the `--with [dependency]` flag, which lets you run commands such as `vllm serve` without creating an environment:

```shell
uv run --with vllm vllm --help
```

You can also use [conda](https://docs.conda.io/projects/conda/en/latest/user-guide/getting-started.html) to create and manage your Python environment:

```shell
conda create -n myenv python=3.12 -y
conda activate myenv
pip install vllm
```

<Callout type={'note'}>
  For non-CUDA platforms, refer to the [official documentation](https://docs.vllm.ai/en/latest/getting_started/installation/index.html#installation-index) for how to install vLLM.
</Callout>

### Step 3: Start the Local Service

vLLM can be deployed as a server that implements the OpenAI API protocol. By default, it starts the server at `http://localhost:8000`. You can specify a different address with the `--host` and `--port` arguments. The server currently runs only one model at a time.

The following command starts a vLLM server running the `Qwen2.5-1.5B-Instruct` model:

```shell
vllm serve Qwen/Qwen2.5-1.5B-Instruct
```

You can enable API key checking on the server by passing the `--api-key` argument or setting the `VLLM_API_KEY` environment variable; the server will then verify the key in the request headers. If neither is set, the server can be accessed without an API key.
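
When API key checking is enabled, LobeChat (or any OpenAI-compatible client) must send the key as a bearer token in the request headers. The sketch below illustrates that header logic; the helper name is our own, not part of vLLM:

```python
from typing import Optional

def auth_headers(api_key: Optional[str]) -> dict:
    """Illustrative sketch: build client request headers for a vLLM server.
    When the server is launched with --api-key (or VLLM_API_KEY), every
    request must present the key as an OpenAI-style bearer token."""
    headers = {"Content-Type": "application/json"}
    if api_key:  # omit the header entirely when no key is configured
        headers["Authorization"] = f"Bearer {api_key}"
    return headers
```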

<Callout type={'note'}>
  For more detailed vLLM server configuration, refer to the [official documentation](https://docs.vllm.ai/en/latest/).
</Callout>

### Step 4: Configure vLLM in LobeChat

- Visit the `Application Settings` page in LobeChat
- Find the `vLLM` settings under `Language Model`

<Image alt={'Entering the vLLM API key'} inStep src={'https://github.com/user-attachments/assets/669c68bf-3f85-4a6f-bb08-d0d7fb7f7417'} />

- Enable the vLLM provider and fill in the API service address and API Key

<Callout type={'warning'}>
  * If your vLLM server is not configured with an API Key, leave the API Key field empty
  * If your vLLM server is running locally, make sure `Client Request Mode` is enabled
</Callout>

- Add the model you are running to the model list below
- Select a model served by vLLM for your assistant to start the conversation

<Image alt={'Selecting a vLLM model'} inStep src={'https://github.com/user-attachments/assets/fcdfb9c5-819a-488f-b28d-0857fe861219'} />
</Steps>

You can now use the models provided by vLLM for conversations in LobeChat.
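
Because the vLLM server speaks the OpenAI API protocol, you can also verify it outside LobeChat. The sketch below uses only the Python standard library; the base URL and model name are assumptions matching the examples above, and the helper name is our own:

```python
import json
import urllib.request
from typing import Optional

# Assumed local endpoint; adjust to match your `vllm serve` --host/--port flags.
BASE_URL = "http://localhost:8000/v1"

def build_chat_request(model: str, content: str,
                       api_key: Optional[str] = None) -> urllib.request.Request:
    """Illustrative sketch: build an OpenAI-style chat completion
    request targeting a local vLLM server."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": content}],
    }).encode("utf-8")
    headers = {"Content-Type": "application/json"}
    if api_key:  # only needed when the server was started with --api-key
        headers["Authorization"] = f"Bearer {api_key}"
    return urllib.request.Request(f"{BASE_URL}/chat/completions",
                                  data=body, headers=headers)

if __name__ == "__main__":
    req = build_chat_request("Qwen/Qwen2.5-1.5B-Instruct", "Hello!")
    with urllib.request.urlopen(req) as resp:  # requires a running server
        print(json.load(resp)["choices"][0]["message"]["content"])
```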
---
title: Using the Volcano Engine API Key in LobeChat
description: Learn how to configure and use the Volcano Engine AI model in LobeChat, obtain API keys, and start conversations.
tags:
- LobeChat
- Volcengine
- Doubao
- API Key
- Web UI
---

# Using Volcengine in LobeChat

<Image alt={'Using Volcengine in LobeChat'} cover src={'https://github.com/user-attachments/assets/b9da065e-f964-44f2-8260-59e182be2729'} />

[Volcengine](https://www.volcengine.com/) is a cloud service platform under ByteDance that provides large language model (LLM) services through "Volcano Ark," supporting multiple mainstream models such as Baichuan Intelligent, Mobvoi, and more.

This document will guide you on how to use Volcengine in LobeChat:

<Steps>
### Step 1: Obtain the Volcengine API Key

- First, visit the [Volcengine official website](https://www.volcengine.com/) and complete the registration and login process.
- Access the Volcengine console and navigate to [Volcano Ark](https://console.volcengine.com/ark/).

<Image alt={'Entering Volcano Ark API Management Page'} inStep src={'https://github.com/user-attachments/assets/d6ace96f-0398-4847-83e1-75c3004a0e8b'} />

- Go to the `API Key Management` menu and click `Create API Key`.
- Copy and save the created API Key.

### Step 2: Configure Volcengine in LobeChat

- Navigate to the `Application Settings` page in LobeChat and select `AI Service Providers`.
- Find the `Volcengine` option in the provider list.

<Image alt={'Entering Volcengine API Key'} inStep src={'https://github.com/user-attachments/assets/237864d6-cc5d-4fe4-8a2b-c278016855c5'} />

- Open the Volcengine service provider and enter the obtained API Key.
- Choose a Volcengine model for your assistant to start the conversation.

<Image alt={'Selecting a Volcengine Model'} inStep src={'https://github.com/user-attachments/assets/702c191f-8250-4462-aed7-accb18b18dea'} />

<Callout type={'warning'}>
  During usage, you may need to pay the API service provider, so please refer to Volcengine's pricing policy.
</Callout>
</Steps>

You can now use the models provided by Volcengine for conversations in LobeChat.
|