@lobehub/chat 1.68.3 → 1.68.5

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (112)
  1. package/CHANGELOG.md +50 -0
  2. package/README.md +3 -3
  3. package/README.zh-CN.md +14 -17
  4. package/changelog/v1.json +18 -0
  5. package/docs/usage/providers/azureai.mdx +69 -0
  6. package/docs/usage/providers/azureai.zh-CN.mdx +69 -0
  7. package/docs/usage/providers/deepseek.mdx +3 -3
  8. package/docs/usage/providers/deepseek.zh-CN.mdx +5 -4
  9. package/docs/usage/providers/jina.mdx +51 -0
  10. package/docs/usage/providers/jina.zh-CN.mdx +51 -0
  11. package/docs/usage/providers/lmstudio.mdx +75 -0
  12. package/docs/usage/providers/lmstudio.zh-CN.mdx +75 -0
  13. package/docs/usage/providers/nvidia.mdx +55 -0
  14. package/docs/usage/providers/nvidia.zh-CN.mdx +55 -0
  15. package/docs/usage/providers/ppio.mdx +7 -7
  16. package/docs/usage/providers/ppio.zh-CN.mdx +6 -6
  17. package/docs/usage/providers/sambanova.mdx +50 -0
  18. package/docs/usage/providers/sambanova.zh-CN.mdx +50 -0
  19. package/docs/usage/providers/tencentcloud.mdx +49 -0
  20. package/docs/usage/providers/tencentcloud.zh-CN.mdx +49 -0
  21. package/docs/usage/providers/vertexai.mdx +59 -0
  22. package/docs/usage/providers/vertexai.zh-CN.mdx +59 -0
  23. package/docs/usage/providers/vllm.mdx +98 -0
  24. package/docs/usage/providers/vllm.zh-CN.mdx +98 -0
  25. package/docs/usage/providers/volcengine.mdx +47 -0
  26. package/docs/usage/providers/volcengine.zh-CN.mdx +48 -0
  27. package/locales/ar/chat.json +29 -0
  28. package/locales/ar/models.json +48 -0
  29. package/locales/ar/providers.json +3 -0
  30. package/locales/bg-BG/chat.json +29 -0
  31. package/locales/bg-BG/models.json +48 -0
  32. package/locales/bg-BG/providers.json +3 -0
  33. package/locales/de-DE/chat.json +29 -0
  34. package/locales/de-DE/models.json +48 -0
  35. package/locales/de-DE/providers.json +3 -0
  36. package/locales/en-US/chat.json +29 -0
  37. package/locales/en-US/models.json +48 -0
  38. package/locales/en-US/providers.json +3 -3
  39. package/locales/es-ES/chat.json +29 -0
  40. package/locales/es-ES/models.json +48 -0
  41. package/locales/es-ES/providers.json +3 -0
  42. package/locales/fa-IR/chat.json +29 -0
  43. package/locales/fa-IR/models.json +48 -0
  44. package/locales/fa-IR/providers.json +3 -0
  45. package/locales/fr-FR/chat.json +29 -0
  46. package/locales/fr-FR/models.json +48 -0
  47. package/locales/fr-FR/providers.json +3 -0
  48. package/locales/it-IT/chat.json +29 -0
  49. package/locales/it-IT/models.json +48 -0
  50. package/locales/it-IT/providers.json +3 -0
  51. package/locales/ja-JP/chat.json +29 -0
  52. package/locales/ja-JP/models.json +48 -0
  53. package/locales/ja-JP/providers.json +3 -0
  54. package/locales/ko-KR/chat.json +29 -0
  55. package/locales/ko-KR/models.json +48 -0
  56. package/locales/ko-KR/providers.json +3 -0
  57. package/locales/nl-NL/chat.json +29 -0
  58. package/locales/nl-NL/models.json +48 -0
  59. package/locales/nl-NL/providers.json +3 -0
  60. package/locales/pl-PL/chat.json +29 -0
  61. package/locales/pl-PL/models.json +48 -0
  62. package/locales/pl-PL/providers.json +3 -0
  63. package/locales/pt-BR/chat.json +29 -0
  64. package/locales/pt-BR/models.json +48 -0
  65. package/locales/pt-BR/providers.json +3 -0
  66. package/locales/ru-RU/chat.json +29 -0
  67. package/locales/ru-RU/models.json +48 -0
  68. package/locales/ru-RU/providers.json +3 -0
  69. package/locales/tr-TR/chat.json +29 -0
  70. package/locales/tr-TR/models.json +48 -0
  71. package/locales/tr-TR/providers.json +3 -0
  72. package/locales/vi-VN/chat.json +29 -0
  73. package/locales/vi-VN/models.json +48 -0
  74. package/locales/vi-VN/providers.json +3 -0
  75. package/locales/zh-CN/chat.json +29 -0
  76. package/locales/zh-CN/models.json +51 -3
  77. package/locales/zh-CN/providers.json +3 -4
  78. package/locales/zh-TW/chat.json +29 -0
  79. package/locales/zh-TW/models.json +48 -0
  80. package/locales/zh-TW/providers.json +3 -0
  81. package/package.json +1 -1
  82. package/packages/web-crawler/src/crawImpl/__test__/jina.test.ts +169 -0
  83. package/packages/web-crawler/src/crawImpl/naive.ts +29 -3
  84. package/packages/web-crawler/src/utils/errorType.ts +7 -0
  85. package/scripts/serverLauncher/startServer.js +11 -7
  86. package/src/config/modelProviders/index.ts +1 -1
  87. package/src/config/modelProviders/ppio.ts +1 -1
  88. package/src/features/Conversation/Extras/Assistant.tsx +12 -20
  89. package/src/features/Conversation/Extras/Usage/UsageDetail/ModelCard.tsx +130 -0
  90. package/src/features/Conversation/Extras/Usage/UsageDetail/TokenProgress.tsx +71 -0
  91. package/src/features/Conversation/Extras/Usage/UsageDetail/index.tsx +146 -0
  92. package/src/features/Conversation/Extras/Usage/UsageDetail/tokens.ts +94 -0
  93. package/src/features/Conversation/Extras/Usage/index.tsx +40 -0
  94. package/src/libs/agent-runtime/utils/streams/anthropic.test.ts +14 -0
  95. package/src/libs/agent-runtime/utils/streams/anthropic.ts +25 -0
  96. package/src/libs/agent-runtime/utils/streams/openai.test.ts +100 -10
  97. package/src/libs/agent-runtime/utils/streams/openai.ts +30 -4
  98. package/src/libs/agent-runtime/utils/streams/protocol.ts +4 -0
  99. package/src/locales/default/chat.ts +30 -1
  100. package/src/server/routers/tools/search.ts +1 -1
  101. package/src/store/aiInfra/slices/aiModel/initialState.ts +3 -1
  102. package/src/store/aiInfra/slices/aiModel/selectors.test.ts +1 -0
  103. package/src/store/aiInfra/slices/aiModel/selectors.ts +5 -0
  104. package/src/store/aiInfra/slices/aiProvider/action.ts +3 -1
  105. package/src/store/chat/slices/aiChat/actions/generateAIChat.ts +5 -1
  106. package/src/store/chat/slices/message/action.ts +3 -0
  107. package/src/store/global/initialState.ts +1 -0
  108. package/src/store/global/selectors/systemStatus.ts +2 -0
  109. package/src/types/message/base.ts +18 -0
  110. package/src/types/message/chat.ts +4 -3
  111. package/src/utils/fetch/fetchSSE.ts +24 -1
  112. package/src/utils/format.ts +3 -1
@@ -0,0 +1,55 @@
+ ---
+ title: Using Nvidia NIM API Key in LobeChat
+ description: Learn how to configure and use Nvidia NIM AI models in LobeChat, obtain an API key, and start a conversation.
+ tags:
+ - LobeChat
+ - Nvidia NIM
+ - API Key
+ - Web UI
+ ---
+
+ # Using Nvidia NIM in LobeChat
+
+ <Image alt={'Using Nvidia NIM in LobeChat'} cover src={'https://github.com/user-attachments/assets/539349dd-2c16-4f42-b525-cca74e113541'} />
+
+ [NVIDIA NIM](https://developer.nvidia.com/nim) is part of NVIDIA AI Enterprise and is designed to accelerate the deployment of generative AI applications through microservices. It provides a set of easy-to-use inference microservices that can run on any cloud, data center, or workstation, supporting NVIDIA GPU acceleration.
+
+ This document will guide you on how to access and use AI models provided by Nvidia NIM in LobeChat:
+
+ <Steps>
+ ### Step 1: Obtain Nvidia NIM API Key
+
+ - First, visit the [Nvidia NIM console](https://build.nvidia.com/explore/discover) and complete the registration and login.
+ - On the `Models` page, select the model you need, such as Deepseek-R1.
+
+ <Image alt={'Select Model'} inStep src={'https://github.com/user-attachments/assets/b49ed0c1-d6bf-4f46-b9df-5f7c730afaa3'} />
+
+ - On the model details page, click "Build with this NIM".
+ - In the pop-up dialog, click the `Generate API Key` button.
+
+ <Image alt={'Get API Key'} inStep src={'https://github.com/user-attachments/assets/5321f987-2c64-4211-8549-bd30ca9b59b9'} />
+
+ - Copy and save the created API Key.
+
+ <Callout type={'warning'}>
+ Please store the key securely as it will only appear once. If you accidentally lose it, you will need to create a new key.
+ </Callout>
+
+ ### Step 2: Configure Nvidia NIM in LobeChat
+
+ - Visit the `Application Settings` -> `AI Service Provider` interface in LobeChat.
+ - Find the settings item for `Nvidia NIM` in the list of providers.
+
+ <Image alt={'Fill in the Nvidia NIM API Key'} inStep src={'https://github.com/user-attachments/assets/dfc45807-2ed6-43eb-af4c-47df66dfff7d'} />
+
+ - Enable the Nvidia NIM service provider and fill in the obtained API key.
+ - Select an Nvidia NIM model for your assistant and start the conversation.
+
+ <Image alt={'Select Nvidia NIM Model'} inStep src={'https://github.com/user-attachments/assets/cb4ba5fe-c223-4b9f-a662-de93e4a536d1'} />
+
+ <Callout type={'warning'}>
+ You may need to pay the API service provider during use, please refer to Nvidia NIM's related fee policies.
+ </Callout>
+ </Steps>
+
+ Now you can use the models provided by Nvidia NIM to have conversations in LobeChat.
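Beyond the UI, the key from Step 1 can be exercised directly, since NIM exposes an OpenAI-compatible REST API. The following is a hedged, stdlib-only sketch: the base URL is NIM's documented `integrate` endpoint, and the model id `deepseek-ai/deepseek-r1` is an example — substitute whichever model you enabled. The request is built but not sent, so it works without network access; set `NVIDIA_API_KEY` and call `urllib.request.urlopen(req)` to actually query the service.

```python
import json
import os
import urllib.request

# Assumed NIM OpenAI-compatible base URL (check the console for your endpoint).
BASE_URL = "https://integrate.api.nvidia.com/v1"


def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style chat completion request."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 64,
    }).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            # API key from Step 1; empty string if the env var is unset.
            "Authorization": f"Bearer {os.environ.get('NVIDIA_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_chat_request("deepseek-ai/deepseek-r1", "Hello!")
print(req.full_url)  # https://integrate.api.nvidia.com/v1/chat/completions
```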
@@ -0,0 +1,55 @@
+ ---
+ title: 在 LobeChat 中使用 Nvidia NIM API Key
+ description: 学习如何在 LobeChat 中配置和使用 Nvidia NIM AI 模型,获取 API 密钥并开始对话。
+ tags:
+ - LobeChat
+ - Nvidia NIM
+ - API密钥
+ - Web UI
+ ---
+
+ # 在 LobeChat 中使用 Nvidia NIM
+
+ <Image alt={'在 LobeChat 中使用 Nvidia NIM'} cover src={'https://github.com/user-attachments/assets/539349dd-2c16-4f42-b525-cca74e113541'} />
+
+ [NVIDIA NIM](https://developer.nvidia.com/nim) 是 NVIDIA AI Enterprise 的一部分,旨在通过微服务加速生成式 AI 应用的部署。它提供了一组易于使用的推理微服务,可以在任何云、数据中心或工作站上运行,支持 NVIDIA GPU 加速。
+
+ 本文档将指导你如何在 LobeChat 中接入并使用 Nvidia NIM 提供的 AI 模型:
+
+ <Steps>
+ ### 步骤一:获取 Nvidia NIM API 密钥
+
+ - 首先,访问[Nvidia NIM 控制台](https://build.nvidia.com/explore/discover)并完成注册登录
+ - 在 `Models` 页面选择你需要的模型,例如 Deepseek-R1
+
+ <Image alt={'选择模型'} inStep src={'https://github.com/user-attachments/assets/b49ed0c1-d6bf-4f46-b9df-5f7c730afaa3'} />
+
+ - 在模型详情页点击`使用此NIM构建`
+ - 在弹出的对话框中点击`生成 API Key` 按钮
+
+ <Image alt={'获取 API Key'} inStep src={'https://github.com/user-attachments/assets/5321f987-2c64-4211-8549-bd30ca9b59b9'} />
+
+ - 复制并保存创建好的 API Key
+
+ <Callout type={'warning'}>
+ 请安全地存储密钥,因为它只会出现一次。如果你意外丢失它,您将需要创建一个新密钥。
+ </Callout>
+
+ ### 步骤二:在 LobeChat 中配置 Nvidia NIM
+
+ - 访问 LobeChat 的 `应用设置` 的 `AI 服务供应商` 界面
+ - 在供应商列表中找到 `Nvidia NIM` 的设置项
+
+ <Image alt={'填写 Nvidia NIM API 密钥'} inStep src={'https://github.com/user-attachments/assets/dfc45807-2ed6-43eb-af4c-47df66dfff7d'} />
+
+ - 打开 Nvidia NIM 服务商并填入获取的 API 密钥
+ - 为你的助手选择一个 Nvidia NIM 模型即可开始对话
+
+ <Image alt={'选择 Nvidia NIM 模型'} inStep src={'https://github.com/user-attachments/assets/cb4ba5fe-c223-4b9f-a662-de93e4a536d1'} />
+
+ <Callout type={'warning'}>
+ 在使用过程中你可能需要向 API 服务提供商付费,请参考 Nvidia NIM 的相关费用政策。
+ </Callout>
+ </Steps>
+
+ 至此你已经可以在 LobeChat 中使用 Nvidia NIM 提供的模型进行对话了。
@@ -1,9 +1,9 @@
  ---
  title: Using PPIO API Key in LobeChat
  description: >-
- Learn how to integrate PPIO's language model APIs into LobeChat. Follow
- the steps to register, create an PPIO API key, configure settings, and
- chat with our various AI models.
+ Learn how to integrate PPIO's language model APIs into LobeChat. Follow the
+ steps to register, create an PPIO API key, configure settings, and chat with
+ our various AI models.
  tags:
  - PPIO
  - DeepSeek
@@ -16,16 +16,16 @@ tags:

  # Using PPIO in LobeChat

- <Image alt={'Using PPIO in LobeChat'} cover src={''} />
+ <Image alt={'Using PPIO in LobeChat'} cover src={'https://github.com/user-attachments/assets/d0a5e152-160a-4862-8393-546f4e2e5387'} />

- [PPIO](https://ppinfra.com?utm_source=github_lobe-chat&utm_medium=github_readme&utm_campaign=link) supports stable and cost-efficient open-source LLM APIs, such as DeepSeek, Llama, Qwen etc.
+ [PPIO](https://ppinfra.com/user/register?invited_by=RQIMOC) supports stable and cost-efficient open-source LLM APIs, such as DeepSeek, Llama, Qwen etc.

  This document will guide you on how to integrate PPIO in LobeChat:

  <Steps>
  ### Step 1: Register and Log in to PPIO

- - Visit [PPIO](https://ppinfra.com?utm_source=github_lobe-chat&utm_medium=github_readme&utm_campaign=link) and create an account
+ - Visit [PPIO](https://ppinfra.com/user/register?invited_by=RQIMOC) and create an account
  - Upon registration, PPIO will provide a ¥5 credit (about 5M tokens).

  <Image alt={'Register PPIO'} height={457} inStep src={'https://github.com/user-attachments/assets/7cb3019b-78c1-48e0-a64c-a6a4836affd9'} />
@@ -50,7 +50,7 @@ This document will guide you on how to integrate PPIO in LobeChat:

  <Callout type={'warning'}>
  During usage, you may need to pay the API service provider, please refer to PPIO's [pricing
- policy](https://ppinfra.com/llm-api?utm_source=github_lobe-chat&utm_medium=github_readme&utm_campaign=link).
+ policy](https://ppinfra.com/llm-api?utm_source=github_lobe-chat\&utm_medium=github_readme\&utm_campaign=link).
  </Callout>
  </Steps>

@@ -1,8 +1,8 @@
  ---
  title: 在 LobeChat 中使用 PPIO 派欧云 API Key
  description: >-
- 学习如何将 PPIO 派欧云的 LLM API 集成到 LobeChat 中。跟随以下步骤注册 PPIO 账号、创建 API
- Key、并在 LobeChat 中进行设置。
+ 学习如何将 PPIO 派欧云的 LLM API 集成到 LobeChat 中。跟随以下步骤注册 PPIO 账号、创建 API Key、并在 LobeChat
+ 中进行设置。
  tags:
  - PPIO
  - PPInfra
@@ -15,16 +15,16 @@ tags:

  # 在 LobeChat 中使用 PPIO 派欧云

- <Image alt={'在 LobeChat 中使用 PPIO'} cover src={''} />
+ <Image alt={'在 LobeChat 中使用 PPIO'} cover src={'https://github.com/user-attachments/assets/d0a5e152-160a-4862-8393-546f4e2e5387'} />

- [PPIO 派欧云](https://ppinfra.com?utm_source=github_lobe-chat&utm_medium=github_readme&utm_campaign=link)提供稳定、高性价比的开源模型 API 服务,支持 DeepSeek 全系列、Llama、Qwen 等行业领先大模型。
+ [PPIO 派欧云](https://ppinfra.com/user/register?invited_by=RQIMOC)提供稳定、高性价比的开源模型 API 服务,支持 DeepSeek 全系列、Llama、Qwen 等行业领先大模型。

  本文档将指导你如何在 LobeChat 中使用 PPIO:

  <Steps>
  ### 步骤一:注册 PPIO 派欧云账号并登录

- - 访问 [PPIO 派欧云](https://ppinfra.com?utm_source=github_lobe-chat&utm_medium=github_readme&utm_campaign=link) 并注册账号
+ - 访问 [PPIO 派欧云](https://ppinfra.com/user/register?invited_by=RQIMOC) 并注册账号
  - 注册后,PPIO 会赠送 5 元(约 500 万 tokens)的使用额度

  <Image alt={'注册 PPIO'} height={457} inStep src={'https://github.com/user-attachments/assets/7cb3019b-78c1-48e0-a64c-a6a4836affd9'} />
@@ -48,7 +48,7 @@ tags:
  <Image alt={'选择并使用 PPIO 模型'} inStep src={'https://github.com/user-attachments/assets/8cf66e00-04fe-4bad-9e3d-35afc7d9aa58'} />

  <Callout type={'warning'}>
- 在使用过程中你可能需要向 API 服务提供商付费,PPIO 的 API 费用参考[这里](https://ppinfra.com/llm-api?utm_source=github_lobe-chat&utm_medium=github_readme&utm_campaign=link)。
+ 在使用过程中你可能需要向 API 服务提供商付费,PPIO 的 API 费用参考[这里](https://ppinfra.com/llm-api?utm_source=github_lobe-chat\&utm_medium=github_readme\&utm_campaign=link)。
  </Callout>
  </Steps>

@@ -0,0 +1,50 @@
+ ---
+ title: Using SambaNova API Key in LobeChat
+ description: Learn how to configure and use SambaNova models in LobeChat, obtain an API key, and start a conversation.
+ tags:
+ - LobeChat
+ - SambaNova
+ - API Key
+ - Web UI
+ ---
+
+ # Using SambaNova in LobeChat
+
+ <Image alt={'Using SambaNova in LobeChat'} cover src={'https://github.com/user-attachments/assets/1028aa1a-6c19-4191-b28a-2020e5637155'} />
+
+ [SambaNova](https://sambanova.ai/) is a company based in Palo Alto, California, USA, focused on developing high-performance AI hardware and software solutions. It provides fast AI model training, fine-tuning, and inference capabilities, especially suitable for large-scale generative AI models.
+
+ This document will guide you on how to use SambaNova in LobeChat:
+
+ <Steps>
+ ### Step 1: Obtain a SambaNova API Key
+
+ - First, you need to register and log in to [SambaNova Cloud](https://cloud.sambanova.ai/)
+ - Create an API key in the `APIs` page
+
+ <Image alt={'Obtain a SambaNova API Key'} inStep src={'https://github.com/user-attachments/assets/ed6965c8-6884-4adf-a457-573a96755f55'} />
+
+ - Copy the obtained API key and save it securely
+
+ <Callout type={'warning'}>
+ Please save the generated API Key securely, as it will only appear once. If you accidentally lose it, you will need to create a new API key.
+ </Callout>
+
+ ### Step 2: Configure SambaNova in LobeChat
+
+ - Access the `Application Settings` interface of LobeChat
+ - Find the `SambaNova` setting item under `Language Model`
+
+ <Image alt={'Fill in the SambaNova API Key'} inStep src={'https://github.com/user-attachments/assets/328e9755-8da9-4849-8569-e099924822fe'} />
+
+ - Turn on SambaNova and fill in the obtained API key
+ - Select a SambaNova model for your assistant to start the conversation
+
+ <Image alt={'Select a SambaNova Model'} inStep src={'https://github.com/user-attachments/assets/6dbf4560-3f62-4b33-9f41-96e12b5087b1'} />
+
+ <Callout type={'warning'}>
+ You may need to pay the API service provider during use, please refer to SambaNova's related fee policies.
+ </Callout>
+ </Steps>
+
+ Now you can use the models provided by SambaNova in LobeChat to conduct conversations.
@@ -0,0 +1,50 @@
+ ---
+ title: 在 LobeChat 中使用 SambaNova API Key
+ description: 学习如何在 LobeChat 中配置和使用 SambaNova 模型,获取 API 密钥并开始对话。
+ tags:
+ - LobeChat
+ - SambaNova
+ - API密钥
+ - Web UI
+ ---
+
+ # 在 LobeChat 中使用 SambaNova
+
+ <Image alt={'在 LobeChat 中使用 SambaNova'} cover src={'https://github.com/user-attachments/assets/1028aa1a-6c19-4191-b28a-2020e5637155'} />
+
+ [SambaNova](https://sambanova.ai/) 是一家位于美国加利福尼亚州帕洛阿尔托的公司,专注于开发高性能 AI 硬件和软件解决方案,提供快速的 AI 模型训练、微调和推理能力,尤其适用于大规模生成式 AI 模型。
+
+ 本文档将指导你如何在 LobeChat 中使用 SambaNova:
+
+ <Steps>
+ ### 步骤一:获取 SambaNova API 密钥
+
+ - 首先,你需要注册并登录 [SambaNova Cloud](https://cloud.sambanova.ai/)
+ - 在 `APIs` 页面中创建一个 API 密钥
+
+ <Image alt={'获取 SambaNova API 密钥'} inStep src={'https://github.com/user-attachments/assets/ed6965c8-6884-4adf-a457-573a96755f55'} />
+
+ - 复制得到的 API 密钥并妥善保存
+
+ <Callout type={'warning'}>
+ 请妥善保存生成的 API Key,它只会出现一次,如果不小心丢失了,你需要重新创建一个 API key
+ </Callout>
+
+ ### 步骤二:在 LobeChat 中配置 SambaNova
+
+ - 访问 LobeChat 的 `应用设置` 界面
+ - 在 `语言模型` 下找到 `SambaNova` 的设置项
+
+ <Image alt={'填写 SambaNova API 密钥'} inStep src={'https://github.com/user-attachments/assets/328e9755-8da9-4849-8569-e099924822fe'} />
+
+ - 打开 SambaNova 并填入获取的 API 密钥
+ - 为你的助手选择一个 SambaNova 模型即可开始对话
+
+ <Image alt={'选择 SambaNova 模型'} inStep src={'https://github.com/user-attachments/assets/6dbf4560-3f62-4b33-9f41-96e12b5087b1'} />
+
+ <Callout type={'warning'}>
+ 在使用过程中你可能需要向 API 服务提供商付费,请参考 SambaNova 的相关费用政策。
+ </Callout>
+ </Steps>
+
+ 至此你已经可以在 LobeChat 中使用 SambaNova 提供的模型进行对话了。
@@ -0,0 +1,49 @@
+ ---
+ title: Using Tencent Cloud API Key in LobeChat
+ description: Learn how to configure and use Tencent Cloud AI models in LobeChat, obtain an API key, and start a conversation.
+ tags:
+ - LobeChat
+ - Tencent Cloud
+ - API Key
+ - Web UI
+ ---
+
+ # Using Tencent Cloud in LobeChat
+
+ <Image alt={'Using Tencent Cloud in LobeChat'} cover src={'https://github.com/user-attachments/assets/aa91ca54-65fc-4e33-8c76-999f0a5d2bee'} />
+
+ [Tencent Cloud](https://cloud.tencent.com/) is the cloud computing service brand of Tencent, specializing in providing cloud computing services for enterprises and developers. Tencent Cloud provides a series of AI large model solutions, through which AI models can be connected stably and efficiently.
+
+ This document will guide you on how to connect Tencent Cloud's AI models in LobeChat:
+
+ <Steps>
+ ### Step 1: Obtain the Tencent Cloud API Key
+
+ - First, visit [Tencent Cloud](https://cloud.tencent.com/) and complete the registration and login.
+ - Enter the Tencent Cloud Console and navigate to [Large-scale Knowledge Engine Atomic Capability](https://console.cloud.tencent.com/lkeap).
+ - Activate the Large-scale Knowledge Engine, which requires real-name authentication during the activation process.
+
+ <Image alt={'Enter the Large-scale Knowledge Engine Atomic Capability Page'} inStep src={'https://github.com/user-attachments/assets/22e1a039-5e6e-4c40-8266-19821677618a'} />
+
+ - In the `Access via OpenAI SDK` option, click the `Create API Key` button to create a new API Key.
+ - You can view and manage the created API Keys in `API Key Management`.
+ - Copy and save the created API Key.
+
+ ### Step 2: Configure Tencent Cloud in LobeChat
+
+ - Visit the `Application Settings` and `AI Service Provider` interface of LobeChat.
+ - Find the `Tencent Cloud` settings item in the list of providers.
+
+ <Image alt={'Fill in the Tencent Cloud API Key'} inStep src={'https://github.com/user-attachments/assets/a9de7780-d0cb-47d5-ad9c-fcbbec14b940'} />
+
+ - Open the Tencent Cloud provider and fill in the obtained API Key.
+ - Select a Tencent Cloud model for your assistant to start the conversation.
+
+ <Image alt={'Select Tencent Cloud Model'} inStep src={'https://github.com/user-attachments/assets/162bc64e-0d34-4a4e-815a-028247b73143'} />
+
+ <Callout type={'warning'}>
+ You may need to pay the API service provider during use, please refer to Tencent Cloud's relevant fee policy.
+ </Callout>
+ </Steps>
+
+ You can now use the models provided by Tencent Cloud in LobeChat to have conversations.
@@ -0,0 +1,49 @@
+ ---
+ title: 在 LobeChat 中使用腾讯云 API Key
+ description: 学习如何在 LobeChat 中配置和使用腾讯云 AI 模型,获取 API 密钥并开始对话。
+ tags:
+ - LobeChat
+ - 腾讯云
+ - API密钥
+ - Web UI
+ ---
+
+ # 在 LobeChat 中使用腾讯云
+
+ <Image alt={'在 LobeChat 中使用腾讯云'} cover src={'https://github.com/user-attachments/assets/aa91ca54-65fc-4e33-8c76-999f0a5d2bee'} />
+
+ [腾讯云(Tencent Cloud)](https://cloud.tencent.com/)是腾讯公司旗下的云计算服务品牌,专门为企业和开发者提供云计算服务。腾讯云提供了一系列 AI 大模型解决方案,通过这些工具可以稳定高效接入 AI 模型。
+
+ 本文档将指导你如何在 LobeChat 中接入腾讯云的 AI 模型:
+
+ <Steps>
+ ### 步骤一:获取腾讯云 API 密钥
+
+ - 首先,访问[腾讯云](https://cloud.tencent.com/)并完成注册登录
+ - 进入腾讯云控制台并导航至[知识引擎原子能力](https://console.cloud.tencent.com/lkeap)
+ - 开通大模型知识引擎,开通过程需要实名认证
+
+ <Image alt={'进入知识引擎原子能力页面'} inStep src={'https://github.com/user-attachments/assets/22e1a039-5e6e-4c40-8266-19821677618a'} />
+
+ - 在`使用OpenAI SDK方式接入`选项中,点击 `创建 API Key` 按钮,创建一个新的 API Key
+ - 在 `API key 管理` 中可以查看和管理已创建的 API Key
+ - 复制并保存创建好的 API Key
+
+ ### 步骤二:在 LobeChat 中配置腾讯云
+
+ - 访问 LobeChat 的 `应用设置` 的 `AI 服务供应商` 界面
+ - 在供应商列表中找到 `腾讯云` 的设置项
+
+ <Image alt={'填写腾讯云 API 密钥'} inStep src={'https://github.com/user-attachments/assets/a9de7780-d0cb-47d5-ad9c-fcbbec14b940'} />
+
+ - 打开腾讯云服务商并填入获取的 API 密钥
+ - 为你的助手选择一个腾讯云模型即可开始对话
+
+ <Image alt={'选择腾讯云模型'} inStep src={'https://github.com/user-attachments/assets/162bc64e-0d34-4a4e-815a-028247b73143'} />
+
+ <Callout type={'warning'}>
+ 在使用过程中你可能需要向 API 服务提供商付费,请参考腾讯云的相关费用政策。
+ </Callout>
+ </Steps>
+
+ 至此你已经可以在 LobeChat 中使用腾讯云提供的模型进行对话了。
@@ -0,0 +1,59 @@
+ ---
+ title: Using Vertex AI API Key in LobeChat
+ description: Learn how to configure and use Vertex AI models in LobeChat, get an API key, and start a conversation.
+ tags:
+ - LobeChat
+ - Vertex AI
+ - API Key
+ - Web UI
+ ---
+
+ # Using Vertex AI in LobeChat
+
+ <Image alt={'Using Vertex AI in LobeChat'} cover src={'https://github.com/user-attachments/assets/638dcd7c-2bff-4adb-bade-da2aaef872bf'} />
+
+ [Vertex AI](https://cloud.google.com/vertex-ai) is a fully managed, integrated AI development platform from Google Cloud, designed for building and deploying generative AI. It provides easy access to Vertex AI Studio, Agent Builder, and over 160 foundational models for AI development.
+
+ This document will guide you on how to connect Vertex AI models in LobeChat:
+
+ <Steps>
+ ### Step 1: Prepare a Vertex AI Project
+
+ - First, visit [Google Cloud](https://console.cloud.google.com/) and complete the registration and login process.
+ - Create a new Google Cloud project or select an existing one.
+ - Go to the [Vertex AI Console](https://console.cloud.google.com/vertex-ai).
+ - Ensure that the Vertex AI API service is enabled for the project.
+
+ <Image alt={'Accessing Vertex AI'} inStep src={'https://github.com/user-attachments/assets/c4fe4430-7860-4339-b014-4d8d264a12c0'} />
+
+ ### Step 2: Set Up API Access Permissions
+
+ - Go to the Google Cloud [IAM Management page](https://console.cloud.google.com/iam-admin/serviceaccounts) and navigate to `Service Accounts`.
+ - Create a new service account and assign a role permission to it, such as `Vertex AI User`.
+
+ <Image alt={'Creating a Service Account'} inStep src={'https://github.com/user-attachments/assets/692e7c67-f173-45da-86ef-5c69e17988e4'} />
+
+ - On the service account management page, find the service account you just created, click `Keys`, and create a new JSON format key.
+ - After successful creation, the key file will be automatically saved to your computer in JSON format. Please keep it safe.
+
+ <Image alt={'Creating a Key'} inStep src={'https://github.com/user-attachments/assets/1fb5df18-5261-483e-a445-96f52f80dd20'} />
+
+ ### Step 3: Configure Vertex AI in LobeChat
+
+ - Visit the `App Settings` and then the `AI Service Provider` interface in LobeChat.
+ - Find the settings item for `Vertex AI` in the list of providers.
+
+ <Image alt={'Entering Vertex AI API Key'} inStep src={'https://github.com/user-attachments/assets/5d672e8b-566f-4f82-bdce-947168726bc0'} />
+
+ - Open the Vertex AI service provider settings.
+ - Fill the entire content of the JSON format key you just obtained into the API Key field.
+ - Select a Vertex AI model for your assistant to start the conversation.
+
+ <Image alt={'Selecting a Vertex AI Model'} inStep src={'https://github.com/user-attachments/assets/1a7e9600-cd0f-4c82-9d32-4e61bbb351cc'} />
+
+ <Callout type={'warning'}>
+ You may need to pay the API service provider during usage. Please refer to Google Cloud's relevant fee policies.
+ </Callout>
+ </Steps>
+
+ Now you can use the models provided by Vertex AI for conversations in LobeChat.
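For reference, the JSON key downloaded in Step 2 follows Google Cloud's standard service-account key layout, and it is this entire object (with real values) that gets pasted into the API Key field in Step 3. The sketch below uses placeholder values, and real key files include a few additional URL fields (`auth_provider_x509_cert_url`, `client_x509_cert_url`):

```json
{
  "type": "service_account",
  "project_id": "your-project-id",
  "private_key_id": "0123456789abcdef",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "my-service-account@your-project-id.iam.gserviceaccount.com",
  "client_id": "123456789012345678901",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token"
}
```

Treat this file like a password: anyone holding it can call Vertex AI on your project's billing.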
@@ -0,0 +1,59 @@
+ ---
+ title: 在 LobeChat 中使用 Vertex AI API Key
+ description: 学习如何在 LobeChat 中配置和使用 Vertex AI 模型,获取 API 密钥并开始对话。
+ tags:
+ - LobeChat
+ - Vertex AI
+ - API密钥
+ - Web UI
+ ---
+
+ # 在 LobeChat 中使用 Vertex AI
+
+ <Image alt={'在 LobeChat 中使用 Vertex AI'} cover src={'https://github.com/user-attachments/assets/638dcd7c-2bff-4adb-bade-da2aaef872bf'} />
+
+ [Vertex AI](https://cloud.google.com/vertex-ai) 是 Google Cloud 的一款全面托管、集成的 AI 开发平台,旨在构建与应用生成式 AI。你可轻松访问 Vertex AI Studio、Agent Builder 以及超过 160 种基础模型,进行 AI 开发。
+
+ 本文档将指导你如何在 LobeChat 中接入 Vertex AI 的模型:
+
+ <Steps>
+ ### 步骤一:准备 Vertex AI 项目
+
+ - 首先,访问[Google Cloud](https://console.cloud.google.com/)并完成注册登录
+ - 创建一个新的 Google Cloud 项目,或选择一个已存在的项目
+ - 进入 [Vertex AI 控制台](https://console.cloud.google.com/vertex-ai)
+ - 确认该项目已开通 Vertex AI API 服务
+
+ <Image alt={'进入 Vertex AI'} inStep src={'https://github.com/user-attachments/assets/c4fe4430-7860-4339-b014-4d8d264a12c0'} />
+
+ ### 步骤二:设置 API 访问权限
+
+ - 进入 Google Cloud [IAM 管理页面](https://console.cloud.google.com/iam-admin/serviceaccounts),并导航至`服务账号`
+ - 创建一个新的服务账号,并为其分配一个角色权限,例如 `Vertex AI User`
+
+ <Image alt={'创建服务账号'} inStep src={'https://github.com/user-attachments/assets/692e7c67-f173-45da-86ef-5c69e17988e4'} />
+
+ - 在服务账号管理页面找到刚刚创建的服务账号,点击`密钥`并创建一个新的 JSON 格式密钥
+ - 创建成功后,密钥文件将会以 JSON 文件的格式自动保存到你的电脑上,请妥善保存
+
+ <Image alt={'创建密钥'} inStep src={'https://github.com/user-attachments/assets/1fb5df18-5261-483e-a445-96f52f80dd20'} />
+
+ ### 步骤三:在 LobeChat 中配置 Vertex AI
+
+ - 访问 LobeChat 的 `应用设置` 的 `AI 服务供应商` 界面
+ - 在供应商列表中找到 `Vertex AI` 的设置项
+
+ <Image alt={'填写 Vertex AI API 密钥'} inStep src={'https://github.com/user-attachments/assets/5d672e8b-566f-4f82-bdce-947168726bc0'} />
+
+ - 打开 Vertex AI 服务供应商
+ - 将刚刚获取的 JSON 格式密钥的全部内容填入 API Key 字段中
+ - 为你的助手选择一个 Vertex AI 模型即可开始对话
+
+ <Image alt={'选择 Vertex AI 模型'} inStep src={'https://github.com/user-attachments/assets/1a7e9600-cd0f-4c82-9d32-4e61bbb351cc'} />
+
+ <Callout type={'warning'}>
+ 在使用过程中你可能需要向 API 服务提供商付费,请参考 Google Cloud 的相关费用政策。
+ </Callout>
+ </Steps>
+
+ 至此你已经可以在 LobeChat 中使用 Vertex AI 提供的模型进行对话了。
@@ -0,0 +1,98 @@
+ ---
+ title: Using vLLM API Key in LobeChat
+ description: Learn how to configure and use the vLLM language model in LobeChat, obtain an API key, and start a conversation.
+ tags:
+ - LobeChat
+ - vLLM
+ - API Key
+ - Web UI
+ ---
+
+ # Using vLLM in LobeChat
+
+ <Image alt={'Using vLLM in LobeChat'} cover src={'https://github.com/user-attachments/assets/1d77cca4-7363-4a46-9ad5-10604e111d7c'} />
+
+ [vLLM](https://github.com/vllm-project/vllm) is an open-source local large language model (LLM) deployment tool that allows users to efficiently run LLM models on local devices and provides an OpenAI API-compatible service interface.
+
+ This document will guide you on how to use vLLM in LobeChat:
+
+ <Steps>
+ ### Step 1: Preparation
+
+ vLLM has certain requirements for hardware and software environments. Be sure to configure according to the following requirements:
+
+ | Hardware Requirements | |
+ | --------------------- | ----------------------------------------------------------------------- |
+ | GPU | - NVIDIA CUDA <br /> - AMD ROCm <br /> - Intel XPU |
+ | CPU | - Intel/AMD x86 <br /> - ARM AArch64 <br /> - Apple silicon |
+ | Other AI Accelerators | - Google TPU <br /> - Intel Gaudi <br /> - AWS Neuron <br /> - OpenVINO |
+
+ | Software Requirements |
+ | --------------------------------------- |
+ | - OS: Linux <br /> - Python: 3.9 – 3.12 |
+
+ ### Step 2: Install vLLM
+
+ If you are using an NVIDIA GPU, you can directly install vLLM using `pip`. However, it is recommended to use `uv` here, which is a very fast Python environment manager, to create and manage the Python environment. Please follow the [documentation](https://docs.astral.sh/uv/#getting-started) to install uv. After installing uv, you can use the following command to create a new Python environment and install vLLM:
+
+ ```shell
+ uv venv myenv --python 3.12 --seed
+ source myenv/bin/activate
+ uv pip install vllm
+ ```
+
+ Another method is to use `uv run` with the `--with [dependency]` option, which allows you to run commands such as `vllm serve` without creating an environment:
+
+ ```shell
+ uv run --with vllm vllm --help
+ ```
+
+ You can also use [conda](https://docs.conda.io/projects/conda/en/latest/user-guide/getting-started.html) to create and manage your Python environment.
+
+ ```shell
+ conda create -n myenv python=3.12 -y
+ conda activate myenv
+ pip install vllm
+ ```
+
+ <Callout type={"note"}>
+ For non-CUDA platforms, please refer to the [official documentation](https://docs.vllm.ai/en/latest/getting_started/installation/index.html#installation-index) to learn how to install vLLM.
+ </Callout>
+
+ ### Step 3: Start Local Service
+
+ vLLM can be deployed as an OpenAI API protocol-compatible server. By default, it will start the server at `http://localhost:8000`. You can specify the address using the `--host` and `--port` parameters. The server currently runs only one model at a time.
+
+ The following command will start a vLLM server and run the `Qwen2.5-1.5B-Instruct` model:
+
+ ```shell
+ vllm serve Qwen/Qwen2.5-1.5B-Instruct
+ ```
+
+ You can enable the server to check the API key in the header by passing the parameter `--api-key` or the environment variable `VLLM_API_KEY`. If neither is set, no API Key is required for access.
+
+ <Callout type={'note'}>
+ For more detailed vLLM server configuration, please refer to the [official documentation](https://docs.vllm.ai/en/latest/).
+ </Callout>
+
+ ### Step 4: Configure vLLM in LobeChat
+
+ - Access the `Application Settings` interface of LobeChat.
+ - Find the `vLLM` settings item under `Language Model`.
+
+ <Image alt={'Fill in the vLLM API Key'} inStep src={'https://github.com/user-attachments/assets/669c68bf-3f85-4a6f-bb08-d0d7fb7f7417'} />
+
+ - Open the vLLM service provider and fill in the API service address and API Key.
+
+ <Callout type={"warning"}>
+ * If your vLLM is not configured with an API Key, please leave the API Key blank.
+ * If your vLLM is running locally, please make sure to turn on `Client Request Mode`.
+ </Callout>
+
+ - Add the model you are running to the model list below.
+ - Select a vLLM model to run for your assistant and start the conversation.
+
+ <Image alt={'Select vLLM Model'} inStep src={'https://github.com/user-attachments/assets/fcdfb9c5-819a-488f-b28d-0857fe861219'} />
+ </Steps>
+
+ Now you can use the models provided by vLLM in LobeChat to have conversations.
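Before wiring the server into LobeChat, it is worth confirming it really speaks the OpenAI protocol at the address you will configure. This stdlib-only sketch assumes the default `http://localhost:8000` address from Step 3; it only builds the request (attaching an `Authorization` header when `VLLM_API_KEY` is set, mirroring the server's optional key check), so it runs even without a server, and a final `urlopen` call is left commented for when the server is up:

```python
import os
import urllib.request


def models_request(base_url: str = "http://localhost:8000") -> urllib.request.Request:
    """Build a GET request for vLLM's OpenAI-compatible /v1/models endpoint.

    The Authorization header is attached only when an API key was configured
    on the server (via --api-key or the VLLM_API_KEY environment variable);
    otherwise the endpoint accepts unauthenticated requests.
    """
    headers = {}
    api_key = os.environ.get("VLLM_API_KEY")
    if api_key:
        headers["Authorization"] = f"Bearer {api_key}"
    return urllib.request.Request(f"{base_url}/v1/models", headers=headers)


req = models_request()
print(req.full_url)  # http://localhost:8000/v1/models
# With the server running, this returns a JSON list of served models:
# print(urllib.request.urlopen(req).read().decode())
```

If the listing shows `Qwen/Qwen2.5-1.5B-Instruct`, that exact id is what you add to LobeChat's model list in Step 4.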