@lobehub/chat 1.84.22 → 1.84.23
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +25 -0
- package/changelog/v1.json +9 -0
- package/docker-compose/local/docker-compose.yml +2 -2
- package/docs/self-hosting/server-database/dokploy.mdx +2 -2
- package/docs/self-hosting/server-database/dokploy.zh-CN.mdx +98 -98
- package/package.json +1 -1
- package/src/config/aiModels/google.ts +37 -13
- package/src/config/aiModels/mistral.ts +22 -47
- package/src/config/aiModels/vertexai.ts +47 -74
- package/src/config/modelProviders/vertexai.ts +1 -1
- package/src/features/HotkeyHelperPanel/index.tsx +21 -17
package/CHANGELOG.md
CHANGED

```diff
@@ -2,6 +2,31 @@

 # Changelog

+### [Version 1.84.23](https://github.com/lobehub/lobe-chat/compare/v1.84.22...v1.84.23)
+
+<sup>Released on **2025-05-08**</sup>
+
+#### 💄 Styles
+
+- **misc**: Add new gemini & Mistral models.
+
+<br/>
+
+<details>
+<summary><kbd>Improvements and Fixes</kbd></summary>
+
+#### Styles
+
+- **misc**: Add new gemini & Mistral models, closes [#7730](https://github.com/lobehub/lobe-chat/issues/7730) ([b7753e2](https://github.com/lobehub/lobe-chat/commit/b7753e2))
+
+</details>
+
+<div align="right">
+
+[](#readme-top)
+
+</div>
+
 ### [Version 1.84.22](https://github.com/lobehub/lobe-chat/compare/v1.84.21...v1.84.22)

 <sup>Released on **2025-05-07**</sup>
```
package/changelog/v1.json
CHANGED

package/docker-compose/local/docker-compose.yml
CHANGED

```diff
@@ -127,7 +127,7 @@ services:
 LOBE_PID=\$!
 sleep 3
 if [ $(wget --timeout=5 --spider --server-response ${AUTH_CASDOOR_ISSUER}/.well-known/openid-configuration 2>&1 | grep -c 'HTTP/1.1 200 OK') -eq 0 ]; then
-  echo '⚠️
+  echo '⚠️Warning: Unable to fetch OIDC configuration from Casdoor'
   echo 'Request URL: ${AUTH_CASDOOR_ISSUER}/.well-known/openid-configuration'
   echo 'Read more at: https://lobehub.com/docs/self-hosting/server-database/docker-compose#necessary-configuration'
   echo ''
@@ -150,7 +150,7 @@ services:
   fi
 fi
 if [ $(wget --timeout=5 --spider --server-response ${S3_ENDPOINT}/minio/health/live 2>&1 | grep -c 'HTTP/1.1 200 OK') -eq 0 ]; then
-  echo '⚠️
+  echo '⚠️Warning: Unable to fetch MinIO health status'
   echo 'Request URL: ${S3_ENDPOINT}/minio/health/live'
   echo 'Read more at: https://lobehub.com/docs/self-hosting/server-database/docker-compose#necessary-configuration'
   echo ''
```
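Both compose hunks above only change the warning message printed when a health probe fails; the probe itself still greps the `wget --server-response` output for `HTTP/1.1 200 OK` lines. A minimal TypeScript sketch of that check (the sample output below is illustrative, not captured from a real run):

```typescript
// Mirrors the shell pipeline `wget ... 2>&1 | grep -c 'HTTP/1.1 200 OK'`:
// the service counts as reachable when at least one status line matches.
function looksHealthy(wgetServerResponse: string): boolean {
  const matches = wgetServerResponse
    .split("\n")
    .filter((line) => line.includes("HTTP/1.1 200 OK"));
  return matches.length > 0;
}

const sample = [
  "  HTTP/1.1 200 OK",
  "  Content-Type: application/json",
].join("\n");

console.log(looksHealthy(sample)); // true
```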
package/docs/self-hosting/server-database/dokploy.mdx
CHANGED

````diff
@@ -86,9 +86,9 @@ Switch to the Environment section, fill in the environment variables, and click

 ```shell
 # Environment variables required for building
-NIXPACKS_PKGS="
+NIXPACKS_PKGS="bun"
 NIXPACKS_INSTALL_CMD="pnpm install"
-NIXPACKS_BUILD_CMD="pnpm run build"
+NIXPACKS_BUILD_CMD="NODE_OPTIONS='--max-old-space-size=8192' pnpm run build"
 NIXPACKS_START_CMD="pnpm start"

 APP_URL=
````
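The changed `NIXPACKS_BUILD_CMD` raises Node's old-space heap limit to 8192 MiB for the build via `NODE_OPTIONS`. A small TypeScript sketch (the helper name is hypothetical) of how that flag value can be read back out of an options string:

```typescript
// Hypothetical helper: extract the --max-old-space-size value (in MiB)
// from a NODE_OPTIONS-style string; returns undefined when the flag is absent.
function maxOldSpaceSizeMiB(nodeOptions: string): number | undefined {
  const match = nodeOptions.match(/--max-old-space-size=(\d+)/);
  return match ? Number(match[1]) : undefined;
}

console.log(maxOldSpaceSizeMiB("--max-old-space-size=8192")); // 8192
console.log(maxOldSpaceSizeMiB("")); // undefined
```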
package/docs/self-hosting/server-database/dokploy.zh-CN.mdx
CHANGED

````diff
@@ -1,95 +1,95 @@
----
-title: 在 Dokploy 上部署 LobeChat 的服务端数据库版本
-description: 本文详细介绍如何在 Dokploy 中部署服务端数据库版 LobeChat,包括数据库配置、身份验证服务配置的设置步骤。
-tags:
-  - 服务端数据库
-  - Postgres
-  - Clerk
-  - Dokploy部署
-  - 数据库配置
-  - 身份验证服务
-  - 环境变量配置
----
-
-# 在 Dokploy 上部署服务端数据库版
-
-本文将详细介绍如何在 Dokploy 中部署服务端数据库版 LobeChat。
-
-## 一、准备工作
-
-### 部署 Dokploy 并进行相关设置
-
-```shell
-curl -sSL https://dokploy.com/install.sh | sh
-```
-
-1. 在 Dokploy 的 Settings / Git 处根据提示将 Github 绑定到 Dokploy
-
-
-
-2. 进入 Projects 界面创建一个 Project
-
-
-
-### 配置 S3 存储服务
-
-在服务端数据库中我们需要配置 S3 存储服务来存储文件,详细配置教程请参考 使用 Vercel 部署中 [配置 S3 储存服务](https://lobehub.com/zh/docs/self-hosting/server-database/vercel#%E4%B8%89%E3%80%81-%E9%85%8D%E7%BD%AE-s-3-%E5%AD%98%E5%82%A8%E6%9C%8D%E5%8A%A1)。配置完成后你将获得以下环境变量:
-
-```shell
+---
+title: 在 Dokploy 上部署 LobeChat 的服务端数据库版本
+description: 本文详细介绍如何在 Dokploy 中部署服务端数据库版 LobeChat,包括数据库配置、身份验证服务配置的设置步骤。
+tags:
+  - 服务端数据库
+  - Postgres
+  - Clerk
+  - Dokploy部署
+  - 数据库配置
+  - 身份验证服务
+  - 环境变量配置
+---
+
+# 在 Dokploy 上部署服务端数据库版
+
+本文将详细介绍如何在 Dokploy 中部署服务端数据库版 LobeChat。
+
+## 一、准备工作
+
+### 部署 Dokploy 并进行相关设置
+
+```shell
+curl -sSL https://dokploy.com/install.sh | sh
+```
+
+1. 在 Dokploy 的 Settings / Git 处根据提示将 Github 绑定到 Dokploy
+
+
+
+2. 进入 Projects 界面创建一个 Project
+
+
+
+### 配置 S3 存储服务
+
+在服务端数据库中我们需要配置 S3 存储服务来存储文件,详细配置教程请参考 使用 Vercel 部署中 [配置 S3 储存服务](https://lobehub.com/zh/docs/self-hosting/server-database/vercel#%E4%B8%89%E3%80%81-%E9%85%8D%E7%BD%AE-s-3-%E5%AD%98%E5%82%A8%E6%9C%8D%E5%8A%A1)。配置完成后你将获得以下环境变量:
+
+```shell
 S3_ACCESS_KEY_ID=
 S3_SECRET_ACCESS_KEY=
 S3_ENDPOINT=
 S3_BUCKET=
 S3_PUBLIC_DOMAIN=
-S3_ENABLE_PATH_STYLE=
-```
-
-### 配置 Clerk 身份验证服务
-
-获取 `NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY` 、`CLERK_SECRET_KEY` 、`CLERK_WEBHOOK_SECRET` 这三个环境变量,Clerk 的详细配置流程请参考 使用 Vercel 部署中 [配置身份验证服务](https://lobehub.com/zh/docs/self-hosting/server-database/vercel#二、-配置身份验证服务)
-
-```shell
+S3_ENABLE_PATH_STYLE=
+```
+
+### 配置 Clerk 身份验证服务
+
+获取 `NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY` 、`CLERK_SECRET_KEY` 、`CLERK_WEBHOOK_SECRET` 这三个环境变量,Clerk 的详细配置流程请参考 使用 Vercel 部署中 [配置身份验证服务](https://lobehub.com/zh/docs/self-hosting/server-database/vercel#二、-配置身份验证服务)
+
+```shell
 NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY=pk_live_xxxxxxxxxxx
 CLERK_SECRET_KEY=sk_live_xxxxxxxxxxxxxxxxxxxxxx
-CLERK_WEBHOOK_SECRET=whsec_xxxxxxxxxxxxxxxxxxxxxx
-```
-
-## 二、在 Dokploy 上部署数据库
-
-进入前面创建的 Project,点击 Create Service 选择 Database,在 Database 界面选择 PostgreSQL ,然后设置数据库名、用户、密码,在 Docker image 中填入 `pgvector/pgvector:pg17` 最后点击 Create 创建数据库。
-
-
-
+CLERK_WEBHOOK_SECRET=whsec_xxxxxxxxxxxxxxxxxxxxxx
+```
+
+## 二、在 Dokploy 上部署数据库
+
+进入前面创建的 Project,点击 Create Service 选择 Database,在 Database 界面选择 PostgreSQL ,然后设置数据库名、用户、密码,在 Docker image 中填入 `pgvector/pgvector:pg17` 最后点击 Create 创建数据库。
+
+
+
 进入创建的数据库,在 External Credentials 设置一个未被占用的端口,使其能能通过外部访问,否则 LobeChat 将无法连接到该数据库。
-你可以在 External Host 查看 Postgres 数据库连接 URL ,如下:
-
-```shell
-postgresql://postgres:wAbLxfXSwkxxxxxx@45.577.281.48:5432/postgres
-```
-
-最后点击 Deploy 部署数据库
-
-
-
-## 在 Dokploy 上部署 LobeChat
-
-点击 Create Service 选择 Application,创建 LobeChat 应用
-
-
-
-进入创建的 LobeChat 应用,选择你 fork 的 lobe-chat 项目及分支,点击 Save 保存
-
-
-
-切换到 Environment ,在其中填入环境变量,点击保存。
-
-
-
-```shell
+你可以在 External Host 查看 Postgres 数据库连接 URL ,如下:
+
+```shell
+postgresql://postgres:wAbLxfXSwkxxxxxx@45.577.281.48:5432/postgres
+```
+
+最后点击 Deploy 部署数据库
+
+
+
+## 在 Dokploy 上部署 LobeChat
+
+点击 Create Service 选择 Application,创建 LobeChat 应用
+
+
+
+进入创建的 LobeChat 应用,选择你 fork 的 lobe-chat 项目及分支,点击 Save 保存
+
+
+
+切换到 Environment ,在其中填入环境变量,点击保存。
+
+
+
+```shell
 # 构建所必需的环境变量
-NIXPACKS_PKGS="
+NIXPACKS_PKGS="bun"
 NIXPACKS_INSTALL_CMD="pnpm install"
-NIXPACKS_BUILD_CMD="pnpm run build"
+NIXPACKS_BUILD_CMD="NODE_OPTIONS='--max-old-space-size=8192' pnpm run build"
 NIXPACKS_START_CMD="pnpm start"

 APP_URL=
@@ -120,19 +120,19 @@ S3_ENABLE_PATH_STYLE=
 # OpenAI 相关配置
 OPENAI_API_KEY=
 OPENAI_MODEL_LIST=
-OPENAI_PROXY_URL=
-```
-
-添加完环境变量并保存后,点击 Deploy 进行部署,你可以在 Deployments 处查看部署进程及日志信息
-
-
-
-部署成功后在 Domains 页面,为你的 LobeChat 应用绑定自己的域名并申请证书。
-
-
-
-## 验证 LobeChat 是否正常工作
-
-进入你的 LobeChat 网址,如果你点击左上角登录,可以正常显示登录弹窗,那么说明你已经配置成功了,尽情享用吧~
-
-
+OPENAI_PROXY_URL=
+```
+
+添加完环境变量并保存后,点击 Deploy 进行部署,你可以在 Deployments 处查看部署进程及日志信息
+
+
+
+部署成功后在 Domains 页面,为你的 LobeChat 应用绑定自己的域名并申请证书。
+
+
+
+## 验证 LobeChat 是否正常工作
+
+进入你的 LobeChat 网址,如果你点击左上角登录,可以正常显示登录弹窗,那么说明你已经配置成功了,尽情享用吧~
+
+
````
package/package.json
CHANGED

```diff
@@ -1,6 +1,6 @@
 {
   "name": "@lobehub/chat",
-  "version": "1.84.22",
+  "version": "1.84.23",
  "description": "Lobe Chat - an open-source, high-performance chatbot framework that supports speech synthesis, multimodal, and extensible Function Call plugin system. Supports one-click free deployment of your private ChatGPT/LLM web application.",
   "keywords": [
     "framework",
```
package/src/config/aiModels/google.ts
CHANGED

```diff
@@ -9,16 +9,17 @@ const googleChatModels: AIChatModelCard[] = [
       vision: true,
     },
     contextWindowTokens: 1_048_576 + 65_536,
-    description:
-
+    description:
+      'Gemini 2.5 Pro Experimental 是 Google 最先进的思维模型,能够对代码、数学和STEM领域的复杂问题进行推理,以及使用长上下文分析大型数据集、代码库和文档。',
+    displayName: 'Gemini 2.5 Pro Experimental 03-25',
     enabled: true,
-    id: 'gemini-2.5-
+    id: 'gemini-2.5-pro-exp-03-25',
     maxOutput: 65_536,
     pricing: {
-      input: 0
-      output:
+      input: 0,
+      output: 0,
     },
-    releasedAt: '2025-
+    releasedAt: '2025-03-25',
     settings: {
       searchImpl: 'params',
       searchProvider: 'google',
@@ -34,16 +34,15 @@ const googleChatModels: AIChatModelCard[] = [
     },
     contextWindowTokens: 1_048_576 + 65_536,
     description:
-      'Gemini 2.5 Pro
-    displayName: 'Gemini 2.5 Pro
-
-    id: 'gemini-2.5-pro-exp-03-25',
+      'Gemini 2.5 Pro Preview 是 Google 最先进的思维模型,能够对代码、数学和STEM领域的复杂问题进行推理,以及使用长上下文分析大型数据集、代码库和文档。',
+    displayName: 'Gemini 2.5 Pro Preview 05-06 (Paid)',
+    id: 'gemini-2.5-pro-preview-05-06',
     maxOutput: 65_536,
     pricing: {
-      input:
-      output:
+      input: 1.25, // prompts <= 200k tokens
+      output: 10, // prompts <= 200k tokens
     },
-    releasedAt: '2025-
+    releasedAt: '2025-05-06',
     settings: {
       searchImpl: 'params',
       searchProvider: 'google',
@@ -74,6 +74,30 @@ const googleChatModels: AIChatModelCard[] = [
     },
     type: 'chat',
   },
+  {
+    abilities: {
+      functionCall: true,
+      reasoning: true,
+      search: true,
+      vision: true,
+    },
+    contextWindowTokens: 1_048_576 + 65_536,
+    description: 'Gemini 2.5 Flash Preview 是 Google 性价比最高的模型,提供全面的功能。',
+    displayName: 'Gemini 2.5 Flash Preview 04-17',
+    enabled: true,
+    id: 'gemini-2.5-flash-preview-04-17',
+    maxOutput: 65_536,
+    pricing: {
+      input: 0.15,
+      output: 3.5, // Thinking
+    },
+    releasedAt: '2025-04-17',
+    settings: {
+      searchImpl: 'params',
+      searchProvider: 'google',
+    },
+    type: 'chat',
+  },
   {
     abilities: {
       reasoning: true,
```
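The new Google entries above carry `pricing.input` / `pricing.output` values; assuming these are USD per million tokens (the `// prompts <= 200k tokens` comments suggest tiered pricing beyond that), a rough per-request cost estimate can be sketched as:

```typescript
// Sketch under an assumption: pricing values are USD per 1M tokens.
interface Pricing {
  input: number;
  output: number;
}

function estimateCostUSD(
  inputTokens: number,
  outputTokens: number,
  pricing: Pricing,
): number {
  return (inputTokens * pricing.input + outputTokens * pricing.output) / 1_000_000;
}

// Values from the gemini-2.5-pro-preview-05-06 entry above.
const pro25Preview: Pricing = { input: 1.25, output: 10 };
console.log(estimateCostUSD(100_000, 10_000, pro25Preview)); // 0.225
```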
package/src/config/aiModels/mistral.ts
CHANGED

```diff
@@ -4,6 +4,22 @@ import { AIChatModelCard } from '@/types/aiModel';
 // https://mistral.ai/products/la-plateforme#pricing

 const mistralChatModels: AIChatModelCard[] = [
+  {
+    abilities: {
+      functionCall: true,
+    },
+    contextWindowTokens: 128_000,
+    description:
+      'Mistral Medium 3 以 8 倍的成本提供最先进的性能,并从根本上简化了企业部署。',
+    displayName: 'Mistral Medium 3',
+    enabled: true,
+    id: 'mistral-medium-latest',
+    pricing: {
+      input: 0.4,
+      output: 2,
+    },
+    type: 'chat',
+  },
   {
     abilities: {
       functionCall: true,
@@ -12,11 +28,10 @@ const mistralChatModels: AIChatModelCard[] = [
     description:
       'Mistral Nemo是一个与Nvidia合作开发的12B模型,提供出色的推理和编码性能,易于集成和替换。',
     displayName: 'Mistral Nemo',
-    enabled: true,
     id: 'open-mistral-nemo',
     pricing: {
-      input: 0,
-      output: 0,
+      input: 0.15,
+      output: 0.15,
     },
     type: 'chat',
   },
@@ -26,7 +41,7 @@ const mistralChatModels: AIChatModelCard[] = [
     },
     contextWindowTokens: 32_000,
     description: 'Mistral Small是成本效益高、快速且可靠的选项,适用于翻译、摘要和情感分析等用例。',
-    displayName: 'Mistral Small',
+    displayName: 'Mistral Small 3.1',
     enabled: true,
     id: 'mistral-small-latest',
     pricing: {
@@ -42,7 +57,7 @@ const mistralChatModels: AIChatModelCard[] = [
     contextWindowTokens: 131_072,
     description:
       'Mistral Large是旗舰大模型,擅长多语言任务、复杂推理和代码生成,是高端应用的理想选择。',
-    displayName: 'Mistral Large',
+    displayName: 'Mistral Large 24.11',
     enabled: true,
     id: 'mistral-large-latest',
     pricing: {
@@ -93,11 +108,10 @@ const mistralChatModels: AIChatModelCard[] = [
     description:
       'Pixtral 模型在图表和图理解、文档问答、多模态推理和指令遵循等任务上表现出强大的能力,能够以自然分辨率和宽高比摄入图像,还能够在长达 128K 令牌的长上下文窗口中处理任意数量的图像。',
     displayName: 'Pixtral 12B',
-    enabled: true,
     id: 'pixtral-12b-2409',
     pricing: {
-      input: 0,
-      output: 0,
+      input: 0.15,
+      output: 0.15,
     },
     type: 'chat',
   },
@@ -129,45 +143,6 @@ const mistralChatModels: AIChatModelCard[] = [
     },
     type: 'chat',
   },
-  {
-    contextWindowTokens: 32_768,
-    description:
-      'Mistral 7B是一款紧凑但高性能的模型,擅长批量处理和简单任务,如分类和文本生成,具有良好的推理能力。',
-    displayName: 'Mistral 7B',
-    id: 'open-mistral-7b', // Deprecated on 2025/03/30
-    pricing: {
-      input: 0.25,
-      output: 0.25,
-    },
-    type: 'chat',
-  },
-  {
-    contextWindowTokens: 32_768,
-    description:
-      'Mixtral 8x7B是一个稀疏专家模型,利用多个参数提高推理速度,适合处理多语言和代码生成任务。',
-    displayName: 'Mixtral 8x7B',
-    id: 'open-mixtral-8x7b', // Deprecated on 2025/03/30
-    pricing: {
-      input: 0.7,
-      output: 0.7,
-    },
-    type: 'chat',
-  },
-  {
-    abilities: {
-      functionCall: true,
-    },
-    contextWindowTokens: 65_536,
-    description:
-      'Mixtral 8x22B是一个更大的专家模型,专注于复杂任务,提供出色的推理能力和更高的吞吐量。',
-    displayName: 'Mixtral 8x22B',
-    id: 'open-mixtral-8x22b', // Deprecated on 2025/03/30
-    pricing: {
-      input: 2,
-      output: 6,
-    },
-    type: 'chat',
-  },
   {
     contextWindowTokens: 256_000,
     description:
```
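With `mistral-medium-latest` added and the deprecated open-weight entries removed, consumers of this list resolve a card by `id`. A minimal lookup over data shaped like the diff above (a reduced local shape, not the repo's real `AIChatModelCard` type):

```typescript
// Field names taken from the diff; only the fields shown here are assumed.
interface ModelCard {
  id: string;
  displayName: string;
  pricing?: { input: number; output: number };
}

const cards: ModelCard[] = [
  { id: "mistral-medium-latest", displayName: "Mistral Medium 3", pricing: { input: 0.4, output: 2 } },
  { id: "open-mistral-nemo", displayName: "Mistral Nemo", pricing: { input: 0.15, output: 0.15 } },
];

const medium = cards.find((card) => card.id === "mistral-medium-latest");
console.log(medium?.displayName); // Mistral Medium 3
```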
package/src/config/aiModels/vertexai.ts
CHANGED

```diff
@@ -5,59 +5,59 @@ const vertexaiChatModels: AIChatModelCard[] = [
   {
     abilities: {
       functionCall: true,
+      reasoning: true,
       vision: true,
     },
-    contextWindowTokens:
+    contextWindowTokens: 1_048_576 + 65_536,
     description:
-      'Gemini 2.
-    displayName: 'Gemini 2.
+      'Gemini 2.5 Pro Preview 是 Google 最先进的思维模型,能够对代码、数学和STEM领域的复杂问题进行推理,以及使用长上下文分析大型数据集、代码库和文档。',
+    displayName: 'Gemini 2.5 Pro Preview 05-06',
     enabled: true,
-    id: 'gemini-2.
-    maxOutput:
+    id: 'gemini-2.5-pro-preview-05-06',
+    maxOutput: 65_536,
     pricing: {
-
-
-      output: 0,
+      input: 1.25, // prompts <= 200k tokens
+      output: 10, // prompts <= 200k tokens
     },
-    releasedAt: '2025-
+    releasedAt: '2025-05-06',
     type: 'chat',
   },
   {
     abilities: {
       functionCall: true,
+      reasoning: true,
       vision: true,
     },
-    contextWindowTokens: 1_048_576 +
+    contextWindowTokens: 1_048_576 + 65_536,
     description:
-      'Gemini 2.
-    displayName: 'Gemini 2.
-
-
-    maxOutput: 8192,
+      'Gemini 2.5 Pro Preview 是 Google 最先进的思维模型,能够对代码、数学和STEM领域的复杂问题进行推理,以及使用长上下文分析大型数据集、代码库和文档。',
+    displayName: 'Gemini 2.5 Pro Preview 03-25',
+    id: 'gemini-2.5-pro-preview-03-25',
+    maxOutput: 65_536,
     pricing: {
-
-
-      output: 0.6,
+      input: 1.25, // prompts <= 200k tokens
+      output: 10, // prompts <= 200k tokens
     },
-    releasedAt: '2025-
+    releasedAt: '2025-04-09',
     type: 'chat',
   },
   {
     abilities: {
       functionCall: true,
+      reasoning: true,
       vision: true,
     },
-    contextWindowTokens: 1_048_576 +
-    description: 'Gemini 2.
-    displayName: 'Gemini 2.
-
-
+    contextWindowTokens: 1_048_576 + 65_536,
+    description: 'Gemini 2.5 Flash Preview 是 Google 性价比最高的模型,提供全面的功能。',
+    displayName: 'Gemini 2.5 Flash Preview 04-17',
+    enabled: true,
+    id: 'gemini-2.5-flash-preview-04-17',
+    maxOutput: 65_536,
     pricing: {
-
-
-      output: 0.3,
+      input: 0.15,
+      output: 3.5, // Thinking
     },
-    releasedAt: '2025-
+    releasedAt: '2025-04-17',
     type: 'chat',
   },
   {
@@ -68,8 +68,8 @@ const vertexaiChatModels: AIChatModelCard[] = [
     contextWindowTokens: 1_048_576 + 8192,
     description:
       'Gemini 2.0 Flash 提供下一代功能和改进,包括卓越的速度、原生工具使用、多模态生成和1M令牌上下文窗口。',
-    displayName: 'Gemini 2.0 Flash
-    id: 'gemini-2.0-flash
+    displayName: 'Gemini 2.0 Flash',
+    id: 'gemini-2.0-flash',
     maxOutput: 8192,
     pricing: {
       cachedInput: 0.0375,
@@ -81,33 +81,33 @@ const vertexaiChatModels: AIChatModelCard[] = [
   },
   {
     abilities: {
-
+      functionCall: true,
       vision: true,
     },
-    contextWindowTokens: 1_048_576 +
-    description:
-
-
-
-    id: 'gemini-2.0-flash-thinking-exp-01-21',
-    maxOutput: 65_536,
+    contextWindowTokens: 1_048_576 + 8192,
+    description: 'Gemini 2.0 Flash 模型变体,针对成本效益和低延迟等目标进行了优化。',
+    displayName: 'Gemini 2.0 Flash-Lite',
+    id: 'gemini-2.0-flash-lite',
+    maxOutput: 8192,
     pricing: {
-      cachedInput: 0,
-      input: 0,
-      output: 0,
+      cachedInput: 0.018_75,
+      input: 0.075,
+      output: 0.3,
     },
-    releasedAt: '2025-
+    releasedAt: '2025-02-05',
     type: 'chat',
   },
   {
-    abilities: {
+    abilities: {
+      functionCall: true,
+      vision: true
+    },
     contextWindowTokens: 1_000_000 + 8192,
     description: 'Gemini 1.5 Flash 002 是一款高效的多模态模型,支持广泛应用的扩展。',
     displayName: 'Gemini 1.5 Flash 002',
     id: 'gemini-1.5-flash-002',
     maxOutput: 8192,
     pricing: {
-      cachedInput: 0.018_75,
       input: 0.075,
       output: 0.3,
     },
@@ -115,21 +115,10 @@ const vertexaiChatModels: AIChatModelCard[] = [
     type: 'chat',
   },
   {
-    abilities: {
-
-
-    displayName: 'Gemini 1.5 Flash 001',
-    id: 'gemini-1.5-flash-001',
-    maxOutput: 8192,
-    pricing: {
-      cachedInput: 0.018_75,
-      input: 0.075,
-      output: 0.3,
+    abilities: {
+      functionCall: true,
+      vision: true
     },
-    type: 'chat',
-  },
-  {
-    abilities: { functionCall: true, vision: true },
     contextWindowTokens: 2_000_000 + 8192,
     description:
       'Gemini 1.5 Pro 002 是最新的生产就绪模型,提供更高质量的输出,特别在数学、长上下文和视觉任务方面有显著提升。',
@@ -137,28 +126,12 @@ const vertexaiChatModels: AIChatModelCard[] = [
     id: 'gemini-1.5-pro-002',
     maxOutput: 8192,
     pricing: {
-      cachedInput: 0.315,
       input: 1.25,
       output: 2.5,
     },
     releasedAt: '2024-09-24',
     type: 'chat',
   },
-  {
-    abilities: { functionCall: true, vision: true },
-    contextWindowTokens: 2_000_000 + 8192,
-    description: 'Gemini 1.5 Pro 001 是可扩展的多模态AI解决方案,支持广泛的复杂任务。',
-    displayName: 'Gemini 1.5 Pro 001',
-    id: 'gemini-1.5-pro-001',
-    maxOutput: 8192,
-    pricing: {
-      cachedInput: 0.875,
-      input: 3.5,
-      output: 10.5,
-    },
-    releasedAt: '2024-02-15',
-    type: 'chat',
-  },
 ];

 export const allModels = [...vertexaiChatModels];
```
package/src/config/modelProviders/vertexai.ts
CHANGED

```diff
@@ -8,7 +8,7 @@ const VertexAI: ModelProviderCard = {
     'Google 的 Gemini 系列是其最先进、通用的 AI模型,由 Google DeepMind 打造,专为多模态设计,支持文本、代码、图像、音频和视频的无缝理解与处理。适用于从数据中心到移动设备的多种环境,极大提升了AI模型的效率与应用广泛性。',
   id: 'vertexai',
   modelsUrl: 'https://console.cloud.google.com/vertex-ai/model-garden',
-  name: '
+  name: 'Vertex AI',
   settings: {
     disableBrowserRequest: true,
     showModelFetcher: false,
```
package/src/features/HotkeyHelperPanel/index.tsx
CHANGED

```diff
@@ -1,6 +1,7 @@
 'use client';

-import {
+import { Grid, Icon, Modal, Segmented } from '@lobehub/ui';
+import { MessageSquare, Settings2 } from 'lucide-react';
 import { memo, useState } from 'react';
 import { useTranslation } from 'react-i18next';

@@ -20,39 +21,42 @@ const HotkeyHelperPanel = memo(() => {
   const handleClose = () => updateSystemStatus({ showHotkeyHelper: false });

   return (
-    <
-
-
-
-      onClose={handleClose}
+    <Modal
+      centered
+      footer={null}
+      onCancel={handleClose}
       open={open}
-      placement={'bottom'}
       styles={{
-
-
+        body: { paddingBlock: 24 },
+        mask: {
+          backdropFilter: 'blur(8px)',
+          backgroundColor: 'rgba(0, 0, 0, 0.5)',
+        },
       }}
       title={
-        <
-
-
-          items={[
+        <Segmented
+          onChange={(key) => setActive(key as HotkeyGroupId)}
+          options={[
             {
-
+              icon: <Icon icon={Settings2} />,
               label: t('hotkey.group.essential'),
+              value: HotkeyGroupEnum.Essential,
             },
             {
-
+              icon: <Icon icon={MessageSquare} />,
               label: t('hotkey.group.conversation'),
+              value: HotkeyGroupEnum.Conversation,
             },
           ]}
-
+          value={active}
+          variant="filled"
         />
       }
     >
       <Grid gap={32}>
         <HotkeyContent groupId={active} />
       </Grid>
-    </
+    </Modal>
   );
 });
```