@lobehub/chat 1.133.3 → 1.133.4
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.github/workflows/claude-translator.yml +2 -3
- package/.github/workflows/issue-auto-comments.yml +4 -9
- package/.github/workflows/issue-close-require.yml +3 -6
- package/CHANGELOG.md +33 -0
- package/changelog/v1.json +12 -0
- package/package.json +1 -1
- package/packages/model-bank/src/aiModels/aihubmix.ts +34 -1
- package/packages/model-bank/src/aiModels/anthropic.ts +3 -64
- package/packages/model-bank/src/aiModels/novita.ts +2 -2
- package/packages/model-bank/src/aiModels/qwen.ts +21 -0
- package/packages/model-bank/src/aiModels/zhipu.ts +255 -62
- package/packages/model-runtime/src/providers/anthropic/index.test.ts +0 -12
- package/packages/model-runtime/src/providers/novita/index.ts +2 -1
- package/packages/model-runtime/src/providers/novita/type.ts +4 -0
- package/packages/model-runtime/src/providers/ollamacloud/index.ts +1 -1
- package/packages/model-runtime/src/providers/openrouter/index.ts +11 -4
- package/packages/utils/package.json +0 -1
- package/src/app/[variants]/(main)/chat/(workspace)/@conversation/features/ChatMinimap/index.tsx +51 -23
- package/src/config/modelProviders/anthropic.ts +0 -30
- package/src/config/modelProviders/ollamacloud.ts +1 -0
- package/src/config/modelProviders/zhipu.ts +4 -21
- package/src/features/Conversation/components/WideScreenContainer/index.tsx +3 -0

package/.github/workflows/claude-translator.yml
CHANGED

```diff
@@ -76,12 +76,11 @@ jobs:
 
 - Title: English translation (if non-English)
 - Content format:
-> [!NOTE]
-> This issue/comment/review was translated by Claude.
-
 [Translated content]
 
 ---
+> This issue/comment/review was translated by Claude.
+
 <details>
 <summary>Original Content</summary>
 [Original content]
```
package/.github/workflows/issue-auto-comments.yml
CHANGED

```diff
@@ -28,8 +28,7 @@ jobs:
 👀 @{{ author }}
 
 Thank you for raising an issue. We will investigate into the matter and get back to you as soon as possible.
-Please make sure you have given us as much context as possible
-非常感谢您提交 issue。我们会尽快调查此事,并尽快回复您。 请确保您已经提供了尽可能多的背景信息。
+Please make sure you have given us as much context as possible.
 - name: Auto Comment on Issues Closed
 uses: wow-actions/auto-comment@v1
 with:
@@ -37,8 +36,7 @@ jobs:
 issuesClosed: |
 ✅ @{{ author }}
 
-This issue is closed, If you have any questions, you can comment and reply
-此问题已经关闭。如果您有任何问题,可以留言并回复。
+This issue is closed, If you have any questions, you can comment and reply.
 - name: Auto Comment on Pull Request Opened
 uses: wow-actions/auto-comment@v1
 with:
@@ -48,9 +46,7 @@ jobs:
 
 Thank you for raising your pull request and contributing to our Community
 Please make sure you have followed our contributing guidelines. We will review it as soon as possible.
-If you encounter any problems, please feel free to connect with us
-非常感谢您提出拉取请求并为我们的社区做出贡献,请确保您已经遵循了我们的贡献指南,我们会尽快审查它。
-如果您遇到任何问题,请随时与我们联系。
+If you encounter any problems, please feel free to connect with us.
 - name: Auto Comment on Pull Request Merged
 uses: actions-cool/pr-welcome@main
 if: github.event.pull_request.merged == true
@@ -59,8 +55,7 @@ jobs:
 comment: |
 ❤️ Great PR @${{ github.event.pull_request.user.login }} ❤️
 
-The growth of project is inseparable from user feedback and contribution, thanks for your contribution! If you are interesting with the lobehub developer community, please join our [discord](https://discord.com/invite/AYFPHvv2jT) and then dm @arvinxx or @canisminor1990. They will invite you to our private developer channel. We are talking about the lobe-chat development or sharing ai newsletter around the world
-项目的成长离不开用户反馈和贡献,感谢您的贡献! 如果您对 LobeHub 开发者社区感兴趣,请加入我们的 [discord](https://discord.com/invite/AYFPHvv2jT),然后私信 @arvinxx 或 @canisminor1990。他们会邀请您加入我们的私密开发者频道。我们将会讨论关于 Lobe Chat 的开发,分享和讨论全球范围内的 AI 消息。
+The growth of project is inseparable from user feedback and contribution, thanks for your contribution! If you are interesting with the lobehub developer community, please join our [discord](https://discord.com/invite/AYFPHvv2jT) and then dm @arvinxx or @canisminor1990. They will invite you to our private developer channel. We are talking about the lobe-chat development or sharing ai newsletter around the world.
 emoji: 'hooray'
 pr-emoji: '+1, heart'
 - name: Remove inactive
```
package/.github/workflows/issue-close-require.yml
CHANGED

```diff
@@ -38,8 +38,7 @@ jobs:
 body: |
 👋 @{{ author }}
 <br/>
-Since the issue was labeled with `✅ Fixed`, but no response in 3 days. This issue will be closed. If you have any questions, you can comment and reply
-由于该 issue 被标记为已修复,同时 3 天未收到回应。现关闭 issue,若有任何问题,可评论回复。
+Since the issue was labeled with `✅ Fixed`, but no response in 3 days. This issue will be closed. If you have any questions, you can comment and reply.
 - name: need reproduce
 uses: actions-cool/issues-helper@v3
 with:
@@ -50,8 +49,7 @@ jobs:
 body: |
 👋 @{{ author }}
 <br/>
-Since the issue was labeled with `🤔 Need Reproduce`, but no response in 3 days. This issue will be closed. If you have any questions, you can comment and reply
-由于该 issue 被标记为需要更多信息,却 3 天未收到回应。现关闭 issue,若有任何问题,可评论回复。
+Since the issue was labeled with `🤔 Need Reproduce`, but no response in 3 days. This issue will be closed. If you have any questions, you can comment and reply.
 - name: need reproduce
 uses: actions-cool/issues-helper@v3
 with:
@@ -62,5 +60,4 @@ jobs:
 body: |
 👋 @{{ github.event.issue.user.login }}
 <br/>
-Since the issue was labeled with `🙅🏻♀️ WON'T DO`, and no response in 3 days. This issue will be closed. If you have any questions, you can comment and reply
-由于该 issue 被标记为暂不处理,同时 3 天未收到回应。现关闭 issue,若有任何问题,可评论回复。
+Since the issue was labeled with `🙅🏻♀️ WON'T DO`, and no response in 3 days. This issue will be closed. If you have any questions, you can comment and reply.
```
package/CHANGELOG.md
CHANGED

```diff
@@ -2,6 +2,39 @@
 
 # Changelog
 
+### [Version 1.133.4](https://github.com/lobehub/lobe-chat/compare/v1.133.3...v1.133.4)
+
+<sup>Released on **2025-10-01**</sup>
+
+#### 🐛 Bug Fixes
+
+- **misc**: OllamaCloud error.
+
+#### 💄 Styles
+
+- **misc**: Fix chat minimap overflow.
+
+<br/>
+
+<details>
+<summary><kbd>Improvements and Fixes</kbd></summary>
+
+#### What's fixed
+
+- **misc**: OllamaCloud error, closes [#9481](https://github.com/lobehub/lobe-chat/issues/9481) ([55c45a5](https://github.com/lobehub/lobe-chat/commit/55c45a5))
+
+#### Styles
+
+- **misc**: Fix chat minimap overflow, closes [#9507](https://github.com/lobehub/lobe-chat/issues/9507) ([d835c33](https://github.com/lobehub/lobe-chat/commit/d835c33))
+
+</details>
+
+<div align="right">
+
+[](#readme-top)
+
+</div>
+
 ### [Version 1.133.3](https://github.com/lobehub/lobe-chat/compare/v1.133.2...v1.133.3)
 
 <sup>Released on **2025-10-01**</sup>
```
package/changelog/v1.json
CHANGED
package/package.json
CHANGED

```diff
@@ -1,6 +1,6 @@
 {
   "name": "@lobehub/chat",
-  "version": "1.133.3",
+  "version": "1.133.4",
   "description": "Lobe Chat - an open-source, high-performance chatbot framework that supports speech synthesis, multimodal, and extensible Function Call plugin system. Supports one-click free deployment of your private ChatGPT/LLM web application.",
   "keywords": [
     "framework",
```
package/packages/model-bank/src/aiModels/aihubmix.ts
CHANGED

```diff
@@ -526,6 +526,40 @@ const aihubmixModels: AIChatModelCard[] = [
     },
     type: 'chat',
   },
+  {
+    abilities: {
+      functionCall: true,
+      reasoning: true,
+      search: true,
+      vision: true,
+    },
+    contextWindowTokens: 200_000,
+    description:
+      'Sonnet 4.5 是世界上最好的代理、编码和计算机使用模型。它也是我们在长时间运行任务中最准确、最详细的模型,具有增强的编码、金融和网络安全领域知识。',
+    displayName: 'Claude Sonnet 4.5',
+    enabled: true,
+    id: 'claude-sonnet-4-5-20250929',
+    maxOutput: 64_000,
+    pricing: {
+      units: [
+        { name: 'textInput', rate: 3, strategy: 'fixed', unit: 'millionTokens' },
+        { name: 'textOutput', rate: 15, strategy: 'fixed', unit: 'millionTokens' },
+        { name: 'textInput_cacheRead', rate: 0.3, strategy: 'fixed', unit: 'millionTokens' },
+        {
+          lookup: { prices: { '1h': 6, '5m': 3.75 }, pricingParams: ['ttl'] },
+          name: 'textInput_cacheWrite',
+          strategy: 'lookup',
+          unit: 'millionTokens',
+        },
+      ],
+    },
+    releasedAt: '2025-09-29',
+    settings: {
+      extendParams: ['disableContextCaching', 'enableReasoning', 'reasoningBudgetToken'],
+      searchImpl: 'params',
+    },
+    type: 'chat',
+  },
   {
     abilities: {
       functionCall: true,
@@ -537,7 +571,6 @@ const aihubmixModels: AIChatModelCard[] = [
     description:
       'Claude Sonnet 4 可以产生近乎即时的响应或延长的逐步思考,用户可以清晰地看到这些过程。API 用户还可以对模型思考的时间进行细致的控制',
     displayName: 'Claude Sonnet 4',
-    enabled: true,
     id: 'claude-sonnet-4-20250514',
     maxOutput: 64_000,
     pricing: {
```
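The Claude Sonnet 4.5 card added above prices cache writes through a `lookup` unit keyed by cache TTL ('5m' → 3.75, '1h' → 6 USD per million tokens). A minimal cost calculation under that reading (hypothetical helper; lobe-chat's real pricing code resolves these units elsewhere):

```typescript
// Hypothetical resolver for a TTL-keyed 'lookup' pricing unit, as in the
// Claude Sonnet 4.5 entry above. Rates are USD per million tokens.
type TtlLookupUnit = {
  lookup: { prices: Record<string, number>; pricingParams: ['ttl'] };
  name: string;
  strategy: 'lookup';
  unit: 'millionTokens';
};

export const cacheWriteCost = (
  unit: TtlLookupUnit,
  ttl: string,
  tokens: number,
): number | undefined => {
  const rate = unit.lookup.prices[ttl];
  // Unknown TTL keys fall through as undefined rather than guessing a rate.
  return rate === undefined ? undefined : (tokens / 1_000_000) * rate;
};
```

Writing 2M tokens into a 5-minute cache would then cost 2 × 3.75 = 7.5 USD under these assumptions.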
package/packages/model-bank/src/aiModels/anthropic.ts
CHANGED

```diff
@@ -22,7 +22,7 @@ const anthropicChatModels: AIChatModelCard[] = [
         { name: 'textInput_cacheWrite', rate: 3.75, strategy: 'fixed', unit: 'millionTokens' },
       ],
     },
-    releasedAt: '2025-09-
+    releasedAt: '2025-09-29',
     settings: {
       extendParams: ['disableContextCaching', 'enableReasoning', 'reasoningBudgetToken'],
       searchImpl: 'params',
@@ -107,7 +107,6 @@ const anthropicChatModels: AIChatModelCard[] = [
     description:
       'Claude Sonnet 4 可以产生近乎即时的响应或延长的逐步思考,用户可以清晰地看到这些过程。API 用户还可以对模型思考的时间进行细致的控制',
     displayName: 'Claude Sonnet 4',
-    enabled: true,
     id: 'claude-sonnet-4-20250514',
     maxOutput: 64_000,
     pricing: {
@@ -172,7 +171,7 @@ const anthropicChatModels: AIChatModelCard[] = [
     contextWindowTokens: 200_000,
     description:
       'Claude 3.5 Sonnet 提供了超越 Opus 的能力和比 Sonnet 更快的速度,同时保持与 Sonnet 相同的价格。Sonnet 特别擅长编程、数据科学、视觉处理、代理任务。',
-    displayName: 'Claude 3.5 Sonnet
+    displayName: 'Claude 3.5 Sonnet (New)',
     id: 'claude-3-5-sonnet-20241022',
     maxOutput: 8192,
     pricing: {
@@ -203,7 +202,7 @@ const anthropicChatModels: AIChatModelCard[] = [
     contextWindowTokens: 200_000,
     description:
       'Claude 3.5 Sonnet 提供了超越 Opus 的能力和比 Sonnet 更快的速度,同时保持与 Sonnet 相同的价格。Sonnet 特别擅长编程、数据科学、视觉处理、代理任务。',
-    displayName: 'Claude 3.5 Sonnet',
+    displayName: 'Claude 3.5 Sonnet (Old)',
     id: 'claude-3-5-sonnet-20240620',
     maxOutput: 8192,
     pricing: {
@@ -235,7 +234,6 @@ const anthropicChatModels: AIChatModelCard[] = [
     description:
       'Claude 3.5 Haiku 是 Anthropic 最快的下一代模型。与 Claude 3 Haiku 相比,Claude 3.5 Haiku 在各项技能上都有所提升,并在许多智力基准测试中超越了上一代最大的模型 Claude 3 Opus。',
     displayName: 'Claude 3.5 Haiku',
-    enabled: true,
     id: 'claude-3-5-haiku-20241022',
     maxOutput: 8192,
     pricing: {
```
```diff
@@ -287,33 +285,6 @@ const anthropicChatModels: AIChatModelCard[] = [
     },
     type: 'chat',
   },
-  {
-    abilities: {
-      functionCall: true,
-      vision: true,
-    },
-    contextWindowTokens: 200_000,
-    description:
-      'Claude 3 Sonnet 在智能和速度方面为企业工作负载提供了理想的平衡。它以更低的价格提供最大效用,可靠且适合大规模部署。',
-    displayName: 'Claude 3 Sonnet',
-    id: 'claude-3-sonnet-20240229', // 弃用日期 2025年7月21日
-    maxOutput: 4096,
-    pricing: {
-      units: [
-        { name: 'textInput_cacheRead', rate: 0.3, strategy: 'fixed', unit: 'millionTokens' },
-        { name: 'textInput', rate: 3, strategy: 'fixed', unit: 'millionTokens' },
-        { name: 'textOutput', rate: 15, strategy: 'fixed', unit: 'millionTokens' },
-        {
-          lookup: { prices: { '1h': 6, '5m': 3.75 }, pricingParams: ['ttl'] },
-          name: 'textInput_cacheWrite',
-          strategy: 'lookup',
-          unit: 'millionTokens',
-        },
-      ],
-    },
-    releasedAt: '2024-02-29',
-    type: 'chat',
-  },
   {
     abilities: {
       functionCall: true,
```
```diff
@@ -344,38 +315,6 @@ const anthropicChatModels: AIChatModelCard[] = [
     },
     type: 'chat',
   },
-  {
-    contextWindowTokens: 200_000,
-    description:
-      'Claude 2 为企业提供了关键能力的进步,包括业界领先的 200K token 上下文、大幅降低模型幻觉的发生率、系统提示以及一个新的测试功能:工具调用。',
-    displayName: 'Claude 2.1',
-    id: 'claude-2.1', // 弃用日期 2025年7月21日
-    maxOutput: 4096,
-    pricing: {
-      units: [
-        { name: 'textInput', rate: 8, strategy: 'fixed', unit: 'millionTokens' },
-        { name: 'textOutput', rate: 24, strategy: 'fixed', unit: 'millionTokens' },
-      ],
-    },
-    releasedAt: '2023-11-21',
-    type: 'chat',
-  },
-  {
-    contextWindowTokens: 100_000,
-    description:
-      'Claude 2 为企业提供了关键能力的进步,包括业界领先的 200K token 上下文、大幅降低模型幻觉的发生率、系统提示以及一个新的测试功能:工具调用。',
-    displayName: 'Claude 2.0',
-    id: 'claude-2.0', // 弃用日期 2025年7月21日
-    maxOutput: 4096,
-    pricing: {
-      units: [
-        { name: 'textInput', rate: 8, strategy: 'fixed', unit: 'millionTokens' },
-        { name: 'textOutput', rate: 24, strategy: 'fixed', unit: 'millionTokens' },
-      ],
-    },
-    releasedAt: '2023-07-11',
-    type: 'chat',
-  },
 ];
 
 export const allModels = [...anthropicChatModels];
```
package/packages/model-bank/src/aiModels/novita.ts
CHANGED

```diff
@@ -30,8 +30,8 @@ const novitaChatModels: AIChatModelCard[] = [
     maxOutput: 32_768,
     pricing: {
       units: [
-        { name: 'textInput', rate: 0.
-        { name: 'textOutput', rate: 3, strategy: 'fixed', unit: 'millionTokens' },
+        { name: 'textInput', rate: 0.98, strategy: 'fixed', unit: 'millionTokens' },
+        { name: 'textOutput', rate: 3.95, strategy: 'fixed', unit: 'millionTokens' },
       ],
     },
     type: 'chat',
```
package/packages/model-bank/src/aiModels/qwen.ts
CHANGED

```diff
@@ -55,6 +55,27 @@ const qwenChatModels: AIChatModelCard[] = [
     },
     type: 'chat',
   },
+  {
+    abilities: {
+      reasoning: true,
+    },
+    contextWindowTokens: 131_072,
+    description: 'deepseek-v3.2-exp 引入稀疏注意力机制,旨在提升处理长文本时的训练与推理效率,价格低于 deepseek-v3.1。',
+    displayName: 'DeepSeek V3.2 Exp',
+    id: 'deepseek-v3.2-exp',
+    maxOutput: 65_536,
+    pricing: {
+      currency: 'CNY',
+      units: [
+        { name: 'textInput', rate: 2, strategy: 'fixed', unit: 'millionTokens' },
+        { name: 'textOutput', rate: 3, strategy: 'fixed', unit: 'millionTokens' },
+      ],
+    },
+    settings: {
+      extendParams: ['enableReasoning', 'reasoningBudgetToken'],
+    },
+    type: 'chat',
+  },
   {
     abilities: {
       reasoning: true,
```
package/packages/model-bank/src/aiModels/zhipu.ts
CHANGED

```diff
@@ -1,6 +1,72 @@
 import { AIChatModelCard, AIImageModelCard } from '../types/aiModel';
 
+// price: https://bigmodel.cn/pricing
+// ref: https://docs.bigmodel.cn/cn/guide/start/model-overview
+
 const zhipuChatModels: AIChatModelCard[] = [
+  {
+    abilities: {
+      functionCall: true,
+      reasoning: true,
+      search: true,
+    },
+    contextWindowTokens: 200_000,
+    description:
+      '智谱最新旗舰模型 GLM-4.6 (355B) 在高级编码、长文本处理、推理与智能体能力上全面超越前代,尤其在编程能力上对齐 Claude Sonnet 4,成为国内顶尖的 Coding 模型。',
+    displayName: 'GLM-4.6',
+    enabled: true,
+    id: 'glm-4.6',
+    maxOutput: 128_000,
+    pricing: {
+      currency: 'CNY',
+      units: [
+        {
+          lookup: {
+            prices: {
+              '[0, 32_000]_[0, 200]': 0.4,
+              '[0, 32_000]_[200, infinity]': 0.6,
+              '[32_000, 200_000]': 0.8,
+            },
+            pricingParams: ['textInput', 'textOutput'],
+          },
+          name: 'textInput_cacheRead',
+          strategy: 'lookup',
+          unit: 'millionTokens',
+        },
+        {
+          lookup: {
+            prices: {
+              '[0, 32_000]_[0, 200]': 2,
+              '[0, 32_000]_[200, infinity]': 3,
+              '[32_000, 200_000]': 4,
+            },
+            pricingParams: ['textInput', 'textOutput'],
+          },
+          name: 'textInput',
+          strategy: 'lookup',
+          unit: 'millionTokens',
+        },
+        {
+          lookup: {
+            prices: {
+              '[0, 32_000]_[0, 200]': 8,
+              '[0, 32_000]_[200, infinity]': 14,
+              '[32_000, 200_000]': 16,
+            },
+            pricingParams: ['textInput', 'textOutput'],
+          },
+          name: 'textOutput',
+          strategy: 'lookup',
+          unit: 'millionTokens',
+        },
+      ],
+    },
+    settings: {
+      extendParams: ['enableReasoning'],
+      searchImpl: 'params',
+    },
+    type: 'chat',
+  },
   {
     abilities: {
       functionCall: true,
```
```diff
@@ -18,10 +84,42 @@ const zhipuChatModels: AIChatModelCard[] = [
     pricing: {
       currency: 'CNY',
       units: [
-
-
-
-
+        {
+          lookup: {
+            prices: {
+              '[0, 32_000]': 0.4,
+              '[32_000, 65_536]': 0.8,
+            },
+            pricingParams: ['textInput'],
+          },
+          name: 'textInput_cacheRead',
+          strategy: 'lookup',
+          unit: 'millionTokens',
+        },
+        {
+          lookup: {
+            prices: {
+              '[0, 32_000]': 2,
+              '[32_000, 65_536]': 4,
+            },
+            pricingParams: ['textInput'],
+          },
+          name: 'textInput',
+          strategy: 'lookup',
+          unit: 'millionTokens',
+        },
+        {
+          lookup: {
+            prices: {
+              '[0, 32_000]': 6,
+              '[32_000, 65_536]': 12,
+            },
+            pricingParams: ['textInput'],
+          },
+          name: 'textOutput',
+          strategy: 'lookup',
+          unit: 'millionTokens',
+        },
       ],
     },
     settings: {
```
```diff
@@ -38,17 +136,52 @@ const zhipuChatModels: AIChatModelCard[] = [
     },
     contextWindowTokens: 128_000,
     description:
-      '
+      '智谱旗舰模型,支持思考模式切换,综合能力达到开源模型的 SOTA 水平,上下文长度可达128K。',
     displayName: 'GLM-4.5',
-    enabled: true,
     id: 'glm-4.5',
     maxOutput: 32_768,
     pricing: {
       currency: 'CNY',
       units: [
-        {
-
-
+        {
+          lookup: {
+            prices: {
+              '[0, 32_000]_[0, 200]': 0.4,
+              '[0, 32_000]_[200, infinity]': 0.6,
+              '[32_000, 128_000]': 0.8,
+            },
+            pricingParams: ['textInput', 'textOutput'],
+          },
+          name: 'textInput_cacheRead',
+          strategy: 'lookup',
+          unit: 'millionTokens',
+        },
+        {
+          lookup: {
+            prices: {
+              '[0, 32_000]_[0, 200]': 2,
+              '[0, 32_000]_[200, infinity]': 3,
+              '[32_000, 128_000]': 4,
+            },
+            pricingParams: ['textInput', 'textOutput'],
+          },
+          name: 'textInput',
+          strategy: 'lookup',
+          unit: 'millionTokens',
+        },
+        {
+          lookup: {
+            prices: {
+              '[0, 32_000]_[0, 200]': 8,
+              '[0, 32_000]_[200, infinity]': 14,
+              '[32_000, 128_000]': 16,
+            },
+            pricingParams: ['textInput', 'textOutput'],
+          },
+          name: 'textOutput',
+          strategy: 'lookup',
+          unit: 'millionTokens',
+        },
       ],
     },
     settings: {
```
```diff
@@ -71,9 +204,45 @@ const zhipuChatModels: AIChatModelCard[] = [
     pricing: {
       currency: 'CNY',
       units: [
-        {
-
-
+        {
+          lookup: {
+            prices: {
+              '[0, 32_000]': 1.6,
+              '[0, 32_000]_[200, infinity]': 2.4,
+              '[32_000, 128_000]': 3.2,
+            },
+            pricingParams: ['textInput', 'textOutput'],
+          },
+          name: 'textInput_cacheRead',
+          strategy: 'lookup',
+          unit: 'millionTokens',
+        },
+        {
+          lookup: {
+            prices: {
+              '[0, 32_000]_[0, 200]': 8,
+              '[0, 32_000]_[200, infinity]': 12,
+              '[32_000, 128_000]': 16,
+            },
+            pricingParams: ['textInput', 'textOutput'],
+          },
+          name: 'textInput',
+          strategy: 'lookup',
+          unit: 'millionTokens',
+        },
+        {
+          lookup: {
+            prices: {
+              '[0, 32_000]_[0, 200]': 16,
+              '[0, 32_000]_[200, infinity]': 32,
+              '[32_000, 128_000]': 64,
+            },
+            pricingParams: ['textInput', 'textOutput'],
+          },
+          name: 'textOutput',
+          strategy: 'lookup',
+          unit: 'millionTokens',
+        },
       ],
     },
     settings: {
```
```diff
@@ -96,9 +265,43 @@ const zhipuChatModels: AIChatModelCard[] = [
     pricing: {
       currency: 'CNY',
       units: [
-        {
-
-
+        {
+          lookup: {
+            prices: {
+              '[0, 32_000]': 0.16,
+              '[32_000, 128_000]': 0.24,
+            },
+            pricingParams: ['textInput'],
+          },
+          name: 'textInput_cacheRead',
+          strategy: 'lookup',
+          unit: 'millionTokens',
+        },
+        {
+          lookup: {
+            prices: {
+              '[0, 32_000]': 0.8,
+              '[32_000, 128_000]': 1.2,
+            },
+            pricingParams: ['textInput'],
+          },
+          name: 'textInput',
+          strategy: 'lookup',
+          unit: 'millionTokens',
+        },
+        {
+          lookup: {
+            prices: {
+              '[0, 32_000]_[0, 200]': 2,
+              '[0, 32_000]_[200, infinity]': 6,
+              '[32_000, 128_000]': 8,
+            },
+            pricingParams: ['textInput', 'textOutput'],
+          },
+          name: 'textOutput',
+          strategy: 'lookup',
+          unit: 'millionTokens',
+        },
       ],
     },
     settings: {
```
```diff
@@ -121,9 +324,43 @@ const zhipuChatModels: AIChatModelCard[] = [
     pricing: {
       currency: 'CNY',
       units: [
-        {
-
-
+        {
+          lookup: {
+            prices: {
+              '[0, 32_000]': 0.8,
+              '[32_000, 128_000]': 1.6,
+            },
+            pricingParams: ['textInput'],
+          },
+          name: 'textInput_cacheRead',
+          strategy: 'lookup',
+          unit: 'millionTokens',
+        },
+        {
+          lookup: {
+            prices: {
+              '[0, 32_000]': 4,
+              '[32_000, 128_000]': 8,
+            },
+            pricingParams: ['textInput'],
+          },
+          name: 'textInput',
+          strategy: 'lookup',
+          unit: 'millionTokens',
+        },
+        {
+          lookup: {
+            prices: {
+              '[0, 32_000]_[0, 200]': 12,
+              '[0, 32_000]_[200, infinity]': 16,
+              '[32_000, 128_000]': 32,
+            },
+            pricingParams: ['textInput', 'textOutput'],
+          },
+          name: 'textOutput',
+          strategy: 'lookup',
+          unit: 'millionTokens',
+        },
       ],
     },
     settings: {
```
```diff
@@ -187,7 +424,6 @@ const zhipuChatModels: AIChatModelCard[] = [
     description:
       'GLM-4.1V-Thinking 系列模型是目前已知10B级别的VLM模型中性能最强的视觉模型,融合了同级别SOTA的各项视觉语言任务,包括视频理解、图片问答、学科解题、OCR文字识别、文档和图表解读、GUI Agent、前端网页Coding、Grounding等,多项任务能力甚至超过8倍参数量的Qwen2.5-VL-72B。通过领先的强化学习技术,模型掌握了通过思维链推理的方式提升回答的准确性和丰富度,从最终效果和可解释性等维度都显著超过传统的非thinking模型。',
     displayName: 'GLM-4.1V-Thinking-Flash',
-    enabled: true,
     id: 'glm-4.1v-thinking-flash',
     maxOutput: 16_384,
     pricing: {
```
```diff
@@ -414,28 +650,6 @@ const zhipuChatModels: AIChatModelCard[] = [
     },
     type: 'chat',
   },
-  {
-    abilities: {
-      functionCall: true,
-      search: true,
-    },
-    contextWindowTokens: 128_000,
-    description:
-      'GLM-4-AllTools 是一个多功能智能体模型,优化以支持复杂指令规划与工具调用,如网络浏览、代码解释和文本生成,适用于多任务执行。',
-    displayName: 'GLM-4-AllTools',
-    id: 'glm-4-alltools',
-    pricing: {
-      currency: 'CNY',
-      units: [
-        { name: 'textInput', rate: 100, strategy: 'fixed', unit: 'millionTokens' },
-        { name: 'textOutput', rate: 100, strategy: 'fixed', unit: 'millionTokens' },
-      ],
-    },
-    settings: {
-      searchImpl: 'params',
-    },
-    type: 'chat',
-  },
   {
     abilities: {
       functionCall: true,
```
```diff
@@ -479,27 +693,6 @@ const zhipuChatModels: AIChatModelCard[] = [
     },
     type: 'chat',
   },
-  {
-    abilities: {
-      functionCall: true,
-      search: true,
-    },
-    contextWindowTokens: 128_000,
-    description: 'GLM-4 是发布于2024年1月的旧旗舰版本,目前已被更强的 GLM-4-0520 取代。',
-    displayName: 'GLM-4',
-    id: 'glm-4', // 弃用时间 2025年6月30日
-    pricing: {
-      currency: 'CNY',
-      units: [
-        { name: 'textInput', rate: 100, strategy: 'fixed', unit: 'millionTokens' },
-        { name: 'textOutput', rate: 100, strategy: 'fixed', unit: 'millionTokens' },
-      ],
-    },
-    settings: {
-      searchImpl: 'params',
-    },
-    type: 'chat',
-  },
   {
     abilities: {
       vision: true,
```
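The zhipu entries above replace fixed per-token rates with tiered `lookup` pricing keyed by bracketed token ranges ('[0, 32_000]_[0, 200]' reads: input tokens in [0, 32 000] and output tokens in [0, 200]). A minimal sketch of how such keys could be resolved, assuming that key format and half-open ranges — the real resolver lives elsewhere in lobe-chat's pricing utilities and may differ:

```typescript
// Hypothetical resolver for bracketed tier keys like '[0, 32_000]_[200, infinity]'.
// Assumptions: '_' thousands separators, 'infinity' as an open upper bound, and
// half-open [lo, hi) ranges; rates are CNY per million tokens.
const parseRange = (raw: string): [number, number] => {
  const [lo, hi] = raw
    .replace(/[[\]\s]/g, '')
    .split(',')
    .map((n) => (n === 'infinity' ? Number.POSITIVE_INFINITY : Number(n.replace(/_/g, ''))));
  return [lo, hi];
};

const inRange = (value: number, raw: string): boolean => {
  const [lo, hi] = parseRange(raw);
  return value >= lo && value < hi;
};

export const resolveTieredPrice = (
  prices: Record<string, number>,
  usage: { textInput: number; textOutput: number },
): number | undefined => {
  for (const [key, price] of Object.entries(prices)) {
    // ']_[' joins an input-token range with an optional output-token range.
    const [inputRange, outputRange] = key.split(']_[');
    if (!inRange(usage.textInput, inputRange)) continue;
    if (outputRange !== undefined && !inRange(usage.textOutput, outputRange)) continue;
    return price;
  }
  return undefined;
};
```

With GLM-4.6's `textOutput` tiers, a 10k-input/500-output call would land in the '[0, 32_000]_[200, infinity]' bucket (14 CNY per million output tokens) under these assumptions.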
package/packages/model-runtime/src/providers/anthropic/index.test.ts
CHANGED

```diff
@@ -793,18 +793,6 @@ describe('LobeAnthropicAI', () => {
       expect(result.max_tokens).toBe(4096);
     });
 
-    it('should set correct max_tokens based on model for non claude-3 models', async () => {
-      const payload: ChatStreamPayload = {
-        messages: [{ content: 'Hello', role: 'user' }],
-        model: 'claude-2.1',
-        temperature: 0.7,
-      };
-
-      const result = await instance['buildAnthropicPayload'](payload);
-
-      expect(result.max_tokens).toBe(4096);
-    });
-
     it('should respect max_tokens when explicitly provided', async () => {
       const payload: ChatStreamPayload = {
         max_tokens: 2000,
```
|
@@ -29,6 +29,7 @@ export const LobeNovitaAI = createOpenAICompatibleRuntime({
|
|
|
29
29
|
const formattedModels = modelList.map((m) => {
|
|
30
30
|
const mm = m as any;
|
|
31
31
|
const features: string[] = Array.isArray(mm.features) ? mm.features : [];
|
|
32
|
+
const inputModalities: string[] = Array.isArray(mm.input_modalities) ? mm.input_modalities : [];
|
|
32
33
|
|
|
33
34
|
return {
|
|
34
35
|
contextWindowTokens: mm.context_size ?? mm.max_output_tokens ?? undefined,
|
|
@@ -44,7 +45,7 @@ export const LobeNovitaAI = createOpenAICompatibleRuntime({
|
|
|
44
45
|
},
|
|
45
46
|
reasoning: features.includes('reasoning') || false,
|
|
46
47
|
type: mm.model_type ?? undefined,
|
|
47
|
-
vision: features.includes('vision') || false,
|
|
48
|
+
vision: inputModalities.includes('image') || features.includes('vision') || false,
|
|
48
49
|
} as any;
|
|
49
50
|
});
|
|
50
51
|
|
|
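The runtime change above starts deriving vision support from Novita's `input_modalities` field instead of relying only on the `features` array. Roughly, and with the payload shape assumed from the type additions in this diff:

```typescript
// Sketch of the new vision detection; RawModel mirrors only the optional
// fields added to NovitaModelCard in this diff, not Novita's full schema.
interface RawModel {
  features?: string[];
  input_modalities?: string[];
}

export const supportsVision = (m: RawModel): boolean => {
  const features = Array.isArray(m.features) ? m.features : [];
  const inputModalities = Array.isArray(m.input_modalities) ? m.input_modalities : [];
  // Prefer the explicit modality list; fall back to the legacy feature flag.
  return inputModalities.includes('image') || features.includes('vision');
};
```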
package/packages/model-runtime/src/providers/novita/type.ts
CHANGED

```diff
@@ -2,8 +2,12 @@ export interface NovitaModelCard {
   context_size: number;
   created: number;
   description: string;
+  features?: string[];
   id: string;
+  input_modalities?: string[];
   input_token_price_per_m: number;
+  max_output_tokens?: number;
+  model_type?: string;
   output_token_price_per_m: number;
   status: number;
   tags: string[];
```
package/packages/model-runtime/src/providers/ollamacloud/index.ts
CHANGED

```diff
@@ -4,7 +4,7 @@ import { createOpenAICompatibleRuntime } from '../../core/openaiCompatibleFactor
 import { processMultiProviderModelList } from '../../utils/modelParse';
 
 export const LobeOllamaCloudAI = createOpenAICompatibleRuntime({
-  baseURL: 'https://
+  baseURL: 'https://ollama.com/v1',
   chatCompletion: {
     handlePayload: (payload) => {
       const { model, ...rest } = payload;
```
package/packages/model-runtime/src/providers/openrouter/index.ts
CHANGED

@@ -73,11 +73,18 @@ export const LobeOpenRouterAI = createOpenAICompatibleRuntime({
     const { endpoint } = model;
     const endpointModel = endpoint?.model;

-    const
+    const inputModalities = endpointModel?.input_modalities || model.input_modalities;
+
+    let displayName = model.slug?.toLowerCase().includes('deepseek') && !model.short_name?.toLowerCase().includes('deepseek')
       ? (model.name ?? model.slug)
       : (model.short_name ?? model.name ?? model.slug);

-    const
+    const inputPrice = formatPrice(endpoint?.pricing?.prompt);
+    const outputPrice = formatPrice(endpoint?.pricing?.completion);
+    const isFree = (inputPrice === 0 || outputPrice === 0) && !displayName.endsWith('(free)');
+    if (isFree) {
+      displayName += ' (free)';
+    }

     return {
       contextWindowTokens: endpoint?.context_length || model.context_length,
@@ -90,8 +97,8 @@ export const LobeOpenRouterAI = createOpenAICompatibleRuntime({
         ? endpoint.max_completion_tokens
         : undefined,
       pricing: {
-        input:
-        output:
+        input: inputPrice,
+        output: outputPrice,
       },
       reasoning: endpoint?.supports_reasoning || false,
       releasedAt: new Date(model.created_at).toISOString().split('T')[0],
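The "(free)" labeling introduced for OpenRouter models can be exercised in isolation. A minimal sketch, assuming `formatPrice` converts OpenRouter's string price into a number (the real helper lives elsewhere in the provider module, so this stand-in and the `labelFreeModel` wrapper are both assumptions):

```typescript
// Standalone sketch of the new "(free)" suffix logic shown in the diff.
// `formatPrice` is an assumed stand-in for the module's real helper.
const formatPrice = (price?: string): number | undefined =>
  price === undefined ? undefined : Number(price);

const labelFreeModel = (
  displayName: string,
  pricing?: { completion?: string; prompt?: string },
): string => {
  const inputPrice = formatPrice(pricing?.prompt);
  const outputPrice = formatPrice(pricing?.completion);
  // A model is treated as free when either direction costs 0,
  // unless its name already carries the suffix.
  const isFree = (inputPrice === 0 || outputPrice === 0) && !displayName.endsWith('(free)');
  return isFree ? `${displayName} (free)` : displayName;
};
```

For example, `labelFreeModel('DeepSeek R1', { completion: '0', prompt: '0' })` appends the suffix, while a name that already ends in `(free)` is left untouched — mirroring the `!displayName.endsWith('(free)')` guard in the diff.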
package/src/app/[variants]/(main)/chat/(workspace)/@conversation/features/ChatMinimap/index.tsx
CHANGED

@@ -129,6 +129,26 @@ const useStyles = createStyles(({ css, token }) => ({
       opacity: 1;
     }
   `,
+  railContent: css`
+    scrollbar-width: none;
+
+    overflow-y: auto;
+    display: flex;
+    flex-direction: column;
+    gap: 0;
+    align-items: end;
+    justify-content: space-between;
+
+    max-height: round(down, 50vh, 12px);
+
+    /* Hide scrollbar for IE, Edge and Firefox */
+    -ms-overflow-style: none;
+
+    /* Hide scrollbar for Chrome, Safari and Opera */
+    &::-webkit-scrollbar {
+      display: none;
+    }
+  `,
 }));

 const getIndicatorWidth = (content: string | undefined) => {
@@ -245,8 +265,6 @@ const ChatMinimap = () => {
       }
       targetPosition = matched === -1 ? 0 : matched;
     } else {
-      console.log('activeIndex', activeIndex);
-      console.log('indicators', indicators);
       let matched = indicators.length - 1;
       for (const [pos, indicator] of indicators.entries()) {
         if (indicator.virtuosoIndex > activeIndex) {
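The lookup around the removed `console.log` calls can be sketched as a pure function: given the active virtuoso index, find the last indicator whose `virtuosoIndex` does not exceed it. The diff truncates the loop body after the `if`, so the `pos - 1` / `break` completion here is an assumption, as is the pared-down `Indicator` shape:

```typescript
// Sketch of the minimap's active-indicator lookup. The loop body after the
// `if` is cut off in the diff, so its completion here is assumed.
interface Indicator {
  virtuosoIndex: number;
}

const findActivePosition = (indicators: Indicator[], activeIndex: number): number => {
  // Default to the last indicator when none starts past the active index.
  let matched = indicators.length - 1;
  for (const [pos, indicator] of indicators.entries()) {
    if (indicator.virtuosoIndex > activeIndex) {
      matched = pos - 1; // the previous indicator still covers activeIndex
      break;
    }
  }
  return matched;
};
```

A result of -1 (active index before the first indicator) would be clamped to 0 by the caller, per the `targetPosition = matched === -1 ? 0 : matched;` line visible in the diff.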
@@ -295,28 +313,38 @@ const ChatMinimap = () => {
           <Icon color={theme.colorTextTertiary} icon={ChevronUp} size={16} />
         </button>
       </Tooltip>
-      {
-
-
-
-
-        <
-
-
-
-
-          style={{
-            width,
-          }}
-          type={'button'}
+      <Flexbox className={styles.railContent}>
+        {indicators.map(({ id, width, preview, virtuosoIndex }, position) => {
+          const isActive = activeIndicatorPosition === position;
+
+          return (
+            <Tooltip
+              key={id}
+              mouseEnterDelay={0.1}
+              placement={'left'}
+              title={preview || undefined}
             >
-          <
-
-
-
-
-
+              <button
+                aria-current={isActive ? 'true' : undefined}
+                aria-label={t('minimap.jumpToMessage', { index: position + 1 })}
+                className={styles.indicator}
+                onClick={() => handleJump(virtuosoIndex)}
+                style={{
+                  width,
+                }}
+                type={'button'}
+              >
+                <div
+                  className={cx(
+                    styles.indicatorContent,
+                    isActive && styles.indicatorContentActive,
+                  )}
+                />
+              </button>
+            </Tooltip>
+          );
+        })}
+      </Flexbox>
       <Tooltip mouseEnterDelay={0.1} placement={'left'} title={t('minimap.nextMessage')}>
         <button
           aria-label={t('minimap.nextMessage')}
package/src/config/modelProviders/anthropic.ts
CHANGED

@@ -1,6 +1,5 @@
 import { ModelProviderCard } from '@/types/llm';

-// ref: https://docs.anthropic.com/en/docs/about-claude/models#model-names
 const Anthropic: ModelProviderCard = {
   chatModels: [
     {
@@ -83,17 +82,6 @@ const Anthropic: ModelProviderCard = {
       releasedAt: '2024-03-07',
       vision: true,
     },
-    {
-      contextWindowTokens: 200_000,
-      description:
-        'Claude 3 Sonnet 在智能和速度方面为企业工作负载提供了理想的平衡。它以更低的价格提供最大效用,可靠且适合大规模部署。',
-      displayName: 'Claude 3 Sonnet',
-      functionCall: true,
-      id: 'claude-3-sonnet-20240229',
-      maxOutput: 4096,
-      releasedAt: '2024-02-29',
-      vision: true,
-    },
     {
       contextWindowTokens: 200_000,
       description:
@@ -106,24 +94,6 @@ const Anthropic: ModelProviderCard = {
       releasedAt: '2024-02-29',
       vision: true,
     },
-    {
-      contextWindowTokens: 200_000,
-      description:
-        'Claude 2 为企业提供了关键能力的进步,包括业界领先的 200K token 上下文、大幅降低模型幻觉的发生率、系统提示以及一个新的测试功能:工具调用。',
-      displayName: 'Claude 2.1',
-      id: 'claude-2.1',
-      maxOutput: 4096,
-      releasedAt: '2023-11-21',
-    },
-    {
-      contextWindowTokens: 100_000,
-      description:
-        'Claude 2 为企业提供了关键能力的进步,包括业界领先的 200K token 上下文、大幅降低模型幻觉的发生率、系统提示以及一个新的测试功能:工具调用。',
-      displayName: 'Claude 2.0',
-      id: 'claude-2.0',
-      maxOutput: 4096,
-      releasedAt: '2023-07-11',
-    },
   ],
   checkModel: 'claude-3-haiku-20240307',
   description:
package/src/config/modelProviders/zhipu.ts
CHANGED

@@ -1,16 +1,7 @@
 import { ModelProviderCard } from '@/types/llm';

-// ref :https://open.bigmodel.cn/dev/howuse/model
-// api https://open.bigmodel.cn/dev/api#language
-// ref :https://open.bigmodel.cn/modelcenter/square
 const ZhiPu: ModelProviderCard = {
   chatModels: [
-    {
-      contextWindowTokens: 16_384,
-      description: 'GLM-Zero-Preview具备强大的复杂推理能力,在逻辑推理、数学、编程等领域表现优异。',
-      displayName: 'GLM-Zero-Preview',
-      id: 'glm-zero-preview',
-    },
     {
       contextWindowTokens: 128_000,
       description: 'GLM-4-Flash 是处理简单任务的理想选择,速度最快且免费。',
@@ -50,14 +41,6 @@ const ZhiPu: ModelProviderCard = {
       functionCall: true,
       id: 'glm-4-airx',
     },
-    {
-      contextWindowTokens: 128_000,
-      description:
-        'GLM-4-AllTools 是一个多功能智能体模型,优化以支持复杂指令规划与工具调用,如网络浏览、代码解释和文本生成,适用于多任务执行。',
-      displayName: 'GLM-4-AllTools',
-      functionCall: true,
-      id: 'glm-4-alltools',
-    },
     {
       contextWindowTokens: 128_000,
       description:
@@ -115,9 +98,9 @@ const ZhiPu: ModelProviderCard = {
     },
     {
       contextWindowTokens: 4096,
-      description: 'CharGLM-
-      displayName: 'CharGLM-
-      id: 'charglm-
+      description: 'CharGLM-4 专为角色扮演与情感陪伴设计,支持超长多轮记忆与个性化对话,应用广泛。',
+      displayName: 'CharGLM-4',
+      id: 'charglm-4',
     },
     {
       contextWindowTokens: 8192,
@@ -126,7 +109,7 @@ const ZhiPu: ModelProviderCard = {
       id: 'emohaa',
     },
   ],
-  checkModel: 'glm-4-flash
+  checkModel: 'glm-4.5-flash',
   description:
     '智谱 AI 提供多模态与语言模型的开放平台,支持广泛的AI应用场景,包括文本处理、图像理解与编程辅助等。',
   id: 'zhipu',
package/src/features/Conversation/components/WideScreenContainer/index.tsx
CHANGED

@@ -11,6 +11,9 @@ import { systemStatusSelectors } from '@/store/global/selectors';
 const useStyles = createStyles(({ css, token }) => ({
   container: css`
     align-self: center;
+
+    /* Leave some space for the minimap */
+    padding-inline: 12px;
     transition: width 0.25s ${token.motionEaseInOut};
   `,
 }));