@lobehub/chat 1.52.1 → 1.52.2
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md
CHANGED
@@ -2,6 +2,31 @@
 
 # Changelog
 
+### [Version 1.52.2](https://github.com/lobehub/lobe-chat/compare/v1.52.1...v1.52.2)
+
+<sup>Released on **2025-02-08**</sup>
+
+#### 💄 Styles
+
+- **misc**: Add siliconcloud pro models.
+
+<br/>
+
+<details>
+<summary><kbd>Improvements and Fixes</kbd></summary>
+
+#### Styles
+
+- **misc**: Add siliconcloud pro models, closes [#5851](https://github.com/lobehub/lobe-chat/issues/5851) ([9b321e6](https://github.com/lobehub/lobe-chat/commit/9b321e6))
+
+</details>
+
+<div align="right">
+
+[](#readme-top)
+
+</div>
+
 ### [Version 1.52.1](https://github.com/lobehub/lobe-chat/compare/v1.52.0...v1.52.1)
 
 <sup>Released on **2025-02-08**</sup>
package/changelog/v1.json
CHANGED
@@ -13,7 +13,7 @@ services:
       - lobe-network
 
   postgresql:
-    image: pgvector/pgvector:
+    image: pgvector/pgvector:pg17
     container_name: lobe-postgres
     ports:
       - '5432:5432'
@@ -76,7 +76,7 @@ services:
       - .env
 
   lobe:
-    image: lobehub/lobe-chat-database
+    image: lobehub/lobe-chat-database
     container_name: lobe-chat
     network_mode: 'service:network-service'
    depends_on:
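This hunk pins the bundled Postgres image to the `pg17` tag of `pgvector/pgvector`. Assembled from the hunk's context lines, the service after the change reads roughly as follows (a sketch only; keys the diff does not show, such as volumes and environment, are omitted):

```yaml
# Minimal sketch of the postgresql service after this change.
# Only the keys visible in the diff's context lines are included.
services:
  postgresql:
    image: pgvector/pgvector:pg17
    container_name: lobe-postgres
    ports:
      - '5432:5432'
```

Pinning to an explicit major-version tag keeps a `docker compose pull` from silently moving the database to a new Postgres major release.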
package/package.json
CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@lobehub/chat",
-  "version": "1.52.1",
+  "version": "1.52.2",
   "description": "Lobe Chat - an open-source, high-performance chatbot framework that supports speech synthesis, multimodal, and extensible Function Call plugin system. Supports one-click free deployment of your private ChatGPT/LLM web application.",
   "keywords": [
     "framework",
@@ -35,6 +35,38 @@ const siliconcloudChatModels: AIChatModelCard[] = [
     },
     type: 'chat',
   },
+  {
+    abilities: {
+      reasoning: true,
+    },
+    contextWindowTokens: 65_536,
+    description:
+      'DeepSeek-R1 is a reinforcement-learning (RL) driven reasoning model that addresses repetitiveness and readability issues in the model. Before RL, DeepSeek-R1 introduced cold-start data to further optimize reasoning performance. It performs on par with OpenAI-o1 on math, code, and reasoning tasks, and improves overall results through carefully designed training methods.',
+    displayName: 'DeepSeek R1 (Pro)',
+    id: 'Pro/deepseek-ai/DeepSeek-R1',
+    pricing: {
+      currency: 'CNY',
+      input: 4,
+      output: 16,
+    },
+    type: 'chat',
+  },
+  {
+    abilities: {
+      functionCall: true,
+    },
+    contextWindowTokens: 65_536,
+    description:
+      'DeepSeek-V3 is a Mixture-of-Experts (MoE) language model with 671 billion parameters. It uses Multi-head Latent Attention (MLA) and the DeepSeekMoE architecture together with an auxiliary-loss-free load-balancing strategy to optimize inference and training efficiency. Pretrained on 14.8 trillion high-quality tokens and refined with supervised fine-tuning and reinforcement learning, DeepSeek-V3 outperforms other open-source models and approaches leading closed-source models.',
+    displayName: 'DeepSeek V3 (Pro)',
+    id: 'Pro/deepseek-ai/DeepSeek-V3',
+    pricing: {
+      currency: 'CNY',
+      input: 2,
+      output: 8,
+    },
+    type: 'chat',
+  },
   {
     abilities: {
       reasoning: true
@@ -30,6 +30,31 @@ const SiliconCloud: ModelProviderCard = {
         output: 2,
       },
     },
+    {
+      contextWindowTokens: 65_536,
+      description:
+        'DeepSeek-R1 is a reinforcement-learning (RL) driven reasoning model that addresses repetitiveness and readability issues in the model. Before RL, DeepSeek-R1 introduced cold-start data to further optimize reasoning performance. It performs on par with OpenAI-o1 on math, code, and reasoning tasks, and improves overall results through carefully designed training methods.',
+      displayName: 'DeepSeek R1 (Pro)',
+      id: 'Pro/deepseek-ai/DeepSeek-R1',
+      pricing: {
+        currency: 'CNY',
+        input: 4,
+        output: 16,
+      },
+    },
+    {
+      contextWindowTokens: 65_536,
+      description:
+        'DeepSeek-V3 is a Mixture-of-Experts (MoE) language model with 671 billion parameters. It uses Multi-head Latent Attention (MLA) and the DeepSeekMoE architecture together with an auxiliary-loss-free load-balancing strategy to optimize inference and training efficiency. Pretrained on 14.8 trillion high-quality tokens and refined with supervised fine-tuning and reinforcement learning, DeepSeek-V3 outperforms other open-source models and approaches leading closed-source models.',
+      displayName: 'DeepSeek V3 (Pro)',
+      functionCall: true,
+      id: 'Pro/deepseek-ai/DeepSeek-V3',
+      pricing: {
+        currency: 'CNY',
+        input: 2,
+        output: 8,
+      },
+    },
     {
       contextWindowTokens: 32_768,
       description:
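The `pricing` blocks on the added cards give input and output rates in CNY. Assuming these are per-million-token rates (the diff itself does not state the unit), a request's cost is a weighted sum of prompt and completion tokens. A minimal TypeScript sketch; `Pricing` and `estimateCost` are illustrative helpers, not part of the package:

```typescript
// Shape of the pricing block carried by the added model cards.
interface Pricing {
  currency: string;
  input: number;  // assumed: cost per 1M input (prompt) tokens
  output: number; // assumed: cost per 1M output (completion) tokens
}

// Pricing of the new 'Pro/deepseek-ai/DeepSeek-R1' card, copied from the diff.
const deepSeekR1Pro: Pricing = { currency: 'CNY', input: 4, output: 16 };

// Estimate the cost of one request in the card's currency.
function estimateCost(p: Pricing, inputTokens: number, outputTokens: number): number {
  const PER_MILLION = 1_000_000;
  return (p.input * inputTokens + p.output * outputTokens) / PER_MILLION;
}

// 500k prompt tokens plus 250k completion tokens on R1 Pro:
console.log(estimateCost(deepSeekR1Pro, 500_000, 250_000)); // → 6 (CNY)
```

Note the asymmetry on both new cards: output tokens cost 4× input tokens, which matters for reasoning models like R1 that emit long chains of thought.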