@lobehub/chat 1.93.1 → 1.93.2
This diff shows the changes between two publicly released versions of the package, as published to a supported public registry. It is provided for informational purposes only and reflects the package contents exactly as they appear in the registry.
- package/CHANGELOG.md +25 -0
- package/README.md +8 -8
- package/README.zh-CN.md +2 -2
- package/changelog/v1.json +9 -0
- package/package.json +1 -1
- package/src/libs/model-runtime/perplexity/index.test.ts +4 -4
- package/src/libs/model-runtime/utils/streams/ollama.test.ts +59 -0
- package/src/libs/model-runtime/utils/streams/ollama.ts +14 -1
- package/src/libs/model-runtime/utils/streams/openai/openai.test.ts +163 -0
- package/src/libs/model-runtime/utils/streams/openai/openai.ts +18 -2
- package/src/libs/model-runtime/utils/streams/protocol.ts +12 -0
package/CHANGELOG.md
CHANGED
@@ -2,6 +2,31 @@

 # Changelog

+### [Version 1.93.2](https://github.com/lobehub/lobe-chat/compare/v1.93.1...v1.93.2)
+
+<sup>Released on **2025-06-09**</sup>
+
+#### ♻ Code Refactoring
+
+- **misc**: Refactor `<think>` & `</think>` handling.
+
+<br/>
+
+<details>
+<summary><kbd>Improvements and Fixes</kbd></summary>
+
+#### Code refactoring
+
+- **misc**: Refactor `<think>` & `</think>` handling, closes [#8121](https://github.com/lobehub/lobe-chat/issues/8121) ([04ac353](https://github.com/lobehub/lobe-chat/commit/04ac353))
+
+</details>
+
+<div align="right">
+
+[](#readme-top)
+
+</div>
+
 ### [Version 1.93.1](https://github.com/lobehub/lobe-chat/compare/v1.93.0...v1.93.1)

 <sup>Released on **2025-06-08**</sup>
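The refactor noted in this changelog routes streamed `<think>` … `</think>` segments to reasoning events and strips the tags themselves; the `ollama.ts` and `openai.ts` diffs below implement it. A minimal standalone sketch of the idea (the `splitThinkTags` helper and `ThinkState` interface are illustrative names, not part of the package):

```ts
// Sketch only: mirrors the tag handling added in this release, not the package's code.
interface ThinkState {
  thinkingInContent?: boolean;
}

const splitThinkTags = (content: string, state: ThinkState) => {
  // A chunk containing <think> switches the stream into reasoning mode;
  // a chunk containing </think> switches it back to regular text.
  if (content.includes('<think>')) {
    state.thinkingInContent = true;
  } else if (content.includes('</think>')) {
    state.thinkingInContent = false;
  }

  // Strip the tags and label what remains according to the current mode.
  return {
    data: content.replaceAll(/<\/?think>/g, ''),
    type: state.thinkingInContent ? 'reasoning' : 'text',
  };
};

const state: ThinkState = {};
splitThinkTags('<think>', state); // { data: '', type: 'reasoning' }
splitThinkTags('这是一个思考过程', state); // { data: '这是一个思考过程', type: 'reasoning' }
splitThinkTags('</think>', state); // { data: '', type: 'text' }
splitThinkTags('这是最终答案。', state); // { data: '这是最终答案。', type: 'text' }
```

Note that the chunks carrying the tags themselves come through as empty events of the new type, which is what the updated tests below expect.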
package/README.md
CHANGED
@@ -367,14 +367,14 @@ Our marketplace is not just a showcase platform but also a collaborative space.

 <!-- AGENT LIST -->

-| Recent Submits
-
-| [Academic Paper Reading Mentor](https://lobechat.com/discover/assistant/paper-understanding)<br/><sup>By **[AdijeShen](https://github.com/AdijeShen)** on **2025-05-09**</sup>
-| [Nutritional Advisor](https://lobechat.com/discover/assistant/nutritionist)<br/><sup>By **[egornomic](https://github.com/egornomic)** on **2025-04-15**</sup>
-| [
-| [Academic Paper Review Expert](https://lobechat.com/discover/assistant/academic-paper-overview)<br/><sup>By **[arvinxx](https://github.com/arvinxx)** on **2025-03-11**</sup>
-
-> 📊 Total agents: [<kbd>**
+| Recent Submits | Description |
+| --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| [Academic Paper Reading Mentor](https://lobechat.com/discover/assistant/paper-understanding)<br/><sup>By **[AdijeShen](https://github.com/AdijeShen)** on **2025-05-09**</sup> | Expert in explaining complex academic papers in simple and understandable language<br/>`academic-knowledge` `paper-analysis` |
+| [Nutritional Advisor](https://lobechat.com/discover/assistant/nutritionist)<br/><sup>By **[egornomic](https://github.com/egornomic)** on **2025-04-15**</sup> | Specializes in providing detailed nutritional information for food items.<br/>`nutrition` `food` `health` `information` |
+| [Rewritten in Translation Style](https://lobechat.com/discover/assistant/rewrite-in-a-translation-tone)<br/><sup>By **[q2019715](https://github.com/q2019715)** on **2025-03-13**</sup> | Rewrites a paragraph in a translation style<br/>`translation-style` `creative-writing` `language-style` `text-rewriting` `culture` |
+| [Academic Paper Review Expert](https://lobechat.com/discover/assistant/academic-paper-overview)<br/><sup>By **[arvinxx](https://github.com/arvinxx)** on **2025-03-11**</sup> | An academic research assistant skilled in high-quality literature retrieval and analysis<br/>`academic-research` `literature-search` `data-analysis` `information-extraction` `consulting` |
+
+> 📊 Total agents: [<kbd>**499**</kbd> ](https://lobechat.com/discover/assistants)

 <!-- AGENT LIST -->

package/README.zh-CN.md
CHANGED
@@ -359,11 +359,11 @@ LobeChat 的插件生态系统是其核心功能的重要扩展,它极大地
 | 最近新增 | 描述 |
 | --------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------- |
 | [学术论文阅读导师](https://lobechat.com/discover/assistant/paper-understanding)<br/><sup>By **[AdijeShen](https://github.com/AdijeShen)** on **2025-05-09**</sup> | 擅长将复杂学术论文通俗易懂讲解<br/>`学术知道` `论文解析` |
-| [营养顾问](https://lobechat.com/discover/assistant/nutritionist)<br/><sup>By **[egornomic](https://github.com/egornomic)** on **2025-04-15**</sup> |
+| [营养顾问](https://lobechat.com/discover/assistant/nutritionist)<br/><sup>By **[egornomic](https://github.com/egornomic)** on **2025-04-15**</sup> | 专注于提供食品项目的详细营养信息。<br/>`营养` `食品` `健康` `信息` |
 | [改写为翻译腔](https://lobechat.com/discover/assistant/rewrite-in-a-translation-tone)<br/><sup>By **[q2019715](https://github.com/q2019715)** on **2025-03-13**</sup> | 将一段话重写为翻译腔<br/>`翻译腔` `创意写作` `语言风格` `文段重写` `文化` |
 | [学术论文综述专家](https://lobechat.com/discover/assistant/academic-paper-overview)<br/><sup>By **[arvinxx](https://github.com/arvinxx)** on **2025-03-11**</sup> | 擅长高质量文献检索与分析的学术研究助手<br/>`学术研究` `文献检索` `数据分析` `信息提取` `咨询` |

-> 📊 Total agents: [<kbd>**
+> 📊 Total agents: [<kbd>**499**</kbd> ](https://lobechat.com/discover/assistants)

 <!-- AGENT LIST -->

package/changelog/v1.json
CHANGED
package/package.json
CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@lobehub/chat",
-  "version": "1.93.1",
+  "version": "1.93.2",
   "description": "Lobe Chat - an open-source, high-performance chatbot framework that supports speech synthesis, multimodal, and extensible Function Call plugin system. Supports one-click free deployment of your private ChatGPT/LLM web application.",
   "keywords": [
     "framework",
package/src/libs/model-runtime/perplexity/index.test.ts
CHANGED
@@ -231,16 +231,16 @@ describe('LobePerplexityAI', () => {
       expect(noSpeedStream).toEqual(
         [
           'id: 506d64fb-e7f2-4d94-b80f-158369e9446d',
-          'event:
-          'data: "
+          'event: reasoning',
+          'data: ""\n',
           'id: 506d64fb-e7f2-4d94-b80f-158369e9446d',
           'event: grounding',
           'data: {"citations":[{"title":"https://www.weather.com.cn/weather/101210101.shtml","url":"https://www.weather.com.cn/weather/101210101.shtml"},{"title":"https://tianqi.moji.com/weather/china/zhejiang/hangzhou","url":"https://tianqi.moji.com/weather/china/zhejiang/hangzhou"},{"title":"https://weather.cma.cn/web/weather/58457.html","url":"https://weather.cma.cn/web/weather/58457.html"},{"title":"https://tianqi.so.com/weather/101210101","url":"https://tianqi.so.com/weather/101210101"},{"title":"https://www.accuweather.com/zh/cn/hangzhou/106832/weather-forecast/106832","url":"https://www.accuweather.com/zh/cn/hangzhou/106832/weather-forecast/106832"},{"title":"https://www.hzqx.com","url":"https://www.hzqx.com"},{"title":"https://www.hzqx.com/pc/hztq/","url":"https://www.hzqx.com/pc/hztq/"}]}\n',
           'id: 506d64fb-e7f2-4d94-b80f-158369e9446d',
-          'event:
+          'event: reasoning',
           'data: "杭州今"\n',
           'id: 506d64fb-e7f2-4d94-b80f-158369e9446d',
-          'event:
+          'event: reasoning',
           'data: "天和未来几天的"\n',
           'id: 506d64fb-e7f2-4d94-b80f-158369e9446d',
           'event: usage',
package/src/libs/model-runtime/utils/streams/ollama.test.ts
CHANGED
@@ -7,6 +7,65 @@ import { OllamaStream } from './ollama';

 describe('OllamaStream', () => {
   describe('should transform Ollama stream to protocol stream', () => {
+    it('reasoning', async () => {
+      vi.spyOn(uuidModule, 'nanoid').mockReturnValueOnce('2');
+
+      const messages = [
+        '<think>',
+        '这是一个思考过程',
+        ',需要仔细分析问题。',
+        '</think>',
+        '根据分析,我的答案是:',
+        '这是最终答案。',
+      ];
+
+      const mockOllamaStream = new ReadableStream<ChatResponse>({
+        start(controller) {
+          messages.forEach((content) => {
+            controller.enqueue({ message: { content }, done: false } as ChatResponse);
+          });
+          controller.enqueue({ message: { content: '' }, done: true } as ChatResponse);
+          controller.close();
+        },
+      });
+
+      const protocolStream = OllamaStream(mockOllamaStream);
+
+      const decoder = new TextDecoder();
+      const chunks = [];
+
+      // @ts-ignore
+      for await (const chunk of protocolStream) {
+        chunks.push(decoder.decode(chunk, { stream: true }));
+      }
+
+      expect(chunks).toEqual(
+        [
+          'id: chat_2',
+          'event: reasoning',
+          `data: ""\n`,
+          'id: chat_2',
+          'event: reasoning',
+          `data: "这是一个思考过程"\n`,
+          'id: chat_2',
+          'event: reasoning',
+          `data: ",需要仔细分析问题。"\n`,
+          'id: chat_2',
+          'event: text',
+          `data: ""\n`,
+          'id: chat_2',
+          'event: text',
+          `data: "根据分析,我的答案是:"\n`,
+          'id: chat_2',
+          'event: text',
+          `data: "这是最终答案。"\n`,
+          'id: chat_2',
+          'event: stop',
+          `data: "finished"\n`,
+        ].map((line) => `${line}\n`)
+      );
+    });
+
     it('text', async () => {
       vi.spyOn(uuidModule, 'nanoid').mockReturnValueOnce('1');

package/src/libs/model-runtime/utils/streams/ollama.ts
CHANGED
@@ -32,7 +32,20 @@ const transformOllamaStream = (chunk: ChatResponse, stack: StreamContext): Strea
       type: 'tool_calls',
     };
   }
-
+
+  // 判断是否有 <think> 或 </think> 标签,更新 thinkingInContent 状态
+  if (chunk.message.content.includes('<think>')) {
+    stack.thinkingInContent = true;
+  } else if (chunk.message.content.includes('</think>')) {
+    stack.thinkingInContent = false;
+  }
+
+  // 清除 <think> 及 </think> 标签,并根据当前思考模式确定返回类型
+  return {
+    data: chunk.message.content.replaceAll(/<\/?think>/g, ''),
+    id: stack.id,
+    type: stack?.thinkingInContent ? 'reasoning' : 'text',
+  };
 };

 export const OllamaStream = (
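For reference, the `id:` / `event:` / `data:` lines asserted by the tests in this diff are a simple SSE-style framing of each protocol chunk. A rough sketch of that framing (`toSSELines` is a hypothetical helper, not the package's implementation):

```ts
// Sketch of the wire format the tests assert: three SSE-style lines per protocol chunk.
interface ProtocolChunk {
  id: string;
  type: string; // e.g. 'reasoning' | 'text' | 'stop'
  data: unknown;
}

const toSSELines = (chunk: ProtocolChunk): string =>
  [`id: ${chunk.id}`, `event: ${chunk.type}`, `data: ${JSON.stringify(chunk.data)}\n`]
    .map((line) => `${line}\n`)
    .join('');

// A reasoning chunk produced from a <think> segment of an Ollama response:
toSSELines({ id: 'chat_2', type: 'reasoning', data: '这是一个思考过程' });
// => "id: chat_2\n" + "event: reasoning\n" + 'data: "这是一个思考过程"\n\n'
```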
package/src/libs/model-runtime/utils/streams/openai/openai.test.ts
CHANGED
@@ -904,6 +904,169 @@ describe('OpenAIStream', () => {
   });

   describe('Reasoning', () => {
+    it('should handle <think></think> tags in streaming content', async () => {
+      const data = [
+        {
+          id: '1',
+          object: 'chat.completion.chunk',
+          created: 1737563070,
+          model: 'deepseek-reasoner',
+          system_fingerprint: 'fp_1c5d8833bc',
+          choices: [
+            {
+              index: 0,
+              delta: { content: '<think>' },
+              logprobs: null,
+              finish_reason: null,
+            },
+          ],
+        },
+        {
+          id: '1',
+          object: 'chat.completion.chunk',
+          created: 1737563070,
+          model: 'deepseek-reasoner',
+          system_fingerprint: 'fp_1c5d8833bc',
+          choices: [
+            {
+              index: 0,
+              delta: { content: '这是一个思考过程' },
+              logprobs: null,
+              finish_reason: null,
+            },
+          ],
+        },
+        {
+          id: '1',
+          object: 'chat.completion.chunk',
+          created: 1737563070,
+          model: 'deepseek-reasoner',
+          system_fingerprint: 'fp_1c5d8833bc',
+          choices: [
+            {
+              index: 0,
+              delta: { content: ',需要仔细分析问题。' },
+              logprobs: null,
+              finish_reason: null,
+            },
+          ],
+        },
+        {
+          id: '1',
+          object: 'chat.completion.chunk',
+          created: 1737563070,
+          model: 'deepseek-reasoner',
+          system_fingerprint: 'fp_1c5d8833bc',
+          choices: [
+            {
+              index: 0,
+              delta: { content: '</think>' },
+              logprobs: null,
+              finish_reason: null,
+            },
+          ],
+        },
+        {
+          id: '1',
+          object: 'chat.completion.chunk',
+          created: 1737563070,
+          model: 'deepseek-reasoner',
+          system_fingerprint: 'fp_1c5d8833bc',
+          choices: [
+            {
+              index: 0,
+              delta: { content: '根据分析,我的答案是:' },
+              logprobs: null,
+              finish_reason: null,
+            },
+          ],
+        },
+        {
+          id: '1',
+          object: 'chat.completion.chunk',
+          created: 1737563070,
+          model: 'deepseek-reasoner',
+          system_fingerprint: 'fp_1c5d8833bc',
+          choices: [
+            {
+              index: 0,
+              delta: { content: '这是最终答案。' },
+              logprobs: null,
+              finish_reason: null,
+            },
+          ],
+        },
+        {
+          id: '1',
+          object: 'chat.completion.chunk',
+          created: 1737563070,
+          model: 'deepseek-reasoner',
+          system_fingerprint: 'fp_1c5d8833bc',
+          choices: [
+            {
+              index: 0,
+              delta: { content: '' },
+              logprobs: null,
+              finish_reason: 'stop',
+            },
+          ],
+          usage: {
+            prompt_tokens: 10,
+            completion_tokens: 50,
+            total_tokens: 60,
+            prompt_tokens_details: { cached_tokens: 0 },
+            completion_tokens_details: { reasoning_tokens: 20 },
+            prompt_cache_hit_tokens: 0,
+            prompt_cache_miss_tokens: 10,
+          },
+        },
+      ];
+
+      const mockOpenAIStream = new ReadableStream({
+        start(controller) {
+          data.forEach((chunk) => {
+            controller.enqueue(chunk);
+          });
+          controller.close();
+        },
+      });
+
+      const protocolStream = OpenAIStream(mockOpenAIStream);
+      const decoder = new TextDecoder();
+      const chunks = [];
+
+      // @ts-ignore
+      for await (const chunk of protocolStream) {
+        chunks.push(decoder.decode(chunk, { stream: true }));
+      }
+
+      expect(chunks).toEqual(
+        [
+          'id: 1',
+          'event: reasoning',
+          `data: ""\n`,
+          'id: 1',
+          'event: reasoning',
+          `data: "这是一个思考过程"\n`,
+          'id: 1',
+          'event: reasoning',
+          `data: ",需要仔细分析问题。"\n`,
+          'id: 1',
+          'event: text',
+          `data: ""\n`,
+          'id: 1',
+          'event: text',
+          `data: "根据分析,我的答案是:"\n`,
+          'id: 1',
+          'event: text',
+          `data: "这是最终答案。"\n`,
+          'id: 1',
+          'event: usage',
+          `data: {"inputCacheMissTokens":10,"inputTextTokens":10,"outputReasoningTokens":20,"outputTextTokens":30,"totalInputTokens":10,"totalOutputTokens":50,"totalTokens":60}\n`,
+        ].map((i) => `${i}\n`),
+      );
+    });
+
     it('should handle reasoning event in official DeepSeek api', async () => {
       const data = [
         {
package/src/libs/model-runtime/utils/streams/openai/openai.ts
CHANGED
@@ -210,6 +210,17 @@ const transformOpenAIStream = (
     }

     if (typeof content === 'string') {
+      // 清除 <think> 及 </think> 标签
+      const thinkingContent = content.replaceAll(/<\/?think>/g, '');
+
+      // 判断是否有 <think> 或 </think> 标签,更新 thinkingInContent 状态
+      if (content.includes('<think>')) {
+        streamContext.thinkingInContent = true;
+      } else if (content.includes('</think>')) {
+        streamContext.thinkingInContent = false;
+      }
+
+      // 判断是否有 citations 内容,更新 returnedCitation 状态
       if (!streamContext?.returnedCitation) {
         const citations =
           // in Perplexity api, the citation is in every chunk, but we only need to return it once
@@ -237,12 +248,17 @@ const transformOpenAIStream = (
             id: chunk.id,
             type: 'grounding',
           },
-          { data:
+          { data: thinkingContent, id: chunk.id, type: streamContext?.thinkingInContent ? 'reasoning' : 'text' },
         ];
       }
     }

-
+      // 根据当前思考模式确定返回类型
+      return {
+        data: thinkingContent,
+        id: chunk.id,
+        type: streamContext?.thinkingInContent ? 'reasoning' : 'text',
+      };
   }
 }

package/src/libs/model-runtime/utils/streams/protocol.ts
CHANGED
@@ -31,6 +31,18 @@ export interface StreamContext {
     id: string;
     name: string;
   };
+  /**
+   * Indicates whether the current state is within a "thinking" segment of the model output
+   * (e.g., when processing lmstudio responses).
+   *
+   * When parsing output containing <think> and </think> tags:
+   * - Set to `true` upon encountering a <think> tag (entering reasoning mode)
+   * - Set to `false` upon encountering a </think> tag (exiting reasoning mode)
+   *
+   * While `thinkingInContent` is `true`, subsequent content should be stored in `reasoning_content`.
+   * When `false`, content should be stored in the regular `content` field.
+   */
+  thinkingInContent?: boolean;
   tool?: {
     id: string;
     index: number;
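Per the `thinkingInContent` doc comment above, a consumer splits the two event types into separate fields. A rough consumer-side sketch (the `AssembledMessage` shape and `appendChunk` helper are assumptions for illustration, not part of the package):

```ts
// Sketch only: accumulate reasoning vs. regular content as the doc comment describes.
interface AssembledMessage {
  content: string;
  reasoning_content: string;
}

const appendChunk = (
  message: AssembledMessage,
  event: 'reasoning' | 'text',
  data: string,
): AssembledMessage =>
  event === 'reasoning'
    ? { ...message, reasoning_content: message.reasoning_content + data }
    : { ...message, content: message.content + data };

// Replaying the event sequence asserted by the tests in this diff:
const events: Array<['reasoning' | 'text', string]> = [
  ['reasoning', ''],
  ['reasoning', '这是一个思考过程'],
  ['reasoning', ',需要仔细分析问题。'],
  ['text', ''],
  ['text', '根据分析,我的答案是:'],
  ['text', '这是最终答案。'],
];

const result = events.reduce<AssembledMessage>(
  (acc, [event, data]) => appendChunk(acc, event, data),
  { content: '', reasoning_content: '' },
);
// result.reasoning_content === '这是一个思考过程,需要仔细分析问题。'
// result.content === '根据分析,我的答案是:这是最终答案。'
```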