@lobehub/chat 1.21.12 → 1.21.14
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +50 -0
- package/docs/usage/providers/ai21.mdx +63 -0
- package/docs/usage/providers/ai21.zh-CN.mdx +63 -0
- package/docs/usage/providers/ai360.mdx +63 -0
- package/docs/usage/providers/ai360.zh-CN.mdx +63 -0
- package/docs/usage/providers/fireworksai.mdx +75 -0
- package/docs/usage/providers/fireworksai.zh-CN.mdx +75 -0
- package/docs/usage/providers/github.mdx +92 -0
- package/docs/usage/providers/github.zh-CN.mdx +91 -0
- package/docs/usage/providers/hunyuan.mdx +71 -0
- package/docs/usage/providers/hunyuan.zh-CN.mdx +71 -0
- package/docs/usage/providers/siliconcloud.mdx +37 -21
- package/docs/usage/providers/siliconcloud.zh-CN.mdx +36 -18
- package/docs/usage/providers/spark.mdx +72 -0
- package/docs/usage/providers/spark.zh-CN.mdx +71 -0
- package/docs/usage/providers/upstage.mdx +64 -0
- package/docs/usage/providers/upstage.zh-CN.mdx +64 -0
- package/docs/usage/providers/wenxin.mdx +73 -0
- package/docs/usage/providers/wenxin.zh-CN.mdx +73 -0
- package/docs/usage/providers/{01ai.mdx → zeroone.mdx} +15 -16
- package/package.json +1 -1
- package/src/app/(main)/chat/(workspace)/@portal/Artifacts/Body/Renderer/index.tsx +5 -0
- package/src/libs/agent-runtime/AgentRuntime.ts +1 -1
- package/src/libs/agent-runtime/google/index.ts +2 -2
- package/src/libs/agent-runtime/utils/streams/anthropic.ts +2 -8
- package/src/libs/agent-runtime/utils/streams/azureOpenai.ts +2 -8
- package/src/libs/agent-runtime/utils/streams/google-ai.ts +1 -12
- package/src/libs/agent-runtime/utils/streams/ollama.ts +2 -8
- package/src/libs/agent-runtime/utils/streams/openai.ts +2 -8
- package/src/libs/agent-runtime/utils/streams/protocol.ts +7 -0
- package/src/libs/agent-runtime/utils/streams/qwen.ts +2 -3
- package/src/libs/agent-runtime/utils/streams/wenxin.test.ts +7 -3
- package/src/libs/agent-runtime/utils/streams/wenxin.ts +0 -8
- package/src/libs/agent-runtime/wenxin/index.ts +3 -2
- package/src/libs/agent-runtime/zhipu/index.test.ts +7 -24
- package/src/libs/agent-runtime/zhipu/index.ts +21 -99
- package/docs/usage/providers/{01ai.zh-CN.mdx → zeroone.zh-CN.mdx} +0 -0
package/docs/usage/providers/wenxin.zh-CN.mdx
ADDED
@@ -0,0 +1,73 @@
+---
+title: Using Wenxin Qianfan in LobeChat
+description: Learn how to configure and use the Wenxin Qianfan API Key in LobeChat so you can start conversing and interacting.
+tags:
+  - LobeChat
+  - Baidu
+  - Wenxin Qianfan
+  - API Key
+  - Web UI
+---
+
+# Using Wenxin Qianfan in LobeChat
+
+<Image
+  cover
+  src={
+    'https://github.com/user-attachments/assets/e43dacf6-313e-499c-8888-f1065c53e424'
+  }
+/>
+
+[Wenxin Qianfan](https://qianfan.cloud.baidu.com/) is a large language model platform launched by Baidu. It supports a wide range of application scenarios, including literary creation, business copywriting, and mathematical and logical reasoning. With deep cross-modal and cross-lingual semantic understanding and generation capabilities, it is widely used in search and Q&A, content creation, and smart office applications.
+
+This guide walks you through how to use Wenxin Qianfan in LobeChat.
+
+<Steps>
+### Step 1: Obtain a Wenxin Qianfan API Key
+
+- Register and sign in to the [Baidu AI Cloud Console](https://console.bce.baidu.com/)
+- Open `Baidu AI Cloud Qianfan ModelBuilder`
+- Choose `Application Access` from the left-hand menu
+- Create an application
+
+<Image
+  alt={'Create an application'}
+  inStep
+  src={'https://github.com/user-attachments/assets/927b1040-e23f-4919-92e2-80a400db8327'}
+/>
+
+- After creation, obtain the `API Key` and `Secret Key` and keep them safe
+
+<Image
+  alt={'Save the keys'}
+  inStep
+  src={'https://github.com/user-attachments/assets/242c8134-8de0-4a02-b302-6bd8b19ced3e'}
+/>
+
+### Step 2: Configure Wenxin Qianfan in LobeChat
+
+- Go to the `Settings` page in LobeChat
+- Find the `Wenxin Qianfan` option under `Language Model`
+
+<Image
+  alt={'Enter the API key'}
+  inStep
+  src={'https://github.com/user-attachments/assets/e3995de7-38d9-489b-80a2-434477018469'}
+/>
+
+- Enter the `API Key` and `Secret Key` you obtained
+- Choose a Wenxin Qianfan model for your AI assistant and start the conversation
+
+<Image
+  alt={'Choose a Wenxin Qianfan model and start the conversation'}
+  inStep
+  src={'https://github.com/user-attachments/assets/b6e6a3eb-13c6-46f0-9c7c-69a20deae30f'}
+/>
+
+<Callout type={'warning'}>
+  During use you may need to pay the API service provider; please refer to Wenxin Qianfan's pricing policy.
+</Callout>
+
+</Steps>
+
+You can now use the models provided by Wenxin Qianfan for conversations in LobeChat.
package/docs/usage/providers/{01ai.mdx → zeroone.mdx}
CHANGED
@@ -1,39 +1,38 @@
 ---
-title: Using
+title: Using 01 AI API Key in LobeChat
 description: >-
-  Learn how to integrate and use
-  instructions. Obtain an API key, configure
+  Learn how to integrate and use 01 AI in LobeChat with step-by-step
+  instructions. Obtain an API key, configure 01 AI, and start
   conversations with AI models.
 tags:
   - 01.AI
-  - Zero One AI
   - Web UI
   - API key
   - AI models
 ---
 
-# Using
+# Using 01 AI in LobeChat
 
 <Image
-  alt={'Using
+  alt={'Using 01 AI in LobeChat'}
   cover
   src={'https://github.com/lobehub/lobe-chat/assets/34400653/4485fbc3-c309-4c4e-83ee-cb82392307a1'}
 />
 
-[
+[01 AI](https://www.01.ai/) is a global company dedicated to AI 2.0 large model technology and applications. Its billion-parameter Yi-Large closed-source model, when evaluated on Stanford University's English ranking AlpacaEval 2.0, is on par with GPT-4.
 
-This document will guide you on how to use Zero One AI in LobeChat:
+This document will guide you on how to use 01 AI in LobeChat:
 
 <Steps>
 
-### Step 1: Obtain
+### Step 1: Obtain 01 AI API Key
 
-- Register and log in to the [
+- Register and log in to the [01 AI Large Model Open Platform](https://platform.lingyiwanwu.com/)
 - Go to the `Dashboard` and access the `API Key Management` menu
 - A system-generated API key has been created for you automatically, or you can create a new one on this interface
 
 <Image
-  alt={'Create
+  alt={'Create 01 AI API Key'}
   inStep
   src={'https://github.com/lobehub/lobe-chat/assets/34400653/72f165f4-d529-4f01-a3ac-163c66e5ea73'}
 />
@@ -55,10 +54,10 @@ This document will guide you on how to use Zero One AI in LobeChat:
   src={'https://github.com/lobehub/lobe-chat/assets/34400653/f892fe64-c734-4944-91ff-9916a41bd1c9'}
 />
 
-### Step 2: Configure
+### Step 2: Configure 01 AI in LobeChat
 
 - Access the `Settings` interface in LobeChat
-- Find the setting for `
+- Find the setting for `01 AI` under `Language Model`
 
 <Image
   alt={'Enter API Key'}
@@ -66,7 +65,7 @@ This document will guide you on how to use Zero One AI in LobeChat:
   src={'https://github.com/lobehub/lobe-chat/assets/34400653/f539d104-6d64-4cc7-8781-3b36b00d32d0'}
 />
 
-- Open
+- Open 01 AI and enter the obtained API key
 - Choose a 01.AI model for your AI assistant to start the conversation
 
 <Image
@@ -76,10 +75,10 @@ This document will guide you on how to use Zero One AI in LobeChat:
 />
 
 <Callout type={'warning'}>
-  During usage, you may need to pay the API service provider. Please refer to
+  During usage, you may need to pay the API service provider. Please refer to 01 AI's relevant
   fee policies.
 </Callout>
 
 </Steps>
 
-You can now use the models provided by
+You can now use the models provided by 01 AI for conversations in LobeChat.
package/package.json
CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@lobehub/chat",
-  "version": "1.21.12",
+  "version": "1.21.14",
   "description": "Lobe Chat - an open-source, high-performance chatbot framework that supports speech synthesis, multimodal, and extensible Function Call plugin system. Supports one-click free deployment of your private ChatGPT/LLM web application.",
   "keywords": [
     "framework",
package/src/app/(main)/chat/(workspace)/@portal/Artifacts/Body/Renderer/index.tsx
CHANGED
@@ -1,3 +1,4 @@
+import { Markdown } from '@lobehub/ui';
 import dynamic from 'next/dynamic';
 import { memo } from 'react';
 
@@ -16,6 +17,10 @@ const Renderer = memo<{ content: string; type?: string }>(({ content, type }) =>
       return <SVGRender content={content} />;
     }
 
+    case 'text/markdown': {
+      return <Markdown>{content}</Markdown>;
+    }
+
     default: {
       return <HTMLRenderer htmlContent={content} />;
     }
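The only functional change here is the new `text/markdown` branch: markdown artifacts now render through `@lobehub/ui`'s `Markdown` component instead of falling through to the raw HTML renderer. A minimal sketch of the resulting dispatch; the sibling renderers and the SVG case label are hypothetical stand-ins, since the diff only shows the branch bodies (the real components are lazy-loaded via `next/dynamic`):

```tsx
import { Markdown } from '@lobehub/ui';
import { memo } from 'react';

// Hypothetical placeholders for the real lazy-loaded renderers.
const SVGRender = ({ content }: { content: string }) => (
  <div dangerouslySetInnerHTML={{ __html: content }} />
);
const HTMLRenderer = ({ htmlContent }: { htmlContent: string }) => (
  <iframe sandbox="" srcDoc={htmlContent} title="artifact" />
);

const Renderer = memo<{ content: string; type?: string }>(({ content, type }) => {
  switch (type) {
    // assumed MIME type for the SVG branch
    case 'image/svg+xml': {
      return <SVGRender content={content} />;
    }

    // new in this release: markdown artifacts get a dedicated renderer
    case 'text/markdown': {
      return <Markdown>{content}</Markdown>;
    }

    // everything else still falls back to raw HTML rendering
    default: {
      return <HTMLRenderer htmlContent={content} />;
    }
  }
});

export default Renderer;
```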
package/src/libs/agent-runtime/google/index.ts
CHANGED
@@ -27,7 +27,7 @@ import { ModelProvider } from '../types/type';
 import { AgentRuntimeError } from '../utils/createError';
 import { debugStream } from '../utils/debugStream';
 import { StreamingResponse } from '../utils/response';
-import { GoogleGenerativeAIStream, googleGenAIResultToStream } from '../utils/streams';
+import { GoogleGenerativeAIStream, convertIterableToStream } from '../utils/streams';
 import { parseDataUri } from '../utils/uriParser';
 
 enum HarmCategory {
@@ -97,7 +97,7 @@ export class LobeGoogleAI implements LobeRuntimeAI {
       tools: this.buildGoogleTools(payload.tools),
     });
 
-    const googleStream = googleGenAIResultToStream(geminiStreamResult);
+    const googleStream = convertIterableToStream(geminiStreamResult.stream);
     const [prod, useForDebug] = googleStream.tee();
 
     if (process.env.DEBUG_GOOGLE_CHAT_COMPLETION === '1') {
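For context on why the call site now passes `.stream`: in `@google/generative-ai`, `generateContentStream` resolves to a `GenerateContentStreamResult` whose `stream` field is the raw `AsyncIterable` of `EnhancedGenerateContentResponse` chunks, and the removed `googleGenAIResultToStream` wrapper used to reach into `.stream` itself. A hedged sketch; the model name and prompt are placeholders:

```ts
import { GoogleGenerativeAI } from '@google/generative-ai';

import { convertIterableToStream } from '../utils/streams'; // package-internal helper

const genAI = new GoogleGenerativeAI(process.env.GOOGLE_API_KEY!);
const model = genAI.getGenerativeModel({ model: 'gemini-1.5-flash' });

// generateContentStream returns { stream, response }, so the generic helper
// is handed the bare AsyncIterable rather than the whole result object.
const result = await model.generateContentStream('Say hello');
const readable = convertIterableToStream(result.stream);
// readable is a ReadableStream<EnhancedGenerateContentResponse> that can be
// tee()'d for debugging and piped through the SSE transformers.
```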
package/src/libs/agent-runtime/utils/streams/anthropic.ts
CHANGED
@@ -1,6 +1,5 @@
 import Anthropic from '@anthropic-ai/sdk';
 import type { Stream } from '@anthropic-ai/sdk/streaming';
-import { readableFromAsyncIterable } from 'ai';
 
 import { ChatStreamCallbacks } from '../../types';
 import {
@@ -8,6 +7,7 @@ import {
   StreamProtocolToolCallChunk,
   StreamStack,
   StreamToolCallChunkData,
+  convertIterableToStream,
   createCallbacksTransformer,
   createSSEProtocolTransformer,
 } from './protocol';
@@ -96,12 +96,6 @@ export const transformAnthropicStream = (
   }
 };
 
-const chatStreamable = async function* (stream: AsyncIterable<Anthropic.MessageStreamEvent>) {
-  for await (const response of stream) {
-    yield response;
-  }
-};
-
 export const AnthropicStream = (
   stream: Stream<Anthropic.MessageStreamEvent> | ReadableStream,
   callbacks?: ChatStreamCallbacks,
@@ -109,7 +103,7 @@ export const AnthropicStream = (
   const streamStack: StreamStack = { id: '' };
 
   const readableStream =
-    stream instanceof ReadableStream ? stream : readableFromAsyncIterable(chatStreamable(stream));
+    stream instanceof ReadableStream ? stream : convertIterableToStream(stream);
 
   return readableStream
     .pipeThrough(createSSEProtocolTransformer(transformAnthropicStream, streamStack))
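`AnthropicStream` keeps its dual-input contract: it accepts either the SDK's `Stream` of message events or an already-normalized `ReadableStream`; only the non-ReadableStream path changed, now routing through the shared `convertIterableToStream`. A hedged usage sketch against the Anthropic SDK; the model name and callback are placeholders:

```ts
import Anthropic from '@anthropic-ai/sdk';

import { AnthropicStream } from './anthropic'; // package-internal helper

const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

// With stream: true, the SDK returns an async-iterable Stream of events.
const stream = await anthropic.messages.create({
  max_tokens: 1024,
  messages: [{ content: 'Hello', role: 'user' }],
  model: 'claude-3-5-sonnet-20240620',
  stream: true,
});

// Either input shape works; iterables are converted internally before
// being piped through the SSE protocol and callbacks transformers.
const sse = AnthropicStream(stream, { onText: (text) => console.log(text) });
```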
package/src/libs/agent-runtime/utils/streams/azureOpenai.ts
CHANGED
@@ -1,5 +1,4 @@
 import { ChatCompletions, ChatCompletionsFunctionToolCall } from '@azure/openai';
-import { readableFromAsyncIterable } from 'ai';
 import OpenAI from 'openai';
 import type { Stream } from 'openai/streaming';
 
@@ -9,6 +8,7 @@ import {
   StreamProtocolToolCallChunk,
   StreamStack,
   StreamToolCallChunkData,
+  convertIterableToStream,
   createCallbacksTransformer,
   createSSEProtocolTransformer,
 } from './protocol';
@@ -69,19 +69,13 @@ const transformOpenAIStream = (chunk: ChatCompletions, stack: StreamStack): Stre
   };
 };
 
-const chatStreamable = async function* (stream: AsyncIterable<OpenAI.ChatCompletionChunk>) {
-  for await (const response of stream) {
-    yield response;
-  }
-};
-
 export const AzureOpenAIStream = (
   stream: Stream<OpenAI.ChatCompletionChunk> | ReadableStream,
   callbacks?: ChatStreamCallbacks,
 ) => {
   const stack: StreamStack = { id: '' };
   const readableStream =
-    stream instanceof ReadableStream ? stream : readableFromAsyncIterable(chatStreamable(stream));
+    stream instanceof ReadableStream ? stream : convertIterableToStream(stream);
 
   return readableStream
     .pipeThrough(createSSEProtocolTransformer(transformOpenAIStream, stack))
package/src/libs/agent-runtime/utils/streams/google-ai.ts
CHANGED
@@ -1,8 +1,4 @@
-import {
-  EnhancedGenerateContentResponse,
-  GenerateContentStreamResult,
-} from '@google/generative-ai';
-import { readableFromAsyncIterable } from 'ai';
+import { EnhancedGenerateContentResponse } from '@google/generative-ai';
 
 import { nanoid } from '@/utils/uuid';
 
@@ -11,7 +7,6 @@ import {
   StreamProtocolChunk,
   StreamStack,
   StreamToolCallChunkData,
-  chatStreamable,
   createCallbacksTransformer,
   createSSEProtocolTransformer,
   generateToolCallId,
@@ -50,12 +45,6 @@ const transformGoogleGenerativeAIStream = (
   };
 };
 
-// only use for debug
-export const googleGenAIResultToStream = (stream: GenerateContentStreamResult) => {
-  // make the response to the streamable format
-  return readableFromAsyncIterable(chatStreamable(stream.stream));
-};
-
 export const GoogleGenerativeAIStream = (
   rawStream: ReadableStream<EnhancedGenerateContentResponse>,
   callbacks?: ChatStreamCallbacks,
package/src/libs/agent-runtime/utils/streams/ollama.ts
CHANGED
@@ -1,4 +1,3 @@
-import { readableFromAsyncIterable } from 'ai';
 import { ChatResponse } from 'ollama/browser';
 
 import { ChatStreamCallbacks } from '@/libs/agent-runtime';
@@ -7,6 +6,7 @@ import { nanoid } from '@/utils/uuid';
 import {
   StreamProtocolChunk,
   StreamStack,
+  convertIterableToStream,
   createCallbacksTransformer,
   createSSEProtocolTransformer,
 } from './protocol';
@@ -20,19 +20,13 @@ const transformOllamaStream = (chunk: ChatResponse, stack: StreamStack): StreamP
   return { data: chunk.message.content, id: stack.id, type: 'text' };
 };
 
-const chatStreamable = async function* (stream: AsyncIterable<ChatResponse>) {
-  for await (const response of stream) {
-    yield response;
-  }
-};
-
 export const OllamaStream = (
   res: AsyncIterable<ChatResponse>,
   cb?: ChatStreamCallbacks,
 ): ReadableStream<string> => {
   const streamStack: StreamStack = { id: 'chat_' + nanoid() };
 
-  return readableFromAsyncIterable(chatStreamable(res))
+  return convertIterableToStream(res)
     .pipeThrough(createSSEProtocolTransformer(transformOllamaStream, streamStack))
     .pipeThrough(createCallbacksTransformer(cb));
 };
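`OllamaStream` differs slightly from the other provider streams: it only ever receives an `AsyncIterable<ChatResponse>`, so there is no `instanceof ReadableStream` guard and the conversion is unconditional. A hedged usage sketch with the `ollama` browser client; the host and model name are placeholders:

```ts
import { Ollama } from 'ollama/browser';

import { OllamaStream } from './ollama'; // package-internal helper

const client = new Ollama({ host: 'http://127.0.0.1:11434' });

// chat({ stream: true }) yields ChatResponse chunks as an async iterable,
// which OllamaStream now normalizes via the shared convertIterableToStream.
const res = await client.chat({
  messages: [{ content: 'Hello', role: 'user' }],
  model: 'llama3.1',
  stream: true,
});

const readable = OllamaStream(res, { onText: (text) => console.log(text) });
```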
package/src/libs/agent-runtime/utils/streams/openai.ts
CHANGED
@@ -1,4 +1,3 @@
-import { readableFromAsyncIterable } from 'ai';
 import OpenAI from 'openai';
 import type { Stream } from 'openai/streaming';
 
@@ -10,6 +9,7 @@ import {
   StreamProtocolToolCallChunk,
   StreamStack,
   StreamToolCallChunkData,
+  convertIterableToStream,
   createCallbacksTransformer,
   createSSEProtocolTransformer,
   generateToolCallId,
@@ -105,12 +105,6 @@ export const transformOpenAIStream = (
   }
 };
 
-const chatStreamable = async function* (stream: AsyncIterable<OpenAI.ChatCompletionChunk>) {
-  for await (const response of stream) {
-    yield response;
-  }
-};
-
 export const OpenAIStream = (
   stream: Stream<OpenAI.ChatCompletionChunk> | ReadableStream,
   callbacks?: ChatStreamCallbacks,
@@ -118,7 +112,7 @@ export const OpenAIStream = (
   const streamStack: StreamStack = { id: '' };
 
   const readableStream =
-    stream instanceof ReadableStream ? stream : readableFromAsyncIterable(chatStreamable(stream));
+    stream instanceof ReadableStream ? stream : convertIterableToStream(stream);
 
   return readableStream
     .pipeThrough(createSSEProtocolTransformer(transformOpenAIStream, streamStack))
package/src/libs/agent-runtime/utils/streams/protocol.ts
CHANGED
@@ -1,3 +1,5 @@
+import { readableFromAsyncIterable } from 'ai';
+
 import { ChatStreamCallbacks } from '@/libs/agent-runtime';
 
 export interface StreamStack {
@@ -42,6 +44,11 @@ export const chatStreamable = async function* <T>(stream: AsyncIterable<T>) {
   }
 };
 
+// make the response to the streamable format
+export const convertIterableToStream = <T>(stream: AsyncIterable<T>) => {
+  return readableFromAsyncIterable(chatStreamable(stream));
+};
+
 export const createSSEProtocolTransformer = (
   transformer: (chunk: any, stack: StreamStack) => StreamProtocolChunk,
   streamStack?: StreamStack,
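This is the heart of the release: the `chatStreamable`/`readableFromAsyncIterable` pairing that the Anthropic, Azure, Google, Ollama, OpenAI, Qwen, and Wenxin streams each re-implemented is now a single `convertIterableToStream` export, and every provider's `stream instanceof ReadableStream ? stream : …` guard calls it. For readers without the `ai` package at hand, a dependency-free sketch of what the helper amounts to (the real code simply delegates to `readableFromAsyncIterable`):

```ts
// Generic pass-through generator, as in protocol.ts.
export const chatStreamable = async function* <T>(stream: AsyncIterable<T>) {
  for await (const response of stream) {
    yield response;
  }
};

// Minimal equivalent of readableFromAsyncIterable(chatStreamable(stream)).
export const convertIterableToStream = <T>(stream: AsyncIterable<T>): ReadableStream<T> => {
  const iterator = chatStreamable(stream)[Symbol.asyncIterator]();
  return new ReadableStream<T>({
    async pull(controller) {
      const { done, value } = await iterator.next();
      if (done) controller.close();
      else controller.enqueue(value);
    },
    async cancel(reason) {
      // propagate cancellation into the source iterable
      await iterator.return?.(reason);
    },
  });
};

// Usage: any SDK that exposes chunks as an async iterable can be normalized
// the same way before piping through the SSE transformers.
async function* fakeChunks() {
  yield { text: 'hello' };
  yield { text: 'world' };
}
const readable = convertIterableToStream(fakeChunks());
```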
package/src/libs/agent-runtime/utils/streams/qwen.ts
CHANGED
@@ -1,4 +1,3 @@
-import { readableFromAsyncIterable } from 'ai';
 import { ChatCompletionContentPartText } from 'ai/prompts';
 import OpenAI from 'openai';
 import { ChatCompletionContentPart } from 'openai/resources/index.mjs';
@@ -9,7 +8,7 @@ import {
   StreamProtocolChunk,
   StreamProtocolToolCallChunk,
   StreamToolCallChunkData,
-  chatStreamable,
+  convertIterableToStream,
   createCallbacksTransformer,
   createSSEProtocolTransformer,
   generateToolCallId,
@@ -86,7 +85,7 @@ export const QwenAIStream = (
   callbacks?: ChatStreamCallbacks,
 ) => {
   const readableStream =
-    stream instanceof ReadableStream ? stream : readableFromAsyncIterable(chatStreamable(stream));
+    stream instanceof ReadableStream ? stream : convertIterableToStream(stream);
 
   return readableStream
     .pipeThrough(createSSEProtocolTransformer(transformQwenStream))
package/src/libs/agent-runtime/utils/streams/wenxin.test.ts
CHANGED
@@ -2,8 +2,9 @@ import { describe, expect, it, vi } from 'vitest';
 
 import * as uuidModule from '@/utils/uuid';
 
+import { convertIterableToStream } from '../../utils/streams/protocol';
 import { ChatResp } from '../../wenxin/type';
-import { WenxinResultToStream, WenxinStream } from './wenxin';
+import { WenxinStream } from './wenxin';
 
 const dataStream = [
   {
@@ -95,7 +96,7 @@ describe('WenxinStream', () => {
       },
     };
 
-    const stream = WenxinResultToStream(mockWenxinStream);
+    const stream = convertIterableToStream(mockWenxinStream);
 
     const onStartMock = vi.fn();
     const onTextMock = vi.fn();
@@ -142,7 +143,10 @@ describe('WenxinStream', () => {
 
     expect(onStartMock).toHaveBeenCalledTimes(1);
    expect(onTextMock).toHaveBeenNthCalledWith(1, '"当然可以,"');
-    expect(onTextMock).toHaveBeenNthCalledWith(
+    expect(onTextMock).toHaveBeenNthCalledWith(
+      2,
+      '"以下是一些建议的自驾游路线,它们涵盖了各种不同的风景和文化体验:\\n\\n1. **西安-敦煌历史文化之旅**:\\n\\n\\n\\t* 路线:西安"',
+    );
     expect(onTokenMock).toHaveBeenCalledTimes(6);
     expect(onCompletionMock).toHaveBeenCalledTimes(1);
   });
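With `WenxinResultToStream` gone, the test builds its fixture through the shared helper instead. A sketch of the same pattern for mocking any iterable-based provider in a test; the chunk shape below is abbreviated from the file's `dataStream` fixture, and the real `ChatResp` carries more fields:

```ts
import { convertIterableToStream } from '../../utils/streams/protocol';

// Abbreviated chunk shape; assumption, not the full ChatResp type.
interface MockChunk {
  is_end: boolean;
  result: string;
}

const dataStream: MockChunk[] = [
  { is_end: false, result: '当然可以,' },
  { is_end: true, result: '' },
];

// Turn the fixture array into the AsyncIterable the runtime would receive...
const mockWenxinStream = (async function* () {
  for (const chunk of dataStream) yield chunk;
})();

// ...then normalize it exactly as the production code does.
const stream = convertIterableToStream(mockWenxinStream);
```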
package/src/libs/agent-runtime/utils/streams/wenxin.ts
CHANGED
@@ -1,5 +1,3 @@
-import { readableFromAsyncIterable } from 'ai';
-
 import { ChatStreamCallbacks } from '@/libs/agent-runtime';
 import { nanoid } from '@/utils/uuid';
 
@@ -7,7 +5,6 @@ import { ChatResp } from '../../wenxin/type';
 import {
   StreamProtocolChunk,
   StreamStack,
-  chatStreamable,
   createCallbacksTransformer,
   createSSEProtocolTransformer,
 } from './protocol';
@@ -29,11 +26,6 @@ const transformERNIEBotStream = (chunk: ChatResp): StreamProtocolChunk => {
   };
 };
 
-export const WenxinResultToStream = (stream: AsyncIterable<ChatResp>) => {
-  // make the response to the streamable format
-  return readableFromAsyncIterable(chatStreamable(stream));
-};
-
 export const WenxinStream = (
   rawStream: ReadableStream<ChatResp>,
   callbacks?: ChatStreamCallbacks,
package/src/libs/agent-runtime/wenxin/index.ts
CHANGED
@@ -10,7 +10,8 @@ import { ChatCompetitionOptions, ChatStreamPayload } from '../types';
 import { AgentRuntimeError } from '../utils/createError';
 import { debugStream } from '../utils/debugStream';
 import { StreamingResponse } from '../utils/response';
-import { WenxinResultToStream, WenxinStream } from '../utils/streams/wenxin';
+import { convertIterableToStream } from '../utils/streams';
+import { WenxinStream } from '../utils/streams/wenxin';
 import { ChatResp } from './type';
 
 interface ChatErrorCode {
@@ -46,7 +47,7 @@ export class LobeWenxinAI implements LobeRuntimeAI {
       payload.model,
     );
 
-    const wenxinStream = WenxinResultToStream(result as AsyncIterable<ChatResp>);
+    const wenxinStream = convertIterableToStream(result as AsyncIterable<ChatResp>);
 
     const [prod, useForDebug] = wenxinStream.tee();
 
package/src/libs/agent-runtime/zhipu/index.test.ts
CHANGED
@@ -2,7 +2,7 @@
 import { OpenAI } from 'openai';
 import { Mock, afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
 
-import { ChatStreamCallbacks, LobeOpenAI } from '@/libs/agent-runtime';
+import { ChatStreamCallbacks, LobeOpenAI, LobeOpenAICompatibleRuntime } from '@/libs/agent-runtime';
 import * as debugStreamModule from '@/libs/agent-runtime/utils/debugStream';
 
 import * as authTokenModule from './authToken';
@@ -24,28 +24,11 @@ describe('LobeZhipuAI', () => {
     vi.restoreAllMocks();
   });
 
-  describe('fromAPIKey', () => {
-    it('should correctly initialize with an API key', async () => {
-      const lobeZhipuAI = await LobeZhipuAI.fromAPIKey({ apiKey: 'test_api_key' });
-      expect(lobeZhipuAI).toBeInstanceOf(LobeZhipuAI);
-      expect(lobeZhipuAI.baseURL).toEqual('https://open.bigmodel.cn/api/paas/v4');
-    });
-
-    it('should throw an error if API key is invalid', async () => {
-      vi.spyOn(authTokenModule, 'generateApiToken').mockRejectedValue(new Error('Invalid API Key'));
-      try {
-        await LobeZhipuAI.fromAPIKey({ apiKey: 'asd' });
-      } catch (e) {
-        expect(e).toEqual({ errorType: invalidErrorType });
-      }
-    });
-  });
-
   describe('chat', () => {
-    let instance: LobeZhipuAI;
+    let instance: LobeOpenAICompatibleRuntime;
 
     beforeEach(async () => {
-      instance = await LobeZhipuAI.fromAPIKey({
+      instance = new LobeZhipuAI({
         apiKey: 'test_api_key',
       });
 
@@ -131,9 +114,9 @@ describe('LobeZhipuAI', () => {
       const calledWithParams = spyOn.mock.calls[0][0];
 
       expect(calledWithParams.messages[1].content).toEqual([{ type: 'text', text: 'Hello again' }]);
-      expect(calledWithParams.temperature).
+      expect(calledWithParams.temperature).toBe(0); // temperature 0 should be undefined
       expect((calledWithParams as any).do_sample).toBeTruthy(); // temperature 0 should be undefined
-      expect(calledWithParams.top_p).toEqual(
+      expect(calledWithParams.top_p).toEqual(1); // top_p should be transformed correctly
     });
 
     describe('Error', () => {
@@ -175,7 +158,7 @@ describe('LobeZhipuAI', () => {
 
       it('should throw AgentRuntimeError with NoOpenAIAPIKey if no apiKey is provided', async () => {
        try {
-          await LobeZhipuAI.fromAPIKey({ apiKey: '' });
+          new LobeZhipuAI({ apiKey: '' });
         } catch (e) {
           expect(e).toEqual({ errorType: invalidErrorType });
         }
@@ -221,7 +204,7 @@ describe('LobeZhipuAI', () => {
         };
         const apiError = new OpenAI.APIError(400, errorInfo, 'module error', {});
 
-        instance = await LobeZhipuAI.fromAPIKey({
+        instance = new LobeZhipuAI({
           apiKey: 'test',
 
           baseURL: 'https://abc.com/v2',
|