@ai-sdk/deepseek 2.0.8 → 2.0.10
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +12 -0
- package/dist/index.js +1 -1
- package/dist/index.js.map +1 -1
- package/dist/index.mjs +1 -1
- package/dist/index.mjs.map +1 -1
- package/docs/30-deepseek.mdx +160 -0
- package/package.json +8 -3
- package/src/chat/__fixtures__/deepseek-json.json +32 -0
- package/src/chat/__fixtures__/deepseek-reasoning.chunks.txt +220 -0
- package/src/chat/__fixtures__/deepseek-reasoning.json +32 -0
- package/src/chat/__fixtures__/deepseek-text.chunks.txt +402 -0
- package/src/chat/__fixtures__/deepseek-text.json +28 -0
- package/src/chat/__fixtures__/deepseek-tool-call.chunks.txt +52 -0
- package/src/chat/__fixtures__/deepseek-tool-call.json +43 -0
- package/src/chat/__snapshots__/deepseek-chat-language-model.test.ts.snap +4106 -0
- package/src/chat/convert-to-deepseek-chat-messages.test.ts +332 -0
- package/src/chat/convert-to-deepseek-chat-messages.ts +177 -0
- package/src/chat/convert-to-deepseek-usage.ts +56 -0
- package/src/chat/deepseek-chat-api-types.ts +157 -0
- package/src/chat/deepseek-chat-language-model.test.ts +524 -0
- package/src/chat/deepseek-chat-language-model.ts +534 -0
- package/src/chat/deepseek-chat-options.ts +20 -0
- package/src/chat/deepseek-prepare-tools.test.ts +178 -0
- package/src/chat/deepseek-prepare-tools.ts +82 -0
- package/src/chat/get-response-metadata.ts +15 -0
- package/src/chat/map-deepseek-finish-reason.ts +20 -0
- package/src/deepseek-provider.ts +108 -0
- package/src/index.ts +8 -0
- package/src/version.ts +6 -0
package/docs/30-deepseek.mdx
ADDED
@@ -0,0 +1,160 @@
+---
+title: DeepSeek
+description: Learn how to use DeepSeek's models with the AI SDK.
+---
+
+# DeepSeek Provider
+
+The [DeepSeek](https://www.deepseek.com) provider offers access to powerful language models through the DeepSeek API.
+
+API keys can be obtained from the [DeepSeek Platform](https://platform.deepseek.com/api_keys).
+
+## Setup
+
+The DeepSeek provider is available via the `@ai-sdk/deepseek` module. You can install it with:
+
+<Tabs items={['pnpm', 'npm', 'yarn', 'bun']}>
+  <Tab>
+    <Snippet text="pnpm add @ai-sdk/deepseek" dark />
+  </Tab>
+  <Tab>
+    <Snippet text="npm install @ai-sdk/deepseek" dark />
+  </Tab>
+  <Tab>
+    <Snippet text="yarn add @ai-sdk/deepseek" dark />
+  </Tab>
+  <Tab>
+    <Snippet text="bun add @ai-sdk/deepseek" dark />
+  </Tab>
+</Tabs>
+
+## Provider Instance
+
+You can import the default provider instance `deepseek` from `@ai-sdk/deepseek`:
+
+```ts
+import { deepseek } from '@ai-sdk/deepseek';
+```
+
+For custom configuration, you can import `createDeepSeek` and create a provider instance with your settings:
+
+```ts
+import { createDeepSeek } from '@ai-sdk/deepseek';
+
+const deepseek = createDeepSeek({
+  apiKey: process.env.DEEPSEEK_API_KEY ?? '',
+});
+```
+
+You can use the following optional settings to customize the DeepSeek provider instance:
+
+- **baseURL** _string_
+
+  Use a different URL prefix for API calls.
+  The default prefix is `https://api.deepseek.com/v1`.
+
+- **apiKey** _string_
+
+  API key that is sent using the `Authorization` header. It defaults to
+  the `DEEPSEEK_API_KEY` environment variable.
+
+- **headers** _Record<string,string>_
+
+  Custom headers to include in the requests.
+
+- **fetch** _(input: RequestInfo, init?: RequestInit) => Promise<Response>_
+
+  Custom [fetch](https://developer.mozilla.org/en-US/docs/Web/API/fetch) implementation.
+
+## Language Models
+
+You can create language models using a provider instance:
+
+```ts
+import { deepseek } from '@ai-sdk/deepseek';
+import { generateText } from 'ai';
+
+const { text } = await generateText({
+  model: deepseek('deepseek-chat'),
+  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
+});
+```
+
+You can also use the `.chat()` or `.languageModel()` factory methods:
+
+```ts
+const model = deepseek.chat('deepseek-chat');
+// or
+const model = deepseek.languageModel('deepseek-chat');
+```
+
+DeepSeek language models can be used in the `streamText` function
+(see [AI SDK Core](/docs/ai-sdk-core)).
+
+### Reasoning
+
+DeepSeek supports reasoning for the `deepseek-reasoner` model. The reasoning is exposed through streaming:
+
+```ts
+import { deepseek } from '@ai-sdk/deepseek';
+import { streamText } from 'ai';
+
+const result = streamText({
+  model: deepseek('deepseek-reasoner'),
+  prompt: 'How many "r"s are in the word "strawberry"?',
+});
+
+for await (const part of result.fullStream) {
+  if (part.type === 'reasoning') {
+    // This is the reasoning text
+    console.log('Reasoning:', part.text);
+  } else if (part.type === 'text') {
+    // This is the final answer
+    console.log('Answer:', part.text);
+  }
+}
+```
+
+See [AI SDK UI: Chatbot](/docs/ai-sdk-ui/chatbot#reasoning) for more details
+on how to integrate reasoning into your chatbot.
+
+### Cache Token Usage
+
+DeepSeek provides on-disk context caching that can significantly reduce token costs for repeated content. You can access the cache hit/miss metrics through the `providerMetadata` property in the response:
+
+```ts
+import { deepseek } from '@ai-sdk/deepseek';
+import { generateText } from 'ai';
+
+const result = await generateText({
+  model: deepseek('deepseek-chat'),
+  prompt: 'Your prompt here',
+});
+
+console.log(result.providerMetadata);
+// Example output: { deepseek: { promptCacheHitTokens: 1856, promptCacheMissTokens: 5 } }
+```
+
+The metrics include:
+
+- `promptCacheHitTokens`: Number of input tokens that were cached
+- `promptCacheMissTokens`: Number of input tokens that were not cached
+
+<Note>
+  For more details about DeepSeek's caching system, see the [DeepSeek caching
+  documentation](https://api-docs.deepseek.com/guides/kv_cache#checking-cache-hit-status).
+</Note>
+
+## Model Capabilities
+
+| Model               | Text Generation     | Object Generation   | Image Input         | Tool Usage          | Tool Streaming      |
+| ------------------- | ------------------- | ------------------- | ------------------- | ------------------- | ------------------- |
+| `deepseek-chat`     | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> |
+| `deepseek-reasoner` | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> |
+
+<Note>
+  Please see the [DeepSeek
+  docs](https://api-docs.deepseek.com/quick_start/pricing) for a full list of
+  available models. You can also pass any available provider model ID as a
+  string if needed.
+</Note>
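The Cache Token Usage docs surface camelCase keys (`promptCacheHitTokens`, `promptCacheMissTokens`), while the raw DeepSeek API response (see the usage fixture later in this diff) uses snake_case fields. A minimal sketch of that mapping, for illustration only: `toProviderMetadata` is a hypothetical helper, not the provider's actual implementation.

```typescript
// Shape of the raw DeepSeek usage block (subset of fields).
interface DeepSeekUsage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
  prompt_cache_hit_tokens: number;
  prompt_cache_miss_tokens: number;
}

// Hypothetical helper: maps raw snake_case usage fields to the camelCase
// providerMetadata keys described in the docs.
function toProviderMetadata(usage: DeepSeekUsage) {
  return {
    deepseek: {
      promptCacheHitTokens: usage.prompt_cache_hit_tokens,
      promptCacheMissTokens: usage.prompt_cache_miss_tokens,
    },
  };
}

// Usage values taken from the fixture response in this diff.
const metadata = toProviderMetadata({
  prompt_tokens: 495,
  completion_tokens: 144,
  total_tokens: 639,
  prompt_cache_hit_tokens: 320,
  prompt_cache_miss_tokens: 175,
});
console.log(metadata);
// { deepseek: { promptCacheHitTokens: 320, promptCacheMissTokens: 175 } }
```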
package/package.json
CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@ai-sdk/deepseek",
-  "version": "2.0.8",
+  "version": "2.0.10",
   "license": "Apache-2.0",
   "sideEffects": false,
   "main": "./dist/index.js",
@@ -8,9 +8,14 @@
   "types": "./dist/index.d.ts",
   "files": [
     "dist/**/*",
+    "docs/**/*",
+    "src",
     "CHANGELOG.md",
     "README.md"
   ],
+  "directories": {
+    "doc": "./docs"
+  },
   "exports": {
     "./package.json": "./package.json",
     ".": {
@@ -28,7 +33,7 @@
     "tsup": "^8",
     "typescript": "5.8.3",
     "zod": "3.25.76",
-    "@ai-sdk/test-server": "1.0.
+    "@ai-sdk/test-server": "1.0.2",
     "@vercel/ai-tsconfig": "0.0.0"
   },
   "peerDependencies": {
@@ -54,7 +59,7 @@
   "scripts": {
     "build": "pnpm clean && tsup --tsconfig tsconfig.build.json",
     "build:watch": "pnpm clean && tsup --watch",
-    "clean": "del-cli dist *.tsbuildinfo",
+    "clean": "del-cli dist docs *.tsbuildinfo",
     "lint": "eslint \"./**/*.ts*\"",
     "type-check": "tsc --build",
     "prettier-check": "prettier --check \"./**/*.ts*\"",
@@ -0,0 +1,32 @@
+{
+  "id": "f03bc170-b375-4561-9685-35182c8152c5",
+  "object": "chat.completion",
+  "created": 1764681341,
+  "model": "deepseek-reasoner",
+  "choices": [
+    {
+      "index": 0,
+      "message": {
+        "role": "assistant",
+        "content": "{\n  \"location\": \"San Francisco\",\n  \"condition\": \"cloudy\",\n  \"temperature\": 7\n}",
+        "reasoning_content": "I have the result from the weather tool. It returned a JSON object with location, condition, and temperature. I need to reply with JSON object ONLY. The assistant's response should be the JSON output. So I should output exactly that JSON? Or perhaps wrap it in some way? The instruction says \"Reply with JSON object ONLY.\" Probably I should output the result as is, or maybe a structured response. But the tool result is already JSON. I think I should output that JSON directly. However, I need to ensure it's a valid response. Let me output the JSON object."
+      },
+      "logprobs": null,
+      "finish_reason": "stop"
+    }
+  ],
+  "usage": {
+    "prompt_tokens": 495,
+    "completion_tokens": 144,
+    "total_tokens": 639,
+    "prompt_tokens_details": {
+      "cached_tokens": 320
+    },
+    "completion_tokens_details": {
+      "reasoning_tokens": 118
+    },
+    "prompt_cache_hit_tokens": 320,
+    "prompt_cache_miss_tokens": 175
+  },
+  "system_fingerprint": "fp_eaab8d114b_prod0820_fp8_kvcache"
+}
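As a sanity check, the `usage` block in the fixture above is internally consistent: cache hit plus miss tokens equal `prompt_tokens`, and prompt plus completion tokens equal `total_tokens`. A quick verification with values copied from the fixture:

```typescript
// Usage numbers copied verbatim from the fixture response above.
const fixtureUsage = {
  prompt_tokens: 495,
  completion_tokens: 144,
  total_tokens: 639,
  prompt_cache_hit_tokens: 320,
  prompt_cache_miss_tokens: 175,
};

// Cache accounting: 320 + 175 === 495.
const cacheAddsUp =
  fixtureUsage.prompt_cache_hit_tokens +
    fixtureUsage.prompt_cache_miss_tokens ===
  fixtureUsage.prompt_tokens;

// Total accounting: 495 + 144 === 639.
const totalAddsUp =
  fixtureUsage.prompt_tokens + fixtureUsage.completion_tokens ===
  fixtureUsage.total_tokens;

console.log(cacheAddsUp, totalAddsUp); // true true
```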