@ai-sdk/deepseek 2.0.9 → 2.0.11

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,160 @@
+ ---
+ title: DeepSeek
+ description: Learn how to use DeepSeek's models with the AI SDK.
+ ---
+
+ # DeepSeek Provider
+
+ The [DeepSeek](https://www.deepseek.com) provider offers access to powerful language models through the DeepSeek API.
+
+ API keys can be obtained from the [DeepSeek Platform](https://platform.deepseek.com/api_keys).
+
+ ## Setup
+
+ The DeepSeek provider is available via the `@ai-sdk/deepseek` module. You can install it with:
+
+ <Tabs items={['pnpm', 'npm', 'yarn', 'bun']}>
+   <Tab>
+     <Snippet text="pnpm add @ai-sdk/deepseek" dark />
+   </Tab>
+   <Tab>
+     <Snippet text="npm install @ai-sdk/deepseek" dark />
+   </Tab>
+   <Tab>
+     <Snippet text="yarn add @ai-sdk/deepseek" dark />
+   </Tab>
+   <Tab>
+     <Snippet text="bun add @ai-sdk/deepseek" dark />
+   </Tab>
+ </Tabs>
+
+ ## Provider Instance
+
+ You can import the default provider instance `deepseek` from `@ai-sdk/deepseek`:
+
+ ```ts
+ import { deepseek } from '@ai-sdk/deepseek';
+ ```
+
+ For custom configuration, you can import `createDeepSeek` and create a provider instance with your settings:
+
+ ```ts
+ import { createDeepSeek } from '@ai-sdk/deepseek';
+
+ const deepseek = createDeepSeek({
+   apiKey: process.env.DEEPSEEK_API_KEY ?? '',
+ });
+ ```
+
+ You can use the following optional settings to customize the DeepSeek provider instance:
+
+ - **baseURL** _string_
+
+   Use a different URL prefix for API calls.
+   The default prefix is `https://api.deepseek.com/v1`.
+
+ - **apiKey** _string_
+
+   API key that is being sent using the `Authorization` header. It defaults to
+   the `DEEPSEEK_API_KEY` environment variable.
+
+ - **headers** _Record&lt;string,string&gt;_
+
+   Custom headers to include in the requests.
+
+ - **fetch** _(input: RequestInfo, init?: RequestInit) => Promise&lt;Response&gt;_
+
+   Custom [fetch](https://developer.mozilla.org/en-US/docs/Web/API/fetch) implementation.
+
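As an illustration of the `fetch` setting's signature, a wrapper like the following could log each request URL before delegating to a base implementation. This is a hypothetical sketch, not part of the SDK; the `withLogging` helper name is invented here:

```typescript
// Hypothetical helper: wraps any fetch-compatible function so that
// request URLs are logged before delegating. The returned function
// matches the (input: RequestInfo, init?: RequestInit) => Promise<Response>
// shape expected by the provider's `fetch` setting.
function withLogging(baseFetch: typeof fetch): typeof fetch {
  return async (input, init) => {
    const url =
      typeof input === 'string'
        ? input
        : input instanceof URL
          ? input.href
          : input.url;
    console.log('DeepSeek request:', url);
    return baseFetch(input, init);
  };
}
```

The result could then be passed as `fetch: withLogging(fetch)` alongside the other settings described above.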
69
+ ## Language Models
+
+ You can create language models using a provider instance:
+
+ ```ts
+ import { deepseek } from '@ai-sdk/deepseek';
+ import { generateText } from 'ai';
+
+ const { text } = await generateText({
+   model: deepseek('deepseek-chat'),
+   prompt: 'Write a vegetarian lasagna recipe for 4 people.',
+ });
+ ```
+
+ You can also use the `.chat()` or `.languageModel()` factory methods:
+
+ ```ts
+ const model = deepseek.chat('deepseek-chat');
+ // or
+ const model = deepseek.languageModel('deepseek-chat');
+ ```
+
+ DeepSeek language models can be used in the `streamText` function
+ (see [AI SDK Core](/docs/ai-sdk-core)).
+
+ ### Reasoning
+
+ DeepSeek has reasoning support for the `deepseek-reasoner` model. The reasoning is exposed through streaming:
+
+ ```ts
+ import { deepseek } from '@ai-sdk/deepseek';
+ import { streamText } from 'ai';
+
+ const result = streamText({
+   model: deepseek('deepseek-reasoner'),
+   prompt: 'How many "r"s are in the word "strawberry"?',
+ });
+
+ for await (const part of result.fullStream) {
+   if (part.type === 'reasoning') {
+     // This is the reasoning text
+     console.log('Reasoning:', part.text);
+   } else if (part.type === 'text') {
+     // This is the final answer
+     console.log('Answer:', part.text);
+   }
+ }
+ ```
+
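The part-type branching in the loop above can be factored into a small helper that accumulates reasoning and answer text separately. This is a sketch: the part shapes mirror the streaming example, but the stream below is mocked for illustration rather than coming from a real `streamText` call:

```typescript
// Sketch: split a fullStream-like sequence of parts into reasoning
// text and final answer text. The part shapes mirror the streaming
// example above; the stream itself is mocked for illustration.
type StreamPart =
  | { type: 'reasoning'; text: string }
  | { type: 'text'; text: string };

async function collectParts(
  stream: AsyncIterable<StreamPart>,
): Promise<{ reasoning: string; answer: string }> {
  let reasoning = '';
  let answer = '';
  for await (const part of stream) {
    if (part.type === 'reasoning') reasoning += part.text;
    else answer += part.text;
  }
  return { reasoning, answer };
}

// Mocked stream standing in for result.fullStream:
async function* mockStream(): AsyncGenerator<StreamPart> {
  yield { type: 'reasoning', text: 'Spell it out: s-t-r-a-w-b-e-r-r-y. ' };
  yield { type: 'reasoning', text: 'The letter "r" appears 3 times.' };
  yield { type: 'text', text: 'There are 3 "r"s in "strawberry".' };
}

const { reasoning, answer } = await collectParts(mockStream());
console.log('Reasoning:', reasoning);
console.log('Answer:', answer);
```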
118
+ See [AI SDK UI: Chatbot](/docs/ai-sdk-ui/chatbot#reasoning) for more details
+ on how to integrate reasoning into your chatbot.
+
+ ### Cache Token Usage
+
+ DeepSeek provides on-disk context caching that can significantly reduce token costs for repeated content. You can access the cache hit/miss metrics through the `providerMetadata` property in the response:
+
+ ```ts
+ import { deepseek } from '@ai-sdk/deepseek';
+ import { generateText } from 'ai';
+
+ const result = await generateText({
+   model: deepseek('deepseek-chat'),
+   prompt: 'Your prompt here',
+ });
+
+ console.log(result.providerMetadata);
+ // Example output: { deepseek: { promptCacheHitTokens: 1856, promptCacheMissTokens: 5 } }
+ ```
+
+ The metrics include:
+
+ - `promptCacheHitTokens`: Number of input tokens that were served from the cache
+ - `promptCacheMissTokens`: Number of input tokens that were not cached
+
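For monitoring, the two counters can be combined into a single hit rate. A minimal sketch, assuming the `providerMetadata` shape shown in the example output above (the `cacheHitRate` helper itself is hypothetical):

```typescript
// Hypothetical helper: compute the fraction of prompt tokens served
// from DeepSeek's cache, given the providerMetadata shape shown above.
type DeepSeekProviderMetadata = {
  deepseek?: {
    promptCacheHitTokens?: number;
    promptCacheMissTokens?: number;
  };
};

function cacheHitRate(metadata: DeepSeekProviderMetadata): number {
  const hit = metadata.deepseek?.promptCacheHitTokens ?? 0;
  const miss = metadata.deepseek?.promptCacheMissTokens ?? 0;
  const total = hit + miss;
  return total === 0 ? 0 : hit / total;
}

console.log(
  cacheHitRate({
    deepseek: { promptCacheHitTokens: 1856, promptCacheMissTokens: 5 },
  }),
); // ≈ 0.997 for the example output above
```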
143
+ <Note>
+   For more details about DeepSeek's caching system, see the [DeepSeek caching
+   documentation](https://api-docs.deepseek.com/guides/kv_cache#checking-cache-hit-status).
+ </Note>
+
+ ## Model Capabilities
+
+ | Model               | Text Generation     | Object Generation   | Image Input         | Tool Usage          | Tool Streaming      |
+ | ------------------- | ------------------- | ------------------- | ------------------- | ------------------- | ------------------- |
+ | `deepseek-chat`     | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> |
+ | `deepseek-reasoner` | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> |
+
+ <Note>
+   Please see the [DeepSeek
+   docs](https://api-docs.deepseek.com/quick_start/pricing) for a full list of
+   available models. You can also pass any available provider model ID as a
+   string if needed.
+ </Note>
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@ai-sdk/deepseek",
-   "version": "2.0.9",
+   "version": "2.0.11",
    "license": "Apache-2.0",
    "sideEffects": false,
    "main": "./dist/index.js",
@@ -8,10 +8,18 @@
    "types": "./dist/index.d.ts",
    "files": [
      "dist/**/*",
+     "docs/**/*",
      "src",
+     "!src/**/*.test.ts",
+     "!src/**/*.test-d.ts",
+     "!src/**/__snapshots__",
+     "!src/**/__fixtures__",
      "CHANGELOG.md",
      "README.md"
    ],
+   "directories": {
+     "doc": "./docs"
+   },
    "exports": {
      "./package.json": "./package.json",
      ".": {
@@ -21,15 +29,15 @@
      }
    },
    "dependencies": {
-     "@ai-sdk/provider": "3.0.4",
-     "@ai-sdk/provider-utils": "4.0.8"
+     "@ai-sdk/provider": "3.0.5",
+     "@ai-sdk/provider-utils": "4.0.9"
    },
    "devDependencies": {
      "@types/node": "20.17.24",
      "tsup": "^8",
      "typescript": "5.8.3",
      "zod": "3.25.76",
-     "@ai-sdk/test-server": "1.0.2",
+     "@ai-sdk/test-server": "1.0.3",
      "@vercel/ai-tsconfig": "0.0.0"
    },
    "peerDependencies": {
@@ -55,7 +63,7 @@
    "scripts": {
      "build": "pnpm clean && tsup --tsconfig tsconfig.build.json",
      "build:watch": "pnpm clean && tsup --watch",
-     "clean": "del-cli dist *.tsbuildinfo",
+     "clean": "del-cli dist docs *.tsbuildinfo",
      "lint": "eslint \"./**/*.ts*\"",
      "type-check": "tsc --build",
      "prettier-check": "prettier --check \"./**/*.ts*\"",
@@ -1,32 +0,0 @@
- {
-   "id": "f03bc170-b375-4561-9685-35182c8152c5",
-   "object": "chat.completion",
-   "created": 1764681341,
-   "model": "deepseek-reasoner",
-   "choices": [
-     {
-       "index": 0,
-       "message": {
-         "role": "assistant",
-         "content": "{\n \"location\": \"San Francisco\",\n \"condition\": \"cloudy\",\n \"temperature\": 7\n}",
-         "reasoning_content": "I have the result from the weather tool. It returned a JSON object with location, condition, and temperature. I need to reply with JSON object ONLY. The assistant's response should be the JSON output. So I should output exactly that JSON? Or perhaps wrap it in some way? The instruction says \"Reply with JSON object ONLY.\" Probably I should output the result as is, or maybe a structured response. But the tool result is already JSON. I think I should output that JSON directly. However, I need to ensure it's a valid response. Let me output the JSON object."
-       },
-       "logprobs": null,
-       "finish_reason": "stop"
-     }
-   ],
-   "usage": {
-     "prompt_tokens": 495,
-     "completion_tokens": 144,
-     "total_tokens": 639,
-     "prompt_tokens_details": {
-       "cached_tokens": 320
-     },
-     "completion_tokens_details": {
-       "reasoning_tokens": 118
-     },
-     "prompt_cache_hit_tokens": 320,
-     "prompt_cache_miss_tokens": 175
-   },
-   "system_fingerprint": "fp_eaab8d114b_prod0820_fp8_kvcache"
- }