@ai-sdk/huggingface 1.0.23 → 1.0.24

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -1,5 +1,11 @@
  # @ai-sdk/huggingface
 
+ ## 1.0.24
+
+ ### Patch Changes
+
+ - 3988c08: docs: fix incorrect and outdated provider docs
+
  ## 1.0.23
 
  ### Patch Changes
@@ -95,25 +95,148 @@ Hugging Face language models can be used in the `streamText` function
 
  You can explore the latest and trending models with their capabilities, context size, throughput and pricing on the [Hugging Face Inference Models](https://huggingface.co/inference/models) page.
 
+ ### Provider Options
+
+ Hugging Face language models support provider-specific options that you can pass via `providerOptions.huggingface`:
+
+ ```ts
+ import { huggingface } from '@ai-sdk/huggingface';
+ import { generateText } from 'ai';
+
+ const { text } = await generateText({
+   model: huggingface('deepseek-ai/DeepSeek-R1'),
+   prompt: 'Explain the theory of relativity.',
+   providerOptions: {
+     huggingface: {
+       reasoningEffort: 'high',
+       instructions: 'Respond in a clear and educational manner.',
+     },
+   },
+ });
+ ```
+
+ The following provider options are available:
+
+ - **metadata** _Record<string, string>_
+
+   Additional metadata to include with the request.
+
+ - **instructions** _string_
+
+   Instructions for the model. Can be used to provide additional context or guidance.
+
+ - **strictJsonSchema** _boolean_
+
+   Whether to use strict JSON schema validation for structured outputs. Defaults to `false`.
+
+ - **reasoningEffort** _string_
+
+   Controls the reasoning effort for reasoning models like DeepSeek-R1. Higher values result in more thorough reasoning.
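The options above can be sketched as a local TypeScript type. This is an editor's illustrative shape based on the list, not an export of `@ai-sdk/huggingface`; the name `HuggingFaceProviderOptions` is assumed here for illustration:

```typescript
// Illustrative local type mirroring the documented provider options.
// Not the SDK's own type — a sketch based on the option list above.
interface HuggingFaceProviderOptions {
  metadata?: Record<string, string>; // additional request metadata
  instructions?: string;             // extra context or guidance for the model
  strictJsonSchema?: boolean;        // strict JSON schema validation; defaults to false
  reasoningEffort?: string;          // e.g. 'medium' or 'high', as used in the examples
}

// The object that the first example passes under providerOptions.huggingface:
const options: HuggingFaceProviderOptions = {
  reasoningEffort: 'high',
  instructions: 'Respond in a clear and educational manner.',
};

console.log(Object.keys(options).join(',')); // reasoningEffort,instructions
```

All fields are optional, matching how the examples pass only the options they need.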
+
+ ### Reasoning Output
+
+ For reasoning models like `deepseek-ai/DeepSeek-R1`, you can control the reasoning effort and access the model's reasoning process in the response:
+
+ ```ts
+ import { huggingface } from '@ai-sdk/huggingface';
+ import { streamText } from 'ai';
+
+ const result = streamText({
+   model: huggingface('deepseek-ai/DeepSeek-R1'),
+   prompt: 'How many r letters are in the word strawberry?',
+   providerOptions: {
+     huggingface: {
+       reasoningEffort: 'high',
+     },
+   },
+ });
+
+ for await (const part of result.fullStream) {
+   if (part.type === 'reasoning') {
+     console.log(`Reasoning: ${part.textDelta}`);
+   } else if (part.type === 'text-delta') {
+     process.stdout.write(part.textDelta);
+   }
+ }
+ ```
+
+ For non-streaming calls with `generateText`, the reasoning content is available in the `reasoning` field of the response:
+
+ ```ts
+ import { huggingface } from '@ai-sdk/huggingface';
+ import { generateText } from 'ai';
+
+ const result = await generateText({
+   model: huggingface('deepseek-ai/DeepSeek-R1'),
+   prompt: 'What is 25 * 37?',
+   providerOptions: {
+     huggingface: {
+       reasoningEffort: 'medium',
+     },
+   },
+ });
+
+ console.log('Reasoning:', result.reasoning);
+ console.log('Answer:', result.text);
+ ```
+
+ ### Image Input
+
+ For vision-capable models like `Qwen/Qwen2.5-VL-7B-Instruct`, you can pass images as part of the message content:
+
+ ```ts
+ import { huggingface } from '@ai-sdk/huggingface';
+ import { generateText } from 'ai';
+ import { readFileSync } from 'fs';
+
+ const result = await generateText({
+   model: huggingface('Qwen/Qwen2.5-VL-7B-Instruct'),
+   messages: [
+     {
+       role: 'user',
+       content: [
+         { type: 'text', text: 'Describe this image in detail.' },
+         {
+           type: 'image',
+           image: readFileSync('./image.png'),
+         },
+       ],
+     },
+   ],
+ });
+ ```
+
+ You can also pass image URLs:
+
+ ```ts
+ {
+   type: 'image',
+   image: 'https://example.com/image.png',
+ }
+ ```
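The docs above show two input forms: raw bytes (a `Buffer` from `readFileSync`) and an `https:` URL string. Providers that accept URL strings often also accept `data:` URLs, though that is an assumption to verify against your SDK version; building one is a small sketch:

```typescript
// Sketch: encoding image bytes as a data: URL string.
// Assumption: only Buffer and https URL inputs are documented above;
// verify data: URL support for your SDK version before relying on it.
const bytes = new Uint8Array([0x89, 0x50, 0x4e, 0x47]); // the first bytes of any PNG file
const dataUrl = `data:image/png;base64,${Buffer.from(bytes).toString('base64')}`;
console.log(dataUrl); // data:image/png;base64,iVBORw==
```

This can be convenient when the image already lives in memory and you want a single string value rather than a byte buffer.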
+
  ## Model Capabilities
 
- | Model                                       | Image Input         | Object Generation   | Tool Usage          | Tool Streaming      |
- | ------------------------------------------- | ------------------- | ------------------- | ------------------- | ------------------- |
- | `meta-llama/Llama-3.1-8B-Instruct`          | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
- | `meta-llama/Llama-3.1-70B-Instruct`         | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
- | `meta-llama/Llama-3.3-70B-Instruct`         | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
- | `meta-llama/Llama-4-Scout-17B-16E-Instruct` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
- | `deepseek-ai/DeepSeek-V3-0324`              | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
- | `deepseek-ai/DeepSeek-R1`                   | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
- | `deepseek-ai/DeepSeek-R1-Distill-Llama-70B` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
- | `Qwen/Qwen3-235B-A22B-Instruct-2507`        | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
- | `Qwen/Qwen3-Coder-480B-A35B-Instruct`       | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
- | `Qwen/Qwen2.5-VL-7B-Instruct`               | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
- | `google/gemma-3-27b-it`                     | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
- | `moonshotai/Kimi-K2-Instruct`               | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
+ | Model                                           | Image Input         | Object Generation   | Tool Usage          | Tool Streaming      |
+ | ----------------------------------------------- | ------------------- | ------------------- | ------------------- | ------------------- |
+ | `meta-llama/Llama-3.1-8B-Instruct`              | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
+ | `meta-llama/Llama-3.1-70B-Instruct`             | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
+ | `meta-llama/Llama-3.3-70B-Instruct`             | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
+ | `meta-llama/Llama-4-Maverick-17B-128E-Instruct` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
+ | `deepseek-ai/DeepSeek-V3.1`                     | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
+ | `deepseek-ai/DeepSeek-V3-0324`                  | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
+ | `deepseek-ai/DeepSeek-R1`                       | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
+ | `deepseek-ai/DeepSeek-R1-Distill-Llama-70B`     | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
+ | `Qwen/Qwen3-32B`                                | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
+ | `Qwen/Qwen3-Coder-480B-A35B-Instruct`           | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
+ | `Qwen/Qwen2.5-VL-7B-Instruct`                   | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
+ | `google/gemma-3-27b-it`                         | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
+ | `moonshotai/Kimi-K2-Instruct`                   | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 
  <Note>
-   The capabilities depend on the specific model you're using. Check the model
-   documentation on Hugging Face Hub for detailed information about each model's
-   features.
+   The table above lists popular models. You can explore all available models on
+   the [Hugging Face Inference Models](https://huggingface.co/inference/models)
+   page. The capabilities depend on the specific model you're using. Check the
+   model documentation on Hugging Face Hub for detailed information about each
+   model's features.
  </Note>
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@ai-sdk/huggingface",
-   "version": "1.0.23",
+   "version": "1.0.24",
    "license": "Apache-2.0",
    "sideEffects": false,
    "main": "./dist/index.js",