@ai-sdk/huggingface 1.0.17 → 1.0.18
- package/CHANGELOG.md +6 -0
- package/docs/170-huggingface.mdx +119 -0
- package/package.json +6 -2
package/docs/170-huggingface.mdx
ADDED
@@ -0,0 +1,119 @@
---
title: Hugging Face
description: Learn how to use the Hugging Face provider.
---

# Hugging Face Provider

The [Hugging Face](https://huggingface.co/) provider offers access to thousands of language models through [Hugging Face Inference Providers](https://huggingface.co/docs/inference-providers/index), including models from Meta, DeepSeek, Qwen, and more.

API keys can be obtained from [Hugging Face Settings](https://huggingface.co/settings/tokens).

## Setup

The Hugging Face provider is available via the `@ai-sdk/huggingface` module. You can install it with:
<Tabs items={['pnpm', 'npm', 'yarn', 'bun']}>
  <Tab>
    <Snippet text="pnpm add @ai-sdk/huggingface" dark />
  </Tab>
  <Tab>
    <Snippet text="npm install @ai-sdk/huggingface" dark />
  </Tab>
  <Tab>
    <Snippet text="yarn add @ai-sdk/huggingface" dark />
  </Tab>
  <Tab>
    <Snippet text="bun add @ai-sdk/huggingface" dark />
  </Tab>
</Tabs>
## Provider Instance

You can import the default provider instance `huggingface` from `@ai-sdk/huggingface`:

```ts
import { huggingface } from '@ai-sdk/huggingface';
```

For custom configuration, you can import `createHuggingFace` and create a provider instance with your settings:

```ts
import { createHuggingFace } from '@ai-sdk/huggingface';

const huggingface = createHuggingFace({
  apiKey: process.env.HUGGINGFACE_API_KEY ?? '',
});
```

You can use the following optional settings to customize the Hugging Face provider instance:

- **baseURL** _string_

  Use a different URL prefix for API calls, e.g. to use proxy servers.
  The default prefix is `https://router.huggingface.co/v1`.

- **apiKey** _string_

  API key that is being sent using the `Authorization` header. It defaults to
  the `HUGGINGFACE_API_KEY` environment variable. You can get your API key
  from [Hugging Face Settings](https://huggingface.co/settings/tokens).

- **headers** _Record<string,string>_

  Custom headers to include in the requests.

- **fetch** _(input: RequestInfo, init?: RequestInit) => Promise<Response>_

  Custom [fetch](https://developer.mozilla.org/en-US/docs/Web/API/fetch) implementation.
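Taken together, these settings determine where each request goes and which headers it carries. The sketch below illustrates that resolution logic; `resolveRequest` is a hypothetical helper written for illustration (not the package's actual internals), and the `/chat/completions` path is assumed from the OpenAI-compatible router URL:

```typescript
type HuggingFaceSettings = {
  baseURL?: string;
  apiKey?: string;
  headers?: Record<string, string>;
};

// Hypothetical helper: resolves the effective endpoint and headers
// from the optional provider settings, mirroring the defaults above.
function resolveRequest(
  settings: HuggingFaceSettings,
  env: Record<string, string | undefined>,
) {
  const baseURL = settings.baseURL ?? 'https://router.huggingface.co/v1';
  const apiKey = settings.apiKey ?? env['HUGGINGFACE_API_KEY'] ?? '';
  return {
    url: `${baseURL}/chat/completions`, // assumed OpenAI-compatible route
    headers: {
      Authorization: `Bearer ${apiKey}`,
      ...settings.headers, // custom headers are merged in last
    },
  };
}
```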
## Language Models

You can create language models using a provider instance:

```ts
import { huggingface } from '@ai-sdk/huggingface';
import { generateText } from 'ai';

const { text } = await generateText({
  model: huggingface('deepseek-ai/DeepSeek-V3-0324'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});
```

You can also use the `.responses()` or `.languageModel()` factory methods:

```ts
const model = huggingface.responses('deepseek-ai/DeepSeek-V3-0324');
// or
const model = huggingface.languageModel('moonshotai/Kimi-K2-Instruct');
```

Hugging Face language models can be used in the `streamText` function
(see [AI SDK Core](/docs/ai-sdk-core)).
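As a sketch, streaming follows the same pattern as `generateText`; this assumes a valid `HUGGINGFACE_API_KEY` is set and makes a network call, so the output will vary (the prompt and model ID are illustrative, with the model taken from the capability table):

```typescript
import { huggingface } from '@ai-sdk/huggingface';
import { streamText } from 'ai';

// streamText returns immediately; tokens arrive on the textStream iterable.
const result = streamText({
  model: huggingface('meta-llama/Llama-3.3-70B-Instruct'),
  prompt: 'Write a haiku about open-source models.',
});

for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}
```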
You can explore the latest and trending models with their capabilities, context size, throughput and pricing on the [Hugging Face Inference Models](https://huggingface.co/inference/models) page.
## Model Capabilities

| Model                                       | Image Input         | Object Generation   | Tool Usage          | Tool Streaming      |
| ------------------------------------------- | ------------------- | ------------------- | ------------------- | ------------------- |
| `meta-llama/Llama-3.1-8B-Instruct`          | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `meta-llama/Llama-3.1-70B-Instruct`         | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `meta-llama/Llama-3.3-70B-Instruct`         | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `meta-llama/Llama-4-Scout-17B-16E-Instruct` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `deepseek-ai/DeepSeek-V3-0324`              | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `deepseek-ai/DeepSeek-R1`                   | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `deepseek-ai/DeepSeek-R1-Distill-Llama-70B` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `Qwen/Qwen3-235B-A22B-Instruct-2507`        | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `Qwen/Qwen3-Coder-480B-A35B-Instruct`       | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `Qwen/Qwen2.5-VL-7B-Instruct`               | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `google/gemma-3-27b-it`                     | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `moonshotai/Kimi-K2-Instruct`               | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |

<Note>
  The capabilities depend on the specific model you're using. Check the model
  documentation on Hugging Face Hub for detailed information about each model's
  features.
</Note>
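The Object Generation column above indicates structured-output support. A hedged sketch of what that looks like with the AI SDK's `generateObject` (requires a valid `HUGGINGFACE_API_KEY`; the schema and prompt are illustrative, not from the package):

```typescript
import { huggingface } from '@ai-sdk/huggingface';
import { generateObject } from 'ai';
import { z } from 'zod';

// Illustrative schema; generateObject constrains the model's output to match it.
const { object } = await generateObject({
  model: huggingface('deepseek-ai/DeepSeek-V3-0324'),
  schema: z.object({
    dish: z.string(),
    ingredients: z.array(z.string()),
  }),
  prompt: 'Suggest a simple vegetarian pasta dish.',
});

console.log(object.dish, object.ingredients);
```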
package/package.json
CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@ai-sdk/huggingface",
-  "version": "1.0.17",
+  "version": "1.0.18",
   "license": "Apache-2.0",
   "sideEffects": false,
   "main": "./dist/index.js",
@@ -8,10 +8,14 @@
   "types": "./dist/index.d.ts",
   "files": [
     "dist/**/*",
+    "docs/**/*",
     "src",
     "CHANGELOG.md",
     "README.md"
   ],
+  "directories": {
+    "doc": "./docs"
+  },
   "exports": {
     "./package.json": "./package.json",
     ".": {
@@ -56,7 +60,7 @@
   "scripts": {
     "build": "pnpm clean && tsup --tsconfig tsconfig.build.json",
     "build:watch": "pnpm clean && tsup --watch",
-    "clean": "rm -rf dist *.tsbuildinfo",
+    "clean": "rm -rf dist docs *.tsbuildinfo",
     "lint": "eslint \"./**/*.ts*\"",
     "type-check": "tsc --build",
     "prettier-check": "prettier --check \"./**/*.ts*\"",