opencode-skills-antigravity 1.0.40 → 1.0.41
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/bundled-skills/.antigravity-install-manifest.json +7 -1
- package/bundled-skills/docs/integrations/jetski-cortex.md +3 -3
- package/bundled-skills/docs/integrations/jetski-gemini-loader/README.md +1 -1
- package/bundled-skills/docs/maintainers/repo-growth-seo.md +3 -3
- package/bundled-skills/docs/maintainers/skills-update-guide.md +1 -1
- package/bundled-skills/docs/sources/sources.md +2 -2
- package/bundled-skills/docs/users/bundles.md +1 -1
- package/bundled-skills/docs/users/claude-code-skills.md +1 -1
- package/bundled-skills/docs/users/gemini-cli-skills.md +1 -1
- package/bundled-skills/docs/users/getting-started.md +1 -1
- package/bundled-skills/docs/users/kiro-integration.md +1 -1
- package/bundled-skills/docs/users/usage.md +4 -4
- package/bundled-skills/docs/users/visual-guide.md +4 -4
- package/bundled-skills/hugging-face-cli/SKILL.md +192 -195
- package/bundled-skills/hugging-face-community-evals/SKILL.md +213 -0
- package/bundled-skills/hugging-face-community-evals/examples/.env.example +3 -0
- package/bundled-skills/hugging-face-community-evals/examples/USAGE_EXAMPLES.md +101 -0
- package/bundled-skills/hugging-face-community-evals/scripts/inspect_eval_uv.py +104 -0
- package/bundled-skills/hugging-face-community-evals/scripts/inspect_vllm_uv.py +306 -0
- package/bundled-skills/hugging-face-community-evals/scripts/lighteval_vllm_uv.py +297 -0
- package/bundled-skills/hugging-face-dataset-viewer/SKILL.md +120 -120
- package/bundled-skills/hugging-face-gradio/SKILL.md +304 -0
- package/bundled-skills/hugging-face-gradio/examples.md +613 -0
- package/bundled-skills/hugging-face-jobs/SKILL.md +25 -18
- package/bundled-skills/hugging-face-jobs/index.html +216 -0
- package/bundled-skills/hugging-face-jobs/references/hardware_guide.md +336 -0
- package/bundled-skills/hugging-face-jobs/references/hub_saving.md +352 -0
- package/bundled-skills/hugging-face-jobs/references/token_usage.md +570 -0
- package/bundled-skills/hugging-face-jobs/references/troubleshooting.md +475 -0
- package/bundled-skills/hugging-face-jobs/scripts/cot-self-instruct.py +718 -0
- package/bundled-skills/hugging-face-jobs/scripts/finepdfs-stats.py +546 -0
- package/bundled-skills/hugging-face-jobs/scripts/generate-responses.py +587 -0
- package/bundled-skills/hugging-face-model-trainer/SKILL.md +11 -12
- package/bundled-skills/hugging-face-model-trainer/references/gguf_conversion.md +296 -0
- package/bundled-skills/hugging-face-model-trainer/references/hardware_guide.md +283 -0
- package/bundled-skills/hugging-face-model-trainer/references/hub_saving.md +364 -0
- package/bundled-skills/hugging-face-model-trainer/references/local_training_macos.md +231 -0
- package/bundled-skills/hugging-face-model-trainer/references/reliability_principles.md +371 -0
- package/bundled-skills/hugging-face-model-trainer/references/trackio_guide.md +189 -0
- package/bundled-skills/hugging-face-model-trainer/references/training_methods.md +150 -0
- package/bundled-skills/hugging-face-model-trainer/references/training_patterns.md +203 -0
- package/bundled-skills/hugging-face-model-trainer/references/troubleshooting.md +282 -0
- package/bundled-skills/hugging-face-model-trainer/references/unsloth.md +313 -0
- package/bundled-skills/hugging-face-model-trainer/scripts/convert_to_gguf.py +424 -0
- package/bundled-skills/hugging-face-model-trainer/scripts/dataset_inspector.py +417 -0
- package/bundled-skills/hugging-face-model-trainer/scripts/estimate_cost.py +150 -0
- package/bundled-skills/hugging-face-model-trainer/scripts/train_dpo_example.py +106 -0
- package/bundled-skills/hugging-face-model-trainer/scripts/train_grpo_example.py +89 -0
- package/bundled-skills/hugging-face-model-trainer/scripts/train_sft_example.py +122 -0
- package/bundled-skills/hugging-face-model-trainer/scripts/unsloth_sft_example.py +512 -0
- package/bundled-skills/hugging-face-paper-publisher/SKILL.md +11 -4
- package/bundled-skills/hugging-face-paper-publisher/examples/example_usage.md +326 -0
- package/bundled-skills/hugging-face-paper-publisher/references/quick_reference.md +216 -0
- package/bundled-skills/hugging-face-paper-publisher/scripts/paper_manager.py +606 -0
- package/bundled-skills/hugging-face-paper-publisher/templates/arxiv.md +299 -0
- package/bundled-skills/hugging-face-paper-publisher/templates/ml-report.md +358 -0
- package/bundled-skills/hugging-face-paper-publisher/templates/modern.md +319 -0
- package/bundled-skills/hugging-face-paper-publisher/templates/standard.md +201 -0
- package/bundled-skills/hugging-face-papers/SKILL.md +241 -0
- package/bundled-skills/hugging-face-trackio/.claude-plugin/plugin.json +19 -0
- package/bundled-skills/hugging-face-trackio/SKILL.md +117 -0
- package/bundled-skills/hugging-face-trackio/references/alerts.md +196 -0
- package/bundled-skills/hugging-face-trackio/references/logging_metrics.md +206 -0
- package/bundled-skills/hugging-face-trackio/references/retrieving_metrics.md +251 -0
- package/bundled-skills/hugging-face-vision-trainer/SKILL.md +595 -0
- package/bundled-skills/hugging-face-vision-trainer/references/finetune_sam2_trainer.md +254 -0
- package/bundled-skills/hugging-face-vision-trainer/references/hub_saving.md +618 -0
- package/bundled-skills/hugging-face-vision-trainer/references/image_classification_training_notebook.md +279 -0
- package/bundled-skills/hugging-face-vision-trainer/references/object_detection_training_notebook.md +700 -0
- package/bundled-skills/hugging-face-vision-trainer/references/reliability_principles.md +310 -0
- package/bundled-skills/hugging-face-vision-trainer/references/timm_trainer.md +91 -0
- package/bundled-skills/hugging-face-vision-trainer/scripts/dataset_inspector.py +814 -0
- package/bundled-skills/hugging-face-vision-trainer/scripts/estimate_cost.py +217 -0
- package/bundled-skills/hugging-face-vision-trainer/scripts/image_classification_training.py +383 -0
- package/bundled-skills/hugging-face-vision-trainer/scripts/object_detection_training.py +710 -0
- package/bundled-skills/hugging-face-vision-trainer/scripts/sam_segmentation_training.py +382 -0
- package/bundled-skills/transformers-js/SKILL.md +639 -0
- package/bundled-skills/transformers-js/references/CACHE.md +339 -0
- package/bundled-skills/transformers-js/references/CONFIGURATION.md +390 -0
- package/bundled-skills/transformers-js/references/EXAMPLES.md +605 -0
- package/bundled-skills/transformers-js/references/MODEL_ARCHITECTURES.md +167 -0
- package/bundled-skills/transformers-js/references/PIPELINE_OPTIONS.md +545 -0
- package/bundled-skills/transformers-js/references/TEXT_GENERATION.md +315 -0
- package/package.json +1 -1
@@ -0,0 +1,639 @@

---
source: "https://github.com/huggingface/skills/tree/main/skills/transformers-js"
name: transformers-js
description: Run Hugging Face models in JavaScript or TypeScript with Transformers.js in Node.js or the browser.
license: Apache-2.0
risk: unknown
metadata:
  author: huggingface
  version: "3.8.1"
  category: machine-learning
  repository: https://github.com/huggingface/transformers.js
  compatibility: Requires Node.js 18+ or modern browser with ES modules support. WebGPU support requires compatible browser/environment. Internet access needed for downloading models from Hugging Face Hub (optional if using local models).
---

# Transformers.js - Machine Learning for JavaScript

Transformers.js enables running state-of-the-art machine learning models directly in JavaScript, both in browsers and Node.js environments, with no server required.

## When to Use This Skill

Use this skill when you need to:
- Run ML models for text analysis, generation, or translation in JavaScript
- Perform image classification, object detection, or segmentation
- Implement speech recognition or audio processing
- Build multimodal AI applications (text-to-image, image-to-text, etc.)
- Run models client-side in the browser without a backend

## Installation

### NPM Installation
```bash
npm install @huggingface/transformers
```

### Browser Usage (CDN)
```html
<script type="module">
  import { pipeline } from 'https://cdn.jsdelivr.net/npm/@huggingface/transformers';
</script>
```

## Core Concepts

### 1. Pipeline API
The pipeline API is the easiest way to use models. It groups together preprocessing, model inference, and postprocessing:

```javascript
import { pipeline } from '@huggingface/transformers';

// Create a pipeline for a specific task
const pipe = await pipeline('sentiment-analysis');

// Use the pipeline
const result = await pipe('I love transformers!');
// Output: [{ label: 'POSITIVE', score: 0.999817686 }]

// IMPORTANT: Always dispose when done to free memory
await pipe.dispose();
```

**⚠️ Memory Management:** All pipelines must be disposed with `pipe.dispose()` when finished to prevent memory leaks. See examples in [Code Examples](./references/EXAMPLES.md) for cleanup patterns across different environments.

### 2. Model Selection
You can specify a custom model as the second argument:

```javascript
const pipe = await pipeline(
  'sentiment-analysis',
  'Xenova/bert-base-multilingual-uncased-sentiment'
);
```

**Finding Models:**

Browse available Transformers.js models on Hugging Face Hub:
- **All models**: https://huggingface.co/models?library=transformers.js&sort=trending
- **By task**: Add the `pipeline_tag` parameter
  - Text generation: https://huggingface.co/models?pipeline_tag=text-generation&library=transformers.js&sort=trending
  - Image classification: https://huggingface.co/models?pipeline_tag=image-classification&library=transformers.js&sort=trending
  - Speech recognition: https://huggingface.co/models?pipeline_tag=automatic-speech-recognition&library=transformers.js&sort=trending

**Tip:** Filter by task type, sort by trending/downloads, and check model cards for performance metrics and usage examples.

### 3. Device Selection
Choose where to run the model:

```javascript
// Run on CPU (default for WASM)
const pipe = await pipeline('sentiment-analysis', 'model-id');

// Run on GPU (WebGPU - experimental)
const pipe = await pipeline('sentiment-analysis', 'model-id', {
  device: 'webgpu',
});
```

### 4. Quantization Options
Control model precision vs. performance:

```javascript
// Use quantized model (faster, smaller)
const pipe = await pipeline('sentiment-analysis', 'model-id', {
  dtype: 'q4', // Options: 'fp32', 'fp16', 'q8', 'q4'
});
```

## Supported Tasks

**Note:** All examples below show basic usage.

### Natural Language Processing

#### Text Classification
```javascript
const classifier = await pipeline('text-classification');
const result = await classifier('This movie was amazing!');
```

#### Named Entity Recognition (NER)
```javascript
const ner = await pipeline('token-classification');
const entities = await ner('My name is John and I live in New York.');
```

#### Question Answering
```javascript
const qa = await pipeline('question-answering');
const answer = await qa(
  'What is the capital of France?',
  'Paris is the capital and largest city of France.'
);
```

#### Text Generation
```javascript
const generator = await pipeline('text-generation', 'onnx-community/gemma-3-270m-it-ONNX');
const text = await generator('Once upon a time', {
  max_new_tokens: 100,
  temperature: 0.7
});
```

**For streaming and chat:** See **[Text Generation Guide](./references/TEXT_GENERATION.md)** for:
- Streaming token-by-token output with `TextStreamer`
- Chat/conversation format with system/user/assistant roles (see the sketch below)
- Generation parameters (temperature, top_k, top_p)
- Browser and Node.js examples
- React components and API endpoints
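
As a quick illustration, a minimal sketch of the chat format with streaming output — the model id, `dtype`, and streamer options here are illustrative assumptions; see the guide above for tested end-to-end patterns:

```javascript
import { pipeline, TextStreamer } from '@huggingface/transformers';

const generator = await pipeline('text-generation', 'onnx-community/gemma-3-270m-it-ONNX', { dtype: 'q4' });

// Chat-style input: an array of role/content messages
const messages = [
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'Write a haiku about the ocean.' },
];

// Stream tokens as they are generated
const streamer = new TextStreamer(generator.tokenizer, {
  skip_prompt: true,
  callback_function: (text) => process.stdout.write(text),
});

const output = await generator(messages, { max_new_tokens: 128, streamer });
await generator.dispose();
```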

#### Translation
```javascript
const translator = await pipeline('translation', 'Xenova/nllb-200-distilled-600M');
const output = await translator('Hello, how are you?', {
  src_lang: 'eng_Latn',
  tgt_lang: 'fra_Latn'
});
```

#### Summarization
```javascript
const summarizer = await pipeline('summarization');
const summary = await summarizer(longText, {
  max_length: 100,
  min_length: 30
});
```

#### Zero-Shot Classification
```javascript
const classifier = await pipeline('zero-shot-classification');
const result = await classifier('This is a story about sports.', ['politics', 'sports', 'technology']);
```

### Computer Vision

#### Image Classification
```javascript
const classifier = await pipeline('image-classification');
const result = await classifier('https://example.com/image.jpg');
// Or with a local file path (Node.js) or a File/Blob object URL (browser)
const localResult = await classifier('./image.jpg');
```

#### Object Detection
```javascript
const detector = await pipeline('object-detection');
const objects = await detector('https://example.com/image.jpg');
// Returns: [{ label: 'person', score: 0.95, box: { xmin, ymin, xmax, ymax } }, ...]
```

#### Image Segmentation
```javascript
const segmenter = await pipeline('image-segmentation');
const segments = await segmenter('https://example.com/image.jpg');
```

#### Depth Estimation
```javascript
const depthEstimator = await pipeline('depth-estimation');
const depth = await depthEstimator('https://example.com/image.jpg');
```

#### Zero-Shot Image Classification
```javascript
const classifier = await pipeline('zero-shot-image-classification');
const result = await classifier('image.jpg', ['cat', 'dog', 'bird']);
```

### Audio Processing

#### Automatic Speech Recognition
```javascript
const transcriber = await pipeline('automatic-speech-recognition');
const result = await transcriber('audio.wav');
// Returns: { text: 'transcribed text here' }
```

#### Audio Classification
```javascript
const classifier = await pipeline('audio-classification');
const result = await classifier('audio.wav');
```

#### Text-to-Speech
```javascript
const synthesizer = await pipeline('text-to-speech', 'Xenova/speecht5_tts');
// speakerEmbeddings: a Float32Array (or URL to a speaker-embeddings file) selecting the voice — see the model card
const audio = await synthesizer('Hello, this is a test.', {
  speaker_embeddings: speakerEmbeddings
});
```

### Multimodal

#### Image-to-Text (Image Captioning)
```javascript
const captioner = await pipeline('image-to-text');
const caption = await captioner('image.jpg');
```

#### Document Question Answering
```javascript
const docQA = await pipeline('document-question-answering');
const answer = await docQA('document-image.jpg', 'What is the total amount?');
```

#### Zero-Shot Object Detection
```javascript
const detector = await pipeline('zero-shot-object-detection');
const objects = await detector('image.jpg', ['person', 'car', 'tree']);
```

### Feature Extraction (Embeddings)

```javascript
const extractor = await pipeline('feature-extraction');
const embeddings = await extractor('This is a sentence to embed.');
// Returns: tensor of shape [1, sequence_length, hidden_size]

// For sentence embeddings, pick an embedding model and use mean pooling
const sentenceExtractor = await pipeline('feature-extraction', 'onnx-community/all-MiniLM-L6-v2-ONNX');
const sentenceEmbeddings = await sentenceExtractor('Text to embed', { pooling: 'mean', normalize: true });
```
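
Sentence embeddings are typically compared with cosine similarity. A minimal sketch, assuming the MiniLM model above with mean pooling and normalization (so the dot product of the two vectors already equals their cosine similarity):

```javascript
const extractor = await pipeline('feature-extraction', 'onnx-community/all-MiniLM-L6-v2-ONNX');

const a = await extractor('The cat sits on the mat.', { pooling: 'mean', normalize: true });
const b = await extractor('A kitten is resting on a rug.', { pooling: 'mean', normalize: true });

// Tensor values are exposed as a typed array via .data; with normalized
// embeddings, the dot product is the cosine similarity.
const va = a.data;
const vb = b.data;
let similarity = 0;
for (let i = 0; i < va.length; i++) similarity += va[i] * vb[i];
console.log(similarity.toFixed(3)); // closer to 1.0 means more similar

await extractor.dispose();
```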

## Finding and Choosing Models

### Browsing the Hugging Face Hub

Discover compatible Transformers.js models on Hugging Face Hub:

**Base URL (all models):**
```
https://huggingface.co/models?library=transformers.js&sort=trending
```

**Filter by task** using the `pipeline_tag` parameter:

| Task | URL |
|------|-----|
| **Text Generation** | https://huggingface.co/models?pipeline_tag=text-generation&library=transformers.js&sort=trending |
| **Text Classification** | https://huggingface.co/models?pipeline_tag=text-classification&library=transformers.js&sort=trending |
| **Translation** | https://huggingface.co/models?pipeline_tag=translation&library=transformers.js&sort=trending |
| **Summarization** | https://huggingface.co/models?pipeline_tag=summarization&library=transformers.js&sort=trending |
| **Question Answering** | https://huggingface.co/models?pipeline_tag=question-answering&library=transformers.js&sort=trending |
| **Image Classification** | https://huggingface.co/models?pipeline_tag=image-classification&library=transformers.js&sort=trending |
| **Object Detection** | https://huggingface.co/models?pipeline_tag=object-detection&library=transformers.js&sort=trending |
| **Image Segmentation** | https://huggingface.co/models?pipeline_tag=image-segmentation&library=transformers.js&sort=trending |
| **Speech Recognition** | https://huggingface.co/models?pipeline_tag=automatic-speech-recognition&library=transformers.js&sort=trending |
| **Audio Classification** | https://huggingface.co/models?pipeline_tag=audio-classification&library=transformers.js&sort=trending |
| **Image-to-Text** | https://huggingface.co/models?pipeline_tag=image-to-text&library=transformers.js&sort=trending |
| **Feature Extraction** | https://huggingface.co/models?pipeline_tag=feature-extraction&library=transformers.js&sort=trending |
| **Zero-Shot Classification** | https://huggingface.co/models?pipeline_tag=zero-shot-classification&library=transformers.js&sort=trending |

**Sort options:**
- `&sort=trending` - Most popular recently
- `&sort=downloads` - Most downloaded overall
- `&sort=likes` - Most liked by community
- `&sort=modified` - Recently updated

### Choosing the Right Model

Consider these factors when selecting a model:

**1. Model Size**
- **Small (< 100MB)**: Fast, suitable for browsers, limited accuracy
- **Medium (100MB - 500MB)**: Balanced performance, good for most use cases
- **Large (> 500MB)**: High accuracy, slower, better for Node.js or powerful devices

**2. Quantization**
Models are often available in different quantization levels:
- `fp32` - Full precision (largest, most accurate)
- `fp16` - Half precision (smaller, still accurate)
- `q8` - 8-bit quantized (much smaller, slight accuracy loss)
- `q4` - 4-bit quantized (smallest, noticeable accuracy loss)

**3. Task Compatibility**
Check the model card for:
- Supported tasks (some models support multiple tasks)
- Input/output formats
- Language support (multilingual vs. English-only)
- License restrictions

**4. Performance Metrics**
Model cards typically show:
- Accuracy scores
- Benchmark results
- Inference speed
- Memory requirements

### Example: Finding a Text Generation Model

```javascript
// 1. Visit: https://huggingface.co/models?pipeline_tag=text-generation&library=transformers.js&sort=trending

// 2. Browse and select a model (e.g., onnx-community/gemma-3-270m-it-ONNX)

// 3. Check model card for:
//    - Model size: ~270M parameters
//    - Quantization: q4 available
//    - Language: English
//    - Use case: Instruction-following chat

// 4. Use the model:
import { pipeline } from '@huggingface/transformers';

const generator = await pipeline(
  'text-generation',
  'onnx-community/gemma-3-270m-it-ONNX',
  { dtype: 'q4' } // Use quantized version for faster inference
);

const output = await generator('Explain quantum computing in simple terms.', {
  max_new_tokens: 100
});

await generator.dispose();
```

### Tips for Model Selection

1. **Start Small**: Test with a smaller model first, then upgrade if needed
2. **Check ONNX Support**: Ensure the model has ONNX files (look for an `onnx` folder in the model repo)
3. **Read Model Cards**: Model cards contain usage examples, limitations, and benchmarks
4. **Test Locally**: Benchmark inference speed and memory usage in your environment
5. **Community Models**: Look for models by `Xenova` (Transformers.js maintainer) or `onnx-community`
6. **Version Pin**: Use specific git commits in production for stability:
   ```javascript
   const pipe = await pipeline('task', 'model-id', { revision: 'abc123' });
   ```

## Advanced Configuration

### Environment Configuration (`env`)

The `env` object provides comprehensive control over Transformers.js execution, caching, and model loading.

**Quick Overview:**

```javascript
import { env } from '@huggingface/transformers';

// View version
console.log(env.version); // e.g., '3.8.1'

// Common settings
env.allowRemoteModels = true;    // Load from Hugging Face Hub
env.allowLocalModels = false;    // Load from file system
env.localModelPath = '/models/'; // Local model directory
env.useFSCache = true;           // Cache models on disk (Node.js)
env.useBrowserCache = true;      // Cache models in browser
env.cacheDir = './.cache';       // Cache directory location
```

**Configuration Patterns:**

```javascript
// Development: Fast iteration with remote models
env.allowRemoteModels = true;
env.useFSCache = true;

// Production: Local models only
env.allowRemoteModels = false;
env.allowLocalModels = true;
env.localModelPath = '/app/models/';

// Custom CDN
env.remoteHost = 'https://cdn.example.com/models';

// Disable caching (testing)
env.useFSCache = false;
env.useBrowserCache = false;
```

For complete documentation on all configuration options, caching strategies, cache management, pre-downloading models, and more, see:

**→ [Configuration Reference](./references/CONFIGURATION.md)**

### Working with Tensors

```javascript
import { AutoTokenizer, AutoModel } from '@huggingface/transformers';

// Load tokenizer and model separately for more control
const tokenizer = await AutoTokenizer.from_pretrained('bert-base-uncased');
const model = await AutoModel.from_pretrained('bert-base-uncased');

// Tokenize input
const inputs = await tokenizer('Hello world!');

// Run model
const outputs = await model(inputs);
```

### Batch Processing

```javascript
const classifier = await pipeline('sentiment-analysis');

// Process multiple texts
const results = await classifier([
  'I love this!',
  'This is terrible.',
  'It was okay.'
]);
```

## Browser-Specific Considerations

### WebGPU Usage
WebGPU provides GPU acceleration in browsers:

```javascript
const pipe = await pipeline('text-generation', 'onnx-community/gemma-3-270m-it-ONNX', {
  device: 'webgpu',
  dtype: 'fp32'
});
```

**Note**: WebGPU is experimental. Check browser compatibility and file issues if problems occur.

### WASM Performance
Default browser execution uses WASM:

```javascript
// Optimized for browsers with quantization
const pipe = await pipeline('sentiment-analysis', 'model-id', {
  dtype: 'q8' // or 'q4' for even smaller size
});
```

### Progress Tracking & Loading Indicators

Models can be large (ranging from a few MB to several GB) and consist of multiple files. Track download progress by passing a callback to the `pipeline()` function:

```javascript
import { pipeline } from '@huggingface/transformers';

// Track progress for each file
const fileProgress = {};

function onProgress(info) {
  console.log(`${info.status}: ${info.file}`);

  if (info.status === 'progress') {
    fileProgress[info.file] = info.progress;
    console.log(`${info.file}: ${info.progress.toFixed(1)}%`);
  }

  if (info.status === 'done') {
    console.log(`✓ ${info.file} complete`);
  }
}

// Pass callback to pipeline
const classifier = await pipeline('sentiment-analysis', null, {
  progress_callback: onProgress
});
```

**Progress Info Properties:**

```typescript
interface ProgressInfo {
  status: 'initiate' | 'download' | 'progress' | 'done' | 'ready';
  name: string;      // Model id or path
  file: string;      // File being processed
  progress?: number; // Percentage (0-100, only for 'progress' status)
  loaded?: number;   // Bytes downloaded (only for 'progress' status)
  total?: number;    // Total bytes (only for 'progress' status)
}
```

For complete examples including browser UIs, React components, CLI progress bars, and retry logic, see:

**→ [Pipeline Options - Progress Callback](./references/PIPELINE_OPTIONS.md#progress-callback)**

## Error Handling

```javascript
try {
  const pipe = await pipeline('sentiment-analysis', 'model-id');
  const result = await pipe('text to analyze');
} catch (error) {
  if (error.message.includes('fetch')) {
    console.error('Model download failed. Check internet connection.');
  } else if (error.message.includes('ONNX')) {
    console.error('Model execution failed. Check model compatibility.');
  } else {
    console.error('Unknown error:', error);
  }
}
```

## Performance Tips

1. **Reuse Pipelines**: Create a pipeline once and reuse it for multiple inferences (see the sketch after this list)
2. **Use Quantization**: Start with `q8` or `q4` for faster inference
3. **Batch Processing**: Process multiple inputs together when possible
4. **Cache Models**: Models are cached automatically (see **[Caching Reference](./references/CACHE.md)** for details on browser Cache API, Node.js filesystem cache, and custom implementations)
5. **WebGPU for Large Models**: Use WebGPU for models that benefit from GPU acceleration
6. **Prune Context**: For text generation, limit `max_new_tokens` to avoid memory issues
7. **Clean Up Resources**: Call `pipe.dispose()` when done to free memory
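
For tip 1, a common way to reuse pipelines is a lazy singleton that loads each model once and hands the same instance back on every call. A minimal sketch (the class and key scheme are illustrative, not a library API):

```javascript
import { pipeline } from '@huggingface/transformers';

class PipelineSingleton {
  static instances = new Map();

  static get(task, model = null) {
    const key = `${task}:${model ?? 'default'}`;
    if (!this.instances.has(key)) {
      // Store the promise so concurrent callers never trigger a second load
      this.instances.set(key, pipeline(task, model));
    }
    return this.instances.get(key);
  }
}

// Both calls reuse the same loaded model
const clf = await PipelineSingleton.get('sentiment-analysis');
console.log(await clf('Fast, because the model is already in memory.'));
```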

## Memory Management

**IMPORTANT:** Always call `pipe.dispose()` when finished to prevent memory leaks.

```javascript
const pipe = await pipeline('sentiment-analysis');
const result = await pipe('Great product!');
await pipe.dispose(); // ✓ Free memory (100MB - several GB per model)
```

**When to dispose:**
- Application shutdown or component unmount
- Before loading a different model
- After batch processing in long-running apps

Models consume significant memory and hold GPU/CPU resources. Disposal is critical for browser memory limits and server stability.

For detailed patterns (React cleanup, servers, browser), see **[Code Examples](./references/EXAMPLES.md)**.

## Troubleshooting

### Model Not Found
- Verify the model exists on Hugging Face Hub
- Check the model name spelling
- Ensure the model has ONNX files (look for an `onnx` folder in the model repo)

### Memory Issues
- Use smaller models or quantized versions (`dtype: 'q4'`)
- Reduce batch size
- Limit sequence length with `max_length`

### WebGPU Errors
- Check browser compatibility (Chrome 113+, Edge 113+)
- Try `dtype: 'fp16'` if `fp32` fails
- Fall back to WASM if WebGPU is unavailable (see the sketch after this list)
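
A minimal sketch of the WASM fallback, assuming `navigator.gpu` as the WebGPU capability check; the dtype choices are illustrative:

```javascript
// Use WebGPU when the browser exposes it; otherwise stay on the default WASM backend
const hasWebGPU = typeof navigator !== 'undefined' && 'gpu' in navigator;

const pipe = await pipeline('text-generation', 'onnx-community/gemma-3-270m-it-ONNX', {
  device: hasWebGPU ? 'webgpu' : 'wasm',
  dtype: hasWebGPU ? 'fp16' : 'q8', // smaller dtype keeps the WASM path responsive
});
```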

## Reference Documentation

### This Skill
- **[Pipeline Options](./references/PIPELINE_OPTIONS.md)** - Configure `pipeline()` with `progress_callback`, `device`, `dtype`, etc.
- **[Configuration Reference](./references/CONFIGURATION.md)** - Global `env` configuration for caching and model loading
- **[Caching Reference](./references/CACHE.md)** - Browser Cache API, Node.js filesystem cache, and custom cache implementations
- **[Text Generation Guide](./references/TEXT_GENERATION.md)** - Streaming, chat format, and generation parameters
- **[Model Architectures](./references/MODEL_ARCHITECTURES.md)** - Supported models and selection tips
- **[Code Examples](./references/EXAMPLES.md)** - Real-world implementations for different runtimes

### Official Transformers.js
- Official docs: https://huggingface.co/docs/transformers.js
- API reference: https://huggingface.co/docs/transformers.js/api/pipelines
- Model hub: https://huggingface.co/models?library=transformers.js
- GitHub: https://github.com/huggingface/transformers.js
- Examples: https://github.com/huggingface/transformers.js/tree/main/examples

## Best Practices

1. **Always Dispose Pipelines**: Call `pipe.dispose()` when done - critical for preventing memory leaks
2. **Start with Pipelines**: Use the pipeline API unless you need fine-grained control
3. **Test Locally First**: Test models with small inputs before deploying
4. **Monitor Model Sizes**: Be aware of model download sizes for web applications
5. **Handle Loading States**: Show progress indicators for better UX
6. **Version Pin**: Pin specific model versions for production stability
7. **Error Boundaries**: Always wrap pipeline calls in try-catch blocks
8. **Progressive Enhancement**: Provide fallbacks for unsupported browsers
9. **Reuse Models**: Load once, use many times - don't recreate pipelines unnecessarily
10. **Graceful Shutdown**: Dispose models on SIGTERM/SIGINT in servers (see the sketch after this list)
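
For practice 10, a minimal Node.js shutdown sketch (plain `process` signal handling, not a Transformers.js API):

```javascript
const pipe = await pipeline('sentiment-analysis');

// Release model memory before the process exits (e.g. when the container is stopped)
async function shutdown() {
  await pipe.dispose();
  process.exit(0);
}

process.on('SIGTERM', shutdown);
process.on('SIGINT', shutdown);
```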

## Quick Reference: Task IDs

| Task | Task ID |
|------|---------|
| Text classification | `text-classification` or `sentiment-analysis` |
| Token classification | `token-classification` or `ner` |
| Question answering | `question-answering` |
| Fill mask | `fill-mask` |
| Summarization | `summarization` |
| Translation | `translation` |
| Text generation | `text-generation` |
| Text-to-text generation | `text2text-generation` |
| Zero-shot classification | `zero-shot-classification` |
| Image classification | `image-classification` |
| Image segmentation | `image-segmentation` |
| Object detection | `object-detection` |
| Depth estimation | `depth-estimation` |
| Image-to-image | `image-to-image` |
| Zero-shot image classification | `zero-shot-image-classification` |
| Zero-shot object detection | `zero-shot-object-detection` |
| Automatic speech recognition | `automatic-speech-recognition` |
| Audio classification | `audio-classification` |
| Text-to-speech | `text-to-speech` or `text-to-audio` |
| Image-to-text | `image-to-text` |
| Document question answering | `document-question-answering` |
| Feature extraction | `feature-extraction` |
| Sentence similarity | `sentence-similarity` |

---

This skill enables you to integrate state-of-the-art machine learning capabilities directly into JavaScript applications without requiring separate ML servers or Python environments.