chub-dev 0.1.0 → 0.1.2-beta.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (139)
  1. package/README.md +55 -0
  2. package/bin/chub-mcp +2 -0
  3. package/dist/airtable/docs/database/javascript/DOC.md +1437 -0
  4. package/dist/airtable/docs/database/python/DOC.md +1735 -0
  5. package/dist/amplitude/docs/analytics/javascript/DOC.md +1282 -0
  6. package/dist/amplitude/docs/analytics/python/DOC.md +1199 -0
  7. package/dist/anthropic/docs/claude-api/javascript/DOC.md +503 -0
  8. package/dist/anthropic/docs/claude-api/python/DOC.md +389 -0
  9. package/dist/asana/docs/tasks/DOC.md +1396 -0
  10. package/dist/assemblyai/docs/transcription/DOC.md +1043 -0
  11. package/dist/atlassian/docs/confluence/javascript/DOC.md +1347 -0
  12. package/dist/atlassian/docs/confluence/python/DOC.md +1604 -0
  13. package/dist/auth0/docs/identity/javascript/DOC.md +968 -0
  14. package/dist/auth0/docs/identity/python/DOC.md +1199 -0
  15. package/dist/aws/docs/s3/javascript/DOC.md +1773 -0
  16. package/dist/aws/docs/s3/python/DOC.md +1807 -0
  17. package/dist/binance/docs/trading/javascript/DOC.md +1315 -0
  18. package/dist/binance/docs/trading/python/DOC.md +1454 -0
  19. package/dist/braintree/docs/gateway/javascript/DOC.md +1278 -0
  20. package/dist/braintree/docs/gateway/python/DOC.md +1179 -0
  21. package/dist/chromadb/docs/embeddings-db/javascript/DOC.md +1263 -0
  22. package/dist/chromadb/docs/embeddings-db/python/DOC.md +1707 -0
  23. package/dist/clerk/docs/auth/javascript/DOC.md +1220 -0
  24. package/dist/clerk/docs/auth/python/DOC.md +274 -0
  25. package/dist/cloudflare/docs/workers/javascript/DOC.md +918 -0
  26. package/dist/cloudflare/docs/workers/python/DOC.md +994 -0
  27. package/dist/cockroachdb/docs/distributed-db/DOC.md +1500 -0
  28. package/dist/cohere/docs/llm/DOC.md +1335 -0
  29. package/dist/datadog/docs/monitoring/javascript/DOC.md +1740 -0
  30. package/dist/datadog/docs/monitoring/python/DOC.md +1815 -0
  31. package/dist/deepgram/docs/speech/javascript/DOC.md +885 -0
  32. package/dist/deepgram/docs/speech/python/DOC.md +685 -0
  33. package/dist/deepl/docs/translation/javascript/DOC.md +887 -0
  34. package/dist/deepl/docs/translation/python/DOC.md +944 -0
  35. package/dist/deepseek/docs/llm/DOC.md +1220 -0
  36. package/dist/directus/docs/headless-cms/javascript/DOC.md +1128 -0
  37. package/dist/directus/docs/headless-cms/python/DOC.md +1276 -0
  38. package/dist/discord/docs/bot/javascript/DOC.md +1090 -0
  39. package/dist/discord/docs/bot/python/DOC.md +1130 -0
  40. package/dist/elasticsearch/docs/search/DOC.md +1634 -0
  41. package/dist/elevenlabs/docs/text-to-speech/javascript/DOC.md +336 -0
  42. package/dist/elevenlabs/docs/text-to-speech/python/DOC.md +552 -0
  43. package/dist/firebase/docs/auth/DOC.md +1015 -0
  44. package/dist/gemini/docs/genai/javascript/DOC.md +691 -0
  45. package/dist/gemini/docs/genai/python/DOC.md +555 -0
  46. package/dist/github/docs/octokit/DOC.md +1560 -0
  47. package/dist/google/docs/bigquery/javascript/DOC.md +1688 -0
  48. package/dist/google/docs/bigquery/python/DOC.md +1503 -0
  49. package/dist/hubspot/docs/crm/javascript/DOC.md +1805 -0
  50. package/dist/hubspot/docs/crm/python/DOC.md +2033 -0
  51. package/dist/huggingface/docs/transformers/DOC.md +948 -0
  52. package/dist/intercom/docs/messaging/javascript/DOC.md +1844 -0
  53. package/dist/intercom/docs/messaging/python/DOC.md +1797 -0
  54. package/dist/jira/docs/issues/javascript/DOC.md +1420 -0
  55. package/dist/jira/docs/issues/python/DOC.md +1492 -0
  56. package/dist/kafka/docs/streaming/javascript/DOC.md +1671 -0
  57. package/dist/kafka/docs/streaming/python/DOC.md +1464 -0
  58. package/dist/landingai-ade/docs/api/DOC.md +620 -0
  59. package/dist/landingai-ade/docs/sdk/python/DOC.md +489 -0
  60. package/dist/landingai-ade/docs/sdk/typescript/DOC.md +542 -0
  61. package/dist/landingai-ade/skills/SKILL.md +489 -0
  62. package/dist/launchdarkly/docs/feature-flags/javascript/DOC.md +1191 -0
  63. package/dist/launchdarkly/docs/feature-flags/python/DOC.md +1671 -0
  64. package/dist/linear/docs/tracker/DOC.md +1554 -0
  65. package/dist/livekit/docs/realtime/javascript/DOC.md +303 -0
  66. package/dist/livekit/docs/realtime/python/DOC.md +163 -0
  67. package/dist/mailchimp/docs/marketing/DOC.md +1420 -0
  68. package/dist/meilisearch/docs/search/DOC.md +1241 -0
  69. package/dist/microsoft/docs/onedrive/javascript/DOC.md +1421 -0
  70. package/dist/microsoft/docs/onedrive/python/DOC.md +1549 -0
  71. package/dist/mongodb/docs/atlas/DOC.md +2041 -0
  72. package/dist/notion/docs/workspace-api/javascript/DOC.md +1435 -0
  73. package/dist/notion/docs/workspace-api/python/DOC.md +1400 -0
  74. package/dist/okta/docs/identity/javascript/DOC.md +1171 -0
  75. package/dist/okta/docs/identity/python/DOC.md +1401 -0
  76. package/dist/openai/docs/chat/javascript/DOC.md +407 -0
  77. package/dist/openai/docs/chat/python/DOC.md +568 -0
  78. package/dist/paypal/docs/checkout/DOC.md +278 -0
  79. package/dist/pinecone/docs/sdk/javascript/DOC.md +984 -0
  80. package/dist/pinecone/docs/sdk/python/DOC.md +1395 -0
  81. package/dist/plaid/docs/banking/javascript/DOC.md +1163 -0
  82. package/dist/plaid/docs/banking/python/DOC.md +1203 -0
  83. package/dist/playwright-community/skills/login-flows/SKILL.md +108 -0
  84. package/dist/postmark/docs/transactional-email/DOC.md +1168 -0
  85. package/dist/prisma/docs/orm/javascript/DOC.md +1419 -0
  86. package/dist/prisma/docs/orm/python/DOC.md +1317 -0
  87. package/dist/qdrant/docs/vector-search/javascript/DOC.md +1221 -0
  88. package/dist/qdrant/docs/vector-search/python/DOC.md +1653 -0
  89. package/dist/rabbitmq/docs/message-queue/javascript/DOC.md +1193 -0
  90. package/dist/rabbitmq/docs/message-queue/python/DOC.md +1243 -0
  91. package/dist/razorpay/docs/payments/javascript/DOC.md +1219 -0
  92. package/dist/razorpay/docs/payments/python/DOC.md +1330 -0
  93. package/dist/redis/docs/key-value/javascript/DOC.md +1851 -0
  94. package/dist/redis/docs/key-value/python/DOC.md +2054 -0
  95. package/dist/registry.json +2817 -0
  96. package/dist/replicate/docs/model-hosting/DOC.md +1318 -0
  97. package/dist/resend/docs/email/DOC.md +1271 -0
  98. package/dist/salesforce/docs/crm/javascript/DOC.md +1241 -0
  99. package/dist/salesforce/docs/crm/python/DOC.md +1183 -0
  100. package/dist/search-index.json +1 -0
  101. package/dist/sendgrid/docs/email-api/javascript/DOC.md +371 -0
  102. package/dist/sendgrid/docs/email-api/python/DOC.md +656 -0
  103. package/dist/sentry/docs/error-tracking/javascript/DOC.md +1073 -0
  104. package/dist/sentry/docs/error-tracking/python/DOC.md +1309 -0
  105. package/dist/shopify/docs/storefront/DOC.md +457 -0
  106. package/dist/slack/docs/workspace/javascript/DOC.md +933 -0
  107. package/dist/slack/docs/workspace/python/DOC.md +271 -0
  108. package/dist/square/docs/payments/javascript/DOC.md +1855 -0
  109. package/dist/square/docs/payments/python/DOC.md +1728 -0
  110. package/dist/stripe/docs/api/DOC.md +1727 -0
  111. package/dist/stripe/docs/payments/DOC.md +1726 -0
  112. package/dist/stytch/docs/auth/javascript/DOC.md +1813 -0
  113. package/dist/stytch/docs/auth/python/DOC.md +1962 -0
  114. package/dist/supabase/docs/client/DOC.md +1606 -0
  115. package/dist/twilio/docs/messaging/python/DOC.md +469 -0
  116. package/dist/twilio/docs/messaging/typescript/DOC.md +946 -0
  117. package/dist/vercel/docs/platform/DOC.md +1940 -0
  118. package/dist/weaviate/docs/vector-db/javascript/DOC.md +1268 -0
  119. package/dist/weaviate/docs/vector-db/python/DOC.md +1388 -0
  120. package/dist/zendesk/docs/support/javascript/DOC.md +2150 -0
  121. package/dist/zendesk/docs/support/python/DOC.md +2297 -0
  122. package/package.json +22 -6
  123. package/skills/get-api-docs/SKILL.md +84 -0
  124. package/src/commands/annotate.js +83 -0
  125. package/src/commands/build.js +12 -1
  126. package/src/commands/feedback.js +150 -0
  127. package/src/commands/get.js +83 -42
  128. package/src/commands/search.js +7 -0
  129. package/src/index.js +43 -17
  130. package/src/lib/analytics.js +90 -0
  131. package/src/lib/annotations.js +57 -0
  132. package/src/lib/bm25.js +170 -0
  133. package/src/lib/cache.js +69 -6
  134. package/src/lib/config.js +8 -3
  135. package/src/lib/identity.js +99 -0
  136. package/src/lib/registry.js +103 -20
  137. package/src/lib/telemetry.js +86 -0
  138. package/src/mcp/server.js +177 -0
  139. package/src/mcp/tools.js +251 -0
@@ -0,0 +1,948 @@
1
+ ---
2
+ name: transformers
3
+ description: "Transformers.js coding guidelines for running ML models in the browser or Node.js"
4
+ metadata:
5
+ languages: "javascript"
6
+ versions: "3.7.6"
7
+ updated-on: "2026-03-02"
8
+ source: maintainer
9
+ tags: "huggingface,transformers,ml,inference,models"
10
+ ---
11
+
12
+ # Transformers.js Coding Guidelines (JavaScript/TypeScript)
13
+
14
+ You are a Transformers.js expert. Help me write code with the Transformers.js library for running machine learning models directly in the browser or Node.js.
15
+
16
+ Please follow these guidelines when generating code.
17
+
18
+ You can find the official documentation and examples here:
19
+ https://huggingface.co/docs/transformers.js/
20
+
21
+ ## Golden Rule: Use the Correct and Current Package
22
+
23
+ Always use the official Transformers.js package `@huggingface/transformers` for all machine learning inference tasks. This is the standard library for running transformer models in JavaScript environments.
24
+
25
+ - **Library Name:** Transformers.js
26
+ - **NPM Package:** `@huggingface/transformers`
27
+ - **Current Version:** 3.7.6
28
+
29
+ **Installation:**
30
+
31
+ - **Correct:** `npm i @huggingface/transformers`
32
+ - **Browser CDN:** `https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.7.6`
33
+
34
+ **Main APIs and Usage:**
35
+
36
+ - **Correct:** `import { pipeline } from '@huggingface/transformers'`
37
+ - **Correct:** `const pipe = await pipeline('task-name')`
38
+ - **Correct:** `const result = await pipe(input)`
39
+
40
+ ## Installation and Setup
41
+
42
+ ### Browser Installation
43
+
44
+ For browser environments, you can use either NPM or CDN:
45
+
46
+ **NPM Installation:**
47
+ ```bash
48
+ npm i @huggingface/transformers
49
+ ```
50
+
51
+ **CDN Installation:**
52
+ ```html
53
+ <script type="module">
54
+ import { pipeline } from 'https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.7.6';
55
+ </script>
56
+ ```
57
+
58
+
59
+ ### Node.js Installation
60
+
61
+ For Node.js environments, install via NPM and configure your project:
62
+
63
+ **ESM (Recommended):**
64
+ To indicate that your project uses ECMAScript modules, you need to add `"type": "module"` to your `package.json`:
65
+
66
+ ```json
67
+ {
68
+ //...
69
+ "type": "module",
70
+ //...
71
+ }
72
+ ```
73
+
74
+ **CommonJS:**
75
+ Use dynamic imports to load Transformers.js.
76
+
77
+ Define a `MyClassificationPipeline` class that lazily loads the library. Since Transformers.js is an ES module, CommonJS projects must dynamically import it using the [`import()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/import) function:
78
+
79
+ ```javascript
80
+ class MyClassificationPipeline {
81
+ static task = 'text-classification';
82
+ static model = 'Xenova/distilbert-base-uncased-finetuned-sst-2-english';
83
+ static instance = null;
84
+
85
+ static async getInstance(progress_callback = null) {
86
+ if (this.instance === null) {
87
+ // Dynamically import the Transformers.js library
88
+ let { pipeline, env } = await import('@huggingface/transformers');
89
+
90
+ // NOTE: Uncomment this to change the cache directory
91
+ // env.cacheDir = './.cache';
92
+
93
+ this.instance = pipeline(this.task, this.model, { progress_callback });
94
+ }
95
+
96
+ return this.instance;
97
+ }
98
+ }
99
+ ```
100
+
101
+ ## Basic Inference (Text Processing)
102
+
103
+ The `pipeline()` function is the easiest way to use pretrained models:
104
+
105
+ ```javascript
106
+ import { pipeline } from '@huggingface/transformers';
107
+
108
+ const classifier = await pipeline('sentiment-analysis');
109
+ ```
110
+
111
+ **Text Classification Example:**
112
+
113
+ ```javascript
114
+ const result = await classifier('I love transformers!');
115
+ // [{'label': 'POSITIVE', 'score': 0.9998}]
116
+ ```
117
+
118
+
119
+ **Multiple Inputs:**
120
+
121
+ ```javascript
122
+ const result = await classifier(['I love transformers!', 'I hate transformers!']);
123
+ // [{'label': 'POSITIVE', 'score': 0.9998}, {'label': 'NEGATIVE', 'score': 0.9982}]
124
+ ```
125
+
126
+ **Custom Models:**
127
+ ```javascript
128
+ const reviewer = await pipeline('sentiment-analysis', 'Xenova/bert-base-multilingual-uncased-sentiment');
129
+
130
+ const result = await reviewer('The Shawshank Redemption is a true masterpiece of cinema.');
131
+ // [{label: '5 stars', score: 0.8167929649353027}]
132
+ ```
133
+
134
+ ## Multimodal Input Support
135
+
136
+ Transformers.js supports various input types including images, audio, and video:
137
+
138
+ **Image Processing:**
139
+
140
+ By default, when running in the browser, the model will be run on your CPU (via WASM). If you would like
141
+ to run the model on your GPU (via WebGPU), you can do this by setting `device: 'webgpu'`, for example:
142
+ ```javascript
143
+ // Run the model on WebGPU
144
+ const pipe = await pipeline('sentiment-analysis', 'Xenova/distilbert-base-uncased-finetuned-sst-2-english', {
145
+ device: 'webgpu',
146
+ });
147
+ ```
148
+
149
+ For more information, check out the [WebGPU guide](https://huggingface.co/docs/transformers.js/guides/webgpu).
150
+
151
+
152
+
153
+ In resource-constrained environments, such as web browsers, it is advisable to use a quantized version of
154
+ the model to lower bandwidth and optimize performance. This can be achieved by adjusting the `dtype` option,
155
+ which allows you to select the appropriate data type for your model. While the available options may vary
156
+ depending on the specific model, typical choices include `"fp32"` (default for WebGPU), `"fp16"`, `"q8"`
157
+ (default for WASM), and `"q4"`. For more information, check out the [quantization guide](https://huggingface.co/docs/transformers.js/guides/dtypes).
158
+ ```javascript
159
+ // Run the model at 4-bit quantization
160
+ const pipe = await pipeline('sentiment-analysis', 'Xenova/distilbert-base-uncased-finetuned-sst-2-english', {
161
+ dtype: 'q4',
162
+ });
163
+ ```
164
+
165
+ **Audio Processing (ASR):**
166
+
167
+
168
+
169
+ ## Device Configuration
170
+
171
+ ### WebGPU Acceleration
172
+
173
+ For GPU acceleration in browsers, use the `device: 'webgpu'` option:
174
+
175
+ **WebGPU Usage Example:**
176
+
177
+ ### Device Options
178
+
179
+ Available device options include:
180
+ - `'cpu'` - CPU execution (default for Node.js)
181
+ - `'wasm'` - WebAssembly execution (default for browsers)
182
+ - `'webgpu'` - GPU acceleration (browsers with WebGPU support)
183
+ - `'webnn'` - Web Neural Network API acceleration
184
+
185
+ ## Quantization and Data Types
186
+
187
+ ### Basic Quantization
188
+
189
+ Use the `dtype` parameter to control model precision and size:
190
+
191
+ **Available dtypes:**
192
+ - `"fp32"` - Full precision (default for WebGPU)
193
+ - `"fp16"` - Half precision
194
+ - `"q8"` - 8-bit quantization (default for WASM)
195
+ - `"q4"` - 4-bit quantization (smallest size)
196
+
197
+ **Basic Quantization Example:**
198
+
199
+ ### Per-Module Quantization
200
+
201
+ For complex models, you can specify different quantization levels per module:
202
+
203
+ ## Environment Configuration
204
+
205
+ ### Global Settings
206
+
207
+ Configure Transformers.js behavior using the `env` object:
208
+
209
+ **Common Configuration Options:**
210
+
211
+ - **Remote Models:** `env.allowRemoteModels = false`
212
+ - **Local Model Path:** `env.localModelPath = '/path/to/models/'`
213
+ - **Cache Directory:** `env.cacheDir = '/path/to/cache/'`
214
+
215
+ ### Node.js Specific Settings
216
+
217
+ For Node.js applications, you can customize caching and model loading:
218
+
219
+ **Default Cache Location:**
220
+ - Node.js: `node_modules/@huggingface/transformers/.cache/`
221
+ - Models are organized by author/model-name subdirectories
222
+ - Each model contains config.json, tokenizer files, and ONNX weights in an `onnx/` subfolder
223
+
224
+ ## Pipeline Options and Generation Parameters
225
+
226
+ ### Loading Options
227
+
228
+ Control how models are loaded with `PretrainedOptions`:
229
+
230
+
232
+ **Available Options:**
233
+ ```javascript
234
+ const pipe = await pipeline('task-name', 'model-name', {
235
+ device: 'webgpu', // 'cpu', 'wasm', 'webgpu', 'webnn'
236
+ dtype: 'q8', // 'fp32', 'fp16', 'q8', 'q4'
237
+ progress_callback: (info) => console.log(info), // Track download progress
238
+ revision: 'main', // Specific model revision/branch
239
+ });
240
+ ```
241
+
242
+ ### Generation Parameters
243
+
244
+ For text generation models, use GenerationConfig options:
245
+
246
+ **Common Generation Options:**
247
+ ```javascript
248
+ const result = await generator(prompt, {
249
+ max_new_tokens: 50, // Maximum tokens to generate
250
+ temperature: 0.9, // Randomness (0.0 = deterministic, 1.0+ = creative)
251
+ do_sample: true, // Enable sampling (required for temperature/top_k)
252
+ top_k: 50, // Consider only top K tokens
253
+ repetition_penalty: 2.0, // Penalize repetition
254
+ no_repeat_ngram_size: 3, // Prevent n-gram repetition
255
+ });
256
+ ```
257
+
258
+ ### Feature Extraction Options
259
+
260
+ For embedding models, specify pooling and normalization:
261
+
262
+ ```javascript
263
+ const embeddings = await extractor(texts, {
264
+ pooling: 'mean', // 'mean', 'max', 'cls'
265
+ normalize: true // L2 normalization for similarity tasks
266
+ });
267
+
268
+ // Convert tensor to array
269
+ const embeddingArray = embeddings.tolist();
270
+ ```
271
+
272
+ ### Streaming Output
273
+
274
+ Enable streaming for real-time text generation:
275
+
276
+ ## Translation and Multilingual Models
277
+
278
+ ### Available Translation Models
279
+
280
+ Transformers.js supports several translation model families on Hugging Face Hub:
281
+
282
+ **OPUS-MT Models (Recommended for Lightweight Use):**
283
+ - Lightweight, fast translation models from the Marian framework
284
+ - Trained on OPUS multilingual data by Helsinki-NLP
285
+ - Available as Xenova-converted ONNX models for browser compatibility
286
+ - Examples: `Xenova/opus-mt-en-es`, `Xenova/opus-mt-en-fr`, `Xenova/opus-mt-ja-en`
287
+ - Best for: Single language-pair translation, browser applications, fast inference
288
+
289
+ **NLLB (No Language Left Behind):**
290
+ - Meta's multilingual model supporting 200+ languages
291
+ - Models: `Xenova/nllb-200-distilled-600M` (and larger variants)
292
+ - Requires more resources but supports many language pairs
293
+ - Best for: Multi-language support, low-resource languages
294
+
295
+ **mBART Models:**
296
+ - Facebook's multilingual translation models
297
+ - Good for document-level translation
298
+
299
+ ### Translation Usage
300
+
301
+ For translation tasks, specify source and target languages:
302
+
303
+ **Language Code Format:**
304
+ - OPUS-MT models: Often work with simple language codes
305
+ - NLLB models: Use codes like `eng_Latn`, `spa_Latn`, `fra_Latn`, etc.
306
+ - Check model card on Hugging Face for supported language codes
307
+
308
+ **Important Notes:**
309
+ - OPUS-MT models are typically single-direction (e.g., EN→ES only)
310
+ - For bi-directional translation, you need two separate OPUS-MT models
311
+ - NLLB models support multiple directions but are much larger
312
+ - Translation quality and speed vary significantly between model families
313
+
314
+ ## Supported Tasks
315
+
316
+ ### Natural Language Processing
317
+
318
+ Main NLP tasks include:
319
+ - Text Classification (`text-classification` or `sentiment-analysis`)
320
+ - Question Answering (`question-answering`)
321
+ - Text Generation (`text-generation`)
322
+ - Translation (`translation`)
323
+ - Summarization (`summarization`)
324
+ - Token Classification (`token-classification` or `ner`)
325
+ - Fill Mask (`fill-mask`)
326
+ - Zero-Shot Classification (`zero-shot-classification`)
327
+ - Feature Extraction (`feature-extraction`)
328
+
329
+ **Recommended Models by Task:**
330
+
331
+ - **Sentiment Analysis:** `Xenova/distilbert-base-uncased-finetuned-sst-2-english` (fast, accurate)
332
+ - **Text Generation:** `Xenova/gpt2` (lightweight), `onnx-community/Qwen2.5-Coder-0.5B-Instruct` (code/chat)
333
+ - **Feature Extraction:** `Xenova/all-MiniLM-L6-v2` (384-dim embeddings, fast)
334
+ - **Translation:** `Xenova/opus-mt-*` series (lightweight, language-specific)
335
+ - **Question Answering:** `Xenova/distilbert-base-cased-distilled-squad`
336
+
337
+ ### Computer Vision
338
+
339
+ Vision tasks include:
340
+ - Image Classification (`image-classification`)
341
+ - Object Detection (`object-detection`)
342
+ - Image Segmentation (`image-segmentation`)
343
+ - Depth Estimation (`depth-estimation`)
344
+ - Background Removal (`background-removal`)
345
+ - Image-to-Image (`image-to-image`)
346
+ - Image Feature Extraction (`image-feature-extraction`)
347
+
348
+ ### Audio Processing
349
+
350
+ Audio tasks include:
351
+ - Automatic Speech Recognition (`automatic-speech-recognition`)
352
+ - Audio Classification (`audio-classification`)
353
+ - Text-to-Speech (`text-to-speech` or `text-to-audio`)
354
+
355
+ ### Multimodal
356
+
357
+ Multimodal tasks include:
358
+ - Document Question Answering (`document-question-answering`)
359
+ - Image-to-Text (`image-to-text`)
360
+ - Zero-Shot Image Classification (`zero-shot-image-classification`)
361
+ - Zero-Shot Audio Classification (`zero-shot-audio-classification`)
362
+ - Zero-Shot Object Detection (`zero-shot-object-detection`)
363
+
364
+ ## Framework Integration
365
+
366
+ ### Node.js Server Example
367
+
368
+ Create a basic HTTP server with Transformers.js:
369
+
370
+ Use the singleton pattern for efficient model loading:
371
+
372
+ ## Error Handling and Best Practices
373
+
374
+ ### General Best Practices
375
+
376
+ - Always await pipeline creation and inference calls
377
+ - Use lazy loading patterns (singleton pattern) for efficient model loading
378
+ - Enable WebGPU when available for better performance in browsers
379
+ - Choose appropriate quantization levels based on your requirements
380
+ - Cache models locally for production applications
381
+
382
+ ### Singleton Pattern for Model Loading
383
+
384
+ Use the singleton pattern to load models once and reuse them across requests:
385
+
386
+ ```javascript
387
+ class MyPipeline {
388
+ static task = 'sentiment-analysis';
389
+ static model = 'Xenova/distilbert-base-uncased-finetuned-sst-2-english';
390
+ static instance = null;
391
+
392
+ static async getInstance(progress_callback = null) {
393
+ if (this.instance === null) {
394
+ const { pipeline, env } = await import('@huggingface/transformers');
395
+
396
+ // Configure environment if needed
397
+ // env.cacheDir = './.cache';
398
+
399
+ this.instance = await pipeline(this.task, this.model, { progress_callback });
400
+ }
401
+ return this.instance;
402
+ }
403
+ }
404
+ ```
405
+
406
+ ### Model Loading Warnings
407
+
408
+ When loading models, you may see dtype warnings in the console:
409
+
410
+ ```
411
+ dtype not specified for "model". Using the default dtype (fp32) for this device (cpu).
412
+ ```
413
+
414
+ These are informational warnings and can be safely ignored. To suppress them, explicitly specify the dtype:
415
+
416
+ ```javascript
417
+ const pipe = await pipeline('task-name', 'model-name', {
418
+ dtype: 'q8' // or 'fp32', 'fp16', 'q4', etc.
419
+ });
420
+ ```
421
+
422
+ ### Common Issues and Solutions
423
+
424
+ **Model Cache Issues:**
425
+ - If a model fails to load with "Protobuf parsing failed" errors, the cached model may be corrupted
426
+ - Clear the cache directory (default: `node_modules/@huggingface/transformers/.cache/`)
427
+ - Consider using a different model or switching to a lighter variant
428
+
429
+ **Translation Models:**
430
+ - Large translation models (600M+ parameters) may be too heavy for browser environments
431
+ - Prefer lightweight OPUS-MT models for single language pairs
432
+ - NLLB models require significant memory and may time out in resource-constrained environments
433
+
434
+ **Response Structure:**
435
+ - Different pipelines return different response structures
436
+ - Sentiment analysis: `[{label: 'POSITIVE', score: 0.99}]`
437
+ - Translation: `[{translation_text: 'Hola mundo'}]`
438
+ - Feature extraction: Returns tensor objects, use `.tolist()` to convert to arrays
439
+ - Question answering: `{answer: 'text', score: 0.95}`
440
+ - Text generation: `[{generated_text: 'text'}]`
441
+
442
+ ## Model Conversion
443
+
444
+ To use custom models, convert them to ONNX format:
445
+
446
+ The conversion script supports quantization:
447
+
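The conversion script ships with the transformers.js repository itself; a sketch, assuming a clone of the repo with its Python requirements installed:

```shell
# From a clone of https://github.com/huggingface/transformers.js
# with the script dependencies installed (pip install -r scripts/requirements.txt):

# Convert a Hugging Face model to ONNX and also emit a quantized variant
python -m scripts.convert --quantize --model_id bert-base-uncased

# Output lands in ./models/bert-base-uncased/ with config and tokenizer files,
# plus onnx/model.onnx and onnx/model_quantized.onnx
```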
448
+ ## Useful Links
449
+
450
+ - Documentation: https://huggingface.co/docs/transformers.js
451
+ - NPM Package: https://www.npmjs.com/package/@huggingface/transformers
452
+ - GitHub Repository: https://github.com/huggingface/transformers.js
453
+ - Model Hub (transformers.js compatible): https://huggingface.co/models?library=transformers.js
454
+ - Examples and Templates: https://github.com/huggingface/transformers.js-examples
455
+
456
+ ## Notes
457
+
458
+ This is a comprehensive guide for using Transformers.js in JavaScript applications. The library is designed to be functionally equivalent to Hugging Face's Python transformers library but optimized for JavaScript environments. It supports running inference in browsers, Node.js, and web workers using ONNX Runtime for optimal performance.
459
+
460
+ Key advantages include:
461
+ - No server required for inference
462
+ - Support for quantized models for better performance
463
+ - WebGPU acceleration when available
464
+ - Comprehensive task coverage across NLP, computer vision, and audio
465
+ - Easy integration with existing JavaScript applications
466
+
467
+ ### Citations
468
+
469
+ ```markdown
470
+ To install via [NPM](https://www.npmjs.com/package/@huggingface/transformers), run:
471
+ ```bash
472
+ npm i @huggingface/transformers
473
+ ```
474
+ ```
475
+
476
+ ```markdown
477
+ Alternatively, you can use it in vanilla JS, without any bundler, by using a CDN or static hosting. For example, using [ES Modules](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules), you can import the library with:
478
+ ```html
479
+ <script type="module">
480
+ import { pipeline } from 'https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.5.2';
481
+ </script>
482
+ ```
483
+ ```
484
+
485
+ ```markdown
486
+ By default, when running in the browser, the model will be run on your CPU (via WASM). If you would like
487
+ to run the model on your GPU (via WebGPU), you can do this by setting `device: 'webgpu'`, for example:
488
+ ```javascript
489
+ // Run the model on WebGPU
490
+ const pipe = await pipeline('sentiment-analysis', 'Xenova/distilbert-base-uncased-finetuned-sst-2-english', {
491
+ device: 'webgpu',
492
+ });
493
+ ```
494
+
495
+ For more information, check out the [WebGPU guide](https://huggingface.co/docs/transformers.js/guides/webgpu).
496
+
497
+ > [!WARNING]
498
+ > The WebGPU API is still experimental in many browsers, so if you run into any issues,
499
+ > please file a [bug report](https://github.com/huggingface/transformers.js/issues/new?title=%5BWebGPU%5D%20Error%20running%20MODEL_ID_GOES_HERE&assignees=&labels=bug,webgpu&projects=&template=1_bug-report.yml).
500
+
501
+ In resource-constrained environments, such as web browsers, it is advisable to use a quantized version of
502
+ the model to lower bandwidth and optimize performance. This can be achieved by adjusting the `dtype` option,
503
+ which allows you to select the appropriate data type for your model. While the available options may vary
504
+ depending on the specific model, typical choices include `"fp32"` (default for WebGPU), `"fp16"`, `"q8"`
505
+ (default for WASM), and `"q4"`. For more information, check out the [quantization guide](https://huggingface.co/docs/transformers.js/guides/dtypes).
506
+ ```javascript
507
+ // Run the model at 4-bit quantization
508
+ const pipe = await pipeline('sentiment-analysis', 'Xenova/distilbert-base-uncased-finetuned-sst-2-english', {
509
+ dtype: 'q4',
510
+ });
511
+ ```
512
+ ```
513
+
514
+ ```markdown
515
+ ```javascript
516
+ import { env } from '@huggingface/transformers';
517
+
518
+ // Specify a custom location for models (defaults to '/models/').
519
+ env.localModelPath = '/path/to/models/';
520
+
521
+ // Disable the loading of remote models from the Hugging Face Hub:
522
+ env.allowRemoteModels = false;
523
+
524
+ // Set location of .wasm files. Defaults to use a CDN.
525
+ env.backends.onnx.wasm.wasmPaths = '/path/to/files/';
526
+ ```
527
+ ```
528
+
529
+ ```markdown
530
+ ```bash
531
+ python -m scripts.convert --quantize --model_id <model_name_or_path>
532
+ ```
533
+
534
+ For example, convert and quantize [bert-base-uncased](https://huggingface.co/bert-base-uncased) using:
535
+ ```bash
536
+ python -m scripts.convert --quantize --model_id bert-base-uncased
537
+ ```
538
+
539
+ ```
540
+
541
+ ```markdown
542
+ This will save the following files to `./models/`:
543
+
544
+ ```
545
+ bert-base-uncased/
546
+ ├── config.json
547
+ ├── tokenizer.json
548
+ ├── tokenizer_config.json
549
+ └── onnx/
550
+ ├── model.onnx
551
+ └── model_quantized.onnx
552
+ ```
553
+ ```
554
+
555
+ ```markdown
556
+ To indicate that your project uses ECMAScript modules, you need to add `"type": "module"` to your `package.json`:
557
+
558
+ ```json
559
+ {
560
+ ...
561
+ "type": "module",
562
+ ...
563
+ }
564
+ ```
565
+ ```
566
+
567
+ ```markdown
568
+ class MyClassificationPipeline {
569
+ static task = 'text-classification';
570
+ static model = 'Xenova/distilbert-base-uncased-finetuned-sst-2-english';
571
+ static instance = null;
572
+
573
+ static async getInstance(progress_callback = null) {
574
+ if (this.instance === null) {
575
+ // NOTE: Uncomment this to change the cache directory
576
+ // env.cacheDir = './.cache';
577
+
578
+ this.instance = pipeline(this.task, this.model, { progress_callback });
579
+ }
580
+
581
+ return this.instance;
582
+ }
583
+ }
584
+ ```
585
+ ```
+
+ ```markdown
+ Following that, let's import Transformers.js and define the `MyClassificationPipeline` class. Since Transformers.js is an ESM module, we will need to dynamically import the library using the [`import()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/import) function:
+
+ ```javascript
+ class MyClassificationPipeline {
+   static task = 'text-classification';
+   static model = 'Xenova/distilbert-base-uncased-finetuned-sst-2-english';
+   static instance = null;
+
+   static async getInstance(progress_callback = null) {
+     if (this.instance === null) {
+       // Dynamically import the Transformers.js library
+       let { pipeline, env } = await import('@huggingface/transformers');
+
+       // NOTE: Uncomment this to change the cache directory
+       // env.cacheDir = './.cache';
+
+       this.instance = pipeline(this.task, this.model, { progress_callback });
+     }
+
+     return this.instance;
+   }
+ }
+ ```
+ ```
+
+ ```markdown
+ ```javascript
+ // Define the HTTP server
+ const server = http.createServer();
+ const hostname = '127.0.0.1';
+ const port = 3000;
+
+ // Listen for requests made to the server
+ server.on('request', async (req, res) => {
+   // Parse the request URL
+   const parsedUrl = url.parse(req.url);
+
+   // Extract the query parameters
+   const { text } = querystring.parse(parsedUrl.query);
+
+   // Set the response headers
+   res.setHeader('Content-Type', 'application/json');
+
+   let response;
+   if (parsedUrl.pathname === '/classify' && text) {
+     const classifier = await MyClassificationPipeline.getInstance();
+     response = await classifier(text);
+     res.statusCode = 200;
+   } else {
+     response = { 'error': 'Bad request' };
+     res.statusCode = 400;
+   }
+
+   // Send the JSON response
+   res.end(JSON.stringify(response));
+ });
+
+ server.listen(port, hostname, () => {
+   console.log(`Server running at http://${hostname}:${port}/`);
+ });
+ ```
+ ```
+
+ ```markdown
+ ### Model caching
+
+ By default, the first time you run the application, it will download the model files and cache them on your file system (in `./node_modules/@huggingface/transformers/.cache/`). All subsequent requests will then use this model. You can change the location of the cache by setting `env.cacheDir`. For example, to cache the model in the `.cache` directory in the current working directory, you can add:
+
+ ```javascript
+ env.cacheDir = './.cache';
+ ```
+
+ ### Use local models
+
+ If you want to use local model files, you can set `env.localModelPath` as follows:
+
+ ```javascript
+ // Specify a custom location for models (defaults to '/models/').
+ env.localModelPath = '/path/to/models/';
+ ```
+
+ You can also disable loading of remote models by setting `env.allowRemoteModels` to `false`:
+
+ ```javascript
+ // Disable the loading of remote models from the Hugging Face Hub:
+ env.allowRemoteModels = false;
+ ```
+ ```
+
+ ```markdown
+ ```javascript
+ import { pipeline } from '@huggingface/transformers';
+
+ const classifier = await pipeline('sentiment-analysis');
+ ```
+ ```
+
+ ```markdown
+ ```javascript
+ const result = await classifier('I love transformers!');
+ // [{'label': 'POSITIVE', 'score': 0.9998}]
+ ```
+ ```
+
+ ```markdown
+ ```javascript
+ const result = await classifier(['I love transformers!', 'I hate transformers!']);
+ // [{'label': 'POSITIVE', 'score': 0.9998}, {'label': 'NEGATIVE', 'score': 0.9982}]
+ ```
+ ```
+
+ ```markdown
+ ```javascript
+ const reviewer = await pipeline('sentiment-analysis', 'Xenova/bert-base-multilingual-uncased-sentiment');
+
+ const result = await reviewer('The Shawshank Redemption is a true masterpiece of cinema.');
+ // [{label: '5 stars', score: 0.8167929649353027}]
+ ```
+ ```
+
+ ```markdown
+ ```javascript
+ // Create a pipeline for Automatic Speech Recognition
+ const transcriber = await pipeline('automatic-speech-recognition', 'Xenova/whisper-small.en');
+
+ // Transcribe an audio file, loaded from a URL.
+ const result = await transcriber('https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac');
+ // {text: ' I have a dream that one day this nation will rise up and live out the true meaning of its creed.'}
+ ```
+ ```
+
+ ```markdown
+ ```javascript
+ // Create a pipeline for feature extraction, using the full-precision model (fp32)
+ const pipe = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2', {
+   dtype: "fp32",
+ });
+ ```
+ Check out the section on [quantization](./guides/dtypes) to learn more.
+ ```
+
727
+ ```markdown
728
+ ```javascript
729
+ const transcriber = await pipeline('automatic-speech-recognition', 'Xenova/whisper-tiny.en', {
730
+ revision: 'output_attentions',
731
+ });
732
+ ```
733
+ ```
734
+
+ ```markdown
+ ```javascript
+ // Create a pipeline for translation
+ const translator = await pipeline('translation', 'Xenova/nllb-200-distilled-600M');
+
+ // Translate from English to Greek
+ const result = await translator('I like to walk my dog.', {
+   src_lang: 'eng_Latn',
+   tgt_lang: 'ell_Grek'
+ });
+ // [ { translation_text: 'Μου αρέσει να περπατάω το σκυλί μου.' } ]
+
+ // Translate back to English
+ const result2 = await translator(result[0].translation_text, {
+   src_lang: 'ell_Grek',
+   tgt_lang: 'eng_Latn'
+ });
+ // [ { translation_text: 'I like to walk my dog.' } ]
+ ```
+ ```
+
+ ```markdown
+ ```javascript
+ // Create a pipeline for text2text-generation
+ const poet = await pipeline('text2text-generation', 'Xenova/LaMini-Flan-T5-783M');
+ const result = await poet('Write me a love poem about cheese.', {
+   max_new_tokens: 200,
+   temperature: 0.9,
+   repetition_penalty: 2.0,
+   no_repeat_ngram_size: 3,
+ });
+ ```
+ ```
+
+ ```markdown
+ Some pipelines such as `text-generation` or `automatic-speech-recognition` support streaming output. This is achieved using the `TextStreamer` class. For example, when using a chat model like `Qwen2.5-Coder-0.5B-Instruct`, you can specify a callback function that will be called with each generated token's text (if unset, new tokens will be printed to the console).
+
+ ```js
+ import { pipeline, TextStreamer } from "@huggingface/transformers";
+
+ // Create a text generation pipeline
+ const generator = await pipeline(
+   "text-generation",
+   "onnx-community/Qwen2.5-Coder-0.5B-Instruct",
+   { dtype: "q4" },
+ );
+
+ // Define the list of messages
+ const messages = [
+   { role: "system", content: "You are a helpful assistant." },
+   { role: "user", content: "Write a quick sort algorithm." },
+ ];
+
+ // Create text streamer
+ const streamer = new TextStreamer(generator.tokenizer, {
+   skip_prompt: true,
+   // Optionally, do something with the text (e.g., write to a textbox)
+   // callback_function: (text) => { /* Do something with text */ },
+ });
+
+ // Generate a response
+ const result = await generator(messages, { max_new_tokens: 512, do_sample: false, streamer });
+ ```
+ ```
+
+ ```markdown
+ ```js
+ import { pipeline } from "@huggingface/transformers";
+
+ // Create a feature-extraction pipeline
+ const extractor = await pipeline(
+   "feature-extraction",
+   "mixedbread-ai/mxbai-embed-xsmall-v1",
+   { device: "webgpu" },
+ );
+
+ // Compute embeddings
+ const texts = ["Hello world!", "This is an example sentence."];
+ const embeddings = await extractor(texts, { pooling: "mean", normalize: true });
+ console.log(embeddings.tolist());
+ // [
+ //   [-0.016986183822155, 0.03228696808218956, -0.0013630966423079371, ... ],
+ //   [0.09050482511520386, 0.07207386940717697, 0.05762749910354614, ... ],
+ // ]
+ ```
+ ```
+
+ ```markdown
+ Before Transformers.js v3, the `quantized` option selected between a quantized (q8) and a full-precision (fp32) variant of the model, by setting `quantized` to `true` or `false`, respectively. Now, the `dtype` parameter lets you select from a much larger list of quantizations.
+
+ The list of available quantizations depends on the model, but some common ones are: full-precision (`"fp32"`), half-precision (`"fp16"`), 8-bit (`"q8"`, `"int8"`, `"uint8"`), and 4-bit (`"q4"`, `"bnb4"`, `"q4f16"`).
+ ```
+
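For migration, the correspondence between the old boolean flag and the new parameter can be sketched as below. This is an illustrative mapping, not part of the library API; the model name and the `dtypeForQuantized` helper are hypothetical, shown only to make the old/new equivalence concrete.

```javascript
// v2 (deprecated): a boolean flag chose between the q8 and fp32 variants.
// const pipe = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2', { quantized: true });

// v3: the `dtype` string names the variant directly.
// const pipe = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2', { dtype: 'q8' });

// Rough equivalence between the old flag and the new parameter:
const dtypeForQuantized = (quantized) => (quantized ? 'q8' : 'fp32');

console.log(dtypeForQuantized(true));  // q8
console.log(dtypeForQuantized(false)); // fp32
```

Note that `dtype` also accepts variants the old flag could not express, such as `"fp16"` or `"q4"`.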
+ ```markdown
+ ```js
+ import { pipeline } from "@huggingface/transformers";
+
+ // Create a text generation pipeline
+ const generator = await pipeline(
+   "text-generation",
+   "onnx-community/Qwen2.5-0.5B-Instruct",
+   { dtype: "q4", device: "webgpu" },
+ );
+
+ // Define the list of messages
+ const messages = [
+   { role: "system", content: "You are a helpful assistant." },
+   { role: "user", content: "Tell me a funny joke." },
+ ];
+
+ // Generate a response
+ const output = await generator(messages, { max_new_tokens: 128 });
+ console.log(output[0].generated_text.at(-1).content);
+ ```
+ ```
+
+ ```markdown
+ Some encoder-decoder models, like Whisper or Florence-2, are extremely sensitive to quantization settings, especially for the encoder. For this reason, we added the ability to select per-module dtypes, which can be done by providing a mapping from module name to dtype.
+
+ **Example:** Run Florence-2 on WebGPU ([demo](https://v2.scrimba.com/s0pdm485fo))
+
+ ```js
+ import { Florence2ForConditionalGeneration } from "@huggingface/transformers";
+
+ const model = await Florence2ForConditionalGeneration.from_pretrained(
+   "onnx-community/Florence-2-base-ft",
+   {
+     dtype: {
+       embed_tokens: "fp16",
+       vision_encoder: "fp16",
+       encoder_model: "q4",
+       decoder_model_merged: "q4",
+     },
+     device: "webgpu",
+   },
+ );
+ ```
+ ```
+
+ ```javascript
+ /**
+  * **Example:** Disable remote models.
+  * ```javascript
+  * import { env } from '@huggingface/transformers';
+  * env.allowRemoteModels = false;
+  * ```
+  *
+  * **Example:** Set local model path.
+  * ```javascript
+  * import { env } from '@huggingface/transformers';
+  * env.localModelPath = '/path/to/local/models/';
+  * ```
+  *
+  * **Example:** Set cache directory.
+  * ```javascript
+  * import { env } from '@huggingface/transformers';
+  * env.cacheDir = '/path/to/cache/directory/';
+  * ```
+  *
+  * @module env
+  */
+ ```
+
+ ```text
+ | Task | ID | Description | Supported? |
+ |--------------------------|----|-------------|------------|
+ | [Fill-Mask](https://huggingface.co/tasks/fill-mask) | `fill-mask` | Masking some of the words in a sentence and predicting which words should replace those masks. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.FillMaskPipeline)<br>[(models)](https://huggingface.co/models?pipeline_tag=fill-mask&library=transformers.js) |
+ | [Question Answering](https://huggingface.co/tasks/question-answering) | `question-answering` | Retrieving the answer to a question from a given text. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.QuestionAnsweringPipeline)<br>[(models)](https://huggingface.co/models?pipeline_tag=question-answering&library=transformers.js) |
+ | [Sentence Similarity](https://huggingface.co/tasks/sentence-similarity) | `sentence-similarity` | Determining how similar two texts are. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.FeatureExtractionPipeline)<br>[(models)](https://huggingface.co/models?pipeline_tag=sentence-similarity&library=transformers.js) |
+ | [Summarization](https://huggingface.co/tasks/summarization) | `summarization` | Producing a shorter version of a document while preserving its important information. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.SummarizationPipeline)<br>[(models)](https://huggingface.co/models?pipeline_tag=summarization&library=transformers.js) |
+ | [Table Question Answering](https://huggingface.co/tasks/table-question-answering) | `table-question-answering` | Answering a question about information from a given table. | ❌ |
+ | [Text Classification](https://huggingface.co/tasks/text-classification) | `text-classification` or `sentiment-analysis` | Assigning a label or class to a given text. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.TextClassificationPipeline)<br>[(models)](https://huggingface.co/models?pipeline_tag=text-classification&library=transformers.js) |
+ | [Text Generation](https://huggingface.co/tasks/text-generation#completion-generation-models) | `text-generation` | Producing new text by predicting the next word in a sequence. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.TextGenerationPipeline)<br>[(models)](https://huggingface.co/models?pipeline_tag=text-generation&library=transformers.js) |
+ | [Text-to-text Generation](https://huggingface.co/tasks/text-generation#text-to-text-generation-models) | `text2text-generation` | Converting one text sequence into another text sequence. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.Text2TextGenerationPipeline)<br>[(models)](https://huggingface.co/models?pipeline_tag=text2text-generation&library=transformers.js) |
+ | [Token Classification](https://huggingface.co/tasks/token-classification) | `token-classification` or `ner` | Assigning a label to each token in a text. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.TokenClassificationPipeline)<br>[(models)](https://huggingface.co/models?pipeline_tag=token-classification&library=transformers.js) |
+ | [Translation](https://huggingface.co/tasks/translation) | `translation` | Converting text from one language to another. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.TranslationPipeline)<br>[(models)](https://huggingface.co/models?pipeline_tag=translation&library=transformers.js) |
+ | [Zero-Shot Classification](https://huggingface.co/tasks/zero-shot-classification) | `zero-shot-classification` | Classifying text into classes that are unseen during training. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.ZeroShotClassificationPipeline)<br>[(models)](https://huggingface.co/models?pipeline_tag=zero-shot-classification&library=transformers.js) |
+ | [Feature Extraction](https://huggingface.co/tasks/feature-extraction) | `feature-extraction` | Transforming raw data into numerical features that can be processed while preserving the information in the original dataset. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.FeatureExtractionPipeline)<br>[(models)](https://huggingface.co/models?pipeline_tag=feature-extraction&library=transformers.js) |
+ ```
+
+ ```text
+ | Task | ID | Description | Supported? |
+ |--------------------------|----|-------------|------------|
+ | [Background Removal](https://huggingface.co/tasks/image-segmentation#background-removal) | `background-removal` | Isolating the main subject of an image by removing or making the background transparent. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.BackgroundRemovalPipeline)<br>[(models)](https://huggingface.co/models?other=background-removal&library=transformers.js) |
+ | [Depth Estimation](https://huggingface.co/tasks/depth-estimation) | `depth-estimation` | Predicting the depth of objects present in an image. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.DepthEstimationPipeline)<br>[(models)](https://huggingface.co/models?pipeline_tag=depth-estimation&library=transformers.js) |
+ | [Image Classification](https://huggingface.co/tasks/image-classification) | `image-classification` | Assigning a label or class to an entire image. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.ImageClassificationPipeline)<br>[(models)](https://huggingface.co/models?pipeline_tag=image-classification&library=transformers.js) |
+ | [Image Segmentation](https://huggingface.co/tasks/image-segmentation) | `image-segmentation` | Dividing an image into segments where each pixel is mapped to an object. This task has multiple variants such as instance segmentation, panoptic segmentation and semantic segmentation. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.ImageSegmentationPipeline)<br>[(models)](https://huggingface.co/models?pipeline_tag=image-segmentation&library=transformers.js) |
+ | [Image-to-Image](https://huggingface.co/tasks/image-to-image) | `image-to-image` | Transforming a source image to match the characteristics of a target image or a target image domain. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.ImageToImagePipeline)<br>[(models)](https://huggingface.co/models?pipeline_tag=image-to-image&library=transformers.js) |
+ | [Mask Generation](https://huggingface.co/tasks/mask-generation) | `mask-generation` | Generating masks for the objects in an image. | ❌ |
+ | [Object Detection](https://huggingface.co/tasks/object-detection) | `object-detection` | Identifying objects of certain defined classes within an image. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.ObjectDetectionPipeline)<br>[(models)](https://huggingface.co/models?pipeline_tag=object-detection&library=transformers.js) |
+ | [Video Classification](https://huggingface.co/tasks/video-classification) | n/a | Assigning a label or class to an entire video. | ❌ |
+ | [Unconditional Image Generation](https://huggingface.co/tasks/unconditional-image-generation) | n/a | Generating images with no condition in any context (like a prompt text or another image). | ❌ |
+ | [Image Feature Extraction](https://huggingface.co/tasks/image-feature-extraction) | `image-feature-extraction` | Transforming raw data into numerical features that can be processed while preserving the information in the original image. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.ImageFeatureExtractionPipeline)<br>[(models)](https://huggingface.co/models?pipeline_tag=image-feature-extraction&library=transformers.js) |
+ ```
+
+ ```text
+ | Task | ID | Description | Supported? |
+ |--------------------------|----|-------------|------------|
+ | [Audio Classification](https://huggingface.co/tasks/audio-classification) | `audio-classification` | Assigning a label or class to a given audio clip. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.AudioClassificationPipeline)<br>[(models)](https://huggingface.co/models?pipeline_tag=audio-classification&library=transformers.js) |
+ | [Audio-to-Audio](https://huggingface.co/tasks/audio-to-audio) | n/a | Generating audio from an input audio source. | ❌ |
+ | [Automatic Speech Recognition](https://huggingface.co/tasks/automatic-speech-recognition) | `automatic-speech-recognition` | Transcribing a given audio clip into text. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.AutomaticSpeechRecognitionPipeline)<br>[(models)](https://huggingface.co/models?pipeline_tag=automatic-speech-recognition&library=transformers.js) |
+ | [Text-to-Speech](https://huggingface.co/tasks/text-to-speech) | `text-to-speech` or `text-to-audio` | Generating natural-sounding speech given text input. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.TextToAudioPipeline)<br>[(models)](https://huggingface.co/models?pipeline_tag=text-to-audio&library=transformers.js) |
+ ```
+
+ ```text
+ | Task | ID | Description | Supported? |
+ |--------------------------|----|-------------|------------|
+ | [Document Question Answering](https://huggingface.co/tasks/document-question-answering) | `document-question-answering` | Answering questions on document images. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.DocumentQuestionAnsweringPipeline)<br>[(models)](https://huggingface.co/models?pipeline_tag=document-question-answering&library=transformers.js) |
+ | [Image-to-Text](https://huggingface.co/tasks/image-to-text) | `image-to-text` | Generating text from a given image. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.ImageToTextPipeline)<br>[(models)](https://huggingface.co/models?pipeline_tag=image-to-text&library=transformers.js) |
+ | [Text-to-Image](https://huggingface.co/tasks/text-to-image) | `text-to-image` | Generating images from input text. | ❌ |
+ | [Visual Question Answering](https://huggingface.co/tasks/visual-question-answering) | `visual-question-answering` | Answering open-ended questions based on an image. | ❌ |
+ | [Zero-Shot Audio Classification](https://huggingface.co/learn/audio-course/chapter4/classification_models#zero-shot-audio-classification) | `zero-shot-audio-classification` | Classifying audio clips into classes that are unseen during training. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.ZeroShotAudioClassificationPipeline)<br>[(models)](https://huggingface.co/models?other=zero-shot-audio-classification&library=transformers.js) |
+ | [Zero-Shot Image Classification](https://huggingface.co/tasks/zero-shot-image-classification) | `zero-shot-image-classification` | Classifying images into classes that are unseen during training. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.ZeroShotImageClassificationPipeline)<br>[(models)](https://huggingface.co/models?pipeline_tag=zero-shot-image-classification&library=transformers.js) |
+ | [Zero-Shot Object Detection](https://huggingface.co/tasks/zero-shot-object-detection) | `zero-shot-object-detection` | Identifying objects of classes that are unseen during training. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.ZeroShotObjectDetectionPipeline)<br>[(models)](https://huggingface.co/models?other=zero-shot-object-detection&library=transformers.js) |
+ ```