@huggingface/transformers 3.2.0 → 3.2.2
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +3 -2
- package/dist/transformers.cjs +61 -3
- package/dist/transformers.cjs.map +1 -1
- package/dist/transformers.js +67 -4
- package/dist/transformers.js.map +1 -1
- package/dist/transformers.min.cjs +1 -1
- package/dist/transformers.min.cjs.map +1 -1
- package/dist/transformers.min.js +1 -1
- package/dist/transformers.min.js.map +1 -1
- package/dist/transformers.min.mjs +1 -1
- package/dist/transformers.min.mjs.map +1 -1
- package/dist/transformers.mjs +67 -4
- package/dist/transformers.mjs.map +1 -1
- package/package.json +1 -1
- package/src/env.js +1 -1
- package/src/models.js +47 -0
- package/src/ops/registry.js +3 -2
- package/src/pipelines.js +1 -1
- package/types/models.d.ts +31 -0
- package/types/models.d.ts.map +1 -1
- package/types/ops/registry.d.ts.map +1 -1
package/README.md
CHANGED
@@ -47,7 +47,7 @@ npm i @huggingface/transformers
 Alternatively, you can use it in vanilla JS, without any bundler, by using a CDN or static hosting. For example, using [ES Modules](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules), you can import the library with:
 ```html
 <script type="module">
-    import { pipeline } from 'https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.2.0';
+    import { pipeline } from 'https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.2.2';
 </script>
 ```

@@ -155,7 +155,7 @@ Check out the Transformers.js [template](https://huggingface.co/new-space?templa
-By default, Transformers.js uses [hosted pretrained models](https://huggingface.co/models?library=transformers.js) and [precompiled WASM binaries](https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.2.0/dist/), which should work out-of-the-box. You can customize this as follows:
+By default, Transformers.js uses [hosted pretrained models](https://huggingface.co/models?library=transformers.js) and [precompiled WASM binaries](https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.2.2/dist/), which should work out-of-the-box. You can customize this as follows:
 
 ### Settings

@@ -366,6 +366,7 @@ You can refine your search by selecting the task you're interested in (e.g., [te
 1. **MobileNetV4** (from Google Inc.) released with the paper [MobileNetV4 - Universal Models for the Mobile Ecosystem](https://arxiv.org/abs/2404.10518) by Danfeng Qin, Chas Leichner, Manolis Delakis, Marco Fornoni, Shixin Luo, Fan Yang, Weijun Wang, Colby Banbury, Chengxi Ye, Berkin Akin, Vaibhav Aggarwal, Tenghui Zhu, Daniele Moro, Andrew Howard.
 1. **[MobileViT](https://huggingface.co/docs/transformers/model_doc/mobilevit)** (from Apple) released with the paper [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari.
 1. **[MobileViTV2](https://huggingface.co/docs/transformers/model_doc/mobilevitv2)** (from Apple) released with the paper [Separable Self-attention for Mobile Vision Transformers](https://arxiv.org/abs/2206.02680) by Sachin Mehta and Mohammad Rastegari.
+1. **[ModernBERT](https://huggingface.co/docs/transformers/model_doc/modernbert)** (from Answer.AI) released with the paper [Smarter, Better, Faster, Longer: A Modern Bidirectional Encoder for Fast, Memory Efficient, and Long Context Finetuning and Inference](https://arxiv.org/abs/2412.13663) by Benjamin Warner, Antoine Chaffin, Benjamin Clavié, Orion Weller, Oskar Hallström, Said Taghadouini, Alexis Gallagher, Raja Biswas, Faisal Ladhak, Tom Aarsen, Nathan Cooper, Griffin Adams, Jeremy Howard, Iacopo Poli.
 1. **Moondream1** released in the repository [moondream](https://github.com/vikhyat/moondream) by vikhyat.
 1. **[Moonshine](https://huggingface.co/docs/transformers/model_doc/moonshine)** (from Useful Sensors) released with the paper [Moonshine: Speech Recognition for Live Transcription and Voice Commands](https://arxiv.org/abs/2410.15608) by Nat Jeffries, Evan King, Manjunath Kudlur, Guy Nicholson, James Wang, Pete Warden.
 1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu.
package/dist/transformers.cjs
CHANGED
@@ -5932,7 +5932,7 @@ __webpack_require__.r(__webpack_exports__);
-const VERSION = '3.2.0';
+const VERSION = '3.2.2';
 
 // Check if various APIs are available (depends on environment)
 const IS_BROWSER_ENV = typeof window !== "undefined" && typeof window.document !== "undefined";
@@ -8156,6 +8156,11 @@ __webpack_require__.r(__webpack_exports__);
 /* harmony export */   MobileViTV2Model: () => (/* binding */ MobileViTV2Model),
 /* harmony export */   MobileViTV2PreTrainedModel: () => (/* binding */ MobileViTV2PreTrainedModel),
 /* harmony export */   ModelOutput: () => (/* binding */ ModelOutput),
+/* harmony export */   ModernBertForMaskedLM: () => (/* binding */ ModernBertForMaskedLM),
+/* harmony export */   ModernBertForSequenceClassification: () => (/* binding */ ModernBertForSequenceClassification),
+/* harmony export */   ModernBertForTokenClassification: () => (/* binding */ ModernBertForTokenClassification),
+/* harmony export */   ModernBertModel: () => (/* binding */ ModernBertModel),
+/* harmony export */   ModernBertPreTrainedModel: () => (/* binding */ ModernBertPreTrainedModel),
 /* harmony export */   Moondream1ForConditionalGeneration: () => (/* binding */ Moondream1ForConditionalGeneration),
 /* harmony export */   MoonshineForConditionalGeneration: () => (/* binding */ MoonshineForConditionalGeneration),
 /* harmony export */   MoonshineModel: () => (/* binding */ MoonshineModel),
@@ -10267,6 +10272,49 @@ class BertForQuestionAnswering extends BertPreTrainedModel {
 }
 //////////////////////////////////////////////////
 
+//////////////////////////////////////////////////
+// ModernBert models
+class ModernBertPreTrainedModel extends PreTrainedModel { }
+class ModernBertModel extends ModernBertPreTrainedModel { }
+
+class ModernBertForMaskedLM extends ModernBertPreTrainedModel {
+    /**
+     * Calls the model on new inputs.
+     *
+     * @param {Object} model_inputs The inputs to the model.
+     * @returns {Promise<MaskedLMOutput>} An object containing the model's output logits for masked language modeling.
+     */
+    async _call(model_inputs) {
+        return new MaskedLMOutput(await super._call(model_inputs));
+    }
+}
+
+class ModernBertForSequenceClassification extends ModernBertPreTrainedModel {
+    /**
+     * Calls the model on new inputs.
+     *
+     * @param {Object} model_inputs The inputs to the model.
+     * @returns {Promise<SequenceClassifierOutput>} An object containing the model's output logits for sequence classification.
+     */
+    async _call(model_inputs) {
+        return new SequenceClassifierOutput(await super._call(model_inputs));
+    }
+}
+
+class ModernBertForTokenClassification extends ModernBertPreTrainedModel {
+    /**
+     * Calls the model on new inputs.
+     *
+     * @param {Object} model_inputs The inputs to the model.
+     * @returns {Promise<TokenClassifierOutput>} An object containing the model's output logits for token classification.
+     */
+    async _call(model_inputs) {
+        return new TokenClassifierOutput(await super._call(model_inputs));
+    }
+}
+//////////////////////////////////////////////////
+
+
 //////////////////////////////////////////////////
 // NomicBert models
 class NomicBertPreTrainedModel extends PreTrainedModel { }
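The head classes added above all follow the same wrapper pattern: each task-specific subclass overrides `_call()` to wrap the raw forward-pass result in a typed output object. A minimal sketch of that pattern, using stand-in classes (the `Fake*` names are illustrative, not the library's real implementations):

```javascript
// Base output wrapper, mirroring the shape of the library's ModelOutput.
class ModelOutput {
    constructor(output) { Object.assign(this, output); }
}
class MaskedLMOutput extends ModelOutput { }

class FakePreTrainedModel {
    // Stand-in for the real forward pass, which runs an ONNX session.
    async _call(model_inputs) {
        return { logits: [[0.1, 0.9]], inputs: model_inputs };
    }
}

class FakeForMaskedLM extends FakePreTrainedModel {
    // The only override: wrap the raw result in a task-specific output type.
    async _call(model_inputs) {
        return new MaskedLMOutput(await super._call(model_inputs));
    }
}

(async () => {
    const out = await new FakeForMaskedLM()._call({ input_ids: [101, 103, 102] });
    console.log(out instanceof MaskedLMOutput); // true
})();
```

Keeping the forward pass in the base class means each new architecture (here, ModernBERT) only has to declare which output type each head produces.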
@@ -15237,6 +15285,7 @@ class PretrainedMixin {
 
 const MODEL_MAPPING_NAMES_ENCODER_ONLY = new Map([
     ['bert', ['BertModel', BertModel]],
+    ['modernbert', ['ModernBertModel', ModernBertModel]],
     ['nomic_bert', ['NomicBertModel', NomicBertModel]],
     ['roformer', ['RoFormerModel', RoFormerModel]],
     ['electra', ['ElectraModel', ElectraModel]],
@@ -15375,6 +15424,7 @@ const MODEL_FOR_TEXT_TO_WAVEFORM_MAPPING_NAMES = new Map([
 
 const MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING_NAMES = new Map([
     ['bert', ['BertForSequenceClassification', BertForSequenceClassification]],
+    ['modernbert', ['ModernBertForSequenceClassification', ModernBertForSequenceClassification]],
     ['roformer', ['RoFormerForSequenceClassification', RoFormerForSequenceClassification]],
     ['electra', ['ElectraForSequenceClassification', ElectraForSequenceClassification]],
     ['esm', ['EsmForSequenceClassification', EsmForSequenceClassification]],
@@ -15396,6 +15446,7 @@ const MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING_NAMES = new Map([
 
 const MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING_NAMES = new Map([
     ['bert', ['BertForTokenClassification', BertForTokenClassification]],
+    ['modernbert', ['ModernBertForTokenClassification', ModernBertForTokenClassification]],
     ['roformer', ['RoFormerForTokenClassification', RoFormerForTokenClassification]],
     ['electra', ['ElectraForTokenClassification', ElectraForTokenClassification]],
     ['esm', ['EsmForTokenClassification', EsmForTokenClassification]],
@@ -15464,6 +15515,7 @@ const MODEL_FOR_MULTIMODALITY_MAPPING_NAMES = new Map([
 
 const MODEL_FOR_MASKED_LM_MAPPING_NAMES = new Map([
     ['bert', ['BertForMaskedLM', BertForMaskedLM]],
+    ['modernbert', ['ModernBertForMaskedLM', ModernBertForMaskedLM]],
    ['roformer', ['RoFormerForMaskedLM', RoFormerForMaskedLM]],
    ['electra', ['ElectraForMaskedLM', ElectraForMaskedLM]],
    ['esm', ['EsmForMaskedLM', EsmForMaskedLM]],
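Each of these mapping hunks is a one-line registration: the per-task `Map` keys on a config's `model_type` string and yields the class to instantiate. An illustrative sketch of that lookup (not the library's internals; the `Fake*` classes and `resolveForMaskedLM` helper are stand-ins):

```javascript
// Stand-in model classes; in the real library these are the bundled heads.
const FakeBertForMaskedLM = class { };
const FakeModernBertForMaskedLM = class { };

// Task registry mapping model_type -> [exported name, class], as in the diff.
const MASKED_LM_MAPPING = new Map([
    ['bert', ['BertForMaskedLM', FakeBertForMaskedLM]],
    ['modernbert', ['ModernBertForMaskedLM', FakeModernBertForMaskedLM]],
]);

// Resolve a class for a config, in the style of the AutoModel helpers.
function resolveForMaskedLM(config) {
    const entry = MASKED_LM_MAPPING.get(config.model_type);
    if (!entry) throw new Error(`Unsupported model type: ${config.model_type}`);
    const [name, cls] = entry;
    return { name, cls };
}

console.log(resolveForMaskedLM({ model_type: 'modernbert' }).name); // 'ModernBertForMaskedLM'
```

This is why ModernBERT support lands as one class definition plus one entry per supported task: anything not registered in a task's map simply cannot be auto-loaded for that task.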
@@ -20732,7 +20784,8 @@ const wrap = async (session_bytes, session_options, names) => {
         new Uint8Array(session_bytes), session_options,
     );
     return /** @type {any} */(async (/** @type {Record<string, Tensor>} */ inputs) => {
-        const …
+        const proxied = (0,_backends_onnx_js__WEBPACK_IMPORTED_MODULE_0__.isONNXProxy)();
+        const ortFeed = Object.fromEntries(Object.entries(inputs).map(([k, v]) => [k, (proxied ? v.clone() : v).ort_tensor]));
         const outputs = await session.run(ortFeed);
 
         if (Array.isArray(names)) {
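The hunk above makes tensor cloning conditional: inputs are copied only when the ONNX backend runs behind a worker proxy, so the caller's buffers are not consumed, while the direct path avoids a needless copy. A sketch of that feed construction using mock tensors in place of real ONNX Runtime tensors (all names here are illustrative):

```javascript
class MockTensor {
    constructor(data) {
        this.ort_tensor = { data };
    }
    clone() {
        // Copy the buffer so the original survives if the clone's data is
        // transferred to a worker (transfer would detach the source buffer).
        return new MockTensor([...this.ort_tensor.data]);
    }
}

// Build the session feed: clone each tensor only when proxied.
function buildFeed(inputs, proxied) {
    return Object.fromEntries(
        Object.entries(inputs).map(([k, v]) => [k, (proxied ? v.clone() : v).ort_tensor]),
    );
}

const inputs = { input_ids: new MockTensor([101, 102]) };
const direct = buildFeed(inputs, false);   // reuses the caller's tensor
const viaProxy = buildFeed(inputs, true);  // feeds an independent copy
console.log(direct.input_ids === inputs.input_ids.ort_tensor);   // true
console.log(viaProxy.input_ids === inputs.input_ids.ort_tensor); // false
```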
@@ -21518,7 +21571,7 @@ class FillMaskPipeline extends (/** @type {new (options: TextPipelineConstructor
                 return {
                     score: values[i],
                     token: Number(x),
-                    token_str: this.tokenizer.…
+                    token_str: this.tokenizer.decode([x]),
                     sequence: this.tokenizer.decode(sequence, { skip_special_tokens: true }),
                 }
             }));
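The fill-mask change routes the predicted token id through `decode()`, so `token_str` is detokenized text rather than a raw vocabulary entry. A sketch of why that matters, using a hypothetical toy tokenizer whose vocabulary stores tokens with a leading 'Ġ' marker (as byte-level BPE vocabularies do) that `decode()` turns back into an ordinary space:

```javascript
class ToyTokenizer {
    constructor() {
        // Hypothetical byte-level-style vocabulary; 'Ġ' marks a leading space.
        this.vocab = ['[CLS]', '[SEP]', 'Ġcapital', 'Ġof', 'ĠFrance'];
    }
    decode(ids, { skip_special_tokens = false } = {}) {
        return ids
            .map((id) => this.vocab[id])
            .filter((t) => !skip_special_tokens || !t.startsWith('['))
            .map((t) => t.replace('Ġ', ' '))
            .join('');
    }
}

const tokenizer = new ToyTokenizer();
const predictedId = 2;
console.log(tokenizer.vocab[predictedId]);     // 'Ġcapital' (raw vocab entry)
console.log(tokenizer.decode([predictedId]));  // ' capital' (detokenized text)
```

Decoding a single-element id array reuses the tokenizer's normal detokenization rules, so markers and merge artifacts never leak into pipeline output.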
@@ -34942,6 +34995,11 @@ __webpack_require__.r(__webpack_exports__);
 /* harmony export */   MobileViTV2Model: () => (/* reexport safe */ _models_js__WEBPACK_IMPORTED_MODULE_2__.MobileViTV2Model),
 /* harmony export */   MobileViTV2PreTrainedModel: () => (/* reexport safe */ _models_js__WEBPACK_IMPORTED_MODULE_2__.MobileViTV2PreTrainedModel),
 /* harmony export */   ModelOutput: () => (/* reexport safe */ _models_js__WEBPACK_IMPORTED_MODULE_2__.ModelOutput),
+/* harmony export */   ModernBertForMaskedLM: () => (/* reexport safe */ _models_js__WEBPACK_IMPORTED_MODULE_2__.ModernBertForMaskedLM),
+/* harmony export */   ModernBertForSequenceClassification: () => (/* reexport safe */ _models_js__WEBPACK_IMPORTED_MODULE_2__.ModernBertForSequenceClassification),
+/* harmony export */   ModernBertForTokenClassification: () => (/* reexport safe */ _models_js__WEBPACK_IMPORTED_MODULE_2__.ModernBertForTokenClassification),
+/* harmony export */   ModernBertModel: () => (/* reexport safe */ _models_js__WEBPACK_IMPORTED_MODULE_2__.ModernBertModel),
+/* harmony export */   ModernBertPreTrainedModel: () => (/* reexport safe */ _models_js__WEBPACK_IMPORTED_MODULE_2__.ModernBertPreTrainedModel),
 /* harmony export */   Moondream1ForConditionalGeneration: () => (/* reexport safe */ _models_js__WEBPACK_IMPORTED_MODULE_2__.Moondream1ForConditionalGeneration),
 /* harmony export */   MoonshineFeatureExtractor: () => (/* reexport safe */ _models_feature_extractors_js__WEBPACK_IMPORTED_MODULE_10__.MoonshineFeatureExtractor),
 /* harmony export */   MoonshineForConditionalGeneration: () => (/* reexport safe */ _models_js__WEBPACK_IMPORTED_MODULE_2__.MoonshineForConditionalGeneration),