@huggingface/transformers 3.0.1 → 3.0.2

This diff shows the content changes between publicly released versions of the package, as published to one of the supported registries. It is provided for informational purposes only.
package/README.md CHANGED
@@ -47,7 +47,7 @@ npm i @huggingface/transformers
  Alternatively, you can use it in vanilla JS, without any bundler, by using a CDN or static hosting. For example, using [ES Modules](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules), you can import the library with:
  ```html
  <script type="module">
- import { pipeline } from 'https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.0.1';
+ import { pipeline } from 'https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.0.2';
  </script>
  ```
 
@@ -155,7 +155,7 @@ Check out the Transformers.js [template](https://huggingface.co/new-space?templa
 
 
 
- By default, Transformers.js uses [hosted pretrained models](https://huggingface.co/models?library=transformers.js) and [precompiled WASM binaries](https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.0.1/dist/), which should work out-of-the-box. You can customize this as follows:
+ By default, Transformers.js uses [hosted pretrained models](https://huggingface.co/models?library=transformers.js) and [precompiled WASM binaries](https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.0.2/dist/), which should work out-of-the-box. You can customize this as follows:
 
  ### Settings
 
@@ -352,6 +352,7 @@ You can refine your search by selecting the task you're interested in (e.g., [te
  1. **[MMS](https://huggingface.co/docs/transformers/model_doc/mms)** (from Facebook) released with the paper [Scaling Speech Technology to 1,000+ Languages](https://arxiv.org/abs/2305.13516) by Vineel Pratap, Andros Tjandra, Bowen Shi, Paden Tomasello, Arun Babu, Sayani Kundu, Ali Elkahky, Zhaoheng Ni, Apoorv Vyas, Maryam Fazel-Zarandi, Alexei Baevski, Yossi Adi, Xiaohui Zhang, Wei-Ning Hsu, Alexis Conneau, Michael Auli.
  1. **[MobileBERT](https://huggingface.co/docs/transformers/model_doc/mobilebert)** (from CMU/Google Brain) released with the paper [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou.
  1. **MobileCLIP** (from Apple) released with the paper [MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training](https://arxiv.org/abs/2311.17049) by Pavan Kumar Anasosalu Vasu, Hadi Pouransari, Fartash Faghri, Raviteja Vemulapalli, Oncel Tuzel.
+ 1. **MobileLLM** (from Meta) released with the paper [MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases](https://arxiv.org/abs/2402.14905) by Zechun Liu, Changsheng Zhao, Forrest Iandola, Chen Lai, Yuandong Tian, Igor Fedorov, Yunyang Xiong, Ernie Chang, Yangyang Shi, Raghuraman Krishnamoorthi, Liangzhen Lai, Vikas Chandra.
  1. **[MobileNetV1](https://huggingface.co/docs/transformers/model_doc/mobilenet_v1)** (from Google Inc.) released with the paper [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/abs/1704.04861) by Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam.
  1. **[MobileNetV2](https://huggingface.co/docs/transformers/model_doc/mobilenet_v2)** (from Google Inc.) released with the paper [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen.
  1. **MobileNetV3** (from Google Inc.) released with the paper [Searching for MobileNetV3](https://arxiv.org/abs/1905.02244) by Andrew Howard, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang, Yukun Zhu, Ruoming Pang, Vijay Vasudevan, Quoc V. Le, Hartwig Adam.
@@ -364,6 +365,7 @@ You can refine your search by selecting the task you're interested in (e.g., [te
  1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel.
  1. **[NLLB](https://huggingface.co/docs/transformers/model_doc/nllb)** (from Meta) released with the paper [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by the NLLB team.
  1. **[Nougat](https://huggingface.co/docs/transformers/model_doc/nougat)** (from Meta AI) released with the paper [Nougat: Neural Optical Understanding for Academic Documents](https://arxiv.org/abs/2308.13418) by Lukas Blecher, Guillem Cucurull, Thomas Scialom, Robert Stojnic.
+ 1. **[OLMo](https://huggingface.co/docs/transformers/master/model_doc/olmo)** (from AI2) released with the paper [OLMo: Accelerating the Science of Language Models](https://arxiv.org/abs/2402.00838) by Dirk Groeneveld, Iz Beltagy, Pete Walsh, Akshita Bhagia, Rodney Kinney, Oyvind Tafjord, Ananya Harsh Jha, Hamish Ivison, Ian Magnusson, Yizhong Wang, Shane Arora, David Atkinson, Russell Authur, Khyathi Raghavi Chandu, Arman Cohan, Jennifer Dumas, Yanai Elazar, Yuling Gu, Jack Hessel, Tushar Khot, William Merrill, Jacob Morrison, Niklas Muennighoff, Aakanksha Naik, Crystal Nam, Matthew E. Peters, Valentina Pyatkin, Abhilasha Ravichander, Dustin Schwenk, Saurabh Shah, Will Smith, Emma Strubell, Nishant Subramani, Mitchell Wortsman, Pradeep Dasigi, Nathan Lambert, Kyle Richardson, Luke Zettlemoyer, Jesse Dodge, Kyle Lo, Luca Soldaini, Noah A. Smith, Hannaneh Hajishirzi.
  1. **OpenELM** (from Apple) released with the paper [OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework](https://arxiv.org/abs/2404.14619) by Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari.
  1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al.
  1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby.
@@ -4131,6 +4131,8 @@ function getNormalizedConfig(config) {
  mapping['hidden_size'] = 'hidden_size';
  break;
  case 'llama':
+ case 'olmo':
+ case 'mobilellm':
  case 'granite':
  case 'cohere':
  case 'mistral':
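The hunk above registers the two new model types via switch fall-through: `'olmo'` and `'mobilellm'` are Llama-style decoder architectures, so they reuse the `'llama'` config normalization. A minimal sketch of the pattern follows; the function name and any mapping keys beyond `hidden_size` (the only one visible in this hunk) are illustrative, not copied from the library.

```javascript
// Sketch of the fall-through pattern from getNormalizedConfig: the new
// 'olmo' and 'mobilellm' cases (3.0.2) share the 'llama' key mapping.
function getNormalizedConfigSketch(modelType) {
  const mapping = {};
  switch (modelType) {
    case 'llama':
    case 'olmo':       // added in 3.0.2
    case 'mobilellm':  // added in 3.0.2
    case 'granite':
    case 'cohere':
    case 'mistral':
      // All of these architectures expose the same config field names,
      // so a single mapping serves every case above.
      mapping['hidden_size'] = 'hidden_size';
      break;
    default:
      break;
  }
  return mapping;
}
```

The design keeps per-architecture code to a single `case` label whenever a new model matches an existing family's config layout.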
@@ -4460,7 +4462,7 @@ __webpack_require__.r(__webpack_exports__);
 
 
 
- const VERSION = '3.0.1';
+ const VERSION = '3.0.2';
 
 
  // Check if various APIs are available (depends on environment)
  const IS_BROWSER_ENV = typeof self !== 'undefined';
@@ -4507,9 +4509,20 @@ const apis = Object.freeze({
  });
 
  const RUNNING_LOCALLY = IS_FS_AVAILABLE && IS_PATH_AVAILABLE;
- const dirname__ = RUNNING_LOCALLY
- ? path__WEBPACK_IMPORTED_MODULE_1__.dirname(path__WEBPACK_IMPORTED_MODULE_1__.dirname(url__WEBPACK_IMPORTED_MODULE_2__.fileURLToPath("file:///home/runner/work/transformers.js/transformers.js/src/env.js")))
- : './';
+
+ let dirname__ = './';
+ if (RUNNING_LOCALLY) {
+ // NOTE: We wrap `import.meta` in a call to `Object` to prevent Webpack from trying to bundle it in CommonJS.
+ // Although we get the warning: "Accessing import.meta directly is unsupported (only property access or destructuring is supported)",
+ // it is safe to ignore since the bundled value (`{}`) isn't used for CommonJS environments (we use __dirname instead).
+ const _import_meta_url = Object(({})).url;
+
+ if (_import_meta_url) {
+ dirname__ = path__WEBPACK_IMPORTED_MODULE_1__.dirname(path__WEBPACK_IMPORTED_MODULE_1__.dirname(url__WEBPACK_IMPORTED_MODULE_2__.fileURLToPath(_import_meta_url))) // ESM
+ } else if (typeof __dirname !== 'undefined') {
+ dirname__ = path__WEBPACK_IMPORTED_MODULE_1__.dirname(__dirname) // CommonJS
+ }
+ }
 
  // Only used for environments with access to file system
  const DEFAULT_CACHE_DIR = RUNNING_LOCALLY
@@ -6616,6 +6629,9 @@ __webpack_require__.r(__webpack_exports__);
  /* harmony export */ MobileBertForSequenceClassification: () => (/* binding */ MobileBertForSequenceClassification),
  /* harmony export */ MobileBertModel: () => (/* binding */ MobileBertModel),
  /* harmony export */ MobileBertPreTrainedModel: () => (/* binding */ MobileBertPreTrainedModel),
+ /* harmony export */ MobileLLMForCausalLM: () => (/* binding */ MobileLLMForCausalLM),
+ /* harmony export */ MobileLLMModel: () => (/* binding */ MobileLLMModel),
+ /* harmony export */ MobileLLMPreTrainedModel: () => (/* binding */ MobileLLMPreTrainedModel),
  /* harmony export */ MobileNetV1ForImageClassification: () => (/* binding */ MobileNetV1ForImageClassification),
  /* harmony export */ MobileNetV1Model: () => (/* binding */ MobileNetV1Model),
  /* harmony export */ MobileNetV1PreTrainedModel: () => (/* binding */ MobileNetV1PreTrainedModel),
@@ -6648,6 +6664,9 @@ __webpack_require__.r(__webpack_exports__);
  /* harmony export */ OPTForCausalLM: () => (/* binding */ OPTForCausalLM),
  /* harmony export */ OPTModel: () => (/* binding */ OPTModel),
  /* harmony export */ OPTPreTrainedModel: () => (/* binding */ OPTPreTrainedModel),
+ /* harmony export */ OlmoForCausalLM: () => (/* binding */ OlmoForCausalLM),
+ /* harmony export */ OlmoModel: () => (/* binding */ OlmoModel),
+ /* harmony export */ OlmoPreTrainedModel: () => (/* binding */ OlmoPreTrainedModel),
  /* harmony export */ OpenELMForCausalLM: () => (/* binding */ OpenELMForCausalLM),
  /* harmony export */ OpenELMModel: () => (/* binding */ OpenELMModel),
  /* harmony export */ OpenELMPreTrainedModel: () => (/* binding */ OpenELMPreTrainedModel),
@@ -10580,6 +10599,22 @@ class LlamaForCausalLM extends LlamaPreTrainedModel { }
  //////////////////////////////////////////////////
 
 
+ //////////////////////////////////////////////////
+ // MobileLLM models
+ class MobileLLMPreTrainedModel extends PreTrainedModel { }
+ class MobileLLMModel extends MobileLLMPreTrainedModel { }
+ class MobileLLMForCausalLM extends MobileLLMPreTrainedModel { }
+ //////////////////////////////////////////////////
+
+
+ //////////////////////////////////////////////////
+ // OLMo models
+ class OlmoPreTrainedModel extends PreTrainedModel { }
+ class OlmoModel extends OlmoPreTrainedModel { }
+ class OlmoForCausalLM extends OlmoPreTrainedModel { }
+ //////////////////////////////////////////////////
+
+
  //////////////////////////////////////////////////
  // Granite models
  class GranitePreTrainedModel extends PreTrainedModel { }
@@ -12895,6 +12930,8 @@ const MODEL_MAPPING_NAMES_DECODER_ONLY = new Map([
  ['gpt_neox', ['GPTNeoXModel', GPTNeoXModel]],
  ['codegen', ['CodeGenModel', CodeGenModel]],
  ['llama', ['LlamaModel', LlamaModel]],
+ ['olmo', ['OlmoModel', OlmoModel]],
+ ['mobilellm', ['MobileLLMModel', MobileLLMModel]],
  ['granite', ['GraniteModel', GraniteModel]],
  ['cohere', ['CohereModel', CohereModel]],
  ['gemma', ['GemmaModel', GemmaModel]],
@@ -12984,6 +13021,8 @@ const MODEL_FOR_CAUSAL_LM_MAPPING_NAMES = new Map([
  ['gpt_neox', ['GPTNeoXForCausalLM', GPTNeoXForCausalLM]],
  ['codegen', ['CodeGenForCausalLM', CodeGenForCausalLM]],
  ['llama', ['LlamaForCausalLM', LlamaForCausalLM]],
+ ['olmo', ['OlmoForCausalLM', OlmoForCausalLM]],
+ ['mobilellm', ['MobileLLMForCausalLM', MobileLLMForCausalLM]],
  ['granite', ['GraniteForCausalLM', GraniteForCausalLM]],
  ['cohere', ['CohereForCausalLM', CohereForCausalLM]],
  ['gemma', ['GemmaForCausalLM', GemmaForCausalLM]],
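These mapping hunks show the registry pattern the new entries extend: each model `type` string maps to a `[className, class]` pair, so a config's `model_type` can be resolved to the right class at runtime. A standalone sketch, with stub classes and a hypothetical `resolveCausalLM` helper standing in for the library's internals:

```javascript
// Sketch of the Map-based model registry: type string -> [name, class].
// Stub classes stand in for the real exported model classes.
class OlmoForCausalLM { }
class MobileLLMForCausalLM { }

const CAUSAL_LM_REGISTRY = new Map([
  ['olmo', ['OlmoForCausalLM', OlmoForCausalLM]],            // new in 3.0.2
  ['mobilellm', ['MobileLLMForCausalLM', MobileLLMForCausalLM]], // new in 3.0.2
]);

// Look up a model type and return its class name and constructor.
function resolveCausalLM(modelType) {
  const entry = CAUSAL_LM_REGISTRY.get(modelType);
  if (!entry) throw new Error(`Unsupported model type: ${modelType}`);
  const [name, cls] = entry;
  return { name, cls };
}
```

Storing the class name alongside the class keeps the original identifier available even after minification renames the constructor.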
@@ -30448,6 +30487,9 @@ __webpack_require__.r(__webpack_exports__);
  /* harmony export */ MobileBertModel: () => (/* reexport safe */ _models_js__WEBPACK_IMPORTED_MODULE_2__.MobileBertModel),
  /* harmony export */ MobileBertPreTrainedModel: () => (/* reexport safe */ _models_js__WEBPACK_IMPORTED_MODULE_2__.MobileBertPreTrainedModel),
  /* harmony export */ MobileBertTokenizer: () => (/* reexport safe */ _tokenizers_js__WEBPACK_IMPORTED_MODULE_3__.MobileBertTokenizer),
+ /* harmony export */ MobileLLMForCausalLM: () => (/* reexport safe */ _models_js__WEBPACK_IMPORTED_MODULE_2__.MobileLLMForCausalLM),
+ /* harmony export */ MobileLLMModel: () => (/* reexport safe */ _models_js__WEBPACK_IMPORTED_MODULE_2__.MobileLLMModel),
+ /* harmony export */ MobileLLMPreTrainedModel: () => (/* reexport safe */ _models_js__WEBPACK_IMPORTED_MODULE_2__.MobileLLMPreTrainedModel),
  /* harmony export */ MobileNetV1FeatureExtractor: () => (/* reexport safe */ _processors_js__WEBPACK_IMPORTED_MODULE_4__.MobileNetV1FeatureExtractor),
  /* harmony export */ MobileNetV1ForImageClassification: () => (/* reexport safe */ _models_js__WEBPACK_IMPORTED_MODULE_2__.MobileNetV1ForImageClassification),
  /* harmony export */ MobileNetV1Model: () => (/* reexport safe */ _models_js__WEBPACK_IMPORTED_MODULE_2__.MobileNetV1Model),
@@ -30490,6 +30532,9 @@ __webpack_require__.r(__webpack_exports__);
  /* harmony export */ OPTModel: () => (/* reexport safe */ _models_js__WEBPACK_IMPORTED_MODULE_2__.OPTModel),
  /* harmony export */ OPTPreTrainedModel: () => (/* reexport safe */ _models_js__WEBPACK_IMPORTED_MODULE_2__.OPTPreTrainedModel),
  /* harmony export */ ObjectDetectionPipeline: () => (/* reexport safe */ _pipelines_js__WEBPACK_IMPORTED_MODULE_1__.ObjectDetectionPipeline),
+ /* harmony export */ OlmoForCausalLM: () => (/* reexport safe */ _models_js__WEBPACK_IMPORTED_MODULE_2__.OlmoForCausalLM),
+ /* harmony export */ OlmoModel: () => (/* reexport safe */ _models_js__WEBPACK_IMPORTED_MODULE_2__.OlmoModel),
+ /* harmony export */ OlmoPreTrainedModel: () => (/* reexport safe */ _models_js__WEBPACK_IMPORTED_MODULE_2__.OlmoPreTrainedModel),
  /* harmony export */ OpenELMForCausalLM: () => (/* reexport safe */ _models_js__WEBPACK_IMPORTED_MODULE_2__.OpenELMForCausalLM),
  /* harmony export */ OpenELMModel: () => (/* reexport safe */ _models_js__WEBPACK_IMPORTED_MODULE_2__.OpenELMModel),
  /* harmony export */ OpenELMPreTrainedModel: () => (/* reexport safe */ _models_js__WEBPACK_IMPORTED_MODULE_2__.OpenELMPreTrainedModel),