@huggingface/transformers 3.0.0-alpha.1 → 3.0.0-alpha.11

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (40)
  1. package/README.md +6 -5
  2. package/dist/ort-wasm-simd-threaded.jsep.wasm +0 -0
  3. package/dist/transformers.cjs +1568 -1480
  4. package/dist/transformers.cjs.map +1 -1
  5. package/dist/transformers.js +2422 -2260
  6. package/dist/transformers.js.map +1 -1
  7. package/dist/transformers.min.cjs +36 -42
  8. package/dist/transformers.min.cjs.map +1 -1
  9. package/dist/transformers.min.js +24 -25
  10. package/dist/transformers.min.js.map +1 -1
  11. package/package.json +22 -11
  12. package/src/backends/onnx.js +92 -36
  13. package/src/env.js +6 -6
  14. package/src/generation/logits_process.js +39 -36
  15. package/src/generation/streamers.js +3 -3
  16. package/src/models.js +23 -10
  17. package/src/processors.js +79 -67
  18. package/src/utils/devices.js +15 -4
  19. package/src/utils/dtypes.js +1 -3
  20. package/src/utils/hub.js +17 -16
  21. package/types/backends/onnx.d.ts +6 -5
  22. package/types/backends/onnx.d.ts.map +1 -1
  23. package/types/env.d.ts +6 -2
  24. package/types/env.d.ts.map +1 -1
  25. package/types/generation/logits_process.d.ts.map +1 -1
  26. package/types/models.d.ts +8 -0
  27. package/types/models.d.ts.map +1 -1
  28. package/types/processors.d.ts +15 -1
  29. package/types/processors.d.ts.map +1 -1
  30. package/types/utils/devices.d.ts +11 -1
  31. package/types/utils/devices.d.ts.map +1 -1
  32. package/types/utils/dtypes.d.ts +0 -3
  33. package/types/utils/dtypes.d.ts.map +1 -1
  34. package/types/utils/hub.d.ts +1 -40
  35. package/types/utils/hub.d.ts.map +1 -1
  36. package/types/utils/tensor.d.ts +1 -1
  37. package/dist/transformers.min.mjs +0 -174
  38. package/dist/transformers.min.mjs.map +0 -1
  39. package/dist/transformers.mjs +0 -31265
  40. package/dist/transformers.mjs.map +0 -1
package/README.md CHANGED
@@ -33,9 +33,9 @@ State-of-the-art Machine Learning for the web. Run 🤗 Transformers directly in
 
 Transformers.js is designed to be functionally equivalent to Hugging Face's [transformers](https://github.com/huggingface/transformers) python library, meaning you can run the same pretrained models using a very similar API. These models support common tasks in different modalities, such as:
 - 📝 **Natural Language Processing**: text classification, named entity recognition, question answering, language modeling, summarization, translation, multiple choice, and text generation.
-- 🖼️ **Computer Vision**: image classification, object detection, and segmentation.
-- 🗣️ **Audio**: automatic speech recognition and audio classification.
-- 🐙 **Multimodal**: zero-shot image classification.
+- 🖼️ **Computer Vision**: image classification, object detection, segmentation, and depth estimation.
+- 🗣️ **Audio**: automatic speech recognition, audio classification, and text-to-speech.
+- 🐙 **Multimodal**: embeddings, zero-shot audio classification, zero-shot image classification, and zero-shot object detection.
 
 Transformers.js uses [ONNX Runtime](https://onnxruntime.ai/) to run models in the browser. The best part about it, is that you can easily [convert](#convert-your-models-to-onnx) your pretrained PyTorch, TensorFlow, or JAX models to ONNX using [🤗 Optimum](https://github.com/huggingface/optimum#onnx--onnx-runtime).
 
@@ -101,7 +101,7 @@ npm i @huggingface/transformers
 Alternatively, you can use it in vanilla JS, without any bundler, by using a CDN or static hosting. For example, using [ES Modules](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules), you can import the library with:
 ```html
 <script type="module">
-    import { pipeline } from 'https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.0.0-alpha.0';
+    import { pipeline } from 'https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.0.0-alpha.11';
 </script>
 ```
 
@@ -134,7 +134,7 @@ Check out the Transformers.js [template](https://huggingface.co/new-space?templa
 
 
 
-By default, Transformers.js uses [hosted pretrained models](https://huggingface.co/models?library=transformers.js) and [precompiled WASM binaries](https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.0.0-alpha.0/dist/), which should work out-of-the-box. You can customize this as follows:
+By default, Transformers.js uses [hosted pretrained models](https://huggingface.co/models?library=transformers.js) and [precompiled WASM binaries](https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.0.0-alpha.11/dist/), which should work out-of-the-box. You can customize this as follows:
 
 ### Settings
 
@@ -348,6 +348,7 @@ You can refine your search by selecting the task you're interested in (e.g., [te
 1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (from Facebook), released together with the paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
 1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (from ZhuiyiTechnology), released together with the paper [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/abs/2104.09864) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu.
 1. **[RT-DETR](https://huggingface.co/docs/transformers/model_doc/rt_detr)** (from Baidu), released together with the paper [DETRs Beat YOLOs on Real-time Object Detection](https://arxiv.org/abs/2304.08069) by Yian Zhao, Wenyu Lv, Shangliang Xu, Jinman Wei, Guanzhong Wang, Qingqing Dang, Yi Liu, Jie Chen.
+1. **Sapiens** (from Meta AI) released with the paper [Sapiens: Foundation for Human Vision Models](https://arxiv.org/pdf/2408.12569) by Rawal Khirodkar, Timur Bagautdinov, Julieta Martinez, Su Zhaoen, Austin James, Peter Selednik, Stuart Anderson, Shunsuke Saito.
 1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo.
 1. **[Segment Anything](https://huggingface.co/docs/transformers/model_doc/sam)** (from Meta AI) released with the paper [Segment Anything](https://arxiv.org/pdf/2304.02643v1.pdf) by Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alex Berg, Wan-Yen Lo, Piotr Dollar, Ross Girshick.
 1. **[SigLIP](https://huggingface.co/docs/transformers/main/model_doc/siglip)** (from Google AI) released with the paper [Sigmoid Loss for Language Image Pre-Training](https://arxiv.org/abs/2303.15343) by Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, Lucas Beyer.
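
For context, the bumped CDN import in the README diff above can be used from a plain HTML page roughly as follows. This is a minimal sketch: the `pipeline` export and the jsDelivr URL come from the diff itself, while the task and input string are illustrative assumptions (Transformers.js resolves a default model per task from the Hugging Face Hub on first use):

```html
<script type="module">
    // Import from the alpha.11 CDN build referenced in this diff
    import { pipeline } from 'https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.0.0-alpha.11';

    // Illustrative usage: create a text-classification pipeline and run it once.
    // The default model for the task is downloaded and cached on first call.
    const classifier = await pipeline('sentiment-analysis');
    const result = await classifier('Transformers.js v3 runs entirely in the browser.');
    console.log(result);
</script>
```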