@huggingface/transformers 3.0.0-alpha.13 → 3.0.0-alpha.14

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -101,7 +101,7 @@ npm i @huggingface/transformers
  Alternatively, you can use it in vanilla JS, without any bundler, by using a CDN or static hosting. For example, using [ES Modules](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules), you can import the library with:
  ```html
  <script type="module">
- import { pipeline } from 'https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.0.0-alpha.13';
+ import { pipeline } from 'https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.0.0-alpha.14';
  </script>
  ```
 
@@ -134,7 +134,7 @@ Check out the Transformers.js [template](https://huggingface.co/new-space?templa
 
 
 
- By default, Transformers.js uses [hosted pretrained models](https://huggingface.co/models?library=transformers.js) and [precompiled WASM binaries](https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.0.0-alpha.13/dist/), which should work out-of-the-box. You can customize this as follows:
+ By default, Transformers.js uses [hosted pretrained models](https://huggingface.co/models?library=transformers.js) and [precompiled WASM binaries](https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.0.0-alpha.14/dist/), which should work out-of-the-box. You can customize this as follows:
 
  ### Settings
 
@@ -310,6 +310,7 @@ You can refine your search by selecting the task you're interested in (e.g., [te
  1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki.
  1. **[GPTBigCode](https://huggingface.co/docs/transformers/model_doc/gpt_bigcode)** (from BigCode) released with the paper [SantaCoder: don't reach for the stars!](https://arxiv.org/abs/2301.03988) by Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, Logesh Kumar Umapathi, Carolyn Jane Anderson, Yangtian Zi, Joel Lamy Poirier, Hailey Schoelkopf, Sergey Troshin, Dmitry Abulkhanov, Manuel Romero, Michael Lappert, Francesco De Toni, Bernardo García del Río, Qian Liu, Shamik Bose, Urvashi Bhattacharyya, Terry Yue Zhuo, Ian Yu, Paulo Villegas, Marco Zocca, Sourab Mangrulkar, David Lansky, Huu Nguyen, Danish Contractor, Luis Villa, Jia Li, Dzmitry Bahdanau, Yacine Jernite, Sean Hughes, Daniel Fried, Arjun Guha, Harm de Vries, Leandro von Werra.
  1. **[HerBERT](https://huggingface.co/docs/transformers/model_doc/herbert)** (from Allegro.pl, AGH University of Science and Technology) released with the paper [KLEJ: Comprehensive Benchmark for Polish Language Understanding](https://www.aclweb.org/anthology/2020.acl-main.111.pdf) by Piotr Rybak, Robert Mroczkowski, Janusz Tracz, Ireneusz Gawlik.
+ 1. **[Hiera](https://huggingface.co/docs/transformers/model_doc/hiera)** (from Meta) released with the paper [Hiera: A Hierarchical Vision Transformer without the Bells-and-Whistles](https://arxiv.org/pdf/2306.00989) by Chaitanya Ryali, Yuan-Ting Hu, Daniel Bolya, Chen Wei, Haoqi Fan, Po-Yao Huang, Vaibhav Aggarwal, Arkabandhu Chowdhury, Omid Poursaeed, Judy Hoffman, Jitendra Malik, Yanghao Li, Christoph Feichtenhofer.
  1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
  1. **JAIS** (from Core42) released with the paper [Jais and Jais-chat: Arabic-Centric Foundation and Instruction-Tuned Open Generative Large Language Models](https://arxiv.org/pdf/2308.16149) by Neha Sengupta, Sunil Kumar Sahu, Bokang Jia, Satheesh Katipomu, Haonan Li, Fajri Koto, William Marshall, Gurpreet Gosal, Cynthia Liu, Zhiming Chen, Osama Mohammed Afzal, Samta Kamboj, Onkar Pandit, Rahul Pal, Lalit Pradhan, Zain Muhammad Mujahid, Massa Baali, Xudong Han, Sondos Mahmoud Bsharat, Alham Fikri Aji, Zhiqiang Shen, Zhengzhong Liu, Natalia Vassilieva, Joel Hestness, Andy Hock, Andrew Feldman, Jonathan Lee, Andrew Jackson, Hector Xuguang Ren, Preslav Nakov, Timothy Baldwin, Eric Xing.
  1. **[LongT5](https://huggingface.co/docs/transformers/model_doc/longt5)** (from Google AI) released with the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang.
@@ -4437,7 +4437,7 @@ __webpack_require__.r(__webpack_exports__);
 
 
 
- const VERSION = '3.0.0-alpha.13';
+ const VERSION = '3.0.0-alpha.14';
 
  // Check if various APIs are available (depends on environment)
  const IS_BROWSER_ENV = typeof self !== 'undefined';
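The version bump sits next to the library's environment probes. A minimal standalone sketch of that feature-detection pattern (the Node.js check is an illustrative variant, not the library's exact code):

```javascript
// Detect the runtime by probing globals instead of assuming a browser.
// `self` exists in browser windows and web workers; `process.versions.node`
// exists in Node.js. (Illustrative variant of the check shown above.)
const IS_BROWSER_ENV = typeof self !== 'undefined';
const IS_NODE_ENV = typeof process !== 'undefined' && process?.versions?.node != null;
```

Probing globals this way lets a single bundle decide at runtime which backend (WASM, WebGPU, native) it can use.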
@@ -5533,18 +5533,18 @@ class NoBadWordsLogitsProcessor extends LogitsProcessor {
  _call(input_ids, logits) {
  for (let i = 0; i < input_ids.length; ++i) {
  const batch_logits_data = /** @type {Float32Array} */(logits[i].data);
-
+ const ids = input_ids[i];
  for (const bad_word_ids of this.bad_words_ids) {
  // Whether to modify the logits of the last token in the bad word id sequence
  let mark = true;
 
  // For each bad word in the list, if the current sequence of input ids ends with this sequence (excluding the last),
  // then we set the logits of the last bad word id to -Infinity.
- for (let i = 1; i <= bad_word_ids.length - 1 && bad_word_ids.length < input_ids[i].length; ++i) {
+ for (let j = 1; j <= bad_word_ids.length - 1 && bad_word_ids.length < ids.length; ++j) {
 
  // NOTE: We use != instead of !== to compare bigint and number
  // @ts-ignore
- if (bad_word_ids.at(-i - 1) != input_ids[i].at(-i)) {
+ if (bad_word_ids.at(-j - 1) != ids.at(-j)) {
  // We have found a mismatch
  mark = false;
  break;
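The hunk above fixes a variable-shadowing bug: the inner loop reused `i`, the outer batch index, so the suffix comparison read the wrong row of `input_ids`. A self-contained sketch of the suffix-matching logic after the fix (hypothetical helper name; plain arrays stand in for the tensor rows):

```javascript
// Given the ids generated so far for one sequence, decide whether a bad-word
// sequence is about to be completed. The last bad-word id is the one that
// would be banned, so we compare the ids *before* it against the tail of the
// generated sequence. (Simplified sketch, not the library's actual API.)
function endsWithBadWordPrefix(ids, badWordIds) {
  for (let j = 1; j <= badWordIds.length - 1 && badWordIds.length < ids.length; ++j) {
    // != (not !==) so bigint ids compare equal to number ids, as in the source.
    if (badWordIds.at(-j - 1) != ids.at(-j)) {
      return false; // mismatch: this bad word is not being formed
    }
  }
  return true;
}
```

With `badWordIds = [10, 20, 30]` and generated ids ending in `..., 10, 20`, the helper returns `true`, signalling that token `30` should have its logit set to `-Infinity`.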
@@ -6530,6 +6530,9 @@ __webpack_require__.r(__webpack_exports__);
  /* harmony export */ GemmaForCausalLM: () => (/* binding */ GemmaForCausalLM),
  /* harmony export */ GemmaModel: () => (/* binding */ GemmaModel),
  /* harmony export */ GemmaPreTrainedModel: () => (/* binding */ GemmaPreTrainedModel),
+ /* harmony export */ HieraForImageClassification: () => (/* binding */ HieraForImageClassification),
+ /* harmony export */ HieraModel: () => (/* binding */ HieraModel),
+ /* harmony export */ HieraPreTrainedModel: () => (/* binding */ HieraPreTrainedModel),
  /* harmony export */ HubertForCTC: () => (/* binding */ HubertForCTC),
  /* harmony export */ HubertForSequenceClassification: () => (/* binding */ HubertForSequenceClassification),
  /* harmony export */ HubertModel: () => (/* binding */ HubertModel),
@@ -6958,6 +6961,7 @@ async function getSession(pretrained_model_name_or_path, fileName, options) {
  });
  if (Object.keys(shapes).length > 0 && !(0,_backends_onnx_js__WEBPACK_IMPORTED_MODULE_1__.isONNXProxy)()) {
  // Only set preferredOutputLocation if shapes are present and we aren't proxying ONNX
+ /** @type {Record<string, import('onnxruntime-common').Tensor.DataLocation>} */
  const preferredOutputLocation = {};
  for (const key in shapes) {
  preferredOutputLocation[key] = 'gpu-buffer';
@@ -11178,6 +11182,19 @@ class DeiTForImageClassification extends DeiTPreTrainedModel {
  }
  //////////////////////////////////////////////////
 
+ //////////////////////////////////////////////////
+ class HieraPreTrainedModel extends PreTrainedModel { }
+ class HieraModel extends HieraPreTrainedModel { }
+ class HieraForImageClassification extends HieraPreTrainedModel {
+ /**
+ * @param {any} model_inputs
+ */
+ async _call(model_inputs) {
+ return new SequenceClassifierOutput(await super._call(model_inputs));
+ }
+ }
+ //////////////////////////////////////////////////
+
 
  //////////////////////////////////////////////////
  /**
@@ -13058,6 +13075,7 @@ const MODEL_MAPPING_NAMES_ENCODER_ONLY = new Map([
  ['owlv2', ['Owlv2Model', Owlv2Model]],
  ['beit', ['BeitModel', BeitModel]],
  ['deit', ['DeiTModel', DeiTModel]],
+ ['hiera', ['HieraModel', HieraModel]],
  ['convnext', ['ConvNextModel', ConvNextModel]],
  ['convnextv2', ['ConvNextV2Model', ConvNextV2Model]],
  ['dinov2', ['Dinov2Model', Dinov2Model]],
@@ -13265,6 +13283,7 @@ const MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING_NAMES = new Map([
  ['mobilevitv2', ['MobileViTV2ForImageClassification', MobileViTV2ForImageClassification]],
  ['beit', ['BeitForImageClassification', BeitForImageClassification]],
  ['deit', ['DeiTForImageClassification', DeiTForImageClassification]],
+ ['hiera', ['HieraForImageClassification', HieraForImageClassification]],
  ['convnext', ['ConvNextForImageClassification', ConvNextForImageClassification]],
  ['convnextv2', ['ConvNextV2ForImageClassification', ConvNextV2ForImageClassification]],
  ['dinov2', ['Dinov2ForImageClassification', Dinov2ForImageClassification]],
@@ -14092,20 +14111,31 @@ __webpack_require__.r(__webpack_exports__);
 
 
 
+ /**
+ * Asynchronously creates a wrapper function for running an ONNX inference session.
+ *
+ * @param {number[]} session_bytes The session data in bytes.
+ * @param {import('onnxruntime-common').InferenceSession.SessionOptions} session_options The options for the ONNX session.
+ * @template {string | [string] | string[]} T
+ * @param {T} names The name(s) of the output tensor(s).
+ *
+ * @returns {Promise<function(Record<string, Tensor>): Promise<T extends string ? Tensor : T extends string[] ? { [K in keyof T]: Tensor } : never>>}
+ * The wrapper function for running the ONNX inference session.
+ */
  const wrap = async (session_bytes, session_options, names) => {
  const session = await (0,_backends_onnx_js__WEBPACK_IMPORTED_MODULE_0__.createInferenceSession)(
  new Uint8Array(session_bytes), session_options,
  );
- return async (inputs) => {
+ return /** @type {any} */(async (/** @type {Record<string, Tensor>} */ inputs) => {
  const ortFeed = Object.fromEntries(Object.entries(inputs).map(([k, v]) => [k, v.ort_tensor]));
  const outputs = await session.run(ortFeed);
 
  if (Array.isArray(names)) {
  return names.map((n) => new _utils_tensor_js__WEBPACK_IMPORTED_MODULE_1__.Tensor(outputs[n]));
  } else {
- return new _utils_tensor_js__WEBPACK_IMPORTED_MODULE_1__.Tensor(outputs[names]);
+ return new _utils_tensor_js__WEBPACK_IMPORTED_MODULE_1__.Tensor(outputs[/** @type {string} */(names)]);
  }
- }
+ })
  }
 
  // In-memory registry of initialized ONNX operators
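The new JSDoc documents the wrapper's dispatch on `names`: a single output name yields one `Tensor`, an array of names yields an array of `Tensor`s in the same order. A stripped-down sketch of that dispatch, with plain values standing in for tensors and a hypothetical `makeRunner` in place of the session machinery:

```javascript
// Sketch of the single-vs-multiple output dispatch described by the JSDoc.
// `outputs` stands in for the session's named results; `names` is either one
// output name (string) or several (string[]). Hypothetical, not library code.
const makeRunner = (outputs, names) => {
  return () => {
    if (Array.isArray(names)) {
      // string[] -> one value per requested output, preserving order
      return names.map((n) => outputs[n]);
    }
    // string -> the single requested output
    return outputs[names];
  };
};
```

Typing this accurately is what the `@template T` plus conditional `@returns` in the hunk achieve: callers passing a string get a `Tensor`, callers passing `string[]` get a tuple.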
@@ -17783,9 +17813,8 @@ function post_process_semantic_segmentation(outputs, target_sizes = null) {
  // Store which objects have labels
  // This is much more efficient that creating a set of the final values
  const hasLabel = new Array(data.dims[0]);
- const out = segmentation.data;
- for (let j = 0; j < out.length; ++j) {
- const index = out[j];
+ for (let j = 0; j < segmentation_data.length; ++j) {
+ const index = segmentation_data[j];
  hasLabel[index] = index;
  }
  /** @type {number[]} The unique list of labels that were detected */
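The hunk simplifies the loop to iterate `segmentation_data` directly instead of re-reading it through a local alias. The underlying trick, as the comment notes, avoids building a `Set`: index a preallocated array by each label id, then keep the slots that were hit. A self-contained sketch (illustrative names and data):

```javascript
// Collect the unique label ids present in a segmentation map by marking a
// preallocated array, then filtering the untouched holes. (Sketch with
// illustrative arguments; the library works on tensor data buffers.)
function uniqueLabels(segmentationData, numLabels) {
  const hasLabel = new Array(numLabels);
  for (let j = 0; j < segmentationData.length; ++j) {
    const index = segmentationData[j];
    hasLabel[index] = index;
  }
  // Unseen labels were never assigned and remain undefined.
  return hasLabel.filter((x) => x !== undefined);
}
```

For per-pixel maps with millions of entries, marking an array indexed by label id is cheaper than repeated `Set` insertions.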
@@ -27901,7 +27930,7 @@ function magnitude(arr) {
  /**
  * Returns the value and index of the minimum element in an array.
  * @param {number[]|TypedArray} arr array of numbers.
- * @returns {number[]} the value and index of the minimum element, of the form: [valueOfMin, indexOfMin]
+ * @returns {[number, number]} the value and index of the minimum element, of the form: [valueOfMin, indexOfMin]
  * @throws {Error} If array is empty.
  */
  function min(arr) {
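The corrected `@returns` annotation pins the shape down to a `[value, index]` tuple rather than an arbitrary-length `number[]`. An illustrative implementation consistent with that contract (the function body is not shown in this diff, so this is only a sketch of the documented behavior):

```javascript
// Returns [valueOfMin, indexOfMin], matching the corrected JSDoc tuple type.
// Sketch of the documented contract, not the library's actual body.
function min(arr) {
  if (arr.length === 0) throw new Error('Array must not be empty');
  let value = arr[0];
  let index = 0;
  for (let i = 1; i < arr.length; ++i) {
    if (arr[i] < value) {
      value = arr[i];
      index = i;
    }
  }
  return [value, index];
}
```

The tuple type lets TypeScript-aware callers destructure `const [v, i] = min(arr)` without widening both slots to `number[]` elements of unknown count.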
@@ -30497,6 +30526,9 @@ __webpack_require__.r(__webpack_exports__);
  /* harmony export */ GemmaTokenizer: () => (/* reexport safe */ _tokenizers_js__WEBPACK_IMPORTED_MODULE_3__.GemmaTokenizer),
  /* harmony export */ Grok1Tokenizer: () => (/* reexport safe */ _tokenizers_js__WEBPACK_IMPORTED_MODULE_3__.Grok1Tokenizer),
  /* harmony export */ HerbertTokenizer: () => (/* reexport safe */ _tokenizers_js__WEBPACK_IMPORTED_MODULE_3__.HerbertTokenizer),
+ /* harmony export */ HieraForImageClassification: () => (/* reexport safe */ _models_js__WEBPACK_IMPORTED_MODULE_2__.HieraForImageClassification),
+ /* harmony export */ HieraModel: () => (/* reexport safe */ _models_js__WEBPACK_IMPORTED_MODULE_2__.HieraModel),
+ /* harmony export */ HieraPreTrainedModel: () => (/* reexport safe */ _models_js__WEBPACK_IMPORTED_MODULE_2__.HieraPreTrainedModel),
  /* harmony export */ HubertForCTC: () => (/* reexport safe */ _models_js__WEBPACK_IMPORTED_MODULE_2__.HubertForCTC),
  /* harmony export */ HubertForSequenceClassification: () => (/* reexport safe */ _models_js__WEBPACK_IMPORTED_MODULE_2__.HubertForSequenceClassification),
  /* harmony export */ HubertModel: () => (/* reexport safe */ _models_js__WEBPACK_IMPORTED_MODULE_2__.HubertModel),