@huggingface/tasks 0.2.1 → 0.3.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (123)
  1. package/README.md +1 -1
  2. package/dist/{index.mjs → index.cjs} +2695 -2497
  3. package/dist/index.d.ts +427 -65
  4. package/dist/index.js +2660 -2532
  5. package/package.json +13 -8
  6. package/src/index.ts +2 -5
  7. package/src/library-to-tasks.ts +1 -1
  8. package/src/model-data.ts +1 -1
  9. package/src/model-libraries-downloads.ts +20 -0
  10. package/src/{library-ui-elements.ts → model-libraries-snippets.ts} +50 -296
  11. package/src/model-libraries.ts +375 -44
  12. package/src/pipelines.ts +1 -1
  13. package/src/tasks/audio-classification/about.md +1 -1
  14. package/src/tasks/audio-classification/inference.ts +51 -0
  15. package/src/tasks/audio-classification/spec/input.json +34 -0
  16. package/src/tasks/audio-classification/spec/output.json +10 -0
  17. package/src/tasks/audio-to-audio/about.md +1 -1
  18. package/src/tasks/automatic-speech-recognition/about.md +4 -2
  19. package/src/tasks/automatic-speech-recognition/inference.ts +159 -0
  20. package/src/tasks/automatic-speech-recognition/spec/input.json +34 -0
  21. package/src/tasks/automatic-speech-recognition/spec/output.json +38 -0
  22. package/src/tasks/common-definitions.json +117 -0
  23. package/src/tasks/depth-estimation/data.ts +8 -4
  24. package/src/tasks/depth-estimation/inference.ts +35 -0
  25. package/src/tasks/depth-estimation/spec/input.json +25 -0
  26. package/src/tasks/depth-estimation/spec/output.json +16 -0
  27. package/src/tasks/document-question-answering/inference.ts +110 -0
  28. package/src/tasks/document-question-answering/spec/input.json +85 -0
  29. package/src/tasks/document-question-answering/spec/output.json +36 -0
  30. package/src/tasks/feature-extraction/inference.ts +22 -0
  31. package/src/tasks/feature-extraction/spec/input.json +26 -0
  32. package/src/tasks/feature-extraction/spec/output.json +7 -0
  33. package/src/tasks/fill-mask/inference.ts +62 -0
  34. package/src/tasks/fill-mask/spec/input.json +38 -0
  35. package/src/tasks/fill-mask/spec/output.json +29 -0
  36. package/src/tasks/image-classification/inference.ts +51 -0
  37. package/src/tasks/image-classification/spec/input.json +34 -0
  38. package/src/tasks/image-classification/spec/output.json +10 -0
  39. package/src/tasks/image-segmentation/inference.ts +65 -0
  40. package/src/tasks/image-segmentation/spec/input.json +54 -0
  41. package/src/tasks/image-segmentation/spec/output.json +25 -0
  42. package/src/tasks/image-to-image/inference.ts +67 -0
  43. package/src/tasks/image-to-image/spec/input.json +54 -0
  44. package/src/tasks/image-to-image/spec/output.json +12 -0
  45. package/src/tasks/image-to-text/inference.ts +143 -0
  46. package/src/tasks/image-to-text/spec/input.json +34 -0
  47. package/src/tasks/image-to-text/spec/output.json +14 -0
  48. package/src/tasks/index.ts +5 -2
  49. package/src/tasks/mask-generation/about.md +65 -0
  50. package/src/tasks/mask-generation/data.ts +42 -5
  51. package/src/tasks/object-detection/inference.ts +62 -0
  52. package/src/tasks/object-detection/spec/input.json +30 -0
  53. package/src/tasks/object-detection/spec/output.json +46 -0
  54. package/src/tasks/placeholder/data.ts +3 -0
  55. package/src/tasks/placeholder/spec/input.json +35 -0
  56. package/src/tasks/placeholder/spec/output.json +17 -0
  57. package/src/tasks/question-answering/inference.ts +99 -0
  58. package/src/tasks/question-answering/spec/input.json +67 -0
  59. package/src/tasks/question-answering/spec/output.json +29 -0
  60. package/src/tasks/sentence-similarity/about.md +2 -2
  61. package/src/tasks/sentence-similarity/inference.ts +32 -0
  62. package/src/tasks/sentence-similarity/spec/input.json +40 -0
  63. package/src/tasks/sentence-similarity/spec/output.json +12 -0
  64. package/src/tasks/summarization/data.ts +1 -0
  65. package/src/tasks/summarization/inference.ts +59 -0
  66. package/src/tasks/summarization/spec/input.json +7 -0
  67. package/src/tasks/summarization/spec/output.json +7 -0
  68. package/src/tasks/table-question-answering/inference.ts +61 -0
  69. package/src/tasks/table-question-answering/spec/input.json +44 -0
  70. package/src/tasks/table-question-answering/spec/output.json +40 -0
  71. package/src/tasks/tabular-classification/about.md +1 -1
  72. package/src/tasks/tabular-regression/about.md +1 -1
  73. package/src/tasks/text-classification/about.md +1 -0
  74. package/src/tasks/text-classification/inference.ts +51 -0
  75. package/src/tasks/text-classification/spec/input.json +35 -0
  76. package/src/tasks/text-classification/spec/output.json +10 -0
  77. package/src/tasks/text-generation/about.md +24 -13
  78. package/src/tasks/text-generation/data.ts +22 -38
  79. package/src/tasks/text-generation/inference.ts +194 -0
  80. package/src/tasks/text-generation/spec/input.json +90 -0
  81. package/src/tasks/text-generation/spec/output.json +120 -0
  82. package/src/tasks/text-to-audio/inference.ts +143 -0
  83. package/src/tasks/text-to-audio/spec/input.json +31 -0
  84. package/src/tasks/text-to-audio/spec/output.json +17 -0
  85. package/src/tasks/text-to-image/about.md +11 -2
  86. package/src/tasks/text-to-image/data.ts +6 -2
  87. package/src/tasks/text-to-image/inference.ts +71 -0
  88. package/src/tasks/text-to-image/spec/input.json +59 -0
  89. package/src/tasks/text-to-image/spec/output.json +13 -0
  90. package/src/tasks/text-to-speech/about.md +4 -2
  91. package/src/tasks/text-to-speech/data.ts +1 -0
  92. package/src/tasks/text-to-speech/inference.ts +147 -0
  93. package/src/tasks/text-to-speech/spec/input.json +7 -0
  94. package/src/tasks/text-to-speech/spec/output.json +7 -0
  95. package/src/tasks/text2text-generation/inference.ts +55 -0
  96. package/src/tasks/text2text-generation/spec/input.json +55 -0
  97. package/src/tasks/text2text-generation/spec/output.json +14 -0
  98. package/src/tasks/token-classification/inference.ts +82 -0
  99. package/src/tasks/token-classification/spec/input.json +65 -0
  100. package/src/tasks/token-classification/spec/output.json +33 -0
  101. package/src/tasks/translation/data.ts +1 -0
  102. package/src/tasks/translation/inference.ts +59 -0
  103. package/src/tasks/translation/spec/input.json +7 -0
  104. package/src/tasks/translation/spec/output.json +7 -0
  105. package/src/tasks/video-classification/inference.ts +59 -0
  106. package/src/tasks/video-classification/spec/input.json +42 -0
  107. package/src/tasks/video-classification/spec/output.json +10 -0
  108. package/src/tasks/visual-question-answering/inference.ts +63 -0
  109. package/src/tasks/visual-question-answering/spec/input.json +41 -0
  110. package/src/tasks/visual-question-answering/spec/output.json +21 -0
  111. package/src/tasks/zero-shot-classification/inference.ts +67 -0
  112. package/src/tasks/zero-shot-classification/spec/input.json +50 -0
  113. package/src/tasks/zero-shot-classification/spec/output.json +10 -0
  114. package/src/tasks/zero-shot-image-classification/data.ts +8 -5
  115. package/src/tasks/zero-shot-image-classification/inference.ts +61 -0
  116. package/src/tasks/zero-shot-image-classification/spec/input.json +45 -0
  117. package/src/tasks/zero-shot-image-classification/spec/output.json +10 -0
  118. package/src/tasks/zero-shot-object-detection/about.md +6 -0
  119. package/src/tasks/zero-shot-object-detection/data.ts +6 -1
  120. package/src/tasks/zero-shot-object-detection/inference.ts +66 -0
  121. package/src/tasks/zero-shot-object-detection/spec/input.json +40 -0
  122. package/src/tasks/zero-shot-object-detection/spec/output.json +47 -0
  123. package/tsconfig.json +3 -3
package/src/tasks/question-answering/spec/input.json
@@ -0,0 +1,67 @@
+ {
+   "$id": "/inference/schemas/question-answering/input.json",
+   "$schema": "http://json-schema.org/draft-06/schema#",
+   "description": "Inputs for Question Answering inference",
+   "title": "QuestionAnsweringInput",
+   "type": "object",
+   "properties": {
+     "inputs": {
+       "title": "QuestionAnsweringInputData",
+       "description": "One (context, question) pair to answer",
+       "type": "object",
+       "properties": {
+         "context": {
+           "type": "string",
+           "description": "The context to be used for answering the question"
+         },
+         "question": {
+           "type": "string",
+           "description": "The question to be answered"
+         }
+       },
+       "required": ["question", "context"]
+     },
+     "parameters": {
+       "description": "Additional inference parameters",
+       "$ref": "#/$defs/QuestionAnsweringParameters"
+     }
+   },
+   "$defs": {
+     "QuestionAnsweringParameters": {
+       "title": "QuestionAnsweringParameters",
+       "description": "Additional inference parameters for Question Answering",
+       "type": "object",
+       "properties": {
+         "top_k": {
+           "type": "integer",
+           "description": "The number of answers to return (will be chosen by order of likelihood). Note that we return less than topk answers if there are not enough options available within the context."
+         },
+         "doc_stride": {
+           "type": "integer",
+           "description": "If the context is too long to fit with the question for the model, it will be split in several chunks with some overlap. This argument controls the size of that overlap."
+         },
+         "max_answer_len": {
+           "type": "integer",
+           "description": "The maximum length of predicted answers (e.g., only answers with a shorter length are considered)."
+         },
+         "max_seq_len": {
+           "type": "integer",
+           "description": "The maximum length of the total sentence (context + question) in tokens of each chunk passed to the model. The context will be split in several chunks (using docStride as overlap) if needed."
+         },
+         "max_question_len": {
+           "type": "integer",
+           "description": "The maximum length of the question after tokenization. It will be truncated if needed."
+         },
+         "handle_impossible_answer": {
+           "type": "boolean",
+           "description": "Whether to accept impossible as an answer."
+         },
+         "align_to_words": {
+           "type": "boolean",
+           "description": "Attempts to align the answer to real words. Improves quality on space separated languages. Might hurt on non-space-separated languages (like Japanese or Chinese)"
+         }
+       }
+     }
+   },
+   "required": ["inputs"]
+ }
package/src/tasks/question-answering/spec/output.json
@@ -0,0 +1,29 @@
+ {
+   "$id": "/inference/schemas/question-answering/output.json",
+   "$schema": "http://json-schema.org/draft-06/schema#",
+   "title": "QuestionAnsweringOutput",
+   "description": "Outputs of inference for the Question Answering task",
+   "type": "array",
+   "items": {
+     "type": "object",
+     "properties": {
+       "answer": {
+         "type": "string",
+         "description": "The answer to the question."
+       },
+       "score": {
+         "type": "number",
+         "description": "The probability associated to the answer."
+       },
+       "start": {
+         "type": "integer",
+         "description": "The character position in the input where the answer begins."
+       },
+       "end": {
+         "type": "integer",
+         "description": "The character position in the input where the answer ends."
+       }
+     },
+     "required": ["answer", "score", "start", "end"]
+   }
+ }
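The two specs above describe one request/response round trip. As a quick sanity check, here is a minimal sketch of objects conforming to them; the interfaces are re-declared inline for illustration (they mirror the schema fields, not the package's generated code), and the concrete context, question, and score are made up:

```typescript
// Shapes mirroring the question-answering input/output specs above.
interface QuestionAnsweringInput {
  inputs: { context: string; question: string };
  parameters?: { top_k?: number; doc_stride?: number; max_answer_len?: number };
}

interface QuestionAnsweringOutputElement {
  answer: string;
  score: number;
  start: number; // character offset in the context where the answer begins
  end: number;   // character offset in the context where the answer ends
}

const request: QuestionAnsweringInput = {
  inputs: {
    context: "The capital of France is Paris.",
    question: "What is the capital of France?",
  },
  parameters: { top_k: 1 },
};

// A response consistent with the output spec: start/end index into the context.
const response: QuestionAnsweringOutputElement[] = [
  { answer: "Paris", score: 0.98, start: 25, end: 30 },
];

const [best] = response;
// The answer span can be recovered by slicing the original context.
const span = request.inputs.context.slice(best.start, best.end);
```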
package/src/tasks/sentence-similarity/about.md
@@ -8,7 +8,7 @@ You can extract information from documents using Sentence Similarity models. The
 
  The [Sentence Transformers](https://www.sbert.net/) library is very powerful for calculating embeddings of sentences, paragraphs, and entire documents. An embedding is just a vector representation of a text and is useful for finding how similar two texts are.
 
- You can find and use [hundreds of Sentence Transformers](https://huggingface.co/models?library=sentence-transformers&sort=downloads) models from the Hub by directly using the library, playing with the widgets in the browser or using the Inference API.
+ You can find and use [hundreds of Sentence Transformers](https://huggingface.co/models?library=sentence-transformers&sort=downloads) models from the Hub by directly using the library, playing with the widgets in the browser or using Inference Endpoints.
 
  ## Task Variants
 
@@ -16,7 +16,7 @@ You can find and use [hundreds of Sentence Transformers](https://huggingface.co/
 
  Passage Ranking is the task of ranking documents based on their relevance to a given query. The task is evaluated on Mean Reciprocal Rank. These models take one query and multiple documents and return ranked documents according to the relevancy to the query. 📄
 
- You can infer with Passage Ranking models using the [Inference API](https://huggingface.co/inference-api). The Passage Ranking model inputs are a query for which we look for relevancy in the documents and the documents we want to search. The model will return scores according to the relevancy of these documents for the query.
+ You can infer with Passage Ranking models using [Inference Endpoints](https://huggingface.co/inference-endpoints). The Passage Ranking model inputs are a query for which we look for relevancy in the documents and the documents we want to search. The model will return scores according to the relevancy of these documents for the query.
 
  ```python
  import json
package/src/tasks/sentence-similarity/inference.ts
@@ -0,0 +1,32 @@
+ /**
+  * Inference code generated from the JSON schema spec in ./spec
+  *
+  * Using src/scripts/inference-codegen
+  */
+ 
+ export type SentenceSimilarityOutput = number[];
+ 
+ /**
+  * Inputs for Sentence similarity inference
+  */
+ export interface SentenceSimilarityInput {
+   inputs: SentenceSimilarityInputData;
+   /**
+    * Additional inference parameters
+    */
+   parameters?: { [key: string]: unknown };
+   [property: string]: unknown;
+ }
+ 
+ export interface SentenceSimilarityInputData {
+   /**
+    * A list of strings which will be compared against the source_sentence.
+    */
+   sentences: string[];
+   /**
+    * The string that you wish to compare the other strings with. This can be a phrase,
+    * sentence, or longer passage, depending on the model being used.
+    */
+   sourceSentence: string;
+   [property: string]: unknown;
+ }
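To illustrate the generated types: a payload matching `SentenceSimilarityInput`, and an output array that per the spec carries one score per candidate sentence. Types are re-declared inline and the scores are invented for the example:

```typescript
// Minimal re-declarations of the generated sentence-similarity types.
type SentenceSimilarityOutput = number[];

interface SentenceSimilarityInput {
  inputs: {
    sourceSentence: string; // the sentence the candidates are compared against
    sentences: string[];    // candidates; one score is returned per entry
  };
  parameters?: { [key: string]: unknown };
}

const input: SentenceSimilarityInput = {
  inputs: {
    sourceSentence: "That is a happy person",
    sentences: [
      "That is a happy dog",
      "That is a very happy person",
      "Today is a sunny day",
    ],
  },
};

// Hypothetical scores: the output array is index-aligned with `sentences`.
const output: SentenceSimilarityOutput = [0.69, 0.94, 0.26];

// Pick the candidate most similar to the source sentence.
const bestIdx = output.indexOf(Math.max(...output));
const best = input.inputs.sentences[bestIdx];
```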
package/src/tasks/sentence-similarity/spec/input.json
@@ -0,0 +1,40 @@
+ {
+   "$id": "/inference/schemas/sentence-similarity/input.json",
+   "$schema": "http://json-schema.org/draft-06/schema#",
+   "description": "Inputs for Sentence similarity inference",
+   "title": "SentenceSimilarityInput",
+   "type": "object",
+   "properties": {
+     "inputs": {
+       "title": "SentenceSimilarityInputData",
+       "type": "object",
+       "properties": {
+         "sourceSentence": {
+           "description": "The string that you wish to compare the other strings with. This can be a phrase, sentence, or longer passage, depending on the model being used.",
+           "type": "string"
+         },
+         "sentences": {
+           "type": "array",
+           "description": "A list of strings which will be compared against the source_sentence.",
+           "items": {
+             "type": "string"
+           }
+         }
+       },
+       "required": ["sourceSentence", "sentences"]
+     },
+     "parameters": {
+       "description": "Additional inference parameters",
+       "$ref": "#/$defs/SentenceSimilarityParameters"
+     }
+   },
+   "$defs": {
+     "SentenceSimilarityParameters": {
+       "title": "SentenceSimilarityParameters",
+       "description": "Additional inference parameters for Sentence Similarity",
+       "type": "object",
+       "properties": {}
+     }
+   },
+   "required": ["inputs"]
+ }
package/src/tasks/sentence-similarity/spec/output.json
@@ -0,0 +1,12 @@
+ {
+   "$id": "/inference/schemas/sentence-similarity/output.json",
+   "$schema": "http://json-schema.org/draft-06/schema#",
+   "title": "SentenceSimilarityOutput",
+   "description": "Outputs of inference for the Sentence Similarity task",
+   "type": "array",
+   "items": {
+     "description": "The associated similarity score for each of the given sentences",
+     "type": "number",
+     "title": "SentenceSimilarityScore"
+   }
+ }
package/src/tasks/summarization/data.ts
@@ -1,6 +1,7 @@
  import type { TaskDataCustom } from "..";
 
  const taskData: TaskDataCustom = {
+   canonicalId: "text2text-generation",
    datasets: [
      {
        description:
package/src/tasks/summarization/inference.ts
@@ -0,0 +1,59 @@
+ /**
+  * Inference code generated from the JSON schema spec in ./spec
+  *
+  * Using src/scripts/inference-codegen
+  */
+ 
+ /**
+  * Inputs for Summarization inference
+  *
+  * Inputs for Text2text Generation inference
+  */
+ export interface SummarizationInput {
+   /**
+    * The input text data
+    */
+   inputs: string;
+   /**
+    * Additional inference parameters
+    */
+   parameters?: Text2TextGenerationParameters;
+   [property: string]: unknown;
+ }
+ 
+ /**
+  * Additional inference parameters
+  *
+  * Additional inference parameters for Text2text Generation
+  */
+ export interface Text2TextGenerationParameters {
+   /**
+    * Whether to clean up the potential extra spaces in the text output.
+    */
+   clean_up_tokenization_spaces?: boolean;
+   /**
+    * Additional parametrization of the text generation algorithm
+    */
+   generate_parameters?: { [key: string]: unknown };
+   /**
+    * The truncation strategy to use
+    */
+   truncation?: Text2TextGenerationTruncationStrategy;
+   [property: string]: unknown;
+ }
+ 
+ export type Text2TextGenerationTruncationStrategy = "do_not_truncate" | "longest_first" | "only_first" | "only_second";
+ 
+ /**
+  * Outputs for Summarization inference
+  *
+  * Outputs of inference for the Text2text Generation task
+  */
+ export interface SummarizationOutput {
+   generatedText: unknown;
+   /**
+    * The generated text.
+    */
+   generated_text?: string;
+   [property: string]: unknown;
+ }
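Summarization reuses the Text2text Generation parameter set, as the doubled doc comments above show. A small sketch of a request built from these shapes (types re-declared inline; the input text and parameter values are arbitrary examples, and `max_new_tokens` is just an illustrative key passed through the opaque `generate_parameters` bag):

```typescript
// Minimal re-declarations of the generated summarization types.
type Text2TextGenerationTruncationStrategy =
  | "do_not_truncate"
  | "longest_first"
  | "only_first"
  | "only_second";

interface Text2TextGenerationParameters {
  clean_up_tokenization_spaces?: boolean;
  generate_parameters?: { [key: string]: unknown }; // forwarded to the generation algorithm
  truncation?: Text2TextGenerationTruncationStrategy;
}

interface SummarizationInput {
  inputs: string;
  parameters?: Text2TextGenerationParameters;
}

const request: SummarizationInput = {
  inputs:
    "The tower is 324 metres tall, about the same height as an 81-storey building, " +
    "and the tallest structure in Paris.",
  parameters: {
    clean_up_tokenization_spaces: true,
    truncation: "longest_first",
    // Hypothetical generation knob, carried in the untyped parameter bag.
    generate_parameters: { max_new_tokens: 60 },
  },
};
```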
package/src/tasks/summarization/spec/input.json
@@ -0,0 +1,7 @@
+ {
+   "$ref": "/inference/schemas/text2text-generation/input.json",
+   "$id": "/inference/schemas/summarization/input.json",
+   "$schema": "http://json-schema.org/draft-06/schema#",
+   "title": "SummarizationInput",
+   "description": "Inputs for Summarization inference"
+ }
package/src/tasks/summarization/spec/output.json
@@ -0,0 +1,7 @@
+ {
+   "$ref": "/inference/schemas/text2text-generation/output.json",
+   "$id": "/inference/schemas/summarization/output.json",
+   "$schema": "http://json-schema.org/draft-06/schema#",
+   "title": "SummarizationOutput",
+   "description": "Outputs for Summarization inference"
+ }
package/src/tasks/table-question-answering/inference.ts
@@ -0,0 +1,61 @@
+ /**
+  * Inference code generated from the JSON schema spec in ./spec
+  *
+  * Using src/scripts/inference-codegen
+  */
+ /**
+  * Inputs for Table Question Answering inference
+  */
+ export interface TableQuestionAnsweringInput {
+   /**
+    * One (table, question) pair to answer
+    */
+   inputs: TableQuestionAnsweringInputData;
+   /**
+    * Additional inference parameters
+    */
+   parameters?: {
+     [key: string]: unknown;
+   };
+   [property: string]: unknown;
+ }
+ /**
+  * One (table, question) pair to answer
+  */
+ export interface TableQuestionAnsweringInputData {
+   /**
+    * The question to be answered about the table
+    */
+   question: string;
+   /**
+    * The table to serve as context for the questions
+    */
+   table: {
+     [key: string]: string[];
+   };
+   [property: string]: unknown;
+ }
+ export type TableQuestionAnsweringOutput = TableQuestionAnsweringOutputElement[];
+ /**
+  * Outputs of inference for the Table Question Answering task
+  */
+ export interface TableQuestionAnsweringOutputElement {
+   /**
+    * If the model has an aggregator, this returns the aggregator.
+    */
+   aggregator?: string;
+   /**
+    * The answer of the question given the table. If there is an aggregator, the answer will be
+    * preceded by `AGGREGATOR >`.
+    */
+   answer: string;
+   /**
+    * List of strings made up of the answer cell values.
+    */
+   cells: string[];
+   /**
+    * Coordinates of the cells of the answers.
+    */
+   coordinates: Array<number[]>;
+   [property: string]: unknown;
+ }
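The table is encoded as a column-name-to-column-values record, and answers point back into it via `[row, column]` coordinates. A sketch with made-up data (types re-declared inline; the star counts and model response are invented for illustration):

```typescript
// Minimal re-declarations of the generated table-question-answering types.
interface TableQuestionAnsweringInputData {
  question: string;
  // Column name -> column values; all columns should have the same length.
  table: { [key: string]: string[] };
}

interface TableQuestionAnsweringOutputElement {
  aggregator?: string;
  answer: string;            // prefixed "AGGREGATOR >" when an aggregator fired
  cells: string[];           // the raw answer cell values
  coordinates: Array<number[]>; // [row, column] for each answer cell
}

const inputs: TableQuestionAnsweringInputData = {
  question: "How many stars do the transformers and datasets repos have in total?",
  table: {
    Repository: ["transformers", "datasets", "tokenizers"],
    Stars: ["36542", "4512", "3934"],
  },
};

// A hypothetical response consistent with the spec above.
const output: TableQuestionAnsweringOutputElement[] = [
  {
    aggregator: "SUM",
    answer: "SUM > 36542, 4512",
    cells: ["36542", "4512"],
    coordinates: [
      [0, 1], // row 0, "Stars" column
      [1, 1], // row 1, "Stars" column
    ],
  },
];
```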
package/src/tasks/table-question-answering/spec/input.json
@@ -0,0 +1,44 @@
+ {
+   "$id": "/inference/schemas/table-question-answering/input.json",
+   "$schema": "http://json-schema.org/draft-06/schema#",
+   "description": "Inputs for Table Question Answering inference",
+   "title": "TableQuestionAnsweringInput",
+   "type": "object",
+   "properties": {
+     "inputs": {
+       "description": "One (table, question) pair to answer",
+       "title": "TableQuestionAnsweringInputData",
+       "type": "object",
+       "properties": {
+         "table": {
+           "description": "The table to serve as context for the questions",
+           "type": "object",
+           "additionalProperties": {
+             "type": "array",
+             "items": {
+               "type": "string"
+             }
+           }
+         },
+         "question": {
+           "description": "The question to be answered about the table",
+           "type": "string"
+         }
+       },
+       "required": ["table", "question"]
+     },
+     "parameters": {
+       "description": "Additional inference parameters",
+       "$ref": "#/$defs/TableQuestionAnsweringParameters"
+     }
+   },
+   "$defs": {
+     "TableQuestionAnsweringParameters": {
+       "title": "TableQuestionAnsweringParameters",
+       "description": "Additional inference parameters for Table Question Answering",
+       "type": "object",
+       "properties": {}
+     }
+   },
+   "required": ["inputs"]
+ }
package/src/tasks/table-question-answering/spec/output.json
@@ -0,0 +1,40 @@
+ {
+   "$id": "/inference/schemas/table-question-answering/output.json",
+   "$schema": "http://json-schema.org/draft-06/schema#",
+   "description": "Outputs of inference for the Table Question Answering task",
+   "title": "TableQuestionAnsweringOutput",
+   "type": "array",
+   "items": {
+     "type": "object",
+     "properties": {
+       "answer": {
+         "type": "string",
+         "description": "The answer of the question given the table. If there is an aggregator, the answer will be preceded by `AGGREGATOR >`."
+       },
+       "coordinates": {
+         "type": "array",
+         "description": "Coordinates of the cells of the answers.",
+         "items": {
+           "type": "array",
+           "items": {
+             "type": "integer"
+           },
+           "minLength": 2,
+           "maxLength": 2
+         }
+       },
+       "cells": {
+         "type": "array",
+         "description": "List of strings made up of the answer cell values.",
+         "items": {
+           "type": "string"
+         }
+       },
+       "aggregator": {
+         "type": "string",
+         "description": "If the model has an aggregator, this returns the aggregator."
+       }
+     },
+     "required": ["answer", "cells", "coordinates"]
+   }
+ }
package/src/tasks/tabular-classification/about.md
@@ -19,7 +19,7 @@ Tabular classification models can be used in predicting customer churn in teleco
 
  You can use [skops](https://skops.readthedocs.io/) for model hosting and inference on the Hugging Face Hub. This library is built to improve production workflows of various libraries that are used to train tabular models, including [sklearn](https://scikit-learn.org/stable/) and [xgboost](https://xgboost.readthedocs.io/en/stable/). Using `skops` you can:
 
- - Easily use inference API,
+ - Easily use Inference Endpoints
  - Build neat UIs with one line of code,
  - Programmatically create model cards,
  - Securely serialize your scikit-learn model. (See limitations of using pickle [here](https://huggingface.co/docs/hub/security-pickle).)
package/src/tasks/tabular-regression/about.md
@@ -30,7 +30,7 @@ model.fit(X, y)
 
  You can use [skops](https://skops.readthedocs.io/) for model hosting and inference on the Hugging Face Hub. This library is built to improve production workflows of various libraries that are used to train tabular models, including [sklearn](https://scikit-learn.org/stable/) and [xgboost](https://xgboost.readthedocs.io/en/stable/). Using `skops` you can:
 
- - Easily use inference API,
+ - Easily use Inference Endpoints,
  - Build neat UIs with one line of code,
  - Programmatically create model cards,
  - Securely serialize your models. (See limitations of using pickle [here](https://huggingface.co/docs/hub/security-pickle).)
package/src/tasks/text-classification/about.md
@@ -150,6 +150,7 @@ classifier("I will walk to home when I went through the bus.")
 
  Would you like to learn more about the topic? Awesome! Here you can find some curated resources that you may find helpful!
 
+ - [SetFitABSA: Few-Shot Aspect Based Sentiment Analysis using SetFit](https://huggingface.co/blog/setfit-absa)
  - [Course Chapter on Fine-tuning a Text Classification Model](https://huggingface.co/course/chapter3/1?fw=pt)
  - [Getting Started with Sentiment Analysis using Python](https://huggingface.co/blog/sentiment-analysis-python)
  - [Sentiment Analysis on Encrypted Data with Homomorphic Encryption](https://huggingface.co/blog/sentiment-analysis-fhe)
package/src/tasks/text-classification/inference.ts
@@ -0,0 +1,51 @@
+ /**
+  * Inference code generated from the JSON schema spec in ./spec
+  *
+  * Using src/scripts/inference-codegen
+  */
+ /**
+  * Inputs for Text Classification inference
+  */
+ export interface TextClassificationInput {
+   /**
+    * The text to classify
+    */
+   inputs: string;
+   /**
+    * Additional inference parameters
+    */
+   parameters?: TextClassificationParameters;
+   [property: string]: unknown;
+ }
+ /**
+  * Additional inference parameters
+  *
+  * Additional inference parameters for Text Classification
+  */
+ export interface TextClassificationParameters {
+   function_to_apply?: ClassificationOutputTransform;
+   /**
+    * When specified, limits the output to the top K most probable classes.
+    */
+   top_k?: number;
+   [property: string]: unknown;
+ }
+ /**
+  * The function to apply to the model outputs in order to retrieve the scores.
+  */
+ export type ClassificationOutputTransform = "sigmoid" | "softmax" | "none";
+ export type TextClassificationOutput = TextClassificationOutputElement[];
+ /**
+  * Outputs of inference for the Text Classification task
+  */
+ export interface TextClassificationOutputElement {
+   /**
+    * The predicted class label.
+    */
+   label: string;
+   /**
+    * The corresponding probability.
+    */
+   score: number;
+   [property: string]: unknown;
+ }
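A request/response pair built from these shapes, as a quick illustration (types re-declared inline; the input text, labels, and scores are made up, and the ordering assumption, most probable class first, is how classification pipelines conventionally return results rather than something the spec itself mandates):

```typescript
// Minimal re-declarations of the generated text-classification types.
type ClassificationOutputTransform = "sigmoid" | "softmax" | "none";

interface TextClassificationInput {
  inputs: string;
  parameters?: {
    function_to_apply?: ClassificationOutputTransform;
    top_k?: number; // limit the output to the K most probable classes
  };
}

interface TextClassificationOutputElement {
  label: string;
  score: number;
}

const request: TextClassificationInput = {
  inputs: "I love this movie!",
  parameters: { function_to_apply: "softmax", top_k: 2 },
};

// Hypothetical scores; with softmax they sum to 1 over the returned classes.
const output: TextClassificationOutputElement[] = [
  { label: "POSITIVE", score: 0.99 },
  { label: "NEGATIVE", score: 0.01 },
];

// Assuming descending-score order, the first element is the predicted class.
const top = output[0].label;
```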
package/src/tasks/text-classification/spec/input.json
@@ -0,0 +1,35 @@
+ {
+   "$id": "/inference/schemas/text-classification/input.json",
+   "$schema": "http://json-schema.org/draft-06/schema#",
+   "description": "Inputs for Text Classification inference",
+   "title": "TextClassificationInput",
+   "type": "object",
+   "properties": {
+     "inputs": {
+       "description": "The text to classify",
+       "type": "string"
+     },
+     "parameters": {
+       "description": "Additional inference parameters",
+       "$ref": "#/$defs/TextClassificationParameters"
+     }
+   },
+   "$defs": {
+     "TextClassificationParameters": {
+       "title": "TextClassificationParameters",
+       "description": "Additional inference parameters for Text Classification",
+       "type": "object",
+       "properties": {
+         "function_to_apply": {
+           "title": "TextClassificationOutputTransform",
+           "$ref": "/inference/schemas/common-definitions.json#/definitions/ClassificationOutputTransform"
+         },
+         "top_k": {
+           "type": "integer",
+           "description": "When specified, limits the output to the top K most probable classes."
+         }
+       }
+     }
+   },
+   "required": ["inputs"]
+ }
package/src/tasks/text-classification/spec/output.json
@@ -0,0 +1,10 @@
+ {
+   "$id": "/inference/schemas/text-classification/output.json",
+   "$schema": "http://json-schema.org/draft-06/schema#",
+   "description": "Outputs of inference for the Text Classification task",
+   "title": "TextClassificationOutput",
+   "type": "array",
+   "items": {
+     "$ref": "/inference/schemas/common-definitions.json#/definitions/ClassificationOutput"
+   }
+ }
package/src/tasks/text-generation/about.md
@@ -110,25 +110,36 @@ Would you like to learn more about the topic? Awesome! Here you can find some cu
  - [ChatUI Docker Spaces](https://huggingface.co/docs/hub/spaces-sdks-docker-chatui)
  - [Causal language modeling task guide](https://huggingface.co/docs/transformers/tasks/language_modeling)
  - [Text generation strategies](https://huggingface.co/docs/transformers/generation_strategies)
+ - [Course chapter on training a causal language model from scratch](https://huggingface.co/course/chapter7/6?fw=pt)
 
- ### Course and Blogs
+ ### Model Inference & Deployment
 
- - [Course Chapter on Training a causal language model from scratch](https://huggingface.co/course/chapter7/6?fw=pt)
- - [TO Discussion with Victor Sanh](https://www.youtube.com/watch?v=Oy49SCW_Xpw&ab_channel=HuggingFace)
- - [Hugging Face Course Workshops: Pretraining Language Models & CodeParrot](https://www.youtube.com/watch?v=ExUR7w6xe94&ab_channel=HuggingFace)
- - [Training CodeParrot 🦜 from Scratch](https://huggingface.co/blog/codeparrot)
- - [How to generate text: using different decoding methods for language generation with Transformers](https://huggingface.co/blog/how-to-generate)
+ - [Optimizing your LLM in production](https://huggingface.co/blog/optimize-llm)
+ - [Open-Source Text Generation & LLM Ecosystem at Hugging Face](https://huggingface.co/blog/os-llms)
+ - [Introducing RWKV - An RNN with the advantages of a transformer](https://huggingface.co/blog/rwkv)
+ - [Llama 2 is at Hugging Face](https://huggingface.co/blog/llama2)
  - [Guiding Text Generation with Constrained Beam Search in 🤗 Transformers](https://huggingface.co/blog/constrained-beam-search)
  - [Code generation with Hugging Face](https://huggingface.co/spaces/codeparrot/code-generation-models)
- - [🌸 Introducing The World's Largest Open Multilingual Language Model: BLOOM 🌸](https://huggingface.co/blog/bloom)
- - [The Technology Behind BLOOM Training](https://huggingface.co/blog/bloom-megatron-deepspeed)
- - [Faster Text Generation with TensorFlow and XLA](https://huggingface.co/blog/tf-xla-generate)
  - [Assisted Generation: a new direction toward low-latency text generation](https://huggingface.co/blog/assisted-generation)
- - [Introducing RWKV - An RNN with the advantages of a transformer](https://huggingface.co/blog/rwkv)
+ - [How to generate text: using different decoding methods for language generation with Transformers](https://huggingface.co/blog/how-to-generate)
+ - [Faster Text Generation with TensorFlow and XLA](https://huggingface.co/blog/tf-xla-generate)
+ 
+ ### Model Fine-tuning/Training
+ 
+ - [Non-engineers guide: Train a LLaMA 2 chatbot](https://huggingface.co/blog/Llama2-for-non-engineers)
+ - [Training CodeParrot 🦜 from Scratch](https://huggingface.co/blog/codeparrot)
  - [Creating a Coding Assistant with StarCoder](https://huggingface.co/blog/starchat-alpha)
- - [StarCoder: A State-of-the-Art LLM for Code](https://huggingface.co/blog/starcoder)
- - [Open-Source Text Generation & LLM Ecosystem at Hugging Face](https://huggingface.co/blog/os-llms)
- - [Llama 2 is at Hugging Face](https://huggingface.co/blog/llama2)
+ 
+ ### Advanced Concepts Explained Simply
+ 
+ - [Mixture of Experts Explained](https://huggingface.co/blog/moe)
+ 
+ ### Advanced Fine-tuning/Training Recipes
+ 
+ - [Fine-tuning Llama 2 70B using PyTorch FSDP](https://huggingface.co/blog/ram-efficient-pytorch-fsdp)
+ - [The N Implementation Details of RLHF with PPO](https://huggingface.co/blog/the_n_implementation_details_of_rlhf_with_ppo)
+ - [Preference Tuning LLMs with Direct Preference Optimization Methods](https://huggingface.co/blog/pref-tuning)
+ - [Fine-tune Llama 2 with DPO](https://huggingface.co/blog/dpo-trl)
 
  ### Notebooks