@huggingface/tasks 0.2.1 → 0.2.2
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/dist/{index.mjs → index.cjs} +280 -133
- package/dist/index.d.ts +4 -3
- package/dist/index.js +245 -168
- package/package.json +13 -8
- package/src/library-to-tasks.ts +1 -1
- package/src/library-ui-elements.ts +11 -11
- package/src/model-data.ts +1 -1
- package/src/model-libraries.ts +1 -1
- package/src/pipelines.ts +1 -1
- package/src/tasks/audio-classification/about.md +1 -1
- package/src/tasks/audio-classification/inference.ts +51 -0
- package/src/tasks/audio-classification/spec/input.json +34 -0
- package/src/tasks/audio-classification/spec/output.json +21 -0
- package/src/tasks/audio-to-audio/about.md +1 -1
- package/src/tasks/automatic-speech-recognition/about.md +4 -2
- package/src/tasks/automatic-speech-recognition/inference.ts +154 -0
- package/src/tasks/automatic-speech-recognition/spec/input.json +34 -0
- package/src/tasks/automatic-speech-recognition/spec/output.json +36 -0
- package/src/tasks/common-definitions.json +109 -0
- package/src/tasks/depth-estimation/data.ts +8 -4
- package/src/tasks/depth-estimation/inference.ts +35 -0
- package/src/tasks/depth-estimation/spec/input.json +30 -0
- package/src/tasks/depth-estimation/spec/output.json +10 -0
- package/src/tasks/document-question-answering/inference.ts +102 -0
- package/src/tasks/document-question-answering/spec/input.json +85 -0
- package/src/tasks/document-question-answering/spec/output.json +36 -0
- package/src/tasks/feature-extraction/inference.ts +22 -0
- package/src/tasks/feature-extraction/spec/input.json +26 -0
- package/src/tasks/feature-extraction/spec/output.json +7 -0
- package/src/tasks/fill-mask/inference.ts +61 -0
- package/src/tasks/fill-mask/spec/input.json +38 -0
- package/src/tasks/fill-mask/spec/output.json +29 -0
- package/src/tasks/image-classification/inference.ts +51 -0
- package/src/tasks/image-classification/spec/input.json +34 -0
- package/src/tasks/image-classification/spec/output.json +10 -0
- package/src/tasks/image-segmentation/inference.ts +65 -0
- package/src/tasks/image-segmentation/spec/input.json +54 -0
- package/src/tasks/image-segmentation/spec/output.json +25 -0
- package/src/tasks/image-to-image/inference.ts +67 -0
- package/src/tasks/image-to-image/spec/input.json +52 -0
- package/src/tasks/image-to-image/spec/output.json +12 -0
- package/src/tasks/image-to-text/inference.ts +138 -0
- package/src/tasks/image-to-text/spec/input.json +34 -0
- package/src/tasks/image-to-text/spec/output.json +17 -0
- package/src/tasks/index.ts +5 -2
- package/src/tasks/mask-generation/about.md +65 -0
- package/src/tasks/mask-generation/data.ts +42 -5
- package/src/tasks/object-detection/inference.ts +62 -0
- package/src/tasks/object-detection/spec/input.json +30 -0
- package/src/tasks/object-detection/spec/output.json +46 -0
- package/src/tasks/placeholder/data.ts +3 -0
- package/src/tasks/placeholder/spec/input.json +35 -0
- package/src/tasks/placeholder/spec/output.json +17 -0
- package/src/tasks/question-answering/inference.ts +99 -0
- package/src/tasks/question-answering/spec/input.json +67 -0
- package/src/tasks/question-answering/spec/output.json +29 -0
- package/src/tasks/sentence-similarity/about.md +2 -2
- package/src/tasks/sentence-similarity/inference.ts +32 -0
- package/src/tasks/sentence-similarity/spec/input.json +40 -0
- package/src/tasks/sentence-similarity/spec/output.json +12 -0
- package/src/tasks/summarization/data.ts +1 -0
- package/src/tasks/summarization/inference.ts +58 -0
- package/src/tasks/summarization/spec/input.json +7 -0
- package/src/tasks/summarization/spec/output.json +7 -0
- package/src/tasks/table-question-answering/inference.ts +61 -0
- package/src/tasks/table-question-answering/spec/input.json +39 -0
- package/src/tasks/table-question-answering/spec/output.json +40 -0
- package/src/tasks/tabular-classification/about.md +1 -1
- package/src/tasks/tabular-regression/about.md +1 -1
- package/src/tasks/text-classification/about.md +1 -0
- package/src/tasks/text-classification/inference.ts +51 -0
- package/src/tasks/text-classification/spec/input.json +35 -0
- package/src/tasks/text-classification/spec/output.json +10 -0
- package/src/tasks/text-generation/about.md +24 -13
- package/src/tasks/text-generation/data.ts +22 -38
- package/src/tasks/text-generation/inference.ts +85 -0
- package/src/tasks/text-generation/spec/input.json +74 -0
- package/src/tasks/text-generation/spec/output.json +17 -0
- package/src/tasks/text-to-audio/inference.ts +138 -0
- package/src/tasks/text-to-audio/spec/input.json +31 -0
- package/src/tasks/text-to-audio/spec/output.json +20 -0
- package/src/tasks/text-to-image/about.md +11 -2
- package/src/tasks/text-to-image/data.ts +6 -2
- package/src/tasks/text-to-image/inference.ts +73 -0
- package/src/tasks/text-to-image/spec/input.json +57 -0
- package/src/tasks/text-to-image/spec/output.json +15 -0
- package/src/tasks/text-to-speech/about.md +4 -2
- package/src/tasks/text-to-speech/data.ts +1 -0
- package/src/tasks/text-to-speech/inference.ts +146 -0
- package/src/tasks/text-to-speech/spec/input.json +7 -0
- package/src/tasks/text-to-speech/spec/output.json +7 -0
- package/src/tasks/text2text-generation/inference.ts +53 -0
- package/src/tasks/text2text-generation/spec/input.json +55 -0
- package/src/tasks/text2text-generation/spec/output.json +17 -0
- package/src/tasks/token-classification/inference.ts +82 -0
- package/src/tasks/token-classification/spec/input.json +65 -0
- package/src/tasks/token-classification/spec/output.json +33 -0
- package/src/tasks/translation/data.ts +1 -0
- package/src/tasks/translation/inference.ts +58 -0
- package/src/tasks/translation/spec/input.json +7 -0
- package/src/tasks/translation/spec/output.json +7 -0
- package/src/tasks/video-classification/inference.ts +59 -0
- package/src/tasks/video-classification/spec/input.json +42 -0
- package/src/tasks/video-classification/spec/output.json +10 -0
- package/src/tasks/visual-question-answering/inference.ts +63 -0
- package/src/tasks/visual-question-answering/spec/input.json +41 -0
- package/src/tasks/visual-question-answering/spec/output.json +21 -0
- package/src/tasks/zero-shot-classification/inference.ts +67 -0
- package/src/tasks/zero-shot-classification/spec/input.json +50 -0
- package/src/tasks/zero-shot-classification/spec/output.json +10 -0
- package/src/tasks/zero-shot-image-classification/data.ts +8 -5
- package/src/tasks/zero-shot-image-classification/inference.ts +61 -0
- package/src/tasks/zero-shot-image-classification/spec/input.json +45 -0
- package/src/tasks/zero-shot-image-classification/spec/output.json +10 -0
- package/src/tasks/zero-shot-object-detection/about.md +6 -0
- package/src/tasks/zero-shot-object-detection/data.ts +6 -1
- package/src/tasks/zero-shot-object-detection/inference.ts +66 -0
- package/src/tasks/zero-shot-object-detection/spec/input.json +40 -0
- package/src/tasks/zero-shot-object-detection/spec/output.json +47 -0
- package/tsconfig.json +3 -3

package/src/tasks/object-detection/spec/output.json
@@ -0,0 +1,46 @@
+{
+  "$id": "/inference/schemas/object-detection/output.json",
+  "$schema": "http://json-schema.org/draft-06/schema#",
+  "description": "Outputs of inference for the Object Detection task",
+  "title": "ObjectDetectionOutput",
+  "type": "array",
+  "items": {
+    "type": "object",
+    "properties": {
+      "label": {
+        "type": "string",
+        "description": "The predicted label for the bounding box"
+      },
+      "score": {
+        "type": "number",
+        "description": "The associated score / probability"
+      },
+      "box": {
+        "$ref": "#/$defs/BoundingBox",
+        "description": "The predicted bounding box. Coordinates are relative to the top left corner of the input image."
+      }
+    },
+    "required": ["box", "label", "score"]
+  },
+  "$defs": {
+    "BoundingBox": {
+      "type": "object",
+      "title": "BoundingBox",
+      "properties": {
+        "xmin": {
+          "type": "integer"
+        },
+        "xmax": {
+          "type": "integer"
+        },
+        "ymin": {
+          "type": "integer"
+        },
+        "ymax": {
+          "type": "integer"
+        }
+      },
+      "required": ["xmin", "xmax", "ymin", "ymax"]
+    }
+  }
+}

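An illustrative, hand-written value matching the ObjectDetectionOutput schema above (the label, score, and coordinates are invented; only the shape is taken from the spec):

```ts
// One detection per array element: a label, a score, and a pixel-space bounding box
// whose coordinates are relative to the top-left corner of the input image.
const detections = [
    {
        label: "cat",
        score: 0.98,
        box: { xmin: 12, ymin: 34, xmax: 256, ymax: 310 },
    },
];
```
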
package/src/tasks/placeholder/data.ts
@@ -13,6 +13,9 @@ const taskData: TaskDataCustom = {
     summary: "",
     widgetModels: [],
     youtubeId: undefined,
+    /// If this is a subtask, link to the most general task ID
+    /// (eg, text2text-generation is the canonical ID of translation)
+    canonicalId: undefined,
 };
 
 export default taskData;

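As a hand-written sketch of how the new field is meant to be used (not taken from the diff; the values are hypothetical), a subtask's `data.ts` would set `canonicalId` to its most general task, e.g. translation pointing at text2text-generation as the comment above describes:

```ts
// Hypothetical subtask data module; other TaskDataCustom fields omitted for brevity.
const taskData = {
    summary: "",
    widgetModels: [],
    youtubeId: undefined,
    // translation is a subtask of text2text-generation, so it links to the canonical task ID
    canonicalId: "text2text-generation",
};

export default taskData;
```
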
package/src/tasks/placeholder/spec/input.json
@@ -0,0 +1,35 @@
+{
+  "$id": "/inference/schemas/<TASK_ID>/input.json",
+  "$schema": "http://json-schema.org/draft-06/schema#",
+  "description": "Inputs for <TASK_ID> inference",
+  "title": "PlaceholderInput",
+  "type": "object",
+  "properties": {
+    "data": {
+      "description": "TODO: describe the input here. This must be model & framework agnostic.",
+      "type": "string"
+    },
+    "parameters": {
+      "description": "Additional inference parameters",
+      "$ref": "#/$defs/<TASK_ID>Parameters"
+    }
+  },
+  "$defs": {
+    "<TASK_ID>Parameters": {
+      "title": "<TASK_ID>Parameters",
+      "description": "TODO: describe additional parameters here.",
+      "type": "object",
+      "properties": {
+        "dummyParameterName": {
+          "type": "boolean",
+          "description": "TODO: describe the parameter here"
+        },
+        "dummyParameterName2": {
+          "type": "integer",
+          "description": "TODO: describe the parameter here"
+        }
+      }
+    }
+  },
+  "required": ["data"]
+}

package/src/tasks/placeholder/spec/output.json
@@ -0,0 +1,17 @@
+{
+  "$id": "/inference/schemas/<TASK_ID>/output.json",
+  "$schema": "http://json-schema.org/draft-06/schema#",
+  "description": "Outputs for <TASK_ID> inference",
+  "title": "PlaceholderOutput",
+  "type": "array",
+  "items": {
+    "type": "object",
+    "properties": {
+      "meaningfulOutputName": {
+        "type": "string",
+        "description": "TODO: Describe what is outputed by the inference here"
+      }
+    },
+    "required": ["meaningfulOutputName"]
+  }
+}

package/src/tasks/question-answering/inference.ts
@@ -0,0 +1,99 @@
+/**
+ * Inference code generated from the JSON schema spec in ./spec
+ *
+ * Using src/scripts/inference-codegen
+ */
+/**
+ * Inputs for Question Answering inference
+ */
+export interface QuestionAnsweringInput {
+    /**
+     * One (context, question) pair to answer
+     */
+    data: QuestionAnsweringInputData;
+    /**
+     * Additional inference parameters
+     */
+    parameters?: QuestionAnsweringParameters;
+    [property: string]: unknown;
+}
+/**
+ * One (context, question) pair to answer
+ */
+export interface QuestionAnsweringInputData {
+    /**
+     * The context to be used for answering the question
+     */
+    context: string;
+    /**
+     * The question to be answered
+     */
+    question: string;
+    [property: string]: unknown;
+}
+/**
+ * Additional inference parameters
+ *
+ * Additional inference parameters for Question Answering
+ */
+export interface QuestionAnsweringParameters {
+    /**
+     * Attempts to align the answer to real words. Improves quality on space separated
+     * languages. Might hurt on non-space-separated languages (like Japanese or Chinese)
+     */
+    alignToWords?: boolean;
+    /**
+     * If the context is too long to fit with the question for the model, it will be split in
+     * several chunks with some overlap. This argument controls the size of that overlap.
+     */
+    docStride?: number;
+    /**
+     * Whether to accept impossible as an answer.
+     */
+    handleImpossibleAnswer?: boolean;
+    /**
+     * The maximum length of predicted answers (e.g., only answers with a shorter length are
+     * considered).
+     */
+    maxAnswerLen?: number;
+    /**
+     * The maximum length of the question after tokenization. It will be truncated if needed.
+     */
+    maxQuestionLen?: number;
+    /**
+     * The maximum length of the total sentence (context + question) in tokens of each chunk
+     * passed to the model. The context will be split in several chunks (using docStride as
+     * overlap) if needed.
+     */
+    maxSeqLen?: number;
+    /**
+     * The number of answers to return (will be chosen by order of likelihood). Note that we
+     * return less than topk answers if there are not enough options available within the
+     * context.
+     */
+    topK?: number;
+    [property: string]: unknown;
+}
+export type QuestionAnsweringOutput = QuestionAnsweringOutputElement[];
+/**
+ * Outputs of inference for the Question Answering task
+ */
+export interface QuestionAnsweringOutputElement {
+    /**
+     * The answer to the question.
+     */
+    answer: string;
+    /**
+     * The character position in the input where the answer ends.
+     */
+    end: number;
+    /**
+     * The probability associated to the answer.
+     */
+    score: number;
+    /**
+     * The character position in the input where the answer begins.
+     */
+    start: number;
+    [property: string]: unknown;
+}

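A short hand-written sketch of how the generated Question Answering types fit together (the sample text, scores, offsets, and the relative import path are assumptions for illustration, not part of the diff):

```ts
import type { QuestionAnsweringInput, QuestionAnsweringOutput } from "./inference";

// Request payload: one (context, question) pair plus optional parameters.
const input: QuestionAnsweringInput = {
    data: {
        context: "The Eiffel Tower was completed in 1889.",
        question: "When was the Eiffel Tower completed?",
    },
    parameters: { topK: 1, maxAnswerLen: 30 },
};

// A plausible response: answers ranked by likelihood, with character offsets into the context.
const output: QuestionAnsweringOutput = [{ answer: "1889", score: 0.97, start: 34, end: 38 }];
```
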
package/src/tasks/question-answering/spec/input.json
@@ -0,0 +1,67 @@
+{
+  "$id": "/inference/schemas/question-answering/input.json",
+  "$schema": "http://json-schema.org/draft-06/schema#",
+  "description": "Inputs for Question Answering inference",
+  "title": "QuestionAnsweringInput",
+  "type": "object",
+  "properties": {
+    "data": {
+      "title": "QuestionAnsweringInputData",
+      "description": "One (context, question) pair to answer",
+      "type": "object",
+      "properties": {
+        "context": {
+          "type": "string",
+          "description": "The context to be used for answering the question"
+        },
+        "question": {
+          "type": "string",
+          "description": "The question to be answered"
+        }
+      },
+      "required": ["question", "context"]
+    },
+    "parameters": {
+      "description": "Additional inference parameters",
+      "$ref": "#/$defs/QuestionAnsweringParameters"
+    }
+  },
+  "$defs": {
+    "QuestionAnsweringParameters": {
+      "title": "QuestionAnsweringParameters",
+      "description": "Additional inference parameters for Question Answering",
+      "type": "object",
+      "properties": {
+        "topK": {
+          "type": "integer",
+          "description": "The number of answers to return (will be chosen by order of likelihood). Note that we return less than topk answers if there are not enough options available within the context."
+        },
+        "docStride": {
+          "type": "integer",
+          "description": "If the context is too long to fit with the question for the model, it will be split in several chunks with some overlap. This argument controls the size of that overlap."
+        },
+        "maxAnswerLen": {
+          "type": "integer",
+          "description": "The maximum length of predicted answers (e.g., only answers with a shorter length are considered)."
+        },
+        "maxSeqLen": {
+          "type": "integer",
+          "description": "The maximum length of the total sentence (context + question) in tokens of each chunk passed to the model. The context will be split in several chunks (using docStride as overlap) if needed."
+        },
+        "maxQuestionLen": {
+          "type": "integer",
+          "description": "The maximum length of the question after tokenization. It will be truncated if needed."
+        },
+        "handleImpossibleAnswer": {
+          "type": "boolean",
+          "description": "Whether to accept impossible as an answer."
+        },
+        "alignToWords": {
+          "type": "boolean",
+          "description": "Attempts to align the answer to real words. Improves quality on space separated languages. Might hurt on non-space-separated languages (like Japanese or Chinese)"
+        }
+      }
+    }
+  },
+  "required": ["data"]
+}

package/src/tasks/question-answering/spec/output.json
@@ -0,0 +1,29 @@
+{
+  "$id": "/inference/schemas/question-answering/output.json",
+  "$schema": "http://json-schema.org/draft-06/schema#",
+  "title": "QuestionAnsweringOutput",
+  "description": "Outputs of inference for the Question Answering task",
+  "type": "array",
+  "items": {
+    "type": "object",
+    "properties": {
+      "answer": {
+        "type": "string",
+        "description": "The answer to the question."
+      },
+      "score": {
+        "type": "number",
+        "description": "The probability associated to the answer."
+      },
+      "start": {
+        "type": "integer",
+        "description": "The character position in the input where the answer begins."
+      },
+      "end": {
+        "type": "integer",
+        "description": "The character position in the input where the answer ends."
+      }
+    },
+    "required": ["answer", "score", "start", "end"]
+  }
+}

package/src/tasks/sentence-similarity/about.md
@@ -8,7 +8,7 @@ You can extract information from documents using Sentence Similarity models. The
 
 The [Sentence Transformers](https://www.sbert.net/) library is very powerful for calculating embeddings of sentences, paragraphs, and entire documents. An embedding is just a vector representation of a text and is useful for finding how similar two texts are.
 
-You can find and use [hundreds of Sentence Transformers](https://huggingface.co/models?library=sentence-transformers&sort=downloads) models from the Hub by directly using the library, playing with the widgets in the browser or using
+You can find and use [hundreds of Sentence Transformers](https://huggingface.co/models?library=sentence-transformers&sort=downloads) models from the Hub by directly using the library, playing with the widgets in the browser or using Inference Endpoints.
 
 ## Task Variants
 

@@ -16,7 +16,7 @@ You can find and use [hundreds of Sentence Transformers](https://huggingface.co/
 
 Passage Ranking is the task of ranking documents based on their relevance to a given query. The task is evaluated on Mean Reciprocal Rank. These models take one query and multiple documents and return ranked documents according to the relevancy to the query. 📄
 
-You can infer with Passage Ranking models using
+You can infer with Passage Ranking models using [Inference Endpoints](https://huggingface.co/inference-endpoints). The Passage Ranking model inputs are a query for which we look for relevancy in the documents and the documents we want to search. The model will return scores according to the relevancy of these documents for the query.
 
 ```python
 import json

package/src/tasks/sentence-similarity/inference.ts
@@ -0,0 +1,32 @@
+/**
+ * Inference code generated from the JSON schema spec in ./spec
+ *
+ * Using src/scripts/inference-codegen
+ */
+
+export type SentenceSimilarityOutput = number[];
+
+/**
+ * Inputs for Sentence similarity inference
+ */
+export interface SentenceSimilarityInput {
+    data: SentenceSimilarityInputData;
+    /**
+     * Additional inference parameters
+     */
+    parameters?: { [key: string]: unknown };
+    [property: string]: unknown;
+}
+
+export interface SentenceSimilarityInputData {
+    /**
+     * A list of strings which will be compared against the source_sentence.
+     */
+    sentences: string[];
+    /**
+     * The string that you wish to compare the other strings with. This can be a phrase,
+     * sentence, or longer passage, depending on the model being used.
+     */
+    sourceSentence: string;
+    [property: string]: unknown;
+}

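A hand-written usage sketch for the Sentence Similarity types (sentences, scores, and the relative import path are invented for illustration):

```ts
import type { SentenceSimilarityInput, SentenceSimilarityOutput } from "./inference";

// One source sentence compared against a list of candidate sentences.
const input: SentenceSimilarityInput = {
    data: {
        sourceSentence: "Machine learning is so easy.",
        sentences: ["Deep learning is so straightforward.", "This is so difficult, like rocket science."],
    },
};

// The output is one similarity score per candidate sentence, in the same order.
const scores: SentenceSimilarityOutput = [0.83, 0.12];
```
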
package/src/tasks/sentence-similarity/spec/input.json
@@ -0,0 +1,40 @@
+{
+  "$id": "/inference/schemas/sentence-similarity/input.json",
+  "$schema": "http://json-schema.org/draft-06/schema#",
+  "description": "Inputs for Sentence similarity inference",
+  "title": "SentenceSimilarityInput",
+  "type": "object",
+  "properties": {
+    "data": {
+      "title": "SentenceSimilarityInputData",
+      "type": "object",
+      "properties": {
+        "sourceSentence": {
+          "description": "The string that you wish to compare the other strings with. This can be a phrase, sentence, or longer passage, depending on the model being used.",
+          "type": "string"
+        },
+        "sentences": {
+          "type": "array",
+          "description": "A list of strings which will be compared against the source_sentence.",
+          "items": {
+            "type": "string"
+          }
+        }
+      },
+      "required": ["sourceSentence", "sentences"]
+    },
+    "parameters": {
+      "description": "Additional inference parameters",
+      "$ref": "#/$defs/SentenceSimilarityParameters"
+    }
+  },
+  "$defs": {
+    "SentenceSimilarityParameters": {
+      "title": "SentenceSimilarityParameters",
+      "description": "Additional inference parameters for Sentence Similarity",
+      "type": "object",
+      "properties": {}
+    }
+  },
+  "required": ["data"]
+}

package/src/tasks/sentence-similarity/spec/output.json
@@ -0,0 +1,12 @@
+{
+  "$id": "/inference/schemas/sentence-similarity/output.json",
+  "$schema": "http://json-schema.org/draft-06/schema#",
+  "title": "SentenceSimilarityOutput",
+  "description": "Outputs of inference for the Sentence Similarity task",
+  "type": "array",
+  "items": {
+    "description": "The associated similarity score for each of the given sentences",
+    "type": "number",
+    "title": "SentenceSimilarityScore"
+  }
+}

package/src/tasks/summarization/inference.ts
@@ -0,0 +1,58 @@
+/**
+ * Inference code generated from the JSON schema spec in ./spec
+ *
+ * Using src/scripts/inference-codegen
+ */
+
+/**
+ * Inputs for Summarization inference
+ *
+ * Inputs for Text2text Generation inference
+ */
+export interface SummarizationInput {
+    /**
+     * The input text data
+     */
+    data: string;
+    /**
+     * Additional inference parameters
+     */
+    parameters?: Text2TextGenerationParameters;
+    [property: string]: unknown;
+}
+
+/**
+ * Additional inference parameters
+ *
+ * Additional inference parameters for Text2text Generation
+ */
+export interface Text2TextGenerationParameters {
+    /**
+     * Whether to clean up the potential extra spaces in the text output.
+     */
+    cleanUpTokenizationSpaces?: boolean;
+    /**
+     * Additional parametrization of the text generation algorithm
+     */
+    generateParameters?: { [key: string]: unknown };
+    /**
+     * The truncation strategy to use
+     */
+    truncation?: Text2TextGenerationTruncationStrategy;
+    [property: string]: unknown;
+}
+
+export type Text2TextGenerationTruncationStrategy = "do_not_truncate" | "longest_first" | "only_first" | "only_second";
+
+/**
+ * Outputs for Summarization inference
+ *
+ * Outputs of inference for the Text2text Generation task
+ */
+export interface SummarizationOutput {
+    /**
+     * The generated text.
+     */
+    generatedText: string;
+    [property: string]: unknown;
+}

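A hand-written sketch showing that Summarization reuses the Text2text Generation parameter shape (the text, parameter values, and import path are illustrative assumptions):

```ts
import type { SummarizationInput, SummarizationOutput } from "./inference";

// Request: raw text plus the shared Text2text Generation parameters.
const input: SummarizationInput = {
    data: "The tower is 324 metres tall, about the same height as an 81-storey building.",
    parameters: { cleanUpTokenizationSpaces: true, truncation: "longest_first" },
};

// Response: a single object carrying the generated summary.
const output: SummarizationOutput = { generatedText: "The tower is about as tall as an 81-storey building." };
```
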
package/src/tasks/summarization/spec/input.json
@@ -0,0 +1,7 @@
+{
+  "$ref": "/inference/schemas/text2text-generation/input.json",
+  "$id": "/inference/schemas/summarization/input.json",
+  "$schema": "http://json-schema.org/draft-06/schema#",
+  "title": "SummarizationInput",
+  "description": "Inputs for Summarization inference"
+}

package/src/tasks/summarization/spec/output.json
@@ -0,0 +1,7 @@
+{
+  "$ref": "/inference/schemas/text2text-generation/output.json",
+  "$id": "/inference/schemas/summarization/output.json",
+  "$schema": "http://json-schema.org/draft-06/schema#",
+  "title": "SummarizationOutput",
+  "description": "Outputs for Summarization inference"
+}

package/src/tasks/table-question-answering/inference.ts
@@ -0,0 +1,61 @@
+/**
+ * Inference code generated from the JSON schema spec in ./spec
+ *
+ * Using src/scripts/inference-codegen
+ */
+/**
+ * Inputs for Table Question Answering inference
+ */
+export interface TableQuestionAnsweringInput {
+    /**
+     * One (table, question) pair to answer
+     */
+    data: TableQuestionAnsweringInputData;
+    /**
+     * Additional inference parameters
+     */
+    parameters?: {
+        [key: string]: unknown;
+    };
+    [property: string]: unknown;
+}
+/**
+ * One (table, question) pair to answer
+ */
+export interface TableQuestionAnsweringInputData {
+    /**
+     * The question to be answered about the table
+     */
+    question: string;
+    /**
+     * The table to serve as context for the questions
+     */
+    table: {
+        [key: string]: string[];
+    };
+    [property: string]: unknown;
+}
+export type TableQuestionAnsweringOutput = TableQuestionAnsweringOutputElement[];
+/**
+ * Outputs of inference for the Table Question Answering task
+ */
+export interface TableQuestionAnsweringOutputElement {
+    /**
+     * If the model has an aggregator, this returns the aggregator.
+     */
+    aggregator?: string;
+    /**
+     * The answer of the question given the table. If there is an aggregator, the answer will be
+     * preceded by `AGGREGATOR >`.
+     */
+    answer: string;
+    /**
+     * List of strings made up of the answer cell values.
+     */
+    cells: string[];
+    /**
+     * Coordinates of the cells of the answers.
+     */
+    coordinates: Array<number[]>;
+    [property: string]: unknown;
+}

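A hand-written sketch of the Table Question Answering types (the table, question, and answer values are invented; the import path is an assumption):

```ts
import type { TableQuestionAnsweringInput, TableQuestionAnsweringOutput } from "./inference";

// The table maps each column name to the list of cell values in that column.
const input: TableQuestionAnsweringInput = {
    data: {
        table: {
            Repository: ["Transformers", "Datasets", "Tokenizers"],
            Stars: ["36542", "4512", "3934"],
        },
        question: "How many stars does the transformers repository have?",
    },
};

// A plausible response: the answer, the matching cells, and their (row, column) coordinates.
const output: TableQuestionAnsweringOutput = [
    { answer: "36542", coordinates: [[0, 1]], cells: ["36542"] },
];
```
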
package/src/tasks/table-question-answering/spec/input.json
@@ -0,0 +1,39 @@
+{
+  "$id": "/inference/schemas/table-question-answering/input.json",
+  "$schema": "http://json-schema.org/draft-06/schema#",
+  "description": "Inputs for Table Question Answering inference",
+  "title": "TableQuestionAnsweringInput",
+  "type": "object",
+  "properties": {
+    "data": {
+      "description": "One (table, question) pair to answer",
+      "title": "TableQuestionAnsweringInputData",
+      "type": "object",
+      "properties": {
+        "table": {
+          "description": "The table to serve as context for the questions",
+          "type": "object",
+          "additionalProperties": { "type": "array", "items": { "type": "string" } }
+        },
+        "question": {
+          "description": "The question to be answered about the table",
+          "type": "string"
+        }
+      },
+      "required": ["table", "question"]
+    },
+    "parameters": {
+      "description": "Additional inference parameters",
+      "$ref": "#/$defs/TableQuestionAnsweringParameters"
+    }
+  },
+  "$defs": {
+    "TableQuestionAnsweringParameters": {
+      "title": "TableQuestionAnsweringParameters",
+      "description": "Additional inference parameters for Table Question Answering",
+      "type": "object",
+      "properties": {}
+    }
+  },
+  "required": ["data"]
+}

package/src/tasks/table-question-answering/spec/output.json
@@ -0,0 +1,40 @@
+{
+  "$id": "/inference/schemas/table-question-answering/output.json",
+  "$schema": "http://json-schema.org/draft-06/schema#",
+  "description": "Outputs of inference for the Table Question Answering task",
+  "title": "TableQuestionAnsweringOutput",
+  "type": "array",
+  "items": {
+    "type": "object",
+    "properties": {
+      "answer": {
+        "type": "string",
+        "description": "The answer of the question given the table. If there is an aggregator, the answer will be preceded by `AGGREGATOR >`."
+      },
+      "coordinates": {
+        "type": "array",
+        "description": "Coordinates of the cells of the answers.",
+        "items": {
+          "type": "array",
+          "items": {
+            "type": "integer"
+          },
+          "minLength": 2,
+          "maxLength": 2
+        }
+      },
+      "cells": {
+        "type": "array",
+        "description": "List of strings made up of the answer cell values.",
+        "items": {
+          "type": "string"
+        }
+      },
+      "aggregator": {
+        "type": "string",
+        "description": "If the model has an aggregator, this returns the aggregator."
+      }
+    },
+    "required": ["answer", "cells", "coordinates"]
+  }
+}

package/src/tasks/tabular-classification/about.md
@@ -19,7 +19,7 @@ Tabular classification models can be used in predicting customer churn in teleco
 
 You can use [skops](https://skops.readthedocs.io/) for model hosting and inference on the Hugging Face Hub. This library is built to improve production workflows of various libraries that are used to train tabular models, including [sklearn](https://scikit-learn.org/stable/) and [xgboost](https://xgboost.readthedocs.io/en/stable/). Using `skops` you can:
 
-- Easily use
+- Easily use Inference Endpoints
 - Build neat UIs with one line of code,
 - Programmatically create model cards,
 - Securely serialize your scikit-learn model. (See limitations of using pickle [here](https://huggingface.co/docs/hub/security-pickle).)

package/src/tasks/tabular-regression/about.md
@@ -30,7 +30,7 @@ model.fit(X, y)
 
 You can use [skops](https://skops.readthedocs.io/) for model hosting and inference on the Hugging Face Hub. This library is built to improve production workflows of various libraries that are used to train tabular models, including [sklearn](https://scikit-learn.org/stable/) and [xgboost](https://xgboost.readthedocs.io/en/stable/). Using `skops` you can:
 
-- Easily use
+- Easily use Inference Endpoints,
 - Build neat UIs with one line of code,
 - Programmatically create model cards,
 - Securely serialize your models. (See limitations of using pickle [here](https://huggingface.co/docs/hub/security-pickle).)

@@ -150,6 +150,7 @@ classifier("I will walk to home when I went through the bus.")
|
|
|
150
150
|
|
|
151
151
|
Would you like to learn more about the topic? Awesome! Here you can find some curated resources that you may find helpful!
|
|
152
152
|
|
|
153
|
+
- [SetFitABSA: Few-Shot Aspect Based Sentiment Analysis using SetFit](https://huggingface.co/blog/setfit-absa)
|
|
153
154
|
- [Course Chapter on Fine-tuning a Text Classification Model](https://huggingface.co/course/chapter3/1?fw=pt)
|
|
154
155
|
- [Getting Started with Sentiment Analysis using Python](https://huggingface.co/blog/sentiment-analysis-python)
|
|
155
156
|
- [Sentiment Analysis on Encrypted Data with Homomorphic Encryption](https://huggingface.co/blog/sentiment-analysis-fhe)
|