@huggingface/tasks 0.2.0 → 0.2.2
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/dist/{index.mjs → index.cjs} +295 -134
- package/dist/index.d.ts +8 -6
- package/dist/index.js +260 -169
- package/package.json +13 -8
- package/src/library-to-tasks.ts +1 -1
- package/src/library-ui-elements.ts +24 -10
- package/src/model-data.ts +1 -1
- package/src/model-libraries.ts +3 -2
- package/src/pipelines.ts +1 -1
- package/src/tasks/audio-classification/about.md +1 -1
- package/src/tasks/audio-classification/inference.ts +51 -0
- package/src/tasks/audio-classification/spec/input.json +34 -0
- package/src/tasks/audio-classification/spec/output.json +21 -0
- package/src/tasks/audio-to-audio/about.md +1 -1
- package/src/tasks/automatic-speech-recognition/about.md +4 -2
- package/src/tasks/automatic-speech-recognition/inference.ts +154 -0
- package/src/tasks/automatic-speech-recognition/spec/input.json +34 -0
- package/src/tasks/automatic-speech-recognition/spec/output.json +36 -0
- package/src/tasks/common-definitions.json +109 -0
- package/src/tasks/depth-estimation/data.ts +8 -4
- package/src/tasks/depth-estimation/inference.ts +35 -0
- package/src/tasks/depth-estimation/spec/input.json +30 -0
- package/src/tasks/depth-estimation/spec/output.json +10 -0
- package/src/tasks/document-question-answering/inference.ts +102 -0
- package/src/tasks/document-question-answering/spec/input.json +85 -0
- package/src/tasks/document-question-answering/spec/output.json +36 -0
- package/src/tasks/feature-extraction/inference.ts +22 -0
- package/src/tasks/feature-extraction/spec/input.json +26 -0
- package/src/tasks/feature-extraction/spec/output.json +7 -0
- package/src/tasks/fill-mask/inference.ts +61 -0
- package/src/tasks/fill-mask/spec/input.json +38 -0
- package/src/tasks/fill-mask/spec/output.json +29 -0
- package/src/tasks/image-classification/inference.ts +51 -0
- package/src/tasks/image-classification/spec/input.json +34 -0
- package/src/tasks/image-classification/spec/output.json +10 -0
- package/src/tasks/image-segmentation/inference.ts +65 -0
- package/src/tasks/image-segmentation/spec/input.json +54 -0
- package/src/tasks/image-segmentation/spec/output.json +25 -0
- package/src/tasks/image-to-image/inference.ts +67 -0
- package/src/tasks/image-to-image/spec/input.json +52 -0
- package/src/tasks/image-to-image/spec/output.json +12 -0
- package/src/tasks/image-to-text/inference.ts +138 -0
- package/src/tasks/image-to-text/spec/input.json +34 -0
- package/src/tasks/image-to-text/spec/output.json +17 -0
- package/src/tasks/index.ts +5 -2
- package/src/tasks/mask-generation/about.md +65 -0
- package/src/tasks/mask-generation/data.ts +55 -0
- package/src/tasks/object-detection/inference.ts +62 -0
- package/src/tasks/object-detection/spec/input.json +30 -0
- package/src/tasks/object-detection/spec/output.json +46 -0
- package/src/tasks/placeholder/data.ts +3 -0
- package/src/tasks/placeholder/spec/input.json +35 -0
- package/src/tasks/placeholder/spec/output.json +17 -0
- package/src/tasks/question-answering/inference.ts +99 -0
- package/src/tasks/question-answering/spec/input.json +67 -0
- package/src/tasks/question-answering/spec/output.json +29 -0
- package/src/tasks/sentence-similarity/about.md +2 -2
- package/src/tasks/sentence-similarity/inference.ts +32 -0
- package/src/tasks/sentence-similarity/spec/input.json +40 -0
- package/src/tasks/sentence-similarity/spec/output.json +12 -0
- package/src/tasks/summarization/data.ts +1 -0
- package/src/tasks/summarization/inference.ts +58 -0
- package/src/tasks/summarization/spec/input.json +7 -0
- package/src/tasks/summarization/spec/output.json +7 -0
- package/src/tasks/table-question-answering/inference.ts +61 -0
- package/src/tasks/table-question-answering/spec/input.json +39 -0
- package/src/tasks/table-question-answering/spec/output.json +40 -0
- package/src/tasks/tabular-classification/about.md +1 -1
- package/src/tasks/tabular-regression/about.md +1 -1
- package/src/tasks/text-classification/about.md +1 -0
- package/src/tasks/text-classification/inference.ts +51 -0
- package/src/tasks/text-classification/spec/input.json +35 -0
- package/src/tasks/text-classification/spec/output.json +10 -0
- package/src/tasks/text-generation/about.md +24 -13
- package/src/tasks/text-generation/data.ts +22 -38
- package/src/tasks/text-generation/inference.ts +85 -0
- package/src/tasks/text-generation/spec/input.json +74 -0
- package/src/tasks/text-generation/spec/output.json +17 -0
- package/src/tasks/text-to-audio/inference.ts +138 -0
- package/src/tasks/text-to-audio/spec/input.json +31 -0
- package/src/tasks/text-to-audio/spec/output.json +20 -0
- package/src/tasks/text-to-image/about.md +11 -2
- package/src/tasks/text-to-image/data.ts +6 -2
- package/src/tasks/text-to-image/inference.ts +73 -0
- package/src/tasks/text-to-image/spec/input.json +57 -0
- package/src/tasks/text-to-image/spec/output.json +15 -0
- package/src/tasks/text-to-speech/about.md +4 -2
- package/src/tasks/text-to-speech/data.ts +1 -0
- package/src/tasks/text-to-speech/inference.ts +146 -0
- package/src/tasks/text-to-speech/spec/input.json +7 -0
- package/src/tasks/text-to-speech/spec/output.json +7 -0
- package/src/tasks/text2text-generation/inference.ts +53 -0
- package/src/tasks/text2text-generation/spec/input.json +55 -0
- package/src/tasks/text2text-generation/spec/output.json +17 -0
- package/src/tasks/token-classification/inference.ts +82 -0
- package/src/tasks/token-classification/spec/input.json +65 -0
- package/src/tasks/token-classification/spec/output.json +33 -0
- package/src/tasks/translation/data.ts +1 -0
- package/src/tasks/translation/inference.ts +58 -0
- package/src/tasks/translation/spec/input.json +7 -0
- package/src/tasks/translation/spec/output.json +7 -0
- package/src/tasks/video-classification/inference.ts +59 -0
- package/src/tasks/video-classification/spec/input.json +42 -0
- package/src/tasks/video-classification/spec/output.json +10 -0
- package/src/tasks/visual-question-answering/inference.ts +63 -0
- package/src/tasks/visual-question-answering/spec/input.json +41 -0
- package/src/tasks/visual-question-answering/spec/output.json +21 -0
- package/src/tasks/zero-shot-classification/inference.ts +67 -0
- package/src/tasks/zero-shot-classification/spec/input.json +50 -0
- package/src/tasks/zero-shot-classification/spec/output.json +10 -0
- package/src/tasks/zero-shot-image-classification/data.ts +8 -5
- package/src/tasks/zero-shot-image-classification/inference.ts +61 -0
- package/src/tasks/zero-shot-image-classification/spec/input.json +45 -0
- package/src/tasks/zero-shot-image-classification/spec/output.json +10 -0
- package/src/tasks/zero-shot-object-detection/about.md +45 -0
- package/src/tasks/zero-shot-object-detection/data.ts +62 -0
- package/src/tasks/zero-shot-object-detection/inference.ts +66 -0
- package/src/tasks/zero-shot-object-detection/spec/input.json +40 -0
- package/src/tasks/zero-shot-object-detection/spec/output.json +47 -0
- package/tsconfig.json +3 -3
package/src/tasks/text-classification/inference.ts
@@ -0,0 +1,51 @@
+/**
+ * Inference code generated from the JSON schema spec in ./spec
+ *
+ * Using src/scripts/inference-codegen
+ */
+/**
+ * Inputs for Text Classification inference
+ */
+export interface TextClassificationInput {
+    /**
+     * The text to classify
+     */
+    data: string;
+    /**
+     * Additional inference parameters
+     */
+    parameters?: TextClassificationParameters;
+    [property: string]: unknown;
+}
+/**
+ * Additional inference parameters
+ *
+ * Additional inference parameters for Text Classification
+ */
+export interface TextClassificationParameters {
+    functionToApply?: ClassificationOutputTransform;
+    /**
+     * When specified, limits the output to the top K most probable classes.
+     */
+    topK?: number;
+    [property: string]: unknown;
+}
+/**
+ * The function to apply to the model outputs in order to retrieve the scores.
+ */
+export type ClassificationOutputTransform = "sigmoid" | "softmax" | "none";
+export type TextClassificationOutput = TextClassificationOutputElement[];
+/**
+ * Outputs of inference for the Text Classification task
+ */
+export interface TextClassificationOutputElement {
+    /**
+     * The predicted class label.
+     */
+    label: string;
+    /**
+     * The corresponding probability.
+     */
+    score: number;
+    [property: string]: unknown;
+}
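These generated declarations are plain TypeScript, so a consumer can type a request/response pair directly against them. A minimal sketch (the deep import path and the `classify` transport are illustrative assumptions, not part of this diff):

```ts
// Illustrative only: exact export paths may differ in the published build.
import type {
    TextClassificationInput,
    TextClassificationOutput,
} from "@huggingface/tasks/src/tasks/text-classification/inference";

// Hypothetical transport: any function that POSTs the input to an
// inference endpoint and returns the parsed JSON body fits this shape.
declare function classify(input: TextClassificationInput): Promise<TextClassificationOutput>;

const input: TextClassificationInput = {
    data: "I love this movie!",
    parameters: { functionToApply: "softmax", topK: 2 },
};

const output = await classify(input);
for (const { label, score } of output) {
    console.log(`${label}: ${score.toFixed(3)}`);
}
```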
package/src/tasks/text-classification/spec/input.json
@@ -0,0 +1,35 @@
+{
+  "$id": "/inference/schemas/text-classification/input.json",
+  "$schema": "http://json-schema.org/draft-06/schema#",
+  "description": "Inputs for Text Classification inference",
+  "title": "TextClassificationInput",
+  "type": "object",
+  "properties": {
+    "data": {
+      "description": "The text to classify",
+      "type": "string"
+    },
+    "parameters": {
+      "description": "Additional inference parameters",
+      "$ref": "#/$defs/TextClassificationParameters"
+    }
+  },
+  "$defs": {
+    "TextClassificationParameters": {
+      "title": "TextClassificationParameters",
+      "description": "Additional inference parameters for Text Classification",
+      "type": "object",
+      "properties": {
+        "functionToApply": {
+          "title": "TextClassificationOutputTransform",
+          "$ref": "/inference/schemas/common-definitions.json#/definitions/ClassificationOutputTransform"
+        },
+        "topK": {
+          "type": "integer",
+          "description": "When specified, limits the output to the top K most probable classes."
+        }
+      }
+    }
+  },
+  "required": ["data"]
+}
package/src/tasks/text-classification/spec/output.json
@@ -0,0 +1,10 @@
+{
+  "$id": "/inference/schemas/text-classification/output.json",
+  "$schema": "http://json-schema.org/draft-06/schema#",
+  "description": "Outputs of inference for the Text Classification task",
+  "title": "TextClassificationOutput",
+  "type": "array",
+  "items": {
+    "$ref": "/inference/schemas/common-definitions.json#/definitions/ClassificationOutput"
+  }
+}
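Taken together, the two schemas admit instances like the following (assuming `ClassificationOutput` in common-definitions.json is the usual label/score pair, which matches the generated `TextClassificationOutputElement` above; values invented for illustration):

```ts
// Request: only "data" is required; "topK" must be an integer.
const request = {
    data: "This release is easy to review.",
    parameters: { functionToApply: "softmax", topK: 2 },
};

// Response: an array of ClassificationOutput items.
const response = [
    { label: "POSITIVE", score: 0.98 },
    { label: "NEGATIVE", score: 0.02 },
];
```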
package/src/tasks/text-generation/about.md
@@ -110,25 +110,36 @@ Would you like to learn more about the topic? Awesome! Here you can find some cu
 - [ChatUI Docker Spaces](https://huggingface.co/docs/hub/spaces-sdks-docker-chatui)
 - [Causal language modeling task guide](https://huggingface.co/docs/transformers/tasks/language_modeling)
 - [Text generation strategies](https://huggingface.co/docs/transformers/generation_strategies)
+- [Course chapter on training a causal language model from scratch](https://huggingface.co/course/chapter7/6?fw=pt)
 
-###
+### Model Inference & Deployment
 
-- [
-- [
-- [
-- [
-- [How to generate text: using different decoding methods for language generation with Transformers](https://huggingface.co/blog/how-to-generate)
+- [Optimizing your LLM in production](https://huggingface.co/blog/optimize-llm)
+- [Open-Source Text Generation & LLM Ecosystem at Hugging Face](https://huggingface.co/blog/os-llms)
+- [Introducing RWKV - An RNN with the advantages of a transformer](https://huggingface.co/blog/rwkv)
+- [Llama 2 is at Hugging Face](https://huggingface.co/blog/llama2)
 - [Guiding Text Generation with Constrained Beam Search in 🤗 Transformers](https://huggingface.co/blog/constrained-beam-search)
 - [Code generation with Hugging Face](https://huggingface.co/spaces/codeparrot/code-generation-models)
-- [🌸 Introducing The World's Largest Open Multilingual Language Model: BLOOM 🌸](https://huggingface.co/blog/bloom)
-- [The Technology Behind BLOOM Training](https://huggingface.co/blog/bloom-megatron-deepspeed)
-- [Faster Text Generation with TensorFlow and XLA](https://huggingface.co/blog/tf-xla-generate)
 - [Assisted Generation: a new direction toward low-latency text generation](https://huggingface.co/blog/assisted-generation)
-- [
+- [How to generate text: using different decoding methods for language generation with Transformers](https://huggingface.co/blog/how-to-generate)
+- [Faster Text Generation with TensorFlow and XLA](https://huggingface.co/blog/tf-xla-generate)
+
+### Model Fine-tuning/Training
+
+- [Non-engineers guide: Train a LLaMA 2 chatbot](https://huggingface.co/blog/Llama2-for-non-engineers)
+- [Training CodeParrot 🦜 from Scratch](https://huggingface.co/blog/codeparrot)
 - [Creating a Coding Assistant with StarCoder](https://huggingface.co/blog/starchat-alpha)
-
-
-
+
+### Advanced Concepts Explained Simply
+
+- [Mixture of Experts Explained](https://huggingface.co/blog/moe)
+
+### Advanced Fine-tuning/Training Recipes
+
+- [Fine-tuning Llama 2 70B using PyTorch FSDP](https://huggingface.co/blog/ram-efficient-pytorch-fsdp)
+- [The N Implementation Details of RLHF with PPO](https://huggingface.co/blog/the_n_implementation_details_of_rlhf_with_ppo)
+- [Preference Tuning LLMs with Direct Preference Optimization Methods](https://huggingface.co/blog/pref-tuning)
+- [Fine-tune Llama 2 with DPO](https://huggingface.co/blog/dpo-trl)
 
 ### Notebooks
 
package/src/tasks/text-generation/data.ts
@@ -12,12 +12,12 @@ const taskData: TaskDataCustom = {
             id: "the_pile",
         },
         {
-            description: "
-            id: "
+            description: "Truly open-source, curated and cleaned dialogue dataset.",
+            id: "HuggingFaceH4/ultrachat_200k",
         },
         {
-            description: "
-            id: "
+            description: "An instruction dataset with preference ratings on responses.",
+            id: "openbmb/UltraFeedback",
         },
     ],
     demo: {
@@ -59,66 +59,50 @@ const taskData: TaskDataCustom = {
             id: "bigcode/starcoder",
         },
         {
-            description: "A
-            id: "
+            description: "A very powerful text generation model.",
+            id: "mistralai/Mixtral-8x7B-Instruct-v0.1",
         },
         {
-            description: "
-            id: "
+            description: "Small yet powerful text generation model.",
+            id: "microsoft/phi-2",
         },
         {
-            description: "A
-            id: "
+            description: "A very powerful model that can chat, do mathematical reasoning and write code.",
+            id: "openchat/openchat-3.5-0106",
         },
         {
-            description: "
-            id: "
+            description: "Very strong yet small assistant model.",
+            id: "HuggingFaceH4/zephyr-7b-beta",
         },
         {
-            description: "
-            id: "EleutherAI/pythia-12b",
-        },
-        {
-            description: "A large text-to-text model trained to follow instructions.",
-            id: "google/flan-ul2",
-        },
-        {
-            description: "A large and powerful text generation model.",
-            id: "tiiuae/falcon-40b",
-        },
-        {
-            description: "State-of-the-art open-source large language model.",
+            description: "Very strong open-source large language model.",
             id: "meta-llama/Llama-2-70b-hf",
         },
     ],
     spaces: [
         {
-            description: "A
-            id: "
+            description: "A leaderboard to compare different open-source text generation models based on various benchmarks.",
+            id: "HuggingFaceH4/open_llm_leaderboard",
         },
         {
-            description: "An text generation based application
-            id: "
+            description: "An text generation based application based on a very powerful LLaMA2 model.",
+            id: "ysharma/Explore_llamav2_with_TGI",
         },
         {
-            description: "An text generation based application
-            id: "
+            description: "An text generation based application to converse with Zephyr model.",
+            id: "HuggingFaceH4/zephyr-chat",
         },
         {
             description: "An text generation application that combines OpenAI and Hugging Face models.",
             id: "microsoft/HuggingGPT",
         },
         {
-            description: "An
-            id: "
-        },
-        {
-            description: "An UI that uses StableLM-tuned-alpha-7b.",
-            id: "togethercomputer/OpenChatKit",
+            description: "An chatbot to converse with a very powerful text generation model.",
+            id: "mlabonne/phixtral-chat",
         },
     ],
     summary:
-        "Generating text is the task of
+        "Generating text is the task of generating new text given another text. These models can, for example, fill in incomplete text or paraphrase.",
     widgetModels: ["HuggingFaceH4/zephyr-7b-beta"],
     youtubeId: "Vpjb1lu0MDk",
 };
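The `data.ts` changes above only swap curated recommendations; consumers read them through the package's exported task map. A short sketch, assuming the `TASKS_DATA` export documented in the package README:

```ts
import { TASKS_DATA } from "@huggingface/tasks";

// Task entries may be undefined for placeholder tasks, so guard first.
const textGeneration = TASKS_DATA["text-generation"];
if (textGeneration) {
    console.log(textGeneration.summary);
    for (const model of textGeneration.models) {
        console.log(`recommended: ${model.id} (${model.description})`);
    }
}
```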
package/src/tasks/text-generation/inference.ts
@@ -0,0 +1,85 @@
+/**
+ * Inference code generated from the JSON schema spec in ./spec
+ *
+ * Using src/scripts/inference-codegen
+ */
+/**
+ * Inputs for Text Generation inference
+ */
+export interface TextGenerationInput {
+    /**
+     * The text to initialize generation with
+     */
+    data: string;
+    /**
+     * Additional inference parameters
+     */
+    parameters?: TextGenerationParameters;
+    [property: string]: unknown;
+}
+/**
+ * Additional inference parameters
+ *
+ * Additional inference parameters for Text Generation
+ */
+export interface TextGenerationParameters {
+    /**
+     * Whether to use logit sampling (true) or greedy search (false).
+     */
+    doSample?: boolean;
+    /**
+     * Maximum number of generated tokens.
+     */
+    maxNewTokens?: number;
+    /**
+     * The parameter for repetition penalty. A value of 1.0 means no penalty. See [this
+     * paper](https://hf.co/papers/1909.05858) for more details.
+     */
+    repetitionPenalty?: number;
+    /**
+     * Whether to prepend the prompt to the generated text.
+     */
+    returnFullText?: boolean;
+    /**
+     * Stop generating tokens if a member of `stop_sequences` is generated.
+     */
+    stopSequences?: string[];
+    /**
+     * The value used to modulate the logits distribution.
+     */
+    temperature?: number;
+    /**
+     * The number of highest probability vocabulary tokens to keep for top-k-filtering.
+     */
+    topK?: number;
+    /**
+     * If set to < 1, only the smallest set of most probable tokens with probabilities that add
+     * up to `top_p` or higher are kept for generation.
+     */
+    topP?: number;
+    /**
+     * Truncate input tokens to the given size.
+     */
+    truncate?: number;
+    /**
+     * Typical Decoding mass. See [Typical Decoding for Natural Language
+     * Generation](https://hf.co/papers/2202.00666) for more information
+     */
+    typicalP?: number;
+    /**
+     * Watermarking with [A Watermark for Large Language Models](https://hf.co/papers/2301.10226)
+     */
+    watermark?: boolean;
+    [property: string]: unknown;
+}
+export type TextGenerationOutput = TextGenerationOutputElement[];
+/**
+ * Outputs for Text Generation inference
+ */
+export interface TextGenerationOutputElement {
+    /**
+     * The generated text
+     */
+    generatedText: string;
+    [property: string]: unknown;
+}
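As with text classification, the generated types can validate a request at compile time; note that codegen renders all parameter names in camelCase. A sketch (import path and `generate` transport are hypothetical, as in the earlier example):

```ts
// Illustrative only: exact export paths may differ in the published build.
import type {
    TextGenerationInput,
    TextGenerationOutput,
} from "@huggingface/tasks/src/tasks/text-generation/inference";

// Hypothetical transport, as in the text-classification sketch above.
declare function generate(input: TextGenerationInput): Promise<TextGenerationOutput>;

const request: TextGenerationInput = {
    data: "Once upon a time",
    parameters: {
        doSample: true,   // sample instead of greedy search
        maxNewTokens: 64, // cap on generated tokens
        temperature: 0.7,
        topP: 0.9,
        stopSequences: ["\n\n"],
    },
};

const [first] = await generate(request);
console.log(first.generatedText);
```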
package/src/tasks/text-generation/spec/input.json
@@ -0,0 +1,74 @@
+{
+  "$id": "/inference/schemas/text-generation/input.json",
+  "$schema": "http://json-schema.org/draft-06/schema#",
+  "description": "Inputs for Text Generation inference",
+  "title": "TextGenerationInput",
+  "type": "object",
+  "properties": {
+    "data": {
+      "description": "The text to initialize generation with",
+      "type": "string"
+    },
+    "parameters": {
+      "description": "Additional inference parameters",
+      "$ref": "#/$defs/TextGenerationParameters"
+    }
+  },
+  "$defs": {
+    "TextGenerationParameters": {
+      "title": "TextGenerationParameters",
+      "description": "Additional inference parameters for Text Generation",
+      "type": "object",
+      "properties": {
+        "doSample": {
+          "type": "boolean",
+          "description": "Whether to use logit sampling (true) or greedy search (false)."
+        },
+        "maxNewTokens": {
+          "type": "integer",
+          "description": "Maximum number of generated tokens."
+        },
+        "repetitionPenalty": {
+          "type": "number",
+          "description": "The parameter for repetition penalty. A value of 1.0 means no penalty. See [this paper](https://hf.co/papers/1909.05858) for more details."
+        },
+        "returnFullText": {
+          "type": "boolean",
+          "description": "Whether to prepend the prompt to the generated text."
+        },
+        "stopSequences": {
+          "type": "array",
+          "items": {
+            "type": "string"
+          },
+          "description": "Stop generating tokens if a member of `stop_sequences` is generated."
+        },
+        "temperature": {
+          "type": "number",
+          "description": "The value used to modulate the logits distribution."
+        },
+        "topK": {
+          "type": "integer",
+          "description": "The number of highest probability vocabulary tokens to keep for top-k-filtering."
+        },
+        "topP": {
+          "type": "number",
+          "description": "If set to < 1, only the smallest set of most probable tokens with probabilities that add up to `top_p` or higher are kept for generation."
+        },
+        "truncate": {
+          "type": "integer",
+          "description": "Truncate input tokens to the given size."
+        },
+        "typicalP": {
+          "type": "number",
+          "description": "Typical Decoding mass. See [Typical Decoding for Natural Language Generation](https://hf.co/papers/2202.00666) for more information"
+        },
+        "watermark": {
+          "type": "boolean",
+          "description": "Watermarking with [A Watermark for Large Language Models](https://hf.co/papers/2301.10226)"
+        }
+      }
+    }
+  },
+  "required": ["data"]
+}
package/src/tasks/text-generation/spec/output.json
@@ -0,0 +1,17 @@
+{
+  "$id": "/inference/schemas/text-generation/output.json",
+  "$schema": "http://json-schema.org/draft-06/schema#",
+  "description": "Outputs for Text Generation inference",
+  "title": "TextGenerationOutput",
+  "type": "array",
+  "items": {
+    "type": "object",
+    "properties": {
+      "generatedText": {
+        "type": "string",
+        "description": "The generated text"
+      }
+    },
+    "required": ["generatedText"]
+  }
+}
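A request/response pair that should satisfy these two schemas (values invented for illustration): only `data` is required on input, and each output item must carry `generatedText`:

```ts
const request = {
    data: "Write one sentence about package diffs.",
    parameters: { maxNewTokens: 32, doSample: false },
};

const response = [
    { generatedText: "Write one sentence about package diffs. They show exactly what changed between releases." },
];
```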
package/src/tasks/text-to-audio/inference.ts
@@ -0,0 +1,138 @@
+/**
+ * Inference code generated from the JSON schema spec in ./spec
+ *
+ * Using src/scripts/inference-codegen
+ */
+/**
+ * Inputs for Text To Audio inference
+ */
+export interface TextToAudioInput {
+    /**
+     * The input text data
+     */
+    data: string;
+    /**
+     * Additional inference parameters
+     */
+    parameters?: TextToAudioParameters;
+    [property: string]: unknown;
+}
+/**
+ * Additional inference parameters
+ *
+ * Additional inference parameters for Text To Audio
+ */
+export interface TextToAudioParameters {
+    /**
+     * Parametrization of the text generation process
+     */
+    generate?: GenerationParameters;
+    [property: string]: unknown;
+}
+/**
+ * Parametrization of the text generation process
+ *
+ * Ad-hoc parametrization of the text generation process
+ */
+export interface GenerationParameters {
+    /**
+     * Whether to use sampling instead of greedy decoding when generating new tokens.
+     */
+    doSample?: boolean;
+    /**
+     * Controls the stopping condition for beam-based methods.
+     */
+    earlyStopping?: EarlyStoppingUnion;
+    /**
+     * If set to float strictly between 0 and 1, only tokens with a conditional probability
+     * greater than epsilon_cutoff will be sampled. In the paper, suggested values range from
+     * 3e-4 to 9e-4, depending on the size of the model. See [Truncation Sampling as Language
+     * Model Desmoothing](https://hf.co/papers/2210.15191) for more details.
+     */
+    epsilonCutoff?: number;
+    /**
+     * Eta sampling is a hybrid of locally typical sampling and epsilon sampling. If set to
+     * float strictly between 0 and 1, a token is only considered if it is greater than either
+     * eta_cutoff or sqrt(eta_cutoff) * exp(-entropy(softmax(next_token_logits))). The latter
+     * term is intuitively the expected next token probability, scaled by sqrt(eta_cutoff). In
+     * the paper, suggested values range from 3e-4 to 2e-3, depending on the size of the model.
+     * See [Truncation Sampling as Language Model Desmoothing](https://hf.co/papers/2210.15191)
+     * for more details.
+     */
+    etaCutoff?: number;
+    /**
+     * The maximum length (in tokens) of the generated text, including the input.
+     */
+    maxLength?: number;
+    /**
+     * The maximum number of tokens to generate. Takes precedence over maxLength.
+     */
+    maxNewTokens?: number;
+    /**
+     * The minimum length (in tokens) of the generated text, including the input.
+     */
+    minLength?: number;
+    /**
+     * The minimum number of tokens to generate. Takes precedence over maxLength.
+     */
+    minNewTokens?: number;
+    /**
+     * Number of groups to divide num_beams into in order to ensure diversity among different
+     * groups of beams. See [this paper](https://hf.co/papers/1610.02424) for more details.
+     */
+    numBeamGroups?: number;
+    /**
+     * Number of beams to use for beam search.
+     */
+    numBeams?: number;
+    /**
+     * The value balances the model confidence and the degeneration penalty in contrastive
+     * search decoding.
+     */
+    penaltyAlpha?: number;
+    /**
+     * The value used to modulate the next token probabilities.
+     */
+    temperature?: number;
+    /**
+     * The number of highest probability vocabulary tokens to keep for top-k-filtering.
+     */
+    topK?: number;
+    /**
+     * If set to float < 1, only the smallest set of most probable tokens with probabilities
+     * that add up to top_p or higher are kept for generation.
+     */
+    topP?: number;
+    /**
+     * Local typicality measures how similar the conditional probability of predicting a target
+     * token next is to the expected conditional probability of predicting a random token next,
+     * given the partial text already generated. If set to float < 1, the smallest set of the
+     * most locally typical tokens with probabilities that add up to typical_p or higher are
+     * kept for generation. See [this paper](https://hf.co/papers/2202.00666) for more details.
+     */
+    typicalP?: number;
+    /**
+     * Whether the model should use the past last key/values attentions to speed up decoding
+     */
+    useCache?: boolean;
+    [property: string]: unknown;
+}
+/**
+ * Controls the stopping condition for beam-based methods.
+ */
+export type EarlyStoppingUnion = boolean | "never";
+export type TextToAudioOutput = TextToAudioOutputElement[];
+/**
+ * Outputs of inference for the Text To Audio task
+ */
+export interface TextToAudioOutputElement {
+    /**
+     * The generated audio waveform.
+     */
+    audio: unknown;
+    /**
+     * The sampling rate of the generated audio waveform.
+     */
+    samplingRate: number;
+    [property: string]: unknown;
+}
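Unlike text generation, the generation knobs here live one level down, under `parameters.generate`, because they come from the shared `GenerationParameters` definition in common-definitions.json. A sketch (import path and transport hypothetical, as above):

```ts
// Illustrative only: exact export paths may differ in the published build.
import type {
    TextToAudioInput,
    TextToAudioOutput,
} from "@huggingface/tasks/src/tasks/text-to-audio/inference";

// Hypothetical transport, as in the earlier sketches.
declare function synthesize(input: TextToAudioInput): Promise<TextToAudioOutput>;

const request: TextToAudioInput = {
    data: "Hello from the inference spec.",
    parameters: {
        generate: {
            doSample: true,
            temperature: 0.8,
            maxNewTokens: 256,
        },
    },
};

// `audio` is typed `unknown`: the spec leaves the waveform encoding open,
// so a consumer must narrow it (e.g. to a Blob or a Float32Array) before use.
const [{ audio, samplingRate }] = await synthesize(request);
console.log(`waveform payload at ${samplingRate} Hz`, audio);
```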
package/src/tasks/text-to-audio/spec/input.json
@@ -0,0 +1,31 @@
+{
+  "$id": "/inference/schemas/text-to-audio/input.json",
+  "$schema": "http://json-schema.org/draft-06/schema#",
+  "description": "Inputs for Text To Audio inference",
+  "title": "TextToAudioInput",
+  "type": "object",
+  "properties": {
+    "data": {
+      "description": "The input text data",
+      "type": "string"
+    },
+    "parameters": {
+      "description": "Additional inference parameters",
+      "$ref": "#/$defs/TextToAudioParameters"
+    }
+  },
+  "$defs": {
+    "TextToAudioParameters": {
+      "title": "TextToAudioParameters",
+      "description": "Additional inference parameters for Text To Audio",
+      "type": "object",
+      "properties": {
+        "generate": {
+          "description": "Parametrization of the text generation process",
+          "$ref": "/inference/schemas/common-definitions.json#/definitions/GenerationParameters"
+        }
+      }
+    }
+  },
+  "required": ["data"]
+}
package/src/tasks/text-to-audio/spec/output.json
@@ -0,0 +1,20 @@
+{
+  "$id": "/inference/schemas/text-to-audio/output.json",
+  "$schema": "http://json-schema.org/draft-06/schema#",
+  "description": "Outputs of inference for the Text To Audio task",
+  "title": "TextToAudioOutput",
+  "type": "array",
+  "items": {
+    "type": "object",
+    "properties": {
+      "audio": {
+        "description": "The generated audio waveform."
+      },
+      "samplingRate": {
+        "type": "number",
+        "description": "The sampling rate of the generated audio waveform."
+      }
+    },
+    "required": ["audio", "samplingRate"]
+  }
+}
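Note that `audio` carries no `"type"` constraint in this schema, which is why codegen maps it to `unknown` in inference.ts above. An instance that validates (values invented for illustration):

```ts
const response = [
    {
        // Any JSON value is admissible here; the encoding is left open.
        audio: "UklGRiQAAABXQVZF...", // e.g. a base64-encoded WAV payload
        samplingRate: 16000,
    },
];
```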
package/src/tasks/text-to-image/about.md
@@ -53,14 +53,23 @@ await inference.textToImage({
 
 ## Useful Resources
 
+### Model Inference
+
 - [Hugging Face Diffusion Models Course](https://github.com/huggingface/diffusion-models-class)
 - [Getting Started with Diffusers](https://huggingface.co/docs/diffusers/index)
 - [Text-to-Image Generation](https://huggingface.co/docs/diffusers/using-diffusers/conditional_image_generation)
-- [MinImagen - Build Your Own Imagen Text-to-Image Model](https://www.assemblyai.com/blog/minimagen-build-your-own-imagen-text-to-image-model/)
-- [Using LoRA for Efficient Stable Diffusion Fine-Tuning](https://huggingface.co/blog/lora)
 - [Using Stable Diffusion with Core ML on Apple Silicon](https://huggingface.co/blog/diffusers-coreml)
 - [A guide on Vector Quantized Diffusion](https://huggingface.co/blog/vq-diffusion)
 - [🧨 Stable Diffusion in JAX/Flax](https://huggingface.co/blog/stable_diffusion_jax)
 - [Running IF with 🧨 diffusers on a Free Tier Google Colab](https://huggingface.co/blog/if)
+- [Introducing Würstchen: Fast Diffusion for Image Generation](https://huggingface.co/blog/wuerstchen)
+- [Efficient Controllable Generation for SDXL with T2I-Adapters](https://huggingface.co/blog/t2i-sdxl-adapters)
+- [Welcome aMUSEd: Efficient Text-to-Image Generation](https://huggingface.co/blog/amused)
+
+### Model Fine-tuning
+
+- [Finetune Stable Diffusion Models with DDPO via TRL](https://huggingface.co/blog/pref-tuning)
+- [LoRA training scripts of the world, unite!](https://huggingface.co/blog/sdxl_lora_advanced_script)
+- [Using LoRA for Efficient Stable Diffusion Fine-Tuning](https://huggingface.co/blog/lora)
 
 This page was made possible thanks to the efforts of [Ishan Dutta](https://huggingface.co/ishandutta), [Enrique Elias Ubaldo](https://huggingface.co/herrius) and [Oğuz Akif](https://huggingface.co/oguzakif).