opencode-skills-antigravity 1.0.39 → 1.0.41
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/bundled-skills/.antigravity-install-manifest.json +10 -1
- package/bundled-skills/docs/integrations/jetski-cortex.md +3 -3
- package/bundled-skills/docs/integrations/jetski-gemini-loader/README.md +1 -1
- package/bundled-skills/docs/maintainers/repo-growth-seo.md +3 -3
- package/bundled-skills/docs/maintainers/security-findings-triage-2026-03-29-refresh.csv +34 -0
- package/bundled-skills/docs/maintainers/security-findings-triage-2026-03-29-refresh.md +2 -0
- package/bundled-skills/docs/maintainers/skills-update-guide.md +1 -1
- package/bundled-skills/docs/sources/sources.md +2 -2
- package/bundled-skills/docs/users/bundles.md +1 -1
- package/bundled-skills/docs/users/claude-code-skills.md +1 -1
- package/bundled-skills/docs/users/gemini-cli-skills.md +1 -1
- package/bundled-skills/docs/users/getting-started.md +1 -1
- package/bundled-skills/docs/users/kiro-integration.md +1 -1
- package/bundled-skills/docs/users/usage.md +4 -4
- package/bundled-skills/docs/users/visual-guide.md +4 -4
- package/bundled-skills/hugging-face-cli/SKILL.md +192 -195
- package/bundled-skills/hugging-face-community-evals/SKILL.md +213 -0
- package/bundled-skills/hugging-face-community-evals/examples/.env.example +3 -0
- package/bundled-skills/hugging-face-community-evals/examples/USAGE_EXAMPLES.md +101 -0
- package/bundled-skills/hugging-face-community-evals/scripts/inspect_eval_uv.py +104 -0
- package/bundled-skills/hugging-face-community-evals/scripts/inspect_vllm_uv.py +306 -0
- package/bundled-skills/hugging-face-community-evals/scripts/lighteval_vllm_uv.py +297 -0
- package/bundled-skills/hugging-face-dataset-viewer/SKILL.md +120 -120
- package/bundled-skills/hugging-face-gradio/SKILL.md +304 -0
- package/bundled-skills/hugging-face-gradio/examples.md +613 -0
- package/bundled-skills/hugging-face-jobs/SKILL.md +25 -18
- package/bundled-skills/hugging-face-jobs/index.html +216 -0
- package/bundled-skills/hugging-face-jobs/references/hardware_guide.md +336 -0
- package/bundled-skills/hugging-face-jobs/references/hub_saving.md +352 -0
- package/bundled-skills/hugging-face-jobs/references/token_usage.md +570 -0
- package/bundled-skills/hugging-face-jobs/references/troubleshooting.md +475 -0
- package/bundled-skills/hugging-face-jobs/scripts/cot-self-instruct.py +718 -0
- package/bundled-skills/hugging-face-jobs/scripts/finepdfs-stats.py +546 -0
- package/bundled-skills/hugging-face-jobs/scripts/generate-responses.py +587 -0
- package/bundled-skills/hugging-face-model-trainer/SKILL.md +11 -12
- package/bundled-skills/hugging-face-model-trainer/references/gguf_conversion.md +296 -0
- package/bundled-skills/hugging-face-model-trainer/references/hardware_guide.md +283 -0
- package/bundled-skills/hugging-face-model-trainer/references/hub_saving.md +364 -0
- package/bundled-skills/hugging-face-model-trainer/references/local_training_macos.md +231 -0
- package/bundled-skills/hugging-face-model-trainer/references/reliability_principles.md +371 -0
- package/bundled-skills/hugging-face-model-trainer/references/trackio_guide.md +189 -0
- package/bundled-skills/hugging-face-model-trainer/references/training_methods.md +150 -0
- package/bundled-skills/hugging-face-model-trainer/references/training_patterns.md +203 -0
- package/bundled-skills/hugging-face-model-trainer/references/troubleshooting.md +282 -0
- package/bundled-skills/hugging-face-model-trainer/references/unsloth.md +313 -0
- package/bundled-skills/hugging-face-model-trainer/scripts/convert_to_gguf.py +424 -0
- package/bundled-skills/hugging-face-model-trainer/scripts/dataset_inspector.py +417 -0
- package/bundled-skills/hugging-face-model-trainer/scripts/estimate_cost.py +150 -0
- package/bundled-skills/hugging-face-model-trainer/scripts/train_dpo_example.py +106 -0
- package/bundled-skills/hugging-face-model-trainer/scripts/train_grpo_example.py +89 -0
- package/bundled-skills/hugging-face-model-trainer/scripts/train_sft_example.py +122 -0
- package/bundled-skills/hugging-face-model-trainer/scripts/unsloth_sft_example.py +512 -0
- package/bundled-skills/hugging-face-paper-publisher/SKILL.md +11 -4
- package/bundled-skills/hugging-face-paper-publisher/examples/example_usage.md +326 -0
- package/bundled-skills/hugging-face-paper-publisher/references/quick_reference.md +216 -0
- package/bundled-skills/hugging-face-paper-publisher/scripts/paper_manager.py +606 -0
- package/bundled-skills/hugging-face-paper-publisher/templates/arxiv.md +299 -0
- package/bundled-skills/hugging-face-paper-publisher/templates/ml-report.md +358 -0
- package/bundled-skills/hugging-face-paper-publisher/templates/modern.md +319 -0
- package/bundled-skills/hugging-face-paper-publisher/templates/standard.md +201 -0
- package/bundled-skills/hugging-face-papers/SKILL.md +241 -0
- package/bundled-skills/hugging-face-trackio/.claude-plugin/plugin.json +19 -0
- package/bundled-skills/hugging-face-trackio/SKILL.md +117 -0
- package/bundled-skills/hugging-face-trackio/references/alerts.md +196 -0
- package/bundled-skills/hugging-face-trackio/references/logging_metrics.md +206 -0
- package/bundled-skills/hugging-face-trackio/references/retrieving_metrics.md +251 -0
- package/bundled-skills/hugging-face-vision-trainer/SKILL.md +595 -0
- package/bundled-skills/hugging-face-vision-trainer/references/finetune_sam2_trainer.md +254 -0
- package/bundled-skills/hugging-face-vision-trainer/references/hub_saving.md +618 -0
- package/bundled-skills/hugging-face-vision-trainer/references/image_classification_training_notebook.md +279 -0
- package/bundled-skills/hugging-face-vision-trainer/references/object_detection_training_notebook.md +700 -0
- package/bundled-skills/hugging-face-vision-trainer/references/reliability_principles.md +310 -0
- package/bundled-skills/hugging-face-vision-trainer/references/timm_trainer.md +91 -0
- package/bundled-skills/hugging-face-vision-trainer/scripts/dataset_inspector.py +814 -0
- package/bundled-skills/hugging-face-vision-trainer/scripts/estimate_cost.py +217 -0
- package/bundled-skills/hugging-face-vision-trainer/scripts/image_classification_training.py +383 -0
- package/bundled-skills/hugging-face-vision-trainer/scripts/object_detection_training.py +710 -0
- package/bundled-skills/hugging-face-vision-trainer/scripts/sam_segmentation_training.py +382 -0
- package/bundled-skills/jq/SKILL.md +273 -0
- package/bundled-skills/odoo-edi-connector/SKILL.md +32 -10
- package/bundled-skills/odoo-woocommerce-bridge/SKILL.md +9 -5
- package/bundled-skills/tmux/SKILL.md +370 -0
- package/bundled-skills/transformers-js/SKILL.md +639 -0
- package/bundled-skills/transformers-js/references/CACHE.md +339 -0
- package/bundled-skills/transformers-js/references/CONFIGURATION.md +390 -0
- package/bundled-skills/transformers-js/references/EXAMPLES.md +605 -0
- package/bundled-skills/transformers-js/references/MODEL_ARCHITECTURES.md +167 -0
- package/bundled-skills/transformers-js/references/PIPELINE_OPTIONS.md +545 -0
- package/bundled-skills/transformers-js/references/TEXT_GENERATION.md +315 -0
- package/bundled-skills/viboscope/SKILL.md +64 -0
- package/package.json +1 -1
@@ -0,0 +1,613 @@

# Gradio End-to-End Examples

Complete working Gradio apps for reference.

## Blocks Essay Simple

```python
import gradio as gr

def change_textbox(choice):
    if choice == "short":
        return gr.Textbox(lines=2, visible=True)
    elif choice == "long":
        return gr.Textbox(lines=8, visible=True, value="Lorem ipsum dolor sit amet")
    else:
        return gr.Textbox(visible=False)

with gr.Blocks() as demo:
    radio = gr.Radio(
        ["short", "long", "none"], label="What kind of essay would you like to write?"
    )
    text = gr.Textbox(lines=2, interactive=True, buttons=["copy"])
    radio.change(fn=change_textbox, inputs=radio, outputs=text)

demo.launch()
```

## Blocks Flipper

```python
import numpy as np
import gradio as gr

def flip_text(x):
    return x[::-1]

def flip_image(x):
    return np.fliplr(x)

with gr.Blocks() as demo:
    gr.Markdown("Flip text or image files using this demo.")
    with gr.Tab("Flip Text"):
        text_input = gr.Textbox()
        text_output = gr.Textbox()
        text_button = gr.Button("Flip")
    with gr.Tab("Flip Image"):
        with gr.Row():
            image_input = gr.Image()
            image_output = gr.Image()
        image_button = gr.Button("Flip")

    with gr.Accordion("Open for More!", open=False):
        gr.Markdown("Look at me...")
        temp_slider = gr.Slider(
            0, 1,
            value=0.1,
            step=0.1,
            interactive=True,
            label="Slide me",
        )

    text_button.click(flip_text, inputs=text_input, outputs=text_output)
    image_button.click(flip_image, inputs=image_input, outputs=image_output)

demo.launch()
```

## Blocks Form

```python
import gradio as gr

with gr.Blocks() as demo:
    name_box = gr.Textbox(label="Name")
    age_box = gr.Number(label="Age", minimum=0, maximum=100)
    symptoms_box = gr.CheckboxGroup(["Cough", "Fever", "Runny Nose"])
    submit_btn = gr.Button("Submit")

    with gr.Column(visible=False) as output_col:
        diagnosis_box = gr.Textbox(label="Diagnosis")
        patient_summary_box = gr.Textbox(label="Patient Summary")

    def submit(name, age, symptoms):
        return {
            submit_btn: gr.Button(visible=False),
            output_col: gr.Column(visible=True),
            diagnosis_box: "covid" if "Cough" in symptoms else "flu",
            patient_summary_box: f"{name}, {age} y/o",
        }

    submit_btn.click(
        submit,
        [name_box, age_box, symptoms_box],
        [submit_btn, diagnosis_box, patient_summary_box, output_col],
    )

demo.launch()
```

## Blocks Hello

```python
import gradio as gr

def welcome(name):
    return f"Welcome to Gradio, {name}!"

with gr.Blocks() as demo:
    gr.Markdown(
    """
    # Hello World!
    Start typing below to see the output.
    """)
    inp = gr.Textbox(placeholder="What is your name?")
    out = gr.Textbox()
    inp.change(welcome, inp, out)

demo.launch()
```

## Blocks Layout

```python
import gradio as gr

demo = gr.Blocks()

with demo:
    with gr.Row():
        gr.Image(interactive=True, scale=2)
        gr.Image()
    with gr.Row():
        gr.Textbox(label="Text")
        gr.Number(label="Count", scale=2)
        gr.Radio(choices=["One", "Two"])
    with gr.Row():
        gr.Button("500", scale=0, min_width=500)
        gr.Button("A", scale=0)
        gr.Button("grow")
    with gr.Row():
        gr.Textbox()
        gr.Textbox()
        gr.Button()
    with gr.Row():
        with gr.Row():
            with gr.Column():
                gr.Textbox(label="Text")
                gr.Number(label="Count")
                gr.Radio(choices=["One", "Two"])
            gr.Image()
            with gr.Column():
                gr.Image(interactive=True)
                gr.Image()
    gr.Image()
    gr.Textbox(label="Text")
    gr.Number(label="Count")
    gr.Radio(choices=["One", "Two"])

demo.launch()
```

## Calculator

```python
import gradio as gr

def calculator(num1, operation, num2):
    if operation == "add":
        return num1 + num2
    elif operation == "subtract":
        return num1 - num2
    elif operation == "multiply":
        return num1 * num2
    elif operation == "divide":
        if num2 == 0:
            raise gr.Error("Cannot divide by zero!")
        return num1 / num2

demo = gr.Interface(
    calculator,
    [
        "number",
        gr.Radio(["add", "subtract", "multiply", "divide"]),
        "number"
    ],
    "number",
    examples=[
        [45, "add", 3],
        [3.14, "divide", 2],
        [144, "multiply", 2.5],
        [0, "subtract", 1.2],
    ],
    title="Toy Calculator",
    description="Here's a sample toy calculator.",
    api_name="predict"
)

demo.launch()
```

## Chatbot Simple

```python
import gradio as gr
import random
import time

with gr.Blocks() as demo:
    chatbot = gr.Chatbot()
    msg = gr.Textbox()
    clear = gr.ClearButton([msg, chatbot])

    def respond(message, chat_history):
        bot_message = random.choice(["How are you?", "Today is a great day", "I'm very hungry"])
        chat_history.append({"role": "user", "content": message})
        chat_history.append({"role": "assistant", "content": bot_message})
        time.sleep(2)
        return "", chat_history

    msg.submit(respond, [msg, chatbot], [msg, chatbot])

demo.launch()
```

## Chatbot Streaming

```python
import gradio as gr
import random
import time

with gr.Blocks() as demo:
    chatbot = gr.Chatbot()
    msg = gr.Textbox()
    clear = gr.Button("Clear")

    def user(user_message, history: list):
        return "", history + [{"role": "user", "content": user_message}]

    def bot(history: list):
        bot_message = random.choice(["How are you?", "I love you", "I'm very hungry"])
        history.append({"role": "assistant", "content": ""})
        for character in bot_message:
            history[-1]['content'] += character
            time.sleep(0.05)
            yield history

    msg.submit(user, [msg, chatbot], [msg, chatbot], queue=False).then(
        bot, chatbot, chatbot
    )
    clear.click(lambda: None, None, chatbot, queue=False)

demo.launch()
```

## Custom Css

```python
import gradio as gr

with gr.Blocks() as demo:
    with gr.Column(elem_classes="cool-col"):
        gr.Markdown("### Gradio Demo with Custom CSS", elem_classes="darktest")
        gr.Markdown(
            elem_classes="markdown",
            value="Resize the browser window to see the CSS media query in action.",
        )

if __name__ == "__main__":
    demo.launch(css_paths=["demo/custom_css/custom_css.css"])
```

## Fake Diffusion

```python
import gradio as gr
import numpy as np
import time

def fake_diffusion(steps):
    rng = np.random.default_rng()
    for i in range(steps):
        time.sleep(1)
        image = rng.random(size=(600, 600, 3))
        yield image
    image = np.ones((1000, 1000, 3), np.uint8)
    image[:] = [255, 124, 0]
    yield image

demo = gr.Interface(fake_diffusion,
                    inputs=gr.Slider(1, 10, 3, step=1),
                    outputs="image",
                    api_name="predict")

demo.launch()
```

## Hello World

```python
import gradio as gr


def greet(name):
    return "Hello " + name + "!"


demo = gr.Interface(fn=greet, inputs="textbox", outputs="textbox", api_name="predict")

demo.launch()
```

## Image Editor

```python
import gradio as gr
import time


def sleep(im):
    time.sleep(5)
    return [im["background"], im["layers"][0], im["layers"][1], im["composite"]]


def predict(im):
    return im["composite"]


with gr.Blocks() as demo:
    with gr.Row():
        im = gr.ImageEditor(
            type="numpy",
        )
        im_preview = gr.Image()
    n_upload = gr.Number(0, label="Number of upload events", step=1)
    n_change = gr.Number(0, label="Number of change events", step=1)
    n_input = gr.Number(0, label="Number of input events", step=1)

    im.upload(lambda x: x + 1, outputs=n_upload, inputs=n_upload)
    im.change(lambda x: x + 1, outputs=n_change, inputs=n_change)
    im.input(lambda x: x + 1, outputs=n_input, inputs=n_input)
    im.change(predict, outputs=im_preview, inputs=im, show_progress="hidden")

demo.launch()
```

## On Listener Decorator

```python
import gradio as gr

with gr.Blocks() as demo:
    name = gr.Textbox(label="Name")
    output = gr.Textbox(label="Output Box")
    greet_btn = gr.Button("Greet")

    @gr.on(triggers=[name.submit, greet_btn.click], inputs=name, outputs=output)
    def greet(name):
        return "Hello " + name + "!"

demo.launch()
```

## Render Merge

```python
import gradio as gr
import time

with gr.Blocks() as demo:
    text_count = gr.Slider(1, 5, value=1, step=1, label="Textbox Count")

    @gr.render(inputs=text_count)
    def render_count(count):
        boxes = []
        for i in range(count):
            box = gr.Textbox(label=f"Box {i}")
            boxes.append(box)

        def merge(*args):
            time.sleep(0.2)  # simulate a delay
            return " ".join(args)

        merge_btn.click(merge, boxes, output)

        def clear():
            time.sleep(0.2)  # simulate a delay
            return [" "] * count

        clear_btn.click(clear, None, boxes)

        def countup():
            time.sleep(0.2)  # simulate a delay
            return list(range(count))

        count_btn.click(countup, None, boxes, queue=False)

    with gr.Row():
        merge_btn = gr.Button("Merge")
        clear_btn = gr.Button("Clear")
        count_btn = gr.Button("Count")

    output = gr.Textbox()

demo.launch()
```

## Reverse Audio 2

```python
import gradio as gr
import numpy as np

def reverse_audio(audio):
    sr, data = audio
    return (sr, np.flipud(data))

demo = gr.Interface(fn=reverse_audio,
                    inputs="microphone",
                    outputs="audio", api_name="predict")

demo.launch()
```

## Sepia Filter

```python
import numpy as np
import gradio as gr

def sepia(input_img):
    sepia_filter = np.array([
        [0.393, 0.769, 0.189],
        [0.349, 0.686, 0.168],
        [0.272, 0.534, 0.131]
    ])
    sepia_img = input_img.dot(sepia_filter.T)
    sepia_img /= sepia_img.max()
    return sepia_img

demo = gr.Interface(sepia, gr.Image(), "image", api_name="predict")
demo.launch()
```

## Sort Records

```python
import gradio as gr

def sort_records(records):
    return records.sort("Quantity")

demo = gr.Interface(
    sort_records,
    gr.Dataframe(
        headers=["Item", "Quantity"],
        datatype=["str", "number"],
        row_count=3,
        column_count=2,
        column_limits=(2, 2),
        type="polars"
    ),
    "dataframe",
    description="Sort by Quantity"
)

demo.launch()
```

## Streaming Simple

```python
import gradio as gr

with gr.Blocks() as demo:
    with gr.Row():
        with gr.Column():
            input_img = gr.Image(label="Input", sources="webcam")
        with gr.Column():
            output_img = gr.Image(label="Output")
    input_img.stream(lambda s: s, input_img, output_img, time_limit=15, stream_every=0.1, concurrency_limit=30)

if __name__ == "__main__":
    demo.launch()
```

## Tabbed Interface Lite

```python
import gradio as gr

hello_world = gr.Interface(lambda name: "Hello " + name, "text", "text", api_name="predict")
bye_world = gr.Interface(lambda name: "Bye " + name, "text", "text", api_name="predict")
chat = gr.ChatInterface(lambda *args: "Hello " + args[0], api_name="chat")

demo = gr.TabbedInterface([hello_world, bye_world, chat], ["Hello World", "Bye World", "Chat"])

demo.launch()
```

## Tax Calculator

```python
import gradio as gr

def tax_calculator(income, marital_status, assets):
    tax_brackets = [(10, 0), (25, 8), (60, 12), (120, 20), (250, 30)]
    total_deductible = sum(cost for cost, deductible in zip(assets["Cost"], assets["Deductible"]) if deductible)
    taxable_income = income - total_deductible

    total_tax = 0
    for bracket, rate in tax_brackets:
        if taxable_income > bracket:
            total_tax += (taxable_income - bracket) * rate / 100

    if marital_status == "Married":
        total_tax *= 0.75
    elif marital_status == "Divorced":
        total_tax *= 0.8

    return round(total_tax)

demo = gr.Interface(
    tax_calculator,
    [
        "number",
        gr.Radio(["Single", "Married", "Divorced"]),
        gr.Dataframe(
            headers=["Item", "Cost", "Deductible"],
            datatype=["str", "number", "bool"],
            label="Assets Purchased this Year",
        ),
    ],
    gr.Number(label="Tax due"),
    examples=[
        [10000, "Married", [["Suit", 5000, True], ["Laptop (for work)", 800, False], ["Car", 1800, True]]],
        [80000, "Single", [["Suit", 800, True], ["Watch", 1800, True], ["Food", 800, True]]],
    ],
    live=True,
    api_name="predict"
)

demo.launch()
```

## Timer Simple

```python
import gradio as gr
import random
import time

with gr.Blocks() as demo:
    timer = gr.Timer(1)
    timestamp = gr.Number(label="Time")
    timer.tick(lambda: round(time.time()), outputs=timestamp, api_name="timestamp")

    number = gr.Number(lambda: random.randint(1, 10), every=timer, label="Random Number")
    with gr.Row():
        gr.Button("Start").click(lambda: gr.Timer(active=True), None, timer)
        gr.Button("Stop").click(lambda: gr.Timer(active=False), None, timer)
        gr.Button("Go Fast").click(lambda: 0.2, None, timer)

if __name__ == "__main__":
    demo.launch()
```

## Variable Outputs

```python
import gradio as gr

max_textboxes = 10

def variable_outputs(k):
    k = int(k)
    return [gr.Textbox(visible=True)]*k + [gr.Textbox(visible=False)]*(max_textboxes-k)

with gr.Blocks() as demo:
    s = gr.Slider(1, max_textboxes, value=max_textboxes, step=1, label="How many textboxes to show:")
    textboxes = []
    for i in range(max_textboxes):
        t = gr.Textbox(f"Textbox {i}")
        textboxes.append(t)

    s.change(variable_outputs, s, textboxes)

if __name__ == "__main__":
    demo.launch()
```

## Video Identity

```python
import gradio as gr
from gradio.media import get_video

def video_identity(video):
    return video

# get_video() returns file paths to sample media included with Gradio
demo = gr.Interface(video_identity,
                    gr.Video(),
                    "playable_video",
                    examples=[
                        get_video("world.mp4")
                    ],
                    cache_examples=True,
                    api_name="predict",)

demo.launch()
```

@@ -1,9 +1,9 @@
 ---
+source: "https://github.com/huggingface/skills/tree/main/skills/huggingface-jobs"
 name: hugging-face-jobs
-description:
-
-
-date_added: "2026-02-27"
+description: Run workloads on Hugging Face Jobs with managed CPUs, GPUs, TPUs, secrets, and Hub persistence.
+license: Complete terms in LICENSE.txt
+risk: unknown
 ---
 
 # Running Workloads on Hugging Face Jobs
@@ -66,12 +66,15 @@ Before starting any job, verify:
 
 **How to provide tokens:**
 ```python
-
-
-
+# hf_jobs MCP tool — $HF_TOKEN is auto-replaced with real token:
+{"secrets": {"HF_TOKEN": "$HF_TOKEN"}}
+
+# HfApi().run_uv_job() — MUST pass actual token:
+from huggingface_hub import get_token
+secrets={"HF_TOKEN": get_token()}
 ```
 
-**⚠️ CRITICAL:** The `$HF_TOKEN` placeholder is
+**⚠️ CRITICAL:** The `$HF_TOKEN` placeholder is ONLY auto-replaced by the `hf_jobs` MCP tool. When using `HfApi().run_uv_job()`, you MUST pass the real token via `get_token()`. Passing the literal string `"$HF_TOKEN"` results in a 9-character invalid token and 401 errors.
 
 ## Token Usage Guide
 
@@ -539,9 +542,12 @@ requests.post("https://your-api.com/results", json=results)
 
 **In job submission:**
 ```python
-
-
-
+# hf_jobs MCP tool:
+{"secrets": {"HF_TOKEN": "$HF_TOKEN"}}  # auto-replaced
+
+# HfApi().run_uv_job():
+from huggingface_hub import get_token
+secrets={"HF_TOKEN": get_token()}  # must pass real token
 ```
 
 **In script:**
@@ -560,7 +566,7 @@ api.upload_file(...)
 
 Before submitting:
 - [ ] Results persistence method chosen
-- [ ]
+- [ ] Token in secrets if using Hub (MCP: `"$HF_TOKEN"`, Python API: `get_token()`)
 - [ ] Script handles missing token gracefully
 - [ ] Test persistence path works
 
@@ -950,7 +956,7 @@ hf_jobs("uv", {
 ### Hub Push Failures
 
 **Fix:**
-1. Add to
+1. Add token to secrets: MCP uses `"$HF_TOKEN"` (auto-replaced), Python API uses `get_token()` (must pass real token)
 2. Verify token in script: `assert "HF_TOKEN" in os.environ`
 3. Check token permissions
 4. Verify repo exists or can be created
@@ -969,7 +975,7 @@ Add to PEP 723 header:
 
 **Fix:**
 1. Check `hf_whoami()` works locally
-2. Verify `
+2. Verify token in secrets — MCP: `"$HF_TOKEN"`, Python API: `get_token()` (NOT `"$HF_TOKEN"`)
 3. Re-login: `hf auth login`
 4. Check token has required permissions
 
@@ -1017,7 +1023,7 @@ Add to PEP 723 header:
 2. **Jobs are asynchronous** - Don't wait/poll; let user check when ready
 3. **Always set timeout** - Default 30 min may be insufficient; set appropriate timeout
 4. **Always persist results** - Environment is ephemeral; without persistence, all work is lost
-5. **Use tokens securely** -
+5. **Use tokens securely** - MCP: `secrets={"HF_TOKEN": "$HF_TOKEN"}`, Python API: `secrets={"HF_TOKEN": get_token()}` — `"$HF_TOKEN"` only works with MCP tool
 6. **Choose appropriate hardware** - Start small, scale up based on needs (see hardware guide)
 7. **Use UV scripts** - Default to `hf_jobs("uv", {...})` with inline scripts for Python workloads
 8. **Handle authentication** - Verify tokens are available before Hub operations
@@ -1033,6 +1039,7 @@ Add to PEP 723 header:
 | List jobs | `hf_jobs("ps")` | `hf jobs ps` | `list_jobs()` |
 | View logs | `hf_jobs("logs", {...})` | `hf jobs logs <id>` | `fetch_job_logs(job_id)` |
 | Cancel job | `hf_jobs("cancel", {...})` | `hf jobs cancel <id>` | `cancel_job(job_id)` |
-| Schedule UV | `hf_jobs("scheduled uv", {...})` |
-| Schedule Docker | `hf_jobs("scheduled run", {...})` |
-
+| Schedule UV | `hf_jobs("scheduled uv", {...})` | `hf jobs scheduled uv run SCHEDULE script.py` | `create_scheduled_uv_job()` |
+| Schedule Docker | `hf_jobs("scheduled run", {...})` | `hf jobs scheduled run SCHEDULE image cmd` | `create_scheduled_job()` |
+| List scheduled | `hf_jobs("scheduled ps")` | `hf jobs scheduled ps` | `list_scheduled_jobs()` |
+| Delete scheduled | `hf_jobs("scheduled delete", {...})` | `hf jobs scheduled delete <id>` | `delete_scheduled_job()` |