opencode-skills-antigravity 1.0.40 → 1.0.41

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (84)
  1. package/bundled-skills/.antigravity-install-manifest.json +7 -1
  2. package/bundled-skills/docs/integrations/jetski-cortex.md +3 -3
  3. package/bundled-skills/docs/integrations/jetski-gemini-loader/README.md +1 -1
  4. package/bundled-skills/docs/maintainers/repo-growth-seo.md +3 -3
  5. package/bundled-skills/docs/maintainers/skills-update-guide.md +1 -1
  6. package/bundled-skills/docs/sources/sources.md +2 -2
  7. package/bundled-skills/docs/users/bundles.md +1 -1
  8. package/bundled-skills/docs/users/claude-code-skills.md +1 -1
  9. package/bundled-skills/docs/users/gemini-cli-skills.md +1 -1
  10. package/bundled-skills/docs/users/getting-started.md +1 -1
  11. package/bundled-skills/docs/users/kiro-integration.md +1 -1
  12. package/bundled-skills/docs/users/usage.md +4 -4
  13. package/bundled-skills/docs/users/visual-guide.md +4 -4
  14. package/bundled-skills/hugging-face-cli/SKILL.md +192 -195
  15. package/bundled-skills/hugging-face-community-evals/SKILL.md +213 -0
  16. package/bundled-skills/hugging-face-community-evals/examples/.env.example +3 -0
  17. package/bundled-skills/hugging-face-community-evals/examples/USAGE_EXAMPLES.md +101 -0
  18. package/bundled-skills/hugging-face-community-evals/scripts/inspect_eval_uv.py +104 -0
  19. package/bundled-skills/hugging-face-community-evals/scripts/inspect_vllm_uv.py +306 -0
  20. package/bundled-skills/hugging-face-community-evals/scripts/lighteval_vllm_uv.py +297 -0
  21. package/bundled-skills/hugging-face-dataset-viewer/SKILL.md +120 -120
  22. package/bundled-skills/hugging-face-gradio/SKILL.md +304 -0
  23. package/bundled-skills/hugging-face-gradio/examples.md +613 -0
  24. package/bundled-skills/hugging-face-jobs/SKILL.md +25 -18
  25. package/bundled-skills/hugging-face-jobs/index.html +216 -0
  26. package/bundled-skills/hugging-face-jobs/references/hardware_guide.md +336 -0
  27. package/bundled-skills/hugging-face-jobs/references/hub_saving.md +352 -0
  28. package/bundled-skills/hugging-face-jobs/references/token_usage.md +570 -0
  29. package/bundled-skills/hugging-face-jobs/references/troubleshooting.md +475 -0
  30. package/bundled-skills/hugging-face-jobs/scripts/cot-self-instruct.py +718 -0
  31. package/bundled-skills/hugging-face-jobs/scripts/finepdfs-stats.py +546 -0
  32. package/bundled-skills/hugging-face-jobs/scripts/generate-responses.py +587 -0
  33. package/bundled-skills/hugging-face-model-trainer/SKILL.md +11 -12
  34. package/bundled-skills/hugging-face-model-trainer/references/gguf_conversion.md +296 -0
  35. package/bundled-skills/hugging-face-model-trainer/references/hardware_guide.md +283 -0
  36. package/bundled-skills/hugging-face-model-trainer/references/hub_saving.md +364 -0
  37. package/bundled-skills/hugging-face-model-trainer/references/local_training_macos.md +231 -0
  38. package/bundled-skills/hugging-face-model-trainer/references/reliability_principles.md +371 -0
  39. package/bundled-skills/hugging-face-model-trainer/references/trackio_guide.md +189 -0
  40. package/bundled-skills/hugging-face-model-trainer/references/training_methods.md +150 -0
  41. package/bundled-skills/hugging-face-model-trainer/references/training_patterns.md +203 -0
  42. package/bundled-skills/hugging-face-model-trainer/references/troubleshooting.md +282 -0
  43. package/bundled-skills/hugging-face-model-trainer/references/unsloth.md +313 -0
  44. package/bundled-skills/hugging-face-model-trainer/scripts/convert_to_gguf.py +424 -0
  45. package/bundled-skills/hugging-face-model-trainer/scripts/dataset_inspector.py +417 -0
  46. package/bundled-skills/hugging-face-model-trainer/scripts/estimate_cost.py +150 -0
  47. package/bundled-skills/hugging-face-model-trainer/scripts/train_dpo_example.py +106 -0
  48. package/bundled-skills/hugging-face-model-trainer/scripts/train_grpo_example.py +89 -0
  49. package/bundled-skills/hugging-face-model-trainer/scripts/train_sft_example.py +122 -0
  50. package/bundled-skills/hugging-face-model-trainer/scripts/unsloth_sft_example.py +512 -0
  51. package/bundled-skills/hugging-face-paper-publisher/SKILL.md +11 -4
  52. package/bundled-skills/hugging-face-paper-publisher/examples/example_usage.md +326 -0
  53. package/bundled-skills/hugging-face-paper-publisher/references/quick_reference.md +216 -0
  54. package/bundled-skills/hugging-face-paper-publisher/scripts/paper_manager.py +606 -0
  55. package/bundled-skills/hugging-face-paper-publisher/templates/arxiv.md +299 -0
  56. package/bundled-skills/hugging-face-paper-publisher/templates/ml-report.md +358 -0
  57. package/bundled-skills/hugging-face-paper-publisher/templates/modern.md +319 -0
  58. package/bundled-skills/hugging-face-paper-publisher/templates/standard.md +201 -0
  59. package/bundled-skills/hugging-face-papers/SKILL.md +241 -0
  60. package/bundled-skills/hugging-face-trackio/.claude-plugin/plugin.json +19 -0
  61. package/bundled-skills/hugging-face-trackio/SKILL.md +117 -0
  62. package/bundled-skills/hugging-face-trackio/references/alerts.md +196 -0
  63. package/bundled-skills/hugging-face-trackio/references/logging_metrics.md +206 -0
  64. package/bundled-skills/hugging-face-trackio/references/retrieving_metrics.md +251 -0
  65. package/bundled-skills/hugging-face-vision-trainer/SKILL.md +595 -0
  66. package/bundled-skills/hugging-face-vision-trainer/references/finetune_sam2_trainer.md +254 -0
  67. package/bundled-skills/hugging-face-vision-trainer/references/hub_saving.md +618 -0
  68. package/bundled-skills/hugging-face-vision-trainer/references/image_classification_training_notebook.md +279 -0
  69. package/bundled-skills/hugging-face-vision-trainer/references/object_detection_training_notebook.md +700 -0
  70. package/bundled-skills/hugging-face-vision-trainer/references/reliability_principles.md +310 -0
  71. package/bundled-skills/hugging-face-vision-trainer/references/timm_trainer.md +91 -0
  72. package/bundled-skills/hugging-face-vision-trainer/scripts/dataset_inspector.py +814 -0
  73. package/bundled-skills/hugging-face-vision-trainer/scripts/estimate_cost.py +217 -0
  74. package/bundled-skills/hugging-face-vision-trainer/scripts/image_classification_training.py +383 -0
  75. package/bundled-skills/hugging-face-vision-trainer/scripts/object_detection_training.py +710 -0
  76. package/bundled-skills/hugging-face-vision-trainer/scripts/sam_segmentation_training.py +382 -0
  77. package/bundled-skills/transformers-js/SKILL.md +639 -0
  78. package/bundled-skills/transformers-js/references/CACHE.md +339 -0
  79. package/bundled-skills/transformers-js/references/CONFIGURATION.md +390 -0
  80. package/bundled-skills/transformers-js/references/EXAMPLES.md +605 -0
  81. package/bundled-skills/transformers-js/references/MODEL_ARCHITECTURES.md +167 -0
  82. package/bundled-skills/transformers-js/references/PIPELINE_OPTIONS.md +545 -0
  83. package/bundled-skills/transformers-js/references/TEXT_GENERATION.md +315 -0
  84. package/package.json +1 -1
@@ -0,0 +1,595 @@
+ ---
+ source: "https://github.com/huggingface/skills/tree/main/skills/huggingface-vision-trainer"
+ name: hugging-face-vision-trainer
+ description: Train or fine-tune vision models on Hugging Face Jobs for detection, classification, and SAM or SAM2 segmentation.
+ risk: unknown
+ ---
+
+ # Vision Model Training on Hugging Face Jobs
+
+ Train object detection, image classification, and SAM/SAM2 segmentation models on managed cloud GPUs. No local GPU setup required—results are automatically saved to the Hugging Face Hub.
+
+ ## When to Use This Skill
+
+ Use this skill when users want to:
+ - Fine-tune object detection models (D-FINE, RT-DETR v2, DETR, YOLOS) on cloud GPUs or locally
+ - Fine-tune image classification models (timm: MobileNetV3, MobileViT, ResNet, ViT/DINOv3, or any Transformers classifier) on cloud GPUs or locally
+ - Fine-tune SAM or SAM2 models for segmentation / image matting using bbox or point prompts
+ - Train bounding-box detectors on custom datasets
+ - Train image classifiers on custom datasets
+ - Train segmentation models on custom mask datasets with prompts
+ - Run vision training jobs on Hugging Face Jobs infrastructure
+ - Ensure trained vision models are permanently saved to the Hub
+
+ ## Related Skills
+
+ - **`hugging-face-jobs`** — General HF Jobs infrastructure: token authentication, hardware flavors, timeout management, cost estimation, secrets, environment variables, scheduled jobs, and result persistence. **Refer to the Jobs skill for any non-training-specific Jobs questions** (e.g., "how do secrets work?", "what hardware is available?", "how do I pass tokens?").
+ - **`hugging-face-model-trainer`** — TRL-based language model training (SFT, DPO, GRPO). Use that skill for text/language model fine-tuning.
+
+ ## Local Script Execution
+
+ Helper scripts use PEP 723 inline dependencies. Run them with `uv run`:
+ ```bash
+ uv run scripts/dataset_inspector.py --dataset username/dataset-name --split train
+ uv run scripts/estimate_cost.py --help
+ ```
+
+ ## Prerequisites Checklist
+
+ Before starting any training job, verify:
+
+ ### Account & Authentication
+ - Hugging Face Account with [Pro](https://hf.co/pro), [Team](https://hf.co/enterprise), or [Enterprise](https://hf.co/enterprise) plan (Jobs require a paid plan)
+ - Authenticated login: Check with `hf_whoami()` (tool) or `hf auth whoami` (terminal)
+ - Token has **write** permissions
+ - **MUST pass token in job secrets** — see directives #1 and #2 below for syntax (MCP tool vs Python API)
+
+ ### Dataset Requirements — Object Detection
+ - Dataset must exist on Hub
+ - Annotations must use the `objects` column with `bbox`, `category` (and optionally `area`) sub-fields
+ - Bboxes can be in **xywh (COCO)** or **xyxy (Pascal VOC)** format — auto-detected and converted
+ - Categories can be **integers or strings** — strings are auto-remapped to integer IDs
+ - `image_id` column is **optional** — generated automatically if missing
+ - **ALWAYS validate unknown datasets** before GPU training (see Dataset Validation section)
+
+ ### Dataset Requirements — Image Classification
+ - Dataset must exist on Hub
+ - Must have an **`image` column** (PIL images) and a **`label` column** (integer class IDs or strings)
+ - The label column can be `ClassLabel` type (with names) or plain integers/strings — strings are auto-remapped
+ - Common column names auto-detected: `label`, `labels`, `class`, `fine_label`
+ - **ALWAYS validate unknown datasets** before GPU training (see Dataset Validation section)
+
+ ### Dataset Requirements — SAM/SAM2 Segmentation
+ - Dataset must exist on Hub
+ - Must have an **`image` column** (PIL images) and a **`mask` column** (binary ground-truth segmentation mask)
+ - Must have a **prompt** — either:
+   - A **`prompt` column** with JSON containing `{"bbox": [x0,y0,x1,y1]}` or `{"point": [x,y]}`
+   - OR a dedicated **`bbox`** column with `[x0,y0,x1,y1]` values
+   - OR a dedicated **`point`** column with `[x,y]` or `[[x,y],...]` values
+ - Bboxes should be in **xyxy** format (absolute pixel coordinates)
+ - Example dataset: `merve/MicroMat-mini` (image matting with bbox prompts)
+ - **ALWAYS validate unknown datasets** before GPU training (see Dataset Validation section)
+
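+ To make the three dataset layouts above concrete, here is a minimal sketch that builds one synthetic row per task with the `datasets` library (column names follow this section; the images, boxes, and labels are placeholder values):
+
+ ```python
+ import json
+
+ from datasets import Dataset
+ from PIL import Image
+
+ image = Image.new("RGB", (640, 480))  # placeholder image
+ mask = Image.new("L", (640, 480))     # placeholder binary mask
+
+ # Object detection: `objects` with bbox/category sub-fields (xywh or xyxy both accepted)
+ od = Dataset.from_dict({
+     "image": [image],
+     "objects": [{"bbox": [[100.0, 120.0, 80.0, 60.0]], "category": ["helmet"]}],
+ })
+
+ # Image classification: `image` + `label`
+ ic = Dataset.from_dict({"image": [image], "label": [0]})
+
+ # SAM/SAM2: `image` + `mask` + JSON `prompt` with an xyxy bbox in absolute pixels
+ sam = Dataset.from_dict({
+     "image": [image],
+     "mask": [mask],
+     "prompt": [json.dumps({"bbox": [100, 120, 180, 180]})],
+ })
+
+ print(od.features, ic.features, sam.features, sep="\n")
+ ```
+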
+ ### Critical Settings
+ - **Timeout must exceed expected training time** — Default 30 min is TOO SHORT. See directive #5 for recommended values.
+ - **Hub push must be enabled** — `push_to_hub=True`, `hub_model_id="username/model-name"`, token in `secrets`
+
+ ## Dataset Validation
+
+ **Validate dataset format BEFORE launching GPU training to prevent the #1 cause of training failures: format mismatches.**
+
+ **ALWAYS validate for** unknown/custom datasets or any dataset you haven't trained with before. **Skip for** `cppe-5` (the default in the training script).
+
+ ### Running the Inspector
+
+ **Option 1: Via HF Jobs (recommended — avoids local SSL/dependency issues):**
+ ```python
+ hf_jobs("uv", {
+     "script": "path/to/dataset_inspector.py",
+     "script_args": ["--dataset", "username/dataset-name", "--split", "train"]
+ })
+ ```
+
+ **Option 2: Locally:**
+ ```bash
+ uv run scripts/dataset_inspector.py --dataset username/dataset-name --split train
+ ```
+
+ **Option 3: Via `HfApi().run_uv_job()` (if hf_jobs MCP unavailable):**
+ ```python
+ from huggingface_hub import HfApi
+ api = HfApi()
+ api.run_uv_job(
+     script="scripts/dataset_inspector.py",
+     script_args=["--dataset", "username/dataset-name", "--split", "train"],
+     flavor="cpu-basic",
+     timeout=300,
+ )
+ ```
+
+ ### Reading Results
+
+ - **`✓ READY`** — Dataset is compatible, use directly
+ - **`✗ NEEDS FORMATTING`** — Needs preprocessing (mapping code provided in output)
+
+ ## Automatic Bbox Preprocessing
+
+ The object detection training script (`scripts/object_detection_training.py`) automatically handles bbox format detection (xyxy→xywh conversion), bbox sanitization, `image_id` generation, string category→integer remapping, and dataset truncation. **No manual preprocessing needed** — just ensure the dataset has `objects.bbox` and `objects.category` columns.
+
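+ For intuition, the xyxy→xywh conversion mentioned above just rewrites corner coordinates as corner-plus-size (a minimal sketch; the template's full detection and sanitization logic lives in `scripts/object_detection_training.py`):
+
+ ```python
+ def xyxy_to_xywh(box):
+     """Convert [x0, y0, x1, y1] corners to COCO-style [x, y, width, height]."""
+     x0, y0, x1, y1 = box
+     return [x0, y0, x1 - x0, y1 - y0]
+
+ print(xyxy_to_xywh([100.0, 120.0, 180.0, 180.0]))  # [100.0, 120.0, 80.0, 60.0]
+ ```
+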
+ ## Training workflow
+
+ Copy this checklist and track progress:
+
+ ```
+ Training Progress:
+ - [ ] Step 1: Verify prerequisites (account, token, dataset)
+ - [ ] Step 2: Validate dataset format (run dataset_inspector.py)
+ - [ ] Step 3: Ask user about dataset size and validation split
+ - [ ] Step 4: Prepare training script (OD: scripts/object_detection_training.py, IC: scripts/image_classification_training.py, SAM: scripts/sam_segmentation_training.py)
+ - [ ] Step 5: Save script locally, submit job, and report details
+ ```
+
+ **Step 1: Verify prerequisites**
+
+ Follow the Prerequisites Checklist above.
+
+ **Step 2: Validate dataset**
+
+ Run the dataset inspector BEFORE spending GPU time. See the "Dataset Validation" section above.
+
+ **Step 3: Ask user preferences**
+
+ ALWAYS use the AskUserQuestion tool with an option-style format:
+
+ ```python
+ AskUserQuestion({
+     "questions": [
+         {
+             "question": "Do you want to run a quick test with a subset of the data first?",
+             "header": "Dataset Size",
+             "options": [
+                 {"label": "Quick test run (10% of data)", "description": "Faster, cheaper (~30-60 min, ~$2-5) to validate setup"},
+                 {"label": "Full dataset (Recommended)", "description": "Complete training for best model quality"}
+             ],
+             "multiSelect": false
+         },
+         {
+             "question": "Do you want to create a validation split from the training data?",
+             "header": "Split data",
+             "options": [
+                 {"label": "Yes (Recommended)", "description": "Automatically split 15% of training data for validation"},
+                 {"label": "No", "description": "Use existing validation split from dataset"}
+             ],
+             "multiSelect": false
+         },
+         {
+             "question": "Which GPU hardware do you want to use?",
+             "header": "Hardware Flavor",
+             "options": [
+                 {"label": "t4-small ($0.40/hr)", "description": "1x T4, 16 GB VRAM — sufficient for all OD models under 100M params"},
+                 {"label": "l4x1 ($0.80/hr)", "description": "1x L4, 24 GB VRAM — more headroom for large images or batch sizes"},
+                 {"label": "a10g-large ($1.50/hr)", "description": "1x A10G, 24 GB VRAM — faster training, more CPU/RAM"},
+                 {"label": "a100-large ($2.50/hr)", "description": "1x A100, 80 GB VRAM — fastest, for very large datasets or image sizes"}
+             ],
+             "multiSelect": false
+         }
+     ]
+ })
+ ```
+
+ **Step 4: Prepare training script**
+
+ Use the production-ready templates: [scripts/object_detection_training.py](scripts/object_detection_training.py) for object detection, [scripts/image_classification_training.py](scripts/image_classification_training.py) for image classification, and [scripts/sam_segmentation_training.py](scripts/sam_segmentation_training.py) for SAM/SAM2 segmentation. All scripts use `HfArgumentParser` — all configuration is passed via CLI arguments in `script_args`, NOT by editing Python variables. For timm model details, see [references/timm_trainer.md](references/timm_trainer.md). For SAM2 training details, see [references/finetune_sam2_trainer.md](references/finetune_sam2_trainer.md).
+
+ **Step 5: Save script, submit job, and report**
+
+ 1. **Save the script locally** to `submitted_jobs/` in the workspace root (create if needed) with a descriptive name like `training_<dataset>_<YYYYMMDD_HHMMSS>.py` (see the helper sketch after this list). Tell the user the path.
+ 2. **Submit** using the `hf_jobs` MCP tool (preferred) or `HfApi().run_uv_job()` — see directive #1 for both methods. Pass all config via `script_args`.
+ 3. **Report** the job ID (from the `.id` attribute), monitoring URL, Trackio dashboard (`https://huggingface.co/spaces/{username}/trackio`), expected time, and estimated cost.
+ 4. **Wait for the user** to request status checks — don't poll automatically. Training jobs run asynchronously and can take hours.
+
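+ A minimal sketch of the Step 5 save convention (`save_job_script` is a hypothetical helper; the `submitted_jobs/` path and name pattern come from item 1 above):
+
+ ```python
+ from datetime import datetime
+ from pathlib import Path
+
+ def save_job_script(script_text: str, dataset: str) -> Path:
+     """Write the prepared training script to submitted_jobs/ with a timestamped name."""
+     out_dir = Path("submitted_jobs")
+     out_dir.mkdir(exist_ok=True)
+     stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
+     path = out_dir / f"training_{dataset.replace('/', '_')}_{stamp}.py"
+     path.write_text(script_text)
+     return path
+
+ # save_job_script(script_text, "cppe-5") -> submitted_jobs/training_cppe-5_20250101_120000.py
+ ```
+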
+ ## Critical directives
+
+ These rules prevent common failures. Follow them exactly.
+
+ ### 1. Job submission: `hf_jobs` MCP tool vs Python API
+
+ **`hf_jobs()` is an MCP tool, NOT a Python function.** Do NOT try to import it from `huggingface_hub`. Call it as a tool:
+
+ ```
+ hf_jobs("uv", {"script": training_script_content, "flavor": "a10g-large", "timeout": "4h", "secrets": {"HF_TOKEN": "$HF_TOKEN"}})
+ ```
+
+ **If the `hf_jobs` MCP tool is unavailable**, use the Python API directly:
+
+ ```python
+ from huggingface_hub import HfApi, get_token
+ api = HfApi()
+ job_info = api.run_uv_job(
+     script="path/to/training_script.py",  # file PATH, NOT content
+     script_args=["--dataset_name", "cppe-5", ...],
+     flavor="a10g-large",
+     timeout=14400,  # seconds (4 hours)
+     env={"PYTHONUNBUFFERED": "1"},
+     secrets={"HF_TOKEN": get_token()},  # MUST use get_token(), NOT "$HF_TOKEN"
+ )
+ print(f"Job ID: {job_info.id}")
+ ```
+
+ **Critical differences between the two methods:**
+
+ | | `hf_jobs` MCP tool | `HfApi().run_uv_job()` |
+ |---|---|---|
+ | `script` param | Python code string or URL (NOT local paths) | File path to `.py` file (NOT content) |
+ | Token in secrets | `"$HF_TOKEN"` (auto-replaced) | `get_token()` (actual token value) |
+ | Timeout format | String (`"4h"`) | Seconds (`14400`) |
+
+ **Rules for both methods:**
+ - The training script MUST include PEP 723 inline metadata with dependencies (example below)
+ - Do NOT use `image` or `command` parameters (those belong to `run_job()`, not `run_uv_job()`)
+
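+ For reference, a PEP 723 header sits at the top of the training script and looks like this (the dependency list below is illustrative; match it to what the script actually imports):
+
+ ```python
+ # /// script
+ # requires-python = ">=3.10"
+ # dependencies = [
+ #     "torch",
+ #     "transformers",
+ #     "datasets",
+ #     "albumentations",
+ #     "torchmetrics",
+ # ]
+ # ///
+ ```
+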
+ ### 2. Authentication via job secrets + explicit hub_token injection
+
+ **Job config** MUST include the token in secrets — syntax depends on submission method (see table above).
+
+ **Training script requirement:** The Transformers `Trainer` calls `create_repo(token=self.args.hub_token)` during `__init__()` when `push_to_hub=True`. The training script MUST inject `HF_TOKEN` into `training_args.hub_token` AFTER parsing args but BEFORE creating the `Trainer`. The template `scripts/object_detection_training.py` already includes this:
+
+ ```python
+ hf_token = os.environ.get("HF_TOKEN")
+ if training_args.push_to_hub and not training_args.hub_token:
+     if hf_token:
+         training_args.hub_token = hf_token
+ ```
+
+ If you write a custom script, you MUST include this token injection before the `Trainer(...)` call.
+
+ - Do NOT call `login()` in custom scripts unless replicating the full pattern from `scripts/object_detection_training.py`
+ - Do NOT rely on implicit token resolution (`hub_token=None`) — unreliable in Jobs
+ - See the `hugging-face-jobs` skill → *Token Usage Guide* for full details
+
+ ### 3. JobInfo attribute
+
+ Access the job identifier using `.id` (NOT `.job_id` or `.name` — these don't exist):
+
+ ```python
+ job_info = api.run_uv_job(...)  # or hf_jobs("uv", {...})
+ job_id = job_info.id  # Correct — returns a string like "687fb701029421ae5549d998"
+ ```
+
+ ### 4. Required training flags and HfArgumentParser boolean syntax
+
+ `scripts/object_detection_training.py` uses `HfArgumentParser` — all config is passed via `script_args`. Boolean arguments have two syntaxes:
+
+ - **`bool` fields** (e.g., `push_to_hub`, `do_train`): Use as bare flags (`--push_to_hub`) or negate with the `--no_` prefix (`--no_remove_unused_columns`)
+ - **`Optional[bool]` fields** (e.g., `greater_is_better`): MUST pass an explicit value (`--greater_is_better True`). Bare `--greater_is_better` causes `error: expected one argument` (see the demo after this list)
+
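+ A runnable demo of the two syntaxes (`DemoArgs` is a hypothetical dataclass; it mirrors how `HfArgumentParser` treats `bool` vs `Optional[bool]` fields):
+
+ ```python
+ from dataclasses import dataclass, field
+ from typing import Optional
+
+ from transformers import HfArgumentParser
+
+ @dataclass
+ class DemoArgs:
+     push_to_hub: bool = field(default=False)                 # bare flag works: --push_to_hub
+     greater_is_better: Optional[bool] = field(default=None)  # needs a value: --greater_is_better True
+
+ (args,) = HfArgumentParser(DemoArgs).parse_args_into_dataclasses(
+     ["--push_to_hub", "--greater_is_better", "True"]
+ )
+ print(args)  # DemoArgs(push_to_hub=True, greater_is_better=True)
+ # A bare "--greater_is_better" instead would fail with: error: expected one argument
+ ```
+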
+ Required flags for object detection:
+
+ ```
+ --no_remove_unused_columns        # MUST: preserves image column for pixel_values
+ --no_eval_do_concat_batches       # MUST: images have different numbers of target boxes
+ --push_to_hub                     # MUST: environment is ephemeral
+ --hub_model_id username/model-name
+ --metric_for_best_model eval_map
+ --greater_is_better True          # MUST pass "True" explicitly (Optional[bool])
+ --do_train
+ --do_eval
+ ```
+
+ Required flags for image classification:
+
+ ```
+ --no_remove_unused_columns        # MUST: preserves image column for pixel_values
+ --push_to_hub                     # MUST: environment is ephemeral
+ --hub_model_id username/model-name
+ --metric_for_best_model eval_accuracy
+ --greater_is_better True          # MUST pass "True" explicitly (Optional[bool])
+ --do_train
+ --do_eval
+ ```
+
+ Required flags for SAM/SAM2 segmentation:
+
+ ```
+ --remove_unused_columns False     # MUST: preserves input_boxes/input_points
+ --push_to_hub                     # MUST: environment is ephemeral
+ --hub_model_id username/model-name
+ --do_train
+ --prompt_type bbox                # or "point"
+ --dataloader_pin_memory False     # MUST: avoids pin_memory issues with custom collator
+ ```
+
+ ### 5. Timeout management
+
+ The default 30 min is TOO SHORT for object detection. Set a minimum of 2-4 hours. Add a 30% buffer for model loading, preprocessing, and Hub push (see the sketch after the table).
+
+ | Scenario | Timeout |
+ |----------|---------|
+ | Quick test (100-200 images, 5-10 epochs) | 1h |
+ | Development (500-1K images, 15-20 epochs) | 2-3h |
+ | Production (1K-5K images, 30 epochs) | 4-6h |
+ | Large dataset (5K+ images) | 6-12h |
+
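+ The buffer arithmetic as a tiny helper (`timeout_seconds` is hypothetical; it returns seconds for `HfApi().run_uv_job()`, while the MCP tool takes a string like `"4h"`):
+
+ ```python
+ def timeout_seconds(expected_hours: float, buffer: float = 0.30) -> int:
+     """Expected training time plus a 30% buffer for loading, preprocessing, and Hub push."""
+     return int(expected_hours * (1 + buffer) * 3600)
+
+ print(timeout_seconds(3.0))  # 14040 -> round up to 14400 (4h)
+ ```
+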
+ ### 6. Trackio monitoring
+
+ Trackio is **always enabled** in the object detection training script — it calls `trackio.init()` and `trackio.finish()` automatically. No need to pass `--report_to trackio`. The project name is taken from `--output_dir` and the run name from `--run_name`. For image classification, pass `--report_to trackio` in `script_args`.
+
+ Dashboard at: `https://huggingface.co/spaces/{username}/trackio`
+
+ ## Model & hardware selection
+
+ ### Recommended object detection models
+
+ | Model | Params | Use case |
+ |-------|--------|----------|
+ | `ustc-community/dfine-small-coco` | 10.4M | Best starting point — fast, cheap, SOTA quality |
+ | `PekingU/rtdetr_v2_r18vd` | 20.2M | Lightweight real-time detector |
+ | `ustc-community/dfine-large-coco` | 31.4M | Higher accuracy, still efficient |
+ | `PekingU/rtdetr_v2_r50vd` | 43M | Strong real-time baseline |
+ | `ustc-community/dfine-xlarge-obj365` | 63.5M | Best accuracy (pretrained on Objects365) |
+ | `PekingU/rtdetr_v2_r101vd` | 76M | Largest RT-DETR v2 variant |
+
+ Start with `ustc-community/dfine-small-coco` for fast iteration. Move to D-FINE Large or RT-DETR v2 R50 for better accuracy.
+
+ ### Recommended image classification models
+
+ All `timm/` models work out of the box via `AutoModelForImageClassification` (loaded as `TimmWrapperForImageClassification`). See [references/timm_trainer.md](references/timm_trainer.md) for details.
+
+ | Model | Params | Use case |
+ |-------|--------|----------|
+ | `timm/mobilenetv3_small_100.lamb_in1k` | 2.5M | Ultra-lightweight — mobile/edge, fastest training |
+ | `timm/mobilevit_s.cvnets_in1k` | 5.6M | Mobile transformer — good accuracy/speed trade-off |
+ | `timm/resnet50.a1_in1k` | 25.6M | Strong CNN baseline — reliable, well-studied |
+ | `timm/vit_base_patch16_dinov3.lvd1689m` | 86.6M | Best accuracy — DINOv3 self-supervised ViT |
+
+ Start with `timm/mobilenetv3_small_100.lamb_in1k` for fast iteration. Move to `timm/resnet50.a1_in1k` or `timm/vit_base_patch16_dinov3.lvd1689m` for better accuracy.
+
+ ### Recommended SAM/SAM2 segmentation models
+
+ | Model | Params | Use case |
+ |-------|--------|----------|
+ | `facebook/sam2.1-hiera-tiny` | 38.9M | Fastest SAM2 — good for quick experiments |
+ | `facebook/sam2.1-hiera-small` | 46.0M | Best starting point — good quality/speed balance |
+ | `facebook/sam2.1-hiera-base-plus` | 80.8M | Higher capacity for complex segmentation |
+ | `facebook/sam2.1-hiera-large` | 224.4M | Best SAM2 accuracy — requires more VRAM |
+ | `facebook/sam-vit-base` | 93.7M | Original SAM — ViT-B backbone |
+ | `facebook/sam-vit-large` | 312.3M | Original SAM — ViT-L backbone |
+ | `facebook/sam-vit-huge` | 641.1M | Original SAM — ViT-H, best SAM v1 accuracy |
+
+ Start with `facebook/sam2.1-hiera-small` for fast iteration. SAM2 models are generally more efficient than SAM v1 at similar quality. Only the mask decoder is trained by default (the vision and prompt encoders are frozen).
+
+ ### Hardware recommendation
+
+ All recommended OD and IC models are under 100M params — **`t4-small` (16 GB VRAM, $0.40/hr) is sufficient for all of them.** Image classification models are generally smaller and faster than object detection models — `t4-small` handles even ViT-Base comfortably. For SAM2 models up to `hiera-base-plus`, `t4-small` is also sufficient since only the mask decoder is trained. For `sam2.1-hiera-large` or SAM v1 models, use `l4x1` or `a10g-large`.
+
+ Only upgrade if you hit OOM from large batch sizes — reduce the batch size first before switching hardware. Common upgrade path: `t4-small` → `l4x1` ($0.80/hr, 24 GB) → `a10g-large` ($1.50/hr, 24 GB).
+
+ For the full hardware flavor list, refer to the `hugging-face-jobs` skill. For cost estimation, run `scripts/estimate_cost.py`.
+
+ ## Quick start — Object Detection
+
+ The `script_args` below are the same for both submission methods. See directive #1 for the critical differences between them.
+
+ ```python
+ OD_SCRIPT_ARGS = [
+     "--model_name_or_path", "ustc-community/dfine-small-coco",
+     "--dataset_name", "cppe-5",
+     "--image_square_size", "640",
+     "--output_dir", "dfine_finetuned",
+     "--num_train_epochs", "30",
+     "--per_device_train_batch_size", "8",
+     "--learning_rate", "5e-5",
+     "--eval_strategy", "epoch",
+     "--save_strategy", "epoch",
+     "--save_total_limit", "2",
+     "--load_best_model_at_end",
+     "--metric_for_best_model", "eval_map",
+     "--greater_is_better", "True",
+     "--no_remove_unused_columns",
+     "--no_eval_do_concat_batches",
+     "--push_to_hub",
+     "--hub_model_id", "username/model-name",
+     "--do_train",
+     "--do_eval",
+ ]
+ ```
+
+ ```python
+ from huggingface_hub import HfApi, get_token
+ api = HfApi()
+ job_info = api.run_uv_job(
+     script="scripts/object_detection_training.py",
+     script_args=OD_SCRIPT_ARGS,
+     flavor="t4-small",
+     timeout=14400,
+     env={"PYTHONUNBUFFERED": "1"},
+     secrets={"HF_TOKEN": get_token()},
+ )
+ print(f"Job ID: {job_info.id}")
+ ```
+
+ ### Key OD `script_args`
+
+ - `--model_name_or_path` — recommended: `"ustc-community/dfine-small-coco"` (see model table above)
+ - `--dataset_name` — the Hub dataset ID
+ - `--image_square_size` — 480 (fast iteration) or 800 (better accuracy)
+ - `--hub_model_id` — `"username/model-name"` for Hub persistence
+ - `--num_train_epochs` — 30 typical for convergence
+ - `--train_val_split` — fraction to split for validation (default 0.15), set if dataset lacks a validation split
+ - `--max_train_samples` — truncate training set (useful for quick test runs, e.g. `"785"` for ~10% of a 7.8K dataset; see the variant after this list)
+ - `--max_eval_samples` — truncate evaluation set
+
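+ For a quick test run (the 10% option from Step 3), extend the list; argparse keeps the last occurrence of a repeated flag, so appended values override earlier ones (illustrative numbers):
+
+ ```python
+ QUICK_TEST_ARGS = OD_SCRIPT_ARGS + [
+     "--max_train_samples", "100",
+     "--max_eval_samples", "25",
+     "--num_train_epochs", "5",  # overrides the "30" earlier in the list
+ ]
+ ```
+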
+ ## Quick start — Image Classification
+
+ ```python
+ IC_SCRIPT_ARGS = [
+     "--model_name_or_path", "timm/mobilenetv3_small_100.lamb_in1k",
+     "--dataset_name", "ethz/food101",
+     "--output_dir", "food101_classifier",
+     "--num_train_epochs", "5",
+     "--per_device_train_batch_size", "32",
+     "--per_device_eval_batch_size", "32",
+     "--learning_rate", "5e-5",
+     "--eval_strategy", "epoch",
+     "--save_strategy", "epoch",
+     "--save_total_limit", "2",
+     "--load_best_model_at_end",
+     "--metric_for_best_model", "eval_accuracy",
+     "--greater_is_better", "True",
+     "--no_remove_unused_columns",
+     "--push_to_hub",
+     "--hub_model_id", "username/food101-classifier",
+     "--do_train",
+     "--do_eval",
+ ]
+ ```
+
+ ```python
+ from huggingface_hub import HfApi, get_token
+ api = HfApi()
+ job_info = api.run_uv_job(
+     script="scripts/image_classification_training.py",
+     script_args=IC_SCRIPT_ARGS,
+     flavor="t4-small",
+     timeout=7200,
+     env={"PYTHONUNBUFFERED": "1"},
+     secrets={"HF_TOKEN": get_token()},
+ )
+ print(f"Job ID: {job_info.id}")
+ ```
+
+ ### Key IC `script_args`
+
+ - `--model_name_or_path` — any `timm/` model or Transformers classification model (see model table above)
+ - `--dataset_name` — the Hub dataset ID
+ - `--image_column_name` — column containing PIL images (default: `"image"`)
+ - `--label_column_name` — column containing class labels (default: `"label"`)
+ - `--hub_model_id` — `"username/model-name"` for Hub persistence
+ - `--num_train_epochs` — 3-5 typical for classification (fewer than OD)
+ - `--per_device_train_batch_size` — 16-64 (classification models use less memory than OD)
+ - `--train_val_split` — fraction to split for validation (default 0.15), set if dataset lacks a validation split
+ - `--max_train_samples` / `--max_eval_samples` — truncate for quick tests
+
+ ## Quick start — SAM/SAM2 Segmentation
+
+ ```python
+ SAM_SCRIPT_ARGS = [
+     "--model_name_or_path", "facebook/sam2.1-hiera-small",
+     "--dataset_name", "merve/MicroMat-mini",
+     "--prompt_type", "bbox",
+     "--prompt_column_name", "prompt",
+     "--output_dir", "sam2-finetuned",
+     "--num_train_epochs", "30",
+     "--per_device_train_batch_size", "4",
+     "--learning_rate", "1e-5",
+     "--logging_steps", "1",
+     "--save_strategy", "epoch",
+     "--save_total_limit", "2",
+     "--remove_unused_columns", "False",
+     "--dataloader_pin_memory", "False",
+     "--push_to_hub",
+     "--hub_model_id", "username/sam2-finetuned",
+     "--do_train",
+     "--report_to", "trackio",
+ ]
+ ```
+
+ ```python
+ from huggingface_hub import HfApi, get_token
+ api = HfApi()
+ job_info = api.run_uv_job(
+     script="scripts/sam_segmentation_training.py",
+     script_args=SAM_SCRIPT_ARGS,
+     flavor="t4-small",
+     timeout=7200,
+     env={"PYTHONUNBUFFERED": "1"},
+     secrets={"HF_TOKEN": get_token()},
+ )
+ print(f"Job ID: {job_info.id}")
+ ```
+
+ ### Key SAM `script_args`
+
+ - `--model_name_or_path` — SAM or SAM2 model (see model table above); the script auto-detects SAM vs SAM2
+ - `--dataset_name` — the Hub dataset ID (e.g., `"merve/MicroMat-mini"`)
+ - `--prompt_type` — `"bbox"` or `"point"` — type of prompt in the dataset
+ - `--prompt_column_name` — column with JSON-encoded prompts (default: `"prompt"`)
+ - `--bbox_column_name` — dedicated bbox column (alternative to JSON prompt column)
+ - `--point_column_name` — dedicated point column (alternative to JSON prompt column)
+ - `--mask_column_name` — column with ground-truth masks (default: `"mask"`)
+ - `--hub_model_id` — `"username/model-name"` for Hub persistence
+ - `--num_train_epochs` — 20-30 typical for SAM fine-tuning
+ - `--per_device_train_batch_size` — 2-4 (SAM models use significant memory)
+ - `--freeze_vision_encoder` / `--freeze_prompt_encoder` — freeze encoder weights (default: both frozen, only the mask decoder trains)
+ - `--train_val_split` — fraction to split for validation (default 0.1)
+
+ ## Checking job status
+
+ **MCP tool (if available):**
+ ```
+ hf_jobs("ps")                                  # List all jobs
+ hf_jobs("logs", {"job_id": "your-job-id"})     # View logs
+ hf_jobs("inspect", {"job_id": "your-job-id"})  # Job details
+ ```
+
+ **Python API fallback:**
+ ```python
+ from huggingface_hub import HfApi
+ api = HfApi()
+ api.list_jobs()                          # List all jobs
+ for line in api.fetch_job_logs(job_id="your-job-id"):  # View logs
+     print(line)
+ api.inspect_job(job_id="your-job-id")    # Job details
+ ```
+
+ ## Common failure modes
+
+ ### OOM (CUDA out of memory)
+ Reduce `per_device_train_batch_size` (try 4, then 2), reduce `--image_square_size`, or upgrade hardware.
+
+ ### Dataset format errors
+ Run `scripts/dataset_inspector.py` first. The training script auto-detects xyxy vs xywh, converts string categories to integer IDs, and adds `image_id` if missing. Ensure `objects.bbox` contains 4-value coordinate lists in absolute pixels and `objects.category` contains either integer IDs or string labels.
+
+ ### Hub push failures (401)
+ Verify: (1) job secrets include the token (see directive #2), (2) the script sets `training_args.hub_token` BEFORE creating the `Trainer`, (3) `push_to_hub=True` is set, (4) correct `hub_model_id`, (5) the token has write permissions.
+
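+ To rule out token problems (checks 1 and 5) before resubmitting, a quick sketch with `huggingface_hub` (the `auth.accessToken.role` field applies to classic user tokens; fine-grained tokens report permissions differently):
+
+ ```python
+ from huggingface_hub import HfApi
+
+ info = HfApi().whoami()
+ print(info["name"])  # the namespace your hub_model_id should live under
+ print(info.get("auth", {}).get("accessToken", {}).get("role"))  # expect "write"
+ ```
+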
+ ### Job timeout
+ Increase the timeout (see the directive #5 table), reduce epochs or dataset size, or use a checkpoint strategy with `hub_strategy="every_save"`.
+
+ ### KeyError: 'test' (missing test split)
+ The object detection training script handles this gracefully — it falls back to the `validation` split. Ensure you're using the latest `scripts/object_detection_training.py`.
+
+ ### Single-class dataset: "iteration over a 0-d tensor"
+ `torchmetrics.MeanAveragePrecision` returns scalar (0-d) tensors for per-class metrics when there's only one class. The template `scripts/object_detection_training.py` handles this by calling `.unsqueeze(0)` on these tensors. Ensure you're using the latest template.
+
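+ The guard from the template, reduced to a sketch (synthetic tensors stand in for the `torchmetrics` per-class output):
+
+ ```python
+ import torch
+
+ def as_1d(t: torch.Tensor) -> torch.Tensor:
+     """Promote 0-d per-class metric tensors to 1-d so they can be iterated."""
+     return t.unsqueeze(0) if t.ndim == 0 else t
+
+ # Single-class case: per-class values come back as scalars (0-d)
+ classes, map_per_class = torch.tensor(0), torch.tensor(0.42)
+ for cid, ap in zip(as_1d(classes), as_1d(map_per_class)):
+     print(int(cid), float(ap))
+ ```
+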
+ ### Poor detection performance (mAP < 0.15)
+ Increase epochs (30-50), ensure 500+ images, check per-class mAP for imbalanced classes, try different learning rates (1e-5 to 1e-4), increase image size.
+
+ For comprehensive troubleshooting: see [references/reliability_principles.md](references/reliability_principles.md)
+
+ ## Reference files
+
+ - [scripts/object_detection_training.py](scripts/object_detection_training.py) — Production-ready object detection training script
+ - [scripts/image_classification_training.py](scripts/image_classification_training.py) — Production-ready image classification training script (supports timm models)
+ - [scripts/sam_segmentation_training.py](scripts/sam_segmentation_training.py) — Production-ready SAM/SAM2 segmentation training script (bbox & point prompts)
+ - [scripts/dataset_inspector.py](scripts/dataset_inspector.py) — Validate dataset format for OD, classification, and SAM segmentation
+ - [scripts/estimate_cost.py](scripts/estimate_cost.py) — Estimate training costs for any vision model (includes SAM/SAM2)
+ - [references/object_detection_training_notebook.md](references/object_detection_training_notebook.md) — Object detection training workflow, augmentation strategies, and training patterns
+ - [references/image_classification_training_notebook.md](references/image_classification_training_notebook.md) — Image classification training workflow with ViT, preprocessing, and evaluation
+ - [references/finetune_sam2_trainer.md](references/finetune_sam2_trainer.md) — SAM2 fine-tuning walkthrough with the MicroMat dataset, DiceCE loss, and Trainer integration
+ - [references/timm_trainer.md](references/timm_trainer.md) — Using timm models with the HF Trainer (TimmWrapper, transforms, full example)
+ - [references/hub_saving.md](references/hub_saving.md) — Detailed Hub persistence guide and verification checklist
+ - [references/reliability_principles.md](references/reliability_principles.md) — Failure prevention principles from production experience
+
+ ## External links
+
+ - [Transformers Object Detection Guide](https://huggingface.co/docs/transformers/tasks/object_detection)
+ - [Transformers Image Classification Guide](https://huggingface.co/docs/transformers/tasks/image_classification)
+ - [DETR Model Documentation](https://huggingface.co/docs/transformers/model_doc/detr)
+ - [ViT Model Documentation](https://huggingface.co/docs/transformers/model_doc/vit)
+ - [HF Jobs Guide](https://huggingface.co/docs/huggingface_hub/guides/jobs) — Main Jobs documentation
+ - [HF Jobs Configuration](https://huggingface.co/docs/hub/en/jobs-configuration) — Hardware, secrets, timeouts, namespaces
+ - [HF Jobs CLI Reference](https://huggingface.co/docs/huggingface_hub/guides/cli#hf-jobs) — Command line interface
+ - [Object Detection Models](https://huggingface.co/models?pipeline_tag=object-detection)
+ - [Image Classification Models](https://huggingface.co/models?pipeline_tag=image-classification)
+ - [SAM2 Model Documentation](https://huggingface.co/docs/transformers/model_doc/sam2)
+ - [SAM Model Documentation](https://huggingface.co/docs/transformers/model_doc/sam)
+ - [Object Detection Datasets](https://huggingface.co/datasets?task_categories=task_categories:object-detection)
+ - [Image Classification Datasets](https://huggingface.co/datasets?task_categories=task_categories:image-classification)