opencode-skills-antigravity 1.0.39 → 1.0.41

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (91)
  1. package/bundled-skills/.antigravity-install-manifest.json +10 -1
  2. package/bundled-skills/docs/integrations/jetski-cortex.md +3 -3
  3. package/bundled-skills/docs/integrations/jetski-gemini-loader/README.md +1 -1
  4. package/bundled-skills/docs/maintainers/repo-growth-seo.md +3 -3
  5. package/bundled-skills/docs/maintainers/security-findings-triage-2026-03-29-refresh.csv +34 -0
  6. package/bundled-skills/docs/maintainers/security-findings-triage-2026-03-29-refresh.md +2 -0
  7. package/bundled-skills/docs/maintainers/skills-update-guide.md +1 -1
  8. package/bundled-skills/docs/sources/sources.md +2 -2
  9. package/bundled-skills/docs/users/bundles.md +1 -1
  10. package/bundled-skills/docs/users/claude-code-skills.md +1 -1
  11. package/bundled-skills/docs/users/gemini-cli-skills.md +1 -1
  12. package/bundled-skills/docs/users/getting-started.md +1 -1
  13. package/bundled-skills/docs/users/kiro-integration.md +1 -1
  14. package/bundled-skills/docs/users/usage.md +4 -4
  15. package/bundled-skills/docs/users/visual-guide.md +4 -4
  16. package/bundled-skills/hugging-face-cli/SKILL.md +192 -195
  17. package/bundled-skills/hugging-face-community-evals/SKILL.md +213 -0
  18. package/bundled-skills/hugging-face-community-evals/examples/.env.example +3 -0
  19. package/bundled-skills/hugging-face-community-evals/examples/USAGE_EXAMPLES.md +101 -0
  20. package/bundled-skills/hugging-face-community-evals/scripts/inspect_eval_uv.py +104 -0
  21. package/bundled-skills/hugging-face-community-evals/scripts/inspect_vllm_uv.py +306 -0
  22. package/bundled-skills/hugging-face-community-evals/scripts/lighteval_vllm_uv.py +297 -0
  23. package/bundled-skills/hugging-face-dataset-viewer/SKILL.md +120 -120
  24. package/bundled-skills/hugging-face-gradio/SKILL.md +304 -0
  25. package/bundled-skills/hugging-face-gradio/examples.md +613 -0
  26. package/bundled-skills/hugging-face-jobs/SKILL.md +25 -18
  27. package/bundled-skills/hugging-face-jobs/index.html +216 -0
  28. package/bundled-skills/hugging-face-jobs/references/hardware_guide.md +336 -0
  29. package/bundled-skills/hugging-face-jobs/references/hub_saving.md +352 -0
  30. package/bundled-skills/hugging-face-jobs/references/token_usage.md +570 -0
  31. package/bundled-skills/hugging-face-jobs/references/troubleshooting.md +475 -0
  32. package/bundled-skills/hugging-face-jobs/scripts/cot-self-instruct.py +718 -0
  33. package/bundled-skills/hugging-face-jobs/scripts/finepdfs-stats.py +546 -0
  34. package/bundled-skills/hugging-face-jobs/scripts/generate-responses.py +587 -0
  35. package/bundled-skills/hugging-face-model-trainer/SKILL.md +11 -12
  36. package/bundled-skills/hugging-face-model-trainer/references/gguf_conversion.md +296 -0
  37. package/bundled-skills/hugging-face-model-trainer/references/hardware_guide.md +283 -0
  38. package/bundled-skills/hugging-face-model-trainer/references/hub_saving.md +364 -0
  39. package/bundled-skills/hugging-face-model-trainer/references/local_training_macos.md +231 -0
  40. package/bundled-skills/hugging-face-model-trainer/references/reliability_principles.md +371 -0
  41. package/bundled-skills/hugging-face-model-trainer/references/trackio_guide.md +189 -0
  42. package/bundled-skills/hugging-face-model-trainer/references/training_methods.md +150 -0
  43. package/bundled-skills/hugging-face-model-trainer/references/training_patterns.md +203 -0
  44. package/bundled-skills/hugging-face-model-trainer/references/troubleshooting.md +282 -0
  45. package/bundled-skills/hugging-face-model-trainer/references/unsloth.md +313 -0
  46. package/bundled-skills/hugging-face-model-trainer/scripts/convert_to_gguf.py +424 -0
  47. package/bundled-skills/hugging-face-model-trainer/scripts/dataset_inspector.py +417 -0
  48. package/bundled-skills/hugging-face-model-trainer/scripts/estimate_cost.py +150 -0
  49. package/bundled-skills/hugging-face-model-trainer/scripts/train_dpo_example.py +106 -0
  50. package/bundled-skills/hugging-face-model-trainer/scripts/train_grpo_example.py +89 -0
  51. package/bundled-skills/hugging-face-model-trainer/scripts/train_sft_example.py +122 -0
  52. package/bundled-skills/hugging-face-model-trainer/scripts/unsloth_sft_example.py +512 -0
  53. package/bundled-skills/hugging-face-paper-publisher/SKILL.md +11 -4
  54. package/bundled-skills/hugging-face-paper-publisher/examples/example_usage.md +326 -0
  55. package/bundled-skills/hugging-face-paper-publisher/references/quick_reference.md +216 -0
  56. package/bundled-skills/hugging-face-paper-publisher/scripts/paper_manager.py +606 -0
  57. package/bundled-skills/hugging-face-paper-publisher/templates/arxiv.md +299 -0
  58. package/bundled-skills/hugging-face-paper-publisher/templates/ml-report.md +358 -0
  59. package/bundled-skills/hugging-face-paper-publisher/templates/modern.md +319 -0
  60. package/bundled-skills/hugging-face-paper-publisher/templates/standard.md +201 -0
  61. package/bundled-skills/hugging-face-papers/SKILL.md +241 -0
  62. package/bundled-skills/hugging-face-trackio/.claude-plugin/plugin.json +19 -0
  63. package/bundled-skills/hugging-face-trackio/SKILL.md +117 -0
  64. package/bundled-skills/hugging-face-trackio/references/alerts.md +196 -0
  65. package/bundled-skills/hugging-face-trackio/references/logging_metrics.md +206 -0
  66. package/bundled-skills/hugging-face-trackio/references/retrieving_metrics.md +251 -0
  67. package/bundled-skills/hugging-face-vision-trainer/SKILL.md +595 -0
  68. package/bundled-skills/hugging-face-vision-trainer/references/finetune_sam2_trainer.md +254 -0
  69. package/bundled-skills/hugging-face-vision-trainer/references/hub_saving.md +618 -0
  70. package/bundled-skills/hugging-face-vision-trainer/references/image_classification_training_notebook.md +279 -0
  71. package/bundled-skills/hugging-face-vision-trainer/references/object_detection_training_notebook.md +700 -0
  72. package/bundled-skills/hugging-face-vision-trainer/references/reliability_principles.md +310 -0
  73. package/bundled-skills/hugging-face-vision-trainer/references/timm_trainer.md +91 -0
  74. package/bundled-skills/hugging-face-vision-trainer/scripts/dataset_inspector.py +814 -0
  75. package/bundled-skills/hugging-face-vision-trainer/scripts/estimate_cost.py +217 -0
  76. package/bundled-skills/hugging-face-vision-trainer/scripts/image_classification_training.py +383 -0
  77. package/bundled-skills/hugging-face-vision-trainer/scripts/object_detection_training.py +710 -0
  78. package/bundled-skills/hugging-face-vision-trainer/scripts/sam_segmentation_training.py +382 -0
  79. package/bundled-skills/jq/SKILL.md +273 -0
  80. package/bundled-skills/odoo-edi-connector/SKILL.md +32 -10
  81. package/bundled-skills/odoo-woocommerce-bridge/SKILL.md +9 -5
  82. package/bundled-skills/tmux/SKILL.md +370 -0
  83. package/bundled-skills/transformers-js/SKILL.md +639 -0
  84. package/bundled-skills/transformers-js/references/CACHE.md +339 -0
  85. package/bundled-skills/transformers-js/references/CONFIGURATION.md +390 -0
  86. package/bundled-skills/transformers-js/references/EXAMPLES.md +605 -0
  87. package/bundled-skills/transformers-js/references/MODEL_ARCHITECTURES.md +167 -0
  88. package/bundled-skills/transformers-js/references/PIPELINE_OPTIONS.md +545 -0
  89. package/bundled-skills/transformers-js/references/TEXT_GENERATION.md +315 -0
  90. package/bundled-skills/viboscope/SKILL.md +64 -0
  91. package/package.json +1 -1
@@ -0,0 +1,310 @@
# Reliability Principles for Training Jobs

## Contents
- Principle 1: Always Verify Before Use
- Principle 2: Prioritize Reliability Over Performance
- Principle 3: Create Atomic, Self-Contained Scripts
- Principle 4: Provide Clear Error Context
- Principle 5: Test the Happy Path on Known-Good Inputs
- Summary: The Reliability Checklist (pre-flight, script quality, job config)
- When Principles Conflict

---

These principles are derived from real production failures and successful fixes. Following them prevents common failure modes and ensures reliable job execution.

## Principle 1: Always Verify Before Use

**Rule:** Never assume repos, datasets, or resources exist. Verify with tools first.

### What It Prevents

- **Non-existent datasets** - Jobs fail immediately when the dataset doesn't exist
- **Typos in names** - Simple mistakes like "argilla-dpo-mix-7k" vs "ultrafeedback_binarized"
- **Incorrect paths** - Old or moved repos, renamed files
- **Missing dependencies** - Undocumented requirements

### How to Apply

**Before submitting ANY job:**

```python
# Verify the dataset exists
dataset_search({"query": "dataset-name", "author": "author-name", "limit": 5})
hub_repo_details(["author/dataset-name"], repo_type="dataset")

# Verify the model exists
hub_repo_details(["org/model-name"], repo_type="model")

# Check script/file paths (for URL-based scripts)
# Verify before using: https://github.com/user/repo/blob/main/script.py
```

**Examples that would have caught errors:**

```python
# ❌ WRONG: Assumed the dataset exists
hf_jobs("uv", {
    "script": """...""",
    "env": {"DATASET": "trl-lib/argilla-dpo-mix-7k"}  # Doesn't exist!
})

# ✅ CORRECT: Verify first
dataset_search({"query": "argilla dpo", "author": "trl-lib"})
# Would show: "trl-lib/ultrafeedback_binarized" is the correct name

hub_repo_details(["trl-lib/ultrafeedback_binarized"], repo_type="dataset")
# Confirms it exists before using
```

### Implementation Checklist

- [ ] Check the dataset exists before training
- [ ] Test that script URLs are valid before submitting
- [ ] Check for recent updates/renames of resources
- [ ] Check the dataset format

**Time cost:** 5-10 seconds
**Time saved:** Hours of failed job time + debugging

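The "check the dataset format" step can be as lightweight as validating that a sample row has the columns the training script expects. A minimal sketch — the helper name and the DPO-style column names are illustrative assumptions, not part of the skill's API:

```python
def check_dataset_format(example: dict, required_columns: set) -> None:
    """Fail fast if a dataset example is missing columns the script needs."""
    missing = required_columns - set(example.keys())
    if missing:
        raise ValueError(
            f"Dataset is missing required columns: {sorted(missing)}. "
            f"Found columns: {sorted(example.keys())}"
        )

# Example: a DPO-style dataset needs prompt/chosen/rejected columns
example = {"prompt": "...", "chosen": "...", "rejected": "..."}
check_dataset_format(example, {"prompt", "chosen", "rejected"})  # passes silently
```

Running this against the first row of a streamed dataset costs seconds and surfaces schema mismatches before any GPU time is spent.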
---

## Principle 2: Prioritize Reliability Over Performance

**Rule:** Default to what is most likely to succeed, not what is theoretically fastest.

### What It Prevents

- **Hardware incompatibilities** - Features that fail on certain GPUs
- **Unstable optimizations** - Speed-ups that cause crashes
- **Complex configurations** - More failure points
- **Build system issues** - Unreliable compilation methods

### How to Apply

**Choose reliability:**

```python
# ❌ RISKY: Aggressive optimization that may fail
TrainingArguments(
    torch_compile=True,        # Can fail on T4, A10G GPUs
    optim="adamw_bnb_8bit",    # Requires specific setup
    dataloader_num_workers=8,  # May cause OOM on small instances
    ...
)

# ✅ SAFE: Proven defaults
TrainingArguments(
    # torch_compile=True,      # Enable on H100 for ~20% speedup
    optim="adamw_torch",       # Standard, always works
    fp16=True,                 # Stable and fast on T4/A10G
    dataloader_num_workers=4,  # Conservative, reliable
    ...
)
```

### Real-World Example

**The `torch.compile` failure:**
- Added for a "20% speedup" on H100
- **Failed fatally on T4-medium** with a cryptic error
- Misdiagnosed as a dataset issue (cost hours)
- **Fix:** Disable by default, add as an optional comment

**Result:** Reliability > 20% performance gain

### Implementation Checklist

- [ ] Use proven, standard configurations by default
- [ ] Comment out performance optimizations with hardware notes
- [ ] Use stable build systems (CMake > make)
- [ ] Test on target hardware before production
- [ ] Document known incompatibilities
- [ ] Provide "safe" and "fast" variants when needed

**Performance loss:** 10-20% in the best case
**Reliability gain:** 95%+ success rate vs 60-70%

---

## Principle 3: Create Atomic, Self-Contained Scripts

**Rule:** Scripts should work as complete, independent units. Don't remove parts to "simplify."

### What It Prevents

- **Missing dependencies** - Removed "unnecessary" packages that are actually required
- **Incomplete processes** - Skipped steps that seem redundant
- **Environment assumptions** - Scripts that need pre-setup
- **Partial failures** - Some parts work, others fail silently

### How to Apply

**Complete dependency specifications:**

```python
# ❌ INCOMPLETE: "Simplified" by removing dependencies
# /// script
# dependencies = [
#     "transformers",
#     "torch",
#     "datasets",
# ]
# ///

# ✅ COMPLETE: All dependencies explicit
# /// script
# dependencies = [
#     "transformers>=5.2.0",
#     "accelerate>=1.1.0",
#     "albumentations>=1.4.16",  # Required for augmentation + bbox handling
#     "timm",                    # Required for vision backbones
#     "datasets>=4.0",
#     "torchmetrics",            # Required for mAP/mAR computation
#     "pycocotools",             # Required for COCO evaluation
#     "trackio",                 # Required for metrics monitoring
#     "huggingface_hub",
# ]
# ///
```

### Real-World Example

**The `albumentations` failure:**
- The original script had it: augmentations and bbox clipping worked fine
- A "simplified" version removed it: "not strictly needed for training"
- **Training crashed on bbox augmentation** — no fallback for COCO-format bbox handling
- Hard to debug: the error appeared in data loading, not in augmentation setup
- **Fix:** Restore all original dependencies

**Result:** Don't remove dependencies without thorough testing

### Implementation Checklist

- [ ] All dependencies in the PEP 723 header with version pins
- [ ] All system packages installed by the script
- [ ] No assumptions about a pre-existing environment
- [ ] No "optional" steps that are actually required
- [ ] Test scripts in a clean environment
- [ ] Document why each dependency is needed

**Complexity:** Slightly longer scripts
**Reliability:** Scripts "just work" every time

---

## Principle 4: Provide Clear Error Context

**Rule:** When things fail, make it obvious what went wrong and how to fix it.

### How to Apply

**Wrap subprocess calls:**

```python
# ❌ UNCLEAR: Silent failure
subprocess.run([...], check=True, capture_output=True)

# ✅ CLEAR: Shows what failed
try:
    result = subprocess.run(
        [...],
        check=True,
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    if result.stderr:
        print("Warnings:", result.stderr)
except subprocess.CalledProcessError as e:
    print("❌ Command failed!")
    print("STDOUT:", e.stdout)
    print("STDERR:", e.stderr)
    raise
```

**Validate inputs:**

```python
# ❌ UNCLEAR: Fails later with a cryptic error
model = load_model(MODEL_NAME)

# ✅ CLEAR: Fails fast with a clear message
if not MODEL_NAME:
    raise ValueError("MODEL_NAME environment variable not set!")

print(f"Loading model: {MODEL_NAME}")
try:
    model = load_model(MODEL_NAME)
    print("✅ Model loaded successfully")
except Exception as e:
    print(f"❌ Failed to load model: {MODEL_NAME}")
    print(f"Error: {e}")
    print("Hint: Check that the model exists on the Hub")
    raise
```

### Implementation Checklist

- [ ] Wrap external calls with try/except
- [ ] Print stdout/stderr on failure
- [ ] Validate environment variables early
- [ ] Add progress indicators (✅, ❌, 🔄)
- [ ] Include hints for common failures
- [ ] Log configuration at start

---

## Principle 5: Test the Happy Path on Known-Good Inputs

**Rule:** Before using new code in production, test with inputs you know work.

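One cheap way to apply this rule is a small smoke-test wrapper that runs a step of the pipeline on a tiny known-good input before submitting the real job. A sketch under stated assumptions — `smoke_test` and the example step are hypothetical, not part of the skill:

```python
def smoke_test(step, known_good_input, expect=None):
    """Run a pipeline step on a known-good input; fail loudly before the real job."""
    try:
        result = step(known_good_input)
    except Exception as e:
        raise RuntimeError(
            f"Smoke test failed on known-good input {known_good_input!r}: {e}"
        ) from e
    if expect is not None and result != expect:
        raise RuntimeError(f"Smoke test returned {result!r}, expected {expect!r}")
    print("✅ Smoke test passed")
    return result

# Example: verify a preprocessing step behaves as expected before training on it
smoke_test(lambda s: s.strip().lower(), "  Hello  ", expect="hello")
```

If the smoke test fails, you know the code is broken, not the data — which is exactly the distinction the `torch.compile` misdiagnosis above lacked.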
## Summary: The Reliability Checklist

Before submitting ANY job:

### Pre-Flight Checks
- [ ] **Verified** all repos/datasets exist (hub_repo_details)
- [ ] **Tested** with known-good inputs if new code
- [ ] **Using** proven hardware/configuration
- [ ] **Included** all dependencies in the PEP 723 header
- [ ] **Installed** system requirements (build tools, etc.)
- [ ] **Set** an appropriate timeout (not the default 30m)
- [ ] **Configured** Hub push with HF_TOKEN (login() + hub_token)
- [ ] **Added** clear error handling

### Script Quality
- [ ] Self-contained (no external setup needed)
- [ ] Complete dependencies listed
- [ ] Build tools installed by the script
- [ ] Progress indicators included
- [ ] Error messages are clear
- [ ] Configuration logged at start

### Job Configuration
- [ ] Timeout > expected runtime + 30% buffer
- [ ] Hardware appropriate for model size
- [ ] Secrets include HF_TOKEN (see SKILL.md directive #2 for syntax)
- [ ] Script calls `login(token=hf_token)` and sets `training_args.hub_token = hf_token` BEFORE `Trainer()` init
- [ ] Environment variables set correctly
- [ ] Cost estimated and acceptable

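The timeout rule is easy to compute mechanically rather than eyeball. A tiny helper — hypothetical, not part of the skill's API — that applies the 30% buffer and formats the result as a minutes string:

```python
import math

def job_timeout(expected_minutes: float, buffer: float = 0.30) -> str:
    """Return a timeout string with a safety buffer over the expected runtime."""
    minutes = math.ceil(expected_minutes * (1 + buffer))
    return f"{minutes}m"

# A job expected to run 120 minutes gets a 156-minute timeout
print(job_timeout(120))  # → "156m"
```

Rounding up means you never undercut the buffer for fractional expected runtimes.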
**Following these principles transforms the job success rate from ~60-70% to ~95%+.**

---

## When Principles Conflict

Sometimes reliability and performance conflict. Here's how to choose:

| Scenario | Choose | Rationale |
|----------|--------|-----------|
| Demo/test | Reliability | Fast failure is worse than slow success |
| Production (first run) | Reliability | Prove it works before optimizing |
| Production (proven) | Performance | Safe to optimize after validation |
| Time-critical | Reliability | Failures cause more delay than slow runs |
| Cost-critical | Balanced | Test with a small model, then optimize |

**General rule:** Reliability first, optimize second.

---

@@ -0,0 +1,91 @@
# Using timm models with the Hugging Face Trainer

Transformers has first-class support for timm models via the `TimmWrapper` classes. You can load any timm model and use it directly with the `Trainer` API for image classification. Here's how it works:

## Loading a timm model

The `TimmWrapperForImageClassification` class (in `transformers/src/transformers/models/timm_wrapper/modeling_timm_wrapper.py`) wraps timm models so they're fully compatible with the Trainer API. You can load them via the `Auto` classes:

```python
from transformers import AutoModelForImageClassification, AutoImageProcessor

# Load a timm model for image classification
checkpoint = "timm/resnet50.a1_in1k"
image_processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForImageClassification.from_pretrained(
    checkpoint,
    num_labels=10,                 # set to your number of classes
    ignore_mismatched_sizes=True,  # needed when changing num_labels from the pretrained head
)
```

## Key details

1. **Image processor**: The `TimmWrapperImageProcessor` automatically resolves the correct transforms from timm's config. It exposes both `val_transforms` and `train_transforms` (with augmentations), as noted in the code:

   ```64:65:transformers/src/transformers/models/timm_wrapper/image_processing_timm_wrapper.py
   # useful for training, see examples/pytorch/image-classification/run_image_classification.py
   self.train_transforms = timm.data.create_transform(**self.data_config, is_training=True)
   ```

2. **Loss computation is built in**: `TimmWrapperForImageClassification.forward()` accepts a `labels` argument and computes cross-entropy loss automatically, which is exactly what Trainer expects:

   ```374:376:transformers/src/transformers/models/timm_wrapper/modeling_timm_wrapper.py
   loss = None
   if labels is not None:
       loss = self.loss_function(labels, logits, self.config)
   ```

3. **Returns `ImageClassifierOutput`**: The output format is the standard transformers output, so Trainer handles it seamlessly.

## Full training example

```python
from transformers import AutoModelForImageClassification, AutoImageProcessor, Trainer, TrainingArguments
from datasets import load_dataset

# Load dataset
dataset = load_dataset("food101", split="train[:5000]")
dataset = dataset.train_test_split(test_size=0.2)

# Load timm model + processor
checkpoint = "timm/resnet50.a1_in1k"
image_processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForImageClassification.from_pretrained(
    checkpoint,
    num_labels=101,
    ignore_mismatched_sizes=True,
)

# Preprocessing: convert images to pixel_values on the fly
def transform(batch):
    batch["pixel_values"] = [image_processor(img)["pixel_values"][0] for img in batch["image"]]
    batch["labels"] = batch["label"]
    return batch

dataset["train"].set_transform(transform)
dataset["test"].set_transform(transform)

# Train
training_args = TrainingArguments(
    output_dir="./timm-finetuned",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    eval_strategy="epoch",
    save_strategy="epoch",
    logging_steps=50,
    remove_unused_columns=False,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
)

trainer.train()
```

Any timm checkpoint on the Hub (prefixed with `timm/`) works out of the box (ResNet, EfficientNet, ViT, ConvNeXt, etc.). The wrapper handles all the translation between timm's interface and what Trainer expects.