opencode-skills-antigravity 1.0.40 → 1.0.41

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (84)
  1. package/bundled-skills/.antigravity-install-manifest.json +7 -1
  2. package/bundled-skills/docs/integrations/jetski-cortex.md +3 -3
  3. package/bundled-skills/docs/integrations/jetski-gemini-loader/README.md +1 -1
  4. package/bundled-skills/docs/maintainers/repo-growth-seo.md +3 -3
  5. package/bundled-skills/docs/maintainers/skills-update-guide.md +1 -1
  6. package/bundled-skills/docs/sources/sources.md +2 -2
  7. package/bundled-skills/docs/users/bundles.md +1 -1
  8. package/bundled-skills/docs/users/claude-code-skills.md +1 -1
  9. package/bundled-skills/docs/users/gemini-cli-skills.md +1 -1
  10. package/bundled-skills/docs/users/getting-started.md +1 -1
  11. package/bundled-skills/docs/users/kiro-integration.md +1 -1
  12. package/bundled-skills/docs/users/usage.md +4 -4
  13. package/bundled-skills/docs/users/visual-guide.md +4 -4
  14. package/bundled-skills/hugging-face-cli/SKILL.md +192 -195
  15. package/bundled-skills/hugging-face-community-evals/SKILL.md +213 -0
  16. package/bundled-skills/hugging-face-community-evals/examples/.env.example +3 -0
  17. package/bundled-skills/hugging-face-community-evals/examples/USAGE_EXAMPLES.md +101 -0
  18. package/bundled-skills/hugging-face-community-evals/scripts/inspect_eval_uv.py +104 -0
  19. package/bundled-skills/hugging-face-community-evals/scripts/inspect_vllm_uv.py +306 -0
  20. package/bundled-skills/hugging-face-community-evals/scripts/lighteval_vllm_uv.py +297 -0
  21. package/bundled-skills/hugging-face-dataset-viewer/SKILL.md +120 -120
  22. package/bundled-skills/hugging-face-gradio/SKILL.md +304 -0
  23. package/bundled-skills/hugging-face-gradio/examples.md +613 -0
  24. package/bundled-skills/hugging-face-jobs/SKILL.md +25 -18
  25. package/bundled-skills/hugging-face-jobs/index.html +216 -0
  26. package/bundled-skills/hugging-face-jobs/references/hardware_guide.md +336 -0
  27. package/bundled-skills/hugging-face-jobs/references/hub_saving.md +352 -0
  28. package/bundled-skills/hugging-face-jobs/references/token_usage.md +570 -0
  29. package/bundled-skills/hugging-face-jobs/references/troubleshooting.md +475 -0
  30. package/bundled-skills/hugging-face-jobs/scripts/cot-self-instruct.py +718 -0
  31. package/bundled-skills/hugging-face-jobs/scripts/finepdfs-stats.py +546 -0
  32. package/bundled-skills/hugging-face-jobs/scripts/generate-responses.py +587 -0
  33. package/bundled-skills/hugging-face-model-trainer/SKILL.md +11 -12
  34. package/bundled-skills/hugging-face-model-trainer/references/gguf_conversion.md +296 -0
  35. package/bundled-skills/hugging-face-model-trainer/references/hardware_guide.md +283 -0
  36. package/bundled-skills/hugging-face-model-trainer/references/hub_saving.md +364 -0
  37. package/bundled-skills/hugging-face-model-trainer/references/local_training_macos.md +231 -0
  38. package/bundled-skills/hugging-face-model-trainer/references/reliability_principles.md +371 -0
  39. package/bundled-skills/hugging-face-model-trainer/references/trackio_guide.md +189 -0
  40. package/bundled-skills/hugging-face-model-trainer/references/training_methods.md +150 -0
  41. package/bundled-skills/hugging-face-model-trainer/references/training_patterns.md +203 -0
  42. package/bundled-skills/hugging-face-model-trainer/references/troubleshooting.md +282 -0
  43. package/bundled-skills/hugging-face-model-trainer/references/unsloth.md +313 -0
  44. package/bundled-skills/hugging-face-model-trainer/scripts/convert_to_gguf.py +424 -0
  45. package/bundled-skills/hugging-face-model-trainer/scripts/dataset_inspector.py +417 -0
  46. package/bundled-skills/hugging-face-model-trainer/scripts/estimate_cost.py +150 -0
  47. package/bundled-skills/hugging-face-model-trainer/scripts/train_dpo_example.py +106 -0
  48. package/bundled-skills/hugging-face-model-trainer/scripts/train_grpo_example.py +89 -0
  49. package/bundled-skills/hugging-face-model-trainer/scripts/train_sft_example.py +122 -0
  50. package/bundled-skills/hugging-face-model-trainer/scripts/unsloth_sft_example.py +512 -0
  51. package/bundled-skills/hugging-face-paper-publisher/SKILL.md +11 -4
  52. package/bundled-skills/hugging-face-paper-publisher/examples/example_usage.md +326 -0
  53. package/bundled-skills/hugging-face-paper-publisher/references/quick_reference.md +216 -0
  54. package/bundled-skills/hugging-face-paper-publisher/scripts/paper_manager.py +606 -0
  55. package/bundled-skills/hugging-face-paper-publisher/templates/arxiv.md +299 -0
  56. package/bundled-skills/hugging-face-paper-publisher/templates/ml-report.md +358 -0
  57. package/bundled-skills/hugging-face-paper-publisher/templates/modern.md +319 -0
  58. package/bundled-skills/hugging-face-paper-publisher/templates/standard.md +201 -0
  59. package/bundled-skills/hugging-face-papers/SKILL.md +241 -0
  60. package/bundled-skills/hugging-face-trackio/.claude-plugin/plugin.json +19 -0
  61. package/bundled-skills/hugging-face-trackio/SKILL.md +117 -0
  62. package/bundled-skills/hugging-face-trackio/references/alerts.md +196 -0
  63. package/bundled-skills/hugging-face-trackio/references/logging_metrics.md +206 -0
  64. package/bundled-skills/hugging-face-trackio/references/retrieving_metrics.md +251 -0
  65. package/bundled-skills/hugging-face-vision-trainer/SKILL.md +595 -0
  66. package/bundled-skills/hugging-face-vision-trainer/references/finetune_sam2_trainer.md +254 -0
  67. package/bundled-skills/hugging-face-vision-trainer/references/hub_saving.md +618 -0
  68. package/bundled-skills/hugging-face-vision-trainer/references/image_classification_training_notebook.md +279 -0
  69. package/bundled-skills/hugging-face-vision-trainer/references/object_detection_training_notebook.md +700 -0
  70. package/bundled-skills/hugging-face-vision-trainer/references/reliability_principles.md +310 -0
  71. package/bundled-skills/hugging-face-vision-trainer/references/timm_trainer.md +91 -0
  72. package/bundled-skills/hugging-face-vision-trainer/scripts/dataset_inspector.py +814 -0
  73. package/bundled-skills/hugging-face-vision-trainer/scripts/estimate_cost.py +217 -0
  74. package/bundled-skills/hugging-face-vision-trainer/scripts/image_classification_training.py +383 -0
  75. package/bundled-skills/hugging-face-vision-trainer/scripts/object_detection_training.py +710 -0
  76. package/bundled-skills/hugging-face-vision-trainer/scripts/sam_segmentation_training.py +382 -0
  77. package/bundled-skills/transformers-js/SKILL.md +639 -0
  78. package/bundled-skills/transformers-js/references/CACHE.md +339 -0
  79. package/bundled-skills/transformers-js/references/CONFIGURATION.md +390 -0
  80. package/bundled-skills/transformers-js/references/EXAMPLES.md +605 -0
  81. package/bundled-skills/transformers-js/references/MODEL_ARCHITECTURES.md +167 -0
  82. package/bundled-skills/transformers-js/references/PIPELINE_OPTIONS.md +545 -0
  83. package/bundled-skills/transformers-js/references/TEXT_GENERATION.md +315 -0
  84. package/package.json +1 -1
@@ -0,0 +1,279 @@
+ # Image classification
+
+ ## Contents
+ - Load Food-101 dataset
+ - Preprocess (ViT image processor, torchvision transforms)
+ - Evaluate (accuracy metric, compute_metrics)
+ - Train (TrainingArguments, Trainer setup, push to Hub)
+ - Inference (pipeline, manual prediction)
+
+ ---
+
+ Image classification assigns a label or class to an image. Unlike text or audio classification, the inputs are the
+ pixel values that comprise an image. There are many applications for image classification, such as detecting damage
+ after a natural disaster, monitoring crop health, or helping screen medical images for signs of disease.
+
+ This guide illustrates how to:
+
+ 1. Fine-tune [ViT](../model_doc/vit) on the [Food-101](https://huggingface.co/datasets/ethz/food101) dataset to classify a food item in an image.
+ 2. Use your fine-tuned model for inference.
+
+ To see all architectures and checkpoints compatible with this task, we recommend checking the [task page](https://huggingface.co/tasks/image-classification).
+
+ Before you begin, make sure you have all the necessary libraries installed:
+
+ ```bash
+ pip install transformers datasets evaluate accelerate pillow torchvision scikit-learn trackio
+ ```
+
+ We encourage you to log in to your Hugging Face account to upload and share your model with the community. When prompted, enter your token to log in:
+
+ ```py
+ >>> from huggingface_hub import notebook_login
+
+ >>> notebook_login()
+ ```
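+
+ If you're working in a terminal rather than a notebook, a minimal alternative (assuming your token is at hand or exported as `HF_TOKEN`) is `huggingface_hub.login`:
+
+ ```py
+ >>> from huggingface_hub import login
+
+ >>> login()  # prompts for a token, or picks up HF_TOKEN from the environment
+ ```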
+
+ ## Load Food-101 dataset
+
+ Start by loading a smaller subset of the Food-101 dataset from the 🤗 Datasets library. This will give you a chance to
+ experiment and make sure everything works before spending more time training on the full dataset.
+
+ ```py
+ >>> from datasets import load_dataset
+
+ >>> food = load_dataset("ethz/food101", split="train[:5000]")
+ ```
+
+ Split the dataset's `train` split into a train and test set with the [train_test_split](https://huggingface.co/docs/datasets/v4.5.0/en/package_reference/main_classes#datasets.Dataset.train_test_split) method:
+
+ ```py
+ >>> food = food.train_test_split(test_size=0.2)
+ ```
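+
+ With `test_size=0.2`, the 5,000 examples split into 4,000 for training and 1,000 for testing; a quick sanity check:
+
+ ```py
+ >>> food["train"].num_rows, food["test"].num_rows
+ (4000, 1000)
+ ```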
53
+
54
+ Then take a look at an example:
55
+
56
+ ```py
57
+ >>> food["train"][0]
58
+ {'image': ,
59
+ 'label': 79}
60
+ ```
+
+ Each example in the dataset has two fields:
+
+ - `image`: a PIL image of the food item
+ - `label`: the label class of the food item
+
+ To make it easier for the model to get the label name from the label id, create a dictionary that maps the label name
+ to an integer and vice versa:
+
+ ```py
+ >>> labels = food["train"].features["label"].names
+ >>> label2id, id2label = dict(), dict()
+ >>> for i, label in enumerate(labels):
+ ... label2id[label] = str(i)
+ ... id2label[str(i)] = label
+ ```
+
+ Now you can convert the label id to a label name:
+
+ ```py
+ >>> id2label[str(79)]
+ 'prime_rib'
+ ```
+
+ ## Preprocess
+
+ The next step is to load a ViT image processor to process the image into a tensor:
+
+ ```py
+ >>> from transformers import AutoImageProcessor
+
+ >>> checkpoint = "google/vit-base-patch16-224-in21k"
+ >>> image_processor = AutoImageProcessor.from_pretrained(checkpoint)
+ ```
+
+ Apply some image transformations to the images to make the model more robust against overfitting. Here you'll use torchvision's [`transforms`](https://pytorch.org/vision/stable/transforms.html) module, but you can also use any image library you like.
+
+ Crop a random part of the image, resize it, and normalize it with the image mean and standard deviation:
+
+ ```py
+ >>> from torchvision.transforms import RandomResizedCrop, Compose, Normalize, ToTensor
+
+ >>> normalize = Normalize(mean=image_processor.image_mean, std=image_processor.image_std)
+ >>> size = (
+ ... image_processor.size["shortest_edge"]
+ ... if "shortest_edge" in image_processor.size
+ ... else (image_processor.size["height"], image_processor.size["width"])
+ ... )
+ >>> _transforms = Compose([RandomResizedCrop(size), ToTensor(), normalize])
+ ```
+
+ Then create a preprocessing function that applies the transforms and returns the `pixel_values` (the inputs to the model) for each image:
+
+ ```py
+ >>> def transforms(examples):
+ ... examples["pixel_values"] = [_transforms(img.convert("RGB")) for img in examples["image"]]
+ ... del examples["image"]
+ ... return examples
+ ```
+
+ To apply the preprocessing function over the entire dataset, use the 🤗 Datasets [with_transform](https://huggingface.co/docs/datasets/v4.5.0/en/package_reference/main_classes#datasets.Dataset.with_transform) method. The transforms are applied on the fly when you load an element of the dataset:
+
+ ```py
+ >>> food = food.with_transform(transforms)
+ ```
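+
+ As a quick check, indexing an example now triggers the transform and returns a tensor; the shape below assumes the checkpoint's default 224x224 input size:
+
+ ```py
+ >>> food["train"][0]["pixel_values"].shape
+ torch.Size([3, 224, 224])
+ ```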
+
+ Now create a batch of examples using [DefaultDataCollator](/docs/transformers/v5.2.0/en/main_classes/data_collator#transformers.DefaultDataCollator). Unlike other data collators in 🤗 Transformers, the `DefaultDataCollator` does not apply additional preprocessing such as padding.
+
+ ```py
+ >>> from transformers import DefaultDataCollator
+
+ >>> data_collator = DefaultDataCollator()
+ ```
+
+ ## Evaluate
+
+ Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an
+ evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load
+ the [accuracy](https://huggingface.co/spaces/evaluate-metric/accuracy) metric (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric):
+
+ ```py
+ >>> import evaluate
+
+ >>> accuracy = evaluate.load("accuracy")
+ ```
+
+ Then create a function that passes your predictions and labels to [compute](https://huggingface.co/docs/evaluate/v0.4.6/en/package_reference/main_classes#evaluate.EvaluationModule.compute) to calculate the accuracy:
+
+ ```py
+ >>> import numpy as np
+
+ >>> def compute_metrics(eval_pred):
+ ... predictions, labels = eval_pred
+ ... predictions = np.argmax(predictions, axis=1)
+ ... return accuracy.compute(predictions=predictions, references=labels)
+ ```
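+
+ You can sanity-check the function on toy logits before training; here both of the two examples are predicted correctly:
+
+ ```py
+ >>> compute_metrics((np.array([[0.1, 0.9], [0.8, 0.2]]), np.array([1, 0])))
+ {'accuracy': 1.0}
+ ```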
157
+
158
+ Your `compute_metrics` function is ready to go now, and you'll return to it when you set up your training.
159
+
160
+ ## Train
161
+
162
+ If you aren't familiar with finetuning a model with the [Trainer](/docs/transformers/v5.2.0/en/main_classes/trainer#transformers.Trainer), take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!
163
+
164
+ You're ready to start training your model now! Load ViT with [AutoModelForImageClassification](/docs/transformers/v5.2.0/en/model_doc/auto#transformers.AutoModelForImageClassification). Specify the number of labels along with the number of expected labels, and the label mappings:
165
+
166
+ ```py
167
+ >>> from transformers import AutoModelForImageClassification, TrainingArguments, Trainer
168
+
169
+ >>> model = AutoModelForImageClassification.from_pretrained(
170
+ ... checkpoint,
171
+ ... num_labels=len(labels),
172
+ ... id2label=id2label,
173
+ ... label2id=label2id,
174
+ ... )
175
+ ```
176
+
177
+ At this point, only three steps remain:
178
+
179
+ 1. Define your training hyperparameters in [TrainingArguments](/docs/transformers/v5.2.0/en/main_classes/trainer#transformers.TrainingArguments). It is important you don't remove unused columns because that'll drop the `image` column. Without the `image` column, you can't create `pixel_values`. Set `remove_unused_columns=False` to prevent this behavior! The only other required parameter is `output_dir` which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [Trainer](/docs/transformers/v5.2.0/en/main_classes/trainer#transformers.Trainer) will evaluate the accuracy and save the training checkpoint.
180
+ 2. Pass the training arguments to [Trainer](/docs/transformers/v5.2.0/en/main_classes/trainer#transformers.Trainer) along with the model, dataset, tokenizer, data collator, and `compute_metrics` function.
181
+ 3. Call [train()](/docs/transformers/v5.2.0/en/main_classes/trainer#transformers.Trainer.train) to finetune your model.
182
+
183
+ ```py
184
+ >>> training_args = TrainingArguments(
185
+ ... output_dir="my_awesome_food_model",
186
+ ... remove_unused_columns=False,
187
+ ... eval_strategy="epoch",
188
+ ... save_strategy="epoch",
189
+ ... learning_rate=5e-5,
190
+ ... per_device_train_batch_size=16,
191
+ ... gradient_accumulation_steps=4,
192
+ ... per_device_eval_batch_size=16,
193
+ ... num_train_epochs=3,
194
+ ... warmup_steps=0.1,
195
+ ... logging_steps=10,
196
+ ... report_to="trackio",
197
+ ... run_name="food101",
198
+ ... load_best_model_at_end=True,
199
+ ... metric_for_best_model="accuracy",
200
+ ... push_to_hub=True,
201
+ ... )
202
+
203
+ >>> trainer = Trainer(
204
+ ... model=model,
205
+ ... args=training_args,
206
+ ... data_collator=data_collator,
207
+ ... train_dataset=food["train"],
208
+ ... eval_dataset=food["test"],
209
+ ... processing_class=image_processor,
210
+ ... compute_metrics=compute_metrics,
211
+ ... )
212
+
213
+ >>> trainer.train()
214
+ ```
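+
+ If training is interrupted, you can resume from the most recent checkpoint saved in `output_dir` instead of starting over:
+
+ ```py
+ >>> trainer.train(resume_from_checkpoint=True)
+ ```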
215
+
216
+ Once training is completed, share your model to the Hub with the [push_to_hub()](/docs/transformers/v5.2.0/en/main_classes/trainer#transformers.Trainer.push_to_hub) method so everyone can use your model:
217
+
218
+ ```py
219
+ >>> trainer.push_to_hub()
220
+ ```
221
+
222
+ For a more in-depth example of how to finetune a model for image classification, take a look at the corresponding [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
223
+
224
+ ## Inference
225
+
226
+ Great, now that you've fine-tuned a model, you can use it for inference!
227
+
228
+ Load an image you'd like to run inference on:
229
+
230
+ ```py
231
+ >>> ds = load_dataset("ethz/food101", split="validation[:10]")
232
+ >>> image = ds["image"][0]
233
+ ```
+
+ The simplest way to try out your finetuned model for inference is to use it in a [pipeline()](/docs/transformers/v5.2.0/en/main_classes/pipelines#transformers.pipeline). Instantiate a `pipeline` for image classification with your model, and pass your image to it:
+
+ ```py
+ >>> from transformers import pipeline
+
+ >>> classifier = pipeline("image-classification", model="my_awesome_food_model")
+ >>> classifier(image)
+ [{'score': 0.31856709718704224, 'label': 'beignets'},
+ {'score': 0.015232225880026817, 'label': 'bruschetta'},
+ {'score': 0.01519392803311348, 'label': 'chicken_wings'},
+ {'score': 0.013022331520915031, 'label': 'pork_chop'},
+ {'score': 0.012728818692266941, 'label': 'prime_rib'}]
+ ```
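+
+ By default the pipeline returns the five highest-scoring labels; pass `top_k` to change how many come back:
+
+ ```py
+ >>> classifier(image, top_k=1)
+ [{'score': 0.31856709718704224, 'label': 'beignets'}]
+ ```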
+
+ You can also manually replicate the results of the `pipeline` if you'd like:
+
+ Load an image processor to preprocess the image and return the `inputs` as PyTorch tensors:
+
+ ```py
+ >>> from transformers import AutoImageProcessor
+ >>> import torch
+
+ >>> image_processor = AutoImageProcessor.from_pretrained("my_awesome_food_model")
+ >>> inputs = image_processor(image, return_tensors="pt")
+ ```
+
+ Pass your inputs to the model and return the logits:
+
+ ```py
+ >>> from transformers import AutoModelForImageClassification
+
+ >>> model = AutoModelForImageClassification.from_pretrained("my_awesome_food_model")
+ >>> with torch.no_grad():
+ ... logits = model(**inputs).logits
+ ```
+
+ Get the predicted label with the highest probability, and use the model's `id2label` mapping to convert it to a label:
+
+ ```py
+ >>> predicted_label = logits.argmax(-1).item()
+ >>> model.config.id2label[predicted_label]
+ 'beignets'
+ ```
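+
+ If you also want a confidence score like the one the pipeline reports, apply a softmax over the logits first (a small sketch reusing the tensors above):
+
+ ```py
+ >>> probs = torch.softmax(logits, dim=-1)
+ >>> probs[0, predicted_label].item()  # confidence of the predicted class
+ ```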