@synsci/cli-darwin-x64 1.1.49
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/bin/skills/accelerate/SKILL.md +332 -0
- package/bin/skills/accelerate/references/custom-plugins.md +453 -0
- package/bin/skills/accelerate/references/megatron-integration.md +489 -0
- package/bin/skills/accelerate/references/performance.md +525 -0
- package/bin/skills/audiocraft/SKILL.md +564 -0
- package/bin/skills/audiocraft/references/advanced-usage.md +666 -0
- package/bin/skills/audiocraft/references/troubleshooting.md +504 -0
- package/bin/skills/autogpt/SKILL.md +403 -0
- package/bin/skills/autogpt/references/advanced-usage.md +535 -0
- package/bin/skills/autogpt/references/troubleshooting.md +420 -0
- package/bin/skills/awq/SKILL.md +310 -0
- package/bin/skills/awq/references/advanced-usage.md +324 -0
- package/bin/skills/awq/references/troubleshooting.md +344 -0
- package/bin/skills/axolotl/SKILL.md +158 -0
- package/bin/skills/axolotl/references/api.md +5548 -0
- package/bin/skills/axolotl/references/dataset-formats.md +1029 -0
- package/bin/skills/axolotl/references/index.md +15 -0
- package/bin/skills/axolotl/references/other.md +3563 -0
- package/bin/skills/bigcode-evaluation-harness/SKILL.md +405 -0
- package/bin/skills/bigcode-evaluation-harness/references/benchmarks.md +393 -0
- package/bin/skills/bigcode-evaluation-harness/references/custom-tasks.md +424 -0
- package/bin/skills/bigcode-evaluation-harness/references/issues.md +394 -0
- package/bin/skills/bitsandbytes/SKILL.md +411 -0
- package/bin/skills/bitsandbytes/references/memory-optimization.md +521 -0
- package/bin/skills/bitsandbytes/references/qlora-training.md +521 -0
- package/bin/skills/bitsandbytes/references/quantization-formats.md +447 -0
- package/bin/skills/blip-2/SKILL.md +564 -0
- package/bin/skills/blip-2/references/advanced-usage.md +680 -0
- package/bin/skills/blip-2/references/troubleshooting.md +526 -0
- package/bin/skills/chroma/SKILL.md +406 -0
- package/bin/skills/chroma/references/integration.md +38 -0
- package/bin/skills/clip/SKILL.md +253 -0
- package/bin/skills/clip/references/applications.md +207 -0
- package/bin/skills/constitutional-ai/SKILL.md +290 -0
- package/bin/skills/crewai/SKILL.md +498 -0
- package/bin/skills/crewai/references/flows.md +438 -0
- package/bin/skills/crewai/references/tools.md +429 -0
- package/bin/skills/crewai/references/troubleshooting.md +480 -0
- package/bin/skills/deepspeed/SKILL.md +141 -0
- package/bin/skills/deepspeed/references/08.md +17 -0
- package/bin/skills/deepspeed/references/09.md +173 -0
- package/bin/skills/deepspeed/references/2020.md +378 -0
- package/bin/skills/deepspeed/references/2023.md +279 -0
- package/bin/skills/deepspeed/references/assets.md +179 -0
- package/bin/skills/deepspeed/references/index.md +35 -0
- package/bin/skills/deepspeed/references/mii.md +118 -0
- package/bin/skills/deepspeed/references/other.md +1191 -0
- package/bin/skills/deepspeed/references/tutorials.md +6554 -0
- package/bin/skills/dspy/SKILL.md +590 -0
- package/bin/skills/dspy/references/examples.md +663 -0
- package/bin/skills/dspy/references/modules.md +475 -0
- package/bin/skills/dspy/references/optimizers.md +566 -0
- package/bin/skills/faiss/SKILL.md +221 -0
- package/bin/skills/faiss/references/index_types.md +280 -0
- package/bin/skills/flash-attention/SKILL.md +367 -0
- package/bin/skills/flash-attention/references/benchmarks.md +215 -0
- package/bin/skills/flash-attention/references/transformers-integration.md +293 -0
- package/bin/skills/gguf/SKILL.md +427 -0
- package/bin/skills/gguf/references/advanced-usage.md +504 -0
- package/bin/skills/gguf/references/troubleshooting.md +442 -0
- package/bin/skills/gptq/SKILL.md +450 -0
- package/bin/skills/gptq/references/calibration.md +337 -0
- package/bin/skills/gptq/references/integration.md +129 -0
- package/bin/skills/gptq/references/troubleshooting.md +95 -0
- package/bin/skills/grpo-rl-training/README.md +97 -0
- package/bin/skills/grpo-rl-training/SKILL.md +572 -0
- package/bin/skills/grpo-rl-training/examples/reward_functions_library.py +393 -0
- package/bin/skills/grpo-rl-training/templates/basic_grpo_training.py +228 -0
- package/bin/skills/guidance/SKILL.md +572 -0
- package/bin/skills/guidance/references/backends.md +554 -0
- package/bin/skills/guidance/references/constraints.md +674 -0
- package/bin/skills/guidance/references/examples.md +767 -0
- package/bin/skills/hqq/SKILL.md +445 -0
- package/bin/skills/hqq/references/advanced-usage.md +528 -0
- package/bin/skills/hqq/references/troubleshooting.md +503 -0
- package/bin/skills/hugging-face-cli/SKILL.md +191 -0
- package/bin/skills/hugging-face-cli/references/commands.md +954 -0
- package/bin/skills/hugging-face-cli/references/examples.md +374 -0
- package/bin/skills/hugging-face-datasets/SKILL.md +547 -0
- package/bin/skills/hugging-face-datasets/examples/diverse_training_examples.json +239 -0
- package/bin/skills/hugging-face-datasets/examples/system_prompt_template.txt +196 -0
- package/bin/skills/hugging-face-datasets/examples/training_examples.json +176 -0
- package/bin/skills/hugging-face-datasets/scripts/dataset_manager.py +522 -0
- package/bin/skills/hugging-face-datasets/scripts/sql_manager.py +844 -0
- package/bin/skills/hugging-face-datasets/templates/chat.json +55 -0
- package/bin/skills/hugging-face-datasets/templates/classification.json +62 -0
- package/bin/skills/hugging-face-datasets/templates/completion.json +51 -0
- package/bin/skills/hugging-face-datasets/templates/custom.json +75 -0
- package/bin/skills/hugging-face-datasets/templates/qa.json +54 -0
- package/bin/skills/hugging-face-datasets/templates/tabular.json +81 -0
- package/bin/skills/hugging-face-evaluation/SKILL.md +656 -0
- package/bin/skills/hugging-face-evaluation/examples/USAGE_EXAMPLES.md +382 -0
- package/bin/skills/hugging-face-evaluation/examples/artificial_analysis_to_hub.py +141 -0
- package/bin/skills/hugging-face-evaluation/examples/example_readme_tables.md +135 -0
- package/bin/skills/hugging-face-evaluation/examples/metric_mapping.json +50 -0
- package/bin/skills/hugging-face-evaluation/requirements.txt +20 -0
- package/bin/skills/hugging-face-evaluation/scripts/evaluation_manager.py +1374 -0
- package/bin/skills/hugging-face-evaluation/scripts/inspect_eval_uv.py +104 -0
- package/bin/skills/hugging-face-evaluation/scripts/inspect_vllm_uv.py +317 -0
- package/bin/skills/hugging-face-evaluation/scripts/lighteval_vllm_uv.py +303 -0
- package/bin/skills/hugging-face-evaluation/scripts/run_eval_job.py +98 -0
- package/bin/skills/hugging-face-evaluation/scripts/run_vllm_eval_job.py +331 -0
- package/bin/skills/hugging-face-evaluation/scripts/test_extraction.py +206 -0
- package/bin/skills/hugging-face-jobs/SKILL.md +1041 -0
- package/bin/skills/hugging-face-jobs/index.html +216 -0
- package/bin/skills/hugging-face-jobs/references/hardware_guide.md +336 -0
- package/bin/skills/hugging-face-jobs/references/hub_saving.md +352 -0
- package/bin/skills/hugging-face-jobs/references/token_usage.md +546 -0
- package/bin/skills/hugging-face-jobs/references/troubleshooting.md +475 -0
- package/bin/skills/hugging-face-jobs/scripts/cot-self-instruct.py +718 -0
- package/bin/skills/hugging-face-jobs/scripts/finepdfs-stats.py +546 -0
- package/bin/skills/hugging-face-jobs/scripts/generate-responses.py +587 -0
- package/bin/skills/hugging-face-model-trainer/SKILL.md +711 -0
- package/bin/skills/hugging-face-model-trainer/references/gguf_conversion.md +296 -0
- package/bin/skills/hugging-face-model-trainer/references/hardware_guide.md +283 -0
- package/bin/skills/hugging-face-model-trainer/references/hub_saving.md +364 -0
- package/bin/skills/hugging-face-model-trainer/references/reliability_principles.md +371 -0
- package/bin/skills/hugging-face-model-trainer/references/trackio_guide.md +189 -0
- package/bin/skills/hugging-face-model-trainer/references/training_methods.md +150 -0
- package/bin/skills/hugging-face-model-trainer/references/training_patterns.md +203 -0
- package/bin/skills/hugging-face-model-trainer/references/troubleshooting.md +282 -0
- package/bin/skills/hugging-face-model-trainer/scripts/convert_to_gguf.py +424 -0
- package/bin/skills/hugging-face-model-trainer/scripts/dataset_inspector.py +417 -0
- package/bin/skills/hugging-face-model-trainer/scripts/estimate_cost.py +150 -0
- package/bin/skills/hugging-face-model-trainer/scripts/train_dpo_example.py +106 -0
- package/bin/skills/hugging-face-model-trainer/scripts/train_grpo_example.py +89 -0
- package/bin/skills/hugging-face-model-trainer/scripts/train_sft_example.py +122 -0
- package/bin/skills/hugging-face-paper-publisher/SKILL.md +627 -0
- package/bin/skills/hugging-face-paper-publisher/examples/example_usage.md +327 -0
- package/bin/skills/hugging-face-paper-publisher/references/quick_reference.md +216 -0
- package/bin/skills/hugging-face-paper-publisher/scripts/paper_manager.py +508 -0
- package/bin/skills/hugging-face-paper-publisher/templates/arxiv.md +299 -0
- package/bin/skills/hugging-face-paper-publisher/templates/ml-report.md +358 -0
- package/bin/skills/hugging-face-paper-publisher/templates/modern.md +319 -0
- package/bin/skills/hugging-face-paper-publisher/templates/standard.md +201 -0
- package/bin/skills/hugging-face-tool-builder/SKILL.md +115 -0
- package/bin/skills/hugging-face-tool-builder/references/baseline_hf_api.py +57 -0
- package/bin/skills/hugging-face-tool-builder/references/baseline_hf_api.sh +40 -0
- package/bin/skills/hugging-face-tool-builder/references/baseline_hf_api.tsx +57 -0
- package/bin/skills/hugging-face-tool-builder/references/find_models_by_paper.sh +230 -0
- package/bin/skills/hugging-face-tool-builder/references/hf_enrich_models.sh +96 -0
- package/bin/skills/hugging-face-tool-builder/references/hf_model_card_frontmatter.sh +188 -0
- package/bin/skills/hugging-face-tool-builder/references/hf_model_papers_auth.sh +171 -0
- package/bin/skills/hugging-face-trackio/SKILL.md +65 -0
- package/bin/skills/hugging-face-trackio/references/logging_metrics.md +206 -0
- package/bin/skills/hugging-face-trackio/references/retrieving_metrics.md +223 -0
- package/bin/skills/huggingface-tokenizers/SKILL.md +516 -0
- package/bin/skills/huggingface-tokenizers/references/algorithms.md +653 -0
- package/bin/skills/huggingface-tokenizers/references/integration.md +637 -0
- package/bin/skills/huggingface-tokenizers/references/pipeline.md +723 -0
- package/bin/skills/huggingface-tokenizers/references/training.md +565 -0
- package/bin/skills/instructor/SKILL.md +740 -0
- package/bin/skills/instructor/references/examples.md +107 -0
- package/bin/skills/instructor/references/providers.md +70 -0
- package/bin/skills/instructor/references/validation.md +606 -0
- package/bin/skills/knowledge-distillation/SKILL.md +458 -0
- package/bin/skills/knowledge-distillation/references/minillm.md +334 -0
- package/bin/skills/lambda-labs/SKILL.md +545 -0
- package/bin/skills/lambda-labs/references/advanced-usage.md +611 -0
- package/bin/skills/lambda-labs/references/troubleshooting.md +530 -0
- package/bin/skills/langchain/SKILL.md +480 -0
- package/bin/skills/langchain/references/agents.md +499 -0
- package/bin/skills/langchain/references/integration.md +562 -0
- package/bin/skills/langchain/references/rag.md +600 -0
- package/bin/skills/langsmith/SKILL.md +422 -0
- package/bin/skills/langsmith/references/advanced-usage.md +548 -0
- package/bin/skills/langsmith/references/troubleshooting.md +537 -0
- package/bin/skills/litgpt/SKILL.md +469 -0
- package/bin/skills/litgpt/references/custom-models.md +568 -0
- package/bin/skills/litgpt/references/distributed-training.md +451 -0
- package/bin/skills/litgpt/references/supported-models.md +336 -0
- package/bin/skills/litgpt/references/training-recipes.md +619 -0
- package/bin/skills/llama-cpp/SKILL.md +258 -0
- package/bin/skills/llama-cpp/references/optimization.md +89 -0
- package/bin/skills/llama-cpp/references/quantization.md +213 -0
- package/bin/skills/llama-cpp/references/server.md +125 -0
- package/bin/skills/llama-factory/SKILL.md +80 -0
- package/bin/skills/llama-factory/references/_images.md +23 -0
- package/bin/skills/llama-factory/references/advanced.md +1055 -0
- package/bin/skills/llama-factory/references/getting_started.md +349 -0
- package/bin/skills/llama-factory/references/index.md +19 -0
- package/bin/skills/llama-factory/references/other.md +31 -0
- package/bin/skills/llamaguard/SKILL.md +337 -0
- package/bin/skills/llamaindex/SKILL.md +569 -0
- package/bin/skills/llamaindex/references/agents.md +83 -0
- package/bin/skills/llamaindex/references/data_connectors.md +108 -0
- package/bin/skills/llamaindex/references/query_engines.md +406 -0
- package/bin/skills/llava/SKILL.md +304 -0
- package/bin/skills/llava/references/training.md +197 -0
- package/bin/skills/lm-evaluation-harness/SKILL.md +490 -0
- package/bin/skills/lm-evaluation-harness/references/api-evaluation.md +490 -0
- package/bin/skills/lm-evaluation-harness/references/benchmark-guide.md +488 -0
- package/bin/skills/lm-evaluation-harness/references/custom-tasks.md +602 -0
- package/bin/skills/lm-evaluation-harness/references/distributed-eval.md +519 -0
- package/bin/skills/long-context/SKILL.md +536 -0
- package/bin/skills/long-context/references/extension_methods.md +468 -0
- package/bin/skills/long-context/references/fine_tuning.md +611 -0
- package/bin/skills/long-context/references/rope.md +402 -0
- package/bin/skills/mamba/SKILL.md +260 -0
- package/bin/skills/mamba/references/architecture-details.md +206 -0
- package/bin/skills/mamba/references/benchmarks.md +255 -0
- package/bin/skills/mamba/references/training-guide.md +388 -0
- package/bin/skills/megatron-core/SKILL.md +366 -0
- package/bin/skills/megatron-core/references/benchmarks.md +249 -0
- package/bin/skills/megatron-core/references/parallelism-guide.md +404 -0
- package/bin/skills/megatron-core/references/production-examples.md +473 -0
- package/bin/skills/megatron-core/references/training-recipes.md +547 -0
- package/bin/skills/miles/SKILL.md +315 -0
- package/bin/skills/miles/references/api-reference.md +141 -0
- package/bin/skills/miles/references/troubleshooting.md +352 -0
- package/bin/skills/mlflow/SKILL.md +704 -0
- package/bin/skills/mlflow/references/deployment.md +744 -0
- package/bin/skills/mlflow/references/model-registry.md +770 -0
- package/bin/skills/mlflow/references/tracking.md +680 -0
- package/bin/skills/modal/SKILL.md +341 -0
- package/bin/skills/modal/references/advanced-usage.md +503 -0
- package/bin/skills/modal/references/troubleshooting.md +494 -0
- package/bin/skills/model-merging/SKILL.md +539 -0
- package/bin/skills/model-merging/references/evaluation.md +462 -0
- package/bin/skills/model-merging/references/examples.md +428 -0
- package/bin/skills/model-merging/references/methods.md +352 -0
- package/bin/skills/model-pruning/SKILL.md +495 -0
- package/bin/skills/model-pruning/references/wanda.md +347 -0
- package/bin/skills/moe-training/SKILL.md +526 -0
- package/bin/skills/moe-training/references/architectures.md +432 -0
- package/bin/skills/moe-training/references/inference.md +348 -0
- package/bin/skills/moe-training/references/training.md +425 -0
- package/bin/skills/nanogpt/SKILL.md +290 -0
- package/bin/skills/nanogpt/references/architecture.md +382 -0
- package/bin/skills/nanogpt/references/data.md +476 -0
- package/bin/skills/nanogpt/references/training.md +564 -0
- package/bin/skills/nemo-curator/SKILL.md +383 -0
- package/bin/skills/nemo-curator/references/deduplication.md +87 -0
- package/bin/skills/nemo-curator/references/filtering.md +102 -0
- package/bin/skills/nemo-evaluator/SKILL.md +494 -0
- package/bin/skills/nemo-evaluator/references/adapter-system.md +340 -0
- package/bin/skills/nemo-evaluator/references/configuration.md +447 -0
- package/bin/skills/nemo-evaluator/references/custom-benchmarks.md +315 -0
- package/bin/skills/nemo-evaluator/references/execution-backends.md +361 -0
- package/bin/skills/nemo-guardrails/SKILL.md +297 -0
- package/bin/skills/nnsight/SKILL.md +436 -0
- package/bin/skills/nnsight/references/README.md +78 -0
- package/bin/skills/nnsight/references/api.md +344 -0
- package/bin/skills/nnsight/references/tutorials.md +300 -0
- package/bin/skills/openrlhf/SKILL.md +249 -0
- package/bin/skills/openrlhf/references/algorithm-comparison.md +404 -0
- package/bin/skills/openrlhf/references/custom-rewards.md +530 -0
- package/bin/skills/openrlhf/references/hybrid-engine.md +287 -0
- package/bin/skills/openrlhf/references/multi-node-training.md +454 -0
- package/bin/skills/outlines/SKILL.md +652 -0
- package/bin/skills/outlines/references/backends.md +615 -0
- package/bin/skills/outlines/references/examples.md +773 -0
- package/bin/skills/outlines/references/json_generation.md +652 -0
- package/bin/skills/peft/SKILL.md +431 -0
- package/bin/skills/peft/references/advanced-usage.md +514 -0
- package/bin/skills/peft/references/troubleshooting.md +480 -0
- package/bin/skills/phoenix/SKILL.md +475 -0
- package/bin/skills/phoenix/references/advanced-usage.md +619 -0
- package/bin/skills/phoenix/references/troubleshooting.md +538 -0
- package/bin/skills/pinecone/SKILL.md +358 -0
- package/bin/skills/pinecone/references/deployment.md +181 -0
- package/bin/skills/pytorch-fsdp/SKILL.md +126 -0
- package/bin/skills/pytorch-fsdp/references/index.md +7 -0
- package/bin/skills/pytorch-fsdp/references/other.md +4249 -0
- package/bin/skills/pytorch-lightning/SKILL.md +346 -0
- package/bin/skills/pytorch-lightning/references/callbacks.md +436 -0
- package/bin/skills/pytorch-lightning/references/distributed.md +490 -0
- package/bin/skills/pytorch-lightning/references/hyperparameter-tuning.md +556 -0
- package/bin/skills/pyvene/SKILL.md +473 -0
- package/bin/skills/pyvene/references/README.md +73 -0
- package/bin/skills/pyvene/references/api.md +383 -0
- package/bin/skills/pyvene/references/tutorials.md +376 -0
- package/bin/skills/qdrant/SKILL.md +493 -0
- package/bin/skills/qdrant/references/advanced-usage.md +648 -0
- package/bin/skills/qdrant/references/troubleshooting.md +631 -0
- package/bin/skills/ray-data/SKILL.md +326 -0
- package/bin/skills/ray-data/references/integration.md +82 -0
- package/bin/skills/ray-data/references/transformations.md +83 -0
- package/bin/skills/ray-train/SKILL.md +406 -0
- package/bin/skills/ray-train/references/multi-node.md +628 -0
- package/bin/skills/rwkv/SKILL.md +260 -0
- package/bin/skills/rwkv/references/architecture-details.md +344 -0
- package/bin/skills/rwkv/references/rwkv7.md +386 -0
- package/bin/skills/rwkv/references/state-management.md +369 -0
- package/bin/skills/saelens/SKILL.md +386 -0
- package/bin/skills/saelens/references/README.md +70 -0
- package/bin/skills/saelens/references/api.md +333 -0
- package/bin/skills/saelens/references/tutorials.md +318 -0
- package/bin/skills/segment-anything/SKILL.md +500 -0
- package/bin/skills/segment-anything/references/advanced-usage.md +589 -0
- package/bin/skills/segment-anything/references/troubleshooting.md +484 -0
- package/bin/skills/sentence-transformers/SKILL.md +255 -0
- package/bin/skills/sentence-transformers/references/models.md +123 -0
- package/bin/skills/sentencepiece/SKILL.md +235 -0
- package/bin/skills/sentencepiece/references/algorithms.md +200 -0
- package/bin/skills/sentencepiece/references/training.md +304 -0
- package/bin/skills/sglang/SKILL.md +442 -0
- package/bin/skills/sglang/references/deployment.md +490 -0
- package/bin/skills/sglang/references/radix-attention.md +413 -0
- package/bin/skills/sglang/references/structured-generation.md +541 -0
- package/bin/skills/simpo/SKILL.md +219 -0
- package/bin/skills/simpo/references/datasets.md +478 -0
- package/bin/skills/simpo/references/hyperparameters.md +452 -0
- package/bin/skills/simpo/references/loss-functions.md +350 -0
- package/bin/skills/skypilot/SKILL.md +509 -0
- package/bin/skills/skypilot/references/advanced-usage.md +491 -0
- package/bin/skills/skypilot/references/troubleshooting.md +570 -0
- package/bin/skills/slime/SKILL.md +464 -0
- package/bin/skills/slime/references/api-reference.md +392 -0
- package/bin/skills/slime/references/troubleshooting.md +386 -0
- package/bin/skills/speculative-decoding/SKILL.md +467 -0
- package/bin/skills/speculative-decoding/references/lookahead.md +309 -0
- package/bin/skills/speculative-decoding/references/medusa.md +350 -0
- package/bin/skills/stable-diffusion/SKILL.md +519 -0
- package/bin/skills/stable-diffusion/references/advanced-usage.md +716 -0
- package/bin/skills/stable-diffusion/references/troubleshooting.md +555 -0
- package/bin/skills/tensorboard/SKILL.md +629 -0
- package/bin/skills/tensorboard/references/integrations.md +638 -0
- package/bin/skills/tensorboard/references/profiling.md +545 -0
- package/bin/skills/tensorboard/references/visualization.md +620 -0
- package/bin/skills/tensorrt-llm/SKILL.md +187 -0
- package/bin/skills/tensorrt-llm/references/multi-gpu.md +298 -0
- package/bin/skills/tensorrt-llm/references/optimization.md +242 -0
- package/bin/skills/tensorrt-llm/references/serving.md +470 -0
- package/bin/skills/tinker/SKILL.md +362 -0
- package/bin/skills/tinker/references/api-reference.md +168 -0
- package/bin/skills/tinker/references/getting-started.md +157 -0
- package/bin/skills/tinker/references/loss-functions.md +163 -0
- package/bin/skills/tinker/references/models-and-lora.md +139 -0
- package/bin/skills/tinker/references/recipes.md +280 -0
- package/bin/skills/tinker/references/reinforcement-learning.md +212 -0
- package/bin/skills/tinker/references/rendering.md +243 -0
- package/bin/skills/tinker/references/supervised-learning.md +232 -0
- package/bin/skills/tinker-training-cost/SKILL.md +187 -0
- package/bin/skills/tinker-training-cost/scripts/calculate_cost.py +123 -0
- package/bin/skills/torchforge/SKILL.md +433 -0
- package/bin/skills/torchforge/references/api-reference.md +327 -0
- package/bin/skills/torchforge/references/troubleshooting.md +409 -0
- package/bin/skills/torchtitan/SKILL.md +358 -0
- package/bin/skills/torchtitan/references/checkpoint.md +181 -0
- package/bin/skills/torchtitan/references/custom-models.md +258 -0
- package/bin/skills/torchtitan/references/float8.md +133 -0
- package/bin/skills/torchtitan/references/fsdp.md +126 -0
- package/bin/skills/transformer-lens/SKILL.md +346 -0
- package/bin/skills/transformer-lens/references/README.md +54 -0
- package/bin/skills/transformer-lens/references/api.md +362 -0
- package/bin/skills/transformer-lens/references/tutorials.md +339 -0
- package/bin/skills/trl-fine-tuning/SKILL.md +455 -0
- package/bin/skills/trl-fine-tuning/references/dpo-variants.md +227 -0
- package/bin/skills/trl-fine-tuning/references/online-rl.md +82 -0
- package/bin/skills/trl-fine-tuning/references/reward-modeling.md +122 -0
- package/bin/skills/trl-fine-tuning/references/sft-training.md +168 -0
- package/bin/skills/unsloth/SKILL.md +80 -0
- package/bin/skills/unsloth/references/index.md +7 -0
- package/bin/skills/unsloth/references/llms-full.md +16799 -0
- package/bin/skills/unsloth/references/llms-txt.md +12044 -0
- package/bin/skills/unsloth/references/llms.md +82 -0
- package/bin/skills/verl/SKILL.md +391 -0
- package/bin/skills/verl/references/api-reference.md +301 -0
- package/bin/skills/verl/references/troubleshooting.md +391 -0
- package/bin/skills/vllm/SKILL.md +364 -0
- package/bin/skills/vllm/references/optimization.md +226 -0
- package/bin/skills/vllm/references/quantization.md +284 -0
- package/bin/skills/vllm/references/server-deployment.md +255 -0
- package/bin/skills/vllm/references/troubleshooting.md +447 -0
- package/bin/skills/weights-and-biases/SKILL.md +590 -0
- package/bin/skills/weights-and-biases/references/artifacts.md +584 -0
- package/bin/skills/weights-and-biases/references/integrations.md +700 -0
- package/bin/skills/weights-and-biases/references/sweeps.md +847 -0
- package/bin/skills/whisper/SKILL.md +317 -0
- package/bin/skills/whisper/references/languages.md +189 -0
- package/bin/synsc +0 -0
- package/package.json +10 -0

package/bin/skills/bitsandbytes/references/qlora-training.md
@@ -0,0 +1,521 @@

# QLoRA Training

Complete guide to fine-tuning large language models using 4-bit quantization with QLoRA (Quantized Low-Rank Adaptation).

## Overview

QLoRA enables fine-tuning 70B+ parameter models on consumer GPUs by:

- Loading the base model in 4-bit (75% memory reduction)
- Training only small LoRA adapters (~20MB)
- Maintaining near-full-precision quality

**Memory savings**:

- Llama 2 70B: 140GB → 35GB (4-bit) + 20MB (LoRA) = **35GB total**
- Fits on a single A100 80GB!

**Accuracy**: <1% degradation vs full fine-tuning
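
For a sense of where figures like "140GB → 35GB" come from, here is a rough back-of-the-envelope sketch (an illustration, not part of the original guide; the 0.5 bytes/parameter and ~3% overhead figures are approximations):

```python
def estimate_qlora_memory_gb(n_params_billion: float, lora_adapter_mb: float = 20.0) -> float:
    """Very rough weight-memory estimate for a 4-bit (NF4) base model plus LoRA adapters."""
    base_weights_gb = n_params_billion * 0.5       # NF4 ≈ 0.5 bytes per parameter
    quant_constants_gb = base_weights_gb * 0.03    # ~3% overhead, lower with double quantization
    lora_gb = lora_adapter_mb / 1024               # adapters stay in 16-bit precision
    return base_weights_gb + quant_constants_gb + lora_gb

print(f"Llama 2 70B: ~{estimate_qlora_memory_gb(70):.1f} GB of weights in 4-bit")
# Activations, adapter gradients, and optimizer states come on top of this figure.
```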

## Quick Start

### Basic QLoRA Fine-tuning

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
import torch

# Step 1: Load model in 4-bit
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-70b-hf",
    quantization_config=bnb_config,
    device_map="auto",
    torch_dtype=torch.bfloat16
)

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-70b-hf")
tokenizer.pad_token = tokenizer.eos_token

# Step 2: Prepare for k-bit training
model = prepare_model_for_kbit_training(model)

# Step 3: Add LoRA adapters
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    target_modules="all-linear",
    lora_dropout=0.1,
    bias="none",
    task_type="CAUSAL_LM"
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# trainable params: 335M || all params: 70B || trainable%: 0.48%

# Step 4: Train
from trl import SFTTrainer

training_args = TrainingArguments(
    output_dir="./qlora-70b",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    num_train_epochs=3,
    learning_rate=2e-4,
    bf16=True,
    optim="paged_adamw_8bit",
    logging_steps=10,
    save_strategy="epoch"
)

trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,   # a datasets.Dataset with a "text" column (see Workflow 1 for dataset prep)
    tokenizer=tokenizer
)

trainer.train()
```

## Complete Training Workflows

### Workflow 1: Single GPU Training (Consumer GPU)

Train Llama 2 13B on an RTX 4090 (24GB).

**Step 1: Prepare dataset**

```python
from datasets import load_dataset

# Load instruction dataset (openassistant-guanaco already ships a formatted "text" column)
dataset = load_dataset("timdettmers/openassistant-guanaco")

# For a custom dataset with separate prompt/response fields, format it into a single "text" field:
def format_instruction(example):
    return {
        "text": f"### Human: {example['prompt']}\n### Assistant: {example['response']}"
    }

# dataset = dataset.map(format_instruction)  # only needed if your dataset is not already formatted
```

**Step 2: Configure quantization**

```python
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,   # BF16 for stability
    bnb_4bit_quant_type="nf4",               # NormalFloat4 (recommended)
    bnb_4bit_use_double_quant=True           # Nested quantization
)
```

**Step 3: Load and prepare model**

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf",
    quantization_config=bnb_config,
    device_map="auto"
)

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-hf")
tokenizer.pad_token = tokenizer.eos_token

# Enable gradient checkpointing (further memory savings)
model.gradient_checkpointing_enable()
model = prepare_model_for_kbit_training(model, use_gradient_checkpointing=True)
```

**Step 4: Configure LoRA**

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                          # LoRA rank (lower = less memory)
    lora_alpha=32,                 # Scaling factor
    target_modules="all-linear",   # Apply to all linear layers
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM"
)

model = get_peft_model(model, lora_config)
```

**Step 5: Train**

```python
training_args = TrainingArguments(
    output_dir="./qlora-13b-results",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,   # Effective batch = 16
    warmup_steps=100,
    num_train_epochs=1,
    learning_rate=2e-4,
    bf16=True,
    logging_steps=10,
    save_strategy="steps",
    save_steps=100,
    eval_strategy="steps",
    eval_steps=100,
    optim="paged_adamw_8bit",        # 8-bit optimizer
    max_grad_norm=0.3,
    max_steps=1000
)

trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    tokenizer=tokenizer,
    max_seq_length=512
)

trainer.train()
```

**Memory usage**: ~18GB on RTX 4090 (24GB)

### Workflow 2: Multi-GPU Training (FSDP + QLoRA)

Train Llama 2 70B on 8×A100 (80GB each).

**Step 1: Configure FSDP-compatible quantization**

```python
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_storage=torch.bfloat16   # CRITICAL for FSDP!
)
```

**Important**: `bnb_4bit_quant_storage=torch.bfloat16` ensures 4-bit layers are wrapped identically to regular layers for FSDP sharding.

**Step 2: Launch with accelerate**

Create `fsdp_config.yaml`:

```yaml
compute_environment: LOCAL_MACHINE
distributed_type: FSDP
fsdp_config:
  fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
  fsdp_backward_prefetch_policy: BACKWARD_PRE
  fsdp_forward_prefetch: true
  fsdp_sharding_strategy: 1   # FULL_SHARD
  fsdp_state_dict_type: SHARDED_STATE_DICT
  fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
mixed_precision: bf16
num_processes: 8
```

**Launch training**:

```bash
accelerate launch --config_file fsdp_config.yaml train_qlora.py
```

**train_qlora.py**:

```python
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-70b-hf",
    quantization_config=bnb_config,
    torch_dtype=torch.bfloat16
)

# Rest same as single-GPU workflow
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, lora_config)

trainer = SFTTrainer(...)
trainer.train()
```

**Memory per GPU**: ~40GB (70B model sharded across 8 GPUs)

### Workflow 3: Extremely Large Models (405B)

Train Llama 3.1 405B on 8×H100 (80GB each).

**Requirements**:
- 8×H100 80GB GPUs
- 256GB+ system RAM
- FSDP + QLoRA

**Configuration**:

```python
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_storage=torch.bfloat16
)

lora_config = LoraConfig(
    r=32,                            # Higher rank for 405B
    lora_alpha=64,
    target_modules="all-linear",
    lora_dropout=0.1,
    bias="none",
    task_type="CAUSAL_LM"
)

training_args = TrainingArguments(
    per_device_train_batch_size=1,   # Small batch
    gradient_accumulation_steps=32,  # Effective batch = 256
    learning_rate=1e-4,              # Lower LR for large model
    bf16=True,
    optim="paged_adamw_8bit",
    gradient_checkpointing=True
)
```

**Memory per GPU**: ~70GB (405B in 4-bit / 8 GPUs)

## Hyperparameter Tuning

### LoRA Rank (r)

Controls adapter capacity:

| Model Size | Recommended r | Trainable Params | Use Case |
|------------|---------------|------------------|----------|
| 7B | 8-16 | ~4M | Simple tasks |
| 13B | 16-32 | ~8M | General fine-tuning |
| 70B | 32-64 | ~80M | Complex tasks |
| 405B | 64-128 | ~300M | Maximum capacity |

**Trade-off**: Higher r = more capacity but more memory and slower training
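
As a rough illustration of how r drives adapter size, the sketch below counts LoRA parameters for a set of targeted weight matrices (the shapes and layer count are hypothetical; exact numbers depend on the architecture and which modules you target, so they will not match the table exactly):

```python
def lora_param_count(r: int, shapes: list[tuple[int, int]]) -> int:
    """Each adapted weight of shape (out_features, in_features) adds
    r * (in_features + out_features) trainable LoRA parameters."""
    return sum(r * (in_f + out_f) for out_f, in_f in shapes)

# Hypothetical model: 32 layers, 4096-wide attention projections, q/k/v/o targeted
per_layer = [(4096, 4096)] * 4
print(lora_param_count(16, per_layer * 32))   # ~16.8M trainable parameters at r=16
print(lora_param_count(64, per_layer * 32))   # ~67.1M at r=64: capacity scales linearly with r
```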

### LoRA Alpha

Scaling factor for LoRA updates:

```python
effective_learning_rate = learning_rate * (lora_alpha / r)
```

**Recommended**: `lora_alpha = 2 × r`
- r=16 → alpha=32
- r=64 → alpha=128

### Target Modules

**Options**:
- `"all-linear"`: All linear layers (recommended for QLoRA)
- `["q_proj", "v_proj"]`: Only attention (minimal)
- `["q_proj", "k_proj", "v_proj", "o_proj"]`: All attention
- `["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]`: Attention + FFN

**Trade-off**: More modules = better performance but more memory
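
To make the two ends of that spectrum concrete, here is a minimal sketch (module names assume a Llama-style architecture; adjust them for other model families):

```python
from peft import LoraConfig

# Minimal variant: adapt only the attention query/value projections
minimal_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

# QLoRA-style variant: adapt every linear layer
full_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules="all-linear",
    task_type="CAUSAL_LM",
)
```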

### Learning Rate

| Model Size | Recommended LR |
|------------|----------------|
| 7-13B | 2e-4 to 3e-4 |
| 70B | 1e-4 to 2e-4 |
| 405B | 5e-5 to 1e-4 |

**Rule**: Larger models need lower learning rates

### Batch Size

```python
effective_batch_size = per_device_batch_size * gradient_accumulation_steps * num_gpus
```

**Recommended effective batch sizes**:
- Instruction tuning: 64-128
- Continued pretraining: 256-512
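
For example, on a single GPU that fits a per-device batch of 4, one way to reach an effective batch of 128 for instruction tuning is to lean on gradient accumulation (illustrative values, not from the guide):

```python
from transformers import TrainingArguments

# effective batch = 4 (per device) * 32 (accumulation) * 1 (GPU) = 128
training_args = TrainingArguments(
    output_dir="./qlora-out",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=32,
    bf16=True,
)
```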

### Quantization Dtype

| Dtype | Speed | Accuracy | Use Case |
|-------|-------|----------|----------|
| `torch.float32` | Slow | Best | Debugging |
| `torch.bfloat16` | Fast | Good | **Recommended** |
| `torch.float16` | Fastest | Risky | May have precision issues |
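
If you prefer to pick the compute dtype at runtime rather than hard-coding it, here is a small sketch (assumption: use BF16 whenever the GPU supports it, otherwise fall back to FP16):

```python
import torch
from transformers import BitsAndBytesConfig

# Prefer BF16 on GPUs that support it (e.g. Ampere and newer), otherwise fall back to FP16
compute_dtype = torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float16

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=compute_dtype,
    bnb_4bit_use_double_quant=True,
)
```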

## Advanced Techniques

### Gradient Checkpointing

Save memory by recomputing activations:

```python
model.gradient_checkpointing_enable()
model = prepare_model_for_kbit_training(model, use_gradient_checkpointing=True)
```

**Memory savings**: ~30-40% activation memory
**Cost**: ~20% slower training

### Nested Quantization

Quantize the quantization constants:

```python
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True   # Enable nested quantization
)
```

**Memory savings**: Additional ~2-3% reduction
**Accuracy**: Minimal impact

### CPU Offloading

For models that still don't fit:

```python
model = AutoModelForCausalLM.from_pretrained(
    "model-name",
    quantization_config=bnb_config,
    device_map="auto",
    max_memory={0: "40GB", "cpu": "100GB"}
)
```

**Trade-off**: Much slower but enables larger models

### Paged Optimizers

Use paged memory for optimizer states:

```python
training_args = TrainingArguments(
    optim="paged_adamw_8bit"   # Or paged_adamw_32bit
)
```

**Benefit**: Prevents OOM from optimizer states

## Deployment

### Save LoRA Adapters

```python
# Save only adapters (~20MB)
model.save_pretrained("./qlora-adapters")
tokenizer.save_pretrained("./qlora-adapters")
```

### Load for Inference

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load base model in 4-bit
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-70b-hf",
    quantization_config=bnb_config,
    device_map="auto"
)

# Load adapters (the tokenizer was saved alongside them above)
model = PeftModel.from_pretrained(base_model, "./qlora-adapters")
tokenizer = AutoTokenizer.from_pretrained("./qlora-adapters")

# Inference
inputs = tokenizer("Question here", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_length=200)
```

### Merge Adapters (Optional)

```python
# Merge LoRA into base weights
model = model.merge_and_unload()

# Save merged model
model.save_pretrained("./merged-model")
```

**Note**: Merged model loses 4-bit quantization (back to FP16/BF16)
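
If you need a merged full-precision checkpoint (for example, to hand off to an inference engine that does not load PEFT adapters), a common pattern is to reload the base model in BF16 and merge the saved adapters into it. A sketch, assuming the adapters were saved to `./qlora-adapters` as above:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Reload the base model in full BF16 precision (no 4-bit quantization)
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-70b-hf",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Attach the trained adapters and fold them into the base weights
merged = PeftModel.from_pretrained(base, "./qlora-adapters").merge_and_unload()
merged.save_pretrained("./merged-model-bf16")
```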

## Troubleshooting

### OOM During Training

1. Reduce batch size:
```python
per_device_train_batch_size=1
```

2. Increase gradient accumulation:
```python
gradient_accumulation_steps=16
```

3. Lower LoRA rank:
```python
r=8   # Instead of 16
```

4. Enable gradient checkpointing

5. Use CPU offloading

### Low Quality Results

1. Increase LoRA rank:
```python
r=64   # Instead of 16
```

2. Train longer:
```python
num_train_epochs=3   # Instead of 1
```

3. Use more target modules:
```python
target_modules="all-linear"
```

4. Check learning rate (try 1e-4 to 3e-4)

### Slow Training

1. Disable gradient checkpointing (if memory allows)

2. Increase batch size

3. Use BF16:
```python
bf16=True
```

4. Use a paged optimizer

## Best Practices

1. **Start small**: Test on 7B before 70B
2. **Monitor loss**: Should decrease steadily
3. **Use validation**: Track eval loss to detect overfitting
4. **Save checkpoints**: Every 100-500 steps
5. **Log hyperparameters**: For reproducibility (see the sketch after this list)
6. **Test inference**: Verify quality before full training
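
One lightweight way to cover point 5 is to persist the exact configs next to the checkpoints; a sketch assuming the `lora_config` and `training_args` objects from Workflow 1:

```python
import json

# Persist the exact hyperparameters used for this run alongside the outputs
lora_config.save_pretrained("./qlora-13b-results")   # writes adapter_config.json
with open("./qlora-13b-results/training_args.json", "w") as f:
    json.dump(training_args.to_dict(), f, indent=2, default=str)
```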

## Example: Complete Training Script

See the full working example at `examples/qlora_training.py` in the repository.

## References

- QLoRA paper: "QLoRA: Efficient Finetuning of Quantized LLMs" (Dettmers et al., 2023)
- bitsandbytes GitHub: https://github.com/bitsandbytes-foundation/bitsandbytes
- PEFT documentation: https://huggingface.co/docs/peft
- FSDP+QLoRA guide: https://huggingface.co/blog/fsdp-qlora