@synsci/cli-darwin-x64 1.1.49
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/bin/skills/accelerate/SKILL.md +332 -0
- package/bin/skills/accelerate/references/custom-plugins.md +453 -0
- package/bin/skills/accelerate/references/megatron-integration.md +489 -0
- package/bin/skills/accelerate/references/performance.md +525 -0
- package/bin/skills/audiocraft/SKILL.md +564 -0
- package/bin/skills/audiocraft/references/advanced-usage.md +666 -0
- package/bin/skills/audiocraft/references/troubleshooting.md +504 -0
- package/bin/skills/autogpt/SKILL.md +403 -0
- package/bin/skills/autogpt/references/advanced-usage.md +535 -0
- package/bin/skills/autogpt/references/troubleshooting.md +420 -0
- package/bin/skills/awq/SKILL.md +310 -0
- package/bin/skills/awq/references/advanced-usage.md +324 -0
- package/bin/skills/awq/references/troubleshooting.md +344 -0
- package/bin/skills/axolotl/SKILL.md +158 -0
- package/bin/skills/axolotl/references/api.md +5548 -0
- package/bin/skills/axolotl/references/dataset-formats.md +1029 -0
- package/bin/skills/axolotl/references/index.md +15 -0
- package/bin/skills/axolotl/references/other.md +3563 -0
- package/bin/skills/bigcode-evaluation-harness/SKILL.md +405 -0
- package/bin/skills/bigcode-evaluation-harness/references/benchmarks.md +393 -0
- package/bin/skills/bigcode-evaluation-harness/references/custom-tasks.md +424 -0
- package/bin/skills/bigcode-evaluation-harness/references/issues.md +394 -0
- package/bin/skills/bitsandbytes/SKILL.md +411 -0
- package/bin/skills/bitsandbytes/references/memory-optimization.md +521 -0
- package/bin/skills/bitsandbytes/references/qlora-training.md +521 -0
- package/bin/skills/bitsandbytes/references/quantization-formats.md +447 -0
- package/bin/skills/blip-2/SKILL.md +564 -0
- package/bin/skills/blip-2/references/advanced-usage.md +680 -0
- package/bin/skills/blip-2/references/troubleshooting.md +526 -0
- package/bin/skills/chroma/SKILL.md +406 -0
- package/bin/skills/chroma/references/integration.md +38 -0
- package/bin/skills/clip/SKILL.md +253 -0
- package/bin/skills/clip/references/applications.md +207 -0
- package/bin/skills/constitutional-ai/SKILL.md +290 -0
- package/bin/skills/crewai/SKILL.md +498 -0
- package/bin/skills/crewai/references/flows.md +438 -0
- package/bin/skills/crewai/references/tools.md +429 -0
- package/bin/skills/crewai/references/troubleshooting.md +480 -0
- package/bin/skills/deepspeed/SKILL.md +141 -0
- package/bin/skills/deepspeed/references/08.md +17 -0
- package/bin/skills/deepspeed/references/09.md +173 -0
- package/bin/skills/deepspeed/references/2020.md +378 -0
- package/bin/skills/deepspeed/references/2023.md +279 -0
- package/bin/skills/deepspeed/references/assets.md +179 -0
- package/bin/skills/deepspeed/references/index.md +35 -0
- package/bin/skills/deepspeed/references/mii.md +118 -0
- package/bin/skills/deepspeed/references/other.md +1191 -0
- package/bin/skills/deepspeed/references/tutorials.md +6554 -0
- package/bin/skills/dspy/SKILL.md +590 -0
- package/bin/skills/dspy/references/examples.md +663 -0
- package/bin/skills/dspy/references/modules.md +475 -0
- package/bin/skills/dspy/references/optimizers.md +566 -0
- package/bin/skills/faiss/SKILL.md +221 -0
- package/bin/skills/faiss/references/index_types.md +280 -0
- package/bin/skills/flash-attention/SKILL.md +367 -0
- package/bin/skills/flash-attention/references/benchmarks.md +215 -0
- package/bin/skills/flash-attention/references/transformers-integration.md +293 -0
- package/bin/skills/gguf/SKILL.md +427 -0
- package/bin/skills/gguf/references/advanced-usage.md +504 -0
- package/bin/skills/gguf/references/troubleshooting.md +442 -0
- package/bin/skills/gptq/SKILL.md +450 -0
- package/bin/skills/gptq/references/calibration.md +337 -0
- package/bin/skills/gptq/references/integration.md +129 -0
- package/bin/skills/gptq/references/troubleshooting.md +95 -0
- package/bin/skills/grpo-rl-training/README.md +97 -0
- package/bin/skills/grpo-rl-training/SKILL.md +572 -0
- package/bin/skills/grpo-rl-training/examples/reward_functions_library.py +393 -0
- package/bin/skills/grpo-rl-training/templates/basic_grpo_training.py +228 -0
- package/bin/skills/guidance/SKILL.md +572 -0
- package/bin/skills/guidance/references/backends.md +554 -0
- package/bin/skills/guidance/references/constraints.md +674 -0
- package/bin/skills/guidance/references/examples.md +767 -0
- package/bin/skills/hqq/SKILL.md +445 -0
- package/bin/skills/hqq/references/advanced-usage.md +528 -0
- package/bin/skills/hqq/references/troubleshooting.md +503 -0
- package/bin/skills/hugging-face-cli/SKILL.md +191 -0
- package/bin/skills/hugging-face-cli/references/commands.md +954 -0
- package/bin/skills/hugging-face-cli/references/examples.md +374 -0
- package/bin/skills/hugging-face-datasets/SKILL.md +547 -0
- package/bin/skills/hugging-face-datasets/examples/diverse_training_examples.json +239 -0
- package/bin/skills/hugging-face-datasets/examples/system_prompt_template.txt +196 -0
- package/bin/skills/hugging-face-datasets/examples/training_examples.json +176 -0
- package/bin/skills/hugging-face-datasets/scripts/dataset_manager.py +522 -0
- package/bin/skills/hugging-face-datasets/scripts/sql_manager.py +844 -0
- package/bin/skills/hugging-face-datasets/templates/chat.json +55 -0
- package/bin/skills/hugging-face-datasets/templates/classification.json +62 -0
- package/bin/skills/hugging-face-datasets/templates/completion.json +51 -0
- package/bin/skills/hugging-face-datasets/templates/custom.json +75 -0
- package/bin/skills/hugging-face-datasets/templates/qa.json +54 -0
- package/bin/skills/hugging-face-datasets/templates/tabular.json +81 -0
- package/bin/skills/hugging-face-evaluation/SKILL.md +656 -0
- package/bin/skills/hugging-face-evaluation/examples/USAGE_EXAMPLES.md +382 -0
- package/bin/skills/hugging-face-evaluation/examples/artificial_analysis_to_hub.py +141 -0
- package/bin/skills/hugging-face-evaluation/examples/example_readme_tables.md +135 -0
- package/bin/skills/hugging-face-evaluation/examples/metric_mapping.json +50 -0
- package/bin/skills/hugging-face-evaluation/requirements.txt +20 -0
- package/bin/skills/hugging-face-evaluation/scripts/evaluation_manager.py +1374 -0
- package/bin/skills/hugging-face-evaluation/scripts/inspect_eval_uv.py +104 -0
- package/bin/skills/hugging-face-evaluation/scripts/inspect_vllm_uv.py +317 -0
- package/bin/skills/hugging-face-evaluation/scripts/lighteval_vllm_uv.py +303 -0
- package/bin/skills/hugging-face-evaluation/scripts/run_eval_job.py +98 -0
- package/bin/skills/hugging-face-evaluation/scripts/run_vllm_eval_job.py +331 -0
- package/bin/skills/hugging-face-evaluation/scripts/test_extraction.py +206 -0
- package/bin/skills/hugging-face-jobs/SKILL.md +1041 -0
- package/bin/skills/hugging-face-jobs/index.html +216 -0
- package/bin/skills/hugging-face-jobs/references/hardware_guide.md +336 -0
- package/bin/skills/hugging-face-jobs/references/hub_saving.md +352 -0
- package/bin/skills/hugging-face-jobs/references/token_usage.md +546 -0
- package/bin/skills/hugging-face-jobs/references/troubleshooting.md +475 -0
- package/bin/skills/hugging-face-jobs/scripts/cot-self-instruct.py +718 -0
- package/bin/skills/hugging-face-jobs/scripts/finepdfs-stats.py +546 -0
- package/bin/skills/hugging-face-jobs/scripts/generate-responses.py +587 -0
- package/bin/skills/hugging-face-model-trainer/SKILL.md +711 -0
- package/bin/skills/hugging-face-model-trainer/references/gguf_conversion.md +296 -0
- package/bin/skills/hugging-face-model-trainer/references/hardware_guide.md +283 -0
- package/bin/skills/hugging-face-model-trainer/references/hub_saving.md +364 -0
- package/bin/skills/hugging-face-model-trainer/references/reliability_principles.md +371 -0
- package/bin/skills/hugging-face-model-trainer/references/trackio_guide.md +189 -0
- package/bin/skills/hugging-face-model-trainer/references/training_methods.md +150 -0
- package/bin/skills/hugging-face-model-trainer/references/training_patterns.md +203 -0
- package/bin/skills/hugging-face-model-trainer/references/troubleshooting.md +282 -0
- package/bin/skills/hugging-face-model-trainer/scripts/convert_to_gguf.py +424 -0
- package/bin/skills/hugging-face-model-trainer/scripts/dataset_inspector.py +417 -0
- package/bin/skills/hugging-face-model-trainer/scripts/estimate_cost.py +150 -0
- package/bin/skills/hugging-face-model-trainer/scripts/train_dpo_example.py +106 -0
- package/bin/skills/hugging-face-model-trainer/scripts/train_grpo_example.py +89 -0
- package/bin/skills/hugging-face-model-trainer/scripts/train_sft_example.py +122 -0
- package/bin/skills/hugging-face-paper-publisher/SKILL.md +627 -0
- package/bin/skills/hugging-face-paper-publisher/examples/example_usage.md +327 -0
- package/bin/skills/hugging-face-paper-publisher/references/quick_reference.md +216 -0
- package/bin/skills/hugging-face-paper-publisher/scripts/paper_manager.py +508 -0
- package/bin/skills/hugging-face-paper-publisher/templates/arxiv.md +299 -0
- package/bin/skills/hugging-face-paper-publisher/templates/ml-report.md +358 -0
- package/bin/skills/hugging-face-paper-publisher/templates/modern.md +319 -0
- package/bin/skills/hugging-face-paper-publisher/templates/standard.md +201 -0
- package/bin/skills/hugging-face-tool-builder/SKILL.md +115 -0
- package/bin/skills/hugging-face-tool-builder/references/baseline_hf_api.py +57 -0
- package/bin/skills/hugging-face-tool-builder/references/baseline_hf_api.sh +40 -0
- package/bin/skills/hugging-face-tool-builder/references/baseline_hf_api.tsx +57 -0
- package/bin/skills/hugging-face-tool-builder/references/find_models_by_paper.sh +230 -0
- package/bin/skills/hugging-face-tool-builder/references/hf_enrich_models.sh +96 -0
- package/bin/skills/hugging-face-tool-builder/references/hf_model_card_frontmatter.sh +188 -0
- package/bin/skills/hugging-face-tool-builder/references/hf_model_papers_auth.sh +171 -0
- package/bin/skills/hugging-face-trackio/SKILL.md +65 -0
- package/bin/skills/hugging-face-trackio/references/logging_metrics.md +206 -0
- package/bin/skills/hugging-face-trackio/references/retrieving_metrics.md +223 -0
- package/bin/skills/huggingface-tokenizers/SKILL.md +516 -0
- package/bin/skills/huggingface-tokenizers/references/algorithms.md +653 -0
- package/bin/skills/huggingface-tokenizers/references/integration.md +637 -0
- package/bin/skills/huggingface-tokenizers/references/pipeline.md +723 -0
- package/bin/skills/huggingface-tokenizers/references/training.md +565 -0
- package/bin/skills/instructor/SKILL.md +740 -0
- package/bin/skills/instructor/references/examples.md +107 -0
- package/bin/skills/instructor/references/providers.md +70 -0
- package/bin/skills/instructor/references/validation.md +606 -0
- package/bin/skills/knowledge-distillation/SKILL.md +458 -0
- package/bin/skills/knowledge-distillation/references/minillm.md +334 -0
- package/bin/skills/lambda-labs/SKILL.md +545 -0
- package/bin/skills/lambda-labs/references/advanced-usage.md +611 -0
- package/bin/skills/lambda-labs/references/troubleshooting.md +530 -0
- package/bin/skills/langchain/SKILL.md +480 -0
- package/bin/skills/langchain/references/agents.md +499 -0
- package/bin/skills/langchain/references/integration.md +562 -0
- package/bin/skills/langchain/references/rag.md +600 -0
- package/bin/skills/langsmith/SKILL.md +422 -0
- package/bin/skills/langsmith/references/advanced-usage.md +548 -0
- package/bin/skills/langsmith/references/troubleshooting.md +537 -0
- package/bin/skills/litgpt/SKILL.md +469 -0
- package/bin/skills/litgpt/references/custom-models.md +568 -0
- package/bin/skills/litgpt/references/distributed-training.md +451 -0
- package/bin/skills/litgpt/references/supported-models.md +336 -0
- package/bin/skills/litgpt/references/training-recipes.md +619 -0
- package/bin/skills/llama-cpp/SKILL.md +258 -0
- package/bin/skills/llama-cpp/references/optimization.md +89 -0
- package/bin/skills/llama-cpp/references/quantization.md +213 -0
- package/bin/skills/llama-cpp/references/server.md +125 -0
- package/bin/skills/llama-factory/SKILL.md +80 -0
- package/bin/skills/llama-factory/references/_images.md +23 -0
- package/bin/skills/llama-factory/references/advanced.md +1055 -0
- package/bin/skills/llama-factory/references/getting_started.md +349 -0
- package/bin/skills/llama-factory/references/index.md +19 -0
- package/bin/skills/llama-factory/references/other.md +31 -0
- package/bin/skills/llamaguard/SKILL.md +337 -0
- package/bin/skills/llamaindex/SKILL.md +569 -0
- package/bin/skills/llamaindex/references/agents.md +83 -0
- package/bin/skills/llamaindex/references/data_connectors.md +108 -0
- package/bin/skills/llamaindex/references/query_engines.md +406 -0
- package/bin/skills/llava/SKILL.md +304 -0
- package/bin/skills/llava/references/training.md +197 -0
- package/bin/skills/lm-evaluation-harness/SKILL.md +490 -0
- package/bin/skills/lm-evaluation-harness/references/api-evaluation.md +490 -0
- package/bin/skills/lm-evaluation-harness/references/benchmark-guide.md +488 -0
- package/bin/skills/lm-evaluation-harness/references/custom-tasks.md +602 -0
- package/bin/skills/lm-evaluation-harness/references/distributed-eval.md +519 -0
- package/bin/skills/long-context/SKILL.md +536 -0
- package/bin/skills/long-context/references/extension_methods.md +468 -0
- package/bin/skills/long-context/references/fine_tuning.md +611 -0
- package/bin/skills/long-context/references/rope.md +402 -0
- package/bin/skills/mamba/SKILL.md +260 -0
- package/bin/skills/mamba/references/architecture-details.md +206 -0
- package/bin/skills/mamba/references/benchmarks.md +255 -0
- package/bin/skills/mamba/references/training-guide.md +388 -0
- package/bin/skills/megatron-core/SKILL.md +366 -0
- package/bin/skills/megatron-core/references/benchmarks.md +249 -0
- package/bin/skills/megatron-core/references/parallelism-guide.md +404 -0
- package/bin/skills/megatron-core/references/production-examples.md +473 -0
- package/bin/skills/megatron-core/references/training-recipes.md +547 -0
- package/bin/skills/miles/SKILL.md +315 -0
- package/bin/skills/miles/references/api-reference.md +141 -0
- package/bin/skills/miles/references/troubleshooting.md +352 -0
- package/bin/skills/mlflow/SKILL.md +704 -0
- package/bin/skills/mlflow/references/deployment.md +744 -0
- package/bin/skills/mlflow/references/model-registry.md +770 -0
- package/bin/skills/mlflow/references/tracking.md +680 -0
- package/bin/skills/modal/SKILL.md +341 -0
- package/bin/skills/modal/references/advanced-usage.md +503 -0
- package/bin/skills/modal/references/troubleshooting.md +494 -0
- package/bin/skills/model-merging/SKILL.md +539 -0
- package/bin/skills/model-merging/references/evaluation.md +462 -0
- package/bin/skills/model-merging/references/examples.md +428 -0
- package/bin/skills/model-merging/references/methods.md +352 -0
- package/bin/skills/model-pruning/SKILL.md +495 -0
- package/bin/skills/model-pruning/references/wanda.md +347 -0
- package/bin/skills/moe-training/SKILL.md +526 -0
- package/bin/skills/moe-training/references/architectures.md +432 -0
- package/bin/skills/moe-training/references/inference.md +348 -0
- package/bin/skills/moe-training/references/training.md +425 -0
- package/bin/skills/nanogpt/SKILL.md +290 -0
- package/bin/skills/nanogpt/references/architecture.md +382 -0
- package/bin/skills/nanogpt/references/data.md +476 -0
- package/bin/skills/nanogpt/references/training.md +564 -0
- package/bin/skills/nemo-curator/SKILL.md +383 -0
- package/bin/skills/nemo-curator/references/deduplication.md +87 -0
- package/bin/skills/nemo-curator/references/filtering.md +102 -0
- package/bin/skills/nemo-evaluator/SKILL.md +494 -0
- package/bin/skills/nemo-evaluator/references/adapter-system.md +340 -0
- package/bin/skills/nemo-evaluator/references/configuration.md +447 -0
- package/bin/skills/nemo-evaluator/references/custom-benchmarks.md +315 -0
- package/bin/skills/nemo-evaluator/references/execution-backends.md +361 -0
- package/bin/skills/nemo-guardrails/SKILL.md +297 -0
- package/bin/skills/nnsight/SKILL.md +436 -0
- package/bin/skills/nnsight/references/README.md +78 -0
- package/bin/skills/nnsight/references/api.md +344 -0
- package/bin/skills/nnsight/references/tutorials.md +300 -0
- package/bin/skills/openrlhf/SKILL.md +249 -0
- package/bin/skills/openrlhf/references/algorithm-comparison.md +404 -0
- package/bin/skills/openrlhf/references/custom-rewards.md +530 -0
- package/bin/skills/openrlhf/references/hybrid-engine.md +287 -0
- package/bin/skills/openrlhf/references/multi-node-training.md +454 -0
- package/bin/skills/outlines/SKILL.md +652 -0
- package/bin/skills/outlines/references/backends.md +615 -0
- package/bin/skills/outlines/references/examples.md +773 -0
- package/bin/skills/outlines/references/json_generation.md +652 -0
- package/bin/skills/peft/SKILL.md +431 -0
- package/bin/skills/peft/references/advanced-usage.md +514 -0
- package/bin/skills/peft/references/troubleshooting.md +480 -0
- package/bin/skills/phoenix/SKILL.md +475 -0
- package/bin/skills/phoenix/references/advanced-usage.md +619 -0
- package/bin/skills/phoenix/references/troubleshooting.md +538 -0
- package/bin/skills/pinecone/SKILL.md +358 -0
- package/bin/skills/pinecone/references/deployment.md +181 -0
- package/bin/skills/pytorch-fsdp/SKILL.md +126 -0
- package/bin/skills/pytorch-fsdp/references/index.md +7 -0
- package/bin/skills/pytorch-fsdp/references/other.md +4249 -0
- package/bin/skills/pytorch-lightning/SKILL.md +346 -0
- package/bin/skills/pytorch-lightning/references/callbacks.md +436 -0
- package/bin/skills/pytorch-lightning/references/distributed.md +490 -0
- package/bin/skills/pytorch-lightning/references/hyperparameter-tuning.md +556 -0
- package/bin/skills/pyvene/SKILL.md +473 -0
- package/bin/skills/pyvene/references/README.md +73 -0
- package/bin/skills/pyvene/references/api.md +383 -0
- package/bin/skills/pyvene/references/tutorials.md +376 -0
- package/bin/skills/qdrant/SKILL.md +493 -0
- package/bin/skills/qdrant/references/advanced-usage.md +648 -0
- package/bin/skills/qdrant/references/troubleshooting.md +631 -0
- package/bin/skills/ray-data/SKILL.md +326 -0
- package/bin/skills/ray-data/references/integration.md +82 -0
- package/bin/skills/ray-data/references/transformations.md +83 -0
- package/bin/skills/ray-train/SKILL.md +406 -0
- package/bin/skills/ray-train/references/multi-node.md +628 -0
- package/bin/skills/rwkv/SKILL.md +260 -0
- package/bin/skills/rwkv/references/architecture-details.md +344 -0
- package/bin/skills/rwkv/references/rwkv7.md +386 -0
- package/bin/skills/rwkv/references/state-management.md +369 -0
- package/bin/skills/saelens/SKILL.md +386 -0
- package/bin/skills/saelens/references/README.md +70 -0
- package/bin/skills/saelens/references/api.md +333 -0
- package/bin/skills/saelens/references/tutorials.md +318 -0
- package/bin/skills/segment-anything/SKILL.md +500 -0
- package/bin/skills/segment-anything/references/advanced-usage.md +589 -0
- package/bin/skills/segment-anything/references/troubleshooting.md +484 -0
- package/bin/skills/sentence-transformers/SKILL.md +255 -0
- package/bin/skills/sentence-transformers/references/models.md +123 -0
- package/bin/skills/sentencepiece/SKILL.md +235 -0
- package/bin/skills/sentencepiece/references/algorithms.md +200 -0
- package/bin/skills/sentencepiece/references/training.md +304 -0
- package/bin/skills/sglang/SKILL.md +442 -0
- package/bin/skills/sglang/references/deployment.md +490 -0
- package/bin/skills/sglang/references/radix-attention.md +413 -0
- package/bin/skills/sglang/references/structured-generation.md +541 -0
- package/bin/skills/simpo/SKILL.md +219 -0
- package/bin/skills/simpo/references/datasets.md +478 -0
- package/bin/skills/simpo/references/hyperparameters.md +452 -0
- package/bin/skills/simpo/references/loss-functions.md +350 -0
- package/bin/skills/skypilot/SKILL.md +509 -0
- package/bin/skills/skypilot/references/advanced-usage.md +491 -0
- package/bin/skills/skypilot/references/troubleshooting.md +570 -0
- package/bin/skills/slime/SKILL.md +464 -0
- package/bin/skills/slime/references/api-reference.md +392 -0
- package/bin/skills/slime/references/troubleshooting.md +386 -0
- package/bin/skills/speculative-decoding/SKILL.md +467 -0
- package/bin/skills/speculative-decoding/references/lookahead.md +309 -0
- package/bin/skills/speculative-decoding/references/medusa.md +350 -0
- package/bin/skills/stable-diffusion/SKILL.md +519 -0
- package/bin/skills/stable-diffusion/references/advanced-usage.md +716 -0
- package/bin/skills/stable-diffusion/references/troubleshooting.md +555 -0
- package/bin/skills/tensorboard/SKILL.md +629 -0
- package/bin/skills/tensorboard/references/integrations.md +638 -0
- package/bin/skills/tensorboard/references/profiling.md +545 -0
- package/bin/skills/tensorboard/references/visualization.md +620 -0
- package/bin/skills/tensorrt-llm/SKILL.md +187 -0
- package/bin/skills/tensorrt-llm/references/multi-gpu.md +298 -0
- package/bin/skills/tensorrt-llm/references/optimization.md +242 -0
- package/bin/skills/tensorrt-llm/references/serving.md +470 -0
- package/bin/skills/tinker/SKILL.md +362 -0
- package/bin/skills/tinker/references/api-reference.md +168 -0
- package/bin/skills/tinker/references/getting-started.md +157 -0
- package/bin/skills/tinker/references/loss-functions.md +163 -0
- package/bin/skills/tinker/references/models-and-lora.md +139 -0
- package/bin/skills/tinker/references/recipes.md +280 -0
- package/bin/skills/tinker/references/reinforcement-learning.md +212 -0
- package/bin/skills/tinker/references/rendering.md +243 -0
- package/bin/skills/tinker/references/supervised-learning.md +232 -0
- package/bin/skills/tinker-training-cost/SKILL.md +187 -0
- package/bin/skills/tinker-training-cost/scripts/calculate_cost.py +123 -0
- package/bin/skills/torchforge/SKILL.md +433 -0
- package/bin/skills/torchforge/references/api-reference.md +327 -0
- package/bin/skills/torchforge/references/troubleshooting.md +409 -0
- package/bin/skills/torchtitan/SKILL.md +358 -0
- package/bin/skills/torchtitan/references/checkpoint.md +181 -0
- package/bin/skills/torchtitan/references/custom-models.md +258 -0
- package/bin/skills/torchtitan/references/float8.md +133 -0
- package/bin/skills/torchtitan/references/fsdp.md +126 -0
- package/bin/skills/transformer-lens/SKILL.md +346 -0
- package/bin/skills/transformer-lens/references/README.md +54 -0
- package/bin/skills/transformer-lens/references/api.md +362 -0
- package/bin/skills/transformer-lens/references/tutorials.md +339 -0
- package/bin/skills/trl-fine-tuning/SKILL.md +455 -0
- package/bin/skills/trl-fine-tuning/references/dpo-variants.md +227 -0
- package/bin/skills/trl-fine-tuning/references/online-rl.md +82 -0
- package/bin/skills/trl-fine-tuning/references/reward-modeling.md +122 -0
- package/bin/skills/trl-fine-tuning/references/sft-training.md +168 -0
- package/bin/skills/unsloth/SKILL.md +80 -0
- package/bin/skills/unsloth/references/index.md +7 -0
- package/bin/skills/unsloth/references/llms-full.md +16799 -0
- package/bin/skills/unsloth/references/llms-txt.md +12044 -0
- package/bin/skills/unsloth/references/llms.md +82 -0
- package/bin/skills/verl/SKILL.md +391 -0
- package/bin/skills/verl/references/api-reference.md +301 -0
- package/bin/skills/verl/references/troubleshooting.md +391 -0
- package/bin/skills/vllm/SKILL.md +364 -0
- package/bin/skills/vllm/references/optimization.md +226 -0
- package/bin/skills/vllm/references/quantization.md +284 -0
- package/bin/skills/vllm/references/server-deployment.md +255 -0
- package/bin/skills/vllm/references/troubleshooting.md +447 -0
- package/bin/skills/weights-and-biases/SKILL.md +590 -0
- package/bin/skills/weights-and-biases/references/artifacts.md +584 -0
- package/bin/skills/weights-and-biases/references/integrations.md +700 -0
- package/bin/skills/weights-and-biases/references/sweeps.md +847 -0
- package/bin/skills/whisper/SKILL.md +317 -0
- package/bin/skills/whisper/references/languages.md +189 -0
- package/bin/synsc +0 -0
- package/package.json +10 -0
@@ -0,0 +1,447 @@
# Quantization Formats

Complete guide to INT8, NF4, FP4 quantization formats, double quantization, and custom configurations in bitsandbytes.

## Overview

bitsandbytes supports multiple quantization formats:

- **INT8**: 8-bit integer quantization (LLM.int8())
- **NF4**: 4-bit NormalFloat (for normally distributed weights)
- **FP4**: 4-bit floating point (for uniformly distributed weights)
- **Double Quantization**: Quantize the quantization constants

## INT8 Quantization

### LLM.int8() Algorithm

LLM.int8() uses mixed 8-bit/16-bit matrix multiplication:

- Most features (>99.9%) computed in INT8
- Outlier features (>threshold) computed in FP16
- Results combined for final output

**Memory**: 50% reduction (2 bytes → 1 byte per parameter)
**Accuracy**: <0.5% degradation
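The decomposition can be sketched in a few lines of plain Python — an illustration of the idea on a single dot product, not the actual CUDA kernel (the 6.0 threshold mirrors the library's default):

```python
def mixed_precision_dot(x, w, threshold=6.0):
    """Sketch of LLM.int8() on one dot product: activations whose
    magnitude exceeds `threshold` stay in full precision, the rest
    go through a symmetric INT8 round trip."""
    outliers = [(xi, wi) for xi, wi in zip(x, w) if abs(xi) > threshold]
    regular = [(xi, wi) for xi, wi in zip(x, w) if abs(xi) <= threshold]

    total = 0.0
    if regular:
        # One shared scale maps the sub-threshold activations onto [-127, 127]
        scale = max(abs(xi) for xi, _ in regular) / 127.0
        if scale > 0:
            total += sum(round(xi / scale) * scale * wi for xi, wi in regular)
    # Outlier path: multiplied at full precision (FP16 in the real kernel)
    total += sum(xi * wi for xi, wi in outliers)
    return total
```

Because the outlier terms bypass quantization entirely, the round-trip error comes only from the well-behaved sub-threshold activations, which is why overall degradation stays under 0.5%.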
### Configuration

```python
from transformers import BitsAndBytesConfig

config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,            # Outlier threshold
    llm_int8_has_fp16_weight=False,    # Use INT8 storage
    llm_int8_skip_modules=["lm_head"]  # Skip certain layers
)
```

### Parameters Explained

**`llm_int8_threshold`** (default: 6.0):

- Activations with magnitude > threshold are kept in FP16
- Lower = more FP16 (slower but more accurate)
- Higher = more INT8 (faster but less accurate)

```python
# Conservative (more accurate)
llm_int8_threshold=5.0

# Aggressive (faster)
llm_int8_threshold=8.0
```

**`llm_int8_has_fp16_weight`** (default: False):

- `False`: Store weights in INT8 (50% memory savings)
- `True`: Store in FP16, quantize only during computation (no memory savings)

**`llm_int8_skip_modules`**:

```python
# Skip specific layers (keep in FP16)
llm_int8_skip_modules=["lm_head", "embed_tokens"]
```

### Example

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf",
    quantization_config=config,
    device_map="auto"
)

# Memory: 26GB (FP16) → 13GB (INT8)
```
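The 26GB → 13GB figure follows directly from bytes per parameter; a back-of-envelope helper (the 5% overhead allowance for scales and buffers is an assumption, and activations/KV cache are ignored):

```python
def weight_memory_gb(n_params, bits_per_param, overhead=1.05):
    """Estimate quantized weight memory in GiB; `overhead` is a rough
    allowance for quantization constants and buffers (an assumption)."""
    return n_params * bits_per_param / 8 / 1024**3 * overhead

print(weight_memory_gb(13e9, 16))  # FP16 baseline for a 13B model
print(weight_memory_gb(13e9, 8))   # INT8
print(weight_memory_gb(13e9, 4))   # NF4/FP4
```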
### When to Use INT8

✅ **Use INT8 when**:

- Need high accuracy (<0.5% loss)
- Model fits with 50% reduction
- Have Turing+ GPU (tensor cores)

❌ **Don't use when**:

- Need maximum memory savings (use 4-bit)
- Inference speed critical (use GPTQ/AWQ)

## 4-Bit Quantization

### NormalFloat4 (NF4)

Optimized for normally distributed weights (most neural networks).

**How it works**:

- Bins chosen to minimize quantization error for a normal distribution
- Asymmetric quantization bins
- Better for transformer weights

**Configuration**:

```python
config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_type="nf4"  # NormalFloat4
)
```

**Memory**: 75% reduction (2 bytes → 0.5 bytes per parameter)
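The "bins at normal quantiles" idea can be reproduced with the standard library. This is a sketch only — the real QLoRA code book is constructed slightly differently so that zero is exactly representable:

```python
from statistics import NormalDist

def normalfloat_levels(bits=4):
    """Sketch of the NF4 idea: 2**bits code points at equally spaced
    quantiles of N(0, 1), rescaled to [-1, 1]."""
    n = 2 ** bits
    nd = NormalDist()
    # Quantiles strictly inside (0, 1) so the inverse CDF stays finite
    qs = [nd.inv_cdf((i + 0.5) / n) for i in range(n)]
    lim = max(abs(q) for q in qs)
    return [q / lim for q in qs]

def quantize_nf4(value, levels):
    """Round a weight (pre-scaled to [-1, 1]) to the nearest code point."""
    return min(levels, key=lambda lv: abs(lv - value))
```

The resulting code points cluster near zero and thin out toward ±1, which is exactly why this format wastes fewer bits than FP4 on bell-shaped weight distributions.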
### FloatPoint4 (FP4)

Standard 4-bit floating point for uniform distributions.

**How it works**:

- Symmetric quantization bins
- Better for weights with broader dynamic range
- Less common for transformers

**Configuration**:

```python
config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_type="fp4"  # FloatPoint4
)
```

### NF4 vs FP4 Comparison

| Aspect | NF4 | FP4 |
|--------|-----|-----|
| Distribution | Normal | Uniform |
| Typical use | **Transformers** | CNNs, unusual architectures |
| Accuracy | **Better for LLMs** | Worse for LLMs |
| Speed | Same | Same |
| Recommendation | ✅ Default | Use only if NF4 fails |

**Rule of thumb**: Always use NF4 for transformers.

### Example Comparison

```python
# NF4 (recommended)
nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4"
)

# FP4 (alternative)
fp4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="fp4"
)

# Load and compare
model_nf4 = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=nf4_config
)

model_fp4 = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=fp4_config
)

# Typical results on MMLU:
# NF4:  45.2%
# FP4:  43.8%
# FP16: 45.9%
```

## Compute Dtype

The `bnb_4bit_compute_dtype` controls the precision used for actual computation.

### Options

**torch.bfloat16** (recommended):

```python
bnb_4bit_compute_dtype=torch.bfloat16
```

- Good balance of speed and accuracy
- Recommended for A100/H100
- Prevents numerical instability

**torch.float16**:

```python
bnb_4bit_compute_dtype=torch.float16
```

- Slightly faster than BF16
- Risk of overflow/underflow
- Use only if BF16 unavailable

**torch.float32**:

```python
bnb_4bit_compute_dtype=torch.float32
```

- Most accurate
- Slowest (no tensor core acceleration)
- Debugging only

### Performance Comparison

| Dtype | Speed | Accuracy | Memory |
|-------|-------|----------|--------|
| FP32 | 1× (baseline) | 100% | 4 bytes |
| FP16 | 3-4× | 99.5% | 2 bytes |
| BF16 | 3-4× | **99.8%** | 2 bytes |

**Recommendation**: Always use `torch.bfloat16` if supported.
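The overflow risk is visible without a GPU: FP16's largest finite value is 65504, while BF16 keeps FP32's full exponent range at reduced mantissa precision. A standard-library sketch (bit truncation stands in for real BF16 rounding):

```python
import struct

def fits_fp16(x):
    """True if `x` survives an FP16 round trip without overflowing."""
    try:
        return abs(struct.unpack('e', struct.pack('e', x))[0]) != float('inf')
    except OverflowError:
        return False

def to_bf16(x):
    """Approximate BF16: FP32 with the low 16 mantissa bits dropped,
    so the exponent range (and thus overflow behavior) matches FP32."""
    (bits,) = struct.unpack('I', struct.pack('f', x))
    return struct.unpack('f', struct.pack('I', bits & 0xFFFF0000))[0]
```

Intermediate activations in large models routinely exceed 65504, which is why FP16 compute can silently produce infinities where BF16 merely loses a little precision.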
## Double Quantization

Quantize the quantization constants for additional memory savings.

### How It Works

Standard 4-bit quantization stores:

- 4-bit quantized weights
- FP32 scaling factors (4 bytes per block)

Double quantization stores:

- 4-bit quantized weights
- **INT8 quantized scaling factors** (1 byte per block)

**Additional savings**: ~2-3% memory reduction
|
|
227
|
+
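
The per-parameter arithmetic behind this is straightforward. A back-of-envelope sketch, assuming 64-element blocks as above; the second-level block size of 256 follows the QLoRA paper and is an assumption here:

```python
# Storage cost per quantized parameter, in bytes.
BLOCK = 64

weight = 4 / 8                             # 4-bit weight
scale_fp32 = 4 / BLOCK                     # one FP32 absmax per block
scale_dq = 1 / BLOCK + 4 / (BLOCK * 256)   # INT8 scale + one FP32 constant per 256 blocks

single = weight + scale_fp32               # ~0.5625 bytes/param
double = weight + scale_dq                 # ~0.5159 bytes/param
saved_bits = (single - double) * 8         # ~0.37 bits saved per parameter
```

That is roughly an 8% saving on the quantized weights themselves; the end-to-end model figure is smaller (the ~2-3% above) because embeddings and some layers are typically left unquantized.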

### Configuration

```python
config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True  # Enable double quantization
)
```

### Example

```python
# Without double quant
model_single = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-70b-hf",
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_use_double_quant=False
    )
)
# Memory: ~36GB

# With double quant
model_double = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-70b-hf",
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_use_double_quant=True
    )
)
# Memory: ~35GB (saves ~1GB)
```

**Accuracy impact**: Negligible (<0.1%)

**Recommendation**: Always enable for maximum memory savings.

## Quantization Storage

Controls the storage dtype for quantized weights (important for FSDP).

### Configuration

```python
config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_storage=torch.bfloat16  # Storage dtype
)
```

### When to Use

**Default (uint8)**:
- Single-GPU training/inference
- No special requirements

**torch.bfloat16** (for FSDP):
```python
bnb_4bit_quant_storage=torch.bfloat16
```
- **Required for FSDP+QLoRA**
- Ensures 4-bit layers are wrapped like regular layers
- Enables proper model sharding

### Example: FSDP Configuration

```python
# CRITICAL: Set quant_storage for FSDP
fsdp_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_storage=torch.bfloat16  # Must match torch_dtype!
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-70b-hf",
    quantization_config=fsdp_config,
    torch_dtype=torch.bfloat16  # Must match quant_storage!
)
```

## Recommended Configurations

### Production Inference (Best Accuracy)

```python
BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0
)
```

**Use case**: Maximum accuracy with 50% memory savings

### Production Inference (Maximum Memory Savings)

```python
BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True
)
```

**Use case**: 75% memory reduction with <1% accuracy loss
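
The 50% and 75% figures follow from weight-only bit counts. A toy estimate, ignoring activations and KV cache:

```python
def weight_gb(n_params: float, bits_per_param: float) -> float:
    """Rough weight-only memory estimate; ignores activations and KV cache."""
    return n_params * bits_per_param / 8 / 2**30

n = 7e9                       # e.g. Llama 2 7B
fp16 = weight_gb(n, 16)       # ~13.0 GB
int8 = weight_gb(n, 8)        # ~6.5 GB  (the "50% savings" row)
nf4 = weight_gb(n, 4.5)       # ~3.7 GB  (4-bit weights + per-block scales)

reduction = 1 - nf4 / fp16    # ~0.72, roughly the 75% quoted above
```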

### QLoRA Training (Single GPU)

```python
BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True
)
```

**Use case**: Fine-tune models up to ~33B on a 24GB RTX 3090

### FSDP + QLoRA (Multi-GPU)

```python
BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_storage=torch.bfloat16  # CRITICAL!
)
```

**Use case**: Fine-tune 405B on 8×H100

## Advanced: Block-wise Quantization

bitsandbytes uses block-wise quantization:
- Weights are divided into blocks (typically 64 or 128 elements)
- Each block has its own scaling factor
- Better accuracy than tensor-wise quantization

**Block size** (automatically determined):
```python
# Typical block sizes
# 4-bit: 64 elements per block
# 8-bit: 64 elements per block
```

**Cannot be configured** (internal implementation detail).
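
To make the block-wise scheme concrete, here is a toy absmax quantizer in NumPy. It is illustrative only: bitsandbytes uses fused CUDA kernels and 4-bit NF4/FP4 code books, while this sketch rounds to int8 for simplicity:

```python
import numpy as np

def blockwise_quantize(w: np.ndarray, block_size: int = 64):
    """Toy block-wise absmax quantization to int8 (illustrative only)."""
    blocks = w.reshape(-1, block_size)
    scales = np.abs(blocks).max(axis=1, keepdims=True)  # one absmax per block
    q = np.round(blocks / scales * 127).astype(np.int8)
    return q, scales

def blockwise_dequantize(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    return q.astype(np.float32) / 127 * scales

rng = np.random.default_rng(0)
w = rng.normal(size=4096).astype(np.float32)
q, scales = blockwise_quantize(w)
w_hat = blockwise_dequantize(q, scales).reshape(-1)
max_err = np.abs(w - w_hat).max()  # bounded by (block absmax) / 254
```

Because each block carries its own scale, one outlier inflates the quantization step only within its 64-element block, which is the accuracy advantage over a single tensor-wide scale.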

## Quantization Quality Metrics

### Perplexity (Lower is Better)

| Model | FP16 | INT8 | NF4 | NF4+DQ |
|-------|------|------|-----|--------|
| Llama 2 7B | 5.12 | 5.14 | 5.18 | 5.19 |
| Llama 2 13B | 4.88 | 4.90 | 4.93 | 4.94 |
| Llama 2 70B | 3.32 | 3.33 | 3.35 | 3.36 |

**Conclusion**: <1% degradation for all quantization methods
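
For reference, perplexity in tables like this is the exponential of the mean per-token negative log-likelihood on a held-out corpus (often WikiText-2 for such comparisons). A minimal definition:

```python
import math

def perplexity(token_nlls: list[float]) -> float:
    """Perplexity = exp(mean negative log-likelihood per token)."""
    return math.exp(sum(token_nlls) / len(token_nlls))

# A model that assigns every token probability 1/5.12 has perplexity 5.12:
nlls = [math.log(5.12)] * 1000
```

In practice the per-token NLLs come from the model's cross-entropy loss evaluated over sliding windows of the corpus.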

### MMLU Accuracy (Higher is Better)

| Model | FP16 | INT8 | NF4 | FP4 |
|-------|------|------|-----|-----|
| Llama 2 7B | 45.9% | 45.7% | 45.2% | 43.8% |
| Llama 2 13B | 54.8% | 54.6% | 54.1% | 52.9% |
| Llama 2 70B | 68.9% | 68.7% | 68.4% | 67.2% |

**Conclusion**: NF4 is significantly better than FP4 for transformers
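
One way to see why a normal-aware code like NF4 wins: trained transformer weights are roughly normally distributed, so placing the 16 levels at normal quantiles (the idea behind NF4) quantizes them with lower error than generic level placements. A stdlib-only sketch; the level construction below is a simplification of the actual NF4 code book, and the uniform grid stands in for a code not matched to the weight distribution:

```python
import random
import statistics

# 16 levels at evenly spaced quantiles of N(0, 1), rescaled to [-1, 1]
# (a simplified stand-in for the real NF4 code book).
nd = statistics.NormalDist()
qs = [nd.inv_cdf((i + 0.5) / 16) for i in range(16)]
m = max(abs(q) for q in qs)
normal_levels = [q / m for q in qs]

# 16 evenly spaced levels on [-1, 1] for comparison.
uniform_levels = [-1 + 2 * i / 15 for i in range(16)]

def quant_mse(levels: list[float], xs: list[float]) -> float:
    """Mean squared error of rounding each value to its nearest level."""
    return sum(min((x - l) ** 2 for l in levels) for x in xs) / len(xs)

# Gaussian "weights", absmax-normalized as bitsandbytes does per block.
rng = random.Random(0)
w = [rng.gauss(0, 1) for _ in range(4000)]
scale = max(abs(x) for x in w)
wn = [x / scale for x in w]

mse_normal = quant_mse(normal_levels, wn)
mse_uniform = quant_mse(uniform_levels, wn)
```

The quantile-based levels cluster near zero, where most of the (normalized) weight mass sits, so their quantization error comes out measurably lower.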

## Troubleshooting

### "Quantization failed" Error

Try a different quant type:
```python
# If NF4 fails
bnb_4bit_quant_type="fp4"
```

### Numerical Instability

Use BF16 compute:
```python
bnb_4bit_compute_dtype=torch.bfloat16
```

### Poor Quality with 4-bit

1. Try 8-bit instead:
```python
load_in_8bit=True
```

2. Enable double quantization:
```python
bnb_4bit_use_double_quant=True
```

3. Use BF16 compute dtype

### FSDP Errors

Ensure `quant_storage` matches `torch_dtype`:
```python
bnb_4bit_quant_storage=torch.bfloat16
torch_dtype=torch.bfloat16  # Must match!
```

## References

- LLM.int8() paper: "LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale" (2022)
- QLoRA paper: "QLoRA: Efficient Finetuning of Quantized LLMs" (2023)
- bitsandbytes GitHub: https://github.com/bitsandbytes-foundation/bitsandbytes
- HuggingFace quantization docs: https://huggingface.co/docs/transformers/quantization/bitsandbytes