@synsci/cli-darwin-x64 1.1.49
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/bin/skills/accelerate/SKILL.md +332 -0
- package/bin/skills/accelerate/references/custom-plugins.md +453 -0
- package/bin/skills/accelerate/references/megatron-integration.md +489 -0
- package/bin/skills/accelerate/references/performance.md +525 -0
- package/bin/skills/audiocraft/SKILL.md +564 -0
- package/bin/skills/audiocraft/references/advanced-usage.md +666 -0
- package/bin/skills/audiocraft/references/troubleshooting.md +504 -0
- package/bin/skills/autogpt/SKILL.md +403 -0
- package/bin/skills/autogpt/references/advanced-usage.md +535 -0
- package/bin/skills/autogpt/references/troubleshooting.md +420 -0
- package/bin/skills/awq/SKILL.md +310 -0
- package/bin/skills/awq/references/advanced-usage.md +324 -0
- package/bin/skills/awq/references/troubleshooting.md +344 -0
- package/bin/skills/axolotl/SKILL.md +158 -0
- package/bin/skills/axolotl/references/api.md +5548 -0
- package/bin/skills/axolotl/references/dataset-formats.md +1029 -0
- package/bin/skills/axolotl/references/index.md +15 -0
- package/bin/skills/axolotl/references/other.md +3563 -0
- package/bin/skills/bigcode-evaluation-harness/SKILL.md +405 -0
- package/bin/skills/bigcode-evaluation-harness/references/benchmarks.md +393 -0
- package/bin/skills/bigcode-evaluation-harness/references/custom-tasks.md +424 -0
- package/bin/skills/bigcode-evaluation-harness/references/issues.md +394 -0
- package/bin/skills/bitsandbytes/SKILL.md +411 -0
- package/bin/skills/bitsandbytes/references/memory-optimization.md +521 -0
- package/bin/skills/bitsandbytes/references/qlora-training.md +521 -0
- package/bin/skills/bitsandbytes/references/quantization-formats.md +447 -0
- package/bin/skills/blip-2/SKILL.md +564 -0
- package/bin/skills/blip-2/references/advanced-usage.md +680 -0
- package/bin/skills/blip-2/references/troubleshooting.md +526 -0
- package/bin/skills/chroma/SKILL.md +406 -0
- package/bin/skills/chroma/references/integration.md +38 -0
- package/bin/skills/clip/SKILL.md +253 -0
- package/bin/skills/clip/references/applications.md +207 -0
- package/bin/skills/constitutional-ai/SKILL.md +290 -0
- package/bin/skills/crewai/SKILL.md +498 -0
- package/bin/skills/crewai/references/flows.md +438 -0
- package/bin/skills/crewai/references/tools.md +429 -0
- package/bin/skills/crewai/references/troubleshooting.md +480 -0
- package/bin/skills/deepspeed/SKILL.md +141 -0
- package/bin/skills/deepspeed/references/08.md +17 -0
- package/bin/skills/deepspeed/references/09.md +173 -0
- package/bin/skills/deepspeed/references/2020.md +378 -0
- package/bin/skills/deepspeed/references/2023.md +279 -0
- package/bin/skills/deepspeed/references/assets.md +179 -0
- package/bin/skills/deepspeed/references/index.md +35 -0
- package/bin/skills/deepspeed/references/mii.md +118 -0
- package/bin/skills/deepspeed/references/other.md +1191 -0
- package/bin/skills/deepspeed/references/tutorials.md +6554 -0
- package/bin/skills/dspy/SKILL.md +590 -0
- package/bin/skills/dspy/references/examples.md +663 -0
- package/bin/skills/dspy/references/modules.md +475 -0
- package/bin/skills/dspy/references/optimizers.md +566 -0
- package/bin/skills/faiss/SKILL.md +221 -0
- package/bin/skills/faiss/references/index_types.md +280 -0
- package/bin/skills/flash-attention/SKILL.md +367 -0
- package/bin/skills/flash-attention/references/benchmarks.md +215 -0
- package/bin/skills/flash-attention/references/transformers-integration.md +293 -0
- package/bin/skills/gguf/SKILL.md +427 -0
- package/bin/skills/gguf/references/advanced-usage.md +504 -0
- package/bin/skills/gguf/references/troubleshooting.md +442 -0
- package/bin/skills/gptq/SKILL.md +450 -0
- package/bin/skills/gptq/references/calibration.md +337 -0
- package/bin/skills/gptq/references/integration.md +129 -0
- package/bin/skills/gptq/references/troubleshooting.md +95 -0
- package/bin/skills/grpo-rl-training/README.md +97 -0
- package/bin/skills/grpo-rl-training/SKILL.md +572 -0
- package/bin/skills/grpo-rl-training/examples/reward_functions_library.py +393 -0
- package/bin/skills/grpo-rl-training/templates/basic_grpo_training.py +228 -0
- package/bin/skills/guidance/SKILL.md +572 -0
- package/bin/skills/guidance/references/backends.md +554 -0
- package/bin/skills/guidance/references/constraints.md +674 -0
- package/bin/skills/guidance/references/examples.md +767 -0
- package/bin/skills/hqq/SKILL.md +445 -0
- package/bin/skills/hqq/references/advanced-usage.md +528 -0
- package/bin/skills/hqq/references/troubleshooting.md +503 -0
- package/bin/skills/hugging-face-cli/SKILL.md +191 -0
- package/bin/skills/hugging-face-cli/references/commands.md +954 -0
- package/bin/skills/hugging-face-cli/references/examples.md +374 -0
- package/bin/skills/hugging-face-datasets/SKILL.md +547 -0
- package/bin/skills/hugging-face-datasets/examples/diverse_training_examples.json +239 -0
- package/bin/skills/hugging-face-datasets/examples/system_prompt_template.txt +196 -0
- package/bin/skills/hugging-face-datasets/examples/training_examples.json +176 -0
- package/bin/skills/hugging-face-datasets/scripts/dataset_manager.py +522 -0
- package/bin/skills/hugging-face-datasets/scripts/sql_manager.py +844 -0
- package/bin/skills/hugging-face-datasets/templates/chat.json +55 -0
- package/bin/skills/hugging-face-datasets/templates/classification.json +62 -0
- package/bin/skills/hugging-face-datasets/templates/completion.json +51 -0
- package/bin/skills/hugging-face-datasets/templates/custom.json +75 -0
- package/bin/skills/hugging-face-datasets/templates/qa.json +54 -0
- package/bin/skills/hugging-face-datasets/templates/tabular.json +81 -0
- package/bin/skills/hugging-face-evaluation/SKILL.md +656 -0
- package/bin/skills/hugging-face-evaluation/examples/USAGE_EXAMPLES.md +382 -0
- package/bin/skills/hugging-face-evaluation/examples/artificial_analysis_to_hub.py +141 -0
- package/bin/skills/hugging-face-evaluation/examples/example_readme_tables.md +135 -0
- package/bin/skills/hugging-face-evaluation/examples/metric_mapping.json +50 -0
- package/bin/skills/hugging-face-evaluation/requirements.txt +20 -0
- package/bin/skills/hugging-face-evaluation/scripts/evaluation_manager.py +1374 -0
- package/bin/skills/hugging-face-evaluation/scripts/inspect_eval_uv.py +104 -0
- package/bin/skills/hugging-face-evaluation/scripts/inspect_vllm_uv.py +317 -0
- package/bin/skills/hugging-face-evaluation/scripts/lighteval_vllm_uv.py +303 -0
- package/bin/skills/hugging-face-evaluation/scripts/run_eval_job.py +98 -0
- package/bin/skills/hugging-face-evaluation/scripts/run_vllm_eval_job.py +331 -0
- package/bin/skills/hugging-face-evaluation/scripts/test_extraction.py +206 -0
- package/bin/skills/hugging-face-jobs/SKILL.md +1041 -0
- package/bin/skills/hugging-face-jobs/index.html +216 -0
- package/bin/skills/hugging-face-jobs/references/hardware_guide.md +336 -0
- package/bin/skills/hugging-face-jobs/references/hub_saving.md +352 -0
- package/bin/skills/hugging-face-jobs/references/token_usage.md +546 -0
- package/bin/skills/hugging-face-jobs/references/troubleshooting.md +475 -0
- package/bin/skills/hugging-face-jobs/scripts/cot-self-instruct.py +718 -0
- package/bin/skills/hugging-face-jobs/scripts/finepdfs-stats.py +546 -0
- package/bin/skills/hugging-face-jobs/scripts/generate-responses.py +587 -0
- package/bin/skills/hugging-face-model-trainer/SKILL.md +711 -0
- package/bin/skills/hugging-face-model-trainer/references/gguf_conversion.md +296 -0
- package/bin/skills/hugging-face-model-trainer/references/hardware_guide.md +283 -0
- package/bin/skills/hugging-face-model-trainer/references/hub_saving.md +364 -0
- package/bin/skills/hugging-face-model-trainer/references/reliability_principles.md +371 -0
- package/bin/skills/hugging-face-model-trainer/references/trackio_guide.md +189 -0
- package/bin/skills/hugging-face-model-trainer/references/training_methods.md +150 -0
- package/bin/skills/hugging-face-model-trainer/references/training_patterns.md +203 -0
- package/bin/skills/hugging-face-model-trainer/references/troubleshooting.md +282 -0
- package/bin/skills/hugging-face-model-trainer/scripts/convert_to_gguf.py +424 -0
- package/bin/skills/hugging-face-model-trainer/scripts/dataset_inspector.py +417 -0
- package/bin/skills/hugging-face-model-trainer/scripts/estimate_cost.py +150 -0
- package/bin/skills/hugging-face-model-trainer/scripts/train_dpo_example.py +106 -0
- package/bin/skills/hugging-face-model-trainer/scripts/train_grpo_example.py +89 -0
- package/bin/skills/hugging-face-model-trainer/scripts/train_sft_example.py +122 -0
- package/bin/skills/hugging-face-paper-publisher/SKILL.md +627 -0
- package/bin/skills/hugging-face-paper-publisher/examples/example_usage.md +327 -0
- package/bin/skills/hugging-face-paper-publisher/references/quick_reference.md +216 -0
- package/bin/skills/hugging-face-paper-publisher/scripts/paper_manager.py +508 -0
- package/bin/skills/hugging-face-paper-publisher/templates/arxiv.md +299 -0
- package/bin/skills/hugging-face-paper-publisher/templates/ml-report.md +358 -0
- package/bin/skills/hugging-face-paper-publisher/templates/modern.md +319 -0
- package/bin/skills/hugging-face-paper-publisher/templates/standard.md +201 -0
- package/bin/skills/hugging-face-tool-builder/SKILL.md +115 -0
- package/bin/skills/hugging-face-tool-builder/references/baseline_hf_api.py +57 -0
- package/bin/skills/hugging-face-tool-builder/references/baseline_hf_api.sh +40 -0
- package/bin/skills/hugging-face-tool-builder/references/baseline_hf_api.tsx +57 -0
- package/bin/skills/hugging-face-tool-builder/references/find_models_by_paper.sh +230 -0
- package/bin/skills/hugging-face-tool-builder/references/hf_enrich_models.sh +96 -0
- package/bin/skills/hugging-face-tool-builder/references/hf_model_card_frontmatter.sh +188 -0
- package/bin/skills/hugging-face-tool-builder/references/hf_model_papers_auth.sh +171 -0
- package/bin/skills/hugging-face-trackio/SKILL.md +65 -0
- package/bin/skills/hugging-face-trackio/references/logging_metrics.md +206 -0
- package/bin/skills/hugging-face-trackio/references/retrieving_metrics.md +223 -0
- package/bin/skills/huggingface-tokenizers/SKILL.md +516 -0
- package/bin/skills/huggingface-tokenizers/references/algorithms.md +653 -0
- package/bin/skills/huggingface-tokenizers/references/integration.md +637 -0
- package/bin/skills/huggingface-tokenizers/references/pipeline.md +723 -0
- package/bin/skills/huggingface-tokenizers/references/training.md +565 -0
- package/bin/skills/instructor/SKILL.md +740 -0
- package/bin/skills/instructor/references/examples.md +107 -0
- package/bin/skills/instructor/references/providers.md +70 -0
- package/bin/skills/instructor/references/validation.md +606 -0
- package/bin/skills/knowledge-distillation/SKILL.md +458 -0
- package/bin/skills/knowledge-distillation/references/minillm.md +334 -0
- package/bin/skills/lambda-labs/SKILL.md +545 -0
- package/bin/skills/lambda-labs/references/advanced-usage.md +611 -0
- package/bin/skills/lambda-labs/references/troubleshooting.md +530 -0
- package/bin/skills/langchain/SKILL.md +480 -0
- package/bin/skills/langchain/references/agents.md +499 -0
- package/bin/skills/langchain/references/integration.md +562 -0
- package/bin/skills/langchain/references/rag.md +600 -0
- package/bin/skills/langsmith/SKILL.md +422 -0
- package/bin/skills/langsmith/references/advanced-usage.md +548 -0
- package/bin/skills/langsmith/references/troubleshooting.md +537 -0
- package/bin/skills/litgpt/SKILL.md +469 -0
- package/bin/skills/litgpt/references/custom-models.md +568 -0
- package/bin/skills/litgpt/references/distributed-training.md +451 -0
- package/bin/skills/litgpt/references/supported-models.md +336 -0
- package/bin/skills/litgpt/references/training-recipes.md +619 -0
- package/bin/skills/llama-cpp/SKILL.md +258 -0
- package/bin/skills/llama-cpp/references/optimization.md +89 -0
- package/bin/skills/llama-cpp/references/quantization.md +213 -0
- package/bin/skills/llama-cpp/references/server.md +125 -0
- package/bin/skills/llama-factory/SKILL.md +80 -0
- package/bin/skills/llama-factory/references/_images.md +23 -0
- package/bin/skills/llama-factory/references/advanced.md +1055 -0
- package/bin/skills/llama-factory/references/getting_started.md +349 -0
- package/bin/skills/llama-factory/references/index.md +19 -0
- package/bin/skills/llama-factory/references/other.md +31 -0
- package/bin/skills/llamaguard/SKILL.md +337 -0
- package/bin/skills/llamaindex/SKILL.md +569 -0
- package/bin/skills/llamaindex/references/agents.md +83 -0
- package/bin/skills/llamaindex/references/data_connectors.md +108 -0
- package/bin/skills/llamaindex/references/query_engines.md +406 -0
- package/bin/skills/llava/SKILL.md +304 -0
- package/bin/skills/llava/references/training.md +197 -0
- package/bin/skills/lm-evaluation-harness/SKILL.md +490 -0
- package/bin/skills/lm-evaluation-harness/references/api-evaluation.md +490 -0
- package/bin/skills/lm-evaluation-harness/references/benchmark-guide.md +488 -0
- package/bin/skills/lm-evaluation-harness/references/custom-tasks.md +602 -0
- package/bin/skills/lm-evaluation-harness/references/distributed-eval.md +519 -0
- package/bin/skills/long-context/SKILL.md +536 -0
- package/bin/skills/long-context/references/extension_methods.md +468 -0
- package/bin/skills/long-context/references/fine_tuning.md +611 -0
- package/bin/skills/long-context/references/rope.md +402 -0
- package/bin/skills/mamba/SKILL.md +260 -0
- package/bin/skills/mamba/references/architecture-details.md +206 -0
- package/bin/skills/mamba/references/benchmarks.md +255 -0
- package/bin/skills/mamba/references/training-guide.md +388 -0
- package/bin/skills/megatron-core/SKILL.md +366 -0
- package/bin/skills/megatron-core/references/benchmarks.md +249 -0
- package/bin/skills/megatron-core/references/parallelism-guide.md +404 -0
- package/bin/skills/megatron-core/references/production-examples.md +473 -0
- package/bin/skills/megatron-core/references/training-recipes.md +547 -0
- package/bin/skills/miles/SKILL.md +315 -0
- package/bin/skills/miles/references/api-reference.md +141 -0
- package/bin/skills/miles/references/troubleshooting.md +352 -0
- package/bin/skills/mlflow/SKILL.md +704 -0
- package/bin/skills/mlflow/references/deployment.md +744 -0
- package/bin/skills/mlflow/references/model-registry.md +770 -0
- package/bin/skills/mlflow/references/tracking.md +680 -0
- package/bin/skills/modal/SKILL.md +341 -0
- package/bin/skills/modal/references/advanced-usage.md +503 -0
- package/bin/skills/modal/references/troubleshooting.md +494 -0
- package/bin/skills/model-merging/SKILL.md +539 -0
- package/bin/skills/model-merging/references/evaluation.md +462 -0
- package/bin/skills/model-merging/references/examples.md +428 -0
- package/bin/skills/model-merging/references/methods.md +352 -0
- package/bin/skills/model-pruning/SKILL.md +495 -0
- package/bin/skills/model-pruning/references/wanda.md +347 -0
- package/bin/skills/moe-training/SKILL.md +526 -0
- package/bin/skills/moe-training/references/architectures.md +432 -0
- package/bin/skills/moe-training/references/inference.md +348 -0
- package/bin/skills/moe-training/references/training.md +425 -0
- package/bin/skills/nanogpt/SKILL.md +290 -0
- package/bin/skills/nanogpt/references/architecture.md +382 -0
- package/bin/skills/nanogpt/references/data.md +476 -0
- package/bin/skills/nanogpt/references/training.md +564 -0
- package/bin/skills/nemo-curator/SKILL.md +383 -0
- package/bin/skills/nemo-curator/references/deduplication.md +87 -0
- package/bin/skills/nemo-curator/references/filtering.md +102 -0
- package/bin/skills/nemo-evaluator/SKILL.md +494 -0
- package/bin/skills/nemo-evaluator/references/adapter-system.md +340 -0
- package/bin/skills/nemo-evaluator/references/configuration.md +447 -0
- package/bin/skills/nemo-evaluator/references/custom-benchmarks.md +315 -0
- package/bin/skills/nemo-evaluator/references/execution-backends.md +361 -0
- package/bin/skills/nemo-guardrails/SKILL.md +297 -0
- package/bin/skills/nnsight/SKILL.md +436 -0
- package/bin/skills/nnsight/references/README.md +78 -0
- package/bin/skills/nnsight/references/api.md +344 -0
- package/bin/skills/nnsight/references/tutorials.md +300 -0
- package/bin/skills/openrlhf/SKILL.md +249 -0
- package/bin/skills/openrlhf/references/algorithm-comparison.md +404 -0
- package/bin/skills/openrlhf/references/custom-rewards.md +530 -0
- package/bin/skills/openrlhf/references/hybrid-engine.md +287 -0
- package/bin/skills/openrlhf/references/multi-node-training.md +454 -0
- package/bin/skills/outlines/SKILL.md +652 -0
- package/bin/skills/outlines/references/backends.md +615 -0
- package/bin/skills/outlines/references/examples.md +773 -0
- package/bin/skills/outlines/references/json_generation.md +652 -0
- package/bin/skills/peft/SKILL.md +431 -0
- package/bin/skills/peft/references/advanced-usage.md +514 -0
- package/bin/skills/peft/references/troubleshooting.md +480 -0
- package/bin/skills/phoenix/SKILL.md +475 -0
- package/bin/skills/phoenix/references/advanced-usage.md +619 -0
- package/bin/skills/phoenix/references/troubleshooting.md +538 -0
- package/bin/skills/pinecone/SKILL.md +358 -0
- package/bin/skills/pinecone/references/deployment.md +181 -0
- package/bin/skills/pytorch-fsdp/SKILL.md +126 -0
- package/bin/skills/pytorch-fsdp/references/index.md +7 -0
- package/bin/skills/pytorch-fsdp/references/other.md +4249 -0
- package/bin/skills/pytorch-lightning/SKILL.md +346 -0
- package/bin/skills/pytorch-lightning/references/callbacks.md +436 -0
- package/bin/skills/pytorch-lightning/references/distributed.md +490 -0
- package/bin/skills/pytorch-lightning/references/hyperparameter-tuning.md +556 -0
- package/bin/skills/pyvene/SKILL.md +473 -0
- package/bin/skills/pyvene/references/README.md +73 -0
- package/bin/skills/pyvene/references/api.md +383 -0
- package/bin/skills/pyvene/references/tutorials.md +376 -0
- package/bin/skills/qdrant/SKILL.md +493 -0
- package/bin/skills/qdrant/references/advanced-usage.md +648 -0
- package/bin/skills/qdrant/references/troubleshooting.md +631 -0
- package/bin/skills/ray-data/SKILL.md +326 -0
- package/bin/skills/ray-data/references/integration.md +82 -0
- package/bin/skills/ray-data/references/transformations.md +83 -0
- package/bin/skills/ray-train/SKILL.md +406 -0
- package/bin/skills/ray-train/references/multi-node.md +628 -0
- package/bin/skills/rwkv/SKILL.md +260 -0
- package/bin/skills/rwkv/references/architecture-details.md +344 -0
- package/bin/skills/rwkv/references/rwkv7.md +386 -0
- package/bin/skills/rwkv/references/state-management.md +369 -0
- package/bin/skills/saelens/SKILL.md +386 -0
- package/bin/skills/saelens/references/README.md +70 -0
- package/bin/skills/saelens/references/api.md +333 -0
- package/bin/skills/saelens/references/tutorials.md +318 -0
- package/bin/skills/segment-anything/SKILL.md +500 -0
- package/bin/skills/segment-anything/references/advanced-usage.md +589 -0
- package/bin/skills/segment-anything/references/troubleshooting.md +484 -0
- package/bin/skills/sentence-transformers/SKILL.md +255 -0
- package/bin/skills/sentence-transformers/references/models.md +123 -0
- package/bin/skills/sentencepiece/SKILL.md +235 -0
- package/bin/skills/sentencepiece/references/algorithms.md +200 -0
- package/bin/skills/sentencepiece/references/training.md +304 -0
- package/bin/skills/sglang/SKILL.md +442 -0
- package/bin/skills/sglang/references/deployment.md +490 -0
- package/bin/skills/sglang/references/radix-attention.md +413 -0
- package/bin/skills/sglang/references/structured-generation.md +541 -0
- package/bin/skills/simpo/SKILL.md +219 -0
- package/bin/skills/simpo/references/datasets.md +478 -0
- package/bin/skills/simpo/references/hyperparameters.md +452 -0
- package/bin/skills/simpo/references/loss-functions.md +350 -0
- package/bin/skills/skypilot/SKILL.md +509 -0
- package/bin/skills/skypilot/references/advanced-usage.md +491 -0
- package/bin/skills/skypilot/references/troubleshooting.md +570 -0
- package/bin/skills/slime/SKILL.md +464 -0
- package/bin/skills/slime/references/api-reference.md +392 -0
- package/bin/skills/slime/references/troubleshooting.md +386 -0
- package/bin/skills/speculative-decoding/SKILL.md +467 -0
- package/bin/skills/speculative-decoding/references/lookahead.md +309 -0
- package/bin/skills/speculative-decoding/references/medusa.md +350 -0
- package/bin/skills/stable-diffusion/SKILL.md +519 -0
- package/bin/skills/stable-diffusion/references/advanced-usage.md +716 -0
- package/bin/skills/stable-diffusion/references/troubleshooting.md +555 -0
- package/bin/skills/tensorboard/SKILL.md +629 -0
- package/bin/skills/tensorboard/references/integrations.md +638 -0
- package/bin/skills/tensorboard/references/profiling.md +545 -0
- package/bin/skills/tensorboard/references/visualization.md +620 -0
- package/bin/skills/tensorrt-llm/SKILL.md +187 -0
- package/bin/skills/tensorrt-llm/references/multi-gpu.md +298 -0
- package/bin/skills/tensorrt-llm/references/optimization.md +242 -0
- package/bin/skills/tensorrt-llm/references/serving.md +470 -0
- package/bin/skills/tinker/SKILL.md +362 -0
- package/bin/skills/tinker/references/api-reference.md +168 -0
- package/bin/skills/tinker/references/getting-started.md +157 -0
- package/bin/skills/tinker/references/loss-functions.md +163 -0
- package/bin/skills/tinker/references/models-and-lora.md +139 -0
- package/bin/skills/tinker/references/recipes.md +280 -0
- package/bin/skills/tinker/references/reinforcement-learning.md +212 -0
- package/bin/skills/tinker/references/rendering.md +243 -0
- package/bin/skills/tinker/references/supervised-learning.md +232 -0
- package/bin/skills/tinker-training-cost/SKILL.md +187 -0
- package/bin/skills/tinker-training-cost/scripts/calculate_cost.py +123 -0
- package/bin/skills/torchforge/SKILL.md +433 -0
- package/bin/skills/torchforge/references/api-reference.md +327 -0
- package/bin/skills/torchforge/references/troubleshooting.md +409 -0
- package/bin/skills/torchtitan/SKILL.md +358 -0
- package/bin/skills/torchtitan/references/checkpoint.md +181 -0
- package/bin/skills/torchtitan/references/custom-models.md +258 -0
- package/bin/skills/torchtitan/references/float8.md +133 -0
- package/bin/skills/torchtitan/references/fsdp.md +126 -0
- package/bin/skills/transformer-lens/SKILL.md +346 -0
- package/bin/skills/transformer-lens/references/README.md +54 -0
- package/bin/skills/transformer-lens/references/api.md +362 -0
- package/bin/skills/transformer-lens/references/tutorials.md +339 -0
- package/bin/skills/trl-fine-tuning/SKILL.md +455 -0
- package/bin/skills/trl-fine-tuning/references/dpo-variants.md +227 -0
- package/bin/skills/trl-fine-tuning/references/online-rl.md +82 -0
- package/bin/skills/trl-fine-tuning/references/reward-modeling.md +122 -0
- package/bin/skills/trl-fine-tuning/references/sft-training.md +168 -0
- package/bin/skills/unsloth/SKILL.md +80 -0
- package/bin/skills/unsloth/references/index.md +7 -0
- package/bin/skills/unsloth/references/llms-full.md +16799 -0
- package/bin/skills/unsloth/references/llms-txt.md +12044 -0
- package/bin/skills/unsloth/references/llms.md +82 -0
- package/bin/skills/verl/SKILL.md +391 -0
- package/bin/skills/verl/references/api-reference.md +301 -0
- package/bin/skills/verl/references/troubleshooting.md +391 -0
- package/bin/skills/vllm/SKILL.md +364 -0
- package/bin/skills/vllm/references/optimization.md +226 -0
- package/bin/skills/vllm/references/quantization.md +284 -0
- package/bin/skills/vllm/references/server-deployment.md +255 -0
- package/bin/skills/vllm/references/troubleshooting.md +447 -0
- package/bin/skills/weights-and-biases/SKILL.md +590 -0
- package/bin/skills/weights-and-biases/references/artifacts.md +584 -0
- package/bin/skills/weights-and-biases/references/integrations.md +700 -0
- package/bin/skills/weights-and-biases/references/sweeps.md +847 -0
- package/bin/skills/whisper/SKILL.md +317 -0
- package/bin/skills/whisper/references/languages.md +189 -0
- package/bin/synsc +0 -0
- package/package.json +10 -0
package/bin/skills/phoenix/SKILL.md
@@ -0,0 +1,475 @@

---
name: phoenix-observability
description: Open-source AI observability platform for LLM tracing, evaluation, and monitoring. Use when debugging LLM applications with detailed traces, running evaluations on datasets, or monitoring production AI systems with real-time insights.
version: 1.0.0
author: Synthetic Sciences
license: MIT
tags: [Observability, Phoenix, Arize, Tracing, Evaluation, Monitoring, LLM Ops, OpenTelemetry]
dependencies: [arize-phoenix>=12.0.0]
---

# Phoenix - AI Observability Platform

Open-source AI observability and evaluation platform for LLM applications with tracing, evaluation, datasets, experiments, and real-time monitoring.

## When to use Phoenix

**Use Phoenix when:**
- Debugging LLM application issues with detailed traces
- Running systematic evaluations on datasets
- Monitoring production LLM systems in real time
- Building experiment pipelines for prompt/model comparison
- Running self-hosted observability without vendor lock-in

**Key features:**
- **Tracing**: OpenTelemetry-based trace collection for any LLM framework
- **Evaluation**: LLM-as-judge evaluators for quality assessment
- **Datasets**: Versioned test sets for regression testing
- **Experiments**: Compare prompts, models, and configurations
- **Playground**: Interactive prompt testing with multiple models
- **Open source**: Self-hosted with PostgreSQL or SQLite

**Use alternatives instead:**
- **LangSmith**: Managed platform with LangChain-first integration
- **Weights & Biases**: Deep-learning experiment tracking focus
- **Arize Cloud**: Managed Phoenix with enterprise features
- **MLflow**: General ML lifecycle, model-registry focus

## Quick start

### Installation

```bash
pip install arize-phoenix

# Optional extras and companion packages
pip install "arize-phoenix[embeddings]"  # Embedding analysis
pip install arize-phoenix-otel           # OpenTelemetry config
pip install arize-phoenix-evals          # Evaluation framework
pip install arize-phoenix-client         # Lightweight REST client
```

### Launch Phoenix server

```python
import phoenix as px

# Launch in a notebook (ThreadServer mode)
session = px.launch_app()

# View the UI
session.view()      # Embedded iframe
print(session.url)  # http://localhost:6006
```

### Command-line server (production)

```bash
# Start the Phoenix server
phoenix serve

# With PostgreSQL
export PHOENIX_SQL_DATABASE_URL="postgresql://user:pass@host/db"
phoenix serve --port 6006
```

### Basic tracing

```python
from phoenix.otel import register
from openinference.instrumentation.openai import OpenAIInstrumentor

# Configure OpenTelemetry with Phoenix
tracer_provider = register(
    project_name="my-llm-app",
    endpoint="http://localhost:6006/v1/traces"
)

# Instrument the OpenAI SDK
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)

# All OpenAI calls are now traced
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)
```

## Core concepts

### Traces and spans

A **trace** represents a complete execution flow, while **spans** are the individual operations within that trace.

```python
from phoenix.otel import register
from opentelemetry import trace

# Set up tracing
tracer_provider = register(project_name="my-app")
tracer = trace.get_tracer(__name__)

# Create custom spans; `query`, `retriever`, and `llm` are assumed to be
# defined by the surrounding application
with tracer.start_as_current_span("process_query") as span:
    span.set_attribute("input.value", query)

    # Child spans are automatically nested
    with tracer.start_as_current_span("retrieve_context"):
        context = retriever.search(query)

    with tracer.start_as_current_span("generate_response"):
        response = llm.generate(query, context)

    span.set_attribute("output.value", response)
```
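
To make the nesting concrete, here is a toy, Phoenix-independent sketch of how nested context managers produce a parent/child span tree. This is illustrative only, not the real OpenTelemetry machinery:

```python
from contextlib import contextmanager

# Toy tracer: records (span_name, parent_name) pairs the way nested
# start_as_current_span calls build a span tree.
spans = []
_stack = []

@contextmanager
def toy_span(name):
    parent = _stack[-1] if _stack else None
    spans.append((name, parent))
    _stack.append(name)
    try:
        yield
    finally:
        _stack.pop()

with toy_span("process_query"):
    with toy_span("retrieve_context"):
        pass
    with toy_span("generate_response"):
        pass

# Both children record "process_query" as their parent
print(spans)
```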

### Projects

Projects organize related traces:

```python
import os

os.environ["PHOENIX_PROJECT_NAME"] = "production-chatbot"

# Or per tracer provider
from phoenix.otel import register

tracer_provider = register(project_name="experiment-v2")
```
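
The sketch below shows the precedence this setup is assumed to follow: an explicit `project_name` argument wins over the `PHOENIX_PROJECT_NAME` environment variable, which wins over the default project. A toy illustration, not Phoenix's actual resolution code:

```python
import os

def resolve_project_name(explicit=None, default="default"):
    # Assumed precedence: explicit argument > PHOENIX_PROJECT_NAME > default
    if explicit:
        return explicit
    return os.environ.get("PHOENIX_PROJECT_NAME", default)

os.environ["PHOENIX_PROJECT_NAME"] = "production-chatbot"
print(resolve_project_name("experiment-v2"))  # explicit argument wins
print(resolve_project_name())                 # falls back to the env var
```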
|
|
140
|
+
|
|
141
|
+
## Framework instrumentation
|
|
142
|
+
|
|
143
|
+
### OpenAI
|
|
144
|
+
|
|
145
|
+
```python
|
|
146
|
+
from phoenix.otel import register
|
|
147
|
+
from openinference.instrumentation.openai import OpenAIInstrumentor
|
|
148
|
+
|
|
149
|
+
tracer_provider = register()
|
|
150
|
+
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
|
|
151
|
+
```
|
|
152
|
+
|
|
153
|
+
### LangChain
|
|
154
|
+
|
|
155
|
+
```python
|
|
156
|
+
from phoenix.otel import register
|
|
157
|
+
from openinference.instrumentation.langchain import LangChainInstrumentor
|
|
158
|
+
|
|
159
|
+
tracer_provider = register()
|
|
160
|
+
LangChainInstrumentor().instrument(tracer_provider=tracer_provider)
|
|
161
|
+
|
|
162
|
+
# All LangChain operations traced
|
|
163
|
+
from langchain_openai import ChatOpenAI
|
|
164
|
+
llm = ChatOpenAI(model="gpt-4o")
|
|
165
|
+
response = llm.invoke("Hello!")
|
|
166
|
+
```
|
|
167
|
+
|
|
168
|
+
### LlamaIndex
|
|
169
|
+
|
|
170
|
+
```python
|
|
171
|
+
from phoenix.otel import register
|
|
172
|
+
from openinference.instrumentation.llama_index import LlamaIndexInstrumentor
|
|
173
|
+
|
|
174
|
+
tracer_provider = register()
|
|
175
|
+
LlamaIndexInstrumentor().instrument(tracer_provider=tracer_provider)
|
|
176
|
+
```
|
|
177
|
+
|
|
178
|
+
### Anthropic
|
|
179
|
+
|
|
180
|
+
```python
|
|
181
|
+
from phoenix.otel import register
|
|
182
|
+
from openinference.instrumentation.anthropic import AnthropicInstrumentor
|
|
183
|
+
|
|
184
|
+
tracer_provider = register()
|
|
185
|
+
AnthropicInstrumentor().instrument(tracer_provider=tracer_provider)
|
|
186
|
+
```

## Evaluation framework

### Built-in evaluators

```python
from phoenix.evals import (
    OpenAIModel,
    HallucinationEvaluator,
    RelevanceEvaluator,
    ToxicityEvaluator,
    llm_classify,
)

# Set up the model used for evaluation
eval_model = OpenAIModel(model="gpt-4o")

# Evaluate hallucination
hallucination_eval = HallucinationEvaluator(eval_model)
results = hallucination_eval.evaluate(
    input="What is the capital of France?",
    output="The capital of France is Paris.",
    reference="Paris is the capital of France."
)
```

### Custom evaluators

```python
import pandas as pd

from phoenix.evals import llm_classify

# Custom evaluation built on llm_classify, which scores a dataframe,
# so the single example is wrapped in a one-row frame. eval_model is
# the OpenAIModel defined above.
def evaluate_helpfulness(input_text, output_text):
    template = """
Evaluate if the response is helpful for the given question.

Question: {input}
Response: {output}

Is this response helpful? Answer 'helpful' or 'not_helpful'.
"""

    result = llm_classify(
        dataframe=pd.DataFrame([{"input": input_text, "output": output_text}]),
        model=eval_model,
        template=template,
        rails=["helpful", "not_helpful"],
    )
    return result
```
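
The `{input}` and `{output}` placeholders in the template above are filled from the columns of the dataframe handed to `llm_classify`. A simplified pure-Python sketch of that substitution (no Phoenix required; the row values here are made up for illustration):

```python
# Simplified illustration: each dataframe row's columns fill the
# matching {placeholders} in the evaluation template.
template = "Question: {input}\nResponse: {output}\nAnswer 'helpful' or 'not_helpful'."
row = {"input": "What is Python?", "output": "A programming language"}

prompt = template.format(**row)
print(prompt)
```

This is why the template variable names must match the dataframe column names exactly.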

### Run evaluations on dataset

```python
from phoenix import Client
from phoenix.evals import run_evals

client = Client()

# Get spans to evaluate
spans_df = client.get_spans_dataframe(
    project_name="my-app",
    filter_condition="span_kind == 'LLM'"
)

# Run evaluations
eval_results = run_evals(
    dataframe=spans_df,
    evaluators=[
        HallucinationEvaluator(eval_model),
        RelevanceEvaluator(eval_model)
    ],
    provide_explanation=True
)

# Log results back to Phoenix
client.log_evaluations(eval_results)
```

## Datasets and experiments

### Create dataset

```python
from phoenix import Client

client = Client()

# Create dataset
dataset = client.create_dataset(
    name="qa-test-set",
    description="QA evaluation dataset"
)

# Add examples
client.add_examples_to_dataset(
    dataset_name="qa-test-set",
    examples=[
        {
            "input": {"question": "What is Python?"},
            "output": {"answer": "A programming language"}
        },
        {
            "input": {"question": "What is ML?"},
            "output": {"answer": "Machine learning"}
        }
    ]
)
```

### Run experiment

```python
from phoenix import Client
from phoenix.experiments import run_experiment

client = Client()

def my_model(input_data):
    """Your model function."""
    question = input_data["question"]
    return {"answer": generate_answer(question)}

def accuracy_evaluator(input_data, output, expected):
    """Custom evaluator."""
    correct = expected["answer"].lower() in output["answer"].lower()
    return {
        "score": 1.0 if correct else 0.0,
        "label": "correct" if correct else "incorrect"
    }

# Run experiment
results = run_experiment(
    dataset_name="qa-test-set",
    task=my_model,
    evaluators=[accuracy_evaluator],
    experiment_name="baseline-v1"
)

print(f"Average accuracy: {results.aggregate_metrics['accuracy']}")
```
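
The case-insensitive substring rule inside `accuracy_evaluator` can be checked on its own, without Phoenix. A stand-alone sketch of the same scoring logic (the example strings are made up):

```python
# Same case-insensitive substring rule as accuracy_evaluator above.
def substring_score(expected_answer: str, actual_answer: str) -> float:
    return 1.0 if expected_answer.lower() in actual_answer.lower() else 0.0

print(substring_score("A programming language", "Python is a programming language."))  # 1.0
print(substring_score("A programming language", "A snake."))  # 0.0
```

Note that substring matching is a blunt instrument: it rewards verbose answers that happen to contain the expected phrase, so prefer an LLM-based evaluator for free-form outputs.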

## Client API

### Query traces and spans

```python
from phoenix import Client

client = Client(endpoint="http://localhost:6006")

# Get spans as a DataFrame
spans_df = client.get_spans_dataframe(
    project_name="my-app",
    filter_condition="span_kind == 'LLM'",
    limit=1000
)

# Get a specific span
span = client.get_span(span_id="abc123")

# Get a trace
trace = client.get_trace(trace_id="xyz789")
```

### Log feedback

```python
from phoenix import Client

client = Client()

# Log user feedback
client.log_annotation(
    span_id="abc123",
    name="user_rating",
    annotator_kind="HUMAN",
    score=0.8,
    label="helpful",
    metadata={"comment": "Good response"}
)
```

### Export data

```python
# Export spans to pandas
df = client.get_spans_dataframe(project_name="my-app")

# Export traces
traces = client.list_traces(project_name="my-app")
```

## Production deployment

### Docker

```bash
docker run -p 6006:6006 arizephoenix/phoenix:latest
```

### With PostgreSQL

```bash
# Set the database URL
export PHOENIX_SQL_DATABASE_URL="postgresql://user:pass@host:5432/phoenix"

# Start the server
phoenix serve --host 0.0.0.0 --port 6006
```

### Environment variables

| Variable | Description | Default |
|----------|-------------|---------|
| `PHOENIX_PORT` | HTTP server port | `6006` |
| `PHOENIX_HOST` | Server bind address | `127.0.0.1` |
| `PHOENIX_GRPC_PORT` | gRPC/OTLP port | `4317` |
| `PHOENIX_SQL_DATABASE_URL` | Database connection | SQLite temp |
| `PHOENIX_WORKING_DIR` | Data storage directory | OS temp |
| `PHOENIX_ENABLE_AUTH` | Enable authentication | `false` |
| `PHOENIX_SECRET` | JWT signing secret | Required if auth enabled |

### With authentication

```bash
export PHOENIX_ENABLE_AUTH=true
export PHOENIX_SECRET="your-secret-key-min-32-chars"
export PHOENIX_ADMIN_SECRET="admin-bootstrap-token"

phoenix serve
```

## Best practices

1. **Use projects**: Separate traces by environment (dev/staging/prod)
2. **Add metadata**: Include user and session IDs for debugging
3. **Evaluate regularly**: Run automated evaluations in CI/CD
4. **Version datasets**: Track test-set changes over time
5. **Monitor costs**: Track token usage via Phoenix dashboards
6. **Self-host**: Use PostgreSQL for production deployments

## Common issues

**Traces not appearing:**
```python
from phoenix.otel import register

# Verify the collector endpoint
tracer_provider = register(
    project_name="my-app",
    endpoint="http://localhost:6006/v1/traces"  # Correct endpoint
)

# Force flush pending spans
from opentelemetry import trace
trace.get_tracer_provider().force_flush()
```

**High memory in notebook:**
```python
# Close the app when done
session = px.launch_app()
# ... do work ...
px.close_app()
```

**Database connection issues:**
```bash
# Verify the PostgreSQL connection
psql $PHOENIX_SQL_DATABASE_URL -c "SELECT 1"

# Check Phoenix logs
phoenix serve --log-level debug
```

## References

- **[Advanced Usage](references/advanced-usage.md)** - Custom evaluators, experiments, production setup
- **[Troubleshooting](references/troubleshooting.md)** - Common issues, debugging, performance

## Resources

- **Documentation**: https://docs.arize.com/phoenix
- **Repository**: https://github.com/Arize-ai/phoenix
- **Docker Hub**: https://hub.docker.com/r/arizephoenix/phoenix
- **Version**: 12.0.0+
- **License**: Apache 2.0