@synsci/cli-darwin-x64 1.1.49
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/bin/skills/accelerate/SKILL.md +332 -0
- package/bin/skills/accelerate/references/custom-plugins.md +453 -0
- package/bin/skills/accelerate/references/megatron-integration.md +489 -0
- package/bin/skills/accelerate/references/performance.md +525 -0
- package/bin/skills/audiocraft/SKILL.md +564 -0
- package/bin/skills/audiocraft/references/advanced-usage.md +666 -0
- package/bin/skills/audiocraft/references/troubleshooting.md +504 -0
- package/bin/skills/autogpt/SKILL.md +403 -0
- package/bin/skills/autogpt/references/advanced-usage.md +535 -0
- package/bin/skills/autogpt/references/troubleshooting.md +420 -0
- package/bin/skills/awq/SKILL.md +310 -0
- package/bin/skills/awq/references/advanced-usage.md +324 -0
- package/bin/skills/awq/references/troubleshooting.md +344 -0
- package/bin/skills/axolotl/SKILL.md +158 -0
- package/bin/skills/axolotl/references/api.md +5548 -0
- package/bin/skills/axolotl/references/dataset-formats.md +1029 -0
- package/bin/skills/axolotl/references/index.md +15 -0
- package/bin/skills/axolotl/references/other.md +3563 -0
- package/bin/skills/bigcode-evaluation-harness/SKILL.md +405 -0
- package/bin/skills/bigcode-evaluation-harness/references/benchmarks.md +393 -0
- package/bin/skills/bigcode-evaluation-harness/references/custom-tasks.md +424 -0
- package/bin/skills/bigcode-evaluation-harness/references/issues.md +394 -0
- package/bin/skills/bitsandbytes/SKILL.md +411 -0
- package/bin/skills/bitsandbytes/references/memory-optimization.md +521 -0
- package/bin/skills/bitsandbytes/references/qlora-training.md +521 -0
- package/bin/skills/bitsandbytes/references/quantization-formats.md +447 -0
- package/bin/skills/blip-2/SKILL.md +564 -0
- package/bin/skills/blip-2/references/advanced-usage.md +680 -0
- package/bin/skills/blip-2/references/troubleshooting.md +526 -0
- package/bin/skills/chroma/SKILL.md +406 -0
- package/bin/skills/chroma/references/integration.md +38 -0
- package/bin/skills/clip/SKILL.md +253 -0
- package/bin/skills/clip/references/applications.md +207 -0
- package/bin/skills/constitutional-ai/SKILL.md +290 -0
- package/bin/skills/crewai/SKILL.md +498 -0
- package/bin/skills/crewai/references/flows.md +438 -0
- package/bin/skills/crewai/references/tools.md +429 -0
- package/bin/skills/crewai/references/troubleshooting.md +480 -0
- package/bin/skills/deepspeed/SKILL.md +141 -0
- package/bin/skills/deepspeed/references/08.md +17 -0
- package/bin/skills/deepspeed/references/09.md +173 -0
- package/bin/skills/deepspeed/references/2020.md +378 -0
- package/bin/skills/deepspeed/references/2023.md +279 -0
- package/bin/skills/deepspeed/references/assets.md +179 -0
- package/bin/skills/deepspeed/references/index.md +35 -0
- package/bin/skills/deepspeed/references/mii.md +118 -0
- package/bin/skills/deepspeed/references/other.md +1191 -0
- package/bin/skills/deepspeed/references/tutorials.md +6554 -0
- package/bin/skills/dspy/SKILL.md +590 -0
- package/bin/skills/dspy/references/examples.md +663 -0
- package/bin/skills/dspy/references/modules.md +475 -0
- package/bin/skills/dspy/references/optimizers.md +566 -0
- package/bin/skills/faiss/SKILL.md +221 -0
- package/bin/skills/faiss/references/index_types.md +280 -0
- package/bin/skills/flash-attention/SKILL.md +367 -0
- package/bin/skills/flash-attention/references/benchmarks.md +215 -0
- package/bin/skills/flash-attention/references/transformers-integration.md +293 -0
- package/bin/skills/gguf/SKILL.md +427 -0
- package/bin/skills/gguf/references/advanced-usage.md +504 -0
- package/bin/skills/gguf/references/troubleshooting.md +442 -0
- package/bin/skills/gptq/SKILL.md +450 -0
- package/bin/skills/gptq/references/calibration.md +337 -0
- package/bin/skills/gptq/references/integration.md +129 -0
- package/bin/skills/gptq/references/troubleshooting.md +95 -0
- package/bin/skills/grpo-rl-training/README.md +97 -0
- package/bin/skills/grpo-rl-training/SKILL.md +572 -0
- package/bin/skills/grpo-rl-training/examples/reward_functions_library.py +393 -0
- package/bin/skills/grpo-rl-training/templates/basic_grpo_training.py +228 -0
- package/bin/skills/guidance/SKILL.md +572 -0
- package/bin/skills/guidance/references/backends.md +554 -0
- package/bin/skills/guidance/references/constraints.md +674 -0
- package/bin/skills/guidance/references/examples.md +767 -0
- package/bin/skills/hqq/SKILL.md +445 -0
- package/bin/skills/hqq/references/advanced-usage.md +528 -0
- package/bin/skills/hqq/references/troubleshooting.md +503 -0
- package/bin/skills/hugging-face-cli/SKILL.md +191 -0
- package/bin/skills/hugging-face-cli/references/commands.md +954 -0
- package/bin/skills/hugging-face-cli/references/examples.md +374 -0
- package/bin/skills/hugging-face-datasets/SKILL.md +547 -0
- package/bin/skills/hugging-face-datasets/examples/diverse_training_examples.json +239 -0
- package/bin/skills/hugging-face-datasets/examples/system_prompt_template.txt +196 -0
- package/bin/skills/hugging-face-datasets/examples/training_examples.json +176 -0
- package/bin/skills/hugging-face-datasets/scripts/dataset_manager.py +522 -0
- package/bin/skills/hugging-face-datasets/scripts/sql_manager.py +844 -0
- package/bin/skills/hugging-face-datasets/templates/chat.json +55 -0
- package/bin/skills/hugging-face-datasets/templates/classification.json +62 -0
- package/bin/skills/hugging-face-datasets/templates/completion.json +51 -0
- package/bin/skills/hugging-face-datasets/templates/custom.json +75 -0
- package/bin/skills/hugging-face-datasets/templates/qa.json +54 -0
- package/bin/skills/hugging-face-datasets/templates/tabular.json +81 -0
- package/bin/skills/hugging-face-evaluation/SKILL.md +656 -0
- package/bin/skills/hugging-face-evaluation/examples/USAGE_EXAMPLES.md +382 -0
- package/bin/skills/hugging-face-evaluation/examples/artificial_analysis_to_hub.py +141 -0
- package/bin/skills/hugging-face-evaluation/examples/example_readme_tables.md +135 -0
- package/bin/skills/hugging-face-evaluation/examples/metric_mapping.json +50 -0
- package/bin/skills/hugging-face-evaluation/requirements.txt +20 -0
- package/bin/skills/hugging-face-evaluation/scripts/evaluation_manager.py +1374 -0
- package/bin/skills/hugging-face-evaluation/scripts/inspect_eval_uv.py +104 -0
- package/bin/skills/hugging-face-evaluation/scripts/inspect_vllm_uv.py +317 -0
- package/bin/skills/hugging-face-evaluation/scripts/lighteval_vllm_uv.py +303 -0
- package/bin/skills/hugging-face-evaluation/scripts/run_eval_job.py +98 -0
- package/bin/skills/hugging-face-evaluation/scripts/run_vllm_eval_job.py +331 -0
- package/bin/skills/hugging-face-evaluation/scripts/test_extraction.py +206 -0
- package/bin/skills/hugging-face-jobs/SKILL.md +1041 -0
- package/bin/skills/hugging-face-jobs/index.html +216 -0
- package/bin/skills/hugging-face-jobs/references/hardware_guide.md +336 -0
- package/bin/skills/hugging-face-jobs/references/hub_saving.md +352 -0
- package/bin/skills/hugging-face-jobs/references/token_usage.md +546 -0
- package/bin/skills/hugging-face-jobs/references/troubleshooting.md +475 -0
- package/bin/skills/hugging-face-jobs/scripts/cot-self-instruct.py +718 -0
- package/bin/skills/hugging-face-jobs/scripts/finepdfs-stats.py +546 -0
- package/bin/skills/hugging-face-jobs/scripts/generate-responses.py +587 -0
- package/bin/skills/hugging-face-model-trainer/SKILL.md +711 -0
- package/bin/skills/hugging-face-model-trainer/references/gguf_conversion.md +296 -0
- package/bin/skills/hugging-face-model-trainer/references/hardware_guide.md +283 -0
- package/bin/skills/hugging-face-model-trainer/references/hub_saving.md +364 -0
- package/bin/skills/hugging-face-model-trainer/references/reliability_principles.md +371 -0
- package/bin/skills/hugging-face-model-trainer/references/trackio_guide.md +189 -0
- package/bin/skills/hugging-face-model-trainer/references/training_methods.md +150 -0
- package/bin/skills/hugging-face-model-trainer/references/training_patterns.md +203 -0
- package/bin/skills/hugging-face-model-trainer/references/troubleshooting.md +282 -0
- package/bin/skills/hugging-face-model-trainer/scripts/convert_to_gguf.py +424 -0
- package/bin/skills/hugging-face-model-trainer/scripts/dataset_inspector.py +417 -0
- package/bin/skills/hugging-face-model-trainer/scripts/estimate_cost.py +150 -0
- package/bin/skills/hugging-face-model-trainer/scripts/train_dpo_example.py +106 -0
- package/bin/skills/hugging-face-model-trainer/scripts/train_grpo_example.py +89 -0
- package/bin/skills/hugging-face-model-trainer/scripts/train_sft_example.py +122 -0
- package/bin/skills/hugging-face-paper-publisher/SKILL.md +627 -0
- package/bin/skills/hugging-face-paper-publisher/examples/example_usage.md +327 -0
- package/bin/skills/hugging-face-paper-publisher/references/quick_reference.md +216 -0
- package/bin/skills/hugging-face-paper-publisher/scripts/paper_manager.py +508 -0
- package/bin/skills/hugging-face-paper-publisher/templates/arxiv.md +299 -0
- package/bin/skills/hugging-face-paper-publisher/templates/ml-report.md +358 -0
- package/bin/skills/hugging-face-paper-publisher/templates/modern.md +319 -0
- package/bin/skills/hugging-face-paper-publisher/templates/standard.md +201 -0
- package/bin/skills/hugging-face-tool-builder/SKILL.md +115 -0
- package/bin/skills/hugging-face-tool-builder/references/baseline_hf_api.py +57 -0
- package/bin/skills/hugging-face-tool-builder/references/baseline_hf_api.sh +40 -0
- package/bin/skills/hugging-face-tool-builder/references/baseline_hf_api.tsx +57 -0
- package/bin/skills/hugging-face-tool-builder/references/find_models_by_paper.sh +230 -0
- package/bin/skills/hugging-face-tool-builder/references/hf_enrich_models.sh +96 -0
- package/bin/skills/hugging-face-tool-builder/references/hf_model_card_frontmatter.sh +188 -0
- package/bin/skills/hugging-face-tool-builder/references/hf_model_papers_auth.sh +171 -0
- package/bin/skills/hugging-face-trackio/SKILL.md +65 -0
- package/bin/skills/hugging-face-trackio/references/logging_metrics.md +206 -0
- package/bin/skills/hugging-face-trackio/references/retrieving_metrics.md +223 -0
- package/bin/skills/huggingface-tokenizers/SKILL.md +516 -0
- package/bin/skills/huggingface-tokenizers/references/algorithms.md +653 -0
- package/bin/skills/huggingface-tokenizers/references/integration.md +637 -0
- package/bin/skills/huggingface-tokenizers/references/pipeline.md +723 -0
- package/bin/skills/huggingface-tokenizers/references/training.md +565 -0
- package/bin/skills/instructor/SKILL.md +740 -0
- package/bin/skills/instructor/references/examples.md +107 -0
- package/bin/skills/instructor/references/providers.md +70 -0
- package/bin/skills/instructor/references/validation.md +606 -0
- package/bin/skills/knowledge-distillation/SKILL.md +458 -0
- package/bin/skills/knowledge-distillation/references/minillm.md +334 -0
- package/bin/skills/lambda-labs/SKILL.md +545 -0
- package/bin/skills/lambda-labs/references/advanced-usage.md +611 -0
- package/bin/skills/lambda-labs/references/troubleshooting.md +530 -0
- package/bin/skills/langchain/SKILL.md +480 -0
- package/bin/skills/langchain/references/agents.md +499 -0
- package/bin/skills/langchain/references/integration.md +562 -0
- package/bin/skills/langchain/references/rag.md +600 -0
- package/bin/skills/langsmith/SKILL.md +422 -0
- package/bin/skills/langsmith/references/advanced-usage.md +548 -0
- package/bin/skills/langsmith/references/troubleshooting.md +537 -0
- package/bin/skills/litgpt/SKILL.md +469 -0
- package/bin/skills/litgpt/references/custom-models.md +568 -0
- package/bin/skills/litgpt/references/distributed-training.md +451 -0
- package/bin/skills/litgpt/references/supported-models.md +336 -0
- package/bin/skills/litgpt/references/training-recipes.md +619 -0
- package/bin/skills/llama-cpp/SKILL.md +258 -0
- package/bin/skills/llama-cpp/references/optimization.md +89 -0
- package/bin/skills/llama-cpp/references/quantization.md +213 -0
- package/bin/skills/llama-cpp/references/server.md +125 -0
- package/bin/skills/llama-factory/SKILL.md +80 -0
- package/bin/skills/llama-factory/references/_images.md +23 -0
- package/bin/skills/llama-factory/references/advanced.md +1055 -0
- package/bin/skills/llama-factory/references/getting_started.md +349 -0
- package/bin/skills/llama-factory/references/index.md +19 -0
- package/bin/skills/llama-factory/references/other.md +31 -0
- package/bin/skills/llamaguard/SKILL.md +337 -0
- package/bin/skills/llamaindex/SKILL.md +569 -0
- package/bin/skills/llamaindex/references/agents.md +83 -0
- package/bin/skills/llamaindex/references/data_connectors.md +108 -0
- package/bin/skills/llamaindex/references/query_engines.md +406 -0
- package/bin/skills/llava/SKILL.md +304 -0
- package/bin/skills/llava/references/training.md +197 -0
- package/bin/skills/lm-evaluation-harness/SKILL.md +490 -0
- package/bin/skills/lm-evaluation-harness/references/api-evaluation.md +490 -0
- package/bin/skills/lm-evaluation-harness/references/benchmark-guide.md +488 -0
- package/bin/skills/lm-evaluation-harness/references/custom-tasks.md +602 -0
- package/bin/skills/lm-evaluation-harness/references/distributed-eval.md +519 -0
- package/bin/skills/long-context/SKILL.md +536 -0
- package/bin/skills/long-context/references/extension_methods.md +468 -0
- package/bin/skills/long-context/references/fine_tuning.md +611 -0
- package/bin/skills/long-context/references/rope.md +402 -0
- package/bin/skills/mamba/SKILL.md +260 -0
- package/bin/skills/mamba/references/architecture-details.md +206 -0
- package/bin/skills/mamba/references/benchmarks.md +255 -0
- package/bin/skills/mamba/references/training-guide.md +388 -0
- package/bin/skills/megatron-core/SKILL.md +366 -0
- package/bin/skills/megatron-core/references/benchmarks.md +249 -0
- package/bin/skills/megatron-core/references/parallelism-guide.md +404 -0
- package/bin/skills/megatron-core/references/production-examples.md +473 -0
- package/bin/skills/megatron-core/references/training-recipes.md +547 -0
- package/bin/skills/miles/SKILL.md +315 -0
- package/bin/skills/miles/references/api-reference.md +141 -0
- package/bin/skills/miles/references/troubleshooting.md +352 -0
- package/bin/skills/mlflow/SKILL.md +704 -0
- package/bin/skills/mlflow/references/deployment.md +744 -0
- package/bin/skills/mlflow/references/model-registry.md +770 -0
- package/bin/skills/mlflow/references/tracking.md +680 -0
- package/bin/skills/modal/SKILL.md +341 -0
- package/bin/skills/modal/references/advanced-usage.md +503 -0
- package/bin/skills/modal/references/troubleshooting.md +494 -0
- package/bin/skills/model-merging/SKILL.md +539 -0
- package/bin/skills/model-merging/references/evaluation.md +462 -0
- package/bin/skills/model-merging/references/examples.md +428 -0
- package/bin/skills/model-merging/references/methods.md +352 -0
- package/bin/skills/model-pruning/SKILL.md +495 -0
- package/bin/skills/model-pruning/references/wanda.md +347 -0
- package/bin/skills/moe-training/SKILL.md +526 -0
- package/bin/skills/moe-training/references/architectures.md +432 -0
- package/bin/skills/moe-training/references/inference.md +348 -0
- package/bin/skills/moe-training/references/training.md +425 -0
- package/bin/skills/nanogpt/SKILL.md +290 -0
- package/bin/skills/nanogpt/references/architecture.md +382 -0
- package/bin/skills/nanogpt/references/data.md +476 -0
- package/bin/skills/nanogpt/references/training.md +564 -0
- package/bin/skills/nemo-curator/SKILL.md +383 -0
- package/bin/skills/nemo-curator/references/deduplication.md +87 -0
- package/bin/skills/nemo-curator/references/filtering.md +102 -0
- package/bin/skills/nemo-evaluator/SKILL.md +494 -0
- package/bin/skills/nemo-evaluator/references/adapter-system.md +340 -0
- package/bin/skills/nemo-evaluator/references/configuration.md +447 -0
- package/bin/skills/nemo-evaluator/references/custom-benchmarks.md +315 -0
- package/bin/skills/nemo-evaluator/references/execution-backends.md +361 -0
- package/bin/skills/nemo-guardrails/SKILL.md +297 -0
- package/bin/skills/nnsight/SKILL.md +436 -0
- package/bin/skills/nnsight/references/README.md +78 -0
- package/bin/skills/nnsight/references/api.md +344 -0
- package/bin/skills/nnsight/references/tutorials.md +300 -0
- package/bin/skills/openrlhf/SKILL.md +249 -0
- package/bin/skills/openrlhf/references/algorithm-comparison.md +404 -0
- package/bin/skills/openrlhf/references/custom-rewards.md +530 -0
- package/bin/skills/openrlhf/references/hybrid-engine.md +287 -0
- package/bin/skills/openrlhf/references/multi-node-training.md +454 -0
- package/bin/skills/outlines/SKILL.md +652 -0
- package/bin/skills/outlines/references/backends.md +615 -0
- package/bin/skills/outlines/references/examples.md +773 -0
- package/bin/skills/outlines/references/json_generation.md +652 -0
- package/bin/skills/peft/SKILL.md +431 -0
- package/bin/skills/peft/references/advanced-usage.md +514 -0
- package/bin/skills/peft/references/troubleshooting.md +480 -0
- package/bin/skills/phoenix/SKILL.md +475 -0
- package/bin/skills/phoenix/references/advanced-usage.md +619 -0
- package/bin/skills/phoenix/references/troubleshooting.md +538 -0
- package/bin/skills/pinecone/SKILL.md +358 -0
- package/bin/skills/pinecone/references/deployment.md +181 -0
- package/bin/skills/pytorch-fsdp/SKILL.md +126 -0
- package/bin/skills/pytorch-fsdp/references/index.md +7 -0
- package/bin/skills/pytorch-fsdp/references/other.md +4249 -0
- package/bin/skills/pytorch-lightning/SKILL.md +346 -0
- package/bin/skills/pytorch-lightning/references/callbacks.md +436 -0
- package/bin/skills/pytorch-lightning/references/distributed.md +490 -0
- package/bin/skills/pytorch-lightning/references/hyperparameter-tuning.md +556 -0
- package/bin/skills/pyvene/SKILL.md +473 -0
- package/bin/skills/pyvene/references/README.md +73 -0
- package/bin/skills/pyvene/references/api.md +383 -0
- package/bin/skills/pyvene/references/tutorials.md +376 -0
- package/bin/skills/qdrant/SKILL.md +493 -0
- package/bin/skills/qdrant/references/advanced-usage.md +648 -0
- package/bin/skills/qdrant/references/troubleshooting.md +631 -0
- package/bin/skills/ray-data/SKILL.md +326 -0
- package/bin/skills/ray-data/references/integration.md +82 -0
- package/bin/skills/ray-data/references/transformations.md +83 -0
- package/bin/skills/ray-train/SKILL.md +406 -0
- package/bin/skills/ray-train/references/multi-node.md +628 -0
- package/bin/skills/rwkv/SKILL.md +260 -0
- package/bin/skills/rwkv/references/architecture-details.md +344 -0
- package/bin/skills/rwkv/references/rwkv7.md +386 -0
- package/bin/skills/rwkv/references/state-management.md +369 -0
- package/bin/skills/saelens/SKILL.md +386 -0
- package/bin/skills/saelens/references/README.md +70 -0
- package/bin/skills/saelens/references/api.md +333 -0
- package/bin/skills/saelens/references/tutorials.md +318 -0
- package/bin/skills/segment-anything/SKILL.md +500 -0
- package/bin/skills/segment-anything/references/advanced-usage.md +589 -0
- package/bin/skills/segment-anything/references/troubleshooting.md +484 -0
- package/bin/skills/sentence-transformers/SKILL.md +255 -0
- package/bin/skills/sentence-transformers/references/models.md +123 -0
- package/bin/skills/sentencepiece/SKILL.md +235 -0
- package/bin/skills/sentencepiece/references/algorithms.md +200 -0
- package/bin/skills/sentencepiece/references/training.md +304 -0
- package/bin/skills/sglang/SKILL.md +442 -0
- package/bin/skills/sglang/references/deployment.md +490 -0
- package/bin/skills/sglang/references/radix-attention.md +413 -0
- package/bin/skills/sglang/references/structured-generation.md +541 -0
- package/bin/skills/simpo/SKILL.md +219 -0
- package/bin/skills/simpo/references/datasets.md +478 -0
- package/bin/skills/simpo/references/hyperparameters.md +452 -0
- package/bin/skills/simpo/references/loss-functions.md +350 -0
- package/bin/skills/skypilot/SKILL.md +509 -0
- package/bin/skills/skypilot/references/advanced-usage.md +491 -0
- package/bin/skills/skypilot/references/troubleshooting.md +570 -0
- package/bin/skills/slime/SKILL.md +464 -0
- package/bin/skills/slime/references/api-reference.md +392 -0
- package/bin/skills/slime/references/troubleshooting.md +386 -0
- package/bin/skills/speculative-decoding/SKILL.md +467 -0
- package/bin/skills/speculative-decoding/references/lookahead.md +309 -0
- package/bin/skills/speculative-decoding/references/medusa.md +350 -0
- package/bin/skills/stable-diffusion/SKILL.md +519 -0
- package/bin/skills/stable-diffusion/references/advanced-usage.md +716 -0
- package/bin/skills/stable-diffusion/references/troubleshooting.md +555 -0
- package/bin/skills/tensorboard/SKILL.md +629 -0
- package/bin/skills/tensorboard/references/integrations.md +638 -0
- package/bin/skills/tensorboard/references/profiling.md +545 -0
- package/bin/skills/tensorboard/references/visualization.md +620 -0
- package/bin/skills/tensorrt-llm/SKILL.md +187 -0
- package/bin/skills/tensorrt-llm/references/multi-gpu.md +298 -0
- package/bin/skills/tensorrt-llm/references/optimization.md +242 -0
- package/bin/skills/tensorrt-llm/references/serving.md +470 -0
- package/bin/skills/tinker/SKILL.md +362 -0
- package/bin/skills/tinker/references/api-reference.md +168 -0
- package/bin/skills/tinker/references/getting-started.md +157 -0
- package/bin/skills/tinker/references/loss-functions.md +163 -0
- package/bin/skills/tinker/references/models-and-lora.md +139 -0
- package/bin/skills/tinker/references/recipes.md +280 -0
- package/bin/skills/tinker/references/reinforcement-learning.md +212 -0
- package/bin/skills/tinker/references/rendering.md +243 -0
- package/bin/skills/tinker/references/supervised-learning.md +232 -0
- package/bin/skills/tinker-training-cost/SKILL.md +187 -0
- package/bin/skills/tinker-training-cost/scripts/calculate_cost.py +123 -0
- package/bin/skills/torchforge/SKILL.md +433 -0
- package/bin/skills/torchforge/references/api-reference.md +327 -0
- package/bin/skills/torchforge/references/troubleshooting.md +409 -0
- package/bin/skills/torchtitan/SKILL.md +358 -0
- package/bin/skills/torchtitan/references/checkpoint.md +181 -0
- package/bin/skills/torchtitan/references/custom-models.md +258 -0
- package/bin/skills/torchtitan/references/float8.md +133 -0
- package/bin/skills/torchtitan/references/fsdp.md +126 -0
- package/bin/skills/transformer-lens/SKILL.md +346 -0
- package/bin/skills/transformer-lens/references/README.md +54 -0
- package/bin/skills/transformer-lens/references/api.md +362 -0
- package/bin/skills/transformer-lens/references/tutorials.md +339 -0
- package/bin/skills/trl-fine-tuning/SKILL.md +455 -0
- package/bin/skills/trl-fine-tuning/references/dpo-variants.md +227 -0
- package/bin/skills/trl-fine-tuning/references/online-rl.md +82 -0
- package/bin/skills/trl-fine-tuning/references/reward-modeling.md +122 -0
- package/bin/skills/trl-fine-tuning/references/sft-training.md +168 -0
- package/bin/skills/unsloth/SKILL.md +80 -0
- package/bin/skills/unsloth/references/index.md +7 -0
- package/bin/skills/unsloth/references/llms-full.md +16799 -0
- package/bin/skills/unsloth/references/llms-txt.md +12044 -0
- package/bin/skills/unsloth/references/llms.md +82 -0
- package/bin/skills/verl/SKILL.md +391 -0
- package/bin/skills/verl/references/api-reference.md +301 -0
- package/bin/skills/verl/references/troubleshooting.md +391 -0
- package/bin/skills/vllm/SKILL.md +364 -0
- package/bin/skills/vllm/references/optimization.md +226 -0
- package/bin/skills/vllm/references/quantization.md +284 -0
- package/bin/skills/vllm/references/server-deployment.md +255 -0
- package/bin/skills/vllm/references/troubleshooting.md +447 -0
- package/bin/skills/weights-and-biases/SKILL.md +590 -0
- package/bin/skills/weights-and-biases/references/artifacts.md +584 -0
- package/bin/skills/weights-and-biases/references/integrations.md +700 -0
- package/bin/skills/weights-and-biases/references/sweeps.md +847 -0
- package/bin/skills/whisper/SKILL.md +317 -0
- package/bin/skills/whisper/references/languages.md +189 -0
- package/bin/synsc +0 -0
- package/package.json +10 -0
@@ -0,0 +1,480 @@
---
name: langchain
description: Framework for building LLM-powered applications with agents, chains, and RAG. Supports multiple providers (OpenAI, Anthropic, Google), 500+ integrations, ReAct agents, tool calling, memory management, and vector store retrieval. Use for building chatbots, question-answering systems, autonomous agents, or RAG applications. Best for rapid prototyping and production deployments.
version: 1.0.0
author: Synthetic Sciences
license: MIT
tags: [Agents, LangChain, RAG, Tool Calling, ReAct, Memory Management, Vector Stores, LLM Applications, Chatbots, Production]
dependencies: [langchain, langchain-core, langchain-openai, langchain-anthropic]
---

# LangChain - Build LLM Applications with Agents & RAG

The most popular framework for building LLM-powered applications.

## When to use LangChain

**Use LangChain when:**

- Building agents with tool calling and reasoning (ReAct pattern)
- Implementing RAG (retrieval-augmented generation) pipelines
- Needing to swap LLM providers easily (OpenAI, Anthropic, Google)
- Creating chatbots with conversation memory
- Rapidly prototyping LLM applications
- Deploying to production with LangSmith observability

**Metrics**:

- **119,000+ GitHub stars**
- **272,000+ repositories** use LangChain
- **500+ integrations** (models, vector stores, tools)
- **3,800+ contributors**

**Use alternatives instead**:

- **LlamaIndex**: RAG-focused, better for document Q&A
- **LangGraph**: Complex stateful workflows, more control
- **Haystack**: Production search pipelines
- **Semantic Kernel**: Microsoft ecosystem

## Quick start

### Installation

```bash
# Core library (Python 3.10+)
pip install -U langchain

# With OpenAI
pip install langchain-openai

# With Anthropic
pip install langchain-anthropic

# Common extras
pip install langchain-community  # 500+ integrations
pip install langchain-chroma     # Vector store
```
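
The provider packages read their API keys from environment variables, so set the ones matching the providers you installed before running any example (variable names follow each provider's SDK convention):

```shell
# Credentials are picked up from the environment by the provider packages
export OPENAI_API_KEY="sk-..."         # used by langchain-openai
export ANTHROPIC_API_KEY="sk-ant-..."  # used by langchain-anthropic
```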

### Basic LLM usage

```python
from langchain_anthropic import ChatAnthropic

# Initialize model
llm = ChatAnthropic(model="claude-sonnet-4-5-20250929")

# Simple completion
response = llm.invoke("Explain quantum computing in 2 sentences")
print(response.content)
```

### Create an agent (ReAct pattern)

```python
from langchain.agents import create_agent
from langchain_anthropic import ChatAnthropic

# Define tools as plain functions; docstrings become tool descriptions
def get_weather(city: str) -> str:
    """Get current weather for a city."""
    return f"It's sunny in {city}, 72°F"

def search_web(query: str) -> str:
    """Search the web for information."""
    return f"Search results for: {query}"

# Create the agent in under 10 lines
agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-5-20250929"),
    tools=[get_weather, search_web],
    system_prompt="You are a helpful assistant. Use tools when needed."
)

# Run the agent
result = agent.invoke({"messages": [{"role": "user", "content": "What's the weather in Paris?"}]})
print(result["messages"][-1].content)
```
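
Conceptually, `agent.invoke` runs a loop: the model either requests a tool call or returns a final answer, and each tool result is appended back to the message list before the next model call. A hand-rolled sketch of that control flow (illustrative only, not LangChain internals):

```python
def agent_loop(model, tools, messages, max_steps=5):
    """Schematic ReAct-style loop: model proposes, tools act, results feed back."""
    for _ in range(max_steps):
        action = model(messages)  # returns ("tool", name, args) or ("final", text)
        if action[0] == "final":
            return action[1]
        _, name, args = action
        observation = tools[name](**args)  # execute the requested tool
        messages.append({"role": "tool", "name": name, "content": observation})
    return "Stopped: step limit reached"
```

A real agent also has to handle parallel tool calls and malformed arguments; the loop above is just the shape of the iteration.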

## Core concepts

### 1. Models - LLM abstraction

```python
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic
from langchain_google_genai import ChatGoogleGenerativeAI

# Swap providers behind the same interface
llm = ChatOpenAI(model="gpt-4o")
llm = ChatAnthropic(model="claude-sonnet-4-5-20250929")
llm = ChatGoogleGenerativeAI(model="gemini-2.0-flash-exp")

# Streaming
for chunk in llm.stream("Write a poem"):
    print(chunk.content, end="", flush=True)
```

### 2. Chains - Sequential operations

```python
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

# Define prompt template
prompt = PromptTemplate(
    input_variables=["topic"],
    template="Write a 3-sentence summary about {topic}"
)

# Create and run the chain
chain = LLMChain(llm=llm, prompt=prompt)
result = chain.run(topic="machine learning")
```

### 3. Agents - Tool-using reasoning

**ReAct (Reasoning + Acting) pattern:**

```python
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain_core.prompts import ChatPromptTemplate
from langchain.tools import Tool

# Define a custom tool (eval is for illustration only; never eval untrusted input)
calculator = Tool(
    name="Calculator",
    func=lambda x: str(eval(x)),
    description="Useful for math calculations. Input: valid Python expression."
)

# Tool-calling agents need a chat prompt with an agent_scratchpad placeholder
prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer questions using available tools"),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])

# Create agent and executor with the same tool list
agent = create_tool_calling_agent(llm=llm, tools=[calculator], prompt=prompt)
agent_executor = AgentExecutor(agent=agent, tools=[calculator], verbose=True)

# Run with reasoning
result = agent_executor.invoke({"input": "What is 25 * 17 + 142?"})
```

### 4. Memory - Conversation history

```python
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationChain

# Add memory to track conversation
memory = ConversationBufferMemory()

conversation = ConversationChain(
    llm=llm,
    memory=memory,
    verbose=True
)

# Multi-turn conversation
conversation.predict(input="Hi, I'm Alice")
conversation.predict(input="What's my name?")  # Remembers "Alice"
```
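
`ConversationBufferMemory` keeps the full transcript and prepends it to every new prompt, which is why long chats eventually exceed the context window. A toy equivalent showing just the mechanism (not the LangChain class):

```python
class ToyBufferMemory:
    """Stores every turn verbatim and replays it as prompt context."""

    def __init__(self):
        self.turns: list[tuple[str, str]] = []

    def save_turn(self, human: str, ai: str) -> None:
        self.turns.append((human, ai))

    def as_context(self) -> str:
        # The next prompt would be prefixed with this whole transcript
        return "\n".join(f"Human: {h}\nAI: {a}" for h, a in self.turns)

memory = ToyBufferMemory()
memory.save_turn("Hi, I'm Alice", "Hello Alice!")
print(memory.as_context())
```

Window and summary memories exist for the same reason: they trade perfect recall for a bounded context size.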

## RAG (Retrieval-Augmented Generation)

### Basic RAG pipeline

```python
from langchain_community.document_loaders import WebBaseLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_openai import OpenAIEmbeddings
from langchain_chroma import Chroma
from langchain.chains import RetrievalQA

# 1. Load documents
loader = WebBaseLoader("https://docs.python.org/3/tutorial/")
docs = loader.load()

# 2. Split into chunks
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,
    chunk_overlap=200
)
splits = text_splitter.split_documents(docs)

# 3. Create embeddings and vector store
vectorstore = Chroma.from_documents(
    documents=splits,
    embedding=OpenAIEmbeddings()
)

# 4. Create retriever
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})

# 5. Create QA chain
qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    retriever=retriever,
    return_source_documents=True
)

# 6. Query
result = qa_chain.invoke({"query": "What are Python decorators?"})
print(result["result"])
print(f"Sources: {result['source_documents']}")
```
|
|
226
|
+
|
|
227
|
+
### Conversational RAG with memory

```python
from langchain.chains import ConversationalRetrievalChain

# RAG with conversation memory
qa = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=retriever,
    memory=ConversationBufferMemory(
        memory_key="chat_history",
        return_messages=True
    )
)

# Multi-turn RAG
qa.invoke({"question": "What is Python used for?"})
qa.invoke({"question": "Can you elaborate on web development?"})  # Remembers context
```

## Advanced agent patterns

### Structured output

```python
from pydantic import BaseModel, Field

# Define schema
class WeatherReport(BaseModel):
    city: str = Field(description="City name")
    temperature: float = Field(description="Temperature in Fahrenheit")
    condition: str = Field(description="Weather condition")

# Get structured response
structured_llm = llm.with_structured_output(WeatherReport)
result = structured_llm.invoke("What's the weather in SF? It's 65F and sunny")
print(result.city, result.temperature, result.condition)
```

### Parallel tool execution

```python
from langchain.agents import AgentExecutor, create_tool_calling_agent

# Tool-calling agents can emit several independent tool calls in one step
agent = create_tool_calling_agent(
    llm=llm,
    tools=[get_weather, search_web, calculator],
    prompt="Answer questions using available tools"
)
agent_executor = AgentExecutor(agent=agent, tools=[get_weather, search_web, calculator])

# This can call get_weather("Paris") and get_weather("London") in parallel
result = agent_executor.invoke({"input": "Compare weather in Paris and London"})
```

### Streaming agent execution

```python
# Stream agent steps
for step in agent_executor.stream({"input": "Research AI trends"}):
    if "actions" in step:
        print(f"Tool: {step['actions'][0].tool}")
    if "output" in step:
        print(f"Output: {step['output']}")
```

## Common patterns

### Multi-document QA

```python
from langchain.chains.qa_with_sources import load_qa_with_sources_chain

# Load multiple documents (each loader is constructed with its URL)
docs = (
    WebBaseLoader("https://docs.python.org").load()
    + WebBaseLoader("https://docs.numpy.org").load()
)

# QA with source citations
chain = load_qa_with_sources_chain(llm, chain_type="stuff")
result = chain.invoke({"input_documents": docs, "question": "How to use numpy arrays?"})
print(result["output_text"])  # Includes source citations
```

### Custom tools with error handling

```python
from langchain.tools import tool

@tool
def risky_operation(query: str) -> str:
    """Perform a risky operation that might fail."""
    try:
        # Your operation here
        result = perform_operation(query)
        return f"Success: {result}"
    except Exception as e:
        return f"Error: {str(e)}"

# Agent handles errors gracefully
agent = create_agent(model=llm, tools=[risky_operation])
```

### LangSmith observability

```python
import os

# Enable tracing
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "your-api-key"
os.environ["LANGCHAIN_PROJECT"] = "my-project"

# All chains/agents automatically traced
agent = create_agent(model=llm, tools=[calculator])
result = agent.invoke({"input": "Calculate 123 * 456"})

# View traces at smith.langchain.com
```

## Vector stores

### Chroma (local)

```python
from langchain_chroma import Chroma

vectorstore = Chroma.from_documents(
    documents=docs,
    embedding=OpenAIEmbeddings(),
    persist_directory="./chroma_db"
)
```

### Pinecone (cloud)

```python
from langchain_pinecone import PineconeVectorStore

vectorstore = PineconeVectorStore.from_documents(
    documents=docs,
    embedding=OpenAIEmbeddings(),
    index_name="my-index"
)
```

### FAISS (similarity search)

```python
from langchain_community.vectorstores import FAISS

vectorstore = FAISS.from_documents(docs, OpenAIEmbeddings())
vectorstore.save_local("faiss_index")

# Load later (recent versions require opting in to pickle deserialization;
# only load indexes you created yourself)
vectorstore = FAISS.load_local(
    "faiss_index",
    OpenAIEmbeddings(),
    allow_dangerous_deserialization=True
)
```

## Document loaders

```python
# Web pages
from langchain_community.document_loaders import WebBaseLoader
loader = WebBaseLoader("https://example.com")

# PDFs
from langchain_community.document_loaders import PyPDFLoader
loader = PyPDFLoader("paper.pdf")

# GitHub
from langchain_community.document_loaders import GithubFileLoader
loader = GithubFileLoader(repo="user/repo", file_filter=lambda x: x.endswith(".py"))

# CSV
from langchain_community.document_loaders import CSVLoader
loader = CSVLoader("data.csv")
```

## Text splitters

```python
# Recursive (recommended for general text)
from langchain.text_splitter import RecursiveCharacterTextSplitter
splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,
    chunk_overlap=200,
    separators=["\n\n", "\n", " ", ""]
)

# Code-aware
from langchain.text_splitter import PythonCodeTextSplitter
splitter = PythonCodeTextSplitter(chunk_size=500)

# Semantic (by meaning)
from langchain_experimental.text_splitter import SemanticChunker
splitter = SemanticChunker(OpenAIEmbeddings())
```

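What `chunk_overlap` buys you can be seen in a toy character splitter. This is a deliberately simplified sketch, not the real recursive algorithm (which tries the separators in order before falling back to raw characters): each chunk repeats the tail of the previous one, so context spanning a boundary survives in at least one chunk.

```python
def split_with_overlap(text: str, chunk_size: int, chunk_overlap: int) -> list[str]:
    """Naive character splitter illustrating how overlap stitches context."""
    step = chunk_size - chunk_overlap  # advance less than a full chunk
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = split_with_overlap("abcdefghij", chunk_size=4, chunk_overlap=2)
print(chunks)  # ['abcd', 'cdef', 'efgh', 'ghij', 'ij']
```

With overlap 0 the same call would yield disjoint chunks, and a sentence cut at a boundary would be split across two retrieval units.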
## Best practices

1. **Start simple** - Use `create_agent()` for most cases
2. **Enable streaming** - Better UX for long responses
3. **Add error handling** - Tools can fail, handle gracefully
4. **Use LangSmith** - Essential for debugging agents
5. **Optimize chunk size** - 500-1000 chars for RAG
6. **Version prompts** - Track changes in production
7. **Cache embeddings** - Expensive, cache when possible
8. **Monitor costs** - Track token usage with LangSmith

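The idea behind practice 7 (cache embeddings) can be sketched framework-agnostically; `fake_embed` below is a stand-in for a paid embedding API call, and LangChain itself ships a `CacheBackedEmbeddings` wrapper that applies the same memoization over a byte store.

```python
import hashlib

def fake_embed(text: str) -> list[float]:
    """Stand-in for a real (paid) embedding API call."""
    return [float(b) for b in hashlib.sha256(text.encode()).digest()[:4]]

class CachingEmbedder:
    """Memoize embeddings so repeated texts are only embedded once."""

    def __init__(self, embed_fn):
        self.embed_fn = embed_fn
        self.cache: dict[str, list[float]] = {}
        self.misses = 0  # count of actual embedding calls

    def embed(self, text: str) -> list[float]:
        if text not in self.cache:
            self.misses += 1
            self.cache[text] = self.embed_fn(text)
        return self.cache[text]

embedder = CachingEmbedder(fake_embed)
docs = ["chunk a", "chunk b", "chunk a"]  # "chunk a" repeats
vectors = [embedder.embed(d) for d in docs]
print(embedder.misses)  # 2 -- the repeated chunk hit the cache
```

Re-indexing the same corpus then costs nothing for unchanged chunks, which is where most of the embedding spend usually goes.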
## Performance benchmarks

| Operation | Latency | Notes |
|-----------|---------|-------|
| Simple LLM call | ~1-2s | Depends on provider |
| Agent with 1 tool | ~3-5s | ReAct reasoning overhead |
| RAG retrieval | ~0.5-1s | Vector search + LLM |
| Embedding 1000 docs | ~10-30s | Depends on model |

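Numbers like those above vary by provider, model, and network, so measure your own stack rather than relying on rules of thumb. A minimal timing harness looks like this; `call_llm` is a stub standing in for any chain or agent invocation.

```python
import time
from statistics import mean

def call_llm(prompt: str) -> str:
    """Stand-in for a real chain/agent invocation."""
    time.sleep(0.01)  # simulate network + model latency
    return "ok"

def benchmark(fn, prompt: str, runs: int = 5) -> float:
    """Return mean wall-clock latency in seconds over several runs."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(prompt)
        timings.append(time.perf_counter() - start)
    return mean(timings)

latency = benchmark(call_llm, "What is 2 + 2?")
print(f"mean latency: {latency:.3f}s")
```

Averaging over several runs smooths out cold-start and network jitter; for production numbers, LangSmith traces give per-step latency for free.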
## LangChain vs LangGraph

| Feature | LangChain | LangGraph |
|---------|-----------|-----------|
| **Best for** | Quick agents, RAG | Complex workflows |
| **Abstraction level** | High | Low |
| **Code to start** | <10 lines | ~30 lines |
| **Control** | Simple | Full control |
| **Stateful workflows** | Limited | Native |
| **Cyclic graphs** | No | Yes |
| **Human-in-loop** | Basic | Advanced |

**Use LangGraph when:**
- Need stateful workflows with cycles
- Require fine-grained control
- Building multi-agent systems
- Production apps with complex logic

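The "stateful workflow with cycles" pattern can be sketched in plain Python to show what LangGraph models natively as a graph of nodes, edges, and conditional loops. The node and condition functions here are hypothetical stand-ins, not LangGraph API.

```python
# A stateful, cyclic agent loop in plain Python -- the pattern LangGraph
# expresses declaratively with StateGraph nodes and conditional edges.
def plan(state: dict) -> dict:
    """Node: do one unit of work and update shared state."""
    state["attempts"] += 1
    state["answer"] = state["attempts"] * 10
    return state

def check(state: dict) -> str:
    """Conditional edge: loop back to 'plan' until the answer is good enough."""
    return "done" if state["answer"] >= 30 else "plan"

state = {"attempts": 0, "answer": 0}
node = "plan"
while node != "done":
    state = plan(state)
    node = check(state)

print(state)  # {'attempts': 3, 'answer': 30}
```

LangChain's high-level agents run this loop for you but give you little control over it; LangGraph makes the state schema, the nodes, and the looping condition explicit and checkpointable.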
## References

- **[Agents Guide](references/agents.md)** - ReAct, tool calling, streaming
- **[RAG Guide](references/rag.md)** - Document loaders, retrievers, QA chains
- **[Integration Guide](references/integration.md)** - Vector stores, LangSmith, deployment

## Resources

- **GitHub**: https://github.com/langchain-ai/langchain ⭐ 119,000+
- **Docs**: https://docs.langchain.com
- **API Reference**: https://reference.langchain.com/python
- **LangSmith**: https://smith.langchain.com (observability)
- **Version**: 0.3+ (stable)
- **License**: MIT