claude-autopm 2.8.2 → 2.8.4
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +399 -637
- package/install/install.js +15 -5
- package/package.json +2 -1
- package/packages/plugin-ai/LICENSE +21 -0
- package/packages/plugin-ai/README.md +316 -0
- package/packages/plugin-ai/agents/anthropic-claude-expert.md +579 -0
- package/packages/plugin-ai/agents/azure-openai-expert.md +1411 -0
- package/packages/plugin-ai/agents/gemini-api-expert.md +880 -0
- package/packages/plugin-ai/agents/google-a2a-expert.md +1445 -0
- package/packages/plugin-ai/agents/huggingface-expert.md +2131 -0
- package/packages/plugin-ai/agents/langchain-expert.md +1427 -0
- package/packages/plugin-ai/agents/langgraph-workflow-expert.md +520 -0
- package/packages/plugin-ai/agents/openai-python-expert.md +1087 -0
- package/packages/plugin-ai/commands/a2a-setup.md +886 -0
- package/packages/plugin-ai/commands/ai-model-deployment.md +481 -0
- package/packages/plugin-ai/commands/anthropic-optimize.md +793 -0
- package/packages/plugin-ai/commands/huggingface-deploy.md +789 -0
- package/packages/plugin-ai/commands/langchain-optimize.md +807 -0
- package/packages/plugin-ai/commands/llm-optimize.md +348 -0
- package/packages/plugin-ai/commands/openai-optimize.md +863 -0
- package/packages/plugin-ai/commands/rag-optimize.md +841 -0
- package/packages/plugin-ai/commands/rag-setup-scaffold.md +382 -0
- package/packages/plugin-ai/package.json +66 -0
- package/packages/plugin-ai/plugin.json +519 -0
- package/packages/plugin-ai/rules/ai-model-standards.md +449 -0
- package/packages/plugin-ai/rules/prompt-engineering-standards.md +509 -0
- package/packages/plugin-ai/scripts/examples/huggingface-inference-example.py +145 -0
- package/packages/plugin-ai/scripts/examples/langchain-rag-example.py +366 -0
- package/packages/plugin-ai/scripts/examples/mlflow-tracking-example.py +224 -0
- package/packages/plugin-ai/scripts/examples/openai-chat-example.py +425 -0
- package/packages/plugin-cloud/README.md +268 -0
- package/packages/plugin-cloud/agents/README.md +55 -0
- package/packages/plugin-cloud/agents/aws-cloud-architect.md +521 -0
- package/packages/plugin-cloud/agents/azure-cloud-architect.md +436 -0
- package/packages/plugin-cloud/agents/gcp-cloud-architect.md +385 -0
- package/packages/plugin-cloud/agents/gcp-cloud-functions-engineer.md +306 -0
- package/packages/plugin-cloud/agents/gemini-api-expert.md +880 -0
- package/packages/plugin-cloud/agents/kubernetes-orchestrator.md +566 -0
- package/packages/plugin-cloud/agents/openai-python-expert.md +1087 -0
- package/packages/plugin-cloud/agents/terraform-infrastructure-expert.md +454 -0
- package/packages/plugin-cloud/commands/cloud-cost-optimize.md +243 -0
- package/packages/plugin-cloud/commands/cloud-validate.md +196 -0
- package/packages/plugin-cloud/commands/infra-deploy.md +38 -0
- package/packages/plugin-cloud/commands/k8s-deploy.md +37 -0
- package/packages/plugin-cloud/commands/ssh-security.md +65 -0
- package/packages/plugin-cloud/commands/traefik-setup.md +65 -0
- package/packages/plugin-cloud/hooks/pre-cloud-deploy.js +456 -0
- package/packages/plugin-cloud/package.json +64 -0
- package/packages/plugin-cloud/plugin.json +338 -0
- package/packages/plugin-cloud/rules/cloud-security-compliance.md +313 -0
- package/packages/plugin-cloud/rules/infrastructure-pipeline.md +128 -0
- package/packages/plugin-cloud/scripts/examples/aws-validate.sh +30 -0
- package/packages/plugin-cloud/scripts/examples/azure-setup.sh +33 -0
- package/packages/plugin-cloud/scripts/examples/gcp-setup.sh +39 -0
- package/packages/plugin-cloud/scripts/examples/k8s-validate.sh +40 -0
- package/packages/plugin-cloud/scripts/examples/terraform-init.sh +26 -0
- package/packages/plugin-core/README.md +274 -0
- package/packages/plugin-core/agents/core/agent-manager.md +296 -0
- package/packages/plugin-core/agents/core/code-analyzer.md +131 -0
- package/packages/plugin-core/agents/core/file-analyzer.md +162 -0
- package/packages/plugin-core/agents/core/test-runner.md +200 -0
- package/packages/plugin-core/commands/code-rabbit.md +128 -0
- package/packages/plugin-core/commands/prompt.md +9 -0
- package/packages/plugin-core/commands/re-init.md +9 -0
- package/packages/plugin-core/hooks/context7-reminder.md +29 -0
- package/packages/plugin-core/hooks/enforce-agents.js +125 -0
- package/packages/plugin-core/hooks/enforce-agents.sh +35 -0
- package/packages/plugin-core/hooks/pre-agent-context7.js +224 -0
- package/packages/plugin-core/hooks/pre-command-context7.js +229 -0
- package/packages/plugin-core/hooks/strict-enforce-agents.sh +39 -0
- package/packages/plugin-core/hooks/test-hook.sh +21 -0
- package/packages/plugin-core/hooks/unified-context7-enforcement.sh +38 -0
- package/packages/plugin-core/package.json +45 -0
- package/packages/plugin-core/plugin.json +387 -0
- package/packages/plugin-core/rules/agent-coordination.md +549 -0
- package/packages/plugin-core/rules/agent-mandatory.md +170 -0
- package/packages/plugin-core/rules/ai-integration-patterns.md +219 -0
- package/packages/plugin-core/rules/command-pipelines.md +208 -0
- package/packages/plugin-core/rules/context-optimization.md +176 -0
- package/packages/plugin-core/rules/context7-enforcement.md +327 -0
- package/packages/plugin-core/rules/datetime.md +122 -0
- package/packages/plugin-core/rules/definition-of-done.md +272 -0
- package/packages/plugin-core/rules/development-environments.md +19 -0
- package/packages/plugin-core/rules/development-workflow.md +198 -0
- package/packages/plugin-core/rules/framework-path-rules.md +180 -0
- package/packages/plugin-core/rules/frontmatter-operations.md +64 -0
- package/packages/plugin-core/rules/git-strategy.md +237 -0
- package/packages/plugin-core/rules/golden-rules.md +181 -0
- package/packages/plugin-core/rules/naming-conventions.md +111 -0
- package/packages/plugin-core/rules/no-pr-workflow.md +183 -0
- package/packages/plugin-core/rules/performance-guidelines.md +403 -0
- package/packages/plugin-core/rules/pipeline-mandatory.md +109 -0
- package/packages/plugin-core/rules/security-checklist.md +318 -0
- package/packages/plugin-core/rules/standard-patterns.md +197 -0
- package/packages/plugin-core/rules/strip-frontmatter.md +85 -0
- package/packages/plugin-core/rules/tdd.enforcement.md +103 -0
- package/packages/plugin-core/rules/use-ast-grep.md +113 -0
- package/packages/plugin-core/scripts/lib/datetime-utils.sh +254 -0
- package/packages/plugin-core/scripts/lib/frontmatter-utils.sh +294 -0
- package/packages/plugin-core/scripts/lib/github-utils.sh +221 -0
- package/packages/plugin-core/scripts/lib/logging-utils.sh +199 -0
- package/packages/plugin-core/scripts/lib/validation-utils.sh +339 -0
- package/packages/plugin-core/scripts/mcp/add.sh +7 -0
- package/packages/plugin-core/scripts/mcp/disable.sh +12 -0
- package/packages/plugin-core/scripts/mcp/enable.sh +12 -0
- package/packages/plugin-core/scripts/mcp/list.sh +7 -0
- package/packages/plugin-core/scripts/mcp/sync.sh +8 -0
- package/packages/plugin-data/README.md +315 -0
- package/packages/plugin-data/agents/airflow-orchestration-expert.md +158 -0
- package/packages/plugin-data/agents/kedro-pipeline-expert.md +304 -0
- package/packages/plugin-data/agents/langgraph-workflow-expert.md +530 -0
- package/packages/plugin-data/commands/airflow-dag-scaffold.md +413 -0
- package/packages/plugin-data/commands/kafka-pipeline-scaffold.md +503 -0
- package/packages/plugin-data/package.json +66 -0
- package/packages/plugin-data/plugin.json +294 -0
- package/packages/plugin-data/rules/data-quality-standards.md +373 -0
- package/packages/plugin-data/rules/etl-pipeline-standards.md +255 -0
- package/packages/plugin-data/scripts/examples/airflow-dag-example.py +245 -0
- package/packages/plugin-data/scripts/examples/dbt-transform-example.sql +238 -0
- package/packages/plugin-data/scripts/examples/kafka-streaming-example.py +257 -0
- package/packages/plugin-data/scripts/examples/pandas-etl-example.py +332 -0
- package/packages/plugin-databases/README.md +330 -0
- package/packages/plugin-databases/agents/README.md +50 -0
- package/packages/plugin-databases/agents/bigquery-expert.md +401 -0
- package/packages/plugin-databases/agents/cosmosdb-expert.md +375 -0
- package/packages/plugin-databases/agents/mongodb-expert.md +407 -0
- package/packages/plugin-databases/agents/postgresql-expert.md +329 -0
- package/packages/plugin-databases/agents/redis-expert.md +74 -0
- package/packages/plugin-databases/commands/db-optimize.md +612 -0
- package/packages/plugin-databases/package.json +60 -0
- package/packages/plugin-databases/plugin.json +237 -0
- package/packages/plugin-databases/rules/database-management-strategy.md +146 -0
- package/packages/plugin-databases/rules/database-pipeline.md +316 -0
- package/packages/plugin-databases/scripts/examples/bigquery-cost-analyze.sh +160 -0
- package/packages/plugin-databases/scripts/examples/cosmosdb-ru-optimize.sh +163 -0
- package/packages/plugin-databases/scripts/examples/mongodb-shard-check.sh +120 -0
- package/packages/plugin-databases/scripts/examples/postgres-index-analyze.sh +95 -0
- package/packages/plugin-databases/scripts/examples/redis-cache-stats.sh +121 -0
- package/packages/plugin-devops/README.md +367 -0
- package/packages/plugin-devops/agents/README.md +52 -0
- package/packages/plugin-devops/agents/azure-devops-specialist.md +308 -0
- package/packages/plugin-devops/agents/docker-containerization-expert.md +298 -0
- package/packages/plugin-devops/agents/github-operations-specialist.md +335 -0
- package/packages/plugin-devops/agents/mcp-context-manager.md +319 -0
- package/packages/plugin-devops/agents/observability-engineer.md +574 -0
- package/packages/plugin-devops/agents/ssh-operations-expert.md +1093 -0
- package/packages/plugin-devops/agents/traefik-proxy-expert.md +444 -0
- package/packages/plugin-devops/commands/ci-pipeline-create.md +581 -0
- package/packages/plugin-devops/commands/docker-optimize.md +493 -0
- package/packages/plugin-devops/commands/workflow-create.md +42 -0
- package/packages/plugin-devops/hooks/pre-docker-build.js +472 -0
- package/packages/plugin-devops/package.json +61 -0
- package/packages/plugin-devops/plugin.json +302 -0
- package/packages/plugin-devops/rules/ci-cd-kubernetes-strategy.md +25 -0
- package/packages/plugin-devops/rules/devops-troubleshooting-playbook.md +450 -0
- package/packages/plugin-devops/rules/docker-first-development.md +404 -0
- package/packages/plugin-devops/rules/github-operations.md +92 -0
- package/packages/plugin-devops/scripts/examples/docker-build-multistage.sh +43 -0
- package/packages/plugin-devops/scripts/examples/docker-compose-validate.sh +74 -0
- package/packages/plugin-devops/scripts/examples/github-workflow-validate.sh +48 -0
- package/packages/plugin-devops/scripts/examples/prometheus-health-check.sh +58 -0
- package/packages/plugin-devops/scripts/examples/ssh-key-setup.sh +74 -0
- package/packages/plugin-frameworks/README.md +309 -0
- package/packages/plugin-frameworks/agents/README.md +64 -0
- package/packages/plugin-frameworks/agents/e2e-test-engineer.md +579 -0
- package/packages/plugin-frameworks/agents/nats-messaging-expert.md +254 -0
- package/packages/plugin-frameworks/agents/react-frontend-engineer.md +393 -0
- package/packages/plugin-frameworks/agents/react-ui-expert.md +226 -0
- package/packages/plugin-frameworks/agents/tailwindcss-expert.md +1021 -0
- package/packages/plugin-frameworks/agents/ux-design-expert.md +244 -0
- package/packages/plugin-frameworks/commands/app-scaffold.md +50 -0
- package/packages/plugin-frameworks/commands/nextjs-optimize.md +692 -0
- package/packages/plugin-frameworks/commands/react-optimize.md +583 -0
- package/packages/plugin-frameworks/commands/tailwind-system.md +64 -0
- package/packages/plugin-frameworks/package.json +59 -0
- package/packages/plugin-frameworks/plugin.json +224 -0
- package/packages/plugin-frameworks/rules/performance-guidelines.md +403 -0
- package/packages/plugin-frameworks/rules/ui-development-standards.md +281 -0
- package/packages/plugin-frameworks/rules/ui-framework-rules.md +151 -0
- package/packages/plugin-frameworks/scripts/examples/react-component-perf.sh +34 -0
- package/packages/plugin-frameworks/scripts/examples/tailwind-optimize.sh +44 -0
- package/packages/plugin-frameworks/scripts/examples/vue-composition-check.sh +41 -0
- package/packages/plugin-languages/README.md +333 -0
- package/packages/plugin-languages/agents/README.md +50 -0
- package/packages/plugin-languages/agents/bash-scripting-expert.md +541 -0
- package/packages/plugin-languages/agents/javascript-frontend-engineer.md +197 -0
- package/packages/plugin-languages/agents/nodejs-backend-engineer.md +226 -0
- package/packages/plugin-languages/agents/python-backend-engineer.md +214 -0
- package/packages/plugin-languages/agents/python-backend-expert.md +289 -0
- package/packages/plugin-languages/commands/javascript-optimize.md +636 -0
- package/packages/plugin-languages/commands/nodejs-api-scaffold.md +341 -0
- package/packages/plugin-languages/commands/nodejs-optimize.md +689 -0
- package/packages/plugin-languages/commands/python-api-scaffold.md +261 -0
- package/packages/plugin-languages/commands/python-optimize.md +593 -0
- package/packages/plugin-languages/package.json +65 -0
- package/packages/plugin-languages/plugin.json +265 -0
- package/packages/plugin-languages/rules/code-quality-standards.md +496 -0
- package/packages/plugin-languages/rules/testing-standards.md +768 -0
- package/packages/plugin-languages/scripts/examples/bash-production-script.sh +520 -0
- package/packages/plugin-languages/scripts/examples/javascript-es6-patterns.js +291 -0
- package/packages/plugin-languages/scripts/examples/nodejs-async-iteration.js +360 -0
- package/packages/plugin-languages/scripts/examples/python-async-patterns.py +289 -0
- package/packages/plugin-languages/scripts/examples/typescript-patterns.ts +432 -0
- package/packages/plugin-ml/README.md +430 -0
- package/packages/plugin-ml/agents/automl-expert.md +326 -0
- package/packages/plugin-ml/agents/computer-vision-expert.md +550 -0
- package/packages/plugin-ml/agents/gradient-boosting-expert.md +455 -0
- package/packages/plugin-ml/agents/neural-network-architect.md +1228 -0
- package/packages/plugin-ml/agents/nlp-transformer-expert.md +584 -0
- package/packages/plugin-ml/agents/pytorch-expert.md +412 -0
- package/packages/plugin-ml/agents/reinforcement-learning-expert.md +2088 -0
- package/packages/plugin-ml/agents/scikit-learn-expert.md +228 -0
- package/packages/plugin-ml/agents/tensorflow-keras-expert.md +509 -0
- package/packages/plugin-ml/agents/time-series-expert.md +303 -0
- package/packages/plugin-ml/commands/ml-automl.md +572 -0
- package/packages/plugin-ml/commands/ml-train-optimize.md +657 -0
- package/packages/plugin-ml/package.json +52 -0
- package/packages/plugin-ml/plugin.json +338 -0
- package/packages/plugin-pm/README.md +368 -0
- package/packages/plugin-pm/claudeautopm-plugin-pm-2.0.0.tgz +0 -0
- package/packages/plugin-pm/commands/azure/COMMANDS.md +107 -0
- package/packages/plugin-pm/commands/azure/COMMAND_MAPPING.md +252 -0
- package/packages/plugin-pm/commands/azure/INTEGRATION_FIX.md +103 -0
- package/packages/plugin-pm/commands/azure/README.md +246 -0
- package/packages/plugin-pm/commands/azure/active-work.md +198 -0
- package/packages/plugin-pm/commands/azure/aliases.md +143 -0
- package/packages/plugin-pm/commands/azure/blocked-items.md +287 -0
- package/packages/plugin-pm/commands/azure/clean.md +93 -0
- package/packages/plugin-pm/commands/azure/docs-query.md +48 -0
- package/packages/plugin-pm/commands/azure/feature-decompose.md +380 -0
- package/packages/plugin-pm/commands/azure/feature-list.md +61 -0
- package/packages/plugin-pm/commands/azure/feature-new.md +115 -0
- package/packages/plugin-pm/commands/azure/feature-show.md +205 -0
- package/packages/plugin-pm/commands/azure/feature-start.md +130 -0
- package/packages/plugin-pm/commands/azure/fix-integration-example.md +93 -0
- package/packages/plugin-pm/commands/azure/help.md +150 -0
- package/packages/plugin-pm/commands/azure/import-us.md +269 -0
- package/packages/plugin-pm/commands/azure/init.md +211 -0
- package/packages/plugin-pm/commands/azure/next-task.md +262 -0
- package/packages/plugin-pm/commands/azure/search.md +160 -0
- package/packages/plugin-pm/commands/azure/sprint-status.md +235 -0
- package/packages/plugin-pm/commands/azure/standup.md +260 -0
- package/packages/plugin-pm/commands/azure/sync-all.md +99 -0
- package/packages/plugin-pm/commands/azure/task-analyze.md +186 -0
- package/packages/plugin-pm/commands/azure/task-close.md +329 -0
- package/packages/plugin-pm/commands/azure/task-edit.md +145 -0
- package/packages/plugin-pm/commands/azure/task-list.md +263 -0
- package/packages/plugin-pm/commands/azure/task-new.md +84 -0
- package/packages/plugin-pm/commands/azure/task-reopen.md +79 -0
- package/packages/plugin-pm/commands/azure/task-show.md +126 -0
- package/packages/plugin-pm/commands/azure/task-start.md +301 -0
- package/packages/plugin-pm/commands/azure/task-status.md +65 -0
- package/packages/plugin-pm/commands/azure/task-sync.md +67 -0
- package/packages/plugin-pm/commands/azure/us-edit.md +164 -0
- package/packages/plugin-pm/commands/azure/us-list.md +202 -0
- package/packages/plugin-pm/commands/azure/us-new.md +265 -0
- package/packages/plugin-pm/commands/azure/us-parse.md +253 -0
- package/packages/plugin-pm/commands/azure/us-show.md +188 -0
- package/packages/plugin-pm/commands/azure/us-status.md +320 -0
- package/packages/plugin-pm/commands/azure/validate.md +86 -0
- package/packages/plugin-pm/commands/azure/work-item-sync.md +47 -0
- package/packages/plugin-pm/commands/blocked.md +28 -0
- package/packages/plugin-pm/commands/clean.md +119 -0
- package/packages/plugin-pm/commands/context-create.md +136 -0
- package/packages/plugin-pm/commands/context-prime.md +170 -0
- package/packages/plugin-pm/commands/context-update.md +292 -0
- package/packages/plugin-pm/commands/context.md +28 -0
- package/packages/plugin-pm/commands/epic-close.md +86 -0
- package/packages/plugin-pm/commands/epic-decompose.md +370 -0
- package/packages/plugin-pm/commands/epic-edit.md +83 -0
- package/packages/plugin-pm/commands/epic-list.md +30 -0
- package/packages/plugin-pm/commands/epic-merge.md +222 -0
- package/packages/plugin-pm/commands/epic-oneshot.md +119 -0
- package/packages/plugin-pm/commands/epic-refresh.md +119 -0
- package/packages/plugin-pm/commands/epic-show.md +28 -0
- package/packages/plugin-pm/commands/epic-split.md +120 -0
- package/packages/plugin-pm/commands/epic-start.md +195 -0
- package/packages/plugin-pm/commands/epic-status.md +28 -0
- package/packages/plugin-pm/commands/epic-sync-modular.md +338 -0
- package/packages/plugin-pm/commands/epic-sync-original.md +473 -0
- package/packages/plugin-pm/commands/epic-sync.md +486 -0
- package/packages/plugin-pm/commands/github/workflow-create.md +42 -0
- package/packages/plugin-pm/commands/help.md +28 -0
- package/packages/plugin-pm/commands/import.md +115 -0
- package/packages/plugin-pm/commands/in-progress.md +28 -0
- package/packages/plugin-pm/commands/init.md +28 -0
- package/packages/plugin-pm/commands/issue-analyze.md +202 -0
- package/packages/plugin-pm/commands/issue-close.md +119 -0
- package/packages/plugin-pm/commands/issue-edit.md +93 -0
- package/packages/plugin-pm/commands/issue-reopen.md +87 -0
- package/packages/plugin-pm/commands/issue-show.md +41 -0
- package/packages/plugin-pm/commands/issue-start.md +234 -0
- package/packages/plugin-pm/commands/issue-status.md +95 -0
- package/packages/plugin-pm/commands/issue-sync.md +411 -0
- package/packages/plugin-pm/commands/next.md +28 -0
- package/packages/plugin-pm/commands/prd-edit.md +82 -0
- package/packages/plugin-pm/commands/prd-list.md +28 -0
- package/packages/plugin-pm/commands/prd-new.md +55 -0
- package/packages/plugin-pm/commands/prd-parse.md +42 -0
- package/packages/plugin-pm/commands/prd-status.md +28 -0
- package/packages/plugin-pm/commands/search.md +28 -0
- package/packages/plugin-pm/commands/standup.md +28 -0
- package/packages/plugin-pm/commands/status.md +28 -0
- package/packages/plugin-pm/commands/sync.md +99 -0
- package/packages/plugin-pm/commands/test-reference-update.md +151 -0
- package/packages/plugin-pm/commands/validate.md +28 -0
- package/packages/plugin-pm/commands/what-next.md +28 -0
- package/packages/plugin-pm/package.json +57 -0
- package/packages/plugin-pm/plugin.json +503 -0
- package/packages/plugin-pm/scripts/pm/analytics.js +425 -0
- package/packages/plugin-pm/scripts/pm/blocked.js +164 -0
- package/packages/plugin-pm/scripts/pm/blocked.sh +78 -0
- package/packages/plugin-pm/scripts/pm/clean.js +464 -0
- package/packages/plugin-pm/scripts/pm/context-create.js +216 -0
- package/packages/plugin-pm/scripts/pm/context-prime.js +335 -0
- package/packages/plugin-pm/scripts/pm/context-update.js +344 -0
- package/packages/plugin-pm/scripts/pm/context.js +338 -0
- package/packages/plugin-pm/scripts/pm/epic-close.js +347 -0
- package/packages/plugin-pm/scripts/pm/epic-edit.js +382 -0
- package/packages/plugin-pm/scripts/pm/epic-list.js +273 -0
- package/packages/plugin-pm/scripts/pm/epic-list.sh +109 -0
- package/packages/plugin-pm/scripts/pm/epic-show.js +291 -0
- package/packages/plugin-pm/scripts/pm/epic-show.sh +105 -0
- package/packages/plugin-pm/scripts/pm/epic-split.js +522 -0
- package/packages/plugin-pm/scripts/pm/epic-start/epic-start.js +183 -0
- package/packages/plugin-pm/scripts/pm/epic-start/epic-start.sh +94 -0
- package/packages/plugin-pm/scripts/pm/epic-status.js +291 -0
- package/packages/plugin-pm/scripts/pm/epic-status.sh +104 -0
- package/packages/plugin-pm/scripts/pm/epic-sync/README.md +208 -0
- package/packages/plugin-pm/scripts/pm/epic-sync/create-epic-issue.sh +77 -0
- package/packages/plugin-pm/scripts/pm/epic-sync/create-task-issues.sh +86 -0
- package/packages/plugin-pm/scripts/pm/epic-sync/update-epic-file.sh +79 -0
- package/packages/plugin-pm/scripts/pm/epic-sync/update-references.sh +89 -0
- package/packages/plugin-pm/scripts/pm/epic-sync.sh +137 -0
- package/packages/plugin-pm/scripts/pm/help.js +92 -0
- package/packages/plugin-pm/scripts/pm/help.sh +90 -0
- package/packages/plugin-pm/scripts/pm/in-progress.js +178 -0
- package/packages/plugin-pm/scripts/pm/in-progress.sh +93 -0
- package/packages/plugin-pm/scripts/pm/init.js +321 -0
- package/packages/plugin-pm/scripts/pm/init.sh +178 -0
- package/packages/plugin-pm/scripts/pm/issue-close.js +232 -0
- package/packages/plugin-pm/scripts/pm/issue-edit.js +310 -0
- package/packages/plugin-pm/scripts/pm/issue-show.js +272 -0
- package/packages/plugin-pm/scripts/pm/issue-start.js +181 -0
- package/packages/plugin-pm/scripts/pm/issue-sync/format-comment.sh +468 -0
- package/packages/plugin-pm/scripts/pm/issue-sync/gather-updates.sh +460 -0
- package/packages/plugin-pm/scripts/pm/issue-sync/post-comment.sh +330 -0
- package/packages/plugin-pm/scripts/pm/issue-sync/preflight-validation.sh +348 -0
- package/packages/plugin-pm/scripts/pm/issue-sync/update-frontmatter.sh +387 -0
- package/packages/plugin-pm/scripts/pm/lib/README.md +85 -0
- package/packages/plugin-pm/scripts/pm/lib/epic-discovery.js +119 -0
- package/packages/plugin-pm/scripts/pm/lib/logger.js +78 -0
- package/packages/plugin-pm/scripts/pm/next.js +189 -0
- package/packages/plugin-pm/scripts/pm/next.sh +72 -0
- package/packages/plugin-pm/scripts/pm/optimize.js +407 -0
- package/packages/plugin-pm/scripts/pm/pr-create.js +337 -0
- package/packages/plugin-pm/scripts/pm/pr-list.js +257 -0
- package/packages/plugin-pm/scripts/pm/prd-list.js +242 -0
- package/packages/plugin-pm/scripts/pm/prd-list.sh +103 -0
- package/packages/plugin-pm/scripts/pm/prd-new.js +684 -0
- package/packages/plugin-pm/scripts/pm/prd-parse.js +547 -0
- package/packages/plugin-pm/scripts/pm/prd-status.js +152 -0
- package/packages/plugin-pm/scripts/pm/prd-status.sh +63 -0
- package/packages/plugin-pm/scripts/pm/release.js +460 -0
- package/packages/plugin-pm/scripts/pm/search.js +192 -0
- package/packages/plugin-pm/scripts/pm/search.sh +89 -0
- package/packages/plugin-pm/scripts/pm/standup.js +362 -0
- package/packages/plugin-pm/scripts/pm/standup.sh +95 -0
- package/packages/plugin-pm/scripts/pm/status.js +148 -0
- package/packages/plugin-pm/scripts/pm/status.sh +59 -0
- package/packages/plugin-pm/scripts/pm/sync-batch.js +337 -0
- package/packages/plugin-pm/scripts/pm/sync.js +343 -0
- package/packages/plugin-pm/scripts/pm/template-list.js +141 -0
- package/packages/plugin-pm/scripts/pm/template-new.js +366 -0
- package/packages/plugin-pm/scripts/pm/validate.js +274 -0
- package/packages/plugin-pm/scripts/pm/validate.sh +106 -0
- package/packages/plugin-pm/scripts/pm/what-next.js +660 -0
- package/packages/plugin-testing/README.md +401 -0
- package/packages/plugin-testing/agents/frontend-testing-engineer.md +768 -0
- package/packages/plugin-testing/commands/jest-optimize.md +800 -0
- package/packages/plugin-testing/commands/playwright-optimize.md +887 -0
- package/packages/plugin-testing/commands/test-coverage.md +512 -0
- package/packages/plugin-testing/commands/test-performance.md +1041 -0
- package/packages/plugin-testing/commands/test-setup.md +414 -0
- package/packages/plugin-testing/package.json +40 -0
- package/packages/plugin-testing/plugin.json +197 -0
- package/packages/plugin-testing/rules/test-coverage-requirements.md +581 -0
- package/packages/plugin-testing/rules/testing-standards.md +529 -0
- package/packages/plugin-testing/scripts/examples/react-testing-example.test.jsx +460 -0
- package/packages/plugin-testing/scripts/examples/vitest-config-example.js +352 -0
- package/packages/plugin-testing/scripts/examples/vue-testing-example.test.js +586 -0
@@ -0,0 +1,807 @@
---
allowed-tools: Bash, Read, Write, LS
---

# langchain:optimize

Optimize LangChain applications with Context7-verified LCEL patterns, async operations, caching, and RAG optimization strategies.

## Description

Comprehensive LangChain optimization following official best practices:
- LangChain Expression Language (LCEL) patterns
- Async/streaming operations
- Chain composition and optimization
- Memory and caching strategies
- RAG pipeline optimization
- Vector store performance tuning
- Agent and tool optimization

## Required Documentation Access

**MANDATORY:** Before optimization, query Context7 for LangChain best practices:

**Documentation Queries:**
- `mcp://context7/langchain-ai/langchain` - LangChain core documentation
- `mcp://context7/websites/python_langchain` - Python LangChain patterns
- `mcp://context7/langchain/lcel-patterns` - LCEL composition patterns
- `mcp://context7/langchain/async-streaming` - Async and streaming optimization
- `mcp://context7/langchain/caching-strategies` - Cache optimization
- `mcp://context7/langchain/rag-optimization` - RAG pipeline tuning

**Why This is Required:**
- Ensures optimization follows official LangChain documentation
- Applies proven LCEL composition patterns
- Validates async and streaming strategies
- Prevents performance bottlenecks
- Optimizes memory and cache usage
- Implements best practices for production RAG

## Usage

```bash
/langchain:optimize [options]
```

## Options

- `--scope <lcel|async|caching|rag|agents|all>` - Optimization scope (default: all)
- `--analyze-only` - Analyze without applying changes
- `--output <file>` - Write optimization report

## Examples

### Full LangChain Optimization
```bash
/langchain:optimize
```

### LCEL Patterns Only
```bash
/langchain:optimize --scope lcel
```

### RAG Optimization
```bash
/langchain:optimize --scope rag
```

### Analyze Current System
```bash
/langchain:optimize --analyze-only --output langchain-report.md
```

## Optimization Categories

### 1. LCEL Pattern Optimization (Context7-Verified)

**Pattern from Context7 (/websites/python_langchain):**

#### Basic LCEL Chain
```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser

# LCEL chain composition (declarative, optimized)
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("user", "{input}")
])

model = ChatOpenAI(model="gpt-4o-mini")

output_parser = StrOutputParser()

# Chain with | operator (LCEL)
chain = prompt | model | output_parser

# Invoke chain
result = chain.invoke({"input": "What is LangChain?"})
print(result)
```

**Benefits of LCEL:**
- Automatic async/streaming support
- Built-in retry and fallback logic
- Optimized parallel execution
- Type safety and validation
- Easier debugging and monitoring
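Conceptually, the `|` operator composes Runnables so that each component's output becomes the next one's input. A minimal stdlib sketch of that idea (a hypothetical `Pipe` class, not LangChain's actual implementation):

```python
class Pipe:
    """Minimal stand-in for a Runnable: wraps a function, supports `|` composition."""
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # Compose: run self first, feed its output into `other`
        return Pipe(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

# Three toy "stages" mirroring prompt | model | parser
prompt = Pipe(lambda d: f"Q: {d['input']}")
model = Pipe(lambda p: p.upper())    # pretend LLM call
parser = Pipe(lambda s: s.strip())

chain = prompt | model | parser
print(chain.invoke({"input": "hi"}))  # Q: HI
```

LangChain's real Runnables add batching, async, streaming, and tracing on top of this same composition shape.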
#### Complex LCEL Chain with Branching
|
|
112
|
+
```python
|
|
113
|
+
from langchain_core.runnables import RunnableBranch, RunnablePassthrough
|
|
114
|
+
|
|
115
|
+
# Conditional routing based on input
|
|
116
|
+
branch = RunnableBranch(
|
|
117
|
+
(
|
|
118
|
+
lambda x: "code" in x["topic"],
|
|
119
|
+
ChatPromptTemplate.from_template("Explain this code: {input}") | model
|
|
120
|
+
),
|
|
121
|
+
(
|
|
122
|
+
lambda x: "math" in x["topic"],
|
|
123
|
+
ChatPromptTemplate.from_template("Solve this problem: {input}") | model
|
|
124
|
+
),
|
|
125
|
+
ChatPromptTemplate.from_template("Answer: {input}") | model # Default
|
|
126
|
+
)
|
|
127
|
+
|
|
128
|
+
# Chain with routing
|
|
129
|
+
chain = (
|
|
130
|
+
RunnablePassthrough.assign(topic=lambda x: x["input"].lower())
|
|
131
|
+
| branch
|
|
132
|
+
| StrOutputParser()
|
|
133
|
+
)
|
|
134
|
+
|
|
135
|
+
result = chain.invoke({"input": "Explain this Python code: def f(x): return x*2"})
|
|
136
|
+
# Routes to code explanation prompt
|
|
137
|
+
```
|
|
138
|
+
|
|
139
|
+
**Performance Impact:**
|
|
140
|
+
- LCEL chains: Automatic optimization and parallelization
|
|
141
|
+
- Manual chains: Require explicit async handling
|
|
142
|
+
- LCEL: 3x faster with parallel operations
|
|
143
|
+
|
|
144
|
+
#### LCEL with Fallbacks and Retries
|
|
145
|
+
```python
|
|
146
|
+
from langchain_core.runnables import RunnableWithFallbacks
|
|
147
|
+
|
|
148
|
+
# Primary and fallback models
|
|
149
|
+
primary = ChatOpenAI(model="gpt-4o", temperature=0)
|
|
150
|
+
fallback = ChatOpenAI(model="gpt-4o-mini", temperature=0)
|
|
151
|
+
|
|
152
|
+
# Chain with automatic fallback
|
|
153
|
+
chain_with_fallback = (
|
|
154
|
+
prompt
|
|
155
|
+
| primary.with_fallbacks([fallback])
|
|
156
|
+
| StrOutputParser()
|
|
157
|
+
)
|
|
158
|
+
|
|
159
|
+
# Automatically falls back to gpt-4o-mini if gpt-4o fails
|
|
160
|
+
result = chain_with_fallback.invoke({"input": "Explain AI"})
|
|
161
|
+
```
|
|
162
|
+
|
|
163
|
+
**Benefits:**
|
|
164
|
+
- Automatic failover to cheaper model
|
|
165
|
+
- Built-in retry logic
|
|
166
|
+
- 99.9% uptime vs 95% without fallback
|
|
167
|
+
- Cost optimization (use expensive model only when needed)

### 2. Async and Streaming Optimization (Context7-Verified)

**Pattern from Context7 (/websites/python_langchain):**

#### Async Chain Execution
```python
import asyncio
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# Async chain
async def async_chain_example():
    prompt = ChatPromptTemplate.from_template("Tell me about {topic}")
    model = ChatOpenAI(model="gpt-4o-mini")

    chain = prompt | model

    # Async invoke
    result = await chain.ainvoke({"topic": "quantum computing"})
    print(result.content)

asyncio.run(async_chain_example())
```

#### Async Batch Processing
```python
# Process multiple inputs concurrently
async def batch_process(topics: list[str]):
    prompt = ChatPromptTemplate.from_template("Explain {topic} in one sentence")
    model = ChatOpenAI(model="gpt-4o-mini")

    chain = prompt | model | StrOutputParser()

    # Batch invoke (parallel execution)
    results = await chain.abatch([{"topic": t} for t in topics])
    return results

topics = ["AI", "ML", "DL", "NLP", "CV"]
results = asyncio.run(batch_process(topics))

# Performance:
# Sequential: 5 requests × 2s = 10s
# Async batch: ~2s, bounded by the slowest request (5x faster)
```

**Performance Impact:**
- Sequential: total latency grows O(n) with the number of requests
- Async batch: roughly O(1) total latency (up to provider rate limits)
- 5-10x speedup for multiple requests

#### Streaming Responses
```python
# Stream tokens as they arrive
def stream_example():
    prompt = ChatPromptTemplate.from_template("Write an essay about {topic}")
    model = ChatOpenAI(model="gpt-4o", streaming=True)

    chain = prompt | model

    # Stream response
    for chunk in chain.stream({"topic": "artificial intelligence"}):
        print(chunk.content, end="", flush=True)

# Async streaming
async def async_stream_example():
    prompt = ChatPromptTemplate.from_template("Explain {topic}")
    model = ChatOpenAI(model="gpt-4o", streaming=True)

    chain = prompt | model

    # Async stream
    async for chunk in chain.astream({"topic": "machine learning"}):
        print(chunk.content, end="", flush=True)

asyncio.run(async_stream_example())
```

**Benefits:**
- Time to first token: ~500ms vs ~5s for the full response
- Better UX with progressive rendering
- Lower perceived latency (up to 10x improvement)

### 3. Caching Strategies (Context7-Verified)

**Pattern from Context7 (/websites/python_langchain):**

#### In-Memory Cache
```python
from langchain.globals import set_llm_cache
from langchain.cache import InMemoryCache

# Enable in-memory caching
set_llm_cache(InMemoryCache())

model = ChatOpenAI(model="gpt-4o-mini")

# First call: API request
result1 = model.invoke("What is Python?")  # ~2s

# Second call: served from cache (near-instant)
result2 = model.invoke("What is Python?")  # <1ms
```

**Performance Impact:**
- First call: ~2s (API request)
- Cached calls: <1ms (~2000x faster)
- 100% cost savings for cached queries

#### Redis Cache for Production
```python
from langchain.cache import RedisCache
from redis import Redis

# Redis-backed cache
redis_client = Redis(host="localhost", port=6379)
set_llm_cache(RedisCache(redis_client))

model = ChatOpenAI(model="gpt-4o-mini")

# Cache shared across all server instances
result = model.invoke("Explain AI")  # Cached if queried before
```

**Benefits:**
- Persistent cache across restarts
- Shared cache across multiple servers
- TTL support for automatic expiration
- Production-ready scalability

#### Semantic Cache
```python
from langchain.cache import RedisSemanticCache
from langchain_openai import OpenAIEmbeddings

# Cache based on semantic similarity
embeddings = OpenAIEmbeddings()

set_llm_cache(
    RedisSemanticCache(
        redis_url="redis://localhost:6379",
        embedding=embeddings,
        score_threshold=0.2  # distance threshold: hit if distance < 0.2 (≈ similarity > 0.8)
    )
)

model = ChatOpenAI(model="gpt-4o-mini")

# These queries are semantically similar; the second is served from cache
result1 = model.invoke("What is artificial intelligence?")
result2 = model.invoke("Can you explain AI?")  # Cache hit (similar meaning)
```

**Benefits:**
- Matches semantically similar queries
- 40-60% cache hit rate vs ~20% with exact matching
- Reduces costs even with query variations

### 4. RAG Optimization (Context7-Verified)

**Pattern from Context7 (/websites/python_langchain):**

#### Optimized RAG Chain
```python
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser

# Setup vector store with cached embeddings
from langchain.embeddings import CacheBackedEmbeddings
from langchain_community.storage import RedisStore

store = RedisStore(redis_url="redis://localhost:6379")
underlying_embeddings = OpenAIEmbeddings()

cached_embedder = CacheBackedEmbeddings.from_bytes_store(
    underlying_embeddings,
    store,
    namespace="openai_embeddings"
)

# Create vector store (documents: a list of Documents loaded earlier)
vector_store = FAISS.from_documents(documents, cached_embedder)

# MMR retriever for diversity
retriever = vector_store.as_retriever(
    search_type="mmr",
    search_kwargs={
        "k": 4,
        "fetch_k": 20,
        "lambda_mult": 0.7
    }
)

# RAG prompt
template = """Answer the question based on the following context:

Context: {context}

Question: {question}

Answer:"""

prompt = ChatPromptTemplate.from_template(template)

# RAG chain with LCEL
rag_chain = (
    {
        "context": retriever | (lambda docs: "\n\n".join([d.page_content for d in docs])),
        "question": RunnablePassthrough()
    }
    | prompt
    | ChatOpenAI(model="gpt-4o-mini")
    | StrOutputParser()
)

# Query
result = rag_chain.invoke("What is machine learning?")
```

**Performance Optimizations:**
- Cached embeddings: 59x faster (17ms vs 1s)
- MMR retrieval: 40% better diversity
- FAISS vector store: 50x faster than linear search
- LCEL composition: automatic parallelization

#### Multi-Query RAG
```python
from langchain.retrievers import MultiQueryRetriever
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(temperature=0)

# Generate multiple queries from a single query
multi_query_retriever = MultiQueryRetriever.from_llm(
    retriever=vector_store.as_retriever(),
    llm=llm
)

# Enhanced RAG chain
rag_chain = (
    {
        "context": multi_query_retriever | (lambda docs: "\n\n".join([d.page_content for d in docs])),
        "question": RunnablePassthrough()
    }
    | prompt
    | ChatOpenAI(model="gpt-4o-mini")
    | StrOutputParser()
)

result = rag_chain.invoke("What is deep learning?")
# Generates 3-5 query variants, retrieves for each, merges results
```

**Benefits:**
- 50% better retrieval coverage
- Handles query ambiguity
- Multiple perspectives on the question
- Better answer quality

### 5. Agent Optimization (Context7-Verified)

**Pattern from Context7 (/websites/python_langchain):**

#### Optimized Agent with Tool Caching
```python
from langchain.agents import create_openai_functions_agent, AgentExecutor
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

# Define tools with caching
@tool
def get_weather(location: str) -> str:
    """Get current weather for a location."""
    # Implementation here
    return f"Weather in {location}: Sunny, 72°F"

@tool
def search_web(query: str) -> str:
    """Search the web for information."""
    # Implementation here
    return f"Search results for: {query}"

tools = [get_weather, search_web]

# Agent with caching enabled
llm = ChatOpenAI(model="gpt-4o", temperature=0)

# Cache agent decisions (redis_client as configured in the caching section)
from langchain.globals import set_llm_cache
from langchain.cache import RedisCache
set_llm_cache(RedisCache(redis_client))

# prompt: an agent prompt that includes an `agent_scratchpad` placeholder
agent = create_openai_functions_agent(llm, tools, prompt)

agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    max_iterations=5,
    handle_parsing_errors=True
)

# Execute agent
result = agent_executor.invoke({"input": "What's the weather in San Francisco?"})
```

**Performance Optimizations:**
- Cache agent reasoning: 70% faster on repeated patterns
- Limit max_iterations: prevents infinite loops
- Handle parsing errors: graceful fallback
- Use gpt-4o-mini for simple tool selection

#### Streaming Agent Output
```python
# Stream agent steps
async def stream_agent():
    async for chunk in agent_executor.astream(
        {"input": "Research AI and summarize findings"}
    ):
        if "output" in chunk:
            print(chunk["output"], end="", flush=True)

asyncio.run(stream_agent())
```

**Benefits:**
- Real-time feedback on agent progress
- Better UX for long-running agents
- Visibility into reasoning steps

### 6. Memory Optimization (Context7-Verified)

**Pattern from Context7:**

#### Conversation Buffer Window Memory
```python
from langchain.memory import ConversationBufferWindowMemory
from langchain_core.prompts import MessagesPlaceholder

# Keep only the last 5 exchanges
memory = ConversationBufferWindowMemory(
    k=5,
    return_messages=True,
    memory_key="chat_history"
)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    MessagesPlaceholder(variable_name="chat_history"),
    ("user", "{input}")
])

chain = (
    RunnablePassthrough.assign(
        chat_history=lambda x: memory.load_memory_variables({})["chat_history"]
    )
    | prompt
    | ChatOpenAI(model="gpt-4o-mini")
    | StrOutputParser()
)

# Use chain with memory
result = chain.invoke({"input": "Hello!"})
memory.save_context({"input": "Hello!"}, {"output": result})
```

**Benefits:**
- Prevents context overflow
- Maintains only recent, relevant history
- 80% token savings vs full history
- No API errors from context limits

#### Summary Memory for Long Conversations
```python
from langchain.memory import ConversationSummaryBufferMemory

# Automatically summarize old messages
memory = ConversationSummaryBufferMemory(
    llm=ChatOpenAI(model="gpt-4o-mini"),
    max_token_limit=2000
)

# Old messages are summarized
# Recent messages are kept verbatim
# Total tokens stay under 2000
```

**Performance Impact:**
- Long conversations: 70% token reduction
- Cost savings: 70% fewer input tokens
- Quality maintained: summary preserves context

### 7. Chain Composition Optimization (Context7-Verified)

**Pattern from Context7:**

#### Parallel Chain Execution
```python
from langchain_core.runnables import RunnableParallel
# model as defined in the earlier examples

# Execute multiple chains in parallel
chain1 = ChatPromptTemplate.from_template("Summarize: {text}") | model
chain2 = ChatPromptTemplate.from_template("Extract keywords from: {text}") | model
chain3 = ChatPromptTemplate.from_template("Classify sentiment: {text}") | model

# Parallel execution
parallel_chain = RunnableParallel(
    summary=chain1,
    keywords=chain2,
    sentiment=chain3
)

result = parallel_chain.invoke({"text": "Article content here..."})
# Returns: {"summary": "...", "keywords": "...", "sentiment": "..."}

# Performance:
# Sequential: 3 × 2s = 6s
# Parallel: ~2s, bounded by the slowest chain (3x faster)
```

**Benefits:**
- Automatic parallel execution
- 3-5x faster for independent operations
- Simple composition syntax
- Built-in error handling

## Optimization Output

```
🔗 LangChain Application Optimization Analysis
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Project: LangChain Application
Current Usage: 500 requests/day
Monthly Cost: $450

📊 Current Performance Baseline
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Chain Composition:
- Manual chain construction
- No LCEL patterns
- Sequential execution only

Caching:
- No caching implemented
- Duplicate queries: 40%

RAG:
- No embeddings cache
- Basic similarity search
- Linear vector search

Async:
- Sequential processing only
- No batch operations

⚡ LCEL Pattern Optimization
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Current: Manual chain construction
Recommended: LCEL with | operator

💡 Impact:
- Automatic async/streaming support
- Built-in retry and fallback logic
- 3x faster parallel execution
- Better debugging and monitoring

LCEL chains configured ✓

🚀 Async and Streaming Optimization
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

⚠️ Sequential processing only
Current: 500 sequential requests

💡 Recommendations:
1. Use ainvoke/abatch for async processing → 5-10x faster
2. Enable streaming → 10x better perceived latency
3. Parallel chain execution → 3x speedup

Async patterns configured ✓

⚡ Impact:
- Sequential: 500 × 2s = 1,000s (~17 min)
- Async batch: 500 requests / 50 concurrent = 10 batches × 2s = 20s (50x faster)
- Time to first token: 2s → 500ms (4x faster)

💾 Caching Optimization
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

⚠️ No caching, 40% duplicate queries
Current: 200 duplicate queries/day

💡 Recommendations:
1. Redis cache → 2000x faster for cached queries
2. Semantic cache → 60% cache hit rate
3. Embeddings cache → 59x faster RAG

Redis + semantic caching configured ✓

⚡ Impact:
- Cached queries: 2s → <1ms (2000x faster)
- Cache hit rate: 60% (300 queries/day)
- Cost reduction: 60% fewer API calls
- Monthly savings: $270

🔍 RAG Optimization
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

⚠️ No embeddings cache, basic retrieval
Issues: Slow queries, low diversity

💡 Recommendations:
1. Cache embeddings → 59x faster
2. MMR retrieval → 40% better diversity
3. Multi-query retrieval → 50% better coverage
4. FAISS vector store → 50x faster search

RAG optimizations configured ✓

⚡ Impact:
- Embeddings: 1s → 17ms (59x faster)
- Vector search: 500ms → 10ms (50x faster)
- Retrieval quality: 60% → 85% (42% improvement)
- Total RAG latency: 2s → 100ms (20x faster)

🎯 Summary
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Total Optimizations: 18

🔴 Critical: 5 (LCEL, async, caching, RAG, agents)
🟡 High Impact: 8 (streaming, memory, parallel chains)
🟢 Low Impact: 5 (monitoring, logging)

Performance Improvements:

Latency:
- Async batch: 1,000s → 20s (50x faster)
- Cached queries: 2s → <1ms (2000x faster)
- RAG pipeline: 2s → 100ms (20x faster)
- Time to first token: 2s → 500ms (4x faster)

Cost Savings:
- Response caching: $270/month (60% reduction)
- Embeddings caching: $180/month (40% reduction)
- Combined savings: up to $450/month (upper bound; the two overlap, so
  actual savings depend on realized cache hit rates)

Quality:
- RAG relevance: 60% → 85% (42% improvement)
- Retrieval diversity: +40%
- Cache hit rate: 60%

Run with --apply to implement optimizations
```

## Implementation

This command uses the **@langchain-expert** agent with optimization expertise:

1. Query Context7 for LangChain optimization patterns
2. Analyze current chain composition
3. Convert to LCEL patterns
4. Implement async and streaming
5. Configure caching strategies
6. Optimize RAG pipeline
7. Generate optimized code

## Best Practices Applied

Based on Context7 documentation from `/websites/python_langchain`:

1. **LCEL Composition** - 3x faster with automatic optimization
2. **Async Batch Processing** - 50x faster for multiple requests
3. **Semantic Caching** - 60% cache hit rate, 2000x faster
4. **Embeddings Caching** - 59x faster RAG queries
5. **MMR Retrieval** - 40% better diversity
6. **Streaming** - 4x better perceived latency
7. **Parallel Chains** - 3x faster for independent operations

## Related Commands

- `/rag:optimize` - RAG system optimization
- `/openai:optimize` - OpenAI API optimization
- `/anthropic:optimize` - Anthropic Claude optimization

## Troubleshooting

### Slow Chain Execution
- Convert to LCEL patterns (3x speedup)
- Enable async batch processing (50x speedup)
- Use streaming for long responses

### High Costs
- Enable Redis caching (60% reduction)
- Use semantic cache for query variations
- Cache embeddings for RAG (59x faster)

### Poor RAG Quality
- Use MMR retrieval for diversity
- Implement multi-query retrieval
- Optimize chunk size (~1000 chars)
- Add ~20% chunk overlap

### Memory Issues
- Use ConversationBufferWindowMemory
- Implement summary memory for long chats
- Limit max_tokens per request

## Installation

```bash
# Install LangChain
pip install langchain langchain-openai langchain-community

# Install caching support
pip install redis

# Install vector stores
pip install faiss-cpu

# Install async HTTP support (asyncio ships with the Python standard library)
pip install aiohttp
```

## Version History

- v2.0.0 - Initial Schema v2.0 release with Context7 integration
  - LCEL pattern optimization for 3x speedup
  - Async batch processing for 50x throughput
  - Redis semantic caching for 60% hit rate
  - RAG optimizations (embeddings cache, MMR, multi-query)
  - Streaming and parallel chain execution
  - Memory optimization patterns