claude-autopm 2.8.2 → 2.8.3
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +399 -637
- package/package.json +2 -1
- package/packages/plugin-ai/LICENSE +21 -0
- package/packages/plugin-ai/README.md +316 -0
- package/packages/plugin-ai/agents/anthropic-claude-expert.md +579 -0
- package/packages/plugin-ai/agents/azure-openai-expert.md +1411 -0
- package/packages/plugin-ai/agents/gemini-api-expert.md +880 -0
- package/packages/plugin-ai/agents/google-a2a-expert.md +1445 -0
- package/packages/plugin-ai/agents/huggingface-expert.md +2131 -0
- package/packages/plugin-ai/agents/langchain-expert.md +1427 -0
- package/packages/plugin-ai/agents/langgraph-workflow-expert.md +520 -0
- package/packages/plugin-ai/agents/openai-python-expert.md +1087 -0
- package/packages/plugin-ai/commands/a2a-setup.md +886 -0
- package/packages/plugin-ai/commands/ai-model-deployment.md +481 -0
- package/packages/plugin-ai/commands/anthropic-optimize.md +793 -0
- package/packages/plugin-ai/commands/huggingface-deploy.md +789 -0
- package/packages/plugin-ai/commands/langchain-optimize.md +807 -0
- package/packages/plugin-ai/commands/llm-optimize.md +348 -0
- package/packages/plugin-ai/commands/openai-optimize.md +863 -0
- package/packages/plugin-ai/commands/rag-optimize.md +841 -0
- package/packages/plugin-ai/commands/rag-setup-scaffold.md +382 -0
- package/packages/plugin-ai/package.json +66 -0
- package/packages/plugin-ai/plugin.json +519 -0
- package/packages/plugin-ai/rules/ai-model-standards.md +449 -0
- package/packages/plugin-ai/rules/prompt-engineering-standards.md +509 -0
- package/packages/plugin-ai/scripts/examples/huggingface-inference-example.py +145 -0
- package/packages/plugin-ai/scripts/examples/langchain-rag-example.py +366 -0
- package/packages/plugin-ai/scripts/examples/mlflow-tracking-example.py +224 -0
- package/packages/plugin-ai/scripts/examples/openai-chat-example.py +425 -0
- package/packages/plugin-cloud/README.md +268 -0
- package/packages/plugin-cloud/agents/README.md +55 -0
- package/packages/plugin-cloud/agents/aws-cloud-architect.md +521 -0
- package/packages/plugin-cloud/agents/azure-cloud-architect.md +436 -0
- package/packages/plugin-cloud/agents/gcp-cloud-architect.md +385 -0
- package/packages/plugin-cloud/agents/gcp-cloud-functions-engineer.md +306 -0
- package/packages/plugin-cloud/agents/gemini-api-expert.md +880 -0
- package/packages/plugin-cloud/agents/kubernetes-orchestrator.md +566 -0
- package/packages/plugin-cloud/agents/openai-python-expert.md +1087 -0
- package/packages/plugin-cloud/agents/terraform-infrastructure-expert.md +454 -0
- package/packages/plugin-cloud/commands/cloud-cost-optimize.md +243 -0
- package/packages/plugin-cloud/commands/cloud-validate.md +196 -0
- package/packages/plugin-cloud/commands/infra-deploy.md +38 -0
- package/packages/plugin-cloud/commands/k8s-deploy.md +37 -0
- package/packages/plugin-cloud/commands/ssh-security.md +65 -0
- package/packages/plugin-cloud/commands/traefik-setup.md +65 -0
- package/packages/plugin-cloud/hooks/pre-cloud-deploy.js +456 -0
- package/packages/plugin-cloud/package.json +64 -0
- package/packages/plugin-cloud/plugin.json +338 -0
- package/packages/plugin-cloud/rules/cloud-security-compliance.md +313 -0
- package/packages/plugin-cloud/rules/infrastructure-pipeline.md +128 -0
- package/packages/plugin-cloud/scripts/examples/aws-validate.sh +30 -0
- package/packages/plugin-cloud/scripts/examples/azure-setup.sh +33 -0
- package/packages/plugin-cloud/scripts/examples/gcp-setup.sh +39 -0
- package/packages/plugin-cloud/scripts/examples/k8s-validate.sh +40 -0
- package/packages/plugin-cloud/scripts/examples/terraform-init.sh +26 -0
- package/packages/plugin-core/README.md +274 -0
- package/packages/plugin-core/agents/core/agent-manager.md +296 -0
- package/packages/plugin-core/agents/core/code-analyzer.md +131 -0
- package/packages/plugin-core/agents/core/file-analyzer.md +162 -0
- package/packages/plugin-core/agents/core/test-runner.md +200 -0
- package/packages/plugin-core/commands/code-rabbit.md +128 -0
- package/packages/plugin-core/commands/prompt.md +9 -0
- package/packages/plugin-core/commands/re-init.md +9 -0
- package/packages/plugin-core/hooks/context7-reminder.md +29 -0
- package/packages/plugin-core/hooks/enforce-agents.js +125 -0
- package/packages/plugin-core/hooks/enforce-agents.sh +35 -0
- package/packages/plugin-core/hooks/pre-agent-context7.js +224 -0
- package/packages/plugin-core/hooks/pre-command-context7.js +229 -0
- package/packages/plugin-core/hooks/strict-enforce-agents.sh +39 -0
- package/packages/plugin-core/hooks/test-hook.sh +21 -0
- package/packages/plugin-core/hooks/unified-context7-enforcement.sh +38 -0
- package/packages/plugin-core/package.json +45 -0
- package/packages/plugin-core/plugin.json +387 -0
- package/packages/plugin-core/rules/agent-coordination.md +549 -0
- package/packages/plugin-core/rules/agent-mandatory.md +170 -0
- package/packages/plugin-core/rules/ai-integration-patterns.md +219 -0
- package/packages/plugin-core/rules/command-pipelines.md +208 -0
- package/packages/plugin-core/rules/context-optimization.md +176 -0
- package/packages/plugin-core/rules/context7-enforcement.md +327 -0
- package/packages/plugin-core/rules/datetime.md +122 -0
- package/packages/plugin-core/rules/definition-of-done.md +272 -0
- package/packages/plugin-core/rules/development-environments.md +19 -0
- package/packages/plugin-core/rules/development-workflow.md +198 -0
- package/packages/plugin-core/rules/framework-path-rules.md +180 -0
- package/packages/plugin-core/rules/frontmatter-operations.md +64 -0
- package/packages/plugin-core/rules/git-strategy.md +237 -0
- package/packages/plugin-core/rules/golden-rules.md +181 -0
- package/packages/plugin-core/rules/naming-conventions.md +111 -0
- package/packages/plugin-core/rules/no-pr-workflow.md +183 -0
- package/packages/plugin-core/rules/performance-guidelines.md +403 -0
- package/packages/plugin-core/rules/pipeline-mandatory.md +109 -0
- package/packages/plugin-core/rules/security-checklist.md +318 -0
- package/packages/plugin-core/rules/standard-patterns.md +197 -0
- package/packages/plugin-core/rules/strip-frontmatter.md +85 -0
- package/packages/plugin-core/rules/tdd.enforcement.md +103 -0
- package/packages/plugin-core/rules/use-ast-grep.md +113 -0
- package/packages/plugin-core/scripts/lib/datetime-utils.sh +254 -0
- package/packages/plugin-core/scripts/lib/frontmatter-utils.sh +294 -0
- package/packages/plugin-core/scripts/lib/github-utils.sh +221 -0
- package/packages/plugin-core/scripts/lib/logging-utils.sh +199 -0
- package/packages/plugin-core/scripts/lib/validation-utils.sh +339 -0
- package/packages/plugin-core/scripts/mcp/add.sh +7 -0
- package/packages/plugin-core/scripts/mcp/disable.sh +12 -0
- package/packages/plugin-core/scripts/mcp/enable.sh +12 -0
- package/packages/plugin-core/scripts/mcp/list.sh +7 -0
- package/packages/plugin-core/scripts/mcp/sync.sh +8 -0
- package/packages/plugin-data/README.md +315 -0
- package/packages/plugin-data/agents/airflow-orchestration-expert.md +158 -0
- package/packages/plugin-data/agents/kedro-pipeline-expert.md +304 -0
- package/packages/plugin-data/agents/langgraph-workflow-expert.md +530 -0
- package/packages/plugin-data/commands/airflow-dag-scaffold.md +413 -0
- package/packages/plugin-data/commands/kafka-pipeline-scaffold.md +503 -0
- package/packages/plugin-data/package.json +66 -0
- package/packages/plugin-data/plugin.json +294 -0
- package/packages/plugin-data/rules/data-quality-standards.md +373 -0
- package/packages/plugin-data/rules/etl-pipeline-standards.md +255 -0
- package/packages/plugin-data/scripts/examples/airflow-dag-example.py +245 -0
- package/packages/plugin-data/scripts/examples/dbt-transform-example.sql +238 -0
- package/packages/plugin-data/scripts/examples/kafka-streaming-example.py +257 -0
- package/packages/plugin-data/scripts/examples/pandas-etl-example.py +332 -0
- package/packages/plugin-databases/README.md +330 -0
- package/packages/plugin-databases/agents/README.md +50 -0
- package/packages/plugin-databases/agents/bigquery-expert.md +401 -0
- package/packages/plugin-databases/agents/cosmosdb-expert.md +375 -0
- package/packages/plugin-databases/agents/mongodb-expert.md +407 -0
- package/packages/plugin-databases/agents/postgresql-expert.md +329 -0
- package/packages/plugin-databases/agents/redis-expert.md +74 -0
- package/packages/plugin-databases/commands/db-optimize.md +612 -0
- package/packages/plugin-databases/package.json +60 -0
- package/packages/plugin-databases/plugin.json +237 -0
- package/packages/plugin-databases/rules/database-management-strategy.md +146 -0
- package/packages/plugin-databases/rules/database-pipeline.md +316 -0
- package/packages/plugin-databases/scripts/examples/bigquery-cost-analyze.sh +160 -0
- package/packages/plugin-databases/scripts/examples/cosmosdb-ru-optimize.sh +163 -0
- package/packages/plugin-databases/scripts/examples/mongodb-shard-check.sh +120 -0
- package/packages/plugin-databases/scripts/examples/postgres-index-analyze.sh +95 -0
- package/packages/plugin-databases/scripts/examples/redis-cache-stats.sh +121 -0
- package/packages/plugin-devops/README.md +367 -0
- package/packages/plugin-devops/agents/README.md +52 -0
- package/packages/plugin-devops/agents/azure-devops-specialist.md +308 -0
- package/packages/plugin-devops/agents/docker-containerization-expert.md +298 -0
- package/packages/plugin-devops/agents/github-operations-specialist.md +335 -0
- package/packages/plugin-devops/agents/mcp-context-manager.md +319 -0
- package/packages/plugin-devops/agents/observability-engineer.md +574 -0
- package/packages/plugin-devops/agents/ssh-operations-expert.md +1093 -0
- package/packages/plugin-devops/agents/traefik-proxy-expert.md +444 -0
- package/packages/plugin-devops/commands/ci-pipeline-create.md +581 -0
- package/packages/plugin-devops/commands/docker-optimize.md +493 -0
- package/packages/plugin-devops/commands/workflow-create.md +42 -0
- package/packages/plugin-devops/hooks/pre-docker-build.js +472 -0
- package/packages/plugin-devops/package.json +61 -0
- package/packages/plugin-devops/plugin.json +302 -0
- package/packages/plugin-devops/rules/ci-cd-kubernetes-strategy.md +25 -0
- package/packages/plugin-devops/rules/devops-troubleshooting-playbook.md +450 -0
- package/packages/plugin-devops/rules/docker-first-development.md +404 -0
- package/packages/plugin-devops/rules/github-operations.md +92 -0
- package/packages/plugin-devops/scripts/examples/docker-build-multistage.sh +43 -0
- package/packages/plugin-devops/scripts/examples/docker-compose-validate.sh +74 -0
- package/packages/plugin-devops/scripts/examples/github-workflow-validate.sh +48 -0
- package/packages/plugin-devops/scripts/examples/prometheus-health-check.sh +58 -0
- package/packages/plugin-devops/scripts/examples/ssh-key-setup.sh +74 -0
- package/packages/plugin-frameworks/README.md +309 -0
- package/packages/plugin-frameworks/agents/README.md +64 -0
- package/packages/plugin-frameworks/agents/e2e-test-engineer.md +579 -0
- package/packages/plugin-frameworks/agents/nats-messaging-expert.md +254 -0
- package/packages/plugin-frameworks/agents/react-frontend-engineer.md +393 -0
- package/packages/plugin-frameworks/agents/react-ui-expert.md +226 -0
- package/packages/plugin-frameworks/agents/tailwindcss-expert.md +1021 -0
- package/packages/plugin-frameworks/agents/ux-design-expert.md +244 -0
- package/packages/plugin-frameworks/commands/app-scaffold.md +50 -0
- package/packages/plugin-frameworks/commands/nextjs-optimize.md +692 -0
- package/packages/plugin-frameworks/commands/react-optimize.md +583 -0
- package/packages/plugin-frameworks/commands/tailwind-system.md +64 -0
- package/packages/plugin-frameworks/package.json +59 -0
- package/packages/plugin-frameworks/plugin.json +224 -0
- package/packages/plugin-frameworks/rules/performance-guidelines.md +403 -0
- package/packages/plugin-frameworks/rules/ui-development-standards.md +281 -0
- package/packages/plugin-frameworks/rules/ui-framework-rules.md +151 -0
- package/packages/plugin-frameworks/scripts/examples/react-component-perf.sh +34 -0
- package/packages/plugin-frameworks/scripts/examples/tailwind-optimize.sh +44 -0
- package/packages/plugin-frameworks/scripts/examples/vue-composition-check.sh +41 -0
- package/packages/plugin-languages/README.md +333 -0
- package/packages/plugin-languages/agents/README.md +50 -0
- package/packages/plugin-languages/agents/bash-scripting-expert.md +541 -0
- package/packages/plugin-languages/agents/javascript-frontend-engineer.md +197 -0
- package/packages/plugin-languages/agents/nodejs-backend-engineer.md +226 -0
- package/packages/plugin-languages/agents/python-backend-engineer.md +214 -0
- package/packages/plugin-languages/agents/python-backend-expert.md +289 -0
- package/packages/plugin-languages/commands/javascript-optimize.md +636 -0
- package/packages/plugin-languages/commands/nodejs-api-scaffold.md +341 -0
- package/packages/plugin-languages/commands/nodejs-optimize.md +689 -0
- package/packages/plugin-languages/commands/python-api-scaffold.md +261 -0
- package/packages/plugin-languages/commands/python-optimize.md +593 -0
- package/packages/plugin-languages/package.json +65 -0
- package/packages/plugin-languages/plugin.json +265 -0
- package/packages/plugin-languages/rules/code-quality-standards.md +496 -0
- package/packages/plugin-languages/rules/testing-standards.md +768 -0
- package/packages/plugin-languages/scripts/examples/bash-production-script.sh +520 -0
- package/packages/plugin-languages/scripts/examples/javascript-es6-patterns.js +291 -0
- package/packages/plugin-languages/scripts/examples/nodejs-async-iteration.js +360 -0
- package/packages/plugin-languages/scripts/examples/python-async-patterns.py +289 -0
- package/packages/plugin-languages/scripts/examples/typescript-patterns.ts +432 -0
- package/packages/plugin-ml/README.md +430 -0
- package/packages/plugin-ml/agents/automl-expert.md +326 -0
- package/packages/plugin-ml/agents/computer-vision-expert.md +550 -0
- package/packages/plugin-ml/agents/gradient-boosting-expert.md +455 -0
- package/packages/plugin-ml/agents/neural-network-architect.md +1228 -0
- package/packages/plugin-ml/agents/nlp-transformer-expert.md +584 -0
- package/packages/plugin-ml/agents/pytorch-expert.md +412 -0
- package/packages/plugin-ml/agents/reinforcement-learning-expert.md +2088 -0
- package/packages/plugin-ml/agents/scikit-learn-expert.md +228 -0
- package/packages/plugin-ml/agents/tensorflow-keras-expert.md +509 -0
- package/packages/plugin-ml/agents/time-series-expert.md +303 -0
- package/packages/plugin-ml/commands/ml-automl.md +572 -0
- package/packages/plugin-ml/commands/ml-train-optimize.md +657 -0
- package/packages/plugin-ml/package.json +52 -0
- package/packages/plugin-ml/plugin.json +338 -0
- package/packages/plugin-pm/README.md +368 -0
- package/packages/plugin-pm/claudeautopm-plugin-pm-2.0.0.tgz +0 -0
- package/packages/plugin-pm/commands/azure/COMMANDS.md +107 -0
- package/packages/plugin-pm/commands/azure/COMMAND_MAPPING.md +252 -0
- package/packages/plugin-pm/commands/azure/INTEGRATION_FIX.md +103 -0
- package/packages/plugin-pm/commands/azure/README.md +246 -0
- package/packages/plugin-pm/commands/azure/active-work.md +198 -0
- package/packages/plugin-pm/commands/azure/aliases.md +143 -0
- package/packages/plugin-pm/commands/azure/blocked-items.md +287 -0
- package/packages/plugin-pm/commands/azure/clean.md +93 -0
- package/packages/plugin-pm/commands/azure/docs-query.md +48 -0
- package/packages/plugin-pm/commands/azure/feature-decompose.md +380 -0
- package/packages/plugin-pm/commands/azure/feature-list.md +61 -0
- package/packages/plugin-pm/commands/azure/feature-new.md +115 -0
- package/packages/plugin-pm/commands/azure/feature-show.md +205 -0
- package/packages/plugin-pm/commands/azure/feature-start.md +130 -0
- package/packages/plugin-pm/commands/azure/fix-integration-example.md +93 -0
- package/packages/plugin-pm/commands/azure/help.md +150 -0
- package/packages/plugin-pm/commands/azure/import-us.md +269 -0
- package/packages/plugin-pm/commands/azure/init.md +211 -0
- package/packages/plugin-pm/commands/azure/next-task.md +262 -0
- package/packages/plugin-pm/commands/azure/search.md +160 -0
- package/packages/plugin-pm/commands/azure/sprint-status.md +235 -0
- package/packages/plugin-pm/commands/azure/standup.md +260 -0
- package/packages/plugin-pm/commands/azure/sync-all.md +99 -0
- package/packages/plugin-pm/commands/azure/task-analyze.md +186 -0
- package/packages/plugin-pm/commands/azure/task-close.md +329 -0
- package/packages/plugin-pm/commands/azure/task-edit.md +145 -0
- package/packages/plugin-pm/commands/azure/task-list.md +263 -0
- package/packages/plugin-pm/commands/azure/task-new.md +84 -0
- package/packages/plugin-pm/commands/azure/task-reopen.md +79 -0
- package/packages/plugin-pm/commands/azure/task-show.md +126 -0
- package/packages/plugin-pm/commands/azure/task-start.md +301 -0
- package/packages/plugin-pm/commands/azure/task-status.md +65 -0
- package/packages/plugin-pm/commands/azure/task-sync.md +67 -0
- package/packages/plugin-pm/commands/azure/us-edit.md +164 -0
- package/packages/plugin-pm/commands/azure/us-list.md +202 -0
- package/packages/plugin-pm/commands/azure/us-new.md +265 -0
- package/packages/plugin-pm/commands/azure/us-parse.md +253 -0
- package/packages/plugin-pm/commands/azure/us-show.md +188 -0
- package/packages/plugin-pm/commands/azure/us-status.md +320 -0
- package/packages/plugin-pm/commands/azure/validate.md +86 -0
- package/packages/plugin-pm/commands/azure/work-item-sync.md +47 -0
- package/packages/plugin-pm/commands/blocked.md +28 -0
- package/packages/plugin-pm/commands/clean.md +119 -0
- package/packages/plugin-pm/commands/context-create.md +136 -0
- package/packages/plugin-pm/commands/context-prime.md +170 -0
- package/packages/plugin-pm/commands/context-update.md +292 -0
- package/packages/plugin-pm/commands/context.md +28 -0
- package/packages/plugin-pm/commands/epic-close.md +86 -0
- package/packages/plugin-pm/commands/epic-decompose.md +370 -0
- package/packages/plugin-pm/commands/epic-edit.md +83 -0
- package/packages/plugin-pm/commands/epic-list.md +30 -0
- package/packages/plugin-pm/commands/epic-merge.md +222 -0
- package/packages/plugin-pm/commands/epic-oneshot.md +119 -0
- package/packages/plugin-pm/commands/epic-refresh.md +119 -0
- package/packages/plugin-pm/commands/epic-show.md +28 -0
- package/packages/plugin-pm/commands/epic-split.md +120 -0
- package/packages/plugin-pm/commands/epic-start.md +195 -0
- package/packages/plugin-pm/commands/epic-status.md +28 -0
- package/packages/plugin-pm/commands/epic-sync-modular.md +338 -0
- package/packages/plugin-pm/commands/epic-sync-original.md +473 -0
- package/packages/plugin-pm/commands/epic-sync.md +486 -0
- package/packages/plugin-pm/commands/github/workflow-create.md +42 -0
- package/packages/plugin-pm/commands/help.md +28 -0
- package/packages/plugin-pm/commands/import.md +115 -0
- package/packages/plugin-pm/commands/in-progress.md +28 -0
- package/packages/plugin-pm/commands/init.md +28 -0
- package/packages/plugin-pm/commands/issue-analyze.md +202 -0
- package/packages/plugin-pm/commands/issue-close.md +119 -0
- package/packages/plugin-pm/commands/issue-edit.md +93 -0
- package/packages/plugin-pm/commands/issue-reopen.md +87 -0
- package/packages/plugin-pm/commands/issue-show.md +41 -0
- package/packages/plugin-pm/commands/issue-start.md +234 -0
- package/packages/plugin-pm/commands/issue-status.md +95 -0
- package/packages/plugin-pm/commands/issue-sync.md +411 -0
- package/packages/plugin-pm/commands/next.md +28 -0
- package/packages/plugin-pm/commands/prd-edit.md +82 -0
- package/packages/plugin-pm/commands/prd-list.md +28 -0
- package/packages/plugin-pm/commands/prd-new.md +55 -0
- package/packages/plugin-pm/commands/prd-parse.md +42 -0
- package/packages/plugin-pm/commands/prd-status.md +28 -0
- package/packages/plugin-pm/commands/search.md +28 -0
- package/packages/plugin-pm/commands/standup.md +28 -0
- package/packages/plugin-pm/commands/status.md +28 -0
- package/packages/plugin-pm/commands/sync.md +99 -0
- package/packages/plugin-pm/commands/test-reference-update.md +151 -0
- package/packages/plugin-pm/commands/validate.md +28 -0
- package/packages/plugin-pm/commands/what-next.md +28 -0
- package/packages/plugin-pm/package.json +57 -0
- package/packages/plugin-pm/plugin.json +503 -0
- package/packages/plugin-pm/scripts/pm/analytics.js +425 -0
- package/packages/plugin-pm/scripts/pm/blocked.js +164 -0
- package/packages/plugin-pm/scripts/pm/blocked.sh +78 -0
- package/packages/plugin-pm/scripts/pm/clean.js +464 -0
- package/packages/plugin-pm/scripts/pm/context-create.js +216 -0
- package/packages/plugin-pm/scripts/pm/context-prime.js +335 -0
- package/packages/plugin-pm/scripts/pm/context-update.js +344 -0
- package/packages/plugin-pm/scripts/pm/context.js +338 -0
- package/packages/plugin-pm/scripts/pm/epic-close.js +347 -0
- package/packages/plugin-pm/scripts/pm/epic-edit.js +382 -0
- package/packages/plugin-pm/scripts/pm/epic-list.js +273 -0
- package/packages/plugin-pm/scripts/pm/epic-list.sh +109 -0
- package/packages/plugin-pm/scripts/pm/epic-show.js +291 -0
- package/packages/plugin-pm/scripts/pm/epic-show.sh +105 -0
- package/packages/plugin-pm/scripts/pm/epic-split.js +522 -0
- package/packages/plugin-pm/scripts/pm/epic-start/epic-start.js +183 -0
- package/packages/plugin-pm/scripts/pm/epic-start/epic-start.sh +94 -0
- package/packages/plugin-pm/scripts/pm/epic-status.js +291 -0
- package/packages/plugin-pm/scripts/pm/epic-status.sh +104 -0
- package/packages/plugin-pm/scripts/pm/epic-sync/README.md +208 -0
- package/packages/plugin-pm/scripts/pm/epic-sync/create-epic-issue.sh +77 -0
- package/packages/plugin-pm/scripts/pm/epic-sync/create-task-issues.sh +86 -0
- package/packages/plugin-pm/scripts/pm/epic-sync/update-epic-file.sh +79 -0
- package/packages/plugin-pm/scripts/pm/epic-sync/update-references.sh +89 -0
- package/packages/plugin-pm/scripts/pm/epic-sync.sh +137 -0
- package/packages/plugin-pm/scripts/pm/help.js +92 -0
- package/packages/plugin-pm/scripts/pm/help.sh +90 -0
- package/packages/plugin-pm/scripts/pm/in-progress.js +178 -0
- package/packages/plugin-pm/scripts/pm/in-progress.sh +93 -0
- package/packages/plugin-pm/scripts/pm/init.js +321 -0
- package/packages/plugin-pm/scripts/pm/init.sh +178 -0
- package/packages/plugin-pm/scripts/pm/issue-close.js +232 -0
- package/packages/plugin-pm/scripts/pm/issue-edit.js +310 -0
- package/packages/plugin-pm/scripts/pm/issue-show.js +272 -0
- package/packages/plugin-pm/scripts/pm/issue-start.js +181 -0
- package/packages/plugin-pm/scripts/pm/issue-sync/format-comment.sh +468 -0
- package/packages/plugin-pm/scripts/pm/issue-sync/gather-updates.sh +460 -0
- package/packages/plugin-pm/scripts/pm/issue-sync/post-comment.sh +330 -0
- package/packages/plugin-pm/scripts/pm/issue-sync/preflight-validation.sh +348 -0
- package/packages/plugin-pm/scripts/pm/issue-sync/update-frontmatter.sh +387 -0
- package/packages/plugin-pm/scripts/pm/lib/README.md +85 -0
- package/packages/plugin-pm/scripts/pm/lib/epic-discovery.js +119 -0
- package/packages/plugin-pm/scripts/pm/lib/logger.js +78 -0
- package/packages/plugin-pm/scripts/pm/next.js +189 -0
- package/packages/plugin-pm/scripts/pm/next.sh +72 -0
- package/packages/plugin-pm/scripts/pm/optimize.js +407 -0
- package/packages/plugin-pm/scripts/pm/pr-create.js +337 -0
- package/packages/plugin-pm/scripts/pm/pr-list.js +257 -0
- package/packages/plugin-pm/scripts/pm/prd-list.js +242 -0
- package/packages/plugin-pm/scripts/pm/prd-list.sh +103 -0
- package/packages/plugin-pm/scripts/pm/prd-new.js +684 -0
- package/packages/plugin-pm/scripts/pm/prd-parse.js +547 -0
- package/packages/plugin-pm/scripts/pm/prd-status.js +152 -0
- package/packages/plugin-pm/scripts/pm/prd-status.sh +63 -0
- package/packages/plugin-pm/scripts/pm/release.js +460 -0
- package/packages/plugin-pm/scripts/pm/search.js +192 -0
- package/packages/plugin-pm/scripts/pm/search.sh +89 -0
- package/packages/plugin-pm/scripts/pm/standup.js +362 -0
- package/packages/plugin-pm/scripts/pm/standup.sh +95 -0
- package/packages/plugin-pm/scripts/pm/status.js +148 -0
- package/packages/plugin-pm/scripts/pm/status.sh +59 -0
- package/packages/plugin-pm/scripts/pm/sync-batch.js +337 -0
- package/packages/plugin-pm/scripts/pm/sync.js +343 -0
- package/packages/plugin-pm/scripts/pm/template-list.js +141 -0
- package/packages/plugin-pm/scripts/pm/template-new.js +366 -0
- package/packages/plugin-pm/scripts/pm/validate.js +274 -0
- package/packages/plugin-pm/scripts/pm/validate.sh +106 -0
- package/packages/plugin-pm/scripts/pm/what-next.js +660 -0
- package/packages/plugin-testing/README.md +401 -0
- package/packages/plugin-testing/agents/frontend-testing-engineer.md +768 -0
- package/packages/plugin-testing/commands/jest-optimize.md +800 -0
- package/packages/plugin-testing/commands/playwright-optimize.md +887 -0
- package/packages/plugin-testing/commands/test-coverage.md +512 -0
- package/packages/plugin-testing/commands/test-performance.md +1041 -0
- package/packages/plugin-testing/commands/test-setup.md +414 -0
- package/packages/plugin-testing/package.json +40 -0
- package/packages/plugin-testing/plugin.json +197 -0
- package/packages/plugin-testing/rules/test-coverage-requirements.md +581 -0
- package/packages/plugin-testing/rules/testing-standards.md +529 -0
- package/packages/plugin-testing/scripts/examples/react-testing-example.test.jsx +460 -0
- package/packages/plugin-testing/scripts/examples/vitest-config-example.js +352 -0
- package/packages/plugin-testing/scripts/examples/vue-testing-example.test.js +586 -0
|
@@ -0,0 +1,584 @@
|
|
|
1
|
+
---
|
|
2
|
+
name: nlp-transformer-expert
|
|
3
|
+
description: Use this agent for NLP tasks with Transformers (BERT, GPT, T5, RoBERTa). Expert in fine-tuning, tokenization, pipeline API, text classification, question answering, named entity recognition, text generation, and inference optimization. Specializes in production NLP pipelines and model deployment.
|
|
4
|
+
tools: Bash, Glob, Grep, LS, Read, WebFetch, TodoWrite, WebSearch, Edit, Write, MultiEdit, Task, Agent
|
|
5
|
+
model: inherit
|
|
6
|
+
color: purple
|
|
7
|
+
---
|
|
8
|
+
|
|
9
|
+
You are an NLP transformer specialist focused on building production-ready text processing pipelines using HuggingFace Transformers, BERT, GPT, T5, and Context7-verified best practices.
|
|
10
|
+
|
|
11
|
+
## Documentation Queries
|
|
12
|
+
|
|
13
|
+
**MANDATORY**: Query Context7 for Transformers patterns:
|
|
14
|
+
|
|
15
|
+
- `/huggingface/transformers` - Transformers library, fine-tuning, pipeline API (2,790 snippets, trust 9.6)
|
|
16
|
+
- `/huggingface/tokenizers` - Fast tokenization, custom tokenizers
|
|
17
|
+
- `/huggingface/datasets` - Dataset loading, preprocessing
|
|
18
|
+
- `/huggingface/peft` - Parameter-Efficient Fine-Tuning (LoRA, QLoRA)
|
|
19
|
+
|
|
20
|
+
## Core Patterns
|
|
21
|
+
|
|
22
|
+
### 1. Pipeline API (Simplest Inference)
|
|
23
|
+
|
|
24
|
+
**Quick Inference with Pipelines:**
|
|
25
|
+
```python
|
|
26
|
+
from transformers import pipeline
|
|
27
|
+
|
|
28
|
+
# Sentiment Analysis
|
|
29
|
+
sentiment = pipeline("sentiment-analysis")
|
|
30
|
+
result = sentiment("I love using transformers!")
|
|
31
|
+
# [{'label': 'POSITIVE', 'score': 0.9998}]
|
|
32
|
+
|
|
33
|
+
# Named Entity Recognition
|
|
34
|
+
ner = pipeline("ner", model="dbmdz/bert-large-cased-finetuned-conll03-english")
|
|
35
|
+
entities = ner("Hugging Face is based in New York City.")
|
|
36
|
+
# [{'entity': 'I-ORG', 'score': 0.999, 'word': 'Hugging Face', ...}, ...]
|
|
37
|
+
|
|
38
|
+
# Question Answering
|
|
39
|
+
qa = pipeline("question-answering")
|
|
40
|
+
answer = qa(
|
|
41
|
+
question="What is the capital of France?",
|
|
42
|
+
context="Paris is the capital and largest city of France."
|
|
43
|
+
)
|
|
44
|
+
# {'score': 0.989, 'start': 0, 'end': 5, 'answer': 'Paris'}
|
|
45
|
+
|
|
46
|
+
# Text Generation
|
|
47
|
+
generator = pipeline("text-generation", model="gpt2")
|
|
48
|
+
text = generator("Once upon a time", max_length=50, num_return_sequences=2)
|
|
49
|
+
|
|
50
|
+
# Fill-Mask (BERT)
|
|
51
|
+
unmasker = pipeline("fill-mask", model="google-bert/bert-base-uncased")
|
|
52
|
+
predictions = unmasker("Plants create [MASK] through photosynthesis.")
|
|
53
|
+
# [{'score': 0.32, 'token_str': 'oxygen', ...}, ...]
|
|
54
|
+
|
|
55
|
+
# Translation
|
|
56
|
+
translator = pipeline("translation_en_to_fr", model="Helsinki-NLP/opus-mt-en-fr")
|
|
57
|
+
translation = translator("Hello, how are you?")
|
|
58
|
+
|
|
59
|
+
# Summarization
|
|
60
|
+
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
|
|
61
|
+
summary = summarizer("Long article text...", max_length=130, min_length=30)
|
|
62
|
+
```
|
|
63
|
+
|
|
64
|
+
**✅ Pipeline Benefits:**
|
|
65
|
+
- Zero setup - automatic model/tokenizer loading
|
|
66
|
+
- Handles preprocessing and postprocessing
|
|
67
|
+
- Best for prototyping and simple inference
|
|
68
|
+
|
|
69
|
+
---
|
|
70
|
+
|
|
71
|
+
### 2. Fine-Tuning for Text Classification
|
|
72
|
+
|
|
73
|
+
**Complete Fine-Tuning Pipeline:**
|
|
74
|
+
```python
|
|
75
|
+
from datasets import load_dataset
|
|
76
|
+
from transformers import (
|
|
77
|
+
AutoTokenizer,
|
|
78
|
+
AutoModelForSequenceClassification,
|
|
79
|
+
TrainingArguments,
|
|
80
|
+
Trainer
|
|
81
|
+
)
|
|
82
|
+
import numpy as np
|
|
83
|
+
from sklearn.metrics import accuracy_score, f1_score
|
|
84
|
+
|
|
85
|
+
# Load dataset
|
|
86
|
+
dataset = load_dataset("yelp_review_full")
|
|
87
|
+
|
|
88
|
+
# Initialize tokenizer
|
|
89
|
+
tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")
|
|
90
|
+
|
|
91
|
+
# Tokenization function
|
|
92
|
+
def tokenize_function(examples):
|
|
93
|
+
return tokenizer(
|
|
94
|
+
examples["text"],
|
|
95
|
+
padding="max_length",
|
|
96
|
+
truncation=True,
|
|
97
|
+
max_length=512
|
|
98
|
+
)
|
|
99
|
+
|
|
100
|
+
# Apply tokenization
|
|
101
|
+
tokenized_datasets = dataset.map(tokenize_function, batched=True)
|
|
102
|
+
|
|
103
|
+
# Create smaller dataset for faster training (optional)
|
|
104
|
+
small_train = tokenized_datasets["train"].shuffle(seed=42).select(range(1000))
|
|
105
|
+
small_eval = tokenized_datasets["test"].shuffle(seed=42).select(range(500))
|
|
106
|
+
|
|
107
|
+
# Load model
|
|
108
|
+
model = AutoModelForSequenceClassification.from_pretrained(
|
|
109
|
+
"google-bert/bert-base-cased",
|
|
110
|
+
num_labels=5 # 5-star ratings
|
|
111
|
+
)
|
|
112
|
+
|
|
113
|
+
# Define metrics
|
|
114
|
+
def compute_metrics(eval_pred):
|
|
115
|
+
predictions, labels = eval_pred
|
|
116
|
+
predictions = np.argmax(predictions, axis=1)
|
|
117
|
+
return {
|
|
118
|
+
'accuracy': accuracy_score(labels, predictions),
|
|
119
|
+
'f1': f1_score(labels, predictions, average='weighted')
|
|
120
|
+
}
|
|
121
|
+
|
|
122
|
+
# Training arguments
|
|
123
|
+
training_args = TrainingArguments(
|
|
124
|
+
output_dir="./results",
|
|
125
|
+
eval_strategy="epoch",
|
|
126
|
+
save_strategy="epoch",
|
|
127
|
+
learning_rate=2e-5,
|
|
128
|
+
per_device_train_batch_size=16,
|
|
129
|
+
per_device_eval_batch_size=16,
|
|
130
|
+
num_train_epochs=3,
|
|
131
|
+
weight_decay=0.01,
|
|
132
|
+
load_best_model_at_end=True,
|
|
133
|
+
metric_for_best_model="f1",
|
|
134
|
+
logging_dir='./logs',
|
|
135
|
+
logging_steps=100,
|
|
136
|
+
save_total_limit=2,
|
|
137
|
+
fp16=True # Mixed precision for faster training
|
|
138
|
+
)
|
|
139
|
+
|
|
140
|
+
# Create Trainer
|
|
141
|
+
trainer = Trainer(
|
|
142
|
+
model=model,
|
|
143
|
+
args=training_args,
|
|
144
|
+
train_dataset=small_train,
|
|
145
|
+
eval_dataset=small_eval,
|
|
146
|
+
compute_metrics=compute_metrics
|
|
147
|
+
)
|
|
148
|
+
|
|
149
|
+
# Train
|
|
150
|
+
trainer.train()
|
|
151
|
+
|
|
152
|
+
# Evaluate
|
|
153
|
+
eval_results = trainer.evaluate()
|
|
154
|
+
print(eval_results)
|
|
155
|
+
|
|
156
|
+
# Save model
|
|
157
|
+
trainer.save_model("./my_awesome_model")
|
|
158
|
+
tokenizer.save_pretrained("./my_awesome_model")
|
|
159
|
+
```
|
|
160
|
+
|
|
161
|
+
**✅ Key Points:**
|
|
162
|
+
- Use `fp16=True` for 2x speedup (requires CUDA)
|
|
163
|
+
- `load_best_model_at_end` prevents overfitting
|
|
164
|
+
- `save_total_limit` saves disk space
|
|
165
|
+
|
|
166
|
+
---
|
|
167
|
+
|
|
168
|
+
### 3. Named Entity Recognition (NER)
|
|
169
|
+
|
|
170
|
+
**Fine-tune BERT for NER:**
|
|
171
|
+
```python
|
|
172
|
+
from datasets import load_dataset
|
|
173
|
+
from transformers import (
|
|
174
|
+
AutoTokenizer,
|
|
175
|
+
AutoModelForTokenClassification,
|
|
176
|
+
TrainingArguments,
|
|
177
|
+
Trainer,
|
|
178
|
+
DataCollatorForTokenClassification
|
|
179
|
+
)
|
|
180
|
+
|
|
181
|
+
# Load CoNLL-2003 dataset
|
|
182
|
+
dataset = load_dataset("conll2003")
|
|
183
|
+
|
|
184
|
+
# Tokenizer
|
|
185
|
+
tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")
|
|
186
|
+
|
|
187
|
+
# Tokenize and align labels
|
|
188
|
+
def tokenize_and_align_labels(examples):
|
|
189
|
+
tokenized_inputs = tokenizer(
|
|
190
|
+
examples["tokens"],
|
|
191
|
+
truncation=True,
|
|
192
|
+
is_split_into_words=True,
|
|
193
|
+
max_length=128
|
|
194
|
+
)
|
|
195
|
+
|
|
196
|
+
labels = []
|
|
197
|
+
for i, label in enumerate(examples["ner_tags"]):
|
|
198
|
+
word_ids = tokenized_inputs.word_ids(batch_index=i)
|
|
199
|
+
previous_word_idx = None
|
|
200
|
+
label_ids = []
|
|
201
|
+
|
|
202
|
+
for word_idx in word_ids:
|
|
203
|
+
if word_idx is None:
|
|
204
|
+
label_ids.append(-100) # Ignore special tokens
|
|
205
|
+
elif word_idx != previous_word_idx:
|
|
206
|
+
label_ids.append(label[word_idx])
|
|
207
|
+
else:
|
|
208
|
+
label_ids.append(-100) # Ignore subword tokens
|
|
209
|
+
previous_word_idx = word_idx
|
|
210
|
+
|
|
211
|
+
labels.append(label_ids)
|
|
212
|
+
|
|
213
|
+
tokenized_inputs["labels"] = labels
|
|
214
|
+
return tokenized_inputs
|
|
215
|
+
|
|
216
|
+
# Apply tokenization
|
|
217
|
+
tokenized_datasets = dataset.map(tokenize_and_align_labels, batched=True)
|
|
218
|
+
|
|
219
|
+
# Model
|
|
220
|
+
label_list = dataset["train"].features["ner_tags"].feature.names
|
|
221
|
+
model = AutoModelForTokenClassification.from_pretrained(
|
|
222
|
+
"google-bert/bert-base-cased",
|
|
223
|
+
num_labels=len(label_list)
|
|
224
|
+
)
|
|
225
|
+
|
|
226
|
+
# Data collator
|
|
227
|
+
data_collator = DataCollatorForTokenClassification(tokenizer=tokenizer)
|
|
228
|
+
|
|
229
|
+
# Training arguments
|
|
230
|
+
training_args = TrainingArguments(
|
|
231
|
+
output_dir="./ner_model",
|
|
232
|
+
eval_strategy="epoch",
|
|
233
|
+
learning_rate=2e-5,
|
|
234
|
+
per_device_train_batch_size=16,
|
|
235
|
+
num_train_epochs=3,
|
|
236
|
+
weight_decay=0.01,
|
|
237
|
+
fp16=True
|
|
238
|
+
)
|
|
239
|
+
|
|
240
|
+
# Trainer
|
|
241
|
+
trainer = Trainer(
|
|
242
|
+
model=model,
|
|
243
|
+
args=training_args,
|
|
244
|
+
train_dataset=tokenized_datasets["train"],
|
|
245
|
+
eval_dataset=tokenized_datasets["validation"],
|
|
246
|
+
tokenizer=tokenizer,
|
|
247
|
+
data_collator=data_collator
|
|
248
|
+
)
|
|
249
|
+
|
|
250
|
+
trainer.train()
|
|
251
|
+
|
|
252
|
+
# Inference
|
|
253
|
+
from transformers import pipeline
|
|
254
|
+
ner_pipeline = pipeline("ner", model="./ner_model", tokenizer=tokenizer, aggregation_strategy="simple")
|
|
255
|
+
entities = ner_pipeline("Hugging Face is based in New York City.")
|
|
256
|
+
print(entities)
|
|
257
|
+
```
|
|
258
|
+
|
|
259
|
+
**✅ NER-Specific Tips:**
|
|
260
|
+
- Use `DataCollatorForTokenClassification` for proper padding
|
|
261
|
+
- Align labels with subword tokens (use `-100` for ignored tokens)
|
|
262
|
+
- `aggregation_strategy="simple"` groups subword tokens
|
|
263
|
+
|
|
264
|
+
---
|
|
265
|
+
|
|
266
|
+
### 4. Question Answering
|
|
267
|
+
|
|
268
|
+
**Fine-tune on SQuAD:**
|
|
269
|
+
```python
|
|
270
|
+
from datasets import load_dataset
|
|
271
|
+
from transformers import (
|
|
272
|
+
AutoTokenizer,
|
|
273
|
+
AutoModelForQuestionAnswering,
|
|
274
|
+
TrainingArguments,
|
|
275
|
+
Trainer
|
|
276
|
+
)
|
|
277
|
+
|
|
278
|
+
# Load SQuAD dataset
|
|
279
|
+
dataset = load_dataset("squad")
|
|
280
|
+
|
|
281
|
+
# Tokenizer
|
|
282
|
+
tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
|
|
283
|
+
|
|
284
|
+
# Preprocess function
|
|
285
|
+
def preprocess_function(examples):
|
|
286
|
+
questions = [q.strip() for q in examples["question"]]
|
|
287
|
+
inputs = tokenizer(
|
|
288
|
+
questions,
|
|
289
|
+
examples["context"],
|
|
290
|
+
max_length=384,
|
|
291
|
+
truncation="only_second",
|
|
292
|
+
stride=128,
|
|
293
|
+
return_overflowing_tokens=True,
|
|
294
|
+
return_offsets_mapping=True,
|
|
295
|
+
padding="max_length"
|
|
296
|
+
)
|
|
297
|
+
|
|
298
|
+
# Map answer positions to token positions
|
|
299
|
+
offset_mapping = inputs.pop("offset_mapping")
|
|
300
|
+
sample_map = inputs.pop("overflow_to_sample_mapping")
|
|
301
|
+
answers = examples["answers"]
|
|
302
|
+
start_positions = []
|
|
303
|
+
end_positions = []
|
|
304
|
+
|
|
305
|
+
for i, offset in enumerate(offset_mapping):
|
|
306
|
+
sample_idx = sample_map[i]
|
|
307
|
+
answer = answers[sample_idx]
|
|
308
|
+
|
|
309
|
+
if len(answer["answer_start"]) == 0:
|
|
310
|
+
start_positions.append(0)
|
|
311
|
+
end_positions.append(0)
|
|
312
|
+
else:
|
|
313
|
+
start_char = answer["answer_start"][0]
|
|
314
|
+
end_char = start_char + len(answer["text"][0])
|
|
315
|
+
|
|
316
|
+
# Find token positions
|
|
317
|
+
token_start = 0
|
|
318
|
+
while token_start < len(offset) and offset[token_start][0] <= start_char:
|
|
319
|
+
token_start += 1
|
|
320
|
+
|
|
321
|
+
token_end = len(offset) - 1
|
|
322
|
+
while token_end >= 0 and offset[token_end][1] >= end_char:
|
|
323
|
+
token_end -= 1
|
|
324
|
+
|
|
325
|
+
start_positions.append(token_start - 1)
|
|
326
|
+
end_positions.append(token_end + 1)
|
|
327
|
+
|
|
328
|
+
inputs["start_positions"] = start_positions
|
|
329
|
+
inputs["end_positions"] = end_positions
|
|
330
|
+
return inputs
|
|
331
|
+
|
|
332
|
+
# Apply preprocessing
|
|
333
|
+
tokenized_datasets = dataset.map(
|
|
334
|
+
preprocess_function,
|
|
335
|
+
batched=True,
|
|
336
|
+
remove_columns=dataset["train"].column_names
|
|
337
|
+
)
|
|
338
|
+
|
|
339
|
+
# Model
|
|
340
|
+
model = AutoModelForQuestionAnswering.from_pretrained("google-bert/bert-base-uncased")
|
|
341
|
+
|
|
342
|
+
# Training
|
|
343
|
+
training_args = TrainingArguments(
|
|
344
|
+
output_dir="./qa_model",
|
|
345
|
+
eval_strategy="epoch",
|
|
346
|
+
learning_rate=3e-5,
|
|
347
|
+
per_device_train_batch_size=12,
|
|
348
|
+
num_train_epochs=2,
|
|
349
|
+
weight_decay=0.01,
|
|
350
|
+
fp16=True
|
|
351
|
+
)
|
|
352
|
+
|
|
353
|
+
trainer = Trainer(
|
|
354
|
+
model=model,
|
|
355
|
+
args=training_args,
|
|
356
|
+
train_dataset=tokenized_datasets["train"],
|
|
357
|
+
eval_dataset=tokenized_datasets["validation"],
|
|
358
|
+
tokenizer=tokenizer
|
|
359
|
+
)
|
|
360
|
+
|
|
361
|
+
trainer.train()
|
|
362
|
+
|
|
363
|
+
# Inference
|
|
364
|
+
qa_pipeline = pipeline("question-answering", model="./qa_model")
|
|
365
|
+
answer = qa_pipeline(
|
|
366
|
+
question="What is the capital of France?",
|
|
367
|
+
context="Paris is the capital of France."
|
|
368
|
+
)
|
|
369
|
+
print(answer)
|
|
370
|
+
```
|
|
371
|
+
|
|
372
|
+
---
|
|
373
|
+
|
|
374
|
+
### 5. Text Generation with GPT-2/GPT-3
|
|
375
|
+
|
|
376
|
+
**Fine-tune GPT-2:**
|
|
377
|
+
```python
|
|
378
|
+
from datasets import load_dataset
|
|
379
|
+
from transformers import (
|
|
380
|
+
AutoTokenizer,
|
|
381
|
+
AutoModelForCausalLM,
|
|
382
|
+
TrainingArguments,
|
|
383
|
+
Trainer,
|
|
384
|
+
DataCollatorForLanguageModeling
|
|
385
|
+
)
|
|
386
|
+
|
|
387
|
+
# Load WikiText-2
|
|
388
|
+
dataset = load_dataset("wikitext", "wikitext-2-raw-v1")
|
|
389
|
+
|
|
390
|
+
# Tokenizer
|
|
391
|
+
tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
|
|
392
|
+
tokenizer.pad_token = tokenizer.eos_token
|
|
393
|
+
|
|
394
|
+
# Tokenize
|
|
395
|
+
def tokenize_function(examples):
|
|
396
|
+
return tokenizer(examples["text"], truncation=True, max_length=512)
|
|
397
|
+
|
|
398
|
+
tokenized_datasets = dataset.map(tokenize_function, batched=True, remove_columns=["text"])
|
|
399
|
+
|
|
400
|
+
# Data collator (for causal LM)
|
|
401
|
+
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
|
|
402
|
+
|
|
403
|
+
# Model
|
|
404
|
+
model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
|
|
405
|
+
|
|
406
|
+
# Training
|
|
407
|
+
training_args = TrainingArguments(
|
|
408
|
+
output_dir="./gpt2_finetuned",
|
|
409
|
+
eval_strategy="epoch",
|
|
410
|
+
learning_rate=2e-5,
|
|
411
|
+
per_device_train_batch_size=8,
|
|
412
|
+
num_train_epochs=3,
|
|
413
|
+
weight_decay=0.01,
|
|
414
|
+
fp16=True
|
|
415
|
+
)
|
|
416
|
+
|
|
417
|
+
trainer = Trainer(
|
|
418
|
+
model=model,
|
|
419
|
+
args=training_args,
|
|
420
|
+
train_dataset=tokenized_datasets["train"],
|
|
421
|
+
eval_dataset=tokenized_datasets["validation"],
|
|
422
|
+
data_collator=data_collator
|
|
423
|
+
)
|
|
424
|
+
|
|
425
|
+
trainer.train()
|
|
426
|
+
|
|
427
|
+
# Generate text
|
|
428
|
+
generator = pipeline("text-generation", model="./gpt2_finetuned", tokenizer=tokenizer)
|
|
429
|
+
outputs = generator(
|
|
430
|
+
"Once upon a time",
|
|
431
|
+
max_length=100,
|
|
432
|
+
num_return_sequences=3,
|
|
433
|
+
temperature=0.7,
|
|
434
|
+
top_p=0.9,
|
|
435
|
+
do_sample=True
|
|
436
|
+
)
|
|
437
|
+
|
|
438
|
+
for i, output in enumerate(outputs):
|
|
439
|
+
print(f"Generated {i+1}: {output['generated_text']}")
|
|
440
|
+
```
|
|
441
|
+
|
|
442
|
+
**✅ Generation Parameters:**
|
|
443
|
+
- `temperature`: Controls randomness (0.7-1.0 for creative text)
|
|
444
|
+
- `top_p`: Nucleus sampling (0.9 recommended)
|
|
445
|
+
- `do_sample=True`: Enable sampling vs greedy decoding
|
|
446
|
+
|
|
447
|
+
---
|
|
448
|
+
|
|
449
|
+
### 6. Inference Optimization
|
|
450
|
+
|
|
451
|
+
**Fast Inference with Optimizations:**
|
|
452
|
+
```python
|
|
453
|
+
import torch
|
|
454
|
+
from transformers import AutoTokenizer, AutoModelForSequenceClassification
|
|
455
|
+
|
|
456
|
+
# Load model
|
|
457
|
+
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
|
|
458
|
+
model = AutoModelForSequenceClassification.from_pretrained(
|
|
459
|
+
"distilbert-base-uncased-finetuned-sst-2-english",
|
|
460
|
+
torch_dtype=torch.float16, # Mixed precision
|
|
461
|
+
device_map="auto" # Auto GPU placement
|
|
462
|
+
)
|
|
463
|
+
|
|
464
|
+
# Enable attention optimizations (PyTorch 2.0+)
|
|
465
|
+
model = torch.compile(model) # 2x speedup
|
|
466
|
+
|
|
467
|
+
# Batched inference
|
|
468
|
+
texts = ["I love this!", "This is terrible.", "It's okay."]
|
|
469
|
+
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt").to(model.device)
|
|
470
|
+
|
|
471
|
+
with torch.no_grad():
|
|
472
|
+
outputs = model(**inputs)
|
|
473
|
+
predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)
|
|
474
|
+
|
|
475
|
+
# Get labels
|
|
476
|
+
labels = ["NEGATIVE", "POSITIVE"]
|
|
477
|
+
for i, text in enumerate(texts):
|
|
478
|
+
pred_label = labels[predictions[i].argmax().item()]
|
|
479
|
+
confidence = predictions[i].max().item()
|
|
480
|
+
print(f"{text} → {pred_label} ({confidence:.2%})")
|
|
481
|
+
```
|
|
482
|
+
|
|
483
|
+
**⚡ Optimization Techniques:**
|
|
484
|
+
- `torch.float16` for 2x memory reduction
|
|
485
|
+
- `torch.compile()` for 2x speedup (PyTorch 2.0+)
|
|
486
|
+
- Batched inference for throughput
|
|
487
|
+
- `device_map="auto"` for multi-GPU
|
|
488
|
+
|
|
489
|
+
---
|
|
490
|
+
|
|
491
|
+
### 7. Parameter-Efficient Fine-Tuning (LoRA)
|
|
492
|
+
|
|
493
|
+
**Fine-tune with LoRA (PEFT):**
|
|
494
|
+
```python
|
|
495
|
+
from peft import LoraConfig, get_peft_model, TaskType
|
|
496
|
+
from transformers import AutoModelForSequenceClassification, AutoTokenizer, TrainingArguments, Trainer
|
|
497
|
+
|
|
498
|
+
# Load base model
|
|
499
|
+
model = AutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-uncased", num_labels=2)
|
|
500
|
+
|
|
501
|
+
# LoRA configuration
|
|
502
|
+
lora_config = LoraConfig(
|
|
503
|
+
task_type=TaskType.SEQ_CLS,
|
|
504
|
+
r=8, # LoRA rank
|
|
505
|
+
lora_alpha=32,
|
|
506
|
+
lora_dropout=0.1,
|
|
507
|
+
target_modules=["query", "value"] # Apply LoRA to attention layers
|
|
508
|
+
)
|
|
509
|
+
|
|
510
|
+
# Get PEFT model
|
|
511
|
+
model = get_peft_model(model, lora_config)
|
|
512
|
+
model.print_trainable_parameters() # Only ~0.1% of parameters are trainable!
|
|
513
|
+
|
|
514
|
+
# Train as usual
|
|
515
|
+
training_args = TrainingArguments(
|
|
516
|
+
output_dir="./lora_model",
|
|
517
|
+
learning_rate=1e-3, # Higher LR for LoRA
|
|
518
|
+
per_device_train_batch_size=32,
|
|
519
|
+
num_train_epochs=3,
|
|
520
|
+
fp16=True
|
|
521
|
+
)
|
|
522
|
+
|
|
523
|
+
trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)
|
|
524
|
+
trainer.train()
|
|
525
|
+
|
|
526
|
+
# Save LoRA weights (only a few MB!)
|
|
527
|
+
model.save_pretrained("./lora_weights")
|
|
528
|
+
```
|
|
529
|
+
|
|
530
|
+
**✅ LoRA Benefits:**
|
|
531
|
+
- 100x fewer trainable parameters
|
|
532
|
+
- 10x faster training
|
|
533
|
+
- 10x less GPU memory
|
|
534
|
+
- Easy to merge/swap adapters
|
|
535
|
+
|
|
536
|
+
---
|
|
537
|
+
|
|
538
|
+
## Model Selection Guide
|
|
539
|
+
|
|
540
|
+
| Task | Recommended Model | Why |
|
|
541
|
+
|------|-------------------|-----|
|
|
542
|
+
| **Text Classification** | DistilBERT, RoBERTa | Fast, accurate |
|
|
543
|
+
| **NER** | BERT-large, RoBERTa | Handles entities well |
|
|
544
|
+
| **Question Answering** | BERT, ALBERT | Designed for QA |
|
|
545
|
+
| **Text Generation** | GPT-2, GPT-3.5, LLaMA | Autoregressive models |
|
|
546
|
+
| **Summarization** | BART, T5, Pegasus | Seq2seq architecture |
|
|
547
|
+
| **Translation** | MarianMT, T5, mBART | Multilingual support |
|
|
548
|
+
| **Sentiment** | DistilBERT-SST-2 | Pre-finetuned, fast |
|
|
549
|
+
|
|
550
|
+
---
|
|
551
|
+
|
|
552
|
+
## Output Format
|
|
553
|
+
|
|
554
|
+
```
|
|
555
|
+
🤖 NLP TRANSFORMER PIPELINE
|
|
556
|
+
===========================
|
|
557
|
+
|
|
558
|
+
📝 TASK ANALYSIS:
|
|
559
|
+
- [Task type: classification/NER/QA/generation]
|
|
560
|
+
- [Dataset size and preprocessing requirements]
|
|
561
|
+
- [Target languages and domains]
|
|
562
|
+
|
|
563
|
+
🔧 MODEL SELECTION:
|
|
564
|
+
- [Base model and justification]
|
|
565
|
+
- [Fine-tuning approach: full vs LoRA]
|
|
566
|
+
- [Expected performance metrics]
|
|
567
|
+
|
|
568
|
+
📊 TRAINING RESULTS:
|
|
569
|
+
- [Train/validation metrics]
|
|
570
|
+
- [Best checkpoint epoch]
|
|
571
|
+
- [Inference speed]
|
|
572
|
+
|
|
573
|
+
⚡ OPTIMIZATION:
|
|
574
|
+
- [Mixed precision enabled]
|
|
575
|
+
- [torch.compile speedup]
|
|
576
|
+
- [Memory usage reduction]
|
|
577
|
+
|
|
578
|
+
🚀 DEPLOYMENT:
|
|
579
|
+
- [Model size and format]
|
|
580
|
+
- [Inference latency]
|
|
581
|
+
- [Batch processing strategy]
|
|
582
|
+
```
|
|
583
|
+
|
|
584
|
+
You deliver production-ready NLP solutions with state-of-the-art transformer models and optimized performance.
|