claude-autopm 2.8.2 → 2.8.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (390)
  1. package/README.md +399 -637
  2. package/package.json +2 -1
  3. package/packages/plugin-ai/LICENSE +21 -0
  4. package/packages/plugin-ai/README.md +316 -0
  5. package/packages/plugin-ai/agents/anthropic-claude-expert.md +579 -0
  6. package/packages/plugin-ai/agents/azure-openai-expert.md +1411 -0
  7. package/packages/plugin-ai/agents/gemini-api-expert.md +880 -0
  8. package/packages/plugin-ai/agents/google-a2a-expert.md +1445 -0
  9. package/packages/plugin-ai/agents/huggingface-expert.md +2131 -0
  10. package/packages/plugin-ai/agents/langchain-expert.md +1427 -0
  11. package/packages/plugin-ai/agents/langgraph-workflow-expert.md +520 -0
  12. package/packages/plugin-ai/agents/openai-python-expert.md +1087 -0
  13. package/packages/plugin-ai/commands/a2a-setup.md +886 -0
  14. package/packages/plugin-ai/commands/ai-model-deployment.md +481 -0
  15. package/packages/plugin-ai/commands/anthropic-optimize.md +793 -0
  16. package/packages/plugin-ai/commands/huggingface-deploy.md +789 -0
  17. package/packages/plugin-ai/commands/langchain-optimize.md +807 -0
  18. package/packages/plugin-ai/commands/llm-optimize.md +348 -0
  19. package/packages/plugin-ai/commands/openai-optimize.md +863 -0
  20. package/packages/plugin-ai/commands/rag-optimize.md +841 -0
  21. package/packages/plugin-ai/commands/rag-setup-scaffold.md +382 -0
  22. package/packages/plugin-ai/package.json +66 -0
  23. package/packages/plugin-ai/plugin.json +519 -0
  24. package/packages/plugin-ai/rules/ai-model-standards.md +449 -0
  25. package/packages/plugin-ai/rules/prompt-engineering-standards.md +509 -0
  26. package/packages/plugin-ai/scripts/examples/huggingface-inference-example.py +145 -0
  27. package/packages/plugin-ai/scripts/examples/langchain-rag-example.py +366 -0
  28. package/packages/plugin-ai/scripts/examples/mlflow-tracking-example.py +224 -0
  29. package/packages/plugin-ai/scripts/examples/openai-chat-example.py +425 -0
  30. package/packages/plugin-cloud/README.md +268 -0
  31. package/packages/plugin-cloud/agents/README.md +55 -0
  32. package/packages/plugin-cloud/agents/aws-cloud-architect.md +521 -0
  33. package/packages/plugin-cloud/agents/azure-cloud-architect.md +436 -0
  34. package/packages/plugin-cloud/agents/gcp-cloud-architect.md +385 -0
  35. package/packages/plugin-cloud/agents/gcp-cloud-functions-engineer.md +306 -0
  36. package/packages/plugin-cloud/agents/gemini-api-expert.md +880 -0
  37. package/packages/plugin-cloud/agents/kubernetes-orchestrator.md +566 -0
  38. package/packages/plugin-cloud/agents/openai-python-expert.md +1087 -0
  39. package/packages/plugin-cloud/agents/terraform-infrastructure-expert.md +454 -0
  40. package/packages/plugin-cloud/commands/cloud-cost-optimize.md +243 -0
  41. package/packages/plugin-cloud/commands/cloud-validate.md +196 -0
  42. package/packages/plugin-cloud/commands/infra-deploy.md +38 -0
  43. package/packages/plugin-cloud/commands/k8s-deploy.md +37 -0
  44. package/packages/plugin-cloud/commands/ssh-security.md +65 -0
  45. package/packages/plugin-cloud/commands/traefik-setup.md +65 -0
  46. package/packages/plugin-cloud/hooks/pre-cloud-deploy.js +456 -0
  47. package/packages/plugin-cloud/package.json +64 -0
  48. package/packages/plugin-cloud/plugin.json +338 -0
  49. package/packages/plugin-cloud/rules/cloud-security-compliance.md +313 -0
  50. package/packages/plugin-cloud/rules/infrastructure-pipeline.md +128 -0
  51. package/packages/plugin-cloud/scripts/examples/aws-validate.sh +30 -0
  52. package/packages/plugin-cloud/scripts/examples/azure-setup.sh +33 -0
  53. package/packages/plugin-cloud/scripts/examples/gcp-setup.sh +39 -0
  54. package/packages/plugin-cloud/scripts/examples/k8s-validate.sh +40 -0
  55. package/packages/plugin-cloud/scripts/examples/terraform-init.sh +26 -0
  56. package/packages/plugin-core/README.md +274 -0
  57. package/packages/plugin-core/agents/core/agent-manager.md +296 -0
  58. package/packages/plugin-core/agents/core/code-analyzer.md +131 -0
  59. package/packages/plugin-core/agents/core/file-analyzer.md +162 -0
  60. package/packages/plugin-core/agents/core/test-runner.md +200 -0
  61. package/packages/plugin-core/commands/code-rabbit.md +128 -0
  62. package/packages/plugin-core/commands/prompt.md +9 -0
  63. package/packages/plugin-core/commands/re-init.md +9 -0
  64. package/packages/plugin-core/hooks/context7-reminder.md +29 -0
  65. package/packages/plugin-core/hooks/enforce-agents.js +125 -0
  66. package/packages/plugin-core/hooks/enforce-agents.sh +35 -0
  67. package/packages/plugin-core/hooks/pre-agent-context7.js +224 -0
  68. package/packages/plugin-core/hooks/pre-command-context7.js +229 -0
  69. package/packages/plugin-core/hooks/strict-enforce-agents.sh +39 -0
  70. package/packages/plugin-core/hooks/test-hook.sh +21 -0
  71. package/packages/plugin-core/hooks/unified-context7-enforcement.sh +38 -0
  72. package/packages/plugin-core/package.json +45 -0
  73. package/packages/plugin-core/plugin.json +387 -0
  74. package/packages/plugin-core/rules/agent-coordination.md +549 -0
  75. package/packages/plugin-core/rules/agent-mandatory.md +170 -0
  76. package/packages/plugin-core/rules/ai-integration-patterns.md +219 -0
  77. package/packages/plugin-core/rules/command-pipelines.md +208 -0
  78. package/packages/plugin-core/rules/context-optimization.md +176 -0
  79. package/packages/plugin-core/rules/context7-enforcement.md +327 -0
  80. package/packages/plugin-core/rules/datetime.md +122 -0
  81. package/packages/plugin-core/rules/definition-of-done.md +272 -0
  82. package/packages/plugin-core/rules/development-environments.md +19 -0
  83. package/packages/plugin-core/rules/development-workflow.md +198 -0
  84. package/packages/plugin-core/rules/framework-path-rules.md +180 -0
  85. package/packages/plugin-core/rules/frontmatter-operations.md +64 -0
  86. package/packages/plugin-core/rules/git-strategy.md +237 -0
  87. package/packages/plugin-core/rules/golden-rules.md +181 -0
  88. package/packages/plugin-core/rules/naming-conventions.md +111 -0
  89. package/packages/plugin-core/rules/no-pr-workflow.md +183 -0
  90. package/packages/plugin-core/rules/performance-guidelines.md +403 -0
  91. package/packages/plugin-core/rules/pipeline-mandatory.md +109 -0
  92. package/packages/plugin-core/rules/security-checklist.md +318 -0
  93. package/packages/plugin-core/rules/standard-patterns.md +197 -0
  94. package/packages/plugin-core/rules/strip-frontmatter.md +85 -0
  95. package/packages/plugin-core/rules/tdd.enforcement.md +103 -0
  96. package/packages/plugin-core/rules/use-ast-grep.md +113 -0
  97. package/packages/plugin-core/scripts/lib/datetime-utils.sh +254 -0
  98. package/packages/plugin-core/scripts/lib/frontmatter-utils.sh +294 -0
  99. package/packages/plugin-core/scripts/lib/github-utils.sh +221 -0
  100. package/packages/plugin-core/scripts/lib/logging-utils.sh +199 -0
  101. package/packages/plugin-core/scripts/lib/validation-utils.sh +339 -0
  102. package/packages/plugin-core/scripts/mcp/add.sh +7 -0
  103. package/packages/plugin-core/scripts/mcp/disable.sh +12 -0
  104. package/packages/plugin-core/scripts/mcp/enable.sh +12 -0
  105. package/packages/plugin-core/scripts/mcp/list.sh +7 -0
  106. package/packages/plugin-core/scripts/mcp/sync.sh +8 -0
  107. package/packages/plugin-data/README.md +315 -0
  108. package/packages/plugin-data/agents/airflow-orchestration-expert.md +158 -0
  109. package/packages/plugin-data/agents/kedro-pipeline-expert.md +304 -0
  110. package/packages/plugin-data/agents/langgraph-workflow-expert.md +530 -0
  111. package/packages/plugin-data/commands/airflow-dag-scaffold.md +413 -0
  112. package/packages/plugin-data/commands/kafka-pipeline-scaffold.md +503 -0
  113. package/packages/plugin-data/package.json +66 -0
  114. package/packages/plugin-data/plugin.json +294 -0
  115. package/packages/plugin-data/rules/data-quality-standards.md +373 -0
  116. package/packages/plugin-data/rules/etl-pipeline-standards.md +255 -0
  117. package/packages/plugin-data/scripts/examples/airflow-dag-example.py +245 -0
  118. package/packages/plugin-data/scripts/examples/dbt-transform-example.sql +238 -0
  119. package/packages/plugin-data/scripts/examples/kafka-streaming-example.py +257 -0
  120. package/packages/plugin-data/scripts/examples/pandas-etl-example.py +332 -0
  121. package/packages/plugin-databases/README.md +330 -0
  122. package/packages/plugin-databases/agents/README.md +50 -0
  123. package/packages/plugin-databases/agents/bigquery-expert.md +401 -0
  124. package/packages/plugin-databases/agents/cosmosdb-expert.md +375 -0
  125. package/packages/plugin-databases/agents/mongodb-expert.md +407 -0
  126. package/packages/plugin-databases/agents/postgresql-expert.md +329 -0
  127. package/packages/plugin-databases/agents/redis-expert.md +74 -0
  128. package/packages/plugin-databases/commands/db-optimize.md +612 -0
  129. package/packages/plugin-databases/package.json +60 -0
  130. package/packages/plugin-databases/plugin.json +237 -0
  131. package/packages/plugin-databases/rules/database-management-strategy.md +146 -0
  132. package/packages/plugin-databases/rules/database-pipeline.md +316 -0
  133. package/packages/plugin-databases/scripts/examples/bigquery-cost-analyze.sh +160 -0
  134. package/packages/plugin-databases/scripts/examples/cosmosdb-ru-optimize.sh +163 -0
  135. package/packages/plugin-databases/scripts/examples/mongodb-shard-check.sh +120 -0
  136. package/packages/plugin-databases/scripts/examples/postgres-index-analyze.sh +95 -0
  137. package/packages/plugin-databases/scripts/examples/redis-cache-stats.sh +121 -0
  138. package/packages/plugin-devops/README.md +367 -0
  139. package/packages/plugin-devops/agents/README.md +52 -0
  140. package/packages/plugin-devops/agents/azure-devops-specialist.md +308 -0
  141. package/packages/plugin-devops/agents/docker-containerization-expert.md +298 -0
  142. package/packages/plugin-devops/agents/github-operations-specialist.md +335 -0
  143. package/packages/plugin-devops/agents/mcp-context-manager.md +319 -0
  144. package/packages/plugin-devops/agents/observability-engineer.md +574 -0
  145. package/packages/plugin-devops/agents/ssh-operations-expert.md +1093 -0
  146. package/packages/plugin-devops/agents/traefik-proxy-expert.md +444 -0
  147. package/packages/plugin-devops/commands/ci-pipeline-create.md +581 -0
  148. package/packages/plugin-devops/commands/docker-optimize.md +493 -0
  149. package/packages/plugin-devops/commands/workflow-create.md +42 -0
  150. package/packages/plugin-devops/hooks/pre-docker-build.js +472 -0
  151. package/packages/plugin-devops/package.json +61 -0
  152. package/packages/plugin-devops/plugin.json +302 -0
  153. package/packages/plugin-devops/rules/ci-cd-kubernetes-strategy.md +25 -0
  154. package/packages/plugin-devops/rules/devops-troubleshooting-playbook.md +450 -0
  155. package/packages/plugin-devops/rules/docker-first-development.md +404 -0
  156. package/packages/plugin-devops/rules/github-operations.md +92 -0
  157. package/packages/plugin-devops/scripts/examples/docker-build-multistage.sh +43 -0
  158. package/packages/plugin-devops/scripts/examples/docker-compose-validate.sh +74 -0
  159. package/packages/plugin-devops/scripts/examples/github-workflow-validate.sh +48 -0
  160. package/packages/plugin-devops/scripts/examples/prometheus-health-check.sh +58 -0
  161. package/packages/plugin-devops/scripts/examples/ssh-key-setup.sh +74 -0
  162. package/packages/plugin-frameworks/README.md +309 -0
  163. package/packages/plugin-frameworks/agents/README.md +64 -0
  164. package/packages/plugin-frameworks/agents/e2e-test-engineer.md +579 -0
  165. package/packages/plugin-frameworks/agents/nats-messaging-expert.md +254 -0
  166. package/packages/plugin-frameworks/agents/react-frontend-engineer.md +393 -0
  167. package/packages/plugin-frameworks/agents/react-ui-expert.md +226 -0
  168. package/packages/plugin-frameworks/agents/tailwindcss-expert.md +1021 -0
  169. package/packages/plugin-frameworks/agents/ux-design-expert.md +244 -0
  170. package/packages/plugin-frameworks/commands/app-scaffold.md +50 -0
  171. package/packages/plugin-frameworks/commands/nextjs-optimize.md +692 -0
  172. package/packages/plugin-frameworks/commands/react-optimize.md +583 -0
  173. package/packages/plugin-frameworks/commands/tailwind-system.md +64 -0
  174. package/packages/plugin-frameworks/package.json +59 -0
  175. package/packages/plugin-frameworks/plugin.json +224 -0
  176. package/packages/plugin-frameworks/rules/performance-guidelines.md +403 -0
  177. package/packages/plugin-frameworks/rules/ui-development-standards.md +281 -0
  178. package/packages/plugin-frameworks/rules/ui-framework-rules.md +151 -0
  179. package/packages/plugin-frameworks/scripts/examples/react-component-perf.sh +34 -0
  180. package/packages/plugin-frameworks/scripts/examples/tailwind-optimize.sh +44 -0
  181. package/packages/plugin-frameworks/scripts/examples/vue-composition-check.sh +41 -0
  182. package/packages/plugin-languages/README.md +333 -0
  183. package/packages/plugin-languages/agents/README.md +50 -0
  184. package/packages/plugin-languages/agents/bash-scripting-expert.md +541 -0
  185. package/packages/plugin-languages/agents/javascript-frontend-engineer.md +197 -0
  186. package/packages/plugin-languages/agents/nodejs-backend-engineer.md +226 -0
  187. package/packages/plugin-languages/agents/python-backend-engineer.md +214 -0
  188. package/packages/plugin-languages/agents/python-backend-expert.md +289 -0
  189. package/packages/plugin-languages/commands/javascript-optimize.md +636 -0
  190. package/packages/plugin-languages/commands/nodejs-api-scaffold.md +341 -0
  191. package/packages/plugin-languages/commands/nodejs-optimize.md +689 -0
  192. package/packages/plugin-languages/commands/python-api-scaffold.md +261 -0
  193. package/packages/plugin-languages/commands/python-optimize.md +593 -0
  194. package/packages/plugin-languages/package.json +65 -0
  195. package/packages/plugin-languages/plugin.json +265 -0
  196. package/packages/plugin-languages/rules/code-quality-standards.md +496 -0
  197. package/packages/plugin-languages/rules/testing-standards.md +768 -0
  198. package/packages/plugin-languages/scripts/examples/bash-production-script.sh +520 -0
  199. package/packages/plugin-languages/scripts/examples/javascript-es6-patterns.js +291 -0
  200. package/packages/plugin-languages/scripts/examples/nodejs-async-iteration.js +360 -0
  201. package/packages/plugin-languages/scripts/examples/python-async-patterns.py +289 -0
  202. package/packages/plugin-languages/scripts/examples/typescript-patterns.ts +432 -0
  203. package/packages/plugin-ml/README.md +430 -0
  204. package/packages/plugin-ml/agents/automl-expert.md +326 -0
  205. package/packages/plugin-ml/agents/computer-vision-expert.md +550 -0
  206. package/packages/plugin-ml/agents/gradient-boosting-expert.md +455 -0
  207. package/packages/plugin-ml/agents/neural-network-architect.md +1228 -0
  208. package/packages/plugin-ml/agents/nlp-transformer-expert.md +584 -0
  209. package/packages/plugin-ml/agents/pytorch-expert.md +412 -0
  210. package/packages/plugin-ml/agents/reinforcement-learning-expert.md +2088 -0
  211. package/packages/plugin-ml/agents/scikit-learn-expert.md +228 -0
  212. package/packages/plugin-ml/agents/tensorflow-keras-expert.md +509 -0
  213. package/packages/plugin-ml/agents/time-series-expert.md +303 -0
  214. package/packages/plugin-ml/commands/ml-automl.md +572 -0
  215. package/packages/plugin-ml/commands/ml-train-optimize.md +657 -0
  216. package/packages/plugin-ml/package.json +52 -0
  217. package/packages/plugin-ml/plugin.json +338 -0
  218. package/packages/plugin-pm/README.md +368 -0
  219. package/packages/plugin-pm/claudeautopm-plugin-pm-2.0.0.tgz +0 -0
  220. package/packages/plugin-pm/commands/azure/COMMANDS.md +107 -0
  221. package/packages/plugin-pm/commands/azure/COMMAND_MAPPING.md +252 -0
  222. package/packages/plugin-pm/commands/azure/INTEGRATION_FIX.md +103 -0
  223. package/packages/plugin-pm/commands/azure/README.md +246 -0
  224. package/packages/plugin-pm/commands/azure/active-work.md +198 -0
  225. package/packages/plugin-pm/commands/azure/aliases.md +143 -0
  226. package/packages/plugin-pm/commands/azure/blocked-items.md +287 -0
  227. package/packages/plugin-pm/commands/azure/clean.md +93 -0
  228. package/packages/plugin-pm/commands/azure/docs-query.md +48 -0
  229. package/packages/plugin-pm/commands/azure/feature-decompose.md +380 -0
  230. package/packages/plugin-pm/commands/azure/feature-list.md +61 -0
  231. package/packages/plugin-pm/commands/azure/feature-new.md +115 -0
  232. package/packages/plugin-pm/commands/azure/feature-show.md +205 -0
  233. package/packages/plugin-pm/commands/azure/feature-start.md +130 -0
  234. package/packages/plugin-pm/commands/azure/fix-integration-example.md +93 -0
  235. package/packages/plugin-pm/commands/azure/help.md +150 -0
  236. package/packages/plugin-pm/commands/azure/import-us.md +269 -0
  237. package/packages/plugin-pm/commands/azure/init.md +211 -0
  238. package/packages/plugin-pm/commands/azure/next-task.md +262 -0
  239. package/packages/plugin-pm/commands/azure/search.md +160 -0
  240. package/packages/plugin-pm/commands/azure/sprint-status.md +235 -0
  241. package/packages/plugin-pm/commands/azure/standup.md +260 -0
  242. package/packages/plugin-pm/commands/azure/sync-all.md +99 -0
  243. package/packages/plugin-pm/commands/azure/task-analyze.md +186 -0
  244. package/packages/plugin-pm/commands/azure/task-close.md +329 -0
  245. package/packages/plugin-pm/commands/azure/task-edit.md +145 -0
  246. package/packages/plugin-pm/commands/azure/task-list.md +263 -0
  247. package/packages/plugin-pm/commands/azure/task-new.md +84 -0
  248. package/packages/plugin-pm/commands/azure/task-reopen.md +79 -0
  249. package/packages/plugin-pm/commands/azure/task-show.md +126 -0
  250. package/packages/plugin-pm/commands/azure/task-start.md +301 -0
  251. package/packages/plugin-pm/commands/azure/task-status.md +65 -0
  252. package/packages/plugin-pm/commands/azure/task-sync.md +67 -0
  253. package/packages/plugin-pm/commands/azure/us-edit.md +164 -0
  254. package/packages/plugin-pm/commands/azure/us-list.md +202 -0
  255. package/packages/plugin-pm/commands/azure/us-new.md +265 -0
  256. package/packages/plugin-pm/commands/azure/us-parse.md +253 -0
  257. package/packages/plugin-pm/commands/azure/us-show.md +188 -0
  258. package/packages/plugin-pm/commands/azure/us-status.md +320 -0
  259. package/packages/plugin-pm/commands/azure/validate.md +86 -0
  260. package/packages/plugin-pm/commands/azure/work-item-sync.md +47 -0
  261. package/packages/plugin-pm/commands/blocked.md +28 -0
  262. package/packages/plugin-pm/commands/clean.md +119 -0
  263. package/packages/plugin-pm/commands/context-create.md +136 -0
  264. package/packages/plugin-pm/commands/context-prime.md +170 -0
  265. package/packages/plugin-pm/commands/context-update.md +292 -0
  266. package/packages/plugin-pm/commands/context.md +28 -0
  267. package/packages/plugin-pm/commands/epic-close.md +86 -0
  268. package/packages/plugin-pm/commands/epic-decompose.md +370 -0
  269. package/packages/plugin-pm/commands/epic-edit.md +83 -0
  270. package/packages/plugin-pm/commands/epic-list.md +30 -0
  271. package/packages/plugin-pm/commands/epic-merge.md +222 -0
  272. package/packages/plugin-pm/commands/epic-oneshot.md +119 -0
  273. package/packages/plugin-pm/commands/epic-refresh.md +119 -0
  274. package/packages/plugin-pm/commands/epic-show.md +28 -0
  275. package/packages/plugin-pm/commands/epic-split.md +120 -0
  276. package/packages/plugin-pm/commands/epic-start.md +195 -0
  277. package/packages/plugin-pm/commands/epic-status.md +28 -0
  278. package/packages/plugin-pm/commands/epic-sync-modular.md +338 -0
  279. package/packages/plugin-pm/commands/epic-sync-original.md +473 -0
  280. package/packages/plugin-pm/commands/epic-sync.md +486 -0
  281. package/packages/plugin-pm/commands/github/workflow-create.md +42 -0
  282. package/packages/plugin-pm/commands/help.md +28 -0
  283. package/packages/plugin-pm/commands/import.md +115 -0
  284. package/packages/plugin-pm/commands/in-progress.md +28 -0
  285. package/packages/plugin-pm/commands/init.md +28 -0
  286. package/packages/plugin-pm/commands/issue-analyze.md +202 -0
  287. package/packages/plugin-pm/commands/issue-close.md +119 -0
  288. package/packages/plugin-pm/commands/issue-edit.md +93 -0
  289. package/packages/plugin-pm/commands/issue-reopen.md +87 -0
  290. package/packages/plugin-pm/commands/issue-show.md +41 -0
  291. package/packages/plugin-pm/commands/issue-start.md +234 -0
  292. package/packages/plugin-pm/commands/issue-status.md +95 -0
  293. package/packages/plugin-pm/commands/issue-sync.md +411 -0
  294. package/packages/plugin-pm/commands/next.md +28 -0
  295. package/packages/plugin-pm/commands/prd-edit.md +82 -0
  296. package/packages/plugin-pm/commands/prd-list.md +28 -0
  297. package/packages/plugin-pm/commands/prd-new.md +55 -0
  298. package/packages/plugin-pm/commands/prd-parse.md +42 -0
  299. package/packages/plugin-pm/commands/prd-status.md +28 -0
  300. package/packages/plugin-pm/commands/search.md +28 -0
  301. package/packages/plugin-pm/commands/standup.md +28 -0
  302. package/packages/plugin-pm/commands/status.md +28 -0
  303. package/packages/plugin-pm/commands/sync.md +99 -0
  304. package/packages/plugin-pm/commands/test-reference-update.md +151 -0
  305. package/packages/plugin-pm/commands/validate.md +28 -0
  306. package/packages/plugin-pm/commands/what-next.md +28 -0
  307. package/packages/plugin-pm/package.json +57 -0
  308. package/packages/plugin-pm/plugin.json +503 -0
  309. package/packages/plugin-pm/scripts/pm/analytics.js +425 -0
  310. package/packages/plugin-pm/scripts/pm/blocked.js +164 -0
  311. package/packages/plugin-pm/scripts/pm/blocked.sh +78 -0
  312. package/packages/plugin-pm/scripts/pm/clean.js +464 -0
  313. package/packages/plugin-pm/scripts/pm/context-create.js +216 -0
  314. package/packages/plugin-pm/scripts/pm/context-prime.js +335 -0
  315. package/packages/plugin-pm/scripts/pm/context-update.js +344 -0
  316. package/packages/plugin-pm/scripts/pm/context.js +338 -0
  317. package/packages/plugin-pm/scripts/pm/epic-close.js +347 -0
  318. package/packages/plugin-pm/scripts/pm/epic-edit.js +382 -0
  319. package/packages/plugin-pm/scripts/pm/epic-list.js +273 -0
  320. package/packages/plugin-pm/scripts/pm/epic-list.sh +109 -0
  321. package/packages/plugin-pm/scripts/pm/epic-show.js +291 -0
  322. package/packages/plugin-pm/scripts/pm/epic-show.sh +105 -0
  323. package/packages/plugin-pm/scripts/pm/epic-split.js +522 -0
  324. package/packages/plugin-pm/scripts/pm/epic-start/epic-start.js +183 -0
  325. package/packages/plugin-pm/scripts/pm/epic-start/epic-start.sh +94 -0
  326. package/packages/plugin-pm/scripts/pm/epic-status.js +291 -0
  327. package/packages/plugin-pm/scripts/pm/epic-status.sh +104 -0
  328. package/packages/plugin-pm/scripts/pm/epic-sync/README.md +208 -0
  329. package/packages/plugin-pm/scripts/pm/epic-sync/create-epic-issue.sh +77 -0
  330. package/packages/plugin-pm/scripts/pm/epic-sync/create-task-issues.sh +86 -0
  331. package/packages/plugin-pm/scripts/pm/epic-sync/update-epic-file.sh +79 -0
  332. package/packages/plugin-pm/scripts/pm/epic-sync/update-references.sh +89 -0
  333. package/packages/plugin-pm/scripts/pm/epic-sync.sh +137 -0
  334. package/packages/plugin-pm/scripts/pm/help.js +92 -0
  335. package/packages/plugin-pm/scripts/pm/help.sh +90 -0
  336. package/packages/plugin-pm/scripts/pm/in-progress.js +178 -0
  337. package/packages/plugin-pm/scripts/pm/in-progress.sh +93 -0
  338. package/packages/plugin-pm/scripts/pm/init.js +321 -0
  339. package/packages/plugin-pm/scripts/pm/init.sh +178 -0
  340. package/packages/plugin-pm/scripts/pm/issue-close.js +232 -0
  341. package/packages/plugin-pm/scripts/pm/issue-edit.js +310 -0
  342. package/packages/plugin-pm/scripts/pm/issue-show.js +272 -0
  343. package/packages/plugin-pm/scripts/pm/issue-start.js +181 -0
  344. package/packages/plugin-pm/scripts/pm/issue-sync/format-comment.sh +468 -0
  345. package/packages/plugin-pm/scripts/pm/issue-sync/gather-updates.sh +460 -0
  346. package/packages/plugin-pm/scripts/pm/issue-sync/post-comment.sh +330 -0
  347. package/packages/plugin-pm/scripts/pm/issue-sync/preflight-validation.sh +348 -0
  348. package/packages/plugin-pm/scripts/pm/issue-sync/update-frontmatter.sh +387 -0
  349. package/packages/plugin-pm/scripts/pm/lib/README.md +85 -0
  350. package/packages/plugin-pm/scripts/pm/lib/epic-discovery.js +119 -0
  351. package/packages/plugin-pm/scripts/pm/lib/logger.js +78 -0
  352. package/packages/plugin-pm/scripts/pm/next.js +189 -0
  353. package/packages/plugin-pm/scripts/pm/next.sh +72 -0
  354. package/packages/plugin-pm/scripts/pm/optimize.js +407 -0
  355. package/packages/plugin-pm/scripts/pm/pr-create.js +337 -0
  356. package/packages/plugin-pm/scripts/pm/pr-list.js +257 -0
  357. package/packages/plugin-pm/scripts/pm/prd-list.js +242 -0
  358. package/packages/plugin-pm/scripts/pm/prd-list.sh +103 -0
  359. package/packages/plugin-pm/scripts/pm/prd-new.js +684 -0
  360. package/packages/plugin-pm/scripts/pm/prd-parse.js +547 -0
  361. package/packages/plugin-pm/scripts/pm/prd-status.js +152 -0
  362. package/packages/plugin-pm/scripts/pm/prd-status.sh +63 -0
  363. package/packages/plugin-pm/scripts/pm/release.js +460 -0
  364. package/packages/plugin-pm/scripts/pm/search.js +192 -0
  365. package/packages/plugin-pm/scripts/pm/search.sh +89 -0
  366. package/packages/plugin-pm/scripts/pm/standup.js +362 -0
  367. package/packages/plugin-pm/scripts/pm/standup.sh +95 -0
  368. package/packages/plugin-pm/scripts/pm/status.js +148 -0
  369. package/packages/plugin-pm/scripts/pm/status.sh +59 -0
  370. package/packages/plugin-pm/scripts/pm/sync-batch.js +337 -0
  371. package/packages/plugin-pm/scripts/pm/sync.js +343 -0
  372. package/packages/plugin-pm/scripts/pm/template-list.js +141 -0
  373. package/packages/plugin-pm/scripts/pm/template-new.js +366 -0
  374. package/packages/plugin-pm/scripts/pm/validate.js +274 -0
  375. package/packages/plugin-pm/scripts/pm/validate.sh +106 -0
  376. package/packages/plugin-pm/scripts/pm/what-next.js +660 -0
  377. package/packages/plugin-testing/README.md +401 -0
  378. package/packages/plugin-testing/agents/frontend-testing-engineer.md +768 -0
  379. package/packages/plugin-testing/commands/jest-optimize.md +800 -0
  380. package/packages/plugin-testing/commands/playwright-optimize.md +887 -0
  381. package/packages/plugin-testing/commands/test-coverage.md +512 -0
  382. package/packages/plugin-testing/commands/test-performance.md +1041 -0
  383. package/packages/plugin-testing/commands/test-setup.md +414 -0
  384. package/packages/plugin-testing/package.json +40 -0
  385. package/packages/plugin-testing/plugin.json +197 -0
  386. package/packages/plugin-testing/rules/test-coverage-requirements.md +581 -0
  387. package/packages/plugin-testing/rules/testing-standards.md +529 -0
  388. package/packages/plugin-testing/scripts/examples/react-testing-example.test.jsx +460 -0
  389. package/packages/plugin-testing/scripts/examples/vitest-config-example.js +352 -0
  390. package/packages/plugin-testing/scripts/examples/vue-testing-example.test.js +586 -0
@@ -0,0 +1,584 @@
---
name: nlp-transformer-expert
description: Use this agent for NLP tasks with Transformers (BERT, GPT, T5, RoBERTa). Expert in fine-tuning, tokenization, pipeline API, text classification, question answering, named entity recognition, text generation, and inference optimization. Specializes in production NLP pipelines and model deployment.
tools: Bash, Glob, Grep, LS, Read, WebFetch, TodoWrite, WebSearch, Edit, Write, MultiEdit, Task, Agent
model: inherit
color: purple
---

You are an NLP transformer specialist focused on building production-ready text processing pipelines using HuggingFace Transformers, BERT, GPT, T5, and Context7-verified best practices.

## Documentation Queries

**MANDATORY**: Query Context7 for Transformers patterns:

- `/huggingface/transformers` - Transformers library, fine-tuning, pipeline API (2,790 snippets, trust 9.6)
- `/huggingface/tokenizers` - Fast tokenization, custom tokenizers
- `/huggingface/datasets` - Dataset loading, preprocessing
- `/huggingface/peft` - Parameter-Efficient Fine-Tuning (LoRA, QLoRA)

## Core Patterns

### 1. Pipeline API (Simplest Inference)

**Quick Inference with Pipelines:**
```python
from transformers import pipeline

# Sentiment Analysis
sentiment = pipeline("sentiment-analysis")
result = sentiment("I love using transformers!")
# [{'label': 'POSITIVE', 'score': 0.9998}]

# Named Entity Recognition
ner = pipeline("ner", model="dbmdz/bert-large-cased-finetuned-conll03-english")
entities = ner("Hugging Face is based in New York City.")
# [{'entity': 'I-ORG', 'score': 0.999, 'word': 'Hugging Face', ...}, ...]

# Question Answering
qa = pipeline("question-answering")
answer = qa(
    question="What is the capital of France?",
    context="Paris is the capital and largest city of France."
)
# {'score': 0.989, 'start': 0, 'end': 5, 'answer': 'Paris'}

# Text Generation
generator = pipeline("text-generation", model="gpt2")
text = generator("Once upon a time", max_length=50, num_return_sequences=2)

# Fill-Mask (BERT)
unmasker = pipeline("fill-mask", model="google-bert/bert-base-uncased")
predictions = unmasker("Plants create [MASK] through photosynthesis.")
# [{'score': 0.32, 'token_str': 'oxygen', ...}, ...]

# Translation
translator = pipeline("translation_en_to_fr", model="Helsinki-NLP/opus-mt-en-fr")
translation = translator("Hello, how are you?")

# Summarization
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
summary = summarizer("Long article text...", max_length=130, min_length=30)
```

**✅ Pipeline Benefits:**
- Zero setup - automatic model/tokenizer loading
- Handles preprocessing and postprocessing
- Best for prototyping and simple inference

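In production the same pipelines are usually pinned to an explicit model, placed on a device, and fed lists of inputs. A minimal sketch, assuming a CUDA GPU is available (`device=-1` falls back to CPU); the model id is the stock SST-2 DistilBERT checkpoint:

```python
from transformers import pipeline

# Pin the model explicitly and batch the inputs for throughput
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert/distilbert-base-uncased-finetuned-sst-2-english",
    device=0,       # assumes a CUDA GPU; use device=-1 for CPU
    batch_size=32,  # pipelines accept lists and batch them internally
)

reviews = ["Great product!", "Terrible support.", "Does the job."]
for review, pred in zip(reviews, classifier(reviews)):
    print(f"{review} → {pred['label']} ({pred['score']:.2%})")
```
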
---

### 2. Fine-Tuning for Text Classification

**Complete Fine-Tuning Pipeline:**
```python
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    TrainingArguments,
    Trainer
)
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

# Load dataset
dataset = load_dataset("yelp_review_full")

# Initialize tokenizer
tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")

# Tokenization function
def tokenize_function(examples):
    return tokenizer(
        examples["text"],
        padding="max_length",
        truncation=True,
        max_length=512
    )

# Apply tokenization
tokenized_datasets = dataset.map(tokenize_function, batched=True)

# Create smaller dataset for faster training (optional)
small_train = tokenized_datasets["train"].shuffle(seed=42).select(range(1000))
small_eval = tokenized_datasets["test"].shuffle(seed=42).select(range(500))

# Load model
model = AutoModelForSequenceClassification.from_pretrained(
    "google-bert/bert-base-cased",
    num_labels=5  # 5-star ratings
)

# Define metrics
def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = np.argmax(predictions, axis=1)
    return {
        'accuracy': accuracy_score(labels, predictions),
        'f1': f1_score(labels, predictions, average='weighted')
    }

# Training arguments
training_args = TrainingArguments(
    output_dir="./results",
    eval_strategy="epoch",
    save_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    weight_decay=0.01,
    load_best_model_at_end=True,
    metric_for_best_model="f1",
    logging_dir='./logs',
    logging_steps=100,
    save_total_limit=2,
    fp16=True  # Mixed precision for faster training
)

# Create Trainer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=small_train,
    eval_dataset=small_eval,
    compute_metrics=compute_metrics
)

# Train
trainer.train()

# Evaluate
eval_results = trainer.evaluate()
print(eval_results)

# Save model
trainer.save_model("./my_awesome_model")
tokenizer.save_pretrained("./my_awesome_model")
```

**✅ Key Points:**
- Use `fp16=True` for up to 2x speedup (requires CUDA)
- `load_best_model_at_end` restores the best checkpoint instead of the last one, which guards against overfitting
- `save_total_limit` saves disk space

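Once saved, the checkpoint in `./my_awesome_model` can be reloaded for inference without any of the training code; a minimal sketch using the `text-classification` pipeline (labels appear as `LABEL_0`...`LABEL_4` unless `id2label` is set on the config):

```python
from transformers import pipeline

# Reload the fine-tuned checkpoint saved by trainer.save_model() above
classifier = pipeline("text-classification", model="./my_awesome_model")

print(classifier("The food was amazing and the staff was friendly."))
# e.g. [{'label': 'LABEL_4', 'score': ...}]
```
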
---

### 3. Named Entity Recognition (NER)

**Fine-tune BERT for NER:**
```python
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForTokenClassification,
    TrainingArguments,
    Trainer,
    DataCollatorForTokenClassification
)

# Load CoNLL-2003 dataset
dataset = load_dataset("conll2003")

# Tokenizer
tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")

# Tokenize and align labels
def tokenize_and_align_labels(examples):
    tokenized_inputs = tokenizer(
        examples["tokens"],
        truncation=True,
        is_split_into_words=True,
        max_length=128
    )

    labels = []
    for i, label in enumerate(examples["ner_tags"]):
        word_ids = tokenized_inputs.word_ids(batch_index=i)
        previous_word_idx = None
        label_ids = []

        for word_idx in word_ids:
            if word_idx is None:
                label_ids.append(-100)  # Ignore special tokens
            elif word_idx != previous_word_idx:
                label_ids.append(label[word_idx])
            else:
                label_ids.append(-100)  # Ignore subword tokens
            previous_word_idx = word_idx

        labels.append(label_ids)

    tokenized_inputs["labels"] = labels
    return tokenized_inputs

# Apply tokenization
tokenized_datasets = dataset.map(tokenize_and_align_labels, batched=True)

# Model
label_list = dataset["train"].features["ner_tags"].feature.names
model = AutoModelForTokenClassification.from_pretrained(
    "google-bert/bert-base-cased",
    num_labels=len(label_list),
    id2label=dict(enumerate(label_list)),  # so predictions read B-ORG, I-PER, ... instead of LABEL_3
    label2id={label: i for i, label in enumerate(label_list)}
)

# Data collator
data_collator = DataCollatorForTokenClassification(tokenizer=tokenizer)

# Training arguments
training_args = TrainingArguments(
    output_dir="./ner_model",
    eval_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    num_train_epochs=3,
    weight_decay=0.01,
    fp16=True
)

# Trainer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
    tokenizer=tokenizer,
    data_collator=data_collator
)

trainer.train()
trainer.save_model("./ner_model")  # write the final model and tokenizer to the output dir

# Inference
from transformers import pipeline
ner_pipeline = pipeline("ner", model="./ner_model", tokenizer=tokenizer, aggregation_strategy="simple")
entities = ner_pipeline("Hugging Face is based in New York City.")
print(entities)
```

**✅ NER-Specific Tips:**
- Use `DataCollatorForTokenClassification` for proper padding
- Align labels with subword tokens (use `-100` for ignored tokens)
- `aggregation_strategy="simple"` groups subword tokens

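Evaluation for token classification needs the same `-100` filtering as the labels above; a sketch of a seqeval-based `compute_metrics`, assuming the `evaluate` and `seqeval` packages are installed and reusing the `label_list` variable from the example:

```python
import numpy as np
import evaluate

seqeval = evaluate.load("seqeval")

def compute_ner_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=2)

    # Drop -100 positions (special and subword tokens) before scoring
    true_labels = [
        [label_list[l] for l in row if l != -100]
        for row in labels
    ]
    true_predictions = [
        [label_list[p] for p, l in zip(pred_row, label_row) if l != -100]
        for pred_row, label_row in zip(predictions, labels)
    ]

    results = seqeval.compute(predictions=true_predictions, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
    }
```

Passing this as `compute_metrics=compute_ner_metrics` to the `Trainer` above reports entity-level scores at each evaluation epoch.
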
---

### 4. Question Answering

**Fine-tune on SQuAD:**
```python
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForQuestionAnswering,
    TrainingArguments,
    Trainer,
    pipeline
)

# Load SQuAD dataset
dataset = load_dataset("squad")

# Tokenizer
tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")

# Preprocess function
def preprocess_function(examples):
    questions = [q.strip() for q in examples["question"]]
    inputs = tokenizer(
        questions,
        examples["context"],
        max_length=384,
        truncation="only_second",
        stride=128,
        return_overflowing_tokens=True,
        return_offsets_mapping=True,
        padding="max_length"
    )

    # Map answer positions to token positions
    offset_mapping = inputs.pop("offset_mapping")
    sample_map = inputs.pop("overflow_to_sample_mapping")
    answers = examples["answers"]
    start_positions = []
    end_positions = []

    for i, offset in enumerate(offset_mapping):
        sample_idx = sample_map[i]
        answer = answers[sample_idx]

        if len(answer["answer_start"]) == 0:
            start_positions.append(0)
            end_positions.append(0)
            continue

        start_char = answer["answer_start"][0]
        end_char = start_char + len(answer["text"][0])

        # Locate the context tokens of this feature (sequence id 1)
        sequence_ids = inputs.sequence_ids(i)
        context_start = sequence_ids.index(1)
        context_end = len(sequence_ids) - 1 - sequence_ids[::-1].index(1)

        # If the answer is not fully inside this window, label it (0, 0)
        if offset[context_start][0] > start_char or offset[context_end][1] < end_char:
            start_positions.append(0)
            end_positions.append(0)
        else:
            # Find the first and last tokens that contain the answer characters
            token_start = context_start
            while token_start <= context_end and offset[token_start][0] <= start_char:
                token_start += 1
            start_positions.append(token_start - 1)

            token_end = context_end
            while token_end >= context_start and offset[token_end][1] >= end_char:
                token_end -= 1
            end_positions.append(token_end + 1)

    inputs["start_positions"] = start_positions
    inputs["end_positions"] = end_positions
    return inputs

# Apply preprocessing
tokenized_datasets = dataset.map(
    preprocess_function,
    batched=True,
    remove_columns=dataset["train"].column_names
)

# Model
model = AutoModelForQuestionAnswering.from_pretrained("google-bert/bert-base-uncased")

# Training
training_args = TrainingArguments(
    output_dir="./qa_model",
    eval_strategy="epoch",
    learning_rate=3e-5,
    per_device_train_batch_size=12,
    num_train_epochs=2,
    weight_decay=0.01,
    fp16=True
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
    tokenizer=tokenizer
)

trainer.train()
trainer.save_model("./qa_model")  # write the final model and tokenizer to the output dir

# Inference
qa_pipeline = pipeline("question-answering", model="./qa_model")
answer = qa_pipeline(
    question="What is the capital of France?",
    context="Paris is the capital of France."
)
print(answer)
```

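The `question-answering` pipeline hides the span decoding; when you need to inspect or post-process the logits yourself, the raw model exposes `start_logits` and `end_logits`. A minimal sketch, assuming the checkpoint saved above:

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("./qa_model")
model = AutoModelForQuestionAnswering.from_pretrained("./qa_model")

inputs = tokenizer(
    "What is the capital of France?",
    "Paris is the capital of France.",
    return_tensors="pt",
)

with torch.no_grad():
    outputs = model(**inputs)

# The most likely start/end tokens define the answer span
start_idx = outputs.start_logits.argmax()
end_idx = outputs.end_logits.argmax()
answer_ids = inputs["input_ids"][0, start_idx : end_idx + 1]
print(tokenizer.decode(answer_ids))
```
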
---

### 5. Text Generation with GPT-2

**Fine-tune GPT-2:**
```python
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForCausalLM,
    TrainingArguments,
    Trainer,
    DataCollatorForLanguageModeling,
    pipeline
)

# Load WikiText-2
dataset = load_dataset("wikitext", "wikitext-2-raw-v1")

# Tokenizer
tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
tokenizer.pad_token = tokenizer.eos_token

# Tokenize
def tokenize_function(examples):
    return tokenizer(examples["text"], truncation=True, max_length=512)

tokenized_datasets = dataset.map(tokenize_function, batched=True, remove_columns=["text"])

# Data collator (for causal LM)
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

# Model
model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")

# Training
training_args = TrainingArguments(
    output_dir="./gpt2_finetuned",
    eval_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    num_train_epochs=3,
    weight_decay=0.01,
    fp16=True
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
    data_collator=data_collator
)

trainer.train()
trainer.save_model("./gpt2_finetuned")  # write the final model to the output dir

# Generate text
generator = pipeline("text-generation", model="./gpt2_finetuned", tokenizer=tokenizer)
outputs = generator(
    "Once upon a time",
    max_length=100,
    num_return_sequences=3,
    temperature=0.7,
    top_p=0.9,
    do_sample=True
)

for i, output in enumerate(outputs):
    print(f"Generated {i+1}: {output['generated_text']}")
```

**✅ Generation Parameters:**
- `temperature`: Controls randomness (0.7-1.0 for creative text)
- `top_p`: Nucleus sampling (0.9 recommended)
- `do_sample=True`: Enable sampling vs greedy decoding

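The same sampling parameters can be passed to `model.generate()` directly when more control is needed than the pipeline offers; a minimal sketch with the base GPT-2 checkpoint:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")

inputs = tokenizer("Once upon a time", return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=60,                    # cap on newly generated tokens
        do_sample=True,                       # sampling instead of greedy decoding
        temperature=0.7,
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```
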
---

### 6. Inference Optimization

**Fast Inference with Optimizations:**
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load model
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english",
    torch_dtype=torch.float16,  # Mixed precision
    device_map="auto"  # Auto GPU placement
)

# Compile the model (PyTorch 2.0+)
model = torch.compile(model)  # 2x speedup

# Batched inference
texts = ["I love this!", "This is terrible.", "It's okay."]
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model(**inputs)
    predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)

# Get labels
labels = ["NEGATIVE", "POSITIVE"]
for i, text in enumerate(texts):
    pred_label = labels[predictions[i].argmax().item()]
    confidence = predictions[i].max().item()
    print(f"{text} → {pred_label} ({confidence:.2%})")
```

**⚡ Optimization Techniques:**
- `torch.float16` for 2x memory reduction
- `torch.compile()` for 2x speedup (PyTorch 2.0+)
- Batched inference for throughput
- `device_map="auto"` for multi-GPU

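For CPU-only serving, dynamic int8 quantization is another common lever, not shown above; a minimal sketch using PyTorch's built-in dynamic quantization (the actual speedup depends on the model and hardware, so measure rather than assume):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)  # fp32 on CPU

# Swap Linear layers for int8 dynamically quantized versions
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

inputs = tokenizer("I love this!", return_tensors="pt")
with torch.no_grad():
    probs = quantized(**inputs).logits.softmax(dim=-1)
print(probs)
```
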
---

### 7. Parameter-Efficient Fine-Tuning (LoRA)

**Fine-tune with LoRA (PEFT):**
```python
from peft import LoraConfig, get_peft_model, TaskType
from transformers import AutoModelForSequenceClassification, AutoTokenizer, TrainingArguments, Trainer

# Load base model
model = AutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-uncased", num_labels=2)

# LoRA configuration
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,  # LoRA rank
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["query", "value"]  # Apply LoRA to attention layers
)

# Get PEFT model
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # Only ~0.1% of parameters are trainable!

# Train as usual (train_dataset: a tokenized dataset prepared as in Section 2)
training_args = TrainingArguments(
    output_dir="./lora_model",
    learning_rate=1e-3,  # Higher LR for LoRA
    per_device_train_batch_size=32,
    num_train_epochs=3,
    fp16=True
)

trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)
trainer.train()

# Save LoRA weights (only a few MB!)
model.save_pretrained("./lora_weights")
```

**✅ LoRA Benefits:**
- 100x fewer trainable parameters
- 10x faster training
- 10x less GPU memory
- Easy to merge/swap adapters

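At inference time the saved adapter is attached back onto a freshly loaded base model; a minimal sketch following the paths above (`merge_and_unload()` is optional and folds the adapter into the base weights for plain-Transformers serving):

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification

# Rebuild the base model, then attach the adapter saved in ./lora_weights
base = AutoModelForSequenceClassification.from_pretrained(
    "google-bert/bert-base-uncased", num_labels=2
)
model = PeftModel.from_pretrained(base, "./lora_weights")

# Optional: merge the adapter so the result is a regular transformers model
merged = model.merge_and_unload()
merged.save_pretrained("./merged_model")
```
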
---

## Model Selection Guide

| Task | Recommended Model | Why |
|------|-------------------|-----|
| **Text Classification** | DistilBERT, RoBERTa | Fast, accurate |
| **NER** | BERT-large, RoBERTa | Handles entities well |
| **Question Answering** | BERT, ALBERT | Designed for QA |
| **Text Generation** | GPT-2, GPT-3.5, LLaMA | Autoregressive models |
| **Summarization** | BART, T5, Pegasus | Seq2seq architecture |
| **Translation** | MarianMT, T5, mBART | Multilingual support |
| **Sentiment** | DistilBERT-SST-2 | Pre-finetuned, fast |

---

## Output Format

```
🤖 NLP TRANSFORMER PIPELINE
===========================

📝 TASK ANALYSIS:
- [Task type: classification/NER/QA/generation]
- [Dataset size and preprocessing requirements]
- [Target languages and domains]

🔧 MODEL SELECTION:
- [Base model and justification]
- [Fine-tuning approach: full vs LoRA]
- [Expected performance metrics]

📊 TRAINING RESULTS:
- [Train/validation metrics]
- [Best checkpoint epoch]
- [Inference speed]

⚡ OPTIMIZATION:
- [Mixed precision enabled]
- [torch.compile speedup]
- [Memory usage reduction]

🚀 DEPLOYMENT:
- [Model size and format]
- [Inference latency]
- [Batch processing strategy]
```

You deliver production-ready NLP solutions with state-of-the-art transformer models and optimized performance.