claude-autopm 2.8.1 → 2.8.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (450)
  1. package/README.md +399 -529
  2. package/bin/autopm.js +2 -0
  3. package/bin/commands/plugin.js +395 -0
  4. package/bin/commands/team.js +184 -10
  5. package/install/install.js +223 -4
  6. package/lib/plugins/PluginManager.js +1328 -0
  7. package/lib/plugins/PluginManager.old.js +400 -0
  8. package/package.json +5 -1
  9. package/packages/plugin-ai/LICENSE +21 -0
  10. package/packages/plugin-ai/README.md +316 -0
  11. package/packages/plugin-ai/agents/anthropic-claude-expert.md +579 -0
  12. package/packages/plugin-ai/agents/azure-openai-expert.md +1411 -0
  13. package/packages/plugin-ai/agents/google-a2a-expert.md +1445 -0
  14. package/packages/plugin-ai/agents/huggingface-expert.md +2131 -0
  15. package/packages/plugin-ai/agents/langchain-expert.md +1427 -0
  16. package/packages/plugin-ai/commands/a2a-setup.md +886 -0
  17. package/packages/plugin-ai/commands/ai-model-deployment.md +481 -0
  18. package/packages/plugin-ai/commands/anthropic-optimize.md +793 -0
  19. package/packages/plugin-ai/commands/huggingface-deploy.md +789 -0
  20. package/packages/plugin-ai/commands/langchain-optimize.md +807 -0
  21. package/packages/plugin-ai/commands/llm-optimize.md +348 -0
  22. package/packages/plugin-ai/commands/openai-optimize.md +863 -0
  23. package/packages/plugin-ai/commands/rag-optimize.md +841 -0
  24. package/packages/plugin-ai/commands/rag-setup-scaffold.md +382 -0
  25. package/packages/plugin-ai/package.json +66 -0
  26. package/packages/plugin-ai/plugin.json +519 -0
  27. package/packages/plugin-ai/rules/ai-model-standards.md +449 -0
  28. package/packages/plugin-ai/rules/prompt-engineering-standards.md +509 -0
  29. package/packages/plugin-ai/scripts/examples/huggingface-inference-example.py +145 -0
  30. package/packages/plugin-ai/scripts/examples/langchain-rag-example.py +366 -0
  31. package/packages/plugin-ai/scripts/examples/mlflow-tracking-example.py +224 -0
  32. package/packages/plugin-ai/scripts/examples/openai-chat-example.py +425 -0
  33. package/packages/plugin-cloud/README.md +268 -0
  34. package/packages/plugin-cloud/agents/gemini-api-expert.md +880 -0
  35. package/packages/plugin-cloud/agents/openai-python-expert.md +1087 -0
  36. package/packages/plugin-cloud/commands/cloud-cost-optimize.md +243 -0
  37. package/packages/plugin-cloud/commands/cloud-validate.md +196 -0
  38. package/packages/plugin-cloud/hooks/pre-cloud-deploy.js +456 -0
  39. package/packages/plugin-cloud/package.json +64 -0
  40. package/packages/plugin-cloud/plugin.json +338 -0
  41. package/packages/plugin-cloud/rules/cloud-security-compliance.md +313 -0
  42. package/packages/plugin-cloud/scripts/examples/aws-validate.sh +30 -0
  43. package/packages/plugin-cloud/scripts/examples/azure-setup.sh +33 -0
  44. package/packages/plugin-cloud/scripts/examples/gcp-setup.sh +39 -0
  45. package/packages/plugin-cloud/scripts/examples/k8s-validate.sh +40 -0
  46. package/packages/plugin-cloud/scripts/examples/terraform-init.sh +26 -0
  47. package/packages/plugin-core/README.md +274 -0
  48. package/packages/plugin-core/commands/code-rabbit.md +128 -0
  49. package/packages/plugin-core/commands/prompt.md +9 -0
  50. package/packages/plugin-core/commands/re-init.md +9 -0
  51. package/packages/plugin-core/hooks/context7-reminder.md +29 -0
  52. package/packages/plugin-core/hooks/enforce-agents.js +125 -0
  53. package/packages/plugin-core/hooks/enforce-agents.sh +35 -0
  54. package/packages/plugin-core/hooks/pre-agent-context7.js +224 -0
  55. package/packages/plugin-core/hooks/pre-command-context7.js +229 -0
  56. package/packages/plugin-core/hooks/strict-enforce-agents.sh +39 -0
  57. package/packages/plugin-core/hooks/test-hook.sh +21 -0
  58. package/packages/plugin-core/hooks/unified-context7-enforcement.sh +38 -0
  59. package/packages/plugin-core/package.json +45 -0
  60. package/packages/plugin-core/plugin.json +387 -0
  61. package/packages/plugin-core/rules/agent-coordination.md +549 -0
  62. package/packages/plugin-core/rules/agent-mandatory.md +170 -0
  63. package/packages/plugin-core/rules/command-pipelines.md +208 -0
  64. package/packages/plugin-core/rules/context-optimization.md +176 -0
  65. package/packages/plugin-core/rules/context7-enforcement.md +327 -0
  66. package/packages/plugin-core/rules/datetime.md +122 -0
  67. package/packages/plugin-core/rules/definition-of-done.md +272 -0
  68. package/packages/plugin-core/rules/development-environments.md +19 -0
  69. package/packages/plugin-core/rules/development-workflow.md +198 -0
  70. package/packages/plugin-core/rules/framework-path-rules.md +180 -0
  71. package/packages/plugin-core/rules/frontmatter-operations.md +64 -0
  72. package/packages/plugin-core/rules/git-strategy.md +237 -0
  73. package/packages/plugin-core/rules/golden-rules.md +181 -0
  74. package/packages/plugin-core/rules/naming-conventions.md +111 -0
  75. package/packages/plugin-core/rules/no-pr-workflow.md +183 -0
  76. package/packages/plugin-core/rules/pipeline-mandatory.md +109 -0
  77. package/packages/plugin-core/rules/security-checklist.md +318 -0
  78. package/packages/plugin-core/rules/standard-patterns.md +197 -0
  79. package/packages/plugin-core/rules/strip-frontmatter.md +85 -0
  80. package/packages/plugin-core/rules/tdd.enforcement.md +103 -0
  81. package/packages/plugin-core/rules/use-ast-grep.md +113 -0
  82. package/packages/plugin-core/scripts/lib/datetime-utils.sh +254 -0
  83. package/packages/plugin-core/scripts/lib/frontmatter-utils.sh +294 -0
  84. package/packages/plugin-core/scripts/lib/github-utils.sh +221 -0
  85. package/packages/plugin-core/scripts/lib/logging-utils.sh +199 -0
  86. package/packages/plugin-core/scripts/lib/validation-utils.sh +339 -0
  87. package/packages/plugin-core/scripts/mcp/add.sh +7 -0
  88. package/packages/plugin-core/scripts/mcp/disable.sh +12 -0
  89. package/packages/plugin-core/scripts/mcp/enable.sh +12 -0
  90. package/packages/plugin-core/scripts/mcp/list.sh +7 -0
  91. package/packages/plugin-core/scripts/mcp/sync.sh +8 -0
  92. package/packages/plugin-data/README.md +315 -0
  93. package/packages/plugin-data/agents/airflow-orchestration-expert.md +158 -0
  94. package/packages/plugin-data/agents/kedro-pipeline-expert.md +304 -0
  95. package/packages/plugin-data/agents/langgraph-workflow-expert.md +530 -0
  96. package/packages/plugin-data/commands/airflow-dag-scaffold.md +413 -0
  97. package/packages/plugin-data/commands/kafka-pipeline-scaffold.md +503 -0
  98. package/packages/plugin-data/package.json +66 -0
  99. package/packages/plugin-data/plugin.json +294 -0
  100. package/packages/plugin-data/rules/data-quality-standards.md +373 -0
  101. package/packages/plugin-data/rules/etl-pipeline-standards.md +255 -0
  102. package/packages/plugin-data/scripts/examples/airflow-dag-example.py +245 -0
  103. package/packages/plugin-data/scripts/examples/dbt-transform-example.sql +238 -0
  104. package/packages/plugin-data/scripts/examples/kafka-streaming-example.py +257 -0
  105. package/packages/plugin-data/scripts/examples/pandas-etl-example.py +332 -0
  106. package/packages/plugin-databases/README.md +330 -0
  107. package/{autopm/.claude/agents/databases → packages/plugin-databases/agents}/bigquery-expert.md +24 -15
  108. package/{autopm/.claude/agents/databases → packages/plugin-databases/agents}/cosmosdb-expert.md +22 -15
  109. package/{autopm/.claude/agents/databases → packages/plugin-databases/agents}/mongodb-expert.md +24 -15
  110. package/{autopm/.claude/agents/databases → packages/plugin-databases/agents}/postgresql-expert.md +23 -15
  111. package/{autopm/.claude/agents/databases → packages/plugin-databases/agents}/redis-expert.md +29 -7
  112. package/packages/plugin-databases/commands/db-optimize.md +612 -0
  113. package/packages/plugin-databases/package.json +60 -0
  114. package/packages/plugin-databases/plugin.json +237 -0
  115. package/packages/plugin-databases/rules/database-management-strategy.md +146 -0
  116. package/packages/plugin-databases/rules/database-pipeline.md +316 -0
  117. package/packages/plugin-databases/scripts/examples/bigquery-cost-analyze.sh +160 -0
  118. package/packages/plugin-databases/scripts/examples/cosmosdb-ru-optimize.sh +163 -0
  119. package/packages/plugin-databases/scripts/examples/mongodb-shard-check.sh +120 -0
  120. package/packages/plugin-databases/scripts/examples/postgres-index-analyze.sh +95 -0
  121. package/packages/plugin-databases/scripts/examples/redis-cache-stats.sh +121 -0
  122. package/packages/plugin-devops/README.md +367 -0
  123. package/{autopm/.claude/agents/devops → packages/plugin-devops/agents}/github-operations-specialist.md +1 -1
  124. package/packages/plugin-devops/commands/ci-pipeline-create.md +581 -0
  125. package/packages/plugin-devops/commands/docker-optimize.md +493 -0
  126. package/packages/plugin-devops/hooks/pre-docker-build.js +472 -0
  127. package/packages/plugin-devops/package.json +61 -0
  128. package/packages/plugin-devops/plugin.json +302 -0
  129. package/packages/plugin-devops/rules/github-operations.md +92 -0
  130. package/packages/plugin-devops/scripts/examples/docker-build-multistage.sh +43 -0
  131. package/packages/plugin-devops/scripts/examples/docker-compose-validate.sh +74 -0
  132. package/packages/plugin-devops/scripts/examples/github-workflow-validate.sh +48 -0
  133. package/packages/plugin-devops/scripts/examples/prometheus-health-check.sh +58 -0
  134. package/packages/plugin-devops/scripts/examples/ssh-key-setup.sh +74 -0
  135. package/packages/plugin-frameworks/README.md +309 -0
  136. package/{autopm/.claude/agents/frameworks → packages/plugin-frameworks/agents}/e2e-test-engineer.md +219 -0
  137. package/{autopm/.claude/agents/frameworks → packages/plugin-frameworks/agents}/react-frontend-engineer.md +176 -0
  138. package/{autopm/.claude/agents/frameworks → packages/plugin-frameworks/agents}/tailwindcss-expert.md +251 -0
  139. package/packages/plugin-frameworks/commands/nextjs-optimize.md +692 -0
  140. package/packages/plugin-frameworks/commands/react-optimize.md +583 -0
  141. package/packages/plugin-frameworks/package.json +59 -0
  142. package/packages/plugin-frameworks/plugin.json +224 -0
  143. package/packages/plugin-frameworks/rules/performance-guidelines.md +403 -0
  144. package/packages/plugin-frameworks/scripts/examples/react-component-perf.sh +34 -0
  145. package/packages/plugin-frameworks/scripts/examples/tailwind-optimize.sh +44 -0
  146. package/packages/plugin-frameworks/scripts/examples/vue-composition-check.sh +41 -0
  147. package/packages/plugin-languages/README.md +333 -0
  148. package/packages/plugin-languages/commands/javascript-optimize.md +636 -0
  149. package/packages/plugin-languages/commands/nodejs-api-scaffold.md +341 -0
  150. package/packages/plugin-languages/commands/nodejs-optimize.md +689 -0
  151. package/packages/plugin-languages/commands/python-api-scaffold.md +261 -0
  152. package/packages/plugin-languages/commands/python-optimize.md +593 -0
  153. package/packages/plugin-languages/package.json +65 -0
  154. package/packages/plugin-languages/plugin.json +265 -0
  155. package/packages/plugin-languages/rules/code-quality-standards.md +496 -0
  156. package/packages/plugin-languages/rules/testing-standards.md +768 -0
  157. package/packages/plugin-languages/scripts/examples/bash-production-script.sh +520 -0
  158. package/packages/plugin-languages/scripts/examples/javascript-es6-patterns.js +291 -0
  159. package/packages/plugin-languages/scripts/examples/nodejs-async-iteration.js +360 -0
  160. package/packages/plugin-languages/scripts/examples/python-async-patterns.py +289 -0
  161. package/packages/plugin-languages/scripts/examples/typescript-patterns.ts +432 -0
  162. package/packages/plugin-ml/README.md +430 -0
  163. package/packages/plugin-ml/agents/automl-expert.md +326 -0
  164. package/packages/plugin-ml/agents/computer-vision-expert.md +550 -0
  165. package/packages/plugin-ml/agents/gradient-boosting-expert.md +455 -0
  166. package/packages/plugin-ml/agents/neural-network-architect.md +1228 -0
  167. package/packages/plugin-ml/agents/nlp-transformer-expert.md +584 -0
  168. package/packages/plugin-ml/agents/pytorch-expert.md +412 -0
  169. package/packages/plugin-ml/agents/reinforcement-learning-expert.md +2088 -0
  170. package/packages/plugin-ml/agents/scikit-learn-expert.md +228 -0
  171. package/packages/plugin-ml/agents/tensorflow-keras-expert.md +509 -0
  172. package/packages/plugin-ml/agents/time-series-expert.md +303 -0
  173. package/packages/plugin-ml/commands/ml-automl.md +572 -0
  174. package/packages/plugin-ml/commands/ml-train-optimize.md +657 -0
  175. package/packages/plugin-ml/package.json +52 -0
  176. package/packages/plugin-ml/plugin.json +338 -0
  177. package/packages/plugin-pm/README.md +368 -0
  178. package/packages/plugin-pm/claudeautopm-plugin-pm-2.0.0.tgz +0 -0
  179. package/packages/plugin-pm/commands/github/workflow-create.md +42 -0
  180. package/packages/plugin-pm/package.json +57 -0
  181. package/packages/plugin-pm/plugin.json +503 -0
  182. package/packages/plugin-testing/README.md +401 -0
  183. package/{autopm/.claude/agents/testing → packages/plugin-testing/agents}/frontend-testing-engineer.md +373 -0
  184. package/packages/plugin-testing/commands/jest-optimize.md +800 -0
  185. package/packages/plugin-testing/commands/playwright-optimize.md +887 -0
  186. package/packages/plugin-testing/commands/test-coverage.md +512 -0
  187. package/packages/plugin-testing/commands/test-performance.md +1041 -0
  188. package/packages/plugin-testing/commands/test-setup.md +414 -0
  189. package/packages/plugin-testing/package.json +40 -0
  190. package/packages/plugin-testing/plugin.json +197 -0
  191. package/packages/plugin-testing/rules/test-coverage-requirements.md +581 -0
  192. package/packages/plugin-testing/rules/testing-standards.md +529 -0
  193. package/packages/plugin-testing/scripts/examples/react-testing-example.test.jsx +460 -0
  194. package/packages/plugin-testing/scripts/examples/vitest-config-example.js +352 -0
  195. package/packages/plugin-testing/scripts/examples/vue-testing-example.test.js +586 -0
  196. package/scripts/publish-plugins.sh +166 -0
  197. package/autopm/.claude/agents/data/airflow-orchestration-expert.md +0 -52
  198. package/autopm/.claude/agents/data/kedro-pipeline-expert.md +0 -50
  199. package/autopm/.claude/agents/integration/message-queue-engineer.md +0 -794
  200. package/autopm/.claude/commands/ai/langgraph-workflow.md +0 -65
  201. package/autopm/.claude/commands/ai/openai-chat.md +0 -65
  202. package/autopm/.claude/commands/playwright/test-scaffold.md +0 -38
  203. package/autopm/.claude/commands/python/api-scaffold.md +0 -50
  204. package/autopm/.claude/commands/python/docs-query.md +0 -48
  205. package/autopm/.claude/commands/testing/prime.md +0 -314
  206. package/autopm/.claude/commands/testing/run.md +0 -125
  207. package/autopm/.claude/commands/ui/bootstrap-scaffold.md +0 -65
  208. package/autopm/.claude/rules/database-management-strategy.md +0 -17
  209. package/autopm/.claude/rules/database-pipeline.md +0 -94
  210. package/autopm/.claude/rules/ux-design-rules.md +0 -209
  211. package/autopm/.claude/rules/visual-testing.md +0 -223
  212. package/autopm/.claude/scripts/azure/README.md +0 -192
  213. package/autopm/.claude/scripts/azure/active-work.js +0 -524
  214. package/autopm/.claude/scripts/azure/active-work.sh +0 -20
  215. package/autopm/.claude/scripts/azure/blocked.js +0 -520
  216. package/autopm/.claude/scripts/azure/blocked.sh +0 -20
  217. package/autopm/.claude/scripts/azure/daily.js +0 -533
  218. package/autopm/.claude/scripts/azure/daily.sh +0 -20
  219. package/autopm/.claude/scripts/azure/dashboard.js +0 -970
  220. package/autopm/.claude/scripts/azure/dashboard.sh +0 -20
  221. package/autopm/.claude/scripts/azure/feature-list.js +0 -254
  222. package/autopm/.claude/scripts/azure/feature-list.sh +0 -20
  223. package/autopm/.claude/scripts/azure/feature-show.js +0 -7
  224. package/autopm/.claude/scripts/azure/feature-show.sh +0 -20
  225. package/autopm/.claude/scripts/azure/feature-status.js +0 -604
  226. package/autopm/.claude/scripts/azure/feature-status.sh +0 -20
  227. package/autopm/.claude/scripts/azure/help.js +0 -342
  228. package/autopm/.claude/scripts/azure/help.sh +0 -20
  229. package/autopm/.claude/scripts/azure/next-task.js +0 -508
  230. package/autopm/.claude/scripts/azure/next-task.sh +0 -20
  231. package/autopm/.claude/scripts/azure/search.js +0 -469
  232. package/autopm/.claude/scripts/azure/search.sh +0 -20
  233. package/autopm/.claude/scripts/azure/setup.js +0 -745
  234. package/autopm/.claude/scripts/azure/setup.sh +0 -20
  235. package/autopm/.claude/scripts/azure/sprint-report.js +0 -1012
  236. package/autopm/.claude/scripts/azure/sprint-report.sh +0 -20
  237. package/autopm/.claude/scripts/azure/sync.js +0 -563
  238. package/autopm/.claude/scripts/azure/sync.sh +0 -20
  239. package/autopm/.claude/scripts/azure/us-list.js +0 -210
  240. package/autopm/.claude/scripts/azure/us-list.sh +0 -20
  241. package/autopm/.claude/scripts/azure/us-status.js +0 -238
  242. package/autopm/.claude/scripts/azure/us-status.sh +0 -20
  243. package/autopm/.claude/scripts/azure/validate.js +0 -626
  244. package/autopm/.claude/scripts/azure/validate.sh +0 -20
  245. package/autopm/.claude/scripts/azure/wrapper-template.sh +0 -20
  246. package/autopm/.claude/scripts/github/dependency-tracker.js +0 -554
  247. package/autopm/.claude/scripts/github/dependency-validator.js +0 -545
  248. package/autopm/.claude/scripts/github/dependency-visualizer.js +0 -477
  249. package/bin/node/azure-feature-show.js +0 -7
  250. /package/{autopm/.claude/agents/cloud → packages/plugin-ai/agents}/gemini-api-expert.md +0 -0
  251. /package/{autopm/.claude/agents/data → packages/plugin-ai/agents}/langgraph-workflow-expert.md +0 -0
  252. /package/{autopm/.claude/agents/cloud → packages/plugin-ai/agents}/openai-python-expert.md +0 -0
  253. /package/{autopm/.claude/agents/cloud → packages/plugin-cloud/agents}/README.md +0 -0
  254. /package/{autopm/.claude/agents/cloud → packages/plugin-cloud/agents}/aws-cloud-architect.md +0 -0
  255. /package/{autopm/.claude/agents/cloud → packages/plugin-cloud/agents}/azure-cloud-architect.md +0 -0
  256. /package/{autopm/.claude/agents/cloud → packages/plugin-cloud/agents}/gcp-cloud-architect.md +0 -0
  257. /package/{autopm/.claude/agents/cloud → packages/plugin-cloud/agents}/gcp-cloud-functions-engineer.md +0 -0
  258. /package/{autopm/.claude/agents/cloud → packages/plugin-cloud/agents}/kubernetes-orchestrator.md +0 -0
  259. /package/{autopm/.claude/agents/cloud → packages/plugin-cloud/agents}/terraform-infrastructure-expert.md +0 -0
  260. /package/{autopm/.claude/commands/cloud → packages/plugin-cloud/commands}/infra-deploy.md +0 -0
  261. /package/{autopm/.claude/commands/kubernetes/deploy.md → packages/plugin-cloud/commands/k8s-deploy.md} +0 -0
  262. /package/{autopm/.claude/commands/infrastructure → packages/plugin-cloud/commands}/ssh-security.md +0 -0
  263. /package/{autopm/.claude/commands/infrastructure → packages/plugin-cloud/commands}/traefik-setup.md +0 -0
  264. /package/{autopm/.claude → packages/plugin-cloud}/rules/infrastructure-pipeline.md +0 -0
  265. /package/{autopm/.claude → packages/plugin-core}/agents/core/agent-manager.md +0 -0
  266. /package/{autopm/.claude → packages/plugin-core}/agents/core/code-analyzer.md +0 -0
  267. /package/{autopm/.claude → packages/plugin-core}/agents/core/file-analyzer.md +0 -0
  268. /package/{autopm/.claude → packages/plugin-core}/agents/core/test-runner.md +0 -0
  269. /package/{autopm/.claude → packages/plugin-core}/rules/ai-integration-patterns.md +0 -0
  270. /package/{autopm/.claude → packages/plugin-core}/rules/performance-guidelines.md +0 -0
  271. /package/{autopm/.claude/agents/databases → packages/plugin-databases/agents}/README.md +0 -0
  272. /package/{autopm/.claude/agents/devops → packages/plugin-devops/agents}/README.md +0 -0
  273. /package/{autopm/.claude/agents/devops → packages/plugin-devops/agents}/azure-devops-specialist.md +0 -0
  274. /package/{autopm/.claude/agents/devops → packages/plugin-devops/agents}/docker-containerization-expert.md +0 -0
  275. /package/{autopm/.claude/agents/devops → packages/plugin-devops/agents}/mcp-context-manager.md +0 -0
  276. /package/{autopm/.claude/agents/devops → packages/plugin-devops/agents}/observability-engineer.md +0 -0
  277. /package/{autopm/.claude/agents/devops → packages/plugin-devops/agents}/ssh-operations-expert.md +0 -0
  278. /package/{autopm/.claude/agents/devops → packages/plugin-devops/agents}/traefik-proxy-expert.md +0 -0
  279. /package/{autopm/.claude/commands/github → packages/plugin-devops/commands}/workflow-create.md +0 -0
  280. /package/{autopm/.claude → packages/plugin-devops}/rules/ci-cd-kubernetes-strategy.md +0 -0
  281. /package/{autopm/.claude → packages/plugin-devops}/rules/devops-troubleshooting-playbook.md +0 -0
  282. /package/{autopm/.claude → packages/plugin-devops}/rules/docker-first-development.md +0 -0
  283. /package/{autopm/.claude/agents/frameworks → packages/plugin-frameworks/agents}/README.md +0 -0
  284. /package/{autopm/.claude/agents/frameworks → packages/plugin-frameworks/agents}/nats-messaging-expert.md +0 -0
  285. /package/{autopm/.claude/agents/frameworks → packages/plugin-frameworks/agents}/react-ui-expert.md +0 -0
  286. /package/{autopm/.claude/agents/frameworks → packages/plugin-frameworks/agents}/ux-design-expert.md +0 -0
  287. /package/{autopm/.claude/commands/react → packages/plugin-frameworks/commands}/app-scaffold.md +0 -0
  288. /package/{autopm/.claude/commands/ui → packages/plugin-frameworks/commands}/tailwind-system.md +0 -0
  289. /package/{autopm/.claude → packages/plugin-frameworks}/rules/ui-development-standards.md +0 -0
  290. /package/{autopm/.claude → packages/plugin-frameworks}/rules/ui-framework-rules.md +0 -0
  291. /package/{autopm/.claude/agents/languages → packages/plugin-languages/agents}/README.md +0 -0
  292. /package/{autopm/.claude/agents/languages → packages/plugin-languages/agents}/bash-scripting-expert.md +0 -0
  293. /package/{autopm/.claude/agents/languages → packages/plugin-languages/agents}/javascript-frontend-engineer.md +0 -0
  294. /package/{autopm/.claude/agents/languages → packages/plugin-languages/agents}/nodejs-backend-engineer.md +0 -0
  295. /package/{autopm/.claude/agents/languages → packages/plugin-languages/agents}/python-backend-engineer.md +0 -0
  296. /package/{autopm/.claude/agents/languages → packages/plugin-languages/agents}/python-backend-expert.md +0 -0
  297. /package/{autopm/.claude → packages/plugin-pm}/commands/azure/COMMANDS.md +0 -0
  298. /package/{autopm/.claude → packages/plugin-pm}/commands/azure/COMMAND_MAPPING.md +0 -0
  299. /package/{autopm/.claude → packages/plugin-pm}/commands/azure/INTEGRATION_FIX.md +0 -0
  300. /package/{autopm/.claude → packages/plugin-pm}/commands/azure/README.md +0 -0
  301. /package/{autopm/.claude → packages/plugin-pm}/commands/azure/active-work.md +0 -0
  302. /package/{autopm/.claude → packages/plugin-pm}/commands/azure/aliases.md +0 -0
  303. /package/{autopm/.claude → packages/plugin-pm}/commands/azure/blocked-items.md +0 -0
  304. /package/{autopm/.claude → packages/plugin-pm}/commands/azure/clean.md +0 -0
  305. /package/{autopm/.claude → packages/plugin-pm}/commands/azure/docs-query.md +0 -0
  306. /package/{autopm/.claude → packages/plugin-pm}/commands/azure/feature-decompose.md +0 -0
  307. /package/{autopm/.claude → packages/plugin-pm}/commands/azure/feature-list.md +0 -0
  308. /package/{autopm/.claude → packages/plugin-pm}/commands/azure/feature-new.md +0 -0
  309. /package/{autopm/.claude → packages/plugin-pm}/commands/azure/feature-show.md +0 -0
  310. /package/{autopm/.claude → packages/plugin-pm}/commands/azure/feature-start.md +0 -0
  311. /package/{autopm/.claude → packages/plugin-pm}/commands/azure/fix-integration-example.md +0 -0
  312. /package/{autopm/.claude → packages/plugin-pm}/commands/azure/help.md +0 -0
  313. /package/{autopm/.claude → packages/plugin-pm}/commands/azure/import-us.md +0 -0
  314. /package/{autopm/.claude → packages/plugin-pm}/commands/azure/init.md +0 -0
  315. /package/{autopm/.claude → packages/plugin-pm}/commands/azure/next-task.md +0 -0
  316. /package/{autopm/.claude → packages/plugin-pm}/commands/azure/search.md +0 -0
  317. /package/{autopm/.claude → packages/plugin-pm}/commands/azure/sprint-status.md +0 -0
  318. /package/{autopm/.claude → packages/plugin-pm}/commands/azure/standup.md +0 -0
  319. /package/{autopm/.claude → packages/plugin-pm}/commands/azure/sync-all.md +0 -0
  320. /package/{autopm/.claude → packages/plugin-pm}/commands/azure/task-analyze.md +0 -0
  321. /package/{autopm/.claude → packages/plugin-pm}/commands/azure/task-close.md +0 -0
  322. /package/{autopm/.claude → packages/plugin-pm}/commands/azure/task-edit.md +0 -0
  323. /package/{autopm/.claude → packages/plugin-pm}/commands/azure/task-list.md +0 -0
  324. /package/{autopm/.claude → packages/plugin-pm}/commands/azure/task-new.md +0 -0
  325. /package/{autopm/.claude → packages/plugin-pm}/commands/azure/task-reopen.md +0 -0
  326. /package/{autopm/.claude → packages/plugin-pm}/commands/azure/task-show.md +0 -0
  327. /package/{autopm/.claude → packages/plugin-pm}/commands/azure/task-start.md +0 -0
  328. /package/{autopm/.claude → packages/plugin-pm}/commands/azure/task-status.md +0 -0
  329. /package/{autopm/.claude → packages/plugin-pm}/commands/azure/task-sync.md +0 -0
  330. /package/{autopm/.claude → packages/plugin-pm}/commands/azure/us-edit.md +0 -0
  331. /package/{autopm/.claude → packages/plugin-pm}/commands/azure/us-list.md +0 -0
  332. /package/{autopm/.claude → packages/plugin-pm}/commands/azure/us-new.md +0 -0
  333. /package/{autopm/.claude → packages/plugin-pm}/commands/azure/us-parse.md +0 -0
  334. /package/{autopm/.claude → packages/plugin-pm}/commands/azure/us-show.md +0 -0
  335. /package/{autopm/.claude → packages/plugin-pm}/commands/azure/us-status.md +0 -0
  336. /package/{autopm/.claude → packages/plugin-pm}/commands/azure/validate.md +0 -0
  337. /package/{autopm/.claude → packages/plugin-pm}/commands/azure/work-item-sync.md +0 -0
  338. /package/{autopm/.claude/commands/pm → packages/plugin-pm/commands}/blocked.md +0 -0
  339. /package/{autopm/.claude/commands/pm → packages/plugin-pm/commands}/clean.md +0 -0
  340. /package/{autopm/.claude/commands/pm → packages/plugin-pm/commands}/context-create.md +0 -0
  341. /package/{autopm/.claude/commands/pm → packages/plugin-pm/commands}/context-prime.md +0 -0
  342. /package/{autopm/.claude/commands/pm → packages/plugin-pm/commands}/context-update.md +0 -0
  343. /package/{autopm/.claude/commands/pm → packages/plugin-pm/commands}/context.md +0 -0
  344. /package/{autopm/.claude/commands/pm → packages/plugin-pm/commands}/epic-close.md +0 -0
  345. /package/{autopm/.claude/commands/pm → packages/plugin-pm/commands}/epic-decompose.md +0 -0
  346. /package/{autopm/.claude/commands/pm → packages/plugin-pm/commands}/epic-edit.md +0 -0
  347. /package/{autopm/.claude/commands/pm → packages/plugin-pm/commands}/epic-list.md +0 -0
  348. /package/{autopm/.claude/commands/pm → packages/plugin-pm/commands}/epic-merge.md +0 -0
  349. /package/{autopm/.claude/commands/pm → packages/plugin-pm/commands}/epic-oneshot.md +0 -0
  350. /package/{autopm/.claude/commands/pm → packages/plugin-pm/commands}/epic-refresh.md +0 -0
  351. /package/{autopm/.claude/commands/pm → packages/plugin-pm/commands}/epic-show.md +0 -0
  352. /package/{autopm/.claude/commands/pm → packages/plugin-pm/commands}/epic-split.md +0 -0
  353. /package/{autopm/.claude/commands/pm → packages/plugin-pm/commands}/epic-start.md +0 -0
  354. /package/{autopm/.claude/commands/pm → packages/plugin-pm/commands}/epic-status.md +0 -0
  355. /package/{autopm/.claude/commands/pm → packages/plugin-pm/commands}/epic-sync-modular.md +0 -0
  356. /package/{autopm/.claude/commands/pm → packages/plugin-pm/commands}/epic-sync-original.md +0 -0
  357. /package/{autopm/.claude/commands/pm → packages/plugin-pm/commands}/epic-sync.md +0 -0
  358. /package/{autopm/.claude/commands/pm → packages/plugin-pm/commands}/help.md +0 -0
  359. /package/{autopm/.claude/commands/pm → packages/plugin-pm/commands}/import.md +0 -0
  360. /package/{autopm/.claude/commands/pm → packages/plugin-pm/commands}/in-progress.md +0 -0
  361. /package/{autopm/.claude/commands/pm → packages/plugin-pm/commands}/init.md +0 -0
  362. /package/{autopm/.claude/commands/pm → packages/plugin-pm/commands}/issue-analyze.md +0 -0
  363. /package/{autopm/.claude/commands/pm → packages/plugin-pm/commands}/issue-close.md +0 -0
  364. /package/{autopm/.claude/commands/pm → packages/plugin-pm/commands}/issue-edit.md +0 -0
  365. /package/{autopm/.claude/commands/pm → packages/plugin-pm/commands}/issue-reopen.md +0 -0
  366. /package/{autopm/.claude/commands/pm → packages/plugin-pm/commands}/issue-show.md +0 -0
  367. /package/{autopm/.claude/commands/pm → packages/plugin-pm/commands}/issue-start.md +0 -0
  368. /package/{autopm/.claude/commands/pm → packages/plugin-pm/commands}/issue-status.md +0 -0
  369. /package/{autopm/.claude/commands/pm → packages/plugin-pm/commands}/issue-sync.md +0 -0
  370. /package/{autopm/.claude/commands/pm → packages/plugin-pm/commands}/next.md +0 -0
  371. /package/{autopm/.claude/commands/pm → packages/plugin-pm/commands}/prd-edit.md +0 -0
  372. /package/{autopm/.claude/commands/pm → packages/plugin-pm/commands}/prd-list.md +0 -0
  373. /package/{autopm/.claude/commands/pm → packages/plugin-pm/commands}/prd-new.md +0 -0
  374. /package/{autopm/.claude/commands/pm → packages/plugin-pm/commands}/prd-parse.md +0 -0
  375. /package/{autopm/.claude/commands/pm → packages/plugin-pm/commands}/prd-status.md +0 -0
  376. /package/{autopm/.claude/commands/pm → packages/plugin-pm/commands}/search.md +0 -0
  377. /package/{autopm/.claude/commands/pm → packages/plugin-pm/commands}/standup.md +0 -0
  378. /package/{autopm/.claude/commands/pm → packages/plugin-pm/commands}/status.md +0 -0
  379. /package/{autopm/.claude/commands/pm → packages/plugin-pm/commands}/sync.md +0 -0
  380. /package/{autopm/.claude/commands/pm → packages/plugin-pm/commands}/test-reference-update.md +0 -0
  381. /package/{autopm/.claude/commands/pm → packages/plugin-pm/commands}/validate.md +0 -0
  382. /package/{autopm/.claude/commands/pm → packages/plugin-pm/commands}/what-next.md +0 -0
  383. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/analytics.js +0 -0
  384. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/blocked.js +0 -0
  385. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/blocked.sh +0 -0
  386. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/clean.js +0 -0
  387. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/context-create.js +0 -0
  388. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/context-prime.js +0 -0
  389. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/context-update.js +0 -0
  390. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/context.js +0 -0
  391. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/epic-close.js +0 -0
  392. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/epic-edit.js +0 -0
  393. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/epic-list.js +0 -0
  394. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/epic-list.sh +0 -0
  395. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/epic-show.js +0 -0
  396. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/epic-show.sh +0 -0
  397. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/epic-split.js +0 -0
  398. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/epic-start/epic-start.js +0 -0
  399. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/epic-start/epic-start.sh +0 -0
  400. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/epic-status.js +0 -0
  401. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/epic-status.sh +0 -0
  402. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/epic-sync/README.md +0 -0
  403. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/epic-sync/create-epic-issue.sh +0 -0
  404. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/epic-sync/create-task-issues.sh +0 -0
  405. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/epic-sync/update-epic-file.sh +0 -0
  406. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/epic-sync/update-references.sh +0 -0
  407. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/epic-sync.sh +0 -0
  408. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/help.js +0 -0
  409. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/help.sh +0 -0
  410. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/in-progress.js +0 -0
  411. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/in-progress.sh +0 -0
  412. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/init.js +0 -0
  413. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/init.sh +0 -0
  414. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/issue-close.js +0 -0
  415. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/issue-edit.js +0 -0
  416. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/issue-show.js +0 -0
  417. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/issue-start.js +0 -0
  418. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/issue-sync/format-comment.sh +0 -0
  419. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/issue-sync/gather-updates.sh +0 -0
  420. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/issue-sync/post-comment.sh +0 -0
  421. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/issue-sync/preflight-validation.sh +0 -0
  422. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/issue-sync/update-frontmatter.sh +0 -0
  423. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/lib/README.md +0 -0
  424. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/lib/epic-discovery.js +0 -0
  425. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/lib/logger.js +0 -0
  426. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/next.js +0 -0
  427. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/next.sh +0 -0
  428. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/optimize.js +0 -0
  429. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/pr-create.js +0 -0
  430. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/pr-list.js +0 -0
  431. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/prd-list.js +0 -0
  432. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/prd-list.sh +0 -0
  433. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/prd-new.js +0 -0
  434. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/prd-parse.js +0 -0
  435. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/prd-status.js +0 -0
  436. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/prd-status.sh +0 -0
  437. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/release.js +0 -0
  438. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/search.js +0 -0
  439. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/search.sh +0 -0
  440. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/standup.js +0 -0
  441. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/standup.sh +0 -0
  442. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/status.js +0 -0
  443. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/status.sh +0 -0
  444. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/sync-batch.js +0 -0
  445. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/sync.js +0 -0
  446. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/template-list.js +0 -0
  447. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/template-new.js +0 -0
  448. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/validate.js +0 -0
  449. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/validate.sh +0 -0
  450. /package/{autopm/.claude → packages/plugin-pm}/scripts/pm/what-next.js +0 -0
package/packages/plugin-ai/commands/openai-optimize.md
@@ -0,0 +1,863 @@

# openai:optimize

Optimize OpenAI API usage with Context7-verified async operations, batching, caching, and rate limiting strategies.

## Description

Comprehensive OpenAI API optimization following official best practices:
- Async/await for concurrent requests
- Batch processing for bulk operations
- Response caching strategies
- Rate limiting and retry logic
- Token usage optimization
- Streaming responses
- Function calling optimization

## Required Documentation Access

**MANDATORY:** Before optimization, query Context7 for OpenAI best practices:

**Documentation Queries:**
- `mcp://context7/openai/async-operations` - AsyncOpenAI client patterns
- `mcp://context7/openai/batching` - Batch API for bulk processing
- `mcp://context7/openai/caching` - Response caching strategies
- `mcp://context7/openai/rate-limiting` - Rate limit handling and backoff
- `mcp://context7/openai/streaming` - Streaming response optimization
- `mcp://context7/openai/function-calling` - Function calling best practices

**Why This is Required:**
- Ensures optimization follows official OpenAI documentation
- Applies proven async and batching patterns
- Validates rate limiting strategies
- Prevents API quota exhaustion
- Optimizes token usage and costs

## Usage

```bash
/openai:optimize [options]
```

## Options

- `--scope <async|batching|caching|rate-limiting|all>` - Optimization scope (default: all)
- `--analyze-only` - Analyze without applying changes
- `--output <file>` - Write optimization report
- `--model <gpt-4|gpt-3.5-turbo>` - Target model for optimization

## Examples

### Full OpenAI Optimization
```bash
/openai:optimize
```

### Async Operations Only
```bash
/openai:optimize --scope async
```

### Batch Processing Optimization
```bash
/openai:optimize --scope batching
```

### Analyze Current Usage
```bash
/openai:optimize --analyze-only --output openai-report.md
```

## Optimization Categories

### 1. Async Operations (Context7-Verified)

**Pattern from Context7 (/openai/openai-python):**

#### AsyncOpenAI Client
```python
import asyncio
from openai import AsyncOpenAI

async def main():
    client = AsyncOpenAI()

    # Async streaming
    stream = await client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Explain quantum computing"}],
        stream=True,
    )

    async for chunk in stream:
        if chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="", flush=True)

    # Non-streaming async
    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "What is Python?"}],
    )
    print(response.choices[0].message.content)

asyncio.run(main())
```

**Benefits:**
- Non-blocking I/O operations
- Concurrent request processing
- Better resource utilization

**Performance Impact:**
- Sequential requests: 10 requests × 2s = 20s
- Concurrent requests: bounded by the slowest request, ≈ 2s (10x faster)

#### Concurrent Requests Pattern
```python
import asyncio
from openai import AsyncOpenAI

async def process_batch(prompts: list[str]) -> list[str]:
    client = AsyncOpenAI()

    async def get_completion(prompt: str) -> str:
        response = await client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    # Process all prompts concurrently
    tasks = [get_completion(prompt) for prompt in prompts]
    results = await asyncio.gather(*tasks)

    return results

# Usage
prompts = [
    "Summarize machine learning",
    "Explain neural networks",
    "What is deep learning?"
]

results = asyncio.run(process_batch(prompts))
```

**Performance Impact:**
- 3 sequential requests: 6 seconds
- 3 concurrent requests: 2 seconds (3x faster); a concurrency-capped variant is sketched below

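The pattern above fires every request at once; for larger batches it is common to cap the number of in-flight requests so a burst does not trip rate limits. A minimal sketch using `asyncio.Semaphore` (the cap of 10 and the `gpt-4o-mini` model are illustrative choices, not part of the original command):

```python
import asyncio
from openai import AsyncOpenAI

async def process_batch_capped(prompts: list[str], max_concurrency: int = 10) -> list[str]:
    """Fan out requests, but keep at most `max_concurrency` in flight."""
    client = AsyncOpenAI()
    semaphore = asyncio.Semaphore(max_concurrency)

    async def get_completion(prompt: str) -> str:
        async with semaphore:  # blocks while the concurrency cap is reached
            response = await client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[{"role": "user", "content": prompt}],
            )
            return response.choices[0].message.content

    return await asyncio.gather(*(get_completion(p) for p in prompts))

# Usage: 100 prompts, never more than 10 concurrent API calls
# results = asyncio.run(process_batch_capped([f"Question {i}" for i in range(100)]))
```
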
### 2. Batch Processing (Context7-Verified)

**Pattern from Context7 (/openai/openai-python):**

#### Create Batch Job
```python
import json
from openai import OpenAI

client = OpenAI()

# Create JSONL file with batch requests
batch_requests = [
    {
        "custom_id": "request-1",
        "method": "POST",
        "url": "/v1/chat/completions",
        "body": {
            "model": "gpt-4o-mini",
            "messages": [{"role": "user", "content": "Explain Python"}],
            "max_tokens": 1000
        }
    },
    {
        "custom_id": "request-2",
        "method": "POST",
        "url": "/v1/chat/completions",
        "body": {
            "model": "gpt-4o-mini",
            "messages": [{"role": "user", "content": "What is JavaScript?"}],
            "max_tokens": 1000
        }
    }
]

# Save to JSONL file
with open("batch_requests.jsonl", "w") as f:
    for req in batch_requests:
        f.write(json.dumps(req) + "\n")

# Upload file
with open("batch_requests.jsonl", "rb") as f:
    batch_input_file = client.files.create(
        file=f,
        purpose="batch"
    )

# Create batch
batch = client.batches.create(
    input_file_id=batch_input_file.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",
    metadata={"description": "Daily processing job"},
)

print(f"Batch ID: {batch.id}")
print(f"Status: {batch.status}")
```

**Benefits:**
- 50% cost reduction compared to synchronous API
- Automatic retries and error handling
- No rate limit concerns
- 24-hour processing window

**Performance Impact:**
- Synchronous: 1,000 requests × 2s = 2,000s (~33 minutes)
- Batch API: 1,000 requests processed within 24h, 50% cheaper

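Between creating a batch and reading its results there is normally a wait. A small polling helper ties the two steps together; this is a sketch, not part of the original file, and the 60-second interval and the set of terminal statuses are assumptions:

```python
import time
from openai import OpenAI

client = OpenAI()

def wait_for_batch(batch_id: str, poll_interval: float = 60.0):
    """Poll the Batch API until the job reaches a terminal state."""
    terminal = {"completed", "failed", "expired", "cancelled"}
    while True:
        batch = client.batches.retrieve(batch_id)
        print(f"{batch.id}: {batch.status}")
        if batch.status in terminal:
            return batch
        time.sleep(poll_interval)

# Usage
# finished = wait_for_batch(batch.id)
```
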
#### Monitor Batch Status
```python
# Retrieve batch status
batch = client.batches.retrieve("batch-abc123")

print(f"Total requests: {batch.request_counts.total}")
print(f"Completed: {batch.request_counts.completed}")
print(f"Failed: {batch.request_counts.failed}")
print(f"Status: {batch.status}")

# List all batches
batches = client.batches.list(limit=10)
for b in batches.data:
    print(f"{b.id}: {b.status}")

# Cancel batch if needed
if batch.status == "in_progress":
    cancelled = client.batches.cancel("batch-abc123")
    print(f"Cancelled: {cancelled.status}")
```

#### Retrieve Batch Results
```python
# Download results file
if batch.status == "completed":
    result_file_id = batch.output_file_id

    # Download file content
    file_response = client.files.content(result_file_id)

    # Parse JSONL results
    results = []
    for line in file_response.text.strip().split("\n"):
        result = json.loads(line)
        results.append(result)

    # Process results
    for result in results:
        custom_id = result["custom_id"]
        response = result["response"]
        content = response["body"]["choices"][0]["message"]["content"]
        print(f"{custom_id}: {content[:100]}...")
```

### 3. Response Caching (Context7-Verified)

**Pattern from Context7:**

#### In-Memory Cache
```python
from functools import lru_cache
from openai import OpenAI

client = OpenAI()

@lru_cache(maxsize=1000)
def get_cached_completion(prompt: str, model: str = "gpt-4o-mini") -> str:
    """
    Cache OpenAI completions using LRU cache.
    Identical prompts return cached results instantly.
    """
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Usage
result1 = get_cached_completion("Explain Python")  # API call
result2 = get_cached_completion("Explain Python")  # Cached (instant)
```

**Performance Impact:**
- First call: 2 seconds (API request)
- Cached calls: <1ms (1000x faster)

#### Redis Cache for Production
```python
import hashlib

import redis
from openai import OpenAI

client = OpenAI()
redis_client = redis.Redis(host='localhost', port=6379, db=0)

def get_cache_key(prompt: str, model: str) -> str:
    """Generate consistent cache key."""
    content = f"{prompt}:{model}"
    return f"openai:{hashlib.sha256(content.encode()).hexdigest()}"

def get_cached_completion_redis(
    prompt: str,
    model: str = "gpt-4o-mini",
    ttl: int = 3600  # 1 hour
) -> str:
    """
    Cache completions in Redis with TTL.
    """
    cache_key = get_cache_key(prompt, model)

    # Check cache
    cached = redis_client.get(cache_key)
    if cached:
        return cached.decode('utf-8')

    # API call
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    result = response.choices[0].message.content

    # Store in cache
    redis_client.setex(cache_key, ttl, result)

    return result

# Usage
result = get_cached_completion_redis("What is AI?")  # API call or cached
```

**Benefits:**
- Persistent cache across application restarts
- TTL for automatic expiration
- Shared cache across multiple servers
- 99.9% latency reduction for cached queries

### 4. Rate Limiting and Retry Logic (Context7-Verified)

**Pattern from Context7:**

#### Exponential Backoff with Tenacity
```python
from tenacity import (
    retry,
    stop_after_attempt,
    wait_exponential,
    retry_if_exception_type
)
from openai import OpenAI, RateLimitError, APIError

client = OpenAI()

@retry(
    retry=retry_if_exception_type((RateLimitError, APIError)),
    wait=wait_exponential(multiplier=1, min=4, max=60),
    stop=stop_after_attempt(5)
)
def get_completion_with_retry(prompt: str) -> str:
    """
    Automatically retry on rate limit errors with exponential backoff.

    Waits grow exponentially between attempts, bounded to 4-60 seconds,
    for at most 5 attempts.
    """
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Usage
try:
    result = get_completion_with_retry("Explain machine learning")
    print(result)
except Exception as e:
    print(f"Failed after 5 attempts: {e}")
```

**Benefits:**
- Automatic retry on transient errors
- Exponential backoff prevents API hammering
- Configurable retry attempts
- 95% success rate even under rate limits

#### Rate Limiter with Token Bucket
```python
import time
from threading import Lock
from openai import OpenAI

class RateLimiter:
    """
    Token bucket rate limiter for OpenAI API.
    """
    def __init__(self, requests_per_minute: int = 60):
        self.capacity = requests_per_minute
        self.tokens = requests_per_minute
        self.fill_rate = requests_per_minute / 60.0  # tokens per second
        self.last_update = time.time()
        self.lock = Lock()

    def acquire(self) -> None:
        """Wait if necessary to acquire a token."""
        with self.lock:
            now = time.time()
            elapsed = now - self.last_update

            # Refill tokens
            self.tokens = min(
                self.capacity,
                self.tokens + elapsed * self.fill_rate
            )
            self.last_update = now

            # Wait if no tokens available
            if self.tokens < 1:
                wait_time = (1 - self.tokens) / self.fill_rate
                time.sleep(wait_time)
                self.tokens = 0
            else:
                self.tokens -= 1

# Usage
client = OpenAI()
limiter = RateLimiter(requests_per_minute=60)

def get_rate_limited_completion(prompt: str) -> str:
    limiter.acquire()  # Wait if rate limit reached

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Process many requests without hitting rate limits
prompts = ["Question " + str(i) for i in range(100)]
for prompt in prompts:
    result = get_rate_limited_completion(prompt)
    print(f"Processed: {prompt}")
```

**Performance Impact:**
- Without limiter: 429 errors, retries, delays
- With limiter: Smooth processing, 0 errors

### 5. Streaming Optimization (Context7-Verified)

**Pattern from Context7 (/openai/openai-python):**

#### Streaming Responses
```python
from openai import OpenAI

client = OpenAI()

def stream_completion(prompt: str) -> None:
    """
    Stream response chunks for better UX.
    Users see partial results immediately.
    """
    stream = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )

    print("Response: ", end="", flush=True)
    for chunk in stream:
        if chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="", flush=True)
    print()  # New line

# Usage
stream_completion("Write a long essay about AI")
```

**Benefits:**
- Time to first token: ~500ms vs 5s for full response
- Better perceived performance
- Progressive rendering
- Lower latency for user experience

#### Async Streaming
```python
import asyncio
from openai import AsyncOpenAI

async def async_stream_completion(prompt: str) -> None:
    client = AsyncOpenAI()

    stream = await client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )

    print("Response: ", end="", flush=True)
    async for chunk in stream:
        if chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="", flush=True)
    print()

# Usage
asyncio.run(async_stream_completion("Explain quantum computing"))
```

### 6. Token Optimization (Context7-Verified)

**Pattern from Context7:**

#### Token Counting
```python
import tiktoken

def count_tokens(text: str, model: str = "gpt-4o") -> int:
    """Count tokens for a given text and model."""
    encoding = tiktoken.encoding_for_model(model)
    return len(encoding.encode(text))

def optimize_prompt(prompt: str, max_tokens: int = 4000) -> str:
    """Truncate prompt to fit within token limit."""
    tokens = count_tokens(prompt)

    if tokens <= max_tokens:
        return prompt

    # Truncate to fit
    encoding = tiktoken.encoding_for_model("gpt-4o")
    encoded = encoding.encode(prompt)
    truncated = encoding.decode(encoded[:max_tokens])

    return truncated

# Usage
long_prompt = "..." * 10000
optimized = optimize_prompt(long_prompt, max_tokens=4000)
print(f"Original tokens: {count_tokens(long_prompt)}")
print(f"Optimized tokens: {count_tokens(optimized)}")
```

**Cost Impact:**
- GPT-4o: $5.00 per 1M input tokens
- Optimizing 10,000 requests from 8K → 4K tokens
- Savings: $200 per day (see the worked calculation below)

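As a sanity check on the savings figure above, a back-of-the-envelope calculation using only the numbers quoted in this document (10,000 requests/day, 8K → 4K input tokens, $5.00 per 1M input tokens):

```python
# Assumed figures, taken from the Cost Impact bullets above
requests_per_day = 10_000
tokens_saved_per_request = 8_000 - 4_000      # 4,000 input tokens per request
price_per_token = 5.00 / 1_000_000            # $5.00 per 1M input tokens

daily_savings = requests_per_day * tokens_saved_per_request * price_per_token
print(f"${daily_savings:,.2f} per day")       # $200.00 per day
```
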
#### Response Format Optimization
```python
import json
from openai import OpenAI

client = OpenAI()

def get_structured_output(prompt: str) -> dict:
    """
    Use structured outputs to reduce token usage.
    JSON mode is more token-efficient than prose.
    """
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are a helpful assistant. Respond in JSON format."},
            {"role": "user", "content": prompt}
        ],
        response_format={"type": "json_object"},
    )

    return json.loads(response.choices[0].message.content)

# Usage
result = get_structured_output("List 3 programming languages with their use cases")
# Returns: {"languages": [{"name": "Python", "use_case": "..."}, ...]}
```

**Token Savings:** 30-50% compared to prose format

### 7. Function Calling Optimization (Context7-Verified)

**Pattern from Context7:**

#### Efficient Function Definitions
```python
import json
from openai import OpenAI

client = OpenAI()

# Define functions concisely
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string", "description": "City name"},
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
                },
                "required": ["location"]
            }
        }
    }
]

def call_function_optimized(prompt: str) -> str:
    """Use function calling with minimal token overhead."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        tools=tools,
        tool_choice="auto",  # Let model decide when to call
    )

    message = response.choices[0].message

    if message.tool_calls:
        # Function was called
        tool_call = message.tool_calls[0]
        function_args = json.loads(tool_call.function.arguments)
        return f"Function called: {tool_call.function.name} with {function_args}"
    else:
        # Direct response
        return message.content

# Usage
result = call_function_optimized("What's the weather in London?")
```

**Benefits:**
- Structured outputs without parsing
- Reduced prompt engineering
- Type-safe function calls
- 20-40% token savings vs prompt-based extraction

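The example above stops after detecting the tool call. Completing the round trip means executing the function locally and sending its result back as a `tool` message. A sketch of that second step, reusing `client`, `tools`, and `json` from the block above; the `fetch_weather` helper is invented purely for illustration:

```python
def fetch_weather(location: str, unit: str = "celsius") -> str:
    """Placeholder for a real weather lookup."""
    return f"18 degrees {unit} and cloudy in {location}"

def answer_with_tools(prompt: str) -> str:
    messages = [{"role": "user", "content": prompt}]
    response = client.chat.completions.create(
        model="gpt-4o-mini", messages=messages, tools=tools, tool_choice="auto",
    )
    message = response.choices[0].message

    if not message.tool_calls:
        return message.content

    # Append the assistant's tool-call message, then one "tool" result per call
    messages.append(message)
    for tool_call in message.tool_calls:
        args = json.loads(tool_call.function.arguments)
        messages.append({
            "role": "tool",
            "tool_call_id": tool_call.id,
            "content": fetch_weather(**args),
        })

    # Second call: the model writes the final answer from the tool output
    final = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return final.choices[0].message.content
```
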
## Optimization Output

```
🤖 OpenAI API Optimization Analysis
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Project: AI Application
Current Usage: 1M tokens/day
Monthly Cost: $150

📊 Current Performance Baseline
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Request Pattern:
- Sequential requests: 500/day
- Average latency: 2s per request
- Total time: 1,000s (~16.7 minutes/day)

Rate Limiting:
- 429 errors: 15% of requests
- Retry overhead: +30% latency

Caching:
- Cache hit rate: 0% (no caching)
- Duplicate requests: 40%

⚡ Async Operations Optimization
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Current: Sequential execution
Recommended: AsyncOpenAI with concurrent requests

💡 Impact:
- 500 sequential: 1,000s (~16.7 min)
- 500 concurrent (10 at a time): 100s (~1.7 min)
- Speedup: 10x faster (15 minutes saved/day)

AsyncOpenAI pattern configured ✓

📦 Batch Processing Optimization
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

⚠️ Using synchronous API for bulk operations
Current: 100 bulk requests/day at standard pricing

💡 Recommendations:
1. Use Batch API for bulk operations → 50% cost reduction
2. 24-hour processing window → No rate limit concerns
3. Automatic retries → Improved reliability

Batch API integration configured ✓

⚡ Impact:
- Cost: $75/day → $37.50/day (50% savings)
- Monthly savings: $1,125
- Reliability: 95% → 99.9%

💾 Response Caching Optimization
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

⚠️ No caching implemented
Duplicate requests: 40% (200/day)

💡 Recommendations:
1. Redis cache with 1-hour TTL → 99.9% latency reduction
2. LRU cache for in-memory → Instant responses
3. Cache invalidation strategy → Fresh data when needed

Redis caching configured ✓

⚡ Impact:
- Cached requests: 200/day
- Latency: 2s → <1ms (2000x faster)
- Cost reduction: 40% fewer API calls
- Monthly savings: $600

⏱️ Rate Limiting Optimization
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

⚠️ No rate limiting, frequent 429 errors
Current: 15% error rate, 30% retry overhead

💡 Recommendations:
1. Token bucket rate limiter → Smooth request flow
2. Exponential backoff → Smart retry logic
3. 60 requests/minute limit → Zero 429 errors

Rate limiter + retry logic configured ✓

⚡ Impact:
- 429 errors: 15% → 0%
- Retry overhead: 30% → 0%
- Reliability: 85% → 100%

🌊 Streaming Optimization
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

⚠️ Using non-streaming responses
Time to first token: 5s (full response wait)

💡 Recommendation: Enable streaming for long responses

⚡ Impact:
- Time to first token: 5s → 500ms (10x faster perceived)
- Better UX: Progressive rendering
- Reduced user wait time: 90%

🎯 Summary
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Total Optimizations: 18

🔴 Critical: 5 (async, batching, caching, rate limiting, streaming)
🟡 High Impact: 8 (token optimization, function calling)
🟢 Low Impact: 5 (monitoring, logging)

Performance Improvements:

Latency:
- Sequential processing: 16.7 min/day → 1.7 min/day (10x faster)
- Cached requests: 2s → <1ms (2000x faster)
- Time to first token: 5s → 500ms (10x faster perceived)

Cost Savings:
- Batch API: 50% reduction ($1,125/month)
- Caching: 40% fewer API calls ($600/month)
- Token optimization: 30% reduction ($450/month)
- Total monthly savings: $2,175 (48% reduction)

Reliability:
- 429 errors: 15% → 0%
- Success rate: 85% → 99.9%
- Retry overhead: 30% → 0%

Run with --apply to implement optimizations
```

## Implementation

This command uses the **@openai-python-expert** agent with optimization expertise:

1. Query Context7 for OpenAI optimization patterns
2. Analyze current API usage patterns
3. Identify async opportunities
4. Configure batch processing
5. Implement caching strategy
6. Set up rate limiting
7. Generate optimized code

## Best Practices Applied

Based on Context7 documentation from `/openai/openai-python`:

1. **AsyncOpenAI** - Concurrent request processing (10x faster)
2. **Batch API** - 50% cost reduction for bulk operations
3. **Redis Caching** - 99.9% latency reduction for duplicates
4. **Rate Limiting** - Zero 429 errors with token bucket
5. **Exponential Backoff** - Smart retry logic
6. **Streaming** - 10x faster time to first token
7. **Token Optimization** - 30% cost reduction

## Related Commands

- `/ai:model-deployment` - AI model deployment
- `/rag:setup-scaffold` - RAG system setup
- `/llm:optimize` - General LLM optimization

## Troubleshooting

### 429 Rate Limit Errors
- Implement token bucket rate limiter
- Use exponential backoff with tenacity
- Consider Batch API for bulk operations

### High Latency
- Enable async operations with AsyncOpenAI
- Implement Redis caching for duplicates
- Use streaming for long responses

### High Costs
- Use Batch API (50% discount)
- Implement caching (40% reduction)
- Optimize token usage (30% reduction)
- Use gpt-4o-mini for simpler tasks

### Timeout Errors
- Increase timeout in the AsyncOpenAI client (see the sketch below)
- Break large requests into smaller chunks
- Use streaming to avoid timeouts

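For the timeout item above, the openai-python client accepts a `timeout` argument at construction time and `with_options()` for per-request overrides; the 60- and 300-second values here are only examples:

```python
from openai import AsyncOpenAI

# Client-wide request timeout in seconds
client = AsyncOpenAI(timeout=60.0)

# Per-request override without rebuilding the client
long_running = client.with_options(timeout=300.0)
```
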
## Installation

```bash
# Install OpenAI Python SDK (AsyncOpenAI ships with it; asyncio is part of
# the Python standard library, so no separate async packages are needed)
pip install openai

# Install optimization dependencies
pip install tenacity tiktoken redis
```

## Version History

- v2.0.0 - Initial Schema v2.0 release with Context7 integration
  - AsyncOpenAI patterns for concurrent processing
  - Batch API integration for 50% cost reduction
  - Redis caching for duplicate request optimization
  - Rate limiting with token bucket algorithm
  - Streaming response optimization
  - Token counting and optimization utilities