@ngxtm/devkit 3.18.0 → 3.20.0

This diff shows the content of publicly available package versions released to a supported registry. It is provided for informational purposes only and reflects the changes between package versions as they appear in their respective public registries.
Files changed (196)
  1. package/merged-commands/application-performance-performance-optimization.md +13 -13
  2. package/merged-commands/ask/fast.md +14 -57
  3. package/merged-commands/ask/hard.md +22 -79
  4. package/merged-commands/auto.md +6 -33
  5. package/merged-commands/backend-development-feature-development.md +12 -12
  6. package/merged-commands/bootstrap/auto/fast.md +15 -15
  7. package/merged-commands/bootstrap/auto/parallel.md +12 -12
  8. package/merged-commands/bootstrap/auto.md +14 -14
  9. package/merged-commands/bootstrap.md +15 -15
  10. package/merged-commands/brainstorm/fast.md +19 -72
  11. package/merged-commands/brainstorm/hard.md +23 -84
  12. package/merged-commands/c4-architecture-c4-architecture.md +5 -5
  13. package/merged-commands/code/auto.md +16 -16
  14. package/merged-commands/code/fast.md +19 -72
  15. package/merged-commands/code/hard.md +38 -122
  16. package/merged-commands/code/no-test.md +12 -12
  17. package/merged-commands/code/parallel.md +9 -9
  18. package/merged-commands/code.md +14 -14
  19. package/merged-commands/comprehensive-review-full-review.md +8 -8
  20. package/merged-commands/context-degradation.md +2 -2
  21. package/merged-commands/context-engineering.md +4 -4
  22. package/merged-commands/context-optimization.md +3 -3
  23. package/merged-commands/cook/auto/fast.md +3 -3
  24. package/merged-commands/cook/auto/parallel.md +9 -9
  25. package/merged-commands/cook/auto.md +1 -1
  26. package/merged-commands/cook/fast.md +38 -47
  27. package/merged-commands/cook/hard.md +46 -41
  28. package/merged-commands/cook.md +13 -13
  29. package/merged-commands/daily-news-report.md +15 -15
  30. package/merged-commands/data-engineering-data-driven-feature.md +16 -16
  31. package/merged-commands/debug/fast.md +13 -29
  32. package/merged-commands/debug/hard.md +47 -49
  33. package/merged-commands/debug.md +1 -1
  34. package/merged-commands/debugging-toolkit-smart-debug.md +1 -1
  35. package/merged-commands/deploy/check.md +22 -71
  36. package/merged-commands/deploy/preview.md +18 -62
  37. package/merged-commands/deploy/production.md +22 -71
  38. package/merged-commands/deploy/rollback.md +22 -71
  39. package/merged-commands/deploy.md +0 -11
  40. package/merged-commands/design/3d.md +3 -3
  41. package/merged-commands/design/describe.md +1 -1
  42. package/merged-commands/design/fast.md +2 -2
  43. package/merged-commands/design/good.md +3 -3
  44. package/merged-commands/design/hard.md +15 -85
  45. package/merged-commands/design/screenshot.md +1 -1
  46. package/merged-commands/design/video.md +1 -1
  47. package/merged-commands/design.md +0 -11
  48. package/merged-commands/doc-coauthoring.md +5 -5
  49. package/merged-commands/docker-expert.md +1 -1
  50. package/merged-commands/docs/audit.md +26 -77
  51. package/merged-commands/docs/business.md +26 -77
  52. package/merged-commands/docs/core.md +24 -68
  53. package/merged-commands/docs/init.md +8 -8
  54. package/merged-commands/docs/update.md +13 -13
  55. package/merged-commands/docs.md +0 -12
  56. package/merged-commands/error-debugging-multi-agent-review.md +1 -1
  57. package/merged-commands/error-diagnostics-smart-debug.md +1 -1
  58. package/merged-commands/finishing-a-development-branch.md +1 -1
  59. package/merged-commands/fix/ci.md +2 -2
  60. package/merged-commands/fix/fast.md +2 -2
  61. package/merged-commands/fix/hard.md +6 -6
  62. package/merged-commands/fix/logs.md +5 -5
  63. package/merged-commands/fix/parallel.md +9 -9
  64. package/merged-commands/fix/test.md +6 -6
  65. package/merged-commands/fix/ui.md +8 -8
  66. package/merged-commands/fixing.md +3 -3
  67. package/merged-commands/framework-migration-legacy-modernize.md +13 -13
  68. package/merged-commands/full-stack-orchestration-full-stack-feature.md +12 -12
  69. package/merged-commands/git/cm.md +1 -1
  70. package/merged-commands/git/cp.md +1 -1
  71. package/merged-commands/git/merge.md +1 -1
  72. package/merged-commands/git/pr.md +1 -1
  73. package/merged-commands/git-pr-workflows-git-workflow.md +10 -10
  74. package/merged-commands/google-adk-python.md +1 -1
  75. package/merged-commands/hr-pro.md +1 -1
  76. package/merged-commands/incident-response-incident-response.md +13 -13
  77. package/merged-commands/integrate/polar.md +3 -3
  78. package/merged-commands/integrate/sepay.md +3 -3
  79. package/merged-commands/journal.md +1 -1
  80. package/merged-commands/learn.md +51 -4
  81. package/merged-commands/linear-claude-skill.md +2 -2
  82. package/merged-commands/loki-mode.md +14 -14
  83. package/merged-commands/machine-learning-ops-ml-pipeline.md +7 -7
  84. package/merged-commands/mcp-management.md +8 -8
  85. package/merged-commands/multi-agent-patterns.md +14 -14
  86. package/merged-commands/multi-platform-apps-multi-platform.md +10 -10
  87. package/merged-commands/nestjs-expert.md +1 -1
  88. package/merged-commands/performance-testing-review-multi-agent-review.md +1 -1
  89. package/merged-commands/plan/archive.md +1 -1
  90. package/merged-commands/plan/ci.md +1 -1
  91. package/merged-commands/plan/fast.md +2 -2
  92. package/merged-commands/plan/hard.md +4 -4
  93. package/merged-commands/plan/parallel.md +5 -5
  94. package/merged-commands/plan/two.md +6 -6
  95. package/merged-commands/requesting-code-review.md +6 -6
  96. package/merged-commands/review/codebase/parallel.md +5 -5
  97. package/merged-commands/review/codebase.md +5 -5
  98. package/merged-commands/review/fast.md +13 -29
  99. package/merged-commands/review/hard.md +48 -49
  100. package/merged-commands/review.md +0 -11
  101. package/merged-commands/security-scanning-security-hardening.md +13 -13
  102. package/merged-commands/skill/add.md +6 -6
  103. package/merged-commands/skill/create.md +6 -6
  104. package/merged-commands/skill/fix-logs.md +6 -6
  105. package/merged-commands/skill/optimize/auto.md +1 -1
  106. package/merged-commands/skill/optimize.md +1 -1
  107. package/merged-commands/skill/plan.md +1 -1
  108. package/merged-commands/skill/update.md +6 -6
  109. package/merged-commands/subagent-driven-development.md +53 -53
  110. package/merged-commands/tdd-workflows-tdd-cycle.md +12 -12
  111. package/merged-commands/tdd-workflows-tdd-red.md +1 -1
  112. package/merged-commands/tdd-workflows-tdd-refactor.md +1 -1
  113. package/merged-commands/test/fast.md +22 -33
  114. package/merged-commands/test/hard.md +59 -56
  115. package/merged-commands/test/ui.md +1 -1
  116. package/merged-commands/test.md +1 -1
  117. package/merged-commands/typescript-expert.md +1 -1
  118. package/merged-commands/use-mcp.md +5 -5
  119. package/merged-commands/writing-plans.md +3 -3
  120. package/merged-commands/writing-skills.md +8 -8
  121. package/package.json +1 -1
  122. package/rules-index.json +1 -1
  123. package/skills/application-performance-performance-optimization/SKILL.md +13 -13
  124. package/skills/azure-ai-agents-python/references/tools.md +1 -1
  125. package/skills/backend-development-feature-development/SKILL.md +12 -12
  126. package/skills/best-practices/references/anti-patterns.md +2 -2
  127. package/skills/best-practices/references/best-practices-guide.md +14 -14
  128. package/skills/c4-architecture-c4-architecture/SKILL.md +5 -5
  129. package/skills/comprehensive-review-full-review/SKILL.md +8 -8
  130. package/skills/context-degradation/SKILL.md +2 -2
  131. package/skills/context-engineering/SKILL.md +4 -4
  132. package/skills/context-engineering/references/context-degradation.md +1 -1
  133. package/skills/context-engineering/references/context-optimization.md +1 -1
  134. package/skills/context-engineering/references/multi-agent-patterns.md +1 -1
  135. package/skills/context-engineering/references/runtime-awareness.md +1 -1
  136. package/skills/context-optimization/SKILL.md +3 -3
  137. package/skills/daily-news-report/SKILL.md +15 -15
  138. package/skills/data-engineering-data-driven-feature/SKILL.md +16 -16
  139. package/skills/debugging-toolkit-smart-debug/SKILL.md +1 -1
  140. package/skills/doc-coauthoring/SKILL.md +5 -5
  141. package/skills/docker-expert/SKILL.md +1 -1
  142. package/skills/error-debugging-multi-agent-review/SKILL.md +1 -1
  143. package/skills/error-diagnostics-smart-debug/SKILL.md +1 -1
  144. package/skills/finishing-a-development-branch/SKILL.md +1 -1
  145. package/skills/fixing/SKILL.md +3 -3
  146. package/skills/fixing/references/parallel-exploration.md +4 -4
  147. package/skills/fixing/references/skill-activation-matrix.md +3 -3
  148. package/skills/fixing/references/workflow-deep.md +11 -11
  149. package/skills/fixing/references/workflow-quick.md +4 -4
  150. package/skills/fixing/references/workflow-standard.md +12 -12
  151. package/skills/framework-migration-legacy-modernize/SKILL.md +13 -13
  152. package/skills/full-stack-orchestration-full-stack-feature/SKILL.md +12 -12
  153. package/skills/git-pr-workflows-git-workflow/SKILL.md +10 -10
  154. package/skills/google-adk-python/SKILL.md +1 -1
  155. package/skills/hr-pro/SKILL.md +1 -1
  156. package/skills/incident-response-incident-response/SKILL.md +13 -13
  157. package/skills/incident-response-smart-fix/resources/implementation-playbook.md +17 -17
  158. package/skills/learn/SKILL.md +51 -4
  159. package/skills/linear-claude-skill/SKILL.md +2 -2
  160. package/skills/loki-mode/ACKNOWLEDGEMENTS.md +4 -4
  161. package/skills/loki-mode/CHANGELOG.md +9 -9
  162. package/skills/loki-mode/CONTEXT-EXPORT.md +1 -1
  163. package/skills/loki-mode/README.md +2 -2
  164. package/skills/loki-mode/SKILL.md +14 -14
  165. package/skills/loki-mode/autonomy/run.sh +1 -1
  166. package/skills/loki-mode/integrations/vibe-kanban.md +1 -1
  167. package/skills/loki-mode/references/core-workflow.md +4 -4
  168. package/skills/loki-mode/references/production-patterns.md +6 -6
  169. package/skills/loki-mode/references/quality-control.md +2 -2
  170. package/skills/loki-mode/references/sdlc-phases.md +3 -3
  171. package/skills/machine-learning-ops-ml-pipeline/SKILL.md +7 -7
  172. package/skills/mcp-builder/reference/evaluation.md +3 -3
  173. package/skills/mcp-management/README.md +6 -6
  174. package/skills/mcp-management/SKILL.md +8 -8
  175. package/skills/mcp-management/references/gemini-cli-integration.md +1 -1
  176. package/skills/multi-agent-patterns/SKILL.md +14 -14
  177. package/skills/multi-platform-apps-multi-platform/SKILL.md +10 -10
  178. package/skills/nestjs-expert/SKILL.md +1 -1
  179. package/skills/performance-testing-review-multi-agent-review/SKILL.md +1 -1
  180. package/skills/planning-with-files/reference.md +2 -2
  181. package/skills/requesting-code-review/SKILL.md +6 -6
  182. package/skills/security-scanning-security-hardening/SKILL.md +13 -13
  183. package/skills/subagent-driven-development/SKILL.md +53 -53
  184. package/skills/subagent-driven-development/code-quality-reviewer-prompt.md +1 -1
  185. package/skills/subagent-driven-development/implementer-prompt.md +3 -3
  186. package/skills/subagent-driven-development/spec-reviewer-prompt.md +1 -1
  187. package/skills/tdd-workflows-tdd-cycle/SKILL.md +12 -12
  188. package/skills/tdd-workflows-tdd-green/resources/implementation-playbook.md +1 -1
  189. package/skills/tdd-workflows-tdd-red/SKILL.md +1 -1
  190. package/skills/tdd-workflows-tdd-refactor/SKILL.md +1 -1
  191. package/skills/typescript-expert/SKILL.md +1 -1
  192. package/skills/writing-plans/SKILL.md +3 -3
  193. package/skills/writing-skills/SKILL.md +8 -8
  194. package/skills/writing-skills/examples/CLAUDE_MD_TESTING.md +1 -1
  195. package/skills/writing-skills/references/cso/README.md +3 -3
  196. package/skills/writing-skills/testing-skills-with-subagents.md +1 -1
@@ -250,11 +250,11 @@ Analyze and fix the GitHub issue: $ARGUMENTS.

  Run `/fix-issue 1234` to invoke it. Use `disable-model-invocation: true` for workflows with side effects that you want to trigger manually.

- ### Create custom subagents
+ ### Create custom Task agents

  > **Tip:** Define specialized assistants in `.claude/agents/` that Claude can delegate to for isolated tasks.

- Subagents run in their own context with their own set of allowed tools. They're useful for tasks that read many files or need specialized focus without cluttering your main conversation.
+ Task agents run in their own context with their own set of allowed tools. They're useful for tasks that read many files or need specialized focus without cluttering your main conversation.

  ```markdown
  ---
@@ -272,15 +272,15 @@ You are a senior security engineer. Review code for:
  Provide specific line references and suggested fixes.
  ```

- Tell Claude to use subagents explicitly: *"Use a subagent to review this code for security issues."*
+ Tell Claude to use Task agents explicitly: *"Use a Task agent to review this code for security issues."*

  ### Install plugins

  > **Tip:** Run `/plugin` to browse the marketplace. Plugins add skills, tools, and integrations without configuration.

- Plugins bundle skills, hooks, subagents, and MCP servers into a single installable unit from the community and Anthropic.
+ Plugins bundle skills, hooks, Task agents, and MCP servers into a single installable unit from the community and Anthropic.

- For guidance on choosing between skills, subagents, hooks, and MCP, see Extend Claude Code.
+ For guidance on choosing between skills, Task agents, hooks, and MCP, see Extend Claude Code.

  ***

@@ -350,23 +350,23 @@ During long sessions, Claude's context window can fill with irrelevant conversat
  * For more control, run `/compact <instructions>`, like `/compact Focus on the API changes`
  * Customize compaction behavior in CLAUDE.md with instructions like `"When compacting, always preserve the full list of modified files and any test commands"` to ensure critical context survives summarization

- ### Use subagents for investigation
+ ### Use Task agents for investigation

- > **Tip:** Delegate research with `"use subagents to investigate X"`. They explore in a separate context, keeping your main conversation clean for implementation.
+ > **Tip:** Delegate research with `"use Task agents to investigate X"`. They explore in a separate context, keeping your main conversation clean for implementation.

- Since context is your fundamental constraint, subagents are one of the most powerful tools available. When Claude researches a codebase it reads lots of files, all of which consume your context. Subagents run in separate context windows and report back summaries:
+ Since context is your fundamental constraint, Task agents are one of the most powerful tools available. When Claude researches a codebase it reads lots of files, all of which consume your context. Task agents run in separate context windows and report back summaries:

  ```
- Use subagents to investigate how our authentication system handles token
+ Use Task agents to investigate how our authentication system handles token
  refresh, and whether we have any existing OAuth utilities I should reuse.
  ```

- The subagent explores the codebase, reads relevant files, and reports back with findings, all without cluttering your main conversation.
+ The Task agent explores the codebase, reads relevant files, and reports back with findings, all without cluttering your main conversation.

- You can also use subagents for verification after Claude implements something:
+ You can also use Task agents for verification after Claude implements something:

  ```
- use a subagent to review this code for edge cases
+ use a Task agent to review this code for edge cases
  ```

  ### Rewind with checkpoints
@@ -487,7 +487,7 @@ These are common mistakes. Recognizing them early saves time:
  * **The trust-then-verify gap.** Claude produces a plausible-looking implementation that doesn't handle edge cases.
  > **Fix**: Always provide verification (tests, scripts, screenshots). If you can't verify it, don't ship it.
  * **The infinite exploration.** You ask Claude to "investigate" something without scoping it. Claude reads hundreds of files, filling the context.
- > **Fix**: Scope investigations narrowly or use subagents so the exploration doesn't consume your main context.
+ > **Fix**: Scope investigations narrowly or use Task agents so the exploration doesn't consume your main context.

  ***

@@ -504,7 +504,7 @@ Over time, you'll develop intuition that no guide can capture. You'll know when
  ## Related resources

  * **How Claude Code works** - Understand the agentic loop, tools, and context management
- * **Extend Claude Code** - Choose between skills, hooks, MCP, subagents, and plugins
+ * **Extend Claude Code** - Choose between skills, hooks, MCP, Task agents, and plugins
  * **Common workflows** - Step-by-step recipes for debugging, testing, PRs, and more
  * **CLAUDE.md** - Store project conventions and persistent context

@@ -54,7 +54,7 @@ All documentation is written to a new `C4-Documentation/` directory in the repos

  For each directory, starting from the deepest:

- - Use Task tool with subagent_type="c4-architecture::c4-code"
+ - Use Task tool with subagent_type="general-purpose"
  - Prompt: |
      Analyze the code in directory: [directory_path]

@@ -110,7 +110,7 @@ For each directory, starting from the deepest:

  For each identified component:

- - Use Task tool with subagent_type="c4-architecture::c4-component"
+ - Use Task tool with subagent_type="general-purpose"
  - Prompt: |
      Synthesize the following C4 Code-level documentation files into a logical component:

@@ -153,7 +153,7 @@ For each identified component:

  ### 2.3 Create Master Component Index

- - Use Task tool with subagent_type="c4-architecture::c4-component"
+ - Use Task tool with subagent_type="general-purpose"
  - Prompt: |
      Create a master component index that lists all components in the system.

@@ -188,7 +188,7 @@ For each identified component:

  ### 3.2 Map Components to Containers

- - Use Task tool with subagent_type="c4-architecture::c4-container"
+ - Use Task tool with subagent_type="general-purpose"
  - Prompt: |
      Synthesize components into containers based on deployment definitions.

@@ -261,7 +261,7 @@ For each identified component:

  ### 4.2 Create Context Documentation

- - Use Task tool with subagent_type="c4-architecture::c4-context"
+ - Use Task tool with subagent_type="general-purpose"
  - Prompt: |
      Create comprehensive C4 Context-level documentation for the system.

@@ -41,13 +41,13 @@ Orchestrate comprehensive multi-dimensional code review using specialized review
  Use Task tool to orchestrate quality and architecture agents in parallel:

  ### 1A. Code Quality Analysis
- - Use Task tool with subagent_type="code-reviewer"
+ - Use Task tool with subagent_type="general-purpose"
  - Prompt: "Perform comprehensive code quality review for: $ARGUMENTS. Analyze code complexity, maintainability index, technical debt, code duplication, naming conventions, and adherence to Clean Code principles. Integrate with SonarQube, CodeQL, and Semgrep for static analysis. Check for code smells, anti-patterns, and violations of SOLID principles. Generate cyclomatic complexity metrics and identify refactoring opportunities."
  - Expected output: Quality metrics, code smell inventory, refactoring recommendations
  - Context: Initial codebase analysis, no dependencies on other phases

  ### 1B. Architecture & Design Review
- - Use Task tool with subagent_type="architect-review"
+ - Use Task tool with subagent_type="general-purpose"
  - Prompt: "Review architectural design patterns and structural integrity in: $ARGUMENTS. Evaluate microservices boundaries, API design, database schema, dependency management, and adherence to Domain-Driven Design principles. Check for circular dependencies, inappropriate coupling, missing abstractions, and architectural drift. Verify compliance with enterprise architecture standards and cloud-native patterns."
  - Expected output: Architecture assessment, design pattern analysis, structural recommendations
  - Context: Runs parallel with code quality analysis
@@ -57,13 +57,13 @@ Use Task tool to orchestrate quality and architecture agents in parallel:
  Use Task tool with security and performance agents, incorporating Phase 1 findings:

  ### 2A. Security Vulnerability Assessment
- - Use Task tool with subagent_type="security-auditor"
+ - Use Task tool with subagent_type="general-purpose"
  - Prompt: "Execute comprehensive security audit on: $ARGUMENTS. Perform OWASP Top 10 analysis, dependency vulnerability scanning with Snyk/Trivy, secrets detection with GitLeaks, input validation review, authentication/authorization assessment, and cryptographic implementation review. Include findings from Phase 1 architecture review: {phase1_architecture_context}. Check for SQL injection, XSS, CSRF, insecure deserialization, and configuration security issues."
  - Expected output: Vulnerability report, CVE list, security risk matrix, remediation steps
  - Context: Incorporates architectural vulnerabilities identified in Phase 1B

  ### 2B. Performance & Scalability Analysis
- - Use Task tool with subagent_type="application-performance::performance-engineer"
+ - Use Task tool with subagent_type="general-purpose"
  - Prompt: "Conduct performance analysis and scalability assessment for: $ARGUMENTS. Profile code for CPU/memory hotspots, analyze database query performance, review caching strategies, identify N+1 problems, assess connection pooling, and evaluate asynchronous processing patterns. Consider architectural findings from Phase 1: {phase1_architecture_context}. Check for memory leaks, resource contention, and bottlenecks under load."
  - Expected output: Performance metrics, bottleneck analysis, optimization recommendations
  - Context: Uses architecture insights to identify systemic performance issues
@@ -73,13 +73,13 @@ Use Task tool with security and performance agents, incorporating Phase 1 findin
  Use Task tool for test and documentation quality assessment:

  ### 3A. Test Coverage & Quality Analysis
- - Use Task tool with subagent_type="unit-testing::test-automator"
+ - Use Task tool with subagent_type="general-purpose"
  - Prompt: "Evaluate testing strategy and implementation for: $ARGUMENTS. Analyze unit test coverage, integration test completeness, end-to-end test scenarios, test pyramid adherence, and test maintainability. Review test quality metrics including assertion density, test isolation, mock usage, and flakiness. Consider security and performance test requirements from Phase 2: {phase2_security_context}, {phase2_performance_context}. Verify TDD practices if --tdd-review flag is set."
  - Expected output: Coverage report, test quality metrics, testing gap analysis
  - Context: Incorporates security and performance testing requirements from Phase 2

  ### 3B. Documentation & API Specification Review
- - Use Task tool with subagent_type="code-documentation::docs-architect"
+ - Use Task tool with subagent_type="general-purpose"
  - Prompt: "Review documentation completeness and quality for: $ARGUMENTS. Assess inline code documentation, API documentation (OpenAPI/Swagger), architecture decision records (ADRs), README completeness, deployment guides, and runbooks. Verify documentation reflects actual implementation based on all previous phase findings: {phase1_context}, {phase2_context}. Check for outdated documentation, missing examples, and unclear explanations."
  - Expected output: Documentation coverage report, inconsistency list, improvement recommendations
  - Context: Cross-references all previous findings to ensure documentation accuracy
@@ -89,13 +89,13 @@ Use Task tool for test and documentation quality assessment:
  Use Task tool to verify framework-specific and industry best practices:

  ### 4A. Framework & Language Best Practices
- - Use Task tool with subagent_type="framework-migration::legacy-modernizer"
+ - Use Task tool with subagent_type="general-purpose"
  - Prompt: "Verify adherence to framework and language best practices for: $ARGUMENTS. Check modern JavaScript/TypeScript patterns, React hooks best practices, Python PEP compliance, Java enterprise patterns, Go idiomatic code, or framework-specific conventions (based on --framework flag). Review package management, build configuration, environment handling, and deployment practices. Include all quality issues from previous phases: {all_previous_contexts}."
  - Expected output: Best practices compliance report, modernization recommendations
  - Context: Synthesizes all previous findings for framework-specific guidance

  ### 4B. CI/CD & DevOps Practices Review
- - Use Task tool with subagent_type="cicd-automation::deployment-engineer"
+ - Use Task tool with subagent_type="general-purpose"
  - Prompt: "Review CI/CD pipeline and DevOps practices for: $ARGUMENTS. Evaluate build automation, test automation integration, deployment strategies (blue-green, canary), infrastructure as code, monitoring/observability setup, and incident response procedures. Assess pipeline security, artifact management, and rollback capabilities. Consider all issues identified in previous phases that impact deployment: {all_critical_issues}."
  - Expected output: Pipeline assessment, DevOps maturity evaluation, automation recommendations
  - Context: Focuses on operationalizing fixes for all identified issues
@@ -157,11 +157,11 @@ Four strategies address different aspects of context degradation:

  **Compress**: Reduce tokens while preserving information through summarization, abstraction, and observation masking. This extends effective context capacity.

- **Isolate**: Split context across sub-agents or sessions to prevent any single context from growing large enough to degrade. This is the most aggressive strategy but often the most effective.
+ **Isolate**: Split context across Task agents or sessions to prevent any single context from growing large enough to degrade. This is the most aggressive strategy but often the most effective.

  ### Architectural Patterns

- Implement these strategies through specific architectural patterns. Use just-in-time context loading to retrieve information only when needed. Use observation masking to replace verbose tool outputs with compact references. Use sub-agent architectures to isolate context for different tasks. Use compaction to summarize growing context before it exceeds limits.
+ Implement these strategies through specific architectural patterns. Use just-in-time context loading to retrieve information only when needed. Use observation masking to replace verbose tool outputs with compact references. Use Task agent architectures to isolate context for different tasks. Use compaction to summarize growing context before it exceeds limits.

  ## Examples

@@ -25,13 +25,13 @@ Context engineering curates the smallest high-signal token set for LLM tasks. Th
  1. **Context quality > quantity** - High-signal tokens beat exhaustive content
  2. **Attention is finite** - U-shaped curve favors beginning/end positions
  3. **Progressive disclosure** - Load information just-in-time
- 4. **Isolation prevents degradation** - Partition work across sub-agents
+ 4. **Isolation prevents degradation** - Partition work across Task agents
  5. **Measure before optimizing** - Know your baseline

  **IMPORTANT:**
  - Sacrifice grammar for the sake of concision.
  - Ensure token efficiency while maintaining high quality.
- - Pass these rules to subagents.
+ - Pass these rules to Task agents.

  ## Quick Reference

@@ -61,7 +61,7 @@ Context engineering curates the smallest high-signal token set for LLM tasks. Th
  1. **Write**: Save context externally (scratchpads, files)
  2. **Select**: Pull only relevant context (retrieval, filtering)
  3. **Compress**: Reduce tokens while preserving info (summarization)
- 4. **Isolate**: Split across sub-agents (partitioning)
+ 4. **Isolate**: Split across Task agents (partitioning)

  ## Anti-Patterns

@@ -75,7 +75,7 @@ Context engineering curates the smallest high-signal token set for LLM tasks. Th

  1. Place critical info at beginning/end of context
  2. Implement compaction at 70-80% utilization
- 3. Use sub-agents for context isolation, not role-play
+ 3. Use Task agents for context isolation, not role-play
  4. Design tools with 4-question framework (what, when, inputs, returns)
  5. Optimize for tokens-per-task, not tokens-per-request
  6. Validate with probe-based evaluation
@@ -61,7 +61,7 @@ Predictable degradation as context grows. Not binary - a continuum.
  1. **Write**: Save externally (scratchpads, files)
  2. **Select**: Pull only relevant (retrieval, filtering)
  3. **Compress**: Reduce tokens (summarization)
- 4. **Isolate**: Split across sub-agents (partitioning)
+ 4. **Isolate**: Split across Task agents (partitioning)

  ## Detection Heuristics

@@ -51,7 +51,7 @@ context += [unique_content] # Variable last

  ## Context Partitioning

- Split work across sub-agents with isolated contexts.
+ Split work across Task agents with isolated contexts.

  ```python
  result = await sub_agent.process(subtask, clean_context=True)
@@ -4,7 +4,7 @@ Distribute work across multiple context windows for isolation and scale.

  ## Core Insight

- Sub-agents exist to **isolate context**, not anthropomorphize roles.
+ Task agents exist to **isolate context**, not anthropomorphize roles.

  ## Token Economics

@@ -157,7 +157,7 @@ Context: 91% [CRITICAL - compaction needed]
  | 5-Hour | Action |
  |--------|--------|
  | < 70% | Normal usage |
- | 70-90% | Reduce parallelization, delegate to subagents |
+ | 70-90% | Reduce parallelization, delegate to Task agents |
  | > 90% | Wait for reset or use lower-tier models |

  | 7-Day | Action |
@@ -81,10 +81,10 @@ Design prompts to maximize cache stability: avoid dynamic content like timestamp

  ### Context Partitioning

- **Sub-Agent Partitioning**
- The most aggressive form of context optimization is partitioning work across sub-agents with isolated contexts. Each sub-agent operates in a clean context focused on its subtask without carrying accumulated context from other subtasks.
+ **Task Agent Partitioning**
+ The most aggressive form of context optimization is partitioning work across Task agents with isolated contexts. Each Task agent operates in a clean context focused on its subtask without carrying accumulated context from other subtasks.

- This approach achieves separation of concerns—the detailed search context remains isolated within sub-agents while the coordinator focuses on synthesis and analysis.
+ This approach achieves separation of concerns—the detailed search context remains isolated within Task agents while the coordinator focuses on synthesis and analysis.

  **Result Aggregation**
  Aggregate results from partitioned subtasks by validating all partitions completed, merging compatible results, and summarizing if still too large.
@@ -11,7 +11,7 @@ source: community
 
  # Daily News Report v3.0
 
- > **Architecture Upgrade**: Main Agent Orchestration + SubAgent Execution + Browser Scraping + Smart Caching
+ > **Architecture Upgrade**: Main Agent Orchestration + Task Agent Execution + Browser Scraping + Smart Caching
 
  ## Core Architecture
 
@@ -35,7 +35,7 @@ source: community
  └──────────────────────────────────────────────────────────────────────┘
  ↓ Dispatch ↑ Return Results
  ┌─────────────────────────────────────────────────────────────────────┐
- SubAgent Execution Layer │
+ Task Agent Execution Layer │
  ├─────────────────────────────────────────────────────────────────────┤
  │ │
  │ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
@@ -74,7 +74,7 @@ Steps:
  5. Check if a partial report exists for today (append mode)
  ```
 
- ### Phase 2: Dispatch SubAgents
+ ### Phase 2: Dispatch Task Agents
 
  **Strategy**: Parallel dispatch, batch execution, early stopping mechanism
 
@@ -95,9 +95,9 @@ If still < 20 items:
  - Browser Worker: ProductHunt, Latent Space (Require JS rendering)
  ```
 
- ### Phase 3: SubAgent Task Format
+ ### Phase 3: Task Agent Task Format
 
- Task format received by each SubAgent:
+ Task format received by each Task Agent:
 
  ```yaml
  task: fetch_and_extract
@@ -134,12 +134,12 @@ Main Agent Responsibilities:
 
  ```yaml
  Monitoring:
- - Check SubAgent return status (success/partial/failed)
+ - Check Task Agent return status (success/partial/failed)
  - Count collected items
  - Record success rate per source
 
  Feedback Loop:
- - If a SubAgent fails, decide whether to retry or skip
+ - If a Task Agent fails, decide whether to retry or skip
  - If a source fails persistently, mark as disabled
  - Dynamically adjust source selection for subsequent batches
 
@@ -158,7 +158,7 @@ Deduplication:
  - Check cache.json to avoid history duplicates
 
  Score Calibration:
- - Unify scoring standards across SubAgents
+ - Unify scoring standards across Task Agents
  - Adjust weights based on source credibility
  - Bonus points for manually curated high-quality sources
 
@@ -212,7 +212,7 @@ Update cache.json:
  - article_history: Record included articles
  ```
 
- ## SubAgent Call Examples
+ ## Task Agent Call Examples
 
  ### Using general-purpose Agent
 
@@ -260,7 +260,7 @@ Task Call:
 
  ```
  Task Call:
- subagent_type: worker
+ subagent_type: general-purpose
  prompt: |
  task: fetch_and_extract
  input:
@@ -288,7 +288,7 @@ Task Call:
  > Curated from N sources today, containing 20 high-quality items
  > Generation Time: X min | Version: v3.0
  >
- > **Warning**: Sub-agent 'worker' not detected. Running in generic mode (Serial Execution). Performance might be degraded.
+ > **Warning**: Task agent 'worker' not detected. Running in generic mode (Serial Execution). Performance might be degraded.
 
  ---
 
@@ -318,11 +318,11 @@ Task Call:
 
  1. **Quality over Quantity**: Low-quality content does not enter the report.
  2. **Early Stop**: Stop scraping once 20 high-quality items are reached.
- 3. **Parallel First**: SubAgents in the same batch execute in parallel.
+ 3. **Parallel First**: Task Agents in the same batch execute in parallel.
  4. **Fault Tolerance**: Failure of a single source does not affect the whole process.
  5. **Cache Reuse**: Avoid re-scraping the same content.
  6. **Main Agent Control**: All decisions are made by the Main Agent.
- 7. **Fallback Awareness**: Detect sub-agent availability, gracefully degrade if unavailable.
+ 7. **Fallback Awareness**: Detect Task agent availability, gracefully degrade if unavailable.
 
  ## Expected Performance
 
@@ -336,7 +336,7 @@ Task Call:
 
  | Error Type | Handling |
  |---|---|
- | SubAgent Timeout | Log error, continue to next |
+ | Task Agent Timeout | Log error, continue to next |
  | Source 403/404 | Mark disabled, update sources.json |
  | Extraction Failed | Return raw content, Main Agent decides |
  | Browser Crash | Skip source, log entry |
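The error-handling table could map onto a dispatch wrapper like this sketch; the exception classes standing in for each error type are assumptions:

```python
# Hypothetical dispatch wrapper mirroring the error-handling table:
# timeout -> log and continue; 403/404 -> mark disabled;
# extraction failure -> return raw content for the Main Agent.

def dispatch(source: str, fetch, log: list, disabled: set):
    try:
        return fetch(source)
    except TimeoutError:
        log.append(f"{source}: timeout")  # log error, continue to next
        return None
    except PermissionError:               # stand-in for HTTP 403/404
        disabled.add(source)              # mark disabled, update sources.json
        return None
    except ValueError as exc:             # stand-in for extraction failure
        return {"raw": str(exc)}          # Main Agent decides

def broken(source):
    raise PermissionError("403")

log, disabled = [], set()
ok = dispatch("hn", lambda s: {"items": 3}, log, disabled)
blocked = dispatch("dead-source", broken, log, disabled)
```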
@@ -346,7 +346,7 @@ Task Call:
  To ensure usability across different Agent environments, the following checks must be performed:
 
  1. **Environment Check**:
- - In Phase 1 initialization, attempt to detect if `worker` sub-agent exists.
+ - In Phase 1 initialization, attempt to detect if `worker` Task agent exists.
  - If not exists (or plugin not installed), automatically switch to **Serial Execution Mode**.
 
  2. **Serial Execution Mode**:
@@ -31,18 +31,18 @@ Build features guided by data insights, A/B testing, and continuous measurement
  ## Phase 1: Data Analysis and Hypothesis Formation
 
  ### 1. Exploratory Data Analysis
- - Use Task tool with subagent_type="machine-learning-ops::data-scientist"
+ - Use Task tool with subagent_type="general-purpose"
  - Prompt: "Perform exploratory data analysis for feature: $ARGUMENTS. Analyze existing user behavior data, identify patterns and opportunities, segment users by behavior, and calculate baseline metrics. Use modern analytics tools (Amplitude, Mixpanel, Segment) to understand current user journeys, conversion funnels, and engagement patterns."
  - Output: EDA report with visualizations, user segments, behavioral patterns, baseline metrics
 
  ### 2. Business Hypothesis Development
- - Use Task tool with subagent_type="business-analytics::business-analyst"
+ - Use Task tool with subagent_type="general-purpose"
  - Context: Data scientist's EDA findings and behavioral patterns
  - Prompt: "Formulate business hypotheses for feature: $ARGUMENTS based on data analysis. Define clear success metrics, expected impact on key business KPIs, target user segments, and minimum detectable effects. Create measurable hypotheses using frameworks like ICE scoring or RICE prioritization."
  - Output: Hypothesis document, success metrics definition, expected ROI calculations
 
  ### 3. Statistical Experiment Design
- - Use Task tool with subagent_type="machine-learning-ops::data-scientist"
+ - Use Task tool with subagent_type="general-purpose"
  - Context: Business hypotheses and success metrics
  - Prompt: "Design statistical experiment for feature: $ARGUMENTS. Calculate required sample size for statistical power, define control and treatment groups, specify randomization strategy, and plan for multiple testing corrections. Consider Bayesian A/B testing approaches for faster decision making. Design for both primary and guardrail metrics."
  - Output: Experiment design document, power analysis, statistical test plan
@@ -50,19 +50,19 @@ Build features guided by data insights, A/B testing, and continuous measurement
  ## Phase 2: Feature Architecture and Analytics Design
 
  ### 4. Feature Architecture Planning
- - Use Task tool with subagent_type="data-engineering::backend-architect"
+ - Use Task tool with subagent_type="general-purpose"
  - Context: Business requirements and experiment design
  - Prompt: "Design feature architecture for: $ARGUMENTS with A/B testing capability. Include feature flag integration (LaunchDarkly, Split.io, or Optimizely), gradual rollout strategy, circuit breakers for safety, and clean separation between control and treatment logic. Ensure architecture supports real-time configuration updates."
  - Output: Architecture diagrams, feature flag schema, rollout strategy
 
  ### 5. Analytics Instrumentation Design
- - Use Task tool with subagent_type="data-engineering::data-engineer"
+ - Use Task tool with subagent_type="general-purpose"
  - Context: Feature architecture and success metrics
  - Prompt: "Design comprehensive analytics instrumentation for: $ARGUMENTS. Define event schemas for user interactions, specify properties for segmentation and analysis, design funnel tracking and conversion events, plan cohort analysis capabilities. Implement using modern SDKs (Segment, Amplitude, Mixpanel) with proper event taxonomy."
  - Output: Event tracking plan, analytics schema, instrumentation guide
 
  ### 6. Data Pipeline Architecture
- - Use Task tool with subagent_type="data-engineering::data-engineer"
+ - Use Task tool with subagent_type="general-purpose"
  - Context: Analytics requirements and existing data infrastructure
  - Prompt: "Design data pipelines for feature: $ARGUMENTS. Include real-time streaming for live metrics (Kafka, Kinesis), batch processing for detailed analysis, data warehouse integration (Snowflake, BigQuery), and feature store for ML if applicable. Ensure proper data governance and GDPR compliance."
  - Output: Pipeline architecture, ETL/ELT specifications, data flow diagrams
@@ -70,19 +70,19 @@ Build features guided by data insights, A/B testing, and continuous measurement
  ## Phase 3: Implementation with Instrumentation
 
  ### 7. Backend Implementation
- - Use Task tool with subagent_type="backend-development::backend-architect"
+ - Use Task tool with subagent_type="general-purpose"
  - Context: Architecture design and feature requirements
  - Prompt: "Implement backend for feature: $ARGUMENTS with full instrumentation. Include feature flag checks at decision points, comprehensive event tracking for all user actions, performance metrics collection, error tracking and monitoring. Implement proper logging for experiment analysis."
  - Output: Backend code with analytics, feature flag integration, monitoring setup
 
  ### 8. Frontend Implementation
- - Use Task tool with subagent_type="frontend-mobile-development::frontend-developer"
+ - Use Task tool with subagent_type="general-purpose"
  - Context: Backend APIs and analytics requirements
  - Prompt: "Build frontend for feature: $ARGUMENTS with analytics tracking. Implement event tracking for all user interactions, session recording integration if applicable, performance metrics (Core Web Vitals), and proper error boundaries. Ensure consistent experience between control and treatment groups."
  - Output: Frontend code with analytics, A/B test variants, performance monitoring
 
  ### 9. ML Model Integration (if applicable)
- - Use Task tool with subagent_type="machine-learning-ops::ml-engineer"
+ - Use Task tool with subagent_type="general-purpose"
  - Context: Feature requirements and data pipelines
  - Prompt: "Integrate ML models for feature: $ARGUMENTS if needed. Implement online inference with low latency, A/B testing between model versions, model performance tracking, and automatic fallback mechanisms. Set up model monitoring for drift detection."
  - Output: ML pipeline, model serving infrastructure, monitoring setup
@@ -90,13 +90,13 @@ Build features guided by data insights, A/B testing, and continuous measurement
  ## Phase 4: Pre-Launch Validation
 
  ### 10. Analytics Validation
- - Use Task tool with subagent_type="data-engineering::data-engineer"
+ - Use Task tool with subagent_type="general-purpose"
  - Context: Implemented tracking and event schemas
  - Prompt: "Validate analytics implementation for: $ARGUMENTS. Test all event tracking in staging, verify data quality and completeness, validate funnel definitions, ensure proper user identification and session tracking. Run end-to-end tests for data pipeline."
  - Output: Validation report, data quality metrics, tracking coverage analysis
 
  ### 11. Experiment Setup
- - Use Task tool with subagent_type="cloud-infrastructure::deployment-engineer"
+ - Use Task tool with subagent_type="general-purpose"
  - Context: Feature flags and experiment design
  - Prompt: "Configure experiment infrastructure for: $ARGUMENTS. Set up feature flags with proper targeting rules, configure traffic allocation (start with 5-10%), implement kill switches, set up monitoring alerts for key metrics. Test randomization and assignment logic."
  - Output: Experiment configuration, monitoring dashboards, rollout plan
@@ -104,13 +104,13 @@ Build features guided by data insights, A/B testing, and continuous measurement
  ## Phase 5: Launch and Experimentation
 
  ### 12. Gradual Rollout
- - Use Task tool with subagent_type="cloud-infrastructure::deployment-engineer"
+ - Use Task tool with subagent_type="general-purpose"
  - Context: Experiment configuration and monitoring setup
  - Prompt: "Execute gradual rollout for feature: $ARGUMENTS. Start with internal dogfooding, then beta users (1-5%), gradually increase to target traffic. Monitor error rates, performance metrics, and early indicators. Implement automated rollback on anomalies."
  - Output: Rollout execution, monitoring alerts, health metrics
 
  ### 13. Real-time Monitoring
- - Use Task tool with subagent_type="observability-monitoring::observability-engineer"
+ - Use Task tool with subagent_type="general-purpose"
  - Context: Deployed feature and success metrics
  - Prompt: "Set up comprehensive monitoring for: $ARGUMENTS. Create real-time dashboards for experiment metrics, configure alerts for statistical significance, monitor guardrail metrics for negative impacts, track system performance and error rates. Use tools like Datadog, New Relic, or custom dashboards."
  - Output: Monitoring dashboards, alert configurations, SLO definitions
@@ -118,19 +118,19 @@ Build features guided by data insights, A/B testing, and continuous measurement
  ## Phase 6: Analysis and Decision Making
 
  ### 14. Statistical Analysis
- - Use Task tool with subagent_type="machine-learning-ops::data-scientist"
+ - Use Task tool with subagent_type="general-purpose"
  - Context: Experiment data and original hypotheses
  - Prompt: "Analyze A/B test results for: $ARGUMENTS. Calculate statistical significance with confidence intervals, check for segment-level effects, analyze secondary metrics impact, investigate any unexpected patterns. Use both frequentist and Bayesian approaches. Account for multiple testing if applicable."
  - Output: Statistical analysis report, significance tests, segment analysis
 
  ### 15. Business Impact Assessment
- - Use Task tool with subagent_type="business-analytics::business-analyst"
+ - Use Task tool with subagent_type="general-purpose"
  - Context: Statistical analysis and business metrics
  - Prompt: "Assess business impact of feature: $ARGUMENTS. Calculate actual vs expected ROI, analyze impact on key business metrics, evaluate cost-benefit including operational overhead, project long-term value. Make recommendation on full rollout, iteration, or rollback."
  - Output: Business impact report, ROI analysis, recommendation document
 
  ### 16. Post-Launch Optimization
- - Use Task tool with subagent_type="machine-learning-ops::data-scientist"
+ - Use Task tool with subagent_type="general-purpose"
  - Context: Launch results and user feedback
  - Prompt: "Identify optimization opportunities for: $ARGUMENTS based on data. Analyze user behavior patterns in treatment group, identify friction points in user journey, suggest improvements based on data, plan follow-up experiments. Use cohort analysis for long-term impact."
  - Output: Optimization recommendations, follow-up experiment plans
@@ -39,7 +39,7 @@ Parse for:
  ## Workflow
 
  ### 1. Initial Triage
- Use Task tool (subagent_type="debugger") for AI-powered analysis:
+ Use Task tool (subagent_type="general-purpose") for AI-powered analysis:
  - Error pattern recognition
  - Stack trace analysis with probable causes
  - Component dependency analysis
@@ -250,7 +250,7 @@ Explain that testing will now occur to see if the document actually works for re
 
  ### Testing Approach
 
- **If access to sub-agents is available (e.g., in Claude Code):**
+ **If access to Task agents is available (e.g., in Claude Code):**
 
  Perform the testing directly without user involvement.
 
@@ -260,11 +260,11 @@ Announce intention to predict what questions readers might ask when trying to di
 
  Generate 5-10 questions that readers would realistically ask.
 
- ### Step 2: Test with Sub-Agent
+ ### Step 2: Test with Task Agent
 
  Announce that these questions will be tested with a fresh Claude instance (no context from this conversation).
 
- For each question, invoke a sub-agent with just the document content and the question.
+ For each question, invoke a Task agent with just the document content and the question.
 
  Summarize what Reader Claude got right/wrong for each question.
 
@@ -272,7 +272,7 @@ Summarize what Reader Claude got right/wrong for each question.
 
  Announce additional checks will be performed.
 
- Invoke sub-agent to check for ambiguity, false assumptions, contradictions.
+ Invoke Task agent to check for ambiguity, false assumptions, contradictions.
 
  Summarize any issues found.
 
@@ -289,7 +289,7 @@ Loop back to refinement for problematic sections.
 
  ---
 
- **If no access to sub-agents (e.g., claude.ai web interface):**
+ **If no access to Task agents (e.g., claude.ai web interface):**
 
  The user will need to do the testing manually.
 
@@ -21,7 +21,7 @@ You are an advanced Docker containerization expert with comprehensive, practical
  - Database containerization with complex persistence → database-expert
 
  Example to output:
- "This requires Kubernetes orchestration expertise. Please invoke: 'Use the kubernetes-expert subagent.' Stopping here."
+ "This requires Kubernetes orchestration expertise. Please invoke: 'Use the kubernetes-expert Task agent.' Stopping here."
 
  1. Analyze container setup comprehensively:
 
@@ -59,7 +59,7 @@ The Multi-Agent Review Tool leverages a distributed, specialized agent network t
  - **Dynamic Agent Matching**:
  - Analyze input characteristics
  - Select most appropriate agent types
- - Configure specialized sub-agents dynamically
+ - Configure specialized Task agents dynamically
  - **Expertise Routing**:
  ```python
  def route_agents(code_context):
@@ -39,7 +39,7 @@ Parse for:
  ## Workflow
 
  ### 1. Initial Triage
- Use Task tool (subagent_type="debugger") for AI-powered analysis:
+ Use Task tool (subagent_type="general-purpose") for AI-powered analysis:
  - Error pattern recognition
  - Stack trace analysis with probable causes
  - Component dependency analysis
@@ -195,7 +195,7 @@ git worktree remove <worktree-path>
  ## Integration
 
  **Called by:**
- - **subagent-driven-development** (Step 7) - After all tasks complete
+ - **Task agent-driven-development** (Step 7) - After all tasks complete
  - **executing-plans** (Step 5) - After all batches complete
 
  **Pairs with:**