@ngxtm/devkit 3.19.0 → 3.20.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/merged-commands/application-performance-performance-optimization.md +13 -13
- package/merged-commands/ask/fast.md +14 -57
- package/merged-commands/ask/hard.md +22 -79
- package/merged-commands/auto.md +6 -33
- package/merged-commands/backend-development-feature-development.md +12 -12
- package/merged-commands/bootstrap/auto/fast.md +15 -15
- package/merged-commands/bootstrap/auto/parallel.md +12 -12
- package/merged-commands/bootstrap/auto.md +14 -14
- package/merged-commands/bootstrap.md +15 -15
- package/merged-commands/brainstorm/fast.md +19 -72
- package/merged-commands/brainstorm/hard.md +23 -84
- package/merged-commands/c4-architecture-c4-architecture.md +5 -5
- package/merged-commands/code/auto.md +16 -16
- package/merged-commands/code/fast.md +19 -72
- package/merged-commands/code/hard.md +38 -122
- package/merged-commands/code/no-test.md +12 -12
- package/merged-commands/code/parallel.md +9 -9
- package/merged-commands/code.md +14 -14
- package/merged-commands/comprehensive-review-full-review.md +8 -8
- package/merged-commands/context-degradation.md +2 -2
- package/merged-commands/context-engineering.md +4 -4
- package/merged-commands/context-optimization.md +3 -3
- package/merged-commands/cook/auto/fast.md +3 -3
- package/merged-commands/cook/auto/parallel.md +9 -9
- package/merged-commands/cook/auto.md +1 -1
- package/merged-commands/cook/fast.md +38 -47
- package/merged-commands/cook/hard.md +46 -41
- package/merged-commands/cook.md +13 -13
- package/merged-commands/daily-news-report.md +15 -15
- package/merged-commands/data-engineering-data-driven-feature.md +16 -16
- package/merged-commands/debug/fast.md +13 -29
- package/merged-commands/debug/hard.md +47 -49
- package/merged-commands/debug.md +1 -1
- package/merged-commands/debugging-toolkit-smart-debug.md +1 -1
- package/merged-commands/deploy/check.md +22 -71
- package/merged-commands/deploy/preview.md +18 -62
- package/merged-commands/deploy/production.md +22 -71
- package/merged-commands/deploy/rollback.md +22 -71
- package/merged-commands/deploy.md +0 -11
- package/merged-commands/design/3d.md +3 -3
- package/merged-commands/design/describe.md +1 -1
- package/merged-commands/design/fast.md +2 -2
- package/merged-commands/design/good.md +3 -3
- package/merged-commands/design/hard.md +15 -85
- package/merged-commands/design/screenshot.md +1 -1
- package/merged-commands/design/video.md +1 -1
- package/merged-commands/design.md +0 -11
- package/merged-commands/doc-coauthoring.md +5 -5
- package/merged-commands/docker-expert.md +1 -1
- package/merged-commands/docs/audit.md +26 -77
- package/merged-commands/docs/business.md +26 -77
- package/merged-commands/docs/core.md +24 -68
- package/merged-commands/docs/init.md +8 -8
- package/merged-commands/docs/update.md +13 -13
- package/merged-commands/docs.md +0 -12
- package/merged-commands/error-debugging-multi-agent-review.md +1 -1
- package/merged-commands/error-diagnostics-smart-debug.md +1 -1
- package/merged-commands/finishing-a-development-branch.md +1 -1
- package/merged-commands/fix/ci.md +2 -2
- package/merged-commands/fix/fast.md +2 -2
- package/merged-commands/fix/hard.md +6 -6
- package/merged-commands/fix/logs.md +5 -5
- package/merged-commands/fix/parallel.md +9 -9
- package/merged-commands/fix/test.md +6 -6
- package/merged-commands/fix/ui.md +8 -8
- package/merged-commands/fixing.md +3 -3
- package/merged-commands/framework-migration-legacy-modernize.md +13 -13
- package/merged-commands/full-stack-orchestration-full-stack-feature.md +12 -12
- package/merged-commands/git/cm.md +1 -1
- package/merged-commands/git/cp.md +1 -1
- package/merged-commands/git/merge.md +1 -1
- package/merged-commands/git/pr.md +1 -1
- package/merged-commands/git-pr-workflows-git-workflow.md +10 -10
- package/merged-commands/google-adk-python.md +1 -1
- package/merged-commands/hr-pro.md +1 -1
- package/merged-commands/incident-response-incident-response.md +13 -13
- package/merged-commands/integrate/polar.md +3 -3
- package/merged-commands/integrate/sepay.md +3 -3
- package/merged-commands/journal.md +1 -1
- package/merged-commands/linear-claude-skill.md +2 -2
- package/merged-commands/loki-mode.md +14 -14
- package/merged-commands/machine-learning-ops-ml-pipeline.md +7 -7
- package/merged-commands/mcp-management.md +8 -8
- package/merged-commands/multi-agent-patterns.md +14 -14
- package/merged-commands/multi-platform-apps-multi-platform.md +10 -10
- package/merged-commands/nestjs-expert.md +1 -1
- package/merged-commands/performance-testing-review-multi-agent-review.md +1 -1
- package/merged-commands/plan/archive.md +1 -1
- package/merged-commands/plan/ci.md +1 -1
- package/merged-commands/plan/fast.md +2 -2
- package/merged-commands/plan/hard.md +4 -4
- package/merged-commands/plan/parallel.md +5 -5
- package/merged-commands/plan/two.md +6 -6
- package/merged-commands/requesting-code-review.md +6 -6
- package/merged-commands/review/codebase/parallel.md +5 -5
- package/merged-commands/review/codebase.md +5 -5
- package/merged-commands/review/fast.md +13 -29
- package/merged-commands/review/hard.md +48 -49
- package/merged-commands/review.md +0 -11
- package/merged-commands/security-scanning-security-hardening.md +13 -13
- package/merged-commands/skill/add.md +6 -6
- package/merged-commands/skill/create.md +6 -6
- package/merged-commands/skill/fix-logs.md +6 -6
- package/merged-commands/skill/optimize/auto.md +1 -1
- package/merged-commands/skill/optimize.md +1 -1
- package/merged-commands/skill/plan.md +1 -1
- package/merged-commands/skill/update.md +6 -6
- package/merged-commands/subagent-driven-development.md +53 -53
- package/merged-commands/tdd-workflows-tdd-cycle.md +12 -12
- package/merged-commands/tdd-workflows-tdd-red.md +1 -1
- package/merged-commands/tdd-workflows-tdd-refactor.md +1 -1
- package/merged-commands/test/fast.md +22 -33
- package/merged-commands/test/hard.md +59 -56
- package/merged-commands/test/ui.md +1 -1
- package/merged-commands/test.md +1 -1
- package/merged-commands/typescript-expert.md +1 -1
- package/merged-commands/use-mcp.md +5 -5
- package/merged-commands/writing-plans.md +3 -3
- package/merged-commands/writing-skills.md +8 -8
- package/package.json +1 -1
- package/rules-index.json +1 -1
- package/skills/application-performance-performance-optimization/SKILL.md +13 -13
- package/skills/azure-ai-agents-python/references/tools.md +1 -1
- package/skills/backend-development-feature-development/SKILL.md +12 -12
- package/skills/best-practices/references/anti-patterns.md +2 -2
- package/skills/best-practices/references/best-practices-guide.md +14 -14
- package/skills/c4-architecture-c4-architecture/SKILL.md +5 -5
- package/skills/comprehensive-review-full-review/SKILL.md +8 -8
- package/skills/context-degradation/SKILL.md +2 -2
- package/skills/context-engineering/SKILL.md +4 -4
- package/skills/context-engineering/references/context-degradation.md +1 -1
- package/skills/context-engineering/references/context-optimization.md +1 -1
- package/skills/context-engineering/references/multi-agent-patterns.md +1 -1
- package/skills/context-engineering/references/runtime-awareness.md +1 -1
- package/skills/context-optimization/SKILL.md +3 -3
- package/skills/daily-news-report/SKILL.md +15 -15
- package/skills/data-engineering-data-driven-feature/SKILL.md +16 -16
- package/skills/debugging-toolkit-smart-debug/SKILL.md +1 -1
- package/skills/doc-coauthoring/SKILL.md +5 -5
- package/skills/docker-expert/SKILL.md +1 -1
- package/skills/error-debugging-multi-agent-review/SKILL.md +1 -1
- package/skills/error-diagnostics-smart-debug/SKILL.md +1 -1
- package/skills/finishing-a-development-branch/SKILL.md +1 -1
- package/skills/fixing/SKILL.md +3 -3
- package/skills/fixing/references/parallel-exploration.md +4 -4
- package/skills/fixing/references/skill-activation-matrix.md +3 -3
- package/skills/fixing/references/workflow-deep.md +11 -11
- package/skills/fixing/references/workflow-quick.md +4 -4
- package/skills/fixing/references/workflow-standard.md +12 -12
- package/skills/framework-migration-legacy-modernize/SKILL.md +13 -13
- package/skills/full-stack-orchestration-full-stack-feature/SKILL.md +12 -12
- package/skills/git-pr-workflows-git-workflow/SKILL.md +10 -10
- package/skills/google-adk-python/SKILL.md +1 -1
- package/skills/hr-pro/SKILL.md +1 -1
- package/skills/incident-response-incident-response/SKILL.md +13 -13
- package/skills/incident-response-smart-fix/resources/implementation-playbook.md +17 -17
- package/skills/linear-claude-skill/SKILL.md +2 -2
- package/skills/loki-mode/ACKNOWLEDGEMENTS.md +4 -4
- package/skills/loki-mode/CHANGELOG.md +9 -9
- package/skills/loki-mode/CONTEXT-EXPORT.md +1 -1
- package/skills/loki-mode/README.md +2 -2
- package/skills/loki-mode/SKILL.md +14 -14
- package/skills/loki-mode/autonomy/run.sh +1 -1
- package/skills/loki-mode/integrations/vibe-kanban.md +1 -1
- package/skills/loki-mode/references/core-workflow.md +4 -4
- package/skills/loki-mode/references/production-patterns.md +6 -6
- package/skills/loki-mode/references/quality-control.md +2 -2
- package/skills/loki-mode/references/sdlc-phases.md +3 -3
- package/skills/machine-learning-ops-ml-pipeline/SKILL.md +7 -7
- package/skills/mcp-builder/reference/evaluation.md +3 -3
- package/skills/mcp-management/README.md +6 -6
- package/skills/mcp-management/SKILL.md +8 -8
- package/skills/mcp-management/references/gemini-cli-integration.md +1 -1
- package/skills/multi-agent-patterns/SKILL.md +14 -14
- package/skills/multi-platform-apps-multi-platform/SKILL.md +10 -10
- package/skills/nestjs-expert/SKILL.md +1 -1
- package/skills/performance-testing-review-multi-agent-review/SKILL.md +1 -1
- package/skills/planning-with-files/reference.md +2 -2
- package/skills/requesting-code-review/SKILL.md +6 -6
- package/skills/security-scanning-security-hardening/SKILL.md +13 -13
- package/skills/subagent-driven-development/SKILL.md +53 -53
- package/skills/subagent-driven-development/code-quality-reviewer-prompt.md +1 -1
- package/skills/subagent-driven-development/implementer-prompt.md +3 -3
- package/skills/subagent-driven-development/spec-reviewer-prompt.md +1 -1
- package/skills/tdd-workflows-tdd-cycle/SKILL.md +12 -12
- package/skills/tdd-workflows-tdd-green/resources/implementation-playbook.md +1 -1
- package/skills/tdd-workflows-tdd-red/SKILL.md +1 -1
- package/skills/tdd-workflows-tdd-refactor/SKILL.md +1 -1
- package/skills/typescript-expert/SKILL.md +1 -1
- package/skills/writing-plans/SKILL.md +3 -3
- package/skills/writing-skills/SKILL.md +8 -8
- package/skills/writing-skills/examples/CLAUDE_MD_TESTING.md +1 -1
- package/skills/writing-skills/references/cso/README.md +3 -3
- package/skills/writing-skills/testing-skills-with-subagents.md +1 -1
@@ -12,12 +12,12 @@ Execute MCP operations via **Gemini CLI** to preserve context budget.
 echo "$ARGUMENTS. Return JSON only per GEMINI.md instructions." | gemini -y -m gemini-2.5-flash
 ```

-2. **Fallback to
-- Use
-- If the
+2. **Fallback to Task agent for MCP management** (if Gemini CLI unavailable):
+- Use Task agent for MCP management to discover and execute tools: Task(subagent_type="general-purpose", prompt="You are an mcp-manager. Discover and execute MCP tools...", description="MCP tool execution")
+- If the Task agent got issues with the scripts of `mcp-management` skill, use `mcp-builder` skill to fix them
 - **DO NOT** create ANY new scripts
-- The
-- If the
+- The Task agent can only use MCP tools if any to achieve this task
+- If the Task agent can't find any suitable tools, just report it back to the main agent to move on to the next step

 ## Important Notes

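The fallback order in the hunk above (Gemini CLI first, Task agent second) can be sketched as a small dispatcher. This is a minimal sketch, not part of the package; `mcp_route` and the returned strings are illustrative only, echoing the commands shown in the diff.

```python
import shutil


def mcp_route(arguments: str, which=shutil.which) -> str:
    """Pick the execution route the merged command describes:
    prefer the Gemini CLI, fall back to a general-purpose Task agent.
    This helper is a hypothetical sketch, not package API."""
    if which("gemini"):
        # Route 1: pipe the request through the Gemini CLI
        return (f'echo "{arguments}. Return JSON only per GEMINI.md instructions." '
                '| gemini -y -m gemini-2.5-flash')
    # Route 2: Gemini CLI unavailable -> dispatch a Task agent instead
    return ('Task(subagent_type="general-purpose", '
            'prompt="You are an mcp-manager. Discover and execute MCP tools...", '
            'description="MCP tool execution")')
```

The `which` parameter is injected only so the routing decision can be exercised without a real `gemini` binary on PATH.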
@@ -102,16 +102,16 @@ After saving the plan, offer execution choice:

 **"Plan complete and saved to `docs/plans/<filename>.md`. Two execution options:**

-**1.
+**1. Task Agent-Driven (this session)** - I dispatch fresh Task agent per task, review between tasks, fast iteration

 **2. Parallel Session (separate)** - Open new session with executing-plans, batch execution with checkpoints

 **Which approach?"**

-**If
+**If Task Agent-Driven chosen:**
 - **REQUIRED SUB-SKILL:** Use superpowers:subagent-driven-development
 - Stay in this session
-- Fresh
+- Fresh Task agent per task + code review

 **If Parallel Session chosen:**
 - Guide them to open new session in worktree
@@ -11,7 +11,7 @@ description: Use when creating new skills, editing existing skills, or verifying

 **Personal skills live in agent-specific directories (`~/.claude/skills` for Claude Code, `~/.codex/skills` for Codex)**

-You write test cases (pressure scenarios with
+You write test cases (pressure scenarios with Task agents), watch them fail (baseline behavior), write the skill (documentation), watch tests pass (agents comply), and refactor (close loopholes).

 **Core principle:** If you didn't watch an agent fail without the skill, you don't know if the skill teaches the right thing.

@@ -31,7 +31,7 @@ A **skill** is a reference guide for proven techniques, patterns, or tools. Skil

 | TDD Concept | Skill Creation |
 |-------------|----------------|
-| **Test case** | Pressure scenario with
+| **Test case** | Pressure scenario with Task agent |
 | **Production code** | Skill document (SKILL.md) |
 | **Test fails (RED)** | Agent violates rule without skill (baseline) |
 | **Test passes (GREEN)** | Agent complies with skill present |
@@ -159,7 +159,7 @@ When the description was changed to just "Use when executing implementation plan

 ```yaml
 # ❌ BAD: Summarizes workflow - Claude may follow this instead of reading skill
-description: Use when executing plans - dispatches
+description: Use when executing plans - dispatches Task agent per task with code review between tasks

 # ❌ BAD: Too much process detail
 description: Use for TDD - write test first, watch it fail, write minimal code, refactor
@@ -233,11 +233,11 @@ search-conversations supports multiple modes and filters. Run --help for details
 **Use cross-references:**
 ```markdown
 # ❌ BAD: Repeat workflow details
-When searching, dispatch
+When searching, dispatch Task agent with template...
 [20 lines of repeated instructions]

 # ✅ GOOD: Reference other skill
-Always use
+Always use Task agents (50-100x context savings). REQUIRED: Use [other-skill-name] for workflow.
 ```

 **Compress examples:**
@@ -245,12 +245,12 @@ Always use subagents (50-100x context savings). REQUIRED: Use [other-skill-name]
 # ❌ BAD: Verbose example (42 words)
 your human partner: "How did we handle authentication errors in React Router before?"
 You: I'll search past conversations for React Router authentication patterns.
-[Dispatch
+[Dispatch Task agent with search query: "React Router authentication error handling 401"]

 # ✅ GOOD: Minimal example (20 words)
 Partner: "How did we handle auth errors in React Router?"
 You: Searching...
-[Dispatch
+[Dispatch Task agent → synthesis]
 ```

 **Eliminate redundancy:**
@@ -536,7 +536,7 @@ Follow the TDD cycle:

 ### RED: Write Failing Test (Baseline)

-Run pressure scenario with
+Run pressure scenario with Task agent WITHOUT the skill. Document exact behavior:
 - What choices did they make?
 - What rationalizations did they use (verbatim)?
 - Which pressures triggered violations?
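The RED/GREEN loop the skill-writing hunk describes (fail without the skill, pass with it) can be sketched as a tiny harness. `run_scenario` and `write_skill` are hypothetical stand-ins for dispatching a pressure scenario and authoring SKILL.md, not real APIs.

```python
def skill_tdd_cycle(run_scenario, write_skill) -> bool:
    """RED: run the pressure scenario without the skill and require a
    baseline violation; GREEN: re-run with the skill and report compliance.
    Both callables are illustrative stand-ins, not package functions."""
    baseline = run_scenario(skill=None)        # RED: expect a violation
    if baseline["complies"]:
        raise ValueError("no failing baseline - the skill may not be needed")
    skill = write_skill(baseline)              # write skill against observed behavior
    result = run_scenario(skill=skill)         # GREEN: expect compliance
    return result["complies"]
```

The `ValueError` branch encodes the core principle quoted in the diff: without a failing baseline you cannot tell whether the skill teaches the right thing.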
package/package.json CHANGED
package/rules-index.json CHANGED
@@ -38,21 +38,21 @@ Optimize application performance end-to-end using specialized performance and op

 ### 1. Comprehensive Performance Profiling

-- Use Task tool with subagent_type="
+- Use Task tool with subagent_type="general-purpose"
 - Prompt: "Profile application performance comprehensively for: $ARGUMENTS. Generate flame graphs for CPU usage, heap dumps for memory analysis, trace I/O operations, and identify hot paths. Use APM tools like DataDog or New Relic if available. Include database query profiling, API response times, and frontend rendering metrics. Establish performance baselines for all critical user journeys."
 - Context: Initial performance investigation
 - Output: Detailed performance profile with flame graphs, memory analysis, bottleneck identification, baseline metrics

 ### 2. Observability Stack Assessment

-- Use Task tool with subagent_type="
+- Use Task tool with subagent_type="general-purpose"
 - Prompt: "Assess current observability setup for: $ARGUMENTS. Review existing monitoring, distributed tracing with OpenTelemetry, log aggregation, and metrics collection. Identify gaps in visibility, missing metrics, and areas needing better instrumentation. Recommend APM tool integration and custom metrics for business-critical operations."
 - Context: Performance profile from step 1
 - Output: Observability assessment report, instrumentation gaps, monitoring recommendations

 ### 3. User Experience Analysis

-- Use Task tool with subagent_type="
+- Use Task tool with subagent_type="general-purpose"
 - Prompt: "Analyze user experience metrics for: $ARGUMENTS. Measure Core Web Vitals (LCP, FID, CLS), page load times, time to interactive, and perceived performance. Use Real User Monitoring (RUM) data if available. Identify user journeys with poor performance and their business impact."
 - Context: Performance baselines from step 1
 - Output: UX performance report, Core Web Vitals analysis, user impact assessment
@@ -61,21 +61,21 @@ Optimize application performance end-to-end using specialized performance and op

 ### 4. Database Performance Optimization

-- Use Task tool with subagent_type="
+- Use Task tool with subagent_type="general-purpose"
 - Prompt: "Optimize database performance for: $ARGUMENTS based on profiling data: {context_from_phase_1}. Analyze slow query logs, create missing indexes, optimize execution plans, implement query result caching with Redis/Memcached. Review connection pooling, prepared statements, and batch processing opportunities. Consider read replicas and database sharding if needed."
 - Context: Performance bottlenecks from phase 1
 - Output: Optimized queries, new indexes, caching strategy, connection pool configuration

 ### 5. Backend Code & API Optimization

-- Use Task tool with subagent_type="
+- Use Task tool with subagent_type="general-purpose"
 - Prompt: "Optimize backend services for: $ARGUMENTS targeting bottlenecks: {context_from_phase_1}. Implement efficient algorithms, add application-level caching, optimize N+1 queries, use async/await patterns effectively. Implement pagination, response compression, GraphQL query optimization, and batch API operations. Add circuit breakers and bulkheads for resilience."
 - Context: Database optimizations from step 4, profiling data from phase 1
 - Output: Optimized backend code, caching implementation, API improvements, resilience patterns

 ### 6. Microservices & Distributed System Optimization

-- Use Task tool with subagent_type="
+- Use Task tool with subagent_type="general-purpose"
 - Prompt: "Optimize distributed system performance for: $ARGUMENTS. Analyze service-to-service communication, implement service mesh optimizations, optimize message queue performance (Kafka/RabbitMQ), reduce network hops. Implement distributed caching strategies and optimize serialization/deserialization."
 - Context: Backend optimizations from step 5
 - Output: Service communication improvements, message queue optimization, distributed caching setup
@@ -84,21 +84,21 @@ Optimize application performance end-to-end using specialized performance and op

 ### 7. Frontend Bundle & Loading Optimization

-- Use Task tool with subagent_type="
+- Use Task tool with subagent_type="general-purpose"
 - Prompt: "Optimize frontend performance for: $ARGUMENTS targeting Core Web Vitals: {context_from_phase_1}. Implement code splitting, tree shaking, lazy loading, and dynamic imports. Optimize bundle sizes with webpack/rollup analysis. Implement resource hints (prefetch, preconnect, preload). Optimize critical rendering path and eliminate render-blocking resources."
 - Context: UX analysis from phase 1, backend optimizations from phase 2
 - Output: Optimized bundles, lazy loading implementation, improved Core Web Vitals

 ### 8. CDN & Edge Optimization

-- Use Task tool with subagent_type="
+- Use Task tool with subagent_type="general-purpose"
 - Prompt: "Optimize CDN and edge performance for: $ARGUMENTS. Configure CloudFlare/CloudFront for optimal caching, implement edge functions for dynamic content, set up image optimization with responsive images and WebP/AVIF formats. Configure HTTP/2 and HTTP/3, implement Brotli compression. Set up geographic distribution for global users."
 - Context: Frontend optimizations from step 7
 - Output: CDN configuration, edge caching rules, compression setup, geographic optimization

 ### 9. Mobile & Progressive Web App Optimization

-- Use Task tool with subagent_type="
+- Use Task tool with subagent_type="general-purpose"
 - Prompt: "Optimize mobile experience for: $ARGUMENTS. Implement service workers for offline functionality, optimize for slow networks with adaptive loading. Reduce JavaScript execution time for mobile CPUs. Implement virtual scrolling for long lists. Optimize touch responsiveness and smooth animations. Consider React Native/Flutter specific optimizations if applicable."
 - Context: Frontend optimizations from steps 7-8
 - Output: Mobile-optimized code, PWA implementation, offline functionality
@@ -107,14 +107,14 @@ Optimize application performance end-to-end using specialized performance and op

 ### 10. Comprehensive Load Testing

-- Use Task tool with subagent_type="
+- Use Task tool with subagent_type="general-purpose"
 - Prompt: "Conduct comprehensive load testing for: $ARGUMENTS using k6/Gatling/Artillery. Design realistic load scenarios based on production traffic patterns. Test normal load, peak load, and stress scenarios. Include API testing, browser-based testing, and WebSocket testing if applicable. Measure response times, throughput, error rates, and resource utilization at various load levels."
 - Context: All optimizations from phases 1-3
 - Output: Load test results, performance under load, breaking points, scalability analysis

 ### 11. Performance Regression Testing

-- Use Task tool with subagent_type="
+- Use Task tool with subagent_type="general-purpose"
 - Prompt: "Create automated performance regression tests for: $ARGUMENTS. Set up performance budgets for key metrics, integrate with CI/CD pipeline using GitHub Actions or similar. Create Lighthouse CI tests for frontend, API performance tests with Artillery, and database performance benchmarks. Implement automatic rollback triggers for performance regressions."
 - Context: Load test results from step 10, baseline metrics from phase 1
 - Output: Performance test suite, CI/CD integration, regression prevention system
@@ -123,14 +123,14 @@ Optimize application performance end-to-end using specialized performance and op

 ### 12. Production Monitoring Setup

-- Use Task tool with subagent_type="
+- Use Task tool with subagent_type="general-purpose"
 - Prompt: "Implement production performance monitoring for: $ARGUMENTS. Set up APM with DataDog/New Relic/Dynatrace, configure distributed tracing with OpenTelemetry, implement custom business metrics. Create Grafana dashboards for key metrics, set up PagerDuty alerts for performance degradation. Define SLIs/SLOs for critical services with error budgets."
 - Context: Performance improvements from all previous phases
 - Output: Monitoring dashboards, alert rules, SLI/SLO definitions, runbooks

 ### 13. Continuous Performance Optimization

-- Use Task tool with subagent_type="
+- Use Task tool with subagent_type="general-purpose"
 - Prompt: "Establish continuous optimization process for: $ARGUMENTS. Create performance budget tracking, implement A/B testing for performance changes, set up continuous profiling in production. Document optimization opportunities backlog, create capacity planning models, and establish regular performance review cycles."
 - Context: Monitoring setup from step 12, all previous optimization work
 - Output: Performance budget tracking, optimization backlog, capacity planning, review process
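Every step in the performance-optimization hunks makes the same change: the empty `subagent_type="` is filled in with `"general-purpose"`. The invocation shape those steps spell out can be sketched as a small builder; the dict shape is an assumption drawn only from the three fields visible in the diff (`subagent_type`, `prompt`, `description`).

```python
def task_invocation(prompt: str, description: str,
                    subagent_type: str = "general-purpose") -> dict:
    """Build the Task tool call each optimization step now spells out.
    A sketch of the argument shape only; any field beyond the three
    shown in the diff would be an assumption."""
    return {
        "subagent_type": subagent_type,
        "prompt": prompt,
        "description": description,
    }


# Step 1 of the workflow, expressed through the builder (prompt abridged)
profiling_step = task_invocation(
    prompt="Profile application performance comprehensively for: $ARGUMENTS...",
    description="Comprehensive performance profiling",
)
```

Making `"general-purpose"` the default mirrors the diff: every one of the thirteen steps now uses that same subagent type.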
@@ -402,7 +402,7 @@ Chain multiple agents together.
 ```python
 from azure.ai.agents import ConnectedAgentTool

-# Create
+# Create Task agent
 research_agent = client.create_agent(
     model=os.environ["MODEL_DEPLOYMENT_NAME"],
     name="researcher",
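The hunk above only renames a comment inside the "chain multiple agents" example; the chaining pattern it sits in can be sketched generically. The functions below are hypothetical stand-ins for agents, not the `azure.ai.agents` client API.

```python
def chain_agents(agents, task: str) -> str:
    """Feed each agent's output to the next agent in sequence - the
    pattern the ConnectedAgentTool example builds up. The callables
    here are illustrative stand-ins, not real agent clients."""
    result = task
    for agent in agents:
        result = agent(result)
    return result


# Hypothetical researcher and summarizer agents for illustration
researcher = lambda t: t + " -> findings"
summarizer = lambda t: t + " -> summary"
```

In the real SDK each link would be an agent invocation; the pipeline order (research feeding summarization) is the part the sketch preserves.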
@@ -61,19 +61,19 @@ Orchestrate end-to-end feature development from requirements to production deplo
 ## Phase 1: Discovery & Requirements Planning

 1. **Business Analysis & Requirements**
-- Use Task tool with subagent_type="
+- Use Task tool with subagent_type="general-purpose"
 - Prompt: "Analyze feature requirements for: $ARGUMENTS. Define user stories, acceptance criteria, success metrics, and business value. Identify stakeholders, dependencies, and risks. Create feature specification document with clear scope boundaries."
 - Expected output: Requirements document with user stories, success metrics, risk assessment
 - Context: Initial feature request and business context

 2. **Technical Architecture Design**
-- Use Task tool with subagent_type="
+- Use Task tool with subagent_type="general-purpose"
 - Prompt: "Design technical architecture for feature: $ARGUMENTS. Using requirements: [include business analysis from step 1]. Define service boundaries, API contracts, data models, integration points, and technology stack. Consider scalability, performance, and security requirements."
 - Expected output: Technical design document with architecture diagrams, API specifications, data models
 - Context: Business requirements, existing system architecture

 3. **Feasibility & Risk Assessment**
-- Use Task tool with subagent_type="
+- Use Task tool with subagent_type="general-purpose"
 - Prompt: "Assess security implications and risks for feature: $ARGUMENTS. Review architecture: [include technical design from step 2]. Identify security requirements, compliance needs, data privacy concerns, and potential vulnerabilities."
 - Expected output: Security assessment with risk matrix, compliance checklist, mitigation strategies
 - Context: Technical design, regulatory requirements
|
|
|
81
81
|
## Phase 2: Implementation & Development
|
|
82
82
|
|
|
83
83
|
4. **Backend Services Implementation**
|
|
84
|
-
- Use Task tool with subagent_type="
|
|
84
|
+
- Use Task tool with subagent_type="general-purpose"
|
|
85
85
|
- Prompt: "Implement backend services for: $ARGUMENTS. Follow technical design: [include architecture from step 2]. Build RESTful/GraphQL APIs, implement business logic, integrate with data layer, add resilience patterns (circuit breakers, retries), implement caching strategies. Include feature flags for gradual rollout."
|
|
86
86
|
- Expected output: Backend services with APIs, business logic, database integration, feature flags
|
|
87
87
|
- Context: Technical design, API contracts, data models
|
|
88
88
|
|
|
89
89
|
5. **Frontend Implementation**
|
|
90
|
-
- Use Task tool with subagent_type="
|
|
90
|
+
- Use Task tool with subagent_type="general-purpose"
|
|
91
91
|
- Prompt: "Build frontend components for: $ARGUMENTS. Integrate with backend APIs: [include API endpoints from step 4]. Implement responsive UI, state management, error handling, loading states, and analytics tracking. Add feature flag integration for A/B testing capabilities."
|
|
92
92
|
- Expected output: Frontend components with API integration, state management, analytics
|
|
93
93
|
- Context: Backend APIs, UI/UX designs, user stories
|
|
94
94
|
|
|
95
95
|
6. **Data Pipeline & Integration**
|
|
96
|
-
- Use Task tool with subagent_type="
|
|
96
|
+
- Use Task tool with subagent_type="general-purpose"
|
|
97
97
|
- Prompt: "Build data pipelines for: $ARGUMENTS. Design ETL/ELT processes, implement data validation, create analytics events, set up data quality monitoring. Integrate with product analytics platforms for feature usage tracking."
|
|
98
98
|
- Expected output: Data pipelines, analytics events, data quality checks
|
|
99
99
|
- Context: Data requirements, analytics needs, existing data infrastructure
|
|
@@ -101,19 +101,19 @@ Orchestrate end-to-end feature development from requirements to production deplo
|
|
|
101
101
|
## Phase 3: Testing & Quality Assurance
|
|
102
102
|
|
|
103
103
|
7. **Automated Test Suite**
|
|
104
|
-
- Use Task tool with subagent_type="
|
|
104
|
+
- Use Task tool with subagent_type="general-purpose"
|
|
105
105
|
- Prompt: "Create comprehensive test suite for: $ARGUMENTS. Write unit tests for backend: [from step 4] and frontend: [from step 5]. Add integration tests for API endpoints, E2E tests for critical user journeys, performance tests for scalability validation. Ensure minimum 80% code coverage."
|
|
106
106
|
- Expected output: Test suites with unit, integration, E2E, and performance tests
|
|
107
107
|
- Context: Implementation code, acceptance criteria, test requirements
|
|
108
108
|
|
|
109
109
|
8. **Security Validation**
|
|
110
|
-
- Use Task tool with subagent_type="
|
|
110
|
+
- Use Task tool with subagent_type="general-purpose"
|
|
111
111
|
- Prompt: "Perform security testing for: $ARGUMENTS. Review implementation: [include backend and frontend from steps 4-5]. Run OWASP checks, penetration testing, dependency scanning, and compliance validation. Verify data encryption, authentication, and authorization."
|
|
112
112
|
- Expected output: Security test results, vulnerability report, remediation actions
|
|
113
113
|
- Context: Implementation code, security requirements
|
|
114
114
|
|
|
115
115
|
9. **Performance Optimization**
|
|
116
|
-
- Use Task tool with subagent_type="
|
|
116
|
+
- Use Task tool with subagent_type="general-purpose"
|
|
117
117
|
- Prompt: "Optimize performance for: $ARGUMENTS. Analyze backend services: [from step 4] and frontend: [from step 5]. Profile code, optimize queries, implement caching, reduce bundle sizes, improve load times. Set up performance budgets and monitoring."
|
|
118
118
|
- Expected output: Performance improvements, optimization report, performance metrics
|
|
119
119
|
- Context: Implementation code, performance requirements
|
|
@@ -121,19 +121,19 @@ Orchestrate end-to-end feature development from requirements to production deplo
 ## Phase 4: Deployment & Monitoring
 
 10. **Deployment Strategy & Pipeline**
-- Use Task tool with subagent_type="
+- Use Task tool with subagent_type="general-purpose"
 - Prompt: "Prepare deployment for: $ARGUMENTS. Create CI/CD pipeline with automated tests: [from step 7]. Configure feature flags for gradual rollout, implement blue-green deployment, set up rollback procedures. Create deployment runbook and rollback plan."
 - Expected output: CI/CD pipeline, deployment configuration, rollback procedures
 - Context: Test suites, infrastructure requirements, deployment strategy
 
 11. **Observability & Monitoring**
-- Use Task tool with subagent_type="
+- Use Task tool with subagent_type="general-purpose"
 - Prompt: "Set up observability for: $ARGUMENTS. Implement distributed tracing, custom metrics, error tracking, and alerting. Create dashboards for feature usage, performance metrics, error rates, and business KPIs. Set up SLOs/SLIs with automated alerts."
 - Expected output: Monitoring dashboards, alerts, SLO definitions, observability infrastructure
 - Context: Feature implementation, success metrics, operational requirements
 
 12. **Documentation & Knowledge Transfer**
-- Use Task tool with subagent_type="
+- Use Task tool with subagent_type="general-purpose"
 - Prompt: "Generate comprehensive documentation for: $ARGUMENTS. Create API documentation, user guides, deployment guides, troubleshooting runbooks. Include architecture diagrams, data flow diagrams, and integration guides. Generate automated changelog from commits."
 - Expected output: API docs, user guides, runbooks, architecture documentation
 - Context: All previous phases' outputs
@@ -528,9 +528,9 @@ read all the files in src/ and then tell me about the architecture
 ```
 read the main entry point and top-level directories to understand the architecture. don't read every file - just enough to explain the main patterns.
 ```
-Or use
+Or use Task agents:
 ```
-use a
+use a Task agent to investigate the codebase architecture and report a summary.
 ```
 
 ---
@@ -250,11 +250,11 @@ Analyze and fix the GitHub issue: $ARGUMENTS.
 
 Run `/fix-issue 1234` to invoke it. Use `disable-model-invocation: true` for workflows with side effects that you want to trigger manually.
 
-### Create custom
+### Create custom Task agents
 
 > **Tip:** Define specialized assistants in `.claude/agents/` that Claude can delegate to for isolated tasks.
 
-
+Task agents run in their own context with their own set of allowed tools. They're useful for tasks that read many files or need specialized focus without cluttering your main conversation.
 
 ```markdown
 ---
@@ -272,15 +272,15 @@ You are a senior security engineer. Review code for:
 Provide specific line references and suggested fixes.
 ```
 
-Tell Claude to use
+Tell Claude to use Task agents explicitly: *"Use a Task agent to review this code for security issues."*
 
 ### Install plugins
 
 > **Tip:** Run `/plugin` to browse the marketplace. Plugins add skills, tools, and integrations without configuration.
 
-Plugins bundle skills, hooks,
+Plugins bundle skills, hooks, Task agents, and MCP servers into a single installable unit from the community and Anthropic.
 
-For guidance on choosing between skills,
+For guidance on choosing between skills, Task agents, hooks, and MCP, see Extend Claude Code.
 
 ***
 
@@ -350,23 +350,23 @@ During long sessions, Claude's context window can fill with irrelevant conversat
 * For more control, run `/compact <instructions>`, like `/compact Focus on the API changes`
 * Customize compaction behavior in CLAUDE.md with instructions like `"When compacting, always preserve the full list of modified files and any test commands"` to ensure critical context survives summarization
 
-### Use
+### Use Task agents for investigation
 
-> **Tip:** Delegate research with `"use
+> **Tip:** Delegate research with `"use Task agents to investigate X"`. They explore in a separate context, keeping your main conversation clean for implementation.
 
-Since context is your fundamental constraint,
+Since context is your fundamental constraint, Task agents are one of the most powerful tools available. When Claude researches a codebase it reads lots of files, all of which consume your context. Task agents run in separate context windows and report back summaries:
 
 ```
-Use
+Use Task agents to investigate how our authentication system handles token
 refresh, and whether we have any existing OAuth utilities I should reuse.
 ```
 
-The
+The Task agent explores the codebase, reads relevant files, and reports back with findings, all without cluttering your main conversation.
 
-You can also use
+You can also use Task agents for verification after Claude implements something:
 
 ```
-use a
+use a Task agent to review this code for edge cases
 ```
 
 ### Rewind with checkpoints
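The isolation pattern in the hunk above — exploration happens in a separate context and only a summary crosses back — can be sketched in a few lines. This is a hypothetical illustration, not the real Task tool API; `run_subagent` and `explore` are invented names:

```python
# Hypothetical sketch: why a subagent's exploration never inflates
# the caller's context. Not the real Task tool API.
def run_subagent(task, explore):
    sub_context = []                      # the subagent's own context
    for observation in explore(task):
        sub_context.append(observation)   # file reads accumulate here
    # Only a short summary crosses back to the caller.
    return f"summary: {len(sub_context)} observations about {task}"

main_context = ["user: investigate token refresh"]
main_context.append(
    run_subagent("token refresh",
                 lambda t: [f"read file {i}" for i in range(100)])
)
# 100 file reads happened, yet the main context holds just 2 entries.
```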
@@ -487,7 +487,7 @@ These are common mistakes. Recognizing them early saves time:
 * **The trust-then-verify gap.** Claude produces a plausible-looking implementation that doesn't handle edge cases.
 > **Fix**: Always provide verification (tests, scripts, screenshots). If you can't verify it, don't ship it.
 * **The infinite exploration.** You ask Claude to "investigate" something without scoping it. Claude reads hundreds of files, filling the context.
-> **Fix**: Scope investigations narrowly or use
+> **Fix**: Scope investigations narrowly or use Task agents so the exploration doesn't consume your main context.
 
 ***
 
@@ -504,7 +504,7 @@ Over time, you'll develop intuition that no guide can capture. You'll know when
 ## Related resources
 
 * **How Claude Code works** - Understand the agentic loop, tools, and context management
-* **Extend Claude Code** - Choose between skills, hooks, MCP,
+* **Extend Claude Code** - Choose between skills, hooks, MCP, Task agents, and plugins
 * **Common workflows** - Step-by-step recipes for debugging, testing, PRs, and more
 * **CLAUDE.md** - Store project conventions and persistent context
 
@@ -54,7 +54,7 @@ All documentation is written to a new `C4-Documentation/` directory in the repos
 
 For each directory, starting from the deepest:
 
-- Use Task tool with subagent_type="
+- Use Task tool with subagent_type="general-purpose"
 - Prompt: |
   Analyze the code in directory: [directory_path]
 
@@ -110,7 +110,7 @@ For each directory, starting from the deepest:
 
 For each identified component:
 
-- Use Task tool with subagent_type="
+- Use Task tool with subagent_type="general-purpose"
 - Prompt: |
   Synthesize the following C4 Code-level documentation files into a logical component:
 
@@ -153,7 +153,7 @@ For each identified component:
 
 ### 2.3 Create Master Component Index
 
-- Use Task tool with subagent_type="
+- Use Task tool with subagent_type="general-purpose"
 - Prompt: |
   Create a master component index that lists all components in the system.
 
@@ -188,7 +188,7 @@ For each identified component:
 
 ### 3.2 Map Components to Containers
 
-- Use Task tool with subagent_type="
+- Use Task tool with subagent_type="general-purpose"
 - Prompt: |
   Synthesize components into containers based on deployment definitions.
 
@@ -261,7 +261,7 @@ For each identified component:
 
 ### 4.2 Create Context Documentation
 
-- Use Task tool with subagent_type="
+- Use Task tool with subagent_type="general-purpose"
 - Prompt: |
   Create comprehensive C4 Context-level documentation for the system.
 
@@ -41,13 +41,13 @@ Orchestrate comprehensive multi-dimensional code review using specialized review
 Use Task tool to orchestrate quality and architecture agents in parallel:
 
 ### 1A. Code Quality Analysis
-- Use Task tool with subagent_type="
+- Use Task tool with subagent_type="general-purpose"
 - Prompt: "Perform comprehensive code quality review for: $ARGUMENTS. Analyze code complexity, maintainability index, technical debt, code duplication, naming conventions, and adherence to Clean Code principles. Integrate with SonarQube, CodeQL, and Semgrep for static analysis. Check for code smells, anti-patterns, and violations of SOLID principles. Generate cyclomatic complexity metrics and identify refactoring opportunities."
 - Expected output: Quality metrics, code smell inventory, refactoring recommendations
 - Context: Initial codebase analysis, no dependencies on other phases
 
 ### 1B. Architecture & Design Review
-- Use Task tool with subagent_type="
+- Use Task tool with subagent_type="general-purpose"
 - Prompt: "Review architectural design patterns and structural integrity in: $ARGUMENTS. Evaluate microservices boundaries, API design, database schema, dependency management, and adherence to Domain-Driven Design principles. Check for circular dependencies, inappropriate coupling, missing abstractions, and architectural drift. Verify compliance with enterprise architecture standards and cloud-native patterns."
 - Expected output: Architecture assessment, design pattern analysis, structural recommendations
 - Context: Runs parallel with code quality analysis
@@ -57,13 +57,13 @@ Use Task tool to orchestrate quality and architecture agents in parallel:
 Use Task tool with security and performance agents, incorporating Phase 1 findings:
 
 ### 2A. Security Vulnerability Assessment
-- Use Task tool with subagent_type="
+- Use Task tool with subagent_type="general-purpose"
 - Prompt: "Execute comprehensive security audit on: $ARGUMENTS. Perform OWASP Top 10 analysis, dependency vulnerability scanning with Snyk/Trivy, secrets detection with GitLeaks, input validation review, authentication/authorization assessment, and cryptographic implementation review. Include findings from Phase 1 architecture review: {phase1_architecture_context}. Check for SQL injection, XSS, CSRF, insecure deserialization, and configuration security issues."
 - Expected output: Vulnerability report, CVE list, security risk matrix, remediation steps
 - Context: Incorporates architectural vulnerabilities identified in Phase 1B
 
 ### 2B. Performance & Scalability Analysis
-- Use Task tool with subagent_type="
+- Use Task tool with subagent_type="general-purpose"
 - Prompt: "Conduct performance analysis and scalability assessment for: $ARGUMENTS. Profile code for CPU/memory hotspots, analyze database query performance, review caching strategies, identify N+1 problems, assess connection pooling, and evaluate asynchronous processing patterns. Consider architectural findings from Phase 1: {phase1_architecture_context}. Check for memory leaks, resource contention, and bottlenecks under load."
 - Expected output: Performance metrics, bottleneck analysis, optimization recommendations
 - Context: Uses architecture insights to identify systemic performance issues
@@ -73,13 +73,13 @@ Use Task tool with security and performance agents, incorporating Phase 1 findin
 Use Task tool for test and documentation quality assessment:
 
 ### 3A. Test Coverage & Quality Analysis
-- Use Task tool with subagent_type="
+- Use Task tool with subagent_type="general-purpose"
 - Prompt: "Evaluate testing strategy and implementation for: $ARGUMENTS. Analyze unit test coverage, integration test completeness, end-to-end test scenarios, test pyramid adherence, and test maintainability. Review test quality metrics including assertion density, test isolation, mock usage, and flakiness. Consider security and performance test requirements from Phase 2: {phase2_security_context}, {phase2_performance_context}. Verify TDD practices if --tdd-review flag is set."
 - Expected output: Coverage report, test quality metrics, testing gap analysis
 - Context: Incorporates security and performance testing requirements from Phase 2
 
 ### 3B. Documentation & API Specification Review
-- Use Task tool with subagent_type="
+- Use Task tool with subagent_type="general-purpose"
 - Prompt: "Review documentation completeness and quality for: $ARGUMENTS. Assess inline code documentation, API documentation (OpenAPI/Swagger), architecture decision records (ADRs), README completeness, deployment guides, and runbooks. Verify documentation reflects actual implementation based on all previous phase findings: {phase1_context}, {phase2_context}. Check for outdated documentation, missing examples, and unclear explanations."
 - Expected output: Documentation coverage report, inconsistency list, improvement recommendations
 - Context: Cross-references all previous findings to ensure documentation accuracy
@@ -89,13 +89,13 @@ Use Task tool for test and documentation quality assessment:
 Use Task tool to verify framework-specific and industry best practices:
 
 ### 4A. Framework & Language Best Practices
-- Use Task tool with subagent_type="
+- Use Task tool with subagent_type="general-purpose"
 - Prompt: "Verify adherence to framework and language best practices for: $ARGUMENTS. Check modern JavaScript/TypeScript patterns, React hooks best practices, Python PEP compliance, Java enterprise patterns, Go idiomatic code, or framework-specific conventions (based on --framework flag). Review package management, build configuration, environment handling, and deployment practices. Include all quality issues from previous phases: {all_previous_contexts}."
 - Expected output: Best practices compliance report, modernization recommendations
 - Context: Synthesizes all previous findings for framework-specific guidance
 
 ### 4B. CI/CD & DevOps Practices Review
-- Use Task tool with subagent_type="
+- Use Task tool with subagent_type="general-purpose"
 - Prompt: "Review CI/CD pipeline and DevOps practices for: $ARGUMENTS. Evaluate build automation, test automation integration, deployment strategies (blue-green, canary), infrastructure as code, monitoring/observability setup, and incident response procedures. Assess pipeline security, artifact management, and rollback capabilities. Consider all issues identified in previous phases that impact deployment: {all_critical_issues}."
 - Expected output: Pipeline assessment, DevOps maturity evaluation, automation recommendations
 - Context: Focuses on operationalizing fixes for all identified issues
@@ -157,11 +157,11 @@ Four strategies address different aspects of context degradation:
 
 **Compress**: Reduce tokens while preserving information through summarization, abstraction, and observation masking. This extends effective context capacity.
 
-**Isolate**: Split context across
+**Isolate**: Split context across Task agents or sessions to prevent any single context from growing large enough to degrade. This is the most aggressive strategy but often the most effective.
 
 ### Architectural Patterns
 
-Implement these strategies through specific architectural patterns. Use just-in-time context loading to retrieve information only when needed. Use observation masking to replace verbose tool outputs with compact references. Use
+Implement these strategies through specific architectural patterns. Use just-in-time context loading to retrieve information only when needed. Use observation masking to replace verbose tool outputs with compact references. Use Task agent architectures to isolate context for different tasks. Use compaction to summarize growing context before it exceeds limits.
 
 ## Examples
 
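The observation masking mentioned in the hunk above — swapping a verbose tool output for a compact reference while keeping the full text retrievable out-of-band — can be sketched minimally. `mask_observation` and the reference format are hypothetical, not an API from this package:

```python
# Illustrative sketch of observation masking: a verbose tool output
# is replaced in the context with a compact placeholder, and the full
# text is stored out-of-band under a reference key.
def mask_observation(history, index, store):
    ref = f"obs-{index}"
    store[ref] = history[index]               # full output kept out-of-band
    history[index] = f"[masked: see {ref}]"   # compact placeholder in context
    return ref

history = ["user: list files", "tool: " + "file.py\n" * 500]
store = {}
ref = mask_observation(history, 1, store)
# history now carries ~20 characters where ~4000 used to be.
```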
@@ -25,13 +25,13 @@ Context engineering curates the smallest high-signal token set for LLM tasks. Th
 1. **Context quality > quantity** - High-signal tokens beat exhaustive content
 2. **Attention is finite** - U-shaped curve favors beginning/end positions
 3. **Progressive disclosure** - Load information just-in-time
-4. **Isolation prevents degradation** - Partition work across
+4. **Isolation prevents degradation** - Partition work across Task agents
 5. **Measure before optimizing** - Know your baseline
 
 **IMPORTANT:**
 - Sacrifice grammar for the sake of concision.
 - Ensure token efficiency while maintaining high quality.
-- Pass these rules to
+- Pass these rules to Task agents.
 
 ## Quick Reference
 
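The progressive-disclosure principle from the list above ("load information just-in-time") can be sketched as a lazy loader: pay for a summary up front and load the body only on demand. `LazyDoc` and its methods are illustrative names, not part of this package:

```python
# Hypothetical sketch of progressive disclosure: defer loading a
# document body until it is actually requested.
class LazyDoc:
    def __init__(self, name, loader):
        self.name = name
        self._loader = loader   # deferred; not called at construction
        self._body = None

    def summary(self):
        # Cheap: tokens spent only on the name.
        return f"{self.name} (body not loaded)"

    def body(self):
        # Expensive: loaded once, on first request.
        if self._body is None:
            self._body = self._loader()
        return self._body

doc = LazyDoc("auth-guide", lambda: "full text " * 1000)
listing = doc.summary()   # no body load triggered
full = doc.body()         # body materialized here
```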
@@ -61,7 +61,7 @@ Context engineering curates the smallest high-signal token set for LLM tasks. Th
 1. **Write**: Save context externally (scratchpads, files)
 2. **Select**: Pull only relevant context (retrieval, filtering)
 3. **Compress**: Reduce tokens while preserving info (summarization)
-4. **Isolate**: Split across
+4. **Isolate**: Split across Task agents (partitioning)
 
 ## Anti-Patterns
 
@@ -75,7 +75,7 @@ Context engineering curates the smallest high-signal token set for LLM tasks. Th
 
 1. Place critical info at beginning/end of context
 2. Implement compaction at 70-80% utilization
-3. Use
+3. Use Task agents for context isolation, not role-play
 4. Design tools with 4-question framework (what, when, inputs, returns)
 5. Optimize for tokens-per-task, not tokens-per-request
 6. Validate with probe-based evaluation
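The "compact at 70-80% utilization" heuristic from the checklist above can be sketched as a threshold check before summarization. `maybe_compact` is a hypothetical stand-in, and the character-count tokenizer and string summary are toy substitutes for real implementations:

```python
# Hedged sketch of threshold-triggered compaction: summarize older
# messages once context utilization crosses ~75%.
def maybe_compact(messages, limit, threshold=0.75):
    count_tokens = lambda ms: sum(len(m) for m in ms)  # toy tokenizer
    if count_tokens(messages) / limit >= threshold:
        # Keep the most recent two messages verbatim; fold the rest
        # into a single summary entry at the front.
        summary = f"summary of {len(messages) - 2} earlier messages"
        return [summary] + messages[-2:]
    return messages

msgs = ["x" * 100] * 10                      # ~1000 "tokens"
compacted = maybe_compact(msgs, limit=1200)  # 1000/1200 ≈ 0.83: compacts
untouched = maybe_compact(msgs, limit=5000)  # 1000/5000 = 0.20: unchanged
```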
@@ -61,7 +61,7 @@ Predictable degradation as context grows. Not binary - a continuum.
 1. **Write**: Save externally (scratchpads, files)
 2. **Select**: Pull only relevant (retrieval, filtering)
 3. **Compress**: Reduce tokens (summarization)
-4. **Isolate**: Split across
+4. **Isolate**: Split across Task agents (partitioning)
 
 ## Detection Heuristics
 
@@ -51,7 +51,7 @@ context += [unique_content] # Variable last
 
 ## Context Partitioning
 
-Split work across
+Split work across Task agents with isolated contexts.
 
 ```python
 result = await sub_agent.process(subtask, clean_context=True)