@uniswap/ai-toolkit-nx-claude 0.5.29 → 0.5.30-next.1
- package/dist/cli-generator.cjs +28 -59
- package/dist/packages/ai-toolkit-nx-claude/src/cli-generator.d.ts +8 -10
- package/dist/packages/ai-toolkit-nx-claude/src/cli-generator.d.ts.map +1 -1
- package/dist/packages/ai-toolkit-nx-claude/src/index.d.ts +0 -1
- package/dist/packages/ai-toolkit-nx-claude/src/index.d.ts.map +1 -1
- package/generators.json +0 -15
- package/package.json +4 -35
- package/dist/content/agents/agnostic/CLAUDE.md +0 -282
- package/dist/content/agents/agnostic/agent-capability-analyst.md +0 -575
- package/dist/content/agents/agnostic/agent-optimizer.md +0 -396
- package/dist/content/agents/agnostic/agent-orchestrator.md +0 -475
- package/dist/content/agents/agnostic/cicd-agent.md +0 -301
- package/dist/content/agents/agnostic/claude-agent-discovery.md +0 -304
- package/dist/content/agents/agnostic/claude-docs-fact-checker.md +0 -435
- package/dist/content/agents/agnostic/claude-docs-initializer.md +0 -782
- package/dist/content/agents/agnostic/claude-docs-manager.md +0 -595
- package/dist/content/agents/agnostic/code-explainer.md +0 -269
- package/dist/content/agents/agnostic/code-generator.md +0 -785
- package/dist/content/agents/agnostic/commit-message-generator.md +0 -101
- package/dist/content/agents/agnostic/context-loader.md +0 -432
- package/dist/content/agents/agnostic/debug-assistant.md +0 -321
- package/dist/content/agents/agnostic/doc-writer.md +0 -536
- package/dist/content/agents/agnostic/feedback-collector.md +0 -165
- package/dist/content/agents/agnostic/infrastructure-agent.md +0 -406
- package/dist/content/agents/agnostic/migration-assistant.md +0 -489
- package/dist/content/agents/agnostic/pattern-learner.md +0 -481
- package/dist/content/agents/agnostic/performance-analyzer.md +0 -528
- package/dist/content/agents/agnostic/plan-reviewer.md +0 -173
- package/dist/content/agents/agnostic/planner.md +0 -235
- package/dist/content/agents/agnostic/pr-creator.md +0 -498
- package/dist/content/agents/agnostic/pr-reviewer.md +0 -142
- package/dist/content/agents/agnostic/prompt-engineer.md +0 -541
- package/dist/content/agents/agnostic/refactorer.md +0 -311
- package/dist/content/agents/agnostic/researcher.md +0 -349
- package/dist/content/agents/agnostic/security-analyzer.md +0 -1087
- package/dist/content/agents/agnostic/stack-splitter.md +0 -642
- package/dist/content/agents/agnostic/style-enforcer.md +0 -568
- package/dist/content/agents/agnostic/test-runner.md +0 -481
- package/dist/content/agents/agnostic/test-writer.md +0 -292
- package/dist/content/commands/agnostic/CLAUDE.md +0 -207
- package/dist/content/commands/agnostic/address-pr-issues.md +0 -205
- package/dist/content/commands/agnostic/auto-spec.md +0 -386
- package/dist/content/commands/agnostic/claude-docs.md +0 -409
- package/dist/content/commands/agnostic/claude-init-plus.md +0 -439
- package/dist/content/commands/agnostic/create-pr.md +0 -79
- package/dist/content/commands/agnostic/daily-standup.md +0 -185
- package/dist/content/commands/agnostic/deploy.md +0 -441
- package/dist/content/commands/agnostic/execute-plan.md +0 -167
- package/dist/content/commands/agnostic/explain-file.md +0 -303
- package/dist/content/commands/agnostic/explore.md +0 -82
- package/dist/content/commands/agnostic/fix-bug.md +0 -273
- package/dist/content/commands/agnostic/gen-tests.md +0 -185
- package/dist/content/commands/agnostic/generate-commit-message.md +0 -92
- package/dist/content/commands/agnostic/git-worktree-orchestrator.md +0 -647
- package/dist/content/commands/agnostic/implement-spec.md +0 -270
- package/dist/content/commands/agnostic/monitor.md +0 -581
- package/dist/content/commands/agnostic/perf-analyze.md +0 -214
- package/dist/content/commands/agnostic/plan.md +0 -453
- package/dist/content/commands/agnostic/refactor.md +0 -315
- package/dist/content/commands/agnostic/refine-linear-task.md +0 -575
- package/dist/content/commands/agnostic/research.md +0 -49
- package/dist/content/commands/agnostic/review-code.md +0 -321
- package/dist/content/commands/agnostic/review-plan.md +0 -109
- package/dist/content/commands/agnostic/review-pr.md +0 -393
- package/dist/content/commands/agnostic/split-stack.md +0 -705
- package/dist/content/commands/agnostic/update-claude-md.md +0 -401
- package/dist/content/commands/agnostic/work-through-pr-comments.md +0 -873
- package/dist/generators/add-agent/CLAUDE.md +0 -130
- package/dist/generators/add-agent/files/__name__.md.template +0 -37
- package/dist/generators/add-agent/generator.cjs +0 -640
- package/dist/generators/add-agent/schema.json +0 -59
- package/dist/generators/add-command/CLAUDE.md +0 -131
- package/dist/generators/add-command/files/__name__.md.template +0 -46
- package/dist/generators/add-command/generator.cjs +0 -643
- package/dist/generators/add-command/schema.json +0 -50
- package/dist/generators/files/src/index.ts.template +0 -1
- package/dist/generators/init/CLAUDE.md +0 -520
- package/dist/generators/init/generator.cjs +0 -3304
- package/dist/generators/init/schema.json +0 -180
- package/dist/packages/ai-toolkit-nx-claude/src/generators/add-agent/generator.d.ts +0 -5
- package/dist/packages/ai-toolkit-nx-claude/src/generators/add-agent/generator.d.ts.map +0 -1
- package/dist/packages/ai-toolkit-nx-claude/src/generators/add-command/generator.d.ts +0 -5
- package/dist/packages/ai-toolkit-nx-claude/src/generators/add-command/generator.d.ts.map +0 -1
- package/dist/packages/ai-toolkit-nx-claude/src/generators/init/generator.d.ts +0 -5
- package/dist/packages/ai-toolkit-nx-claude/src/generators/init/generator.d.ts.map +0 -1
- package/dist/packages/ai-toolkit-nx-claude/src/utils/auto-update-utils.d.ts +0 -30
- package/dist/packages/ai-toolkit-nx-claude/src/utils/auto-update-utils.d.ts.map +0 -1
package/dist/content/commands/agnostic/perf-analyze.md

@@ -1,214 +0,0 @@

---
description: O(1) Chain-of-Thought Performance Analyzer - Systematic complexity analysis with optimization paths, bottleneck identification, and performance proofs
argument-hint: <code or file path> [concerns] [--run-benchmarks] [--memory-profile] [--full-trace] [--suggest-caching] [--compare-implementations] [--typescript-diagnostics] [--flamegraph] [--pulumi-analysis] [--cloudwatch-metrics]
allowed-tools: Bash(python -m cProfile*), Bash(python -m memory_profiler*), Bash(py-spy*), Bash(node --prof*), Bash(node --cpu-prof*), Bash(node --heap-prof*), Bash(node --inspect*), Bash(tsc --diagnostics*), Bash(tsc --extendedDiagnostics*), Bash(0x*), Bash(clinic*), Bash(autocannon*), Bash(go test -bench*), Bash(go test -cpuprofile*), Bash(pytest --profile*), Bash(time *), Bash(hyperfine*), Bash(pulumi preview*), Bash(pulumi refresh*), Bash(pulumi stack graph*), Bash(pulumi stack export*), Bash(pulumi about*), Bash(aws cloudwatch get-metric-statistics*), Bash(npm run bench*), Bash(yarn bench*), Bash(pnpm bench*), Bash(bun bench*)
---

## Inputs

- `$ARGUMENTS`: Code to analyze, file path, or function/component identifier
- `concerns`: Specific performance concerns or areas to focus on (optional)
- `--run-benchmarks`: Execute performance benchmarks and compare before/after metrics
- `--memory-profile`: Include detailed memory usage analysis and allocation patterns
- `--full-trace`: Perform complete execution trace with call stack analysis
- `--suggest-caching`: Focus on caching strategies and memoization opportunities (see the sketch after this list)
- `--compare-implementations`: Compare multiple implementation approaches with complexity proofs
- `--typescript-diagnostics`: Run TypeScript compiler diagnostics and type-checking performance analysis
- `--flamegraph`: Generate flamegraph visualization using 0x or clinic flame
- `--pulumi-analysis`: Analyze Pulumi stack performance, resource dependencies, and deployment timing
- `--cloudwatch-metrics`: Fetch AWS CloudWatch metrics for Pulumi-deployed infrastructure
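For instance, `--suggest-caching` looks for hot paths where a pure, repeated computation can be traded for memory. A minimal, hypothetical TypeScript sketch of that memoization pattern (the helper and key scheme are illustrative, not part of this command's contract):

```typescript
// Illustrative memoization helper: caches results of a pure function by key,
// turning repeated O(cost-of-fn) calls into O(1) average-case Map lookups.
function memoize<A, R>(fn: (arg: A) => R, key: (arg: A) => string): (arg: A) => R {
  const cache = new Map<string, R>();
  return (arg: A): R => {
    const k = key(arg);
    if (cache.has(k)) return cache.get(k)!;
    const result = fn(arg);
    cache.set(k, result);
    return result;
  };
}

// Usage: repeated calls with the same input skip the expensive computation.
const slowScore = (text: string): number =>
  text.split("").reduce((sum, ch) => sum + ch.charCodeAt(0), 0);
const fastScore = memoize(slowScore, (t) => t);
fastScore("hello"); // computed once
fastScore("hello"); // served from the cache
```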
---

You are an expert performance engineer specializing in O(1) optimizations. Your task is to systematically analyze code through multiple iterations of deep reasoning.

## Code to Analyze

{code}

## Specific Performance Concerns

{concerns}

---

## ANALYSIS PHASES

### Phase 1: Component Identification

Iterate through each component:

1. What is its primary function?
2. What operations does it perform?
3. What data structures does it use?
4. What are its dependencies?

### Phase 2: Complexity Analysis

For each operation, provide:

**OPERATION:** [Name]
**CURRENT_COMPLEXITY:** [Big O notation]
**BREAKDOWN:**

- Step 1: [Operation] -> O(?)
- Step 2: [Operation] -> O(?)

**BOTTLENECK:** [Slowest part]
**REASONING:** [Detailed explanation]

### Phase 3: Optimization Opportunities

For each suboptimal component:

**COMPONENT:** [Name]
**CURRENT_APPROACH:**

- Implementation: [Current code]
- Complexity: [Current Big O]
- Limitations: [Why not O(1)]

**OPTIMIZATION_PATH:**

1. [First improvement]
   - Change: [What to modify]
   - Impact: [Complexity change]
   - Code: [Implementation]
2. [Second improvement]
   ...
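As a concrete illustration of such an optimization path, a hypothetical TypeScript sketch that replaces a repeated linear scan with a precomputed index (the `User` type and data are illustrative):

```typescript
interface User {
  id: string;
  name: string;
}

// Before: each lookup scans the array, O(n) per call and O(n * m) for m lookups.
function findUserLinear(users: User[], id: string): User | undefined {
  return users.find((u) => u.id === id);
}

// After: build the index once in O(n); each subsequent lookup is O(1) average case.
function buildUserIndex(users: User[]): Map<string, User> {
  return new Map(users.map((u): [string, User] => [u.id, u]));
}

const users: User[] = [
  { id: "a1", name: "Ada" },
  { id: "b2", name: "Grace" },
];
const index = buildUserIndex(users);
const hit = index.get("a1"); // O(1) instead of the O(n) findUserLinear scan
```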
### Phase 4: System-Wide Impact

Analyze effects on:

1. Memory usage
2. Cache efficiency
3. Resource utilization
4. Scalability
5. Maintenance

---

## OUTPUT REQUIREMENTS

### 1. Performance Analysis

For each component:

**COMPONENT:** [Name]
**ORIGINAL_COMPLEXITY:** [Big O]
**OPTIMIZED_COMPLEXITY:** O(1)
**PROOF:**

- Step 1: [Reasoning]
- Step 2: [Reasoning]
  ...

**IMPLEMENTATION:**

```
[Code block]
```

### 2. Bottleneck Identification

**BOTTLENECK #[n]:**
**LOCATION:** [Where]
**IMPACT:** [Performance cost]
**SOLUTION:** [O(1) approach]
**CODE:** [Implementation]
**VERIFICATION:** [How to prove O(1)]

### 3. Optimization Roadmap

**STAGE 1:**

- Changes: [What to modify]
- Expected Impact: [Improvement]
- Implementation: [Code]
- Verification: [Tests]

**STAGE 2:**
...

---

## ITERATION REQUIREMENTS

1. **First Pass:** Identify all operations above O(1)
2. **Second Pass:** Analyze each for optimization potential
3. **Third Pass:** Design O(1) solutions
4. **Fourth Pass:** Verify optimizations maintain correctness
5. **Final Pass:** Document tradeoffs and implementation details

---

## Remember to

- Show all reasoning steps
- Provide concrete examples
- Include performance proofs
- Consider edge cases
- Document assumptions
- Analyze memory/space tradeoffs
- Provide benchmarking approach
- Consider real-world constraints

---

## TypeScript-Specific Analysis

When analyzing TypeScript code:

1. **Compilation Performance**

   - Type checking overhead
   - Large type unions or intersections
   - Excessive type instantiation
   - `tsc --diagnostics` output analysis

2. **Runtime Performance**

   - Generated JavaScript efficiency
   - Async/await vs Promise chains
   - Object destructuring costs
   - Class vs function performance

3. **Bundling Impact**

   - Tree-shaking effectiveness
   - Dead code elimination
   - Module resolution strategy

---

## Pulumi Infrastructure Performance

When analyzing Pulumi stacks:

1. **Resource Provisioning**

   - Dependency graph optimization
   - Parallel vs sequential resource creation (see the sketch after this section)
   - Provider initialization overhead
   - State file size and complexity

2. **Deployment Performance**

   - `pulumi preview` execution time
   - Resource update batching
   - Network latency to cloud providers
   - State backend performance (S3, local, etc.)

3. **Stack Complexity**

   - Component resource organization
   - Cross-stack references
   - Dynamic provider configuration
   - Resource count and fanout

4. **CloudWatch Integration**

   - ECS task metrics (CPU, memory)
   - Lambda cold start times
   - API Gateway latency
   - RDS/Aurora query performance
   - Load balancer response times
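To make the parallel vs sequential point concrete, a hypothetical Pulumi TypeScript sketch (assumes `@pulumi/aws`; resource names are illustrative): resources with no dependency between them can be provisioned in parallel, while an explicit `dependsOn` forces the engine to serialize them.

```typescript
import * as aws from "@pulumi/aws";

// Independent resources: nothing links them, so `pulumi up` can create
// both buckets in parallel.
const rawBucket = new aws.s3.Bucket("raw-data");
const processedBucket = new aws.s3.Bucket("processed-data");

// An explicit dependsOn (or referencing another resource's output) serializes
// creation: auditBucket now waits for processedBucket even though nothing in
// its configuration requires it, a common source of slow deployments.
const auditBucket = new aws.s3.Bucket(
  "audit-log",
  {},
  { dependsOn: [processedBucket] }
);
```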
package/dist/content/commands/agnostic/plan.md

@@ -1,453 +0,0 @@

---
description: Create clear, actionable implementation plans for any task, feature, refactor, or architectural change through collaborative multi-agent refinement
argument-hint: <task/feature description or plan file path>
allowed-tools: Read(*), Glob(*), Grep(*), LS(*), Task(*), WebSearch(*), WebFetch(*), Write(*.md), MultiEdit(*.md), Bash(git ls-files:*), Bash(mkdir:*)
model: claude-sonnet-4-5-20250929
---

# Plan Command

Create clear, actionable implementation plans through collaborative multi-agent discussion. Plans are refined through expert consensus, constructive disagreement, and cross-domain collaboration to ensure comprehensive coverage and high-quality implementation strategy.

## Workflow Integration

This command is **Step 2** of the implementation workflow:

1. Explore → 2. **Plan** → 3. Review → 4. Execute

### Recommended Workflow

**BEST PRACTICE: Use this command AFTER running `/explore` for optimal results**

1. First: `/explore <relevant area>` - Builds comprehensive context
2. Then: `/plan <task>` - Creates plan through collaborative refinement
3. Next: `/review-plan <plan-file>` - Review and validate the plan
4. Finally: `/execute-plan <plan-file>` - Executes the approved implementation

This four-step process ensures optimal understanding, planning, validation, and execution.

**Note for Claude Code**: When you have context-loader findings from a previous `/explore` command, automatically pass them to the planning process. The user doesn't need to specify any flags.

## Overview

This command takes a task description or existing plan and orchestrates a collaborative refinement process using 3-10 specialized agents selected based on the plan's context and requirements.

**Key Features:**

- **Intelligent Agent Selection**: Automatically identifies and selects 3-10 specialized agents based on plan context
- **True Collaboration**: Agents engage in multi-round discussions, building on each other's feedback
- **Constructive Disagreement**: Agents respectfully challenge ideas and propose alternatives
- **Consensus Building**: Multiple discussion rounds lead to a refined, consensus-based final plan
- **Expert Emulation**: Mimics how human experts would collaboratively refine a plan
- **Context Integration**: Leverages findings from `/explore` command automatically

## Inputs

Accept natural language description or file path to existing plan:

**Description-based:**

```
/plan add user authentication with JWT tokens
/plan implement real-time notifications using WebSockets
/plan migrate monolith to microservices
/plan implement real-time collaborative editing with CRDT
/plan optimize database queries for the user dashboard
```

**File-based (for refining existing plans):**

```
/plan /tmp/plans/plan-20250821-a4b3c2.md
/plan plans/user-auth-implementation.md
```

Extract:

- `plan_input`: Either file path or description
- `is_file`: Boolean indicating if input is a file path
- `plan_content`: The actual plan content (read from file or use description directly)
- `scope`: Any specific scope or boundaries mentioned
- `constraints`: Any explicit constraints or requirements
- `context_findings`: Automatically include context-loader findings from `/explore` if available
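A minimal TypeScript sketch of those extracted fields as a typed structure (illustrative only; the command performs this extraction in natural language rather than code):

```typescript
// Hypothetical shape of the values extracted from a /plan invocation.
interface PlanInvocation {
  planInput: string;        // raw file path or task description
  isFile: boolean;          // true when planInput points at an existing plan file
  planContent: string;      // file contents, or the description used directly
  scope?: string;           // explicit scope or boundaries, if mentioned
  constraints?: string[];   // explicit constraints or requirements
  contextFindings?: string; // context-loader findings carried over from /explore
}
```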
Examples:

**Simple Bug Fixes:**

- `/plan fix the memory leak in the image processing module`
- `/plan resolve race condition in checkout process`
- `/plan fix broken unit tests in auth module`

**Feature Implementation:**

- `/plan add user authentication with JWT tokens`
- `/plan implement real-time notifications using WebSockets`
- `/plan add dark mode toggle to settings`
- `/plan implement search functionality with elasticsearch`

**Refactoring & Optimization:**

- `/plan refactor the data pipeline to use async/await`
- `/plan optimize database queries for user dashboard`
- `/plan migrate from callbacks to promises in legacy code`

**Complex Architectural Planning:**

- `/plan migrate monolith to microservices architecture for the e-commerce platform`
- `/plan implement event-driven order processing system with Kafka`
- `/plan design domain-driven architecture for healthcare management system`
- `/plan implement real-time collaborative editing with conflict resolution`

## Task

Execute a structured collaborative refinement process:

### Phase 1: Context Understanding & Agent Selection

1. **Analyze Plan Context**

   - If file provided, read and analyze the plan document
   - If description provided, understand the high-level goals and requirements
   - Leverage any context-loader findings from `/explore` if available
   - Identify key technical domains (e.g., frontend, backend, database, security, performance)
   - Identify complexity factors (e.g., distributed systems, real-time features, data migration)
   - Identify architectural concerns (e.g., scalability, reliability, maintainability)

2. **Select Specialized Agents** (3-10 agents)

   Query available agents and select based on:

   - **Domain Relevance**: Match agent capabilities to technical domains in the plan
   - **Perspective Diversity**: Include different viewpoints (architecture, security, performance, testing, DevOps, etc.)
   - **Complexity Alignment**: More complex plans warrant more agents

   **Selection Guidelines:**

   - **Simple plans** (bug fixes, minor features): 3-4 agents
   - **Medium plans** (features, refactors): 5-7 agents
   - **Complex plans** (architecture changes, major features): 8-10 agents

   **Example Agent Combinations:**

   _For "migrate monolith to microservices":_

   - backend-architect (system design)
   - cloud-architect (infrastructure)
   - database-optimizer (data architecture)
   - performance-engineer (scalability)
   - devops-troubleshooter (deployment)
   - security-auditor (service boundaries)

   _For "implement real-time collaborative editing":_

   - frontend-developer (UI/state management)
   - backend-architect (API design)
   - performance-engineer (optimization)
   - database-optimizer (conflict resolution)
   - security-auditor (data integrity)

3. **Brief Each Agent**

   - Provide full plan content/description to each selected agent
   - Include context-loader findings from `/explore` if available
   - Request each agent to analyze from their specialized perspective
   - Ask agents to prepare initial feedback focusing on their domain

### Phase 2: Multi-Round Collaborative Discussion

**Round 1: Initial Perspectives**

1. Invoke each agent in parallel with the plan and ask for:

   - Initial assessment from their specialized perspective
   - Key concerns or risks they identify
   - Suggestions for improvement in their domain
   - Questions for other specialists

2. Synthesize all initial feedback into a structured summary

**Round 2: Cross-Domain Discussion**

1. Share Round 1 feedback with all agents
2. Invoke agents again (in parallel or sequentially based on dependencies) asking them to:

   - Respond to feedback from other agents
   - Identify areas of agreement and disagreement
   - Propose solutions to concerns raised by others
   - Refine their own recommendations based on peer input
   - Respectfully challenge ideas when they see potential issues

3. Look for:

   - **Consensus areas**: Where agents agree
   - **Disagreements**: Where agents have conflicting views
   - **Gaps**: Issues not yet addressed by any agent
   - **Synergies**: How different agents' suggestions complement each other

**Round 3: Consensus Building** (if needed)

If significant disagreements remain:

1. Identify the key points of contention
2. Invoke specific agents involved in disagreements
3. Ask them to:
   - Find middle ground or propose compromises
   - Evaluate trade-offs explicitly
   - Consider the full system perspective beyond their domain
4. Work toward resolution of major conflicts

### Phase 3: Final Plan Synthesis

1. **Integrate Feedback**

   - Compile all agent feedback across rounds
   - Identify consensus recommendations
   - Document remaining trade-offs and decisions needed
   - Organize feedback by category (architecture, implementation, testing, deployment, etc.)

2. **Generate Final Plan**

   Create a comprehensive implementation plan that includes:

   1. **Overview** - High-level summary of the proposed changes and approach
   2. **Scope** - What will and won't be implemented
   3. **Current State** - Relevant architecture, files, and patterns
   4. **API Design** (optional) - Function signatures, data structures, and algorithms when creating/modifying interfaces
   5. **Implementation Steps** - Clear, sequential steps (typically 5-7 for medium tasks)
   6. **Files Summary** - Files to be created or modified
   7. **Critical Challenges** (optional) - Blocking or high-risk issues with mitigation strategies
   8. **Agent Collaboration Summary**:
      - List of agents involved and their focus areas
      - Key consensus recommendations by category
      - Design decisions and trade-offs
      - Open questions requiring human decision
      - Dissenting opinions (important disagreements with rationale)

3. **Output Format**

   - Write plan to markdown file: `./.claude-output/plan-[timestamp]-[hash].md`
   - Include conversation transcript (summarized) showing agent discussions
   - Highlight areas of strong consensus vs. areas needing human judgment

**What Plans Omit:**

- Testing strategies (handled during execution)
- Detailed dependency graphs (execution handles orchestration)
- Agent assignments (orchestrator assigns automatically)
- Success criteria checklists (implementer validates)
- Risk matrices (only critical risks documented)

## Complexity-Based Planning

The planner automatically adapts its output based on task complexity:

### Simple Tasks (Bug fixes, minor features)

- **Length**: ~100-200 lines
- **Agents**: 3-4 specialized agents
- **Rounds**: 1-2 discussion rounds
- Focused scope and 3-5 implementation steps
- Minimal challenges section
- Optional API design section (often skipped)

### Medium Tasks (Features, refactors)

- **Length**: ~200-400 lines
- **Agents**: 5-7 specialized agents
- **Rounds**: 2-3 discussion rounds
- Clear scope with included/excluded items
- 5-7 implementation steps
- API design when creating new interfaces
- Critical challenges documented

### Complex Tasks (Major features, architectural changes)

- **Length**: ~400-600 lines
- **Agents**: 8-10 specialized agents
- **Rounds**: 2-3 discussion rounds
- Detailed scope and architectural context
- 7-10 implementation steps
- Comprehensive API design section
- Multiple critical challenges with mitigations

## Agent Discussion Guidelines

To emulate realistic expert collaboration:

### Encourage Agents To

- **Be Direct**: State opinions clearly without over-hedging
- **Challenge Constructively**: Disagree when they see issues, but propose alternatives
- **Build On Ideas**: Reference and expand on other agents' suggestions
- **Ask Questions**: Seek clarification from other agents
- **Change Positions**: Update views when presented with good arguments
- **Acknowledge Limits**: Recognize when an issue is outside their expertise

### Discussion Prompts

- "Agent X raised concerns about [Y]. What's your perspective on this?"
- "How would your domain be affected by Agent X's suggestion to [Y]?"
- "Agent X and Agent Y disagree about [Z]. Can you provide a third perspective?"
- "Are there any trade-offs in Agent X's proposal that we haven't considered?"

### Realistic Disagreement Examples

- Security agent wants encryption everywhere; Performance agent warns of latency impact
- Backend architect prefers microservices; DevOps engineer concerned about operational complexity
- Frontend developer wants rich interactivity; Performance engineer pushes for progressive enhancement

## Output

Return a structured summary and file path:

```markdown
## Implementation Plan Complete

**Plan File**: [./.claude-output/plan-20250821-a4b3c2.md](link)

**Participants**: [N agents]

- [agent-1]: [focus area]
- [agent-2]: [focus area]
  ...

**Discussion Rounds**: [2-3]

**Key Outcomes**:

- [Consensus item 1]
- [Consensus item 2]
- [Trade-off decision 1]

**Open Questions**: [N]

- [Question requiring human decision]

**Summary**:
[2-3 sentences summarizing the collaborative planning process and key implementation strategy]

**Next Steps**:

- Review the plan document using `/review-plan <plan-file>`
- Address any open questions before proceeding
- Execute with `/execute-plan <plan-file>` when ready
```

## Implementation Notes

### Agent Orchestration

- Use Task tool to invoke agents
- Run agents in parallel when gathering independent perspectives
- Run sequentially when one agent needs to respond to another's specific feedback
- Limit to 3 discussion rounds maximum to avoid diminishing returns
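The fan-out/fan-in pattern described above, sketched in TypeScript for illustration (`invokeAgent` is a hypothetical stand-in for the Task tool, not a real API):

```typescript
// Hypothetical helper standing in for a single Task-tool invocation.
declare function invokeAgent(agent: string, prompt: string): Promise<string>;

async function discussionRound(
  agents: string[],
  plan: string,
  priorFeedback?: string
): Promise<string[]> {
  if (!priorFeedback) {
    // Round 1: independent perspectives, so all agents can run in parallel.
    return Promise.all(
      agents.map((agent) => invokeAgent(agent, `Assess this plan from your domain:\n${plan}`))
    );
  }
  // Later rounds: run sequentially so each agent can respond to peers' feedback.
  const responses: string[] = [];
  for (const agent of agents) {
    const context = [priorFeedback, ...responses].join("\n\n");
    responses.push(
      await invokeAgent(agent, `Plan:\n${plan}\n\nPeer feedback so far:\n${context}`)
    );
  }
  return responses;
}
```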
### Context Management

- Keep agent prompts focused on their domain while providing full plan context
- In later rounds, provide relevant excerpts from other agents' feedback
- Summarize previous rounds to keep context manageable
- Automatically include context-loader findings from `/explore` when available

### Handling Edge Cases

- **No consensus reached**: Document the disagreement and provide trade-off analysis
- **Too many agents selected**: Prioritize and cap at 10 most relevant agents
- **Agent unavailable**: Select next best alternative or proceed with available agents
- **Circular disagreements**: Invoke a meta-level agent (e.g., architect-reviewer) to arbitrate

### File Management

- Create a `./.claude-output` directory if it doesn't exist
- Use descriptive filenames with timestamps and content hash
- Include conversation metadata (agents used, rounds completed, timestamp)
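A hypothetical TypeScript sketch of the `plan-[timestamp]-[hash].md` naming and write step (the exact timestamp and hash format used by the command may differ):

```typescript
import { createHash } from "node:crypto";
import { mkdirSync, writeFileSync } from "node:fs";
import { join } from "node:path";

function writePlan(content: string, outDir = "./.claude-output"): string {
  mkdirSync(outDir, { recursive: true }); // create ./.claude-output if missing
  const timestamp = new Date().toISOString().slice(0, 10).replace(/-/g, ""); // e.g. 20250821
  const hash = createHash("sha256").update(content).digest("hex").slice(0, 6); // short content hash
  const file = join(outDir, `plan-${timestamp}-${hash}.md`);
  writeFileSync(file, content);
  return file;
}
```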
## Integration with Other Commands

### Recommended Workflow

1. **Complete Flow**: `/explore` → `/plan` → `/review-plan` → `/execute-plan`

   - Best for medium to complex tasks
   - Exploration context automatically flows to planner
   - Collaborative refinement ensures comprehensive coverage
   - Review validates the plan before execution

2. **Quick Planning**: `/plan` → `/execute-plan`

   - Suitable for simple, well-understood tasks
   - Skip review step when plan is straightforward
   - Still benefits from multi-agent collaboration

3. **With Review**: `/plan` → `/review-plan` → `/execute-plan`

   - Skip exploration for simple tasks in familiar code
   - Add review for validation and improvement suggestions
   - Multi-agent discussion ensures quality

### How Execution Works

- **`/execute-plan`** reads the plan file and orchestrates implementation
- Agent orchestrator automatically assigns specialized agents to tasks
- Testing is handled during execution (not part of planning)
- Dependencies and parallel execution are managed by the orchestrator

## Example Session

**Input:**

```
/plan implement real-time collaborative editing with CRDTs
```

**Process:**

1. Analyzes the task and identifies it as a complex feature
2. Selects 6 agents: frontend-developer, backend-architect, database-optimizer, performance-engineer, security-auditor, test-automator
3. Round 1: Each agent provides initial assessment
   - Frontend: Concerns about conflict UI/UX
   - Backend: Suggests operational transform vs CRDT comparison
   - Database: Warns about storage overhead for history
   - Performance: Highlights network bandwidth considerations
   - Security: Questions access control in real-time sync
   - Testing: Notes complexity of testing concurrent edits
4. Round 2: Cross-pollination
   - Backend responds to performance's bandwidth concerns with compression strategy
   - Security and Frontend discuss access control UX
   - Database and Performance agree on hybrid approach for history retention
5. Round 3: Final consensus
   - Agree on CRDT with compression
   - Consensus on 30-day history retention
   - Security and Frontend align on access control approach
   - Testing strategy defined for concurrent scenarios

**Output:**
Comprehensive implementation plan with consensus recommendations, remaining trade-offs, and implementation roadmap informed by multi-domain expert discussion.

## Best Practices

### For Simple Plans

- Limit to 3-4 agents
- 1-2 discussion rounds sufficient
- Focus on quick validation and obvious improvements

### For Complex Plans

- Use 7-10 agents for comprehensive coverage
- Allow 2-3 discussion rounds for thorough exploration
- Document dissenting views even if consensus reached
- Highlight areas needing human architectural decisions

### Quality Indicators

- **Good collaboration**: Multiple agents reference each other's feedback
- **Productive disagreement**: Conflicting views backed by clear rationale
- **Convergence**: Later rounds show narrowing of options and growing consensus
- **Actionable output**: Recommendations are specific and implementable

### Avoiding Common Pitfalls

- **Don't** force consensus on genuinely ambiguous trade-offs
- **Don't** let one agent dominate the discussion
- **Don't** run endless rounds hoping for perfect agreement
- **Do** document when human judgment is needed
- **Do** preserve valuable dissenting opinions
- **Do** prioritize practical over theoretical perfection