aidp 0.26.0 → 0.28.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/README.md +89 -0
- data/lib/aidp/cli/checkpoint_command.rb +198 -0
- data/lib/aidp/cli/config_command.rb +71 -0
- data/lib/aidp/cli/enhanced_input.rb +2 -0
- data/lib/aidp/cli/first_run_wizard.rb +8 -7
- data/lib/aidp/cli/harness_command.rb +102 -0
- data/lib/aidp/cli/jobs_command.rb +3 -3
- data/lib/aidp/cli/mcp_dashboard.rb +4 -3
- data/lib/aidp/cli/models_command.rb +661 -0
- data/lib/aidp/cli/providers_command.rb +223 -0
- data/lib/aidp/cli.rb +45 -464
- data/lib/aidp/config.rb +54 -0
- data/lib/aidp/daemon/runner.rb +2 -2
- data/lib/aidp/debug_mixin.rb +25 -10
- data/lib/aidp/execute/agent_signal_parser.rb +22 -0
- data/lib/aidp/execute/async_work_loop_runner.rb +2 -1
- data/lib/aidp/execute/checkpoint_display.rb +38 -37
- data/lib/aidp/execute/interactive_repl.rb +2 -1
- data/lib/aidp/execute/prompt_manager.rb +4 -4
- data/lib/aidp/execute/repl_macros.rb +2 -2
- data/lib/aidp/execute/steps.rb +94 -1
- data/lib/aidp/execute/work_loop_runner.rb +238 -19
- data/lib/aidp/execute/workflow_selector.rb +4 -27
- data/lib/aidp/firewall/provider_requirements_collector.rb +262 -0
- data/lib/aidp/harness/ai_decision_engine.rb +35 -2
- data/lib/aidp/harness/config_manager.rb +5 -10
- data/lib/aidp/harness/config_schema.rb +8 -0
- data/lib/aidp/harness/configuration.rb +40 -2
- data/lib/aidp/harness/enhanced_runner.rb +25 -19
- data/lib/aidp/harness/error_handler.rb +23 -73
- data/lib/aidp/harness/model_cache.rb +269 -0
- data/lib/aidp/harness/model_discovery_service.rb +259 -0
- data/lib/aidp/harness/model_registry.rb +201 -0
- data/lib/aidp/harness/provider_factory.rb +11 -2
- data/lib/aidp/harness/runner.rb +5 -0
- data/lib/aidp/harness/state_manager.rb +0 -7
- data/lib/aidp/harness/thinking_depth_manager.rb +202 -7
- data/lib/aidp/harness/ui/enhanced_tui.rb +8 -18
- data/lib/aidp/harness/ui/enhanced_workflow_selector.rb +0 -18
- data/lib/aidp/harness/ui/progress_display.rb +6 -2
- data/lib/aidp/harness/user_interface.rb +0 -58
- data/lib/aidp/init/runner.rb +7 -2
- data/lib/aidp/message_display.rb +0 -46
- data/lib/aidp/planning/analyzers/feedback_analyzer.rb +365 -0
- data/lib/aidp/planning/builders/agile_plan_builder.rb +387 -0
- data/lib/aidp/planning/builders/project_plan_builder.rb +193 -0
- data/lib/aidp/planning/generators/gantt_generator.rb +190 -0
- data/lib/aidp/planning/generators/iteration_plan_generator.rb +392 -0
- data/lib/aidp/planning/generators/legacy_research_planner.rb +473 -0
- data/lib/aidp/planning/generators/marketing_report_generator.rb +348 -0
- data/lib/aidp/planning/generators/mvp_scope_generator.rb +310 -0
- data/lib/aidp/planning/generators/user_test_plan_generator.rb +373 -0
- data/lib/aidp/planning/generators/wbs_generator.rb +259 -0
- data/lib/aidp/planning/mappers/persona_mapper.rb +163 -0
- data/lib/aidp/planning/parsers/document_parser.rb +141 -0
- data/lib/aidp/planning/parsers/feedback_data_parser.rb +252 -0
- data/lib/aidp/provider_manager.rb +8 -32
- data/lib/aidp/providers/adapter.rb +2 -4
- data/lib/aidp/providers/aider.rb +264 -0
- data/lib/aidp/providers/anthropic.rb +206 -121
- data/lib/aidp/providers/base.rb +123 -3
- data/lib/aidp/providers/capability_registry.rb +0 -1
- data/lib/aidp/providers/codex.rb +75 -70
- data/lib/aidp/providers/cursor.rb +87 -59
- data/lib/aidp/providers/gemini.rb +57 -60
- data/lib/aidp/providers/github_copilot.rb +19 -66
- data/lib/aidp/providers/kilocode.rb +35 -80
- data/lib/aidp/providers/opencode.rb +35 -80
- data/lib/aidp/setup/wizard.rb +555 -8
- data/lib/aidp/version.rb +1 -1
- data/lib/aidp/watch/build_processor.rb +211 -30
- data/lib/aidp/watch/change_request_processor.rb +128 -14
- data/lib/aidp/watch/ci_fix_processor.rb +103 -37
- data/lib/aidp/watch/ci_log_extractor.rb +258 -0
- data/lib/aidp/watch/github_state_extractor.rb +177 -0
- data/lib/aidp/watch/implementation_verifier.rb +284 -0
- data/lib/aidp/watch/plan_generator.rb +95 -52
- data/lib/aidp/watch/plan_processor.rb +7 -6
- data/lib/aidp/watch/repository_client.rb +245 -17
- data/lib/aidp/watch/review_processor.rb +100 -19
- data/lib/aidp/watch/reviewers/base_reviewer.rb +1 -1
- data/lib/aidp/watch/runner.rb +181 -29
- data/lib/aidp/watch/state_store.rb +22 -1
- data/lib/aidp/workflows/definitions.rb +147 -0
- data/lib/aidp/workflows/guided_agent.rb +3 -3
- data/lib/aidp/workstream_cleanup.rb +245 -0
- data/lib/aidp/worktree.rb +19 -0
- data/templates/aidp-development.yml.example +2 -2
- data/templates/aidp-production.yml.example +3 -3
- data/templates/aidp.yml.example +57 -0
- data/templates/implementation/generate_tdd_specs.md +213 -0
- data/templates/implementation/iterative_implementation.md +122 -0
- data/templates/planning/agile/analyze_feedback.md +183 -0
- data/templates/planning/agile/generate_iteration_plan.md +179 -0
- data/templates/planning/agile/generate_legacy_research_plan.md +171 -0
- data/templates/planning/agile/generate_marketing_report.md +162 -0
- data/templates/planning/agile/generate_mvp_scope.md +127 -0
- data/templates/planning/agile/generate_user_test_plan.md +143 -0
- data/templates/planning/agile/ingest_feedback.md +174 -0
- data/templates/planning/assemble_project_plan.md +113 -0
- data/templates/planning/assign_personas.md +108 -0
- data/templates/planning/create_tasks.md +52 -6
- data/templates/planning/generate_gantt.md +86 -0
- data/templates/planning/generate_wbs.md +85 -0
- data/templates/planning/initialize_planning_mode.md +70 -0
- data/templates/skills/README.md +2 -2
- data/templates/skills/marketing_strategist/SKILL.md +279 -0
- data/templates/skills/product_manager/SKILL.md +177 -0
- data/templates/skills/ruby_aidp_planning/SKILL.md +497 -0
- data/templates/skills/ruby_rspec_tdd/SKILL.md +514 -0
- data/templates/skills/ux_researcher/SKILL.md +222 -0
- metadata +47 -1
data/templates/implementation/iterative_implementation.md

@@ -0,0 +1,122 @@

# Iterative Implementation

You are implementing a feature or fix within the AIDP work loop using an iterative, task-based approach.

## Your Mission

{{task_description}}

## Important Instructions

### 1. Break Down the Work

If this is a multi-step feature, **break it into concrete subtasks** using the persistent tasklist:

```text
File task: "Subtask description here" priority: high|medium|low tags: tag1,tag2
```

Examples:

```text
File task: "Add worktree lookup logic to find existing worktrees" priority: high tags: implementation,git
File task: "Implement worktree creation for PR branches" priority: high tags: implementation,git
File task: "Add comprehensive logging using Aidp.log_debug" priority: medium tags: observability
File task: "Add tests for worktree operations" priority: high tags: testing
```

### 2. Implement One Subtask at a Time

**Focus on completing ONE subtask per iteration.** Keep changes minimal and focused.

- Read the pending tasks from the tasklist
- Pick the highest-priority task that's ready to implement
- Implement it completely, with tests
- Mark it done: `Update task: task_id_here status: done`

### 3. Request Next Iteration

When you've completed a subtask and more work remains:

```text
NEXT_UNIT: agentic
```

This tells the work loop to continue with the next subtask after running tests/linters.

### 4. Track Progress in PROMPT.md

Update this file to:

- Remove completed items
- Show current status
- List what remains

### 5. Mark Complete When Done

When ALL work is complete (all subtasks done, tests passing):

```text
STATUS: COMPLETE
```

## Completion Criteria

✅ All subtasks filed and completed
✅ All tests passing
✅ All linters passing
✅ Code follows project style guide
✅ Comprehensive logging added
✅ STATUS: COMPLETE added to PROMPT.md

## Context

{{additional_context}}

## Best Practices

- **Small iterations**: Better to do five small, focused iterations than one giant one
- **Test as you go**: Write tests for each subtask before moving on
- **Use signals**: `File task:`, `Update task:`, and `NEXT_UNIT:` keep the system coordinated
- **Log extensively**: Use `Aidp.log_debug()` at all important code paths
- **Fail fast**: Let bugs surface early rather than masking them with rescues

## Example Flow

Iteration 1:

```text
File task: "Create WorktreeBranchManager class" priority: high tags: implementation
File task: "Add worktree lookup logic" priority: high tags: implementation
File task: "Add tests for WorktreeBranchManager" priority: high tags: testing

[Implement WorktreeBranchManager class with basic structure]

Update task: task_abc123 status: done
NEXT_UNIT: agentic
```

Iteration 2:

```text
[Implement worktree lookup logic]

Update task: task_def456 status: done
NEXT_UNIT: agentic
```

Iteration 3:

```text
[Add comprehensive tests]

Update task: task_ghi789 status: done
STATUS: COMPLETE
```

## Notes

- The work loop will automatically run tests/linters after each iteration
- If tests fail, you'll see the errors in the next iteration; fix them before continuing
- Use the persistent tasklist to coordinate work across sessions
- Each iteration should leave the codebase in a working state
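The signal lines used by this template (`File task:`, `Update task:`, `NEXT_UNIT:`, `STATUS: COMPLETE`) have a simple line-oriented grammar. As a rough illustration only (the gem's actual parser lives in `lib/aidp/execute/agent_signal_parser.rb`; the names and regexes below are hypothetical, not its real API), a harness could recognize them like this:

```ruby
# Hypothetical sketch of recognizing work-loop signal lines.
# Not the real Aidp::Execute::AgentSignalParser implementation.
FILE_TASK   = /^File task: "(?<desc>[^"]+)"(?: priority: (?<priority>\w+))?(?: tags: (?<tags>[\w,]+))?/
UPDATE_TASK = /^Update task: (?<id>\S+) status: (?<status>\w+)/

def parse_signal(line)
  if (m = FILE_TASK.match(line))
    # A new subtask being filed, with optional priority and comma-separated tags.
    { type: :file_task, description: m[:desc],
      priority: m[:priority], tags: (m[:tags] || "").split(",") }
  elsif (m = UPDATE_TASK.match(line))
    # A status change for an existing task.
    { type: :update_task, id: m[:id], status: m[:status] }
  elsif line.strip == "NEXT_UNIT: agentic"
    # Agent requests another iteration after tests/linters run.
    { type: :next_unit }
  elsif line.strip == "STATUS: COMPLETE"
    # Agent declares all work finished.
    { type: :complete }
  end
end
```

For example, `parse_signal('Update task: task_abc123 status: done')` yields an `:update_task` hash the loop can apply to the persistent tasklist.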
data/templates/planning/agile/analyze_feedback.md

@@ -0,0 +1,183 @@

# Analyze User Feedback

You are a UX researcher analyzing user feedback using AI-powered semantic analysis.

## Input

Read:

- `.aidp/docs/feedback_data.json` - Normalized feedback data from the ingestion step

## Your Task

Perform a comprehensive analysis of user feedback to extract insights, identify trends, and generate actionable recommendations.

## Analysis Components

### 1. Executive Summary

High-level overview (2-3 paragraphs):

- Overall sentiment and key themes
- Most important findings
- Top recommendations

### 2. Sentiment Breakdown

Distribution analysis:

- Positive/negative/neutral counts and percentages
- Sentiment trends over time, if timestamps are available
- Sentiment by feature or category

### 3. Key Findings

3-5 major discoveries, each with:

- Finding title and description
- Evidence (quotes and data supporting the finding)
- Impact assessment (high/medium/low)

### 4. Trends and Patterns

Recurring themes across responses:

- Trend description
- Frequency (how often mentioned)
- Implications for product development

### 5. Insights

Categorized observations:

- **Usability**: Ease of use, interface issues
- **Features**: What's working, what's missing
- **Performance**: Speed, reliability concerns
- **Value**: Perceived value and benefits

### 6. Feature-Specific Feedback

For each feature mentioned:

- Overall sentiment
- Positive feedback
- Negative feedback
- Suggested improvements

### 7. Priority Issues

Critical items requiring immediate attention:

- Issue description
- Priority level (critical/high/medium)
- Number/percentage of users affected
- Recommended action

### 8. Positive Highlights

What users loved:

- Features or aspects that delighted users
- Strengths to maintain or amplify

### 9. Recommendations

4-6 actionable recommendations, each with:

- Recommendation title and description
- Rationale based on feedback
- Effort estimate (low/medium/high)
- Expected impact (low/medium/high)

## Analysis Principles

**Semantic Analysis:**

- Use AI to understand meaning and context, not just keywords
- Identify themes across different wording
- Understand user intent and emotion

**Evidence-Based:**

- Support findings with specific quotes or data
- Quantify when possible
- Don't over-generalize from limited data

**Actionable:**

- Translate insights into specific recommendations
- Prioritize by impact and urgency
- Make recommendations concrete and implementable

**Objective:**

- Present both positive and negative feedback fairly
- Avoid bias toward confirming existing beliefs
- Let the data speak for itself

## Implementation

**For Ruby/AIDP projects**, use the `ruby_aidp_planning` skill with `Aidp::Planning::Analyzers::FeedbackAnalyzer`:

1. Read feedback data from `.aidp/docs/feedback_data.json`
2. Analyze using `FeedbackAnalyzer.new(ai_decision_engine:).analyze(feedback_data)`
3. Format as markdown using `format_as_markdown(analysis)`
4. Write to `.aidp/docs/USER_FEEDBACK_ANALYSIS.md`

**For other implementations**, create equivalent functionality that:

1. Parses normalized feedback data
2. Uses AI for semantic analysis (NO regex, keyword matching, or heuristics)
3. Identifies patterns and themes
4. Calculates sentiment distribution
5. Extracts evidence-based findings
6. Generates prioritized recommendations
7. Formats the result as a comprehensive markdown report

## AI Analysis Guidelines

Use the AI Decision Engine with Zero Framework Cognition:

**NO:**

- Regex pattern matching
- Keyword counting
- Scoring formulas
- Heuristic thresholds

**YES:**

- Semantic understanding of text
- Context-aware analysis
- Theme identification across varied wording
- Nuanced sentiment analysis
- Evidence-based recommendations

Provide a structured schema for consistent output.

## Output Structure

Write to `.aidp/docs/USER_FEEDBACK_ANALYSIS.md` with:

- Executive summary
- Sentiment breakdown (table with counts and percentages)
- Key findings with evidence and impact
- Trends and patterns with frequency and implications
- Categorized insights (usability, features, performance, value)
- Feature-specific feedback (positive, negative, improvements)
- Priority issues (with recommended actions)
- Positive highlights
- Actionable recommendations (with rationale, effort, impact)
- Generated timestamp and metadata

## Common Pitfalls to Avoid

- Keyword matching instead of semantic understanding
- Ignoring negative feedback
- Over-generalizing from limited responses
- Recommendations without evidence
- Analysis paralysis (waiting for perfect data)

## Output

Write the complete feedback analysis to `.aidp/docs/USER_FEEDBACK_ANALYSIS.md` with all insights and evidence-based recommendations.
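The sentiment-breakdown step above is pure tabulation over already-labeled records, so it needs no heuristics. As a minimal sketch (the record shape shown is illustrative; the real schema of `.aidp/docs/feedback_data.json` is defined by the ingestion step):

```ruby
require "json"

# Illustrative records; the actual feedback_data.json schema may differ.
feedback = JSON.parse(<<~JSON)
  [
    {"id": 1, "text": "Love the new dashboard", "sentiment": "positive"},
    {"id": 2, "text": "Export keeps failing",   "sentiment": "negative"},
    {"id": 3, "text": "It works",               "sentiment": "neutral"},
    {"id": 4, "text": "Setup was painless",     "sentiment": "positive"}
  ]
JSON

# Counts and percentages for the sentiment-breakdown table.
counts = feedback.group_by { |f| f["sentiment"] }.transform_values(&:size)
total  = feedback.size.to_f
breakdown = counts.transform_values do |n|
  { count: n, pct: (n / total * 100).round(1) }
end
```

The resulting hash maps each sentiment label to its count and percentage, ready to render as the markdown table in the report.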
data/templates/planning/agile/generate_iteration_plan.md

@@ -0,0 +1,179 @@

# Generate Next Iteration Plan

You are a product manager creating the next iteration plan based on user feedback.

## Input

Read:

- `.aidp/docs/USER_FEEDBACK_ANALYSIS.md` - Analyzed user feedback with insights and recommendations
- `.aidp/docs/MVP_SCOPE.md` (if available) - Current MVP features

## Your Task

Create a detailed plan for the next development iteration that addresses user feedback, prioritizes improvements, and defines specific, actionable tasks.

## Iteration Plan Components

### 1. Overview

- Focus of this iteration
- Why these priorities
- Expected outcomes

### 2. Iteration Goals

3-5 clear, measurable goals for this iteration

### 3. Feature Improvements

For existing features that need enhancement:

- Feature name
- Current issue/problem
- Proposed improvement
- User impact
- Effort estimate (low/medium/high)
- Priority (critical/high/medium/low)

### 4. New Features

Features to add based on user requests:

- Feature name and description
- Rationale (why add this now)
- Acceptance criteria
- Effort estimate

### 5. Bug Fixes

Critical and high-priority bugs:

- Bug title and description
- Priority level
- Number/percentage of users affected
- Fix approach

### 6. Technical Debt

Technical improvements needed:

- Debt item title and description
- Why it matters (impact on quality, performance, maintainability)
- Effort estimate

### 7. Task Breakdown

Specific, actionable tasks:

- Task name and description
- Category (feature, improvement, bug_fix, tech_debt, testing, documentation)
- Priority
- Estimated effort
- Dependencies
- Success criteria

### 8. Success Metrics

How to measure iteration success:

- Metric name
- Target value
- How to measure

### 9. Risks and Mitigation

What could go wrong:

- Risk description
- Probability (low/medium/high)
- Impact (low/medium/high)
- Mitigation strategy

### 10. Timeline

Iteration phases:

- Phase name
- Duration
- Key activities

## Prioritization Framework

Consider these factors when prioritizing:

1. **User Impact**: How many users benefit? How significantly?
2. **Business Value**: Does this align with business goals?
3. **Effort**: How much work is required?
4. **Risk**: What's the probability and impact of failure?
5. **Dependencies**: What must happen first?
6. **Learning**: What will we learn from building this?

## Implementation

**For Ruby/AIDP projects**, use the `ruby_aidp_planning` skill with `Aidp::Planning::Generators::IterationPlanGenerator`:

1. Parse the feedback analysis using `Aidp::Planning::Parsers::DocumentParser`
2. Parse the MVP scope if available
3. Generate the plan using `IterationPlanGenerator.generate(feedback_analysis:, current_mvp:)`
4. Format as markdown using `format_as_markdown(plan)`
5. Write to `.aidp/docs/NEXT_ITERATION_PLAN.md`

**For other implementations**, create equivalent functionality that:

1. Parses the feedback analysis to understand issues and recommendations
2. Parses the current MVP scope if available
3. Uses AI to transform recommendations into actionable tasks
4. Prioritizes based on user impact, effort, and dependencies
5. Breaks down work into specific tasks
6. Defines success metrics for the iteration
7. Identifies risks and plans mitigation for them

## AI Analysis Guidelines

Use the AI Decision Engine to:

- Transform feedback insights into specific improvements
- Prioritize tasks by impact and effort
- Break down complex improvements into tasks
- Identify dependencies and sequencing
- Suggest realistic timelines

Be specific and actionable: tasks should be clear enough for developers to implement.

## Task Categories

- **feature**: New functionality
- **improvement**: Enhancement to an existing feature
- **bug_fix**: Resolve a defect or error
- **tech_debt**: Technical improvement (refactoring, performance, etc.)
- **testing**: Test coverage or quality improvements
- **documentation**: User guides, API docs, etc.

## Output Structure

Write to `.aidp/docs/NEXT_ITERATION_PLAN.md` with:

- Overview of iteration focus
- Iteration goals (3-5 measurable goals)
- Feature improvements (with issue, improvement, impact, effort, priority)
- New features (with rationale and acceptance criteria)
- Bug fixes (with priority and affected users)
- Technical debt items (with impact and effort)
- Task breakdown (with category, priority, effort, dependencies, success criteria)
- Success metrics and targets
- Risks with probability, impact, and mitigation
- Timeline with phases and activities
- Generated timestamp and metadata

## Common Pitfalls to Avoid

- Vague, non-actionable tasks
- Ignoring technical debt
- Over-ambitious scope for the iteration
- Missing dependencies between tasks
- No clear success metrics

## Output

Write the complete iteration plan to `.aidp/docs/NEXT_ITERATION_PLAN.md` with specific, prioritized, actionable tasks based on user feedback.
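The prioritization factors above ultimately have to produce an ordering of the task breakdown. One simple (and deliberately naive) sketch, assuming tasks carry the priority and effort labels this template defines, is to sort by priority tier first and prefer lower effort within a tier; the template expects the AI Decision Engine to weigh impact and dependencies more holistically than this:

```ruby
# Illustrative ordering helper, not part of aidp's real API.
# Sorts by priority tier, then by effort (cheapest wins within a tier).
PRIORITY = { "critical" => 0, "high" => 1, "medium" => 2, "low" => 3 }.freeze
EFFORT   = { "low" => 0, "medium" => 1, "high" => 2 }.freeze

def order_tasks(tasks)
  tasks.sort_by { |t| [PRIORITY.fetch(t[:priority]), EFFORT.fetch(t[:effort])] }
end
```

A critical/high-effort bug fix therefore still outranks a high-priority/low-effort improvement, matching the framework's emphasis on user impact before cost.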
data/templates/planning/agile/generate_legacy_research_plan.md

@@ -0,0 +1,171 @@

# Generate Legacy User Research Plan

You are a UX researcher creating a user research plan for an existing codebase/product.

## Your Task

Analyze an existing codebase to understand what features are already built, then create a user research plan to understand how users experience the product and identify improvement opportunities.

## Interactive Input

**Prompt the user for:**

1. Path to the codebase directory
2. Primary language/framework (for context)
3. Known user segments (if any)

## Codebase Analysis

Analyze the existing codebase to understand:

1. **Feature Inventory**: What features currently exist
2. **User-Facing Components**: UI, APIs, endpoints, workflows
3. **Integration Points**: External services, databases
4. **Configuration Options**: Customization and settings
5. **Documentation**: README, docs, comments

Use tree-sitter or static analysis to extract:

- Classes and modules
- Public APIs and methods
- User workflows and entry points
- Feature flags or toggles
- Configuration files

## Legacy Research Plan Components

### 1. Current Feature Audit

List of features identified in the codebase:

- Feature name
- Description (what it does)
- Entry points (how users access it)
- Status (active, deprecated, experimental)

### 2. Research Questions

Key questions to answer about the user experience:

- How are users currently using each feature?
- What pain points exist in current workflows?
- Which features are most/least valuable?
- Where do users get confused or stuck?
- What improvements would have the biggest impact?

### 3. Research Methods

Appropriate methods for a legacy product:

- **User Interviews**: Understand current usage and pain points
- **Usage Analytics**: Analyze feature adoption and patterns
- **Usability Testing**: Observe users with existing features
- **Surveys**: Collect feedback from a broad user base

### 4. Testing Priorities

Which features/flows to focus on first:

- High-usage features (most critical)
- Features with known issues
- Recently changed or updated features
- Features with low adoption (understand why)

### 5. User Segments

Different types of users to study:

- Power users vs. casual users
- Different use cases or workflows
- Different industries or contexts

### 6. Improvement Opportunities

Based on codebase analysis:

- Missing features users likely need
- Workflows that could be streamlined
- Technical debt affecting user experience
- Areas for modernization

### 7. Research Timeline

Phases with duration:

- Codebase analysis completion
- User recruitment
- Data collection (interviews, surveys, testing)
- Analysis and reporting

## Implementation

**For Ruby/AIDP projects**, use the `ruby_aidp_planning` skill with `Aidp::Planning::Generators::LegacyResearchPlanner`:

1. Prompt for the codebase path using TTY::Prompt
2. Analyze the codebase structure (tree-sitter, file scanning)
3. Generate the research plan using `LegacyResearchPlanner.generate(codebase_path:)`
4. Format as markdown using `format_as_markdown(plan)`
5. Write to `.aidp/docs/LEGACY_USER_RESEARCH_PLAN.md`

**For other implementations**, create equivalent functionality that:

1. Prompts for codebase information
2. Analyzes the codebase to extract a feature list
3. Uses AI to generate contextual research questions
4. Identifies testing priorities based on feature importance
5. Suggests appropriate research methods
6. Creates improvement recommendations based on code analysis

## Codebase Analysis Approach

For static analysis:

- Parse main entry points and routes
- Extract public APIs and classes
- Identify user-facing components
- Find configuration and feature flags
- Review documentation for feature descriptions

For tree-sitter analysis:

- Parse the AST to find classes and methods
- Identify public vs. private interfaces
- Extract comments and documentation
- Find integration points
- Map user workflows

## AI Analysis Guidelines

Use the AI Decision Engine to:

- Generate feature descriptions from code structure
- Create contextual research questions based on features
- Prioritize features for testing
- Suggest improvement opportunities
- Recommend appropriate research methods

## Output Structure

Write to `.aidp/docs/LEGACY_USER_RESEARCH_PLAN.md` with:

- Overview of research goals
- Current feature audit (features identified in the codebase)
- Research questions to answer
- Recommended research methods
- Testing priorities (features to focus on)
- User segments to study
- Improvement opportunities identified
- Research timeline
- Generated timestamp and metadata

## Common Use Cases

- Understanding usage of an existing product before a redesign
- Identifying pain points in a mature product
- Prioritizing feature improvements
- Planning modernization efforts
- Validating assumptions about user needs

## Output

Write the complete legacy user research plan to `.aidp/docs/LEGACY_USER_RESEARCH_PLAN.md` based on codebase analysis and AI-generated research questions.
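The feature-audit step above can start from something as crude as file-naming conventions before any AST work. A hypothetical first-pass sketch (the directory layout and `*_command.rb` convention mirror this gem's own `lib/aidp/cli/` files, but the helper itself is illustrative, not aidp's API; real analysis would use tree-sitter as described):

```ruby
require "pathname"

# Hypothetical: derive a first-pass feature inventory from CLI command
# file names. Each *_command.rb file becomes one audit entry.
def feature_inventory(commands_dir)
  Pathname.glob(File.join(commands_dir, "*_command.rb")).map do |path|
    name = path.basename(".rb").to_s.sub(/_command\z/, "")
    { feature: name, entry_point: path.to_s, status: "active" }
  end
end
```

Against this release's CLI directory, such a scan would surface entries like `models` and `providers`, each of which then feeds a row in the Current Feature Audit.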