aidp 0.27.0 → 0.28.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/README.md +89 -0
- data/lib/aidp/cli/models_command.rb +5 -6
- data/lib/aidp/cli.rb +10 -8
- data/lib/aidp/config.rb +54 -0
- data/lib/aidp/debug_mixin.rb +23 -1
- data/lib/aidp/execute/agent_signal_parser.rb +22 -0
- data/lib/aidp/execute/repl_macros.rb +2 -2
- data/lib/aidp/execute/steps.rb +94 -1
- data/lib/aidp/execute/work_loop_runner.rb +209 -17
- data/lib/aidp/execute/workflow_selector.rb +2 -25
- data/lib/aidp/firewall/provider_requirements_collector.rb +262 -0
- data/lib/aidp/harness/ai_decision_engine.rb +35 -2
- data/lib/aidp/harness/config_manager.rb +0 -5
- data/lib/aidp/harness/config_schema.rb +8 -0
- data/lib/aidp/harness/configuration.rb +27 -19
- data/lib/aidp/harness/enhanced_runner.rb +1 -4
- data/lib/aidp/harness/error_handler.rb +1 -72
- data/lib/aidp/harness/provider_factory.rb +11 -2
- data/lib/aidp/harness/state_manager.rb +0 -7
- data/lib/aidp/harness/thinking_depth_manager.rb +47 -68
- data/lib/aidp/harness/ui/enhanced_tui.rb +8 -18
- data/lib/aidp/harness/ui/enhanced_workflow_selector.rb +0 -18
- data/lib/aidp/harness/ui/progress_display.rb +6 -2
- data/lib/aidp/harness/user_interface.rb +0 -58
- data/lib/aidp/init/runner.rb +7 -2
- data/lib/aidp/planning/analyzers/feedback_analyzer.rb +365 -0
- data/lib/aidp/planning/builders/agile_plan_builder.rb +387 -0
- data/lib/aidp/planning/builders/project_plan_builder.rb +193 -0
- data/lib/aidp/planning/generators/gantt_generator.rb +190 -0
- data/lib/aidp/planning/generators/iteration_plan_generator.rb +392 -0
- data/lib/aidp/planning/generators/legacy_research_planner.rb +473 -0
- data/lib/aidp/planning/generators/marketing_report_generator.rb +348 -0
- data/lib/aidp/planning/generators/mvp_scope_generator.rb +310 -0
- data/lib/aidp/planning/generators/user_test_plan_generator.rb +373 -0
- data/lib/aidp/planning/generators/wbs_generator.rb +259 -0
- data/lib/aidp/planning/mappers/persona_mapper.rb +163 -0
- data/lib/aidp/planning/parsers/document_parser.rb +141 -0
- data/lib/aidp/planning/parsers/feedback_data_parser.rb +252 -0
- data/lib/aidp/provider_manager.rb +8 -32
- data/lib/aidp/providers/aider.rb +264 -0
- data/lib/aidp/providers/anthropic.rb +74 -2
- data/lib/aidp/providers/base.rb +25 -1
- data/lib/aidp/providers/codex.rb +26 -3
- data/lib/aidp/providers/cursor.rb +16 -0
- data/lib/aidp/providers/gemini.rb +13 -0
- data/lib/aidp/providers/github_copilot.rb +17 -0
- data/lib/aidp/providers/kilocode.rb +11 -0
- data/lib/aidp/providers/opencode.rb +11 -0
- data/lib/aidp/setup/wizard.rb +249 -39
- data/lib/aidp/version.rb +1 -1
- data/lib/aidp/watch/build_processor.rb +211 -30
- data/lib/aidp/watch/change_request_processor.rb +128 -14
- data/lib/aidp/watch/ci_fix_processor.rb +103 -37
- data/lib/aidp/watch/ci_log_extractor.rb +258 -0
- data/lib/aidp/watch/github_state_extractor.rb +177 -0
- data/lib/aidp/watch/implementation_verifier.rb +284 -0
- data/lib/aidp/watch/plan_generator.rb +7 -43
- data/lib/aidp/watch/plan_processor.rb +7 -6
- data/lib/aidp/watch/repository_client.rb +245 -17
- data/lib/aidp/watch/review_processor.rb +98 -17
- data/lib/aidp/watch/reviewers/base_reviewer.rb +1 -1
- data/lib/aidp/watch/runner.rb +181 -29
- data/lib/aidp/watch/state_store.rb +22 -1
- data/lib/aidp/workflows/definitions.rb +147 -0
- data/lib/aidp/workstream_cleanup.rb +245 -0
- data/lib/aidp/worktree.rb +19 -0
- data/templates/aidp.yml.example +57 -0
- data/templates/implementation/generate_tdd_specs.md +213 -0
- data/templates/implementation/iterative_implementation.md +122 -0
- data/templates/planning/agile/analyze_feedback.md +183 -0
- data/templates/planning/agile/generate_iteration_plan.md +179 -0
- data/templates/planning/agile/generate_legacy_research_plan.md +171 -0
- data/templates/planning/agile/generate_marketing_report.md +162 -0
- data/templates/planning/agile/generate_mvp_scope.md +127 -0
- data/templates/planning/agile/generate_user_test_plan.md +143 -0
- data/templates/planning/agile/ingest_feedback.md +174 -0
- data/templates/planning/assemble_project_plan.md +113 -0
- data/templates/planning/assign_personas.md +108 -0
- data/templates/planning/create_tasks.md +52 -6
- data/templates/planning/generate_gantt.md +86 -0
- data/templates/planning/generate_wbs.md +85 -0
- data/templates/planning/initialize_planning_mode.md +70 -0
- data/templates/skills/README.md +2 -2
- data/templates/skills/marketing_strategist/SKILL.md +279 -0
- data/templates/skills/product_manager/SKILL.md +177 -0
- data/templates/skills/ruby_aidp_planning/SKILL.md +497 -0
- data/templates/skills/ruby_rspec_tdd/SKILL.md +514 -0
- data/templates/skills/ux_researcher/SKILL.md +222 -0
- metadata +39 -1
@@ -0,0 +1,183 @@ data/templates/planning/agile/analyze_feedback.md

# Analyze User Feedback

You are a UX researcher analyzing user feedback using AI-powered semantic analysis.

## Input

Read:

- `.aidp/docs/feedback_data.json` - Normalized feedback data from the ingestion step

## Your Task

Perform a comprehensive analysis of user feedback to extract insights, identify trends, and generate actionable recommendations.

## Analysis Components

### 1. Executive Summary

High-level overview (2-3 paragraphs):

- Overall sentiment and key themes
- Most important findings
- Top recommendations

### 2. Sentiment Breakdown

Distribution analysis:

- Positive/negative/neutral counts and percentages
- Sentiment trends over time (if timestamps are available)
- Sentiment by feature or category

### 3. Key Findings

3-5 major discoveries, each with:

- Finding title and description
- Evidence (quotes, data supporting the finding)
- Impact assessment (high/medium/low)

### 4. Trends and Patterns

Recurring themes across responses:

- Trend description
- Frequency (how often mentioned)
- Implications for product development

### 5. Insights

Categorized observations:

- **Usability**: Ease of use, interface issues
- **Features**: What's working, what's missing
- **Performance**: Speed, reliability concerns
- **Value**: Perceived value and benefits

### 6. Feature-Specific Feedback

For each feature mentioned:

- Overall sentiment
- Positive feedback
- Negative feedback
- Suggested improvements

### 7. Priority Issues

Critical items requiring immediate attention:

- Issue description
- Priority level (critical/high/medium)
- Number/percentage of users affected
- Recommended action

### 8. Positive Highlights

What users loved:

- Features or aspects that delighted users
- Strengths to maintain or amplify

### 9. Recommendations

4-6 actionable recommendations, each with:

- Recommendation title and description
- Rationale based on feedback
- Effort estimate (low/medium/high)
- Expected impact (low/medium/high)

## Analysis Principles

**Semantic Analysis:**

- Use AI to understand meaning and context, not just keywords
- Identify themes across different wording
- Understand user intent and emotion

**Evidence-Based:**

- Support findings with specific quotes or data
- Quantify when possible
- Don't over-generalize from limited data

**Actionable:**

- Translate insights into specific recommendations
- Prioritize by impact and urgency
- Make recommendations concrete and implementable

**Objective:**

- Present both positive and negative feedback fairly
- Avoid bias toward confirming existing beliefs
- Let data speak for itself

## Implementation

**For Ruby/AIDP projects**, use the `ruby_aidp_planning` skill with `Aidp::Planning::Analyzers::FeedbackAnalyzer`:

1. Read feedback data from `.aidp/docs/feedback_data.json`
2. Analyze using `FeedbackAnalyzer.new(ai_decision_engine:).analyze(feedback_data)`
3. Format as markdown using `format_as_markdown(analysis)`
4. Write to `.aidp/docs/USER_FEEDBACK_ANALYSIS.md`
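
A minimal Ruby sketch of those four steps, assuming `format_as_markdown` is an instance method on the analyzer and that an `ai_decision_engine` has already been constructed by the harness (its setup is not shown):

```ruby
require "json"

# `ai_decision_engine` is assumed to be supplied by the aidp harness.
feedback_data = JSON.parse(File.read(".aidp/docs/feedback_data.json"))

analyzer = Aidp::Planning::Analyzers::FeedbackAnalyzer.new(ai_decision_engine: ai_decision_engine)
analysis = analyzer.analyze(feedback_data)

# Render the structured analysis as markdown and persist the report.
File.write(".aidp/docs/USER_FEEDBACK_ANALYSIS.md", analyzer.format_as_markdown(analysis))
```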

**For other implementations**, create equivalent functionality that:

1. Parses normalized feedback data
2. Uses AI for semantic analysis (NO regex, keyword matching, or heuristics)
3. Identifies patterns and themes
4. Calculates sentiment distribution
5. Extracts evidence-based findings
6. Generates prioritized recommendations
7. Formats as comprehensive markdown report

## AI Analysis Guidelines

Use AI Decision Engine with Zero Framework Cognition:

**NO:**

- Regex pattern matching
- Keyword counting
- Scoring formulas
- Heuristic thresholds

**YES:**

- Semantic understanding of text
- Context-aware analysis
- Theme identification across varied wording
- Nuanced sentiment analysis
- Evidence-based recommendations

Provide structured schema for consistent output.
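
For example, the schema handed to the engine might be shaped roughly like this; the field names below are illustrative only, not a fixed aidp contract:

```ruby
# Illustrative shape of a structured output schema for the AI Decision Engine.
# Adjust the fields to whatever the engine actually expects in your aidp version.
ANALYSIS_SCHEMA = {
  executive_summary: String,
  sentiment_breakdown: {positive: Integer, negative: Integer, neutral: Integer},
  key_findings: [{title: String, evidence: [String], impact: String}],
  trends: [{description: String, frequency: String, implications: String}],
  recommendations: [{title: String, rationale: String, effort: String, impact: String}]
}.freeze
```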

## Output Structure

Write to `.aidp/docs/USER_FEEDBACK_ANALYSIS.md` with:

- Executive summary
- Sentiment breakdown (table with counts and percentages)
- Key findings with evidence and impact
- Trends and patterns with frequency and implications
- Categorized insights (usability, features, performance, value)
- Feature-specific feedback (positive, negative, improvements)
- Priority issues (with recommended actions)
- Positive highlights
- Actionable recommendations (with rationale, effort, impact)
- Generated timestamp and metadata

## Common Pitfalls to Avoid

- Keyword matching instead of semantic understanding
- Ignoring negative feedback
- Over-generalizing from limited responses
- Recommendations without evidence
- Analysis paralysis (waiting for perfect data)

## Output

Write complete feedback analysis to `.aidp/docs/USER_FEEDBACK_ANALYSIS.md` with all insights and evidence-based recommendations.
@@ -0,0 +1,179 @@ data/templates/planning/agile/generate_iteration_plan.md

# Generate Next Iteration Plan

You are a product manager creating the next iteration plan based on user feedback.

## Input

Read:

- `.aidp/docs/USER_FEEDBACK_ANALYSIS.md` - Analyzed user feedback with insights and recommendations
- `.aidp/docs/MVP_SCOPE.md` (if available) - Current MVP features

## Your Task

Create a detailed plan for the next development iteration that addresses user feedback, prioritizes improvements, and defines specific, actionable tasks.

## Iteration Plan Components

### 1. Overview

- Focus of this iteration
- Why these priorities
- Expected outcomes

### 2. Iteration Goals

3-5 clear, measurable goals for this iteration

### 3. Feature Improvements

For existing features that need enhancement:

- Feature name
- Current issue/problem
- Proposed improvement
- User impact
- Effort estimate (low/medium/high)
- Priority (critical/high/medium/low)

### 4. New Features

Features to add based on user requests:

- Feature name and description
- Rationale (why add this now)
- Acceptance criteria
- Effort estimate

### 5. Bug Fixes

Critical and high-priority bugs:

- Bug title and description
- Priority level
- Number/percentage of users affected
- Fix approach

### 6. Technical Debt

Technical improvements needed:

- Debt item title and description
- Why it matters (impact on quality, performance, maintainability)
- Effort estimate

### 7. Task Breakdown

Specific, actionable tasks:

- Task name and description
- Category (feature, improvement, bug_fix, tech_debt, testing, documentation)
- Priority
- Estimated effort
- Dependencies
- Success criteria

### 8. Success Metrics

How to measure iteration success:

- Metric name
- Target value
- How to measure

### 9. Risks and Mitigation

What could go wrong:

- Risk description
- Probability (low/medium/high)
- Impact (low/medium/high)
- Mitigation strategy

### 10. Timeline

Iteration phases:

- Phase name
- Duration
- Key activities

## Prioritization Framework

Consider these factors when prioritizing:

1. **User Impact**: How many users benefit? How significantly?
2. **Business Value**: Does this align with business goals?
3. **Effort**: How much work is required?
4. **Risk**: What's the probability and impact of failure?
5. **Dependencies**: What must happen first?
6. **Learning**: What will we learn from building this?

## Implementation

**For Ruby/AIDP projects**, use the `ruby_aidp_planning` skill with `Aidp::Planning::Generators::IterationPlanGenerator`:

1. Parse feedback analysis using `Aidp::Planning::Parsers::DocumentParser`
2. Parse MVP scope if available
3. Generate plan using `IterationPlanGenerator.generate(feedback_analysis:, current_mvp:)`
4. Format as markdown using `format_as_markdown(plan)`
5. Write to `.aidp/docs/NEXT_ITERATION_PLAN.md`
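
A minimal Ruby sketch of those steps; the `DocumentParser#parse` usage and the class-level generator calls shown are assumptions inferred from the step list, not confirmed signatures:

```ruby
parser = Aidp::Planning::Parsers::DocumentParser.new

# Parse the required feedback analysis and the optional MVP scope.
feedback_analysis = parser.parse(File.read(".aidp/docs/USER_FEEDBACK_ANALYSIS.md"))
mvp_path = ".aidp/docs/MVP_SCOPE.md"
current_mvp = File.exist?(mvp_path) ? parser.parse(File.read(mvp_path)) : nil

generator = Aidp::Planning::Generators::IterationPlanGenerator
plan = generator.generate(feedback_analysis: feedback_analysis, current_mvp: current_mvp)

File.write(".aidp/docs/NEXT_ITERATION_PLAN.md", generator.format_as_markdown(plan))
```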

**For other implementations**, create equivalent functionality that:

1. Parses feedback analysis to understand issues and recommendations
2. Parses current MVP scope if available
3. Uses AI to transform recommendations into actionable tasks
4. Prioritizes based on user impact, effort, and dependencies
5. Breaks down work into specific tasks
6. Defines success metrics for the iteration
7. Identifies and plans mitigation for risks

## AI Analysis Guidelines

Use AI Decision Engine to:

- Transform feedback insights into specific improvements
- Prioritize tasks by impact and effort
- Break down complex improvements into tasks
- Identify dependencies and sequencing
- Suggest realistic timelines

Be specific and actionable: tasks should be clear enough for developers to implement.

## Task Categories

- **feature**: New functionality
- **improvement**: Enhancement to existing feature
- **bug_fix**: Resolve defect or error
- **tech_debt**: Technical improvement (refactoring, performance, etc.)
- **testing**: Test coverage or quality improvements
- **documentation**: User guides, API docs, etc.

## Output Structure

Write to `.aidp/docs/NEXT_ITERATION_PLAN.md` with:

- Overview of iteration focus
- Iteration goals (3-5 measurable goals)
- Feature improvements (with issue, improvement, impact, effort, priority)
- New features (with rationale and acceptance criteria)
- Bug fixes (with priority and affected users)
- Technical debt items (with impact and effort)
- Task breakdown (with category, priority, effort, dependencies, success criteria)
- Success metrics and targets
- Risks with probability, impact, and mitigation
- Timeline with phases and activities
- Generated timestamp and metadata

## Common Pitfalls to Avoid

- Vague, non-actionable tasks
- Ignoring technical debt
- Over-ambitious scope for the iteration
- Missing dependencies between tasks
- No clear success metrics

## Output

Write complete iteration plan to `.aidp/docs/NEXT_ITERATION_PLAN.md` with specific, prioritized, actionable tasks based on user feedback.
@@ -0,0 +1,171 @@ data/templates/planning/agile/generate_legacy_research_plan.md

# Generate Legacy User Research Plan

You are a UX researcher creating a user research plan for an existing codebase/product.

## Your Task

Analyze an existing codebase to understand what features are already built, then create a user research plan to understand how users experience the product and identify improvement opportunities.

## Interactive Input

**Prompt the user for:**

1. Path to codebase directory
2. Primary language/framework (for context)
3. Known user segments (if any)

## Codebase Analysis

Analyze the existing codebase to understand:

1. **Feature Inventory**: What features currently exist
2. **User-Facing Components**: UI, APIs, endpoints, workflows
3. **Integration Points**: External services, databases
4. **Configuration Options**: Customization and settings
5. **Documentation**: README, docs, comments

Use tree-sitter or static analysis to extract:

- Classes and modules
- Public APIs and methods
- User workflows and entry points
- Feature flags or toggles
- Configuration files

## Legacy Research Plan Components

### 1. Current Feature Audit

List of features identified in the codebase:

- Feature name
- Description (what it does)
- Entry points (how users access it)
- Status (active, deprecated, experimental)

### 2. Research Questions

Key questions to answer about user experience:

- How are users currently using each feature?
- What pain points exist in current workflows?
- Which features are most/least valuable?
- Where do users get confused or stuck?
- What improvements would have the biggest impact?

### 3. Research Methods

Appropriate methods for a legacy product:

- **User Interviews**: Understand current usage and pain points
- **Usage Analytics**: Analyze feature adoption and patterns
- **Usability Testing**: Observe users with existing features
- **Surveys**: Collect feedback from a broad user base

### 4. Testing Priorities

Which features/flows to focus on first:

- High-usage features (most critical)
- Features with known issues
- Recently changed or updated features
- Features with low adoption (understand why)

### 5. User Segments

Different types of users to study:

- Power users vs. casual users
- Different use cases or workflows
- Different industries or contexts

### 6. Improvement Opportunities

Based on codebase analysis:

- Missing features users likely need
- Workflows that could be streamlined
- Technical debt affecting user experience
- Areas for modernization

### 7. Research Timeline

Phases with duration:

- Codebase analysis completion
- User recruitment
- Data collection (interviews, surveys, testing)
- Analysis and reporting

## Implementation

**For Ruby/AIDP projects**, use the `ruby_aidp_planning` skill with `Aidp::Planning::Generators::LegacyResearchPlanner`:

1. Prompt for codebase path using TTY::Prompt
2. Analyze codebase structure (tree-sitter, file scanning)
3. Generate research plan using `LegacyResearchPlanner.generate(codebase_path:)`
4. Format as markdown using `format_as_markdown(plan)`
5. Write to `.aidp/docs/LEGACY_USER_RESEARCH_PLAN.md`
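
A minimal Ruby sketch of those steps; the prompt wording and the class-level `generate`/`format_as_markdown` calls are assumptions based on the step list:

```ruby
require "tty-prompt"

# Ask the user where the codebase lives (wording is illustrative).
prompt = TTY::Prompt.new
codebase_path = prompt.ask("Path to codebase directory:", default: ".")

planner = Aidp::Planning::Generators::LegacyResearchPlanner
plan = planner.generate(codebase_path: codebase_path)

File.write(".aidp/docs/LEGACY_USER_RESEARCH_PLAN.md", planner.format_as_markdown(plan))
```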

**For other implementations**, create equivalent functionality that:

1. Prompts for codebase information
2. Analyzes codebase to extract feature list
3. Uses AI to generate contextual research questions
4. Identifies testing priorities based on feature importance
5. Suggests appropriate research methods
6. Creates improvement recommendations based on code analysis

## Codebase Analysis Approach

For static analysis:

- Parse main entry points and routes
- Extract public APIs and classes
- Identify user-facing components
- Find configuration and feature flags
- Review documentation for feature descriptions

For tree-sitter analysis:

- Parse AST to find classes and methods
- Identify public vs. private interfaces
- Extract comments and documentation
- Find integration points
- Map user workflows

## AI Analysis Guidelines

Use AI Decision Engine to:

- Generate feature descriptions from code structure
- Create contextual research questions based on features
- Prioritize features for testing
- Suggest improvement opportunities
- Recommend appropriate research methods

## Output Structure

Write to `.aidp/docs/LEGACY_USER_RESEARCH_PLAN.md` with:

- Overview of research goals
- Current feature audit (features identified in codebase)
- Research questions to answer
- Recommended research methods
- Testing priorities (features to focus on)
- User segments to study
- Improvement opportunities identified
- Research timeline
- Generated timestamp and metadata

## Common Use Cases

- Understanding usage of existing product before redesign
- Identifying pain points in mature product
- Prioritizing feature improvements
- Planning modernization efforts
- Validating assumptions about user needs

## Output

Write complete legacy user research plan to `.aidp/docs/LEGACY_USER_RESEARCH_PLAN.md` based on codebase analysis and AI-generated research questions.
@@ -0,0 +1,162 @@ data/templates/planning/agile/generate_marketing_report.md

# Generate Marketing Report

You are a marketing strategist creating comprehensive marketing materials for a product launch.

## Input

Read:

- `.aidp/docs/MVP_SCOPE.md` - MVP features and scope
- `.aidp/docs/USER_FEEDBACK_ANALYSIS.md` (if available) - User insights

## Your Task

Translate technical features into compelling customer value and create go-to-market materials that drive adoption.

## Marketing Report Components

### 1. Value Proposition

- **Headline**: Compelling 10-15 word benefit statement
- **Subheadline**: 15-25 word expansion
- **Core Benefits**: 3-5 customer outcomes (not features)

### 2. Key Messages

3-5 primary messages, each with:

- Message title
- Description
- 3-5 supporting points
- Focus on customer value, not technical details

### 3. Differentiators

2-4 competitive advantages:

- What makes this unique
- Why it matters to customers
- Evidence or proof points

### 4. Target Audience

2-3 customer segments, each with:

- Segment name and description
- Pain points they experience
- How our solution addresses their needs

### 5. Positioning

- Category (what market/space)
- Positioning statement (who, what, value, differentiation)
- Tagline (memorable 3-7 words)

### 6. Success Metrics

4-6 launch metrics:

- Specific targets
- How to measure
- Why it matters

### 7. Messaging Framework

For each audience:

- Tailored message
- Appropriate channel
- Call to action

### 8. Launch Checklist

8-12 pre-launch tasks:

- Task description
- Owner
- Timeline

## Marketing Principles

**Customer-Focused:**

- Start with problems and benefits, not features
- Use language customers use
- Focus on outcomes, not outputs

**Clear and Compelling:**

- Avoid jargon and technical terms
- Make it emotionally resonant
- Be specific and concrete

**Differentiated:**

- Clearly state what makes you different
- Don't just match competitors
- Claim unique positioning

**Evidence-Based:**

- Support claims with proof
- Use data when available
- Reference user feedback

## Implementation

**For Ruby/AIDP projects**, use the `ruby_aidp_planning` skill with `Aidp::Planning::Generators::MarketingReportGenerator`:

1. Parse MVP scope using `Aidp::Planning::Parsers::DocumentParser`
2. Parse feedback analysis if available
3. Generate report using `MarketingReportGenerator.generate(mvp_scope:, feedback_analysis:)`
4. Format as markdown using `format_as_markdown(report)`
5. Write to `.aidp/docs/MARKETING_REPORT.md`
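
A minimal Ruby sketch of those steps; the parser and generator call signatures are assumptions inferred from the step list:

```ruby
parser = Aidp::Planning::Parsers::DocumentParser.new

# MVP scope is required; feedback analysis is optional.
mvp_scope = parser.parse(File.read(".aidp/docs/MVP_SCOPE.md"))
feedback_path = ".aidp/docs/USER_FEEDBACK_ANALYSIS.md"
feedback_analysis = File.exist?(feedback_path) ? parser.parse(File.read(feedback_path)) : nil

generator = Aidp::Planning::Generators::MarketingReportGenerator
report = generator.generate(mvp_scope: mvp_scope, feedback_analysis: feedback_analysis)

File.write(".aidp/docs/MARKETING_REPORT.md", generator.format_as_markdown(report))
```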

**For other implementations**, create equivalent functionality that:

1. Parses MVP scope to understand features
2. Analyzes user feedback if available
3. Uses AI to craft customer-focused messaging
4. Translates technical features to customer benefits
5. Identifies competitive differentiation
6. Creates audience-specific messaging
7. Generates actionable launch checklist

## AI Analysis Guidelines

Use AI Decision Engine to:

- Transform technical features into customer benefits
- Craft compelling, jargon-free headlines
- Identify competitive advantages
- Create audience-specific messaging
- Generate evidence-based differentiators

Focus on customer value, not product capabilities.

## Output Structure

Write to `.aidp/docs/MARKETING_REPORT.md` with:

- Overview of marketing strategy
- Complete value proposition (headline, subheadline, benefits)
- Key messages with supporting points
- Differentiators with competitive advantages
- Target audience analysis with pain points and solutions
- Positioning (category, statement, tagline)
- Success metrics with targets and measurement
- Messaging framework table (audience, message, channel, CTA)
- Launch checklist with tasks, owners, timelines
- Generated timestamp and metadata

## Common Pitfalls to Avoid

- Feature lists without customer benefits
- Technical jargon that confuses customers
- Generic "me too" positioning
- Vague, unmeasurable claims
- Inside-out thinking (what we built vs. what customers get)

## Output

Write complete marketing report to `.aidp/docs/MARKETING_REPORT.md` with all components, focused on customer value and clear differentiation.