oden-forge 2.2.1 → 2.4.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.claude/commands/oden/epic.md +226 -179
- package/.claude/commands/oden/prd.md +283 -73
- package/.claude/commands/oden/tasks.md +350 -0
- package/.claude/commands/oden/work.md +121 -2
- package/MIGRATION.md +195 -2
- package/README.md +157 -283
- package/package.json +1 -1
````diff
--- a/package/.claude/commands/oden/prd.md
+++ b/package/.claude/commands/oden/prd.md
@@ -1,134 +1,344 @@
 ---
-allowed-tools: Bash, Read, Write,
-description: Create PRD with intelligent brainstorming
+allowed-tools: Bash, Read, Write, Task
+description: Create PRD with intelligent brainstorming using specialized subagents - context-optimized
 ---
 
-# PRD - Product Requirements Document
+# PRD - Product Requirements Document with Orchestrated Subagents
 
-Creates a complete PRD with
+Creates a complete PRD with **intelligent research and optimized brainstorming**, using specialized subagents for maximum context.
 
 ## Usage
 ```
 /oden:prd <feature_name>
 ```
 
-##
+## 🔄 New Architecture: Multi-Agent Research & Brainstorming
 
-
+### Problem Solved
+- ❌ **Before**: one session does research + brainstorming + writing (10,000+ tokens)
+- ✅ **Now**: 3 phases with parallel research + contextual brainstorming
 
-
-
+### 3-Phase Architecture
+
+```
+PHASE 1: Research (Parallel) 🟢
+├─ competitive-researcher → Investigate 3-5 competitors
+├─ context-analyzer → Scan existing PRDs + technical decisions
+└─ domain-researcher → Market research + user insights
+
+PHASE 2: Brainstorming (Interactive) 🔵
+└─ prd-interviewer → Smart questions based on research
 
-
-
+PHASE 3: Assembly (Main Session) 🟡
+└─ prd-assembler → Create coherent PRD document
+```
+
+## Preflight (Quick Validation)
 
+1. **Feature name**: Must be kebab-case (lowercase, numbers, hyphens, starts with a letter)
+   - If invalid: "Feature name must be kebab-case. Examples: user-auth, payment-v2"
+2. **Existing PRD**: Check `.claude/prds/$ARGUMENTS.md` - if it exists, ask to overwrite
 3. **Directory**: Create `.claude/prds/` if needed
 
````
|
-
##
|
|
43
|
+
## Phase 1: Parallel Research 🔍
|
|
44
|
+
|
|
45
|
+
Launch **3 specialized subagents in parallel** for comprehensive market and technical research:
|
|
46
|
+
|
|
47
|
+
### 1.1 Competitive Researcher
|
|
48
|
+
```markdown
|
|
49
|
+
Launch subagent: search-specialist
|
|
50
|
+
|
|
51
|
+
Task: Research competitive landscape and best practices
|
|
52
|
+
|
|
53
|
+
Requirements:
|
|
54
|
+
- Find and analyze 3-5 relevant competitors for $ARGUMENTS feature
|
|
55
|
+
- Document their approach, user flows, key features
|
|
56
|
+
- Identify gaps, opportunities, and differentiation points
|
|
57
|
+
- Research industry best practices and standards
|
|
58
|
+
- Note pricing models, user feedback, and success metrics
|
|
59
|
+
- Output: Competitive analysis with actionable insights
|
|
60
|
+
|
|
61
|
+
Context: Focus on practical implementation lessons, not just feature lists
|
|
62
|
+
```
|
|
63
|
+
|
|
64
|
+
### 1.2 Context Analyzer
|
|
65
|
+
```markdown
|
|
66
|
+
Launch subagent: technical-researcher
|
|
67
|
+
|
|
68
|
+
Task: Analyze existing project context and related work
|
|
69
|
+
|
|
70
|
+
Requirements:
|
|
71
|
+
- Read docs/reference/technical-decisions.md for stack/architecture constraints
|
|
72
|
+
- Scan .claude/prds/ for related features and potential overlaps
|
|
73
|
+
- Read project CLAUDE.md for conventions and methodologies
|
|
74
|
+
- Identify existing technical patterns to leverage
|
|
75
|
+
- Check for integration points with existing features
|
|
76
|
+
- Output: Project context summary with technical constraints
|
|
77
|
+
|
|
78
|
+
Context: Ensure new PRD aligns with existing technical and product strategy
|
|
79
|
+
```
|
|
80
|
+
|
|
81
|
+
### 1.3 Domain Researcher
|
|
82
|
+
```markdown
|
|
83
|
+
Launch subagent: data-analyst
|
|
84
|
+
|
|
85
|
+
Task: Research market trends, user needs, and success metrics
|
|
86
|
+
|
|
87
|
+
Requirements:
|
|
88
|
+
- Research market size, trends, and growth for $ARGUMENTS domain
|
|
89
|
+
- Identify target user personas and their pain points
|
|
90
|
+
- Find industry benchmarks and success metrics
|
|
91
|
+
- Research regulatory/compliance requirements if applicable
|
|
92
|
+
- Identify technical challenges and solutions in the domain
|
|
93
|
+
- Output: Market research with user insights and success criteria
|
|
94
|
+
|
|
95
|
+
Context: Ground PRD in real market data and user needs
|
|
96
|
+
```
|
|
97
|
+
|
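The Phase 1 fan-out/join above can be pictured as three background jobs joined with `wait`. This is only an analogy: `run_subagent` is a hypothetical stub standing in for the real Task-tool dispatch, and the output paths are arbitrary:

```shell
#!/bin/sh
# Shell analogy for the Phase 1 fan-out/join. `run_subagent` is a stub that
# merely echoes its arguments; real orchestration happens via the Task tool.
run_subagent() { printf 'subagent=%s task=%s\n' "$1" "$2"; }

feature="user-auth"   # example value standing in for $ARGUMENTS

# Launch all three research tasks in parallel, capturing each result.
run_subagent search-specialist    "competitive landscape for $feature" > /tmp/competitive.out &
run_subagent technical-researcher "project context for $feature"       > /tmp/context.out &
run_subagent data-analyst         "market research for $feature"       > /tmp/domain.out &
wait  # all three results must exist before Phase 2 brainstorming starts
```

The `wait` is the important part of the sketch: brainstorming only begins once every research result is available.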
````diff
+## Phase 2: Smart Brainstorming Session 💡
+
+Use research results to conduct focused, intelligent brainstorming:
+
+### 2.1 Contextual PRD Interviewer
+```markdown
+You are a product manager conducting a smart brainstorming session for: **$ARGUMENTS**
 
-
+Based on the research phase results:
+- Competitive landscape: [insights from competitive-researcher]
+- Technical context: [constraints from context-analyzer]
+- Market research: [trends from domain-researcher]
 
-
-- Read `docs/reference/technical-decisions.md` for stack/architecture
-- Read `docs/reference/competitive-analysis.md` for market context
-- Read any existing module specs in `docs/reference/modules/`
+### Adaptive Smart Questions
 
-
-- Identify potential overlaps or dependencies
+Ask 3-5 focused questions that leverage research insights:
 
-
+**If competitive analysis found gaps:**
+- "Competitors X and Y both struggle with [specific issue]. How should we solve this differently?"
 
-
+**If technical constraints exist:**
+- "Given our [existing stack/architecture], what's the most feasible approach for [key feature]?"
 
-
+**If market research shows trends:**
+- "[Market trend] is growing 40% YoY. How do we position against this opportunity?"
 
-
+**Core question areas (adapt based on research):**
+- **Problem**: What specific user pain does this solve? (reference research findings)
+- **Users**: Who benefits most? (use personas from domain research)
+- **Scope**: What's MVP vs full vision? (informed by competitive analysis)
+- **Constraints**: Timeline, budget, technical limitations? (from context analysis)
+- **Success**: How do we measure this worked? (use industry benchmarks)
 
-###
-
+### Question Guidelines:
+- Reference specific research findings in questions
+- Don't ask about things already known from technical-decisions.md
+- Focus on decisions that research couldn't answer
+- Keep total questions to 3-5 for a focused session
 
-
-
-- If related PRDs exist: Ask about integration/overlap
+Context: Use research to ask smarter, more targeted questions
+```
 
-
-- Problem: What specific user pain does this solve?
-- Users: Who benefits most? (reference existing personas if available)
-- Scope: What's MVP vs full vision?
-- Constraints: Timeline, budget, technical limitations?
-- Success: How do we measure this worked?
+## Phase 3: PRD Assembly 📋
 
-
+Main session synthesizes all research and brainstorming into a comprehensive PRD:
 
-
+### PRD Document Structure
 
-
+Create `.claude/prds/$ARGUMENTS.md`:
 
 ```markdown
 ---
 name: $ARGUMENTS
-description: [One-line summary]
+description: [One-line summary from brainstorming]
 status: backlog
-created: [Real datetime
+created: [Real datetime: date -u +"%Y-%m-%dT%H:%M:%SZ"]
+competitive_analysis: true
+market_research: true
+subagents_used: competitive-researcher, context-analyzer, domain-researcher, prd-interviewer
 ---
 
 # PRD: $ARGUMENTS
 
-## Executive Summary
-[Value proposition and brief overview]
+## 📊 Executive Summary
+[Value proposition and brief overview based on research and brainstorming]
+
+## 🎯 Problem Statement
+[What problem, why now, evidence from market research]
+
+### Market Context
+[From domain-researcher: market size, trends, growth]
+
+### Competitive Landscape
+[From competitive-researcher: key players, gaps, opportunities]
 
-##
-[
+## 👥 User Stories & Personas
+[From brainstorming session informed by domain research]
 
-
-[
+### Primary Personas
+[Based on domain research and brainstorming]
 
-
+### User Journeys
+[Informed by competitive analysis of successful flows]
+
+### Acceptance Criteria
+[Specific, testable criteria per story]
+
+## ⚙️ Requirements
 
 ### Functional Requirements
 [Core features with clear acceptance criteria]
 
+#### Inspired by Competitive Analysis:
+[Features/patterns learned from competitive research]
+
+#### Technical Integration Points:
+[From context-analyzer: how this connects to existing system]
+
 ### Non-Functional Requirements
 [Performance, security, scalability, accessibility]
 
-
-[
+#### Industry Standards:
+[Benchmarks from market research]
 
-
-[
+#### Technical Constraints:
+[From context-analyzer: stack limitations, existing patterns]
 
-##
-[
+## 📈 Success Criteria
+[Measurable KPIs from market research + brainstorming]
 
-
-[
-
+### Industry Benchmarks:
+[From domain research: what "good" looks like]
+
+### Business Metrics:
+[Revenue, user adoption, engagement targets]
+
+### Technical Metrics:
+[Performance, reliability, scalability targets]
+
+## 🚧 Constraints & Assumptions
+
+### Technical Constraints:
+[From context-analyzer: stack, architecture, integration limitations]
+
+### Market Constraints:
+[From domain research: regulatory, competitive, timeline factors]
+
+### Resource Constraints:
+[From brainstorming: budget, timeline, team limitations]
 
-##
+## ❌ Out of Scope
+[What we explicitly won't build - informed by competitive analysis]
 
-
-
-- User stories have acceptance criteria
-- Success criteria are measurable
-- Out of scope is explicit
+### Competitive Features We're Skipping:
+[Features competitors have that we're intentionally not building]
 
-
+### Future Considerations:
+[Features that might be added in later versions]
 
+## 🔗 Dependencies
+
+### Internal Dependencies:
+[From context-analyzer: other PRDs, shared systems, technical components]
+
+### External Dependencies:
+[Third-party services, APIs, data sources identified in research]
+
+## 💡 Research Insights
+
+### Competitive Intelligence:
+[Key learnings from competitive analysis that influenced decisions]
+
+### Market Opportunities:
+[Specific opportunities identified in domain research]
+
+### Technical Considerations:
+[Architecture insights from context analysis]
+
+## 📋 Next Steps
+1. Review PRD with stakeholders for completeness
+2. Create technical epic: `/oden:epic $ARGUMENTS`
+3. Begin implementation planning
 ```
-PRD created: .claude/prds/$ARGUMENTS.md
 
````
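The `created` field in the template above insists on a real datetime from `date -u +"%Y-%m-%dT%H:%M:%SZ"`, never a placeholder. A minimal sketch of emitting that frontmatter; the feature name and description are illustrative example values:

```shell
#!/bin/sh
# Sketch: write the PRD frontmatter with a real UTC timestamp, following the
# template above. Feature name and description are example values.
feature="user-auth"
created=$(date -u +"%Y-%m-%dT%H:%M:%SZ")

mkdir -p .claude/prds
cat > ".claude/prds/$feature.md" <<EOF
---
name: $feature
description: One-line summary from brainstorming
status: backlog
created: $created
competitive_analysis: true
market_research: true
subagents_used: competitive-researcher, context-analyzer, domain-researcher, prd-interviewer
---
EOF
```

Because the heredoc delimiter is unquoted, `$feature` and `$created` expand, so the file always carries the actual generation time.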
````diff
-
-
-
-
-
+## 📊 Quality Checks & Output
+
+Before completion, verify:
+- [ ] All research insights properly incorporated
+- [ ] User stories have acceptance criteria based on competitive learnings
+- [ ] Success criteria use industry benchmarks from research
+- [ ] Technical constraints from existing system acknowledged
+- [ ] No research findings ignored or contradicted
+- [ ] Competitive differentiation clearly articulated
 
-
+## Success Output
+
+```
+🎉 PRD created with comprehensive research: .claude/prds/$ARGUMENTS.md
+
+📊 Research Summary:
+Phase 1: Competitive + Context + Domain research (parallel) ✅
+Phase 2: Smart brainstorming with research context ✅
+Phase 3: Research-informed PRD assembly ✅
+
+🔍 Research Insights Applied:
+- Competitive analysis: [X] competitors analyzed
+- Market research: [industry trends, user personas, benchmarks]
+- Technical context: [integration points, constraints identified]
+- Smart questions: [Y] targeted questions based on research
+
+📋 PRD Summary:
+- Problem: [one sentence from brainstorming]
+- Users: [personas from domain research]
+- Requirements: [count] functional + [count] non-functional
+- Success metrics: [key benchmarks from market research]
+- Differentiation: [competitive advantage identified]
+
+💡 Context Optimization:
+- Previous: single-session research + brainstorming (~10,000 tokens)
+- Current: parallel research + focused brainstorming (~4,000 tokens total)
+- Quality: multiple specialized perspectives + market intelligence
+- Decisions: research-backed rather than assumption-based
+
+Next Steps:
+1. Review PRD for stakeholder alignment
+2. Run: /oden:epic $ARGUMENTS (convert to technical implementation plan)
+3. Share competitive insights with product team
+```
+
+## 🔧 Implementation Notes
+
+### Error Handling
+- If competitive research finds <3 competitors → expand search terms or adjacent markets
+- If no technical context is available → proceed with generic technical considerations
+- If domain research is limited → focus on user interviews and surveys in brainstorming
+
+### Research Quality Gates
+- Competitive analysis must find ≥3 relevant examples
+- Market research should include quantitative data where available
+- Technical context should identify ≥1 integration point or constraint
+
````
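The quality gates above reduce to threshold checks on the research results. A rough sketch; `check_gates` and its inputs are illustrative, since the real counts would come from the Phase 1 outputs:

```shell
#!/bin/sh
# Sketch of the research quality gates as threshold checks. The counts are
# illustrative arguments, not real research output.
check_gates() {
  competitors="$1"          # relevant competitive examples found
  integration_points="$2"   # integration points/constraints identified
  ok=0
  if [ "$competitors" -lt 3 ]; then
    echo "gate failed: need >=3 competitors - expand search terms or adjacent markets"
    ok=1
  fi
  if [ "$integration_points" -lt 1 ]; then
    echo "gate failed: identify >=1 integration point or constraint"
    ok=1
  fi
  return "$ok"
}
```

A failed gate maps to the error-handling rules above (e.g. too few competitors means expanding the search) rather than aborting the run.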
````diff
+### Brainstorming Optimization
+- Questions adapt based on research quality and findings
+- If research is comprehensive, focus questions on decisions and tradeoffs
+- If research is limited, ask broader exploratory questions
+
+### Subagent Selection Logic
+```yaml
+competitive-researcher: search-specialist (expert web research, comparative analysis)
+context-analyzer: technical-researcher (reads technical docs, understands architecture)
+domain-researcher: data-analyst (market research, quantitative analysis, benchmarks)
+prd-interviewer: general-purpose (adaptable, good at asking smart questions)
 ```
 
-##
+## 🚀 Benefits Achieved
+
+1. **Research Quality**: Professional competitive and market analysis
+2. **Context Efficiency**: Parallel research vs sequential brainstorming
+3. **Smart Questions**: Research-informed rather than generic brainstorming
+4. **Decision Quality**: Market data + competitive intelligence backing decisions
+5. **Technical Alignment**: PRD considers existing architecture from day 1
+6. **Scalable Process**: Can handle complex domains with deep research needs
+7. **Reusable Insights**: Research can inform future related PRDs
+
+---
 
--
-- Never use placeholder dates
-- Leverage existing project context for smarter brainstorming
-- Keep brainstorming focused (3-5 questions, not 10+)
+**Important**: This creates research-backed PRDs rather than assumption-based documents, leading to better technical epics and implementation decisions.
````