agentic-team-templates 0.8.2 → 0.9.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +2 -0
- package/package.json +1 -1
- package/templates/product-manager/.cursorrules/communication.md +353 -0
- package/templates/product-manager/.cursorrules/discovery.md +258 -0
- package/templates/product-manager/.cursorrules/metrics.md +319 -0
- package/templates/product-manager/.cursorrules/overview.md +95 -0
- package/templates/product-manager/.cursorrules/prioritization.md +240 -0
- package/templates/product-manager/.cursorrules/requirements.md +371 -0
- package/templates/product-manager/CLAUDE.md +593 -0
- package/templates/qa-engineering/.cursorrules/automation.md +460 -0
- package/templates/qa-engineering/.cursorrules/metrics.md +292 -0
- package/templates/qa-engineering/.cursorrules/overview.md +125 -0
- package/templates/qa-engineering/.cursorrules/quality-gates.md +372 -0
- package/templates/qa-engineering/.cursorrules/test-design.md +301 -0
- package/templates/qa-engineering/.cursorrules/test-strategy.md +218 -0
- package/templates/qa-engineering/CLAUDE.md +726 -0

package/templates/product-manager/.cursorrules/metrics.md

@@ -0,0 +1,319 @@

# Product Metrics

Guidelines for defining, tracking, and acting on product metrics.

## Metrics Hierarchy

### North Star Metric

The single metric that best captures the core value your product delivers to customers.

```text
North Star Metric
├── Input Metrics (leading indicators you can influence)
│   ├── Activation rate
│   ├── Feature adoption
│   └── Engagement frequency
└── Output Metrics (lagging indicators that result)
    ├── Retention
    ├── Revenue
    └── Customer satisfaction
```

### Examples by Business Model

| Model | North Star | Input Metrics |
|-------|------------|---------------|
| SaaS | Weekly Active Users | Activation, Feature Adoption |
| Marketplace | Transactions | Listings, Buyer Visits |
| E-commerce | Revenue | Traffic, Conversion, AOV |
| Consumer App | Daily Active Users | Session Length, Features Used |
| B2B Platform | Active Accounts | Users per Account, API Calls |

## OKRs (Objectives and Key Results)

### Structure

```text
Objective: [Qualitative, inspiring, time-bound]
├── KR1: [Metric] - Baseline: X → Target: Y
├── KR2: [Metric] - Baseline: X → Target: Y
└── KR3: [Metric] - Baseline: X → Target: Y
```

### OKR Best Practices

| Practice | Rationale |
|----------|-----------|
| 3-5 Objectives per quarter | Focus enables execution |
| 2-4 Key Results per Objective | Measurable, not task lists |
| 70% achievement = success | Stretch goals drive innovation |
| Outcomes, not outputs | "Reduce churn to 5%" not "Launch retention feature" |
| Weekly check-ins | Track progress, identify blockers early |

### OKR Examples

**Good:**

```text
Objective: Become the preferred tool for enterprise teams

KR1: Increase enterprise NPS from 32 to 50
KR2: Reduce time-to-first-value from 14 days to 3 days
KR3: Grow enterprise accounts from 50 to 150
```

**Bad:**

```text
Objective: Build enterprise features

KR1: Launch SSO
KR2: Build admin dashboard
KR3: Create 10 case studies
```

(These are outputs/tasks, not measurable outcomes.)

### OKR Scoring

| Score | Interpretation |
|-------|----------------|
| 0.0-0.3 | Failed to make progress |
| 0.4-0.6 | Made progress but fell short |
| 0.7-0.9 | Hit our stretch goal |
| 1.0 | Achieved everything (goal may have been too easy) |
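
Because each KR is written as `Baseline: X → Target: Y`, the 0.0-1.0 score can be computed mechanically. A minimal sketch of that arithmetic (the `scoreKeyResult`/`scoreObjective` helpers and the "current" numbers are illustrative, not part of the template):

```javascript
// Score a key result on the 0.0-1.0 scale used above:
// progress from baseline toward target, clipped to [0, 1].
function scoreKeyResult({ baseline, target, current }) {
  if (target === baseline) return 1.0; // degenerate KR: already at target
  const progress = (current - baseline) / (target - baseline);
  return Math.min(1, Math.max(0, progress));
}

// An objective's score is conventionally the average of its KR scores.
function scoreObjective(keyResults) {
  const total = keyResults.reduce((sum, kr) => sum + scoreKeyResult(kr), 0);
  return total / keyResults.length;
}

// Example: the "enterprise teams" objective above, with hypothetical currents.
const score = scoreObjective([
  { baseline: 32, target: 50, current: 45 },   // NPS        → ≈0.72
  { baseline: 14, target: 3, current: 6 },     // days TTV   → ≈0.73 (lower is better)
  { baseline: 50, target: 150, current: 110 }, // accounts   → 0.60
]);
console.log(score.toFixed(2)); // ≈ 0.68 — just shy of the 0.7 "stretch goal" band
```

Note that the progress formula handles "lower is better" KRs automatically, since both numerator and denominator flip sign.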

## Pirate Metrics (AARRR)

### Framework

| Stage | Question | Example Metrics |
|-------|----------|-----------------|
| **A**cquisition | How do users find us? | Traffic, CAC, Channel Performance |
| **A**ctivation | Do users have a great first experience? | Signup Rate, Onboarding Completion |
| **R**etention | Do users come back? | D1/D7/D30 Retention, Churn Rate |
| **R**evenue | How do we make money? | ARPU, LTV, Conversion to Paid |
| **R**eferral | Do users tell others? | NPS, Viral Coefficient, Referrals |

### Funnel Analysis

```text
Visitors → Signups → Activated → Retained → Paid
100,000    10,000    6,000       4,000      1,000
       10%       60%        67%        25%
```
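
The step conversions shown under the counts follow directly from dividing each stage by the one before it. A minimal sketch (stage names and counts are from the example above; the `funnelConversions` helper is illustrative):

```javascript
// Compute step-to-step conversion rates for an ordered funnel.
const funnel = [
  ['Visitors', 100_000],
  ['Signups', 10_000],
  ['Activated', 6_000],
  ['Retained', 4_000],
  ['Paid', 1_000],
];

function funnelConversions(stages) {
  return stages.slice(1).map(([name, count], i) => {
    const [prevName, prevCount] = stages[i]; // stages[i] is the previous stage
    return `${prevName} → ${name}: ${((count / prevCount) * 100).toFixed(0)}%`;
  });
}

console.log(funnelConversions(funnel).join('\n'));
// Visitors → Signups: 10%
// Signups → Activated: 60%
// Activated → Retained: 67%
// Retained → Paid: 25%
```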

### Identifying Funnel Problems

| Pattern | Diagnosis | Focus Area |
|---------|-----------|------------|
| Low visitor-to-signup | Messaging/positioning issue | Acquisition |
| Low signup-to-activated | Onboarding friction | Activation |
| Low activated-to-retained | Core value not delivered | Product/Value |
| Low retained-to-paid | Pricing or value perception | Monetization |

## Metric Definitions

### Retention Metrics

| Metric | Formula | Use Case |
|--------|---------|----------|
| D1 Retention | Users active on Day 1 / New users | Early activation signal |
| D7 Retention | Users active on Day 7 / New users | Short-term retention |
| D30 Retention | Users active on Day 30 / New users | Medium-term retention |
| Weekly Retention | Users active this week / Users active last week | Cohort health |
| Logo Churn | Accounts lost / Total accounts | B2B health |
| Revenue Churn | MRR lost / Total MRR | Revenue health |
| Net Revenue Retention | (Start MRR + Expansion - Churn) / Start MRR | Growth efficiency |
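
The NRR formula in the table translates directly to code. A minimal sketch (the helper name and sample figures are illustrative):

```javascript
// Net Revenue Retention per the table: (Start MRR + Expansion - Churned MRR) / Start MRR.
// NRR > 1.0 means the existing customer base grows even with zero new sales.
function netRevenueRetention({ startMrr, expansionMrr, churnedMrr }) {
  return (startMrr + expansionMrr - churnedMrr) / startMrr;
}

const nrr = netRevenueRetention({
  startMrr: 100_000,    // MRR at the start of the period
  expansionMrr: 12_000, // upgrades and seat expansion
  churnedMrr: 7_000,    // lost to cancellations and downgrades
});
console.log(`${(nrr * 100).toFixed(0)}% NRR`); // 105% NRR
```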

### Engagement Metrics

| Metric | Formula | Use Case |
|--------|---------|----------|
| DAU/MAU | Daily Active / Monthly Active | Stickiness |
| Session Length | Time in app per session | Depth of engagement |
| Sessions per Day | Sessions / DAU | Frequency |
| Feature Adoption | Users using feature / Total users | Feature success |
| Time to Value | Time from signup to "aha moment" | Onboarding efficiency |

### Revenue Metrics

| Metric | Formula | Use Case |
|--------|---------|----------|
| MRR | Monthly recurring revenue | Revenue health |
| ARR | MRR × 12 | Annual planning |
| ARPU | Revenue / Users | Revenue efficiency |
| LTV | ARPU × Average lifetime | Customer value |
| CAC | Acquisition cost / New customers | Acquisition efficiency |
| LTV:CAC | LTV / CAC | Unit economics (target: 3:1+) |
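
The LTV, CAC, and LTV:CAC formulas chain together into a single unit-economics check. A minimal sketch, assuming the common shortcut of estimating average lifetime as 1 / monthly churn (the `unitEconomics` helper and sample inputs are illustrative):

```javascript
// Unit economics from the table's formulas.
function unitEconomics({ arpu, monthlyChurn, acquisitionSpend, newCustomers }) {
  const lifetimeMonths = 1 / monthlyChurn;     // assumption: 4% churn → 25 months
  const ltv = arpu * lifetimeMonths;           // LTV = ARPU × average lifetime
  const cac = acquisitionSpend / newCustomers; // CAC = spend / new customers
  return { ltv, cac, ratio: ltv / cac };
}

const { ltv, cac, ratio } = unitEconomics({
  arpu: 50, // $/user/month
  monthlyChurn: 0.04,
  acquisitionSpend: 120_000,
  newCustomers: 300,
});
console.log(ltv, cac, ratio.toFixed(1)); // 1250 400 '3.1' — clears the 3:1 target
```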

## Instrumentation Standards

### Event Naming Convention

```text
object_action

Examples:
- user_signed_up
- feature_used
- subscription_upgraded
- report_exported
```
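
Naming conventions like this are easiest to keep when they are enforced at the call site rather than by review. A minimal sketch (the `assertEventName` guard is illustrative, not part of any analytics SDK):

```javascript
// Enforce snake_case object_action names before events leave the client:
// two or more lowercase words joined by underscores.
const EVENT_NAME = /^[a-z]+(?:_[a-z]+)+$/;

function assertEventName(name) {
  if (!EVENT_NAME.test(name)) {
    throw new Error(`Event "${name}" violates the object_action convention`);
  }
  return name;
}

assertEventName('user_signed_up');        // ok
assertEventName('subscription_upgraded'); // ok
assertEventName('Signup');                // throws: no object_action structure
```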

### Event Properties

```javascript
analytics.track('feature_used', {
  // Required
  feature_name: 'search',
  timestamp: '2025-01-28T12:00:00Z',

  // User context
  user_id: '123',
  account_id: 'abc',
  user_role: 'admin',

  // Session context
  session_id: 'xyz',
  platform: 'web',

  // Feature-specific
  query: 'product roadmap',
  results_count: 15,
  time_to_results_ms: 234
});
```

### Standard Events to Track

| Event | When to Fire | Key Properties |
|-------|--------------|----------------|
| `user_signed_up` | Registration complete | signup_method, referrer |
| `user_activated` | Completed activation criteria | time_to_activate |
| `feature_used` | Core feature interaction | feature_name, context |
| `upgrade_started` | Began upgrade flow | plan_from, plan_to |
| `upgrade_completed` | Payment successful | plan, revenue |
| `support_contacted` | Reached out for help | channel, topic |

## Dashboards

### Executive Dashboard

```text
┌─────────────────────────────────────────────────────────┐
│  NORTH STAR: Weekly Active Users                        │
│  ████████████████████████░░░░░░  45,000 / 50,000 (90%)  │
└─────────────────────────────────────────────────────────┘

┌─────────────────────────┬─────────────────────────┐
│  Revenue                │  Retention              │
│  MRR: $125K (+8%)       │  D30: 42% (+3pp)        │
│  ARR: $1.5M             │  Churn: 4.2% (-0.5pp)   │
└─────────────────────────┴─────────────────────────┘

┌─────────────────────────────────────────────────────────┐
│  OKR Progress                                           │
│  Q1 Obj 1: ████████░░ 80%                               │
│  Q1 Obj 2: █████░░░░░ 50%                               │
│  Q1 Obj 3: ███████░░░ 70%                               │
└─────────────────────────────────────────────────────────┘
```

### Product Dashboard

```text
┌─────────────────────────────────────────────────────────┐
│  Funnel (Last 7 Days)                                   │
│  Visitors → Signups → Activated → Retained → Paid       │
│  100K       10K       6K          4K         1K         │
│         10%       60%        67%        25%             │
└─────────────────────────────────────────────────────────┘

┌─────────────────────────┬─────────────────────────┐
│  Feature Adoption       │  Recent Experiments     │
│  Search: 78%            │  New onboarding: +12%   │
│  Export: 45%            │  Pricing test: -3%      │
│  Integrations: 23%      │  Dark mode: neutral     │
└─────────────────────────┴─────────────────────────┘
```

## Experimentation

### A/B Test Framework

```markdown
## Experiment: [Name]

### Hypothesis
If we [change], then [metric] will [improve/decrease] because [reason].

### Metrics
- Primary: [Metric to optimize]
- Secondary: [Metrics to monitor]
- Guardrails: [Metrics that shouldn't regress]

### Variants
- Control: [Current experience]
- Treatment: [New experience]

### Sample Size & Duration
- Minimum detectable effect: [X%]
- Required sample: [N users per variant]
- Estimated duration: [X weeks]

### Results
| Metric | Control | Treatment | Lift | Significance |
|--------|---------|-----------|------|--------------|
| Primary | X | Y | +Z% | p < 0.05 |

### Decision
[Ship / Iterate / Kill] - [Reasoning]
```
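
Filling in the Sample Size & Duration section usually comes down to a standard two-proportion power calculation. A minimal sketch, assuming the common 95% confidence / 80% power defaults (the `sampleSizePerVariant` helper and the example rates are illustrative):

```javascript
// Approximate users needed per variant to detect an absolute lift `mde`
// over a baseline conversion rate, for a two-sided z-test.
function sampleSizePerVariant({ baseline, mde, zAlpha = 1.96, zBeta = 0.84 }) {
  const p1 = baseline;
  const p2 = baseline + mde;
  const variance = p1 * (1 - p1) + p2 * (1 - p2);
  return Math.ceil(((zAlpha + zBeta) ** 2 * variance) / mde ** 2);
}

// Detecting a 2pp lift on a 10% signup rate:
const n = sampleSizePerVariant({ baseline: 0.10, mde: 0.02 });
console.log(n); // 3834 users per variant
// Duration follows from traffic: at 1,000 eligible users/day split 50/50,
// that's roughly (2 × n) / 1,000 ≈ 8 days — round up to full weeks.
```

Fixing this number before launch is what prevents the "peeking" and "underpowered" anti-patterns listed below.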

### Statistical Significance

| Confidence Level | When to Use |
|------------------|-------------|
| 90% | Exploratory tests, low-risk changes |
| 95% | Standard product decisions |
| 99% | High-stakes changes, revenue impact |

### Experiment Anti-Patterns

| Anti-Pattern | Problem | Solution |
|--------------|---------|----------|
| Peeking | Stopping early inflates false positives | Set duration upfront, stick to it |
| Multiple testing | Increases false positive rate | Adjust for multiple comparisons |
| Underpowered | Can't detect real effects | Calculate sample size before starting |
| Metric gaming | Optimizing wrong behavior | Include guardrail metrics |

## Metric Reviews

### Weekly Metrics Review

```markdown
## Weekly Metrics Review: [Date]

### North Star
- Current: [Value]
- WoW Change: [+/-X%]
- Status: [On Track / At Risk / Off Track]

### Key Changes
1. [Metric 1] [increased/decreased] by [X%] because [reason]
2. [Metric 2] [increased/decreased] by [X%] because [reason]

### Experiments Update
- [Experiment 1]: [Status] - [Key finding]
- [Experiment 2]: [Status] - [Key finding]

### Actions
- [ ] [Action item 1] - Owner: [Name]
- [ ] [Action item 2] - Owner: [Name]
```

### Monthly Metrics Deep Dive

- Cohort analysis: How are different user groups performing?
- Segment analysis: Which segments are growing/shrinking?
- Feature impact: How did recent launches affect metrics?
- Competitive benchmarking: How do we compare to industry?

package/templates/product-manager/.cursorrules/overview.md

@@ -0,0 +1,95 @@

# Product Management

Principal-level guidelines for outcome-driven product management.

## Scope

This ruleset applies to:

- Product strategy and vision
- Customer discovery and research
- Feature prioritization and roadmapping
- Requirements documentation (PRDs, user stories)
- OKRs and product metrics
- Stakeholder alignment and communication
- Go-to-market coordination

## Core Philosophy

**Products exist to solve customer problems in ways that drive business outcomes.** Every decision should trace back to validated customer needs and measurable business impact.

## Fundamental Principles

### 1. Outcomes Over Outputs

Measure success by customer and business impact, not features shipped.

```markdown
❌ Wrong: "We shipped 15 features this quarter"
✅ Right: "We reduced time-to-value from 14 days to 3 days"
```

### 2. Continuous Discovery

Never stop learning from customers. Minimum one customer conversation per week.

### 3. Evidence-Based Decisions

Use data and research to inform priorities, not HiPPO (Highest Paid Person's Opinion).

### 4. Cross-Functional Collaboration

Great products emerge from empowered teams of product, engineering, and design working together, not from handoffs.

### 5. Strategic Clarity

Every feature connects to a user need, which connects to a product goal, which connects to a company objective.

## Project Structure

```text
product/
├── strategy/
│   ├── vision.md                # Product vision and mission
│   ├── strategy.md              # 1-2 year strategic plan
│   └── competitive-analysis.md
├── discovery/
│   ├── opportunity-tree.md      # Opportunity solution tree
│   ├── interviews/              # Customer interview notes
│   ├── personas/                # User personas
│   └── research/                # Research findings
├── roadmap/
│   ├── roadmap.md               # Current roadmap
│   ├── okrs.md                  # Quarterly OKRs
│   └── archive/                 # Historical roadmaps
├── requirements/
│   ├── prds/                    # Product requirements documents
│   ├── user-stories/            # User story backlog
│   └── specs/                   # Detailed specifications
├── analytics/
│   ├── metrics.md               # Key metrics definitions
│   ├── dashboards/              # Dashboard configs
│   └── experiments/             # A/B test documentation
└── communication/
    ├── stakeholder-updates/     # Status updates
    ├── release-notes/           # Customer-facing notes
    └── presentations/           # Roadmap presentations
```

## Decision Framework

When evaluating any product decision:

1. **Customer Impact**: Does this solve a validated customer problem?
2. **Business Alignment**: Does this support company objectives?
3. **Feasibility**: Can we build this with available resources?
4. **Evidence**: What data supports this decision?
5. **Opportunity Cost**: What are we NOT doing if we choose this?

## Communication Standards

- Use data to support assertions
- Lead with the "why" before the "what"
- Tailor detail level to audience
- Document decisions and rationale
- Share context, not just conclusions

package/templates/product-manager/.cursorrules/prioritization.md

@@ -0,0 +1,240 @@

# Prioritization

Frameworks and best practices for evidence-based prioritization.

## Core Principle

**Prioritization is about making trade-offs explicit.** Every "yes" is an implicit "no" to something else. Use frameworks to make these trade-offs visible and defensible.

## RICE Framework

### Formula

```text
RICE Score = (Reach × Impact × Confidence) / Effort
```

### Components

| Factor | Description | Measurement |
|--------|-------------|-------------|
| **Reach** | Users affected in time period | Number (per quarter) |
| **Impact** | Effect on each user | Scale: 0.25 - 3 |
| **Confidence** | Certainty in estimates | Percentage: 50% - 100% |
| **Effort** | Resources required | Person-months |

### Impact Scale

| Score | Label | Criteria |
|-------|-------|----------|
| 3 | Massive | Core workflow, high frequency, users would churn without it |
| 2 | High | Important workflow, meaningful improvement |
| 1 | Medium | Nice improvement, noticeable but not critical |
| 0.5 | Low | Minor improvement, some users benefit |
| 0.25 | Minimal | Edge case, rarely noticed |

### Confidence Scoring

| Score | Label | Criteria |
|-------|-------|----------|
| 100% | High | Validated with data, multiple sources confirm |
| 80% | Good | Strong signals from research, some data |
| 60% | Medium | Reasonable assumptions, limited validation |
| 50% | Low | Gut feel, minimal evidence |

### RICE Scoring Template

```markdown
## Feature: [Name]

### Reach
- Time period: [Quarter]
- Users affected: [Number]
- Source: [Analytics/Research/Estimate]

### Impact
- Score: [0.25/0.5/1/2/3]
- Rationale: [Why this score]

### Confidence
- Score: [50%/60%/80%/100%]
- Evidence: [What supports our estimates]
- Gaps: [What we don't know]

### Effort
- Estimate: [Person-months]
- Breakdown: [Engineering X, Design Y, QA Z]

### RICE Score
[Reach] × [Impact] × [Confidence] / [Effort] = [Score]
```

### Example RICE Comparison

| Feature | Reach | Impact | Confidence | Effort | RICE |
|---------|-------|--------|------------|--------|------|
| Search improvements | 50,000 | 2 | 80% | 3 | 26,667 |
| New dashboard | 10,000 | 2 | 60% | 4 | 3,000 |
| Export to CSV | 5,000 | 1 | 100% | 0.5 | 10,000 |
| Dark mode | 30,000 | 0.5 | 80% | 2 | 6,000 |

**Priority order: Search → Export → Dark Mode → Dashboard**
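
The ordering above can be reproduced mechanically, which keeps scoring consistent across reviewers. A minimal sketch using the formula and the table's own numbers (the `riceScore` helper is illustrative):

```javascript
// RICE per the formula above: (Reach × Impact × Confidence) / Effort.
// Confidence is a fraction (0.8 for 80%); Effort is in person-months.
const riceScore = ({ reach, impact, confidence, effort }) =>
  (reach * impact * confidence) / effort;

const backlog = [
  { name: 'Search improvements', reach: 50_000, impact: 2, confidence: 0.8, effort: 3 },
  { name: 'New dashboard', reach: 10_000, impact: 2, confidence: 0.6, effort: 4 },
  { name: 'Export to CSV', reach: 5_000, impact: 1, confidence: 1.0, effort: 0.5 },
  { name: 'Dark mode', reach: 30_000, impact: 0.5, confidence: 0.8, effort: 2 },
];

backlog
  .map((item) => ({ ...item, rice: Math.round(riceScore(item)) }))
  .sort((a, b) => b.rice - a.rice)
  .forEach(({ name, rice }) => console.log(name, rice));
// Search improvements 26667 → Export to CSV 10000 → Dark mode 6000 → New dashboard 3000
```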

## Alternative Frameworks

### Value vs. Effort Matrix

```text
High Value │  Quick Wins    │  Big Bets
           │  (Do First)    │  (Plan Carefully)
           │────────────────┼──────────────────
           │  Fill-Ins      │  Time Sinks
Low Value  │  (Maybe Later) │  (Avoid)
           └────────────────┴──────────────────
              Low Effort       High Effort
```

### Kano Model

| Category | Definition | Priority |
|----------|------------|----------|
| Must-Have | Expected, causes dissatisfaction if missing | High |
| Performance | More is better, linear satisfaction | Medium-High |
| Delighters | Unexpected, creates positive surprise | Strategic |
| Indifferent | Users don't care either way | Low |
| Reverse | Causes dissatisfaction if present | Remove |

### MoSCoW Method

| Priority | Description | Commitment |
|----------|-------------|------------|
| **Must** | Non-negotiable for release | 100% |
| **Should** | Important but not critical | High effort |
| **Could** | Nice to have | If time permits |
| **Won't** | Not this time | Explicitly excluded |

### ICE Scoring

```text
ICE Score = Impact × Confidence × Ease
```

Simpler than RICE, good for quick prioritization:

- **Impact**: 1-10 scale
- **Confidence**: 1-10 scale
- **Ease**: 1-10 scale (inverse of effort)
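
Since ICE drops RICE's reach term, a scorer reduces to a one-liner. A minimal sketch (the sample scores are illustrative):

```javascript
// ICE per the formula above; all three inputs on a 1-10 scale.
const iceScore = ({ impact, confidence, ease }) => impact * confidence * ease;

console.log(iceScore({ impact: 7, confidence: 6, ease: 8 })); // 336
console.log(iceScore({ impact: 9, confidence: 4, ease: 3 })); // 108 — big but risky and hard
```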

## Stakeholder Management

### Handling Prioritization Requests

```markdown
## Request Triage Framework

1. **Acknowledge**: "I understand this is important. Let me make sure I understand the problem."

2. **Understand**:
   - What problem does this solve?
   - Who is affected?
   - What's the impact of not doing this?
   - What's the urgency?

3. **Evaluate**:
   - Score against current prioritization framework
   - Compare to existing roadmap items
   - Identify trade-offs

4. **Respond**:
   - If high priority: "This scores well. Here's how it compares to current work..."
   - If low priority: "I understand the need. Here's why other items currently rank higher..."
   - If unclear: "I need more information to evaluate this properly..."

5. **Document**: Record request, evaluation, and decision
```

### Saying No Constructively

```markdown
## Framework for Declining Requests

"I appreciate you bringing this to me. Here's my perspective:

**Acknowledge the need**: I understand that [stakeholder's concern] is important because [reason].

**Explain current priorities**: Right now, we're focused on [current priorities] because [business rationale]. These are expected to deliver [expected outcomes].

**Show the trade-off**: If we were to prioritize [their request], we would need to delay [current work], which would impact [consequences].

**Offer alternatives**:
- Option A: We could address a smaller version of this in [timeframe]
- Option B: Here's a workaround that might help in the meantime
- Option C: Let's revisit this in [timeframe] when [conditions]

**Stay open**: I'm happy to discuss further or reconsider if new information emerges."
```

### Prioritization Governance

| Decision Type | Who Decides | Input From |
|---------------|-------------|------------|
| Quarterly themes | Product Leadership + Executives | All stakeholders |
| Monthly priorities | Product Manager | Engineering, Design, Stakeholders |
| Sprint items | Product Trio | Engineering Team |
| Day-to-day tasks | Engineering Team | Product Manager |

## Special Cases

### When to Override Scores

| Scenario | Action |
|----------|--------|
| Security vulnerability | Immediate priority regardless of score |
| Regulatory compliance | Non-negotiable timeline |
| Technical debt blocking features | Elevate priority |
| Strategic partnership requirement | Weigh relationship value |
| Quick win during downtime | Opportunistic execution |

### Technical Debt Prioritization

Allocate 15-20% of capacity to technical debt. Prioritize by:

1. **Blocking**: Prevents new features
2. **Slowing**: Significantly increases development time
3. **Risk**: Security or stability concerns
4. **Maintainability**: Code that's hard to understand/modify

### Bug Prioritization

| Severity | Criteria | Response |
|----------|----------|----------|
| P0 | System down, data loss, security breach | Drop everything |
| P1 | Major feature broken, many users affected | Fix this sprint |
| P2 | Feature impaired, workaround exists | Plan for near term |
| P3 | Minor issue, few users affected | Backlog |
| P4 | Cosmetic, edge case | If time permits |

## Maintaining Prioritization Health

### Weekly Review

- [ ] Review new requests against backlog
- [ ] Re-score items with new information
- [ ] Archive completed/obsolete items
- [ ] Communicate any changes to stakeholders

### Quarterly Review

- [ ] Reassess all backlog items
- [ ] Update confidence scores with new data
- [ ] Align with new company/product OKRs
- [ ] Archive items that no longer fit strategy

### Common Pitfalls

| Pitfall | Symptom | Solution |
|---------|---------|----------|
| Recency bias | Latest request always wins | Use consistent scoring |
| HiPPO | Exec requests skip the queue | Score all requests equally |
| Analysis paralysis | Nothing gets prioritized | Set decision deadlines |
| Stale backlog | Old items never reviewed | Regular backlog grooming |
| Inconsistent scoring | Different people score differently | Calibration sessions |