opencode-metis 0.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (156)
  1. package/README.md +140 -0
  2. package/dist/cli.cjs +63 -0
  3. package/dist/mcp-server.cjs +51 -0
  4. package/dist/plugin.cjs +4 -0
  5. package/dist/worker.cjs +224 -0
  6. package/opencode/agent/the-analyst/feature-prioritization.md +66 -0
  7. package/opencode/agent/the-analyst/market-research.md +77 -0
  8. package/opencode/agent/the-analyst/project-coordination.md +81 -0
  9. package/opencode/agent/the-analyst/requirements-analysis.md +77 -0
  10. package/opencode/agent/the-architect/compatibility-review.md +138 -0
  11. package/opencode/agent/the-architect/complexity-review.md +137 -0
  12. package/opencode/agent/the-architect/quality-review.md +67 -0
  13. package/opencode/agent/the-architect/security-review.md +127 -0
  14. package/opencode/agent/the-architect/system-architecture.md +119 -0
  15. package/opencode/agent/the-architect/system-documentation.md +83 -0
  16. package/opencode/agent/the-architect/technology-research.md +85 -0
  17. package/opencode/agent/the-chief.md +79 -0
  18. package/opencode/agent/the-designer/accessibility-implementation.md +101 -0
  19. package/opencode/agent/the-designer/design-foundation.md +74 -0
  20. package/opencode/agent/the-designer/interaction-architecture.md +75 -0
  21. package/opencode/agent/the-designer/user-research.md +70 -0
  22. package/opencode/agent/the-meta-agent.md +155 -0
  23. package/opencode/agent/the-platform-engineer/ci-cd-pipelines.md +109 -0
  24. package/opencode/agent/the-platform-engineer/containerization.md +106 -0
  25. package/opencode/agent/the-platform-engineer/data-architecture.md +81 -0
  26. package/opencode/agent/the-platform-engineer/dependency-review.md +144 -0
  27. package/opencode/agent/the-platform-engineer/deployment-automation.md +81 -0
  28. package/opencode/agent/the-platform-engineer/infrastructure-as-code.md +107 -0
  29. package/opencode/agent/the-platform-engineer/performance-tuning.md +82 -0
  30. package/opencode/agent/the-platform-engineer/pipeline-engineering.md +81 -0
  31. package/opencode/agent/the-platform-engineer/production-monitoring.md +105 -0
  32. package/opencode/agent/the-qa-engineer/exploratory-testing.md +66 -0
  33. package/opencode/agent/the-qa-engineer/performance-testing.md +81 -0
  34. package/opencode/agent/the-qa-engineer/quality-assurance.md +77 -0
  35. package/opencode/agent/the-qa-engineer/test-execution.md +66 -0
  36. package/opencode/agent/the-software-engineer/api-development.md +78 -0
  37. package/opencode/agent/the-software-engineer/component-development.md +79 -0
  38. package/opencode/agent/the-software-engineer/concurrency-review.md +141 -0
  39. package/opencode/agent/the-software-engineer/domain-modeling.md +66 -0
  40. package/opencode/agent/the-software-engineer/performance-optimization.md +113 -0
  41. package/opencode/command/analyze.md +149 -0
  42. package/opencode/command/constitution.md +178 -0
  43. package/opencode/command/debug.md +194 -0
  44. package/opencode/command/document.md +178 -0
  45. package/opencode/command/implement.md +225 -0
  46. package/opencode/command/refactor.md +207 -0
  47. package/opencode/command/review.md +229 -0
  48. package/opencode/command/simplify.md +267 -0
  49. package/opencode/command/specify.md +191 -0
  50. package/opencode/command/validate.md +224 -0
  51. package/opencode/skill/accessibility-design/SKILL.md +566 -0
  52. package/opencode/skill/accessibility-design/checklists/wcag-checklist.md +435 -0
  53. package/opencode/skill/agent-coordination/SKILL.md +224 -0
  54. package/opencode/skill/api-contract-design/SKILL.md +550 -0
  55. package/opencode/skill/api-contract-design/templates/graphql-schema-template.md +818 -0
  56. package/opencode/skill/api-contract-design/templates/rest-api-template.md +417 -0
  57. package/opencode/skill/architecture-design/SKILL.md +160 -0
  58. package/opencode/skill/architecture-design/examples/architecture-examples.md +170 -0
  59. package/opencode/skill/architecture-design/template.md +749 -0
  60. package/opencode/skill/architecture-design/validation.md +99 -0
  61. package/opencode/skill/architecture-selection/SKILL.md +522 -0
  62. package/opencode/skill/architecture-selection/examples/adrs/001-example-adr.md +71 -0
  63. package/opencode/skill/architecture-selection/examples/architecture-patterns.md +239 -0
  64. package/opencode/skill/bug-diagnosis/SKILL.md +235 -0
  65. package/opencode/skill/code-quality-review/SKILL.md +337 -0
  66. package/opencode/skill/code-quality-review/examples/anti-patterns.md +629 -0
  67. package/opencode/skill/code-quality-review/reference.md +322 -0
  68. package/opencode/skill/code-review/SKILL.md +363 -0
  69. package/opencode/skill/code-review/reference.md +450 -0
  70. package/opencode/skill/codebase-analysis/SKILL.md +139 -0
  71. package/opencode/skill/codebase-navigation/SKILL.md +227 -0
  72. package/opencode/skill/codebase-navigation/examples/exploration-patterns.md +263 -0
  73. package/opencode/skill/coding-conventions/SKILL.md +178 -0
  74. package/opencode/skill/coding-conventions/checklists/accessibility-checklist.md +176 -0
  75. package/opencode/skill/coding-conventions/checklists/performance-checklist.md +154 -0
  76. package/opencode/skill/coding-conventions/checklists/security-checklist.md +127 -0
  77. package/opencode/skill/constitution-validation/SKILL.md +315 -0
  78. package/opencode/skill/constitution-validation/examples/CONSTITUTION.md +202 -0
  79. package/opencode/skill/constitution-validation/reference/rule-patterns.md +328 -0
  80. package/opencode/skill/constitution-validation/template.md +115 -0
  81. package/opencode/skill/context-preservation/SKILL.md +445 -0
  82. package/opencode/skill/data-modeling/SKILL.md +385 -0
  83. package/opencode/skill/data-modeling/templates/schema-design-template.md +268 -0
  84. package/opencode/skill/deployment-pipeline-design/SKILL.md +579 -0
  85. package/opencode/skill/deployment-pipeline-design/templates/pipeline-template.md +633 -0
  86. package/opencode/skill/documentation-extraction/SKILL.md +259 -0
  87. package/opencode/skill/documentation-sync/SKILL.md +431 -0
  88. package/opencode/skill/domain-driven-design/SKILL.md +509 -0
  89. package/opencode/skill/domain-driven-design/examples/ddd-patterns.md +688 -0
  90. package/opencode/skill/domain-driven-design/reference.md +465 -0
  91. package/opencode/skill/drift-detection/SKILL.md +383 -0
  92. package/opencode/skill/drift-detection/reference.md +340 -0
  93. package/opencode/skill/error-recovery/SKILL.md +162 -0
  94. package/opencode/skill/error-recovery/examples/error-patterns.md +484 -0
  95. package/opencode/skill/feature-prioritization/SKILL.md +419 -0
  96. package/opencode/skill/feature-prioritization/examples/rice-template.md +139 -0
  97. package/opencode/skill/feature-prioritization/reference.md +256 -0
  98. package/opencode/skill/git-workflow/SKILL.md +453 -0
  99. package/opencode/skill/implementation-planning/SKILL.md +215 -0
  100. package/opencode/skill/implementation-planning/examples/phase-examples.md +217 -0
  101. package/opencode/skill/implementation-planning/template.md +220 -0
  102. package/opencode/skill/implementation-planning/validation.md +88 -0
  103. package/opencode/skill/implementation-verification/SKILL.md +272 -0
  104. package/opencode/skill/knowledge-capture/SKILL.md +265 -0
  105. package/opencode/skill/knowledge-capture/reference/knowledge-capture.md +402 -0
  106. package/opencode/skill/knowledge-capture/reference.md +444 -0
  107. package/opencode/skill/knowledge-capture/templates/domain-template.md +325 -0
  108. package/opencode/skill/knowledge-capture/templates/interface-template.md +255 -0
  109. package/opencode/skill/knowledge-capture/templates/pattern-template.md +144 -0
  110. package/opencode/skill/observability-design/SKILL.md +291 -0
  111. package/opencode/skill/observability-design/references/monitoring-patterns.md +461 -0
  112. package/opencode/skill/pattern-detection/SKILL.md +171 -0
  113. package/opencode/skill/pattern-detection/examples/common-patterns.md +359 -0
  114. package/opencode/skill/performance-analysis/SKILL.md +266 -0
  115. package/opencode/skill/performance-analysis/references/profiling-tools.md +499 -0
  116. package/opencode/skill/requirements-analysis/SKILL.md +139 -0
  117. package/opencode/skill/requirements-analysis/examples/good-prd.md +66 -0
  118. package/opencode/skill/requirements-analysis/template.md +177 -0
  119. package/opencode/skill/requirements-analysis/validation.md +69 -0
  120. package/opencode/skill/requirements-elicitation/SKILL.md +518 -0
  121. package/opencode/skill/requirements-elicitation/examples/interview-questions.md +226 -0
  122. package/opencode/skill/requirements-elicitation/examples/user-stories.md +414 -0
  123. package/opencode/skill/safe-refactoring/SKILL.md +312 -0
  124. package/opencode/skill/safe-refactoring/reference/code-smells.md +347 -0
  125. package/opencode/skill/security-assessment/SKILL.md +421 -0
  126. package/opencode/skill/security-assessment/checklists/security-review-checklist.md +285 -0
  127. package/opencode/skill/specification-management/SKILL.md +143 -0
  128. package/opencode/skill/specification-management/readme-template.md +32 -0
  129. package/opencode/skill/specification-management/reference.md +115 -0
  130. package/opencode/skill/specification-management/spec.py +229 -0
  131. package/opencode/skill/specification-validation/SKILL.md +397 -0
  132. package/opencode/skill/specification-validation/reference/3cs-framework.md +306 -0
  133. package/opencode/skill/specification-validation/reference/ambiguity-detection.md +132 -0
  134. package/opencode/skill/specification-validation/reference/constitution-validation.md +301 -0
  135. package/opencode/skill/specification-validation/reference/drift-detection.md +383 -0
  136. package/opencode/skill/task-delegation/SKILL.md +607 -0
  137. package/opencode/skill/task-delegation/examples/file-coordination.md +495 -0
  138. package/opencode/skill/task-delegation/examples/parallel-research.md +337 -0
  139. package/opencode/skill/task-delegation/examples/sequential-build.md +504 -0
  140. package/opencode/skill/task-delegation/reference.md +825 -0
  141. package/opencode/skill/tech-stack-detection/SKILL.md +89 -0
  142. package/opencode/skill/tech-stack-detection/references/framework-signatures.md +598 -0
  143. package/opencode/skill/technical-writing/SKILL.md +190 -0
  144. package/opencode/skill/technical-writing/templates/adr-template.md +205 -0
  145. package/opencode/skill/technical-writing/templates/system-doc-template.md +380 -0
  146. package/opencode/skill/test-design/SKILL.md +464 -0
  147. package/opencode/skill/test-design/examples/test-pyramid.md +724 -0
  148. package/opencode/skill/testing/SKILL.md +213 -0
  149. package/opencode/skill/testing/examples/test-pyramid.md +724 -0
  150. package/opencode/skill/user-insight-synthesis/SKILL.md +576 -0
  151. package/opencode/skill/user-insight-synthesis/templates/research-plan-template.md +217 -0
  152. package/opencode/skill/user-research/SKILL.md +508 -0
  153. package/opencode/skill/user-research/examples/interview-questions.md +265 -0
  154. package/opencode/skill/user-research/examples/personas.md +267 -0
  155. package/opencode/skill/vibe-security/SKILL.md +654 -0
  156. package/package.json +45 -0
@@ -0,0 +1,419 @@
+ ---
+ name: feature-prioritization
+ description: "RICE, MoSCoW, Kano, and value-effort prioritization frameworks with scoring methodologies and decision documentation."
+ license: MIT
+ compatibility: opencode
+ metadata:
+   category: analysis
+   version: "1.0"
+ ---
+
+ # Feature Prioritization
+
+ Roleplay as a prioritization specialist who applies systematic frameworks to reach objective decisions that balance value, effort, and strategic alignment.
+
+ FeaturePrioritization {
+   Activation {
+     Prioritizing feature backlogs
+     Evaluating competing initiatives
+     Making build vs defer decisions
+     Creating product roadmaps
+     Allocating limited resources
+     Justifying prioritization decisions to stakeholders
+   }
+
+   RICEFramework {
+     Formula => RICE Score = (Reach x Impact x Confidence) / Effort
+
+     Components {
+       | Factor | Description | Scale |
+       |--------|-------------|-------|
+       | Reach | How many users affected per quarter | Actual number (100, 1000, 10000) |
+       | Impact | Effect on each user | 0.25 (Minimal) to 3 (Massive) |
+       | Confidence | How sure are we | 50% (Low) to 100% (High) |
+       | Effort | Person-months required | Actual estimate (0.5, 1, 3, 6) |
+     }
+
+     ImpactScale {
+       | Score | Label | Description |
+       |-------|-------|-------------|
+       | 3 | Massive | Life-changing for users, core workflow transformation |
+       | 2 | High | Major improvement, significant time savings |
+       | 1 | Medium | Noticeable improvement, minor friction reduction |
+       | 0.5 | Low | Slight improvement, nice-to-have |
+       | 0.25 | Minimal | Barely noticeable difference |
+     }
+
+     ConfidenceScale {
+       | Score | Label | Basis |
+       |-------|-------|-------|
+       | 100% | High | User research + validated data + successful tests |
+       | 80% | Medium | Some data + team experience + analogous examples |
+       | 50% | Low | Intuition only, no supporting data |
+     }
+
+     Example {
+       ```
+       Feature: One-click reorder
+
+       Reach: 5,000 (customers who reorder monthly)
+       Impact: 2 (High - saves significant time)
+       Confidence: 80% (Based on support ticket analysis)
+       Effort: 1 person-month
+
+       RICE = (5000 x 2 x 0.8) / 1 = 8000
+
+       Feature: Dark mode
+
+       Reach: 20,000 (all active users)
+       Impact: 0.5 (Low - preference, not productivity)
+       Confidence: 50% (No data, user requests only)
+       Effort: 2 person-months
+
+       RICE = (20000 x 0.5 x 0.5) / 2 = 2500
+
+       Decision: One-click reorder scores higher, prioritize first
+       ```
+     }
+
+     Template {
+       | Feature | Reach | Impact | Confidence | Effort | Score | Rank |
+       |---------|-------|--------|------------|--------|-------|------|
+       | Feature A | 5000 | 2 | 80% | 1 | 8000 | 1 |
+       | Feature B | 20000 | 0.5 | 50% | 2 | 2500 | 2 |
+     }
+   }
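The RICE arithmetic above is easy to script, which keeps the confidence-to-decimal conversion from being forgotten. A minimal sketch (the function and argument names are illustrative, not part of this package):

```python
def rice_score(reach, impact, confidence, effort):
    # RICE Score = (Reach x Impact x Confidence) / Effort,
    # with confidence expressed as a decimal (80% -> 0.8).
    return (reach * impact * confidence) / effort

# The two features from the example above:
reorder = rice_score(reach=5000, impact=2, confidence=0.8, effort=1)
dark_mode = rice_score(reach=20000, impact=0.5, confidence=0.5, effort=2)
```

Keeping the calculation in one place avoids the most common spreadsheet mistake: mixing percentage and decimal confidence values across rows.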
+
+   ValueEffortMatrix {
+     Diagram {
+       ```
+                        High Value
+                            |
+             +--------------+--------------+
+             |              |              |
+             |  QUICK WINS  |  STRATEGIC   |
+             |  Do First    |  Plan & Do   |
+             |              |              |
+       Low --+--------------+--------------+-- High
+       Effort|              |              |   Effort
+             |  FILL-INS    |  TIME SINKS  |
+             |  If Spare    |  Avoid       |
+             |  Capacity    |              |
+             |              |              |
+             +--------------+--------------+
+                            |
+                        Low Value
+       ```
+     }
+
+     QuadrantActions {
+       | Quadrant | Characteristics | Action |
+       |----------|-----------------|--------|
+       | Quick Wins | High value, low effort | Do immediately |
+       | Strategic | High value, high effort | Plan carefully, staff appropriately |
+       | Fill-Ins | Low value, low effort | Do when nothing else is ready |
+       | Time Sinks | Low value, high effort | Don't do (or simplify drastically) |
+     }
+
+     EstimationGuidance {
+       ValueAssessment {
+         Revenue impact
+         Cost reduction
+         User satisfaction improvement
+         Strategic alignment
+         Risk reduction
+       }
+
+       EffortAssessment {
+         Development time
+         Design complexity
+         Testing requirements
+         Deployment complexity
+         Ongoing maintenance
+       }
+     }
+   }
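Quadrant assignment can be automated once value and effort are scored on a shared scale. A sketch assuming 1-5 scores with a cutoff at 3; the cutoff and function name are my assumptions, not part of this package:

```python
def quadrant(value, effort, cutoff=3):
    # Features scored 1-5 on value and effort; >= cutoff counts as "high".
    if value >= cutoff and effort < cutoff:
        return "Quick Win"    # do immediately
    if value >= cutoff:
        return "Strategic"    # plan carefully, staff appropriately
    if effort < cutoff:
        return "Fill-In"      # spare capacity only
    return "Time Sink"        # avoid, or simplify drastically

labels = [quadrant(5, 1), quadrant(4, 5), quadrant(2, 1), quadrant(2, 5)]
```

The cutoff is the whole model here; calibrate it against one feature the team already agrees on before trusting the output.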
+
+   KanoModel {
+     Diagram {
+       ```
+       Satisfaction
+            ^
+            |            / Delighters
+            |          /   (Unexpected features)
+            |        /
+       -----+------o---------------------------> Feature
+            |      |\                            Implementation
+            |      | \  Performance
+            |      |    (More is better)
+            |      |
+            |      +-- Must-Haves
+            |          (Expected, dissatisfaction if missing)
+            v
+       ```
+     }
+
+     CategoryDefinitions {
+       | Category | Present | Absent | Example |
+       |----------|---------|--------|---------|
+       | Must-Have | Neutral | Very dissatisfied | Login functionality |
+       | Performance | More = better | Less = worse | Page load speed |
+       | Delighter | Very satisfied | Neutral | Personalized recommendations |
+       | Indifferent | No effect | No effect | Backend tech choice |
+       | Reverse | Dissatisfied | Satisfied | Forced tutorials |
+     }
+
+     SurveyQuestions {
+       For each feature, ask two questions:
+       ```
+       Functional: "If [feature] were present, how would you feel?"
+       Dysfunctional: "If [feature] were absent, how would you feel?"
+
+       Answer Options:
+       1. I like it
+       2. I expect it
+       3. I'm neutral
+       4. I can tolerate it
+       5. I dislike it
+       ```
+     }
+
+     InterpretationMatrix {
+       | Functional \ Dysfunctional | Like | Expect | Neutral | Tolerate | Dislike |
+       |----------------------------|------|--------|---------|----------|---------|
+       | Like | Q | A | A | A | O |
+       | Expect | R | I | I | I | M |
+       | Neutral | R | I | I | I | M |
+       | Tolerate | R | I | I | I | M |
+       | Dislike | R | R | R | R | Q |
+
+       Key => M=Must-Have, O=One-dimensional, A=Attractive, I=Indifferent, R=Reverse, Q=Questionable
+     }
+   }
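The interpretation matrix above is a straight lookup, so it can be encoded as a table. A sketch (the dictionary layout and function name are mine; the letters match the Key in the matrix):

```python
# Rows: functional answer; columns: dysfunctional answer.
KANO = {
    "like":     {"like": "Q", "expect": "A", "neutral": "A", "tolerate": "A", "dislike": "O"},
    "expect":   {"like": "R", "expect": "I", "neutral": "I", "tolerate": "I", "dislike": "M"},
    "neutral":  {"like": "R", "expect": "I", "neutral": "I", "tolerate": "I", "dislike": "M"},
    "tolerate": {"like": "R", "expect": "I", "neutral": "I", "tolerate": "I", "dislike": "M"},
    "dislike":  {"like": "R", "expect": "R", "neutral": "R", "tolerate": "R", "dislike": "Q"},
}

def kano_category(functional, dysfunctional):
    # Map one respondent's answer pair to a Kano category letter.
    return KANO[functional][dysfunctional]
```

In practice you classify each respondent's answer pair this way, then take the most frequent category per feature.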
+
+   MoSCoWMethod {
+     Categories {
+       | Category | Definition | Negotiability |
+       |----------|------------|---------------|
+       | Must | Critical for success, release blocked without it | Non-negotiable |
+       | Should | Important but not critical | Can defer to next release |
+       | Could | Nice to have, minor impact | First to cut if needed |
+       | Won't | Explicitly excluded from scope | Not this release |
+     }
+
+     BudgetAllocation {
+       ```
+       Budget Allocation (Recommended):
+       - Must: 60% of capacity
+       - Should: 20% of capacity
+       - Could: 20% of capacity (buffer)
+       - Won't: 0% (explicitly excluded)
+
+       Why the buffer matters:
+       - Must items often take longer than estimated
+       - Should items may become Must if requirements change
+       - Could items fill capacity at sprint end
+       ```
+     }
+
+     Example {
+       ```
+       Feature: User Registration
+
+       MUST:
+       - Email/password signup
+       - Email verification
+       - Password requirements enforcement
+
+       SHOULD:
+       - Social login (Google)
+       - Remember me functionality
+       - Password strength indicator
+
+       COULD:
+       - Social login (Facebook, Apple)
+       - Profile picture upload
+       - Username suggestions
+
+       WON'T (this release):
+       - Two-factor authentication
+       - SSO integration
+       - Biometric login
+       ```
+     }
+   }
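The 60/20/20 capacity split above can be checked mechanically against a release plan. A hedged sketch (the item format and function name are my own, not part of this package):

```python
def moscow_budget_warnings(items, capacity, must_cap=0.60, should_cap=0.20):
    # items: (category, effort) pairs; capacity in the same effort unit.
    totals = {}
    for category, effort in items:
        totals[category] = totals.get(category, 0) + effort
    warnings = []
    if totals.get("Must", 0) > must_cap * capacity:
        warnings.append("Must items exceed 60% of capacity")
    if totals.get("Should", 0) > should_cap * capacity:
        warnings.append("Should items exceed 20% of capacity")
    return warnings

plan = [("Must", 5), ("Must", 2), ("Should", 1), ("Could", 1)]
result = moscow_budget_warnings(plan, capacity=10)
```

A plan that trips the Must warning has no buffer left for the overruns the section above describes.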
+
+   CostOfDelay {
+     CD3Formula {
+       ```
+       CD3 = Cost of Delay / Duration
+
+       Cost of Delay: Weekly value lost by not having the feature
+       Duration: Weeks to implement
+       ```
+     }
+
+     DelayCostTypes {
+       | Type | Description | Calculation |
+       |------|-------------|-------------|
+       | Revenue | Sales not captured | Lost deals x average value |
+       | Cost | Ongoing expenses | Weekly operational cost |
+       | Risk | Penalty or loss potential | Probability x impact |
+       | Opportunity | Market window | Revenue x time sensitivity |
+     }
+
+     UrgencyProfiles {
+       ```
+       Value
+         |
+       Standard:  |----------------
+                  |
+                  +-------------------> Time
+
+       Urgent:    |\
+                  | \
+                  |  \--------
+                  |
+                  +-------------------> Time
+
+       Deadline:  |--------+
+                  |        |
+                  |        +-- (drops to zero)
+                  +-------------------> Time
+       ```
+     }
+
+     Example {
+       ```
+       Feature A: New payment method
+       - Cost of Delay: $10,000/week (lost sales to competitor)
+       - Duration: 4 weeks
+       - CD3 = 10000 / 4 = 2500
+
+       Feature B: Admin dashboard redesign
+       - Cost of Delay: $2,000/week (support inefficiency)
+       - Duration: 2 weeks
+       - CD3 = 2000 / 2 = 1000
+
+       Feature C: Compliance update (deadline in 6 weeks)
+       - Cost of Delay: $50,000/week after deadline (fines)
+       - Duration: 4 weeks
+       - CD3 = 50000 / 4 = 12500 (if started now, 0 if after deadline)
+
+       Priority: C (deadline and highest CD3), then A (next-highest CD3), then B
+       ```
+     }
+   }
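Ranking by CD3 is a one-line sort once each feature's weekly cost of delay and duration are known. A sketch reusing the numbers from the example above (the data layout is mine):

```python
def cd3(cost_of_delay_per_week, duration_weeks):
    # CD3: higher value means schedule sooner.
    return cost_of_delay_per_week / duration_weeks

features = {
    "A: New payment method": (10_000, 4),
    "B: Dashboard redesign": (2_000, 2),
    "C: Compliance update":  (50_000, 4),
}
ranked = sorted(features, key=lambda name: cd3(*features[name]), reverse=True)
```

Here the pure CD3 sort happens to agree with the example's ordering; a hard deadline can still override the sort, as the example notes.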
+
+   WeightedScoring {
+     BuildingModel {
+       ```
+       Step 1: Define Criteria
+       - Strategic alignment
+       - Revenue potential
+       - User demand
+       - Technical feasibility
+       - Competitive advantage
+
+       Step 2: Assign Weights (total = 100%)
+       | Criterion | Weight |
+       |-----------|--------|
+       | Strategic | 30% |
+       | Revenue | 25% |
+       | User demand | 20% |
+       | Feasibility | 15% |
+       | Competitive | 10% |
+
+       Step 3: Score Each Feature (1-5 scale)
+       | Feature | Strategic | Revenue | Demand | Feasible | Competitive | Total |
+       |---------|-----------|---------|--------|----------|-------------|-------|
+       | A | 5 | 4 | 3 | 4 | 2 | 3.90 |
+       | B | 3 | 5 | 5 | 3 | 3 | 3.90 |
+       | C | 4 | 3 | 4 | 5 | 4 | 3.90 |
+       ```
+     }
+
+     Calculation {
+       ```
+       Score = sum(criterion_score x criterion_weight)
+
+       Feature A:
+       = (5 x 0.30) + (4 x 0.25) + (3 x 0.20) + (4 x 0.15) + (2 x 0.10)
+       = 1.5 + 1.0 + 0.6 + 0.6 + 0.2
+       = 3.9
+
+       Note: B and C also total 3.9 with these weights. A tie means the
+       weights are not discriminating; revisit them or add a tie-breaking
+       criterion.
+       ```
+     }
+   }
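The weighted sum maps directly onto a dictionary of weights. A sketch (the key and function names are mine; the weights are the Step 2 values):

```python
WEIGHTS = {"strategic": 0.30, "revenue": 0.25, "demand": 0.20,
           "feasibility": 0.15, "competitive": 0.10}

def weighted_score(scores, weights=WEIGHTS):
    # Score = sum(criterion_score x criterion_weight)
    return sum(scores[name] * weight for name, weight in weights.items())

feature_a = {"strategic": 5, "revenue": 4, "demand": 3,
             "feasibility": 4, "competitive": 2}
```

Because the weights sum to 1.0, the result stays on the same 1-5 scale as the raw criterion scores, which makes features directly comparable.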
+
+   DecisionDocumentation {
+     PriorityDecisionRecord {
+       ```markdown
+       # Priority Decision: [Feature/Initiative]
+
+       ## Date: [YYYY-MM-DD]
+       ## Decision: [Prioritize / Defer / Reject]
+
+       ## Context
+       [What prompted this decision?]
+
+       ## Evaluation
+
+       ### Framework Used: [RICE / Kano / MoSCoW / Weighted]
+
+       ### Scores
+       [Show calculations or categorization]
+
+       ### Trade-offs Considered
+       - Option A: [description] - [pros/cons]
+       - Option B: [description] - [pros/cons]
+
+       ## Decision Rationale
+       [Why this priority over alternatives?]
+
+       ## Stakeholders
+       - Agreed: [names]
+       - Disagreed: [names, reasons documented]
+
+       ## Review Date
+       [When to revisit if deferred]
+       ```
+     }
+   }
+
+   FrameworkSelectionGuide {
+     | Situation | Recommended Framework |
+     |-----------|-----------------------|
+     | Comparing many similar features | RICE (quantitative) |
+     | Quick triage of backlog | Value vs Effort |
+     | Understanding user expectations | Kano Model |
+     | Defining release scope | MoSCoW |
+     | Time-sensitive decisions | Cost of Delay |
+     | Organization-specific criteria | Weighted Scoring |
+   }
+
+   AntiPatterns {
+     | Anti-Pattern | Problem | Solution |
+     |--------------|---------|----------|
+     | HiPPO | Highest-paid person's opinion wins | Use data-driven frameworks |
+     | Recency Bias | Last request gets priority | Systematic evaluation of all options |
+     | Squeaky Wheel | Loudest stakeholder wins | Weight by strategic value |
+     | Analysis Paralysis | Over-analyzing decisions | Time-box evaluation |
+     | Sunk Cost | Continuing failed initiatives | Evaluate future value only |
+     | Feature Factory | Shipping without measuring | Tie features to outcomes |
+   }
+
+   BestPractices {
+     1. Use multiple frameworks - Validate with different approaches
+     2. Document decisions - Enable future learning
+     3. Revisit regularly - Priorities change as context evolves
+     4. Include stakeholders - Ensure buy-in
+     5. Measure outcomes - Validate prioritization quality
+   }
+ }
+
+ ## References
+
+ - [RICE Scoring Template](examples/rice-template.md) - Spreadsheet template
+ - [Prioritization Workshop Guide](reference.md) - Facilitation guide
@@ -0,0 +1,139 @@
+ # RICE Scoring Template
+
+ A ready-to-use template for scoring and ranking features using the RICE framework. Copy the blank template, fill in your estimates, and let the scores determine priority.
+
+ ## Quick Reminder: The Formula
+
+ ```
+ RICE Score = (Reach x Impact x Confidence) / Effort
+ ```
+
+ Higher score = higher priority.
+
+ ---
+
+ ## Blank Template
+
+ Copy this table for your prioritization session.
+
+ | Feature | Reach (users/qtr) | Impact (0.25-3) | Confidence (50-100%) | Effort (person-months) | RICE Score | Rank |
+ |---------|-------------------|-----------------|----------------------|------------------------|------------|------|
+ |         |                   |                 |                      |                        |            |      |
+ |         |                   |                 |                      |                        |            |      |
+ |         |                   |                 |                      |                        |            |      |
+ |         |                   |                 |                      |                        |            |      |
+ |         |                   |                 |                      |                        |            |      |
+
+ ### Score Calculation
+
+ For each row:
+
+ ```
+ RICE Score = (Reach x Impact x (Confidence / 100)) / Effort
+ ```
+
+ Note: Convert confidence percentage to decimal before calculating (80% -> 0.80).
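To turn the filled-in table into a ranking automatically, the same formula can be applied per row and sorted. A sketch (the tuple layout mirrors the table columns; the names are mine, not part of this package):

```python
def rice_rank(rows):
    # rows: (feature, reach, impact, confidence_pct, effort) per table row.
    def score(row):
        _, reach, impact, confidence_pct, effort = row
        return (reach * impact * (confidence_pct / 100)) / effort
    return sorted(rows, key=score, reverse=True)

rows = [
    ("CSV export", 6000, 2, 80, 0.5),
    ("Dark mode", 25000, 0.25, 50, 2),
]
ranked = rice_rank(rows)
```

Dividing the confidence percentage by 100 inside the function removes the manual conversion step the note above warns about.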
+
+ ---
+
+ ## Scale Reference
+
+ ### Impact Scale
+
+ | Value | Label | Description |
+ |-------|-------|-------------|
+ | 3 | Massive | Core workflow transformation, life-changing for users |
+ | 2 | High | Major improvement, significant time or cost savings |
+ | 1 | Medium | Noticeable improvement, reduces meaningful friction |
+ | 0.5 | Low | Slight improvement, nice-to-have quality of life |
+ | 0.25 | Minimal | Barely noticeable difference |
+
+ ### Confidence Scale
+
+ | Value | Label | When to Use |
+ |-------|-------|-------------|
+ | 100% | High | User research + validated data + prior successful tests |
+ | 80% | Medium | Some data + team experience + analogous examples |
+ | 50% | Low | Intuition or anecdote only, no supporting data |
+
+ **Rule of thumb**: If you're debating between two confidence levels, use the lower one. Overconfidence inflates scores.
+
+ ---
+
+ ## Filled-In Example: SaaS Analytics Dashboard
+
+ This example scores five competing features for a B2B analytics product.
+
+ ### Context
+
+ - Team capacity: 3 person-months per quarter
+ - User base: 25,000 monthly active users
+ - 8,000 users engage with the reporting section
+
+ ### Scored Features
+
+ | Feature | Reach | Impact | Confidence | Effort | RICE Score | Rank |
+ |---------|-------|--------|------------|--------|------------|------|
+ | CSV export | 6,000 | 2 | 80% | 0.5 | **19,200** | 1 |
+ | Scheduled email reports | 3,500 | 2 | 80% | 1 | **5,600** | 2 |
+ | Custom date range picker | 8,000 | 1 | 100% | 0.5 | **16,000** | -- |
+ | Dashboard sharing (public link) | 2,000 | 2 | 50% | 1.5 | **1,333** | 4 |
+ | Dark mode | 25,000 | 0.25 | 50% | 2 | **1,563** | 3 |
+
+ ### Score Calculations
+
+ ```
+ CSV export:
+ (6,000 x 2 x 0.80) / 0.5 = 9,600 / 0.5 = 19,200
+
+ Scheduled email reports:
+ (3,500 x 2 x 0.80) / 1 = 5,600
+
+ Custom date range picker:
+ (8,000 x 1 x 1.00) / 0.5 = 16,000
+
+ Dashboard sharing:
+ (2,000 x 2 x 0.50) / 1.5 = 2,000 / 1.5 = 1,333 (rounded)
+
+ Dark mode:
+ (25,000 x 0.25 x 0.50) / 2 = 3,125 / 2 = 1,562.5, rounded to 1,563
+ ```
+
+ ### Adjusted Priority
+
+ Scores alone tell most of the story, but one feature needs a note:
+
+ **Custom date range picker (16,000)** scored second-highest but was pre-committed to a partner. It does not compete for the open roadmap slots.
+
+ Final open-roadmap ranking with 3 person-months of capacity:
+
+ 1. **CSV export** (score: 19,200) -- 0.5 months. High confidence data from 62 support tickets.
+ 2. **Scheduled email reports** (score: 5,600) -- 1 month. Validated by customer interviews with 3 enterprise accounts.
+ 3. **Dark mode** (score: 1,563) -- defer to next quarter. High reach, but very low impact and confidence.
+ 4. **Dashboard sharing** (score: 1,333) -- defer. Low confidence, significant security design work needed first.
+
+ Total committed: 1.5 months, leaving a 1.5-month buffer for estimate overruns and scope growth.
+
+ ---
+
+ ## Common Mistakes
+
+ | Mistake | Problem | Fix |
+ |---------|---------|-----|
+ | Using 100% confidence by default | Inflates every score equally, ranking becomes meaningless | Only use 100% when you have validated data |
+ | Estimating Reach as total user base | Overstates impact -- most features affect a subset of users | Count only users who encounter the relevant workflow |
+ | Ignoring Effort entirely | Low-effort features win by default regardless of value | Always estimate Effort; a 0.1 score skews results badly |
+ | Scoring in isolation | Individual scorers have different mental scales | Score as a group, or calibrate with one known-reference feature |
+ | Never revisiting scores | Context changes -- last quarter's data is stale | Re-score when key inputs (user count, team size) shift significantly |
+
+ ---
+
+ ## Tips for Estimation Sessions
+
+ **Anchor on a reference feature.** Before scoring new items, pick one feature the team already shipped and agree on its scores. Use it as a calibration baseline for Reach and Impact.
+
+ **Score independently, then converge.** Have each team member fill in scores before comparing. This surfaces disagreements that group discussion would suppress.
+
+ **Time-box the session.** Spend no more than 5 minutes per feature. If a feature requires more debate, mark it as low confidence and move on.
+
+ **Document your assumptions.** Record the data source behind each Reach estimate and the reasoning behind each Confidence rating. Scores without sources are guesses with extra steps.