opencode-metis 0.1.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +140 -0
- package/dist/cli.cjs +63 -0
- package/dist/mcp-server.cjs +51 -0
- package/dist/plugin.cjs +4 -0
- package/dist/worker.cjs +224 -0
- package/opencode/agent/the-analyst/feature-prioritization.md +66 -0
- package/opencode/agent/the-analyst/market-research.md +77 -0
- package/opencode/agent/the-analyst/project-coordination.md +81 -0
- package/opencode/agent/the-analyst/requirements-analysis.md +77 -0
- package/opencode/agent/the-architect/compatibility-review.md +138 -0
- package/opencode/agent/the-architect/complexity-review.md +137 -0
- package/opencode/agent/the-architect/quality-review.md +67 -0
- package/opencode/agent/the-architect/security-review.md +127 -0
- package/opencode/agent/the-architect/system-architecture.md +119 -0
- package/opencode/agent/the-architect/system-documentation.md +83 -0
- package/opencode/agent/the-architect/technology-research.md +85 -0
- package/opencode/agent/the-chief.md +79 -0
- package/opencode/agent/the-designer/accessibility-implementation.md +101 -0
- package/opencode/agent/the-designer/design-foundation.md +74 -0
- package/opencode/agent/the-designer/interaction-architecture.md +75 -0
- package/opencode/agent/the-designer/user-research.md +70 -0
- package/opencode/agent/the-meta-agent.md +155 -0
- package/opencode/agent/the-platform-engineer/ci-cd-pipelines.md +109 -0
- package/opencode/agent/the-platform-engineer/containerization.md +106 -0
- package/opencode/agent/the-platform-engineer/data-architecture.md +81 -0
- package/opencode/agent/the-platform-engineer/dependency-review.md +144 -0
- package/opencode/agent/the-platform-engineer/deployment-automation.md +81 -0
- package/opencode/agent/the-platform-engineer/infrastructure-as-code.md +107 -0
- package/opencode/agent/the-platform-engineer/performance-tuning.md +82 -0
- package/opencode/agent/the-platform-engineer/pipeline-engineering.md +81 -0
- package/opencode/agent/the-platform-engineer/production-monitoring.md +105 -0
- package/opencode/agent/the-qa-engineer/exploratory-testing.md +66 -0
- package/opencode/agent/the-qa-engineer/performance-testing.md +81 -0
- package/opencode/agent/the-qa-engineer/quality-assurance.md +77 -0
- package/opencode/agent/the-qa-engineer/test-execution.md +66 -0
- package/opencode/agent/the-software-engineer/api-development.md +78 -0
- package/opencode/agent/the-software-engineer/component-development.md +79 -0
- package/opencode/agent/the-software-engineer/concurrency-review.md +141 -0
- package/opencode/agent/the-software-engineer/domain-modeling.md +66 -0
- package/opencode/agent/the-software-engineer/performance-optimization.md +113 -0
- package/opencode/command/analyze.md +149 -0
- package/opencode/command/constitution.md +178 -0
- package/opencode/command/debug.md +194 -0
- package/opencode/command/document.md +178 -0
- package/opencode/command/implement.md +225 -0
- package/opencode/command/refactor.md +207 -0
- package/opencode/command/review.md +229 -0
- package/opencode/command/simplify.md +267 -0
- package/opencode/command/specify.md +191 -0
- package/opencode/command/validate.md +224 -0
- package/opencode/skill/accessibility-design/SKILL.md +566 -0
- package/opencode/skill/accessibility-design/checklists/wcag-checklist.md +435 -0
- package/opencode/skill/agent-coordination/SKILL.md +224 -0
- package/opencode/skill/api-contract-design/SKILL.md +550 -0
- package/opencode/skill/api-contract-design/templates/graphql-schema-template.md +818 -0
- package/opencode/skill/api-contract-design/templates/rest-api-template.md +417 -0
- package/opencode/skill/architecture-design/SKILL.md +160 -0
- package/opencode/skill/architecture-design/examples/architecture-examples.md +170 -0
- package/opencode/skill/architecture-design/template.md +749 -0
- package/opencode/skill/architecture-design/validation.md +99 -0
- package/opencode/skill/architecture-selection/SKILL.md +522 -0
- package/opencode/skill/architecture-selection/examples/adrs/001-example-adr.md +71 -0
- package/opencode/skill/architecture-selection/examples/architecture-patterns.md +239 -0
- package/opencode/skill/bug-diagnosis/SKILL.md +235 -0
- package/opencode/skill/code-quality-review/SKILL.md +337 -0
- package/opencode/skill/code-quality-review/examples/anti-patterns.md +629 -0
- package/opencode/skill/code-quality-review/reference.md +322 -0
- package/opencode/skill/code-review/SKILL.md +363 -0
- package/opencode/skill/code-review/reference.md +450 -0
- package/opencode/skill/codebase-analysis/SKILL.md +139 -0
- package/opencode/skill/codebase-navigation/SKILL.md +227 -0
- package/opencode/skill/codebase-navigation/examples/exploration-patterns.md +263 -0
- package/opencode/skill/coding-conventions/SKILL.md +178 -0
- package/opencode/skill/coding-conventions/checklists/accessibility-checklist.md +176 -0
- package/opencode/skill/coding-conventions/checklists/performance-checklist.md +154 -0
- package/opencode/skill/coding-conventions/checklists/security-checklist.md +127 -0
- package/opencode/skill/constitution-validation/SKILL.md +315 -0
- package/opencode/skill/constitution-validation/examples/CONSTITUTION.md +202 -0
- package/opencode/skill/constitution-validation/reference/rule-patterns.md +328 -0
- package/opencode/skill/constitution-validation/template.md +115 -0
- package/opencode/skill/context-preservation/SKILL.md +445 -0
- package/opencode/skill/data-modeling/SKILL.md +385 -0
- package/opencode/skill/data-modeling/templates/schema-design-template.md +268 -0
- package/opencode/skill/deployment-pipeline-design/SKILL.md +579 -0
- package/opencode/skill/deployment-pipeline-design/templates/pipeline-template.md +633 -0
- package/opencode/skill/documentation-extraction/SKILL.md +259 -0
- package/opencode/skill/documentation-sync/SKILL.md +431 -0
- package/opencode/skill/domain-driven-design/SKILL.md +509 -0
- package/opencode/skill/domain-driven-design/examples/ddd-patterns.md +688 -0
- package/opencode/skill/domain-driven-design/reference.md +465 -0
- package/opencode/skill/drift-detection/SKILL.md +383 -0
- package/opencode/skill/drift-detection/reference.md +340 -0
- package/opencode/skill/error-recovery/SKILL.md +162 -0
- package/opencode/skill/error-recovery/examples/error-patterns.md +484 -0
- package/opencode/skill/feature-prioritization/SKILL.md +419 -0
- package/opencode/skill/feature-prioritization/examples/rice-template.md +139 -0
- package/opencode/skill/feature-prioritization/reference.md +256 -0
- package/opencode/skill/git-workflow/SKILL.md +453 -0
- package/opencode/skill/implementation-planning/SKILL.md +215 -0
- package/opencode/skill/implementation-planning/examples/phase-examples.md +217 -0
- package/opencode/skill/implementation-planning/template.md +220 -0
- package/opencode/skill/implementation-planning/validation.md +88 -0
- package/opencode/skill/implementation-verification/SKILL.md +272 -0
- package/opencode/skill/knowledge-capture/SKILL.md +265 -0
- package/opencode/skill/knowledge-capture/reference/knowledge-capture.md +402 -0
- package/opencode/skill/knowledge-capture/reference.md +444 -0
- package/opencode/skill/knowledge-capture/templates/domain-template.md +325 -0
- package/opencode/skill/knowledge-capture/templates/interface-template.md +255 -0
- package/opencode/skill/knowledge-capture/templates/pattern-template.md +144 -0
- package/opencode/skill/observability-design/SKILL.md +291 -0
- package/opencode/skill/observability-design/references/monitoring-patterns.md +461 -0
- package/opencode/skill/pattern-detection/SKILL.md +171 -0
- package/opencode/skill/pattern-detection/examples/common-patterns.md +359 -0
- package/opencode/skill/performance-analysis/SKILL.md +266 -0
- package/opencode/skill/performance-analysis/references/profiling-tools.md +499 -0
- package/opencode/skill/requirements-analysis/SKILL.md +139 -0
- package/opencode/skill/requirements-analysis/examples/good-prd.md +66 -0
- package/opencode/skill/requirements-analysis/template.md +177 -0
- package/opencode/skill/requirements-analysis/validation.md +69 -0
- package/opencode/skill/requirements-elicitation/SKILL.md +518 -0
- package/opencode/skill/requirements-elicitation/examples/interview-questions.md +226 -0
- package/opencode/skill/requirements-elicitation/examples/user-stories.md +414 -0
- package/opencode/skill/safe-refactoring/SKILL.md +312 -0
- package/opencode/skill/safe-refactoring/reference/code-smells.md +347 -0
- package/opencode/skill/security-assessment/SKILL.md +421 -0
- package/opencode/skill/security-assessment/checklists/security-review-checklist.md +285 -0
- package/opencode/skill/specification-management/SKILL.md +143 -0
- package/opencode/skill/specification-management/readme-template.md +32 -0
- package/opencode/skill/specification-management/reference.md +115 -0
- package/opencode/skill/specification-management/spec.py +229 -0
- package/opencode/skill/specification-validation/SKILL.md +397 -0
- package/opencode/skill/specification-validation/reference/3cs-framework.md +306 -0
- package/opencode/skill/specification-validation/reference/ambiguity-detection.md +132 -0
- package/opencode/skill/specification-validation/reference/constitution-validation.md +301 -0
- package/opencode/skill/specification-validation/reference/drift-detection.md +383 -0
- package/opencode/skill/task-delegation/SKILL.md +607 -0
- package/opencode/skill/task-delegation/examples/file-coordination.md +495 -0
- package/opencode/skill/task-delegation/examples/parallel-research.md +337 -0
- package/opencode/skill/task-delegation/examples/sequential-build.md +504 -0
- package/opencode/skill/task-delegation/reference.md +825 -0
- package/opencode/skill/tech-stack-detection/SKILL.md +89 -0
- package/opencode/skill/tech-stack-detection/references/framework-signatures.md +598 -0
- package/opencode/skill/technical-writing/SKILL.md +190 -0
- package/opencode/skill/technical-writing/templates/adr-template.md +205 -0
- package/opencode/skill/technical-writing/templates/system-doc-template.md +380 -0
- package/opencode/skill/test-design/SKILL.md +464 -0
- package/opencode/skill/test-design/examples/test-pyramid.md +724 -0
- package/opencode/skill/testing/SKILL.md +213 -0
- package/opencode/skill/testing/examples/test-pyramid.md +724 -0
- package/opencode/skill/user-insight-synthesis/SKILL.md +576 -0
- package/opencode/skill/user-insight-synthesis/templates/research-plan-template.md +217 -0
- package/opencode/skill/user-research/SKILL.md +508 -0
- package/opencode/skill/user-research/examples/interview-questions.md +265 -0
- package/opencode/skill/user-research/examples/personas.md +267 -0
- package/opencode/skill/vibe-security/SKILL.md +654 -0
- package/package.json +45 -0
@@ -0,0 +1,265 @@
+# Interview Question Bank
+
+## Context
+
+Use these questions as a starting point -- adapt to your product domain and the specific study objectives. Do not read questions verbatim. The goal is conversation, not interrogation. Questions are organized by research phase and purpose.
+
+Pair each question with follow-up probes and silence. The best insight often comes after the participant pauses and then keeps going.
+
+---
+
+## Discovery Research
+
+Discovery research explores the problem space before you have a solution. The goal is to understand users' worlds -- their goals, habits, mental models, and the workarounds they have invented to cope with existing problems.
+
+### Opening and Context
+
+These questions establish the participant's context and warm up the conversation. Ask them before anything domain-specific.
+
+- "Walk me through what your typical [day / week] looks like."
+- "What are the most important things you're trying to get done in your role?"
+- "What does success look like for you in [relevant domain]?"
+- "Tell me about the last time you had to deal with [problem area]. What happened?"
+
+**Probes:**
+- "What were you trying to accomplish at that point?"
+- "What was going through your mind when that happened?"
+- "And then what did you do?"
+
+### Understanding Current Behavior
+
+Ask about actual behavior, not hypothetical behavior. Past events are evidence. Future intentions are speculation.
+
+- "How do you currently handle [task]? Walk me through it step by step."
+- "What tools or workarounds do you use to get that done?"
+- "When was the last time you did that? Can you walk me through exactly what happened?"
+- "Who else is involved when you're doing [task]?"
+- "What do you do when [task] goes wrong?"
+- "What's the most annoying part of how you handle [task] right now?"
+
+**Probes:**
+- "You mentioned [X] -- can you tell me more about that?"
+- "How long does that usually take?"
+- "How often does that happen?"
+- "What do you do if [tool / step] isn't available?"
+
+### Uncovering Pain Points
+
+Let participants articulate frustrations in their own words. Do not suggest the pain point -- have them name it.
+
+- "What's the most frustrating part of your current process?"
+- "What takes longer than it should?"
+- "What do you wish worked differently?"
+- "What do you have to do manually that feels like it shouldn't need to be manual?"
+- "Have you ever just given up on doing [task] a certain way? What happened?"
+- "What's the thing you dread most when [task] comes up?"
+
+**Probes:**
+- "How often does that happen?"
+- "What do you do instead?"
+- "What's the cost of that frustration -- to you, or to your team?"
+- "Has that ever caused a real problem for you? Tell me about it."
+
+### Exploring Goals and Motivation
+
+Understand what users are trying to achieve at the level above the task.
+
+- "Why does [task] matter to you?"
+- "If you could solve one problem related to [domain], what would it be?"
+- "What would make you feel like [domain area] was working really well for you?"
+- "What would need to change for you to feel like your situation improved significantly?"
+- "What do your most successful colleagues do differently from you in this area?"
+
+**Probes:**
+- "Why does that matter?"
+- "What would be different if that were solved?"
+- "What's stopped you from doing that already?"
+
+### Mental Models and Decision-Making
+
+Understanding how participants reason helps you design for their actual model, not yours.
+
+- "How do you decide when to [take an action / use a tool / ask for help]?"
+- "How do you know when something has gone wrong with [task]?"
+- "When you're evaluating [options / vendors / tools], what do you look for?"
+- "How do you know if [task outcome] was successful?"
+- "Who do you trust for advice on [topic]? Why them?"
+
+**Probes:**
+- "What's the most important factor in that decision?"
+- "Has your thinking on that changed over time?"
+- "Walk me through the last time you made that decision."
+
+---
+
+## Usability Research
+
+Usability research evaluates a specific interface or flow. The goal is to observe behavior -- what participants do, where they hesitate, where they fail, and why -- not to collect opinions about the design.
+
+### Task Framing
+
+Frame tasks as scenarios, not instructions. Avoid naming UI elements in the task -- let participants find them.
+
+**Instead of:** "Click the Settings button and change your notification preferences."
+**Use:** "Imagine you want to stop receiving email notifications from this product. Show me how you would do that."
+
+**Instead of:** "Use the filter to find orders from last month."
+**Use:** "You need to find all the orders that came in during October. Take a look around and try to accomplish that."
+
+### Before the Session Starts
+
+Set expectations so participants feel safe failing and think aloud naturally.
+
+- "I want to be clear -- I'm testing the design, not you. There are no wrong answers."
+- "If something is confusing, that's incredibly useful feedback. Please don't assume you're missing something obvious."
+- "As you go through the tasks, please think out loud -- tell me what you're looking at, what you're thinking, what you're trying to do."
+
+### During Task Completion
+
+Prompt thinking aloud without steering the participant toward the solution.
+
+- "What are you thinking right now?"
+- "What do you expect to happen when you do that?"
+- "What are you looking for?"
+- "If you were doing this at home on your own, what would you do next?"
+
+**What not to say:**
+- "You're close." (Evaluative)
+- "Try looking up there." (Directive)
+- "That's the right spot, yes." (Confirming)
+
+### After Each Task
+
+Capture their experience while it is fresh, after they have completed or abandoned the task.
+
+- "How did that go from your perspective?"
+- "Was there a moment where you weren't sure what to do? Tell me about that."
+- "What did you expect to find where you looked first?"
+- "If you had to do that again, what would you do differently?"
+- "What would you call what you just did -- what word comes to mind for that action?"
+
+### After the Session
+
+Gather overall impressions and anything participants held back during tasks.
+
+- "Looking back at everything you saw today -- what stood out most?"
+- "What, if anything, surprised you?"
+- "Is there anything you wish worked differently from what you saw?"
+- "If this were a product you used every day, what would be the first thing you'd want to change?"
+- "Is there anything you were thinking during the session that you didn't say out loud?"
+
+---
+
+## Validation Research
+
+Validation research tests whether a solution -- a concept, prototype, or launched feature -- addresses the problem you intended to solve. The goal is to disconfirm your assumptions, not confirm them.
+
+### Concept Testing
+
+Use these when presenting a concept or early prototype for the first time.
+
+- "Before I show you anything, tell me how you currently handle [problem this addresses]."
+- "Looking at this, what do you think it does?"
+- "Who do you think this is for?"
+- "What problem do you think this is meant to solve?"
+- "If you saw this, what would you expect to happen when you [key action]?"
+
+**Probes:**
+- "What makes you say that?"
+- "Is that what you expected, or did something surprise you?"
+- "What would you need to see to feel confident using this?"
+
+### Relevance and Value
+
+Assess whether the solution addresses a real and important problem for this participant.
+
+- "Does this address a problem you actually have?"
+- "On a scale from not important at all to extremely important -- how much does this matter to you? Why?"
+- "Would you use something like this? In what situation?"
+- "What would you need to believe for this to be worth your time?"
+
+**Probes:**
+- "What would stop you from using it?"
+- "Who else on your team would care about this?"
+- "How does this compare to how you handle it today?"
+
+### Differentiation
+
+Understand how participants position your solution against alternatives.
+
+- "If this existed, how would it fit into your current workflow?"
+- "Is there anything you currently use that does something similar?"
+- "What does this have that [current solution] doesn't?"
+- "What does [current solution] have that this doesn't?"
+- "What would it take for you to switch to something like this?"
+
+### Post-Launch Validation
+
+For features that are already live -- understand adoption and actual impact.
+
+- "Have you used [feature]? Tell me about the last time."
+- "What made you decide to try it?"
+- "What did you expect it to do? Did it do that?"
+- "Has it changed how you [do the related task]? How?"
+- "What would you miss if it disappeared tomorrow?"
+- "Who else on your team has tried it? What did they think?"
+
+---
+
+## Generative Research Starters
+
+These open-ended conversation starters are particularly useful at the start of a research program, when you do not yet have a well-defined problem space and are listening for signals.
+
+- "Tell me about the biggest challenge you're dealing with in [domain] right now."
+- "What's the thing in your work that most often doesn't go the way you need it to?"
+- "If you had an extra two hours a week to dedicate to improving [area of work], what would you work on?"
+- "What's something that used to be a problem that you've actually solved? How?"
+- "If I were starting your job tomorrow, what would you tell me to watch out for?"
+- "What do you know now that you wish you had known when you started in [role]?"
+
+---
+
+## Evaluative Research Starters
+
+These questions are anchored to existing evidence and are particularly useful when you have data (analytics, support tickets, prior research) and need participants to help you interpret it.
+
+- "We've seen a lot of people [behavior observed in data]. Does that match your experience?"
+- "Some of the people we've talked to have told us [finding]. How does that land for you?"
+- "We see that users often drop off at [step]. Does that step feel like a natural stopping point to you?"
+- "Looking at this [flow / page / report], what feels most and least useful to you?"
+
+**Important:** When referencing prior findings, present the observation without the interpretation. Let participants react to what happened, not to your conclusion about why.
+
+---
+
+## Probing Follow-Ups (Universal)
+
+Keep these ready for any interview. They are domain-agnostic and work in any research phase.
+
+| Situation | Probe |
+|-----------|-------|
+| Participant gives a vague answer | "Can you give me an example of that?" |
+| Participant makes a claim | "When did that last happen?" |
+| Participant says something surprising | "Say more about that." |
+| Participant hesitates | [Wait. Count to five in your head. Then ask:] "What's making you pause?" |
+| Participant says "usually" | "Walk me through the last time that happened." |
+| Participant gives a short answer | "What else?" |
+| Participant mentions someone else | "What does [person] think about that?" |
+| Participant uses a word you don't recognize | "When you say [word], what do you mean by that?" |
+| Participant says "it depends" | "What does it depend on?" |
+| Participant gives a very positive answer | "What's the thing you would most want to change, even if everything else stayed the same?" |
+
+---
+
+## Questions to Avoid
+
+These questions are common but produce low-quality data. Replace them with the alternatives below.
+
+| Avoid | Why | Use Instead |
+|-------|-----|-------------|
+| "Would you use this?" | Hypothetical behavior is unreliable | "When did you last need to do something like this?" |
+| "Do you like it?" | Yes/no with no signal | "What's working for you? What isn't?" |
+| "Don't you think that's confusing?" | Leading | "What was your reaction when you saw that?" |
+| "What features do you want?" | Invites solution ideas, not problem insight | "What's the hardest part of doing this today?" |
+| "Is this easy to use?" | Leading toward a positive answer | "Walk me through how you'd use this." |
+| "What would make this better?" | Too abstract, disconnected from behavior | "Tell me about a time when something similar let you down." |
@@ -0,0 +1,267 @@
+# Persona Examples
+
+## Context
+
+This document shows what complete personas look like in practice. It includes two contrasting examples -- a proto-persona built from assumptions before research, and a research-backed persona built from interview synthesis -- followed by a blank template for new personas.
+
+The key distinction between these types is not format but evidence. A proto-persona is a documented hypothesis. A research-backed persona is a finding. Both are useful. Neither should be mistaken for the other.
+
+---
+
+## Proto-Persona (Assumption-Based)
+
+Proto-personas are created before or instead of research, usually to align a team on a shared working hypothesis or to plan a study. They are honest about their origins -- they represent what the team believes, not what users have confirmed.
+
+Label all proto-personas clearly. Treat them as assumptions to be tested.
+
+---
+
+```
+NAME: Marcus Chen
+TITLE: Founder, early-stage B2B SaaS startup
+ARCHETYPE: Resourceful Builder
+
+SOURCE: PROTO-PERSONA -- based on team assumptions,
+not validated with research
+
+QUOTE:
+"I need to move fast. I'll figure out the proper way
+to do it once we have traction."
+
+DEMOGRAPHICS:
+Age: 28-38   Experience: 2-6 years as founder
+Context: Remote, works across product / sales / ops
+Team size: 1-8 people
+
+GOALS:
+- Primary: Ship product quickly to validate the market
+- Secondary: Keep costs low while reaching first $10K MRR
+
+PAIN POINTS (ASSUMED):
+- Too many tools pulling attention in different directions
+- No dedicated ops or admin support -- does everything
+- Hard to prioritize when everything feels urgent
+
+BEHAVIORS (ASSUMED):
+- Heavy Twitter / X and Slack user
+- Prefers products with quick time-to-value, no onboarding
+- Makes purchase decisions alone, often on impulse
+- Will switch tools if something cheaper appears
+
+TECH COMFORT: High. Will use APIs, configure webhooks,
+read docs to solve problems independently.
+
+OPEN ASSUMPTIONS (MUST TEST):
+- Do founders actually buy this themselves, or do they
+  delegate it once they see traction?
+- Is speed really the primary value, or is it trust /
+  reliability?
+- Does this segment actually experience the pain we think?
+
+SCENARIO:
+Marcus discovers the product through a tweet, signs up for
+a free trial without reading the landing page in full, and
+tries to connect his existing stack in the first 10 minutes.
+If he hits a wall, he leaves. If it works, he upgrades.
+```
+
+**When to use a proto-persona:**
+- Early in a project before research is possible
+- To align stakeholders on who you are and are not designing for
+- To scope a research study by making assumptions explicit
+
+**Important constraint from SKILL.md:** Proto-personas must never be presented as research findings. They are a starting position, not a conclusion. Label them explicitly and use them to drive research, not replace it.
+
+---
+
+## Research-Backed Persona
+
+Research-backed personas are synthesized from multiple interview participants. Each attribute is grounded in observed behavior or direct quotes from at least two to three participants. The evidence column is the most important part -- it is what separates a persona from a guess.
+
+---
+
+```
+NAME: Priya Nair
+TITLE: Operations Lead, mid-size B2B software company
+ARCHETYPE: The Reluctant Coordinator
+
+SOURCE: RESEARCH-BACKED -- synthesized from 9 interviews,
+validated across 3 participant segments
+
+QUOTE:
+"My job is to make sure nothing falls through the cracks.
+The problem is I'm always finding out too late that
+something already did."
+-- P4, Operations Lead, 6 years experience
+
+DEMOGRAPHICS:
+Age: 30-45   Experience: 4-10 years in ops roles
+Context: Office or hybrid; manages cross-functional
+workflows across teams of 20-80 people
+
+GOALS:
+- Primary: Maintain visibility across all in-flight work
+  without becoming the bottleneck for every update
+- Secondary: Reduce meeting load by making status visible
+  asynchronously
+
+PAIN POINTS (EVIDENCE-BACKED):
+
+1. Status is scattered across too many places
+   Evidence: 7/9 participants described checking 3+ tools
+   (email, Slack, spreadsheets, project tools) just to
+   answer "where does this stand?"
+   Quote: "I have four tabs open just to give one update."
+   -- P7
+
+2. Escalations arrive too late
+   Evidence: 6/9 participants described situations where a
+   problem surfaced at standup or in a meeting that should
+   have been flagged a day or more earlier
+   Quote: "By the time I hear about it, the deadline is
+   already at risk." -- P2
+
+3. Reminder and follow-up work is manual and time-consuming
+   Evidence: 5/9 participants manually tracked who had
+   responded to requests via spreadsheet or flagged emails
+   Quote: "I paste the same message into Slack three times
+   a week just to get a response." -- P4
+
+BEHAVIORS (OBSERVED):
+- Builds personal tracking systems (spreadsheets, Notion
+  pages) to compensate for tool fragmentation
+- Sends Slack messages rather than creating formal tickets
+  because it's faster, even when a ticket would help more
+- Prefers to check status herself rather than interrupt
+  teammates -- respects others' focus time
+- Exports data from tools manually to create summary reports
+  for leadership on a weekly basis
+
+TECH COMFORT: Medium. Uses existing tools confidently but
+does not configure integrations or write automations. Relies
+on IT or engineering for anything technical.
+
+SCENARIO:
+It is 3:00 PM on a Thursday. Priya has a leadership sync in
+90 minutes and needs to confirm that three open deliverables
+are on track. She opens Slack to message three different
+people, checks the project tool to see if statuses were
+updated (they were not), then falls back to her spreadsheet.
+She composes a "quick update" message that takes 20 minutes
+to send because she doesn't want to seem like she's nagging.
+She goes into the sync knowing she has incomplete info.
+
+DESIGN IMPLICATIONS:
+- Visibility without friction is the core value proposition
+- Automation must reduce her work, not create new admin
+- Notifications must be smart -- she is already overwhelmed
+- Language should not feel corporate -- she uses informal
+  channels by choice
+```
+
+**What makes this persona research-backed:**
+- Every pain point cites the number of participants who expressed it
|
|
164
|
+
- At least one direct quote supports each attribute
|
|
165
|
+
- Behaviors describe observed actions, not inferred preferences
|
|
166
|
+
- Design implications follow from the evidence, not from assumptions
|
|
167
|
+
|
|
168
|
+
---
|
|
169
|
+
|
|
170
|
+
## Negative Persona Example
|
|
171
|
+
|
|
172
|
+
Negative personas define who the product is not designed for. They prevent design decisions from being pulled toward the wrong user by stakeholders who advocate for edge cases.
|
|
173
|
+
|
|
174
|
+
```
|
|
175
|
+
NAME: "The Power Admin"
|
|
176
|
+
ARCHETYPE: Deep configurator, technically proficient
|
|
177
|
+
|
|
178
|
+
WHY NOT OUR USER:
|
|
179
|
+
This person needs granular permission controls, custom
|
|
180
|
+
workflow logic, and API access. Designing for their needs
|
|
181
|
+
would add complexity that makes our product harder to use
|
|
182
|
+
for Priya -- our primary persona.
|
|
183
|
+
|
|
184
|
+
They are better served by [enterprise competitor].
|
|
185
|
+
|
|
186
|
+
REFERENCE:
|
|
187
|
+
When a stakeholder requests a feature that only a power
|
|
188
|
+
admin would use, refer back to this persona to ground the
|
|
189
|
+
conversation in tradeoffs.
|
|
190
|
+
```
|
|
191
|
+
|
|
192
|
+
---
|
|
193
|
+
|
|
194
|
+
## Blank Persona Template
|
|
195
|
+
|
|
196
|
+
Copy this template when creating personas from a new research study. Fill in only what your research supports. Leave fields blank rather than filling them with assumptions -- missing evidence is better than fabricated evidence.
|
|
197
|
+
|
|
198
|
+
```
|
|
199
|
+
NAME: [Fictional name]
|
|
200
|
+
TITLE: [Role / context]
|
|
201
|
+
ARCHETYPE: [2-3 word descriptor]
|
|
202
|
+
|
|
203
|
+
SOURCE: [ ] PROTO-PERSONA (assumptions)
|
|
204
|
+
[ ] RESEARCH-BACKED (synthesized from N interviews)
|
|
205
|
+
|
|
206
|
+
QUOTE:
|
|
207
|
+
"[Direct quote from a participant -- attribute to P#]"
|
|
208
|
+
|
|
209
|
+
DEMOGRAPHICS:
|
|
210
|
+
Age: [Range] Experience: [Level / years]
|
|
211
|
+
Context: [Work environment, team size, role scope]
|
|
212
|
+
|
|
213
|
+
GOALS:
|
|
214
|
+
- Primary: [Main objective -- what success looks like]
|
|
215
|
+
- Secondary: [Supporting objective]
|
|
216
|
+
|
|
217
|
+
PAIN POINTS:
|
|
218
|
+
|
|
219
|
+
1. [Pain point headline]
|
|
220
|
+
Evidence: [N/N participants; direct quote -- P#]
|
|
221
|
+
|
|
222
|
+
2. [Pain point headline]
|
|
223
|
+
Evidence: [N/N participants; direct quote -- P#]
|
|
224
|
+
|
|
225
|
+
3. [Pain point headline]
|
|
226
|
+
Evidence: [N/N participants; direct quote -- P#]
|
|
227
|
+
|
|
228
|
+
BEHAVIORS:
|
|
229
|
+
- [How they approach the problem domain]
|
|
230
|
+
- [Tools, workarounds, or systems they have built]
|
|
231
|
+
- [Decision-making patterns relevant to your product]
|
|
232
|
+
- [Social / collaborative behavior -- who do they involve?]
|
|
233
|
+
|
|
234
|
+
TECH COMFORT: [Low / Medium / High -- describe what they
|
|
235
|
+
can and cannot do independently]
|
|
236
|
+
|
|
237
|
+
OPEN QUESTIONS: (for proto-personas only)
|
|
238
|
+
- [Assumption that still needs validation]
|
|
239
|
+
- [Assumption that still needs validation]
|
|
240
|
+
|
|
241
|
+
SCENARIO:
|
|
242
|
+
[A concrete, present-tense narrative of this persona
|
|
243
|
+
experiencing the core problem your product addresses.
|
|
244
|
+
3-6 sentences. Specific details, not generalities.]
|
|
245
|
+
|
|
246
|
+
DESIGN IMPLICATIONS:
|
|
247
|
+
- [What this persona requires from the product]
|
|
248
|
+
- [What would break trust or cause abandonment]
|
|
249
|
+
- [Constraints on complexity, workflow, or tone]
|
|
250
|
+
```
|
|
251
|
+
|
|
252
|
+
---
|
|
253
|
+
|
|
254
|
+
## Persona Quality Checklist
|
|
255
|
+
|
|
256
|
+
Before sharing a persona with stakeholders, verify:
|
|
257
|
+
|
|
258
|
+
- [ ] The persona type (proto vs. research-backed) is clearly labeled
|
|
259
|
+
- [ ] Every pain point cites the number of participants who confirmed it
|
|
260
|
+
- [ ] At least one direct quote supports each claim
|
|
261
|
+
- [ ] The quote is attributed to a participant ID, not paraphrased
|
|
262
|
+
- [ ] Behaviors describe observed actions, not inferred preferences
|
|
263
|
+
- [ ] The scenario is specific enough to use in a design critique
|
|
264
|
+
- [ ] Design implications follow directly from evidence
|
|
265
|
+
- [ ] The persona has been reviewed against at least 3 participant transcripts
|
|
266
|
+
|
|
267
|
+
**Fail condition:** If a team member reads the persona and cannot tell which attributes are observed vs. assumed, the persona needs revision.
|