@wazir-dev/cli 1.0.0

This diff shows the contents of publicly released package versions as they appear in their public registries. It is provided for informational purposes only and reflects the changes between the listed versions.
Files changed (629)
  1. package/AGENTS.md +111 -0
  2. package/CHANGELOG.md +14 -0
  3. package/CONTRIBUTING.md +101 -0
  4. package/LICENSE +21 -0
  5. package/README.md +314 -0
  6. package/assets/composition-engine.mmd +34 -0
  7. package/assets/demo-script.sh +17 -0
  8. package/assets/logo-dark.svg +14 -0
  9. package/assets/logo.svg +14 -0
  10. package/assets/pipeline.mmd +39 -0
  11. package/assets/record-demo.sh +51 -0
  12. package/docs/README.md +51 -0
  13. package/docs/adapters/context-mode.md +60 -0
  14. package/docs/concepts/architecture.md +87 -0
  15. package/docs/concepts/artifact-model.md +60 -0
  16. package/docs/concepts/composition-engine.md +36 -0
  17. package/docs/concepts/indexing-and-recall.md +160 -0
  18. package/docs/concepts/observability.md +41 -0
  19. package/docs/concepts/roles-and-workflows.md +59 -0
  20. package/docs/concepts/terminology-policy.md +27 -0
  21. package/docs/getting-started/01-installation.md +78 -0
  22. package/docs/getting-started/02-first-run.md +102 -0
  23. package/docs/getting-started/03-adding-to-project.md +15 -0
  24. package/docs/getting-started/04-host-setup.md +15 -0
  25. package/docs/guides/ci-integration.md +15 -0
  26. package/docs/guides/creating-skills.md +15 -0
  27. package/docs/guides/expertise-module-authoring.md +15 -0
  28. package/docs/guides/hook-development.md +15 -0
  29. package/docs/guides/memory-and-learnings.md +34 -0
  30. package/docs/guides/multi-host-export.md +15 -0
  31. package/docs/guides/troubleshooting.md +101 -0
  32. package/docs/guides/writing-custom-roles.md +15 -0
  33. package/docs/plans/2026-03-15-cli-pipeline-integration-design.md +592 -0
  34. package/docs/plans/2026-03-15-cli-pipeline-integration-plan.md +598 -0
  35. package/docs/plans/2026-03-15-docs-enforcement-plan.md +238 -0
  36. package/docs/readmes/INDEX.md +99 -0
  37. package/docs/readmes/features/expertise/README.md +171 -0
  38. package/docs/readmes/features/exports/README.md +222 -0
  39. package/docs/readmes/features/hooks/README.md +103 -0
  40. package/docs/readmes/features/hooks/loop-cap-guard.md +133 -0
  41. package/docs/readmes/features/hooks/post-tool-capture.md +121 -0
  42. package/docs/readmes/features/hooks/post-tool-lint.md +130 -0
  43. package/docs/readmes/features/hooks/pre-compact-summary.md +122 -0
  44. package/docs/readmes/features/hooks/pre-tool-capture-route.md +100 -0
  45. package/docs/readmes/features/hooks/protected-path-write-guard.md +128 -0
  46. package/docs/readmes/features/hooks/session-start.md +119 -0
  47. package/docs/readmes/features/hooks/stop-handoff-harvest.md +125 -0
  48. package/docs/readmes/features/roles/README.md +157 -0
  49. package/docs/readmes/features/roles/clarifier.md +152 -0
  50. package/docs/readmes/features/roles/content-author.md +190 -0
  51. package/docs/readmes/features/roles/designer.md +193 -0
  52. package/docs/readmes/features/roles/executor.md +184 -0
  53. package/docs/readmes/features/roles/learner.md +210 -0
  54. package/docs/readmes/features/roles/planner.md +182 -0
  55. package/docs/readmes/features/roles/researcher.md +164 -0
  56. package/docs/readmes/features/roles/reviewer.md +184 -0
  57. package/docs/readmes/features/roles/specifier.md +162 -0
  58. package/docs/readmes/features/roles/verifier.md +215 -0
  59. package/docs/readmes/features/schemas/README.md +178 -0
  60. package/docs/readmes/features/skills/README.md +63 -0
  61. package/docs/readmes/features/skills/brainstorming.md +96 -0
  62. package/docs/readmes/features/skills/debugging.md +148 -0
  63. package/docs/readmes/features/skills/design.md +120 -0
  64. package/docs/readmes/features/skills/prepare-next.md +109 -0
  65. package/docs/readmes/features/skills/run-audit.md +159 -0
  66. package/docs/readmes/features/skills/scan-project.md +109 -0
  67. package/docs/readmes/features/skills/self-audit.md +176 -0
  68. package/docs/readmes/features/skills/tdd.md +137 -0
  69. package/docs/readmes/features/skills/using-skills.md +92 -0
  70. package/docs/readmes/features/skills/verification.md +120 -0
  71. package/docs/readmes/features/skills/writing-plans.md +104 -0
  72. package/docs/readmes/features/tooling/README.md +320 -0
  73. package/docs/readmes/features/workflows/README.md +186 -0
  74. package/docs/readmes/features/workflows/author.md +181 -0
  75. package/docs/readmes/features/workflows/clarify.md +154 -0
  76. package/docs/readmes/features/workflows/design-review.md +171 -0
  77. package/docs/readmes/features/workflows/design.md +169 -0
  78. package/docs/readmes/features/workflows/discover.md +162 -0
  79. package/docs/readmes/features/workflows/execute.md +173 -0
  80. package/docs/readmes/features/workflows/learn.md +167 -0
  81. package/docs/readmes/features/workflows/plan-review.md +165 -0
  82. package/docs/readmes/features/workflows/plan.md +170 -0
  83. package/docs/readmes/features/workflows/prepare-next.md +167 -0
  84. package/docs/readmes/features/workflows/review.md +169 -0
  85. package/docs/readmes/features/workflows/run-audit.md +191 -0
  86. package/docs/readmes/features/workflows/spec-challenge.md +159 -0
  87. package/docs/readmes/features/workflows/specify.md +160 -0
  88. package/docs/readmes/features/workflows/verify.md +177 -0
  89. package/docs/readmes/packages/README.md +50 -0
  90. package/docs/readmes/packages/ajv.md +117 -0
  91. package/docs/readmes/packages/context-mode.md +118 -0
  92. package/docs/readmes/packages/gray-matter.md +116 -0
  93. package/docs/readmes/packages/node-test.md +137 -0
  94. package/docs/readmes/packages/yaml.md +112 -0
  95. package/docs/reference/configuration-reference.md +159 -0
  96. package/docs/reference/expertise-index.md +52 -0
  97. package/docs/reference/git-flow.md +43 -0
  98. package/docs/reference/hooks.md +87 -0
  99. package/docs/reference/host-exports.md +50 -0
  100. package/docs/reference/launch-checklist.md +172 -0
  101. package/docs/reference/marketplace-listings.md +76 -0
  102. package/docs/reference/release-process.md +34 -0
  103. package/docs/reference/roles-reference.md +77 -0
  104. package/docs/reference/skills.md +33 -0
  105. package/docs/reference/templates.md +29 -0
  106. package/docs/reference/tooling-cli.md +94 -0
  107. package/docs/truth-claims.yaml +222 -0
  108. package/expertise/PROGRESS.md +63 -0
  109. package/expertise/README.md +18 -0
  110. package/expertise/antipatterns/PROGRESS.md +56 -0
  111. package/expertise/antipatterns/backend/api-design-antipatterns.md +1271 -0
  112. package/expertise/antipatterns/backend/auth-antipatterns.md +1195 -0
  113. package/expertise/antipatterns/backend/caching-antipatterns.md +622 -0
  114. package/expertise/antipatterns/backend/database-antipatterns.md +1038 -0
  115. package/expertise/antipatterns/backend/index.md +24 -0
  116. package/expertise/antipatterns/backend/microservices-antipatterns.md +850 -0
  117. package/expertise/antipatterns/code/architecture-antipatterns.md +919 -0
  118. package/expertise/antipatterns/code/async-antipatterns.md +622 -0
  119. package/expertise/antipatterns/code/code-smells.md +1186 -0
  120. package/expertise/antipatterns/code/dependency-antipatterns.md +1209 -0
  121. package/expertise/antipatterns/code/error-handling-antipatterns.md +1360 -0
  122. package/expertise/antipatterns/code/index.md +27 -0
  123. package/expertise/antipatterns/code/naming-and-abstraction.md +1118 -0
  124. package/expertise/antipatterns/code/state-management-antipatterns.md +1076 -0
  125. package/expertise/antipatterns/code/testing-antipatterns.md +1053 -0
  126. package/expertise/antipatterns/design/accessibility-antipatterns.md +1136 -0
  127. package/expertise/antipatterns/design/dark-patterns.md +1121 -0
  128. package/expertise/antipatterns/design/index.md +22 -0
  129. package/expertise/antipatterns/design/ui-antipatterns.md +1202 -0
  130. package/expertise/antipatterns/design/ux-antipatterns.md +680 -0
  131. package/expertise/antipatterns/frontend/css-layout-antipatterns.md +691 -0
  132. package/expertise/antipatterns/frontend/flutter-antipatterns.md +1827 -0
  133. package/expertise/antipatterns/frontend/index.md +23 -0
  134. package/expertise/antipatterns/frontend/mobile-antipatterns.md +573 -0
  135. package/expertise/antipatterns/frontend/react-antipatterns.md +1128 -0
  136. package/expertise/antipatterns/frontend/spa-antipatterns.md +1235 -0
  137. package/expertise/antipatterns/index.md +31 -0
  138. package/expertise/antipatterns/performance/index.md +20 -0
  139. package/expertise/antipatterns/performance/performance-antipatterns.md +1013 -0
  140. package/expertise/antipatterns/performance/premature-optimization.md +623 -0
  141. package/expertise/antipatterns/performance/scaling-antipatterns.md +785 -0
  142. package/expertise/antipatterns/process/ai-coding-antipatterns.md +853 -0
  143. package/expertise/antipatterns/process/code-review-antipatterns.md +656 -0
  144. package/expertise/antipatterns/process/deployment-antipatterns.md +920 -0
  145. package/expertise/antipatterns/process/index.md +23 -0
  146. package/expertise/antipatterns/process/technical-debt-antipatterns.md +647 -0
  147. package/expertise/antipatterns/security/index.md +20 -0
  148. package/expertise/antipatterns/security/secrets-antipatterns.md +849 -0
  149. package/expertise/antipatterns/security/security-theater.md +843 -0
  150. package/expertise/antipatterns/security/vulnerability-patterns.md +801 -0
  151. package/expertise/architecture/PROGRESS.md +70 -0
  152. package/expertise/architecture/data/caching-architecture.md +671 -0
  153. package/expertise/architecture/data/data-consistency.md +574 -0
  154. package/expertise/architecture/data/data-modeling.md +536 -0
  155. package/expertise/architecture/data/event-streams-and-queues.md +634 -0
  156. package/expertise/architecture/data/index.md +25 -0
  157. package/expertise/architecture/data/search-architecture.md +663 -0
  158. package/expertise/architecture/data/sql-vs-nosql.md +708 -0
  159. package/expertise/architecture/decisions/architecture-decision-records.md +640 -0
  160. package/expertise/architecture/decisions/build-vs-buy.md +616 -0
  161. package/expertise/architecture/decisions/index.md +23 -0
  162. package/expertise/architecture/decisions/monolith-to-microservices.md +790 -0
  163. package/expertise/architecture/decisions/technology-selection.md +616 -0
  164. package/expertise/architecture/distributed/cap-theorem-and-tradeoffs.md +800 -0
  165. package/expertise/architecture/distributed/circuit-breaker-bulkhead.md +741 -0
  166. package/expertise/architecture/distributed/consensus-and-coordination.md +796 -0
  167. package/expertise/architecture/distributed/distributed-systems-fundamentals.md +564 -0
  168. package/expertise/architecture/distributed/idempotency-and-retry.md +796 -0
  169. package/expertise/architecture/distributed/index.md +25 -0
  170. package/expertise/architecture/distributed/saga-pattern.md +797 -0
  171. package/expertise/architecture/foundations/architectural-thinking.md +460 -0
  172. package/expertise/architecture/foundations/coupling-and-cohesion.md +770 -0
  173. package/expertise/architecture/foundations/design-principles-solid.md +649 -0
  174. package/expertise/architecture/foundations/domain-driven-design.md +719 -0
  175. package/expertise/architecture/foundations/index.md +25 -0
  176. package/expertise/architecture/foundations/separation-of-concerns.md +472 -0
  177. package/expertise/architecture/foundations/twelve-factor-app.md +797 -0
  178. package/expertise/architecture/index.md +34 -0
  179. package/expertise/architecture/integration/api-design-graphql.md +638 -0
  180. package/expertise/architecture/integration/api-design-grpc.md +804 -0
  181. package/expertise/architecture/integration/api-design-rest.md +892 -0
  182. package/expertise/architecture/integration/index.md +25 -0
  183. package/expertise/architecture/integration/third-party-integration.md +795 -0
  184. package/expertise/architecture/integration/webhooks-and-callbacks.md +1152 -0
  185. package/expertise/architecture/integration/websockets-realtime.md +791 -0
  186. package/expertise/architecture/mobile-architecture/index.md +22 -0
  187. package/expertise/architecture/mobile-architecture/mobile-app-architecture.md +780 -0
  188. package/expertise/architecture/mobile-architecture/mobile-backend-for-frontend.md +670 -0
  189. package/expertise/architecture/mobile-architecture/offline-first.md +719 -0
  190. package/expertise/architecture/mobile-architecture/push-and-sync.md +782 -0
  191. package/expertise/architecture/patterns/cqrs-event-sourcing.md +717 -0
  192. package/expertise/architecture/patterns/event-driven.md +797 -0
  193. package/expertise/architecture/patterns/hexagonal-clean-architecture.md +870 -0
  194. package/expertise/architecture/patterns/index.md +27 -0
  195. package/expertise/architecture/patterns/layered-architecture.md +736 -0
  196. package/expertise/architecture/patterns/microservices.md +753 -0
  197. package/expertise/architecture/patterns/modular-monolith.md +692 -0
  198. package/expertise/architecture/patterns/monolith.md +626 -0
  199. package/expertise/architecture/patterns/plugin-architecture.md +735 -0
  200. package/expertise/architecture/patterns/serverless.md +780 -0
  201. package/expertise/architecture/scaling/database-scaling.md +615 -0
  202. package/expertise/architecture/scaling/feature-flags-and-rollouts.md +757 -0
  203. package/expertise/architecture/scaling/horizontal-vs-vertical.md +606 -0
  204. package/expertise/architecture/scaling/index.md +24 -0
  205. package/expertise/architecture/scaling/multi-tenancy.md +800 -0
  206. package/expertise/architecture/scaling/stateless-design.md +787 -0
  207. package/expertise/backend/embedded-firmware.md +625 -0
  208. package/expertise/backend/go.md +853 -0
  209. package/expertise/backend/index.md +24 -0
  210. package/expertise/backend/java-spring.md +448 -0
  211. package/expertise/backend/node-typescript.md +625 -0
  212. package/expertise/backend/python-fastapi.md +724 -0
  213. package/expertise/backend/rust.md +458 -0
  214. package/expertise/backend/solidity.md +711 -0
  215. package/expertise/composition-map.yaml +443 -0
  216. package/expertise/content/foundations/content-modeling.md +395 -0
  217. package/expertise/content/foundations/editorial-standards.md +449 -0
  218. package/expertise/content/foundations/index.md +24 -0
  219. package/expertise/content/foundations/microcopy.md +455 -0
  220. package/expertise/content/foundations/terminology-governance.md +509 -0
  221. package/expertise/content/index.md +34 -0
  222. package/expertise/content/patterns/accessibility-copy.md +518 -0
  223. package/expertise/content/patterns/index.md +24 -0
  224. package/expertise/content/patterns/notification-content.md +433 -0
  225. package/expertise/content/patterns/sample-content.md +486 -0
  226. package/expertise/content/patterns/state-copy.md +439 -0
  227. package/expertise/design/PROGRESS.md +58 -0
  228. package/expertise/design/disciplines/dark-mode-theming.md +577 -0
  229. package/expertise/design/disciplines/design-systems.md +595 -0
  230. package/expertise/design/disciplines/index.md +25 -0
  231. package/expertise/design/disciplines/information-architecture.md +800 -0
  232. package/expertise/design/disciplines/interaction-design.md +788 -0
  233. package/expertise/design/disciplines/responsive-design.md +552 -0
  234. package/expertise/design/disciplines/usability-testing.md +516 -0
  235. package/expertise/design/disciplines/user-research.md +792 -0
  236. package/expertise/design/foundations/accessibility-design.md +796 -0
  237. package/expertise/design/foundations/color-theory.md +797 -0
  238. package/expertise/design/foundations/iconography.md +795 -0
  239. package/expertise/design/foundations/index.md +26 -0
  240. package/expertise/design/foundations/motion-and-animation.md +653 -0
  241. package/expertise/design/foundations/rtl-design.md +585 -0
  242. package/expertise/design/foundations/spacing-and-layout.md +607 -0
  243. package/expertise/design/foundations/typography.md +800 -0
  244. package/expertise/design/foundations/visual-hierarchy.md +761 -0
  245. package/expertise/design/index.md +32 -0
  246. package/expertise/design/patterns/authentication-flows.md +474 -0
  247. package/expertise/design/patterns/content-consumption.md +789 -0
  248. package/expertise/design/patterns/data-display.md +618 -0
  249. package/expertise/design/patterns/e-commerce.md +1494 -0
  250. package/expertise/design/patterns/feedback-and-states.md +642 -0
  251. package/expertise/design/patterns/forms-and-input.md +819 -0
  252. package/expertise/design/patterns/gamification.md +801 -0
  253. package/expertise/design/patterns/index.md +31 -0
  254. package/expertise/design/patterns/microinteractions.md +449 -0
  255. package/expertise/design/patterns/navigation.md +800 -0
  256. package/expertise/design/patterns/notifications.md +705 -0
  257. package/expertise/design/patterns/onboarding.md +700 -0
  258. package/expertise/design/patterns/search-and-filter.md +601 -0
  259. package/expertise/design/patterns/settings-and-preferences.md +768 -0
  260. package/expertise/design/patterns/social-and-community.md +748 -0
  261. package/expertise/design/platforms/desktop-native.md +612 -0
  262. package/expertise/design/platforms/index.md +25 -0
  263. package/expertise/design/platforms/mobile-android.md +825 -0
  264. package/expertise/design/platforms/mobile-cross-platform.md +983 -0
  265. package/expertise/design/platforms/mobile-ios.md +699 -0
  266. package/expertise/design/platforms/tablet.md +794 -0
  267. package/expertise/design/platforms/web-dashboard.md +790 -0
  268. package/expertise/design/platforms/web-responsive.md +550 -0
  269. package/expertise/design/psychology/behavioral-nudges.md +449 -0
  270. package/expertise/design/psychology/cognitive-load.md +1191 -0
  271. package/expertise/design/psychology/error-psychology.md +778 -0
  272. package/expertise/design/psychology/index.md +22 -0
  273. package/expertise/design/psychology/persuasive-design.md +736 -0
  274. package/expertise/design/psychology/user-mental-models.md +623 -0
  275. package/expertise/design/tooling/open-pencil.md +266 -0
  276. package/expertise/frontend/angular.md +1073 -0
  277. package/expertise/frontend/desktop-electron.md +546 -0
  278. package/expertise/frontend/flutter.md +782 -0
  279. package/expertise/frontend/index.md +27 -0
  280. package/expertise/frontend/native-android.md +409 -0
  281. package/expertise/frontend/native-ios.md +490 -0
  282. package/expertise/frontend/react-native.md +1160 -0
  283. package/expertise/frontend/react.md +808 -0
  284. package/expertise/frontend/vue.md +1089 -0
  285. package/expertise/humanize/domain-rules-code.md +79 -0
  286. package/expertise/humanize/domain-rules-content.md +67 -0
  287. package/expertise/humanize/domain-rules-technical-docs.md +56 -0
  288. package/expertise/humanize/index.md +35 -0
  289. package/expertise/humanize/self-audit-checklist.md +87 -0
  290. package/expertise/humanize/sentence-patterns.md +218 -0
  291. package/expertise/humanize/vocabulary-blacklist.md +105 -0
  292. package/expertise/i18n/PROGRESS.md +65 -0
  293. package/expertise/i18n/advanced/accessibility-and-i18n.md +28 -0
  294. package/expertise/i18n/advanced/bidirectional-text-algorithm.md +38 -0
  295. package/expertise/i18n/advanced/complex-scripts.md +30 -0
  296. package/expertise/i18n/advanced/performance-and-i18n.md +27 -0
  297. package/expertise/i18n/advanced/testing-i18n.md +28 -0
  298. package/expertise/i18n/content/content-adaptation.md +23 -0
  299. package/expertise/i18n/content/locale-specific-formatting.md +23 -0
  300. package/expertise/i18n/content/machine-translation-integration.md +28 -0
  301. package/expertise/i18n/content/translation-management.md +29 -0
  302. package/expertise/i18n/foundations/date-time-calendars.md +67 -0
  303. package/expertise/i18n/foundations/i18n-architecture.md +272 -0
  304. package/expertise/i18n/foundations/locale-and-language-tags.md +79 -0
  305. package/expertise/i18n/foundations/numbers-currency-units.md +61 -0
  306. package/expertise/i18n/foundations/pluralization-and-gender.md +109 -0
  307. package/expertise/i18n/foundations/string-externalization.md +236 -0
  308. package/expertise/i18n/foundations/text-direction-bidi.md +241 -0
  309. package/expertise/i18n/foundations/unicode-and-encoding.md +86 -0
  310. package/expertise/i18n/index.md +38 -0
  311. package/expertise/i18n/platform/backend-i18n.md +31 -0
  312. package/expertise/i18n/platform/flutter-i18n.md +148 -0
  313. package/expertise/i18n/platform/native-android-i18n.md +36 -0
  314. package/expertise/i18n/platform/native-ios-i18n.md +36 -0
  315. package/expertise/i18n/platform/react-i18n.md +103 -0
  316. package/expertise/i18n/platform/web-css-i18n.md +81 -0
  317. package/expertise/i18n/rtl/arabic-specific.md +175 -0
  318. package/expertise/i18n/rtl/hebrew-specific.md +149 -0
  319. package/expertise/i18n/rtl/rtl-animations-and-transitions.md +111 -0
  320. package/expertise/i18n/rtl/rtl-forms-and-input.md +161 -0
  321. package/expertise/i18n/rtl/rtl-fundamentals.md +211 -0
  322. package/expertise/i18n/rtl/rtl-icons-and-images.md +181 -0
  323. package/expertise/i18n/rtl/rtl-layout-mirroring.md +252 -0
  324. package/expertise/i18n/rtl/rtl-navigation-and-gestures.md +107 -0
  325. package/expertise/i18n/rtl/rtl-testing-and-qa.md +147 -0
  326. package/expertise/i18n/rtl/rtl-typography.md +160 -0
  327. package/expertise/index.md +113 -0
  328. package/expertise/index.yaml +216 -0
  329. package/expertise/infrastructure/cloud-aws.md +597 -0
  330. package/expertise/infrastructure/cloud-gcp.md +599 -0
  331. package/expertise/infrastructure/cybersecurity.md +816 -0
  332. package/expertise/infrastructure/database-mongodb.md +447 -0
  333. package/expertise/infrastructure/database-postgres.md +400 -0
  334. package/expertise/infrastructure/devops-cicd.md +787 -0
  335. package/expertise/infrastructure/index.md +27 -0
  336. package/expertise/performance/PROGRESS.md +50 -0
  337. package/expertise/performance/backend/api-latency.md +1204 -0
  338. package/expertise/performance/backend/background-jobs.md +506 -0
  339. package/expertise/performance/backend/connection-pooling.md +1209 -0
  340. package/expertise/performance/backend/database-query-optimization.md +515 -0
  341. package/expertise/performance/backend/index.md +23 -0
  342. package/expertise/performance/backend/rate-limiting-and-throttling.md +971 -0
  343. package/expertise/performance/foundations/algorithmic-complexity.md +954 -0
  344. package/expertise/performance/foundations/caching-strategies.md +489 -0
  345. package/expertise/performance/foundations/concurrency-and-parallelism.md +847 -0
  346. package/expertise/performance/foundations/index.md +24 -0
  347. package/expertise/performance/foundations/measuring-and-profiling.md +440 -0
  348. package/expertise/performance/foundations/memory-management.md +964 -0
  349. package/expertise/performance/foundations/performance-budgets.md +1314 -0
  350. package/expertise/performance/index.md +31 -0
  351. package/expertise/performance/infrastructure/auto-scaling.md +1059 -0
  352. package/expertise/performance/infrastructure/cdn-and-edge.md +1081 -0
  353. package/expertise/performance/infrastructure/index.md +22 -0
  354. package/expertise/performance/infrastructure/load-balancing.md +1081 -0
  355. package/expertise/performance/infrastructure/observability.md +1079 -0
  356. package/expertise/performance/mobile/index.md +23 -0
  357. package/expertise/performance/mobile/mobile-animations.md +544 -0
  358. package/expertise/performance/mobile/mobile-memory-battery.md +416 -0
  359. package/expertise/performance/mobile/mobile-network.md +452 -0
  360. package/expertise/performance/mobile/mobile-rendering.md +599 -0
  361. package/expertise/performance/mobile/mobile-startup-time.md +505 -0
  362. package/expertise/performance/platform-specific/flutter-performance.md +647 -0
  363. package/expertise/performance/platform-specific/index.md +22 -0
  364. package/expertise/performance/platform-specific/node-performance.md +1307 -0
  365. package/expertise/performance/platform-specific/postgres-performance.md +1366 -0
  366. package/expertise/performance/platform-specific/react-performance.md +1403 -0
  367. package/expertise/performance/web/bundle-optimization.md +1239 -0
  368. package/expertise/performance/web/image-and-media.md +636 -0
  369. package/expertise/performance/web/index.md +24 -0
  370. package/expertise/performance/web/network-optimization.md +1133 -0
  371. package/expertise/performance/web/rendering-performance.md +1098 -0
  372. package/expertise/performance/web/ssr-and-hydration.md +918 -0
  373. package/expertise/performance/web/web-vitals.md +1374 -0
  374. package/expertise/quality/accessibility.md +985 -0
  375. package/expertise/quality/evidence-based-verification.md +499 -0
  376. package/expertise/quality/index.md +24 -0
  377. package/expertise/quality/ml-model-audit.md +614 -0
  378. package/expertise/quality/performance.md +600 -0
  379. package/expertise/quality/testing-api.md +891 -0
  380. package/expertise/quality/testing-mobile.md +496 -0
  381. package/expertise/quality/testing-web.md +849 -0
  382. package/expertise/security/PROGRESS.md +54 -0
  383. package/expertise/security/agentic-identity.md +540 -0
  384. package/expertise/security/compliance-frameworks.md +601 -0
  385. package/expertise/security/data/data-encryption.md +364 -0
  386. package/expertise/security/data/data-privacy-gdpr.md +692 -0
  387. package/expertise/security/data/database-security.md +1171 -0
  388. package/expertise/security/data/index.md +22 -0
  389. package/expertise/security/data/pii-handling.md +531 -0
  390. package/expertise/security/foundations/authentication.md +1041 -0
  391. package/expertise/security/foundations/authorization.md +603 -0
  392. package/expertise/security/foundations/cryptography.md +1001 -0
  393. package/expertise/security/foundations/index.md +25 -0
  394. package/expertise/security/foundations/owasp-top-10.md +1354 -0
  395. package/expertise/security/foundations/secrets-management.md +1217 -0
  396. package/expertise/security/foundations/secure-sdlc.md +700 -0
  397. package/expertise/security/foundations/supply-chain-security.md +698 -0
  398. package/expertise/security/index.md +31 -0
  399. package/expertise/security/infrastructure/cloud-security-aws.md +1296 -0
  400. package/expertise/security/infrastructure/cloud-security-gcp.md +1376 -0
  401. package/expertise/security/infrastructure/container-security.md +721 -0
  402. package/expertise/security/infrastructure/incident-response.md +1295 -0
  403. package/expertise/security/infrastructure/index.md +24 -0
  404. package/expertise/security/infrastructure/logging-and-monitoring.md +1618 -0
  405. package/expertise/security/infrastructure/network-security.md +1337 -0
  406. package/expertise/security/mobile/index.md +23 -0
  407. package/expertise/security/mobile/mobile-android-security.md +1218 -0
  408. package/expertise/security/mobile/mobile-binary-protection.md +1229 -0
  409. package/expertise/security/mobile/mobile-data-storage.md +1265 -0
  410. package/expertise/security/mobile/mobile-ios-security.md +1401 -0
  411. package/expertise/security/mobile/mobile-network-security.md +1520 -0
  412. package/expertise/security/smart-contract-security.md +594 -0
  413. package/expertise/security/testing/index.md +22 -0
  414. package/expertise/security/testing/penetration-testing.md +1258 -0
  415. package/expertise/security/testing/security-code-review.md +1765 -0
  416. package/expertise/security/testing/threat-modeling.md +1074 -0
  417. package/expertise/security/testing/vulnerability-scanning.md +1062 -0
  418. package/expertise/security/web/api-security.md +586 -0
  419. package/expertise/security/web/cors-and-headers.md +433 -0
  420. package/expertise/security/web/csrf.md +562 -0
  421. package/expertise/security/web/file-upload.md +1477 -0
  422. package/expertise/security/web/index.md +25 -0
  423. package/expertise/security/web/injection.md +1375 -0
  424. package/expertise/security/web/session-management.md +1101 -0
  425. package/expertise/security/web/xss.md +1158 -0
  426. package/exports/README.md +17 -0
  427. package/exports/hosts/claude/.claude/agents/clarifier.md +42 -0
  428. package/exports/hosts/claude/.claude/agents/content-author.md +63 -0
  429. package/exports/hosts/claude/.claude/agents/designer.md +55 -0
  430. package/exports/hosts/claude/.claude/agents/executor.md +55 -0
  431. package/exports/hosts/claude/.claude/agents/learner.md +51 -0
  432. package/exports/hosts/claude/.claude/agents/planner.md +53 -0
  433. package/exports/hosts/claude/.claude/agents/researcher.md +43 -0
  434. package/exports/hosts/claude/.claude/agents/reviewer.md +54 -0
  435. package/exports/hosts/claude/.claude/agents/specifier.md +47 -0
  436. package/exports/hosts/claude/.claude/agents/verifier.md +71 -0
  437. package/exports/hosts/claude/.claude/commands/author.md +42 -0
  438. package/exports/hosts/claude/.claude/commands/clarify.md +38 -0
  439. package/exports/hosts/claude/.claude/commands/design-review.md +46 -0
  440. package/exports/hosts/claude/.claude/commands/design.md +44 -0
  441. package/exports/hosts/claude/.claude/commands/discover.md +37 -0
  442. package/exports/hosts/claude/.claude/commands/execute.md +48 -0
  443. package/exports/hosts/claude/.claude/commands/learn.md +38 -0
  444. package/exports/hosts/claude/.claude/commands/plan-review.md +42 -0
  445. package/exports/hosts/claude/.claude/commands/plan.md +39 -0
  446. package/exports/hosts/claude/.claude/commands/prepare-next.md +37 -0
  447. package/exports/hosts/claude/.claude/commands/review.md +40 -0
  448. package/exports/hosts/claude/.claude/commands/run-audit.md +41 -0
  449. package/exports/hosts/claude/.claude/commands/spec-challenge.md +41 -0
  450. package/exports/hosts/claude/.claude/commands/specify.md +38 -0
  451. package/exports/hosts/claude/.claude/commands/verify.md +37 -0
  452. package/exports/hosts/claude/.claude/settings.json +34 -0
  453. package/exports/hosts/claude/CLAUDE.md +19 -0
  454. package/exports/hosts/claude/export.manifest.json +38 -0
  455. package/exports/hosts/claude/host-package.json +67 -0
  456. package/exports/hosts/codex/AGENTS.md +19 -0
  457. package/exports/hosts/codex/export.manifest.json +38 -0
  458. package/exports/hosts/codex/host-package.json +41 -0
  459. package/exports/hosts/cursor/.cursor/hooks.json +16 -0
  460. package/exports/hosts/cursor/.cursor/rules/wazir-core.mdc +19 -0
  461. package/exports/hosts/cursor/export.manifest.json +38 -0
  462. package/exports/hosts/cursor/host-package.json +42 -0
  463. package/exports/hosts/gemini/GEMINI.md +19 -0
  464. package/exports/hosts/gemini/export.manifest.json +38 -0
  465. package/exports/hosts/gemini/host-package.json +41 -0
  466. package/hooks/README.md +18 -0
  467. package/hooks/definitions/loop_cap_guard.yaml +21 -0
  468. package/hooks/definitions/post_tool_capture.yaml +24 -0
  469. package/hooks/definitions/pre_compact_summary.yaml +19 -0
  470. package/hooks/definitions/pre_tool_capture_route.yaml +19 -0
  471. package/hooks/definitions/protected_path_write_guard.yaml +19 -0
  472. package/hooks/definitions/session_start.yaml +19 -0
  473. package/hooks/definitions/stop_handoff_harvest.yaml +20 -0
  474. package/hooks/loop-cap-guard +17 -0
  475. package/hooks/post-tool-lint +36 -0
  476. package/hooks/protected-path-write-guard +17 -0
  477. package/hooks/session-start +41 -0
  478. package/llms-full.txt +2355 -0
  479. package/llms.txt +43 -0
  480. package/package.json +79 -0
  481. package/roles/README.md +20 -0
  482. package/roles/clarifier.md +42 -0
  483. package/roles/content-author.md +63 -0
  484. package/roles/designer.md +55 -0
  485. package/roles/executor.md +55 -0
  486. package/roles/learner.md +51 -0
  487. package/roles/planner.md +53 -0
  488. package/roles/researcher.md +43 -0
  489. package/roles/reviewer.md +54 -0
  490. package/roles/specifier.md +47 -0
  491. package/roles/verifier.md +71 -0
  492. package/schemas/README.md +24 -0
  493. package/schemas/accepted-learning.schema.json +20 -0
  494. package/schemas/author-artifact.schema.json +156 -0
  495. package/schemas/clarification.schema.json +19 -0
  496. package/schemas/design-artifact.schema.json +80 -0
  497. package/schemas/docs-claim.schema.json +18 -0
  498. package/schemas/export-manifest.schema.json +20 -0
  499. package/schemas/hook.schema.json +67 -0
  500. package/schemas/host-export-package.schema.json +18 -0
  501. package/schemas/implementation-plan.schema.json +19 -0
  502. package/schemas/proposed-learning.schema.json +19 -0
  503. package/schemas/research.schema.json +18 -0
  504. package/schemas/review.schema.json +29 -0
  505. package/schemas/run-manifest.schema.json +18 -0
  506. package/schemas/spec-challenge.schema.json +18 -0
  507. package/schemas/spec.schema.json +20 -0
  508. package/schemas/usage.schema.json +102 -0
  509. package/schemas/verification-proof.schema.json +29 -0
  510. package/schemas/wazir-manifest.schema.json +173 -0
  511. package/skills/README.md +40 -0
  512. package/skills/brainstorming/SKILL.md +77 -0
  513. package/skills/debugging/SKILL.md +50 -0
  514. package/skills/design/SKILL.md +61 -0
  515. package/skills/dispatching-parallel-agents/SKILL.md +128 -0
  516. package/skills/executing-plans/SKILL.md +70 -0
  517. package/skills/finishing-a-development-branch/SKILL.md +169 -0
  518. package/skills/humanize/SKILL.md +123 -0
  519. package/skills/init-pipeline/SKILL.md +124 -0
  520. package/skills/prepare-next/SKILL.md +20 -0
  521. package/skills/receiving-code-review/SKILL.md +123 -0
  522. package/skills/requesting-code-review/SKILL.md +105 -0
  523. package/skills/requesting-code-review/code-reviewer.md +108 -0
  524. package/skills/run-audit/SKILL.md +197 -0
  525. package/skills/scan-project/SKILL.md +41 -0
  526. package/skills/self-audit/SKILL.md +153 -0
  527. package/skills/subagent-driven-development/SKILL.md +154 -0
  528. package/skills/subagent-driven-development/code-quality-reviewer-prompt.md +26 -0
  529. package/skills/subagent-driven-development/implementer-prompt.md +102 -0
  530. package/skills/subagent-driven-development/spec-reviewer-prompt.md +61 -0
  531. package/skills/tdd/SKILL.md +23 -0
  532. package/skills/using-git-worktrees/SKILL.md +163 -0
  533. package/skills/using-skills/SKILL.md +95 -0
  534. package/skills/verification/SKILL.md +22 -0
  535. package/skills/wazir/SKILL.md +463 -0
  536. package/skills/writing-plans/SKILL.md +30 -0
  537. package/skills/writing-skills/SKILL.md +157 -0
  538. package/skills/writing-skills/anthropic-best-practices.md +122 -0
  539. package/skills/writing-skills/persuasion-principles.md +50 -0
  540. package/templates/README.md +20 -0
  541. package/templates/artifacts/README.md +10 -0
  542. package/templates/artifacts/accepted-learning.md +19 -0
  543. package/templates/artifacts/accepted-learning.template.json +12 -0
  544. package/templates/artifacts/author.md +74 -0
  545. package/templates/artifacts/author.template.json +19 -0
  546. package/templates/artifacts/clarification.md +21 -0
  547. package/templates/artifacts/clarification.template.json +12 -0
  548. package/templates/artifacts/execute-notes.md +19 -0
  549. package/templates/artifacts/implementation-plan.md +21 -0
  550. package/templates/artifacts/implementation-plan.template.json +11 -0
  551. package/templates/artifacts/learning-proposal.md +19 -0
  552. package/templates/artifacts/next-run-handoff.md +21 -0
  553. package/templates/artifacts/plan-review.md +19 -0
  554. package/templates/artifacts/proposed-learning.template.json +12 -0
  555. package/templates/artifacts/research.md +21 -0
  556. package/templates/artifacts/research.template.json +12 -0
  557. package/templates/artifacts/review-findings.md +19 -0
  558. package/templates/artifacts/review.template.json +11 -0
  559. package/templates/artifacts/run-manifest.template.json +8 -0
  560. package/templates/artifacts/spec-challenge.md +19 -0
  561. package/templates/artifacts/spec-challenge.template.json +11 -0
  562. package/templates/artifacts/spec.md +21 -0
  563. package/templates/artifacts/spec.template.json +12 -0
  564. package/templates/artifacts/verification-proof.md +19 -0
  565. package/templates/artifacts/verification-proof.template.json +11 -0
  566. package/templates/examples/accepted-learning.example.json +14 -0
  567. package/templates/examples/author.example.json +152 -0
  568. package/templates/examples/clarification.example.json +15 -0
  569. package/templates/examples/docs-claim.example.json +8 -0
  570. package/templates/examples/export-manifest.example.json +7 -0
  571. package/templates/examples/host-export-package.example.json +11 -0
  572. package/templates/examples/implementation-plan.example.json +17 -0
  573. package/templates/examples/proposed-learning.example.json +13 -0
  574. package/templates/examples/research.example.json +15 -0
  575. package/templates/examples/research.example.md +6 -0
  576. package/templates/examples/review.example.json +17 -0
  577. package/templates/examples/run-manifest.example.json +9 -0
  578. package/templates/examples/spec-challenge.example.json +14 -0
  579. package/templates/examples/spec.example.json +21 -0
  580. package/templates/examples/verification-proof.example.json +21 -0
  581. package/templates/examples/wazir-manifest.example.yaml +65 -0
  582. package/templates/task-definition-schema.md +99 -0
  583. package/tooling/README.md +20 -0
  584. package/tooling/src/adapters/context-mode.js +50 -0
  585. package/tooling/src/capture/command.js +376 -0
  586. package/tooling/src/capture/store.js +99 -0
  587. package/tooling/src/capture/usage.js +270 -0
  588. package/tooling/src/checks/branches.js +50 -0
  589. package/tooling/src/checks/brand-truth.js +110 -0
  590. package/tooling/src/checks/changelog.js +231 -0
  591. package/tooling/src/checks/command-registry.js +36 -0
  592. package/tooling/src/checks/commits.js +102 -0
  593. package/tooling/src/checks/docs-drift.js +103 -0
  594. package/tooling/src/checks/docs-truth.js +201 -0
  595. package/tooling/src/checks/runtime-surface.js +156 -0
  596. package/tooling/src/cli.js +116 -0
  597. package/tooling/src/command-options.js +56 -0
  598. package/tooling/src/commands/validate.js +320 -0
  599. package/tooling/src/doctor/command.js +91 -0
  600. package/tooling/src/export/command.js +77 -0
  601. package/tooling/src/export/compiler.js +498 -0
  602. package/tooling/src/guards/loop-cap-guard.js +52 -0
  603. package/tooling/src/guards/protected-path-write-guard.js +67 -0
  604. package/tooling/src/index/command.js +152 -0
  605. package/tooling/src/index/storage.js +1061 -0
  606. package/tooling/src/index/summarizers.js +261 -0
  607. package/tooling/src/loaders.js +18 -0
  608. package/tooling/src/project-root.js +22 -0
  609. package/tooling/src/recall/command.js +225 -0
  610. package/tooling/src/schema-validator.js +30 -0
  611. package/tooling/src/state-root.js +40 -0
  612. package/tooling/src/status/command.js +71 -0
  613. package/wazir.manifest.yaml +135 -0
  614. package/workflows/README.md +19 -0
  615. package/workflows/author.md +42 -0
  616. package/workflows/clarify.md +38 -0
  617. package/workflows/design-review.md +46 -0
  618. package/workflows/design.md +44 -0
  619. package/workflows/discover.md +37 -0
  620. package/workflows/execute.md +48 -0
  621. package/workflows/learn.md +38 -0
  622. package/workflows/plan-review.md +42 -0
  623. package/workflows/plan.md +39 -0
  624. package/workflows/prepare-next.md +37 -0
  625. package/workflows/review.md +40 -0
  626. package/workflows/run-audit.md +41 -0
  627. package/workflows/spec-challenge.md +41 -0
  628. package/workflows/specify.md +38 -0
  629. package/workflows/verify.md +37 -0
@@ -0,0 +1,792 @@
# User Research — Expertise Module

> User research is the systematic study of users — their needs, behaviors, motivations, and contexts — through observation, feedback, and measurement. It provides the empirical foundation for design decisions, replacing assumptions with evidence. The scope spans generative research (discovering opportunities), evaluative research (validating solutions), qualitative methods (understanding why), and quantitative methods (measuring how much). A skilled user researcher selects the right method for each question, manages bias rigorously, and translates raw findings into actionable insights that shape product strategy, interaction design, and development priorities.

---

## 1. What This Discipline Covers

### Definition and Scope

User research is the practice of understanding the people who use (or will use) a product through direct and indirect methods of inquiry. It sits at the intersection of design, product management, and engineering, informing all three with empirical human data.

The discipline encompasses:

- **Generative (discovery) research** — conducted before or early in design to uncover unmet needs, mental models, workflows, and opportunity spaces. The goal is to learn what to build.
- **Evaluative research** — conducted during and after design to assess whether a solution meets user needs effectively. The goal is to learn whether what was built works.
- **Continuous research** — ongoing, lightweight research embedded into product development cadences to maintain a current understanding of users as the product and market evolve.

### The NNG Research Landscape

Nielsen Norman Group frames user research along three dimensions that determine which method to choose:

**Dimension 1 — Attitudinal vs. Behavioral**
- Attitudinal: what people *say* — their beliefs, preferences, stated needs, self-reported satisfaction
- Behavioral: what people *do* — observed actions, task completion, click paths, error rates
- The gap between the two is one of the most important findings in research; people routinely overstate their competence and mispredict their own behavior

**Dimension 2 — Qualitative vs. Quantitative**
- Qualitative: direct observation or listening, generating rich descriptive data; answers *why* and *how to fix*
- Quantitative: indirect measurement through instruments (surveys, analytics, A/B tests); answers *how many* and *how much*

**Dimension 3 — Context of Use**
- Natural: studying users in their real environment (field studies, diary studies, analytics)
- Scripted: giving users specific tasks to perform (usability testing, benchmarking)
- Decontextualized: removing context entirely (surveys, card sorting, interviews)
- Hybrid: combining approaches (contextual inquiry blends observation and interview)

### The IDEO Human-Centered Design Philosophy

IDEO's Field Guide to Human-Centered Design structures research within three phases:

1. **Inspiration** — immerse yourself in the lives of the people you are designing for through empathy-driven research methods
2. **Ideation** — synthesize findings into frameworks (personas, journey maps, "How Might We" questions) and generate solutions
3. **Implementation** — prototype, test, and iterate with real users

IDEO's core principle: start with people, end with solutions tailor-made for their needs. Research is not a phase that ends — it is a continuous practice that bookends every design decision.

### Steve Krug's Pragmatic Research Philosophy

Steve Krug ("Don't Make Me Think," "Rocket Surgery Made Easy") champions accessible, frequent, low-overhead research with these principles:

- **Testing one user is 100% better than testing none** — the first participant reveals more than any amount of internal debate
- **Test early and test often** — one morning a month with three participants is enough to surface the most critical issues
- **Finding the "right" target-market users is less important than you think** — most usability problems are universal enough that almost any user will reveal them
- **The goal is to identify problems, not document them** — keep reports short, fix issues fast, test again
- **Involve stakeholders as observers** — watching real users struggle builds more conviction than any slide deck

---

## 2. Core Methods & Frameworks

### 2.1 User Interviews

**What it is:** One-on-one conversations with users or prospective users, typically 30-60 minutes, following a semi-structured discussion guide.

**When to use:**
- Discovery phase to understand goals, pain points, and workflows
- After quantitative signals suggest a problem but the cause is unclear
- When exploring a new market or user segment

**Lightweight version (30 min):**
- 5 participants, one user segment
- 5-7 open-ended questions, no discussion guide beyond a topic list
- Notes taken live, debrief same day

**Thorough version (60 min):**
- 12-20 participants across multiple segments (NNG recommends 5 per segment for thematic saturation; Griffin & Hauser's research shows that 20-30 interviews uncover 90-95% of all customer needs)
- Formal discussion guide with warm-up, core topics, probing questions, and wrap-up
- Audio/video recorded, transcribed, affinity-mapped

**Key technique — the Five Whys:**
When a participant states a preference or behavior, ask "why" up to five times to reach the root motivation. Stop when you reach an emotional or identity-level driver.

**Anti-patterns to avoid:**
- Leading questions: "Don't you think the new design is cleaner?" (leads to yes)
- Double-barreled questions: "How do you feel about our pricing and customer support?" (two topics, one answer)
- Hypothetical questions: "Would you use this feature?" (stated intent does not predict behavior)

### 2.2 Surveys

**What it is:** Structured questionnaires distributed to a large sample, producing quantitative data and optional open-ended qualitative responses.

**When to use:**
- Measuring satisfaction, preference, or frequency across a large population
- Validating qualitative findings with statistical confidence
- Baseline measurement before a redesign (benchmarking)
- Tracking sentiment over time (NPS, CSAT, SUS)
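Of the tracking metrics above, NPS has the simplest fixed formula: respondents rate likelihood-to-recommend from 0 to 10, and the score is the percentage of promoters (9-10) minus the percentage of detractors (0-6). A minimal sketch of that standard calculation (the function name and rounding choice are illustrative, not tied to any survey tool):

```python
def nps(scores):
    """Net Promoter Score from a list of 0-10 likelihood-to-recommend ratings."""
    if not scores or any(not 0 <= s <= 10 for s in scores):
        raise ValueError("NPS ratings must be between 0 and 10")
    promoters = sum(s >= 9 for s in scores)   # ratings of 9 or 10
    detractors = sum(s <= 6 for s in scores)  # ratings of 0 through 6
    return round(100 * (promoters - detractors) / len(scores))

print(nps([10, 9, 8, 7, 3, 2]))  # 2 promoters, 2 detractors of 6 -> 0
```

Note that passives (7-8) dilute the score without appearing in the numerator, which is why NPS can stay flat even as satisfaction shifts within the middle band.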

**Lightweight version:**
- 5-10 questions, single-page form
- Distributed via in-app prompt or email to existing users
- 50-100 responses for directional findings

**Thorough version:**
- 20-30 questions with validated scales (System Usability Scale, SUPR-Q, AttrakDiff)
- Screener to ensure representative sample
- 200+ responses for statistical significance (margin of error under 7% at 95% confidence)
- Cross-tabulation by segment, regression analysis for drivers
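The "200+ responses" rule of thumb above falls out of the worst-case margin-of-error formula z·√(p(1−p)/n), with p = 0.5 and z = 1.96 at 95% confidence. A quick sketch for checking sample sizes (the helper name is illustrative):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case (p = 0.5) margin of error at 95% confidence (z = 1.96)."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (50, 100, 200, 400):
    print(f"n={n}: +/-{margin_of_error(n):.1%}")
# n=200 gives roughly +/-6.9%, i.e. the "under 7%" threshold cited above;
# halving the margin requires quadrupling the sample.
```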

**Question design principles:**
- Use closed-ended questions for measurement, open-ended for exploration
- Randomize option order to prevent position bias
- Include attention-check and foil questions to filter inattentive respondents (NNG recommends foils — fake but plausible options that catch dishonest or inattentive participants)
- Avoid absolute questions ("always/never") — use frequency scales instead
- Pre-test the survey with 3-5 people to catch ambiguity
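The validated scales mentioned earlier come with fixed scoring rules. For the System Usability Scale, each of the ten 1-5 ratings is normalized (odd-numbered items score rating−1, even-numbered items 5−rating) and the sum is multiplied by 2.5 to land on a 0-100 scale. A sketch of that standard computation:

```python
def sus_score(ratings):
    """Score one System Usability Scale response (ten ratings, each 1-5)."""
    if len(ratings) != 10 or any(not 1 <= r <= 5 for r in ratings):
        raise ValueError("SUS expects exactly ten ratings from 1 to 5")
    # Items 1, 3, 5, 7, 9 are positively worded; items 2, 4, 6, 8, 10 negatively.
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(ratings))
    return total * 2.5  # raw 0-40 points scaled to 0-100

print(sus_score([4, 2, 4, 2, 5, 1, 4, 2, 5, 1]))  # -> 85.0
```

The resulting number is not a percentage; by convention a SUS above roughly 68 is considered above average.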

### 2.3 Contextual Inquiry

**What it is:** A field research method that combines observation and interview in the user's real environment. The researcher watches the participant perform actual tasks, then asks questions in context to understand motivations and workarounds.

**When to use:**
- Understanding complex workflows (enterprise software, professional tools)
- Discovering environmental factors that affect usage (interruptions, physical setup, social dynamics)
- Early discovery when you need to learn the problem space before designing anything

**Lightweight version:**
- 3-5 participants, 60-90 minutes each
- Visit their workspace (or observe via screen-share for remote)
- Capture photos, sketches of environment, and key quotes

**Thorough version:**
- 8-12 participants across roles and environments
- Full-day or multi-day immersion
- Video recording, artifact collection (sticky notes, printouts, workaround documents)
- Interpretation sessions with the full team after each visit

**The four principles of contextual inquiry (Beyer & Holtzblatt):**
1. **Context** — go to the user's environment; do not rely on recalled behavior
2. **Partnership** — the user is the expert; the researcher is the apprentice
3. **Interpretation** — share your interpretation during the session to validate or correct it
4. **Focus** — enter with a clear research focus, but remain open to unexpected findings

### 2.4 Diary Studies

**What it is:** Participants self-report their experiences, behaviors, and emotions over an extended period (typically 1-4 weeks) through structured prompts sent at intervals or triggered by events.

**When to use:**
- Understanding behaviors that unfold over time (habit formation, onboarding journeys, seasonal patterns)
- Capturing in-the-moment experiences that would be distorted by retrospective recall
- When the behavior of interest is sporadic or unpredictable (e.g., error encounters, customer support contacts)
- Late in development or post-launch to monitor real-world experience over time

**Lightweight version:**
- 5-8 participants, 1 week
- Daily prompts via messaging app (Slack, WhatsApp) or simple form
- 3-5 questions per entry: what happened, how they felt, what they did

**Thorough version:**
- 15-25 participants, 2-4 weeks
- Structured diary app (dscout, Indeemo) with photo/video capture
- Entry prompts tied to specific events (signal-contingent) or fixed intervals (interval-contingent)
- Compensation structure that rewards consistent participation
- Thematic analysis of entries, longitudinal pattern identification

**Key advantage over interviews:** Diary studies capture behavior and context in real time, eliminating the recall bias that plagues retrospective interviews. NNG specifically recommends them as a complement to contextual inquiry — diary studies are remote and asynchronous, giving access to geographically distributed users at lower cost.

### 2.5 Personas

**What it is:** Archetypal user profiles synthesized from research data, representing distinct segments of the user population with different goals, behaviors, and contexts.

**When to use:**
- Aligning a team around shared understanding of who they are designing for
- Prioritizing features by tying them to the needs of specific persona segments
- Evaluating design decisions: "Would [Persona] understand this?"

**NNG's three persona types:**

1. **Proto-personas (lightweight)** — created in a 2-4 hour team workshop using existing knowledge and assumptions; useful for alignment when no research budget exists; must be validated later
2. **Qualitative personas** — built from 5-30 user interviews; segments based on observed behavioral patterns, goals, and attitudes; the standard recommended approach
3. **Statistical personas** — built from large-scale survey or analytics data using cluster analysis; validated quantitatively; high investment but high confidence

**What to include in a research-backed persona:**
- Name and photo (realistic, not stock-model glamour)
- Role and context (job title, environment, key constraints)
- Goals — what they are trying to accomplish (primary and secondary)
- Behaviors — how they currently accomplish their goals
- Pain points — frustrations, inefficiencies, unmet needs
- Motivations — why they care, what drives their decisions
- Technology comfort — relevant tools and proficiency levels

**What to exclude:**
- Demographics that do not affect behavior (age, gender, location — unless they demonstrably change usage patterns)
- Fictional backstories that add color but not insight
- Too many personas — 3-5 primary personas cover most products; more than 7 causes decision paralysis

**Common failure:** Personas built on assumptions rather than research are imaginary characters, not tools. If you cannot trace every attribute to a research finding, the persona is fiction.

### 2.6 Jobs-to-Be-Done (JTBD)

**What it is:** A framework that shifts focus from who the user is (demographics) to what the user is trying to accomplish (the "job" they hire a product to do). Originated by Tony Ulwick (1991, Outcome-Driven Innovation) and popularized by Clayton Christensen ("The Innovator's Solution," 2003).

**The core insight:** People do not buy products — they hire products to make progress in specific circumstances. Every job has functional, social, and emotional dimensions.

**The classic example (Christensen's milkshake study):**
A fast-food chain discovered that half of all milkshakes were sold before 8 AM. Ethnographic research revealed the "job" was not "enjoy a treat" but "give me something to consume during my boring commute that keeps me full until lunch, fits in my cupholder, and takes a long time to finish." Competing solutions were not other milkshakes — they were bananas, bagels, and boredom.

**JTBD statement format:**
```
When [situation/trigger],
I want to [motivation/goal],
so I can [expected outcome].
```

**When to use JTBD:**
- Product strategy and positioning — understanding what you actually compete with
- Feature prioritization — mapping features to unmet jobs
- Innovation — discovering over-served and under-served jobs
- When personas feel too demographic and not actionable enough

**Research methods for uncovering JTBD:**
- Switch interviews — talk to recent customers about the moment they switched from a previous solution
- Timeline mapping — reconstruct the buying/adoption decision chronologically
- Contextual inquiry — observe what users are actually trying to accomplish, not what they say they want

### 2.7 Empathy Maps

**What it is:** A collaborative visualization that captures what is known about a user across four quadrants: Says, Thinks, Does, and Feels. Created by Dave Gray (XPLANE), widely adopted by the d.school at Stanford and promoted by NNG.

**When to use:**
- At the start of a design project to externalize and align team assumptions
- During or after interviews to synthesize individual participant data
- As a lightweight alternative to full personas when time is limited

**The four quadrants:**
1. **Says** — direct quotes from interviews or usability sessions
2. **Thinks** — inferred thoughts that the user may not vocalize (beliefs, assumptions, concerns)
3. **Does** — observable behaviors and actions
4. **Feels** — emotional states (frustrated, confident, anxious, delighted)

**Lightweight version:** One empathy map per participant, created during a debrief session, using sticky notes on a whiteboard or Miro board. Takes 15-20 minutes per participant.

**Thorough version:** Aggregate empathy map per user segment, synthesized from multiple participants. Cross-referenced with quantitative data. Used as input for persona creation.

**Key value:** The gap between Says and Does (and between Thinks and Feels) often contains the most important insights. Users who say "I always read the documentation" but are observed clicking randomly reveal a design problem that self-reported data alone would miss.

### 2.8 User Journey Maps

**What it is:** A visualization of the end-to-end process a user goes through to accomplish a goal, including stages, actions, touchpoints, thoughts, emotions, and pain points at each step.

**When to use:**
- Identifying friction points and drop-off moments across a multi-step experience
- Aligning cross-functional teams around the full user experience (not just their own touchpoint)
- Prioritizing improvement efforts by severity of pain at each stage
- Communicating research findings to stakeholders who respond to narrative formats

**Components of a journey map:**
1. **Actor** — the persona or user type whose journey is being mapped
2. **Scenario** — the specific goal or task (e.g., "First-time user sets up their account")
3. **Stages** — the high-level phases (Awareness, Consideration, Onboarding, First Use, Retention)
4. **Actions** — what the user does at each stage
5. **Touchpoints** — where the user interacts with the product or brand
6. **Thoughts** — what the user is thinking at each stage
7. **Emotions** — the emotional arc (typically visualized as a curve moving between positive and negative)
8. **Pain points** — specific frustrations or blockers
9. **Opportunities** — design or product improvements identified at each stage

**Lightweight version:** Whiteboard session with the team, mapping 4-6 stages based on existing knowledge and 3-5 interviews. Completed in 2-3 hours.

**Thorough version:** Research-backed map synthesizing data from 10-20 interviews, diary studies, and analytics. Validated with users. Published as a reference artifact for the organization.

**NNG's key distinction:** Journey maps represent a *specific* user type completing a *specific* goal. A map that tries to cover all users doing all things becomes so generic it is useless.

### 2.9 Competitive Analysis (UX-Focused)

**What it is:** Systematic evaluation of competitor products to understand market conventions, identify differentiation opportunities, and learn from others' successes and failures.

**When to use:**
- Early discovery to understand the landscape before designing
- When stakeholders reference competitor features ("Why don't we have X like Competitor Y?")
- To establish baseline expectations for interaction patterns in a category
- During redesign to identify best-in-class patterns worth adopting

**Lightweight version (heuristic review):**
- Select 3-5 direct competitors and 2-3 indirect competitors
- Walk through key user flows (onboarding, core task, recovery from error)
- Score against heuristics (Nielsen's 10 Usability Heuristics)
- Document in a comparison matrix with screenshots

**Thorough version (competitive usability study):**
- Recruit 5-8 participants per competitor product
- Assign identical tasks across all products
- Measure task success rate, time on task, error rate, and satisfaction
- Produces quantitative benchmarks and qualitative insight into competitor strengths/weaknesses
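The measures above reduce to simple per-task, per-product aggregates. One conventional detail worth encoding: task times are right-skewed, so the geometric mean is usually preferred over the arithmetic mean when averaging them. A sketch under those assumptions (the function and field names are illustrative):

```python
import statistics

def task_benchmark(completed, times_sec):
    """Summarize one task on one product.

    completed: one bool per participant attempt (task completed or not)
    times_sec: time on task, in seconds, for the successful attempts
    """
    success_rate = sum(completed) / len(completed)
    # Geometric mean dampens the long right tail typical of task-time data.
    avg_time = statistics.geometric_mean(times_sec)
    return {"success_rate": success_rate, "avg_time_sec": avg_time}

result = task_benchmark([True, True, False, True], [30, 45, 60])
print(result["success_rate"])  # 0.75
```

Running the same summary for each competitor on identical tasks yields the comparison matrix directly.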

---

## 3. Deliverables

### 3.1 Personas Document

**Format:** 1-2 page document per persona (PDF or shared design file).

**Contents:**
- Photo, name, and quote that captures their core attitude
- Context (role, environment, tools, constraints)
- Goals (primary and secondary)
- Behaviors (current workflows and habits)
- Pain points (ranked by severity)
- Motivations and values
- Technology proficiency and tool preferences
- A scenario illustrating a typical interaction with the product

**Distribution:** Printed and posted in the team area. Embedded in design system documentation. Referenced in user story templates.

### 3.2 User Journey Maps

**Format:** Large-format visualization (poster, Miro/FigJam board, or slide deck for remote teams).

**Contents:**
- Stage-by-stage breakdown with actions, thoughts, emotions, and pain points
- Emotional arc visualization (positive/negative curve)
- Opportunity annotations linked to backlog items or design recommendations
- Data sources cited for each insight

**Quality criteria:**
- Based on research data, not assumptions (cite participant IDs or data sources)
- Specific to one persona and one scenario
- Includes both the current-state experience and (optionally) a future-state vision

### 3.3 Research Reports

**Format:** Written document (5-15 pages) or recorded presentation (20-30 min).

**Structure:**
1. **Executive summary** — 3-5 key findings and their implications (one page; this is often the only section stakeholders read)
2. **Research objectives** — what questions the study was designed to answer
3. **Methodology** — method, sample size, participant demographics, timeline, tools used
4. **Detailed findings** — organized by theme or research question, with supporting evidence (quotes, metrics, screenshots, video clips)
5. **Recommendations** — specific, actionable design or product changes, prioritized by impact and effort
6. **Appendix** — discussion guide, raw data summaries, participant screener

**Anti-pattern:** Reports that describe findings without recommendations. If the report does not tell the reader what to do differently, it has failed its purpose.

### 3.4 Insight Summaries (Research Nuggets)

**Format:** Single-page or single-slide format for rapid consumption.

**Structure:**
- **Insight statement** — one sentence capturing the finding (e.g., "Users abandon the checkout flow when asked to create an account because they perceive it as a commitment, not a convenience")
- **Evidence** — 2-3 supporting data points (quotes, metrics, observation counts)
- **Impact** — what happens if this is not addressed (churn, support cost, lost revenue)
- **Recommendation** — what to do about it
- **Confidence level** — high (multiple sources), medium (single method), low (preliminary signal)

**Distribution:** Shared in Slack/Teams after each research session. Accumulated in a research repository (Dovetail, Notion, Confluence) for longitudinal reference.

### 3.5 User Stories (Research-Informed)

**Format:** Agile user story cards with research traceability.

**Structure:**
```
As a [persona name],
I want to [goal derived from research],
so that [motivation uncovered in interviews].

Acceptance Criteria:
- Given [context observed in research],
  when [action users actually take],
  then [outcome users expect based on mental model].

Research Source: [Study name, participant IDs, insight reference]
```

**Key principle:** User stories derived from research carry more weight in prioritization because they are grounded in observed need, not stakeholder opinion or competitive mimicry. The "so that" clause should trace to a real motivation discovered through interviews or observation, not a product manager's hypothesis.

377
+ ---
378
+
379
+ ## 4. Tools & Techniques
380
+
381
+ ### 4.1 Unmoderated Testing & Research Platforms
382
+
383
+ **Maze**
384
+ - Unmoderated usability testing, prototype testing, card sorting, tree testing, surveys
385
+ - Integrates with Figma prototypes for click-through testing
386
+ - Best for: product teams wanting continuous, fast testing on a moderate budget
387
+ - Strength: speed — tests can launch and collect results within hours
388
+ - Limitation: less depth than moderated sessions; complex tasks may confuse participants without a moderator
389
+
390

**UserTesting** (merged with UserZoom)
- Moderated and unmoderated testing with a large participant panel
- Video recording of sessions with highlight reels
- Best for: B2C companies needing diverse demographic panels at scale
- Strength: panel breadth and quality; enterprise-grade reporting
- Limitation: expensive; per-session pricing discourages frequent testing

**Lyssna (formerly UsabilityHub)**
- Quick preference tests, first-click tests, five-second tests, design surveys
- Best for: rapid design validation with low overhead
- Strength: fast turnaround for simple design questions
- Limitation: limited to specific test types; not suited for complex task flows

### 4.2 Behavioral Analytics & Heatmaps

**Hotjar**
- Heatmaps (click, scroll, move), session recordings, feedback polls, surveys
- Best for: understanding aggregate behavioral patterns on web pages
- Strength: visual and intuitive; non-technical stakeholders understand heatmaps immediately
- Limitation: limited product analytics; no funnel analysis or cohort tracking

**FullStory**
- Session replay, heatmaps, frustration signals (rage clicks, dead clicks, error clicks)
- AI-powered session summarization (StoryAI, powered by Google Gemini)
- Best for: UX teams and customer experience teams focused on digital experience quality
- Strength: high-fidelity replay; frustration detection surfaces issues automatically
- Limitation: no free plan; premium pricing; no feature flags or A/B testing

**PostHog**
- Session recording, product analytics, feature flags, A/B testing, error tracking — all-in-one
- Open-source core with generous free tier (1M events + 5K recordings/month)
- Best for: engineering-oriented teams wanting a unified analytics + experimentation platform
- Strength: tight integration between analytics, flags, and replays; transparent pricing
- Limitation: replay quality less polished than FullStory; steeper learning curve for non-technical users

### 4.3 Product Analytics

**Mixpanel**
- Event-based analytics: funnels, retention, flows, cohorts, A/B test analysis
- Best for: teams tracking specific user actions and conversion funnels
- Strength: powerful segmentation; flexible event taxonomy
- Limitation: requires disciplined event instrumentation; garbage in, garbage out

**Amplitude**
- Behavioral analytics, user segmentation, experimentation, CDP (Customer Data Platform)
- Best for: product-led growth companies analyzing user behavior at scale
- Strength: behavioral cohorting; chart collaboration features; strong governance model
- Limitation: complex setup; expensive at scale; can overwhelm teams without a dedicated analyst

**Google Analytics 4 (GA4)**
- Web and app analytics with event-based model
- Best for: marketing-oriented analytics; acquisition and traffic analysis
- Strength: free; ubiquitous; integrates with Google Ads ecosystem
- Limitation: limited for product-level behavioral analysis; reports are sampled at high data volumes

### 4.4 Survey & Form Tools

**Typeform**
- Conversational, one-question-at-a-time survey experience
- Best for: customer-facing surveys where completion rate matters (higher engagement than traditional forms)
- Strength: beautiful design; logic branching; high response rates
- Limitation: analytics are basic; expensive per-response at scale

**Google Forms**
- Free, simple form builder integrated with Google Sheets
- Best for: internal surveys, quick polls, research screeners
- Strength: zero cost; real-time Google Sheets integration; unlimited responses
- Limitation: minimal design control; no logic branching in free tier; limited question types

**SurveyMonkey**
- Enterprise survey platform with advanced logic, analysis, and panel access
- Best for: large-scale research surveys requiring statistical rigor
- Strength: validated question templates (NPS, CSAT, SUS); cross-tabulation; data export

### 4.5 Qualitative Research & Synthesis Tools

| Tool | Purpose | Best For |
|---|---|---|
| **Dovetail** | Research repository, transcription, tagging, insight management | Teams conducting regular research who need to organize and retrieve insights over time |
| **Miro / FigJam** | Collaborative whiteboarding | Remote synthesis workshops: affinity mapping, empathy maps, journey maps |
| **Optimal Workshop** | Card sorting, tree testing, first-click testing | Information architecture research and navigation design validation |
| **dscout** | Mobile diary study platform with photo/video/text | Longitudinal research capturing in-context experiences over days or weeks |

### 4.6 Recruitment & Scheduling

| Tool | Best For |
|---|---|
| **User Interviews** | Finding qualified participants quickly across demographics and industries |
| **Respondent.io** | Hard-to-reach professional audiences (doctors, engineers, executives) |
| **Calendly / Cal.com** | Coordinating moderated sessions with participants across time zones |

---

## 5. Common Failures

### 5.1 Confirmation Bias

**The failure:** Designing research to validate a pre-existing belief rather than to discover truth. Interpreting ambiguous data as supporting the hypothesis. Ignoring or downplaying contradictory findings.

**How it manifests:** choosing only participants who match the ideal user profile; stopping research once supportive data is found; cherry-picking quotes that support the team's preferred direction; framing findings as "users confirmed that..." rather than "users revealed that..."

**Mitigation:** state hypotheses before research begins, then actively seek disconfirming evidence. Use triangulation (multiple methods, researchers, and data sources). Have a team member play devil's advocate during analysis. Pre-register research questions before data collection. Invite cross-functional observers who have no stake in the outcome.

### 5.2 Leading Questions

**The failure:** Phrasing questions in a way that suggests the "correct" answer, producing data that reflects the researcher's beliefs rather than the participant's experience.

**Examples of leading vs. neutral questions:**

| Leading (Bad) | Neutral (Good) |
|---|---|
| "How much did you like the new design?" | "What was your experience with the design?" |
| "Don't you think this is easier?" | "How would you compare this to your current approach?" |
| "Would you use this feature?" | "Walk me through how you currently handle this task." |
| "What problems did you have?" | "Describe what happened when you tried to complete the task." |
| "The old version was confusing, right?" | "Tell me about your experience with the previous version." |

**Mitigation:** pilot the discussion guide with a colleague to flag implied preferred answers. Start questions with "how," "what," "tell me about," and "describe" rather than "do you," "would you," or "don't you." Avoid adjectives in questions (never say "easy," "confusing," "better," "improved"). Control body language — do not nod, smile, or react. Record sessions and review your own moderating behavior for unconscious bias.
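Part of this review can be automated when piloting a discussion guide. A rough heuristic sketch: the opener and adjective lists below come from the guidance above and are assumptions, not an exhaustive linguistic check.

```python
import re

# Heuristic lint for discussion-guide questions. The word lists mirror the
# mitigation guidance above; they are illustrative, not exhaustive.
LEADING_OPENERS = ("do you", "would you", "don't you")
LOADED_ADJECTIVES = {"easy", "easier", "confusing", "better", "improved"}

def audit_question(question: str) -> list[str]:
    """Return warnings for a discussion-guide question; empty list means OK."""
    q = question.lower().strip()
    warnings = []
    if q.startswith(LEADING_OPENERS):
        warnings.append("closed/leading opener: prefer 'how', 'what', 'tell me about'")
    loaded = LOADED_ADJECTIVES & set(re.findall(r"[a-z']+", q))
    if loaded:
        warnings.append(f"loaded wording: {sorted(loaded)}")
    return warnings

print(audit_question("Don't you think this is easier?"))                       # two warnings
print(audit_question("How would you compare this to your current approach?"))  # []
```

A linter cannot catch tone or body language, so it supplements the colleague pilot rather than replacing it.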

### 5.3 Insufficient Sample Sizes

**The failure:** Drawing conclusions from too few participants (for quantitative work) or too few segments (for qualitative work), producing findings that do not generalize.

**NNG sample size guidelines:**

| Method | Lightweight | Standard | Rigorous |
|---|---|---|---|
| Qualitative usability testing | 3-5 participants | 5 per user segment | 8-12 across segments |
| User interviews | 5 participants | 5-8 per segment | 20-30 for saturation |
| Surveys (quantitative) | 50-100 directional | 200+ for statistical significance | 400+ for segment-level analysis |
| Quantitative usability study | 15-20 participants | 40 participants (NNG standard) | 40+ per segment |
| Card sorting | 15 participants (open) | 30 participants (closed) | 50+ for statistical patterns |
| A/B testing | Depends on effect size | Use power calculator | Typically 1,000+ per variant |

**The critical distinction:** Five participants is appropriate for qualitative usability testing (finding problems) but dangerously inadequate for quantitative measurement (measuring prevalence). NNG explicitly warns: "5 users — okay for qual, wrong for quant."

**Mitigation:**
- Choose sample size based on the *type of question* you are answering, not on budget alone
- For qualitative work, test 5 users per distinct user segment, not 5 total
- For quantitative work, use a sample size calculator based on desired confidence level and margin of error
- Be transparent about confidence levels when presenting findings from small samples
- Label preliminary findings as "signals" not "conclusions"
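For the quantitative case, the standard sample-size formula for estimating a proportion can be sketched as follows (z = 1.96 for 95% confidence; p = 0.5 is the most conservative assumption; the finite population correction is optional):

```python
import math

# n0 = z^2 * p * (1 - p) / e^2, the textbook formula for estimating a
# proportion. z=1.96 gives 95% confidence; p=0.5 maximizes the sample.
def survey_sample_size(margin_of_error, z=1.96, p=0.5, population=None):
    """Respondents needed for the given margin of error (e.g. 0.05 = +/-5%)."""
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    if population is not None:
        # finite population correction for small target populations
        n0 = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n0)

print(survey_sample_size(0.05))                   # 385 at 95% confidence
print(survey_sample_size(0.05, population=2000))  # 323 with correction
```

Note that 385 respondents for a ±5% margin sits comfortably above the "200+ for statistical significance" row in the table; the table's figures are pragmatic floors, not substitutes for the calculation.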

### 5.4 Research Without Action

**The failure:** Conducting research that produces reports but does not change decisions. The most damaging and most common failure in organizational research practice.

**How it manifests:** reports filed but never referenced during design or planning; findings presented but no one assigned to act on them; research cadence disconnected from sprint planning or roadmap cycles; stakeholders say "we already know that" and dismiss findings that challenge existing plans.

**Mitigation:** tie every research study to a pending decision — if no decision depends on the outcome, do not run the study. Include specific, actionable recommendations in every deliverable. Present findings where decisions are made (sprint planning, roadmap review), not in a separate "research share-out." Track recommendation adoption rate as a research team metric. Embed researchers in product teams rather than centralizing them separately.

### 5.5 No Stakeholder Involvement

**The failure:** Researchers conduct studies in isolation, then present findings to stakeholders who are not invested and therefore ignore the results.

**How it manifests:** product managers and engineers never observe sessions; findings communicated via document, never via shared experience; stakeholders challenge methodology instead of engaging with findings.

**Mitigation (Steve Krug's approach):** invite stakeholders to observe sessions live or watch recordings — watching a real user struggle is worth more than any report. Run a debrief immediately: "What did you see? What surprised you? What should we do?" Create a 2-minute highlight reel. Have stakeholders help prioritize which issues to fix. Share raw quotes and video clips in Slack/Teams, not polished decks.

### 5.6 Researching the Wrong Questions

**The failure:** Spending research resources answering questions that are irrelevant to product decisions, already answered by existing data, or framed at the wrong level of abstraction.

**Examples:** conducting interviews to decide button color (this is an A/B test); running a survey to understand mental models (this requires qualitative inquiry); testing a prototype when the team has not validated the problem exists; asking "would you use this?" instead of "how do you currently solve this?"

**Mitigation — the research question audit:** Before any study, answer: (1) What decision will this inform? If you cannot name one, do not run the study. (2) What method is appropriate? Match the method to the question type. (3) Does this data already exist? Check analytics, support tickets, previous research, and industry reports first.

### 5.7 Treating Research as a Phase

**The failure:** Conducting research only at the beginning of a project ("discovery phase"), then designing and building without further user input until launch.

**Why it fails:** assumptions accumulate silently during design and development; the product evolves away from original findings; by launch, the product reflects the team's mental model, not the user's.

**Mitigation — continuous research cadence:** weekly: review analytics dashboards and support ticket themes. Biweekly: conduct 2-3 lightweight usability tests. Monthly: run a focused study (interviews, survey, or diary study). Quarterly: synthesize accumulated findings into updated personas and journey maps.

---

## 6. Integration with Development

### 6.1 From Research to User Stories

Research findings should flow directly into the product backlog as user stories with traceable evidence:

**Step 1 — Extract insights from research**
Each research study produces a set of insight statements. Example:
> "Enterprise users managing 50+ team members need to bulk-assign permissions because doing it one-by-one takes 20+ minutes and causes errors when they lose track of who has been updated."

**Step 2 — Frame as user stories**
```
As an enterprise admin managing a large team,
I want to select multiple team members and assign permissions in bulk,
so that I can complete permission updates in minutes instead of 20+
and avoid errors from manual one-by-one assignment.

Acceptance Criteria:
- Given I am on the team management page with 50+ members,
when I select multiple members using checkboxes,
then I can apply a permission template to all selected members at once.
- Given I have applied bulk permissions,
when the operation completes,
then I see a confirmation showing exactly which members were updated
and which (if any) failed with the reason.

Research Source: Enterprise Admin Interviews, Q1 2026, participants P03, P07, P11
```

**Step 3 — Prioritize using research severity**
- **Critical** — users cannot complete the task at all; observed in 4+ participants
- **Major** — users complete the task but with significant frustration or errors; observed in 3+ participants
- **Minor** — users notice the issue but work around it; observed in 1-2 participants
- **Enhancement** — users did not report it, but research suggests an opportunity
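The severity rules above can be applied mechanically during triage. A hypothetical helper sketch; the boolean inputs and thresholds mirror the list, but real prioritization also weighs business impact and frequency in production:

```python
# Hypothetical triage helper mirroring the severity definitions above.
# Inputs and thresholds are taken from the list; real triage adds judgment.
def classify_severity(blocks_task: bool, frustration_or_errors: bool,
                      participants_observed: int) -> str:
    if blocks_task and participants_observed >= 4:
        return "critical"       # cannot complete the task; 4+ participants
    if frustration_or_errors and participants_observed >= 3:
        return "major"          # completed, but with frustration or errors
    if participants_observed >= 1:
        return "minor"          # noticed and worked around
    return "enhancement"        # not reported; research suggests an opportunity

print(classify_severity(blocks_task=True, frustration_or_errors=True,
                        participants_observed=5))  # critical
```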

### 6.2 Acceptance Criteria from User Needs

Research data transforms into acceptance criteria by mapping observed user behavior and expectations to testable conditions:

**Research observation:** "Users expect the system to remember their last-used settings because they perform the same report configuration 90% of the time."

**Acceptance criteria:**
```
Given a user has previously configured and run a report,
when they return to the report builder,
then their last-used configuration is pre-populated as the default,
and they can modify it or run it immediately without re-entering settings.
```

**Research observation:** "Users become anxious when a long-running operation shows no progress indicator — 4 of 6 participants refreshed the page, causing data loss."

**Acceptance criteria:**
```
Given a user initiates an operation expected to take more than 3 seconds,
when the operation is processing,
then a progress indicator is displayed showing estimated time remaining,
and the user is warned if they attempt to navigate away that data may be lost.
```

### 6.3 Analytics-Driven Iteration

Post-launch, research shifts from generative to evaluative, using quantitative signals to identify where qualitative investigation is needed:

**The analytics-to-insight loop:**

1. **Instrument** — define key events aligned with user goals (not just page views): task started, task completed, error encountered, feature adopted, feature abandoned
2. **Monitor** — set up dashboards tracking funnel conversion, feature adoption, error rates, and session duration trends
3. **Detect** — identify anomalies: drop-offs in funnels, declining retention cohorts, features with high abandonment, rage-click concentrations
4. **Investigate** — use qualitative methods (session replay review, targeted interviews, usability testing) to understand *why* the quantitative signal exists
5. **Intervene** — implement design changes, feature flags, or A/B tests based on qualitative findings
6. **Measure** — track the quantitative impact of the intervention; close the loop

**Example workflow:**
- Amplitude shows a 40% drop-off at step 3 of the onboarding flow
- PostHog session replays reveal users are confused by a form field label
- 5 quick usability tests confirm the label is ambiguous
- Design team revises the label and adds helper text
- A/B test shows the revision improves step 3 completion by 22%
- The finding is documented as an insight nugget in Dovetail for future reference
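The "Detect" step of the loop can be sketched as a simple conversion check over funnel counts. The step names, user counts, and 70% threshold below are made-up example data, not output from any real analytics tool:

```python
# Flag funnel steps whose step-to-step conversion drops below a threshold.
# Counts and the threshold are illustrative; real tools compute this for you.
def flag_dropoffs(funnel, threshold=0.7):
    """funnel: ordered (step_name, user_count) pairs; returns flagged step names."""
    flagged = []
    for (_, prev_n), (name, n) in zip(funnel, funnel[1:]):
        if prev_n and n / prev_n < threshold:
            flagged.append(name)
    return flagged

onboarding = [("signup", 1000), ("profile", 870), ("step3", 520), ("done", 490)]
print(flag_dropoffs(onboarding))  # ['step3'] -> investigate with session replays
```

The flagged step is where qualitative investigation (step 4 of the loop) begins; the code only tells you *where*, never *why*.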

### 6.4 Research in Agile Ceremonies

**Sprint Planning:**
- Researchers present relevant insights when backlog items are discussed
- Acceptance criteria reference specific research findings
- Research-sourced stories carry an evidence tag that influences priority

**Sprint Review / Demo:**
- Compare implemented features against research-based expectations
- Highlight any deviations from researched user mental models
- Queue items for post-sprint usability validation

**Retrospective:**
- Review whether shipped features addressed the identified user needs
- Assess whether research recommendations were adopted or deferred (and why)
- Identify knowledge gaps that require new research

**Continuous Discovery (Teresa Torres model):**
- Weekly touchpoint with users (interview, test, or observation)
- Opportunity solution tree maintained alongside the product backlog
- Research is not a project — it is a continuous team habit

---

## 7. Method Selection Guide

Use this decision tree to select the right research method for your question:

```
What type of question are you answering?
|
+-- "What do users need?" (Discovery)
|     +-- Early stage, unknown problem space --> Contextual Inquiry
|     +-- Known domain, need to understand goals --> User Interviews
|     +-- Need to understand behavior over time --> Diary Study
|     +-- Need to understand the competitive landscape --> Competitive Analysis
|
+-- "Does this design work?" (Evaluation)
|     +-- Have a prototype, need qualitative feedback --> Usability Testing (moderated)
|     +-- Have a prototype, need speed and scale --> Usability Testing (unmoderated, Maze)
|     +-- Need to validate information architecture --> Card Sorting / Tree Testing
|     +-- Need to compare design options --> A/B Testing or Preference Testing
|
+-- "How many / how much?" (Measurement)
|     +-- Need to measure satisfaction or preference --> Survey
|     +-- Need to measure task performance --> Quantitative Usability Study (40+ users)
|     +-- Need to track behavior patterns --> Product Analytics (Mixpanel / Amplitude)
|     +-- Need to detect friction points --> Session Recording (FullStory / PostHog / Hotjar)
|
+-- "Why is this happening?" (Diagnosis)
      +-- Analytics show a drop-off or anomaly --> Session Replay + Targeted Interviews
      +-- Support tickets describe confusion --> Usability Testing on the affected flow
      +-- Users request features that seem odd --> JTBD Interviews (what job are they hiring for?)
```

---

## 8. Quick Reference Checklist

### Before Starting Research

- [ ] Define the research question(s) — what specific decision will this inform?
- [ ] Confirm the question cannot be answered with existing data (analytics, support logs, prior studies)
- [ ] Select the appropriate method based on question type (see Method Selection Guide)
- [ ] Determine sample size based on method type (qual: 5 per segment; quant: 40+)
- [ ] Write a discussion guide or test script and pilot it with a colleague
- [ ] Review all questions for leading language, double-barreling, and hypotheticals
- [ ] Recruit participants who match the target user profile (not colleagues, not power users only)
- [ ] Schedule stakeholder observation for live sessions
- [ ] Prepare consent forms and recording permissions
- [ ] Set up tools (recording software, note-taking template, analysis framework)

### During Research

- [ ] Follow the discussion guide but remain flexible for unexpected insights
- [ ] Ask open-ended questions; probe with "tell me more," "why," "what happened next"
- [ ] Do not lead, validate, or react to participant responses
- [ ] Control body language — neutral expression, no nodding at "good" answers
- [ ] Take timestamped notes with participant quotes (not interpretations)
- [ ] Note environmental context (device, location, interruptions, workarounds)
- [ ] If moderating remotely, ask participants to share their screen and think aloud
- [ ] Capture unexpected behaviors and off-script moments — these are often the richest data

### After Research

- [ ] Debrief with observers within 24 hours while memory is fresh
- [ ] Transcribe sessions (automated tools: Otter.ai, Dovetail, Rev)
- [ ] Conduct affinity mapping — group observations into themes
- [ ] Distinguish findings (what you observed) from interpretations (what you think it means)
- [ ] Write insight statements: observation + impact + recommendation
- [ ] Prioritize findings by severity (critical / major / minor / enhancement)
- [ ] Present findings in the decision-making forum, not a separate meeting
- [ ] Include specific, actionable recommendations — not just observations
- [ ] Track which recommendations are adopted and measure their impact
- [ ] Archive raw data and insights in the research repository for future reference
- [ ] Update personas and journey maps if findings warrant revision

### Research Ethics Checklist

- [ ] Informed consent obtained before every session
- [ ] Participants can withdraw at any time without consequence
- [ ] Personal data is anonymized in all reports and shared artifacts
- [ ] Recordings are stored securely and deleted after the agreed retention period
- [ ] Compensation is fair and does not create undue incentive to please the researcher
- [ ] Vulnerable populations (minors, people with disabilities) receive additional protections
- [ ] Research does not deceive participants about the purpose of the study

### Bias Mitigation Checklist

- [ ] Hypotheses stated before research, not after
- [ ] Discussion guide reviewed by someone outside the project team
- [ ] Participant sample includes users who may disagree with the team's direction
- [ ] Analysis conducted by at least two people independently before comparing notes
- [ ] Contradictory findings given equal weight in the report
- [ ] Confidence level stated for each finding (high / medium / low)
- [ ] Findings distinguished from recommendations — let the data speak before proposing solutions
- [ ] Video clips selected to represent the range of responses, not just the dramatic ones

---

## 9. Key References

### Books
- **"Don't Make Me Think"** — Steve Krug. The foundational text on pragmatic usability. Emphasizes simplicity, common sense, and testing with real users over debates about best practices.
- **"Rocket Surgery Made Easy"** — Steve Krug. Step-by-step guide to DIY usability testing: recruit 3 users, test one morning a month, fix the most serious problems, repeat.
- **"Interviewing Users"** — Steve Portigal. Deep guide to conducting effective user interviews, including rapport-building, question design, and handling difficult participants.
- **"Just Enough Research"** — Erika Hall. Argues for right-sized research that is fast, focused, and tied to decisions. Strong antidote to analysis paralysis.
- **"The Innovator's Solution"** — Clayton Christensen. Introduces the Jobs-to-Be-Done framework in the context of disruptive innovation.
- **"Competing Against Luck"** — Clayton Christensen, Taddy Hall, Karen Dillon, David Duncan. Full treatment of JTBD with practical application guidance.
- **"Observing the User Experience"** — Elizabeth Goodman, Mike Kuniavsky, Andrea Moed. Comprehensive practitioner's guide covering the full spectrum of UX research methods.
- **"The Field Guide to Human-Centered Design"** — IDEO.org. 57 design methods organized by Inspiration, Ideation, and Implementation phases.

### Online Resources
- **Nielsen Norman Group (nngroup.com)** — "When to Use Which User-Experience Research Methods" article and the UX research methods landscape chart. The most widely referenced framework for method selection.
- **IDEO Design Kit (designkit.org)** — Open-source collection of human-centered design methods with step-by-step guidance.
- **User Interviews Field Guide (userinterviews.com/ux-research-field-guide)** — Comprehensive guide to UX research methods with practical templates.

### Validated Measurement Instruments
- **System Usability Scale (SUS)** — 10-item questionnaire producing a 0-100 usability score. Industry benchmark: 68 is average. Quick, reliable, and free.
- **Net Promoter Score (NPS)** — single-question loyalty metric ("How likely are you to recommend..."). Useful for tracking trends, not for diagnosing problems.
- **Customer Satisfaction Score (CSAT)** — single-question satisfaction metric. Best used immediately after an interaction.
- **SUPR-Q** — standardized questionnaire for website user experience covering usability, trust, loyalty, and appearance.
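SUS and NPS scoring follow fixed published rules, sketched here for reference (SUS: odd-numbered items contribute score minus 1, even-numbered items 5 minus score, sum times 2.5; NPS: percent promoters minus percent detractors):

```python
# Published scoring rules for SUS and NPS; a reference sketch, not a survey tool.
def sus_score(responses):
    """responses: ten 1-5 Likert answers in questionnaire order."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    # index 0, 2, ... are the odd-numbered items (1, 3, 5, 7, 9)
    total = sum(r - 1 if i % 2 == 0 else 5 - r for i, r in enumerate(responses))
    return total * 2.5  # 0-100 scale; the commonly cited average is about 68

def nps(ratings):
    """ratings: answers (0-10) to the 'how likely to recommend' question."""
    promoters = sum(r >= 9 for r in ratings)
    detractors = sum(r <= 6 for r in ratings)
    return 100 * (promoters - detractors) / len(ratings)

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0 (best possible)
print(nps([10, 9, 8, 7, 3]))                      # 20.0
```

Note that SUS alternates positively and negatively worded statements, which is why odd and even items are scored differently; shuffling the questionnaire order breaks the formula.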

---

*This module provides the conceptual foundation and practical toolkit for conducting user research that informs product decisions. It should be used alongside the Usability Testing expertise module for evaluation-specific guidance and the Information Architecture module for navigation and structure research methods.*