@fredcallagan/arn-spark 5.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (130)
  1. package/.claude-plugin/plugin.json +9 -0
  2. package/.opencode/plugins/arn-spark.js +272 -0
  3. package/package.json +17 -0
  4. package/plugins/arn-spark/.claude-plugin/plugin.json +9 -0
  5. package/plugins/arn-spark/LICENSE +21 -0
  6. package/plugins/arn-spark/README.md +25 -0
  7. package/plugins/arn-spark/agents/arn-spark-brand-strategist.md +299 -0
  8. package/plugins/arn-spark/agents/arn-spark-dev-env-builder.md +228 -0
  9. package/plugins/arn-spark/agents/arn-spark-doctor.md +92 -0
  10. package/plugins/arn-spark/agents/arn-spark-forensic-investigator.md +181 -0
  11. package/plugins/arn-spark/agents/arn-spark-market-researcher.md +232 -0
  12. package/plugins/arn-spark/agents/arn-spark-marketing-pm.md +225 -0
  13. package/plugins/arn-spark/agents/arn-spark-persona-architect.md +259 -0
  14. package/plugins/arn-spark/agents/arn-spark-persona-impersonator.md +183 -0
  15. package/plugins/arn-spark/agents/arn-spark-product-strategist.md +191 -0
  16. package/plugins/arn-spark/agents/arn-spark-prototype-builder.md +497 -0
  17. package/plugins/arn-spark/agents/arn-spark-scaffolder.md +228 -0
  18. package/plugins/arn-spark/agents/arn-spark-spike-runner.md +209 -0
  19. package/plugins/arn-spark/agents/arn-spark-style-capture.md +196 -0
  20. package/plugins/arn-spark/agents/arn-spark-tech-evaluator.md +229 -0
  21. package/plugins/arn-spark/agents/arn-spark-ui-interactor.md +235 -0
  22. package/plugins/arn-spark/agents/arn-spark-use-case-writer.md +280 -0
  23. package/plugins/arn-spark/agents/arn-spark-ux-judge.md +215 -0
  24. package/plugins/arn-spark/agents/arn-spark-ux-specialist.md +200 -0
  25. package/plugins/arn-spark/agents/arn-spark-visual-sketcher.md +285 -0
  26. package/plugins/arn-spark/agents/arn-spark-visual-test-engineer.md +224 -0
  27. package/plugins/arn-spark/references/copilot-tools.md +62 -0
  28. package/plugins/arn-spark/skills/arn-brainstorming/SKILL.md +520 -0
  29. package/plugins/arn-spark/skills/arn-brainstorming/references/add-feature-flow.md +155 -0
  30. package/plugins/arn-spark/skills/arn-spark-arch-vision/SKILL.md +226 -0
  31. package/plugins/arn-spark/skills/arn-spark-arch-vision/references/architecture-vision-template.md +153 -0
  32. package/plugins/arn-spark/skills/arn-spark-arch-vision/references/technology-evaluation-guide.md +86 -0
  33. package/plugins/arn-spark/skills/arn-spark-clickable-prototype/SKILL.md +471 -0
  34. package/plugins/arn-spark/skills/arn-spark-clickable-prototype/references/clickable-prototype-criteria.md +65 -0
  35. package/plugins/arn-spark/skills/arn-spark-clickable-prototype/references/journey-template.md +62 -0
  36. package/plugins/arn-spark/skills/arn-spark-clickable-prototype/references/review-report-template.md +75 -0
  37. package/plugins/arn-spark/skills/arn-spark-clickable-prototype/references/showcase-capture-guide.md +213 -0
  38. package/plugins/arn-spark/skills/arn-spark-clickable-prototype-teams/SKILL.md +642 -0
  39. package/plugins/arn-spark/skills/arn-spark-clickable-prototype-teams/references/debate-protocol.md +242 -0
  40. package/plugins/arn-spark/skills/arn-spark-clickable-prototype-teams/references/debate-review-report-template.md +161 -0
  41. package/plugins/arn-spark/skills/arn-spark-clickable-prototype-teams/references/expert-interaction-review-template.md +152 -0
  42. package/plugins/arn-spark/skills/arn-spark-concept-review/SKILL.md +350 -0
  43. package/plugins/arn-spark/skills/arn-spark-concept-review/references/conflict-resolution-protocol.md +145 -0
  44. package/plugins/arn-spark/skills/arn-spark-concept-review/references/review-report-template.md +185 -0
  45. package/plugins/arn-spark/skills/arn-spark-dev-setup/SKILL.md +366 -0
  46. package/plugins/arn-spark/skills/arn-spark-dev-setup/references/dev-setup-checklist.md +84 -0
  47. package/plugins/arn-spark/skills/arn-spark-dev-setup/references/dev-setup-template.md +205 -0
  48. package/plugins/arn-spark/skills/arn-spark-discover/SKILL.md +303 -0
  49. package/plugins/arn-spark/skills/arn-spark-discover/references/competitive-landscape-template.md +87 -0
  50. package/plugins/arn-spark/skills/arn-spark-discover/references/discovery-questions.md +120 -0
  51. package/plugins/arn-spark/skills/arn-spark-discover/references/persona-profile-template.md +97 -0
  52. package/plugins/arn-spark/skills/arn-spark-discover/references/product-concept-template.md +253 -0
  53. package/plugins/arn-spark/skills/arn-spark-ensure-config/SKILL.md +23 -0
  54. package/plugins/arn-spark/skills/arn-spark-ensure-config/references/ensure-config.md +388 -0
  55. package/plugins/arn-spark/skills/arn-spark-ensure-config/references/step-0-fast-path.md +25 -0
  56. package/plugins/arn-spark/skills/arn-spark-ensure-config/scripts/cache-check.sh +127 -0
  57. package/plugins/arn-spark/skills/arn-spark-feature-extract/SKILL.md +483 -0
  58. package/plugins/arn-spark/skills/arn-spark-feature-extract/references/feature-backlog-template.md +176 -0
  59. package/plugins/arn-spark/skills/arn-spark-feature-extract/references/feature-entry-template.md +209 -0
  60. package/plugins/arn-spark/skills/arn-spark-help/SKILL.md +149 -0
  61. package/plugins/arn-spark/skills/arn-spark-help/references/pipeline-map.md +211 -0
  62. package/plugins/arn-spark/skills/arn-spark-init/SKILL.md +312 -0
  63. package/plugins/arn-spark/skills/arn-spark-init/references/agent-models-presets/all-opus.md +23 -0
  64. package/plugins/arn-spark/skills/arn-spark-init/references/agent-models-presets/balanced.md +23 -0
  65. package/plugins/arn-spark/skills/arn-spark-init/references/bkt-setup.md +55 -0
  66. package/plugins/arn-spark/skills/arn-spark-init/references/jira-mcp-setup.md +61 -0
  67. package/plugins/arn-spark/skills/arn-spark-init/references/platform-labels.md +97 -0
  68. package/plugins/arn-spark/skills/arn-spark-naming/SKILL.md +275 -0
  69. package/plugins/arn-spark/skills/arn-spark-naming/references/creative-brief-template.md +146 -0
  70. package/plugins/arn-spark/skills/arn-spark-naming/references/naming-methodology.md +237 -0
  71. package/plugins/arn-spark/skills/arn-spark-naming/references/naming-report-template.md +122 -0
  72. package/plugins/arn-spark/skills/arn-spark-naming/references/trademark-databases.md +88 -0
  73. package/plugins/arn-spark/skills/arn-spark-naming/references/whois-server-map.md +164 -0
  74. package/plugins/arn-spark/skills/arn-spark-naming/scripts/whois-check.js +502 -0
  75. package/plugins/arn-spark/skills/arn-spark-naming/scripts/whois-check.py +533 -0
  76. package/plugins/arn-spark/skills/arn-spark-prototype-lock/SKILL.md +260 -0
  77. package/plugins/arn-spark/skills/arn-spark-prototype-lock/references/lock-report-template.md +68 -0
  78. package/plugins/arn-spark/skills/arn-spark-prototype-lock/references/pretooluse-hook-template.json +35 -0
  79. package/plugins/arn-spark/skills/arn-spark-prototype-lock/references/prototype-guardrail-rules.md +38 -0
  80. package/plugins/arn-spark/skills/arn-spark-report/SKILL.md +144 -0
  81. package/plugins/arn-spark/skills/arn-spark-report/references/issue-template.md +81 -0
  82. package/plugins/arn-spark/skills/arn-spark-report/references/spark-knowledge-base.md +293 -0
  83. package/plugins/arn-spark/skills/arn-spark-scaffold/SKILL.md +239 -0
  84. package/plugins/arn-spark/skills/arn-spark-scaffold/references/scaffold-checklist.md +79 -0
  85. package/plugins/arn-spark/skills/arn-spark-scaffold/references/scaffold-summary-template.md +74 -0
  86. package/plugins/arn-spark/skills/arn-spark-spike/SKILL.md +209 -0
  87. package/plugins/arn-spark/skills/arn-spark-spike/references/spike-report-template.md +123 -0
  88. package/plugins/arn-spark/skills/arn-spark-static-prototype/SKILL.md +362 -0
  89. package/plugins/arn-spark/skills/arn-spark-static-prototype/references/review-report-template.md +65 -0
  90. package/plugins/arn-spark/skills/arn-spark-static-prototype/references/showcase-capture-guide.md +153 -0
  91. package/plugins/arn-spark/skills/arn-spark-static-prototype/references/static-prototype-criteria.md +54 -0
  92. package/plugins/arn-spark/skills/arn-spark-static-prototype-teams/SKILL.md +518 -0
  93. package/plugins/arn-spark/skills/arn-spark-static-prototype-teams/references/debate-protocol.md +230 -0
  94. package/plugins/arn-spark/skills/arn-spark-static-prototype-teams/references/debate-review-report-template.md +148 -0
  95. package/plugins/arn-spark/skills/arn-spark-static-prototype-teams/references/expert-visual-review-template.md +130 -0
  96. package/plugins/arn-spark/skills/arn-spark-stress-competitive/SKILL.md +166 -0
  97. package/plugins/arn-spark/skills/arn-spark-stress-competitive/references/competitive-report-template.md +139 -0
  98. package/plugins/arn-spark/skills/arn-spark-stress-competitive/references/gap-analysis-framework.md +111 -0
  99. package/plugins/arn-spark/skills/arn-spark-stress-interview/SKILL.md +257 -0
  100. package/plugins/arn-spark/skills/arn-spark-stress-interview/references/interview-protocol.md +140 -0
  101. package/plugins/arn-spark/skills/arn-spark-stress-interview/references/interview-report-template.md +165 -0
  102. package/plugins/arn-spark/skills/arn-spark-stress-interview/references/persona-casting-spec.md +138 -0
  103. package/plugins/arn-spark/skills/arn-spark-stress-premortem/SKILL.md +181 -0
  104. package/plugins/arn-spark/skills/arn-spark-stress-premortem/references/premortem-protocol.md +112 -0
  105. package/plugins/arn-spark/skills/arn-spark-stress-premortem/references/premortem-report-template.md +158 -0
  106. package/plugins/arn-spark/skills/arn-spark-stress-prfaq/SKILL.md +206 -0
  107. package/plugins/arn-spark/skills/arn-spark-stress-prfaq/references/prfaq-report-template.md +139 -0
  108. package/plugins/arn-spark/skills/arn-spark-stress-prfaq/references/prfaq-workflow.md +118 -0
  109. package/plugins/arn-spark/skills/arn-spark-style-explore/SKILL.md +281 -0
  110. package/plugins/arn-spark/skills/arn-spark-style-explore/references/style-brief-template.md +198 -0
  111. package/plugins/arn-spark/skills/arn-spark-use-cases/SKILL.md +359 -0
  112. package/plugins/arn-spark/skills/arn-spark-use-cases/references/expert-review-template.md +94 -0
  113. package/plugins/arn-spark/skills/arn-spark-use-cases/references/review-protocol.md +150 -0
  114. package/plugins/arn-spark/skills/arn-spark-use-cases/references/use-case-index-template.md +108 -0
  115. package/plugins/arn-spark/skills/arn-spark-use-cases/references/use-case-template.md +125 -0
  116. package/plugins/arn-spark/skills/arn-spark-use-cases-teams/SKILL.md +306 -0
  117. package/plugins/arn-spark/skills/arn-spark-use-cases-teams/references/debate-protocol.md +272 -0
  118. package/plugins/arn-spark/skills/arn-spark-use-cases-teams/references/review-report-template.md +112 -0
  119. package/plugins/arn-spark/skills/arn-spark-visual-readiness/SKILL.md +293 -0
  120. package/plugins/arn-spark/skills/arn-spark-visual-readiness/references/readiness-checklist.md +196 -0
  121. package/plugins/arn-spark/skills/arn-spark-visual-sketch/SKILL.md +376 -0
  122. package/plugins/arn-spark/skills/arn-spark-visual-sketch/references/aesthetic-philosophy.md +210 -0
  123. package/plugins/arn-spark/skills/arn-spark-visual-sketch/references/sketch-gallery-guide.md +282 -0
  124. package/plugins/arn-spark/skills/arn-spark-visual-sketch/references/visual-direction-template.md +174 -0
  125. package/plugins/arn-spark/skills/arn-spark-visual-strategy/SKILL.md +447 -0
  126. package/plugins/arn-spark/skills/arn-spark-visual-strategy/references/baseline-capture-script-template.js +89 -0
  127. package/plugins/arn-spark/skills/arn-spark-visual-strategy/references/journey-schema.md +375 -0
  128. package/plugins/arn-spark/skills/arn-spark-visual-strategy/references/spike-checklist.md +122 -0
  129. package/plugins/arn-spark/skills/arn-spark-visual-strategy/references/strategy-layers-guide.md +132 -0
  130. package/plugins/arn-spark/skills/arn-spark-visual-strategy/references/visual-strategy-template.md +141 -0
package/plugins/arn-spark/skills/arn-spark-stress-interview/references/interview-protocol.md
@@ -0,0 +1,140 @@
+ # Interview Protocol
+
+ Structured protocol for the "Two-Part Reveal" synthetic user interview process. This document is consumed by the `arn-spark-stress-interview` skill to orchestrate persona impersonator invocations across three interview phases.
+
+ ## Overview
+
+ The Two-Part Reveal is designed to surface genuine reactions by controlling information disclosure. The interview starts blind (the persona does not know what the product is), reveals the full concept in the second phase, and pressure-tests in the third. This mirrors how real users encounter products: first impressions are formed before marketing messaging has time to shape expectations.
+
+ The full interview cycle runs 3 personas x 3 phases = 9 persona-impersonator invocations organized as 3 phase-parallel waves, with product-strategist invocations at the start of each wave for question formulation. All personas run the same phase in parallel before the wave advances to the next phase.
+
+ ---
+
+ ## Phase 1: Initial Reaction (Blind Problem Check)
+
+ **Goal:** Determine whether the persona recognizes and cares about the problem the product solves -- before knowing the product exists.
+
+ **Information disclosed to persona:**
+ - The problem space description (extracted from the product concept's Problem Statement section)
+ - The persona's own profile and casting overlay
+ - NO product name, NO solution description, NO features
+
+ **Question types:**
+ - Problem recognition: "Do you experience [problem]? How often? How severely?"
+ - Current coping: "What do you currently do about this? What tools or workarounds?"
+ - Pain severity: "On a scale of 'mild annoyance' to 'blocking my work,' where does this fall?"
+ - Willingness to solve: "If a solution existed, what would you pay / what effort would you invest?"
+
+ **What to listen for:**
+ - Whether the persona even recognizes the problem (if not, the product concept may have a target audience assumption error)
+ - The language the persona uses to describe the problem (may differ from the product concept's framing)
+ - How severe the problem is from the persona's perspective versus the product concept's claim
+ - What the persona's switching threshold looks like (how much pain before they seek a solution)
+
+ **Expected length:** 3-5 sentences from the persona per prompt. 1-2 prompts per phase.
+
+ **Product strategist role:** Before Phase 1, the product strategist formulates 2-3 questions that probe problem recognition without revealing the solution. The strategist extracts the problem framing from the product concept and strips solution-specific language.
+
+ ---
+
+ ## Phase 2: Deep Probing (Elevator Pitch Reveal)
+
+ **Goal:** Reveal the full product concept and probe adoption barriers, feature gaps, and misalignment with the persona's needs.
+
+ **Information disclosed to persona:**
+ - Everything from Phase 1 PLUS the full product concept summary (vision, core experience, product pillars, key features)
+ - The product's claimed value proposition and differentiation
+
+ **Question types:**
+ - First reaction to solution: "Now that you see the product, what is your gut reaction?"
+ - Fit assessment: "Does this solve the problem you described in Phase 1? Where does it fall short?"
+ - Adoption barriers: "What would stop you from trying this? What would you need to see first?"
+ - Feature evaluation: "Which features matter most to you? Which seem unnecessary?"
+ - Pillar alignment: "The product commits to [pillar]. Does that match what you need?"
+
+ **What to listen for:**
+ - Whether the persona's Phase 1 problem description matches what the product actually solves
+ - Specific adoption barriers that the product concept does not address
+ - Features the persona expected that are missing
+ - Features the persona considers unnecessary or harmful
+ - Whether the product pillars resonate or feel irrelevant to the persona's needs
+
+ **Expected length:** 1-3 paragraphs from the persona per prompt. 2-3 prompts per phase.
+
+ **Product strategist role:** Between Phase 1 and Phase 2, the product strategist reviews Phase 1 responses and formulates questions that target gaps between the persona's described problem and the product's proposed solution. The strategist identifies which product pillars to probe based on the persona's stated priorities.
+
+ ---
+
+ ## Phase 3: Stress Test (Dealbreaker Probe)
+
+ **Goal:** Present the weakest aspects of the product concept and pressure-test the persona's willingness to adopt despite known limitations.
+
+ **Information disclosed to persona:**
+ - Everything from Phase 2 PLUS explicitly identified weaknesses, scope boundaries, and deferred features
+ - Known competitive alternatives and their advantages over this product
+
+ **Question types:**
+ - Dealbreaker identification: "What is the single biggest reason you would NOT use this product?"
+ - Competitive comparison: "Given that [competitor] already does [feature], why would you switch to this?"
+ - Scope boundary reaction: "[Feature] is explicitly not in v1. Does that change your assessment?"
+ - Worst case: "If [known weakness] turns out to be worse than expected, would you still use this?"
+ - Verdict: "Honest assessment -- would you use this, pay for this, recommend this to a colleague?"
+
+ **What to listen for:**
+ - Dealbreakers that the product concept does not acknowledge or mitigate
+ - Competitive advantages that the product concept underestimates
+ - Scope boundary decisions that the persona views as fatal omissions
+ - The persona's honest adoption verdict and the reasoning behind it
+ - Whether the casting overlay's specific concerns (pragmatist/skeptic/power user) surface unique failure modes
+
+ **Expected length:** 2-4 paragraphs from the persona per prompt, ending with a clear verdict. 2-3 prompts per phase.
+
+ **Product strategist role:** Between Phase 2 and Phase 3, the product strategist reviews Phase 2 responses and identifies the weakest points to probe. The strategist formulates questions that force the persona to confront known limitations rather than politely sidestep them.
+
+ ---
+
+ ## Orchestration Guide
+
+ ### Invocation Sequence (Phase-Parallel Waves)
+
+ ```
+ Wave 1 -- Phase 1 (all 3 personas in parallel):
+   Persona 1: Product strategist → Persona impersonator (Phase 1)
+   Persona 2: Product strategist → Persona impersonator (Phase 1)
+   Persona 3: Product strategist → Persona impersonator (Phase 1)
+   [Wait for all 3 to complete]
+
+ Wave 2 -- Phase 2 (all 3 personas in parallel):
+   Persona 1: Product strategist → Persona impersonator (Phase 2)
+   Persona 2: Product strategist → Persona impersonator (Phase 2)
+   Persona 3: Product strategist → Persona impersonator (Phase 2)
+   [Wait for all 3 to complete]
+
+ Wave 3 -- Phase 3 (all 3 personas in parallel):
+   Persona 1: Product strategist → Persona impersonator (Phase 3)
+   Persona 2: Product strategist → Persona impersonator (Phase 3)
+   Persona 3: Product strategist → Persona impersonator (Phase 3)
+   [Wait for all 3 to complete]
+ ```
+
+ Total: 3 waves x 3 personas x 2 agents = 18 invocations (9 impersonator + 9 strategist)
+ Per wave: 3 parallel persona-pairs, each with a strategist then impersonator (sequential within the pair, parallel across pairs)
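The invocation sequence above can be sketched as a simple orchestration loop. This is an illustrative sketch only: `invokeStrategist` and `invokeImpersonator` are hypothetical stand-ins for the real agent invocations, not part of the plugin's API, and are stubbed here so the sketch is self-contained.

```javascript
// Hypothetical sketch of the phase-parallel wave loop described above.
const PHASES = ["reaction", "probing", "stress"];

// Stub agent calls for illustration; real invocations would dispatch to
// the product-strategist and persona-impersonator agents.
const invokeStrategist = async ({ persona, phase }) =>
  [`Question for ${persona.name} in phase "${phase}"`];
const invokeImpersonator = async ({ persona, phase }) =>
  `${persona.name} response (${phase})`;

async function runInterviews(personas) {
  // Per-persona accumulated context: no cross-persona sharing.
  const history = new Map(personas.map((p) => [p.name, []]));

  for (const phase of PHASES) {
    // One wave: all personas run the same phase in parallel.
    await Promise.all(
      personas.map(async (persona) => {
        const prior = history.get(persona.name);
        // Strategist first, then impersonator: sequential within the
        // pair, parallel across pairs.
        const questions = await invokeStrategist({ persona, phase, prior });
        const response = await invokeImpersonator({ persona, phase, questions, prior });
        prior.push({ phase, questions, response });
      })
    );
    // Wave barrier: advance only after every persona finishes this phase.
  }
  return history; // 3 personas x 3 phases = 9 impersonator entries
}
```

Three personas through three waves yields the 9 impersonator invocations (plus 9 strategist invocations) counted above.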
+
+ ### Context Passing Between Waves
+
+ Each persona-impersonator invocation receives:
+ - The persona profile (constant across all 3 phases)
+ - The casting overlay (constant across all 3 phases)
+ - The interview phase identifier (reaction/probing/stress)
+ - The specific questions for this phase (from product strategist)
+ - Previous phase responses for **this persona only** (accumulated -- Phase 2 receives this persona's Phase 1 responses, Phase 3 receives this persona's Phase 1+2 responses)
+
+ Each product-strategist invocation receives:
+ - The full product concept
+ - The persona profile and casting overlay
+ - All previous phase responses **for this persona only**
+ - The current phase's goal and question types (from this protocol)
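As a concrete illustration, the two payloads above can be sketched as plain objects for one persona entering Phase 2. All field names here are assumptions made for this sketch; the skill does not define a fixed schema.

```javascript
// Illustrative payload shapes; field names are assumptions, not a schema.
const impersonatorInput = {
  personaProfile: { name: "Sarah", role: "Senior PM" }, // constant across all 3 phases
  castingOverlay: "skeptic",                            // constant across all 3 phases
  phase: "probing",                                     // reaction | probing | stress
  questions: ["Does this solve the problem you described earlier?"],
  // Accumulated responses for THIS persona only -- nothing cross-persona.
  priorResponses: [{ phase: "reaction", response: "..." }],
};

const strategistInput = {
  productConcept: { name: "...", pillars: ["..."] },   // full concept
  personaProfile: impersonatorInput.personaProfile,
  castingOverlay: impersonatorInput.castingOverlay,
  priorResponses: impersonatorInput.priorResponses,    // this persona only
  phaseGoal: "probe adoption barriers and feature gaps",
};
```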
+
+ ### Phase-Parallel Execution
+
+ All 3 personas run the same phase in parallel before the wave advances to the next phase. Each persona's interview context is independent -- no cross-persona information is shared between parallel invocations within a wave. Cross-persona comparison happens at the synthesis stage (Step 5 of the SKILL), not during interviews.
package/plugins/arn-spark/skills/arn-spark-stress-interview/references/interview-report-template.md
@@ -0,0 +1,165 @@
+ # Interview Report Template
+
+ Template for the synthetic user interview stress test report. This document is consumed by the `arn-spark-stress-interview` skill when assembling the final report from interview transcripts and strategist synthesis.
+
+ ## Instructions for arn-spark-stress-interview
+
+ When populating this template:
+
+ - Every section below MUST appear in the output, even if an interview was skipped due to agent failure
+ - Replace all bracketed placeholders with concrete content from interview transcripts and strategist synthesis
+ - Per-Persona Findings should quote or closely paraphrase the persona's actual responses -- do not summarize away the specificity
+ - Synthesized Themes should identify cross-persona patterns, not repeat individual findings
+ - The Recommended Concept Updates table MUST use the standardized schema exactly as shown
+ - Unresolved Questions should capture questions that emerged from interviews but cannot be answered without real user data or domain expertise
+ - Full Transcript includes all 9 persona-impersonator responses organized by persona and phase
+ - If a persona interview was partially completed (e.g., agent failure after Phase 2), include what was captured and note the gap
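Since every section must appear even when an interview fails, the assembled report can be sanity-checked mechanically. A minimal sketch of such a check -- the helper name is hypothetical, and the section titles are the mandatory top-level headings from the template in this document:

```javascript
// Hypothetical completeness check: verify a populated report still
// contains every mandatory top-level section from the template.
const REQUIRED_SECTIONS = [
  "Executive Summary",
  "Per-Persona Findings",
  "Synthesized Themes",
  "Recommended Concept Updates",
  "Unresolved Questions",
  "Full Transcript",
];

// Returns the titles of any mandatory sections missing from the report.
function missingSections(reportMarkdown) {
  return REQUIRED_SECTIONS.filter(
    (title) => !reportMarkdown.includes(`## ${title}`)
  );
}
```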
+
+ ---
+
+ ## Template
+
+ ```markdown
+ # Synthetic User Interview Report
+
+ **Product:** [product name]
+ **Date:** [ISO 8601 date]
+ **Personas interviewed:** [Persona 1 name] (Mould: [archetype], Overlay: [casting]), [Persona 2 name] (Mould: [archetype], Overlay: [casting]), [Persona 3 name] (Mould: [archetype], Overlay: [casting])
+
+ ---
+
+ ## Executive Summary
+
+ [3-5 sentences summarizing the overall interview findings. What was the dominant sentiment? Where did the product concept hold up? Where did it crack? What was the most surprising finding?]
+
+ ---
+
+ ## Per-Persona Findings
+
+ ### [Persona 1 Name] -- [Archetype Label] / [Casting Overlay]
+
+ **Key Reactions:**
+ - Phase 1 (Blind): [1-2 sentences on whether they recognized and cared about the problem]
+ - Phase 2 (Reveal): [1-2 sentences on their reaction to the full product concept]
+ - Phase 3 (Stress): [1-2 sentences on their dealbreaker assessment]
+
+ **Adoption Barriers:**
+ - [Barrier 1 -- specific to this persona's context and casting overlay]
+ - [Barrier 2]
+ - [Barrier 3 if applicable]
+
+ **Strongest Objections:**
+ - [Objection 1 -- quoted or closely paraphrased from transcript]
+ - [Objection 2]
+
+ **What Resonated:**
+ - [Element 1 -- what the persona responded positively to, if anything]
+ - [Element 2 if applicable]
+
+ **Verdict:** [The persona's honest adoption verdict from Phase 3 -- would they use it, pay for it, recommend it? 1-2 sentences.]
+
+ ---
+
+ ### [Persona 2 Name] -- [Archetype Label] / [Casting Overlay]
+
+ [Same structure as Persona 1]
+
+ ---
+
+ ### [Persona 3 Name] -- [Archetype Label] / [Casting Overlay]
+
+ [Same structure as Persona 1]
+
+ ---
+
+ ## Synthesized Themes
+
+ Cross-persona patterns identified by the product strategist from all 9 interview responses.
+
+ ### Theme 1: [Theme Title]
+
+ [Description of the pattern observed across multiple personas. Which personas exhibited this? How does it manifest differently through different casting overlays? What does this mean for the product concept?]
+
+ ### Theme 2: [Theme Title]
+
+ [Same structure]
+
+ ### Theme 3: [Theme Title]
+
+ [Same structure]
+
+ [3-5 themes total]
+
+ ---
+
+ ## Recommended Concept Updates
+
+ | # | Section | Current State | Recommended Change | Type | Rationale |
+ |---|---------|---------------|--------------------|------|-----------|
+ | 1 | [product concept section] | [what the concept currently says or assumes] | [specific change recommended] | [Add/Modify/Remove] | [which interview findings support this -- reference persona names and phases] |
+ | 2 | ... | ... | ... | ... | ... |
+
+ ---
+
+ ## Unresolved Questions
+
+ | # | Section | Question | Options | Assessment |
+ |---|---------|----------|---------|------------|
+ | 1 | [product concept section] | [question that emerged from interviews but cannot be answered without real user data] | [possible approaches to answering this] | [preliminary assessment based on interview data] |
+ | 2 | ... | ... | ... | ... |
+
+ ---
+
+ ## Full Transcript
+
+ ### [Persona 1 Name] -- [Archetype Label] / [Casting Overlay]
+
+ #### Phase 1: Initial Reaction
+
+ **Questions asked:**
+ [questions from product strategist]
+
+ **Response:**
+ [full persona-impersonator response]
+
+ #### Phase 2: Deep Probing
+
+ **Questions asked:**
+ [questions from product strategist]
+
+ **Response:**
+ [full persona-impersonator response]
+
+ #### Phase 3: Stress Test
+
+ **Questions asked:**
+ [questions from product strategist]
+
+ **Response:**
+ [full persona-impersonator response]
+
+ ---
+
+ ### [Persona 2 Name] -- [Archetype Label] / [Casting Overlay]
+
+ [Same structure as Persona 1 transcript]
+
+ ---
+
+ ### [Persona 3 Name] -- [Archetype Label] / [Casting Overlay]
+
+ [Same structure as Persona 1 transcript]
+ ```
+
+ ---
+
+ ## Section Guidance
+
+ | Section | Source | Depth |
+ |---------|--------|-------|
+ | Executive Summary | Synthesized by skill from strategist output and interview transcripts | 3-5 sentences, high-level patterns and surprises |
+ | Per-Persona Findings | Extracted from persona-impersonator responses across 3 phases | Per persona: key reactions (3 phases), adoption barriers (2-3), strongest objections (1-2), what resonated (1-2), verdict (1-2 sentences) |
+ | Synthesized Themes | Product strategist synthesis of all 9 interview responses | 3-5 themes, each with cross-persona pattern analysis |
+ | Recommended Concept Updates | Product strategist output using standardized table schema | One row per recommendation, Type must be Add/Modify/Remove, rationale must reference specific interview findings |
+ | Unresolved Questions | Identified during strategist synthesis as questions requiring real data | One row per question, must specify which product concept section is affected and possible resolution approaches |
+ | Full Transcript | Raw persona-impersonator responses, organized by persona and phase | Complete responses -- no summarization or editing |
package/plugins/arn-spark/skills/arn-spark-stress-interview/references/persona-casting-spec.md
@@ -0,0 +1,138 @@
+ # Persona Casting Spec
+
+ Defines the 3 casting overlays used by the `arn-spark-stress-interview` skill to create adversarial interview perspectives. Each overlay is applied on top of a base persona profile to focus the interview through a specific critical lens.
+
+ ## Overview
+
+ Casting overlays do not replace the persona's identity -- they focus the persona's natural concerns toward specific categories of product weakness. A Pragmatist version of "Sarah, the senior product manager" evaluates differently from a Skeptic version of the same person, even though both share the same background, goals, and pain points.
+
+ The 3 overlays are designed to cover complementary failure surfaces:
+ - **Pragmatist:** Practical adoption and workflow integration failures
+ - **Skeptic:** Trust, privacy, and systemic risk failures
+ - **Power User:** Depth, scalability, and customization failures
+
+ Together, they approximate the range of real user reactions that a product concept would face in the market.
+
+ ---
+
+ ## Overlay 1: Pragmatist
+
+ ### Adversarial Lens
+
+ The Pragmatist evaluates everything through the lens of practical adoption. Their time is valuable, their current workflow is functional (if imperfect), and they need concrete evidence that switching is worth the disruption. They are not hostile to new tools -- they are hostile to wasted time.
+
+ ### How It Modifies Base Persona Responses
+
+ - Amplifies the persona's existing concerns about workflow disruption and migration cost
+ - Foregrounds time-to-value calculations: "How long until this actually helps me?"
+ - Emphasizes comparison with current workarounds: "My spreadsheet is ugly but it works"
+ - Reduces patience for abstract benefits: "Show me the before/after for my specific Tuesday morning"
+ - Increases sensitivity to onboarding friction and learning curves
+
+ ### Weakness Categories Surfaced
+
+ - Migration and onboarding friction that the product concept underestimates
+ - Workflow integration gaps (the product works in isolation but not in the user's existing toolchain)
+ - Time-to-value disconnect (the product requires significant setup before delivering value)
+ - Reliability concerns (the product needs to work consistently, not just in demos)
+ - Hidden costs of switching (not just price -- cognitive load, team retraining, data migration)
+
+ ### Casting Instructions for Persona Architect
+
+ When generating a persona with the Pragmatist overlay for instantiation mode:
+ 1. Take the base persona mould and generate a concrete instance
+ 2. Emphasize the persona's current tool stack and daily workflow in the profile
+ 3. Include specific details about how the persona currently solves the problem (even poorly)
+ 4. Note the persona's switching history: what tools they have adopted and abandoned, and why
+ 5. Set the frustration threshold around workflow disruption rather than abstract concerns
+
+ ---
+
+ ## Overlay 2: Skeptic
+
+ ### Adversarial Lens
+
+ The Skeptic evaluates everything through the lens of trust, privacy, and systemic risk. They have been burned before by products that over-promised and under-delivered, or that handled their data carelessly. They are not paranoid -- they are experienced. Marketing language makes them more suspicious, not less.
+
+ ### How It Modifies Base Persona Responses
+
+ - Amplifies the persona's existing concerns about data handling, privacy, and security
+ - Foregrounds trust establishment: "Who built this? What is their track record?"
+ - Emphasizes vendor lock-in and exit strategies: "What happens if this company shuts down?"
+ - Reduces trust in marketing claims: "Show me the architecture, not the landing page"
+ - Increases sensitivity to vague privacy policies and unclear data ownership
+
+ ### Weakness Categories Surfaced
+
+ - Trust and credibility gaps in the product concept's positioning
+ - Privacy and data handling assumptions that are stated but not substantiated
+ - Vendor lock-in risks that the product concept minimizes or ignores
+ - Failure precedents: "Why would this succeed where [competitor] failed?"
+ - Security architecture decisions that are deferred or hand-waved
+
+ ### Casting Instructions for Persona Architect
+
+ When generating a persona with the Skeptic overlay for instantiation mode:
+ 1. Take the base persona mould and generate a concrete instance
+ 2. Emphasize the persona's past experiences with tool adoption failures
+ 3. Include specific details about data sensitivity in the persona's domain
+ 4. Note the persona's trust signals: what makes them trust a new tool, what triggers distrust
+ 5. Set the frustration threshold around trust violations rather than usability issues
+
+ ---
+
+ ## Overlay 3: Power User
+
+ ### Adversarial Lens
+
+ The Power User evaluates everything through the lens of depth, customization, and scalability. They push every tool to its limits and need to know what happens at the edges. They are not looking for the simplest tool -- they are looking for the most capable one that respects their expertise.
+
+ ### How It Modifies Base Persona Responses
+
+ - Amplifies the persona's existing need for control, customization, and advanced features
+ - Foregrounds API access, automation, and integration capabilities
+ - Emphasizes performance ceilings and scalability limits: "What happens at 10x?"
+ - Reduces tolerance for opinionated defaults without escape hatches
+ - Increases sensitivity to edge cases, error handling, and recovery mechanisms
+
+ ### Weakness Categories Surfaced
+
+ - Customization and configuration limitations that constrain advanced workflows
101
+ - API and integration gaps that prevent embedding the product in existing pipelines
102
+ - Performance ceilings and scalability limits that the product concept does not address
103
+ - Edge case handling: what happens when the happy path breaks?
104
+ - Advanced workflow support: can the product grow with the user or does it plateau?
105
+
106
+ ### Casting Instructions for Persona Architect
107
+
108
+ When generating a persona with the Power User overlay for instantiation mode:
109
+ 1. Take the base persona mould and generate a concrete instance
110
+ 2. Emphasize the persona's technical depth and existing integrations
111
+ 3. Include specific details about the persona's advanced use cases and automation needs
112
+ 4. Note the persona's history with outgrowing tools: what tools they have abandoned because of limits
113
+ 5. Set the frustration threshold around capability ceilings rather than initial usability
114
+
115
+ ---
116
+
117
+ ## Instantiation Workflow
118
+
119
+ The interview skill uses the following process to generate cast personas:
120
+
121
+ 1. **Read persona moulds** from the product concept's Target Personas section
122
+ 2. **Select 3 moulds** (or use all if 3 or fewer exist). If more than 3 moulds exist, select the 3 that are most diverse in their adoption posture and technical sophistication.
123
+ 3. **For each mould, invoke the `arn-spark-persona-architect` agent in instantiation mode** via the Task tool, passing the model from `.arness/agent-models/spark.md` as the `model` parameter (see `plugins/arn-spark/skills/arn-spark-ensure-config/references/ensure-config.md` "Dispatch convention" for fallback). Context:
124
+ - The mould definition (abstracted profile)
125
+ - The casting overlay specification (from this document)
126
+ - The product concept summary (for domain grounding)
127
+ 4. **Receive back** a fully detailed concrete persona with the casting overlay baked into the profile
128
+ 5. **Pass the cast persona** to `arn-spark-persona-impersonator` for interview phases
129
+
130
+ ### Mould-to-Overlay Assignment
131
+
132
+ Each mould gets exactly one overlay. Assignment follows this priority:
133
+
134
+ 1. If the mould's boundary conditions or variation axes suggest a natural fit (e.g., a mould with "risk-averse" in its personality spectrum maps well to Skeptic), use that alignment.
135
+ 2. If no natural fit exists, assign overlays to maximize coverage: the mould whose persona is most likely to challenge practical adoption gets Pragmatist, the mould most sensitive to trust gets Skeptic, and the mould with the deepest technical range gets Power User.
136
+ 3. If all moulds are similar, assign overlays round-robin to ensure all 3 lenses are represented.
137
+
138
+ The goal is not to assign the "matching" overlay but to ensure all 3 adversarial perspectives are represented across the interview set.
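The priority order above can be sketched as a small helper. This is an illustrative sketch only: the `name`/`traits` mould shape and the `naturalFit` keyword heuristic are assumptions for the example, not part of the plugin's API, and priority 2 (coverage-maximizing assignment) is left to judgment rather than coded.

```javascript
// Sketch of overlay assignment: natural fits first (priority 1),
// then round-robin over the remaining overlays (priority 3).
const OVERLAYS = ["Pragmatist", "Skeptic", "Power User"];

// Hypothetical keyword heuristic for priority 1 ("natural fit").
function naturalFit(mould) {
  const traits = mould.traits.join(" ").toLowerCase();
  if (traits.includes("risk-averse") || traits.includes("privacy")) return "Skeptic";
  if (traits.includes("automation") || traits.includes("expert")) return "Power User";
  if (traits.includes("workflow") || traits.includes("busy")) return "Pragmatist";
  return null;
}

function assignOverlays(moulds) {
  // Assumes exactly 3 moulds, as selected in step 2 of the workflow.
  const remaining = new Set(OVERLAYS);
  const assignment = new Map();
  // Priority 1: honor natural fits where the overlay is still unclaimed.
  for (const mould of moulds) {
    const fit = naturalFit(mould);
    if (fit && remaining.has(fit)) {
      assignment.set(mould.name, fit);
      remaining.delete(fit);
    }
  }
  // Priority 3: round-robin the leftover overlays so all 3 lenses appear.
  const leftovers = [...remaining];
  for (const mould of moulds) {
    if (!assignment.has(mould.name)) {
      assignment.set(mould.name, leftovers.shift());
    }
  }
  return assignment;
}
```

The invariant the sketch preserves is the one stated above: every run ends with each of the 3 overlays assigned exactly once.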
@@ -0,0 +1,181 @@
1
+ ---
2
+ name: arn-spark-stress-premortem
3
+ description: >-
4
+ This skill should be used when the user says "pre-mortem", "premortem",
5
+ "risk analysis", "stress premortem", "failure analysis", "what could go wrong",
6
+ "pre mortem", "investigate failure", "failure modes", or wants to stress-test a
7
+ product concept by applying Gary Klein's pre-mortem methodology to identify
8
+ hypothetical failure root causes, early warning signals, and mitigation
9
+ strategies. Produces a pre-mortem report with 3 root causes across distinct
10
+ failure dimensions and recommended concept updates.
11
+ version: 1.0.0
12
+ ---
13
+
14
+ # Arness Spark Stress Pre-Mortem
15
+
16
+ Stress-test a product concept using Gary Klein's pre-mortem methodology. Instead of asking "what could go wrong?" (which invites optimism bias), the pre-mortem declares that the product has already launched and failed, then investigates why.
17
+
18
+ The process works like this:
19
+ 1. **Accept the premise:** It is 12 months after launch. The product was shut down today.
20
+ 2. **Investigate:** A forensic investigator agent works backward from the failure to identify 3 distinct root causes -- a core experience flaw (A), a trust/security blind spot (B), and a target audience assumption error (C).
21
+ 3. **Assess:** Each root cause gets a causal chain, early warning signals, mitigation strategies, and a likelihood/severity rating.
22
+ 4. **Prioritize:** Root causes are mapped on a risk priority matrix to identify what needs immediate attention.
23
+
24
+ This technique surfaces failure modes that optimism obscures. The product concept is read but never modified -- all recommendations are captured in the pre-mortem report for later review.
25
+
26
+ ## Prerequisites
27
+
28
+ ### Configuration Check
29
+
30
+ 1. Read the project's `arness.md` and check for a `## Arness` section
31
+ 2. If found, extract the configured **Vision directory** and **Reports directory** paths
32
+ 3. If no `## Arness` section exists or Arness Spark fields are missing, inform the user: "Arness Spark is not configured for this project yet. Run `/arn-brainstorming` to get started -- it will set everything up automatically." Do not proceed without it.
33
+ 4. If the Reports directory does not exist, create it with `mkdir -p <reports-dir>/stress-tests/`
34
+
35
+ ### Data Availability
36
+
37
+ | Artifact | Status | Location | Fallback |
38
+ |----------|--------|----------|----------|
39
+ | Product concept | REQUIRED | `<vision-dir>/product-concept.md` | Cannot proceed without it -- suggest running `/arn-spark-discover` |
40
+ | Product pillars | ENRICHES | Product Pillars section of product concept | Investigation proceeds but pillar-as-evidence analysis is less targeted |
41
+ | Competitive landscape | ENRICHES | Competitive Landscape section of product concept | Root Cause C (target audience misread) is less grounded in competitive dynamics |
42
+ | Target personas | ENRICHES | Target Personas section of product concept | Root Cause A and C are less grounded in persona-specific failure scenarios |
43
+
44
+ **Product concept fallback:**
45
+
46
+ If no product concept exists:
47
+
48
+ Ask the user: **"No product concept found. The pre-mortem needs a product concept to investigate. How would you like to proceed?"**
49
+ 1. Run `/arn-spark-discover` to create a product concept first
50
+ 2. Describe the product now (I will conduct the pre-mortem from your description)
51
+ 3. Skip the pre-mortem stress test
52
+
53
+ If the user chooses option 2, collect a product description and proceed with a reduced-fidelity investigation (note in the report that the investigation was based on a verbal description rather than a full product concept).
54
+
55
+ ## Workflow
56
+
57
+ ### Step 1: Load References
58
+
59
+ Load the pre-mortem protocol and report template:
60
+ > Read `${CLAUDE_PLUGIN_ROOT}/skills/arn-spark-stress-premortem/references/premortem-protocol.md`
61
+ > Read `${CLAUDE_PLUGIN_ROOT}/skills/arn-spark-stress-premortem/references/premortem-report-template.md`
62
+
63
+ ### Step 2: Read Product Concept and Extract Context
64
+
65
+ Read the product concept from `<vision-dir>/product-concept.md`. Extract:
66
+ - Full product concept (the investigator needs the complete document)
67
+ - Product pillars (used as forensic evidence -- pillars often become failure vectors)
68
+ - Competitive landscape (grounds Root Cause C in real competitive dynamics)
69
+ - Target personas (grounds Root Causes A and C in specific user scenarios)
70
+ - Core experience (primary investigation target for Root Cause A)
71
+ - Trust & security model (primary investigation target for Root Cause B)
72
+
73
+ ### Step 3: Invoke Forensic Investigator
74
+
75
+ Invoke the `arn-spark-forensic-investigator` agent via the Task tool, passing the model from `.arness/agent-models/spark.md` as the `model` parameter (see `plugins/arn-spark/skills/arn-spark-ensure-config/references/ensure-config.md` "Dispatch convention" for fallback). Context:
76
+
77
+ --- PRODUCT CONCEPT ---
78
+ [full product concept document]
79
+ --- END PRODUCT CONCEPT ---
80
+
81
+ --- PRODUCT PILLARS ---
82
+ [product pillars section -- these are forensic evidence, not goals to protect]
83
+ --- END PRODUCT PILLARS ---
84
+
85
+ --- COMPETITIVE LANDSCAPE ---
86
+ [competitive landscape section, or "Not available" if absent]
87
+ --- END COMPETITIVE LANDSCAPE ---
88
+
89
+ --- TARGET PERSONAS ---
90
+ [target personas section from the product concept, or "Not available" if absent]
91
+ --- END TARGET PERSONAS ---
92
+
93
+ --- INVESTIGATION TASK ---
94
+ Standard investigation: Generate 3 root causes across distinct failure dimensions:
95
+ - Root Cause A: Core Experience Flaw leading to user churn
96
+ - Root Cause B: Trust & Security Blind Spot leading to breach or exodus
97
+ - Root Cause C: Target Audience Assumption that was wrong
98
+
99
+ For each root cause: failure narrative, causal chain (4 links), early warning signals (3), mitigation strategies (3), likelihood assessment, severity assessment.
100
+
101
+ Include a Recommended Concept Updates table and Unresolved Questions section.
102
+ --- END INVESTIGATION TASK ---
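The delimited context above can be assembled mechanically before dispatch. The function below is a hypothetical sketch (its name and input shape are not the plugin's actual API); it only mirrors the delimiter convention shown in this step, including the "Not available" fallback for absent sections:

```javascript
// Build the investigator's context string from the extracted sections.
// Missing sections fall back to "Not available", matching the templates above.
function buildInvestigatorContext({ concept, pillars, landscape, personas, task }) {
  const block = (label, body) =>
    `--- ${label} ---\n${body ?? "Not available"}\n--- END ${label} ---`;
  return [
    block("PRODUCT CONCEPT", concept),
    block("PRODUCT PILLARS", pillars),
    block("COMPETITIVE LANDSCAPE", landscape),
    block("TARGET PERSONAS", personas),
    block("INVESTIGATION TASK", task),
  ].join("\n\n");
}
```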
103
+
104
+ ### Step 4: Validate Investigation Quality
105
+
106
+ Review the forensic investigator's output for quality:
107
+
108
+ 1. **3 distinct root causes:** Each root cause must have a distinct causal chain. If two root causes share the same underlying mechanism, they are one root cause, not two.
109
+ 2. **Specificity:** Root causes must reference specific elements of the product concept (features, claims, personas, pillars). Generic failures that could apply to any product are rejected.
110
+ 3. **Adversarial depth:** Root causes must be genuinely adversarial -- not soft, hedged, or diplomatic. Each should make someone wince and say "that could actually happen."
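The mechanical parts of these checks -- exactly 3 root causes, distinct first causal links, 4-link chains -- can be approximated as below. The `title`/`causalChain` field names are assumptions for the sketch; the specificity and adversarial-depth judgments in checks 2 and 3 remain a manual review.

```javascript
// Return a list of quality issues; an empty list means the mechanical
// checks passed. Two root causes sharing the same first causal link
// count as one root cause (check 1).
function validateRootCauses(rootCauses) {
  const issues = [];
  if (rootCauses.length !== 3) {
    issues.push(`expected 3 root causes, got ${rootCauses.length}`);
  }
  const firstLinks = rootCauses.map((rc) => rc.causalChain[0].trim().toLowerCase());
  if (new Set(firstLinks).size !== firstLinks.length) {
    issues.push("two root causes share the same first causal link -- they are one root cause");
  }
  for (const rc of rootCauses) {
    if (rc.causalChain.length < 4) {
      issues.push(`"${rc.title}" causal chain has fewer than 4 links`);
    }
  }
  return issues;
}
```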
111
+
112
+ **If quality is insufficient:** Retry with an explicit adversarial instruction:
113
+
114
+ "The investigation was too soft. Requirements:
115
+ - Each root cause must quote or reference a specific claim from the product concept
116
+ - Each causal chain must have 4 distinct links, not 2 restated
117
+ - Each early warning signal must include a specific metric threshold or observable behavior
118
+ - Failure narratives must describe what users actually experienced, not abstract disappointment"
119
+
120
+ If the retry also fails quality checks, proceed with the best available output and note the quality gap in the report.
121
+
122
+ ### Step 5: Draft Recommended Concept Updates
123
+
124
+ Review the investigator's Recommended Concept Updates table. Ensure:
125
+ - Each recommendation uses the 6-column schema: `# | Section | Current State | Recommended Change | Type | Rationale`
126
+ - Each recommendation traces to a specific root cause
127
+ - Type column uses Add/Modify/Remove
128
+ - Every root cause placed in the "Address immediately" or "Mitigate" cell of the risk priority matrix is covered by at least one recommendation
129
+
130
+ If the investigator's recommendations are incomplete, supplement from the mitigation strategies.
131
+
132
+ ### Step 6: Assemble and Write Report
133
+
134
+ Using the pre-mortem report template:
135
+ 1. Populate all sections with investigator output
136
+ 2. Construct the Risk Priority Matrix from likelihood/severity assessments
137
+ 3. Write the report to `<reports-dir>/stress-tests/premortem-report.md`
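One way to sketch step 2 is a cell-mapping function. The low/medium/high rating scale and the "Monitor"/"Accept" cell names are assumptions for this example ("Address immediately" and "Mitigate" come from Step 5); the actual matrix layout is defined by the report template.

```javascript
// Map a likelihood/severity pair onto a risk priority matrix cell.
function priorityCell(likelihood, severity) {
  const rank = { low: 0, medium: 1, high: 2 };
  const l = rank[likelihood];
  const s = rank[severity];
  if (l >= 1 && s >= 1) {
    // Both at least medium: high/high is the top-right corner.
    return l === 2 && s === 2 ? "Address immediately" : "Mitigate";
  }
  // One dimension is low: watch it only if the other is high.
  return l === 2 || s === 2 ? "Monitor" : "Accept";
}

// Group root cause ids by matrix cell for the report.
function buildRiskMatrix(rootCauses) {
  const matrix = {};
  for (const rc of rootCauses) {
    const cell = priorityCell(rc.likelihood, rc.severity);
    (matrix[cell] ??= []).push(rc.id);
  }
  return matrix;
}
```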
138
+
139
+ Present a summary to the user:
140
+
141
+ "Pre-mortem investigation complete. Report saved to `[path]`.
142
+
143
+ **Failure premise:** [Product name] launched and was shut down 12 months later.
144
+
145
+ **Root causes identified:**
146
+ - **A -- [Title]:** [1-sentence summary] (Likelihood: [L], Severity: [S])
147
+ - **B -- [Title]:** [1-sentence summary] (Likelihood: [L], Severity: [S])
148
+ - **C -- [Title]:** [1-sentence summary] (Likelihood: [L], Severity: [S])
149
+
150
+ **Risk priority:** [N] root causes require immediate attention, [N] require mitigation
151
+ **Recommended concept updates:** [N] recommendations ([X] Add, [Y] Modify, [Z] Remove)
152
+ **Unresolved questions:** [N]
153
+
154
+ This report will be used by `/arn-spark-concept-review` to propose changes to the product concept."
155
+
156
+ ## Agent Invocation Guide
157
+
158
+ | Situation | Agent | Mode/Context |
159
+ |-----------|-------|--------------|
160
+ | Standard pre-mortem investigation | `arn-spark-forensic-investigator` | Full product concept with pillars and competitive landscape |
161
+ | Targeted deep-dive (future) | `arn-spark-forensic-investigator` | Specific failure scenario for extended analysis |
162
+
163
+ ## Error Handling
164
+
165
+ - **Forensic investigator produces soft/hedged failures:** Retry with explicit adversarial instruction emphasizing that failures must be specific, vivid, and grounded in product concept details. Include: "Generic failures like 'users did not find it useful' are worthless -- explain exactly why and how."
166
+
167
+ - **Forensic investigator produces fewer than 3 root causes:** Retry specifying which failure dimension is missing and providing additional context for that dimension. If retry still produces fewer than 3, proceed with available output and note the gap.
168
+
169
+ - **Forensic investigator produces overlapping root causes:** Retry specifying that each root cause must have a distinct causal mechanism. If two root causes share the same first link in the causal chain, they are one root cause.
170
+
171
+ - **Agent invocation fails entirely:** Retry once with a simplified prompt. If retry fails:
172
+ Ask the user: **"Agent invocation failed. How would you like to proceed?"**
173
+ 1. Retry
174
+ 2. Skip this step
175
+ 3. Abort
176
+
177
+ ## Constraints
178
+
179
+ - **Read-only with respect to product-concept.md.** The pre-mortem skill reads the product concept but NEVER modifies it. All recommendations are captured in the pre-mortem report.
180
+ - **3 root causes across distinct dimensions.** The standard investigation always targets 3 failure dimensions (core experience, trust/security, audience). This ensures coverage across the most common failure modes.
181
+ - **Report overwrites on re-run.** If `premortem-report.md` already exists, it is overwritten. Git provides history.