@fredcallagan/arn-spark 5.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (130)
  1. package/.claude-plugin/plugin.json +9 -0
  2. package/.opencode/plugins/arn-spark.js +272 -0
  3. package/package.json +17 -0
  4. package/plugins/arn-spark/.claude-plugin/plugin.json +9 -0
  5. package/plugins/arn-spark/LICENSE +21 -0
  6. package/plugins/arn-spark/README.md +25 -0
  7. package/plugins/arn-spark/agents/arn-spark-brand-strategist.md +299 -0
  8. package/plugins/arn-spark/agents/arn-spark-dev-env-builder.md +228 -0
  9. package/plugins/arn-spark/agents/arn-spark-doctor.md +92 -0
  10. package/plugins/arn-spark/agents/arn-spark-forensic-investigator.md +181 -0
  11. package/plugins/arn-spark/agents/arn-spark-market-researcher.md +232 -0
  12. package/plugins/arn-spark/agents/arn-spark-marketing-pm.md +225 -0
  13. package/plugins/arn-spark/agents/arn-spark-persona-architect.md +259 -0
  14. package/plugins/arn-spark/agents/arn-spark-persona-impersonator.md +183 -0
  15. package/plugins/arn-spark/agents/arn-spark-product-strategist.md +191 -0
  16. package/plugins/arn-spark/agents/arn-spark-prototype-builder.md +497 -0
  17. package/plugins/arn-spark/agents/arn-spark-scaffolder.md +228 -0
  18. package/plugins/arn-spark/agents/arn-spark-spike-runner.md +209 -0
  19. package/plugins/arn-spark/agents/arn-spark-style-capture.md +196 -0
  20. package/plugins/arn-spark/agents/arn-spark-tech-evaluator.md +229 -0
  21. package/plugins/arn-spark/agents/arn-spark-ui-interactor.md +235 -0
  22. package/plugins/arn-spark/agents/arn-spark-use-case-writer.md +280 -0
  23. package/plugins/arn-spark/agents/arn-spark-ux-judge.md +215 -0
  24. package/plugins/arn-spark/agents/arn-spark-ux-specialist.md +200 -0
  25. package/plugins/arn-spark/agents/arn-spark-visual-sketcher.md +285 -0
  26. package/plugins/arn-spark/agents/arn-spark-visual-test-engineer.md +224 -0
  27. package/plugins/arn-spark/references/copilot-tools.md +62 -0
  28. package/plugins/arn-spark/skills/arn-brainstorming/SKILL.md +520 -0
  29. package/plugins/arn-spark/skills/arn-brainstorming/references/add-feature-flow.md +155 -0
  30. package/plugins/arn-spark/skills/arn-spark-arch-vision/SKILL.md +226 -0
  31. package/plugins/arn-spark/skills/arn-spark-arch-vision/references/architecture-vision-template.md +153 -0
  32. package/plugins/arn-spark/skills/arn-spark-arch-vision/references/technology-evaluation-guide.md +86 -0
  33. package/plugins/arn-spark/skills/arn-spark-clickable-prototype/SKILL.md +471 -0
  34. package/plugins/arn-spark/skills/arn-spark-clickable-prototype/references/clickable-prototype-criteria.md +65 -0
  35. package/plugins/arn-spark/skills/arn-spark-clickable-prototype/references/journey-template.md +62 -0
  36. package/plugins/arn-spark/skills/arn-spark-clickable-prototype/references/review-report-template.md +75 -0
  37. package/plugins/arn-spark/skills/arn-spark-clickable-prototype/references/showcase-capture-guide.md +213 -0
  38. package/plugins/arn-spark/skills/arn-spark-clickable-prototype-teams/SKILL.md +642 -0
  39. package/plugins/arn-spark/skills/arn-spark-clickable-prototype-teams/references/debate-protocol.md +242 -0
  40. package/plugins/arn-spark/skills/arn-spark-clickable-prototype-teams/references/debate-review-report-template.md +161 -0
  41. package/plugins/arn-spark/skills/arn-spark-clickable-prototype-teams/references/expert-interaction-review-template.md +152 -0
  42. package/plugins/arn-spark/skills/arn-spark-concept-review/SKILL.md +350 -0
  43. package/plugins/arn-spark/skills/arn-spark-concept-review/references/conflict-resolution-protocol.md +145 -0
  44. package/plugins/arn-spark/skills/arn-spark-concept-review/references/review-report-template.md +185 -0
  45. package/plugins/arn-spark/skills/arn-spark-dev-setup/SKILL.md +366 -0
  46. package/plugins/arn-spark/skills/arn-spark-dev-setup/references/dev-setup-checklist.md +84 -0
  47. package/plugins/arn-spark/skills/arn-spark-dev-setup/references/dev-setup-template.md +205 -0
  48. package/plugins/arn-spark/skills/arn-spark-discover/SKILL.md +303 -0
  49. package/plugins/arn-spark/skills/arn-spark-discover/references/competitive-landscape-template.md +87 -0
  50. package/plugins/arn-spark/skills/arn-spark-discover/references/discovery-questions.md +120 -0
  51. package/plugins/arn-spark/skills/arn-spark-discover/references/persona-profile-template.md +97 -0
  52. package/plugins/arn-spark/skills/arn-spark-discover/references/product-concept-template.md +253 -0
  53. package/plugins/arn-spark/skills/arn-spark-ensure-config/SKILL.md +23 -0
  54. package/plugins/arn-spark/skills/arn-spark-ensure-config/references/ensure-config.md +388 -0
  55. package/plugins/arn-spark/skills/arn-spark-ensure-config/references/step-0-fast-path.md +25 -0
  56. package/plugins/arn-spark/skills/arn-spark-ensure-config/scripts/cache-check.sh +127 -0
  57. package/plugins/arn-spark/skills/arn-spark-feature-extract/SKILL.md +483 -0
  58. package/plugins/arn-spark/skills/arn-spark-feature-extract/references/feature-backlog-template.md +176 -0
  59. package/plugins/arn-spark/skills/arn-spark-feature-extract/references/feature-entry-template.md +209 -0
  60. package/plugins/arn-spark/skills/arn-spark-help/SKILL.md +149 -0
  61. package/plugins/arn-spark/skills/arn-spark-help/references/pipeline-map.md +211 -0
  62. package/plugins/arn-spark/skills/arn-spark-init/SKILL.md +312 -0
  63. package/plugins/arn-spark/skills/arn-spark-init/references/agent-models-presets/all-opus.md +23 -0
  64. package/plugins/arn-spark/skills/arn-spark-init/references/agent-models-presets/balanced.md +23 -0
  65. package/plugins/arn-spark/skills/arn-spark-init/references/bkt-setup.md +55 -0
  66. package/plugins/arn-spark/skills/arn-spark-init/references/jira-mcp-setup.md +61 -0
  67. package/plugins/arn-spark/skills/arn-spark-init/references/platform-labels.md +97 -0
  68. package/plugins/arn-spark/skills/arn-spark-naming/SKILL.md +275 -0
  69. package/plugins/arn-spark/skills/arn-spark-naming/references/creative-brief-template.md +146 -0
  70. package/plugins/arn-spark/skills/arn-spark-naming/references/naming-methodology.md +237 -0
  71. package/plugins/arn-spark/skills/arn-spark-naming/references/naming-report-template.md +122 -0
  72. package/plugins/arn-spark/skills/arn-spark-naming/references/trademark-databases.md +88 -0
  73. package/plugins/arn-spark/skills/arn-spark-naming/references/whois-server-map.md +164 -0
  74. package/plugins/arn-spark/skills/arn-spark-naming/scripts/whois-check.js +502 -0
  75. package/plugins/arn-spark/skills/arn-spark-naming/scripts/whois-check.py +533 -0
  76. package/plugins/arn-spark/skills/arn-spark-prototype-lock/SKILL.md +260 -0
  77. package/plugins/arn-spark/skills/arn-spark-prototype-lock/references/lock-report-template.md +68 -0
  78. package/plugins/arn-spark/skills/arn-spark-prototype-lock/references/pretooluse-hook-template.json +35 -0
  79. package/plugins/arn-spark/skills/arn-spark-prototype-lock/references/prototype-guardrail-rules.md +38 -0
  80. package/plugins/arn-spark/skills/arn-spark-report/SKILL.md +144 -0
  81. package/plugins/arn-spark/skills/arn-spark-report/references/issue-template.md +81 -0
  82. package/plugins/arn-spark/skills/arn-spark-report/references/spark-knowledge-base.md +293 -0
  83. package/plugins/arn-spark/skills/arn-spark-scaffold/SKILL.md +239 -0
  84. package/plugins/arn-spark/skills/arn-spark-scaffold/references/scaffold-checklist.md +79 -0
  85. package/plugins/arn-spark/skills/arn-spark-scaffold/references/scaffold-summary-template.md +74 -0
  86. package/plugins/arn-spark/skills/arn-spark-spike/SKILL.md +209 -0
  87. package/plugins/arn-spark/skills/arn-spark-spike/references/spike-report-template.md +123 -0
  88. package/plugins/arn-spark/skills/arn-spark-static-prototype/SKILL.md +362 -0
  89. package/plugins/arn-spark/skills/arn-spark-static-prototype/references/review-report-template.md +65 -0
  90. package/plugins/arn-spark/skills/arn-spark-static-prototype/references/showcase-capture-guide.md +153 -0
  91. package/plugins/arn-spark/skills/arn-spark-static-prototype/references/static-prototype-criteria.md +54 -0
  92. package/plugins/arn-spark/skills/arn-spark-static-prototype-teams/SKILL.md +518 -0
  93. package/plugins/arn-spark/skills/arn-spark-static-prototype-teams/references/debate-protocol.md +230 -0
  94. package/plugins/arn-spark/skills/arn-spark-static-prototype-teams/references/debate-review-report-template.md +148 -0
  95. package/plugins/arn-spark/skills/arn-spark-static-prototype-teams/references/expert-visual-review-template.md +130 -0
  96. package/plugins/arn-spark/skills/arn-spark-stress-competitive/SKILL.md +166 -0
  97. package/plugins/arn-spark/skills/arn-spark-stress-competitive/references/competitive-report-template.md +139 -0
  98. package/plugins/arn-spark/skills/arn-spark-stress-competitive/references/gap-analysis-framework.md +111 -0
  99. package/plugins/arn-spark/skills/arn-spark-stress-interview/SKILL.md +257 -0
  100. package/plugins/arn-spark/skills/arn-spark-stress-interview/references/interview-protocol.md +140 -0
  101. package/plugins/arn-spark/skills/arn-spark-stress-interview/references/interview-report-template.md +165 -0
  102. package/plugins/arn-spark/skills/arn-spark-stress-interview/references/persona-casting-spec.md +138 -0
  103. package/plugins/arn-spark/skills/arn-spark-stress-premortem/SKILL.md +181 -0
  104. package/plugins/arn-spark/skills/arn-spark-stress-premortem/references/premortem-protocol.md +112 -0
  105. package/plugins/arn-spark/skills/arn-spark-stress-premortem/references/premortem-report-template.md +158 -0
  106. package/plugins/arn-spark/skills/arn-spark-stress-prfaq/SKILL.md +206 -0
  107. package/plugins/arn-spark/skills/arn-spark-stress-prfaq/references/prfaq-report-template.md +139 -0
  108. package/plugins/arn-spark/skills/arn-spark-stress-prfaq/references/prfaq-workflow.md +118 -0
  109. package/plugins/arn-spark/skills/arn-spark-style-explore/SKILL.md +281 -0
  110. package/plugins/arn-spark/skills/arn-spark-style-explore/references/style-brief-template.md +198 -0
  111. package/plugins/arn-spark/skills/arn-spark-use-cases/SKILL.md +359 -0
  112. package/plugins/arn-spark/skills/arn-spark-use-cases/references/expert-review-template.md +94 -0
  113. package/plugins/arn-spark/skills/arn-spark-use-cases/references/review-protocol.md +150 -0
  114. package/plugins/arn-spark/skills/arn-spark-use-cases/references/use-case-index-template.md +108 -0
  115. package/plugins/arn-spark/skills/arn-spark-use-cases/references/use-case-template.md +125 -0
  116. package/plugins/arn-spark/skills/arn-spark-use-cases-teams/SKILL.md +306 -0
  117. package/plugins/arn-spark/skills/arn-spark-use-cases-teams/references/debate-protocol.md +272 -0
  118. package/plugins/arn-spark/skills/arn-spark-use-cases-teams/references/review-report-template.md +112 -0
  119. package/plugins/arn-spark/skills/arn-spark-visual-readiness/SKILL.md +293 -0
  120. package/plugins/arn-spark/skills/arn-spark-visual-readiness/references/readiness-checklist.md +196 -0
  121. package/plugins/arn-spark/skills/arn-spark-visual-sketch/SKILL.md +376 -0
  122. package/plugins/arn-spark/skills/arn-spark-visual-sketch/references/aesthetic-philosophy.md +210 -0
  123. package/plugins/arn-spark/skills/arn-spark-visual-sketch/references/sketch-gallery-guide.md +282 -0
  124. package/plugins/arn-spark/skills/arn-spark-visual-sketch/references/visual-direction-template.md +174 -0
  125. package/plugins/arn-spark/skills/arn-spark-visual-strategy/SKILL.md +447 -0
  126. package/plugins/arn-spark/skills/arn-spark-visual-strategy/references/baseline-capture-script-template.js +89 -0
  127. package/plugins/arn-spark/skills/arn-spark-visual-strategy/references/journey-schema.md +375 -0
  128. package/plugins/arn-spark/skills/arn-spark-visual-strategy/references/spike-checklist.md +122 -0
  129. package/plugins/arn-spark/skills/arn-spark-visual-strategy/references/strategy-layers-guide.md +132 -0
  130. package/plugins/arn-spark/skills/arn-spark-visual-strategy/references/visual-strategy-template.md +141 -0
@@ -0,0 +1,205 @@
+ # Dev Setup Template
+
+ This template defines the structure for development environment setup documents written by the `arn-spark-dev-setup` skill. The document captures all decisions about how developers set up, run, and contribute to the project. It is written to the project's docs directory as `dev-setup.md`.
+
+ A dev setup document serves two audiences: the architect who defined the standard (for reference), and the developer who needs to get the project running (for onboarding).
+
+ ## Instructions for arn-spark-dev-setup
+
+ When populating this template:
+
+ - Replace all bracketed placeholders with concrete content from the conversation
+ - Prerequisites must list specific package names and versions, not vague references ("install the required tools")
+ - Setup steps must be copy-pasteable commands, not prose descriptions
+ - Platform-specific instructions must be separated clearly -- a Linux developer should not have to read through macOS instructions
+ - If any setup step could not be verified in the current environment, note it explicitly
+ - The Troubleshooting section should address problems actually observed during setup, not hypothetical issues
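As a worked illustration of the "specific versions, copy-pasteable commands" rules, a prerequisites section can ship a runnable check rather than prose. The helper below is a sketch, not part of this package; the tool name and version numbers are illustrative:

```shell
# Compare a detected tool version against a pinned minimum major version.
# Pure string handling, so it runs even when the tool is not installed.
version_ok() {
  # $1 = detected version (e.g. "20.11.1"), $2 = required major (e.g. 20)
  local major=${1%%.*}
  if [ "$major" -ge "$2" ]; then echo ok; else echo "too old"; fi
}
version_ok 20.11.1 20   # prints "ok"
```

A filled-in template could pair such a check with the exact install command for each platform.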
+
+ ---
+
+ ## Template
+
+ ````markdown
+ # Development Environment Setup
+
+ Generated by `/arn-spark-dev-setup` on [date].
+
+ ## Environment Overview
+
+ **Type:** [native / dev-container / docker / docker-compose / hybrid]
+
+ **Rationale:** [Why this environment type was chosen for this project. 1-2 sentences.]
+
+ **Supported platforms:** [linux, macos, windows]
+
+ ## Prerequisites
+
+ ### All Platforms
+
+ [Tools needed regardless of platform:]
+
+ - [Tool 1] [version] -- [what it is used for]
+ - [Tool 2] [version] -- [what it is used for]
+
+ ### macOS
+
+ [System packages and tools specific to macOS:]
+
+ ```bash
+ # Install via Homebrew
+ brew install [packages]
+ ```
+
+ ### Linux (Ubuntu/Debian)
+
+ [System packages and tools specific to Linux:]
+
+ ```bash
+ # Install system dependencies
+ sudo apt install [packages]
+ ```
+
+ ### Windows
+
+ [System packages and tools specific to Windows:]
+
+ ```powershell
+ # Install via winget
+ winget install [packages]
+ ```
+
+ ## Setup Instructions
+
+ ### Quick Start
+
+ [The fastest path from clone to running project. 3-5 commands maximum.]
+
+ ```bash
+ git clone [repo-url]
+ cd [project-name]
+ ./scripts/setup.sh # or scripts\setup.ps1 on Windows
+ [dev server command]
+ ```
+
+ ### Detailed Setup
+
+ #### 1. Install prerequisites
+
+ [Detailed steps for installing prerequisites that the setup script does not cover.]
+
+ #### 2. Run the setup script
+
+ ```bash
+ # Linux / macOS
+ chmod +x scripts/setup.sh
+ ./scripts/setup.sh
+
+ # Windows (PowerShell)
+ scripts\setup.ps1
+ ```
+
+ The setup script will:
+ - [list what the script does]
+
+ #### 3. Verify the setup
+
+ ```bash
+ [build command]
+ [test command]
+ ```
+
+ Expected output: [what success looks like]
+
+ ### Dev Container Setup
+
+ [Only if environment type is dev-container or hybrid:]
+
+ 1. Open the project in VS Code
+ 2. When prompted, click "Reopen in Container" (or run the command: Dev Containers: Reopen in Container)
+ 3. Wait for the container to build and start
+ 4. The development environment is ready when [indicator]
+
+ ### Docker / Docker Compose Setup
+
+ [Only if environment type is docker, docker-compose, or hybrid:]
+
+ ```bash
+ # Build and start services
+ docker compose up -d
+
+ # Verify services are running
+ docker compose ps
+ ```
+
+ ## Toolchain Versions
+
+ | Tool | Version | Pin File | Notes |
+ |------|---------|----------|-------|
+ | [Rust] | [version] | `rust-toolchain.toml` | [notes] |
+ | [Node.js] | [version] | `.nvmrc` | [notes] |
+ | [other] | [version] | [file] | [notes] |
+
+ ## CI/CD Pipeline
+
+ **Provider:** [GitHub Actions / GitLab CI / none]
+
+ **Workflow file:** [path, e.g., `.github/workflows/ci.yml`]
+
+ **What CI does:**
+ - [step 1: e.g., "Builds on Linux, macOS, and Windows"]
+ - [step 2: e.g., "Runs linting and formatting checks"]
+ - [step 3: e.g., "Runs test suite"]
+
+ **Platform matrix:**
+ | Platform | Runner | Notes |
+ |----------|--------|-------|
+ | [Linux] | [ubuntu-latest] | [notes] |
+ | [macOS] | [macos-latest] | [notes] |
+ | [Windows] | [windows-latest] | [notes] |
+
+ ## IDE Configuration
+
+ **Recommended IDE:** [VS Code / other]
+
+ **Recommended extensions:**
+ - [extension 1] -- [purpose]
+ - [extension 2] -- [purpose]
+
+ **Settings:** [Any project-specific IDE settings in `.vscode/settings.json` or equivalent]
+
+ ## Troubleshooting
+
+ ### [Common problem 1]
+
+ **Symptom:** [what the developer sees]
+
+ **Cause:** [why it happens]
+
+ **Fix:**
+ ```bash
+ [commands to fix]
+ ```
+
+ ### [Common problem 2]
+
+ [Same format]
+
+ ## Deviations from Standard
+
+ This section is for developers who chose a non-standard setup. If you deviated from the above, document your choices here for your own reference. Note that CI runs against the standard environment, so your local deviations may cause CI failures that do not reproduce locally.
+ ````
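The Toolchain Versions table leans on pin files. These are plain, mechanically writable files; a minimal sketch of creating them (the version numbers are illustrative placeholders, not recommendations from this package):

```shell
# Work in a throwaway directory so nothing in the project is touched.
workdir=$(mktemp -d) && cd "$workdir"

# .nvmrc holds a bare version string, read by nvm/fnm.
echo "20.11.1" > .nvmrc

# rust-toolchain.toml pins the toolchain channel, read by rustup.
printf '[toolchain]\nchannel = "1.77.0"\n' > rust-toolchain.toml
```

Referencing these files from the table (rather than repeating versions in prose) keeps the document and the toolchain from drifting apart.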
+
+ ---
+
+ ## Section Guidance
+
+ | Section | Source | Depth |
+ |---------|--------|-------|
+ | Environment Overview | Conversation decisions | 1-2 sentences rationale |
+ | Prerequisites | Architecture vision + scaffold results + platform research | Specific packages and versions per platform |
+ | Setup Instructions | Generated setup scripts + manual steps | Copy-pasteable commands |
+ | Toolchain Versions | Conversation decisions + toolchain pin files | Table with pin file references |
+ | CI/CD Pipeline | Generated CI workflow | Summary of what CI checks |
+ | IDE Configuration | Conversation decisions | Extension list with purposes |
+ | Troubleshooting | Issues observed during setup verification | Symptom/cause/fix format |
+ | Deviations from Standard | Empty section for developer use | Placeholder only |
@@ -0,0 +1,303 @@
+ ---
+ name: arn-spark-discover
+ description: >-
+   This skill should be used when the user says "discover", "product discovery",
+   "arn discover", "help me define this product", "what should I build",
+   "product concept", "define the product", "let's figure out what to build",
+   "vision for this project", "shape this idea", "new project idea",
+   "brainstorm this product", "starting from scratch", or wants to explore
+   and structure a greenfield product idea through guided conversation. Produces a
+   product-concept.md document capturing the product vision, core experience,
+   target users, trust model, platforms, and scope boundaries.
+ version: 1.1.0
+ ---
+
+ # Arness Discover
+
+ Guide a greenfield product idea from raw concept to structured product vision through iterative conversation, aided by product thinking from the `arn-spark-product-strategist` agent, competitive landscape research from `arn-spark-market-researcher`, and persona generation from `arn-spark-persona-architect` (greenfield agents in this plugin). This is a conversational skill that runs in normal conversation (NOT plan mode). The primary artifact is a **product concept document** written to the project's vision directory.
+
+ This skill covers the WHAT and WHY of the product, including the **product pillars** -- non-negotiable qualities that define the product's soul and guide all downstream decisions. Technology choices and system design are handled separately by `/arn-spark-arch-vision`.
+
+ ## Step 0: Ensure Configuration (Non-Blocking)
+
+ Read `${CLAUDE_PLUGIN_ROOT}/skills/arn-spark-ensure-config/references/step-0-fast-path.md` and follow its instructions. This captures the user profile and configures `## Arness` with Arness Spark fields.
+
+ **Important:** This skill is designed for exploratory use before a project fully exists. If ensure-config encounters errors (e.g., no git repository, arness.md cannot be created), proceed anyway using fallback defaults: Vision directory = `.arness/vision`, Reports directory = `.arness/reports`. Do not hard-block.
+
+ After Step 0 completes (or falls back), determine the output directory:
+ 1. Read the project's `arness.md` and check for a `## Arness` section
+ 2. If found, extract the configured Vision directory path -- this is the source of truth
+ 3. If NOT found (ensure-config fell back), use `.arness/vision` at the project root
+ 4. If the output directory does not exist, create it
+
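The fallback logic above can be sketched in shell. This is a hedged illustration, not plugin code: the `Vision directory:` field name is an assumption made for the example, since the real skill reads whatever ensure-config actually wrote under `## Arness`.

```shell
# Resolve the vision directory per the Step 0 fallback rules:
# prefer a value configured under "## Arness" in arness.md,
# otherwise fall back to .arness/vision, creating it if needed.
resolve_vision_dir() {
  local root="$1" dir=".arness/vision"   # fallback default
  if [ -f "$root/arness.md" ] && grep -q '^## Arness' "$root/arness.md"; then
    # "Vision directory:" is a hypothetical field name for illustration.
    configured=$(sed -n 's/^Vision directory:[[:space:]]*//p' "$root/arness.md" | head -n 1)
    [ -n "$configured" ] && dir="$configured"
  fi
  mkdir -p "$root/$dir"
  printf '%s\n' "$dir"
}
```

The key property to preserve is that a missing or unreadable `arness.md` degrades to the default rather than blocking the skill.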
33
+ ## Workflow
34
+
35
+ ### Step 1: Capture the Raw Idea
36
+
37
+ Accept the user's product idea in any form -- a single sentence, a paragraph, or a detailed description. If the idea was included in the trigger message, use it directly. If not, ask:
38
+
39
+ "What product or tool do you want to build? Describe it in whatever detail you have -- even a rough idea is a good starting point."
40
+
41
+ After receiving the idea, acknowledge with a brief restatement (2-3 sentences) to confirm understanding. Do not add interpretation or assumptions beyond what the user stated.
42
+
43
+ ### Step 2: Initial Analysis with Product Strategist
44
+
45
+ Invoke the `arn-spark-product-strategist` agent via the Task tool, passing the model from `.arness/agent-models/spark.md` as the `model` parameter (see `plugins/arn-spark/skills/arn-spark-ensure-config/references/ensure-config.md` "Dispatch convention" for fallback). Context:
46
+ - The user's raw idea description
47
+ - Any context from the conversation so far
48
+
49
+ The agent returns:
50
+ - A vision sketch (initial attempt at a vision statement)
51
+ - An assessment of what is clear and what is missing
52
+ - 3-5 probing questions organized by priority category
53
+ - Scope observations (what looks essential, what could be deferred)
54
+
55
+ Present the agent's findings to the user as a conversation starter. Frame it as: "Here is my initial read on your idea, with some questions to explore." Do NOT present it as a finished analysis.
56
+
57
+ ### Step 3: Guided Discovery Conversation (Iterative)
58
+
59
+ Enter a conversation loop. The goal is to cover eleven discovery categories, but do so through natural conversation -- not as a sequential questionnaire.
60
+
61
+ Load the discovery question bank for reference:
62
+ > Read `${CLAUDE_PLUGIN_ROOT}/skills/arn-spark-discover/references/discovery-questions.md`
63
+
64
+ Use the product strategist's output from Step 2 to drive the conversation. Start with the categories the agent flagged as weakest. Cover these categories through the conversation (not necessarily in order — the numbering and ordering here differs intentionally from the question bank and template, which are optimized for different purposes):
65
+
66
+ 1. **Vision & Problem** -- what and why
67
+ 2. **Target Users & Personas** -- who and when (AI-generated personas for validation)
68
+ 3. **Core Experience** -- the primary interaction
69
+ 4. **Product Pillars** -- the non-negotiable qualities that define the product's soul
70
+ 5. **Trust & Security Model** -- how users establish trust
71
+ 6. **Platform & Constraints** -- where and what limits
72
+ 7. **Participants & Scale** -- how many, what topology
73
+ 8. **Scope Boundaries** -- what is NOT v1
74
+ 9. **Business Model & Constraints** -- revenue model, tenancy, compliance, cost limits
75
+ 10. **Competitive Landscape** -- alternatives, market positioning (AI-researched)
76
+ 11. **Assumptions & Success Criteria** -- validated hypotheses, measurable outcomes (AI-derived)
77
+
78
+ **Product Pillars** deserve special attention. Unlike the other categories which capture facts and decisions, pillars capture convictions. Listen for strong language throughout the conversation -- phrases like "it HAS to feel...", "the whole point is...", "I refuse to compromise on..." are pillar signals. When you hear them, name the pillar back to the user: "It sounds like [quality] is non-negotiable for you -- would you call that a core pillar of this product?" Collect pillars as they emerge naturally rather than asking for them all at once.
79
+
80
+ #### AI Assist Checkpoints
81
+
82
+ **Product type signal:** Early in the conversation (during Step 2 initial analysis or first rounds of Step 3), establish whether this is a **commercial product** (targeting a market/customers) or an **internal tool / personal utility** (for a team, personal use, or internal workflow). If ambiguous from the description, ask naturally: "Is this something you're building for customers, or more of an internal tool / personal utility?" Record the answer -- it affects framing of Checkpoints C and D, but NOT the depth of persona work (Checkpoint B is always full depth because personas feed downstream simulation skills regardless of product type).
83
+
84
+ **Checkpoint A -- Problem Statement Draft** (after vision is clear, rounds 1-3):
85
+ - Synthesize a crisp problem statement from the conversation (who/what/why/workarounds/severity)
86
+ - Present to user: "Based on what you've described, here's a draft problem statement. Does this capture it?"
87
+ - User validates/refines. No extra agent needed -- the skill does this directly from conversation context.
88
+
89
+ **Checkpoint B -- Persona Generation** (when target users discussed, rounds 2-4):
90
+
91
+ This is a two-phase process: **concrete examples first** (for user interaction), then **abstracted moulds** (for reuse).
92
+
93
+ 1. Ask user to describe target users in their own words
94
+ 2. Assess what the user provides:
95
+ - **Vague description** (e.g., "developers who want to ship faster"):
96
+
97
+ Ask the user:
98
+
99
+ > **I can generate concrete persona examples based on what you've described -- specific people you can picture and critique. Would you like me to draft those?**
100
+ > 1. **Yes** — Generate persona examples for review
101
+ > 2. **No** — Skip persona generation, continue with what we have
102
+
103
+ - **Concrete names/roles** (e.g., "Bob is a PM who cares about velocity, Julie is a dev who hates context switching"):
104
+
105
+ Ask the user:
106
+
107
+ > **I can expand those into full profiles with personality traits, pain points, and day-in-the-life scenarios. Want me to research and draft those?**
108
+ > 1. **Yes** — Expand into full persona profiles
109
+ > 2. **No** — Keep as-is and continue
110
+ 3. If agreed, invoke the `arn-spark-persona-architect` agent in **discovery mode** via the Task tool, passing the model from `.arness/agent-models/spark.md` as the `model` parameter (see `plugins/arn-spark/skills/arn-spark-ensure-config/references/ensure-config.md` "Dispatch convention" for fallback). Context: product vision, problem statement, user description (vague or concrete seeds), product pillars so far. The agent handles both cases -- generating from scratch or expanding user-provided specifics.
111
+ 4. **Phase 1 -- Concrete examples:** Present the generated/expanded concrete personas (including personality traits): "Here are some specific people who would use this product. Do these ring true? Any to add, remove, or change?" The user interacts with these -- critiquing, adjusting, approving. Iterate until the user is satisfied. These are vivid, specific characters with personality.
112
+ 5. **Phase 2 -- Abstracted moulds:** Once concrete personas are approved, invoke the persona-architect again (or in the same pass) to derive the abstracted profiles (moulds) from the approved examples. Present: "Here are the generalized persona archetypes I've extracted from the examples. These moulds can be used later to generate more personas for testing. Do they capture the pattern?" The moulds define ranges, patterns, personality spectrums, and variation axes rather than single points.
113
+ 6. Record both: the approved concrete examples AND the abstracted moulds.
114
+ 7. If declined, record whatever target user info was gathered conversationally and move on.
115
+
116
+ **Why both layers?** Concrete personas make the conversation tangible -- the user can say "no, that person wouldn't care about X." Abstracted moulds make the output reusable -- future skills (Synthetic User Panel, etc.) can generate fresh concrete instances from the moulds without repeating the discovery conversation.
117
+
118
+ **Checkpoint C -- Competitive Landscape / Existing Alternatives** (when competitors come up, rounds 3-6):
119
+
120
+ Framing adapts based on product type:
121
+
122
+ **Commercial product framing:**
123
+ 1. Ask what competitors or alternatives the user is aware of
124
+ 2. Based on response:
125
+ Ask the user:
126
+
127
+ > **Would you like me to research the competitive landscape?**
128
+ [Adapt the question text based on how many competitors the user named:
129
+ - 0 named: "Would you like me to research what else is out there in this space?"
130
+ - 1-2 named: "Want me to research whether there are others and build a fuller picture?"
131
+ - 3+ named: "Good awareness. Want me to verify these and check for anything you might have missed?"]
132
+ > 1. **Yes** — Research alternatives and build a landscape
133
+ > 2. **No** — Skip competitive research
134
+
135
+ **Internal tool / utility framing:**
136
+ 1. Ask: "Have you looked for existing tools that already do something like this? Sometimes there's something out there you could use directly or draw inspiration from."
137
+ 2. Based on response:
138
+ Ask the user:
139
+
140
+ > [Adapt the question text based on the user's response:
141
+ > - User knows of some: "Want me to search for any others that might save you time or give you ideas?"
142
+ > - User hasn't looked: "Want me to do a quick search for similar tools? Could save you building from scratch, or at least give you inspiration for the approach."]
143
+ > 1. **Yes** — Search for existing tools
144
+ > 2. **No** — Skip and build from scratch
145
+
146
+ - If **No** or the user says not needed: Record "Not explored -- user prefers to build from scratch" and move on.
147

**If user agrees to research (either framing) -- Three-phase fan-out/fan-in orchestration:**

This process ensures thorough, validated results rather than a shallow 3-query search:

1. **Phase 1 -- Query Planning:** Invoke the `arn-spark-market-researcher` agent (identification/plan) via the Task tool, passing the model from `.arness/agent-models/spark.md` as the `model` parameter (see `plugins/arn-spark/skills/arn-spark-ensure-config/references/ensure-config.md` "Dispatch convention" for fallback). Context: product description, problem space, known competitors. The agent generates 10-15 search queries across diverse angles (problem-focused, solution-focused, comparison, review, community, domain). Present the query plan briefly: "I've identified [N] search angles to explore. Searching now..."

2. **Phase 2 -- Parallel Search:** Split the queries into 2-3 batches. Invoke `arn-spark-market-researcher` (identification/search) **2-3 times in parallel**, each with a batch of 4-6 queries. Each agent searches independently and returns raw findings.

3. **Phase 3 -- Consolidation:** Invoke `arn-spark-market-researcher` (identification/consolidate) with all raw findings from Phase 2. The agent de-duplicates (products found by multiple queries signal higher relevance), validates (confirms URLs and descriptions are accurate), and ranks by relevance.

4. **Present results for user curation:** Present the full tiered list (top 5 + extended landscape). Frame it for review:
   - "I searched across [N] angles and found [Y] validated alternatives. Here are my top 5 with rationale, plus the extended list. Take a look:"
   - Present the top 5 with rationale for each
   - Present the extended landscape (numbered 6+) so the user can see what else was found
   - Ask: "Does this top 5 look right? You can swap any of them with candidates from the extended list, add ones I missed, or exclude any that aren't relevant."

5. **User curates:** The user may:
   - Approve the top 5 as-is
   - Swap candidates (e.g., "move #8 into the top 5, drop #3")
   - Exclude false positives (e.g., "#4 isn't actually a competitor")
   - Add competitors the research missed
   - Adjust rationale or categorization
   Iterate until the user is satisfied with the focused set.

6. **Record the final curated landscape:** Save the full tiered list to the product concept:
   - **Primary competitors (top 5):** The user-validated focused set -- these are the ones to track and compare against
   - **Extended landscape:** Remaining validated candidates, trimmed to ~10 max -- kept for reference because the vision may shift and make them more relevant later
   - **Indirect alternatives:** The "do nothing" baseline and generic tool workarounds
   The full list is preserved as research -- future skills (Gap Analysis) can draw from any tier.

7. If research was declined, record appropriately and move on. Do NOT force research.
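The phases above amount to a fan-out/fan-in over search queries. A minimal Python sketch of the batching and consolidation logic, purely illustrative: the real fan-out happens via Task-tool agent invocations, and the finding fields (`name`, `url`, `query`) are assumptions, not a defined interface.

```python
def split_into_batches(queries, batch_size=5):
    """Phase 2 prep: chunk the 10-15 planned queries into 2-3 parallel batches."""
    return [queries[i:i + batch_size] for i in range(0, len(queries), batch_size)]

def consolidate(raw_findings):
    """Phase 3 sketch: de-duplicate by URL; entries surfaced by more distinct
    queries signal higher relevance and therefore rank first."""
    by_url = {}
    for f in raw_findings:  # assumed shape: {"name", "url", "query"}
        entry = by_url.setdefault(f["url"], {"name": f["name"], "url": f["url"], "queries": set()})
        entry["queries"].add(f["query"])
    ranked = sorted(by_url.values(), key=lambda e: len(e["queries"]), reverse=True)
    return {"primary": ranked[:5], "extended": ranked[5:15]}  # tiered output

queries = [f"query {i}" for i in range(1, 13)]   # Phase 1 output: 12 queries
batches = split_into_batches(queries)             # 3 batches: 5 + 5 + 2
raw = [                                           # stubbed Phase 2 results
    {"name": "ToolA", "url": "https://a.example", "query": "query 1"},
    {"name": "ToolA", "url": "https://a.example", "query": "query 7"},
    {"name": "ToolB", "url": "https://b.example", "query": "query 2"},
]
landscape = consolidate(raw)  # ToolA ranks first: found by two distinct queries
```

The multi-query hit count is the de-duplication signal described in Phase 3; user curation (steps 4-6) then reorders the tiers manually.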

**Checkpoint D -- Assumptions & Success Criteria** (before readiness check, rounds 6-8):
1. Derive 5-8 key assumptions from the entire conversation -- things stated/implied as true but unverified. Present: "Based on our conversation, here are the key assumptions underlying this product. Which feel solid and which feel like risks?"
2. Suggest 3-5 success criteria based on product type + business model:
   - **Commercial product:** Market-oriented metrics -- adoption targets, engagement ratios, revenue milestones, NPS (e.g., "1000 MAU within 6 months", "DAU/MAU > 30%", "$5K MRR by month 12")
   - **Internal tool / utility:** Usage and impact metrics -- team adoption, time saved, process replaced, reliability (e.g., "team uses it daily within 2 weeks", "reduces manual process from 2 hours to 10 minutes", "replaces spreadsheet workflow entirely")
   Present: "Here are some success metrics I'd suggest for this type of product. Do these make sense?"
3. User validates/refines both. Record approved versions.

**Within each round of conversation, decide how to respond:**

| Situation | Action |
|-----------|--------|
| User gives a vague or broad answer | Invoke `arn-spark-product-strategist` for follow-up probing |
| User proposes a feature that might be scope creep | Invoke `arn-spark-product-strategist` for scope assessment |
| User asks "is this too much?" or "should I include X?" | Invoke `arn-spark-product-strategist` for evaluation |
| User gives a clear, concrete answer | Acknowledge and record directly, no agent needed |
| User makes a definitive scope decision | Record directly, confirm |
| User asks about technology choices | Defer to `/arn-spark-arch-vision` |
| User seems done or conversation is circling | Proceed to readiness check |
| Vision & problem are clear enough to draft | **Trigger Checkpoint A** -- draft problem statement for validation |
| User describes target users vaguely (no specific names or roles) | **Trigger Checkpoint B** -- offer persona generation |
| User provides concrete persona seeds (specific names, roles, or detailed descriptions) | **Trigger Checkpoint B** with details as persona seeds |
| Competitors/alternatives come up or mid-conversation lull | **Trigger Checkpoint C** -- ask about competitors (commercial) or existing tools (internal), offer research |
| User names < 3 alternatives (after initial competitor question) | **Trigger Checkpoint C** -- offer to fill gaps with research |
| User wants claims about alternatives validated | **Trigger Checkpoint C** with named alternatives for verification |
| User says research is "not needed" | Record appropriately and move on -- do NOT force research |
| Most categories covered, pre-readiness | **Trigger Checkpoint D** -- present derived assumptions + suggested success criteria |

**Tracking coverage:** Mentally track which categories have been sufficiently explored. A category is "covered" when the user has made concrete statements or decisions about it. Brief answers are acceptable if the topic is genuinely simple for this product.

**Readiness check:** After covering the major categories (typically 3-8 rounds of conversation), check:

"I think we have a solid picture of the product. Here is a quick summary of what we have covered: [brief list of key decisions].

**Product Pillars identified so far:** [list the pillars that emerged, e.g., 'Design fidelity', 'Privacy-first', 'Zero-configuration simplicity']. [If fewer than 3:] Are there other non-negotiable qualities I should capture? [If 3-5:] Do these capture the soul of the product?

**AI-assisted sections status:**
- Problem Statement: [drafted and approved / needs drafting]
- Target Personas: [approved / generated but not reviewed / user declined / not enough context yet]
- Competitive Landscape: [approved / researched but not reviewed / user declined / not applicable]
- Key Assumptions: [validated / not yet surfaced]
- Success Criteria: [approved / not yet suggested]

If any AI-assisted sections have not been offered yet and there is enough context, offer them now before writing.

Ready for me to write the product concept document, or is there anything else to explore?"

If the user wants to explore more, continue the conversation. If ready, proceed to Step 4.

### Step 4: Write the Product Concept Document

When the user is ready:

1. Read the templates:
   > Read `${CLAUDE_PLUGIN_ROOT}/skills/arn-spark-discover/references/product-concept-template.md`
   > Read `${CLAUDE_PLUGIN_ROOT}/skills/arn-spark-discover/references/persona-profile-template.md`
   > Read `${CLAUDE_PLUGIN_ROOT}/skills/arn-spark-discover/references/competitive-landscape-template.md`

2. Populate the product concept template with all decisions and insights from the conversation:
   - Replace all bracketed placeholders with concrete content
   - Adapt subsection names to match the product (e.g., "The Knock" for a walkie-talkie app)
   - If any section has insufficient information, write what is known and note the gap
   - For AI-assisted sections (Problem Statement, Target Personas, Competitive Landscape, Key Assumptions, Success Criteria): use the user-approved content. If declined, write "Not explored during discovery." If not applicable, write "Not applicable -- [reason]."
   - For Target Personas: follow the structure in persona-profile-template.md -- each archetype needs both the abstracted mould and the concrete example.
   - For Competitive Landscape: follow the structure in competitive-landscape-template.md -- use the tiered format (primary, extended, indirect).
   - Write in present tense, as if the product exists

3. Write the document to the output directory as `product-concept.md`

4. Present a summary to the user:
   - List the document path
   - Highlight 3-5 key decisions captured
   - Note any gaps or areas that could use further exploration

### Step 5: Recommend Next Steps

After writing the document, inform the user:

"Product concept saved to `[path]/product-concept.md`.

Next step: Run `/arn-spark-arch-vision` to explore technology options and define the architecture for this product. That skill will load this product concept as input."

If the project does not yet have Arness initialized, also mention:
"If you have the Arness Code plugin installed, run `/arn-planning` to start the development pipeline. Arness auto-configures code patterns on first use."

## Agent Invocation Guide

| Situation | Action |
|-----------|--------|
| Initial idea analysis (Step 2) | Invoke `arn-spark-product-strategist` with raw idea |
| User gives vague/broad answer | Invoke `arn-spark-product-strategist` with answer + context |
| Feature might be scope creep | Invoke `arn-spark-product-strategist` for assessment |
| User asks scope/priority question | Invoke `arn-spark-product-strategist` with question |
| Target user discussion, description is vague | Invoke `arn-spark-persona-architect` (discovery mode) with product context + user hints |
| Target user discussion, some details provided | Invoke `arn-spark-persona-architect` (discovery mode) with details as seeds |
| Concrete personas approved, need moulds | Invoke `arn-spark-persona-architect` to derive abstracted profiles from approved examples |
| User agrees to market research | Phase 1: Invoke `arn-spark-market-researcher` (identification/plan) for query generation |
| Query plan ready | Phase 2: Invoke `arn-spark-market-researcher` (identification/search) 2-3x in parallel with query batches |
| Parallel search complete | Phase 3: Invoke `arn-spark-market-researcher` (identification/consolidate) to de-duplicate and validate |
| User wants claims about alternatives validated | Invoke `arn-spark-market-researcher` with named alternatives for verification |
| Clear, concrete answer | Record directly, no invocation |
| Definitive decision | Record directly, confirm |
| Technology question | Defer to `/arn-spark-arch-vision` |
| Conversation stalls/circles | Summarize progress, suggest next category |
| Most categories covered (pre-readiness) | Synthesize assumptions and success criteria from conversation |

## Error Handling

- **User cancels mid-conversation:** Confirm cancellation. If enough content has been gathered for a partial document, offer to write it. Otherwise, inform the user they can restart with `/arn-spark-discover` at any time.
- **arn-spark-product-strategist returns unhelpful response:** Summarize the issue briefly and continue the conversation directly. Try a more specific question on the next agent invocation.
- **Writing the document fails:** Print the full document content in the conversation so the user can copy it. Suggest checking file permissions or the output directory path.
- **Product concept already exists:**

  Ask the user:

  > **A product concept already exists at `[path]`. What would you like to do?**
  > 1. **Replace** — Start a fresh discovery (existing concept is preserved in git)
  > 2. **Refine** — Use the existing concept as starting context and iterate on it

  If **Refine**, read the existing document and use it as starting context for the conversation.
- **arn-spark-market-researcher returns no results:** "I couldn't find much about competitors in this space. This could mean it's genuinely novel, or the search was too narrow. We can note this as a gap and revisit later." Record what was found (even if sparse). Offer to revisit with different search angles.
- **arn-spark-persona-architect returns generic personas:** Summarize the issue to the user. Ask targeted follow-up questions to get more specific user details. Re-invoke the agent with additional details if the user provides them.
- **User declines AI-assisted sections:** Respect the decision. Record "Not explored during discovery." The product concept is valid without these sections -- they are enrichment, not requirements. Do NOT force or re-offer after decline.
- **Mould derivation produces poor results:** Present the moulds to the user. If they are too generic or do not capture the patterns, ask the user what distinguishes the approved personas from each other. Re-invoke the persona-architect with the user's differentiation criteria. If moulds still fail, record the concrete examples as the primary artifact and note that moulds need manual refinement.

---

# Competitive Landscape Template

This template defines the tiered competitive landscape format for the Competitive Landscape section of `product-concept.md`. The landscape captures identification-level data -- who exists in this space and what they do -- not a deep competitive analysis.

The structure preserves the full research output in tiers so the user can focus on the most relevant alternatives while future skills (e.g., Gap Analysis) can draw from any tier for deeper investigation. The `arn-spark-market-researcher` agent can be re-invoked in **deep analysis mode** by future skills for full feature comparison, strengths/weaknesses analysis, and positioning strategy.

## Instructions for arn-spark-discover

When populating this template:

- Populate from the `arn-spark-market-researcher` agent's Phase 3 consolidation output. The agent produces a ranked, tiered list with rationale for each top-5 selection.
- The user curates the top 5 during the AI Assist Checkpoint. They may swap entries between tiers, exclude irrelevant entries, or add competitors they know about that the research missed. Honor all user curation decisions.
- **Commercial products:** Use "Competitors" framing throughout. The top 5 are direct and indirect competitors to track and compare against.
- **Internal tools / enterprise projects:** Use "Existing Tools / Inspiration" framing instead of "Competitors." The top 5 are existing tools, internal systems, or external products that serve as reference points or alternatives the organization already uses. Adapt language accordingly (e.g., "Why focus" instead of "Why primary", "reference tools" instead of "competitors").
- Source URLs are required for all entries that were identified through market research. User-provided entries without URLs should be marked accordingly.
- Tag confidence for each entry: **Verified** (product page confirmed, description matches), **Inferred** (found in reviews or mentions but not directly verified), **Unverified** (user-provided or single-source claim).
- If the user declines competitive research entirely, write: `Not explored during discovery.`
- If the product is in a genuinely novel space with no known alternatives (rare -- the "do nothing" baseline almost always applies), write: `Not applicable -- [reason].`
- Always include the "do nothing" / manual process baseline in Indirect Alternatives, even when other alternatives are scarce.
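The three-level confidence scheme can be applied mechanically. A sketch only -- the entry fields (`page_verified`, `mention_count`, `user_provided`) are hypothetical bookkeeping, not part of the template:

```python
def confidence_tag(entry):
    """Map the evidence behind a landscape entry to a confidence tag.

    Assumed fields: page_verified (product page confirmed and description
    matches), mention_count (distinct reviews/community mentions found),
    user_provided (entry came from the user, not research).
    """
    if entry.get("page_verified"):
        return "Verified"    # product page confirmed, description matches
    if entry.get("mention_count", 0) >= 2 and not entry.get("user_provided"):
        return "Inferred"    # multiple mentions, page not directly checked
    return "Unverified"      # user-provided or single-source claim
```

Note the ordering: verification of the product page trumps mention counts, and a single-source research hit stays Unverified just like a user-provided entry.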

---

## Template

```markdown
## Competitive Landscape

[Identified alternatives in this problem space, curated by the user. This is a landscape map -- who exists and what they do -- not a deep analysis. Detailed feature comparison and gap analysis can be performed separately via dedicated skills. The full research is preserved in tiers so future skills can draw from any level.]

### Primary Competitors (Focus)
[The user-validated top 5 -- these are the most relevant alternatives to track and compare against]

1. **[Name]** ([URL]) -- [one-line description of approach and target user]
   **Why primary:** [1 sentence rationale -- relevance to problem space, user overlap, market presence]
   **Confidence:** [Verified / Inferred / Unverified]

2. **[Name]** ([URL]) -- [one-line description]
   **Why primary:** [rationale]
   **Confidence:** [tag]

3. **[Name]** ([URL]) -- [one-line description]
   **Why primary:** [rationale]
   **Confidence:** [tag]

4. **[Name]** ([URL]) -- [one-line description]
   **Why primary:** [rationale]
   **Confidence:** [tag]

5. **[Name]** ([URL]) -- [one-line description]
   **Why primary:** [rationale]
   **Confidence:** [tag]

### Extended Landscape
[Additional validated alternatives kept for reference -- may become relevant as the product evolves]

- **[Name]** ([URL]) -- [one-line description] [Confidence: tag]
- **[Name]** ([URL]) -- [one-line description] [Confidence: tag]
[... up to ~10]

### Indirect Alternatives
- **Manual / "Do Nothing"** -- [How people cope without a dedicated tool]
- **[Generic tool, e.g., spreadsheets, email, Slack]** -- [How people repurpose general tools for this]
[... additional indirect alternatives as applicable]

**Initial positioning:** [1-2 sentences -- where the proposed product sits relative to these alternatives and what gap it fills]

### Research Metadata
- **Research date:** [ISO 8601 date]
- **Search coverage:** [N] queries across [M] search angles
- **Raw candidates found:** [X]
- **Validated alternatives:** [Y]
- **Research mode:** Identification (landscape mapping only -- deep analysis available via dedicated skills)
```

---

## Section Guidance

| Section | Source | Depth |
|---------|--------|-------|
| Primary Competitors (Focus) | market-researcher Phase 3 consolidation output, user-curated during AI Assist Checkpoint | Top 5 entries. Each with name, URL, one-line description, rationale for inclusion, and confidence tag. User has final say on which entries make the top 5. |
| Extended Landscape | market-researcher Phase 3 consolidation output (entries ranked 6+) | Up to ~10 additional entries. Same format as primary but without detailed rationale. These are validated alternatives that did not make the user's top 5 but remain available for future reference. |
| Indirect Alternatives | market-researcher output + conversation context | Always includes the "do nothing" / manual process baseline. Add generic tool workarounds (spreadsheets, email, etc.) that people repurpose. These do not need URLs. |
| Initial Positioning | Synthesized from the landscape as a whole, user-validated | 1-2 sentences only. Describes the market gap the proposed product addresses. This is a starting hypothesis, not a strategy document. |
| Research Metadata | market-researcher Phase 3 output | Date of research, number of queries executed, search angles covered, raw vs. validated candidate counts. Enables future skills to assess research freshness and coverage. |
| Source URLs | market-researcher output (websearch + webfetch) | Required for all researched entries. Must point to the actual product page, not a review or listing. User-provided entries without URLs are acceptable but should be noted. |
| Confidence Tags | market-researcher validation during Phase 3 | **Verified**: product page confirmed and description matches actual product. **Inferred**: found in reviews, comparisons, or community mentions but product page not directly verified. **Unverified**: user-provided, single-source, or could not be independently confirmed. |
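The URL and tag rules in this table lend themselves to a quick lint pass before the section is written. A hypothetical sketch -- the field names are assumptions for illustration, not part of the template:

```python
VALID_TAGS = {"Verified", "Inferred", "Unverified"}

def lint_entry(entry):
    """Check one landscape entry against the table's rules; return problems found."""
    problems = []
    if entry.get("confidence") not in VALID_TAGS:
        problems.append(f"{entry['name']}: unknown confidence tag {entry.get('confidence')!r}")
    if entry.get("researched") and not entry.get("url"):
        # researched entries must carry a source URL pointing at the product page
        problems.append(f"{entry['name']}: researched entry missing source URL")
    return problems

ok = lint_entry({"name": "ToolA", "url": "https://a.example",
                 "confidence": "Verified", "researched": True})   # no problems
bad = lint_entry({"name": "ToolB", "url": None,
                  "confidence": "Maybe", "researched": True})     # two problems
```

User-provided entries (`researched` false) pass without a URL, matching the "acceptable but should be noted" rule above.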