startup-ideation-kit 1.0.0 → 2.0.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (54)
  1. package/README.md +46 -34
  2. package/bin/cli.js +36 -24
  3. package/package.json +7 -3
  4. package/skills/sk-competitors/SKILL.md +284 -0
  5. package/skills/sk-competitors/references/honesty-protocol.md +72 -0
  6. package/skills/sk-competitors/references/research-principles.md +54 -0
  7. package/skills/sk-competitors/references/research-scaling.md +106 -0
  8. package/skills/sk-competitors/references/research-synthesis.md +237 -0
  9. package/skills/sk-competitors/references/research-wave-1-profiles-pricing.md +186 -0
  10. package/skills/sk-competitors/references/research-wave-2-sentiment-mining.md +189 -0
  11. package/skills/sk-competitors/references/research-wave-3-gtm-signals.md +192 -0
  12. package/skills/sk-competitors/references/verification-agent.md +126 -0
  13. package/skills/sk-export/SKILL.md +36 -12
  14. package/skills/sk-leads/SKILL.md +9 -8
  15. package/skills/sk-money/SKILL.md +7 -6
  16. package/skills/sk-niche/SKILL.md +3 -3
  17. package/skills/sk-offer/SKILL.md +15 -6
  18. package/skills/sk-pitch/SKILL.md +461 -0
  19. package/skills/sk-pitch/references/honesty-protocol.md +62 -0
  20. package/skills/sk-pitch/references/pitch-frameworks.md +261 -0
  21. package/skills/sk-pitch/references/research-principles.md +64 -0
  22. package/skills/sk-pitch/references/research-scaling.md +96 -0
  23. package/skills/sk-pitch/references/research-synthesis.md +423 -0
  24. package/skills/sk-pitch/references/research-wave-1-audience-narrative.md +164 -0
  25. package/skills/sk-pitch/references/research-wave-2-competitive-framing.md +159 -0
  26. package/skills/sk-pitch/references/verification-agent.md +129 -0
  27. package/skills/sk-positioning/SKILL.md +318 -0
  28. package/skills/sk-positioning/references/frameworks.md +132 -0
  29. package/skills/sk-positioning/references/honesty-protocol.md +72 -0
  30. package/skills/sk-positioning/references/research-principles.md +64 -0
  31. package/skills/sk-positioning/references/research-scaling.md +96 -0
  32. package/skills/sk-positioning/references/research-synthesis.md +419 -0
  33. package/skills/sk-positioning/references/research-wave-1-alternatives.md +236 -0
  34. package/skills/sk-positioning/references/research-wave-2-market-frame.md +208 -0
  35. package/skills/sk-positioning/references/verification-agent.md +128 -0
  36. package/skills/sk-skills/SKILL.md +9 -8
  37. package/skills/sk-validate/SKILL.md +8 -6
  38. package/skills/startupkit/SKILL.md +40 -18
  39. package/skills/startupkit/templates/competitors-template.md +43 -0
  40. package/skills/startupkit/templates/diverge-template.md +179 -0
  41. package/skills/startupkit/templates/lead-strategy-template.md +215 -0
  42. package/skills/startupkit/templates/money-model-template.md +282 -0
  43. package/skills/startupkit/templates/niche-template.md +203 -0
  44. package/skills/startupkit/templates/offer-template.md +243 -0
  45. package/skills/startupkit/templates/one-pager-template.md +125 -0
  46. package/skills/startupkit/templates/pitch-template.md +48 -0
  47. package/skills/startupkit/templates/positioning-template.md +51 -0
  48. package/skills/startupkit/templates/session-template.md +74 -0
  49. package/skills/startupkit/templates/skills-match-template.md +160 -0
  50. package/skills/startupkit/templates/validation-template.md +273 -0
  51. package/templates/competitors-template.md +43 -0
  52. package/templates/pitch-template.md +48 -0
  53. package/templates/positioning-template.md +51 -0
  54. package/templates/session-template.md +26 -7
@@ -0,0 +1,159 @@
+ # Wave 2: Competitive Framing & Why Now
+
+ Read `research-principles.md` first. Use findings from Wave 1 (audience expectations, comparable narratives) to focus the research.
+
+ ---
+
+ ## Agent B1: Competitive Framing for Pitch
+
+ ```
+ Research task: Build pitch-aware competitive frame for {product description}
+ Context: {product summary from intake}
+ Known competitors: {list from intake or prior sessions}
+ Wave 1 findings: {investor expectations, comparable narratives}
+
+ IMPORTANT — PITCH-FOCUSED COMPETITIVE FRAMING:
+ This is NOT a full competitive analysis (startup-competitors does that).
+ This is specifically: how do competitors position themselves to investors,
+ what narratives have worked for them, and where are the gaps this pitch can exploit.
+
+ RESEARCH PROTOCOL:
+
+ ROUND 1 — Competitor investor positioning (4-5 searches):
+ - "{competitor 1} pitch deck" OR "{competitor 1} fundraise announcement"
+ - "{competitor 2} funding round {current year}"
+ - "{product category} funded startups {current year}"
+ - "{competitor} investor presentation" OR "{competitor} demo day"
+ - "why {competitor} raised" OR "{competitor} investor thesis"
+
+ ROUND 2 — Competitive narrative gaps (3-4 searches):
+ - "{competitor 1} vs {competitor 2}" investor perspective
+ - "{product category} startup weaknesses"
+ - "{competitor} criticism" OR "{competitor} limitations investor"
+ - "what's missing in {product category}" OR "{product category} underserved"
+
+ ROUND 3 — Investor objections about competition (2-3 searches):
+ - "investor concerns {product category} crowded market"
+ - "{product category} competitive moat" investor perspective
+ - "why invest {product category} despite competition"
+
+ ROUND 4 — Cross-reference and validate (2-3 searches):
+ - Verify key claims from rounds 1-3
+ - Search for contrarian views
+ - Look for recent changes (new entrants, acquisitions, pivots)
+
+ OUTPUT FORMAT:
+
+ ## Competitive Framing for Pitch
+
+ ### How Competitors Position to Investors
+ For each funded competitor:
+ - **{Competitor}:** Raised {amount} with narrative: "{their story}"
+   - What resonated with investors: {specifics}
+   - Where their narrative is weak: {gaps}
+   - Source: {citation}
+
+ ### Narrative Gaps to Exploit
+ Rank by pitch impact (strongest first):
+
+ 1. **{Gap name}** — {description}
+    - Why competitors can't claim this: {reasoning}
+    - How to frame it in the pitch: {specific suggestion}
+    - Evidence: {data point}
+
+ ### Likely Investor Objections About Competition
+ | Objection | Why They'll Ask | Prepared Reframe |
+ |-----------|-----------------|------------------|
+ | "{objection}" | {reasoning} | "{how to respond}" |
+
+ ### Competitive Positioning One-Liner
+ For the pitch, craft ONE sentence that captures competitive differentiation:
+ - Candidate 1: "{sentence}" — Strength: {H/M/L}
+ - Candidate 2: "{sentence}" — Strength: {H/M/L}
+ - Recommended: {which and why}
+
+ > **Confidence:** High / Medium / Low — {reasoning}
+ > Data labels: Mark each finding with [Data], [Estimate], [Assumption], or [Opinion]
+ ```
+
+ ---
+
+ ## Agent B2: Why Now & Market Timing
+
+ ```
+ Research task: Build "why now" timing thesis for {product description}
+ Context: {product summary from intake}
+ Market: {market/category from intake}
+ Wave 1 findings: {trends mentioned in comparable narratives}
+
+ IMPORTANT — THE "WHY NOW" QUESTION:
+ Every investor asks "why now?" — why is this the right moment for this company?
+ A pitch without a credible timing thesis leaves investors wondering
+ "why hasn't someone done this already?" or "why will this work NOW vs. 2 years ago?"
+
+ RESEARCH PROTOCOL:
+
+ ROUND 1 — Technology and market shifts (4-5 searches):
+ - "{product category} market growth {current year}"
+ - "{technology enabling this product} adoption rate {current year}"
+ - "{product category} inflection point" OR "{product space} tipping point"
+ - "why now {product category}" OR "{product space} momentum {current year}"
+ - "{enabling technology} cost reduction" OR "{enabling technology} democratization"
+
+ ROUND 2 — Behavioral and regulatory changes (3-4 searches):
+ - "{customer behavior change} driving {product category}"
+ - "{regulatory change} affecting {product space} {current year}"
+ - "COVID / remote work / AI impact on {product category}"
+ - "{product space} buyer behavior shift {current year}"
+
+ ROUND 3 — Adoption curves and precedents (2-3 searches):
+ - "{product category} adoption curve"
+ - "{comparable market} adoption timeline" (analog from adjacent space)
+ - "{product category} early adopters vs mainstream"
+ - "when did {similar product category} take off"
+
+ ROUND 4 — Counter-arguments (2-3 searches):
+ - "why {product category} won't work" OR "{product space} headwinds"
+ - "{product category} too early" OR "{product space} not ready"
+ - Skeptical takes on the timing thesis
+
+ OUTPUT FORMAT:
+
+ ## Why Now: Timing Thesis
+
+ ### Primary Timing Driver
+ - **What changed:** {specific shift — technology, regulation, behavior}
+ - **When it changed:** {date or timeframe}
+ - **Evidence:** {data with source}
+ - **Impact on this company:** {how it enables or accelerates the pitch}
+
+ ### Supporting Timing Signals
+ 1. **{Signal}** — {description with data}
+    - Source: {citation}
+    - Strength: Strong / Moderate / Weak
+
+ 2. **{Signal}** — {description with data}
+    [same structure]
+
+ ### "Why Now" Narrative for the Pitch
+ Craft a 2-3 sentence "why now" that can slot into the pitch:
+ - Version 1 (data-led): "{version}"
+ - Version 2 (story-led): "{version}"
+ - Recommended: {which and why}
+
+ ### Counter-Arguments (Investor Skepticism)
+ | Skeptical View | Evidence For It | How to Address |
+ |----------------|-----------------|----------------|
+ | "{concern}" | {data} | "{response with evidence}" |
+
+ ### Timing Assessment
+ - **Is the timing genuinely strong?** Yes / Partially / No
+ - **Risk of being too early:** {assessment}
+ - **Risk of being too late:** {assessment}
+ - **Honest recommendation:** {use the timing thesis / downplay timing / find a different angle}
+
+ If the timing isn't genuinely strong, say so. A forced "why now" is worse than none — investors see through it. In that case, recommend leading with a different pitch element instead.
+
+ > **Confidence:** High / Medium / Low — {reasoning}
+ > Data labels: Mark each finding with [Data], [Estimate], [Assumption], or [Opinion]
+ ```
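Both agent prompts above parameterize their search rounds with `{placeholder}` slots. As a rough illustration only (the slot values below are invented, and nothing here is part of the package itself), a runner could expand the templates into concrete queries like this:

```javascript
// Hypothetical sketch: expand "{slot}" placeholders in the search-query
// templates above into concrete queries. Slot values are made-up examples.
function expandQuery(template, slots) {
  return template.replace(/\{([^}]+)\}/g, (match, name) =>
    name in slots ? slots[name] : match // unknown slots are left as-is
  );
}

const round1 = [
  '"{competitor 1} pitch deck" OR "{competitor 1} fundraise announcement"',
  '"{product category} funded startups {current year}"',
];

const slots = {
  'competitor 1': 'Acme Analytics',      // hypothetical competitor
  'product category': 'sales analytics', // hypothetical category
  'current year': '2025',
};

const queries = round1.map((t) => expandQuery(t, slots));
```

Unfilled slots pass through unchanged, so a partially specified intake still yields usable (if generic) queries.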
@@ -0,0 +1,129 @@
+ # Verification Agent Protocol
+
+ After pitch construction completes, spawn a **Verification Agent (V1)** that audits all deliverables for consistency, accuracy, and completeness. This step catches issues that individual agents and synthesis can miss.
+
+ ## When to Run
+
+ - After all pitch deliverables are written (before the scorecard/review phase)
+ - Uses one agent: **V1: Verification**
+
+ ## Agent Task
+
+ The V1 agent reads ALL deliverable files (not raw files) and checks them against the rules below. It produces a `verification-report.md` in the project directory.
+
+ ## Universal Checks
+
+ These apply to every skill in the startup plugin:
+
+ ### 1. Claims Without Source
+ Every quantitative claim must have a data label: **[Data]**, **[Estimate]**, **[Assumption]**, or **[Opinion]**. Flag any number, percentage, or factual assertion without a label.
+
+ ### 2. Internal Contradictions
+ Cross-check numbers and statements across deliverable files. Flag when:
+ - The same metric appears with different values in two files
+ - A claim in one file contradicts a claim in another
+ - Confidence ratings disagree (e.g., a claim rated "High confidence" in one file while the evidence cited in another file supports only Medium)
+
+ ### 3. Confidence Rating Consistency
+ Verify that confidence ratings match the evidence:
+ - A claim with only one Tier 3 source cannot be rated **High**
+ - A claim with multiple Tier 1 sources should not be rated **Low**
+ - Every major section must have a confidence rating
+
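The two rating rules above are mechanical enough to sketch. A minimal illustration (the rating names and tier numbering follow the text; the function itself is mine, not part of the package):

```javascript
// Illustrative sketch of the confidence-consistency rules above:
// a single Tier 3 source cannot support a High rating, and multiple
// Tier 1 sources should not sit under a Low rating.
function confidenceIssues(rating, sourceTiers) {
  const issues = [];
  if (rating === 'High' && sourceTiers.length === 1 && sourceTiers[0] === 3) {
    issues.push('High rating backed by a single Tier 3 source');
  }
  const tier1Count = sourceTiers.filter((t) => t === 1).length;
  if (rating === 'Low' && tier1Count >= 2) {
    issues.push('Low rating despite multiple Tier 1 sources');
  }
  return issues;
}
```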
+ ### 4. Data Gaps Declared
+ Every deliverable must have a Data Gaps section. Flag:
+ - Files missing the Data Gaps section entirely
+ - Sections where data is clearly thin but no gap is declared
+ - Gaps mentioned in raw files that didn't make it into the synthesized deliverables
+
+ ### 5. Flags Present
+ Every deliverable must end with Red Flags and Yellow Flags sections. Flag:
+ - Files missing these sections
+ - Files with "No flags identified" where the content clearly contains risks
+
+ ### 6. Stale Data
+ Flag any data point older than 18 months that isn't marked as potentially outdated.
+
+ ### 7. Duplicate Sources
+ Flag when the same source is used as "independent corroboration" in multiple places. Two claims both citing the same blog post don't have independent verification.
+
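Two of the universal checks above lend themselves to a short sketch. The number-detection regex below is a rough heuristic of my own, not the package's implementation, and the 18-month threshold comes straight from check 6:

```javascript
// Illustrative sketch of checks 1 (unlabeled claims) and 6 (stale data).
const LABEL = /\[(Data|Estimate|Assumption|Opinion)\]/;

function unlabeledClaims(lines) {
  // Heuristic: a line containing a digit but no data label gets flagged.
  return lines.filter((line) => /\d/.test(line) && !LABEL.test(line));
}

function isStale(dataPointDate, now) {
  // Older than 18 months counts as stale.
  const months =
    (now.getFullYear() - dataPointDate.getFullYear()) * 12 +
    (now.getMonth() - dataPointDate.getMonth());
  return months > 18;
}
```

A real pass would also need to skip headings and table separators; the point here is only the shape of the rule.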
+ ## Skill-Specific Checks: startup-pitch
+
+ In addition to the universal checks above, verify:
+
+ ### Pitch Claims vs. Source Data
+ - Every numerical claim in the pitch narratives (traction, market size, revenue, growth rate) must be traceable to either intake data or prior session files
+ - If the pitch was built on startup-design data, verify that market size numbers match `market-analysis.md`
+ - If built on startup-competitors data, verify competitive claims match `competitors-report.md`
+ - Flag any number in the pitch that doesn't appear in any source file
+
+ ### Cross-Format Consistency
+ - The 2-sentence opener must be identical (or a faithful compression) across all pitch formats
+ - Market size, traction numbers, and the ask must be consistent across pitch-full, pitch-5min, pitch-2min, pitch-1min, and pitch-email
+ - The unique insight must be the same concept across all formats (phrased differently but same core idea)
+ - Team credentials must not vary between formats (no inflated version in one, modest in another)
+
+ ### Pitch vs. Appendix Alignment
+ - Every objection in `pitch-appendix.md` must have a corresponding answer
+ - Known weaknesses in the appendix must match the Red/Yellow Flags in pitch files
+ - Q&A answers must not contradict claims made in the pitch narratives
+ - If competitive backup references battle cards, verify the competitor names and claims match
+
+ ### Honesty Checks
+ - Flag any traction claim without a timeframe ("10K users" without "in X months")
+ - Flag top-down TAM without bottom-up math
+ - Flag team credentials that are titles without accomplishments
+ - Flag "no competition" or equivalent claims
+
+ ## Output: verification-report.md
+
+ ```markdown
+ # Verification Report: {project-name}
+ *Generated: {date}*
+
+ ## Summary
+ - **Critical issues:** {count}
+ - **Warnings:** {count}
+ - **Info:** {count}
+
+ ## Critical Issues
+ Issues that could mislead decision-making. The process pauses here for user review.
+
+ ### {Issue title}
+ - **File(s):** {affected files}
+ - **Section:** {section name}
+ - **Problem:** {description}
+ - **Suggested fix:** {how to resolve}
+
+ ## Warnings
+ Issues that reduce quality but don't block decisions.
+
+ ### {Issue title}
+ - **File(s):** {affected files}
+ - **Problem:** {description}
+ - **Suggested fix:** {how to resolve}
+
+ ## Info
+ Minor improvements and observations.
+
+ - {observation}
+ - {observation}
+
+ ## Verification Checklist
+ - [ ] All quantitative claims labeled
+ - [ ] No internal contradictions found
+ - [ ] Confidence ratings consistent with evidence
+ - [ ] Data gaps declared in all deliverables
+ - [ ] Red/Yellow flags present in all deliverables
+ - [ ] No stale data unmarked
+ - [ ] No duplicate-source false corroboration
+ - [ ] Pitch claims traceable to source data (skill-specific)
+ - [ ] Cross-format consistency verified (skill-specific)
+ - [ ] Pitch and appendix aligned (skill-specific)
+ - [ ] Honesty checks passed (skill-specific)
+ ```
+
+ ## Flow Control
+
+ - **If Critical issues > 0:** Pause. Show the user: "Verification found {N} critical issues that could affect decision-making." List them. Ask: "Should I fix these before continuing, or proceed as-is?"
+ - **If only Warnings/Info:** Show a one-line summary: "Verification complete: {N} warnings, {N} info items. See `verification-report.md` for details." Continue to scorecard/review phase.
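The flow-control rule above reduces to a single branch on the critical count. A minimal sketch, assuming a report object with `critical`, `warnings`, and `info` arrays (the shape is my assumption, not defined by the package):

```javascript
// Illustrative sketch of the flow-control rule: critical issues pause the
// run for user review; warnings and info only produce a summary line.
function flowControl(report) {
  if (report.critical.length > 0) {
    return {
      pause: true,
      message: `Verification found ${report.critical.length} critical ` +
        'issues that could affect decision-making.',
    };
  }
  return {
    pause: false,
    message: `Verification complete: ${report.warnings.length} warnings, ` +
      `${report.info.length} info items. See verification-report.md for details.`,
  };
}
```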
@@ -0,0 +1,318 @@
+ ---
+ name: sk-positioning
+ description: "Phase 4: Market positioning strategy using April Dunford's framework, enriched with JTBD discovery, Moore positioning statement, and Neumeier's Onliness Test. Produces a complete positioning document, positioning statements, competitive alternatives map, and market category analysis. Informed by your Gold niche and competitive research from prior phases."
+ ---
+
+ # Startup Positioning
+
+ Market positioning strategy that produces a complete positioning document, Moore + Neumeier positioning statements, competitive alternatives map, and market category analysis. Built on April Dunford's framework, enriched with JTBD discovery and stress-tested with Neumeier's Onliness Test.
+
+ ## How It Works
+
+ ```
+ INTAKE -> RESEARCH (2 parallel waves) -> POSITIONING SYNTHESIS
+ ```
+
+ The process: understand the product and its customers, research competitive alternatives and market context, then build positioning through Dunford's 5+1 components. Typical runtime: 10-15 minutes in Claude Code (parallel agents), 20-30 minutes in Claude.ai (sequential).
+
+ ### Language
+
+ Default output language is **English**. If the user writes in another language or explicitly requests one, use that language for all outputs instead.
+
+ ---
+
+ ## Phase 1: Intake
+
+ Short and focused -- 1-2 rounds of questions. The goal is enough context to research alternatives and build positioning.
+
+ ### Check for Prior StartupKit Session
+
+ Before asking questions, ask the user for their session name, then check for prior phase data:
+
+ **From Phase 2 (Niche):**
+ - `workspace/sessions/{name}/02-niches.md` -- Gold niche (Person, Problem, Promise)
+
+ **From Phase 3 (Competitors):**
+ - `workspace/sessions/{name}/03-competitors.md` -- Competitive research summary
+ - `workspace/sessions/{name}/03-competitors/competitors-report.md` -- Full strategic analysis
+ - `workspace/sessions/{name}/03-competitors/battle-cards/` -- Per-competitor battle cards
+ - `workspace/sessions/{name}/03-competitors/pricing-landscape.md` -- Pricing analysis
+
+ If `02-niches.md` exists, extract the Gold Niche (Person, Problem, Promise) and any scoring data.
+
+ If `03-competitors.md` exists, extract:
+ - Key competitors and their strengths/weaknesses
+ - Pricing landscape (market price range, value metrics, whitespace)
+ - Strategic opportunities and risks
+
+ If battle cards exist, read them to seed the competitive alternatives map.
+
+ Tell the user: "I found your Gold niche and competitive research from prior phases. I'll use this as the starting point for positioning: [Person] who [Problem], with [X] competitors mapped."
+
+ If `03-competitors.md` does not exist, tell the user: "For the strongest positioning, consider running `/sk-competitors` first. But we can proceed with what we have."
+
+ If no prior data exists at all, fall back to the intake questions below.
+
+ ### What to Ask (if no prior data exists)
+
+ **Round 1 -- Core context:**
+ - What's your product? (one sentence is fine)
+ - What problem does it solve and for whom?
+ - What do your customers do today instead of using you? (alternatives, workarounds, doing nothing)
+ - Who are your best existing customers? (if any -- describe them, not demographics)
+
+ **Round 2 -- Sharpening (only if needed):**
+ - How is your product different from the alternatives you mentioned?
+ - Have you tried positioning before? What didn't work?
+ - Are there competitors you're often compared to?
+
+ Don't over-interview. If the user gives a clear description upfront, move to research. The positioning process itself will surface what matters.
+
+ ---
+
+ ## Phase 1.5: Research Depth Assessment
+
+ After intake, assess market complexity and present the Research Depth recommendation to the user.
+
+ > **Reference:** Read `references/research-scaling.md` for the complexity scoring matrix, tier definitions, wave configurations, and the user communication template.
+
+ ### Process
+
+ 1. Score three factors from the intake: market breadth (1-3), known competitors (1-3), geographic scope (1-3)
+ 2. Sum the scores (range 3-9) and map to a tier: Light (3-4), Standard (5-7), Deep (8-9)
+ 3. Present the Research Depth table to the user (see `research-scaling.md` for the exact template)
+ 4. Wait for user response: **light**, **deep**, or **ok** to accept the recommendation
+ 5. Record the selected tier
+
+ The selected tier determines the number of agents per wave and search rounds per agent in Phase 2. See `research-scaling.md` for exact wave configurations per tier.
+
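Steps 1-2 above are a fixed scoring rule. As a minimal sketch (the function name is mine; the matrix itself lives in `research-scaling.md`):

```javascript
// Sketch of the tier mapping above: three factors scored 1-3, summed
// (range 3-9), then mapped to Light (3-4), Standard (5-7), Deep (8-9).
function researchTier(marketBreadth, knownCompetitors, geographicScope) {
  const score = marketBreadth + knownCompetitors + geographicScope;
  if (score <= 4) return 'Light';
  if (score <= 7) return 'Standard';
  return 'Deep';
}
```

For example, a narrow market (1) with a few known competitors (2) in a single region (1) sums to 4 and maps to the Light tier.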
89
+ ---
90
+
91
+ ## Phase 2: Research
92
+
93
+ Two parallel research waves exploring competitive alternatives and market context. Together they provide the raw material for Dunford's 5+1 positioning components.
94
+
95
+ ### Environment Detection
96
+
97
+ Check if the `Agent` tool is available:
98
+
99
+ - **Agent tool available (Claude Code):** Spawn all agents within each wave in parallel. This is faster.
100
+ - **Agent tool NOT available (Claude.ai, web):** Execute research sequentially, following the same templates. Same depth, just slower.
101
+
102
+ ### Web Search
103
+
104
+ This skill requires WebSearch for real data. If WebSearch is unavailable or denied, fall back to **Knowledge-Based Mode**: use training data, mark all findings with **[Knowledge-Based -- verify independently]**, and reduce confidence ratings by one level.
105
+
106
+ > **Reference:** Read `references/research-principles.md` before starting any wave. It defines source quality tiers, cross-referencing rules, and how to handle data gaps.
107
+
108
+ ### Wave 1: Competitive Alternatives & Customer Context
109
+
110
+ > **Reference:** Read `references/research-wave-1-alternatives.md` for agent templates.
111
+
112
+ Two agents (or two sequential blocks):
113
+
114
+ **A1: Alternative Mapping (JTBD Lens)** -- Map ALL competitive alternatives, not just direct competitors. Include: direct competitors, adjacent tools competing for the same budget, manual processes, spreadsheets, hiring someone, doing nothing / status quo. For each: what job does the customer hire it for, where does it fall short, what triggers switching? The goal is the full set of things your product replaces.
115
+
116
+ **A2: Customer Intelligence** -- Mine voice-of-customer data: reviews, forums, communities. Extract: pain points with current alternatives, exact language customers use, what "better" means to them, best-fit customer profile (who gets the most value fastest), switching triggers (what makes someone finally change). Build a **language map** -- the words customers use to describe their problem and desired outcome.
117
+
118
+ ### Wave 2: Market Frame & Trends
119
+
120
+ > **Reference:** Read `references/research-wave-2-market-frame.md` for agent templates.
121
+
122
+ Two agents (or two sequential blocks):
123
+
124
+ **B1: Market Category Analysis** -- Identify 3-5 candidate market categories. For each: what do buyers expect from this category, who are the leaders, what's the competitive dynamic, how mature is it? Apply Dunford's category types: head-to-head (existing category), big fish/small pond (subcategory), or category creation. Assess which frame makes your unique strengths matter most.
125
+
126
+ **B2: Trend & Timing Analysis** -- Identify relevant trends: technology shifts, behavioral changes, regulatory moves. For each: is it real or hype, how does it affect buyer expectations, does it make your positioning stronger or weaker? Assess timing -- are you early, on-time, or late to the trend? Only include trends that genuinely change how buyers evaluate solutions.
127
+
128
+ ---
129
+
130
+ ### Post-Research Checkpoint
131
+
132
+ After both waves complete, before synthesis, briefly present what the research found to the user: the competitive alternative landscape (how many direct, adjacent, status quo), the strongest customer pains, and the most promising category candidates. Ask: "Does this align with your expectations? Anything to adjust before I synthesize the positioning?"
133
+
134
+ Keep it to one message -- this is a quick alignment check, not a full report.
135
+
136
+ ---
137
+
138
+ ## Phase 3: Positioning Synthesis
139
+
140
+ > **Reference:** Read `references/research-synthesis.md` for synthesis protocol and Dunford process details.
141
+
142
+ After the checkpoint, build positioning through Dunford's 5+1 components **in order**. The sequence matters -- each step builds on the previous.
143
+
144
+ ### The 5+1 Components
145
+
146
+ 1. **Competitive Alternatives** -- From Wave 1. What would customers use if your product didn't exist? This is the anchor -- positioning is always relative.
147
+
148
+ 2. **Unique Attributes** -- What do you have that the alternatives lack? Be specific and honest. Features, architecture, team expertise, business model, speed -- anything defensible.
149
+
150
+ **PAUSE -- User Input Required.** Present the research-derived attributes to the user. Ask them to confirm, add, or remove before proceeding to Value Themes. The founder knows capabilities that research can't surface.
151
+
152
+ 3. **Value Themes** -- Translate each unique attribute into a customer outcome. Attribute -> "so what?" -> value. Group related attributes into 2-3 value themes. Use customer language from Wave 1's language map.
153
+
154
+ 4. **Best-Fit Customers** -- From Wave 1 customer intelligence. Who cares most about your value themes? Define by characteristics that make them care, not demographics. These customers should be reachable, recognizable, and willing to pay.
155
+
156
+ 5. **Market Category** -- From Wave 2. Choose the category frame that makes your value obvious. Present 3-5 options with trade-offs. Recommend one. The right category triggers the right buyer expectations.
157
+
158
+ 6. **Trend Overlay (optional)** -- From Wave 2. Only include if a genuine trend makes your positioning stronger. Forced trend alignment is worse than none.
159
+
160
+ ### Validation
161
+
162
+ Two stress tests before finalizing:
163
+
164
+ **Neumeier Onliness Test:**
165
+
166
+ Basic form:
167
+ > "Our [product] is the only [category] that [differentiator]."
168
+
169
+ Extended form (6 elements -- WHAT/HOW/WHO/WHERE/WHY/WHEN):
170
+ > "Our [product] is the only [category] that [differentiator] for [target] who [need] in [context]."
171
+
172
+ If you can't fill the basic form convincingly -- if "only" feels like a stretch -- the positioning is too weak. Iterate.
173
+
174
+ **Ries/Trout Mental Ladder:**
175
+ - Is it simple enough to remember?
176
+ - Does it claim one clear rung?
177
+ - Is that rung available (not owned by a competitor)?
178
+ - Can you explain it in one sentence?
179
+
180
+ If either test fails, revisit the 5+1 components. Don't ship weak positioning.
181
+
182
+ ### Output Files
183
+
184
+ Every deliverable file must start with a standardized header: `# {Title}: {product}` followed by `*Skill: sk-positioning | Generated: {date}*`. Every deliverable must end with Red Flags, Yellow Flags, and Sources sections (see templates in `references/research-synthesis.md`).
185
+
186
+ **`workspace/sessions/{name}/04-positioning/positioning-doc.md`** -- The main deliverable:
187
+ - Executive summary (positioning in 3 sentences)
188
+ - The 5+1 components with supporting evidence
189
+ - Strength assessment per component (Strong / Moderate / Needs Work)
190
+ - Strategic recommendations and next steps
191
+ - Data gaps & limitations
192
+
193
+ **`workspace/sessions/{name}/04-positioning/positioning-statement.md`** -- Statements and messaging:
194
+ - Moore template: "For [target] who [need], [product] is a [category] that [benefit]. Unlike [alternative], we [differentiator]."
195
+ - Neumeier Onliness Statement (basic + extended)
196
+ - Elevator pitch (30-second version)
197
+ - Tagline candidates with stress-tested "Possible Misread" column
198
+ - One-liner variants for different channels (GitHub, marketplace, social, elevator)
199
+ - Freemium positioning (if applicable)
200
+
201
+ **`workspace/sessions/{name}/04-positioning/competitive-alternatives.md`** -- Complete alternatives map:
202
+ - All alternatives (direct, adjacent, manual, status quo)
203
+ - Per alternative: job hired for, strengths, shortcomings, switching triggers
204
+ - Your unique attributes vs. each alternative
205
+
206
+ **`workspace/sessions/{name}/04-positioning/market-category-analysis.md`** -- Category strategy:
207
+ - 3-5 candidate categories with buyer expectations
208
+ - Category type assessment (head-to-head / subcategory / creation)
209
+ - Recommendation with reasoning
210
+ - Implementation (category label, tagline direction, buyer expectation alignment)
211
+ - Red flags and yellow flags
212
+
213
+ **`workspace/sessions/{name}/04-positioning/messaging-implications.md`** -- Bridge from positioning to copy:
214
+ - Messaging hierarchy (what to communicate first, second, third)
215
+ - Category label (exact phrase to use everywhere)
216
+ - Value anchor (what to compare value to, separate from category)
217
+ - Customer language vs. category language map (which words are customer verbs, which are category nouns)
218
+ - Words to use / avoid
219
+ - Social proof guidelines
220
+ - Freemium positioning (if applicable)
221
+
222
+ ### Raw Data
223
+
224
+ Each agent saves its raw output to `workspace/sessions/{name}/04-positioning/raw/`. The synthesis phase reads these raw files and produces the polished deliverables above. Agents must NOT write directly to deliverable paths -- raw and synthesized output are separate.
225
+
226
+ Raw research files:
227
+ - `alternative-mapping.md`
228
+ - `customer-intelligence.md`
229
+ - `market-categories.md`
230
+ - `trends-timing.md`
231
+
232
### Summary File

After completing synthesis, generate a summary file at `workspace/sessions/{name}/04-positioning.md` containing:

- **Positioning (3-Sentence Summary)**: Concise positioning overview
- **Positioning Statement (Moore)**: For [target] who [need], [product] is a [category] that [benefit]. Unlike [alternative], we [differentiator].
- **Onliness Statement (Neumeier)**: Our [product] is the only [category] that [differentiator].
- **Market Category**: Category name + type (Head-to-head / Subcategory / Category creation)
- **Value Themes**: 2-3 numbered value themes
- **Best-Fit Customer**: Characteristics-based description
- **Elevator Pitch (30 seconds)**: Ready-to-use pitch
- **Full Deliverables**: Links to files in the `04-positioning/` subdirectory

This summary file is what downstream phases (offer, leads, pitch) will read. Keep it concise.

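As a sketch, a summary file following the bullets above could be skeletonized like this (every bracketed value is a placeholder to fill from synthesis; headings and order are a suggestion, not a mandated format):

```markdown
# Positioning -- {name}

## Positioning (3-Sentence Summary)
[Three sentences distilling the position]

## Positioning Statement (Moore)
For [target] who [need], [product] is a [category] that [benefit].
Unlike [alternative], we [differentiator].

## Onliness Statement (Neumeier)
Our [product] is the only [category] that [differentiator].

## Market Category
[Category name] -- [Head-to-head / Subcategory / Category creation]

## Value Themes
1. [Theme one]
2. [Theme two]

## Best-Fit Customer
[Characteristics-based description]

## Elevator Pitch (30 seconds)
[Ready-to-use pitch]

## Full Deliverables
- [messaging-implications.md](04-positioning/messaging-implications.md)
```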
---

## Phase 3.5: Research Verification

After all positioning deliverables are written, run a verification pass.

> **Reference:** Read `references/verification-agent.md` for the full verification protocol, universal checks, and skill-specific checks.

### Process

1. Spawn agent **V1: Verification** -- it reads all deliverable files and checks for: unlabeled claims, internal contradictions, confidence-rating consistency, missing data gaps, missing flags, stale data, and duplicate-source false corroboration
2. V1 also runs startup-positioning-specific checks: positioning statement vs. research data, JTBD vs. customer intelligence, cross-deliverable coherence, and validation-test integrity
3. V1 produces `workspace/sessions/{name}/04-positioning/verification-report.md`
4. **If Critical issues are found:** Pause and present them to the user. Ask: fix first, or proceed as-is?
5. **If only Warnings/Info:** Show a one-line summary

In Claude.ai, or when the Agent tool is unavailable, run the verification checks yourself in the main conversation, following the same protocol.

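The exact report format is defined in `references/verification-agent.md`; a minimal skeleton consistent with the severity levels above might look like this (all bracketed entries are placeholders):

```markdown
# Verification Report -- 04-positioning

## Summary
Critical: [n] | Warnings: [n] | Info: [n]

## Critical
- [C1] [file]: [issue -- e.g. positioning statement contradicts research data]

## Warnings
- [W1] [file]: [issue -- e.g. unlabeled claim, stale data]

## Info
- [I1] [observation]
```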
---

## Honesty Protocol

> **Reference:** Read `references/honesty-protocol.md` for the full protocol and anti-pattern details.

Positioning is only useful if it's honest. The core rules apply (label claims, quantify, declare gaps), plus positioning-specific additions:

1. **No aspirational positioning.** Position on what you ARE, not what you hope to become. Aspirational positioning crumbles at first customer contact.
2. **Challenge "we're unique."** The Onliness Test must be genuinely convincing. If it reads like marketing fluff, iterate.
3. **Research wins over narrative.** When customer data contradicts internal beliefs about positioning, the data wins.
4. **Flag category-creation risk.** Most startups can't afford to educate a market. Default to existing categories unless the evidence is overwhelming.

| Anti-Pattern | What It Looks Like | What to Say |
|---|---|---|
| "We're for everyone" | No target segment defined | "If you're for everyone, you're for no one. Who cares MOST?" |
| Feature-based positioning | Leading with features, not outcomes | "Customers don't buy features. What outcome do they get?" |
| Aspirational positioning | "We'll be the AI-powered..." | "Position on what you deliver today, not the roadmap." |
| Category-of-one | Inventing a category to avoid comparison | "New categories cost millions. Is there an existing frame?" |
| Copycat positioning | Same message as the market leader | "Find genuinely different ground -- you can't out-position the leader." |

See `references/honesty-protocol.md` for the full anti-pattern table (7 entries) and the detailed protocol.


---

## Reference Files

Read only what you need for the current phase.

| File | When to Read | ~Lines | Purpose |
|------|-------------|--------|---------|
| `honesty-protocol.md` | Start of session | ~73 | Full honesty protocol with anti-patterns |
| `research-principles.md` | Before starting Phase 2 | ~65 | Source quality, cross-referencing, data gaps |
| `research-wave-1-alternatives.md` | When running Wave 1 | ~235 | Agent templates for alternatives + customer intel |
| `research-wave-2-market-frame.md` | When running Wave 2 | ~210 | Agent templates for categories + trends |
| `research-synthesis.md` | After both waves complete | ~380 | Synthesis protocol, Dunford process, validation tests, messaging implications |
| `frameworks.md` | During Phase 3 | ~133 | Dunford/Moore/Neumeier/JTBD/Ries reference |
| `research-scaling.md` | After intake, before Phase 2 | ~75 | Complexity scoring, tier definitions, wave configurations |
| `verification-agent.md` | After synthesis | ~80 | Verification protocol, universal + skill-specific checks |

---

## Save & Next

1. Save the main summary to `workspace/sessions/{name}/04-positioning.md`.
2. Save the full deliverables to the `workspace/sessions/{name}/04-positioning/` directory.
3. Update `workspace/sessions/{name}/00-session.md`:
   - Change the Phase 4 Positioning status from `[ ] Not Started` to `[x] Complete`
   - Set Active Phase to "Phase 5: Offer"
   - Set Next Recommended to "Phase 5: Offer"
   - Fill in the "Positioning" section:
     - **Positioning Statement:** [Moore template one-liner]
     - **Market Category:** [category name]
4. Tell the user: "Positioning complete! Your position: [Moore statement]. When you're ready, run `/sk-offer` to build your Grand Slam Offer informed by this positioning."
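
The session file's exact layout is set at intake; as a sketch, the fragment of `00-session.md` touched by step 3 might read like this after the update (bracketed values are placeholders):

```markdown
- [x] Phase 4: Positioning -- Complete

**Active Phase:** Phase 5: Offer
**Next Recommended:** Phase 5: Offer

## Positioning
- **Positioning Statement:** For [target] who [need], [product] is a
  [category] that [benefit]. Unlike [alternative], we [differentiator].
- **Market Category:** [category name]
```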