startup-ideation-kit 1.0.0 → 2.0.0

Files changed (42)
  1. package/README.md +46 -34
  2. package/bin/cli.js +7 -1
  3. package/package.json +7 -3
  4. package/skills/sk-competitors/SKILL.md +284 -0
  5. package/skills/sk-competitors/references/honesty-protocol.md +72 -0
  6. package/skills/sk-competitors/references/research-principles.md +54 -0
  7. package/skills/sk-competitors/references/research-scaling.md +106 -0
  8. package/skills/sk-competitors/references/research-synthesis.md +237 -0
  9. package/skills/sk-competitors/references/research-wave-1-profiles-pricing.md +186 -0
  10. package/skills/sk-competitors/references/research-wave-2-sentiment-mining.md +189 -0
  11. package/skills/sk-competitors/references/research-wave-3-gtm-signals.md +192 -0
  12. package/skills/sk-competitors/references/verification-agent.md +126 -0
  13. package/skills/sk-export/SKILL.md +36 -12
  14. package/skills/sk-leads/SKILL.md +9 -8
  15. package/skills/sk-money/SKILL.md +7 -6
  16. package/skills/sk-niche/SKILL.md +3 -3
  17. package/skills/sk-offer/SKILL.md +15 -6
  18. package/skills/sk-pitch/SKILL.md +461 -0
  19. package/skills/sk-pitch/references/honesty-protocol.md +62 -0
  20. package/skills/sk-pitch/references/pitch-frameworks.md +261 -0
  21. package/skills/sk-pitch/references/research-principles.md +64 -0
  22. package/skills/sk-pitch/references/research-scaling.md +96 -0
  23. package/skills/sk-pitch/references/research-synthesis.md +423 -0
  24. package/skills/sk-pitch/references/research-wave-1-audience-narrative.md +164 -0
  25. package/skills/sk-pitch/references/research-wave-2-competitive-framing.md +159 -0
  26. package/skills/sk-pitch/references/verification-agent.md +129 -0
  27. package/skills/sk-positioning/SKILL.md +318 -0
  28. package/skills/sk-positioning/references/frameworks.md +132 -0
  29. package/skills/sk-positioning/references/honesty-protocol.md +72 -0
  30. package/skills/sk-positioning/references/research-principles.md +64 -0
  31. package/skills/sk-positioning/references/research-scaling.md +96 -0
  32. package/skills/sk-positioning/references/research-synthesis.md +419 -0
  33. package/skills/sk-positioning/references/research-wave-1-alternatives.md +236 -0
  34. package/skills/sk-positioning/references/research-wave-2-market-frame.md +208 -0
  35. package/skills/sk-positioning/references/verification-agent.md +128 -0
  36. package/skills/sk-skills/SKILL.md +9 -8
  37. package/skills/sk-validate/SKILL.md +8 -6
  38. package/skills/startupkit/SKILL.md +39 -17
  39. package/templates/competitors-template.md +43 -0
  40. package/templates/pitch-template.md +48 -0
  41. package/templates/positioning-template.md +51 -0
  42. package/templates/session-template.md +26 -7
package/README.md CHANGED
@@ -1,6 +1,6 @@
  # StartupKit
 
- An interactive startup ideation toolkit powered by Claude Code skills. Walk through a structured brainstorming process to go from "I want to start a business" to a validated, scored business idea with a complete offer, pricing model, and go-to-market plan.
+ An interactive 11-phase startup ideation toolkit powered by Claude Code skills. Walk through a structured brainstorming process to go from "I want to start a business" to a validated, scored business idea with competitive intelligence, market positioning, a complete offer, pricing model, go-to-market plan, and investor-ready pitch.
 
  ## What This Is
 
@@ -15,71 +15,84 @@ This toolkit synthesizes frameworks and methodologies from:
  - **Alex Hormozi** -- *$100M Offers* (Grand Slam Offer framework, Value Equation, offer enhancement through Scarcity/Urgency/Bonuses/Guarantees/Naming), *$100M Money Models* (Attraction/Upsell/Downsell/Continuity offer types), *$100M Leads* (lead generation), and the *$100M Playbook* (Lead Nurture: the 4 Pillars of Availability, Speed, Personalization, and Volume)
  - **Ali Abdaal** -- The Diverge/Converge/Emerge creativity process, the Holy Trinity of business ideas (Person + Problem + Product), Craft Skills auditing, the Six P's offer framework (Person/Problem/Promise/Plan/Product/Price), the 5-phase business growth roadmap, and validation through discovery calls
  - **Taki Moore** -- The 3-question niche scoring framework (Do I like them? Can I help them? Will they pay?)
+ - **April Dunford** -- *Obviously Awesome* (5+1 positioning framework, market category strategy)
+ - **Marty Neumeier** -- *Zag* (Onliness Statement for differentiation validation)
+ - **Geoffrey Moore** -- *Crossing the Chasm* (positioning statement template)
+ - **Ferdinando Bons** -- [startup-skill](https://github.com/ferdinandobons/startup-skill) (competitive intelligence research architecture, positioning synthesis, pitch construction framework). Phases 3 (Competitors), 4 (Positioning), and 10 (Pitch) are adapted from this project.
  - **Additional influences** -- Rob Fitzpatrick's *The Mom Test* (customer interview principles), MJ DeMarco's *The Millionaire Fastlane* (wealth-building lanes), James Altucher's ideation techniques, and Shapiro's 4 viable business model categories
 
  All credit for the underlying business frameworks goes to these creators. StartupKit simply packages their wisdom into an interactive workflow you can use with Claude.
 
- ## The 8 Phases
+ ## The 11 Phases
 
  | # | Phase | Command | What You Do |
  |---|-------|---------|-------------|
  | 0 | Start | `/startupkit` | Create or continue a brainstorming session |
  | 1 | Diverge | `/sk-diverge` | Brainstorm skills, passions, and 50+ problems |
  | 2 | Niche | `/sk-niche` | Score and rank niche ideas (person + problem) |
- | 3 | Offer | `/sk-offer` | Build a Grand Slam Offer with the Value Equation |
- | 4 | Validate | `/sk-validate` | Plan discovery calls, outreach scripts, and MVP |
- | 5 | Money | `/sk-money` | Design pricing and your full money model |
- | 6 | Leads | `/sk-leads` | Plan lead channels and nurture sequences |
- | 7 | Skills | `/sk-skills` | Match AI-powered skills to your business |
- | 8 | Export | `/sk-export` | Generate a clean one-pager summary |
+ | 3 | Competitors | `/sk-competitors` | Deep competitive research with battle cards |
+ | 4 | Positioning | `/sk-positioning` | Market positioning with April Dunford framework |
+ | 5 | Offer | `/sk-offer` | Build a Grand Slam Offer with the Value Equation |
+ | 6 | Validate | `/sk-validate` | Plan discovery calls, outreach scripts, and MVP |
+ | 7 | Money | `/sk-money` | Design pricing and your full money model |
+ | 8 | Leads | `/sk-leads` | Plan lead channels and nurture sequences |
+ | 9 | Skills | `/sk-skills` | Match AI-powered skills to your business |
+ | 10 | Pitch | `/sk-pitch` | Build investor-ready pitch scripts and practice |
+ | 11 | Export | `/sk-export` | Generate a comprehensive one-pager summary |
 
  You don't have to follow them in order. Jump to any phase, revisit previous ones, or skip what doesn't apply.
 
  ## Quick Start
 
- ### Install via npm
+ ### Install
 
  ```bash
- npx startup-ideation-kit init
+ npx skills add mohamedameen-io/StartupKit
  ```
 
- This installs the skills into `.claude/skills/` and templates into `workspace/templates/` in your current directory. No cloning required.
+ That's it. The skills are installed into your project and ready to use.
 
- To remove:
+ ### Then brainstorm
 
+ 1. Open Claude Code in your project directory
+ 2. Run `/startupkit` to create a new brainstorming session
+ 3. Work through the phases at your own pace -- Claude guides you with questions and frameworks
+ 4. Each phase saves its output as a structured markdown file in `workspace/sessions/your-session/`
+ 5. When you're done, run `/sk-export` to get a clean one-pager
+ 
+ ### Alternative install methods
+ 
+ **Via npm** (also copies workspace templates):
  ```bash
- npx startup-ideation-kit uninstall
+ npx startup-ideation-kit init
  ```
 
- ### Or clone the repo
- 
+ **Clone the repo:**
  ```bash
  git clone https://github.com/mohamedameen-io/StartupKit.git
  cd StartupKit
  ```
 
- ### Then brainstorm
- 
- 1. Open Claude Code in the directory
- 2. Run `/startupkit` to create a new brainstorming session
- 3. Work through the phases at your own pace -- Claude guides you with questions and frameworks
- 4. Each phase saves its output as a structured markdown file in `workspace/sessions/your-session/`
- 5. When you're done, run `/sk-export` to get a clean one-pager
- 
  ### What You Get At The End
 
  A `workspace/sessions/your-session/` folder containing:
 
  ```
- 00-session.md # Progress tracker
- 01-diverge.md # Your skills, passions, and problems list
- 02-niches.md # Scored and ranked niche ideas
- 03-offer.md # Complete Grand Slam Offer
- 04-validation.md # Validation plan with scripts and milestones
- 05-money-model.md # Pricing, offer ladder, and revenue projections
- 06-lead-strategy.md # Lead channels and nurture sequences
- 07-skills-match.md # AI skill recommendations
- 08-one-pager.md # Final exportable summary
+ 00-session.md # Progress tracker
+ 01-diverge.md # Your skills, passions, and problems list
+ 02-niches.md # Scored and ranked niche ideas
+ 03-competitors.md # Competitive research summary
+ 03-competitors/ # Full deliverables (battle cards, pricing, matrix)
+ 04-positioning.md # Positioning strategy summary
+ 04-positioning/ # Full deliverables (statements, alternatives, messaging)
+ 05-offer.md # Complete Grand Slam Offer
+ 06-validation.md # Validation plan with scripts and milestones
+ 07-money-model.md # Pricing, offer ladder, and revenue projections
+ 08-lead-strategy.md # Lead channels and nurture sequences
+ 09-skills-match.md # AI skill recommendations
+ 10-pitch.md # Investor pitch summary and scorecard
+ 10-pitch/ # Full pitch deliverables (all formats, appendix)
+ 11-one-pager.md # Final exportable summary
  ```
 
  ## Project Structure
@@ -90,13 +103,12 @@ StartupKit/
  workspace/
  templates/ # Blank worksheet templates for each phase
  sessions/ # Your brainstorming sessions (gitignored)
- resources/ # Reference materials
  ```
 
  ## Requirements
 
  - [Claude Code](https://claude.ai/code) (CLI, desktop app, or IDE extension)
- - Node.js (for `npx startup-ideation-kit init` only -- not needed if you clone the repo)
+ - Node.js (for `npx skills add` only -- not needed if you clone the repo)
 
  ## License
 
package/bin/cli.js CHANGED
@@ -7,11 +7,14 @@ const SKILLS = [
  "startupkit",
  "sk-diverge",
  "sk-niche",
+ "sk-competitors",
+ "sk-positioning",
  "sk-offer",
  "sk-validate",
  "sk-money",
  "sk-leads",
  "sk-skills",
+ "sk-pitch",
  "sk-export",
  ];
 
@@ -19,11 +22,14 @@ const TEMPLATES = [
  "session-template.md",
  "diverge-template.md",
  "niche-template.md",
+ "competitors-template.md",
+ "positioning-template.md",
  "offer-template.md",
  "validation-template.md",
  "money-model-template.md",
  "lead-strategy-template.md",
  "skills-match-template.md",
+ "pitch-template.md",
  "one-pager-template.md",
  ];
 
@@ -105,7 +111,7 @@ function init() {
  Next steps:
  1. Open Claude Code in this directory
  2. Run /startupkit to create a new brainstorming session
- 3. Follow the phases: /sk-diverge -> /sk-niche -> /sk-offer -> ...
+ 3. Follow the phases: /sk-diverge -> /sk-niche -> /sk-competitors -> ...
  `);
  }
 
package/package.json CHANGED
@@ -1,7 +1,7 @@
  {
  "name": "startup-ideation-kit",
- "version": "1.0.0",
- "description": "Interactive startup ideation kit powered by Claude Code skills. Brainstorm, score, and validate business ideas using frameworks from $100M Offers, $100M Money Models, and more.",
+ "version": "2.0.0",
+ "description": "Interactive 11-phase startup ideation kit powered by Claude Code skills. Brainstorm, score, research competitors, position, build offers, validate, model revenue, plan leads, match AI skills, pitch investors, and export -- using frameworks from $100M Offers, April Dunford, and more.",
  "bin": {
  "startupkit": "./bin/cli.js"
  },
@@ -12,7 +12,11 @@
  "business",
  "brainstorming",
  "hormozi",
- "offers"
+ "offers",
+ "competitive-intelligence",
+ "positioning",
+ "investor-pitch",
+ "april-dunford"
  ],
  "author": "Mohamed Ameen",
  "license": "MIT",
package/skills/sk-competitors/SKILL.md ADDED
@@ -0,0 +1,284 @@
+ ---
+ name: sk-competitors
+ description: "Phase 3: Deep competitive research on your Gold niche. Analyzes competitors' products, pricing, customer sentiment, GTM strategy, and growth signals using real web data. Produces battle cards, pricing landscape, and competitive matrix. Use when the user wants to understand their competitive landscape, analyze competitors, compare products, or research who they're competing against."
+ ---
+ 
+ # Startup Competitors
+ 
+ Deep competitive intelligence that goes beyond surface-level profiles. Produces actionable battle cards, pricing landscape analysis, and strategic vulnerability mapping using real web data.
+ 
+ ## How It Works
+ 
+ ```
+ INTAKE → RESEARCH (3 parallel waves) → SYNTHESIS → BATTLE CARDS
+ ```
+ 
+ The process is focused: understand the product, research competitors deeply across 3 dimensions, synthesize findings, and produce actionable output. Typical runtime: 15-25 minutes in Claude Code (parallel agents), 30-45 minutes in Claude.ai (sequential).
+ 
+ ### Language
+ 
+ Default output language is **English**. If the user writes in another language or explicitly requests one, use that language for all outputs instead.
+ 
+ ---
+ 
+ ## Phase 1: Intake
+ 
+ Short and focused — 1-2 rounds of questions, not an extended interview. The goal is just enough context to run targeted research.
+ 
+ ### Check for Prior StartupKit Session
+ 
+ Before asking questions, check if a StartupKit session exists. Ask the user for their session name, then look for:
+ 
+ - `workspace/sessions/{name}/02-niches.md` -- Gold niche selection
+ - `workspace/sessions/{name}/00-session.md` -- Session metadata
+
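A minimal sketch of this existence check as a Node helper; the function name and `fs` usage below are illustrative, not part of the package:

```js
// Illustrative helper (not shipped with the package): detect whether a prior
// StartupKit session already contains the Phase 2 outputs before interviewing.
const fs = require("fs");
const path = require("path");

function priorSessionFiles(sessionName) {
  const dir = path.join("workspace", "sessions", sessionName);
  return {
    niches: fs.existsSync(path.join(dir, "02-niches.md")),   // Gold niche selection
    session: fs.existsSync(path.join(dir, "00-session.md")), // session metadata
  };
}
```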
+ If `02-niches.md` exists, read it and extract the **Gold Niche** data:
+ - **Person**: The target customer description from the Gold Niche section
+ - **Problem**: The core pain from the Gold Niche section
+ - **Promise**: The transformation statement from the Gold Niche section
+ 
+ Also extract any Hormozi 4-criteria scores (Painful, Purchasing Power, Targetable, Growing) for market context.
+ 
+ Tell the user: "I found your Gold niche from Phase 2. I'll use it as the starting point for competitive research: [Person] who struggle with [Problem]."
+ 
+ Skip the intake interview if the niche data provides enough context (person, problem, and market are clear). Go straight to research depth assessment.
+ 
+ If no session data exists, fall back to the intake questions below.
+ 
+ ### What to Ask (if no prior data exists)
+ 
+ **Round 1 — The basics:**
+ - What's your product/idea? (one sentence is fine)
+ - What problem does it solve and for whom?
+ - What market/category are you in?
+ - Do you know any competitors already? (names, URLs)
+ 
+ **Round 2 — Sharpening (only if needed):**
+ - What geography/market are you targeting?
+ - What's your pricing model or range?
+ - What do you consider your key differentiator?
+ 
+ Don't over-interview. If the user gives a clear description upfront, skip straight to research. The competitive analysis itself will surface what matters.
+ 
+ ### Output
+ 
+ Save the main summary to `workspace/sessions/{name}/03-competitors.md` — a brief overview of the product, market, and known competitors. If it builds on prior session data, note the source files used.
+ 
+ ---
+ 
+ ## Phase 1.5: Research Depth Assessment
+ 
+ After intake, assess market complexity and present the Research Depth recommendation to the user.
+ 
+ > **Reference:** Read `references/research-scaling.md` for the complexity scoring matrix, tier definitions, wave configurations, and the user communication template.
+ 
+ ### Process
+ 
+ 1. Score three factors from the intake: market breadth (1-3), known competitors (1-3), geographic scope (1-3)
+ 2. Sum the scores (range 3-9) and map to a tier: Light (3-4), Standard (5-7), Deep (8-9); a sketch of this mapping follows this list
+ 3. Present the Research Depth table to the user (see `research-scaling.md` for the exact template)
+ 4. Wait for user response: **light**, **deep**, or **ok** to accept the recommendation
+ 5. Record the selected tier
+
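As a sketch, the tier mapping from step 2 in plain JavaScript (names are illustrative; the authoritative scoring matrix lives in `references/research-scaling.md`):

```js
// Illustrative only: sum the three 1-3 complexity scores and map to a tier.
function researchTier(marketBreadth, knownCompetitors, geographicScope) {
  const total = marketBreadth + knownCompetitors + geographicScope; // range 3-9
  if (total <= 4) return "Light";    // 3-4
  if (total <= 7) return "Standard"; // 5-7
  return "Deep";                     // 8-9
}

console.log(researchTier(2, 3, 2)); // "Standard"
```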
+ ---
+ 
+ ## Phase 2: Research
+ 
+ Three parallel research waves, each attacking the competitive landscape from a different angle. Together they produce a 360-degree view.
+ 
+ ### Environment Detection
+ 
+ Check if the `Agent` tool is available:
+ 
+ - **Agent tool available (Claude Code):** Spawn all agents within each wave in parallel. This is faster.
+ - **Agent tool NOT available (Claude.ai, web):** Execute research sequentially, following the same templates. Same depth, just slower.
+ 
+ ### Web Search
+ 
+ This skill requires WebSearch for real data. If WebSearch is unavailable or denied, fall back to **Knowledge-Based Mode**: use training data, mark all findings with **[Knowledge-Based — verify independently]**, and reduce confidence ratings by one level.
+ 
+ > **Reference:** Read `references/research-principles.md` before starting any wave. It defines source quality tiers, cross-referencing rules, and how to handle data gaps.
+ 
+ ### Wave 1: Competitor Profiles + Pricing Intelligence
+ 
+ > **Reference:** Read `references/research-wave-1-profiles-pricing.md` for agent templates.
+ 
+ Two agents (or two sequential blocks):
+ 
+ **A1: Competitor Deep-Dives** — Identify and profile 5-8 direct competitors plus 2-3 adjacent solutions (broader platforms, manual alternatives, tools from neighboring categories that compete for the same budget). For each: product, features, team size, funding, traction signals, strengths, weaknesses. Go beyond their marketing page — check reviews, job postings, and funding data.
+ 
+ **A2: Pricing Intelligence** — For each competitor: reverse-engineer the pricing model. Not just "it costs $49/mo" but: what's the value metric (per seat? per usage? flat?), how do tiers differentiate, what pricing psychology do they use (anchoring, decoy, charm pricing), what's the switching cost (technical, contractual, emotional). Build a tier-by-tier comparison.
+ 
+ ### Wave 2: Customer Sentiment Mining
+ 
+ > **Reference:** Read `references/research-wave-2-sentiment-mining.md` for agent templates.
+ 
+ Two agents (or two sequential blocks):
+ 
+ **B1: Review Mining** — Mine G2, Capterra, TrustRadius, Product Hunt, and App Store reviews for each competitor. Extract patterns: what do people praise? What do they complain about? What features do they request? Organize by competitor and by pain theme. Include verbatim quotes.
+ 
+ **B2: Forum & Community Mining** — Mine Reddit, Indie Hackers, Hacker News, Quora, and niche communities. Find: complaints about existing tools, "what do you use for X?" threads, migration stories, workaround discussions. Build a **language map** — the exact words customers use to describe their problems and desires. Identify **churn signals** — why people leave each competitor.
+ 
+ ### Wave 3: GTM & Strategic Signals
+ 
+ > **Reference:** Read `references/research-wave-3-gtm-signals.md` for agent templates.
+ 
+ Two agents (or two sequential blocks):
+ 
+ **C1: Go-to-Market Analysis** — For each competitor: primary acquisition channel, sales motion (self-serve vs. sales-led), content strategy (blog frequency, topics, quality), social presence, paid advertising signals, partnership plays. Build a **channel opportunity map** showing competitor saturation vs. opportunity per channel.
+ 
+ **C2: Strategic & Growth Signals** — Funding trajectory (rounds, investors, timing), hiring patterns (engineering-heavy = building, sales-heavy = scaling, support-heavy = struggling), content/SEO footprint (what keywords they rank for, where the gaps are), product roadmap signals from changelogs and public statements. Identify **content pillars** each competitor owns and which topics nobody covers well.
+ 
+ ---
+ 
+ ### Post-Research Checkpoint
+ 
+ After all three waves complete, before synthesis, briefly present what the research found to the user: how many competitors were profiled, the top customer pain themes, the most notable strategic signals (funding, hiring, GTM patterns). Ask: "Does this align with your expectations? Any competitors to add or remove before I synthesize?"
+ 
+ Keep it to one message — this is a quick alignment check, not a full report.
+ 
+ ---
+ 
+ ## Phase 3: Synthesis
+ 
+ > **Reference:** Read `references/research-synthesis.md` for synthesis protocol and battle card template.
+ 
+ After the checkpoint, synthesize raw findings into strategic deliverables. This step creates the real value — it's not reporting, it's pattern-matching across data sources.
+ 
+ ### How to Synthesize
+ 
+ 1. Read all raw files before writing anything
+ 2. Connect findings across waves: pricing gaps + customer complaints + hiring signals = strategic opportunities
+ 3. Identify contradictions between sources and explain which to trust
+ 4. Rate confidence for each major claim (High / Medium / Low)
+ 5. Surface strategic implications — not just facts, but what they mean
+ 6. Aggregate all data gaps from raw files into a dedicated "Data Gaps & Research Limitations" section in the competitors-report — every analysis has blind spots, and being explicit about them prevents false confidence
+ 7. Include adjacent solutions (broader platforms, manual alternatives, tools from neighboring categories) — customers don't just choose between direct competitors, they choose between "good enough" options from adjacent spaces
+ 
+ ### Output Files
+ 
+ Every deliverable file must start with a standardized header: `# {Title}: {product}` followed by `*Skill: sk-competitors | Generated: {date}*`. Every deliverable must end with Red Flags, Yellow Flags, and Sources sections.
+
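For example, a deliverable for a hypothetical product called Acme would open and close like this (title and date are placeholders):

```markdown
# Pricing Landscape: Acme

*Skill: sk-competitors | Generated: 2025-06-01*

...analysis body...

## Red Flags
## Yellow Flags
## Sources
```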
+ **`workspace/sessions/{name}/03-competitors/competitors-report.md`** — The main deliverable:
+ - Executive summary (5-sentence competitive landscape overview)
+ - Market concentration assessment (fragmented / consolidating / dominated)
+ - Key findings per research dimension
+ - Strategic opportunities (where to compete)
+ - Strategic risks (where to avoid)
+ - Competitive moat assessment (network effects, switching costs, data moat, brand, scale)
+ - Data gaps & research limitations (mandatory — aggregate from all raw files)
+ - Red flags and yellow flags
+ 
+ **`workspace/sessions/{name}/03-competitors/competitive-matrix.md`** — Feature comparison table:
+ - Features as rows, competitors as columns
+ - Rating: strong / adequate / weak / missing
+ - Highlight gaps where no competitor serves well
+ - Your product included (or placeholder if pre-launch)
+
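A sketch of the expected matrix shape, with hypothetical features and competitor names:

```markdown
| Feature     | Competitor A | Competitor B | Your Product      |
|-------------|--------------|--------------|-------------------|
| Bulk import | strong       | weak         | adequate          |
| API access  | adequate     | missing      | strong            |
| SSO/SAML    | missing      | strong       | (pre-launch: TBD) |
```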
+ **`workspace/sessions/{name}/03-competitors/pricing-landscape.md`** — Dedicated pricing analysis:
+ - Tier-by-tier comparison across all competitors
+ - Value metric analysis (what each charges for and why)
+ - Pricing psychology breakdown (anchoring, decoy, freemium strategies)
+ - Price positioning map (axes: price vs. feature depth)
+ - Pricing whitespace — where there's room to position
+ - Switching cost matrix (per competitor: technical, contractual, emotional)
+ 
+ **`workspace/sessions/{name}/03-competitors/battle-cards/{competitor-name}.md`** — One per competitor:
+ - One-page format: who they are, their strengths, their weaknesses
+ - How to win against them (specific talking points)
+ - When they win over you (be honest)
+ - Customer objections and responses
+ - Key vulnerability to exploit
+ - Churn signals (why their customers leave)
+ 
+ ### Summary File
+ 
+ After completing synthesis, generate a summary file at `workspace/sessions/{name}/03-competitors.md` containing:
+ 
+ - **Executive Summary**: 5 sentences covering the competitive landscape
+ - **Key Competitors** table: Name | Stage | Strength | Weakness | Threat Level (H/M/L)
+ - **Strategic Opportunity**: Single strongest opportunity with evidence
+ - **Strategic Risk**: Single biggest risk with evidence
+ - **Pricing Landscape Summary**: Market price range, dominant value metric, pricing whitespace
+ - **Full Deliverables**: Links to the files in `03-competitors/` subdirectory
+ 
+ This summary file is what downstream phases (positioning, offer, pitch) will read. Keep it concise and data-dense.
+ 
+ ### Raw Data
+ 
+ Keep raw research files in `workspace/sessions/{name}/03-competitors/raw/` for reference:
+ - `competitor-profiles.md`
+ - `pricing-intelligence.md`
+ - `review-mining.md`
+ - `forum-mining.md`
+ - `gtm-analysis.md`
+ - `strategic-signals.md`
+ 
+ ---
+ 
+ ## Phase 3.5: Research Verification
+ 
+ After synthesis completes and all deliverable files are written, run a verification pass.
+ 
+ > **Reference:** Read `references/verification-agent.md` for the full verification protocol, universal checks, and skill-specific checks.
+ 
+ ### Process
+ 
+ 1. Spawn agent **V1: Verification** — it reads all deliverable files and checks for: unlabeled claims, internal contradictions, confidence rating consistency, missing data gaps, missing flags, stale data, and duplicate-source false corroboration
+ 2. V1 also runs startup-competitors-specific checks: battle card vs. report consistency, matrix vs. profiles alignment, pricing landscape vs. profiles consistency, cross-deliverable coherence
+ 3. V1 produces `workspace/sessions/{name}/03-competitors/verification-report.md`
+ 4. **If Critical issues found:** Pause and present issues to the user. Ask: fix first, or proceed as-is?
+ 5. **If only Warnings/Info:** Show one-line summary
+ 
+ In Claude.ai or when the Agent tool is unavailable, run the verification checks yourself in the main conversation following the same protocol.
+ 
+ ---
+ 
+ ## Honesty Protocol
+ 
+ > **Reference:** Read `references/honesty-protocol.md` for full protocol and anti-pattern details.
+ 
+ Competitive intelligence is only useful if it's honest. Core rules apply (label claims, quantify, declare gaps), plus competitive-intelligence-specific additions:
+ 
+ 1. **No cheerleading.** If a competitor is objectively better at something, say so. Battle cards that ignore competitor strengths are useless in real sales conversations.
+ 2. **Label claims.** Use **[Data]**, **[Estimate]**, **[Assumption]**, **[Opinion]** tags. Never present guesses as facts.
+ 3. **Quantify.** "$12M ARR growing 40% YoY" not "they're growing fast."
+ 4. **Date everything.** Flag data older than 12 months.
+ 5. **Declare gaps.** "DATA GAP: Could not find reliable data on [X]" is always better than fabrication.
+ 6. **Surface red flags.** If the competitive landscape looks brutal, say so directly.
+ 7. **Challenge confirmation bias.** When research confirms what the founder already believes, probe deeper. Look for disconfirming evidence.
+ 
+ See `references/honesty-protocol.md` for the full anti-pattern table (6 entries) and detailed protocol.
+ 
+ ---
+ 
+ ## Reference Files
+ 
+ Read only what you need for the current phase.
+ 
+ | File | When to Read | ~Lines | Purpose |
+ |------|-------------|--------|---------|
+ | `honesty-protocol.md` | Start of session | ~72 | Full honesty protocol with anti-patterns |
+ | `research-principles.md` | Before starting Phase 2 | ~54 | Source quality, cross-referencing, data gaps |
+ | `research-wave-1-profiles-pricing.md` | When running Wave 1 | ~186 | Agent templates for profiles + pricing |
+ | `research-wave-2-sentiment-mining.md` | When running Wave 2 | ~189 | Agent templates for review + forum mining |
+ | `research-wave-3-gtm-signals.md` | When running Wave 3 | ~192 | Agent templates for GTM + strategic signals |
+ | `research-synthesis.md` | After all waves complete | ~231 | How to synthesize + battle card template |
+ | `research-scaling.md` | After intake, before Phase 2 | ~80 | Complexity scoring, tier definitions, wave configurations |
+ | `verification-agent.md` | After synthesis | ~85 | Verification protocol, universal + skill-specific checks |
+ 
+ ---
+ 
+ ## Save & Next
+ 
+ 1. Save the main summary to `workspace/sessions/{name}/03-competitors.md`.
+ 2. Save full deliverables to `workspace/sessions/{name}/03-competitors/` directory.
+ 3. Update `workspace/sessions/{name}/00-session.md`:
+    - Change Phase 3 Competitors status from `[ ] Not Started` to `[x] Complete`
+    - Set Active Phase to "Phase 4: Positioning"
+    - Set Next Recommended to "Phase 4: Positioning"
+    - Fill in the "Competitive Intelligence" section:
+      - **Competitors Profiled:** [total number]
+      - **Market Concentration:** [fragmented / consolidating / dominated]
+      - **Key Opportunity:** [one-line summary]
+ 4. Tell the user: "Competitive research complete! [X] competitors profiled with battle cards, pricing landscape, and strategic analysis. When you're ready, run `/sk-positioning` to define your market positioning strategy."
package/skills/sk-competitors/references/honesty-protocol.md ADDED
@@ -0,0 +1,72 @@
+ # Radical Honesty Protocol
+ 
+ This skill exists to help founders make good decisions — not to feel good. An AI that cheerleads every idea is actively harmful: it wastes the founder's time, money, and emotional energy. These principles are non-negotiable and apply to every phase.
+ 
+ ## Tell the truth, even when it's uncomfortable
+ 
+ - If the market is too small, say so directly. Don't soften "$12M and shrinking" into "room for a focused player."
+ - If the idea has a fatal flaw, name it up front. Don't bury it in a list of minor risks.
+ - If the founder's assumptions contradict research, flag it explicitly: "You assumed X, but the data shows Y."
+ - Challenge "everyone needs this" (who specifically?), "there's no competition" (there's always competition, even if it's doing nothing), and unsupported market claims.
+ - Never use vague positive language to avoid delivering bad news. Replace "interesting opportunity" with the specific finding.
+ 
+ ## Separate facts from opinions
+ 
+ - Label every major claim with its basis:
+   - **[Data]** — sourced finding with citation
+   - **[Estimate]** — calculated projection with stated assumptions
+   - **[Assumption]** — unverified belief that needs testing
+   - **[Opinion]** — your analytical judgment
+ - When data is missing or weak, say so.
+ - Never present estimates as facts.
+ - A confident-sounding fabrication is worse than an honest "I don't know."
+ 
+ ## Surface flags proactively
+ 
+ In every output file, include a **Flags** section at the end:
+ 
+ - **Red Flags** — Issues that could undermine the competitive analysis or the business.
+ - **Yellow Flags** — Concerns that need investigation or monitoring.
+ 
+ If there are no flags, write "No flags identified" — don't skip the section.
+ 
+ ## Challenge the founder's assumptions
+ 
+ Don't just accept what the user says at face value:
+ - Ask "What evidence do you have for that?" when the founder claims competitive advantages
+ - Push back on "we have no competitors" — there's always competition, even if it's the customer doing nothing
+ - Question "we're better at everything" — no product wins on every dimension
+ - When the founder dismisses a competitor, test that dismissal against data
+ 
+ ## Competitive Intelligence-Specific Rules
+ 
+ 1. **Acknowledge competitor strengths.** Battle cards that ignore competitor strengths are useless in real sales conversations. If a competitor is objectively better at something, say so directly. The goal is to help the founder win deals, not feel good about their product.
+ 
+ 2. **Challenge confirmation bias.** When research confirms what the founder already believes about a competitor, probe deeper. Confirmation bias is the #1 enemy of competitive intelligence. Look for disconfirming evidence.
+ 
+ 3. **Don't cherry-pick reviews.** When mining customer sentiment, present the full picture — positive AND negative. Three angry Reddit posts don't mean a product is failing if it has 4.5 stars on G2 with 500 reviews. Represent sentiment proportionally.
+ 
+ 4. **Flag intelligence gaps honestly.** If you couldn't find pricing, revenue, or traction data for a competitor, say so. A blank cell is better than a guess. Founders make resource allocation decisions based on competitive intelligence — fabricated data leads to bad strategy.
+ 
+ 5. **Rate competitor threat honestly.** The final report must include an honest threat assessment per competitor:
+    - **High threat** — Strong product, growing fast, well-funded, overlapping target
+    - **Medium threat** — Competitive on some dimensions, gaps on others
+    - **Low threat** — Weak product, stagnant, or targeting different segment
+    Don't inflate threats to create urgency or deflate them to comfort the founder.
+ 
+ ## Competitive Intelligence Anti-Patterns
+ 
+ | Anti-Pattern | What It Looks Like | What to Say |
+ |---|---|---|
+ | Cherry-picking weaknesses | Only listing competitor flaws | "What are they actually good at? Your sales team will face this in every deal." |
+ | Dismissing competitors | "They're not real competition" | "Their customers chose them over the alternatives. Why? What job are they doing well?" |
+ | Confirmation bias | Only finding data that confirms existing beliefs | "Let me look for evidence that contradicts this. What if we're wrong about [X]?" |
+ | Outdated intelligence | Using 2-year-old data for a fast-moving market | "This data is from {date}. The landscape may have shifted. Flag as potentially outdated." |
+ | Vanity comparisons | Comparing your best feature to their worst | "Compare apples to apples. Where do they win on dimensions that matter to buyers?" |
+ | Ignoring status quo | Not treating 'doing nothing' as competition | "Your biggest competitor might be inertia. What triggers someone to actually switch?" |
+ 
+ ## Ground rules
+ 
+ - **Ground in evidence.** Every competitive claim should trace to research findings.
+ - **Make it actionable.** Intelligence that can't inform strategy is worthless.
+ - **No fabrication.** If data is not found, say so.
package/skills/sk-competitors/references/research-principles.md ADDED
@@ -0,0 +1,54 @@
+ # Research Principles
+ 
+ These principles apply to ALL research agents across all waves. Every agent must follow them.
+ 
+ ## Iterative Deep Research
+ 
+ Each agent performs **5-8 web searches minimum**, organized in sequential rounds that drill deeper:
+ - Round 1: Broad overview queries
+ - Round 2: Drill-down into specific findings from Round 1
+ - Round 3: Cross-reference and validate
+ - Round 4: Reality check and edge cases
+ 
+ Do NOT stop after a single query. The first search gives you the surface — the follow-ups give you the insight.
+ 
+ ## Source Quality Tiers
+ 
+ Rank every finding by source reliability:
+ 
+ | Tier | Source Type | Use For |
+ |------|-----------|---------|
+ | **Tier 1** | Industry reports (Gartner, Forrester, McKinsey, IBISWorld), SEC filings, government data, G2/Capterra aggregate data | Hard numbers, market share, verified metrics |
+ | **Tier 2** | Reputable tech press (TechCrunch, Bloomberg, WSJ), company press releases, Crunchbase, investor presentations | Funding data, company news, expert opinions |
+ | **Tier 3** | Blog posts, Reddit threads, individual reviews, social media | Sentiment, customer voice, anecdotal evidence |
+ 
+ For competitive intelligence, Tier 3 sources are more valuable than usual — individual reviews and forum posts reveal the real customer experience that polished marketing hides. But don't use them for hard numbers.
+ 
+ ## Cross-Referencing
+ 
+ Never trust a single source for important claims. For every key finding:
+ - Look for 2-3 independent sources
+ - If sources agree: note convergence and cite all
+ - If sources disagree: note both figures, explain the discrepancy, and state which you trust more and why
+ 
+ ## Quantification
+ 
+ Vague claims are worthless. Always push for numbers:
+ - Bad: "They're a big player"
+ - Good: "$12M ARR, 2,500+ customers, Series B ($28M from Accel)"
+ - Bad: "Their pricing is high"
+ - Good: "$49/seat/mo for Pro, $99/seat/mo for Enterprise, vs. market median of $35/seat/mo"
+ 
+ ## Dating
+ 
+ Always note when data was published. Flag anything older than 12 months as potentially outdated. Competitor landscapes shift fast — a funding round, pivot, or acquisition can change everything in weeks.
+ 
+ ## Handling Research Failures
+ 
+ Sometimes WebSearch won't find what you need:
+ 
+ 1. **Try alternative queries.** Rephrase, use synonyms, try different angles. At least 3 variations before declaring a gap.
+ 2. **Use proxy data.** If you can't find a competitor's revenue, estimate from team size, funding, pricing × estimated customers. Show your math; a worked sketch follows this list.
+ 3. **Declare the gap explicitly.** Write: "DATA GAP: Could not find reliable data on [X]. Closest proxy: [Y]. Confidence: Low."
+ 4. **Never fabricate.** An honest "unknown" is infinitely more valuable than a made-up number.
+ 5. **Suggest how to fill the gap.** Point to specific reports or recommend reaching out to the competitor's customers directly.
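A worked sketch of the proxy math from item 2; every number below is an invented placeholder, labeled the way the protocol requires:

```js
// Hypothetical proxy estimate for a competitor whose revenue is unknown.
const pricePerSeatMonthly = 49;  // [Data] from their public pricing page
const estimatedCustomers = 1200; // [Estimate] inferred from review volume
const avgSeatsPerCustomer = 4;   // [Assumption] typical for SMB tools

const estimatedArr = pricePerSeatMonthly * 12 * estimatedCustomers * avgSeatsPerCustomer;
console.log(`[Estimate] ~$${(estimatedArr / 1e6).toFixed(1)}M ARR (confidence: Low)`);
// prints: [Estimate] ~$2.8M ARR (confidence: Low)
```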