ideabox 1.0.0

# Phase 09: Learn

Track what happened, update preferences, and improve future suggestions. This phase always runs — even if earlier phases were skipped.

## Step 1: Record Session Outcome

Determine the session outcome from the furthest phase completed:

| Phases Completed | Outcome | Event |
|-----------------|---------|-------|
| 01 only | browsed | User looked at ideas but didn't pick one |
| 01-02 | brainstormed | User refined an idea but didn't plan/build |
| 01-03 | planned | User planned but didn't build |
| 01-04+ | started | User began building |
| 01-07+ | shipped | User shipped the project |
| 01-08+ | completed | Full cycle including post-ship |

## Step 2: Update Idea Status

Read `.ideabox/state.json` to get the idea that was worked on.

Append a status update to `~/.ideabox/ideas.jsonl`:
```json
{"type":"status_update","idea_id":"{id}","old_status":"planned","new_status":"{outcome_status}","timestamp":"{ISO}"}
```

Status mapping:
- browsed/brainstormed/planned -> keep status as "planned" (incomplete)
- started -> "in_progress"
- shipped/completed -> "built"
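
For reference, the outcome table and status mapping can be captured as two lookups. A Python sketch; the dict and function names are illustrative, not part of the spec:

```python
# Map the furthest completed phase to an outcome, then to an idea status.
OUTCOME_BY_LAST_PHASE = {
    1: "browsed", 2: "brainstormed", 3: "planned",
    4: "started", 5: "started", 6: "started",
    7: "shipped", 8: "completed",
}

STATUS_BY_OUTCOME = {
    "browsed": "planned", "brainstormed": "planned", "planned": "planned",
    "started": "in_progress",
    "shipped": "built", "completed": "built",
}

def session_outcome(last_phase: int) -> str:
    return OUTCOME_BY_LAST_PHASE[last_phase]

def new_status(outcome: str) -> str:
    return STATUS_BY_OUTCOME[outcome]
```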

## Step 3: Record Implicit Feedback

Append to `~/.ideabox/preferences.jsonl`:

```json
{"ts":"{ISO}","event":"{outcome}","idea_id":"{id}","category":"{idea_category}","complexity":"{idea_complexity}","monetization":"{idea_monetization}","phases_completed":{count},"dropped_at_phase":"{phase_name_or_null}"}
```

## Step 4: Update Preference Scores

Read `~/.ideabox/profile.json` and update scores based on what happened:

### Signal Weights
| Event | Weight |
|-------|--------|
| accepted (picked idea) | +0.15 |
| dismissed | -0.10 |
| started (began building) | +0.25 |
| completed (shipped) | +0.40 |
| abandoned (dropped mid-build) | -0.20 |

### Score Update Formula

For the idea's category, complexity, and monetization type:
```
new_score = old_score + (learning_rate * signal_weight)
learning_rate = 0.3
Clamp result to [0.05, 0.95]
```

Example: if category is "developer-tools" with current score 0.6, and user completed the project:
```
new_score = 0.6 + (0.3 * 0.40) = 0.72
```
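
A runnable sketch of the clamped update (the helper name is illustrative):

```python
def update_score(old_score: float, signal_weight: float, learning_rate: float = 0.3) -> float:
    """Apply one feedback signal, clamped to [0.05, 0.95]."""
    new_score = old_score + learning_rate * signal_weight
    return max(0.05, min(0.95, new_score))

# The worked example above: completed (+0.40) on a category at 0.6.
print(round(update_score(0.6, 0.40), 2))  # 0.72
# Clamping keeps a strong category from reaching certainty:
print(update_score(0.9, 0.40))            # 0.95
```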

### Update category_scores
```json
"category_scores": {
  "developer-tools": {"accepted": 9, "dismissed": 2, "completed": 4, "score": 0.72}
}
```
Increment the relevant counter and update the score.

### Update complexity_preference
Same formula for the idea's complexity level (weekend, 1-week, multi-week).

### Update monetization_preference
Same formula for the idea's monetization type (freemium, saas, sponsorware, etc.).

### Decay Exploration Rate
```
exploration_rate = max(0.10, 0.3 / sqrt(total_interactions))
```
This starts at 30% (high exploration when data is scarce) and decays toward 10% as more data accumulates. The 10% floor ensures some random suggestions always appear — preventing echo chambers.
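
The decay can be sketched as follows; guarding the zero-interaction case is an assumption not stated in the formula:

```python
import math

def exploration_rate(total_interactions: int) -> float:
    """Decays from 0.3 toward the 0.10 floor as interactions accumulate."""
    return max(0.10, 0.3 / math.sqrt(max(total_interactions, 1)))

print(exploration_rate(1))    # 0.3
print(exploration_rate(100))  # 0.1 (floor reached)
```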

### Increment total_interactions
```
total_interactions += 1
```

### Save Updated Profile
Write the updated scores back to `~/.ideabox/profile.json`.

## Step 5: Pattern Detection

Check for repeating patterns:

- **If same category shipped 3+ times:** Note in profile as "preferred category" (add to interests if not already there)
- **If same category dismissed 3+ times:** Note as "frequently dismissed" but DO NOT remove from consideration (exploration rate keeps it alive)
- **If user consistently picks high-feasibility ideas:** Note preference for "quick wins"
- **If user consistently picks high-revenue ideas:** Note "monetization-focused"
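
The first two checks can be sketched over parsed `preferences.jsonl` records (an illustrative helper; the last two checks need idea-score context not shown here):

```python
from collections import Counter

def detect_patterns(events: list[dict]) -> list[str]:
    """events: parsed lines of preferences.jsonl."""
    notes = []
    shipped = Counter(e["category"] for e in events if e["event"] in ("shipped", "completed"))
    dismissed = Counter(e["category"] for e in events if e["event"] == "dismissed")
    for cat, n in shipped.items():
        if n >= 3:
            notes.append(f"preferred category: {cat}")
    for cat, n in dismissed.items():
        if n >= 3:
            # Still eligible via the exploration rate; never hard-blocked.
            notes.append(f"frequently dismissed: {cat}")
    return notes
```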

## Step 6: Self-Improvement Data Updates

Read `${CLAUDE_SKILL_DIR}/references/self-improvement.md` for the full specification, then update the following. **Note:** `query-performance.jsonl` was already written during Phase 01 — do NOT re-write it here.

### 6a: Source Quality Tracking

For each research source used in this session, append to `~/.ideabox/source-quality.jsonl`:
```json
{"ts":"{ISO}","session_id":"...","source":"{source_name}","signals_found":{N},"contributed_to_chosen":{true|false},"idea_outcome":"{outcome}"}
```

Record whether each source contributed signals that were part of the chosen idea's evidence chain.
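
The per-source quality score itself is defined in `self-improvement.md`. As an illustrative stand-in only (not the spec's formula), a naive aggregation over these records might look like:

```python
from collections import defaultdict

def source_scores(records: list[dict]) -> dict[str, float]:
    """records: parsed lines of source-quality.jsonl.
    Naive score: hit rate (how often the source fed the chosen idea)
    weighted by average signals found per appearance."""
    stats = defaultdict(lambda: {"n": 0, "hits": 0, "signals": 0})
    for r in records:
        s = stats[r["source"]]
        s["n"] += 1
        s["hits"] += bool(r["contributed_to_chosen"])
        s["signals"] += r["signals_found"]
    return {
        src: round((s["hits"] / s["n"]) * (s["signals"] / s["n"]), 2)
        for src, s in stats.items()
    }
```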

### 6b: Scoring Feedback

If the idea reached a terminal state (completed, abandoned, or dismissed after brainstorming), append to `~/.ideabox/scoring-feedback.jsonl`:
```json
{"ts":"{ISO}","idea_id":"...","outcome":"{outcome}","original_scores":{"revenue":7,"gap":9,"demand":8,"feasibility":8,"stack_fit":9,"trend":9},"total_score":50,"phases_completed":8}
```

This feeds the scoring weight adaptation engine — after 10+ outcomes, dimension weights auto-adjust to predict successful projects better.

### 6c: Adaptation Summary

If enough data exists (5+ sessions for source quality, 10+ outcomes for scoring), compute and show:

```
## Self-Improvement Status

**Source Quality (from {N} sessions):**
- Best source: {name} (score: {X})
- Weakest source: {name} (score: {X})
- Sources to expand next session: {list}
- Sources to reduce: {list}

**Scoring Adaptation (from {N} outcomes):**
- Strongest predictor: {dimension} ({weight}x) — ideas you ship score high on this
- Weakest predictor: {dimension} ({weight}x) — doesn't correlate with what you build
- Weights: Revenue {X}x | Gap {X}x | Demand {X}x | Feasibility {X}x | Stack {X}x | Trend {X}x

**Query Health (from {N} queries tracked):**
- Active: {N} | Productive: {N} | Retired: {N}
- New variations generated: {N}
```

If insufficient data, show: "Self-improvement activates after 5+ sessions. Current: {N} sessions."

## Step 7: Session Summary

Present to the user:

```
## Session Summary

**Idea:** {title}
**Phases completed:** {list of phases}
**Outcome:** {outcome}

**Preferences updated:**
- Category "{category}": {old_score} -> {new_score}
- Complexity "{complexity}": {old_score} -> {new_score}
- Exploration rate: {old_rate}% -> {new_rate}%
- Total ideas seen: {total_interactions}

Run `/ideas` again for your next project!
```

## Step 8: Log Session

Append to `~/.ideabox/sessions.jsonl`:
```json
{
  "session_id": "{session_id}",
  "timestamp": "{ISO}",
  "mode": "full_pipeline",
  "idea_id": "{id}",
  "idea_title": "{title}",
  "idea_category": "{category}",
  "phases_completed": ["01-research", "02-brainstorm", ...],
  "outcome": "{outcome}",
  "duration_phases": {count},
  "score_updates": {
    "category": {"field": "{category}", "old": 0.6, "new": 0.72},
    "complexity": {"field": "{complexity}", "old": 0.5, "new": 0.65}
  }
}
```

## Step 9: Clean Up State

Delete `.ideabox/state.json` (session complete — no need to resume).

**Session artifact archival:** Move `.ideabox/session/` to `.ideabox/archive/{session_id}/` to preserve artifacts without accumulating clutter. If more than 10 archived sessions exist, delete the oldest ones (keep the 10 most recent).

## Gate Condition

Phase 09 always passes. It completes when preferences are updated and the session is logged.

## Anti-Echo-Chamber Safeguards

1. **Score clamping [0.05, 0.95]:** Nothing is ever fully eliminated or guaranteed
2. **Exploration floor 10%:** Always some random/diverse suggestions
3. **Diversity guarantee:** Each suggestion batch spans >= 3 categories
4. **Novelty bonus:** +0.1 for categories not recently suggested
5. **No hard blocks:** Even "frequently dismissed" categories can appear via exploration

---
type: reference
title: Research Sources & Data Endpoints
description: Source catalog for research subagents. 6 parallel subagents launched per session; additional categories serve as supplementary reference.
---

# Research Sources

This catalog defines all available research categories. During execution, **6 parallel subagents** are launched (categories 1-4, 6, and 9). Categories 5, 7, 8, and 10 are consolidated into other subagents or used as filter steps, not standalone subagents.

---

## Source Categories

### 1. Agentic AI Ecosystem (PRIORITY)

The highest-value category. Agentic AI tools, MCP servers, Claude Code plugins, and AI coding assistants are exploding in demand.

**WebSearch queries:**
- `"MCP server" site:github.com`
- `"Claude Code plugin" OR "claude code skill"`
- `"AI coding agent" framework 2025 2026`
- `"agentic AI" developer tool`

**API endpoints:**
- HN Algolia: `https://hn.algolia.com/api/v1/search_by_date?tags=story&query=MCP+server&hitsPerPage=25`
- HN Algolia: `https://hn.algolia.com/api/v1/search_by_date?tags=story&query=AI+coding+agent&hitsPerPage=25`
- HN Algolia: `https://hn.algolia.com/api/v1/search_by_date?tags=story&query=Claude+Code+plugin&hitsPerPage=25`
- GitHub Search: `gh api "/search/repositories?q=MCP+server+created:>$(date -v-30d +%Y-%m-%d 2>/dev/null || date -d '30 days ago' +%Y-%m-%d)+stars:>5&sort=stars&per_page=30"`
- GitHub Search: `gh api "/search/repositories?q=claude+code+plugin+created:>$(date -v-90d +%Y-%m-%d 2>/dev/null || date -d '90 days ago' +%Y-%m-%d)&sort=stars&per_page=30"`
- GitHub Topics: `gh api "/search/repositories?q=topic:mcp+topic:server&sort=stars&per_page=30"`

**Key signals:** New frameworks, missing integrations, unserved use cases, complaints about existing tools.

---

### 2. Developer Pain Points

Complaints and frustrations are the strongest demand signals. Developers describe exactly what they need.

**API endpoints:**
- Reddit r/webdev: `https://www.reddit.com/r/webdev/top.json?t=week&limit=25`
- Reddit r/node: `https://www.reddit.com/r/node/top.json?t=week&limit=25`
- Reddit r/typescript: `https://www.reddit.com/r/typescript/top.json?t=week&limit=25`
- Reddit r/nextjs: `https://www.reddit.com/r/nextjs/top.json?t=week&limit=25`
- Reddit r/reactjs: `https://www.reddit.com/r/reactjs/top.json?t=week&limit=25`
- HN Ask HN: `https://hn.algolia.com/api/v1/search_by_date?tags=ask_hn&hitsPerPage=25`
- HN Algolia: `https://hn.algolia.com/api/v1/search?query=frustrated+with&tags=comment&hitsPerPage=25`
- Stack Overflow (via WebSearch): `site:stackoverflow.com [tag] "no library" OR "is there a tool"`

**Key signals:** "I wish there was...", "Why doesn't X exist?", "Frustrated with...", "Looking for alternatives to..."

---

### 3. Trending Projects

What's gaining traction right now. Early-stage projects with momentum reveal where demand is heading.

**API endpoints:**
- GitHub new stars: `gh api "/search/repositories?q=created:>$(date -v-7d +%Y-%m-%d 2>/dev/null || date -d '7 days ago' +%Y-%m-%d)+stars:>10&sort=stars&per_page=30"`
- GitHub monthly stars: `gh api "/search/repositories?q=created:>$(date -v-30d +%Y-%m-%d 2>/dev/null || date -d '30 days ago' +%Y-%m-%d)+stars:>50&sort=stars&per_page=30"`
- HN Show HN (high points): `https://hn.algolia.com/api/v1/search_by_date?tags=show_hn&numericFilters=points>50&hitsPerPage=25`
- HN Show HN (recent): `https://hn.algolia.com/api/v1/search_by_date?tags=show_hn&hitsPerPage=25`

**Key signals:** Rapid star growth, Show HN with 100+ points, trending topics appearing across platforms.

---

### 4. Indie Hacker / Monetization

Real revenue data and validated business models from solo developers and small teams.

**API endpoints:**
- Reddit r/SideProject: `https://www.reddit.com/r/SideProject/top.json?t=week&limit=25`
- Reddit r/indiehackers: `https://www.reddit.com/r/indiehackers/top.json?t=week&limit=25`
- Reddit r/SaaS: `https://www.reddit.com/r/SaaS/top.json?t=week&limit=25`
- Reddit r/microsaas: `https://www.reddit.com/r/microsaas/top.json?t=week&limit=25`
- HN Algolia: `https://hn.algolia.com/api/v1/search?query=side+project+revenue&tags=story&hitsPerPage=25`

**Key signals:** Revenue numbers, pricing strategies, market validation, "hit $X MRR" posts.

---

### 5. AI SaaS Landscape

The broader AI tooling ecosystem. Identify white space where existing tools fall short.

**Sources (WebSearch/WebFetch):**
- YC news and batch companies: WebSearch `"YC" "AI" "developer tool" site:ycombinator.com`
- AI funding rounds: WebSearch `"AI startup" "seed round" "developer tool" 2026`
- AI SaaS directories: WebSearch `"AI tools for developers" list 2026`
- HN Algolia: `https://hn.algolia.com/api/v1/search_by_date?tags=story&query=AI+SaaS&hitsPerPage=25`

**Note:** Product Hunt is deprioritized due to API limitations. Use WebSearch for PH pages if needed.

**Key signals:** Funding activity in specific niches, gaps YC companies haven't filled, underserved verticals.

---

### 6. Package Ecosystem Gaps

Missing, unmaintained, or poorly implemented packages in the npm/Python ecosystems.

**API endpoints:**
- npm search: `https://registry.npmjs.org/-/v1/search?text=keyword:{keyword}&size=25`
- npm downloads: `https://api.npmjs.org/downloads/point/last-week/{package}`
- npm downloads range: `https://api.npmjs.org/downloads/range/last-month/{package}`
- GitHub issues (abandoned): `gh api "/search/issues?q=repo:{owner}/{repo}+is:open+is:issue+sort:created-desc&per_page=10"`

**Assessment flow:**
1. Search for packages in a category
2. Check download counts (low downloads + high GitHub stars = opportunity)
3. Check last publish date (>1 year = potentially abandoned)
4. Read open issues for unmet needs
5. Look for "looking for alternative" issues

**Key signals:** Deprecated packages with high downloads, missing TypeScript support, abandoned projects with active forks.
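
Step 3 of the flow can be sketched against the npm search response shape (`objects[].package.date` is the last-publish timestamp; the helper takes the already-parsed array so it stays testable, and the 365-day threshold mirrors the rule above):

```python
from datetime import datetime, timezone

def flag_abandoned(search_objects: list[dict], now: datetime, max_age_days: int = 365) -> list[str]:
    """search_objects: the "objects" array of an npm registry search response."""
    flagged = []
    for obj in search_objects:
        pkg = obj["package"]
        # npm returns ISO timestamps with a trailing Z; normalize for fromisoformat.
        published = datetime.fromisoformat(pkg["date"].replace("Z", "+00:00"))
        if (now - published).days > max_age_days:
            flagged.append(pkg["name"])
    return flagged
```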

---

### 7. MCP & Plugin Ecosystems

The Claude Code / MCP ecosystem and adjacent plugin marketplaces.

**Sources:**
- GitHub MCP servers: `gh api "/search/repositories?q=topic:mcp+topic:server&sort=stars&per_page=30"`
- GitHub Claude Code plugins: `gh api "/search/repositories?q=claude+code+plugin&sort=stars&per_page=30"`
- GitHub Cursor extensions: `gh api "/search/repositories?q=cursor+extension&sort=stars&per_page=20"`
- Awesome MCP lists: WebFetch `https://raw.githubusercontent.com/punkpeye/awesome-mcp-servers/main/README.md`
- WebSearch: `"MCP server" "missing" OR "need" OR "wish" site:github.com`

**Key signals:** Categories with few servers, frequently requested integrations, popular tools without MCP support.

---

### 8. Claude Code Skills & Plugin Inspiration

Study existing Claude Code plugins/skills for patterns, gaps, and improvement opportunities.

**Known plugins to study:**
- `superpowers` — skill collections, patterns for multi-skill plugins
- `gstack` — project scaffolding and templates
- `labrat` — autonomous research and experiment loops
- `engineering-skills` — code quality and engineering best practices
- `claudemod` — modding and customization

**Analysis approach:**
1. Clone or WebFetch the README of each known plugin
2. Catalog what each plugin does
3. Identify missing categories (deployment, monitoring, database, testing, documentation, security)
4. Look for plugin interoperability gaps
5. Note UI/UX patterns that work well

**Key signals:** Plugin categories with zero entries, common workflows not yet automated, integration gaps between plugins.

---

### 9. User's GitHub Profile

Personalize recommendations based on the user's actual skills and interests.

**API endpoints:**
- User repos: `https://api.github.com/users/{username}/repos?sort=updated&per_page=30`
- User repos (via CLI): `gh api "/users/{username}/repos?sort=updated&per_page=30"`
- User starred repos: `gh api "/users/{username}/starred?per_page=30"`
- User languages: Aggregate from repo language fields

**Extracted signals:**
- Primary languages and frameworks
- Domain expertise (web, CLI, mobile, blockchain, AI, etc.)
- Project patterns (libraries vs apps vs plugins)
- Active vs archived repos
- Star counts on own projects (what resonated)

**Note:** Username is loaded from `~/.ideabox/profile.json` or prompted on first run.

---

### 10. Dismissed Ideas Filter

Prevent resurfacing ideas the user has already seen and rejected.

**Data source:**
- Load from `~/.ideabox/ideas.jsonl`
- Each line: `{"id":"...","title":"...","problem":"...","status":"dismissed|built|saved|suggested|backlog","timestamp":"..."}`

**Matching logic (concrete rules, since this runs in LLM context, not code):**
- Extract 3-5 key terms from each dismissed idea's title and problem statement
- For each new idea, check if 3+ key terms overlap with any dismissed idea
- Also check: same target user AND same problem domain = likely match
- If match found: skip the idea or flag it as "similar to dismissed: {name}"
- If status is `backlog`: note it, don't skip (user may want to revisit)
- When in doubt, present the idea with a note: "Note: similar to previously dismissed idea '{name}'"

**Key behavior:** Never silently drop ideas. Always log when a match is found, with the reason.
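
Although the filter runs as rules in LLM context, the term-overlap rule can be sketched in code. The stopword list and thresholds below are illustrative assumptions:

```python
import re

STOPWORDS = {"a", "an", "the", "for", "with", "and", "of", "to", "in"}

def key_terms(text: str, n: int = 5) -> set[str]:
    """Pick up to n lowercase terms from a title/problem statement."""
    words = [w for w in re.findall(r"[a-z0-9]+", text.lower()) if w not in STOPWORDS]
    return set(words[:n])

def similar_to_dismissed(new_idea: str, dismissed: list[str], overlap: int = 3):
    """Return the first dismissed idea sharing >= `overlap` key terms, else None."""
    new_terms = key_terms(new_idea, n=10)
    for old in dismissed:
        if len(key_terms(old) & new_terms) >= overlap:
            return old
    return None
```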

---

## Subagent Output Format

Each subagent returns an array of structured signals:

```json
[
  {
    "source_category": "developer_pain_points",
    "source_url": "https://www.reddit.com/r/webdev/comments/abc123/...",
    "signal_type": "gap|complaint|trend|revenue",
    "title": "No good self-hosted analytics with AI insights",
    "description": "Multiple developers complaining about lack of privacy-focused analytics that use AI to surface actionable insights automatically.",
    "evidence": "Reddit post with 247 upvotes, 89 comments. Top comment: 'I've been looking for exactly this for months'",
    "demand_score": 8
  }
]
```

### Signal Types

| Type | Description | Weight |
|------|-------------|--------|
| `gap` | Missing tool or feature in the ecosystem | High |
| `complaint` | Active frustration with existing solutions | High |
| `trend` | Growing interest or momentum | Medium |
| `revenue` | Evidence of willingness to pay | Very High |

### Demand Score Guidelines

| Score | Meaning |
|-------|---------|
| 1-3 | Weak signal, single source, little engagement |
| 4-5 | Moderate signal, some engagement |
| 6-7 | Strong signal, multiple engagements or cross-platform |
| 8-9 | Very strong, high engagement, multiple sources |
| 10 | Overwhelming demand, viral discussion |

---

## Aggregation

After all subagents complete, the aggregator:

1. Deduplicates signals (fuzzy match on title + description)
2. Groups related signals into idea clusters
3. Cross-references with the dismissed ideas filter
4. Scores each cluster using the scoring rubric
5. Ranks ideas by total score
6. Formats top ideas for presentation
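
Step 1's fuzzy dedup can be sketched with the stdlib `difflib` ratio; the 0.8 threshold is an illustrative assumption:

```python
from difflib import SequenceMatcher

def dedupe_signals(signals: list[dict], threshold: float = 0.8) -> list[dict]:
    """Drop signals whose title+description closely matches an earlier one."""
    kept: list[dict] = []
    for sig in signals:
        text = f"{sig['title']} {sig['description']}"
        duplicate = any(
            SequenceMatcher(None, text, f"{k['title']} {k['description']}").ratio() >= threshold
            for k in kept
        )
        if not duplicate:
            kept.append(sig)
    return kept
```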

---

# Revenue Model Library

Reference for assessing monetization potential of project ideas.

## Model 1: SaaS Subscription

**Pattern:** Monthly/annual recurring fee for hosted service.
**Pricing range:** $9-99/month for individual devs, $29-499/month for teams.
**Examples:** Vercel ($20/mo), Railway ($5/mo), Supabase ($25/mo), Sentry ($26/mo).
**Best for:** Tools that provide ongoing value, need hosting, or aggregate data.
**Moat:** Data network effects, switching costs, integrations.

## Model 2: Freemium / Open Core

**Pattern:** Core product is free and open source. Paid tier adds features.
**Pricing range:** Free tier unlimited for personal use, $19-99/month for pro features.
**Examples:** PostHog (free -> $450/mo), GitButler (free -> teams), Raycast (free -> $8/mo).
**Best for:** Developer tools where adoption matters more than immediate revenue.
**Moat:** Community, ecosystem, brand.

## Model 3: One-Time Purchase / Lifetime Deal

**Pattern:** Pay once, use forever. Optional paid updates for major versions.
**Pricing range:** $29-299 one-time.
**Examples:** Sublime Text ($99), TablePlus ($89), Paw/RapidAPI ($50).
**Best for:** Desktop apps, CLI tools, tools where the value is immediate.
**Moat:** Quality, brand loyalty.

## Model 4: Sponsorware

**Pattern:** Open source, but early access / premium features for GitHub sponsors.
**Pricing range:** $5-25/month sponsorship tiers.
**Examples:** Sindre Sorhus packages, Caleb Porzio (Livewire), Anthony Fu (VueUse).
**Best for:** Popular open source projects with a solo maintainer.
**Moat:** Reputation, community trust.

## Model 5: API / Usage-Based

**Pattern:** Pay per API call, per token, per request.
**Pricing range:** $0.001-1 per call depending on compute.
**Examples:** OpenAI API, Stripe (2.9% + $0.30/txn), Twilio ($0.0075/msg).
**Best for:** Infrastructure tools, data services, AI wrappers with added value.
**Moat:** Infrastructure lock-in, data moats.

## Model 6: Marketplace / Platform Fee

**Pattern:** Take a percentage of transactions on your platform.
**Pricing range:** 5-30% of transaction value.
**Examples:** Gumroad (10%), Lemon Squeezy (5%), Shopify apps (20%).
**Best for:** Tools that facilitate transactions between creators and consumers.
**Moat:** Two-sided network effects.

## Model 7: Paid Plugin / Extension

**Pattern:** Sell plugins for popular platforms (VS Code, Figma, Claude Code).
**Pricing range:** $5-49 one-time or $3-19/month.
**Examples:** VS Code extensions, Figma plugins, Raycast extensions.
**Best for:** Niche tools that enhance existing platforms.
**Moat:** Platform ecosystem, first-mover advantage in a niche.

## Model 8: Consulting + Tool

**Pattern:** Open source tool that creates demand for paid consulting/support.
**Pricing range:** $150-500/hour consulting, $5K-50K projects.
**Examples:** HashiCorp (Terraform -> consulting), many DevOps tools.
**Best for:** Complex infrastructure tools where expertise is valuable.
**Moat:** Expertise, reputation.

---

## Quick Assessment

When evaluating which model fits an idea:

1. **Does it need hosting?** -> SaaS or API
2. **Is adoption more important than revenue initially?** -> Freemium
3. **Is it a one-time solve?** -> One-time purchase
4. **Does it enhance an existing platform?** -> Paid plugin
5. **Are you building a reputation?** -> Sponsorware
6. **Does it facilitate transactions?** -> Marketplace
7. **Is the value in expertise, not code?** -> Consulting + Tool
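
The checklist reads as a first-match decision list. A sketch; the question keys are hypothetical names, not part of this library:

```python
# Checklist order matters: the first true answer picks the model.
CHECKS = [
    ("needs_hosting", "SaaS or API"),
    ("adoption_first", "Freemium / Open Core"),
    ("one_time_solve", "One-time purchase"),
    ("platform_extension", "Paid plugin"),
    ("reputation_building", "Sponsorware"),
    ("facilitates_transactions", "Marketplace"),
    ("value_in_expertise", "Consulting + Tool"),
]

def suggest_model(idea: dict) -> str:
    """Walk the checklist in order and return the first matching model."""
    for key, model in CHECKS:
        if idea.get(key):
            return model
    return "no clear match - revisit the checklist"
```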