tweakidea 0.1.0

@@ -0,0 +1,54 @@
+ # Behavior Change Required
+
+ **Weight:** 4%
+ **Core Question:** How hard is it to get people to adopt?
+
+ **NOTE: This dimension is INVERTED. Score 5 = minimal behavior change (best for adoption). Score 1 = massive behavior change (worst for adoption).**
+
+ ## Signal Table
+
+ ### BJ Fogg's Behavior Model
+ Behavior = Motivation x Ability x Trigger
+
+ | Factor | Low Friction | High Friction |
+ |--------|--------------|---------------|
+ | Motivation | Problem is already urgent | Need to create urgency |
+ | Ability | Minimal learning curve | Requires training/change |
+ | Trigger | Natural workflow moment | Need to create new habit |
+
+ ### Adoption Complexity
+ | Type | Description | Difficulty |
+ |------|-------------|------------|
+ | Substitute | Replace existing tool | Low-Medium |
+ | Complement | Add to existing workflow | Medium |
+ | Transform | Change how work is done | High |
+ | Create | New behavior from scratch | Very High |
+
+ ## Scoring Rubric
+
+ ### Score 5 (Minimal Change -- Drop-in Replacement)
+ - [ ] Solution substitutes an existing tool or process with minimal learning curve
+ - [ ] A natural workflow trigger already exists (the user already does something that leads them to this solution)
+ - [ ] Adoption requires no training, habit formation, or process redesign
+
+ ### Score 4 (Low Change -- Easy Complement)
+ - [ ] Solution complements an existing workflow without disrupting it
+ - [ ] Learning curve is brief (under 1 hour to basic proficiency)
+
+ ### Score 3 (Moderate Change)
+ - [ ] Solution requires some workflow adjustment but the core user behavior stays the same
+ - [ ] Evidence that the motivation (pain intensity) is high enough to justify the required change
+ - [ ] Onboarding or training is needed but can be completed in a single session
+
+ ### Score 2 (Significant Change)
+ - [ ] Solution requires meaningful changes to how work is done (new process, new team roles, or new habits)
+ - [ ] Evidence of high adoption friction -- extended onboarding, organizational buy-in, or cultural shift needed
+
+ ### Score 1 (Massive Change -- New Behavior from Scratch)
+ - [ ] Solution requires creating an entirely new behavior or habit with no existing trigger
+ - [ ] Evidence of very high training requirements, organizational transformation, or multi-stakeholder change management
+ - [ ] No natural workflow moment triggers adoption -- the user must remember to use it independently
+
+ ### B2B/B2C Nuance
+ For B2B: Look for integration with existing tools (API, SSO, existing data formats) as friction reducers. Organizational change management (multiple stakeholders, IT approval, security review) increases adoption friction significantly even for individually simple products.
+ For B2C: Look for existing app/platform habits that the solution can piggyback on. Consumer habit formation depends on trigger frequency -- daily triggers build habits faster. Products requiring users to "remember to open the app" face higher adoption friction than those embedded in existing routines.
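
The Fogg formula in this file is often read as a multiplicative gate: if any one factor is near zero, the behavior does not happen. A minimal sketch of that reading; the 0-to-1 factor scale, the threshold, and all example values are illustrative assumptions, not part of the rubric:

```python
# Sketch of BJ Fogg's Behavior = Motivation x Ability x Trigger as a
# multiplicative gate. The 0-1 scale and the activation threshold are
# assumptions made for illustration only.

def behavior_occurs(motivation: float, ability: float, trigger: float,
                    threshold: float = 0.2) -> bool:
    """Return True when the combined score clears the (assumed) threshold."""
    return motivation * ability * trigger >= threshold

# A drop-in replacement: urgent pain, no learning curve, existing workflow trigger.
drop_in = behavior_occurs(motivation=0.9, ability=0.9, trigger=0.8)

# A new-behavior product: high motivation cannot compensate for a missing trigger.
new_habit = behavior_occurs(motivation=0.9, ability=0.7, trigger=0.05)
```

Note how the missing trigger dominates the product: high motivation cannot rescue a solution with no natural workflow moment, which is why Score 1 singles out users having to "remember to use it independently."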
@@ -0,0 +1,49 @@
+ # Clarity of Target Customer
+
+ **Weight:** 4%
+ **Core Question:** Do you know exactly who you're building for?
+
+ ## Signal Table
+
+ ### ICP (Ideal Customer Profile) Checklist
+ - [ ] Can you describe them in one sentence?
+ - [ ] Can you find 100 of them this week?
+ - [ ] Do they have budget and authority to buy?
+ - [ ] Are they accessible through specific channels?
+ - [ ] Will they talk to you for customer development?
+
+ ### Warning Signs
+ - "Everyone could use this"
+ - "Both consumers and enterprises"
+ - "Anyone who uses the internet"
+ - Can't name 10 specific target companies/people
+
+ ### YC Advice
+ > "At early-stage, clarity trumps exhaustivity. You need to have a strong angle that speaks to your core target."
+
+ ## Scoring Rubric
+
+ ### Score 5 (Crystal Clear ICP)
+ - [ ] Ideal customer profile can be described in one sentence with specific, identifying characteristics
+ - [ ] Evidence the founder can find 100 target customers this week through identified channels
+ - [ ] Target customers have budget, authority to buy, and are accessible for customer development conversations
+
+ ### Score 4 (Well-Defined ICP)
+ - [ ] Target customer segment is specific enough to build a targeted go-to-market strategy
+ - [ ] Evidence that the founder can name at least 10 specific target companies or individuals
+
+ ### Score 3 (Reasonably Defined ICP)
+ - [ ] Target customer is described with enough specificity to distinguish from adjacent segments
+ - [ ] At least one concrete channel to reach the target customer has been identified
+
+ ### Score 2 (Vague ICP)
+ - [ ] Target customer description is broad ("small businesses", "millennials", "enterprise companies")
+ - [ ] Founder struggles to name specific target companies or individuals
+
+ ### Score 1 (No Clear ICP)
+ - [ ] "Everyone could use this" or similarly unfocused customer description
+ - [ ] No evidence that the founder can identify, locate, or reach the target customer
+
+ ### B2B/B2C Nuance
+ For B2B: ICP should include industry, company size, role/title of buyer, and specific pain trigger. Look for evidence the founder can get meetings with the right person at target companies. Named accounts and warm introductions are strong signals.
+ For B2C: ICP should include demographic, psychographic, and behavioral characteristics. Look for evidence the founder knows where these users already congregate (online communities, platforms, physical locations). Ability to run a targeted ad campaign is a proxy for ICP clarity.
@@ -0,0 +1,50 @@
+ # Defensibility
+
+ **Weight:** 8%
+ **Core Question:** Can you protect this business once you build it?
+
+ ## Signal Table
+
+ ### Types of Moats
+ | Moat Type | Description | Strength |
+ |-----------|-------------|----------|
+ | Network effects | Each user makes product better for others | Very strong |
+ | Switching costs | Painful/expensive to leave | Strong |
+ | Data advantages | Proprietary data that improves product | Strong |
+ | Brand | Trust/recognition that's hard to replicate | Medium-strong |
+ | Economies of scale | Cost advantages at volume | Medium |
+ | IP/Patents | Legal protection | Medium (often weaker than expected) |
+ | Regulatory capture | Compliance barriers | Medium |
+ | Speed/Execution | First-mover advantage | Weak (temporary) |
+
+ ### The Incumbent Test
+ > "If Google, Microsoft, and OpenAI all copied your product tomorrow -- with unlimited resources -- would your users stay?"
+
+ If no → low defensibility. Focus on building moats before you get noticed.
+
+ ## Scoring Rubric
+
+ ### Score 5 (Strong Moat)
+ - [ ] Evidence of a compounding retention mechanism -- a moat that grows stronger as more users adopt or data accumulates (e.g., network effects, high switching costs, proprietary data advantage)
+ - [ ] Evidence of a proprietary data advantage that improves the product and is difficult to replicate
+ - [ ] Product survives the Incumbent Test (users would stay even if Google/Microsoft/OpenAI cloned it)
+
+ ### Score 4 (Good Defensibility)
+ - [ ] Evidence of a durable competitive barrier that would meaningfully slow a well-resourced competitor from capturing the user base (e.g., network effects, switching costs, proprietary data, or established brand trust)
+ - [ ] Evidence that the moat strengthens over time rather than eroding
+
+ ### Score 3 (Moderate Defensibility)
+ - [ ] Evidence of at least one moat-building mechanism, even if not yet fully established
+ - [ ] Path to defensibility is identified and plausible (e.g., data flywheel planned, community building underway)
+
+ ### Score 2 (Weak Defensibility)
+ - [ ] Primary advantage is execution speed or first-mover position (temporary moats)
+ - [ ] Product could be replicated by a well-resourced competitor without significant barriers
+
+ ### Score 1 (No Defensibility)
+ - [ ] No identified moat or path to building one
+ - [ ] Product fails the Incumbent Test -- users would switch to a big-tech clone immediately
+
+ ### B2B/B2C Nuance
+ For B2B: Switching costs and data lock-in are the strongest enterprise moats. Look for workflow integration depth, data migration difficulty, and compliance certification barriers. Enterprise brand trust accumulates slowly but is durable.
+ For B2C: Network effects and brand loyalty are the strongest consumer moats. Look for social graph integration, content lock-in, and identity/status attachment. Consumer switching costs are typically lower than B2B -- defensibility must come from engagement, not lock-in.
@@ -0,0 +1,51 @@
+ # Founder-Market Fit
+
+ **Weight:** 12%
+ **Core Question:** Are YOU the right person to solve this?
+
+ ## Signal Table
+
+ ### Assessment Criteria
+ - [ ] **Domain expertise**: Deep understanding of the industry/problem
+ - [ ] **Personal experience**: Have you felt this pain yourself?
+ - [ ] **Network access**: Can you reach early customers easily?
+ - [ ] **Technical capability**: Can your team actually build this?
+ - [ ] **Credibility**: Will customers trust you to solve this?
+ - [ ] **Passion**: Will you persist through years of difficulty?
+ - [ ] **Unfair advantages**: What do you know that others don't?
+
+ ### YC's Formulation
+ > "The very best startup ideas have three things in common: they're something the founders themselves want, that they themselves can build, and that few others realize are worth doing."
+
+ ### Key Insight
+ A great problem for someone else is not a great problem for you if you lack the fit. Conversely, a "smaller" problem where you have unique insight may be better than a "bigger" problem where you're an outsider.
+
+ ## Scoring Rubric
+
+ ### Score 5 (Exceptional Fit)
+ - [ ] Evidence of deep domain expertise in the problem space (years of experience, professional background, or academic specialization)
+ - [ ] Evidence of personal pain experience -- the founder has encountered this problem firsthand
+ - [ ] Evidence of network access to early customers and ability to reach them directly
+ - [ ] Evidence of an articulated unfair advantage (unique insight, proprietary access, or capability others lack)
+
+ ### Score 4 (Strong Fit)
+ - [ ] Evidence of meaningful domain knowledge (professional experience or demonstrated deep research)
+ - [ ] Evidence that the founding team has the technical capability to build the solution
+ - [ ] Evidence of at least one credible channel to reach early customers
+
+ ### Score 3 (Moderate Fit)
+ - [ ] Evidence of genuine passion or commitment to the problem space (not just a market opportunity)
+ - [ ] Evidence that the founder has engaged with potential customers (interviews, surveys, or observation)
+ - [ ] Founding team can build an MVP (even if not the ideal long-term team)
+
+ ### Score 2 (Weak Fit)
+ - [ ] Founder interest appears market-opportunity-driven rather than problem-driven
+ - [ ] Limited evidence of domain knowledge, customer access, or relevant network
+
+ ### Score 1 (Poor Fit)
+ - [ ] No evidence of domain knowledge, relevant experience, or customer network
+ - [ ] No credible path to developing the expertise or access needed to succeed in this space
+
+ ### B2B/B2C Nuance
+ For B2B: Industry credibility and professional network are critical. Look for evidence the founder can get meetings with decision makers and understands enterprise procurement. Domain-specific jargon fluency and industry relationships are strong signals.
+ For B2C: Personal experience with the problem and empathy for the user are more important than professional credentials. Look for evidence the founder deeply understands the target user's daily life, motivations, and frustrations. Consumer taste and design sensibility matter.
@@ -0,0 +1,44 @@
+ # Frequency
+
+ **Weight:** 8%
+ **Core Question:** How often do people encounter this problem?
+
+ ## Signal Table
+
+ | Frequency | Examples | Implications |
+ |-----------|----------|--------------|
+ | Multiple times/day | Communication (Slack), search, navigation | Strong habit formation, top-of-mind |
+ | Daily | Email, task management, meals | Good retention potential |
+ | Weekly | Grocery shopping, expense reports | Moderate engagement |
+ | Monthly | Bills, subscriptions, reports | Need strong value per interaction |
+ | Annually | Taxes, insurance renewal, major purchases | High willingness to pay per transaction |
+ | Once in lifetime | Wedding, home buying, major surgery | Very high transaction value needed |
+
+ **Key Insight:** High frequency + high pain = strongest combination. But low-frequency problems can work if willingness to pay is proportionally high (tax software, wedding planning).
+
+ ## Scoring Rubric
+
+ ### Score 5 (Very High Frequency)
+ - [ ] Evidence that the target customer encounters this problem multiple times per day
+ - [ ] Evidence of strong habit formation potential or top-of-mind awareness around the problem
+ - [ ] Problem is embedded in a daily workflow or routine that cannot be avoided
+
+ ### Score 4 (High Frequency)
+ - [ ] Evidence the problem occurs at least daily for the target customer
+ - [ ] Evidence of good retention potential due to regular engagement with the problem space
+
+ ### Score 3 (Moderate Frequency)
+ - [ ] Evidence the problem occurs at least weekly for the target customer
+ - [ ] Problem recurrence is predictable and tied to a regular activity or cycle
+
+ ### Score 2 (Low Frequency)
+ - [ ] Problem occurs monthly or less frequently
+ - [ ] Evidence that the per-interaction value is high enough to justify low frequency (e.g., high transaction value)
+
+ ### Score 1 (Very Low Frequency)
+ - [ ] Problem is encountered rarely (annually or once-in-lifetime)
+ - [ ] No evidence of proportionally high transaction value to compensate for low frequency
+
+ ### B2B/B2C Nuance
+ For B2B: Weight process-embedded frequency (how often the workflow triggers the problem) and cost-per-occurrence. Infrequent but high-cost problems (quarterly audits, annual compliance) can still score well if willingness to pay is proportional.
+ For B2C: Weight daily-life integration and habit formation potential. High-frequency consumer problems build retention through routine. Low-frequency consumer problems need extremely high emotional or financial stakes per occurrence.
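
The frequency bands in this file map naturally onto occurrences per year. A sketch of that mapping; the numeric thresholds (730 for "multiple times per day", 12 for the monthly boundary between scores 2 and 1) are assumptions derived from the signal table, not values the rubric states:

```python
# Hypothetical helper mapping problem frequency (occurrences per year) to the
# 1-5 score bands above. Thresholds are illustrative assumptions.

def frequency_score(occurrences_per_year: float) -> int:
    if occurrences_per_year >= 2 * 365:   # multiple times per day
        return 5
    if occurrences_per_year >= 365:       # at least daily
        return 4
    if occurrences_per_year >= 52:        # at least weekly
        return 3
    if occurrences_per_year >= 12:        # monthly down to the score-2 band
        return 2
    return 1                              # annually or once-in-lifetime
```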
@@ -0,0 +1,50 @@
+ # Incumbent Indifference
+
+ **Weight:** 2%
+ **Core Question:** Will big players try to crush you?
+
+ ## Signal Table
+
+ ### Attention Matrix
+
+ | | Low Defensibility | High Defensibility |
+ |--|-------------------|-------------------|
+ | **High Attention** | Kill Zone ☠️ | Battlefield ⚔️ |
+ | **Low Attention** | Waiting Room ⏳ | Sweet Spot ✓ |
+
+ ### Attention Indicators
+ - Market size large enough to matter to big tech
+ - Problem adjacent to incumbent core business
+ - Clear, measurable revenue opportunity
+ - Low technical complexity to replicate
+
+ ### Sweet Spot Strategy
+ Build in a niche too small for incumbents to prioritize, accumulate moats, then expand once defensible.
+
+ ## Scoring Rubric
+
+ ### Score 5 (Safe from Incumbents)
+ - [ ] Target niche is too small for big tech to prioritize (below their revenue relevance threshold)
+ - [ ] Problem is not adjacent to any incumbent's core business or strategic priorities
+ - [ ] Evidence of low attention indicators (no big tech blog posts, acquisitions, or feature launches in this space)
+
+ ### Score 4 (Likely Safe)
+ - [ ] Market size is moderate -- large enough for a startup but not enough to attract big tech investment
+ - [ ] Problem requires domain-specific expertise or go-to-market that incumbents lack motivation to develop
+
+ ### Score 3 (Uncertain)
+ - [ ] Market is growing into a size range where incumbent attention is plausible but not yet evident
+ - [ ] Problem is somewhat adjacent to incumbent interests but not core to their strategy
+
+ ### Score 2 (At Risk)
+ - [ ] Market is large and visible enough to attract incumbent attention
+ - [ ] Problem is adjacent to an incumbent's core business, increasing the likelihood of a competitive response
+
+ ### Score 1 (Kill Zone)
+ - [ ] Large, obvious market directly adjacent to big tech core business
+ - [ ] Evidence that incumbents could easily replicate the solution (low technical complexity, existing infrastructure)
+ - [ ] Evidence of incumbent activity in the space (acquisitions, feature launches, public statements)
+
+ ### B2B/B2C Nuance
+ For B2B: Vertical-specific niches (healthcare, legal, construction) are more likely to escape incumbent attention than horizontal tools. Look for regulatory or domain complexity that makes the space unattractive to generalist tech companies.
+ For B2C: Consumer attention is a primary resource for big tech. Any consumer product with viral potential or broad appeal is more likely to attract incumbent response. Look for distribution advantages (app store, social graph) that incumbents already control.
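
The attention matrix in this file is a 2x2 lookup, and encoding it directly makes the quadrant logic unambiguous. The quadrant names come from the table; the boolean encoding of the two axes is an assumption made for illustration:

```python
# The attention matrix as a lookup table. Keys are
# (high_incumbent_attention, high_defensibility); labels come from the table.

QUADRANTS = {
    (True,  False): "Kill Zone",     # high attention, low defensibility
    (True,  True):  "Battlefield",   # high attention, high defensibility
    (False, False): "Waiting Room",  # low attention, low defensibility
    (False, True):  "Sweet Spot",    # low attention, high defensibility
}

def quadrant(high_incumbent_attention: bool, high_defensibility: bool) -> str:
    return QUADRANTS[(high_incumbent_attention, high_defensibility)]
```

The Sweet Spot strategy above is then just a planned path through this table: start at (False, True) and only let attention rise once defensibility is in place.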
@@ -0,0 +1,43 @@
+ # Mandatory Nature
+
+ **Weight:** 2%
+ **Core Question:** Are people forced to solve this?
+
+ ## Signal Table
+
+ | Driver | Examples |
+ |--------|----------|
+ | Regulatory | GDPR compliance, SOC2, HIPAA, tax filing |
+ | Contractual | SLA requirements, audit obligations |
+ | Operational | Payroll, invoicing, legal filings |
+ | Physical | Safety equipment, infrastructure maintenance |
+ | Social | Professional credentials, certifications |
+
+ **Strength:** Mandatory problems have built-in demand and often higher willingness to pay. **Downside:** mandatory categories are often crowded and commoditized.
+
+ ## Scoring Rubric
+
+ ### Score 5 (Fully Mandatory)
+ - [ ] Evidence of a regulatory or legal mandate requiring the problem to be solved (e.g., GDPR, HIPAA, SOC2, tax filing)
+ - [ ] Evidence of deadlines and penalties for non-compliance
+ - [ ] Mandate applies broadly to the target customer segment (not just edge cases)
+
+ ### Score 4 (Strongly Obligatory)
+ - [ ] Evidence of contractual, operational, or professional obligation driving the need (SLA, audit, certification)
+ - [ ] Non-compliance carries meaningful consequences (lost contracts, failed audits, professional sanctions)
+
+ ### Score 3 (Moderately Obligatory)
+ - [ ] Evidence that the problem is tied to a standard business or personal obligation (payroll, invoicing, reporting)
+ - [ ] Failure to address it would cause operational disruption, though not legal or regulatory consequences
+
+ ### Score 2 (Weakly Obligatory)
+ - [ ] Problem is related to a social or professional norm but carries no formal enforcement mechanism
+ - [ ] Customers could ignore it indefinitely without concrete penalties
+
+ ### Score 1 (Purely Optional)
+ - [ ] No external forcing function -- solving the problem is entirely discretionary
+ - [ ] No evidence of regulatory, contractual, or operational mandate
+
+ ### B2B/B2C Nuance
+ For B2B: Regulatory and contractual mandates are the strongest signals. Look for specific regulation names, compliance deadlines, and audit requirements. Industry-specific mandates (healthcare HIPAA, finance SOX) carry more weight than general best practices.
+ For B2C: Legal obligations (tax filing, insurance requirements) and life-stage mandates (school enrollment, visa applications) are the primary mandatory signals. Social expectations are weaker than legal requirements.
@@ -0,0 +1,47 @@
+ # Market Growth
+
+ **Weight:** 4%
+ **Core Question:** Is the problem getting bigger or smaller?
+
+ ## Signal Table
+
+ | Growth Signal | Implication |
+ |---------------|-------------|
+ | >20% CAGR | Ideal -- rising tide lifts all boats |
+ | 10-20% CAGR | Acceptable -- room for new entrants |
+ | 0-10% CAGR | Challenging -- must take share |
+ | Negative | Avoid unless transforming the category |
+
+ ### Growth Drivers to Look For
+ - Technology shifts creating new problems
+ - Regulatory changes mandating solutions
+ - Demographic shifts (aging, urbanization)
+ - Behavioral changes (remote work, mobile-first)
+ - Economic pressures forcing efficiency
+
+ ## Scoring Rubric
+
+ ### Score 5 (Rapidly Growing Market)
+ - [ ] Evidence of >20% CAGR supported by credible data sources
+ - [ ] Evidence of multiple independent growth drivers (technology shifts, regulatory changes, demographic shifts, or behavioral changes)
+ - [ ] Growth is structural (not a temporary spike) with multi-year trajectory evidence
+
+ ### Score 4 (Strong Growth)
+ - [ ] Evidence of 10-20% CAGR with at least one identified growth driver
+ - [ ] Evidence that the growth trend is sustained (not a one-year anomaly)
+
+ ### Score 3 (Moderate Growth)
+ - [ ] Evidence of positive growth (>0% CAGR) in the relevant market
+ - [ ] At least one credible source supports the growth claim (industry report, market data, or comparable company trajectories)
+
+ ### Score 2 (Flat or Slow Growth)
+ - [ ] Market growth is flat (0-5% CAGR) with no clear catalyst for acceleration
+ - [ ] Evidence suggests the startup would need to take market share from incumbents rather than ride growth
+
+ ### Score 1 (Declining or Stagnant Market)
+ - [ ] Evidence of negative growth or market contraction
+ - [ ] No identified transformation catalyst that could reverse the decline
+
+ ### B2B/B2C Nuance
+ For B2B: Look for industry-specific growth drivers (digital transformation, regulatory expansion, new compliance requirements). Enterprise budget shifts toward the problem category are strong signals.
+ For B2C: Look for demographic and behavioral shifts (generational adoption, mobile penetration, urbanization). Consumer spending trends in the category matter more than broad market CAGR.
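
The CAGR bands in the signal table can be checked mechanically. A sketch assuming the standard formula CAGR = (end/start)^(1/years) - 1; the band labels mirror the table, and the example market figures are hypothetical:

```python
# Compound annual growth rate and the rubric's growth bands.
# treating the 0-10% row as a single "challenging" band is a simplification.

def cagr(start_value: float, end_value: float, years: float) -> float:
    """CAGR = (end / start) ** (1 / years) - 1."""
    return (end_value / start_value) ** (1 / years) - 1

def growth_band(rate: float) -> str:
    if rate > 0.20:
        return "ideal"          # rising tide lifts all boats
    if rate >= 0.10:
        return "acceptable"     # room for new entrants
    if rate >= 0.0:
        return "challenging"    # must take share
    return "avoid"              # unless transforming the category

# Example: a (hypothetical) market growing from $2B to $5B over 4 years.
rate = cagr(2.0, 5.0, 4)        # roughly 25.7% per year
```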
@@ -0,0 +1,50 @@
+ # Market Size
+
+ **Weight:** 8%
+ **Core Question:** How big is the opportunity?
+
+ ## Signal Table
+
+ ### TAM/SAM/SOM Framework
+ - **TAM** (Total Addressable Market): Everyone who could theoretically buy
+ - **SAM** (Serviceable Addressable Market): Segment you can realistically reach
+ - **SOM** (Serviceable Obtainable Market): What you can capture in 2-3 years
+
+ ### Size Guidelines
+ | Stage | Typical Expectation |
+ |-------|---------------------|
+ | Pre-seed | TAM > $100M acceptable |
+ | Seed | TAM > $500M preferred |
+ | Series A+ | TAM > $1B typically required |
+
+ ### Paul Graham's Nuance
+ > "It's better to make a small number of people love your product than a large number simply like it."
+
+ **Key:** Start with a small market you can dominate, but ensure there's a "fast path out" to larger markets (Facebook: Harvard → colleges → everyone).
+
+ ## Scoring Rubric
+
+ ### Score 5 (Large, Accessible Market)
+ - [ ] Evidence that TAM exceeds $1B with a clear SAM/SOM breakdown showing realistic path to capture
+ - [ ] Evidence of an accessible initial segment that can be dominated before expanding (beachhead market identified)
+ - [ ] Market size claims are grounded in bottom-up analysis or credible third-party data (not just top-down assumptions)
+
+ ### Score 4 (Substantial Market)
+ - [ ] Evidence that TAM exceeds $500M with a plausible serviceable market identified
+ - [ ] Evidence that the initial target segment is large enough to build a sustainable business ($10M+ revenue potential)
+
+ ### Score 3 (Moderate Market)
+ - [ ] Evidence that TAM exceeds $100M
+ - [ ] At least one credible data point supporting market size (industry report, comparable company revenue, or customer count estimate)
+
+ ### Score 2 (Small Market)
+ - [ ] Market exists but evidence suggests TAM is below $100M
+ - [ ] A clear expansion path to adjacent markets has been identified (even if unproven)
+
+ ### Score 1 (Niche or Unclear Market)
+ - [ ] Market is too small to sustain a venture-scale business without a clear expansion path
+ - [ ] No credible evidence supporting market size claims (purely speculative)
+
+ ### B2B/B2C Nuance
+ For B2B: Bottom-up sizing (number of target companies x ACV) is more credible than top-down. Look for industry concentration -- a few large customers vs. long tail of small ones -- which affects go-to-market strategy and risk.
+ For B2C: User count x ARPU is the primary sizing method. Look for evidence of total addressable users in the target demographic. Network effects or viral mechanics can justify a smaller initial market if expansion dynamics are credible.
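
The bottom-up B2B sizing recommended in the nuance note (number of target companies x ACV) is simple arithmetic. A sketch with hypothetical figures; the 25% SAM and 5% SOM fractions are illustrative assumptions, not rubric values:

```python
# Bottom-up TAM/SAM/SOM sketch. Every figure here is hypothetical.

def bottom_up_tam(target_companies: int, acv: float) -> float:
    """TAM = number of target companies x annual contract value."""
    return target_companies * acv

tam = bottom_up_tam(target_companies=40_000, acv=30_000)  # $1.2B TAM
sam = tam * 0.25  # assume 25% of the market is realistically serviceable
som = sam * 0.05  # assume 5% of SAM is obtainable in 2-3 years
```

For a B2C idea the same shape applies with user count x ARPU substituted for companies x ACV.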
@@ -0,0 +1,48 @@
+ # Pain Intensity
+
+ **Weight:** 12%
+ **Core Question:** How much does this problem hurt?
+
+ ## Signal Table
+
+ | Signal | Strong | Weak |
+ |--------|--------|------|
+ | Pain type | Financial loss, regulatory risk, existential threat | Minor inconvenience, "nice to have" |
+ | Emotional weight | Frustration, anxiety, desperation | Mild annoyance |
+ | Current workarounds | Painful, expensive, time-consuming hacks | Acceptable alternatives exist |
+ | Language used | "I hate...", "I can't stand...", "This is killing us..." | "It would be nice if..." |
+
+ **Framework: Painkiller vs. Vitamin vs. Candy**
+ - **Painkiller**: Solves acute, urgent pain → easiest go-to-market
+ - **Vitamin**: Provides benefit but not urgent → harder to sell
+ - **Candy**: Pure entertainment/pleasure → needs massive scale
+
+ **Key Insight:** Problems that are real but not urgent rarely build valuable businesses.
+
+ ## Scoring Rubric
+
+ ### Score 5 (Extreme Pain -- Painkiller)
+ - [ ] Evidence of direct financial loss, regulatory risk, or existential threat to the customer
+ - [ ] Customers actively use painful, expensive, or time-consuming workarounds
+ - [ ] Evidence of desperation-level urgency in language or behavior (not just frustration)
+
+ ### Score 4 (Significant Pain)
+ - [ ] Evidence of meaningful negative impact on customer operations, revenue, or efficiency
+ - [ ] Current alternatives are demonstrably inadequate (workarounds exist but cause real friction)
+
+ ### Score 3 (Moderate Pain)
+ - [ ] Problem is real and affects regular operations (daily, weekly, or monthly recurrence)
+ - [ ] At least one source beyond the founder confirms the pain exists
+
+ ### Score 2 (Mild Pain -- Vitamin)
+ - [ ] Problem exists but acceptable solutions are currently available
+ - [ ] Impact is a convenience improvement, not a necessity
+
+ ### Score 1 (Minimal or Theoretical Pain)
+ - [ ] Problem is speculative, theoretical, or affects very few people
+ - [ ] No evidence of active workarounds, complaints, or demand signals
+
+ ### B2B/B2C Nuance
+ For B2B: Weight financial and operational impact evidence more heavily. Look for quantifiable cost of the problem (hours lost, revenue leaked, compliance risk).
+ For B2C: Weight emotional investment and behavior frequency more heavily. Look for intensity of frustration, frequency of encounter, and willingness to adopt workarounds.
+ The evaluator's reasoning narrative should explicitly state which startup type applies and adjust evidence weighting accordingly.
@@ -0,0 +1,48 @@
+ # Scalability
+
+ **Weight:** 4%
+ **Core Question:** Can this grow without proportional cost increase?
+
+ ## Signal Table
+
+ ### Scalability Signals
+ | Scalable | Not Scalable |
+ |----------|--------------|
+ | Software margins (70-90%) | Service margins (30-50%) |
+ | Self-serve onboarding | High-touch sales required |
+ | Automated delivery | Manual fulfillment |
+ | Digital distribution | Physical logistics |
+ | Network effects | Linear growth only |
+
+ ### Unit Economics Check
+ - Does CAC decrease over time?
+ - Does LTV increase with scale?
+ - Do costs grow slower than revenue?
+ - Can you serve 10x users with <10x headcount?
+
+ ## Scoring Rubric
+
+ ### Score 5 (Highly Scalable)
+ - [ ] Evidence of software-like margins (70-90%+ gross margin) or a clear path to achieving them
+ - [ ] Self-serve onboarding and automated delivery are feasible for the core product
+ - [ ] Evidence of a self-reinforcing acquisition mechanism that reduces marginal CAC as the user base grows (e.g., viral referral loops, network effects, organic content compounding)
+
+ ### Score 4 (Good Scalability)
+ - [ ] Evidence that revenue can grow significantly faster than headcount (operational leverage)
+ - [ ] Product delivery is mostly automated with limited manual intervention per customer
+
+ ### Score 3 (Moderate Scalability)
+ - [ ] Evidence that the business model can achieve moderate margins (50%+ gross margin)
+ - [ ] Some manual processes exist but are automatable with investment
+
+ ### Score 2 (Limited Scalability)
+ - [ ] Revenue growth requires roughly proportional cost growth (linear scaling)
+ - [ ] High-touch sales or manual fulfillment is core to the delivery model
+
+ ### Score 1 (Not Scalable)
+ - [ ] Costs grow proportionally with or faster than revenue
+ - [ ] Business model requires per-customer manual effort with no clear automation path
+
+ ### B2B/B2C Nuance
+ For B2B: Look for self-serve vs. enterprise sales motion. Products requiring custom implementation, dedicated CSMs, or professional services per customer have inherent scalability limits. API-first and platform-based models scale better than consulting-like delivery.
+ For B2C: Look for digital distribution, zero marginal cost per user, and viral acquisition mechanics. Physical goods, local services, and high-CAC paid acquisition channels limit consumer scalability. Network effects are the strongest consumer scalability signal.
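
The unit-economics checklist above can be sketched as two quick checks. The 3x LTV/CAC threshold is a common SaaS rule of thumb rather than part of this rubric, and all figures are hypothetical:

```python
# Unit-economics sketch for the checklist above. The 3x LTV/CAC minimum is
# a widely used heuristic, not a rubric value; example numbers are made up.

def ltv_cac_healthy(ltv: float, cac: float, min_ratio: float = 3.0) -> bool:
    """Is lifetime value at least min_ratio times the acquisition cost?"""
    return ltv / cac >= min_ratio

def operational_leverage(users_growth: float, headcount_growth: float) -> bool:
    """The '10x users with <10x headcount' test: does usage outgrow staff?"""
    return users_growth > headcount_growth

healthy = ltv_cac_healthy(ltv=2_400, cac=600)                      # ratio 4.0
leveraged = operational_leverage(users_growth=10.0, headcount_growth=2.5)
```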
@@ -0,0 +1,55 @@
1
+ # Solution Gap
2
+
3
+ **Weight:** 12%
4
+ **Core Question:** Why hasn't this been solved already?
5
+
6
+ ## Signal Table
7
+
8
+ ### Good Reasons a Gap Exists
9
+ - Technology only recently became viable
10
+ - Market only recently became large enough
11
+ - Regulation recently changed
12
+ - Incumbent is asleep/complacent
13
+ - Problem requires unique insight/access
14
+ - Coordination problem nobody solved
15
+
16
+ ### Bad Reasons (Red Flags)
17
+ - "Nobody thought of it" (unlikely)
18
+ - Smarter people tried and failed
19
+ - Economics don't work
20
+ - Customers don't actually want it solved
21
+
22
+ ### Competitive Landscape
23
+ | Situation | Interpretation |
24
+ |-----------|----------------|
25
+ | No competitors | Either too early, too small, or problem isn't real |
26
+ | Few weak competitors | Often ideal — validates market, beatable |
27
+ | Many competitors | Crowded, need strong differentiation |
28
+ | One dominant player | Need a 10x better product or a different GTM |
29
+
30
+ ## Scoring Rubric
31
+
32
+ ### Score 5 (Clear, Justified Gap)
33
+ - [ ] Evidence of a structural reason the gap exists (technology newly viable, regulation recently changed, market recently reached critical mass)
34
+ - [ ] Few or weak competitors in the space, with no dominant incumbent addressing this specific problem
35
+ - [ ] No credible evidence that well-funded teams have attempted and failed at this specific solution
36
+
37
+ ### Score 4 (Strong Gap)
38
+ - [ ] Evidence of a defensible reason the gap has not been closed (incumbent complacency, unique insight required, coordination problem)
39
+ - [ ] Existing alternatives are demonstrably inadequate for the target customer's needs
40
+
41
+ ### Score 3 (Moderate Gap)
42
+ - [ ] Evidence that existing solutions leave meaningful unmet needs for the target customer
43
+ - [ ] The competitive landscape is navigable (not too crowded, not dominated by a single strong player)
44
+
45
+ ### Score 2 (Weak Gap)
46
+ - [ ] Existing solutions address most of the problem, leaving only incremental improvement opportunities
47
+ - [ ] Multiple competitors are actively investing in this space
48
+
49
+ ### Score 1 (No Credible Gap)
50
+ - [ ] Evidence that capable teams have tried and failed to solve this problem (market signals it may not be solvable/desirable)
51
+ - [ ] "Nobody thought of it" is the primary justification for the gap (red flag)
52
+
53
+ ### B2B/B2C Nuance
54
+ For B2B: Look for enterprise-specific gap reasons (legacy system lock-in creating openings, compliance changes creating new needs, vertical-specific requirements ignored by horizontal players). Failed enterprise attempts carry more signal than failed attempts do in consumer markets.
55
+ For B2C: Look for UX/distribution gap reasons (mobile-first experience missing, demographic underserved, new platform creating distribution opportunity). Consumer markets move faster -- a gap from 2 years ago may already be closing.
@@ -0,0 +1,46 @@
1
+ # Urgency
2
+
3
+ **Weight:** 8%
4
+ **Core Question:** Does this need to be solved NOW?
5
+
6
+ ## Signal Table
7
+
8
+ | High Urgency | Low Urgency |
9
+ |--------------|-------------|
10
+ | Regulatory deadline approaching | "Someday" improvement |
11
+ | Revenue actively being lost | Theoretical future benefit |
12
+ | System is broken/failing | Current system works "okay" |
13
+ | Competitive threat imminent | No immediate pressure |
14
+ | Time-sensitive opportunity | Evergreen problem |
15
+
16
+ **Test Questions:**
17
+ - Can the customer delay solving this by 6 months? 12 months?
18
+ - What happens if they do nothing?
19
+ - Is there a forcing function (deadline, event, regulation)?
20
+
21
+ ## Scoring Rubric
22
+
23
+ ### Score 5 (Critical -- Must Solve Now)
24
+ - [ ] Evidence of a forcing function with a concrete deadline (regulatory, contractual, or operational)
25
+ - [ ] Evidence of active, ongoing revenue loss or operational failure tied to the problem
26
+ - [ ] Delay of 6 months would result in measurable, serious consequences
27
+
28
+ ### Score 4 (High Urgency)
29
+ - [ ] Evidence of a time-sensitive trigger driving action (competitive threat, approaching deadline, or system degradation)
30
+ - [ ] Evidence that customers are actively seeking solutions now, not "someday"
31
+
32
+ ### Score 3 (Moderate Urgency)
33
+ - [ ] Evidence the problem is currently active and worsening (not static or improving on its own)
34
+ - [ ] Customer could delay solving it 6 months but would incur noticeable cost or risk by doing so
35
+
36
+ ### Score 2 (Low Urgency)
37
+ - [ ] Problem exists but customers can delay solving it 12+ months without serious consequences
38
+ - [ ] No external forcing function or deadline pressuring a decision
39
+
40
+ ### Score 1 (No Urgency)
41
+ - [ ] Problem is a "someday" improvement with no time pressure
42
+ - [ ] Evidence suggests customers would not prioritize solving this over other pressing issues
43
+
44
+ ### B2B/B2C Nuance
45
+ For B2B: Look for regulatory deadlines, SLA obligations, fiscal year budget cycles, and competitive pressure as urgency drivers. Revenue-at-risk quantification strengthens urgency evidence.
46
+ For B2C: Look for life events (moving, new baby, tax season), seasonal triggers, and social pressure as urgency drivers. Emotional urgency ("I need this now") carries more weight than in B2B.
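
Taken together, each dimension's 1-5 rubric score rolls up into an overall idea score via its weight. A minimal sketch of that roll-up, using the weights stated in these files (the function name and dimension keys are illustrative, not part of the rubric):

```python
def weighted_score(scores: dict[str, int], weights: dict[str, float]) -> float:
    """Combine per-dimension rubric scores (1-5) using dimension weights.

    Normalizes by the total weight of the dimensions actually scored,
    so a partial evaluation does not skew the result toward zero.
    """
    total_weight = sum(weights[d] for d in scores)
    return sum(scores[d] * weights[d] for d in scores) / total_weight

# Weights from this rubric (12%, 8%, 4%); scores are a made-up example.
weights = {"solution_gap": 0.12, "urgency": 0.08, "behavior_change": 0.04}
scores = {"solution_gap": 4, "urgency": 3, "behavior_change": 5}
print(round(weighted_score(scores, weights), 2))  # 3.83
```

Normalizing by the supplied weights keeps the result on the same 1-5 scale as the individual dimension scores.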