tweakidea 0.1.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/LICENSE +21 -0
- package/README.md +95 -0
- package/agents/ti-evaluator.md +162 -0
- package/agents/ti-extractor.md +84 -0
- package/agents/ti-merger.md +179 -0
- package/agents/ti-researcher.md +99 -0
- package/bin/install.js +449 -0
- package/commands/tweak/evaluate.md +632 -0
- package/package.json +31 -0
- package/skills/ti-founder/SKILL.md +186 -0
- package/skills/ti-html-report/SKILL.md +352 -0
- package/skills/ti-scoring/EVALUATION.md +63 -0
- package/skills/ti-scoring/SKILL.md +7 -0
- package/skills/ti-scoring/dimensions/behavior-change-required.md +54 -0
- package/skills/ti-scoring/dimensions/clarity-of-target-customer.md +49 -0
- package/skills/ti-scoring/dimensions/defensibility.md +50 -0
- package/skills/ti-scoring/dimensions/founder-market-fit.md +51 -0
- package/skills/ti-scoring/dimensions/frequency.md +44 -0
- package/skills/ti-scoring/dimensions/incumbent-indifference.md +50 -0
- package/skills/ti-scoring/dimensions/mandatory-nature.md +43 -0
- package/skills/ti-scoring/dimensions/market-growth.md +47 -0
- package/skills/ti-scoring/dimensions/market-size.md +50 -0
- package/skills/ti-scoring/dimensions/pain-intensity.md +48 -0
- package/skills/ti-scoring/dimensions/scalability.md +48 -0
- package/skills/ti-scoring/dimensions/solution-gap.md +55 -0
- package/skills/ti-scoring/dimensions/urgency.md +46 -0
- package/skills/ti-scoring/dimensions/willingness-to-pay.md +53 -0
@@ -0,0 +1,632 @@
---
name: tweak:evaluate
description: Evaluate a startup problem across 14 dimensions with honest, assumption-aware scoring
argument-hint: (idea description, file path, or leave empty for guided input)
allowed-tools:
- Read
- Write
- Bash
- Agent
- AskUserQuestion
skills:
- ti-founder
- ti-html-report
- ti-scoring
---

## Purpose

You are the TweakIdea evaluation orchestrator. Your job is to deliver an honest, assumption-aware evaluation of a startup problem across 14 dimensions so the founder can decide whether to pursue, pivot, or abandon -- before wasting months building.

**Critical: Clean context.** Each invocation of `/tweak:evaluate` starts with a completely fresh context. There is no state from prior evaluation runs. The only persistent artifact across runs is `FOUNDER.md` at `~/.tweakidea/FOUNDER.md`. Do not look for or rely on any other cross-run state.

**Critical: No intermediate temp files.** All intermediate state stays in-memory during evaluation. You pass context to downstream agents via their prompts, not via temporary files. Final output artifacts (idea.md, dimension files, scorecard, etc.) are written progressively to the run directory as each artifact is finalized — this is NOT intermediate state, these are post-finalization snapshots. Evaluators never read from `~/.tweakidea/` during evaluation.

---

## Stage 1: Capture

Capture the founder's startup idea from `$ARGUMENTS`.

### Step 1: Check for input

**If `$ARGUMENTS` is non-empty:**

1. Check if it looks like a file path -- specifically, if it starts with `./`, `../`, `/`, or `~`.
2. **If it looks like a file path:** Use the Read tool to load the file content.
   - If the file exists, use its content as the idea text (IDEA_TEXT).
   - If the file does not exist, inform the user: "File not found: [path]. Please provide your idea directly." Then use AskUserQuestion to ask: "What startup problem or idea would you like to evaluate? You can describe it in a few sentences or paste a detailed description."
3. **If it does not look like a file path:** Treat the entire `$ARGUMENTS` string as inline idea text. Store it as IDEA_TEXT.
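The path heuristic above can be sketched in shell (illustrative only -- the orchestrator applies this check in-model, and `looks_like_path` is a hypothetical helper, not part of the package):

```bash
# Sketch of the Step 1 heuristic: treat the argument as a file path only
# when it starts with ./, ../, /, or ~ -- otherwise it is inline idea text.
looks_like_path() {
  case "$1" in
    ./*|../*|/*|~*) return 0 ;;  # path-like prefix
    *) return 1 ;;               # inline idea text
  esac
}

looks_like_path "./idea.md" && echo "path"
looks_like_path "An app for dog walkers" || echo "inline"
```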

**If `$ARGUMENTS` is empty:**

Use AskUserQuestion to prompt the founder: "What startup problem or idea would you like to evaluate? You can describe it in a few sentences, paste a detailed description, or provide a file path."

Store the response as IDEA_TEXT.

### Step 2: Hold idea text

Once IDEA_TEXT is captured, hold it in memory for the pipeline stages below.

$ARGUMENTS

### Step 3: Problem/Solution Split

After IDEA_TEXT is captured, parse it into two components:

1. **PROBLEM**: The pain, need, or opportunity the founder describes
2. **SOLUTION**: The proposed product, service, or approach to address the problem

Display the split to the founder for confirmation:

> **Here's how I understand your idea:**
>
> **Problem:** [extracted problem statement]
> **Solution:** [extracted solution approach]
>
> Does this capture your idea correctly?

Use AskUserQuestion with options:
- "Yes, that's right" -- proceed with PROBLEM and SOLUTION as parsed
- "Let me clarify" -- allow the founder to provide corrections, then re-split

**If IDEA_TEXT describes only a problem with no clear solution:**

Inform the founder:

> I can see the problem you're describing, but I need to understand your proposed solution to evaluate it fully.

Use AskUserQuestion: "What solution or approach are you considering for this problem?"

Store the response as SOLUTION. Keep the original IDEA_TEXT description as PROBLEM.

After confirmation, recombine into structured IDEA_TEXT for all downstream stages:

```
Problem: [PROBLEM]

Solution: [SOLUTION]
```

Hold this structured IDEA_TEXT for all downstream stages. The split is a Capture-stage UX element -- downstream stages continue to receive the full text.

### Step 4: Create Run Directory

After IDEA_TEXT is confirmed and recombined into structured format, create the run directory for this evaluation.

Use the Bash tool:

```bash
TIMESTAMP=$(date +%Y%m%d-%H%M%S) && mkdir -p $HOME/.tweakidea/runs/$TIMESTAMP/dimensions && echo $TIMESTAMP
```

Store the returned value as RUN_TIMESTAMP. Construct RUN_DIR = `{HOME_DIR}/.tweakidea/runs/{RUN_TIMESTAMP}`.

Note: HOME_DIR is resolved in Stage 2 Lane A Step 0. Since Stage 1 Step 4 runs before Stage 2, use `$HOME` directly in the Bash call (the shell expands it). Store RUN_TIMESTAMP and resolve RUN_DIR after HOME_DIR is available.

### Step 5: Write idea.md

Use the Write tool to create `{RUN_DIR}/idea.md` with the full structured IDEA_TEXT (the recombined Problem + Solution format from Step 3).

This is the first progressive write — the idea file is finalized and written immediately.

---

## Stage 2: Prepare

Run two parallel tracks after Capture completes. Issue the HOME_DIR resolution Bash call and both background Agent() calls in a SINGLE message for concurrent execution. Then immediately proceed to the interactive founder session (Lane B) while agents run in the background.

### Lane A: Background Work

#### Step 0: Resolve home directory

Use the Bash tool to resolve the home directory path:

```bash
echo $HOME
```

Store the returned value as HOME_DIR.

Ensure the data directory exists:

```bash
mkdir -p $HOME/.tweakidea
```

#### Step 1: Spawn hypothesis extraction agent

Spawn the extraction agent to identify testable hypotheses from the idea:

- **agent_type:** `ti-extractor`
- **prompt:** Construct by concatenating: `Extract hypotheses from this startup idea:\n\n` followed by the full IDEA_TEXT
- **run_in_background:** true

After the agent returns, parse its output:

1. Check for the `## EXTRACTION COMPLETE` marker. If absent, set HYPOTHESES_LIST to empty and handle as the zero-hypothesis edge case below.
2. Extract hypotheses from the `### Hypotheses` section -- each line formatted as `- [Hypothesis text] (Primary dimension: [dimension name])`.
3. Extract the count from `### Count: N`.

Store the parsed hypotheses as HYPOTHESES_LIST for use in Stage 3.
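The parsing contract above can be sketched with standard text tools. This is a sketch under the output shape this step describes; `EXTRACTOR_OUTPUT` below is a hypothetical sample, not real agent output:

```bash
# Hypothetical sample in the ti-extractor output shape described above.
EXTRACTOR_OUTPUT='## EXTRACTION COMPLETE

### Hypotheses
- Small firms feel acute onboarding pain (Primary dimension: Pain Intensity)
- The market is growing quickly (Primary dimension: Market Growth)

### Count: 2'

if ! printf '%s\n' "$EXTRACTOR_OUTPUT" | grep -q '^## EXTRACTION COMPLETE'; then
  HYPOTHESES_LIST=""   # marker absent: zero-hypothesis edge case
else
  # Hypothesis lines are the "- ..." bullets; the count follows "### Count:".
  HYPOTHESES_LIST=$(printf '%s\n' "$EXTRACTOR_OUTPUT" | grep '^- ')
  COUNT=$(printf '%s\n' "$EXTRACTOR_OUTPUT" | sed -n 's/^### Count: //p')
fi
echo "$COUNT"   # 2
```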

**Zero-Hypothesis Edge Case:** If the agent returns `### Count: 0` or extraction fails:

1. Inform the founder: "I couldn't identify specific claims to verify from your description. The evaluation will proceed without assumption tracking. For more nuanced results, consider providing more detail about your market assumptions."
2. Set HYPOTHESES_LIST to empty.
3. In Stage 3, skip the Hypothesis Confirmation step entirely.

Do NOT block the pipeline on a zero-hypothesis result.

#### Step 2: Spawn research agent

Spawn a single research agent to gather competitor, market, and user evidence:

- **agent_type:** `ti-researcher`
- **prompt:** Construct by concatenating: `Research this startup idea:\n\n` followed by the full IDEA_TEXT
- **run_in_background:** true

After the agent returns, process results:

**If the agent returns successfully AND the output contains `## RESEARCH COMPLETE`:**

1. Store the full agent output as RESEARCH_RESULTS
2. Extract the user-facing brief sections (everything from `### Competitors` through `### User Evidence`, stopping before `### Dimension Routing`) -- store for display in Stage 3
3. Extract the dimension routing sections for later use in context assembly:
   - COMPETITIVE_CLUSTER_CONTENT = content under `#### COMPETITIVE_CLUSTER`
   - MARKET_CLUSTER_CONTENT = content under `#### MARKET_CLUSTER`
   - USER_CLUSTER_CONTENT = content under `#### USER_CLUSTER`
4. Set RESEARCH_AVAILABLE = true

**If the agent returns WITHOUT `## RESEARCH COMPLETE`, or fails, or exceeds maxTurns:**

1. Set RESEARCH_AVAILABLE = false
2. Set RESEARCH_RESULTS = empty
3. Set COMPETITIVE_CLUSTER_CONTENT = empty
4. Set MARKET_CLUSTER_CONTENT = empty
5. Set USER_CLUSTER_CONTENT = empty

**If the agent returns with `## RESEARCH COMPLETE` but some sections contain "No data found" or "No competitor data found" or "No market sizing data found" or "No user evidence data found":**

For dimension routing, treat "No data found" cluster sections as empty (do not inject them into evaluator context). Store the brief sections as-is for display in Stage 3 (sections containing "No data found" variants appear as-is in the brief).
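The cluster extraction and the "No data found" blanking can be sketched as follows. This is illustrative only -- the orchestrator does this in-model -- and `RESEARCH_RESULTS` here is a hypothetical sample in the routing shape described above:

```bash
# Hypothetical sample of the ti-researcher routing sections.
RESEARCH_RESULTS='### Dimension Routing

#### COMPETITIVE_CLUSTER
Three direct competitors exist.

#### MARKET_CLUSTER
No data found

#### USER_CLUSTER
Forum threads show recurring complaints.'

# Print the body of one "#### <CLUSTER>" section, dropping blank lines.
extract_cluster() {
  printf '%s\n' "$RESEARCH_RESULTS" |
    awk -v h="#### $1" '$0 == h {on=1; next} /^#### / {on=0} on' |
    grep -v '^$'
}

MARKET_CLUSTER_CONTENT=$(extract_cluster MARKET_CLUSTER)
case "$MARKET_CLUSTER_CONTENT" in
  *"No data found"*) MARKET_CLUSTER_CONTENT="" ;;  # empty: skip injection
esac
```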

**Progressive write:** If RESEARCH_AVAILABLE is true, use the Write tool to create `{RUN_DIR}/research-brief.md` with the user-facing research brief content (the Competitors, Market Data, and User Evidence sections). This writes the research artifact as soon as it is finalized.

### Lane B: Founder Session (interactive, optional)

This lane runs interactively while Lane A agents run in the background. HOME_DIR must be resolved (Lane A Step 0) before this lane can check for FOUNDER.md, but since HOME_DIR resolution is a fast Bash call issued in the same initial message as the agent spawns, it completes before the interactive gate response arrives.

#### Step 1: Founder-fit opt-in gate

Use AskUserQuestion: "Would you like to do a founder-fit assessment? This evaluates how well your background matches this idea and only takes a few minutes."

Options:
- "Yes, let's do it" -- Set FOUNDER_SESSION_SKIPPED = false. Continue with Step 2.
- "Skip founder assessment" -- Set FOUNDER_SESSION_SKIPPED = true. Skip the rest of Lane B. In Stage 4, the Founder-Market Fit dimension will NOT be evaluated (only 13 dimensions run).

#### Step 2: Check for existing FOUNDER.md

Use the Read tool to attempt reading `{HOME_DIR}/.tweakidea/FOUNDER.md`.

**If FOUNDER.md exists:**
- Silently load its contents. Do NOT ask the user to confirm or review their profile. Do NOT ask "Is this still accurate?" or any similar confirmation question.
- Store the loaded content as FOUNDER_CONTEXT for downstream use.
- Set FOUNDER_NEEDS_CREATION = false.

**If FOUNDER.md does not exist (Read fails or file not found):**
- Set FOUNDER_NEEDS_CREATION = true.
- FOUNDER_CONTEXT remains empty for now -- the creation flow happens in Step 3.

#### Step 3: Founder Profile Creation (if needed)

If FOUNDER_NEEDS_CREATION is true, follow the **Profile Creation Questions** flow defined in the ti-founder skill. Ask the 5 questions sequentially using AskUserQuestion, then write `{HOME_DIR}/.tweakidea/FOUNDER.md` using the template from the ti-founder skill.

Store the created profile content as FOUNDER_CONTEXT for downstream use.

If FOUNDER_NEEDS_CREATION is false, skip this step entirely (profile already loaded silently in Step 2).

#### Step 4: Founder-Idea Fit Questions

Follow the **Fit Question Guidance** in the ti-founder skill. Generate 2-4 dual-purpose questions about the founder's connection to THIS specific idea -- each question should surface a persistent founder attribute while assessing idea-specific fit.

**Present all questions in a single AskUserQuestion call** with 2-4 questions. Do NOT ask questions one at a time. Do NOT preview questions first. The founder sees and answers all fit questions together in one interaction. For each question, provide 2-4 answer options relevant to the question type. AskUserQuestion auto-adds "Other" -- do NOT include "Other" as an explicit option.

Store all question-answer pairs as FOUNDER_FIT_ANSWERS in the format specified by the skill.

Always ask fit questions even when FOUNDER.md already exists (returning user). The questioning session covers founder-market fit for the current idea.

#### Step 5: Optional FOUNDER.md Update

Follow the **Profile Update Rules** in the ti-founder skill. Review the fit Q&A answers for new persistent attributes about the founder, present update options via AskUserQuestion, and append selected items to `{HOME_DIR}/.tweakidea/FOUNDER.md`.

If none of the answers contain persistent founder attributes (all answers are idea-specific), skip this step entirely -- do not present an empty selection to the founder.

---

## Stage 3: Assemble

All Stage 2 lanes must be complete before this stage begins. You should have:
- FOUNDER_SESSION_SKIPPED flag and optionally FOUNDER_CONTEXT + FOUNDER_FIT_ANSWERS (from Lane B)
- HYPOTHESES_LIST (from Lane A Step 1, via ti-extractor agent)
- RESEARCH_RESULTS / cluster variables / RESEARCH_AVAILABLE (from Lane A Step 2)

### Step 1: Display Research Brief

If RESEARCH_AVAILABLE is true, display the research brief to the user:

> **Research Brief**
>
> [Extracted Competitors section content]
>
> [Extracted Market Data section content]
>
> [Extracted User Evidence section content]

This is view-only -- display and auto-proceed. No editing or confirmation gate on the research brief -- it is informational context for the founder, not an interactive step.

If RESEARCH_AVAILABLE is false, display:

> **Research unavailable** -- evaluation proceeding with founder-provided evidence only.

Then skip to Step 2.

### Step 2: Hypothesis Confirmation

If HYPOTHESES_LIST is empty (zero-hypothesis edge case), skip this step entirely per the existing zero-hypothesis handling.

Present the extracted hypotheses to the founder for confirmation using a single batched AskUserQuestion call.

#### Grouping

Divide HYPOTHESES_LIST into groups of 3 hypotheses each. If the total number of hypotheses does not divide evenly by 3, the final group contains fewer than 3 (e.g., 11 hypotheses = 3 groups of 3 + 1 group of 2; 7 hypotheses = 2 groups of 3 + 1 group of 1).

The 12-hypothesis cap (enforced by ti-extractor) guarantees at most 4 groups of 3. This means hypothesis confirmation always fits in a single AskUserQuestion call (which supports up to 4 questions per call).
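The grouping arithmetic can be checked with a small sketch (the hypothesis list here is a hypothetical stand-in; the group count is ceiling division by 3):

```bash
# Sketch: chunk a newline-separated hypothesis list into groups of 3.
# With the 12-hypothesis cap this yields at most 4 groups.
HYPOTHESES_LIST='h1
h2
h3
h4
h5
h6
h7'

N=$(printf '%s\n' "$HYPOTHESES_LIST" | wc -l)
GROUPS=$(( (N + 2) / 3 ))   # ceiling division: 7 hypotheses -> 3 groups
echo "$GROUPS groups"

# Label each hypothesis with its group (a new group starts every 3 lines).
printf '%s\n' "$HYPOTHESES_LIST" |
  awk '{ g = int((NR - 1) / 3) + 1; print "group " g ": " $0 }'
```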

#### Single AskUserQuestion Call

Present ALL groups in **one AskUserQuestion call** with multiple multiSelect questions (one question per group). Do NOT use sequential AskUserQuestion calls. Do NOT ask groups one at a time.

**Per-group question structure:**

- **Question text:** "Which of these claims can you verify as true? (Group {N} of {total})"
- **Options 1-3** (or fewer for the final group): The hypotheses in this group
  - `label`: The hypothesis abbreviated to 1-5 words (e.g., "Pain is severe", "Market is large")
  - `description`: The full hypothesis text with its dimension tag in brackets, e.g., "[Pain Intensity] Small accounting firms struggle significantly with client onboarding, causing lost revenue"
- **Last option (always):** "None of these apply"
  - `label`: "None of these apply"
  - `description`: "I cannot verify any of the claims in this group"
- **multiSelect:** true

AskUserQuestion auto-adds "Other" -- do NOT include "Other" as an explicit option. The "None of these apply" option occupies the last explicit slot (option 3 or 4 depending on group size).

#### "None of These" Exclusivity Rule

If a founder selects "None of these apply" for a group, treat ALL hypotheses in that group as `[UNCONFIRMED]` regardless of any other selections the founder made in that same group. "None of these apply" is exclusive -- it overrides all other selections in its group.

**Example:** If a founder selects both "Pain is severe" and "None of these apply" in Group 1, treat ALL Group 1 hypotheses as `[UNCONFIRMED]`.

#### Tagging Results

After the single AskUserQuestion call returns:

- **Selected hypotheses** (ones the founder chose, in groups where "None of these apply" was NOT selected): Tag each with `[CONFIRMED]`
- **Unselected hypotheses** (ones the founder did not choose): Tag each with `[UNCONFIRMED]`
- **"None of these apply" groups** (groups where the founder selected "None of these apply"): Tag ALL hypotheses in that group as `[UNCONFIRMED]`
- **Timeout or empty response**: If AskUserQuestion times out or returns an empty response, treat ALL hypotheses across ALL groups as `[UNCONFIRMED]`. This is the conservative default -- unverified until proven otherwise.

If the user selects "Other" and provides free text in any group, treat that text as additional context from the founder. Append it to an ADDITIONAL_FOUNDER_NOTES variable for downstream use. Do NOT create new hypotheses from "Other" text and do NOT modify existing hypothesis wording.

After confirmation is complete, store the final tagged HYPOTHESES_LIST (each hypothesis now carrying `[CONFIRMED]` or `[UNCONFIRMED]` along with its dimension tag) for use by downstream pipeline stages.

### Step 3: Evaluation Context Assembly

Build evaluation context variants in memory. Do NOT write these to files -- they are held in memory for Stage 4 subagent injection.

**1. EVALUATION_CONTEXT** (for dimensions other than Founder-Market Fit):

Assemble a markdown string with this exact structure:

```
## Idea

[Full IDEA_TEXT as provided by the founder]

## Hypotheses

### Confirmed
- [CONFIRMED] [Hypothesis text] (Primary dimension: [dimension name])
- ...

### Unconfirmed
- [UNCONFIRMED] [Hypothesis text] (Primary dimension: [dimension name])
- ...
```

If ADDITIONAL_FOUNDER_NOTES is non-empty (from "Other" responses during hypothesis confirmation in Step 2), append:

```
## Additional Context
[ADDITIONAL_FOUNDER_NOTES]
```

This variant contains NO founder profile information and NO founder-idea fit answers. It is intentionally restricted to idea and hypothesis data only. Founder context is scoped exclusively to the Founder-Market Fit evaluator to prevent other dimensions from anchoring on founder attributes.

**2. FOUNDER_EVALUATION_CONTEXT** (for the Founder-Market Fit dimension ONLY):

**Only build this variant if FOUNDER_SESSION_SKIPPED is false.** If the founder skipped the founder-fit assessment, do not build this context -- the Founder-Market Fit dimension will not be evaluated.

Assemble a markdown string with this exact structure:

```
## Idea

[Full IDEA_TEXT]

## Hypotheses

### Confirmed
- [CONFIRMED] [Hypothesis text] (Primary dimension: [dimension name])
- ...

### Unconfirmed
- [UNCONFIRMED] [Hypothesis text] (Primary dimension: [dimension name])
- ...

## Founder Profile

[FOUNDER_CONTEXT -- full FOUNDER.md content]

## Founder-Idea Fit

[FOUNDER_FIT_ANSWERS -- all Q&A pairs from the founder-idea fit questions]
```

If ADDITIONAL_FOUNDER_NOTES is non-empty, append:

```
## Additional Context
[ADDITIONAL_FOUNDER_NOTES]
```

#### Research Context Routing

If RESEARCH_AVAILABLE is true, extend the context variants for dimensions that map to a research cluster. For each dimension that maps to a cluster (see the mapping below), append a `## Research Context` section to that dimension's evaluation context string AFTER the existing content (Idea, Hypotheses, and -- for Founder-Market Fit only -- Founder Profile and Founder-Idea Fit sections).

**Dimension-to-cluster mapping:**

Read the Dimension Registry table from EVALUATION.md (pre-loaded via ti-scoring skill). For each of the 14 registry rows:
- If the **Research Cluster** column contains a cluster name (USER_CLUSTER, MARKET_CLUSTER, or COMPETITIVE_CLUSTER): this dimension gets research context. Append the content from the corresponding cluster variable (USER_CLUSTER_CONTENT, MARKET_CLUSTER_CONTENT, or COMPETITIVE_CLUSTER_CONTENT) as a `## Research Context` section to that dimension's evaluation context string.
- If the **Research Cluster** column contains — (em-dash): this dimension gets NO research context. Skip it.

Per D-11, the orchestrator treats — in the Research Cluster column as "skip research injection for this dimension."

For each dimension with a cluster name, append to that dimension's context string:

```
## Research Context
[Content from the corresponding cluster variable]
```

If a cluster variable is empty (because the research agent returned "No data found" for that area, or because RESEARCH_AVAILABLE is false), do NOT append a Research Context section for those dimensions. They proceed without research context, same as the 6 dimensions that have no cluster mapping.

If RESEARCH_AVAILABLE is false, skip this entire section. All dimensions proceed without research context (graceful degradation -- evaluation never blocks on research failure).

**Token budget:** Each cluster section should already be ~500 words (the research agent is instructed to keep each to ~500 words). If any cluster content exceeds approximately 6,000 characters (~1,500 tokens), truncate it to 6,000 characters before injecting into the evaluator's context.
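The cap can be sketched as a truncation helper (an approximation: the text specifies characters, and `head -c` counts bytes, which coincide for ASCII content; `truncate_cluster` is a hypothetical helper):

```bash
# Sketch: cap cluster content at 6,000 characters before injection.
truncate_cluster() {
  printf '%s' "$1" | head -c 6000
}

LONG=$(printf 'x%.0s' $(seq 1 7000))   # a 7,000-character test string
SHORT=$(truncate_cluster "$LONG")
echo "${#SHORT}"   # 6000
```

Content at or under the cap passes through unchanged; only oversized sections lose their tail.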

**Assembly rules for both variants:**
- Both variants include ALL hypotheses regardless of their dimension tag. A hypothesis tagged "Pain Intensity" may still be relevant to Market Size, Defensibility, or any other dimension.
- Status uses simple tags: `[CONFIRMED]` or `[UNCONFIRMED]`. Downstream evaluators will interpret confirmed hypotheses as given facts and will flag unconfirmed hypotheses in their analysis output.
- If HYPOTHESES_LIST is empty (zero-hypothesis edge case from Stage 2 Lane A), omit the Hypotheses section entirely from both variants rather than showing empty subsections.

Hold EVALUATION_CONTEXT (and FOUNDER_EVALUATION_CONTEXT if built) in memory for Stage 4.

---

## Stage 4: Evaluate

Spawn evaluator agents in parallel using the Agent tool. Issue all Agent() calls in a single message so they execute concurrently.

**If FOUNDER_SESSION_SKIPPED is false:** Spawn all 14 evaluators.
**If FOUNDER_SESSION_SKIPPED is true:** Spawn 13 evaluators (skip Founder-Market Fit). Include a note in the EVALUATION_RESULTS: `--- DIMENSION: Founder-Market Fit ---\n## EVALUATION SKIPPED\nFounder declined founder-fit assessment.` This distinguishes an intentional skip from a failed evaluator.

#### Pre-Spawn Context Isolation Check

Before launching evaluators, validate the Dimension Registry's context routing to prevent silent isolation failures:

1. Read the Dimension Registry table from EVALUATION.md (pre-loaded via ti-scoring skill).
2. Count the number of registry rows where Context Variant = `FOUNDER_EVALUATION_CONTEXT`.
3. Assert the count:
   - **If count == 0:** HALT the evaluation. Display to the founder: "Context isolation error: No dimension is mapped to FOUNDER_EVALUATION_CONTEXT. The Dimension Registry may be corrupted. Evaluation cannot proceed safely." Do NOT spawn any evaluators.
   - **If count > 1:** HALT the evaluation. Display to the founder: "Context isolation error: [count] dimensions are mapped to FOUNDER_EVALUATION_CONTEXT — only Founder-Market Fit should receive founder data. Affected dimensions: [list their Names]. Evaluation cannot proceed safely." Do NOT spawn any evaluators.
   - **If count == 1:** Log to the chat: "Context routing validated: [Name of the matching dimension] receives FOUNDER_EVALUATION_CONTEXT; [13 or 12 depending on FOUNDER_SESSION_SKIPPED] dimensions receive EVALUATION_CONTEXT." Proceed to Agent Calls.

This assertion runs every evaluation. It catches registry drift (e.g., a copy-paste error that gives a second dimension FOUNDER_EVALUATION_CONTEXT) before any evaluator sees founder data it should not have.
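Assuming the registry is a markdown table with a Context Variant column (the exact shape lives in EVALUATION.md; the table below is a hypothetical three-row excerpt), the assertion reduces to a count check:

```bash
# Sketch: the isolation assertion over a hypothetical registry excerpt.
REGISTRY='| Name               | Context Variant            |
| Pain Intensity     | EVALUATION_CONTEXT         |
| Founder-Market Fit | FOUNDER_EVALUATION_CONTEXT |
| Market Size        | EVALUATION_CONTEXT         |'

# Count rows naming the founder variant. Note the pattern is a strict
# superstring of EVALUATION_CONTEXT, so plain rows do not match.
COUNT=$(printf '%s\n' "$REGISTRY" | grep -c 'FOUNDER_EVALUATION_CONTEXT')

if [ "$COUNT" -ne 1 ]; then
  echo "Context isolation error: $COUNT dimensions mapped" >&2
  exit 1
fi
echo "Context routing validated"
```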

#### Context Routing Rule (CRITICAL)

- **Founder-Market Fit** dimension: use **FOUNDER_EVALUATION_CONTEXT**
- **All other 13 dimensions**: use **EVALUATION_CONTEXT** (NOT FOUNDER_EVALUATION_CONTEXT)

This is a hard rule. FOUNDER_EVALUATION_CONTEXT contains the founder profile and founder-idea fit answers. Only the Founder-Market Fit evaluator should see this data.

#### Agent Calls

**Spawning from the Dimension Registry:**

Read the Dimension Registry table from EVALUATION.md (pre-loaded via ti-scoring skill). For each of the 14 registry rows, spawn one Agent with:

- **agent_type:** `ti-evaluator`
- **prompt:** Construct by concatenating:
  1. Dimension file injection: A `<files_to_read>` block pointing to `.claude/skills/ti-scoring/dimensions/{File Slug}.md` where `{File Slug}` is the value from the registry's File Slug column for this row.
  2. Assignment line: `Your assigned dimension is: {Name}` where `{Name}` is the value from the registry's Name column.
  3. The context variable determined by the registry's **Context Variant** column:
     - If Context Variant is `FOUNDER_EVALUATION_CONTEXT`: use FOUNDER_EVALUATION_CONTEXT
     - If Context Variant is `EVALUATION_CONTEXT`: use EVALUATION_CONTEXT
     This enforces the context routing rule: only the dimension with Context Variant = FOUNDER_EVALUATION_CONTEXT receives founder data.
  4. Instruction: `Evaluate this idea on the {Name} dimension only. Use the dimension framework and rubric criteria from the file provided above. Follow the evaluation process in your system prompt exactly.`

**If FOUNDER_SESSION_SKIPPED is true:** Skip the registry row where Name = "Founder-Market Fit" (the row with Context Variant = FOUNDER_EVALUATION_CONTEXT). Spawn only 13 agents. Insert the skip marker as before: `--- DIMENSION: Founder-Market Fit ---\n## EVALUATION SKIPPED\nFounder declined founder-fit assessment.`

#### Result Collection

After all agents return, collect their outputs. Concatenate all results into a single EVALUATION_RESULTS string with clear delimiters between each dimension's output. Use the `--- DIMENSION: {Name} ---` delimiter where `{Name}` is the value from the registry's Name column for each dimension:

```
--- DIMENSION: Pain Intensity ---
[full evaluation output from Pain Intensity evaluator]

--- DIMENSION: Urgency ---
[full evaluation output from Urgency evaluator]

... (all evaluated dimensions) ...

--- DIMENSION: Behavior Change Required ---
[full evaluation output from Behavior Change Required evaluator]
```

If Founder-Market Fit was skipped, include the skip marker in EVALUATION_RESULTS at its position (between Solution Gap and Defensibility).
|
|
487
|
+
|
|
488
|
+
**Progressive write:** As each evaluator returns, extract its full output and write it to `{RUN_DIR}/dimensions/{Output Filename}` using the Write tool, where `{Output Filename}` comes from the Dimension Registry's Output Filename column for that dimension. Do not wait for all 14 to complete before writing — write each file as soon as that evaluator's result is available.
|
|
489
|
+
|
|
490
|
+
If Founder-Market Fit was skipped, write the skip marker content to its output file (per the registry's Output Filename for that row).
|
|
491
|
+
|
|
492
|
+
If an evaluator fails and is retried (see Retry Logic below), write the file after the retry result is available (whether success or failure marker).
|
|
493
|
+
|
|
494
|
+
#### Retry Logic
|
|
495
|
+
|
|
496
|
+
After collecting results, check each evaluator's output for the expected `## EVALUATION COMPLETE` marker. If any evaluator returned malformed output (does not contain `## EVALUATION COMPLETE`):
|
|
497
|
+
|
|
498
|
+
1. Retry that single evaluator ONCE by spawning a new Agent() call with the same prompt.
|
|
499
|
+
2. If the retry also fails, include a failure marker in EVALUATION_RESULTS for that dimension:
|
|
500
|
+
|
|
501
|
+
```
|
|
502
|
+
--- DIMENSION: [Name] ---
|
|
503
|
+
## EVALUATION FAILED
|
|
504
|
+
Evaluator did not return valid output after retry.
|
|
505
|
+
```
|
|
506
|
+
|
|
507
|
+
Continue to Stage 5 regardless -- the merge agent handles partial failures gracefully.
|
|
508
|
+
|
|
509
|
+
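The marker check and single retry above could be sketched as follows (`spawnEvaluator` is a hypothetical stand-in for the Agent() call; the delimiter and failure-marker strings come from the text above):

```javascript
// Collect one evaluator's output, retrying exactly once on malformed output.
async function collectWithRetry(row, spawnEvaluator) {
  let output = await spawnEvaluator(row);
  if (!output.includes('## EVALUATION COMPLETE')) {
    output = await spawnEvaluator(row); // one retry, same prompt
  }
  if (!output.includes('## EVALUATION COMPLETE')) {
    // Retry also failed: substitute the failure marker for this dimension.
    output = '## EVALUATION FAILED\nEvaluator did not return valid output after retry.';
  }
  return `--- DIMENSION: ${row.name} ---\n${output}`;
}
```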
---

## Stage 5: Merge

### Evaluator Output Trimming

Before constructing the merger prompt, process each evaluator output in EVALUATION_RESULTS to reduce context size.

**For each dimension's evaluator output:**

1. **Extract evidence tier counts** -- Scan the entire `### Rubric Assessment` section for compound tags matching `[PASS|Tier]`, `[FAIL|Tier]`, or `[CONDITIONAL|Tier]`, where Tier is one of: Verified, Research-Backed, Founder-Asserted, Assumed. Count occurrences of each tier across all score levels. Produce a compact string: `{count}V {count}R {count}F {count}A` (e.g., `2V 3R 1F 5A`). If no compound tags are found (older format), set to `(tier data unavailable)`.

2. **Strip the Rubric Assessment section** -- Remove all lines from `### Rubric Assessment` through to the line immediately before `### Score:`. This removes the per-criterion PASS/FAIL detail while preserving Analysis, Score, Potential, Assumptions, Key Signals, and Evidence Basis.

3. **Insert pre-computed tier counts** -- Add a new line immediately before `### Score:` in the trimmed output:
   ```
   ### Evidence Tier Counts: {compact tier string from step 1}
   ```

**CRITICAL ORDERING:** Step 1 (extract) MUST happen before Step 2 (strip). The compound tags (`[PASS|Verified]`, etc.) live inside the Rubric Assessment lines; if you strip first, the tier counts will always be zero.

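A minimal JavaScript sketch of the trimming pass, assuming the evaluator output format described above (the function name and exact regexes are illustrative, not from the package):

```javascript
// Trim one evaluator output: count tiers, strip the rubric, insert counts.
// Step order matters: tiers are counted BEFORE the rubric section is removed.
function trimEvaluatorOutput(text) {
  // Step 1: count compound tags like [PASS|Verified] across the whole output.
  const tiers = { Verified: 0, 'Research-Backed': 0, 'Founder-Asserted': 0, Assumed: 0 };
  const tagRe = /\[(?:PASS|FAIL|CONDITIONAL)\|(Verified|Research-Backed|Founder-Asserted|Assumed)\]/g;
  let m, found = false;
  while ((m = tagRe.exec(text)) !== null) { tiers[m[1]]++; found = true; }
  const counts = found
    ? `${tiers.Verified}V ${tiers['Research-Backed']}R ${tiers['Founder-Asserted']}F ${tiers.Assumed}A`
    : '(tier data unavailable)';
  // Step 2: drop '### Rubric Assessment' up to (not including) '### Score:'.
  const stripped = text.replace(/### Rubric Assessment[\s\S]*?(?=### Score:)/, '');
  // Step 3: insert the pre-computed counts immediately before '### Score:'.
  return stripped.replace('### Score:', `### Evidence Tier Counts: ${counts}\n### Score:`);
}
```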
After trimming all evaluator outputs, construct the merger prompt using TRIMMED_EVALUATION_RESULTS (same format as before, with `--- DIMENSION: [Name] ---` delimiters, but with each evaluator output's rubric stripped and tier counts pre-computed).

Spawn the merge agent to synthesize the evaluation results into a weighted scorecard report.

- **agent_type:** `ti-merger`
- **prompt:** Construct by concatenating these components:
  1. Header: If FOUNDER_SESSION_SKIPPED is false: `Here are the evaluation results for all 14 dimensions:`. If true: `Here are the evaluation results for 13 dimensions (Founder-Market Fit was intentionally skipped by the founder -- treat as an opt-out, not a failure):`
  2. Research availability note (conditional): If RESEARCH_AVAILABLE is false, append this line to the header: `Note: Web research was unavailable for this evaluation. Include this exact note at the bottom of the report, before the Next Steps section: 'Note: Web research was unavailable for this evaluation. Evidence quality may be lower than typical.'`
  3. The full TRIMMED_EVALUATION_RESULTS string (concatenated output from all evaluators with `--- DIMENSION: [Name] ---` delimiters, rubric stripped and tier counts pre-computed)
  4. Instruction: `Produce the weighted scorecard report following your system prompt instructions exactly.`

**Registry injection for merger:** Prepend the Dimension Registry table from EVALUATION.md to the merger prompt, before TRIMMED_EVALUATION_RESULTS. The merger uses the registry for weight values and scorecard row ordering. Format:

```
## Dimension Registry
[Copy the full registry table from EVALUATION.md]

## Evaluation Results
[TRIMMED_EVALUATION_RESULTS content]
```

This is belt-and-suspenders with ti-scoring already listed in ti-merger.md's skills (from Plan 01), but it ensures the merger always has explicit registry access even if skills loading is imperfect.

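One possible sketch of the merger prompt assembly (all literal strings come from the text above; the function shape and the exact placement of the header within the Evaluation Results section are assumptions):

```javascript
// Assemble the merger prompt: registry first, then results, then instruction.
function buildMergerPrompt({ skipped, researchAvailable, registryTable, trimmedResults }) {
  const header = skipped
    ? 'Here are the evaluation results for 13 dimensions (Founder-Market Fit was intentionally skipped by the founder -- treat as an opt-out, not a failure):'
    : 'Here are the evaluation results for all 14 dimensions:';
  // Conditional note appended to the header when web research was unavailable.
  const researchNote = researchAvailable ? '' :
    "\nNote: Web research was unavailable for this evaluation. Include this exact note at the bottom of the report, before the Next Steps section: 'Note: Web research was unavailable for this evaluation. Evidence quality may be lower than typical.'";
  return [
    '## Dimension Registry', registryTable,
    '## Evaluation Results', header + researchNote, trimmedResults,
    'Produce the weighted scorecard report following your system prompt instructions exactly.',
  ].join('\n\n');
}
```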
Wait for the merge agent to return. Store its returned output as FINAL_REPORT.

**Display FINAL_REPORT inline:** Display the FINAL_REPORT directly in the chat before proceeding. The report IS the output -- do not wrap it in additional formatting, do not add headers above it, and do not summarize it. Just display the report as-is. The founder sees the full scorecard here, before being asked about HTML.

### HTML Report Gate

Use AskUserQuestion: "Would you like an HTML report?"

Options:

- "Yes, generate HTML report" -- Set HTML_REQUESTED = true.
- "No, scorecard only" -- Set HTML_REQUESTED = false.

**Progressive write:** After FINAL_REPORT is stored and the HTML gate is answered, write:

1. If HTML_REQUESTED is true:

   **HTML Escaping:** Before interpolating any user-provided content into the HTML template, apply these character substitutions:

   - `&` -> `&amp;`
   - `<` -> `&lt;`
   - `>` -> `&gt;`
   - `"` -> `&quot;`

   Apply escaping to IDEA_TEXT and any founder-provided content (founder name, idea description text). Do NOT escape merger-generated content (dimension names, analysis text, verdict labels) -- it is safe by construction.

   Generate the HTML report following the ti-html-report skill instructions, then write `{RUN_DIR}/report.html`.

   **Browser Open:** After report.html is written, use AskUserQuestion: "Open the report in your browser?"

   Options:

   - "Yes" -- Run Bash: `open {RUN_DIR}/report.html`
   - "No" -- Continue to the next step.

2. `{RUN_DIR}/scorecard.md` -- Write the full FINAL_REPORT content.

These are the last progressive writes. All output artifacts are now on disk.

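The four escaping substitutions in step 1 can be sketched as a small helper; the replacement order matters, since `&` must be handled first to avoid double-escaping the entities produced by the later substitutions (the helper name is illustrative):

```javascript
// Escape user-provided text for safe interpolation into the HTML template.
// '&' is replaced first so later entities are not themselves re-escaped.
function escapeHtml(s) {
  return s
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');
}
```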
---

## Stage 6: Confirm

All output artifacts were written progressively during Stages 1-5. This stage displays a confirmation summary.

### Step 1: Display confirmation

Count the files in the run directory and display:

If HTML_REQUESTED is true and RESEARCH_AVAILABLE is true:

> idea.md + research-brief.md + [13 or 14] dimension files + scorecard.md + report.html saved to `~/.tweakidea/runs/{RUN_TIMESTAMP}/`
> HTML report: `~/.tweakidea/runs/{RUN_TIMESTAMP}/report.html`

If HTML_REQUESTED is true and RESEARCH_AVAILABLE is false:

> idea.md + [13 or 14] dimension files + scorecard.md + report.html saved to `~/.tweakidea/runs/{RUN_TIMESTAMP}/`
> HTML report: `~/.tweakidea/runs/{RUN_TIMESTAMP}/report.html`

If HTML_REQUESTED is false and RESEARCH_AVAILABLE is true:

> idea.md + research-brief.md + [13 or 14] dimension files + scorecard.md saved to `~/.tweakidea/runs/{RUN_TIMESTAMP}/`

If HTML_REQUESTED is false and RESEARCH_AVAILABLE is false:

> idea.md + [13 or 14] dimension files + scorecard.md saved to `~/.tweakidea/runs/{RUN_TIMESTAMP}/`

Use "13 dimension files" when FOUNDER_SESSION_SKIPPED is true, and "14 dimension files" otherwise.

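The four confirmation variants above reduce to one conditional file list, which a hypothetical helper (names assumed, not part of the package) makes explicit:

```javascript
// Build the confirmation line from the three flags and the run directory.
function confirmationLine({ html, research, skipped, runDir }) {
  const parts = ['idea.md'];
  if (research) parts.push('research-brief.md');        // only when research ran
  parts.push(`${skipped ? 13 : 14} dimension files`, 'scorecard.md');
  if (html) parts.push('report.html');                  // only when HTML requested
  return `${parts.join(' + ')} saved to \`${runDir}\``;
}
```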
If any progressive write failed earlier in the pipeline, add:

> Note: Some files could not be saved. The evaluation report above is your complete result.

Do NOT use the Write tool in this stage. All files are already written.

---

## Report Output

Display the FINAL_REPORT returned by the merge agent directly inline in the chat. The report IS the output -- do not wrap it in additional formatting, do not add headers above it, and do not summarize it. Just display the report as-is.

After displaying the inline report, execute Stage 6 (Confirm) to display the run directory summary. All files were already written progressively during Stages 1-5.

**File output (optional, only on explicit request):** The run directory is written automatically by Stage 6. If the founder additionally asks to save the report to a custom location (e.g., "save this to reports/" or "export to evaluation.md"), use the Write tool to save FINAL_REPORT to the user-specified path.

After the run directory confirmation, add the closing line:

> Run `/tweak:evaluate` again with a modified idea to re-evaluate, or ask follow-up questions about any dimension.
package/package.json
ADDED

{
  "name": "tweakidea",
  "version": "0.1.0",
  "description": "14-dimension startup idea evaluator for Claude Code",
  "bin": {
    "tweakidea": "bin/install.js"
  },
  "files": [
    "bin",
    "agents",
    "commands",
    "skills"
  ],
  "keywords": [
    "claude",
    "claude-code",
    "ai",
    "startup",
    "evaluation",
    "agent"
  ],
  "author": "Aleksandr Sarantsev",
  "license": "MIT",
  "repository": {
    "type": "git",
    "url": "git+https://github.com/eph5xx/tweakidea.git"
  },
  "engines": {
    "node": ">=18.0.0"
  }
}