@slopus/beer 0.1.2 → 0.1.6
- package/dist/_workflows/_index.d.ts +1 -1
- package/dist/_workflows/_index.js +7 -7
- package/dist/_workflows/bootstrap.d.ts +1 -1
- package/dist/_workflows/bootstrap.js +14 -14
- package/dist/_workflows/checkpointWorkflow.d.ts +1 -1
- package/dist/_workflows/checkpointWorkflow.js +2 -2
- package/dist/_workflows/context/context.d.ts +2 -2
- package/dist/_workflows/context/context.js +11 -11
- package/dist/_workflows/context/context.spec.js +1 -1
- package/dist/_workflows/context/utils/contextApplyConfig.d.ts +1 -1
- package/dist/_workflows/context/utils/contextApplyConfig.js +1 -1
- package/dist/_workflows/context/utils/contextApplyConfig.spec.js +1 -1
- package/dist/_workflows/context/utils/contextAskGithubRepo.d.ts +1 -1
- package/dist/_workflows/context/utils/contextAskGithubRepo.js +3 -3
- package/dist/_workflows/context/utils/contextAskGithubRepo.spec.js +1 -1
- package/dist/_workflows/context/utils/contextGitignoreEnsure.spec.js +1 -1
- package/dist/_workflows/context/utils/progressMultilineStart.spec.js +1 -1
- package/dist/_workflows/planWorkflow.d.ts +1 -1
- package/dist/_workflows/planWorkflow.js +9 -9
- package/dist/_workflows/prompts/PROMPT_AGENTS_MD.md +168 -0
- package/dist/_workflows/prompts/PROMPT_DECISIONS.md +372 -0
- package/dist/_workflows/prompts/PROMPT_PRODUCT_NAME.md +101 -0
- package/dist/_workflows/prompts/PROMPT_PRODUCT_PITCH.md +197 -0
- package/dist/_workflows/prompts/PROMPT_PRODUCT_PITCH_FINAL.md +44 -0
- package/dist/_workflows/prompts/PROMPT_PROJECT_BLUEPRINT.md +469 -0
- package/dist/_workflows/prompts/PROMPT_README.md +101 -0
- package/dist/_workflows/prompts/PROMPT_RESEARCH.md +407 -0
- package/dist/_workflows/prompts/PROMPT_RESEARCH_PROBLEMS.md +296 -0
- package/dist/_workflows/prompts/PROMPT_TECHNOLOGY_STACK.md +460 -0
- package/dist/_workflows/prompts/PROMPT_TECHNOLOGY_STACK_FINAL.md +48 -0
- package/dist/_workflows/ralphLoopWorkflow.d.ts +1 -1
- package/dist/_workflows/ralphLoopWorkflow.js +5 -5
- package/dist/_workflows/ralphWorkflow.d.ts +1 -1
- package/dist/_workflows/ralphWorkflow.js +5 -5
- package/dist/_workflows/researchWorkflow.d.ts +1 -1
- package/dist/_workflows/researchWorkflow.js +3 -3
- package/dist/_workflows/steps/generate.d.ts +2 -2
- package/dist/_workflows/steps/generate.js +3 -3
- package/dist/_workflows/steps/generateCommit.d.ts +1 -1
- package/dist/_workflows/steps/generateCommit.js +2 -2
- package/dist/_workflows/steps/generateDocument.d.ts +2 -2
- package/dist/_workflows/steps/generateDocument.js +3 -3
- package/dist/_workflows/steps/generateFrontmatter.d.ts +2 -2
- package/dist/_workflows/steps/generateFrontmatter.js +1 -1
- package/dist/_workflows/steps/generateProgressMessageResolve.d.ts +1 -1
- package/dist/_workflows/steps/generateReadme.d.ts +1 -1
- package/dist/_workflows/steps/generateReadme.js +2 -2
- package/dist/_workflows/steps/ralphExecute.d.ts +1 -1
- package/dist/_workflows/steps/ralphExecute.js +2 -2
- package/dist/_workflows/steps/ralphLoopExecute.d.ts +1 -1
- package/dist/_workflows/steps/ralphLoopExecute.js +2 -2
- package/dist/_workflows/steps/ralphLoopPlanGenerate.d.ts +1 -1
- package/dist/_workflows/steps/ralphLoopPlanGenerate.js +3 -3
- package/dist/_workflows/steps/ralphLoopPlanPathResolve.d.ts +1 -1
- package/dist/_workflows/steps/ralphLoopReviewRound.d.ts +1 -1
- package/dist/_workflows/steps/ralphLoopReviewRound.js +2 -2
- package/dist/_workflows/steps/ralphPlan.d.ts +1 -1
- package/dist/_workflows/steps/ralphPlan.js +6 -6
- package/dist/_workflows/steps/ralphPlanPathResolve.d.ts +1 -1
- package/dist/_workflows/steps/ralphReview.d.ts +1 -1
- package/dist/_workflows/steps/ralphReview.js +4 -4
- package/dist/main.js +5 -5
- package/dist/modules/ai/aiOutputExtract.spec.js +1 -1
- package/dist/modules/ai/generate.d.ts +2 -2
- package/dist/modules/ai/generate.js +5 -5
- package/dist/modules/ai/generate.spec.js +1 -1
- package/dist/modules/ai/generate.unit.spec.js +1 -1
- package/dist/modules/ai/generateEventTypes.d.ts +2 -2
- package/dist/modules/ai/generateFile.d.ts +2 -2
- package/dist/modules/ai/generateFile.js +2 -2
- package/dist/modules/ai/generateFile.spec.js +1 -1
- package/dist/modules/ai/generatePureSessionCreate.d.ts +3 -3
- package/dist/modules/ai/generatePureSessionCreate.js +1 -1
- package/dist/modules/ai/generatePureSessionCreate.spec.js +1 -1
- package/dist/modules/ai/generatePureText.d.ts +2 -2
- package/dist/modules/ai/generatePureText.js +5 -5
- package/dist/modules/ai/generatePureText.spec.js +1 -1
- package/dist/modules/ai/generateSessionCreate.d.ts +2 -2
- package/dist/modules/ai/generateSessionCreate.js +1 -1
- package/dist/modules/ai/generateSessionCreate.spec.js +1 -1
- package/dist/modules/ai/generateText.d.ts +2 -2
- package/dist/modules/ai/generateText.js +1 -1
- package/dist/modules/ai/generateText.spec.js +1 -1
- package/dist/modules/ai/generateVerify.spec.js +1 -1
- package/dist/modules/ai/providerGenerate.d.ts +3 -3
- package/dist/modules/ai/providerGenerate.js +2 -2
- package/dist/modules/ai/providerGenerate.spec.js +2 -2
- package/dist/modules/ai/providerGenerate.unit.spec.js +1 -1
- package/dist/modules/ai/providers/commandJSONL.d.ts +1 -1
- package/dist/modules/ai/providers/commandJSONL.js +2 -2
- package/dist/modules/ai/providers/commandJSONL.spec.js +1 -1
- package/dist/modules/ai/providers/piProviderGenerate.d.ts +1 -1
- package/dist/modules/ai/providers/piProviderGenerate.js +1 -1
- package/dist/modules/ai/providers/piProviderGenerate.spec.js +1 -1
- package/dist/modules/beer/beerOriginalPathResolve.spec.js +1 -1
- package/dist/modules/beer/beerSettingsRead.d.ts +1 -1
- package/dist/modules/beer/beerSettingsRead.spec.js +1 -1
- package/dist/modules/beer/beerSettingsTypes.d.ts +2 -2
- package/dist/modules/beer/beerSettingsWrite.d.ts +1 -1
- package/dist/modules/git/gitPush.js +1 -1
- package/dist/modules/git/gitRemoteEnsure.js +1 -1
- package/dist/modules/git/gitRepoCheckout.js +1 -1
- package/dist/modules/git/gitRepoCheckout.spec.js +2 -2
- package/dist/modules/git/gitRepoEnsure.js +1 -1
- package/dist/modules/git/gitRepoEnsure.spec.js +1 -1
- package/dist/modules/git/gitStageAndCommit.js +1 -1
- package/dist/modules/git/gitignoreEnsure.spec.js +1 -1
- package/dist/modules/github/githubCliEnsure.js +2 -2
- package/dist/modules/github/githubOwnerChoicesGet.js +1 -1
- package/dist/modules/github/githubRepoCreate.js +1 -1
- package/dist/modules/github/githubRepoExists.js +1 -1
- package/dist/modules/github/githubRepoNameResolve.d.ts +1 -1
- package/dist/modules/github/githubRepoNameResolve.js +1 -1
- package/dist/modules/github/githubRepoNameResolve.spec.js +1 -1
- package/dist/modules/github/githubRepoParse.d.ts +1 -1
- package/dist/modules/github/githubRepoParse.spec.js +1 -1
- package/dist/modules/github/githubRepoStatusGet.d.ts +1 -1
- package/dist/modules/github/githubRepoStatusGet.js +2 -2
- package/dist/modules/github/githubViewerGet.js +2 -2
- package/dist/modules/plan/planPromptChildren.d.ts +2 -2
- package/dist/modules/plan/planPromptChildren.spec.js +1 -1
- package/dist/modules/plan/planPromptDocument.d.ts +2 -2
- package/dist/modules/plan/planPromptDocument.spec.js +1 -1
- package/dist/modules/plan/planPromptPicker.d.ts +1 -1
- package/dist/modules/plan/planPromptPicker.js +1 -1
- package/dist/modules/plan/planPromptPicker.spec.js +1 -1
- package/dist/modules/plan/planPromptRoot.d.ts +1 -1
- package/dist/modules/plan/planPromptRoot.spec.js +1 -1
- package/dist/modules/plan/planSourceDocumentsResolve.d.ts +1 -1
- package/dist/modules/plan/planSourceDocumentsResolve.spec.js +1 -1
- package/dist/modules/providers/providerDetect.d.ts +1 -1
- package/dist/modules/providers/providerDetect.js +2 -2
- package/dist/modules/providers/providerDetect.spec.js +1 -1
- package/dist/modules/providers/providerModelSelect.d.ts +1 -1
- package/dist/modules/providers/providerModelSelect.spec.js +1 -1
- package/dist/modules/providers/providerModelsGet.d.ts +1 -1
- package/dist/modules/providers/providerModelsGet.js +1 -1
- package/dist/modules/providers/providerModelsGet.spec.js +1 -1
- package/dist/modules/providers/providerPriorityList.d.ts +1 -1
- package/dist/modules/providers/providerPriorityList.spec.js +1 -1
- package/dist/modules/sandbox/sandboxInferenceFilesystemPolicy.d.ts +1 -1
- package/dist/modules/sandbox/sandboxInferenceFilesystemPolicy.js +1 -1
- package/dist/modules/sandbox/sandboxInferenceFilesystemPolicy.spec.js +1 -1
- package/dist/modules/sandbox/sandboxInferenceGet.d.ts +2 -2
- package/dist/modules/sandbox/sandboxInferenceGet.js +1 -1
- package/dist/modules/sandbox/sandboxPassthrough.d.ts +1 -1
- package/dist/modules/sandbox/sandboxPassthrough.spec.js +1 -1
- package/dist/modules/tree/treeChildrenParse.d.ts +1 -1
- package/dist/modules/tree/treeChildrenRead.d.ts +1 -1
- package/dist/modules/tree/treeChildrenRead.spec.js +1 -1
- package/dist/modules/tree/treeChildrenWrite.d.ts +1 -1
- package/dist/modules/tree/treeChildrenWrite.spec.js +1 -1
- package/dist/modules/tree/treeInferenceProgressRun.d.ts +1 -1
- package/dist/modules/tree/treeInferenceProgressRun.js +1 -1
- package/dist/modules/tree/treeInferenceProgressRun.spec.js +1 -1
- package/dist/modules/tree/treeLeafPick.d.ts +1 -1
- package/dist/modules/tree/treeLeafPick.js +8 -8
- package/dist/modules/tree/treeLeafPick.spec.js +1 -1
- package/dist/modules/tree/treeNodeExpand.d.ts +1 -1
- package/dist/modules/tree/treeNodeExpand.js +8 -8
- package/dist/modules/tree/treeNodeExpand.spec.js +3 -3
- package/dist/modules/tree/treeNodePathResolve.d.ts +1 -1
- package/dist/modules/tree/treeNodeRead.d.ts +1 -1
- package/dist/modules/tree/treeNodeRead.spec.js +1 -1
- package/dist/modules/tree/treeNodeSlug.spec.js +1 -1
- package/dist/modules/tree/treeNodeWrite.d.ts +1 -1
- package/dist/modules/tree/treeNodeWrite.spec.js +1 -1
- package/dist/modules/tree/treeSearchRun.d.ts +1 -1
- package/dist/modules/tree/treeSearchRun.js +12 -12
- package/dist/modules/tree/treeSearchRun.spec.js +3 -3
- package/dist/modules/tree/treeSearchTypes.d.ts +1 -1
- package/dist/modules/tree/treeStateLeaves.d.ts +1 -1
- package/dist/modules/tree/treeStateLeaves.spec.js +1 -1
- package/dist/modules/tree/treeStateRead.d.ts +1 -1
- package/dist/modules/tree/treeStateRead.js +2 -2
- package/dist/modules/tree/treeStateRead.spec.js +1 -1
- package/dist/modules/tree/treeStateRender.d.ts +1 -1
- package/dist/modules/tree/treeStateRender.spec.js +1 -1
- package/dist/modules/util/asyncLock.spec.js +1 -1
- package/dist/modules/util/commandRun.d.ts +1 -1
- package/dist/modules/util/commandRun.js +2 -2
- package/dist/modules/util/commandRun.spec.js +1 -1
- package/dist/modules/util/pathLock.js +2 -2
- package/dist/modules/util/pathLock.spec.js +1 -1
- package/dist/modules/util/pathLockOverlap.spec.js +1 -1
- package/dist/release/releaseRun.js +3 -3
- package/dist/release/releaseVersionPrompt.js +3 -3
- package/dist/text/text.d.ts +2 -2
- package/dist/text/text.js +1 -1
- package/dist/text/text.spec.js +1 -1
- package/dist/text/textGenBuild.js +1 -1
- package/dist/text/textGenGenerate.spec.js +1 -1
- package/dist/types.d.ts +9 -9
- package/dist/types.js +1 -1
- package/package.json +3 -2
+++ package/dist/_workflows/prompts/PROMPT_DECISIONS.md
@@ -0,0 +1,372 @@

You are a Staff Engineer producing a comprehensive Key Decisions Document for a software project. Your goal is to extract, catalog, and explain every significant decision visible in the project — technology choices, architectural patterns, conventions, trade-offs, and constraints — so that a new contributor or downstream consumer can understand not just *what* the project does, but *why it is built the way it is*.

## Context

- **Output File Path**: {outputPath}
- **Original source repository:** {sourceFullName} (Use a `gh` tool to look into issues)
- **Local checkout path:** {originalCheckoutPath}

You have read-only access to the local checkout. Do not modify anything.

Two research documents have already been generated for this project. Read them before starting:

- **Research Summary**: {researchPath} — structured analysis of the project's identity, architecture, dependencies, development lifecycle, conventions, and hidden knowledge.
- **Unresolved Problems**: {unresolvedProblemsPath} — catalog of open questions, risks, contradictions, and gaps found in the codebase.

## Extraction methodology

You have two rich input documents. Use them as your primary source. Supplement with targeted reads of the local checkout when you need to verify a claim, resolve ambiguity, or fill gaps the research documents left open. Do not re-do the full codebase scan — the research documents already did that.

### Phase 1: Decision extraction from research

1. **Mine the research summary** — read every section and extract every choice that represents a decision:
   - Each dependency listed is a decision (why this library, not another?)
   - Each config value documented is a decision (why this setting?)
   - Each pattern described is a decision (why this pattern, not alternatives?)
   - Each convention noted is a decision (why this naming, structure, or style?)
   - Each absence noted is a decision (why NOT include this?)

2. **Mine the unresolved problems** — read every question and extract the decisions that created those problems:
   - Unresolved questions often point to decisions that were made implicitly or inconsistently
   - Contradictions reveal competing decisions
   - Gaps reveal decisions not yet made
   - Risks reveal consequences of existing decisions

3. **Targeted verification** — for high-impact decisions where the research documents lack specificity, read the relevant source files directly:
   - Configuration files (tsconfig, lint configs, build configs) — each key is a micro-decision
   - Package manifests — version constraints, engine fields, exports strategy
   - Entry points — how the project is invoked and what it exposes
   - Key source files referenced in the research — verify patterns and extract rationale from comments

### Phase 2: Decision classification

Classify every discovered decision into one of these categories:

- **Language & Runtime** — programming language, runtime version, module system
- **Framework & Libraries** — framework choice, key library selections, why-this-over-that
- **Build & Tooling** — compiler, bundler, linter, formatter, test runner, package manager
- **Architecture & Patterns** — architectural style, design patterns, module boundaries
- **File Organization** — directory structure, naming conventions, file-per-function vs grouped
- **Type System** — type strictness, shared types strategy, type definition patterns
- **Error Handling** — error propagation strategy, custom error types, failure modes
- **Testing Strategy** — framework, file placement, test types, coverage approach
- **API Design** — public API surface, versioning, backwards compatibility
- **Data & State** — data flow, state management, persistence, caching
- **Async & Concurrency** — async patterns, concurrency primitives, parallelism approach
- **Security & Secrets** — auth approach, input validation, secrets management
- **CI/CD & Release** — pipeline structure, release process, versioning strategy
- **Developer Experience** — local dev workflow, debugging, onboarding
- **Coding Conventions** — naming rules, comment style, import ordering, code limits
- **Operational** — logging, monitoring, deployment, configuration management

### Phase 3: Decision analysis

For each decision, determine:

1. **What was decided** — the concrete choice made
2. **Evidence** — where in the codebase this decision is visible (file paths, config keys, code patterns)
3. **Alternatives rejected** — what the obvious alternatives were (only state what is inferable from context; do not fabricate)
4. **Rationale** — why this choice was likely made (from docs, comments, or strong inference from context)
5. **Consequences** — what this decision enables or constrains
6. **Strength of commitment** — how deeply embedded this decision is:
   - **Deep** — changing it would require rewriting large parts of the codebase
   - **Moderate** — changing it would require coordinated updates across multiple files
   - **Shallow** — changing it is a config tweak or small refactor

### Phase 4: Cross-reference with problems

For each decision, check the unresolved problems document:

- Does this decision appear as a source of unresolved questions? If so, note the tension.
- Does this decision have contradictions flagged? If so, include them.
- Is this decision identified as a risk? If so, note the risk level.

## Output format

Produce the document as a single markdown file with the following structure. Every section is required. If a section does not apply, write "Not applicable" with a brief reason.

```
# Key Decisions: {project name}

## Executive summary

{5-10 bullet points capturing the most consequential decisions in this project. Each bullet should name the decision and its primary rationale in one sentence.}

## 1. Language & runtime

### Primary language
{Language, version constraints, module system (ESM/CJS/both). Evidence: file paths.}

### Runtime
{Runtime(s), version requirements, platform targets. Evidence: engine fields, CI matrix, Dockerfiles.}

### Module system
{ESM, CJS, or dual. How imports/exports are structured. Evidence: tsconfig, package.json type field.}

## 2. Framework & libraries

### Core framework
{Framework choice and version, or "no framework" if applicable. Why this framework. Evidence.}

### Key library decisions
{For each significant dependency, document as a subsection:}

#### {Library name}
- **Purpose**: {what it does in this project}
- **Version**: {pinned version or range}
- **Alternatives considered**: {if inferable}
- **Evidence**: {package.json path, import locations}

## 3. Build & tooling

### Package manager
{npm, yarn, pnpm, bun — and version. Evidence: lock file, packageManager field.}

### Build system
{Compiler/bundler/build tool. Configuration. Output format. Evidence.}

### Linting & formatting
{Tools, configuration style, custom rules. Evidence: config file paths.}

### Type checking
{TypeScript strictness level, key compiler options. Evidence: tsconfig path.}

### Test runner
{Framework, configuration, assertion library. Evidence.}

## 4. Architecture & patterns

### Architectural style
{Monolith/microservices/serverless/library/CLI tool/etc. Layer structure if any. Evidence.}

### Design patterns
{Recurring patterns with specific file references:}

#### {Pattern name}
- **Description**: {how the pattern is applied}
- **Examples**: {2-3 file paths demonstrating the pattern}
- **Rationale**: {why this pattern, if inferable}

### Module boundaries
{How the codebase is divided into modules. What rules govern cross-module imports. Evidence.}

### Dependency flow
{Which modules depend on which. Direction of dependencies. Any dependency inversion. Evidence.}

## 5. File organization

### Directory structure
{Top-level layout with purpose of each directory.}

### File naming convention
{Pattern: camelCase, kebab-case, PascalCase, prefix notation, etc. Evidence: actual file names.}

### File granularity
{One function per file, one class per file, grouped by feature, etc. Evidence.}

### Import conventions
{Path aliases, barrel files, relative vs absolute imports. Evidence: tsconfig paths, import statements.}

## 6. Type system

### Strictness level
{TypeScript strict mode settings. Evidence: tsconfig.}

### Shared types strategy
{How types are shared across modules. Central type files, per-module types, or inline. Evidence.}

### Type patterns
{Discriminated unions, branded types, utility types, generic patterns used. Evidence: file paths.}

## 7. Error handling

### Error propagation strategy
{Exceptions, Result types, error codes, or mixed. Evidence: code patterns.}

### Custom error types
{Any custom error classes or error factories. Evidence: file paths.}

### Failure philosophy
{Fail fast, graceful degradation, retry with backoff, or context-dependent. Evidence.}

## 8. Testing strategy

### Test framework & tools
{Framework, assertion library, mocking approach. Evidence: config and test files.}

### Test file placement
{Co-located, separate directory, or mixed. Naming convention. Evidence.}

### Test types present
{Unit, integration, e2e, snapshot, property-based — which exist and which are absent. Evidence.}

### Test coverage
{Coverage tooling, thresholds, or absence of coverage tracking. Evidence.}

### What is NOT tested
{Notable gaps — untested modules, untested error paths, areas explicitly excluded. Evidence.}

## 9. API design

### Public API surface
{Exports, CLI commands, HTTP endpoints, library interface — whatever applies. Evidence.}

### Versioning & compatibility
{Semver adherence, breaking change policy, deprecation approach. Evidence.}

### Input/output contracts
{Validation approach, schema definitions, type safety at boundaries. Evidence.}

## 10. Data & state

### Data flow
{How data enters, transforms, and exits the system. Evidence.}

### State management
{Where state lives, how it is mutated, persistence mechanism. Evidence.}

### Serialization
{JSON, protobuf, msgpack, custom — and how serialization boundaries are handled. Evidence.}

## 11. Async & concurrency

### Async model
{Promises, async/await, callbacks, streams, workers, or mixed. Evidence.}

### Concurrency primitives
{Locks, queues, semaphores, worker pools — if any. Evidence: file paths.}

### Parallelism strategy
{How parallel work is structured. Evidence.}

## 12. Security & secrets

### Authentication
{Auth mechanism, if any. Evidence.}

### Input validation
{Validation approach at system boundaries. Evidence.}

### Secrets management
{How secrets are stored and accessed. Evidence: .env patterns, config files.}

### Supply chain
{Dependency auditing, lockfile integrity, pinning strategy. Evidence.}

## 13. CI/CD & release

### Pipeline structure
{CI stages, triggers, matrix builds. Evidence: workflow files.}

### Release process
{Manual, automated, semantic-release, changesets, etc. Evidence.}

### Versioning strategy
{How versions are determined and bumped. Evidence.}

### Artifact distribution
{npm, Docker, binary, CDN — how the project is distributed. Evidence.}

## 14. Developer experience

### Local development
{Dev server, hot reload, watch mode — how developers run the project locally. Evidence: scripts.}

### Onboarding
{README quality, setup steps, CONTRIBUTING guide. Evidence.}

### Debugging
{Debug configurations, source maps, verbose modes. Evidence.}

## 15. Coding conventions

### Naming rules
{Function naming, variable naming, file naming, constant naming. Evidence: code patterns.}

### Code organization rules
{Import ordering, function ordering within files, export placement. Evidence.}

### Comment style
{When and how comments are used. Doc comment format. Evidence.}

### Code limits
{Line length, file length, function length — if any limits are visible or configured. Evidence.}

## 16. Operational concerns

### Logging
{Logging library, structured vs unstructured, log levels. Evidence.}

### Configuration management
{Environment variables, config files, defaults. Evidence.}

### Deployment
{Deployment target, containerization, infrastructure. Evidence.}

### Monitoring & observability
{Metrics, tracing, health checks — if any. Evidence.}

## 17. Decision tensions & trade-offs

{Identify decisions that create tension with each other or with common practices. Cross-reference the unresolved problems document — many tensions surface there as open questions.}

### {Tension title}
- **Decision A**: {first decision}
- **Decision B**: {second decision or common practice}
- **Tension**: {how they conflict or create friction}
- **Resolution**: {how the project handles it, if visible}
- **Related problems**: {IDs from unresolved problems document, if any}

## 18. Decision dependency graph

{Identify which decisions constrain or enable other decisions:}

| Decision | Enables | Constrains |
|----------|---------|------------|
| {decision} | {what it makes possible} | {what it limits} |

## 19. Absent decisions

{Decisions that most projects of this type make explicitly but this project has not. Cross-reference the unresolved problems document — many absent decisions surface there as gaps.}

- **{Topic}**: No explicit decision found. Implicit default: {what the code does in practice}. Evidence: {absence of config, docs, or patterns}.

## 20. Decisions at risk

{Decisions flagged as problematic in the unresolved problems document. For each:}

### {Decision}
- **Current choice**: {what was decided}
- **Risk identified**: {from unresolved problems document}
- **Severity**: {Critical / High / Medium / Low}
- **Decision stability**: {Is this decision likely to change? What would trigger a change?}

## Summary

### Decision philosophy
{In 3-5 sentences, characterize the overall decision-making philosophy of this project. Is it convention-over-configuration? Explicit-over-implicit? Minimal? Maximal? Opinionated? Flexible?}

### Top 10 decisions to understand first
{Ordered list of the 10 most important decisions a new contributor must understand to be productive. Each with a one-line explanation.}

### Decision maturity
{Assessment of how well decisions are documented, enforced, and consistent across the codebase. Note areas where decisions are inconsistently applied.}
```

## Rules

- **Use the research documents as primary source.** Do not repeat the full codebase scan. Read source files only to verify, clarify, or fill gaps.
- **Evidence is mandatory.** Every decision must cite at least one concrete file path, config key, or code pattern. Decisions without evidence are not decisions — they are speculation.
- **Distinguish explicit from implicit.** An explicit decision has documentation, config, or clear intentional code. An implicit decision is a pattern that emerged without documentation. Label each.
- **Distinguish decision from accident.** Some patterns are deliberate choices; others are historical artifacts or defaults never reconsidered. Note the difference when visible.
- **Be specific.** Include file paths, version numbers, config values, and function names. Vague descriptions like "uses modern patterns" are useless.
- **Be honest.** If rationale is not visible in the codebase, say "Rationale not documented" rather than inventing one.
- **No value judgments.** Report what the code does, not whether it is good or bad. "Uses X" not "Wisely uses X" or "Unfortunately uses X."
- **Cover everything.** A missing section is worse than a section that says "Not applicable." Check every category.
- **Quantify when possible.** "47 files follow this pattern, 3 do not" is more useful than "most files follow this pattern."
- **Follow the chain.** When you find a decision, trace its consequences. A TypeScript strict mode decision affects type definitions, error handling, and testing patterns.
- **Cross-reference the problems document.** Decisions that generated unresolved problems deserve extra attention. Note the connection explicitly.
- **Note contradictions.** If two parts of the codebase make contradictory decisions, flag it explicitly.

## Output

Output only raw markdown. No preamble, no explanation, no commentary outside the document structure.
@@ -0,0 +1,101 @@
|
|
|
1
|
+
You are naming a software product. This is harder than it sounds — a good name is memorable, short, available, and captures the essence of what the tool does without being generic. And in this case, it should make someone smile.

## Context

- **Output File Path**: {outputPath}
- **Original source repository:** {sourceFullName}
- **Local checkout path:** {originalCheckoutPath}

**Input documents — read all before starting:**

- **Product Pitch**: {productPitchPath} — the complete product description. Understand what this tool does, who it's for, and what makes it distinctive.
- **Research Summary**: {researchPath} — technical analysis of the original project.
- **Key Decisions**: {decisionsPath} — architectural and design decisions.

## Research before naming

Before you pick a name, you MUST research the competitive landscape. Use the `gh` tool and web search to:

1. **Find existing tools in the same space.** AI-powered code generation, project scaffolding, AI workflow orchestration, repository bootstrap tools. List every competitor you find.
2. **Catalog their names.** What naming patterns do they use? What's overused? (hint: everything with "ai", "co", "pilot", "gen", "auto" in the name is exhausted)
3. **Check npm for collisions.** Before finalizing, verify the name isn't already taken on npm. Use `npm view {name}` for an exact-match check, or check `https://www.npmjs.com/package/{name}`.
4. **Check GitHub for collisions.** Search for repos with the candidate name.
5. **Check domain availability.** For your top candidates, check if `{name}.dev`, `{name}.io`, or `{name}.com` are available and affordable (under $5k). Use WHOIS lookups or domain search tools. A name with an available domain under $5k is strongly preferred over one without.

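Step 3 can be scripted. A minimal sketch, assuming Python and network access: the URL is npm's public registry metadata endpoint, which returns 404 for names that are free; `npm_taken` and the candidate names in the usage comment are illustrative, not part of the workflow's API.

```python
# Sketch of an exact-match npm collision check against the public registry.
# A 200 response means the name is taken; a 404 means it is available.
import urllib.error
import urllib.request

REGISTRY = "https://registry.npmjs.org/{name}"

def npm_taken(name: str) -> bool:
    """True if a package with this exact name already exists on npm."""
    try:
        with urllib.request.urlopen(REGISTRY.format(name=name)) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False  # name is available
        raise  # rate limit or outage: fail loudly, do not guess

# usage (requires network; candidates are hypothetical):
# for candidate in ("left-pad", "surely-nobody-took-this-9001"):
#     print(candidate, "taken" if npm_taken(candidate) else "available")
```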
Include a "## Competitive landscape" section in your output showing what you found.

## The vibe

The name must be **goofy**. Not corporate-goofy like "SageMaker" or startup-goofy like "Humane." Actually funny. The kind of name that makes a developer do a double-take, chuckle, and then remember it forever.

Think about names like:
- **yeet** — you wouldn't forget a CLI tool called yeet
- **bonk** — it just sounds fun to type in a terminal
- **lmao** — irreverent but memorable
- **oops** — self-aware humor
- **bruh** — internet culture meets developer tooling
- **yolo** — the attitude of running AI code generation on your repo

The name should feel like something you'd say right before doing something bold with your codebase. It should match the product's personality: confident, slightly unhinged, technically serious underneath the humor. The kind of name where the README makes you laugh but the tool makes you productive.

**Do NOT pick any of the example names above.** They're just illustrations of the vibe. Find your own.

## What to produce

Generate a YAML frontmatter with the chosen product name and rationale, followed by a brief markdown body.

```
---
productName: "{the chosen name}"
---

# Naming: {the chosen name}

## Competitive landscape

| Tool | What it does | Name style |
|------|-------------|------------|
| {tool} | {description} | {serious/playful/generic/etc} |
(list all competitors found)

{1-2 sentences on naming patterns in this space and what's overused.}

## Why this name

{2-3 sentences. Why this name is funny, memorable, and fits. What it evokes. Why a developer would grin the first time they type it in a terminal.}

## Considered alternatives

| Name | Why considered | Why rejected |
|------|---------------|--------------|
| {name} | {reason} | {reason} |
(5-10 alternatives, at least half should be goofy)

## Name properties

- **Memorable:** {yes/no and why}
- **Goofy factor:** {what makes it funny}
- **Short:** {character count, syllable count}
- **Domain-friendly:** {would work as productname.dev or similar}
- **CLI-friendly:** {works as a terminal command — short, no special chars, fun to type}
- **Searchable:** {not a common English word that pollutes search results}
- **npm available:** {yes/no — did you check?}
- **Domain available:** {which domains checked, which are available, estimated price}
```

## Naming rules

- **Max 12 characters.** Shorter is better. 4-8 is ideal.
- **Must be goofy.** If it doesn't make someone smile, try again. The humor can be subtle or obvious, but it must be there.
- **Lowercase-friendly.** Must work as a CLI command and npm package name.
- **No generic words.** "devtool", "aihelper", "codebot" — these are descriptions, not names.
- **No forced acronyms.** If it doesn't spell naturally, don't force it.
- **No "AI" in the name.** Every tool has AI now. It's not a differentiator, it's a commodity. Putting "ai" in your name is like putting "electric" in a car name in 2026.
- **Evocative over descriptive.** The name should hint at what the tool feels like to use, not literally describe its function.
- **Actually check for collisions.** Search npm and GitHub. A name that's already taken is not an option no matter how good it is.
- **Domain matters.** Strongly prefer names where `{name}.dev`, `{name}.io`, or `{name}.com` is available for under $5k. A great name without a domain is worse than a good name with one.
- **Consider the terminal experience.** `$ {name} bootstrap` — does it feel good to type? Does it make the developer smile every time?

## Output

Output only raw markdown with YAML frontmatter. No preamble, no explanation.
@@ -0,0 +1,197 @@
You are a writer with the sensibility of Paul Graham, the technical depth of a Staff Engineer, and a clear-eyed view of what makes developer tools succeed or fail. Your task is to produce a dense, slightly irreverent, dead-serious product description for a **new product** built by deeply studying an existing project, learning from its mistakes, and doing it significantly better.

**Critical constraints:**
- The product is **unnamed**. Use "{Project Name}" as placeholder. Do not pick, suggest, or reference the original project's name.
- The original product is **not perfect**. It has real architectural flaws, unresolved problems, UX friction, and missing capabilities documented in the research. Your pitch must honestly address what was wrong and how we fix it.
- **Density over length.** Every sentence must carry information. No filler paragraphs, no restating things in different words, no padding. Target 150-250 lines of output. If a section can be a table, make it a table. If a point fits in one sentence, don't use three.

## Context

- **Output File Path**: {outputPath}
- **Original source repository:** {sourceFullName} (use the `gh` tool to look into issues)
- **Local checkout path:** {originalCheckoutPath}

You have read-only access to the local checkout of the **original project** — the one we studied. We are not forking it. We are building a new product informed by everything we learned from dissecting it. The original is our textbook, not our codebase.

Three research documents have been generated by analyzing the original. Read them before starting:

- **Research Summary**: {researchPath} — architecture, dependencies, conventions, hidden knowledge.
- **Unresolved Problems**: {unresolvedProblemsPath} — open questions, risks, contradictions, gaps. Each one is something our product can get right from day one.
- **Key Decisions**: {decisionsPath} — every significant decision. Some were brilliant (keep). Some were mistakes (reverse). Some were tradeoffs that no longer apply (drop).

## The core premise

The original product attempted something real and partially succeeded. But it accumulated compromises, unresolved questions, and architectural debt. We read every file, cataloged every decision, every unresolved problem. Now we're building the version that should have existed — not by copying, but by understanding deeply enough to start fresh with earned wisdom.

The research documents are our competitive intelligence. The unresolved problems are our feature roadmap. The key decisions tell us which bets to keep and which to reverse.

## Tone

- **Confident, not grandiose.** Say what it does. No "revolutionary" or "paradigm-shifting."
- **Slightly funny, never forced.** Humor from honesty, not from effort.
- **Technically precise.** "3-round AI review cycles on implementation plans" — good. "Leverages AI to supercharge workflows" — banned.
- **Dense.** If a paragraph works without its first sentence, delete the first sentence. If three sentences say what one could, keep one.

## Research methodology

### Phase 1: Understand the original and its flaws

1. **Read all three research documents.** Extract: what it does, what works, what's broken, what was never attempted.

2. **Read the key decisions critically.** For each: right call or wrong? Value we share or compromise we avoid? Decisions that created debt become our differentiators.

3. **Read unresolved problems as our roadmap.** Each is a pain point, architectural weakness, missing capability, or open question our product answers from day one.

4. **Verify against the original codebase.** Read entry points, workflows, text catalog, config. Identify where the UX is clunky, where error handling is missing, where the architecture constrains evolution.

5. **Read the original's GitHub issues.** Open issues = unaddressed user complaints. Closed issues = problems that took too long to fix. Both inform what we build differently.
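Issue triage like this can be partly mechanized. A sketch, assuming the GitHub CLI (`gh`) is installed and authenticated; the repo slug in the usage comment is a placeholder, and `issue_titles` is an illustrative helper, not a tool this workflow provides:

```python
# Pull issue titles from a repo via the GitHub CLI for manual triage.
import json
import subprocess

def parse_titles(raw_json: str) -> list[str]:
    """Extract titles from `gh ... --json title` output."""
    return [item["title"] for item in json.loads(raw_json)]

def issue_titles(repo: str, state: str, limit: int = 100) -> list[str]:
    out = subprocess.run(
        ["gh", "issue", "list", "--repo", repo, "--state", state,
         "--limit", str(limit), "--json", "title"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_titles(out)

# open issues = unaddressed complaints; closed issues = how long fixes took
# print(issue_titles("owner/original-repo", "open"))
```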

### Phase 2: Feature extraction

6. **Map every capability from the original.** Input, output, experience, failure modes. For each: where does it fall short?

7. **Identify "aha" moments worth keeping.** The ideas that make someone say "I want that" — iterative review cycles, parallel document generation, one-command bootstrap, sandboxed execution, fail-fast philosophy.

8. **Identify what the original got wrong.** Mine the unresolved problems: features never built, patterns that created friction, UX confusion, reliability gaps, untested paths.

9. **Decide what stays absent.** No fallbacks (keep — it's a feature). No GUI (keep — CLI-first). No plugin system (keep for now — simplicity).

### Phase 3: Assemble the narrative

10. **The story has two chapters.** Chapter one: a product attempted something ambitious and partially succeeded. Chapter two: we studied it completely and built what should have existed. The story is "we did the homework, now we're building with conviction."

11. **Explain why these features belong together.** The product isn't a random collection of AI wrappers. Each feature exists because of a specific insight from studying the original. The bootstrap feeds the research, the research feeds the planning, the planning feeds the execution, the execution feeds the review. It's a pipeline, not a toolbox.

## Output format

Produce a single markdown file **with YAML frontmatter**. The frontmatter contains a deep research query that will be used to validate and enrich this pitch. The body contains the pitch itself. Every section is required. Be dense. Be specific. No filler.

```
---
deepResearchQuery: |
  {A detailed, multi-part research query (3-8 sentences) that someone should run against
  web search, academic sources, or competitive analysis to validate and enrich the claims
  in this pitch. The query should cover: (1) competitive landscape — what similar tools exist
  and how they compare, (2) market validation — evidence that the problem described is real
  and widespread, (3) technical validation — whether the architectural choices described are
  sound by current standards, (4) user evidence — forums, discussions, or complaints that
  confirm the pain points described. Be specific to the domain of THIS product — reference
  the actual problem space, not generic software advice.}
---

# {Project Name}

{One sentence. What this is and why it exists. Memorable, specific, no preamble.}

## The Problem

{2-3 short paragraphs, max 150 words total. The frustration this addresses — be vivid and concrete. Include the "almost" tools that exist (including the original we studied) and why they fall short. End with: we studied everything that came before, and built what should have existed.}

## What It Does

{One dense paragraph, max 100 words. The honest explanation — what happens when you use it. Not a feature list. A coherent picture of the experience. Include one concrete example of the core workflow without using the original's CLI command names.}

## Why We Started Over

{One dense paragraph, max 100 words. Why this is a fresh build, not a fork. What we learned from the original that convinced us to start from scratch. The difference between a fork and a studied rebuild. Be specific about what was wrong with the original — cite the research.}

## Features

{This is the exhaustive section. Enumerate ALL features. Use this exact format:

### {Group Name}

For each feature in the group, use a **definition list** style — bold name, then 1-2 sentences max:

**{Feature Name}** — {What it does, what the user experiences, why it matters. One to two sentences only. Be specific.}

Groups (cover all of these):

### Project Setup
(Bootstrap, provider detection, repo creation, source checkout, README generation, git init, first push — the full pipeline)

### AI-Driven Development
(Planning, implementation execution, sandboxed code generation, multi-round review — the ralph-loop cycle explained as a coherent pipeline)

### Codebase Analysis
(Research generation, problem analysis, decision documentation — parallel document generation, how outputs chain into each other)

### Workflow Operations
(Checkpoint, commit generation, progress tracking — the daily-use features)

### Architecture & Internals
(Provider abstraction, fail-fast policy, text catalog, settings persistence, logging, sandboxing — explain WHY these architectural choices matter for the user)

After the feature groups, add one paragraph: **Why these features together.** Explain the pipeline — how bootstrap feeds research, research feeds planning, planning feeds execution, execution feeds review. This isn't a collection of tools, it's a coherent workflow where each step makes the next one better.}

## What the Original Got Wrong

{This is critical. Use a bullet list. Be specific and cite the research documents:

- **{Problem}** — {What was wrong, what consequence it had, what we do instead. One to two sentences.}

Cover at minimum:
- Architectural flaws that limited evolution
- Missing error handling or reliability gaps
- UX friction points
- Features that were never built despite user demand
- Testing or quality gaps
- Design decisions whose consequences became apparent over time

End with a single sentence: how many unresolved problems the original had, and how our architecture addresses them from the start.}

## Design Philosophy

{One dense paragraph, max 120 words. Not a list of principles — a story of why. Cover: fail-fast (the most interesting decision), composable steps over monolith, CLI-first, opinionated conventions, why we started fresh. This section answers "what kind of tool does this want to be?"}

## Who This Is For

{Two short paragraphs, max 100 words total. First: the specific person who benefits — their role, their day-to-day, their frustration. Second: who this is NOT for. Be honest about both.}

## Getting Started

{Copy-pasteable quick-start. Prerequisites, install, first run, first workflow. Use a numbered list or code blocks. No prose padding.}

## Architecture

{One paragraph, max 80 words. Key abstractions, module organization, where the interesting engineering is. For the architecturally curious — not documentation.}

## Roadmap

{Bullet list, 5-8 items. What we're building next, informed by the original's gaps and our architecture's capabilities. Each item: one sentence. Be concrete.}

## Summary

{Three sentences total. What it is. How it works. Why it matters.}
```

## Writing rules

- **Dense.** No sentence without new information. No paragraph that restates the previous one. No section that could be half as long.
- **Specific.** "3 review rounds" not "multiple cycles." "Angular-style commit messages" not "commit messages." Cite counts, names, and mechanisms.
- **No invented names.** Use "{Project Name}" throughout. Do not use the original's CLI commands or brand.
- **Honest about the original.** The original had real problems — cite them from the research. Don't soften findings. Don't be mean about it, but don't pretend it was fine either. "The original had N unresolved questions including X, Y, Z" is honest. "The original was a great effort" is empty.
- **Features explain their WHY.** Every feature description must connect to either: (a) a problem the original had, or (b) a capability the original proved was valuable. Features don't exist in a vacuum — they exist because we studied something and learned.
- **The pipeline matters.** The most important thing about the features is how they connect. Bootstrap → Research → Planning → Execution → Review → Checkpoint. Explain the pipeline, not just the parts.
- **Humor serves clarity.** A joke that makes a concept clearer stays. Everything else goes.
- **Banned words:** revolutionary, powerful, seamless, robust, cutting-edge, next-generation, best-in-class, blazing-fast, game-changing, disruptive, leverage.

## Quality gates

Before finalizing:
1. The file starts with valid YAML frontmatter containing `deepResearchQuery` (a non-empty string, 3-8 sentences, specific to this product's domain)
2. Total body output is 150-250 lines — dense, not padded
3. Every feature from the original is enumerated (not summarized away)
4. "What the Original Got Wrong" cites specific findings from the research documents
5. The pipeline (how features connect) is explicitly explained
6. No product name appears — only "{Project Name}"
7. No word from the banned list appears
8. Every section respects its word limit
9. A reader finishes knowing: what it does, why it exists, how features connect, what was wrong with the original, and what's next
10. The deep research query is actionable — someone could paste it into a search engine or research tool and get useful results back

If any check fails, cut and tighten until it passes.

## Output

Output only raw markdown. No preamble, no explanation, no commentary outside the document structure.