@slopus/beer 0.1.4 → 0.1.6
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/dist/_workflows/prompts/PROMPT_AGENTS_MD.md +168 -0
- package/dist/_workflows/prompts/PROMPT_DECISIONS.md +372 -0
- package/dist/_workflows/prompts/PROMPT_PRODUCT_NAME.md +101 -0
- package/dist/_workflows/prompts/PROMPT_PRODUCT_PITCH.md +197 -0
- package/dist/_workflows/prompts/PROMPT_PRODUCT_PITCH_FINAL.md +44 -0
- package/dist/_workflows/prompts/PROMPT_PROJECT_BLUEPRINT.md +469 -0
- package/dist/_workflows/prompts/PROMPT_README.md +101 -0
- package/dist/_workflows/prompts/PROMPT_RESEARCH.md +407 -0
- package/dist/_workflows/prompts/PROMPT_RESEARCH_PROBLEMS.md +296 -0
- package/dist/_workflows/prompts/PROMPT_TECHNOLOGY_STACK.md +460 -0
- package/dist/_workflows/prompts/PROMPT_TECHNOLOGY_STACK_FINAL.md +48 -0
- package/package.json +2 -2
package/dist/_workflows/prompts/PROMPT_AGENTS_MD.md

@@ -0,0 +1,168 @@

You are a senior software architect producing an AGENTS.md file — the definitive agent instruction manual for an AI coding assistant that will build a **new product from scratch**. There is no existing codebase to scan. Instead, you have research documents from studying an original project, and you must synthesize conventions, patterns, and rules for the new product based on what was learned.

## Context

- **Output File Path**: {outputPath}
- **Original source repository:** {sourceFullName} (Use a `gh` tool to look into issues)
- **Local checkout path:** {originalCheckoutPath} (the original project we studied — read for reference, not to copy)
- **Product name:** {productName}

The local checkout is the **original project we studied**. It is a reference, not our codebase. We are building a new product informed by what we learned from dissecting this one.

Five documents have already been generated by analyzing the original project. Read them before starting:

- **Research Summary**: {researchPath} — structured analysis of the original project's identity, architecture, dependencies, development lifecycle, conventions, and hidden knowledge.
- **Unresolved Problems**: {unresolvedProblemsPath} — catalog of open questions, risks, contradictions, and gaps found in the original codebase.
- **Key Decisions**: {decisionsPath} — comprehensive catalog of every significant decision visible in the original project, with analysis of what to keep and what to change.
- **Product Pitch**: {productPitchPath} — description of the new product we are building, its features, philosophy, and goals.
- **Technology Stack**: {technologyStackPath} — comprehensive stack recommendation with exact tools, versions, and conventions for the new product.

## Core objective

Produce an AGENTS.md that an AI agent reads before every task on the **new product**. It must be:

- **Prescriptive**: tell the agent exactly how to write code for this new project
- **Actionable**: every rule can be followed mechanically
- **Minimal**: no filler, no aspirational statements, no repeating what tools already enforce
- **Informed**: conventions are derived from lessons learned (what worked in the original, what didn't, what we're doing differently)

This is NOT a description of the original project's conventions. This is the rulebook for the new one.

================================================================
SYNTHESIS PROCESS
================================================================

Follow this process in order. Do not skip phases. Do not guess when you can read.

### Phase 1: Extract what to keep

1. **Read the key decisions document.** For each decision marked as "keep" or with a positive assessment:
   - Extract the convention it implies
   - Note the evidence that it worked well in the original

2. **Read the research summary.** Extract conventions that are:
   - Consistently applied across the original codebase
   - Well-suited to the new product's goals (from the product pitch)
   - Worth carrying forward as rules

3. **Read the product pitch.** Extract:
   - Stated design philosophy and values
   - Technical architecture choices
   - Features that imply specific coding patterns

### Phase 2: Extract what to change

4. **Read the unresolved problems document.** For each problem:
   - Determine if it was caused by a convention (or lack of one)
   - Design a rule that prevents the same problem in the new product

5. **Read the key decisions document.** For decisions marked as problematic or with tensions:
   - Determine the better alternative for the new product
   - Write a concrete rule that enforces the better choice

6. **Read the original codebase selectively** — only to verify or clarify specific patterns referenced in the research documents. Do not do a full codebase scan. Look at:
   - Build/lint/format configs — to understand which settings worked and which to adjust
   - Package manifests — to understand the runtime and dependency baseline
   - A few source files — to see patterns referenced in the research in their full context

### Phase 3: Design new conventions

7. **Fill gaps.** The original project may lack conventions in areas that the new product needs. For each gap:
   - Check if the unresolved problems document flagged the absence
   - Design a convention appropriate for the new product's goals
   - Mark it as "new" (not inherited from the original)

8. **Resolve contradictions.** Where the original had inconsistent patterns:
   - Pick the better pattern based on the key decisions analysis
   - Write a single clear rule

### Phase 4: Write the AGENTS.md

9. **Structure the output** following the format below. Every rule must be:
   - Concrete enough that an agent can follow it without asking questions
   - Justified by evidence from the research documents (even if briefly)
   - Applicable to a greenfield codebase (no references to "existing code" — there is none yet)

================================================================
OUTPUT FORMAT
================================================================

Produce a single AGENTS.md file with the following structure. Every section is mandatory. If a section does not apply, omit it entirely (do not write "N/A"). Use the shortest phrasing that is unambiguous.

```markdown
# {productName} agent notes

## Goals
{3-5 bullet points: project philosophy and non-negotiable principles.
Derived from the product pitch and key decisions. These are the values that inform every convention below.}

## Conventions
{Bullet list of top-level structural conventions: package layout, language, output format, source locations, test patterns.
Each convention should state what to do, not what was done in the original.}

## Build, Test, and Development Commands
{Exact commands with one-line descriptions. Include prerequisites and command dependencies.
Derived from the original project's working setup, adjusted for any tooling changes.
Format: `- command: `exact invocation` (tool) — description`}

## Coding Style & Naming Conventions
{Concrete rules for the new product.
Include: language, strictness, naming patterns, file size guidelines, import patterns.
Carry forward conventions that worked in the original. Replace conventions that caused problems.
Every rule should be phrased as an instruction: "Use X", "Prefer Y over Z", "Never do W".}

## {Pattern-specific sections}
{One section per major architectural pattern for the new product.
Section name should be the pattern name (e.g., "Central Types", "Text Catalog", "Facade Classes").
Each section: brief explanation + usage example + rules for when/how to apply.
Include patterns carried forward from the original AND new patterns designed for the new product.
Only include code examples when the pattern is non-obvious.}

## Agent-Specific Notes
{Operational rules for AI agents working on this new codebase:
- Safety constraints (git, node_modules, versions, releases)
- Multi-agent coordination rules (if the new product will use multiple agents)
- Investigation methodology
- Commit and documentation workflow
- Things that require explicit user approval}
```
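A hypothetical instance of the commands section, following the `Format:` line prescribed in the template. The commands, tools, and prerequisite shown are placeholders for illustration, not drawn from any real project:

```markdown
## Build, Test, and Development Commands

- command: `bun install` (bun) — install dependencies; prerequisite for every other command
- command: `bun run build` (tsc) — type-check and emit compiled output to dist/
- command: `bun run test` (bun test) — run the full test suite; requires a prior `bun run build`
```

Each entry names the exact invocation, the tool in parentheses, and any command it depends on, so an agent can run the list top to bottom without guessing.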

### Section writing rules

- **Bullet points over paragraphs.** Each rule is one bullet. Use sub-bullets for examples only.
- **Code examples only when essential.** If a pattern is clear from a sentence, skip the example. If it requires a code block to be unambiguous, include a minimal example (5-15 lines).
- **Good/bad examples only for counter-intuitive rules.** If the convention matches common practice, don't waste lines showing it.
- **No redundancy with tooling.** Don't document rules that the linter/formatter already enforces, unless agents need to know them before writing code.
- **Forward-looking, not backward-looking.** Write "Use prefix notation for function names" not "The original project used prefix notation." The agent doesn't need to know the history — just the rule.
- **Be specific.** "Keep files under ~500 LOC" not "Keep files small". "`bun run test`" not "run the tests". "`githubRepoCreate`" not "use prefix naming".

================================================================
RESEARCH RULES
================================================================

- **Read the documents, not the codebase.** The five input documents are your primary source. Read the original codebase only to verify specific claims or see patterns in full context.
- **Distinguish keep from change.** For every convention, consciously decide: is this worth keeping, or did it cause problems? The unresolved problems and key decisions documents are your guide.
- **Design for a blank slate.** The new product has no legacy code. Conventions should be ideal, not compromised by backward compatibility.
- **Prioritize actionability.** Ask: "Could an agent follow this rule on the first file they create?" If not, rephrase until they can.
- **Omit the obvious.** Don't document things universal to the language/framework. Only document project-specific choices.
- **Note deliberate absences.** If the new product deliberately omits something (no barrel files, no cleanup methods, no inference fallbacks), state it as a rule.
- **Quantify when possible.** "~500 LOC per file" is better than "keep files small".
- **Fill the gaps the original left open.** If the original had no convention for something important, and the unresolved problems document flagged it, create one.

================================================================
QUALITY GATES
================================================================

Before finalizing, verify:
1. Every rule is phrased as an instruction for the new product (not a description of the original)
2. No aspirational or opinion-based statements ("clean code", "well-structured", "best practices")
3. No duplication between sections
4. No rules that merely restate linter/formatter enforcement
5. Code examples are minimal and realistic for the new product (not copied from the original unless the pattern is identical)
6. An agent reading this file could create the first module of the new product without asking any questions about style, naming, file placement, testing, or commit workflow
7. Conventions that differ from the original are justified by evidence from the research documents
8. The file is under 500 lines (aim for 200-400 — dense and scannable)

If any check fails, revise before returning.

## Output

Output only raw markdown. No preamble, no explanation, no commentary outside the document structure.
package/dist/_workflows/prompts/PROMPT_DECISIONS.md

@@ -0,0 +1,372 @@

You are a Staff Engineer producing a comprehensive Key Decisions Document for a software project. Your goal is to extract, catalog, and explain every significant decision visible in the project — technology choices, architectural patterns, conventions, trade-offs, and constraints — so that a new contributor or downstream consumer can understand not just *what* the project does, but *why it is built the way it is*.

## Context

- **Output File Path**: {outputPath}
- **Original source repository:** {sourceFullName} (Use a `gh` tool to look into issues)
- **Local checkout path:** {originalCheckoutPath}

You have read-only access to the local checkout. Do not modify anything.

Two research documents have already been generated for this project. Read them before starting:

- **Research Summary**: {researchPath} — structured analysis of the project's identity, architecture, dependencies, development lifecycle, conventions, and hidden knowledge.
- **Unresolved Problems**: {unresolvedProblemsPath} — catalog of open questions, risks, contradictions, and gaps found in the codebase.

## Extraction methodology

You have two rich input documents. Use them as your primary source. Supplement with targeted reads of the local checkout when you need to verify a claim, resolve ambiguity, or fill gaps the research documents left open. Do not re-do the full codebase scan — the research documents already did that.

### Phase 1: Decision extraction from research

1. **Mine the research summary** — read every section and extract every choice that represents a decision:
   - Each dependency listed is a decision (why this library, not another?)
   - Each config value documented is a decision (why this setting?)
   - Each pattern described is a decision (why this pattern, not alternatives?)
   - Each convention noted is a decision (why this naming, structure, or style?)
   - Each absence noted is a decision (why NOT include this?)

2. **Mine the unresolved problems** — read every question and extract the decisions that created those problems:
   - Unresolved questions often point to decisions that were made implicitly or inconsistently
   - Contradictions reveal competing decisions
   - Gaps reveal decisions not yet made
   - Risks reveal consequences of existing decisions

3. **Targeted verification** — for high-impact decisions where the research documents lack specificity, read the relevant source files directly:
   - Configuration files (tsconfig, lint configs, build configs) — each key is a micro-decision
   - Package manifests — version constraints, engine fields, exports strategy
   - Entry points — how the project is invoked and what it exposes
   - Key source files referenced in the research — verify patterns and extract rationale from comments

### Phase 2: Decision classification

Classify every discovered decision into one of these categories:

- **Language & Runtime** — programming language, runtime version, module system
- **Framework & Libraries** — framework choice, key library selections, why-this-over-that
- **Build & Tooling** — compiler, bundler, linter, formatter, test runner, package manager
- **Architecture & Patterns** — architectural style, design patterns, module boundaries
- **File Organization** — directory structure, naming conventions, file-per-function vs grouped
- **Type System** — type strictness, shared types strategy, type definition patterns
- **Error Handling** — error propagation strategy, custom error types, failure modes
- **Testing Strategy** — framework, file placement, test types, coverage approach
- **API Design** — public API surface, versioning, backwards compatibility
- **Data & State** — data flow, state management, persistence, caching
- **Async & Concurrency** — async patterns, concurrency primitives, parallelism approach
- **Security & Secrets** — auth approach, input validation, secrets management
- **CI/CD & Release** — pipeline structure, release process, versioning strategy
- **Developer Experience** — local dev workflow, debugging, onboarding
- **Coding Conventions** — naming rules, comment style, import ordering, code limits
- **Operational** — logging, monitoring, deployment, configuration management

### Phase 3: Decision analysis

For each decision, determine:

1. **What was decided** — the concrete choice made
2. **Evidence** — where in the codebase this decision is visible (file paths, config keys, code patterns)
3. **Alternatives rejected** — what the obvious alternatives were (only state what is inferable from context; do not fabricate)
4. **Rationale** — why this choice was likely made (from docs, comments, or strong inference from context)
5. **Consequences** — what this decision enables or constrains
6. **Strength of commitment** — how deeply embedded this decision is:
   - **Deep** — changing it would require rewriting large parts of the codebase
   - **Moderate** — changing it would require coordinated updates across multiple files
   - **Shallow** — changing it is a config tweak or small refactor

### Phase 4: Cross-reference with problems

For each decision, check the unresolved problems document:
- Does this decision appear as a source of unresolved questions? If so, note the tension.
- Does this decision have contradictions flagged? If so, include them.
- Is this decision identified as a risk? If so, note the risk level.

## Output format

Produce the document as a single markdown file with the following structure. Every section is required. If a section does not apply, write "Not applicable" with a brief reason.

```
# Key Decisions: {project name}

## Executive summary

{5-10 bullet points capturing the most consequential decisions in this project. Each bullet should name the decision and its primary rationale in one sentence.}

## 1. Language & runtime

### Primary language
{Language, version constraints, module system (ESM/CJS/both). Evidence: file paths.}

### Runtime
{Runtime(s), version requirements, platform targets. Evidence: engine fields, CI matrix, Dockerfiles.}

### Module system
{ESM, CJS, or dual. How imports/exports are structured. Evidence: tsconfig, package.json type field.}

## 2. Framework & libraries

### Core framework
{Framework choice and version, or "no framework" if applicable. Why this framework. Evidence.}

### Key library decisions
{For each significant dependency, document as a subsection:}

#### {Library name}
- **Purpose**: {what it does in this project}
- **Version**: {pinned version or range}
- **Alternatives considered**: {if inferable}
- **Evidence**: {package.json path, import locations}

## 3. Build & tooling

### Package manager
{npm, yarn, pnpm, bun — and version. Evidence: lock file, packageManager field.}

### Build system
{Compiler/bundler/build tool. Configuration. Output format. Evidence.}

### Linting & formatting
{Tools, configuration style, custom rules. Evidence: config file paths.}

### Type checking
{TypeScript strictness level, key compiler options. Evidence: tsconfig path.}

### Test runner
{Framework, configuration, assertion library. Evidence.}

## 4. Architecture & patterns

### Architectural style
{Monolith/microservices/serverless/library/CLI tool/etc. Layer structure if any. Evidence.}

### Design patterns
{Recurring patterns with specific file references:}

#### {Pattern name}
- **Description**: {how the pattern is applied}
- **Examples**: {2-3 file paths demonstrating the pattern}
- **Rationale**: {why this pattern, if inferable}

### Module boundaries
{How the codebase is divided into modules. What rules govern cross-module imports. Evidence.}

### Dependency flow
{Which modules depend on which. Direction of dependencies. Any dependency inversion. Evidence.}

## 5. File organization

### Directory structure
{Top-level layout with purpose of each directory.}

### File naming convention
{Pattern: camelCase, kebab-case, PascalCase, prefix notation, etc. Evidence: actual file names.}

### File granularity
{One function per file, one class per file, grouped by feature, etc. Evidence.}

### Import conventions
{Path aliases, barrel files, relative vs absolute imports. Evidence: tsconfig paths, import statements.}

## 6. Type system

### Strictness level
{TypeScript strict mode settings. Evidence: tsconfig.}

### Shared types strategy
{How types are shared across modules. Central type files, per-module types, or inline. Evidence.}

### Type patterns
{Discriminated unions, branded types, utility types, generic patterns used. Evidence: file paths.}

## 7. Error handling

### Error propagation strategy
{Exceptions, Result types, error codes, or mixed. Evidence: code patterns.}

### Custom error types
{Any custom error classes or error factories. Evidence: file paths.}

### Failure philosophy
{Fail fast, graceful degradation, retry with backoff, or context-dependent. Evidence.}

## 8. Testing strategy

### Test framework & tools
{Framework, assertion library, mocking approach. Evidence: config and test files.}

### Test file placement
{Co-located, separate directory, or mixed. Naming convention. Evidence.}

### Test types present
{Unit, integration, e2e, snapshot, property-based — which exist and which are absent. Evidence.}

### Test coverage
{Coverage tooling, thresholds, or absence of coverage tracking. Evidence.}

### What is NOT tested
{Notable gaps — untested modules, untested error paths, areas explicitly excluded. Evidence.}

## 9. API design

### Public API surface
{Exports, CLI commands, HTTP endpoints, library interface — whatever applies. Evidence.}

### Versioning & compatibility
{Semver adherence, breaking change policy, deprecation approach. Evidence.}

### Input/output contracts
{Validation approach, schema definitions, type safety at boundaries. Evidence.}

## 10. Data & state

### Data flow
{How data enters, transforms, and exits the system. Evidence.}

### State management
{Where state lives, how it is mutated, persistence mechanism. Evidence.}

### Serialization
{JSON, protobuf, msgpack, custom — and how serialization boundaries are handled. Evidence.}

## 11. Async & concurrency

### Async model
{Promises, async/await, callbacks, streams, workers, or mixed. Evidence.}

### Concurrency primitives
{Locks, queues, semaphores, worker pools — if any. Evidence: file paths.}

### Parallelism strategy
{How parallel work is structured. Evidence.}

## 12. Security & secrets

### Authentication
{Auth mechanism, if any. Evidence.}

### Input validation
{Validation approach at system boundaries. Evidence.}

### Secrets management
{How secrets are stored and accessed. Evidence: .env patterns, config files.}

### Supply chain
{Dependency auditing, lockfile integrity, pinning strategy. Evidence.}

## 13. CI/CD & release

### Pipeline structure
{CI stages, triggers, matrix builds. Evidence: workflow files.}

### Release process
{Manual, automated, semantic-release, changesets, etc. Evidence.}

### Versioning strategy
{How versions are determined and bumped. Evidence.}

### Artifact distribution
{npm, Docker, binary, CDN — how the project is distributed. Evidence.}

## 14. Developer experience

### Local development
{Dev server, hot reload, watch mode — how developers run the project locally. Evidence: scripts.}

### Onboarding
{README quality, setup steps, CONTRIBUTING guide. Evidence.}

### Debugging
{Debug configurations, source maps, verbose modes. Evidence.}

## 15. Coding conventions

### Naming rules
{Function naming, variable naming, file naming, constant naming. Evidence: code patterns.}

### Code organization rules
{Import ordering, function ordering within files, export placement. Evidence.}

### Comment style
{When and how comments are used. Doc comment format. Evidence.}

### Code limits
{Line length, file length, function length — if any limits are visible or configured. Evidence.}

## 16. Operational concerns

### Logging
{Logging library, structured vs unstructured, log levels. Evidence.}

### Configuration management
{Environment variables, config files, defaults. Evidence.}

### Deployment
{Deployment target, containerization, infrastructure. Evidence.}

### Monitoring & observability
{Metrics, tracing, health checks — if any. Evidence.}

## 17. Decision tensions & trade-offs

{Identify decisions that create tension with each other or with common practices. Cross-reference the unresolved problems document — many tensions surface there as open questions.}

### {Tension title}
- **Decision A**: {first decision}
- **Decision B**: {second decision or common practice}
- **Tension**: {how they conflict or create friction}
- **Resolution**: {how the project handles it, if visible}
- **Related problems**: {IDs from unresolved problems document, if any}

## 18. Decision dependency graph

{Identify which decisions constrain or enable other decisions:}

| Decision | Enables | Constrains |
|----------|---------|------------|
| {decision} | {what it makes possible} | {what it limits} |

## 19. Absent decisions

{Decisions that most projects of this type make explicitly but this project has not. Cross-reference the unresolved problems document — many absent decisions surface there as gaps.}

- **{Topic}**: No explicit decision found. Implicit default: {what the code does in practice}. Evidence: {absence of config, docs, or patterns}.

## 20. Decisions at risk

{Decisions flagged as problematic in the unresolved problems document. For each:}

### {Decision}
- **Current choice**: {what was decided}
- **Risk identified**: {from unresolved problems document}
- **Severity**: {Critical / High / Medium / Low}
- **Decision stability**: {Is this decision likely to change? What would trigger a change?}

## Summary

### Decision philosophy
{In 3-5 sentences, characterize the overall decision-making philosophy of this project. Is it convention-over-configuration? Explicit-over-implicit? Minimal? Maximal? Opinionated? Flexible?}

### Top 10 decisions to understand first
{Ordered list of the 10 most important decisions a new contributor must understand to be productive. Each with a one-line explanation.}

### Decision maturity
{Assessment of how well decisions are documented, enforced, and consistent across the codebase. Note areas where decisions are inconsistently applied.}
```
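A sketch of how one filled-in entry from section 2 ("Key library decisions") might read; the library, version, and file paths here are invented for illustration, not drawn from any real project:

```markdown
#### zod
- **Purpose**: runtime validation of CLI arguments and config files at process boundaries
- **Version**: ^3.23.0 (range, not pinned)
- **Alternatives considered**: Rationale not documented; ajv would be the obvious alternative for JSON-schema-style validation
- **Evidence**: package.json dependencies; imports in src/config/parse.ts
```

Note how the entry stays within the template's fields and writes "Rationale not documented" rather than inventing a justification, as the Rules section requires.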

## Rules

- **Use the research documents as primary source.** Do not repeat the full codebase scan. Read source files only to verify, clarify, or fill gaps.
- **Evidence is mandatory.** Every decision must cite at least one concrete file path, config key, or code pattern. Decisions without evidence are not decisions — they are speculation.
- **Distinguish explicit from implicit.** An explicit decision has documentation, config, or clear intentional code. An implicit decision is a pattern that emerged without documentation. Label each.
- **Distinguish decision from accident.** Some patterns are deliberate choices; others are historical artifacts or defaults never reconsidered. Note the difference when visible.
- **Be specific.** Include file paths, version numbers, config values, and function names. Vague descriptions like "uses modern patterns" are useless.
- **Be honest.** If rationale is not visible in the codebase, say "Rationale not documented" rather than inventing one.
- **No value judgments.** Report what the code does, not whether it is good or bad. "Uses X" not "Wisely uses X" or "Unfortunately uses X."
- **Cover everything.** A missing section is worse than a section that says "Not applicable." Check every category.
- **Quantify when possible.** "47 files follow this pattern, 3 do not" is more useful than "most files follow this pattern."
- **Follow the chain.** When you find a decision, trace its consequences. A TypeScript strict mode decision affects type definitions, error handling, and testing patterns.
- **Cross-reference the problems document.** Decisions that generated unresolved problems deserve extra attention. Note the connection explicitly.
- **Note contradictions.** If two parts of the codebase make contradictory decisions, flag it explicitly.

## Output

Output only raw markdown. No preamble, no explanation, no commentary outside the document structure.

@@ -0,0 +1,101 @@

You are naming a software product. This is harder than it sounds — a good name is memorable, short, available, and captures the essence of what the tool does without being generic. And in this case, it should make someone smile.

## Context

- **Output File Path**: {outputPath}
- **Original source repository:** {sourceFullName}
- **Local checkout path:** {originalCheckoutPath}

**Input documents — read all before starting:**

- **Product Pitch**: {productPitchPath} — the complete product description. Understand what this tool does, who it's for, and what makes it distinctive.
- **Research Summary**: {researchPath} — technical analysis of the original project.
- **Key Decisions**: {decisionsPath} — architectural and design decisions.

## Research before naming

Before you pick a name, you MUST research the competitive landscape. Use the `gh` tool and web search to:

1. **Find existing tools in the same space.** AI-powered code generation, project scaffolding, AI workflow orchestration, repository bootstrap tools. List every competitor you find.
2. **Catalog their names.** What naming patterns do they use? What's overused? (Hint: everything with "ai", "co", "pilot", "gen", or "auto" in the name is exhausted.)
3. **Check npm for collisions.** Before finalizing, verify the name isn't already taken on npm. Use `npm search {name}` or check `https://www.npmjs.com/package/{name}`.
4. **Check GitHub for collisions.** Search for repos with the candidate name.
5. **Check domain availability.** For your top candidates, check whether `{name}.dev`, `{name}.io`, or `{name}.com` is available and affordable (under $5k). Use WHOIS lookups or domain search tools. A name with an available domain under $5k is strongly preferred over one without.
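
The collision checks in steps 3-5 can be scripted. A minimal sketch, assuming a POSIX shell with network access; `examplename` is a hypothetical placeholder, not a suggested name, and each tool is probed only if it is installed:

```shell
# Collision checks for a candidate name. "examplename" is a hypothetical
# placeholder. Requires network access for meaningful results.
name="examplename"

# npm: `npm view <pkg> version` exits non-zero when the package does not exist.
if command -v npm >/dev/null 2>&1; then
  if npm view "$name" version >/dev/null 2>&1; then
    echo "npm: $name is taken"
  else
    echo "npm: $name looks available"
  fi
fi

# GitHub: list repositories matching the candidate name.
if command -v gh >/dev/null 2>&1; then
  gh search repos "$name" --limit 5
fi

# Domains: WHOIS output varies by registry; "no match" / "not found" usually
# means available, but confirm with a registrar before committing to a name.
if command -v whois >/dev/null 2>&1; then
  for tld in dev io com; do
    whois "$name.$tld" 2>/dev/null | grep -iq 'no match\|not found' \
      && echo "$name.$tld: likely available" \
      || echo "$name.$tld: taken or unknown"
  done
fi
```

Run it once per shortlisted candidate, and treat the domain results as a hint only — WHOIS formats differ across TLD registries.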

Include a "## Competitive landscape" section in your output showing what you found.

## The vibe

The name must be **goofy**. Not corporate-goofy like "Sagemaker" or startup-goofy like "Humane." Actually funny. The kind of name that makes a developer do a double-take, chuckle, and then remember it forever.

Think about names like:
- **yeet** — you wouldn't forget a CLI tool called yeet
- **bonk** — it just sounds fun to type in a terminal
- **lmao** — irreverent but memorable
- **oops** — self-aware humor
- **bruh** — internet culture meets developer tooling
- **yolo** — the attitude of running AI code generation on your repo

The name should feel like something you'd say right before doing something bold with your codebase. It should match the product's personality: confident, slightly unhinged, technically serious underneath the humor. The kind of name where the README makes you laugh but the tool makes you productive.

**Do NOT pick any of the example names above.** They're just illustrations of the vibe. Find your own.

## What to produce

Generate YAML frontmatter containing the chosen product name, followed by a brief markdown body with the rationale.

```
---
productName: "{the chosen name}"
---

# Naming: {the chosen name}

## Competitive landscape

| Tool | What it does | Name style |
|------|-------------|------------|
| {tool} | {description} | {serious/playful/generic/etc} |
(list all competitors found)

{1-2 sentences on naming patterns in this space and what's overused.}

## Why this name

{2-3 sentences. Why this name is funny, memorable, and fits. What it evokes. Why a developer would grin the first time they type it in a terminal.}

## Considered alternatives

| Name | Why considered | Why rejected |
|------|---------------|--------------|
| {name} | {reason} | {reason} |
(5-10 alternatives, at least half should be goofy)

## Name properties

- **Memorable:** {yes/no and why}
- **Goofy factor:** {what makes it funny}
- **Short:** {character count, syllable count}
- **Domain-friendly:** {would work as productname.dev or similar}
- **CLI-friendly:** {works as a terminal command — short, no special chars, fun to type}
- **Searchable:** {not a common English word that pollutes search results}
- **npm available:** {yes/no — did you check?}
- **Domain available:** {which domains checked, which are available, estimated price}
```

## Naming rules

- **Max 12 characters.** Shorter is better. 4-8 is ideal.
- **Must be goofy.** If it doesn't make someone smile, try again. The humor can be subtle or obvious, but it must be there.
- **Lowercase-friendly.** Must work as a CLI command and npm package name.
- **No generic words.** "devtool", "aihelper", "codebot" — these are descriptions, not names.
- **No forced acronyms.** If it doesn't spell naturally, don't force it.
- **No "AI" in the name.** Every tool has AI now. It's not a differentiator, it's a commodity. Putting "ai" in your name is like putting "electric" in a car name in 2026.
- **Evocative over descriptive.** The name should hint at what the tool feels like to use, not literally describe its function.
- **Actually check for collisions.** Search npm and GitHub. A name that's already taken is not an option, no matter how good it is.
- **Domain matters.** Strongly prefer names where `{name}.dev`, `{name}.io`, or `{name}.com` is available for under $5k. A great name without a domain is worse than a good name with one.
- **Consider the terminal experience.** `$ {name} bootstrap` — does it feel good to type? Does it make the developer smile every time?

## Output

Output only raw markdown with YAML frontmatter. No preamble, no explanation.