codifier 2.1.0

This diff shows the contents of publicly released package versions as they appear in their respective public registries, and is provided for informational purposes only.
Files changed (45)
  1. package/README.md +543 -0
  2. package/commands/codify.md +7 -0
  3. package/commands/onboard.md +7 -0
  4. package/commands/push-memory.md +7 -0
  5. package/commands/recall.md +41 -0
  6. package/commands/remember.md +7 -0
  7. package/commands/research.md +7 -0
  8. package/dist/cli/add.d.ts +5 -0
  9. package/dist/cli/add.d.ts.map +1 -0
  10. package/dist/cli/add.js +25 -0
  11. package/dist/cli/add.js.map +1 -0
  12. package/dist/cli/bin/codifier.d.ts +7 -0
  13. package/dist/cli/bin/codifier.d.ts.map +1 -0
  14. package/dist/cli/bin/codifier.js +47 -0
  15. package/dist/cli/bin/codifier.js.map +1 -0
  16. package/dist/cli/detect.d.ts +15 -0
  17. package/dist/cli/detect.d.ts.map +1 -0
  18. package/dist/cli/detect.js +69 -0
  19. package/dist/cli/detect.js.map +1 -0
  20. package/dist/cli/doctor.d.ts +6 -0
  21. package/dist/cli/doctor.d.ts.map +1 -0
  22. package/dist/cli/doctor.js +71 -0
  23. package/dist/cli/doctor.js.map +1 -0
  24. package/dist/cli/init.d.ts +7 -0
  25. package/dist/cli/init.d.ts.map +1 -0
  26. package/dist/cli/init.js +144 -0
  27. package/dist/cli/init.js.map +1 -0
  28. package/dist/cli/update.d.ts +5 -0
  29. package/dist/cli/update.d.ts.map +1 -0
  30. package/dist/cli/update.js +38 -0
  31. package/dist/cli/update.js.map +1 -0
  32. package/dist/index.js +87 -0
  33. package/package.json +40 -0
  34. package/skills/brownfield-onboard/SKILL.md +142 -0
  35. package/skills/capture-session/SKILL.md +111 -0
  36. package/skills/initialize-project/SKILL.md +185 -0
  37. package/skills/initialize-project/templates/evals-prompt.md +39 -0
  38. package/skills/initialize-project/templates/requirements-prompt.md +44 -0
  39. package/skills/initialize-project/templates/roadmap-prompt.md +44 -0
  40. package/skills/initialize-project/templates/rules-prompt.md +34 -0
  41. package/skills/push-memory/SKILL.md +131 -0
  42. package/skills/research-analyze/SKILL.md +149 -0
  43. package/skills/research-analyze/templates/query-generation-prompt.md +61 -0
  44. package/skills/research-analyze/templates/synthesis-prompt.md +67 -0
  45. package/skills/shared/codifier-tools.md +187 -0
@@ -0,0 +1,142 @@ package/skills/brownfield-onboard/SKILL.md
# Skill: Brownfield Onboard

**Role:** Developer
**Purpose:** Onboard existing codebases into the Codifier shared knowledge base by packing repositories, generating architectural summaries, and persisting learnings.

See `../shared/codifier-tools.md` for full MCP tool reference.

---

## Prerequisites

- Active MCP connection to the Codifier server
- At least one repository URL (GitHub, GitLab, or local path)
- A project to associate the snapshots with (existing or new)

---

## Workflow

### Step 1 — Identify or Create the Project

Call `manage_projects` with `operation: "list"` and show the user their existing projects.

Ask: **"Which project should these repositories be associated with, or should we create a new one?"**

- If **existing**: use the selected `project_id`.
- If **new**: collect a name and optionally an org, then call `manage_projects` with `operation: "create"`.

### Step 2 — Collect Repository URLs

Ask the user to provide all repository URLs to onboard. They may provide:
- One or more GitHub/GitLab/Bitbucket HTTPS URLs
- Local filesystem paths (absolute)

Ask: **"Are there any other repos to include, or is this the complete list?"**

Also ask: **"What is the current state of these repos — active development, legacy, recently archived?"**

### Step 3 — Fetch Existing Context

Call `fetch_context` with `{ project_id }` to retrieve any prior memories for this project. Summarize relevant findings to the user — prior architectural decisions, existing rules, or previous onboarding notes are important context.

### Step 3b — Surface Local Learnings

Attempt to read `docs/MEMORY.md`. If the file does not exist, skip this step silently and continue to Step 4.

If the file exists, scan for entries relevant to the repositories being onboarded — particularly the `architecture`, `gotcha`, and `convention` categories. Present relevant local learnings to the user alongside the KB context from Step 3.

Note: This is a local file read — no MCP call required.

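The Step 3b scan can be sketched as below, assuming `docs/MEMORY.md` uses the `## <Category>` headings and `- ` bullets described in the Capture Session skill (function names are illustrative):

```python
from pathlib import Path

RELEVANT = {"architecture", "gotcha", "convention"}

def relevant_learnings(text: str) -> dict[str, list[str]]:
    """Group bullet entries under their '## <Category>' headings, keeping relevant categories."""
    entries: dict[str, list[str]] = {}
    category = None
    for line in text.splitlines():
        if line.startswith("## "):
            category = line[3:].strip().lower()
        elif line.startswith("- ") and category in RELEVANT:
            entries.setdefault(category, []).append(line[2:].strip())
    return entries

def surface_local_learnings(path: str = "docs/MEMORY.md") -> dict[str, list[str]]:
    p = Path(path)
    if not p.exists():  # Step 3b: skip silently when the file is absent
        return {}
    return relevant_learnings(p.read_text())
```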

### Step 4 — Pack Repositories

For each repository URL:
1. Call `pack_repo` with the URL, `project_id`, and a `version_label` (use current date: `"YYYY-MM"` or a tag like `"initial-onboard"`)
2. Note the returned `repository_id`, `token_count`, and `file_count`
3. Inform the user: "Packed `<repo-url>` — `<N>` files, `<M>` tokens"

If a pack fails, log the error and continue with remaining repos.

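The pack loop and its log-and-continue failure handling can be sketched as follows; `pack_repo` here stands in for the MCP tool call and is injected as a plain function for illustration:

```python
def pack_all(repos, pack_repo, project_id, version_label):
    """Pack each repo; record failures and continue with the rest (Step 4)."""
    packed, failed = [], []
    for url in repos:
        try:
            result = pack_repo(url=url, project_id=project_id, version_label=version_label)
        except Exception as err:
            failed.append((url, str(err)))  # noted later in the summary
            continue
        packed.append(result)
        print(f"Packed {url} — {result['file_count']} files, {result['token_count']} tokens")
    return packed, failed
```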

### Step 5 — Generate Architectural Summary

Using the packed repository content (available in your context from the pack results) and any prior memories, generate a comprehensive architectural summary covering:

1. **System Overview** — what the system does, its primary users, and its business purpose
2. **Technology Stack** — languages, frameworks, databases, infrastructure
3. **Module Structure** — major directories/packages and their responsibilities
4. **Key Interfaces** — APIs, event buses, shared contracts between components
5. **Data Flow** — how data moves through the system from input to output
6. **External Dependencies** — third-party services, APIs, or systems the codebase integrates with
7. **Known Issues / Technical Debt** — observations from the code (if apparent)
8. **Conventions Observed** — naming patterns, file organisation, testing approach

Present the summary to the user and ask: **"Does this accurately describe the system? What should be added or corrected?"**

Incorporate feedback.

### Step 6 — Write Local Copies

Write the confirmed architectural summary as a local file in the `docs/` directory at the project root. Create the directory if it does not exist.

| Artifact | Local Path |
|----------|-----------|
| Architectural Summary | `docs/architecture.md` |

**Important:**
- If `docs/architecture.md` already exists, ask the user before overwriting
- If the write fails, inform the user but continue — remote persistence in the next step will still capture the artifact

Inform the user: "Local copy saved to docs/architecture.md"

### Step 7 — Persist Architectural Summary Remotely

Call `update_memory`:
```
memory_type: "learning"
title: "Architectural Summary — <repo-name or project-name>"
content: { text: "<full summary markdown>", repos: ["<url1>", "<url2>"] }
tags: ["architecture", "onboarding", "brownfield"]
source_role: "developer"
```

### Step 8 — Persist Architectural Decisions

For any significant architectural decisions uncovered (e.g., "uses event sourcing", "monorepo with Turborepo", "Postgres as primary store"), ask the user which to persist as formal documents.

For each confirmed decision:
1. Write a local copy to `docs/adr-<kebab-slug>.md` (convert the decision title to lowercase kebab-case, e.g., "Uses Event Sourcing" → `docs/adr-uses-event-sourcing.md`)
2. Then call `update_memory`:
```
memory_type: "document"
title: "ADR: <decision title>"
content: { text: "<decision description, rationale, and consequences>" }
tags: ["adr", "architecture"]
source_role: "developer"
```

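The kebab-case conversion in step 1 can be sketched as a small helper (the name `adr_path` is illustrative):

```python
import re

def adr_path(title: str) -> str:
    """Convert a decision title to its ADR file path, e.g. 'Uses Event Sourcing' -> docs/adr-uses-event-sourcing.md"""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return f"docs/adr-{slug}.md"
```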

### Step 9 — Summarize

Tell the user:
- Project ID
- Repositories packed (with IDs and token counts)
- Memories persisted (IDs and titles)
- Local copies written to `docs/` (architecture.md, any ADR files)
- How to retrieve this context in future: `fetch_context` with `{ project_id, tags: ["architecture"] }`

---

## Error Handling

- If `pack_repo` times out or fails: note the error in the summary, ask the user if they want to retry or skip.
- If a repo is private and credentials are not configured: inform the user that the server needs the relevant token (`GITHUB_TOKEN`, `GITLAB_TOKEN`) configured as an environment variable.
- If the packed content is very large (>500K tokens): focus the architectural summary on the highest-level structural observations rather than deep code analysis.

---

## End-of-Workflow Memory Capture

After completing Step 9, suggest to the user:

> "You may have learned things during this onboarding session worth capturing. Run `/remember` to capture session learnings to docs/MEMORY.md, or `/push-memory` to sync existing local memories to the shared KB."

This is a suggestion only — do not automatically invoke the capture or push Skills.
@@ -0,0 +1,111 @@ package/skills/capture-session/SKILL.md
# Skill: Capture Session

**Role:** Any
**Purpose:** Capture session learnings — gotchas, conventions, insights, what-to-do, what-not-to-do — into the local `docs/MEMORY.md` file for later review and optional KB sync.

See `../shared/codifier-tools.md` for full MCP tool reference.

---

## Prerequisites

- A `docs/MEMORY.md` file (created by `npx codifier init`; if missing, this skill will create it with a placeholder header)
- Optionally an active MCP connection (only needed if the user wants to confirm the `project_id`)

---

## When to Use / When NOT to Use

**Use this skill when:**
- You've learned something during a session worth remembering — a debugging insight, an API behavior, a convention, a gotcha, or a team decision
- You are at the end of any Codifier workflow and want to capture what you learned along the way

**Do NOT use this skill when:**
- You want to persist structured project artifacts such as rules, requirements, or architecture docs — use `/codify`, `/onboard`, or `/research` instead
- You want to push memories to the shared KB for the team to access — use `/push-memory` instead

---

## Workflow

Follow these steps conversationally. You are the state machine — write to the local file only; do not call `update_memory` during this skill.

### Step 1 — Confirm Project Context

Read `docs/MEMORY.md`.

- If the file **exists**: extract the project name and ID from the header and present them to the user for confirmation.
- If the file **does not exist**: create it with the following placeholder header, substituting today's date for `<today's date>`:

```markdown
# Project Memory
_Last updated: <today's date>_

```

Ask the user: **"What project are these learnings for?"** to confirm or set the project context. Update the header with the confirmed project name if it is not already present.

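Step 1's read-or-create logic can be sketched as below; the header text follows the placeholder shown above, and the helper name is illustrative:

```python
from datetime import date
from pathlib import Path

HEADER = "# Project Memory\n_Last updated: {today}_\n\n"

def ensure_memory_file(path: str = "docs/MEMORY.md") -> str:
    """Return the existing file's text, or create it with the placeholder header."""
    p = Path(path)
    if p.exists():
        return p.read_text()
    p.parent.mkdir(parents=True, exist_ok=True)
    content = HEADER.format(today=date.today().isoformat())
    p.write_text(content)
    return content
```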

### Step 2 — Elicit Learnings

Ask the user:

**"What did you learn during this session? Think about: gotchas, surprises, conventions you discovered, things to do or avoid, insights about the codebase or tools."**

Let the user respond in any format — bullet points, paragraphs, or freeform prose. Collect everything before structuring. Do not interrupt to ask for clarification mid-response; wait until the user has finished providing input.

### Step 3 — Structure and Dedup

For each learning the user provided:

1. Assign a category from the following list, or ask the user if the right category is unclear:
   - `architecture` — structural decisions, component relationships, design patterns
   - `gotcha` — surprising behaviors, footguns, things that went wrong
   - `convention` — naming, formatting, file organisation, team norms
   - `tooling` — build tools, CLIs, libraries, dev environment
   - `data` — schema details, query behaviors, data quirks
   - `process` — workflow, review conventions, deployment steps
2. Distill the learning into a concise, actionable bullet point (one line)
3. Check for exact string matches against existing bullet points already in `docs/MEMORY.md` — skip any entry whose text is identical to an existing entry

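The exact-match dedup in item 3 can be sketched as:

```python
def dedup(candidates: list[str], existing_text: str) -> list[str]:
    """Drop candidates whose bullet text already appears verbatim in docs/MEMORY.md."""
    existing = {line[2:].strip() for line in existing_text.splitlines() if line.startswith("- ")}
    return [c for c in candidates if c.strip() not in existing]
```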

Present the structured candidates to the user grouped by category. For example:

```
**gotcha**
- Supabase RLS blocks inserts when no matching policy exists; always test with service role key first

**convention**
- Use kebab-case for all slug fields; snake_case is reserved for database column names
```

Ask: **"Here are the learnings I've structured. Any to add, remove, or recategorize?"**

Incorporate the user's feedback before proceeding.

### Step 4 — Append to Local File

Append the confirmed learnings to `docs/MEMORY.md`:

- For each category with new entries, find the matching `## <Category>` heading in the file.
- If the heading already exists: append new bullet points beneath it.
- If the heading does not exist yet: create it at the end of the file.
- Update the `_Last updated_` date in the header to today's date.

Do NOT call `update_memory`. This step writes only to the local `docs/MEMORY.md` file.

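Step 4's append logic can be sketched as follows (a simplified sketch: it matches headings literally and assumes the `_Last updated_` line from Step 1's header):

```python
import re
from datetime import date

def append_learnings(text: str, learnings: dict[str, list[str]]) -> str:
    """Append bullets under matching '## <Category>' headings; create missing headings at the end."""
    lines = text.splitlines()
    for category, bullets in learnings.items():
        heading = f"## {category}"
        if heading in lines:
            # insert after the last line already under this heading
            j = lines.index(heading) + 1
            while j < len(lines) and not lines[j].startswith("## "):
                j += 1
            lines[j:j] = [f"- {b}" for b in bullets]
        else:
            lines += ["", heading] + [f"- {b}" for b in bullets]
    text = "\n".join(lines) + "\n"
    # refresh the header date
    return re.sub(r"_Last updated: .*_", f"_Last updated: {date.today().isoformat()}_", text)
```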

### Step 5 — Next Steps

Inform the user:

- "Learnings saved to docs/MEMORY.md"
- "You can edit the file directly to refine, recategorize, or remove entries"
- "When ready to share with your team, run /push-memory to sync to the shared KB"

---

## Error Handling

- If `docs/MEMORY.md` **cannot be written** (e.g., permission error): present the full structured learnings as a fenced Markdown code block the user can copy-paste into the file manually.
- If the user **provides no learnings**: ask 2–3 targeted questions to help surface implicit learnings. Example prompts:
  - "What was the hardest part of what you worked on today?"
  - "Did anything behave differently than you expected?"
  - "Is there anything you'd tell a teammate before they touched this area of the code?"
@@ -0,0 +1,185 @@ package/skills/initialize-project/SKILL.md
# Skill: Initialize Project

**Role:** Developer
**Purpose:** Set up a new project in the Codifier shared knowledge base — collecting context, optionally packing repositories, and generating four key artifacts: Rules.md, Evals.md, Requirements.md, and Roadmap.md.

See `../shared/codifier-tools.md` for full MCP tool reference.

---

## Prerequisites

- Active MCP connection to the Codifier server
- Project context: name, description, and optionally a Scope of Work (SOW) document and repo URLs

---

## Workflow

Follow these steps conversationally. You are the state machine — call MCP tools only for data operations.

### Step 1 — Identify or Create the Project

Call `manage_projects` with `operation: "list"` to show the user their existing projects.

Ask: **"Is this a new project, or do you want to use an existing one?"**

- If **existing**: ask the user to select from the list; use that `project_id` for all subsequent calls.
- If **new**: collect a project name and optionally an org name, then call `manage_projects` with `operation: "create"`. Use the returned `project_id` for all subsequent calls.

### Step 2 — Collect Project Context

Gather the following from the user in a single conversational turn:

1. **Project name** (if not already set)
2. **Description** — what does this project build and for whom?
3. **Scope of Work (SOW)** — paste the SOW document, or describe key deliverables if no formal SOW exists
4. **Repository URLs** (optional) — GitHub/GitLab URLs of codebases relevant to this project
5. **Additional context** — any constraints, tech stack, team conventions, or prior decisions

Confirm you have understood all provided context before proceeding.

### Step 3 — Pack Repositories (if URLs provided)

For each repository URL provided:
1. Call `pack_repo` with the URL, `project_id`, and a `version_label` (use the current date or sprint label, e.g., `"2026-02"`)
2. Note the returned `repository_id` and `token_count`
3. Inform the user: "Packed `<repo-url>` — `<N>` tokens"

If no URLs were provided, skip this step.

### Step 4 — Fetch Existing Context

Call `fetch_context` with `{ project_id }` (no type filter) to retrieve any prior memories for this project. This surfaces research findings, prior rules, or existing docs that should inform the new artifacts.

Summarize any relevant findings to the user before generating artifacts.

### Step 4b — Surface Local Learnings

Attempt to read `docs/MEMORY.md`. If the file does not exist, skip this step silently and continue to Step 5.

If the file exists, scan it for entries relevant to this project — particularly entries in the `architecture`, `gotcha`, and `convention` categories. Summarize relevant local learnings to the user alongside the KB context from Step 4.

Note: This is a local file read — no MCP call required.

### Step 5 — Generate Rules.md

Using the prompt template in `templates/rules-prompt.md`, generate a comprehensive set of development rules and coding standards for this project.

**Substitute these placeholders with actual values:**
- `{project_name}` — the project name
- `{description}` — the project description
- `{sow}` — the SOW or deliverables description
- `{repo_urls}` — list of repo URLs (or "none provided")
- `{additional_context}` — any extra context, including relevant memories from Step 4

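The placeholder substitution can be sketched as a simple literal replacement (a sketch only; unknown placeholders are deliberately left in place so they surface during review):

```python
def fill_template(template: str, values: dict[str, str]) -> str:
    """Substitute {placeholder} tokens with their values; unknown tokens stay intact."""
    out = template
    for key, value in values.items():
        out = out.replace("{" + key + "}", value)
    return out
```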

Present the generated Rules.md to the user inline. Ask: **"Does this look right? Any rules to add, remove, or change?"**

Incorporate feedback before proceeding.

### Step 6 — Generate Evals.md

Using the prompt template in `templates/evals-prompt.md`, generate evaluation criteria from the confirmed Rules.md.

**Substitute:**
- `{rules}` — the confirmed Rules.md content
- `{project_name}` — the project name
- `{description}` — the project description

Present Evals.md inline and ask for confirmation.

### Step 7 — Generate Requirements.md

Using the prompt template in `templates/requirements-prompt.md`, generate a detailed requirements document.

**Substitute:**
- `{project_name}`, `{description}`, `{sow}`, `{repo_urls}`, `{additional_context}`

Present Requirements.md inline and ask for confirmation.

### Step 8 — Generate Roadmap.md

Using the prompt template in `templates/roadmap-prompt.md`, generate a phased implementation roadmap from Requirements.md.

**Substitute:**
- `{requirements}` — the confirmed Requirements.md content
- `{project_name}`, `{description}`, `{repo_urls}`

Present Roadmap.md inline and ask for confirmation.

### Step 9 — Write Local Copies

Write each confirmed artifact as a local file in the `docs/` directory at the project root. Create the directory if it does not exist.

| Artifact | Local Path |
|----------|-----------|
| Rules | `docs/rules.md` |
| Evals | `docs/evals.yaml` |
| Requirements | `docs/requirements.md` |
| Roadmap | `docs/roadmap.md` |

Write each file with the confirmed artifact content (the same content that will be passed to `update_memory` in the next step).

**Important:**
- Use YAML format for Evals (the evals-prompt template produces YAML output)
- Use Markdown for all other artifacts
- If `docs/` already contains files with the same names, ask the user before overwriting
- If a write fails, inform the user but continue — remote persistence in the next step will still capture the artifact

Inform the user: "Local copies saved to docs/"

### Step 10 — Persist All Artifacts Remotely

Call `update_memory` four times — once per artifact:

| Artifact | `memory_type` | `title` | `source_role` |
|----------|--------------|---------|---------------|
| Rules.md | `document` | `"Rules.md — <project_name>"` | `"developer"` |
| Evals.md | `document` | `"Evals.md — <project_name>"` | `"developer"` |
| Requirements.md | `document` | `"Requirements.md — <project_name>"` | `"developer"` |
| Roadmap.md | `document` | `"Roadmap.md — <project_name>"` | `"developer"` |

For each call, set `content: { text: "<full artifact markdown>" }` and add relevant `tags` (e.g., `["rules", "standards"]` for Rules.md).

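Assembling the four payloads can be sketched as below; only the Rules.md tags are specified above, so the other tag sets are illustrative assumptions:

```python
# Tag sets other than Rules.md's are illustrative, not specified by the skill.
ARTIFACTS = {
    "Rules.md": ["rules", "standards"],
    "Evals.md": ["evals", "quality"],
    "Requirements.md": ["requirements"],
    "Roadmap.md": ["roadmap", "planning"],
}

def memory_payloads(project_name: str, contents: dict[str, str]) -> list[dict]:
    """Build one update_memory payload per artifact (Step 10)."""
    return [
        {
            "memory_type": "document",
            "title": f"{name} — {project_name}",
            "content": {"text": contents[name]},
            "tags": tags,
            "source_role": "developer",
        }
        for name, tags in ARTIFACTS.items()
    ]
```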

### Step 11 — Summarize

Tell the user:
- Project ID (so they can reference it later)
- Which artifacts were generated and persisted
- Local copies written to `docs/` (rules.md, evals.yaml, requirements.md, roadmap.md)
- How many MCP tool calls were made in total
- How to retrieve context in future sessions: `fetch_context` with `{ project_id, memory_type: "document" }`

---

## Context Assembly by Scenario

### Greenfield + SOW
Emphasize SOW deliverables and functional requirements when generating rules and requirements. The roadmap should sequence SOW milestones explicitly.

### Greenfield — No SOW
Prompt the user for key deliverables and target users before generating. Rules should be general-purpose but tailored to the tech stack described.

### Brownfield + SOW
Pack all repos first (Step 3). Fetch existing memories (Step 4) — prior rules and learnings are especially important. The SOW delta (what's changing vs. what already exists) should drive Requirements.md.

### Brownfield — No SOW
Pack all repos first. Spend extra time in conversation understanding the existing system before generating rules — ask about pain points, constraints, and what must not change.

---

## Error Handling

- If `pack_repo` fails for a URL: log the error, inform the user, and continue with the remaining URLs.
- If `update_memory` fails: retry once. If it still fails, present the artifact as a code block the user can save manually.
- If the user provides no description or SOW: ask at least 3 clarifying questions before attempting artifact generation.

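The retry-once behavior for `update_memory` can be sketched as a generic wrapper (hypothetical helper; the final failure is re-raised so the artifact can be presented for manual saving):

```python
def with_one_retry(call, *args, **kwargs):
    """Try the call; on failure, retry exactly once, then let the error propagate."""
    try:
        return call(*args, **kwargs)
    except Exception:
        return call(*args, **kwargs)
```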

---

## End-of-Workflow Memory Capture

After completing Step 11, suggest to the user:

> "You may have learned things during this session worth capturing. Run `/remember` to capture session learnings to docs/MEMORY.md, or `/push-memory` to sync existing local memories to the shared KB."

This is a suggestion only — do not automatically invoke the capture or push Skills.
@@ -0,0 +1,39 @@ package/skills/initialize-project/templates/evals-prompt.md
# Prompt Template: Generate Evals.md

When this template is used, substitute all `{placeholders}` with actual values, then generate the evals document as instructed.

---

You are a quality-engineering expert. Using the project rules below, create a set of structured evaluation criteria that can be used to verify compliance with those rules during code review, CI checks, or AI-assisted development sessions.

## Project Rules

{rules}

## Project Context

**Project Name:** {project_name}
**Description:** {description}

## Instructions

For EACH rule, produce one or more evals. Each eval must include:

- **id**: a slug identifier (e.g., `eval-validate-input-boundary`)
- **rule_ref**: the title or ID of the rule being evaluated
- **description**: what this eval checks
- **pass_criteria**: precise, observable conditions that indicate the rule is being followed
- **fail_criteria**: precise, observable conditions that indicate a violation
- **automation_hint**: whether this can be checked automatically (lint, test, static analysis) and how

Format the output as a YAML document with a top-level `evals:` list. Example structure:

```yaml
evals:
  - id: eval-validate-input-boundary
    rule_ref: Always validate external input at the boundary
    description: Checks that all external inputs are validated before use
    pass_criteria: Every controller method validates request body with a schema before processing
    fail_criteria: Business logic receives raw unvalidated input from request objects
    automation_hint: ESLint rule or custom AST check; unit tests covering invalid inputs
```

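A quick structural check of the generated evals, once the YAML is parsed into a list of dicts, could look like this sketch (helper name is illustrative):

```python
REQUIRED = {"id", "rule_ref", "description", "pass_criteria", "fail_criteria", "automation_hint"}

def missing_fields(evals: list[dict]) -> dict[str, set]:
    """Report which required keys are absent from each eval entry."""
    return {
        e.get("id", f"eval-{i}"): REQUIRED - e.keys()
        for i, e in enumerate(evals)
        if REQUIRED - e.keys()
    }
```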
@@ -0,0 +1,44 @@ package/skills/initialize-project/templates/requirements-prompt.md
# Prompt Template: Generate Requirements.md

When this template is used, substitute all `{placeholders}` with actual values, then generate the requirements document as instructed.

---

You are a product manager and solutions architect. Using the project information below, produce a detailed requirements document.

## Project Information

**Project Name:** {project_name}
**Description:** {description}
**Scope of Work:** {sow}
**Repositories:** {repo_urls}
**Additional Context:** {additional_context}

## Instructions

Produce a requirements document titled `# Requirements.md` with the following sections:

### 1. Executive Summary
A one-paragraph summary of what the project delivers and for whom.

### 2. Functional Requirements
List every distinct feature or capability. For each requirement use this format:

- **FR-001**: short title
  - **Priority**: Must / Should / Could (MoSCoW)
  - **Description**: what the system must do
  - **Acceptance Criteria**: measurable, testable conditions

### 3. Non-Functional Requirements
Cover: Performance, Security, Scalability, Reliability, Maintainability, Observability. Use the same format as FR-NNN, with the prefix NFR-.

### 4. Constraints and Assumptions
List known technical constraints, business constraints, and assumptions being made.

### 5. Out of Scope
Explicitly list what is NOT included in this project.

### 6. Glossary
Define key domain terms used throughout this document.

Format as a structured Markdown document. Number all requirements sequentially.
@@ -0,0 +1,44 @@ package/skills/initialize-project/templates/roadmap-prompt.md
# Prompt Template: Generate Roadmap.md

When this template is used, substitute all `{placeholders}` with actual values, then generate the roadmap document as instructed.

---

You are a senior engineering lead responsible for delivery planning. Using the project requirements below, produce a phased implementation roadmap.

## Requirements

{requirements}

## Project Context

**Project Name:** {project_name}
**Description:** {description}
**Repositories:** {repo_urls}

## Instructions

Produce a roadmap titled `# Roadmap.md` structured as 3–5 phases. For EACH phase include:

- **Phase N — Name**: a meaningful phase title (e.g., "Phase 1 — Foundation")
- **Goal**: one-sentence summary of what this phase achieves
- **Duration estimate**: calendar weeks or sprints
- **Deliverables**: concrete, shippable outputs
- **Requirements covered**: list the FR-NNN and NFR-NNN IDs addressed
- **Technical tasks**: engineering work breakdown (checklist format)
- **Dependencies**: what must be true before this phase can start
- **Success criteria**: how to know this phase is done

After the phased plan, include:

### Critical Path
The sequence of tasks where any delay directly delays the project.

### Risks and Mitigations
Top 5 risks in a table:

| Risk | Likelihood | Impact | Mitigation |
|------|-----------|--------|-----------|
| ... | High/Med/Low | High/Med/Low | ... |

Format as a structured Markdown document.
@@ -0,0 +1,34 @@ package/skills/initialize-project/templates/rules-prompt.md
# Prompt Template: Generate Rules.md

When this template is used, substitute all `{placeholders}` with actual project values, then generate the rules document as instructed.

---

You are a senior software architect. Based on the project context below, generate a comprehensive set of development rules and coding standards for this project.

## Project Context

**Project Name:** {project_name}
**Description:** {description}
**Scope of Work:** {sow}
**Repositories:** {repo_urls}
**Additional Context:** {additional_context}

## Instructions

Generate rules covering ALL of the following areas:

1. **Code Style** — naming conventions, file organisation, formatting
2. **Architecture Patterns** — module structure, dependency direction, layering
3. **Security** — input validation, secrets management, authentication patterns
4. **Testing** — unit test structure, coverage targets, mocking strategy
5. **Documentation** — inline comments, ADR conventions, README standards
6. **Error Handling** — error propagation, logging strategy, user-facing messages

For EACH rule provide:
- **title**: a short, actionable statement (e.g., "Always validate external input at the boundary")
- **description**: one-paragraph explanation
- **rationale**: why this rule matters for this specific project
- **examples**: 1–3 concrete code or configuration examples

Format the output as a Markdown document titled `# Rules.md` with one H2 heading per rule category and one H3 heading per rule.