prompt-forge-cc 1.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,89 @@
# Superpowers Output Format

How to structure Prompt Forge output when the target execution tool is Superpowers.

Superpowers' brainstorming phase is where quality is won or lost. A rich, pre-analyzed prompt with design considerations already surfaced lets brainstorming go deeper — exploring architectural alternatives, edge cases, and failure modes instead of asking "what does this feature do?"

---

## For Brainstorming Input

Structure as a rich feature brief with design considerations pre-loaded. The brainstorming skill will refine this, not extract it.

```
## Feature: [Name]

### Intent
[What I want to build and why — business context, user problem being solved]

### Technical landscape
- Stack: [framework, versions]
- Affected code: @[list of files that will be touched]
- Related implementations: @[similar feature already in codebase]
- Test setup: [framework, patterns in @test-file]

### Design considerations (for brainstorming to refine)

**Architecture:** [How this fits into the existing codebase structure.
Reference existing patterns. Flag any architectural decisions needed.]

**Security:** [Auth implications, input validation needs, data exposure risks.
Specific to this feature, not generic security advice.]

**Performance:** [Expected load, caching considerations, query concerns.
Reference specific DB queries or endpoints.]

**UX:** [What the user sees/experiences, error states, loading states.]

**Edge cases:** [Specific scenarios that could break, boundary conditions,
error handling requirements.]

### Testing strategy (for TDD planning)
- Must test: [critical paths — feeds TDD red phase]
- Edge cases to cover: [specific scenarios]
- What NOT to test: [things already covered]

### Research findings
[Best practices for this specific pattern + stack.
Known issues with current library versions.
Alternative approaches considered and why the chosen one is preferred.]

### Constraints
- Don't modify: @[protected files]
- Stay compatible with: [APIs, interfaces, contracts]
- Version notes: [deprecation warnings from research]
```

---

## For Direct Task Execution

When the task is small and well-defined enough to skip brainstorming, structure it so the planning and TDD phases have everything they need:

```
## Task: [Name]

[Clear description — what to do]

Files to modify: @[list]
Follow pattern: @[reference]
Test file: @[where tests should go, following @reference-test pattern]

Test first:
- RED: [what the failing test should assert]
- GREEN: [minimal implementation to pass]
- REFACTOR: [any cleanup needed]

Constraints: [what not to touch]
Verify: [command to run]
```

---

## Detection Signals

**Superpowers indicators:**
- Skills directory with superpowers skills (brainstorming/, test-driven-development/)
- `.claude-plugin/` with superpowers plugin.json
- User mentions "superpowers", "brainstorm", "/superpowers:", "TDD workflow"
- Agents directory with code-reviewer or similar
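
The file-system indicators above can be checked mechanically. A minimal sketch in TypeScript, assuming Node.js; the directory names come from the list above, and the function name and heuristic are illustrative, not the plugin's actual detection logic:

```typescript
import { existsSync } from "node:fs";
import { join } from "node:path";

// Heuristic check for the file-system signals listed above.
// A real detector might additionally scan user messages for
// "superpowers" or "/superpowers:" mentions.
function looksLikeSuperpowers(projectRoot: string): boolean {
  const signals = [
    join(projectRoot, "skills", "brainstorming"),
    join(projectRoot, "skills", "test-driven-development"),
    join(projectRoot, ".claude-plugin", "plugin.json"),
  ];
  return signals.some((path) => existsSync(path));
}
```

Any one signal is treated as sufficient; combining signals with a confidence score would be a reasonable refinement.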
@@ -0,0 +1,317 @@
# Task-Type Prompt Blueprints

Each task type has a different prompt structure because each type requires different thinking from Claude Code. The prompt shape signals the right mode.

Every blueprint includes a docs-check preamble — Claude Code should read the relevant project CLAUDE.md before starting work.

---

## 1. Bug Fix / Debugging

**Thinking mode:** Investigate first, understand the root cause, then fix. Never guess-and-patch.

```
Before starting, read @CLAUDE.md for project conventions.

## Bug report
[What's happening — symptoms, error messages, reproduction steps]

## Expected behavior
[What should happen instead]

## Affected code
[Specific files/functions where the bug likely lives — grounded from code analysis]

## Investigation steps
1. First, reproduce the issue by [specific steps/commands]
2. Read and understand the relevant code path: [grounded file list]
3. Identify the root cause — explain what's wrong and why before writing any fix
4. Check if this same bug pattern exists elsewhere in the codebase
5. Implement the fix
6. Verify the fix resolves the issue without introducing regressions

## Constraints
- Do NOT apply a surface-level patch — find and fix the root cause
- Do NOT modify [files/modules that should stay untouched]
- Preserve existing behavior for [related functionality]

## Verification
- Run `[actual test command from project]` — all tests must pass
- Specifically test: [the exact scenario that was broken]
- Run `[lint/typecheck command]` — no new warnings
```

---

## 2. New Feature / Implementation

**Thinking mode:** Follow existing patterns, build incrementally, verify as you go.

```
Before starting, read @CLAUDE.md for project conventions.

## Context
[What part of the app, current state, why this feature is needed — business context]

## Feature requirements
[Clear description of what to build]

## Pattern reference
Follow the existing implementation pattern in @[reference file] for:
- [Routing / API structure]
- [Validation approach]
- [Error handling]
- [Response format]
- [Test structure]

## Implementation plan
1. [Step-by-step breakdown — each step is a verifiable unit]
2. After each step, run `[test command]` to verify nothing broke
3. [Continue steps...]

## Scope boundaries
- IN scope: [explicit list]
- OUT of scope: [explicit list]
- Do NOT refactor existing code unless directly necessary

## Technical notes
- [Version-specific guidance from web research]
- [Dependency notes]
- [Architecture notes from code analysis]

## Done criteria
- [ ] Feature works as described
- [ ] Tests written and passing: `[test command]`
- [ ] Types/interfaces updated
- [ ] No lint errors: `[lint command]`
- [ ] Follows existing patterns — code should look like it belongs
```

---

## 3. Refactor / Code Improvement

**Thinking mode:** Understand deeply, change structure without changing behavior, verify continuously.

```
Before starting, read @CLAUDE.md for project conventions.

## Current state
[What the code looks like now — specific files, the problem with the current structure]

## Desired state
[What the code should look like after — structural improvement, not new behavior]

## The rule: behavior must not change
This is a refactor, not a feature. External behavior must remain identical. Every existing test must continue to pass at every step.

## Files in scope
[Explicit grounded list]

## Files NOT in scope — do not modify
[Explicit list]

## Approach
1. First, run `[test command]` to establish a green baseline
2. [First refactor step — small, verifiable]
3. Run tests again — must still pass
4. [Next step]
5. Run tests again
6. [Continue — never more than one structural change between test runs]

## Verification
- All existing tests pass: `[test command]`
- No new lint warnings: `[lint command]`
- No type errors: `[typecheck command]`
- Git diff should show structural changes only — no behavior changes
```

---

## 4. Migration / Upgrade

**Thinking mode:** Incremental, backwards-compatible, rollback-aware. Never big-bang.

```
Before starting, read @CLAUDE.md for project conventions.

## Migration overview
[What's being migrated — from X to Y, why, what's at stake]

## Current stack
- [Framework/library]: [current version]
- [Relevant config files]
- [Current patterns in use]

## Target stack
- [Framework/library]: [target version]
- [New patterns to adopt]
- [Deprecations to address]

## Migration strategy: incremental, not big-bang

### Phase 1: Preparation
1. [Setup step — install new dependency alongside old]
2. Verify existing tests still pass: `[test command]`

### Phase 2: Gradual migration
1. [Migrate one module as a pilot]
2. Test thoroughly: `[test command]`
3. [Continue one at a time]

### Phase 3: Cleanup
1. [Remove old dependencies/code]
2. [Update configs]
3. Final full test run

## Known gotchas
[Version-specific issues from web research — breaking changes, renamed APIs]

## Verification
- Full test suite passes at each phase: `[test command]`
- No deprecation warnings from [target framework]
```

---

## 5. Performance Optimization

**Thinking mode:** Measure first, optimize the bottleneck, verify improvement with numbers.

```
Before starting, read @CLAUDE.md for project conventions.

## Performance problem
[What's slow — specific endpoint, operation, or page. Include metrics if available]

## Affected code
[Grounded file paths and functions]

## Investigation first — do NOT optimize before profiling
1. Read and trace the full execution path
2. Identify the actual bottleneck — DB queries? Network calls? Computation? Memory?
3. If there are N+1 queries, count them
4. Explain the bottleneck and proposed optimization before implementing

## Optimization constraints
- Do NOT change external API/behavior
- Do NOT add new dependencies without mentioning it first
- Readability over cleverness

## Verification
- Run existing tests: `[test command]` — must pass
- [Benchmark command to prove improvement]
- Check that related endpoints aren't negatively affected
```

---

## 6. Security Hardening

**Thinking mode:** Assume adversarial input. Audit systematically, fix comprehensively.

```
Before starting, read @CLAUDE.md for project conventions.

## Security concern
[What needs hardening — specific area, known vulnerability, or audit scope]

## Systematic audit checklist
For each file in scope, check:
1. **Input validation** — Are all user inputs validated and sanitized?
2. **Authentication** — Are protected routes properly guarded?
3. **Authorization** — Can users access only their own resources?
4. **Data exposure** — Are sensitive fields stripped from responses?
5. **Secrets handling** — Are secrets in env vars, not hardcoded?
6. **SQL/NoSQL injection** — Are queries parameterized?
7. **XSS/CSRF** — Are outputs escaped? CSRF tokens in place?
8. **Dependencies** — Run `npm audit` or equivalent

## Fix approach
- Explain the risk before fixing each vulnerability
- Use the framework's built-in security features where available

## Verification
- All existing tests pass: `[test command]`
- `npm audit` shows no high/critical vulnerabilities
```

---

## 7. Investigation / Understanding Code

**Thinking mode:** Read, trace, explain. No modifications unless explicitly asked.

```
Before starting, read @CLAUDE.md for project context.

## What I want to understand
[The question — how does X work? Why does Y happen?]

## Starting points
[Grounded file paths and functions]

## Investigation approach
1. Read the relevant code paths starting from [entry point]
2. Trace the execution flow — what calls what, in what order
3. Map dependencies
4. Identify non-obvious behavior

## Output format
- High-level summary (2-3 sentences)
- Step-by-step flow walkthrough
- Call out anything surprising or risky

## Rules
- READ ONLY — do not modify any files
- Don't guess — if something is ambiguous, say so
```

---

## 8. Testing / Test Coverage

**Thinking mode:** Understand the code's contract, then write tests that verify it — including edge cases.

```
Before starting, read @CLAUDE.md for testing conventions.

## What to test
[Specific module/function/endpoint — grounded paths]

## Existing test setup
- Test framework: [from package.json]
- Test location: [where tests live]
- Existing examples: follow patterns in @[reference test file]

## Coverage requirements
1. **Happy path** — [normal expected behavior]
2. **Edge cases** — [specific edge cases from code analysis]
3. **Error cases** — [what should fail and how]
4. **Boundary conditions** — [empty arrays, null values, max lengths]

## Constraints
- Do NOT modify the source code — only add/modify test files
- Use existing test utilities in @[test helpers file]

## Verification
- All new tests pass: `[test command]`
- All existing tests still pass
```

---

## Task Classification Guide

| Signal | Task Type |
|--------|-----------|
| "bug", "broken", "not working", "error", "fix" | Bug Fix |
| "add", "build", "create", "implement", "new" | New Feature |
| "refactor", "clean up", "reorganize", "simplify" | Refactor |
| "migrate", "upgrade", "update to", "switch from X to Y" | Migration |
| "slow", "optimize", "performance", "speed up", "cache" | Performance |
| "secure", "vulnerability", "auth", "injection", "audit" | Security |
| "how does", "explain", "understand", "trace", "what does" | Investigation |
| "test", "coverage", "spec", "write tests for" | Testing |

If a task spans multiple types, use the **primary** type as the base and incorporate relevant sections from the secondary type. For compound tasks, suggest separate prompts.
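
As a sketch, the signal table can be turned into a naive first-match classifier. The function name and plain substring matching are illustrative only; real routing would likely want word-boundary matching and scoring across multiple signals rather than first match wins:

```typescript
// Signals in table order; the first type with a matching keyword wins,
// which approximates the "primary type" rule for mixed requests.
const SIGNALS: [string, string[]][] = [
  ["Bug Fix", ["bug", "broken", "not working", "error", "fix"]],
  ["New Feature", ["add", "build", "create", "implement", "new"]],
  ["Refactor", ["refactor", "clean up", "reorganize", "simplify"]],
  ["Migration", ["migrate", "upgrade", "update to", "switch from"]],
  ["Performance", ["slow", "optimize", "performance", "speed up", "cache"]],
  ["Security", ["secure", "vulnerability", "auth", "injection", "audit"]],
  ["Investigation", ["how does", "explain", "understand", "trace", "what does"]],
  ["Testing", ["test", "coverage", "spec", "write tests for"]],
];

function classifyTask(request: string): string | null {
  const text = request.toLowerCase();
  for (const [type, keywords] of SIGNALS) {
    if (keywords.some((k) => text.includes(k))) return type;
  }
  return null; // no signal matched; ask the user instead of guessing
}
```

Substring matching has obvious false positives ("addition" contains "add"), which is why this stays a triage heuristic, not a decision-maker.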
@@ -0,0 +1,86 @@
# Claude Adapter

Formats Prompt Forge output for Anthropic Claude models (Claude Code, Claude API, Claude chat).

## Role Definition

Claude responds best to direct, structured instructions with explicit constraints. It follows XML tags reliably and benefits from chain-of-thought prompting.

```
You are a senior software engineer working on this codebase. Follow instructions precisely.
Read all referenced files before making changes. Verify your work after each step.
```

## Instruction Structure

Claude excels with this layout:

1. **Preamble** — Read project context first (`@CLAUDE.md`, referenced files)
2. **Context block** — Background, stack, current state (use XML `<context>` tags for complex prompts)
3. **Task block** — Clear, imperative instructions (use XML `<task>` tags)
4. **Constraints block** — What NOT to do (use XML `<constraints>` tags)
5. **Verification block** — Feedback loop commands (test, lint, typecheck)

### XML Tag Usage

Claude reliably parses XML structure. Use it for multi-section prompts:

```xml
<context>
Express 4.18 + Prisma 5.x + PostgreSQL. Auth via JWT in @src/middleware/auth.ts.
</context>

<task>
Add rate limiting to /api/login. Limit to 5 attempts per IP per 15-minute window.
</task>

<constraints>
- Use existing Redis connection in @src/lib/redis.ts
- Do not add new dependencies
- Preserve existing error response format
</constraints>

<verification>
Run `npm test` after changes. Fix any failures before completing.
Run `npm run lint` — no new warnings.
</verification>
```

For simple prompts, skip XML — plain markdown with clear sections works fine.
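
The five-block layout can also be assembled programmatically when prompts are generated rather than hand-written. A hedged sketch; the helper name and input shape are invented for illustration and are not part of any adapter API:

```typescript
// Assembles the XML-tagged prompt layout described above.
type PromptSections = {
  context?: string;
  task: string;
  constraints?: string[];
  verification?: string[];
};

function buildClaudePrompt(s: PromptSections): string {
  const parts: string[] = [];
  if (s.context) parts.push(`<context>\n${s.context}\n</context>`);
  parts.push(`<task>\n${s.task}\n</task>`);
  if (s.constraints?.length) {
    const items = s.constraints.map((c) => `- ${c}`).join("\n");
    parts.push(`<constraints>\n${items}\n</constraints>`);
  }
  if (s.verification?.length) {
    parts.push(`<verification>\n${s.verification.join("\n")}\n</verification>`);
  }
  return parts.join("\n\n");
}
```

Keeping constraints in a single generated block also enforces the "grouped together" guidance below by construction.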

## Constraint Formatting

Claude respects constraints best when they are:
- **Explicit and negative** — "Do NOT modify auth.ts" over "be careful with auth.ts"
- **Positioned after the task** — placing constraints after the instructions makes them harder to overlook
- **Grouped together** — one constraints section, not scattered throughout
- **Explained briefly** — "Do NOT use class components — project migrated to hooks in Q3" is stronger than "Do NOT use class components"

## Output Expectations

Tell Claude what format to deliver results in:

```
After implementing:
1. Run `npm test` — report pass/fail
2. Run `npm run typecheck` — report any errors
3. Summarize what you changed and why
```

Claude benefits from explicit feedback loops — "run X, then fix any issues" produces markedly better results than "make sure it works."

## @ Reference Convention

Claude Code uses `@filename` to reference files. Always use this syntax:
- `@src/routes/orders.ts` not "the orders route file"
- `@CLAUDE.md` not "the project conventions"

## Mode Adjustments for Claude

| Mode | Claude-Specific Emphasis |
|------|-------------------------|
| build | Reference existing patterns with @ syntax, step-by-step with test gates |
| audit | XML `<constraints>` block, explicit negative instructions, checklist format |
| debug | "Explain what's wrong before fixing" — Claude naturally supports chain-of-thought |
| research | "Read these files, then explain" — leverage Claude's strong comprehension |
| optimize | "Profile first, propose second, implement third" — explicit ordering |
@@ -0,0 +1,89 @@
# Gemini Adapter

Formats Prompt Forge output for Google Gemini models (Gemini CLI, Gemini API, AI Studio).

## Role Definition

Gemini responds well to clear role framing and benefits from structured markdown. It handles long context well and supports grounding with Google Search.

```
Role: Senior software engineer performing a focused task on this codebase.
Approach: Read all relevant files first, then plan your approach, then implement.
Standard: Production-quality code that follows existing project patterns.
```

## Instruction Structure

Gemini works best with this layout:

1. **Role and approach** — Set the persona and methodology upfront
2. **Context** — Background information, stack details, file references
3. **Task** — Clear, numbered steps with explicit ordering
4. **Rules** — Constraints as a bulleted list with "MUST" / "MUST NOT" language
5. **Output format** — What to deliver and how to verify

### Markdown Structure

Gemini parses markdown headers and lists reliably. Use them for structure:

```markdown
## Context
Express 4.18 app with Prisma 5.x and PostgreSQL.
Auth handled by JWT middleware in `src/middleware/auth.ts`.

## Task
1. Add rate limiting to `/api/login`
2. Limit to 5 attempts per IP per 15-minute window
3. Use the existing Redis connection in `src/lib/redis.ts`

## Rules
- MUST NOT add new dependencies
- MUST NOT change the existing error response format
- MUST follow the middleware pattern in `src/middleware/auth.ts`

## Verification
- Run `npm test` — all tests pass
- Run `npm run lint` — no new warnings
```

+
49
+ ## Constraint Formatting
50
+
51
+ Gemini responds best to constraints when they use:
52
+ - **MUST / MUST NOT** language — stronger signal than "don't" or "avoid"
53
+ - **Rules section** — separated clearly from the task itself
54
+ - **Positive + negative pairs** — "MUST use Prisma client, MUST NOT write raw SQL"
55
+ - **Numbered priority** — if constraints conflict, number them by importance
56
+
57
+ ## Output Expectations
58
+
59
+ Gemini benefits from explicit output structure requests:
60
+
61
+ ```
62
+ ## Expected Output
63
+ 1. Modified files with changes explained
64
+ 2. Test results from `npm test`
65
+ 3. Summary of approach taken and any trade-offs
66
+ ```
67
+
## File Reference Convention

Gemini CLI uses backtick paths. Always:
- Write `src/routes/orders.ts`, not "the orders route file"
- Include relative paths from the project root

## Grounding Note

Gemini has native Google Search grounding. For research-heavy prompts, you can include:
```
Use Google Search to verify current best practices for [topic] before implementing.
```

## Mode Adjustments for Gemini

| Mode | Gemini-Specific Emphasis |
|------|------------------------|
| build | Numbered step-by-step, explicit pattern references with file paths |
| audit | MUST/MUST NOT rules list, checklist output format |
| debug | "Analyze → Hypothesize → Verify → Fix" — explicit methodology steps |
| research | Leverage search grounding, ask for alternatives with trade-off tables |
| optimize | Request measurement data in structured table format before/after |
@@ -0,0 +1,100 @@
# OpenAI Adapter

Formats Prompt Forge output for OpenAI models (GPT-4o, o1/o3, Codex, ChatGPT).

## Role Definition

OpenAI models respond strongly to system-level role definition. Set the persona clearly:

```
You are a senior software engineer. You write clean, tested, production-ready code.
You follow existing project patterns exactly. You verify your work before reporting completion.
```

## Instruction Structure

OpenAI models work best with this layout:

1. **System context** — Role + project background (maps to the system message in the API)
2. **User task** — Direct, imperative instructions
3. **Constraints** — Boundaries and prohibitions as a clear list
4. **Examples** — Few-shot examples when pattern-following is critical
5. **Output specification** — What format to deliver in

### System/User Split

For API usage, the structure maps naturally to message roles:

```
[System]
You are a senior engineer working on an Express 4.18 + Prisma 5.x + PostgreSQL app.
Project conventions: services are class-based, routes use Zod validation,
errors follow AppError pattern in src/lib/errors.ts.

[User]
Add rate limiting to /api/login. Limit to 5 attempts per IP per 15-minute window.

Requirements:
- Use existing Redis connection in src/lib/redis.ts
- Do not add new dependencies
- Preserve existing error response format
- Follow middleware pattern in src/middleware/auth.ts

After implementing, run `npm test` and fix any failures.
```

For chat/agent usage, combine into a single structured prompt with clear sections.
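
The [System]/[User] split above maps directly onto the role-tagged message array that chat-style APIs accept. A minimal sketch; the type and helper names are invented, and only the role/content message shape is the standard part:

```typescript
// Maps the System/User split onto a chat-style messages array.
type ChatMessage = { role: "system" | "user"; content: string };

function toMessages(system: string, user: string): ChatMessage[] {
  return [
    { role: "system", content: system },
    { role: "user", content: user },
  ];
}

// Placeholder prompt strings for illustration.
const messages = toMessages(
  "You are a senior engineer working on an Express + Prisma app.",
  "Add rate limiting to /api/login. After implementing, run `npm test`.",
);
```

For single-prompt chat usage, the same two strings would simply be concatenated with headed sections instead of split across roles.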

## Constraint Formatting

OpenAI models respect constraints best when they are:
- **Listed explicitly** — bullet points, not embedded in prose
- **Bolded when negative** — "**Do NOT** modify auth.ts"
- **Placed after the task** — instructions first, then boundaries
- **Reinforced with few-shot examples** — if a constraint is critical, show an example of correct behavior

## Output Expectations

OpenAI models benefit from explicit format requests:

```
Output format:
1. List of files changed
2. For each file: what changed and why
3. Test results
4. Any concerns or follow-up items
```

For code generation, specify: "Write complete, runnable code — not pseudocode or snippets."
+
70
+ ## File Reference Convention
71
+
72
+ Use standard path notation:
73
+ - `src/routes/orders.ts` not "the orders file"
74
+ - Include full relative paths from project root
75
+ - For multi-file tasks, list all files upfront
76
+
77
+ ## Few-Shot Patterns
78
+
79
+ OpenAI models benefit significantly from examples. When the task requires following a specific pattern:
80
+
81
+ ```
82
+ Example of the pattern to follow (from src/routes/orders.ts):
83
+ - Router definition at top
84
+ - Zod schema for validation
85
+ - Auth middleware in chain
86
+ - Service call, not direct DB access
87
+ - JSON response with consistent format
88
+
89
+ Now apply this same pattern to create src/routes/payments.ts.
90
+ ```
91
+
92
+ ## Mode Adjustments for OpenAI
93
+
94
+ | Mode | OpenAI-Specific Emphasis |
95
+ |------|------------------------|
96
+ | build | Few-shot examples from codebase, explicit pattern references |
97
+ | audit | Bold **DO NOT** constraints, checklist output with pass/fail |
98
+ | debug | Chain-of-thought: "Think step by step about what could cause this" |
99
+ | research | Request structured comparison tables, pros/cons format |
100
+ | optimize | Ask for profiling analysis before implementation, metrics-focused |