@hustle-together/api-dev-tools 1.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,192 @@
---
description: Analyze GitHub issue and create TDD implementation plan
argument-hint: [optional-issue-number]
---

Analyze a GitHub issue and create a TDD implementation plan.

## General Guidelines

### Output Style

- **Never explicitly mention TDD** in code, comments, commits, PRs, or issues
- Write natural, descriptive code without meta-commentary about the development process
- The code should speak for itself - TDD is the process, not the product

Process:

1. Get Issue Number

   - From the branch name, if it contains an issue number
   - Patterns: issue-123, 123-feature, feature/123, fix/123
   - Or from the arguments provided here: $ARGUMENTS
   - If not found: ask the user

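The branch-name extraction above can be sketched as a small helper; this is a minimal illustration of the listed patterns, not project code (the function name and regexes are assumptions):

```typescript
// Hypothetical helper matching the documented branch patterns:
// issue-123, 123-feature, feature/123, fix/123.
function issueNumberFromBranch(branch: string): number | null {
  const patterns = [
    /^issue-(\d+)/,    // issue-123
    /^(\d+)-/,         // 123-feature
    /^[a-z]+\/(\d+)/,  // feature/123, fix/123
  ];
  for (const p of patterns) {
    const m = branch.match(p);
    if (m) return Number(m[1]);
  }
  return null; // caller falls back to $ARGUMENTS or asks the user
}
```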
2. Fetch Issue

   Try to fetch the issue using GitHub MCP (the mcp__github__issue_read tool).

   If GitHub MCP is not configured, show:

   ```
   GitHub MCP not configured!
   See: https://github.com/modelcontextprotocol/servers/tree/main/src/github
   Trying GitHub CLI fallback...
   ```

   Then try `gh issue view [ISSUE_NUMBER] --json` as a fallback.

3. Analyze and Plan

   Summarize the issue and requirements, then:

## Discovery Phase

Understand the requirement by asking (use AskUserQuestion if needed):

**Problem Statement**

- What problem does this solve?
- Who experiences this problem?
- What's the current pain point?

**Desired Outcome**

- What should happen after this is built?
- How will users interact with it?
- What does success look like?

**Scope & Constraints**

- What's in scope vs. out of scope?
- Any technical constraints?
- Dependencies on other systems/features?

**Context Check**

- Search the codebase for related features/modules
- Check for existing test files that might be relevant

## TDD Fundamentals

### The TDD Cycle

The foundation of TDD is the Red-Green-Refactor cycle:

1. **Red Phase**: Write ONE failing test that describes desired behavior

   - The test must fail for the RIGHT reason (not syntax/import errors)
   - Only one test at a time - this is critical for TDD discipline
   - Exception: for browser-level tests or expensive setup (e.g., Storybook `*.stories.tsx`), group multiple assertions within a single test block to avoid redundant setup - but only when adding assertions to an existing interaction flow. If new user interactions are required, still create a new test. Split files by category if they exceed ~1000 lines.
   - **Adding a single test to a test file is ALWAYS allowed** - no prior test output needed
   - Starting TDD for a new feature is always valid, even if test output shows unrelated work
   - For DOM-based tests, use `data-testid` attributes to select elements rather than CSS classes, tag names, or text content
   - Avoid hard-coded timeouts, whether as `sleep()` calls or explicit values like `timeout: 5000`; use proper async patterns (`waitFor`, `findBy*`, event-based sync) instead, and rely on global test configs for timeout settings

2. **Green Phase**: Write MINIMAL code to make the test pass

   - Implement only what's needed for the current failing test
   - No anticipatory coding or extra features
   - Address the specific failure message

3. **Refactor Phase**: Improve code structure while keeping tests green

   - Only allowed when relevant tests are passing
   - Requires proof that tests have been run and are green
   - Applies to BOTH implementation and test code
   - No refactoring with failing tests - fix them first

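The timeout guidance in the Red Phase (prefer `waitFor`/`findBy*` over hard-coded sleeps) boils down to polling for a condition instead of sleeping a fixed time. A minimal sketch of that pattern, assuming a hypothetical helper name; real suites should use the test framework's own utilities:

```typescript
// Illustrative polling helper in the spirit of testing-library's waitFor.
// Defaults are assumptions; real suites take timeouts from global config.
async function waitUntil<T>(
  probe: () => T | undefined,
  intervalMs = 10,
  timeoutMs = 1000,
): Promise<T> {
  const deadline = Date.now() + timeoutMs;
  for (;;) {
    const value = probe();
    if (value !== undefined) return value; // condition met: return immediately
    if (Date.now() > deadline) throw new Error("condition not met in time");
    await new Promise((r) => setTimeout(r, intervalMs));
  }
}
```

Unlike a fixed `sleep(5000)`, this resolves as soon as the condition holds, so tests stay fast and the timeout is a safety net rather than a built-in delay.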
### Core Violations

1. **Multiple Test Addition**

   - Adding more than one new test at once
   - Exception: initial test file setup or extracting shared test utilities

2. **Over-Implementation**

   - Code that exceeds what's needed to pass the current failing test
   - Adding untested features, methods, or error handling
   - Implementing multiple methods when the test only requires one

3. **Premature Implementation**

   - Adding implementation before a test exists and fails properly
   - Adding implementation without running the test first
   - Refactoring when tests haven't been run or are failing

### Critical Principle: Incremental Development

Each step in TDD should address ONE specific issue:

- Test fails "not defined" → Create empty stub/class only
- Test fails "not a function" → Add method stub only
- Test fails with assertion → Implement minimal logic only

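The failure-to-next-step mapping above can be sketched as a tiny classifier; the function and message patterns are illustrative (typical Node error texts), not part of any project API:

```typescript
// Hypothetical mapping from a test-failure message to the single next step,
// mirroring the incremental-development list above.
type NextStep = "create stub" | "add method stub" | "implement minimal logic";

function nextTddStep(failureMessage: string): NextStep {
  if (/is not defined|Cannot find module/.test(failureMessage)) {
    return "create stub"; // missing symbol or import: add an empty stub only
  }
  if (/is not a function/.test(failureMessage)) {
    return "add method stub"; // missing method: add a stub only
  }
  return "implement minimal logic"; // assertion failure: behavior is missing
}
```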
### Optional Pre-Phase: Spike Phase

In rare cases where the problem space, interface, or expected behavior is unclear, a **Spike Phase** may be used **before the Red Phase**.
This phase is **not part of the regular TDD workflow** and must only be applied under exceptional circumstances.

- The goal of a Spike is **exploration and learning**, not implementation.
- The code written during a Spike is **disposable** and **must not** be merged or reused directly.
- Once sufficient understanding is achieved, all spike code is discarded, and normal TDD resumes starting from the **Red Phase**.
- A Spike is justified only when it is impossible to define a meaningful failing test due to technical uncertainty or unknown system behavior.

### General Information

- Sometimes the test output shows that no tests have been run when a new test is failing due to a missing import or constructor. In such cases, allow the agent to create simple stubs. If the agent is stuck, ask whether they forgot to create a stub.
- It is never allowed to introduce new logic without evidence of relevant failing tests. However, stubs and simple implementations that make imports and test infrastructure work are fine.
- In the refactor phase, it is perfectly fine to refactor both test and implementation code. That said, completely new functionality is not allowed. Types, cleanup, abstractions, and helpers are allowed as long as they do not introduce new behavior.
- Adding types, interfaces, or a constant to replace magic values is perfectly fine during refactoring.
- When blocking the agent, provide helpful directions so that they do not get stuck.


## 🛡 Project Rules (Injected into every command)

1. **NO BROKEN BUILDS:**
   - Run `pnpm test` before every `/commit`
   - Ensure all tests pass
   - Fix any type errors immediately

2. **API DEVELOPMENT:**
   - All new APIs MUST have Zod request/response schemas
   - All APIs MUST be documented in both:
     - OpenAPI spec ([src/lib/openapi/](src/lib/openapi/))
     - API test manifest ([src/app/api-test/api-tests-manifest.json](src/app/api-test/api-tests-manifest.json))
   - Test ALL parameters and edge cases
   - Include code examples and real-world outputs

3. **TDD WORKFLOW:**
   - ALWAYS use the /red → /green → /refactor cycle
   - NEVER write implementation without a failing test first
   - Use /cycle for feature development
   - Use characterization tests for refactoring

4. **API KEY MANAGEMENT:**
   - Support three loading methods:
     - Server environment variables
     - NEXT_PUBLIC_ variables (client-side)
     - Custom headers (X-OpenAI-Key, X-Anthropic-Key, etc.)
   - Never hardcode API keys
   - Always validate key availability before use

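The three key-loading methods above could be resolved in order like the sketch below; the header name is from these rules, but the env var names and functions are assumptions for illustration:

```typescript
// Sketch: resolve an API key from header, then server env, then client env.
function resolveApiKey(
  headers: Record<string, string | undefined>,
  env: Record<string, string | undefined>,
): string | undefined {
  return (
    headers["x-openai-key"] ??          // custom request header (lowercased)
    env["OPENAI_API_KEY"] ??            // server environment variable (assumed name)
    env["NEXT_PUBLIC_OPENAI_API_KEY"]   // client-exposed variable (assumed name)
  );
}

// Validate availability before use, per the rule above.
function requireApiKey(
  headers: Record<string, string | undefined>,
  env: Record<string, string | undefined>,
): string {
  const key = resolveApiKey(headers, env);
  if (!key) throw new Error("No OpenAI API key configured");
  return key;
}
```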
5. **COMPREHENSIVE TESTING:**
   - When researching APIs, read the actual implementation code
   - Discover ALL possible parameters (not just documented ones)
   - Test with various parameter combinations
   - Document custom headers, query params, request/response schemas
   - Include validation rules and testing notes

6. **NO UI BLOAT:**
   - This is an API project with minimal frontend
   - Only keep necessary test/documentation interfaces
   - Delete unused components immediately
   - No unnecessary UI libraries or features

7. **DOCUMENTATION:**
   - If you change an API, you MUST update:
     - OpenAPI spec
     - api-tests-manifest.json
     - Code examples
     - Testing notes
   - Document expected behavior and edge cases
   - Include real-world output examples
@@ -0,0 +1,168 @@
---
description: Create implementation plan from feature/requirement with PRD-style discovery and TDD acceptance criteria
argument-hint: <feature/requirement description or GitHub issue URL/number>
---

# Plan: PRD-Informed Task Planning for TDD

Create a structured implementation plan that bridges product thinking (PRD) with test-driven development.

## General Guidelines

### Output Style

- **Never explicitly mention TDD** in code, comments, commits, PRs, or issues
- Write natural, descriptive code without meta-commentary about the development process
- The code should speak for itself - TDD is the process, not the product

## Input

$ARGUMENTS

(If no input is provided, check the conversation context)

## Input Processing

The input can be one of:

1. **GitHub Issue URL** (e.g., `https://github.com/owner/repo/issues/123`)
2. **GitHub Issue Number** (e.g., `#123` or `123`)
3. **Feature Description** (e.g., "Add user authentication")
4. **Empty** - use conversation context

### GitHub Issue Integration

If the input looks like a GitHub issue:

**Step 1: Extract Issue Number**

- From a URL: extract owner/repo/number
- From a number: try to infer the repo from the git remote
- From the branch name: check patterns like `issue-123`, `123-feature`, `feature/123`

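The three issue-reference forms above can be sketched as a small parser; the function name and types are hypothetical, only the input forms come from this command:

```typescript
// Hypothetical parser for a GitHub issue URL, "#123", or bare "123".
type IssueRef = { owner?: string; repo?: string; number: number };

function parseIssueRef(input: string): IssueRef | null {
  const url = input.match(/^https:\/\/github\.com\/([^/]+)\/([^/]+)\/issues\/(\d+)$/);
  if (url) return { owner: url[1], repo: url[2], number: Number(url[3]) };
  const bare = input.match(/^#?(\d+)$/);
  if (bare) return { number: Number(bare[1]) }; // repo inferred from git remote
  return null; // treat as a free-form feature description
}
```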
**Step 2: Fetch Issue**
Try GitHub MCP first:

- If available: use `mcp__github__issue_read` to fetch issue details
- If not available: show the following message and try `gh issue view <number>`

```
GitHub MCP not configured!
See: https://github.com/modelcontextprotocol/servers/tree/main/src/github
Trying GitHub CLI fallback...
```

**Step 3: Use Issue as Discovery Input**

- Title → Feature name
- Description → Problem statement and context
- Labels → Type/priority hints
- Comments → Additional requirements and discussion
- Linked issues → Dependencies

Extract from the GitHub issue:

- Problem statement and context
- Acceptance criteria (if present)
- Technical notes (if present)
- Related issues/dependencies

## Process

### Discovery Phase

Understand the requirement by asking (use AskUserQuestion if needed):

**Problem Statement**

- What problem does this solve?
- Who experiences this problem?
- What's the current pain point?

**Desired Outcome**

- What should happen after this is built?
- How will users interact with it?
- What does success look like?

**Scope & Constraints**

- What's in scope vs. out of scope?
- Any technical constraints?
- Dependencies on other systems/features?

**Context Check**

- Search the codebase for related features/modules
- Check for existing test files that might be relevant

## Key Principles

**From PRD World:**

- Start with user problems, not solutions
- Define success criteria upfront
- Understand constraints and scope

**From TDD World:**

- Make acceptance criteria test-ready
- Break work into small, testable pieces
- Each task should map to test(s)

## Integration with Other Commands

- **Before /plan**: Use `/spike` if you need technical exploration first
- **After /plan**: Use `/red` to start TDD on the first task


## 🛡 Project Rules (Injected into every command)

1. **NO BROKEN BUILDS:**
   - Run `pnpm test` before every `/commit`
   - Ensure all tests pass
   - Fix any type errors immediately

2. **API DEVELOPMENT:**
   - All new APIs MUST have Zod request/response schemas
   - All APIs MUST be documented in both:
     - OpenAPI spec ([src/lib/openapi/](src/lib/openapi/))
     - API test manifest ([src/app/api-test/api-tests-manifest.json](src/app/api-test/api-tests-manifest.json))
   - Test ALL parameters and edge cases
   - Include code examples and real-world outputs

3. **TDD WORKFLOW:**
   - ALWAYS use the /red → /green → /refactor cycle
   - NEVER write implementation without a failing test first
   - Use /cycle for feature development
   - Use characterization tests for refactoring

4. **API KEY MANAGEMENT:**
   - Support three loading methods:
     - Server environment variables
     - NEXT_PUBLIC_ variables (client-side)
     - Custom headers (X-OpenAI-Key, X-Anthropic-Key, etc.)
   - Never hardcode API keys
   - Always validate key availability before use

5. **COMPREHENSIVE TESTING:**
   - When researching APIs, read the actual implementation code
   - Discover ALL possible parameters (not just documented ones)
   - Test with various parameter combinations
   - Document custom headers, query params, request/response schemas
   - Include validation rules and testing notes

6. **NO UI BLOAT:**
   - This is an API project with minimal frontend
   - Only keep necessary test/documentation interfaces
   - Delete unused components immediately
   - No unnecessary UI libraries or features

7. **DOCUMENTATION:**
   - If you change an API, you MUST update:
     - OpenAPI spec
     - api-tests-manifest.json
     - Code examples
     - Testing notes
   - Document expected behavior and edge cases
   - Include real-world output examples
package/commands/pr.md ADDED
@@ -0,0 +1,122 @@
---
description: Creates a pull request using GitHub MCP
argument-hint: [optional-pr-title-and-description]
---

# Create Pull Request

## General Guidelines

### Output Style

- **Never explicitly mention TDD** in code, comments, commits, PRs, or issues
- Write natural, descriptive code without meta-commentary about the development process
- The code should speak for itself - TDD is the process, not the product

Create a pull request for the current branch using GitHub MCP tools.

## Workflow

Current branch status:
!`git status`

Recent commits:
!`git log --oneline -5`

Arguments: $ARGUMENTS

**Process:**

1. **Ensure Branch is Ready**:
   !`git status`
   - Commit all changes
   - Push to remote: `git push origin [branch-name]`

2. **Create PR**: Create a well-formatted pull request

   Title: use Conventional Commits format, e.g. `feat(#123): add user authentication`

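The title format above can be checked mechanically; the regex below is a sketch of the convention (type, issue reference, description), with the accepted type list an assumption:

```typescript
// Illustrative check for titles like `feat(#123): add user authentication`.
// The type list is an assumption, not a project-defined set.
const PR_TITLE = /^(feat|fix|chore|docs|refactor|test)\(#\d+\): .+/;

function isValidPrTitle(title: string): boolean {
  return PR_TITLE.test(title);
}
```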
   Description template:

   ```markdown
   <!--
   Are there any relevant issues / PRs / mailing list discussions?
   Please reference them here.
   -->

   ## References

   - [links to GitHub issues referenced in commit messages]

   ## Summary

   [Brief description of changes]

   ## Test Plan

   - [ ] Tests pass
   - [ ] Manual testing completed
   ```

3. **Set Base Branch**: Default to main unless specified otherwise

4. **Link Issues**: Reference related issues found in commit messages

## Use GitHub MCP Tools

1. Check the current branch and ensure it's pushed
2. Create a well-formatted pull request with a proper title and description
3. Set the base branch (default: main)
4. Include relevant issue references if found in commit messages


## 🛡 Project Rules (Injected into every command)

1. **NO BROKEN BUILDS:**
   - Run `pnpm test` before every `/commit`
   - Ensure all tests pass
   - Fix any type errors immediately

2. **API DEVELOPMENT:**
   - All new APIs MUST have Zod request/response schemas
   - All APIs MUST be documented in both:
     - OpenAPI spec ([src/lib/openapi/](src/lib/openapi/))
     - API test manifest ([src/app/api-test/api-tests-manifest.json](src/app/api-test/api-tests-manifest.json))
   - Test ALL parameters and edge cases
   - Include code examples and real-world outputs

3. **TDD WORKFLOW:**
   - ALWAYS use the /red → /green → /refactor cycle
   - NEVER write implementation without a failing test first
   - Use /cycle for feature development
   - Use characterization tests for refactoring

4. **API KEY MANAGEMENT:**
   - Support three loading methods:
     - Server environment variables
     - NEXT_PUBLIC_ variables (client-side)
     - Custom headers (X-OpenAI-Key, X-Anthropic-Key, etc.)
   - Never hardcode API keys
   - Always validate key availability before use

5. **COMPREHENSIVE TESTING:**
   - When researching APIs, read the actual implementation code
   - Discover ALL possible parameters (not just documented ones)
   - Test with various parameter combinations
   - Document custom headers, query params, request/response schemas
   - Include validation rules and testing notes

6. **NO UI BLOAT:**
   - This is an API project with minimal frontend
   - Only keep necessary test/documentation interfaces
   - Delete unused components immediately
   - No unnecessary UI libraries or features

7. **DOCUMENTATION:**
   - If you change an API, you MUST update:
     - OpenAPI spec
     - api-tests-manifest.json
     - Code examples
     - Testing notes
   - Document expected behavior and edge cases
   - Include real-world output examples
@@ -0,0 +1,142 @@
---
description: Execute TDD Red Phase - write ONE failing test
argument-hint: [optional additional info]
---

RED PHASE! Apply the guidelines below to the user input given here:

$ARGUMENTS

## General Guidelines

### Output Style

- **Never explicitly mention TDD** in code, comments, commits, PRs, or issues
- Write natural, descriptive code without meta-commentary about the development process
- The code should speak for itself - TDD is the process, not the product

(If no input was given above, fall back to the context of the conversation)

## TDD Fundamentals

### The TDD Cycle

The foundation of TDD is the Red-Green-Refactor cycle:

1. **Red Phase**: Write ONE failing test that describes desired behavior

   - The test must fail for the RIGHT reason (not syntax/import errors)
   - Only one test at a time - this is critical for TDD discipline
   - Exception: for browser-level tests or expensive setup (e.g., Storybook `*.stories.tsx`), group multiple assertions within a single test block to avoid redundant setup - but only when adding assertions to an existing interaction flow. If new user interactions are required, still create a new test. Split files by category if they exceed ~1000 lines.
   - **Adding a single test to a test file is ALWAYS allowed** - no prior test output needed
   - Starting TDD for a new feature is always valid, even if test output shows unrelated work
   - For DOM-based tests, use `data-testid` attributes to select elements rather than CSS classes, tag names, or text content
   - Avoid hard-coded timeouts, whether as `sleep()` calls or explicit values like `timeout: 5000`; use proper async patterns (`waitFor`, `findBy*`, event-based sync) instead, and rely on global test configs for timeout settings

2. **Green Phase**: Write MINIMAL code to make the test pass

   - Implement only what's needed for the current failing test
   - No anticipatory coding or extra features
   - Address the specific failure message

3. **Refactor Phase**: Improve code structure while keeping tests green

   - Only allowed when relevant tests are passing
   - Requires proof that tests have been run and are green
   - Applies to BOTH implementation and test code
   - No refactoring with failing tests - fix them first

### Core Violations

1. **Multiple Test Addition**

   - Adding more than one new test at once
   - Exception: initial test file setup or extracting shared test utilities

2. **Over-Implementation**

   - Code that exceeds what's needed to pass the current failing test
   - Adding untested features, methods, or error handling
   - Implementing multiple methods when the test only requires one

3. **Premature Implementation**

   - Adding implementation before a test exists and fails properly
   - Adding implementation without running the test first
   - Refactoring when tests haven't been run or are failing

### Critical Principle: Incremental Development

Each step in TDD should address ONE specific issue:

- Test fails "not defined" → Create empty stub/class only
- Test fails "not a function" → Add method stub only
- Test fails with assertion → Implement minimal logic only

### Optional Pre-Phase: Spike Phase

In rare cases where the problem space, interface, or expected behavior is unclear, a **Spike Phase** may be used **before the Red Phase**.
This phase is **not part of the regular TDD workflow** and must only be applied under exceptional circumstances.

- The goal of a Spike is **exploration and learning**, not implementation.
- The code written during a Spike is **disposable** and **must not** be merged or reused directly.
- Once sufficient understanding is achieved, all spike code is discarded, and normal TDD resumes starting from the **Red Phase**.
- A Spike is justified only when it is impossible to define a meaningful failing test due to technical uncertainty or unknown system behavior.

### General Information

- Sometimes the test output shows that no tests have been run when a new test is failing due to a missing import or constructor. In such cases, allow the agent to create simple stubs. If the agent is stuck, ask whether they forgot to create a stub.
- It is never allowed to introduce new logic without evidence of relevant failing tests. However, stubs and simple implementations that make imports and test infrastructure work are fine.
- In the refactor phase, it is perfectly fine to refactor both test and implementation code. That said, completely new functionality is not allowed. Types, cleanup, abstractions, and helpers are allowed as long as they do not introduce new behavior.
- Adding types, interfaces, or a constant to replace magic values is perfectly fine during refactoring.
- When blocking the agent, provide helpful directions so that they do not get stuck.


## 🛡 Project Rules (Injected into every command)

1. **NO BROKEN BUILDS:**
   - Run `pnpm test` before every `/commit`
   - Ensure all tests pass
   - Fix any type errors immediately

2. **API DEVELOPMENT:**
   - All new APIs MUST have Zod request/response schemas
   - All APIs MUST be documented in both:
     - OpenAPI spec ([src/lib/openapi/](src/lib/openapi/))
     - API test manifest ([src/app/api-test/api-tests-manifest.json](src/app/api-test/api-tests-manifest.json))
   - Test ALL parameters and edge cases
   - Include code examples and real-world outputs

3. **TDD WORKFLOW:**
   - ALWAYS use the /red → /green → /refactor cycle
   - NEVER write implementation without a failing test first
   - Use /cycle for feature development
   - Use characterization tests for refactoring

4. **API KEY MANAGEMENT:**
   - Support three loading methods:
     - Server environment variables
     - NEXT_PUBLIC_ variables (client-side)
     - Custom headers (X-OpenAI-Key, X-Anthropic-Key, etc.)
   - Never hardcode API keys
   - Always validate key availability before use

5. **COMPREHENSIVE TESTING:**
   - When researching APIs, read the actual implementation code
   - Discover ALL possible parameters (not just documented ones)
   - Test with various parameter combinations
   - Document custom headers, query params, request/response schemas
   - Include validation rules and testing notes

6. **NO UI BLOAT:**
   - This is an API project with minimal frontend
   - Only keep necessary test/documentation interfaces
   - Delete unused components immediately
   - No unnecessary UI libraries or features

7. **DOCUMENTATION:**
   - If you change an API, you MUST update:
     - OpenAPI spec
     - api-tests-manifest.json
     - Code examples
     - Testing notes
   - Document expected behavior and edge cases
   - Include real-world output examples