@hustle-together/api-dev-tools 1.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,112 @@
---
description: Create multiple atomic git commits, one logical change at a time
argument-hint: [optional-commit-description]
---

## General Guidelines

### Output Style

- **Never explicitly mention TDD** in code, comments, commits, PRs, or issues
- Write natural, descriptive code without meta-commentary about the development process
- The code should speak for itself - TDD is the process, not the product

Create multiple atomic git commits, committing the smallest possible logical unit at a time

Include any of the following info if specified: $ARGUMENTS

## Process

1. Run `git status` and `git diff` to review changes
2. Run `git log --oneline -5` to see recent commit style
3. Stage relevant files with `git add`
4. Create commit with descriptive message
5. Verify with `git status`

## Example

```bash
git add <files>
git commit -m "feat(#123): add validation to user input form"
```

## Atomic Commit Approach

Each commit should represent ONE logical change. Do NOT bundle multiple unrelated changes into one commit.

- Identify the smallest atomic units of change
- For EACH atomic unit: stage only those files/hunks, commit, verify
- Use `git add -p` to stage partial file changes when a file contains multiple logical changes
- Repeat until all changes are committed
- It is OK to create multiple commits without stopping - keep going until `git status` shows a clean working tree

## Multi-Commit Example

If a single file contains multiple unrelated changes, use `git add -p` to stage hunks interactively:

```bash
# Stage only the validation-related hunks from the file
git add -p src/user-service.ts
# Select 'y' for validation hunks, 'n' for others
git commit -m "feat(#123): add email format validation"

# Stage the error handling hunks
git add -p src/user-service.ts
git commit -m "fix(#124): handle null user gracefully"

# Stage remaining changes
git add src/user-service.ts
git commit -m "refactor: extract user lookup to helper"
```

## 🛡 Project Rules (Injected into every command)

1. **NO BROKEN BUILDS:**
   - Run `pnpm test` before every `/commit`
   - Ensure all tests pass
   - Fix any type errors immediately

2. **API DEVELOPMENT:**
   - All new APIs MUST have Zod request/response schemas (see the schema sketch after this list)
   - All APIs MUST be documented in both:
     - OpenAPI spec ([src/lib/openapi/](src/lib/openapi/))
     - API test manifest ([src/app/api-test/api-tests-manifest.json](src/app/api-test/api-tests-manifest.json))
   - Test ALL parameters and edge cases
   - Include code examples and real-world outputs

3. **TDD WORKFLOW:**
   - ALWAYS use /red → /green → /refactor cycle
   - NEVER write implementation without failing test first
   - Use /cycle for feature development
   - Use characterization tests for refactoring

4. **API KEY MANAGEMENT:**
   - Support three loading methods:
     - Server environment variables
     - NEXT_PUBLIC_ variables (client-side)
     - Custom headers (X-OpenAI-Key, X-Anthropic-Key, etc.)
   - Never hardcode API keys
   - Always validate key availability before use

5. **COMPREHENSIVE TESTING:**
   - When researching APIs, read actual implementation code
   - Discover ALL possible parameters (not just documented ones)
   - Test with various parameter combinations
   - Document custom headers, query params, request/response schemas
   - Include validation rules and testing notes

6. **NO UI BLOAT:**
   - This is an API project with minimal frontend
   - Only keep necessary test/documentation interfaces
   - Delete unused components immediately
   - No unnecessary UI libraries or features

7. **DOCUMENTATION:**
   - If you change an API, you MUST update:
     - OpenAPI spec
     - api-tests-manifest.json
     - Code examples
     - Testing notes
   - Document expected behavior and edge cases
   - Include real-world output examples
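
Rule 2 above requires Zod request/response schemas for every new API. A minimal sketch of what such schemas might look like for a hypothetical user-creation endpoint - the route, field names, and helper are illustrative assumptions, not part of this package:

```typescript
import { z } from "zod";

// Request body for a hypothetical POST /api/users endpoint (illustrative only).
export const createUserRequestSchema = z.object({
  email: z.string().email(),
  name: z.string().min(1),
});

// Response body returned on success.
export const createUserResponseSchema = z.object({
  id: z.string().uuid(),
  email: z.string().email(),
  name: z.string(),
  createdAt: z.string().datetime(),
});

export type CreateUserRequest = z.infer<typeof createUserRequestSchema>;
export type CreateUserResponse = z.infer<typeof createUserResponseSchema>;

// Validate the incoming body before doing any work in the route handler.
export function parseCreateUserRequest(body: unknown): CreateUserRequest {
  return createUserRequestSchema.parse(body); // throws a ZodError on invalid input
}
```

The same pair of schemas can then feed both the OpenAPI spec and the test manifest entries that rule 2 calls for, keeping validation and documentation in sync.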
@@ -0,0 +1,83 @@
---
description: Create a git commit following project standards
argument-hint: [optional-commit-description]
---

## General Guidelines

### Output Style

- **Never explicitly mention TDD** in code, comments, commits, PRs, or issues
- Write natural, descriptive code without meta-commentary about the development process
- The code should speak for itself - TDD is the process, not the product

Create a git commit following project standards

Include any of the following info if specified: $ARGUMENTS

## Process

1. Run `git status` and `git diff` to review changes
2. Run `git log --oneline -5` to see recent commit style
3. Stage relevant files with `git add`
4. Create commit with descriptive message
5. Verify with `git status`

## Example

```bash
git add <files>
git commit -m "feat(#123): add validation to user input form"
```

## 🛡 Project Rules (Injected into every command)

1. **NO BROKEN BUILDS:**
   - Run `pnpm test` before every `/commit`
   - Ensure all tests pass
   - Fix any type errors immediately

2. **API DEVELOPMENT:**
   - All new APIs MUST have Zod request/response schemas
   - All APIs MUST be documented in both:
     - OpenAPI spec ([src/lib/openapi/](src/lib/openapi/))
     - API test manifest ([src/app/api-test/api-tests-manifest.json](src/app/api-test/api-tests-manifest.json))
   - Test ALL parameters and edge cases
   - Include code examples and real-world outputs

3. **TDD WORKFLOW:**
   - ALWAYS use /red → /green → /refactor cycle
   - NEVER write implementation without failing test first
   - Use /cycle for feature development
   - Use characterization tests for refactoring

4. **API KEY MANAGEMENT:**
   - Support three loading methods (see the key-resolution sketch after this list):
     - Server environment variables
     - NEXT_PUBLIC_ variables (client-side)
     - Custom headers (X-OpenAI-Key, X-Anthropic-Key, etc.)
   - Never hardcode API keys
   - Always validate key availability before use

5. **COMPREHENSIVE TESTING:**
   - When researching APIs, read actual implementation code
   - Discover ALL possible parameters (not just documented ones)
   - Test with various parameter combinations
   - Document custom headers, query params, request/response schemas
   - Include validation rules and testing notes

6. **NO UI BLOAT:**
   - This is an API project with minimal frontend
   - Only keep necessary test/documentation interfaces
   - Delete unused components immediately
   - No unnecessary UI libraries or features

7. **DOCUMENTATION:**
   - If you change an API, you MUST update:
     - OpenAPI spec
     - api-tests-manifest.json
     - Code examples
     - Testing notes
   - Document expected behavior and edge cases
   - Include real-world output examples
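
Rule 4 above lists three ways an API key may be supplied. A minimal sketch of resolving a key across those sources - the lookup order, helper name, and environment variable names are assumptions for illustration, not part of this package:

```typescript
// Resolve an OpenAI key from the three supported sources.
// The priority order and all names here are illustrative assumptions.
export function resolveOpenAIKey(headers: Headers): string {
  const fromHeader = headers.get("X-OpenAI-Key");                // custom request header
  const fromServerEnv = process.env.OPENAI_API_KEY;              // server environment variable
  const fromPublicEnv = process.env.NEXT_PUBLIC_OPENAI_API_KEY;  // client-exposed variable

  const key = fromHeader ?? fromServerEnv ?? fromPublicEnv;
  if (!key) {
    // Per rule 4: validate availability up front instead of failing later in the request.
    throw new Error("No OpenAI API key available from header or environment");
  }
  return key;
}
```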
@@ -0,0 +1,142 @@
---
description: Execute complete TDD cycle - Red, Green, and Refactor phases in sequence
argument-hint: <feature or requirement description>
---

RED+GREEN+REFACTOR (one cycle) PHASE! Apply the guidance below to the user input provided here:

$ARGUMENTS

## General Guidelines

### Output Style

- **Never explicitly mention TDD** in code, comments, commits, PRs, or issues
- Write natural, descriptive code without meta-commentary about the development process
- The code should speak for itself - TDD is the process, not the product

(If no info was provided above, fall back to the context of the conversation.)

## TDD Fundamentals

### The TDD Cycle

The foundation of TDD is the Red-Green-Refactor cycle:

1. **Red Phase**: Write ONE failing test that describes desired behavior

   - The test must fail for the RIGHT reason (not syntax/import errors)
   - Only one test at a time - this is critical for TDD discipline
   - Exception: For browser-level tests or expensive setup (e.g., Storybook `*.stories.tsx`), group multiple assertions within a single test block to avoid redundant setup - but only when adding assertions to an existing interaction flow. If new user interactions are required, still create a new test. Split files by category if they exceed ~1000 lines.
   - **Adding a single test to a test file is ALWAYS allowed** - no prior test output needed
   - Starting TDD for a new feature is always valid, even if test output shows unrelated work
   - For DOM-based tests, select elements with `data-testid` attributes rather than CSS classes, tag names, or text content
   - Avoid hard-coded timeouts, whether `sleep()` calls or explicit values such as `timeout: 5000`; use proper async patterns (`waitFor`, `findBy*`, event-based synchronization) instead, and rely on the global test configuration for timeout settings (see the sketch after this list)

2. **Green Phase**: Write MINIMAL code to make the test pass

   - Implement only what's needed for the current failing test
   - No anticipatory coding or extra features
   - Address the specific failure message

3. **Refactor Phase**: Improve code structure while keeping tests green
   - Only allowed when relevant tests are passing
   - Requires proof that tests have been run and are green
   - Applies to BOTH implementation and test code
   - No refactoring with failing tests - fix them first
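
A minimal Testing Library sketch of the Red Phase bullets above (`data-testid` selectors, async queries instead of timeouts). The component, test IDs, and jest-dom matchers are assumptions for illustration:

```tsx
import { render, screen, waitFor } from "@testing-library/react";
import userEvent from "@testing-library/user-event";
// Hypothetical component used only to illustrate the selectors.
import { UserForm } from "./UserForm";

test("shows a validation message for an invalid email", async () => {
  const user = userEvent.setup();
  render(<UserForm />);

  // Select by data-testid rather than CSS class, tag name, or text content.
  await user.type(screen.getByTestId("email-input"), "not-an-email");
  await user.click(screen.getByTestId("submit-button"));

  // findBy* waits for the element to appear without a hard-coded timeout.
  expect(await screen.findByTestId("email-error")).toBeInTheDocument();

  // waitFor covers assertions that are not simple presence checks.
  await waitFor(() => expect(screen.getByTestId("submit-button")).toBeEnabled());
});
```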

### Core Violations

1. **Multiple Test Addition**

   - Adding more than one new test at once
   - Exception: Initial test file setup or extracting shared test utilities

2. **Over-Implementation**

   - Code that exceeds what's needed to pass the current failing test
   - Adding untested features, methods, or error handling
   - Implementing multiple methods when test only requires one

3. **Premature Implementation**
   - Adding implementation before a test exists and fails properly
   - Adding implementation without running the test first
   - Refactoring when tests haven't been run or are failing

### Critical Principle: Incremental Development

Each step in TDD should address ONE specific issue:

- Test fails "not defined" → Create empty stub/class only
- Test fails "not a function" → Add method stub only
- Test fails with assertion → Implement minimal logic only
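
As a compact, hypothetical illustration of that progression (all names and the assertion are invented for the example):

```typescript
// Step 1 - test fails with "UserService is not defined":
//   export class UserService {}
//
// Step 2 - test fails with "userService.findById is not a function":
//   add an empty findById stub - nothing more.
//
// Step 3 - test fails on an assertion such as
//   expect(service.findById("42")).toEqual({ id: "42" }):
//   implement only the minimal logic that satisfies it.

export interface User {
  id: string;
}

export class UserService {
  findById(id: string): User {
    return { id };
  }
}
```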

### Optional Pre-Phase: Spike Phase

In rare cases where the problem space, interface, or expected behavior is unclear, a **Spike Phase** may be used **before the Red Phase**.
This phase is **not part of the regular TDD workflow** and must only be applied under exceptional circumstances.

- The goal of a Spike is **exploration and learning**, not implementation.
- The code written during a Spike is **disposable** and **must not** be merged or reused directly.
- Once sufficient understanding is achieved, all spike code is discarded, and normal TDD resumes starting from the **Red Phase**.
- A Spike is justified only when it is impossible to define a meaningful failing test due to technical uncertainty or unknown system behavior.

### General Information

- Sometimes the test output reports that no tests ran when a new test fails because of a missing import or constructor. In such cases, allow the agent to create simple stubs. If they get stuck, ask whether they forgot to create a stub.
- It is never allowed to introduce new logic without evidence of relevant failing tests. However, stubs and simple scaffolding that make imports and test infrastructure work are fine.
- In the refactor phase, it is perfectly fine to refactor both test and implementation code. That said, completely new functionality is not allowed. Types, cleanup, abstractions, and helpers are allowed as long as they do not introduce new behavior.
- Adding types, interfaces, or constants to replace magic values is perfectly fine during refactoring.
- When blocking the agent, provide helpful direction so that they do not get stuck.

## 🛡 Project Rules (Injected into every command)

1. **NO BROKEN BUILDS:**
   - Run `pnpm test` before every `/commit`
   - Ensure all tests pass
   - Fix any type errors immediately

2. **API DEVELOPMENT:**
   - All new APIs MUST have Zod request/response schemas
   - All APIs MUST be documented in both:
     - OpenAPI spec ([src/lib/openapi/](src/lib/openapi/))
     - API test manifest ([src/app/api-test/api-tests-manifest.json](src/app/api-test/api-tests-manifest.json))
   - Test ALL parameters and edge cases
   - Include code examples and real-world outputs

3. **TDD WORKFLOW:**
   - ALWAYS use /red → /green → /refactor cycle
   - NEVER write implementation without failing test first
   - Use /cycle for feature development
   - Use characterization tests for refactoring

4. **API KEY MANAGEMENT:**
   - Support three loading methods:
     - Server environment variables
     - NEXT_PUBLIC_ variables (client-side)
     - Custom headers (X-OpenAI-Key, X-Anthropic-Key, etc.)
   - Never hardcode API keys
   - Always validate key availability before use

5. **COMPREHENSIVE TESTING:**
   - When researching APIs, read actual implementation code
   - Discover ALL possible parameters (not just documented ones)
   - Test with various parameter combinations
   - Document custom headers, query params, request/response schemas
   - Include validation rules and testing notes

6. **NO UI BLOAT:**
   - This is an API project with minimal frontend
   - Only keep necessary test/documentation interfaces
   - Delete unused components immediately
   - No unnecessary UI libraries or features

7. **DOCUMENTATION:**
   - If you change an API, you MUST update:
     - OpenAPI spec
     - api-tests-manifest.json
     - Code examples
     - Testing notes
   - Document expected behavior and edge cases
   - Include real-world output examples
@@ -0,0 +1,86 @@
---
description: Analyze conversation context for unaddressed items and gaps
argument-hint: [optional additional info]
---

## General Guidelines

### Output Style

- **Never explicitly mention TDD** in code, comments, commits, PRs, or issues
- Write natural, descriptive code without meta-commentary about the development process
- The code should speak for itself - TDD is the process, not the product

Analyze the current conversation context and identify things that have not yet been addressed. Look for:

1. **Incomplete implementations** - Code that was started but not finished
2. **Unused variables/results** - Values that were captured but never used
3. **Missing tests** - Functionality without test coverage
4. **User requests** - Things the user asked for that weren't fully completed
5. **TODO comments** - Any TODOs mentioned in conversation
6. **Error handling gaps** - Missing error cases or edge cases
7. **Documentation gaps** - Undocumented APIs or features
8. **Consistency check** - Look for inconsistent patterns, naming conventions, or structure across the codebase

Present findings as a prioritized list with:

- What the gap is
- Why it matters
- Suggested next action

If there are no gaps, confirm that everything discussed has been addressed.

Additional info:
$ARGUMENTS

## 🛡 Project Rules (Injected into every command)

1. **NO BROKEN BUILDS:**
   - Run `pnpm test` before every `/commit`
   - Ensure all tests pass
   - Fix any type errors immediately

2. **API DEVELOPMENT:**
   - All new APIs MUST have Zod request/response schemas
   - All APIs MUST be documented in both:
     - OpenAPI spec ([src/lib/openapi/](src/lib/openapi/))
     - API test manifest ([src/app/api-test/api-tests-manifest.json](src/app/api-test/api-tests-manifest.json))
   - Test ALL parameters and edge cases
   - Include code examples and real-world outputs

3. **TDD WORKFLOW:**
   - ALWAYS use /red → /green → /refactor cycle
   - NEVER write implementation without failing test first
   - Use /cycle for feature development
   - Use characterization tests for refactoring

4. **API KEY MANAGEMENT:**
   - Support three loading methods:
     - Server environment variables
     - NEXT_PUBLIC_ variables (client-side)
     - Custom headers (X-OpenAI-Key, X-Anthropic-Key, etc.)
   - Never hardcode API keys
   - Always validate key availability before use

5. **COMPREHENSIVE TESTING:**
   - When researching APIs, read actual implementation code
   - Discover ALL possible parameters (not just documented ones)
   - Test with various parameter combinations (see the sketch after this list)
   - Document custom headers, query params, request/response schemas
   - Include validation rules and testing notes

6. **NO UI BLOAT:**
   - This is an API project with minimal frontend
   - Only keep necessary test/documentation interfaces
   - Delete unused components immediately
   - No unnecessary UI libraries or features

7. **DOCUMENTATION:**
   - If you change an API, you MUST update:
     - OpenAPI spec
     - api-tests-manifest.json
     - Code examples
     - Testing notes
   - Document expected behavior and edge cases
   - Include real-world output examples
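
Rule 5 above asks for coverage of parameter combinations. A minimal table-driven sketch of that idea - the test runner (Vitest-style `it.each`), the endpoint, and the parameters are all assumptions for illustration:

```typescript
import { describe, expect, it } from "vitest";

// Hypothetical helper that calls the endpoint under test with arbitrary query params.
async function callSearchApi(params: Record<string, string>): Promise<Response> {
  const query = new URLSearchParams(params).toString();
  return fetch(`http://localhost:3000/api/search?${query}`);
}

// Each row exercises a different combination of documented and undocumented parameters.
const cases: Array<{ name: string; params: Record<string, string>; expectedStatus: number }> = [
  { name: "query only", params: { q: "widgets" }, expectedStatus: 200 },
  { name: "query with limit", params: { q: "widgets", limit: "5" }, expectedStatus: 200 },
  { name: "invalid limit", params: { q: "widgets", limit: "-1" }, expectedStatus: 400 },
  { name: "missing query", params: {}, expectedStatus: 400 },
];

describe("search API parameter combinations", () => {
  it.each(cases)("$name", async ({ params, expectedStatus }) => {
    const response = await callSearchApi(params);
    expect(response.status).toBe(expectedStatus);
  });
});
```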
@@ -0,0 +1,142 @@
---
description: Execute TDD Green Phase - write minimal implementation to pass the failing test
argument-hint: <implementation description>
---

GREEN PHASE! Apply the guidance below to the user input provided here:

$ARGUMENTS

## General Guidelines

### Output Style

- **Never explicitly mention TDD** in code, comments, commits, PRs, or issues
- Write natural, descriptive code without meta-commentary about the development process
- The code should speak for itself - TDD is the process, not the product

(If no info was provided above, fall back to the context of the conversation.)

## TDD Fundamentals

### The TDD Cycle

The foundation of TDD is the Red-Green-Refactor cycle:

1. **Red Phase**: Write ONE failing test that describes desired behavior

   - The test must fail for the RIGHT reason (not syntax/import errors)
   - Only one test at a time - this is critical for TDD discipline
   - Exception: For browser-level tests or expensive setup (e.g., Storybook `*.stories.tsx`), group multiple assertions within a single test block to avoid redundant setup - but only when adding assertions to an existing interaction flow. If new user interactions are required, still create a new test. Split files by category if they exceed ~1000 lines.
   - **Adding a single test to a test file is ALWAYS allowed** - no prior test output needed
   - Starting TDD for a new feature is always valid, even if test output shows unrelated work
   - For DOM-based tests, select elements with `data-testid` attributes rather than CSS classes, tag names, or text content
   - Avoid hard-coded timeouts, whether `sleep()` calls or explicit values such as `timeout: 5000`; use proper async patterns (`waitFor`, `findBy*`, event-based synchronization) instead, and rely on the global test configuration for timeout settings
35
+
36
+ 2. **Green Phase**: Write MINIMAL code to make the test pass
37
+
38
+ - Implement only what's needed for the current failing test
39
+ - No anticipatory coding or extra features
40
   - Address the specific failure message (see the sketch after this list)

3. **Refactor Phase**: Improve code structure while keeping tests green
   - Only allowed when relevant tests are passing
   - Requires proof that tests have been run and are green
   - Applies to BOTH implementation and test code
   - No refactoring with failing tests - fix them first
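
As a hypothetical illustration of how small a Green Phase change should be (the function and the failing assertion are invented for the example):

```typescript
// Failing test written in the Red Phase:
//   expect(formatPrice(1999)).toBe("$19.99");
//
// Minimal Green Phase implementation - no currency options, no locale handling,
// nothing the current failing test does not demand.
export function formatPrice(cents: number): string {
  return `$${(cents / 100).toFixed(2)}`;
}
```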

### Core Violations

1. **Multiple Test Addition**

   - Adding more than one new test at once
   - Exception: Initial test file setup or extracting shared test utilities

2. **Over-Implementation**

   - Code that exceeds what's needed to pass the current failing test
   - Adding untested features, methods, or error handling
   - Implementing multiple methods when test only requires one

3. **Premature Implementation**
   - Adding implementation before a test exists and fails properly
   - Adding implementation without running the test first
   - Refactoring when tests haven't been run or are failing

### Critical Principle: Incremental Development

Each step in TDD should address ONE specific issue:

- Test fails "not defined" → Create empty stub/class only
- Test fails "not a function" → Add method stub only
- Test fails with assertion → Implement minimal logic only

### Optional Pre-Phase: Spike Phase

In rare cases where the problem space, interface, or expected behavior is unclear, a **Spike Phase** may be used **before the Red Phase**.
This phase is **not part of the regular TDD workflow** and must only be applied under exceptional circumstances.

- The goal of a Spike is **exploration and learning**, not implementation.
- The code written during a Spike is **disposable** and **must not** be merged or reused directly.
- Once sufficient understanding is achieved, all spike code is discarded, and normal TDD resumes starting from the **Red Phase**.
- A Spike is justified only when it is impossible to define a meaningful failing test due to technical uncertainty or unknown system behavior.

### General Information

- Sometimes the test output reports that no tests ran when a new test fails because of a missing import or constructor. In such cases, allow the agent to create simple stubs. If they get stuck, ask whether they forgot to create a stub.
- It is never allowed to introduce new logic without evidence of relevant failing tests. However, stubs and simple scaffolding that make imports and test infrastructure work are fine.
- In the refactor phase, it is perfectly fine to refactor both test and implementation code. That said, completely new functionality is not allowed. Types, cleanup, abstractions, and helpers are allowed as long as they do not introduce new behavior.
- Adding types, interfaces, or constants to replace magic values is perfectly fine during refactoring.
- When blocking the agent, provide helpful direction so that they do not get stuck.

## 🛡 Project Rules (Injected into every command)

1. **NO BROKEN BUILDS:**
   - Run `pnpm test` before every `/commit`
   - Ensure all tests pass
   - Fix any type errors immediately

2. **API DEVELOPMENT:**
   - All new APIs MUST have Zod request/response schemas
   - All APIs MUST be documented in both:
     - OpenAPI spec ([src/lib/openapi/](src/lib/openapi/))
     - API test manifest ([src/app/api-test/api-tests-manifest.json](src/app/api-test/api-tests-manifest.json))
   - Test ALL parameters and edge cases
   - Include code examples and real-world outputs

3. **TDD WORKFLOW:**
   - ALWAYS use /red → /green → /refactor cycle
   - NEVER write implementation without failing test first
   - Use /cycle for feature development
   - Use characterization tests for refactoring

4. **API KEY MANAGEMENT:**
   - Support three loading methods:
     - Server environment variables
     - NEXT_PUBLIC_ variables (client-side)
     - Custom headers (X-OpenAI-Key, X-Anthropic-Key, etc.)
   - Never hardcode API keys
   - Always validate key availability before use

5. **COMPREHENSIVE TESTING:**
   - When researching APIs, read actual implementation code
   - Discover ALL possible parameters (not just documented ones)
   - Test with various parameter combinations
   - Document custom headers, query params, request/response schemas
   - Include validation rules and testing notes

6. **NO UI BLOAT:**
   - This is an API project with minimal frontend
   - Only keep necessary test/documentation interfaces
   - Delete unused components immediately
   - No unnecessary UI libraries or features

7. **DOCUMENTATION:**
   - If you change an API, you MUST update:
     - OpenAPI spec
     - api-tests-manifest.json
     - Code examples
     - Testing notes
   - Document expected behavior and edge cases
   - Include real-world output examples