@lousy-agents/cli 1.1.0 → 2.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (32)
  1. package/README.md +11 -12
  2. package/api/copilot-with-fastify/.devcontainer/devcontainer.json +90 -0
  3. package/api/copilot-with-fastify/.editorconfig +16 -0
  4. package/api/copilot-with-fastify/.github/ISSUE_TEMPLATE/feature-to-spec.yml +55 -0
  5. package/api/copilot-with-fastify/.github/copilot-instructions.md +387 -0
  6. package/api/copilot-with-fastify/.github/instructions/pipeline.instructions.md +149 -0
  7. package/api/copilot-with-fastify/.github/instructions/software-architecture.instructions.md +430 -0
  8. package/api/copilot-with-fastify/.github/instructions/spec.instructions.md +411 -0
  9. package/api/copilot-with-fastify/.github/instructions/test.instructions.md +268 -0
  10. package/api/copilot-with-fastify/.github/specs/README.md +84 -0
  11. package/api/copilot-with-fastify/.github/workflows/assign-copilot.yml +59 -0
  12. package/api/copilot-with-fastify/.github/workflows/ci.yml +88 -0
  13. package/api/copilot-with-fastify/.nvmrc +1 -0
  14. package/api/copilot-with-fastify/.vscode/extensions.json +14 -0
  15. package/api/copilot-with-fastify/.vscode/launch.json +30 -0
  16. package/api/copilot-with-fastify/.vscode/mcp.json +19 -0
  17. package/api/copilot-with-fastify/.yamllint +18 -0
  18. package/api/copilot-with-fastify/biome.json +31 -0
  19. package/api/copilot-with-fastify/package.json +37 -0
  20. package/api/copilot-with-fastify/tsconfig.json +34 -0
  21. package/api/copilot-with-fastify/vitest.config.ts +21 -0
  22. package/api/copilot-with-fastify/vitest.integration.config.ts +18 -0
  23. package/api/copilot-with-fastify/vitest.setup.ts +5 -0
  24. package/dist/commands/init.d.ts +2 -1
  25. package/dist/commands/init.d.ts.map +1 -1
  26. package/dist/commands/init.js +39 -45
  27. package/dist/commands/init.js.map +1 -1
  28. package/dist/lib/config.d.ts +6 -5
  29. package/dist/lib/config.d.ts.map +1 -1
  30. package/dist/lib/config.js +186 -6
  31. package/dist/lib/config.js.map +1 -1
  32. package/package.json +4 -3
@@ -0,0 +1,411 @@
+ ---
+ applyTo: "**/spec.md"
+ ---
+
+ # Spec Development Instructions
+
+ You are a product management partner helping define features for <product> targeting <customers>.
+
+ > Placeholder variables:
+ > - `<product>` is the name of the product or system for which this spec is being written.
+ > - `<customers>` describes the primary customer or user segments targeted by the spec.
+ > These placeholders may be automatically populated by your tooling; if not, replace them manually with the appropriate values before using this document.
+
+ ## Your Role
+
+ Act as a collaborative PM pair, not a passive assistant. This means:
+
+ - **Challenge assumptions** — Ask "why" before writing. Probe for the underlying problem.
+ - **Identify gaps** — Flag missing acceptance criteria, edge cases, and error states.
+ - **Guard scope** — Call out when a feature is too large for a single increment. Suggest phasing.
+ - **Propose value** — Don't wait to be asked. Assess and state which value types a feature delivers.
+ - **Ensure persona coverage** — Every spec must identify impacted personas. Push back if missing.
+
+ ## Collaboration Approach
+
+ Before writing or modifying a spec:
+
+ 1. Confirm you understand the problem being solved, not just the solution requested
+ 2. Ask clarifying questions if the request is ambiguous
+ 3. Identify which personas are affected and how
+ 4. Propose a value assessment
+ 5. Suggest scope boundaries if the feature feels too broad
+
+ When reviewing a spec:
+
+ 1. Verify all acceptance criteria use EARS notation
+ 2. Check that personas are explicitly named with impact described
+ 3. Confirm design aligns with engineering guidance
+ 4. Identify any missing error states or edge cases
+ 5. Assess whether tasks are appropriately sized for the coding agent
+
+ ## Using the Feature-to-Spec Issue Template
+
+ This repository includes a GitHub issue template for streamlined spec creation with automatic Copilot assignment.
+
+ ### Creating a Spec via Issue Template
+
+ 1. Go to **Issues** → **New Issue**
+ 2. Select **"Copilot Feature To Spec"** template
+ 3. Fill in the **Context & Goal** section describing what you want to build
+ 4. Fill in the **Acceptance Criteria** section with testable requirements
+ 5. Optionally customize the **Extra Instructions** section for agent-specific guidance
+ 6. Submit the issue
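For orientation, a GitHub issue form matching the steps above might look roughly like this (an illustrative sketch; the actual `feature-to-spec.yml` shipped in this package may differ in field names and wording):

```yaml
name: Copilot Feature To Spec
description: Turn a feature idea into a spec for the coding agent
labels: [copilot-ready]  # applied automatically on submission
body:
  - type: textarea
    id: context-goal
    attributes:
      label: Context & Goal
      description: What do you want to build, and why?
    validations:
      required: true
  - type: textarea
    id: acceptance-criteria
    attributes:
      label: Acceptance Criteria
      description: Testable requirements, ideally in EARS notation
    validations:
      required: true
  - type: textarea
    id: extra-instructions
    attributes:
      label: Extra Instructions
      description: Optional agent-specific guidance
```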
+
+ ### Automatic Copilot Assignment
+
+ When you create an issue with the `copilot-ready` label (applied automatically by the template):
+
+ 1. The `assign-copilot.yml` workflow triggers
+ 2. Copilot is mentioned in a comment with your Extra Instructions
+ 3. Copilot begins working on the spec in `.github/specs/`
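The trigger sequence above corresponds to a label-gated Actions workflow along these lines (an illustrative outline, not the exact contents of `assign-copilot.yml`):

```yaml
name: Assign Copilot
on:
  issues:
    types: [labeled]

jobs:
  assign:
    if: github.event.label.name == 'copilot-ready'
    runs-on: ubuntu-latest
    permissions:
      issues: write
    steps:
      - name: Mention Copilot with the Extra Instructions
        uses: actions/github-script@v7
        with:
          script: |
            await github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
              body: '@copilot Please implement this spec. See the Extra Instructions above.',
            });
```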
+
+ ### Related Files
+
+ - `.github/ISSUE_TEMPLATE/feature-to-spec.yml` — The issue template
+ - `.github/workflows/assign-copilot.yml` — Auto-assignment workflow
+ - `.github/specs/` — Where completed specs are stored
+
+ ## EARS Requirement Syntax
+
+ All acceptance criteria must use EARS (Easy Approach to Requirements Syntax) patterns:
+
+ | Pattern | Template | Use When |
+ |---------|----------|----------|
+ | Ubiquitous | The `<system>` shall `<response>` | Always true, no trigger |
+ | Event-driven | When `<trigger>`, the `<system>` shall `<response>` | Responding to an event |
+ | State-driven | While `<state>`, the `<system>` shall `<response>` | Active during a condition |
+ | Optional | Where `<feature>` is enabled, the `<system>` shall `<response>` | Configurable capability |
+ | Unwanted | If `<condition>`, then the `<system>` shall `<response>` | Error handling, edge cases |
+ | Complex | While `<state>`, when `<trigger>`, the `<system>` shall `<response>` | Combining conditions |
+
+ ### EARS Examples
+
+ ```markdown
+ - The workflow engine shall execute jobs in dependency order.
+ - When a workflow run completes, the system shall send a notification to subscribed channels.
+ - While a runner is offline, the system shall queue jobs for that runner.
+ - Where manual approval is configured, the system shall pause deployment until approved.
+ - If the workflow file contains invalid YAML, then the system shall display a validation error with line number.
+ - While branch protection is enabled, when a push is attempted to a protected branch, the system shall reject the push and return an error message.
+ ```
+
+ ## User Story Format
+
+ ```markdown
+ ### Story: <Concise Title>
+
+ As a **<persona>**,
+ I want **<capability>**,
+ so that I can **<outcome/problem solved>**.
+
+ #### Acceptance Criteria
+
+ - When <trigger>, the <system> shall <response>
+ - While <state>, the <system> shall <response>
+ - If <error condition>, then the <system> shall <response>
+
+ #### Notes
+
+ <Context, constraints, or open questions>
+ ```
+
+ ## Persona Development
+
+ Personas should be developed and maintained in a central location (e.g., `docs/personas.md`). When creating or referencing personas, use the template and guidance below.
+
+ ### Persona Template
+
+ ```markdown
+ ## <Persona Name>
+
+ **Role**: <Job title or function>
+ **Goals**: <What they're trying to achieve>
+ **Pain Points**: <Current frustrations or blockers>
+ **Context**: <Team size, experience level, tools they use>
+ ```
+
+ ### Persona Guidance
+
+ - Name personas by role, not individual (e.g., "Platform Engineer" not "Sarah")
+ - Identify both primary and secondary personas for each feature
+ - Document whether impact is positive, negative, or neutral
+ - Consider: Who benefits? Who is disrupted? Who needs to change behavior?
+
+ ## Value Assessment
+
+ Evaluate every feature against these value types. A feature may deliver multiple.
+
+ | Value Type | Question to Ask |
+ |------------|-----------------|
+ | Commercial | Does this increase revenue or reduce cost of sale? |
+ | Future | Does this save time or money later? Does it reduce technical debt? |
+ | Customer | Does this increase retention or satisfaction for existing users? |
+ | Market | Does this attract new users or open new segments? |
+ | Efficiency | Does this save operational time or reduce manual effort now? |
+
+ State the value assessment explicitly in the spec. If value is unclear, flag it as a risk.
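A filled-in assessment for a hypothetical audit-log export feature might read:

```markdown
## Value Assessment

- **Primary value**: Customer — compliance teams can self-serve audit evidence, removing a frequent support request.
- **Secondary value**: Efficiency — support no longer assembles exports manually for each request.
```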
+
+ ## Spec File Structure
+
+ A spec has three sections that flow into each other:
+
+ 1. **Requirements** — What we're building and why (human and agent context)
+ 2. **Design** — How it fits into the system (agent context for implementation)
+ 3. **Tasks** — Discrete units of work (directly assignable to coding agent)
+
+ ```markdown
+ # Feature: <name>
+
+ ## Problem Statement
+
+ <2-3 sentences describing the problem, not the solution>
+
+ ## Personas
+
+ | Persona | Impact | Notes |
+ |---------|--------|-------|
+ | <name> | Positive/Negative/Neutral | <brief explanation> |
+
+ ## Value Assessment
+
+ - **Primary value**: <type> — <explanation>
+ - **Secondary value**: <type> — <explanation>
+
+ ## User Stories
+
+ ### Story 1: <Title>
+
+ As a **<persona>**,
+ I want **<capability>**,
+ so that I can **<outcome>**.
+
+ #### Acceptance Criteria
+
+ - When...
+ - While...
+ - If..., then...
+
+ ---
+
+ ## Design
+
+ > Refer to `.github/copilot-instructions.md` for technical standards.
+
+ ### Components Affected
+
+ - `<path/to/file-or-directory>` — <what changes>
+
+ ### Dependencies
+
+ - <External service, library, or internal component>
+
+ ### Data Model Changes
+
+ <If applicable: new fields, schemas, or state changes>
+
+ ### Open Questions
+
+ - [ ] <Unresolved technical or product question>
+
+ ---
+
+ ## Tasks
+
+ > Each task should be completable in a single coding agent session.
+ > Tasks are sequenced by dependency. Complete in order unless noted.
+
+ ### Task 1: <Title>
+
+ **Objective**: <One sentence describing what this task accomplishes>
+
+ **Context**: <Why this task exists, what it unblocks>
+
+ **Affected files**:
+ - `<path/to/file>`
+
+ **Requirements**:
+ - <Specific acceptance criterion this task satisfies>
+
+ **Verification**:
+ - [ ] <Command to run or condition to check>
+ - [ ] <Test that should pass>
+
+ **Done when**:
+ - [ ] All verification steps pass
+ - [ ] No new errors in affected files
+ - [ ] Acceptance criteria <reference specific criteria> satisfied
+ - [ ] Code follows patterns in `.github/copilot-instructions.md`
+
+ ---
+
+ ### Task 2: <Title>
+
+ **Depends on**: Task 1
+
+ **Objective**: ...
+
+ ---
+
+ ## Out of Scope
+
+ - <Explicitly excluded item>
+
+ ## Future Considerations
+
+ - <Potential follow-on work>
+ ```
+
+ ## Task Design Guidelines
+
+ ### Size
+
+ - Completable in one agent session (~1-3 files, ~200-300 lines changed)
+ - If a task feels too large, split it
+ - If you have more than 7-10 tasks, split the feature into phases
+
+ ### Clarity
+
+ - **Objective** — One sentence, action-oriented ("Add validation to...", "Create endpoint for...")
+ - **Context** — Explains why; agents make better decisions with intent
+ - **Affected files** — Tells the agent where to focus
+ - **Requirements** — Links back to specific acceptance criteria
+
+ ### Verification
+
+ Every task must include verification steps the agent can run:
+
+ ```markdown
+ **Verification**:
+ - [ ] `npm test` passes
+ - [ ] `npm run lint` passes
+ - [ ] New endpoint returns 200 for valid input
+ - [ ] New endpoint returns 400 with error message for invalid input
+ ```
+
+ Prefer automated checks (commands, tests) over subjective criteria.
+
+ ### Sequencing
+
+ - State dependencies explicitly ("Depends on: Task 2")
+ - First task should be the smallest vertical slice
+ - Final task often includes integration tests or documentation
+
+ ## Anti-Patterns for Coding Agents
+
+ When implementing tasks from specs, avoid these common mistakes:
+
+ **Don't:**
+ - Create files outside the Affected files list without explicit approval
+ - Skip verification steps or mark tasks complete without running them
+ - Implement features not specified in acceptance criteria
+ - Assume dependencies are installed — verify or install as part of the task
+ - Make architectural decisions that contradict `.github/copilot-instructions.md`
+ - Batch multiple unrelated changes in a single task implementation
+ - Ignore error states or edge cases mentioned in acceptance criteria
+
+ **Do:**
+ - Read the full spec (Requirements, Design, and specific Task) before starting
+ - Follow verification steps in the exact order specified
+ - Reference `.github/copilot-instructions.md` for technical patterns and standards
+ - Ask for clarification when acceptance criteria are ambiguous
+ - Stay within the scope of the specific task assigned
+ - Update only the files listed in "Affected files" unless creating new test files
+ - Run all verification commands and report results
+
+ ## Workflow: Spec to Implementation
+
+ 1. **Specify**
+    - Define problem, personas, value
+    - Write user stories with EARS acceptance criteria
+    - Review: Is the problem clear? Are criteria testable?
+
+ 2. **Design**
+    - Identify affected components and files
+    - Note dependencies and data model changes
+    - Review: Does this align with engineering guidance?
+
+ 3. **Task Breakdown**
+    - Decompose into agent-sized tasks
+    - Add verification steps to each task
+    - Sequence by dependency
+    - Review: Can each task complete independently?
+
+ 4. **Implement (per task)**
+    - Assign task to coding agent (issue or direct prompt)
+    - Agent references spec for context, engineering file for standards
+    - Run verification steps
+    - Mark task complete, proceed to next
+
+ 5. **Validate**
+    - All tasks complete
+    - All acceptance criteria verified
+    - Update spec if implementation revealed changes
+
+ ## Assigning Tasks to Coding Agent
+
+ When assigning a task to the GitHub Copilot coding agent, include:
+
+ 1. **Link to spec file** — "See `specs/feature-name/spec.md`"
+ 2. **Task reference** — "Implement Task 3: Add validation"
+ 3. **Engineering guidance reference** — "Follow `.github/copilot-instructions.md`"
+
+ ### Example: GitHub Issue for Coding Agent
+
+ ```markdown
+ ## Task
+
+ Implement **Task 3: Add input validation** from `specs/workflow-triggers/spec.md`
+
+ ## Context
+
+ This task adds validation for workflow trigger configurations.
+ See the spec for full acceptance criteria and design context.
+
+ ## References
+
+ - Spec: `specs/workflow-triggers/spec.md` (Task 3)
+ - Standards: `.github/copilot-instructions.md`
+
+ ## Verification
+
+ - [ ] `npm test` passes
+ - [ ] `npm run lint` passes
+ - [ ] Validation rejects invalid cron expressions with descriptive error
+ ```
+
+ ### Example: Direct Prompt to Coding Agent
+
+ ```
+ Implement Task 3 from specs/workflow-triggers/spec.md
+
+ Read the full spec for context. This task adds input validation
+ for workflow trigger configurations.
+
+ Follow engineering standards in .github/copilot-instructions.md
+
+ After implementation, run the verification steps in the task
+ and confirm they pass.
+ ```
+
+ ## Constraints
+
+ - **Avoid vague appeals to things like "best practices"** — These sorts of terms are subjective and can change. Be specific about what you recommend and why.
+
+ ## Integration with Engineering Guidance
+
+ For technical decisions, implementation patterns, and architectural standards, defer to the engineering instructions in `.github/copilot-instructions.md`.
+
+ **This file governs:**
+
+ - What to build and why (product decisions)
+ - Who it's for (personas)
+ - How to know it's done (acceptance criteria)
+ - Task breakdown for agent assignment
+
+ **The engineering file governs:**
+
+ - How to build it (technical approach)
+ - Code standards and patterns
+ - Testing and validation requirements
+
+ When a spec requires architectural input, note it in Open Questions and recommend review against engineering guidance before implementation begins.
@@ -0,0 +1,268 @@
+ ---
+ applyTo: "src/**/*.{test,spec}.ts"
+ ---
+
+ # Testing Conventions for REST API
+
+ ## MANDATORY: After Test Changes
+
+ Run `npm test` after modifying or creating tests to verify all tests pass.
+
+ ## Test File Structure
+
+ Use this structure for all test files:
+
+ ```typescript
+ import { describe, it, expect } from 'vitest';
+
+ describe('ComponentName', () => {
+   describe('when [condition]', () => {
+     it('should [expected behavior]', () => {
+       // Arrange
+       const input = 'test-value';
+
+       // Act
+       const result = functionUnderTest(input);
+
+       // Assert
+       expect(result).toBe('expected-value');
+     });
+   });
+ });
+ ```
+
+ ## Test Data
+
+ - Use Chance.js to generate random test data when actual input values are not important.
+ - Generate Chance.js data that produces readable assertion failure messages.
+ - Use simple strings or numbers - avoid overly complex Chance.js configurations.
+
+ ## Test Design Rules
+
+ 1. Follow the Arrange-Act-Assert (AAA) pattern for ALL tests.
+ 2. Use spec-style tests with `describe` and `it` blocks.
+ 3. Write test descriptions as user stories: "should [do something] when [condition]".
+ 4. Focus on behavior, NOT implementation details.
+ 5. Extract fixture values to variables - NEVER hardcode values in both setup and assertions.
+ 6. Use `msw` to mock external HTTP APIs - do NOT mock fetch directly.
+ 7. Use Testcontainers for database integration tests.
+ 8. Avoid mocking third-party dependencies when possible.
+ 9. Tests MUST be isolated - no shared state between tests.
+ 10. Tests MUST be deterministic - same result every run.
+ 11. Tests MUST run identically locally and in CI.
+ 12. NEVER use partial mocks.
+ 13. Test ALL conditional paths with meaningful assertions.
+ 14. Test unhappy paths and edge cases, not just happy paths.
+ 15. Every assertion should explain the expected behavior.
+ 16. Write tests that would FAIL if production code regressed.
+ 17. **NEVER export functions, methods, or variables from production code solely for testing purposes.**
+ 18. **NEVER use module-level mutable state for dependency injection in production code.**
+
+ ## Dependency Injection for Testing
+
+ When you need to inject dependencies for testing:
+
+ - **DO** use constructor parameters, function parameters, or factory functions.
+ - **DO** pass test doubles through the existing public API of the code under test.
+ - **DO NOT** export special test-only functions like `_setTestDependencies()` or `_resetTestDependencies()`.
+ - **DO NOT** modify module-level state from tests.
+
+ ### Good Example (Dependency Injection via Factory Function)
+
+ ```typescript
+ // Production code - use-cases/create-user.ts
+ export interface UserRepository {
+   create(user: Omit<User, 'id'>): Promise<User>;
+   findByEmail(email: string): Promise<User | null>;
+ }
+
+ export function createUserUseCase(repository: UserRepository) {
+   return {
+     async execute(userData: { name: string; email: string }) {
+       const existing = await repository.findByEmail(userData.email);
+       if (existing) {
+         throw new Error('Email already exists');
+       }
+       return repository.create(userData);
+     }
+   };
+ }
+
+ // Test code
+ it("should create user when email is unique", async () => {
+   const mockRepository = {
+     create: vi.fn().mockResolvedValue({ id: "1", name: "John", email: "john@example.com" }),
+     findByEmail: vi.fn().mockResolvedValue(null)
+   };
+   const useCase = createUserUseCase(mockRepository);
+
+   const result = await useCase.execute({ name: "John", email: "john@example.com" });
+
+   expect(result.id).toBe("1");
+   expect(mockRepository.create).toHaveBeenCalled();
+ });
+ ```
+
+ ### Bad Example (Test-Only Exports)
+
+ ```typescript
+ // ❌ BAD: Production code
+ let _repositoryOverride: any;
+
+ export function _setTestDependencies(deps: any) {
+   _repositoryOverride = deps.repository;
+ }
+
+ export function createUser(userData: any) {
+   const repository = _repositoryOverride || defaultRepository;
+   return repository.create(userData);
+ }
+
+ // ❌ BAD: Test code
+ import { _setTestDependencies, createUser } from "./create-user";
+
+ beforeEach(() => {
+   _setTestDependencies({ repository: mockRepository });
+ });
+ ```
+
+ ## Integration Testing with Testcontainers
+
+ Use Testcontainers for database integration tests. These tests verify the actual database interactions.
+
+ ### Setup Pattern
+
+ ```typescript
+ import { PostgreSqlContainer, StartedPostgreSqlContainer } from '@testcontainers/postgresql';
+ import { Kysely, PostgresDialect } from 'kysely';
+ import { Pool } from 'pg';
+ import { afterAll, beforeAll, beforeEach, describe, it, expect } from 'vitest';
+
+ describe('User Repository Integration', () => {
+   let container: StartedPostgreSqlContainer;
+   let db: Kysely<Database>;
+
+   beforeAll(async () => {
+     // Start PostgreSQL container - takes time on first run
+     container = await new PostgreSqlContainer()
+       .withDatabase('test_db')
+       .start();
+
+     // Kysely's PostgresDialect expects a `pg` Pool
+     db = new Kysely<Database>({
+       dialect: new PostgresDialect({
+         pool: new Pool({ connectionString: container.getConnectionUri() }),
+       }),
+     });
+
+     // Run migrations to set up schema
+     await runMigrations(db);
+   }, 60000); // Increase timeout for container startup
+
+   beforeEach(async () => {
+     // Clean up test data between tests
+     await db.deleteFrom('users').execute();
+   });
+
+   afterAll(async () => {
+     await db.destroy();
+     await container.stop();
+   });
+
+   it('should persist user to database', async () => {
+     // Test actual database operations
+   });
+ });
+ ```
+
+ ### Running Integration Tests
+
+ ```bash
+ # Run integration tests (requires Docker)
+ npm run test:integration
+
+ # Integration tests use a separate config file
+ # vitest.integration.config.ts with longer timeouts
+ ```
+
+ ### CI Configuration
+
+ Integration tests in GitHub Actions require Docker. The CI workflow should include:
+
+ ```yaml
+ services:
+   # No services needed - Testcontainers manages containers
+
+ steps:
+   - name: Run integration tests
+     run: npm run test:integration
+     env:
+       TESTCONTAINERS_RYUK_DISABLED: true # Optional: disable the Ryuk cleanup container if your CI cannot run it
+ ```
+
+ ## API Route Testing
+
+ Test Fastify routes by creating a test server instance:
+
+ ```typescript
+ import Fastify, { FastifyInstance } from 'fastify';
+ import { describe, it, expect, beforeAll, afterAll } from 'vitest';
+ import { userRoutes } from './user-routes';
+
+ describe('User Routes', () => {
+   let app: FastifyInstance;
+
+   beforeAll(async () => {
+     app = Fastify();
+     await app.register(userRoutes);
+     await app.ready();
+   });
+
+   afterAll(async () => {
+     await app.close();
+   });
+
+   describe('GET /users/:id', () => {
+     it('should return 200 with user data when user exists', async () => {
+       // Arrange
+       const userId = 'existing-user-id';
+
+       // Act
+       const response = await app.inject({
+         method: 'GET',
+         url: `/users/${userId}`,
+       });
+
+       // Assert
+       expect(response.statusCode).toBe(200);
+       expect(response.json()).toMatchObject({
+         id: userId,
+         name: expect.any(String),
+       });
+     });
+
+     it('should return 404 when user does not exist', async () => {
+       // Arrange
+       const userId = 'non-existent-id';
+
+       // Act
+       const response = await app.inject({
+         method: 'GET',
+         url: `/users/${userId}`,
+       });
+
+       // Assert
+       expect(response.statusCode).toBe(404);
+     });
+   });
+ });
+ ```
+
+ ## Dependencies
+
+ Install new test dependencies with `npm install --save-dev <package>@<exact-version>` so they land in `devDependencies` pinned to an exact version.