@jgamaraalv/ts-dev-kit 1.2.1 → 2.0.0

Files changed (37)
  1. package/.claude-plugin/marketplace.json +2 -2
  2. package/.claude-plugin/plugin.json +2 -2
  3. package/agent-memory/accessibility-pro/MEMORY.md +3 -0
  4. package/agent-memory/api-builder/MEMORY.md +3 -0
  5. package/agent-memory/code-reviewer/MEMORY.md +3 -0
  6. package/agent-memory/database-expert/MEMORY.md +3 -0
  7. package/agent-memory/debugger/MEMORY.md +3 -0
  8. package/agent-memory/docker-expert/MEMORY.md +3 -0
  9. package/agent-memory/performance-engineer/MEMORY.md +3 -0
  10. package/agent-memory/playwright-expert/MEMORY.md +3 -0
  11. package/agent-memory/react-specialist/MEMORY.md +3 -0
  12. package/agent-memory/security-scanner/MEMORY.md +3 -0
  13. package/agent-memory/test-generator/MEMORY.md +3 -0
  14. package/agent-memory/typescript-pro/MEMORY.md +3 -0
  15. package/agent-memory/ux-optimizer/MEMORY.md +3 -0
  16. package/agents/accessibility-pro.md +82 -119
  17. package/agents/api-builder.md +69 -104
  18. package/agents/code-reviewer.md +54 -175
  19. package/agents/database-expert.md +80 -134
  20. package/agents/debugger.md +95 -200
  21. package/agents/docker-expert.md +53 -45
  22. package/agents/performance-engineer.md +97 -118
  23. package/agents/playwright-expert.md +62 -82
  24. package/agents/react-specialist.md +80 -97
  25. package/agents/security-scanner.md +63 -83
  26. package/agents/test-generator.md +85 -175
  27. package/agents/typescript-pro.md +81 -215
  28. package/agents/ux-optimizer.md +60 -77
  29. package/package.json +3 -2
  30. package/skills/debug/SKILL.md +256 -0
  31. package/skills/debug/references/debug-dispatch.md +289 -0
  32. package/skills/task/SKILL.md +366 -0
  33. package/skills/task/references/agent-dispatch.md +156 -0
  34. package/skills/task/references/output-templates.md +53 -0
  35. package/agents/multi-agent-coordinator.md +0 -142
  36. package/agents/nextjs-expert.md +0 -144
  37. package/docs/rules/orchestration.md.template +0 -126
package/skills/debug/SKILL.md
@@ -0,0 +1,256 @@
+ ---
+ name: debug
+ description: "End-to-end debugging workflow that triages, reproduces, and fixes bugs across the full stack using multi-agent orchestration. Use when: (1) encountering runtime errors in the API or web app, (2) investigating failed requests or broken user flows, (3) debugging production issues via Sentry/PostHog, (4) tracing data flow across backend and frontend, or (5) the user reports a bug that spans multiple layers."
+ argument-hint: "[error-description or sentry-issue-url]"
+ ---
+
+ <trigger_examples>
+ - "Debug why the form submission fails with a 500 error"
+ - "The dashboard page shows a blank screen after login"
+ - "API returns 403 but the user should be authorized"
+ - "Investigate this Sentry issue: PROJECT-123"
+ - "The form submits but nothing appears in the list"
+ - "Debug the notification queue — jobs are stuck"
+ </trigger_examples>
+
+ <workflow>
+ Follow each phase in order. Each one feeds the next.
+
+ <phase_1_triage>
+ Classify the bug before investigating. This determines which agents to dispatch.
+
+ 1. Read the error description, stack trace, or reproduction steps provided by the user.
+ 2. Determine the affected layers:
+
+ | Layer | Signals |
+ |-------|---------|
+ | **Frontend** | UI doesn't render, hydration errors, blank pages, console errors, wrong data displayed |
+ | **API** | HTTP error codes (4xx/5xx), validation failures, timeout, wrong response body |
+ | **Database** | Missing data, wrong query results, migration issues, connection errors |
+ | **Queue/Worker** | Jobs stuck, not processing, duplicate execution, Redis connectivity |
+ | **Infrastructure** | Docker containers down, ports in use, env vars missing |
+ | **Cross-cutting** | Data flows correctly in one layer but breaks in another |
+
+ 3. Check for quick context — run these in parallel:
+
+ ```bash
+ # Recent changes that might have introduced the bug
+ git log --oneline -10
+ git diff HEAD~3 --stat
+
+ # Infrastructure health
+ docker compose ps
+ ```
+
+ 4. If the user provided a Sentry issue URL or error ID, query Sentry for the full stack trace and event details.
+ 5. If the user mentions production errors, check PostHog error tracking for frequency and user impact.
+
+ State the triage result to the user:
+
+ > **TRIAGE: [layer(s)]** — Brief description of what appears to be happening.
+ </phase_1_triage>
+
+ <phase_2_execution_mode>
+ Based on the triage, decide the execution mode:
+
+ **SINGLE-LAYER** — The bug is isolated to one layer. Debug directly without dispatching agents.
+
+ **MULTI-LAYER** — The bug spans 2+ layers. Act as orchestrator and dispatch specialized debugging agents.
+
+ State the decision:
+
+ > **EXECUTION MODE: SINGLE-LAYER** — I will investigate and fix this directly.
+
+ OR
+
+ > **EXECUTION MODE: MULTI-LAYER** — I will dispatch specialized agents to investigate each layer in parallel.
+ </phase_2_execution_mode>
+
+ <phase_3_reproduce>
+ Reproduce the bug before investigating. Never fix what you can't reproduce.
+
+ <reproduction_strategies>
+
+ **API bugs** — Use curl or Bash to hit the endpoint directly:
+ ```bash
+ # Discover the API base URL and endpoints from CLAUDE.md, package.json, or route files
+ curl -v http://localhost:<port>/<endpoint> | jq .
+ ```
+
+ **Frontend bugs** — Use browser automation MCPs:
+ 1. Call `mcp__chrome-devtools__list_pages` or `mcp__plugin_playwright_playwright__browser_tabs` to check current state.
+ 2. Navigate to the affected page and take a screenshot.
+ 3. Check the browser console for errors.
+ 4. Inspect network requests for failed API calls.
+
+ **Queue/Worker bugs** — Check Redis and BullMQ state:
+ ```bash
+ # Check Redis connectivity
+ redis-cli ping
+
+ # Check queue state — discover the dev command from package.json scripts
+ # e.g., yarn dev, npm run dev, or the relevant workspace command
+ ```
+
+ **Database bugs** — Query directly:
+ ```bash
+ # Discover database credentials and container names from docker-compose.yml or .env
+ docker compose exec <db-container> psql -U <user> -c "SELECT * FROM ... LIMIT 5"
+ ```
+
+ If reproduction fails, add strategic logging and retry. See references/debug-dispatch.md for logging patterns.
+ </reproduction_strategies>
+ </phase_3_reproduce>
+
+ <phase_4_investigate>
+ With the bug reproduced, investigate the root cause.
+
+ <single_layer_investigation>
+ Follow the data flow from the error point backward:
+
+ 1. Read the source code at the error location.
+ 2. Trace inputs — where does the data come from? What transformations happen?
+ 3. Form a hypothesis about the root cause.
+ 4. Test the hypothesis — add logging, inspect state, check the database.
+ 5. If the hypothesis is wrong, form a new one based on what you learned.
+
+ Load relevant skills before diving in:
+ - Load skills matching the technologies used in the project (discover from package.json).
+ - Common examples: backend framework skills, ORM/database skills, frontend framework skills, queue/worker skills.
+ - Read CLAUDE.md and package.json to identify the project's tech stack before selecting skills.
+
+ When the bug involves library API misuse, version-specific behavior, or unclear method signatures, query Context7 for up-to-date documentation:
+ 1. `mcp__context7__resolve-library-id` — resolve the library name (e.g., "fastify", "drizzle-orm", "bullmq") to its Context7 ID.
+ 2. `mcp__context7__query-docs` — query with the specific API, method, or error message you're investigating.
+
+ Use Context7 when:
+ - The error message references a library API you're unsure about.
+ - You suspect a breaking change between versions (check the project's package.json for installed versions first).
+ - The stack trace points to library internals and you need to understand expected behavior.
+ - You need to verify correct usage patterns for any project dependency.
+ </single_layer_investigation>
+
+ <multi_layer_investigation>
+ Dispatch specialized agents in parallel. Each agent investigates its layer independently and reports findings.
+
+ See references/debug-dispatch.md for the agent prompt templates and dispatch patterns.
+
+ <debugging_roles>
+
+ | Role | Agent type | Domain | MCPs |
+ |------|-----------|--------|------|
+ | Backend debugger | `debugger` | API routes, use cases, DB queries, server hooks | Sentry, PostHog, Context7 |
+ | Frontend debugger | `debugger` | Pages, components, client state, network | Chrome DevTools, Playwright, Context7 |
+ | Queue debugger | `debugger` | Job queues, workers, Redis state | Context7 |
+ | E2E verifier | `playwright-expert` | Reproduce and verify via browser automation | Playwright |
+
+ Each debugging agent should use Context7 (`mcp__context7__resolve-library-id` + `mcp__context7__query-docs`) to verify library API usage when the bug involves unclear method signatures, version-specific behavior, or suspected API misuse. Include this instruction in agent prompts — see references/debug-dispatch.md.
+
+ </debugging_roles>
+
+ <dispatch_rules>
+ 1. Launch layer-specific debuggers in parallel — they investigate independently.
+ 2. Each agent receives: the error description, reproduction steps, relevant file paths, and which skills to load.
+ 3. Each agent reports: root cause hypothesis, evidence (logs, screenshots, stack traces), and suggested fix location.
+ 4. After all agents report, correlate findings — the root cause often lives at the boundary between layers.
+ 5. If agents disagree on the root cause, investigate the boundary between their domains.
+ </dispatch_rules>
+
+ <model_selection>
+ - Simple, well-scoped investigation (single file, clear error) → `haiku`
+ - Moderate investigation (multiple files, data flow tracing) → `sonnet`
+ - Complex cross-cutting investigation (race conditions, auth flows, distributed state) → `opus`
+ </model_selection>
+ </multi_layer_investigation>
+ </phase_4_investigate>
+
+ <phase_5_fix>
+ Implement the minimal fix that addresses the root cause.
+
+ <fix_principles>
+ - Fix the root cause, not the symptom. A 500 error caused by a missing null check needs the null check AND proper error handling upstream.
+ - Follow existing patterns. Read similar code in the codebase before writing the fix.
+ - Load the relevant skills before writing the fix (same skills as investigation phase).
+ - If the fix spans multiple layers, dispatch agents for each layer. See references/debug-dispatch.md for the fix dispatch template.
+ </fix_principles>
+
+ <fix_dispatch>
+ In MULTI-LAYER mode, the orchestrator decides who fixes what:
+ 1. Identify which files need changes based on investigation findings.
+ 2. Group changes by package/domain.
+ 3. Determine dependency order (shared → api → web).
+ 4. Dispatch fix agents sequentially if there are dependencies, or in parallel if independent.
+ </fix_dispatch>
+ </phase_5_fix>
+
+ <phase_6_verify>
+ A fix is not done until it's verified end-to-end.
+
+ 1. **Re-run the reproduction** — the original error should no longer occur.
+ 2. **Quality gates** — run the project's type checking, linting, and testing commands for every affected package. Discover available commands from package.json scripts (e.g., `tsc`, `lint`, `test`).
+ 3. **Browser verification** (for user-facing bugs) — use Playwright or Chrome DevTools to navigate to the affected page and confirm the fix works visually.
+ 4. **Regression check** — verify the fix doesn't break related functionality. Run the full test suite for affected packages.
+
+ If any step fails, return to phase 5 and fix the issue. Re-run all verification steps from the beginning.
+ </phase_6_verify>
+ </workflow>
+
+ <common_patterns>
+ Quick reference for the most frequent bugs in this stack. Use these to accelerate triage.
+
+ **Frontend → API boundary**
+ - CORS errors → check server CORS config
+ - 401/403 → check auth middleware, token expiry, cookie settings
+ - Wrong data shape → compare shared schema/type definitions with API response
+
+ **API → Database boundary**
+ - "relation does not exist" → migration not applied
+ - Empty results → wrong query conditions, check WHERE clause
+ - Connection errors → Docker not running, pool exhausted
+
+ **API → Redis/Queue boundary**
+ - Jobs not processing → worker not started, Redis down, wrong queue name
+ - Duplicate jobs → missing deduplication, idempotency key
+ - Stalled jobs → worker crashed without ack, check stall interval
+
+ **Frontend rendering**
+ - Hydration mismatch → server/client content differs, date formatting, conditional rendering
+ - Blank page → unhandled error in RSC, check error.tsx boundaries
+ - Infinite loading → API call hanging, missing Suspense boundary
+ </common_patterns>
+
+ <output>
+ When complete, produce a debug report:
+
+ ```
+ ## Bug resolved
+
+ **Root cause**: one sentence describing why the bug occurred.
+ **Fix**: one sentence describing what was changed to fix it.
+
+ ### Investigation path
+ Brief trace of how the root cause was found (which layer, what evidence).
+
+ ### Files changed
+ List every file created/modified.
+
+ ### Verification
+ - Reproduction: pass (the original error no longer occurs)
+ - tsc: pass/fail (per package)
+ - lint: pass/fail (per package)
+ - test: pass/fail (per package)
+ - Browser: pass/fail (if applicable)
+
+ ### Skills loaded
+ List every Skill() call made.
+
+ ### MCPs used
+ List every MCP used, or "none".
+ ```
+
+ Do not add explanations, caveats, or follow-up suggestions unless the user asks.
+ </output>
+
+ <task>
+ $ARGUMENTS
+ </task>
package/skills/debug/references/debug-dispatch.md
@@ -0,0 +1,289 @@
+ # Debug Agent Dispatch Reference
+
+ ## Table of contents
+
+ - [Investigation agent template](#investigation-agent-template)
+ - [Fix agent template](#fix-agent-template)
+ - [Role-specific prompts](#role-specific-prompts)
+ - [Dispatch examples](#dispatch-examples)
+ - [Strategic logging patterns](#strategic-logging-patterns)
+ - [Correlation protocol](#correlation-protocol)
+
+ ---
+
+ ## Investigation agent template
+
+ The `debugger` agent already includes project context, common errors, debugging commands, strategic logging, and quality gates in its system prompt. The dispatch prompt only needs **bug-specific** information:
+
+ ```
+ ## Bug description
+ [What the user reported — error message, behavior, reproduction steps]
+
+ ## Your investigation scope
+ [Which layer/files to investigate. Be specific about boundaries.]
+
+ ## Skills to load
+ Call the Skill tool before investigating:
+ - Skill(skill: "[skill-1]")
+ - Skill(skill: "[skill-2]")
+
+ ## Reproduction context
+ [How the bug was reproduced — curl command, browser steps, log output]
+
+ ## Files to start with
+ [List specific file paths where the error originates or is likely rooted]
+
+ ## Report format
+ When done, report:
+ - **Root cause hypothesis**: one sentence.
+ - **Evidence**: what you found that supports the hypothesis.
+ - **Fix location**: which file(s) and line(s) need to change.
+ - **Suggested fix**: code diff or description.
+ - **Confidence**: high / medium / low.
+ ```
+
+ ### Skill mapping for investigation
+
+ The debugger loads skills dynamically based on the bug domain. Include the appropriate skills in the dispatch prompt:
+
+ | Bug domain | Skills to load |
+ |-----------|---------------|
+ | API/Fastify | `fastify-best-practices` |
+ | Database | `drizzle-pg`, `postgresql` |
+ | Frontend/React | `nextjs-best-practices`, `react-best-practices` |
+ | Queue/Redis | `bullmq`, `ioredis` |
+ | Security | `owasp-security-review` |
+
+ ## Fix agent template
+
+ After investigation, dispatch the appropriate **specialist agent** (not necessarily the debugger) to implement fixes. Specialist agents already include their domain context and quality gates:
+
+ ```
+ ## Bug root cause
+ [One sentence from the investigation phase]
+
+ ## Your fix scope
+ [Which files to modify and what the fix should do]
+
+ ## Existing patterns to follow
+ [Paste relevant code snippets or file paths showing the codebase conventions]
+
+ ## Fix requirements
+ [Specific changes needed — be precise about expected behavior]
+ ```
+
+ Use the agent type that matches the fix domain:
+ - API route fix -> `api-builder` (preloads fastify-best-practices)
+ - Database/query fix -> `database-expert` (preloads drizzle-pg, postgresql)
+ - Component fix -> `react-specialist` (preloads react-best-practices, composition-patterns)
+ - Type fix -> `typescript-pro`
+ - Security fix -> `security-scanner` (preloads owasp-security-review)
+ - Cross-cutting or unclear -> `debugger` (load skills dynamically)
+
+ ---
+
+ ## Role-specific prompts
+
+ ### Backend debugger
+
+ **Agent type**: `debugger`
+ **Skills to include**: `fastify-best-practices`, `drizzle-pg`, `postgresql`
+ **Scope**: Backend source directory — routes, use cases, adapters, plugins, middleware. Discover the path from the project structure.
+
+ Investigation focus:
+ - Read the Fastify route handler at the error location
+ - Check request validation (Zod schema vs actual payload)
+ - Trace database queries — use `EXPLAIN ANALYZE` if performance-related
+ - Check error handling — are errors caught and mapped to proper HTTP codes?
+ - Inspect middleware/hooks in the request lifecycle
+
+ ### Frontend debugger
+
+ **Agent type**: `debugger`
+ **Skills to include**: `nextjs-best-practices`, `react-best-practices`
+ **Scope**: Frontend source directory — pages, components, hooks, contexts, lib. Discover the path from the project structure.
+
+ Investigation focus:
+ - Check the page/component at the error location
+ - Inspect browser console for client-side errors (use Chrome DevTools or Playwright MCP)
+ - Check network requests — is the API call correct? Does the response match expectations?
+ - Verify server vs client rendering — hydration mismatches, missing Suspense boundaries
+ - Check auth context — is the user authenticated? Are tokens being sent?
+
+ MCPs available to the debugger:
+ - `mcp__chrome-devtools__take_screenshot` — visual state of the page
+ - `mcp__chrome-devtools__list_console_messages` — browser console errors
+ - `mcp__chrome-devtools__list_network_requests` — failed API calls
+ - `mcp__plugin_playwright_playwright__browser_snapshot` — accessibility tree inspection
+
+ ### Queue debugger
+
+ **Agent type**: `debugger`
+ **Skills to include**: `bullmq`, `ioredis`
+ **Scope**: Queue/worker source directory — queue definitions, workers, job processors. Discover the path from the project structure.
+
+ Investigation focus:
+ - Check queue configuration (name, connection, default job options)
+ - Inspect worker concurrency and error handling
+ - Check Redis connectivity and key patterns
+ - Look for stalled jobs, failed jobs, or jobs stuck in "waiting"
+ - Verify job data shape matches worker expectations
+
+ ### E2E verifier
+
+ **Agent type**: `playwright-expert`
+ **Skills**: none needed (agent has its own testing patterns)
+ **Scope**: Full application via browser automation
+
+ Verification focus:
+ - Navigate to the affected page
+ - Perform the user flow that triggers the bug
+ - Take screenshots before and after
+ - Check console for errors
+ - Verify the fix resolves the issue visually
+
+ ---
+
+ ## Dispatch examples
+
+ ### Example 1: API returns 500 on resource creation
+
+ ```
+ # Triage: API layer (single-layer)
+ # No dispatch needed — debug directly
+
+ 1. curl the endpoint to reproduce
+ 2. Read the route handler and use case
+ 3. Check the database query
+ 4. Fix the root cause
+ 5. Verify with curl + tests
+ ```
+
+ ### Example 2: Form submits but data doesn't appear in list
+
+ ```
+ # Triage: Cross-cutting (frontend -> API -> database)
+ # Dispatch: MULTI-LAYER
+
+ # Wave 1: Parallel investigation
+ Task(
+ description: "Debug resource creation API endpoint",
+ subagent_type: "debugger",
+ model: "sonnet",
+ prompt: """
+ ## Bug description
+ User submits the creation form successfully (200 response) but the item
+ doesn't appear in the list page.
+
+ ## Your investigation scope
+ Backend only: the POST endpoint and corresponding GET list endpoint.
+ Check if data reaches the database and if the list query returns it.
+
+ ## Skills to load
+ Call the Skill tool before investigating:
+ - Load skills matching the backend framework and ORM used (discover from package.json)
+
+ ## Reproduction context
+ curl -X POST http://localhost:<port>/api/<resource> -H "Content-Type: application/json" \
+ -d '{"field":"value"}'
+ Returns 200, but GET /api/<resource> returns empty array.
+
+ ## Files to start with
+ - Discover from the project structure: look for route handlers, use cases, and persistence layers
+ """
+ )
+
+ Task(
+ description: "Debug resource list page",
+ subagent_type: "debugger",
+ model: "sonnet",
+ prompt: """
+ ## Bug description
+ User submits the creation form successfully but the item doesn't appear
+ in the list page.
+
+ ## Your investigation scope
+ Frontend only: the list page. Check the API call and rendering logic.
+
+ ## Skills to load
+ Call the Skill tool before investigating:
+ - Load skills matching the frontend framework used (discover from package.json)
+
+ ## Reproduction context
+ Navigate to the list page after submitting the form. The list shows empty or
+ stale data.
+
+ ## Files to start with
+ - Discover from the project structure: look for the list page component and its sub-components
+ """
+ )
+
+ # Wave 2: Dispatch fixes using specialist agents matching the fix domain
+ # e.g., api-builder for API fix, react-specialist for frontend fix
+
+ # Wave 3: E2E verification
+ Task(
+ description: "Verify resource creation flow",
+ subagent_type: "playwright-expert",
+ model: "haiku",
+ prompt: """
+ ## Your task
+ Verify the resource creation flow works end-to-end:
+ 1. Navigate to the creation form
+ 2. Fill in the required fields and submit
+ 3. Verify the new item appears in the list page
+
+ ## Success criteria
+ - Form submits successfully
+ - Item appears in the list within 5 seconds
+ - No console errors
+ """
+ )
+ ```
+
+ ### Example 3: Production error from Sentry
+
+ ```
+ # Triage: Starts with Sentry context, then investigate
+
+ 1. Query Sentry for the full stack trace and event details
+ 2. Identify the affected layer from the stack trace
+ 3. Dispatch investigation agent(s) for that layer
+ 4. Dispatch fix agent(s) using the appropriate specialist
+ 5. E2E verification
+ ```
+
+ ---
+
+ ## Strategic logging patterns
+
+ When investigation needs more visibility, add temporary logging:
+
+ ```typescript
+ // Prefix with DEBUG: for easy cleanup after fixing
+ request.log.debug({ body: request.body, params: request.params }, "DEBUG: incoming request");
+ request.log.debug({ result }, "DEBUG: query result");
+
+ // Outside request context
+ fastify.log.debug({ config }, "DEBUG: plugin configuration");
+ ```
+
+ After fixing, search and remove all `DEBUG:` log lines:
+ ```bash
+ # Search the backend source directory for temporary debug lines
+ grep -rn "DEBUG:" <backend-src-dir>/
+ ```
+
+ ---
+
+ ## Correlation protocol
+
+ When multiple agents investigate the same bug, the orchestrator correlates their findings:
+
+ 1. Collect each agent's root cause hypothesis and evidence.
+ 2. Check for consistency — do the hypotheses align?
+ 3. If agents agree, proceed with the fix.
+ 4. If agents disagree, look at the boundary:
+    - Frontend says "API returns wrong data" + Backend says "query is correct" -> check response serialization.
+    - Backend says "data is correct in DB" + Frontend says "list is empty" -> check API response shape vs frontend expectations (Zod schema mismatch).
+ 5. The true root cause is usually at the boundary between layers where one side's assumption doesn't match the other's reality.