@jgamaraalv/ts-dev-kit 2.0.0 → 2.1.0

This diff shows the changes between publicly released versions of the package, as published to a supported registry. It is provided for informational purposes only.
@@ -12,7 +12,7 @@
   "name": "ts-dev-kit",
   "source": "./",
   "description": "13 specialized agents and 16 skills for TypeScript fullstack development",
-  "version": "2.0.0",
+  "version": "2.1.0",
   "author": {
     "name": "jgamaraalv"
   },
@@ -1,6 +1,6 @@
 {
   "name": "ts-dev-kit",
-  "version": "2.0.0",
+  "version": "2.1.0",
   "description": "13 specialized agents and 16 skills for TypeScript fullstack development with Fastify, Next.js, PostgreSQL, Redis, and more.",
   "author": {
     "name": "jgamaraalv",
package/README.md CHANGED
@@ -100,6 +100,42 @@ claude --plugin-dir ./ts-dev-kit
 
 ---
 
+## Recommended MCP Servers
+
+Some agents and skills reference external MCP tools for documentation lookup, browser debugging, E2E testing, and web fetching. These are **optional** — skills degrade gracefully without them — but installing them unlocks the full experience.
+
+| MCP Server      | Used By                                     | Purpose                            |
+| --------------- | ------------------------------------------- | ---------------------------------- |
+| context7        | Most skills (doc lookup)                    | Query up-to-date library docs      |
+| playwright      | playwright-expert, debugger, test-generator | Browser automation and E2E testing |
+| chrome-devtools | debugger                                    | Frontend debugging, screenshots    |
+| firecrawl       | task skill                                  | Web fetching and scraping          |
+
+### Installing as Claude Code plugins
+
+```bash
+claude plugin add context7
+claude plugin add playwright
+claude plugin add firecrawl
+```
+
+### Installing as standalone MCP servers
+
+```bash
+# context7 — no API key required
+claude mcp add context7 -- npx -y @upstash/context7-mcp@latest
+
+# playwright — no API key required
+claude mcp add playwright -- npx -y @playwright/mcp@latest
+
+# firecrawl — requires FIRECRAWL_API_KEY
+claude mcp add firecrawl --env FIRECRAWL_API_KEY=your-key -- npx -y firecrawl-mcp
+```
+
+> **chrome-devtools** requires Chrome running with remote debugging enabled (`--remote-debugging-port=9222`). Refer to the [Chrome DevTools MCP docs](https://github.com/anthropics/mcp-chrome-devtools) for setup instructions.
+
+---
+
 ## Customizing for Your Project
 
 This kit ships with a project orchestration template at `docs/rules/orchestration.md.template`. It defines quality gates, workspace commands, and dependency ordering that you can adapt to your own monorepo or project.
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@jgamaraalv/ts-dev-kit",
-  "version": "2.0.0",
+  "version": "2.1.0",
   "description": "Claude Code plugin: 13 agents + 16 skills for TypeScript fullstack development",
   "author": "jgamaraalv",
   "license": "MIT",
@@ -278,8 +278,33 @@ Follow this decision in phase 4. In MULTI-ROLE and PLAN modes, delegate applicat
 4. For questions about project libraries, use Context7 (`mcp__context7__resolve-library-id` → `mcp__context7__query-docs`) to query up-to-date documentation. If anything is ambiguous, ask the user before proceeding.
 5. Check for helpful MCPs — does the task involve browser testing, external docs?
 6. Plan the implementation order — determine which changes must happen first (e.g., types before lib before hooks before components before pages).
+7. Generate the verification plan — build a before/after test plan combining task-defined criteria with automatic checks based on domain and available MCPs. See references/verification-protocol.md for the full protocol.
+   - Detect available testing MCPs (playwright, chrome-devtools, or none).
+   - Map the task domain to checks: frontend → visual + performance, backend → API responses, database → schema state.
+   - Always include standard quality gates (lint, build, test) as baseline.
+   - Present the plan:
+     > **Verification plan:**
+     > - Baseline checks: [list]
+     > - MCPs for verification: [list or "none available — shell-only checks"]
+     > - Post-change checks: [list]
 </phase_3_task_analysis>
 
+<phase_3b_baseline_capture>
+Run the verification plan before writing any code to establish the baseline for comparison.
+
+1. Run standard quality gates and record results (pass/fail, counts, bundle sizes).
+2. Run domain-specific checks from the verification plan:
+   - **Frontend**: if playwright/chrome-devtools MCPs are available, navigate to affected pages, capture screenshots, and measure performance (LCP, load time). Otherwise, record build output and bundle sizes.
+   - **Backend**: execute requests to affected endpoints (via curl or available API MCPs) and record response status, payload shape, and timing.
+   - **Database**: record current schema state for affected tables.
+3. Store all baseline values — these are compared against post-change results in phase 5b.
+
+If testing MCPs are not available, skip those checks and note it:
+> **Baseline captured.** MCP-based visual/performance checks skipped — no browser MCPs available.
+
+In MULTI-ROLE mode, the orchestrator runs baseline capture before dispatching any agents.
+</phase_3b_baseline_capture>
+
 <phase_4_execution>
 Before writing any code, check the execution mode decision from phase 2.
 
@@ -345,6 +370,20 @@ If a gate fails:
 - Repeat until all gates pass cleanly.
 </phase_5_quality_gates>
 
+<phase_5b_post_change_verification>
+After all quality gates pass, re-run the verification plan from phase 3b and compare against baseline.
+
+1. Re-run every check from phase 3b with identical parameters.
+2. Compare each result against baseline:
+   - **Quality gates**: must remain passing. New failures = regression.
+   - **Visual checks** (if MCPs available): compare screenshots for unintended changes.
+   - **Performance** (if MCPs available): compare metrics. Regressions > 10% must be investigated.
+   - **API responses**: compare status codes and payload shapes. Breaking changes = regression.
+3. Build the comparison table (see references/output-templates.md for format).
+
+If any regression is found, fix it, re-run phase 5 quality gates, then re-run this phase. Repeat until clean.
+</phase_5b_post_change_verification>
+
 <phase_6_documentation>
 After all quality gates pass, review whether the changes require documentation updates:
 
@@ -356,7 +395,9 @@ Only update documentation directly affected by the changes. Do not create new do
 </workflow>
 
 <output>
-When complete, produce the completion report. See references/output-templates.md for the exact format.
+When complete, produce the completion report including the baseline vs post-change comparison table. See references/output-templates.md for the exact format.
+
+If the task document specifies a results file path, also create the comparison report at that path.
 
 Do not add explanations, caveats, or follow-up suggestions unless the user explicitly asks. The report is the final output.
 </output>
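The baseline-capture step added in phase 3b can be sketched as a small shell routine. This is an illustration only, not code from the package: gates are passed as hypothetical `name:command` pairs, and the commands in the demo (`true`, `false`) stand in for a project's real lint/build/test invocations.

```shell
#!/bin/sh
# Sketch of phase 3b baseline capture: run each quality gate and record
# pass/fail to a baseline file that phase 5b can compare against.
capture_baseline() {
  outfile=$1
  shift
  : > "$outfile"                       # start with an empty baseline file
  for gate in "$@"; do
    name=${gate%%:*}                   # text before the first ":"
    cmd=${gate#*:}                     # text after the first ":"
    if sh -c "$cmd" >/dev/null 2>&1; then
      echo "$name: pass" >> "$outfile"
    else
      echo "$name: fail" >> "$outfile"
    fi
  done
}

# Demo with stand-in gate commands.
capture_baseline baseline.txt "lint:true" "build:true" "test:false"
cat baseline.txt
# prints:
# lint: pass
# build: pass
# test: fail
```

Phase 5b would then re-run the same gates and diff the two files to surface regressions.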
@@ -35,13 +35,15 @@ List every file created/modified/deleted across all roles.
 - test: pass/fail (count)
 - build: pass/fail (per package)
 
-### Task-defined test results
-If the task document defined tests, report results here:
-| Test | Baseline (before) | Result (after) | Delta |
-|------|-------------------|----------------|-------|
-| [test name] | [baseline value] | [post-change value] | [improvement or regression] |
-
-If no task-defined tests existed, omit this section.
+### Verification results (baseline vs post-change)
+| Check | Baseline (before) | Result (after) | Delta | Status |
+|-------|-------------------|----------------|-------|--------|
+| lint | [pass/fail] | [pass/fail] | — | [ok/regression] |
+| build | [pass/fail (size)] | [pass/fail (size)] | [delta] | [ok/improved/regression] |
+| tests | [count] | [count] | [delta] | [ok/improved/regression] |
+| [domain-specific check] | [baseline value] | [post-change value] | [delta] | [ok/improved/regression] |
+
+Include every check from the verification plan. This section is always present.
 
 ### Skills loaded
 List every skill called across all roles.
@@ -0,0 +1,101 @@
+# Verification Protocol
+
+Every task runs a before/after verification cycle: capture a baseline → make changes → verify that no regressions were introduced and the intended improvements were achieved.
+
+## MCP detection
+
+At the start of verification planning, detect which testing MCPs are available by checking for their tool prefixes:
+
+| MCP | Tool prefix | Use for |
+|-----|-------------|---------|
+| Playwright | `mcp__plugin_playwright_playwright__*` | Visual verification, interaction testing, screenshots |
+| Chrome DevTools | `mcp__chrome-devtools__*` | Performance traces, network analysis, console checks |
+| Neither | — | Shell-only checks (curl, CLI commands, build output) |
+
+Use ToolSearch to confirm availability before including MCP-based checks in the plan.
+
+## Domain-specific test catalog
+
+### Frontend tasks
+
+**With browser MCPs (playwright or chrome-devtools):**
+1. Navigate to each affected page/route.
+2. Capture screenshots of key states (idle, loading, error, success).
+3. Measure performance:
+   - Chrome DevTools: `performance_start_trace` → interact → `performance_stop_trace` → `performance_analyze_insight`.
+   - Playwright: `browser_navigate` with timing, `browser_take_screenshot`.
+4. Verify interactive elements: click buttons, fill forms, check navigation flows.
+5. Check browser console for errors or warnings.
+
+**Without browser MCPs:**
+1. Run build and record bundle size (`ls -la` on build output).
+2. Run lint and type checking.
+3. Run existing test suite and record pass/fail counts.
+
+### Backend tasks
+
+**API endpoint testing:**
+1. Identify affected endpoints from the task scope.
+2. Record baseline responses via curl:
+   ```bash
+   curl -s -w "\nHTTP %{http_code} | %{time_total}s" <endpoint>
+   ```
+3. After changes, re-run identical requests and compare status codes, response shapes, and timing.
+
+**With Postman/API MCPs (if available):**
+- Use the MCP to send structured requests and validate response schemas.
+
+### Database tasks
+
+1. Record current schema state for affected tables (e.g., `\d+ table_name` in psql).
+2. After migration, verify the schema matches expectations.
+3. Run sample queries to verify data integrity.
+
+### Shared/library tasks
+
+1. Run type checking across all dependent packages.
+2. Run tests in all packages that import the changed code.
+3. Verify no new type errors are introduced downstream.
+
+## Baseline capture protocol
+
+1. Run standard quality gates first: lint, build, test — record pass/fail and counts.
+2. Run domain-specific checks from the catalog above.
+3. Store results in a structured format for comparison:
+
+```
+Baseline:
+- lint: pass
+- build: pass (bundle: 245KB)
+- tests: 42 passed, 0 failed
+- page /receive: screenshot captured, LCP 1.2s
+- GET /api/resource: 200, 45ms
+```
+
+## Post-change verification protocol
+
+1. Re-run every baseline check with identical parameters.
+2. Build the comparison table:
+
+| Check | Baseline | After | Delta | Status |
+|-------|----------|-------|-------|--------|
+| lint | pass | pass | — | ok |
+| build | pass (245KB) | pass (238KB) | -7KB | improved |
+| tests | 42 pass | 45 pass | +3 | improved |
+| LCP /receive | 1.2s | 0.9s | -0.3s | improved |
+| GET /api/resource | 200, 45ms | 200, 42ms | -3ms | ok |
+
+3. Status classification:
+   - **ok**: no change or negligible change.
+   - **improved**: measurable improvement.
+   - **regression**: measurable degradation — must be fixed before task completion.
+
+4. If regressions exist, fix and re-verify until clean.
+
+## Results file
+
+When the task document specifies a results file path, create a markdown file at that path containing:
+1. Summary of changes made.
+2. Full comparison table.
+3. Key improvements highlighted.
+4. Any regressions that were found and how they were resolved.
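For numeric, lower-is-better checks (bundle size, LCP, response time), the protocol's status classification can be sketched as a tiny shell helper. This is illustrative only: it applies the protocol's ">10%" investigation cutoff as a hard regression threshold, and treating sub-threshold slowdowns as "ok" is this sketch's assumption, not something the protocol spells out.

```shell
#!/bin/sh
# Classify a numeric check where lower is better (size in KB, time in ms).
# delta is the change as an integer percentage of the baseline value.
classify() {
  baseline=$1
  after=$2
  delta=$(( (after - baseline) * 100 / baseline ))
  if [ "$delta" -gt 10 ]; then
    echo regression           # more than 10% worse
  elif [ "$after" -lt "$baseline" ]; then
    echo improved             # measurably better
  else
    echo ok                   # unchanged or within threshold (assumption)
  fi
}

classify 245 238   # bundle 245KB -> 238KB: improved
classify 100 125   # 25% slower: regression
classify 100 105   # 5% slower, within threshold: ok
```

Pass/fail gates and screenshot diffs don't fit this numeric scheme and would still need their own comparison logic.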