devlyn-cli 1.14.0 → 2.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (148)
  1. package/AGENTS.md +104 -0
  2. package/CLAUDE.md +112 -119
  3. package/README.md +43 -125
  4. package/benchmark/auto-resolve/BENCHMARK-DESIGN.md +272 -0
  5. package/benchmark/auto-resolve/README.md +114 -0
  6. package/benchmark/auto-resolve/RUBRIC.md +162 -0
  7. package/benchmark/auto-resolve/fixtures/F1-cli-trivial-flag/NOTES.md +30 -0
  8. package/benchmark/auto-resolve/fixtures/F1-cli-trivial-flag/expected.json +68 -0
  9. package/benchmark/auto-resolve/fixtures/F1-cli-trivial-flag/metadata.json +10 -0
  10. package/benchmark/auto-resolve/fixtures/F1-cli-trivial-flag/setup.sh +4 -0
  11. package/benchmark/auto-resolve/fixtures/F1-cli-trivial-flag/spec.md +45 -0
  12. package/benchmark/auto-resolve/fixtures/F1-cli-trivial-flag/task.txt +8 -0
  13. package/benchmark/auto-resolve/fixtures/F2-cli-medium-subcommand/NOTES.md +54 -0
  14. package/benchmark/auto-resolve/fixtures/F2-cli-medium-subcommand/expected-pair-plan-registry.json +170 -0
  15. package/benchmark/auto-resolve/fixtures/F2-cli-medium-subcommand/expected.json +84 -0
  16. package/benchmark/auto-resolve/fixtures/F2-cli-medium-subcommand/metadata.json +21 -0
  17. package/benchmark/auto-resolve/fixtures/F2-cli-medium-subcommand/pair-plan.sample-fail.json +214 -0
  18. package/benchmark/auto-resolve/fixtures/F2-cli-medium-subcommand/pair-plan.sample-pass.json +223 -0
  19. package/benchmark/auto-resolve/fixtures/F2-cli-medium-subcommand/setup.sh +5 -0
  20. package/benchmark/auto-resolve/fixtures/F2-cli-medium-subcommand/spec.md +56 -0
  21. package/benchmark/auto-resolve/fixtures/F2-cli-medium-subcommand/task.txt +14 -0
  22. package/benchmark/auto-resolve/fixtures/F3-backend-contract-risk/NOTES.md +28 -0
  23. package/benchmark/auto-resolve/fixtures/F3-backend-contract-risk/expected-pair-plan-registry.json +162 -0
  24. package/benchmark/auto-resolve/fixtures/F3-backend-contract-risk/expected.json +65 -0
  25. package/benchmark/auto-resolve/fixtures/F3-backend-contract-risk/metadata.json +19 -0
  26. package/benchmark/auto-resolve/fixtures/F3-backend-contract-risk/setup.sh +4 -0
  27. package/benchmark/auto-resolve/fixtures/F3-backend-contract-risk/spec.md +56 -0
  28. package/benchmark/auto-resolve/fixtures/F3-backend-contract-risk/task.txt +9 -0
  29. package/benchmark/auto-resolve/fixtures/F4-web-browser-design/NOTES.md +40 -0
  30. package/benchmark/auto-resolve/fixtures/F4-web-browser-design/expected.json +57 -0
  31. package/benchmark/auto-resolve/fixtures/F4-web-browser-design/metadata.json +10 -0
  32. package/benchmark/auto-resolve/fixtures/F4-web-browser-design/setup.sh +6 -0
  33. package/benchmark/auto-resolve/fixtures/F4-web-browser-design/spec.md +49 -0
  34. package/benchmark/auto-resolve/fixtures/F4-web-browser-design/task.txt +9 -0
  35. package/benchmark/auto-resolve/fixtures/F5-fix-loop-red-green/NOTES.md +38 -0
  36. package/benchmark/auto-resolve/fixtures/F5-fix-loop-red-green/expected.json +65 -0
  37. package/benchmark/auto-resolve/fixtures/F5-fix-loop-red-green/metadata.json +10 -0
  38. package/benchmark/auto-resolve/fixtures/F5-fix-loop-red-green/setup.sh +55 -0
  39. package/benchmark/auto-resolve/fixtures/F5-fix-loop-red-green/spec.md +49 -0
  40. package/benchmark/auto-resolve/fixtures/F5-fix-loop-red-green/task.txt +7 -0
  41. package/benchmark/auto-resolve/fixtures/F6-dep-audit-native-module/NOTES.md +38 -0
  42. package/benchmark/auto-resolve/fixtures/F6-dep-audit-native-module/expected.json +77 -0
  43. package/benchmark/auto-resolve/fixtures/F6-dep-audit-native-module/metadata.json +10 -0
  44. package/benchmark/auto-resolve/fixtures/F6-dep-audit-native-module/setup.sh +4 -0
  45. package/benchmark/auto-resolve/fixtures/F6-dep-audit-native-module/spec.md +49 -0
  46. package/benchmark/auto-resolve/fixtures/F6-dep-audit-native-module/task.txt +10 -0
  47. package/benchmark/auto-resolve/fixtures/F7-out-of-scope-trap/NOTES.md +50 -0
  48. package/benchmark/auto-resolve/fixtures/F7-out-of-scope-trap/expected.json +76 -0
  49. package/benchmark/auto-resolve/fixtures/F7-out-of-scope-trap/metadata.json +10 -0
  50. package/benchmark/auto-resolve/fixtures/F7-out-of-scope-trap/setup.sh +36 -0
  51. package/benchmark/auto-resolve/fixtures/F7-out-of-scope-trap/spec.md +46 -0
  52. package/benchmark/auto-resolve/fixtures/F7-out-of-scope-trap/task.txt +7 -0
  53. package/benchmark/auto-resolve/fixtures/F8-known-limit-ambiguous/NOTES.md +50 -0
  54. package/benchmark/auto-resolve/fixtures/F8-known-limit-ambiguous/expected.json +63 -0
  55. package/benchmark/auto-resolve/fixtures/F8-known-limit-ambiguous/metadata.json +10 -0
  56. package/benchmark/auto-resolve/fixtures/F8-known-limit-ambiguous/setup.sh +4 -0
  57. package/benchmark/auto-resolve/fixtures/F8-known-limit-ambiguous/spec.md +48 -0
  58. package/benchmark/auto-resolve/fixtures/F8-known-limit-ambiguous/task.txt +1 -0
  59. package/benchmark/auto-resolve/fixtures/F9-e2e-ideate-to-resolve/NOTES.md +93 -0
  60. package/benchmark/auto-resolve/fixtures/F9-e2e-ideate-to-resolve/expected.json +74 -0
  61. package/benchmark/auto-resolve/fixtures/F9-e2e-ideate-to-resolve/metadata.json +10 -0
  62. package/benchmark/auto-resolve/fixtures/F9-e2e-ideate-to-resolve/setup.sh +28 -0
  63. package/benchmark/auto-resolve/fixtures/F9-e2e-ideate-to-resolve/spec.md +62 -0
  64. package/benchmark/auto-resolve/fixtures/F9-e2e-ideate-to-resolve/task.txt +5 -0
  65. package/benchmark/auto-resolve/fixtures/SCHEMA.md +130 -0
  66. package/benchmark/auto-resolve/fixtures/test-repo/README.md +27 -0
  67. package/benchmark/auto-resolve/fixtures/test-repo/bin/cli.js +63 -0
  68. package/benchmark/auto-resolve/fixtures/test-repo/package-lock.json +823 -0
  69. package/benchmark/auto-resolve/fixtures/test-repo/package.json +22 -0
  70. package/benchmark/auto-resolve/fixtures/test-repo/playwright.config.js +17 -0
  71. package/benchmark/auto-resolve/fixtures/test-repo/server/index.js +37 -0
  72. package/benchmark/auto-resolve/fixtures/test-repo/tests/cli.test.js +25 -0
  73. package/benchmark/auto-resolve/fixtures/test-repo/tests/server.test.js +58 -0
  74. package/benchmark/auto-resolve/fixtures/test-repo/web/index.html +37 -0
  75. package/benchmark/auto-resolve/scripts/build-pair-eligible-manifest.py +174 -0
  76. package/benchmark/auto-resolve/scripts/check-f9-artifacts.py +256 -0
  77. package/benchmark/auto-resolve/scripts/compile-report.py +331 -0
  78. package/benchmark/auto-resolve/scripts/iter-0033c-compare.py +552 -0
  79. package/benchmark/auto-resolve/scripts/judge-opus-pass.sh +430 -0
  80. package/benchmark/auto-resolve/scripts/judge.sh +359 -0
  81. package/benchmark/auto-resolve/scripts/oracle-scope-tier-a.py +260 -0
  82. package/benchmark/auto-resolve/scripts/oracle-scope-tier-b.py +274 -0
  83. package/benchmark/auto-resolve/scripts/oracle-test-fidelity.py +328 -0
  84. package/benchmark/auto-resolve/scripts/pair-plan-idgen.py +401 -0
  85. package/benchmark/auto-resolve/scripts/pair-plan-lint.py +468 -0
  86. package/benchmark/auto-resolve/scripts/run-fixture.sh +691 -0
  87. package/benchmark/auto-resolve/scripts/run-iter-0033c.sh +234 -0
  88. package/benchmark/auto-resolve/scripts/run-suite.sh +214 -0
  89. package/benchmark/auto-resolve/scripts/ship-gate.py +222 -0
  90. package/bin/devlyn.js +129 -17
  91. package/config/skills/_shared/adapters/README.md +64 -0
  92. package/config/skills/_shared/adapters/gpt-5-5.md +29 -0
  93. package/config/skills/_shared/adapters/opus-4-7.md +29 -0
  94. package/config/skills/_shared/archive_run.py +130 -0
  95. package/config/skills/_shared/codex-config.md +54 -0
  96. package/config/skills/_shared/codex-monitored.sh +141 -0
  97. package/config/skills/_shared/engine-preflight.md +35 -0
  98. package/config/skills/_shared/expected.schema.json +93 -0
  99. package/config/skills/_shared/pair-plan-schema.md +298 -0
  100. package/config/skills/_shared/runtime-principles.md +110 -0
  101. package/config/skills/_shared/spec-verify-check.py +519 -0
  102. package/config/skills/devlyn:ideate/SKILL.md +99 -481
  103. package/config/skills/devlyn:ideate/references/elicitation.md +97 -0
  104. package/config/skills/devlyn:ideate/references/from-spec-mode.md +54 -0
  105. package/config/skills/devlyn:ideate/references/project-mode.md +76 -0
  106. package/config/skills/devlyn:ideate/references/spec-template.md +102 -0
  107. package/config/skills/devlyn:resolve/SKILL.md +172 -184
  108. package/config/skills/devlyn:resolve/references/free-form-mode.md +68 -0
  109. package/config/skills/devlyn:resolve/references/phases/build-gate.md +45 -0
  110. package/config/skills/devlyn:resolve/references/phases/cleanup.md +39 -0
  111. package/config/skills/devlyn:resolve/references/phases/implement.md +42 -0
  112. package/config/skills/devlyn:resolve/references/phases/plan.md +42 -0
  113. package/config/skills/devlyn:resolve/references/phases/verify.md +69 -0
  114. package/config/skills/devlyn:resolve/references/state-schema.md +106 -0
  115. package/{config/skills → optional-skills}/devlyn:design-system/SKILL.md +1 -0
  116. package/optional-skills/devlyn:reap/SKILL.md +105 -0
  117. package/optional-skills/devlyn:reap/scripts/reap.sh +129 -0
  118. package/optional-skills/devlyn:reap/scripts/scan.sh +116 -0
  119. package/{config/skills → optional-skills}/devlyn:team-design-ui/SKILL.md +5 -0
  120. package/package.json +16 -2
  121. package/scripts/lint-skills.sh +431 -0
  122. package/config/skills/devlyn:auto-resolve/SKILL.md +0 -602
  123. package/config/skills/devlyn:auto-resolve/references/build-gate.md +0 -116
  124. package/config/skills/devlyn:auto-resolve/references/engine-routing.md +0 -204
  125. package/config/skills/devlyn:browser-validate/SKILL.md +0 -164
  126. package/config/skills/devlyn:browser-validate/references/flow-testing.md +0 -118
  127. package/config/skills/devlyn:browser-validate/references/tier1-chrome.md +0 -137
  128. package/config/skills/devlyn:browser-validate/references/tier2-playwright.md +0 -195
  129. package/config/skills/devlyn:browser-validate/references/tier3-curl.md +0 -57
  130. package/config/skills/devlyn:clean/SKILL.md +0 -285
  131. package/config/skills/devlyn:design-ui/SKILL.md +0 -351
  132. package/config/skills/devlyn:discover-product/SKILL.md +0 -124
  133. package/config/skills/devlyn:evaluate/SKILL.md +0 -564
  134. package/config/skills/devlyn:feature-spec/SKILL.md +0 -630
  135. package/config/skills/devlyn:ideate/references/challenge-rubric.md +0 -122
  136. package/config/skills/devlyn:ideate/references/templates/item-spec.md +0 -90
  137. package/config/skills/devlyn:implement-ui/SKILL.md +0 -466
  138. package/config/skills/devlyn:preflight/SKILL.md +0 -370
  139. package/config/skills/devlyn:preflight/references/auditors/browser-auditor.md +0 -32
  140. package/config/skills/devlyn:preflight/references/auditors/code-auditor.md +0 -90
  141. package/config/skills/devlyn:preflight/references/auditors/docs-auditor.md +0 -38
  142. package/config/skills/devlyn:product-spec/SKILL.md +0 -603
  143. package/config/skills/devlyn:recommend-features/SKILL.md +0 -286
  144. package/config/skills/devlyn:review/SKILL.md +0 -161
  145. package/config/skills/devlyn:team-resolve/SKILL.md +0 -631
  146. package/config/skills/devlyn:team-review/SKILL.md +0 -493
  147. package/config/skills/devlyn:update-docs/SKILL.md +0 -463
  148. package/config/skills/workflow-routing/SKILL.md +0 -73
@@ -1,137 +0,0 @@
- # Tier 1: Chrome DevTools (claude-in-chrome)
-
- The richest testing tier. Requires the claude-in-chrome MCP extension running in a Chrome browser. Provides full DOM interaction, console monitoring, network inspection, screenshots, and GIF recording.
-
- Read this file only when Tier 1 was selected during DETECT phase.
-
- ---
-
- ## Setup
-
- Before any browser interaction, load the tools you need via ToolSearch:
- ```
- ToolSearch: "select:mcp__claude-in-chrome__tabs_context_mcp"
- ToolSearch: "select:mcp__claude-in-chrome__tabs_create_mcp"
- ToolSearch: "select:mcp__claude-in-chrome__navigate"
- ToolSearch: "select:mcp__claude-in-chrome__get_page_text"
- ToolSearch: "select:mcp__claude-in-chrome__read_page"
- ToolSearch: "select:mcp__claude-in-chrome__find"
- ToolSearch: "select:mcp__claude-in-chrome__computer"
- ToolSearch: "select:mcp__claude-in-chrome__form_input"
- ToolSearch: "select:mcp__claude-in-chrome__resize_window"
- ToolSearch: "select:mcp__claude-in-chrome__read_console_messages"
- ToolSearch: "select:mcp__claude-in-chrome__read_network_requests"
- ToolSearch: "select:mcp__claude-in-chrome__gif_creator"
- ToolSearch: "select:mcp__claude-in-chrome__javascript_tool"
- ```
-
- Then call `tabs_context_mcp` first to understand current browser state. Create a new tab for testing — never reuse existing user tabs.
-
- ## Tool Mapping by Action
-
- ### Navigate to a page
- ```
- tabs_create_mcp → create new tab with URL http://localhost:{PORT}{route}
- OR
- navigate → go to URL in existing tab
- ```
- After navigating, wait 2-3 seconds for client-side rendering, then call `get_page_text` to verify content loaded.
-
- ### Check if page rendered
- ```
- get_page_text → extract visible text content
- ```
- Read the text and judge: is this the actual application, or an error/fallback page? Browser error pages, framework error overlays, "Unable to connect" screens, and empty shells all have text — but they're not the app. If the page content doesn't look like what the application is supposed to show, it's a failure.
-
- ### Read page structure
- ```
- read_page → get DOM structure and layout info
- ```
- Use this to understand component hierarchy before interacting.
-
- ### Find interactive elements
- ```
- find → locate buttons, links, inputs by text content or attributes
- ```
- Returns element positions for clicking.
-
- ### Click elements
- ```
- computer → click at coordinates returned by find
- ```
- After clicking, wait 1-2 seconds, then check console + network for errors.
-
- ### Fill form fields
- ```
- form_input → set values on input fields, selects, textareas
- ```
- Identify fields with `find` first, then use `form_input` with the field selector.
-
- ### Take screenshots
- ```
- computer → screenshot action captures the visible viewport
- ```
- Save screenshots into the topic-scoped directory that PHASE 1 set up (`.devlyn/screenshots/<topic-slug>/`), organized by phase:
- - Smoke: `<topic>/smoke/<route>.png` — e.g., `smoke/root.png`, `smoke/dashboard.png`
- - Feature: `<topic>/feature/<criterion>-step<N>.png` — e.g., `feature/create-project-step3.png`
- - Visual: `<topic>/visual/<viewport>-<route>.png` — e.g., `visual/mobile-dashboard.png`
-
- Since `computer → screenshot` writes to a default location, move/rename the captured file into the right subdirectory immediately after taking it, so evidence paths in the report match this scheme.
-
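For reference, the evidence-path scheme the deleted section above describes can be sketched in Python; `evidence_path` and its arguments are illustrative names, not part of the package:

```python
from pathlib import Path

def evidence_path(base: str, topic: str, phase: str, name: str) -> Path:
    """Build a screenshot path following <base>/<topic>/<phase>/<name>.png."""
    if phase not in ("smoke", "feature", "visual"):
        raise ValueError(f"unknown phase: {phase}")
    path = Path(base) / topic / phase / f"{name}.png"
    # Create the phase subdirectory on first use so the move/rename step can't fail
    path.parent.mkdir(parents=True, exist_ok=True)
    return path
```

Moving each default-location capture to the result of `evidence_path(...)` right after taking it keeps the evidence paths in the report consistent with the scheme.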
- ### Resize viewport
- ```
- resize_window → set width and height
- ```
- Mobile: `resize_window(375, 812)`. Desktop: `resize_window(1280, 800)`.
-
- ### Read console messages
- ```
- read_console_messages → get all console output
- ```
- Use `pattern` parameter to filter. Useful patterns:
- - `"error|Error|ERROR"` — catch errors
- - `"warn|Warning"` — catch warnings
- - Exclude known noise: React dev warnings (`"Warning: "` prefix), HMR messages (`"[vite]"`, `"[HMR]"`, `"[Fast Refresh]"`), favicon 404s
-
- ### Read network requests
- ```
- read_network_requests → get all HTTP requests with status codes
- ```
- Flag: any request with status 4xx or 5xx (excluding `/favicon.ico`). Flag: any CORS error. Ignore: HMR websocket connections, source map requests (`.map`).
-
- ### Record multi-step flows
- ```
- gif_creator → record a sequence of actions as an animated GIF
- ```
- Use for flow tests with 3+ steps. Capture extra frames before and after actions for smooth playback. Name meaningfully: `flow-user-registration.gif`.
-
- ### Run custom assertions
- ```
- javascript_tool → execute JS in the page context
- ```
- Useful for checking specific DOM state that other tools can't easily verify:
- - `document.querySelectorAll('.error-message').length` — count error elements
- - `window.__NEXT_DATA__` — check Next.js hydration data
- - `document.title` — verify page title
-
- Avoid triggering alerts or confirms — they block the extension. Use `console.log` + `read_console_messages` instead.
-
- ## Error Filtering
-
- Not every console message is a real problem. Apply these filters:
-
- **Ignore (dev noise)**:
- - `[HMR]`, `[vite]`, `[Fast Refresh]`, `[webpack-dev-server]`
- - `Warning: ReactDOM.render is no longer supported` (React 18 dev warning)
- - `Download the React DevTools`
- - `/favicon.ico` 404
- - Source map warnings
-
- **Flag as errors**:
- - `Uncaught` anything
- - `TypeError`, `ReferenceError`, `SyntaxError`
- - `Failed to fetch` (network errors)
- - `CORS` errors
- - `Hydration` mismatches
- - `ChunkLoadError` (code splitting failures)
- - Any `console.error` call from application code
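A minimal sketch of these filter rules in Python; the pattern lists mirror the Ignore/Flag bullets above, and `classify` is an illustrative name rather than anything shipped in the package:

```python
import re

# Mirrors the "Ignore (dev noise)" list above
IGNORE = [r"\[HMR\]", r"\[vite\]", r"\[Fast Refresh\]", r"\[webpack-dev-server\]",
          r"Warning: ReactDOM\.render is no longer supported",
          r"Download the React DevTools", r"favicon\.ico", r"[Ss]ource map"]
# Mirrors the "Flag as errors" list above
FLAG = [r"Uncaught", r"TypeError", r"ReferenceError", r"SyntaxError",
        r"Failed to fetch", r"CORS", r"Hydration", r"ChunkLoadError"]

def classify(message: str) -> str:
    """Return 'ignore', 'error', or 'unclassified' for one console message."""
    if any(re.search(p, message) for p in IGNORE):
        return "ignore"
    if any(re.search(p, message) for p in FLAG):
        return "error"
    return "unclassified"
```

Checking the ignore list first matters: a dev-server warning that happens to contain the word "error" should still be dropped as noise.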
@@ -1,195 +0,0 @@
- # Tier 2: Playwright (Headless Browser)
-
- Solid middle-ground tier. No browser extension needed — works in CI, SSH, Docker, and headless environments. Provides DOM interaction, console monitoring, screenshots, and network inspection. No GIF recording.
-
- Read this file only when Tier 2 was selected during DETECT phase.
-
- ---
-
- ## Two Modes
-
- Playwright Tier 2 has two sub-modes depending on what's available. The skill auto-detects which to use.
-
- ### Mode A: Playwright MCP (preferred)
-
- If `mcp__playwright__*` tools are available (installed via `npx devlyn-cli` → select "playwright" MCP), use them directly. This gives interactive browser control similar to Tier 1:
-
- - `mcp__playwright__browser_navigate` — navigate to URL
- - `mcp__playwright__browser_screenshot` — capture screenshot
- - `mcp__playwright__browser_click` — click elements
- - `mcp__playwright__browser_type` — type into inputs
- - `mcp__playwright__browser_console` — read console messages
- - `mcp__playwright__browser_network` — read network requests
- - `mcp__playwright__browser_resize` — resize viewport
-
- When Playwright MCP is available, follow the same interaction pattern as Tier 1 (navigate → check → interact → screenshot) but using `mcp__playwright__*` tools instead of `mcp__claude-in-chrome__*`.
-
- Load tools via ToolSearch before use: `ToolSearch: "select:mcp__playwright__browser_navigate"` etc.
-
- ### Mode B: Script Generation (fallback)
-
- If Playwright MCP is not installed but `npx playwright` CLI is available, generate and execute test scripts. This is the approach documented below.
-
- ## Setup (Mode B only)
-
- Playwright runs via `npx` with auto-download. No global install needed. If browsers aren't installed yet:
- ```bash
- npx playwright install chromium 2>/dev/null
- ```
- This downloads only Chromium (~130MB), not all browsers. It's a one-time cost.
-
- ## Approach (Mode B)
-
- Generate a temporary test script from the test steps, run it with Playwright's JSON reporter, then parse the results. This avoids needing a persistent test infrastructure — the script is created, executed, and cleaned up.
-
- ## Script Generation
-
- For each phase (smoke, flow, visual), generate a test script at `.devlyn/browser-test.spec.ts`.
-
- ### Smoke Test Script Template
-
- ```typescript
- import { test, expect } from '@playwright/test';
-
- const PORT = {PORT};
- const ROUTES = {ROUTES_JSON_ARRAY};
-
- test.describe('Smoke Tests', () => {
-   for (const route of ROUTES) {
-     test(`smoke: ${route}`, async ({ page }) => {
-       const errors: string[] = [];
-       const failedRequests: string[] = [];
-
-       page.on('console', msg => {
-         if (msg.type() === 'error') errors.push(msg.text());
-       });
-
-       page.on('response', response => {
-         if (response.status() >= 400 && !response.url().includes('favicon')) {
-           failedRequests.push(`${response.status()} ${response.url()}`);
-         }
-       });
-
-       // If goto throws (connection refused), the test fails — that's correct behavior
-       await page.goto(`http://localhost:${PORT}${route}`, { waitUntil: 'networkidle', timeout: 15000 });
-
-       // Verify this is the actual application, not an error page.
-       // When a server is down or a route is broken, the browser shows an error page
-       // that still has text content — "Unable to connect", "This site can't be reached", etc.
-       // A naive length check would pass on these. The title is the best signal:
-       // browser error pages have titles like "Problem loading page" or the URL itself,
-       // while real apps have meaningful titles set by the application.
-       const title = await page.title();
-       const bodyText = await page.textContent('body') || '';
-
-       // Page must have substantive content
-       expect(bodyText.trim().length, 'Page body is empty').toBeGreaterThan(0);
-
-       // Fail if the page navigation itself failed (Playwright sets title to the URL on error)
-       const pageUrl = page.url();
-       expect(title, 'Page shows a browser error — server may be down').not.toBe(pageUrl);
-
-       await page.screenshot({ path: `${SCREENSHOT_DIR}/smoke/${route.replace(/^\//, '').replace(/\//g, '-') || 'root'}.png`, fullPage: true });
-       // SCREENSHOT_DIR is the topic-scoped dir set up in PHASE 1 of SKILL.md
-       // (e.g., .devlyn/screenshots/add-login-page). Inject it at test-generation
-       // time so every test writes into the same per-run folder.
-
-       if (errors.length > 0) {
-         test.info().annotations.push({ type: 'console_errors', description: errors.join(' | ') });
-       }
-       if (failedRequests.length > 0) {
-         test.info().annotations.push({ type: 'network_failures', description: failedRequests.join(' | ') });
-       }
-
-       expect(errors.filter(e => !e.includes('[HMR]') && !e.includes('favicon'))).toHaveLength(0);
-       expect(failedRequests).toHaveLength(0);
-     });
-   }
- });
- ```
-
- ### Flow Test Script Template
-
- For each flow test step from done-criteria, generate a test block:
-
- ```typescript
- test('flow: [criterion description]', async ({ page }) => {
-   // Navigate
-   await page.goto(`http://localhost:${PORT}{start_route}`);
-
-   // Find and interact
-   await page.click('[text or selector]');
-   await page.fill('[selector]', '[value]');
-   await page.click('[submit selector]');
-
-   // Verify
-   await expect(page.locator('[verification selector]')).toBeVisible();
-
-   // Screenshot
-   await page.screenshot({ path: `${SCREENSHOT_DIR}/feature/[criterion-slug]-step[N].png` });
- });
- ```
-
- ### Visual Test Script Template
-
- ```typescript
- test.describe('Visual - Mobile', () => {
-   test.use({ viewport: { width: 375, height: 812 } });
-   for (const route of ROUTES) {
-     test(`visual-mobile: ${route}`, async ({ page }) => {
-       await page.goto(`http://localhost:${PORT}${route}`, { waitUntil: 'networkidle' });
-       await page.screenshot({ path: `${SCREENSHOT_DIR}/visual/mobile-${route.replace(/^\//, '').replace(/\//g, '-') || 'root'}.png`, fullPage: true });
-     });
-   }
- });
-
- test.describe('Visual - Desktop', () => {
-   test.use({ viewport: { width: 1280, height: 800 } });
-   for (const route of ROUTES) {
-     test(`visual-desktop: ${route}`, async ({ page }) => {
-       await page.goto(`http://localhost:${PORT}${route}`, { waitUntil: 'networkidle' });
-       await page.screenshot({ path: `${SCREENSHOT_DIR}/visual/desktop-${route.replace(/^\//, '').replace(/\//g, '-') || 'root'}.png`, fullPage: true });
-     });
-   }
- });
- ```
-
- ## Execution
-
- ```bash
- mkdir -p "$SCREENSHOT_DIR"/{smoke,feature,visual}
- npx playwright test .devlyn/browser-test.spec.ts \
-   --reporter=json \
-   --output=.devlyn/playwright-results \
-   2>&1 | tee .devlyn/playwright-output.json
- ```
-
- ## Parsing Results
-
- Read `.devlyn/playwright-output.json`. The JSON structure contains:
- - `suites[].specs[].tests[].results[].status` — `"passed"`, `"failed"`, `"timedOut"`
- - `suites[].specs[].tests[].results[].errors` — error messages with stack traces
- - `suites[].specs[].tests[].annotations` — custom annotations (console_errors, network_failures)
-
- Map these to BROWSER-RESULTS.md findings:
- - `failed` → route fails smoke, include error message
- - Annotations with `console_errors` → list in Runtime Errors section
- - Annotations with `network_failures` → list in Failed Network Requests section
-
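A sketch of that mapping in Python, assuming only the report shape listed above (real Playwright JSON reports can also nest suites recursively, which this sketch does not handle); `summarize` is an illustrative name:

```python
def summarize(report: dict) -> dict:
    """Collect failures and custom annotations from a Playwright JSON report."""
    failed, console_errors, network_failures = [], [], []
    for suite in report.get("suites", []):
        for spec in suite.get("specs", []):
            for t in spec.get("tests", []):
                # failed/timedOut results mark the route as failing smoke
                for r in t.get("results", []):
                    if r.get("status") in ("failed", "timedOut"):
                        failed.append(spec.get("title", "?"))
                # annotations carry the console/network evidence
                for a in t.get("annotations", []):
                    if a.get("type") == "console_errors":
                        console_errors.append(a.get("description", ""))
                    elif a.get("type") == "network_failures":
                        network_failures.append(a.get("description", ""))
    return {"failed": failed, "console_errors": console_errors,
            "network_failures": network_failures}
```

The three returned lists correspond one-to-one with the report sections named above.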
- ## Cleanup
-
- After parsing results:
- ```bash
- rm -f .devlyn/browser-test.spec.ts
- rm -rf .devlyn/playwright-results
- rm -f .devlyn/playwright-output.json
- ```
-
- Keep `$SCREENSHOT_DIR` (`.devlyn/screenshots/<topic-slug>/`) — those are evidence referenced by the report. Don't touch other topics' directories.
-
- ## Limitations vs Tier 1
-
- - No GIF recording (can't capture multi-step flow animations)
- - No live DOM exploration (tests are scripted, not interactive)
- - Screenshots are full-page captures, not viewport-specific (use `fullPage: true`)
- - Console filtering is code-based (less flexible than chrome MCP pattern matching)
@@ -1,57 +0,0 @@
- # Tier 3: HTTP Smoke (curl)
-
- Bare-minimum fallback. No browser, no JavaScript execution, no interaction testing. This tier confirms the dev server responds and pages return valid HTML. It catches "app doesn't start" and "page returns 500" but nothing subtler.
-
- Read this file only when Tier 3 was selected during DETECT phase.
-
- ---
-
- ## What You Can Test
-
- - Server responds on the expected port
- - Pages return HTTP 200
- - HTML contains a `<body>` with content (not an empty shell)
- - No server-side error indicators in the HTML
-
- ## What You Cannot Test
-
- - Client-side rendering (SPA content won't appear in curl output)
- - JavaScript errors or console output
- - Network requests made by the client
- - Interactive elements (forms, buttons, navigation)
- - Visual layout or responsive behavior
- - Screenshots
-
- ## Smoke Test
-
- For each affected route:
-
- ```bash
- # Check HTTP status
- STATUS=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:{PORT}{route} --max-time 10)
-
- # Get HTML content
- HTML=$(curl -s http://localhost:{PORT}{route} --max-time 10)
- ```
-
- ### Pass Criteria
-
- A route passes if:
- 1. curl succeeds (doesn't error out with connection refused or timeout)
- 2. `STATUS` is `200` (or `301`, `302`, `304`) — not `000`, not `5xx`
- 3. HTML contains a `<body` tag
- 4. HTML body has more than 100 characters of text content (not just empty divs)
- 5. HTML does not contain server error indicators: `Internal Server Error`, `500`, `ECONNREFUSED`, `Cannot GET`, `404`
-
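The status and content checks above (everything except curl itself succeeding, which the caller observes directly) can be sketched as a pure function. Assumptions: `route_passes` is an illustrative name, and the bare `500`/`404` substring checks from the indicator list are left out because they match ordinary page content:

```python
import re

def route_passes(status: str, html: str) -> bool:
    """Apply the Tier 3 pass checks to one route.

    `status` is curl's %{http_code} output as a string; `html` is the raw body.
    """
    # Acceptable status codes (not 000, not 5xx)
    if status not in ("200", "301", "302", "304"):
        return False
    # A <body tag must be present
    if "<body" not in html.lower():
        return False
    # More than 100 characters of text content once tags are stripped
    text = re.sub(r"<[^>]+>", " ", html)
    if len(text.strip()) <= 100:
        return False
    # No server error indicators in the body
    indicators = ("Internal Server Error", "ECONNREFUSED", "Cannot GET")
    return not any(i in html for i in indicators)
```

An SPA shell fails the text-length check here, which matches the note below: at this tier it should be reported as "not verifiable" rather than silently passed by length alone.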
- ### Parsing HTML Content
-
- Since curl returns raw HTML (no JS execution), for SPAs the body may only contain a root `<div id="root"></div>` or `<div id="__next"></div>`. This is normal and counts as a PASS for Tier 3 — note it as "SPA shell detected, client-side rendering not verifiable at this tier."
-
- For SSR frameworks (Next.js with server components, Nuxt, Astro), the HTML should contain actual rendered content.
-
- ## Report Adjustments
-
- When writing BROWSER-RESULTS.md from Tier 3:
- - Set confidence level to LOW
- - Leave Console Errors, Network Failures, Flow Tests, and Visual Check sections as "N/A — Tier 3 (HTTP only)"
- - Note the limitation: "Tier 3 testing provides HTTP-level validation only. Client-side behavior, JavaScript errors, and visual rendering were not tested. For comprehensive browser validation, install the claude-in-chrome extension (Tier 1) or Playwright (Tier 2)."
@@ -1,285 +0,0 @@
1
- ---
2
- description: Detect and remove dead code, unused dependencies, complexity hotspots, and tech debt. Keeps your codebase lean and maintainable.
3
- allowed-tools: Read, Write, Edit, Glob, Grep, Bash(ls:*), Bash(test:*), Bash(git log:*), Bash(git blame:*), Bash(wc:*), Bash(npm:*), Bash(npx:*), Bash(pnpm:*), Bash(yarn:*), Bash(cargo:*), Bash(pip:*), Bash(go:*), Bash(node -e:*), Bash(python -c:*)
4
- argument-hint: [focus area, or empty for full scan]
5
- ---
6
-
7
- <role>
8
- You are a Codebase Health Engineer. Your job is to find and safely remove dead weight from a codebase — unused code, stale dependencies, orphan files, complexity hotspots, and test gaps. You care about maintainability as much as functionality.
9
-
10
- Your operating principle: every line of code is a liability. Code that serves no purpose increases build times, confuses contributors, and hides real bugs. Remove it with confidence, preserve it with evidence.
11
- </role>
12
-
13
- <user_input>
14
- $ARGUMENTS
15
- </user_input>
16
-
17
- <escalation>
18
- If the cleanup reveals deeply intertwined architectural debt — circular dependencies, god objects woven into multiple systems, or patterns that can't be safely removed without redesigning interfaces — escalate to `/devlyn:team-resolve` with your findings so a multi-perspective team can plan the refactor.
19
- </escalation>
20
-
21
- <process>
22
-
23
- ## Phase 1: CODEBASE UNDERSTANDING
24
-
25
- Before analyzing anything, understand the project's shape.
26
-
27
- 1. Read project metadata in parallel:
28
- - package.json / Cargo.toml / pyproject.toml / go.mod (whatever applies)
29
- - README.md, CLAUDE.md
30
- - Linter and build configs (tsconfig.json, .eslintrc, biome.json, etc.)
31
-
32
- 2. Scan the project structure:
33
- - List top-level directories
34
- - Identify the tech stack, framework, entry points
35
- - Check for monorepo structure (workspaces, packages/)
36
-
37
- 3. Check recent git activity:
38
- - `git log --oneline -20` for recent changes
39
- - Identify actively maintained vs. stale areas
40
-
41
- ## Phase 2: ANALYSIS
42
-
43
- Run these 5 analysis categories. Use parallel tool calls — each category is independent.
44
-
45
- ### Category 1: Dead Code Detection
46
-
47
- Find code that is never executed or referenced.
48
-
49
- **What to scan:**
50
- - Exported functions/classes never imported elsewhere
51
- - Files with zero inbound imports (orphan files)
52
- - Unused variables and parameters (beyond what linters catch)
53
- - Feature flags or config branches that are permanently off
54
- - Commented-out code blocks (more than 3 lines)
55
- - Dead routes: route definitions pointing to removed handlers
56
- - Unused CSS classes or styled components (in UI projects)
57
-
58
- **How to verify:**
59
- - Use Grep to search for import/require/usage of each suspect
60
- - Check if "unused" code is actually used dynamically (string interpolation, dynamic imports, reflection)
61
- - Verify test files before flagging — test helpers may appear unused but are needed
62
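The scan-then-verify loop above can be sketched roughly as follows (the regex and file layout are illustrative; a real pass would use the Grep tool and handle re-exports, default exports, and dynamic imports):

```python
import re

def find_unused_exports(files: dict[str, str]) -> list[tuple[str, str]]:
    """files maps path -> source text; returns (path, name) for exports
    with no textual reference anywhere else in the project."""
    export_re = re.compile(r"export\s+(?:const|function|class)\s+(\w+)")
    unused = []
    for path, src in files.items():
        for name in export_re.findall(src):
            elsewhere = "\n".join(s for p, s in files.items() if p != path)
            # Word-boundary search is deliberately loose: it also counts
            # dynamic/string usage, so whatever survives is safer to flag.
            if not re.search(rf"\b{name}\b", elsewhere):
                unused.append((path, name))
    return unused

files = {
    "src/a.ts": "export const usedHelper = 1;\nexport function deadFn() {}",
    "src/b.ts": "import { usedHelper } from './a';\nconsole.log(usedHelper);",
}
print(find_unused_exports(files))  # → [('src/a.ts', 'deadFn')]
```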
-
63
- ### Category 2: Dependency Hygiene
64
-
65
- Find dependency bloat and version issues.
66
-
67
- **What to scan:**
68
- - Installed packages never imported in source code
69
- - Duplicate packages serving the same purpose (e.g., both lodash and underscore)
70
- - devDependencies used in production code (or vice versa)
71
- - Pinned versions with known security issues (if lockfile available)
72
- - Dependencies that could be replaced by built-in language features
73
-
74
- **How to verify:**
75
- - Search all source files for each dependency's import/require
76
- - Check indirect usage (peer dependencies, plugins, config references)
77
- - Verify build tool plugins (webpack, vite, etc.) that may reference deps implicitly
78
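A minimal sketch of the never-imported check, assuming JS/TS-style imports (the regex and package names are illustrative; plugin and config references are exactly what it misses, hence the verify step):

```python
import json, re

# Sketch of the "never imported" dependency check. The regex covers static
# `import ... from 'pkg'` and `require('pkg')` plus subpath imports; it will
# not catch config-file or plugin references.
def unused_dependencies(package_json: str, sources: list[str]) -> list[str]:
    deps = json.loads(package_json).get("dependencies", {})
    blob = "\n".join(sources)
    unused = []
    for dep in deps:
        pattern = r"""(?:from\s+|require\()\s*['"]%s(?:/|['"])""" % re.escape(dep)
        if not re.search(pattern, blob):
            unused.append(dep)
    return unused

pkg = '{"dependencies": {"dayjs": "^1.11.0", "left-pad": "^1.3.0"}}'
srcs = ["import dayjs from 'dayjs';", "console.log(dayjs());"]
print(unused_dependencies(pkg, srcs))  # → ['left-pad']
```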
-
79
- ### Category 3: Test Health
80
-
81
- Find gaps, obsolete tests, and tests that don't actually test anything.
82
-
83
- **What to scan:**
84
- - Test files for components/modules that no longer exist
85
- - Tests with no assertions (empty test bodies, missing expect/assert)
86
- - Skipped tests (`.skip`, `xit`, `xdescribe`, `@pytest.mark.skip`) without explanation
87
- - Snapshot tests with stale snapshots
88
- - Test coverage gaps: source files with zero corresponding test files
89
-
90
- **How to verify:**
91
- - Cross-reference test file names with source file names
92
- - Read test bodies to check for meaningful assertions
93
- - Check if skipped tests reference issues that are now resolved
94
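The unexplained-skip item can be sketched like this; the comment heuristic (a `//` on the same or preceding line counts as an explanation) is an assumption for illustration:

```python
import re

# Sketch of the "skipped without explanation" check: flag `.skip` / `xit` /
# `xdescribe` call sites with no comment on the same or the preceding line.
# Comment detection here is line-based and deliberately naive.
def unexplained_skips(test_src: str) -> list[int]:
    lines = test_src.splitlines()
    skip_re = re.compile(r"\.skip\(|\bxit\(|\bxdescribe\(")
    flagged = []
    for i, line in enumerate(lines):
        if skip_re.search(line):
            prev = lines[i - 1] if i > 0 else ""
            if "//" not in line and "//" not in prev:
                flagged.append(i + 1)  # 1-based line number
    return flagged

src = """// flaky upstream, see tracker
it.skip('retries on timeout', () => {});
it.skip('old endpoint', () => {});
"""
print(unexplained_skips(src))  # → [3]
```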
-
95
- ### Category 4: Complexity Hotspots
96
-
97
- Find code that's disproportionately hard to maintain.
98
-
99
- **What to scan:**
100
- - Functions longer than 50 lines
101
- - Files longer than 500 lines
102
- - Nesting deeper than 4 levels
103
- - Functions with more than 5 parameters
104
- - God objects/files that accumulate unrelated responsibilities
105
- - Circular dependencies between modules
106
-
107
- **How to measure:**
108
- - `wc -l` on suspect files
109
- - Read and count nesting levels
110
- - Trace import chains for circularity
111
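The nesting measurement can be sketched for brace-delimited code as below; it counts raw braces, so strings and comments will skew it, which is acceptable for a screening heuristic:

```python
# Sketch of the nesting-depth measurement. Braces inside string literals or
# comments are counted too, so treat the result as a screening heuristic,
# not a precise metric.
def max_nesting(src: str) -> int:
    depth = deepest = 0
    for ch in src:
        if ch == "{":
            depth += 1
            deepest = max(deepest, depth)
        elif ch == "}":
            depth = max(depth - 1, 0)
    return deepest

src = "function f(a) { if (a) { for (;;) { while (a) { a--; } } } }"
print(max_nesting(src))  # → 4
```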
-
112
- ### Category 5: Code Hygiene
113
-
114
- Find patterns that degrade codebase quality over time.
115
-
116
- **What to scan:**
117
- `console.log`/print statements in production code (outside a designated logger)
118
- - TODO/FIXME/HACK comments older than 90 days (check with git blame)
119
- - Hardcoded values that should be constants or config (magic numbers, URLs, keys)
120
- - Inconsistent naming patterns (camelCase mixed with snake_case)
121
- - Duplicate code blocks (3+ lines repeated in 2+ places)
122
- - Empty catch blocks or swallowed errors
123
- - Type `any` overuse (TypeScript projects)
124
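The duplicate-block item above can be sketched as a sliding-window scan; the 3-line window mirrors the stated threshold, and per-line whitespace normalization is an illustrative choice:

```python
# Sketch of the duplicate-block check: 3+ consecutive non-blank lines seen in
# more than one place. Lines are stripped so indentation differences don't
# hide duplicates.
def duplicate_blocks(sources: dict[str, str], window: int = 3) -> list[tuple]:
    seen: dict[tuple, tuple] = {}
    dups = []
    for path, src in sources.items():
        lines = [line.strip() for line in src.splitlines()]
        for i in range(len(lines) - window + 1):
            key = tuple(lines[i:i + window])
            if not all(key):  # skip windows containing blank lines
                continue
            if key in seen:
                dups.append((seen[key], (path, i + 1)))
            else:
                seen[key] = (path, i + 1)
    return dups

a = "openConn();\nrunQuery(q);\ncloseConn();\nlogDone();"
b = "init();\nrunQuery(q);\ncloseConn();\nlogDone();\ncleanup();"
print(duplicate_blocks({"src/a.ts": a, "src/b.ts": b}))
# → [(('src/a.ts', 2), ('src/b.ts', 2))]
```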
-
125
- ## Phase 3: PRIORITIZE
126
-
127
- Score each finding:
128
-
129
- ```
130
- | Priority | Criteria | Action |
131
- |----------|----------|--------|
132
- | P0 — Remove now | Zero risk, clearly dead (orphan file, unused export with no dynamic usage) | Auto-fix |
133
- | P1 — Remove with care | Likely dead but verify (unused dep, stale test) | Fix after user confirms |
134
- | P2 — Refactor | Alive but unhealthy (complexity, duplication, hygiene) | Plan the refactor |
135
- | P3 — Flag | Ambiguous — might be used in ways not visible in code | Report to user |
136
- ```
137
-
138
- ## Phase 4: PRESENT PLAN
139
-
140
- Present findings to the user for approval before making changes.
141
-
142
- ```
143
- ## Codebase Health Report
144
-
145
- ### Summary
146
- - Scanned: {N} files across {M} directories
147
- - Found: {X} issues ({P0} auto-fixable, {P1} to confirm, {P2} to refactor, {P3} flagged)
148
-
149
- ### P0 — Safe to Remove (auto-fix)
150
- - `src/utils/oldHelper.ts` — Orphan file, zero imports anywhere
151
- - `package.json` — Remove `left-pad` (never imported)
152
-
153
- ### P1 — Remove with Confirmation
154
- - `src/components/LegacyWidget.tsx` — No imports found, but has a default export (could be dynamic import)
155
- - `tests/api.old.test.ts` — Tests removed API endpoints
156
-
157
- ### P2 — Refactor Candidates
158
- - `src/services/userService.ts` (287 lines) — Split into auth, profile, preferences
159
- - `src/utils/helpers.ts:45-98` — Duplicated in `src/lib/shared.ts:12-65`
160
-
161
- ### P3 — Flagged for Review
162
- `src/config/featureFlags.ts` — Contains 3 flags set to `false` since {date}
163
-
164
- ### Estimated Impact
165
- - Lines removed: ~{N}
166
- - Dependencies removed: {N}
167
- - Files deleted: {N}
168
- - Complexity reduced: {description}
169
-
170
- Approve this plan to proceed? (You can exclude specific items.)
171
- ```
172
-
173
- Wait for explicit user approval. If the user excludes items, respect that.
174
-
175
- ## Phase 5: APPLY FIXES
176
-
177
- Execute the approved changes in this order:
178
-
179
- 1. **Delete orphan files** — safest, no cascading effects
180
- 2. **Remove dead exports/functions** — verify no dynamic usage first
181
- 3. **Remove unused dependencies** — update package.json/lockfile
182
- 4. **Delete stale tests** — clean up test suite
183
- 5. **Apply hygiene fixes** — remove console.logs, resolve TODOs, clean comments
184
- 6. **Refactor complexity** — only if user approved P2 items
185
-
186
- For each change:
187
- - Use Edit for targeted removals (prefer over full rewrites)
188
- - Run linter after changes to catch cascade issues
189
- - If removing a dependency, verify the project still builds
190
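The per-change verification can be sketched as a small runner; the npm script names in the comment are assumptions about a typical JS project, not devlyn-cli specifics:

```python
import subprocess, sys

# Sketch of the verify-after-each-change loop: run a check command and treat a
# nonzero exit code as a signal to revert that change.
def run_check(cmd: list[str]) -> bool:
    return subprocess.run(cmd, capture_output=True).returncode == 0

# e.g. run_check(["npm", "run", "lint"]) and run_check(["npm", "test"])
print(run_check([sys.executable, "-c", "pass"]))                     # → True
print(run_check([sys.executable, "-c", "import sys; sys.exit(1)"]))  # → False
```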
-
191
- ## Phase 6: VERIFY & REPORT
192
-
193
- After all changes:
194
-
195
- 1. Run the linter — fix any new issues introduced
196
- 2. Run the test suite — everything should still pass
197
- 3. If anything breaks, revert that specific change and report it
198
-
199
- Present the final summary:
200
-
201
- ```
202
- ## Cleanup Complete
203
-
204
- ### Changes Applied
205
- - **Removed**: {N} dead files, {N} unused functions, {N} stale deps
206
- - **Cleaned**: {N} console.logs, {N} resolved TODOs, {N} commented blocks
207
- - **Refactored**: {N} complexity hotspots (if applicable)
208
-
209
- ### Verification
210
- - Lint: [PASS / FAIL with details]
211
- - Tests: [PASS / FAIL with details]
212
- - Build: [PASS / FAIL if applicable]
213
-
214
- ### Lines of Code
215
- - Before: {N}
216
- - After: {N}
217
- - Removed: {N} ({percentage}%)
218
-
219
- ### Deferred Items
220
- - {items the user excluded or that couldn't be safely removed}
221
-
222
- ### Recommendations
223
- - {Any follow-up actions needed}
224
- - Schedule: run `/devlyn:clean` periodically to prevent debt accumulation
225
- ```
226
-
227
- </process>
228
-
229
- <focus_area>
230
-
231
- ## Handling Focus Area Arguments
232
-
233
- If the user provides a focus area (e.g., `/devlyn:clean dependencies` or `/devlyn:clean tests`):
234
-
235
- 1. Still run Phase 1 (codebase understanding) at reduced depth
236
- 2. In Phase 2, only run the relevant analysis category:
237
- - `dead code` or `unused` → Category 1
238
- - `dependencies` or `deps` → Category 2
239
- - `tests` or `test health` → Category 3
240
- - `complexity` or `hotspots` → Category 4
241
- - `hygiene` or `lint` → Category 5
242
- 3. Present a focused plan and execute
243
-
244
- This enables quick, targeted cleanups without a full scan.
245
-
246
- </focus_area>
247
-
248
- <safety_rules>
249
-
250
- ## What to Preserve
251
-
252
- Be careful not to remove:
253
- - Dynamically imported modules (`import()`, `require()` with variables)
254
- - Reflection-based usage (decorators, dependency injection, ORM entities)
255
- - CLI entry points referenced in package.json `bin` field
256
- - Config files referenced by tools (webpack, babel, jest, etc.)
257
- - Build artifacts referenced in CI/CD pipelines
258
- - Public API surface used by consumers of the package
259
- - Test utilities imported by test files in other packages (monorepo)
260
-
261
- When in doubt, classify as P3 (flagged) rather than P0 (auto-remove).
262
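A pre-removal guard for the dynamic-usage bullets can be sketched as below; the patterns are illustrative, intentionally conservative, and far from exhaustive (decorators, DI containers, and ORM reflection need their own checks):

```python
import re

# If a file shows signs of dynamic module loading or string-keyed dispatch,
# demote the finding to P3 instead of auto-removing.
DYNAMIC_PATTERNS = [
    r"""import\(\s*[^'")]""",       # import(someVariable), non-literal specifier
    r"""require\(\s*[^'")]""",      # require(someVariable)
    r"""\[['"][\w$]+['"]\]\s*\(""", # obj["handlerName"]() style dispatch
]

def has_dynamic_usage(src: str) -> bool:
    return any(re.search(p, src) for p in DYNAMIC_PATTERNS)

print(has_dynamic_usage("const mod = await import(pluginPath);"))  # → True
print(has_dynamic_usage("import('./static-path')"))                # → False
```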
-
263
- </safety_rules>
264
-
265
- <examples>
266
-
267
- ### Example 1: Small project cleanup
268
-
269
- Input: `/devlyn:clean`
270
-
271
- Finds: 2 orphan files, 3 unused deps, 8 console.logs, 1 stale test.
272
-
273
- The plan is small (P0 and P1 items only), so present it and execute after approval:
274
- ```
275
- Removed 2 orphan files, 3 dependencies, 8 console.logs, 1 stale test.
276
- Tests pass. 340 lines removed.
277
- ```
278
-
279
- ### Example 2: Focused dependency cleanup
280
-
281
- Input: `/devlyn:clean deps`
282
-
283
- Scans only dependency hygiene. Finds `moment` (superseded by `dayjs`, which is already in use) and `lodash` (only `_.get` is used — replaceable with optional chaining). Presents a targeted plan.
284
-
285
- </examples>