laxy-verify 1.1.13 → 1.1.15

This diff shows the changes between publicly released versions of the package as they appear in their public registry, and is provided for informational purposes only.
package/README.md CHANGED
@@ -1,150 +1,189 @@
1
- # laxy-verify
2
-
3
- CLI verification for frontend apps.
4
-
5
- `laxy-verify` runs production build checks, Lighthouse, tiered verify E2E, and plan-gated verification features for Free, Pro, and Pro+ accounts.
6
-
7
- ```bash
8
- npx laxy-verify --init --run
9
- npx laxy-verify .
10
- npx laxy-verify login
11
- npx laxy-verify whoami
12
- npx laxy-verify --help
13
- ```
14
-
15
- ## Quick Start
16
-
17
- ### 1. Initialize
18
-
19
- ```bash
20
- cd your-project
21
- npx laxy-verify --init
22
- ```
23
-
24
- This generates `.laxy.yml` and a GitHub Actions workflow.
25
-
26
- ### 2. Run locally
27
-
28
- ```bash
29
- npx laxy-verify .
30
- ```
31
-
32
- ### 3. Add to CI
33
-
34
- Commit the generated workflow. Each PR gets a verification run, grade output, and optional GitHub reporting.
35
-
36
- ## Verification Tiers
37
-
38
- | Plan | Question it answers |
39
- |------|---------------------|
40
- | Free | Is this likely to break right now? |
41
- | Pro | Is this strong enough to send to a client? |
42
- | Pro+ | Can I call this release-ready with confidence? |
43
-
44
- ## Grades
45
-
46
- | Grade | Meaning |
47
- |-------|---------|
48
- | Gold | Build passed + E2E passed + Lighthouse passed + Pro+ viewport evidence passed |
49
- | Silver | Build passed + E2E passed |
50
- | Bronze | Build passed |
51
- | Unverified | Build failed |
52
-
53
- ## Paid Features
54
-
55
- Log in with your Laxy account to unlock paid plan features.
56
-
57
- ```bash
58
- npx laxy-verify login
59
- npx laxy-verify whoami
60
- npx laxy-verify logout
61
- ```
62
-
1
+ # laxy-verify
2
+
3
+ CLI verification for frontend apps.
4
+
5
+ `laxy-verify` runs production build checks, Lighthouse, tiered verify E2E, and plan-gated verification features for Free, Pro, and Pro+ accounts.
6
+ It is designed around three user questions:
7
+
8
+ - Free: "Is this likely to break right now?"
9
+ - Pro: "Is this strong enough to send to a client?"
10
+ - Pro+: "Can I call this release-ready with confidence?"
11
+
12
+ ```bash
13
+ npx laxy-verify --init --run
14
+ npx laxy-verify .
15
+ npx laxy-verify login
16
+ npx laxy-verify whoami
17
+ npx laxy-verify --help
18
+ ```
19
+
20
+ ## Quick Start
21
+
22
+ ### 1. Initialize
23
+
24
+ ```bash
25
+ cd your-project
26
+ npx laxy-verify --init
27
+ ```
28
+
29
+ This generates `.laxy.yml` and a GitHub Actions workflow.
30
+
31
+ ### 2. Run locally
32
+
33
+ ```bash
34
+ npx laxy-verify .
35
+ ```
36
+
37
+ ### 3. Add to CI
38
+
39
+ Commit the generated workflow. Each PR gets a verification run, grade output, and optional GitHub reporting.
40
+
41
+ ## Verification Tiers
42
+
43
+ | Plan | Question it answers |
44
+ |------|---------------------|
45
+ | Free | Is this likely to break right now? |
46
+ | Pro | Is this strong enough to send to a client? |
47
+ | Pro+ | Can I call this release-ready with confidence? |
48
+
49
+ ## Grades
50
+
51
+ | Grade | Meaning |
52
+ |-------|---------|
53
+ | Gold | Build passed + E2E passed + Lighthouse passed + Pro+ viewport evidence passed |
54
+ | Silver | Build passed + E2E passed |
55
+ | Bronze | Build passed |
56
+ | Unverified | Build failed |
57
+
58
+ ## Paid Features
59
+
60
+ Log in with your Laxy account to unlock paid plan features.
61
+
62
+ ```bash
63
+ npx laxy-verify login
64
+ npx laxy-verify whoami
65
+ npx laxy-verify logout
66
+ ```
67
+
63
68
  | Feature | Free | Pro | Pro+ |
64
69
  |---------|------|-----|------|
65
70
  | Build verification | Yes | Yes | Yes |
66
71
  | Lighthouse | 1 run | 3 runs | 3 runs |
67
72
  | Verify E2E | Smoke | Deeper client-send checks | Deeper client-send checks |
68
73
  | Detailed report view | No | Yes | Yes |
69
- | Report export | No | Yes | Yes |
74
+ | `laxy-verify-report.md` export | No | Yes | Yes |
70
75
  | Multi-viewport verification | No | No | Yes |
71
76
  | Visual diff | No | No | Yes |
72
77
  | Failure analysis signals | No | No | Yes |
78
+
79
+ Pro is for delivery verification.
80
+ Pro+ is for release-confidence verification with extra evidence before you say "ship it."
81
+
82
+ For CI, set `LAXY_TOKEN` instead of using interactive login.
83
+
84
+ ```yaml
85
+ env:
86
+ LAXY_TOKEN: ${{ secrets.LAXY_TOKEN }}
87
+ ```
88
+
89
+ ## Configuration
90
+
91
+ All fields are optional in `.laxy.yml`.
92
+
93
+ ```yaml
94
+ framework: "auto"
95
+ build_command: ""
96
+ dev_command: ""
97
+ package_manager: "auto"
98
+ port: 3000
99
+ build_timeout: 300
100
+ dev_timeout: 60
101
+ lighthouse_runs: 1
102
+
103
+ thresholds:
104
+ performance: 70
105
+ accessibility: 85
106
+ seo: 80
107
+ best_practices: 80
108
+
109
+ fail_on: "bronze"
110
+ ```
111
+
112
+ ## CLI Options
113
+
114
+ ```text
115
+ npx laxy-verify [project-dir]
116
+
117
+ Options:
118
+ --format console|json
119
+ --ci
120
+ --config <path>
121
+ --fail-on unverified|bronze|silver|gold
122
+ --skip-lighthouse
123
+ --badge
124
+ --init
125
+ --multi-viewport
126
+ --help
127
+
128
+ Subcommands:
129
+ login [email]
130
+ logout
131
+ whoami
132
+ ```
133
+
134
+ ## Result Files
73
135
 
74
- For CI, set `LAXY_TOKEN` instead of using interactive login.
75
-
76
- ```yaml
77
- env:
78
- LAXY_TOKEN: ${{ secrets.LAXY_TOKEN }}
79
- ```
80
-
81
- ## Configuration
82
-
83
- All fields are optional in `.laxy.yml`.
84
-
85
- ```yaml
86
- framework: "auto"
87
- build_command: ""
88
- dev_command: ""
89
- package_manager: "auto"
90
- port: 3000
91
- build_timeout: 300
92
- dev_timeout: 60
93
- lighthouse_runs: 1
94
-
95
- thresholds:
96
- performance: 70
97
- accessibility: 85
98
- seo: 80
99
- best_practices: 80
100
-
101
- fail_on: "bronze"
102
- ```
103
-
104
- ## CLI Options
105
-
106
- ```text
107
- npx laxy-verify [project-dir]
108
-
109
- Options:
110
- --format console|json
111
- --ci
112
- --config <path>
113
- --fail-on unverified|bronze|silver|gold
114
- --skip-lighthouse
115
- --badge
116
- --init
117
- --multi-viewport
118
- --help
119
-
120
- Subcommands:
121
- login [email]
122
- logout
123
- whoami
124
- ```
136
+ Each run writes `.laxy-result.json`.
125
137
 
126
- ## Result File
138
+ Paid plans also write a readable markdown summary to `laxy-verify-report.md`.
127
139
 
128
- Each run writes `.laxy-result.json`.
140
+ - `Pro`: blocker-focused delivery report
141
+ - `Pro+`: release-readiness report with viewport and visual evidence
129
142
 
130
143
  ```json
131
144
  {
132
- "grade": "Silver",
133
- "timestamp": "2026-04-09T09:00:00Z",
134
- "build": { "success": true, "durationMs": 12000, "errors": [] },
135
- "e2e": { "passed": 4, "failed": 0, "total": 4, "results": [] },
136
- "lighthouse": { "performance": 82, "accessibility": 94, "seo": 90, "bestPractices": 92, "runs": 3 },
137
- "exitCode": 0,
138
- "_plan": "pro"
145
+ "grade": "Gold",
146
+ "timestamp": "2026-04-09T09:00:00Z",
147
+ "build": { "success": true, "durationMs": 12000, "errors": [] },
148
+ "e2e": { "passed": 5, "failed": 0, "total": 5, "results": [] },
149
+ "lighthouse": { "performance": 82, "accessibility": 94, "seo": 90, "bestPractices": 92, "runs": 3 },
150
+ "multiViewport": {
151
+ "allPassed": true,
152
+ "summary": "Desktop, tablet, and mobile checks passed."
153
+ },
154
+ "visualDiff": {
155
+ "verdict": "pass",
156
+ "differencePercentage": 0
157
+ },
158
+ "verification": {
159
+ "tier": "pro_plus",
160
+ "report": { "verdict": "release-ready" }
161
+ },
162
+ "exitCode": 0,
163
+ "_plan": "pro_plus"
139
164
  }
140
165
  ```
141
166
 
142
- ## Limitations
143
-
144
- - Monorepos require targeting the app subdirectory explicitly.
145
- - Dev-server-based Lighthouse can differ from production hosting.
146
- - Pro+ visual diff and viewport checks increase runtime.
147
-
148
- ## License
149
-
150
- MIT
167
+ ### `laxy-verify-report.md`
168
+
169
+ For Pro and Pro+ runs, the markdown report is designed to be easy to read and easy to paste into an AI coding tool.
170
+
171
+ It includes:
172
+
173
+ - the main decision in plain English
174
+ - what passed
175
+ - blockers and warnings
176
+ - exact verification evidence
177
+ - failed E2E scenarios
178
+ - a `Copy For AI` section you can paste directly into Codex, Cursor, Claude, or ChatGPT
179
+
180
+ ## Limitations
181
+
182
+ - Monorepos require targeting the app subdirectory explicitly.
183
+ - Dev-server-based Lighthouse can differ from production hosting.
184
+ - Pro+ visual diff and viewport checks increase runtime.
185
+ - Local verification is most stable on current LTS Node releases.
186
+
187
+ ## License
188
+
189
+ MIT
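The README's grade table and `--fail-on` option imply an ordering (Unverified < Bronze < Silver < Gold), with exit code 0 when the grade meets or exceeds the threshold. A minimal sketch of how a CI script might apply that ordering to the grade recorded in `.laxy-result.json` (the `meetsThreshold` helper is hypothetical, not part of the package):

```javascript
// Grade ordering implied by the README's grade table, worst to best.
const GRADE_ORDER = ["Unverified", "Bronze", "Silver", "Gold"];

function meetsThreshold(grade, failOn) {
  // Grades are capitalized in .laxy-result.json; fail_on values are
  // lowercase in .laxy.yml, so compare case-insensitively.
  const have = GRADE_ORDER.findIndex((g) => g.toLowerCase() === grade.toLowerCase());
  const need = GRADE_ORDER.findIndex((g) => g.toLowerCase() === failOn.toLowerCase());
  return have >= 0 && need >= 0 && have >= need;
}

// A Silver run passes a "bronze" gate but fails a "gold" gate.
console.log(meetsThreshold("Silver", "bronze")); // true
console.log(meetsThreshold("Silver", "gold"));   // false
```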
package/dist/cli.js CHANGED
@@ -53,6 +53,7 @@ const auth_js_1 = require("./auth.js");
53
53
  const entitlement_js_1 = require("./entitlement.js");
54
54
  const multi_viewport_js_1 = require("./multi-viewport.js");
55
55
  const e2e_js_1 = require("./e2e.js");
56
+ const report_markdown_js_1 = require("./report-markdown.js");
56
57
  const visual_diff_js_1 = require("./visual-diff.js");
57
58
  const index_js_1 = require("./verification-core/index.js");
58
59
  const package_json_1 = __importDefault(require("../package.json"));
@@ -209,6 +210,9 @@ function consoleOutput(result) {
209
210
  console.log(` Status check: ${result.github.grade}`);
210
211
  }
211
212
  console.log(" Result: .laxy-result.json");
213
+ if (result.markdownReportPath) {
214
+ console.log(` Report: ${path.basename(result.markdownReportPath)}`);
215
+ }
212
216
  console.log(` Exit code: ${result.exitCode}`);
213
217
  if ((result.grade === "Silver" || result.grade === "Bronze") && (!result._plan || result._plan === "free")) {
214
218
  console.log("\n Unlock deeper verification and Gold-grade confidence with Pro or Pro+:");
@@ -218,43 +222,43 @@ function consoleOutput(result) {
218
222
  async function run() {
219
223
  const args = parseArgs();
220
224
  if (args.help) {
221
- console.log(`
222
- laxy-verify v${package_json_1.default.version}
223
- Frontend quality gate: build + Lighthouse verification
224
-
225
- Usage:
226
- npx laxy-verify [project-dir] [options]
227
- npx laxy-verify <subcommand>
228
-
229
- Subcommands:
230
- login [email] Log in to unlock Pro/Pro+ features
231
- logout Remove saved credentials
232
- whoami Show current login status
233
-
234
- Options:
235
- --init Generate .laxy.yml + GitHub workflow file
236
- --init --run Generate config and immediately run verification
237
- --format console | json (default: console)
238
- --ci CI mode: -10 Performance threshold, runs=3
239
- --config <path> Path to .laxy.yml
240
- --fail-on unverified | bronze | silver | gold
241
- --skip-lighthouse Skip Lighthouse but still run build and E2E
242
- --multi-viewport Pro+: Lighthouse on desktop/tablet/mobile
243
- --badge Print shields.io badge markdown
244
- --help Show this help
245
-
246
- Exit codes:
247
- 0 Grade meets or exceeds fail_on threshold
248
- 1 Grade worse than fail_on, or build failed
249
- 2 Configuration error
250
-
251
- Examples:
252
- npx laxy-verify --init --run # Setup + first verification
253
- npx laxy-verify . # Run in current directory
254
- npx laxy-verify . --ci # CI mode
255
- npx laxy-verify . --fail-on silver # Require Silver or better
256
-
257
- Docs: https://github.com/psungmin24/laxy-verify
225
+ console.log(`
226
+ laxy-verify v${package_json_1.default.version}
227
+ Frontend quality gate: build + Lighthouse verification
228
+
229
+ Usage:
230
+ npx laxy-verify [project-dir] [options]
231
+ npx laxy-verify <subcommand>
232
+
233
+ Subcommands:
234
+ login [email] Log in to unlock Pro/Pro+ features
235
+ logout Remove saved credentials
236
+ whoami Show current login status
237
+
238
+ Options:
239
+ --init Generate .laxy.yml + GitHub workflow file
240
+ --init --run Generate config and immediately run verification
241
+ --format console | json (default: console)
242
+ --ci CI mode: -10 Performance threshold, runs=3
243
+ --config <path> Path to .laxy.yml
244
+ --fail-on unverified | bronze | silver | gold
245
+ --skip-lighthouse Skip Lighthouse but still run build and E2E
246
+ --multi-viewport Pro+: Lighthouse on desktop/tablet/mobile
247
+ --badge Print shields.io badge markdown
248
+ --help Show this help
249
+
250
+ Exit codes:
251
+ 0 Grade meets or exceeds fail_on threshold
252
+ 1 Grade worse than fail_on, or build failed
253
+ 2 Configuration error
254
+
255
+ Examples:
256
+ npx laxy-verify --init --run # Setup + first verification
257
+ npx laxy-verify . # Run in current directory
258
+ npx laxy-verify . --ci # CI mode
259
+ npx laxy-verify . --fail-on silver # Require Silver or better
260
+
261
+ Docs: https://github.com/psungmin24/laxy-verify
258
262
  `);
259
263
  exitGracefully(0);
260
264
  return;
@@ -511,6 +515,15 @@ async function run() {
511
515
  view: verificationView,
512
516
  },
513
517
  };
518
+ const markdownReportPath = (0, report_markdown_js_1.getMarkdownReportPath)(args.projectDir);
519
+ if ((0, report_markdown_js_1.shouldWriteMarkdownReport)(resultObj)) {
520
+ const markdownReport = (0, report_markdown_js_1.buildMarkdownReport)(args.projectDir, resultObj);
521
+ fs.writeFileSync(markdownReportPath, markdownReport, "utf-8");
522
+ resultObj.markdownReportPath = markdownReportPath;
523
+ }
524
+ else if (fs.existsSync(markdownReportPath)) {
525
+ fs.rmSync(markdownReportPath, { force: true });
526
+ }
514
527
  const inGitHubActions = !!process.env.GITHUB_ACTIONS;
515
528
  if (inGitHubActions) {
516
529
  try {
package/dist/init.js CHANGED
@@ -56,17 +56,17 @@ function runInit(dir) {
56
56
  catch {
57
57
  // Keep defaults when auto-detection fails.
58
58
  }
59
- const ymlContent = `# Generated by laxy-verify --init
60
- # See https://github.com/psungmin24/laxy-verify for full docs
61
- framework: ${detectedFramework} # auto-detected
62
- port: ${detectedPort}
63
- fail_on: bronze
64
-
65
- thresholds:
66
- performance: 70
67
- accessibility: 85
68
- seo: 80
69
- best_practices: 80
59
+ const ymlContent = `# Generated by laxy-verify --init
60
+ # See https://github.com/psungmin24/laxy-verify for full docs
61
+ framework: ${detectedFramework} # auto-detected
62
+ port: ${detectedPort}
63
+ fail_on: bronze
64
+
65
+ thresholds:
66
+ performance: 70
67
+ accessibility: 85
68
+ seo: 80
69
+ best_practices: 80
70
70
  `;
71
71
  fs.writeFileSync(laxyYmlPath, ymlContent, "utf-8");
72
72
  console.log("Created .laxy.yml");
@@ -76,25 +76,25 @@ thresholds:
76
76
  }
77
77
  else {
78
78
  fs.mkdirSync(workflowDir, { recursive: true });
79
- const workflowContent = `name: Laxy Verify
80
- on:
81
- pull_request:
82
- branches: [main, master]
83
- push:
84
- branches: [main, master]
85
-
86
- permissions:
87
- pull-requests: write
88
- statuses: write
89
-
90
- jobs:
91
- verify:
92
- runs-on: ubuntu-latest
93
- steps:
94
- - uses: actions/checkout@v4
95
- - uses: psungmin24/laxy-verify@v1
96
- with:
97
- github-token: \${{ secrets.GITHUB_TOKEN }}
79
+ const workflowContent = `name: Laxy Verify
80
+ on:
81
+ pull_request:
82
+ branches: [main, master]
83
+ push:
84
+ branches: [main, master]
85
+
86
+ permissions:
87
+ pull-requests: write
88
+ statuses: write
89
+
90
+ jobs:
91
+ verify:
92
+ runs-on: ubuntu-latest
93
+ steps:
94
+ - uses: actions/checkout@v4
95
+ - uses: psungmin24/laxy-verify@v1
96
+ with:
97
+ github-token: \${{ secrets.GITHUB_TOKEN }}
98
98
  `;
99
99
  fs.writeFileSync(workflowPath, workflowContent, "utf-8");
100
100
  console.log("Created .github/workflows/laxy-verify.yml");
@@ -63,38 +63,38 @@ async function removeDirWithRetries(dirPath, retries = 5) {
63
63
  }
64
64
  }
65
65
  function writeRunnerScript(runnerPath) {
66
- const source = `import fs from "node:fs/promises";
67
- import lighthouse from "lighthouse";
68
- import { launch } from "chrome-launcher";
69
-
70
- const [url, reportPath, chromeDir] = process.argv.slice(2);
71
-
72
- const chrome = await launch({
73
- logLevel: "error",
74
- chromeFlags: [
75
- "--headless=new",
76
- "--no-sandbox",
77
- "--disable-dev-shm-usage",
78
- \`--user-data-dir=\${chromeDir}\`,
79
- ],
80
- });
81
-
82
- try {
83
- const result = await lighthouse(url, {
84
- port: chrome.port,
85
- output: "json",
86
- logLevel: "error",
87
- onlyCategories: ["performance", "accessibility", "seo", "best-practices"],
88
- });
89
-
90
- if (!result?.lhr) {
91
- throw new Error("Lighthouse returned no report.");
92
- }
93
-
94
- await fs.writeFile(reportPath, JSON.stringify(result.lhr), "utf8");
95
- } finally {
96
- await chrome.kill();
97
- }
66
+ const source = `import fs from "node:fs/promises";
67
+ import lighthouse from "lighthouse";
68
+ import { launch } from "chrome-launcher";
69
+
70
+ const [url, reportPath, chromeDir] = process.argv.slice(2);
71
+
72
+ const chrome = await launch({
73
+ logLevel: "error",
74
+ chromeFlags: [
75
+ "--headless=new",
76
+ "--no-sandbox",
77
+ "--disable-dev-shm-usage",
78
+ \`--user-data-dir=\${chromeDir}\`,
79
+ ],
80
+ });
81
+
82
+ try {
83
+ const result = await lighthouse(url, {
84
+ port: chrome.port,
85
+ output: "json",
86
+ logLevel: "error",
87
+ onlyCategories: ["performance", "accessibility", "seo", "best-practices"],
88
+ });
89
+
90
+ if (!result?.lhr) {
91
+ throw new Error("Lighthouse returned no report.");
92
+ }
93
+
94
+ await fs.writeFile(reportPath, JSON.stringify(result.lhr), "utf8");
95
+ } finally {
96
+ await chrome.kill();
97
+ }
98
98
  `;
99
99
  fs.writeFileSync(runnerPath, source, "utf-8");
100
100
  }
@@ -81,49 +81,49 @@ async function removeDirWithRetries(dirPath, retries = 5) {
81
81
  }
82
82
  }
83
83
  function writeViewportRunnerScript(runnerPath) {
84
- const source = `import fs from "node:fs/promises";
85
- import lighthouse from "lighthouse";
86
- import { launch } from "chrome-launcher";
87
-
88
- const [url, reportPath, chromeDir, formFactor, screenJson] = process.argv.slice(2);
89
- const screen = JSON.parse(screenJson);
90
-
91
- const chrome = await launch({
92
- logLevel: "error",
93
- chromeFlags: [
94
- "--headless=new",
95
- "--no-sandbox",
96
- "--disable-dev-shm-usage",
97
- \`--user-data-dir=\${chromeDir}\`,
98
- ],
99
- });
100
-
101
- try {
102
- const result = await lighthouse(
103
- url,
104
- {
105
- port: chrome.port,
106
- output: "json",
107
- logLevel: "error",
108
- onlyCategories: ["performance", "accessibility", "seo", "best-practices"],
109
- },
110
- {
111
- extends: "lighthouse:default",
112
- settings: {
113
- formFactor,
114
- screenEmulation: screen,
115
- },
116
- }
117
- );
118
-
119
- if (!result?.lhr) {
120
- throw new Error("Lighthouse returned no report.");
121
- }
122
-
123
- await fs.writeFile(reportPath, JSON.stringify(result.lhr), "utf8");
124
- } finally {
125
- await chrome.kill();
126
- }
84
+ const source = `import fs from "node:fs/promises";
85
+ import lighthouse from "lighthouse";
86
+ import { launch } from "chrome-launcher";
87
+
88
+ const [url, reportPath, chromeDir, formFactor, screenJson] = process.argv.slice(2);
89
+ const screen = JSON.parse(screenJson);
90
+
91
+ const chrome = await launch({
92
+ logLevel: "error",
93
+ chromeFlags: [
94
+ "--headless=new",
95
+ "--no-sandbox",
96
+ "--disable-dev-shm-usage",
97
+ \`--user-data-dir=\${chromeDir}\`,
98
+ ],
99
+ });
100
+
101
+ try {
102
+ const result = await lighthouse(
103
+ url,
104
+ {
105
+ port: chrome.port,
106
+ output: "json",
107
+ logLevel: "error",
108
+ onlyCategories: ["performance", "accessibility", "seo", "best-practices"],
109
+ },
110
+ {
111
+ extends: "lighthouse:default",
112
+ settings: {
113
+ formFactor,
114
+ screenEmulation: screen,
115
+ },
116
+ }
117
+ );
118
+
119
+ if (!result?.lhr) {
120
+ throw new Error("Lighthouse returned no report.");
121
+ }
122
+
123
+ await fs.writeFile(reportPath, JSON.stringify(result.lhr), "utf8");
124
+ } finally {
125
+ await chrome.kill();
126
+ }
127
127
  `;
128
128
  fs.writeFileSync(runnerPath, source, "utf-8");
129
129
  }
package/dist/report-markdown.d.ts ADDED
@@ -0,0 +1,39 @@
1
+ import type { E2EScenarioResult } from "./e2e.js";
2
+ import type { LighthouseScores } from "./grade.js";
3
+ import type { TierVerificationView, VerificationReport } from "./verification-core/index.js";
4
+ import type { VisualDiffResult } from "./visual-diff.js";
5
+ export interface MarkdownReportResult {
6
+ grade: string;
7
+ timestamp: string;
8
+ build: {
9
+ success: boolean;
10
+ durationMs: number;
11
+ errors: string[];
12
+ };
13
+ e2e?: {
14
+ passed: number;
15
+ failed: number;
16
+ total: number;
17
+ results: E2EScenarioResult[];
18
+ };
19
+ lighthouse: (LighthouseScores & {
20
+ runs: number;
21
+ }) | null;
22
+ visualDiff?: VisualDiffResult | null;
23
+ thresholds: {
24
+ performance: number;
25
+ accessibility: number;
26
+ seo: number;
27
+ bestPractices: number;
28
+ };
29
+ framework: string | null;
30
+ _plan?: string;
31
+ verification?: {
32
+ tier: VerificationReport["tier"];
33
+ report: VerificationReport;
34
+ view: TierVerificationView;
35
+ };
36
+ }
37
+ export declare function shouldWriteMarkdownReport(result: MarkdownReportResult): boolean;
38
+ export declare function getMarkdownReportPath(projectDir: string): string;
39
+ export declare function buildMarkdownReport(projectDir: string, result: MarkdownReportResult): string;
package/dist/report-markdown.js ADDED
@@ -0,0 +1,269 @@
1
+ "use strict";
2
+ var __createBinding = (this && this.__createBinding) || (Object.create ? (function(o, m, k, k2) {
3
+ if (k2 === undefined) k2 = k;
4
+ var desc = Object.getOwnPropertyDescriptor(m, k);
5
+ if (!desc || ("get" in desc ? !m.__esModule : desc.writable || desc.configurable)) {
6
+ desc = { enumerable: true, get: function() { return m[k]; } };
7
+ }
8
+ Object.defineProperty(o, k2, desc);
9
+ }) : (function(o, m, k, k2) {
10
+ if (k2 === undefined) k2 = k;
11
+ o[k2] = m[k];
12
+ }));
13
+ var __setModuleDefault = (this && this.__setModuleDefault) || (Object.create ? (function(o, v) {
14
+ Object.defineProperty(o, "default", { enumerable: true, value: v });
15
+ }) : function(o, v) {
16
+ o["default"] = v;
17
+ });
18
+ var __importStar = (this && this.__importStar) || (function () {
19
+ var ownKeys = function(o) {
20
+ ownKeys = Object.getOwnPropertyNames || function (o) {
21
+ var ar = [];
22
+ for (var k in o) if (Object.prototype.hasOwnProperty.call(o, k)) ar[ar.length] = k;
23
+ return ar;
24
+ };
25
+ return ownKeys(o);
26
+ };
27
+ return function (mod) {
28
+ if (mod && mod.__esModule) return mod;
29
+ var result = {};
30
+ if (mod != null) for (var k = ownKeys(mod), i = 0; i < k.length; i++) if (k[i] !== "default") __createBinding(result, mod, k[i]);
31
+ __setModuleDefault(result, mod);
32
+ return result;
33
+ };
34
+ })();
35
+ Object.defineProperty(exports, "__esModule", { value: true });
36
+ exports.shouldWriteMarkdownReport = shouldWriteMarkdownReport;
37
+ exports.getMarkdownReportPath = getMarkdownReportPath;
38
+ exports.buildMarkdownReport = buildMarkdownReport;
39
+ const path = __importStar(require("node:path"));
40
+ function titleCasePlan(plan) {
41
+ switch (plan) {
42
+ case "pro":
43
+ return "Pro";
44
+ case "pro_plus":
45
+ return "Pro+";
46
+ case "team":
47
+ return "Team";
48
+ case "enterprise":
49
+ return "Enterprise";
50
+ default:
51
+ return "Free";
52
+ }
53
+ }
54
+ function titleCaseVerdict(verdict) {
55
+ return verdict
56
+ .split("-")
57
+ .map((part) => part.charAt(0).toUpperCase() + part.slice(1))
58
+ .join(" ");
59
+ }
60
+ function formatTimestamp(iso) {
61
+ const date = new Date(iso);
62
+ if (Number.isNaN(date.getTime()))
63
+ return iso;
64
+ return date.toISOString().replace("T", " ").replace(".000Z", " UTC");
65
+ }
66
+ function sentenceForVerdict(view) {
67
+ switch (view.verdict) {
68
+ case "release-ready":
69
+ return "Yes. This run collected enough evidence to support a release-ready call.";
70
+ case "hold":
71
+ return "No. This run found blockers that should be fixed before release.";
72
+ case "investigate":
73
+ return "Not yet. The project is standing, but there is not enough confidence to call it release-ready.";
74
+ case "build-failed":
75
+ return "No. The production build failed, so the release should be held immediately.";
76
+ default:
77
+ return "This run did not find an immediate hard blocker, but it is still a shallow verification pass.";
78
+ }
79
+ }
80
+ function defaultNextActions(result) {
81
+ const view = result.verification?.view;
82
+ if (!view)
83
+ return ["Rerun verification after the project changes are applied."];
84
+ if (view.nextActions.length > 0)
85
+ return view.nextActions;
86
+ switch (view.verdict) {
87
+ case "release-ready":
88
+ return ["Ship this version, or archive this report as release evidence."];
89
+ case "investigate":
90
+ return ["Collect the missing verification evidence, then rerun the command before release."];
91
+ case "build-failed":
92
+ return ["Fix the production build first, then rerun the verification command."];
93
+ case "quick-pass":
94
+ return ["Run a deeper Pro verification before sending this to a client."];
95
+ default:
96
+ return ["Rerun verification after the blockers are fixed."];
97
+ }
98
+ }
99
+ function renderChecklist(title, items) {
100
+ if (items.length === 0) {
101
+ return `## ${title}\n\n- None.\n`;
102
+ }
103
+ return `## ${title}\n\n${items.map((item) => `- ${item}`).join("\n")}\n`;
104
+ }
105
+ function renderBuildErrors(errors) {
106
+ if (errors.length === 0)
107
+ return "";
108
+ const trimmed = errors.slice(0, 5).map((error) => error.trim()).filter(Boolean);
109
+ if (trimmed.length === 0)
110
+ return "";
111
+ return [
112
+ "## Build Errors",
113
+ "",
114
+ "```text",
115
+ ...trimmed,
116
+ "```",
117
+ "",
118
+ ].join("\n");
119
+ }
120
+ function renderE2EFailures(result) {
121
+ const failedScenarios = result.e2e?.results.filter((scenario) => !scenario.passed).slice(0, 5) ?? [];
122
+ if (failedScenarios.length === 0) {
123
+ return "";
124
+ }
125
+ const lines = ["## Failed E2E Scenarios", ""];
126
+ for (const scenario of failedScenarios) {
127
+ lines.push(`### ${scenario.name}`);
128
+ if (scenario.error) {
129
+ lines.push("", `- Error: ${scenario.error}`);
130
+ }
131
+ const failedSteps = scenario.steps.filter((step) => !step.passed).slice(0, 3);
132
+ if (failedSteps.length > 0) {
133
+ lines.push("", "- Failing steps:");
134
+ for (const step of failedSteps) {
135
+ const detail = step.error ? ` - ${step.error}` : "";
136
+ lines.push(` - ${step.description}${detail}`);
137
+ }
138
+ }
139
+ lines.push("");
140
+ }
141
+ return `${lines.join("\n")}\n`;
142
+ }
143
+ function renderMetrics(result) {
144
+ const lines = ["## Verification Evidence", ""];
145
+ lines.push("| Check | Result |");
146
+ lines.push("|---|---|");
147
+ lines.push(`| Build | ${result.build.success ? "Passed" : "Failed"} in ${result.build.durationMs}ms |`);
148
+ if (result.lighthouse) {
149
+ lines.push(`| Lighthouse | P ${result.lighthouse.performance}, A ${result.lighthouse.accessibility}, SEO ${result.lighthouse.seo}, BP ${result.lighthouse.bestPractices} over ${result.lighthouse.runs} run(s) |`);
150
+ }
151
+ else {
152
+ lines.push("| Lighthouse | Skipped |");
153
+ }
154
+ if (result.e2e) {
155
+ lines.push(`| E2E | ${result.e2e.passed}/${result.e2e.total} passed |`);
156
+ }
157
+ const reportInput = result.verification?.report.evidence.input;
158
+ if (typeof reportInput?.viewportIssues === "number" || typeof reportInput?.multiViewportPassed === "boolean") {
159
+ lines.push(`| Multi-viewport | ${reportInput.multiViewportPassed ? "Passed" : "Needs work"}${reportInput.multiViewportSummary ? `, ${reportInput.multiViewportSummary}` : ""} |`);
160
+ }
161
+ if (result.visualDiff) {
162
+ lines.push(`| Visual diff | ${result.visualDiff.hasBaseline ? `${result.visualDiff.diffPercentage}% (${result.visualDiff.verdict})` : "Baseline seeded"} |`);
163
+ }
164
+ lines.push("");
165
+ return `${lines.join("\n")}\n`;
166
+ }
167
+ function renderCopyForAI(result) {
168
+ const view = result.verification?.view;
169
+ if (!view)
170
+ return "";
171
+ const blockers = view.blockers.map((blocker) => `- ${blocker.title}: ${blocker.action}`);
172
+ const warnings = view.warnings.map((warning) => `- ${warning.title}: ${warning.action}`);
173
+ const evidence = view.failureEvidence.map((item) => `- ${item}`);
174
+ const closingLine = view.verdict === "release-ready"
175
+ ? "Use this as release evidence, or rerun after any code change that could affect quality."
176
+ : view.verdict === "investigate" && view.blockers.length === 0
177
+ ? "Collect the missing verification evidence, then rerun the command and compare the new report."
178
+ : "Please fix the blockers first, then rerun the verification command and compare the new report.";
179
+ return [
180
+ "## Copy For AI",
181
+ "",
182
+ "```text",
183
+ "Use this verification report to fix the project.",
184
+ "",
185
+ `Plan: ${titleCasePlan(result._plan)}`,
186
+ `Question: ${view.question}`,
187
+ `Verdict: ${titleCaseVerdict(view.verdict)}`,
188
+ "",
189
+ "Priority blockers:",
190
+ ...(blockers.length > 0 ? blockers : ["- None listed."]),
191
+ "",
192
+ "Warnings to review after blockers:",
193
+ ...(warnings.length > 0 ? warnings : ["- None listed."]),
194
+ "",
195
+ "Evidence from the verification run:",
196
+ ...(evidence.length > 0 ? evidence : ["- No extra evidence recorded."]),
197
+ "",
198
+ closingLine,
199
+ "```",
200
+ "",
201
+ ].join("\n");
202
+ }
203
+ function shouldWriteMarkdownReport(result) {
204
+ return result.verification?.view.showReportExport === true;
205
+ }
206
+ function getMarkdownReportPath(projectDir) {
207
+ return path.join(projectDir, "laxy-verify-report.md");
208
+ }
209
+ function buildMarkdownReport(projectDir, result) {
210
+ const projectName = path.basename(path.resolve(projectDir));
211
+ const plan = titleCasePlan(result._plan);
212
+ const view = result.verification?.view;
213
+ if (!view) {
214
+ return [
215
+ "# Laxy Verify Report",
216
+ "",
217
+ `Project: ${projectName}`,
218
+ `Generated: ${formatTimestamp(result.timestamp)}`,
219
+ "",
220
+ "No detailed verification report was available for this run.",
221
+ "",
222
+ ].join("\n");
223
+ }
224
+ const blockers = view.blockers.map((blocker) => `**${blocker.title}**\n Why it matters: ${blocker.description}\n What to do: ${blocker.action}`);
225
+ const warnings = view.warnings.map((warning) => `**${warning.title}**\n Why it matters: ${warning.description}\n What to do: ${warning.action}`);
226
+ const passes = view.passes.map((check) => `${check.passed ? "Passed" : "Failed"}: ${check.label}`);
227
+ const nextActions = defaultNextActions(result);
228
+ return [
229
+ "# Laxy Verify Report",
230
+ "",
231
+ `Project: ${projectName}`,
232
+ `Generated: ${formatTimestamp(result.timestamp)}`,
233
+ `Plan: ${plan}`,
234
+ `Framework: ${result.framework ?? "unknown"}`,
235
+ "",
236
+ "## At A Glance",
237
+ "",
238
+ `Short answer: ${sentenceForVerdict(view)}`,
239
+ `Why: ${view.summary}`,
240
+ `Recommended next move: ${nextActions[0]}`,
241
+ "",
242
+ "## Decision",
243
+ "",
244
+ `Question: ${view.question}`,
245
+ `Answer: ${titleCaseVerdict(view.verdict)}`,
246
+ `Verdict: ${titleCaseVerdict(view.verdict)}`,
247
+ `Confidence: ${view.confidence}`,
248
+ `Grade: ${result.grade}`,
249
+ "",
250
+ renderMetrics(result).trimEnd(),
251
+ "",
252
+ renderChecklist("What Passed", passes).trimEnd(),
253
+ "",
254
+ renderChecklist("Blockers", blockers).trimEnd(),
255
+ "",
256
+ renderChecklist("Warnings", warnings).trimEnd(),
257
+ "",
258
+ renderChecklist("Next Actions", nextActions).trimEnd(),
259
+ "",
260
+ renderChecklist("Recorded Evidence", view.failureEvidence).trimEnd(),
261
+ "",
262
+ renderBuildErrors(result.build.errors).trimEnd(),
263
+ renderE2EFailures(result).trimEnd(),
264
+ renderCopyForAI(result).trimEnd(),
265
+ "",
266
+ ]
267
+ .filter(Boolean)
268
+ .join("\n");
269
+ }
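The verdict identifiers rendered throughout this module (`release-ready`, `build-failed`, and so on) are formatted by `titleCaseVerdict`; a standalone copy of that helper, shown here only to illustrate the output format:

```javascript
// Standalone copy of titleCaseVerdict from report-markdown.js: split a
// hyphenated verdict identifier and capitalize each word.
function titleCaseVerdict(verdict) {
  return verdict
    .split("-")
    .map((part) => part.charAt(0).toUpperCase() + part.slice(1))
    .join(" ");
}

console.log(titleCaseVerdict("release-ready")); // "Release Ready"
console.log(titleCaseVerdict("hold"));          // "Hold"
```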
package/package.json CHANGED
@@ -1,7 +1,7 @@
1
1
  {
2
2
  "name": "laxy-verify",
3
- "version": "1.1.13",
4
- "description": "Frontend quality gate: build + Lighthouse verification",
3
+ "version": "1.1.15",
4
+ "description": "Frontend quality gate: build, Lighthouse, tiered E2E, and release-confidence verification",
5
5
  "license": "MIT",
6
6
  "type": "commonjs",
7
7
  "bin": {