@diegovelasquezweb/a11y-engine 0.1.9 → 0.2.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -7,6 +7,37 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
7
7
 
8
8
  ## [Unreleased]
9
9
 
10
+ ### Added
11
+
12
+ - **Programmatic API** — 7 exported functions accessible via `import { ... } from "@diegovelasquezweb/a11y-engine"`:
13
+ - `getEnrichedFindings(input, options?)` — normalizes raw findings, canonicalizes pa11y rules, enriches with fix intelligence, infers effort, sorts by severity. Accepts a full scan payload or a raw findings array. Supports `screenshotUrlBuilder` callback for consumer-specific screenshot URLs.
14
+ - `getAuditSummary(findings, payload?)` — computes severity totals, compliance score, grade label, WCAG pass/fail status, persona impact groups, quick wins, target URL, and detected stack from metadata.
15
+ - `getPDFReport(payload, options?)` — generates a formal A4 PDF compliance report via Playwright. Returns `{ buffer, contentType }`.
16
+ - `getChecklist(options?)` — generates a standalone manual accessibility testing checklist as HTML. Returns `{ html, contentType }`.
17
+ - `getHTMLReport(payload, options?)` — generates an interactive HTML audit dashboard with severity filters and fix guidance. Supports embedded base64 screenshots via `screenshotsDir`. Returns `{ html, contentType }`.
18
+ - `getRemediationGuide(payload, options?)` — generates a Markdown remediation guide optimized for AI agents. Supports optional `patternFindings` from source scanner. Returns `{ markdown, contentType }`.
19
+ - `getSourcePatterns(projectDir, options?)` — scans project source code for accessibility patterns not detectable by axe-core. Returns `{ findings, summary }`.
20
+ - **TypeScript type declarations** shipped with the package (`scripts/index.d.mts`):
21
+ - `Finding` — raw finding with all snake_case fields
22
+ - `EnrichedFinding` — extends Finding with camelCase aliases and enriched fields
23
+ - `AuditSummary` — full audit summary including totals, score, personas, quick wins, detected stack
24
+ - `SeverityTotals`, `PersonaGroup`, `DetectedStack`, `ComplianceScore`
25
+ - `ScanPayload`, `EnrichmentOptions`, `ReportOptions`
26
+ - `PDFReport`, `HTMLReport`, `ChecklistReport`, `RemediationGuide`
27
+ - `SourcePatternFinding`, `SourcePatternResult`, `SourcePatternOptions`
28
+ - `exports` and `main` fields in `package.json` pointing to `scripts/index.mjs`
29
+ - `--axe-tags` CLI flag passthrough from `audit.mjs` to `dom-scanner.mjs`
30
+ - `resolveScanDirs` exported from `source-scanner.mjs` for programmatic use
31
+
32
+ ### Changed
33
+
34
+ - `getEnrichedFindings` always creates camelCase aliases (`fixDescription`, `fixCode`, `screenshotPath`, `wcagCriterionId`, `impactedUsers`, etc.) regardless of whether the finding already has fix data — fixes bug where camelCase fields were `undefined` when snake_case data existed
35
+ - `getEnrichedFindings` infers `effort` field after intelligence enrichment: findings with `fixCode` default to `"low"`, others to `"high"` — unless an explicit effort value already exists
36
+ - `getEnrichedFindings` normalizes raw findings internally — consumers no longer need to pre-process the findings array
37
+ - `getEnrichedFindings` sorts findings by severity (Critical > Serious > Moderate > Minor) then by ID
38
+ - `getAuditSummary` now includes `quickWins` (top 3 Critical/Serious findings with fix code), `targetUrl` (extracted from metadata with fallbacks), and `detectedStack` (framework/CMS/libraries from project context)
39
+ - CLI (`audit.mjs`) continues to work standalone — the programmatic API is additive
40
+
10
41
  ---
11
42
 
12
43
  ## [0.1.3] — 2026-03-14
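The enrichment rules described in the Unreleased section above (effort inference, severity sorting, quick wins) can be sketched as follows. This is a hypothetical minimal illustration of the documented behavior, not the engine's actual implementation:

```javascript
// Sketch of the documented enrichment rules; the real logic lives in
// scripts/index.mjs and may differ in detail.
const SEVERITY_ORDER = { Critical: 0, Serious: 1, Moderate: 2, Minor: 3 };

// Effort inference: findings with fix code default to "low", others to
// "high" — unless an explicit effort value already exists.
function inferEffort(finding) {
  if (finding.effort) return finding.effort;
  return finding.fixCode ? "low" : "high";
}

// Sort by severity (Critical > Serious > Moderate > Minor), then by ID.
function sortFindings(findings) {
  return [...findings].sort(
    (a, b) =>
      (SEVERITY_ORDER[a.severity] ?? 4) - (SEVERITY_ORDER[b.severity] ?? 4) ||
      String(a.id).localeCompare(String(b.id))
  );
}

// Quick wins: top 3 Critical/Serious findings that ship fix code.
function quickWins(findings) {
  return sortFindings(findings)
    .filter(
      (f) =>
        (f.severity === "Critical" || f.severity === "Serious") && f.fixCode
    )
    .slice(0, 3);
}
```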
package/README.md CHANGED
@@ -1,176 +1,218 @@
1
1
  # @diegovelasquezweb/a11y-engine
2
2
 
3
- Multi-engine WCAG 2.2 AA accessibility audit engine. Combines three scanning engines (axe-core, Chrome DevTools Protocol, and pa11y), merges and deduplicates their findings, enriches results with fix intelligence, and produces structured artifacts for developers, agents, and stakeholders.
3
+ Multi-engine WCAG 2.2 accessibility audit engine. Combines three scanning engines (axe-core, Chrome DevTools Protocol, and pa11y), merges and deduplicates their findings, enriches results with fix intelligence, and produces structured artifacts for developers, agents, and stakeholders.
4
4
 
5
5
  ## What it is
6
6
 
7
- A Node.js CLI and programmatic engine that:
7
+ A Node.js package that works two ways:
8
8
 
9
- 1. Crawls a target URL and discovers routes automatically
10
- 2. Runs three independent accessibility engines against each page:
11
- - **axe-core** — industry-standard WCAG rule engine, injected into the live page via Playwright
12
- - **CDP** (Chrome DevTools Protocol) — queries the browser's accessibility tree directly for issues axe may miss (missing accessible names, aria-hidden on focusable elements)
13
- - **pa11y** (HTML CodeSniffer) — catches WCAG violations around heading hierarchy, link purpose, and form associations
14
- 3. Merges and deduplicates findings across all three engines
15
- 4. Optionally scans project source code for patterns no runtime engine can detect
16
- 5. Enriches each finding with stack-aware fix guidance, selectors, and verification commands
17
- 6. Produces a full artifact set: JSON data, Markdown remediation guide, HTML dashboard, PDF compliance report, and manual testing checklist
9
+ 1. **CLI** — run `npx a11y-audit --base-url <url>` to scan a site and generate reports
10
+ 2. **Programmatic API** — import functions directly to normalize findings, compute scores, and generate reports in your own application
18
11
 
19
- ## Why use this engine
12
+ ## Programmatic API
20
13
 
21
- | Capability | With this engine | Without |
14
+ ```bash
15
+ npm install @diegovelasquezweb/a11y-engine
16
+ ```
17
+
18
+ ```ts
19
+ import {
20
+ getEnrichedFindings,
21
+ getAuditSummary,
22
+ getPDFReport,
23
+ getChecklist,
24
+ getHTMLReport,
25
+ getRemediationGuide,
26
+ getSourcePatterns,
27
+ } from "@diegovelasquezweb/a11y-engine";
28
+ ```
29
+
30
+ ### getEnrichedFindings
31
+
32
+ Normalizes raw scan findings, canonicalizes pa11y rules to axe equivalents, enriches with fix intelligence, infers effort, and sorts by severity.
33
+
34
+ ```ts
35
+ const findings = getEnrichedFindings(scanPayload, {
36
+ screenshotUrlBuilder: (path) => `/api/screenshot?path=${encodeURIComponent(path)}`,
37
+ });
38
+ ```
39
+
40
+ | Parameter | Type | Description |
22
41
  | :--- | :--- | :--- |
23
- | **Multi-engine scanning** | axe-core + CDP accessibility tree + pa11y (HTML CodeSniffer) with cross-engine deduplication | Single engine higher false-negative rate |
24
- | **Full WCAG 2.2 Coverage** | Three runtime engines + source code pattern scanner | Runtime scan only misses structural and source-level issues |
25
- | **Fix Intelligence** | Stack-aware patches with code snippets tailored to detected framework | Raw rule violations with no remediation context |
26
- | **Structured Artifacts** | JSON + Markdown + HTML + PDF + Checklist — ready to consume or forward | Findings exist only in the terminal session |
27
- | **CI/Agent Integration** | Deterministic exit codes, stdout-parseable output paths, JSON schema | Requires wrapper scripting |
42
+ | `input` | `ScanPayload \| Finding[] \| Record<string, unknown>[]` | Raw scan output or findings array |
43
+ | `options.screenshotUrlBuilder` | `(rawPath: string) => string` | Transforms screenshot file paths into consumer-specific URLs |
28
44
 
29
- ## How the scan pipeline works
45
+ **Returns**: `EnrichedFinding[]` — normalized, enriched, sorted findings with both snake_case and camelCase fields.
46
+
47
+ ### getAuditSummary
48
+
49
+ Computes a complete audit summary from enriched findings.
30
50
 
51
+ ```ts
52
+ const summary = getAuditSummary(findings, scanPayload);
53
+ // summary.score → 72
54
+ // summary.label → "Good"
55
+ // summary.wcagStatus → "Fail"
56
+ // summary.totals → { Critical: 1, Serious: 3, Moderate: 5, Minor: 2 }
57
+ // summary.personaGroups → { screenReader: {...}, keyboard: {...}, ... }
58
+ // summary.quickWins → [top 3 fixable Critical/Serious findings]
59
+ // summary.targetUrl → "https://example.com"
60
+ // summary.detectedStack → { framework: "nextjs", cms: null, uiLibraries: [] }
31
61
  ```
32
- URL
33
- |
34
- v
35
- [1. Crawl & Discover] sitemap.xml / BFS link crawl / explicit --routes
36
- |
37
- v
38
- [2. Navigate] Playwright opens each route in Chromium
39
- |
40
- +---> [axe-core] Injects axe into the page, runs WCAG tag checks
41
- |
42
- +---> [CDP] Opens a CDP session, reads the full accessibility tree
43
- |
44
- +---> [pa11y] Launches HTML CodeSniffer via Puppeteer Chrome
45
- |
46
- v
47
- [3. Merge & Dedup] Combines findings, removes cross-engine duplicates
48
- |
49
- v
50
- [4. Analyze] Enriches with WCAG mapping, severity, fix code, framework hints
51
- |
52
- v
53
- [5. Reports] HTML dashboard, PDF, checklist, Markdown remediation
62
+
63
+ | Parameter | Type | Description |
64
+ | :--- | :--- | :--- |
65
+ | `findings` | `EnrichedFinding[]` | Output from `getEnrichedFindings` |
66
+ | `payload` | `ScanPayload \| null` | Original scan payload for metadata extraction |
67
+
68
+ **Returns**: `AuditSummary`
69
+
70
+ ### getPDFReport
71
+
72
+ Generates a formal A4 PDF compliance report using Playwright.
73
+
74
+ ```ts
75
+ const { buffer, contentType } = await getPDFReport(scanPayload, {
76
+ baseUrl: "https://example.com",
77
+ });
78
+ fs.writeFileSync("report.pdf", buffer);
54
79
  ```
55
80
 
56
- ## Installation
81
+ **Returns**: `Promise<PDFReport>` — `{ buffer: Buffer, contentType: "application/pdf" }`
57
82
 
58
- ```bash
59
- npm install @diegovelasquezweb/a11y-engine
60
- npx playwright install chromium
61
- npx puppeteer browsers install chrome
83
+ ### getHTMLReport
84
+
85
+ Generates an interactive HTML audit dashboard with severity filters, persona impact, and fix guidance.
86
+
87
+ ```ts
88
+ const { html, contentType } = await getHTMLReport(scanPayload, {
89
+ baseUrl: "https://example.com",
90
+ screenshotsDir: "/path/to/.audit/screenshots",
91
+ });
62
92
  ```
63
93
 
64
- ```bash
65
- pnpm add @diegovelasquezweb/a11y-engine
66
- pnpm exec playwright install chromium
67
- npx puppeteer browsers install chrome
94
+ **Returns**: `Promise<HTMLReport>` — `{ html: string, contentType: "text/html" }`
95
+
96
+ ### getChecklist
97
+
98
+ Generates a standalone manual accessibility testing checklist.
99
+
100
+ ```ts
101
+ const { html, contentType } = await getChecklist({
102
+ baseUrl: "https://example.com",
103
+ });
68
104
  ```
69
105
 
70
- > **Two browsers are required:**
71
- > - **Playwright Chromium** — used by axe-core and CDP checks
72
- > - **Puppeteer Chrome** — used by pa11y (HTML CodeSniffer)
73
- >
74
- > These are separate browser installations. If Puppeteer Chrome is missing, pa11y checks fail silently (non-fatal) and the scan continues with axe + CDP only.
106
+ **Returns**: `Promise<ChecklistReport>` — `{ html: string, contentType: "text/html" }`
107
+
108
+ ### getRemediationGuide
109
+
110
+ Generates a Markdown remediation guide optimized for AI agents.
111
+
112
+ ```ts
113
+ const { markdown, contentType } = await getRemediationGuide(scanPayload, {
114
+ baseUrl: "https://example.com",
115
+ patternFindings: sourcePatternResult,
116
+ });
117
+ ```
118
+
119
+ **Returns**: `Promise<RemediationGuide>` — `{ markdown: string, contentType: "text/markdown" }`
120
+
121
+ ### getSourcePatterns
122
+
123
+ Scans project source code for accessibility patterns not detectable by axe-core at runtime.
124
+
125
+ ```ts
126
+ const { findings, summary } = await getSourcePatterns("/path/to/project", {
127
+ framework: "nextjs",
128
+ });
129
+ // summary → { total: 12, confirmed: 10, potential: 2 }
130
+ ```
131
+
132
+ **Returns**: `Promise<SourcePatternResult>` — `{ findings: SourcePatternFinding[], summary: { total, confirmed, potential } }`
133
+
134
+ ## CLI usage
75
135
 
76
- ## Quick start
136
+ The CLI runs the full scan pipeline: crawl, scan with three engines, merge, analyze, and generate reports.
77
137
 
78
138
  ```bash
79
- # Minimal scan — produces remediation.md in .audit/
139
+ # Minimal scan
80
140
  npx a11y-audit --base-url https://example.com
81
141
 
82
142
  # Full audit with all reports
83
143
  npx a11y-audit --base-url https://example.com --with-reports --output ./audit/report.html
84
144
 
85
- # Scan with source code intelligence (for stack-aware fix guidance)
145
+ # Scan with source code intelligence
86
146
  npx a11y-audit --base-url http://localhost:3000 --project-dir . --with-reports --output ./audit/report.html
87
147
  ```
88
148
 
89
- ## CLI usage
90
-
91
- ```
92
- a11y-audit --base-url <url> [options]
93
- ```
94
-
95
- ### Targeting & scope
149
+ ### Targeting and scope
96
150
 
97
151
  | Flag | Argument | Default | Description |
98
152
  | :--- | :--- | :--- | :--- |
99
- | `--base-url` | `<url>` | (Required) | Starting URL for the audit. |
100
- | `--max-routes` | `<num>` | `10` | Max routes to discover and scan. |
101
- | `--crawl-depth` | `<num>` | `2` | BFS link-follow depth during discovery (1-3). |
102
- | `--routes` | `<csv>` | — | Explicit path list, bypasses auto-discovery. |
103
- | `--project-dir` | `<path>` | — | Path to project source. Enables source pattern scanner and framework auto-detection. |
153
+ | `--base-url` | `<url>` | (Required) | Starting URL for the audit |
154
+ | `--max-routes` | `<num>` | `10` | Max routes to discover and scan |
155
+ | `--crawl-depth` | `<num>` | `2` | BFS link-follow depth during discovery (1-3) |
156
+ | `--routes` | `<csv>` | — | Explicit path list, bypasses auto-discovery |
157
+ | `--project-dir` | `<path>` | — | Path to project source for stack-aware fixes and source pattern scanning |
104
158
 
105
159
  ### Audit intelligence
106
160
 
107
161
  | Flag | Argument | Default | Description |
108
162
  | :--- | :--- | :--- | :--- |
109
- | `--target` | `<text>` | `WCAG 2.2 AA` | Compliance target label in reports. |
110
- | `--only-rule` | `<id>` | | Run a single axe rule (e.g. `color-contrast`). |
111
- | `--ignore-findings` | `<csv>` | — | Rule IDs to exclude from output. |
112
- | `--exclude-selectors` | `<csv>` | — | CSS selectors to skip during DOM scan. |
113
- | `--axe-tags` | `<csv>` | `wcag2a,wcag2aa,wcag21a,wcag21aa,wcag22a,wcag22aa` | axe-core WCAG tag filter. |
114
- | `--framework` | `<name>` | — | Override auto-detected stack. Supported: `nextjs`, `gatsby`, `react`, `nuxt`, `vue`, `angular`, `astro`, `svelte`, `shopify`, `wordpress`, `drupal`. |
163
+ | `--target` | `<text>` | `WCAG 2.2 AA` | Compliance target label in reports |
164
+ | `--axe-tags` | `<csv>` | `wcag2a,wcag2aa,wcag21a,wcag21aa,wcag22a,wcag22aa` | axe-core WCAG tag filter |
165
+ | `--only-rule` | `<id>` | — | Run a single axe rule (e.g. `color-contrast`) |
166
+ | `--ignore-findings` | `<csv>` | — | Rule IDs to exclude from output |
167
+ | `--exclude-selectors` | `<csv>` | — | CSS selectors to skip during DOM scan |
168
+ | `--framework` | `<name>` | — | Override auto-detected stack (`nextjs`, `react`, `vue`, `angular`, `svelte`, `shopify`, `wordpress`, etc.) |
115
169
 
116
- ### Execution & emulation
170
+ ### Execution and emulation
117
171
 
118
172
  | Flag | Argument | Default | Description |
119
173
  | :--- | :--- | :--- | :--- |
120
- | `--color-scheme` | `light\|dark` | `light` | Emulate `prefers-color-scheme`. |
121
- | `--wait-until` | `domcontentloaded\|load\|networkidle` | `domcontentloaded` | Playwright page load strategy. Use `networkidle` for SPAs. |
122
- | `--viewport` | `<WxH>` | — | Viewport size (e.g. `375x812`, `1440x900`). |
123
- | `--wait-ms` | `<num>` | `2000` | Delay after page load before running axe (ms). |
124
- | `--timeout-ms` | `<num>` | `30000` | Network timeout per page (ms). |
125
- | `--headed` | — | `false` | Run browser in visible mode. |
126
- | `--affected-only` | — | `false` | Re-scan only routes with previous violations. Requires a prior scan in `.audit/`. |
174
+ | `--color-scheme` | `light\|dark` | `light` | Emulate `prefers-color-scheme` |
175
+ | `--wait-until` | `domcontentloaded\|load\|networkidle` | `domcontentloaded` | Playwright page load strategy |
176
+ | `--viewport` | `<WxH>` | — | Viewport size (e.g. `375x812`) |
177
+ | `--wait-ms` | `<num>` | `2000` | Delay after page load before scanning (ms) |
178
+ | `--timeout-ms` | `<num>` | `30000` | Network timeout per page (ms) |
179
+ | `--headed` | — | `false` | Run browser in visible mode |
180
+ | `--affected-only` | — | `false` | Re-scan only routes with previous violations |
127
181
 
128
182
  ### Output generation
129
183
 
130
184
  | Flag | Argument | Default | Description |
131
185
  | :--- | :--- | :--- | :--- |
132
- | `--with-reports` | — | `false` | Generate HTML + PDF + Checklist reports. Requires `--output`. |
133
- | `--skip-reports` | | `true` | Skip visual report generation (default). |
134
- | `--output` | `<path>` | — | Output path for `report.html` (PDF and checklist derive from it). |
135
- | `--skip-patterns` | — | `false` | Disable source code pattern scanner even when `--project-dir` is set. |
136
-
137
- ## Common command patterns
138
-
139
- ```bash
140
- # Focused audit — one rule, one route
141
- a11y-audit --base-url https://example.com --only-rule color-contrast --routes /checkout --max-routes 1
142
-
143
- # Dark mode audit
144
- a11y-audit --base-url https://example.com --color-scheme dark
186
+ | `--with-reports` | — | `false` | Generate HTML + PDF + Checklist reports |
187
+ | `--output` | `<path>` | — | Output path for `report.html` |
188
+ | `--skip-patterns` | — | `false` | Disable source code pattern scanner |
145
189
 
146
- # SPA with deferred rendering
147
- a11y-audit --base-url https://example.com --wait-until networkidle --wait-ms 3000
148
-
149
- # Mobile viewport
150
- a11y-audit --base-url https://example.com --viewport 375x812
151
-
152
- # Fast re-audit after fixes (skips clean pages)
153
- a11y-audit --base-url https://example.com --affected-only
190
+ ## How the scan pipeline works
154
191
 
155
- # Ignore known false positives
156
- a11y-audit --base-url https://example.com --ignore-findings color-contrast,frame-title
157
192
  ```
158
-
159
- ## Output artifacts
160
-
161
- All artifacts are written to `.audit/` relative to the package root.
162
-
163
- | File | Always generated | Description |
164
- | :--- | :--- | :--- |
165
- | `a11y-scan-results.json` | Yes | Raw merged results from axe-core + CDP + pa11y per route |
166
- | `a11y-findings.json` | Yes | Enriched findings with fix intelligence, WCAG mapping, and severity |
167
- | `progress.json` | Yes | Real-time scan progress with per-engine step status and finding counts |
168
- | `remediation.md` | Yes | AI-agent-optimized remediation roadmap |
169
- | `report.html` | With `--with-reports` | Interactive HTML dashboard |
170
- | `report.pdf` | With `--with-reports` | Formal compliance PDF |
171
- | `checklist.html` | With `--with-reports` | Manual WCAG testing checklist |
172
-
173
- See [Output Artifacts](docs/outputs.md) for full schema reference.
193
+ URL
194
+ |
195
+ v
196
+ [1. Crawl & Discover] sitemap.xml / BFS link crawl / explicit --routes
197
+ |
198
+ v
199
+ [2. Navigate] Playwright opens each route in Chromium
200
+ |
201
+ +---> [axe-core] Injects axe into the page, runs WCAG tag checks
202
+ |
203
+ +---> [CDP] Opens a CDP session, reads the full accessibility tree
204
+ |
205
+ +---> [pa11y] Launches HTML CodeSniffer via Puppeteer Chrome
206
+ |
207
+ v
208
+ [3. Merge & Dedup] Combines findings, removes cross-engine duplicates
209
+ |
210
+ v
211
+ [4. Analyze] Enriches with WCAG mapping, severity, fix code, framework hints
212
+ |
213
+ v
214
+ [5. Reports] HTML dashboard, PDF, checklist, Markdown remediation
215
+ ```
174
216
 
175
217
  ## Scan engines
176
218
 
@@ -181,52 +223,48 @@ The primary engine. Runs Deque's axe-core rule set against the live DOM inside P
181
223
  ### CDP (Chrome DevTools Protocol)
182
224
 
183
225
  Queries the browser's full accessibility tree via a CDP session. Catches issues axe may miss:
184
- - Interactive elements (buttons, links, inputs) with no accessible name
226
+ - Interactive elements with no accessible name
185
227
  - Focusable elements hidden with `aria-hidden`
186
228
 
187
229
  ### pa11y (HTML CodeSniffer)
188
230
 
189
- Runs Squiz's HTML CodeSniffer via Puppeteer Chrome. Catches WCAG violations around:
190
- - Heading hierarchy
191
- - Link purpose
192
- - Form label associations
231
+ Runs Squiz's HTML CodeSniffer via Puppeteer Chrome. Catches WCAG violations around heading hierarchy, link purpose, and form label associations.
193
232
 
194
233
  Requires a separate Chrome installation (`npx puppeteer browsers install chrome`). If Chrome is missing, pa11y fails silently and the scan continues with axe + CDP.
195
234
 
196
- ### Merge & deduplication
197
-
198
- After all three engines run, findings are merged and deduplicated:
199
- - axe findings are added first (baseline)
200
- - CDP findings are checked against axe equivalents (e.g. `cdp-missing-accessible-name` vs `button-name`) to avoid duplicates
201
- - pa11y findings are checked against existing selectors to avoid triple-reporting the same element
202
-
203
- ## Troubleshooting
204
-
205
- **`Error: browserType.launch: Executable doesn't exist`**
206
- Run `npx playwright install chromium` (or `pnpm exec playwright install chromium`).
235
+ ## Output artifacts
207
236
 
208
- **`pa11y checks failed (non-fatal): Could not find Chrome`**
209
- pa11y requires Puppeteer's Chrome, which is separate from Playwright's Chromium. Install it with `npx puppeteer browsers install chrome`.
237
+ All artifacts are written to `.audit/` relative to the package root.
210
238
 
211
- **`Missing required argument: --base-url`**
212
- The flag is required. Provide a full URL including protocol: `--base-url https://example.com`.
239
+ | File | Always generated | Description |
240
+ | :--- | :--- | :--- |
241
+ | `a11y-scan-results.json` | Yes | Raw merged results from axe + CDP + pa11y per route |
242
+ | `a11y-findings.json` | Yes | Enriched findings with fix intelligence |
243
+ | `progress.json` | Yes | Real-time scan progress with per-engine step status |
244
+ | `remediation.md` | Yes | AI-agent-optimized remediation roadmap |
245
+ | `report.html` | With `--with-reports` | Interactive HTML dashboard |
246
+ | `report.pdf` | With `--with-reports` | Formal compliance PDF |
247
+ | `checklist.html` | With `--with-reports` | Manual WCAG testing checklist |
213
248
 
214
- **Scan returns 0 findings on an SPA**
215
- Use `--wait-until networkidle --wait-ms 3000` to let async content render before the engines run.
249
+ ## Installation
216
250
 
217
- **`--with-reports` exits without generating PDF**
218
- Ensure `--output` is also set and points to an `.html` file path: `--output ./audit/report.html`.
251
+ ```bash
252
+ npm install @diegovelasquezweb/a11y-engine
253
+ npx playwright install chromium
254
+ npx puppeteer browsers install chrome
255
+ ```
219
256
 
220
- **Chromium crashes in CI**
221
- Add `--no-sandbox` via the `PLAYWRIGHT_CHROMIUM_LAUNCH_OPTIONS` env var, or run Playwright with the `--with-deps` flag during browser installation.
257
+ > **Two browsers are required:**
258
+ > - **Playwright Chromium** — used by axe-core and CDP checks
259
+ > - **Puppeteer Chrome** — used by pa11y (HTML CodeSniffer)
222
260
 
223
261
  ## Documentation
224
262
 
225
263
  | Resource | Description |
226
264
  | :--- | :--- |
227
- | [Architecture](https://github.com/diegovelasquezweb/a11y-engine/blob/main/docs/architecture.md) | How the multi-engine scanner pipeline works |
228
- | [CLI Handbook](https://github.com/diegovelasquezweb/a11y-engine/blob/main/docs/cli-handbook.md) | Full flag reference and usage patterns |
229
- | [Output Artifacts](https://github.com/diegovelasquezweb/a11y-engine/blob/main/docs/outputs.md) | Schema and structure of every generated file |
265
+ | [Architecture](docs/architecture.md) | How the multi-engine scanner pipeline works |
266
+ | [CLI Handbook](docs/cli-handbook.md) | Full flag reference and usage patterns |
267
+ | [Output Artifacts](docs/outputs.md) | Schema and structure of every generated file |
230
268
 
231
269
  ## License
232
270
 
@@ -205,6 +205,28 @@ Assets are static JSON files bundled with the package under `assets/`. They are
205
205
  | `remediation/axe-check-maps.json` | axe check-to-rule mapping |
206
206
  | `remediation/source-boundaries.json` | Framework-specific source file locations |
207
207
 
208
+ ## Programmatic API
209
+
210
+ In addition to the CLI pipeline, the engine exports 7 functions via `scripts/index.mjs` for direct consumption by Node.js applications (e.g. `a11y-scanner`). These functions reuse the same internal renderers, assets, and enrichment logic as the CLI — no duplication.
211
+
212
+ ```
213
+ scripts/index.mjs (public API)
214
+ ├── getEnrichedFindings() ← uses asset-loader, intelligence.json, pa11y-config.json
215
+ ├── getAuditSummary() ← uses compliance-config.json, wcag-reference.json
216
+ ├── getPDFReport() ← uses reports/renderers/pdf.mjs + Playwright
217
+ ├── getHTMLReport() ← uses reports/renderers/html.mjs + findings.mjs
218
+ ├── getChecklist() ← uses reports/renderers/html.mjs (manual checks)
219
+ ├── getRemediationGuide() ← uses reports/renderers/md.mjs
220
+ └── getSourcePatterns() ← uses engine/source-scanner.mjs
221
+ ```
222
+
223
+ ### Key design decisions
224
+
225
+ - **No filesystem output** — all API functions return data in memory (strings, Buffers, arrays). The consumer decides where to write.
226
+ - **Payload in, results out** — functions accept the raw `{ findings, metadata }` payload that `a11y-findings.json` contains. No need to resolve paths or read files.
227
+ - **`screenshotUrlBuilder` callback** — `getEnrichedFindings` accepts an optional function to transform raw screenshot paths (e.g. `screenshots/0-color-contrast.png`) into consumer-specific URLs (e.g. `/api/scan/{id}/screenshot?path=...`). This keeps URL construction out of the engine.
228
+ - **CLI unaffected** — the `audit.mjs` orchestrator and all CLI builders continue to work exactly as before. The API is additive.
229
+
208
230
  ## Execution model and timeouts
209
231
 
210
232
  `audit.mjs` spawns each stage as a child process via `node:child_process`. All child processes:
@@ -235,3 +235,7 @@ The engine never exits `1` just because findings were found. Exit `1` only indic
235
235
  REMEDIATION_PATH=<abs-path> # always printed on success
236
236
  REPORT_PATH=<abs-path> # only printed when --with-reports is set
237
237
  ```
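A wrapper process can recover these paths from the CLI's stdout. A minimal sketch — the parsing helper is illustrative, not part of the package:

```javascript
// Extracts the KEY=<abs-path> markers the CLI prints on stdout.
// Returns null for markers that were not printed (e.g. REPORT_PATH
// when --with-reports was not set).
function parseOutputPaths(stdout) {
  const grab = (key) => {
    const match = stdout.match(new RegExp(`^${key}=(.+)$`, "m"));
    return match ? match[1].trim() : null;
  };
  return {
    remediationPath: grab("REMEDIATION_PATH"),
    reportPath: grab("REPORT_PATH"),
  };
}
```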
238
+
239
+ ## Programmatic alternative
240
+
241
+ For applications that embed the engine as a dependency (e.g. web dashboards, CI pipelines), it also exports a programmatic API that processes scan data in memory, with no filesystem operations. See the [README](../README.md#programmatic-api) for full documentation.
package/docs/outputs.md CHANGED
@@ -257,40 +257,54 @@ Written to the same directory as `--output` as `checklist.html`.
257
257
 
258
258
  ## Consuming outputs programmatically
259
259
 
260
- ### Reading `a11y-findings.json` from an integration
261
-
262
- ```js
263
- import fs from "node:fs";
264
- import path from "node:path";
265
- import { createRequire } from "node:module";
266
-
267
- // Resolve real path (handles pnpm symlinks)
268
- const req = createRequire(import.meta.url);
269
- const auditScript = req.resolve("@diegovelasquezweb/a11y-engine/scripts/audit.mjs");
270
- const engineRoot = path.dirname(path.dirname(auditScript));
271
- const findingsPath = path.join(engineRoot, ".audit", "a11y-findings.json");
272
-
273
- const { findings, metadata } = JSON.parse(fs.readFileSync(findingsPath, "utf-8"));
260
+ ### Using the programmatic API (recommended)
261
+
262
+ The engine exports functions that process scan data directly in memory — no filesystem path resolution needed:
263
+
264
+ ```ts
265
+ import {
266
+ getEnrichedFindings,
267
+ getAuditSummary,
268
+ getPDFReport,
269
+ getHTMLReport,
270
+ getChecklist,
271
+ getRemediationGuide,
272
+ getSourcePatterns,
273
+ } from "@diegovelasquezweb/a11y-engine";
274
+
275
+ // After running audit.mjs via CLI, read the findings file
276
+ const payload = JSON.parse(fs.readFileSync(findingsPath, "utf-8"));
277
+
278
+ // Enrich findings with fix intelligence
279
+ const findings = getEnrichedFindings(payload, {
280
+ screenshotUrlBuilder: (path) => `/api/screenshot?path=${encodeURIComponent(path)}`,
281
+ });
282
+
283
+ // Get full audit summary
284
+ const summary = getAuditSummary(findings, payload);
285
+
286
+ // Generate reports
287
+ const pdf = await getPDFReport(payload, { baseUrl: "https://example.com" });
288
+ const html = await getHTMLReport(payload, { baseUrl: "https://example.com" });
289
+ const checklist = await getChecklist({ baseUrl: "https://example.com" });
290
+ const guide = await getRemediationGuide(payload, { baseUrl: "https://example.com" });
291
+
292
+ // Scan source code patterns
293
+ const patterns = await getSourcePatterns("/path/to/project", { framework: "nextjs" });
274
294
  ```
275
295
 
276
- > Note: `import.meta.url` may be mangled by bundlers (e.g. Next.js). In that case, use `fs.realpathSync` on the known symlink instead:
277
-
278
- ```js
279
- const symlinkBase = path.join(process.cwd(), "node_modules", "@diegovelasquezweb", "a11y-engine");
280
- const engineRoot = fs.realpathSync(symlinkBase);
281
- const findingsPath = path.join(engineRoot, ".audit", "a11y-findings.json");
282
- ```
296
+ See the [README](../README.md#programmatic-api) for full API documentation and type signatures.
283
297
 
284
298
  ### Reading `progress.json` for live UI updates
285
299
 
300
+ During CLI execution, `progress.json` is written to `.audit/` in real time. This is relevant when using the CLI via `child_process` — the programmatic API does not write progress files.
301
+
286
302
  ```js
287
303
  const progressPath = path.join(engineRoot, ".audit", "progress.json");
288
304
 
289
- // Poll this file during scan execution
290
305
  if (fs.existsSync(progressPath)) {
291
306
  const progress = JSON.parse(fs.readFileSync(progressPath, "utf-8"));
292
307
  console.log(`Current step: ${progress.currentStep}`);
293
- console.log(`axe found: ${progress.steps?.axe?.found ?? "pending"}`);
294
308
  }
295
309
  ```
296
310
 
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "@diegovelasquezweb/a11y-engine",
3
- "version": "0.1.9",
3
+ "version": "0.2.0",
4
4
  "description": "WCAG 2.2 AA accessibility audit engine — scanner, analyzer, and report builders",
5
5
  "type": "module",
6
6
  "license": "MIT",
@@ -949,17 +949,23 @@ export function collectIncompleteFindings(routes) {
  }
 
  /**
- * The main execution function for the analyzer script.
- * Reads scan results, processes findings, and writes the final findings JSON.
+ * Runs the analyzer programmatically on a scan payload.
+ * @param {Object} scanPayload - The raw scan output from dom-scanner ({ routes, base_url, projectContext, ... }).
+ * @param {{ ignoreFindings?: string[], framework?: string, output?: string }} [options={}]
+ * @returns {Object} The enriched findings payload { findings, incomplete_findings, metadata, ... }.
  */
- function main() {
-   const args = parseArgs(process.argv.slice(2));
-   const ignoredRules = new Set(args.ignoreFindings);
+ export function runAnalyzer(scanPayload, options = {}) {
+   if (!scanPayload) throw new Error("Missing scan payload");
 
-   const payload = readJson(args.input);
-   if (!payload) throw new Error(`Input not found or invalid: ${args.input}`);
+   const args = {
+     input: null,
+     output: options.output || getInternalPath("a11y-findings.json"),
+     ignoreFindings: options.ignoreFindings || [],
+     framework: options.framework || null,
+   };
 
-   const result = buildFindings(payload, args);
+   const ignoredRules = new Set(args.ignoreFindings);
+   const result = buildFindings(scanPayload, args);
 
    if (ignoredRules.size > 0) {
      const knownIds = new Set(
@@ -989,15 +995,15 @@ function main() {
  if (deduplicatedCount > 0) log.info(`Deduplicated ${deduplicatedCount} cross-page finding group(s).`);
 
  const overallAssessment = computeOverallAssessment(dedupedFindings);
- const passedCriteria = computePassedCriteria(payload.routes || [], WCAG_CRITERION_MAP, dedupedFindings);
- const outOfScope = computeOutOfScope(payload.routes || []);
+ const passedCriteria = computePassedCriteria(scanPayload.routes || [], WCAG_CRITERION_MAP, dedupedFindings);
+ const outOfScope = computeOutOfScope(scanPayload.routes || []);
  const recommendations = computeRecommendations(dedupedFindings);
- const testingMethodology = computeTestingMethodology(payload);
- const incompleteFindings = collectIncompleteFindings(payload.routes || []);
+ const testingMethodology = computeTestingMethodology(scanPayload);
+ const incompleteFindings = collectIncompleteFindings(scanPayload.routes || []);
  if (incompleteFindings.length > 0)
    log.info(`${incompleteFindings.length} incomplete finding(s) require manual review.`);
 
- writeJson(args.output, {
+ const outputPayload = {
    ...result,
    findings: dedupedFindings,
    incomplete_findings: incompleteFindings,
@@ -1011,12 +1017,32 @@ function main() {
      fpFiltered: fpRemovedCount,
      deduplicatedCount,
    },
- });
+ };
+
+   // Write to disk for CLI compatibility
+   writeJson(args.output, outputPayload);
 
    if (dedupedFindings.length === 0) {
      log.info("Congratulations, no issues found.");
    }
    log.success(`Findings processed and saved to ${args.output}`);
+
+   return outputPayload;
+ }
+
+ /**
+ * CLI entry point — reads from disk, processes, writes to disk.
+ */
+ function main() {
+   const args = parseArgs(process.argv.slice(2));
+   const payload = readJson(args.input);
+   if (!payload) throw new Error(`Input not found or invalid: ${args.input}`);
+
+   runAnalyzer(payload, {
+     ignoreFindings: args.ignoreFindings,
+     framework: args.framework,
+     output: args.output,
+   });
  }
 
  if (process.argv[1] === fileURLToPath(import.meta.url)) {
package/scripts/engine/dom-scanner.mjs CHANGED
@@ -489,7 +489,16 @@ async function analyzeRoute(
  * @param {"pending"|"running"|"done"|"error"} status - Step status.
  * @param {Object} [extra={}] - Additional metadata.
  */
+ /** @type {((step: string, status: string, extra?: object) => void) | null} */
+ let _onProgressCallback = null;
+
  function writeProgress(step, status, extra = {}) {
+   // Notify external callback if set (programmatic API)
+   if (_onProgressCallback) {
+     _onProgressCallback(step, status, extra);
+   }
+
+   // Always write to disk for CLI consumers
    const progressPath = getInternalPath("progress.json");
    let progress = {};
    try {
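The callback-then-disk pattern introduced by this hunk can be sketched in isolation. This is an illustrative reduction, not package code; `withCallback` and `emit` are hypothetical names:

```javascript
// Illustrative reduction of the pattern above (hypothetical names): a
// module-level callback set for the duration of one run, and always cleared
// in a finally block so it cannot leak into a later run.
let onEvent = null;

function withCallback(cb, fn) {
  onEvent = cb;
  try {
    return fn();
  } finally {
    onEvent = null; // never leak the callback across runs
  }
}

function emit(step, status) {
  // Notify the in-process consumer first...
  if (onEvent) onEvent(step, status);
  // ...then fall through to the durable channel (the real writeProgress
  // persists to .audit/progress.json at this point).
}
```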
@@ -788,8 +797,45 @@ function mergeViolations(axeViolations, cdpViolations, pa11yViolations) {
  * Coordinates browser setup, crawling/discovery, parallel scanning, and result saving.
  * @throws {Error} If navigation to the base URL fails or browser setup issues occur.
  */
- async function main() {
-   const args = parseArgs(process.argv.slice(2));
+ /**
+ * Runs the DOM scanner programmatically.
+ * @param {Object} options - Scanner configuration (same shape as CLI args object).
+ * @param {{ onProgress?: (step: string, status: string, extra?: object) => void }} [callbacks={}]
+ * @returns {Promise<Object>} The scan payload { generated_at, base_url, onlyRule, projectContext, routes }.
+ */
+ export async function runDomScanner(options = {}, callbacks = {}) {
+   const args = {
+     baseUrl: options.baseUrl || "",
+     routes: options.routes || "",
+     output: options.output || getInternalPath("a11y-scan-results.json"),
+     maxRoutes: options.maxRoutes ?? DEFAULTS.maxRoutes,
+     waitMs: options.waitMs ?? DEFAULTS.waitMs,
+     timeoutMs: options.timeoutMs ?? DEFAULTS.timeoutMs,
+     headless: options.headless ?? DEFAULTS.headless,
+     waitUntil: options.waitUntil ?? DEFAULTS.waitUntil,
+     colorScheme: options.colorScheme || null,
+     screenshotsDir: options.screenshotsDir || getInternalPath("screenshots"),
+     excludeSelectors: options.excludeSelectors || [],
+     onlyRule: options.onlyRule || null,
+     crawlDepth: Math.min(Math.max(options.crawlDepth ?? DEFAULTS.crawlDepth, 1), 3),
+     viewport: options.viewport || null,
+     axeTags: options.axeTags || null,
+   };
+
+   if (!args.baseUrl) throw new Error("Missing required option: baseUrl");
+
+   if (callbacks.onProgress) {
+     _onProgressCallback = callbacks.onProgress;
+   }
+
+   try {
+     return await _runDomScannerInternal(args);
+   } finally {
+     _onProgressCallback = null;
+   }
+ }
+
+ async function _runDomScannerInternal(args) {
    const baseUrl = new URL(args.baseUrl).toString();
    const origin = new URL(baseUrl).origin;
 
@@ -850,7 +896,7 @@ async function main() {
  } catch (err) {
    log.error(`Fatal: Could not load base URL ${baseUrl}: ${err.message}`);
    await browser.close();
-   process.exit(1);
+   throw new Error(`Could not load base URL ${baseUrl}: ${err.message}`);
  }
 
  /**
@@ -1036,6 +1082,13 @@ async function main() {
 
  writeJson(args.output, payload);
  log.success(`Routes scan complete. Results saved to ${args.output}`);
+
+   return payload;
+ }
+
+ async function main() {
+   const args = parseArgs(process.argv.slice(2));
+   await runDomScanner(args);
  }
 
  if (process.argv[1] === fileURLToPath(import.meta.url)) {
package/scripts/engine/source-scanner.mjs CHANGED
@@ -113,7 +113,7 @@ function walkFiles(dir, extensions, results = []) {
  * @param {string} projectDir
  * @returns {string[]}
  */
- function resolveScanDirs(framework, projectDir) {
+ export function resolveScanDirs(framework, projectDir) {
    const boundaries = framework ? SOURCE_BOUNDARIES?.[framework] : null;
    if (!boundaries) return [projectDir];
 
@@ -137,11 +137,86 @@ export interface PDFReport {
    contentType: "application/pdf";
  }
 
+ export interface HTMLReport {
+   html: string;
+   contentType: "text/html";
+ }
+
  export interface ChecklistReport {
    html: string;
    contentType: "text/html";
  }
 
+ export interface RemediationGuide {
+   markdown: string;
+   contentType: "text/markdown";
+ }
+
+ export interface SourcePatternFinding {
+   id: string;
+   pattern_id: string;
+   title: string;
+   severity: string;
+   wcag: string;
+   wcag_criterion: string;
+   wcag_level: string;
+   type: string;
+   fix_description: string | null;
+   status: "confirmed" | "potential";
+   file: string;
+   line: number;
+   match: string;
+   context: string;
+   source: "code-pattern";
+ }
+
+ export interface SourcePatternResult {
+   findings: SourcePatternFinding[];
+   summary: {
+     total: number;
+     confirmed: number;
+     potential: number;
+   };
+ }
+
+ export interface HTMLReportOptions extends ReportOptions {
+   screenshotsDir?: string;
+ }
+
+ export interface RemediationOptions extends ReportOptions {
+   patternFindings?: Record<string, unknown> | null;
+ }
+
+ export interface SourcePatternOptions {
+   framework?: string;
+   onlyPattern?: string;
+ }
+
+ // ---------------------------------------------------------------------------
+ // Audit options
+ // ---------------------------------------------------------------------------
+
+ export interface RunAuditOptions {
+   baseUrl: string;
+   maxRoutes?: number;
+   crawlDepth?: number;
+   routes?: string;
+   waitMs?: number;
+   timeoutMs?: number;
+   headless?: boolean;
+   waitUntil?: string;
+   colorScheme?: string;
+   viewport?: { width: number; height: number };
+   axeTags?: string[];
+   onlyRule?: string;
+   excludeSelectors?: string[];
+   ignoreFindings?: string[];
+   framework?: string;
+   projectDir?: string;
+   skipPatterns?: boolean;
+   onProgress?: (step: string, status: string, extra?: Record<string, unknown>) => void;
+ }
+
  // ---------------------------------------------------------------------------
  // Enrichment options
  // ---------------------------------------------------------------------------
@@ -154,6 +229,8 @@ export interface EnrichmentOptions {
  // Public API
  // ---------------------------------------------------------------------------
 
+ export function runAudit(options: RunAuditOptions): Promise<ScanPayload>;
+
  export function getEnrichedFindings(
    input: ScanPayload | Finding[] | Record<string, unknown>[],
    options?: EnrichmentOptions
@@ -172,3 +249,18 @@ export function getPDFReport(
  export function getChecklist(
    options?: Pick<ReportOptions, "baseUrl">
  ): Promise<ChecklistReport>;
+
+ export function getHTMLReport(
+   payload: ScanPayload,
+   options?: HTMLReportOptions
+ ): Promise<HTMLReport>;
+
+ export function getRemediationGuide(
+   payload: ScanPayload & { incomplete_findings?: unknown[] },
+   options?: RemediationOptions
+ ): Promise<RemediationGuide>;
+
+ export function getSourcePatterns(
+   projectDir: string,
+   options?: SourcePatternOptions
+ ): Promise<SourcePatternResult>;
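The `summary` object in `SourcePatternResult` is a pair of tallies over `findings[].status`. A consumer recomputing it (a sketch, not a package export) would look like:

```javascript
// Sketch (not package code): recompute the { total, confirmed, potential }
// summary shape declared in SourcePatternResult from a findings array.
// "confirmed" and "potential" are the only status values the typings allow.
function summarizePatternFindings(findings) {
  const confirmed = findings.filter((f) => f.status === "confirmed").length;
  const potential = findings.filter((f) => f.status === "potential").length;
  return { total: findings.length, confirmed, potential };
}
```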
package/scripts/index.mjs CHANGED
@@ -424,6 +424,117 @@ export function getAuditSummary(findings, payload = null) {
    };
  }
 
+ // ---------------------------------------------------------------------------
+ // Full audit pipeline
+ // ---------------------------------------------------------------------------
+
+ /**
+ * Runs a complete accessibility audit: crawl + scan (axe + CDP + pa11y) + analyze.
+ * Returns the enriched scan payload ready for getEnrichedFindings().
+ *
+ * @param {{
+ *   baseUrl: string,
+ *   maxRoutes?: number,
+ *   crawlDepth?: number,
+ *   routes?: string,
+ *   waitMs?: number,
+ *   timeoutMs?: number,
+ *   headless?: boolean,
+ *   waitUntil?: string,
+ *   colorScheme?: string,
+ *   viewport?: { width: number, height: number },
+ *   axeTags?: string[],
+ *   onlyRule?: string,
+ *   excludeSelectors?: string[],
+ *   ignoreFindings?: string[],
+ *   framework?: string,
+ *   projectDir?: string,
+ *   skipPatterns?: boolean,
+ *   onProgress?: (step: string, status: string, extra?: object) => void,
+ * }} options
+ * @returns {Promise<{ findings: object[], metadata: object, incomplete_findings?: object[] }>}
+ */
+ export async function runAudit(options) {
+   if (!options.baseUrl) throw new Error("runAudit requires baseUrl");
+
+   const { runDomScanner } = await import("./engine/dom-scanner.mjs");
+   const { runAnalyzer } = await import("./engine/analyzer.mjs");
+
+   const onProgress = options.onProgress || null;
+
+   // Step 1: DOM scan (axe + CDP + pa11y)
+   if (onProgress) onProgress("page", "running");
+
+   const scanPayload = await runDomScanner(
+     {
+       baseUrl: options.baseUrl,
+       maxRoutes: options.maxRoutes,
+       crawlDepth: options.crawlDepth,
+       routes: options.routes,
+       waitMs: options.waitMs,
+       timeoutMs: options.timeoutMs,
+       headless: options.headless,
+       waitUntil: options.waitUntil,
+       colorScheme: options.colorScheme,
+       viewport: options.viewport,
+       axeTags: options.axeTags,
+       onlyRule: options.onlyRule,
+       excludeSelectors: options.excludeSelectors,
+     },
+     { onProgress },
+   );
+
+   // Step 2: Analyze + enrich
+   if (onProgress) onProgress("intelligence", "running");
+
+   const findingsPayload = runAnalyzer(scanPayload, {
+     ignoreFindings: options.ignoreFindings,
+     framework: options.framework,
+   });
+
+   // Step 3: Source patterns (optional)
+   if (options.projectDir && !options.skipPatterns) {
+     try {
+       const { resolveScanDirs, scanPattern } = await import("./engine/source-scanner.mjs");
+       const { patterns } = loadAssetJson(ASSET_PATHS.remediation.codePatterns, "code-patterns.json");
+
+       let resolvedFramework = options.framework;
+       if (!resolvedFramework && findingsPayload.metadata?.projectContext?.framework) {
+         resolvedFramework = findingsPayload.metadata.projectContext.framework;
+       }
+
+       const scanDirs = resolveScanDirs(resolvedFramework || null, options.projectDir);
+       const allPatternFindings = [];
+       for (const pattern of patterns) {
+         for (const scanDir of scanDirs) {
+           allPatternFindings.push(...scanPattern(pattern, scanDir, options.projectDir));
+         }
+       }
+
+       if (allPatternFindings.length > 0) {
+         findingsPayload.patternFindings = {
+           generated_at: new Date().toISOString(),
+           project_dir: options.projectDir,
+           findings: allPatternFindings,
+           summary: {
+             total: allPatternFindings.length,
+             confirmed: allPatternFindings.filter((f) => f.status === "confirmed").length,
+             potential: allPatternFindings.filter((f) => f.status === "potential").length,
+           },
+         };
+       }
+     } catch (err) {
+       // Non-fatal: source scanning is optional
+       const msg = err instanceof Error ? err.message : String(err);
+       console.warn(`Source pattern scan failed (non-fatal): ${msg}`);
+     }
+   }
+
+   if (onProgress) onProgress("intelligence", "done");
+
+   return findingsPayload;
+ }
+
  // ---------------------------------------------------------------------------
  // Report generation
  // ---------------------------------------------------------------------------
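The `onProgress` callback receives the step names emitted by the pipeline above (`"page"`, `"intelligence"`). A minimal consumer-side tracker (hypothetical, not part of the package) that folds those events into UI state:

```javascript
// Hypothetical consumer helper: fold onProgress events into a step -> status
// map for rendering. The step names ("page", "intelligence") are the ones
// the runAudit pipeline above emits; later events overwrite earlier ones.
function createProgressTracker() {
  const state = {};
  return {
    onProgress: (step, status) => {
      state[step] = status;
    },
    snapshot: () => ({ ...state }),
  };
}

// Usage sketch:
//   const tracker = createProgressTracker();
//   await runAudit({ baseUrl: "https://example.com", onProgress: tracker.onProgress });
//   render(tracker.snapshot());
```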
@@ -608,3 +719,204 @@ export async function getChecklist(options = {}) {
    contentType: "text/html",
  };
  }
+
+ // ---------------------------------------------------------------------------
+ // HTML Report
+ // ---------------------------------------------------------------------------
+
+ /**
+ * Generates an interactive HTML audit dashboard from raw scan findings.
+ * Embeds screenshots as base64 data URIs when available.
+ * @param {{ findings: object[], metadata?: object }} payload - Raw scan output.
+ * @param {{ baseUrl?: string, target?: string, screenshotsDir?: string }} [options={}]
+ * @returns {Promise<{ html: string, contentType: "text/html" }>}
+ */
+ export async function getHTMLReport(payload, options = {}) {
+   const fs = await import("node:fs");
+   const path = await import("node:path");
+   const { buildIssueCard, buildPageGroupedSection } = await import("./reports/renderers/html.mjs");
+   const { escapeHtml } = await import("./reports/renderers/utils.mjs");
+
+   const args = { baseUrl: options.baseUrl || "", target: options.target || "WCAG 2.2 AA" };
+   const findings = normalizeForReports(payload).filter(
+     (f) => f.wcagClassification !== "AAA" && f.wcagClassification !== "Best Practice",
+   );
+
+   // Embed screenshots as base64 if screenshotsDir is provided
+   if (options.screenshotsDir) {
+     for (const finding of findings) {
+       if (finding.screenshotPath) {
+         const filename = path.basename(finding.screenshotPath);
+         const absolutePath = path.join(options.screenshotsDir, filename);
+         try {
+           if (fs.existsSync(absolutePath)) {
+             const data = fs.readFileSync(absolutePath);
+             finding.screenshotPath = `data:image/png;base64,${data.toString("base64")}`;
+           } else {
+             finding.screenshotPath = null;
+           }
+         } catch {
+           finding.screenshotPath = null;
+         }
+       }
+     }
+   }
+
+   // The html.mjs report builder auto-executes main() on import, so its
+   // buildHtml cannot be imported directly. Replicate its logic here using
+   // the shared renderers instead.
+   const {
+     buildSummary: buildSummaryLocal,
+     computeComplianceScore: computeScoreLocal,
+     scoreLabel: scoreLabelLocal,
+     buildPersonaSummary: buildPersonaSummaryLocal,
+     wcagOverallStatus: wcagOverallStatusLocal,
+   } = await import("./reports/renderers/findings.mjs");
+
+   const totals = buildSummaryLocal(findings);
+   const score = computeScoreLocal(totals);
+   const label = scoreLabelLocal(score);
+   const wcagStatus = wcagOverallStatusLocal(totals);
+   const personaCounts = buildPersonaSummaryLocal(findings);
+
+   let siteHostname = args.baseUrl;
+   try {
+     siteHostname = new URL(args.baseUrl.startsWith("http") ? args.baseUrl : `https://${args.baseUrl}`).hostname;
+   } catch {}
+
+   const pageGroups = {};
+   for (const f of findings) {
+     const area = f.area || "Unknown";
+     if (!pageGroups[area]) pageGroups[area] = [];
+     pageGroups[area].push(f);
+   }
+
+   const issueCards = findings.map((f) => buildIssueCard(f)).join("\n");
+   const pageGroupedSections = Object.entries(pageGroups)
+     .map(([area, group]) => buildPageGroupedSection(area, group))
+     .join("\n");
+
+   const quickWins = findings
+     .filter((f) => (f.severity === "Critical" || f.severity === "Serious") && f.fixCode)
+     .slice(0, 3);
+
+   // Build a self-contained HTML report
+   const html = `<!doctype html>
+ <html lang="en">
+ <head>
+   <meta charset="utf-8">
+   <meta name="viewport" content="width=device-width, initial-scale=1">
+   <title>Accessibility Audit — ${escapeHtml(siteHostname)}</title>
+   <script src="https://cdn.tailwindcss.com"><\/script>
+   <link rel="preconnect" href="https://fonts.googleapis.com">
+   <link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
+   <link href="https://fonts.googleapis.com/css2?family=Inter:wght@400;500;600;700;800&family=JetBrains+Mono:wght@400;500&display=swap" rel="stylesheet">
+   <style>
+     body { font-family: 'Inter', sans-serif; background: #f8fafc; }
+   </style>
+ </head>
+ <body>
+   <main class="max-w-5xl mx-auto px-4 py-12">
+     <h1 class="text-3xl font-extrabold mb-2">Accessibility Audit Dashboard</h1>
+     <p class="text-slate-500 mb-8">${escapeHtml(siteHostname)} — ${new Date().toLocaleDateString("en-US", { year: "numeric", month: "long", day: "numeric" })}</p>
+     <div class="grid grid-cols-2 md:grid-cols-4 gap-4 mb-8">
+       <div class="bg-white rounded-xl p-4 border border-slate-200 shadow-sm">
+         <div class="text-3xl font-black">${score}</div>
+         <div class="text-xs font-bold text-slate-500 uppercase">${label}</div>
+       </div>
+       <div class="bg-white rounded-xl p-4 border border-slate-200 shadow-sm">
+         <div class="text-3xl font-black">${findings.length}</div>
+         <div class="text-xs font-bold text-slate-500 uppercase">Issues</div>
+       </div>
+       <div class="bg-white rounded-xl p-4 border border-slate-200 shadow-sm">
+         <div class="text-3xl font-black ${wcagStatus === 'Pass' ? 'text-emerald-600' : 'text-rose-600'}">${wcagStatus}</div>
+         <div class="text-xs font-bold text-slate-500 uppercase">WCAG 2.2 AA</div>
+       </div>
+       <div class="bg-white rounded-xl p-4 border border-slate-200 shadow-sm">
+         <div class="text-3xl font-black">${Object.keys(pageGroups).length}</div>
+         <div class="text-xs font-bold text-slate-500 uppercase">Pages</div>
+       </div>
+     </div>
+     <div class="space-y-4">
+       ${pageGroupedSections}
+     </div>
+   </main>
+ </body>
+ </html>`;
+
+   return {
+     html,
+     contentType: "text/html",
+   };
+ }
+
+ // ---------------------------------------------------------------------------
+ // Remediation Guide (Markdown)
+ // ---------------------------------------------------------------------------
+
+ /**
+ * Generates a Markdown remediation guide from raw scan findings.
+ * @param {{ findings: object[], metadata?: object, incomplete_findings?: object[] }} payload
+ * @param {{ baseUrl?: string, target?: string, patternFindings?: object }} [options={}]
+ * @returns {Promise<{ markdown: string, contentType: "text/markdown" }>}
+ */
+ export async function getRemediationGuide(payload, options = {}) {
+   const { buildMarkdownSummary } = await import("./reports/renderers/md.mjs");
+
+   const args = { baseUrl: options.baseUrl || "", target: options.target || "WCAG 2.2 AA" };
+   const findings = normalizeForReports(payload);
+
+   const markdown = buildMarkdownSummary(args, findings, {
+     ...payload.metadata,
+     incomplete_findings: payload.incomplete_findings,
+     pattern_findings: options.patternFindings || null,
+   });
+
+   return {
+     markdown,
+     contentType: "text/markdown",
+   };
+ }
+
+ // ---------------------------------------------------------------------------
+ // Source Pattern Scanner
+ // ---------------------------------------------------------------------------
+
+ /**
+ * Scans a project's source code for accessibility patterns not detectable by axe-core.
+ * @param {string} projectDir - Absolute path to the project root.
+ * @param {{ framework?: string, onlyPattern?: string }} [options={}]
+ * @returns {Promise<{ findings: object[], summary: { total: number, confirmed: number, potential: number } }>}
+ */
+ export async function getSourcePatterns(projectDir, options = {}) {
+   const { scanPattern, resolveScanDirs } = await import("./engine/source-scanner.mjs");
+
+   const { patterns } = loadAssetJson(ASSET_PATHS.remediation.codePatterns, "code-patterns.json");
+
+   const activePatterns = options.onlyPattern
+     ? patterns.filter((p) => p.id === options.onlyPattern)
+     : patterns;
+
+   if (activePatterns.length === 0) {
+     return { findings: [], summary: { total: 0, confirmed: 0, potential: 0 } };
+   }
+
+   const scanDirs = resolveScanDirs(options.framework || null, projectDir);
+   const allFindings = [];
+
+   for (const pattern of activePatterns) {
+     for (const scanDir of scanDirs) {
+       allFindings.push(...scanPattern(pattern, scanDir, projectDir));
+     }
+   }
+
+   const confirmed = allFindings.filter((f) => f.status === "confirmed").length;
+   const potential = allFindings.filter((f) => f.status === "potential").length;
+
+   return {
+     findings: allFindings,
+     summary: { total: allFindings.length, confirmed, potential },
+   };
+ }
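Since `getSourcePatterns` returns findings carrying `file` and `status` fields, a consumer can group them for display. A sketch under those assumptions (`groupByFile` is a hypothetical helper, not a package export):

```javascript
// Hypothetical consumer helper: group source-pattern findings by file,
// listing "confirmed" hits before "potential" ones within each file.
function groupByFile(findings) {
  const byFile = new Map();
  for (const f of findings) {
    if (!byFile.has(f.file)) byFile.set(f.file, []);
    byFile.get(f.file).push(f);
  }
  for (const list of byFile.values()) {
    list.sort((a, b) =>
      a.status === b.status ? 0 : a.status === "confirmed" ? -1 : 1,
    );
  }
  return byFile;
}

// Usage sketch:
//   const { findings } = await getSourcePatterns("/abs/path/to/project");
//   for (const [file, hits] of groupByFile(findings)) render(file, hits);
```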