bmad-plus 0.3.0 → 0.3.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -5,6 +5,29 @@ All notable changes to BMAD+ will be documented in this file.
  The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
  and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+ ## [0.3.2] — 2026-03-19
+
+ ### 📊 SEO Engine — Reports, Competitor & Hreflang (Sprint 2)
+
+ ### Added
+ - **seo_report.py** — Professional HTML report generator with inline SVG radar chart, color-coded issue cards, quick wins section, and print-friendly CSS
+ - **Benchmarker role** — Added to Chief agent for `/seo competitor` command (side-by-side site comparison with delta scoring)
+ - **hreflang-rules.md** — Complete hreflang audit reference with 7 validation rules, 6 common error patterns, and 12-point checklist
+
+ ---
+
+ ## [0.3.1] — 2026-03-19
+
+ ### 🔧 SEO Engine Enhancements (Sprint 1)
+
+ ### Added
+ - **SKILL.md orchestrator** — Single entry point routing 15 `/seo` commands to the right agents
+ - **seo_apis.py** — Google APIs client (PageSpeed Insights, CrUX field data, Rich Results Test)
+ - **requirements.txt** — Python dependencies (requests, beautifulsoup4, lxml)
+ - **install.sh + install.ps1** — Cross-platform dependency installer with venv support
+
+ ---
+
  ## [0.3.0] — 2026-03-19
 
  ### 🚀 SEO Engine v2.0 — Complete Rewrite
@@ -0,0 +1,171 @@
+ ---
+ name: seo-engine
+ description: >
+ BMAD+ SEO Engine v2.1 — Complete SEO audit engine with 3 multi-role agents,
+ 6-phase workflow, Python toolkit, Google API integration, and PageSpeed
+ perfection loop. Use when user says /seo or any SEO-related command.
+ ---
+
+ # SEO Engine — Orchestrator
+
+ > By Laurent Rochetta | BMAD+ SEO Engine v2.1
+
+ ## Quick Start
+
+ This skill orchestrates 3 specialized agents through a structured workflow.
+ Load the full agent files only when activating that agent's phase.
+
+ ## Command Router
+
+ When the user issues a `/seo` command, route as follows:
+
+ | Command | Agent(s) | Action |
+ |---------|----------|--------|
+ | `/seo full <url>` | Scout → Judge → Chief | Run all 6 phases |
+ | `/seo quick <url>` | Scout → Judge → Chief | Run phases 1–4 only |
+ | `/seo technical <url>` | Scout (Inspector) | Phase 2 technical only |
+ | `/seo content <url>` | Judge (Content Expert) | Phase 2 content only |
+ | `/seo geo <url>` | Judge (GEO Analyst) | Phase 3 only |
+ | `/seo schema <url>` | Judge (Schema Master) | Schema detection + validation |
+ | `/seo images <url>` | Judge (Content Expert) | Image audit subset |
+ | `/seo hreflang <url>` | Scout (Inspector) | Hreflang audit, ref: `ref/hreflang-rules.md` |
+ | `/seo pagespeed <url>` | Scout + Chief | PageSpeed perfection loop |
+ | `/seo plan <type>` | Chief (Strategist) | Strategic plan by industry |
+ | `/seo fix` | Chief (Strategist) | Auto-generate fixes from last audit |
+ | `/seo history` | Chief (Reporter) | Show score history |
+ | `/seo compare` | Chief (Reporter) | Compare with previous audit |
+ | `/seo competitor <url1> <url2>` | Scout + Judge + Chief | Benchmark two sites |
+ | `/seo api <url>` | (script) | Run Google APIs (PSI + CrUX + Rich Results) |
+
+ ## Full Audit Orchestration (`/seo full`)
+
+ ### Phase 1 — Reconnaissance
+ **Agent**: Scout (Crawler role)
+ **Load**: `agent/seo-scout.md`
+
+ 1. Run `scripts/seo_fetch.py <url> --json` to fetch the homepage
+ 2. Run `scripts/seo_crawl.py <url> --depth 2 --max 25 --json` to discover structure
+ 3. Detect business type from content analysis:
+    - **SaaS**: pricing page, features page, signup CTA
+    - **E-commerce**: product pages, cart, categories
+    - **Local business**: address, phone, map, opening hours
+    - **Publisher**: articles, blog, news, RSS feed
+    - **Agency**: services, portfolio, case studies
+ 4. Check for `/robots.txt`, `/sitemap.xml`, `/llms.txt`
+
+ **Checkpoint**: Report discovery summary, ask "Continue with full audit?"
+
+ ### Phase 2 — Deep Scan (PARALLEL)
+ **Agents**: Scout (Inspector) + Judge (Content Expert + Schema Master)
+ **Load**: `agent/seo-scout.md` + `agent/seo-judge.md`
+
+ Run Scout and Judge **simultaneously** on each discovered page:
+
+ **Scout checks** (9 categories — see `agent/seo-scout.md`):
+ - Crawlability, Indexability, Security, URL Structure, Mobile
+ - Core Web Vitals, Structured Data detection, JS Rendering, IndexNow
+
+ **Judge checks** (see `agent/seo-judge.md`):
+ - E-E-A-T evaluation (ref: `ref/eeat-criteria.md`)
+ - Content quality (ref: `ref/quality-gates.md`)
+ - Schema validation (ref: `ref/schema-catalog.md`)
+ - Image audit
+ - Internal/external link analysis
+
+ **Optional**: Run `scripts/seo_apis.py --all <url>` for live PageSpeed + CrUX data.
+
+ Use `scripts/seo_parse.py <file> --url <url> --json` on fetched HTML.
+ Use `scripts/seo_screenshot.py <url> --viewport mobile` for visual audit.
+
+ ### Phase 3 — AI Readiness & GEO
+ **Agent**: Judge (GEO Analyst role)
+ **Reference**: `ref/geo-signals.md`
+
+ - Check AI crawler access (GPTBot, ClaudeBot, PerplexityBot)
+ - Verify llms.txt compliance
+ - Score passage citability (134–167 word blocks)
+ - Compute AI Readiness Score (0–100)
+
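The citability check above can be sketched as a simple word-count filter. A hypothetical sketch: paragraph extraction is assumed done elsewhere, the 134–167 word window comes from the phase description, and the scoring formula (share of in-range paragraphs) is an illustrative choice, not the engine's documented metric:

```python
# Word window taken from the Phase 3 description above.
MIN_WORDS, MAX_WORDS = 134, 167

def citability_score(paragraphs: list) -> float:
    """Return the share (0-100) of paragraphs sized for AI citation."""
    if not paragraphs:
        return 0.0
    citable = sum(1 for p in paragraphs if MIN_WORDS <= len(p.split()) <= MAX_WORDS)
    return round(100 * citable / len(paragraphs), 1)
```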
+ ### Phase 4 — Scoring
+ **Agent**: Chief (Scorer role)
+ **Load**: `agent/seo-chief.md`
+
+ Compute SEO Health Score (0–100):
+
+ | Category | Weight |
+ |----------|--------|
+ | Technical SEO | 20% |
+ | Content & E-E-A-T | 22% |
+ | On-Page SEO | 18% |
+ | Schema | 10% |
+ | Performance (CWV) | 12% |
+ | AI Readiness (GEO) | 12% |
+ | Images | 6% |
+
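The weighted aggregation can be sketched directly from the table (the weights sum to 100%). Category keys and the input dict shape are illustrative assumptions; the per-category scores (0–100) come from the earlier phases:

```python
# Weights mirror the Phase 4 table; they sum to 1.0.
WEIGHTS = {
    "technical": 0.20,
    "content_eeat": 0.22,
    "onpage": 0.18,
    "schema": 0.10,
    "performance": 0.12,
    "ai_readiness": 0.12,
    "images": 0.06,
}

def seo_health_score(category_scores: dict) -> int:
    """Weighted sum of per-category scores; missing categories count as 0."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
    total = sum(WEIGHTS[cat] * category_scores.get(cat, 0) for cat in WEIGHTS)
    return round(total)
```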
+ ### Phase 5 — Action Plan
+ **Agent**: Chief (Strategist role)
+
+ 1. Classify issues: 🔴 Critical → 🟠 High → 🟡 Medium → 🟢 Low
+ 2. Identify quick wins (highest impact/effort ratio)
+ 3. Generate 30/60/90-day roadmap
+ 4. Auto-generate code fixes (meta tags, schema JSON-LD, robots.txt, llms.txt)
+
+ **Checkpoint**: "Here's the plan. Apply fixes automatically?"
+
+ ### Phase 5b — PageSpeed Perfection Loop
+ **Agents**: Scout + Chief
+ **Reference**: `pagespeed-playbook.md` + `checklist.md`
+
+ Use `scripts/seo_apis.py --pagespeed <url>` for live scores.
+ Loop: fix one issue → re-test → verify improvement → next issue.
+ Target: 100% on all 4 categories (Performance, Accessibility, Best Practices, SEO).
+
+ ### Phase 6 — Monitoring (optional)
+ **Agent**: Scout (Crawler role)
+
+ Save results to `.bmad-seo/history/<domain>-<date>.json`.
+ On re-audit: compare with previous, show deltas.
+
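The re-audit comparison can be sketched as a diff of two score dicts. The history JSON layout and the `{category: score}` shape are assumptions for illustration, not the engine's documented format:

```python
import json
from pathlib import Path
from typing import Optional

def score_deltas(previous: dict, current: dict) -> dict:
    """Per-category delta between two audit score dicts (current - previous)."""
    cats = set(previous) | set(current)
    return {c: current.get(c, 0) - previous.get(c, 0) for c in sorted(cats)}

def load_latest(history_dir: str, domain: str) -> Optional[dict]:
    """Most recent saved audit for a domain (date-sorted filename), or None."""
    files = sorted(Path(history_dir).glob(f"{domain}-*.json"))
    return json.loads(files[-1].read_text()) if files else None
```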
+ ---
+
+ ## Python Toolkit
+
+ | Script | Usage | Dependencies |
+ |--------|-------|--------------|
+ | `seo_fetch.py` | `python scripts/seo_fetch.py <url> [--ua googlebot] [--json]` | requests |
+ | `seo_parse.py` | `python scripts/seo_parse.py <file> --url <url> --json` | beautifulsoup4, lxml |
+ | `seo_crawl.py` | `python scripts/seo_crawl.py <url> --depth 2 --max 25 --json` | requests |
+ | `seo_screenshot.py` | `python scripts/seo_screenshot.py <url> --viewport mobile` | playwright |
+ | `seo_apis.py` | `python scripts/seo_apis.py --pagespeed <url>` | requests |
+
+ **Install dependencies**: `pip install -r requirements.txt`
+
+ **Environment**: Set `GOOGLE_API_KEY` for Google API access (free, no OAuth).
+
+ ---
+
+ ## Reference Files (lazy-load)
+
+ Only load these when the relevant agent needs them:
+ - `ref/cwv-thresholds.md` — Core Web Vitals 2026
+ - `ref/schema-catalog.md` — Schema.org v29.4 types
+ - `ref/eeat-criteria.md` — E-E-A-T scoring grid
+ - `ref/geo-signals.md` — AI search signals
+ - `ref/quality-gates.md` — Content thresholds
+ - `ref/schema-templates.json` — 14 JSON-LD templates
+
+ ---
+
+ ## Industry-Specific Plans (`/seo plan <type>`)
+
+ | Type | Focus |
+ |------|-------|
+ | `saas` | Pricing pages, feature comparison, trial CTAs, documentation SEO |
+ | `ecommerce` | Product schema, category pages, faceted navigation, review markup |
+ | `local` | LocalBusiness schema, Google Business Profile, location pages, NAP consistency |
+ | `publisher` | Article schema, author E-E-A-T, news sitemap, pagination |
+ | `agency` | Service schema, portfolio, case studies, city-specific landing pages |
+
+ ---
+
+ *BMAD+ SEO Engine v2.1 — By Laurent Rochetta*
@@ -29,6 +29,25 @@ You are **Chief**, the strategist and reporting agent of the BMAD+ SEO Engine. Y
  - Generate executive summary for non-technical stakeholders
  - Create monitoring comparison reports (vs previous audit)
  - Format reports for different audiences (developer, marketing, executive)
+ - Generate **HTML reports** via `scripts/seo_report.py` from audit JSON
+
+ ### Role: Benchmarker
+ **Trigger**: `/seo competitor`, competitive analysis, benchmark
+ - Run full audit on **two sites simultaneously** (Scout + Judge on each)
+ - Compare scores side-by-side with delta indicators:
+
+ | Metric | My Site | Competitor | Delta |
+ |--------|---------|------------|-------|
+ | SEO Score | 72 | 85 | -13 🔴 |
+ | E-E-A-T | 65 | 78 | -13 🔴 |
+ | Schema types | 3 | 7 | -4 🟠 |
+ | GEO/AI Score | 55 | 70 | -15 🔴 |
+ | PageSpeed | 92 | 88 | +4 🟢 |
+
+ - Identify **competitive gaps** (where rival is better)
+ - Identify **competitive advantages** (where we're better)
+ - Generate actionable plan: "To match competitor, prioritize: ..."
+ - Output: Markdown comparison report + optional HTML via `seo_report.py`
 
  ---
 
@@ -0,0 +1,153 @@
+ # Hreflang — Audit Rules & Best Practices (March 2026)
+
+ > Author: Laurent Rochetta | BMAD+ SEO Engine v2.0
+
+ ## What is Hreflang?
+
+ `hreflang` tells search engines which language/region version of a page to serve.
+ Errors cause wrong-language indexing, duplicate-content problems, and lost organic traffic.
+
+ ---
+
+ ## Implementation Methods
+
+ | Method | Best For | Max Pages |
+ |--------|----------|-----------|
+ | `<link>` in `<head>` | Small sites (<50 pages) | ~50 |
+ | HTTP header `Link:` | Non-HTML files (PDFs) | ~50 |
+ | Sitemap `<xhtml:link>` | Large sites (50+) | Unlimited |
+
+ > **Recommendation**: Use sitemap for sites with 50+ pages. HTTP headers for non-HTML resources.
+
+ ---
+
+ ## Validation Rules
+
+ ### Rule 1: Valid Language Codes
+ - Use **ISO 639-1** (2-letter): `en`, `fr`, `de`, `es`, `ja`, `zh`
+ - Optional **ISO 15924** script code (4-letter): `zh-Hans`, `zh-Hant`
+ - Optional **ISO 3166-1 Alpha-2** for region: `en-US`, `en-GB`, `fr-FR`, `fr-CA`, `pt-BR`
+ - **Case insensitive**, but convention is lowercase language, uppercase country
+
+ | ✅ Valid | ❌ Invalid | Why |
+ |---------|-----------|-----|
+ | `en` | `english` | Must be ISO 639-1 |
+ | `fr-FR` | `fr-FRA` | Country must be 2-letter |
+ | `zh-Hans` | `cn` | `cn` is not a valid language code |
+ | `x-default` | `default` | Must use exact `x-default` |
+
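Rule 1 can be machine-checked with a shape test. A minimal sketch: the regex below validates the conventional-casing shape (language, optional script, optional region, or the literal `x-default`), not membership in the ISO registries, so a well-shaped but nonexistent code like `cn` still needs a lookup against a real code list:

```python
import re

# language (2 lowercase) + optional script (4 letters) + optional region
# (2 uppercase), or the literal "x-default". Shape check only.
HREFLANG_RE = re.compile(r"^(x-default|[a-z]{2}(-[A-Za-z]{4})?(-[A-Z]{2})?)$")

def is_valid_hreflang(code: str) -> bool:
    return bool(HREFLANG_RE.match(code))
```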
+ ### Rule 2: Self-Referencing (MANDATORY)
+ Every page MUST include a hreflang tag pointing to itself.
+
+ ```html
+ <!-- On the English page (example.com/en/) -->
+ <link rel="alternate" hreflang="en" href="https://example.com/en/" />
+ <link rel="alternate" hreflang="fr" href="https://example.com/fr/" />
+ <link rel="alternate" hreflang="x-default" href="https://example.com/en/" />
+ ```
+
+ **Error if missing**: Google may ignore all hreflang tags on that page.
+
+ ### Rule 3: Return Tags (MANDATORY)
+ If page A links to page B with hreflang, page B MUST link back to page A.
+
+ ```
+ Page A (en) → hreflang="fr" → Page B (fr)
+ Page B (fr) → hreflang="en" → Page A (en) ← MANDATORY
+ ```
+
+ **Error if missing**: Called "orphan hreflang" — Google ignores the one-way tag.
+
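Rules 2 and 3 together reduce to a reciprocity check over the crawled hreflang maps. A sketch, assuming the audit has already collected one `{hreflang: href}` dict per crawled page (that input shape is an assumption for illustration):

```python
def find_hreflang_errors(pages: dict) -> list:
    """pages: {page_url: {hreflang_code: href}}. Returns Rule 2/3 violations."""
    errors = []
    for url, links in pages.items():
        # Rule 2: every page must list itself among its alternates
        if url not in links.values():
            errors.append(f"missing self-reference: {url}")
        # Rule 3: every crawled target must link back (else orphan hreflang)
        for lang, href in links.items():
            if href in pages and url not in pages[href].values():
                errors.append(f"missing return tag: {href} -> {url}")
    return errors
```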
+ ### Rule 4: x-default (STRONGLY RECOMMENDED)
+ Designate a fallback page for users whose language/region doesn't match any variant.
+
+ ```html
+ <link rel="alternate" hreflang="x-default" href="https://example.com/" />
+ ```
+
+ Common choices for x-default:
+ - Language selector/redirect page
+ - English version (most common)
+ - Homepage of the main domain
+
+ ### Rule 5: Canonical + Hreflang Consistency
+ - Each hreflang URL **must be the canonical version** (not a redirect, not a URL with parameters)
+ - If a page has `rel="canonical"` pointing elsewhere, hreflang tags on that page are **ignored**
+ - Canonical and hreflang must agree: don't have hreflang point to a URL that canonicalizes to a different URL
+
+ ### Rule 6: Absolute URLs Only
+ ```html
+ <!-- ✅ Correct -->
+ <link rel="alternate" hreflang="fr" href="https://example.com/fr/page" />
+
+ <!-- ❌ Wrong -->
+ <link rel="alternate" hreflang="fr" href="/fr/page" />
+ ```
+
+ ### Rule 7: No Hreflang on Non-200 Pages
+ - Don't include hreflang tags on pages that return 3xx, 4xx, or 5xx
+ - Don't point hreflang to pages that redirect
+
+ ---
+
+ ## Common Error Patterns
+
+ ### Error 1: Missing Return Tags
+ **Symptom**: hreflang is configured on the main language but not on alternate versions.
+ **Fix**: Add reciprocal hreflang tags on ALL language variants.
+
+ ### Error 2: Wrong Canonical + Hreflang
+ **Symptom**: Page A hreflang → Page B, but Page B canonical → Page C.
+ **Fix**: Align canonical and hreflang targets.
+
+ ### Error 3: Missing Self-Reference
+ **Symptom**: Page lists other language versions but not itself.
+ **Fix**: Add `hreflang` tag with the current page's own language/URL.
+
+ ### Error 4: Inconsistent URLs
+ **Symptom**: hreflang uses `http://` but site is on `https://`, or trailing slash mismatch.
+ **Fix**: Use exact canonical URL (protocol, www/non-www, trailing slash).
+
+ ### Error 5: Language vs Region Confusion
+ **Symptom**: Using `hreflang="fr"` for France and `hreflang="fr"` for Canada.
+ **Fix**: Use `hreflang="fr-FR"` and `hreflang="fr-CA"` to differentiate.
+
+ ### Error 6: Missing x-default
+ **Symptom**: Users in unsupported regions see a random language version.
+ **Fix**: Add `x-default` pointing to language selector or English version.
+
+ ---
+
+ ## Sitemap Implementation (Recommended for 50+ pages)
+
+ ```xml
+ <?xml version="1.0" encoding="UTF-8"?>
+ <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
+         xmlns:xhtml="http://www.w3.org/1999/xhtml">
+   <url>
+     <loc>https://example.com/en/page</loc>
+     <xhtml:link rel="alternate" hreflang="en" href="https://example.com/en/page"/>
+     <xhtml:link rel="alternate" hreflang="fr" href="https://example.com/fr/page"/>
+     <xhtml:link rel="alternate" hreflang="de" href="https://example.com/de/page"/>
+     <xhtml:link rel="alternate" hreflang="x-default" href="https://example.com/en/page"/>
+   </url>
+ </urlset>
+ ```
+
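Generating this pattern by hand is error-prone at scale, since every alternate URL needs its own `<url>` entry repeating the full alternate set. A sketch using the standard library (the input shape, one `{hreflang: absolute_url}` dict per page group, is an assumption for illustration):

```python
import xml.etree.ElementTree as ET

SITEMAP = "http://www.sitemaps.org/schemas/sitemap/0.9"
XHTML = "http://www.w3.org/1999/xhtml"

def build_hreflang_sitemap(pages: list) -> str:
    """pages: list of {hreflang: absolute_url} dicts, one per page group."""
    ET.register_namespace("", SITEMAP)
    ET.register_namespace("xhtml", XHTML)
    urlset = ET.Element(f"{{{SITEMAP}}}urlset")
    for alternates in pages:
        # one <url> per unique location (x-default usually repeats a URL)
        for loc_url in dict.fromkeys(alternates.values()):
            url = ET.SubElement(urlset, f"{{{SITEMAP}}}url")
            ET.SubElement(url, f"{{{SITEMAP}}}loc").text = loc_url
            # every entry repeats the full alternate set, including itself
            for lang, href in alternates.items():
                link = ET.SubElement(url, f"{{{XHTML}}}link")
                link.set("rel", "alternate")
                link.set("hreflang", lang)
                link.set("href", href)
    return ET.tostring(urlset, encoding="unicode")
```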
+ ---
+
+ ## Audit Checklist
+
+ | # | Check | Priority |
+ |---|-------|----------|
+ | 1 | All hreflang language codes are valid ISO 639-1 | 🔴 Critical |
+ | 2 | All hreflang country codes are valid ISO 3166-1 | 🔴 Critical |
+ | 3 | Every page has self-referencing hreflang | 🔴 Critical |
+ | 4 | All hreflang tags have return tags | 🔴 Critical |
+ | 5 | All hreflang URLs are absolute | 🔴 Critical |
+ | 6 | x-default is specified | 🟠 High |
+ | 7 | Hreflang URLs match canonical URLs | 🟠 High |
+ | 8 | No hreflang on non-200 pages | 🟠 High |
+ | 9 | No hreflang pointing to redirecting URLs | 🟠 High |
+ | 10 | Consistent protocol (https) and www/non-www | 🟡 Medium |
+ | 11 | Language/region differentiation correct | 🟡 Medium |
+ | 12 | Sitemap implementation for 50+ pages | 🟡 Medium |
@@ -0,0 +1,14 @@
+ # BMAD+ SEO Engine — Python Dependencies
+ # Install: pip install -r requirements.txt
+ # Author: Laurent Rochetta
+
+ # Core (required)
+ requests>=2.31.0
+ beautifulsoup4>=4.12.0
+
+ # Fast HTML parser (recommended)
+ lxml>=5.0.0
+
+ # Screenshot capture (optional — only for seo_screenshot.py)
+ # Uncomment and run: playwright install chromium
+ # playwright>=1.40.0
@@ -0,0 +1,53 @@
+ # BMAD+ SEO Engine — Dependency Installer (Windows)
+ # Author: Laurent Rochetta
+
+ $ErrorActionPreference = "Stop"
+
+ $ScriptDir = Split-Path -Parent $MyInvocation.MyCommand.Path
+ $ParentDir = Split-Path -Parent $ScriptDir
+
+ Write-Host "🔧 BMAD+ SEO Engine — Installing dependencies..." -ForegroundColor Cyan
+ Write-Host ""
+
+ # Check Python (a command name with arguments, like "py -3", cannot be
+ # invoked via the call operator, so use the bare launcher name)
+ $Python = $null
+ if (Get-Command python -ErrorAction SilentlyContinue) {
+     $Python = "python"
+ } elseif (Get-Command py -ErrorAction SilentlyContinue) {
+     $Python = "py"
+ } else {
+     Write-Host "❌ Python not found. Please install Python 3.10+" -ForegroundColor Red
+     exit 1
+ }
+
+ Write-Host "Using: $(& $Python --version)"
+
+ # Create venv if it does not exist
+ $VenvPath = Join-Path $ParentDir ".venv"
+ if (-not (Test-Path $VenvPath)) {
+     Write-Host "📦 Creating virtual environment..."
+     & $Python -m venv $VenvPath
+ }
+
+ # Activate venv (Activate.ps1 must be dot-sourced to affect this session)
+ $ActivateScript = Join-Path $VenvPath "Scripts\Activate.ps1"
+ if (Test-Path $ActivateScript) {
+     . $ActivateScript
+ }
+
+ # Install dependencies with the venv's own interpreter
+ Write-Host "📥 Installing core dependencies..."
+ $VenvPython = Join-Path $VenvPath "Scripts\python.exe"
+ $RequirementsPath = Join-Path $ParentDir "requirements.txt"
+ & $VenvPython -m pip install --quiet -r $RequirementsPath
+
+ Write-Host ""
+ Write-Host "✅ Core dependencies installed!" -ForegroundColor Green
+ Write-Host ""
+ Write-Host "Optional: To enable screenshots (seo_screenshot.py):"
+ Write-Host "  pip install playwright; playwright install chromium"
+ Write-Host ""
+ Write-Host "Set your Google API key for live data:"
+ Write-Host '  $env:GOOGLE_API_KEY = "your_key_here"'
+ Write-Host "  Get one free: https://console.cloud.google.com/apis/credentials"
+ Write-Host ""
+ Write-Host "🚀 Ready! — BMAD+ SEO Engine by Laurent Rochetta" -ForegroundColor Cyan
@@ -0,0 +1,48 @@
+ #!/bin/bash
+ # BMAD+ SEO Engine — Dependency Installer (Linux/macOS)
+ # Author: Laurent Rochetta
+
+ set -e
+
+ SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
+ PARENT_DIR="$(dirname "$SCRIPT_DIR")"
+
+ echo "🔧 BMAD+ SEO Engine — Installing dependencies..."
+ echo ""
+
+ # Check Python
+ if command -v python3 &>/dev/null; then
+     PYTHON=python3
+ elif command -v python &>/dev/null; then
+     PYTHON=python
+ else
+     echo "❌ Python not found. Please install Python 3.10+"
+     exit 1
+ fi
+
+ echo "Using: $($PYTHON --version)"
+
+ # Create venv if it does not exist
+ if [ ! -d "$PARENT_DIR/.venv" ]; then
+     echo "📦 Creating virtual environment..."
+     $PYTHON -m venv "$PARENT_DIR/.venv"
+ fi
+
+ # Activate venv
+ source "$PARENT_DIR/.venv/bin/activate" 2>/dev/null || true
+
+ # Install core dependencies with the venv's own interpreter, so packages
+ # land in .venv even if activation was skipped
+ echo "📥 Installing core dependencies..."
+ "$PARENT_DIR/.venv/bin/python" -m pip install --quiet -r "$PARENT_DIR/requirements.txt"
+
+ echo ""
+ echo "✅ Core dependencies installed!"
+ echo ""
+ echo "Optional: To enable screenshots (seo_screenshot.py):"
+ echo "  pip install playwright && playwright install chromium"
+ echo ""
+ echo "Set your Google API key for live data:"
+ echo "  export GOOGLE_API_KEY=your_key_here"
+ echo "  Get one free: https://console.cloud.google.com/apis/credentials"
+ echo ""
+ echo "🚀 Ready! — BMAD+ SEO Engine by Laurent Rochetta"
@@ -0,0 +1,464 @@
+ #!/usr/bin/env python3
+ """
+ SEO APIs — Google free API client for live SEO data.
+
+ Connects to:
+ - PageSpeed Insights API v5 (lab scores + audits)
+ - Chrome UX Report (CrUX) API (field CWV data)
+ - Rich Results Test API (schema validation)
+
+ Requires: GOOGLE_API_KEY environment variable (free, no OAuth).
+ Get one at: https://console.cloud.google.com/apis/credentials
+
+ Author: Laurent Rochetta
+ License: MIT
+ """
+
+ import argparse
+ import json
+ import os
+ import sys
+ from typing import Optional
+
+ try:
+     import requests
+ except ImportError:
+     print("Error: requests required. Install: pip install requests", file=sys.stderr)
+     sys.exit(1)
+
+
+ API_KEY = os.environ.get("GOOGLE_API_KEY", "")
+
+ PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
+ CRUX_ENDPOINT = "https://chromeuxreport.googleapis.com/v1/records:queryRecord"
+ RICH_RESULTS_ENDPOINT = "https://searchconsole.googleapis.com/v1/urlTestingTools/mobileFriendlyTest:run"
+
+
+ # ── PageSpeed Insights ─────────────────────────────────────────────
+
+ def run_pagespeed(url: str, strategy: str = "mobile", categories: Optional[list] = None) -> dict:
+     """
+     Run PageSpeed Insights audit.
+
+     Args:
+         url: URL to audit
+         strategy: "mobile" or "desktop"
+         categories: List of categories (PERFORMANCE, ACCESSIBILITY, BEST_PRACTICES, SEO)
+
+     Returns:
+         Structured result with scores, audits, and opportunities
+     """
+     if not API_KEY:
+         return {"error": "GOOGLE_API_KEY not set. Get one at https://console.cloud.google.com/apis/credentials"}
+
+     if categories is None:
+         categories = ["PERFORMANCE", "ACCESSIBILITY", "BEST_PRACTICES", "SEO"]
+
+     # requests encodes a list value as a repeated query parameter
+     # (category=PERFORMANCE&category=SEO&...), which is what the API expects
+     params = {
+         "url": url,
+         "key": API_KEY,
+         "strategy": strategy,
+         "category": categories,
+     }
+
+     try:
+         response = requests.get(PSI_ENDPOINT, params=params, timeout=120)
+         response.raise_for_status()
+         data = response.json()
+     except requests.RequestException as e:
+         return {"error": f"PSI API request failed: {e}"}
+
+     # Extract scores
+     result = {
+         "url": url,
+         "strategy": strategy,
+         "scores": {},
+         "cwv": {},
+         "failing_audits": [],
+         "opportunities": [],
+     }
+
+     # Category scores
+     categories_data = data.get("lighthouseResult", {}).get("categories", {})
+     for cat_id, cat_data in categories_data.items():
+         score = cat_data.get("score") or 0
+         result["scores"][cat_id] = round(score * 100)
+
+     # Core Web Vitals from Lighthouse
+     audits = data.get("lighthouseResult", {}).get("audits", {})
+
+     cwv_metrics = {
+         "largest-contentful-paint": "LCP",
+         "interaction-to-next-paint": "INP",
+         "cumulative-layout-shift": "CLS",
+         "first-contentful-paint": "FCP",
+         "total-blocking-time": "TBT",
+         "speed-index": "SI",
+     }
+
+     for audit_id, label in cwv_metrics.items():
+         if audit_id in audits:
+             audit = audits[audit_id]
+             result["cwv"][label] = {
+                 "value": audit.get("numericValue"),
+                 "display": audit.get("displayValue", ""),
+                 "score": round((audit.get("score") or 0) * 100),
+             }
+
+     # Failing audits (score < 0.9)
+     for audit_id, audit in audits.items():
+         score = audit.get("score")
+         if score is not None and score < 0.9 and audit.get("title"):
+             severity = "critical" if score < 0.5 else "warning"
+             result["failing_audits"].append({
+                 "id": audit_id,
+                 "title": audit.get("title", ""),
+                 "description": audit.get("description", "")[:200],
+                 "score": round(score * 100),
+                 "severity": severity,
+                 "display_value": audit.get("displayValue", ""),
+             })
+
+     # Sort failures by score (worst first)
+     result["failing_audits"].sort(key=lambda x: x["score"])
+
+     # Opportunities (have savings)
+     for audit_id, audit in audits.items():
+         details = audit.get("details", {})
+         if details.get("type") == "opportunity" and details.get("overallSavingsMs", 0) > 0:
+             result["opportunities"].append({
+                 "id": audit_id,
+                 "title": audit.get("title", ""),
+                 "savings_ms": details.get("overallSavingsMs", 0),
+                 "savings_bytes": details.get("overallSavingsBytes", 0),
+             })
+
+     result["opportunities"].sort(key=lambda x: x["savings_ms"], reverse=True)
+
+     # Field data (CrUX from PSI response)
+     loading_exp = data.get("loadingExperience", {})
+     if loading_exp.get("metrics"):
+         result["field_data"] = {}
+         for metric_id, metric_data in loading_exp["metrics"].items():
+             result["field_data"][metric_id] = {
+                 "percentile": metric_data.get("percentile"),
+                 "category": metric_data.get("category"),
+             }
+
+     return result
+
+
+ # ── CrUX API ───────────────────────────────────────────────────────
+
+ def run_crux(url: str, form_factor: str = "PHONE") -> dict:
+     """
+     Query Chrome UX Report for real-world performance data.
+
+     Args:
+         url: URL or origin to query
+         form_factor: PHONE, DESKTOP, or ALL_FORM_FACTORS
+
+     Returns:
+         Field CWV data at 75th percentile
+     """
+     if not API_KEY:
+         return {"error": "GOOGLE_API_KEY not set"}
+
+     # Try URL-level first, fall back to origin
+     from urllib.parse import urlparse
+     parsed = urlparse(url)
+     origin = f"{parsed.scheme}://{parsed.netloc}"
+
+     payload = {
+         "url": url,
+         "formFactor": form_factor,
+     }
+
+     try:
+         response = requests.post(
+             f"{CRUX_ENDPOINT}?key={API_KEY}",
+             json=payload,
+             timeout=30,
+         )
+
+         if response.status_code == 404:
+             # No URL-level data, try origin
+             payload = {"origin": origin, "formFactor": form_factor}
+             response = requests.post(
+                 f"{CRUX_ENDPOINT}?key={API_KEY}",
+                 json=payload,
+                 timeout=30,
+             )
+
+         if response.status_code == 404:
+             return {"error": f"No CrUX data available for {url} (not enough traffic)"}
+
+         response.raise_for_status()
+         data = response.json()
+     except requests.RequestException as e:
+         return {"error": f"CrUX API request failed: {e}"}
+
+     result = {
+         "url": url,
+         "form_factor": form_factor,
+         "metrics": {},
+         "collection_period": {},
+     }
+
+     # Extract metrics
+     metrics = data.get("record", {}).get("metrics", {})
+
+     metric_map = {
+         "largest_contentful_paint": "LCP",
+         "interaction_to_next_paint": "INP",
+         "cumulative_layout_shift": "CLS",
+         "first_contentful_paint": "FCP",
+         "experimental_time_to_first_byte": "TTFB",
+     }
+
+     for api_name, label in metric_map.items():
+         if api_name in metrics:
+             m = metrics[api_name]
+             p75 = m.get("percentiles", {}).get("p75")
+             histogram = m.get("histogram", [])
+
+             # CrUX returns up to three histogram bins in a fixed order:
+             # good, needs improvement, poor
+             distribution = {}
+             for bin_label, bucket in zip(
+                 ("good", "needs_improvement", "poor"), histogram
+             ):
+                 distribution[bin_label] = bucket.get("density", 0)
+
+             result["metrics"][label] = {
+                 "p75": p75,
+                 "distribution": distribution,
+             }
+
+     # Collection period
+     period = data.get("record", {}).get("collectionPeriod", {})
+     result["collection_period"] = {
+         "first_date": period.get("firstDate", {}),
+         "last_date": period.get("lastDate", {}),
+     }
+
+     return result
+
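The p75 values `run_crux` returns can be read against the standard Core Web Vitals boundaries (Google's published good/poor thresholds: LCP 2.5s/4s, INP 200ms/500ms, CLS 0.1/0.25). A small sketch, separate from the script itself:

```python
# (good_max, poor_min) per metric; values between the two are
# "needs improvement", matching the CrUX histogram bins.
THRESHOLDS = {
    "LCP": (2500, 4000),  # milliseconds
    "INP": (200, 500),    # milliseconds
    "CLS": (0.1, 0.25),   # unitless
}

def classify_p75(metric: str, p75: float) -> str:
    good_max, poor_min = THRESHOLDS[metric]
    if p75 <= good_max:
        return "good"
    if p75 < poor_min:
        return "needs_improvement"
    return "poor"
```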
+
+ # ── Rich Results Test ──────────────────────────────────────────────
+
+ def run_rich_results_test(url: str) -> dict:
+     """
+     Check if a URL is eligible for rich results.
+
+     Note: This uses the URL Testing Tools API (Mobile-Friendly Test)
+     which also returns rich results information. The dedicated Rich Results
+     Test API requires OAuth2, so we use this free alternative.
+
+     Args:
+         url: URL to test
+
+     Returns:
+         Mobile-friendly status and detected structured data
+     """
+     if not API_KEY:
+         return {"error": "GOOGLE_API_KEY not set"}
+
+     payload = {"url": url}
+
+     try:
+         response = requests.post(
+             f"{RICH_RESULTS_ENDPOINT}?key={API_KEY}",
+             json=payload,
+             timeout=60,
+         )
+         response.raise_for_status()
+         data = response.json()
+     except requests.RequestException as e:
+         return {"error": f"URL Testing API request failed: {e}"}
+
+     result = {
+         "url": url,
+         "mobile_friendly": data.get("mobileFriendliness") == "MOBILE_FRIENDLY",
+         "issues": [],
+     }
+
+     for issue in data.get("mobileFriendlyIssues", []):
+         result["issues"].append({
+             "rule": issue.get("rule", ""),
+         })
+
+     return result
+
+
+ # ── Unified Runner ─────────────────────────────────────────────────
+
+ def run_all(url: str) -> dict:
+     """Run all available API checks and merge results."""
+     print("Running PageSpeed Insights (mobile)...", file=sys.stderr)
+     psi_mobile = run_pagespeed(url, strategy="mobile")
+
+     print("Running PageSpeed Insights (desktop)...", file=sys.stderr)
+     psi_desktop = run_pagespeed(url, strategy="desktop")
+
+     print("Running CrUX API...", file=sys.stderr)
+     crux = run_crux(url)
+
+     print("Running Mobile-Friendly Test...", file=sys.stderr)
+     rich = run_rich_results_test(url)
+
+     return {
+         "url": url,
+         "timestamp": __import__("datetime").datetime.utcnow().isoformat() + "Z",
+         "pagespeed": {
+             "mobile": psi_mobile,
+             "desktop": psi_desktop,
+         },
+         "crux": crux,
+         "mobile_friendly": rich,
+     }
+
+
+ # ── CLI ────────────────────────────────────────────────────────────
+
+ def print_psi_summary(result: dict, label: str):
+     """Print a human-readable PSI summary."""
+     if result.get("error"):
+         print(f"  Error: {result['error']}")
+         return
+
+     scores = result.get("scores", {})
+     print(f"\n  {label} Scores:")
+     for cat, score in scores.items():
+         icon = "🟢" if score >= 90 else "🟡" if score >= 50 else "🔴"
+         cat_name = cat.replace("_", " ").replace("-", " ").title()
+         print(f"    {icon} {cat_name}: {score}/100")
+
+     cwv = result.get("cwv", {})
+     if cwv:
+         print("\n  Core Web Vitals:")
+         for metric, data in cwv.items():
+             icon = "🟢" if data["score"] >= 90 else "🟡" if data["score"] >= 50 else "🔴"
+             print(f"    {icon} {metric}: {data['display']} ({data['score']}/100)")
+
+     failures = result.get("failing_audits", [])[:5]
+     if failures:
+         print("\n  Top Failing Audits:")
+         for audit in failures:
+             icon = "🔴" if audit["severity"] == "critical" else "🟠"
+             print(f"    {icon} {audit['title']} ({audit['score']}/100)")
+
+     opps = result.get("opportunities", [])[:3]
+     if opps:
+         print("\n  Top Opportunities:")
+         for opp in opps:
+             print(f"    💡 {opp['title']} (save {opp['savings_ms']}ms)")
+
+
+ def main():
+     parser = argparse.ArgumentParser(
+         description="SEO APIs — Google free API client (BMAD+ SEO Engine)"
+     )
+     parser.add_argument("url", nargs="?", help="URL to analyze")
+     parser.add_argument("--pagespeed", action="store_true", help="Run PageSpeed Insights")
+     parser.add_argument("--crux", action="store_true", help="Run CrUX API")
+     parser.add_argument("--richtest", action="store_true", help="Run Rich Results Test")
+     parser.add_argument("--all", action="store_true", help="Run all APIs")
+     parser.add_argument("--strategy", choices=["mobile", "desktop"], default="mobile",
+                         help="PSI strategy (default: mobile)")
+     parser.add_argument("--json", "-j", action="store_true", help="Output as JSON")
+
+     args = parser.parse_args()
+
+     if not args.url:
+         parser.print_help()
+         sys.exit(1)
+
+     if not API_KEY:
+         print("⚠️  GOOGLE_API_KEY not set!", file=sys.stderr)
+         print("   Get a free key: https://console.cloud.google.com/apis/credentials", file=sys.stderr)
+         print("   Enable: PageSpeed Insights API + Chrome UX Report API", file=sys.stderr)
+         print("   Set: export GOOGLE_API_KEY=your_key", file=sys.stderr)
+         sys.exit(1)
+
+     if args.all:
+         result = run_all(args.url)
+         if args.json:
+             print(json.dumps(result, indent=2, ensure_ascii=False))
+         else:
+             print(f"\n{'='*60}")
+             print(f"SEO API Report: {args.url}")
+             print(f"{'='*60}")
+             print_psi_summary(result["pagespeed"]["mobile"], "📱 Mobile")
+             print_psi_summary(result["pagespeed"]["desktop"], "🖥️ Desktop")
+
+             crux = result["crux"]
+             if not crux.get("error"):
+                 print("\n  📊 CrUX Field Data:")
+                 for metric, data in crux.get("metrics", {}).items():
+                     print(f"    {metric}: p75 = {data['p75']}")
+             else:
+                 print(f"\n  📊 CrUX: {crux['error']}")
+
+             mf = result["mobile_friendly"]
415
+ if not mf.get("error"):
416
+ icon = "✅" if mf["mobile_friendly"] else "❌"
417
+ print(f"\n 📱 Mobile-Friendly: {icon}")
418
+ else:
419
+ print(f"\n 📱 Mobile-Friendly: {mf['error']}")
420
+
421
+ elif args.pagespeed:
422
+ result = run_pagespeed(args.url, strategy=args.strategy)
423
+ if args.json:
424
+ print(json.dumps(result, indent=2, ensure_ascii=False))
425
+ else:
426
+ print_psi_summary(result, f"{'📱 Mobile' if args.strategy == 'mobile' else '🖥️ Desktop'}")
427
+
428
+ elif args.crux:
429
+ result = run_crux(args.url)
430
+ if args.json:
431
+ print(json.dumps(result, indent=2, ensure_ascii=False))
432
+ else:
433
+ if result.get("error"):
434
+ print(f"Error: {result['error']}")
435
+ else:
436
+ print(f"\nCrUX Field Data: {args.url}")
437
+ for metric, data in result.get("metrics", {}).items():
438
+ print(f" {metric}: p75 = {data['p75']}")
439
+
440
+ elif args.richtest:
441
+ result = run_rich_results_test(args.url)
442
+ if args.json:
443
+ print(json.dumps(result, indent=2, ensure_ascii=False))
444
+ else:
445
+ if result.get("error"):
446
+ print(f"Error: {result['error']}")
447
+ else:
448
+ icon = "✅" if result["mobile_friendly"] else "❌"
449
+ print(f"Mobile-Friendly: {icon}")
450
+ if result["issues"]:
451
+ for issue in result["issues"]:
452
+ print(f" ⚠️ {issue['rule']}")
453
+
454
+ else:
455
+ # Default: run pagespeed
456
+ result = run_pagespeed(args.url, strategy=args.strategy)
457
+ if args.json:
458
+ print(json.dumps(result, indent=2, ensure_ascii=False))
459
+ else:
460
+ print_psi_summary(result, f"{'📱 Mobile' if args.strategy == 'mobile' else '🖥️ Desktop'}")
461
+
462
+
463
+ if __name__ == "__main__":
464
+ main()
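For reference, the `--all`/`--json` path simply dumps the dictionary assembled by `run_all()`. The sketch below builds that payload by hand (no network or API key needed) so downstream tooling can be exercised offline; the top-level keys mirror the ones `run_all()` assembles above, while the leaf values are purely illustrative.

```python
import json

# Hand-built stub mirroring run_all()'s merged payload shape.
# Leaf values are illustrative, not real API responses.
stub = {
    "url": "https://example.com",
    "timestamp": "2026-03-19T00:00:00Z",
    "pagespeed": {
        "mobile": {"scores": {"performance": 92}, "cwv": {}, "failing_audits": [], "opportunities": []},
        "desktop": {"scores": {"performance": 97}, "cwv": {}, "failing_audits": [], "opportunities": []},
    },
    "crux": {"error": "No field data for this origin"},
    "mobile_friendly": {"mobile_friendly": True, "issues": []},
}

# The --json path dumps this structure, so it must round-trip cleanly.
encoded = json.dumps(stub, indent=2, ensure_ascii=False)
decoded = json.loads(encoded)
print(decoded["pagespeed"]["mobile"]["scores"]["performance"])
```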
@@ -0,0 +1,403 @@
+ #!/usr/bin/env python3
+ """
+ SEO Report — Professional HTML audit report generator.
+
+ Features:
+ - Single-file HTML with inline CSS (no external deps)
+ - SVG radar chart for score visualization
+ - Color-coded issue cards (Critical/High/Medium/Low)
+ - Quick Wins section
+ - Print-friendly (@media print)
+ - Responsive (mobile-readable)
+
+ Author: Laurent Rochetta
+ License: MIT
+ """
+
+ import argparse
+ import json
+ import math
+ import os
+ import sys
+ from datetime import datetime
+
+
+ def generate_radar_svg(scores: dict, size: int = 300) -> str:
+     """Generate an SVG radar chart for the score categories."""
+     categories = list(scores.keys())
+     values = list(scores.values())
+     n = len(categories)
+
+     if n == 0:
+         return ""
+
+     cx, cy = size // 2, size // 2
+     radius = size // 2 - 40
+
+     # Short labels for display
+     short_labels = {
+         "technical": "Tech",
+         "content_eeat": "E-E-A-T",
+         "on_page": "On-Page",
+         "schema": "Schema",
+         "performance": "Perf",
+         "ai_readiness": "AI/GEO",
+         "images": "Images",
+     }
+
+     def point(angle_deg, r):
+         # Rotate -90° so the first axis points straight up
+         angle_rad = math.radians(angle_deg - 90)
+         x = cx + r * math.cos(angle_rad)
+         y = cy + r * math.sin(angle_rad)
+         return x, y
+
+     svg_parts = [f'<svg viewBox="0 0 {size} {size}" xmlns="http://www.w3.org/2000/svg" style="max-width:{size}px;margin:auto;display:block;">']
+
+     # Background circles
+     for pct in [25, 50, 75, 100]:
+         r = radius * pct / 100
+         svg_parts.append(f'<circle cx="{cx}" cy="{cy}" r="{r}" fill="none" stroke="#e2e8f0" stroke-width="1" opacity="0.5"/>')
+
+     # Axis lines + labels
+     for i in range(n):
+         angle = (360 / n) * i
+         x2, y2 = point(angle, radius)
+         svg_parts.append(f'<line x1="{cx}" y1="{cy}" x2="{x2}" y2="{y2}" stroke="#e2e8f0" stroke-width="1"/>')
+
+         lx, ly = point(angle, radius + 20)
+         label = short_labels.get(categories[i], categories[i][:6])
+         svg_parts.append(f'<text x="{lx}" y="{ly}" text-anchor="middle" font-size="11" fill="#64748b" font-family="Inter,sans-serif">{label}</text>')
+
+     # Data polygon
+     data_points = []
+     for i in range(n):
+         angle = (360 / n) * i
+         r = radius * min(values[i], 100) / 100
+         x, y = point(angle, r)
+         data_points.append(f"{x},{y}")
+
+     poly = " ".join(data_points)
+     svg_parts.append(f'<polygon points="{poly}" fill="rgba(59,130,246,0.2)" stroke="#3b82f6" stroke-width="2"/>')
+
+     # Data points
+     for i in range(n):
+         angle = (360 / n) * i
+         r = radius * min(values[i], 100) / 100
+         x, y = point(angle, r)
+         color = "#22c55e" if values[i] >= 80 else "#f59e0b" if values[i] >= 50 else "#ef4444"
+         svg_parts.append(f'<circle cx="{x}" cy="{y}" r="4" fill="{color}" stroke="white" stroke-width="2"/>')
+
+     svg_parts.append('</svg>')
+     return "\n".join(svg_parts)
+
+
+ def severity_color(severity: str) -> str:
+     """Map a severity level to its display color."""
+     return {
+         "critical": "#ef4444",
+         "high": "#f97316",
+         "medium": "#f59e0b",
+         "low": "#22c55e",
+     }.get(severity, "#64748b")
+
+
+ def severity_icon(severity: str) -> str:
+     """Map a severity level to its display icon."""
+     return {
+         "critical": "🔴",
+         "high": "🟠",
+         "medium": "🟡",
+         "low": "🟢",
+     }.get(severity, "⚪")
+
+
+ def score_color(score: int) -> str:
+     """Map a 0-100 score to a traffic-light color."""
+     if score >= 90:
+         return "#22c55e"
+     elif score >= 70:
+         return "#84cc16"
+     elif score >= 50:
+         return "#f59e0b"
+     else:
+         return "#ef4444"
+
+
+ def generate_html_report(audit_data: dict) -> str:
+     """Generate a complete HTML report from audit JSON data."""
+     # Escape user-supplied strings before interpolating them into HTML
+     from html import escape as esc
+
+     domain = audit_data.get("domain", "Unknown")
+     timestamp = audit_data.get("timestamp", datetime.now().isoformat())
+     total_score = audit_data.get("score", {}).get("total", 0)
+     categories = audit_data.get("score", {}).get("categories", {})
+     issues = audit_data.get("issues", [])
+     pages = audit_data.get("pages", [])
+
+     # Generate radar chart
+     radar_svg = generate_radar_svg(categories) if categories else ""
+
+     # Sort issues by severity
+     severity_order = {"critical": 0, "high": 1, "medium": 2, "low": 3}
+     sorted_issues = sorted(issues, key=lambda x: severity_order.get(x.get("severity", "low"), 4))
+
+     # Count by severity
+     counts = {"critical": 0, "high": 0, "medium": 0, "low": 0}
+     for issue in issues:
+         sev = issue.get("severity", "low")
+         counts[sev] = counts.get(sev, 0) + 1
+
+     # Quick wins
+     quick_wins = [i for i in issues if i.get("quick_win", False)][:5]
+
+     # Build issue cards HTML
+     issue_cards = ""
+     for issue in sorted_issues:
+         sev = issue.get("severity", "low")
+         fix_html = ""
+         if issue.get("fix"):
+             fix_html = f'<div class="fix-block"><strong>Fix:</strong><pre><code>{esc(issue["fix"])}</code></pre></div>'
+
+         issue_cards += f'''
+         <div class="issue-card" style="border-left: 4px solid {severity_color(sev)}">
+             <div class="issue-header">
+                 <span class="severity-badge" style="background:{severity_color(sev)}">{sev.upper()}</span>
+                 <span class="issue-category">{esc(issue.get("category", ""))}</span>
+             </div>
+             <h4>{esc(issue.get("title", ""))}</h4>
+             <p>{esc(issue.get("description", ""))}</p>
+             {fix_html}
+         </div>'''
+
+     # Quick wins HTML
+     qw_html = ""
+     if quick_wins:
+         qw_items = ""
+         for qw in quick_wins:
+             qw_items += f'<li>{severity_icon(qw.get("severity", ""))} {esc(qw.get("title", ""))}</li>'
+         qw_html = f'<div class="quick-wins"><h3>⚡ Quick Wins</h3><ul>{qw_items}</ul></div>'
+
+     # Category scores table
+     cat_rows = ""
+     for cat, score in categories.items():
+         cat_name = cat.replace("_", " ").title()
+         cat_rows += f'''
+         <tr>
+             <td>{cat_name}</td>
+             <td>
+                 <div class="score-bar-bg">
+                     <div class="score-bar" style="width:{score}%;background:{score_color(score)}"></div>
+                 </div>
+             </td>
+             <td style="color:{score_color(score)};font-weight:700">{score}</td>
+         </tr>'''
+
+     html = f'''<!DOCTYPE html>
+ <html lang="en">
+ <head>
+ <meta charset="UTF-8">
+ <meta name="viewport" content="width=device-width, initial-scale=1.0">
+ <title>SEO Audit Report — {esc(domain)}</title>
+ <style>
+ @import url('https://fonts.googleapis.com/css2?family=Inter:wght@400;500;600;700&display=swap');
+
+ * {{ margin: 0; padding: 0; box-sizing: border-box; }}
+ body {{
+   font-family: 'Inter', -apple-system, sans-serif;
+   background: #f8fafc;
+   color: #1e293b;
+   line-height: 1.6;
+ }}
+ .container {{ max-width: 900px; margin: 0 auto; padding: 2rem; }}
+
+ /* Header */
+ .header {{
+   background: linear-gradient(135deg, #0f172a 0%, #1e3a5f 100%);
+   color: white;
+   padding: 3rem 2rem;
+   border-radius: 16px;
+   margin-bottom: 2rem;
+   text-align: center;
+ }}
+ .header h1 {{ font-size: 2rem; margin-bottom: 0.5rem; }}
+ .header .domain {{ font-size: 1.2rem; opacity: 0.8; }}
+ .header .date {{ font-size: 0.85rem; opacity: 0.6; margin-top: 0.5rem; }}
+
+ /* Score circle */
+ .score-hero {{
+   display: flex;
+   align-items: center;
+   justify-content: center;
+   gap: 3rem;
+   margin: 2rem 0;
+   flex-wrap: wrap;
+ }}
+ .score-circle {{
+   width: 150px;
+   height: 150px;
+   border-radius: 50%;
+   display: flex;
+   flex-direction: column;
+   align-items: center;
+   justify-content: center;
+   border: 6px solid {score_color(total_score)};
+   background: white;
+   box-shadow: 0 4px 24px rgba(0,0,0,0.08);
+ }}
+ .score-number {{ font-size: 3rem; font-weight: 700; color: {score_color(total_score)}; }}
+ .score-label {{ font-size: 0.75rem; text-transform: uppercase; color: #64748b; letter-spacing: 1px; }}
+
+ /* Summary cards */
+ .summary-grid {{
+   display: grid;
+   grid-template-columns: repeat(4, 1fr);
+   gap: 1rem;
+   margin-bottom: 2rem;
+ }}
+ .summary-card {{
+   background: white;
+   border-radius: 12px;
+   padding: 1.2rem;
+   text-align: center;
+   box-shadow: 0 2px 8px rgba(0,0,0,0.04);
+ }}
+ .summary-card .count {{ font-size: 2rem; font-weight: 700; }}
+ .summary-card .label {{ font-size: 0.8rem; color: #64748b; }}
+
+ /* Sections */
+ .section {{ background: white; border-radius: 12px; padding: 2rem; margin-bottom: 1.5rem; box-shadow: 0 2px 8px rgba(0,0,0,0.04); }}
+ .section h2 {{ margin-bottom: 1rem; font-size: 1.3rem; }}
+ .section h3 {{ margin-bottom: 0.8rem; font-size: 1.1rem; }}
+
+ /* Score bars */
+ table {{ width: 100%; border-collapse: collapse; }}
+ td {{ padding: 0.6rem 0; }}
+ .score-bar-bg {{ width: 100%; height: 8px; background: #e2e8f0; border-radius: 4px; overflow: hidden; margin: 0 1rem; }}
+ .score-bar {{ height: 100%; border-radius: 4px; transition: width 0.5s ease; }}
+
+ /* Issue cards */
+ .issue-card {{
+   border: 1px solid #e2e8f0;
+   border-radius: 8px;
+   padding: 1rem;
+   margin-bottom: 0.8rem;
+ }}
+ .issue-header {{ display: flex; gap: 0.5rem; margin-bottom: 0.3rem; align-items: center; }}
+ .severity-badge {{ color: white; padding: 2px 8px; border-radius: 4px; font-size: 0.7rem; font-weight: 600; }}
+ .issue-category {{ font-size: 0.8rem; color: #64748b; }}
+ .issue-card h4 {{ margin-bottom: 0.3rem; }}
+ .issue-card p {{ color: #475569; font-size: 0.9rem; }}
+ .fix-block {{ background: #f1f5f9; border-radius: 6px; padding: 0.8rem; margin-top: 0.5rem; }}
+ .fix-block pre {{ overflow-x: auto; font-size: 0.8rem; }}
+
+ /* Quick wins */
+ .quick-wins {{ background: #f0fdf4; border: 1px solid #bbf7d0; border-radius: 12px; padding: 1.5rem; margin-bottom: 1.5rem; }}
+ .quick-wins ul {{ list-style: none; padding: 0; }}
+ .quick-wins li {{ padding: 0.3rem 0; }}
+
+ /* Footer */
+ .footer {{ text-align: center; color: #94a3b8; font-size: 0.8rem; padding: 2rem 0; }}
+
+ /* Print */
+ @media print {{
+   body {{ background: white; }}
+   .container {{ max-width: 100%; padding: 0; }}
+   .header {{ break-after: avoid; }}
+   .section {{ break-inside: avoid; box-shadow: none; border: 1px solid #e2e8f0; }}
+ }}
+
+ /* Mobile */
+ @media (max-width: 640px) {{
+   .summary-grid {{ grid-template-columns: repeat(2, 1fr); }}
+   .score-hero {{ flex-direction: column; gap: 1.5rem; }}
+ }}
+ </style>
+ </head>
+ <body>
+ <div class="container">
+   <div class="header">
+     <h1>SEO Audit Report</h1>
+     <div class="domain">{esc(domain)}</div>
+     <div class="date">{timestamp[:10]}</div>
+   </div>
+
+   <div class="score-hero">
+     <div class="score-circle">
+       <div class="score-number">{total_score}</div>
+       <div class="score-label">SEO Score</div>
+     </div>
+     <div>
+       {radar_svg}
+     </div>
+   </div>
+
+   <div class="summary-grid">
+     <div class="summary-card">
+       <div class="count" style="color:#ef4444">{counts["critical"]}</div>
+       <div class="label">Critical</div>
+     </div>
+     <div class="summary-card">
+       <div class="count" style="color:#f97316">{counts["high"]}</div>
+       <div class="label">High</div>
+     </div>
+     <div class="summary-card">
+       <div class="count" style="color:#f59e0b">{counts["medium"]}</div>
+       <div class="label">Medium</div>
+     </div>
+     <div class="summary-card">
+       <div class="count" style="color:#22c55e">{counts["low"]}</div>
+       <div class="label">Low</div>
+     </div>
+   </div>
+
+   {qw_html}
+
+   <div class="section">
+     <h2>📊 Category Scores</h2>
+     <table>{cat_rows}</table>
+   </div>
+
+   <div class="section">
+     <h2>🔍 Issues ({len(issues)})</h2>
+     {issue_cards}
+   </div>
+
+   <div class="footer">
+     Generated by BMAD+ SEO Engine v2.1 — By Laurent Rochetta
+   </div>
+ </div>
+ </body>
+ </html>'''
+
+     return html
+
+
+ # ── CLI ────────────────────────────────────────────────────────────
+
+ def main():
+     parser = argparse.ArgumentParser(
+         description="SEO Report — HTML audit report generator (BMAD+ SEO Engine)"
+     )
+     parser.add_argument("input", help="Audit JSON file")
+     parser.add_argument("--output", "-o", default="seo-report.html", help="Output HTML file")
+
+     args = parser.parse_args()
+
+     if not os.path.isfile(args.input):
+         print(f"Error: File not found: {args.input}", file=sys.stderr)
+         sys.exit(1)
+
+     with open(args.input, "r", encoding="utf-8") as f:
+         audit_data = json.load(f)
+
+     html = generate_html_report(audit_data)
+
+     with open(args.output, "w", encoding="utf-8") as f:
+         f.write(html)
+
+     print(f"✅ Report generated: {args.output}", file=sys.stderr)
+     print(f"   Domain: {audit_data.get('domain', 'Unknown')}")
+     print(f"   Score: {audit_data.get('score', {}).get('total', 0)}/100")
+     print(f"   Issues: {len(audit_data.get('issues', []))}")
+
+
+ if __name__ == "__main__":
+     main()
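The report generator reads a handful of top-level fields from the audit JSON. The sketch below is a minimal input file inferred from the lookups in `generate_html_report()` above (`domain`, `timestamp`, `score.total`, `score.categories`, and `issues` entries with `severity`/`category`/`title`/`description`/`fix`/`quick_win`); the concrete values are illustrative, and the tally at the end replicates the report's own severity counting.

```python
import json
import tempfile

# Minimal audit JSON accepted by seo_report.py, inferred from the fields
# generate_html_report() reads. All values are illustrative.
audit = {
    "domain": "example.com",
    "timestamp": "2026-03-19T12:00:00",
    "score": {
        "total": 74,
        "categories": {"technical": 80, "on_page": 65, "performance": 77},
    },
    "issues": [
        {"severity": "critical", "category": "technical", "title": "Missing canonical",
         "description": "No rel=canonical on key pages.",
         "fix": '<link rel="canonical" href="https://example.com/">',
         "quick_win": True},
        {"severity": "low", "category": "on_page", "title": "Short meta description",
         "description": "Under 70 characters.", "quick_win": False},
    ],
}

# Write the file, then reload it the way main() does.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False, encoding="utf-8") as f:
    json.dump(audit, f)
    path = f.name

with open(path, "r", encoding="utf-8") as f:
    data = json.load(f)

# Replicate the report's severity tally.
counts = {"critical": 0, "high": 0, "medium": 0, "low": 0}
for issue in data["issues"]:
    counts[issue.get("severity", "low")] += 1
print(counts["critical"], counts["low"])  # → 1 1
```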
package/package.json CHANGED
@@ -1,7 +1,7 @@
  {
    "$schema": "https://json.schemastore.org/package.json",
    "name": "bmad-plus",
-   "version": "0.3.0",
+   "version": "0.3.2",
    "description": "BMAD+ — Augmented AI-Driven Development Framework with multi-role agents, autopilot, and parallel execution",
    "keywords": [
      "bmad",