bmad-plus 0.3.0 → 0.3.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -5,6 +5,18 @@ All notable changes to BMAD+ will be documented in this file.
  The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
  and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+ ## [0.3.1] — 2026-03-19
+
+ ### 🔧 SEO Engine Enhancements (Sprint 1)
+
+ ### Added
+ - **SKILL.md orchestrator** — Single entry point routing 15 `/seo` commands to the right agents
+ - **seo_apis.py** — Google APIs client (PageSpeed Insights, CrUX field data, Rich Results Test)
+ - **requirements.txt** — Python dependencies (requests, beautifulsoup4, lxml)
+ - **install.sh + install.ps1** — Cross-platform dependency installer with venv support
+
+ ---
+
  ## [0.3.0] — 2026-03-19
 
  ### 🚀 SEO Engine v2.0 — Complete Rewrite
@@ -0,0 +1,171 @@
+ ---
+ name: seo-engine
+ description: >
+   BMAD+ SEO Engine v2.1 — Complete SEO audit engine with 3 multi-role agents,
+   6-phase workflow, Python toolkit, Google API integration, and PageSpeed
+   perfection loop. Use when user says /seo or any SEO-related command.
+ ---
+
+ # SEO Engine — Orchestrator
+
+ > By Laurent Rochetta | BMAD+ SEO Engine v2.1
+
+ ## Quick Start
+
+ This skill orchestrates 3 specialized agents through a structured workflow.
+ Load the full agent files only when activating that agent's phase.
+
+ ## Command Router
+
+ When the user issues a `/seo` command, route as follows:
+
+ | Command | Agent(s) | Action |
+ |---------|----------|--------|
+ | `/seo full <url>` | Scout → Judge → Chief | Run all 6 phases |
+ | `/seo quick <url>` | Scout → Judge → Chief | Run phases 1–4 only |
+ | `/seo technical <url>` | Scout (Inspector) | Phase 2 technical only |
+ | `/seo content <url>` | Judge (Content Expert) | Phase 2 content only |
+ | `/seo geo <url>` | Judge (GEO Analyst) | Phase 3 only |
+ | `/seo schema <url>` | Judge (Schema Master) | Schema detection + validation |
+ | `/seo images <url>` | Judge (Content Expert) | Image audit subset |
+ | `/seo hreflang <url>` | Scout (Inspector) | Hreflang audit, ref: `ref/hreflang-rules.md` |
+ | `/seo pagespeed <url>` | Scout + Chief | PageSpeed perfection loop |
+ | `/seo plan <type>` | Chief (Strategist) | Strategic plan by industry |
+ | `/seo fix` | Chief (Strategist) | Auto-generate fixes from last audit |
+ | `/seo history` | Chief (Reporter) | Show score history |
+ | `/seo compare` | Chief (Reporter) | Compare with previous audit |
+ | `/seo competitor <url1> <url2>` | Scout + Judge + Chief | Benchmark two sites |
+ | `/seo api <url>` | (script) | Run Google APIs (PSI + CrUX + Rich Results) |
+
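For illustration, the routing table above can be expressed as a small dispatcher. This is a hypothetical sketch, not code shipped in the package; the command names and agent labels mirror the table.

```python
# Illustrative sketch of the /seo command router above.
# ROUTES mirrors a subset of the table; route() is not part of BMAD+.
ROUTES = {
    "full": ("Scout → Judge → Chief", "Run all 6 phases"),
    "quick": ("Scout → Judge → Chief", "Run phases 1–4 only"),
    "technical": ("Scout (Inspector)", "Phase 2 technical only"),
    "content": ("Judge (Content Expert)", "Phase 2 content only"),
    "geo": ("Judge (GEO Analyst)", "Phase 3 only"),
    "pagespeed": ("Scout + Chief", "PageSpeed perfection loop"),
}

def route(command: str) -> dict:
    """Map a '/seo <sub> <args...>' string to agents and action."""
    parts = command.split()
    if len(parts) < 2 or parts[0] != "/seo":
        raise ValueError(f"not a /seo command: {command!r}")
    sub = parts[1]
    if sub not in ROUTES:
        raise ValueError(f"unknown subcommand: {sub}")
    agents, action = ROUTES[sub]
    return {"agents": agents, "action": action, "args": parts[2:]}
```

A real router would dispatch to agent files rather than return labels, but the table-to-dict shape is the same.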
+ ## Full Audit Orchestration (`/seo full`)
+
+ ### Phase 1 — Reconnaissance
+ **Agent**: Scout (Crawler role)
+ **Load**: `agent/seo-scout.md`
+
+ 1. Run `scripts/seo_fetch.py <url> --json` to fetch the homepage
+ 2. Run `scripts/seo_crawl.py <url> --depth 2 --max 25 --json` to discover structure
+ 3. Detect business type from content analysis:
+    - **SaaS**: pricing page, features page, signup CTA
+    - **E-commerce**: product pages, cart, categories
+    - **Local business**: address, phone, map, opening hours
+    - **Publisher**: articles, blog, news, RSS feed
+    - **Agency**: services, portfolio, case studies
+ 4. Check for `/robots.txt`, `/sitemap.xml`, `/llms.txt`
+
+ **Checkpoint**: Report discovery summary, ask "Continue with full audit?"
+
59
+ **Agents**: Scout (Inspector) + Judge (Content Expert + Schema Master)
60
+ **Load**: `agent/seo-scout.md` + `agent/seo-judge.md`
61
+
62
+ Run Scout and Judge **simultaneously** on each discovered page:
63
+
64
+ **Scout checks** (9 categories — see `agent/seo-scout.md`):
65
+ - Crawlability, Indexability, Security, URL Structure, Mobile
66
+ - Core Web Vitals, Structured Data detection, JS Rendering, IndexNow
67
+
68
+ **Judge checks** (see `agent/seo-judge.md`):
69
+ - E-E-A-T evaluation (ref: `ref/eeat-criteria.md`)
70
+ - Content quality (ref: `ref/quality-gates.md`)
71
+ - Schema validation (ref: `ref/schema-catalog.md`)
72
+ - Image audit
73
+ - Internal/external link analysis
74
+
75
+ **Optional**: Run `scripts/seo_apis.py --all <url>` for live PageSpeed + CrUX data.
76
+
77
+ Use `scripts/seo_parse.py <file> --url <url> --json` on fetched HTML.
78
+ Use `scripts/seo_screenshot.py <url> --viewport mobile` for visual audit.
79
+
80
+ ### Phase 3 — AI Readiness & GEO
81
+ **Agent**: Judge (GEO Analyst role)
82
+ **Reference**: `ref/geo-signals.md`
83
+
84
+ - Check AI crawler access (GPTBot, ClaudeBot, PerplexityBot)
85
+ - Verify llms.txt compliance
86
+ - Score passage citability (134–167 word blocks)
87
+ - Compute AI Readiness Score (0–100)
88
+
89
+ ### Phase 4 — Scoring
90
+ **Agent**: Chief (Scorer role)
91
+ **Load**: `agent/seo-chief.md`
92
+
93
+ Compute SEO Health Score (0–100):
94
+
95
+ | Category | Weight |
96
+ |----------|--------|
97
+ | Technical SEO | 20% |
98
+ | Content & E-E-A-T | 22% |
99
+ | On-Page SEO | 18% |
100
+ | Schema | 10% |
101
+ | Performance (CWV) | 12% |
102
+ | AI Readiness (GEO) | 12% |
103
+ | Images | 6% |
104
+
105
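The weighted aggregation can be written out directly. The weights come from the table above (they sum to 100%); the category keys and function are illustrative, not the package's actual scorer.

```python
# Weighted SEO Health Score from the Phase 4 table.
# Weights sum to 1.0; each category score is on a 0–100 scale.
WEIGHTS = {
    "technical": 0.20,
    "content_eeat": 0.22,
    "onpage": 0.18,
    "schema": 0.10,
    "performance": 0.12,
    "ai_readiness": 0.12,
    "images": 0.06,
}

def health_score(scores: dict) -> int:
    """Aggregate per-category 0–100 scores into one 0–100 health score."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
    # Missing categories default to 0, pulling the overall score down.
    return round(sum(WEIGHTS[k] * scores.get(k, 0) for k in WEIGHTS))
```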
+ ### Phase 5 — Action Plan
+ **Agent**: Chief (Strategist role)
+
+ 1. Classify issues: 🔴 Critical → 🟠 High → 🟡 Medium → 🟢 Low
+ 2. Identify quick wins (highest impact/effort ratio)
+ 3. Generate a 30/60/90-day roadmap
+ 4. Auto-generate code fixes (meta tags, schema JSON-LD, robots.txt, llms.txt)
+
+ **Checkpoint**: "Here's the plan. Apply fixes automatically?"
+
+ ### Phase 5b — PageSpeed Perfection Loop
+ **Agents**: Scout + Chief
+ **Reference**: `pagespeed-playbook.md` + `checklist.md`
+
+ Use `scripts/seo_apis.py --pagespeed <url>` for live scores.
+ Loop: fix one issue → re-test → verify improvement → next issue.
+ Target: 100% in all 4 categories (Performance, Accessibility, Best Practices, SEO).
+
+ ### Phase 6 — Monitoring (optional)
+ **Agent**: Scout (Crawler role)
+
+ Save results to `.bmad-seo/history/<domain>-<date>.json`.
+ On re-audit: compare with the previous run and show deltas.
+
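Phase 6's save-and-compare step might look like this. The history path follows the convention stated above; the helper names are hypothetical, not part of the package.

```python
# Illustrative Phase 6 monitoring helpers: persist an audit to
# .bmad-seo/history/<domain>-<date>.json and diff against the previous run.
import json
from datetime import date
from pathlib import Path

def save_audit(root: Path, domain: str, scores: dict) -> Path:
    """Write today's scores to the history folder and return the path."""
    history = root / ".bmad-seo" / "history"
    history.mkdir(parents=True, exist_ok=True)
    path = history / f"{domain}-{date.today().isoformat()}.json"
    path.write_text(json.dumps(scores, indent=2))
    return path

def score_deltas(previous: dict, current: dict) -> dict:
    """Per-category delta (current minus previous) for shared keys."""
    return {k: current[k] - previous[k] for k in previous if k in current}
```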
+ ---
+
+ ## Python Toolkit
+
+ | Script | Usage | Dependencies |
+ |--------|-------|--------------|
+ | `seo_fetch.py` | `python scripts/seo_fetch.py <url> [--ua googlebot] [--json]` | requests |
+ | `seo_parse.py` | `python scripts/seo_parse.py <file> --url <url> --json` | beautifulsoup4, lxml |
+ | `seo_crawl.py` | `python scripts/seo_crawl.py <url> --depth 2 --max 25 --json` | requests |
+ | `seo_screenshot.py` | `python scripts/seo_screenshot.py <url> --viewport mobile` | playwright |
+ | `seo_apis.py` | `python scripts/seo_apis.py --pagespeed <url>` | requests |
+
+ **Install dependencies**: `pip install -r requirements.txt`
+
+ **Environment**: Set `GOOGLE_API_KEY` for Google API access (free, no OAuth).
+
+ ---
+
+ ## Reference Files (lazy-load)
+
+ Only load these when the relevant agent needs them:
+ - `ref/cwv-thresholds.md` — Core Web Vitals 2026
+ - `ref/schema-catalog.md` — Schema.org v29.4 types
+ - `ref/eeat-criteria.md` — E-E-A-T scoring grid
+ - `ref/geo-signals.md` — AI search signals
+ - `ref/quality-gates.md` — Content thresholds
+ - `ref/schema-templates.json` — 14 JSON-LD templates
+
+ ---
+
+ ## Industry-Specific Plans (`/seo plan <type>`)
+
+ | Type | Focus |
+ |------|-------|
+ | `saas` | Pricing pages, feature comparison, trial CTAs, documentation SEO |
+ | `ecommerce` | Product schema, category pages, faceted navigation, review markup |
+ | `local` | LocalBusiness schema, Google Business Profile, location pages, NAP consistency |
+ | `publisher` | Article schema, author E-E-A-T, news sitemap, pagination |
+ | `agency` | Service schema, portfolio, case studies, city-specific landing pages |
+
+ ---
+
+ *BMAD+ SEO Engine v2.1 — By Laurent Rochetta*
@@ -0,0 +1,14 @@
+ # BMAD+ SEO Engine — Python Dependencies
+ # Install: pip install -r requirements.txt
+ # Author: Laurent Rochetta
+
+ # Core (required)
+ requests>=2.31.0
+ beautifulsoup4>=4.12.0
+
+ # Fast HTML parser (recommended)
+ lxml>=5.0.0
+
+ # Screenshot capture (optional — only for seo_screenshot.py)
+ # Uncomment and run: playwright install chromium
+ # playwright>=1.40.0
@@ -0,0 +1,53 @@
+ # BMAD+ SEO Engine — Dependency Installer (Windows)
+ # Author: Laurent Rochetta
+
+ $ErrorActionPreference = "Stop"
+
+ $ScriptDir = Split-Path -Parent $MyInvocation.MyCommand.Path
+ $ParentDir = Split-Path -Parent $ScriptDir
+
+ Write-Host "🔧 BMAD+ SEO Engine — Installing dependencies..." -ForegroundColor Cyan
+ Write-Host ""
+
+ # Check Python (store the bare launcher name: & "py -3" would fail,
+ # since PowerShell would look for a command literally named "py -3")
+ $Python = $null
+ if (Get-Command python -ErrorAction SilentlyContinue) {
+     $Python = "python"
+ } elseif (Get-Command py -ErrorAction SilentlyContinue) {
+     $Python = "py"
+ } else {
+     Write-Host "❌ Python not found. Please install Python 3.10+" -ForegroundColor Red
+     exit 1
+ }
+
+ Write-Host "Using: $(& $Python --version)"
+
+ # Create venv if it does not exist
+ $VenvPath = Join-Path $ParentDir ".venv"
+ if (-not (Test-Path $VenvPath)) {
+     Write-Host "📦 Creating virtual environment..."
+     & $Python -m venv $VenvPath
+ }
+
+ # Activate venv (dot-source so the activation applies to this session)
+ $ActivateScript = Join-Path $VenvPath "Scripts\Activate.ps1"
+ if (Test-Path $ActivateScript) {
+     . $ActivateScript
+ }
+
+ # Install dependencies with the venv's own interpreter
+ Write-Host "📥 Installing core dependencies..."
+ $VenvPython = Join-Path $VenvPath "Scripts\python.exe"
+ $RequirementsPath = Join-Path $ParentDir "requirements.txt"
+ & $VenvPython -m pip install --quiet -r $RequirementsPath
+
+ Write-Host ""
+ Write-Host "✅ Core dependencies installed!" -ForegroundColor Green
+ Write-Host ""
+ Write-Host "Optional: To enable screenshots (seo_screenshot.py):"
+ Write-Host "  pip install playwright; playwright install chromium"
+ Write-Host ""
+ Write-Host "Set your Google API key for live data:"
+ Write-Host '  $env:GOOGLE_API_KEY = "your_key_here"'
+ Write-Host "  Get one free: https://console.cloud.google.com/apis/credentials"
+ Write-Host ""
+ Write-Host "🚀 Ready! — BMAD+ SEO Engine by Laurent Rochetta" -ForegroundColor Cyan
@@ -0,0 +1,48 @@
+ #!/bin/bash
+ # BMAD+ SEO Engine — Dependency Installer (Linux/macOS)
+ # Author: Laurent Rochetta
+
+ set -e
+
+ SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
+ PARENT_DIR="$(dirname "$SCRIPT_DIR")"
+
+ echo "🔧 BMAD+ SEO Engine — Installing dependencies..."
+ echo ""
+
+ # Check Python
+ if command -v python3 &>/dev/null; then
+     PYTHON=python3
+ elif command -v python &>/dev/null; then
+     PYTHON=python
+ else
+     echo "❌ Python not found. Please install Python 3.10+"
+     exit 1
+ fi
+
+ echo "Using: $($PYTHON --version)"
+
+ # Create venv if it does not exist
+ if [ ! -d "$PARENT_DIR/.venv" ]; then
+     echo "📦 Creating virtual environment..."
+     $PYTHON -m venv "$PARENT_DIR/.venv"
+ fi
+
+ # Activate venv
+ source "$PARENT_DIR/.venv/bin/activate" 2>/dev/null || true
+
+ # Install core dependencies
+ echo "📥 Installing core dependencies..."
+ $PYTHON -m pip install --quiet -r "$PARENT_DIR/requirements.txt"
+
+ echo ""
+ echo "✅ Core dependencies installed!"
+ echo ""
+ echo "Optional: To enable screenshots (seo_screenshot.py):"
+ echo "  pip install playwright && playwright install chromium"
+ echo ""
+ echo "Set your Google API key for live data:"
+ echo "  export GOOGLE_API_KEY=your_key_here"
+ echo "  Get one free: https://console.cloud.google.com/apis/credentials"
+ echo ""
+ echo "🚀 Ready! — BMAD+ SEO Engine by Laurent Rochetta"
@@ -0,0 +1,464 @@
+ #!/usr/bin/env python3
+ """
+ SEO APIs — Google free API client for live SEO data.
+
+ Connects to:
+ - PageSpeed Insights API v5 (lab scores + audits)
+ - Chrome UX Report (CrUX) API (field CWV data)
+ - Rich Results Test API (schema validation)
+
+ Requires: GOOGLE_API_KEY environment variable (free, no OAuth).
+ Get one at: https://console.cloud.google.com/apis/credentials
+
+ Author: Laurent Rochetta
+ License: MIT
+ """
+
+ import argparse
+ import json
+ import os
+ import sys
+ from typing import Optional
+
+ try:
+     import requests
+ except ImportError:
+     print("Error: requests required. Install: pip install requests", file=sys.stderr)
+     sys.exit(1)
+
+
+ API_KEY = os.environ.get("GOOGLE_API_KEY", "")
+
+ PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
+ CRUX_ENDPOINT = "https://chromeuxreport.googleapis.com/v1/records:queryRecord"
+ RICH_RESULTS_ENDPOINT = "https://searchconsole.googleapis.com/v1/urlTestingTools/mobileFriendlyTest:run"
+
+
+ # ── PageSpeed Insights ─────────────────────────────────────────────
+
40
+ """
41
+ Run PageSpeed Insights audit.
42
+
43
+ Args:
44
+ url: URL to audit
45
+ strategy: "mobile" or "desktop"
46
+ categories: List of categories (PERFORMANCE, ACCESSIBILITY, BEST_PRACTICES, SEO)
47
+
48
+ Returns:
49
+ Structured result with scores, audits, and opportunities
50
+ """
51
+ if not API_KEY:
52
+ return {"error": "GOOGLE_API_KEY not set. Get one at https://console.cloud.google.com/apis/credentials"}
53
+
54
+ if categories is None:
55
+ categories = ["PERFORMANCE", "ACCESSIBILITY", "BEST_PRACTICES", "SEO"]
56
+
57
+ params = {
58
+ "url": url,
59
+ "key": API_KEY,
60
+ "strategy": strategy,
61
+ }
62
+ for cat in categories:
63
+ params.setdefault("category", [])
64
+ if isinstance(params["category"], list):
65
+ params["category"].append(cat)
66
+
67
+ # requests needs category as repeated param
68
+ param_str = f"url={url}&key={API_KEY}&strategy={strategy}"
69
+ for cat in categories:
70
+ param_str += f"&category={cat}"
71
+
72
+ try:
73
+ response = requests.get(f"{PSI_ENDPOINT}?{param_str}", timeout=120)
74
+ response.raise_for_status()
75
+ data = response.json()
76
+ except requests.RequestException as e:
77
+ return {"error": f"PSI API request failed: {e}"}
78
+
79
+ # Extract scores
80
+ result = {
81
+ "url": url,
82
+ "strategy": strategy,
83
+ "scores": {},
84
+ "cwv": {},
85
+ "failing_audits": [],
86
+ "opportunities": [],
87
+ }
88
+
89
+ # Category scores
90
+ categories_data = data.get("lighthouseResult", {}).get("categories", {})
91
+ for cat_id, cat_data in categories_data.items():
92
+ score = cat_data.get("score", 0)
93
+ result["scores"][cat_id] = round(score * 100)
94
+
95
+ # Core Web Vitals from Lighthouse
96
+ audits = data.get("lighthouseResult", {}).get("audits", {})
97
+
98
+ cwv_metrics = {
99
+ "largest-contentful-paint": "LCP",
100
+ "interaction-to-next-paint": "INP",
101
+ "cumulative-layout-shift": "CLS",
102
+ "first-contentful-paint": "FCP",
103
+ "total-blocking-time": "TBT",
104
+ "speed-index": "SI",
105
+ }
106
+
107
+ for audit_id, label in cwv_metrics.items():
108
+ if audit_id in audits:
109
+ audit = audits[audit_id]
110
+ result["cwv"][label] = {
111
+ "value": audit.get("numericValue"),
112
+ "display": audit.get("displayValue", ""),
113
+ "score": round(audit.get("score", 0) * 100),
114
+ }
115
+
116
+ # Failing audits (score < 1.0)
117
+ for audit_id, audit in audits.items():
118
+ score = audit.get("score")
119
+ if score is not None and score < 0.9 and audit.get("title"):
120
+ severity = "critical" if score < 0.5 else "warning"
121
+ result["failing_audits"].append({
122
+ "id": audit_id,
123
+ "title": audit.get("title", ""),
124
+ "description": audit.get("description", "")[:200],
125
+ "score": round(score * 100),
126
+ "severity": severity,
127
+ "display_value": audit.get("displayValue", ""),
128
+ })
129
+
130
+ # Sort failures by score (worst first)
131
+ result["failing_audits"].sort(key=lambda x: x["score"])
132
+
133
+ # Opportunities (have savings)
134
+ for audit_id, audit in audits.items():
135
+ details = audit.get("details", {})
136
+ if details.get("type") == "opportunity" and details.get("overallSavingsMs", 0) > 0:
137
+ result["opportunities"].append({
138
+ "id": audit_id,
139
+ "title": audit.get("title", ""),
140
+ "savings_ms": details.get("overallSavingsMs", 0),
141
+ "savings_bytes": details.get("overallSavingsBytes", 0),
142
+ })
143
+
144
+ result["opportunities"].sort(key=lambda x: x["savings_ms"], reverse=True)
145
+
146
+ # Field data (CrUX from PSI response)
147
+ loading_exp = data.get("loadingExperience", {})
148
+ if loading_exp.get("metrics"):
149
+ result["field_data"] = {}
150
+ for metric_id, metric_data in loading_exp["metrics"].items():
151
+ result["field_data"][metric_id] = {
152
+ "percentile": metric_data.get("percentile"),
153
+ "category": metric_data.get("category"),
154
+ }
155
+
156
+ return result
157
+
158
+
159
+ # ── CrUX API ───────────────────────────────────────────────────────
160
+
161
+ def run_crux(url: str, form_factor: str = "PHONE") -> dict:
162
+ """
163
+ Query Chrome UX Report for real-world performance data.
164
+
165
+ Args:
166
+ url: URL or origin to query
167
+ form_factor: PHONE, DESKTOP, or ALL_FORM_FACTORS
168
+
169
+ Returns:
170
+ Field CWV data at 75th percentile
171
+ """
172
+ if not API_KEY:
173
+ return {"error": "GOOGLE_API_KEY not set"}
174
+
175
+ # Try URL-level first, fall back to origin
176
+ from urllib.parse import urlparse
177
+ parsed = urlparse(url)
178
+ origin = f"{parsed.scheme}://{parsed.netloc}"
179
+
180
+ payload = {
181
+ "url": url,
182
+ "formFactor": form_factor,
183
+ }
184
+
185
+ try:
186
+ response = requests.post(
187
+ f"{CRUX_ENDPOINT}?key={API_KEY}",
188
+ json=payload,
189
+ timeout=30,
190
+ )
191
+
192
+ if response.status_code == 404:
193
+ # No URL-level data, try origin
194
+ payload = {"origin": origin, "formFactor": form_factor}
195
+ response = requests.post(
196
+ f"{CRUX_ENDPOINT}?key={API_KEY}",
197
+ json=payload,
198
+ timeout=30,
199
+ )
200
+
201
+ if response.status_code == 404:
202
+ return {"error": f"No CrUX data available for {url} (not enough traffic)"}
203
+
204
+ response.raise_for_status()
205
+ data = response.json()
206
+ except requests.RequestException as e:
207
+ return {"error": f"CrUX API request failed: {e}"}
208
+
209
+ result = {
210
+ "url": url,
211
+ "form_factor": form_factor,
212
+ "metrics": {},
213
+ "collection_period": {},
214
+ }
215
+
216
+ # Extract metrics
217
+ metrics = data.get("record", {}).get("metrics", {})
218
+
219
+ metric_map = {
220
+ "largest_contentful_paint": "LCP",
221
+ "interaction_to_next_paint": "INP",
222
+ "cumulative_layout_shift": "CLS",
223
+ "first_contentful_paint": "FCP",
224
+ "experimental_time_to_first_byte": "TTFB",
225
+ }
226
+
227
+ for api_name, label in metric_map.items():
228
+ if api_name in metrics:
229
+ m = metrics[api_name]
230
+ p75 = m.get("percentiles", {}).get("p75")
231
+ histogram = m.get("histogram", [])
232
+
233
+ # Calculate good/needs-improvement/poor distribution
234
+ distribution = {}
235
+ for bucket in histogram:
236
+ density = bucket.get("density", 0)
237
+ if bucket.get("end"):
238
+ distribution["good"] = distribution.get("good", 0) + density
239
+ elif "start" in bucket and "end" not in bucket:
240
+ distribution["poor"] = density
241
+ else:
242
+ distribution["needs_improvement"] = density
243
+
244
+ result["metrics"][label] = {
245
+ "p75": p75,
246
+ "distribution": distribution,
247
+ }
248
+
249
+ # Collection period
250
+ period = data.get("record", {}).get("collectionPeriod", {})
251
+ result["collection_period"] = {
252
+ "first_date": period.get("firstDate", {}),
253
+ "last_date": period.get("lastDate", {}),
254
+ }
255
+
256
+ return result
257
+
258
+
259
+ # ── Rich Results Test ──────────────────────────────────────────────
260
+
261
+ def run_rich_results_test(url: str) -> dict:
262
+ """
263
+ Check if a URL is eligible for rich results.
264
+
265
+ Note: This uses the URL Testing Tools API (Mobile-Friendly Test)
266
+ which also returns rich results information. The dedicated Rich Results
267
+ Test API requires OAuth2, so we use this free alternative.
268
+
269
+ Args:
270
+ url: URL to test
271
+
272
+ Returns:
273
+ Mobile-friendly status and detected structured data
274
+ """
275
+ if not API_KEY:
276
+ return {"error": "GOOGLE_API_KEY not set"}
277
+
278
+ payload = {"url": url}
279
+
280
+ try:
281
+ response = requests.post(
282
+ f"{RICH_RESULTS_ENDPOINT}?key={API_KEY}",
283
+ json=payload,
284
+ timeout=60,
285
+ )
286
+ response.raise_for_status()
287
+ data = response.json()
288
+ except requests.RequestException as e:
289
+ return {"error": f"URL Testing API request failed: {e}"}
290
+
291
+ result = {
292
+ "url": url,
293
+ "mobile_friendly": data.get("mobileFriendliness") == "MOBILE_FRIENDLY",
294
+ "issues": [],
295
+ }
296
+
297
+ for issue in data.get("mobileFriendlyIssues", []):
298
+ result["issues"].append({
299
+ "rule": issue.get("rule", ""),
300
+ })
301
+
302
+ return result
303
+
304
+
305
+ # ── Unified Runner ─────────────────────────────────────────────────
306
+
307
+ def run_all(url: str) -> dict:
308
+ """Run all available API checks and merge results."""
309
+ print(f"Running PageSpeed Insights (mobile)...", file=sys.stderr)
310
+ psi_mobile = run_pagespeed(url, strategy="mobile")
311
+
312
+ print(f"Running PageSpeed Insights (desktop)...", file=sys.stderr)
313
+ psi_desktop = run_pagespeed(url, strategy="desktop")
314
+
315
+ print(f"Running CrUX API...", file=sys.stderr)
316
+ crux = run_crux(url)
317
+
318
+ print(f"Running Mobile-Friendly Test...", file=sys.stderr)
319
+ rich = run_rich_results_test(url)
320
+
321
+ return {
322
+ "url": url,
323
+ "timestamp": __import__("datetime").datetime.utcnow().isoformat() + "Z",
324
+ "pagespeed": {
325
+ "mobile": psi_mobile,
326
+ "desktop": psi_desktop,
327
+ },
328
+ "crux": crux,
329
+ "mobile_friendly": rich,
330
+ }
331
+
332
+
333
+ # ── CLI ────────────────────────────────────────────────────────────
334
+
335
+ def print_psi_summary(result: dict, label: str):
336
+ """Print a human-readable PSI summary."""
337
+ if result.get("error"):
338
+ print(f" Error: {result['error']}")
339
+ return
340
+
341
+ scores = result.get("scores", {})
342
+ print(f"\n {label} Scores:")
343
+ for cat, score in scores.items():
344
+ icon = "🟢" if score >= 90 else "🟡" if score >= 50 else "🔴"
345
+ cat_name = cat.replace("_", " ").replace("-", " ").title()
346
+ print(f" {icon} {cat_name}: {score}/100")
347
+
348
+ cwv = result.get("cwv", {})
349
+ if cwv:
350
+ print(f"\n Core Web Vitals:")
351
+ for metric, data in cwv.items():
352
+ icon = "🟢" if data["score"] >= 90 else "🟡" if data["score"] >= 50 else "🔴"
353
+ print(f" {icon} {metric}: {data['display']} ({data['score']}/100)")
354
+
355
+ failures = result.get("failing_audits", [])[:5]
356
+ if failures:
357
+ print(f"\n Top Failing Audits:")
358
+ for audit in failures:
359
+ icon = "🔴" if audit["severity"] == "critical" else "🟠"
360
+ print(f" {icon} {audit['title']} ({audit['score']}/100)")
361
+
362
+ opps = result.get("opportunities", [])[:3]
363
+ if opps:
364
+ print(f"\n Top Opportunities:")
365
+ for opp in opps:
366
+ print(f" 💡 {opp['title']} (save {opp['savings_ms']}ms)")
367
+
368
+
369
+ def main():
370
+ parser = argparse.ArgumentParser(
371
+ description="SEO APIs — Google free API client (BMAD+ SEO Engine)"
372
+ )
373
+ parser.add_argument("url", nargs="?", help="URL to analyze")
374
+ parser.add_argument("--pagespeed", action="store_true", help="Run PageSpeed Insights")
375
+ parser.add_argument("--crux", action="store_true", help="Run CrUX API")
376
+ parser.add_argument("--richtest", action="store_true", help="Run Rich Results Test")
377
+ parser.add_argument("--all", action="store_true", help="Run all APIs")
378
+ parser.add_argument("--strategy", choices=["mobile", "desktop"], default="mobile",
379
+ help="PSI strategy (default: mobile)")
380
+ parser.add_argument("--json", "-j", action="store_true", help="Output as JSON")
381
+
382
+ args = parser.parse_args()
383
+
384
+ if not args.url:
385
+ parser.print_help()
386
+ sys.exit(1)
387
+
388
+ if not API_KEY:
389
+ print("⚠️ GOOGLE_API_KEY not set!", file=sys.stderr)
390
+ print(" Get a free key: https://console.cloud.google.com/apis/credentials", file=sys.stderr)
391
+ print(" Enable: PageSpeed Insights API + Chrome UX Report API", file=sys.stderr)
392
+ print(" Set: export GOOGLE_API_KEY=your_key", file=sys.stderr)
393
+ sys.exit(1)
394
+
395
+ if args.all:
396
+ result = run_all(args.url)
397
+ if args.json:
398
+ print(json.dumps(result, indent=2, ensure_ascii=False))
399
+ else:
400
+ print(f"\n{'='*60}")
401
+ print(f"SEO API Report: {args.url}")
402
+ print(f"{'='*60}")
403
+ print_psi_summary(result["pagespeed"]["mobile"], "📱 Mobile")
404
+ print_psi_summary(result["pagespeed"]["desktop"], "🖥️ Desktop")
405
+
406
+ crux = result["crux"]
407
+ if not crux.get("error"):
408
+ print(f"\n 📊 CrUX Field Data:")
409
+ for metric, data in crux.get("metrics", {}).items():
410
+ print(f" {metric}: p75 = {data['p75']}")
411
+ else:
412
+ print(f"\n 📊 CrUX: {crux['error']}")
413
+
414
+ mf = result["mobile_friendly"]
415
+ if not mf.get("error"):
416
+ icon = "✅" if mf["mobile_friendly"] else "❌"
417
+ print(f"\n 📱 Mobile-Friendly: {icon}")
418
+ else:
419
+ print(f"\n 📱 Mobile-Friendly: {mf['error']}")
420
+
421
+ elif args.pagespeed:
422
+ result = run_pagespeed(args.url, strategy=args.strategy)
423
+ if args.json:
424
+ print(json.dumps(result, indent=2, ensure_ascii=False))
425
+ else:
426
+ print_psi_summary(result, f"{'📱 Mobile' if args.strategy == 'mobile' else '🖥️ Desktop'}")
427
+
428
+ elif args.crux:
429
+ result = run_crux(args.url)
430
+ if args.json:
431
+ print(json.dumps(result, indent=2, ensure_ascii=False))
432
+ else:
433
+ if result.get("error"):
434
+ print(f"Error: {result['error']}")
435
+ else:
436
+ print(f"\nCrUX Field Data: {args.url}")
437
+ for metric, data in result.get("metrics", {}).items():
438
+ print(f" {metric}: p75 = {data['p75']}")
439
+
440
+ elif args.richtest:
441
+ result = run_rich_results_test(args.url)
442
+ if args.json:
443
+ print(json.dumps(result, indent=2, ensure_ascii=False))
444
+ else:
445
+ if result.get("error"):
446
+ print(f"Error: {result['error']}")
447
+ else:
448
+ icon = "✅" if result["mobile_friendly"] else "❌"
449
+ print(f"Mobile-Friendly: {icon}")
450
+ if result["issues"]:
451
+ for issue in result["issues"]:
452
+ print(f" ⚠️ {issue['rule']}")
453
+
454
+ else:
455
+ # Default: run pagespeed
456
+ result = run_pagespeed(args.url, strategy=args.strategy)
457
+ if args.json:
458
+ print(json.dumps(result, indent=2, ensure_ascii=False))
459
+ else:
460
+ print_psi_summary(result, f"{'📱 Mobile' if args.strategy == 'mobile' else '🖥️ Desktop'}")
461
+
462
+
463
+ if __name__ == "__main__":
464
+ main()
package/package.json CHANGED
@@ -1,7 +1,7 @@
  {
    "$schema": "https://json.schemastore.org/package.json",
    "name": "bmad-plus",
-   "version": "0.3.0",
+   "version": "0.3.1",
    "description": "BMAD+ — Augmented AI-Driven Development Framework with multi-role agents, autopilot, and parallel execution",
    "keywords": [
      "bmad",