aeo-scanner 1.0.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,6 @@
+ __pycache__/
+ *.pyc
+ .venv/
+ dist/
+ *.egg-info/
+ .DS_Store
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2026 Convrgent
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
@@ -0,0 +1,106 @@
+ Metadata-Version: 2.4
+ Name: aeo-scanner
+ Version: 1.0.0
+ Summary: MCP server for AI search visibility auditing. Checks AEO score and Agent Readiness score for any website.
+ Author-email: Convrgent <hello@convrgent.ai>
+ License: MIT
+ License-File: LICENSE
+ Keywords: aeo,agent-readiness,ai-search,ai-visibility,mcp,seo
+ Classifier: Development Status :: 4 - Beta
+ Classifier: Intended Audience :: Developers
+ Classifier: Topic :: Internet :: WWW/HTTP
+ Classifier: Topic :: Software Development :: Libraries
+ Requires-Python: >=3.10
+ Requires-Dist: httpx>=0.27.0
+ Requires-Dist: mcp>=1.0.0
+ Description-Content-Type: text/markdown
+
+ # AEO Scanner — MCP Server
+
+ AI search visibility audit for any website. Two scores, one scan.
+
+ ## What it does
+
+ - **AEO Score (0-100):** How well AI search engines (ChatGPT, Perplexity, Google AI Overviews) can find, read, and cite your content
+ - **Agent Readiness (0-100):** How easily AI agents can understand, interact with, and transact on your site
+
+ ## Quick start
+
+ ```bash
+ # Claude Code — one command
+ claude mcp add aeo-scanner -- uvx aeo-scanner
+
+ # Cursor — add to .cursor/mcp.json
+ { "aeo-scanner": { "command": "uvx", "args": ["aeo-scanner"] } }
+ ```
+
+ Then ask your AI assistant: *"Scan example.com for AI visibility"*
+
+ ## Tools
+
+ | Tool | What it does | Price |
+ |------|-------------|-------|
+ | `scan_site` | Quick dual-score scan + top issues | **Free** |
+ | `audit_site` | Full 25+ check breakdown across 8 categories | $1.00 |
+ | `fix_site` | Generated fix code — apply directly with Claude Code | $5.00 |
+
+ ## Free tier
+
+ `scan_site` works without any authentication. No API key, no wallet, no setup. Just install and scan.
+
+ Rate limits: 20 scans/hour per IP, 5 per URL per day.
+
+ ## Paid tools
+
+ `audit_site` and `fix_site` require an API key:
+
+ 1. Get your key at [scan.convrgent.ai](https://scan.convrgent.ai)
+ 2. Add it to your MCP config:
+
+ ```json
+ {
+   "aeo-scanner": {
+     "command": "uvx",
+     "args": ["aeo-scanner"],
+     "env": {
+       "AEO_API_KEY": "your-api-key"
+     }
+   }
+ }
+ ```
+
+ Or pay per call with USDC via the [x402 protocol](https://www.x402.org/) (Base network).
+
+ ## Workflow
+
+ The included `optimize_site` prompt guides the full workflow:
+
+ 1. **Scan** — get baseline scores (free)
+ 2. **Audit** — see detailed breakdown by category ($1)
+ 3. **Fix** — get working code to apply ($5)
+ 4. **Rescan** — verify improvement (free)
+
+ ## Context cost
+
+ ~1,000 tokens at startup. 40x lighter than GitHub MCP.
+
+ ## Scoring
+
+ 25+ checks across 8 categories. See the built-in `aeo://reference/scoring-methodology` resource for full details, or read [scoring-methodology.md](scoring-methodology.md).
+
+ **AEO categories:** Structured Data (30%), Meta & Technical (20%), AI Accessibility (25%), Content Quality (25%)
+
+ **Agent Readiness categories:** Machine Identity (25%), API Discoverability (25%), Structured Actions (20%), Programmatic Access (20%), Data Clarity (10%)
+
+ **Grades:** A (90+), B (75-89), C (60-74), D (40-59), F (0-39)
+
+ ## Environment variables
+
+ | Variable | Required | Description |
+ |----------|----------|-------------|
+ | `AEO_API_KEY` | For paid tools | API key from scan.convrgent.ai |
+ | `AEO_API_URL` | No | Override API base URL (default: https://scan.convrgent.ai) |
+
+ ---
+
+ Built by [Convrgent](https://convrgent.ai) — personality intelligence and AI visibility tools for agents.
@@ -0,0 +1,89 @@
+ # AEO Scanner — MCP Server
+
+ AI search visibility audit for any website. Two scores, one scan.
+
+ ## What it does
+
+ - **AEO Score (0-100):** How well AI search engines (ChatGPT, Perplexity, Google AI Overviews) can find, read, and cite your content
+ - **Agent Readiness (0-100):** How easily AI agents can understand, interact with, and transact on your site
+
+ ## Quick start
+
+ ```bash
+ # Claude Code — one command
+ claude mcp add aeo-scanner -- uvx aeo-scanner
+
+ # Cursor — add to .cursor/mcp.json
+ { "aeo-scanner": { "command": "uvx", "args": ["aeo-scanner"] } }
+ ```
+
+ Then ask your AI assistant: *"Scan example.com for AI visibility"*
+
+ ## Tools
+
+ | Tool | What it does | Price |
+ |------|-------------|-------|
+ | `scan_site` | Quick dual-score scan + top issues | **Free** |
+ | `audit_site` | Full 25+ check breakdown across 8 categories | $1.00 |
+ | `fix_site` | Generated fix code — apply directly with Claude Code | $5.00 |
+
+ ## Free tier
+
+ `scan_site` works without any authentication. No API key, no wallet, no setup. Just install and scan.
+
+ Rate limits: 20 scans/hour per IP, 5 per URL per day.
+
+ ## Paid tools
+
+ `audit_site` and `fix_site` require an API key:
+
+ 1. Get your key at [scan.convrgent.ai](https://scan.convrgent.ai)
+ 2. Add it to your MCP config:
+
+ ```json
+ {
+   "aeo-scanner": {
+     "command": "uvx",
+     "args": ["aeo-scanner"],
+     "env": {
+       "AEO_API_KEY": "your-api-key"
+     }
+   }
+ }
+ ```
+
+ Or pay per call with USDC via the [x402 protocol](https://www.x402.org/) (Base network).
+
+ ## Workflow
+
+ The included `optimize_site` prompt guides the full workflow:
+
+ 1. **Scan** — get baseline scores (free)
+ 2. **Audit** — see detailed breakdown by category ($1)
+ 3. **Fix** — get working code to apply ($5)
+ 4. **Rescan** — verify improvement (free)
+
+ ## Context cost
+
+ ~1,000 tokens at startup. 40x lighter than GitHub MCP.
+
+ ## Scoring
+
+ 25+ checks across 8 categories. See the built-in `aeo://reference/scoring-methodology` resource for full details, or read [scoring-methodology.md](scoring-methodology.md).
+
+ **AEO categories:** Structured Data (30%), Meta & Technical (20%), AI Accessibility (25%), Content Quality (25%)
+
+ **Agent Readiness categories:** Machine Identity (25%), API Discoverability (25%), Structured Actions (20%), Programmatic Access (20%), Data Clarity (10%)
+
+ **Grades:** A (90+), B (75-89), C (60-74), D (40-59), F (0-39)
+
+ ## Environment variables
+
+ | Variable | Required | Description |
+ |----------|----------|-------------|
+ | `AEO_API_KEY` | For paid tools | API key from scan.convrgent.ai |
+ | `AEO_API_URL` | No | Override API base URL (default: https://scan.convrgent.ai) |
+
+ ---
+
+ Built by [Convrgent](https://convrgent.ai) — personality intelligence and AI visibility tools for agents.
@@ -0,0 +1,29 @@
+ [project]
+ name = "aeo-scanner"
+ version = "1.0.0"
+ description = "MCP server for AI search visibility auditing. Checks AEO score and Agent Readiness score for any website."
+ readme = "README.md"
+ license = {text = "MIT"}
+ requires-python = ">=3.10"
+ authors = [{name = "Convrgent", email = "hello@convrgent.ai"}]
+ keywords = ["mcp", "aeo", "seo", "ai-search", "agent-readiness", "ai-visibility"]
+ classifiers = [
+     "Development Status :: 4 - Beta",
+     "Intended Audience :: Developers",
+     "Topic :: Internet :: WWW/HTTP",
+     "Topic :: Software Development :: Libraries",
+ ]
+ dependencies = [
+     "mcp>=1.0.0",
+     "httpx>=0.27.0",
+ ]
+
+ [project.scripts]
+ aeo-scanner = "src.server:mcp.run"
+
+ [tool.hatch.build.targets.wheel]
+ packages = ["src"]
+
+ [build-system]
+ requires = ["hatchling"]
+ build-backend = "hatchling.build"
@@ -0,0 +1,96 @@
+ # AEO Scanner — Scoring Methodology
+
+ Two independent scores, one scan.
+
+ ## AEO Score (0-100)
+
+ Measures how well AI search engines (ChatGPT, Perplexity, Google AI Overviews) can find, read, and cite your content.
+
+ ### Categories
+
+ **Structured Data (30% of score)**
+ JSON-LD schema markup — Organization, WebSite, BreadcrumbList, FAQ, Article, Product schemas. Checks that JSON-LD blocks are valid and contain required fields (name, url, logo, description, sameAs for Organization; headline, author, datePublished for Articles).
+
+ **Meta & Technical (20% of score)**
+ Canonical URLs, meta descriptions (50-160 chars), Open Graph tags, heading hierarchy (single H1, no gaps), language attribute, Twitter cards, robots.txt (must allow AI crawlers: GPTBot, ClaudeBot, PerplexityBot, Google-Extended, Applebot-Extended, OAI-SearchBot), and sitemap.xml with lastmod dates.
+
+ **AI Accessibility (25% of score)**
+ llms.txt (structured machine-readable site summary with sections, links, API info), llms-full.txt, definition blocks in the first 500 words ("X is a Y" patterns), content structure (subheadings, lists, scannable paragraphs), and Q&A content format.
+
+ **Content Quality (25% of score)**
+ Data density (3+ statistics per 500 words), expert attribution (author meta, Person schema, credentials), content freshness signals (published/modified dates), content depth (300+ words minimum, 800+ for bonus), and unique value indicators (original research, case studies, proprietary data).
+
+ ### How checks are scored
+
+ Each check has a severity (critical = 3x weight, warning = 2x, info = 1x). Category score = weighted sum of passed checks / total possible weight. Overall AEO score = weighted average of category scores.
+
+ ---
+
+ ## Agent Readiness Score (0-100)
+
+ Measures how easily AI agents can understand, interact with, and transact on your site.
+
+ ### Categories
+
+ **Machine Identity (25% of score)**
+ llms.txt depth (scored on API info, pricing, auth details), site description clarity (action verb + noun + audience in the first 100 words), consistent machine-readable name across Organization schema, og:site_name, and page title.
+
+ **API Discoverability (25% of score)**
+ OpenAPI/Swagger specification at standard paths, developer documentation with technical signals (API key, SDK, curl, REST, GraphQL, webhook references), and visible API endpoints in content.
+
+ **Structured Actions (20% of score)**
+ Machine-readable pricing (Product/Service schema with offers), action affordances (forms, sign-up links, CTAs, Action schema), and machine-readable contact info (contactPoint in Organization schema).
+
+ **Programmatic Access (20% of score)**
+ Payment protocols (x402, Stripe, crypto/USDC detection), authentication documentation, and webhook/event support (callback URLs, event-driven patterns).
+
+ **Data Clarity (10% of score)**
+ Reserved for future checks.
+
+ ### How checks are scored
+
+ Agent checks use the actual score (0-100) of each check, weighted by severity. Category score = weighted average of check scores. Overall Agent Readiness = weighted average of category scores.
+
+ ---
+
+ ## Letter Grades
+
+ | Grade | Score | Meaning |
+ |-------|-------|---------|
+ | A | 90-100 | Excellent — optimized for AI visibility |
+ | B | 75-89 | Good — strong fundamentals, minor gaps |
+ | C | 60-74 | Fair — basics met, significant room for improvement |
+ | D | 40-59 | Poor — major gaps in AI optimization |
+ | F | 0-39 | Failing — critical elements missing |
+
+ ---
+
+ ## What raises your AEO Score
+
+ - Complete JSON-LD schema (Organization + WebSite on homepage)
+ - robots.txt allowing AI crawlers
+ - /llms.txt with structured content
+ - Definition blocks in opening paragraphs
+ - 300+ words with data density
+ - Author attribution and freshness signals
+
+ ## What raises your Agent Readiness Score
+
+ - Deep llms.txt with API docs, pricing, auth info
+ - OpenAPI spec or developer documentation
+ - Clear site description (what + who + for whom)
+ - Pricing in schema (Product with offers)
+ - Action affordances (forms, buttons, CTAs)
+ - Contact info in Organization schema
+ - Payment protocol support (x402, Stripe, crypto)
+ - Authentication docs and webhook support
+
+ ---
+
+ ## Multi-page scanning
+
+ When scanning multiple pages, site-wide checks (robots.txt, sitemap, llms.txt) are counted once. Per-page checks are aggregated across all scanned pages. Top issues are sorted by severity (critical first).
+
+ ---
+
+ Built by [Convrgent](https://convrgent.ai) — tools for AI agents.
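The two aggregation rules described in the scoring methodology above — pass/fail AEO checks weighted by severity, and 0-100 Agent checks averaged with the same weights — can be sketched in Python together with the grade bands. This is an illustrative reconstruction from the stated weights only; the data shapes and function names are assumptions, not the scanner's actual internals.

```python
# Illustrative sketch of the scoring rules in scoring-methodology.md.
# Severity weights and grade bands come from that document; check
# structures here are hypothetical.

SEVERITY_WEIGHT = {"critical": 3, "warning": 2, "info": 1}


def aeo_category_score(checks: list[dict]) -> float:
    """AEO checks are pass/fail: weighted sum of passed / total possible weight."""
    total = sum(SEVERITY_WEIGHT[c["severity"]] for c in checks)
    passed = sum(SEVERITY_WEIGHT[c["severity"]] for c in checks if c["passed"])
    return 100.0 * passed / total if total else 0.0


def agent_category_score(checks: list[dict]) -> float:
    """Agent checks carry a 0-100 score: severity-weighted average of scores."""
    total = sum(SEVERITY_WEIGHT[c["severity"]] for c in checks)
    weighted = sum(SEVERITY_WEIGHT[c["severity"]] * c["score"] for c in checks)
    return weighted / total if total else 0.0


def overall_score(categories: dict[str, float], weights: dict[str, float]) -> float:
    """Overall score = weighted average of category scores (weights sum to 1)."""
    return sum(categories[name] * w for name, w in weights.items())


def grade(score: float) -> str:
    """Letter-grade bands from the table above."""
    for cutoff, letter in ((90, "A"), (75, "B"), (60, "C"), (40, "D")):
        if score >= cutoff:
            return letter
    return "F"


# Example: a category with a failed critical check and a passed warning
# scores 2 of 5 weight points.
checks = [
    {"severity": "critical", "passed": False},
    {"severity": "warning", "passed": True},
]
print(aeo_category_score(checks))  # 40.0
print(grade(83))                   # B
```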
@@ -0,0 +1,26 @@
+ {
+   "name": "aeo-scanner",
+   "version": "1.0.0",
+   "description": "AI search visibility audit. Two scores: AEO Score (AI search findability) and Agent Readiness (agent interaction capability). Returns actionable fix code.",
+   "author": {
+     "name": "Convrgent",
+     "url": "https://convrgent.ai"
+   },
+   "repository": "https://github.com/convrgent/aeo-scanner-mcp",
+   "homepage": "https://scan.convrgent.ai",
+   "categories": ["web-development", "seo", "ai-visibility", "agent-readiness"],
+   "packages": {
+     "pypi": "aeo-scanner"
+   },
+   "tools": [
+     {"name": "scan_site", "description": "Quick AI visibility scan with dual scores (free)"},
+     {"name": "audit_site", "description": "Full 25+ check AI visibility audit ($1.00)"},
+     {"name": "fix_site", "description": "Generate fix code for all AI visibility issues ($5.00)"}
+   ],
+   "resources": [
+     {"uri": "aeo://reference/scoring-methodology", "description": "Scoring methodology documentation"}
+   ],
+   "prompts": [
+     {"name": "optimize_site", "description": "Full scan-audit-fix-verify workflow"}
+   ]
+ }
File without changes
@@ -0,0 +1,159 @@
+ import os
+ import json
+ import httpx
+ from mcp.server.fastmcp import FastMCP
+
+ mcp = FastMCP(
+     name="aeo-scanner",
+     instructions="AI search visibility audit. Checks AEO score and Agent Readiness score for any website. Returns actionable fix code.",
+ )
+
+ API_BASE = os.environ.get("AEO_API_URL", "https://scan.convrgent.ai")
+ AEO_API_KEY = os.environ.get("AEO_API_KEY")
+
+
+ def _get_paid_headers() -> dict:
+     """Build headers for paid API calls."""
+     headers = {"Content-Type": "application/json"}
+     if AEO_API_KEY:
+         headers["Authorization"] = f"Bearer {AEO_API_KEY}"
+     return headers
+
+
+ def _payment_required_response() -> str:
+     """Structured 402 response that agents can relay to their operators."""
+     return json.dumps(
+         {
+             "error": "payment_required",
+             "message": "This tool requires a paid API key.",
+             "how_to_pay": {
+                 "stripe": "Get your API key at https://scan.convrgent.ai",
+                 "crypto": "Pay per call via x402 (USDC on Base). See https://scan.convrgent.ai",
+             },
+             "setup": "Set AEO_API_KEY in your MCP server config env vars.",
+             "pricing": {"audit_site": "$1.00", "fix_site": "$5.00"},
+             "tip": "scan_site is free — try it first to see your scores.",
+         },
+         indent=2,
+     )
+
+
+ async def _handle_paid_request(endpoint: str, payload: dict) -> str:
+     """Call a paid endpoint with proper 402 handling."""
+     headers = _get_paid_headers()
+     async with httpx.AsyncClient(timeout=180) as client:
+         resp = await client.post(f"{API_BASE}{endpoint}", json=payload, headers=headers)
+         if resp.status_code == 402:
+             return _payment_required_response()
+         if resp.status_code >= 400:
+             try:
+                 error_data = resp.json()
+             except Exception:
+                 error_data = {"raw": resp.text[:500]}
+             return json.dumps(
+                 {
+                     "error": "request_failed",
+                     "status": resp.status_code,
+                     "details": error_data,
+                 },
+                 indent=2,
+             )
+         return resp.text
+
+
+ @mcp.tool()
+ async def scan_site(url: str, pages: int = 5) -> str:
+     """Quick AI visibility scan. Returns AEO Score (0-100) and Agent
+     Readiness Score (0-100) with letter grades, plus top issues found.
+     Free to use — no API key needed. Use for fast assessment before
+     diving deeper with audit_site. Run again after applying fixes to
+     verify improvement."""
+     async with httpx.AsyncClient(timeout=120) as client:
+         resp = await client.post(
+             f"{API_BASE}/api/aeo/scan",
+             json={"url": url, "pages": min(max(1, pages), 5)},
+             headers={"Content-Type": "application/json"},
+         )
+         if resp.status_code >= 400:
+             try:
+                 error_data = resp.json()
+             except Exception:
+                 error_data = {"raw": resp.text[:500]}
+             return json.dumps(
+                 {
+                     "error": "scan_failed",
+                     "status": resp.status_code,
+                     "details": error_data,
+                 },
+                 indent=2,
+             )
+         return resp.text
+
+
+ @mcp.tool()
+ async def audit_site(
+     url: str, pages: int = 5, categories: list[str] | None = None
+ ) -> str:
+     """Full AI visibility audit across 25+ checks in 8 categories:
+     structured data, meta & technical, AI accessibility, content quality,
+     machine identity, API discoverability, structured actions, and
+     programmatic access. Returns detailed per-check scores with
+     specific issues and fix recommendations.
+     Requires API key (set AEO_API_KEY env var). $1.00 per call."""
+     payload: dict = {"url": url, "pages": min(max(1, pages), 5)}
+     if categories:
+         payload["categories"] = categories
+     return await _handle_paid_request("/api/aeo/audit", payload)
+
+
+ @mcp.tool()
+ async def fix_site(url: str, pages: int = 5, format: str = "generic") -> str:
+     """Generate complete fix code for all AI visibility issues.
+     Returns a structured fix file that coding agents (Claude Code,
+     Codex, Cursor) can apply directly. Includes schema generation,
+     robots.txt fixes, sitemap fixes, llms.txt generation, and
+     structured data additions. Not just recommendations — working code.
+     Set format to 'claude_code' for Claude Code optimized output.
+     Requires API key (set AEO_API_KEY env var). $5.00 per call."""
+     return await _handle_paid_request(
+         "/api/aeo/fix",
+         {"url": url, "pages": min(max(1, pages), 5), "format": format},
+     )
+
+
+ @mcp.resource("aeo://reference/scoring-methodology")
+ async def scoring_methodology() -> str:
+     """How AEO and Agent Readiness scores are calculated across 25+ checks."""
+     path = os.path.join(os.path.dirname(__file__), "..", "scoring-methodology.md")
+     with open(path) as f:
+         return f.read()
+
+
+ @mcp.prompt()
+ async def optimize_site(url: str, priority: str = "both") -> str:
+     """Full scan → audit → fix → verify workflow for optimizing a site's
+     AI visibility. Guides the agent through the complete process."""
+     return f"""You are optimizing {url} for AI search visibility and agent readiness.
+
+ Workflow:
+ 1. Call scan_site to get baseline scores (free)
+ 2. Report the two scores and top issues to the user
+ 3. If the user wants to proceed, call audit_site for the full breakdown ($1.00 — requires API key)
+ 4. Present the detailed findings organized by category
+ 5. If the user wants fixes, call fix_site to generate fix code ($5.00 — requires API key)
+ 6. Apply the fix code to the project (if you have file access)
+ 7. After applying fixes, call scan_site again to verify improvement (free)
+ 8. Report the before/after comparison (e.g., "AEO: 61 → 96")
+
+ Priority focus: {priority}
+ Always present both scores but emphasize the priority area.
+
+ If audit_site or fix_site returns a payment_required error, tell the user:
+ - Get an API key at https://scan.convrgent.ai
+ - Set it as AEO_API_KEY in your MCP config
+ - Or pay per call with USDC via the x402 protocol
+ """
+
+
+ if __name__ == "__main__":
+     mcp.run()