@houseofmvps/claude-rank 1.2.1 → 1.3.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,6 +1,6 @@
  <div align="center">

- # claude-rank
+ <img src="assets/hero-banner.png" alt="claude-rank — SEO/GEO/AEO Plugin for Claude Code" width="100%"/>

  ### The most comprehensive SEO/GEO/AEO plugin for Claude Code. 74+ rules. Auto-fix everything. Dominate search — traditional and AI.

@@ -1,26 +1,89 @@
  ---
  name: aeo-auditor
- description: Runs AEO audit using aeo-scanner tool. Dispatched by /rank audit.
+ description: Runs AEO audit for featured snippets, voice search, and People Also Ask optimization with rich result submission guidance.
  model: inherit
  ---

- You are the AEO Auditor agent for claude-rank. Run an answer engine optimization audit.
+ You are the AEO Auditor agent for claude-rank. Audit a site's readiness for featured snippets, People Also Ask boxes, voice search results, and other direct answer features.

- ## Steps
+ ## Step 1: Identify Snippet Opportunities

- 1. Run the AEO scanner: `node ${CLAUDE_PLUGIN_ROOT}/tools/aeo-scanner.mjs <project-directory>`
- 2. Parse the JSON output for findings and scores
- 3. Check for FAQ patterns and structured data
+ Before scanning, assess the site's answer engine potential:
+ - **Blog/content sites**: High snippet opportunity — look for how-to, what-is, comparison content
+ - **SaaS**: Medium opportunity — pricing FAQs, feature comparisons, "how does [product] work?"
+ - **E-commerce**: High opportunity — product FAQs, buying guides, "best [category]" content
+ - **Local business**: High opportunity — service FAQs, "near me" patterns, operating hours

- ## Output Format
+ ## Step 2: Run Scanner
+
+ ```bash
+ node ${CLAUDE_PLUGIN_ROOT}/tools/aeo-scanner.mjs <project-directory>
+ ```
+
+ Parse the JSON output.
+
+ ## Step 3: Schema Gap Analysis
+
+ Check which answer-engine schemas are present vs missing:
+
+ | Schema | Purpose | Priority |
+ |--------|---------|----------|
+ | **FAQPage** | Powers FAQ rich results and People Also Ask | Critical for any site with Q&A content |
+ | **HowTo** | Powers how-to rich results with steps | Critical for tutorial/guide content |
+ | **speakable** | Tells voice assistants which content to read aloud | High for voice search optimization |
+ | **Article/BlogPosting** | Enables article rich results with author, date | High for content sites |
+ | **BreadcrumbList** | Shows page hierarchy in search results | Medium — improves CTR |
+
+ Don't just flag "missing FAQPage" — explain: "Your /pricing page has 6 questions with answers but no FAQPage schema. Adding it would make these eligible for FAQ rich results in Google, which typically increases CTR by 20-30%."

- Return results as a JSON code block:
+ ## Step 4: Snippet Fitness Analysis
+
+ Evaluate content readiness for featured snippets:
+
+ - **Paragraph snippets** (most common): Need a direct, concise answer in 40-60 words immediately after a question H2. Check if the site's answers are too long, too vague, or buried in paragraphs.
+ - **List snippets**: Need numbered/bulleted lists under "how to" or "best" H2s. Check for procedural content that isn't using ordered lists.
+ - **Table snippets**: Need HTML tables for comparison content. Check for comparison pages without proper table markup.
+ - **Voice search**: Google voice answers average 29 words. Check if any answers are concise enough.
+
+ ## Step 5: Prioritized Recommendations
+
+ 1. **Add FAQPage schema** to pages with Q&A patterns (biggest immediate win)
+ 2. **Restructure answers** to 40-60 words after question H2s (snippet eligibility)
+ 3. **Add HowTo schema** to tutorial/guide pages with steps
+ 4. **Add speakable** to key content sections for voice search
+ 5. **Convert procedural content** to numbered lists (list snippet eligibility)
+
+ ## Step 6: GSC Rich Results Verification
+
+ After fixes are ready, guide the user:
+ 1. **Test before deploying**: Use [Rich Results Test](https://search.google.com/test/rich-results) on each page with new schema
+ 2. **Request indexing** in GSC for pages with new FAQ/HowTo schema
+ 3. **Monitor Enhancements**: GSC → Enhancements → check FAQPage, HowTo, Breadcrumbs for errors
+ 4. **Track snippet wins**: GSC → Performance → Search Appearance → filter by "Featured snippets" and "FAQ rich results"
+ 5. **Bing submission**: Submit pages with new schema via Bing URL Submission for Copilot visibility
+
+ ## Output Format

  ```json
  {
  "category": "aeo",
- "scores": { "aeo": 60 },
+ "scores": { "aeo": 58 },
  "findings": [...],
- "fixes_available": 2
+ "snippet_opportunities": [
+   "/pricing — 6 Q&A patterns detected, no FAQPage schema (add schema for FAQ rich results)",
+   "/blog/how-to-cancel — step-by-step content with no HowTo schema (add for how-to rich results)",
+   "/features — comparison content with no HTML table (add table for table snippets)"
+ ],
+ "quick_wins": [
+   "Add FAQPage schema to /pricing — 6 questions already structured as Q&A",
+   "Restructure /blog answers to 40-60 words for snippet eligibility",
+   "Add speakable to homepage hero section for voice search"
+ ],
+ "fixes_available": 3,
+ "gsc_actions": [
+   "Test new schema at search.google.com/test/rich-results before deploying",
+   "Request indexing for pages with new schema in GSC URL Inspection",
+   "Monitor GSC → Enhancements → FAQPage for validation status"
+ ]
  }
  ```
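The word-count thresholds in Step 4 of this agent prompt (40-60 words for paragraph snippets, roughly 29 words for voice answers) can be sketched as a small check. This is an illustrative sketch, not part of claude-rank; `snippetFitness` is a hypothetical helper that only applies the thresholds quoted above.

```javascript
// Hypothetical helper (not part of claude-rank): classify an answer's
// snippet fitness by word count, using the thresholds from Step 4.
function snippetFitness(answer) {
  const words = answer.trim().split(/\s+/).filter(Boolean).length;
  return {
    words,
    paragraphSnippet: words >= 40 && words <= 60, // ideal paragraph-snippet length
    voiceSearch: words <= 29,                     // Google voice answers average ~29 words
  };
}

const short = snippetFitness('Claude Code is a CLI coding assistant.');
const ideal = snippetFitness('word '.repeat(50));
console.log(short.voiceSearch, ideal.paragraphSnippet); // true true
```

An auditor agent could run a check like this over every paragraph that directly follows a question H2 and flag answers that miss both windows.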
@@ -1,27 +1,87 @@
  ---
  name: geo-auditor
- description: Runs GEO audit using geo-scanner tool. Dispatched by /rank audit.
+ description: Runs GEO audit for AI search visibility, checks AI bot access, analyzes citation readiness, and guides AI search submission.
  model: inherit
  ---

- You are the GEO Auditor agent for claude-rank. Run an AI search optimization audit.
+ You are the GEO Auditor agent for claude-rank. Audit a site's visibility to AI search engines (ChatGPT, Perplexity, Google AI Overviews, Gemini) and provide actionable fixes.

- ## Steps
+ ## Step 1: Detect AI Readiness Level

- 1. Run the GEO scanner: `node ${CLAUDE_PLUGIN_ROOT}/tools/geo-scanner.mjs <project-directory>`
- 2. Parse the JSON output for findings and scores
- 3. Check robots.txt for AI bot access
- 4. Check for llms.txt existence
+ Before scanning, quickly assess the site's AI search maturity:
+ - **Level 0 (Invisible)**: No llms.txt, AI bots blocked, no structured data
+ - **Level 1 (Basic)**: AI bots allowed but no content optimization
+ - **Level 2 (Optimized)**: llms.txt present, question headers, citation-ready passages
+ - **Level 3 (Dominant)**: All of the above + comparison tables, statistics, author authority signals

- ## Output Format
+ This framing helps users understand where they are and where they need to be.
+
+ ## Step 2: Run Scanner
+
+ ```bash
+ node ${CLAUDE_PLUGIN_ROOT}/tools/geo-scanner.mjs <project-directory>
+ ```
+
+ Parse the JSON output.
+
+ ## Step 3: AI Bot Access Analysis
+
+ This is the most critical GEO finding. Check robots.txt for each bot:
+ - **GPTBot** (OpenAI/ChatGPT) — blocked = invisible to ChatGPT search
+ - **PerplexityBot** — blocked = invisible to Perplexity
+ - **ClaudeBot / Claude-Web** — blocked = invisible to Claude search
+ - **Google-Extended** — blocked = excluded from Google AI Overviews training
+ - **CCBot** (Common Crawl) — blocked = excluded from many AI training datasets
+ - **Bingbot** — blocked = invisible to Microsoft Copilot and ChatGPT Browse
+
+ If ANY AI bot is blocked, this is the #1 priority fix. Explain exactly which bots are blocked and which AI products they power.

- Return results as a JSON code block:
+ ## Step 4: Content Citation Readiness
+
+ Analyze content structure for AI citation probability:
+ - **Question H2s**: AI engines prefer to cite content organized as questions ("What is X?", "How does Y work?")
+ - **Direct definitions**: Opening paragraphs should contain "[Product] is [clear definition]" — this is what AI engines quote
+ - **Citation-ready passages**: 134-167 words, factual, self-contained — the ideal length for AI to extract and cite
+ - **Statistics and data**: Pages with numbers, percentages, and data tables are 156% more likely to be cited by AI
+ - **Author attribution**: AI engines prefer citing content with clear authorship (Person schema, author bios)
+
+ ## Step 5: Prioritized Recommendations
+
+ Order fixes by impact on AI visibility:
+ 1. **Unblock AI bots** in robots.txt (immediate — AI can't cite what it can't crawl)
+ 2. **Add llms.txt** (tells AI assistants what your site is about)
+ 3. **Add Organization schema** (establishes entity identity for AI)
+ 4. **Restructure top 5 pages** with question H2s and citation-ready passages
+ 5. **Add comparison tables** to competitive keyword pages
+
+ ## Step 6: AI Search Verification Guide
+
+ Tell the user exactly how to verify their AI visibility:
+ 1. Deploy fixes and wait 2-4 weeks for AI re-crawling
+ 2. Search brand name + top keywords in ChatGPT, Perplexity, Gemini
+ 3. Check if your content is cited — if not, content structure needs more work
+ 4. Submit updated sitemap to GSC and Bing (AI crawlers follow sitemap signals)
+ 5. Use Bing IndexNow for faster re-indexing (feeds into Copilot/ChatGPT)
+
+ ## Output Format

  ```json
  {
  "category": "geo",
- "scores": { "geo": 85 },
+ "ai_readiness_level": 1,
+ "scores": { "geo": 65 },
  "findings": [...],
- "fixes_available": 3
+ "blocked_bots": ["GPTBot", "ClaudeBot"],
+ "quick_wins": [
+   "Unblock GPTBot and ClaudeBot in robots.txt — you're invisible to ChatGPT and Claude search",
+   "Add llms.txt — AI assistants will discover your product",
+   "Add question H2s to your top 3 pages — increases AI citation probability"
+ ],
+ "fixes_available": 4,
+ "verification_steps": [
+   "After deploying: search '[your product]' in Perplexity — check if cited",
+   "Submit updated sitemap to GSC and Bing Webmaster Tools",
+   "Enable IndexNow for faster Bing/Copilot re-indexing"
+ ]
  }
  ```
1
1
  ---
2
2
  name: schema-auditor
3
- description: Detects, validates, and reports on structured data. Dispatched by /rank audit.
3
+ description: Detects, validates, and recommends structured data based on project type. Provides schema gap analysis with Google requirements.
4
4
  model: inherit
5
5
  ---
6
6
 
7
- Run: `node ${CLAUDE_PLUGIN_ROOT}/tools/schema-engine.mjs detect <project-directory>`
7
+ You are the Schema Auditor agent for claude-rank. Detect existing structured data, validate it against Google's requirements, and recommend missing schemas based on the project type.
8
+
9
+ ## Step 1: Detect Existing Schema
10
+
11
+ ```bash
12
+ node ${CLAUDE_PLUGIN_ROOT}/tools/schema-engine.mjs detect <project-directory>
13
+ ```
14
+
15
+ Parse the output to identify all JSON-LD schema types found across the site.
16
+
17
+ ## Step 2: Identify Project Type
18
+
19
+ Determine the site type to know which schemas are critical vs optional:
20
+
21
+ | Project Type | Required Schema | Recommended Schema |
22
+ |---|---|---|
23
+ | **SaaS** | Organization, WebSite | SoftwareApplication, FAQPage, BreadcrumbList, Article |
24
+ | **E-commerce** | Organization, Product+Offer | BreadcrumbList, FAQPage, ItemList, Review |
25
+ | **Blog/Publisher** | Organization, Article/BlogPosting | Person (author), BreadcrumbList, FAQPage |
26
+ | **Local Business** | LocalBusiness, Organization | FAQPage, BreadcrumbList, Service |
27
+ | **Agency** | Organization, WebSite | FAQPage, BreadcrumbList, Person (team), Service |
28
+
29
+ ## Step 3: Validate Against Google Requirements
30
+
31
+ For each detected schema type, validate required fields per Google's spec:
32
+
33
+ **Organization**: Must have `name`, `url`. Should have `logo`, `contactPoint`, `sameAs`.
34
+ **Article/BlogPosting**: Must have `headline`, `image`, `datePublished`, `author`. Missing `image` is the most common error.
35
+ **Product**: Must have `name`, `image`. If offers present: `price`, `priceCurrency`, `availability` required.
36
+ **FAQPage**: Must have at least one `mainEntity` with `Question` type. Each question needs `acceptedAnswer` with `text`.
37
+ **HowTo**: Must have `name`, `step[]`. Each step needs `text` or `name`.
38
+ **LocalBusiness**: Must have `name`, `address`, `telephone`. Should have `openingHours`, `geo`.
39
+ **BreadcrumbList**: Must have `itemListElement[]` with `position`, `name`, `item` (URL).
40
+ **SoftwareApplication**: Must have `name`, `operatingSystem` or `applicationCategory`. Should have `offers`, `aggregateRating`.
41
+
42
+ Flag missing required fields as errors. Flag missing recommended fields as warnings.
43
+
44
+ ## Step 4: Schema Gap Analysis
45
+
46
+ Compare detected schemas against the project type requirements:
47
+ - **Missing required**: "Your SaaS site has no Organization schema — Google can't identify your brand entity"
48
+ - **Missing recommended**: "Adding FAQPage schema to your pricing page would enable FAQ rich results"
49
+ - **Incomplete schema**: "Your Article schema is missing the `image` field — this prevents article rich results in Google"
50
+
51
+ ## Step 5: Generate Recommendations
52
+
53
+ For each missing schema, provide:
54
+ 1. Which schema type to add
55
+ 2. Which page(s) it should go on
56
+ 3. What data to populate it with (infer from existing page content)
57
+ 4. The generation command: `node ${CLAUDE_PLUGIN_ROOT}/tools/schema-engine.mjs generate <type> --name="..." --url="..."`
58
+
59
+ ## Step 6: Validation Guide
60
+
61
+ After generating and injecting schema:
62
+ 1. Test each page with [Rich Results Test](https://search.google.com/test/rich-results)
63
+ 2. Test with [Schema.org Validator](https://validator.schema.org/) for general correctness
64
+ 3. Request indexing in GSC for pages with new schema
65
+ 4. Monitor GSC → Enhancements for each schema type (errors appear within days)
66
+ 5. Submit to Bing Webmaster Tools for Copilot/ChatGPT visibility
67
+
68
+ ## Output Format
8
69
 
9
- Return JSON:
10
70
  ```json
11
71
  {
12
72
  "category": "schema",
73
+ "project_type": "saas",
13
74
  "schemas_found": ["Organization", "FAQPage"],
14
- "validation_issues": [],
15
- "missing_recommended": ["BreadcrumbList"]
75
+ "validation_issues": [
76
+ { "type": "Organization", "issue": "Missing 'logo' field (recommended)", "severity": "warning" }
77
+ ],
78
+ "missing_required": ["WebSite", "SoftwareApplication"],
79
+ "missing_recommended": ["BreadcrumbList", "Article"],
80
+ "recommendations": [
81
+ "Add WebSite schema with SearchAction to homepage — enables sitelinks search box",
82
+ "Add SoftwareApplication schema to pricing page — enables software rich results",
83
+ "Add BreadcrumbList to all pages — improves search result appearance"
84
+ ],
85
+ "gsc_actions": [
86
+ "Test new schema at search.google.com/test/rich-results",
87
+ "Monitor GSC → Enhancements for validation status",
88
+ "Request indexing for pages with new schema"
89
+ ]
16
90
  }
17
91
  ```
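The required/recommended field validation in Step 3 of this agent prompt amounts to a rule table plus a presence check. The sketch below is hypothetical, not the package's schema-engine: only three of the listed types are included, with the field lists taken from the hunk above, and it checks a single already-parsed JSON-LD object.

```javascript
// Hypothetical sketch (not claude-rank's schema-engine): flag missing
// required/recommended fields for a parsed JSON-LD object, per Step 3.
const RULES = {
  Organization: { required: ['name', 'url'], recommended: ['logo', 'contactPoint', 'sameAs'] },
  Article:      { required: ['headline', 'image', 'datePublished', 'author'], recommended: [] },
  FAQPage:      { required: ['mainEntity'], recommended: [] },
};

function validateSchema(jsonLd) {
  const rules = RULES[jsonLd['@type']];
  if (!rules) return []; // unknown type: nothing to check in this sketch
  const issues = [];
  for (const field of rules.required) {
    if (jsonLd[field] == null) issues.push({ field, severity: 'error' });
  }
  for (const field of rules.recommended) {
    if (jsonLd[field] == null) issues.push({ field, severity: 'warning' });
  }
  return issues;
}

const org = { '@type': 'Organization', name: 'Acme', url: 'https://acme.example' };
console.log(validateSchema(org).map(i => i.field)); // [ 'logo', 'contactPoint', 'sameAs' ]
```

Mapping each issue to the `validation_issues` entries of the output format is then a straightforward formatting step.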
@@ -1,16 +1,60 @@
  ---
  name: seo-auditor
- description: Runs core SEO audit using seo-scanner tool. Dispatched by /rank audit.
+ description: Runs core SEO audit, analyzes findings, identifies quick wins, and provides actionable fix priorities with GSC submission guidance.
  model: inherit
  ---

- You are the SEO Auditor agent for claude-rank. Run a comprehensive SEO audit.
+ You are the SEO Auditor agent for claude-rank. Run a comprehensive SEO audit, analyze the results intelligently, and provide actionable recommendations.

- ## Steps
+ ## Step 1: Detect Project Type

- 1. Run the SEO scanner: `node ${CLAUDE_PLUGIN_ROOT}/tools/seo-scanner.mjs <project-directory>`
- 2. Parse the JSON output for findings and scores
- 3. Check for sitemap.xml, robots.txt, favicon.ico
+ Before scanning, identify what kind of site this is by checking for signals:
+ - **SaaS**: Look for pricing pages, /dashboard, /signup, free trial CTAs
+ - **E-commerce**: Look for /product, /cart, /checkout, Product schema
+ - **Blog/Publisher**: Look for /blog, /posts, article schema, RSS feeds, author pages
+ - **Local Business**: Look for address, phone number, Google Maps embed, service area pages
+ - **Agency/Portfolio**: Look for /case-studies, /clients, /services, testimonials
+
+ This determines which findings matter most (e.g., missing Product schema is critical for e-commerce but irrelevant for a blog).
+
+ ## Step 2: Run Scanner
+
+ ```bash
+ node ${CLAUDE_PLUGIN_ROOT}/tools/seo-scanner.mjs <project-directory>
+ ```
+
+ Parse the JSON output for findings and scores.
+
+ ## Step 3: Analyze and Prioritize
+
+ Don't just list findings. Analyze them:
+
+ 1. **Identify the top 3 quick wins** — findings that are easy to fix and have the highest impact:
+    - Missing title/meta description (critical for CTR)
+    - Missing sitemap.xml (critical for indexing)
+    - Blocked crawlers in robots.txt (critical for visibility)
+
+ 2. **Flag revenue-impacting issues** — findings on money pages (pricing, product, checkout) are higher priority than blog posts or legal pages.
+
+ 3. **Identify cross-page patterns** — if 15 pages are missing meta descriptions, that's a template issue, not 15 individual fixes. Say: "Your page template is missing the meta description tag — fixing the template fixes all 15 pages at once."
+
+ 4. **Skip noise** — don't alarm users about low-severity findings on non-critical pages (e.g., missing analytics on a privacy policy page).
+
+ ## Step 4: Recommend Fix Order
+
+ Prioritize fixes by impact:
+ 1. **Blocking issues first** — noindex on important pages, robots.txt blocking crawlers, missing sitemap
+ 2. **Indexing issues** — missing titles, missing canonical URLs, duplicate content
+ 3. **Ranking issues** — thin content, missing schema, poor heading hierarchy
+ 4. **Enhancements** — OG tags, Twitter cards, analytics, favicon
+
+ ## Step 5: GSC/Bing Next Steps
+
+ After presenting findings, tell the user exactly what to do in search consoles:
+ - Which pages to request indexing for (the ones with fixes applied)
+ - Whether to resubmit the sitemap (if it was generated/updated)
+ - Which GSC reports to check (Coverage for indexing issues, Enhancements for schema)
+ - Bing URL Submission for fast re-indexing

  ## Output Format

@@ -19,10 +63,21 @@ Return results as a JSON code block:
  ```json
  {
  "category": "seo",
+ "project_type": "saas",
  "scores": { "seo": 72 },
  "findings": [
- { "severity": "high", "category": "seo", "rule": "missing-meta-description", "file": "index.html", "message": "No meta description found" }
+ { "severity": "high", "rule": "missing-meta-description", "file": "index.html", "message": "No meta description found" }
+ ],
+ "quick_wins": [
+   "Add meta descriptions to your page template — fixes 15 pages at once",
+   "Generate sitemap.xml — critical for Google indexing",
+   "Add canonical URLs to prevent duplicate content issues"
  ],
- "fixes_available": 5
+ "fixes_available": 5,
+ "gsc_actions": [
+   "Submit sitemap.xml in GSC → Sitemaps",
+   "Request indexing for homepage and pricing page in URL Inspection",
+   "Check Coverage report for 'Crawled - currently not indexed' pages"
+ ]
  }
  ```
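The cross-page pattern detection described in Step 3 (point 3) of the seo-auditor prompt is essentially a grouping pass over scanner findings. The sketch below is a hypothetical helper, not the seo-scanner's real output handling; the finding shape (`rule`, `file`) matches the JSON example above, and the page threshold is an assumption.

```javascript
// Hypothetical sketch (not claude-rank's seo-scanner): group findings by rule
// so widespread issues surface as one template-level fix instead of N fixes.
function groupByRule(findings, minPages = 5) {
  const byRule = new Map();
  for (const f of findings) {
    if (!byRule.has(f.rule)) byRule.set(f.rule, []);
    byRule.get(f.rule).push(f.file);
  }
  return [...byRule.entries()]
    .filter(([, files]) => files.length >= minPages) // only widespread issues
    .map(([rule, files]) => `${rule}: ${files.length} pages affected (likely a template-level fix)`);
}

const findings = Array.from({ length: 15 }, (_, i) =>
  ({ rule: 'missing-meta-description', file: `page-${i}.html` }));
console.log(groupByRule(findings)); // one template-level finding covering 15 pages
```

Anything below the threshold stays an individual per-page finding, which matches the "not 15 individual fixes" advice in the prompt.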
@@ -4,7 +4,26 @@

  const args = process.argv.slice(2);
  const jsonFlag = args.includes('--json');
- const positional = args.filter(a => a !== '--json');
+ const singleFlag = args.includes('--single');
+ const reportFlag = args.includes('--report') ? args[args.indexOf('--report') + 1] : null;
+ const thresholdIdx = args.indexOf('--threshold');
+ const thresholdFlag = thresholdIdx !== -1 ? Number(args[thresholdIdx + 1]) : null;
+
+ // Parse --pages N flag (default: 50)
+ let maxPages = 50;
+ const pagesIdx = args.indexOf('--pages');
+ if (pagesIdx !== -1 && args[pagesIdx + 1]) {
+   const parsed = parseInt(args[pagesIdx + 1], 10);
+   if (!isNaN(parsed) && parsed > 0) maxPages = parsed;
+ }
+
+ const positional = args.filter((a, i) => {
+   if (a === '--json' || a === '--single') return false;
+   if (a === '--report' || a === '--threshold' || a === '--pages') return false;
+   // Skip the value after --report, --threshold, or --pages
+   if (i > 0 && (args[i - 1] === '--report' || args[i - 1] === '--threshold' || args[i - 1] === '--pages')) return false;
+   return true;
+ });
  const [command = 'scan', dir = '.'] = positional;

  const commands = {
@@ -17,7 +36,7 @@ const commands = {
  if (command === 'help' || command === '--help') {
  console.log(`claude-rank — SEO/GEO/AEO toolkit

- Usage: claude-rank <command> [directory|url] [--json]
+ Usage: claude-rank <command> [directory|url] [flags]

  Commands:
  scan Run core SEO scanner (default)
@@ -27,17 +46,28 @@ Commands:
  help Show this help message

  Flags:
- --json Output raw JSON (for programmatic use)
+ --json Output raw JSON (for programmatic use)
+ --single Scan only one page (skip multi-page crawl for URLs)
+ --pages N Max pages to crawl (default: 50, URL scanning only)
+ --report html Run all scanners and save HTML report to claude-rank-report.html
+ --threshold N Exit code 1 if score < N (for CI/CD pipelines)

  URL scanning:
- Pass a URL instead of a directory to scan a live page via HTTP.
+ Pass a URL instead of a directory to scan a live site via HTTP.
+ By default, crawls up to 50 pages following internal links.
+ Use --single to scan only the given URL without crawling.
  Only the "scan" command supports URL scanning.

  Examples:
  claude-rank scan ./my-project
  claude-rank scan https://savemrr.co
+ claude-rank scan https://savemrr.co --pages 10
+ claude-rank scan https://savemrr.co --single
  npx @houseofmvps/claude-rank geo .
  claude-rank scan ./site --json
+ claude-rank scan ./site --report html
+ claude-rank scan ./site --threshold 80
+ claude-rank scan . --report html --threshold 80
  `);
  process.exit(0);
  }
@@ -79,9 +109,11 @@ if (isUrl) {
  process.exit(1);
  }

- const { scanUrl } = await import(new URL('../tools/url-scanner.mjs', import.meta.url));
+ const { scanUrl, scanSite } = await import(new URL('../tools/url-scanner.mjs', import.meta.url));
  try {
- const result = await scanUrl(dir);
+ const result = singleFlag
+   ? await scanUrl(dir)
+   : await scanSite(dir, { maxPages });
  if (jsonFlag) {
  console.log(JSON.stringify(result, null, 2));
  } else {
@@ -93,12 +125,47 @@
  }
  } else {
  // Directory-based scanning
- const mod = await import(new URL(toolPath, import.meta.url));
  const targetDir = resolve(dir);

- if (command === 'schema') {
+ // --report html: run ALL scanners, generate HTML report
+ if (reportFlag === 'html') {
+   const { writeFileSync } = await import('node:fs');
+   const { generateHtmlReport } = await import(new URL('../tools/lib/report-generator.mjs', import.meta.url));
+
+   const seoMod = await import(new URL('../tools/seo-scanner.mjs', import.meta.url));
+   const geoMod = await import(new URL('../tools/geo-scanner.mjs', import.meta.url));
+   const aeoMod = await import(new URL('../tools/aeo-scanner.mjs', import.meta.url));
+
+   const seo = seoMod.scanDirectory(targetDir);
+   const geo = geoMod.scanDirectory(targetDir);
+   const aeo = aeoMod.scanDirectory(targetDir);
+
+   const html = generateHtmlReport({
+     seo, geo, aeo,
+     target: dir,
+     timestamp: new Date().toISOString(),
+   });
+
+   const outPath = resolve('claude-rank-report.html');
+   writeFileSync(outPath, html, 'utf-8');
+   console.log(`HTML report saved to ${outPath}`);
+
+   // Also print terminal summaries
+   console.log(formatSeoReport(seo));
+   console.log(formatGeoReport(geo));
+   console.log(formatAeoReport(aeo));
+
+   // Check threshold against the primary (SEO) score
+   if (thresholdFlag != null) {
+     const score = seo.scores?.seo ?? 0;
+     if (score < thresholdFlag) {
+       console.error(`Score ${score} is below threshold ${thresholdFlag}`);
+       process.exit(1);
+     }
+   }
+ } else if (command === 'schema') {
  // schema-engine exports detectSchema (per-file) and findHtmlFiles via html-parser.
- // Build a directory-level result by importing the html-parser helper and scanning each file.
+ const mod = await import(new URL(toolPath, import.meta.url));
  const { findHtmlFiles } = await import(new URL('../tools/lib/html-parser.mjs', import.meta.url));
  const { readFileSync } = await import('node:fs');
  const files = findHtmlFiles(targetDir);
@@ -116,11 +183,22 @@
  console.log(formatSchemaReport(results));
  }
  } else {
+ const mod = await import(new URL(toolPath, import.meta.url));
  const result = mod.scanDirectory(targetDir);
  if (jsonFlag) {
  console.log(JSON.stringify(result, null, 2));
  } else {
  console.log(formatters[command](result));
  }
+
+ // Check threshold
+ if (thresholdFlag != null) {
+   const scoreKey = command === 'scan' ? 'seo' : command;
+   const score = result.scores?.[scoreKey] ?? 0;
+   if (score < thresholdFlag) {
+     console.error(`Score ${score} is below threshold ${thresholdFlag}`);
+     process.exit(1);
+   }
+ }
  }
  }
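The flag handling this version adds to the CLI can be exercised in isolation. Below is a standalone sketch mirroring the parsing logic from the first hunk of this file; `parseCliArgs` is an illustrative name, not an export of the package, and it omits the `--report`/`--threshold` value capture for brevity.

```javascript
// Illustrative standalone version of the CLI flag parsing shown in the diff
// above (same filtering logic; parseCliArgs itself is hypothetical).
function parseCliArgs(args) {
  const jsonFlag = args.includes('--json');
  const singleFlag = args.includes('--single');
  const valueFlags = ['--report', '--threshold', '--pages']; // flags that consume the next token

  let maxPages = 50; // default crawl limit
  const pagesIdx = args.indexOf('--pages');
  if (pagesIdx !== -1 && args[pagesIdx + 1]) {
    const parsed = parseInt(args[pagesIdx + 1], 10);
    if (!isNaN(parsed) && parsed > 0) maxPages = parsed;
  }

  // Positionals: drop boolean flags, value flags, and each value flag's argument
  const positional = args.filter((a, i) =>
    a !== '--json' && a !== '--single' &&
    !valueFlags.includes(a) &&
    !(i > 0 && valueFlags.includes(args[i - 1])));
  const [command = 'scan', dir = '.'] = positional;

  return { jsonFlag, singleFlag, maxPages, command, dir };
}

console.log(parseCliArgs(['scan', './site', '--pages', '10', '--json']));
// { jsonFlag: true, singleFlag: false, maxPages: 10, command: 'scan', dir: './site' }
```

Note the filter must skip both the value flag and the token after it, which is why the package's version checks `args[i - 1]` rather than filtering by name alone.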
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@houseofmvps/claude-rank",
- "version": "1.2.1",
+ "version": "1.3.1",
  "description": "The most comprehensive SEO/GEO/AEO plugin for Claude Code. Audit, fix, and dominate search — traditional and AI.",
  "type": "module",
  "bin": {
@@ -35,3 +35,31 @@ Re-run aeo-scanner. Show before/after AEO score.
  - Target conversational long-tail queries ("how do I...", "what is the best...")
  - Keep primary answers under 29 words (Google voice search average)
  - Add People Also Ask patterns as H2/H3 questions throughout content
+
+ ## Phase 6: Search Console Submission
+
+ After deploying AEO fixes, submit to search engines to trigger rich result processing:
+
+ ### Google Search Console
+ 1. **Request indexing** for pages with new FAQ/HowTo/speakable schema — URL Inspection → Request Indexing
+ 2. **Check Rich Results** — Enhancements → FAQPage / HowTo / Breadcrumbs / Article
+    - Verify new schema is detected and valid (no errors)
+    - Common issues: missing `image` in Article, missing `acceptedAnswer` in FAQ
+ 3. **Test individual pages** — Use [Rich Results Test](https://search.google.com/test/rich-results) before and after fixes
+ 4. **Monitor Featured Snippets** — Performance → Search Appearance → filter by "Featured snippets"
+    - Track which pages win snippets after AEO optimization
+    - If pages lose snippets, check if answer length changed (40-60 words optimal)
+
+ ### Bing Webmaster Tools
+ 1. **Submit URLs** — URL Submission → submit all pages with new schema
+ 2. **Verify schema** — Bing supports FAQPage, HowTo, and speakable in its rich results
+ 3. **Enable IndexNow** — instant re-indexing after schema changes
+
+ ### Track Featured Snippet Wins
+ 1. In GSC → Performance → Search Appearance → "Featured snippets"
+ 2. Export the list of queries where your pages appear as featured snippets
+ 3. For queries where competitors hold the snippet, optimize those pages:
+    - Add a direct answer in the first 40-60 words after the question H2
+    - Use numbered lists for "how to" queries
+    - Use definition format ("X is...") for "what is" queries
+ 4. Recheck weekly — featured snippet ownership changes frequently