@ainyc/canonry 1.45.3 → 1.46.1

@@ -0,0 +1,37 @@
# Memory Patterns

## Per-Client State Template

Store in OpenClaw agent memory after each significant event:

```
Client: <business name>
Domain: <domain>
Project: <project slug>

Baseline (set <date>):
Overall cited rate: <X>% (<N>/<total> keyword-provider pairs)
Best provider: <provider> (<X>% cited)
Worst provider: <provider> (<X>% cited)
Top keyword: "<keyword>" (cited on <N>/<total> providers)
Worst keyword: "<keyword>" (cited on <N>/<total>)

Competitors:
<domain> — <trend description>

Content strategy:
<page type> drives <X>% of citations

Open items:
- <description>

Sweep history summary:
<date>: <X>% (<note>)
```

## Update Cadence

- **After each sweep:** Update cited rates, flag new regressions
- **After each fix:** Record what was changed, set monitoring flag
- **After each client interaction:** Update preferences, strategy notes
- **Weekly:** Summarize trend direction, update competitor notes
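
The sweep-history update can be scripted; a minimal sketch, where the `memory/<slug>.md` path and the `<date>: <X>% (<note>)` line format are assumptions taken from the template above:

```shell
# Hypothetical helper: append one sweep-history line to a per-client memory
# file. The memory/<slug>.md path and "<date>: <X>% (<note>)" format are
# assumptions mirroring the template above, not a canonry feature.
log_sweep() {
  rate="$1"; note="$2"; slug="$3"
  mkdir -p memory
  printf '%s: %s%% (%s)\n' "$(date +%Y-%m-%d)" "$rate" "$note" >> "memory/$slug.md"
}
# e.g.: log_sweep 42 "post-schema deploy" acme-roofing
```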
@@ -0,0 +1,52 @@
# Orchestration Workflows

## Workflow 1: New Client Baseline

Trigger: First sweep completes for a new project

Steps:
1. `canonry evidence <project> --format json` → get initial citation data
2. Compute baseline: cited rate, provider breakdown, top/bottom keywords
3. `npx @ainyc/aeo-audit "<domain>" --format json` → site readiness score
4. Identify top 3 gaps (uncited keywords with fixable site issues)
5. Generate onboarding report with baseline + action plan
6. Store baseline metrics in memory
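
Step 2 can be sketched against the plain-text `evidence` output; treat the ✓/✗ row format as an assumption about the output shape (it matches the sample shown in the analysis docs):

```shell
# Sketch: compute the baseline cited rate from `canonry evidence` text output.
# Assumes one "✓ cited ..." or "✗ not-cited ..." row per keyword on stdin.
baseline_rate() {
  awk '
    /✓ cited/     { cited++; total++ }
    /✗ not-cited/ { total++ }
    END { if (total) printf "%d/%d (%.0f%%)\n", cited, total, 100 * cited / total }'
}
# e.g.: canonry evidence my-project | baseline_rate
```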

## Workflow 2: Regression Response

Trigger: Comparison shows a decline, or the `regression.detected` webhook fires

Steps:
1. `canonry evidence <project> --format json` → current state
2. `canonry history <project> --keyword "<keyword>"` → trend for affected keyword
3. Check indexing: `canonry google coverage <project>` → is the page still indexed?
4. Check competitors: did a competitor gain the citation we lost?
5. Audit the page: `npx @ainyc/aeo-audit "<page-url>" --format json`
6. Diagnose the cause: indexing issue / content issue / competitive displacement
7. Recommend a fix with evidence
8. If content fix: generate a diff (schema, llms.txt, or content changes)
9. Update memory with the regression event + diagnosis

## Workflow 3: Weekly Review

Trigger: Scheduled (weekly, or on-demand)

Steps:
1. `canonry evidence <project> --format json` → current metrics
2. Compare to baseline/prior week from memory
3. Compute deltas: citations gained, lost, stable
4. Flag any new regressions not yet addressed
5. Check competitor movement
6. Generate summary with key changes + recommended next steps
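
Step 3's deltas can be computed with `comm` over the cited pairs from two sweeps; a minimal sketch (the one-pair-per-line, pre-sorted file format is an assumption, not canonry output):

```shell
# Sketch: given two sorted files of cited "keyword<TAB>provider" pairs
# (previous and current sweep; the file format is an assumption), count
# citations gained, lost, and stable between them.
sweep_delta() {
  prev="$1"; curr="$2"
  echo "gained: $(comm -13 "$prev" "$curr" | wc -l | tr -d ' ')"
  echo "lost: $(comm -23 "$prev" "$curr" | wc -l | tr -d ' ')"
  echo "stable: $(comm -12 "$prev" "$curr" | wc -l | tr -d ' ')"
}
```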

## Workflow 4: Content Gap Analysis

Trigger: User asks "why aren't we cited for X?" or multiple uncited keywords detected

Steps:
1. `canonry evidence <project> --keyword "<keyword>"` → confirm uncited
2. Check whether a relevant page exists on the domain
3. If no page: recommend content creation (topic, target keywords)
4. If a page exists: `npx @ainyc/aeo-audit "<page-url>"` → diagnose why uncited
5. Check schema completeness, llms.txt coverage, indexing status
6. Generate prioritized fix list
@@ -0,0 +1,34 @@
# Regression Playbook

## Detection

A regression is detected when a citation is lost between consecutive completed runs for the same project. Specifically: a keyword+provider pair that was cited in run N is no longer cited in run N+1.

## Triage

Classify the regression by severity:

| Severity | Criteria |
|---|---|
| **Critical** | Branded term lost on any provider |
| **High** | Top-performing keyword lost on primary provider |
| **Medium** | Non-branded keyword lost on one provider |
| **Low** | Keyword lost that was only marginally cited |
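
The table maps to a simple decision order; a hedged sketch (the yes/no flags are assumed inputs the agent would derive from sweep data; canonry does not expose them directly):

```shell
# Sketch mirroring the triage table above. Inputs are hypothetical yes/no
# flags derived from sweep data, checked from most to least severe.
triage_severity() {
  branded="$1"; top_on_primary="$2"; marginal="$3"
  if   [ "$branded" = yes ];        then echo Critical
  elif [ "$top_on_primary" = yes ]; then echo High
  elif [ "$marginal" = yes ];       then echo Low
  else echo Medium
  fi
}
# e.g.: triage_severity no yes no  → High
```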

## Diagnosis

For each regression, check causes in order:

1. **Competitor displacement** — Did a competitor domain appear in the citation for this keyword+provider? Check current run snapshots.
2. **Indexing loss** — Is the page still indexed? Check the Google Search Console integration or HTTP status.
3. **Content change** — Did the page content change significantly? Compare content hashes if available.
4. **Provider behavior change** — Did the provider change its response pattern for this query type?
5. **Unknown** — No clear cause identified. Flag for manual investigation.

## Response

1. Alert the client with specific data (keyword, provider, dates, evidence)
2. Recommend diagnostic steps based on the suspected cause
3. If actionable: generate a fix (schema update, content suggestion, indexing resubmission)
4. Set a monitoring flag to track whether the regression resolves
5. Update memory with the regression event and diagnosis
@@ -0,0 +1,67 @@
# Reporting Templates

## Weekly Report

```
# Weekly AEO Report: <project> (<date range>)

## Summary
- Cited rate: <X>% (Δ<+/-Y>% from last week)
- Regressions: <N> new, <N> resolved
- Gains: <N> new citations
- Providers monitored: <N>

## Key Changes
- <most important change with data>
- <second most important>
- <third>

## Regressions
| Keyword | Provider | Status | Suspected Cause |
|---------|----------|--------|-----------------|
| <keyword> | <provider> | New/Investigating/Resolved | <cause> |

## Gains
| Keyword | Provider | Position | Page |
|---------|----------|----------|------|
| <keyword> | <provider> | <N> | <url> |

## Competitor Watch
- <competitor>: <trend>

## Recommended Actions
1. <action with rationale>
2. <action>
3. <action>
```
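
The summary's delta line can be generated rather than hand-edited; a small sketch whose inputs are this week's and last week's cited-rate percentages:

```shell
# Sketch: render the weekly "Cited rate" summary line from two percentages
# (current week, previous week). Pure formatting; no canonry calls.
delta_line() {
  awk -v now="$1" -v prev="$2" 'BEGIN {
    printf "- Cited rate: %.0f%% (Δ%+.0f%% from last week)\n", now, now - prev }'
}
```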

## Monthly Report

```
# Monthly AEO Report: <project> (<month year>)

## Executive Summary
<2-3 sentence overview of the month>

## Metrics
| Metric | Start of Month | End of Month | Change |
|--------|----------------|--------------|--------|
| Overall cited rate | <X>% | <Y>% | <Δ>% |
| Keywords monitored | <N> | <N> | <Δ> |
| Active regressions | <N> | <N> | <Δ> |

## Provider Breakdown
| Provider | Cited Rate | Trend |
|----------|------------|-------|
| <provider> | <X>% | ↑/↓/→ |

## Fixes Deployed
| Date | Fix | Status | Impact |
|------|-----|--------|--------|
| <date> | <description> | Monitoring/Confirmed | <result> |

## Next Month Priorities
1. <priority>
2. <priority>
3. <priority>
```
@@ -0,0 +1,274 @@
---
name: canonry
description: "AEO (Answer Engine Optimization) monitoring and analysis using canonry CLI and aeo-audit tool. Use when: (1) running citation sweeps across AI providers (Gemini, ChatGPT, Claude, Perplexity); (2) auditing technical SEO with structured data validation; (3) implementing schema markup, sitemaps, llms.txt; (4) diagnosing indexing issues via Google Search Console and Bing Webmaster Tools; (5) optimizing content for AI readability and entity consistency. NOT for: general web development, content writing, PPC campaigns, or social media management."
metadata:
  {
    "openclaw":
      {
        "emoji": "📡",
        "requires": { "bins": ["canonry"] },
        "install":
          [
            {
              "id": "npm",
              "kind": "npm",
              "package": "canonry",
              "bins": ["canonry"],
              "label": "Install canonry globally",
              "command": "npm install -g canonry"
            },
            {
              "id": "npx",
              "kind": "npx",
              "package": "@ainyc/aeo-audit",
              "bins": ["aeo-audit"],
              "label": "Use aeo-audit via npx",
              "command": "npx @ainyc/aeo-audit@latest"
            }
          ]
      }
  }
---

# Canonry

Monitor and optimize site visibility across AI answer engines (Gemini, ChatGPT, Claude, Perplexity) and traditional search engines, using the `canonry` CLI for AEO monitoring and `aeo-audit` for technical SEO analysis.

## When to Use

✅ **USE this skill when:**

- Tracking which keyphrases earn citations (or lose them) across AI providers
- Running technical SEO audits with 14-factor scoring
- Implementing structured data (JSON-LD: LocalBusiness, FAQPage, Service)
- Diagnosing indexing gaps in Google Search Console / Bing Webmaster Tools
- Optimizing `llms.txt`, `llms-full.txt`, sitemaps, and robots.txt for AI crawlers
- Patching missing H1 tags, meta descriptions, and image alt text
- Submitting URLs to the Google Indexing API and Bing IndexNow
- Analyzing competitor citation patterns in AI answers

## When NOT to Use

❌ **DON'T use this skill when:**

- General WordPress development (use the `wordpress` skill if available)
- Content writing or copy creation (human-led task)
- Paid search/SEM campaigns (different specialty)
- Social media management or outreach
- Local business listing management (e.g., GBP, Yelp)
- Backlink building or outreach campaigns

## Core Philosophy

- **AI models are black boxes** — Measure citation outcomes; don't assume causality
- **Position, then wait** — Site changes take weeks or months to reflect in AI indexes; canonry tells us *when* that happens, not *if*
- **Signal over noise** — Trim keyphrase lists to high-intent queries; avoid granular targeting until base visibility exists
- **CLI-native, UI-optional** — Prefer API-driven changes over manual CMS clicks: faster, repeatable, auditable

## Toolchain

### canonry (AEO Monitoring)
```bash
# List projects
canonry project list

# Run a sweep (all providers)
canonry run <project> --wait

# Check per-phrase citation status
canonry evidence <project>

# Show latest run summary
canonry status <project>

# Add/remove keyphrases
canonry keyword add <project> "polyurea roof coating"
canonry keyword remove <project> "best roof coating for a warehouse"

# Submit URLs to Bing
canonry bing request-indexing <project> <url>

# Submit to the Google Indexing API
canonry google request-indexing <project> <url>
```

### aeo-audit (Technical SEO Analysis)
```bash
# Run an audit (JSON output)
npx @ainyc/aeo-audit@latest "https://example.com" --format json

# 14-factor scoring includes:
# - Structured Data (JSON-LD)
# - Content Depth
# - AI-Readable Content (llms.txt, llms-full.txt)
# - E-E-A-T Signals
# - FAQ Content
# - Citations & Authority Signals
# - Definition Blocks
# - Technical SEO (H1, alt text, meta)
```

### Google Search Console / Bing WMT
```bash
# GSC coverage summary
canonry google coverage <project>

# Bing coverage summary
canonry bing coverage <project>

# Force-refresh cached data
canonry google refresh <project>
canonry bing refresh <project>
```
124
+ ## Workflow
125
+
126
+ ### 1. Diagnose
127
+ ```bash
128
+ # Baseline AEO visibility
129
+ canonry run <project> --wait
130
+ canonry evidence <project>
131
+
132
+ # Technical SEO audit
133
+ npx @ainyc/aeo-audit@latest "https://client.com" --format json > audit.json
134
+ ```
135
+
136
+ ### 2. Prioritize
137
+ Gaps sorted by impact:
138
+ 1. **Missing H1** → immediate content patch
139
+ 2. **No structured data** → JSON‑LD injection
140
+ 3. **Thin content** → definition blocks ("What is…")
141
+ 4. **County‑level targeting** → refine after base visibility
142
+ 5. **E‑E‑A‑T signals** → Person schema, author tags (needs client input)
143
+
144
+ ### 3. Execute
145
+ - **Schema injection**: LocalBusiness + FAQPage JSON‑LD via site‑appropriate method (Elementor Custom Code, theme hooks, etc.)
146
+ - **Content patches**: H1, meta title/description, image alt text via REST API or CMS
147
+ - **AI‑readable files**: Upload `llms.txt`, `llms‑full.txt` to site root
148
+ - **Indexing requests**: Submit all URLs to Google Indexing API + Bing IndexNow
149
+ - **Keyphrase strategy**: Trim to 8‑12 high‑intent queries; remove noise
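
For the schema-injection step, a minimal `LocalBusiness` JSON-LD sketch; all values are placeholders, and `areaServed` echoes the county-level pattern described below:

```json
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Example Roofing Co",
  "url": "https://client.com",
  "areaServed": ["Oakland County", "Macomb County", "Wayne County"]
}
```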

### 4. Monitor
- Weekly canonry sweeps to track citation changes
- Correlate visibility shifts with deployment dates
- Watch for competitor displacement on keyphrases

### 5. Report
Clear, data-first summaries:
> “Lost `emergency dentist brooklyn` on Gemini — two competitors moved in. Here’s what to fix.”

## Common Patterns

### New Site (0 citations)
- Focus on indexing first: submit the sitemap to GSC/Bing, request indexing
- Implement base schema (LocalBusiness, Service)
- Create `llms.txt` with service-area details
- Trim keyphrases to 8-12 core queries
- Expect 4-8 weeks for first citations

### Established Site (regression)
- Compare canonry runs to identify when the loss occurred
- Check for recent competitor content or site changes
- Validate that schema is still present and error-free
- Re-submit affected URLs to the indexing APIs

### County-Level Targeting
```yaml
# Service areas in llms.txt / schema
Michigan:
  - Oakland County (Troy, Auburn Hills, Pontiac)
  - Macomb County (Sterling Heights, Shelby Township)
  - Wayne County (Detroit, Dearborn)
  - Lapeer County (HQ: Almont)

Florida:
  - Miami-Dade County (Miami, Coral Gables)
  - Broward County (Fort Lauderdale, Hollywood)
  - Palm Beach County (West Palm Beach, Boca Raton)
```
- Reference counties in the schema `areaServed` field and in `llms.txt`
- **Do not** create separate keyphrases per county until base visibility exists

### WordPress/Elementor Specifics
- REST API user with Application Passwords (`/wp-json/wp/v2/`)
- Elementor data patched via the `_elementor_data` meta field
- Schema injection via Elementor Pro Custom Code (`elementor_snippet` CPT)
- Yoast SEO title/description fields are often NOT REST-writable → manual WP Admin edit
- `wp-login.php` may be hidden (security plugin) → file uploads require manual WP File Manager

## Example: Full AEO Audit + Action Plan

```bash
# 1. Audit
npx @ainyc/aeo-audit@latest "https://client.com" --format json > audit.json

# 2. Parse the score
jq '.overallScore, .overallGrade' audit.json

# 3. Check the AEO baseline
canonry status client-project
canonry evidence client-project

# 4. Generate the action list
jq -r '.factors[] | select(.score < 70) | "- \(.name): \(.score)/100 (\(.grade)) - \(.recommendations[0])"' audit.json
```

## Boundaries & Safety

- **Never touch a live WordPress site without explicit approval**
- **Back up `~/.canonry/config.yaml` before any config edit**
- **Never fabricate citation data** — if a sweep hasn’t run, say so
- **Client data stays private** — the canonry repo is public; no real domains in issues
- **Respect API rate limits** — batch operations, avoid tight loops

## Output Templates

### Audit Summary
```
## AEO/SEO Audit — https://client.com

**Overall:** 66/100 (D)

**Top strengths (A/A+):**
- AI-Readable Content (100) — llms.txt, llms-full.txt present
- FAQ Content (100) — FAQPage schema detected
- AI Crawler Access (100) — robots.txt allows all bots

**Critical gaps (F):**
- Definition Blocks (0) — no "What is…" sections
- E-E-A-T Signals (45) — missing Person schema, author tags
- Citations & Authority (44) — no external references to industry sources

**Immediate actions:**
1. Add an H1 tag to the homepage (Technical SEO: 60/100)
2. Create a "What is polyurea?" section on /services/ (Definition Blocks: 0/100)
3. Submit all 5 URLs to Bing IndexNow (indexed: 2/5)
```

### Citation Report
```
## canonry sweep — client-project

**Run:** 2026-04-03T13:44Z (ID: 4a45ebfc...)

**Keyphrase visibility (12 tracked):**
✅ polyurea roof coating — 3/3 providers
✅ commercial roof coating — 2/3 providers
❌ polyurea roof coating Michigan — 0/3 (geo gap)
❌ commercial roofing contractor Michigan — 0/3 (geo gap)

**Changes since last sweep (2026-03-27):**
- Lost `flat roof coating Michigan` on Gemini (−1)
- Gained `industrial roof coating` on Claude (+1)
- No change on ChatGPT (stable)

**Next steps:**
- Build a Michigan location page (/michigan/)
- Add county-level references to llms.txt
- Re-sweep in 7 days
```

---

**Tools:** canonry v1.37+, @ainyc/aeo-audit v1.3+
**Reference:** [AINYC AEO Methodology](https://ainyc.ai/aeo-methodology)
@@ -0,0 +1,130 @@
# AEO Analysis: Interpreting Canonry Results

## What Citation Means

A "cited" keyword means the client's domain appeared in an AI provider's response when that query was asked. It does NOT mean:
- The AI recommended them positively
- The citation is prominent
- It will persist on the next sweep

A "not-cited" keyword means the AI answered without mentioning the client at all.

## Reading Evidence Output

```
✓ cited      AEO Agency NYC               ← branded/direct match
✓ cited      best plumber brooklyn
✗ not-cited  how to fix a leaky faucet    ← informational gap: no page for this topic
✗ not-cited  emergency plumber near me    ← competitive gap: others cited instead
```

### Keyword Categories

**Branded/direct keywords** (e.g., "[business name] [city]"):
- If cited: good — the entity is established for core queries
- If not cited: urgent — something is broken at a fundamental level (indexing, schema, llms.txt)

**Competitive keywords** (e.g., "best [service] [city]"):
- If not cited: check who IS cited — competitor analysis needed
- Harder wins; these require established authority and trust signals

**Informational/how-to keywords** (e.g., "how to [do X]"):
- If not cited: almost always a content gap — no page targeting this topic, or it's not indexed
- High-leverage — informational content positions a site as authoritative to AI models

## Using Analytics

### Citation Rate Trends (`--feature metrics`)
Shows the citation rate over time across providers. Use to identify:
- Improving or declining visibility trends
- Provider-specific performance differences
- Impact of content/indexing changes over time

**Key phrase normalization:** When new key phrases are added to a project mid-history, canonry automatically normalizes each time bucket to include only key phrases that existed before that bucket started. This prevents newly added (typically uncited) key phrases from creating a false drop in the citation rate trend. The chart displays dashed vertical annotation lines at the points where key phrases were added (e.g. "+3 kp"), and each bucket's tooltip shows the key phrase count ("kp") used for that bucket's calculation.

### Gap Analysis (`--feature gaps`)
Categorizes keywords as cited, gap (a competitor is cited but you're not), or uncited (nobody is cited). Priorities:
- **Gap keywords** are highest priority — competitors are winning these
- **Uncited keywords** may need content, or may be too broad

### Source Breakdown (`--feature sources`)
Shows which source categories AI models cite for your keywords. Helps identify:
- Whether competitors dominate specific categories
- Content format opportunities (FAQ, how-to, comparison pages)

## Diagnosing Citation Gaps

### Step 1: Check indexing first
Not cited ≠ bad content. Often the page simply isn't indexed yet.
```bash
canonry google coverage <project>
```
If key pages are "unknown to Google," submit them before drawing conclusions.

### Step 2: Check if content exists
Is there a page on the site targeting that keyword? If not, that's the gap — not a canonry or provider issue.

### Step 3: Check competitors
For competitive keywords, if others are cited and the client isn't:
- Do competitors have more specific, dedicated pages?
- Do they have stronger schema/structured data?
- Are they more established in the index?

Run `canonry evidence <project> --format json` and check `competitorOverlap` in the snapshots.

### Step 4: Check across providers
Gemini, OpenAI, Claude, and Perplexity may behave differently. One provider citing a domain while another doesn't is normal — each has its own knowledge base and update schedule.

### Step 5: Check analytics trends
```bash
canonry analytics <project> --feature gaps --window 30d
```
Look for patterns: are gaps growing or shrinking? Are new competitors appearing?

## Trend Interpretation

**Stable cited** — monitor for regressions; no action needed.

**New citation** (was not-cited, now cited) — a win. Correlate with what changed: new content, indexing, schema update.

**Regression** (was cited, now not-cited) — investigate immediately:
- Did a competitor page launch?
- Did the page get deindexed or go down?
- Did the model update?
- Check `canonry google deindexed <project>` for index losses

**Fluctuation** (cited in some runs, not others) — normal for competitive keywords. Track the trend over 5+ runs before drawing conclusions; AI answers are non-deterministic.
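
A fluctuating pair can be summarized in one line before judging it; a sketch, where the one-flag-per-run input format is an assumption rather than canonry output:

```shell
# Sketch: summarize how often a keyword+provider pair was cited across recent
# runs. Reads one "cited" / "not-cited" flag per line on stdin (format assumed).
stability() {
  awk '{ total++; if ($1 == "cited") n++ }
       END { printf "cited in %d of %d runs\n", n, total }'
}
# e.g.: printf 'cited\nnot-cited\ncited\n' | stability
```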

## What to Recommend

### Low overall citation (< 50%)
1. Audit indexing — `canonry google coverage <project>`
2. Submit unindexed pages to the Google Indexing API
3. Submit the sitemap to Bing WMT + send an IndexNow batch
4. Check core pages for schema (LocalBusiness / Organization / FAQPage)
5. Map uncited keywords to pages — which have no corresponding page?

### Branded terms not cited
Red flag. Check:
- Is the homepage indexed?
- Does `llms.txt` exist and list the business clearly?
- Does the schema include the exact brand name in the `name` field?

### Informational terms not cited
Content strategy play:
- Does a page targeting this topic exist? If not, create it.
- Is it indexed? If not, submit it.
- Is it structured for AI extraction? (FAQ schema, clear H2s, definition-style answers)

### Provider variance (cited on one, not others)
Expected — each provider has independent knowledge. Focus on the providers that matter most for the client's audience. Don't over-optimize for one provider at the expense of others.

## The AEO Timeline Reality

- Site changes → weeks or months to appear in sweeps (or never)
- Google indexing → 24–72h with the Indexing API, longer organically
- Bing indexing → hours with IndexNow, days without
- Model training updates → unknown schedule, outside our control

**Never say:** "Deploy this and re-run canonry to see if it worked."
**Always say:** "This positions the site correctly. Canonry will tell us if/when that pays off."