@ainyc/canonry 3.6.4 → 4.1.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -66,15 +66,15 @@ adapter. The fastest path is the `canonry mcp install` helper:
 
 ```bash
 canonry mcp install --client claude-desktop # or: cursor
- canonry mcp install --client claude-desktop --read-only # 33 read tools only
+ canonry mcp install --client claude-desktop --read-only # 45 read API tools only
 canonry mcp config --client codex # print snippet for unsupported clients
 ```
 
 `install` merges a `canonry` entry into the client's config, backs up the
 original, and is idempotent. Restart the client after install to pick it up.
 
- The adapter exposes 48 tools — projects, runs, snapshots, insights, health,
- keyword and competitor management, schedules, GSC and GA reads, and the
+ The adapter exposes 67 API tools — projects, runs, snapshots, insights, health,
+ query and competitor management, schedules, GSC and GA reads, and the
 config-as-code apply path. Auth and configuration are inherited from
 `~/.canonry/config.yaml`. See [`docs/mcp.md`](docs/mcp.md) for the full
 surface and safety rules.
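For clients the helper supports, the merged entry lands in the client's own config file. A minimal sketch of the result for Claude Desktop — the macOS path is that client's standard config location, while the `canonry-mcp` command name and `--read-only` passthrough are assumptions based on the adapter name this README uses:

```bash
# Inspect the merged entry after `canonry mcp install` (macOS path shown).
cat ~/Library/Application\ Support/Claude/claude_desktop_config.json
# Expected shape (assumed — the exact keys canonry writes may differ):
# {
#   "mcpServers": {
#     "canonry": { "command": "canonry-mcp", "args": ["--read-only"] }
#   }
# }
```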
@@ -91,7 +91,7 @@ Canonry's CLI and API are the agent interface. The optional `canonry-mcp` adapte
 ## Features
 
 - **Built-in AI agent (Aero).** Reads state, analyzes regressions, fires write tools (`run_sweep`, `dismiss_insight`, `update_schedule`, etc.), wakes up unprompted after runs. Backed by [`pi-agent-core`](https://github.com/badlogic/pi-mono) — 15+ LLM providers, streaming first.
- - **Agent-first.** Every CLI command supports `--format json`; every UI view has a matching API endpoint. An optional `canonry-mcp` stdio adapter exposes 48 tools to MCP clients like Claude Desktop and Codex.
+ - **Agent-first.** Every CLI command supports `--format json`; every UI view has a matching API endpoint. An optional `canonry-mcp` stdio adapter exposes 67 API tools to MCP clients like Claude Desktop and Codex.
 - **Multi-provider.** Query Gemini, OpenAI, Claude, Perplexity, and local LLMs from a single platform.
 - **Content opportunity engine.** Per-query recommendations typed by action (`create` / `expand` / `refresh` / `add-schema`) with auditable score breakdowns, drivers, and demand-source labels. Combines GSC ranking signals with competitor citation evidence so zero-traffic gaps still surface. Available via `canonry content targets / gaps / sources`, the matching API endpoints, and Aero's tool surface.
 - **Config-as-code.** Kubernetes-style YAML files. Version control your monitoring, let agents apply changes declaratively.
@@ -128,7 +128,7 @@ spec:
 canonicalDomain: example.com
 country: US
 language: en
- keywords:
+ queries:
 - best dental implants near me
 - emergency dentist open now
 competitors:
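The renamed `queries:` field flows through the same apply path. A sketch of pushing a manifest like this one declaratively — `POST /apply` is the route documented below, and the `/api/v1` prefix appears elsewhere in this README, but the port, auth header, and content handling are assumptions:

```bash
# Sketch: apply the manifest declaratively (base URL and bearer auth assumed).
curl -X POST http://localhost:8080/api/v1/apply \
  -H "Authorization: Bearer $CANONRY_API_KEY" \
  --data-binary @project.yaml
```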
@@ -156,17 +156,17 @@ Canonry is **agent-first** — every dashboard view has a matching API endpoint
 |--------|----------------|------------|
 | **Projects** | Create, read, update, delete projects; locations; export | `PUT /projects/{name}`, `GET /projects`, `GET /projects/{name}/export` |
 | **Apply** | Config-as-code — declarative multi-project upsert | `POST /apply` |
- | **Keywords / Competitors** | Per-project keyword and competitor management | `POST/DELETE /projects/{name}/keywords`, `/competitors` |
+ | **Queries / Competitors** | Per-project query and competitor management | `POST/DELETE /projects/{name}/queries`, `/competitors` |
 | **Runs** | Trigger, list, cancel, and inspect visibility sweeps | `POST /projects/{name}/runs`, `GET /runs`, `POST /runs/{id}/cancel` |
 | **Schedules** | Cron-based recurring sweeps | `GET/PUT /projects/{name}/schedule` |
- | **History / Snapshots** | Timeline + run diffs + per-keyword citation state | `GET /projects/{name}/timeline`, `/snapshots/diff`, `/history` |
+ | **History / Snapshots** | Timeline + run diffs + per-query citation state | `GET /projects/{name}/timeline`, `/snapshots/diff`, `/history` |
 | **Intelligence** | DB-backed insights + health snapshots + dismissal | `GET /projects/{name}/insights`, `/health`, `POST /insights/{id}/dismiss` |
 | **Content** | Action-typed content opportunities, gaps, and grounding-source map | `GET /projects/{name}/content/targets`, `/gaps`, `/sources` |
 | **Notifications** | Webhook subscriptions per project (agent or user-defined) | `GET/POST/DELETE /projects/{name}/notifications`, `POST /.../test` |
 | **Analytics** | Aggregated dashboard analytics | `GET /projects/{name}/analytics` |
 | **Google (GSC + OAuth)** | Search Console integration, OAuth flow, property selection, URL inspection | `/google/*`, `/projects/{name}/google/*` |
 | **Google Analytics (GA4)** | Traffic, social referrals, attribution, AI referrals | `/projects/{name}/ga/*` |
- | **Bing Webmaster** | Coverage, URL inspection, keyword stats | `/projects/{name}/bing/*` |
+ | **Bing Webmaster** | Coverage, URL inspection, keyword stats (Bing's term) | `/projects/{name}/bing/*` |
 | **WordPress** | Content publishing + site management integration | `/projects/{name}/wordpress/*` |
 | **CDP (ChatGPT browser provider)** | Chrome DevTools Protocol health and session status | `/cdp/*` |
 | **Settings / Auth / Telemetry** | Server config, API key management, opt-in telemetry | `/settings`, `/telemetry` |
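A sketch of two calls from the table — the routes are the documented ones, while the base URL, port, and auth are assumptions (the report endpoint elsewhere in this README uses the `/api/v1` prefix):

```bash
# List projects, then trigger a visibility sweep for one (name is a placeholder).
curl -s http://localhost:8080/api/v1/projects
curl -s -X POST http://localhost:8080/api/v1/projects/<name>/runs
```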
@@ -194,10 +194,10 @@ Canonry ships a bundled `canonry-setup` skill that turns Aero (or any Claude-pow
 
 The skill covers the end-to-end answer-engine optimization loop:
 
- - **AEO monitoring.** Running citation sweeps across Gemini, ChatGPT, Claude, and Perplexity via `canonry run` / `canonry evidence` / `canonry status`, including how to interpret per-phrase citation state and regressions.
+ - **AEO monitoring.** Running citation sweeps across Gemini, ChatGPT, Claude, and Perplexity via `canonry run` / `canonry evidence` / `canonry status`, including how to interpret per-query citation state and regressions.
 - **Technical SEO audits.** Driving the companion [`@ainyc/aeo-audit`](https://www.npmjs.com/package/@ainyc/aeo-audit) CLI for 14-factor scoring — structured data (JSON-LD), content depth, AI-readable files (`llms.txt`, `llms-full.txt`), E-E-A-T signals, FAQ blocks, definition blocks, H1/alt/meta hygiene.
 - **Indexing diagnosis.** Google Search Console and Bing Webmaster Tools coverage, URL inspection, and one-shot submissions via `canonry google request-indexing` / `canonry bing request-indexing`.
- - **Schema & content execution.** Patterns for injecting LocalBusiness/FAQPage JSON-LD, writing `llms.txt` with service-area detail, trimming keyphrase lists to high-intent queries, and handling WordPress/Elementor specifics (REST API, Application Passwords, Elementor Custom Code).
+ - **Schema & content execution.** Patterns for injecting LocalBusiness/FAQPage JSON-LD, writing `llms.txt` with service-area detail, trimming query lists to high-intent queries, and handling WordPress/Elementor specifics (REST API, Application Passwords, Elementor Custom Code).
 - **Diagnose → prioritize → execute → monitor → report workflow.** Opinionated defaults for new sites (0 citations), regressions on established sites, and county-level targeting — with guardrails like "never fabricate citation data" and "back up `~/.canonry/config.yaml` before editing".
 
 See [`skills/canonry-setup/SKILL.md`](skills/canonry-setup/SKILL.md) plus the reference files under [`skills/canonry-setup/references/`](skills/canonry-setup/references/) (`canonry-cli.md`, `aeo-analysis.md`, `indexing.md`, `wordpress-integration.md`) for the full playbook. Aero loads the same material natively, so anything an external agent can do through the skill, Aero can do from the CLI or dashboard command bar.
@@ -29,7 +29,7 @@ If the server isn't running, start it with `canonry serve`.
 | `canonry evidence <project>` | Raw citation evidence from sweeps |
 | `canonry insights <project>` | AI-generated insights and findings |
 | `canonry health <project>` | Health snapshot with visibility scores |
- | `canonry timeline <project>` | Per-keyword citation history over time |
+ | `canonry timeline <project>` | Per-query citation history over time |
 | `canonry export <project>` | Full project data export |
 
 ### Auditing
@@ -45,8 +45,8 @@ npx @ainyc/aeo-audit <url> --format json
 |---------|---------|
 | `canonry project list` | List all projects |
 | `canonry project create <name> --domain <domain>` | Create a new project |
- | `canonry keyword add <project> <keyword>...` | Add keywords to track |
- | `canonry keyword list <project>` | List tracked keywords |
+ | `canonry query add <project> <query>...` | Add queries to track |
+ | `canonry query list <project>` | List tracked queries |
 
 ## Workflow Patterns
 
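Put together, a first-session sketch using only the commands in the tables above (project name and values are placeholders):

```bash
canonry project create acme-dental --domain example.com
canonry query add acme-dental "best dental implants near me" "emergency dentist open now"
canonry query list acme-dental --format json   # every command supports --format json
```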
@@ -59,7 +59,7 @@ npx @ainyc/aeo-audit <url> --format json
 
 ### Investigation workflow
 
- 1. Identify affected keywords from insights
+ 1. Identify affected queries from insights
 2. Pull evidence: `canonry evidence <project> --format json`
 3. Check timeline for trends: `canonry timeline <project> --format json`
 4. If structural issues suspected, run audit: `npx @ainyc/aeo-audit <url> --format json`
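The same four steps as a copy-paste sketch — all commands are the documented ones; `<project>` and `<url>` are placeholders:

```bash
canonry insights <project> --format json    # 1. identify affected queries
canonry evidence <project> --format json    # 2. pull raw evidence
canonry timeline <project> --format json    # 3. check the trend
npx @ainyc/aeo-audit <url> --format json    # 4. audit if structural issues suspected
```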
@@ -16,7 +16,7 @@ This file stores client-specific context accumulated over time. Update it as you
 
 ## Watchlist
 
- <!-- Keywords, competitors, or trends to monitor closely -->
+ <!-- Queries, competitors, or trends to monitor closely -->
 
 ## Notes
 
@@ -20,10 +20,10 @@ When a project has GA4 connected, traffic is a first-class signal alongside cita
 
 ### What to Prioritize
 1. Branded term regressions (losing citations for your own name = urgent)
- 2. Competitive keyword losses (competitor gained where you lost)
- 3. Informational gap expansion (new uncited keywords appearing)
+ 2. Competitive query losses (competitor gained where you lost)
+ 3. Informational gap expansion (new uncited queries appearing)
 4. Indexing issues (pages not indexed can't be cited)
- 5. Content optimization (improve cited rate on partially-cited keywords)
+ 5. Content optimization (improve cited rate on partially-cited queries)
 
 ### What NOT to Do
 - Don't promise fixes will appear in the next sweep (AEO changes take weeks/months)
@@ -44,7 +44,7 @@ Detailed playbooks live alongside this file. Read them on demand when the task m
 | File | Read when |
 |---|---|
 | `references/orchestration.md` | Planning a multi-step or recurring workflow (baseline, weekly review, content-gap analysis) |
- | `references/regression-playbook.md` | A keyword lost its citation and you need to triage and respond |
+ | `references/regression-playbook.md` | A query lost its citation and you need to triage and respond |
 | `references/memory-patterns.md` | Deciding whether to remember a fact in agent memory or re-query canonry |
 | `references/reporting.md` | Producing a client-facing weekly or monthly summary |
 | `references/wordpress-elementor-mcp.md` | Editing WordPress pages with the Elementor MCP integration |
@@ -13,13 +13,13 @@ Aero ships with a built-in durable notes store — the `canonry_memory_set`, `ca
 
 | Scope | Examples | Home |
 |---|---|---|
- | **Project state** | Baselines, historical regressions, citation rates per keyword/provider, recent insights, sweep history, audit trail | Canonry DB — query via CLI / API / read tools |
+ | **Project state** | Baselines, historical regressions, citation rates per query/provider, recent insights, sweep history, audit trail | Canonry DB — query via CLI / API / read tools |
 | **Operator facts** | Personal preferences, non-observable context ("content lead is Sarah", "migrating off Webflow next quarter"), tone/voice preferences the operator confirmed | Aero memory (`canonry_memory_set`) |
 | **Session scratch** | "I just tried X and it failed", intermediate reasoning, turn-local state | Nowhere — let it die with the session |
 
 ## How to read project state from canonry
 
- Prefer Aero's read tools (`canonry_project_overview`, `canonry_health_latest`, `canonry_timeline_get`, `canonry_insights_list`, `canonry_keywords_list`, `canonry_competitors_list`, `canonry_run_get`) over shelling out, but the CLI exists for operators too:
+ Prefer Aero's read tools (`canonry_project_overview`, `canonry_health_latest`, `canonry_timeline_get`, `canonry_insights_list`, `canonry_queries_list`, `canonry_competitors_list`, `canonry_run_get`) over shelling out, but the CLI exists for operators too:
 
 ```bash
 canonry status <project> --format json
@@ -61,6 +61,6 @@ canonry agent memory forget <project> --key <k>
 ## Bad remember candidates
 
 - Anything canonry already tracks (runs, insights, citation rates, schedules). Query it.
- - Turn-local state that's useful for one follow-up and then noise ("user just asked about keyword Y").
+ - Turn-local state that's useful for one follow-up and then noise ("user just asked about query Y").
 - Raw evidence or long transcripts — persist a conclusion, not a dump.
 - Unvalidated guesses. Memory isn't a place to think aloud; it's a place to record things you're willing to act on next session.
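A sketch of both sides of that rule — `canonry agent memory forget` appears in the hunk header above; a matching `set` spelling with `--key`/`--value` flags is an assumption:

```bash
# Persist a conclusion you're willing to act on next session (flags assumed).
canonry agent memory set <project> --key q1-baseline --value "cited 14/32, Gemini strongest"
# Drop turn-local noise rather than letting it accumulate.
canonry agent memory forget <project> --key stale-note
```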
@@ -11,9 +11,9 @@ Trigger: First sweep completes for a new project
 
 Steps:
 1. `canonry evidence <project> --format json` → get initial citation data
- 2. Compute baseline: cited rate, provider breakdown, top/bottom keywords
+ 2. Compute baseline: cited rate, provider breakdown, top/bottom queries
 3. `npx @ainyc/aeo-audit "<domain>" --format json` → site readiness score
- 4. Identify top 3 gaps (uncited keywords with fixable site issues)
+ 4. Identify top 3 gaps (uncited queries with fixable site issues)
 5. Generate onboarding report with baseline + action plan
 6. Store baseline metrics in memory
 
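Steps 1–3 as a sketch — the commands are the documented ones; the `jq` reduction stands in for "compute baseline" and assumes a `results[].cited` shape the evidence JSON may not actually use:

```bash
canonry evidence <project> --format json > baseline.json
jq '[.results[] | select(.cited)] | length' baseline.json    # cited count (field names assumed)
npx @ainyc/aeo-audit "<domain>" --format json > audit.json   # site readiness score
```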
@@ -23,7 +23,7 @@ Trigger: Comparison shows decline or webhook fires regression.detected
 
 Steps:
 1. `canonry evidence <project> --format json` → current state
- 2. `canonry history <project> --keyword "<keyword>"` → trend for affected keyword
+ 2. `canonry history <project>` → trend for affected query
 3. Check indexing: `canonry google coverage <project>` → is the page still indexed?
 4. Check competitor: did a competitor gain the citation we lost?
 5. Audit the page: `npx @ainyc/aeo-audit "<page-url>" --format json`
@@ -46,12 +46,12 @@ Steps:
 
 ## Workflow 4: Content Gap Analysis
 
- Trigger: User asks "why aren't we cited for X?" or multiple uncited keywords detected
+ Trigger: User asks "why aren't we cited for X?" or multiple uncited queries detected
 
 Steps:
- 1. `canonry evidence <project> --keyword "<keyword>"` → confirm uncited
+ 1. `canonry evidence <project>` → confirm uncited
 2. Check if a relevant page exists on the domain
- 3. If no page: recommend content creation (topic, target keywords)
+ 3. If no page: recommend content creation (topic, target queries)
 4. If page exists: `npx @ainyc/aeo-audit "<page-url>"` → diagnose why uncited
 5. Check schema completeness, llms.txt coverage, indexing status
 6. Generate prioritized fix list
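Steps 1 and 4 as a sketch — the `jq` filter assumes a `results[].cited` field in the evidence JSON; the page URL is a placeholder:

```bash
canonry evidence <project> --format json | jq '.results[] | select(.cited | not)'  # confirm uncited
npx @ainyc/aeo-audit "<page-url>" --format json                                    # diagnose why
```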
@@ -1,13 +1,13 @@
 ---
 name: regression-playbook
- description: Detection → triage → diagnosis → response for lost citations. Read when investigating why a keyword lost its citation.
+ description: Detection → triage → diagnosis → response for lost citations. Read when investigating why a query lost its citation.
 ---
 
 # Regression Playbook
 
 ## Detection
 
- A regression is detected when a citation is lost between consecutive completed runs for the same project. Specifically: a keyword+provider pair that was cited in run N is no longer cited in run N+1.
+ A regression is detected when a citation is lost between consecutive completed runs for the same project. Specifically: a query+provider pair that was cited in run N is no longer cited in run N+1.
 
 ## Triage
 
@@ -16,15 +16,15 @@ Classify the regression by severity:
 | Severity | Criteria |
 |---|---|
 | **Critical** | Branded term lost on any provider |
- | **High** | Top-performing keyword lost on primary provider |
- | **Medium** | Non-branded keyword lost on one provider |
- | **Low** | Keyword lost that was only marginally cited |
+ | **High** | Top-performing query lost on primary provider |
+ | **Medium** | Non-branded query lost on one provider |
+ | **Low** | Query lost that was only marginally cited |
 
 ## Diagnosis
 
 For each regression, check causes in order:
 
- 1. **Competitor displacement** — Did a competitor domain appear in the citation for this keyword+provider? Check current run snapshots.
+ 1. **Competitor displacement** — Did a competitor domain appear in the citation for this query+provider? Check current run snapshots.
 2. **Indexing loss** — Is the page still indexed? Check Google Search Console integration or HTTP status.
 3. **Content change** — Did the page content change significantly? Compare content hashes if available.
 4. **Provider behavior change** — Did the provider change its response pattern for this query type?
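The detection rule defined above is mechanical enough to sketch: diff the cited query+provider pairs between two runs. Only the `canonry evidence` command is documented; the field names in the evidence JSON are assumptions:

```bash
# Pairs cited in run N (previous.json, saved from the prior sweep) but
# missing in run N+1 are the regressions.
canonry evidence <project> --format json > current.json
jq -r '.results[] | select(.cited) | "\(.query)\t\(.provider)"' previous.json | sort > prev.txt
jq -r '.results[] | select(.cited) | "\(.query)\t\(.provider)"' current.json  | sort > curr.txt
comm -23 prev.txt curr.txt    # lines only in prev.txt = lost citations
```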
@@ -32,7 +32,7 @@ For each regression, check causes in order:
 
 ## Response
 
- 1. Alert the client with specific data (keyword, provider, dates, evidence)
+ 1. Alert the client with specific data (query, provider, dates, evidence)
 2. Recommend diagnostic steps based on suspected cause
 3. If actionable: generate fix (schema update, content suggestion, indexing resubmission)
 4. Set monitoring flag to track if the regression resolves
@@ -15,7 +15,7 @@ canonry report <project> --output dist/aeo.html # custom path
 canonry report <project> --format json # raw payload, useful for narrating in chat
 ```
 
- The HTML is self-contained (inline CSS + SVG charts, no network dependencies) and covers: executive summary, per-keyword × per-provider citation matrix, competitor landscape, AI citation sources, GSC + GA4 performance, social and AI referrals, indexing health, citations trend, prioritized insights, and recommended next steps. Same payload is available via `GET /api/v1/projects/<name>/report` and the `canonry_report` MCP tool — use `--format json` when you want to summarize specific numbers in a thread instead of attaching the file.
+ The HTML is self-contained (inline CSS + SVG charts, no network dependencies) and covers: executive summary, per-query × per-provider citation matrix, competitor landscape, AI citation sources, GSC + GA4 performance, social and AI referrals, indexing health, citations trend, prioritized insights, and recommended next steps. Same payload is available via `GET /api/v1/projects/<name>/report` and the `canonry_report` MCP tool — use `--format json` when you want to summarize specific numbers in a thread instead of attaching the file.
 
 Behaviors worth knowing before narrating numbers from the report:
 - `executiveSummary.citationRate` is always sourced from the latest visibility run (completed **or** partial), so it tracks the scorecard table even when the latest sweep had a flaky provider.
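A sketch of pulling that headline number instead of attaching the file — the `executiveSummary.citationRate` path is quoted from the bullet above; the rest of the payload shape is assumed:

```bash
canonry report <project> --format json | jq '.executiveSummary.citationRate'
```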
@@ -42,14 +42,14 @@ The hand-rolled templates below are still the right call when the user wants a f
 - <third>
 
 ## Regressions
- | Keyword | Provider | Status | Suspected Cause |
- |---------|----------|--------|-----------------|
- | <keyword> | <provider> | New/Investigating/Resolved | <cause> |
+ | Query | Provider | Status | Suspected Cause |
+ |-------|----------|--------|-----------------|
+ | <query> | <provider> | New/Investigating/Resolved | <cause> |
 
 ## Gains
- | Keyword | Provider | Position | Page |
- |---------|----------|----------|------|
- | <keyword> | <provider> | <N> | <url> |
+ | Query | Provider | Position | Page |
+ |-------|----------|----------|------|
+ | <query> | <provider> | <N> | <url> |
 
 ## Competitor Watch
 - <competitor>: <trend>
@@ -72,7 +72,7 @@ The hand-rolled templates below are still the right call when the user wants a f
 | Metric | Start of Month | End of Month | Change |
 |--------|---------------|--------------|--------|
 | Overall cited rate | <X>% | <Y>% | <Δ>% |
- | Keywords monitored | <N> | <N> | <Δ> |
+ | Queries monitored | <N> | <N> | <Δ> |
 | Active regressions | <N> | <N> | <Δ> |
 
 ## Provider Breakdown
@@ -12,7 +12,7 @@ You are **Aero** — an AEO analyst. You help operators understand how AI answer
 - **Evidence over opinion.** Numbers before interpretation. "You lost the ChatGPT citation for 'roof repair phoenix' between March 28 and April 2" beats "your visibility decreased."
 - **Proactive, not passive.** Regressions don't wait to be asked about. Surface them when you spot them. Flag emerging competitors the moment they appear in citations you own.
 - **Honest about uncertainty.** When the data is ambiguous, say so. Don't manufacture confidence. Don't promise fixes will appear in the next sweep — AEO changes take weeks.
- - **Cautious with writes.** Sweeps cost quota. Schedules shape downstream notifications. Keywords define what gets tracked. Confirm intent before mutating state the operator will notice.
+ - **Cautious with writes.** Sweeps cost quota. Schedules shape downstream notifications. Queries define what gets tracked. Confirm intent before mutating state the operator will notice.
 - **Canonry is the source of truth.** Read state back; never maintain a parallel copy in your head. Conclusions age, the data doesn't.
 
 ## Voice
@@ -38,7 +38,7 @@ Agent-first open-source AEO (Answer Engine Optimization) operating platform. Tra
 
 ## When to Use
 
- - Tracking keyphrase citations across AI providers
+ - Tracking query citations across AI providers
 - Running technical SEO audits (14‑factor scoring)
 - Implementing structured data (JSON‑LD)
 - Diagnosing indexing gaps via Google Search Console / Bing Webmaster Tools
@@ -57,16 +57,16 @@ Agent-first open-source AEO (Answer Engine Optimization) operating platform. Tra
 A canonry engagement follows the same loop regardless of project size:
 
 1. **Diagnose** — Run a baseline sweep (`canonry run <project> --wait`) and a technical audit (`npx @ainyc/aeo-audit@latest <url> --format json`). See `references/aeo-analysis.md` for interpretation.
- 2. **Prioritize** — Triage by impact: indexing gaps → schema gaps → content gaps → keyphrase strategy. Branded-term losses are urgent.
+ 2. **Prioritize** — Triage by impact: indexing gaps → schema gaps → content gaps → query strategy. Branded-term losses are urgent.
 3. **Execute** — Apply fixes via the canonry CLI or platform integrations. See `references/canonry-cli.md` for the full command catalog and `references/wordpress-integration.md` for the WordPress workflow.
 4. **Monitor** — Re-run sweeps weekly. Correlate visibility shifts with deployments and competitor moves.
- 5. **Report** — Lead with data, not interpretation: "Lost `<keyword>` on Gemini between <date> and <date> — two competitors moved in. Here's what to fix." For a one-command client-facing summary, run `canonry report <project>` to generate a self-contained HTML bundle (executive summary, citation scorecard, competitor landscape, GSC + GA4 performance, insights). Same payload is available via `--format json` and the `canonry_report` MCP tool.
+ 5. **Report** — Lead with data, not interpretation: "Lost `<query>` on Gemini between <date> and <date> — two competitors moved in. Here's what to fix." For a one-command client-facing summary, run `canonry report <project>` to generate a self-contained HTML bundle (executive summary, citation scorecard, competitor landscape, GSC + GA4 performance, insights). Same payload is available via `--format json` and the `canonry_report` MCP tool.
 
 ## Common Starting Points
 
- - **New site, 0 citations** → submit to GSC/Bing first; basic LocalBusiness/Service schema; `llms.txt`; trim to 8–12 high-intent keyphrases. See `references/indexing.md`.
+ - **New site, 0 citations** → submit to GSC/Bing first; basic LocalBusiness/Service schema; `llms.txt`; trim to 8–12 high-intent queries. See `references/indexing.md`.
 - **Established site, regression** → diff canonry runs to find the loss window; verify schema is intact; resubmit affected URLs. See `references/aeo-analysis.md`.
- - **Multi-county targeting** → reference counties in `areaServed` schema and `llms.txt`; do not split into per-county keyphrases until base visibility exists.
+ - **Multi-county targeting** → reference counties in `areaServed` schema and `llms.txt`; do not split into per-county queries until base visibility exists.
 
 ## Google Analytics 4
 
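The GA4 routes surface as `/projects/{name}/ga/*` in the endpoint table; the CLI spelling below is an assumption inferred from the `attribution --trend` reference later in this README, not a confirmed command:

```bash
# Sketch only — subcommand name and flags assumed.
canonry ga attribution <project> --trend --format json
```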
@@ -2,12 +2,12 @@
 
 ## What Citation Means
 
- A "cited" keyword means the client's domain appeared in an AI provider's response when that query was asked. It does NOT mean:
+ A "cited" query means the client's domain appeared in an AI provider's response when that query was asked. It does NOT mean:
 - The AI recommended them positively
 - The citation is prominent
 - It will persist on the next sweep
 
- A "not-cited" keyword means the AI answered without mentioning the client at all.
+ A "not-cited" query means the AI answered without mentioning the client at all.
 
 ## Reading Evidence Output
 
@@ -18,17 +18,17 @@ A "not-cited" keyword means the AI answered without mentioning the client at all
 ✗ not-cited emergency plumber near me ← competitive gap: others cited instead
 ```
 
- ### Keyword Categories
+ ### Query Categories
 
- **Branded/direct keywords** (e.g., "[business name] [city]"):
+ **Branded/direct queries** (e.g., "[business name] [city]"):
 - If cited: good — entity is established for core queries
 - If not cited: urgent — something broken at a fundamental level (indexing, schema, llms.txt)
 
- **Competitive keywords** (e.g., "best [service] [city]"):
+ **Competitive queries** (e.g., "best [service] [city]"):
 - If not cited: check who IS cited — competitor analysis needed
 - Harder wins; require established authority and trust signals
 
- **Informational/how-to keywords** (e.g., "how to [do X]"):
+ **Informational/how-to queries** (e.g., "how to [do X]"):
 - If not cited: almost always a content gap — no page targeting this topic, or it's not indexed
 - High-leverage — informational content positions a site as authoritative to AI models
 
@@ -40,15 +40,15 @@ Shows citation rate over time across providers. Use to identify:
 - Provider-specific performance differences
 - Impact of content/indexing changes over time
 
- **Key phrase normalization:** When new key phrases are added to a project mid-history, canonry automatically normalizes each time bucket to only include key phrases that existed before that bucket started. This prevents newly-added (typically uncited) key phrases from creating a false drop in the citation rate trend. The chart displays dashed vertical annotation lines at points where key phrases were added (e.g. "+3 kp"), and each bucket's tooltip shows the key phrase count ("kp") used for that bucket's calculation.
+ **Query normalization:** When new queries are added to a project mid-history, canonry automatically normalizes each time bucket to only include queries that existed before that bucket started. This prevents newly-added (typically uncited) queries from creating a false drop in the citation rate trend. The chart displays dashed vertical annotation lines at points where queries were added (e.g. "+3 q"), and each bucket's tooltip shows the query count ("q") used for that bucket's calculation.
 
 ### Gap Analysis (`--feature gaps`)
- Categorizes keywords as cited, gap (competitor cited but you're not), or uncited (nobody cited). Priorities:
- - **Gap keywords** are highest priority — competitors are winning these
- - **Uncited keywords** may need content or may be too broad
+ Categorizes queries as cited, gap (competitor cited but you're not), or uncited (nobody cited). Priorities:
+ - **Gap queries** are highest priority — competitors are winning these
+ - **Uncited queries** may need content or may be too broad
 
 ### Source Breakdown (`--feature sources`)
- Shows which source categories AI models cite for your keywords. Helps identify:
+ Shows which source categories AI models cite for your queries. Helps identify:
 - Whether competitors dominate specific categories
 - Content format opportunities (FAQ, how-to, comparison pages)
 
@@ -62,10 +62,10 @@ canonry google coverage <project>
 If key pages are "unknown to Google," submit them before drawing conclusions.
 
 ### Step 2: Check if content exists
- Is there a page on the site targeting that keyword? If not, that's the gap — not a canonry or provider issue.
+ Is there a page on the site targeting that query? If not, that's the gap — not a canonry or provider issue.
 
 ### Step 3: Check competitors
- For competitive keywords, if others are cited and the client isn't:
+ For competitive queries, if others are cited and the client isn't:
 - Do competitors have more specific, dedicated pages?
 - Do they have stronger schema/structured data?
 - Are they more established in the index?
@@ -108,7 +108,7 @@ GA4 also covers the inverse case: a *gain* on `attribution --trend` for the AI c
 - Did the model update?
 - Check `canonry google deindexed <project>` for index losses
 
- **Fluctuation** (cited in some runs, not others) — normal for competitive keywords. Track trend over 5+ runs before drawing conclusions. AI answers are non-deterministic.
+ **Fluctuation** (cited in some runs, not others) — normal for competitive queries. Track trend over 5+ runs before drawing conclusions. AI answers are non-deterministic.
 
 ## What to Recommend
 
@@ -117,7 +117,7 @@ GA4 also covers the inverse case: a *gain* on `attribution --trend` for the AI c
 2. Submit unindexed pages to Google Indexing API
 3. Submit sitemap to Bing WMT + send IndexNow batch
 4. Check core pages for schema (LocalBusiness / Organization / FAQPage)
- 5. Map uncited keywords to pages — which have no corresponding page?
+ 5. Map uncited queries to pages — which have no corresponding page?
 
 ### Branded terms not cited
 Red flag. Check:
@@ -71,7 +71,7 @@ Run statuses: `queued` → `running` → `completed` / `failed` / `partial`
 ## Citation Data
 
 ```bash
- canonry evidence <project> # per-keyword cited/not-cited
+ canonry evidence <project> # per-query cited/not-cited
 canonry evidence <project> --format json # JSON output
 canonry history <project> # audit trail
 canonry export <project> --include-results # export as YAML
@@ -80,7 +80,7 @@ canonry backfill answer-visibility --project <name> --format json
 ```
 
 Output shows:
- - `✓ cited` — domain appeared in AI response for that keyword
+ - `✓ cited` — domain appeared in AI response for that query
 - `✗ not-cited` — domain did not appear
 - Summary: `Cited: X / Y`
 
@@ -127,14 +127,14 @@ canonry backfill insights <project> # backfill insights for all com
 canonry backfill insights <project> # backfill insights for all com
 canonry backfill insights <project> --from-run <id> --to-run <id> # backfill a range
 ```
- ## Keywords & Competitors
+ ## Queries & Competitors
 
 ```bash
- canonry keyword add <project> "phrase one" "phrase two"
- canonry keyword remove <project> "phrase"
- canonry keyword list <project>
- canonry keyword import <project> keywords.txt
- canonry keyword generate <project> --provider gemini --count 10 --save
+ canonry query add <project> "phrase one" "phrase two"
+ canonry query remove <project> "phrase"
+ canonry query list <project>
+ canonry query import <project> queries.txt
+ canonry query generate <project> --provider gemini --count 10 --save
 
 canonry competitor add <project> competitor1.com competitor2.com
 canonry competitor list <project>