@ainyc/canonry 1.46.0 → 1.48.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -2,244 +2,88 @@
2
2
 
3
3
  [![npm version](https://img.shields.io/npm/v/@ainyc/canonry)](https://www.npmjs.com/package/@ainyc/canonry) [![License: FSL-1.1-ALv2](https://img.shields.io/badge/License-FSL--1.1--ALv2-blue.svg)](https://fsl.software/) [![Node.js >= 20](https://img.shields.io/badge/node-%3E%3D20-brightgreen)](https://nodejs.org)
4
4
 
5
- **Agent-first AEO monitoring.** Canonry tracks how AI answer engines (ChatGPT, Gemini, Claude, Perplexity, and others) cite or omit your website, and it's built so that AI agents and automation pipelines can operate it end-to-end without human intervention.
5
+ Canonry is an agent-first AEO platform powered by [OpenClaw](https://openclaw.ai). It tracks how ChatGPT, Gemini, Claude, and Perplexity cite your site, detects regressions, diagnoses causes, coordinates fixes, and reports results.
6
6
 
7
- Every capability is exposed through a stable REST API and a machine-readable CLI. An AI agent can install canonry, configure providers, create projects, trigger visibility sweeps, and act on the results. All from a terminal, all scriptable, all JSON-parseable. The web dashboard is there for human analysts, but nothing requires it.
8
-
9
- AEO (Answer Engine Optimization) is the practice of ensuring your content is accurately represented in AI-generated answers. As search shifts from links to synthesized responses, monitoring your visibility across answer engines is essential.
7
+ AEO (Answer Engine Optimization) is about making sure your content shows up accurately in AI-generated answers. As search shifts from links to synthesized responses, you need something that can monitor, analyze, and act across these engines continuously.
10
8
 
11
9
  ![Canonry Dashboard](docs/images/dashboard.png)
12
10
 
13
- ## Quick Start
14
-
15
- ```bash
16
- npm install -g @ainyc/canonry
17
- canonry init
18
- canonry serve
19
- ```
20
-
21
- Open [http://localhost:4100](http://localhost:4100) to access the optional web dashboard.
22
-
23
- ### Zero-touch setup for agents and CI
24
-
25
- No interactive prompts required. Pass keys as flags or environment variables and canonry configures itself:
11
+ ## Getting Started
26
12
 
27
13
  ```bash
28
- # flags
29
- canonry init --gemini-key <key> --openai-key <key>
30
-
31
- # environment variables
32
- GEMINI_API_KEY=... OPENAI_API_KEY=... canonry init
33
-
34
- # headless bootstrap (env vars only, no prompts, idempotent)
35
- canonry bootstrap
36
- ```
37
-
38
- ### Agent workflow example
39
-
40
- A coding agent (Claude Code, Cursor, Copilot, or any MCP-equipped tool) can run an entire monitoring cycle in a single script:
41
-
42
- ```bash
43
- # 1. Install and bootstrap
44
14
  npm install -g @ainyc/canonry
45
- GEMINI_API_KEY=$KEY canonry bootstrap
46
- canonry start # background daemon
47
-
48
- # 2. Define a project from a YAML spec
49
- canonry apply canonry.yaml --format json # declarative, version-controlled
50
-
51
- # 3. Trigger a sweep and wait for results
52
- canonry run my-project --wait --format json
53
-
54
- # 4. Inspect results programmatically
55
- canonry status my-project --format json # visibility scores
56
- canonry evidence my-project --format json # citation evidence
57
- canonry history my-project --format json # timeline for trend analysis
58
- ```
59
-
60
- Every command supports `--format json` so agents can parse output directly. Error messages include the failed command, the reason, and a suggested fix, so there's no guesswork.
61
-
62
- ## Why Agent-First?
63
-
64
- Canonry is designed so that AI agents and automation pipelines can drive it without human interaction.
65
-
66
- - **No browser required.** The CLI and API cover 100% of functionality.
67
- - **Deterministic setup.** `canonry bootstrap` is idempotent and non-interactive. Run it in CI, in a container, or from an agent with zero human input.
68
- - **Config-as-code.** Kubernetes-style YAML files that agents can generate, version-control, and apply. No forms to fill out.
69
- - **Structured output everywhere.** `--format json` on every command. Agents parse results, not humans.
70
- - **Stable API contract.** Endpoints never change paths or methods. Agents can hard-code routes safely.
71
- - **Actionable errors.** Every failure includes the command that failed, why it failed, and what to do next.
72
-
73
- Start with [docs/README.md](docs/README.md) for the full architecture, roadmap, active plans, testing, deployment, and ADR index.
74
-
75
- ## Skills for AI Agents
76
-
77
- Canonry ships with an [OpenClaw](https://clawhub.dev) skill that teaches AI agents how to use it. The skill covers CLI commands, provider setup, interpreting results, indexing workflows, and troubleshooting.
78
-
79
- **Claude Code** picks it up automatically from `.claude/skills/canonry-setup/` when you open this repo. No configuration needed.
80
-
81
- **ClawHub** hosts the same skill at [clawhub.dev](https://clawhub.dev) so any MCP-equipped agent (Cursor, Windsurf, Copilot, etc.) can discover and install it. Search for `canonry` on ClawHub, or point your agent at the `skills/canonry-setup/` directory in this repo.
82
-
83
- Once an agent has the skill loaded, it can set up canonry, run sweeps, interpret citation evidence, and troubleshoot errors without you having to explain any of it.
84
-
85
- ## Features
86
-
87
- - **Multi-provider monitoring** -- query Gemini, OpenAI, Claude, Perplexity, and local LLMs (Ollama, LM Studio, or any OpenAI-compatible endpoint) from a single tool.
88
- - **Agent-first surfaces** -- the REST API is canonical, the CLI supports `--format json` on every command, and the web dashboard is an optional visualization layer.
89
- - **Config-as-code** -- manage projects with Kubernetes-style YAML files. Version control your monitoring setup and let agents apply changes declaratively.
90
- - **Self-hosted** -- runs locally with SQLite. No cloud account, no external dependencies beyond the LLM API keys you choose to configure.
91
- - **Project-scoped location context** -- define named locations per project, set a default, and run explicit or all-location sweeps without making keywords location-owned.
92
- - **WordPress publishing workflows** -- manage WordPress pages over REST, compare live vs staging, and generate manual handoffs for `llms.txt`, schema, and staging pushes.
93
- - **Scheduled monitoring** -- set up cron-based recurring runs to track citation changes over time.
94
- - **Webhook notifications** -- get alerted when your citation status changes.
95
- - **Audit logging** -- full history of every action taken through any surface.
96
-
97
- ## CLI Reference
98
-
99
- All commands support `--format json` for machine-readable output.
100
-
101
- ### Setup
102
-
103
- ```bash
104
- canonry init [--force] # Initialize config and database (interactive)
105
- canonry init --gemini-key <key> # Initialize non-interactively (flags or env vars)
106
- canonry init --perplexity-key <key> # Any combination of provider flags works
107
- canonry bootstrap [--force] # Bootstrap config/database from env vars only
108
- canonry serve [--port 4100] [--base-path /prefix/] # Start server (foreground)
109
- canonry start [--port 4100] [--base-path /prefix/] # Start server (background daemon)
110
- canonry stop # Stop the background daemon
111
- canonry settings # View active provider and quota settings
112
- ```
113
-
114
- Non-interactive `init` flags: `--gemini-key`, `--openai-key`, `--claude-key`, `--perplexity-key`, `--local-url`, `--local-model`, `--local-key`, `--google-client-id`, `--google-client-secret`. Falls back to `GEMINI_API_KEY`, `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `PERPLEXITY_API_KEY`, `LOCAL_BASE_URL`, `LOCAL_MODEL`, `LOCAL_API_KEY`, `GOOGLE_CLIENT_ID`, `GOOGLE_CLIENT_SECRET` env vars.
115
-
116
- ### Projects
117
-
118
- ```bash
119
- canonry project create <name> --domain <domain> --country US --language en
120
- canonry project list
121
- canonry project show <name>
122
- canonry project delete <name>
123
- canonry project add-location <name> --label <label> --city <city> --region <region> --country <country>
124
- canonry project locations <name>
125
- canonry project set-default-location <name> <label>
126
- canonry project remove-location <name> <label>
15
+ canonry agent setup
127
16
  ```
128
17
 
129
- ### Keywords and Competitors
18
+ One command. It installs [OpenClaw](https://openclaw.ai), configures the agent's LLM, sets up monitoring providers, and seeds the workspace. Interactive prompts guide you through everything; for fully automated setup, pass flags instead:
130
19
 
131
20
  ```bash
132
- canonry keyword add <project> "keyword one" "keyword two"
133
- canonry keyword list <project>
134
- canonry keyword import <project> <file.csv>
135
- canonry keyword generate <project> --provider gemini [--count 10] [--save]
136
-
137
- canonry competitor add <project> competitor1.com competitor2.com
138
- canonry competitor list <project>
21
+ canonry agent setup --gemini-key <key> --agent-key <key> --format json
139
22
  ```
140
23
 
141
- ### Visibility Runs
24
+ Then start the agent and server:
142
25
 
143
26
  ```bash
144
- canonry run <project> # Run all configured providers
145
- canonry run <project> --provider gemini # Run a single provider
146
- canonry run <project> --location sf # Run with one configured project location
147
- canonry run <project> --all-locations # Fan out one run per configured location
148
- canonry run <project> --no-location # Explicitly skip location context
149
- canonry run <project> --wait # Trigger and wait for completion
150
- canonry run --all # Trigger runs for all projects
151
- canonry run show <id> # Show run details and snapshots
152
- canonry runs <project> # List past runs
153
- canonry status <project> # Current visibility summary
154
- canonry evidence <project> # View citation evidence
155
- canonry history <project> # Per-keyword citation timeline
156
- canonry export <project> # Export project as YAML
27
+ canonry serve &
28
+ canonry agent start
157
29
  ```
158
30
 
159
- ### Config-as-Code
31
+ Open [http://localhost:4100](http://localhost:4100) for the web dashboard. The agent runs in the background, ready to orchestrate sweeps and act on results.
160
32
 
161
- ```bash
162
- canonry apply canonry.yaml # Single project
163
- canonry apply projects/*.yaml # Multiple files
164
- canonry apply multi-projects.yaml # Multi-doc YAML (---separated)
165
- ```
33
+ ### Monitoring only (no agent)
166
34
 
167
- ### Scheduling and Notifications
35
+ If you just want the monitoring layer without the autonomous agent:
168
36
 
169
37
  ```bash
170
- canonry schedule set <project> --preset daily # Use a preset
171
- canonry schedule set <project> --cron "0 8 * * *" # Use a cron expression
172
- canonry schedule set <project> --preset daily --provider gemini openai
173
- canonry schedule show <project>
174
- canonry schedule enable <project>
175
- canonry schedule disable <project>
176
- canonry schedule remove <project>
177
-
178
- canonry notify add <project> --webhook https://hooks.slack.com/... --events run.completed,citation.changed
179
- canonry notify list <project>
180
- canonry notify remove <project> <id>
181
- canonry notify test <project> <id>
182
- canonry notify events # List available event types
38
+ npm install -g @ainyc/canonry
39
+ canonry init
40
+ canonry serve
183
41
  ```
184
42
 
185
- Schedule presets: `daily`, `weekly`, `twice-daily`, `daily@HH`, `weekly@DAY`.
43
+ ## What the Agent Does
186
44
 
187
- ### Provider Settings
45
+ The Canonry agent ("Aero") is an [OpenClaw](https://openclaw.ai)-powered operator:
188
46
 
189
- ```bash
190
- canonry settings # Show all providers and quotas
191
- canonry settings provider gemini --api-key <key>
192
- canonry settings google --client-id <id> --client-secret <secret>
193
- canonry settings provider local --base-url http://localhost:11434/v1 --model llama3
194
- canonry settings provider openai --api-key <key> --max-per-day 1000 --max-per-minute 20
195
- canonry settings provider perplexity --api-key <key>
196
- ```
47
+ - **Monitors** visibility across providers by running scheduled sweeps and tracking citation changes over time
48
+ - **Analyzes** regressions and emerging opportunities, correlating visibility shifts with site changes
49
+ - **Operates** across your content, schema markup, indexing submissions, and `llms.txt` to coordinate fixes and generate action-oriented reports
50
+ - **Remembers** client context across sessions: canonical domains, historical patterns, known issues
197
51
 
198
- Quota flags: `--max-concurrent`, `--max-per-minute`, `--max-per-day`.
52
+ Every action the agent takes goes through the same CLI and API available to everyone. No special SDK, no hidden state.
199
53
 
200
- ### WordPress
54
+ ## Features
201
55
 
202
- ```bash
203
- canonry wordpress connect <project> --url https://example.com --user admin
204
- canonry wordpress status <project>
205
- canonry wordpress pages <project> --staging
206
- canonry wordpress page <project> about --live
207
- canonry wordpress create-page <project> --title "About" --slug about --content-file ./about.html
208
- canonry wordpress update-page <project> about --title "About Us" --content-file ./about.html
209
- canonry wordpress set-meta <project> about --title "SEO title" --description "Meta description"
210
- canonry wordpress audit <project> --staging
211
- canonry wordpress diff <project> about
212
- canonry wordpress schema <project> about
213
- canonry wordpress set-schema <project> about --json '{"@type":"FAQPage"}'
214
- canonry wordpress llms-txt <project>
215
- canonry wordpress set-llms-txt <project> --content "User-agent: *"
216
- canonry wordpress staging status <project>
217
- canonry wordpress staging push <project>
218
- ```
56
+ - **Agent-operated.** The OpenClaw agent monitors, analyzes, and acts autonomously. Humans supervise via the dashboard.
57
+ - **Multi-provider.** Query Gemini, OpenAI, Claude, Perplexity, and local LLMs from a single platform.
58
+ - **Config-as-code.** Kubernetes-style YAML files. Version control your monitoring, let agents apply changes declaratively.
59
+ - **Self-hosted.** Runs locally with SQLite. No cloud account required.
60
+ - **Full API parity.** REST API and CLI cover 100% of functionality. `--format json` on every command.
61
+ - **Integrations.** Google Search Console, Google Analytics 4, Bing Webmaster Tools, WordPress.
62
+ - **Location-aware.** Project-scoped locations for geo-targeted monitoring.
63
+ - **Scheduled monitoring.** Cron-based recurring runs with webhook notifications.
219
64
 
220
- WordPress automation uses REST + Application Passwords. `set-schema`, `set-llms-txt`, and `staging push` are manual-assist commands by design. See [docs/wordpress-setup.md](docs/wordpress-setup.md) for the full workflow and constraints.
65
+ ## How It Works
221
66
 
222
- ### Telemetry
67
+ The agent uses the same CLI and API that humans do. A typical cycle:
223
68
 
224
69
  ```bash
225
- canonry telemetry status # Show telemetry status
226
- canonry telemetry enable # Enable anonymous telemetry
227
- canonry telemetry disable # Disable anonymous telemetry
70
+ canonry apply canonry.yaml --format json # define projects from YAML specs
71
+ canonry run my-project --wait --format json # sweep all providers
72
+ canonry evidence my-project --format json # inspect citation evidence
73
+ canonry insights my-project --format json # get agent-generated analysis
74
+ canonry health my-project --format json # visibility health snapshot
228
75
  ```
229
76
 
230
- Telemetry is automatically disabled when `CANONRY_TELEMETRY_DISABLED=1`, `DO_NOT_TRACK=1`, or a CI environment is detected.
77
+ The agent runs these automatically on a schedule, detects changes, and generates reports. You can run the same commands manually at any time.
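Because every command supports `--format json`, an agent can consume results without scraping text. A minimal sketch of that pattern, using an inline stand-in for the output of `canonry health` (the field names here are illustrative assumptions, not the documented schema):

```shell
# Hypothetical sample of `canonry health my-project --format json` output;
# in a real pipeline this would come from the command itself.
cat > /tmp/health.json <<'EOF'
{"project": "my-project", "visibilityScore": 72, "providers": {"gemini": {"cited": true}, "openai": {"cited": false}}}
EOF

# Pull out the score and any providers that failed to cite the site.
python3 - <<'EOF'
import json

data = json.load(open("/tmp/health.json"))
print(data["visibilityScore"])                                    # 72
missing = sorted(p for p, v in data["providers"].items() if not v["cited"])
print(",".join(missing))                                          # openai
EOF
```

The same shape works for `status`, `evidence`, and `insights`: parse once, branch on fields, never parse human-formatted text.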
231
78
 
232
79
  ## Config-as-Code
233
80
 
234
- Define your monitoring projects in version-controlled YAML files:
235
-
236
81
  ```yaml
237
82
  apiVersion: canonry/v1
238
83
  kind: Project
239
84
  metadata:
240
85
  name: my-project
241
86
  spec:
242
- displayName: My Project
243
87
  canonicalDomain: example.com
244
88
  country: US
245
89
  language: en
@@ -253,177 +97,90 @@ spec:
253
97
  - openai
254
98
  - claude
255
99
  - perplexity
256
- - local
257
- locations:
258
- - label: sf
259
- city: San Francisco
260
- region: California
261
- country: US
262
- timezone: America/Los_Angeles
263
- - label: nyc
264
- city: New York
265
- region: New York
266
- country: US
267
- timezone: America/New_York
268
- defaultLocation: sf
269
100
  ```
270
101
 
271
- Locations are project-scoped run context. Keywords remain project-wide; choose the location at run time via the default location or the `canonry run` location flags.
272
-
273
- Apply with the CLI or the API. Multiple projects can live in one file separated by `---`, or pass multiple files:
274
-
275
102
  ```bash
276
103
  canonry apply canonry.yaml
277
104
  canonry apply project-a.yaml project-b.yaml
278
105
  ```
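Multiple projects can live in one file separated by `---`. A sketch of such a multi-doc file, assuming the same `canonry/v1` schema shown above:

```yaml
# projects.yaml -- two projects in one file, separated by `---`
apiVersion: canonry/v1
kind: Project
metadata:
  name: project-a
spec:
  canonicalDomain: a.example.com
  country: US
  language: en
---
apiVersion: canonry/v1
kind: Project
metadata:
  name: project-b
spec:
  canonicalDomain: b.example.com
  country: US
  language: en
```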
279
106
 
280
- ```bash
281
- curl -X POST http://localhost:4100/api/v1/apply \
282
- -H "Authorization: Bearer cnry_..." \
283
- -H "Content-Type: application/yaml" \
284
- --data-binary @canonry.yaml
285
- ```
107
+ ## API
286
108
 
287
- Applied project YAML is declarative input. Runtime project/run data lives in the database, while local authentication and provider credentials live in `~/.canonry/config.yaml`.
109
+ All endpoints are served under `/api/v1/`. Authenticate with `Authorization: Bearer cnry_...`.
110
+ The canonical, always-up-to-date surface is served at `GET /api/v1/openapi.json` (no auth required).
111
+
112
+ Canonry is **agent-first** — every dashboard view has a matching API endpoint and CLI command. The surface is grouped by domain:
113
+
114
+ | Domain | What it covers | Highlights |
115
+ |--------|----------------|------------|
116
+ | **Projects** | Create, read, update, delete projects; locations; export | `PUT /projects/{name}`, `GET /projects`, `GET /projects/{name}/export` |
117
+ | **Apply** | Config-as-code — declarative multi-project upsert | `POST /apply` |
118
+ | **Keywords / Competitors** | Per-project keyword and competitor management | `POST/DELETE /projects/{name}/keywords`, `/competitors` |
119
+ | **Runs** | Trigger, list, cancel, and inspect visibility sweeps | `POST /projects/{name}/runs`, `GET /runs`, `POST /runs/{id}/cancel` |
120
+ | **Schedules** | Cron-based recurring sweeps | `GET/PUT /projects/{name}/schedule` |
121
+ | **History / Snapshots** | Timeline + run diffs + per-keyword citation state | `GET /projects/{name}/timeline`, `/snapshots/diff`, `/history` |
122
+ | **Intelligence** | DB-backed insights + health snapshots + dismissal | `GET /projects/{name}/insights`, `/health`, `POST /insights/{id}/dismiss` |
123
+ | **Notifications** | Webhook subscriptions per project (agent or user-defined) | `GET/POST/DELETE /projects/{name}/notifications`, `POST /.../test` |
124
+ | **Analytics** | Aggregated dashboard analytics | `GET /projects/{name}/analytics` |
125
+ | **Google (GSC + OAuth)** | Search Console integration, OAuth flow, property selection, URL inspection | `/google/*`, `/projects/{name}/google/*` |
126
+ | **Google Analytics (GA4)** | Traffic, social referrals, attribution, AI referrals | `/projects/{name}/ga/*` |
127
+ | **Bing Webmaster** | Coverage, URL inspection, keyword stats | `/projects/{name}/bing/*` |
128
+ | **WordPress** | Content publishing + site management integration | `/projects/{name}/wordpress/*` |
129
+ | **CDP (ChatGPT browser provider)** | Chrome DevTools Protocol health and session status | `/cdp/*` |
130
+ | **Settings / Auth / Telemetry** | Server config, API key management, opt-in telemetry | `/settings`, `/telemetry` |
131
+ | **OpenAPI** | Full spec | `GET /openapi.json` *(no auth)* |
132
+
133
+ For the complete list of ~118 endpoints with request/response schemas, query `GET /api/v1/openapi.json` or browse the per-domain route handlers under [`packages/api-routes/src/`](packages/api-routes/src/).
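An agent can discover the surface from the spec before hard-coding anything. A sketch of that discovery step, where the inline JSON is a tiny stand-in for the real `GET /api/v1/openapi.json` response (the real spec has ~118 paths):

```shell
# Stand-in for the OpenAPI spec response; illustrative paths only.
SPEC='{"paths": {"/projects": {"get": {}}, "/apply": {"post": {}}, "/runs": {"get": {}}}}'

# List available paths, as an agent might before choosing routes.
echo "$SPEC" | python3 -c 'import json, sys; print("\n".join(sorted(json.load(sys.stdin)["paths"])))'
# /apply
# /projects
# /runs
```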
288
134
 
289
135
  ## Provider Setup
290
136
 
291
- Canonry queries multiple AI answer engines. Configure the providers you want during `canonry init`, or add them later via the settings page or API.
292
-
293
- For authentication material, the local config file at `~/.canonry/config.yaml` is the source of truth. Provider API keys, Google OAuth client credentials, and Google OAuth tokens are stored there with file mode `0600`.
294
-
295
- ### Gemini
296
-
297
- Get an API key from [Google AI Studio](https://aistudio.google.com/apikey).
298
-
299
- ### Google Search Console
300
-
301
- Create OAuth client credentials in Google Cloud, then store them locally:
302
-
303
- ```bash
304
- canonry settings google --client-id <id> --client-secret <secret>
305
- ```
306
-
307
- After that, connect a project with:
308
-
309
- ```bash
310
- canonry google connect <project> --type gsc
311
- ```
312
-
313
- The web dashboard now supports the same flow:
314
-
315
- - Configure Google OAuth once on the Settings page.
316
- - Open a project and generate the Google consent link for that canonical domain.
317
- - Select the matching Search Console property in the project dashboard.
318
- - Queue syncs, inspect URLs, review inspection history, and review deindexed pages from the same project view.
319
-
320
- ### WordPress
321
-
322
- Connect a project-scoped WordPress site with an Application Password:
323
-
324
- ```bash
325
- canonry wordpress connect <project> \
326
- --url https://example.com \
327
- --user admin \
328
- --staging-url https://staging.example.com \
329
- --default-env staging
330
- ```
137
+ Configure providers during `canonry init`, via the web dashboard at `/settings`, or with the CLI:
331
138
 
332
- Canonry can automate page reads/writes, audits, and live-vs-staging diffs through the REST API. `llms.txt`, schema injection, and WP STAGING push remain manual-assist workflows. Full setup instructions live in [docs/wordpress-setup.md](docs/wordpress-setup.md).
139
+ | Provider | Key source | CLI flag |
140
+ |----------|-----------|----------|
141
+ | Gemini | [aistudio.google.com/apikey](https://aistudio.google.com/apikey) | `--gemini-key` |
142
+ | OpenAI | [platform.openai.com/api-keys](https://platform.openai.com/api-keys) | `--openai-key` |
143
+ | Claude | [console.anthropic.com](https://console.anthropic.com/settings/keys) | `--claude-key` |
144
+ | Perplexity | [perplexity.ai/settings/api](https://www.perplexity.ai/settings/api) | `--perplexity-key` |
145
+ | Local LLMs | Any OpenAI-compatible endpoint (Ollama, LM Studio, vLLM) | `--local-url` |
333
146
 
334
- ### OpenAI
147
+ Integration setup guides: [Google Search Console](docs/google-search-console-setup.md) | [Google Analytics](docs/google-analytics-setup.md) | [Bing Webmaster](docs/bing-webmaster-setup.md) | [WordPress](docs/wordpress-setup.md)
335
148
 
336
- Get an API key from [platform.openai.com](https://platform.openai.com/api-keys).
149
+ ## Skills
337
150
 
338
- ### Claude
151
+ The agent learns how to operate canonry through bundled [OpenClaw skills](https://clawhub.dev) that cover CLI commands, provider setup, analysis workflows, and troubleshooting. Skills are seeded into the agent workspace during `canonry agent setup`.
339
152
 
340
- Get an API key from [console.anthropic.com](https://console.anthropic.com/settings/keys).
153
+ **Claude Code** also picks up the skill automatically from `.claude/skills/canonry-setup/` when you open this repo. **ClawHub** hosts the same skill at [clawhub.dev](https://clawhub.dev) for any MCP-equipped agent.
341
154
 
342
- ### Perplexity
343
-
344
- Get an API key from [perplexity.ai/settings/api](https://www.perplexity.ai/settings/api). Perplexity uses its Sonar model family with built-in web search, so citation results reflect live search grounding.
345
-
346
- ```bash
347
- canonry settings provider perplexity --api-key <key>
348
- ```
349
-
350
- Available models: `sonar` (default), `sonar-pro`, `sonar-reasoning`, `sonar-reasoning-pro`.
155
+ ## Deployment
351
156
 
352
- ### Local LLMs
157
+ See **[docs/deployment.md](docs/deployment.md)** for local, reverse proxy, sub-path, Tailscale, systemd, and Docker guides.
353
158
 
354
- Any OpenAI-compatible endpoint works -- Ollama, LM Studio, llama.cpp, vLLM, and similar tools. Configure via `canonry init`, the settings page, or the CLI:
159
+ ### Docker
355
160
 
356
161
  ```bash
357
- canonry settings provider local --base-url http://localhost:11434/v1
358
- canonry settings provider local --base-url http://localhost:11434/v1 --model llama3
359
- ```
360
-
361
- The base URL is the only required field. API key is optional (most local servers don't need one).
362
-
363
- > **Note:** Unless your local model has web search capabilities, responses will be based solely on its training data. Cloud providers (Gemini, OpenAI, Claude) use live web search to ground their answers, which produces more accurate citation results. Local LLMs are best used for comparing how different models perceive your brand without real-time search context.
364
-
365
- ## API
366
-
367
- All endpoints are served under `/api/v1/`. Authenticate with a bearer token:
368
-
369
- ```
370
- Authorization: Bearer cnry_...
162
+ docker build -t canonry .
163
+ docker run --rm -p 4100:4100 -e GEMINI_API_KEY=your-key -v canonry-data:/data canonry
371
164
  ```
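For restart-safe setups, the same run can be expressed as a Compose file. A sketch, assuming the published `arberx/canonry:latest` image and the default port:

```yaml
# docker-compose.yml sketch. Keep a single replica: SQLite does not
# support concurrent writers, and /data must persist across restarts.
services:
  canonry:
    image: arberx/canonry:latest
    ports:
      - "4100:4100"
    environment:
      GEMINI_API_KEY: your-key
    volumes:
      - canonry-data:/data
volumes:
  canonry-data:
```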
372
165
 
373
- Key endpoints:
166
+ Published images: [Docker Hub](https://hub.docker.com/repository/docker/arberx/canonry) | [GHCR](https://github.com/ainyc/canonry/pkgs/container/canonry)
374
167
 
375
- | Method | Path | Description |
376
- |--------|------|-------------|
377
- | `PUT` | `/api/v1/projects/{name}` | Create or update a project |
378
- | `POST` | `/api/v1/projects/{name}/runs` | Trigger a visibility sweep |
379
- | `GET` | `/api/v1/projects/{name}/timeline` | Per-keyword citation history |
380
- | `GET` | `/api/v1/projects/{name}/snapshots/diff` | Compare two runs |
381
- | `POST` | `/api/v1/apply` | Config-as-code apply |
382
- | `GET` | `/api/v1/openapi.json` | OpenAPI spec (no auth required) |
168
+ ### Railway
383
169
 
384
- ## Web Dashboard
170
+ [![Deploy on Railway](https://railway.com/button.svg)](https://railway.com/deploy/ENziH9?referralCode=0vODBs&utm_medium=integration&utm_source=template&utm_campaign=generic)
385
171
 
386
- The bundled web dashboard provides five views:
172
+ Click deploy, add a volume at `/data`, generate a domain. No env vars required to start. Configure providers via the dashboard.
387
173
 
388
- - **Overview** -- portfolio-level visibility scores across all projects with sparkline trends.
389
- - **Project** -- command center with score gauges, keyword evidence tables, and competitor analysis.
390
- - **Runs** -- history of all visibility sweeps with per-provider breakdowns.
391
- - **Settings** -- provider configuration, scheduling, and notification management.
392
- - **Setup** -- guided wizard for first-time onboarding.
174
+ ### Render
393
175
 
394
- Access it at [http://localhost:4100](http://localhost:4100) after running `canonry serve`.
176
+ Create a Web Service with the Docker runtime and attach a persistent disk at `/data`. Health check path: `/health`.
395
177
 
396
178
  ## Requirements
397
179
 
398
180
  - Node.js >= 20
399
- - At least one provider API key to run visibility sweeps (configurable after startup via the dashboard or CLI)
400
- - A C++ toolchain for building `better-sqlite3` native bindings (only needed if prebuilt binaries aren't available for your platform)
401
-
402
- ### Native dependency setup
403
-
404
- Canonry uses `better-sqlite3` for its embedded database. Prebuilt binaries are downloaded automatically for most platforms, but if `npm install` fails with a `node-gyp` error, you need to install build tools:
181
+ - At least one provider API key (configurable after startup)
405
182
 
406
- **macOS:**
407
- ```bash
408
- xcode-select --install
409
- ```
410
-
411
- **Debian / Ubuntu:**
412
- ```bash
413
- sudo apt-get install -y python3 make g++
414
- ```
415
-
416
- **Alpine Linux (Docker):**
417
- ```bash
418
- apk add --no-cache python3 make g++ gcc musl-dev
419
- ```
420
-
421
- **Windows:**
422
- ```bash
423
- npm install -g windows-build-tools
424
- ```
425
-
426
- If you're running in a minimal Docker image or CI environment without these tools, the install will fail. See the [better-sqlite3 troubleshooting guide](https://github.com/WiseLibs/better-sqlite3/blob/master/docs/troubleshooting.md) for additional help.
183
+ If `npm install` fails with `node-gyp` errors, install build tools for `better-sqlite3`: `xcode-select --install` (macOS), `apt-get install python3 make g++` (Debian), or see the [troubleshooting guide](https://github.com/WiseLibs/better-sqlite3/blob/master/docs/troubleshooting.md).
427
184
 
428
185
  ## Development
429
186
 
@@ -431,119 +188,14 @@ If you're running in a minimal Docker image or CI environment without these tool
431
188
  git clone https://github.com/ainyc/canonry.git
432
189
  cd canonry
433
190
  pnpm install
434
- pnpm run typecheck
435
- pnpm run test
436
- pnpm run lint
437
- pnpm run dev:web # Run SPA in dev mode
438
- ```
439
-
440
- ## Deployment
441
-
442
- See **[docs/deployment.md](docs/deployment.md)** for the full guide covering local, reverse proxy (Caddy/nginx), sub-path, Tailscale, systemd, and Docker.
443
-
444
- ### Sub-path deployments
445
-
446
- Serve canonry under a URL prefix without rebuilding:
447
-
448
- ```bash
449
- canonry serve --base-path /canonry/
191
+ pnpm run typecheck && pnpm run test && pnpm run lint
450
192
  ```
451
193
 
452
- The server injects the base path at runtime, so no build-time config is needed.
453
-
454
- ### Docker Deployment
455
-
456
- Canonry currently deploys as a **single Node.js service with a SQLite file on persistent disk**.
457
-
458
- The repo includes a production `Dockerfile` and entry script. The default container entrypoint runs `canonry bootstrap` and then `canonry serve`.
459
-
460
- ```bash
461
- docker build -t canonry .
462
-
463
- docker run --rm \
464
- -p 4100:4100 \
465
- -e PORT=4100 \
466
- -e CANONRY_CONFIG_DIR=/data/canonry \
467
- -e GEMINI_API_KEY=your-key \
468
- -v canonry-data:/data \
469
- canonry
470
- ```
471
-
472
- Published container images are available on Docker Hub:
473
-
474
- - [Repository overview](https://hub.docker.com/repository/docker/arberx/canonry/general)
475
- - [Available tags](https://hub.docker.com/repository/docker/arberx/canonry/tags)
476
-
477
- ```bash
478
- docker pull arberx/canonry:latest
479
- ```
480
-
481
- The same image is also published to GitHub Container Registry:
482
-
483
- ```bash
484
- docker pull ghcr.io/ainyc/canonry:latest
485
- ```
486
-
487
- Keep the container to a single replica and mount persistent storage at `/data` so SQLite and `config.yaml` survive restarts.
488
-
489
- No CORS configuration is required for this Docker setup. The dashboard and API are served by the same Canonry process on the same origin. CORS only becomes relevant if you split the frontend and API onto different domains.
490
-
491
- ## Deploy on Railway or Render
492
-
493
- Use the **repo root** as the service root. `@ainyc/canonry` depends on shared workspace packages under `packages/*`, so deploying from a subdirectory will break the build.
494
-
495
- Canonry runs as a **single service** -- the API, web dashboard, and job scheduler all run in one process. No provider API keys are required at startup; configure them later through the web dashboard.
496
-
497
- ### Railway
498
-
499
- [![Deploy on Railway](https://railway.com/button.svg)](https://railway.com/deploy/ENziH9?referralCode=0vODBs&utm_medium=integration&utm_source=template&utm_campaign=generic)
500
-
501
- **One-click deploy:**
502
-
503
- 1. Click the button above (or create a service from this repo manually)
504
- 2. Railway builds the `Dockerfile` automatically -- no custom build or start commands needed
505
- 3. Right-click the service and select **Create Volume**, set the mount path to `/data`
506
- 4. Generate a public domain under **Settings > Networking** (port `8080`)
507
- 5. Open the dashboard and follow the setup wizard to configure providers and create your first project
508
-
509
- **Manual setup:**
510
-
511
- 1. Create a new service from this GitHub repo
512
- 2. **Dockerfile Path**: `Dockerfile` (the default)
513
- 3. **Custom Build Command**: leave empty
514
- 4. **Custom Start Command**: leave empty
515
- 5. Add a **Volume** mounted at `/data` (right-click the service > Create Volume)
516
- 6. Generate a public domain under **Settings > Networking**
517
- 7. No environment variables are required to start -- the bootstrap creates a SQLite database and API key automatically
518
-
519
- **Optional environment variables:**
520
-
521
- | Variable | Description |
522
- |----------|-------------|
523
- | `GEMINI_API_KEY` | Google Gemini provider key |
524
- | `OPENAI_API_KEY` | OpenAI provider key |
525
- | `ANTHROPIC_API_KEY` | Anthropic/Claude provider key |
526
- | `PERPLEXITY_API_KEY` | Perplexity provider key |
527
- | `LOCAL_BASE_URL` | Local LLM endpoint (Ollama, LM Studio, etc.) |
528
- | `CANONRY_API_KEY` | Pin a specific API key instead of auto-generating one |
529
-
530
- Provider keys can also be configured at any time via the Settings page in the dashboard.
531
-
532
- Keep the service to a **single replica** -- SQLite does not support concurrent writers.
533
-
534
- ### Render
535
-
536
- Create one **Web Service** from this repo with runtime **Docker**, then attach a persistent disk mounted at `/data`.
537
-
538
- - Leave build and start commands unset so Render uses the image `ENTRYPOINT`.
539
- - Health check path: `/health`
540
- - No environment variables are required at startup. Configure providers via the dashboard.
541
-
542
- SQLite should live on the persistent disk, so keep the service to a single instance.
194
+ See [docs/README.md](docs/README.md) for the full architecture, roadmap, ADR index, and doc map.
543
195
 
544
196
  ## Contributing
545
197
 
546
- Contributions are welcome. See [CONTRIBUTING.md](./CONTRIBUTING.md) for setup instructions.
198
+ See [CONTRIBUTING.md](./CONTRIBUTING.md).
547
199
 
548
200
  ## License
549
201