@ainyc/canonry 1.22.0 → 1.24.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +64 -10
- package/assets/assets/index-BUzyFRxr.js +246 -0
- package/assets/assets/index-BidvmvWJ.css +1 -0
- package/assets/index.html +2 -2
- package/dist/{chunk-SN5AMQJE.js → chunk-QMUO2JYU.js} +847 -72
- package/dist/cli.js +331 -67
- package/dist/index.d.ts +12 -0
- package/dist/index.js +1 -1
- package/package.json +6 -6
- package/assets/assets/index-7DjD4Oje.css +0 -1
- package/assets/assets/index-sl--69xx.js +0 -246
package/README.md
CHANGED
@@ -2,9 +2,9 @@
 
 [](https://www.npmjs.com/package/@ainyc/canonry) [](https://fsl.software/) [](https://nodejs.org)
 
-**
+**Agent-first AEO monitoring.** Canonry tracks how AI answer engines (ChatGPT, Gemini, Claude, Perplexity, and others) cite or omit your website, and it's built so that AI agents and automation pipelines can operate it end-to-end without human intervention.
 
-
+Every capability is exposed through a stable REST API and a machine-readable CLI. An AI agent can install canonry, configure providers, create projects, trigger visibility sweeps, and act on the results -- all from a terminal, all scriptable, all JSON-parseable. The web dashboard is there for human analysts, but nothing requires it.
 
 AEO (Answer Engine Optimization) is the practice of ensuring your content is accurately represented in AI-generated answers. As search shifts from links to synthesized responses, monitoring your visibility across answer engines is essential.
 
@@ -20,21 +20,75 @@ canonry serve
 
 Open [http://localhost:4100](http://localhost:4100) to access the optional web dashboard.
 
-
+### Zero-touch setup for agents and CI
+
+No interactive prompts required. Pass keys as flags or environment variables and canonry configures itself:
 
 ```bash
+# flags
 canonry init --gemini-key <key> --openai-key <key>
-
+
+# environment variables
 GEMINI_API_KEY=... OPENAI_API_KEY=... canonry init
+
+# headless bootstrap (env vars only, no prompts, idempotent)
+canonry bootstrap
+```
+
+### Agent workflow example
+
+A coding agent (Claude Code, Cursor, Copilot, or any MCP-equipped tool) can run an entire monitoring cycle in a single script:
+
+```bash
+# 1. Install and bootstrap
+npm install -g @ainyc/canonry
+GEMINI_API_KEY=$KEY canonry bootstrap
+canonry start # background daemon
+
+# 2. Define a project from a YAML spec
+canonry apply canonry.yaml --format json # declarative, version-controlled
+
+# 3. Trigger a sweep and wait for results
+canonry run my-project --wait --format json
+
+# 4. Inspect results programmatically
+canonry status my-project --format json # visibility scores
+canonry evidence my-project --format json # citation evidence
+canonry history my-project --format json # timeline for trend analysis
 ```
 
+Every command supports `--format json` so agents can parse output directly. Error messages include the failed command, the reason, and a suggested fix, so there's no guesswork.
+
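As an aside for readers of this diff: the `--format json` contract added above is meant for machine consumption. A minimal post-processing sketch of that pattern, using a made-up stand-in payload -- the field names (`project`, `visibility_score`) are illustrative assumptions, not canonry's documented schema:

```shell
# Stand-in for output an agent would capture from something like
# `canonry status my-project --format json`. The payload shape is a
# hypothetical example, not the real canonry schema.
payload='{"project":"my-project","visibility_score":0.72}'

# Extract one field with stdlib Python so the sketch needs no jq install.
score=$(printf '%s' "$payload" | python3 -c 'import json,sys; print(json.load(sys.stdin)["visibility_score"])')
echo "visibility score: $score"
```

In a real pipeline the agent would branch on the parsed value (alert, retry, open an issue) instead of echoing it.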
+## Why Agent-First?
+
+Canonry is designed so that AI agents and automation pipelines can drive it without human interaction.
+
+- **No browser required.** The CLI and API cover 100% of functionality.
+- **Deterministic setup.** `canonry bootstrap` is idempotent and non-interactive. Run it in CI, in a container, or from an agent with zero human input.
+- **Config-as-code.** Kubernetes-style YAML files that agents can generate, version-control, and apply. No forms to fill out.
+- **Structured output everywhere.** `--format json` on every command. Agents parse results, not humans.
+- **Stable API contract.** Endpoints never change paths or methods. Agents can hard-code routes safely.
+- **Actionable errors.** Every failure includes the command that failed, why it failed, and what to do next.
+
+Start with [docs/README.md](docs/README.md) for the full architecture, roadmap, active plans, testing, deployment, and ADR index.
+
+## Skills for AI Agents
+
+Canonry ships with an [OpenClaw](https://clawhub.dev) skill that teaches AI agents how to use it. The skill covers CLI commands, provider setup, interpreting results, indexing workflows, and troubleshooting.
+
+**Claude Code** picks it up automatically from `.claude/skills/canonry-setup/` when you open this repo. No configuration needed.
+
+**ClawHub** hosts the same skill at [clawhub.dev](https://clawhub.dev) so any MCP-equipped agent (Cursor, Windsurf, Copilot, etc.) can discover and install it. Search for `canonry` on ClawHub, or point your agent at the `skills/canonry-setup/` directory in this repo.
+
+Once an agent has the skill loaded, it can set up canonry, run sweeps, interpret citation evidence, and troubleshoot errors without you having to explain any of it.
+
 ## Features
 
 - **Multi-provider monitoring** -- query Gemini, OpenAI, Claude, Perplexity, and local LLMs (Ollama, LM Studio, or any OpenAI-compatible endpoint) from a single tool.
-- **
-- **
-- **Config-as-code** -- manage projects with Kubernetes-style YAML files. Version control your monitoring setup.
+- **Agent-first surfaces** -- the REST API is canonical, the CLI supports `--format json` on every command, and the web dashboard is an optional visualization layer.
+- **Config-as-code** -- manage projects with Kubernetes-style YAML files. Version control your monitoring setup and let agents apply changes declaratively.
 - **Self-hosted** -- runs locally with SQLite. No cloud account, no external dependencies beyond the LLM API keys you choose to configure.
+- **Project-scoped location context** -- define named locations per project, set a default, and run explicit or all-location sweeps without making keywords location-owned.
 - **Scheduled monitoring** -- set up cron-based recurring runs to track citation changes over time.
 - **Webhook notifications** -- get alerted when your citation status changes.
 - **Audit logging** -- full history of every action taken through any surface.
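The config-as-code feature above centers on a YAML spec applied with `canonry apply canonry.yaml`. The diff never shows that file, so here is a purely hypothetical sketch of what such a Kubernetes-style spec might look like -- every field name below is an illustrative assumption, not the documented canonry schema:

```yaml
# Hypothetical sketch only -- field names are illustrative assumptions,
# not the documented canonry schema.
kind: Project
metadata:
  name: my-project
spec:
  site: https://example.com
  keywords:
    - "answer engine optimization tools"
  providers:
    - gemini
    - openai
  schedule: "0 6 * * *" # cron: daily sweep at 06:00
```

A file in this shape is what an agent would generate, commit, and re-apply idempotently as the monitoring setup evolves.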
@@ -348,7 +402,7 @@ pnpm run dev:web # Run SPA in dev mode
 
 ## Deployment
 
-See **[docs/deployment.md](docs/deployment.md)** for the full guide
+See **[docs/deployment.md](docs/deployment.md)** for the full guide covering local, reverse proxy (Caddy/nginx), sub-path, Tailscale, systemd, and Docker.
 
 ### Sub-path deployments
 
@@ -358,7 +412,7 @@ Serve canonry under a URL prefix without rebuilding:
 canonry serve --base-path /canonry/
 ```
 
-The server injects the base path at runtime
+The server injects the base path at runtime, so no build-time config is needed.
 
 ### Docker Deployment
 
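The sub-path mode above pairs naturally with the reverse-proxy setups the deployment guide mentions. A hypothetical nginx fragment for that arrangement, assuming the default port 4100 shown earlier in the README (adapt host, port, and prefix to your environment):

```nginx
# Hypothetical sketch: forwards /canonry/ to a local instance started
# with `canonry serve --base-path /canonry/`. proxy_pass without a URI
# part passes the original /canonry/... path through unchanged, which
# matches the base-path the server expects.
location /canonry/ {
    proxy_pass http://127.0.0.1:4100;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```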
@@ -456,7 +510,7 @@ Contributions are welcome. See [CONTRIBUTING.md](./CONTRIBUTING.md) for setup in
 
 ## License
 
-[FSL-1.1-ALv2](./LICENSE)
+[FSL-1.1-ALv2](./LICENSE). Free to use, modify, and self-host. Each version converts to Apache 2.0 after two years.
 
 ---
 