@ainyc/canonry 1.7.0 → 1.7.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -18,6 +18,14 @@ canonry serve
 
 Open [http://localhost:4100](http://localhost:4100) to access the web dashboard.
 
+For CI or agent workflows, initialize non-interactively:
+
+```bash
+canonry init --gemini-key <key> --openai-key <key>
+# or via environment variables:
+GEMINI_API_KEY=... OPENAI_API_KEY=... canonry init
+```
+
 ## Features
 
 - **Multi-provider monitoring** -- query Gemini, OpenAI, Claude, and local LLMs (Ollama, LM Studio, or any OpenAI-compatible endpoint) from a single tool.
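The non-interactive `init` path added above is aimed at CI. A minimal sketch as a GitHub Actions workflow, assuming repository secrets named `GEMINI_API_KEY` and `OPENAI_API_KEY` exist — the workflow layout is illustrative, not part of the package:

```yaml
# .github/workflows/canonry.yml -- illustrative; only the canonry flags
# and env var names come from the README, the rest is an assumption.
name: canonry-visibility
on:
  schedule:
    - cron: "0 8 * * *"
jobs:
  monitor:
    runs-on: ubuntu-latest
    steps:
      - run: npm install -g @ainyc/canonry
      # init reads GEMINI_API_KEY / OPENAI_API_KEY when no flags are given
      - run: canonry init
        env:
          GEMINI_API_KEY: ${{ secrets.GEMINI_API_KEY }}
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
```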
@@ -30,15 +38,22 @@ Open [http://localhost:4100](http://localhost:4100) to access the web dashboard.
 
 ## CLI Reference
 
+All commands support `--format json` for machine-readable output.
+
 ### Setup
 
 ```bash
-canonry init # Initialize config and database
-canonry bootstrap # Bootstrap hosted config/database from env vars
-canonry serve # Start server (API + web dashboard)
-canonry settings # View/edit configuration
+canonry init [--force] # Initialize config and database (interactive)
+canonry init --gemini-key <key> # Initialize non-interactively (flags or env vars)
+canonry bootstrap [--force] # Bootstrap config/database from env vars only
+canonry serve [--port 4100] # Start server in foreground (API + web dashboard)
+canonry start [--port 4100] # Start server as a background daemon
+canonry stop # Stop the background daemon
+canonry settings # View active provider and quota settings
 ```
 
+Non-interactive `init` flags: `--gemini-key`, `--openai-key`, `--claude-key`, `--local-url`, `--local-model`, `--local-key`. Falls back to `GEMINI_API_KEY`, `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `LOCAL_BASE_URL`, `LOCAL_MODEL`, `LOCAL_API_KEY` env vars.
+
 ### Projects
 
 ```bash
@@ -54,6 +69,7 @@ canonry project delete <name>
 canonry keyword add <project> "keyword one" "keyword two"
 canonry keyword list <project>
 canonry keyword import <project> <file.csv>
+canonry keyword generate <project> --provider gemini [--count 10] [--save]
 
 canonry competitor add <project> competitor1.com competitor2.com
 canonry competitor list <project>
@@ -64,6 +80,9 @@ canonry competitor list <project>
 ```bash
 canonry run <project> # Run all configured providers
 canonry run <project> --provider gemini # Run a single provider
+canonry run <project> --wait # Trigger and wait for completion
+canonry run --all # Trigger runs for all projects
+canonry run show <id> # Show run details and snapshots
 canonry runs <project> # List past runs
 canonry status <project> # Current visibility summary
 canonry evidence <project> # View citation evidence
@@ -82,18 +101,44 @@ canonry apply multi-projects.yaml # Multi-doc YAML (---separated)
 ### Scheduling and Notifications
 
 ```bash
-canonry schedule set <project> --cron "0 8 * * *"
+canonry schedule set <project> --preset daily # Use a preset
+canonry schedule set <project> --cron "0 8 * * *" # Use a cron expression
+canonry schedule set <project> --preset daily --provider gemini openai
 canonry schedule show <project>
 canonry schedule enable <project>
 canonry schedule disable <project>
 canonry schedule remove <project>
 
-canonry notify add <project> --url https://hooks.slack.com/...
+canonry notify add <project> --webhook https://hooks.slack.com/... --events run.completed,citation.changed
 canonry notify list <project>
 canonry notify remove <project> <id>
 canonry notify test <project> <id>
+canonry notify events # List available event types
+```
+
+Schedule presets: `daily`, `weekly`, `twice-daily`, `daily@HH`, `weekly@DAY`.
+
+### Provider Settings
+
+```bash
+canonry settings # Show all providers and quotas
+canonry settings provider gemini --api-key <key>
+canonry settings provider local --base-url http://localhost:11434/v1 --model llama3
+canonry settings provider openai --api-key <key> --max-per-day 1000 --max-per-minute 20
 ```
 
+Quota flags: `--max-concurrent`, `--max-per-minute`, `--max-per-day`.
+
+### Telemetry
+
+```bash
+canonry telemetry status # Show telemetry status
+canonry telemetry enable # Enable anonymous telemetry
+canonry telemetry disable # Disable anonymous telemetry
+```
+
+Telemetry is automatically disabled when `CANONRY_TELEMETRY_DISABLED=1`, `DO_NOT_TRACK=1`, or a CI environment is detected.
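The kill switches above are plain environment variables, so opting out in a CI image or dotfile reduces to one export — a minimal sketch using the variable the README documents:

```shell
# Opt out of canonry telemetry for this shell session and any
# canonry invocations started from it.
export CANONRY_TELEMETRY_DISABLED=1
echo "telemetry disabled: $CANONRY_TELEMETRY_DISABLED"
```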
+
 ## Config-as-Code
 
 Define your monitoring projects in version-controlled YAML files:
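The README's full project schema is elided from this diff. For `canonry apply multi-projects.yaml`, a multi-document file might look like the following — every field name here is an illustrative assumption; only the `---` separator is documented:

```yaml
# multi-projects.yaml -- two projects in one stream, `---`-separated.
# Field names (name, keywords, competitors) are hypothetical, not the
# package's documented schema.
name: acme-us
keywords:
  - "best crm software"
  - "crm for startups"
competitors:
  - competitor1.com
---
name: acme-eu
keywords:
  - "bestes crm"
```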
@@ -154,18 +199,15 @@ Get an API key from [console.anthropic.com](https://console.anthropic.com/settin
 
 ### Local LLMs
 
-Any OpenAI-compatible endpoint works -- Ollama, LM Studio, llama.cpp, vLLM, and similar tools. Configure via CLI or API:
+Any OpenAI-compatible endpoint works -- Ollama, LM Studio, llama.cpp, vLLM, and similar tools. Configure via `canonry init`, the settings page, or the CLI:
 
 ```bash
 canonry settings provider local --base-url http://localhost:11434/v1
-```
-
-The base URL is the only required field. API key is optional (most local servers don't need one). You can also set a specific model:
-
-```bash
 canonry settings provider local --base-url http://localhost:11434/v1 --model llama3
 ```
 
+The base URL is the only required field. API key is optional (most local servers don't need one).
+
 > **Note:** Unless your local model has web search capabilities, responses will be based solely on its training data. Cloud providers (Gemini, OpenAI, Claude) use live web search to ground their answers, which produces more accurate citation results. Local LLMs are best used for comparing how different models perceive your brand without real-time search context.
 
 ## API