@rely-ai/caliber 1.36.1 → 1.37.0-dev.1774893051

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (3)
  1. package/README.md +90 -59
  2. package/dist/bin.js +137 -113
  3. package/package.json +1 -1
package/README.md CHANGED
@@ -1,3 +1,7 @@
+ # Caliber
+
+ **Hand-written `CLAUDE.md` files go stale the moment you refactor.** Your AI agent hallucinates paths that no longer exist, misses new dependencies, and gives advice based on yesterday's architecture. Caliber generates and maintains your AI context files (`CLAUDE.md`, `.cursor/rules/`, `AGENTS.md`, `copilot-instructions.md`) so they stay accurate as your code evolves — and keeps every agent on your team in sync, whether they use Claude Code, Cursor, Codex, OpenCode, or GitHub Copilot.
+
  <p align="center">
  <img src="assets/demo-header.gif" alt="Caliber product demo" width="900">
  </p>
@@ -10,30 +14,16 @@
  <img src="https://img.shields.io/badge/Claude_Code-supported-blue" alt="Claude Code">
  <img src="https://img.shields.io/badge/Cursor-supported-blue" alt="Cursor">
  <img src="https://img.shields.io/badge/Codex-supported-blue" alt="Codex">
+ <img src="https://img.shields.io/badge/OpenCode-supported-blue" alt="OpenCode">
+ <img src="https://img.shields.io/badge/GitHub_Copilot-supported-blue" alt="GitHub Copilot">
  </p>

- ---
-
- ### Try it — zero install, zero commitment
-
- ```bash
- npx @rely-ai/caliber score
- ```
-
- Score your AI agent config in 3 seconds. No API key. No changes to your code. Just a score.
-
- > **Your code stays on your machine.** Scoring is 100% local — no LLM calls, no code sent anywhere. Generation uses your own AI subscription (Claude Code, Cursor) or your own API key (Anthropic, OpenAI, Vertex AI). Caliber never sees your code.
-
- ---
-
- Caliber scores, generates, and keeps your AI agent configs in sync with your codebase. It fingerprints your project — languages, frameworks, dependencies, architecture — and produces tailored configs for **Claude Code**, **Cursor**, and **OpenAI Codex**. When your code evolves, Caliber detects the drift and updates your configs to match.
-
  ## Before / After

  Most repos start with a hand-written `CLAUDE.md` and nothing else. Here's what Caliber finds — and fixes:

  ```
- Before After caliber init
+ Before After /setup-caliber
  ────────────────────────────── ──────────────────────────────

  Agent Config Score 35 / 100 Agent Config Score 94 / 100
@@ -53,6 +43,24 @@ Scoring is deterministic — no LLM, no API calls. It cross-references your conf
  caliber score --compare main # See how your branch changed the score
  ```

+ ## Get Started
+
+ Requires **Node.js >= 20**.
+
+ ```bash
+ npx @rely-ai/caliber bootstrap
+ ```
+
+ Then, in your next Claude Code or Cursor chat session, type:
+
+ > **/setup-caliber**
+
+ Your agent detects your stack, generates tailored configs for every platform your team uses, sets up pre-commit hooks, and enables continuous sync — all from inside your normal workflow.
+
+ **Don't use Claude Code or Cursor?** Run `caliber init` instead — it's the same setup as a CLI wizard. Works with any LLM provider: bring your own Anthropic, OpenAI, or Vertex AI key.
+
+ > **Your code stays on your machine.** Bootstrap is 100% local — no LLM calls, no code sent anywhere. Generation uses your own AI subscription or API key. Caliber never sees your code.
+
  ## Audits first, writes second

  Caliber never overwrites your existing configs without asking. The workflow mirrors code review:
@@ -67,26 +75,28 @@ If your existing config scores **95+**, Caliber skips full regeneration and appl

  ## How It Works

- Caliber is not a one-time setup tool. It's a loop:
+ Bootstrap gives your agent the `/setup-caliber` skill. Your agent analyzes your project — languages, frameworks, dependencies, architecture — generates configs, and installs hooks. From there, it's a loop:

  ```
- caliber score
+ npx @rely-ai/caliber bootstrap ← one-time, 2 seconds


- ┌──── caliber init ◄────────────────┐
- │ (generate / fix)
- │ │ │
- │ ▼ │
- your code evolves
- (new deps, renamed files,
- changed architecture)
-
-
- └──► caliber refresh ──────────────►┘
- (detect drift, update configs)
+ agent runs /setup-caliber agent handles everything
+
+
+ ┌──── configs generated ◄────────────┐
+
+
+ your code evolves
+ (new deps, renamed files,
+ changed architecture)
+ │ │ │
+ │ ▼ │
+ └──► caliber refresh ──────────────►─┘
+ (auto, on every commit)
  ```

- Auto-refresh hooks run this loop automatically on every commit or at the end of each AI coding session.
+ Pre-commit hooks run the refresh loop automatically. New team members get nudged to bootstrap on their first session.

  ### What It Generates

@@ -106,6 +116,13 @@ Auto-refresh hooks run this loop automatically — on every commit or at the end
  - `AGENTS.md` — Project context for Codex
  - `.agents/skills/*/SKILL.md` — Skills for Codex

+ **OpenCode**
+ - `AGENTS.md` — Project context (shared with Codex when both are targeted)
+ - `.opencode/skills/*/SKILL.md` — Skills for OpenCode
+
+ **GitHub Copilot**
+ - `.github/copilot-instructions.md` — Project context for Copilot
+
  ## Key Features

  <details>
@@ -118,13 +135,15 @@ TypeScript, Python, Go, Rust, Java, Ruby, Terraform, and more. Language and fram
  <details>
  <summary><strong>Any AI Tool</strong></summary>

- Target a single platform or all three at once:
+ `caliber bootstrap` auto-detects which agents you have installed. For manual control:
  ```bash
  caliber init --agent claude # Claude Code only
  caliber init --agent cursor # Cursor only
  caliber init --agent codex # Codex only
- caliber init --agent all # All three
- caliber init --agent claude,cursor # Comma-separated
+ caliber init --agent opencode # OpenCode only
+ caliber init --agent github-copilot # GitHub Copilot only
+ caliber init --agent all # All platforms
+ caliber init --agent claude,cursor # Comma-separated
  ```

  </details>
@@ -197,12 +216,20 @@ The `refresh` command analyzes your git diff (committed, staged, and unstaged ch

  </details>

+ <details>
+ <summary><strong>Team Onboarding</strong></summary>
+
+ When Caliber is set up in a repo, it automatically nudges new team members to configure it on their machine. A lightweight session hook checks whether the pre-commit hook is installed and prompts setup if not — no manual coordination needed.
+
+ </details>
+
  <details>
  <summary><strong>Fully Reversible</strong></summary>

  - **Automatic backups** — originals saved to `.caliber/backups/` before every write
  - **Score regression guard** — if a regeneration produces a lower score, changes are auto-reverted
  - **Full undo** — `caliber undo` restores everything to its previous state
+ - **Clean uninstall** — `caliber uninstall` removes everything Caliber added (hooks, generated sections, skills, learnings) while preserving your own content
  - **Dry run** — preview changes with `--dry-run` before applying

  </details>
@@ -211,9 +238,10 @@ The `refresh` command analyzes your git diff (committed, staged, and unstaged ch

  | Command | Description |
  |---|---|
+ | `caliber bootstrap` | Install agent skills — the fastest way to get started |
+ | `caliber init` | Full setup wizard — analyze, generate, review, install hooks |
  | `caliber score` | Score config quality (deterministic, no LLM) |
  | `caliber score --compare <ref>` | Compare current score against a git ref |
- | `caliber init` | Full setup wizard — analyze, generate, review, install hooks |
  | `caliber regenerate` | Re-analyze and regenerate configs (aliases: `regen`, `re`) |
  | `caliber refresh` | Update docs based on recent code changes |
  | `caliber skills` | Discover and install community skills |
@@ -221,6 +249,7 @@ The `refresh` command analyzes your git diff (committed, staged, and unstaged ch
  | `caliber hooks` | Manage auto-refresh hooks |
  | `caliber config` | Configure LLM provider, API key, and model |
  | `caliber status` | Show current setup status |
+ | `caliber uninstall` | Remove all Caliber resources from a project |
  | `caliber undo` | Revert all changes made by Caliber |

  ## FAQ
@@ -235,9 +264,16 @@ No. Caliber shows you a diff of every proposed change. You accept, refine, or de
  <details>
  <summary><strong>Does it need an API key?</strong></summary>

- **Scoring:** No. `caliber score` runs 100% locally with no LLM.
+ **Bootstrap & scoring:** No. Both run 100% locally with no LLM.

- **Generation:** Uses your existing Claude Code or Cursor subscription (no API key needed), or bring your own key for Anthropic, OpenAI, or Vertex AI.
+ **Generation** (via `/setup-caliber` or `caliber init`): Uses your existing Claude Code or Cursor subscription (no API key needed), or bring your own key for Anthropic, OpenAI, or Vertex AI.
+
+ </details>
+
+ <details>
+ <summary><strong>What's the difference between bootstrap and init?</strong></summary>
+
+ `caliber bootstrap` installs agent skills in 2 seconds — your agent then runs `/setup-caliber` to handle the rest from inside your session. `caliber init` is the full interactive wizard for users who prefer a CLI-driven setup. Both end up in the same place.

  </details>

@@ -258,24 +294,10 @@ Yes. Run `caliber init` from any directory. `caliber refresh` can update configs
  <details>
  <summary><strong>Does it send my code anywhere?</strong></summary>

- Scoring is fully local. Generation sends your project fingerprint (not source code) to whatever LLM provider you configure — the same provider your AI editor already uses. Anonymous usage analytics (no code, no file contents) can be disabled via `caliber config`.
+ Scoring is fully local. Generation sends a project summary (languages, structure, dependencies — not source code) to whatever LLM provider you configure — the same provider your AI editor already uses. Anonymous usage analytics (no code, no file contents) can be disabled via `caliber config`.

  </details>

- ## Add a Caliber badge to your repo
-
- After scoring your project, add a badge to your README:
-
- ![Caliber Score](https://img.shields.io/badge/caliber-94%2F100-brightgreen)
-
- Copy this markdown and replace `94` with your actual score:
-
- ```
- ![Caliber Score](https://img.shields.io/badge/caliber-SCORE%2F100-COLOR)
- ```
-
- Color guide: `brightgreen` (90+), `green` (70-89), `yellow` (40-69), `red` (<40).
-
  ## LLM Providers

  No API key? No problem. Caliber works with your existing AI tool subscription:
@@ -285,9 +307,9 @@ No API key? No problem. Caliber works with your existing AI tool subscription:
  | **Claude Code** (your seat) | `caliber config` → Claude Code | Inherited from Claude Code |
  | **Cursor** (your seat) | `caliber config` → Cursor | Inherited from Cursor |
  | **Anthropic** | `export ANTHROPIC_API_KEY=sk-ant-...` | `claude-sonnet-4-6` |
- | **OpenAI** | `export OPENAI_API_KEY=sk-...` | `gpt-5.4-mini` |
+ | **OpenAI** | `export OPENAI_API_KEY=sk-...` | `gpt-4.1` |
  | **Vertex AI** | `export VERTEX_PROJECT_ID=my-project` | `claude-sonnet-4-6` |
- | **Custom endpoint** | `OPENAI_API_KEY` + `OPENAI_BASE_URL` | `gpt-5.4-mini` |
+ | **Custom endpoint** | `OPENAI_API_KEY` + `OPENAI_BASE_URL` | `gpt-4.1` |

  Override the model for any provider: `export CALIBER_MODEL=<model-name>` or use `caliber config`.

@@ -333,11 +355,6 @@ export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json

  </details>

- ## Requirements
-
- - **Node.js** >= 20
- - **One LLM provider:** your **Claude Code** or **Cursor** subscription (no API key), or an API key for Anthropic / OpenAI / Vertex AI
-
  ## Contributing

  See [CONTRIBUTING.md](./CONTRIBUTING.md) for detailed guidelines.
@@ -353,6 +370,20 @@ npm run build # Compile

  Uses [conventional commits](https://www.conventionalcommits.org/) — `feat:` for features, `fix:` for bug fixes.

+ ## Add a Caliber badge to your repo
+
+ After scoring your project, add a badge to your README:
+
+ ![Caliber Score](https://img.shields.io/badge/caliber-94%2F100-brightgreen)
+
+ Copy this markdown and replace `94` with your actual score:
+
+ ```
+ ![Caliber Score](https://img.shields.io/badge/caliber-SCORE%2F100-COLOR)
+ ```
+
+ Color guide: `brightgreen` (90+), `green` (70-89), `yellow` (40-69), `red` (<40).
+
  ## License

  MIT
package/dist/bin.js CHANGED
@@ -35,9 +35,9 @@ function getMaxPromptTokens() {
  return Math.max(MIN_PROMPT_TOKENS, Math.min(budget, MAX_PROMPT_TOKENS_CAP));
  }
  function loadConfig() {
- const fileConfig = readConfigFile();
- if (fileConfig) return fileConfig;
- return resolveFromEnv();
+ const envConfig = resolveFromEnv();
+ if (envConfig) return envConfig;
+ return readConfigFile();
  }
  function resolveFromEnv() {
  if (process.env.ANTHROPIC_API_KEY) {
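The `loadConfig` change above flips the precedence: environment variables now win over the on-disk config file instead of the other way around. A minimal sketch of the new resolution order, with the function names taken from the diff and the config shapes assumed for illustration:

```javascript
// Sketch of the 1.37.0 precedence: env vars first, config file second.
// resolveFromEnv returns null when no recognized variable is set,
// so the file config only applies as a fallback.
function resolveFromEnv(env) {
  if (env.ANTHROPIC_API_KEY) {
    return { provider: "anthropic", apiKey: env.ANTHROPIC_API_KEY };
  }
  return null;
}

function loadConfig(env, fileConfig) {
  const envConfig = resolveFromEnv(env);
  if (envConfig) return envConfig;
  return fileConfig;
}
```

Under 1.36.x the same inputs returned the file config whenever one existed; exporting `ANTHROPIC_API_KEY` now overrides a saved provider choice.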
@@ -130,7 +130,7 @@ var init_config = __esm({
  DEFAULT_MODELS = {
  anthropic: "claude-sonnet-4-6",
  vertex: "claude-sonnet-4-6",
- openai: "gpt-5.4-mini",
+ openai: "gpt-4.1",
  cursor: "sonnet-4.6",
  "claude-cli": "default"
  };
@@ -139,7 +139,8 @@ var init_config = __esm({
  "claude-opus-4-6": 2e5,
  "claude-haiku-4-5-20251001": 2e5,
  "claude-sonnet-4-5-20250514": 2e5,
- "gpt-5.4-mini": 1e6,
+ "gpt-4.1": 1e6,
+ "gpt-4.1-mini": 1e6,
  "gpt-4o": 128e3,
  "gpt-4o-mini": 128e3,
  "sonnet-4.6": 2e5
@@ -151,7 +152,7 @@ var init_config = __esm({
  DEFAULT_FAST_MODELS = {
  anthropic: "claude-haiku-4-5-20251001",
  vertex: "claude-haiku-4-5-20251001",
- openai: "gpt-5.4-mini",
+ openai: "gpt-4.1-mini",
  cursor: "gpt-5.3-codex-fast"
  };
  }
@@ -2835,7 +2836,13 @@ var KNOWN_MODELS = {
  "claude-opus-4-6@20250605",
  "claude-opus-4-1-20250620"
  ],
- openai: ["gpt-5.4-mini", "gpt-4o", "gpt-4o-mini", "o3-mini"],
+ openai: [
+ "gpt-4.1",
+ "gpt-4.1-mini",
+ "gpt-4o",
+ "gpt-4o-mini",
+ "o3-mini"
+ ],
  cursor: ["auto", "composer-1.5"],
  "claude-cli": []
  };
@@ -2843,8 +2850,7 @@ function isModelNotAvailableError(error) {
  const msg = error.message.toLowerCase();
  const status = error.status;
  if (status === 404 && msg.includes("model")) return true;
- if (msg.includes("model") && (msg.includes("not found") || msg.includes("not_found")))
- return true;
+ if (msg.includes("model") && (msg.includes("not found") || msg.includes("not_found"))) return true;
  if (msg.includes("model") && msg.includes("not available")) return true;
  if (msg.includes("model") && msg.includes("does not exist")) return true;
  if (msg.includes("publisher model")) return true;
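Aside from the formatting-only change, `isModelNotAvailableError` matches provider errors by substring on the lowercased message. A condensed sketch of the checks visible in this hunk (the real function in `bin.js` continues beyond what the diff shows, so additional patterns may apply):

```javascript
// Condensed from the hunk above: a 404 mentioning "model", or a message
// combining "model" with a not-found / not-available / does-not-exist phrase.
function isModelNotAvailableError(error) {
  const msg = error.message.toLowerCase();
  if (error.status === 404 && msg.includes("model")) return true;
  if (msg.includes("model") && (msg.includes("not found") || msg.includes("not_found"))) return true;
  if (msg.includes("model") && msg.includes("not available")) return true;
  if (msg.includes("model") && msg.includes("does not exist")) return true;
  if (msg.includes("publisher model")) return true;
  return false;
}
```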
@@ -2870,18 +2876,12 @@ function filterRelevantModels(models, provider) {
  async function handleModelNotAvailable(failedModel, provider, config) {
  if (!process.stdin.isTTY) {
  console.error(
- chalk.red(
- `Model "${failedModel}" is not available. Run \`${resolveCaliber()} config\` to select a different model.`
- )
+ chalk.red(`Model "${failedModel}" is not available. Run \`${resolveCaliber()} config\` to select a different model.`)
  );
  return null;
  }
- console.log(
- chalk.yellow(
- `
- \u26A0 Model "${failedModel}" is not available on your ${config.provider} deployment.`
- )
- );
+ console.log(chalk.yellow(`
+ \u26A0 Model "${failedModel}" is not available on your ${config.provider} deployment.`));
  let models = [];
  if (provider.listModels) {
  try {
@@ -2895,11 +2895,7 @@ async function handleModelNotAvailable(failedModel, provider, config) {
  }
  models = models.filter((m) => m !== failedModel);
  if (models.length === 0) {
- console.log(
- chalk.red(
- ` No alternative models found. Run \`${resolveCaliber()} config\` to configure manually.`
- )
- );
+ console.log(chalk.red(` No alternative models found. Run \`${resolveCaliber()} config\` to configure manually.`));
  return null;
  }
  console.log("");
@@ -3105,6 +3101,7 @@ var SKILL_FORMAT_RULES = `All skills follow the OpenSkills standard (agentskills

  Skill field requirements:
  - "name": kebab-case (lowercase letters, numbers, hyphens only). Becomes the directory name.
+ - "name" MUST NOT be any of these reserved names (they are managed by Caliber automatically): "setup-caliber", "find-skills", "save-learning". Do NOT generate skills with these names.
  - "description": MUST include WHAT it does + WHEN to use it with specific trigger phrases. Example: "Manages database migrations. Use when user says 'run migration', 'create migration', 'db schema change', or modifies files in db/migrations/."
  - "content": markdown body only \u2014 do NOT include YAML frontmatter, it is generated from name+description.

@@ -3148,7 +3145,7 @@ PRIORITY WHEN CONSTRAINTS CONFLICT: Grounding and reference density matter more
  Note: Permissions, hooks, freshness tracking, and OpenSkills frontmatter are scored automatically by caliber \u2014 do not optimize for them.
  README.md is provided for context only \u2014 do NOT include a readmeMd field in your output.`;
  var OUTPUT_SIZE_CONSTRAINTS = `OUTPUT SIZE CONSTRAINTS \u2014 these are critical:
- - CLAUDE.md / AGENTS.md: MUST be under 150 lines for maximum score. Aim for 100-140 lines. Be concise \u2014 commands, architecture overview, and key conventions. Use bullet points and tables, not prose.
+ - CLAUDE.md / AGENTS.md: MUST be under 400 lines for maximum score. Aim for 200-350 lines. Be thorough \u2014 commands, architecture overview, key conventions, data flow, and important patterns. Use bullet points and tables, not prose.

  Pack project references densely in architecture sections \u2014 use inline paths, not prose paragraphs:
  GOOD: **Entry**: \`src/bin.ts\` \u2192 \`src/cli.ts\` \xB7 **LLM** (\`src/llm/\`): \`anthropic.ts\` \xB7 \`vertex.ts\` \xB7 \`openai-compat.ts\`
@@ -3286,7 +3283,7 @@ Structure:
  5. "## Common Issues" (required) \u2014 specific error messages and their fixes. Not "check your config" but "If you see 'Connection refused on port 5432': 1. Verify postgres is running: docker ps | grep postgres 2. Check .env has correct DATABASE_URL"

  Rules:
- - Max 150 lines. Focus on actionable instructions, not documentation prose.
+ - Max 400 lines. Focus on actionable instructions, not documentation prose.
  - Study existing code in the project context to extract the real patterns being used. A skill for "create API route" should show the exact file structure, imports, error handling, and naming that existing routes use.
  - Be specific and actionable. GOOD: "Run \`pnpm test -- --filter=api\` to verify". BAD: "Validate the data before proceeding."
  - Never use ambiguous language. Instead of "handle errors properly", write "Wrap the DB call in try/catch. On failure, return { error: string, code: number } matching the ErrorResponse type in \`src/types.ts\`."
@@ -3344,7 +3341,7 @@ Rules:
  - Update the "fileDescriptions" to reflect any changes you make.

  Quality constraints \u2014 your changes are scored, so do not break these:
- - CLAUDE.md / AGENTS.md: MUST stay under 150 lines. If adding content, remove less important lines to stay within budget. Do not refuse the user's request \u2014 make the change and trim elsewhere.
+ - CLAUDE.md / AGENTS.md: MUST stay under 400 lines. If adding content, remove less important lines to stay within budget. Do not refuse the user's request \u2014 make the change and trim elsewhere.
  - Avoid vague instructions ("follow best practices", "write clean code", "ensure quality").
  - Do NOT add directory tree listings in code blocks.
  - Do NOT remove existing code blocks \u2014 they contribute to the executable content score.
@@ -3368,7 +3365,7 @@ CONSERVATIVE UPDATE means:
  - NEVER replace specific paths/commands with generic prose

  Quality constraints (the output is scored deterministically):
- - CLAUDE.md / AGENTS.md: MUST stay under 150 lines. If the diff adds content, trim the least important lines elsewhere.
+ - CLAUDE.md / AGENTS.md: MUST stay under 400 lines. If the diff adds content, trim the least important lines elsewhere.
  - Keep 3+ code blocks with executable commands \u2014 do not remove code blocks
  - Every file path, command, and identifier must be in backticks
  - ONLY reference file paths that exist in the provided file tree \u2014 do NOT invent paths
@@ -3381,6 +3378,7 @@ Cross-agent sync:

  Managed content:
  - Keep managed blocks (<!-- caliber:managed --> ... <!-- /caliber:managed -->) intact
+ - Keep context sync blocks (<!-- caliber:managed:sync --> ... <!-- /caliber:managed:sync -->) intact
  - Do NOT modify CALIBER_LEARNINGS.md \u2014 it is managed separately
  - Preserve any references to CALIBER_LEARNINGS.md in CLAUDE.md

@@ -3396,9 +3394,12 @@ Return a JSON object with this exact shape:
  "copilotInstructionFiles": [{"filename": "name.instructions.md", "content": "..."}] or null
  },
  "changesSummary": "<1-2 sentence summary of what was updated and why>",
+ "fileChanges": [{"file": "CLAUDE.md", "description": "added new API routes, updated build commands"}],
  "docsUpdated": ["CLAUDE.md", "README.md"]
  }

+ The "fileChanges" array MUST include one entry per file that was updated (non-null in updatedDocs). Each entry describes what specifically changed in that file \u2014 be concrete (e.g. "added auth middleware section" not "updated docs").
+
  Respond with ONLY the JSON object, no markdown fences or extra text.`;
  var LEARN_SYSTEM_PROMPT = `You are an expert developer experience engineer. You analyze raw tool call events from AI coding sessions to extract reusable operational lessons that will help future LLM sessions work more effectively in this project.

@@ -5258,7 +5259,7 @@ import fs12 from "fs";
  import path12 from "path";
  function writeClaudeConfig(config) {
  const written = [];
- fs12.writeFileSync("CLAUDE.md", appendLearningsBlock(appendPreCommitBlock(config.claudeMd)));
+ fs12.writeFileSync("CLAUDE.md", appendSyncBlock(appendLearningsBlock(appendPreCommitBlock(config.claudeMd))));
  written.push("CLAUDE.md");
  if (config.skills?.length) {
  for (const skill of config.skills) {
@@ -5304,7 +5305,8 @@ function writeCursorConfig(config) {
  }
  const preCommitRule = getCursorPreCommitRule();
  const learningsRule = getCursorLearningsRule();
- const allRules = [...config.rules || [], preCommitRule, learningsRule];
+ const syncRule = getCursorSyncRule();
+ const allRules = [...config.rules || [], preCommitRule, learningsRule, syncRule];
  const rulesDir = path13.join(".cursor", "rules");
  if (!fs13.existsSync(rulesDir)) fs13.mkdirSync(rulesDir, { recursive: true });
  for (const rule of allRules) {
@@ -5387,7 +5389,7 @@ function writeGithubCopilotConfig(config) {
  fs15.mkdirSync(".github", { recursive: true });
  fs15.writeFileSync(
  path15.join(".github", "copilot-instructions.md"),
- appendLearningsBlock(appendPreCommitBlock(config.instructions, "copilot"))
+ appendSyncBlock(appendLearningsBlock(appendPreCommitBlock(config.instructions, "copilot")))
  );
  written.push(".github/copilot-instructions.md");
  }
@@ -6120,11 +6122,11 @@ var POINTS_LEARNED_CONTENT = 2;
  var POINTS_SOURCES_CONFIGURED = 3;
  var POINTS_SOURCES_REFERENCED = 3;
  var TOKEN_BUDGET_THRESHOLDS = [
- { maxTokens: 2e3, points: 6 },
- { maxTokens: 3500, points: 5 },
- { maxTokens: 5e3, points: 4 },
- { maxTokens: 8e3, points: 2 },
- { maxTokens: 12e3, points: 1 }
+ { maxTokens: 5e3, points: 6 },
+ { maxTokens: 8e3, points: 5 },
+ { maxTokens: 12e3, points: 4 },
+ { maxTokens: 16e3, points: 2 },
+ { maxTokens: 24e3, points: 1 }
  ];
  var CODE_BLOCK_THRESHOLDS = [
  { minBlocks: 3, points: 8 },
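The relaxed `TOKEN_BUDGET_THRESHOLDS` pair with the 150-to-400 line-limit change elsewhere in this file: configs can now be roughly twice as large before losing points. Assuming the scorer awards the points of the first threshold whose `maxTokens` covers the estimate (the lookup itself is not shown in this hunk), the new table behaves like this:

```javascript
// New thresholds from the diff (floor raised 2e3 -> 5e3, ceiling 12e3 -> 24e3).
const TOKEN_BUDGET_THRESHOLDS = [
  { maxTokens: 5000, points: 6 },
  { maxTokens: 8000, points: 5 },
  { maxTokens: 12000, points: 4 },
  { maxTokens: 16000, points: 2 },
  { maxTokens: 24000, points: 1 },
];

// Assumed lookup: the first bracket that fits wins; over the ceiling scores 0.
function tokenBudgetPoints(estimatedTokens) {
  for (const { maxTokens, points } of TOKEN_BUDGET_THRESHOLDS) {
    if (estimatedTokens <= maxTokens) return points;
  }
  return 0;
}
```

Under this reading, a 4,000-token config that previously landed in the 5e3 bracket (4 points) now earns the full 6.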
@@ -6292,17 +6294,14 @@ function checkExistence(dir) {
  const opencodeSkills = countFiles(join3(dir, ".opencode", "skills"), /SKILL\.md$/);
  const skillCount = claudeSkills.length + codexSkills.length + opencodeSkills.length;
  const skillBase = skillCount >= 1 ? POINTS_SKILLS_EXIST : 0;
- const skillBonus = Math.min(
- (skillCount - 1) * POINTS_SKILLS_BONUS_PER_EXTRA,
- POINTS_SKILLS_BONUS_CAP
- );
+ const skillBonus = Math.min((skillCount - 1) * POINTS_SKILLS_BONUS_PER_EXTRA, POINTS_SKILLS_BONUS_CAP);
  const skillPoints = skillCount >= 1 ? skillBase + Math.max(0, skillBonus) : 0;
  const maxSkillPoints = POINTS_SKILLS_EXIST + POINTS_SKILLS_BONUS_CAP;
  checks.push({
  id: "skills_exist",
  name: "Skills configured",
  category: "existence",
- maxPoints: skillCount >= 1 ? maxSkillPoints : 0,
+ maxPoints: maxSkillPoints,
  earnedPoints: Math.min(skillPoints, maxSkillPoints),
  passed: skillCount >= 1,
  detail: skillCount === 0 ? "No skills found" : `${skillCount} skill${skillCount === 1 ? "" : "s"} found`,
@@ -6335,7 +6334,7 @@ function checkExistence(dir) {
  id: "mcp_servers",
  name: "MCP servers configured",
  category: "existence",
- maxPoints: mcp.count >= 1 ? POINTS_MCP_SERVERS : 0,
+ maxPoints: POINTS_MCP_SERVERS,
  earnedPoints: mcp.count >= 1 ? POINTS_MCP_SERVERS : 0,
  passed: mcp.count >= 1,
  detail: mcp.count > 0 ? `${mcp.count} server${mcp.count === 1 ? "" : "s"} in ${mcp.sources.join(", ")}` : "No MCP servers configured",
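These `maxPoints` changes (here and in the skills check above) alter the scoring denominator: previously a repo with no skills or MCP servers simply excluded those checks from the maximum, whereas now a missing check counts against the total. A sketch of the effect, assuming the overall score is earned points over max points across checks; the aggregation function and the point values below are illustrative, not taken from the diff:

```javascript
// Hypothetical aggregation -- the real scorer in bin.js is not shown in this hunk.
function overallScore(checks) {
  const earned = checks.reduce((sum, c) => sum + c.earnedPoints, 0);
  const max = checks.reduce((sum, c) => sum + c.maxPoints, 0);
  return Math.round((earned / max) * 100);
}

// Before: a failing MCP check contributed maxPoints 0 (excluded entirely).
const before = [{ earnedPoints: 8, maxPoints: 8 }, { earnedPoints: 0, maxPoints: 0 }];
// After: the same failing check now contributes its full maxPoints (4 here, illustrative).
const after = [{ earnedPoints: 8, maxPoints: 8 }, { earnedPoints: 0, maxPoints: 4 }];
```

With these inputs, `overallScore(before)` is 100 while `overallScore(after)` is 67, so 1.37.0 scores become comparable across repos at the cost of lower numbers for setups without skills or MCP servers.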
@@ -6870,11 +6869,7 @@ init_resolve_caliber();
  init_pre_commit_block();
  function hasPreCommitHook(dir) {
  try {
- const gitDir = execSync12("git rev-parse --git-dir", {
- cwd: dir,
- encoding: "utf-8",
- stdio: ["pipe", "pipe", "pipe"]
- }).trim();
+ const gitDir = execSync12("git rev-parse --git-dir", { cwd: dir, encoding: "utf-8", stdio: ["pipe", "pipe", "pipe"] }).trim();
  const hookPath = join7(gitDir, "hooks", "pre-commit");
  const content = readFileOrNull(hookPath);
  return content ? content.includes("caliber") : false;
@@ -6965,9 +6960,9 @@ function checkBonus(dir) {
  id: "open_skills_format",
  name: "Skills use OpenSkills format",
  category: "bonus",
- maxPoints: totalSkillFiles > 0 ? POINTS_OPEN_SKILLS_FORMAT : 0,
+ maxPoints: POINTS_OPEN_SKILLS_FORMAT,
  earnedPoints: allOpenSkills ? POINTS_OPEN_SKILLS_FORMAT : 0,
- passed: allOpenSkills || totalSkillFiles === 0,
+ passed: allOpenSkills,
  detail: totalSkillFiles === 0 ? "No skills to check" : allOpenSkills ? `All ${totalSkillFiles} skill${totalSkillFiles === 1 ? "" : "s"} use SKILL.md with frontmatter` : `${openSkillsCount}/${totalSkillFiles} use OpenSkills format`,
  suggestion: totalSkillFiles > 0 && !allOpenSkills ? "Migrate skills to .claude/skills/{name}/SKILL.md with YAML frontmatter" : void 0,
  fix: totalSkillFiles > 0 && !allOpenSkills ? {
@@ -9773,22 +9768,11 @@ async function initCommand(options) {
  );
  console.log(chalk14.dim(" Keep your AI agent configs in sync \u2014 automatically."));
  console.log(chalk14.dim(" Works across Claude Code, Cursor, Codex, and GitHub Copilot.\n"));
- console.log(title.bold(" What this does:\n"));
- console.log(
- chalk14.dim(" Caliber reads your project structure (file tree, package.json, etc.)")
- );
- console.log(chalk14.dim(" and generates agent config files (CLAUDE.md, .cursor/rules/, etc.)."));
- console.log(chalk14.dim(" You review all changes before anything is written to disk.\n"));
- console.log(title.bold(" Steps:\n"));
- console.log(
- chalk14.dim(" 1. Connect Pick your LLM provider (or use your existing subscription)")
- );
- console.log(
- chalk14.dim(" 2. Build Scan project, generate configs, install pre-commit sync")
- );
- console.log(
- chalk14.dim(" 3. Review See exactly what changed \u2014 accept, refine, or decline\n")
- );
+ console.log(title.bold(" How it works:\n"));
+ console.log(chalk14.dim(" 1. Connect Link your LLM provider and select your agents"));
+ console.log(chalk14.dim(" 2. Setup Detect stack, install sync hooks & skills"));
+ console.log(chalk14.dim(" 3. Generate Audit existing config or generate from scratch"));
+ console.log(chalk14.dim(" 4. Finalize Review changes and score your setup\n"));
  } else {
  console.log(brand.bold("\n CALIBER") + chalk14.dim(" \u2014 setting up continuous sync\n"));
  }
@@ -9886,10 +9870,11 @@ async function initCommand(options) {
  console.log(chalk14.dim(` Target: ${targetAgent.join(", ")}
  `));
  trackInitAgentSelected(targetAgent, agentAutoDetected);
- console.log(title.bold(" Step 2/3 \u2014 Build\n"));
+ console.log(title.bold(" Step 2/4 \u2014 Setup\n"));
+ console.log(chalk14.dim(" Installing sync infrastructure...\n"));
  const hookResult = installPreCommitHook();
  if (hookResult.installed) {
- console.log(` ${chalk14.green("\u2713")} Pre-commit hook installed`);
+ console.log(` ${chalk14.green("\u2713")} Pre-commit hook installed \u2014 configs sync on every commit`);
  } else if (hookResult.alreadyInstalled) {
  console.log(` ${chalk14.green("\u2713")} Pre-commit hook \u2014 active`);
  }
@@ -9906,7 +9891,11 @@ async function initCommand(options) {
  }
  const skillsWritten = ensureBuiltinSkills2();
  if (skillsWritten.length > 0) {
- console.log(` ${chalk14.green("\u2713")} Agent skills installed`);
+ console.log(
+ ` ${chalk14.green("\u2713")} Agent skills installed \u2014 /setup-caliber, /find-skills, /save-learning`
+ );
+ } else {
+ console.log(` ${chalk14.green("\u2713")} Agent skills \u2014 already installed`);
  }
  const hasLearnableAgent = targetAgent.includes("claude") || targetAgent.includes("cursor");
  if (hasLearnableAgent) {
@@ -9916,8 +9905,19 @@ async function initCommand(options) {
  trackInitLearnEnabled(true);
  }
  console.log("");
+ console.log(chalk14.dim(" New team members can run /setup-caliber inside their coding agent"));
+ console.log(chalk14.dim(" (Claude Code or Cursor) to get set up automatically.\n"));
  const baselineScore = computeLocalScore(process.cwd(), targetAgent);
- log(options.verbose, `Baseline score: ${baselineScore.score}/100`);
+ console.log(chalk14.dim(" Current config score:"));
+ displayScoreSummary(baselineScore);
+ if (options.verbose) {
+ for (const c of baselineScore.checks) {
+ log(
+ options.verbose,
+ ` ${c.passed ? "\u2713" : "\u2717"} ${c.name}: ${c.earnedPoints}/${c.maxPoints}${c.suggestion ? ` \u2014 ${c.suggestion}` : ""}`
+ );
+ }
+ }
  if (report) {
  report.markStep("Baseline scoring");
  report.addSection(
@@ -9943,26 +9943,27 @@ async function initCommand(options) {
  ]);
  const passingCount = baselineScore.checks.filter((c) => c.passed).length;
  const failingCount = baselineScore.checks.filter((c) => !c.passed).length;
- trackInitScoreComputed(baselineScore.score, passingCount, failingCount, false);
  let skipGeneration = false;
- if (hasExistingConfig && baselineScore.score === 100 && !options.force) {
- skipGeneration = true;
+ if (hasExistingConfig && baselineScore.score === 100) {
+ trackInitScoreComputed(baselineScore.score, passingCount, failingCount, true);
+ console.log(chalk14.bold.green("\n Your config is already optimal.\n"));
+ skipGeneration = !options.force;
  } else if (hasExistingConfig && !options.force && !options.autoApprove) {
- const topGains = baselineScore.checks.filter((c) => !c.passed && c.maxPoints > 0).sort((a, b) => b.maxPoints - b.earnedPoints - (a.maxPoints - a.earnedPoints)).slice(0, 3);
- console.log(chalk14.dim(` Config score: ${baselineScore.score}/100
- `));
- if (topGains.length > 0) {
- console.log(chalk14.dim(" Top improvements Caliber can make:"));
- for (const c of topGains) {
- const pts = c.maxPoints - c.earnedPoints;
- console.log(
- chalk14.dim(` +${pts} pts`) + chalk14.white(` ${c.name}`) + (c.suggestion ? chalk14.gray(` \u2014 ${c.suggestion}`) : "")
- );
- }
- console.log("");
- }
- const improveAnswer = await confirm2({ message: "Improve your existing configs?" });
- skipGeneration = !improveAnswer;
+ trackInitScoreComputed(baselineScore.score, passingCount, failingCount, false);
+ console.log(
+ chalk14.dim("\n Sync infrastructure is ready. Caliber can also audit your existing")
+ );
+ console.log(chalk14.dim(" configs and improve them using AI.\n"));
+ const auditAnswer = await promptInput(" Audit and improve your existing config? (Y/n) ");
+ skipGeneration = auditAnswer.toLowerCase() === "n";
+ } else if (!hasExistingConfig && !options.force && !options.autoApprove) {
+ trackInitScoreComputed(baselineScore.score, passingCount, failingCount, false);
+ console.log(chalk14.dim("\n Sync infrastructure is ready. Caliber can also generate tailored"));
+ console.log(chalk14.dim(" CLAUDE.md, Cursor rules, and Codex configs for your project.\n"));
+ const generateAnswer = await promptInput(" Generate agent configs? (Y/n) ");
+ skipGeneration = generateAnswer.toLowerCase() === "n";
+ } else {
+ trackInitScoreComputed(baselineScore.score, passingCount, failingCount, false);
  }
  if (skipGeneration) {
  const {
@@ -10032,6 +10033,7 @@ async function initCommand(options) {
  );
  return;
  }
+ console.log(title.bold("\n Step 3/4 \u2014 Generate\n"));
  const genModelInfo = fastModel ? ` Using ${displayModel} for docs, ${fastModel} for skills` : ` Using ${displayModel}`;
  console.log(chalk14.dim(genModelInfo + "\n"));
  if (report) report.markStep("Generation");
@@ -10270,7 +10272,7 @@ async function initCommand(options) {
  options.verbose,
  `Generation completed: ${elapsedMs}ms, stopReason: ${genStopReason || "end_turn"}`
  );
- console.log(title.bold(" Step 3/3 \u2014 Done\n"));
+ console.log(title.bold(" Step 4/4 \u2014 Finalize\n"));
  const setupFiles = collectSetupFiles(generatedSetup, targetAgent);
  const staged = stageFiles(setupFiles, process.cwd());
  const totalChanges = staged.newFiles + staged.modifiedFiles;
@@ -10468,26 +10470,31 @@ ${agentRefs.join(" ")}
  console.log(chalk14.bold.green("\n Caliber is set up!\n"));
  console.log(chalk14.bold(" What's configured:\n"));
  console.log(
- ` ${done} Continuous sync ${chalk14.dim("pre-commit hook keeps all agent configs in sync")}`
+ ` ${done} Continuous sync ${chalk14.dim("pre-commit hook keeps all agent configs in sync")}`
+ );
+ console.log(
+ ` ${done} Config generated ${title(`${bin} score`)} ${chalk14.dim("for full breakdown")}`
  );
- console.log(` ${done} Config generated ${chalk14.dim(`score: ${afterScore.score}/100`)}`);
  console.log(
- ` ${done} Agent skills ${chalk14.dim("/setup-caliber for new team members")}`
+ ` ${done} Agent skills ${chalk14.dim("/setup-caliber for new team members")}`
  );
  if (hasLearnableAgent) {
- console.log(` ${done} Session learning ${chalk14.dim("learns from your corrections")}`);
+ console.log(
+ ` ${done} Session learning ${chalk14.dim("agent learns from your feedback")}`
+ );
  }
  if (communitySkillsInstalled > 0) {
  console.log(
- ` ${done} Community skills ${chalk14.dim(`${communitySkillsInstalled} installed for your stack`)}`
+ ` ${done} Community skills ${chalk14.dim(`${communitySkillsInstalled} skill${communitySkillsInstalled > 1 ? "s" : ""} installed for your stack`)}`
  );
  }
  console.log(chalk14.bold("\n What happens next:\n"));
- console.log(chalk14.dim(" Every commit syncs your agent configs automatically."));
- console.log(chalk14.dim(" New team members run /setup-caliber to get set up instantly.\n"));
- console.log(` ${title(`${bin} score`)} Full scoring breakdown`);
- console.log(` ${title(`${bin} skills`)} Find community skills`);
- console.log(` ${title(`${bin} undo`)} Revert changes`);
+ console.log(chalk14.dim(" Every commit will automatically sync your agent configs."));
+ console.log(chalk14.dim(" New team members can run /setup-caliber to get set up instantly.\n"));
+ console.log(chalk14.bold(" Explore:\n"));
+ console.log(` ${title(`${bin} score`)} Full scoring breakdown with improvement tips`);
+ console.log(` ${title(`${bin} skills`)} Find community skills for your stack`);
+ console.log(` ${title(`${bin} undo`)} Revert all changes from this run`);
  console.log(` ${title(`${bin} uninstall`)} Remove Caliber completely`);
  console.log("");
  if (options.showTokens) {
@@ -10836,27 +10843,18 @@ async function scoreCommand(options) {
  const separator = chalk18.gray(" " + "\u2500".repeat(53));
  console.log(separator);
  const bin = resolveCaliber();
- const failing = result.checks.filter((c) => !c.passed && c.maxPoints > 0).sort((a, b) => b.maxPoints - b.earnedPoints - (a.maxPoints - a.earnedPoints));
- if (result.score < 70 && failing.length > 0) {
- const topFix = failing[0];
- const pts = topFix.maxPoints - topFix.earnedPoints;
+ if (result.score < 40) {
  console.log(
- chalk18.gray(" Biggest gain: ") + chalk18.yellow(`+${pts} pts`) + chalk18.gray(` from "${topFix.name}"`) + (topFix.suggestion ? chalk18.gray(` \u2014 ${topFix.suggestion}`) : "")
+ chalk18.gray(" Run ") + chalk18.hex("#83D1EB")(`${bin} init`) + chalk18.gray(" to generate a complete, optimized config.")
  );
+ } else if (result.score < 70) {
  console.log(
- chalk18.gray(" Run ") + chalk18.hex("#83D1EB")(`${bin} init`) + chalk18.gray(" to auto-fix these.")
- );
- } else if (failing.length > 0) {
- console.log(
- chalk18.green(" Looking good!") + chalk18.gray(
- ` ${failing.length} check${failing.length === 1 ? "" : "s"} can still be improved.`
- )
+ chalk18.gray(" Run ") + chalk18.hex("#83D1EB")(`${bin} init`) + chalk18.gray(" to improve your config.")
  );
+ } else {
  console.log(
- chalk18.gray(" Run ") + chalk18.hex("#83D1EB")(`${bin} init`) + chalk18.gray(" to improve, or ") + chalk18.hex("#83D1EB")(`${bin} regenerate`) + chalk18.gray(" to rebuild from scratch.")
+ chalk18.green(" Looking good!") + chalk18.gray(" Run ") + chalk18.hex("#83D1EB")(`${bin} regenerate`) + chalk18.gray(" to rebuild from scratch.")
  );
- } else {
- console.log(chalk18.green(" Perfect score! Your agent configs are fully optimized."));
  }
  console.log("");
  }
@@ -11062,6 +11060,13 @@ Changed files: ${diff.changedFiles.join(", ")}`);
  parts.push("\n[.cursorrules]");
  parts.push(existingDocs.cursorrules);
  }
+ if (existingDocs.claudeSkills?.length) {
+ for (const skill of existingDocs.claudeSkills) {
+ parts.push(`
+ [.claude/skills/${skill.filename}]`);
+ parts.push(skill.content);
+ }
+ }
  if (existingDocs.cursorRules?.length) {
  for (const rule of existingDocs.cursorRules) {
  if (rule.filename.startsWith("caliber-")) continue;
@@ -11324,6 +11329,15 @@ function clearRefreshError() {
  } catch {
  }
  }
+ function detectSyncedAgents(writtenFiles) {
+ const agents = [];
+ const joined = writtenFiles.join(" ");
+ if (joined.includes("CLAUDE.md") || joined.includes(".claude/")) agents.push("Claude Code");
+ if (joined.includes(".cursor/") || joined.includes(".cursorrules")) agents.push("Cursor");
+ if (joined.includes("copilot-instructions") || joined.includes(".github/instructions/")) agents.push("Copilot");
+ if (joined.includes("AGENTS.md") || joined.includes(".agents/")) agents.push("Codex");
+ return agents;
+ }
  function log2(quiet, ...args) {
  if (!quiet) console.log(...args);
  }
@@ -11486,8 +11500,18 @@ async function refreshSingleRepo(repoDir, options) {
  }
  recordScore(postScore, "refresh");
  spinner?.succeed(`${prefix}Updated ${written.length} doc${written.length === 1 ? "" : "s"}`);
+ const fileChangesMap = new Map(
+ (response.fileChanges || []).map((fc) => [fc.file, fc.description])
+ );
  for (const file of written) {
- log2(quiet, ` ${chalk19.green("\u2713")} ${file}`);
+ const desc = fileChangesMap.get(file);
+ const suffix = desc ? chalk19.dim(` \u2014 ${desc}`) : "";
+ log2(quiet, ` ${chalk19.green("\u2713")} ${file}${suffix}`);
+ }
+ const agents = detectSyncedAgents(written);
+ if (agents.length > 1) {
+ log2(quiet, chalk19.cyan(`
+ ${agents.length} agent formats in sync (${agents.join(", ")})`));
  }
  if (response.changesSummary) {
  log2(quiet, chalk19.dim(`
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@rely-ai/caliber",
- "version": "1.36.1",
+ "version": "1.37.0-dev.1774893051",
  "description": "AI context infrastructure for coding agents — keeps CLAUDE.md, Cursor rules, and skills in sync as your codebase evolves",
  "type": "module",
  "bin": {