@rely-ai/caliber 1.5.5 → 1.6.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (3)
  1. package/README.md +74 -63
  2. package/dist/bin.js +23 -5
  3. package/package.json +1 -1
package/README.md CHANGED
@@ -8,13 +8,27 @@
  <a href="https://nodejs.org"><img src="https://img.shields.io/node/v/@rely-ai/caliber" alt="node"></a>
  </p>
 
- <p align="center"><strong>Analyze your codebase. Generate optimized AI agent configs. One command.</strong></p>
+ <p align="center"><strong>Improve your agentic development experience with one command</strong></p>
 
  ---
 
  Caliber scans your project — languages, frameworks, dependencies, file structure — and generates tailored config files for **Claude Code**, **Cursor**, and **OpenAI Codex**. If configs already exist, it audits them against your actual codebase and suggests targeted improvements.
 
- 🔑 **No API key required** — use your existing Claude Code or Cursor subscription. Or bring your own key (Anthropic, OpenAI, Vertex AI, any OpenAI-compatible endpoint).
+ 🔑 **API Key Optional** — use your existing Claude Code or Cursor subscription. Or bring your own key (Anthropic, OpenAI, Vertex AI, any OpenAI-compatible endpoint).
+
+ 🧠 **BYOAI** — Caliber works where you do. All LLM processing runs through your own models — no data is sent to third parties.
+
+ ### Why Caliber?
+
+ Caliber **generates, audits, and maintains** your agentic development sessions.
+
+ - 🏗️ **Generates, not just score** — builds your CLAUDE.md, Cursor rules, AGENTS.md, skills, and MCP configs from scratch
+ - 🔀 **Multi-agent** — one command sets up Claude Code, Cursor, and Codex together
+ - 🌍 **Any codebase** — TypeScript, Python, Go, Rust, Terraform, Java, Ruby — detection is fully LLM-driven, not hardcoded
+ - 🧩 **Finds and installs skills** — searches community registries and installs relevant skills for your stack
+ - 🔗 **Discovers MCP servers** — auto-detects tools your project uses and installs matching MCP servers
+ - 🔄 **Keeps configs fresh** — git hooks and session hooks auto-update your docs as your code changes
+ - ↩️ **Fully reversible** — automatic backups, score regression guard, and one-command undo
 
  ## 🚀 Quick Start
 
@@ -32,6 +46,7 @@ caliber onboard
  ```
 
  > **Already have an API key?** Skip the interactive setup:
+ >
  > ```bash
  > export ANTHROPIC_API_KEY=sk-ant-...
  > npx @rely-ai/caliber onboard
@@ -65,18 +80,18 @@ Caliber works on **any codebase** — TypeScript, Python, Go, Rust, Terraform, J
 
  ### 📦 What It Generates
 
- | File | Platform | Purpose |
- |------|----------|---------|
- | `CLAUDE.md` | Claude Code | Project context — build/test commands, architecture, conventions |
- | `.cursor/rules/*.mdc` | Cursor | Modern rules with frontmatter (description, globs, alwaysApply) |
- | `.cursorrules` | Cursor | Legacy rules file (if no `.cursor/rules/` exists) |
- | `AGENTS.md` | Codex | Project context for OpenAI Codex |
+ | File                        | Platform    | Purpose                                                             |
+ | --------------------------- | ----------- | ------------------------------------------------------------------- |
+ | `CLAUDE.md`                 | Claude Code | Project context — build/test commands, architecture, conventions    |
+ | `.cursor/rules/*.mdc`       | Cursor      | Modern rules with frontmatter (description, globs, alwaysApply)     |
+ | `.cursorrules`              | Cursor      | Legacy rules file (if no `.cursor/rules/` exists)                   |
+ | `AGENTS.md`                 | Codex       | Project context for OpenAI Codex                                    |
  | `.claude/skills/*/SKILL.md` | Claude Code | Reusable skill files following [OpenSkills](https://agentskills.io) |
- | `.cursor/skills/*/SKILL.md` | Cursor | Skills for Cursor |
- | `.agents/skills/*/SKILL.md` | Codex | Skills for Codex |
- | `.mcp.json` | Claude Code | MCP server configurations |
- | `.cursor/mcp.json` | Cursor | MCP server configurations |
- | `.claude/settings.json` | Claude Code | Permissions and hooks |
+ | `.cursor/skills/*/SKILL.md` | Cursor      | Skills for Cursor                                                   |
+ | `.agents/skills/*/SKILL.md` | Codex       | Skills for Codex                                                    |
+ | `.mcp.json`                 | Claude Code | MCP server configurations                                           |
+ | `.cursor/mcp.json`          | Cursor      | MCP server configurations                                           |
+ | `.claude/settings.json`     | Claude Code | Permissions and hooks                                               |
 
  If these files already exist, Caliber audits them and suggests improvements — keeping what works, fixing what's stale, adding what's missing.
 
@@ -91,17 +106,17 @@ Every change Caliber makes is reversible:
 
  ## 📋 Commands
 
- | Command | Description |
- |---------|-------------|
- | `caliber onboard` | 🏁 Onboard your project — full 6-step wizard |
- | `caliber score` | 📊 Score your config quality (deterministic, no LLM) |
- | `caliber skills` | 🧩 Discover and install community skills |
- | `caliber regenerate` | 🔄 Re-analyze and regenerate your setup |
- | `caliber refresh` | 🔃 Update docs based on recent code changes |
- | `caliber hooks` | 🪝 Manage auto-refresh hooks |
- | `caliber config` | ⚙️ Configure LLM provider, API key, and model |
- | `caliber status` | 📌 Show current setup status |
- | `caliber undo` | ↩️ Revert all changes made by Caliber |
+ | Command              | Description                                          |
+ | -------------------- | ---------------------------------------------------- |
+ | `caliber onboard`    | 🏁 Onboard your project — full 6-step wizard         |
+ | `caliber score`      | 📊 Score your config quality (deterministic, no LLM) |
+ | `caliber skills`     | 🧩 Discover and install community skills             |
+ | `caliber regenerate` | 🔄 Re-analyze and regenerate your setup              |
+ | `caliber refresh`    | 🔃 Update docs based on recent code changes          |
+ | `caliber hooks`      | 🪝 Manage auto-refresh hooks                         |
+ | `caliber config`     | ⚙️ Configure LLM provider, API key, and model        |
+ | `caliber status`     | 📌 Show current setup status                         |
+ | `caliber undo`       | ↩️ Revert all changes made by Caliber                |
 
  ### Examples
 
@@ -134,7 +149,7 @@ caliber undo # Revert everything
 
  ## 📊 Scoring
 
- `caliber score` gives you a deterministic quality score no LLM calls, no network, instant results.
+ `caliber score` gives you a deterministic quality score using industry best practices.
 
  ```
  Config Score: 87/100 (A) ✨
@@ -147,37 +162,33 @@ caliber undo # Revert everything
  BONUS 5/5 ████████████████████████
  ```
 
- | Category | Points | What it checks |
- |----------|--------|----------------|
- | **Files & Setup** | 25 | Config files exist, skills present, cross-platform parity |
- | **Quality** | 25 | Has build/test commands, not bloated, no vague text, no duplicates |
- | **Coverage** | 20 | Mentions actual dependencies and services |
- | **Accuracy** | 15 | Documented commands and file paths are valid |
- | **Freshness & Safety** | 10 | Recently updated, no leaked secrets, permissions set |
- | **Bonus** | 5 | Auto-refresh hooks, AGENTS.md, OpenSkills format |
+ | Category               | Points | What it checks                                                     |
+ | ---------------------- | ------ | ------------------------------------------------------------------ |
+ | **Files & Setup**      | 25     | Config files exist, skills present, cross-platform parity          |
+ | **Quality**            | 25     | Has build/test commands, not bloated, no vague text, no duplicates |
+ | **Coverage**           | 20     | Mentions actual dependencies and services                          |
+ | **Accuracy**           | 15     | Documented commands and file paths are valid                       |
+ | **Freshness & Safety** | 10     | Recently updated, no leaked secrets, permissions set               |
+ | **Bonus**              | 5      | Auto-refresh hooks, AGENTS.md, OpenSkills format                   |
 
  ## 🧩 Skills
 
- Caliber searches three community registries and scores results against your project:
-
- - 🌐 [skills.sh](https://skills.sh) — OpenSkills registry
- - 🔧 [tessl.io](https://tessl.io) — Tessl skill registry
- - 📚 [Awesome Claude Code](https://github.com/hesreallyhim/awesome-claude-code) — Curated list
+ Caliber searches three community registries and scores results against your project
 
  ```bash
  caliber skills
  ```
 
- Skills are scored by LLM relevance (0–100) based on your project's actual tech stack, then you pick which ones to install via an interactive selector. Installed skills follow the [OpenSkills](https://agentskills.io) standard with YAML frontmatter.
+ Skills are scored by LLM relevance (0–100) based on your project's actual tech stack and development patterns, then you pick which ones to install via an interactive selector. Installed skills follow the [OpenSkills](https://agentskills.io) standard with YAML frontmatter.
 
  ## 🔄 Auto-Refresh
 
  Keep your agent configs in sync with your codebase automatically:
 
- | Hook | Trigger | What it does |
- |------|---------|--------------|
- | 🤖 **Claude Code** | End of each session | Runs `caliber refresh` and updates docs |
- | 📝 **Git pre-commit** | Before each commit | Refreshes docs and stages updated files |
+ | Hook                  | Trigger             | What it does                            |
+ | --------------------- | ------------------- | --------------------------------------- |
+ | 🤖 **Claude Code**    | End of each session | Runs `caliber refresh` and updates docs |
+ | 📝 **Git pre-commit** | Before each commit  | Refreshes docs and stages updated files |
 
  Set up hooks interactively with `caliber hooks`, or non-interactively:
 
@@ -190,14 +201,14 @@ The refresh command analyzes your git diff (committed, staged, and unstaged chan
 
  ## 🔌 LLM Providers
 
- | Provider | Setup | Default Model |
- |----------|-------|---------------|
- | 🟣 **Claude Code** (your seat) | `caliber config` → Claude Code | Inherited from Claude Code |
- | 🔵 **Cursor** (your seat) | `caliber config` → Cursor | Inherited from Cursor |
- | 🟠 **Anthropic** | `export ANTHROPIC_API_KEY=sk-ant-...` | `claude-sonnet-4-6` |
- | 🟢 **OpenAI** | `export OPENAI_API_KEY=sk-...` | `gpt-4.1` |
- | 🔴 **Vertex AI** | `export VERTEX_PROJECT_ID=my-project` | `claude-sonnet-4-6` |
- | ⚪ **Custom endpoint** | `OPENAI_API_KEY` + `OPENAI_BASE_URL` | `gpt-4.1` |
+ | Provider                       | Setup                                 | Default Model              |
+ | ------------------------------ | ------------------------------------- | -------------------------- |
+ | 🟣 **Claude Code** (your seat) | `caliber config` → Claude Code        | Inherited from Claude Code |
+ | 🔵 **Cursor** (your seat)      | `caliber config` → Cursor             | Inherited from Cursor      |
+ | 🟠 **Anthropic**               | `export ANTHROPIC_API_KEY=sk-ant-...` | `claude-sonnet-4-6`        |
+ | 🟢 **OpenAI**                  | `export OPENAI_API_KEY=sk-...`        | `gpt-4.1`                  |
+ | 🔴 **Vertex AI**               | `export VERTEX_PROJECT_ID=my-project` | `claude-sonnet-4-6`        |
+ | ⚪ **Custom endpoint**          | `OPENAI_API_KEY` + `OPENAI_BASE_URL`  | `gpt-4.1`                  |
 
  Override the model for any provider: `export CALIBER_MODEL=<model-name>` or use `caliber config`.
 
@@ -225,18 +236,18 @@ export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
  <details>
  <summary>Environment variables reference</summary>
 
- | Variable | Purpose |
- |----------|---------|
- | `ANTHROPIC_API_KEY` | Anthropic API key |
- | `OPENAI_API_KEY` | OpenAI API key |
- | `OPENAI_BASE_URL` | Custom OpenAI-compatible endpoint |
- | `VERTEX_PROJECT_ID` | GCP project ID for Vertex AI |
- | `VERTEX_REGION` | Vertex AI region (default: `us-east5`) |
- | `VERTEX_SA_CREDENTIALS` | Service account JSON (inline) |
- | `GOOGLE_APPLICATION_CREDENTIALS` | Service account JSON file path |
- | `CALIBER_USE_CLAUDE_CLI` | Use Claude Code CLI (`1` to enable) |
- | `CALIBER_USE_CURSOR_SEAT` | Use Cursor subscription (`1` to enable) |
- | `CALIBER_MODEL` | Override model for any provider |
+ | Variable                         | Purpose                                 |
+ | -------------------------------- | --------------------------------------- |
+ | `ANTHROPIC_API_KEY`              | Anthropic API key                       |
+ | `OPENAI_API_KEY`                 | OpenAI API key                          |
+ | `OPENAI_BASE_URL`                | Custom OpenAI-compatible endpoint       |
+ | `VERTEX_PROJECT_ID`              | GCP project ID for Vertex AI            |
+ | `VERTEX_REGION`                  | Vertex AI region (default: `us-east5`)  |
+ | `VERTEX_SA_CREDENTIALS`          | Service account JSON (inline)           |
+ | `GOOGLE_APPLICATION_CREDENTIALS` | Service account JSON file path          |
+ | `CALIBER_USE_CLAUDE_CLI`         | Use Claude Code CLI (`1` to enable)     |
+ | `CALIBER_USE_CURSOR_SEAT`        | Use Cursor subscription (`1` to enable) |
+ | `CALIBER_MODEL`                  | Override model for any provider         |
 
  </details>
 
package/dist/bin.js CHANGED
@@ -520,6 +520,11 @@ var DEFAULT_MODELS = {
  cursor: "default",
  "claude-cli": "default"
  };
+ var DEFAULT_FAST_MODELS = {
+ anthropic: "claude-haiku-4-5-20251001",
+ vertex: "claude-haiku-4-5-20251001",
+ openai: "gpt-4.1-mini"
+ };
  function loadConfig() {
  const envConfig = resolveFromEnv();
  if (envConfig) return envConfig;
@@ -591,7 +596,12 @@ function getConfigFilePath() {
  return CONFIG_FILE;
  }
  function getFastModel() {
- return process.env.CALIBER_FAST_MODEL || process.env.ANTHROPIC_SMALL_FAST_MODEL || void 0;
+ if (process.env.CALIBER_FAST_MODEL) return process.env.CALIBER_FAST_MODEL;
+ if (process.env.ANTHROPIC_SMALL_FAST_MODEL) return process.env.ANTHROPIC_SMALL_FAST_MODEL;
+ const config = loadConfig();
+ if (config?.fastModel) return config.fastModel;
+ if (config?.provider) return DEFAULT_FAST_MODELS[config.provider];
+ return void 0;
  }
 
  // src/llm/anthropic.ts
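The new `getFastModel` resolution order can be sketched in isolation. This is a minimal stand-in, not the shipped function: the real one reads `process.env` and `loadConfig()` directly, while here both are passed as parameters so the precedence chain is testable on its own.

```javascript
// Provider defaults copied from the diff's DEFAULT_FAST_MODELS.
const DEFAULT_FAST_MODELS = {
  anthropic: "claude-haiku-4-5-20251001",
  vertex: "claude-haiku-4-5-20251001",
  openai: "gpt-4.1-mini"
};

// Sketch of the 1.6.0 fallback chain: explicit env override first,
// then Claude Code's small-model variable, then the saved config's
// fastModel, then the per-provider default, else undefined.
function getFastModel(env, config) {
  if (env.CALIBER_FAST_MODEL) return env.CALIBER_FAST_MODEL;
  if (env.ANTHROPIC_SMALL_FAST_MODEL) return env.ANTHROPIC_SMALL_FAST_MODEL;
  if (config?.fastModel) return config.fastModel;
  if (config?.provider) return DEFAULT_FAST_MODELS[config.provider];
  return undefined;
}
```

Note that providers without an entry in `DEFAULT_FAST_MODELS` (e.g. `cursor` or `claude-cli`, whose model is inherited from the seat) still resolve to `undefined`, so 1.5.5 behavior is preserved for them.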
@@ -4361,6 +4371,7 @@ async function scoreWithLLM(candidates, toolDeps) {
  const vendorTag = c.vendor ? " [VENDOR/OFFICIAL]" : "";
  return `${i}. "${c.name}"${vendorTag} (${c.stars} stars) \u2014 ${c.description.slice(0, 100)}`;
  }).join("\n");
+ const fastModel = getFastModel();
  const scored = await llmJsonCall({
  system: SCORE_MCP_PROMPT,
  prompt: `TOOL DEPENDENCIES IN PROJECT:
@@ -4368,7 +4379,8 @@ ${toolDeps.join(", ")}
 
  MCP SERVER CANDIDATES:
  ${candidateList}`,
- maxTokens: 4e3
+ maxTokens: 4e3,
+ ...fastModel ? { model: fastModel } : {}
  });
  if (!Array.isArray(scored)) return [];
  return scored.filter((s) => s.score >= 60 && s.index >= 0 && s.index < candidates.length).sort((a, b) => b.score - a.score).slice(0, 5).map((s) => ({
@@ -4401,13 +4413,15 @@ async function fetchReadme(repoFullName) {
  async function extractMcpConfig(readme, serverName) {
  try {
  const truncated = readme.length > 15e3 ? readme.slice(0, 15e3) : readme;
+ const fastModel = getFastModel();
  const result = await llmJsonCall({
  system: EXTRACT_CONFIG_PROMPT,
  prompt: `MCP Server: ${serverName}
 
  README:
  ${truncated}`,
- maxTokens: 2e3
+ maxTokens: 2e3,
+ ...fastModel ? { model: fastModel } : {}
  });
  if (!result || !result.command) return null;
  return {
@@ -4943,6 +4957,7 @@ async function searchAllProviders(technologies, platform) {
  }
  async function scoreWithLLM2(candidates, projectContext, technologies) {
  const candidateList = candidates.map((c, i) => `${i}. "${c.name}" \u2014 ${c.reason || "no description"}`).join("\n");
+ const fastModel = getFastModel();
  const scored = await llmJsonCall({
  system: `You evaluate whether AI agent skills and tools are relevant to a specific software project.
  Given a project context and a list of candidates, score each one's relevance from 0-100 and provide a brief reason (max 80 chars).
@@ -4970,7 +4985,8 @@ ${technologies.join(", ")}
 
  CANDIDATES:
  ${candidateList}`,
- maxTokens: 8e3
+ maxTokens: 8e3,
+ ...fastModel ? { model: fastModel } : {}
  });
  if (!Array.isArray(scored)) return [];
  return scored.filter((s) => s.score >= 60 && s.index >= 0 && s.index < candidates.length).sort((a, b) => b.score - a.score).slice(0, 20).map((s) => ({
@@ -6945,10 +6961,12 @@ ${skillsSummary}`);
  const prompt = `${contextParts.length ? contextParts.join("\n\n---\n\n") + "\n\n---\n\n" : ""}## Tool Events from Session (${fittedEvents.length} events)
 
  ${eventsText}`;
+ const fastModel = getFastModel();
  const raw = await llmCall({
  system: LEARN_SYSTEM_PROMPT,
  prompt,
- maxTokens: 4096
+ maxTokens: 4096,
+ ...fastModel ? { model: fastModel } : {}
  });
  return parseAnalysisResponse(raw);
  }
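Each of the call sites above uses the same conditional-spread idiom to pass the fast model only when one resolved. A small stand-alone sketch of the idiom (`buildOptions` is a hypothetical helper, not part of the package; the real code spreads inline into the `llmJsonCall`/`llmCall` argument):

```javascript
// Only contribute a `model` key when a fast model was resolved; when
// fastModel is undefined, the spread adds nothing and the provider's
// normal default model applies, matching 1.5.5 behavior.
function buildOptions(fastModel, maxTokens) {
  return {
    maxTokens,
    ...(fastModel ? { model: fastModel } : {})
  };
}
```

This is why every hunk guards with `...fastModel ? { model: fastModel } : {}` rather than writing `model: fastModel` unconditionally: an explicit `model: undefined` key could shadow a downstream default, while spreading an empty object leaves the options untouched.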
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@rely-ai/caliber",
- "version": "1.5.5",
+ "version": "1.6.0",
  "description": "Analyze your codebase and generate optimized AI agent configs (CLAUDE.md, .cursorrules, skills) — no API key needed",
  "type": "module",
  "bin": {