cto-ai-cli 1.3.0 β†’ 3.0.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,326 +1,252 @@
- # πŸ”§ CTO β€” Claude Token Optimizer
+ # CTO β€” Your AI is reading too much code. We fix that.
 
- > Optimize your token usage with Claude Code without modifying your projects.
+ > **Early access** β€” This is a test version. We'd love your feedback.
 
  [![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](LICENSE)
- [![Node.js](https://img.shields.io/badge/node-%3E%3D18-brightgreen.svg)](https://nodejs.org)
- [![TypeScript](https://img.shields.io/badge/TypeScript-5.4-blue.svg)](https://www.typescriptlang.org)
-
- CTO analyzes your codebase, classifies files into **hot/warm/cold tiers** based on recency and dependency structure, and generates optimized `CLAUDE.md` and `.claudeignore` files β€” so Claude Code reads only what matters. Includes an **MCP Server** for native Claude Code integration and **enterprise-grade security** with secret detection, audit logging, and data integrity verification.
-
- ## ✨ Features
-
- - **πŸ” Token Analysis** β€” Scan your project and estimate token usage per file (chars/4 or tiktoken)
- - **πŸ“Š File Tiering** β€” Classify files as πŸ”₯ hot / 🌑️ warm / ❄️ cold based on recency + AST analysis
- - **🧠 AST Analysis** β€” Dependency graph, hub file detection, cyclomatic complexity via ts-morph
- - **🎯 Smart Model Routing** β€” Suggests Opus/Sonnet/Haiku based on file complexity
- - **πŸ“ CLAUDE.md Generation** β€” Auto-generate context files optimized for Claude Code
- - **🚫 .claudeignore Generation** β€” Keep cold files out of Claude's context
- - **πŸ‘οΈ Watch Mode** β€” Auto-recalculate tiers and regenerate artifacts when files change
- - **πŸ“ Session Tracking** β€” Track Claude Code sessions, log file reads, detect waste patterns
- - **πŸ“Š Dashboard** β€” Terminal dashboard with metrics, savings ratio, and suggestions
- - **πŸ“ˆ Reports** β€” Weekly reports, project comparison, CSV/JSON export
- - **πŸ”Œ MCP Server** β€” Native Claude Code integration via Model Context Protocol
- - **πŸ“‹ Prompt Templates** β€” Pre-built prompts optimized by task type (debug, review, refactor, test)
- - **πŸ” Secret Detection** β€” Scans for API keys, tokens, passwords, connection strings before generating artifacts
- - **πŸ“ Audit Logging** β€” Immutable, integrity-verified audit trail of every CTO action
- - **πŸ›‘οΈ Data Integrity** β€” SHA-256 verification of artifacts, backups, and sessions
- - **πŸͺ§ CI/CD Validation** β€” `cto validate` for automated security checks in pipelines
- - **βœ‚οΈ Smart Context Pruning** β€” Signatures-only for warm files, skeleton for cold β€” 60-80% token savings
- - **πŸ”€ Git-Aware Tiering** β€” Uses `git diff`/blame for precise tier classification, auto-promotes changed files
- - **πŸ’° Cost Estimation** β€” Real dollar savings per session, weekly/monthly/yearly projections
- - **🎯 Token Budget Optimizer** β€” Knapsack algorithm selects optimal files within a token budget
- - **πŸ€– Multi-AI Generator** β€” Generate context for Claude, Cursor, Copilot, and Gemini
- - **πŸ”€ PR Context** β€” Focused context for code reviews with dependency-aware file selection
- - **πŸ’‘ Explain** β€” Transparent tier reasoning showing all factors per file
- - **🎯 Prompt Engineering** β€” Enhanced prompts with role priming, chain-of-thought, constraints, and anti-hallucination
- - **🧠 Smart Model Routing** β€” Auto-select Haiku/Sonnet/Opus based on task complexity
- - **πŸ“œ Specification-Driven Development** β€” Extract specs, generate contract documents, validate implementations
- - **πŸ“ Project-Local Config** β€” `cto init --local` creates `.cto/` in project root, shareable via git
- - **πŸ”„ Reversible** β€” Every apply creates a backup; revert anytime
- - **πŸ“¦ Autocontained** β€” All CTO data in `~/.config/cto/` or `.cto/` in project root
- - **βš™οΈ Configurable** β€” YAML config with global, per-project, and local overrides
-
- ## πŸ“¦ Installation
+ [![Tests](https://img.shields.io/badge/tests-433_passing-brightgreen.svg)](#)
+
+ ## Try it now (zero install)
 
  ```bash
- # From source
- git clone <repo-url>
- cd cto
- npm install
- npm run build
- npm link
+ npx cto-score
+ ```
+
+ That's it. Run it on any project. You'll see something like this:
 
- # Coming soon: npm i -g @cto/cli
+ ```
+ ⚑ cto-score β€” analyzing your project...
+
+ ╔══════════════════════════════════════════════════╗
+ β•‘                                                  β•‘
+ β•‘  🟒 Context Scoreβ„’  87 / 100   Grade: A-         β•‘
+ β•‘                                                  β•‘
+ β•‘  Efficiency    β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‘β–‘β–‘β–‘β–‘   74%      β•‘
+ β•‘  Coverage      β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ  100%      β•‘
+ β•‘  Risk Control  β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ  100%      β•‘
+ β•‘                                                  β•‘
+ β•‘  πŸ’° vs. Sending Everything:                      β•‘
+ β•‘     Tokens saved:    289K (85%)                  β•‘
+ β•‘     Monthly savings: ~$695                       β•‘
+ β•‘                                                  β•‘
+ β•šβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•
+
+ Scanned in 11.7s Β· 177 files Β· 340K tokens
  ```
 
- ## πŸš€ Quick Start
+ Run `npx cto-score --benchmark` to see how CTO compares to naive (alphabetical) and random file selection.
 
- ```bash
- # 1. Initialize CTO (interactive wizard)
- cto init
+ No data leaves your machine. No API keys. MIT licensed.
 
- # 2. Analyze your project
- cto analyze /path/to/project
+ ---
 
- # 3. View file tiers
- cto tiers /path/to/project
+ ## What problem does CTO solve?
 
- # 4. Generate optimized artifacts
- cto generate /path/to/project
+ When you ask an AI assistant to help with code, it needs context β€” your files. The question is: **which files?**
 
- # 5. Preview what would change
- cto diff claude-md /path/to/project
+ **Most tools today** either send everything (expensive, noisy) or pick files based on what's open (misses dependencies). Neither approach is great.
 
- # 6. Apply to your project (with confirmation)
- cto apply claude-md /path/to/project
- ```
+ **CTO analyzes your project** β€” dependencies, file importance, risk of excluding each file β€” and picks the best subset that fits your token budget. It's like a smart assistant that knows which files matter for each task.
 
- ## πŸ“– Commands
-
- | Command | Description |
- |---------|-------------|
- | `cto init` | Interactive setup wizard |
- | `cto analyze [path]` | Analyze token usage (read-only) |
- | `cto tiers [path]` | Show hot/warm/cold file tiers |
- | `cto generate [path]` | Generate CLAUDE.md & .claudeignore |
- | `cto show <artifact>` | Preview a generated artifact |
- | `cto diff <artifact>` | Show what would change before applying |
- | `cto apply <artifact>` | Apply artifact to project (with backup) |
- | `cto revert [artifact]` | Undo last apply using backup |
- | `cto clean [path]` | Remove all CTO data for a project |
- | `cto prompts [path]` | Show/generate optimized prompt templates |
- | `cto config [path]` | Show current configuration |
- | `cto deps [path]` | Show dependency graph, hub files & complexity |
- | `cto watch [path]` | Watch for changes, auto-recalculate tiers |
- | `cto session start\|end\|current\|list\|log` | Track Claude Code sessions |
- | `cto dashboard [path]` | Terminal dashboard with metrics |
- | `cto report weekly\|project\|export` | Usage reports and data export |
- | `cto doctor [path]` | Health check & security audit |
- | `cto audit log\|verify\|purge` | View and manage audit trail |
- | `cto validate [path]` | CI/CD validation & secret scan |
- | `cto prune preview\|file` | Smart context pruning preview |
- | `cto costs estimate\|history\|pricing` | Token cost estimation & savings |
- | `cto context budget\|pr` | Budget optimizer & PR-focused context |
- | `cto multi-gen generate\|list` | Generate for Claude/Cursor/Copilot/Gemini |
- | `cto explain <file>` | Explain why a file is in a specific tier |
- | `cto route recommend\|tasks` | Smart model routing per task type |
- | `cto multi-gen generate --enhanced` | Prompt-engineered context generation |
- | `cto sdd extract\|spec\|validate` | Specification-Driven Development |
- | `cto init --local` | Create .cto/ in project root (team-shareable) |
-
- **Artifact types:** `claude-md`, `claudeignore`, `all`
-
- ## βš™οΈ Configuration
-
- CTO uses YAML config files:
-
- - **Global:** `~/.config/cto/config.yaml`
- - **Per-project:** `~/.config/cto/projects/<hash>/config.yaml`
-
- ```yaml
- version: "1.3.0"
- model: sonnet              # sonnet | opus | haiku
- tokenEstimation: tiktoken  # tiktoken (accurate) | chars4 (fast)
-
- tiering:
-   hotDays: 3       # Files modified within N days = hot
-   warmDays: 14     # Files modified within N days = warm
-   hotTokenLimit: 50000
-   warmTokenLimit: 200000
-
- ignoreDirs:
-   - node_modules
-   - .git
-   - dist
-   - build
-
- extensions:
-   code:
-     - ts
-     - tsx
-     - js
-     - py
-   config:
-     - json
-     - yaml
-   docs:
-     - md
- ```
+ ### A simple example
+
+ You ask the AI: *"refactor the auth middleware"*
+
+ | Approach | What gets sent | Result |
+ |----------|---------------|--------|
+ | **Send everything** | 340K tokens (all 177 files) | Expensive. AI drowns in irrelevant code. |
+ | **Send open files** | Whatever you have open | Might miss types, dependencies, config. |
+ | **CTO** | 50K tokens (93 relevant files) | 85% cheaper. Includes types, deps, related files. |
+
+ ### Why does it matter?
 
- ## πŸ“Š How Tiering Works
+ We tested something specific: when the AI generates code, does it have the type definitions it needs?
 
- | Tier | Criteria | Action |
- |------|----------|--------|
- | πŸ”₯ **Hot** | Modified within 3 days | Read first, always include |
- | 🌑️ **Warm** | Modified within 14 days | Read if needed |
- | ❄️ **Cold** | Not modified in 14+ days | Skip unless explicitly needed |
+ | | CTO | Without CTO |
+ |--|-----|-------------|
+ | **Type files included** | 5 out of 6 | **0 out of 6** |
+ | **TypeScript compiler** | βœ… Compiles | ❌ 4 errors |
 
- CTO supports two token estimation methods:
- - **`chars4`** β€” Fast estimate at ~4 characters per token (default)
- - **`tiktoken`** β€” Accurate estimation using Claude's real tokenizer
+ We ran this on 5 different tasks. Same result every time. CTO context compiles. Naive context doesn't.
 
- ### AST-Enhanced Tiering (v0.5.1+)
+ Without type definitions, the AI invents interfaces β€” wrong property names, wrong shapes. The code doesn't compile. ([Details](#compile-proof))
 
- Beyond recency, CTO uses AST analysis to boost tier priority:
- - **Hub files** (imported by 3+ files) get promoted one tier (coldβ†’warm, warmβ†’hot)
- - **High-complexity files** (cyclomatic complexity >30) get promoted from warmβ†’hot
- - **Model suggestions**: Opus for complex files, Sonnet for moderate, Haiku for simple
+ ---
 
- The tier thresholds are fully configurable.
+ ## Getting started
 
- ## πŸ—οΈ Architecture
+ ### Option 1: Quick score (no install)
 
+ ```bash
+ npx cto-score                 # Score your project
+ npx cto-score ./my-project    # Score a specific project
+ npx cto-score --benchmark     # Compare CTO vs naive vs random
+ npx cto-score --json          # Machine-readable output (for CI)
+ ```
+
+ ### Option 2: Full install
+
+ ```bash
+ npm install -g cto-ai-cli
+
+ cto2 init                                     # Set up for your project
+ cto2 analyze                                  # See structure + risk profile
+ cto2 interact "refactor the auth middleware"  # Get optimized context for a task
+ ```
+
+ ### Option 3: Use with your AI editor (MCP)
+
+ CTO works as an [MCP server](https://modelcontextprotocol.io/) β€” plug it into Claude, Windsurf, or Cursor.
+
+ **Windsurf** β€” add to `~/.codeium/windsurf/mcp_config.json`:
+ ```json
+ {
+   "mcpServers": {
+     "cto": { "command": "cto2-mcp" }
+   }
+ }
  ```
- ~/.config/cto/
- β”œβ”€β”€ config.yaml                  # Global config
- β”œβ”€β”€ audit/                       # Immutable audit logs
- β”‚   β””β”€β”€ audit_YYYYMMDD.json      # Daily audit entries (0600 perms)
- β””β”€β”€ projects/
-     β””β”€β”€ <project-hash>/
-         β”œβ”€β”€ config.yaml          # Project-specific overrides
-         β”œβ”€β”€ analysis.json        # Last analysis results
-         β”œβ”€β”€ integrity.json       # SHA-256 integrity manifest (0600)
-         β”œβ”€β”€ artifacts/
-         β”‚   β”œβ”€β”€ CLAUDE.md        # Generated CLAUDE.md (sanitized)
-         β”‚   β””β”€β”€ .claudeignore    # Generated .claudeignore
-         β”œβ”€β”€ backups/
-         β”‚   β”œβ”€β”€ manifest.json    # Backup tracking
-         β”‚   β””β”€β”€ <backup-files>   # Original files before apply
-         β””β”€β”€ sessions/
-             β”œβ”€β”€ current.json     # Active session
-             β””β”€β”€ session_*.json   # Archived sessions
+
+ **Claude Desktop** β€” add to your MCP config:
+ ```json
+ {
+   "mcpServers": {
+     "cto": { "command": "node", "args": ["/path/to/dist/mcp/v2.js"] }
+   }
+ }
  ```
 
- ## πŸ” Security
+ Once connected, your AI editor can use tools like `cto_analyze`, `cto_select_context`, `cto_score`, and `cto_benchmark` automatically.
 
- CTO is built with enterprise security requirements in mind.
+ ---
 
- ### Secret Detection
+ ## How it works (the short version)
 
- Before generating CLAUDE.md, CTO scans for and redacts:
- - **API keys** β€” OpenAI, Anthropic, generic API keys
- - **Cloud credentials** β€” AWS access keys, secret keys
- - **Private keys** β€” RSA, EC, DSA, OpenSSH
- - **Passwords** β€” Hardcoded passwords, database passwords
- - **Tokens** β€” GitHub, GitLab, npm, OAuth, bearer tokens
- - **Connection strings** β€” PostgreSQL, MongoDB, MySQL, Redis, AMQP
- - **Custom patterns** β€” Define your own regex patterns in config
+ 1. **Scans** your project β€” files, imports, dependencies, structure
+ 2. **Scores** each file β€” how important is it? What breaks if we exclude it?
+ 3. **Selects** the best files for your task β€” within your token budget
+ 4. **Proves** the result β€” coverage score, benchmark comparison, cost savings
 
- ### Audit Logging
+ CTO doesn't use AI for selection. It uses dependency analysis, risk modeling, and optimization algorithms. Same input always produces the same output.
 
- Every CTO action is logged to an immutable audit trail:
- - Each entry includes: timestamp, user, action, project, and details
- - Entries are **integrity-protected** with SHA-256 hashes
- - Tampering is detected with `cto audit verify`
- - Auto-purge after configurable retention period (default: 90 days)
+ ---
 
- ### Data Integrity
+ ## Real numbers
 
- - **SHA-256 hashes** for all artifacts, backups, and sessions
- - `cto validate` verifies integrity on demand (CI/CD ready)
- - Exits with code 1 on failure β€” use `--strict` to fail on warnings
+ We ran CTO on three open-source projects. No cherry-picking β€” you can reproduce these with `npx cto-score --benchmark`.
 
- ### Secure File Permissions
+ | Project | Files | Score | What CTO does |
+ |---------|-------|-------|---------------|
+ | **Zod** | 441 files, 804K tokens | 92/100 (A) | Selects 64 files, 100% coverage, $1,809/mo savings |
+ | **This project** | 177 files, 340K tokens | 87/100 (A-) | Selects 93 files, 100% coverage, $695/mo savings |
+ | **Express.js** | 158 files, 171K tokens | 74/100 (B-) | Needs only 895 tokens for full coverage |
 
- - All CTO data files: `0600` (owner read/write only)
- - All CTO directories: `0700` (owner access only)
- - `cto doctor --fix` enforces permissions automatically
+ "Coverage" means: all the files that are important for your task are included. "Savings" is estimated based on 800 AI interactions per month.
 
- ### CI/CD Integration
+ <details>
+ <summary><b>Detailed comparison: CTO vs Naive vs Random</b></summary>
 
- ```bash
- # Add to your CI pipeline
- cto validate /path/to/project --strict --json
+ > Budget: 50K tokens Β· Task: "refactor the core module"
 
- # Fails on: leaked secrets, corrupted data, stale artifacts
- ```
+ | Project | Strategy | Files | Tokens | Coverage | High-Risk Included |
+ |---------|----------|-------|--------|----------|-------------------|
+ | **Zod** | **CTO** | 64 | 50.0K | **100%** | **6/6** |
+ | | Naive (alphabetical) | 71 | 50.0K | 16% | 2/6 |
+ | | Random | 45 | 50.0K | 10% | 1/6 |
+ | **CTO** | **CTO** | 163 | 47.4K | **100%** | **11/11** |
+ | | Naive | 25 | 50.0K | 15% | 0/11 |
+ | | Random | 38 | 50.0K | 23% | 6/11 |
+ | **Express** | **CTO** | 158 | 0.9K | **100%** | n/a |
+ | | Naive | 64 | 50.0K | 41% | n/a |
+ | | Random | 61 | 50.0K | 39% | n/a |
 
- ## πŸ”’ Principles
+ **Note:** "Naive" means alphabetical file order (a common default). "Random" is random selection. These are simple baselines β€” real-world tools like Cursor use smarter heuristics, so we don't claim CTO beats them. We just show the difference between informed and uninformed selection.
 
- | Principle | Guarantee |
- |-----------|-----------|
- | πŸ”’ **Transparent** | Zero files created inside your project unless you explicitly `apply` |
- | πŸ“– **Read-only by default** | Analysis never modifies anything |
- | πŸ”„ **Reversible** | Every `apply` creates a backup, every change can be reverted |
- | πŸ“¦ **Autocontained** | `~/.config/cto/` is the only location CTO writes to |
- | 🚫 **No secrets leak** | All generated artifacts are sanitized for secrets |
- | πŸ“ **Auditable** | Every action is logged with tamper-proof integrity hashes |
- | ⚑ **Minimal deps** | No databases, no Docker, no external services |
- | 🎯 **Quality first** | Optimizing tokens β‰  losing context |
+ </details>
 
- ## πŸ§ͺ Development
+ <details id="compile-proof">
+ <summary><b>Compile Proof: real TypeScript compiler output</b></summary>
 
- ```bash
- # Install dependencies
- npm install
+ We ran the actual `tsc` compiler to verify this isn't just theory.
 
- # Build
- npm run build
+ **How it works:**
+ 1. Copy only the selected files (CTO or naive) to a temp directory
+ 2. Generate TypeScript code that imports and uses the project's types
+ 3. Run `tsc --noEmit`
+ 4. Count real compiler errors
 
- # Run tests
- npm test
+ | Task | CTO | Naive | Naive missing |
+ |------|-----|-------|--------------|
+ | Refactor selector | βœ… 0 errors | ❌ 4 errors | All type files |
+ | Optimize risk scoring | βœ… 0 errors | ❌ 4 errors | All type files |
+ | MCP error handling | βœ… 0 errors | ❌ 4 errors | All type files |
+ | Cache invalidation | βœ… 0 errors | ❌ 4 errors | All type files |
+ | Add semantic tool | βœ… 0 errors | ❌ 4 errors | All type files |
 
- # Watch mode
- npm run test:watch
+ The naive selection (alphabetical) consistently misses all type definition files. The compiler output:
 
- # Type check
- npm run typecheck
+ ```
+ error TS2307: Cannot find module './src/types/engine.js'
+ error TS2307: Cannot find module './src/types/config.js'
+ error TS2307: Cannot find module './src/types/govern.js'
+ error TS2307: Cannot find module './src/types/interact.js'
  ```
 
- ## πŸ”Œ MCP Server
+ Without these files, the AI has to guess the shape of `AnalyzedFile`, `ContextSelection`, `TaskType`, etc. It will get them wrong.
 
- CTO includes a Model Context Protocol server for native Claude Code integration.
+ </details>
 
- ### Setup
+ ---
 
- Add to your Claude Code MCP config (`~/.claude.json` or project `.mcp.json`):
+ ## What you can do with CTO
 
- ```json
- {
-   "mcpServers": {
-     "cto": {
-       "command": "cto-mcp",
-       "args": []
-     }
-   }
- }
- ```
+ | Use case | How |
+ |----------|-----|
+ | **Score your project** | `npx cto-score` |
+ | **Compare strategies** | `npx cto-score --benchmark` |
+ | **Get optimized context for a task** | `cto2 interact "your task"` |
+ | **PR-focused context** | `cto2 interact --pr "review this PR"` |
+ | **Use in your AI editor** | Add MCP server (see setup above) |
+ | **Use in CI/CD** | GitHub Action posts score on every PR |
+ | **Use as an API** | `cto2-api` starts an HTTP server |
+ | **JSON output (scripting)** | `npx cto-score --json` |
+
+ ---
+
+ ## Honest limitations
 
- ### Available Tools
+ This is an early test version. Here's what we know:
 
- | Tool | Description |
- |------|-------------|
- | `cto_analyze_project` | Analyze project token usage and tier breakdown |
- | `cto_get_hot_files` | Get hot-tier files (read these first) |
- | `cto_should_read_file` | Check if a file is hot/warm/cold with recommendation |
- | `cto_get_context` | Get the optimized CLAUDE.md context |
- | `cto_session_log` | Start/end sessions and log file reads |
- | `cto_get_prompt` | Get an optimized prompt template |
- | `cto_dashboard` | Get dashboard metrics (sessions, tokens, savings) |
+ - **TypeScript/JavaScript projects work best.** We support other languages (Python, Go, Rust, Java) for basic analysis, but TypeScript gets the deepest understanding.
+ - **Our benchmarks use simple baselines** (alphabetical, random). We haven't compared against Cursor's or Copilot's internal context selection.
+ - **The savings numbers are estimates** based on average API pricing. Your actual savings depend on your model, pricing tier, and usage patterns.
+ - **We need more projects to test on.** If you try it and share your score, that helps us a lot.
 
- ### MCP Resources
+ ---
 
- - `cto://context/{projectPath}` β€” Project context as markdown
+ ## What's next
 
- ### MCP Prompts
+ We're working on:
+ - **More language support** β€” deeper analysis for Python and Go
+ - **VS Code extension** β€” see risk scores and context suggestions inline
+ - **Model-specific optimization** β€” different context for GPT-4 vs Claude vs Gemini
+ - **Team features** β€” share learned patterns across your team
+ - **Your feedback** β€” [open an issue](https://github.com/cto-ai/cto-ai-cli/issues) or reach out
 
- - `optimize-task` β€” Optimized prompt for a task with minimal token usage
- - `review-code` β€” Optimized prompt for code review
+ ---
 
- ## πŸ—ΊοΈ Roadmap
+ ## For contributors
+
+ ```bash
+ git clone <repo-url>
+ cd cto
+ npm install
+ npm run build
+ npm test             # 433 tests
+ npm run typecheck    # strict TypeScript
+ ```
 
- - [x] **v0.5.0** β€” TypeScript CLI with full analysis, tiering, generation, apply/revert
- - [x] **v0.5.1** β€” AST-based analysis (ts-morph), tiktoken integration, dependency graph
- - [x] **v0.5.2** β€” Watch mode with chokidar, auto-regeneration, tier change notifications
- - [x] **v0.7.x** β€” Session tracking, terminal dashboard, weekly reports, metrics export
- - [x] **v0.9.x** β€” MCP Server with 7 tools, resources, and prompt templates
- - [x] **v1.0.0** β€” Enterprise security: secret detection, audit logging, integrity verification, CI/CD validation
- - [x] **v1.1.0** β€” Smart context pruning, git-aware tiering, cost estimation in real dollars
- - [x] **v1.2.0** β€” Token budget optimizer, multi-AI generator, PR context, explain command
- - [x] **v1.3.0** β€” Smart model routing, prompt engineering, SDD, project-local config
- - [ ] **v1.x+** β€” Plugin system, GitHub Action, VS Code extension
+ Full CLI docs, MCP server setup, API server, and programmatic API are documented in [DOCS.md](DOCS.md).
 
- ## πŸ“„ License
+ ## License
 
  [MIT](LICENSE)