cto-ai-cli 1.3.0 → 3.0.1
This diff compares the contents of two publicly released versions of this package, as published to its registry. It is provided for informational purposes only.
- package/DOCS.md +351 -0
- package/README.md +189 -263
- package/dist/action/index.js +25730 -0
- package/dist/api/dashboard.js +2073 -0
- package/dist/api/dashboard.js.map +1 -0
- package/dist/api/server.js +3401 -0
- package/dist/api/server.js.map +1 -0
- package/dist/cli/score.js +1971 -0
- package/dist/cli/v2/index.d.ts +2 -0
- package/dist/cli/v2/index.js +3496 -0
- package/dist/cli/v2/index.js.map +1 -0
- package/dist/engine/index.d.ts +816 -0
- package/dist/engine/index.js +4997 -0
- package/dist/engine/index.js.map +1 -0
- package/dist/govern/index.d.ts +261 -0
- package/dist/govern/index.js +662 -0
- package/dist/govern/index.js.map +1 -0
- package/dist/interact/index.d.ts +234 -0
- package/dist/interact/index.js +1343 -0
- package/dist/interact/index.js.map +1 -0
- package/dist/mcp/v2.d.ts +2 -0
- package/dist/mcp/v2.js +18289 -0
- package/dist/mcp/v2.js.map +1 -0
- package/package.json +56 -25
package/README.md
CHANGED
@@ -1,326 +1,252 @@
-#
+# CTO — Your AI is reading too much code. We fix that.

->
+> **Early access** — This is a test version. We'd love your feedback.

[](LICENSE)
-[](#)
+
+## Try it now (zero install)

```bash
-
-
-
-
-npm run build
-npm link
+npx cto-score
+```
+
+That's it. Run it on any project. You'll see something like this:

-
+```
+⚡ cto-score — analyzing your project...
+
+┌────────────────────────────────────────────────────┐
+│                                                    │
+│  🟢 Context Score™  87 / 100   Grade: A-           │
+│                                                    │
+│  Efficiency    ███████████████░░░░░   74%          │
+│  Coverage      ████████████████████  100%          │
+│  Risk Control  ████████████████████  100%          │
+│                                                    │
+│  💰 vs. Sending Everything:                        │
+│     Tokens saved:    289K (85%)                    │
+│     Monthly savings: ~$695                         │
+│                                                    │
+└────────────────────────────────────────────────────┘
+
+Scanned in 11.7s · 177 files · 340K tokens
```

-
+Run `npx cto-score --benchmark` to see how CTO compares to naive (alphabetical) and random file selection.

-
-# 1. Initialize CTO (interactive wizard)
-cto init
+No data leaves your machine. No API keys. MIT licensed.

-
-cto analyze /path/to/project
+---

-
-cto tiers /path/to/project
+## What problem does CTO solve?

-
-cto generate /path/to/project
+When you ask an AI assistant to help with code, it needs context — your files. The question is: **which files?**

-
-cto diff claude-md /path/to/project
+**Most tools today** either send everything (expensive, noisy) or pick files based on what's open (misses dependencies). Neither approach is great.

-
-cto apply claude-md /path/to/project
-```
+**CTO analyzes your project** — dependencies, file importance, risk of excluding each file — and picks the best subset that fits your token budget. It's like a smart assistant that knows which files matter for each task.

-
-
-
-
-
-
-
-
-
-
-
-| `cto revert [artifact]` | Undo last apply using backup |
-| `cto clean [path]` | Remove all CTO data for a project |
-| `cto prompts [path]` | Show/generate optimized prompt templates |
-| `cto config [path]` | Show current configuration |
-| `cto deps [path]` | Show dependency graph, hub files & complexity |
-| `cto watch [path]` | Watch for changes, auto-recalculate tiers |
-| `cto session start\|end\|current\|list\|log` | Track Claude Code sessions |
-| `cto dashboard [path]` | Terminal dashboard with metrics |
-| `cto report weekly\|project\|export` | Usage reports and data export |
-| `cto doctor [path]` | Health check & security audit |
-| `cto audit log\|verify\|purge` | View and manage audit trail |
-| `cto validate [path]` | CI/CD validation & secret scan |
-| `cto prune preview\|file` | Smart context pruning preview |
-| `cto costs estimate\|history\|pricing` | Token cost estimation & savings |
-| `cto context budget\|pr` | Budget optimizer & PR-focused context |
-| `cto multi-gen generate\|list` | Generate for Claude/Cursor/Copilot/Gemini |
-| `cto explain <file>` | Explain why a file is in a specific tier |
-| `cto route recommend\|tasks` | Smart model routing per task type |
-| `cto multi-gen generate --enhanced` | Prompt-engineered context generation |
-| `cto sdd extract\|spec\|validate` | Specification-Driven Development |
-| `cto init --local` | Create .cto/ in project root (team-shareable) |
-
-**Artifact types:** `claude-md`, `claudeignore`, `all`
-
-## ⚙️ Configuration
-
-CTO uses YAML config files:
-
-- **Global:** `~/.config/cto/config.yaml`
-- **Per-project:** `~/.config/cto/projects/<hash>/config.yaml`
-
-```yaml
-version: "1.3.0"
-model: sonnet              # sonnet | opus | haiku
-tokenEstimation: tiktoken  # tiktoken (accurate) | chars4 (fast)
-
-tiering:
-  hotDays: 3               # Files modified within N days = hot
-  warmDays: 14             # Files modified within N days = warm
-  hotTokenLimit: 50000
-  warmTokenLimit: 200000
-
-ignoreDirs:
-  - node_modules
-  - .git
-  - dist
-  - build
-
-extensions:
-  code:
-    - ts
-    - tsx
-    - js
-    - py
-  config:
-    - json
-    - yaml
-  docs:
-    - md
-```
+### A simple example
+
+You ask the AI: *"refactor the auth middleware"*
+
+| Approach | What gets sent | Result |
+|----------|---------------|--------|
+| **Send everything** | 340K tokens (all 177 files) | Expensive. AI drowns in irrelevant code. |
+| **Send open files** | Whatever you have open | Might miss types, dependencies, config. |
+| **CTO** | 50K tokens (93 relevant files) | 85% cheaper. Includes types, deps, related files. |
+
+### Why does it matter?

-
+We tested something specific: when the AI generates code, does it have the type definitions it needs?

-
-
-
-
-| ❄️ **Cold** | Not modified in 14+ days | Skip unless explicitly needed |
+| | CTO | Without CTO |
+|--|-----|-------------|
+| **Type files included** | 5 out of 6 | **0 out of 6** |
+| **TypeScript compiler** | ✅ Compiles | ❌ 4 errors |

-CTO
-- **`chars4`** — Fast estimate at ~4 characters per token (default)
-- **`tiktoken`** — Accurate estimation using Claude's real tokenizer
+We ran this on 5 different tasks. Same result every time. CTO context compiles. Naive context doesn't.

-
+Without type definitions, the AI invents interfaces — wrong property names, wrong shapes. The code doesn't compile. ([Details](#compile-proof))

-
-- **Hub files** (imported by 3+ files) get promoted one tier (cold→warm, warm→hot)
-- **High-complexity files** (cyclomatic complexity >30) get promoted from warm→hot
-- **Model suggestions**: Opus for complex files, Sonnet for moderate, Haiku for simple
+---

-
+## Getting started

-
+### Option 1: Quick score (no install)

+```bash
+npx cto-score                 # Score your project
+npx cto-score ./my-project    # Score a specific project
+npx cto-score --benchmark     # Compare CTO vs naive vs random
+npx cto-score --json          # Machine-readable output (for CI)
+```
+
+### Option 2: Full install
+
+```bash
+npm install -g cto-ai-cli
+
+cto2 init                                      # Set up for your project
+cto2 analyze                                   # See structure + risk profile
+cto2 interact "refactor the auth middleware"   # Get optimized context for a task
+```
+
+### Option 3: Use with your AI editor (MCP)
+
+CTO works as an [MCP server](https://modelcontextprotocol.io/) — plug it into Claude, Windsurf, or Cursor.
+
+**Windsurf** — add to `~/.codeium/windsurf/mcp_config.json`:
+```json
+{
+  "mcpServers": {
+    "cto": { "command": "cto2-mcp" }
+  }
+}
```

-
-
-
-
-
-
-
-
-├── integrity.json      # SHA-256 integrity manifest (0600)
-├── artifacts/
-│   ├── CLAUDE.md       # Generated CLAUDE.md (sanitized)
-│   └── .claudeignore   # Generated .claudeignore
-├── backups/
-│   ├── manifest.json   # Backup tracking
-│   └── <backup-files>  # Original files before apply
-└── sessions/
-    ├── current.json    # Active session
-    └── session_*.json  # Archived sessions
+
+**Claude Desktop** — add to your MCP config:
+```json
+{
+  "mcpServers": {
+    "cto": { "command": "node", "args": ["/path/to/dist/mcp/v2.js"] }
+  }
+}
```

-
+Once connected, your AI editor can use tools like `cto_analyze`, `cto_select_context`, `cto_score`, and `cto_benchmark` automatically.

-
+---

-
+## How it works (the short version)

-
-
-
-
-- **Passwords** — Hardcoded passwords, database passwords
-- **Tokens** — GitHub, GitLab, npm, OAuth, bearer tokens
-- **Connection strings** — PostgreSQL, MongoDB, MySQL, Redis, AMQP
-- **Custom patterns** — Define your own regex patterns in config
+1. **Scans** your project — files, imports, dependencies, structure
+2. **Scores** each file — how important is it? What breaks if we exclude it?
+3. **Selects** the best files for your task — within your token budget
+4. **Proves** the result — coverage score, benchmark comparison, cost savings

-
+CTO doesn't use AI for selection. It uses dependency analysis, risk modeling, and optimization algorithms. Same input always produces the same output.
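To make that selection step concrete, here is a minimal sketch of deterministic, budget-constrained file selection. It is an illustration only: the `ScoredFile` shape, the scoring weights, and the greedy strategy are assumptions for the example, not CTO's actual algorithm.

```typescript
// Illustrative sketch; not CTO's actual implementation.
// Assumption: each file already has an importance score (e.g. from
// import-graph analysis), an exclusion-risk score, and a token count.
interface ScoredFile {
  path: string;
  importance: number;
  risk: number;
  tokens: number;
}

// Greedy value-per-token selection under a hard token budget.
// Deterministic: the same input always yields the same selection.
function selectContext(files: ScoredFile[], budget: number): ScoredFile[] {
  const ranked = [...files].sort(
    (a, b) =>
      (b.importance + b.risk) / b.tokens - (a.importance + a.risk) / a.tokens
  );
  const selected: ScoredFile[] = [];
  let used = 0;
  for (const file of ranked) {
    if (used + file.tokens <= budget) {
      selected.push(file);
      used += file.tokens;
    }
  }
  return selected;
}
```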

-
-- Each entry includes: timestamp, user, action, project, and details
-- Entries are **integrity-protected** with SHA-256 hashes
-- Tampering is detected with `cto audit verify`
-- Auto-purge after configurable retention period (default: 90 days)
+---

-
+## Real numbers

--
-- `cto validate` verifies integrity on demand (CI/CD ready)
-- Exits with code 1 on failure — use `--strict` to fail on warnings
+We ran CTO on three open-source projects. No cherry-picking — you can reproduce these with `npx cto-score --benchmark`.

-
+| Project | Files | Score | What CTO does |
+|---------|-------|-------|---------------|
+| **Zod** | 441 files, 804K tokens | 92/100 (A) | Selects 64 files, 100% coverage, $1,809/mo savings |
+| **This project** | 177 files, 340K tokens | 87/100 (A-) | Selects 93 files, 100% coverage, $695/mo savings |
+| **Express.js** | 158 files, 171K tokens | 74/100 (B-) | Needs only 895 tokens for full coverage |

-
-- All CTO directories: `0700` (owner access only)
-- `cto doctor --fix` enforces permissions automatically
+"Coverage" means: all the files that are important for your task are included. "Savings" is estimated based on 800 AI interactions per month.

-
+<details>
+<summary><b>Detailed comparison: CTO vs Naive vs Random</b></summary>

-
-# Add to your CI pipeline
-cto validate /path/to/project --strict --json
+> Budget: 50K tokens · Task: "refactor the core module"

-
-
+| Project | Strategy | Files | Tokens | Coverage | High-Risk Included |
+|---------|----------|-------|--------|----------|-------------------|
+| **Zod** | **CTO** | 64 | 50.0K | **100%** | **6/6** |
+| | Naive (alphabetical) | 71 | 50.0K | 16% | 2/6 |
+| | Random | 45 | 50.0K | 10% | 1/6 |
+| **CTO** | **CTO** | 163 | 47.4K | **100%** | **11/11** |
+| | Naive | 25 | 50.0K | 15% | 0/11 |
+| | Random | 38 | 50.0K | 23% | 6/11 |
+| **Express** | **CTO** | 158 | 0.9K | **100%** | n/a |
+| | Naive | 64 | 50.0K | 41% | n/a |
+| | Random | 61 | 50.0K | 39% | n/a |

-
+**Note:** "Naive" means alphabetical file order (a common default). "Random" is random selection. These are simple baselines — real-world tools like Cursor use smarter heuristics, so we don't claim CTO beats them. We just show the difference between informed and uninformed selection.

-
-|-----------|-----------|
-| **Transparent** | Zero files created inside your project unless you explicitly `apply` |
-| **Read-only by default** | Analysis never modifies anything |
-| **Reversible** | Every `apply` creates a backup, every change can be reverted |
-| **Autocontained** | `~/.config/cto/` is the only location CTO writes to |
-| **No secrets leak** | All generated artifacts are sanitized for secrets |
-| **Auditable** | Every action is logged with tamper-proof integrity hashes |
-| **Minimal deps** | No databases, no Docker, no external services |
-| **Quality first** | Optimizing tokens ≠ losing context |
+</details>

-
+<details id="compile-proof">
+<summary><b>Compile Proof: real TypeScript compiler output</b></summary>

-
-# Install dependencies
-npm install
+We ran the actual `tsc` compiler to verify this isn't just theory.

-
-
+**How it works:**
+1. Copy only the selected files (CTO or naive) to a temp directory
+2. Generate TypeScript code that imports and uses the project's types
+3. Run `tsc --noEmit`
+4. Count real compiler errors
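A minimal sketch of what such a harness can look like, for illustration only (the staging step, file names, and error counting below are assumptions, not the project's actual test code):

```typescript
// Illustrative compile-proof harness; not the project's actual test code.
import { execSync } from "node:child_process";
import { cpSync, mkdirSync, mkdtempSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { dirname, join } from "node:path";

// Step 1: copy only the selected files into a temp directory.
function stageSelection(projectRoot: string, selectedFiles: string[]): string {
  const dir = mkdtempSync(join(tmpdir(), "ctx-proof-"));
  for (const rel of selectedFiles) {
    const dest = join(dir, rel);
    mkdirSync(dirname(dest), { recursive: true });
    cpSync(join(projectRoot, rel), dest);
  }
  return dir;
}

// Steps 2-4: write a probe file that imports the project's types,
// run `tsc --noEmit`, and count the compiler errors it reports.
function countCompileErrors(stagedDir: string, probeSource: string): number {
  writeFileSync(join(stagedDir, "probe.ts"), probeSource);
  try {
    execSync("npx tsc --noEmit probe.ts", { cwd: stagedDir, stdio: "pipe" });
    return 0; // compiled cleanly
  } catch (err) {
    const out = String((err as { stdout?: Buffer }).stdout ?? "");
    return (out.match(/error TS\d+/g) ?? []).length;
  }
}
```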

-
-
+| Task | CTO | Naive | Naive missing |
+|------|-----|-------|--------------|
+| Refactor selector | ✅ 0 errors | ❌ 4 errors | All type files |
+| Optimize risk scoring | ✅ 0 errors | ❌ 4 errors | All type files |
+| MCP error handling | ✅ 0 errors | ❌ 4 errors | All type files |
+| Cache invalidation | ✅ 0 errors | ❌ 4 errors | All type files |
+| Add semantic tool | ✅ 0 errors | ❌ 4 errors | All type files |

-
-npm run test:watch
+The naive selection (alphabetical) consistently misses all type definition files. The compiler output:

-
-
+```
+error TS2307: Cannot find module './src/types/engine.js'
+error TS2307: Cannot find module './src/types/config.js'
+error TS2307: Cannot find module './src/types/govern.js'
+error TS2307: Cannot find module './src/types/interact.js'
```

-
+Without these files, the AI has to guess the shape of `AnalyzedFile`, `ContextSelection`, `TaskType`, etc. It will get them wrong.

-
+</details>

-
+---

-
+## What you can do with CTO

-
-
-
-
-
-
-
-
-
-
+| Use case | How |
+|----------|-----|
+| **Score your project** | `npx cto-score` |
+| **Compare strategies** | `npx cto-score --benchmark` |
+| **Get optimized context for a task** | `cto2 interact "your task"` |
+| **PR-focused context** | `cto2 interact --pr "review this PR"` |
+| **Use in your AI editor** | Add MCP server (see setup above) |
+| **Use in CI/CD** | GitHub Action posts score on every PR |
+| **Use as an API** | `cto2-api` starts an HTTP server |
+| **JSON output (scripting)** | `npx cto-score --json` |
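One way to use that JSON output as a CI gate, sketched below. The `score` field name is an assumption for illustration; inspect the real `npx cto-score --json` output for the actual shape before relying on it.

```typescript
// CI gate sketch: fail the build when the context score drops too low.
// Assumption: the JSON report exposes a numeric `score` field; verify
// the real output of `npx cto-score --json` first.
import { execSync } from "node:child_process";

const MIN_SCORE = 80; // threshold is our choice, tune per project

const raw = execSync("npx cto-score --json", { encoding: "utf8" });
const report = JSON.parse(raw) as { score?: number };

if (typeof report.score !== "number" || report.score < MIN_SCORE) {
  console.error(`Context score ${report.score ?? "unknown"} is below ${MIN_SCORE}`);
  process.exit(1);
}
console.log(`Context score ${report.score}: OK`);
```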
+
+---
+
+## Honest limitations

-
+This is an early test version. Here's what we know:

-
-
-
-
-| `cto_should_read_file` | Check if a file is hot/warm/cold with recommendation |
-| `cto_get_context` | Get the optimized CLAUDE.md context |
-| `cto_session_log` | Start/end sessions and log file reads |
-| `cto_get_prompt` | Get an optimized prompt template |
-| `cto_dashboard` | Get dashboard metrics (sessions, tokens, savings) |
+- **TypeScript/JavaScript projects work best.** We support other languages (Python, Go, Rust, Java) for basic analysis, but TypeScript gets the deepest understanding.
+- **Our benchmarks use simple baselines** (alphabetical, random). We haven't compared against Cursor's or Copilot's internal context selection.
+- **The savings numbers are estimates** based on average API pricing. Your actual savings depend on your model, pricing tier, and usage patterns.
+- **We need more projects to test on.** If you try it and share your score, that helps us a lot.

-
+---

-
+## What's next

-
+We're working on:
+- **More language support** — deeper analysis for Python and Go
+- **VS Code extension** — see risk scores and context suggestions inline
+- **Model-specific optimization** — different context for GPT-4 vs Claude vs Gemini
+- **Team features** — share learned patterns across your team
+- **Your feedback** — [open an issue](https://github.com/cto-ai/cto-ai-cli/issues) or reach out

-
-- `review-code` — Optimized prompt for code review
+---

-##
+## For contributors
+
+```bash
+git clone <repo-url>
+cd cto
+npm install
+npm run build
+npm test            # 433 tests
+npm run typecheck   # strict TypeScript
+```

-
-- [x] **v0.5.1** — AST-based analysis (ts-morph), tiktoken integration, dependency graph
-- [x] **v0.5.2** — Watch mode with chokidar, auto-regeneration, tier change notifications
-- [x] **v0.7.x** — Session tracking, terminal dashboard, weekly reports, metrics export
-- [x] **v0.9.x** — MCP Server with 7 tools, resources, and prompt templates
-- [x] **v1.0.0** — Enterprise security: secret detection, audit logging, integrity verification, CI/CD validation
-- [x] **v1.1.0** — Smart context pruning, git-aware tiering, cost estimation in real dollars
-- [x] **v1.2.0** — Token budget optimizer, multi-AI generator, PR context, explain command
-- [x] **v1.3.0** — Smart model routing, prompt engineering, SDD, project-local config
-- [ ] **v1.x+** — Plugin system, GitHub Action, VS Code extension
+Full CLI docs, MCP server setup, API server, and programmatic API are documented in [DOCS.md](DOCS.md).

-##
+## License

[MIT](LICENSE)