@corbat-tech/coco 1.1.0 → 1.2.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,120 +1,150 @@
-# 🥥 Corbat-Coco
+<p align="center">
+<img src="https://img.shields.io/badge/v1.2.0-stable-blueviolet?style=for-the-badge" alt="Version">
+<img src="https://img.shields.io/badge/TypeScript-5.7-3178c6?style=for-the-badge&logo=typescript&logoColor=white" alt="TypeScript">
+<img src="https://img.shields.io/badge/Node.js-22+-339933?style=for-the-badge&logo=nodedotjs&logoColor=white" alt="Node.js">
+<img src="https://img.shields.io/badge/License-MIT-f5c542?style=for-the-badge" alt="MIT License">
+<img src="https://img.shields.io/badge/Tests-4350%2B_passing-22c55e?style=for-the-badge" alt="Tests">
+</p>
 
-**The open-source coding agent that iterates until your code is actually good.**
+<h1 align="center">🥥 Corbat-Coco</h1>
 
-[![TypeScript](https://img.shields.io/badge/TypeScript-5.3-blue)](https://www.typescriptlang.org/)
-[![Node.js](https://img.shields.io/badge/Node.js-22+-green)](https://nodejs.org/)
-[![License](https://img.shields.io/badge/License-MIT-yellow)](./LICENSE)
-[![Tests](https://img.shields.io/badge/Tests-4000%2B%20passing-brightgreen)](./)
-[![Coverage](https://img.shields.io/badge/Coverage-80%25%2B-brightgreen)](./)
+<p align="center">
+<strong>The open-source coding agent that iterates on your code until it's actually production-ready.</strong>
+</p>
 
----
+<p align="center">
+<em>Generate → Test → Measure → Fix → Repeat — autonomously.</em>
+</p>
 
-## The Problem
+---
 
-AI coding assistants generate code and hope for the best. You paste it in, tests fail, you iterate manually, you lose an hour. Studies show **67% of AI-generated PRs get rejected** on first review.
+## Why Coco?
 
-## The Solution
+Most AI coding tools generate code and hand it to you. If something breaks — tests fail, types don't match, a security issue slips in — that's your problem.
 
-Coco doesn't stop at code generation. It runs your tests, measures quality across 12 dimensions, diagnoses failures, generates targeted fixes, and repeats — autonomously — until quality reaches a configurable threshold (default: 85/100).
+Coco takes a different approach. After generating code, it **runs your tests, measures quality across 12 dimensions, diagnoses what's wrong, and fixes it** — in a loop, autonomously — until the code actually meets a quality bar you define.
 
  ```
-Generate → Test → Measure → Diagnose → Fix → Repeat
-                      ↓
-        Quality ≥ 85? → Done ✅
+┌──────────┐     ┌──────────┐     ┌──────────┐     ┌──────────┐
+│ Generate │ ──► │   Test   │ ──► │ Measure  │ ──► │   Fix    │
+└──────────┘     └──────────┘     └──────────┘     └──────────┘
+                                                        │
+                         Score < 85?  ──► Loop back
+                         Score ≥ 85?  ──► Done ✅
  ```
 
-**This is the Quality Convergence Loop.** No other open-source coding agent does this.
+This is the **Quality Convergence Loop** — Coco's core differentiator.
 
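In TypeScript terms, the loop in the diagram boils down to something like the following sketch (illustrative only; these function names are not Coco's actual API):

```typescript
// Illustrative sketch of a quality convergence loop.
// All names here are hypothetical, not Coco's real interfaces.
type Report = { score: number; issues: string[] };

function converge(
  generate: () => string,
  evaluate: (code: string) => Report,
  fix: (code: string, issues: string[]) => string,
  target = 85,
  maxIterations = 10,
): { code: string; report: Report } {
  let code = generate();        // first draft
  let report = evaluate(code);  // run tests, score quality
  for (let i = 1; i < maxIterations && report.score < target; i++) {
    code = fix(code, report.issues); // targeted fixes from diagnosis
    report = evaluate(code);         // re-test and re-score
  }
  return { code, report };
}
```

The loop exits either when the score reaches the target or when the iteration cap is hit, matching the "Score ≥ 85? Done" branch above.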
  ---
 
 ## Quick Start
 
  ```bash
-npm install -g corbat-coco
-coco init                                     # Configure your LLM provider
-coco "Build a REST API with authentication"   # That's it
+npm install -g @corbat-tech/coco
+coco   # Opens interactive REPL — guided setup on first run
  ```
 
-Coco will generate code, run tests, iterate until quality passes, and generate CI/CD + docs.
+That's it. Coco walks you through provider configuration on first launch.
+
+```bash
+# Or use it directly:
+coco "Add a REST API endpoint for user authentication with tests"
+```
 
 ---
 
-## What Makes Coco Different
+## What Coco Does Well
+
+### Quality Convergence Loop
+
+Coco doesn't just generate code — it iterates until quality converges:
+
+| Iteration | Score | What happened |
+|:---------:|:-----:|---------------|
+| 1 | 52 | Code generated — 3 tests failing, no error handling |
+| 2 | 71 | Tests fixed, security vulnerability found |
+| 3 | 84 | Security patched, coverage improved to 82% |
+| 4 | 91 | All green — quality converged ✅ |
+
+The loop is configurable: target score, max iterations, convergence threshold, security requirements. You control the bar.
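As an illustration, such a configuration might look like the following sketch. The field names are invented; the default values (target 85, max 10 iterations, delta < 2) come from the README itself:

```typescript
// Hypothetical convergence settings. Field names are illustrative,
// not Coco's actual configuration schema.
interface ConvergenceConfig {
  targetScore: number;        // stop when the overall score reaches this
  maxIterations: number;      // hard cap on fix cycles
  stableDelta: number;        // treat the score as converged if it moves less than this
  requireSecurityPass: boolean;
}

const defaults: ConvergenceConfig = {
  targetScore: 85,   // the 85/100 default described above
  maxIterations: 10, // "max 10 iterations reached"
  stableDelta: 2,    // "delta < 2 between iterations"
  requireSecurityPass: true,
};
```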
+
+### 12-Dimension Quality Scoring
+
+Every iteration measures your code across 12 dimensions using real static analysis:
 
-### 1. Quality Convergence Loop (Unique Differentiator)
+| Dimension | How it's measured |
+|-----------|-------------------|
+| Test Coverage | c8/v8 instrumentation |
+| Security | Pattern matching + optional Snyk |
+| Complexity | Cyclomatic complexity via AST parsing |
+| Duplication | Line-based similarity detection |
+| Correctness | Test pass rate + build verification |
+| Style | oxlint / eslint / biome integration |
+| Documentation | JSDoc coverage analysis |
+| Readability | AST: naming quality, function length, nesting |
+| Maintainability | AST: file size, coupling, function count |
+| Test Quality | Assertion density, edge case coverage |
+| Completeness | Export density + test file coverage |
+| Robustness | Error handling pattern detection |
 
-Other agents generate code once. Coco iterates:
+> **Transparency note**: 7 dimensions use instrumented measurements. 5 use heuristic-based static analysis. We label which is which — no black boxes.
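For intuition, an overall score like the one the loop targets can be computed as a weighted average of per-dimension scores. This is a simplified illustration; the weights below are invented for the example and are not Coco's real weighting:

```typescript
// Simplified illustration: fold per-dimension scores (0-100) into one
// overall score. These weights are made up for the example.
const weights: Record<string, number> = {
  coverage: 2, security: 2, correctness: 3,
  complexity: 1, duplication: 1, style: 1,
};

function overallScore(scores: Record<string, number>): number {
  let weighted = 0;
  let total = 0;
  for (const [dim, score] of Object.entries(scores)) {
    const w = weights[dim] ?? 1; // unknown dimensions default to weight 1
    weighted += w * score;
    total += w;
  }
  return total === 0 ? 0 : weighted / total;
}
```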
 
-| Iteration | Score | What Happened |
-|-----------|-------|---------------|
-| 1 | 52/100 | Generated code, 3 tests failing |
-| 2 | 71/100 | Fixed test failures, found security issue |
-| 3 | 84/100 | Fixed security, improved coverage |
-| 4 | 91/100 | All tests pass, quality converged ✅ |
+### Multi-Provider Support
 
-The loop stops when:
-- Score ≥ 85/100 (configurable)
-- Score stabilized (delta < 2 between iterations)
-- All critical issues resolved
-- Or max 10 iterations reached
+Bring your own API keys. Coco works with:
 
-### 2. 12-Dimension Quality Scoring
+| Provider | Auth | Models |
+|----------|------|--------|
+| **Anthropic** | API key / OAuth PKCE | Claude Opus, Sonnet, Haiku |
+| **OpenAI** | API key | GPT-4o, o1, o3 |
+| **Google** | API key / gcloud ADC | Gemini Pro, Flash |
+| **Ollama** | Local | Any local model |
+| **LM Studio** | Local | Any GGUF model |
+| **Moonshot** | API key | Kimi models |
 
-Every iteration measures code across 12 real dimensions:
+### Multi-Agent Architecture
 
-| Dimension | Method | Type |
-|-----------|--------|------|
-| **Test Coverage** | c8/v8 instrumentation | Instrumented |
-| **Security** | Pattern matching + optional Snyk | Instrumented |
-| **Complexity** | Cyclomatic complexity via AST | Instrumented |
-| **Duplication** | Line-based similarity detection | Instrumented |
-| **Correctness** | Test pass rate + build verification | Instrumented |
-| **Style** | oxlint/eslint/biome integration | Instrumented |
-| **Documentation** | JSDoc coverage analysis | Instrumented |
-| **Readability** | AST: naming quality, function length, nesting depth | Heuristic |
-| **Maintainability** | AST: file length, coupling, function count | Heuristic |
-| **Test Quality** | Assertion density, trivial ratio, edge cases | Heuristic |
-| **Completeness** | Export density + test file coverage ratio | Heuristic |
-| **Robustness** | Error handling pattern detection via AST | Heuristic |
+Six specialized agents with weighted-scoring routing:
 
-> **Transparency**: 7 dimensions use instrumented analysis (real measurements). 5 use heuristic-based static analysis (directional signals via pattern detection). We label which is which.
+- **Researcher** — Explores, analyzes, maps the codebase
+- **Coder** — Writes and edits code (default route)
+- **Tester** — Generates tests, improves coverage
+- **Reviewer** — Code review, quality auditing
+- **Optimizer** — Refactoring and performance
+- **Planner** — Architecture design, task decomposition
 
-### 3. Multi-Agent with Weighted Scoring Routing
+Coco picks the right agent for each task automatically. When confidence is low, it defaults to the coder — no guessing games.
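The weighted-scoring routing can be sketched roughly as follows. The keyword tables and threshold are illustrative; only the idea of primary keywords carrying weight 3 and a "coder" fallback comes from the project's own description:

```typescript
// Rough sketch of weighted-keyword task routing. Keyword tables and the
// threshold are illustrative, not Coco's actual configuration.
const roleKeywords: Record<string, string[]> = {
  researcher: ["research", "analyze", "explore", "investigate"],
  tester: ["test", "coverage", "spec", "mock"],
  reviewer: ["review", "quality", "audit", "lint"],
  optimizer: ["optimize", "refactor", "performance"],
  planner: ["plan", "design", "architect", "decompose"],
};

function routeTask(task: string, threshold = 3): string {
  const text = task.toLowerCase();
  let best = "coder"; // default route when confidence is low
  let bestScore = 0;
  for (const [role, keywords] of Object.entries(roleKeywords)) {
    // primary keywords carry weight 3, per the description above
    const score = keywords.filter((k) => text.includes(k)).length * 3;
    if (score > bestScore) {
      best = role;
      bestScore = score;
    }
  }
  return bestScore >= threshold ? best : "coder";
}
```

Any task that scores below the threshold for every specialized role falls through to the coder.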
 
-Six specialized agents, each with real LLM tool-use execution:
+### Interactive REPL
 
-| Agent | Primary Keywords (weight 3) | Tools |
-|-------|----------------------------|-------|
-| **Researcher** | research, analyze, explore, investigate | read_file, grep, glob |
-| **Coder** | (default) | read_file, write_file, edit_file, bash |
-| **Tester** | test, coverage, spec, mock | read_file, write_file, run_tests |
-| **Reviewer** | review, quality, audit, lint | read_file, calculate_quality, grep |
-| **Optimizer** | optimize, refactor, performance | read_file, write_file, analyze_complexity |
-| **Planner** | plan, design, architect, decompose | read_file, grep, glob, codebase_map |
+A terminal-first experience with:
 
-Task routing scores each role against the task description. The highest-scoring role is selected; below threshold, it defaults to "coder". Each agent runs a multi-turn tool-use loop via the LLM protocol.
+- **Ghost-text completion** — Tab to accept inline suggestions
+- **Slash commands** — `/coco`, `/plan`, `/build`, `/diff`, `/commit`, `/help`
+- **Image paste** — `Ctrl+V` to paste screenshots for visual context
+- **Intent recognition** — Natural language mapped to commands
+- **Context management** — Automatic compaction when context grows large
 
-### 4. Production Hardening
+### Production Hardening
 
-- **Error Recovery**: 9 error types with automatic retry strategies and exponential backoff
-- **Checkpoint/Resume**: Ctrl+C saves state. `coco resume` continues from where you left off
-- **Error Messages**: Every error includes an actionable suggestion for how to fix it
-- **Convergence Analysis**: Detects oscillation, diminishing returns, and stuck patterns
-- **AST Validation**: Parses and validates syntax before saving files
+- **Error recovery** with typed error strategies and exponential backoff
+- **Checkpoint/Resume** — `Ctrl+C` saves state, `coco resume` picks up where you left off
+- **AST validation** — Syntax-checks generated code before saving
+- **Convergence analysis** — Detects oscillation, diminishing returns, and stuck patterns
+- **Path sandboxing** — Tools can only access files within the project
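The retry-with-exponential-backoff strategy mentioned above looks, in generic form, something like this sketch (a generic pattern, not Coco's internal recovery code):

```typescript
// Generic exponential-backoff retry sketch. Retry counts and delays are
// illustrative, not Coco's actual recovery policy.
async function withRetry<T>(
  op: () => Promise<T>,
  retries = 3,
  baseDelayMs = 500,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await op();
    } catch (err) {
      if (attempt >= retries) throw err; // out of retries: surface the error
      // delay doubles each attempt: 500ms, 1s, 2s, ...
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```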
 
 ---
 
-## Architecture: COCO Methodology
+## COCO Methodology
 
-Four phases, each with its own executor:
+Four phases, each with a dedicated executor:
 
  ```
-  CONVERGE         ORCHESTRATE           COMPLETE          OUTPUT
+  CONVERGE          ORCHESTRATE            COMPLETE           OUTPUT
 ┌──────────┐     ┌──────────────┐     ┌──────────────┐     ┌──────────┐
 │  Gather  │     │    Design    │     │ Execute with │     │ Generate │
 │   reqs   │ ──► │ architecture │ ──► │   quality    │ ──► │  CI/CD,  │
-│  + spec  │     │  + backlog   │     │  iteration   │     │   docs   │
+│  + spec  │     │  + backlog   │     │ convergence  │     │   docs   │
 └──────────┘     └──────────────┘     └──────────────┘     └──────────┘
      ↑                                        ↓
 ┌──────────────┐
@@ -123,78 +153,23 @@ Four phases, each with its own executor:
 └──────────────┘
 ```
 
-### Technology Stack
-
-| Component | Technology |
-|-----------|-----------|
-| Language | TypeScript (ESM, strict mode) |
-| Runtime | Node.js 22+ |
-| Testing | Vitest (4,000+ tests) |
-| Linting | oxlint |
-| Build | tsup |
-| LLM Providers | Anthropic Claude, OpenAI GPT, Google Gemini, Ollama, LM Studio |
-| Auth | OAuth 2.0 PKCE (browser + device code flow) |
-
----
-
-## Comparison with Alternatives
-
-| Feature | Cursor | Aider | Goose | Devin | **Coco** |
-|---------|--------|-------|-------|-------|----------|
-| Quality Convergence Loop | ❌ | ❌ | ❌ | Partial¹ | **✅** |
-| Multi-Dimensional Scoring | ❌ | ❌ | ❌ | Internal | **12 dimensions** |
-| Multi-Agent | ❌ | ❌ | Via MCP | ✅ | **✅ (weighted routing)** |
-| AST Validation | ❌ | ❌ | ❌ | ✅ | **✅** |
-| Error Recovery + Resume | ❌ | ❌ | ❌ | ✅ | **✅ (9 error types)** |
-| Open Source | ❌ | ✅ | ✅ | ❌ | **✅** |
-| Price | $20/mo | Free² | Free² | $500/mo | **Free²** |
-
-¹ Devin iterates internally but doesn't expose a configurable quality scoring system.
-² Free beyond LLM API costs (bring your own keys).
-
-### Where Coco Excels
-- **Quality iteration**: The only open-source agent with a configurable multi-dimensional convergence loop
-- **Transparency**: Every score is computed, not estimated. You can inspect the analyzers
-- **Cost**: $0 subscription. ~$2-5 in API costs per project
-
-### Where Coco is Behind
-- **IDE integration**: CLI-only today. VS Code extension planned
-- **Maturity**: Earlier stage than Cursor (millions of users) or Devin (2+ years production)
-- **Speed**: Iteration takes time. For quick edits, use Cursor or Copilot
-- **Language support**: Best with TypeScript/JavaScript. Python/Go experimental
+1. **Converge** — Understand what needs to be built. Gather requirements, produce a spec.
+2. **Orchestrate** — Design the architecture, decompose into a task backlog.
+3. **Complete** — Execute each task with the quality convergence loop.
+4. **Output** — Generate CI/CD pipelines, documentation, and deployment config.
 
  ---
 
-## CLI Experience
+## Use Cases
 
-### Interactive REPL
+Coco is designed for developers who want AI assistance with **accountability**:
 
-```bash
-coco                 # Opens interactive REPL
-```
-
-**Slash commands**:
-- `/coco` — Toggle quality convergence mode (auto-test + iterate)
-- `/tutorial` — Quick 5-step guide for new users
-- `/init` — Initialize a new project
-- `/plan` — Design architecture and backlog
-- `/build` — Build with quality iteration
-- `/task <desc>` — Execute a single task
-- `/status` — Check project state
-- `/diff` — Review changes
-- `/commit` — Commit with message
-- `/help` — See all commands
-
-### Provider Support
-
-| Provider | Auth Method | Models |
-|----------|------------|--------|
-| Anthropic | API key or OAuth PKCE | Claude Opus, Sonnet, Haiku |
-| OpenAI | API key | GPT-4o, GPT-4, o1, o3 |
-| Google | API key or gcloud ADC | Gemini Pro, Flash |
-| Ollama | Local (no key) | Any local model |
-| LM Studio | Local (no key) | Any GGUF model |
-| Moonshot | API key | Kimi models |
+- **Feature development** — Describe what you want, get tested and reviewed code
+- **Vibe coding** — Explore ideas interactively; Coco handles the quality checks
+- **Refactoring** — Point at code and say "make this better" — Coco iterates until metrics improve
+- **Test generation** — Improve coverage with meaningful tests, not boilerplate
+- **Code review** — Get multi-dimensional quality feedback on existing code
+- **Learning** — See how code quality improves across iterations
 
 ---
 
@@ -204,64 +179,75 @@ coco # Opens interactive REPL
 git clone https://github.com/corbat/corbat-coco
 cd corbat-coco
 pnpm install
-pnpm dev     # Run in dev mode
-pnpm test    # Run 4,000+ tests
+pnpm dev     # Run in dev mode (tsx)
+pnpm test    # 4,350+ tests via Vitest
 pnpm check   # typecheck + lint + test
+pnpm build   # Production build (tsup)
 ```
 
 ### Project Structure
 
 ```
-corbat-coco/
-├── src/
-│   ├── agents/        # Multi-agent coordination + weighted routing
-│   ├── cli/           # REPL, commands, input handling
-│   ├── orchestrator/  # Phase coordinator + recovery
-│   ├── phases/        # COCO phases (converge/orchestrate/complete/output)
-│   ├── quality/       # 12 quality analyzers
-│   ├── providers/     # 6 LLM providers + OAuth
-│   ├── tools/         # 20+ tool implementations
-│   ├── hooks/         # Lifecycle hooks (safety, lint, format, audit)
-│   ├── mcp/           # MCP server for external integration
-│   └── config/        # Zod-validated configuration
-├── test/e2e/          # End-to-end pipeline tests
-└── docs/              # Architecture docs + ADRs
+src/
+├── agents/        # Multi-agent coordination + weighted routing
+├── cli/           # REPL, commands, input handling, output rendering
+├── orchestrator/  # Phase coordinator + state recovery
+├── phases/        # COCO phases (converge/orchestrate/complete/output)
+├── quality/       # 12 quality analyzers + convergence engine
+├── providers/     # 6 LLM providers + OAuth flows
+├── tools/         # 20+ tool implementations
+├── hooks/         # Lifecycle hooks (safety, lint, format, audit)
+├── mcp/           # MCP server for external integration
+└── config/        # Zod-validated configuration system
 ```
 
+### Technology Stack
+
+| Component | Technology |
+|-----------|-----------|
+| Language | TypeScript (ESM, strict mode) |
+| Runtime | Node.js 22+ |
+| Testing | Vitest (4,350+ tests) |
+| Linting | oxlint |
+| Formatting | oxfmt |
+| Build | tsup |
+| Schema validation | Zod |
+
 ---
 
-## Limitations (Honest)
+## Known Limitations
+
+We'd rather you know upfront:
 
-- **TypeScript/JavaScript first**: Other languages have basic support
-- **CLI-only**: No IDE integration yet
-- **Heuristic analyzers**: 5 of 12 dimensions use pattern matching, not deep semantic analysis
-- **Early stage**: Not yet battle-tested at enterprise scale
-- **Iteration takes time**: 2-5 minutes per task with convergence loop
-- **LLM-dependent**: Quality of generated code depends on the LLM you use
+- **TypeScript/JavaScript first** — Other languages have basic support but fewer analyzers
+- **CLI-only** — No IDE extension yet (VS Code integration is planned)
+- **Iteration takes time** — The convergence loop adds 2-5 minutes per task. For quick one-line fixes, a simpler tool may be faster
+- **Heuristic analyzers** — 5 of 12 quality dimensions use pattern-based heuristics, not deep semantic analysis
+- **LLM-dependent** — Output quality depends on the model you connect. Larger models produce better results
+- **Early stage** — Actively developed. Not yet battle-tested at large enterprise scale
 
 ---
 
 ## Contributing
 
-MIT License. We welcome contributions:
+We welcome contributions of all kinds:
+
 - Bug reports and feature requests
 - New quality analyzers
 - Additional LLM provider integrations
-- Documentation improvements
+- Documentation and examples
 - Real-world usage feedback
 
-See [CONTRIBUTING.md](./CONTRIBUTING.md).
+See [CONTRIBUTING.md](./CONTRIBUTING.md) for guidelines.
 
 ---
 
-## About Corbat
+## About
 
-Corbat-Coco is built by [Corbat](https://corbat.tech), a boutique technology consultancy. We believe AI coding tools should be transparent, measurable, and open source.
+Corbat-Coco is built by [Corbat](https://corbat.tech), a technology consultancy that believes AI coding tools should be transparent, measurable, and open source.
 
-**Links**:
-- [GitHub](https://github.com/corbat/corbat-coco)
-- [corbat.tech](https://corbat.tech)
-
----
+<p align="center">
+<a href="https://github.com/corbat/corbat-coco">GitHub</a> · <a href="https://corbat.tech">corbat.tech</a>
+</p>
 
-**Made with 🥥 by developers who measure before they ship.**
+<p align="center"><strong>MIT License</strong> · Made by developers who measure before they ship. 🥥</p>