@corbat-tech/coco 1.1.0 → 1.2.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +159 -173
- package/dist/cli/index.js +147 -42
- package/dist/cli/index.js.map +1 -1
- package/dist/index.js +94 -26
- package/dist/index.js.map +1 -1
- package/package.json +3 -3
package/README.md
CHANGED
@@ -1,120 +1,150 @@
-
+<p align="center">
+<img src="https://img.shields.io/badge/v1.2.0-stable-blueviolet?style=for-the-badge" alt="Version">
+<img src="https://img.shields.io/badge/TypeScript-5.7-3178c6?style=for-the-badge&logo=typescript&logoColor=white" alt="TypeScript">
+<img src="https://img.shields.io/badge/Node.js-22+-339933?style=for-the-badge&logo=nodedotjs&logoColor=white" alt="Node.js">
+<img src="https://img.shields.io/badge/License-MIT-f5c542?style=for-the-badge" alt="MIT License">
+<img src="https://img.shields.io/badge/Tests-4350%2B_passing-22c55e?style=for-the-badge" alt="Tests">
+</p>
 
-
+<h1 align="center">🥥 Corbat-Coco</h1>
 
-
-
-
-[](./)
-[](./)
+<p align="center">
+<strong>The open-source coding agent that iterates on your code until it's actually production-ready.</strong>
+</p>
 
-
+<p align="center">
+<em>Generate → Test → Measure → Fix → Repeat — autonomously.</em>
+</p>
 
-
+---
 
-
+## Why Coco?
 
-
+Most AI coding tools generate code and hand it to you. If something breaks — tests fail, types don't match, a security issue slips in — that's your problem.
 
-Coco
+Coco takes a different approach. After generating code, it **runs your tests, measures quality across 12 dimensions, diagnoses what's wrong, and fixes it** — in a loop, autonomously — until the code actually meets a quality bar you define.
 
 ```
-
-
-
+┌──────────┐      ┌──────────┐      ┌──────────┐      ┌──────────┐
+│ Generate │ ───▶ │   Test   │ ───▶ │ Measure  │ ───▶ │   Fix    │
+└──────────┘      └──────────┘      └──────────┘      └──────────┘
+                                                           │
+          Score < 85? ───▶ Loop back
+          Score ≥ 85? ───▶ Done ✅
 ```
 
-
+This is the **Quality Convergence Loop** — Coco's core differentiator.
 
 ---
 
 ## Quick Start
 
 ```bash
-npm install -g corbat-coco
-coco
-coco "Build a REST API with authentication" # That's it
+npm install -g @corbat-tech/coco
+coco # Opens interactive REPL — guided setup on first run
 ```
 
-
+That's it. Coco walks you through provider configuration on first launch.
+
+```bash
+# Or use it directly:
+coco "Add a REST API endpoint for user authentication with tests"
+```
 
 ---
 
-## What
+## What Coco Does Well
+
+### Quality Convergence Loop
+
+Coco doesn't just generate code — it iterates until quality converges:
+
+| Iteration | Score | What happened |
+|:---------:|:-----:|---------------|
+| 1 | 52 | Code generated — 3 tests failing, no error handling |
+| 2 | 71 | Tests fixed, security vulnerability found |
+| 3 | 84 | Security patched, coverage improved to 82% |
+| 4 | 91 | All green — quality converged ✅ |
+
+The loop is configurable: target score, max iterations, convergence threshold, security requirements. You control the bar.
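The configurable loop the README describes can be sketched in a few lines. This is an illustrative TypeScript sketch only: `converge`, `LoopOptions`, and every option name here are hypothetical, not Coco's actual API.

```typescript
// Illustrative sketch of a generate/test/measure/fix loop.
// NOTE: `converge` and `LoopOptions` are hypothetical names; this is the
// shape of the loop described in the README, not Coco's real implementation.
interface LoopOptions {
  targetScore: number;      // e.g. 85, the quality bar
  maxIterations: number;    // e.g. 10, the hard stop
  convergenceDelta: number; // e.g. 2, the "score stabilized" threshold
}

function converge(
  measure: () => number, // returns a 0-100 quality score
  fix: () => void,       // attempts one round of improvements
  opts: LoopOptions,
): { score: number; iterations: number } {
  let prev = Number.NEGATIVE_INFINITY;
  let score = measure();
  let iterations = 1;
  while (iterations < opts.maxIterations && score < opts.targetScore) {
    // Stop early once the score stops moving (diminishing returns).
    if (iterations > 1 && Math.abs(score - prev) < opts.convergenceDelta) break;
    fix();
    prev = score;
    score = measure();
    iterations++;
  }
  return { score, iterations };
}
```

Fed the scores from the iteration table (52, 71, 84, 91) with a target of 85, this loop stops after the fourth measurement.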
+
+### 12-Dimension Quality Scoring
+
+Every iteration measures your code across 12 dimensions using real static analysis:
 
-
+| Dimension | How it's measured |
+|-----------|-------------------|
+| Test Coverage | c8/v8 instrumentation |
+| Security | Pattern matching + optional Snyk |
+| Complexity | Cyclomatic complexity via AST parsing |
+| Duplication | Line-based similarity detection |
+| Correctness | Test pass rate + build verification |
+| Style | oxlint / eslint / biome integration |
+| Documentation | JSDoc coverage analysis |
+| Readability | AST: naming quality, function length, nesting |
+| Maintainability | AST: file size, coupling, function count |
+| Test Quality | Assertion density, edge case coverage |
+| Completeness | Export density + test file coverage |
+| Robustness | Error handling pattern detection |
 
-
+> **Transparency note**: 7 dimensions use instrumented measurements. 5 use heuristic-based static analysis. We label which is which — no black boxes.
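One plausible way twelve per-dimension scores could roll up into the single score the loop compares against its target is a weighted mean. The formula, the `overallScore` helper, and any weights below are assumptions for illustration, not Coco's documented scoring rule.

```typescript
// Illustrative only: combining per-dimension scores (0-100) into one overall
// score via a weighted mean. Weights here are hypothetical.
function overallScore(
  scores: Record<string, number>,
  weights: Record<string, number> = {},
): number {
  let weightedSum = 0;
  let totalWeight = 0;
  for (const [dimension, score] of Object.entries(scores)) {
    const w = weights[dimension] ?? 1; // unweighted dimensions count once
    weightedSum += score * w;
    totalWeight += w;
  }
  return totalWeight === 0 ? 0 : Math.round(weightedSum / totalWeight);
}
```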
 
-
-|-----------|-------|---------------|
-| 1 | 52/100 | Generated code, 3 tests failing |
-| 2 | 71/100 | Fixed test failures, found security issue |
-| 3 | 84/100 | Fixed security, improved coverage |
-| 4 | 91/100 | All tests pass, quality converged ✅ |
+### Multi-Provider Support
 
-
-- Score ≥ 85/100 (configurable)
-- Score stabilized (delta < 2 between iterations)
-- All critical issues resolved
-- Or max 10 iterations reached
+Bring your own API keys. Coco works with:
 
-
+| Provider | Auth | Models |
+|----------|------|--------|
+| **Anthropic** | API key / OAuth PKCE | Claude Opus, Sonnet, Haiku |
+| **OpenAI** | API key | GPT-4o, o1, o3 |
+| **Google** | API key / gcloud ADC | Gemini Pro, Flash |
+| **Ollama** | Local | Any local model |
+| **LM Studio** | Local | Any GGUF model |
+| **Moonshot** | API key | Kimi models |
 
-
+### Multi-Agent Architecture
 
-
-|-----------|--------|------|
-| **Test Coverage** | c8/v8 instrumentation | Instrumented |
-| **Security** | Pattern matching + optional Snyk | Instrumented |
-| **Complexity** | Cyclomatic complexity via AST | Instrumented |
-| **Duplication** | Line-based similarity detection | Instrumented |
-| **Correctness** | Test pass rate + build verification | Instrumented |
-| **Style** | oxlint/eslint/biome integration | Instrumented |
-| **Documentation** | JSDoc coverage analysis | Instrumented |
-| **Readability** | AST: naming quality, function length, nesting depth | Heuristic |
-| **Maintainability** | AST: file length, coupling, function count | Heuristic |
-| **Test Quality** | Assertion density, trivial ratio, edge cases | Heuristic |
-| **Completeness** | Export density + test file coverage ratio | Heuristic |
-| **Robustness** | Error handling pattern detection via AST | Heuristic |
+Six specialized agents with weighted-scoring routing:
 
-
+- **Researcher** — Explores, analyzes, maps the codebase
+- **Coder** — Writes and edits code (default route)
+- **Tester** — Generates tests, improves coverage
+- **Reviewer** — Code review, quality auditing
+- **Optimizer** — Refactoring and performance
+- **Planner** — Architecture design, task decomposition
 
-
+Coco picks the right agent for each task automatically. When confidence is low, it defaults to the coder — no guessing games.
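Weighted-scoring routing with a coder fallback can be sketched as follows. The keyword table mirrors the agent descriptions above, but both it and the `routeAgent` helper are illustrative assumptions, not Coco's real routing implementation.

```typescript
// Hypothetical keyword-weighted agent router with a coder fallback.
// The keyword lists are illustrative, not Coco's actual routing table.
const ROUTES: Record<string, string[]> = {
  researcher: ["research", "analyze", "explore", "investigate"],
  tester: ["test", "coverage", "spec", "mock"],
  reviewer: ["review", "quality", "audit", "lint"],
  optimizer: ["optimize", "refactor", "performance"],
  planner: ["plan", "design", "architect", "decompose"],
};

function routeAgent(task: string, minScore = 1): string {
  const words = task.toLowerCase().split(/\W+/);
  let best = { agent: "coder", score: 0 };
  for (const [agent, keywords] of Object.entries(ROUTES)) {
    const score = words.filter((w) => keywords.includes(w)).length;
    if (score > best.score) best = { agent, score };
  }
  // Low confidence (score below threshold): default to the coder.
  return best.score >= minScore ? best.agent : "coder";
}
```

A task like "improve test coverage" scores highest for the tester, while a task matching no keywords falls through to the coder.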
 
-
+### Interactive REPL
 
-
-|-------|----------------------------|-------|
-| **Researcher** | research, analyze, explore, investigate | read_file, grep, glob |
-| **Coder** | (default) | read_file, write_file, edit_file, bash |
-| **Tester** | test, coverage, spec, mock | read_file, write_file, run_tests |
-| **Reviewer** | review, quality, audit, lint | read_file, calculate_quality, grep |
-| **Optimizer** | optimize, refactor, performance | read_file, write_file, analyze_complexity |
-| **Planner** | plan, design, architect, decompose | read_file, grep, glob, codebase_map |
+A terminal-first experience with:
 
-
+- **Ghost-text completion** — Tab to accept inline suggestions
+- **Slash commands** — `/coco`, `/plan`, `/build`, `/diff`, `/commit`, `/help`
+- **Image paste** — `Ctrl+V` to paste screenshots for visual context
+- **Intent recognition** — Natural language mapped to commands
+- **Context management** — Automatic compaction when context grows large
 
-###
+### Production Hardening
 
-- **Error
-- **Checkpoint/Resume
-- **
-- **Convergence
-- **
+- **Error recovery** with typed error strategies and exponential backoff
+- **Checkpoint/Resume** — `Ctrl+C` saves state, `coco resume` picks up where you left off
+- **AST validation** — Syntax-checks generated code before saving
+- **Convergence analysis** — Detects oscillation, diminishing returns, and stuck patterns
+- **Path sandboxing** — Tools can only access files within the project
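The path-sandboxing idea (resolve the requested path and reject anything that escapes the project root, including `../` traversal) can be sketched like this. It is an illustration under stated assumptions, not Coco's actual tool guard; `isInsideProject` is a hypothetical name.

```typescript
// Illustrative path-sandboxing guard (not Coco's real implementation):
// resolve the requested path against the project root and reject anything
// outside it, which also blocks `../` traversal.
import * as path from "node:path";

function isInsideProject(projectRoot: string, requested: string): boolean {
  const root = path.resolve(projectRoot);
  const resolved = path.resolve(root, requested);
  // Either the root itself, or a descendant (the separator check prevents
  // `/project-evil` from matching `/project`).
  return resolved === root || resolved.startsWith(root + path.sep);
}
```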
 
 ---
 
-##
+## COCO Methodology
 
-Four phases, each with
+Four phases, each with a dedicated executor:
 
 ```
-CONVERGE         ORCHESTRATE        COMPLETE
+ CONVERGE         ORCHESTRATE       COMPLETE         OUTPUT
 ┌──────────┐    ┌──────────────┐    ┌──────────────┐    ┌──────────┐
 │ Gather   │    │ Design       │    │ Execute with │    │ Generate │
 │ reqs     │ ──▶│ architecture │──▶ │ quality      │──▶ │ CI/CD,   │
-│ + spec   │    │ + backlog    │    │
+│ + spec   │    │ + backlog    │    │ convergence  │    │ docs     │
 └──────────┘    └──────────────┘    └──────────────┘    └──────────┘
      ▲                                              │
      │                                              │
@@ -123,78 +153,23 @@ Four phases, each with its own executor:
      └──────────────────────────────────────────────┘
 ```
 
-
-
-
-
-| Language | TypeScript (ESM, strict mode) |
-| Runtime | Node.js 22+ |
-| Testing | Vitest (4,000+ tests) |
-| Linting | oxlint |
-| Build | tsup |
-| LLM Providers | Anthropic Claude, OpenAI GPT, Google Gemini, Ollama, LM Studio |
-| Auth | OAuth 2.0 PKCE (browser + device code flow) |
-
----
-
-## Comparison with Alternatives
-
-| Feature | Cursor | Aider | Goose | Devin | **Coco** |
-|---------|--------|-------|-------|-------|----------|
-| Quality Convergence Loop | ❌ | ❌ | ❌ | Partial¹ | **✅** |
-| Multi-Dimensional Scoring | ❌ | ❌ | ❌ | Internal | **12 dimensions** |
-| Multi-Agent | ❌ | ❌ | Via MCP | ✅ | **✅ (weighted routing)** |
-| AST Validation | ❌ | ❌ | ❌ | ✅ | **✅** |
-| Error Recovery + Resume | ❌ | ❌ | ❌ | ✅ | **✅ (9 error types)** |
-| Open Source | ❌ | ✅ | ✅ | ❌ | **✅** |
-| Price | $20/mo | Free² | Free² | $500/mo | **Free²** |
-
-¹ Devin iterates internally but doesn't expose a configurable quality scoring system.
-² Free beyond LLM API costs (bring your own keys).
-
-### Where Coco Excels
-- **Quality iteration**: The only open-source agent with a configurable multi-dimensional convergence loop
-- **Transparency**: Every score is computed, not estimated. You can inspect the analyzers
-- **Cost**: $0 subscription. ~$2-5 in API costs per project
-
-### Where Coco is Behind
-- **IDE integration**: CLI-only today. VS Code extension planned
-- **Maturity**: Earlier stage than Cursor (millions of users) or Devin (2+ years production)
-- **Speed**: Iteration takes time. For quick edits, use Cursor or Copilot
-- **Language support**: Best with TypeScript/JavaScript. Python/Go experimental
+1. **Converge** — Understand what needs to be built. Gather requirements, produce a spec.
+2. **Orchestrate** — Design the architecture, decompose into a task backlog.
+3. **Complete** — Execute each task with the quality convergence loop.
+4. **Output** — Generate CI/CD pipelines, documentation, and deployment config.
 
 ---
 
-##
+## Use Cases
 
-
+Coco is designed for developers who want AI assistance with **accountability**:
 
-
-
-
-
-**
--
-- `/tutorial` — Quick 5-step guide for new users
-- `/init` — Initialize a new project
-- `/plan` — Design architecture and backlog
-- `/build` — Build with quality iteration
-- `/task <desc>` — Execute a single task
-- `/status` — Check project state
-- `/diff` — Review changes
-- `/commit` — Commit with message
-- `/help` — See all commands
-
-### Provider Support
-
-| Provider | Auth Method | Models |
-|----------|------------|--------|
-| Anthropic | API key or OAuth PKCE | Claude Opus, Sonnet, Haiku |
-| OpenAI | API key | GPT-4o, GPT-4, o1, o3 |
-| Google | API key or gcloud ADC | Gemini Pro, Flash |
-| Ollama | Local (no key) | Any local model |
-| LM Studio | Local (no key) | Any GGUF model |
-| Moonshot | API key | Kimi models |
+- **Feature development** — Describe what you want, get tested and reviewed code
+- **Vibe coding** — Explore ideas interactively; Coco handles the quality checks
+- **Refactoring** — Point at code and say "make this better" — Coco iterates until metrics improve
+- **Test generation** — Improve coverage with meaningful tests, not boilerplate
+- **Code review** — Get multi-dimensional quality feedback on existing code
+- **Learning** — See how code quality improves across iterations
 
 ---
 
@@ -204,64 +179,75 @@ coco # Opens interactive REPL
 git clone https://github.com/corbat/corbat-coco
 cd corbat-coco
 pnpm install
-pnpm dev # Run in dev mode
-pnpm test #
+pnpm dev # Run in dev mode (tsx)
+pnpm test # 4,350+ tests via Vitest
 pnpm check # typecheck + lint + test
+pnpm build # Production build (tsup)
 ```
 
 ### Project Structure
 
 ```
-
-├──
-
-
-
-
-
-
-
-
-
-│   └── config/          # Zod-validated configuration
-├── test/e2e/            # End-to-end pipeline tests
-└── docs/                # Architecture docs + ADRs
+src/
+├── agents/        # Multi-agent coordination + weighted routing
+├── cli/           # REPL, commands, input handling, output rendering
+├── orchestrator/  # Phase coordinator + state recovery
+├── phases/        # COCO phases (converge/orchestrate/complete/output)
+├── quality/       # 12 quality analyzers + convergence engine
+├── providers/     # 6 LLM providers + OAuth flows
+├── tools/         # 20+ tool implementations
+├── hooks/         # Lifecycle hooks (safety, lint, format, audit)
+├── mcp/           # MCP server for external integration
+└── config/        # Zod-validated configuration system
 ```
 
+### Technology Stack
+
+| Component | Technology |
+|-----------|-----------|
+| Language | TypeScript (ESM, strict mode) |
+| Runtime | Node.js 22+ |
+| Testing | Vitest (4,350+ tests) |
+| Linting | oxlint |
+| Formatting | oxfmt |
+| Build | tsup |
+| Schema validation | Zod |
+
 ---
 
-## Limitations
+## Known Limitations
+
+We'd rather you know upfront:
 
-- **TypeScript/JavaScript first
-- **CLI-only
-- **
-- **
-- **
-- **
+- **TypeScript/JavaScript first** — Other languages have basic support but fewer analyzers
+- **CLI-only** — No IDE extension yet (VS Code integration is planned)
+- **Iteration takes time** — The convergence loop adds 2-5 minutes per task. For quick one-line fixes, a simpler tool may be faster
+- **Heuristic analyzers** — 5 of 12 quality dimensions use pattern-based heuristics, not deep semantic analysis
+- **LLM-dependent** — Output quality depends on the model you connect. Larger models produce better results
+- **Early stage** — Actively developed. Not yet battle-tested at large enterprise scale
 
 ---
 
 ## Contributing
 
-
+We welcome contributions of all kinds:
+
 - Bug reports and feature requests
 - New quality analyzers
 - Additional LLM provider integrations
-- Documentation
+- Documentation and examples
 - Real-world usage feedback
 
-See [CONTRIBUTING.md](./CONTRIBUTING.md).
+See [CONTRIBUTING.md](./CONTRIBUTING.md) for guidelines.
 
 ---
 
-## About
+## About
 
-Corbat-Coco is built by [Corbat](https://corbat.tech), a
+Corbat-Coco is built by [Corbat](https://corbat.tech), a technology consultancy that believes AI coding tools should be transparent, measurable, and open source.
 
-
-
-
-
----
+<p align="center">
+<a href="https://github.com/corbat/corbat-coco">GitHub</a> · <a href="https://corbat.tech">corbat.tech</a>
+</p>
 
-
+<p align="center"><strong>MIT License</strong> · Made by developers who measure before they ship. 🥥</p>