yolo-coder 0.0.2__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
MIT License

Copyright (c) 2026 Erdem OZKAN

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
Metadata-Version: 2.4
Name: yolo-coder
Version: 0.0.2
Summary: An AI agent that fixes your broken CLI commands automatically using a local LLM
Author-email: erdemozkan <ozkanerdem@gmail.com>
License-Expression: MIT
Project-URL: Homepage, https://github.com/erdemozkan/YOLO-CODER
Project-URL: Repository, https://github.com/erdemozkan/YOLO-CODER
Project-URL: Bug Tracker, https://github.com/erdemozkan/YOLO-CODER/issues
Keywords: ai,llm,cli,debugging,automation,ollama,developer-tools
Classifier: Development Status :: 3 - Alpha
Classifier: Environment :: Console
Classifier: Intended Audience :: Developers
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Software Development :: Debuggers
Classifier: Topic :: Utilities
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: openai>=1.0.0
Requires-Dist: python-dotenv>=1.0.0
Requires-Dist: rich>=13.0.0
Requires-Dist: colorama>=0.4.6
Dynamic: license-file

<p align="center">
  <img src="assets/yolo-coder-logo.png" alt="YOLO Logo" width="320" />
</p>

<h1 align="center">YOLO — You Only Launch Once</h1>

<p align="center">
  <strong>An AI agent that fixes your broken CLI commands. Automatically. While you watch.</strong>
</p>

<p align="center">
  <a href="LICENSE"><img src="https://img.shields.io/badge/License-MIT-yellow.svg" alt="License: MIT" /></a>
  <a href="https://python.org"><img src="https://img.shields.io/badge/Python-3.10%2B-blue.svg" alt="Python 3.10+" /></a>
  <a href="https://ollama.com"><img src="https://img.shields.io/badge/Powered%20by-Ollama-black.svg" alt="Ollama" /></a>
  <img src="https://img.shields.io/badge/PRs-welcome-brightgreen.svg" alt="PRs Welcome" />
  <img src="https://img.shields.io/badge/privacy-100%25%20local-green.svg" alt="100% Local" />
</p>

> [!CAUTION]
> **Disclaimer:** YOCO is experimental and can modify/delete files. Use it in isolated environments like Docker. See [DISCLAIMER.md](DISCLAIMER.md) for full details.

---

## The pitch

You run a command. It breaks. You stare at the error. You Google it. You copy-paste from Stack Overflow. It breaks again. You question your life choices.

**Or:** you run YOLO. It sees the error. It fixes it. It retries. You get coffee.

```bash
$ yoco python3 myapp.py
```

That's it. That's the whole interface.

---

## Demo

### Brain 1 — Interceptor: missing package fixed in under a second
<p align="center">
  <img src="assets/01_basic_fix.gif" alt="YOCO fixes a missing package instantly" width="800" />
</p>

### Brain 3 — Local LLM: patches a logic bug in your code
<p align="center">
  <img src="assets/02_llm_fix.gif" alt="YOCO uses local LLM to fix a ZeroDivisionError" width="800" />
</p>

### Brain 2 — Fix Memory: same error, instant replay
<p align="center">
  <img src="assets/03_memory_hit.gif" alt="YOCO recalls a past fix and applies it instantly" width="800" />
</p>

### Rollback: undo everything YOCO changed
<p align="center">
  <img src="assets/04_rollback.gif" alt="YOCO interactive rollback picker" width="800" />
</p>

### Dry Run: preview the fix before applying it
<p align="center">
  <img src="assets/05_dry_run.gif" alt="YOCO dry run mode" width="800" />
</p>

### Security Gate: blocks git push when an API key is exposed
<p align="center">
  <img src="assets/06_security.gif" alt="YOCO blocks git push due to exposed OpenAI API key" width="800" />
</p>

### Watch Mode: auto-fixes on every file save
<p align="center">
  <img src="assets/07_watch.gif" alt="YOCO watch mode fixes two bugs automatically as files change" width="800" />
</p>

---

## What actually happens under the hood

YOLO has three brains, tried in order from fastest to slowest:

```
Error hits
     │
     ▼
┌─────────────────────────────────────────────────┐
│ Brain 1: Interceptors                           │ ← 23 regex rules
│ "ModuleNotFoundError: No module named 'flask'"  │   fires in <1ms
│ → pip install flask                             │   no LLM involved
└─────────────────────────────────────────────────┘
     │ no match
     ▼
┌─────────────────────────────────────────────────┐
│ Brain 2: Fix Memory                             │ ← remembers past fixes
│ "seen this IndexError before (3x)"              │   fires in <5ms
│ → replay the fix that worked last time          │   no LLM involved
└─────────────────────────────────────────────────┘
     │ cache miss
     ▼
┌─────────────────────────────────────────────────┐
│ Brain 3: Local LLM                              │ ← fine-tuned Qwen2.5-Coder
│ reads your error + your code                    │   runs 100% locally
│ → generates a targeted fix command              │   ~1-3 seconds on Apple Silicon
└─────────────────────────────────────────────────┘
```

Fix works → snapshot it, remember it, move on.
Fix fails → roll back every file to its pre-YOLO state, try again.
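
The three-brain order can be sketched in a few lines of Python. This is an illustrative reconstruction of the control flow, not YOCO's actual implementation; the rule, memory key, and LLM stub are all hypothetical.

```python
import re

# Brain 1: regex rule -> fix command (a stand-in for one of the ~23 rules)
INTERCEPTORS = [
    (re.compile(r"ModuleNotFoundError: No module named '(\w+)'"),
     lambda m: f"pip install {m.group(1)}"),
]

# Brain 2: error signature -> last fix that worked (a stand-in for fix memory)
fix_memory: dict[str, str] = {}

def ask_local_llm(error_text: str) -> str:
    # Brain 3 placeholder: in the real tool this is a call to the local model.
    return "echo 'LLM fix goes here'"

def propose_fix(error_text: str) -> str:
    # Brain 1: interceptors, no LLM involved
    for pattern, make_fix in INTERCEPTORS:
        m = pattern.search(error_text)
        if m:
            return make_fix(m)
    # Brain 2: replay a remembered fix, keyed here by the error's last line
    signature = error_text.splitlines()[-1] if error_text else ""
    if signature in fix_memory:
        return fix_memory[signature]
    # Brain 3: fall through to the local LLM
    return ask_local_llm(error_text)

print(propose_fix("ModuleNotFoundError: No module named 'flask'"))
# → pip install flask
```

The point of the ordering is cost: the cheap layers answer first, and the LLM only wakes up on a cache miss.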

---

## Install

### 1. Clone and install YOCO

```bash
git clone https://github.com/erdemozkan/YOLO-CODER
cd YOLO-CODER
pip install -e .
```

### 2. Install Ollama

Download from [ollama.com](https://ollama.com) or:

```bash
brew install ollama
ollama serve   # start the server (runs on http://localhost:11434)
```

### 3. Set up the AI model

**Option A — Pull directly from Hugging Face (easiest):**

```bash
# 1.5B model — fast, ~941MB, runs on any machine
ollama run hf.co/erdemozkan/YOLO-1.5B-Qwen-Coder

# 7B model — smarter, ~4.4GB, needs ~6GB RAM
ollama run hf.co/erdemozkan/YOLO-7B-Qwen-Coder
```

**Option B — Download GGUF manually and register:**

```bash
# Download the Q4 GGUF from HuggingFace
# → https://huggingface.co/erdemozkan/YOLO-7B-Qwen-Coder/blob/main/YOLO-7B-Qwen-q4.gguf

# Create a Modelfile
cat > Modelfile <<'EOF'
FROM ./YOLO-7B-Qwen-q4.gguf

TEMPLATE """{{ if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""

PARAMETER stop "<|im_start|>"
PARAMETER stop "<|im_end|>"
PARAMETER temperature 0.1
PARAMETER top_p 0.1
SYSTEM """You are a CLI repair tool. Output ONLY a single bare bash command to fix the error. No explanation. No markdown. No backticks."""
EOF

# Register with Ollama
ollama create yolo-7b -f Modelfile

# Verify it works
ollama run yolo-7b "ModuleNotFoundError: No module named 'requests'"
# → pip install requests
```

### 4. Configure YOCO to use your model

```bash
mkdir -p ~/.yolo
# Use 1.5B (default, fast):
echo '{"model": "hf.co/erdemozkan/YOLO-1.5B-Qwen-Coder"}' > ~/.yolo/config.json

# Or use 7B (if registered manually as above):
echo '{"model": "yolo-7b"}' > ~/.yolo/config.json
```

### 5. Run it

```bash
yoco python3 myapp.py
```

---

## Usage

```bash
# Basic: fix whatever breaks
yoco python3 myapp.py
yoco npm run dev
yoco cargo build
yoco docker-compose up

# See what it would do without doing it
yoco --dry-run python3 myapp.py

# Get a full AI explanation of what went wrong and why
yoco --explain python3 myapp.py

# Watch mode: re-run on every file save
yoco --watch python3 myapp.py

# Undo everything YOLO changed in the last session
yoco --rollback

# Undo a specific file
yoco --rollback src/main.py

# Browse history of past runs
yoco --history

# Use the bigger 7B model for hard errors
yoco --model yolo-7b python3 myapp.py
```

---

## What it can fix right now

| Category | Examples |
|---|---|
| **Python** | `ModuleNotFoundError`, `SyntaxError`, `PermissionError`, `FileNotFoundError`, `IndexError`, `ZeroDivisionError`, `AttributeError`, `KeyError`, `TypeError` |
| **pip** | `DEPRECATION`, `--break-system-packages`, missing packages, hash mismatches |
| **Node.js** | `Cannot find module`, `MODULE_NOT_FOUND` |
| **npm** | `ENOENT`, `ERESOLVE` (peer deps), `EACCES` (permissions) |
| **TypeScript** | `TS2304` (cannot find name), `TS2339` (property does not exist) |
| **Docker** | Image not found, port already in use, container name collision, daemon not running |
| **Git** | Merge conflicts, detached HEAD, push rejected, nothing to commit, not a repo |
| **Everything else** | LLM fallback covers what the rules don't |

---

## The model

YOLO ships with fine-tuned `Qwen2.5-Coder` models trained specifically on CLI error/fix pairs. They are trained to output exactly one bare shell command — no markdown, no explanation, no backticks. Just the fix.

**Our fine-tuned models are live on Hugging Face!**

### 1. Using with Ollama

You can pull and run the models directly via Ollama:

```bash
# For fast fixes, common errors, and low RAM usage:
ollama run hf.co/erdemozkan/YOLO-1.5B-Qwen-Coder

# For complex errors and better reasoning:
ollama run hf.co/erdemozkan/YOLO-7B-Qwen-Coder
```

### 2. Using with LM Studio or llama.cpp

1. Browse to my Hugging Face profile: [erdemozkan](https://huggingface.co/erdemozkan).
2. Open the model repository (`YOLO-1.5B-Qwen-Coder` or `YOLO-7B-Qwen-Coder`).
3. Download the `.gguf` file from the "Files" section.
4. Load the file into LM Studio or run it with your `llama.cpp` server.

| Model | Size | Best for |
|---|---|---|
| `YOLO-1.5B-Qwen-Coder` | 1.5B | Fast fixes, common errors, low RAM |
| `YOLO-7B-Qwen-Coder` | 7B | Complex errors, better reasoning |
| `qwen2.5-coder:7b` | 7B | Vanilla base model |

Training data: 2,250 error/fix pairs covering Python, Node, npm, TypeScript, Docker, Git, web frameworks, auth, async, CORS, circular imports, and more. Prompt format: ChatML. Fine-tuned with LoRA on Apple Silicon M-series hardware.

---

## Rollback & history

YOLO snapshots every file it touches before making any changes. If a fix fails after 3 attempts, everything is restored automatically.
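
The snapshot-then-restore idea can be sketched as below. This is a minimal illustration of the mechanism, assuming a flat snapshot directory; YOCO's actual storage layout and function names are not documented here and will differ.

```python
import shutil
from pathlib import Path

# Hypothetical snapshot location, for illustration only.
SNAPSHOT_DIR = Path(".yolo_snapshots")

def snapshot(path: Path) -> None:
    """Copy a file aside before a fix is allowed to touch it."""
    SNAPSHOT_DIR.mkdir(exist_ok=True)
    shutil.copy2(path, SNAPSHOT_DIR / path.name)

def rollback(path: Path) -> None:
    """Restore the pre-fix copy if the fix made things worse."""
    saved = SNAPSHOT_DIR / path.name
    if saved.exists():
        shutil.copy2(saved, path)
```

The invariant is the useful part: a fix is only ever applied after the original bytes are safely copied aside, so "undo everything" is just copying the snapshots back.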

```bash
# See what YOLO changed in the last session
yoco --rollback
# → numbered list of modified files, pick one or press 'a' to undo all

# See the last 20 runs
yoco --history
# → table: date / command / outcome / source (interceptor / memory / LLM)

# See details of run #5
yoco --history 5
```

---

## Configuration

YOCO supports **Ollama**, **LM Studio**, and **llama.cpp** out of the box. Config is layered — CLI flags override the saved file, which overrides built-in defaults.

### Config file

`~/.yolo/config.json`:

```json
{
  "provider": "ollama",
  "host": "localhost",
  "port": 11434,
  "model": "hf.co/erdemozkan/YOLO-1.5B-Qwen-Coder",
  "max_attempts": 3,
  "dry_run": false
}
```

### Per-run CLI overrides

```bash
# Switch provider for one run
yoco --provider lmstudio python3 myapp.py

# Custom host or port (e.g. Ollama on a remote machine or non-default port)
yoco --host 192.168.1.50 --port 11434 python3 myapp.py

# Override model for one run
yoco --model yolo-7b python3 myapp.py
```
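
The layering (CLI flags over the config file over built-in defaults) can be sketched as a dictionary merge. This is an illustrative reconstruction, not YOCO's actual code; the key names mirror the config file shown above, and the merge logic is an assumption.

```python
import json
from pathlib import Path

# Built-in defaults: the lowest-priority layer.
DEFAULTS = {
    "provider": "ollama",
    "host": "localhost",
    "port": 11434,
    "max_attempts": 3,
    "dry_run": False,
}

def resolve_config(cli_flags: dict, config_path: Path) -> dict:
    config = dict(DEFAULTS)
    if config_path.exists():
        # The saved file overrides defaults.
        config.update(json.loads(config_path.read_text()))
    # CLI flags win; unset flags (None) are ignored.
    config.update({k: v for k, v in cli_flags.items() if v is not None})
    return config
```

For example, with `"provider": "lmstudio"` in the file and `--port 9999` on the command line, the resolved config uses the file's provider and the flag's port, falling back to defaults for everything else.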

### Provider defaults

| Provider | Default port | Notes |
|---|---|---|
| `ollama` | `11434` | `ollama serve` — recommended |
| `lmstudio` | `1234` | Enable "Local Server" in the LM Studio UI |
| `llamacpp` | `8080` | `./server -m model.gguf --port 8080` |

### Setting up each provider

**Ollama (recommended)**
```bash
ollama serve                                      # start the server
ollama run hf.co/erdemozkan/YOLO-1.5B-Qwen-Coder  # pull + verify model
echo '{"provider": "ollama", "model": "hf.co/erdemozkan/YOLO-1.5B-Qwen-Coder"}' > ~/.yolo/config.json
```

**LM Studio**
```bash
# 1. Open LM Studio → Model tab → load any GGUF model
# 2. Go to Local Server tab → Start Server (enable CORS)
# 3. Tell YOCO to use it:
echo '{"provider": "lmstudio"}' > ~/.yolo/config.json
# LM Studio uses whatever model is currently loaded — no model name needed
```

**llama.cpp server**
```bash
./server -m YOLO-7B-Qwen-q4.gguf --port 8080   # start the server
echo '{"provider": "llamacpp", "port": 8080}' > ~/.yolo/config.json
```

**Custom port or remote host**
```bash
# Ollama running on a non-default port
echo '{"provider": "ollama", "host": "localhost", "port": 12345}' > ~/.yolo/config.json

# Ollama on a remote machine on your local network
echo '{"provider": "ollama", "host": "192.168.1.50", "port": 11434}' > ~/.yolo/config.json
```

### Check active config

```bash
yoco --config
```

---

## Philosophy

Most AI coding tools want to be your pair programmer. YOLO wants to be the intern who quietly fixes the thing that was blocking you so you can keep doing what you were doing.

It runs locally. It doesn't read your whole codebase. It doesn't require an API key. It doesn't post your stack traces to anyone. It doesn't ask for confirmation for the obvious stuff. It just fixes it.

The design priorities, in order:

1. **Speed** — interceptors fire before the LLM even wakes up
2. **Safety** — nothing is applied without a snapshot; everything is reversible
3. **Locality** — 100% local inference, no data leaves your machine
4. **Simplicity** — one command, wraps whatever you were already running

Oh, and while it works, it'll throw out a random quip — "YOLOing..", "Crying..", "Day dreaming.." — just for fun 😄

---

## Roadmap

See [FUTURE_FEATURES.md](FUTURE_FEATURES.md) for deferred ideas with architectural reasoning.

Completed:
- [x] `--explain` mode — plain-English diff after every fix
- [x] `--watch` mode — re-runs on every file save
- [x] `--rollback` — interactive undo picker
- [x] `--dry-run` — preview fix without applying
- [x] Fix memory — instant replay of past fixes
- [x] Security gate — blocks git push when secrets are detected
- [x] First-run disclaimer with local acceptance record

Near-term:
- [ ] Rust / cargo error interceptors
- [ ] Shell script error interceptors (bash -e failures)
- [ ] VS Code extension (show fix inline before applying)
- [ ] CI mode (non-interactive, exits 0 on fix, 1 on failure)
- [ ] `pip install yoco` — publish to PyPI for global install
- [ ] Lean terminal UI — interactive dashboard while YOCO works
- [ ] YOCO Web — browser-based interface similar to Claude Code

---

## Contributing

Interceptors are the easiest entry point. Add a function to `core/interceptors.py`, append it to the `INTERCEPTORS` list, and add a test case to `tests/tests.json`. See [CLAUDE.md](CLAUDE.md) for the full developer guide.

---

## License

MIT. Do whatever you want with it. If you make a million dollars, consider buying the author a coffee.

---

<div align="center">

**Star it if it saved you from a Stack Overflow rabbit hole.**

*Built on Apple Silicon. Powered by local LLMs. Runs at YOLO speed.*

</div>