codemaxxing 1.0.15 → 1.0.17
This diff compares publicly released versions of the package as they appear in their public registry, and is provided for informational purposes only.
- package/README.md +58 -15
- package/dist/index.js +120 -1263
- package/dist/ui/banner.d.ts +12 -0
- package/dist/ui/banner.js +28 -0
- package/dist/ui/connection-types.d.ts +33 -0
- package/dist/ui/connection-types.js +1 -0
- package/dist/ui/connection.d.ts +11 -0
- package/dist/ui/connection.js +182 -0
- package/dist/ui/input-router.d.ts +136 -0
- package/dist/ui/input-router.js +657 -0
- package/dist/ui/paste-interceptor.d.ts +21 -0
- package/dist/ui/paste-interceptor.js +179 -0
- package/dist/ui/pickers.d.ts +150 -0
- package/dist/ui/pickers.js +111 -0
- package/dist/ui/status-bar.d.ts +8 -0
- package/dist/ui/status-bar.js +15 -0
- package/dist/ui/wizard-types.d.ts +27 -0
- package/dist/ui/wizard-types.js +1 -0
- package/dist/ui/wizard.d.ts +3 -0
- package/dist/ui/wizard.js +214 -0
- package/dist/utils/paste.d.ts +1 -0
- package/dist/utils/paste.js +16 -0
- package/package.json +1 -1
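A summary like the one above can be regenerated locally with npm's built-in differ (`npm diff`, shipped since npm 7). A minimal sketch, assuming registry access; it degrades to a notice when npm or the network is unavailable:

```shell
# Sketch: diff the two published versions with npm's built-in differ.
# Needs npm >= 7 and registry access; prints a notice instead of failing hard.
show_version_diff() {
  npm diff --diff=codemaxxing@1.0.15 --diff=codemaxxing@1.0.17 2>/dev/null \
    || echo "npm diff unavailable (offline, or npm < 7)"
}
show_version_diff
```

Pipe the output through `head` or a pager; a two-version diff of a built package is long.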
package/README.md (CHANGED)
````diff
@@ -12,7 +12,7 @@ Open-source terminal coding agent. Connect **any** LLM — local or remote — a
 
 ## Why?
 
-Every coding agent locks you into their API. Codemaxxing doesn't. Run it with LM Studio, Ollama, OpenRouter, OpenAI, or any OpenAI-compatible endpoint. Your machine, your model, your rules.
+Every coding agent locks you into their API. Codemaxxing doesn't. Run it with LM Studio, Ollama, OpenRouter, OpenAI, Anthropic, or any OpenAI-compatible endpoint. Your machine, your model, your rules.
 
 ## Install
 
````
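The substantive change in the hunk above is Anthropic joining the provider list. For the OpenAI-compatible providers, a quick reachability check before launching the agent is to hit the endpoint's `/v1/models` route. A sketch; the base URL is LM Studio's usual default and is an assumption here, not something this hunk states:

```shell
# Reachability check for an OpenAI-compatible endpoint.
# The default base URL is an assumed example (LM Studio's usual port).
check_endpoint() {
  base_url="${1:-http://localhost:1234/v1}"
  if curl -fsS --max-time 2 "$base_url/models" >/dev/null 2>&1; then
    echo "reachable: $base_url"
  else
    echo "unreachable: $base_url"
  fi
}
check_endpoint
```

`GET /v1/models` is part of the OpenAI-compatible surface that LM Studio, vLLM, and OpenRouter all expose, so the same probe works for any of them with a different base URL.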
````diff
@@ -48,29 +48,72 @@ curl -fsSL -o $env:TEMP\install-codemaxxing.bat https://raw.githubusercontent.co
 npm update -g codemaxxing
 ```
 
-If that doesn't get the latest version:
+If that doesn't get the latest version, use the exact reinstall path:
 ```bash
 npm install -g codemaxxing@latest
 ```
 
+Then verify:
+```bash
+codemaxxing --version
+```
+
 ## Quick Start
 
-###
+### Option A — easiest local setup
 
-
+If you already have a local server running, Codemaxxing auto-detects common defaults:
+- **LM Studio** on `http://localhost:1234/v1`
+- **Ollama** on `http://localhost:11434`
+- **vLLM** on `http://localhost:8000`
 
+For LM Studio:
 1. Download [LM Studio](https://lmstudio.ai)
-2.
-3.
-4.
+2. Load a coding model (for example **Qwen 2.5 Coder 7B** for a lightweight test)
+3. Start the local server
+4. Run:
 
-
+```bash
+codemaxxing
+```
+
+### Option B — no local model yet
+
+Just run:
 
 ```bash
 codemaxxing
 ```
 
-
+If no LLM is available, Codemaxxing can guide you through:
+- detecting your hardware
+- recommending a model
+- installing Ollama
+- downloading the model
+- connecting automatically
+
+### Option C — cloud provider
+
+Authenticate first:
+
+```bash
+codemaxxing login
+```
+
+Then run:
+
+```bash
+codemaxxing
+```
+
+## First-run sanity check
+
+Once it launches, a quick smoke test is:
+- type `/`
+- open `/theme`
+- open `/skills`
+- paste a multiline block and confirm it becomes a paste badge
+- send one normal prompt
 
 ---
 
````
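The three auto-detect defaults added in this hunk can be probed by hand. A sketch; the ports are quoted from the README, while the list routes (`/v1/models` for the OpenAI-compatible servers, `/api/tags` for Ollama) are standard for those servers but are an addition here, and the agent's actual detection logic is not visible in this diff:

```shell
# Probe the default local endpoints the README says are auto-detected.
# Routes beyond the bare base URLs are assumed (standard list endpoints).
probe_defaults() {
  for url in "http://localhost:1234/v1/models" \
             "http://localhost:11434/api/tags" \
             "http://localhost:8000/v1/models"; do
    if curl -fsS --max-time 2 "$url" >/dev/null 2>&1; then
      echo "up   $url"
    else
      echo "down $url"
    fi
  done
}
probe_defaults
```

Anything reported `up` should be picked up without configuration, per the README's auto-detect claim.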
````diff
@@ -138,7 +181,7 @@ Dual-model planning. A "planner" model reasons through the approach, then your e
 - Great for pairing expensive reasoning models with fast editors
 
 ### 🧠 Skills System (21 Built-In)
-Downloadable skill packs that teach the agent domain expertise. Ships with 21 built-in skills:
+Downloadable skill packs that teach the agent domain expertise. Ships with 21 built-in skills and a menu-first `/skills` flow so you can browse instead of memorizing names:
 
 | Category | Skills |
 |----------|--------|
````
````diff
@@ -226,14 +269,14 @@ Switch models mid-session with an interactive picker:
 - `/model gpt-5` — switch directly by name
 - Native Anthropic API support (not just OpenAI-compatible)
 
-### 🎨
-`/theme` to browse: cyberpunk-neon, dracula, gruvbox, nord, catppuccin, tokyo-night, one-dark, rose-pine, synthwave, blood-moon, mono, solarized, hacker,
+### 🎨 14 Themes
+`/theme` to browse: cyberpunk-neon, dracula, gruvbox, nord, catppuccin, tokyo-night, one-dark, rose-pine, synthwave, blood-moon, mono, solarized, hacker, acid
 
 ### 🔐 Authentication
 One command to connect any LLM provider. OpenRouter OAuth, Anthropic subscription linking, Codex/Qwen CLI import, GitHub Copilot device flow, or manual API keys.
 
 ### 📋 Smart Paste
-Multi-line pastes collapse into `[Pasted text #1 +N lines]` badges.
+Multi-line pastes collapse into `[Pasted text #1 +N lines]` badges instead of dumping raw text into the input box. This was specifically hardened for bracketed-paste terminal weirdness.
 
 ### ⌨️ Slash Commands
 Type `/` for autocomplete suggestions. Arrow keys to navigate, Tab or Enter to select.
````
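The paste-badge format in this hunk is concrete enough to sketch. The `[Pasted text #1 +N lines]` label is quoted from the README; treating `N` as the number of lines after the first is an assumption, and the real logic lives in `dist/ui/paste-interceptor.js`, which this release adds but this diff view does not show:

```shell
# Toy model of the paste badge label (assumes N = lines after the first).
paste_badge() {
  text="$1"; idx="${2:-1}"
  total=$(printf '%s\n' "$text" | wc -l)
  printf '[Pasted text #%s +%s lines]\n' "$idx" "$((total - 1))"
}

paste_badge "$(printf 'first\nsecond\nthird')"   # → [Pasted text #1 +2 lines]
```

A single-line paste yields `+0 lines`, which matches why only multi-line pastes are worth collapsing into a badge.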
````diff
@@ -351,8 +394,8 @@ Drop a `CODEMAXXING.md` file in your project root to give the model extra contex
 - **MCP:** [@modelcontextprotocol/sdk](https://github.com/modelcontextprotocol/typescript-sdk)
 - **Sessions:** [better-sqlite3](https://github.com/WiseLibs/better-sqlite3)
 - **Local LLM:** Ollama integration (auto-install, pull, manage)
-- **Tests:** Vitest —
-- **Zero cloud dependencies** — everything runs locally
+- **Tests:** Vitest — 42 tests across 9 test files covering commands, tools, config, paste handling, and agent behavior
+- **Zero cloud dependencies** — everything runs locally unless you choose a remote provider
 
 ## Inspired By
 
````