nex-code 0.5.16 → 0.5.18
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +24 -16
- package/dist/background-worker.js +547 -502
- package/dist/benchmark.js +552 -507
- package/dist/nex-code.js +805 -753
- package/package.json +2 -2
package/README.md
CHANGED
@@ -1,21 +1,23 @@
 # nex-code
 
-**
+**An open-model-first CLI coding assistant for production development workflows.**
 
-`nex-code` is
+`nex-code` is a terminal coding assistant built around affordable open-model workflows. It works directly in real repositories, reasons through tasks in phases, and routes work across Ollama, Ollama Cloud, local models, and optional premium providers.
 
 ## Overview
 
-Most
+Most coding assistants are optimized for short demos: generate a file, suggest a snippet, answer a question. Real development work is different. It involves understanding an existing repository, planning changes, editing carefully, running verification, and working with the operational tools around the code.
 
 `nex-code` exists to close that gap. It is designed as a serious CLI-first system that can:
 
--
+- make Ollama, Ollama Cloud, and local open models the recommended path
+- keep premium providers such as OpenAI, Anthropic, and Gemini optional
+- show token usage, cost mode, budget state, and fallback behavior
 - move through a structured plan -> implement -> verify loop
 - use developer tooling such as Git, SSH, Docker, and Kubernetes
 - adapt model choice to the kind of work being done
 
-The result is not just "chat in the terminal." It is
+The result is not just "chat in the terminal." It is a CLI workflow engine for software delivery that keeps model cost visible.
 
 ## Core Concept
 
@@ -29,22 +31,25 @@ The result is not just "chat in the terminal." It is an agentic workflow engine
 
 This matters because the failure mode of many coding assistants is not generation quality alone. It is premature action. A useful assistant must know when to inspect first, when to change code, and when to stop and verify before claiming success.
 
-###
+### Open-Model-First Routing
 
 Different models are good at different things. Some are better at fast repo exploration, some at careful implementation, and some at structured verification or longer-context reasoning.
 
-`nex-code` is built around that reality. Instead of binding the entire session to one model, it can route work by phase, task type,
+`nex-code` is built around that reality while treating open and affordable models as first-class defaults. Instead of binding the entire session to one model, it can route work by phase, task type, provider availability, and configured budget. In practice, this means:
 
 - using one model for planning and another for implementation
--
-- falling back
+- preferring Ollama Cloud or local Ollama where possible
+- falling back to premium providers only when configured
 - benchmarking configured models to improve routing decisions over time
+- warning when paid-provider budgets are near their limits
 
-The goal is not provider abstraction for its own sake. The goal is to make model choice operational
+The goal is not provider abstraction for its own sake. The goal is to make model choice operational, reliable, and cost-aware.
 
 ## Key Features
 
 - **CLI-first operation** with low overhead and a workflow that fits existing terminal habits
+- **Open-model-first defaults** for Ollama Cloud, local Ollama, and strong open coding models
+- **Cost visibility** for token usage, provider cost mode, budget warnings, and fallback routing
 - **Phase-based execution** that separates planning, implementation, and verification
 - **Multi-provider support** for OpenAI, Anthropic, Gemini, Ollama Cloud, and local Ollama
 - **Tool-integrated execution** across files, shell commands, Git, SSH, Docker, and Kubernetes
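The routing described in the hunk above (prefer open models, fall back to premium only when configured and under budget) can be sketched as follows. This is a hypothetical illustration, not the actual `nex-code` implementation: the function name `pickModel`, the per-phase routes, and the 80% warning threshold are all assumptions made for the example.

```typescript
// Hypothetical sketch of phase- and budget-aware routing, in the spirit of
// the diff above. All names, models, and thresholds are illustrative only.
type Phase = "plan" | "implement" | "verify";

interface Route {
  provider: string;
  model: string;
}

// Assumed open-model defaults per phase (open models are preferred).
const OPEN_ROUTES: Record<Phase, Route> = {
  plan: { provider: "ollama-cloud", model: "qwen3-coder:480b" },
  implement: { provider: "ollama", model: "qwen3-coder:480b" },
  verify: { provider: "ollama", model: "qwen3-coder:480b" },
};

// Assumed premium fallback, used only when open models are unreachable.
const PREMIUM_FALLBACK: Route = { provider: "openai", model: "gpt-4o" };

function pickModel(
  phase: Phase,
  openAvailable: boolean,
  premiumAllowed: boolean,
  budgetUsedRatio: number // 0.0 = nothing spent, 1.0 = budget exhausted
): Route {
  // Open models first, always.
  if (openAvailable) return OPEN_ROUTES[phase];
  // Premium only when explicitly configured and within budget.
  if (premiumAllowed && budgetUsedRatio < 1.0) {
    if (budgetUsedRatio >= 0.8) {
      console.warn("warning: premium budget nearly exhausted");
    }
    return PREMIUM_FALLBACK;
  }
  throw new Error("open models unavailable and premium fallback not configured");
}
```

The key design point the README implies is ordering: availability of open providers is checked before any cost-based decision, so premium spend can only occur on an explicit fallback path.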
@@ -128,21 +133,24 @@ nex-code
 Basic requirements:
 
 - Node.js 18+
--
+- Ollama Cloud key, or a local Ollama setup
+- optional premium provider keys for fallback or specialized use
 
 Typical environment configuration:
 
 ```env
 OLLAMA_API_KEY=your-key
+DEFAULT_PROVIDER=ollama
+DEFAULT_MODEL=qwen3-coder:480b
+
+# Optional premium fallbacks:
+DEEPSEEK_API_KEY=your-key
 OPENAI_API_KEY=your-key
 ANTHROPIC_API_KEY=your-key
 GEMINI_API_KEY=your-key
-
-DEFAULT_PROVIDER=ollama
-DEFAULT_MODEL=devstral-2:123b
 ```
 
-On first launch, `nex-code`
+On first launch, `nex-code` guides setup interactively and recommends Ollama Cloud or local Ollama first. Use `/models coding` for cost-aware model recommendations, `/budget` to cap premium spend, and `/fallback` to decide when paid providers may be used.
 
 ## Future Direction
 
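The `/budget` behavior mentioned in the hunk above (capping premium spend and warning near the limit) could be tracked with logic like the following sketch. It is an assumption for illustration, not the real `nex-code` internals: the `BudgetTracker` class, the per-million-token pricing model, and the 80% warning threshold are all invented here.

```typescript
// Hypothetical sketch of premium-spend tracking behind a /budget-style
// command. Class name, pricing model, and thresholds are invented.
class BudgetTracker {
  private spentUsd = 0;

  constructor(private capUsd: number) {}

  // Record one premium call: tokens consumed at a per-million-token price.
  record(tokens: number, usdPerMillionTokens: number): void {
    this.spentUsd += (tokens / 1_000_000) * usdPerMillionTokens;
  }

  // Fraction of the configured cap already spent.
  usedRatio(): number {
    return this.spentUsd / this.capUsd;
  }

  // Premium providers may still be routed to while under the cap.
  premiumAllowed(): boolean {
    return this.spentUsd < this.capUsd;
  }

  // Surface a warning once usage crosses 80% of the cap.
  shouldWarn(): boolean {
    return this.usedRatio() >= 0.8 && this.premiumAllowed();
  }
}
```

Under this sketch, the router would consult `premiumAllowed()` before any paid fallback and print a budget warning when `shouldWarn()` becomes true, matching the cost-visibility behavior the README describes.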
@@ -156,4 +164,4 @@ Likely areas of continued investment include:
 - tighter verification loops for tests, diffs, and deployment workflows
 - better support for persistent project knowledge and reusable team workflows
 
-The direction is clear: make
+The direction is clear: make model-assisted development behave more like a disciplined engineering system and less like an isolated chat interface, while keeping costs controllable.