@halfagiraf/clawx 0.1.27 → 0.1.28
- package/README.md +33 -4
- package/package.json +1 -1
package/README.md
CHANGED

@@ -10,6 +10,8 @@ Terminal-first coding agent — runs locally with Ollama, DeepSeek, OpenAI, or a
 
 Clawx started because tools like OpenClaw kept getting heavier. Prompts ballooned, context windows filled up, and local models choked. We wanted the good parts — the tool-calling loop, the terminal UI, the coding tools — without the bloat. So we built something lean on top of the open-source [pi-coding-agent](https://github.com/badlogic/pi-mono) SDK: an agent that runs local models on modest hardware, hits DeepSeek when you need more muscle, and scales up to frontier models when the task calls for it. No token budget wasted on platform overhead. Just the model, the tools, and your prompt.
 
+> **Why not just use Claude Code with Ollama?** You can — Anthropic added [Ollama integration](https://docs.ollama.com/integrations/claude-code). But Claude Code "requires a large context window. We recommend at least 64k tokens" (Anthropic & Ollama, 2026). That 64k minimum exists because the system prompt, tool definitions, and protocol overhead consume a significant portion of the context before your first message is even sent. Clawx's orchestration is ~200 lines. The system prompt is lean. Tool definitions are minimal. This means more of your context window goes to actual work, not platform scaffolding — which matters when you're running a 7B model on 12GB VRAM where every token counts.
+
 > **Fair warning:** Clawx runs with the guardrails off. It will create files, delete files, install packages, and execute shell commands — all without asking you first. That's the point. No confirmation dialogs, no "are you sure?", no waiting around. You give it a task, it gets on with it. This makes it ideal for disposable environments, home labs, Raspberry Pis, VMs, and machines you're happy to let rip. If you're pointing it at a production server with your life's work on it... maybe don't do that. Or do.
 
 Clawx can create files, write code, run commands, execute over SSH, and iterate until the job is done. The model decides what to build and how — no file lists, no hand-holding.
@@ -22,6 +24,7 @@ Clawx can create files, write code, run commands, execute over SSH, and iterate
 - **Executes over SSH** — scaffolds and manages remote services
 - **Iterates** — reads command output, fixes errors, tries again
 - **Streams output** — shows progress as the model works
+- **Falls back to chat** — models without tool support switch to chat mode automatically
 
 ## What it doesn't do
 
@@ -518,8 +521,8 @@ clawx init           Set up provider, model, and API key
 clawx [prompt]       Launch TUI (default mode, rich terminal UI)
 clawx --basic        Launch basic readline REPL instead of TUI
 clawx run <prompt>   Run a task headless and exit
-clawx chat           Interactive
-clawx chat -c        Resume last session in
+clawx chat           Interactive chat (no tools — works with any model)
+clawx chat -c        Resume last session in chat mode
 clawx continue       Resume last session
 clawx sessions       List recent sessions
 clawx profiles       List saved profiles
@@ -550,6 +553,24 @@ The TUI mode uses pi-coding-agent's InteractiveMode:
 - Session branching and tree navigation
 - Markdown rendering in responses
 - /slash commands for settings, models, sessions
+- `/chat` to toggle between **agent mode** (tools enabled) and **chat mode** (no tools)
+
+### Agent mode vs chat mode
+
+Clawx runs in two modes, shown in the TUI footer:
+
+| Mode | Tools | System prompt | When |
+|------|-------|---------------|------|
+| **Agent mode** | All tools active (read, write, bash, ssh, etc.) | Coding agent — action-oriented, creates files, runs commands | Default for models that support tool calling |
+| **Chat mode** | No tools | Conversational assistant — discusses code, explains concepts | Models without tool support, or toggled with `/chat` |
+
+**Auto-detection:** If your model doesn't support tool calling (e.g. `glm47-uncensored`), Clawx detects this and switches to chat mode automatically — no crash, no error. You can still have a conversation.
+
+**Manual toggle:** Type `/chat` in the TUI to switch modes at any time. Useful when you want to discuss an approach before the agent starts executing, or when using a model that works better without tools.
+
+**On model switch:** When you change model (Ctrl+P), Clawx restores agent mode so the new model gets a fresh start with tools.
+
+`clawx chat` (the CLI command) always starts in chat mode — it never sends tools, so it works with every model regardless of tool support.
 
 ### Basic REPL commands
 
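The agent/chat fallback added in this hunk can be pictured as a small mode-selection helper. The following is a hypothetical sketch, not the package's actual `chat-mode.ts`; the names `Mode`, `ModelInfo`, `pickMode`, and `toggleMode` are invented here for illustration.

```typescript
// Hypothetical sketch of the agent/chat fallback described in the hunk above.
// None of these identifiers come from the Clawx source.
type Mode = "agent" | "chat";

interface ModelInfo {
  name: string;
  supportsTools: boolean; // whether the model accepts structured tool calls
}

// Start in agent mode only when the model supports tool calling, so
// unsupported models degrade to a plain conversation instead of erroring.
// A `forceChat` flag models the `clawx chat` command, which never sends tools.
function pickMode(model: ModelInfo, forceChat = false): Mode {
  if (forceChat) return "chat";
  return model.supportsTools ? "agent" : "chat";
}

// The /chat slash command simply flips the current mode.
function toggleMode(current: Mode): Mode {
  return current === "agent" ? "chat" : "agent";
}
```

Under this sketch, `pickMode({ name: "glm47-uncensored", supportsTools: false })` yields `"chat"`, matching the auto-detection behavior the README describes.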
@@ -587,6 +608,8 @@ src/
   provider.ts        Model/provider resolution for local endpoints
   session.ts         JSON-file session persistence
   streaming.ts       Terminal output renderer
+  extensions/
+    chat-mode.ts     TUI extension: /chat toggle, auto-detection, prompt swap
   tools/
     sshRun.ts        SSH execution (ssh2)
     gitStatus.ts     Git status wrapper
@@ -761,9 +784,15 @@ Next time you run `clawx`, the correct `fd` binary will be downloaded automatica
 
 If you set up clawx via `clawx init`, your configured model should appear in `/models`. If it doesn't, check that your `~/.clawx/config` file has the correct `CLAWDEX_PROVIDER`, `CLAWDEX_MODEL`, and `CLAWDEX_API_KEY` values.
 
-### Model doesn't
+### Model doesn't support tool calling
+
+If the TUI shows "does not support tool calling" or you see a 400 error about tools, your model doesn't support structured tool calls. Clawx handles this gracefully:
+
+- **TUI mode** (`clawx`): automatically switches to **chat mode** — you can still have a conversation, just without file/command tools. Type `/chat` to toggle back if you switch to a different model.
+- **Chat mode** (`clawx chat`): always works — never sends tools, compatible with every model.
+- **Run mode** (`clawx run`): will show an error and suggest alternatives.
 
-
+To use the full agent loop (file creation, command execution, SSH), switch to a model that supports structured tool calls — see the [model compatibility table](#model-compatibility-and-benchmarks).
 
 ### Connection errors
 
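The troubleshooting hunk above names three keys expected in `~/.clawx/config`. As an illustration only: the key names come from the diffed README, but the `KEY=value` layout and the sample values below are assumptions, not verified against the package.

```shell
# Illustrative ~/.clawx/config. Key names are from the README above;
# the KEY=value layout and these sample values are assumptions.
CLAWDEX_PROVIDER=ollama
CLAWDEX_MODEL=qwen2.5-coder:7b
CLAWDEX_API_KEY=placeholder
```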