@halfagiraf/clawx 0.1.14 → 0.1.15
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +42 -0
- package/package.json +1 -1
package/README.md
CHANGED
@@ -6,6 +6,8 @@
Terminal-first coding agent — runs locally with Ollama, DeepSeek, OpenAI, or any OpenAI-compatible endpoint.

> **Beta** — Clawx is under active development. It works well with the providers we've tested (Ollama, DeepSeek, OpenAI, Anthropic), but not every combination has been battle-tested yet. If you hit a bug, [open an issue](https://github.com/stevenmcsorley/clawx/issues) — we fix things fast.

Clawx started because tools like OpenClaw kept getting heavier. Prompts ballooned, context windows filled up, and local models choked. We wanted the good parts — the tool-calling loop, the terminal UI, the coding tools — without the bloat. So we built something lean on top of the open-source [pi-coding-agent](https://github.com/badlogic/pi-mono) SDK: an agent that runs local models on modest hardware, hits DeepSeek when you need more muscle, and scales up to frontier models when the task calls for it. No token budget wasted on platform overhead. Just the model, the tools, and your prompt.

> **Fair warning:** Clawx runs with the guardrails off. It will create files, delete files, install packages, and execute shell commands — all without asking you first. That's the point. No confirmation dialogs, no "are you sure?", no waiting around. You give it a task, it gets on with it. This makes it ideal for disposable environments, home labs, Raspberry Pis, VMs, and machines you're happy to let rip. If you're pointing it at a production server with your life's work on it... maybe don't do that. Or do.

@@ -647,6 +649,46 @@ Next time you run `clawx`, the correct `fd` binary will be downloaded automatically.

If you set up clawx via `clawx init`, your configured model should appear in `/models`. If it doesn't, check that your `~/.clawx/config` file has the correct `CLAWDEX_PROVIDER`, `CLAWDEX_MODEL`, and `CLAWDEX_API_KEY` values.
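If you're unsure what that file should look like, here is a minimal sketch. The `KEY=value` layout is an assumption inferred from the variable names, and the provider and model values are illustrative only; use whatever `clawx init` wrote for your setup:

```shell
# ~/.clawx/config -- illustrative values only (format assumed to be KEY=value)
CLAWDEX_PROVIDER=deepseek
CLAWDEX_MODEL=deepseek-chat
CLAWDEX_API_KEY=sk-...
```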

### Model doesn't produce tool calls

If the agent responds with text but never creates files or runs commands, your model likely doesn't support **structured tool calling**. It needs to return `tool_calls` objects in the API response, not text like `<tool_call>`. Check the [model compatibility table](#model-compatibility-and-benchmarks) — models marked "Not compatible" won't work with the agent loop.

### Connection errors

```
[error] Connection error.
```

This means clawx can't reach the model endpoint. Check:

- Is Ollama running? (`ollama serve`, or check if the service is active)
- Is the base URL correct? (`http://localhost:11434/v1` for Ollama)
- Is the model pulled? (`ollama list` to check)
- For API providers: is your API key valid?
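The first two checks can be scripted. A small sketch, assuming a default local Ollama install (adjust `BASE_URL` if you point clawx elsewhere):

```shell
# Check that the model endpoint answers at all (default Ollama base URL assumed).
BASE_URL="http://localhost:11434/v1"
if curl -sf "$BASE_URL/models" > /dev/null 2>&1; then
  echo "endpoint reachable: $BASE_URL"
else
  echo "endpoint unreachable: $BASE_URL (is 'ollama serve' running?)"
fi
```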

### Reporting bugs

Clawx is in beta — if something breaks, we want to know. [Open an issue](https://github.com/stevenmcsorley/clawx/issues) with:

1. **What you ran** — the command and prompt
2. **What happened** — the error message or unexpected behaviour
3. **Your setup** — OS, provider, model, clawx version (`clawx --version`)
4. **Verbose output** — run with the `-v` flag for debug logs: `clawx run -v "your prompt"`

### Tested vs untested providers

| Provider | Status |
|----------|--------|
| Ollama | Tested on Windows + Linux |
| DeepSeek API | Tested |
| OpenAI API | Tested |
| Anthropic API | Tested |
| LM Studio | Untested — should work (OpenAI-compatible) |
| vLLM | Untested — should work (OpenAI-compatible) |
| llama.cpp server | Tested — tool calling depends on model |
| Google / Mistral | Untested |

If you test a provider that isn't listed, let us know how it went.

## License

MIT. Built on the open-source [pi-coding-agent](https://github.com/badlogic/pi-mono) SDK (MIT).