@halfagiraf/clawx 0.1.14 → 0.1.16

Files changed (2)
  1. package/README.md +87 -2
  2. package/package.json +1 -1
package/README.md CHANGED
@@ -6,6 +6,8 @@
  
  Terminal-first coding agent — runs locally with Ollama, DeepSeek, OpenAI, or any OpenAI-compatible endpoint.
  
+ > **Beta** — Clawx is under active development. It works well with the providers we've tested (Ollama, DeepSeek, OpenAI, Anthropic) but not every combination has been battle-tested yet. If you hit a bug, [open an issue](https://github.com/stevenmcsorley/clawx/issues) — we fix things fast.
+ 
  Clawx started because tools like OpenClaw kept getting heavier. Prompts ballooned, context windows filled up, and local models choked. We wanted the good parts — the tool-calling loop, the terminal UI, the coding tools — without the bloat. So we built something lean on top of the open-source [pi-coding-agent](https://github.com/badlogic/pi-mono) SDK: an agent that runs local models on modest hardware, hits DeepSeek when you need more muscle, and scales up to frontier models when the task calls for it. No token budget wasted on platform overhead. Just the model, the tools, and your prompt.
  
  > **Fair warning:** Clawx runs with the guardrails off. It will create files, delete files, install packages, and execute shell commands — all without asking you first. That's the point. No confirmation dialogs, no "are you sure?", no waiting around. You give it a task, it gets on with it. This makes it ideal for disposable environments, home labs, Raspberry Pis, VMs, and machines you're happy to let rip. If you're pointing it at a production server with your life's work on it... maybe don't do that. Or do.
@@ -568,11 +570,54 @@ The agent will create files, install dependencies, build, and verify — iterati
  
  ### Remote scaffolding via SSH
  
+ Clawx can SSH into other machines on your network and run commands — from installing packages to deploying services. You describe what you want on your desktop; it happens on the remote machine.
+ 
+ **1. Configure an SSH target** in `clawx.json` (in your working directory):
+ 
+ ```json
+ {
+   "sshTargets": {
+     "pi": {
+       "host": "192.168.1.198",
+       "username": "dev",
+       "privateKeyPath": "~/.ssh/id_ed25519"
+     }
+   }
+ }
+ ```
+ 
+ **2. Run a prompt that references the target:**
+ 
+ ```bash
+ clawx run "SSH into the pi and run: hostname && uname -a"
+ ```
+ 
+ **3. Clawx connects and executes:**
+ 
+ ```
+ [tool] ssh_run target="pi" command="hostname && uname -a"
+ [pi] exit=0 (943ms)
+ ubuntu
+ Linux ubuntu 6.14.0-1019-raspi aarch64 GNU/Linux
+ ```
+ 
+ Tested and verified with DeepSeek API → Raspberry Pi 4 (Ubuntu aarch64) over local network.
+ 
+ **More SSH examples:**
+ 
  ```bash
- # With SSH targets configured in .env or clawx.json
- clawx run "SSH into my Pi and set up a Node.js service that monitors CPU temperature and exposes it as a Prometheus metric on port 9100"
+ # Install and start a service on a remote Pi
+ clawx run "SSH into the pi, install Node.js, create an Express API with a /hello endpoint, start it on port 3000, and verify it's running with curl"
+ 
+ # Set up monitoring
+ clawx run "SSH into the pi and set up a Node.js service that monitors CPU temperature and exposes it as a Prometheus metric on port 9100"
+ 
+ # Deploy to a server
+ clawx run "SSH into server, pull the latest code from git, run npm install, and restart the PM2 process"
  ```
  
+ You can define multiple targets (pi, server, vm, etc.) and reference them by name in your prompts.
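
For example, a `clawx.json` with more than one target might look like the sketch below, extending the single-target example above. The hosts, usernames, and key paths are illustrative placeholders:

```json
{
  "sshTargets": {
    "pi": {
      "host": "192.168.1.198",
      "username": "dev",
      "privateKeyPath": "~/.ssh/id_ed25519"
    },
    "server": {
      "host": "192.168.1.50",
      "username": "deploy",
      "privateKeyPath": "~/.ssh/id_rsa"
    }
  }
}
```

A prompt like "SSH into server and check disk usage" would then resolve against the `server` entry.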
+ 
  
  ### Interactive basic REPL
  
  ```bash
@@ -647,6 +692,46 @@ Next time you run `clawx`, the correct `fd` binary will be downloaded automatica
  
  If you set up clawx via `clawx init`, your configured model should appear in `/models`. If it doesn't, check that your `~/.clawx/config` file has the correct `CLAWDEX_PROVIDER`, `CLAWDEX_MODEL`, and `CLAWDEX_API_KEY` values.
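
For reference, a minimal `~/.clawx/config` using those three keys might look like this. This is a sketch assuming an env-style key=value format; the values are placeholders, so use whatever provider and model you set up via `clawx init`:

```
CLAWDEX_PROVIDER=openai
CLAWDEX_MODEL=gpt-4o-mini
CLAWDEX_API_KEY=sk-...
```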
  
+ ### Model doesn't produce tool calls
+ 
+ If the agent responds with text but never creates files or runs commands, your model likely doesn't support **structured tool calling**. It needs to return `tool_calls` objects in the API response, not text like `<tool_call>`. Check the [model compatibility table](#model-compatibility-and-benchmarks) — models marked "Not compatible" won't work with the agent loop.
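
To make the distinction concrete, a compatible model returns the call as structured data in the OpenAI-style chat-completions schema, roughly like this (the tool name and arguments are illustrative):

```json
{
  "role": "assistant",
  "content": null,
  "tool_calls": [
    {
      "id": "call_abc123",
      "type": "function",
      "function": {
        "name": "bash",
        "arguments": "{\"command\": \"ls -la\"}"
      }
    }
  ]
}
```

An incompatible model instead emits the call as plain text inside `content` (for example wrapped in `<tool_call>` tags), which the agent loop cannot parse into an action.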
+ 
+ ### Connection errors
+ 
+ ```
+ [error] Connection error.
+ ```
+ 
+ This means clawx can't reach the model endpoint. Check:
+ - Is Ollama running? (`ollama serve` or check if the service is active)
+ - Is the base URL correct? (`http://localhost:11434/v1` for Ollama)
+ - Is the model pulled? (`ollama list` to check)
+ - For API providers: is your API key valid?
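
The checklist above can be worked through from a terminal. A minimal sketch, assuming the default Ollama endpoint (swap the URL for other OpenAI-compatible servers):

```shell
# Is anything listening at the endpoint? OpenAI-compatible servers expose /v1/models.
curl -s --max-time 3 http://localhost:11434/v1/models || echo "endpoint unreachable"

# Is the Ollama CLI available, and is your model pulled?
command -v ollama >/dev/null 2>&1 && ollama list || echo "ollama not on PATH"
```

If the first command prints a JSON model list but the agent still fails, double-check the base URL and API key in your config.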
+ 
+ ### Reporting bugs
+ 
+ Clawx is in beta — if something breaks, we want to know. [Open an issue](https://github.com/stevenmcsorley/clawx/issues) with:
+ 
+ 1. **What you ran** — the command and prompt
+ 2. **What happened** — error message or unexpected behaviour
+ 3. **Your setup** — OS, provider, model, clawx version (`clawx --version`)
+ 4. **Verbose output** — run with the `-v` flag for debug logs: `clawx run -v "your prompt"`
+ 
+ ### Tested vs untested providers
+ 
+ | Provider | Status |
+ |----------|--------|
+ | Ollama | Tested on Windows + Linux |
+ | DeepSeek API | Tested |
+ | OpenAI API | Tested |
+ | Anthropic API | Tested |
+ | LM Studio | Untested — should work (OpenAI-compatible) |
+ | vLLM | Untested — should work (OpenAI-compatible) |
+ | llama.cpp server | Tested — tool calling depends on model |
+ | Google / Mistral | Untested |
+ 
+ If you test a provider that isn't listed, let us know how it went.
+ 
  
  ## License
  
  MIT. Built on the open-source [pi-coding-agent](https://github.com/badlogic/pi-mono) SDK (MIT).
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@halfagiraf/clawx",
-   "version": "0.1.14",
+   "version": "0.1.16",
    "description": "Terminal-first coding agent — runs locally with Ollama, DeepSeek, OpenAI, or any OpenAI-compatible endpoint",
    "type": "module",
    "bin": {