fireclaw 0.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md ADDED
@@ -0,0 +1,178 @@
1
+ # fireclaw-vm-demo
2
+
3
+ Run [OpenClaw](https://github.com/openclaw/openclaw) instances inside Firecracker microVMs, each fully isolated with its own filesystem, network, and process tree.
4
+
5
+ ## Why
6
+
7
+ Running OpenClaw directly on a host (bare metal or in Docker) means every instance shares the kernel, network namespace, and often the Docker socket. That's fine for a single bot, but problematic when you want:
8
+
9
+ - **Isolation** — one misbehaving instance can't interfere with others or the host
10
+ - **Reproducibility** — each VM boots from a clean rootfs image, no drift
11
+ - **Security** — no Docker socket mount into the container; the guest runs its own Docker daemon
12
+ - **Density** — Firecracker VMs boot in ~125 ms and add only ~5 MiB of memory overhead each, so you can pack many instances on a single host
13
+
14
+ This repo is a minimal control plane that wires up Firecracker VM lifecycle, networking, and OpenClaw provisioning with plain bash and systemd. No orchestrator, no Kubernetes, no extra daemons.
15
+
16
+ ## How it works
17
+
18
+ ```
19
+ Host
20
+ ├── systemd: firecracker-vmdemo-<id>.service ← runs the VM
21
+ ├── systemd: vmdemo-proxy-<id>.service ← socat: localhost:<port> → VM:18789
22
+ ├── bridge: fcbr0 (172.16.0.0/24) ← shared bridge for all VMs
23
+ │
24
+ └── Firecracker VM (172.16.0.x)
25
+ ├── cloud-init: ubuntu user, SSH key, Docker install
26
+ ├── Docker: pulls OpenClaw image
27
+ ├── systemd: openclaw-<id>.service ← docker run --network host ... gateway --bind lan --port 18789
28
+ └── Browser binaries (Playwright Chromium, installed at provision time)
29
+ ```
30
+
31
+ 1. **`fireclaw setup`** creates a new instance: copies the base rootfs, optionally resizes it (default `40G`), generates a cloud-init seed image, allocates an IP + host port, writes a Firecracker config, creates systemd units, boots the VM with a per-instance Firecracker API socket, waits for SSH, then SCPs `provision-guest.sh` into the guest and runs it.
32
+
33
+ 2. **`provision-guest.sh`** runs inside the VM as root: waits for cloud-init/apt locks, expands the guest ext4 filesystem (`resize2fs`), configures Docker for Firecracker (`iptables=false`, `ip6tables=false`, `bridge=none`), pulls the OpenClaw image, runs the OpenClaw CLI (`doctor --fix` included), installs Playwright Chromium, writes browser path + health-check script, then creates and starts the guest systemd service.
34
+
35
+ 3. **`fireclaw`** manages lifecycle after setup: start/stop/restart VMs, tail logs (guest or host side), open an SSH shell, show status, or destroy an instance.
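+ 
+ After setup, the host side is plain systemd, so the generated units can be inspected directly (unit names as in the diagram above; `my-bot` is an illustrative instance ID):
+ 
+ ```bash
+ systemctl status firecracker-vmdemo-my-bot.service   # the Firecracker VM process
+ systemctl status vmdemo-proxy-my-bot.service         # the socat localhost proxy
+ journalctl -u firecracker-vmdemo-my-bot.service -f   # host-side logs (also via `fireclaw logs my-bot host`)
+ ```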
36
+
37
+ All state lives in two places:
38
+ - **Repo-local** `.vm-demo/.vm-<id>/` — env file, token, provision vars
39
+ - **Host filesystem** `/srv/firecracker/vm-demo/<id>/` — VM images, Firecracker config, logs
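+ 
+ For example, the repo-local env file written by `fireclaw setup` looks roughly like this (abridged; values illustrative):
+ 
+ ```bash
+ # .vm-demo/.vm-my-bot/.env
+ INSTANCE_ID=my-bot
+ HOST_PORT=18891              # localhost port proxied to the VM's gateway port 18789
+ VM_IP=172.16.0.2             # static address on the fcbr0 bridge
+ VM_TAP=tmybot2               # per-instance tap device
+ VM_MAC=06:fc:00:10:00:02
+ GATEWAY_TOKEN=<hex token>
+ OPENCLAW_IMAGE=ghcr.io/openclaw/openclaw:latest
+ VM_VCPU=4
+ VM_MEM_MIB=8192
+ DISK_SIZE=40G
+ ```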
40
+
41
+ ## Prerequisites
42
+
43
+ - Linux host with KVM support (`/dev/kvm` accessible)
44
+ - `firecracker` binary at `/usr/local/bin/firecracker` ([install guide](https://github.com/firecracker-microvm/firecracker/blob/main/docs/getting-started.md))
45
+ - `qemu-img` (from `qemu-utils`) for rootfs resizing
46
+ - `cloud-localds` (from `cloud-image-utils`), `socat`, `jq`, `iptables`, `iproute2`, `ssh`, `scp`, `curl`, `openssl`
47
+ - Base VM images: a Linux kernel (`vmlinux`) and an ext4 rootfs with cloud-init support. Optionally an initrd.
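+ 
+ A quick pre-flight check for the above (a minimal sketch; package names vary by distro):
+ 
+ ```bash
+ [ -r /dev/kvm ] && [ -w /dev/kvm ] && echo "KVM: ok" || echo "KVM: /dev/kvm not accessible"
+ for c in firecracker qemu-img cloud-localds socat jq iptables ip ssh scp curl openssl; do
+   command -v "$c" >/dev/null || echo "missing: $c"
+ done
+ ```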
48
+
49
+ Set `BASE_IMAGES_DIR` or pass `--base-kernel`/`--base-rootfs`/`--base-initrd` to point at your images.
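+ 
+ For example (paths illustrative):
+ 
+ ```bash
+ # Point at a directory that contains vmlinux / rootfs.ext4 / initrd.img ...
+ sudo BASE_IMAGES_DIR=/srv/firecracker/base/images fireclaw setup --instance my-bot --telegram-token "<token>"
+ 
+ # ... or name each file explicitly
+ sudo fireclaw setup --instance my-bot --telegram-token "<token>" \
+   --base-kernel /path/to/vmlinux \
+   --base-rootfs /path/to/rootfs.ext4 \
+   --base-initrd /path/to/initrd.img
+ ```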
50
+
51
+ ## Setup
52
+
53
+ ```bash
54
+ # 1. Clone
55
+ git clone https://github.com/bchewy/fireclaw-vm-demo.git
56
+ cd fireclaw-vm-demo
57
+
58
+ # 2. (Optional) install global CLI
59
+ sudo install -m 0755 ./bin/fireclaw /usr/local/bin/fireclaw
60
+
61
+ # 3. Create an instance
62
+ sudo fireclaw setup \
63
+ --instance my-bot \
64
+ --telegram-token "<your-bot-token>" \
65
+ --telegram-users "<your-telegram-user-id>" \
66
+ --model "anthropic/claude-opus-4-6" \
67
+ --anthropic-api-key "<key>"
68
+ ```
69
+
70
+ ## Install from npm
71
+
72
+ ```bash
73
+ # Global install
74
+ sudo npm install -g fireclaw
75
+
76
+ # Then use:
77
+ sudo fireclaw --help
78
+ sudo fireclaw list
79
+ ```
80
+
81
+ ## Publish to npm
82
+
83
+ ```bash
84
+ # 1. Login/check auth
85
+ npm login
86
+ npm whoami
87
+
88
+ # 2. Validate package contents and scripts
89
+ npm test
90
+ npm pack --dry-run
91
+
92
+ # 3. Bump version and publish
93
+ npm version patch
94
+ npm publish
95
+ ```
96
+
97
+ If `npm publish` fails because the package name is unavailable, change `"name"` in `package.json` (for example to a scoped package like `@<your-scope>/fireclaw`) and publish again.
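+ 
+ For example, with a recent npm (scoped packages are private by default, so publish them with `--access public`):
+ 
+ ```bash
+ npm pkg set name="@your-scope/fireclaw"
+ npm publish --access public
+ ```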
98
+
99
+ ## Setup options
+ 
+ Flags accepted by `fireclaw setup`:
100
+
101
+ | Flag | Default | Description |
102
+ |------|---------|-------------|
103
+ | `--instance` | (required) | Instance ID (`[a-z0-9_-]+`) |
104
+ | `--telegram-token` | (required) | Telegram bot token |
105
+ | `--telegram-users` | | Comma-separated Telegram user IDs for allowlist |
106
+ | `--model` | `anthropic/claude-opus-4-6` | Model ID |
107
+ | `--skills` | `github,tmux,coding-agent,session-logs,skill-creator` | Comma-separated skill list |
108
+ | `--openclaw-image` | `ghcr.io/openclaw/openclaw:latest` | Docker image for OpenClaw |
109
+ | `--vm-vcpu` | `4` | vCPUs per VM |
110
+ | `--vm-mem-mib` | `8192` | Memory per VM (MiB) |
111
+ | `--disk-size` | `40G` | Resize copied rootfs image to this virtual size before boot |
112
+ | `--api-sock` | `<fc-dir>/firecracker.socket` | Firecracker API socket path (must be unique per VM) |
113
+ | `--anthropic-api-key` | | Anthropic API key |
114
+ | `--openai-api-key` | | OpenAI API key |
115
+ | `--minimax-api-key` | | MiniMax API key |
116
+ | `--skip-browser-install` | `false` | Skip Playwright Chromium install |
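+ 
+ For example, a lighter instance that skips the Playwright Chromium install (values illustrative):
+ 
+ ```bash
+ sudo fireclaw setup \
+   --instance small-bot \
+   --telegram-token "<your-bot-token>" \
+   --vm-vcpu 2 \
+   --vm-mem-mib 4096 \
+   --disk-size 20G \
+   --skip-browser-install
+ ```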
117
+
118
+ ## Usage
119
+
120
+ ```bash
121
+ # List all instances
122
+ sudo fireclaw list
123
+
124
+ # Status of one instance
125
+ sudo fireclaw status my-bot
126
+
127
+ # Stop / start / restart
128
+ sudo fireclaw stop my-bot
129
+ sudo fireclaw start my-bot
130
+ sudo fireclaw restart my-bot
131
+
132
+ # Tail guest logs (OpenClaw service)
133
+ sudo fireclaw logs my-bot
134
+
135
+ # Tail host logs (Firecracker + proxy)
136
+ sudo fireclaw logs my-bot host
137
+
138
+ # SSH into the VM
139
+ sudo fireclaw shell my-bot
140
+
141
+ # Run a command inside the VM
142
+ sudo fireclaw shell my-bot "docker ps"
143
+
144
+ # Get the gateway token
145
+ sudo fireclaw token my-bot
146
+
147
+ # Health check
148
+ curl -fsS http://127.0.0.1:<HOST_PORT>/health
149
+ # If proxy health is flaky, inspect VM-side health too:
150
+ sudo fireclaw status my-bot
151
+
152
+ # Destroy (interactive confirmation)
153
+ sudo fireclaw destroy my-bot
154
+
155
+ # Destroy (skip confirmation)
156
+ sudo fireclaw destroy my-bot --force
157
+ ```
158
+
159
+ ## Networking
160
+
161
+ Each VM gets a static IP on a bridge (`fcbr0`, `172.16.0.0/24`). The host acts as the gateway at `172.16.0.1` with NAT for outbound traffic. A `socat` proxy on the host forwards `127.0.0.1:<HOST_PORT>` to the VM's gateway port (`18789`), so the OpenClaw API is only reachable from localhost.
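+ 
+ On the host this boils down to a few standard `ip`/`iptables`/`socat` commands, roughly what the generated `start-vm.sh` and proxy unit run (shown for orientation; `fireclaw setup` does all of this itself):
+ 
+ ```bash
+ # Shared bridge + NAT for outbound guest traffic
+ ip link add fcbr0 type bridge
+ ip addr add 172.16.0.1/24 dev fcbr0 && ip link set fcbr0 up
+ sysctl -w net.ipv4.ip_forward=1
+ iptables -t nat -A POSTROUTING -s 172.16.0.0/24 ! -o fcbr0 -j MASQUERADE
+ 
+ # Per-VM tap device attached to the bridge
+ ip tuntap add dev <tap> mode tap
+ ip link set <tap> master fcbr0 && ip link set <tap> up
+ 
+ # Localhost-only forward to the guest's gateway port
+ socat TCP-LISTEN:<HOST_PORT>,bind=127.0.0.1,reuseaddr,fork TCP:<VM_IP>:18789
+ ```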
162
+
163
+ ## Environment variables
164
+
165
+ All scripts respect these overrides:
166
+
167
+ | Variable | Default |
168
+ |----------|---------|
169
+ | `STATE_ROOT` | `<repo>/.vm-demo` |
170
+ | `FC_ROOT` | `/srv/firecracker/vm-demo` |
171
+ | `BASE_PORT` | `18890` |
172
+ | `BRIDGE_NAME` | `fcbr0` |
173
+ | `BRIDGE_ADDR` | `172.16.0.1/24` |
174
+ | `SUBNET_CIDR` | `172.16.0.0/24` |
175
+ | `SSH_KEY_PATH` | `~/.ssh/vmdemo_vm` |
176
+ | `BASE_IMAGES_DIR` | `/srv/firecracker/base/images` |
177
+ | `DISK_SIZE` | `40G` |
178
+ | `API_SOCK` | `<FC_ROOT>/<instance>/firecracker.socket` |
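+ 
+ For example, to keep VM assets on a different volume and start port allocation higher (values illustrative):
+ 
+ ```bash
+ sudo FC_ROOT=/data/firecracker/vm-demo BASE_PORT=20000 fireclaw setup \
+   --instance my-bot \
+   --telegram-token "<your-bot-token>"
+ ```
+ 
+ If you override these at setup time, pass the same overrides to later `fireclaw` commands so paths and services resolve consistently.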
package/bin/fireclaw ADDED
@@ -0,0 +1,66 @@
1
+ #!/usr/bin/env bash
2
+ set -euo pipefail
3
+
4
+ SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
5
+
6
+ usage() {
7
+ cat <<'EOF'
8
+ Usage: fireclaw <command> [args...]
9
+
10
+ Primary commands:
11
+ setup <flags...> Create and provision a new VM instance
12
+ list List instances
13
+ status [id] Show instance status
14
+ start <id> Start instance
15
+ stop <id> Stop instance
16
+ restart <id> Restart instance
17
+ logs <id> [guest|host] Stream logs
18
+ shell <id> [command...] SSH shell or run command in VM
19
+ token <id> Show gateway token
20
+ destroy <id> [--force] Destroy instance
21
+
22
+ Compatibility commands:
23
+ ctl <vm-ctl-args...> Pass through to vm-ctl
24
+ vm-setup <flags...> Pass through to vm-setup
25
+ vm-ctl <args...> Pass through to vm-ctl
26
+
27
+ Examples:
28
+ sudo ./bin/fireclaw setup --instance my-bot --telegram-token <token> --telegram-users <uid>
29
+ sudo ./bin/fireclaw list
30
+ sudo ./bin/fireclaw status my-bot
31
+ EOF
32
+ }
33
+
34
+ [[ $# -ge 1 ]] || {
35
+ usage
36
+ exit 1
37
+ }
38
+
39
+ cmd="$1"
40
+ shift || true
41
+
42
+ case "$cmd" in
43
+ setup|create|provision)
44
+ VM_SETUP_CMD_NAME="fireclaw setup" exec "$SCRIPT_DIR/vm-setup" "$@"
45
+ ;;
46
+ list|status|start|stop|restart|logs|shell|token|destroy)
47
+ if [[ "${1:-}" == "-h" || "${1:-}" == "--help" || "${1:-}" == "help" ]]; then
48
+ VM_CTL_CMD_NAME="fireclaw" exec "$SCRIPT_DIR/vm-ctl" --help
49
+ fi
50
+ VM_CTL_CMD_NAME="fireclaw" exec "$SCRIPT_DIR/vm-ctl" "$cmd" "$@"
51
+ ;;
52
+ ctl|vm-ctl)
53
+ VM_CTL_CMD_NAME="fireclaw ctl" exec "$SCRIPT_DIR/vm-ctl" "$@"
54
+ ;;
55
+ vm-setup)
56
+ VM_SETUP_CMD_NAME="fireclaw vm-setup" exec "$SCRIPT_DIR/vm-setup" "$@"
57
+ ;;
58
+ help|-h|--help)
59
+ usage
60
+ ;;
61
+ *)
62
+ echo "Error: unknown command '$cmd'" >&2
63
+ usage
64
+ exit 1
65
+ ;;
66
+ esac
package/bin/vm-common.sh ADDED
@@ -0,0 +1,116 @@
1
+ #!/usr/bin/env bash
2
+ set -euo pipefail
3
+
4
+ REPO_ROOT="${REPO_ROOT:-$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)}"
5
+ STATE_ROOT="${STATE_ROOT:-$REPO_ROOT/.vm-demo}"
6
+ FC_ROOT="${FC_ROOT:-/srv/firecracker/vm-demo}"
7
+ BASE_PORT="${BASE_PORT:-18890}"
8
+
9
+ BRIDGE_NAME="${BRIDGE_NAME:-fcbr0}"
10
+ BRIDGE_ADDR="${BRIDGE_ADDR:-172.16.0.1/24}"
11
+ SUBNET_CIDR="${SUBNET_CIDR:-172.16.0.0/24}"
12
+
13
+ OPENCLAW_IMAGE_DEFAULT="${OPENCLAW_IMAGE_DEFAULT:-ghcr.io/openclaw/openclaw:latest}"
14
+ SSH_KEY_PATH="${SSH_KEY_PATH:-$HOME/.ssh/vmdemo_vm}"
15
+
16
+ log() { printf '==> %s\n' "$*"; }
17
+ warn() { printf 'Warning: %s\n' "$*" >&2; }
18
+ die() { printf 'Error: %s\n' "$*" >&2; exit 1; }
19
+
20
+ require_cmd() { command -v "$1" >/dev/null 2>&1 || die "Missing command: $1"; }
21
+ require_root() { [[ $EUID -eq 0 ]] || die "Run as root"; }
22
+
23
+ ensure_root_dirs() { mkdir -p "$STATE_ROOT" "$FC_ROOT"; }
24
+
25
+ validate_instance_id() {
26
+ local id="$1"
27
+ [[ -n "$id" ]] || die "instance id is required"
28
+ [[ "$id" =~ ^[a-z0-9_-]+$ ]] || die "instance id must match [a-z0-9_-]+"
29
+ }
30
+
31
+ instance_dir() { printf '%s/.vm-%s\n' "$STATE_ROOT" "$1"; }
32
+ instance_env() { printf '%s/.env\n' "$(instance_dir "$1")"; }
33
+ instance_token() { printf '%s/.token\n' "$(instance_dir "$1")"; }
34
+ fc_instance_dir() { printf '%s/%s\n' "$FC_ROOT" "$1"; }
35
+ vm_service() { printf 'firecracker-vmdemo-%s.service\n' "$1"; }
36
+ proxy_service() { printf 'vmdemo-proxy-%s.service\n' "$1"; }
37
+ guest_health_script() { printf '/usr/local/bin/openclaw-health-%s.sh\n' "$1"; }
38
+
39
+ load_instance_env() {
40
+ local id="$1"
41
+ validate_instance_id "$id"
42
+ local f
43
+ f="$(instance_env "$id")"
44
+ [[ -f "$f" ]] || die "instance '$id' not found"
45
+ set -a
46
+ source "$f"
47
+ set +a
48
+ }
49
+
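+ # Port/IP allocation: scan existing instance .env files and take one more than the highest
+ # value seen (ports start just above BASE_PORT; guest IPs start at 172.16.0.2, since the
+ # host gateway holds 172.16.0.1).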
50
+ next_port() {
51
+ local max="$BASE_PORT"
52
+ shopt -s nullglob
53
+ local f p
54
+ for f in "$STATE_ROOT"/.vm-*/.env; do
55
+ p="$(grep '^HOST_PORT=' "$f" | cut -d= -f2 || true)"
56
+ if [[ -n "${p:-}" && "$p" -gt "$max" ]]; then
57
+ max="$p"
58
+ fi
59
+ done
60
+ shopt -u nullglob
61
+ echo $((max + 1))
62
+ }
63
+
64
+ next_ip() {
65
+ local max_octet=1
66
+ shopt -s nullglob
67
+ local f ip oct
68
+ for f in "$STATE_ROOT"/.vm-*/.env; do
69
+ ip="$(grep '^VM_IP=' "$f" | cut -d= -f2 || true)"
70
+ oct="${ip##*.}"
71
+ if [[ "$oct" =~ ^[0-9]+$ ]] && (( oct > max_octet )); then
72
+ max_octet="$oct"
73
+ fi
74
+ done
75
+ shopt -u nullglob
76
+
77
+ local next=$((max_octet + 1))
78
+ (( next < 255 )) || die "IP pool exhausted"
79
+ echo "172.16.0.$next"
80
+ }
81
+
82
+ ensure_bridge_and_nat() {
83
+ if ! ip link show "$BRIDGE_NAME" >/dev/null 2>&1; then
84
+ ip link add "$BRIDGE_NAME" type bridge
85
+ fi
86
+ ip addr add "$BRIDGE_ADDR" dev "$BRIDGE_NAME" 2>/dev/null || true
87
+ ip link set "$BRIDGE_NAME" up
88
+
89
+ sysctl -w net.ipv4.ip_forward=1 >/dev/null
90
+
91
+ iptables -t nat -C POSTROUTING -s "$SUBNET_CIDR" ! -o "$BRIDGE_NAME" -j MASQUERADE 2>/dev/null \
92
+ || iptables -t nat -A POSTROUTING -s "$SUBNET_CIDR" ! -o "$BRIDGE_NAME" -j MASQUERADE
93
+ }
94
+
95
+ wait_for_ssh() {
96
+ local ip="$1"
97
+ local key="${2:-$SSH_KEY_PATH}"
98
+ local retries="${3:-120}"
99
+ local i
100
+ for ((i=1; i<=retries; i++)); do
101
+ if ssh -i "$key" -o StrictHostKeyChecking=accept-new -o UserKnownHostsFile=/dev/null -o ConnectTimeout=3 "ubuntu@$ip" true >/dev/null 2>&1; then
102
+ return 0
103
+ fi
104
+ sleep 2
105
+ done
106
+ return 1
107
+ }
108
+
109
+ check_guest_health() {
110
+ local id="$1"
111
+ local ip="$2"
112
+ local key="${3:-$SSH_KEY_PATH}"
113
+ local script
114
+ script="$(guest_health_script "$id")"
115
+ ssh -i "$key" -o StrictHostKeyChecking=accept-new -o UserKnownHostsFile=/dev/null -o ConnectTimeout=3 "ubuntu@$ip" "if [[ -x '$script' ]]; then sudo '$script'; else curl -fsS http://127.0.0.1:18789/health >/dev/null; fi" >/dev/null 2>&1
116
+ }
package/bin/vm-ctl ADDED
@@ -0,0 +1,208 @@
1
+ #!/usr/bin/env bash
2
+ set -euo pipefail
3
+
4
+ SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
5
+ source "$SCRIPT_DIR/vm-common.sh"
6
+ CMD_NAME="${VM_CTL_CMD_NAME:-$(basename "$0")}"
7
+
8
+ usage() {
9
+ cat <<EOF
10
+ Usage: $CMD_NAME <command> [instance]
11
+
12
+ Commands:
13
+ list
14
+ status [id]
15
+ start <id>
16
+ stop <id>
17
+ restart <id>
18
+ logs <id> [guest|host]
19
+ shell <id> [command...]
20
+ token <id>
21
+ destroy <id> [--force]
22
+ EOF
23
+ }
24
+
25
+ ssh_run() {
26
+ local ip="$1"; shift
27
+ local key="${SSH_KEY_PATH:-$HOME/.ssh/vmdemo_vm}"
28
+ ssh -i "$key" -o StrictHostKeyChecking=accept-new -o UserKnownHostsFile=/dev/null "ubuntu@$ip" "$@"
29
+ }
30
+
31
+ cmd_list() {
32
+ shopt -s nullglob
33
+ local found="false"
34
+ for d in "$STATE_ROOT"/.vm-*/; do
35
+ found="true"
36
+ local id
37
+ id="$(basename "$d" | sed 's/^\.vm-//')"
38
+ if [[ ! "$id" =~ ^[a-z0-9_-]+$ ]]; then
39
+ warn "Skipping invalid instance state directory: $d"
40
+ continue
41
+ fi
42
+ load_instance_env "$id"
43
+ local ssh_key="${SSH_KEY_PATH:-$HOME/.ssh/vmdemo_vm}"
44
+ local health="down"
45
+ local host_health="down"
46
+ local guest_health="down"
47
+ curl -fsS "http://127.0.0.1:$HOST_PORT/health" >/dev/null 2>&1 && host_health="up"
48
+ if check_guest_health "$id" "$VM_IP" "$ssh_key"; then
49
+ guest_health="up"
50
+ fi
51
+ if [[ "$host_health" == "up" || "$guest_health" == "up" ]]; then
52
+ health="up"
53
+ fi
54
+ local vm_state proxy_state
55
+ vm_state="$(systemctl is-active "$(vm_service "$id")" 2>/dev/null || echo inactive)"
56
+ proxy_state="$(systemctl is-active "$(proxy_service "$id")" 2>/dev/null || echo inactive)"
57
+ echo "$id ip=$VM_IP port=$HOST_PORT vm=$vm_state proxy=$proxy_state health=$health host_health=$host_health guest_health=$guest_health"
58
+ done
59
+ shopt -u nullglob
60
+ [[ "$found" == "true" ]] || echo "(no instances)"
61
+ }
62
+
63
+ cmd_status_one() {
64
+ local id="$1"
65
+ validate_instance_id "$id"
66
+ load_instance_env "$id"
67
+ local ssh_key="${SSH_KEY_PATH:-$HOME/.ssh/vmdemo_vm}"
68
+ local vm_state proxy_state health host_health guest_health guest
69
+ vm_state="$(systemctl is-active "$(vm_service "$id")" 2>/dev/null || echo inactive)"
70
+ proxy_state="$(systemctl is-active "$(proxy_service "$id")" 2>/dev/null || echo inactive)"
71
+ health="down"
72
+ host_health="down"
73
+ guest_health="down"
74
+ curl -fsS "http://127.0.0.1:$HOST_PORT/health" >/dev/null 2>&1 && host_health="up"
75
+ guest="unknown"
76
+ if wait_for_ssh "$VM_IP" "$ssh_key" 1; then
77
+ guest="$(ssh_run "$VM_IP" "systemctl is-active openclaw-$id.service" 2>/dev/null || echo unknown)"
78
+ if check_guest_health "$id" "$VM_IP" "$ssh_key"; then
79
+ guest_health="up"
80
+ fi
81
+ fi
82
+ if [[ "$host_health" == "up" || "$guest_health" == "up" ]]; then
83
+ health="up"
84
+ fi
85
+ echo "$id ip=$VM_IP port=$HOST_PORT vm=$vm_state proxy=$proxy_state guest=$guest health=$health host_health=$host_health guest_health=$guest_health"
86
+ }
87
+
88
+ cmd_status() {
89
+ if [[ $# -eq 1 ]]; then
90
+ cmd_status_one "$1"
91
+ return
92
+ fi
93
+ cmd_list
94
+ }
95
+
96
+ cmd_start() {
97
+ local id="$1"
98
+ validate_instance_id "$id"
99
+ require_root
100
+ load_instance_env "$id"
101
+
102
+ systemctl enable --now "$(vm_service "$id")"
103
+ wait_for_ssh "$VM_IP" "$SSH_KEY_PATH" 180 || die "VM started but SSH unreachable"
104
+
105
+ ssh_run "$VM_IP" "sudo systemctl enable --now openclaw-$id.service" || warn "Guest service start failed"
106
+ systemctl enable --now "$(proxy_service "$id")"
107
+
108
+ cmd_status_one "$id"
109
+ }
110
+
111
+ cmd_stop() {
112
+ local id="$1"
113
+ validate_instance_id "$id"
114
+ require_root
115
+ load_instance_env "$id"
116
+
117
+ systemctl stop "$(proxy_service "$id")" 2>/dev/null || true
118
+ if wait_for_ssh "$VM_IP" "$SSH_KEY_PATH" 1; then
119
+ ssh_run "$VM_IP" "sudo systemctl stop openclaw-$id.service" || true
120
+ fi
121
+ systemctl stop "$(vm_service "$id")"
122
+ cmd_status_one "$id"
123
+ }
124
+
125
+ cmd_restart() {
126
+ cmd_stop "$1"
127
+ cmd_start "$1"
128
+ }
129
+
130
+ cmd_logs() {
131
+ local id="$1"
132
+ local mode="${2:-guest}"
133
+ validate_instance_id "$id"
134
+ load_instance_env "$id"
135
+
136
+ if [[ "$mode" == "host" ]]; then
137
+ journalctl -u "$(vm_service "$id")" -u "$(proxy_service "$id")" -f
138
+ return
139
+ fi
140
+
141
+ wait_for_ssh "$VM_IP" "$SSH_KEY_PATH" 30 || die "VM SSH unavailable"
142
+ ssh_run "$VM_IP" "sudo journalctl -u openclaw-$id.service -f"
143
+ }
144
+
145
+ cmd_shell() {
146
+ local id="$1"
147
+ shift || true
148
+ validate_instance_id "$id"
149
+ load_instance_env "$id"
150
+ wait_for_ssh "$VM_IP" "$SSH_KEY_PATH" 30 || die "VM SSH unavailable"
151
+
152
+ if [[ $# -gt 0 ]]; then
153
+ ssh_run "$VM_IP" "$*"
154
+ else
155
+ ssh -i "$SSH_KEY_PATH" -o StrictHostKeyChecking=accept-new -o UserKnownHostsFile=/dev/null "ubuntu@$VM_IP"
156
+ fi
157
+ }
158
+
159
+ cmd_token() {
160
+ local id="$1"
161
+ validate_instance_id "$id"
162
+ cat "$(instance_token "$id")"
163
+ }
164
+
165
+ cmd_destroy() {
166
+ local id="$1"
167
+ local force="${2:-}"
168
+ validate_instance_id "$id"
169
+ require_root
170
+ load_instance_env "$id"
171
+
172
+ if [[ "$force" != "--force" ]]; then
173
+ read -r -p "Destroy '$id' and remove VM assets? [y/N] " confirm
174
+ [[ "$confirm" =~ ^[Yy]$ ]] || { echo "Cancelled"; return; }
175
+ fi
176
+
177
+ systemctl stop "$(proxy_service "$id")" 2>/dev/null || true
178
+ systemctl stop "$(vm_service "$id")" 2>/dev/null || true
179
+ systemctl disable "$(proxy_service "$id")" 2>/dev/null || true
180
+ systemctl disable "$(vm_service "$id")" 2>/dev/null || true
181
+
182
+ rm -f "/etc/systemd/system/$(proxy_service "$id")"
183
+ rm -f "/etc/systemd/system/$(vm_service "$id")"
184
+ systemctl daemon-reload
185
+
186
+ rm -rf "$(instance_dir "$id")" "$(fc_instance_dir "$id")"
187
+
188
+ echo "Destroyed: $id"
189
+ }
190
+
191
+ [[ $# -ge 1 ]] || { usage; exit 1; }
192
+
193
+ cmd="$1"
194
+ shift || true
195
+
196
+ case "$cmd" in
197
+ list) cmd_list ;;
198
+ status) cmd_status "$@" ;;
199
+ start) [[ $# -eq 1 ]] || die "Usage: vm-ctl start <id>"; cmd_start "$1" ;;
200
+ stop) [[ $# -eq 1 ]] || die "Usage: vm-ctl stop <id>"; cmd_stop "$1" ;;
201
+ restart) [[ $# -eq 1 ]] || die "Usage: vm-ctl restart <id>"; cmd_restart "$1" ;;
202
+ logs) [[ $# -ge 1 ]] || die "Usage: vm-ctl logs <id> [guest|host]"; cmd_logs "$@" ;;
203
+ shell) [[ $# -ge 1 ]] || die "Usage: vm-ctl shell <id> [command...]"; id="$1"; shift; cmd_shell "$id" "$@" ;;
204
+ token) [[ $# -eq 1 ]] || die "Usage: vm-ctl token <id>"; cmd_token "$1" ;;
205
+ destroy) [[ $# -ge 1 ]] || die "Usage: vm-ctl destroy <id> [--force]"; cmd_destroy "$@" ;;
206
+ -h|--help|help) usage ;;
207
+ *) die "Unknown command: $cmd" ;;
208
+ esac
package/bin/vm-setup ADDED
@@ -0,0 +1,369 @@
1
+ #!/usr/bin/env bash
2
+ set -euo pipefail
3
+
4
+ SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
5
+ source "$SCRIPT_DIR/vm-common.sh"
6
+ CMD_NAME="${VM_SETUP_CMD_NAME:-$(basename "$0")}"
7
+
8
+ usage() {
9
+ cat <<EOF
10
+ Usage: $CMD_NAME --instance <id> --telegram-token <token> [options]
11
+
12
+ Options:
13
+ --telegram-users <csv>
14
+ --model <id> (default: anthropic/claude-opus-4-6)
15
+ --skills <csv> (default: github,tmux,coding-agent,session-logs,skill-creator)
16
+ --openclaw-image <image>
17
+ --vm-vcpu <n> (default: 4)
18
+ --vm-mem-mib <n> (default: 8192)
19
+ --disk-size <size> (default: 40G)
20
+ --api-sock <path> (default: <fc-instance-dir>/firecracker.socket)
21
+ --base-kernel <path>
22
+ --base-rootfs <path>
23
+ --base-initrd <path>
24
+ --anthropic-api-key <key>
25
+ --openai-api-key <key>
26
+ --minimax-api-key <key>
27
+ --skip-browser-install
28
+ -h|--help
29
+ EOF
30
+ }
31
+
32
+ INSTANCE=""
33
+ TELEGRAM_TOKEN=""
34
+ TELEGRAM_USERS=""
35
+ MODEL="${MODEL:-anthropic/claude-opus-4-6}"
36
+ SKILLS="${SKILLS:-github,tmux,coding-agent,session-logs,skill-creator}"
37
+ OPENCLAW_IMAGE="$OPENCLAW_IMAGE_DEFAULT"
38
+ VM_VCPU="${VM_VCPU:-4}"
39
+ VM_MEM_MIB="${VM_MEM_MIB:-8192}"
40
+ DISK_SIZE="${DISK_SIZE:-40G}"
41
+ API_SOCK="${API_SOCK:-}"
42
+ BASE_IMAGES_DIR="${BASE_IMAGES_DIR:-/srv/firecracker/base/images}"
43
+ BASE_KERNEL="${BASE_KERNEL:-$BASE_IMAGES_DIR/vmlinux}"
44
+ BASE_ROOTFS="${BASE_ROOTFS:-$BASE_IMAGES_DIR/rootfs.ext4}"
45
+ BASE_INITRD="${BASE_INITRD:-$BASE_IMAGES_DIR/initrd.img}"
46
+ SKIP_BROWSER_INSTALL="false"
47
+
48
+ ANTHROPIC_API_KEY="${ANTHROPIC_API_KEY:-}"
49
+ OPENAI_API_KEY="${OPENAI_API_KEY:-}"
50
+ MINIMAX_API_KEY="${MINIMAX_API_KEY:-}"
51
+
52
+ while [[ $# -gt 0 ]]; do
53
+ case "$1" in
54
+ --instance) INSTANCE="$2"; shift 2 ;;
55
+ --telegram-token) TELEGRAM_TOKEN="$2"; shift 2 ;;
56
+ --telegram-users) TELEGRAM_USERS="$2"; shift 2 ;;
57
+ --model) MODEL="$2"; shift 2 ;;
58
+ --skills) SKILLS="$2"; shift 2 ;;
59
+ --openclaw-image) OPENCLAW_IMAGE="$2"; shift 2 ;;
60
+ --vm-vcpu) VM_VCPU="$2"; shift 2 ;;
61
+ --vm-mem-mib) VM_MEM_MIB="$2"; shift 2 ;;
62
+ --disk-size) DISK_SIZE="$2"; shift 2 ;;
63
+ --api-sock) API_SOCK="$2"; shift 2 ;;
64
+ --base-kernel) BASE_KERNEL="$2"; shift 2 ;;
65
+ --base-rootfs) BASE_ROOTFS="$2"; shift 2 ;;
66
+ --base-initrd) BASE_INITRD="$2"; shift 2 ;;
67
+ --anthropic-api-key) ANTHROPIC_API_KEY="$2"; shift 2 ;;
68
+ --openai-api-key) OPENAI_API_KEY="$2"; shift 2 ;;
69
+ --minimax-api-key) MINIMAX_API_KEY="$2"; shift 2 ;;
70
+ --skip-browser-install) SKIP_BROWSER_INSTALL="true"; shift ;;
71
+ -h|--help) usage; exit 0 ;;
72
+ *) die "Unknown option: $1" ;;
73
+ esac
74
+ done
75
+
76
+ [[ -n "$INSTANCE" ]] || die "Missing --instance"
77
+ [[ -n "$TELEGRAM_TOKEN" ]] || die "Missing --telegram-token"
78
+ validate_instance_id "$INSTANCE"
79
+
80
+ require_root
81
+ for c in firecracker systemctl ip iptables openssl jq cloud-localds ssh scp socat curl qemu-img; do
82
+ require_cmd "$c"
83
+ done
84
+
85
+ [[ -f "$BASE_KERNEL" ]] || die "Kernel not found: $BASE_KERNEL"
86
+ [[ -f "$BASE_ROOTFS" ]] || die "Rootfs not found: $BASE_ROOTFS"
87
+
88
+ ensure_root_dirs
89
+
90
+ if [[ ! -f "$SSH_KEY_PATH" ]]; then
91
+ log "Generating SSH key: $SSH_KEY_PATH"
92
+ mkdir -p "$(dirname "$SSH_KEY_PATH")"
93
+ ssh-keygen -t ed25519 -N "" -f "$SSH_KEY_PATH" >/dev/null
94
+ fi
95
+
96
+ inst_dir="$(instance_dir "$INSTANCE")"
97
+ fc_dir="$(fc_instance_dir "$INSTANCE")"
98
+ if [[ -z "$API_SOCK" ]]; then
99
+ API_SOCK="$fc_dir/firecracker.socket"
100
+ fi
101
+ [[ "$API_SOCK" = /* ]] || die "--api-sock must be an absolute path"
102
+ [[ ! -e "$inst_dir" ]] || die "Instance already exists: $inst_dir"
103
+
104
+ HOST_PORT="$(next_port)"
105
+ VM_IP="$(next_ip)"
106
+ VM_OCTET="${VM_IP##*.}"
107
+ SHORT_ID="$(echo "$INSTANCE" | tr -cd 'a-z0-9' | cut -c1-6)"
108
+ [[ -n "$SHORT_ID" ]] || SHORT_ID="vm"
109
+ VM_TAP="t${SHORT_ID}${VM_OCTET}"
110
+ VM_MAC="$(printf '06:fc:00:10:00:%02x' "$VM_OCTET")"
111
+ GATEWAY_TOKEN="$(openssl rand -hex 32)"
112
+
113
+ mkdir -p "$inst_dir/config" "$inst_dir/workspace"
114
+ mkdir -p "$fc_dir/images" "$fc_dir/config" "$fc_dir/logs"
115
+ mkdir -p "$(dirname "$API_SOCK")"
116
+
117
+ cat > "$inst_dir/.env" <<EOF
118
+ INSTANCE_ID=$INSTANCE
119
+ HOST_PORT=$HOST_PORT
120
+ VM_IP=$VM_IP
121
+ VM_TAP=$VM_TAP
122
+ VM_MAC=$VM_MAC
123
+ GATEWAY_TOKEN=$GATEWAY_TOKEN
124
+ MODEL=$MODEL
125
+ SKILLS=$SKILLS
126
+ TELEGRAM_USERS=$TELEGRAM_USERS
127
+ OPENCLAW_IMAGE=$OPENCLAW_IMAGE
128
+ VM_VCPU=$VM_VCPU
129
+ VM_MEM_MIB=$VM_MEM_MIB
130
+ DISK_SIZE=$DISK_SIZE
131
+ API_SOCK=$API_SOCK
132
+ SSH_KEY_PATH=$SSH_KEY_PATH
133
+ SKIP_BROWSER_INSTALL=$SKIP_BROWSER_INSTALL
134
+ ANTHROPIC_API_KEY=$ANTHROPIC_API_KEY
135
+ OPENAI_API_KEY=$OPENAI_API_KEY
136
+ MINIMAX_API_KEY=$MINIMAX_API_KEY
137
+ EOF
138
+
139
+ echo "$GATEWAY_TOKEN" > "$inst_dir/.token"
140
+ chmod 600 "$inst_dir/.token"
141
+
142
+ log "Preparing VM disk and assets"
143
+ cp --reflink=auto "$BASE_ROOTFS" "$fc_dir/images/rootfs.ext4" 2>/dev/null || cp "$BASE_ROOTFS" "$fc_dir/images/rootfs.ext4"
144
+ if [[ -n "$DISK_SIZE" ]]; then
145
+ log "Resizing rootfs to $DISK_SIZE"
146
+ qemu-img resize "$fc_dir/images/rootfs.ext4" "$DISK_SIZE" >/dev/null
147
+ fi
148
+ cp "$BASE_KERNEL" "$fc_dir/images/vmlinux"
149
+
150
+ INITRD_PATH=""
151
+ if [[ -f "$BASE_INITRD" ]]; then
152
+ cp "$BASE_INITRD" "$fc_dir/images/initrd.img"
153
+ INITRD_PATH="$fc_dir/images/initrd.img"
154
+ fi
155
+
156
+ SSH_PUB_KEY="$(cat "${SSH_KEY_PATH}.pub")"
157
+
158
+ cat > "$fc_dir/config/user-data" <<EOF
159
+ #cloud-config
160
+ users:
161
+ - default
162
+ - name: ubuntu
163
+ groups: [sudo, docker]
164
+ shell: /bin/bash
165
+ sudo: ALL=(ALL) NOPASSWD:ALL
166
+ ssh_authorized_keys:
167
+ - $SSH_PUB_KEY
168
+ package_update: true
169
+ packages:
170
+ - docker.io
171
+ - jq
172
+ - curl
173
+ runcmd:
174
+ - [ systemctl, enable, docker ]
175
+ - [ systemctl, start, docker ]
176
+ EOF
177
+
178
+ cat > "$fc_dir/config/meta-data" <<EOF
179
+ instance-id: $INSTANCE
180
+ local-hostname: $INSTANCE
181
+ EOF
182
+
183
+ cat > "$fc_dir/config/network-config" <<EOF
184
+ version: 2
185
+ ethernets:
186
+ eth0:
187
+ match:
188
+ macaddress: "$VM_MAC"
189
+ set-name: eth0
190
+ dhcp4: false
191
+ addresses:
192
+ - $VM_IP/24
193
+ routes:
194
+ - to: 0.0.0.0/0
195
+ via: 172.16.0.1
196
+ nameservers:
197
+ addresses: [1.1.1.1,8.8.8.8]
198
+ EOF
199
+
200
+ cloud-localds --network-config="$fc_dir/config/network-config" "$fc_dir/images/seed.img" "$fc_dir/config/user-data" "$fc_dir/config/meta-data"
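+ # seed.img now carries the cloud-init user/meta/network data; next, render the Firecracker VM config JSON.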
201
+
202
+ jq -n \
203
+ --arg kernel "$fc_dir/images/vmlinux" \
204
+ --arg initrd "$INITRD_PATH" \
205
+ --arg rootfs "$fc_dir/images/rootfs.ext4" \
206
+ --arg seed "$fc_dir/images/seed.img" \
207
+ --arg tap "$VM_TAP" \
208
+ --arg mac "$VM_MAC" \
209
+ --arg log "$fc_dir/logs/firecracker.log" \
210
+ --argjson vcpu "$VM_VCPU" \
211
+ --argjson mem "$VM_MEM_MIB" '
212
+ {
213
+ "boot-source": (
214
+ { "kernel_image_path": $kernel, "boot_args": "console=ttyS0 reboot=k panic=1 pci=off" } +
215
+ (if $initrd != "" then { "initrd_path": $initrd } else {} end)
216
+ ),
217
+ "drives": [
218
+ { "drive_id": "rootfs", "path_on_host": $rootfs, "is_root_device": true, "is_read_only": false },
219
+ { "drive_id": "seed", "path_on_host": $seed, "is_root_device": false, "is_read_only": true }
220
+ ],
221
+ "network-interfaces": [
222
+ { "iface_id": "eth0", "host_dev_name": $tap, "guest_mac": $mac }
223
+ ],
224
+ "machine-config": { "vcpu_count": $vcpu, "mem_size_mib": $mem, "smt": false },
225
+ "logger": { "log_path": $log, "level": "Info", "show_level": true, "show_log_origin": false }
226
+ }
227
+ ' > "$fc_dir/config/vm-config.json"
228
+
229
+ cat > "$fc_dir/start-vm.sh" <<EOF
230
+ #!/usr/bin/env bash
231
+ set -euo pipefail
232
+
233
+ BRIDGE_NAME="$BRIDGE_NAME"
234
+ BRIDGE_ADDR="$BRIDGE_ADDR"
235
+ SUBNET_CIDR="$SUBNET_CIDR"
236
+ VM_TAP="$VM_TAP"
237
+ API_SOCK="$API_SOCK"
238
+ CONFIG_JSON="$fc_dir/config/vm-config.json"
239
+
240
+ if ! ip link show "\$BRIDGE_NAME" >/dev/null 2>&1; then
241
+ ip link add "\$BRIDGE_NAME" type bridge
242
+ fi
243
+ ip addr add "\$BRIDGE_ADDR" dev "\$BRIDGE_NAME" 2>/dev/null || true
244
+ ip link set "\$BRIDGE_NAME" up
245
+ sysctl -w net.ipv4.ip_forward=1 >/dev/null
246
+ iptables -t nat -C POSTROUTING -s "\$SUBNET_CIDR" ! -o "\$BRIDGE_NAME" -j MASQUERADE 2>/dev/null || iptables -t nat -A POSTROUTING -s "\$SUBNET_CIDR" ! -o "\$BRIDGE_NAME" -j MASQUERADE
247
+
248
+ ip tuntap add dev "\$VM_TAP" mode tap user root 2>/dev/null || true
249
+ ip link set "\$VM_TAP" master "\$BRIDGE_NAME"
250
+ ip link set "\$VM_TAP" up
251
+
252
+ rm -f "\$API_SOCK"
253
+ exec /usr/local/bin/firecracker --api-sock "\$API_SOCK" --config-file "\$CONFIG_JSON"
254
+ EOF
255
+ chmod +x "$fc_dir/start-vm.sh"
256
+
257
+ cat > "$fc_dir/stop-vm.sh" <<EOF
258
+ #!/usr/bin/env bash
259
+ set -euo pipefail
260
+ VM_TAP="$VM_TAP"
261
+ ip link set "\$VM_TAP" down 2>/dev/null || true
262
+ ip link del "\$VM_TAP" 2>/dev/null || true
263
+ EOF
264
+ chmod +x "$fc_dir/stop-vm.sh"
265
+
266
+ VM_SVC="$(vm_service "$INSTANCE")"
267
+ PROXY_SVC="$(proxy_service "$INSTANCE")"
268
+
269
+ cat > "/etc/systemd/system/$VM_SVC" <<EOF
270
+ [Unit]
271
+ Description=Firecracker VM ($INSTANCE)
272
+ After=network-online.target
273
+ Wants=network-online.target
274
+
275
+ [Service]
276
+ Type=simple
277
+ ExecStart=$fc_dir/start-vm.sh
278
+ ExecStop=$fc_dir/stop-vm.sh
279
+ Restart=on-failure
280
+ RestartSec=2
281
+
282
+ [Install]
283
+ WantedBy=multi-user.target
284
+ EOF
285
+
286
+ cat > "/etc/systemd/system/$PROXY_SVC" <<EOF
287
+ [Unit]
288
+ Description=Localhost proxy for VM ($INSTANCE)
289
+ After=$VM_SVC
290
+ Requires=$VM_SVC
291
+
292
+ [Service]
293
+ Type=simple
294
+ ExecStart=/usr/bin/socat TCP-LISTEN:$HOST_PORT,bind=127.0.0.1,reuseaddr,fork TCP:$VM_IP:18789
295
+ Restart=always
296
+ RestartSec=2
297
+
298
+ [Install]
299
+ WantedBy=multi-user.target
300
+ EOF
301
+
302
+ systemctl daemon-reload
303
+ systemctl enable --now "$VM_SVC"
304
+
305
+ log "Waiting for SSH on $VM_IP"
306
+ if ! wait_for_ssh "$VM_IP" "$SSH_KEY_PATH" 180; then
307
+ die "VM booted but SSH was not reachable"
308
+ fi
309
+
310
+ PROVISION_VARS="$inst_dir/provision.vars"
311
+ : > "$PROVISION_VARS"
312
+ kv() { printf "%s=%q\n" "$1" "$2" >> "$PROVISION_VARS"; }
313
+
314
+ kv INSTANCE_ID "$INSTANCE"
315
+ kv TELEGRAM_TOKEN "$TELEGRAM_TOKEN"
316
+ kv TELEGRAM_USERS "$TELEGRAM_USERS"
317
+ kv MODEL "$MODEL"
318
+ kv SKILLS "$SKILLS"
319
+ kv GATEWAY_TOKEN "$GATEWAY_TOKEN"
320
+ kv OPENCLAW_IMAGE "$OPENCLAW_IMAGE"
321
+ kv SKIP_BROWSER_INSTALL "$SKIP_BROWSER_INSTALL"
322
+ kv DISK_SIZE "$DISK_SIZE"
323
+ kv ANTHROPIC_API_KEY "$ANTHROPIC_API_KEY"
324
+ kv OPENAI_API_KEY "$OPENAI_API_KEY"
325
+ kv MINIMAX_API_KEY "$MINIMAX_API_KEY"
326
+
327
+ [[ -f "$REPO_ROOT/scripts/provision-guest.sh" ]] || die "Missing: $REPO_ROOT/scripts/provision-guest.sh"
328
+
329
+ scp -i "$SSH_KEY_PATH" -o StrictHostKeyChecking=accept-new -o UserKnownHostsFile=/dev/null \
330
+ "$REPO_ROOT/scripts/provision-guest.sh" "ubuntu@$VM_IP:/tmp/provision-guest.sh"
331
+ scp -i "$SSH_KEY_PATH" -o StrictHostKeyChecking=accept-new -o UserKnownHostsFile=/dev/null \
332
+ "$PROVISION_VARS" "ubuntu@$VM_IP:/tmp/provision.vars"
333
+
334
+ ssh -i "$SSH_KEY_PATH" -o StrictHostKeyChecking=accept-new -o UserKnownHostsFile=/dev/null \
335
+ "ubuntu@$VM_IP" "sudo bash /tmp/provision-guest.sh /tmp/provision.vars"
336
+
337
+ systemctl enable --now "$PROXY_SVC"
338
+
339
+ host_health_ok="false"
340
+ guest_health_ok="false"
341
+ GUEST_HEALTH_SCRIPT="$(guest_health_script "$INSTANCE")"
342
+ for _ in {1..30}; do
343
+ if [[ "$guest_health_ok" != "true" ]] && check_guest_health "$INSTANCE" "$VM_IP" "$SSH_KEY_PATH"; then
344
+ guest_health_ok="true"
345
+ fi
346
+ if [[ "$host_health_ok" != "true" ]] && curl -fsS "http://127.0.0.1:$HOST_PORT/health" >/dev/null 2>&1; then
347
+ host_health_ok="true"
348
+ fi
349
+ if [[ "$guest_health_ok" == "true" && "$host_health_ok" == "true" ]]; then
350
+ break
351
+ fi
352
+ sleep 2
353
+ done
354
+
355
+ echo "✓ VM instance configured"
356
+ echo " Instance: $INSTANCE"
357
+ echo " VM IP: $VM_IP"
358
+ echo " Port: $HOST_PORT"
359
+ echo " Token: $GATEWAY_TOKEN"
360
+ if [[ "$guest_health_ok" == "true" && "$host_health_ok" == "true" ]]; then
361
+ echo " Health: up (guest + proxy)"
362
+ elif [[ "$guest_health_ok" == "true" ]]; then
363
+ echo " Health: guest up, proxy pending"
364
+ elif [[ "$host_health_ok" == "true" ]]; then
365
+ echo " Health: proxy up (guest check pending)"
366
+ else
367
+ echo " Health: pending (guest script: $GUEST_HEALTH_SCRIPT)"
368
+ fi
369
+ echo " Status: $(systemctl is-active "$VM_SVC") / $(systemctl is-active "$PROXY_SVC")"
package/package.json ADDED
@@ -0,0 +1,42 @@
1
+ {
2
+ "name": "fireclaw",
3
+ "version": "0.1.0",
4
+ "description": "Firecracker microVM control plane for isolated OpenClaw instances",
5
+ "bin": {
6
+ "fireclaw": "bin/fireclaw"
7
+ },
8
+ "files": [
9
+ "bin",
10
+ "scripts/provision-guest.sh",
11
+ "README.md"
12
+ ],
13
+ "scripts": {
14
+ "lint:shell": "bash -n bin/fireclaw && bash -n bin/vm-common.sh && bash -n bin/vm-setup && bash -n bin/vm-ctl && bash -n scripts/provision-guest.sh",
15
+ "test": "npm run lint:shell",
16
+ "pack:check": "npm pack --dry-run",
17
+ "prepublishOnly": "npm test && npm run pack:check"
18
+ },
19
+ "keywords": [
20
+ "firecracker",
21
+ "openclaw",
22
+ "vm",
23
+ "cli",
24
+ "systemd"
25
+ ],
26
+ "homepage": "https://github.com/bchewy/fireclaw#readme",
27
+ "repository": {
28
+ "type": "git",
29
+ "url": "git+https://github.com/bchewy/fireclaw.git"
30
+ },
31
+ "bugs": {
32
+ "url": "https://github.com/bchewy/fireclaw/issues"
33
+ },
34
+ "license": "UNLICENSED",
35
+ "author": "bchewy",
36
+ "engines": {
37
+ "node": ">=18"
38
+ },
39
+ "os": [
40
+ "linux"
41
+ ]
42
+ }
package/scripts/provision-guest.sh ADDED
@@ -0,0 +1,334 @@
1
+ #!/usr/bin/env bash
2
+ set -euo pipefail
3
+
4
+ [[ $# -eq 1 ]] || { echo "Usage: provision-guest.sh <vars-file>" >&2; exit 1; }
5
+ VARS_FILE="$1"
6
+ [[ -f "$VARS_FILE" ]] || { echo "Vars file missing: $VARS_FILE" >&2; exit 1; }
7
+
8
+ set -a
9
+ source "$VARS_FILE"
10
+ set +a
11
+
12
+ require() { [[ -n "${!1:-}" ]] || { echo "Missing required var: $1" >&2; exit 1; }; }
13
+
14
+ require INSTANCE_ID
15
+ require TELEGRAM_TOKEN
16
+ require MODEL
17
+ require SKILLS
18
+ require GATEWAY_TOKEN
19
+ require OPENCLAW_IMAGE
20
+
21
+ [[ "$INSTANCE_ID" =~ ^[a-z0-9_-]+$ ]] || {
22
+ echo "INSTANCE_ID must match [a-z0-9_-]+" >&2
23
+ exit 1
24
+ }
25
+
26
+ CONFIG_ROOT="/home/ubuntu/.openclaw-${INSTANCE_ID}"
27
+ CONFIG_DIR="$CONFIG_ROOT/config"
28
+ WORKSPACE_DIR="/home/ubuntu/openclaw-${INSTANCE_ID}/workspace"
29
+ TOOLS_DIR="/home/ubuntu/openclaw-${INSTANCE_ID}/tools"
30
+ ENV_FILE="$CONFIG_ROOT/openclaw.env"
31
+ GUEST_SERVICE="openclaw-${INSTANCE_ID}.service"
32
+ HEALTH_SCRIPT="/usr/local/bin/openclaw-health-${INSTANCE_ID}.sh"
33
+
34
+ log() { printf '==> %s\n' "$*"; }
35
+ warn() { printf 'Warning: %s\n' "$*" >&2; }
36
+
37
+ wait_for_cloud_init() {
38
+ if command -v cloud-init >/dev/null 2>&1; then
39
+ log "Waiting for cloud-init to finish"
40
+ cloud-init status --wait >/dev/null 2>&1 || warn "cloud-init wait failed; continuing"
41
+ fi
42
+ }
43
+
44
+ wait_for_apt_locks() {
45
+ local timeout="${1:-300}"
46
+ local waited=0
47
+ local lock_files=(
48
+ /var/lib/dpkg/lock-frontend
49
+ /var/lib/dpkg/lock
50
+ /var/lib/apt/lists/lock
51
+ /var/cache/apt/archives/lock
52
+ )
53
+
54
+ while true; do
55
+ local locked=0
56
+ if command -v fuser >/dev/null 2>&1; then
57
+ local lock
58
+ for lock in "${lock_files[@]}"; do
59
+ if [[ -e "$lock" ]] && fuser "$lock" >/dev/null 2>&1; then
60
+ locked=1
61
+ break
62
+ fi
63
+ done
64
+ else
65
+ pgrep -f 'apt|dpkg' >/dev/null 2>&1 && locked=1
66
+ fi
67
+
68
+ if [[ "$locked" -eq 0 ]]; then
69
+ return 0
70
+ fi
71
+
72
+ if (( waited == 0 )); then
73
+ log "Waiting for apt/dpkg locks to clear"
74
+ fi
75
+ sleep 2
76
+ waited=$((waited + 2))
77
+ (( waited < timeout )) || { echo "Timed out waiting for apt locks" >&2; return 1; }
78
+ done
79
+ }
80
+
81
+ apt_get_retry() {
82
+ local attempt
83
+ for attempt in {1..5}; do
84
+ wait_for_apt_locks
85
+ if apt-get "$@"; then
86
+ return 0
87
+ fi
88
+ if (( attempt == 5 )); then
89
+ echo "apt-get $* failed after $attempt attempts" >&2
90
+ return 1
91
+ fi
92
+ warn "apt-get $* failed (attempt $attempt/5), retrying"
93
+ sleep 3
94
+ done
95
+ }
96
+
97
+ maybe_resize_rootfs() {
98
+ command -v resize2fs >/dev/null 2>&1 || { warn "resize2fs not found; skipping rootfs expansion"; return 0; }
99
+
100
+ local root_device fs_device fs_type
101
+ root_device="$(findmnt -n -o SOURCE / || true)"
102
+ if [[ "$root_device" == "/dev/root" ]]; then
103
+ root_device="$(readlink -f /dev/root || true)"
104
+ fi
105
+
106
+ fs_device=""
107
+ for candidate in "$root_device" /dev/vda; do
108
+ [[ -n "$candidate" && -b "$candidate" ]] || continue
109
+ fs_type="$(blkid -s TYPE -o value "$candidate" 2>/dev/null || true)"
110
+ if [[ "$fs_type" == "ext4" ]]; then
111
+ fs_device="$candidate"
112
+ break
113
+ fi
114
+ done
115
+
116
+ if [[ -z "$fs_device" ]]; then
117
+ warn "No ext4 root block device found for resize2fs; skipping"
118
+ return 0
119
+ fi
120
+
121
+ log "Expanding ext4 filesystem on $fs_device"
122
+ resize2fs "$fs_device" || warn "resize2fs failed on $fs_device; continuing"
123
+ }
124
+
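+ # Docker inside the microVM must not manage iptables or create its own bridge:
+ # the OpenClaw container runs with --network host on the VM's single NIC.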
125
+ ensure_docker_daemon_config() {
126
+ local docker_cfg="/etc/docker/daemon.json"
127
+ local tmp_cfg
128
+ local restart_needed="false"
129
+ tmp_cfg="$(mktemp)"
130
+
131
+ cat > "$tmp_cfg" <<'EOF'
132
+ {
133
+ "iptables": false,
134
+ "ip6tables": false,
135
+ "bridge": "none"
136
+ }
137
+ EOF
138
+
139
+ mkdir -p /etc/docker
140
+ if [[ ! -f "$docker_cfg" ]] || ! cmp -s "$tmp_cfg" "$docker_cfg"; then
141
+ install -m 0644 "$tmp_cfg" "$docker_cfg"
142
+ restart_needed="true"
143
+ fi
144
+ rm -f "$tmp_cfg"
145
+
146
+ systemctl enable docker >/dev/null 2>&1 || true
147
+ if [[ "$restart_needed" == "true" ]]; then
148
+ systemctl restart docker
149
+ elif ! systemctl is-active --quiet docker; then
150
+ systemctl restart docker
151
+ fi
152
+ }
153
+
154
+ export DEBIAN_FRONTEND=noninteractive
155
+
156
+ wait_for_cloud_init
157
+ wait_for_apt_locks
158
+ maybe_resize_rootfs
159
+
160
+ if ! command -v docker >/dev/null 2>&1; then
161
+ apt_get_retry update
162
+ apt_get_retry install -y docker.io jq curl ca-certificates
163
+ fi
164
+ if ! command -v jq >/dev/null 2>&1 || ! command -v curl >/dev/null 2>&1; then
165
+ apt_get_retry update
166
+ apt_get_retry install -y jq curl ca-certificates
167
+ fi
168
+ ensure_docker_daemon_config
169
+
170
+ mkdir -p "$CONFIG_DIR" "$WORKSPACE_DIR" "$TOOLS_DIR"
171
+ chown -R ubuntu:ubuntu "$CONFIG_ROOT" "/home/ubuntu/openclaw-${INSTANCE_ID}"
172
+
173
+ cat > "$ENV_FILE" <<EOF
174
+ NODE_ENV=production
175
+ BROWSER=echo
176
+ CLAWDBOT_PREFER_PNPM=1
177
+ OPENCLAW_PREFER_PNPM=1
178
+ PLAYWRIGHT_BROWSERS_PATH=/home/node/clawd/tools/.playwright
179
+ OPENCLAW_GATEWAY_TOKEN=$GATEWAY_TOKEN
180
+ ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY:-}
181
+ OPENAI_API_KEY=${OPENAI_API_KEY:-}
182
+ MINIMAX_API_KEY=${MINIMAX_API_KEY:-}
183
+ EOF
184
+ chmod 600 "$ENV_FILE"
185
+
186
+ docker pull "$OPENCLAW_IMAGE"
187
+
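+ # Run the OpenClaw CLI in a throwaway container; commands are fed on stdin via the heredocs below.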
188
+ run_openclaw_cli() {
189
+ docker run --rm -i \
190
+ --network host \
191
+ -e HOME=/home/node \
192
+ -e OPENCLAW_GATEWAY_TOKEN="$GATEWAY_TOKEN" \
193
+ -e OPENCLAW_TELEGRAM_TOKEN="$TELEGRAM_TOKEN" \
194
+ -e OPENCLAW_MODEL="$MODEL" \
195
+ -e OPENCLAW_TELEGRAM_ALLOW_FROM_JSON="$telegram_allow_from_json" \
196
+ -e OPENCLAW_SKILLS_JSON="$skills_json" \
197
+ -e OPENCLAW_BROWSER_PATH="${OPENCLAW_BROWSER_PATH:-}" \
198
+ -e OPENCLAW_MINIMAX_PROVIDER_JSON="${OPENCLAW_MINIMAX_PROVIDER_JSON:-}" \
199
+ -e ANTHROPIC_API_KEY="${ANTHROPIC_API_KEY:-}" \
200
+ -e OPENAI_API_KEY="${OPENAI_API_KEY:-}" \
201
+ -e MINIMAX_API_KEY="${MINIMAX_API_KEY:-}" \
202
+ -v "$CONFIG_DIR:/home/node/.openclaw" \
203
+ -v "$WORKSPACE_DIR:/home/node/.openclaw/workspace" \
204
+ -v "$TOOLS_DIR:/home/node/clawd/tools" \
205
+ --entrypoint /bin/bash \
206
+ "$OPENCLAW_IMAGE" -se
207
+ }
208
+
209
+ skills_json="[]"
210
+ IFS=',' read -r -a skills_arr <<< "${SKILLS:-}"
211
+ if (( ${#skills_arr[@]} > 0 )); then
212
+ skills_json="$(printf '%s\n' "${skills_arr[@]}" | sed '/^$/d' | jq -R . | jq -s .)"
213
+ fi
214
+
215
+ telegram_allow_from_json="[]"
216
+ IFS=',' read -r -a users_arr <<< "${TELEGRAM_USERS:-}"
217
+ if (( ${#users_arr[@]} > 0 )); then
218
+ telegram_allow_from_json="$(printf '%s\n' "${users_arr[@]}" | sed '/^$/d' | jq -R . | jq -s .)"
219
+ fi
220
+
221
+ run_openclaw_cli <<'EOF'
222
+ set -euo pipefail
223
+ OPENCLAW='node /app/openclaw.mjs'
224
+ $OPENCLAW config set gateway.mode local
225
+ $OPENCLAW config set gateway.bind lan
226
+ $OPENCLAW config set gateway.auth.mode token
227
+ $OPENCLAW config set gateway.auth.token "$OPENCLAW_GATEWAY_TOKEN"
228
+ $OPENCLAW config set gateway.tailscale.mode off
229
+ $OPENCLAW config set agents.defaults.model.primary "$OPENCLAW_MODEL"
230
+ $OPENCLAW config set agents.defaults.skipBootstrap false --json
231
+ $OPENCLAW config set channels.telegram.enabled true --json
232
+ $OPENCLAW config set channels.telegram.botToken "$OPENCLAW_TELEGRAM_TOKEN"
233
+ $OPENCLAW config set channels.telegram.dmPolicy allowlist
234
+ $OPENCLAW config set channels.telegram.groupPolicy disabled
235
+ $OPENCLAW config set channels.telegram.allowFrom "$OPENCLAW_TELEGRAM_ALLOW_FROM_JSON" --json
236
+ $OPENCLAW config set plugins.entries.telegram.enabled true --json
237
+ $OPENCLAW config set skills.allowBundled "$OPENCLAW_SKILLS_JSON" --json
238
+ EOF
239
+
240
+ if [[ "$MODEL" == minimax/* ]]; then
241
+ OPENCLAW_MINIMAX_PROVIDER_JSON="$(jq -cn --arg api_key "${MINIMAX_API_KEY:-}" '
242
+ {
243
+ baseUrl: "https://api.minimax.io/anthropic",
244
+ apiKey: $api_key,
245
+ api: "anthropic-messages",
246
+ models: [
247
+ {
248
+ id: "MiniMax-M2.1",
249
+ name: "MiniMax M2.1",
250
+ reasoning: false,
251
+ input: ["text"],
252
+ contextWindow: 200000,
253
+ maxTokens: 8192
254
+ }
255
+ ]
256
+ }
257
+ ')"
258
+ run_openclaw_cli <<'EOF'
259
+ OPENCLAW='node /app/openclaw.mjs'
260
+ $OPENCLAW config set models.mode merge
261
+ $OPENCLAW config set models.providers.minimax "$OPENCLAW_MINIMAX_PROVIDER_JSON" --json
262
+ EOF
263
+ fi
264
+
265
+ if [[ "${SKIP_BROWSER_INSTALL:-false}" != "true" ]]; then
266
+ docker run --rm \
267
+ --network host \
268
+ -e PLAYWRIGHT_BROWSERS_PATH=/home/node/clawd/tools/.playwright \
269
+ -v "$TOOLS_DIR:/home/node/clawd/tools" \
270
+ --entrypoint /bin/bash \
271
+ "$OPENCLAW_IMAGE" -lc "
272
+ set -euo pipefail
273
+ npx --yes playwright@latest install chromium
274
+ "
275
+
276
+ chrome_host_path="$(
277
+ find "$TOOLS_DIR/.playwright" -type f \
278
+ \( -path '*/chromium_headless_shell-*/chrome-headless-shell' -o -name 'chrome-headless-shell' -o -name 'headless_shell' \) \
279
+ | sort | head -n 1 || true
280
+ )"
281
+ if [[ -n "$chrome_host_path" ]]; then
282
+ chrome_container_path="${chrome_host_path/#$TOOLS_DIR/\/home\/node\/clawd\/tools}"
283
+ OPENCLAW_BROWSER_PATH="$chrome_container_path"
284
+ run_openclaw_cli <<'EOF'
285
+ OPENCLAW='node /app/openclaw.mjs'
286
+ $OPENCLAW config set browser.executablePath "$OPENCLAW_BROWSER_PATH"
287
+ EOF
288
+ else
289
+ warn "Playwright Chromium installed but executable path was not found in $TOOLS_DIR/.playwright"
290
+ fi
291
+ fi
292
+
293
+ run_openclaw_cli <<'EOF'
294
+ set -euo pipefail
295
+ OPENCLAW='node /app/openclaw.mjs'
296
+ $OPENCLAW doctor --fix || true
297
+ EOF
298
+
299
+ cat > "$HEALTH_SCRIPT" <<EOF
300
+ #!/usr/bin/env bash
301
+ set -euo pipefail
302
+
303
+ CONTAINER_NAME="openclaw-${INSTANCE_ID}"
304
+
305
+ if ! /usr/bin/docker inspect -f '{{.State.Running}}' "\$CONTAINER_NAME" 2>/dev/null | grep -qx true; then
306
+ exit 1
307
+ fi
308
+
309
+ curl -fsS http://127.0.0.1:18789/health >/dev/null
310
+ EOF
311
+ chmod 755 "$HEALTH_SCRIPT"
312
+
313
+ cat > "/etc/systemd/system/$GUEST_SERVICE" <<EOF
314
+ [Unit]
315
+ Description=OpenClaw ($INSTANCE_ID)
316
+ After=docker.service network-online.target
317
+ Requires=docker.service
318
+
319
+ [Service]
320
+ Type=simple
321
+ ExecStartPre=-/usr/bin/docker rm -f openclaw-$INSTANCE_ID
322
+ ExecStart=/usr/bin/docker run --rm --name openclaw-$INSTANCE_ID --init --network host --env-file $ENV_FILE -v $CONFIG_DIR:/home/node/.openclaw -v $WORKSPACE_DIR:/home/node/.openclaw/workspace -v $TOOLS_DIR:/home/node/clawd/tools $OPENCLAW_IMAGE node dist/index.js gateway --bind lan --port 18789
323
+ ExecStop=/usr/bin/docker stop openclaw-$INSTANCE_ID
324
+ Restart=always
325
+ RestartSec=5
326
+
327
+ [Install]
328
+ WantedBy=multi-user.target
329
+ EOF
330
+
331
+ systemctl daemon-reload
332
+ systemctl enable --now "$GUEST_SERVICE"
333
+
334
+ echo "Guest provisioning complete for $INSTANCE_ID"