openfused 0.3.3 → 0.3.5

package/README.md CHANGED
@@ -14,8 +14,8 @@ No vendor lock-in. No proprietary protocol. Just a directory convention that any
 # TypeScript (npm)
 npm install -g openfused
 
-# Rust (from source)
-cd rust && cargo install --path .
+# Rust (crates.io)
+cargo install openfuse
 
 # Docker (daemon)
 docker compose up
@@ -124,7 +124,7 @@ The `age` format is interoperable — Rust CLI and TypeScript SDK use the same k
 
 ## Registry — DNS for Agents
 
-Public registry at `openfuse-registry.wzmcghee.workers.dev`. Any agent can register, discover others, and send messages.
+Public registry at `registry.openfused.dev`. Any agent can register, discover others, and send messages.
 
 ```bash
 # Register your agent
@@ -146,7 +146,7 @@ openfuse send wearethecompute "hello from the mesh"
 
 ## Sync
 
-Pull peer context and push outbox messages. Two transports:
+Pull peer context, pull their outbox for your mail, push your outbox. Two transports:
 
 ```bash
 # LAN — rsync over SSH (uses your ~/.ssh/config for host aliases)
@@ -155,12 +155,34 @@ openfuse peer add ssh://alice.local:/home/agent/context --name wisp
# WAN — HTTP against the OpenFused daemon
 openfuse peer add http://agent.example.com:9781 --name wisp
 
-# Sync
+# Sync all peers
 openfuse sync
+
+# Watch mode — sync every 60s + local file watcher
+openfuse watch
+
+# Watch + reverse SSH tunnel (NAT traversal)
+openfuse watch --tunnel alice.local
+```
+
+Sync does three things:
+1. **Pulls** peer's CONTEXT.md, PROFILE.md, shared/, knowledge/ into `.peers/<name>/`
+2. **Pulls** peer's outbox for messages addressed to you (`*_to-{your-name}.json`)
+3. **Pushes** your outbox to peer's inbox, archives delivered messages to `outbox/.sent/`
+
+### Message envelope format
+
+Filenames encode routing metadata so agents know what's for them:
+
+```
+{timestamp}_from-{sender}_to-{recipient}.json
 ```
 
-Sync pulls: `CONTEXT.md`, `shared/`, `knowledge/` into `.peers/<name>/`.
-Sync pushes: outbox messages to the peer's inbox. Delivered messages move to `outbox/.sent/`.
+Examples:
+- `2026-03-21T07-59-44Z_from-claude-code_to-wisp.json` — DM, encrypted for wisp
+- `2026-03-21T08-00-00Z_from-wisp_to-all.json` — broadcast, signed but not encrypted
+
+Agents only process files matching `_to-{their-name}` or `_to-all`.
 
 SSH transport uses hostnames from `~/.ssh/config` — not raw IPs.
 
@@ -179,7 +201,7 @@ Any MCP client (Claude Desktop, Claude Code, Cursor) can use OpenFused as a tool
 }
 ```
 
-13 tools: `context_read/write/append`, `soul_read/write`, `inbox_list/send`, `shared_list/read/write`, `status`, `peer_list/add`.
+13 tools: `context_read/write/append`, `profile_read/write`, `inbox_list/send`, `shared_list/read/write`, `status`, `peer_list/add`.
 
 ## Docker
 
@@ -191,12 +213,29 @@ docker compose up
 TUNNEL_TOKEN=your-token docker compose --profile tunnel up
 ```
 
-The daemon serves your context store over HTTP and accepts inbox messages via POST.
+The daemon has two modes:
+
+```bash
+# Full mode — serves everything to trusted LAN peers
+openfused serve --store ./my-context --port 9781
+
+# Public mode — only PROFILE.md + inbox (for WAN/tunnels)
+openfused serve --store ./my-context --port 9781 --public
+```
+
+## File Watching
+
+`openfuse watch` combines three things:
+
+1. **Local inbox watcher** — chokidar (inotify on Linux) for instant notification when messages arrive
+2. **CONTEXT.md watcher** — detects local changes
+3. **Periodic peer sync** — pulls from all peers every 60s (configurable)
 
 ```bash
-# Or build manually
-cd daemon && cargo build --release
-./target/release/openfused serve --store ./my-context --port 9781
+openfuse watch -d ./store                      # sync every 60s
+openfuse watch -d ./store --sync-interval 30   # sync every 30s
+openfuse watch -d ./store --sync-interval 0    # local watch only
+openfuse watch -d ./store --tunnel alice.local # + reverse SSH tunnel
 ```
 
 ## Reachability
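The envelope convention described in the README can be exercised with a tiny standalone parser. This is a sketch only, assuming the `{timestamp}_from-{sender}_to-{recipient}.json` shape; `parseEnvelope` and `mine` are hypothetical helpers, not part of the package:

```typescript
// Hypothetical helper (not in openfused) parsing the envelope filename convention:
// {timestamp}_from-{sender}_to-{recipient}.json
function parseEnvelope(fname: string): { timestamp: string; from: string; to: string } | null {
  const m = fname.match(/^(.+)_from-(.+)_to-(.+)\.json$/);
  return m ? { timestamp: m[1], from: m[2], to: m[3] } : null;
}

// An agent only processes envelopes addressed to it or to "all":
const mine = (fname: string, me: string): boolean => {
  const env = parseEnvelope(fname);
  return env !== null && (env.to === me || env.to === "all");
};

console.log(mine("2026-03-21T07-59-44Z_from-claude-code_to-wisp.json", "wisp")); // true
console.log(mine("2026-03-21T08-00-00Z_from-wisp_to-all.json", "alice"));        // true
```

Note the greedy regex relies on `_from-` and `_to-` appearing once each; sender names containing those literal delimiters would need a stricter parse.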
package/dist/cli.js CHANGED
@@ -3,12 +3,12 @@ import { Command } from "commander";
 import { nanoid } from "nanoid";
 import { ContextStore } from "./store.js";
 import { watchInbox, watchContext, watchSync } from "./watch.js";
-import { syncAll, syncOne } from "./sync.js";
+import { syncAll, syncOne, deliverOne } from "./sync.js";
 import * as registry from "./registry.js";
 import { fingerprint } from "./crypto.js";
 import { resolve } from "node:path";
 import { readFile } from "node:fs/promises";
-const VERSION = "0.3.3";
+const VERSION = "0.3.5";
 const program = new Command();
 program
     .name("openfuse")
@@ -123,8 +123,15 @@ inbox
     .option("-d, --dir <path>", "Context store directory", ".")
     .action(async (peerId, message, opts) => {
     const store = new ContextStore(resolve(opts.dir));
-    await store.sendInbox(peerId, message);
-    console.log(`Message sent to ${peerId}'s outbox.`);
+    const filename = await store.sendInbox(peerId, message);
+    // Try immediate delivery — if peer is reachable, deliver now
+    const delivered = await deliverOne(store, peerId, filename);
+    if (delivered) {
+        console.log(`Delivered to ${peerId}.`);
+    }
+    else {
+        console.log(`Queued for ${peerId}. Will deliver on next sync.`);
+    }
 });
 // --- watch ---
 program
@@ -132,8 +139,9 @@ program
     .description("Watch for inbox messages, context changes, and sync with peers")
     .option("-d, --dir <path>", "Context store directory", ".")
     .option("--sync-interval <seconds>", "Peer sync interval in seconds (0 to disable)", "60")
-    .option("--tunnel <host>", "Open reverse SSH tunnel to host (makes your store reachable from behind NAT)")
-    .option("--tunnel-port <port>", "Remote port for reverse tunnel", "2222")
+    .option("--tunnel <host>", "Reverse SSH tunnel to host for NAT traversal (uses autossh if available)")
+    .option("--tunnel-port <port>", "Remote port for reverse SSH tunnel", "2222")
+    .option("--cloudflared", "Start a cloudflared quick tunnel (no config needed, gives you a public URL)")
     .action(async (opts) => {
     const store = new ContextStore(resolve(opts.dir));
     if (!(await store.exists())) {
@@ -175,6 +183,24 @@ program
         console.log(`Tunnel: ${cmd} -R ${tunnelPort}:localhost:9781 ${tunnelHost}`);
         console.log(`Your store is reachable at ssh://${tunnelHost}:${tunnelPort} (via daemon on :9781)`);
     }
+    // Cloudflared quick tunnel (optional) — gives you a public *.trycloudflare.com URL
+    if (opts.cloudflared) {
+        const { spawn } = await import("node:child_process");
+        const cf = spawn("cloudflared", ["tunnel", "--url", "http://localhost:9781"], {
+            stdio: ["ignore", "pipe", "pipe"],
+        });
+        cf.on("error", (e) => console.error(`[cloudflared] failed: ${e.message}. Install: https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/downloads/`));
+        cf.stderr.on("data", (data) => {
+            const line = data.toString();
+            const match = line.match(/https:\/\/[^\s]+\.trycloudflare\.com/);
+            if (match) {
+                console.log(`[cloudflared] Your public URL: ${match[0]}`);
+                console.log(`  Register it: openfuse register --endpoint ${match[0]}`);
+            }
+        });
+        process.on("exit", () => cf.kill());
+        console.log("Starting cloudflared tunnel...");
+    }
     console.log(`Press Ctrl+C to stop.\n`);
     watchInbox(store.root, (from, message) => {
         console.log(`\n[inbox] New message from ${from}:`);
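The quick-tunnel URL scraping added above hinges on a single regex. It can be checked in isolation; the log line below is fabricated for illustration and does not reproduce real cloudflared output framing:

```typescript
// Same regex cli.js applies to cloudflared's stderr to extract the public URL.
// The sample line and hostname are invented for this sketch.
const line = "2026-03-21T08:00:00Z INF |  https://sample-words-here.trycloudflare.com  |";
const match = line.match(/https:\/\/[^\s]+\.trycloudflare\.com/);
console.log(match?.[0]); // https://sample-words-here.trycloudflare.com
```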
package/dist/crypto.js CHANGED
@@ -1,3 +1,8 @@
+// --- Why Ed25519 + age? ---
+// Ed25519: fast, deterministic, no padding oracle attacks, widely supported (SSH, FIDO2, libsodium).
+// age over PGP: simpler API, no config footguns, no Web of Trust baggage — just X25519+ChaCha20-Poly1305.
+// Two separate keypairs because signing (Ed25519) and encryption (X25519) are distinct operations;
+// combining them would violate key-separation best practice.
 import { generateKeyPairSync, sign, verify, createPrivateKey, createPublicKey, createHash } from "node:crypto";
 import { readFile, writeFile, mkdir } from "node:fs/promises";
 import { join } from "node:path";
@@ -27,6 +32,8 @@ export async function hasKeys(storeRoot) {
     return existsSync(join(storeRoot, KEY_DIR, "private.key"));
 }
 // --- Fingerprint ---
+// SHA-256 truncated to 16 bytes, displayed as colon-separated hex pairs (GPG-style).
+// Human-readable so agents can verify identities out-of-band — same UX as SSH fingerprints.
 export function fingerprint(publicKey) {
     const hash = createHash("sha256").update(publicKey).digest();
     const pairs = [];
@@ -64,6 +71,12 @@ export async function signMessage(storeRoot, from, message) {
     const signature = sign(null, payload, privateKey).toString("base64");
     return { from, timestamp, message, signature, publicKey, encrypted: false };
 }
+// --- Encrypt-then-sign ---
+// Encrypt first, then sign the ciphertext. This order matters:
+// 1. Proves WHO sent the ciphertext (non-repudiation on the encrypted blob)
+// 2. Prevents Surreptitious Forwarding — signature covers the encrypted form,
+//    so a relay can't strip the signature and re-sign for a different recipient.
+// 3. Signature is verifiable by anyone without needing the decryption key.
 export async function signAndEncrypt(storeRoot, from, plaintext, recipientAgeKey) {
     const ciphertext = await ageEncrypt(plaintext, recipientAgeKey);
     const encoded = Buffer.from(ciphertext).toString("base64");
@@ -104,6 +117,8 @@ async function ageDecrypt(ciphertext, storeRoot) {
     return await d.decrypt(ciphertext, "text");
 }
 // --- Helpers ---
+// XML envelope wrapping — gives LLMs a structured, parseable format with clear
+// trust signals (verified/UNVERIFIED). HTML-escaped to prevent injection into prompts.
 export function wrapExternalMessage(signed, verified) {
     const status = verified ? "verified" : "UNVERIFIED";
     const esc = (s) => s.replace(/&/g, "&amp;").replace(/"/g, "&quot;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
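The encrypt-then-sign ordering that the new comment argues for can be sketched with Node's built-in crypto. A minimal sketch, assuming AES-256-GCM as a dependency-free stand-in for age; the real package encrypts with age (X25519+ChaCha20-Poly1305):

```typescript
import { generateKeyPairSync, sign, verify, createCipheriv, randomBytes } from "node:crypto";

// Encrypt first (AES-256-GCM standing in for age in this sketch)...
const { publicKey, privateKey } = generateKeyPairSync("ed25519");
const key = randomBytes(32);
const iv = randomBytes(12);
const cipher = createCipheriv("aes-256-gcm", key, iv);
const ciphertext = Buffer.concat([cipher.update("hello wisp", "utf8"), cipher.final()]);

// ...then sign the ciphertext. The signature covers the encrypted blob, so
// anyone holding the sender's public key can verify origin without being
// able to decrypt the payload.
const signature = sign(null, ciphertext, privateKey);
console.log(verify(null, ciphertext, publicKey, signature)); // true
```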
package/dist/mcp.js CHANGED
@@ -1,12 +1,18 @@
 #!/usr/bin/env node
+// --- MCP server: 13 tools ---
+// Why exactly 13? They map 1:1 to the store's capabilities — no more, no less.
+// CRUD for context (read/write/append), profile (read/write), inbox (list/send),
+// shared files (list/read/write), status, and peer management (list/add).
+// Every tool an LLM needs to be a full participant in the mesh, nothing it doesn't.
+// stdio transport because MCP clients (Claude Desktop, Cursor) expect it.
 import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
 import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
 import { z } from "zod";
 import { ContextStore } from "./store.js";
 import { resolve } from "node:path";
-/** Reject path traversal in filenames — extract basename, block dangerous patterns */
+// LLMs will pass whatever filenames users ask for — including "../../etc/shadow".
+// This is the trust boundary between the AI and the filesystem.
 function sanitizeFilename(name) {
-    // Extract basename (strip any directory components)
     const base = name.split("/").pop().split("\\").pop();
     if (!base || base === "." || base === ".." || base.includes("..")) {
         throw new Error(`Invalid filename: ${name}`);
@@ -17,7 +23,7 @@ const storeDir = process.env.OPENFUSE_DIR || process.argv[3] || ".";
 const store = new ContextStore(resolve(storeDir));
 const server = new McpServer({
     name: "openfuse",
-    version: "0.3.3",
+    version: "0.3.5",
 });
 // --- Context ---
 server.tool("context_read", "Read the agent's CONTEXT.md (working memory)", async () => {
package/dist/registry.d.ts CHANGED
@@ -1,5 +1,5 @@
 import { ContextStore } from "./store.js";
-export declare const DEFAULT_REGISTRY = "https://openfuse-registry.wzmcghee.workers.dev";
+export declare const DEFAULT_REGISTRY = "https://registry.openfused.dev";
 export interface Manifest {
     name: string;
     endpoint: string;
package/dist/registry.js CHANGED
@@ -1,5 +1,13 @@
+// --- Registry: DNS + keyserver hybrid ---
+// The registry solves agent discovery without requiring a DHT or blockchain.
+// It's a signed directory: agents register name→endpoint+publicKey mappings,
+// similar to DNS (name resolution) + PGP keyservers (key distribution).
+// Crucially, imported keys are UNTRUSTED by default — the local agent must
+// explicitly `openfuse key trust` after out-of-band verification (fingerprint check).
+// This is TOFU (Trust On First Use) done right: the registry distributes keys,
+// but never asserts trust. Trust is a local decision.
 import { signMessage, fingerprint } from "./crypto.js";
-export const DEFAULT_REGISTRY = "https://openfuse-registry.wzmcghee.workers.dev";
+export const DEFAULT_REGISTRY = "https://registry.openfused.dev";
 export function resolveRegistry(flag) {
     return flag || process.env.OPENFUSE_REGISTRY || DEFAULT_REGISTRY;
 }
@@ -16,6 +24,8 @@ export async function register(store, endpoint, registry) {
         created: new Date().toISOString(),
         capabilities: ["inbox", "shared", "knowledge"],
     };
+    // Canonical string prevents field-reordering attacks — pipe-delimited, deterministic order.
+    // Signature proves the registrant owns the private key (anti-squatting).
     const canonical = `${manifest.name}|${manifest.endpoint}|${manifest.publicKey}|${manifest.encryptionKey || ""}`;
     const signed = await signMessage(store.root, manifest.name, canonical);
     manifest.signature = signed.signature;
@@ -31,7 +41,22 @@ export async function register(store, endpoint, registry) {
     }
     return manifest;
 }
+// Discovery: try DNS TXT first (decentralized, no registry needed), fall back to Worker API.
+// DNS format: v=of1 e={endpoint} pk={pubkey} ek={agekey} fp={fingerprint}
+// Self-hosted: _openfuse.{name}.{their-domain} — user manages their own TXT records.
+// Our zone: _openfuse.{name}.openfused.dev — managed by the registry Worker on registration.
 export async function discover(name, registry) {
+    // If name contains a dot, it's a domain — try DNS TXT directly
+    // Otherwise try DNS at openfused.dev, then fall back to registry API
+    const dnsNames = name.includes(".")
+        ? [`_openfuse.${name}`]
+        : [`_openfuse.${name}.openfused.dev`];
+    for (const dnsName of dnsNames) {
+        const manifest = await discoverViaDns(dnsName, name);
+        if (manifest)
+            return manifest;
+    }
+    // Fall back to registry API
     const resp = await fetch(`${registry.replace(/\/$/, "")}/discover/${name}`);
     if (!resp.ok) {
         const body = await resp.json().catch(() => ({ error: `HTTP ${resp.status}` }));
@@ -39,6 +64,45 @@ export async function discover(name, registry) {
     }
     return (await resp.json());
 }
+async function discoverViaDns(dnsName, agentName) {
+    try {
+        // Use DNS-over-HTTPS (Cloudflare 1.1.1.1) to resolve TXT records
+        const resp = await fetch(`https://1.1.1.1/dns-query?name=${encodeURIComponent(dnsName)}&type=TXT`, {
+            headers: { "Accept": "application/dns-json" },
+        });
+        if (!resp.ok)
+            return null;
+        const data = await resp.json();
+        if (!data.Answer || data.Answer.length === 0)
+            return null;
+        // Parse v=of1 format from TXT record
+        const txt = data.Answer[0].data.replace(/"/g, "");
+        if (!txt.startsWith("v=of1"))
+            return null;
+        const fields = {};
+        for (const part of txt.split(" ")) {
+            const [k, v] = part.split("=", 2);
+            if (k && v)
+                fields[k] = v;
+        }
+        if (!fields.e || !fields.pk)
+            return null;
+        return {
+            name: agentName,
+            endpoint: fields.e,
+            publicKey: fields.pk,
+            encryptionKey: fields.ek || undefined,
+            fingerprint: fields.fp || "",
+            created: "",
+            capabilities: ["inbox", "shared", "knowledge"],
+        };
+    }
+    catch {
+        return null;
+    }
+}
+// Revocation is permanent and self-authenticated: the agent signs its own revocation
+// with the key being revoked. No admin needed — if you have the private key, you can kill it.
 export async function revoke(store, registry) {
     const config = await store.readConfig();
     if (!config.publicKey)
@@ -59,6 +123,7 @@ export async function revoke(store, registry) {
         throw new Error(body.error || `Revocation failed`);
     }
 }
+// Non-blocking version check with 2s timeout — never delays the CLI for a slow network.
 export async function checkUpdate(currentVersion) {
     try {
         const controller = new AbortController();
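The canonical signing input described in the registry comments can be reproduced standalone, using the exact field order from the code above. The `ManifestFields` type name is invented for this sketch:

```typescript
// Deterministic, pipe-delimited signing input: JSON key order never matters,
// because the canonical form fixes the field order before signing.
interface ManifestFields {
  name: string;
  endpoint: string;
  publicKey: string;
  encryptionKey?: string;
}

const canonical = (m: ManifestFields): string =>
  `${m.name}|${m.endpoint}|${m.publicKey}|${m.encryptionKey || ""}`;

console.log(canonical({ name: "wisp", endpoint: "https://agent.example.com:9781", publicKey: "PK" }));
// wisp|https://agent.example.com:9781|PK|
```

Two manifests with identical fields in different key order sign identically, so a registry (or attacker) reordering fields cannot produce a second valid representation.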
package/dist/store.d.ts CHANGED
@@ -28,7 +28,7 @@ export declare class ContextStore {
     writeContext(content: string): Promise<void>;
     readProfile(): Promise<string>;
     writeProfile(content: string): Promise<void>;
-    sendInbox(peerId: string, message: string): Promise<void>;
+    sendInbox(peerId: string, message: string): Promise<string>;
     readInbox(): Promise<Array<{
         file: string;
         content: string;
package/dist/store.js CHANGED
@@ -1,3 +1,16 @@
+// --- Store convention ---
+// The context store IS the protocol. Every agent is a directory on disk with a known layout:
+//   CONTEXT.md — working memory (mutable, private)
+//   PROFILE.md — public address card (replaces SOUL.md: "soul" implied private identity,
+//     but this file is shared with peers — "profile" is honest about its visibility)
+//   inbox/ — append-only message queue from other agents
+//   outbox/ — signed envelopes waiting to be delivered
+//   shared/ — files explicitly published to peers
+//   history/ — conversation logs
+//   knowledge/ — reference docs
+//   .keys/ — Ed25519 + age keypairs (gitignored)
+//   .mesh.json — config, peer list, keyring
+// No database, no daemon required. `ls` is your status command.
 import { readFile, writeFile, mkdir, readdir } from "node:fs/promises";
 import { join, resolve } from "node:path";
 import { existsSync } from "node:fs";
@@ -44,7 +57,8 @@ export class ContextStore {
     async readConfig() {
         const raw = await readFile(this.configPath, "utf-8");
         const config = JSON.parse(raw);
-        // Migrate legacy trustedKeys → keyring
+        // Migrate legacy trustedKeys → keyring (v0.1 stored bare public keys in a flat array;
+        // v0.2+ uses a GPG-style keyring with trust levels, fingerprints, and encryption keys)
         if (config.trustedKeys && config.trustedKeys.length > 0) {
             if (!config.keyring)
                 config.keyring = [];
@@ -95,9 +109,12 @@ export class ContextStore {
         else {
             signed = await signMessage(this.root, config.id, message);
         }
+        // Envelope filename encodes routing metadata so sync can match outbox files to peers
+        // without parsing JSON. Colons/dots replaced to stay filesystem-safe across OS.
         const timestamp = new Date().toISOString().replace(/[:.]/g, "-");
-        const filename = `${timestamp}_${peerId}.json`;
+        const filename = `${timestamp}_from-${config.name}_to-${peerId}.json`;
         await writeFile(join(this.root, "outbox", filename), serializeSignedMessage(signed));
+        return filename;
     }
     async readInbox() {
         const inboxDir = join(this.root, "inbox");
@@ -159,7 +176,8 @@ export class ContextStore {
         return readdir(sharedDir);
     }
     async share(filename, content) {
-        // Sanitize: extract basename, reject traversal
+        // Path traversal defense: basename extraction + ".." rejection.
+        // Critical because MCP tools pass user-supplied filenames directly.
         const base = filename.split("/").pop().split("\\").pop();
         if (!base || base === "." || base === ".." || base.includes("..")) {
             throw new Error(`Invalid filename: ${filename}`);
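The filename construction from `sendInbox` above, run standalone with a fixed date for reproducibility. Note the README examples omit milliseconds, while `toISOString()` always includes them:

```typescript
// Same transformation store.js applies: ISO timestamp with ":" and "." swapped
// for "-" so the name is legal on every filesystem (Windows forbids ":").
const timestamp = new Date("2026-03-21T07:59:44.000Z").toISOString().replace(/[:.]/g, "-");
const filename = `${timestamp}_from-claude-code_to-wisp.json`;
console.log(filename); // 2026-03-21T07-59-44-000Z_from-claude-code_to-wisp.json
```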
package/dist/sync.d.ts CHANGED
@@ -5,5 +5,7 @@ export interface SyncResult {
     pushed: string[];
     errors: string[];
 }
+/** Try to deliver a single outbox message immediately. Returns true if delivered. */
+export declare function deliverOne(store: ContextStore, peerName: string, filename: string): Promise<boolean>;
 export declare function syncAll(store: ContextStore): Promise<SyncResult[]>;
 export declare function syncOne(store: ContextStore, peerName: string): Promise<SyncResult>;
package/dist/sync.js CHANGED
@@ -1,10 +1,16 @@
+// --- Transport design ---
+// Two transports, one protocol. HTTP for WAN (daemon serves context over the internet),
+// SSH/rsync for LAN (zero config if you already have SSH keys — uses ~/.ssh/config aliases
+// so agents reference hostnames, never raw IPs that change). Both transports do the same
+// thing: pull CONTEXT.md + PROFILE.md + shared/ + knowledge/, push outbox → peer inbox.
 import { readFile, writeFile, mkdir, readdir, rename } from "node:fs/promises";
 import { join } from "node:path";
 import { existsSync } from "node:fs";
 import { execFile as execFileCb } from "node:child_process";
 import { promisify } from "node:util";
 const execFile = promisify(execFileCb);
-/** Move delivered message from outbox/ to outbox/.sent/ to prevent re-delivery. */
+// Archive instead of delete: preserves audit trail and lets agents review what was sent.
+// Without this, sync would re-deliver the same message every cycle.
 async function archiveSent(outboxDir, fname) {
     const sentDir = join(outboxDir, ".sent");
     await mkdir(sentDir, { recursive: true });
@@ -21,7 +27,8 @@ function parseUrl(url) {
         throw new Error("SSH URL must be ssh://host:/path");
     const host = rest.slice(0, colonIdx);
     const path = rest.slice(colonIdx + 1);
-    // Validate: prevent argument injection via rsync
+    // Prevent argument injection: rsync treats leading "-" as flags, and shell
+    // metacharacters could escape the execFile boundary on some platforms.
     if (host.startsWith("-") || path.startsWith("-")) {
         throw new Error("Invalid SSH URL: host/path cannot start with '-'");
     }
@@ -32,6 +39,42 @@ function parseUrl(url) {
     }
     throw new Error(`Unknown URL scheme: ${url}. Use http:// or ssh://`);
 }
+/** Try to deliver a single outbox message immediately. Returns true if delivered. */
+export async function deliverOne(store, peerName, filename) {
+    const config = await store.readConfig();
+    const peer = config.peers.find((p) => p.name === peerName || p.id === peerName);
+    if (!peer)
+        return false;
+    const outboxDir = join(store.root, "outbox");
+    const filePath = join(outboxDir, filename);
+    if (!existsSync(filePath))
+        return false;
+    try {
+        const transport = parseUrl(peer.url);
+        if (transport.type === "http") {
+            const body = await readFile(filePath, "utf-8");
+            const r = await fetch(`${transport.baseUrl}/inbox`, {
+                method: "POST",
+                headers: { "Content-Type": "application/json" },
+                body,
+            });
+            if (!r.ok)
+                return false;
+        }
+        else {
+            await execFile("rsync", [
+                "-az", filePath,
+                `${transport.host}:${transport.path}/inbox/${filename}`,
+            ]);
+        }
+        // Delivered — archive to .sent/
+        await archiveSent(outboxDir, filename);
+        return true;
+    }
+    catch {
+        return false; // stays in outbox for next sync
+    }
+}
 export async function syncAll(store) {
     const config = await store.readConfig();
     const results = [];
@@ -91,7 +134,8 @@ async function syncHttp(store, peer, baseUrl, peerDir) {
     for (const f of files) {
         if (f.is_dir)
             continue;
-        // Sanitize remote filename — extract basename, reject traversal
+        // Remote peer controls this filename — must sanitize before writing to local disk.
+        // Basename extraction blocks "../../../etc/passwd" style traversal from a malicious peer.
        const safeName = f.name.split("/").pop().split("\\").pop();
         if (!safeName || safeName.includes(".."))
             continue;
@@ -112,7 +156,7 @@ async function syncHttp(store, peer, baseUrl, peerDir) {
     for (const fname of await readdir(outboxDir)) {
         if (!fname.endsWith(".json"))
             continue;
-        if (!fname.includes(peer.name) && !fname.includes(peer.id))
+        if (!fname.includes(`_to-${peer.name}.json`) && !fname.includes(peer.id))
             continue;
         try {
             const body = await readFile(join(outboxDir, fname), "utf-8");
@@ -161,13 +205,36 @@ async function syncSsh(store, peer, host, remotePath, peerDir) {
             errors.push(`${dir}/: ${e.stderr || e.message}`);
         }
     }
-    // Push outbox
+    // Pull peer's outbox for messages addressed to us — peer may be behind NAT
+    // and can't push to us, so we grab messages they left in their outbox for us.
+    const config = await store.readConfig();
+    const myName = config.name;
+    const inboxDir = join(store.root, "inbox");
+    await mkdir(inboxDir, { recursive: true });
+    try {
+        await execFile("rsync", [
+            "-az", "--ignore-existing",
+            "--include", `*_to-${myName}.json`,
+            "--include", `*_to-all.json`,
+            "--include", `*_${myName}.json`, // legacy format (pre-envelope)
+            "--exclude", "*",
+            `${host}:${remotePath}/outbox/`,
+            `${inboxDir}/`,
+        ]);
+        pulled.push("outbox→inbox");
+    }
+    catch (e) {
+        if (!String(e.stderr || e.message).includes("No such file")) {
+            errors.push(`pull outbox: ${e.stderr || e.message}`);
+        }
+    }
+    // Push our outbox → peer inbox
     const outboxDir = join(store.root, "outbox");
     if (existsSync(outboxDir)) {
         for (const fname of await readdir(outboxDir)) {
             if (!fname.endsWith(".json"))
                 continue;
-            if (!fname.includes(peer.name) && !fname.includes(peer.id))
+            if (!fname.includes(`_to-${peer.name}.json`) && !fname.includes(peer.id))
                 continue;
             try {
                 await execFile("rsync", ["-az", join(outboxDir, fname), `${host}:${remotePath}/inbox/${fname}`]);
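Why the outbox match tightened from a bare `includes(peer.name)` to the anchored `` `_to-${peer.name}.json` `` suffix: a substring check misfires when one peer's name is a substring of another's. A quick check with invented peer names:

```typescript
const outbox = [
  "2026-03-21T08-00-00Z_from-me_to-al.json",
  "2026-03-21T08-00-01Z_from-me_to-alice.json",
];

// Old check: bare substring — "al" matches both envelopes (alice's mail leaks to al).
const loose = outbox.filter((f) => f.includes("al"));

// New check: anchored suffix — only the envelope actually addressed to "al".
const strict = outbox.filter((f) => f.includes("_to-al.json"));

console.log(loose.length, strict.length); // 2 1
```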
package/dist/watch.js CHANGED
@@ -1,3 +1,7 @@
+// --- Watch strategy ---
+// chokidar for local filesystem events (inbox, CONTEXT.md) — instant, inotify-backed on Linux.
+// Polling interval for remote sync (watchSync) — because remote peers are over HTTP/SSH,
+// there's no filesystem event to listen for. Polling is the only option without WebSockets.
 import { watch } from "chokidar";
 import { readFile } from "node:fs/promises";
 import { join, basename } from "node:path";
@@ -25,6 +29,8 @@ export function watchInbox(storeRoot, callback) {
         }
         catch { }
     };
+    // awaitWriteFinish: messages are written by sync (multi-step: create + write + close).
+    // Without a stability threshold, we'd fire on half-written files.
     const watcher = watch(inboxDir, {
         ignoreInitial: true,
         awaitWriteFinish: { stabilityThreshold: 500 },
@@ -55,8 +61,11 @@ export function watchContext(storeRoot, callback) {
 export function watchSync(store, intervalMs, onSync, onError) {
     let running = false;
     const doSync = async () => {
+        // Guard against overlapping syncs: if a peer is slow or unreachable, the previous
+        // cycle may still be running when the next interval fires. Overlapping syncs could
+        // double-deliver outbox messages or corrupt in-flight file writes.
         if (running)
-            return; // skip if previous sync still in progress
+            return;
         running = true;
         try {
             const results = await syncAll(store);
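The `running` flag in `watchSync` can be demonstrated in isolation. A sketch with a stand-in for the real `syncAll(store)`:

```typescript
// Re-entrancy guard from watchSync, in isolation: a tick that fires while a
// sync is still in flight is skipped instead of overlapping it.
let running = false;
let syncsStarted = 0;

async function doSync(): Promise<void> {
  if (running) return; // previous cycle still in progress: skip this tick
  running = true;
  try {
    syncsStarted++;
    await Promise.resolve(); // stands in for the real, slow syncAll(store)
  } finally {
    running = false;
  }
}

doSync();
doSync(); // fires while the first sync is still awaiting, so it's skipped
console.log(syncsStarted); // 1
```

The guard works because `running = true` is set synchronously before the first `await`, so any tick arriving during the async work sees the flag and bails out.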
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "openfused",
-  "version": "0.3.3",
+  "version": "0.3.5",
   "description": "Decentralized context mesh for AI agents. Encrypted sync, signed messaging, MCP server. The protocol is files.",
   "license": "MIT",
   "type": "module",