openfused 0.4.0 → 0.4.2
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +79 -11
- package/dist/cli.js +189 -66
- package/dist/sync.js +10 -4
- package/dist/wasm-core.js +7 -5
- package/package.json +3 -2
- package/wasm/.gitkeep +0 -0
- package/wasm/openfuse-core.wasm +0 -0
package/README.md
CHANGED
@@ -242,6 +242,48 @@ Any MCP client (Claude Desktop, Claude Code, Cursor) can use OpenFused as a tool
 
 13 tools: `context_read/write/append`, `profile_read/write`, `inbox_list/send`, `shared_list/read/write`, `status`, `peer_list/add`.
 
+## Hosted Mailbox
+
+No server? No problem. Register your keys and get a free inbox at `inbox.openfused.dev`:
+
+```bash
+# Register with the hosted mailbox as your endpoint
+openfuse register --endpoint https://inbox.openfused.dev
+
+# Anyone can now send you messages
+openfuse send your-name "hello"
+
+# You pull messages whenever you're online
+openfuse inbox list
+```
+
+No server to run. No port to open. No tunnel to configure. Messages wait in the mailbox until your agent wakes up and pulls them. It's email for agents.
+
+The paid tier ($5/mo) gets a dedicated store at `{name}.openfused.dev` with full context, shared files, knowledge base, and custom Worker code.
+
+## A2A Compatibility
+
+OpenFused speaks the [A2A protocol](https://github.com/a2aproject/A2A) (Google/Linux Foundation). The daemon exposes a standard A2A facade over the file-native store:
+
+```bash
+# Start daemon with A2A enabled
+openfused serve --store ./my-store --token "$OPENFUSE_TOKEN"
+
+# A2A clients can now:
+# - Discover your agent at /.well-known/agent-card.json
+# - Send tasks via POST /message/send
+# - Stream progress via POST /message/stream (SSE)
+# - Check results via GET /tasks/{id}
+```
+
+A2A is how agents talk. OpenFused is where agents think. The daemon translates HTTP to files and files to HTTP — any agent picks up tasks by reading files, reports progress by writing files. No runtime lock-in.
+
+```bash
+# CLI task management
+openfuse tasks list --token "$OPENFUSE_TOKEN"
+openfuse tasks get <task-id> --token "$OPENFUSE_TOKEN"
+```
+
 ## Docker
 
 ```bash
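Editor's note: a minimal sketch of what a client call against the A2A routes in this hunk looks like. The `/message/send` path and `Bearer` header come from this README; the request body shape (a single text part) is an assumption for illustration, so check the A2A spec for the exact schema.

```javascript
// Build a POST /message/send request for the daemon.
// ASSUMPTION: the { message: { parts: [{ text }] } } body is illustrative,
// not taken from this README; consult the A2A spec for the real schema.
function buildTaskRequest(baseUrl, token, text) {
  return {
    url: `${baseUrl.replace(/\/$/, "")}/message/send`,
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": `Bearer ${token}`,
      },
      body: JSON.stringify({ message: { parts: [{ text }] } }),
    },
  };
}
```

A caller would then `fetch(url, options)` and poll `GET /tasks/{id}` for the result.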
@@ -260,18 +302,36 @@ openfused serve --store ./my-context --port 2053
 
 # Public mode — PROFILE.md + inbox + outbox pickup (for WAN/tunnels)
 openfused serve --store ./my-context --port 2053 --public
-```
 
-
+# With auth and task GC
+openfused serve --store ./my-context --token "$OPENFUSE_TOKEN" --gc-days 7
+```
 
-
-
-
-
-
-
-
-
+| Flag | Purpose |
+|------|---------|
+| `--token` / `OPENFUSE_TOKEN` | Bearer token for A2A routes |
+| `--gc-days N` | Auto-delete terminal tasks older than N days (default: 7) |
+| `--public` | Restrict to PROFILE.md + inbox only |
+
+Rate limiting, IP filtering, and TLS belong at the reverse proxy layer (nginx, Caddy, cloudflared). The daemon focuses on application logic.
+
+Endpoints:
+
+| Endpoint | Method | Auth | Purpose |
+|----------|--------|------|---------|
+| `/.well-known/agent-card.json` | GET | None | A2A agent discovery |
+| `/profile` | GET | None | PROFILE.md |
+| `/config` | GET | None | Public keys |
+| `/message/send` | POST | Bearer | Create A2A task |
+| `/message/stream` | POST | Bearer | Create task + SSE stream |
+| `/tasks` | GET | Bearer | List tasks |
+| `/tasks/{id}` | GET | Bearer | Get task |
+| `/tasks/{id}/cancel` | POST | Bearer | Cancel task |
+| `/tasks/{id}/subscribe` | POST | Bearer | SSE subscribe |
+| `/tasks/{id}/status` | POST | Bearer | Update task status |
+| `/tasks/{id}/artifacts` | POST | Bearer | Add artifact |
+| `/inbox` | POST | Ed25519 sig | Receive signed message |
+| `/outbox/{name}` | GET | Ed25519 challenge | Pull outbox |
 
 ## File Watching
 
@@ -292,10 +352,12 @@ openfuse watch -d ./store --tunnel your-server # + reverse SSH tunnel
 
 | Scenario | Solution | Decentralized? |
 |----------|----------|----------------|
+| No server at all | `inbox.openfused.dev` hosted mailbox | Federated |
 | VPS agent | `openfused serve` — public IP | Yes |
 | Behind NAT + cloudflared | `openfused serve` + `cloudflared tunnel` | Yes |
 | Docker agent | Mount store as volume | Yes |
 | Pull-only agent | `openfuse sync` on cron — outbound only | Yes |
+| A2A ecosystem | Daemon with `--token` — standard A2A interface | Yes |
 
 ## Security
 
@@ -315,8 +377,13 @@ Hey, the research is done. Check shared/findings.md
 
 ### Hardening
 
--
+- Bearer token auth on A2A routes (constant-time comparison via subtle crate)
+- File locking on task.json (flock, prevents concurrent write corruption)
+- Task garbage collection (auto-deletes terminal tasks after configurable days)
+- Path traversal blocked (canonicalized paths, iterative `..` stripping, leading-dot rejection)
 - Daemon body size limit (1MB)
+- SSE stream timeout (30 minutes, prevents resource exhaustion)
+- GC canonicalizes paths before deletion (symlink traversal defense)
 - PROFILE.md is public; private config stays in your agent runtime (CLAUDE.md, etc.)
 - Registry rate-limited on all mutation endpoints
 - Outbox per-recipient subdirs with fingerprint binding (anti name-squatting)
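Editor's note: the "iterative `..` stripping, leading-dot rejection" bullet above names a string-level defense whose iterative part is easy to get wrong. The sketch below shows why a single stripping pass is insufficient; it is an assumed illustration of the technique, not the daemon's Rust implementation, which additionally canonicalizes paths.

```javascript
// Sketch of iterative "../" stripping plus leading-dot rejection.
// ASSUMPTION: illustrative only; the daemon also canonicalizes paths.
function sanitizeRequestPath(p) {
  let s = String(p);
  let prev;
  do {
    prev = s;
    // One pass can expose a fresh "../": "..././file" -> "../file" -> "file".
    s = s.replace(/\.\.\//g, "");
  } while (s !== prev);
  // Reject hidden segments such as ".sent" or ".git".
  if (s.split("/").some((seg) => seg.startsWith("."))) return null;
  return s;
}
```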
@@ -324,6 +391,7 @@ Hey, the research is done. Check shared/findings.md
 - Sending requires recipient in keyring (no blind sends to unknown agents)
 - SSH URLs validated (no argument injection)
 - XML values escaped in message wrapping (no prompt injection via attributes)
+- Rate limiting, IP filtering, TLS belong at the proxy layer — the daemon does not duplicate them
 
 ## How agents communicate
 
package/dist/cli.js
CHANGED
|
@@ -617,88 +617,211 @@ program
 console.log(` Created: ${manifest.created}`);
 });
 // --- send ---
+// Helper: find the newest outbox file for a recipient
+import { readdirSync, statSync } from "node:fs";
+function findNewestOutboxFile(storeRoot, name) {
+const outboxDir = join(storeRoot, "outbox");
+try {
+for (const entry of readdirSync(outboxDir)) {
+if (entry.startsWith(`${name}-`) && statSync(join(outboxDir, entry)).isDirectory()) {
+const files = readdirSync(join(outboxDir, entry))
+.filter((f) => f.endsWith(".json"))
+.sort()
+.reverse();
+if (files.length > 0)
+return join(entry, files[0]);
+}
+}
+}
+catch { }
+return "";
+}
 program
 .command("send <name> <message>")
-.description("Send a message to an agent
+.description("Send a message to an agent")
 .option("-d, --dir <path>", "Context store directory", ".")
 .option("-r, --registry <url>", "Registry URL")
+.option("--http", "Force HTTP delivery (uses registry endpoint)")
+.option("--ssh", "Force SSH delivery (uses local peer SSH URL)")
 .action(async (name, message, opts) => {
 const store = new ContextStore(resolve(opts.dir));
 const reg = registry.resolveRegistry(opts.registry);
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-url: manifest.endpoint,
-access: "read",
-});
-}
-await store.writeConfig(config);
-const filename = await store.sendInbox(name, message);
-// Try direct HTTP delivery if endpoint is http(s)
-if (manifest.endpoint.startsWith("http")) {
-try {
-// SSRF check: registry endpoints are attacker-controlled
-const { checkSsrf } = await import("./sync.js");
-await checkSsrf(manifest.endpoint);
-const body = await readFile(join(store.root, "outbox", filename), "utf-8");
-const r = await fetch(`${manifest.endpoint.replace(/\/$/, "")}/inbox`, {
-method: "POST",
-headers: { "Content-Type": "application/json" },
-body,
+let config = await store.readConfig();
+// Ensure recipient is known — check local peers, then registry
+const existingPeer = config.peers.find((p) => p.name === name);
+let httpEndpoint = existingPeer?.url?.startsWith("http") ? existingPeer.url : "";
+let sshUrl = existingPeer?.url?.startsWith("ssh") ? existingPeer.url : "";
+// If --http forced or no local peer, discover from registry
+if (opts.http || !existingPeer) {
+try {
+const manifest = await registry.discover(name, reg);
+if (manifest.endpoint?.startsWith("http"))
+httpEndpoint = manifest.endpoint;
+// Auto-import key + add as peer
+if (!config.keyring.some((e) => e.signingKey === manifest.publicKey)) {
+config.keyring.push({
+name: manifest.name,
+address: `${manifest.name}@registry`,
+signingKey: manifest.publicKey,
+encryptionKey: manifest.encryptionKey,
+fingerprint: manifest.fingerprint,
+trusted: false,
+added: new Date().toISOString(),
 });
-if (r.ok) {
-// Archive to .sent/ within the recipient subdir
-const { mkdir, rename } = await import("node:fs/promises");
-const filePath = join(store.root, "outbox", filename);
-const dir = join(filePath, "..");
-const sentDir = join(dir, ".sent");
-const baseName = filename.includes("/") ? filename.split("/").pop() : filename;
-await mkdir(sentDir, { recursive: true });
-await rename(filePath, join(sentDir, baseName));
-console.log(`Delivered to ${name}.`);
-}
-else {
-console.log(`Queued for ${name}. Endpoint returned ${r.status}. Will deliver on next sync.`);
-}
 }
-
-
+if (manifest.endpoint && !config.peers.some((p) => p.name === manifest.name)) {
+config.peers.push({
+id: (await import("nanoid")).nanoid(12),
+name: manifest.name,
+url: manifest.endpoint,
+access: "read",
+});
 }
+await store.writeConfig(config);
 }
-
-
+catch {
+if (!existingPeer && !config.keyring.some((k) => k.name === name)) {
+console.error(`Agent '${name}' not found in local peers or registry.`);
+process.exit(1);
+}
 }
-
-
+}
+// Create signed message in outbox
+await store.sendInbox(name, message);
+const outboxFile = findNewestOutboxFile(store.root, name);
+if (!outboxFile) {
+console.log(`Queued for ${name}.`);
+return;
+}
+// Determine delivery method
+const forceHttp = opts.http;
+const forceSsh = opts.ssh;
+// --ssh: deliver via local peer SSH
+if (forceSsh) {
+if (!sshUrl) {
+console.log(`Queued for ${name}. No SSH peer configured — use \`openfuse peer add ssh://...\`.`);
+return;
 }
+const delivered = await deliverOne(store, name, outboxFile);
+console.log(delivered ? `Delivered to ${name} via SSH.` : `Queued for ${name}. SSH delivery failed — run \`openfuse sync\`.`);
+return;
 }
-
-
-
-
-
-
+// --http or default with HTTP endpoint: deliver via HTTP
+if ((forceHttp || !sshUrl) && httpEndpoint) {
+try {
+const { checkSsrf } = await import("./sync.js");
+await checkSsrf(httpEndpoint);
+const body = await readFile(join(store.root, "outbox", outboxFile), "utf-8");
+const inboxUrl = `${httpEndpoint.replace(/\/$/, "")}/inbox/${encodeURIComponent(name)}`;
+const r = await fetch(inboxUrl, {
+method: "POST",
+headers: { "Content-Type": "application/json" },
+body,
+});
+if (r.ok) {
+const { mkdir, rename } = await import("node:fs/promises");
+const filePath = join(store.root, "outbox", outboxFile);
+const sentDir = join(filePath, "..", ".sent");
+const baseName = outboxFile.split("/").pop();
+await mkdir(sentDir, { recursive: true });
+await rename(filePath, join(sentDir, baseName));
+console.log(`Delivered to ${name}.`);
+}
+else {
+console.log(`Queued for ${name}. Endpoint returned ${r.status}.`);
+}
 }
-
+catch (e) {
 console.log(`Queued for ${name}. Run \`openfuse sync\` to deliver.`);
+if (process.env.DEBUG)
+console.error(` Delivery error: ${e.message}`);
+}
+return;
+}
+// Default: try local peer (SSH or HTTP)
+if (existingPeer) {
+const delivered = await deliverOne(store, name, outboxFile);
+console.log(delivered ? `Delivered to ${name}.` : `Queued for ${name}. Run \`openfuse sync\` to deliver.`);
+return;
+}
+console.log(`Queued for ${name}. No endpoint — they'll need to pull from your outbox.`);
+});
+// --- tasks (A2A) ---
+const tasks = program.command("tasks").description("Manage A2A tasks on the daemon");
+tasks
+.command("list")
+.description("List all tasks from the daemon")
+.option("--url <url>", "Daemon URL", "http://127.0.0.1:2053")
+.option("--token <token>", "Bearer token (also reads OPENFUSE_TOKEN env)")
+.option("--json", "Output raw JSON")
+.action(async (opts) => {
+const token = opts.token || process.env.OPENFUSE_TOKEN;
+const headers = { "Content-Type": "application/json" };
+if (token)
+headers["Authorization"] = `Bearer ${token}`;
+const res = await fetch(`${opts.url}/tasks`, { headers });
+if (!res.ok) {
+const body = await res.text();
+console.error(`Error ${res.status}: ${body}`);
+process.exit(1);
+}
+const data = (await res.json());
+if (opts.json) {
+console.log(JSON.stringify(data.tasks, null, 2));
+return;
+}
+if (data.tasks.length === 0) {
+console.log("No tasks.");
+return;
+}
+for (const t of data.tasks) {
+const created = t._openfuse?.createdAt?.slice(0, 19) || "";
+const msgs = t.history?.length || 0;
+const arts = t.artifacts?.length || 0;
+console.log(` ${t.id} [${t.status.state}] ${msgs} msg, ${arts} artifact ${created}`);
+}
+});
+tasks
+.command("get <id>")
+.description("Get a specific task by ID")
+.option("--url <url>", "Daemon URL", "http://127.0.0.1:2053")
+.option("--token <token>", "Bearer token (also reads OPENFUSE_TOKEN env)")
+.option("--json", "Output raw JSON")
+.action(async (id, opts) => {
+const token = opts.token || process.env.OPENFUSE_TOKEN;
+const headers = { "Content-Type": "application/json" };
+if (token)
+headers["Authorization"] = `Bearer ${token}`;
+const res = await fetch(`${opts.url}/tasks/${encodeURIComponent(id)}`, { headers });
+if (!res.ok) {
+const body = await res.text();
+console.error(`Error ${res.status}: ${body}`);
+process.exit(1);
+}
+const task = (await res.json());
+if (opts.json) {
+console.log(JSON.stringify(task, null, 2));
+return;
+}
+console.log(`Task: ${task.id}`);
+console.log(`State: ${task.status.state}`);
+if (task.contextId)
+console.log(`Context: ${task.contextId}`);
+if (task._openfuse) {
+console.log(`Created: ${task._openfuse.createdAt}`);
+console.log(`Updated: ${task._openfuse.updatedAt}`);
+}
+if (task.history?.length > 0) {
+console.log(`\nHistory (${task.history.length} messages):`);
+for (const msg of task.history) {
+const text = msg.parts?.map((p) => p.text).filter(Boolean).join(" ") || "(non-text)";
+console.log(` [${msg.role}] ${text.slice(0, 120)}`);
+}
+}
+if (task.artifacts?.length > 0) {
+console.log(`\nArtifacts (${task.artifacts.length}):`);
+for (const a of task.artifacts) {
+console.log(` ${a.artifactId}: ${a.name || "(unnamed)"}`);
 }
 }
 });
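Editor's note: the new `findNewestOutboxFile` helper in this diff picks `files[0]` after `.sort().reverse()`. That only yields the newest message because outbox filenames begin with a sortable timestamp (sync.js builds them as `${safeTs}_from-${safeFrom}_to-${myName}.json`). A standalone sketch of that ordering, with hypothetical filenames:

```javascript
// Lexicographic sort of timestamp-prefixed names is chronological order;
// reverse() puts the newest first. Filenames below are hypothetical.
function newestFirst(files) {
  return files
    .filter((f) => f.endsWith(".json"))
    .sort()
    .reverse();
}

const files = [
  "2025-01-02T090000_from-ana_to-bob.json",
  "2025-01-03T120000_from-ana_to-bob.json",
  "2025-01-01T000000_from-ana_to-bob.json",
  "notes.txt", // non-JSON entries are ignored
];
```

`newestFirst(files)[0]` is the 2025-01-03 message; the colon-free timestamps match what the `safeTs` allow-list permits.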
package/dist/sync.js
CHANGED
@@ -103,7 +103,8 @@ export async function deliverOne(store, peerName, filename) {
 if (transport.type === "http") {
 await checkSsrf(transport.baseUrl);
 const body = await readFile(filePath, "utf-8");
-const
+const inboxUrl = `${transport.baseUrl}/inbox/${encodeURIComponent(peerName)}`;
+const r = await fetch(inboxUrl, {
 method: "POST",
 headers: { "Content-Type": "application/json" },
 body,
@@ -243,7 +244,10 @@ async function syncHttp(store, peer, baseUrl, peerDir) {
 const safeFrom = from.replace(/[^a-zA-Z0-9\-_]/g, "");
 const safeTs = ts.replace(/[^a-zA-Z0-9\-_]/g, "");
 const fname = `${safeTs}_from-${safeFrom}_to-${myName}.json`;
-
+// Sanitize outboxFile — it comes from the remote peer's response and could
+// contain path traversal characters (e.g., "../../inbox/important.json").
+const rawOutboxFile = msg._outboxFile || "";
+const outboxFile = rawOutboxFile.replace(/[^a-zA-Z0-9_\-. ]/g, "");
 const dest = join(inboxDir, fname);
 if (!existsSync(dest)) {
 // Strip the _outboxFile metadata before saving
@@ -285,7 +289,8 @@ async function syncHttp(store, peer, baseUrl, peerDir) {
 const relPath = `${entry.name}/${fname}`;
 try {
 const body = await readFile(join(subDir, fname), "utf-8");
-const
+const inboxUrl = `${baseUrl}/inbox/${encodeURIComponent(peer.name)}`;
+const r = await fetch(inboxUrl, {
 method: "POST",
 headers: { "Content-Type": "application/json" },
 body,
@@ -308,7 +313,8 @@ async function syncHttp(store, peer, baseUrl, peerDir) {
 continue;
 try {
 const body = await readFile(join(outboxDir, entry.name), "utf-8");
-const
+const inboxUrl = `${baseUrl}/inbox/${encodeURIComponent(peer.name)}`;
+const r = await fetch(inboxUrl, {
 method: "POST",
 headers: { "Content-Type": "application/json" },
 body,
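Editor's note: the `_outboxFile` allow-list added to sync.js is the exact regex from the diff; because it strips `/` and `\` but keeps dots, a traversal attempt collapses into a harmless flat name:

```javascript
// The character allow-list sync.js applies to the remote-supplied
// _outboxFile field before using it as a path component (regex from the diff).
function sanitizeOutboxFile(raw) {
  return (raw || "").replace(/[^a-zA-Z0-9_\-. ]/g, "");
}
```

For example, `"../../inbox/important.json"` becomes `"....inboximportant.json"`: the slashes are gone, so it can no longer escape the outbox directory.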
package/dist/wasm-core.js
CHANGED
@@ -4,7 +4,7 @@
 // Networking (sync, registry, watch) stays in Node.js.
 import { WASI } from "node:wasi";
 import { readFileSync } from "node:fs";
-import { readFile,
+import { readFile, mkdtemp, rm } from "node:fs/promises";
 import { join, dirname } from "node:path";
 import { fileURLToPath } from "node:url";
 import { tmpdir } from "node:os";
@@ -23,11 +23,13 @@ function getModule() {
 }
 async function callWasm(storeRoot, args) {
 // Create a temp file to capture stdout (node:wasi doesn't support piping stdout directly)
+// Restrictive permissions: 0o700 dir + 0o600 file — prevents other users from reading
+// WASM output which may contain decrypted messages, keys, or config data.
 const tmpDir = await mkdtemp(join(tmpdir(), "openfuse-wasi-"));
+const { chmodSync, openSync, closeSync } = await import("node:fs");
+chmodSync(tmpDir, 0o700);
 const stdoutPath = join(tmpDir, "stdout");
-
-const { openSync, closeSync } = await import("node:fs");
-const fd = openSync(stdoutPath, "w");
+const fd = openSync(stdoutPath, "w", 0o600);
 try {
 const wasi = new WASI({
 version: "preview1",
@@ -70,7 +72,7 @@ async function callWasm(storeRoot, args) {
 async function callWasmJson(storeRoot, args) {
 const { stdout, exitCode } = await callWasm(storeRoot, args);
 if (!stdout) {
-throw new Error(`WASM returned empty output for: ${args
+throw new Error(`WASM returned empty output for command: ${args[0] ?? "unknown"}`);
 }
 const parsed = JSON.parse(stdout);
 if (exitCode !== 0 && parsed.error) {
package/package.json
CHANGED
@@ -1,6 +1,7 @@
 {
 "name": "openfused",
-"version": "0.4.0",
+"version": "0.4.2",
+"mcpName": "io.github.openfused/openfuse-mcp",
 "description": "The file protocol for AI agent context. Encrypted, signed, peer-to-peer.",
 "license": "MIT",
 "type": "module",
@@ -36,7 +37,7 @@
 },
 "devDependencies": {
 "@types/node": "^25.5.0",
-"typescript": "^
+"typescript": "^6.0.2"
 },
 "engines": {
 "node": ">=20"
package/wasm/.gitkeep
ADDED
File without changes
package/wasm/openfuse-core.wasm
CHANGED
Binary file