openfused 0.3.4 → 0.3.6

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -10,25 +10,32 @@ No vendor lock-in. No proprietary protocol. Just a directory convention that any

  ## Install

+ Review the source at [github.com/wearethecompute/openfused](https://github.com/wearethecompute/openfused) before installing.
+
  ```bash
- # TypeScript (npm)
+ # TypeScript (npm) — package: openfused
  npm install -g openfused

- # Rust (from source)
- cd rust && cargo install --path .
+ # Rust (crates.io) — package: openfuse
+ cargo install openfuse

  # Docker (daemon)
  docker compose up
  ```

+ **Security:** Only public keys (signing + age recipient) are ever transmitted to peers or the registry. Private keys never leave `.keys/`. All key files are created with `chmod 600`.
+
  ## Quick Start

  ```bash
+ # Agent context store
  openfuse init --name "my-agent"
- ```

- This creates a context store:
+ # Shared workspace (multi-agent collaboration)
+ openfuse init --name "project-alpha" --workspace
+ ```

+ ### Agent store:
  ```
  CONTEXT.md — working memory (what's happening now)
  PROFILE.md — public address card (name, endpoint, keys)
@@ -36,19 +43,34 @@ inbox/ — messages from other agents (encrypted)
  outbox/ — sent message copies (moved to .sent/ after delivery)
  shared/ — files shared with the mesh (plaintext)
  knowledge/ — persistent knowledge base
- history/ — conversation & decision logs
+ history/ — archived [DONE] context (via openfuse compact)
  .keys/ — ed25519 signing + age encryption keypairs
  .mesh.json — mesh config, peers, keyring
  .peers/ — synced peer context (auto-populated)
  ```

+ ### Shared workspace:
+ ```
+ CHARTER.md — workspace purpose, rules, member list
+ CONTEXT.md — shared working memory (all agents read/write)
+ tasks/ — task coordination
+ messages/ — agent-to-agent DMs (messages/{recipient}/)
+ _broadcast/ — all-hands announcements
+ shared/ — shared files
+ history/ — archived [DONE] context
+ ```
+
  ## Usage

  ```bash
- # Read/update context
+ # Read/update context (auto-timestamps appended entries)
  openfuse context
  openfuse context --append "## Update\nFinished the research phase."

+ # Mark work as done, then compact to history/
+ # (edit CONTEXT.md, add [DONE] to the header, then:)
+ openfuse compact
+
  # Send a message (auto-encrypted if peer's age key is on file)
  openfuse inbox send agent-bob "Check out shared/findings.md"

@@ -124,7 +146,7 @@ The `age` format is interoperable — Rust CLI and TypeScript SDK use the same k

  ## Registry — DNS for Agents

- Public registry at `openfuse-registry.wzmcghee.workers.dev`. Any agent can register, discover others, and send messages.
+ Public registry at `registry.openfused.dev`. Any agent can register, discover others, and send messages.

  ```bash
  # Register your agent
@@ -146,7 +168,7 @@ openfuse send wearethecompute "hello from the mesh"

  ## Sync

- Pull peer context and push outbox messages. Two transports:
+ Pull peer context, pull their outbox for your mail, push your outbox. Two transports:

  ```bash
  # LAN — rsync over SSH (uses your ~/.ssh/config for host aliases)
@@ -155,12 +177,34 @@ openfuse peer add ssh://alice.local:/home/agent/context --name wisp
  # WAN — HTTP against the OpenFused daemon
  openfuse peer add http://agent.example.com:9781 --name wisp

- # Sync
+ # Sync all peers
  openfuse sync
+
+ # Watch mode — sync every 60s + local file watcher
+ openfuse watch
+
+ # Watch + reverse SSH tunnel (NAT traversal)
+ openfuse watch --tunnel alice.local
  ```

- Sync pulls: `CONTEXT.md`, `shared/`, `knowledge/` into `.peers/<name>/`.
- Sync pushes: outbox messages to the peer's inbox. Delivered messages move to `outbox/.sent/`.
+ Sync does three things:
+ 1. **Pulls** peer's CONTEXT.md, PROFILE.md, shared/, knowledge/ into `.peers/<name>/`
+ 2. **Pulls** peer's outbox for messages addressed to you (`*_to-{your-name}.json`)
+ 3. **Pushes** your outbox to peer's inbox, archives delivered messages to `outbox/.sent/`
+
+ ### Message envelope format
+
+ Filenames encode routing metadata so agents know what's for them:
+
+ ```
+ {timestamp}_from-{sender}_to-{recipient}.json
+ ```
+
+ Examples:
+ - `2026-03-21T07-59-44Z_from-claude-code_to-wisp.json` — DM, encrypted for wisp
+ - `2026-03-21T08-00-00Z_from-wisp_to-all.json` — broadcast, signed but not encrypted
+
+ Agents only process files matching `_to-{their-name}` or `_to-all`.

  SSH transport uses hostnames from `~/.ssh/config` — not raw IPs.

@@ -179,7 +223,7 @@ Any MCP client (Claude Desktop, Claude Code, Cursor) can use OpenFused as a tool
  }
  ```

- 13 tools: `context_read/write/append`, `soul_read/write`, `inbox_list/send`, `shared_list/read/write`, `status`, `peer_list/add`.
+ 13 tools: `context_read/write/append`, `profile_read/write`, `inbox_list/send`, `shared_list/read/write`, `status`, `peer_list/add`.

  ## Docker

@@ -191,12 +235,29 @@ docker compose up
  TUNNEL_TOKEN=your-token docker compose --profile tunnel up
  ```

- The daemon serves your context store over HTTP and accepts inbox messages via POST.
+ The daemon has two modes:
+
+ ```bash
+ # Full mode — serves everything to trusted LAN peers
+ openfused serve --store ./my-context --port 9781
+
+ # Public mode — only PROFILE.md + inbox (for WAN/tunnels)
+ openfused serve --store ./my-context --port 9781 --public
+ ```
+
+ ## File Watching
+
+ `openfuse watch` combines three things:
+
+ 1. **Local inbox watcher** — chokidar (inotify on Linux) for instant notification when messages arrive
+ 2. **CONTEXT.md watcher** — detects local changes
+ 3. **Periodic peer sync** — pulls from all peers every 60s (configurable)

  ```bash
- # Or build manually
- cd daemon && cargo build --release
- ./target/release/openfused serve --store ./my-context --port 9781
+ openfuse watch -d ./store # sync every 60s
+ openfuse watch -d ./store --sync-interval 30 # sync every 30s
+ openfuse watch -d ./store --sync-interval 0 # local watch only
+ openfuse watch -d ./store --tunnel alice.local # + reverse SSH tunnel
  ```

  ## Reachability
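The envelope filenames described in the README changes above can be parsed with a one-line regex. This is an illustrative sketch, not code from the package — the `parseEnvelope` helper and `Envelope` type are invented here; only the `{timestamp}_from-{sender}_to-{recipient}.json` pattern comes from the README:

```typescript
// Hypothetical helper (not part of openfused) that splits an envelope
// filename into its routing fields per the documented pattern.
interface Envelope {
  timestamp: string;
  from: string;
  to: string; // a peer name, or "all" for broadcasts
}

function parseEnvelope(filename: string): Envelope | null {
  const m = filename.match(/^(.+)_from-(.+)_to-(.+)\.json$/);
  if (!m) return null; // not an envelope — e.g. a stray .md file
  return { timestamp: m[1], from: m[2], to: m[3] };
}

const env = parseEnvelope("2026-03-21T07-59-44Z_from-claude-code_to-wisp.json");
// env = { timestamp: "2026-03-21T07-59-44Z", from: "claude-code", to: "wisp" }
```

An agent named `wisp` would then keep only files where `to` is `"wisp"` or `"all"`, mirroring the filtering rule stated above.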
package/dist/cli.js CHANGED
@@ -3,12 +3,12 @@ import { Command } from "commander";
  import { nanoid } from "nanoid";
  import { ContextStore } from "./store.js";
  import { watchInbox, watchContext, watchSync } from "./watch.js";
- import { syncAll, syncOne } from "./sync.js";
+ import { syncAll, syncOne, deliverOne } from "./sync.js";
  import * as registry from "./registry.js";
  import { fingerprint } from "./crypto.js";
  import { resolve } from "node:path";
  import { readFile } from "node:fs/promises";
- const VERSION = "0.3.4";
+ const VERSION = "0.3.6";
  const program = new Command();
  program
  .name("openfuse")
@@ -17,9 +17,10 @@ program
  // --- init ---
  program
  .command("init")
- .description("Initialize a new context store")
+ .description("Initialize a new context store or shared workspace")
  .option("-n, --name <name>", "Agent name", "agent")
  .option("-d, --dir <path>", "Directory to init", ".")
+ .option("--workspace", "Initialize as a shared workspace (CHARTER.md + tasks/ + messages/ + _broadcast/)")
  .action(async (opts) => {
  const store = new ContextStore(resolve(opts.dir));
  if (await store.exists()) {
@@ -27,14 +28,27 @@ program
  process.exit(1);
  }
  const id = nanoid(12);
- await store.init(opts.name, id);
- const config = await store.readConfig();
- console.log(`Initialized context store: ${store.root}`);
- console.log(` Agent ID: ${id}`);
- console.log(` Name: ${opts.name}`);
- console.log(` Signing key: ${config.publicKey}`);
- console.log(` Encryption key: ${config.encryptionKey}`);
- console.log(` Fingerprint: ${fingerprint(config.publicKey)}`);
+ if (opts.workspace) {
+ await store.initWorkspace(opts.name, id);
+ console.log(`Initialized shared workspace: ${store.root}`);
+ console.log(` Workspace: ${opts.name} (${id})`);
+ console.log(`\nStructure:`);
+ console.log(` CHARTER.md — workspace rules and purpose`);
+ console.log(` CONTEXT.md — shared working memory`);
+ console.log(` tasks/ — task coordination`);
+ console.log(` messages/ — agent-to-agent DMs`);
+ console.log(` _broadcast/ — all-hands messages`);
+ }
+ else {
+ await store.init(opts.name, id);
+ const config = await store.readConfig();
+ console.log(`Initialized context store: ${store.root}`);
+ console.log(` Agent ID: ${id}`);
+ console.log(` Name: ${opts.name}`);
+ console.log(` Signing key: ${config.publicKey}`);
+ console.log(` Encryption key: ${config.encryptionKey}`);
+ console.log(` Fingerprint: ${fingerprint(config.publicKey)}`);
+ }
  });
  // --- status ---
  program
@@ -73,7 +87,8 @@ program
  else if (opts.append) {
  const existing = await store.readContext();
  const text = opts.append.replace(/\\n/g, "\n");
- await store.writeContext(existing + "\n" + text);
+ const timestamp = `<!-- openfuse:added: ${new Date().toISOString()} -->`;
+ await store.writeContext(existing + "\n" + timestamp + "\n" + text);
  console.log("Context appended.");
  }
  else {
@@ -117,14 +132,55 @@ inbox
  console.log(opts.raw ? msg.content : msg.wrappedContent);
  }
  });
+ inbox
+ .command("archive [file]")
+ .description("Archive inbox message(s) to inbox/.read/ — specific file or --all")
+ .option("-d, --dir <path>", "Context store directory", ".")
+ .option("--all", "Archive all inbox messages")
+ .action(async (file, opts) => {
+ const store = new ContextStore(resolve(opts.dir));
+ const { readdir: rd, mkdir, rename } = await import("node:fs/promises");
+ const { join, basename } = await import("node:path");
+ const inboxDir = join(store.root, "inbox");
+ const readDir = join(inboxDir, ".read");
+ await mkdir(readDir, { recursive: true });
+ if (opts.all) {
+ const files = (await rd(inboxDir)).filter(f => f.endsWith(".json") || f.endsWith(".md"));
+ for (const f of files)
+ await rename(join(inboxDir, f), join(readDir, f));
+ console.log(`Archived ${files.length} messages.`);
+ }
+ else if (file) {
+ const safe = basename(file);
+ try {
+ await rename(join(inboxDir, safe), join(readDir, safe));
+ console.log(`Archived: ${safe}`);
+ }
+ catch {
+ console.error(`Not found in inbox: ${safe}`);
+ process.exit(1);
+ }
+ }
+ else {
+ console.error("Specify a filename or use --all");
+ process.exit(1);
+ }
+ });
  inbox
  .command("send <peerId> <message>")
  .description("Send a message to a peer's inbox")
  .option("-d, --dir <path>", "Context store directory", ".")
  .action(async (peerId, message, opts) => {
  const store = new ContextStore(resolve(opts.dir));
- await store.sendInbox(peerId, message);
- console.log(`Message sent to ${peerId}'s outbox.`);
+ const filename = await store.sendInbox(peerId, message);
+ // Try immediate delivery — if peer is reachable, deliver now
+ const delivered = await deliverOne(store, peerId, filename);
+ if (delivered) {
+ console.log(`Delivered to ${peerId}.`);
+ }
+ else {
+ console.log(`Queued for ${peerId}. Will deliver on next sync.`);
+ }
  });
  // --- watch ---
  program
@@ -132,8 +188,9 @@ program
  .description("Watch for inbox messages, context changes, and sync with peers")
  .option("-d, --dir <path>", "Context store directory", ".")
  .option("--sync-interval <seconds>", "Peer sync interval in seconds (0 to disable)", "60")
- .option("--tunnel <host>", "Open reverse SSH tunnel to host (makes your store reachable from behind NAT)")
- .option("--tunnel-port <port>", "Remote port for reverse tunnel", "2222")
+ .option("--tunnel <host>", "Reverse SSH tunnel to host for NAT traversal (uses autossh if available)")
+ .option("--tunnel-port <port>", "Remote port for reverse SSH tunnel", "2222")
+ .option("--cloudflared", "Start a cloudflared quick tunnel (no config needed, gives you a public URL)")
  .action(async (opts) => {
  const store = new ContextStore(resolve(opts.dir));
  if (!(await store.exists())) {
@@ -175,6 +232,24 @@ program
  console.log(`Tunnel: ${cmd} -R ${tunnelPort}:localhost:9781 ${tunnelHost}`);
  console.log(`Your store is reachable at ssh://${tunnelHost}:${tunnelPort} (via daemon on :9781)`);
  }
+ // Cloudflared quick tunnel (optional) — gives you a public *.trycloudflare.com URL
+ if (opts.cloudflared) {
+ const { spawn } = await import("node:child_process");
+ const cf = spawn("cloudflared", ["tunnel", "--url", "http://localhost:9781"], {
+ stdio: ["ignore", "pipe", "pipe"],
+ });
+ cf.on("error", (e) => console.error(`[cloudflared] failed: ${e.message}. Install: https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/downloads/`));
+ cf.stderr.on("data", (data) => {
+ const line = data.toString();
+ const match = line.match(/https:\/\/[^\s]+\.trycloudflare\.com/);
+ if (match) {
+ console.log(`[cloudflared] Your public URL: ${match[0]}`);
+ console.log(` Register it: openfuse register --endpoint ${match[0]}`);
+ }
+ });
+ process.on("exit", () => cf.kill());
+ console.log("Starting cloudflared tunnel...");
+ }
  console.log(`Press Ctrl+C to stop.\n`);
  watchInbox(store.root, (from, message) => {
  console.log(`\n[inbox] New message from ${from}:`);
@@ -198,6 +273,21 @@ program
  }
  await new Promise(() => { });
  });
+ // --- compact ---
+ program
+ .command("compact")
+ .description("Move [DONE] sections from CONTEXT.md to history/")
+ .option("-d, --dir <path>", "Context store directory", ".")
+ .action(async (opts) => {
+ const store = new ContextStore(resolve(opts.dir));
+ const { moved, kept } = await store.compactContext();
+ if (moved === 0) {
+ console.log("Nothing to compact. Mark sections with [DONE] to archive them.");
+ }
+ else {
+ console.log(`Compacted: ${moved} done, ${kept} kept.`);
+ }
+ });
  // --- share ---
  program
  .command("share <file>")
package/dist/mcp.js CHANGED
@@ -23,7 +23,7 @@ const storeDir = process.env.OPENFUSE_DIR || process.argv[3] || ".";
  const store = new ContextStore(resolve(storeDir));
  const server = new McpServer({
  name: "openfuse",
- version: "0.3.4",
+ version: "0.3.6",
  });
  // --- Context ---
  server.tool("context_read", "Read the agent's CONTEXT.md (working memory)", async () => {
package/dist/registry.d.ts CHANGED
@@ -1,5 +1,5 @@
  import { ContextStore } from "./store.js";
- export declare const DEFAULT_REGISTRY = "https://openfuse-registry.wzmcghee.workers.dev";
+ export declare const DEFAULT_REGISTRY = "https://registry.openfused.dev";
  export interface Manifest {
  name: string;
  endpoint: string;
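The `Manifest` interface above is shown truncated by the diff context; the full field set can be read off the object that `discover()` returns in `registry.js` below. Here is an illustrative manifest literal under that assumption — every value is a placeholder, and the interface body past `endpoint` is inferred, not quoted from the package:

```typescript
// Inferred shape (assumption): matches the object returned by the
// discover() path in package/dist/registry.js.
interface Manifest {
  name: string;
  endpoint: string;
  publicKey: string;
  encryptionKey?: string;
  fingerprint: string;
  created: string;
  capabilities: string[];
}

// Placeholder values for illustration only.
const example: Manifest = {
  name: "wisp",
  endpoint: "https://agent.example.com:9781",
  publicKey: "<base64 ed25519 public key>",
  encryptionKey: "<age recipient>",
  fingerprint: "<key fingerprint>",
  created: "",
  capabilities: ["inbox", "shared", "knowledge"],
};

console.log(example.name, example.capabilities.join(","));
```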
package/dist/registry.js CHANGED
@@ -7,7 +7,7 @@
  // This is TOFU (Trust On First Use) done right: the registry distributes keys,
  // but never asserts trust. Trust is a local decision.
  import { signMessage, fingerprint } from "./crypto.js";
- export const DEFAULT_REGISTRY = "https://openfuse-registry.wzmcghee.workers.dev";
+ export const DEFAULT_REGISTRY = "https://registry.openfused.dev";
  export function resolveRegistry(flag) {
  return flag || process.env.OPENFUSE_REGISTRY || DEFAULT_REGISTRY;
  }
@@ -41,7 +41,22 @@ export async function register(store, endpoint, registry) {
  }
  return manifest;
  }
+ // Discovery: try DNS TXT first (decentralized, no registry needed), fall back to Worker API.
+ // DNS format: v=of1 e={endpoint} pk={pubkey} ek={agekey} fp={fingerprint}
+ // Self-hosted: _openfuse.{name}.{their-domain} — user manages their own TXT records.
+ // Our zone: _openfuse.{name}.openfused.dev — managed by the registry Worker on registration.
  export async function discover(name, registry) {
+ // If name contains a dot, it's a domain — try DNS TXT directly
+ // Otherwise try DNS at openfused.dev, then fall back to registry API
+ const dnsNames = name.includes(".")
+ ? [`_openfuse.${name}`]
+ : [`_openfuse.${name}.openfused.dev`];
+ for (const dnsName of dnsNames) {
+ const manifest = await discoverViaDns(dnsName, name);
+ if (manifest)
+ return manifest;
+ }
+ // Fall back to registry API
  const resp = await fetch(`${registry.replace(/\/$/, "")}/discover/${name}`);
  if (!resp.ok) {
  const body = await resp.json().catch(() => ({ error: `HTTP ${resp.status}` }));
@@ -49,6 +64,43 @@ export async function discover(name, registry) {
  }
  return (await resp.json());
  }
+ async function discoverViaDns(dnsName, agentName) {
+ try {
+ // Use DNS-over-HTTPS (Cloudflare 1.1.1.1) to resolve TXT records
+ const resp = await fetch(`https://1.1.1.1/dns-query?name=${encodeURIComponent(dnsName)}&type=TXT`, {
+ headers: { "Accept": "application/dns-json" },
+ });
+ if (!resp.ok)
+ return null;
+ const data = await resp.json();
+ if (!data.Answer || data.Answer.length === 0)
+ return null;
+ // Parse v=of1 format from TXT record
+ const txt = data.Answer[0].data.replace(/"/g, "");
+ if (!txt.startsWith("v=of1"))
+ return null;
+ const fields = {};
+ for (const part of txt.split(" ")) {
+ const [k, v] = part.split("=", 2);
+ if (k && v)
+ fields[k] = v;
+ }
+ if (!fields.e || !fields.pk)
+ return null;
+ return {
+ name: agentName,
+ endpoint: fields.e,
+ publicKey: fields.pk,
+ encryptionKey: fields.ek || undefined,
+ fingerprint: fields.fp || "",
+ created: "",
+ capabilities: ["inbox", "shared", "knowledge"],
+ };
+ }
+ catch {
+ return null;
+ }
+ }
  // Revocation is permanent and self-authenticated: the agent signs its own revocation
  // with the key being revoked. No admin needed — if you have the private key, you can kill it.
  export async function revoke(store, registry) {
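The `v=of1` TXT format documented in the comments above is simple enough to restate as a standalone sketch. This is not package code — the `parseOf1` helper and the record value are invented for illustration; one deliberate difference is using `indexOf` to split key/value pairs, which keeps any `=` padding inside base64 values intact (JavaScript's `split("=", 2)` truncates the remainder):

```typescript
// Hypothetical parser for the documented TXT format:
// v=of1 e={endpoint} pk={pubkey} ek={agekey} fp={fingerprint}
function parseOf1(txt: string): Record<string, string> | null {
  if (!txt.startsWith("v=of1")) return null; // not an OpenFused record
  const fields: Record<string, string> = {};
  for (const part of txt.split(" ")) {
    const i = part.indexOf("=");
    // Split on the FIRST "=" only, so base64 padding in values survives.
    if (i > 0) fields[part.slice(0, i)] = part.slice(i + 1);
  }
  return fields;
}

// Record value below is a made-up example, not a real agent.
const rec = parseOf1("v=of1 e=https://agent.example.com:9781 pk=AAAA= fp=ab12cd");
// rec.e = "https://agent.example.com:9781", rec.pk = "AAAA="
```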
package/dist/store.d.ts CHANGED
@@ -22,13 +22,18 @@ export declare class ContextStore {
  get configPath(): string;
  exists(): Promise<boolean>;
  init(name: string, id: string): Promise<void>;
+ initWorkspace(name: string, id: string): Promise<void>;
  readConfig(): Promise<MeshConfig>;
  writeConfig(config: MeshConfig): Promise<void>;
  readContext(): Promise<string>;
  writeContext(content: string): Promise<void>;
+ compactContext(): Promise<{
+ moved: number;
+ kept: number;
+ }>;
  readProfile(): Promise<string>;
  writeProfile(content: string): Promise<void>;
- sendInbox(peerId: string, message: string): Promise<void>;
+ sendInbox(peerId: string, message: string): Promise<string>;
  readInbox(): Promise<Array<{
  file: string;
  content: string;
package/dist/store.js CHANGED
@@ -11,7 +11,7 @@
  // .keys/ — Ed25519 + age keypairs (gitignored)
  // .mesh.json — config, peer list, keyring
  // No database, no daemon required. `ls` is your status command.
- import { readFile, writeFile, mkdir, readdir } from "node:fs/promises";
+ import { readFile, writeFile, mkdir, readdir, appendFile } from "node:fs/promises";
  import { join, resolve } from "node:path";
  import { existsSync } from "node:fs";
  import { generateKeys, signMessage, signAndEncrypt, verifyMessage, decryptMessage, deserializeSignedMessage, serializeSignedMessage, wrapExternalMessage, fingerprint, } from "./crypto.js";
@@ -54,6 +54,32 @@ export class ContextStore {
  };
  await this.writeConfig(config);
  }
+ // Shared workspace: multiple agents mount the same directory.
+ // CHARTER.md = system prompt (purpose, rules). CONTEXT.md = shared working memory.
+ // tasks/ for coordination, messages/{agent}/ for DMs, _broadcast/ for all-hands.
+ async initWorkspace(name, id) {
+ await mkdir(this.root, { recursive: true });
+ for (const dir of ["tasks", "messages", "_broadcast", "shared", "history"]) {
+ await mkdir(join(this.root, dir), { recursive: true });
+ }
+ const templatesDir = new URL("../templates/", import.meta.url).pathname;
+ for (const file of ["CHARTER.md", "CONTEXT.md"]) {
+ const templatePath = join(templatesDir, file);
+ const destPath = join(this.root, file);
+ if (!existsSync(destPath)) {
+ const content = await readFile(templatePath, "utf-8");
+ await writeFile(destPath, content);
+ }
+ }
+ const config = {
+ id,
+ name,
+ created: new Date().toISOString(),
+ peers: [],
+ keyring: [],
+ };
+ await this.writeConfig(config);
+ }
  async readConfig() {
  const raw = await readFile(this.configPath, "utf-8");
  const config = JSON.parse(raw);
@@ -91,6 +117,45 @@ export class ContextStore {
  async writeContext(content) {
  await writeFile(join(this.root, "CONTEXT.md"), content);
  }
+ // --- Context compaction ---
+ // Agents mark sections as [DONE] when work is complete. `openfuse compact`
+ // moves done sections to history/YYYY-MM-DD.md, keeping CONTEXT.md lean.
+ // Sections are delimited by markdown headers (## or ###).
+ async compactContext() {
+ const content = await this.readContext();
+ const lines = content.split("\n");
+ const kept = [];
+ const done = [];
+ let current = [];
+ let currentDone = false;
+ const flush = () => {
+ if (current.length > 0) {
+ (currentDone ? done : kept).push(current.join("\n"));
+ current = [];
+ currentDone = false;
+ }
+ };
+ for (const line of lines) {
+ if (/^#{1,3}\s/.test(line)) {
+ flush();
+ currentDone = /\[DONE\]/i.test(line);
+ }
+ current.push(line);
+ }
+ flush();
+ if (done.length === 0)
+ return { moved: 0, kept: kept.length };
+ // Write kept sections back to CONTEXT.md
+ await this.writeContext(kept.join("\n\n") || "# Context\n\n*Working memory — what's happening right now.*\n");
+ // Append done sections to history/YYYY-MM-DD.md
+ const historyDir = join(this.root, "history");
+ await mkdir(historyDir, { recursive: true });
+ const dateStr = new Date().toISOString().split("T")[0];
+ const historyFile = join(historyDir, `${dateStr}.md`);
+ const header = existsSync(historyFile) ? "\n---\n\n" : `# Context History — ${dateStr}\n\n`;
+ await appendFile(historyFile, header + done.join("\n\n") + "\n");
+ return { moved: done.length, kept: kept.length };
+ }
  async readProfile() {
  return readFile(join(this.root, "PROFILE.md"), "utf-8");
  }
@@ -114,6 +179,7 @@ export class ContextStore {
  const timestamp = new Date().toISOString().replace(/[:.]/g, "-");
  const filename = `${timestamp}_from-${config.name}_to-${peerId}.json`;
  await writeFile(join(this.root, "outbox", filename), serializeSignedMessage(signed));
+ return filename;
  }
  async readInbox() {
  const inboxDir = join(this.root, "inbox");
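As a concrete illustration of the `compactContext` flow above (section names are invented for the example), a CONTEXT.md might look like this before running `openfuse compact`:

```
## Research phase [DONE]
Sources collected; summaries written to knowledge/sources.md.

## Drafting
First draft of findings in progress.
```

After compaction, the `[DONE]` section is appended to `history/YYYY-MM-DD.md` for the current date and CONTEXT.md retains only the `## Drafting` section, matching the `{ moved: 1, kept: 1 }` result the method returns.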
package/dist/sync.d.ts CHANGED
@@ -5,5 +5,7 @@ export interface SyncResult {
  pushed: string[];
  errors: string[];
  }
+ /** Try to deliver a single outbox message immediately. Returns true if delivered. */
+ export declare function deliverOne(store: ContextStore, peerName: string, filename: string): Promise<boolean>;
  export declare function syncAll(store: ContextStore): Promise<SyncResult[]>;
  export declare function syncOne(store: ContextStore, peerName: string): Promise<SyncResult>;
package/dist/sync.js CHANGED
@@ -39,6 +39,42 @@ function parseUrl(url) {
  }
  throw new Error(`Unknown URL scheme: ${url}. Use http:// or ssh://`);
  }
+ /** Try to deliver a single outbox message immediately. Returns true if delivered. */
+ export async function deliverOne(store, peerName, filename) {
+ const config = await store.readConfig();
+ const peer = config.peers.find((p) => p.name === peerName || p.id === peerName);
+ if (!peer)
+ return false;
+ const outboxDir = join(store.root, "outbox");
+ const filePath = join(outboxDir, filename);
+ if (!existsSync(filePath))
+ return false;
+ try {
+ const transport = parseUrl(peer.url);
+ if (transport.type === "http") {
+ const body = await readFile(filePath, "utf-8");
+ const r = await fetch(`${transport.baseUrl}/inbox`, {
+ method: "POST",
+ headers: { "Content-Type": "application/json" },
+ body,
+ });
+ if (!r.ok)
+ return false;
+ }
+ else {
+ await execFile("rsync", [
+ "-az", filePath,
+ `${transport.host}:${transport.path}/inbox/${filename}`,
+ ]);
+ }
+ // Delivered — archive to .sent/
+ await archiveSent(outboxDir, filename);
+ return true;
+ }
+ catch {
+ return false; // stays in outbox for next sync
+ }
+ }
  export async function syncAll(store) {
  const config = await store.readConfig();
  const results = [];
@@ -120,7 +156,7 @@ async function syncHttp(store, peer, baseUrl, peerDir) {
  for (const fname of await readdir(outboxDir)) {
  if (!fname.endsWith(".json"))
  continue;
- if (!fname.includes(`_to-${peer.name}`) && !fname.includes(peer.id))
+ if (!fname.includes(`_to-${peer.name}.json`) && !fname.includes(peer.id))
  continue;
  try {
  const body = await readFile(join(outboxDir, fname), "utf-8");
@@ -180,6 +216,7 @@ async function syncSsh(store, peer, host, remotePath, peerDir) {
  "-az", "--ignore-existing",
  "--include", `*_to-${myName}.json`,
  "--include", `*_to-all.json`,
+ "--include", `*_${myName}.json`, // legacy format (pre-envelope)
  "--exclude", "*",
  `${host}:${remotePath}/outbox/`,
  `${inboxDir}/`,
@@ -197,7 +234,7 @@ async function syncSsh(store, peer, host, remotePath, peerDir) {
  for (const fname of await readdir(outboxDir)) {
  if (!fname.endsWith(".json"))
  continue;
- if (!fname.includes(`_to-${peer.name}`) && !fname.includes(peer.id))
+ if (!fname.includes(`_to-${peer.name}.json`) && !fname.includes(peer.id))
  continue;
  try {
  await execFile("rsync", ["-az", join(outboxDir, fname), `${host}:${remotePath}/inbox/${fname}`]);
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "openfused",
- "version": "0.3.4",
+ "version": "0.3.6",
  "description": "Decentralized context mesh for AI agents. Encrypted sync, signed messaging, MCP server. The protocol is files.",
  "license": "MIT",
  "type": "module",
package/templates/CHARTER.md ADDED
@@ -0,0 +1,18 @@
+ # Charter
+
+ ## Purpose
+ _What is this workspace for? What are we building?_
+
+ ## Members
+ _Who participates in this workspace?_
+
+ ## Rules
+ - Agents mark completed work with `[DONE]` in CONTEXT.md
+ - Messages to all members go in `_broadcast/`
+ - Direct messages go in `messages/{recipient}/`
+ - Tasks are tracked in `tasks/`
+
+ ## Conventions
+ - Sign all messages
+ - Encrypt DMs, broadcast in plaintext
+ - Check CONTEXT.md before starting new work