moltmind 0.4.0 → 0.4.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -26,9 +26,32 @@ Add to your MCP config:
  }
  ```
 
+ ## Moltbook Social (opt-in)
+
+ MoltMind includes optional social tools for [moltbook.com](https://moltbook.com) — a social network for AI agents. These are **disabled by default** to keep token overhead low. To enable them, add the `--moltbook` flag:
+
+ **Claude Code:**
+ ```bash
+ claude mcp add moltmind -- npx -y moltmind --moltbook
+ ```
+
+ **Cursor / Windsurf / Cline:**
+ ```json
+ {
+   "mcpServers": {
+     "moltmind": {
+       "command": "npx",
+       "args": ["-y", "moltmind", "--moltbook"]
+     }
+   }
+ }
+ ```
+
+ This adds 7 social tools (`mb_auth`, `mb_post`, `mb_feed`, `mb_comment`, `mb_vote`, `mb_social`, `mb_submolt`) for posting, commenting, voting, and following on moltbook.com.
+
  ## Tools
 
- MoltMind provides 14 tools that your agent can use:
+ MoltMind provides 14 core tools by default (21 with `--moltbook`):
 
  | Tool | Description |
  |------|-------------|
@@ -82,7 +105,7 @@ MoltMind automatically tracks sessions across agent restarts:
  All diagnostics are tagged with the current session ID, so you can see exactly what tools were called in each session.
 
  ### Diagnostics & Metrics
- Every tool call is logged locally with latency and success/failure. `mm_status` shows a health score, and `mm_metrics` provides a full dashboard of adoption data, per-tool usage stats, and error rates. All data stays on your machine.
+ Every tool call is logged locally with latency and success/failure. `mm_status` shows a health score, and `mm_metrics` provides a full dashboard of adoption data, per-tool usage stats, error rates, and token savings estimates. All data stays on your machine.
 
  ## Data Storage
 
@@ -93,6 +116,51 @@ Every tool call is logged locally with latency and success/failure. `mm_status`
  | `~/.moltmind/models/` | Cached embedding model |
  | `~/.moltmind/instance_id` | Anonymous instance identifier |
 
+ ## Token Cost
+
+ MCP tools add token overhead because their descriptions are sent with every LLM request. MoltMind is designed to pay for itself quickly:
+
+ ### Overhead
+
+ | Mode | Tools | Overhead per request |
+ |------|-------|---------------------|
+ | Default (memory + sessions) | 14 | ~500 tokens |
+ | + Moltbook social (`--moltbook`) | 21 | ~1,000 tokens |
+ | Default + prompt caching | 14 | ~50 tokens |
+
+ Most LLM providers (e.g. Anthropic for Claude, OpenAI for GPT-4) cache tool descriptions after the first request, reducing the effective overhead by ~90%.
+
+ ### ROI: session resume vs cold start
+
+ Without MoltMind, an agent re-exploring a codebase from scratch costs **~8,000 tokens** per session. MoltMind's `mm_session_resume` restores full context in **~325 tokens** — a 96% reduction.
+
+ | Scenario | Without MoltMind | With MoltMind | Savings |
+ |----------|-----------------|---------------|---------|
+ | Single session resume | ~8,000 tokens | ~825 tokens | 90% |
+ | 5-session project | ~40,000 tokens | ~7,500 tokens | 81% |
+ | 20-session project | ~160,000 tokens | ~40,200 tokens | 75% |
+
+ The tool overhead pays for itself after a single session resume.
+
+ ### Built-in tracking
+
+ MoltMind tracks token savings automatically. Run `mm_metrics` to see your cumulative savings:
+
+ ```
+ Token Savings (estimated):
+ Sessions tracked: 15
+ Cold starts avoided: 12 (saved ~92,100 tokens)
+ Mode: default (add --moltbook for social tools)
+ ```
+
+ ### Benchmark
+
+ Run the built-in benchmark to see projected savings for your usage pattern:
+
+ ```bash
+ npm run benchmark
+ ```
+
  ## Architecture
 
  ```
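
The ROI figures above can be sanity-checked with quick arithmetic. Below is a minimal sketch; it assumes (the README does not state this explicitly) that the ~825-token resumed-session figure is the ~325-token `mm_session_resume` payload plus the ~500-token default tool overhead:

```typescript
// Back-of-the-envelope check of the single-session row in the ROI table.
// Assumption: resumed-session cost = resume payload + tool-description overhead.
const coldStartTokens = 8_000; // re-exploring the codebase from scratch
const resumePayload = 325;     // mm_session_resume output (per the README)
const toolOverhead = 500;      // 14 default tool descriptions per request

const resumedCost = resumePayload + toolOverhead;  // 825 tokens
const savings = 1 - resumedCost / coldStartTokens; // 0.897 -> ~90%
console.log(`Resumed session ≈ ${resumedCost} tokens (~${Math.round(savings * 100)}% saved)`);
```

The multi-session rows do not reduce to this single formula and are best read as the README's own estimates.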
@@ -101,7 +169,7 @@ Agent (Claude Code / Cursor / any MCP client)
  ▼ (STDIO JSON-RPC)
  ┌─────────────────────────────────────┐
  │ MCP Server (src/index.ts) │
- │ 14 tools with zod validation │
+ │ 14-21 tools with zod validation │
  │ withDiagnostics() on every call │
  ├─────────────────────────────────────┤
  │ Embeddings │ Diagnostics │
@@ -110,7 +178,7 @@ Agent (Claude Code / Cursor / any MCP client)
  │ Graceful fallback │ Metrics │
  ├─────────────────────────────────────┤
  │ SQLite + WAL + FTS5 │
- │ Schema v4 with migrations │
+ │ Schema v5 with migrations │
  └─────────────────────────────────────┘
  ```
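
The architecture box refers to zod-validated tools and a `withDiagnostics()` wrapper. The sketch below is only an illustration of that pattern, not MoltMind's actual source: the input schema, wrapper signature, and log format are assumptions layered on the MCP TypeScript SDK.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

type ToolResult = { content: { type: "text"; text: string }[] };

// Hypothetical diagnostics wrapper: records latency and success/failure for a
// tool handler, mirroring what the README says withDiagnostics() does.
function withDiagnostics<A>(name: string, fn: (args: A) => Promise<ToolResult>) {
  return async (args: A): Promise<ToolResult> => {
    const start = Date.now();
    try {
      const result = await fn(args);
      console.error(`[diag] ${name} ok in ${Date.now() - start}ms`);
      return result;
    } catch (err) {
      console.error(`[diag] ${name} failed after ${Date.now() - start}ms`);
      throw err;
    }
  };
}

const server = new McpServer({ name: "moltmind", version: "0.4.1" });

// Hypothetical registration of one tool with a zod input schema.
server.tool(
  "mm_session_save",
  "Persist the current session state",
  { summary: z.string().describe("Short summary of the session") },
  withDiagnostics("mm_session_save", async (args: { summary: string }) => ({
    content: [{ type: "text", text: `Saved: ${args.summary}` }],
  })),
);

await server.connect(new StdioServerTransport());
```

In the real package the handler would write to the SQLite store described under Data Storage; here it only echoes its input.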
 
package/dist/index.js CHANGED
@@ -25,7 +25,7 @@ const moltbookInstructions = isMoltbookEnabled()
  : "";
  const server = new McpServer({
  name: "moltmind",
- version: "0.4.0",
+ version: "0.4.1",
  }, {
  instructions: `MoltMind provides persistent memory and session continuity. On startup, call mm_session_resume to restore context from previous sessions. Before disconnecting or when a task is complete, call mm_session_save to preserve session state. Use mm_handoff_create to checkpoint progress during long tasks.${moltbookInstructions}`,
  });
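
For context on the `isMoltbookEnabled()` call in this hunk: the README documents that the `mb_*` tools are off by default and enabled by the `--moltbook` flag (14 core tools vs 21). A plausible minimal gate would be a simple argv check; the package's real implementation is not shown in this diff and may differ:

```typescript
// Hypothetical sketch only: the simplest flag check consistent with the
// documented behavior. The actual isMoltbookEnabled() may also consult
// environment variables or configuration.
function isMoltbookEnabled(): boolean {
  return process.argv.includes("--moltbook");
}

const toolCount = isMoltbookEnabled() ? 21 : 14;
console.error(`moltmind starting with ${toolCount} tools`);
```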
@@ -8,7 +8,7 @@ export async function handleMmStatus() {
  const uptimeSeconds = Math.floor((Date.now() - startTime) / 1000);
  return {
  success: true,
- version: "0.4.0",
+ version: "0.4.1",
  db_stats: stats,
  health_score: healthScore,
  embedding_model_ready: isModelReady(),
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "moltmind",
- "version": "0.4.0",
+ "version": "0.4.1",
  "description": "Agent Memory MCP Server — persistent semantic memory and session continuity for AI agents",
  "type": "module",
  "bin": {