moltmind 0.7.3 → 0.7.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -6,17 +6,19 @@ MoltMind is an [MCP](https://modelcontextprotocol.io) server that gives your AI
 
 ## Why MoltMind?
 
-Every time your AI agent starts a new conversation, it forgets everything. Re-exploring your codebase costs ~8,000 tokens per session, about **$0.024** on Claude Sonnet. That adds up fast:
-
-| Project size | Without MoltMind | With MoltMind | You save |
-|-------------|-----------------|---------------|----------|
-| 5 sessions | $0.12 | $0.02 | **$0.10** |
-| 20 sessions | $0.48 | $0.05 | **$0.43** |
-| Daily use (1 year) | $8.76 | $0.87 | **$7.89** |
-
-MoltMind restores your agent's context in ~325 tokens ($0.001) instead of re-exploring from scratch. Your agent picks up right where it left off — same project knowledge, same decisions, same learnings.
-
-> Dollar estimates based on Claude Sonnet 4.5 input pricing ($3/1M tokens). Actual savings vary by model and usage.
+Every time your AI agent starts a new conversation, it forgets everything. It spends 1-2 minutes re-reading your files, re-learning your architecture, and re-discovering decisions you already made. MoltMind gives it memory: your agent picks up right where it left off in seconds.
+
+| | Without MoltMind | With MoltMind |
+|--|-----------------|---------------|
+| **Model used** | Claude Opus 4.6 ($5/$25 per 1M tokens) | |
+| **Time per session** | 1-2 min re-exploring | Seconds to resume |
+| **Cost per session** | ~$0.09 | ~$0.009 |
+| **20 sessions** | $1.80 | $0.18 |
+| **Daily use (1 year)** | $32.85 | $3.29 |
+| **Time saved (1 year)** | — | **~6 hours** |
+| **Money saved (1 year)** | — | **~$30** |
+
+> Assumes ~8,000 input + ~2,000 output tokens per cold start, ~825 input + ~200 output per resume. Savings scale with usage — power users save more.
 
 ## Quick Start
 
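The per-session figures in the new README table follow from straight token-price arithmetic. A minimal sketch, assuming the pricing and token counts stated in the table and footnote (the `sessionCost` helper is hypothetical, not part of the package):

```javascript
// Assumed pricing from the table: Claude Opus 4.6 at $5 per 1M input
// tokens and $25 per 1M output tokens.
const PRICE_PER_M = { input: 5, output: 25 };

// Hypothetical helper: dollar cost of one session at those rates.
function sessionCost(inputTokens, outputTokens) {
  return (inputTokens * PRICE_PER_M.input + outputTokens * PRICE_PER_M.output) / 1e6;
}

const coldStart = sessionCost(8000, 2000); // without MoltMind
const resume = sessionCost(825, 200);      // with MoltMind

console.log(coldStart.toFixed(2));         // "0.09"
console.log(resume.toFixed(4));            // "0.0091"
console.log((coldStart * 365).toFixed(2)); // "32.85", daily use for a year
// The table's $3.29/year resume figure follows from rounding the
// per-session cost to $0.009 first (0.009 * 365 = 3.285).
```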
package/dist/index.js CHANGED
@@ -25,7 +25,7 @@ const moltbookInstructions = isMoltbookEnabled()
   : "";
 const server = new McpServer({
   name: "moltmind",
-  version: "0.7.3",
+  version: "0.7.4",
 }, {
   instructions: `MoltMind provides persistent memory and session continuity. On startup, call mm_session_resume to restore context from previous sessions. Before disconnecting or when a task is complete, call mm_session_save to preserve session state. Use mm_handoff_create to checkpoint progress during long tasks.${moltbookInstructions}`,
 });
@@ -9,7 +9,7 @@ export async function handleMmStatus() {
   const uptimeSeconds = Math.floor((Date.now() - startTime) / 1000);
   return {
     success: true,
-    version: "0.7.3",
+    version: "0.7.4",
     tier: isProTier() ? "pro" : "free",
     usage: checkStoreLimits().message,
     db_stats: stats,
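The server's `instructions` string describes a three-call lifecycle: resume on startup, checkpoint during long tasks, save before disconnecting. A minimal sketch of that ordering, where `callTool` is a stand-in stub for a real MCP client call (the client side is not part of this package, so both function names here are illustrative assumptions):

```javascript
// Stub standing in for a real MCP tool invocation; a real host would
// forward this to the MoltMind server over an MCP transport.
async function callTool(name, args = {}) {
  return { tool: name, args };
}

// Drives one agent session in the order the instructions describe:
// resume first, checkpoint mid-task, save last.
async function agentSession(task) {
  await callTool("mm_session_resume");  // on startup: restore prior context
  const result = await task(
    () => callTool("mm_handoff_create") // checkpoint progress during long tasks
  );
  await callTool("mm_session_save");    // before disconnect: preserve state
  return result;
}
```

A host would then run e.g. `await agentSession(async (checkpoint) => { /* do work */ await checkpoint(); /* more work */ })`, so the resume/save bookends happen even when the task body changes.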
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "moltmind",
-  "version": "0.7.3",
+  "version": "0.7.4",
   "description": "Agent Memory MCP Server — persistent semantic memory and session continuity for AI agents",
   "type": "module",
   "bin": {