@totalreclaw/totalreclaw 1.1.0 → 1.3.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,32 +1,52 @@
- # @totalreclaw/totalreclaw
+ <p align="center">
+ <img src="../../docs/assets/logo.png" alt="TotalReclaw" width="80" />
+ </p>
 
- Encrypted memory for your AI agent — zero-knowledge E2EE vault with automatic extraction, semantic search, and portable storage.
+ <h1 align="center">@totalreclaw/totalreclaw</h1>
 
- Built for [OpenClaw](https://openclaw.ai). Your memories are encrypted on your device before leaving — no one can read them, not even us.
+ <p align="center">
+ <strong>End-to-end encrypted memory for OpenClaw -- fully automatic, yours forever</strong>
+ </p>
 
- **[totalreclaw.xyz](https://totalreclaw.xyz)**
+ <p align="center">
+ <a href="https://totalreclaw.xyz">Website</a> &middot;
+ <a href="https://www.npmjs.com/package/@totalreclaw/totalreclaw">npm</a> &middot;
+ <a href="../../docs/guides/beta-tester-guide.md">Getting Started</a>
+ </p>
+
+ <p align="center">
+ <a href="https://www.npmjs.com/package/@totalreclaw/totalreclaw"><img src="https://img.shields.io/npm/v/@totalreclaw/totalreclaw?color=7B5CFF" alt="npm version"></a>
+ <a href="https://www.npmjs.com/package/@totalreclaw/totalreclaw"><img src="https://img.shields.io/npm/dm/@totalreclaw/totalreclaw" alt="npm downloads"></a>
+ <a href="../../LICENSE"><img src="https://img.shields.io/badge/license-MIT-blue" alt="License"></a>
+ </p>
+
+ ---
+
+ Your AI agent remembers everything -- preferences, decisions, facts -- encrypted so only you can read it. Built for [OpenClaw](https://openclaw.ai) with fully automatic memory extraction and recall.
 
  ## Install
 
+ Ask your OpenClaw agent:
+
+ > "Install the @totalreclaw/totalreclaw plugin"
+
+ Or from the terminal:
+
  ```bash
  openclaw plugins install @totalreclaw/totalreclaw
  ```
 
- Or just ask your agent:
-
- > "Install the totalreclaw plugin"
-
- The agent handles setup: generates your encryption keys and registers you. You'll be asked to write down a 12-word recovery phrase — that's the only thing you need to keep safe.
+ The agent handles setup: generates your encryption keys, asks you to save a 12-word recovery phrase, and registers you. After that, memory is fully automatic.
 
  ## How It Works
 
- After setup, memory is **fully automatic**:
+ After setup, everything happens in the background:
 
- - **Start of conversation** loads relevant memories from your vault
- - **End of conversation** extracts and encrypts new facts before storing them
- - **Before context compaction** saves everything important before the context window is trimmed
+ - **Start of conversation** -- loads relevant memories from your encrypted vault
+ - **During conversation** -- extracts facts, preferences, and decisions automatically
+ - **Before context compaction** -- saves important context before the window is trimmed
 
- All encryption happens client-side using AES-256-GCM. Search uses blind indices (SHA-256 hashes) — the server never sees your queries or data. Your 12-word recovery phrase derives all keys via Argon2id + HKDF.
+ All encryption happens client-side using AES-256-GCM. The server never sees your plaintext data.
 
  ## Tools
 
@@ -39,48 +59,45 @@ Your agent gets these tools automatically:
  | `totalreclaw_forget` | Delete a specific memory |
  | `totalreclaw_export` | Export all memories as plaintext |
  | `totalreclaw_status` | Check billing status and quota |
+ | `totalreclaw_consolidate` | Merge duplicate memories |
+ | `totalreclaw_import_from` | Import from Mem0 or MCP Memory Server |
 
- Most of the time you won't use these directly the automatic hooks handle memory for you.
+ Most of the time you won't use these directly -- the automatic hooks handle memory for you.
 
  ## Features
 
- - **Zero-knowledge E2EE** AES-256-GCM encryption, blind index search, HKDF auth
- - **Semantic search** Local embeddings (bge-small-en-v1.5) + BM25 + cosine reranking with RRF
- - **Automatic extraction** LLM extracts facts from conversations, no manual input needed
- - **Dedup** Cosine similarity catches paraphrases; LLM-guided dedup catches contradictions (Pro)
- - **On-chain storage** Encrypted data stored on Gnosis Chain, indexed by The Graph
- - **Portable** One 12-word phrase. Any device, same memories, no lock-in
- - **Import** Migrate from Mem0 or MCP Memory Server
+ - **End-to-end encrypted** -- AES-256-GCM encryption, blind index search, HKDF auth
+ - **Automatic extraction** -- LLM extracts facts from conversations, no manual input needed
+ - **Semantic search** -- Local embeddings + BM25 + cosine reranking with RRF fusion
+ - **Smart dedup** -- Cosine similarity catches paraphrases; LLM-guided dedup catches contradictions (Pro)
+ - **On-chain storage** -- Encrypted data stored on Gnosis Chain, indexed by The Graph
+ - **Portable** -- One 12-word phrase. Any device, same memories, no lock-in
+ - **Import** -- Migrate from Mem0 or MCP Memory Server
 
  ## Free Tier & Pricing
 
- | Tier | Writes | Reads | Price |
- |------|--------|-------|-------|
- | **Free** | 250/month | Unlimited | $0 |
- | **Pro** | 10,000/month | Unlimited | $2-5/month |
-
- Pay with card (Stripe) or crypto (Coinbase Commerce). Counter resets monthly.
-
- ## Configuration
+ | Tier | Memories | Reads | Storage | Price |
+ |------|----------|-------|---------|-------|
+ | **Free** | 500/month | Unlimited | Testnet (trial) | $0 |
+ | **Pro** | Unlimited | Unlimited | Permanent on-chain (Gnosis) | $5/month |
 
- Set these environment variables before the agent starts:
-
- | Variable | Description | Default |
- |----------|-------------|---------|
- | `TOTALRECLAW_SERVER_URL` | Server URL | `https://api.totalreclaw.xyz` |
- | `TOTALRECLAW_CREDENTIALS_PATH` | Path to credentials file | `~/.totalreclaw/credentials.json` |
- | `TOTALRECLAW_SELF_HOSTED` | Set to `true` to use your own self-hosted server instead of the managed service | `false` (managed service) |
- | `TOTALRECLAW_EXTRACT_EVERY_TURNS` | Auto-extract interval (turns) | `5` (Free) / `2` (Pro min) |
+ Pay with card via Stripe. Counter resets monthly.
 
  ## Using with Other Agents
 
  TotalReclaw also works outside OpenClaw:
 
- - **Claude Desktop / Cursor / Windsurf** Use [@totalreclaw/mcp-server](https://www.npmjs.com/package/@totalreclaw/mcp-server)
- - **NanoClaw** Lightweight skill with MCP bridge
+ - **Claude Desktop / Cursor / Windsurf** -- Use [@totalreclaw/mcp-server](https://www.npmjs.com/package/@totalreclaw/mcp-server)
+ - **NanoClaw** -- Built-in support via MCP bridge
 
  Same encryption, same recovery phrase, same memories across all agents.
 
+ ## Learn More
+
+ - [Getting Started Guide](../../docs/guides/beta-tester-guide.md)
+ - [totalreclaw.xyz](https://totalreclaw.xyz)
+ - [Main Repository](https://github.com/p-diogo/totalreclaw)
+
  ## License
 
  MIT
package/crypto.ts CHANGED
@@ -89,7 +89,7 @@ function deriveKeysFromMnemonic(
  }
 
  /**
- * Derive auth, encryption, and dedup keys from a master password.
+ * Derive auth, encryption, and dedup keys from a recovery phrase.
  *
  * If the password is a valid BIP-39 mnemonic (12 or 24 words), keys are
  * derived from the 512-bit BIP-39 seed via HKDF. Otherwise, the legacy
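The doc comment above describes deriving several purpose-specific keys from the 512-bit BIP-39 seed via HKDF. A minimal sketch of that pattern with Node's `hkdfSync`; the `info` labels and salt here are invented for illustration, not the package's actual values:

```typescript
import { hkdfSync } from 'node:crypto';

// Sketch: derive three independent 32-byte keys from one 64-byte BIP-39 seed.
// Distinct `info` labels give domain separation; these labels are hypothetical.
function deriveKeys(seed: Buffer, salt: Buffer) {
  const derive = (info: string): Buffer =>
    Buffer.from(hkdfSync('sha256', seed, salt, info, 32));
  return {
    authKey: derive('totalreclaw/auth/v1'),      // hypothetical label
    encryptionKey: derive('totalreclaw/enc/v1'), // hypothetical label
    dedupKey: derive('totalreclaw/dedup/v1'),    // hypothetical label
  };
}
```

The derivation is deterministic, which is what makes the vault portable: the same phrase reproduces the same keys on any device.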
package/embedding.ts CHANGED
@@ -1,73 +1,64 @@
  /**
   * TotalReclaw Plugin - Local Embedding via @huggingface/transformers
   *
-  * Uses the Xenova/bge-small-en-v1.5 ONNX model to generate 384-dimensional
+  * Uses the Qwen3-Embedding-0.6B ONNX model to generate 1024-dimensional
   * text embeddings locally. No API key needed, no data leaves the machine.
+  * Supports 100+ languages (EN, PT, ES, ZH, etc.).
   *
-  * This preserves the zero-knowledge guarantee: embeddings are generated
+  * This preserves the E2EE guarantee: embeddings are generated
   * CLIENT-SIDE before encryption, so no plaintext ever reaches an external API.
   *
   * Model details:
-  * - Quantized (int8) ONNX model: ~33.8MB download on first use
+  * - Quantized (int8) ONNX model: ~600MB download on first use
   * - Cached in ~/.cache/huggingface/ after first download
-  * - Lazy initialization: first call ~2-3s (model load), subsequent ~15ms
-  * - Output: 384-dimensional normalized embedding vector
-  * - For retrieval, queries should be prefixed with an instruction string
-  *   (documents/passages should NOT be prefixed)
+  * - Lazy initialization: first call ~3-5s (model load), subsequent ~100ms
+  * - Output: 1024-dimensional normalized embedding vector
+  * - No instruction prefix needed (bare queries perform better)
   *
-  * Dependencies: @huggingface/transformers (handles model download, WordPiece
-  * tokenization, ONNX inference, mean pooling, and normalization).
+  * Dependencies: @huggingface/transformers (handles model download,
+  * tokenization, ONNX inference, last-token pooling, and normalization).
   */
 
  // @ts-ignore - @huggingface/transformers types may not be perfect
  import { pipeline, type FeatureExtractionPipeline } from '@huggingface/transformers';
 
- /** ONNX-optimized bge-small-en-v1.5 from HuggingFace Hub. */
- const MODEL_ID = 'Xenova/bge-small-en-v1.5';
+ /** ONNX-optimized Qwen3-Embedding-0.6B from HuggingFace Hub. */
+ const MODEL_ID = 'onnx-community/Qwen3-Embedding-0.6B-ONNX';
 
- /** Fixed output dimensionality for bge-small-en-v1.5. */
- const EMBEDDING_DIM = 384;
-
- /**
-  * Query instruction prefix for bge-small-en-v1.5 retrieval tasks.
-  *
-  * Per the BAAI model card: prepend this to short queries when searching
-  * for relevant passages. Do NOT prepend for documents/passages being stored.
-  */
- const QUERY_PREFIX = 'Represent this sentence for searching relevant passages: ';
+ /** Fixed output dimensionality for Qwen3-Embedding-0.6B. */
+ const EMBEDDING_DIM = 1024;
 
  /** Lazily initialized feature extraction pipeline. */
  let extractor: FeatureExtractionPipeline | null = null;
 
  /**
-  * Generate a 384-dimensional embedding vector for the given text.
+  * Generate a 1024-dimensional embedding vector for the given text.
   *
-  * On first call, downloads and loads the ONNX model (~33.8MB, cached).
-  * Subsequent calls reuse the loaded model and run in ~15ms.
+  * On first call, downloads and loads the ONNX model (~600MB, cached).
+  * Subsequent calls reuse the loaded model and run in ~100ms.
   *
-  * For bge-small-en-v1.5, queries should set `isQuery: true` to prepend the
-  * retrieval instruction prefix. Documents being stored should use the default
-  * (`isQuery: false`) so no prefix is added.
+  * The isQuery option is accepted for forward compatibility but does not
+  * change behavior -- Qwen3 performs better without instruction prefixes.
   *
   * @param text - The text to embed.
   * @param options - Optional settings.
-  * @param options.isQuery - If true, prepend the BGE query instruction prefix
-  *   for improved retrieval accuracy (default: false).
-  * @returns 384-dimensional normalized embedding as a number array.
+  * @param options.isQuery - Accepted for forward compatibility (no-op).
+  * @returns 1024-dimensional normalized embedding as a number array.
   */
  export async function generateEmbedding(
    text: string,
    options?: { isQuery?: boolean },
  ): Promise<number[]> {
    if (!extractor) {
+     console.log('Downloading embedding model (one-time setup, ~600MB)...');
      extractor = await pipeline('feature-extraction', MODEL_ID, {
-       // Use quantized (int8) model for smaller download (~33.8MB vs ~67MB)
        quantized: true,
      });
+     console.log('Embedding model ready.');
    }
 
-   const input = options?.isQuery ? QUERY_PREFIX + text : text;
-   const output = await extractor(input, { pooling: 'mean', normalize: true });
+   const input = text;
+   const output = await extractor(input, { pooling: 'last_token', normalize: true });
    // output.data is a Float32Array; convert to plain number[]
    return Array.from(output.data as Float32Array);
  }
@@ -75,7 +66,7 @@ export async function generateEmbedding(
  /**
   * Get the embedding vector dimensionality.
   *
-  * Always returns 384 (fixed for bge-small-en-v1.5).
+  * Always returns 1024 (fixed for Qwen3-Embedding-0.6B).
   * This is needed by downstream code (e.g. LSH hasher) to know the vector
   * size without calling the embedding model.
   */
@@ -57,7 +57,7 @@ function parseFactsResponse(response: string): ExtractedFact[] {
      : 'ADD';
    return {
      text: String(fact.text).slice(0, 512),
-     type: (['fact', 'preference', 'decision', 'episodic', 'goal'].includes(String(fact.type))
+     type: (['fact', 'preference', 'decision', 'episodic', 'goal', 'context', 'summary'].includes(String(fact.type))
        ? String(fact.type)
        : 'fact') as ExtractedFact['type'],
      importance: Math.max(1, Math.min(10, Number(fact.importance) || 5)),
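Since `generateEmbedding` in the embedding module above returns L2-normalized vectors (`normalize: true`), the cosine similarity used downstream for reranking and dedup reduces to a plain dot product. A self-contained sketch (the actual reranking code is not part of this diff):

```typescript
// For L2-normalized vectors, cosine similarity is just the dot product:
// cos(a, b) = (a . b) / (|a| * |b|), and |a| = |b| = 1 after normalization.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error('dimension mismatch');
  let dot = 0;
  for (let i = 0; i < a.length; i++) dot += a[i] * b[i];
  return dot;
}
```

This is also why the model swap matters operationally: every stored 384-dim vector is dimension-incompatible with new 1024-dim vectors, so old and new embeddings cannot be compared directly.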
package/extractor.ts CHANGED
@@ -15,7 +15,7 @@ export type ExtractionAction = 'ADD' | 'UPDATE' | 'DELETE' | 'NOOP';
 
  export interface ExtractedFact {
    text: string;
-   type: 'fact' | 'preference' | 'decision' | 'episodic' | 'goal';
+   type: 'fact' | 'preference' | 'decision' | 'episodic' | 'goal' | 'context' | 'summary';
    importance: number; // 1-10
    action: ExtractionAction;
    existingFactId?: string;
@@ -37,28 +37,36 @@ interface ConversationMessage {
  // Extraction Prompt
  // ---------------------------------------------------------------------------
 
- const EXTRACTION_SYSTEM_PROMPT = `You are a memory extraction engine. Analyze the conversation and extract atomic facts worth remembering long-term.
+ const EXTRACTION_SYSTEM_PROMPT = `You are a memory extraction engine. Analyze the conversation and extract valuable long-term memories.
 
  Rules:
- 1. Each fact must be a single, atomic piece of information
- 2. Focus on user-specific information: preferences, decisions, facts about them, their goals
- 3. Skip generic knowledge, greetings, and small talk
- 4. Skip information that is only relevant to the current conversation
- 5. Score importance 1-10 (7+ = worth storing, below 7 = skip)
- 6. Only extract facts with importance >= 6
+ 1. Each memory must be a single, self-contained piece of information
+ 2. Focus on user-specific information that would be useful in future conversations
+ 3. Skip generic knowledge, greetings, small talk, and ephemeral task coordination
+ 4. Score importance 1-10 (6+ = worth storing)
+ 5. Only extract memories with importance >= 6
 
  Types:
- - fact: Objective information about the user
- - preference: Likes, dislikes, or preferences
- - decision: Choices the user has made
- - episodic: Events or experiences
- - goal: Objectives or targets
+ - fact: Objective information about the user (name, location, job, relationships)
+ - preference: Likes, dislikes, or preferences ("prefers dark mode", "allergic to peanuts")
+ - decision: Choices WITH reasoning ("chose PostgreSQL because data is relational and needs ACID")
+ - episodic: Notable events or experiences ("deployed v1.0 to production on March 15")
+ - goal: Objectives, targets, or plans ("wants to launch public beta by end of Q1")
+ - context: Active project/task context ("working on TotalReclaw v1.2, staging on Base Sepolia")
+ - summary: Key outcome or conclusion from a discussion ("agreed to use phased rollout for migration")
+
+ Extraction guidance:
+ - For decisions: ALWAYS include the reasoning. "Chose X" is weak. "Chose X because Y" is strong.
+ - For context: Capture what the user is actively working on, including versions, environments, and status.
+ - For summaries: Only extract when a conversation reaches a clear conclusion or agreement.
+ - For facts: Prefer specific over vague. "Lives in Lisbon" beats "lives in Europe".
+ - Decisions and context should be importance >= 7 (they are high-value for future conversations).
 
  Actions (compare against existing memories if provided):
- - ADD: New fact, no conflict with existing memories
- - UPDATE: Modifies or refines an existing memory (provide existingFactId)
- - DELETE: Contradicts an existing memory the old one is now wrong (provide existingFactId)
- - NOOP: Already captured in existing memories or not worth storing
+ - ADD: New memory, no conflict with existing
+ - UPDATE: Refines or corrects an existing memory (provide existingFactId)
+ - DELETE: Contradicts an existing memory -- the old one is now wrong (provide existingFactId)
+ - NOOP: Already captured or not worth storing
 
  Return a JSON array (no markdown, no code fences):
  [{"text": "...", "type": "...", "importance": N, "action": "ADD|UPDATE|DELETE|NOOP", "existingFactId": "..."}, ...]
@@ -158,7 +166,7 @@ function parseFactsResponse(response: string): ExtractedFact[] {
      : 'ADD'; // Default to ADD for backward compatibility
    return {
      text: String(fact.text).slice(0, 512),
-     type: (['fact', 'preference', 'decision', 'episodic', 'goal'].includes(String(fact.type))
+     type: (['fact', 'preference', 'decision', 'episodic', 'goal', 'context', 'summary'].includes(String(fact.type))
        ? String(fact.type)
        : 'fact') as ExtractedFact['type'],
      importance: Math.max(1, Math.min(10, Number(fact.importance) || 5)),
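The validation in `parseFactsResponse` above defends against malformed LLM output: unknown types fall back to `'fact'`, and importance is clamped to [1, 10] with 5 as the default for missing or non-numeric values. Extracted as a standalone sketch for clarity (the `normalizeFact` name is this editor's, not the package's):

```typescript
type FactType = 'fact' | 'preference' | 'decision' | 'episodic' | 'goal' | 'context' | 'summary';

const VALID_TYPES: readonly string[] =
  ['fact', 'preference', 'decision', 'episodic', 'goal', 'context', 'summary'];

// Mirrors the normalization in the diff: truncate text to 512 chars,
// whitelist the type, and clamp importance to [1, 10] (default 5).
function normalizeFact(raw: { text: unknown; type?: unknown; importance?: unknown }) {
  return {
    text: String(raw.text).slice(0, 512),
    type: (VALID_TYPES.includes(String(raw.type)) ? String(raw.type) : 'fact') as FactType,
    importance: Math.max(1, Math.min(10, Number(raw.importance) || 5)),
  };
}
```

Note the `Number(x) || 5` idiom maps `NaN`, `undefined`, and `0` alike to the default of 5 before clamping.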
@@ -1,6 +1,6 @@
  #!/usr/bin/env npx tsx
  /**
-  * Generate a BIP-39 12-word mnemonic for use as TOTALRECLAW_MASTER_PASSWORD.
+  * Generate a BIP-39 12-word mnemonic for use as TOTALRECLAW_RECOVERY_PHRASE.
   *
   * Usage: npx tsx generate-mnemonic.ts
   */
@@ -8,7 +8,7 @@ import { generateMnemonic } from '@scure/bip39';
  import { wordlist } from '@scure/bip39/wordlists/english.js';
 
  const mnemonic = generateMnemonic(wordlist, 128);
- console.log('\n Your TotalReclaw master mnemonic (12 words):\n');
+ console.log('\n Your TotalReclaw recovery phrase (12 words):\n');
  console.log(` ${mnemonic}\n`);
  console.log(' WRITE THIS DOWN. If you lose it, your memories are unrecoverable.');
- console.log(' Set it as TOTALRECLAW_MASTER_PASSWORD in your .env file.\n');
+ console.log(' Set it as TOTALRECLAW_RECOVERY_PHRASE in your .env file.\n');
@@ -44,7 +44,7 @@ export abstract class BaseImportAdapter {
    const text = fact.text.trim().slice(0, 512);
 
    // Normalize type
-   const validTypes = ['fact', 'preference', 'decision', 'episodic', 'goal'] as const;
+   const validTypes = ['fact', 'preference', 'decision', 'episodic', 'goal', 'context', 'summary'] as const;
    const type = validTypes.includes(fact.type as typeof validTypes[number])
      ? (fact.type as NormalizedFact['type'])
      : 'fact';
@@ -453,7 +453,7 @@ async function runTests(): Promise<void> {
 
    // --- type normalization ---
    {
-     const validTypes = ['fact', 'preference', 'decision', 'episodic', 'goal'] as const;
+     const validTypes = ['fact', 'preference', 'decision', 'episodic', 'goal', 'context', 'summary'] as const;
 
      for (const t of validTypes) {
        const result = testAdapter.testValidateFact({ text: 'test fact', type: t });
@@ -6,7 +6,7 @@ export interface NormalizedFact {
    /** The atomic fact text (max 512 chars) */
    text: string;
    /** Fact type matching TotalReclaw's taxonomy */
-   type: 'fact' | 'preference' | 'decision' | 'episodic' | 'goal';
+   type: 'fact' | 'preference' | 'decision' | 'episodic' | 'goal' | 'context' | 'summary';
    /** Importance score 1-10 */
    importance: number;
    /** Original source system */
package/index.ts CHANGED
@@ -41,7 +41,7 @@ import {
    STORE_DEDUP_MAX_CANDIDATES,
    type DecryptedCandidate,
  } from './consolidation.js';
- import { isSubgraphMode, getSubgraphConfig, encodeFactProtobuf, submitFactOnChain, deriveSmartAccountAddress, type FactPayload } from './subgraph-store.js';
+ import { isSubgraphMode, getSubgraphConfig, encodeFactProtobuf, submitFactOnChain, submitFactBatchOnChain, deriveSmartAccountAddress, type FactPayload } from './subgraph-store.js';
  import { searchSubgraph, getSubgraphFactCount } from './subgraph-search.js';
  import { PluginHotCache, type HotFact } from './hot-cache-wrapper.js';
  import crypto from 'node:crypto';
@@ -126,7 +126,10 @@ const SEMANTIC_SKIP_THRESHOLD = parseFloat(process.env.TOTALRECLAW_SEMANTIC_SKIP
 
  // Auto-extract throttle (C3): only extract every N turns in agent_end hook
  let turnsSinceLastExtraction = 0;
- const AUTO_EXTRACT_EVERY_TURNS_ENV = parseInt(process.env.TOTALRECLAW_EXTRACT_EVERY_TURNS ?? '5', 10);
+ const AUTO_EXTRACT_EVERY_TURNS_ENV = parseInt(process.env.TOTALRECLAW_EXTRACT_EVERY_TURNS ?? '3', 10);
+
+ // Hard cap on facts per extraction to prevent LLM over-extraction from dense conversations
+ const MAX_FACTS_PER_EXTRACTION = 15;
 
  // Store-time near-duplicate detection (consolidation module)
  const STORE_DEDUP_ENABLED = process.env.TOTALRECLAW_STORE_DEDUP !== 'false';
@@ -139,7 +142,7 @@ const RELEVANCE_THRESHOLD = parseFloat(process.env.TOTALRECLAW_RELEVANCE_THRESHO
  // ---------------------------------------------------------------------------
 
  const BILLING_CACHE_PATH = path.join(process.env.HOME ?? '/home/node', '.totalreclaw', 'billing-cache.json');
- const BILLING_CACHE_TTL = 12 * 60 * 60 * 1000; // 12 hours
+ const BILLING_CACHE_TTL = 2 * 60 * 60 * 1000; // 2 hours
  const QUOTA_WARNING_THRESHOLD = 0.8; // 80%
 
  interface BillingCache {
@@ -150,6 +153,9 @@ interface BillingCache {
      llm_dedup?: boolean;
      custom_extract_interval?: boolean;
      min_extract_interval?: number;
+     extraction_interval?: number;
+     max_facts_per_extraction?: number;
+     max_candidate_pool?: number;
    };
    checked_at: number;
  }
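The new `extraction_interval`, `max_facts_per_extraction`, and `max_candidate_pool` fields let the relay push overrides through the billing cache. The precedence the getters in this file apply (server-pushed value, then env var, then a hardcoded default) can be sketched generically; `resolveSetting` is this editor's name, not a package export:

```typescript
// Sketch of the config precedence: server-pushed value > env var > default.
function resolveSetting(
  serverValue: number | undefined,
  envValue: string | undefined,
  fallback: number,
): number {
  if (serverValue != null) return serverValue; // server config wins
  const parsed = parseInt(envValue ?? '', 10);
  return Number.isNaN(parsed) ? fallback : parsed; // env var, then constant
}
```

The `!= null` check deliberately treats `0` as a valid server value while skipping both `null` and `undefined`.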
@@ -188,13 +194,24 @@ function isLlmDedupEnabled(): boolean {
  }
 
  /**
-  * Get the effective extraction interval based on tier.
-  * Pro users can set interval as low as 2 via env; Free users are clamped to minimum 5.
+  * Get the effective extraction interval.
+  * Server-side config takes priority (from billing cache), then env var fallback.
+  * This allows the relay admin to tune extraction without an npm publish.
   */
  function getExtractInterval(): number {
    const cache = readBillingCache();
-   const minInterval = cache?.features?.min_extract_interval ?? 5;
-   return Math.max(AUTO_EXTRACT_EVERY_TURNS_ENV, minInterval);
+   if (cache?.features?.extraction_interval != null) return cache.features.extraction_interval;
+   return AUTO_EXTRACT_EVERY_TURNS_ENV;
+ }
+
+ /**
+  * Get the max facts per extraction cycle.
+  * Server-side config takes priority (from billing cache), then env var / constant fallback.
+  */
+ function getMaxFactsPerExtraction(): number {
+   const cache = readBillingCache();
+   if (cache?.features?.max_facts_per_extraction != null) return cache.features.max_facts_per_extraction;
+   return MAX_FACTS_PER_EXTRACTION;
  }
 
  /**
@@ -254,12 +271,18 @@ const FACT_COUNT_CACHE_TTL = 5 * 60 * 1000;
  /**
   * Compute the candidate pool size from a fact count.
   *
-  * Formula: pool = min(max(factCount * 3, 400), 5000)
+  * Server-side config takes priority (from billing cache), then local fallback.
+  * The server computes the optimal pool based on vault size and tier caps.
+  *
+  * Local fallback formula: pool = min(max(factCount * 3, 400), 5000)
   * - At least 400 candidates (even for tiny vaults)
   * - At most 5000 candidates (to bound decryption + reranking cost)
   * - 3x fact count in between
   */
  function computeCandidatePool(factCount: number): number {
+   const cache = readBillingCache();
+   if (cache?.features?.max_candidate_pool != null) return cache.features.max_candidate_pool;
+   // Fallback to local formula if no server config
    return Math.min(Math.max(factCount * 3, 400), 5000);
  }
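The local fallback formula for the candidate pool is easy to sanity-check in isolation; this reproduces exactly the expression from the diff:

```typescript
// Fallback from the diff: pool = min(max(factCount * 3, 400), 5000).
// Floor of 400 for tiny vaults, ceiling of 5000 to bound decryption cost.
function computeCandidatePool(factCount: number): number {
  return Math.min(Math.max(factCount * 3, 400), 5000);
}
```

The floor dominates below ~133 facts, the 3x rule in the middle, and the ceiling above ~1,667 facts.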
@@ -301,21 +324,21 @@ async function getFactCount(logger: OpenClawPluginApi['logger']): Promise<number
  // Initialisation
  // ---------------------------------------------------------------------------
 
- /** True when master password is missing — tools return setup instructions. */
+ /** True when recovery phrase is missing — tools return setup instructions. */
  let needsSetup = false;
 
  /**
-  * Derive keys from the master password, load or create credentials, and
+  * Derive keys from the recovery phrase, load or create credentials, and
   * register with the server if this is the first run.
   */
  async function initialize(logger: OpenClawPluginApi['logger']): Promise<void> {
    const serverUrl =
      process.env.TOTALRECLAW_SERVER_URL || 'https://api.totalreclaw.xyz';
-   const masterPassword = process.env.TOTALRECLAW_MASTER_PASSWORD;
+   const masterPassword = process.env.TOTALRECLAW_RECOVERY_PHRASE;
 
    if (!masterPassword) {
      needsSetup = true;
-     logger.info('TOTALRECLAW_MASTER_PASSWORD not set — setup required (see SKILL.md Post-Install Setup)');
+     logger.info('TOTALRECLAW_RECOVERY_PHRASE not set — setup required (see SKILL.md Post-Install Setup)');
      return;
    }
 
@@ -411,27 +434,27 @@ function isDocker(): boolean {
 
  function buildSetupErrorMsg(): string {
    const base =
      'TotalReclaw setup required:\n' +
-     '1. Set TOTALRECLAW_MASTER_PASSWORD — ask the user if they have an existing recovery phrase or generate a new 12-word BIP-39 mnemonic.\n' +
+     '1. Set TOTALRECLAW_RECOVERY_PHRASE — ask the user if they have an existing recovery phrase or generate a new 12-word BIP-39 mnemonic.\n' +
      '2. Restart the gateway to apply changes.\n' +
      ' (Optional: set TOTALRECLAW_SELF_HOSTED=true if using your own server instead of the managed service.)\n\n';
 
    if (isDocker()) {
      return base +
        'Running in Docker — pass env vars via `-e` flags or your compose file:\n' +
-       ' -e TOTALRECLAW_MASTER_PASSWORD="word1 word2 ..."';
+       ' -e TOTALRECLAW_RECOVERY_PHRASE="word1 word2 ..."';
    }
 
    if (process.platform === 'darwin') {
      return base +
        'Running on macOS — add env vars to the LaunchAgent plist at\n' +
        '~/Library/LaunchAgents/ai.openclaw.gateway.plist under <key>EnvironmentVariables</key>:\n' +
-       ' <key>TOTALRECLAW_MASTER_PASSWORD</key><string>word1 word2 ...</string>\n' +
+       ' <key>TOTALRECLAW_RECOVERY_PHRASE</key><string>word1 word2 ...</string>\n' +
        'Then run: openclaw gateway restart';
    }
 
    return base +
      'Running on Linux — add env vars to the systemd unit override or your shell profile:\n' +
-     ' export TOTALRECLAW_MASTER_PASSWORD="word1 word2 ..."\n' +
+     ' export TOTALRECLAW_RECOVERY_PHRASE="word1 word2 ..."\n' +
      'Then run: openclaw gateway restart';
  }
 
@@ -462,7 +485,7 @@ async function requireFullSetup(logger: OpenClawPluginApi['logger']): Promise<vo
  // LSH + Embedding helpers
  // ---------------------------------------------------------------------------
 
- /** Master password cached for LSH seed derivation (set during initialize()). */
+ /** Recovery phrase cached for LSH seed derivation (set during initialize()). */
  let masterPasswordCache: string | null = null;
  /** Salt cached for LSH seed derivation (set during initialize()). */
  let saltCache: Buffer | null = null;
@@ -471,7 +494,7 @@ let saltCache: Buffer | null = null;
471
494
  * Get or initialize the LSH hasher.
472
495
  *
473
496
  * The hasher is created lazily because it needs:
474
- * 1. The master password + salt (available after initialize())
497
+ * 1. The recovery phrase + salt (available after initialize())
475
498
  * 2. The embedding dimensions (available after initLLMClient())
476
499
  *
477
500
  * If the provider doesn't support embeddings, this returns null and
@@ -517,7 +540,7 @@ async function generateEmbeddingAndLSH(
517
540
  const hasher = getLSHHasher(logger);
518
541
  const lshBuckets = hasher ? hasher.hash(embedding) : [];
519
542
 
520
- // Encrypt the embedding (JSON array of numbers) for zero-knowledge storage
543
+ // Encrypt the embedding (JSON array of numbers) for server-blind storage
521
544
  const encryptedEmbedding = encryptToHex(JSON.stringify(embedding), encryptionKey!);
522
545
 
523
546
  return { embedding, lshBuckets, encryptedEmbedding };
@@ -856,9 +879,13 @@ async function storeExtractedFacts(
856
879
  }
857
880
 
858
881
  // Phase 3: Store the deduplicated facts (with optional store-time dedup).
882
+ // In subgraph mode, collect all protobuf payloads (tombstones + new facts)
883
+ // and submit them in a single batched UserOp for gas efficiency.
859
884
  let stored = 0;
860
885
  let superseded = 0;
861
886
  let skipped = 0;
887
+ const pendingPayloads: Buffer[] = []; // Batched subgraph payloads
888
+ let preparedForSubgraph = 0;
862
889
 
863
890
  for (const fact of dedupedFacts) {
864
891
  try {
@@ -880,24 +907,19 @@ async function storeExtractedFacts(
  if (fact.action === 'DELETE' && fact.existingFactId) {
  // Tombstone the old fact, don't store anything new.
  if (isSubgraphMode()) {
- try {
- const tombConfig = { ...getSubgraphConfig(), authKeyHex: authKeyHex!, walletAddress: subgraphOwner ?? undefined };
- const tombstone: FactPayload = {
- id: fact.existingFactId,
- timestamp: new Date().toISOString(),
- owner: subgraphOwner || userId!,
- encryptedBlob: '00',
- blindIndices: [],
- decayScore: 0,
- source: 'tombstone',
- contentFp: '',
- agentId: 'openclaw-plugin-auto',
- };
- await submitFactOnChain(encodeFactProtobuf(tombstone), tombConfig);
- logger.info(`LLM dedup: DELETE — tombstoned ${fact.existingFactId} on-chain`);
- } catch (tombErr) {
- logger.warn(`LLM dedup: DELETE failed for ${fact.existingFactId}: ${tombErr instanceof Error ? tombErr.message : String(tombErr)}`);
- }
+ const tombstone: FactPayload = {
+ id: fact.existingFactId,
+ timestamp: new Date().toISOString(),
+ owner: subgraphOwner || userId!,
+ encryptedBlob: '00',
+ blindIndices: [],
+ decayScore: 0,
+ source: 'tombstone',
+ contentFp: '',
+ agentId: 'openclaw-plugin-auto',
+ };
+ pendingPayloads.push(encodeFactProtobuf(tombstone));
+ logger.info(`LLM dedup: DELETE — queued tombstone for ${fact.existingFactId}`);
  } else if (apiClient && authKeyHex) {
  try {
  await apiClient.deleteFact(fact.existingFactId, authKeyHex);
@@ -913,24 +935,19 @@ async function storeExtractedFacts(
  if (fact.action === 'UPDATE' && fact.existingFactId) {
  // Tombstone the old fact, then fall through to store the new version.
  if (isSubgraphMode()) {
- try {
- const tombConfig = { ...getSubgraphConfig(), authKeyHex: authKeyHex!, walletAddress: subgraphOwner ?? undefined };
- const tombstone: FactPayload = {
- id: fact.existingFactId,
- timestamp: new Date().toISOString(),
- owner: subgraphOwner || userId!,
- encryptedBlob: '00',
- blindIndices: [],
- decayScore: 0,
- source: 'tombstone',
- contentFp: '',
- agentId: 'openclaw-plugin-auto',
- };
- await submitFactOnChain(encodeFactProtobuf(tombstone), tombConfig);
- logger.info(`LLM dedup: UPDATE — tombstoned ${fact.existingFactId} on-chain, storing replacement`);
- } catch (tombErr) {
- logger.warn(`LLM dedup: UPDATE tombstone failed for ${fact.existingFactId}: ${tombErr instanceof Error ? tombErr.message : String(tombErr)}`);
- }
+ const tombstone: FactPayload = {
+ id: fact.existingFactId,
+ timestamp: new Date().toISOString(),
+ owner: subgraphOwner || userId!,
+ encryptedBlob: '00',
+ blindIndices: [],
+ decayScore: 0,
+ source: 'tombstone',
+ contentFp: '',
+ agentId: 'openclaw-plugin-auto',
+ };
+ pendingPayloads.push(encodeFactProtobuf(tombstone));
+ logger.info(`LLM dedup: UPDATE — queued tombstone for ${fact.existingFactId}, storing replacement`);
  } else if (apiClient && authKeyHex) {
  try {
  await apiClient.deleteFact(fact.existingFactId, authKeyHex);
@@ -968,29 +985,21 @@ async function storeExtractedFacts(
  }
  // action === 'supersede': delete old fact, inherit higher importance
  if (isSubgraphMode()) {
- try {
- const tombConfig = { ...getSubgraphConfig(), authKeyHex: authKeyHex!, walletAddress: subgraphOwner ?? undefined };
- const tombstone: FactPayload = {
- id: dupResult.match.id,
- timestamp: new Date().toISOString(),
- owner: subgraphOwner || userId!,
- encryptedBlob: '00',
- blindIndices: [],
- decayScore: 0,
- source: 'tombstone',
- contentFp: '',
- agentId: 'openclaw-plugin-auto',
- };
- const tombProtobuf = encodeFactProtobuf(tombstone);
- await submitFactOnChain(tombProtobuf, tombConfig);
- logger.info(
- `Store-time dedup: superseded ${dupResult.match.id} on-chain (sim=${dupResult.similarity.toFixed(3)})`,
- );
- } catch (tombErr) {
- logger.warn(
- `Store-time dedup: failed to tombstone ${dupResult.match.id}: ${tombErr instanceof Error ? tombErr.message : String(tombErr)}`,
- );
- }
+ const tombstone: FactPayload = {
+ id: dupResult.match.id,
+ timestamp: new Date().toISOString(),
+ owner: subgraphOwner || userId!,
+ encryptedBlob: '00',
+ blindIndices: [],
+ decayScore: 0,
+ source: 'tombstone',
+ contentFp: '',
+ agentId: 'openclaw-plugin-auto',
+ };
+ pendingPayloads.push(encodeFactProtobuf(tombstone));
+ logger.info(
+ `Store-time dedup: queued supersede for ${dupResult.match.id} (sim=${dupResult.similarity.toFixed(3)})`,
+ );
  } else if (apiClient && authKeyHex) {
  try {
  await apiClient.deleteFact(dupResult.match.id, authKeyHex);
@@ -1023,20 +1032,7 @@ async function storeExtractedFacts(
  const contentFp = generateContentFingerprint(fact.text, dedupKey);
  const factId = crypto.randomUUID();

- const payload: StoreFactPayload = {
- id: factId,
- timestamp: new Date().toISOString(),
- encrypted_blob: encryptedBlob,
- blind_indices: allIndices,
- decay_score: effectiveImportance,
- source: 'auto-extraction',
- content_fp: contentFp,
- agent_id: 'openclaw-plugin-auto',
- encrypted_embedding: embeddingResult?.encryptedEmbedding,
- };
-
  if (isSubgraphMode()) {
- const config = { ...getSubgraphConfig(), authKeyHex: authKeyHex!, walletAddress: subgraphOwner ?? undefined };
  const protobuf = encodeFactProtobuf({
  id: factId,
  timestamp: new Date().toISOString(),
@@ -1049,11 +1045,23 @@ async function storeExtractedFacts(
  agentId: 'openclaw-plugin-auto',
  encryptedEmbedding: embeddingResult?.encryptedEmbedding,
  });
- await submitFactOnChain(protobuf, config);
+ pendingPayloads.push(protobuf);
+ preparedForSubgraph++;
  } else {
+ const payload: StoreFactPayload = {
+ id: factId,
+ timestamp: new Date().toISOString(),
+ encrypted_blob: encryptedBlob,
+ blind_indices: allIndices,
+ decay_score: effectiveImportance,
+ source: 'auto-extraction',
+ content_fp: contentFp,
+ agent_id: 'openclaw-plugin-auto',
+ encrypted_embedding: embeddingResult?.encryptedEmbedding,
+ };
  await apiClient.store(userId, [payload], authKeyHex);
+ stored++;
  }
- stored++;
  } catch (err: unknown) {
  // Check for 403 / quota exceeded — invalidate billing cache so next
  // before_agent_start re-fetches and warns the user.
@@ -1067,6 +1075,28 @@ async function storeExtractedFacts(
  }
  }

+ // Batch-submit all subgraph payloads in a single UserOp (gas-efficient).
+ if (pendingPayloads.length > 0 && isSubgraphMode()) {
+ try {
+ const batchConfig = { ...getSubgraphConfig(), authKeyHex: authKeyHex!, walletAddress: subgraphOwner ?? undefined };
+ const result = await submitFactBatchOnChain(pendingPayloads, batchConfig);
+ if (result.success) {
+ stored += preparedForSubgraph;
+ logger.info(`Batch submitted ${result.batchSize} payloads in 1 UserOp (tx=${result.txHash.slice(0, 10)}…)`);
+ } else {
+ logger.warn(`Batch UserOp failed on-chain (tx=${result.txHash.slice(0, 10)}…)`);
+ }
+ } catch (err: unknown) {
+ const errMsg = err instanceof Error ? err.message : String(err);
+ if (errMsg.includes('403') || errMsg.toLowerCase().includes('quota')) {
+ try { fs.unlinkSync(BILLING_CACHE_PATH); } catch { /* ignore */ }
+ logger.warn(`Quota exceeded during batch submit — billing cache invalidated. ${errMsg}`);
+ } else {
+ logger.warn(`Batch submission failed: ${errMsg}`);
+ }
+ }
+ }
+
  if (stored > 0 || superseded > 0 || skipped > 0) {
  logger.info(`Auto-extraction results: stored=${stored}, superseded=${superseded}, skipped=${skipped}`);
  }
@@ -1177,7 +1207,7 @@ async function handlePluginImportFrom(
  const plugin = {
  id: 'totalreclaw',
  name: 'TotalReclaw',
- description: 'Zero-knowledge encrypted memory vault for AI agents',
+ description: 'End-to-end encrypted memory vault for AI agents',
  kind: 'memory' as const,
  configSchema: {
  type: 'object',
@@ -1238,7 +1268,7 @@ const plugin = {
  },
  type: {
  type: 'string',
- enum: ['fact', 'preference', 'decision', 'episodic', 'goal'],
+ enum: ['fact', 'preference', 'decision', 'episodic', 'goal', 'context', 'summary'],
  description: 'The kind of memory (default: fact)',
  },
  importance: {
@@ -2548,7 +2578,14 @@ const plugin = {
  ? await fetchExistingMemoriesForExtraction(api.logger, 20, evt.messages)
  : [];
  const rawFacts = await extractFacts(evt.messages, 'turn', existingMemories);
- const { kept: facts } = filterByImportance(rawFacts, api.logger);
+ const { kept: importanceFiltered } = filterByImportance(rawFacts, api.logger);
+ const maxFacts = getMaxFactsPerExtraction();
+ if (importanceFiltered.length > maxFacts) {
+ api.logger.info(
+ `Capped extraction from ${importanceFiltered.length} to ${maxFacts} facts`,
+ );
+ }
+ const facts = importanceFiltered.slice(0, maxFacts);
  if (facts.length > 0) {
  await storeExtractedFacts(facts, api.logger);
  }
@@ -2584,7 +2621,14 @@ const plugin = {
  ? await fetchExistingMemoriesForExtraction(api.logger, 50, evt.messages)
  : [];
  const rawCompactFacts = await extractFacts(evt.messages, 'full', existingMemories);
- const { kept: facts } = filterByImportance(rawCompactFacts, api.logger);
+ const { kept: compactImportanceFiltered } = filterByImportance(rawCompactFacts, api.logger);
+ const maxFactsCompact = getMaxFactsPerExtraction();
+ if (compactImportanceFiltered.length > maxFactsCompact) {
+ api.logger.info(
+ `Capped compaction extraction from ${compactImportanceFiltered.length} to ${maxFactsCompact} facts`,
+ );
+ }
+ const facts = compactImportanceFiltered.slice(0, maxFactsCompact);
  if (facts.length > 0) {
  await storeExtractedFacts(facts, api.logger);
  }
@@ -2619,7 +2663,14 @@ const plugin = {
  ? await fetchExistingMemoriesForExtraction(api.logger, 50, evt.messages)
  : [];
  const rawResetFacts = await extractFacts(evt.messages, 'full', existingMemories);
- const { kept: facts } = filterByImportance(rawResetFacts, api.logger);
+ const { kept: resetImportanceFiltered } = filterByImportance(rawResetFacts, api.logger);
+ const maxFactsReset = getMaxFactsPerExtraction();
+ if (resetImportanceFiltered.length > maxFactsReset) {
+ api.logger.info(
+ `Capped reset extraction from ${resetImportanceFiltered.length} to ${maxFactsReset} facts`,
+ );
+ }
+ const facts = resetImportanceFiltered.slice(0, maxFactsReset);
  if (facts.length > 0) {
  await storeExtractedFacts(facts, api.logger);
  }
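The index.ts changes above all follow one pattern: instead of submitting each tombstone or new fact on-chain immediately, payloads are queued in `pendingPayloads` and submitted once at the end. A minimal sketch of that queue-then-batch pattern, with `encodeFact` and `submitBatch` as hypothetical stand-ins for `encodeFactProtobuf` and `submitFactBatchOnChain`:

```typescript
// Sketch only: encodeFact and submitBatch are hypothetical stand-ins for the
// plugin's encodeFactProtobuf / submitFactBatchOnChain.
type Fact = { id: string; source: string };

function encodeFact(fact: Fact): Buffer {
  // The real code encodes a protobuf; JSON stands in here.
  return Buffer.from(JSON.stringify(fact), 'utf8');
}

async function submitBatch(payloads: Buffer[]): Promise<{ success: boolean; batchSize: number }> {
  // One submission (one UserOp) covers the whole batch.
  return { success: true, batchSize: payloads.length };
}

async function storeAll(facts: Fact[]): Promise<number> {
  const pending: Buffer[] = [];
  for (const fact of facts) {
    pending.push(encodeFact(fact)); // queue during the loop, don't submit yet
  }
  if (pending.length === 0) return 0;
  const result = await submitBatch(pending); // single submission at the end
  return result.success ? result.batchSize : 0;
}
```

The payoff is per-item work (encoding, dedup decisions) stays in the loop while the expensive operation happens once, which is why the diff also tracks `preparedForSubgraph` separately and only adds it to `stored` after the batch succeeds.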
package/lsh.ts CHANGED
@@ -1,7 +1,7 @@
  /**
  * TotalReclaw Plugin - LSH Hasher (Locality-Sensitive Hashing)
  *
- * Pure TypeScript implementation of Random Hyperplane LSH for zero-knowledge
+ * Pure TypeScript implementation of Random Hyperplane LSH for server-blind
  * semantic search. Generates deterministic hyperplane matrices from a seed
  * derived from the user's master key, so the same embedding always hashes to
  * the same buckets across sessions.
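The header comment above describes seeded random-hyperplane LSH. A self-contained sketch of the idea (the `mulberry32` PRNG and the 8-bit bucket width are illustrative assumptions, not the plugin's actual parameters):

```typescript
// Illustrative random-hyperplane LSH. The plugin derives its seed from the
// user's master key; mulberry32 and bits=8 are assumptions for this sketch.
function mulberry32(seed: number): () => number {
  return () => {
    seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Each of `bits` seeded hyperplanes contributes one sign bit: the same seed
// plus the same embedding always yields the same bucket, and embeddings that
// point in similar directions tend to land in the same bucket.
function lshBucket(embedding: number[], seed: number, bits = 8): number {
  const rand = mulberry32(seed);
  let bucket = 0;
  for (let b = 0; b < bits; b++) {
    let dot = 0;
    for (const x of embedding) {
      dot += x * (rand() * 2 - 1); // one random hyperplane component per dimension
    }
    bucket = (bucket << 1) | (dot >= 0 ? 1 : 0);
  }
  return bucket;
}
```

Because only the sign of each dot product matters, the bucket depends on the embedding's direction, not its magnitude, and the server sees only opaque bucket IDs.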
@@ -2,7 +2,7 @@
  "id": "totalreclaw",
  "name": "TotalReclaw",
  "kind": "memory",
- "description": "Zero-knowledge encrypted memory vault for AI agents",
+ "description": "End-to-end encrypted memory vault for AI agents",
  "configSchema": {
  "type": "object",
  "properties": {
package/package.json CHANGED
@@ -1,14 +1,14 @@
  {
  "name": "@totalreclaw/totalreclaw",
- "version": "1.1.0",
- "description": "Encrypted memory for your AI agent — zero-knowledge E2EE vault with automatic extraction, semantic search, and on-chain storage",
+ "version": "1.3.0",
+ "description": "End-to-end encrypted memory for AI agents — portable, yours forever. Automatic extraction, semantic search, and on-chain storage",
  "type": "module",
  "keywords": [
  "totalreclaw",
  "openclaw",
  "ai-memory",
  "ai-agent",
- "zero-knowledge",
+ "e2e-encryption",
  "encryption",
  "e2ee",
  "lsh",
package/setup.sh CHANGED
@@ -16,4 +16,4 @@ echo " cd testbed/functional-test"
  echo " docker compose -f docker-compose.functional-test.yml up -d"
  echo ""
  echo "The plugin will auto-register on first use."
- echo "Set TOTALRECLAW_MASTER_PASSWORD in your .env file."
+ echo "Set TOTALRECLAW_RECOVERY_PHRASE in your .env file."
package/subgraph-store.ts CHANGED
@@ -13,7 +13,7 @@
  import { createPublicClient, http, type Hex, type Address, type Chain } from 'viem';
  import { entryPoint07Address } from 'viem/account-abstraction';
  import { mnemonicToAccount } from 'viem/accounts';
- import { gnosis, gnosisChiado } from 'viem/chains';
+ import { gnosis, gnosisChiado, baseSepolia } from 'viem/chains';
  import { createSmartAccountClient } from 'permissionless';
  import { toSimpleSmartAccount } from 'permissionless/accounts';
  import { createPimlicoClient } from 'permissionless/clients/pimlico';
@@ -32,7 +32,7 @@ export interface SubgraphStoreConfig {
  relayUrl: string; // TotalReclaw relay server URL (proxies bundler + subgraph)
  mnemonic: string; // BIP-39 mnemonic for key derivation
  cachePath: string; // Hot cache file path
- chainId: number; // 100 for Gnosis mainnet, 10200 for Chiado testnet
+ chainId: number; // 100 for Gnosis mainnet, 10200 for Chiado testnet, 84532 for Base Sepolia
  dataEdgeAddress: string; // EventfulDataEdge contract address
  entryPointAddress: string; // ERC-4337 EntryPoint v0.7
  authKeyHex?: string; // HKDF auth key for relay server Authorization header
@@ -151,8 +151,10 @@ function getChainFromId(chainId: number): Chain {
  return gnosis;
  case 10200:
  return gnosisChiado;
+ case 84532:
+ return baseSepolia;
  default:
- return gnosisChiado;
+ return gnosis;
  }
  }

@@ -187,7 +189,7 @@ export async function submitFactOnChain(
  }

  if (!config.mnemonic) {
- throw new Error('Mnemonic (TOTALRECLAW_MASTER_PASSWORD) is required for on-chain submission');
+ throw new Error('Mnemonic (TOTALRECLAW_RECOVERY_PHRASE) is required for on-chain submission');
  }

  const chain = getChainFromId(config.chainId);
@@ -279,6 +281,105 @@ export async function submitFactOnChain(
  };
  }

+ /**
+ * Submit multiple facts on-chain in a single ERC-4337 UserOp (batched).
+ *
+ * Each protobuf payload becomes one call in a multi-call UserOp. The
+ * DataEdge contract emits a separate Log(bytes) event per call, and the
+ * subgraph indexes each event independently (by txHash + logIndex).
+ *
+ * Falls back to single-fact path for batches of 1 (no multicall overhead).
+ */
+ export async function submitFactBatchOnChain(
+ protobufPayloads: Buffer[],
+ config: SubgraphStoreConfig,
+ ): Promise<{ txHash: string; userOpHash: string; success: boolean; batchSize: number }> {
+ if (!protobufPayloads.length) {
+ return { txHash: '', userOpHash: '', success: true, batchSize: 0 };
+ }
+
+ // Single fact — use standard path (avoids multicall overhead)
+ if (protobufPayloads.length === 1) {
+ const result = await submitFactOnChain(protobufPayloads[0], config);
+ return { ...result, batchSize: 1 };
+ }
+
+ if (!config.relayUrl) {
+ throw new Error('Relay URL (TOTALRECLAW_SERVER_URL) is required for on-chain submission');
+ }
+ if (!config.mnemonic) {
+ throw new Error('Mnemonic (TOTALRECLAW_RECOVERY_PHRASE) is required for on-chain submission');
+ }
+
+ const chain = getChainFromId(config.chainId);
+ const bundlerRpcUrl = getRelayBundlerUrl(config.relayUrl);
+ const dataEdgeAddress = config.dataEdgeAddress as Address;
+ const entryPointAddr = (config.entryPointAddress || entryPoint07Address) as Address;
+
+ const headers: Record<string, string> = {
+ 'X-TotalReclaw-Client': 'openclaw-plugin',
+ };
+ if (config.authKeyHex) headers['Authorization'] = `Bearer ${config.authKeyHex}`;
+ if (config.walletAddress) headers['X-Wallet-Address'] = config.walletAddress;
+
+ const authTransport = Object.keys(headers).length > 0
+ ? http(bundlerRpcUrl, { fetchOptions: { headers } })
+ : http(bundlerRpcUrl);
+
+ const ownerAccount = mnemonicToAccount(config.mnemonic);
+ const publicClient = createPublicClient({
+ chain,
+ transport: config.rpcUrl ? http(config.rpcUrl) : http(),
+ });
+
+ const pimlicoClient = createPimlicoClient({
+ chain,
+ transport: authTransport,
+ entryPoint: {
+ address: entryPointAddr,
+ version: '0.7',
+ },
+ });
+
+ const smartAccount = await toSimpleSmartAccount({
+ client: publicClient,
+ owner: ownerAccount,
+ entryPoint: {
+ address: entryPointAddr,
+ version: '0.7',
+ },
+ });
+
+ const smartAccountClient = createSmartAccountClient({
+ account: smartAccount,
+ chain,
+ bundlerTransport: authTransport,
+ paymaster: pimlicoClient,
+ userOperation: {
+ estimateFeesPerGas: async () => {
+ return (await pimlicoClient.getUserOperationGasPrice()).fast;
+ },
+ },
+ });
+
+ // Build multi-call batch: each payload → one call to DataEdge fallback()
+ const calls = protobufPayloads.map(payload => ({
+ to: dataEdgeAddress,
+ value: 0n,
+ data: `0x${payload.toString('hex')}` as Hex,
+ }));
+
+ const userOpHash = await smartAccountClient.sendUserOperation({ calls });
+ const receipt = await pimlicoClient.waitForUserOperationReceipt({ hash: userOpHash });
+
+ return {
+ txHash: receipt.receipt.transactionHash,
+ userOpHash,
+ success: receipt.success,
+ batchSize: protobufPayloads.length,
+ };
+ }
+

  // ---------------------------------------------------------------------------
  // Configuration
  // ---------------------------------------------------------------------------
@@ -297,7 +398,7 @@ export function isSubgraphMode(): boolean {
  * Get subgraph configuration from environment variables.
  *
  * After the relay refactor, clients only need:
- * - TOTALRECLAW_MASTER_PASSWORD -- BIP-39 mnemonic
+ * - TOTALRECLAW_RECOVERY_PHRASE -- BIP-39 mnemonic
  * - TOTALRECLAW_SERVER_URL -- relay server URL (default: https://api.totalreclaw.xyz)
  * - TOTALRECLAW_SELF_HOSTED -- set "true" to use self-hosted server (default: managed service)
  * - TOTALRECLAW_CHAIN_ID -- optional, defaults to 100 (Gnosis mainnet)
@@ -311,7 +412,7 @@ export function isSubgraphMode(): boolean {
  * This is the on-chain owner identity used in the subgraph.
  */
  export async function deriveSmartAccountAddress(mnemonic: string, chainId?: number): Promise<string> {
- const chain: Chain = (chainId ?? 100) === 100 ? gnosis : gnosisChiado;
+ const chain: Chain = getChainFromId(chainId ?? 100);
  const ownerAccount = mnemonicToAccount(mnemonic);
  const entryPointAddr = (process.env.TOTALRECLAW_ENTRYPOINT_ADDRESS || DEFAULT_ENTRYPOINT_ADDRESS) as Address;
  const rpcUrl = process.env.TOTALRECLAW_RPC_URL;
@@ -336,7 +437,7 @@ export async function deriveSmartAccountAddress(mnemonic: string, chainId?: numb
  export function getSubgraphConfig(): SubgraphStoreConfig {
  return {
  relayUrl: process.env.TOTALRECLAW_SERVER_URL || 'https://api.totalreclaw.xyz',
- mnemonic: process.env.TOTALRECLAW_MASTER_PASSWORD || '',
+ mnemonic: process.env.TOTALRECLAW_RECOVERY_PHRASE || '',
  cachePath: process.env.TOTALRECLAW_CACHE_PATH || `${process.env.HOME}/.totalreclaw/cache.enc`,
  chainId: parseInt(process.env.TOTALRECLAW_CHAIN_ID || '100'),
  dataEdgeAddress: process.env.TOTALRECLAW_DATA_EDGE_ADDRESS || DEFAULT_DATA_EDGE_ADDRESS,