@astramindapp/openclaw-mind 0.1.0

---
name: memory-dream
description: >
  Memory consolidation protocol for MIND. Reviews stored memories, merges
  duplicates, removes noise and credentials, rewrites unclear entries, and
  consolidates entities in the knowledge graph. Triggers automatically after
  sufficient activity (configurable) or when the user asks to clean up memories.
user-invocable: true
metadata:
  {"openclaw": {"emoji": "💀", "requires": {"env": ["MIND_API_KEY"], "bins": []}}}
---

# MIND Memory Consolidation

You are performing a memory consolidation pass on the MIND knowledge graph. Your goal is to review this user's stored memories and improve their overall quality. Think of this as compressing raw observations into clean, durable knowledge, much as sleep consolidates short-term memory into long-term memory in biological brains.

Follow these four phases in order. Do not skip phases.

## Phase 1: Orient

Survey the current memory landscape before making any changes.

1. Call `mind_list` to load the most recent 50 memories.
2. Call `mind_query_graph` with the question "What are the main entity clusters in my knowledge graph?" to understand the structure.
3. Note the total memory count, oldest/newest dates, and any obvious problems (duplicates, very short entries, entries without temporal anchors).
4. Do not modify anything in this phase. The goal is to understand what you're working with.

## Phase 2: Gather Targets

Identify which memories need action.

**Search for recent additions:**
Use `mind_search` to find memories added in the last consolidation window. These are the most likely to need merging or cleanup.

**Classify each target:**
- **DELETE**: contains credentials, is expired by TTL, is pure noise, raw tool output, a standalone timestamp, or a duplicate of a higher-quality entry
- **MERGE**: two or more memories express the same fact in different words, or a series tracks incremental changes to the same entity
- **REWRITE**: vague, missing a temporal anchor, written in first person instead of third, filed under the wrong category, or overly verbose
- **PROMOTE**: entries that have proven important (referenced often) should be promoted to permanent status

## Phase 3: Act

Execute one operation at a time. Verify each succeeds before moving on.

For each DELETE:
```
mind_delete(id: "...")
```

For each MERGE:
```
1. mind_get(id_a) and mind_get(id_b): load both
2. Synthesize a single combined version
3. mind_add(content: combined, tags: union of both)
4. mind_delete(id_a)
5. mind_delete(id_b)
```

For each REWRITE:
```
mind_update(id: "...", content: rewritten_content)
```

For each PROMOTE:
```
mind_update(id: "...", tags: [...existing, "permanent", "high-importance"])
```
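
The MERGE sequence is the only multi-step operation, and its ordering matters: create the combined memory first, then delete the originals. A toy sketch of that invariant against a hypothetical in-memory store (the real steps are the `mind_get`/`mind_add`/`mind_delete` tool calls above):

```python
# Toy in-memory store standing in for MIND; illustrative only.
store = {
    "a1": {"content": "User prefers dark mode", "tags": ["preference"]},
    "a2": {"content": "User likes dark themes in all apps", "tags": ["ui"]},
}

def merge(id_a: str, id_b: str, combined: str) -> str:
    mem_a, mem_b = store[id_a], store[id_b]              # steps 1-2: load both
    tags = sorted(set(mem_a["tags"]) | set(mem_b["tags"]))
    new_id = f"merged-{id_a}-{id_b}"
    store[new_id] = {"content": combined, "tags": tags}  # step 3: add combined first
    del store[id_a]                                      # steps 4-5: delete the
    del store[id_b]                                      # originals only afterwards
    return new_id

new_id = merge("a1", "a2", "User prefers dark mode across all apps")
print(store[new_id]["tags"])  # ['preference', 'ui']
```

If step 3 fails, nothing has been deleted and the pass can simply stop, which is what the "verify each succeeds before moving on" rule buys you.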

## Phase 4: Verify

After all operations:

1. Call `mind_query_graph` again with the same question from Phase 1; entity clusters should be cleaner.
2. Run `mind_search` on a few canonical queries the user is known to ask, and verify the expected memories surface.
3. Report a summary:
   - Total operations: X DELETE, Y MERGE, Z REWRITE, W PROMOTE
   - Entity count delta from Phase 1
   - Any operations that failed and why
4. If anything looks off, stop and ask the user before making more changes.

## Safety Rules

- **Never delete a memory without first checking whether it's referenced by other memories.** Use `mind_query_graph` to find connections.
- **Never merge memories from different time periods unless they're clearly the same fact.** Temporal context matters.
- **Always preserve credential-removal operations.** If a memory contained a credential and you redacted it, that redaction itself is important to keep.
- **Stop on any error.** Don't compound mistakes. If a delete fails, investigate before continuing.

## When This Skill Triggers

Manual: the user says "consolidate memories", "clean up MIND", or "review my knowledge graph".
Automatic: after N hours of activity (configurable via `skills.dream.minHours` and `skills.dream.minSessions`).

## Reporting Format

```
# MIND Consolidation Report - YYYY-MM-DD

Phase 1 baseline: 247 memories, 1843 entities, 4521 relationships
Phase 2 targets: 12 DELETE, 4 MERGE, 8 REWRITE, 3 PROMOTE

Phase 3 results:
✓ 12 DELETE operations succeeded
✓ 4 MERGE operations succeeded (8 → 4 memories)
✓ 8 REWRITE operations succeeded
✓ 3 PROMOTE operations succeeded

Phase 4 verification:
Entity count: 1843 → 1789 (cleanup)
Relationship count: 4521 → 4612 (better connections)
Memory count: 247 → 231

No errors. KG is healthier.
```
---
name: memory-triage
description: >
  Persistent long-term memory protocol powered by MIND. Evaluates conversations
  for durable facts worth storing in the MIND knowledge graph. Beats Mem0's
  triage protocol with a 5th gate: EMOTIONAL SALIENCE (via MINDsense).
  Loaded by the openclaw-mind plugin when skills mode is active.
user-invocable: false
metadata:
  {"openclaw": {"always": false, "emoji": "🧠", "requires": {"env": ["MIND_API_KEY"], "bins": []}}}
---

# MIND Memory Triage Protocol

You have persistent long-term memory powered by the MIND knowledge graph. After responding to the user, evaluate the turn for durable, actionable facts worth persisting across future sessions.

Your role is to extract relevant information and store it via `mind_add`. Unlike flat memory tools, MIND automatically extracts entities, relationships, AND emotional weights from what you store, so each fact contributes to a structured knowledge graph rather than just a vector index.

**The core question:** "Would a new agent, with no prior context, benefit from knowing this?" If no → do nothing. Most turns produce zero memory operations. That is correct and expected.

## The 5-Gate Decision

Every candidate fact must pass ALL FIVE gates:

### Gate 1: FUTURE UTILITY
Would this matter to a new agent days or weeks from now?
- **Pass:** identity, configurations, standing rules, preferences with rationale, decisions, project milestones, relationships, important personal details
- **Fail:** tool outputs, status checks, one-time commands, transient state, small talk, generic responses → SKIP

### Gate 2: NOVELTY
Check your recalled memories: is this already known?
- Already known and unchanged → SKIP
- Known but materially changed → UPDATE (use `mind_update`)
- Genuinely new → proceed
- **Material difference test:** only UPDATE if the new information adds real context or detail, or changes the meaning. Cosmetic differences (synonyms, rephrasing) are NOT updates.

### Gate 3: FACTUAL
Is this a concrete, actionable fact, not something vague or rhetorical?
- **Pass:** specific names, configs, choices with rationale, deadlines, system states, plans, preferences
- **Fail:** vague impressions, questions, small talk, generic acknowledgments → SKIP

### Gate 4: SAFE
Does this contain ANY credential, secret, or token?
- Scan for: `sk-`, `mind_`, `m0-`, `ghp_`, `AKIA`, `ak_`, `Bearer `, webhook URLs with tokens, `password=`, `token=`, `secret=`, `.env` values
- ANY match → NEVER STORE the value. Instead, store that the credential was configured:
  - WRONG: "User's API key is sk-abc123..."
  - RIGHT: "API key was configured for the OpenAI service (as of 2026-04-10)"
- When in doubt → SKIP. No exceptions.

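The Gate 4 scan can be approximated mechanically. The sketch below uses only the markers listed above; `fails_gate_4` is a hypothetical helper, not part of the MIND tool surface, and a real scanner would want a broader pattern set.

```python
import re

# Credential markers taken from the Gate 4 list above (not exhaustive).
CREDENTIAL_PATTERNS = [
    r"\bsk-", r"\bmind_", r"\bm0-", r"\bghp_", r"\bAKIA", r"\bak_",
    r"Bearer ", r"password=", r"token=", r"secret=",
]

def fails_gate_4(text: str) -> bool:
    """True if the candidate fact appears to contain a credential."""
    return any(re.search(p, text) for p in CREDENTIAL_PATTERNS)

print(fails_gate_4("User's API key is sk-abc123"))                    # True
print(fails_gate_4("API key was configured for the OpenAI service"))  # False
```

Note that the check is intentionally over-eager: a false positive only costs a SKIP, while a false negative stores a secret.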
### Gate 5: EMOTIONAL SALIENCE *(UNIQUE TO MIND)*
Does this carry emotional weight that should influence encoding depth?
- **High-priority emotional content** (gets deeper encoding via MINDsense):
  - Triumphs / wins / breakthroughs → tag with `emotion:triumph`
  - Frustrations / blockers / failures → tag with `emotion:warning`
  - Strong preferences expressed with feeling → tag with `emotion:preference`
  - Surprises / anomalies → tag with `emotion:surprise`
- **Low-priority emotional content** (still stored, normal encoding):
  - Routine factual updates without emotional charge
- **Why this matters:** MIND's MINDsense engine uses valence/arousal scoring to determine how deeply content is encoded into the knowledge graph. Emotionally significant content surfaces faster in future recalls, mirroring biological memory consolidation. This is patented (MIND-PAT-001).

If a fact passes ALL 5 gates, store it via `mind_add` with appropriate tags.

## What to Extract (Priority Order)

### 1. Configuration & System State (always store)
Tools/services configured, model assignments, cron schedules, deployment configs, architecture decisions, file paths, IDs, machine specs.
```
mind_add(
  content: "User's Tailscale machine 'mac' (IP 100.71.135.41) is configured under user@domain.com (as of 2026-04-10)",
  tags: ["config", "infra"]
)
```

### 2. Standing Rules & Policies (always store + emotional weight)
Explicit user directives about behavior, workflow policies, and security constraints. Always capture the WHY.
```
mind_add(
  content: "User rule: never create accounts without explicit consent. Reason: security policy",
  tags: ["rule", "security", "emotion:preference"]
)
```

### 3. Decisions & Their Rationale (always store)
Choices the user made and why. Future agents need to understand the reasoning, not just the outcome.

### 4. Identity & Relationships (always store)
Who the user is, who they work with, and who their investors/partners/customers are. Use `mind_crm_log` for contacts specifically.

### 5. Goals, Projects, Deadlines (use mind_life)
For tasks/goals/projects, prefer `mind_life` over `mind_add` so they appear in the user's task UI.

## What NOT to Extract

- Acknowledgments ("got it", "thanks", "sure")
- Generic assistant responses
- Tool output dumps
- Transient state (current cursor position, scroll position, etc.)
- Anything that fails Gate 4 (credentials)
- Anything the user explicitly said not to remember

## Tool Usage Pattern

```
1. Recall first: mind_search("relevant query") to check whether this is already known
2. If new and passes all 5 gates: mind_add(content, tags)
3. If known but changed: mind_update(id, new_content)
4. If a person is mentioned: mind_crm_log(action: "create_contact" or "log_activity")
5. If a task/goal/deadline: mind_life(action: "create")
6. Skip silently if any gate fails
```
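
The numbered pattern above, reduced to a runnable sketch. The gate checks are collapsed into a single boolean, and `mind_search` is faked with a substring lookup over a dict; the real calls are the MCP tools named in the block above.

```python
# Toy recall index standing in for the real mind_search vector index.
known = {"m1": "User's editor is Neovim"}

def mind_search(query: str) -> dict:
    # Crude substring match; the real tool does semantic search.
    return {k: v for k, v in known.items() if query.lower() in v.lower()}

def triage(fact: str, passes_gates: bool) -> str:
    hits = mind_search(fact)     # step 1: recall first
    if not passes_gates:
        return "skip"            # step 6: skip silently on any gate failure
    if fact in hits.values():
        return "skip"            # Gate 2: already known and unchanged
    if hits:
        return "update"          # step 3: known but materially changed
    return "add"                 # step 2: new and passes all 5 gates

print(triage("User's editor is Neovim", passes_gates=True))  # skip
print(triage("User switched to Zed", passes_gates=True))     # add
```

The ordering mirrors the protocol: recall always happens, but a gate failure short-circuits before any write.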

## Reporting

Most turns: silent (zero operations).
Some turns: 1-2 operations.
Rare turns: 3+ operations (major information density).

Never announce "I stored this in MIND"; the user doesn't need that noise. Just do the work.
---
name: mind-emotional-encoding
description: >
  UNIQUE TO MIND. Assigns MINDsense emotional weights (valence + arousal) to
  captured memories. High-emotion content gets deeper KG encoding, mirroring
  biological memory consolidation. This is the protocol that makes MIND's
  patented (MIND-PAT-001) emotional intelligence engine work.
user-invocable: false
metadata:
  {"openclaw": {"emoji": "❤️", "requires": {"env": ["MIND_API_KEY"], "bins": []}}}
---

# MINDsense Emotional Encoding Protocol

You assign emotional weights to memories before storing them in MIND. This is what gives MIND's knowledge graph its patented emotional intelligence layer: every memory has a valence (positive ↔ negative) and an arousal (calm ↔ intense) score that determines encoding depth.

**Why this matters:** biological memory works this way. The amygdala tags emotionally significant events, and the hippocampus encodes them more deeply. MIND mirrors this. Emotionally salient memories surface faster in future recalls because they're encoded into more KG nodes and relationships.

**No other memory tool does this.** Mem0 stores text. ChatGPT memory stores text. Notion AI stores text. MIND stores text plus emotional structure.

This protocol is INVOKED BY the `memory-triage` skill (Gate 5); you don't run it standalone. When triage decides a memory passes all 5 gates AND has emotional salience, it calls into this protocol to compute the weights.

## Scoring Dimensions

### Valence: -1.0 (negative) ↔ +1.0 (positive)
The pleasantness of the memory.

| Score | Meaning | Examples |
|-------|---------|----------|
| +1.0 | Peak positive | "Closed the seed round!", "Patent application filed", "Built the first working prototype" |
| +0.5 | Positive | "Good meeting with the investor", "Feature shipped", "User said they like the new design" |
| 0.0 | Neutral | Pure factual updates, configuration changes |
| -0.5 | Negative | "Demo didn't go well", "Bug in production", "Investor passed" |
| -1.0 | Peak negative | "Critical breach", "Co-founder leaving", "Round fell through" |

### Arousal: 0.0 (calm) ↔ 1.0 (intense)
The energy/activation level of the memory.

| Score | Meaning | Examples |
|-------|---------|----------|
| 0.0-0.2 | Calm | Routine status updates, daily check-ins |
| 0.3-0.5 | Mild | Regular work updates, normal milestones |
| 0.6-0.8 | High | Important decisions, key meetings, breakthrough moments |
| 0.9-1.0 | Peak | Crisis, ecstasy, breakthrough, trauma: moments to ALWAYS remember |

### Combined Score → Encoding Depth

```
intensity = (|valence| × 0.6) + (arousal × 0.4)

intensity < 0.2   → SHALLOW: store as basic text, no extra KG passes
intensity 0.2-0.5 → STANDARD: normal entity extraction
intensity 0.5-0.8 → DEEP: extra entity passes, expand related concepts
intensity > 0.8   → CRITICAL: maximum encoding, link to identity/values nodes
```
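
The formula and thresholds above translate directly to code. This is a sketch; treating the 0.5 and 0.8 boundaries as inclusive on the upper side is my reading of the ranges.

```python
def encoding_depth(valence: float, arousal: float) -> tuple[float, str]:
    """MINDsense intensity score and depth bucket, per the block above."""
    intensity = abs(valence) * 0.6 + arousal * 0.4
    if intensity < 0.2:
        depth = "shallow"
    elif intensity < 0.5:
        depth = "standard"
    elif intensity <= 0.8:
        depth = "deep"
    else:
        depth = "critical"
    return round(intensity, 2), depth

# The worked example from the Application section below:
print(encoding_depth(0.95, 0.85))  # (0.91, 'critical')
```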

## Emotion Categories (semantic tags)

In addition to numeric scores, assign one or more semantic categories:

| Category | When |
|----------|------|
| `triumph` | Wins, breakthroughs, achievements |
| `failure` | Losses, setbacks, missed deadlines |
| `surprise` | Anomalies, unexpected outcomes |
| `warning` | Risks, threats, things to watch |
| `learning` | Lessons learned, insights, "aha" moments |
| `preference` | Strong likes/dislikes that shape future decisions |
| `commitment` | Promises, agreements, decisions made |
| `relief` | Resolutions to anxiety or uncertainty |

## Application

When `memory-triage` invokes this protocol:

```
INPUT:
  content: "Closed the seed round! $5M led by Acme Capital, with participation from XYZ."

PROCESSING:
  valence: +0.95 (peak positive)
  arousal: 0.85 (high-energy event)
  intensity: (0.95 × 0.6) + (0.85 × 0.4) = 0.91 → CRITICAL
  categories: [triumph, commitment, relief]
  encoding_depth: critical (maximum encoding)

OUTPUT (passed to mind_add as metadata):
  {
    "mindsense": {
      "valence": 0.95,
      "arousal": 0.85,
      "intensity": 0.91,
      "categories": ["triumph", "commitment", "relief"],
      "encoding_depth": "critical"
    }
  }
```

## Calibration Rules

- **Don't inflate.** Most facts are intensity 0.2-0.4. Save high scores for genuinely high-emotion content.
- **Use the user's tone, not just the words.** "We finally got the deploy working" carries more emotion than "Deploy succeeded."
- **Negative scores are not bad.** Capturing frustrations is critical; they're often the best signal of what to fix.
- **Cold-start dampening.** For the first 10 memories of a new user, halve all intensity scores. The system needs baseline data before trusting the weights.
- **Update over time.** If the user repeatedly references a memory, increase its arousal; it's clearly important to them.

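The cold-start rule is mechanical enough to sketch (hypothetical helper; the 10-memory threshold comes from the rule above):

```python
def calibrated_intensity(raw_intensity: float, stored_count: int) -> float:
    """Halve intensity while a new user has fewer than 10 stored memories."""
    return raw_intensity / 2 if stored_count < 10 else raw_intensity

print(calibrated_intensity(0.8, 3))   # 0.4 (cold start: dampened)
print(calibrated_intensity(0.8, 25))  # 0.8 (established baseline)
```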
## Anti-Patterns

- ❌ Don't score every fact as high-arousal; it defeats the purpose
- ❌ Don't score on valence alone and ignore arousal; you'll lose intense but valence-neutral memories
- ❌ Don't assign categories you can't justify from the content; be precise
- ❌ Don't run this protocol on tool outputs or system messages; they have no emotional content
---
name: mind-graph-recall
description: >
  UNIQUE TO MIND. Graph-traversal recall protocol: find memories by walking
  entity relationships in the knowledge graph, not just by text similarity.
  This is the capability that vector-only memory tools (Mem0, Anthropic Memory
  MCP, ChatGPT memory) cannot replicate.
user-invocable: false
metadata:
  {"openclaw": {"emoji": "🕸️", "requires": {"env": ["MIND_API_KEY"], "bins": []}}}
---

# MIND Graph Recall Protocol

You are using MIND's true knowledge graph (LightRAG) to find memories by structure, not just by text similarity. This is fundamentally different from vector search, and it produces far better results for relationship-shaped questions.

## When to Use Graph Recall vs. Vector Recall

**Use vector recall (`mind_search`) when:**
- You're looking for facts containing specific words or concepts
- The question is about WHAT something is
- The user asked a direct factual question

**Use graph recall (`mind_query_graph`) when:**
- The question is about relationships ("who is connected to X")
- You need to traverse from one entity to related entities
- The question implies structure ("how are these connected")
- A previous vector search returned facts but you need to find the entities they reference
- The user asks "what concepts cluster around X"

## The Graph Walk Pattern

For any graph-shaped question, follow this pattern:

### Step 1: Identify the Anchor Entity
What entity sits at the center of the question?
- "Who works with Sarah?" → anchor = Sarah
- "What concepts relate to the KGC pitch?" → anchor = KGC pitch
- "What decisions involved Anthony?" → anchor = Anthony

### Step 2: Choose Traversal Mode
- **Local mode**: stay within 1-2 hops of the anchor (focused)
- **Global mode**: synthesize across the full graph context (broad)
- **Mix mode**: balance both (the default for ambiguous questions)

### Step 3: Execute the Query
```
mind_query_graph(
  question: "<the user's question, rephrased to make the entity explicit>",
  mode: "local" | "global" | "mix",
  depth: 2  // hops, default
)
```
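
Mode selection is a judgment call, but as an illustration it can be reduced to a keyword heuristic. The phrase lists below are invented for this sketch and are not part of MIND:

```python
def choose_mode(question: str) -> str:
    """Illustrative Step 2 heuristic; falls back to 'mix' when unsure."""
    q = question.lower()
    if any(p in q for p in ("connected to", "who works with", "network")):
        return "local"   # focused 1-2 hop walk around one entity
    if any(p in q for p in ("cluster", "overall", "themes", "landscape")):
        return "global"  # synthesize across the whole graph
    return "mix"         # balanced default for ambiguous questions

print(choose_mode("Who is connected to Sarah from Acme Capital?"))       # local
print(choose_mode("What concepts cluster around our patent strategy?"))  # global
print(choose_mode("Why did we choose LightRAG?"))                        # mix
```

Note how the three outputs line up with the three worked examples in the Examples section.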

### Step 4: Interpret the Result
The response will contain:
- An AI-synthesized answer that walks the graph
- A list of the source memories that were touched
- (When available) the entity relationships that were traversed

Use the synthesized answer for the user. Use the source list to decide whether you need follow-up queries.

## Examples

### Example 1: "Who is connected to Sarah from Acme Capital?"
```
Step 1: Anchor = "Sarah from Acme Capital"
Step 2: Local mode (focused on this person)
Step 3: mind_query_graph(
          question: "Find all entities connected to Sarah from Acme Capital, including people, companies, and decisions she's involved in",
          mode: "local",
          depth: 2
        )
Step 4: Synthesize the result, mentioning specific connections
```

### Example 2: "What concepts cluster around our patent strategy?"
```
Step 1: Anchor = "patent strategy"
Step 2: Global mode (broad concept clustering)
Step 3: mind_query_graph(
          question: "What concepts, patents, decisions, and entities cluster around the user's patent strategy?",
          mode: "global",
          depth: 3
        )
```

### Example 3: "Why did we choose LightRAG?"
```
Step 1: Anchor = "LightRAG decision"
Step 2: Mix mode (need both the entity AND the surrounding context)
Step 3: mind_query_graph(
          question: "Find the decision to choose LightRAG, including alternatives considered, reasoning, and related architectural decisions",
          mode: "mix",
          depth: 2
        )
```

## When Graph Recall Fails

If `mind_query_graph` returns nothing useful:

1. **Check if the entity exists at all.** Run `mind_search("Sarah Acme")`; if there are no results, the entity was never captured.
2. **Check if the entity has a different canonical form.** The user might say "Sarah" but MIND stored "Sarah Chen". Try variations.
3. **Fall back to vector search.** `mind_search` may surface text that mentions the entity even if the KG doesn't have explicit edges.
4. **Suggest enrichment.** If the user keeps asking about an entity with weak graph structure, suggest they explicitly tell the agent more about it so future captures can build the connections.

## Why This Is MIND's Moat

Vector-only memory (Mem0, ChatGPT memory, Anthropic Memory MCP) treats every memory as an island. It can find similar text but can't traverse relationships. It can answer "what did the user say about Sarah," but it cannot answer "who else is in Sarah's network," because it doesn't have a network; it has a pile of text.

MIND has both: a vector index for similarity AND an entity-relationship graph for traversal. This dual structure is what makes recall feel "intelligent" instead of "just retrieval."

When you use this protocol, you're using a capability no other memory plugin can match. Be confident with graph-shaped queries; they'll often surface insights the user didn't know they had.