@mrc2204/agent-smart-memo 4.0.0 → 4.0.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,7 +1,7 @@
  {
- "_comment": "Add this to your ~/.openclaw/openclaw.json under plugins section",
-
- "plugins_config_example": {
+ "_readme": "Copy the 'plugins' section below into your ~/.openclaw/openclaw.json",
+
+ "plugins": {
  "allow": ["agent-smart-memo"],
  "slots": {
  "memory": "agent-smart-memo"
@@ -10,31 +10,26 @@
  "agent-smart-memo": {
  "enabled": true,
  "config": {
- "_section_qdrant": "=== Qdrant Vector Database ===",
  "qdrantHost": "localhost",
  "qdrantPort": 6333,
- "qdrantCollection": "mrc_bot_memory",
-
- "_section_llm": "=== LLM for Auto-Capture ===",
- "llmBaseUrl": "http://localhost:8317/v1",
- "llmApiKey": "your-api-key-here",
- "llmModel": "gemini-2.5-flash",
-
- "_section_embed": "=== Embedding Model (Ollama) ===",
+ "qdrantCollection": "openclaw_memory",
+
+ "llmBaseUrl": "https://api.openai.com/v1",
+ "llmApiKey": "sk-your-api-key-here",
+ "llmModel": "gpt-4o-mini",
+
  "embedBaseUrl": "http://localhost:11434",
  "embedModel": "mxbai-embed-large",
  "embedDimensions": 1024,
-
- "_section_slots": "=== Slot Memory ===",
- "slotCategories": ["profile", "preferences", "project", "environment", "custom"],
- "maxSlots": 500,
- "injectStateTokenBudget": 500,
-
- "_section_capture": "=== Auto-Capture ===",
+
  "autoCaptureEnabled": true,
  "autoCaptureMinConfidence": 0.7,
  "contextWindowMaxTokens": 12000,
- "summarizeEveryActions": 6
+ "summarizeEveryActions": 6,
+
+ "slotCategories": ["profile", "preferences", "project", "environment", "custom"],
+ "maxSlots": 500,
+ "injectStateTokenBudget": 500
  }
  }
  }
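Before merging the renamed `plugins` block above into `~/.openclaw/openclaw.json`, it helps to confirm the edited fragment is still strict JSON. A minimal sketch, assuming `python3` is on your PATH; the temp path and the pared-down fragment are illustrative, not plugin output:

```shell
# Write a pared-down copy of the example fragment to a temp file,
# then validate it as strict JSON before merging it into openclaw.json.
cat > /tmp/smart-memo-fragment.json <<'EOF'
{
  "plugins": {
    "allow": ["agent-smart-memo"],
    "slots": { "memory": "agent-smart-memo" }
  }
}
EOF
python3 -m json.tool /tmp/smart-memo-fragment.json > /dev/null && echo "fragment OK"
```

If the file fails to parse (a stray trailing comma is the usual culprit), `json.tool` prints the error location instead of `fragment OK`.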
package/README.md CHANGED
@@ -1,18 +1,17 @@
  # @mrc2204/agent-smart-memo

- 🧠 **Smart Memory Plugin for OpenClaw** — Structured slot memory with auto-capture, auto-recall, essence distillation, and Qdrant vector search.
+ 🧠 **Smart Memory Plugin for [OpenClaw](https://openclaw.ai)** — Give your AI agents persistent, intelligent memory.

- ## Features
+ Your agents forget everything after each conversation. This plugin fixes that.

- - **Auto-Capture** — Automatically extracts facts from conversations using LLM
- - **Auto-Recall** — Injects relevant context into agent sessions
- - **Essence Distillation** — Distills raw facts into decision-grade, terse memory (V4)
- - **Slot Memory** — Structured key-value state management (profile, preferences, project, etc.)
- - **Vector Search** — Semantic memory search via Qdrant
- - **Smart Routing** — Auto-routes memory by agent type:
- - 🐂 Trader → `market_signal` mode
- - 🎯 Scrum/Fullstack/Creator → `requirements` mode
- - 📚 Learning content → `principles` mode
+ ## What it does
+
+ - **Auto-Capture** — Automatically extracts important facts from every conversation (names, preferences, decisions, project status, etc.)
+ - **Auto-Recall** — Injects relevant memories into agent context before each response — agents "remember" without being told
+ - **Essence Distillation** — Filters noise, keeps only decision-grade facts. Your agent's memory stays clean and useful
+ - **Slot Memory** — Structured key-value storage organized by categories (profile, preferences, project, environment)
+ - **Vector Search** — Find semantically similar memories using Qdrant
+ - **Multi-Agent Support** — Each agent maintains its own memory scope, no cross-contamination

  ## Installation

@@ -20,41 +19,46 @@
  openclaw plugins install @mrc2204/agent-smart-memo
  ```

- ## Configuration
+ ## Quick Start
+
+ ### 1. Prerequisites
+
+ You need two services running:
+
+ | Service | What for | Install |
+ |---------|----------|---------|
+ | [Qdrant](https://qdrant.tech/documentation/quick-start/) | Stores memory vectors | `docker run -d -p 6333:6333 qdrant/qdrant` |
+ | [Ollama](https://ollama.ai) | Generates text embeddings | [Download](https://ollama.ai/download) then `ollama pull mxbai-embed-large` |
+
+ ### 2. Configure

  Add to your `~/.openclaw/openclaw.json`:

  ```json5
  {
  plugins: {
- allow: ["agent-smart-memo"], // Trust the plugin
+ allow: ["agent-smart-memo"],
  slots: {
- memory: "agent-smart-memo" // Use as memory provider
+ memory: "agent-smart-memo"
  },
  entries: {
  "agent-smart-memo": {
  enabled: true,
  config: {
- // Qdrant vector database
+ // Required: Qdrant connection
  qdrantHost: "localhost",
  qdrantPort: 6333,
- qdrantCollection: "mrc_bot_memory",
-
- // LLM for auto-capture extraction
- llmBaseUrl: "http://localhost:8317/v1",
- llmApiKey: "your-api-key",
- llmModel: "gemini-2.5-flash",
-
- // Embedding model (Ollama)
+ qdrantCollection: "openclaw_memory",
+
+ // Required: Any OpenAI-compatible API for fact extraction
+ llmBaseUrl: "https://api.openai.com/v1",
+ llmApiKey: "sk-...",
+ llmModel: "gpt-4o-mini",
+
+ // Required: Ollama for embeddings
  embedBaseUrl: "http://localhost:11434",
  embedModel: "mxbai-embed-large",
- embedDimensions: 1024,
-
- // Auto-capture settings
- autoCaptureEnabled: true,
- autoCaptureMinConfidence: 0.7,
- contextWindowMaxTokens: 12000,
- summarizeEveryActions: 6
+ embedDimensions: 1024
  }
  }
  }
@@ -62,87 +66,127 @@ Add to your `~/.openclaw/openclaw.json`:
  }
  ```

- ### Minimal Config
+ ### 3. Done!

- ```json5
- {
- plugins: {
- allow: ["agent-smart-memo"],
- slots: { memory: "agent-smart-memo" },
- entries: {
- "agent-smart-memo": {
- enabled: true,
- config: {
- qdrantHost: "localhost",
- qdrantPort: 6333,
- llmBaseUrl: "http://localhost:8317/v1",
- llmApiKey: "your-api-key"
- }
- }
- }
- }
- }
- ```
+ Start chatting with your agent. Memories are captured automatically.

- ## Prerequisites
+ ## Configuration Options

- | Service | Purpose | Default |
- |---------|---------|---------|
- | [Qdrant](https://qdrant.tech) | Vector database for semantic memory | `localhost:6333` |
- | LLM API | Fact extraction (OpenAI-compatible) | `localhost:8317/v1` |
- | [Ollama](https://ollama.ai) | Embedding model | `localhost:11434` |
+ | Option | Type | Default | Description |
+ |--------|------|---------|-------------|
+ | `qdrantHost` | string | `"localhost"` | Qdrant server hostname |
+ | `qdrantPort` | number | `6333` | Qdrant server port |
+ | `qdrantCollection` | string | `"openclaw_memory"` | Qdrant collection name |
+ | `llmBaseUrl` | string | — | OpenAI-compatible API base URL |
+ | `llmApiKey` | string | — | API key for the LLM |
+ | `llmModel` | string | `"gpt-4o-mini"` | Model for fact extraction |
+ | `embedBaseUrl` | string | `"http://localhost:11434"` | Ollama base URL |
+ | `embedModel` | string | `"mxbai-embed-large"` | Embedding model name |
+ | `embedDimensions` | number | `1024` | Embedding vector dimensions |
+ | `autoCaptureEnabled` | boolean | `true` | Enable automatic fact extraction |
+ | `autoCaptureMinConfidence` | number | `0.7` | Minimum confidence to store a fact (0-1) |
+ | `contextWindowMaxTokens` | number | `12000` | Max tokens sent to LLM for extraction |
+ | `summarizeEveryActions` | number | `6` | Auto-summarize project state every N turns |
+ | `slotCategories` | string[] | `["profile","preferences","project","environment","custom"]` | Allowed slot categories |
+ | `maxSlots` | number | `500` | Max slots per agent+user scope |
+ | `injectStateTokenBudget` | number | `500` | Max tokens for auto-recall context injection |

- ### Quick setup
+ See [CONFIG.example.json](./CONFIG.example.json) for a copy-paste template.

- ```bash
- # Start Qdrant
- docker run -d --name qdrant -p 6333:6333 qdrant/qdrant
+ ## How It Works

- # Pull embedding model
- ollama pull mxbai-embed-large
+ ```
+ User sends message → Agent responds
+
+ [agent_end event]
+
+ Auto-Capture extracts facts
+ using LLM + Essence Distillation
+
+ Facts stored in SlotDB + Qdrant
+
+ Next conversation starts
+
+ Auto-Recall searches relevant memories
+
+ Context injected into agent prompt
+
+ Agent "remembers" previous conversations ✨
  ```

- ## Available Tools
+ ### Essence Distillation Modes

- | Tool | Description |
- |------|-------------|
- | `memory_search` | Semantic search across stored memories |
- | `memory_store` | Store a new memory with vector embedding |
- | `memory_auto_capture` | Manually trigger fact extraction |
- | `memory_slot_get` | Get slot value(s) |
- | `memory_slot_set` | Set a slot value |
- | `memory_slot_delete` | Delete a slot |
- | `memory_slot_list` | List all slots |
- | `memory_graph_*` | Knowledge graph operations |
+ The plugin automatically selects a distillation mode based on content:

- ## Configuration Reference
+ | Mode | When | What it keeps |
+ |------|------|---------------|
+ | `general` | Default | Decision-grade facts, rules, configurations |
+ | `principles` | Learning content detected | Invariant principles, atomic rules |
+ | `requirements` | Technical discussions | Non-negotiable constraints, specs |
+ | `market_signal` | Trading/market content | Directional signals, risk levels, triggers |

- See [CONFIG.example.json](./CONFIG.example.json) for all available options with descriptions.
+ ## Available Tools

- ## Update
+ These tools are automatically registered and available to your agents:
+
+ | Tool | Description |
+ |------|-------------|
+ | `memory_search` | Semantic search across all stored memories |
+ | `memory_store` | Manually store a memory with vector embedding |
+ | `memory_auto_capture` | Manually trigger fact extraction on text |
+ | `memory_slot_get` | Read slot value(s) by key or category |
+ | `memory_slot_set` | Write a structured slot value |
+ | `memory_slot_delete` | Remove a slot |
+ | `memory_slot_list` | List all slots for current scope |
+ | `memory_graph_add` | Add a knowledge graph relation |
+ | `memory_graph_query` | Query the knowledge graph |
+
+ ## LLM Compatibility
+
+ Any OpenAI-compatible chat completions API works:
+
+ | Provider | `llmBaseUrl` | `llmModel` |
+ |----------|-------------|------------|
+ | OpenAI | `https://api.openai.com/v1` | `gpt-4o-mini` |
+ | Anthropic (via proxy) | Your proxy URL | `claude-sonnet-4-20250514` |
+ | Local (Ollama) | `http://localhost:11434/v1` | `llama3.2` |
+ | OpenRouter | `https://openrouter.ai/api/v1` | `google/gemini-2.5-flash` |
+ | Any proxy | Your proxy URL | Your model |
+
+ ## Commands

  ```bash
+ # Install
+ openclaw plugins install @mrc2204/agent-smart-memo
+
+ # Update to latest version
  openclaw plugins update agent-smart-memo
- ```

- ## Uninstall
+ # Check status
+ openclaw plugins info agent-smart-memo

- ```bash
+ # Uninstall
  openclaw plugins uninstall agent-smart-memo
  ```

  ## Development

  ```bash
+ # Clone
  git clone https://github.com/cong91/agent-smart-memo.git
  cd agent-smart-memo
+
+ # Install & build
  npm install
  npm run build

- # Install locally for development
+ # Link for local development (changes apply immediately)
  openclaw plugins install -l .
+
+ # Run tests
+ npm test
  ```

  ## License

- MIT © mrc2204
+ MIT © [mrc2204](https://github.com/cong91)
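The README's `autoCaptureMinConfidence` option (default `0.7`) gates which extracted facts get stored. A toy shell sketch of that threshold check; the fact names and confidence scores below are invented examples, not plugin output:

```shell
# Toy model of the autoCaptureMinConfidence gate (default threshold 0.7).
# Fact names and scores are made-up illustrations.
threshold="0.7"
printf '%s\n' "user_name 0.95" "vague_hunch 0.40" "project_status 0.80" |
while read -r fact score; do
  # awk handles the floating-point comparison; exit status 0 means "store"
  if awk -v s="$score" -v t="$threshold" 'BEGIN { exit !(s >= t) }'; then
    echo "store: $fact"
  else
    echo "skip: $fact"
  fi
done
```

Lowering the threshold stores more (noisier) facts; raising it keeps memory terse at the risk of dropping useful ones.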
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@mrc2204/agent-smart-memo",
- "version": "4.0.0",
+ "version": "4.0.1",
  "description": "Smart Memory Plugin for OpenClaw \u2014 structured slot memory with auto-capture, auto-recall, essence distillation, and Qdrant vector search",
  "type": "module",
  "main": "dist/index.js",