@mrc2204/agent-smart-memo 4.0.0 → 4.0.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CONFIG.example.json CHANGED
@@ -1,7 +1,7 @@
  {
- "_comment": "Add this to your ~/.openclaw/openclaw.json under plugins section",
-
- "plugins_config_example": {
+ "_readme": "Copy the 'plugins' section below into your ~/.openclaw/openclaw.json",
+
+ "plugins": {
  "allow": ["agent-smart-memo"],
  "slots": {
  "memory": "agent-smart-memo"
@@ -10,31 +10,26 @@
  "agent-smart-memo": {
  "enabled": true,
  "config": {
- "_section_qdrant": "=== Qdrant Vector Database ===",
  "qdrantHost": "localhost",
  "qdrantPort": 6333,
- "qdrantCollection": "mrc_bot_memory",
-
- "_section_llm": "=== LLM for Auto-Capture ===",
- "llmBaseUrl": "http://localhost:8317/v1",
- "llmApiKey": "your-api-key-here",
- "llmModel": "gemini-2.5-flash",
-
- "_section_embed": "=== Embedding Model (Ollama) ===",
+ "qdrantCollection": "openclaw_memory",
+
+ "llmBaseUrl": "https://api.openai.com/v1",
+ "llmApiKey": "sk-your-api-key-here",
+ "llmModel": "gpt-4o-mini",
+
  "embedBaseUrl": "http://localhost:11434",
  "embedModel": "mxbai-embed-large",
  "embedDimensions": 1024,
-
- "_section_slots": "=== Slot Memory ===",
- "slotCategories": ["profile", "preferences", "project", "environment", "custom"],
- "maxSlots": 500,
- "injectStateTokenBudget": 500,
-
- "_section_capture": "=== Auto-Capture ===",
+
  "autoCaptureEnabled": true,
  "autoCaptureMinConfidence": 0.7,
  "contextWindowMaxTokens": 12000,
- "summarizeEveryActions": 6
+ "summarizeEveryActions": 6,
+
+ "slotCategories": ["profile", "preferences", "project", "environment", "custom"],
+ "maxSlots": 500,
+ "injectStateTokenBudget": 500
  }
  }
  }
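For reference, applying the hunks above yields the full 4.0.2 example config as sketched below. Lines outside the hunks (the `"entries"` wrapper and the trailing closing braces) are not shown in the diff and are inferred from the README's matching `json5` example, so treat them as a best-effort reconstruction rather than the published file verbatim:

```json
{
  "_readme": "Copy the 'plugins' section below into your ~/.openclaw/openclaw.json",

  "plugins": {
    "allow": ["agent-smart-memo"],
    "slots": {
      "memory": "agent-smart-memo"
    },
    "entries": {
      "agent-smart-memo": {
        "enabled": true,
        "config": {
          "qdrantHost": "localhost",
          "qdrantPort": 6333,
          "qdrantCollection": "openclaw_memory",

          "llmBaseUrl": "https://api.openai.com/v1",
          "llmApiKey": "sk-your-api-key-here",
          "llmModel": "gpt-4o-mini",

          "embedBaseUrl": "http://localhost:11434",
          "embedModel": "mxbai-embed-large",
          "embedDimensions": 1024,

          "autoCaptureEnabled": true,
          "autoCaptureMinConfidence": 0.7,
          "contextWindowMaxTokens": 12000,
          "summarizeEveryActions": 6,

          "slotCategories": ["profile", "preferences", "project", "environment", "custom"],
          "maxSlots": 500,
          "injectStateTokenBudget": 500
        }
      }
    }
  }
}
```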
package/README.md CHANGED
@@ -1,18 +1,17 @@
  # @mrc2204/agent-smart-memo

- 🧠 **Smart Memory Plugin for OpenClaw** — Structured slot memory with auto-capture, auto-recall, essence distillation, and Qdrant vector search.
+ 🧠 **Smart Memory Plugin for [OpenClaw](https://openclaw.ai)** — Give your AI agents persistent, intelligent memory.

- ## Features
+ Your agents forget everything after each conversation. This plugin fixes that.

- - **Auto-Capture** — Automatically extracts facts from conversations using LLM
- - **Auto-Recall** — Injects relevant context into agent sessions
- - **Essence Distillation** — Distills raw facts into decision-grade, terse memory (V4)
- - **Slot Memory** — Structured key-value state management (profile, preferences, project, etc.)
- - **Vector Search** — Semantic memory search via Qdrant
- - **Smart Routing** — Auto-routes memory by agent type:
-   - 🐂 Trader → `market_signal` mode
-   - 🎯 Scrum/Fullstack/Creator → `requirements` mode
-   - 📚 Learning content → `principles` mode
+ ## What it does
+
+ - **Auto-Capture** — Automatically extracts important facts from every conversation (names, preferences, decisions, project status, etc.)
+ - **Auto-Recall** — Injects relevant memories into agent context before each response — agents "remember" without being told
+ - **Essence Distillation** — Filters noise, keeps only decision-grade facts. Your agent's memory stays clean and useful
+ - **Slot Memory** — Structured key-value storage organized by categories (profile, preferences, project, environment)
+ - **Vector Search** — Find semantically similar memories using Qdrant
+ - **Multi-Agent Support** — Each agent maintains its own memory scope, no cross-contamination

  ## Installation

@@ -20,41 +19,46 @@
  openclaw plugins install @mrc2204/agent-smart-memo
  ```

- ## Configuration
+ ## Quick Start
+
+ ### 1. Prerequisites
+
+ You need two services running:
+
+ | Service | What for | Install |
+ |---------|----------|---------|
+ | [Qdrant](https://qdrant.tech/documentation/quick-start/) | Stores memory vectors | `docker run -d -p 6333:6333 qdrant/qdrant` |
+ | [Ollama](https://ollama.ai) | Generates text embeddings | [Download](https://ollama.ai/download) then `ollama pull mxbai-embed-large` |
+
+ ### 2. Configure

  Add to your `~/.openclaw/openclaw.json`:

  ```json5
  {
  plugins: {
- allow: ["agent-smart-memo"], // Trust the plugin
+ allow: ["agent-smart-memo"],
  slots: {
- memory: "agent-smart-memo" // Use as memory provider
+ memory: "agent-smart-memo"
  },
  entries: {
  "agent-smart-memo": {
  enabled: true,
  config: {
- // Qdrant vector database
+ // Required: Qdrant connection
  qdrantHost: "localhost",
  qdrantPort: 6333,
- qdrantCollection: "mrc_bot_memory",
-
- // LLM for auto-capture extraction
- llmBaseUrl: "http://localhost:8317/v1",
- llmApiKey: "your-api-key",
- llmModel: "gemini-2.5-flash",
-
- // Embedding model (Ollama)
+ qdrantCollection: "openclaw_memory",
+
+ // Required: Any OpenAI-compatible API for fact extraction
+ llmBaseUrl: "https://api.openai.com/v1",
+ llmApiKey: "sk-...",
+ llmModel: "gpt-4o-mini",
+
+ // Required: Ollama for embeddings
  embedBaseUrl: "http://localhost:11434",
  embedModel: "mxbai-embed-large",
- embedDimensions: 1024,
-
- // Auto-capture settings
- autoCaptureEnabled: true,
- autoCaptureMinConfidence: 0.7,
- contextWindowMaxTokens: 12000,
- summarizeEveryActions: 6
+ embedDimensions: 1024
  }
  }
  }
@@ -62,87 +66,129 @@ Add to your `~/.openclaw/openclaw.json`:
  }
  ```

- ### Minimal Config
+ ### 3. Done!

- ```json5
- {
- plugins: {
- allow: ["agent-smart-memo"],
- slots: { memory: "agent-smart-memo" },
- entries: {
- "agent-smart-memo": {
- enabled: true,
- config: {
- qdrantHost: "localhost",
- qdrantPort: 6333,
- llmBaseUrl: "http://localhost:8317/v1",
- llmApiKey: "your-api-key"
- }
- }
- }
- }
- }
- ```
+ Start chatting with your agent. Memories are captured automatically.

- ## Prerequisites
+ ## Configuration Options

- | Service | Purpose | Default |
- |---------|---------|---------|
- | [Qdrant](https://qdrant.tech) | Vector database for semantic memory | `localhost:6333` |
- | LLM API | Fact extraction (OpenAI-compatible) | `localhost:8317/v1` |
- | [Ollama](https://ollama.ai) | Embedding model | `localhost:11434` |
+ | Option | Type | Default | Description |
+ |--------|------|---------|-------------|
+ | `qdrantHost` | string | `"localhost"` | Qdrant server hostname |
+ | `qdrantPort` | number | `6333` | Qdrant server port |
+ | `qdrantCollection` | string | `"openclaw_memory"` | Qdrant collection name |
+ | `llmBaseUrl` | string | — | OpenAI-compatible API base URL |
+ | `llmApiKey` | string | — | API key for the LLM |
+ | `llmModel` | string | `"gpt-4o-mini"` | Model for fact extraction |
+ | `embedBaseUrl` | string | `"http://localhost:11434"` | Ollama base URL |
+ | `embedModel` | string | `"mxbai-embed-large"` | Embedding model name |
+ | `embedDimensions` | number | `1024` | Embedding vector dimensions |
+ | `autoCaptureEnabled` | boolean | `true` | Enable automatic fact extraction |
+ | `autoCaptureMinConfidence` | number | `0.7` | Minimum confidence to store a fact (0-1) |
+ | `contextWindowMaxTokens` | number | `12000` | Max tokens sent to LLM for extraction |
+ | `summarizeEveryActions` | number | `6` | Auto-summarize project state every N turns |
+ | `slotCategories` | string[] | `["profile","preferences","project","environment","custom"]` | Allowed slot categories |
+ | `maxSlots` | number | `500` | Max slots per agent+user scope |
+ | `injectStateTokenBudget` | number | `500` | Max tokens for auto-recall context injection |

- ### Quick setup
+ See [CONFIG.example.json](./CONFIG.example.json) for a copy-paste template.

- ```bash
- # Start Qdrant
- docker run -d --name qdrant -p 6333:6333 qdrant/qdrant
+ ## How It Works

- # Pull embedding model
- ollama pull mxbai-embed-large
+ ```
+ User sends message → Agent responds
+
+ [agent_end event]
+
+ Auto-Capture extracts facts
+ using LLM + Essence Distillation
+
+ Facts stored in SlotDB + Qdrant
+
+ Next conversation starts
+
+ Auto-Recall searches relevant memories
+
+ Context injected into agent prompt
+
+ Agent "remembers" previous conversations ✨
  ```

- ## Available Tools
+ ### Essence Distillation Modes

- | Tool | Description |
- |------|-------------|
- | `memory_search` | Semantic search across stored memories |
- | `memory_store` | Store a new memory with vector embedding |
- | `memory_auto_capture` | Manually trigger fact extraction |
- | `memory_slot_get` | Get slot value(s) |
- | `memory_slot_set` | Set a slot value |
- | `memory_slot_delete` | Delete a slot |
- | `memory_slot_list` | List all slots |
- | `memory_graph_*` | Knowledge graph operations |
+ The plugin automatically detects what kind of content is being discussed and applies the right distillation mode:
+
+ | Mode | Auto-detected when... | What it keeps |
+ |------|----------------------|---------------|
+ | `general` | Most conversations | Key decisions, rules, configurations |
+ | `principles` | Learning or teaching content | Core principles, atomic rules |
+ | `requirements` | Technical specs or constraints | Measurable requirements, acceptance criteria |
+ | `market_signal` | Financial or market discussions | Actionable signals, risk levels, triggers |

- ## Configuration Reference
+ Modes are inferred automatically — no configuration needed.

- See [CONFIG.example.json](./CONFIG.example.json) for all available options with descriptions.
+ ## Available Tools

- ## Update
+ These tools are automatically registered and available to your agents:
+
+ | Tool | Description |
+ |------|-------------|
+ | `memory_search` | Semantic search across all stored memories |
+ | `memory_store` | Manually store a memory with vector embedding |
+ | `memory_auto_capture` | Manually trigger fact extraction on text |
+ | `memory_slot_get` | Read slot value(s) by key or category |
+ | `memory_slot_set` | Write a structured slot value |
+ | `memory_slot_delete` | Remove a slot |
+ | `memory_slot_list` | List all slots for current scope |
+ | `memory_graph_add` | Add a knowledge graph relation |
+ | `memory_graph_query` | Query the knowledge graph |
+
+ ## LLM Compatibility
+
+ Any OpenAI-compatible chat completions API works:
+
+ | Provider | `llmBaseUrl` | `llmModel` |
+ |----------|-------------|------------|
+ | OpenAI | `https://api.openai.com/v1` | `gpt-4o-mini` |
+ | Anthropic (via proxy) | Your proxy URL | `claude-sonnet-4-20250514` |
+ | Local (Ollama) | `http://localhost:11434/v1` | `llama3.2` |
+ | OpenRouter | `https://openrouter.ai/api/v1` | `google/gemini-2.5-flash` |
+ | Any proxy | Your proxy URL | Your model |
+
+ ## Commands

  ```bash
+ # Install
+ openclaw plugins install @mrc2204/agent-smart-memo
+
+ # Update to latest version
  openclaw plugins update agent-smart-memo
- ```

- ## Uninstall
+ # Check status
+ openclaw plugins info agent-smart-memo

- ```bash
+ # Uninstall
  openclaw plugins uninstall agent-smart-memo
  ```

  ## Development

  ```bash
+ # Clone
  git clone https://github.com/cong91/agent-smart-memo.git
  cd agent-smart-memo
+
+ # Install & build
  npm install
  npm run build

- # Install locally for development
+ # Link for local development (changes apply immediately)
  openclaw plugins install -l .
+
+ # Run tests
+ npm test
  ```

  ## License

- MIT © mrc2204
+ MIT © [mrc2204](https://github.com/cong91)
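Read together, the new README's Quick Start reduces to a small working config. A minimal sketch, assuming every key omitted here falls back to the defaults listed in the Configuration Options table, and that the `llm*` values are replaced with your provider's actual endpoint, key, and model:

```json5
{
  plugins: {
    allow: ["agent-smart-memo"],
    slots: { memory: "agent-smart-memo" },
    entries: {
      "agent-smart-memo": {
        enabled: true,
        config: {
          // Qdrant and Ollama defaults, shown explicitly
          qdrantHost: "localhost",
          qdrantPort: 6333,
          embedBaseUrl: "http://localhost:11434",
          embedModel: "mxbai-embed-large",
          // No defaults — fill these in for your provider
          llmBaseUrl: "https://api.openai.com/v1",
          llmApiKey: "sk-..."
        }
      }
    }
  }
}
```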
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@mrc2204/agent-smart-memo",
- "version": "4.0.0",
+ "version": "4.0.2",
  "description": "Smart Memory Plugin for OpenClaw \u2014 structured slot memory with auto-capture, auto-recall, essence distillation, and Qdrant vector search",
  "type": "module",
  "main": "dist/index.js",