memory-lancedb-pro 1.0.26 → 1.1.0-beta.2
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG-v1.1.0.md +227 -0
- package/CHANGELOG.md +23 -0
- package/README.md +82 -0
- package/README_CN.md +82 -0
- package/index.ts +106 -11
- package/openclaw.plugin.json +69 -1
- package/package.json +1 -1
- package/src/access-tracker.ts +13 -3
- package/src/decay-engine.ts +227 -0
- package/src/extraction-prompts.ts +205 -0
- package/src/llm-client.ts +92 -0
- package/src/memory-categories.ts +69 -0
- package/src/retriever.ts +152 -4
- package/src/smart-extractor.ts +524 -0
- package/src/tier-manager.ts +189 -0
# memory-lancedb-pro v1.1.0 — Smart Memory Enhancements

> **Date**: 2026-03-03
> **Author**: CJY
> **Overview**: Informed by a close study of AI-agent memory systems, this release comprehensively improves memory write quality, lifecycle management, and deduplication.

---

## 1. Motivation

The existing memory system performs well on the **retrieval side** (Vector+BM25 hybrid retrieval, cross-encoder reranking, multi-dimensional scoring), but leaves room for improvement in:

- **Write quality**: regex-triggered capture easily misses valuable information or captures noise
- **Structure**: flat text storage with no layered indexing
- **Lifecycle**: simple time decay that cannot model how human memory forgets and reinforces
- **Deduplication**: coarse vector-similarity-only dedup with no semantic-level judgment

This release systematically strengthens each of these dimensions.

---

## 2. Summary of Changes

| Dimension | Core change | Effect |
| --- | --- | --- |
| Smart extraction | LLM-driven 6-category extraction + L0/L1/L2 layered storage | More precise writes, richer structure |
| Lifecycle management | Weibull decay model + three-tier promotion/demotion | Important memories persist; stale ones fade naturally |
| Smart dedup | Vector pre-filter + LLM semantic decision | Avoids redundant memories; supports merging as information evolves |

---

## 3. New Files

### 1. `src/memory-categories.ts` — 6-category classification system

A semantically explicit taxonomy that splits memories into two groups and six categories:

- **User memories**: `profile` (identity attributes), `preferences` (habits and preferences), `entities` (persistent entities), `events` (things that happened)
- **Agent memories**: `cases` (problem–solution pairs), `patterns` (reusable workflows)

Each category has its own merge strategy:

- `profile` → always merge (identity information accumulates over time)
- `preferences` / `entities` / `patterns` → smart merge supported
- `events` / `cases` → create or skip only (independent records; history stays intact)
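A minimal sketch of the per-category merge policy described above (the type and constant names are illustrative, not the actual exports of `src/memory-categories.ts`):

```typescript
type MemoryCategory =
  | "profile" | "preferences" | "entities" | "events" | "cases" | "patterns";

type MergeStrategy = "always_merge" | "smart_merge" | "create_or_skip";

// Per-category merge policy, mirroring the list above
const MERGE_STRATEGIES: Record<MemoryCategory, MergeStrategy> = {
  profile: "always_merge",     // identity info accumulates over time
  preferences: "smart_merge",
  entities: "smart_merge",
  patterns: "smart_merge",
  events: "create_or_skip",    // independent records, keep history intact
  cases: "create_or_skip",
};

// The dedup decision space narrows with the strategy: categories that
// never merge can only CREATE or SKIP.
function allowedActions(category: MemoryCategory): string[] {
  switch (MERGE_STRATEGIES[category]) {
    case "always_merge": return ["MERGE"];
    case "smart_merge": return ["CREATE", "MERGE", "SKIP"];
    case "create_or_skip": return ["CREATE", "SKIP"];
  }
}
```

This keeps the strategy table in one place, so the dedup step can be category-aware without scattering conditionals.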
---

### 2. `src/llm-client.ts` — LLM client

Wraps the LLM call interface, focused on structured JSON output:

- Reuses the existing OpenAI SDK dependency; zero new packages
- Built-in fault-tolerant JSON parsing: handles markdown code-fence wrapping and balanced-brace extraction
- Low temperature (0.1) for output consistency
- 30-second timeout, with graceful degradation on failure

---

### 3. `src/extraction-prompts.ts` — memory extraction prompt templates

Three carefully designed prompt templates:

| Function | Purpose |
| --- | --- |
| `buildExtractionPrompt()` | Extract 6-category L0/L1/L2 memories from a conversation, with few-shot examples |
| `buildDedupPrompt()` | CREATE / MERGE / SKIP dedup decision |
| `buildMergePrompt()` | Merge old and new memories into the three-layer structure |

The extraction prompt includes full criteria for judging memory value, a category decision-logic table, disambiguation rules for common confusions, and 6 few-shot examples.

---

### 4. `src/smart-extractor.ts` — smart extraction pipeline

A complete LLM-driven extraction pipeline:

```
conversation text → LLM extraction → candidate memories → vector dedup → LLM decision → persist
```

Core design:

- **Two-stage dedup**: vector similarity (threshold 0.7) quickly filters candidates, then an LLM makes the semantic-level call
- **Category-aware merging**: each category applies its own merge strategy
- **L0/L1/L2 three-layer storage**: L0 is a one-line index injected at retrieval time, L1 a structured summary for close reading, L2 the full narrative for deep review
- **Backward compatible**: the new 6 categories map automatically onto the existing 5 storage categories; L0/L1/L2 live in the metadata JSON
- **Per-category importance**: profile (0.9) > patterns (0.85) > cases/preferences (0.8) > entities (0.7) > events (0.6)
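The two-stage dedup can be sketched as follows; the store and LLM interfaces here are hypothetical stand-ins, not the real `SmartExtractor` internals:

```typescript
interface Candidate { text: string; embedding: number[]; }
interface StoredMemory { id: string; text: string; similarity: number; }

type DedupAction =
  | { action: "CREATE" }
  | { action: "SKIP" }
  | { action: "MERGE"; targetId: string };

async function decideDedup(
  candidate: Candidate,
  searchSimilar: (embedding: number[], minSim: number) => Promise<StoredMemory[]>,
  askLlm: (candidateText: string, neighbors: StoredMemory[]) => Promise<DedupAction>,
): Promise<DedupAction> {
  // Stage 1: cheap vector pre-filter (threshold 0.7, as in the pipeline above)
  const neighbors = await searchSimilar(candidate.embedding, 0.7);
  if (neighbors.length === 0) {
    return { action: "CREATE" }; // nothing similar, so no LLM call needed
  }
  // Stage 2: LLM semantic decision, but only over the near-duplicates
  return askLlm(candidate.text, neighbors);
}
```

The point of the split is cost: the LLM is only consulted when the vector store actually finds near-duplicates.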
---

### 5. `src/decay-engine.ts` — Weibull decay engine

A composite decay model grounded in forgetting-curve research from cognitive psychology:

**composite = recencyWeight × recency + frequencyWeight × frequency + intrinsicWeight × intrinsic**

The three components:

| Component | Mechanism | Meaning |
| --- | --- | --- |
| **Recency** | Weibull stretched-exponential decay `exp(-λt^β)` | Older memories decay faster |
| **Frequency** | Log-saturating curve + time weighting | Frequently accessed memories stay active |
| **Intrinsic** | `importance × confidence` | High-value memories naturally resist forgetting |

Tier-specific decay shape (β parameter):

- **Core** (β=0.8): sub-exponential decay → extremely slow forgetting, decay floor 0.9
- **Working** (β=1.0): standard exponential decay, decay floor 0.7
- **Peripheral** (β=1.3): super-exponential decay → accelerated forgetting, decay floor 0.5

Key properties:

- **Importance-modulated half-life**: `effectiveHL = halfLife × exp(μ × importance)`, so important memories persist longer
- **Search-result weighting**: retrieval automatically applies decay weighting so active memories rank higher
- **Expiry detection**: memories with composite < 0.3 are flagged as expired
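A minimal sketch of the composite score, under assumptions: the weights (0.4/0.3/0.3), the rate constant λ, the saturation point of the frequency curve, and applying the floor to the recency component are all illustrative choices, not the shipped defaults:

```typescript
type Tier = "core" | "working" | "peripheral";

// Tier-specific Weibull shape β and decay floor, as listed above
const TIER_SHAPE: Record<Tier, { beta: number; floor: number }> = {
  core: { beta: 0.8, floor: 0.9 },
  working: { beta: 1.0, floor: 0.7 },
  peripheral: { beta: 1.3, floor: 0.5 },
};

function compositeScore(opts: {
  tier: Tier;
  ageDays: number;
  accessCount: number;
  importance: number; // 0..1
  confidence: number; // 0..1
}): number {
  const { beta, floor } = TIER_SHAPE[opts.tier];
  const lambda = 0.05; // assumed rate constant

  // Recency: Weibull stretched exponential exp(-λ t^β), floored per tier
  const recency = Math.max(floor, Math.exp(-lambda * Math.pow(opts.ageDays, beta)));

  // Frequency: log-saturating curve, reaching 1 around 100 accesses
  const frequency = Math.min(1, Math.log1p(opts.accessCount) / Math.log1p(100));

  // Intrinsic value: importance × confidence
  const intrinsic = opts.importance * opts.confidence;

  // composite = recencyW × recency + frequencyW × frequency + intrinsicW × intrinsic
  return 0.4 * recency + 0.3 * frequency + 0.3 * intrinsic;
}
```

With these numbers a fresh, important core memory scores well above 0.3, while a year-old, never-accessed peripheral memory falls under the expiry threshold.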
---

### 6. `src/tier-manager.ts` — three-tier promotion/demotion manager

Models the multi-level storage of human memory:

```
Peripheral ⟷ Working ⟷ Core
```

**Promotion conditions**:

| Direction | Condition |
| --- | --- |
| Peripheral → Working | access count ≥ 3 and decay score ≥ 0.4 |
| Working → Core | access count ≥ 10 and decay score ≥ 0.7 and importance ≥ 0.8 |

**Demotion conditions**:

| Direction | Condition |
| --- | --- |
| Working → Peripheral | decay score < 0.15, or (age > 60 days and access count < 3) |
| Core → Working | decay score < 0.15 and access count < 3 (rarely triggered) |
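The promotion/demotion rules in the tables above can be sketched as a pure decision function (thresholds taken from the tables; the function name is illustrative):

```typescript
type Tier = "peripheral" | "working" | "core";

function nextTier(
  tier: Tier,
  accessCount: number,
  decayScore: number,
  importance: number,
  ageDays: number,
): Tier {
  switch (tier) {
    case "peripheral":
      // Peripheral → Working: access count ≥ 3 and decay score ≥ 0.4
      return accessCount >= 3 && decayScore >= 0.4 ? "working" : "peripheral";
    case "working":
      // Working → Core: access ≥ 10, decay ≥ 0.7, importance ≥ 0.8
      if (accessCount >= 10 && decayScore >= 0.7 && importance >= 0.8) return "core";
      // Working → Peripheral: decayed out, or stale and rarely accessed
      if (decayScore < 0.15 || (ageDays > 60 && accessCount < 3)) return "peripheral";
      return "working";
    case "core":
      // Core → Working: decay < 0.15 and access count < 3 (rare)
      return decayScore < 0.15 && accessCount < 3 ? "working" : "core";
  }
}
```

Note the asymmetry: demotion from Core requires both low decay *and* low access, which is why it rarely triggers.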
---

## 4. Modified Files

### `index.ts` — plugin entry

#### New config options

```typescript
smartExtraction?: boolean;   // enable LLM smart extraction (default true)
llm?: {
  apiKey?: string;           // LLM API key (defaults to embedding.apiKey)
  model?: string;            // LLM model (default gpt-4o-mini)
  baseURL?: string;          // LLM API endpoint
};
extractMinMessages?: number; // minimum messages before extraction triggers (default 4)
extractMaxChars?: number;    // max characters fed to the LLM (default 8000)
```

#### `agent_end` hook changes

- When `smartExtraction` is enabled, SmartExtractor runs LLM 6-category extraction first
- When there are too few messages or SmartExtractor failed to initialize, the hook falls back to the original regex-trigger logic
- After extraction, a stats line is logged: `smart-extracted N created, M merged, K skipped`

#### `before_agent_start` hook changes

- Injected memory context now shows the L0 abstract instead of the raw text
- New 6-category labels (e.g. `[preferences:global]`)
- New tier markers (`[C]`ore / `[W]`orking / `[P]`eripheral)

---

## 5. Configuration Guide

### Minimal config (reuse an existing API key)

```json
{
  "embedding": {
    "apiKey": "${OPENAI_API_KEY}",
    "model": "text-embedding-3-small"
  },
  "smartExtraction": true
}
```

### Full config

```json
{
  "embedding": {
    "apiKey": "${OPENAI_API_KEY}",
    "model": "text-embedding-3-small"
  },
  "smartExtraction": true,
  "llm": {
    "apiKey": "${OPENAI_API_KEY}",
    "model": "gpt-4o-mini",
    "baseURL": "https://api.openai.com/v1"
  },
  "extractMinMessages": 4,
  "extractMaxChars": 8000
}
```

### Disabling smart extraction

```json
{
  "smartExtraction": false
}
```

---

## 6. Backward Compatibility

| Aspect | How compatibility is kept |
| --- | --- |
| LanceDB schema | New fields live in the `metadata` JSON; the table schema is unchanged |
| Memory categories | The new 6 categories map automatically onto the original 5 |
| Hybrid retrieval | The Vector+BM25 retrieval pipeline is fully preserved |
| Dedup logic | Only active when `smartExtraction: true` |
| Existing data | Old memories read normally; new memories carry extra L0/L1/L2 metadata |
| Configuration | Every new option has a default; works with zero configuration |
package/CHANGELOG.md CHANGED

@@ -1,5 +1,28 @@
 # Changelog
 
+## 1.1.0-beta.2 (Smart Memory Beta + Access Reinforcement)
+
+This is a **beta** release published under the npm dist-tag **`beta`** (it does not affect the stable `latest` channel).
+
+Highlights:
+- **Smart Extraction (LLM-powered)**: 6-category extraction with L0/L1/L2 metadata (falls back to regex capture when disabled or init fails)
+- **Lifecycle scoring integrated into retrieval**: decay-based score adjustment + tier floors
+- **Tier transitions (best-effort)**: bounded metadata write-backs for top results (tier / access stats)
+- **Access reinforcement for time decay**: frequently *manually recalled* memories decay more slowly (spaced-repetition style)
+  - Adds `AccessTracker` with debounced metadata write-back (accessCount / lastAccessedAt)
+  - Adds retrieval config: `reinforcementFactor` (default: 0.5) and `maxHalfLifeMultiplier` (default: 3)
+
+Notes:
+- Access reinforcement is gated to manual recall (`source: "manual"`) to avoid auto-recall strengthening noise.
+
+---
+
+## 1.1.0-beta.1 (Smart Memory Beta)
+
+- Initial beta with Smart Extraction + lifecycle components (decay engine + tier manager)
+
+---
+
 ## 1.0.26
 
 **Access Reinforcement for Time Decay**
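The `reinforcementFactor` / `maxHalfLifeMultiplier` settings in the changelog above suggest a half-life boost of roughly this shape. This is a hypothetical sketch: the function name and the log-saturating formula are assumptions, not the plugin's actual implementation; only the two defaults come from the changelog:

```typescript
// Hypothetical: stretch the decay half-life with manual-recall count,
// capped by maxHalfLifeMultiplier (spaced-repetition style).
function reinforcedHalfLife(
  baseHalfLifeDays: number,
  manualRecallCount: number,
  reinforcementFactor = 0.5, // default per the changelog
  maxHalfLifeMultiplier = 3, // default per the changelog
): number {
  // Log-saturating boost: repeated recalls have diminishing returns
  const multiplier = Math.min(
    1 + reinforcementFactor * Math.log1p(manualRecallCount),
    maxHalfLifeMultiplier,
  );
  return baseHalfLifeDays * multiplier;
}
```

Whatever the exact formula, the cap matters: without it, a heavily recalled memory would effectively never decay.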
package/README.md CHANGED

@@ -52,6 +52,82 @@ The built-in `memory-lancedb` plugin in OpenClaw provides basic vector search. *
 
 ---
 
+## 🧪 Beta: Smart Memory v1.1.0
+
+> **Status**: Beta — available on npm under the `beta` dist-tag. Stable users on `latest` are not affected.
+
+The `dev/smart-memory-v1.1.0` branch introduces three major enhancements to the memory write & retrieval pipeline:
+
+### What's New
+
+| Feature | Description |
+|---------|-------------|
+| **Smart Extraction** | LLM-powered 6-category extraction (profile, preferences, entities, events, cases, patterns) with L0/L1/L2 layered metadata. Falls back to regex capture when disabled or LLM init fails. |
+| **Lifecycle Scoring** | Weibull decay model integrated into retrieval — scores are adjusted by `max(tierFloor, decayComposite)` so frequently-accessed and high-importance memories rank higher. |
+| **Tier Management** | Three-tier system (Core → Working → Peripheral) with automatic promotion/demotion based on access frequency, composite score, and importance. |
+
+### Install the Beta
+
+```bash
+npm i memory-lancedb-pro@beta
+```
+
+Or pin the exact version:
+
+```bash
+npm i memory-lancedb-pro@1.1.0-beta.1
+```
+
+### Configuration
+
+Smart extraction is **enabled by default**. It reuses your existing embedding API key for LLM calls (or you can configure a separate LLM endpoint):
+
+```json
+{
+  "plugins.entries.memory-lancedb-pro": {
+    "config": {
+      "smartExtraction": true,
+      "llm": {
+        "apiKey": "${OPENAI_API_KEY}",
+        "model": "gpt-4o-mini",
+        "baseURL": "https://api.openai.com/v1"
+      },
+      "extractMinMessages": 4,
+      "extractMaxChars": 8000
+    }
+  }
+}
+```
+
+| Config Key | Default | Description |
+|------------|---------|-------------|
+| `smartExtraction` | `true` | Enable/disable LLM-powered extraction |
+| `llm.apiKey` | *(embedding apiKey)* | API key for extraction LLM |
+| `llm.model` | `gpt-4o-mini` | LLM model for extraction & dedup |
+| `llm.baseURL` | *(embedding baseURL)* | Base URL for LLM API |
+| `extractMinMessages` | `4` | Min conversation messages before extraction triggers |
+| `extractMaxChars` | `8000` | Max conversation chars to process |
+
+### New Files
+
+| File | Purpose |
+|------|---------|
+| `src/smart-extractor.ts` | LLM extraction pipeline: conversation → extract → dedup → persist |
+| `src/extraction-prompts.ts` | Prompt templates for extraction, dedup, and merge |
+| `src/llm-client.ts` | OpenAI-compatible LLM client with JSON parsing |
+| `src/memory-categories.ts` | 6-category classification system + merge strategies |
+| `src/decay-engine.ts` | Weibull stretched-exponential decay with tier-specific beta |
+| `src/tier-manager.ts` | Three-tier promotion/demotion lifecycle manager |
+
+### Feedback
+
+This is a beta release — please report issues or share feedback at:
+- [GitHub Issues](https://github.com/win4r/memory-lancedb-pro/issues)
+
+To revert to stable: `npm i memory-lancedb-pro@latest`
+
+---
+
 ## Architecture
 
 ```
@@ -92,6 +168,12 @@ The built-in `memory-lancedb` plugin in OpenClaw provides basic vector search. *
 | `src/noise-filter.ts` | Noise filter. Filters out agent refusals, meta-questions, greetings, and low-quality content |
 | `src/adaptive-retrieval.ts` | Adaptive retrieval. Determines whether a query needs memory retrieval (skips greetings, slash commands, simple confirmations, emoji) |
 | `src/migrate.ts` | Migration tool. Migrates data from the built-in `memory-lancedb` plugin to Pro |
+| `src/smart-extractor.ts` | *(Beta)* LLM-powered 6-category extraction pipeline with L0/L1/L2 layered storage |
+| `src/extraction-prompts.ts` | *(Beta)* Prompt templates for memory extraction, dedup decisions, and merge |
+| `src/llm-client.ts` | *(Beta)* OpenAI-compatible LLM client wrapper with JSON fence parsing |
+| `src/memory-categories.ts` | *(Beta)* 6-category classification (profile, preferences, entities, events, cases, patterns) |
+| `src/decay-engine.ts` | *(Beta)* Weibull decay model with importance-modulated half-life and tier-specific beta |
+| `src/tier-manager.ts` | *(Beta)* Three-tier (Core/Working/Peripheral) promotion/demotion lifecycle manager |
 
 ---
 
package/README_CN.md CHANGED

@@ -52,6 +52,82 @@ The built-in `memory-lancedb` plugin in OpenClaw only provides basic vector search. **mem
 
 ---
 
+## 🧪 Beta: Smart Memory v1.1.0
+
+> **Status**: Beta, published under the npm `beta` dist-tag; the `latest` stable channel is not affected.
+
+The `dev/smart-memory-v1.1.0` branch introduces three major enhancements to the memory write and retrieval pipeline:
+
+### What's New
+
+| Feature | Description |
+|---------|-------------|
+| **Smart Extraction** | LLM-driven 6-category extraction (profile, preferences, entities, events, cases, patterns) with L0/L1/L2 layered metadata. Falls back to regex capture when disabled or LLM init fails. |
+| **Lifecycle Scoring** | Weibull decay model integrated into retrieval; scores are adjusted by `max(tierFloor, decayComposite)` so frequently accessed and high-importance memories rank higher. |
+| **Tier Management** | Three-tier system (Core → Working → Peripheral) with automatic promotion/demotion based on access frequency, composite score, and importance. |
+
+### Install the Beta
+
+```bash
+npm i memory-lancedb-pro@beta
+```
+
+Or pin the exact version:
+
+```bash
+npm i memory-lancedb-pro@1.1.0-beta.1
+```
+
+### Configuration
+
+Smart extraction is **enabled by default**. It reuses your existing embedding API key for LLM calls (a separate LLM endpoint can also be configured):
+
+```json
+{
+  "plugins.entries.memory-lancedb-pro": {
+    "config": {
+      "smartExtraction": true,
+      "llm": {
+        "apiKey": "${OPENAI_API_KEY}",
+        "model": "gpt-4o-mini",
+        "baseURL": "https://api.openai.com/v1"
+      },
+      "extractMinMessages": 4,
+      "extractMaxChars": 8000
+    }
+  }
+}
+```
+
+| Config Key | Default | Description |
+|------------|---------|-------------|
+| `smartExtraction` | `true` | Enable/disable LLM-driven extraction |
+| `llm.apiKey` | *(embedding apiKey)* | API key for the extraction LLM |
+| `llm.model` | `gpt-4o-mini` | LLM model used for extraction and dedup |
+| `llm.baseURL` | *(embedding baseURL)* | Base URL of the LLM API |
+| `extractMinMessages` | `4` | Minimum conversation messages before extraction triggers |
+| `extractMaxChars` | `8000` | Maximum conversation characters to process |
+
+### New Files
+
+| File | Purpose |
+|------|---------|
+| `src/smart-extractor.ts` | LLM extraction pipeline: conversation → extract → dedup → persist |
+| `src/extraction-prompts.ts` | Prompt templates for extraction, dedup, and merge |
+| `src/llm-client.ts` | OpenAI-compatible LLM client with JSON parsing |
+| `src/memory-categories.ts` | 6-category classification system + merge strategies |
+| `src/decay-engine.ts` | Weibull stretched-exponential decay model |
+| `src/tier-manager.ts` | Three-tier promotion/demotion lifecycle manager |
+
+### Feedback
+
+This is a beta release; please report issues or share feedback at:
+- [GitHub Issues](https://github.com/win4r/memory-lancedb-pro/issues)
+
+To revert to stable: `npm i memory-lancedb-pro@latest`
+
+---
+
 ## Architecture Overview
 
 ```
@@ -92,6 +168,12 @@ The built-in `memory-lancedb` plugin in OpenClaw only provides basic vector search. **mem
 | `src/noise-filter.ts` | Noise filter. Filters out agent refusals, meta-questions, greetings, and other low-quality memories |
 | `src/adaptive-retrieval.ts` | Adaptive retrieval. Decides whether a query should trigger memory retrieval (skips greetings, commands, simple confirmations, etc.) |
 | `src/migrate.ts` | Migration tool. Migrates data from the legacy `memory-lancedb` plugin to Pro |
+| `src/smart-extractor.ts` | *(Beta)* LLM-driven 6-category extraction pipeline with L0/L1/L2 layered storage |
+| `src/extraction-prompts.ts` | *(Beta)* Prompt templates for memory extraction, dedup decisions, and merge |
+| `src/llm-client.ts` | *(Beta)* OpenAI-compatible LLM client wrapper with JSON fence parsing |
+| `src/memory-categories.ts` | *(Beta)* 6-category classification (profile, preferences, entities, events, cases, patterns) |
+| `src/decay-engine.ts` | *(Beta)* Weibull decay model with importance-modulated half-life and tier-specific beta |
+| `src/tier-manager.ts` | *(Beta)* Three-tier (Core/Working/Peripheral) promotion/demotion lifecycle manager |
 
 ---
 
package/index.ts CHANGED

@@ -20,6 +20,12 @@ import { shouldSkipRetrieval } from "./src/adaptive-retrieval.js";
 import { AccessTracker } from "./src/access-tracker.js";
 import { createMemoryCLI } from "./cli.js";
 
+// Import smart extraction & lifecycle components
+import { SmartExtractor } from "./src/smart-extractor.js";
+import { createLlmClient } from "./src/llm-client.js";
+import { createDecayEngine, DEFAULT_DECAY_CONFIG } from "./src/decay-engine.js";
+import { createTierManager, DEFAULT_TIER_CONFIG } from "./src/tier-manager.js";
+
 // ============================================================================
 // Configuration & Types
 // ============================================================================
@@ -27,7 +33,7 @@ import { createMemoryCLI } from "./cli.js";
 interface PluginConfig {
   embedding: {
     provider: "openai-compatible";
-    apiKey: string;
+    apiKey: string | string[];
     model?: string;
     baseURL?: string;
     dimensions?: number;
@@ -60,6 +66,15 @@ interface PluginConfig {
     reinforcementFactor?: number;
     maxHalfLifeMultiplier?: number;
   };
+  // Smart extraction config (Phase 1: from epro-memory)
+  smartExtraction?: boolean;
+  llm?: {
+    apiKey?: string;
+    model?: string;
+    baseURL?: string;
+  };
+  extractMinMessages?: number;
+  extractMaxChars?: number;
   scopes?: {
     default?: string;
     definitions?: Record<string, { description: string }>;
@@ -398,10 +413,19 @@ const memoryLanceDBProPlugin = {
       taskPassage: config.embedding.taskPassage,
       normalized: config.embedding.normalized,
     });
+    // Initialize decay engine + tier manager (lifecycle scoring)
+    const decayEngine = createDecayEngine(DEFAULT_DECAY_CONFIG);
+    const tierManager = createTierManager(DEFAULT_TIER_CONFIG);
+
+    const retriever = createRetriever(
+      store,
+      embedder,
+      {
+        ...DEFAULT_RETRIEVAL_CONFIG,
+        ...config.retrieval,
+      },
+      { decayEngine, tierManager },
+    );
 
     // Access reinforcement tracker (debounced write-back)
     const accessTracker = new AccessTracker({
@@ -414,10 +438,46 @@ const memoryLanceDBProPlugin = {
     const scopeManager = createScopeManager(config.scopes);
     const migrator = createMigrator(store);
 
+    // Initialize smart extraction (Phase 1: from epro-memory)
+    let smartExtractor: SmartExtractor | null = null;
+    if (config.smartExtraction !== false) {
+      try {
+        const embeddingKey = Array.isArray(config.embedding.apiKey)
+          ? config.embedding.apiKey[0]
+          : config.embedding.apiKey;
+        const llmApiKey = config.llm?.apiKey
+          ? resolveEnvVars(config.llm.apiKey)
+          : resolveEnvVars(embeddingKey);
+        const llmBaseURL = config.llm?.baseURL
+          ? resolveEnvVars(config.llm.baseURL)
+          : config.embedding.baseURL;
+        const llmModel = config.llm?.model || "gpt-4o-mini";
+
+        const llmClient = createLlmClient({
+          apiKey: llmApiKey,
+          model: llmModel,
+          baseURL: llmBaseURL,
+          timeoutMs: 30000,
+        });
+
+        smartExtractor = new SmartExtractor(store, embedder, llmClient, {
+          user: "User",
+          extractMinMessages: config.extractMinMessages ?? 4,
+          extractMaxChars: config.extractMaxChars ?? 8000,
+          defaultScope: config.scopes?.default ?? "global",
+          log: (msg: string) => api.logger.info(msg),
+        });
+
+        api.logger.info("memory-lancedb-pro: smart extraction enabled (LLM model: " + llmModel + ")");
+      } catch (err) {
+        api.logger.warn(`memory-lancedb-pro: smart extraction init failed, falling back to regex: ${String(err)}`);
+      }
+    }
+
     const pluginVersion = getPluginVersion();
 
     api.logger.info(
-      `memory-lancedb-pro@${pluginVersion}: plugin registered (db: ${resolvedDbPath}, model: ${config.embedding.model || "text-embedding-3-small"})`,
+      `memory-lancedb-pro@${pluginVersion}: plugin registered (db: ${resolvedDbPath}, model: ${config.embedding.model || "text-embedding-3-small"}, smartExtraction: ${smartExtractor ? "ON" : "OFF"})`,
     );
 
     // ========================================================================
@@ -484,11 +544,19 @@ const memoryLanceDBProPlugin = {
         return;
       }
 
+      // Format with L0 abstracts grouped by category when available
       const memoryContext = results
-        .map(
+        .map((r) => {
+          let metaObj: Record<string, unknown> = {};
+          try {
+            metaObj = JSON.parse(r.entry.metadata || "{}");
+          } catch {}
+          const displayCategory = (metaObj.memory_category as string) || r.entry.category;
+          const displayTier = (metaObj.tier as string) || "";
+          const tierPrefix = displayTier ? `[${displayTier.charAt(0).toUpperCase()}]` : "";
+          const abstract = (metaObj.l0_abstract as string) || r.entry.text;
+          return `- ${tierPrefix}[${displayCategory}:${r.entry.scope}] ${sanitizeForContext(abstract)} (${(r.score * 100).toFixed(0)}%${r.sources?.bm25 ? ", vector+BM25" : ""}${r.sources?.reranked ? "+reranked" : ""})`;
+        })
         .join("\n");
 
       api.logger.info?.(
@@ -561,7 +629,29 @@ const memoryLanceDBProPlugin = {
         }
       }
 
-      //
+      // ----------------------------------------------------------------
+      // Smart Extraction (Phase 1: LLM-powered 6-category extraction)
+      // ----------------------------------------------------------------
+      if (smartExtractor) {
+        const minMessages = config.extractMinMessages ?? 4;
+        if (texts.length >= minMessages) {
+          const conversationText = texts.join("\n");
+          const sessionKey = (event as any).sessionKey || "unknown";
+          const stats = await smartExtractor.extractAndPersist(
+            conversationText, sessionKey,
+          );
+          if (stats.created > 0 || stats.merged > 0) {
+            api.logger.info(
+              `memory-lancedb-pro: smart-extracted ${stats.created} created, ${stats.merged} merged, ${stats.skipped} skipped for agent ${agentId}`
+            );
+          }
+          return; // Smart extraction handled everything
+        }
+      }
+
+      // ----------------------------------------------------------------
+      // Fallback: regex-triggered capture (original logic)
+      // ----------------------------------------------------------------
       const toCapture = texts.filter((text) => text && shouldCapture(text));
       if (toCapture.length === 0) {
        return;
@@ -934,6 +1024,11 @@ function parsePluginConfig(value: unknown): PluginConfig {
       typeof cfg.retrieval === "object" && cfg.retrieval !== null
         ? (cfg.retrieval as any)
         : undefined,
+    // Smart extraction config (Phase 1)
+    smartExtraction: cfg.smartExtraction !== false, // Default ON
+    llm: typeof cfg.llm === "object" && cfg.llm !== null ? (cfg.llm as any) : undefined,
+    extractMinMessages: parsePositiveInt(cfg.extractMinMessages) ?? 4,
+    extractMaxChars: parsePositiveInt(cfg.extractMaxChars) ?? 8000,
     scopes:
       typeof cfg.scopes === "object" && cfg.scopes !== null
         ? (cfg.scopes as any)