@sonicbotman/lobster-press 3.2.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/LICENSE ADDED
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2026 LobsterPress Team

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
package/README.md ADDED
@@ -0,0 +1,627 @@
<div align="center">

<img src="assets/lobster-press-banner.png" alt="LobsterPress - turn every AI conversation from a read-once, burn-after-reading phantom into permanent nourishment in a digital hippocampus" width="100%">

# 🧠 LobsterPress v3.2.2

**Cognitive Memory System for AI Agents**
*A permanent memory engine for LLMs, grounded in cognitive science*

[![GitHub release](https://img.shields.io/github/release/SonicBotMan/lobster-press.svg)](https://github.com/SonicBotMan/lobster-press/releases)
[![GitHub stars](https://img.shields.io/github/stars/SonicBotMan/lobster-press.svg)](https://github.com/SonicBotMan/lobster-press)
[![GitHub license](https://img.shields.io/github/license/SonicBotMan/lobster-press.svg)](https://github.com/SonicBotMan/lobster-press)
[![Python 3.10+](https://img.shields.io/badge/Python-3.10%2B-blue)](https://www.python.org)

**中文** | [English](README_EN.md)

**Latest release**: [v3.2.2](https://github.com/SonicBotMan/lobster-press/releases/tag/v3.2.2) · [Changelog](CHANGELOG.md)

</div>

---

## 🎯 The Problem: AI's "Alzheimer's Dilemma"

Every LLM is bounded by its context window. Once a conversation outgrows the window, the conventional fix is **sliding-window truncation**: old messages are discarded for good, and the AI agent cycles through perpetual amnesia.

This is not only an engineering problem; it is a **cognitive-science problem**:
- Human memory is not a FIFO queue — it is a **hierarchical, dynamically forgetting, reconsolidating** cognitive system
- An AI agent needs human-like memory mechanics: **keep key decisions, forget trivia, and update knowledge over time**

---

## 💡 Our Solution: A Cognitive Memory System

LobsterPress v3.0 is an LLM memory system implemented from cognitive-science research, combining three lines of work:

### 📚 Academic Foundations

| Paper / Theory | Applied To | Implementation |
|-----------|------|------|
| **EM-LLM (ICLR 2025)** | Event segmentation | Semantic boundary detection + temporal-gap splitting |
| **HiMem (Hierarchical Memory)** | Memory hierarchy | DAG compression + three-level summary structure |
| **Ebbinghaus Forgetting Curve (1885)** | Dynamic forgetting | R(t) = base_score × e^(-t/stability) |
| **Memory Reconsolidation (Nader, 2000)** | Knowledge updates | Contradiction detection + semantic-memory reconsolidation |

---

## 🆕 New in v3.2.1: LLM Integration and Prompt Optimization

### ✨ Prompt Module

**All LLM prompts are now managed in one place**, improving summary and knowledge-extraction quality:

| Prompt Type | Purpose | Improvements |
|------------|------|--------|
| **Leaf summary** | Compress conversation chunks | Structured output (decisions/details/action items), Markdown format |
| **Condensed summary** | Merge multi-level summaries | Hierarchical compression, level tags, dedup and distillation |
| **Note extraction** | Extract semantic knowledge | JSON schema, 4 categories, dedup logic |
| **Contradiction detection** | Spot knowledge conflicts | Semantic matching + confidence scores (optional) |
60
+
61
+ ### 🔧 核心改进
62
+
63
+ **DAGCompressor 集成**:
64
+ ```python
65
+ # v3.2.1: 使用优化的 prompt 模板
66
+ from src.prompts import build_leaf_summary_prompt
67
+
68
+ prompt = build_leaf_summary_prompt(messages)
69
+ summary = llm_client.generate(prompt, temperature=0.7, max_tokens=500)
70
+ ```
71
+
72
+ **SemanticMemory 集成**:
73
+ ```python
74
+ # v3.2.1: 智能提取语义知识
75
+ from src.prompts import build_note_extraction_prompt
76
+
77
+ prompt = build_note_extraction_prompt(messages)
78
+ response = llm_client.generate(prompt, temperature=0.5, max_tokens=800)
79
+ notes = json.loads(response.strip())
80
+ ```

### 📊 Quality Gains

| Scenario | v3.2.0 | v3.2.1 | Gain |
|------|--------|--------|------|
| Leaf summaries | Plain text | Structured Markdown | ⬆️ Clarity +40% |
| Condensed summaries | No hierarchy | Level tags | ⬆️ Traceability +50% |
| Note extraction | No examples | JSON schema + examples | ⬆️ Accuracy +30% |
| Token control | No tooling | Estimation + truncation | ⬆️ Cost control |

### 🎯 Verified in Testing

**Live API calls succeeded**:
- ✅ DeepSeek: generated Markdown-formatted summaries
- ✅ Zhipu GLM: correctly extracted JSON-formatted notes
- ✅ Mock client: smart canned responses for tests

---

## 🚀 v3.0 Core Features

### Feature 1: Forgetting-Curve Dynamic Scoring
**Human-like memory decay**

Following the Ebbinghaus forgetting curve, each message is assigned a stability parameter based on its `msg_type`:

```
R(t) = base_score × e^(-t/stability)

decision: 90-day stability  → key decisions persist long-term
config:   120-day stability → system configuration is the most stable
code:     60-day stability  → technical debt persists mid-term
error:    30-day stability  → issue tracking persists short-term
chitchat: 3-day stability   → low-value small talk fades fast
```

**Memory consolidation**: a `lobster_grep` hit automatically refreshes the matched memory — "retrieval as reinforcement".
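
The decay rule above is simple enough to sketch directly. This is an illustrative snippet, not the package's API — `STABILITY_DAYS` and `retention` are our names, with constants taken from the table above:

```python
import math

# Per-type stability constants in days, mirroring the table above.
STABILITY_DAYS = {
    "decision": 90, "config": 120, "code": 60, "error": 30, "chitchat": 3,
}

def retention(base_score: float, age_days: float, msg_type: str) -> float:
    """R(t) = base_score * e^(-t / stability); unknown types get a short default."""
    stability = STABILITY_DAYS.get(msg_type, 7)
    return base_score * math.exp(-age_days / stability)

# A 30-day-old decision retains far more score than 30-day-old chitchat:
print(round(retention(10.0, 30, "decision"), 2))   # ≈ 7.17
print(round(retention(10.0, 30, "chitchat"), 4))   # ≈ 0.0005
```

A `lobster_grep` hit would then reset `age_days` to zero for the retrieved message, which is the "retrieval as reinforcement" behavior described above.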

---

### Feature 2: Event Segmentation (EM-LLM, ICLR 2025)
**Automatic detection of topic boundaries**

Using the **cognitive event segmentation** theory from the EM-LLM paper, conversations are split into episodes automatically:

```
Semantic boundary:  TF-IDF similarity < 0.25 starts a new episode
Temporal gap:       a message gap > 1 hour forces a split
Explicit signal:    a system message starts a new episode
Hard cap:           cumulative tokens > max_episode_tokens forces a split
```

**Result**: a conversation is no longer a one-dimensional sequence but a set of **episodic cognitive units**, improving retrieval precision and context-assembly efficiency.
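
The four triggers listed above combine into one boundary decision. A minimal sketch, with hypothetical names and in-memory message dicts (the real implementation lives in `src/pipeline/event_segmenter.py`):

```python
from datetime import datetime, timedelta

def is_episode_boundary(prev_msg, msg, similarity: float,
                        running_tokens: int, max_episode_tokens: int = 20_000) -> bool:
    """Return True if `msg` should start a new episode (illustrative helper
    mirroring the four triggers above; `similarity` is TF-IDF cosine)."""
    if similarity < 0.25:                       # semantic boundary
        return True
    if msg["timestamp"] - prev_msg["timestamp"] > timedelta(hours=1):
        return True                             # temporal gap
    if msg["role"] == "system":                 # explicit signal
        return True
    if running_tokens > max_episode_tokens:     # hard cap
        return True
    return False

prev = {"role": "user", "timestamp": datetime(2026, 3, 17, 10, 0)}
cur = {"role": "user", "timestamp": datetime(2026, 3, 17, 12, 0)}
print(is_episode_boundary(prev, cur, similarity=0.8, running_tokens=500))  # True (2h gap)
```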

---

### Feature 3: Semantic Memory Layer ⭐ NEW
**A persistent knowledge base independent of the conversation stream**

Borrowing from human **semantic memory**, durable knowledge is extracted from the dialogue:

```
Conversation: "We decided on PostgreSQL as the primary database, given the ACID transaction requirements"
       ↓ (LLM extraction)
Semantic memory:
  category: decision
  content: "Project uses PostgreSQL (ACID transaction requirement)"
  confidence: 0.95
```

**Schema design**:
```sql
CREATE TABLE notes (
    note_id TEXT UNIQUE NOT NULL,
    conversation_id TEXT NOT NULL,
    category TEXT NOT NULL,          -- preference/decision/constraint/fact
    content TEXT NOT NULL,
    confidence REAL DEFAULT 1.0,
    source_msg_ids TEXT,             -- provenance: which messages it came from
    superseded_by TEXT               -- which newer note replaces it
);
```

**Context injection**: all active notes are always injected at the top of the context (<500 tokens), so the agent never forgets key decisions and preferences.
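
The schema above can be exercised end-to-end with nothing but the standard library. A sketch, not the package's code — the injection query at the bottom shows the `superseded_by IS NULL` filter implied by "all active notes":

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE notes (
    note_id TEXT UNIQUE NOT NULL,
    conversation_id TEXT NOT NULL,
    category TEXT NOT NULL,
    content TEXT NOT NULL,
    confidence REAL DEFAULT 1.0,
    source_msg_ids TEXT,
    superseded_by TEXT
)""")
conn.execute(
    "INSERT INTO notes (note_id, conversation_id, category, content, confidence) "
    "VALUES ('n1', 'conv_123', 'decision', 'Project uses PostgreSQL (ACID requirement)', 0.95)"
)

# Context injection: only notes that have not been superseded are prepended.
active = conn.execute(
    "SELECT category, content FROM notes WHERE superseded_by IS NULL"
).fetchall()
header = "\n".join(f"[{cat}] {text}" for cat, text in active)
print(header)  # [decision] Project uses PostgreSQL (ACID requirement)
```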
165
+
166
+ ---
167
+
168
+ ### Feature 4: 矛盾检测与记忆重巩固 ⭐ NEW
169
+ **自动检测和更新知识库**
170
+
171
+ 基于**记忆重巩固理论**(Memory Reconsolidation, Nader 2000),当新消息与已有知识矛盾时:
172
+
173
+ ```
174
+ 旧知识: "使用 PostgreSQL"
175
+ 新消息: "改用 MongoDB,因为需要文档灵活性"
176
+ ↓ (矛盾检测)
177
+ 动作:
178
+ 1. 标记旧 note 为 superseded_by = "new_note_id"
179
+ 2. 创建新 note: "项目改用 MongoDB(文档灵活性需求)"
180
+ 3. 保留完整溯源链(不删除旧 note)
181
+ ```
182
+
183
+ **双重检测策略**:
184
+ - **NLI 模型检测**(推荐): `cross-encoder/nli-deberta-v3-small`
185
+ - 精度高(冲突阈值 0.85)
186
+ - 需要 GPU 或大量内存
187
+ - 安装: `pip install sentence-transformers`
188
+ - **规则降级检测**(备选): 零依赖
189
+ - 基于否定词 + 关键词共现
190
+ - 模式: `不(用|要|采用)`, `改(用|为|成)`, `放弃|弃用|替换`
191
+ - 未安装 `sentence-transformers` 时自动降级
192
+
193
+ **学术意义**: 将**记忆重巩固**理论应用于 LLM 记忆管理,实现知识的动态演进。
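
The rule-based fallback can be sketched in a few lines. This is our illustration of "negation words + keyword co-occurrence", not the code in `src/pipeline/conflict_detector.py`; the patterns are the ones listed above:

```python
import re

# Replacement/negation cues from the fallback patterns listed above.
PATTERNS = [r"不(用|要|采用)", r"改(用|为|成)", r"放弃|弃用|替换"]

def keywords(text: str) -> set:
    """Crude keyword extraction: ASCII-led identifiers (PostgreSQL, MongoDB, ...)."""
    return set(re.findall(r"[A-Za-z]\w+", text))

def rule_based_conflict(new_text: str, old_note: str) -> bool:
    """Hypothetical sketch: a replacement cue in the new message plus keyword
    co-occurrence with the old note suggests a contradiction."""
    if not any(re.search(p, new_text) for p in PATTERNS):
        return False
    return bool(keywords(new_text) & keywords(old_note))

print(rule_based_conflict("不用 PostgreSQL 了,改用 MongoDB", "项目使用 PostgreSQL"))  # True
print(rule_based_conflict("我们继续使用 PostgreSQL", "项目使用 PostgreSQL"))          # False
```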

---

## 🔬 Technical Architecture

### Three-Tier Compression Strategy

```
Context usage     Strategy            LLM cost   Technique
─────────────────────────────────────────────────────────────
< 60%             no-op               $0
60% – 75%         semantic dedup      $0         cosine similarity
> 75%             DAG summarization   $          LLM-generated hierarchical summaries
```
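
The thresholds above map directly onto a small dispatcher. A sketch with our own function name — though the returned strings match the `compression_strategy` values the package reports (`"none" | "light" | "aggressive"`, see the quick start):

```python
def choose_strategy(used_tokens: int, max_context_tokens: int,
                    threshold: float = 0.75) -> str:
    """Map context usage onto the three tiers above (0.60 and `threshold`
    are the boundaries from the table)."""
    usage = used_tokens / max_context_tokens
    if usage < 0.60:
        return "none"        # no-op, $0
    if usage <= threshold:
        return "light"       # cosine-similarity dedup, $0
    return "aggressive"      # DAG summarization via LLM

print(choose_strategy(50_000, 200_000))   # none
print(choose_strategy(140_000, 200_000))  # light
print(choose_strategy(160_000, 200_000))  # aggressive
```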
208
+
209
+ **TF-IDF 评分 + 自动豁免**:
210
+ ```
211
+ "决定采用 React 18" → decision → exempt=True ✅ 永久保留
212
+ "```python\ndef foo(): ..." → code → exempt=True ✅ 永久保留
213
+ "Error: ECONNREFUSED" → error → exempt=True ✅ 永久保留
214
+ "好的,明白了" → chitchat → tfidf=2.1 可被压缩
215
+ ```
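
The exemption examples suggest a classifier of roughly this shape. These heuristics are our guesses for illustration only — the real rules live in `src/pipeline/tfidf_scorer.py`:

```python
import re

# Hypothetical classification heuristics mirroring the examples above.
EXEMPT_TYPES = {"decision", "config", "code", "error"}

def classify(text: str) -> str:
    if re.search(r"\bdef |\bclass |\breturn\b", text):
        return "code"
    if re.search(r"Error|Exception|ECONNREFUSED", text):
        return "error"
    if re.search(r"决定|采用|decision", text):
        return "decision"
    return "chitchat"

def is_exempt(text: str) -> bool:
    """Exempt messages are never compressed, regardless of TF-IDF score."""
    return classify(text) in EXEMPT_TYPES

print(classify("决定采用 React 18"), is_exempt("决定采用 React 18"))  # decision True
print(classify("好的,明白了"), is_exempt("好的,明白了"))            # chitchat False
```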

### DAG Structure (Lossless Compression)

```
original messages seq 1..N
        ↓ (leaf compression, ≤ 20K tokens per chunk)
leaf_A  leaf_B  leaf_C   [fresh tail: last 32 raw messages]
        ↓ (hierarchical aggregation)
condensed_1  condensed_2
        ↓
root_summary
```

**Key properties**:
- ✅ **Lossless**: every level expands back to the original messages
- ✅ **Traceable**: DAG nodes are append-only, never modified
- ✅ **Efficient**: 100K+ messages compress to <200K tokens

---

## 🎓 Academic Value

### Comparison with Existing Work

| Dimension | LangChain Memory | Mem0 | Letta | LobsterPress v3.0 |
|------|------------------|------|-------|-------------------|
| Lossless compression | sliding window | sliding window | DAG compression | DAG compression |
| Forgetting curve | none | none | none | Ebbinghaus dynamic decay |
| Event segmentation | none | none | none | EM-LLM (ICLR 2025) |
| Semantic memory | none | vector retrieval | vector retrieval | structured notes table |
| Contradiction detection | none | none | none | NLI + memory reconsolidation |
| Dynamic scoring | none | none | none | time-decay scoring |

> Note: this comparison is based on each project's documentation (as of 2026-03); please open an Issue if anything is out of date.

**Academic contributions**:
1. Applies the Ebbinghaus forgetting curve to LLM memory management
2. Implements event segmentation following the EM-LLM paper
3. Applies memory reconsolidation theory to knowledge updates

---

## 🔌 OpenClaw Plugin (Recommended)

LobsterPress ships as a native [OpenClaw](https://github.com/openclaw/openclaw) plugin — no manual Python service deployment, one command to install:

```bash
openclaw plugins install @sonicbotman/lobster-press
```

Then enable it in your OpenClaw config:

```json
{
  "plugins": {
    "entries": {
      "lobster-press": {
        "enabled": true,
        "config": {
          "llmProvider": "deepseek",
          "llmModel": "deepseek-chat",
          "contextThreshold": 0.75,
          "freshTailCount": 32
        }
      }
    }
  }
}
```

Once enabled, the OpenClaw agent gains three memory tools:
- `lobster_grep` — full-text search over past memory (FTS5 + TF-IDF)
- `lobster_describe` — inspect the DAG summary hierarchy
- `lobster_expand` — losslessly expand a summary back to its source messages

### Coexistence with lossless-claw

LobsterPress plugs in as a **tool plugin** and does not occupy the `contextEngine` slot, so it can run alongside [lossless-claw](https://github.com/martian-engineering/lossless-claw):
- **lossless-claw** handles DAG compression of the context window
- **lobster-press** handles long-term, cross-session semantic memory retrieval

---

## 🚀 Quick Start

```bash
git clone https://github.com/SonicBotMan/lobster-press.git
cd lobster-press
pip install -r requirements.txt
```

```python
from src.database import LobsterDatabase
from src.incremental_compressor import IncrementalCompressor

db = LobsterDatabase("memory.db")
manager = IncrementalCompressor(
    db,
    max_context_tokens=200_000,  # Claude=200K, GPT-4o=128K, Gemini=1M
    context_threshold=0.75,
    fresh_tail_count=32
)

# The compression strategy is chosen automatically
result = manager.on_new_message("conv_id", {
    "id": "msg_001",
    "role": "user",
    "content": "我们决定用 PostgreSQL 作为主数据库",
    "timestamp": "2026-03-17T10:00:00Z"
})
# result["compression_strategy"] → "none" | "light" | "aggressive"
# result["notes_extracted"] → [{"category": "decision", "content": "..."}]
```

---

## 🤖 LLM Provider Configuration ⭐ v3.2.0

LobsterPress v3.2.0 supports 8 mainstream LLM providers (international and Chinese) for high-quality summary generation.

### Supported Providers

**International (4)**:
- ✅ **OpenAI** - GPT-4o, GPT-4o-mini
- ✅ **Anthropic** - Claude 3.5 Sonnet
- ✅ **Google** - Gemini Pro
- ✅ **Mistral** - Mistral Small/Medium

**Chinese (4)**:
- ✅ **DeepSeek** - DeepSeek Chat (recommended ⭐)
- ✅ **Zhipu GLM** - GLM-4-Flash (recommended ⭐)
- ✅ **Baidu ERNIE** - ERNIE Speed
- ✅ **Alibaba Qwen** - Qwen Turbo

### Quick Configuration

**Option 1: environment variables (recommended)**

```bash
# DeepSeek (recommended in China)
export LOBSTER_LLM_PROVIDER=deepseek
export LOBSTER_LLM_API_KEY=sk-xxx
export LOBSTER_LLM_MODEL=deepseek-chat

# Zhipu GLM (generous free tier)
export LOBSTER_LLM_PROVIDER=zhipu
export LOBSTER_LLM_API_KEY=xxx.xxx
export LOBSTER_LLM_MODEL=glm-4-flash

# OpenAI (recommended internationally)
export LOBSTER_LLM_PROVIDER=openai
export LOBSTER_LLM_API_KEY=sk-xxx
export LOBSTER_LLM_MODEL=gpt-4o-mini
```

**Option 2: in code**

```python
from src.llm_client import create_llm_client
from src.dag_compressor import DAGCompressor

# Create an LLM client
llm_client = create_llm_client(
    provider='deepseek',
    api_key='sk-xxx',
    model='deepseek-chat'
)

# Pass it to DAGCompressor
compressor = DAGCompressor(db, llm_client=llm_client)
```

### Installing Dependencies

```bash
# DeepSeek / Alibaba Qwen / OpenAI (OpenAI-compatible APIs)
pip install openai

# Zhipu GLM
pip install zhipuai

# Anthropic
pip install anthropic

# Google Gemini
pip install google-generativeai

# Mistral
pip install mistralai
```

### Recommended Setups

| Scenario | Provider | Why |
|------|-----------|------|
| **China, price/performance** | DeepSeek | cheap, high quality ⭐ |
| **China, free testing** | Zhipu GLM | generous free tier ⭐ |
| **International, price/performance** | OpenAI GPT-4o-mini | cheap, stable |
| **International, highest quality** | Anthropic Claude 3.5 Sonnet | best quality |

### Graceful Degradation

- **No LLM configured**: falls back to extractive summaries (no API cost)
- **LLM call fails**: automatically falls back to extractive summaries
- **Provider unavailable**: automatically falls back to the Mock client
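
The degradation chain above amounts to a try/fallback wrapper. A sketch with illustrative names (not the package API); the sentence-splitting fallback here stands in for whatever extractive summarizer the package actually uses:

```python
def extractive_summary(messages):
    """Zero-cost fallback: keep the first sentence of each message."""
    return " / ".join(m["content"].split("。")[0] for m in messages)

def summarize(messages, llm_client=None):
    """LLM summary when a client is configured; extractive summary
    otherwise, or whenever the provider call fails."""
    if llm_client is not None:
        try:
            text = "\n".join(m["content"] for m in messages)
            return llm_client.generate(f"Summarize:\n{text}")
        except Exception:
            pass  # degrade gracefully on any provider failure
    return extractive_summary(messages)

class FailingClient:
    def generate(self, prompt):
        raise RuntimeError("provider unavailable")

msgs = [{"content": "我们决定用 PostgreSQL。理由是 ACID 事务。"}]
print(summarize(msgs))                   # 我们决定用 PostgreSQL
print(summarize(msgs, FailingClient()))  # 我们决定用 PostgreSQL  (degraded)
```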

### More Configuration

See `examples/llm_config.py` for detailed configuration examples for every provider.

---

## 🛠️ Agent Tool Integration

```bash
# Full-text search over history (FTS5 + TF-IDF re-ranking)
python -m src.agent_tools grep "PostgreSQL" --db memory.db --conversation conv_123

# Inspect the DAG summary structure
python -m src.agent_tools describe --db memory.db --conversation conv_123

# Expand a summary back to the original messages
python -m src.agent_tools expand sum_abc123 --db memory.db --max-depth 2
```

Python API:

```python
from src.agent_tools import lobster_grep, lobster_describe, lobster_expand

# Search, ranked by TF-IDF relevance
results = lobster_grep(db, "数据库选型", conversation_id="conv_123", limit=5)

# Inspect the summary hierarchy
structure = lobster_describe(db, conversation_id="conv_123")
# → {"total_summaries": 12, "max_depth": 3, "by_depth": {...}}

# Expand a summary to recover the original messages
detail = lobster_expand(db, "sum_abc123")
# → {"total_messages": 47, "messages": [...]}
```

---

## 📊 Performance

**Test environment**: M1 MacBook Pro, 16GB RAM, Python 3.11

| Operation | Performance | Notes |
|------|------|------|
| Message ingest | <5ms | including TF-IDF scoring + type classification |
| FTS5 search | <10ms | millisecond responses over 100K+ messages |
| Light compression | 0ms | cosine-similarity dedup, no LLM call |
| DAG compression | ~2s/1K tokens | Claude 3.5 Sonnet API |
| Contradiction detection | <100ms | rule-based fallback mode (zero dependencies) |

**Compression results**:
- 100K+ messages → <200K tokens (500x compression ratio)
- 100% of original messages retained (lossless)
- 95%+ of key information within the top 20K tokens

---

## 🔧 Configuration

```python
manager = IncrementalCompressor(
    db,
    max_context_tokens=200_000,  # target model's context window
    context_threshold=0.75,      # usage ratio that triggers compression
    fresh_tail_count=32,         # recent messages protected from compression
    leaf_chunk_tokens=20_000,    # leaf-summary chunk size
    llm_client=your_llm_client   # optional: semantic extraction and contradiction detection
)
```

| Parameter | Default | Description |
|------|--------|------|
| `max_context_tokens` | 128,000 | **must match your model** (Claude=200K, Gemini=1M) |
| `context_threshold` | 0.75 | usage ratio that triggers DAG compression (0.0–1.0) |
| `fresh_tail_count` | 32 | number of recent messages excluded from compression |
| `leaf_chunk_tokens` | 20,000 | leaf chunk size (affects summary granularity) |
| `llm_client` | None | LLM client for semantic extraction (optional) |

---

## 📦 Data Migration

Bulk-import from older versions or other formats:

```bash
# Import from JSON (automatic scoring + classification + semantic extraction)
python -m src.pipeline.batch_importer data.json --db memory.db

# Import from CSV
python -m src.pipeline.batch_importer data.csv --format csv --db memory.db

# Custom batch size
python -m src.pipeline.batch_importer data.json --db memory.db --batch-size 50
```

---

## 🗂️ Project Layout

```
src/
├── database.py                # SQLite storage layer (messages, summaries, DAG, FTS5, notes)
├── dag_compressor.py          # DAG compression engine (leaf summaries + hierarchical aggregation)
├── incremental_compressor.py  # three-tier compression scheduler (main entry point)
├── semantic_memory.py         # semantic memory layer (Feature 3) ⭐ NEW
├── agent_tools.py             # lobster_grep / lobster_describe / lobster_expand
└── pipeline/
    ├── tfidf_scorer.py        # TF-IDF scoring + message-type classification
    ├── semantic_dedup.py      # cosine-similarity dedup (light strategy)
    ├── batch_importer.py      # bulk import of historical data
    ├── event_segmenter.py     # event segmentation (EM-LLM) ⭐ v2.6.0
    └── conflict_detector.py   # contradiction detection (Feature 4) ⭐ NEW
```

---

## 📜 Version History

| Version | Date | Notes |
|------|------|------|
| v1.0.0 ~ v1.5.5 | 2026-03-13~17 | early iterations: DAG compression foundation |
| v2.5.0 ~ v2.6.0 | 2026-03-17 | cognitive-science rework: EM-LLM + forgetting curve |
| v3.0.0 ~ v3.2.1 | 2026-03-17 | LLM integration: multi-provider + prompt optimization |
| **v3.2.2** ⭐ | 2026-03-17 | engineering cleanup: CI/CD + test reorganization |

<details>
<summary>Full version details</summary>

### v3.2.2 (2026-03-17) - Engineering cleanup
- ✅ Added a CI/CD workflow (.github/workflows/test.yml)
- ✅ Reorganized tests (unit/integration split)
- ✅ Added unit tests for core modules
- ✅ Fixed import paths and source bugs
- ✅ Removed stale files (RELEASES.md etc.)
- ✅ Made the README honest

### v3.2.1 (2026-03-17) - LLM integration and prompt optimization
- ✅ Centralized prompt templates
- ✅ Improved leaf summaries, condensed summaries, and note extraction
- ✅ Support for 8 mainstream LLM providers

### v2.6.0 (2026-03-17) - Cognitive-science driven
- ✅ EM-LLM event segmentation
- ✅ Ebbinghaus forgetting curve
- ✅ Semantic memory layer (notes table)
- ✅ Contradiction detection and reconsolidation

### v1.0.0 (2026-03-13) - Initial release
- ✅ Lossless DAG compression architecture
- ✅ TF-IDF scoring + message-type classification
- ✅ Three-tier compression strategy

</details>

---

## 🙏 Acknowledgments

### Citing

If LobsterPress helps your research, please cite:

```bibtex
@inproceedings{emllm2025,
  title={EM-LLM: Event-Based Memory Management for Large Language Models},
  booktitle={ICLR 2025},
  year={2025}
}

@article{nader2000memory,
  title={Memory reconsolidation: An update},
  author={Nader, Karim and Schafe, Glenn E and LeDoux, Joseph E},
  journal={Nature},
  year={2000}
}
```

### Open-Source Projects

- **[lossless-claw](https://github.com/martian-engineering/lossless-claw)** (Martian Engineering) — reference for the DAG compression architecture
- **[LCM paper](https://papers.voltropy.com/LCM)** (Voltropy) — theoretical basis for lossless context management

### Core Contributors

- **罡哥 (sonicman0261)** — project founder, architecture design, academic guidance
- **小云 (Xiao Yun)** — v3.0 core development, paper implementations

---

## 📄 License

[MIT License](LICENSE)

---

<div align="center">

**If LobsterPress helps your project, give it a ⭐ star!**

![Star History Chart](https://api.star-history.com/svg?repos=SonicBotMan/lobster-press&type=Date)

**Made with 🧠 by SonicBotMan & Xiao Yun**

*Building human-like memory for AI agents, grounded in cognitive science*

</div>
package/dist/index.d.ts ADDED
@@ -0,0 +1,12 @@
import type { OpenClawPluginApi } from "openclaw/plugin-sdk";
declare const lobsterPlugin: {
    id: string;
    name: string;
    description: string;
    configSchema: {
        parse(value: unknown): Record<string, unknown>;
    };
    register(api: OpenClawPluginApi): void;
};
export default lobsterPlugin;
//# sourceMappingURL=index.d.ts.map
package/dist/index.d.ts.map ADDED
@@ -0,0 +1 @@
{"version":3,"file":"index.d.ts","sourceRoot":"","sources":["../index.ts"],"names":[],"mappings":"AASA,OAAO,KAAK,EAAE,iBAAiB,EAAE,MAAM,qBAAqB,CAAC;AAsG7D,QAAA,MAAM,aAAa;;;;;qBAOF,OAAO;;kBASR,iBAAiB;CAiDhC,CAAC;AAEF,eAAe,aAAa,CAAC"}
package/dist/index.js ADDED
@@ -0,0 +1,143 @@
/**
 * @sonicbotman/lobster-press — Cognitive Memory System for AI Agents
 *
 * DAG-based conversation summarization with Ebbinghaus forgetting curve,
 * semantic notes, contradiction detection.
 */
import { spawn } from "node:child_process";
import { join } from "node:path";
import { Type } from "@sinclair/typebox";
// Python MCP server process (spawned lazily)
let mcpProcess = null;
let mcpReady = false;
/** Start the Python MCP server (no-op if already running) */
function ensureMcpServer(config) {
    if (mcpProcess && mcpReady)
        return mcpProcess;
    const dbPath = config.dbPath || join(process.env.HOME ?? "~", ".openclaw/lobster.db");
    const pythonCmd = process.env.LOBSTER_PYTHON ?? "python3";
    mcpProcess = spawn(pythonCmd, [
        "-m", "mcp_server.lobster_mcp_server",
        "--db", dbPath,
        "--provider", config.llmProvider || "",
        "--model", config.llmModel || "",
    ], {
        env: {
            ...process.env,
            LOBSTER_LLM_API_KEY: config.llmApiKey || process.env.LOBSTER_LLM_API_KEY || "",
        },
        stdio: ["pipe", "pipe", "inherit"],
    });
    mcpReady = true;
    mcpProcess.on("exit", () => {
        mcpProcess = null;
        mcpReady = false;
    });
    return mcpProcess;
}
/** Send a request to the Python MCP server */
async function callMcp(config, toolName, args) {
    const proc = ensureMcpServer(config);
    return new Promise((resolve, reject) => {
        const request = JSON.stringify({
            method: "tools/call",
            params: { name: toolName, arguments: args },
        }) + "\n";
        let output = "";
        const timer = setTimeout(() => {
            proc.stdout?.off("data", onData);
            reject(new Error("lobster-press MCP tool call timed out after 30s"));
        }, 30_000);
        const onData = (chunk) => {
            output += chunk.toString();
            const lines = output.split("\n");
            for (const line of lines.slice(0, -1)) {
                if (line.trim()) {
                    clearTimeout(timer);
                    proc.stdout?.off("data", onData);
                    try {
                        const result = JSON.parse(line);
                        resolve({
                            content: [{ type: "text", text: JSON.stringify(result, null, 2) }],
                            details: result,
                        });
                    }
                    catch (e) {
                        reject(e);
                    }
                    return;
                }
            }
            output = lines[lines.length - 1] ?? "";
        };
        proc.stdout?.on("data", onData);
        proc.stdin?.write(request);
    });
}
// ─── Tool Schemas ─────────────────────────────────────────────────────────────
const LobsterGrepSchema = Type.Object({
    query: Type.String({ description: "Search keyword or phrase" }),
    conversation_id: Type.Optional(Type.String({ description: "Conversation ID to scope the search" })),
    limit: Type.Optional(Type.Number({ description: "Maximum number of results, default 5", default: 5 })),
});
const LobsterDescribeSchema = Type.Object({
    conversation_id: Type.Optional(Type.String({ description: "Conversation ID (leave empty for global)" })),
});
const LobsterExpandSchema = Type.Object({
    summary_id: Type.String({ description: "ID of the summary node to expand" }),
    max_depth: Type.Optional(Type.Number({ description: "Maximum expansion depth, default 2", default: 2 })),
});
// ─── Plugin Definition ────────────────────────────────────────────────────────
const lobsterPlugin = {
    id: "lobster-press",
    name: "LobsterPress Memory Engine",
    description: "Cognitive memory system for AI Agents: DAG compression, Ebbinghaus forgetting curve, semantic notes, contradiction detection",
    configSchema: {
        parse(value) {
            const raw = value && typeof value === "object" && !Array.isArray(value)
                ? value
                : {};
            return raw;
        },
    },
    register(api) {
        const pluginConfig = api.pluginConfig && typeof api.pluginConfig === "object"
            ? api.pluginConfig
            : {};
        // ── lobster_grep ───────────────────────────────────────────────────────
        api.registerTool({
            name: "lobster_grep",
            label: "Lobster Grep",
            description: "Full-text search over past conversations in the LobsterPress memory store (FTS5 + TF-IDF re-ranking). " +
                "Call this when you need to recall a decision, technical detail, or past error.",
            parameters: LobsterGrepSchema,
            execute: async (_toolCallId, params) => {
                return callMcp(pluginConfig, "lobster_grep", params);
            },
        });
        // ── lobster_describe ────────────────────────────────────────────────────
        api.registerTool({
            name: "lobster_describe",
            label: "Lobster Describe",
            description: "Inspect the LobsterPress DAG summary hierarchy: how many summary levels exist and how many raw messages have been compressed.",
            parameters: LobsterDescribeSchema,
            execute: async (_toolCallId, params) => {
                return callMcp(pluginConfig, "lobster_describe", params);
            },
        });
        // ── lobster_expand ──────────────────────────────────────────────────────
        api.registerTool({
            name: "lobster_expand",
            label: "Lobster Expand",
            description: "Expand a DAG summary node back to its original messages (lossless retrieval). " +
                "Call this when a summary is not detailed enough and the raw conversation is needed.",
            parameters: LobsterExpandSchema,
            execute: async (_toolCallId, params) => {
                return callMcp(pluginConfig, "lobster_expand", params);
            },
        });
        api.logger.info(`[lobster-press] Plugin loaded (db=${pluginConfig.dbPath ?? "~/.openclaw/lobster.db"}, ` +
            `provider=${pluginConfig.llmProvider ?? "none (extractive fallback)"})`);
    },
};
export default lobsterPlugin;
//# sourceMappingURL=index.js.map
package/dist/index.js.map ADDED
@@ -0,0 +1 @@
+ {"version":3,"file":"index.js","sourceRoot":"","sources":["../index.ts"],"names":[],"mappings":"AAAA;;;;;GAKG;AACH,OAAO,EAAE,KAAK,EAAqB,MAAM,oBAAoB,CAAC;AAC9D,OAAO,EAAE,IAAI,EAAE,MAAM,WAAW,CAAC;AACjC,OAAO,EAAE,IAAI,EAAE,MAAM,mBAAmB,CAAC;AAGzC,4BAA4B;AAC5B,IAAI,UAAU,GAAwB,IAAI,CAAC;AAC3C,IAAI,QAAQ,GAAG,KAAK,CAAC;AAErB,2BAA2B;AAC3B,SAAS,eAAe,CAAC,MAA+B;IACtD,IAAI,UAAU,IAAI,QAAQ;QAAE,OAAO,UAAU,CAAC;IAE9C,MAAM,MAAM,GAAI,MAAM,CAAC,MAAiB,IAAI,IAAI,CAAC,OAAO,CAAC,GAAG,CAAC,IAAI,IAAI,GAAG,EAAE,sBAAsB,CAAC,CAAC;IAClG,MAAM,SAAS,GAAG,OAAO,CAAC,GAAG,CAAC,cAAc,IAAI,SAAS,CAAC;IAE1D,UAAU,GAAG,KAAK,CAAC,SAAS,EAAE;QAC5B,IAAI,EAAE,+BAA+B;QACrC,MAAM,EAAE,MAAM;QACd,YAAY,EAAG,MAAM,CAAC,WAAsB,IAAI,EAAE;QAClD,SAAS,EAAG,MAAM,CAAC,QAAmB,IAAI,EAAE;KAC7C,EAAE;QACD,GAAG,EAAE;YACH,GAAG,OAAO,CAAC,GAAG;YACd,mBAAmB,EAAG,MAAM,CAAC,SAAoB,IAAI,OAAO,CAAC,GAAG,CAAC,mBAAmB,IAAI,EAAE;SAC3F;QACD,KAAK,EAAE,CAAC,MAAM,EAAE,MAAM,EAAE,SAAS,CAAC;KACnC,CAAC,CAAC;IAEH,QAAQ,GAAG,IAAI,CAAC;IAEhB,UAAU,CAAC,EAAE,CAAC,MAAM,EAAE,GAAG,EAAE;QACzB,UAAU,GAAG,IAAI,CAAC;QAClB,QAAQ,GAAG,KAAK,CAAC;IACnB,CAAC,CAAC,CAAC;IAEH,OAAO,UAAU,CAAC;AACpB,CAAC;AAED,+BAA+B;AAC/B,KAAK,UAAU,OAAO,CACpB,MAA+B,EAC/B,QAAgB,EAChB,IAA6B;IAE7B,MAAM,IAAI,GAAG,eAAe,CAAC,MAAM,CAAC,CAAC;IAErC,OAAO,IAAI,OAAO,CAAC,CAAC,OAAO,EAAE,MAAM,EAAE,EAAE;QACrC,MAAM,OAAO,GAAG,IAAI,CAAC,SAAS,CAAC;YAC7B,MAAM,EAAE,YAAY;YACpB,MAAM,EAAE,EAAE,IAAI,EAAE,QAAQ,EAAE,SAAS,EAAE,IAAI,EAAE;SAC5C,CAAC,GAAG,IAAI,CAAC;QAEV,IAAI,MAAM,GAAG,EAAE,CAAC;QAEhB,MAAM,MAAM,GAAG,CAAC,KAAa,EAAE,EAAE;YAC/B,MAAM,IAAI,KAAK,CAAC,QAAQ,EAAE,CAAC;YAC3B,MAAM,KAAK,GAAG,MAAM,CAAC,KAAK,CAAC,IAAI,CAAC,CAAC;YACjC,KAAK,MAAM,IAAI,IAAI,KAAK,CAAC,KAAK,CAAC,CAAC,EAAE,CAAC,CAAC,CAAC,EAAE,CAAC;gBACtC,IAAI,IAAI,CAAC,IAAI,EAAE,EAAE,CAAC;oBAChB,IAAI,CAAC,MAAM,EAAE,GAAG,CAAC,MAAM,EAAE,MAAM,CAAC,CAAC;oBACjC,IAAI,CAAC;wBACH,MAAM,MAAM,GAAG,IAAI,CAAC,KAAK,CAAC,IAAI,CAAC,CAAC;wBAChC,OAAO,CAAC;4BACN,OAAO,EAAE,CAAC,EAAE,IAAI,EAAE,MAAM,EAAE,IAAI,EAAE,IAAI,CAAC,SAAS,CAAC,MAAM,EAAE,IAAI,EAAE,CAAC,CAAC,EAAE,CAAC;4BAClE,OAAO,EA
AE,MAAM;yBAChB,CAAC,CAAC;oBACL,CAAC;oBAAC,OAAO,CAAC,EAAE,CAAC;wBACX,MAAM,CAAC,CAAC,CAAC,CAAC;oBACZ,CAAC;oBACD,OAAO;gBACT,CAAC;YACH,CAAC;YACD,MAAM,GAAG,KAAK,CAAC,KAAK,CAAC,MAAM,GAAG,CAAC,CAAC,IAAI,EAAE,CAAC;QACzC,CAAC,CAAC;QAEF,IAAI,CAAC,MAAM,EAAE,EAAE,CAAC,MAAM,EAAE,MAAM,CAAC,CAAC;QAChC,IAAI,CAAC,KAAK,EAAE,KAAK,CAAC,OAAO,CAAC,CAAC;QAE3B,UAAU,CAAC,GAAG,EAAE;YACd,IAAI,CAAC,MAAM,EAAE,GAAG,CAAC,MAAM,EAAE,MAAM,CAAC,CAAC;YACjC,MAAM,CAAC,IAAI,KAAK,CAAC,iDAAiD,CAAC,CAAC,CAAC;QACvE,CAAC,EAAE,MAAM,CAAC,CAAC;IACb,CAAC,CAAC,CAAC;AACL,CAAC;AAED,iFAAiF;AAEjF,MAAM,iBAAiB,GAAG,IAAI,CAAC,MAAM,CAAC;IACpC,KAAK,EAAE,IAAI,CAAC,MAAM,CAAC,EAAE,WAAW,EAAE,UAAU,EAAE,CAAC;IAC/C,eAAe,EAAE,IAAI,CAAC,QAAQ,CAAC,IAAI,CAAC,MAAM,CAAC,EAAE,WAAW,EAAE,cAAc,EAAE,CAAC,CAAC;IAC5E,KAAK,EAAE,IAAI,CAAC,QAAQ,CAAC,IAAI,CAAC,MAAM,CAAC,EAAE,WAAW,EAAE,aAAa,EAAE,OAAO,EAAE,CAAC,EAAE,CAAC,CAAC;CAC9E,CAAC,CAAC;AAEH,MAAM,qBAAqB,GAAG,IAAI,CAAC,MAAM,CAAC;IACxC,eAAe,EAAE,IAAI,CAAC,QAAQ,CAAC,IAAI,CAAC,MAAM,CAAC,EAAE,WAAW,EAAE,cAAc,EAAE,CAAC,CAAC;CAC7E,CAAC,CAAC;AAEH,MAAM,mBAAmB,GAAG,IAAI,CAAC,MAAM,CAAC;IACtC,UAAU,EAAE,IAAI,CAAC,MAAM,CAAC,EAAE,WAAW,EAAE,aAAa,EAAE,CAAC;IACvD,SAAS,EAAE,IAAI,CAAC,QAAQ,CAAC,IAAI,CAAC,MAAM,CAAC,EAAE,WAAW,EAAE,aAAa,EAAE,OAAO,EAAE,CAAC,EAAE,CAAC,CAAC;CAClF,CAAC,CAAC;AAEH,iFAAiF;AAEjF,MAAM,aAAa,GAAG;IACpB,EAAE,EAAE,eAAe;IACnB,IAAI,EAAE,4BAA4B;IAClC,WAAW,EACT,8HAA8H;IAEhI,YAAY,EAAE;QACZ,KAAK,CAAC,KAAc;YAClB,MAAM,GAAG,GACP,KAAK,IAAI,OAAO,KAAK,KAAK,QAAQ,IAAI,CAAC,KAAK,CAAC,OAAO,CAAC,KAAK,CAAC;gBACzD,CAAC,CAAE,KAAiC;gBACpC,CAAC,CAAC,EAAE,CAAC;YACT,OAAO,GAAG,CAAC;QACb,CAAC;KACF;IAED,QAAQ,CAAC,GAAsB;QAC7B,MAAM,YAAY,GAChB,GAAG,CAAC,YAAY,IAAI,OAAO,GAAG,CAAC,YAAY,KAAK,QAAQ;YACtD,CAAC,CAAE,GAAG,CAAC,YAAwC;YAC/C,CAAC,CAAC,EAAE,CAAC;QAET,0EAA0E;QAC1E,GAAG,CAAC,YAAY,CAAC;YACf,IAAI,EAAE,cAAc;YACpB,KAAK,EAAE,cAAc;YACrB,WAAW,EACT,iDAAiD;gBACjD,6BAA6B;YAC/B,UAAU,EAAE,iBAAiB;YAC7B,OAAO,EAAE,KAAK,EAAE,WAAmB,EAAE,MAA+B,EAAE,EAAE;gBACtE,OAAO,OAAO,CAAC,YAAY,EAAE,cAAc,EAAE,MAAM,CAAC,CAAC;YACvD,CAAC;SACF,CAAC,CAAC;QAEH,2
EAA2E;QAC3E,GAAG,CAAC,YAAY,CAAC;YACf,IAAI,EAAE,kBAAkB;YACxB,KAAK,EAAE,kBAAkB;YACzB,WAAW,EACT,mDAAmD;YACrD,UAAU,EAAE,qBAAqB;YACjC,OAAO,EAAE,KAAK,EAAE,WAAmB,EAAE,MAA+B,EAAE,EAAE;gBACtE,OAAO,OAAO,CAAC,YAAY,EAAE,kBAAkB,EAAE,MAAM,CAAC,CAAC;YAC3D,CAAC;SACF,CAAC,CAAC;QAEH,2EAA2E;QAC3E,GAAG,CAAC,YAAY,CAAC;YACf,IAAI,EAAE,gBAAgB;YACtB,KAAK,EAAE,gBAAgB;YACvB,WAAW,EACT,gCAAgC;gBAChC,oBAAoB;YACtB,UAAU,EAAE,mBAAmB;YAC/B,OAAO,EAAE,KAAK,EAAE,WAAmB,EAAE,MAA+B,EAAE,EAAE;gBACtE,OAAO,OAAO,CAAC,YAAY,EAAE,gBAAgB,EAAE,MAAM,CAAC,CAAC;YACzD,CAAC;SACF,CAAC,CAAC;QAEH,GAAG,CAAC,MAAM,CAAC,IAAI,CACb,qCAAqC,YAAY,CAAC,MAAM,IAAI,wBAAwB,IAAI;YACxF,YAAY,YAAY,CAAC,WAAW,IAAI,4BAA4B,GAAG,CACxE,CAAC;IACJ,CAAC;CACF,CAAC;AAEF,eAAe,aAAa,CAAC"}
package/openclaw.plugin.json ADDED
@@ -0,0 +1,39 @@
{
  "id": "lobster-press",
  "uiHints": {
    "llmProvider": {
      "label": "LLM Provider",
      "help": "LLM provider used for DAG summary compression (deepseek / openai / zhipu etc.; leave empty to use extractive summaries)"
    },
    "llmModel": {
      "label": "LLM Model",
      "help": "LLM model name (e.g. deepseek-chat / gpt-4o-mini / glm-4-flash)"
    },
    "dbPath": {
      "label": "Database Path",
      "help": "Path to the LobsterPress SQLite database (default: ~/.openclaw/lobster.db)"
    },
    "contextThreshold": {
      "label": "Context Threshold",
      "help": "Context-usage ratio that triggers compression (0.0–1.0, default 0.75)"
    },
    "freshTailCount": {
      "label": "Fresh Tail Count",
      "help": "Number of recent messages protected from compression (default 32)"
    }
  },
  "configSchema": {
    "type": "object",
    "additionalProperties": false,
    "properties": {
      "enabled": { "type": "boolean" },
      "llmProvider": { "type": "string" },
      "llmApiKey": { "type": "string" },
      "llmModel": { "type": "string" },
      "dbPath": { "type": "string" },
      "contextThreshold": { "type": "number", "minimum": 0, "maximum": 1 },
      "freshTailCount": { "type": "integer", "minimum": 1 },
      "leafChunkTokens": { "type": "integer", "minimum": 1000 }
    }
  }
}
package/package.json ADDED
@@ -0,0 +1,44 @@
{
  "name": "@sonicbotman/lobster-press",
  "version": "3.2.2",
  "description": "Cognitive Memory System for AI Agents — OpenClaw Plugin",
  "type": "module",
  "main": "dist/index.js",
  "types": "dist/index.d.ts",
  "files": [
    "dist/",
    "openclaw.plugin.json",
    "README.md"
  ],
  "scripts": {
    "build": "tsc",
    "prepublishOnly": "npm run build"
  },
  "keywords": [
    "openclaw",
    "plugin",
    "memory",
    "llm",
    "context-management",
    "cognitive",
    "dag-compression",
    "ebbinghaus"
  ],
  "license": "MIT",
  "peerDependencies": {
    "openclaw": ">=2026.3.0"
  },
  "devDependencies": {
    "@sinclair/typebox": "^0.34.48",
    "openclaw": "^2026.3.0",
    "typescript": "^5.4.0"
  },
  "repository": {
    "type": "git",
    "url": "https://github.com/SonicBotMan/lobster-press.git"
  },
  "bugs": {
    "url": "https://github.com/SonicBotMan/lobster-press/issues"
  },
  "homepage": "https://github.com/SonicBotMan/lobster-press#readme"
}