@yun-zero/claw-memory 0.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (131)
  1. package/.claude/settings.local.json +68 -0
  2. package/README.md +323 -0
  3. package/dist/config/llm.d.ts +13 -0
  4. package/dist/config/llm.d.ts.map +1 -0
  5. package/dist/config/llm.js +96 -0
  6. package/dist/config/llm.js.map +1 -0
  7. package/dist/config/plugin.d.ts +15 -0
  8. package/dist/config/plugin.d.ts.map +1 -0
  9. package/dist/config/plugin.js +32 -0
  10. package/dist/config/plugin.js.map +1 -0
  11. package/dist/db/entityRepository.d.ts +21 -0
  12. package/dist/db/entityRepository.d.ts.map +1 -0
  13. package/dist/db/entityRepository.js +55 -0
  14. package/dist/db/entityRepository.js.map +1 -0
  15. package/dist/db/repository.d.ts +22 -0
  16. package/dist/db/repository.d.ts.map +1 -0
  17. package/dist/db/repository.js +77 -0
  18. package/dist/db/repository.js.map +1 -0
  19. package/dist/db/schema.d.ts +5 -0
  20. package/dist/db/schema.d.ts.map +1 -0
  21. package/dist/db/schema.js +112 -0
  22. package/dist/db/schema.js.map +1 -0
  23. package/dist/db/todoRepository.d.ts +26 -0
  24. package/dist/db/todoRepository.d.ts.map +1 -0
  25. package/dist/db/todoRepository.js +54 -0
  26. package/dist/db/todoRepository.js.map +1 -0
  27. package/dist/hooks/bootstrap.d.ts +3 -0
  28. package/dist/hooks/bootstrap.d.ts.map +1 -0
  29. package/dist/hooks/bootstrap.js +28 -0
  30. package/dist/hooks/bootstrap.js.map +1 -0
  31. package/dist/hooks/message.d.ts +18 -0
  32. package/dist/hooks/message.d.ts.map +1 -0
  33. package/dist/hooks/message.js +52 -0
  34. package/dist/hooks/message.js.map +1 -0
  35. package/dist/index.d.ts +3 -0
  36. package/dist/index.d.ts.map +1 -0
  37. package/dist/index.js +46 -0
  38. package/dist/index.js.map +1 -0
  39. package/dist/mcp/tools.d.ts +26 -0
  40. package/dist/mcp/tools.d.ts.map +1 -0
  41. package/dist/mcp/tools.js +360 -0
  42. package/dist/mcp/tools.js.map +1 -0
  43. package/dist/plugin.d.ts +18 -0
  44. package/dist/plugin.d.ts.map +1 -0
  45. package/dist/plugin.js +62 -0
  46. package/dist/plugin.js.map +1 -0
  47. package/dist/services/entityGraphService.d.ts +87 -0
  48. package/dist/services/entityGraphService.d.ts.map +1 -0
  49. package/dist/services/entityGraphService.js +271 -0
  50. package/dist/services/entityGraphService.js.map +1 -0
  51. package/dist/services/memory.d.ts +26 -0
  52. package/dist/services/memory.d.ts.map +1 -0
  53. package/dist/services/memory.js +281 -0
  54. package/dist/services/memory.js.map +1 -0
  55. package/dist/services/memoryIndex.d.ts +34 -0
  56. package/dist/services/memoryIndex.d.ts.map +1 -0
  57. package/dist/services/memoryIndex.js +100 -0
  58. package/dist/services/memoryIndex.js.map +1 -0
  59. package/dist/services/metadataExtractor.d.ts +16 -0
  60. package/dist/services/metadataExtractor.d.ts.map +1 -0
  61. package/dist/services/metadataExtractor.js +75 -0
  62. package/dist/services/metadataExtractor.js.map +1 -0
  63. package/dist/services/retrieval.d.ts +24 -0
  64. package/dist/services/retrieval.d.ts.map +1 -0
  65. package/dist/services/retrieval.js +40 -0
  66. package/dist/services/retrieval.js.map +1 -0
  67. package/dist/services/scheduler.d.ts +122 -0
  68. package/dist/services/scheduler.d.ts.map +1 -0
  69. package/dist/services/scheduler.js +434 -0
  70. package/dist/services/scheduler.js.map +1 -0
  71. package/dist/services/summarizer.d.ts +43 -0
  72. package/dist/services/summarizer.d.ts.map +1 -0
  73. package/dist/services/summarizer.js +252 -0
  74. package/dist/services/summarizer.js.map +1 -0
  75. package/dist/services/tagService.d.ts +64 -0
  76. package/dist/services/tagService.d.ts.map +1 -0
  77. package/dist/services/tagService.js +281 -0
  78. package/dist/services/tagService.js.map +1 -0
  79. package/dist/tools/memory.d.ts +3 -0
  80. package/dist/tools/memory.d.ts.map +1 -0
  81. package/dist/tools/memory.js +114 -0
  82. package/dist/tools/memory.js.map +1 -0
  83. package/dist/types.d.ts +128 -0
  84. package/dist/types.d.ts.map +1 -0
  85. package/dist/types.js +6 -0
  86. package/dist/types.js.map +1 -0
  87. package/docs/plans/2026-03-02-claw-memory-design.md +445 -0
  88. package/docs/plans/2026-03-02-incremental-summary-design.md +157 -0
  89. package/docs/plans/2026-03-02-incremental-summary-implementation.md +468 -0
  90. package/docs/plans/2026-03-02-memory-index-design.md +163 -0
  91. package/docs/plans/2026-03-02-memory-index-implementation.md +836 -0
  92. package/docs/plans/2026-03-02-mvp-implementation.md +1703 -0
  93. package/docs/plans/2026-03-02-testing-implementation.md +395 -0
  94. package/docs/plans/2026-03-02-testing-plan.md +93 -0
  95. package/docs/plans/2026-03-03-claw-memory-openclaw-plugin-design.md +285 -0
  96. package/docs/plans/2026-03-03-claw-memory-plugin-implementation.md +642 -0
  97. package/docs/plans/2026-03-03-entity-graph-design.md +121 -0
  98. package/docs/plans/2026-03-03-entity-graph-implementation.md +687 -0
  99. package/docs/plans/2026-03-03-llm-generic-config-design.md +43 -0
  100. package/docs/plans/2026-03-03-llm-generic-config-implementation.md +186 -0
  101. package/docs/plans/2026-03-03-memory-e2e-stress-test-design.md +110 -0
  102. package/docs/plans/2026-03-03-memory-e2e-stress-test-implementation.md +464 -0
  103. package/docs/plans/2026-03-03-minimax-llm-fix.md +156 -0
  104. package/docs/plans/2026-03-03-scheduler-design.md +165 -0
  105. package/docs/plans/2026-03-03-scheduler-implementation.md +777 -0
  106. package/docs/plans/2026-03-03-tags-visualization-design.md +73 -0
  107. package/docs/plans/2026-03-03-tags-visualization-implementation.md +539 -0
  108. package/openclaw.plugin.json +11 -0
  109. package/package.json +41 -0
  110. package/src/config/llm.ts +129 -0
  111. package/src/config/plugin.ts +47 -0
  112. package/src/db/entityRepository.ts +80 -0
  113. package/src/db/repository.ts +106 -0
  114. package/src/db/schema.ts +121 -0
  115. package/src/db/todoRepository.ts +76 -0
  116. package/src/hooks/bootstrap.ts +36 -0
  117. package/src/hooks/message.ts +84 -0
  118. package/src/index.ts +50 -0
  119. package/src/plugin.ts +85 -0
  120. package/src/services/entityGraphService.ts +367 -0
  121. package/src/services/memory.ts +338 -0
  122. package/src/services/memoryIndex.ts +140 -0
  123. package/src/services/metadataExtractor.ts +89 -0
  124. package/src/services/retrieval.ts +71 -0
  125. package/src/services/scheduler.ts +529 -0
  126. package/src/services/summarizer.ts +318 -0
  127. package/src/services/tagService.ts +335 -0
  128. package/src/tools/memory.ts +137 -0
  129. package/src/types.ts +139 -0
  130. package/tsconfig.json +20 -0
  131. package/vitest.config.ts +16 -0
@@ -0,0 +1,43 @@
# Generic LLM Configuration Design

## Overview

Replace the fixed-provider LLM configuration with a generic format that supports any OpenAI-compatible API.

## Configuration Structure

```typescript
export interface LLMConfig {
  format: 'openai' | 'anthropic' | 'openai-compatible';
  baseUrl: string;
  apiKey: string;
  model: string;
}
```

## Environment Variables

| Variable | Description | Default |
|----------|-------------|---------|
| LLM_FORMAT | Request format | openai |
| LLM_BASE_URL | API base URL | https://api.openai.com/v1 |
| LLM_API_KEY | API key | - |
| LLM_MODEL | Model name | gpt-4o-mini |
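
For example, pointing the plugin at a hypothetical OpenAI-compatible provider (the endpoint, key, and model below are illustrative placeholders, not real values):

```shell
# Illustrative values only — substitute your provider's real endpoint, key, and model
export LLM_FORMAT=openai-compatible
export LLM_BASE_URL=https://llm.example.com/v1
export LLM_API_KEY=sk-your-key
export LLM_MODEL=my-model
```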

## Request Formats

- `openai`: official OpenAI API, `/v1/chat/completions`
- `anthropic`: Anthropic API, `/v1/messages`
- `openai-compatible`: OpenAI-style requests sent to any other compatible API
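
The three formats above differ only in which endpoint path receives the request; a minimal sketch (the helper name `chatEndpoint` is hypothetical, not part of the package):

```typescript
// Sketch: mapping a request format to its chat endpoint (helper is hypothetical)
type LLMFormat = 'openai' | 'anthropic' | 'openai-compatible';

function chatEndpoint(format: LLMFormat, baseUrl: string): string {
  // Anthropic exposes /v1/messages; OpenAI-style APIs expose /chat/completions
  // relative to a base URL that already ends in /v1.
  return format === 'anthropic'
    ? `${baseUrl}/v1/messages`
    : `${baseUrl}/chat/completions`;
}

console.log(chatEndpoint('anthropic', 'https://api.anthropic.com'));
// https://api.anthropic.com/v1/messages
```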

## Implementation Steps

1. Modify the `LLMConfig` interface
2. Update `getLLMConfig()` to read the new environment variables
3. Add a `generateWithOpenAICompatible()` function
4. Update the routing logic in `generateSummaryWithLLM()`
5. Test and verify

## Backward Compatibility

Existing code that uses `OPENAI_API_KEY` or `ANTHROPIC_API_KEY` must migrate to the new variables.
@@ -0,0 +1,186 @@
# Generic LLM Configuration Implementation Plan

> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.

**Goal:** Replace the fixed-provider LLM configuration with a generic format that supports any OpenAI-compatible API

**Architecture:** A `format` field selects the request format and `baseUrl` makes the API endpoint configurable, maximizing provider compatibility

**Tech Stack:** TypeScript, environment-variable configuration

---

### Task 1: Modify the LLMConfig interface

**Files:**
- Modify: `src/config/llm.ts:1-115`

**Step 1: Update the interface definition**

```typescript
export interface LLMConfig {
  format: 'openai' | 'anthropic' | 'openai-compatible';
  baseUrl: string;
  apiKey: string;
  model: string;
}
```

**Step 2: Update the getLLMConfig function**

```typescript
export function getLLMConfig(): LLMConfig {
  const format = (process.env.LLM_FORMAT as LLMConfig['format']) || 'openai';
  const baseUrl = process.env.LLM_BASE_URL || getDefaultBaseUrl(format);
  const apiKey = process.env.LLM_API_KEY || process.env.OPENAI_API_KEY || '';
  const model = process.env.LLM_MODEL || getDefaultModel(format);

  if (!apiKey) {
    throw new Error('No LLM API key configured. Set LLM_API_KEY environment variable.');
  }

  return { format, baseUrl, apiKey, model };
}

function getDefaultBaseUrl(format: LLMConfig['format']): string {
  switch (format) {
    case 'anthropic':
      return 'https://api.anthropic.com';
    case 'openai':
    case 'openai-compatible':
    default:
      return 'https://api.openai.com/v1';
  }
}

function getDefaultModel(format: LLMConfig['format']): string {
  switch (format) {
    case 'anthropic':
      return 'claude-3-haiku-20240307';
    case 'openai':
    case 'openai-compatible':
    default:
      return 'gpt-4o-mini';
  }
}
```
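
The precedence rules in `getLLMConfig` can be spot-checked without any network access; here is a standalone restatement of them (the `resolve` helper below is hypothetical, written only for illustration — it is not the package's module):

```typescript
// Standalone restatement of the env-variable precedence (helper is hypothetical)
type Format = 'openai' | 'anthropic' | 'openai-compatible';

interface Resolved { format: Format; baseUrl: string; apiKey: string; }

function resolve(env: Record<string, string | undefined>): Resolved {
  const format = (env.LLM_FORMAT as Format) || 'openai';
  const baseUrl =
    env.LLM_BASE_URL ||
    (format === 'anthropic' ? 'https://api.anthropic.com' : 'https://api.openai.com/v1');
  // LLM_API_KEY wins; OPENAI_API_KEY is kept only as a legacy fallback
  const apiKey = env.LLM_API_KEY || env.OPENAI_API_KEY || '';
  return { format, baseUrl, apiKey };
}

console.log(resolve({ LLM_FORMAT: 'anthropic', OPENAI_API_KEY: 'legacy' }).baseUrl);
// https://api.anthropic.com
```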

**Step 3: Commit**

```bash
git add src/config/llm.ts
git commit -m "refactor: update LLMConfig to generic format"
```

---

### Task 2: Add OpenAI-compatible API support

**Files:**
- Modify: `src/config/llm.ts`

**Step 1: Add a generateWithOpenAICompatible function**

```typescript
async function generateWithOpenAICompatible(
  systemPrompt: string,
  userPrompt: string,
  config: LLMConfig
): Promise<string> {
  const response = await fetch(`${config.baseUrl}/chat/completions`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${config.apiKey}`
    },
    body: JSON.stringify({
      model: config.model,
      max_tokens: 1024,
      messages: [
        { role: 'system', content: systemPrompt },
        { role: 'user', content: userPrompt }
      ]
    })
  });

  if (!response.ok) {
    const error = await response.text();
    throw new Error(`OpenAI Compatible API error: ${error}`);
  }

  const data = await response.json() as { choices: { message: { content: string } }[] };
  return data.choices[0]?.message?.content || '总结生成失败';
}
```
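
The response parsing above can be exercised without a live endpoint by stubbing the global `fetch` (Node 18+ assumed; this is an illustrative sketch, not the package's test code — `readContent` is a hypothetical stand-in for the function above):

```typescript
// Stub fetch to return an OpenAI-style chat completion (illustration only)
(globalThis as any).fetch = async (_url: string, _init?: unknown) =>
  ({
    ok: true,
    text: async () => '',
    json: async () => ({ choices: [{ message: { content: 'hello' } }] }),
  }) as any;

async function readContent(): Promise<string> {
  const response = await fetch('https://example.invalid/v1/chat/completions', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model: 'any', messages: [] }),
  });
  if (!response.ok) throw new Error(await response.text());
  const data = (await response.json()) as { choices: { message: { content: string } }[] };
  return data.choices[0]?.message?.content || '';
}

readContent().then((text) => console.log(text)); // hello
```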

**Step 2: Update the routing logic in generateSummaryWithLLM**

Modify the `generateSummaryWithLLM` function:

```typescript
export async function generateSummaryWithLLM(
  report: string,
  config?: LLMConfig
): Promise<string> {
  const llmConfig = config || getLLMConfig();

  const systemPrompt = `你是一个智能助手...`; // existing prompt, unchanged

  if (llmConfig.format === 'anthropic') {
    return generateWithAnthropic(systemPrompt, report, llmConfig);
  } else if (llmConfig.format === 'openai-compatible') {
    return generateWithOpenAICompatible(systemPrompt, report, llmConfig);
  } else {
    return generateWithOpenAI(systemPrompt, report, llmConfig);
  }
}
```

**Step 3: Commit**

```bash
git add src/config/llm.ts
git commit -m "feat: add OpenAI compatible API support"
```

---

### Task 3: Verification and Testing

**Files:**
- Test: `src/config/llm.ts`

**Step 1: Verify configuration parsing**

```bash
# Temporarily set environment variables for a one-off check (run against the compiled output)
LLM_FORMAT=openai-compatible \
LLM_BASE_URL=https://api.minimax.chat/v1 \
LLM_API_KEY=test-key \
LLM_MODEL=abab6.5s-chat \
node -e "import('./dist/config/llm.js').then(m => console.log(m.getLLMConfig()))"
```

**Step 2: Run the existing tests**

```bash
npm test
```

**Step 3: Commit**

```bash
git commit -m "test: verify LLM config works"
```

---

## Execution Options

**Plan complete and saved to `docs/plans/2026-03-03-llm-generic-config-design.md`. Two execution options:**

**1. Subagent-Driven (this session)** - I dispatch a fresh subagent per task, review between tasks, fast iteration

**2. Parallel Session (separate)** - Open a new session with executing-plans, batch execution with checkpoints

**Which approach?**
@@ -0,0 +1,110 @@
# Memory E2E & Stress Test Design

**Date:** 2026-03-03
**Status:** Draft

## Overview

Design for testing the claw-memory component through simulated OpenClaw usage, including end-to-end functional tests and stress tests with large-scale knowledge-domain conversations.

## Goals

1. **End-to-End Test (A)**: Verify the complete workflow with real conversation logs
2. **Stress Test (B)**: Test batch writing across different knowledge domains and evaluate retrieval accuracy

---

## Part A: End-to-End Test

### Test Flow

```
Real conversation logs → Parser → MemoryService → MCP Tools → Verify
        ↓
  Database Storage
        ↓
Retrieve/Summary → Compare with expected
```

### Test Steps

1. **Data Preparation**: Read real conversation samples from `test/fixtures/`
2. **Save Memory**: Call `save_memory` for each conversation
3. **Search Verification**: Call `search_memory` with different queries, verify relevance
4. **Context Verification**: Call `get_context`, verify the context contains relevant memories
5. **Summary Verification**: Call `get_summary`, verify summary accuracy and completeness
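
The save → search loop in steps 2–3 can be illustrated with a tiny in-memory stand-in (the class below is a hypothetical stub for demonstration, not the real MemoryService):

```typescript
// Hypothetical in-memory stand-in for MemoryService (illustration only)
interface Memory { id: number; text: string; }

class FakeMemoryStore {
  private memories: Memory[] = [];

  save(text: string): Memory {
    const memory = { id: this.memories.length + 1, text };
    this.memories.push(memory);
    return memory;
  }

  // Naive keyword match stands in for the real retrieval pipeline
  search(query: string): Memory[] {
    return this.memories.filter((m) => m.text.includes(query));
  }
}

const store = new FakeMemoryStore();
store.save('Discussed React component development');
store.save('Reviewed database index tuning');
console.log(store.search('React').length); // 1
```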

### Test Data

- Source: Real conversation logs from the project test directory
- Format: JSON with conversation messages and metadata

---

## Part B: Stress Test

### 1. Batch Writing Optimization

**Problem**: An LLM API call per memory is expensive

**Solution**: Batch multiple memories into a single LLM call

```typescript
// Before: N LLM calls
for (const msg of messages) {
  await llm.extractMetadata(msg);
}

// After: 1 LLM call per batch (50-100 messages)
const batchResult = await llm.extractMetadataBatch(messages);
```

**Expected Reduction**: 1000 memories → ~10-20 API calls instead of 1000
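
The batching step itself is a plain chunking pass over the messages; a minimal sketch (the batch size of 50 is one point in the 50–100 range the design mentions):

```typescript
// Sketch: splitting messages into fixed-size batches before the LLM call
function chunk<T>(items: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// 1000 messages at 50 per batch → 20 LLM calls instead of 1000
const messages = Array.from({ length: 1000 }, (_, i) => `msg-${i}`);
console.log(chunk(messages, 50).length); // 20
```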

### 2. Knowledge Domains

Write conversations from diverse domains:

- **Technology/Programming**: React, Python, databases, APIs
- **Product/Design**: UI/UX, requirements analysis
- **Daily Office**: Meeting notes, task management
- **Random Mixed**: Various domains

### 3. Retrieval Accuracy Metrics

| Metric | Description |
|--------|-------------|
| **Recall@K** | Ratio of relevant memories retrieved in the top K |
| **Precision@K** | Ratio of retrieved results that are relevant |
| **MRR** | Mean Reciprocal Rank of the first relevant result |
| **NDCG@K** | Normalized Discounted Cumulative Gain, accounting for ranking position |
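
The first three metrics follow the standard information-retrieval definitions; a self-contained sketch (these helpers are illustrative, not the package's evaluation code):

```typescript
// Standard IR metrics over a ranked result list and a ground-truth relevant set
function precisionAtK(ranked: string[], relevant: Set<string>, k: number): number {
  const top = ranked.slice(0, k);
  return top.filter((id) => relevant.has(id)).length / k;
}

function recallAtK(ranked: string[], relevant: Set<string>, k: number): number {
  const top = ranked.slice(0, k);
  return top.filter((id) => relevant.has(id)).length / relevant.size;
}

function mrr(ranked: string[], relevant: Set<string>): number {
  const i = ranked.findIndex((id) => relevant.has(id));
  return i === -1 ? 0 : 1 / (i + 1);
}

const relevant = new Set(['m2', 'm5']);
console.log(precisionAtK(['m1', 'm2', 'm3'], relevant, 3)); // 1/3
console.log(recallAtK(['m1', 'm2', 'm3'], relevant, 3));    // 0.5
console.log(mrr(['m1', 'm2', 'm3'], relevant));             // 0.5
```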

### 4. Test Queries

| Query | Expected Domain |
|-------|-----------------|
| "React 组件开发" (React component development) | React-related memories |
| "数据库优化" (database optimization) | Database-related memories |
| "项目进度" (project progress) | Task/progress-related memories |

### 5. Test Method

```
Write 1000 memories from different domains
        ↓
For each test query:
  - Execute search_memory
  - Mark results as relevant/irrelevant (using ground truth)
  - Calculate Recall@K, Precision@K, MRR, NDCG@K
        ↓
Output evaluation report
```

---

## Acceptance Criteria

1. E2E test covers the full flow: save → search → context → summary
2. Stress test writes 1000+ memories with batched LLM calls
3. Retrieval accuracy metrics calculated and reported
4. Test results reproducible and automated