@morningljn/mnemo 0.2.1 → 0.3.0
- package/dist/config.d.ts +2 -0
- package/dist/config.js +28 -0
- package/dist/config.js.map +1 -0
- package/dist/dream-engine.d.ts +17 -0
- package/dist/dream-engine.js +144 -0
- package/dist/dream-engine.js.map +1 -0
- package/dist/llm-client.d.ts +10 -0
- package/dist/llm-client.js +55 -0
- package/dist/llm-client.js.map +1 -0
- package/dist/resources.js +1 -1
- package/dist/store.d.ts +2 -1
- package/dist/store.js +32 -8
- package/dist/store.js.map +1 -1
- package/dist/types.d.ts +14 -0
- package/docs/superpowers/plans/2026-05-16-llm-dream.md +973 -0
- package/openspec/changes/llm-dream/.openspec.yaml +2 -0
- package/openspec/changes/llm-dream/design.md +84 -0
- package/openspec/changes/llm-dream/proposal.md +36 -0
- package/openspec/changes/llm-dream/specs/dream-cycle/spec.md +42 -0
- package/openspec/changes/llm-dream/specs/llm-client/spec.md +57 -0
- package/openspec/changes/llm-dream/specs/llm-dream-engine/spec.md +72 -0
- package/openspec/changes/llm-dream/tasks.md +32 -0
- package/package.json +1 -1
- package/src/config.ts +29 -0
- package/src/dream-engine.ts +162 -0
- package/src/llm-client.ts +59 -0
- package/src/resources.ts +1 -1
- package/src/store.ts +39 -7
- package/src/types.ts +16 -0
- package/tests/dream-engine.test.ts +163 -0
- package/tests/llm-client.test.ts +105 -0
- package/tests/store.test.ts +6 -5
@@ -0,0 +1,973 @@

# LLM Dream Engine Implementation Plan

> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.

**Goal:** Replace the hardcoded rules with an LLM for semantic-level memory maintenance (merge/summarize/classify), falling back to the rule engine when Ollama is unavailable.

**Architecture:** Add `llm-client.ts` (a unified OpenAI-compatible `/v1/chat/completions` client) and `dream-engine.ts` (three LLM-driven maintenance tasks, plus safety validation and fallback). Modify `runDream()` in `store.ts` to integrate the dream engine. Configuration is loaded from `~/.mnemo/config.json`; when no config exists, default to Ollama on localhost.

**Tech Stack:** TypeScript, Node.js native fetch, better-sqlite3, vitest

---

## File Structure

| File | Action | Responsibility |
|------|--------|----------------|
| `src/types.ts` | Modify | Add LLMConfig and LLMMessage; add fallback fields to DreamReport |
| `src/config.ts` | Create | Load `~/.mnemo/config.json` and return the LLM configuration |
| `src/llm-client.ts` | Create | OpenAI-compatible `/v1/chat/completions` client |
| `src/dream-engine.ts` | Create | LLM semantic merge/summarize/classify, safety validation, fallback to the rule engine |
| `src/store.ts` | Modify | Integrate DreamEngine into `runDream()` |
| `src/dream.ts` | Modify | CLI uses the new dream engine |
| `tests/llm-client.test.ts` | Create | LLM client tests (mocked fetch) |
| `tests/dream-engine.test.ts` | Create | Dream engine tests (mocked LLM) |

---

### Task 1: Type Definitions

**Files:**
- Modify: `src/types.ts`

- [ ] **Step 1: Add the LLM-related types to types.ts**

Append to the end of `src/types.ts`:

```typescript
/** LLM configuration */
export interface LLMConfig {
  baseUrl: string
  model: string
  apiKey?: string
  temperature: number
}

/** LLM chat message */
export interface LLMMessage {
  role: 'system' | 'user' | 'assistant'
  content: string
}
```

Modify the `DreamReport` interface, adding two new fields:

```typescript
export interface DreamReport {
  merged: number
  compressed: number
  reclassified: number
  deleted: number
  mergeDetails: Array<{ kept: number; removed: number; similarity: number }>
  fallback?: boolean
  fallbackReason?: string
  health: {
    total: number
    avg_trust: number
    avg_length: number
    coverage: Record<FactCategory, number>
  }
}
```

- [ ] **Step 2: Verify the build passes**

Run: `npm run build 2>&1 | tail -5`
Expected: no errors

- [ ] **Step 3: Commit**

```bash
git add src/types.ts
git commit -m "feat(types): add LLMConfig, LLMMessage types and DreamReport fallback fields"
```

---

### Task 2: Configuration Loading

**Files:**
- Create: `src/config.ts`

- [ ] **Step 1: Create config.ts**

```typescript
import { existsSync, readFileSync } from 'node:fs'
import { join } from 'node:path'
import { homedir } from 'node:os'
import type { LLMConfig } from './types.js'

const DEFAULT_CONFIG: LLMConfig = {
  baseUrl: 'http://localhost:11434/v1',
  model: 'qwen3:8b',
  temperature: 0.1,
}

export function loadConfig(): LLMConfig {
  const configPath = join(homedir(), '.mnemo', 'config.json')
  if (!existsSync(configPath)) return { ...DEFAULT_CONFIG }

  try {
    const raw = readFileSync(configPath, 'utf-8')
    const parsed = JSON.parse(raw)
    const llm = parsed.llm ?? {}
    return {
      baseUrl: llm.baseUrl ?? DEFAULT_CONFIG.baseUrl,
      model: llm.model ?? DEFAULT_CONFIG.model,
      apiKey: llm.apiKey,
      temperature: llm.temperature ?? DEFAULT_CONFIG.temperature,
    }
  } catch {
    return { ...DEFAULT_CONFIG }
  }
}
```
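
For reference, a minimal `~/.mnemo/config.json` that this loader reads could look like the following sketch (all values are illustrative; any field omitted under `llm` falls back to the Ollama defaults above, and the `apiKey` placeholder is intentionally not a real key):

```json
{
  "llm": {
    "baseUrl": "http://localhost:11434/v1",
    "model": "qwen3:8b",
    "apiKey": "sk-illustrative-placeholder",
    "temperature": 0.1
  }
}
```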

- [ ] **Step 2: Verify the build passes**

Run: `npm run build 2>&1 | tail -3`
Expected: no errors

- [ ] **Step 3: Commit**

```bash
git add src/config.ts
git commit -m "feat(config): add loadConfig for ~/.mnemo/config.json with Ollama defaults"
```

---

### Task 3: LLM Client

**Files:**
- Create: `src/llm-client.ts`
- Create: `tests/llm-client.test.ts`

- [ ] **Step 1: Write the failing tests in `tests/llm-client.test.ts`**

```typescript
import { describe, it, expect, vi, beforeEach } from 'vitest'
import { LLMClient } from '../src/llm-client.js'
import type { LLMConfig, LLMMessage } from '../src/types.js'

const mockConfig: LLMConfig = {
  baseUrl: 'http://localhost:11434/v1',
  model: 'test-model',
  temperature: 0.1,
}

function mockFetchResponse(body: unknown, ok = true, status = 200) {
  return vi.fn().mockResolvedValue({
    ok,
    status,
    json: () => Promise.resolve(body),
    text: () => Promise.resolve(JSON.stringify(body)),
  })
}

describe('LLMClient', () => {
  beforeEach(() => {
    vi.restoreAllMocks()
  })

  describe('chat', () => {
    it('sends request and returns text content', async () => {
      const mockResp = {
        choices: [{ message: { content: 'Hello from LLM' } }],
      }
      globalThis.fetch = mockFetchResponse(mockResp)

      const client = new LLMClient(mockConfig)
      const messages: LLMMessage[] = [{ role: 'user', content: 'Hi' }]
      const result = await client.chat(messages)

      expect(result).toBe('Hello from LLM')
      expect(globalThis.fetch).toHaveBeenCalledWith(
        'http://localhost:11434/v1/chat/completions',
        expect.objectContaining({
          method: 'POST',
          headers: expect.objectContaining({ 'Content-Type': 'application/json' }),
        }),
      )
    })

    it('includes Authorization header when apiKey is set', async () => {
      const configWithKey = { ...mockConfig, apiKey: 'sk-test-key' }
      globalThis.fetch = mockFetchResponse({
        choices: [{ message: { content: 'ok' } }],
      })

      const client = new LLMClient(configWithKey)
      await client.chat([{ role: 'user', content: 'test' }])

      const callArgs = (globalThis.fetch as ReturnType<typeof vi.fn>).mock.calls[0][1] as RequestInit
      expect(callArgs.headers).toHaveProperty('Authorization', 'Bearer sk-test-key')
    })

    it('throws on connection failure', async () => {
      globalThis.fetch = vi.fn().mockRejectedValue(new Error('ECONNREFUSED'))
      const client = new LLMClient(mockConfig)
      await expect(client.chat([{ role: 'user', content: 'test' }])).rejects.toThrow('ECONNREFUSED')
    })

    it('throws on non-ok HTTP status', async () => {
      globalThis.fetch = mockFetchResponse({ error: 'bad request' }, false, 400)
      const client = new LLMClient(mockConfig)
      await expect(client.chat([{ role: 'user', content: 'test' }])).rejects.toThrow('400')
    })

    it('extracts JSON from markdown code fence', async () => {
      const jsonBody = { result: [1, 2] }
      const fenced = '```json\n' + JSON.stringify(jsonBody) + '\n```'
      globalThis.fetch = mockFetchResponse({
        choices: [{ message: { content: fenced } }],
      })

      const client = new LLMClient(mockConfig)
      const result = await client.chatJSON([{ role: 'user', content: 'test' }])
      expect(result).toEqual(jsonBody)
    })

    it('throws on invalid JSON response in chatJSON', async () => {
      globalThis.fetch = mockFetchResponse({
        choices: [{ message: { content: 'not json at all' } }],
      })
      const client = new LLMClient(mockConfig)
      await expect(client.chatJSON([{ role: 'user', content: 'test' }])).rejects.toThrow()
    })
  })

  describe('isAvailable', () => {
    it('returns true when service is reachable', async () => {
      globalThis.fetch = mockFetchResponse({ data: [] })
      const client = new LLMClient(mockConfig)
      expect(await client.isAvailable()).toBe(true)
    })

    it('returns false when service is unreachable', async () => {
      globalThis.fetch = vi.fn().mockRejectedValue(new Error('fail'))
      const client = new LLMClient(mockConfig)
      expect(await client.isAvailable()).toBe(false)
    })
  })
})
```

- [ ] **Step 2: Run the tests and confirm they fail**

Run: `npx vitest run tests/llm-client.test.ts 2>&1 | tail -10`
Expected: FAIL — `Cannot find module '../src/llm-client.js'`
- [ ] **Step 3: Implement `src/llm-client.ts`**

```typescript
import type { LLMConfig, LLMMessage } from './types.js'

export class LLMClient {
  constructor(private config: LLMConfig) {}

  async chat(messages: LLMMessage[], options?: { temperature?: number }): Promise<string> {
    const url = `${this.config.baseUrl}/chat/completions`
    const headers: Record<string, string> = { 'Content-Type': 'application/json' }
    if (this.config.apiKey) {
      headers['Authorization'] = `Bearer ${this.config.apiKey}`
    }

    const resp = await fetch(url, {
      method: 'POST',
      headers,
      body: JSON.stringify({
        model: this.config.model,
        messages,
        temperature: options?.temperature ?? this.config.temperature,
        stream: false,
      }),
    })

    if (!resp.ok) {
      throw new Error(`LLM request failed: ${resp.status} ${await resp.text()}`)
    }

    const data = await resp.json() as { choices: Array<{ message: { content: string } }> }
    return data.choices[0]?.message?.content ?? ''
  }

  async chatJSON<T = unknown>(messages: LLMMessage[]): Promise<T> {
    const text = await this.chat(messages)
    // Try parsing the response directly
    try {
      return JSON.parse(text)
    } catch {
      // Fall back to extracting JSON from a markdown code fence
      const match = text.match(/```(?:json)?\s*\n?([\s\S]*?)\n?```/)
      if (match) {
        return JSON.parse(match[1].trim())
      }
      throw new Error(`LLM response is not valid JSON: ${text.slice(0, 200)}`)
    }
  }

  async isAvailable(): Promise<boolean> {
    try {
      const url = `${this.config.baseUrl}/models`
      const resp = await fetch(url, { method: 'GET', signal: AbortSignal.timeout(3000) })
      return resp.ok
    } catch {
      return false
    }
  }
}
```
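
The fence-extraction fallback in `chatJSON` can be sanity-checked in isolation. A quick sketch using the same regex (the fence string is built with `repeat` only so the snippet itself stays valid inside this document):

```typescript
// Reproduce chatJSON's fallback: strip an optional markdown JSON fence before parsing.
const fence = '`'.repeat(3)
const text = `${fence}json\n{"merges": []}\n${fence}`

const match = text.match(/```(?:json)?\s*\n?([\s\S]*?)\n?```/)
const parsed = match ? JSON.parse(match[1].trim()) : JSON.parse(text)
console.log(JSON.stringify(parsed)) // → {"merges":[]}
```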

- [ ] **Step 4: Run the tests and confirm they pass**

Run: `npx vitest run tests/llm-client.test.ts 2>&1 | tail -8`
Expected: 8 tests passed

- [ ] **Step 5: Commit**

```bash
git add src/llm-client.ts tests/llm-client.test.ts
git commit -m "feat(llm-client): OpenAI-compatible /v1/chat/completions client with health check"
```

---

### Task 4: Dream Engine — Semantic Merge

**Files:**
- Create: `src/dream-engine.ts`
- Create: `tests/dream-engine.test.ts` (partial)

- [ ] **Step 1: Write the failing tests for semantic merge**

In `tests/dream-engine.test.ts`:

```typescript
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest'
import { DreamEngine } from '../src/dream-engine.js'
import { LLMClient } from '../src/llm-client.js'
import { MemoryStore } from '../src/store.js'
import { mkdtempSync, rmSync } from 'node:fs'
import { join } from 'node:path'
import { tmpdir } from 'node:os'

let store: MemoryStore
let tmpDir: string
let mockChatJSON: ReturnType<typeof vi.fn>

beforeEach(() => {
  tmpDir = mkdtempSync(join(tmpdir(), 'mnemo-dream-'))
  store = new MemoryStore(join(tmpDir, 'test.db'))
})

afterEach(() => {
  store.close()
  rmSync(tmpDir, { recursive: true, force: true })
})

function createEngine(llmResponses: unknown[]): DreamEngine {
  mockChatJSON = vi.fn()
  for (const resp of llmResponses) {
    mockChatJSON.mockResolvedValueOnce(resp)
  }
  const mockClient = { chatJSON: mockChatJSON, isAvailable: vi.fn().mockResolvedValue(true) } as unknown as LLMClient
  return new DreamEngine(mockClient, store)
}

describe('DreamEngine - semanticMerge', () => {
  it('merges semantically duplicate facts in same category', async () => {
    store.addFact('User likes writing code in the VS Code editor', 'tool_pref')
    store.addFact('User prefers Visual Studio Code as a development tool', 'tool_pref')

    const eng = createEngine([
      { merges: [{ kept: 1, removed: 2, reason: 'both describe a preference for VS Code' }] },
    ])
    const result = await eng.semanticMerge()

    expect(result.merged).toBe(1)
    expect(result.details.length).toBe(1)
    expect(result.details[0].reason).toBe('both describe a preference for VS Code')
  })

  it('skips batch when LLM returns invalid JSON', async () => {
    store.addFact('some fact content', 'general')
    store.addFact('more fact content', 'general')

    const mockClient = {
      chatJSON: vi.fn().mockRejectedValue(new Error('invalid json')),
      isAvailable: vi.fn().mockResolvedValue(true),
    } as unknown as LLMClient
    const eng = new DreamEngine(mockClient, store)
    const result = await eng.semanticMerge()

    expect(result.merged).toBe(0)
  })

  it('protects high-trust facts from deletion', async () => {
    const id1 = store.addFact('high-trust fact content', 'general')
    // push trust toward ~1.0 with repeated positive feedback
    for (let i = 0; i < 10; i++) store.recordFeedback(id1, true)

    const id2 = store.addFact('another fact content', 'general')

    const eng = createEngine([
      { merges: [{ kept: id2, removed: id1, reason: 'duplicate' }] },
    ])
    const result = await eng.semanticMerge()

    // id1 trust > 0.8, so it should be protected
    expect(result.merged).toBe(0)
  })
})
```

- [ ] **Step 2: Run the tests and confirm they fail**

Run: `npx vitest run tests/dream-engine.test.ts 2>&1 | tail -10`
Expected: FAIL — module not found

- [ ] **Step 3: Create the `src/dream-engine.ts` skeleton and implement semanticMerge**

```typescript
import type { LLMMessage, FactCategory } from './types.js'
import type { LLMClient } from './llm-client.js'
import type { MemoryStore } from './store.js'

const BATCH_SIZE = 20
const MAX_DELETE_RATIO = 0.1
const TRUST_DELETE_LIMIT = 0.8
const RETRIEVAL_DELETE_LIMIT = 100

export class DreamEngine {
  constructor(private llm: LLMClient, private store: MemoryStore) {}

  async semanticMerge(): Promise<{
    merged: number
    details: Array<{ kept: number; removed: number; reason: string }>
  }> {
    const categories: FactCategory[] = ['identity', 'coding_style', 'tool_pref', 'workflow', 'general']
    let merged = 0
    const details: Array<{ kept: number; removed: number; reason: string }> = []

    const totalFacts = this.store.getTotalCount()
    const maxDeletes = Math.max(1, Math.floor(totalFacts * MAX_DELETE_RATIO))

    for (const cat of categories) {
      const facts = this.store.listFacts(cat, 0, 200)
      if (facts.length < 2) continue

      // Process in batches
      for (let i = 0; i < facts.length; i += BATCH_SIZE) {
        const batch = facts.slice(i, i + BATCH_SIZE)
        const factList = batch.map(f => `[${f.factId}] ${f.content}`).join('\n')

        const messages: LLMMessage[] = [
          {
            role: 'system',
            content: `You are a memory-maintenance assistant. Analyze the following memory entries, all in the same category (${cat}), and find pairs that are semantic duplicates.
Output JSON only, in the form: {"merges": [{"kept": fact_id to keep, "removed": fact_id to delete, "reason": "why"}]}
If there are no semantic duplicates, output: {"merges": []}
Rules:
- Keep the entry whose content is more complete and more informative
- Entries with different wording but the same meaning should be merged (e.g. "likes VS Code" and "prefers Visual Studio Code")
- Do not merge entries that are merely related in topic but different in content`,
          },
          { role: 'user', content: factList },
        ]

        try {
          const result = await this.llm.chatJSON<{ merges: Array<{ kept: number; removed: number; reason: string }> }>(messages)
          if (!result?.merges || !Array.isArray(result.merges)) continue

          for (const merge of result.merges) {
            if (merged >= maxDeletes) break
            if (!merge.kept || !merge.removed) continue

            // Safety validation
            const toRemove = this.store.listFacts(cat, 0, 200).find(f => f.factId === merge.removed)
            if (!toRemove) continue
            if (toRemove.trustScore > TRUST_DELETE_LIMIT) continue
            if (toRemove.retrievalCount > RETRIEVAL_DELETE_LIMIT) continue

            const toKeep = this.store.listFacts(cat, 0, 200).find(f => f.factId === merge.kept)
            if (!toKeep) continue

            this.store.removeFact(merge.removed)
            details.push({ kept: merge.kept, removed: merge.removed, reason: merge.reason })
            merged++
          }
        } catch {
          // LLM output was malformed; skip this batch
          continue
        }
      }
    }

    return { merged, details }
  }

  async smartCompress(): Promise<number> {
    // Implemented in Task 5
    return 0
  }

  async smartReclassify(): Promise<number> {
    // Implemented in Task 6
    return 0
  }
}
```

- [ ] **Step 4: Run the tests**

Run: `npx vitest run tests/dream-engine.test.ts 2>&1 | tail -10`
Expected: 3 tests passed

- [ ] **Step 5: Commit**

```bash
git add src/dream-engine.ts tests/dream-engine.test.ts
git commit -m "feat(dream-engine): add semanticMerge with LLM batch analysis and safety validation"
```

---
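
The deletion cap in `semanticMerge` is worth spelling out: with `MAX_DELETE_RATIO = 0.1`, a single dream cycle may remove at most 10% of all facts, but never fewer than one. A minimal sketch of that arithmetic:

```typescript
// Mirrors the cap computed in semanticMerge: at most 10% of total facts, minimum 1.
const MAX_DELETE_RATIO = 0.1

function maxDeletes(totalFacts: number): number {
  return Math.max(1, Math.floor(totalFacts * MAX_DELETE_RATIO))
}

console.log(maxDeletes(5), maxDeletes(10), maxDeletes(200)) // → 1 1 20
```

The `Math.max(1, …)` floor matters for small stores: without it, any store with fewer than 10 facts could never merge anything.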

### Task 5: Dream Engine — Smart Summarization

**Files:**
- Modify: `src/dream-engine.ts`
- Modify: `tests/dream-engine.test.ts`

- [ ] **Step 1: Append the smart-summarization tests to the test file**

```typescript
describe('DreamEngine - smartCompress', () => {
  it('generates summary for long facts without summary', async () => {
    const longContent = 'This is a long memory entry. '.repeat(10) // > 200 chars
    store.addFact(longContent, 'general')

    const summary = 'a concise summary of the long content'
    const eng = createEngine([{ summaries: [{ fact_id: 1, summary }] }])
    const result = await eng.smartCompress()

    expect(result).toBe(1)
  })

  it('skips facts that already have summary', async () => {
    const longContent = 'This is a long memory entry. '.repeat(10)
    const id = store.addFact(longContent, 'general')
    store.connection.prepare('UPDATE facts SET summary = ? WHERE fact_id = ?').run('existing summary', id)

    const eng = createEngine([])
    const result = await eng.smartCompress()

    expect(result).toBe(0)
    expect(mockChatJSON).not.toHaveBeenCalled()
  })

  it('truncates summary longer than 150 chars', async () => {
    const longContent = 'This is a long memory entry. '.repeat(10)
    store.addFact(longContent, 'general')

    const tooLongSummary = 'a'.repeat(200)
    const eng = createEngine([{ summaries: [{ fact_id: 1, summary: tooLongSummary }] }])
    await eng.smartCompress()

    const fact = store.listFacts('general', 0, 10)[0]
    expect(fact.summary!.length).toBeLessThanOrEqual(150)
  })
})
```

- [ ] **Step 2: Run the tests and confirm they fail**

Run: `npx vitest run tests/dream-engine.test.ts 2>&1 | grep -A5 "smartCompress"`
Expected: the smartCompress tests fail (the method still returns 0)

- [ ] **Step 3: Implement smartCompress in `dream-engine.ts`**

Replace the `smartCompress` method:

```typescript
  async smartCompress(): Promise<number> {
    const rows = this.store.connection.prepare(
      "SELECT fact_id, content FROM facts WHERE length(content) > 200 AND (summary IS NULL OR summary = '')"
    ).all() as Array<{ fact_id: number; content: string }>

    if (rows.length === 0) return 0

    let compressed = 0

    for (let i = 0; i < rows.length; i += BATCH_SIZE) {
      const batch = rows.slice(i, i + BATCH_SIZE)
      const factList = batch.map(f => `[${f.fact_id}] ${f.content}`).join('\n\n---\n\n')

      const messages: LLMMessage[] = [
        {
          role: 'system',
          content: `You are a memory-summarization assistant. Generate a concise summary (≤150 characters) for each memory entry.
A summary should keep the core information: who/what/key decisions/key data. Drop examples, process descriptions, and redundant detail.
Output JSON: {"summaries": [{"fact_id": number, "summary": "summary text"}]}`,
        },
        { role: 'user', content: factList },
      ]

      try {
        const result = await this.llm.chatJSON<{ summaries: Array<{ fact_id: number; summary: string }> }>(messages)
        if (!result?.summaries || !Array.isArray(result.summaries)) continue

        for (const item of result.summaries) {
          if (!item.fact_id || !item.summary) continue
          const truncated = item.summary.length > 150 ? item.summary.slice(0, 147) + '...' : item.summary
          this.store.connection.prepare('UPDATE facts SET summary = ? WHERE fact_id = ?').run(truncated, item.fact_id)
          compressed++
        }
      } catch {
        continue
      }
    }

    return compressed
  }
```

Note: this uses the `LLMMessage` import (already added in Task 4).

- [ ] **Step 4: Run the tests and confirm they pass**

Run: `npx vitest run tests/dream-engine.test.ts 2>&1 | tail -8`
Expected: all smartCompress tests pass

- [ ] **Step 5: Commit**

```bash
git add src/dream-engine.ts tests/dream-engine.test.ts
git commit -m "feat(dream-engine): add smartCompress with LLM-generated summaries"
```

---
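
The 150-character cap in `smartCompress` truncates to 147 characters plus a three-character ellipsis, so a stored summary never exceeds 150 characters. The rule, sketched on its own:

```typescript
// Same truncation rule as smartCompress: cap summaries at 150 characters.
function truncateSummary(summary: string): string {
  return summary.length > 150 ? summary.slice(0, 147) + '...' : summary
}

console.log(truncateSummary('a'.repeat(200)).length) // → 150
console.log(truncateSummary('short summary')) // → short summary
```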

### Task 6: Dream Engine — Smart Classification

**Files:**
- Modify: `src/dream-engine.ts`
- Modify: `tests/dream-engine.test.ts`

- [ ] **Step 1: Append the smart-classification tests**

```typescript
describe('DreamEngine - smartReclassify', () => {
  it('moves general facts to correct category via LLM', async () => {
    store.addFact('User coding style requires files to stay under 500 lines', 'general')

    const eng = createEngine([{ reclassify: [{ fact_id: 1, to: 'coding_style' }] }])
    const result = await eng.smartReclassify()

    expect(result).toBe(1)
    const fact = store.listFacts('coding_style', 0, 10)[0]
    expect(fact).toBeDefined()
    expect(fact.content).toContain('coding style')
  })

  it('ignores invalid category from LLM', async () => {
    store.addFact('some content', 'general')

    const eng = createEngine([{ reclassify: [{ fact_id: 1, to: 'invalid_category' }] }])
    const result = await eng.smartReclassify()

    expect(result).toBe(0)
  })

  it('skips when LLM says keep general', async () => {
    store.addFact('some miscellaneous content', 'general')

    const eng = createEngine([{ reclassify: [{ fact_id: 1, to: 'general' }] }])
    const result = await eng.smartReclassify()

    expect(result).toBe(0)
  })
})
```
|
|
733
|
+
|
|
734
|
+
- [ ] **Step 2: 运行测试确认失败**
|
|
735
|
+
|
|
736
|
+
Run: `npx vitest run tests/dream-engine.test.ts 2>&1 | grep "smartReclassify"`
|
|
737
|
+
Expected: smartReclassify 测试失败
|
|
738
|
+
|
|
739
|
+
- [ ] **Step 3: 实现 smartReclassify**
|
|
740
|
+
|
|
741
|
+
替换 `smartReclassify` 方法:
|
|
742
|
+
|
|
743
|
+
```typescript
async smartReclassify(): Promise<number> {
  const rows = this.store.connection.prepare(
    "SELECT fact_id, content FROM facts WHERE category = 'general'"
  ).all() as Array<{ fact_id: number; content: string }>

  if (rows.length === 0) return 0

  const validCategories = ['identity', 'coding_style', 'tool_pref', 'workflow']
  let reclassified = 0

  for (let i = 0; i < rows.length; i += BATCH_SIZE) {
    const batch = rows.slice(i, i + BATCH_SIZE)
    const factList = batch.map(f => `[${f.fact_id}] ${f.content}`).join('\n')

    const messages: LLMMessage[] = [
      {
        role: 'system',
        content: `你是一个记忆分类助手。分析以下记忆条目,判断它们应该属于哪个分类。
可选分类:identity(身份/角色)、coding_style(编码规范)、tool_pref(工具偏好)、workflow(工作流)
如果记忆不属于以上任何分类,保持 general。
输出JSON:{"reclassify": [{"fact_id": 数字, "to": "分类名"}]}
不需要重新分类的条目不要输出。`,
      },
      { role: 'user', content: factList },
    ]

    try {
      const result = await this.llm.chatJSON<{ reclassify: Array<{ fact_id: number; to: string }> }>(messages)
      if (!result?.reclassify || !Array.isArray(result.reclassify)) continue

      for (const item of result.reclassify) {
        if (!item.fact_id || !item.to) continue
        if (!validCategories.includes(item.to)) continue

        this.store.connection.prepare(
          "UPDATE facts SET category = ?, updated_at = datetime('now', 'localtime') WHERE fact_id = ?"
        ).run(item.to, item.fact_id)
        reclassified++
      }
    } catch {
      continue
    }
  }

  return reclassified
}
```
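To make the batch format concrete, here is a sketch of what one round trip looks like. The fact ids, contents, and the model reply are invented for illustration; only the `[id] content` formatting and the `reclassify` shape come from the method above.

```typescript
// Hypothetical batch of two 'general' facts, formatted the way smartReclassify does.
const batch = [
  { fact_id: 12, content: 'prefers pnpm over npm' },
  { fact_id: 15, content: 'misc note' },
]
const factList = batch.map(f => `[${f.fact_id}] ${f.content}`).join('\n')
// factList is now:
// [12] prefers pnpm over npm
// [15] misc note

// A well-formed model reply lists only facts that should move;
// fact 15 stays 'general' and is therefore absent.
const reply = { reclassify: [{ fact_id: 12, to: 'tool_pref' }] }
```

Entries with an unknown `to` category are dropped by the `validCategories` guard, so a hallucinated category name cannot corrupt the store.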

- [ ] **Step 4: Run the tests**

Run: `npx vitest run tests/dream-engine.test.ts 2>&1 | tail -8`
Expected: all tests pass

- [ ] **Step 5: Commit**

```bash
git add src/dream-engine.ts tests/dream-engine.test.ts
git commit -m "feat(dream-engine): add smartReclassify with LLM-driven category assignment"
```

---

### Task 7: Integrate into runDream with a fallback strategy

**Files:**
- Modify: `src/store.ts`
- Modify: `src/dream.ts`
- Modify: `src/server.ts`
- Modify: `tests/store.test.ts`

- [ ] **Step 1: Update runDream in store.ts to integrate DreamEngine**

Add imports at the top of `src/store.ts`:

```typescript
import { loadConfig } from './config.js'
import { LLMClient } from './llm-client.js'
import { DreamEngine } from './dream-engine.js'
```

Replace the `runDream` method:

```typescript
async runDream(options?: { skipBackup?: boolean }): Promise<DreamReport> {
  if (!options?.skipBackup) {
    await this.backupDatabase()
  }

  // Try the LLM-driven dream first
  const config = loadConfig()
  const llmClient = new LLMClient(config)

  const available = await llmClient.isAvailable()
  if (available) {
    try {
      const engine = new DreamEngine(llmClient, this)
      const mergeResult = await engine.semanticMerge()
      const compressed = await engine.smartCompress()
      const reclassified = await engine.smartReclassify()

      const stats = this.db.prepare(`
        SELECT COUNT(*) as total,
               AVG(trust_score) as avg_trust,
               AVG(length(content)) as avg_length
        FROM facts
      `).get() as { total: number; avg_trust: number; avg_length: number }

      const categories: FactCategory[] = ['identity', 'coding_style', 'tool_pref', 'workflow', 'general']
      const coverage: Record<string, number> = {}
      for (const cat of categories) {
        const row = this.db.prepare('SELECT COUNT(*) as c FROM facts WHERE category = ?').get(cat) as { c: number }
        coverage[cat] = row.c
      }

      return {
        merged: mergeResult.merged,
        compressed,
        reclassified,
        deleted: mergeResult.merged,
        mergeDetails: mergeResult.details.map(d => ({ kept: d.kept, removed: d.removed, similarity: 0 })),
        fallback: false,
        health: {
          total: stats.total,
          avg_trust: Math.round((stats.avg_trust ?? 0) * 100) / 100,
          avg_length: Math.round(stats.avg_length ?? 0),
          coverage: coverage as Record<FactCategory, number>,
        },
      }
    } catch {
      // LLM run failed; fall through to the rule engine below
    }
  }

  // Fall back to the rule engine
  const compressed = this.compressLongFacts()
  const mergeResult = this.mergeOverlappingFacts()
  const reclassified = this.reclassifyFacts()

  const stats = this.db.prepare(`
    SELECT COUNT(*) as total,
           AVG(trust_score) as avg_trust,
           AVG(length(content)) as avg_length
    FROM facts
  `).get() as { total: number; avg_trust: number; avg_length: number }

  const categories: FactCategory[] = ['identity', 'coding_style', 'tool_pref', 'workflow', 'general']
  const coverage: Record<string, number> = {}
  for (const cat of categories) {
    const row = this.db.prepare('SELECT COUNT(*) as c FROM facts WHERE category = ?').get(cat) as { c: number }
    coverage[cat] = row.c
  }

  return {
    merged: mergeResult.merged,
    compressed,
    reclassified,
    deleted: mergeResult.merged,
    mergeDetails: mergeResult.details,
    fallback: true,
    fallbackReason: 'LLM unavailable',
    health: {
      total: stats.total,
      avg_trust: Math.round((stats.avg_trust ?? 0) * 100) / 100,
      avg_length: Math.round(stats.avg_length ?? 0),
      coverage: coverage as Record<FactCategory, number>,
    },
  }
}
```
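The control flow above reduces to three outcomes: Ollama unreachable, LLM call throws mid-dream, or LLM path succeeds. A minimal sketch, with `runDreamSketch` and its boolean arguments as hypothetical stand-ins for `LLMClient.isAvailable()` and a throwing `DreamEngine` call:

```typescript
// Sketch only; not the real Store API. Demonstrates the availability-gated
// try/catch fallback pattern used by runDream.
type SketchReport = { fallback: boolean; fallbackReason?: string }

async function runDreamSketch(llmAvailable: boolean, llmThrows = false): Promise<SketchReport> {
  if (llmAvailable) {
    try {
      if (llmThrows) throw new Error('LLM call failed')
      return { fallback: false } // LLM path succeeded
    } catch {
      // swallow and fall through to the rule engine below
    }
  }
  // Rule engine: reached when the LLM is unavailable OR the LLM path threw
  return { fallback: true, fallbackReason: 'LLM unavailable' }
}
```

Note that both failure modes land in the same branch, so the report's `fallbackReason` reads `'LLM unavailable'` even when the LLM was reachable but the call failed.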

- [ ] **Step 2: Update the dream tests in store.test.ts**

The existing `dream - runDream` test cases need adapting, since `loadConfig` and `LLMClient` would otherwise have to be mocked. The simplest approach is to rely on Ollama being unavailable in the test environment, so `runDream` (including the `skipBackup` test) degrades to the rule engine on its own.

The test logic itself does not change, but confirm that the new `fallback` field in the dream report does not break existing assertions. In the `dream - runDream` test in `tests/store.test.ts`, append after the existing expects:

```typescript
expect(report.fallback).toBe(true) // no Ollama in the test environment, so it should fall back
```

- [ ] **Step 3: Run the full test suite**

Run: `npx vitest run 2>&1 | tail -15`
Expected: all tests pass

- [ ] **Step 4: Build**

Run: `npm run build 2>&1 | tail -3`
Expected: no errors

- [ ] **Step 5: Commit**

```bash
git add src/store.ts src/dream.ts tests/store.test.ts
git commit -m "feat(store): integrate DreamEngine into runDream with fallback to rule engine"
```

---

### Task 8: Final verification

**Files:**
- No new files

- [ ] **Step 1: Run the full test suite**

Run: `npx vitest run 2>&1 | tail -10`
Expected: all tests pass

- [ ] **Step 2: Build**

Run: `npm run build 2>&1`
Expected: no errors

- [ ] **Step 3: CLI test (fallback when Ollama is absent)**

Run: `node dist/dream.js 2>&1`
Expected: output contains `"fallback": true` and the report structure is well-formed

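For reference, a fallback report printed by `dist/dream.js` should have roughly this shape. The numeric values below are purely illustrative; only the field names follow the fallback branch of `runDream`:

```json
{
  "merged": 0,
  "compressed": 0,
  "reclassified": 0,
  "deleted": 0,
  "mergeDetails": [],
  "fallback": true,
  "fallbackReason": "LLM unavailable",
  "health": {
    "total": 12,
    "avg_trust": 0.8,
    "avg_length": 54,
    "coverage": { "identity": 2, "coding_style": 3, "tool_pref": 2, "workflow": 1, "general": 4 }
  }
}
```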
- [ ] **Step 4: MCP test (invoked from Claude Code)**

Run: `fact_store(action="dream")`
Expected: returns a DreamReport that includes the `fallback` field

- [ ] **Step 5: Commit any remaining uncommitted changes (if any)**

```bash
git add -A
git commit -m "chore: llm-dream integration complete"
```
|