@smyslenny/agent-memory 2.1.0 → 2.2.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -1,5 +1,25 @@
  # Changelog
 
+ ## 2.1.2 (2026-02-22)
+
+ ### 🔗 OpenClaw Integration V1 (Cron Synchronization)
+
+ - **Capture → Consolidate → Surface closed loop** — agent-memory now integrates seamlessly with OpenClaw's built-in memory cron jobs (`memory-sync`, `memory-tidy`, `memory-surface`) with zero code changes ([DD-0004](docs/design/0004-agent-memory-integration.md))
+ - **memory-sync** (14:00 & 22:00): Each new journal bullet is auto-synced to agent-memory via `mcporter call agent-memory.remember` with keyword-based type classification and URI dedup
+ - **memory-tidy** (03:00): Triggers `agent-memory.reflect phase=all` (Ebbinghaus decay + tidy + govern) after Markdown consolidation, with consistency spot-check
+ - **memory-surface** (14:05 & 22:05): Generates structured `RECENT.md` (≤80 lines) from high-vitality agent-memory entries, with fallback to raw journal reading
+ - **Design principle**: Markdown remains source of truth; agent-memory is a derived index layer with best-effort sync
+ - **README updated** with full OpenClaw Integration guide
+
+ ## 2.1.1 (2026-02-21)
+
+ ### 📝 Documentation
+
+ - **Memory-janitor integration guide** — New `examples/openclaw-setup.md` section explaining what a memory janitor is and how to wire it to agent-memory (decay trigger + consistency check) ([PR #2](https://github.com/smysle/agent-memory/pull/2))
+ - **Phase 5 prompt template** — New `examples/memory-janitor-phase5.md` with full prompt template for appending decay + consistency phases to an existing janitor cron job
+   - Covers Gap 1 (Ebbinghaus decay never fires without external trigger) and Gap 2 (two-store divergence between agent-memory and canonical memory files)
+   - Includes configurable conflict resolution strategies (canonical wins / agent-memory wins / manual review)
+
  ## 2.0.0 (2026-02-20)
 
  ### 🎉 Complete Rewrite
package/README.md CHANGED
@@ -119,6 +119,46 @@ runDecay(db);
 
  **9 MCP Tools:** `remember` · `recall` · `recall_path` · `boot` · `forget` · `link` · `snapshot` · `reflect` · `status`
 
+ ## 🔗 OpenClaw Integration
+
+ AgentMemory works **out of the box** with [OpenClaw](https://github.com/smysle/openclaw)'s built-in memory cron jobs — no code changes required. The integration implements a **Capture → Consolidate → Surface** closed loop that keeps Markdown journals and the structured memory DB in sync automatically.
+
+ ### How It Works
+
+ ```
+ Capture (memory-sync)       Consolidate (memory-tidy)    Surface (memory-surface)
+ ─────────────────────       ─────────────────────────    ────────────────────────
+ 14:00 & 22:00               03:00                        14:05 & 22:05
+ Session → daily journal     Compress old dailies         Recall top memories
+ + remember each bullet      Distill → MEMORY.md          → generate RECENT.md
+   into agent-memory DB      + reflect phase=all          (≤80 lines, 3 sections)
+ ```
+
+ | Phase | Cron Job | What Happens | agent-memory Integration |
+ |-------|----------|--------------|--------------------------|
+ | **Capture** | `memory-sync` | Scans sessions, appends bullets to `memory/YYYY-MM-DD.md` | Each new bullet is also written via `mcporter call agent-memory.remember` with auto-classified type and URI-based dedup |
+ | **Consolidate** | `memory-tidy` | Compresses old dailies → weekly summaries, distills `MEMORY.md` | Triggers `agent-memory.reflect phase=all` (decay + tidy + govern) + consistency spot-check |
+ | **Surface** | `memory-surface` | Generates short-term context for new sessions | Reads high-vitality memories from agent-memory, outputs structured `RECENT.md` with emotion/knowledge/event sections |
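The Surface row above can be sketched in miniature. This is an illustrative sketch only, not the shipped cron logic: the `vitality` field name and the 0.5 threshold are assumptions made for the example, and the real output is produced by a prompt, not this function.

```javascript
// Illustrative sketch of the Surface phase: keep only high-vitality entries,
// group them into the three RECENT.md sections, and cap output at 80 lines.
// `vitality` and the 0.5 cutoff are assumptions for this example.
function renderRecent(entries, maxLines = 80) {
  const sections = { emotion: [], knowledge: [], event: [] };
  for (const e of entries) {
    if (e.vitality >= 0.5 && sections[e.type]) sections[e.type].push(`- ${e.text}`);
  }
  const lines = [];
  for (const [name, bullets] of Object.entries(sections)) {
    if (!bullets.length) continue;
    lines.push(`## ${name}`, ...bullets, "");
  }
  return lines.slice(0, maxLines).join("\n");
}
```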
+
+ ### Key Design Principles
+
+ - **Markdown is source of truth** — agent-memory is a derived index layer; all data flows Markdown → DB, never the reverse.
+ - **Best-effort sync** — If `mcporter` or agent-memory is unavailable, Markdown operations proceed normally. Failures only log warnings.
+ - **URI-based idempotency** — Each journal bullet maps to a unique URI (`event://journal/2026-02-21#2200-1`), so re-runs are safe.
+ - **Keyword-based classification** — Bullets are auto-classified as `knowledge`, `emotion`, or `event` using simple keyword rules (no extra model calls).
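To make the last two principles concrete, here is a minimal sketch. The keyword lists are invented for illustration — the real rules live in the OpenClaw cron prompts, not in this package — while the URI shape follows the `event://journal/2026-02-21#2200-1` example above.

```javascript
// Hypothetical keyword rules for classifying a journal bullet as
// emotion / knowledge / event. Word lists are illustrative only.
const RULES = [
  { type: "emotion", words: ["felt", "happy", "frustrated", "excited"] },
  { type: "knowledge", words: ["learned", "discovered", "insight"] }
];

function classifyBullet(text) {
  const t = text.toLowerCase();
  for (const rule of RULES) {
    if (rule.words.some((w) => t.includes(w))) return rule.type;
  }
  return "event"; // default bucket for plain happenings
}

// URI-based idempotency: a stable key derived from the journal date,
// the sync slot, and the bullet index, so re-runs never duplicate entries.
function bulletUri(date, slot, index) {
  return `event://journal/${date}#${slot}-${index}`;
}
```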
+
+ ### Setup
+
+ If you're running OpenClaw with the standard memory cron suite (`memory-sync`, `memory-tidy`, `memory-surface`), the integration is **already active** — the cron prompts include agent-memory sync steps. Just make sure:
+
+ 1. **agent-memory is installed and initialized** — `agent-memory init`
+ 2. **mcporter bridge is configured** — agent-memory MCP server registered in your mcporter config
+ 3. **Cron jobs are enabled** — check with `openclaw cron list`
+
+ For detailed setup and prompt templates, see:
+ - [`examples/openclaw-setup.md`](examples/openclaw-setup.md) — Full setup walkthrough
+ - [`docs/design/0004-agent-memory-integration.md`](docs/design/0004-agent-memory-integration.md) — Design document (DD-0004)
+
  ## 🏗️ Architecture
 
  ```
@@ -853,7 +853,8 @@ async function searchHybrid(db, query, opts) {
    if (!provider || !model) {
      return bm25.slice(0, limit);
    }
-   const qVec = Float32Array.from(await provider.embed(query));
+   const embedFn = provider.embedQuery ?? provider.embed;
+   const qVec = Float32Array.from(await embedFn.call(provider, query));
    const embeddings = listEmbeddings(db, agentId, model);
    const scored = [];
    for (const e of embeddings) {
@@ -886,6 +887,29 @@ async function searchHybrid(db, query, opts) {
  }
 
  // src/search/providers.ts
+ var QWEN_DEFAULT_INSTRUCTION = "Given a query, retrieve the most semantically relevant document";
+ function getDefaultInstruction(model) {
+   const m = model.toLowerCase();
+   if (m.includes("qwen")) return QWEN_DEFAULT_INSTRUCTION;
+   if (m.includes("gemini")) return null;
+   return null;
+ }
+ function resolveInstruction(model) {
+   const override = process.env.AGENT_MEMORY_EMBEDDINGS_INSTRUCTION;
+   if (override !== void 0) {
+     const normalized = override.trim();
+     if (!normalized) return null;
+     const lowered = normalized.toLowerCase();
+     if (lowered === "none" || lowered === "off" || lowered === "false" || lowered === "null") return null;
+     return normalized;
+   }
+   return getDefaultInstruction(model);
+ }
+ function buildQueryInput(query, instructionPrefix) {
+   if (!instructionPrefix) return query;
+   return `Instruct: ${instructionPrefix}
+ Query: ${query}`;
+ }
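For reference, a standalone copy of the added `buildQueryInput` behaves like this (the query string is an invented example; only the Qwen default instruction comes from the diff above):

```javascript
// Standalone copy of buildQueryInput, showing the instruction-aware query
// format used for Qwen-style embedding models. Example query is illustrative.
function buildQueryInput(query, instructionPrefix) {
  if (!instructionPrefix) return query;
  return `Instruct: ${instructionPrefix}
Query: ${query}`;
}

const input = buildQueryInput(
  "how do I trigger decay?",
  "Given a query, retrieve the most semantically relevant document"
);
// input === "Instruct: Given a query, retrieve the most semantically relevant document\nQuery: how do I trigger decay?"
```

Documents are embedded as-is (`embed`); only queries get the `Instruct:`/`Query:` wrapper, via the new `embedQuery` path.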
  function getEmbeddingProviderFromEnv() {
    const provider = (process.env.AGENT_MEMORY_EMBEDDINGS_PROVIDER ?? "none").toLowerCase();
    if (provider === "none" || provider === "off" || provider === "false") return null;
@@ -894,14 +918,24 @@ function getEmbeddingProviderFromEnv() {
    const model = process.env.AGENT_MEMORY_EMBEDDINGS_MODEL ?? "text-embedding-3-small";
    const baseUrl = process.env.OPENAI_BASE_URL ?? "https://api.openai.com/v1";
    if (!apiKey) return null;
-   return createOpenAIProvider({ apiKey, model, baseUrl });
+   const instruction = resolveInstruction(model);
+   return createOpenAIProvider({ apiKey, model, baseUrl, instruction });
+ }
+ if (provider === "gemini" || provider === "google") {
+   const apiKey = process.env.GEMINI_API_KEY ?? process.env.OPENAI_API_KEY;
+   const model = process.env.AGENT_MEMORY_EMBEDDINGS_MODEL ?? "gemini-embedding-001";
+   const baseUrl = process.env.GEMINI_BASE_URL ?? process.env.OPENAI_BASE_URL ?? "https://generativelanguage.googleapis.com/v1beta";
+   if (!apiKey) return null;
+   const instruction = resolveInstruction(model);
+   return createOpenAIProvider({ id: "gemini", apiKey, model, baseUrl, instruction });
  }
  if (provider === "qwen" || provider === "dashscope" || provider === "tongyi") {
    const apiKey = process.env.DASHSCOPE_API_KEY;
    const model = process.env.AGENT_MEMORY_EMBEDDINGS_MODEL ?? "text-embedding-v3";
    const baseUrl = process.env.DASHSCOPE_BASE_URL ?? "https://dashscope.aliyuncs.com";
    if (!apiKey) return null;
-   return createDashScopeProvider({ apiKey, model, baseUrl });
+   const instruction = resolveInstruction(model);
+   return createDashScopeProvider({ apiKey, model, baseUrl, instruction });
  }
  return null;
  }
@@ -918,52 +952,67 @@ function normalizeEmbedding(e) {
  }
  function createOpenAIProvider(opts) {
    const baseUrl = opts.baseUrl ?? "https://api.openai.com/v1";
+   const instructionPrefix = opts.instruction ?? null;
+   async function requestEmbedding(input) {
+     const resp = await fetch(`${baseUrl.replace(/\/$/, "")}/embeddings`, {
+       method: "POST",
+       headers: {
+         "content-type": "application/json",
+         authorization: authHeader(opts.apiKey)
+       },
+       body: JSON.stringify({ model: opts.model, input })
+     });
+     if (!resp.ok) {
+       const body = await resp.text().catch(() => "");
+       throw new Error(`OpenAI embeddings failed: ${resp.status} ${resp.statusText} ${body}`.trim());
+     }
+     const data = await resp.json();
+     return normalizeEmbedding(data.data?.[0]?.embedding);
+   }
    return {
-     id: "openai",
+     id: opts.id ?? "openai",
      model: opts.model,
+     instructionPrefix,
      async embed(text) {
-       const resp = await fetch(`${baseUrl.replace(/\/$/, "")}/embeddings`, {
-         method: "POST",
-         headers: {
-           "content-type": "application/json",
-           authorization: authHeader(opts.apiKey)
-         },
-         body: JSON.stringify({ model: opts.model, input: text })
-       });
-       if (!resp.ok) {
-         const body = await resp.text().catch(() => "");
-         throw new Error(`OpenAI embeddings failed: ${resp.status} ${resp.statusText} ${body}`.trim());
-       }
-       const data = await resp.json();
-       const emb = data.data?.[0]?.embedding;
-       return normalizeEmbedding(emb);
+       return requestEmbedding(text);
+     },
+     async embedQuery(query) {
+       return requestEmbedding(buildQueryInput(query, instructionPrefix));
      }
    };
  }
  function createDashScopeProvider(opts) {
    const baseUrl = opts.baseUrl ?? "https://dashscope.aliyuncs.com";
+   const instructionPrefix = opts.instruction ?? null;
+   async function requestEmbedding(text) {
+     const resp = await fetch(`${baseUrl.replace(/\/$/, "")}/api/v1/services/embeddings/text-embedding/text-embedding`, {
+       method: "POST",
+       headers: {
+         "content-type": "application/json",
+         authorization: authHeader(opts.apiKey)
+       },
+       body: JSON.stringify({
+         model: opts.model,
+         input: { texts: [text] }
+       })
+     });
+     if (!resp.ok) {
+       const body = await resp.text().catch(() => "");
+       throw new Error(`DashScope embeddings failed: ${resp.status} ${resp.statusText} ${body}`.trim());
+     }
+     const data = await resp.json();
+     const emb = data.output?.embeddings?.[0]?.embedding ?? data.output?.embeddings?.[0]?.vector ?? data.output?.embedding ?? data.data?.[0]?.embedding;
+     return normalizeEmbedding(emb);
+   }
    return {
      id: "dashscope",
      model: opts.model,
+     instructionPrefix,
      async embed(text) {
-       const resp = await fetch(`${baseUrl.replace(/\/$/, "")}/api/v1/services/embeddings/text-embedding/text-embedding`, {
-         method: "POST",
-         headers: {
-           "content-type": "application/json",
-           authorization: authHeader(opts.apiKey)
-         },
-         body: JSON.stringify({
-           model: opts.model,
-           input: { texts: [text] }
-         })
-       });
-       if (!resp.ok) {
-         const body = await resp.text().catch(() => "");
-         throw new Error(`DashScope embeddings failed: ${resp.status} ${resp.statusText} ${body}`.trim());
-       }
-       const data = await resp.json();
-       const emb = data?.output?.embeddings?.[0]?.embedding ?? data?.output?.embeddings?.[0]?.vector ?? data?.output?.embedding ?? data?.data?.[0]?.embedding;
-       return normalizeEmbedding(emb);
+       return requestEmbedding(text);
+     },
+     async embedQuery(query) {
+       return requestEmbedding(buildQueryInput(query, instructionPrefix));
      }
    };
  }