@joshuaswarren/openclaw-engram 9.0.74 → 9.0.75

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (2)
  1. package/README.md +91 -44
  2. package/package.json +1 -1
package/README.md CHANGED
@@ -1,37 +1,63 @@
  # Engram
 
- **Persistent memory for AI coding agents.** Your agents forget everything between sessions — Engram fixes that.
+ **Persistent, private memory for AI agents.** Your agents forget everything between sessions — Engram fixes that.
 
- Engram gives AI agents long-term memory that survives across conversations. Decisions, preferences, debugging history, architecture context, project conventions — everything your agent learns persists and resurfaces exactly when it's needed.
+ Engram gives AI agents long-term memory that survives across conversations. Decisions, preferences, project context, personal details, past mistakes — everything your agent learns persists and resurfaces exactly when it's needed. All data stays on your machine as plain markdown files. No cloud services, no subscriptions, no sharing your data with third parties.
 
  [![npm version](https://img.shields.io/npm/v/@joshuaswarren/openclaw-engram)](https://www.npmjs.com/package/@joshuaswarren/openclaw-engram)
  [![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](LICENSE)
+ [![Sponsor](https://img.shields.io/badge/Sponsor-%E2%9D%A4-pink)](https://github.com/sponsors/joshuaswarren)
 
  ## The Problem
 
- Every AI coding session starts from zero. Your agent doesn't know your project conventions, your architecture decisions, the bugs you already debugged, or even your name. You re-explain the same context over and over and the agent still makes the same mistakes.
+ Every AI agent session starts from zero. Your agent doesn't know your name, your projects, the decisions you've already made, or the bugs you already debugged. Whether it's a personal assistant, a coding agent, a research agent, or a multi-agent team — they all forget everything between conversations. You re-explain the same context over and over, and your agents still make the same mistakes.
+
+ OpenClaw's built-in memory works for simple cases, but it doesn't scale. It lacks semantic search, lifecycle management, entity tracking, and governance. Third-party memory services exist, but they cost money and require sending your private data to someone else's servers.
 
  ## The Solution
 
- Engram watches your agent conversations, extracts durable knowledge, and injects the right memories back at the start of every session. It works with **[OpenClaw](https://github.com/openclaw/openclaw)** as a native plugin and with **[Codex CLI](https://github.com/openai/codex)** via MCP — with more integrations coming.
+ Engram is an open-source, local-first memory system that replaces OpenClaw's default memory with something much more capable — while keeping everything on your machine. It watches your agent conversations, extracts durable knowledge, and injects the right memories back at the start of every session. Use OpenAI or a **local LLM** (Ollama, LM Studio, etc.) for extraction — your choice.
+
+ It works as a native **[OpenClaw](https://github.com/openclaw/openclaw)** plugin, with **[Codex CLI](https://github.com/openai/codex)** via MCP, and with any other MCP-compatible client — with more integrations coming.
 
  | Without Engram | With Engram |
  |---|---|
- | Re-explain project conventions every session | Agent recalls coding standards and patterns automatically |
- | Repeat architecture context for every task | Entity knowledge surfaces schemas, API contracts, and module boundaries |
- | Lose debugging context between sessions | Past root causes and dead ends are recalled — no repeated work |
- | Manually restate tool/linter/workflow preferences | Preferences persist across sessions and projects |
+ | Re-explain who you are and what you're working on | Agent recalls your identity, projects, and preferences automatically |
+ | Repeat context for every task | Entity knowledge surfaces people, projects, tools, and relationships on demand |
+ | Lose debugging and research context between sessions | Past root causes, dead ends, and findings are recalled — no repeated work |
+ | Manually restate preferences every session | Preferences persist across sessions, agents, and projects |
  | Context-switching tax when resuming work | Session-start recall brings you back to speed instantly |
+ | Default OpenClaw memory doesn't scale | Hybrid search, lifecycle management, namespaces, and governance |
+ | Third-party memory services cost money and share your data | Everything stays local — your filesystem, your rules |
 
- ## Quick Start
+ ## Installation
 
- ### With OpenClaw (native plugin)
+ ### Option 1: Install from the CLI
 
  ```bash
  openclaw plugins install @joshuaswarren/openclaw-engram --pin
  ```
 
- Add to `openclaw.json`:
+ ### Option 2: Ask your OpenClaw agent to install it
+
+ Tell any OpenClaw agent:
+
+ > Install the openclaw-engram plugin and configure it as my memory system.
+
+ Your agent will run the install command, update `openclaw.json`, and restart the gateway for you.
+
+ ### Option 3: Developer install from source
+
+ ```bash
+ git clone https://github.com/joshuaswarren/openclaw-engram.git \
+   ~/.openclaw/extensions/openclaw-engram
+ cd ~/.openclaw/extensions/openclaw-engram
+ npm ci && npm run build
+ ```
+
+ ### Configure
+
+ After installation, add Engram to your `openclaw.json`:
 
  ```jsonc
  {
@@ -42,7 +68,13 @@ Add to `openclaw.json`:
  "openclaw-engram": {
  "enabled": true,
  "config": {
+ // Use OpenAI for extraction:
  "openaiApiKey": "${OPENAI_API_KEY}"
+
+ // OR use a local LLM (no API key needed):
+ // "localLlmEnabled": true,
+ // "localLlmUrl": "http://localhost:1234/v1",
+ // "localLlmModel": "qwen2.5-32b-instruct"
  }
  }
  }
@@ -50,9 +82,26 @@ Add to `openclaw.json`:
  }
  ```
 
- Restart the gateway and start a conversation — Engram begins learning immediately.
+ Restart the gateway:
 
- ### With Codex CLI (via MCP)
+ ```bash
+ launchctl kickstart -k gui/$(id -u)/ai.openclaw.gateway  # macOS
+ # or: systemctl restart openclaw-gateway                 # Linux
+ ```
+
+ Start a conversation — Engram begins learning immediately.
+
+ > **Note:** This shows only the minimal config. Engram has 60+ configuration options for search backends, capture modes, memory OS features, and more. See the [full config reference](docs/config-reference.md) for every setting.
+
+ ### Verify installation
+
+ ```bash
+ openclaw engram setup --json          # Validates config, scaffolds directories
+ openclaw engram doctor --json         # Health diagnostics with remediation hints
+ openclaw engram config-review --json  # Opinionated config tuning recommendations
+ ```
+
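The `--json` flags make these checks scriptable. As a sketch, a provisioning script could gate on the doctor command — this assumes `doctor` exits nonzero when something needs attention (typical CLI convention, but worth confirming in the Engram docs):

```shell
# Gate a setup script on Engram's health check.
# Assumption: `openclaw engram doctor` exits nonzero on failure.
status_msg="healthy"
openclaw engram doctor --json > /tmp/engram-doctor.json 2>&1 \
  || status_msg="unhealthy (details in /tmp/engram-doctor.json)"
echo "engram: $status_msg"
```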
+ ## Using Engram with Codex CLI
 
  Start the Engram HTTP server:
 
@@ -75,9 +124,9 @@ url = "http://127.0.0.1:4318/mcp"
  bearer_token_env_var = "OPENCLAW_ENGRAM_ACCESS_TOKEN"
  ```
 
- That's it. Codex now has access to Engram's recall, store, and entity tools. See the [full Codex integration guide](docs/guides/codex-cli.md) for session-start hooks and cross-machine setup.
+ That's it. Codex now has access to Engram's recall, store, and entity tools. See the [full Codex integration guide](docs/guides/codex-cli.md) for session-start hooks, cross-machine setup, and automatic recall at session start.
 
- ### With Any MCP Client (Claude Code, etc.)
+ ## Using Engram with Any MCP Client
 
  Run the stdio MCP server:
 
@@ -85,7 +134,7 @@ Run the stdio MCP server:
  openclaw engram access mcp-serve
  ```
 
- Point your MCP client's command at `openclaw engram access mcp-serve`. The server exposes the same tools as the HTTP endpoint.
+ Point your MCP client's command at `openclaw engram access mcp-serve`. Works with Claude Code and any other MCP-compatible client. The server exposes the same 8 tools as the HTTP endpoint.
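Many MCP clients register stdio servers through a JSON config with an `mcpServers` map. An entry pointing at the command above might look like the sketch below — treat it as illustrative, since the exact file location and schema vary by client:

```jsonc
// Illustrative only — check your MCP client's documentation
// for its actual config file and schema.
{
  "mcpServers": {
    "engram": {
      "command": "openclaw",
      "args": ["engram", "access", "mcp-serve"]
    }
  }
}
```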
 
  ## How It Works
 
@@ -114,14 +163,22 @@ Memory categories include: `fact`, `decision`, `preference`, `correction`, `rela
 
  ## Why Engram?
 
- ### Local-first, zero lock-in
+ ### Your data stays yours
+
+ All memory lives on your filesystem as plain markdown files. No cloud dependency, no subscriptions, no proprietary formats, no sending your private conversations to third-party servers. Back it up with git, rsync, or Time Machine. Move it between machines with a folder copy. You own your data completely.
 
- All memory lives on your filesystem as markdown. No cloud dependency, no proprietary formats. Back it up with git, rsync, or Time Machine. Move it between machines with a folder copy.
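Because memory is plain markdown, backup is ordinary file tooling. A sketch using git, assuming the default `memoryDir` of `~/.openclaw/workspace/memory/local` (adjust the path if you changed it):

```shell
# Snapshot the Engram memory directory with git.
MEM_DIR="$HOME/.openclaw/workspace/memory/local"
mkdir -p "$MEM_DIR"
cd "$MEM_DIR"
git init -q                      # safe to re-run on an existing repo
git add -A
# The -c flags supply an identity in case git isn't configured globally.
git -c user.name=engram-backup -c user.email=backup@localhost \
  commit -q --allow-empty -m "engram memory snapshot"
```

The same directory can be mirrored to another machine with `rsync -a "$MEM_DIR"/ other-host:engram-memory/`, matching the folder-copy portability described above.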
+ ### A real upgrade from default OpenClaw memory
+
+ OpenClaw's built-in memory is basic — it works for getting started, but lacks semantic search, entity tracking, lifecycle management, governance, and multi-agent isolation. Engram is a drop-in replacement that brings all of those capabilities while keeping the same local-first philosophy.
 
  ### Smart recall, not keyword search
 
  Engram uses hybrid search (BM25 + vector + reranking via [QMD](https://github.com/tobilu/qmd)) to find semantically relevant memories. It doesn't just match keywords — it understands what you're working on and surfaces the right context.
 
+ ### OpenAI or local LLM — your choice
+
+ Use OpenAI for extraction and reranking, or run entirely offline with a local LLM via Ollama, LM Studio, or any OpenAI-compatible endpoint. The `local-llm-heavy` preset is optimized for fully local operation. See the [Local LLM Guide](docs/guides/local-llm.md).
+
  ### Progressive complexity
 
  Start with zero config. Enable features as your needs grow:
@@ -274,22 +331,22 @@ See the [full CLI reference](docs/api.md#cli-commands) for all commands.
 
  ## Configuration
 
- All settings live in `openclaw.json` under `plugins.entries.openclaw-engram.config`.
-
- Key settings:
+ All settings live in `openclaw.json` under `plugins.entries.openclaw-engram.config`. The table below shows the most commonly changed settings — Engram has **60+ configuration options** covering search backends, capture modes, memory OS features, namespaces, governance, benchmarking, and more.
 
  | Setting | Default | Description |
  |---------|---------|-------------|
- | `openaiApiKey` | `(env)` | OpenAI API key (optional with local LLM) |
- | `model` | `gpt-5.2` | LLM for extraction |
- | `searchBackend` | `"qmd"` | Search engine to use |
- | `captureMode` | `implicit` | Memory write policy |
+ | `openaiApiKey` | `(env)` | OpenAI API key (optional when using a local LLM) |
+ | `localLlmEnabled` | `false` | Use a local LLM instead of OpenAI for extraction |
+ | `localLlmUrl` | unset | Local LLM endpoint (e.g., `http://localhost:1234/v1`) |
+ | `localLlmModel` | unset | Local model name (e.g., `qwen2.5-32b-instruct`) |
+ | `model` | `gpt-5.2` | OpenAI model for extraction (when not using local LLM) |
+ | `searchBackend` | `"qmd"` | Search engine: `qmd`, `orama`, `lancedb`, `meilisearch`, `remote`, `noop` |
+ | `captureMode` | `implicit` | Memory write policy: `implicit`, `explicit`, `hybrid` |
  | `recallBudgetChars` | `maxMemoryTokens * 4` | Recall budget (default ~8K chars; set 64K+ for large-context models) |
  | `memoryDir` | `~/.openclaw/workspace/memory/local` | Memory storage root |
  | `memoryOsPreset` | unset | Quick config: `conservative`, `balanced`, `research-max`, `local-llm-heavy` |
- | `localLlmEnabled` | `false` | Use local LLM for extraction |
 
- Full reference: [docs/config-reference.md](docs/config-reference.md)
+ **[See the full config reference for all 60+ settings](docs/config-reference.md)** including search backend configuration, namespace policies, Memory OS features, governance, evaluation harness, trust zones, causal trajectories, and more.
 
  ## Documentation
 
@@ -313,22 +370,6 @@ Full reference: [docs/config-reference.md](docs/config-reference.md)
  - [Enable All Features](docs/enable-all-v8.md) — Full-feature config profile
  - [Migration Guide](docs/guides/migrations.md) — Upgrading from older versions
 
- ## Developer Install
-
- ```bash
- git clone https://github.com/joshuaswarren/openclaw-engram.git \
-   ~/.openclaw/extensions/openclaw-engram
- cd ~/.openclaw/extensions/openclaw-engram
- npm ci && npm run build
- ```
-
- Run tests:
-
- ```bash
- npm test              # Full suite (672 tests)
- npm run check-types   # TypeScript type checking
- ```
-
  ## Contributing
 
  Contributions are welcome! Please:
@@ -336,9 +377,15 @@ Contributions are welcome! Please:
  1. Fork the repository
  2. Create a feature branch (`git checkout -b feat/my-feature`)
  3. Write tests for new functionality
- 4. Ensure `npm test` and `npm run check-types` pass
+ 4. Ensure `npm test` (672 tests) and `npm run check-types` pass
  5. Submit a pull request
 
+ ## Sponsorship
+
+ If Engram is useful to you, consider [sponsoring the project](https://github.com/sponsors/joshuaswarren). Sponsorship helps fund continued development, new integrations, and keeping Engram free and open source.
+
+ [![Sponsor](https://img.shields.io/badge/Sponsor-%E2%9D%A4-pink?style=for-the-badge)](https://github.com/sponsors/joshuaswarren)
+
  ## License
 
  [MIT](LICENSE)
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@joshuaswarren/openclaw-engram",
- "version": "9.0.74",
+ "version": "9.0.75",
  "type": "module",
  "description": "Local-first memory plugin for OpenClaw. LLM-powered extraction, markdown storage, hybrid search via QMD.",
  "keywords": [