@joshuaswarren/openclaw-engram 9.0.0 → 9.0.2
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +146 -65
- package/dist/index.js +2 -1
- package/dist/index.js.map +1 -1
- package/package.json +1 -1
package/README.md
CHANGED
@@ -1,16 +1,27 @@
 # openclaw-engram
 
-
+**Long-term memory for AI agents.** Engram gives your [OpenClaw](https://github.com/openclaw/openclaw) agents persistent, searchable memory that survives across conversations. Every interaction builds a richer understanding of your world — decisions, preferences, facts, relationships, and more — so your agents remember what matters.
 
-
+[](https://www.npmjs.com/package/@joshuaswarren/openclaw-engram)
+[](LICENSE)
 
-##
+## Why Engram?
+
+AI agents forget everything between conversations. Engram fixes that.
+
+- **Automatic extraction** — Engram watches conversations and extracts facts, decisions, preferences, corrections, and more. No manual tagging required.
+- **Smart recall** — Before each conversation, Engram injects the most relevant memories into the agent's context. Your agents remember what they need, when they need it.
+- **Local-first** — All memory data stays on your filesystem as plain markdown files. No cloud dependency, no vendor lock-in, fully portable.
+- **Pluggable search** — Choose from six search backends: QMD (hybrid BM25+vector+reranking), LanceDB, Meilisearch, Orama, remote HTTP, or bring your own.
+- **Zero-config start** — Install, add an API key, restart. Engram works out of the box with sensible defaults and progressively unlocks advanced features as you enable them.
+
+## Quick Start
 
 ```bash
 openclaw plugins install @joshuaswarren/openclaw-engram --pin
 ```
 
-`openclaw.json
+Add to your `openclaw.json`:
 
 ```jsonc
 {
@@ -29,89 +40,159 @@ openclaw plugins install @joshuaswarren/openclaw-engram --pin
 }
 ```
 
-Restart gateway:
+Restart the gateway:
 
 ```bash
 launchctl kickstart -k gui/$(id -u)/ai.openclaw.gateway
 ```
 
-
+That's it. Start a conversation — Engram begins learning immediately.
+
+## Verify Installation
 
 ```bash
-openclaw engram compat --strict
-openclaw engram stats
-
+openclaw engram compat --strict   # Should exit 0
+openclaw engram stats             # Shows memory counts and search status
+```
+
+## How It Works
+
+Engram operates in three phases, running automatically in the background:
+
+```
+Recall    Before each conversation, inject relevant memories
+Buffer    After each turn, accumulate content until a trigger fires
+Extract   Periodically, use an LLM to extract structured memories
+```
+
+Memories are stored as markdown files with YAML frontmatter:
+
+```yaml
+---
+id: decision-1738789200000-a1b2
+category: decision
+confidence: 0.92
+tags: ["architecture", "search"]
+---
+Decided to use the port/adapter pattern for search backends
+so alternative engines can replace QMD without changing core logic.
+```
+
+Categories include: `fact`, `decision`, `preference`, `correction`, `relationship`, `principle`, `commitment`, `moment`, `skill`, and more.
+
+## Search Backends
+
+Engram v9 introduces a pluggable search architecture. Set `searchBackend` in your config to switch engines:
+
+| Backend | Type | Best For | Config |
+|---------|------|----------|--------|
+| **QMD** (default) | Hybrid BM25+vector+reranking | Best recall quality, production use | `"qmd"` |
+| **Orama** | Embedded, pure JS | Zero native deps, quick setup | `"orama"` |
+| **LanceDB** | Embedded, native Arrow | Large collections, fast vector search | `"lancedb"` |
+| **Meilisearch** | Server-based | Shared search across services | `"meilisearch"` |
+| **Remote** | HTTP REST | Custom search service integration | `"remote"` |
+| **Noop** | No-op | Disable search (extraction only) | `"noop"` |
+
+Example — switch to Orama (zero setup, no external dependencies):
+
+```jsonc
+{
+  "searchBackend": "orama"
+}
 ```
 
-
-
-
-- `conversation-index-health` reports `status: "ok"` when conversation index is enabled
+See the [Search Backends Guide](docs/search-backends.md) for detailed configuration and tradeoffs.
+
+Want to build your own? See [Writing a Search Backend](docs/writing-a-search-backend.md).
 
-##
+## Feature Highlights
 
-
+Engram's capabilities are organized into feature families that you can enable progressively:
 
-
-
-
+| Feature | What It Does |
+|---------|-------------|
+| **Recall Planner** | Lightweight gating that decides whether to retrieve memories or skip recall |
+| **Memory Boxes** | Groups related memories into topic-windowed episodes with trace linking |
+| **Episode/Note Model** | Classifies memories as time-specific events or stable beliefs |
+| **Graph Recall** | Entity-relationship graph for causal and timeline queries |
+| **Lifecycle Policy** | Automatic memory aging: active, validated, stale, archived |
+| **Identity Continuity** | Maintains consistent agent personality across sessions |
+| **Shared Context** | Cross-agent memory sharing for multi-agent setups |
+| **Compounding** | Weekly synthesis that surfaces patterns and recurring mistakes |
+| **Hot/Cold Tiering** | Automatic migration of aging memories to cold storage |
+| **Behavior Loop Tuning** | Runtime self-tuning of extraction and recall parameters |
 
-
-
+Start with defaults, then enable features as needed. See [Enable All Features](docs/enable-all-v8.md) for a full-feature config profile.
+
+## Agent & Operator Commands
+
+```bash
+openclaw engram stats                       # Memory counts, search status, health
+openclaw engram search "your query"         # Search memories from CLI
+openclaw engram compat --strict             # Compatibility check
+openclaw engram conversation-index-health   # Conversation index status
+openclaw engram graph-health                # Entity graph status
+openclaw engram tier-status                 # Hot/cold tier metrics
+openclaw engram policy-status               # Lifecycle policy snapshot
+```
 
-##
+## Configuration
 
-
-2. Extraction: create typed memories (`fact`, `decision`, `preference`, `correction`, etc).
-3. Storage: markdown files + frontmatter, local filesystem only.
-4. Recall: assemble multi-section context by ordered recall pipeline.
-5. Maintenance: dedupe, lifecycle transitions, compounding, migration/repair tooling.
+All settings live in `openclaw.json` under `plugins.entries.openclaw-engram.config`. Only `openaiApiKey` is required — everything else has sensible defaults.
 
-
-- [Architecture Overview](docs/architecture/overview.md)
-- [Retrieval Pipeline](docs/architecture/retrieval-pipeline.md)
-- [Memory Lifecycle](docs/architecture/memory-lifecycle.md)
+Key settings:
 
-
+| Setting | Default | Description |
+|---------|---------|-------------|
+| `openaiApiKey` | `(env fallback)` | OpenAI API key or `${ENV_VAR}` reference |
+| `model` | `gpt-5.2` | LLM model for extraction |
+| `searchBackend` | `"qmd"` | Search engine: `qmd`, `orama`, `lancedb`, `meilisearch`, `remote`, `noop` |
+| `qmdEnabled` | `true` | Enable QMD hybrid search |
+| `memoryDir` | `~/.openclaw/workspace/memory/local` | Memory storage root |
 
-
-|---|---|
-| Recall planning + assembly | `recallPlannerEnabled`, `recallPipeline`, `recallBudgetChars` |
-| Episodic memory model | `memoryBoxesEnabled`, `traceWeaverEnabled`, `episodeNoteModeEnabled` |
-| Query-aware retrieval | `queryAwareIndexingEnabled`, `graphRecallEnabled`, `graphAssistShadowEvalEnabled` |
-| Lifecycle + action policy | `lifecyclePolicyEnabled`, `contextCompressionActionsEnabled`, `compressionGuidelineLearningEnabled` |
-| Identity continuity | `identityContinuityEnabled`, `continuityAuditEnabled`, `continuityIncidentLoggingEnabled` |
-| Session integrity + replay | `sessionObserverEnabled`, replay/session CLI commands |
-| Routing + work layer | `routingRulesEnabled`, `task`/`project` CLI |
-| Hot/cold tiering | `qmdTierMigrationEnabled`, `qmdTierAutoBackfillEnabled` |
-| Shared intelligence + compounding | `sharedContextEnabled`, `sharedCrossSignalSemanticEnabled`, `compoundingEnabled` |
-| Behavior loop runtime tuning | `behaviorLoopAutoTuneEnabled`, policy CLI commands |
+Full reference: [Config Reference](docs/config-reference.md)
 
-
-- [Config Reference](docs/config-reference.md)
+## Documentation
 
-
+- [Getting Started](docs/getting-started.md) — Installation, setup, first-run verification
+- [Search Backends](docs/search-backends.md) — Choosing and configuring search engines
+- [Writing a Search Backend](docs/writing-a-search-backend.md) — Build your own adapter
+- [Config Reference](docs/config-reference.md) — Every setting with defaults
+- [Architecture Overview](docs/architecture/overview.md) — System design and storage layout
+- [Retrieval Pipeline](docs/architecture/retrieval-pipeline.md) — How recall works
+- [Memory Lifecycle](docs/architecture/memory-lifecycle.md) — Write, consolidation, expiry
+- [Enable All Features](docs/enable-all-v8.md) — Full-feature config profile
+- [Operations](docs/operations.md) — Backup, export, maintenance
+- [Namespaces](docs/namespaces.md) — Multi-agent memory isolation
+- [Shared Context](docs/shared-context.md) — Cross-agent intelligence
+- [Identity Continuity](docs/identity-continuity.md) — Consistent agent personality
 
-
+## Developer Install
 
 ```bash
-openclaw
-openclaw
-openclaw
-
-openclaw engram graph-health
-openclaw engram tier-status
-openclaw engram policy-status
+git clone https://github.com/joshuaswarren/openclaw-engram.git \
+  ~/.openclaw/extensions/openclaw-engram
+cd ~/.openclaw/extensions/openclaw-engram
+npm ci && npm run build
 ```
 
-
-
-
-
--
-
-
-
-
-
-
+Run tests:
+
+```bash
+npm test              # Full suite (672 tests)
+npm run check-types   # TypeScript type checking
+```
+
+## Contributing
+
+Contributions are welcome! Please:
+
+1. Fork the repository
+2. Create a feature branch (`git checkout -b feat/my-feature`)
+3. Write tests for new functionality
+4. Ensure `npm test` and `npm run check-types` pass
+5. Submit a pull request
+
+## License
+
+[MIT](LICENSE)
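The memory files described in the README diff above are plain markdown with YAML frontmatter, so they can be inspected with a short script. A minimal sketch, assuming only the flat key/value file shape shown in the README; the parser below is illustrative and is not Engram's actual loader:

```javascript
// Illustrative only: split a memory file into frontmatter metadata and body.
// Engram's real loader may differ; this just demonstrates the on-disk shape.
function parseMemoryFile(text) {
  const match = text.match(/^---\n([\s\S]*?)\n---\n([\s\S]*)$/);
  if (!match) return { meta: {}, body: text.trim() };
  const meta = {};
  for (const line of match[1].split("\n")) {
    const idx = line.indexOf(":");
    if (idx === -1) continue;
    const key = line.slice(0, idx).trim();
    let value = line.slice(idx + 1).trim();
    if (value.startsWith("[")) value = JSON.parse(value); // tags: ["a", "b"]
    else if (value !== "" && !isNaN(Number(value))) value = Number(value); // confidence: 0.92
    meta[key] = value;
  }
  return { meta, body: match[2].trim() };
}

const sample = [
  "---",
  "id: decision-1738789200000-a1b2",
  "category: decision",
  "confidence: 0.92",
  'tags: ["architecture", "search"]',
  "---",
  "Decided to use the port/adapter pattern for search backends",
  "so alternative engines can replace QMD without changing core logic.",
].join("\n");

const { meta, body } = parseMemoryFile(sample);
console.log(meta.category, meta.confidence, meta.tags.length); // decision 0.92 2
```

A real tool should use a proper YAML parser; this regex split only handles the flat frontmatter shown above.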
package/dist/index.js
CHANGED
@@ -1390,7 +1390,8 @@ var LocalLlmClient = class _LocalLlmClient {
 log.debug(
   `local LLM response: choices=${data.choices?.length}, usage=${JSON.stringify(data.usage)}`
 );
-const
+const msg = data.choices?.[0]?.message;
+const content = msg?.content || msg?.reasoning_content || "";
 if (!content) {
   log.warn(`local LLM returned empty content. choices=${JSON.stringify(data.choices)?.slice(0, 200)}`);
   return null;
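The single functional change in `dist/index.js` adds a fallback: when `message.content` is empty, the client now reads `message.reasoning_content`, which some local reasoning-style models populate instead. A minimal sketch of the new behavior, with the response shape assumed from the diff rather than from any documented API:

```javascript
// Sketch of the v9.0.2 fallback: prefer message.content, else reasoning_content.
// The response shape below is assumed from the diff, not a documented API.
function extractContent(data) {
  const msg = data.choices?.[0]?.message;
  return msg?.content || msg?.reasoning_content || "";
}

// A reasoning-style local model may leave `content` empty:
const reasoningReply = {
  choices: [{ message: { content: "", reasoning_content: "Paris is the capital." } }],
};
console.log(extractContent(reasoningReply)); // "Paris is the capital."

// An ordinary reply still takes the primary path:
const plainReply = { choices: [{ message: { content: "Hello" } }] };
console.log(extractContent(plainReply)); // "Hello"
```

Under v9.0.0, which read only `content`, the reasoning-style reply above would have produced an empty string and triggered the `local LLM returned empty content` warning.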