@susu-eng/gralkor 27.2.9 → 27.2.11
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +130 -209
- package/dist/server-env.d.ts.map +1 -1
- package/dist/server-env.js +1 -4
- package/dist/server-env.js.map +1 -1
- package/openclaw.plugin.json +1 -1
- package/package.json +1 -1
- package/server/main.py +12 -24
- package/server/wheels/falkordblite-0.9.0-py3-none-manylinux_2_36_aarch64.whl +0 -0
- package/docker-compose.yml +0 -34
- package/server/Dockerfile +0 -7
package/README.md
CHANGED
|
@@ -8,74 +8,55 @@ Gralkor automatically remembers and recalls everything your agent says, _thinks_
|
|
|
8
8
|
|
|
9
9
|
## Why Gralkor
|
|
10
10
|
|
|
11
|
-
Here's the honest field report on every OpenClaw memory plugin:
|
|
12
|
-
|
|
13
|
-
| Plugin | Storage | Captures thinking | Episode scope | Temporal facts | Local |
|
|
14
|
-
|---|---|---|---|---|---|
|
|
15
|
-
| **memory-core** *(built-in)* | Markdown files | no | full (LLM-written at compaction) | no | ✓ |
|
|
16
|
-
| **lancedb-pro** | LanceDB (flat vector) | no | new messages per run | partial | ✓ |
|
|
17
|
-
| **MemOS Local** | SQLite + vector | no | turn delta | recency decay only | ✓ |
|
|
18
|
-
| **Cognee** | Cognee graph API | no | Q&A pairs | partial | optional |
|
|
19
|
-
| **Supermemory** | Cloud (opaque) | no | last turn only | server-side flag | ✗ |
|
|
20
|
-
| **MemOS Cloud** | Cloud (opaque) | no | last turn *(default)* | none | ✗ |
|
|
21
|
-
| **Awareness** | Cloud + MD mirror | no | first message + last reply | none | ✗ |
|
|
22
|
-
| **Gralkor** | Graphiti knowledge graph | **yes** | full session | `valid_at`/`invalid_at`/`expired_at` | ✓ |
|
|
23
|
-
|
|
24
11
|
Let's look in detail at the decisions made for Gralkor and why they make it the best memory plugin for OpenClaw.
|
|
25
12
|
|
|
26
|
-
**Graphs, not Markdown or pure vector.**
|
|
27
|
-
|
|
28
|
-
Yet: the most popular memory plugin — memory-core, the one that ships inside OpenClaw — writes your agent's memory to `MEMORY.md` and `memory/YYYY-MM-DD.md`. The second most popular, lancedb-pro, stores extracted facts as flat rows in LanceDB. [Graphiti](https://github.com/getzep/graphiti) combines a knowledge graph with vector embeddings — you get structured relationships *and* semantic retrieval. Facts carry temporal validity: when they became true, when they stopped being true, when they were superseded.
|
|
29
|
-
|
|
30
|
-
This is not another chunking strategy or embedding experiment. Graphiti has solved this layer of the problem and Gralkor deploys and leverages it optimally for this use case.
|
|
13
|
+
**Graphs, not Markdown or pure vector.** Graphs are the right data structure for representing knowledge. Your code is a graph - the _world_ is a deeply interrelated graph - and trying to flatten it into Markdown files or pure vector embeddings is fighting reality. Gralkor doesn't use MD files (other than indexing yours), and this is not another chunking strategy or embedding experiment. Graphiti has already solved this layer and Gralkor leverages it optimally for this use case.
|
|
31
14
|
|
|
32
15
|
[HippoRAG](https://arxiv.org/abs/2405.14831) (NeurIPS 2024) found graph-based retrieval reaches 89.1% recall@5 on 2WikiMultiHopQA versus 68.2% for flat vector retrieval — a 20.9-point gap. [AriGraph](https://arxiv.org/abs/2407.04363) (IJCAI 2025) independently found KG-augmented agents markedly outperform RAG, summarization, and full-conversation-history baselines across interactive environments.
|
|
33
16
|
|
|
34
|
-
**Remembering behaviour, not just dialog.** Agents make mistakes
|
|
17
|
+
**Remembering behaviour, not just dialog.** Agents make mistakes, weigh options, reject approaches - they _learn_ as they complete tasks. Gralkor distills the agent's behaviour - not just its dialog - into first-person behavioural reports woven into episode transcripts before ingestion.
|
|
35
18
|
|
|
36
|
-
|
|
19
|
+
For almost all other memory plugins, your agent is inherently dishonest with you, frequently claiming to remember what it has done when it only really remembers what it _already claimed_ to have done, or to have thought _what it is only now imagining_.
|
|
37
20
|
|
|
38
|
-
Gralkor
|
|
21
|
+
With Gralkor your agent actually remembers its thoughts and actions.
|
|
39
22
|
|
|
40
23
|
[Reflexion](https://arxiv.org/abs/2303.11366) (NeurIPS 2023) showed agents storing self-reflective reasoning traces outperform GPT-4 output-only baselines by 11 points on HumanEval. [ExpeL](https://arxiv.org/abs/2308.10144) (AAAI 2024) directly ablated reasoning-trace storage versus output-only: +11–19 points across benchmarks from storing the reasoning process alone.
|
|
41
24
|
|
|
42
25
|
**Maximum context at ingestion.** Gralkor captures all messages in each session of work, distills behaviour, and feeds results to Graphiti *as whole episodes*. Extraction works _way_ better when Graphiti has full context.
|
|
43
26
|
|
|
44
|
-
|
|
27
|
+
Most memory plugins save isolated question-answer pairs or summarized snippets: some store only the first user message and the last assistant reply, others only the last turn.
|
|
45
28
|
|
|
46
|
-
Gralkor captures
|
|
29
|
+
Gralkor captures the entire series of questions, thoughts, actions, and responses that _solved the problem_ together, with all their interrelationships. Richer semantics, better understanding, better recall.
|
|
47
30
|
|
|
48
31
|
[SeCom](https://arxiv.org/abs/2502.05589) (ICLR 2025) found coherent multi-turn episode storage scores 5.99 GPT4Score points higher than isolated turn-level storage on LOCOMO. [LongMemEval](https://arxiv.org/abs/2410.10813) (ICLR 2025) confirms: fact-level QA-pair extraction drops accuracy from 0.692 to 0.615 versus full-round episode storage.
|
|
49
32
|
|
|
50
|
-
**Built for the long term.** Graphiti
|
|
33
|
+
**Built for the long term.** Graphiti (and therefore Gralkor) is deeply temporal. On every ingestion, it doesn't just append; it resolves new information against the existing graph, amending, expiring, and invalidating facts so that your agent knows _what happened over time_.
|
|
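To illustrate, a stored fact might carry validity metadata along these lines — the `valid_at`/`invalid_at`/`expired_at` field names come from the comparison table above, but the shape and values here are illustrative, not Graphiti's exact schema:

```json
{
  "fact": "project-x uses postgres",
  "valid_at": "2025-01-10T00:00:00Z",
  "invalid_at": null,
  "expired_at": null
}
```

When a later episode contradicts the fact, ingestion would set `invalid_at` rather than delete it, so the history survives.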
51
34
|
|
|
52
|
-
Graphiti does the heavy temporal lifting on ingestion
|
|
35
|
+
Graphiti does the heavy temporal lifting on ingestion. That's bad for throughput and useless for short-lived agents, which means serving a single, long-lived user agent is _the perfect use case_.
|
|
53
36
|
|
|
54
37
|
[LongMemEval](https://arxiv.org/abs/2410.10813) (ICLR 2025) established that temporal reasoning is the hardest memory sub-task for commercial LLMs; time-aware indexing recovers 7–11% of that loss. [MemoTime](https://arxiv.org/abs/2510.13614) (WWW 2026) found temporal knowledge graphs enable a 4B model to match GPT-4-Turbo on temporal reasoning, with up to 24% improvement over static memory baselines.
|
|
55
38
|
|
|
56
|
-
**Recursion through reflection.**
|
|
39
|
+
**Recursion through reflection.** Point your agent back at its own memory — let it reflect on what it knows, identify contradictions, synthesize higher-order insights, and do with them whatever you believe to be _good cognitive architecture_. Gralkor doesn't limit you to one approach, but the research is quite clear - you should do _something_.
|
|
57
40
|
|
|
58
|
-
My way is to use cron and [Thinker CLI](https://github.com/elimydlarz/thinker-cli) together, directing the agent to use the search and add memory tools. Share yours, and ask to see mine.
|
|
41
|
+
My way is to use cron and [Thinker CLI](https://github.com/elimydlarz/thinker-cli) together, directing the agent to use the search and add memory tools in a sequential reflective process. Share yours, and ask to see mine.
|
|
59
42
|
|
|
60
43
|
[Reflexion](https://arxiv.org/abs/2303.11366) (NeurIPS 2023) demonstrated that agents storing verbal reflections in an episodic buffer gain 11 points with no weight updates. [Generative Agents](https://arxiv.org/abs/2304.03442) (UIST 2023) showed empirically that a reflection layer synthesizing raw memories into higher-order insights is essential for coherent long-term behavior.
|
|
61
44
|
|
|
62
|
-
**Custom ontology: model your agent's world _your way_.**
|
|
45
|
+
**Custom ontology: model your agent's world _your way_.** Gralkor lets you define your own entity types, attributes, and relationships, so information is parsed into a schema you control. Your graph doesn't have to be a black box - you can keep track of what matters to you.
|
|
63
46
|
|
|
64
|
-
|
|
47
|
+
You can use a domain model codified by experts in your field, or encode _your_ model of the world so that your agent shares it.
|
|
65
48
|
|
|
66
49
|
[Apple's ODKE+](https://arxiv.org/abs/2509.04696) (2025) showed ontology-guided extraction hits 98.8% precision vs 91% raw LLM; [GoLLIE](https://arxiv.org/abs/2310.03668) (ICLR 2024) directly ablated schema-constrained versus unconstrained generation on the same model, finding +13 F1 points average across NER, relation, and event extraction in zero-shot settings.
|
|
67
50
|
|
|
68
|
-
**
|
|
51
|
+
**Interpretation.** Gralkor interprets information in memory for relevance to the task at hand. This step radically improves output with minimal impact on cost and latency.
|
|
69
52
|
|
|
70
|
-
|
|
53
|
+
**On cost.** Gralkor costs more to run than a Markdown file in the short term. In the longer term, Gralkor provides more efficient context management, reducing token burn. Instead of paying to pollute your context window with junk every read, you pay more on ingestion in exchange for cheap, high-relevance reads forever.
|
|
71
54
|
|
|
72
|
-
You stop spending turns re-establishing context and focus more on what you care about.
|
|
55
|
+
An agent that remembers behaviour, decisions, your preferences, and reasoning across sessions changes the _character_ of your work. You stop spending turns re-establishing context and focus more on what you care about. A single recalled behavioural fact — "we rejected mysql because it lacked jsonb column support needed for X" — prevents re-litigating that decision in a new session, and might save 10 subagents repeating a parallel investigation of database options.
|
|
73
56
|
|
|
74
|
-
Gralkor is _good_ memory, not cheap memory. You can push the llm choice and perhaps get better extraction, but otherwise I've just made it as good as possible
|
|
57
|
+
Gralkor is _good_ memory, not cheap memory. You can push the LLM choice and perhaps get better extraction, but otherwise I've just made it as good as possible while being reasonable about latency.
|
|
75
58
|
|
|
76
|
-
##
|
|
77
|
-
|
|
78
|
-
Gralkor replaces the native memory plugin entirely, taking the memory slot.
|
|
59
|
+
## Tools
|
|
79
60
|
|
|
80
61
|
- **`memory_search`** — searches the knowledge graph and returns relevant facts and entity summaries
|
|
81
62
|
- **`memory_add`** — stores information in the knowledge graph; Graphiti extracts entities and relationships
|
|
@@ -134,12 +115,22 @@ openclaw plugins install ./susu-eng-gralkor-memory-26.0.14.tgz --dangerously-for
|
|
|
134
115
|
|
|
135
116
|
### 4. Enable and assign the memory slot
|
|
136
117
|
|
|
118
|
+
OpenClaw has a single `memory` slot that determines which plugin provides memory to your agents. You must explicitly assign Gralkor to the `memory` slot, otherwise installing the plugin does nothing — auto-capture and auto-recall hooks will never fire.
|
|
119
|
+
|
|
137
120
|
```bash
|
|
138
|
-
#
|
|
121
|
+
# If you use an allowlist, add gralkor to it
|
|
139
122
|
openclaw config set --json plugins.allow '["gralkor"]'
|
|
140
123
|
|
|
141
|
-
#
|
|
124
|
+
# Enable the plugin entry
|
|
125
|
+
openclaw config set plugins.entries.gralkor.enabled true
|
|
126
|
+
|
|
127
|
+
# Assign Gralkor to the memory slot (replaces the built-in memory-core)
|
|
142
128
|
openclaw config set plugins.slots.memory gralkor
|
|
129
|
+
|
|
130
|
+
# Expose Gralkor's tools to the agent. Auto-capture and auto-recall work without
|
|
131
|
+
# this, but the agent won't see memory_add / memory_build_indices / memory_build_communities
|
|
132
|
+
# unless you add them to the active tool profile's allowlist.
|
|
133
|
+
openclaw config set --json tools.alsoAllow '["gralkor"]'
|
|
143
134
|
```
|
|
144
135
|
|
|
145
136
|
### 5. Restart and go
|
|
@@ -167,20 +158,6 @@ Start chatting with your agent. Gralkor works in the background:
|
|
|
167
158
|
openclaw plugins update gralkor --dangerously-force-unsafe-install
|
|
168
159
|
```
|
|
169
160
|
|
|
170
|
-
### Reinstalling
|
|
171
|
-
|
|
172
|
-
The plugin dir (`~/.openclaw/extensions/gralkor`) is ephemeral — it can be deleted and reinstalled freely. The `dataDir` is persistent — the venv and FalkorDB database survive across reinstalls.
|
|
173
|
-
|
|
174
|
-
```bash
|
|
175
|
-
openclaw plugins uninstall gralkor
|
|
176
|
-
openclaw plugins install @susu-eng/gralkor --dangerously-force-unsafe-install
|
|
177
|
-
openclaw config set plugins.slots.memory gralkor
|
|
178
|
-
```
|
|
179
|
-
|
|
180
|
-
`uninstall` removes the plugin files and resets the memory slot automatically.
|
|
181
|
-
|
|
182
|
-
The second boot is fast (~4s) because the venv in `dataDir` is reused.
|
|
183
|
-
|
|
184
161
|
### LLM providers
|
|
185
162
|
|
|
186
163
|
Graphiti needs an LLM to extract entities and relationships from conversations.
|
|
@@ -192,24 +169,12 @@ Graphiti needs an LLM to extract entities and relationships from conversations.
|
|
|
192
169
|
| **Anthropic** | `anthropicApiKey` | LLM only — still needs `openaiApiKey` for embeddings |
|
|
193
170
|
| **Groq** | `groqApiKey` | LLM only — still needs `openaiApiKey` for embeddings |
|
|
194
171
|
|
|
195
|
-
To switch away from Gemini, set `llm` and `embedder
|
|
172
|
+
To switch away from Gemini, set `llm` and `embedder`. For example, with OpenAI:
|
|
196
173
|
|
|
197
|
-
```
|
|
198
|
-
|
|
199
|
-
|
|
200
|
-
|
|
201
|
-
"gralkor": {
|
|
202
|
-
"enabled": true,
|
|
203
|
-
"config": {
|
|
204
|
-
"dataDir": "/path/to/gralkor-data",
|
|
205
|
-
"openaiApiKey": { "$ref": "env:OPENAI_API_KEY" },
|
|
206
|
-
"llm": { "provider": "openai", "model": "gpt-4.1-mini" },
|
|
207
|
-
"embedder": { "provider": "openai", "model": "text-embedding-3-small" }
|
|
208
|
-
}
|
|
209
|
-
}
|
|
210
|
-
}
|
|
211
|
-
}
|
|
212
|
-
}
|
|
174
|
+
```bash
|
|
175
|
+
openclaw config set plugins.entries.gralkor.config.openaiApiKey "$OPENAI_API_KEY"
|
|
176
|
+
openclaw config set --json plugins.entries.gralkor.config.llm '{"provider":"openai","model":"gpt-4.1-mini"}'
|
|
177
|
+
openclaw config set --json plugins.entries.gralkor.config.embedder '{"provider":"openai","model":"text-embedding-3-small"}'
|
|
213
178
|
```
|
|
214
179
|
|
|
215
180
|
## CLI
|
|
@@ -219,26 +184,15 @@ openclaw gralkor status # Server state, config, graph stats, data d
|
|
|
219
184
|
openclaw gralkor search <group_id> <query> # Search the knowledge graph
|
|
220
185
|
```
|
|
221
186
|
|
|
187
|
+
See [Graph partitioning](#graph-partitioning) for what `<group_id>` should be.
|
|
188
|
+
|
|
222
189
|
## Configuration
|
|
223
190
|
|
|
224
|
-
Configure
|
|
191
|
+
Configure with `openclaw config set`. For example:
|
|
225
192
|
|
|
226
|
-
```
|
|
227
|
-
|
|
228
|
-
|
|
229
|
-
"entries": {
|
|
230
|
-
"gralkor": {
|
|
231
|
-
"enabled": true,
|
|
232
|
-
"config": {
|
|
233
|
-
"autoCapture": { "enabled": true },
|
|
234
|
-
"autoRecall": { "enabled": true, "maxResults": 10 },
|
|
235
|
-
"idleTimeoutMs": 300000,
|
|
236
|
-
"dataDir": "/path/to/data"
|
|
237
|
-
}
|
|
238
|
-
}
|
|
239
|
-
}
|
|
240
|
-
}
|
|
241
|
-
}
|
|
193
|
+
```bash
|
|
194
|
+
openclaw config set --json plugins.entries.gralkor.config.autoRecall.maxResults 20
|
|
195
|
+
openclaw config set --json plugins.entries.gralkor.config.idleTimeoutMs 600000
|
|
242
196
|
```
|
|
243
197
|
|
|
244
198
|
| Setting | Default | Description |
|
|
@@ -254,38 +208,33 @@ Configure in your OpenClaw plugin settings (`~/.openclaw/openclaw.json`):
|
|
|
254
208
|
|
|
255
209
|
### Complete config reference
|
|
256
210
|
|
|
211
|
+
The full plugin config shape (as it appears under `plugins.entries.gralkor.config` in `~/.openclaw/openclaw.json`):
|
|
212
|
+
|
|
257
213
|
```json
|
|
258
214
|
{
|
|
259
|
-
"
|
|
260
|
-
|
|
261
|
-
|
|
262
|
-
|
|
263
|
-
|
|
264
|
-
|
|
265
|
-
|
|
266
|
-
|
|
267
|
-
|
|
268
|
-
|
|
269
|
-
|
|
270
|
-
|
|
271
|
-
|
|
272
|
-
|
|
273
|
-
|
|
274
|
-
"entities": {},
|
|
275
|
-
"edges": {},
|
|
276
|
-
"edgeMap": {}
|
|
277
|
-
},
|
|
278
|
-
"test": false
|
|
279
|
-
}
|
|
280
|
-
}
|
|
281
|
-
}
|
|
282
|
-
}
|
|
215
|
+
"dataDir": "/path/to/gralkor-data",
|
|
216
|
+
"workspaceDir": "~/.openclaw/workspace",
|
|
217
|
+
"googleApiKey": "your-gemini-key",
|
|
218
|
+
"llm": { "provider": "gemini", "model": "gemini-3.1-flash-lite-preview" },
|
|
219
|
+
"embedder": { "provider": "gemini", "model": "gemini-embedding-2-preview" },
|
|
220
|
+
"autoCapture": { "enabled": true },
|
|
221
|
+
"autoRecall": { "enabled": true, "maxResults": 10 },
|
|
222
|
+
"search": { "maxResults": 20, "maxEntityResults": 10 },
|
|
223
|
+
"idleTimeoutMs": 300000,
|
|
224
|
+
"ontology": {
|
|
225
|
+
"entities": {},
|
|
226
|
+
"edges": {},
|
|
227
|
+
"edgeMap": {}
|
|
228
|
+
},
|
|
229
|
+
"test": false
|
|
283
230
|
}
|
|
284
231
|
```
|
|
285
232
|
|
|
286
233
|
### Graph partitioning
|
|
287
234
|
|
|
288
|
-
Each agent gets its own graph partition automatically
|
|
235
|
+
Each agent gets its own graph partition automatically — different agents won't see each other's knowledge, and no configuration is needed.
|
|
236
|
+
|
|
237
|
+
The partition key (`group_id`) is the agent's ID with hyphens replaced by underscores (FalkorDB's RediSearch syntax doesn't accept hyphens). So an agent named `my-coding-agent` stores its memory under group `my_coding_agent`. Agents running without an explicit ID use the partition `default`. You'll need this `group_id` whenever you query the graph directly — e.g. `openclaw gralkor search <group_id> <query>`.
|
|
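The transform is simple enough to do inline when scripting — a sketch of deriving the group ID from an agent ID:

```bash
# Derive a Gralkor group_id from an OpenClaw agent ID:
# hyphens become underscores (RediSearch-safe); empty falls back to "default".
agent_id="my-coding-agent"                        # example agent ID
group_id=$(printf '%s' "${agent_id:-default}" | tr '-' '_')
echo "$group_id"                                  # -> my_coding_agent
```

You can then pass the result wherever a `<group_id>` is expected, e.g. `openclaw gralkor search "$group_id" "<query>"`.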
289
238
|
|
|
290
239
|
## Custom entity and relationship types
|
|
291
240
|
|
|
@@ -293,81 +242,70 @@ By default, Graphiti extracts generic entities and connects them with generic `R
|
|
|
293
242
|
|
|
294
243
|
If you want more structured extraction, you can define custom entity and relationship types. Graphiti will classify entities into your types, extract structured attributes, and create typed relationships between them.
|
|
295
244
|
|
|
296
|
-
### Entities
|
|
245
|
+
### Entities
|
|
297
246
|
|
|
298
|
-
The simplest useful ontology defines just entity types. Relationships will still be created, using Graphiti's default `RELATES_TO` type.
|
|
247
|
+
The simplest useful ontology defines just entity types. Relationships will still be created, using Graphiti's default `RELATES_TO` type. Set the whole ontology in one go:
|
|
299
248
|
|
|
300
|
-
```
|
|
301
|
-
{
|
|
302
|
-
"
|
|
303
|
-
"
|
|
304
|
-
"
|
|
305
|
-
|
|
306
|
-
"
|
|
307
|
-
|
|
308
|
-
|
|
309
|
-
|
|
310
|
-
|
|
311
|
-
|
|
312
|
-
|
|
313
|
-
|
|
314
|
-
}
|
|
315
|
-
},
|
|
316
|
-
"Technology": {
|
|
317
|
-
"description": "A programming language, framework, library, database, or infrastructure tool. Identify by mentions of specific named technologies used in or considered for projects.",
|
|
318
|
-
"attributes": {
|
|
319
|
-
"category": ["language", "framework", "database", "infrastructure", "tool"]
|
|
320
|
-
}
|
|
321
|
-
}
|
|
322
|
-
}
|
|
323
|
-
}
|
|
324
|
-
}
|
|
249
|
+
```bash
|
|
250
|
+
openclaw config set --json plugins.entries.gralkor.config.ontology '{
|
|
251
|
+
"entities": {
|
|
252
|
+
"Project": {
|
|
253
|
+
"description": "A software project or initiative being actively developed. Look for mentions of repositories, codebases, applications, services, or named systems that are built and maintained by a team.",
|
|
254
|
+
"attributes": {
|
|
255
|
+
"status": ["active", "completed", "paused"],
|
|
256
|
+
"language": "Primary programming language used in the project"
|
|
257
|
+
}
|
|
258
|
+
},
|
|
259
|
+
"Technology": {
|
|
260
|
+
"description": "A programming language, framework, library, database, or infrastructure tool. Identify by mentions of specific named technologies used in or considered for projects.",
|
|
261
|
+
"attributes": {
|
|
262
|
+
"category": ["language", "framework", "database", "infrastructure", "tool"]
|
|
325
263
|
}
|
|
326
264
|
}
|
|
327
265
|
}
|
|
328
|
-
}
|
|
266
|
+
}'
|
|
329
267
|
```
|
|
330
268
|
|
|
331
|
-
###
|
|
269
|
+
### Relationships
|
|
332
270
|
|
|
333
271
|
To control how entities are connected, add `edges` (relationship types) and `edgeMap` (which entity pairs they apply to):
|
|
334
272
|
|
|
335
273
|
```json
|
|
336
274
|
{
|
|
337
|
-
"
|
|
338
|
-
"
|
|
339
|
-
"
|
|
340
|
-
|
|
341
|
-
"
|
|
342
|
-
|
|
343
|
-
"language": "Primary programming language used in the project"
|
|
344
|
-
}
|
|
345
|
-
},
|
|
346
|
-
"Technology": {
|
|
347
|
-
"description": "A programming language, framework, library, database, or infrastructure tool. Identify by mentions of specific named technologies used in or considered for projects.",
|
|
348
|
-
"attributes": {
|
|
349
|
-
"category": ["language", "framework", "database", "infrastructure", "tool"]
|
|
350
|
-
}
|
|
275
|
+
"entities": {
|
|
276
|
+
"Project": {
|
|
277
|
+
"description": "A software project or initiative being actively developed. Look for mentions of repositories, codebases, applications, services, or named systems that are built and maintained by a team.",
|
|
278
|
+
"attributes": {
|
|
279
|
+
"status": ["active", "completed", "paused"],
|
|
280
|
+
"language": "Primary programming language used in the project"
|
|
351
281
|
}
|
|
352
282
|
},
|
|
353
|
-
"
|
|
354
|
-
"
|
|
355
|
-
|
|
356
|
-
"
|
|
357
|
-
|
|
358
|
-
|
|
283
|
+
"Technology": {
|
|
284
|
+
"description": "A programming language, framework, library, database, or infrastructure tool. Identify by mentions of specific named technologies used in or considered for projects.",
|
|
285
|
+
"attributes": {
|
|
286
|
+
"category": ["language", "framework", "database", "infrastructure", "tool"]
|
|
287
|
+
}
|
|
288
|
+
}
|
|
289
|
+
},
|
|
290
|
+
"edges": {
|
|
291
|
+
"Uses": {
|
|
292
|
+
"description": "A project actively using a technology in its stack. Look for statements about tech choices, dependencies, or implementation details that indicate a project relies on a specific technology.",
|
|
293
|
+
"attributes": {
|
|
294
|
+
"version": "Version of the technology in use, if mentioned"
|
|
359
295
|
}
|
|
360
|
-
},
|
|
361
|
-
"edgeMap": {
|
|
362
|
-
"Project,Technology": ["Uses"]
|
|
363
296
|
}
|
|
297
|
+
},
|
|
298
|
+
"edgeMap": {
|
|
299
|
+
"Project,Technology": ["Uses"]
|
|
364
300
|
}
|
|
365
301
|
}
|
|
366
302
|
```
|
|
367
303
|
|
|
304
|
+
Apply with `openclaw config set --json plugins.entries.gralkor.config.ontology '<above>'`.
|
|
305
|
+
|
|
368
306
|
Without `edgeMap`, all edge types can connect any entity pair. With `edgeMap`, relationships are constrained to specific pairs — entity pairs not listed fall back to `RELATES_TO`.
|
|
369
307
|
|
|
370
|
-
###
|
|
308
|
+
### Attributes
|
|
371
309
|
|
|
372
310
|
Attributes control what Graphiti extracts for each entity or relationship. They are **required fields** — if the LLM can't populate them from the text, it won't extract that entity type at all. This makes attributes the primary mechanism for gating extraction quality.
|
|
373
311
|
|
|
@@ -380,7 +318,7 @@ Attributes control what Graphiti extracts for each entity or relationship. They
|
|
|
380
318
|
|
|
381
319
|
Supported types for the object form: `string`, `int`, `float`, `bool`, `datetime`.
|
|
382
320
|
|
|
383
|
-
###
|
|
321
|
+
### Descriptions
|
|
384
322
|
|
|
385
323
|
Descriptions are the most important part of your ontology — they tell the LLM what to look for. Write them like extraction instructions, not dictionary definitions.
|
|
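For instance (the `Deployment` type here is hypothetical, purely to illustrate the guidance — a dictionary-style description like "A deployment of software" gives the LLM nothing to match on):

```json
{
  "entities": {
    "Deployment": {
      "description": "A release of a project to a named environment. Look for phrases like 'deployed to', 'rolled back', 'shipped', and mentions of staging or production targets."
    }
  }
}
```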
386
324
|
|
|
@@ -416,21 +354,21 @@ openclaw config set plugins.entries.gralkor.config.dataDir /data/gralkor
|
|
|
416
354
|
|
|
417
355
|
```
|
|
418
356
|
User sends message
|
|
419
|
-
|
|
420
|
-
|
|
421
|
-
|
|
422
|
-
│
|
|
423
|
-
│ hook │
|
|
424
|
-
|
|
425
|
-
|
|
426
|
-
|
|
357
|
+
│
|
|
358
|
+
▼
|
|
359
|
+
┌──────────────┐ search ┌──────────────┐ query ┌──────────────┐
|
|
360
|
+
│ auto-recall │ ───────────▶ │ Graphiti │ ───────────▶ │ FalkorDB │
|
|
361
|
+
│ hook │ ◀─────────── │ API │ ◀─────────── │ │
|
|
362
|
+
└──────────────┘ facts └──────────────┘ subgraph └──────────────┘
|
|
363
|
+
│
|
|
364
|
+
▼
|
|
427
365
|
Agent runs (with recalled facts as context)
|
|
428
|
-
|
|
429
|
-
|
|
430
|
-
┌──────────────┐ ingest
|
|
431
|
-
│ auto-capture
|
|
432
|
-
│ hook
|
|
433
|
-
└──────────────┘
|
|
366
|
+
│
|
|
367
|
+
▼
|
|
368
|
+
┌──────────────┐ ingest ┌──────────────┐ extract ┌──────────────┐
|
|
369
|
+
│ auto-capture │ ───────────▶ │ Graphiti │ ───────────▶ │ FalkorDB │
|
|
370
|
+
│ hook │ │ API │ entities │ │
|
|
371
|
+
└──────────────┘ └──────────────┘ & facts └──────────────┘
|
|
434
372
|
```
|
|
435
373
|
|
|
436
374
|
Graphiti handles the heavy lifting: entity extraction, relationship mapping, temporal tracking, and embedding-based search. Gralkor wires it into the OpenClaw plugin lifecycle. The Graphiti server and embedded FalkorDB run as a managed subprocess — started and stopped automatically by the plugin.
|
|
@@ -438,7 +376,18 @@ Graphiti handles the heavy lifting: entity extraction, relationship mapping, tem
|
|
|
438
376
|
## Troubleshooting
|
|
439
377
|
|
|
440
378
|
**`openclaw gralkor status` says "Server process: stopped"**
|
|
441
|
-
|
|
379
|
+
Many things can cause this. Use the available diagnostics to narrow it down:
|
|
380
|
+
|
|
381
|
+
- **`openclaw gralkor status`** — shows process state, config summary, `dataDir`, venv state, and (if unreachable) the connection error
|
|
382
|
+
- **Gateway logs** — grep for `[gralkor] boot:` markers. You should see `boot: plugin loaded`, `boot: starting`, then `boot: ready`. A `boot: ... failed:` line tells you which stage broke
|
|
383
|
+
- **`openclaw gralkor search <group_id> <query>`** — quick end-to-end check that the server is reachable and the graph has data (group ID is required; it's the agent ID with hyphens replaced by underscores)
|
|
384
|
+
|
|
385
|
+
Common causes:
|
|
386
|
+
- `uv` not on PATH (Python itself is managed by `uv` — it fetches 3.12+ on demand and reports its own errors)
|
|
387
|
+
- `uv sync` failed (network/registry issue); note that on first boot ~1–2 min is normal — wait it out
|
|
388
|
+
- Missing or invalid LLM API key — the server starts but every operation fails
|
|
389
|
+
- Stale `server.pid` in `dataDir` holding port 8001 (the manager tries to clean this up, but a SIGKILL'd predecessor can leave the port wedged)
|
|
390
|
+
- On `linux/arm64`: bundled falkordblite wheel couldn't be resolved (not in `server/wheels/`, not cached in `dataDir/wheels/`, and the GitHub Release download failed)
|
|
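A quick first-pass check for the two most common causes above — the `dataDir` path here is illustrative; use the one `openclaw gralkor status` reports:

```bash
# 1. Is uv on PATH? (Python and the venv are managed by uv.)
if command -v uv >/dev/null 2>&1; then
  uv_status="found"
else
  uv_status="missing"
fi
echo "uv: $uv_status"

# 2. Stale pid file whose process is gone? Path is illustrative.
pid_file="/path/to/gralkor-data/server.pid"
if [ -f "$pid_file" ] && ! kill -0 "$(cat "$pid_file")" 2>/dev/null; then
  echo "stale pid file: $pid_file (safe to delete, then restart the gateway)"
fi
```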
442
391
|
|
|
443
392
|
**First startup takes a long time**
|
|
444
393
|
Normal — Gralkor is creating a Python virtual environment and installing dependencies via pip. This takes ~1-2 minutes. Subsequent starts reuse the venv and skip pip.
|
|
@@ -448,7 +397,7 @@ Most likely: missing or invalid LLM API key. Check your provider API key configu
|
|
|
448
397
|
|
|
449
398
|
**No memories being recalled**
|
|
450
399
|
- Check that `autoRecall.enabled` is `true` (it is by default)
|
|
451
|
-
- Verify the graph has data: run `openclaw gralkor search <term>`
|
|
400
|
+
- Verify the graph has data: run `openclaw gralkor search <group_id> <term>` (group ID = agent ID with hyphens replaced by underscores)
|
|
452
401
|
- Auto-recall extracts keywords from the user's message — very short messages may not match
|
|
453
402
|
|
|
454
403
|
**Agent doesn't store conversations**
|
|
@@ -457,31 +406,3 @@ Most likely: missing or invalid LLM API key. Check your provider API key configu
|
|
|
457
406
|
- Conversations where the first user message starts with `/` are skipped by design
|
|
458
407
|
- Empty conversations (no extractable text) are skipped
|
|
459
408
|
|
|
460
|
-
**Agent doesn't have plugin tools (`memory_add`, `memory_build_indices`, etc.)**
|
|
461
|
-
OpenClaw's tool profiles (`coding`, `minimal`, etc.) only allowlist core tools by default. Plugin tools are filtered out when a profile is active. To enable them, add them to `alsoAllow` in your `openclaw.json`:
|
|
462
|
-
|
|
463
|
-
```json
|
|
464
|
-
{
|
|
465
|
-
"tools": {
|
|
466
|
-
"alsoAllow": ["memory_add", "memory_build_indices", "memory_build_communities"]
|
|
467
|
-
}
|
|
468
|
-
}
|
|
469
|
-
```
|
|
470
|
-
|
|
471
|
-
You can also allow all Gralkor tools with `"alsoAllow": ["gralkor"]` or all plugin tools with `"alsoAllow": ["group:plugins"]`. Note that `memory_add` is not required for Gralkor to work — auto-capture already stores everything your agent hears, says, thinks, and does. `memory_add` is only needed if you want the agent to selectively store specific insights or conclusions on its own.
|
|
472
|
-
|
|
473
|
-
## Legacy Docker mode
|
|
474
|
-
|
|
475
|
-
If you prefer to run FalkorDB as a separate Docker container (e.g. for production deployments with specific resource constraints), you can set `FALKORDB_URI` to bypass the embedded mode:
|
|
476
|
-
|
|
477
|
-
```bash
|
|
478
|
-
cd ~/.openclaw/plugins/gralkor
|
|
479
|
-
docker build -t gralkor-server:latest server/
|
|
480
|
-
FALKORDB_URI=redis://falkordb:6379 docker compose up -d
|
|
481
|
-
```
|
|
482
|
-
|
|
483
|
-
This starts FalkorDB on port 6379 and the Graphiti API on port 8001. If your OpenClaw gateway runs in Docker, connect it to the `gralkor` network:
|
|
484
|
-
|
|
485
|
-
```bash
|
|
486
|
-
docker network connect gralkor <your-openclaw-container-name>
|
|
487
|
-
```
|
package/dist/server-env.d.ts.map
CHANGED
|
@@ -1 +1 @@
|
|
|
1
|
-
{"version":3,"file":"server-env.d.ts","sourceRoot":"","sources":["../src/server-env.ts"],"names":[],"mappings":"AAQA,KAAK,MAAM,GAAG,MAAM,CAAC,MAAM,EAAE,MAAM,CAAC,CAAC;AAMrC,wBAAgB,YAAY,CAAC,OAAO,EAAE,MAAM,GAAG,MAAM,CAEpD;AAED,wBAAgB,WAAW,CAAC,OAAO,EAAE,MAAM,GAAG,MAAM,CAEnD;AAED,wBAAgB,aAAa,CAAC,IAAI,EAAE;IAClC,KAAK,CAAC,EAAE,MAAM,CAAC,MAAM,EAAE,MAAM,CAAC,CAAC;IAC/B,SAAS,CAAC,EAAE,MAAM,CAAC,MAAM,EAAE,MAAM,CAAC,CAAC;IACnC,eAAe,EAAE,MAAM,CAAC;IACxB,UAAU,EAAE,MAAM,CAAC;CACpB,GAAG,MAAM,
|
|
1
|
+
{"version":3,"file":"server-env.d.ts","sourceRoot":"","sources":["../src/server-env.ts"],"names":[],"mappings":"AAQA,KAAK,MAAM,GAAG,MAAM,CAAC,MAAM,EAAE,MAAM,CAAC,CAAC;AAMrC,wBAAgB,YAAY,CAAC,OAAO,EAAE,MAAM,GAAG,MAAM,CAEpD;AAED,wBAAgB,WAAW,CAAC,OAAO,EAAE,MAAM,GAAG,MAAM,CAEnD;AAED,wBAAgB,aAAa,CAAC,IAAI,EAAE;IAClC,KAAK,CAAC,EAAE,MAAM,CAAC,MAAM,EAAE,MAAM,CAAC,CAAC;IAC/B,SAAS,CAAC,EAAE,MAAM,CAAC,MAAM,EAAE,MAAM,CAAC,CAAC;IACnC,eAAe,EAAE,MAAM,CAAC;IACxB,UAAU,EAAE,MAAM,CAAC;CACpB,GAAG,MAAM,CAQT"}
|
package/dist/server-env.js
CHANGED
|
@@ -15,15 +15,12 @@ export function buildPipEnv(venvDir) {
|
|
|
15
15
|
return { ...baseEnv(), VIRTUAL_ENV: venvDir };
|
|
16
16
|
}
|
|
17
17
|
export function buildSpawnEnv(opts) {
|
|
18
|
-
|
|
18
|
+
return {
|
|
19
19
|
...baseEnv(),
|
|
20
20
|
...opts.extra,
|
|
21
21
|
...opts.secretEnv,
|
|
22
22
|
FALKORDB_DATA_DIR: opts.falkordbDataDir,
|
|
23
23
|
CONFIG_PATH: opts.configPath,
|
|
24
24
|
};
|
|
25
|
-
// Absence of FALKORDB_URI triggers embedded FalkorDBLite mode.
|
|
26
|
-
delete env.FALKORDB_URI;
|
|
27
|
-
return env;
|
|
28
25
|
}
|
|
29
26
|
//# sourceMappingURL=server-env.js.map
|
package/dist/server-env.js.map
CHANGED
|
@@ -1 +1 @@
|
|
|
1
|
-
{"version":3,"file":"server-env.js","sourceRoot":"","sources":["../src/server-env.ts"],"names":[],"mappings":"AAAA,8DAA8D;AAC9D,EAAE;AACF,8EAA8E;AAC9E,6EAA6E;AAC7E,8EAA8E;AAC9E,yEAAyE;AACzE,yDAAyD;AAIzD,SAAS,OAAO;IACd,OAAO,EAAE,GAAI,OAAO,CAAC,GAAc,EAAE,CAAC;AACxC,CAAC;AAED,MAAM,UAAU,YAAY,CAAC,OAAe;IAC1C,OAAO,EAAE,GAAG,OAAO,EAAE,EAAE,sBAAsB,EAAE,OAAO,EAAE,CAAC;AAC3D,CAAC;AAED,MAAM,UAAU,WAAW,CAAC,OAAe;IACzC,OAAO,EAAE,GAAG,OAAO,EAAE,EAAE,WAAW,EAAE,OAAO,EAAE,CAAC;AAChD,CAAC;AAED,MAAM,UAAU,aAAa,CAAC,IAK7B;IACC,
|
|
1
|
+
{"version":3,"file":"server-env.js","sourceRoot":"","sources":["../src/server-env.ts"],"names":[],"mappings":"AAAA,8DAA8D;AAC9D,EAAE;AACF,8EAA8E;AAC9E,6EAA6E;AAC7E,8EAA8E;AAC9E,yEAAyE;AACzE,yDAAyD;AAIzD,SAAS,OAAO;IACd,OAAO,EAAE,GAAI,OAAO,CAAC,GAAc,EAAE,CAAC;AACxC,CAAC;AAED,MAAM,UAAU,YAAY,CAAC,OAAe;IAC1C,OAAO,EAAE,GAAG,OAAO,EAAE,EAAE,sBAAsB,EAAE,OAAO,EAAE,CAAC;AAC3D,CAAC;AAED,MAAM,UAAU,WAAW,CAAC,OAAe;IACzC,OAAO,EAAE,GAAG,OAAO,EAAE,EAAE,WAAW,EAAE,OAAO,EAAE,CAAC;AAChD,CAAC;AAED,MAAM,UAAU,aAAa,CAAC,IAK7B;IACC,OAAO;QACL,GAAG,OAAO,EAAE;QACZ,GAAG,IAAI,CAAC,KAAK;QACb,GAAG,IAAI,CAAC,SAAS;QACjB,iBAAiB,EAAE,IAAI,CAAC,eAAe;QACvC,WAAW,EAAE,IAAI,CAAC,UAAU;KAC7B,CAAC;AACJ,CAAC"}
|
package/openclaw.plugin.json
CHANGED
package/package.json
CHANGED
package/server/main.py
CHANGED
|
@@ -204,32 +204,20 @@ async def lifespan(_app: FastAPI):
|
|
|
204
204
|
global graphiti, ontology_entity_types, ontology_edge_types, ontology_edge_type_map
|
|
205
205
|
cfg = _load_config()
|
|
206
206
|
|
|
207
|
-
|
|
208
|
-
|
|
209
|
-
if falkordb_uri:
|
|
210
|
-
# Legacy Docker mode: external FalkorDB via TCP
|
|
211
|
-
stripped = falkordb_uri.split("://", 1)[-1]
|
|
212
|
-
if ":" in stripped:
|
|
213
|
-
host, port_str = stripped.rsplit(":", 1)
|
|
214
|
-
port = int(port_str)
|
|
215
|
-
else:
|
|
216
|
-
host, port = stripped, 6379
|
|
217
|
-
driver = FalkorDriver(host=host, port=port)
|
|
218
|
-
else:
|
|
219
|
-
# Default: embedded FalkorDBLite (no Docker needed)
|
|
220
|
-
logging.getLogger("redislite").setLevel(logging.DEBUG)
|
|
207
|
+
# Embedded FalkorDBLite (no Docker needed)
|
|
208
|
+
logging.getLogger("redislite").setLevel(logging.DEBUG)
|
|
221
209
|
|
|
222
|
-
|
|
210
|
+
from redislite.async_falkordb_client import AsyncFalkorDB
|
|
223
211
|
|
|
224
|
-
|
|
225
|
-
|
|
226
|
-
|
|
227
|
-
|
|
228
|
-
|
|
229
|
-
|
|
230
|
-
|
|
231
|
-
|
|
232
|
-
|
|
212
|
+
data_dir = os.getenv("FALKORDB_DATA_DIR", "./data/falkordb")
|
|
213
|
+
os.makedirs(data_dir, exist_ok=True)
|
|
214
|
+
db_path = os.path.join(data_dir, "gralkor.db")
|
|
215
|
+
try:
|
|
216
|
+
db = AsyncFalkorDB(db_path)
|
|
217
|
+
except Exception as e:
|
|
218
|
+
_log_falkordblite_diagnostics(e)
|
|
219
|
+
raise
|
|
220
|
+
driver = FalkorDriver(falkor_db=db)
|
|
233
221
|
|
|
234
222
|
graphiti = Graphiti(
|
|
235
223
|
graph_driver=driver,
|
|
Binary file
|
package/docker-compose.yml
DELETED
|
@@ -1,34 +0,0 @@
|
|
|
1
|
-
services:
|
|
2
|
-
falkordb:
|
|
3
|
-
image: falkordb/falkordb:latest
|
|
4
|
-
restart: unless-stopped
|
|
5
|
-
ports:
|
|
6
|
-
- "6379:6379"
|
|
7
|
-
- "3000:3000"
|
|
8
|
-
volumes:
|
|
9
|
-
- ${FALKORDB_DATA_DIR:-falkordb_data}:/var/lib/falkordb/data
|
|
10
|
-
networks:
|
|
11
|
-
- gralkor
|
|
12
|
-
|
|
13
|
-
graphiti:
|
|
14
|
-
image: gralkor-server:latest
|
|
15
|
-
restart: unless-stopped
|
|
16
|
-
ports:
|
|
17
|
-
- "8001:8001"
|
|
18
|
-
env_file:
|
|
19
|
-
- .env
|
|
20
|
-
environment:
|
|
21
|
-
FALKORDB_URI: redis://falkordb:6379
|
|
22
|
-
volumes:
|
|
23
|
-
- ./config.yaml:/app/config.yaml:ro
|
|
24
|
-
depends_on:
|
|
25
|
-
- falkordb
|
|
26
|
-
networks:
|
|
27
|
-
- gralkor
|
|
28
|
-
|
|
29
|
-
networks:
|
|
30
|
-
gralkor:
|
|
31
|
-
name: gralkor
|
|
32
|
-
|
|
33
|
-
volumes:
|
|
34
|
-
falkordb_data:
|