@susu-eng/gralkor 27.2.8 → 27.2.10

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -8,62 +8,55 @@ Gralkor automatically remembers and recalls everything your agent says, _thinks_
8
8
 
9
9
  ## Why Gralkor
10
10
 
11
- Here's the honest field report on every OpenClaw memory plugin:
12
-
13
- | Plugin | Storage | Captures thinking | Episode scope | Temporal facts | Local |
14
- |---|---|---|---|---|---|
15
- | **memory-core** *(built-in)* | Markdown files | no | full (LLM-written at compaction) | no | ✓ |
16
- | **lancedb-pro** | LanceDB (flat vector) | no | new messages per run | partial | ✓ |
17
- | **MemOS Local** | SQLite + vector | no | turn delta | recency decay only | ✓ |
18
- | **Cognee** | Cognee graph API | no | Q&A pairs | partial | optional |
19
- | **Supermemory** | Cloud (opaque) | no | last turn only | server-side flag | ✗ |
20
- | **MemOS Cloud** | Cloud (opaque) | no | last turn *(default)* | none | ✗ |
21
- | **Awareness** | Cloud + MD mirror | no | first message + last reply | none | ✗ |
22
- | **Gralkor** | Graphiti knowledge graph | **yes** | full session | `valid_at`/`invalid_at`/`expired_at` | ✓ |
23
-
24
11
  Let's look in detail at the decisions made for Gralkor and why they make it the best memory plugin for OpenClaw.
25
12
 
26
- **Graphs, not Markdown or pure vector.** The AI ecosystem's fixation on Markdown-based memory is baffling. Graphs are the right data structure for representing knowledge. Your code is a graph (syntax trees), your filesystem is a graph, the web is a graph. The world is a deeply interrelated graph, and trying to flatten it into Markdown files or pure vector embeddings is fighting reality.
27
-
28
- Yet: the most popular memory plugin — memory-core, the one that ships inside OpenClaw — writes your agent's memory to `MEMORY.md` and `memory/YYYY-MM-DD.md`. The second most popular, lancedb-pro, stores extracted facts as flat rows in LanceDB. [Graphiti](https://github.com/getzep/graphiti) combines a knowledge graph with vector embeddings — you get structured relationships *and* semantic retrieval. Facts carry temporal validity: when they became true, when they stopped being true, when they were superseded.
29
-
30
- This is not another chunking strategy or embedding experiment. Graphiti has solved this layer of the problem and Gralkor deploys and leverages it optimally for this use case.
13
+ **Graphs, not Markdown or pure vector.** Graphs are the right data structure for representing knowledge. Your code is a graph; the _world_ is a deeply interrelated graph, and trying to flatten it into Markdown files or pure vector embeddings is fighting reality. Gralkor doesn't use MD files (other than indexing yours), and this is not another chunking strategy or embedding experiment. [Graphiti](https://github.com/getzep/graphiti) has already solved this layer, and Gralkor leverages it optimally for this use case.
31
14
 
32
15
  [HippoRAG](https://arxiv.org/abs/2405.14831) (NeurIPS 2024) found graph-based retrieval reaches 89.1% recall@5 on 2WikiMultiHopQA versus 68.2% for flat vector retrieval — a 20.9-point gap. [AriGraph](https://arxiv.org/abs/2407.04363) (IJCAI 2025) independently found KG-augmented agents markedly outperform RAG, summarization, and full-conversation-history baselines across interactive environments.
33
16
 
34
- **Remembering behaviour, not just dialog.** Agents make mistakes options, weight options, reject approaches - they _learn_ as they complete tasks. Gralkor distills the agent's thinking blocks - it's learning - into first-person behavioural summaries and weaves them into the episode transcript before ingestion. The graph doesn't just know what was said; it knows how the agent arrived there.
17
+ **Remembering behaviour, not just dialog.** Agents make mistakes, weigh options, reject approaches - they _learn_ as they complete tasks. Gralkor distills the agent's behaviour - not just its dialog - into first-person behavioural reports woven into episode transcripts before ingestion.
35
18
 
36
- Yet: Every other OpenClaw memory plugin only remembers what was spoken, totally ignoring what your agent thinks and does — lancedb-pro filters for `type === "text"` only, MemOS strips `<think>` tags, Supermemory never looks at them. Even if you have a sophisticated memory system, your agent is inherently dishonest with you, frequently claiming to remember what it has done when it only really remembers what it claimed to have done, or to have thought what it is only now imagining.
19
+ With almost all other memory plugins, your agent is inherently dishonest with you, frequently claiming to remember what it has done when it only really remembers what it _already claimed_ to have done, or to have thought _what it is only now imagining_.
37
20
 
38
- Gralkor actually remembers what your agent thought and did — it is the only OpenClaw memory plugin with this capability.
21
+ With Gralkor, your agent actually remembers its thoughts and actions.
39
22
 
40
23
  [Reflexion](https://arxiv.org/abs/2303.11366) (NeurIPS 2023) showed agents storing self-reflective reasoning traces outperform GPT-4 output-only baselines by 11 points on HumanEval. [ExpeL](https://arxiv.org/abs/2308.10144) (AAAI 2024) directly ablated reasoning-trace storage versus output-only: +11–19 points across benchmarks from storing the reasoning process alone.
41
24
 
42
25
  **Maximum context at ingestion.** Gralkor captures all messages in each session of work, distills behaviour, and feeds results to Graphiti *as whole episodes*. Extraction works _way_ better when Graphiti has full context.
43
26
 
44
- Yet: Most memory plugins save isolated question-answer pairs or summarized snippets: "Awareness" stores the first user message and the last assistant reply a 30-turn debugging session becomes two sentences. Some default to the last turn only. Others capture single turns of dialog.
27
+ Most memory plugins save isolated question-answer pairs or summarized snippets: some store only the first user message and the last assistant reply; others store only the last turn.
45
28
 
46
- Gralkor captures _the whole episode_ — the entire series of questions, thoughts, actions, and responses that _solved the problem_. Richer semantics, better understanding, better recall.
29
+ Gralkor captures the entire series of questions, thoughts, actions, and responses that _solved the problem_ together, with all their interrelationships. Richer semantics, better understanding, better recall.
47
30
 
48
31
  [SeCom](https://arxiv.org/abs/2502.05589) (ICLR 2025) found coherent multi-turn episode storage scores 5.99 GPT4Score points higher than isolated turn-level storage on LOCOMO. [LongMemEval](https://arxiv.org/abs/2410.10813) (ICLR 2025) confirms: fact-level QA-pair extraction drops accuracy from 0.692 to 0.615 versus full-round episode storage.
49
32
 
50
- **Built for the long term.** Graphiti on which Gralkor is based — is _temporally aware_. On every ingestion, it doesn't just append; it resolves new information against the existing graph, amending, expiring, and invalidating so that your agent knows _what happened over time_.
33
+ **Built for the long term.** Graphiti (and therefore Gralkor) is deeply temporal. On every ingestion, it doesn't just append; it resolves new information against the existing graph, amending, expiring, and invalidating so that your agent knows _what happened over time_.
34
+
35
+ Graphiti does the heavy temporal lifting on ingestion. That cost is bad for throughput and wasted on short-lived agents, which means serving a single, long-lived user agent is _the perfect use case_.
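Graphiti tracks four timestamps per fact (`created_at`, `valid_at`, `invalid_at`, `expired_at`) and supports point-in-time queries. As a rough illustration (the values below are hypothetical), a superseded fact might carry:

```json
{
  "fact": "project-x uses mysql",
  "created_at": "2025-01-10T09:00:00Z",
  "valid_at": "2025-01-10T09:00:00Z",
  "invalid_at": "2025-03-02T14:30:00Z",
  "expired_at": "2025-03-02T14:30:00Z"
}
```

When the agent later asks what was true in February, the answer comes from the `valid_at`/`invalid_at` window rather than from the latest state.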
36
+
37
+ [LongMemEval](https://arxiv.org/abs/2410.10813) (ICLR 2025) established that temporal reasoning is the hardest memory sub-task for commercial LLMs; time-aware indexing recovers 7–11% of that loss. [MemoTime](https://arxiv.org/abs/2510.13614) (WWW 2026) found temporal knowledge graphs enable a 4B model to match GPT-4-Turbo on temporal reasoning, with up to 24% improvement over static memory baselines.
51
38
 
52
- One plugin has something in this direction an `invalidated_at` timestamp on vector rows, but there's no graph. Graphiti tracks four timestamps per fact (`created_at`, `valid_at`, `invalid_at`, `expired_at`) and supports point-in-time queries across a traversable structure. This is expensive, bad for throughput, and useless for short-lived agents, so serving a single, long-lived user agent is _the perfect use case_. Graphiti was destined for Gralkor and OpenClaw. [LongMemEval](https://arxiv.org/abs/2410.10813) (ICLR 2025) established that temporal reasoning is the hardest memory sub-task for commercial LLMs; time-aware indexing recovers 7–11% of that loss. [MemoTime](https://arxiv.org/abs/2510.13614) (WWW 2026) found temporal knowledge graphs enable a 4B model to match GPT-4-Turbo on temporal reasoning, with up to 24% improvement over static memory baselines.
39
+ **Recursion through reflection.** Point your agent back at its own memory: let it reflect on what it knows, identify contradictions, synthesize higher-order insights, and do with them whatever you believe to be _good cognitive architecture_. Gralkor doesn't limit you to one approach, but the research is quite clear - you should do _something_.
53
40
 
54
- **Recursion through reflection.** A knowledge graph is a living structure. The most powerful thing you can do with it is point the agent back at its own memory — let it reflect on what it knows, identify contradictions, synthesize higher-order insights, and do with them whatever you believe to be _good cognitive architecture_ :shrug:. Gralkor doesn't prescribe how you do this. Instead, it provides the platform for cognitive architecture experimentation: a structured, temporally-aware graph that the agent can both read from and write to using OpenClaw crons. Share yours, and ask to see mine. This is where it gets interesting. The graph gives you a substrate for experimentation — reflection strategies, knowledge consolidation, cross-session reasoning — that flat retrieval systems simply cannot support. [Reflexion](https://arxiv.org/abs/2303.11366) (NeurIPS 2023) demonstrated that agents storing verbal reflections in an episodic buffer gain 11 points with no weight updates. [Generative Agents](https://arxiv.org/abs/2304.03442) (UIST 2023) showed empirically that a reflection layer synthesizing raw memories into higher-order insights is essential for coherent long-term behavior.
41
+ My way is to use cron and [Thinker CLI](https://github.com/elimydlarz/thinker-cli) together, directing the agent to use the search and memory-add tools in a sequential reflective process. Share yours.
55
42
 
56
- **Custom ontology: model your agent's world _your way_.** Define your own entity types, attributes, and relationships so that information is parsed into the language of your domain — or your life. [Apple's ODKE+](https://arxiv.org/abs/2509.04696) (2025) showed ontology-guided extraction hits 98.8% precision vs 91% raw LLM; [GoLLIE](https://arxiv.org/abs/2310.03668) (ICLR 2024) directly ablated schema-constrained versus unconstrained generation on the same model, finding +13 F1 points average across NER, relation, and event extraction in zero-shot settings. No other OpenClaw memory plugin offers this. If you want extraction to speak your domain's language: lancedb-pro has six hardcoded categories you can filter but not extend; Supermemory lets you write a free-text hint to guide extraction; the rest offer nothing. Custom ontologies give your agent a model of the world: you could use a domain model codified by experts, be the expert, or try to encode _your_ model of the world. Agent memory doesn't have to be so fuzzy that you lose track of what matters.
43
+ [Reflexion](https://arxiv.org/abs/2303.11366) (NeurIPS 2023) demonstrated that agents storing verbal reflections in an episodic buffer gain 11 points with no weight updates. [Generative Agents](https://arxiv.org/abs/2304.03442) (UIST 2023) showed empirically that a reflection layer synthesizing raw memories into higher-order insights is essential for coherent long-term behavior.
57
44
 
58
- **On cost.** Gralkor costs more to run than a Markdown file. It's better context management, not overhead. Instead of paying to pollute your context window with junk every read, you pay more on ingestion in exchange for cheap, high-relevance reads. Extract and structure what matters, then pull only the right stuff at read time. It's also worth it: A single recalled fact "we chose postgres over mysql because of the jsonb column support we need for X" — prevents re-litigating that decision in a new session.
45
+ **Custom ontology: model your agent's world _your way_.** Gralkor lets you define your own entity types, attributes, and relationships, so that information is parsed into the language of your domain. Your graph doesn't have to be a black box - you can keep track of what matters to you.
59
46
 
60
- An agent that remembers your architectural decisions, your preferences, your debugging history, and your reasoning across sessions doesn't just save time; it changes the character of the work. You stop spending turns re-establishing context and start doing the actual work you opened the terminal for. Paying $20 to Google every month to make your agent _meaningfully_ more effective is a no-brainer. The agents that cost you _real_ money are the ones that forget everything and make you start over, or burn tokens overloading context with noise.
47
+ You can use a domain model codified by experts in your field, or encode _your_ model of the world so that your agent shares it.
61
48
 
62
- Gralkor is _good_ memory, not cheap memory. You can push the llm choice and perhaps get better extraction, but otherwise I've just made it as good as possible, other than being reasonable about latency.
49
+ [Apple's ODKE+](https://arxiv.org/abs/2509.04696) (2025) showed ontology-guided extraction hits 98.8% precision vs 91% raw LLM; [GoLLIE](https://arxiv.org/abs/2310.03668) (ICLR 2024) directly ablated schema-constrained versus unconstrained generation on the same model, finding +13 F1 points average across NER, relation, and event extraction in zero-shot settings.
63
50
 
64
- ## What it does
51
+ **Interpretation.** Gralkor interprets information in memory for relevance to the task at hand. This step radically improves output with minimal impact on cost and latency.
65
52
 
66
- Gralkor replaces the native memory plugin entirely, taking the memory slot.
53
+ **On cost.** Gralkor costs more to run than a Markdown file in the short term. In the longer term, Gralkor provides more efficient context management, reducing token burn. Instead of paying to pollute your context window with junk every read, you pay more on ingestion in exchange for cheap, high-relevance reads forever.
54
+
55
+ An agent that remembers behaviour, decisions, your preferences, and reasoning across sessions changes the _character_ of your work. You stop spending turns re-establishing context and focus more on what you care about. A single recalled behavioural fact — "we rejected mysql because it lacked the jsonb column support needed for X" — prevents re-litigating that decision in a new session; it might save ten subagents from repeating a parallel investigation of database options.
56
+
57
+ Gralkor is _good_ memory, not cheap memory. You can push the LLM choice and perhaps get better extraction, but otherwise I've made it as good as possible while staying reasonable about latency.
58
+
59
+ ## Tools
67
60
 
68
61
  - **`memory_search`** — searches the knowledge graph and returns relevant facts and entity summaries
69
62
  - **`memory_add`** — stores information in the knowledge graph; Graphiti extracts entities and relationships
@@ -122,12 +115,22 @@ openclaw plugins install ./susu-eng-gralkor-memory-26.0.14.tgz --dangerously-for
122
115
 
123
116
  ### 4. Enable and assign the memory slot
124
117
 
118
+ OpenClaw has a single `memory` slot that determines which plugin provides memory to your agents. You must explicitly assign Gralkor to the `memory` slot, otherwise installing the plugin does nothing — auto-capture and auto-recall hooks will never fire.
119
+
125
120
  ```bash
126
- # Allowlist (if you use one)
121
+ # If you use an allowlist, add gralkor to it
127
122
  openclaw config set --json plugins.allow '["gralkor"]'
128
123
 
129
- # Assign the memory slot — replaces the built-in memory-core
124
+ # Enable the plugin entry
125
+ openclaw config set plugins.entries.gralkor.enabled true
126
+
127
+ # Assign Gralkor to the memory slot (replaces the built-in memory-core)
130
128
  openclaw config set plugins.slots.memory gralkor
129
+
130
+ # Expose Gralkor's tools to the agent. Auto-capture and auto-recall work without
131
+ # this, but the agent won't see memory_add / memory_build_indices / memory_build_communities
132
+ # unless you add them to the active tool profile's allowlist.
133
+ openclaw config set --json tools.alsoAllow '["gralkor"]'
131
134
  ```
132
135
 
133
136
  ### 5. Restart and go
@@ -155,20 +158,6 @@ Start chatting with your agent. Gralkor works in the background:
155
158
  openclaw plugins update gralkor --dangerously-force-unsafe-install
156
159
  ```
157
160
 
158
- ### Reinstalling
159
-
160
- The plugin dir (`~/.openclaw/extensions/gralkor`) is ephemeral — it can be deleted and reinstalled freely. The `dataDir` is persistent — the venv and FalkorDB database survive across reinstalls.
161
-
162
- ```bash
163
- openclaw plugins uninstall gralkor
164
- openclaw plugins install @susu-eng/gralkor --dangerously-force-unsafe-install
165
- openclaw config set plugins.slots.memory gralkor
166
- ```
167
-
168
- `uninstall` removes the plugin files and resets the memory slot automatically.
169
-
170
- The second boot is fast (~4s) because the venv in `dataDir` is reused.
171
-
172
161
  ### LLM providers
173
162
 
174
163
  Graphiti needs an LLM to extract entities and relationships from conversations.
@@ -180,24 +169,12 @@ Graphiti needs an LLM to extract entities and relationships from conversations.
180
169
  | **Anthropic** | `anthropicApiKey` | LLM only — still needs `openaiApiKey` for embeddings |
181
170
  | **Groq** | `groqApiKey` | LLM only — still needs `openaiApiKey` for embeddings |
182
171
 
183
- To switch away from Gemini, set `llm` and `embedder` in the plugin config. For example, with OpenAI:
172
+ To switch away from Gemini, set `llm` and `embedder`. For example, with OpenAI:
184
173
 
185
- ```json
186
- {
187
- "plugins": {
188
- "entries": {
189
- "gralkor": {
190
- "enabled": true,
191
- "config": {
192
- "dataDir": "/path/to/gralkor-data",
193
- "openaiApiKey": { "$ref": "env:OPENAI_API_KEY" },
194
- "llm": { "provider": "openai", "model": "gpt-4.1-mini" },
195
- "embedder": { "provider": "openai", "model": "text-embedding-3-small" }
196
- }
197
- }
198
- }
199
- }
200
- }
174
+ ```bash
175
+ openclaw config set plugins.entries.gralkor.config.openaiApiKey "$OPENAI_API_KEY"
176
+ openclaw config set --json plugins.entries.gralkor.config.llm '{"provider":"openai","model":"gpt-4.1-mini"}'
177
+ openclaw config set --json plugins.entries.gralkor.config.embedder '{"provider":"openai","model":"text-embedding-3-small"}'
201
178
  ```
202
179
 
203
180
  ## CLI
@@ -207,26 +184,15 @@ openclaw gralkor status # Server state, config, graph stats, data d
207
184
  openclaw gralkor search <group_id> <query> # Search the knowledge graph
208
185
  ```
209
186
 
187
+ See [Graph partitioning](#graph-partitioning) for what `<group_id>` should be.
188
+
210
189
  ## Configuration
211
190
 
212
- Configure in your OpenClaw plugin settings (`~/.openclaw/openclaw.json`):
191
+ Configure with `openclaw config set`. For example:
213
192
 
214
- ```json
215
- {
216
- "plugins": {
217
- "entries": {
218
- "gralkor": {
219
- "enabled": true,
220
- "config": {
221
- "autoCapture": { "enabled": true },
222
- "autoRecall": { "enabled": true, "maxResults": 10 },
223
- "idleTimeoutMs": 300000,
224
- "dataDir": "/path/to/data"
225
- }
226
- }
227
- }
228
- }
229
- }
193
+ ```bash
194
+ openclaw config set --json plugins.entries.gralkor.config.autoRecall.maxResults 20
195
+ openclaw config set --json plugins.entries.gralkor.config.idleTimeoutMs 600000
230
196
  ```
231
197
 
232
198
  | Setting | Default | Description |
@@ -242,38 +208,33 @@ Configure in your OpenClaw plugin settings (`~/.openclaw/openclaw.json`):
242
208
 
243
209
  ### Complete config reference
244
210
 
211
+ The full plugin config shape (as it appears under `plugins.entries.gralkor.config` in `~/.openclaw/openclaw.json`):
212
+
245
213
  ```json
246
214
  {
247
- "plugins": {
248
- "entries": {
249
- "gralkor": {
250
- "enabled": true,
251
- "config": {
252
- "dataDir": "/path/to/gralkor-data",
253
- "workspaceDir": "~/.openclaw/workspace",
254
- "googleApiKey": "your-gemini-key",
255
- "llm": { "provider": "gemini", "model": "gemini-3.1-flash-lite-preview" },
256
- "embedder": { "provider": "gemini", "model": "gemini-embedding-2-preview" },
257
- "autoCapture": { "enabled": true },
258
- "autoRecall": { "enabled": true, "maxResults": 10 },
259
- "search": { "maxResults": 20, "maxEntityResults": 10 },
260
- "idleTimeoutMs": 300000,
261
- "ontology": {
262
- "entities": {},
263
- "edges": {},
264
- "edgeMap": {}
265
- },
266
- "test": false
267
- }
268
- }
269
- }
270
- }
215
+ "dataDir": "/path/to/gralkor-data",
216
+ "workspaceDir": "~/.openclaw/workspace",
217
+ "googleApiKey": "your-gemini-key",
218
+ "llm": { "provider": "gemini", "model": "gemini-3.1-flash-lite-preview" },
219
+ "embedder": { "provider": "gemini", "model": "gemini-embedding-2-preview" },
220
+ "autoCapture": { "enabled": true },
221
+ "autoRecall": { "enabled": true, "maxResults": 10 },
222
+ "search": { "maxResults": 20, "maxEntityResults": 10 },
223
+ "idleTimeoutMs": 300000,
224
+ "ontology": {
225
+ "entities": {},
226
+ "edges": {},
227
+ "edgeMap": {}
228
+ },
229
+ "test": false
271
230
  }
272
231
  ```
273
232
 
274
233
  ### Graph partitioning
275
234
 
276
- Each agent gets its own graph partition automatically (based on `agentId`). No configuration needed — different agents won't see each other's knowledge.
235
+ Each agent gets its own graph partition automatically — different agents won't see each other's knowledge, and no configuration is needed.
236
+
237
+ The partition key (`group_id`) is the agent's ID with hyphens replaced by underscores (FalkorDB's RediSearch syntax doesn't accept hyphens). So an agent named `my-coding-agent` stores its memory under group `my_coding_agent`. Agents running without an explicit ID use the partition `default`. You'll need this `group_id` whenever you query the graph directly — e.g. `openclaw gralkor search <group_id> <query>`.
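For example, deriving the partition key in bash (the agent name below is hypothetical):

```shell
# group_id = the agent ID with hyphens replaced by underscores
agent_id="my-coding-agent"
group_id="${agent_id//-/_}"   # bash parameter substitution
echo "$group_id"              # prints my_coding_agent
```

You can then query that agent's partition directly, e.g. `openclaw gralkor search "$group_id" "<query>"`.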
277
238
 
278
239
  ## Custom entity and relationship types
279
240
 
@@ -281,81 +242,70 @@ By default, Graphiti extracts generic entities and connects them with generic `R
281
242
 
282
243
  If you want more structured extraction, you can define custom entity and relationship types. Graphiti will classify entities into your types, extract structured attributes, and create typed relationships between them.
283
244
 
284
- ### Entities only (start here)
245
+ ### Entities
285
246
 
286
- The simplest useful ontology defines just entity types. Relationships will still be created, using Graphiti's default `RELATES_TO` type.
247
+ The simplest useful ontology defines just entity types. Relationships will still be created, using Graphiti's default `RELATES_TO` type. Set the whole ontology in one go:
287
248
 
288
- ```json
289
- {
290
- "plugins": {
291
- "entries": {
292
- "gralkor": {
293
- "enabled": true,
294
- "config": {
295
- "ontology": {
296
- "entities": {
297
- "Project": {
298
- "description": "A software project or initiative being actively developed. Look for mentions of repositories, codebases, applications, services, or named systems that are built and maintained by a team.",
299
- "attributes": {
300
- "status": ["active", "completed", "paused"],
301
- "language": "Primary programming language used in the project"
302
- }
303
- },
304
- "Technology": {
305
- "description": "A programming language, framework, library, database, or infrastructure tool. Identify by mentions of specific named technologies used in or considered for projects.",
306
- "attributes": {
307
- "category": ["language", "framework", "database", "infrastructure", "tool"]
308
- }
309
- }
310
- }
311
- }
312
- }
249
+ ```bash
250
+ openclaw config set --json plugins.entries.gralkor.config.ontology '{
251
+ "entities": {
252
+ "Project": {
253
+ "description": "A software project or initiative being actively developed. Look for mentions of repositories, codebases, applications, services, or named systems that are built and maintained by a team.",
254
+ "attributes": {
255
+ "status": ["active", "completed", "paused"],
256
+ "language": "Primary programming language used in the project"
257
+ }
258
+ },
259
+ "Technology": {
260
+ "description": "A programming language, framework, library, database, or infrastructure tool. Identify by mentions of specific named technologies used in or considered for projects.",
261
+ "attributes": {
262
+ "category": ["language", "framework", "database", "infrastructure", "tool"]
313
263
  }
314
264
  }
315
265
  }
316
- }
266
+ }'
317
267
  ```
318
268
 
319
- ### Adding relationships
269
+ ### Relationships
320
270
 
321
271
  To control how entities are connected, add `edges` (relationship types) and `edgeMap` (which entity pairs they apply to):
322
272
 
323
273
  ```json
324
274
  {
325
- "ontology": {
326
- "entities": {
327
- "Project": {
328
- "description": "A software project or initiative being actively developed. Look for mentions of repositories, codebases, applications, services, or named systems that are built and maintained by a team.",
329
- "attributes": {
330
- "status": ["active", "completed", "paused"],
331
- "language": "Primary programming language used in the project"
332
- }
333
- },
334
- "Technology": {
335
- "description": "A programming language, framework, library, database, or infrastructure tool. Identify by mentions of specific named technologies used in or considered for projects.",
336
- "attributes": {
337
- "category": ["language", "framework", "database", "infrastructure", "tool"]
338
- }
275
+ "entities": {
276
+ "Project": {
277
+ "description": "A software project or initiative being actively developed. Look for mentions of repositories, codebases, applications, services, or named systems that are built and maintained by a team.",
278
+ "attributes": {
279
+ "status": ["active", "completed", "paused"],
280
+ "language": "Primary programming language used in the project"
339
281
  }
340
282
  },
341
- "edges": {
342
- "Uses": {
343
- "description": "A project actively using a technology in its stack. Look for statements about tech choices, dependencies, or implementation details that indicate a project relies on a specific technology.",
344
- "attributes": {
345
- "version": "Version of the technology in use, if mentioned"
346
- }
283
+ "Technology": {
284
+ "description": "A programming language, framework, library, database, or infrastructure tool. Identify by mentions of specific named technologies used in or considered for projects.",
285
+ "attributes": {
286
+ "category": ["language", "framework", "database", "infrastructure", "tool"]
347
287
  }
348
- },
349
- "edgeMap": {
350
- "Project,Technology": ["Uses"]
351
288
  }
289
+ },
290
+ "edges": {
291
+ "Uses": {
292
+ "description": "A project actively using a technology in its stack. Look for statements about tech choices, dependencies, or implementation details that indicate a project relies on a specific technology.",
293
+ "attributes": {
294
+ "version": "Version of the technology in use, if mentioned"
295
+ }
296
+ }
297
+ },
298
+ "edgeMap": {
299
+ "Project,Technology": ["Uses"]
352
300
  }
353
301
  }
354
302
  ```
355
303
 
304
+ Apply with `openclaw config set --json plugins.entries.gralkor.config.ontology '<above>'`.
305
+
356
306
  Without `edgeMap`, all edge types can connect any entity pair. With `edgeMap`, relationships are constrained to specific pairs — entity pairs not listed fall back to `RELATES_TO`.
357
307
 
358
- ### Attribute format
308
+ ### Attributes
359
309
 
360
310
  Attributes control what Graphiti extracts for each entity or relationship. They are **required fields** — if the LLM can't populate them from the text, it won't extract that entity type at all. This makes attributes the primary mechanism for gating extraction quality.
361
311
 
@@ -368,7 +318,7 @@ Attributes control what Graphiti extracts for each entity or relationship. They
368
318
 
369
319
  Supported types for the object form: `string`, `int`, `float`, `bool`, `datetime`.
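Combining the forms, a hypothetical `attributes` block might mix all three shapes (the object-form keys shown here, `type` and `description`, are assumptions; check against your version):

```json
{
  "attributes": {
    "language": "Primary programming language used in the project",
    "status": ["active", "completed", "paused"],
    "stars": { "type": "int", "description": "GitHub star count, if mentioned" }
  }
}
```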
370
320
 
371
- ### Writing good descriptions
321
+ ### Descriptions
372
322
 
373
323
  Descriptions are the most important part of your ontology — they tell the LLM what to look for. Write them like extraction instructions, not dictionary definitions.
374
324
 
@@ -404,21 +354,21 @@ openclaw config set plugins.entries.gralkor.config.dataDir /data/gralkor
404
354
 
405
355
  ```
406
356
  User sends message
407
-
408
-
409
- ┌─────────────┐ search ┌──────────┐ query ┌──────────┐
410
- auto-recall ──────────▶ │ Graphiti ──────────▶FalkorDB
411
- │ hook │ ◀────────── API ◀──────────
412
- └─────────────┘ facts └──────────┘ subgraph └──────────┘
413
-
414
-
357
+
358
+
359
+ ┌──────────────┐  search   ┌──────────────┐  query    ┌──────────────┐
+ │ auto-recall  │ ────────▶ │   Graphiti   │ ────────▶ │   FalkorDB   │
+ │     hook     │ ◀──────── │     API      │ ◀──────── │              │
+ └──────────────┘   facts   └──────────────┘  subgraph └──────────────┘
363
+
364
+
415
365
  Agent runs (with recalled facts as context)
416
-
417
-
418
- ┌──────────────┐ ingest ┌──────────┐ extract ┌──────────┐
419
- │ auto-capture ──────────▶ Graphiti ──────────▶FalkorDB
420
- │ hook │ │ API entities │
421
- └──────────────┘ └──────────┘ & facts └──────────┘
366
+
367
+
368
+ ┌──────────────┐  ingest   ┌──────────────┐  extract  ┌──────────────┐
+ │ auto-capture │ ────────▶ │   Graphiti   │ ────────▶ │   FalkorDB   │
+ │     hook     │           │     API      │ entities  │              │
+ └──────────────┘           └──────────────┘  & facts  └──────────────┘
422
372
  ```
423
373
 
424
374
  Graphiti handles the heavy lifting: entity extraction, relationship mapping, temporal tracking, and embedding-based search. Gralkor wires it into the OpenClaw plugin lifecycle. The Graphiti server and embedded FalkorDB run as a managed subprocess — started and stopped automatically by the plugin.
@@ -426,7 +376,18 @@ Graphiti handles the heavy lifting: entity extraction, relationship mapping, tem
426
376
  ## Troubleshooting
427
377
 
428
378
  **`openclaw gralkor status` says "Server process: stopped"**
429
- Python 3.12+ is not found on the system PATH. Install Python 3.12+ and restart OpenClaw.
379
+ Many things can cause this. Use the available diagnostics to narrow it down:
380
+
381
+ - **`openclaw gralkor status`** — shows process state, config summary, `dataDir`, venv state, and (if unreachable) the connection error
382
+ - **Gateway logs** — grep for `[gralkor] boot:` markers. You should see `boot: plugin loaded`, `boot: starting`, then `boot: ready`. A `boot: ... failed:` line tells you which stage broke
383
+ - **`openclaw gralkor search <group_id> <query>`** — quick end-to-end check that the server is reachable and the graph has data (group ID is required; it's the agent ID with hyphens replaced by underscores)
384
+
385
+ Common causes:
386
+ - `uv` not on PATH (Python itself is managed by `uv` — it fetches 3.12+ on demand and produces its own errors)
387
+ - `uv sync` failed (network/registry issue, or on first boot ~1–2 min is normal — wait it out)
388
+ - Missing or invalid LLM API key — the server starts but every operation fails
389
+ - Stale `server.pid` in `dataDir` holding port 8001 (the manager tries to clean this up, but a SIGKILL'd predecessor can leave the port wedged)
390
+ - On `linux/arm64`: bundled falkordblite wheel couldn't be resolved (not in `server/wheels/`, not cached in `dataDir/wheels/`, and the GitHub Release download failed)
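The stale-`server.pid` cause can be checked by hand. A minimal sketch, assuming the `server.pid`-in-`dataDir` layout described above (the real manager's cleanup logic is more careful than this):

```python
import os
import socket
from pathlib import Path


def diagnose_stale_server(data_dir: str, port: int = 8001) -> str:
    """Classify the state of the port and the recorded PID (sketch)."""
    pid_file = Path(data_dir) / "server.pid"
    # Is anything accepting connections on the port?
    with socket.socket() as s:
        port_free = s.connect_ex(("127.0.0.1", port)) != 0
    if not pid_file.exists():
        return "free" if port_free else "port held, no pid file"
    pid = int(pid_file.read_text().strip())
    try:
        os.kill(pid, 0)  # signal 0 sends nothing; it only checks existence
        alive = True
    except OSError:
        alive = False
    if not alive and not port_free:
        return "stale pid file; port wedged by another process"
    if not alive:
        return "stale pid file; safe to delete"
    return "server running"
```

If the verdict is "stale pid file; safe to delete", removing the pid file and restarting the gateway should unwedge the boot sequence.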
430
391
 
431
392
  **First startup takes a long time**
432
393
  Normal — Gralkor is creating a Python virtual environment and installing dependencies via `uv`. This takes ~1–2 minutes. Subsequent starts reuse the venv and skip the install step.
@@ -436,7 +397,7 @@ Most likely: missing or invalid LLM API key. Check your provider API key configu
436
397
 
437
398
  **No memories being recalled**
438
399
  - Check that `autoRecall.enabled` is `true` (it is by default)
439
- - Verify the graph has data: run `openclaw gralkor search <term>`
400
+ - Verify the graph has data: run `openclaw gralkor search <group_id> <term>` (group ID = agent ID with hyphens replaced by underscores)
440
401
  - Auto-recall extracts keywords from the user's message — very short messages may not match
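Keyword extraction is why very short messages can miss: once short words and stop words are dropped, a message like "do it" leaves nothing to search on. A naive sketch of the idea, not the plugin's actual extractor (the stop-word list here is invented):

```python
# Hypothetical stop-word list; the plugin's real list is not shown here.
STOP_WORDS = {"a", "an", "and", "can", "do", "it", "is", "of", "the", "to", "you"}


def extract_keywords(message: str, min_len: int = 3) -> list[str]:
    """Keep words long enough and specific enough to be worth searching on."""
    words = [w.strip(".,!?").lower() for w in message.split()]
    return [w for w in words if len(w) >= min_len and w not in STOP_WORDS]


print(extract_keywords("Deploy the staging cluster"))  # → ['deploy', 'staging', 'cluster']
print(extract_keywords("do it"))                       # → [] (nothing to recall on)
```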
441
402
 
442
403
  **Agent doesn't store conversations**
@@ -445,31 +406,3 @@ Most likely: missing or invalid LLM API key. Check your provider API key configu
445
406
  - Conversations where the first user message starts with `/` are skipped by design
446
407
  - Empty conversations (no extractable text) are skipped
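The two skip rules above can be expressed as a small predicate. A sketch assuming the conversation is a list of `{role, content}` message dicts; the plugin's real capture logic is not shown here:

```python
def should_capture(messages: list[dict]) -> bool:
    """Decide whether a conversation is worth storing (sketch)."""
    user_msgs = [m for m in messages if m.get("role") == "user"]
    # Rule 1: first user message starting with "/" is a command, skipped by design.
    if user_msgs and user_msgs[0].get("content", "").lstrip().startswith("/"):
        return False
    # Rule 2: conversations with no extractable text are skipped.
    return any(m.get("content", "").strip() for m in messages)
```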
447
408
 
448
- **Agent doesn't have plugin tools (`memory_add`, `memory_build_indices`, etc.)**
449
- OpenClaw's tool profiles (`coding`, `minimal`, etc.) only allowlist core tools by default. Plugin tools are filtered out when a profile is active. To enable them, add them to `alsoAllow` in your `openclaw.json`:
450
-
451
- ```json
452
- {
453
- "tools": {
454
- "alsoAllow": ["memory_add", "memory_build_indices", "memory_build_communities"]
455
- }
456
- }
457
- ```
458
-
459
- You can also allow all Gralkor tools with `"alsoAllow": ["gralkor"]` or all plugin tools with `"alsoAllow": ["group:plugins"]`. Note that `memory_add` is not required for Gralkor to work — auto-capture already stores everything your agent hears, says, thinks, and does. `memory_add` is only needed if you want the agent to selectively store specific insights or conclusions on its own.
460
-
461
- ## Legacy Docker mode
462
-
463
- If you prefer to run FalkorDB as a separate Docker container (e.g. for production deployments with specific resource constraints), you can set `FALKORDB_URI` to bypass the embedded mode:
464
-
465
- ```bash
466
- cd ~/.openclaw/plugins/gralkor
467
- docker build -t gralkor-server:latest server/
468
- FALKORDB_URI=redis://falkordb:6379 docker compose up -d
469
- ```
470
-
471
- This starts FalkorDB on port 6379 and the Graphiti API on port 8001. If your OpenClaw gateway runs in Docker, connect it to the `gralkor` network:
472
-
473
- ```bash
474
- docker network connect gralkor <your-openclaw-container-name>
475
- ```
@@ -1 +1 @@
1
- {"version":3,"file":"server-env.d.ts","sourceRoot":"","sources":["../src/server-env.ts"],"names":[],"mappings":"AAQA,KAAK,MAAM,GAAG,MAAM,CAAC,MAAM,EAAE,MAAM,CAAC,CAAC;AAMrC,wBAAgB,YAAY,CAAC,OAAO,EAAE,MAAM,GAAG,MAAM,CAEpD;AAED,wBAAgB,WAAW,CAAC,OAAO,EAAE,MAAM,GAAG,MAAM,CAEnD;AAED,wBAAgB,aAAa,CAAC,IAAI,EAAE;IAClC,KAAK,CAAC,EAAE,MAAM,CAAC,MAAM,EAAE,MAAM,CAAC,CAAC;IAC/B,SAAS,CAAC,EAAE,MAAM,CAAC,MAAM,EAAE,MAAM,CAAC,CAAC;IACnC,eAAe,EAAE,MAAM,CAAC;IACxB,UAAU,EAAE,MAAM,CAAC;CACpB,GAAG,MAAM,CAWT"}
1
+ {"version":3,"file":"server-env.d.ts","sourceRoot":"","sources":["../src/server-env.ts"],"names":[],"mappings":"AAQA,KAAK,MAAM,GAAG,MAAM,CAAC,MAAM,EAAE,MAAM,CAAC,CAAC;AAMrC,wBAAgB,YAAY,CAAC,OAAO,EAAE,MAAM,GAAG,MAAM,CAEpD;AAED,wBAAgB,WAAW,CAAC,OAAO,EAAE,MAAM,GAAG,MAAM,CAEnD;AAED,wBAAgB,aAAa,CAAC,IAAI,EAAE;IAClC,KAAK,CAAC,EAAE,MAAM,CAAC,MAAM,EAAE,MAAM,CAAC,CAAC;IAC/B,SAAS,CAAC,EAAE,MAAM,CAAC,MAAM,EAAE,MAAM,CAAC,CAAC;IACnC,eAAe,EAAE,MAAM,CAAC;IACxB,UAAU,EAAE,MAAM,CAAC;CACpB,GAAG,MAAM,CAQT"}
@@ -15,15 +15,12 @@ export function buildPipEnv(venvDir) {
15
15
  return { ...baseEnv(), VIRTUAL_ENV: venvDir };
16
16
  }
17
17
  export function buildSpawnEnv(opts) {
18
- const env = {
18
+ return {
19
19
  ...baseEnv(),
20
20
  ...opts.extra,
21
21
  ...opts.secretEnv,
22
22
  FALKORDB_DATA_DIR: opts.falkordbDataDir,
23
23
  CONFIG_PATH: opts.configPath,
24
24
  };
25
- // Absence of FALKORDB_URI triggers embedded FalkorDBLite mode.
26
- delete env.FALKORDB_URI;
27
- return env;
28
25
  }
29
26
  //# sourceMappingURL=server-env.js.map
@@ -1 +1 @@
1
- {"version":3,"file":"server-env.js","sourceRoot":"","sources":["../src/server-env.ts"],"names":[],"mappings":"AAAA,8DAA8D;AAC9D,EAAE;AACF,8EAA8E;AAC9E,6EAA6E;AAC7E,8EAA8E;AAC9E,yEAAyE;AACzE,yDAAyD;AAIzD,SAAS,OAAO;IACd,OAAO,EAAE,GAAI,OAAO,CAAC,GAAc,EAAE,CAAC;AACxC,CAAC;AAED,MAAM,UAAU,YAAY,CAAC,OAAe;IAC1C,OAAO,EAAE,GAAG,OAAO,EAAE,EAAE,sBAAsB,EAAE,OAAO,EAAE,CAAC;AAC3D,CAAC;AAED,MAAM,UAAU,WAAW,CAAC,OAAe;IACzC,OAAO,EAAE,GAAG,OAAO,EAAE,EAAE,WAAW,EAAE,OAAO,EAAE,CAAC;AAChD,CAAC;AAED,MAAM,UAAU,aAAa,CAAC,IAK7B;IACC,MAAM,GAAG,GAAW;QAClB,GAAG,OAAO,EAAE;QACZ,GAAG,IAAI,CAAC,KAAK;QACb,GAAG,IAAI,CAAC,SAAS;QACjB,iBAAiB,EAAE,IAAI,CAAC,eAAe;QACvC,WAAW,EAAE,IAAI,CAAC,UAAU;KAC7B,CAAC;IACF,+DAA+D;IAC/D,OAAO,GAAG,CAAC,YAAY,CAAC;IACxB,OAAO,GAAG,CAAC;AACb,CAAC"}
1
+ {"version":3,"file":"server-env.js","sourceRoot":"","sources":["../src/server-env.ts"],"names":[],"mappings":"AAAA,8DAA8D;AAC9D,EAAE;AACF,8EAA8E;AAC9E,6EAA6E;AAC7E,8EAA8E;AAC9E,yEAAyE;AACzE,yDAAyD;AAIzD,SAAS,OAAO;IACd,OAAO,EAAE,GAAI,OAAO,CAAC,GAAc,EAAE,CAAC;AACxC,CAAC;AAED,MAAM,UAAU,YAAY,CAAC,OAAe;IAC1C,OAAO,EAAE,GAAG,OAAO,EAAE,EAAE,sBAAsB,EAAE,OAAO,EAAE,CAAC;AAC3D,CAAC;AAED,MAAM,UAAU,WAAW,CAAC,OAAe;IACzC,OAAO,EAAE,GAAG,OAAO,EAAE,EAAE,WAAW,EAAE,OAAO,EAAE,CAAC;AAChD,CAAC;AAED,MAAM,UAAU,aAAa,CAAC,IAK7B;IACC,OAAO;QACL,GAAG,OAAO,EAAE;QACZ,GAAG,IAAI,CAAC,KAAK;QACb,GAAG,IAAI,CAAC,SAAS;QACjB,iBAAiB,EAAE,IAAI,CAAC,eAAe;QACvC,WAAW,EAAE,IAAI,CAAC,UAAU;KAC7B,CAAC;AACJ,CAAC"}
@@ -153,5 +153,5 @@
153
153
  "label": "Groq API key"
154
154
  }
155
155
  },
156
- "version": "27.2.8"
156
+ "version": "27.2.10"
157
157
  }
package/package.json CHANGED
@@ -1,7 +1,7 @@
1
1
  {
2
2
  "name": "@susu-eng/gralkor",
3
3
  "displayName": "Gralkor",
4
- "version": "27.2.8",
4
+ "version": "27.2.10",
5
5
  "description": "OpenClaw memory plugin powered by Graphiti knowledge graphs and FalkorDB",
6
6
  "type": "module",
7
7
  "main": "./dist/index.js",
package/server/main.py CHANGED
@@ -204,32 +204,20 @@ async def lifespan(_app: FastAPI):
204
204
  global graphiti, ontology_entity_types, ontology_edge_types, ontology_edge_type_map
205
205
  cfg = _load_config()
206
206
 
207
- falkordb_uri = os.getenv("FALKORDB_URI")
208
-
209
- if falkordb_uri:
210
- # Legacy Docker mode: external FalkorDB via TCP
211
- stripped = falkordb_uri.split("://", 1)[-1]
212
- if ":" in stripped:
213
- host, port_str = stripped.rsplit(":", 1)
214
- port = int(port_str)
215
- else:
216
- host, port = stripped, 6379
217
- driver = FalkorDriver(host=host, port=port)
218
- else:
219
- # Default: embedded FalkorDBLite (no Docker needed)
220
- logging.getLogger("redislite").setLevel(logging.DEBUG)
207
+ # Embedded FalkorDBLite (no Docker needed)
208
+ logging.getLogger("redislite").setLevel(logging.DEBUG)
221
209
 
222
- from redislite.async_falkordb_client import AsyncFalkorDB
210
+ from redislite.async_falkordb_client import AsyncFalkorDB
223
211
 
224
- data_dir = os.getenv("FALKORDB_DATA_DIR", "./data/falkordb")
225
- os.makedirs(data_dir, exist_ok=True)
226
- db_path = os.path.join(data_dir, "gralkor.db")
227
- try:
228
- db = AsyncFalkorDB(db_path)
229
- except Exception as e:
230
- _log_falkordblite_diagnostics(e)
231
- raise
232
- driver = FalkorDriver(falkor_db=db)
212
+ data_dir = os.getenv("FALKORDB_DATA_DIR", "./data/falkordb")
213
+ os.makedirs(data_dir, exist_ok=True)
214
+ db_path = os.path.join(data_dir, "gralkor.db")
215
+ try:
216
+ db = AsyncFalkorDB(db_path)
217
+ except Exception as e:
218
+ _log_falkordblite_diagnostics(e)
219
+ raise
220
+ driver = FalkorDriver(falkor_db=db)
233
221
 
234
222
  graphiti = Graphiti(
235
223
  graph_driver=driver,
@@ -1,34 +0,0 @@
1
- services:
2
- falkordb:
3
- image: falkordb/falkordb:latest
4
- restart: unless-stopped
5
- ports:
6
- - "6379:6379"
7
- - "3000:3000"
8
- volumes:
9
- - ${FALKORDB_DATA_DIR:-falkordb_data}:/var/lib/falkordb/data
10
- networks:
11
- - gralkor
12
-
13
- graphiti:
14
- image: gralkor-server:latest
15
- restart: unless-stopped
16
- ports:
17
- - "8001:8001"
18
- env_file:
19
- - .env
20
- environment:
21
- FALKORDB_URI: redis://falkordb:6379
22
- volumes:
23
- - ./config.yaml:/app/config.yaml:ro
24
- depends_on:
25
- - falkordb
26
- networks:
27
- - gralkor
28
-
29
- networks:
30
- gralkor:
31
- name: gralkor
32
-
33
- volumes:
34
- falkordb_data:
package/server/Dockerfile DELETED
@@ -1,7 +0,0 @@
1
- FROM python:3.12-slim
2
- WORKDIR /app
3
- COPY requirements.txt .
4
- RUN pip install --no-cache-dir -r requirements.txt
5
- COPY main.py .
6
- EXPOSE 8001
7
- CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8001"]