@susu-eng/gralkor 27.2.1 → 27.2.4
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md
CHANGED

@@ -1,6 +1,6 @@
 # Gralkor

-**
+**The best memory plugin for OpenClaw agents**

 Gralkor is an OpenClaw plugin that gives your agents long-term, temporally-aware memory. It uses [Graphiti](https://github.com/getzep/graphiti) (by Zep) for knowledge graph construction and [FalkorDB](https://www.falkordb.com/) as the graph database backend. Both run automatically as a managed subprocess - no independent server for you to manage, or SaaS company to connect to.

@@ -21,7 +21,9 @@ Here's the honest field report on every OpenClaw memory plugin:
 | **Awareness** | Cloud + MD mirror | no | first message + last reply | none | ✗ |
 | **Gralkor** | Graphiti knowledge graph | **yes** | full session | `valid_at`/`invalid_at`/`expired_at` | ✓ |

-
+Let's look in detail at the decisions made for Gralkor and why they make it the best memory plugin for OpenClaw.
+
+**Graphs, not Markdown or pure vector.** The AI ecosystem's fixation on Markdown-based memory is baffling. Graphs are the right data structure for representing knowledge. Your code is a graph (syntax trees), your filesystem is a graph, the web is a graph. The world is a deeply interrelated graph, and trying to flatten it into Markdown files or pure vector embeddings is fighting reality. Yet: the most popular memory plugin — memory-core, the one that ships inside OpenClaw — writes your agent's memory to `MEMORY.md` and `memory/YYYY-MM-DD.md`. The second most popular, lancedb-pro, stores extracted facts as flat rows in LanceDB. [Graphiti](https://github.com/getzep/graphiti) combines a knowledge graph with vector embeddings — you get structured relationships *and* semantic retrieval. Facts carry temporal validity: when they became true, when they stopped being true, when they were superseded. This is not another chunking strategy or embedding experiment. Graphiti has solved this layer of the problem and Gralkor deploys and leverages it optimally for this use case. [HippoRAG](https://arxiv.org/abs/2405.14831) (NeurIPS 2024) found graph-based retrieval reaches 89.1% recall@5 on 2WikiMultiHopQA versus 68.2% for flat vector retrieval — a 20.9-point gap. [AriGraph](https://arxiv.org/abs/2407.04363) (IJCAI 2025) independently found KG-augmented agents markedly outperform RAG, summarization, and full-conversation-history baselines across interactive environments.

 **Remembering behaviour, not just dialog.** Agents make mistakes, weigh options, reject approaches - they _learn_ as they complete tasks. Gralkor distills the agent's thinking blocks - its learning - into first-person behavioural summaries and weaves them into the episode transcript before ingestion. The graph doesn't just know what was said; it knows how the agent arrived there. Yet: every other OpenClaw memory plugin only remembers what was spoken, totally ignoring what your agent thinks and does — lancedb-pro filters for `type === "text"` only, MemOS strips `<think>` tags, Supermemory never looks at them. Even if you have a sophisticated memory system, your agent is inherently dishonest with you, frequently claiming to remember what it has done when it only really remembers what it claimed to have done, or to have thought what it is only now imagining. Gralkor actually remembers what your agent thought and did — it is the only OpenClaw memory plugin with this capability. [Reflexion](https://arxiv.org/abs/2303.11366) (NeurIPS 2023) showed agents storing self-reflective reasoning traces outperform GPT-4 output-only baselines by 11 points on HumanEval. [ExpeL](https://arxiv.org/abs/2308.10144) (AAAI 2024) directly ablated reasoning-trace storage versus output-only: +11–19 points across benchmarks from storing the reasoning process alone.

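To make the contrast drawn in the new "Graphs, not Markdown or pure vector" paragraph concrete, here is a minimal sketch of the two storage shapes. The `valid_at`/`invalid_at`/`expired_at` names come from the comparison table above; everything else (the `FactEdge` class, the example fact) is illustrative and not Gralkor's or Graphiti's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime

# Flat extracted-fact row (the lancedb-pro style the README argues against):
# text plus an embedding, no relationships, no validity interval.
flat_row = {"fact": "User prefers pnpm over npm", "embedding": [0.12, -0.08]}  # vector truncated

# Graph-style fact: an edge between entity nodes with a validity interval,
# so a later "user switched to bun" episode can invalidate it rather than
# silently coexist with it.
@dataclass
class FactEdge:
    source: str                          # entity node
    relation: str
    target: str                          # entity node
    valid_at: datetime                   # when the fact became true
    invalid_at: datetime | None = None   # when it stopped being true
    expired_at: datetime | None = None   # when the graph superseded it

prefers_pnpm = FactEdge("user", "PREFERS", "pnpm", valid_at=datetime(2025, 3, 1))
```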
@@ -29,7 +31,7 @@ Here's the honest field report on every OpenClaw memory plugin:

 **Maximum context at ingestion.** Gralkor captures all messages in each session of work, distills behaviour, and feeds results to Graphiti *as whole episodes*. Extraction works _way_ better when Graphiti has full context. Yet: Most memory plugins save isolated question-answer pairs or summarized snippets: "Awareness" stores the first user message and the last assistant reply — a 30-turn debugging session becomes two sentences. Supermemory and MemOS Cloud default to the last turn only. Other plugins capture single turns of dialog; we capture _the whole episode_ — the entire series of questions, thoughts, actions, and responses that _solved the problem_. Richer semantics, better understanding, better recall. [SeCom](https://arxiv.org/abs/2502.05589) (ICLR 2025) found coherent multi-turn episode storage scores 5.99 GPT4Score points higher than isolated turn-level storage on LOCOMO. [LongMemEval](https://arxiv.org/abs/2410.10813) (ICLR 2025) confirms: fact-level QA-pair extraction drops accuracy from 0.692 to 0.615 versus full-round episode storage.

-**Built for the long term.** Graphiti — on which Gralkor is based — is _temporally aware_. On every ingestion, it doesn't just append; it resolves new information against the existing graph, amending, expiring, and invalidating so that your agent knows _what happened over time_. lancedb-pro has something in this direction — an `invalidated_at` timestamp on vector rows,
+**Built for the long term.** Graphiti — on which Gralkor is based — is _temporally aware_. On every ingestion, it doesn't just append; it resolves new information against the existing graph, amending, expiring, and invalidating so that your agent knows _what happened over time_. lancedb-pro has something in this direction — an `invalidated_at` timestamp on vector rows, but there's no graph. Graphiti tracks four timestamps per fact (`created_at`, `valid_at`, `invalid_at`, `expired_at`) and supports point-in-time queries across a traversable structure. This is expensive, bad for throughput, and useless for short-lived agents, so serving a single, long-lived user agent is _the perfect use case_. Graphiti was destined for Gralkor and OpenClaw. [LongMemEval](https://arxiv.org/abs/2410.10813) (ICLR 2025) established that temporal reasoning is the hardest memory sub-task for commercial LLMs; time-aware indexing recovers 7–11% of that loss. [MemoTime](https://arxiv.org/abs/2510.13614) (WWW 2026) found temporal knowledge graphs enable a 4B model to match GPT-4-Turbo on temporal reasoning, with up to 24% improvement over static memory baselines.

 **Recursion through reflection.** A knowledge graph is a living structure. The most powerful thing you can do with it is point the agent back at its own memory — let it reflect on what it knows, identify contradictions, synthesize higher-order insights, and do with them whatever you believe to be _good cognitive architecture_ :shrug:. Gralkor doesn't prescribe how you do this. Instead, it provides the platform for cognitive architecture experimentation: a structured, temporally-aware graph that the agent can both read from and write to using OpenClaw crons. Share yours, and ask to see mine. This is where it gets interesting. The graph gives you a substrate for experimentation — reflection strategies, knowledge consolidation, cross-session reasoning — that flat retrieval systems simply cannot support. [Reflexion](https://arxiv.org/abs/2303.11366) (NeurIPS 2023) demonstrated that agents storing verbal reflections in an episodic buffer gain 11 points with no weight updates. [Generative Agents](https://arxiv.org/abs/2304.03442) (UIST 2023) showed empirically that a reflection layer synthesizing raw memories into higher-order insights is essential for coherent long-term behavior.

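The "Maximum context at ingestion", "Remembering behaviour", and "Built for the long term" paragraphs above describe whole-episode ingestion with distilled behaviour woven in and a temporal anchor. Below is a minimal sketch of what that could look like against graphiti-core's `add_episode` entry point; the import paths are my reading of the upstream docs, and the `ingest_session` helper, episode name, and `[agent reflection]` marker are hypothetical — Gralkor's actual FalkorDB wiring and distillation prompt are not shown in this diff.

```python
from datetime import datetime, timezone

from graphiti_core import Graphiti              # assumed import path, per upstream graphiti-core docs
from graphiti_core.nodes import EpisodeType

async def ingest_session(graphiti: Graphiti, transcript: list[str], distilled: list[str]) -> None:
    """Hypothetical helper: feed one whole session to the graph as a single episode."""
    # Whole episode: every turn of the session, plus first-person behavioural
    # summaries distilled from the agent's thinking blocks, woven in before ingestion.
    episode_body = "\n".join(transcript + [f"[agent reflection] {note}" for note in distilled])
    await graphiti.add_episode(
        name="session-2025-03-01",                       # illustrative naming
        episode_body=episode_body,
        source=EpisodeType.text,
        source_description="OpenClaw session transcript",
        reference_time=datetime.now(timezone.utc),       # anchors temporal resolution of extracted facts
    )
```

Because each episode carries a `reference_time`, later episodes that contradict earlier facts let Graphiti expire or invalidate them rather than append duplicates. On the read side — the substrate the "Recursion through reflection" paragraph points at — graphiti-core also exposes a search call (roughly `await graphiti.search("…")`), though the exact retrieval wiring Gralkor uses is not part of this diff.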
@@ -76,13 +78,7 @@ openclaw config set plugins.entries.gralkor.config.test true
 ### 3. Install the plugin

 ```bash
-openclaw plugins install
-```
-
-Or as a bare spec (ClawHub is checked first, falls back to npm):
-
-```bash
-openclaw plugins install @susu-eng/gralkor
+openclaw plugins install npm:@susu-eng/gralkor
 ```

 From a tarball (e.g. for air-gapped deploys):

@@ -132,7 +128,7 @@ The plugin dir (`~/.openclaw/extensions/gralkor`) is ephemeral — it can be del

 ```bash
 openclaw plugins uninstall gralkor
-openclaw plugins install
+openclaw plugins install npm:@susu-eng/gralkor
 openclaw config set plugins.slots.memory gralkor
 ```

package/openclaw.plugin.json
CHANGED
package/package.json
CHANGED

@@ -1,6 +1,6 @@
 {
   "name": "@susu-eng/gralkor",
-  "version": "27.2.1",
+  "version": "27.2.4",
   "description": "OpenClaw memory plugin powered by Graphiti knowledge graphs and FalkorDB",
   "type": "module",
   "main": "./dist/index.js",
@@ -75,7 +75,8 @@
     "docker:up": "pnpm run docker:build && docker compose up -d",
     "docker:down": "docker compose down",
     "docker:logs": "docker compose logs graphiti",
-    "publish:npm": "bash scripts/publish.sh",
-    "publish:clawhub": "bash scripts/publish-clawhub.sh"
+    "publish:npm": "bash scripts/publish-npm.sh",
+    "publish:clawhub": "bash scripts/publish-clawhub.sh",
+    "publish:all": "bash scripts/publish-all.sh"
   }
 }
package/server/main.py
CHANGED

@@ -287,6 +287,42 @@ def _find_rate_limit_error(exc: Exception) -> Exception | None:
     return None


+_CREDENTIAL_HINTS = ("api key", "apikey", "credential", "authentication", "expired", "unauthorized")
+
+
+def _downstream_llm_response(exc: Exception) -> JSONResponse:
+    """Map a downstream LLM provider error to an appropriate HTTP response."""
+    http_code = int(getattr(exc, "status_code", None) or getattr(exc, "code", None))
+    msg = str(exc).split("\n")[0][:200]
+
+    if 400 <= http_code < 500:
+        if http_code == 400:
+            status = 503 if any(h in msg.lower() for h in _CREDENTIAL_HINTS) else 500
+        elif http_code in (401, 403):
+            status = 503
+        elif http_code in (404, 422):
+            status = 500
+        else:
+            status = 502
+    else:
+        status = 502
+
+    return JSONResponse(status_code=status, content={"error": "provider error", "detail": msg})
+
+
+def _find_downstream_llm_error(exc: Exception) -> Exception | None:
+    """Walk the exception chain to find a downstream LLM provider error with an HTTP status code."""
+    current: Exception | None = exc
+    seen: set[int] = set()
+    while current is not None and id(current) not in seen:
+        seen.add(id(current))
+        http_code = getattr(current, "status_code", None) or getattr(current, "code", None)
+        if http_code is not None and int(http_code) != 429:
+            return current
+        current = current.__cause__ or current.__context__
+    return None
+
+
 _DEFAULT_RETRY_AFTER = 5  # seconds


@@ -306,6 +342,9 @@ async def rate_limit_middleware(request, call_next):
                 content={"detail": msg},
                 headers={"retry-after": str(int(retry_after))},
             )
+        llm_err = _find_downstream_llm_error(exc)
+        if llm_err is not None:
+            return _downstream_llm_response(llm_err)
         raise


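A quick illustration of how the two helpers added above cooperate inside `rate_limit_middleware`, assuming they are in scope as in the module. `ProviderAuthError` is a stand-in for whatever exception a downstream LLM client raises with a `status_code` attribute; the assertion just traces the mapping the diff implements.

```python
# Stand-in for a downstream LLM client error carrying an HTTP status code.
class ProviderAuthError(Exception):
    status_code = 401

try:
    raise RuntimeError("episode ingestion failed") from ProviderAuthError("invalid api key")
except RuntimeError as exc:
    llm_err = _find_downstream_llm_error(exc)         # walks __cause__/__context__, skips 429s
    if llm_err is not None:
        response = _downstream_llm_response(llm_err)  # 401/403 map to 503 so clients back off and retry
        assert response.status_code == 503
```

Rate-limit (429) errors are deliberately skipped by `_find_downstream_llm_error`, so they continue to hit the existing retry-after branch earlier in the middleware.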
Binary file