agenr 1.9.3 → 2.0.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +39 -0
- package/README.md +25 -15
- package/dist/adapters/openclaw/index.js +191 -56
- package/dist/{chunk-I6A6DPNF.js → chunk-XD3446YW.js} +2 -2
- package/dist/{chunk-EMRMV2QR.js → chunk-Y2BC7RCE.js} +1347 -110
- package/dist/chunk-ZYADFKX3.js +115 -0
- package/dist/cli.js +750 -247
- package/dist/core/recall/index.js +1 -2
- package/dist/internal-recall-eval-server.js +131 -12
- package/package.json +5 -4
- package/dist/chunk-ETQPUJGS.js +0 -0
- package/dist/{chunk-GUDCFFRV.js → chunk-MEHOGUZE.js} +175 -175
package/CHANGELOG.md
CHANGED

@@ -2,6 +2,45 @@
 
 ## [Unreleased]
 
+## [2.0.1] - 2026-04-13
+
+OpenClaw startup-failure handling patch release.
+
+### Fixed
+
+- **OpenClaw plugin startup no longer crashes on agenr service initialization failures.** Startup-time database schema mismatches and similar initialization errors are now logged and surfaced as stable runtime/status failures instead of escaping as an unhandled promise rejection.
+
+### Validation
+
+Changes since last push to `origin/master`:
+
+- Handle OpenClaw startup schema mismatch failures gracefully
+
+## [2.0.0] - 2026-04-13
+
+Procedural memory foundation, sync and recall pipelines, and unified routing major release.
+
+### Added
+
+- **Procedural memory ships as a first-class corpus lane.** Agenr now includes the procedural memory model, repo-authored procedure assets, CLI ingest support, sync plumbing, and fixture-backed validation coverage for procedure-aware workflows.
+- **Dedicated procedure recall and eval support.** The release adds a procedure-specific recall pipeline plus recall-eval fixture provisioning so procedural knowledge can be exercised independently from durable entries and episodes.
+
+### Changed
+
+- **Unified recall is now procedure-aware.** Recall routing can now surface procedural memory alongside the existing durable and episodic paths, tightening the retrieval model for task and workflow queries.
+- **Repository guidance now documents procedural-memory ownership more explicitly.** Architecture and subsystem docs were refreshed to explain where procedural behavior belongs and how it fits the broader memory stack.
+
+### Validation
+
+Changes since last push to `origin/master`:
+
+- docs: refresh surgeon markdown
+- Add procedural memory v1 design spec
+- Add procedural memory phase 1 foundation
+- Add procedural memory phase 2 sync pipeline
+- Add dedicated procedure recall pipeline
+- Add procedure-aware unified recall routing
+
 ## [1.9.3] - 2026-04-12
 
 Supersession sweep-exhaustion and plugin-manifest alignment patch release.
package/README.md
CHANGED

@@ -24,6 +24,7 @@ What makes agenr different is the combination of local-first storage, semantic e
 
 - Hybrid recall for durable knowledge: vector similarity, lexical FTS, temporal awareness, recency decay, and importance weighting.
 - Episodic memory: session-level summaries with temporal filtering and optional semantic episode search for questions like "what happened yesterday?"
+- Procedural memory: repo-authored YAML procedures synced into durable structured runbooks for repeatable how-to workflows.
 - LLM-powered knowledge extraction from conversation transcripts.
 - Semantic deduplication using exact hashes, normalized hashes, embeddings, and within-run clustering.
 - Session continuity with predecessor resolution, recent transcript tails, and LLM-generated continuity summaries.
@@ -137,19 +138,20 @@ Compatibility policy:
 
 The CLI surface is still intentionally compact, but it now covers setup, recall, ingest, and corpus maintenance.
 
-| Command
-|
-| `agenr init`
-| `agenr setup`
-| `agenr recall <query>`
-| `agenr ingest <path>`
-| `agenr ingest entries <path>`
-| `agenr ingest episodes [path]`
-| `agenr
-| `agenr surgeon
-| `agenr surgeon
-| `agenr surgeon
-| `agenr
+| Command | What it does |
+| -------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `agenr init` | Interactive first-run wizard: auth, model selection, OpenClaw detection, plugin install, and optional initial ingestion. |
+| `agenr setup` | Configure auth, model defaults, embeddings, and the agenr database path. |
+| `agenr recall <query>` | Run the hybrid recall pipeline with optional temporal and type/tag filters. |
+| `agenr ingest <path>` | Default durable-entry ingest shorthand. Equivalent to `agenr ingest entries <path>`. |
+| `agenr ingest entries <path>` | Bulk-ingest one file or directory of OpenClaw transcript files into durable knowledge entries. |
+| `agenr ingest episodes [path]` | Backfill episodic summaries from OpenClaw session transcripts, including rotated `.reset.*` and `.deleted.*` files. |
+| `agenr ingest procedures [path]` | Sync repo-authored YAML procedures into procedural-memory revisions stored in the knowledge database. |
+| `agenr surgeon run` | Execute a surgeon maintenance pass. Defaults to retirement; use `--pass supersession` for lineage review. Dry-run by default; add `--apply` to mutate the corpus. |
+| `agenr surgeon status` | Show corpus health, claim-key lifecycle counts, proposal backlog, and the latest surgeon run summary. |
+| `agenr surgeon history` | Show recent surgeon runs. |
+| `agenr surgeon actions <runId>` | Show the audit trail for one surgeon run. |
+| `agenr db reset` | Delete and recreate the knowledge database. |
 
 The OpenClaw plugin also gives the agent five tools directly inside the runtime: `agenr_store`, `agenr_recall`, `agenr_retire`, `agenr_update`, and `agenr_trace`.
 
@@ -165,6 +167,9 @@ agenr ingest ~/.openclaw/agents/main/sessions/
 # Backfill episodic summaries
 agenr ingest episodes ~/.openclaw/agents/main/sessions/ --recent 30d
 
+# Preview procedure sync changes
+agenr ingest procedures --dry-run
+
 # Run the surgeon retirement pass (dry-run by default)
 agenr surgeon run --budget 2.00
 
@@ -189,12 +194,17 @@ Recall is a hybrid pipeline. Agenr embeds the query, retrieves candidates throug
 
 ## How Ingestion Works
 
-Agenr has two ingest pipelines
+Agenr has two transcript-ingest pipelines plus one repo-authored procedure sync path:
 
 - `agenr ingest entries <path>` extracts durable typed knowledge such as facts, decisions, preferences, lessons, milestones, and relationships.
 - `agenr ingest episodes [path]` generates one narrative summary per session so the brain can answer temporal questions like "what happened last week?"
+- `agenr ingest procedures [path]` validates and syncs repo-authored procedural workflows from `procedures/` into the database.
+
+The two transcript paths parse OpenClaw transcripts first, but they optimize for different outputs: entry ingest distills durable knowledge and runs semantic dedup across the whole ingest batch, while episode ingest does a session-by-session preflight pass, uses `sessions.json` metadata when available, reconstructs missing surface metadata for rotated files, and writes episodic summaries. Procedure sync is different: it reads strict YAML authoring files, normalizes them into canonical stored revisions, and writes only when a procedure is new or semantically changed. Details: [docs/INGEST.md](./docs/INGEST.md), [docs/STORE.md](./docs/STORE.md), and [docs/PROCEDURES.md](./docs/PROCEDURES.md).
+
+## How Procedures Work
 
-
+Procedures are the durable how-to layer. They are authored in `procedures/` as reviewed YAML, normalized into canonical stored procedure revisions, and synced with `agenr ingest procedures [path]`. Phase 2 ships the authoring and sync path, including source-only updates and semantic supersession, but not yet procedure recall. For the current model, storage shape, and sync semantics, see [docs/PROCEDURES.md](./docs/PROCEDURES.md).
 
 ## How Episodes Work
package/dist/adapters/openclaw/index.js
CHANGED

@@ -7,7 +7,7 @@ import {
   parseTuiSessionKey,
   readOpenClawSessionsStore,
   storeEntriesDetailed
-} from "../../chunk-I6A6DPNF.js";
+} from "../../chunk-XD3446YW.js";
 import {
   EMBEDDING_DIMENSIONS,
   ENTRY_TYPES,
@@ -24,10 +24,10 @@ import {
   resolveEmbeddingModel,
   runUnifiedRecall,
   validateTemporalValidityRange
-} from "../../chunk-EMRMV2QR.js";
+} from "../../chunk-Y2BC7RCE.js";
 import {
   resolveClaimSlotPolicy
-} from "../../chunk-GUDCFFRV.js";
+} from "../../chunk-MEHOGUZE.js";
 
 // src/adapters/openclaw/index.ts
 import { definePluginEntry } from "openclaw/plugin-sdk/plugin-entry";
@@ -41,7 +41,7 @@ var ENTRY_TYPE_DESCRIPTION = "Knowledge type to store. Use fact for durable trut
 var EXPIRY_DESCRIPTION = "Lifetime bucket: core (always injected at session start, use sparingly), permanent (durable and recalled on demand), or temporary (short-horizon).";
 var UPDATE_EXPIRY_DESCRIPTION = `${EXPIRY_DESCRIPTION} Accepted values: ${EXPIRY_LEVELS.join(", ")}.`;
 var DEFAULT_RECALL_LIMIT = 10;
-var RECALL_MODES = ["auto", "entries", "episodes"];
+var RECALL_MODES = ["auto", "entries", "episodes", "procedures"];
 var RESULT_SUBJECT_LOG_LIMIT = 5;
 async function resolveTargetEntry(services, params, options = {}) {
   const id = readStringParam(params, "id");
@@ -89,7 +89,7 @@ function parseRecallMode(value) {
   if (value === void 0) {
     return void 0;
   }
-  if (value === "auto" || value === "entries" || value === "episodes") {
+  if (value === "auto" || value === "entries" || value === "episodes" || value === "procedures") {
     return value;
   }
   throw new Error(`Unsupported recall mode "${value}".`);
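The mode guard in the hunk above can be exercised standalone. A minimal sketch that mirrors the bundled names (`RECALL_MODES`, `parseRecallMode`) from this diff; the `includes` form is an equivalent restructuring, not the exact bundled code:

```javascript
// Standalone sketch of the recall-mode guard shown in the diff above.
// Mirrors the bundled names; illustrative only, not the published agenr API.
const RECALL_MODES = ["auto", "entries", "episodes", "procedures"];

function parseRecallMode(value) {
  if (value === undefined) {
    return undefined; // absent mode falls through to auto routing upstream
  }
  if (RECALL_MODES.includes(value)) {
    return value;
  }
  throw new Error(`Unsupported recall mode "${value}".`);
}
```

Keeping the allow-list in one array means the tool schema's `enum: [...RECALL_MODES]` and the runtime guard cannot drift apart.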
@@ -233,6 +233,10 @@ function formatUnifiedRecallResults(result) {
     lines.push(result.asOf);
     lines.push("");
   }
+  if (result.routing.queried.includes("procedures") || result.procedure || result.procedureCandidates.length > 0 || result.procedureNotices.length > 0) {
+    appendProcedureMatches(lines, result);
+    lines.push("");
+  }
   const renderEntriesFirst = result.routing.detectedIntent === "historical_state";
   if (renderEntriesFirst) {
     appendEntryMatches(lines, result);
@@ -256,6 +260,33 @@ function formatUnifiedRecallResults(result) {
   }
   return lines.join("\n");
 }
+function appendProcedureMatches(lines, result) {
+  lines.push("Procedure Matches");
+  if (!result.procedure && result.procedureCandidates.length === 0) {
+    lines.push("None.");
+  } else {
+    if (result.procedure) {
+      appendCanonicalProcedure(lines, result.procedure, result.procedureCandidates);
+    } else {
+      lines.push("Canonical procedure: none.");
+    }
+    const additionalCandidates = result.procedureCandidates.filter((candidate) => candidate.procedure.id !== result.procedure?.id);
+    if (additionalCandidates.length > 0) {
+      lines.push("Other Candidates");
+      for (const [index, candidate] of additionalCandidates.entries()) {
+        lines.push(
+          `${index + 1}. ${candidate.procedure.procedure_key} | ${candidate.procedure.title} | score ${candidate.score.toFixed(2)} | lexical=${candidate.scores.lexical.toFixed(2)} | vector=${candidate.scores.vector.toFixed(2)}`
+        );
+      }
+    }
+  }
+  if (result.procedureNotices.length > 0) {
+    lines.push("Procedure Notices");
+    for (const notice of result.procedureNotices) {
+      lines.push(`- ${notice}`);
+    }
+  }
+}
 function appendEntryMatches(lines, result) {
   lines.push("Entry Matches");
   if (result.projectedEntries.length === 0) {
@@ -294,6 +325,32 @@ function appendEpisodeMatches(lines, result) {
     lines.push(`  why_matched=${describeEpisodeMatch(episode)}`);
   }
 }
+function appendCanonicalProcedure(lines, procedure, candidates) {
+  const leadCandidate = candidates.find((candidate) => candidate.procedure.id === procedure.id);
+  lines.push(
+    leadCandidate ? `Canonical Procedure. ${procedure.procedure_key} | ${procedure.title} | score ${leadCandidate.score.toFixed(2)}` : `Canonical Procedure. ${procedure.procedure_key} | ${procedure.title}`
+  );
+  lines.push(`  goal=${procedure.goal}`);
+  appendLabeledList(lines, "when_to_use", procedure.when_to_use);
+  appendLabeledList(lines, "when_not_to_use", procedure.when_not_to_use);
+  appendLabeledList(lines, "prerequisites", procedure.prerequisites);
+  lines.push("  steps");
+  for (const [index, step] of procedure.steps.entries()) {
+    lines.push(`    ${index + 1}. [${step.kind}] ${step.instruction}`);
+    const stepDetails = formatProcedureStepDetails(step);
+    if (stepDetails.length > 0) {
+      for (const detail of stepDetails) {
+        lines.push(`       ${detail}`);
+      }
+    }
+  }
+  appendLabeledList(lines, "verification", procedure.verification);
+  appendLabeledList(lines, "failure_modes", procedure.failure_modes);
+  lines.push("  sources");
+  for (const source of procedure.sources) {
+    lines.push(`    - ${formatProcedureSource(source)}`);
+  }
+}
 function appendClaimTransitions(lines, result) {
   lines.push("Claim Transitions");
   if (result.claimTransitions.length === 0) {
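The canonical-procedure renderer above leans on a small labeled-list helper. A standalone sketch of that shape, mirroring the bundled helper from this diff (illustrative only):

```javascript
// Standalone sketch of the labeled-list rendering used by the canonical
// procedure block above: a two-space-indented label, then four-space bullets,
// with an explicit "- none" placeholder for empty lists.
function appendLabeledList(lines, label, values) {
  lines.push(`  ${label}`);
  if (values.length === 0) {
    lines.push("    - none");
    return;
  }
  for (const value of values) {
    lines.push(`    - ${value}`);
  }
}
```

The explicit "- none" keeps empty sections visible, so an agent reading the recall output can tell "no prerequisites" apart from "prerequisites omitted".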
@@ -314,11 +371,55 @@ function appendClaimTransitions(lines, result) {
   }
 }
 function formatUnifiedRecallLogSummary(result) {
+  const procedureCount = result.procedureCandidates.length;
+  const procedureSummary = result.procedure ? ` [procedure: ${JSON.stringify(truncate(result.procedure.title, 80))}]` : "";
   const entrySubjects = result.entries.map((entry) => entry.entry.subject.trim()).filter((subject) => subject.length > 0);
   const displayed = entrySubjects.slice(0, RESULT_SUBJECT_LOG_LIMIT).map((subject) => JSON.stringify(truncate(subject, 80)));
   const remaining = entrySubjects.length - RESULT_SUBJECT_LOG_LIMIT;
   const suffix = displayed.length === 0 ? "" : ` [entry subjects: ${displayed.join(", ")}${remaining > 0 ? `, ... and ${remaining} more` : ""}]`;
+  const entryEpisodeSummary = `${result.episodes.length} episode${result.episodes.length === 1 ? "" : "s"}, ${result.entries.length} entr${result.entries.length === 1 ? "y" : "ies"}`;
+  if (procedureCount === 0 && !result.procedure) {
+    return `${entryEpisodeSummary}${suffix}`;
+  }
+  return `${procedureCount} procedure candidate${procedureCount === 1 ? "" : "s"}, ${entryEpisodeSummary}${procedureSummary}${suffix}`;
+}
+function appendLabeledList(lines, label, values) {
+  lines.push(`  ${label}`);
+  if (values.length === 0) {
+    lines.push("    - none");
+    return;
+  }
+  for (const value of values) {
+    lines.push(`    - ${value}`);
+  }
+}
+function formatProcedureStepDetails(step) {
+  switch (step.kind) {
+    case "run_command":
+      return [`command=${step.command}`];
+    case "read_reference":
+      return [`ref=${formatProcedureSource(step.ref)}`];
+    case "inspect_state":
+      return [step.target ? `target=${step.target}` : void 0, step.query ? `query=${step.query}` : void 0].filter(
+        (value) => value !== void 0
+      );
+    case "edit_file":
+      return [`path=${step.path}`, `edit=${step.edit}`];
+    case "ask_user":
+      return [`prompt=${step.prompt}`];
+    case "invoke_tool":
+      return [step.tool ? `tool=${step.tool}` : void 0, step.arguments ? `arguments=${JSON.stringify(step.arguments)}` : void 0].filter(
+        (value) => value !== void 0
+      );
+    case "verify":
+      return step.checks.map((check) => `check=${check}`);
+    default:
+      return [];
+  }
+}
+function formatProcedureSource(source) {
+  const parts = [source.kind, source.label, source.path, source.locator].filter((value) => Boolean(value && value.length > 0));
+  return parts.join(" | ");
 }
 function formatTrace(entry, supersededBy, supersedes, claimFamily, recallEvents) {
   const slotPolicy = entry.claim_key ? claimFamily ? {
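The log summary above hand-rolls its count pluralization ("1 entry" vs "2 entries"). The same behavior can be factored into a tiny helper; this is a sketch with assumed names (`pluralize`, `summarizeCounts`), not the bundled code:

```javascript
// Sketch of the count pluralization used by the log summary in the diff above.
// `pluralize` and `summarizeCounts` are illustrative names, not agenr API.
function pluralize(count, singular, plural) {
  return `${count} ${count === 1 ? singular : plural}`;
}

function summarizeCounts(procedureCount, episodeCount, entryCount) {
  const base = `${pluralize(episodeCount, "episode", "episodes")}, ${pluralize(entryCount, "entry", "entries")}`;
  if (procedureCount === 0) {
    return base;
  }
  // Procedure candidates lead the summary only when any were found,
  // mirroring the early-return shape in the diff above.
  return `${pluralize(procedureCount, "procedure candidate", "procedure candidates")}, ${base}`;
}
```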
@@ -469,12 +570,12 @@ var RECALL_TOOL_PARAMETERS = {
   properties: {
     query: {
       type: "string",
-      description: "What you need to remember. Use a focused natural-language query rather than a broad 'everything' search. Phrase prior-state asks directly, for example 'what was the previous approach' or 'what changed from X to Y'."
+      description: "What you need to remember. Use a focused natural-language query rather than a broad 'everything' search. Phrase prior-state asks directly, for example 'what was the previous approach' or 'what changed from X to Y'. Phrase procedural asks directly, for example 'how do I rotate credentials' or 'what steps should I follow'."
     },
     mode: {
       type: "string",
       enum: [...RECALL_MODES],
-      description: "Recall mode: auto routes between exact entry recall, historical-state recall, and episodes; entries forces semantic recall; episodes forces temporal or semantic session recall."
+      description: "Recall mode: auto routes between exact entry recall, historical-state recall, procedural recall, and episodes; entries forces semantic recall; episodes forces temporal or semantic session recall; procedures forces procedural recall."
     },
     limit: {
       type: "integer",
@@ -512,7 +613,7 @@ function createAgenrRecallTool(ctx, servicesPromise, logger) {
   return {
     name: "agenr_recall",
     label: "Agenr Recall",
-    description: "Retrieve knowledge from agenr long-term memory. Use mode=auto for the normal path, including historical-state questions like what was the previous approach or what changed from X to Y; use mode=entries for exact facts and decisions; use mode=episodes for time-bounded 'what happened' questions. Time periods are parsed from the query text. Session-start recall is already handled automatically.",
+    description: "Retrieve knowledge from agenr long-term memory. Use mode=auto for the normal path, including historical-state questions like what was the previous approach or what changed from X to Y and procedural questions like how to do something or what steps to follow; use mode=entries for exact facts and decisions; use mode=episodes for time-bounded 'what happened' questions; use mode=procedures for canonical methods and checklists. Time periods are parsed from the query text. Session-start recall is already handled automatically.",
     parameters: RECALL_TOOL_PARAMETERS,
     async execute(_toolCallId, rawParams) {
       try {
@@ -559,6 +660,7 @@ function createAgenrRecallTool(ctx, servicesPromise, logger) {
       );
       const result = await runUnifiedRecall(request, {
         database: services.episodes,
+        procedures: services.procedures,
         recall: services.recall,
         embeddingAvailable: services.embeddingStatus.available,
         embeddingError: services.embeddingStatus.error,
@@ -588,6 +690,24 @@ function createAgenrRecallTool(ctx, servicesPromise, logger) {
         },
         ...result.asOf ? { asOf: result.asOf } : {},
         ...result.timeWindow ? { timeWindow: result.timeWindow } : {},
+        ...result.procedure ? {
+          procedure: {
+            id: result.procedure.id,
+            procedureKey: result.procedure.procedure_key,
+            title: result.procedure.title,
+            goal: result.procedure.goal
+          }
+        } : {},
+        procedures: result.procedureCandidates.map((candidate) => ({
+          id: candidate.procedure.id,
+          procedureKey: candidate.procedure.procedure_key,
+          title: candidate.procedure.title,
+          goal: candidate.procedure.goal,
+          score: candidate.score,
+          lexicalScore: candidate.scores.lexical,
+          vectorScore: candidate.scores.vector
+        })),
+        procedureNotices: result.procedureNotices,
         episodes: result.episodes.map((episode) => ({
           id: episode.episode.id,
           source: episode.episode.source,
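The payload mapping above converts snake_case stored fields (`procedure_key`, `scores.lexical`) into camelCase tool output. A standalone sketch of that mapping with hand-built sample data; field names follow the diff, the data itself is made up:

```javascript
// Sketch of the snake_case → camelCase candidate mapping from the diff above.
// The sample data is fabricated for illustration.
function toCandidatePayload(candidate) {
  return {
    id: candidate.procedure.id,
    procedureKey: candidate.procedure.procedure_key,
    title: candidate.procedure.title,
    goal: candidate.procedure.goal,
    score: candidate.score,
    lexicalScore: candidate.scores.lexical,
    vectorScore: candidate.scores.vector
  };
}

const sample = {
  procedure: { id: "p1", procedure_key: "deploy.service", title: "Deploy the service", goal: "Ship safely" },
  score: 0.91,
  scores: { lexical: 0.4, vector: 0.88 }
};
```

Flattening `scores.*` into top-level `lexicalScore`/`vectorScore` keeps the tool result one level deep for the consuming agent.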
@@ -1055,7 +1175,7 @@ function registerAgenrOpenClawTools(api, servicesPromise, logger) {
 var openclaw_plugin_default = {
   id: "agenr",
   name: "agenr",
-  version: "1.9.3",
+  version: "2.0.1",
   description: "agenr memory plugin for OpenClaw",
   kind: "memory",
   contracts: {
@@ -3116,59 +3236,69 @@ function buildAgenrMemoryFlushPlan(_params, logger) {
 function createAgenrMemoryRuntime(servicesPromise) {
   return {
     async getMemorySearchManager() {
-      custom: {
-        activeEntries: snapshot.activeEntries,
-        coreEntries: snapshot.coreEntries,
-        sourceFiles: snapshot.sourceFiles
-      }
-      };
-      return {
-        manager: {
-          async search() {
-            return [];
-          },
-          async readFile({ relPath }) {
-            throw new Error(`[agenr] memory file reads are not supported for "${relPath}"`);
-          },
-          status() {
-            return status;
-          },
-          async probeEmbeddingAvailability() {
-            return {
-              ok: services.embeddingStatus.available,
-              ...services.embeddingStatus.error ? { error: services.embeddingStatus.error } : {}
-            };
-          },
-          async probeVectorAvailability() {
-            return vectorAvailable;
+      try {
+        const services = await servicesPromise;
+        const snapshot = await services.memory.getMemoryStatusSnapshot();
+        const vectorAvailable = await services.memory.probeVectorAvailability();
+        const status = {
+          backend: "builtin",
+          provider: "agenr",
+          model: services.embeddingStatus.model,
+          dbPath: services.dbPath,
+          files: snapshot.sourceFiles,
+          chunks: snapshot.activeEntries,
+          vector: {
+            enabled: true,
+            available: vectorAvailable,
+            dims: EMBEDDING_DIMENSIONS
           },
+          custom: {
+            activeEntries: snapshot.activeEntries,
+            coreEntries: snapshot.coreEntries,
+            sourceFiles: snapshot.sourceFiles
           }
-        }
+        };
+        return {
+          manager: {
+            async search() {
+              return [];
+            },
+            async readFile({ relPath }) {
+              throw new Error(`[agenr] memory file reads are not supported for "${relPath}"`);
+            },
+            status() {
+              return status;
+            },
+            async probeEmbeddingAvailability() {
+              return {
+                ok: services.embeddingStatus.available,
+                ...services.embeddingStatus.error ? { error: services.embeddingStatus.error } : {}
+              };
+            },
+            async probeVectorAvailability() {
+              return vectorAvailable;
+            },
+            async sync() {
+              return;
+            }
+          }
+        };
+      } catch (error) {
+        return {
+          manager: null,
+          error: `[agenr] memory runtime unavailable: ${formatErrorMessage2(error)}`
+        };
+      }
     },
     resolveMemoryBackendConfig() {
       return { backend: "builtin" };
     },
     async closeAllMemorySearchManagers() {
+      try {
+        const services = await servicesPromise;
+        await services.close();
+      } catch {
+      }
     }
   };
 }
@@ -3193,6 +3323,7 @@ async function createAgenrOpenClawServices(config, options) {
     dbPath: resolvedConfig.dbPath,
     entries: runtimeServices.entries,
     episodes: runtimeServices.episodes,
+    procedures: runtimeServices.procedures,
     memory: runtimeServices.memory,
     embedding: runtimeServices.embedding,
     recall: runtimeServices.recall,
@@ -3229,6 +3360,7 @@ async function createRuntimeServices(dbPath, config, embeddingStatus, openClawCo
   return {
     entries: database,
     episodes: database,
+    procedures: database,
     memory: createOpenClawRepository(database, {
       claimSlotPolicyConfig: openClawContext.pluginConfig.memoryPolicy?.slotPolicies
     }),
@@ -3331,6 +3463,9 @@ var openclaw_default = definePluginEntry({
     },
     resolvePath: api.resolvePath
   });
+  void servicesPromise.catch((error) => {
+    api.logger.error(`[agenr] startup failed: ${formatErrorMessage2(error)}`);
+  });
   api.registerMemoryCapability({
     promptBuilder: buildAgenrMemoryPromptSection,
     flushPlanResolver: (params) => buildAgenrMemoryFlushPlan(params, api.logger),
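The `void servicesPromise.catch(...)` guard added above is the 2.0.1 startup fix in miniature: attach a rejection handler eagerly so a failed startup (for example a schema mismatch) is logged instead of surfacing as an unhandled promise rejection. A standalone sketch under assumed shapes (`guardStartup` and the logger interface are illustrative, not agenr's API):

```javascript
// Sketch: eagerly attach a rejection handler to a startup promise so
// initialization failures are logged rather than escaping as an unhandled
// promise rejection. `guardStartup` and the logger shape are assumptions.
function guardStartup(servicesPromise, logger) {
  // The attached catch guarantees at least one handler exists from the moment
  // startup begins, even if no caller ever awaits servicesPromise.
  void servicesPromise.catch((error) => {
    const message = error instanceof Error ? error.message : String(error);
    logger.error(`[agenr] startup failed: ${message}`);
  });
  // Consumers still await the original promise and see the rejection normally.
  return servicesPromise;
}
```

Each consumer that awaits the promise handles the failure at its own call site (as the `try`/`catch` in `getMemorySearchManager` above does); the eager guard only prevents the process-level unhandled-rejection path.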
@@ -22,7 +22,7 @@ import {
   readOptionalString,
   readRequiredString,
   validateTemporalValidityRange
-} from "./chunk-EMRMV2QR.js";
+} from "./chunk-Y2BC7RCE.js";
 import {
   compactClaimKey,
   describeClaimKeyNormalizationFailure,
@@ -34,7 +34,7 @@ import {
   parseRelativeDate,
   resolveClaimSlotPolicy,
   validateExtractedClaimKey
-} from "./chunk-GUDCFFRV.js";
+} from "./chunk-MEHOGUZE.js";
 
 // src/adapters/openclaw/transcript/parser.ts
 import { createHash } from "crypto";