@byte5ai/palaia 2.0.10 → 2.0.11
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/openclaw.plugin.json +1 -0
- package/package.json +3 -2
- package/skill/SKILL.md +873 -0
package/openclaw.plugin.json
CHANGED

package/package.json
CHANGED

```diff
@@ -1,6 +1,6 @@
 {
   "name": "@byte5ai/palaia",
-  "version": "2.0.10",
+  "version": "2.0.11",
   "description": "Palaia memory backend for OpenClaw",
   "main": "index.ts",
   "openclaw": {
@@ -12,7 +12,8 @@
     "index.ts",
     "src/",
     "openclaw.plugin.json",
-    "README.md"
+    "README.md",
+    "skill/"
   ],
   "keywords": [
     "openclaw",
```
package/skill/SKILL.md
ADDED

@@ -0,0 +1,873 @@

---
name: palaia
description: >
  Local, crash-safe persistent memory for OpenClaw agents.
  Semantic search, projects, auto-capture, auto-recall.
  This SKILL.md is bundled with the npm plugin — no ClawHub needed.
---
> **Note:** This SKILL.md is bundled with the `@byte5ai/palaia` npm plugin.
> For the full documentation including installation instructions, see: https://github.com/iret77/palaia

# Palaia — Agent Memory Skill

## Auto-Check (run every time this skill is loaded)

**[INSTRUCTION for LLM]:** Before doing anything else with Palaia, run `palaia doctor` silently. If it reports ANY warnings (version mismatch, missing providers, legacy patterns, incomplete setup), address them immediately by following `palaia doctor --fix` recommendations. Do not proceed with other Palaia tasks until doctor reports no issues. This ensures the memory system is always up-to-date and correctly configured.

## Zero-Config: It Just Works

**Palaia works out of the box after installation. No config edits needed.**

After installing the plugin, Palaia immediately:
- **Injects relevant memories** into every agent prompt (`memoryInject: true`)
- **Auto-captures significant exchanges** as searchable memories (`autoCapture: true`)
- **Uses semantic recall** to find context-relevant memories (`recallMode: query`)

No settings to tweak. No README to study. Install it and it works.

**For power users:** All defaults can be customized via `openclaw.json` plugin config or `palaia init --capture-level`. See the Configuration section below.

**Upgrading from v1.x?** Run `palaia doctor --fix` — it detects old defaults and upgrades them. Custom values you've set are preserved.
## Setup Flow (follow this exactly)

### Step 3: Check for existing memory systems (REQUIRED — do not skip)
```bash
palaia doctor
```

Review **every** warning before continuing. For each warning found by `palaia doctor`, immediately offer to fix it — do not wait for the user to ask:

**HEARTBEAT.md legacy patterns:**
→ Tell the user: "Your heartbeat still uses old memory patterns. Should I update it to use Palaia? (Takes a moment, no data loss)"
- If yes: update HEARTBEAT.md, replace `memory_search`/`memory_get`/direct file reads with `palaia query`/`palaia write` equivalents
- If no: skip, move to next warning

**Smart-Memory skill detected:**
→ Tell the user: "The old smart-memory skill is still installed. Should I remove it? (Only after confirming Palaia has your memories)"
- If yes: run `palaia migrate <workspace-path>`, then remove the smart-memory skill folder
- If no: continue, but both systems will run in parallel (not recommended)

**OpenClaw plugin still memory-core:**
→ Tell the user: "Palaia isn't connected to OpenClaw's memory system yet. Should I switch it? (Requires a quick OpenClaw restart)"
- If yes: update OpenClaw config (`plugins.slots.memory = "palaia"`), restart OpenClaw
- If no: Palaia runs standalone (no `memory_search`/`memory_get` tool integration)

**openai-only chain (no local fallback):**
→ Tell the user: "Your search has no local backup. Should I install fastembed as a fallback? (Small download, no GPU needed)"
- If yes: `pip install fastembed`, then `palaia warmup`, update chain to include fastembed before bm25
- If no: skip (warn: "Search may degrade when OpenAI is unavailable")

Rules:
- For each warning: ask immediately, one question at a time
- Do not batch all questions — address them in order
- Do not wait for the user to notice and ask
- After user answers yes/no, move to next warning
- Report when all warnings are resolved

**Do not proceed until all warnings are addressed or explicitly dismissed by the user.**
### Onboarding Conversation

After running `palaia doctor` and reviewing warnings, have a brief conversation with your user.
Keep it short — four questions max, most of them conditional. No technical details unless the user asks.

**Script:**

Tell the user:
"I've set up Palaia for you — your memory is now crash-safe and searchable.
Quick setup: a few short questions."

**Question 1 — Search quality:**

If an OpenAI API key is present AND a local model (sentence-transformers/fastembed/ollama) is also detected:
→ Just confirm: "I'll use your OpenAI key with a local backup model. Good setup."
→ No question needed.

If an OpenAI API key is present BUT no local model is installed:
→ Tell the user: "You have an OpenAI API key — great for search quality. But without a local backup, search quality drops significantly when OpenAI is unavailable. I recommend installing a small local model (300MB, free, runs offline). Should I do that?"
→ yes: `pip install sentence-transformers` (if CUDA issues: `pip install fastembed` instead — lighter, no GPU needed), then `palaia warmup`, chain = openai → sentence-transformers → bm25
→ no: chain = openai → bm25 (warn: "Note: search may fail or degrade without a local fallback.")

If no OpenAI key AND no local model:
→ Tell the user: "No search provider found yet. I recommend installing a free local model (300MB). Should I do that?"
→ yes: `pip install sentence-transformers` (if CUDA issues: `pip install fastembed` instead — lighter, no GPU needed), then `palaia warmup`, chain = sentence-transformers → bm25
→ no: BM25 only (warn: "Search will be keyword-based — you can improve this later with `palaia detect`.")

If no OpenAI key BUT local model detected:
→ Just confirm: "Using local search model — works offline, no API costs."
→ No question needed.

**Question 2 — Existing memory (only ask if legacy memory files exist OR smart-memory detected):**
Ask: "You have existing memory. Should I bring it into Palaia?"
→ yes: run `palaia migrate <workspace-path>`
→ no: start fresh (existing files stay untouched)

**Question 3 — Projects (only ask if multiple project folders detected OR more than 1 agent present):**
Ask: "Do you work on multiple projects? I can keep their memories separate."
→ yes: briefly explain: "Use 'palaia project create <name>' for each project. I'll remember which memories belong where."
→ no: skip

**Question 4 — Capture Model (only ask if autoCapture is enabled, which is the default):**
Ask: "Auto-Capture saves significant conversations automatically. Which model should it use for extraction? A cheap/fast model is best (e.g. claude-haiku-4-5, gpt-4.1-mini, gemini-2.0-flash)."
→ User names a model: set `captureModel` in openclaw.json plugin config
→ User says "use default" or skips: leave unset (primary model will be used, doctor will nudge)

**Wrap up:**
Tell the user what was configured — one sentence:
"Done. [Search quality: OpenAI / local model / keyword-only]. [Migration: X entries imported / fresh start]. [Projects: configured / not used]."

Do not explain commands, WAL, BM25, embedding models, or any technical details unless the user explicitly asks.
### Step 4: Migrate existing entries (best practice — do this now)
If there are existing entries from an earlier Palaia version (or from initial setup), run:
```bash
palaia migrate --suggest
```
This checks for entries without type classification and suggests improvements. Better to do this early while the store is small — it gets harder as the store grows. Apply the suggestions that make sense.

### Step 5a: Check embedding setup (auto-configured by init)
```bash
palaia detect
```
Review the detected providers and configured chain. `palaia init` automatically detects available providers
and configures the best embedding chain — you usually don't need to change anything.

If the chain looks wrong or you want to change it:
```bash
palaia config set-chain <provider1> [provider2] bm25
```

If only BM25 is shown and you want semantic search:
```bash
# fastembed — lightweight, no GPU needed, recommended for most systems
python3 -m pip install fastembed   # or: uv pip install fastembed
palaia warmup                      # pre-load model + build index
```
If fastembed is unavailable, sentence-transformers works too (`python3 -m pip install sentence-transformers`) but is heavier and slower on CPU-only systems.

### Step 5b: Warm up models
```bash
palaia warmup
```
This pre-downloads embedding models so the first search is instant. Always run after chain setup.
### Step 6: Verify setup (mandatory — do not skip)
Run a test query to verify semantic search works end-to-end:
```bash
palaia query "important context and decisions"
```
**Check:**
- Results must appear (not empty)
- Response time should be under 5 seconds
- If it times out or returns no results: check `palaia status` and `palaia detect`, fix any issues before proceeding

Then verify `palaia status` shows indexed entries (Index: N/N, not 0/N).
If index is 0/N after warmup, run a few queries — indexing happens lazily on first query.

**Do not tell the user "setup complete" until this verification passes.** Assumption is not verification.
### Step 7: Optional — migrate additional memory files (skip if done in Step 4)
```bash
palaia migrate <path> --dry-run   # Preview first
palaia migrate <path>             # Then import
```
Note: If smart-memory migration was already handled in Step 3, skip this unless there are other sources to import.

### Step 8: Add Palaia maintenance to your heartbeat (mandatory)

This is required. Palaia's tiering (HOT → WARM → COLD) only runs when triggered.
Without periodic maintenance, all entries stay HOT forever and search quality degrades over time.

Check if `HEARTBEAT.md` exists in the workspace (`~/.openclaw/workspace/HEARTBEAT.md`).

**If it exists:** Append the following block at the end (before any final comments):

```
# TASK: Palaia Maintenance (daily + weekly)
# [INSTRUCTION for LLM]:
# Daily: run `palaia gc` — moves entries between HOT/WARM/COLD based on age and access.
# Weekly (Sunday): run `palaia gc --aggressive` — also clears COLD tier.
# Fast operation, never blocks user queries. Only run if palaia is installed (check: which palaia).
```

**If it does not exist:** Create `HEARTBEAT.md` with the block above as its first task.

Tell the user: "I've added a Palaia maintenance task to your heartbeat schedule. This keeps your memory organized automatically — nothing you need to do."
## Auto-Capture and Capture Hints

### How Auto-Capture Works

Auto-capture runs automatically after every agent turn (when `autoCapture: true`, which is the default). It:

1. Collects all messages from the completed exchange
2. Filters out trivial exchanges (short, system content, acknowledgments)
3. Uses LLM-based extraction to identify significant knowledge: decisions, lessons, processes, commitments, preferences
4. Writes extracted items to Palaia with appropriate type, tags, scope, and project attribution
5. Falls back to rule-based extraction if LLM is unavailable
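Step 2 above can be pictured as a small heuristic. This is purely illustrative: the length threshold and acknowledgment list below are invented for the sketch and are not Palaia's actual filter.

```python
# Hypothetical sketch of the "filter trivial exchanges" step.
# min_chars and ACKS are invented thresholds, not Palaia's rules.
ACKS = {"ok", "thanks", "thank you", "got it", "sure", "yes", "no"}

def is_trivial(messages, min_chars=40):
    """Return True if an exchange is too small to be worth capturing."""
    text = " ".join(m.strip() for m in messages)
    if len(text) < min_chars:
        return True          # too short to carry real knowledge
    if all(m.strip().lower().rstrip("!.") in ACKS for m in messages):
        return True          # pure acknowledgments
    return False
```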
**Agent attribution:** If `PALAIA_AGENT` is set in the environment, all auto-captured entries are attributed to that agent via `--agent`. Otherwise, the CLI uses the configured default.

**Project detection:** Auto-capture passes the list of known projects to the LLM, which assigns entries to the most relevant project (or none if unclear).

**Scope detection:** The LLM also determines scope per item: `private` (personal preference), `team` (shared knowledge), or `public` (documentation).

### When to Use Manual Write vs Auto-Capture

**Auto-Capture** handles conversation knowledge automatically: decisions mentioned in chat, facts discussed, lessons learned during work. You don't need to save these — Palaia does it for you.

**Manual `palaia write` is for structured knowledge that Auto-Capture cannot create:**

| Use Case | Command | Why Manual? |
|----------|---------|-------------|
| Step-by-step procedure | `palaia write "1. Build 2. Test 3. Deploy" --type process` | Structure matters |
| Task with owner/deadline | `palaia write "fix auth" --type task --priority high --assignee Elliot` | Structured fields |
| Project setup | `palaia project create myproject` | Explicit organization |
| Knowledge from external source | `palaia write "API limit: 100/min" --type memory --project api` | Not from conversation |

**Do NOT manually write:**
- Facts, decisions, or preferences that came up in the current conversation (auto-captured)
- "We decided to use X" after discussing X (auto-captured)
- Status updates or progress notes (auto-captured if significant)

**Rule of thumb:** If it just happened in conversation → trust Auto-Capture. If it needs structure (steps, fields, project assignment) → write manually.
### Capture Hints

When you want to guide auto-capture without writing manually, use `<palaia-hint />` tags in your message:

```
<palaia-hint project="myapp" scope="private" />
```

Hints are parsed from all messages in the exchange and used as overrides:
- **Priority:** Hint > LLM detection > Config override > Default
- **Attributes:** `project`, `scope`, `type`, `tags` (comma-separated)
- **Stripping:** Hints are automatically removed from outgoing messages — the user never sees them

Multiple hints are supported (e.g., for different projects in the same turn):
```
<palaia-hint project="frontend" scope="team" tags="decision" />
<palaia-hint project="backend" type="process" />
```
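The parse-and-strip behavior can be sketched with a short regex. The attribute names follow the hint examples above; the regex itself is our assumption, not Palaia's actual parser.

```python
import re

# Illustrative parser for <palaia-hint ... /> tags (sketch, not Palaia's code).
HINT_RE = re.compile(r'<palaia-hint\s+([^/>]*)/>')
ATTR_RE = re.compile(r'(\w+)="([^"]*)"')

def extract_hints(message):
    """Return (hints, message_with_hints_stripped)."""
    hints = [dict(ATTR_RE.findall(m.group(1))) for m in HINT_RE.finditer(message)]
    return hints, HINT_RE.sub("", message).strip()
```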
### Static Config Overrides

For setups where every entry should go to the same project/scope, set in plugin config:
- `captureScope`: Static scope override (e.g., `"team"`)
- `captureProject`: Static project override (e.g., `"myapp"`)

These override LLM detection but are overridden by capture hints.
## Knowledge Packages

Export and import project knowledge as portable package files.

```bash
# Export all entries from a project
palaia package export <project> [--output file.palaia-pkg.json] [--types memory,process]

# Import a knowledge package
palaia package import <file> [--project target] [--merge skip|overwrite|append] [--agent name]

# View package metadata without importing
palaia package info <file>
```

The `--agent` flag on import attributes all imported entries to a specific agent name.
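The three `--merge` modes can be illustrated with a toy in-memory store. The `merge` function and its id-keyed store shape are hypothetical; only the skip/overwrite/append semantics come from the flag description above.

```python
# Sketch of --merge semantics (skip | overwrite | append), keyed by entry id.
def merge(store, incoming, mode="skip"):
    store = dict(store)                        # leave the caller's store alone
    for entry_id, text in incoming.items():
        if entry_id not in store:
            store[entry_id] = text             # new entries always land
        elif mode == "overwrite":
            store[entry_id] = text             # incoming wins
        elif mode == "append":
            store[entry_id] = store[entry_id] + "\n" + text
        # mode == "skip": keep the existing entry untouched
    return store
```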
## Process Runner

Run stored process entries as interactive checklists:

```bash
# List all stored processes
palaia process list [--project NAME]

# Run a process interactively
palaia process run <id>
```
## Temporal Queries

Filter entries by time with `--before` and `--after`:

```bash
palaia query "deploy" --after 2026-03-01 --before 2026-03-15
palaia list --after 2026-03-01
```

Dates are in ISO format (YYYY-MM-DD or YYYY-MM-DDTHH:MM:SS).
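The window filtering can be sketched with `datetime.fromisoformat`, which accepts both date forms listed above. This is only an illustration of the `--after`/`--before` semantics, not Palaia's implementation.

```python
from datetime import datetime

# Sketch: does a timestamp fall inside the --after/--before window?
def in_window(ts, after=None, before=None):
    t = datetime.fromisoformat(ts)
    if after and t < datetime.fromisoformat(after):
        return False
    if before and t > datetime.fromisoformat(before):
        return False
    return True
```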
## Cross-Project Search

Search across all projects at once:

```bash
palaia query "authentication" --cross-project
```

Without `--cross-project`, queries only search entries in the active project context.
## Bounded Memory and Garbage Collection

Palaia supports budgeted garbage collection to keep the store lean:

```bash
# Preview what would be collected
palaia gc --dry-run

# Collect with a target budget (max entries to keep)
palaia gc --budget 200

# Aggressive collection — also clears COLD tier
palaia gc --aggressive
```

GC rotates entries through tiers: HOT (active, <7 days) → WARM (recent, <30 days) → COLD (archived).
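The tier thresholds above can be written as a small classifier. The 7- and 30-day cutoffs come from this section; the function itself is only a sketch, not Palaia's GC code.

```python
from datetime import datetime, timedelta

# Sketch of tier assignment by age of last access (cutoffs from the doc).
def tier_for(last_access, now=None):
    now = now or datetime.now()
    age = now - last_access
    if age < timedelta(days=7):
        return "HOT"
    if age < timedelta(days=30):
        return "WARM"
    return "COLD"
```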
## Significance Tagging

Auto-capture automatically detects and tags entries with significance markers:

| Tag | Meaning | Example |
|-----|---------|---------|
| `decision` | A choice was made | "We decided to use PostgreSQL" |
| `lesson` | Something was learned | "I learned that caching needs invalidation on deploy" |
| `surprise` | Unexpected discovery | "The API returns 200 even on errors" |
| `commitment` | Promise or action item | "I will refactor auth by Friday" |
| `correction` | Error was corrected | "Actually, the limit is 100, not 50" |
| `preference` | User/agent preference | "I prefer tabs over spaces" |
| `fact` | Important factual information | "The prod DB is on port 5433" |

These tags enable targeted queries: `palaia query "decisions" --tags decision`
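A rule-based tagger in the spirit of the rule-based fallback mentioned under Auto-Capture might look like this. The cue phrases are invented for illustration and are not Palaia's actual rules.

```python
# Illustrative cue-phrase tagger (sketch only; phrase lists are invented).
RULES = {
    "decision": ("we decided", "we chose", "going with"),
    "lesson": ("i learned", "lesson:", "turns out"),
    "commitment": ("i will", "action item"),
    "correction": ("actually,", "correction:"),
    "preference": ("i prefer", "please always", "please never"),
}

def significance_tags(text):
    low = text.lower()
    return sorted(tag for tag, cues in RULES.items()
                  if any(cue in low for cue in cues))
```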
## Adaptive Nudging

Palaia includes a graduation system that adapts to agent behavior:

**What it does:** When the agent writes an entry that relates to an existing process, Palaia nudges: "Related process found: [title]. Consider following it."

**How it learns:** The nudging system tracks whether agents follow stored processes. Over time:
- Agents that consistently follow processes see fewer nudges (graduated)
- Agents that frequently skip processes continue to receive nudges
- New processes always trigger nudges until a pattern is established

**Important:** SKILL.md documentation is the primary source for agent behavior. Nudging is the safety net for when agents don't read the docs — not a replacement for good documentation.
## Transparency Features

Palaia makes its memory operations visible to the user by default. Both features are enabled out of the box and can be toggled independently.

### Memory Source Footnotes

When Palaia injects memories into the agent context and the agent uses them in a response, a footnote is appended:

```
Palaia: "PostgreSQL migration plan" (Mar 16), "Deploy process" (Mar 10)
```

This shows the user which memories influenced the response. Max 3 sources are shown, selected by keyword relevance between the memory title and the response text.

**Disable:** `palaia config set showMemorySources false`
**Re-enable:** `palaia config set showMemorySources true`
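The "max 3 sources by keyword relevance" selection can be sketched as word-overlap scoring between each injected memory title and the response. The exact scoring Palaia uses is not specified here, so treat this as an assumption.

```python
# Sketch: pick up to k memory titles that share the most words with the response.
def pick_sources(titles, response, k=3):
    resp_words = set(response.lower().split())
    scored = [(len(set(t.lower().split()) & resp_words), t) for t in titles]
    ranked = [t for score, t in sorted(scored, key=lambda p: -p[0]) if score > 0]
    return ranked[:k]
```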
### Capture Confirmations

When Palaia auto-captures a significant exchange, a confirmation is shown:

```
Saved: "Team decided to use PostgreSQL for the project due to JSON support"
```

This confirms that knowledge was stored and gives the user a preview of what was captured.

**Disable:** `palaia config set showCaptureConfirm false`
**Re-enable:** `palaia config set showCaptureConfirm true`

### Satisfaction and Preference Nudges

After sustained usage, Palaia nudges agents to check in with the user:

1. **Satisfaction check** (after ~10 successful recalls): Ask the user if the memory system is working well. Suggest `palaia doctor` if there are issues.
2. **Transparency preference** (after ~50 recalls or ~7 days): Ask the user whether they want to keep seeing footnotes and capture confirmations, or hide them. Both are one-shot nudges that won't repeat.
## Commands Reference

### Basic Memory

```bash
# Write a memory entry (default type: memory)
palaia write "text" [--scope private|team|public] [--project NAME] [--tags a,b] [--title "Title"] [--type memory|process|task] [--instance NAME]

# Write a task with structured fields
palaia write "fix login bug" --type task --status open --priority high --assignee Elliot --due-date 2026-04-01

# Edit an existing entry (content, metadata, task fields)
palaia edit <id> ["new content"] [--status done] [--priority high] [--tags new,tags] [--title "New Title"] [--type task]

# Search memories (semantic + keyword) with structured filters
palaia query "search term" [--project NAME] [--limit N] [--all] [--type task] [--status open] [--priority high] [--assignee NAME] [--instance NAME]

# Read a specific entry by ID
palaia get <id> [--from LINE] [--lines N]

# List entries in a tier with filters
palaia list [--tier hot|warm|cold] [--project NAME] [--type task] [--status open] [--priority high] [--assignee NAME] [--instance NAME]

# System health, active providers, and entry class breakdown
palaia status

# Suggest type assignments for untyped entries
palaia migrate --suggest
```
### Projects

Projects group related entries. They're optional — everything works without them.

```bash
# Create a project
palaia project create <name> [--description "..."] [--default-scope team]

# List all projects
palaia project list

# Show project details + entries
palaia project show <name>

# Write an entry directly to a project
palaia project write <name> "text" [--scope X] [--tags a,b] [--title "Title"]

# Search within a project only
palaia project query <name> "search term" [--limit N]

# Change the project's default scope
palaia project set-scope <name> <scope>

# Delete a project (entries are preserved, just untagged)
palaia project delete <name>
```
### Agent Alias System

Aliases let multiple agent names resolve to the same identity. Scope checks and queries will match both the alias and the canonical name.

```bash
# Set alias: "default" is treated as "HAL"
palaia alias set default HAL

# List all aliases
palaia alias list

# Remove an alias
palaia alias remove default
```

Use aliases when the same agent runs under different names (e.g., "default" during init, "HAL" during operation). Entries written by either name are accessible to both.
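Alias resolution can be sketched as a lookup that follows the alias map to a canonical name. The `canonical` helper is hypothetical; the "default" → "HAL" mapping mirrors the example above.

```python
# Sketch of alias resolution; a real install would persist the map.
def canonical(name, aliases):
    seen = set()
    while name in aliases and name not in seen:
        seen.add(name)            # guard against alias cycles
        name = aliases[name]
    return name

def same_identity(a, b, aliases):
    return canonical(a, aliases) == canonical(b, aliases)
```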
### Project Locking

Advisory locks coordinate multi-agent work on projects. Locks auto-expire after TTL (default: 30 min).

```bash
# Lock a project for exclusive work
palaia project lock <name> --agent <agent> [--reason "..."] [--ttl 3600]

# Check if a project is locked
palaia project lock-status <name>

# Release a lock
palaia project unlock <name>

# Force-break a stuck lock (use with caution)
palaia project break-lock <name>

# List all active locks
palaia project locks
```

Always check lock status before starting work on a shared project. The lock is advisory — it doesn't prevent writes, but agents should respect it.
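Advisory locking with TTL expiry can be sketched as a small table keyed by project. The `Locks` class is hypothetical; only the advisory semantics and the 30-minute default TTL come from the section above.

```python
import time

# Sketch of an advisory lock table with TTL expiry (1800 s = 30 min default).
class Locks:
    def __init__(self):
        self._locks = {}  # project -> (agent, expires_at)

    def lock(self, project, agent, ttl=1800, now=None):
        now = now if now is not None else time.time()
        holder = self._locks.get(project)
        if holder and holder[1] > now and holder[0] != agent:
            return False                      # still held by someone else
        self._locks[project] = (agent, now + ttl)
        return True

    def holder(self, project, now=None):
        now = now if now is not None else time.time()
        h = self._locks.get(project)
        return h[0] if h and h[1] > now else None
```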
### Configuration

```bash
# Show all settings
palaia config list

# Get/set a single value
palaia config set <key> <value>

# Set the embedding fallback chain (ordered by priority)
palaia config set-chain <provider1> [provider2] [...] bm25

# Detect available embedding providers on this system
palaia detect

# Pre-download embedding models
palaia warmup
```
### Diagnostics

```bash
# Check Palaia health and detect legacy systems
palaia doctor

# Show guided fix instructions for each warning
palaia doctor --fix

# Machine-readable output
palaia doctor --json
```
### Maintenance

```bash
# Tier rotation — moves old entries from HOT → WARM → COLD
palaia gc [--aggressive]

# Replay any interrupted writes from the write-ahead log
palaia recover
```
### Document Ingestion (RAG)

```bash
# Index a file, URL, or directory into the knowledge base
palaia ingest <file-or-url> [--project X] [--scope X] [--tags a,b] [--chunk-size N] [--dry-run]

# Query with RAG-formatted context (ready for LLM injection)
palaia query "question" --project X --rag
```
### Sync

```bash
# Export entries for sharing
palaia export [--project NAME] [--output DIR] [--remote GIT_URL]

# Import entries from an export
palaia import <path> [--dry-run]

# Import from other memory formats (smart-memory, flat-file, json-memory, generic-md)
palaia migrate <path> [--dry-run] [--format FORMAT] [--scope SCOPE]
```
### JSON Output

All commands support `--json` for machine-readable output:
```bash
palaia status --json
palaia query "search" --json
palaia project list --json
```
## Scope System

Every entry has a visibility scope:

- **`private`** — Only the agent that wrote it can read it
- **`team`** — All agents in the same workspace can read it (default)
- **`public`** — Can be exported and shared across workspaces

**Setting defaults:**
```bash
# Global default
palaia config set default_scope <scope>

# Per-project default
palaia project set-scope <name> <scope>
```

**Scope cascade** (how Palaia decides the scope for a new entry):
1. Explicit `--scope` flag → always wins
2. Project default scope → if entry belongs to a project
3. Global `default_scope` from config
4. Falls back to `team`
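The four-step cascade can be written out directly. The parameter names and the shape of the `projects` map below are illustrative, not Palaia's config schema.

```python
# Sketch of the scope cascade described above.
def resolve_scope(flag_scope=None, project=None, projects=None, default_scope=None):
    if flag_scope:                                 # 1. explicit --scope wins
        return flag_scope
    if project and projects and projects.get(project, {}).get("default_scope"):
        return projects[project]["default_scope"]  # 2. project default scope
    if default_scope:
        return default_scope                       # 3. global config default
    return "team"                                  # 4. built-in fallback
```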
## Projects

- Projects are optional and purely additive — Palaia works fine without them
- Each project has its own default scope
- Writing with `--project NAME` or `palaia project write NAME` both assign to a project
- Deleting a project preserves its entries (they just lose the project tag)
- `palaia project show NAME` lists all entries with their tier and scope
## When to Use What

| Situation | Command |
|-----------|---------|
| Remember a simple fact | `palaia write "..."` |
| Remember something for a specific project | `palaia project write <name> "..."` |
| Create a task/todo | `palaia write "fix bug" --type task --priority high` |
| Record a process/SOP | `palaia write "deploy steps" --type process` |
| Mark task as done | `palaia edit <id> --status done` |
| Find something you stored | `palaia query "..."` |
| Find open tasks | `palaia query "tasks" --type task --status open` |
| List high-priority tasks | `palaia list --type task --priority high` |
| Find something within a project | `palaia project query <name> "..."` |
| Check what's in active memory | `palaia list` |
| Check what's in archived memory | `palaia list --tier cold` |
| See system health + class breakdown | `palaia status` |
| Clean up old entries | `palaia gc` |
| Index a document or website | `palaia ingest <file/url> --project <name>` |
| Get type suggestions for old entries | `palaia migrate --suggest` |
| Search indexed documents for LLM context | `palaia query "..." --project <name> --rag` |
|
|
619
|
+
|
|
620
|
+
## Document Knowledge Base

Use `palaia ingest` to index external documents — PDFs, websites, text files, directories.
Indexed content is chunked, embedded, and stored as regular entries (searchable like memory).

**When to use:**
- User asks you to "remember" a document, manual, or website
- You need to search through a large document
- Building a project-specific knowledge base

**How to use:**
```bash
palaia ingest document.pdf --project my-project
palaia ingest https://docs.example.com --project api-docs --scope team
palaia ingest ./docs/ --project my-project --tags documentation

palaia query "How does X work?" --project my-project --rag
```

The `--rag` flag returns a formatted context block ready to insert into your LLM prompt.

**PDF support:** requires pdfplumber — install with: `pip install pdfplumber`

**Source attribution:** each chunk tracks its origin (file, page, URL) automatically.

## Error Handling

| Problem | What to do |
|---------|-----------|
| Embedding provider not available | Chain automatically falls back to next provider. Check `palaia status` to see which is active. |
| Write-ahead log corrupted | Run `palaia recover` — replays any interrupted writes. |
| Entries seem missing | Run `palaia recover`, then `palaia list`. Check all tiers (`--tier warm`, `--tier cold`). |
| Search returns no results | Try `palaia query "..." --all` to include COLD tier. Check `palaia status` to confirm provider is active. |
| `.palaia` directory missing | Run `palaia init` to create a fresh store. |

## Tiering

Palaia organizes entries into three tiers based on access frequency:

- **HOT** (default: 7 days) — Frequently accessed, always searched
- **WARM** (default: 30 days) — Less active, still searched by default
- **COLD** — Archived, only searched with `--all` flag

Run `palaia gc` periodically (or let cron handle it) to rotate entries between tiers. `palaia gc --aggressive` forces more entries to lower tiers.
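
As a mental model, age-based rotation with the default thresholds can be sketched like this (a simplification: the real rotation also weighs access frequency and `hot_max_entries`):

```python
HOT_DAYS = 7    # default hot_threshold_days
WARM_DAYS = 30  # default warm_threshold_days

def tier_for_age(days_since_last_access):
    """Pick a tier from entry age using the default thresholds."""
    if days_since_last_access <= HOT_DAYS:
        return "hot"
    if days_since_last_access <= WARM_DAYS:
        return "warm"
    return "cold"  # archived; only searched with --all
```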

## What Goes Where (Single Source of Truth)

This is the most important section for avoiding duplicated knowledge. Get this right.

**Project files (CONTEXT.md, MEMORY.md, etc.) = static facts:**
- Repo URL, tech stack, architecture overview, current version
- Palaia usage info for this project: project name, common tags, scopes, conventions
- Pointers to Palaia: "Processes: `palaia query --type process --project <name>`"
- Changes rarely. Never store processes, checklists, or decision logs here.

**Palaia = all dynamic knowledge:**
- Processes and checklists (type: process) — reusable, searchable, scope-aware
- Decisions and ADRs (type: memory, tag: adr)
- Learnings and insights (type: memory, tag: learning)
- Tasks and planning (type: task)
- Anything that evolves, gets shared, or should surface at the right time

**External issue trackers (GitHub Issues, Jira, etc.) = bugs and tickets:**
- If a project uses one: bugs, feature requests, and technical tickets live there
- Never duplicate between external trackers and Palaia tasks

**The test:** "Is this a static fact about the project?" → project file. "Is this knowledge that evolves, gets shared, or should be actively surfaced?" → Palaia. "Is this a bug or feature request?" → external tracker.

**Best practice: persist concrete work state, not just project names.**
When saving active context, write the exact step, blocker, and next action — not just "working on Project X".

- Bad: `palaia write "Working on the API"`
- Good: `palaia write "API auth module: implementing JWT refresh tokens, blocked on token expiry race condition, next step: add mutex around refresh logic" --project myapp --tag active-work`

This ensures continuity across sessions and context switches — the next session knows exactly where to pick up.

**Common mistakes to avoid:**
- Writing a release checklist into CONTEXT.md instead of `palaia write --type process`
- Keeping decision logs in markdown files instead of Palaia entries
- Duplicating task lists between files and Palaia
- Persisting vague status ("working on X") instead of concrete state (step, blocker, next action)

## Reconciling with Existing Memory Patterns

When your user installs Palaia, they're telling you: "I want better memory management." That's the benchmark. Your job is to make Palaia deliver on that promise — without breaking what already works.

**If your workspace has existing CONTEXT.md, MEMORY.md, or AGENTS.md files with memory rules that predate Palaia:**

1. **Don't fight them.** These files are authoritative workspace config. You can't and shouldn't override them.
2. **Do identify overlaps.** Read through the existing memory rules and find where they conflict with effective Palaia usage. Common patterns:
   - "Write all project knowledge to CONTEXT.md" → processes, decisions, and learnings are better in Palaia (searchable, scope-aware, nudge-capable)
   - "Keep checklists in MEMORY.md" → these should be Palaia process entries
   - "Update CONTEXT.md after every task" → static facts yes, but dynamic knowledge should go to Palaia
3. **Propose changes to your user.** Explain what you found, why Palaia would serve them better for dynamic knowledge, and suggest specific edits to their workspace files. Don't make the changes silently.
4. **Migrate gradually.** Move one category at a time (e.g., processes first, then decisions). After each migration, add a pointer in the old location: "Processes now live in Palaia: `palaia query --type process`"
5. **Keep what works.** CONTEXT.md is great for static project fact sheets. MEMORY.md is great for session-start headlines. These files complement Palaia — they don't compete with it. The goal is clear boundaries, not replacing everything.

**What good coexistence looks like:**
- CONTEXT.md: project URL, tech stack, current version, Palaia project name + common tags
- MEMORY.md: routing rules, agent identities, quick-reference pointers
- Palaia: processes, decisions, learnings, tasks, anything searchable or shareable

**The user chose Palaia.** Honor that choice by making it the primary home for dynamic knowledge. But be pragmatic — a working hybrid is better than a dogmatic migration that breaks the agent's workflow.

## Migration Best Practice

When upgrading to Palaia 1.7+, migrate existing planning data into structured Palaia entries:

**What to migrate:**
- Roadmap items, TODOs, pending tasks from CONTEXT.md or MEMORY.md → `palaia write --type task --status open --priority <level>`
- Checklists, SOPs, release processes → `palaia write --type process`
- Existing Palaia entries without a type → run `palaia migrate --suggest` for recommendations

**After migration:**
- Remove migrated items from CONTEXT.md, MEMORY.md, or wherever they lived before
- Replace them with a pointer: "Tasks live in Palaia: `palaia list --type task --project <name>`"
- This prevents double sources of truth

**Session Identity:**
- Run `palaia instance set YOUR_INSTANCE_NAME` at session start (e.g., "Claw-Main", "Claw-Palaia")
- This distinguishes entries from different sessions of the same agent
- Use the `--instance` flag on queries to filter by session origin
- Alternatively, set the `PALAIA_INSTANCE` env var (the config file takes precedence)

**Memo Awareness:**
- After `palaia query` and `palaia write`, Palaia automatically checks for unread memos
- If unread memos exist: "You have N unread memos. Run: palaia memo inbox"
- This nudge is frequency-limited (max 1x/hour) and suppressed in `--json` mode
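
The nudge behavior described above amounts to a rate-limited reminder. A minimal sketch of the logic (hypothetical names; this is not Palaia's implementation):

```python
import time

NUDGE_INTERVAL = 3600.0        # at most one nudge per hour
_last_nudge = float("-inf")    # timestamp of the last nudge shown

def maybe_nudge(unread_count, json_mode, now=None):
    """Return a nudge string at most once per hour; never in --json mode."""
    global _last_nudge
    if json_mode or unread_count == 0:
        return None
    now = time.time() if now is None else now
    if now - _last_nudge < NUDGE_INTERVAL:
        return None  # frequency limit: stay quiet
    _last_nudge = now
    return f"You have {unread_count} unread memos. Run: palaia memo inbox"
```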

**Best Practice: Two-Layer Messaging (Multi-Agent Setups)**

When sending memos to other agents, use a two-layer approach for reliable delivery:

1. **Send the memo** (the actual message, persistent):

   ```bash
   palaia memo send AgentName "Important update about project X"
   # or broadcast to all:
   palaia memo broadcast "New process available — check palaia query --type process"
   ```

2. **Ring the doorbell** (short notification to active sessions, ephemeral):

   Notify active agent sessions that new memos are waiting. This is platform-specific — Palaia doesn't handle this part, but here's how it works in practice:

   *OpenClaw example:*
   ```python
   sessions_send(sessionKey="agent:agentname:...",
                 message="New Palaia memos waiting. Please run: palaia memo inbox")
   ```

   *Other platforms:* use whatever inter-process or webhook mechanism is available to ping the agent.

If no active notification is possible, that's fine — the CLI nudge will inform the agent at their next `palaia query` or `palaia write`.

**Why two layers?** The memo is the message (persistent, platform-independent). The doorbell is just a ping (ephemeral, platform-specific). If the doorbell fails, the memo is still there. Never put the full message content in the doorbell — that creates duplicates.

## After Updating Palaia

Palaia has three independent components. Update ALL of them — they version independently:

```bash
# 1. Python CLI (the main tool)
python3 -m pip install --upgrade "palaia[fastembed]"
# or: uv tool install "palaia[fastembed]" (always include [fastembed]!)

# 2. OpenClaw plugin (memory-slot integration)
npm install -g @byte5ai/palaia@latest

# 3. SKILL.md is included in the npm package — it updates automatically with npm install

# 4. Always run after updating:
palaia doctor --fix
palaia warmup
```

**Why all three?** The pip package is the CLI. The npm package is the OpenClaw plugin that wires Palaia into the memory slot. The bundled SKILL.md travels with the npm package. Updating only one leaves the others stale.

`palaia doctor` checks your store for compatibility, suggests new features, and handles version stamping. If the installed version differs from the store version, Palaia will warn you on every CLI call until you run `palaia doctor`.

## Agent Field Guide — Lessons from Production

These are hard-won lessons from agents running Palaia in production. Read this before your first query.

### Performance: embedding server + warmup

The OpenClaw plugin starts a long-lived embedding server subprocess (`embeddingServer: true`, the default). This keeps the embedding model loaded in RAM — queries take ~0.5s instead of 6-16s. The first query after server start takes ~2s (one-time model load).

If queries are slow, check:
1. Is the embedding server running? The plugin starts it automatically. Disable with `embeddingServer: false` in the plugin config (not recommended).
2. Did you run `palaia warmup`? (`palaia status` shows "X entries not indexed" if not.) Warmup pre-computes embeddings for all entries.
3. Which provider is active? (`palaia detect`) — fastembed is 50x faster than sentence-transformers on CPU-only systems.
4. Is the embedding chain correct? (`palaia config show`) — the chain should list your preferred provider first.

Without the embedding server (standalone CLI), warmup is critical: every query without cached embeddings re-loads the model (~3s fastembed, ~16s sentence-transformers).

### Provider choice matters on CPU systems

- **fastembed**: ~0.3s per embedding, lightweight, no GPU needed — **recommended for most systems**
- **sentence-transformers**: ~16s per embedding on CPU (loads PyTorch) — only use if you have a GPU
- **gemini**: cloud-based via the Gemini API (`GEMINI_API_KEY` required). Model: `gemini-embedding-exp-03-07` (default) or `text-embedding-004`. No local compute needed.
- If both local providers are installed, set the chain explicitly: `palaia config set-chain fastembed bm25`
- Cloud providers (openai, gemini) can be combined with a local fallback: `palaia config set-chain gemini fastembed bm25`
- Switching providers invalidates the embedding cache — run `palaia warmup` after any chain change

### Write incrementally, not at session end

Don't batch all your learnings into one big write at the end. Write after each meaningful step:
```bash
# After a decision
palaia write "Decided to use FastAPI over Flask — async support needed for webhook handlers" --project myproject --tag decision

# After hitting a blocker
palaia write "Redis connection pool exhausted under load — need to configure max_connections" --project myproject --tag blocker,active-work

# After resolving something
palaia write "Fixed Redis pool: set max_connections=50, added connection timeout=5s" --project myproject --tag learning
```
If your session crashes, the knowledge survives. If you write at the end, it doesn't.

### Use processes for anything repeatable

Release checklists, deployment steps, review procedures — write them as `--type process`. Palaia will automatically surface relevant processes when you write or query related topics (Process Nudge). This only works if the process exists in Palaia, not in a markdown file.

### Parallel writes are safe

Palaia uses kernel-level file locking (`fcntl.flock`) with a Write-Ahead Log (WAL) to ensure data integrity. Multiple concurrent `palaia write` calls — such as those from OpenClaw's parallel tool calling — are safe:

- Each write acquires an exclusive lock before touching the store
- The WAL guarantees crash recovery even if a write is interrupted mid-operation
- No entry loss, no corruption, no cross-contamination between parallel writes
- Lock timeout is 5 seconds (configurable via `lock_timeout_seconds`); stale locks (>60s) are auto-detected and overridden

This means agents can safely issue multiple `palaia write` commands in parallel without coordination.
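
The exclusive-lock pattern named above is the standard `fcntl.flock` idiom. A sketch of the idiom in general, not Palaia's actual code:

```python
import fcntl

def locked_append(path, line):
    """Append one line under an exclusive kernel-level lock.

    flock blocks until the lock is free, so concurrent writers
    serialize cleanly instead of interleaving partial writes.
    """
    with open(path, "a") as f:
        fcntl.flock(f, fcntl.LOCK_EX)      # exclusive lock on the store file
        try:
            f.write(line + "\n")
            f.flush()
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)  # release for the next writer
```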

### Tags are your future self's search terms

Pick tags that your future self (or another agent) would search for. Good tags: `decision`, `learning`, `blocker`, `adr`, `release`, `config`. Bad tags: `important`, `note`, `misc`. Use `--project` consistently — it's the primary filter for all multi-project setups.

### doctor is your first response to any problem

Something weird? Run `palaia doctor --fix` first. It checks versions, repairs chains, rebuilds indexes, and catches most issues automatically. After any update, after any config change, after any error — doctor first, debug second.

### Session continuity checklist

At the start of every session:
1. `palaia doctor` — catch any issues
2. `palaia query "active work"` — pick up where you left off
3. `palaia memo inbox` — check for messages from other agents

Before ending a session:
1. Write your current state: exact step, any blockers, next action
2. Close any open tasks: `palaia edit <id> --status done`

## Configuration Keys

| Key | Default | Description |
|-----|---------|-------------|
| `default_scope` | `team` | Default visibility for new entries |
| `embedding_chain` | *(auto)* | Ordered list of search providers |
| `embedding_provider` | `auto` | Legacy single-provider setting |
| `embedding_model` | — | Per-provider model overrides |
| `hot_threshold_days` | `7` | Days before HOT → WARM |
| `warm_threshold_days` | `30` | Days before WARM → COLD |
| `hot_max_entries` | `50` | Max entries in HOT tier |
| `decay_lambda` | `0.1` | Decay rate for memory scores |
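
The name `decay_lambda` suggests an exponential decay of entry scores over time. As a sketch (the exact formula is an assumption, not taken from Palaia's source), a score with lambda = 0.1 halves roughly every 7 days:

```python
import math

def decayed_score(base_score, days_since_access, decay_lambda=0.1):
    """Assumed exponential decay: half-life is about ln(2) / lambda days."""
    return base_score * math.exp(-decay_lambda * days_since_access)

# With the default lambda of 0.1, the half-life is ln(2)/0.1, about 6.9 days:
print(round(decayed_score(1.0, 6.93), 2))  # ~0.5
```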

---

(c) 2026 byte5 GmbH — MIT License