@letta-ai/letta-code 0.13.7 → 0.13.9
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/package.json
CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@letta-ai/letta-code",
-  "version": "0.13.7",
+  "version": "0.13.9",
   "description": "Letta Code is a CLI tool for interacting with stateful Letta agents from the terminal.",
   "type": "module",
   "bin": {
@@ -30,7 +30,7 @@
     "access": "public"
   },
   "dependencies": {
-    "@letta-ai/letta-client": "^1.7.
+    "@letta-ai/letta-client": "^1.7.6",
     "glob": "^13.0.0",
     "ink-link": "^5.0.0",
     "open": "^10.2.0",
@@ -1,15 +1,17 @@
 ---
 name: defragmenting-memory
-description:
+description: Decomposes and reorganizes agent memory blocks into focused, single-purpose components. Use when memory has large multi-topic blocks, redundancy, or poor organization. Backs up memory, uses a subagent to decompose and clean it up, then restores the improved version.
 ---
 
 # Memory Defragmentation Skill
 
 This skill helps you maintain clean, well-organized memory blocks by:
 1. Dumping current memory to local files and backing up the agent file
-2. Using
+2. Using a subagent to decompose and reorganize the files
 3. Restoring the cleaned files back to memory
 
+The focus is on **decomposition**—splitting large, multi-purpose blocks into focused, single-purpose components—rather than consolidation.
+
 ## When to Use
 
 - Memory blocks have redundant information
@@ -21,6 +23,8 @@ This skill helps you maintain clean, well-organized memory blocks by:
 
 ## Workflow
 
+⚠️ **CRITICAL SAFETY REQUIREMENT**: You MUST complete Step 1 (backup) before proceeding to Step 2. The backup is your safety net. Do not spawn the subagent until the backup is guaranteed to have succeeded.
+
 ### Step 1: Backup Memory to Files
 
 ```bash
@@ -32,31 +36,88 @@ This creates:
 - `.letta/backups/working/` - Working directory with editable files
 - Each memory block as a `.md` file: `persona.md`, `human.md`, `project.md`, etc.
 
-### Step 2: Spawn
+### Step 2: Spawn Subagent to Clean Files
 
 ```typescript
 Task({
   subagent_type: "memory",
-  description: "Clean up memory files",
-  prompt:
+  description: "Clean up and decompose memory files",
+  prompt: `⚠️ CRITICAL PREREQUISITE: The agent memory blocks MUST be backed up to .letta/backups/working/ BEFORE you begin this task. The main agent must have run backup-memory.ts first. You are ONLY responsible for editing the files in that working directory—the backup is your safety net.
+
+You are decomposing and reorganizing memory block files in .letta/backups/working/ to improve clarity and focus. "Decompose" means take large memory blocks with multiple sections and split them into smaller memory blocks, each with fewer sections and a single focused purpose.
+
+## Evaluation Criteria
+
+1. **DECOMPOSITION** - Split large, multi-purpose blocks into focused, single-purpose components
+   - Example: A "persona" block mixing Git operations, communication style, AND behavioral preferences should become separate blocks like "communication-style.md", "behavioral-preferences.md", "version-control-practices.md"
+   - Example: A "project" block with structure, patterns, rendering, error handling, and architecture should split into specialized blocks like "architecture.md", "patterns.md", "rendering-approach.md", "error-handling.md"
+   - Goal: Each block should have ONE clear purpose that can be described in a short title
+   - Create new files when splitting (e.g., communication-style.md, behavioral-preferences.md)
 
-
--
--
--
-- Resolve contradictions
-- Improve scannability
+2. **STRUCTURE** - Organize content with clear markdown formatting
+   - Use headers (##, ###) for subsections
+   - Use bullet points for lists
+   - Make content scannable at a glance
 
-
+3. **CONCISENESS** - Remove redundancy and unnecessary detail
+   - Eliminate duplicate information across blocks
+   - Remove speculation ("probably", "maybe", "I think")
+   - Keep only what adds unique value
 
-
-
+4. **CLARITY** - Resolve contradictions and improve readability
+   - If blocks contradict, clarify or choose the better guidance
+   - Use plain language, avoid jargon
+   - Ensure each statement is concrete and actionable
 
-
+5. **ORGANIZATION** - Group related information logically
+   - Within each block, organize content from general to specific
+   - Order sections by importance
+
+## Workflow
+
+1. **Analyze** - Read each file and identify its purpose(s)
+   - If a block serves 2+ distinct purposes, it needs decomposition
+   - Flag blocks where subtopics could be their own focused blocks
+
+2. **Decompose** - Split multi-purpose blocks into specialized files
+   - Create new .md files for each focused purpose
+   - Use clear, descriptive filenames (e.g., "keyboard-protocols.md", "error-handling-patterns.md")
+   - Ensure each new block has ONE primary purpose
+
+3. **Clean Up** - For remaining blocks (or new focused blocks):
+   - Add markdown structure with headers and bullets
+   - Remove redundancy
+   - Resolve contradictions
+   - Improve clarity
+
+4. **Delete** - Remove files only when appropriate
+   - After consolidating into other blocks (rare - most blocks should stay focused)
+   - Never delete a focused, single-purpose block
+   - Only delete if a block contains junk/irrelevant data with no value
+
+## Files to Edit
+- persona.md → Consider splitting into: communication-style.md, behavioral-preferences.md, technical-practices.md
+- project.md → Consider splitting into: architecture.md, patterns.md, rendering.md, error-handling.md, etc.
+- human.md → OK to keep as-is if focused on understanding the user
+- DO NOT edit: skills.md (auto-generated), loaded_skills.md (system-managed)
+
+## Success Indicators
+- No block tries to cover 2+ distinct topics
+- Each block title clearly describes its single purpose
+- Content within each block is focused and relevant to its title
+- Well-organized with markdown structure
+- Clear reduction in confusion/overlap across blocks
+
+Provide a detailed report including:
+- Files created (new decomposed blocks)
+- Files modified (what changed)
+- Files deleted (if any, explain why)
+- Before/after character counts
+- Rationale for how decomposition improves the memory structure`
 })
 ```
 
-The
+The subagent will:
 - Read the files from `.letta/backups/working/`
 - Edit them to reorganize and consolidate redundancy
 - Merge related blocks together for better organization
@@ -79,20 +140,23 @@ This will:
 ## Example Complete Flow
 
 ```typescript
-//
+// ⚠️ STEP 1 IS MANDATORY: Backup memory to files
+// This MUST complete successfully before proceeding to Step 2
 Bash({
   command: "npx tsx <SKILL_DIR>/scripts/backup-memory.ts $LETTA_AGENT_ID .letta/backups/working",
-  description: "Backup memory to files"
+  description: "Backup memory to files (MANDATORY prerequisite)"
 })
 
-//
+// ⚠️ STEP 2 CAN ONLY BEGIN AFTER STEP 1 SUCCEEDS
+// The subagent works on the backed-up files, with the original memory safe
 Task({
   subagent_type: "memory",
-  description: "Clean up memory files",
-  prompt: "
+  description: "Clean up and decompose memory files",
+  prompt: "Decompose and reorganize memory block files in .letta/backups/working/. Be aggressive about splitting large multi-section blocks into many smaller, single-purpose blocks with fewer sections. Prefer creating new focused files over keeping large blocks. Structure with markdown headers and bullets. Remove redundancy and speculation. Resolve contradictions. Organize logically. Each block should have ONE clear purpose. Create new files for decomposed blocks rather than consolidating. Report files created, modified, deleted, before/after character counts, and rationale for changes."
 })
 
-// Step 3: Restore
+// Step 3: Restore (only after cleanup is approved)
+// Review the subagent's report before running this
 Bash({
   command: "npx tsx <SKILL_DIR>/scripts/restore-memory.ts $LETTA_AGENT_ID .letta/backups/working",
   description: "Restore cleaned memory blocks"
@@ -119,19 +183,23 @@ Preview changes without applying them:
 npx tsx <SKILL_DIR>/scripts/restore-memory.ts $LETTA_AGENT_ID .letta/backups/working --dry-run
 ```
 
-## What the
-
-The
--
--
--
--
--
--
+## What the Subagent Does
+
+The subagent focuses on decomposing and cleaning up files. It has full tool access (including Bash) and:
+- Discovers `.md` files in `.letta/backups/working/` (via Glob or Bash)
+- Reads and examines each file's content
+- Identifies multi-purpose blocks that serve 2+ distinct purposes
+- Splits large blocks into focused, single-purpose components
+- Modifies/creates .md files for decomposed blocks
+- Improves structure with headers and bullet points
+- Removes redundancy and speculation across blocks
+- Resolves contradictions with clear, concrete guidance
+- Organizes content logically (general to specific, by importance)
+- Provides detailed before/after reports including decomposition rationale
 - Does NOT run backup scripts (main agent does this)
 - Does NOT run restore scripts (main agent does this)
 
-The
+The focus is on decomposition—breaking apart large monolithic blocks into focused, specialized components rather than consolidating them together.
 
 ## Tips
 
@@ -142,18 +210,19 @@ The memory subagent runs with `bypassPermissions` mode, giving it full Bash acce
 - Speculation ("probably", "maybe" - make it concrete or remove)
 - Transient details that won't matter in a week
 
-**
--
--
+**Decomposition Strategy:**
+- Split blocks that serve 2+ distinct purposes into focused components
+- Create new specialized blocks with clear, single-purpose titles
+- Example: A "persona" mixing communication style + Git practices → split into "communication-style.md" and "version-control-practices.md"
+- Example: A "project" with structure + patterns + rendering → split into "architecture.md", "patterns.md", "rendering.md"
 - Add clear headers and bullet points for scannability
-- Group similar information together
-- After merging blocks, DELETE the source files to avoid duplication
+- Group similar information together within focused blocks
 
 **When to DELETE a file:**
--
--
--
--
+- Only delete if file contains junk/irrelevant data with no project value
+- Don't delete after decomposing - Each new focused block is valuable
+- Don't delete unique information just to reduce file count
+- Exception: Delete source files only if consolidating multiple blocks into one (rare)
 
 **What to preserve:**
 - User preferences (sacred - never delete)
@@ -21,7 +21,7 @@
 import { mkdirSync, readFileSync, writeFileSync } from "node:fs";
 import { createRequire } from "node:module";
 import { homedir } from "node:os";
-import { join } from "node:path";
+import { dirname, join } from "node:path";
 
 // Use createRequire for @letta-ai/letta-client so NODE_PATH is respected
 // (ES module imports don't respect NODE_PATH, but require does)
@@ -70,7 +70,8 @@ function getApiKey(): string {
  * Create a Letta client with auth from env/settings
  */
 function createClient(): LettaClient {
-
+  const baseUrl = process.env.LETTA_BASE_URL || "https://api.letta.com";
+  return new Letta({ apiKey: getApiKey(), baseUrl });
 }
 
 /**
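The `createClient` change above adds a base-URL fallback: `LETTA_BASE_URL` wins when set, otherwise the hosted endpoint is used. A minimal standalone sketch of just that fallback logic (`resolveBaseUrl` is an illustrative name, not a function in the package):

```typescript
// Sketch of the base-URL fallback added to createClient.
// resolveBaseUrl is a hypothetical helper, not part of @letta-ai/letta-code.
function resolveBaseUrl(env: Record<string, string | undefined>): string {
  // An unset or empty LETTA_BASE_URL falls through to the hosted default.
  return env.LETTA_BASE_URL || "https://api.letta.com";
}

// Self-hosted servers override the default:
console.log(resolveBaseUrl({ LETTA_BASE_URL: "http://localhost:8283" }));
console.log(resolveBaseUrl({}));
```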
@@ -123,9 +124,16 @@ async function backupMemory(
     limit?: number;
   }>) {
   const label = block.label || `block-${block.id}`;
+  // For hierarchical labels like "A/B", create directory A/ with file B.md
   const filename = `${label}.md`;
   const filepath = join(backupPath, filename);
 
+  // Create parent directories if label contains slashes
+  const parentDir = dirname(filepath);
+  if (parentDir !== backupPath) {
+    mkdirSync(parentDir, { recursive: true });
+  }
+
   // Write block content to file
   const content = block.value || "";
   writeFileSync(filepath, content, "utf-8");
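The hunk above is what lets hierarchical block labels survive a backup: `"projects/api"` becomes `projects/api.md` under the backup directory, with the parent directory created on demand. A self-contained sketch of that path logic (`backupFilePath` is an illustrative helper, not a function in the package):

```typescript
import { existsSync, mkdirSync, mkdtempSync } from "node:fs";
import { tmpdir } from "node:os";
import { dirname, join } from "node:path";

// Sketch of the hierarchical-label handling added to backupMemory:
// a label like "projects/api" maps to <backupPath>/projects/api.md.
// backupFilePath is a hypothetical name for illustration only.
function backupFilePath(backupPath: string, label: string): string {
  const filepath = join(backupPath, `${label}.md`);
  const parentDir = dirname(filepath);
  if (parentDir !== backupPath) {
    // Only nested labels need intermediate directories.
    mkdirSync(parentDir, { recursive: true });
  }
  return filepath;
}

// Demo: a nested label creates projects/, a flat label does not nest.
const base = mkdtempSync(join(tmpdir(), "letta-demo-"));
const nested = backupFilePath(base, "projects/api");
const flat = backupFilePath(base, "persona");
console.log(existsSync(dirname(nested)), flat.endsWith("persona.md"));
```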
@@ -16,10 +16,10 @@
  * npx tsx restore-memory.ts $LETTA_AGENT_ID .letta/backups/working --dry-run
  */
 
-import { readdirSync, readFileSync } from "node:fs";
+import { readdirSync, readFileSync, statSync } from "node:fs";
 import { createRequire } from "node:module";
 import { homedir } from "node:os";
-import { extname, join } from "node:path";
+import { extname, join, relative } from "node:path";
 
 import type { BackupManifest } from "./backup-memory";
 
@@ -57,7 +57,33 @@ function getApiKey(): string {
  * Create a Letta client with auth from env/settings
  */
 function createClient(): LettaClient {
-
+  const baseUrl = process.env.LETTA_BASE_URL || "https://api.letta.com";
+  return new Letta({ apiKey: getApiKey(), baseUrl });
+}
+
+/**
+ * Recursively scan directory for .md files
+ * Returns array of relative file paths from baseDir
+ */
+function scanMdFiles(dir: string, baseDir: string = dir): string[] {
+  const results: string[] = [];
+  const entries = readdirSync(dir);
+
+  for (const entry of entries) {
+    const fullPath = join(dir, entry);
+    const stat = statSync(fullPath);
+
+    if (stat.isDirectory()) {
+      // Recursively scan subdirectory
+      results.push(...scanMdFiles(fullPath, baseDir));
+    } else if (stat.isFile() && extname(entry) === ".md") {
+      // Convert to relative path from baseDir
+      const relativePath = relative(baseDir, fullPath);
+      results.push(relativePath);
+    }
+  }
+
+  return results;
 }
 
 /**
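To see what the new `scanMdFiles` produces, here is a standalone copy of the function from the hunk above plus a small demo tree; the returned paths are relative to the scan root, which is what lets nested files round-trip back to `A/B`-style labels:

```typescript
import { mkdirSync, mkdtempSync, readdirSync, statSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { extname, join, relative } from "node:path";

// Standalone copy of the recursive scanner added in the diff, for illustration.
function scanMdFiles(dir: string, baseDir: string = dir): string[] {
  const results: string[] = [];
  for (const entry of readdirSync(dir)) {
    const fullPath = join(dir, entry);
    const stat = statSync(fullPath);
    if (stat.isDirectory()) {
      results.push(...scanMdFiles(fullPath, baseDir));
    } else if (stat.isFile() && extname(entry) === ".md") {
      results.push(relative(baseDir, fullPath));
    }
  }
  return results;
}

// Demo tree: persona.md at the root, projects/api.md nested one level down.
const base = mkdtempSync(join(tmpdir(), "letta-scan-"));
writeFileSync(join(base, "persona.md"), "persona");
mkdirSync(join(base, "projects"));
writeFileSync(join(base, "projects", "api.md"), "api");
writeFileSync(join(base, "notes.txt"), "ignored"); // non-.md files are skipped
const found = scanMdFiles(base).sort();
console.log(found);
```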
@@ -77,22 +103,30 @@ async function restoreMemory(
     console.log("⚠️ DRY RUN MODE - No changes will be made\n");
   }
 
-  // Read manifest
+  // Read manifest for metadata only (block IDs)
   const manifestPath = join(backupDir, "manifest.json");
   let manifest: BackupManifest | null = null;
 
   try {
     const manifestContent = readFileSync(manifestPath, "utf-8");
     manifest = JSON.parse(manifestContent);
-    console.log(`Loaded manifest (${manifest?.blocks.length} blocks)\n`);
   } catch {
-
-      "Warning: No manifest.json found, will scan directory for .md files",
-    );
+    // Manifest is optional
   }
 
-  // Get current agent blocks
-  const
+  // Get current agent blocks using direct fetch (SDK may hit wrong server)
+  const baseUrl = process.env.LETTA_BASE_URL || "https://api.letta.com";
+  const blocksResp = await fetch(
+    `${baseUrl}/v1/agents/${agentId}/core-memory`,
+    {
+      headers: { Authorization: `Bearer ${getApiKey()}` },
+    },
+  );
+  if (!blocksResp.ok) {
+    throw new Error(`Failed to list blocks: ${blocksResp.status}`);
+  }
+  const blocksJson = (await blocksResp.json()) as { blocks: unknown[] };
+  const blocksResponse = blocksJson.blocks;
   const currentBlocks = Array.isArray(blocksResponse)
     ? blocksResponse
     : (blocksResponse as { items?: unknown[] }).items ||
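The hunk above swaps the SDK call for a direct `fetch` against the agent's core-memory endpoint. A sketch of just the request shape (the `/v1/agents/{id}/core-memory` path and Bearer header come from the diff; `coreMemoryRequest` is an illustrative helper, not package code):

```typescript
// Sketch of the request the restore script now builds directly.
// coreMemoryRequest is a hypothetical name used for illustration only.
function coreMemoryRequest(baseUrl: string, agentId: string, apiKey: string) {
  return {
    url: `${baseUrl}/v1/agents/${agentId}/core-memory`,
    headers: { Authorization: `Bearer ${apiKey}` },
  };
}

const req = coreMemoryRequest("https://api.letta.com", "agent-123", "sk-test");
console.log(req.url);
```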
@@ -104,32 +138,22 @@ async function restoreMemory(
     ),
   );
 
-  //
-
-
-
-
-
-
-  //
-
-
-
-
-
-
-    const files = readdirSync(backupDir);
-    filesToRestore = files
-      .filter((f) => extname(f) === ".md")
-      .map((f) => ({
-        label: f.replace(/\.md$/, ""),
-        filename: f,
-      }));
-  }
-
-  console.log(`Found ${filesToRestore.length} files to restore\n`);
+  // Always scan directory for .md files (manifest is only used for block IDs)
+  const files = scanMdFiles(backupDir);
+  console.log(`Scanned ${files.length} .md files\n`);
+  const filesToRestore = files.map((relativePath) => {
+    // Convert path like "A/B.md" to label "A/B"
+    // Replace backslashes with forward slashes (Windows compatibility)
+    const normalizedPath = relativePath.replace(/\\/g, "/");
+    const label = normalizedPath.replace(/\.md$/, "");
+    // Look up block ID from manifest if available
+    const manifestBlock = manifest?.blocks.find((b) => b.label === label);
+    return {
+      label,
+      filename: relativePath,
+      blockId: manifestBlock?.id,
+    };
+  });
 
   // Detect blocks to delete (exist on agent but not in backup)
   const backupLabels = new Set(filesToRestore.map((f) => f.label));
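The path-to-label conversion in the hunk above is small but load-bearing: backslashes are normalized before the `.md` suffix is stripped, so a nested backup file maps to the same label on Windows and POSIX. A sketch of that conversion in isolation (`pathToLabel` is an illustrative name, not a function in the package):

```typescript
import { join } from "node:path";

// Sketch of mapping a scanned file path back to a block label:
// normalize Windows separators first, then strip the ".md" extension.
// pathToLabel is a hypothetical helper used for illustration only.
function pathToLabel(relativePath: string): string {
  return relativePath.replace(/\\/g, "/").replace(/\.md$/, "");
}

console.log(pathToLabel("persona.md"));      // flat file at the backup root
console.log(pathToLabel(join("A", "B.md"))); // nested file, same label on any OS
```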
@@ -140,16 +164,8 @@ async function restoreMemory(
   // Restore each block
   let updated = 0;
   let created = 0;
-  let skipped = 0;
   let deleted = 0;
 
-  // Track new blocks for later confirmation
-  const blocksToCreate: Array<{
-    label: string;
-    value: string;
-    description: string;
-  }> = [];
-
   for (const { label, filename } of filesToRestore) {
     const filepath = join(backupDir, filename);
 
@@ -158,95 +174,62 @@ async function restoreMemory(
     const existingBlock = blocksByLabel.get(label);
 
     if (existingBlock) {
-      // Update existing block
-      const unchanged = existingBlock.value === newValue;
-
-      if (unchanged) {
-        console.log(`  ⏭️ ${label} - unchanged, skipping`);
-        skipped++;
-        continue;
-      }
-
+      // Update existing block using block ID (not label, which may contain /)
       if (!options.dryRun) {
-
-
-
+        const baseUrl = process.env.LETTA_BASE_URL || "https://api.letta.com";
+        const url = `${baseUrl}/v1/blocks/${existingBlock.id}`;
+        const resp = await fetch(url, {
+          method: "PATCH",
+          headers: {
+            "Content-Type": "application/json",
+            Authorization: `Bearer ${getApiKey()}`,
+          },
+          body: JSON.stringify({ value: newValue }),
         });
+        if (!resp.ok) {
+          throw new Error(`${resp.status} ${await resp.text()}`);
+        }
       }
 
       const oldLen = existingBlock.value?.length || 0;
       const newLen = newValue.length;
-      const
-      const diffStr = diff > 0 ? `+${diff}` : `${diff}`;
+      const unchanged = existingBlock.value === newValue;
 
-
-        `  ✓ ${label} -
-
+      if (unchanged) {
+        console.log(`  ✓ ${label} - restored (${newLen} chars, unchanged)`);
+      } else {
+        const diff = newLen - oldLen;
+        const diffStr = diff > 0 ? `+${diff}` : `${diff}`;
+        console.log(
+          `  ✓ ${label} - restored (${oldLen} -> ${newLen} chars, ${diffStr})`,
+        );
+      }
       updated++;
     } else {
-      // New block -
-
-      blocksToCreate.push({
-        label,
-        value: newValue,
-        description: `Memory block: ${label}`,
-      });
-    }
-  } catch (error) {
-    console.error(
-      `  ❌ ${label} - error: ${error instanceof Error ? error.message : String(error)}`,
-    );
-  }
-}
-
-// Handle new blocks (exist in backup but not on agent)
-if (blocksToCreate.length > 0) {
-  console.log(`\n➕ Found ${blocksToCreate.length} new block(s) to create:`);
-  for (const block of blocksToCreate) {
-    console.log(`  - ${block.label} (${block.value.length} chars)`);
-  }
-
-  if (!options.dryRun) {
-    console.log(`\nThese blocks will be CREATED on the agent.`);
-    console.log(
-      `Press Ctrl+C to cancel, or press Enter to confirm creation...`,
-    );
-
-    // Wait for user confirmation
-    await new Promise<void>((resolve) => {
-      process.stdin.once("data", () => resolve());
-    });
-
-    console.log();
-    for (const block of blocksToCreate) {
-      try {
-        // Create the block
+      // New block - create immediately
+      if (!options.dryRun) {
         const createdBlock = await client.blocks.create({
-          label
-          value:
-          description: block
+          label,
+          value: newValue,
+          description: `Memory block: ${label}`,
           limit: 20000,
         });
 
         if (!createdBlock.id) {
-          throw new Error(`Created block ${
+          throw new Error(`Created block ${label} has no ID`);
         }
 
-        // Attach the newly created block to the agent
         await client.agents.blocks.attach(createdBlock.id, {
           agent_id: agentId,
         });
-
-        console.log(`  ✅ ${block.label} - created and attached`);
-        created++;
-      } catch (error) {
-        console.error(
-          `  ❌ ${block.label} - error creating: ${error instanceof Error ? error.message : String(error)}`,
-        );
       }
+      console.log(`  ✓ ${label} - created (${newValue.length} chars)`);
+      created++;
     }
-    }
-    console.
+    } catch (error) {
+      console.error(
+        `  ❌ ${label} - error: ${error instanceof Error ? error.message : String(error)}`,
+      );
     }
   }
 
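The key change in the restore loop above is that existing blocks are now PATCHed by block ID rather than addressed by label, so labels containing `/` never end up in a URL path. A sketch of just the request assembly (the `/v1/blocks/{id}` path, method, and headers come from the diff; `buildBlockPatch` is an illustrative helper, not package code):

```typescript
// Sketch of the per-block PATCH the restore loop now issues.
// buildBlockPatch is a hypothetical name used for illustration only;
// the returned object is the (url, init) pair you would hand to fetch.
function buildBlockPatch(baseUrl: string, blockId: string, apiKey: string, value: string) {
  return {
    url: `${baseUrl}/v1/blocks/${blockId}`, // ID-based: safe for labels like "A/B"
    init: {
      method: "PATCH",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({ value }),
    },
  };
}

const patch = buildBlockPatch("https://api.letta.com", "block-42", "sk-test", "new value");
console.log(patch.url, patch.init.method);
```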
@@ -290,8 +273,7 @@ async function restoreMemory(
   }
 
   console.log(`\n📊 Summary:`);
-  console.log(`
-  console.log(`  Skipped: ${skipped}`);
+  console.log(`  Restored: ${updated}`);
   console.log(`  Created: ${created}`);
   console.log(`  Deleted: ${deleted}`);