@letta-ai/letta-code 0.24.0 → 0.24.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@letta-ai/letta-code",
-  "version": "0.24.0",
+  "version": "0.24.2",
   "description": "Letta Code is a CLI tool for interacting with stateful Letta agents from the terminal.",
   "type": "module",
   "bin": {
@@ -33,7 +33,7 @@
     "access": "public"
   },
   "dependencies": {
-    "@letta-ai/letta-client": "1.10.1",
+    "@letta-ai/letta-client": "^1.10.2",
     "glob": "^13.0.0",
     "highlight.js": "^11.11.1",
    "ink-link": "^5.0.0",
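The dependency change above is more than a version bump: `1.10.1` was an exact pin, while `^1.10.2` is a caret range that accepts any compatible release below the next major. A minimal sketch of the caret rule for majors >= 1 (an illustration only, not npm's actual matcher; the real `semver` package also handles `0.x` versions and prerelease tags):

```typescript
// Illustrative only: how a caret range like "^1.10.2" matches, for major >= 1.
// npm's real matcher is the `semver` package; this ignores 0.x and prerelease rules.
function satisfiesCaret(version: string, base: string): boolean {
  const parse = (v: string): [number, number, number] =>
    v.split(".").map(Number) as [number, number, number];
  const [baseMajor, baseMinor, basePatch] = parse(base);
  const [major, minor, patch] = parse(version);
  if (major !== baseMajor) return false; // caret never crosses the major version
  if (minor !== baseMinor) return minor > baseMinor; // any later minor is allowed
  return patch >= basePatch; // same minor: patch must be at least the base
}

console.log(satisfiesCaret("1.10.1", "1.10.2")); // false: the previously pinned version no longer satisfies
console.log(satisfiesCaret("1.11.0", "1.10.2")); // true
console.log(satisfiesCaret("2.0.0", "1.10.2"));  // false
```

In practice this means installs can now pick up any `1.x` release of `@letta-ai/letta-client` at or above `1.10.2` without a further package.json edit.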
@@ -12,6 +12,8 @@ Your context is what makes you *you* across sessions. You are responsible for ma
 
 Over time, context can degrade — bloat and poor prompt quality erode your ability to remember the right things and follow instructions properly. This skill helps you identify issues with your context and repair them collaboratively with the user.
 
+**IMPORTANT**: Your edits of your system instructions should be **conservative**. Do NOT make assumptions about what parts of the system prompt are critical. The system prompt defines who you are, so significant modifications to its structure can have unintended consequences. Focus on making minimal changes to meet the token budget, and to effectively link out to external memory.
+
 ## Operating Procedure
 
 ### Step 1: Identify and resolve context issues
@@ -19,37 +21,33 @@ Explore your memory files to identify issues. Consider what is confusing about y
 
 Below are additional common issues with context and how they can be resolved:
 
-### Context quality
-Your system prompt and memory filesystem should be well structured and clear.
-
-**Questions to ask**:
-- Is my system prompt clear and well formatted?
-- Are there wasteful or unnecessary tokens in my prompts?
-- Do I know when to load which files in my memory filesystem?
-
-#### System prompt bloat
-Memories that are compiled as part of the system prompt (contained in `system/`) should only take up about 10% of the total context size (usually ~15-20K tokens), though this is a recommendation, not a hard requirement.
+#### System prompt bloat
+Memories compiled into the system prompt (contained in `system/`) should take up about 10% of the total context size (usually ~15-20K tokens). This is a soft target, not a hard requirement.
 
-Use the following script to evaluate the token usage of the system prompt:
+Use the following script to evaluate the token usage of the system prompt:
 ```bash
 npx tsx <SKILL_DIR>/scripts/estimate_system_tokens.ts --memory-dir "$MEMORY_DIR"
 ```
 Where `<SKILL_DIR>` is the Skill Directory shown when the skill was loaded (visible in the injection header).
 
-**Questions to ask**:
-- Do all these tokens need to be passed to the LLM on every turn, or can they be retrieved when needed through being part of external memory or conversation history?
-- Do any of these prompts confuse or distract me?
-- Am I able to effectively follow critical instructions (e.g. persona information, user preferences) given the current prompt structure and contents?
+**Why detail is load-bearing (read this before cutting anything)**: In-context detail does more than carry information. It does at least four things, and byte-counting sweeps only see the first:
+1. **Information** — the literal facts stated
+2. **Attention anchoring** — makes certain topics feel important to the model when it's reasoning
+3. **Semantic priming** — raises the prior on codebase-specific patterns ("this codebase has weird X, don't assume defaults")
+4. **Reasoning templates** — past examples become heuristics for new bugs; rationale in "why" prose becomes scaffolding
+
+Compression preserves (1). It destroys (2), (3), and (4). That's why a compressed prompt can make an agent measurably worse at codebase-specific reasoning even though the explicit facts are all "still there" in reference files.
+
+**Reference links (`[[path]]`) are NOT equivalent to in-context presence.** They're latent until the agent actively fetches them. An agent only fetches when it already knows it doesn't know. The priming cues that tell it *when* it doesn't know are in the system prompt itself — they can't be replaced by links.
 
-**Solution**: Reduce the size of the system prompt if needed:
-- Move files outside of `system/` so they are no longer part of the system prompt
-- Compact information to be more information dense or eliminate redundancy
-- Leverage progressive disclosure: move some context outside of `system/` and reference it via `[[path]]` links to create discovery paths
+**When to intervene**: Only if the system prompt is *meaningfully* over target. At or near the target, leave it alone. Every edit risks removing content that was doing work you can't see. A prompt that feels "a bit long" is almost always better than one that's been aggressively trimmed.
 
-**Scope**: You may refine, tighten, and restructure prompts to improve clarity and adherence but do not change the intended semantics. The goal is better signal, not different behavior.
-- Do not alter persona-defining content (who you are, how you communicate)
-- Do not remove or change user identity or preferences (e.g. the human's name, their stated goals)
-- Do not rewrite instructions in ways that shift their meaning only reduce noise and improve structure
+**Modifying the system prompt**: Make the **MINIMAL** changes required to cut the token count of the system prompt. The goal is to preserve existing behavior while reducing the token count. Focus on reducing redundancy or compressing rather than offloading entire sections to external memory.
+- Preserve persona-defining content (who you are, how you communicate)
+- Preserve user identity and preferences (e.g. the human's name, their stated goals)
+- Maintain the existing distribution of detail: compression should be applied evenly across all topics. If the original prompt was 50% about a specific issue, the new prompt should also be 50% about that issue.
+- Only reduce noise and improve structure; if compression must result in information loss, preserve the lost details in external memory
 
 #### Context redundancy and unclear organization
 The context in the memory filesystem should have a clear structure, with a well-defined purpose for each file. Memory file descriptions should be precise and non-overlapping. Their contents should be consistent with the description, and have non-overlapping content to other files.
@@ -98,10 +96,13 @@ Sarah's active projects are: Letta Code [[projects/letta_code.md]] and Letta Clo
 - Make sure your future self will be able to find and load external files when needed.
 
 ### Step 2: Implement context fixes
-Create a plan for what fixes you want to make, then implement them.
+Create a plan for what fixes you want to make, then implement them. Favor the smallest possible change that resolves the issue — if the system prompt is 1.5× the target, don't cut it to half the target "for headroom." Cut until you're near the target, then stop.
 
 Before moving on, verify:
 - [ ] System prompt token budget reviewed (target ~10% of context, usually 15-20k tokens)
+- [ ] Changes are proportional to the problem — only offloaded what's needed to meet the target
+- [ ] Preserved detailed rationale, examples, and cross-references in sections that stayed in `system/`
+- [ ] Preferred moving whole files or deleting stale sections over compressing detailed sections into summaries
 - [ ] No overlapping or redundant files remain
 - [ ] All file descriptions are unique, accurate, and match their contents
 - [ ] Moved-out knowledge has `[[path]]` references from in-context memory so it can be discovered
@@ -130,4 +131,4 @@ Before finishing make sure you:
 - [ ] Told the user to run `/recompile` to refresh the system prompt and apply changes
 
 ## Critical information
-- **Ask the user about their goals for you, not the implementation**: You understand your own context best, and should follow the guidelines in this document. Do NOT ask the user about their structural preferences — the context is for YOU, not them. Ask them how they want YOU to behave or know instead.
+- **Ask the user about their goals for you, not the implementation**: You understand your own context best, and should follow the guidelines in this document. Do NOT ask the user about their structural preferences — the context is for YOU, not them. Ask them how they want YOU to behave or know instead.
@@ -292,7 +292,7 @@ If the worker output is generic, the worker failed. "User is direct" or "project
 **IMPORTANT**: Use this prompt template to ensure workers extract all required categories:
 
 ```
-Task({
+Agent({
   subagent_type: "history-analyzer",
   description: "Process chunk [N] of [SOURCE] history",
   prompt: `## Assignment
@@ -560,7 +560,7 @@ Launch exploration subagents in a **single message** so they run concurrently.
 
 ```
 # After initial scan reveals key areas, launch parallel explorers in the background:
-Task({
+Agent({
   subagent_type: "general-purpose",
   description: "Explore API layer",
   run_in_background: true,
@@ -573,7 +573,7 @@ Return:
 4. gotchas or deprecated paths
 5. file paths worth storing in memory`
 })
-Task({
+Agent({
   subagent_type: "general-purpose",
   description: "Explore frontend layer",
   run_in_background: true,
@@ -586,7 +586,7 @@ Return:
 4. gotchas or fragile areas
 5. file paths worth storing in memory`
 })
-Task({
+Agent({
   subagent_type: "general-purpose",
   description: "Explore shared systems",
   run_in_background: true,
@@ -30,9 +30,9 @@ This skill enables you to send messages to other agents on the same Letta server
 
 **Important:** This skill is for *communication* with other agents, not *delegation* of local work. The target agent runs in their own environment and cannot interact with your codebase.
 
-**Need local access?** If you need the target agent to access your local environment (read/write files, run commands), use the Task tool instead to deploy them as a subagent:
+**Need local access?** If you need the target agent to access your local environment (read/write files, run commands), use the Agent tool instead to deploy them as a subagent:
 ```typescript
-Task({
+Agent({
   agent_id: "agent-xxx", // Deploy this existing agent
   subagent_type: "general-purpose", // read-write access to your local tools
   prompt: "Look at the code in src/ and tell me about the architecture"