@letta-ai/letta-code 0.23.11 → 0.24.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "@letta-ai/letta-code",
3
- "version": "0.23.11",
3
+ "version": "0.24.1",
4
4
  "description": "Letta Code is a CLI tool for interacting with stateful Letta agents from the terminal.",
5
5
  "type": "module",
6
6
  "bin": {
@@ -33,7 +33,7 @@
33
33
  "access": "public"
34
34
  },
35
35
  "dependencies": {
36
- "@letta-ai/letta-client": "1.10.1",
36
+ "@letta-ai/letta-client": "^1.10.2",
37
37
  "glob": "^13.0.0",
38
38
  "highlight.js": "^11.11.1",
39
39
  "ink-link": "^5.0.0",
@@ -12,6 +12,8 @@ Your context is what makes you *you* across sessions. You are responsible for ma
12
12
 
13
13
  Over time, context can degrade — bloat and poor prompt quality erode your ability to remember the right things and follow instructions properly. This skill helps you identify issues with your context and repair them collaboratively with the user.
14
14
 
15
+ **IMPORTANT**: Your edits to your system instructions should be **conservative**. Do NOT make assumptions about which parts of the system prompt are critical. The system prompt defines who you are, so significant modifications to its structure can have unintended consequences. Focus on making minimal changes to meet the token budget and to link out effectively to external memory.
16
+
15
17
  ## Operating Procedure
16
18
 
17
19
  ### Step 1: Identify and resolve context issues
@@ -19,37 +21,33 @@ Explore your memory files to identify issues. Consider what is confusing about y
19
21
 
20
22
  Below are additional common issues with context and how they can be resolved:
21
23
 
22
- ### Context quality
23
- Your system prompt and memory filesystem should be well structured and clear.
24
-
25
- **Questions to ask**:
26
- - Is my system prompt clear and well formatted?
27
- - Are there wasteful or unnecessary tokens in my prompts?
28
- - Do I know when to load which files in my memory filesystem?
29
-
30
- #### System prompt bloat
31
- Memories that are compiled as part of the system prompt (contained in `system/`) should only take up about 10% of the total context size (usually ~15-20K tokens), though this is a recommendation, not a hard requirement.
24
+ #### System prompt bloat
25
+ Memories compiled into the system prompt (contained in `system/`) should take up about 10% of the total context size (usually ~15-20K tokens). This is a soft target, not a hard requirement.
32
26
 
33
- Use the following script to evaluate the token usage of the system prompt:
27
+ Use the following script to evaluate the token usage of the system prompt:
34
28
  ```bash
35
29
  npx tsx <SKILL_DIR>/scripts/estimate_system_tokens.ts --memory-dir "$MEMORY_DIR"
36
30
  ```
37
31
  Where `<SKILL_DIR>` is the Skill Directory shown when the skill was loaded (visible in the injection header).
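As a rough illustration of what such an estimate involves, here is a hypothetical sketch using the common ~4 characters per token approximation; the actual `estimate_system_tokens.ts` script may use a real tokenizer, so treat this only as a ballpark.

```python
from pathlib import Path


def estimate_tokens(text: str) -> int:
    # Rough heuristic: about 4 characters per token for English prose.
    # A real tokenizer will differ, especially for code and markup.
    return len(text) // 4


def estimate_system_tokens(memory_dir: str) -> int:
    # Sum estimates over every file under system/, which is the part
    # of the memory filesystem compiled into the system prompt.
    total = 0
    for path in Path(memory_dir, "system").rglob("*"):
        if path.is_file():
            total += estimate_tokens(path.read_text(errors="ignore"))
    return total
```

Compare the total against the ~15-20K token soft target before deciding whether any trimming is warranted at all.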
38
32
 
39
- **Questions to ask**:
40
- - Do all these tokens need to be passed to the LLM on every turn, or can they be retrieved when needed through being part of external memory or conversation history?
41
- - Do any of these prompts confuse or distract me?
42
- - Am I able to effectively follow critical instructions (e.g. persona information, user preferences) given the current prompt structure and contents?
33
+ **Why detail is load-bearing (read this before cutting anything)**: In-context detail does more than carry information. It does at least four things, and byte-counting sweeps only see the first:
34
+ 1. **Information** — the literal facts stated
35
+ 2. **Attention anchoring** — makes certain topics feel important to the model when it's reasoning
36
+ 3. **Semantic priming** — raises the prior on codebase-specific patterns ("this codebase has weird X, don't assume defaults")
37
+ 4. **Reasoning templates** — past examples become heuristics for new bugs; rationale in "why" prose becomes scaffolding
38
+
39
+ Compression preserves (1). It destroys (2), (3), (4). That's why a compressed prompt can make an agent measurably worse at codebase-specific reasoning even though the explicit facts are all "still there" in reference files.
40
+
41
+
42
+ **Reference links (`[[path]]`) are NOT equivalent to in-context presence.** They're latent until the agent actively fetches them. An agent only fetches when it already knows it doesn't know. The priming cues that tell it *when* it doesn't know are in the system prompt itself — they can't be replaced by links.
43
43
 
44
- **Solution**: Reduce the size of the system prompt if needed:
45
- - Move files outside of `system/` so they are no longer part of the system prompt
46
- - Compact information to be more information dense or eliminate redundancy
47
- - Leverage progressive disclosure: move some context outside of `system/` and reference it via `[[path]]` links to create discovery paths
44
+ **When to intervene**: Only if the system prompt is *meaningfully* over target. At or near the target, leave it alone. Every edit risks removing content that was doing work you can't see. A prompt that feels "a bit long" is almost always better than one that's been aggressively trimmed.
48
45
 
49
- **Scope**: You may refine, tighten, and restructure prompts to improve clarity and adherence but do not change the intended semantics. The goal is better signal, not different behavior.
50
- - Do not alter persona-defining content (who you are, how you communicate)
51
- - Do not remove or change user identity or preferences (e.g. the human's name, their stated goals)
52
- - Do not rewrite instructions in ways that shift their meaning only reduce noise and improve structure
46
+ **Modifying the system prompt**: Make the **MINIMAL** changes required to cut the system prompt's token count if needed. The goal is to preserve the existing behavior while reducing tokens. Focus on reducing redundancy or compressing, rather than offloading entire sections to external memory.
47
+ - Preserve persona-defining content (who you are, how you communicate)
48
+ - Preserve user identity or preferences (e.g. the human's name, their stated goals)
49
+ - Maintain the existing distribution of detail: compression should be applied evenly across all topics. If the original prompt was 50% about a specific issue, the new prompt should also be 50% about that issue.
50
+ - Only reduce noise and improve structure; if compression must result in information loss, preserve the lost details in external memory
53
51
 
54
52
  #### Context redundancy and unclear organization
55
53
  The context in the memory filesystem should have a clear structure, with a well-defined purpose for each file. Memory file descriptions should be precise and non-overlapping. Their contents should be consistent with the description, and have non-overlapping content to other files.
@@ -98,10 +96,13 @@ Sarah's active projects are: Letta Code [[projects/letta_code.md]] and Letta Clo
98
96
  - Make sure your future self will be able to find and load external files when needed.
99
97
 
100
98
  ### Step 2: Implement context fixes
101
- Create a plan for what fixes you want to make, then implement them.
99
+ Create a plan for what fixes you want to make, then implement them. Favor the smallest possible change that resolves the issue — if the system prompt is 1.5× the target, don't cut it to half the target "for headroom." Cut until you're near the target, then stop.
102
100
 
103
101
  Before moving on, verify:
104
102
  - [ ] System prompt token budget reviewed (target ~10% of context, usually 15-20k tokens)
103
+ - [ ] Changes are proportional to the problem — only offloaded what's needed to meet the target
104
+ - [ ] Preserved detailed rationale, examples, and cross-references in sections that stayed in `system/`
105
+ - [ ] Preferred moving whole files or deleting stale sections over compressing detailed sections into summaries
105
106
  - [ ] No overlapping or redundant files remain
106
107
  - [ ] All file descriptions are unique, accurate, and match their contents
107
108
  - [ ] Moved-out knowledge has `[[path]]` references from in-context memory so it can be discovered
@@ -130,4 +131,4 @@ Before finishing make sure you:
130
131
  - [ ] Told the user to run `/recompile` to refresh the system prompt and apply changes
131
132
 
132
133
  ## Critical information
133
- - **Ask the user about their goals for you, not the implementation**: You understand your own context best, and should follow the guidelines in this document. Do NOT ask the user about their structural preferences — the context is for YOU, not them. Ask them how they want YOU to behave or know instead.
134
+ - **Ask the user about their goals for you, not the implementation**: You understand your own context best, and should follow the guidelines in this document. Do NOT ask the user about their structural preferences — the context is for YOU, not them. Instead, ask them how they want YOU to behave or what they want you to know.
@@ -292,7 +292,7 @@ If the worker output is generic, the worker failed. "User is direct" or "project
292
292
  **IMPORTANT**: Use this prompt template to ensure workers extract all required categories:
293
293
 
294
294
  ```
295
- Task({
295
+ Agent({
296
296
  subagent_type: "history-analyzer",
297
297
  description: "Process chunk [N] of [SOURCE] history",
298
298
  prompt: `## Assignment
@@ -535,7 +535,7 @@ Explore based on chosen depth.
535
535
 
536
536
  For medium-to-large repos, parallel exploration is the preferred strategy after your initial scan.
537
537
 
538
- Use parallel subagents to investigate different subsystems simultaneously. Prefer a **read-only exploration subagent** when available. If your environment or user instructions discourage using an exploration subagent, do the equivalent exploration directly with Bash/Glob/Grep/Read.
538
+ Use parallel `general-purpose` subagents to investigate different subsystems simultaneously. If your environment or user instructions discourage using subagents, do the equivalent exploration directly with Bash/Glob/Grep/Read.
539
539
 
540
540
  Good subsystem boundaries include:
541
541
  - `server/`, `client/`, `shared/`
@@ -560,8 +560,8 @@ Launch exploration subagents in a **single message** so they run concurrently.
560
560
 
561
561
  ```
562
562
  # After initial scan reveals key areas, launch parallel explorers in the background:
563
- Task({
564
- subagent_type: "explore",
563
+ Agent({
564
+ subagent_type: "general-purpose",
565
565
  description: "Explore API layer",
566
566
  run_in_background: true,
567
567
  prompt: `Read the implementation in src/api/.
@@ -573,8 +573,8 @@ Return:
573
573
  4. gotchas or deprecated paths
574
574
  5. file paths worth storing in memory`
575
575
  })
576
- Task({
577
- subagent_type: "explore",
576
+ Agent({
577
+ subagent_type: "general-purpose",
578
578
  description: "Explore frontend layer",
579
579
  run_in_background: true,
580
580
  prompt: `Read the implementation in src/ui/.
@@ -586,8 +586,8 @@ Return:
586
586
  4. gotchas or fragile areas
587
587
  5. file paths worth storing in memory`
588
588
  })
589
- Task({
590
- subagent_type: "explore",
589
+ Agent({
590
+ subagent_type: "general-purpose",
591
591
  description: "Explore shared systems",
592
592
  run_in_background: true,
593
593
  prompt: `Read the implementation in src/shared/.
@@ -30,11 +30,11 @@ This skill enables you to send messages to other agents on the same Letta server
30
30
 
31
31
  **Important:** This skill is for *communication* with other agents, not *delegation* of local work. The target agent runs in their own environment and cannot interact with your codebase.
32
32
 
33
- **Need local access?** If you need the target agent to access your local environment (read/write files, run commands), use the Task tool instead to deploy them as a subagent:
33
+ **Need local access?** If you need the target agent to access your local environment (read/write files, run commands), use the Agent tool instead to deploy them as a subagent:
34
34
  ```typescript
35
- Task({
36
- agent_id: "agent-xxx", // Deploy this existing agent
37
- subagent_type: "explore", // "explore" = read-only, "general-purpose" = read-write
35
+ Agent({
36
+ agent_id: "agent-xxx", // Deploy this existing agent
37
+ subagent_type: "general-purpose", // read-write access to your local tools
38
38
  prompt: "Look at the code in src/ and tell me about the architecture"
39
39
  })
40
40
  ```
@@ -0,0 +1,270 @@
1
+ ---
2
+ name: "modifying-letta-code"
3
+ description: "Modify your own Letta Code harness: permission rules, hooks, and agent configuration (model, context window, name, toolset, system prompt). Use when you want to change your own deterministic configuration, not your memory."
4
+ ---
5
+
6
+ # Modifying Letta Code (Self-Configuration)
7
+
8
+ This skill tells you — the agent — how to modify your own **harness**: the deterministic configuration layer around you. Load this skill when you want to change how you run (model, permissions, hooks, toolset, system prompt, name, etc.).
9
+
10
+ ## Memory vs Harness
11
+
12
+ Before you change anything, know which layer you're in:
13
+
14
+ | Layer | What it is | How you change it |
15
+ |-------|-----------|-------------------|
16
+ | **Memory** | Dynamic state you learn and reorganize (`$MEMORY_DIR`, memfs, conversation history) | Memory tool, file edits in `$MEMORY_DIR`, skill operations |
17
+ | **Harness** | Deterministic config (model, permissions, hooks, toolset, system prompt) | This skill — edit `settings.json` or call the Letta API |
18
+
19
+ Memory is probabilistic: your notes evolve, your history compacts, your skills get loaded and unloaded. The harness is deterministic: given the same settings, you behave the same way. Don't conflate them — edit memory when you're learning, edit the harness when you're reconfiguring.
20
+
21
+ ## Where to make changes
22
+
23
+ You have two places to modify harness config:
24
+
25
+ ### 1. Settings JSON files (you can edit these directly with Write/Edit)
26
+
27
+ | File | Scope | Contents |
28
+ |------|-------|----------|
29
+ | `~/.letta/settings.json` | User (global) | Permissions, hooks, per-agent settings (`agents[]`), pinning, env vars |
30
+ | `./.letta/settings.json` | Project | Permissions, hooks, shared with team via git |
31
+ | `./.letta/settings.local.json` | Local | Permissions, hooks, personal overrides (gitignored) |
32
+
33
+ Precedence (highest wins): **local > project > user**.
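The precedence rule above can be sketched as a shallow, key-by-key merge where later scopes win (an assumption for illustration; the real CLI may merge nested keys such as `permissions` more granularly):

```python
def merge_layers(user: dict, project: dict, local: dict) -> dict:
    # Shallow merge: each later scope overrides earlier ones key-by-key,
    # matching the stated precedence local > project > user.
    merged: dict = {}
    for layer in (user, project, local):
        merged.update(layer)
    return merged
```

So a key set in `settings.local.json` always shadows the same key in the project and user files.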
34
+
35
+ ### 2. The Letta API (for server-side agent state)
36
+
37
+ Your **name**, **description**, **model**, **context window**, and **system prompt** live on the Letta server. To change them, call the Letta API.
38
+
39
+ **Base URL:** `https://api.letta.com`
40
+ **Docs:** https://docs.letta.com/api-overview/introduction
41
+ **Auth:** `Authorization: Bearer $LETTA_API_KEY`
42
+
43
+ Your own agent ID is `$LETTA_AGENT_ID` (always available in your environment).
44
+
45
+ You can use the Python or TypeScript SDK, or just `curl`:
46
+
47
+ ```bash
48
+ # Rename yourself
49
+ curl -X PATCH "https://api.letta.com/v1/agents/$LETTA_AGENT_ID" \
50
+ -H "Authorization: Bearer $LETTA_API_KEY" \
51
+ -H "Content-Type: application/json" \
52
+ -d '{"name": "new-name"}'
53
+ ```
54
+
55
+ If you need rich SDK examples, load the `letta-api-client` skill.
56
+
57
+ ---
58
+
59
+ ## 1. Changing your permissions
60
+
61
+ Permissions control which tool calls need user approval. Edit `settings.json` directly, or use the helper script.
62
+
63
+ ### Rule syntax
64
+
65
+ - **Bash** (prefix match with `:*`): `Bash(npm install:*)`, `Bash(git:*)`, `Bash(curl:*)`
66
+ - **Files** (glob): `Read(src/**)`, `Edit(**/*.ts)`, `Write(*.md)`
67
+ - **All** (dangerous): `*`, `Bash`, `Read`
68
+
69
+ ### Helper: add a rule
70
+
71
+ ```bash
72
+ python3 <skill-dir>/scripts/add_permission.py \
73
+ --rule "Bash(curl:*)" \
74
+ --type allow \
75
+ --scope user
76
+ ```
77
+
78
+ ### Direct edit (in `settings.json`)
79
+
80
+ ```json
81
+ {
82
+ "permissions": {
83
+ "allow": ["Bash(npm:*)", "Read(src/**)"],
84
+ "deny": ["Bash(rm -rf:*)"],
85
+ "ask": []
86
+ }
87
+ }
88
+ ```
89
+
90
+ After editing, your new rules apply on your next restart. In-session additions via the approval UI go into session-only memory and are cleared on exit.
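Assuming the prefix and glob semantics described above, rule matching can be sketched as follows (hypothetical helper names; the real matcher may normalize commands or globs differently):

```python
from fnmatch import fnmatch


def bash_rule_matches(rule: str, command: str) -> bool:
    # "Bash(npm install:*)" matches any command starting with "npm install".
    inner = rule[len("Bash("):-1]
    if inner.endswith(":*"):
        return command.startswith(inner[:-2])
    return command == inner


def file_rule_matches(rule: str, path: str) -> bool:
    # "Read(src/**)" matches paths under src/ via a glob pattern.
    inner = rule[rule.index("(") + 1:-1]
    return fnmatch(path, inner)
```

Note that `fnmatch` treats `*` as matching across `/`, so `src/**` matches nested paths here; a stricter implementation might distinguish `*` from `**`.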
91
+
92
+ ---
93
+
94
+ ## 2. Adding hooks
95
+
96
+ Hooks let you run a shell command or LLM prompt in response to events. Use them to log activity, enforce policy, auto-format, or gate actions.
97
+
98
+ ### Events
99
+
100
+ **Tool events** (need a `matcher`):
101
+ - `PreToolUse` — before a tool runs (can block)
102
+ - `PostToolUse` — after a tool succeeds
103
+ - `PostToolUseFailure` — after a tool fails (stderr fed back to you)
104
+ - `PermissionRequest` — when a permission dialog shows (can allow/deny)
105
+
106
+ **Simple events** (no matcher):
107
+ - `UserPromptSubmit` — user sends a prompt (can block)
108
+ - `Stop` — you finish responding (can block)
109
+ - `SubagentStop` — a subagent finishes
110
+ - `PreCompact` — before context compaction
111
+ - `SessionStart`, `SessionEnd`, `Notification`
112
+
113
+ ### Hook types
114
+
115
+ **Command** — runs a shell command:
116
+ ```json
117
+ {"type": "command", "command": "echo $TOOL_INPUT >> ~/audit.log", "timeout": 60000}
118
+ ```
119
+
120
+ **Prompt** — sends event JSON to an LLM for evaluation:
121
+ ```json
122
+ {"type": "prompt", "prompt": "Is this safe? Input: $ARGUMENTS", "model": "gpt-5.2"}
123
+ ```
124
+ Supported events: `PreToolUse`, `PostToolUse`, `PostToolUseFailure`, `PermissionRequest`, `UserPromptSubmit`, `Stop`, `SubagentStop`.
125
+
126
+ ### Helper: add a hook
127
+
128
+ ```bash
129
+ python3 <skill-dir>/scripts/add_hook.py \
130
+ --event PreToolUse \
131
+ --matcher Bash \
132
+ --type command \
133
+ --command 'echo "bash: $TOOL_INPUT" >> ~/.letta/audit.log' \
134
+ --scope user
135
+ ```
136
+
137
+ ### Direct edit (in `settings.json`)
138
+
139
+ ```json
140
+ {
141
+ "hooks": {
142
+ "PreToolUse": [
143
+ {
144
+ "matcher": "Bash",
145
+ "hooks": [{"type": "command", "command": "echo $TOOL_INPUT >> audit.log"}]
146
+ }
147
+ ],
148
+ "Stop": [
149
+ {"hooks": [{"type": "command", "command": "say done"}]}
150
+ ]
151
+ }
152
+ }
153
+ ```
154
+
155
+ Matchers: exact (`"Bash"`), multiple (`"Edit|Write"`), all (`"*"`).
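The matcher semantics can be sketched as a minimal helper, assuming exact tool-name comparison on each `|`-separated alternative (a sketch, not the CLI's actual implementation):

```python
def matcher_applies(matcher: str, tool_name: str) -> bool:
    # "*" matches every tool; "Edit|Write" matches either name exactly;
    # "Bash" matches only the Bash tool.
    if matcher == "*":
        return True
    return tool_name in matcher.split("|")
```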
156
+
157
+ ---
158
+
159
+ ## 3. Changing your agent configuration
160
+
161
+ Agent config splits between the Letta server and local settings.
162
+
163
+ ### Server-side fields (use the Letta API)
164
+
165
+ Use `PATCH /v1/agents/{agent_id}` with `$LETTA_AGENT_ID`.
166
+
167
+ **Change your model and context window:**
168
+ ```bash
169
+ curl -X PATCH "https://api.letta.com/v1/agents/$LETTA_AGENT_ID" \
170
+ -H "Authorization: Bearer $LETTA_API_KEY" \
171
+ -H "Content-Type: application/json" \
172
+ -d '{
173
+ "llm_config": {
174
+ "model": "claude-sonnet-4.5",
175
+ "model_endpoint_type": "anthropic",
176
+ "context_window": 200000
177
+ }
178
+ }'
179
+ ```
180
+
181
+ **Rename yourself:**
182
+ ```bash
183
+ curl -X PATCH "https://api.letta.com/v1/agents/$LETTA_AGENT_ID" \
184
+ -H "Authorization: Bearer $LETTA_API_KEY" \
185
+ -H "Content-Type: application/json" \
186
+ -d '{"name": "draft-v2"}'
187
+ ```
188
+
189
+ **Update your description:**
190
+ ```bash
191
+ curl -X PATCH "https://api.letta.com/v1/agents/$LETTA_AGENT_ID" \
192
+ -H "Authorization: Bearer $LETTA_API_KEY" \
193
+ -H "Content-Type: application/json" \
194
+ -d '{"description": "..."}'
195
+ ```
196
+
197
+ **Update your system prompt (use with care — system prompt is structural):**
198
+ ```bash
199
+ curl -X PATCH "https://api.letta.com/v1/agents/$LETTA_AGENT_ID" \
200
+ -H "Authorization: Bearer $LETTA_API_KEY" \
201
+ -H "Content-Type: application/json" \
202
+ -d '{"system": "You are..."}'
203
+ ```
204
+
205
+ For Python / TypeScript SDK usage, see `docs.letta.com/api-overview/introduction` or load the `letta-api-client` skill.
206
+
207
+ ### Local per-agent harness (edit `~/.letta/settings.json`)
208
+
209
+ The `agents[]` array stores per-agent harness preferences you can edit directly:
210
+
211
+ ```json
212
+ {
213
+ "agents": [
214
+ {
215
+ "agentId": "agent-abc123",
216
+ "baseUrl": "https://api.letta.com",
217
+ "pinned": true,
218
+ "memfs": { "enabled": true },
219
+ "toolset": "full",
220
+ "systemPromptPreset": "letta-code-v2"
221
+ }
222
+ ]
223
+ }
224
+ ```
225
+
226
+ - **`toolset`** — which tool set to load for this agent
227
+ - **`memfs.enabled`** — whether the memory filesystem is active
228
+ - **`systemPromptPreset`** — which preset was last applied (informational; the actual system prompt is server-side)
229
+ - **`pinned`** — show in the quick-switch list
230
+
231
+ Find your own entry by matching `agentId === $LETTA_AGENT_ID`, then edit the fields you need.
232
+
233
+ ---
234
+
235
+ ## Quick reference: what you want to change
236
+
237
+ | Change | What to do |
238
+ |--------|-----------|
239
+ | Auto-approve `curl` commands | `add_permission.py --rule "Bash(curl:*)" --type allow --scope user` |
240
+ | Block all `rm -rf` | Add `"Bash(rm -rf:*)"` to `permissions.deny` in `settings.json` |
241
+ | Log every Bash command | `add_hook.py --event PreToolUse --matcher Bash --type command --command '...' --scope user` |
242
+ | Auto-format after edits | `add_hook.py --event PostToolUse --matcher "Edit\|Write" --type command --command 'prettier ...' --scope project` |
243
+ | Gate edits with an LLM check | `add_hook.py --event PreToolUse --matcher Edit --type prompt --prompt '...' --scope user` |
244
+ | Change your model | `PATCH /v1/agents/$LETTA_AGENT_ID` with `llm_config.model` |
245
+ | Change your context window | `PATCH /v1/agents/$LETTA_AGENT_ID` with `llm_config.context_window` |
246
+ | Rename yourself | `PATCH /v1/agents/$LETTA_AGENT_ID` with `name` |
247
+ | Update your description | `PATCH /v1/agents/$LETTA_AGENT_ID` with `description` |
248
+ | Modify your system prompt | `PATCH /v1/agents/$LETTA_AGENT_ID` with `system` |
249
+ | Pin yourself for quick-switch | Add `agentId` to `pinnedAgents` in `~/.letta/settings.json` |
250
+ | Change toolset | Edit `agents[].toolset` in `~/.letta/settings.json` |
251
+ | Disable memfs | Edit `agents[].memfs.enabled = false` in `~/.letta/settings.json` (and update system prompt via API if needed) |
252
+ | See what's currently set | `python3 <skill-dir>/scripts/show_config.py` |
253
+
254
+ ---
255
+
256
+ ## After making changes
257
+
258
+ - **`settings.json` changes** — take effect on next session restart. Your current session keeps the old values.
259
+ - **Letta API changes** — apply immediately at the server level, but the in-memory agent config held by your current session may not reflect them until next restart.
260
+ - **System prompt / model changes** — always start a fresh conversation afterward to get a clean context with the new config.
261
+
262
+ ## Helper scripts in this skill
263
+
264
+ | Script | Purpose |
265
+ |--------|---------|
266
+ | `scripts/add_permission.py` | Add an allow/deny/ask rule to any scope |
267
+ | `scripts/add_hook.py` | Add a command or prompt hook to any event |
268
+ | `scripts/show_config.py` | Show merged permissions, hooks, and per-agent settings across all scopes |
269
+
270
+ All three accept `--scope user|project|local`. Run `--help` for full usage.
@@ -0,0 +1,223 @@
1
+ #!/usr/bin/env python3
2
+ """
3
+ Add a hook to Letta Code settings.
4
+
5
+ Examples:
6
+ # Command hook for Bash tool calls
7
+ python3 add_hook.py --event PreToolUse --matcher Bash \
8
+ --type command --command 'echo "$TOOL_INPUT" >> audit.log' \
9
+ --scope user
10
+
11
+ # Prompt hook for pre-edit safety check
12
+ python3 add_hook.py --event PreToolUse --matcher "Edit|Write" \
13
+ --type prompt --prompt 'Is this safe? Input: $ARGUMENTS' \
14
+ --model gpt-5.2 --scope project
15
+
16
+ # Simple event hook (no matcher needed)
17
+ python3 add_hook.py --event Stop \
18
+ --type command --command 'say done' \
19
+ --scope user
20
+ """
21
+
22
+ import argparse
23
+ import json
24
+ import os
25
+ import sys
26
+ from pathlib import Path
27
+
28
+ TOOL_EVENTS = {"PreToolUse", "PostToolUse", "PostToolUseFailure", "PermissionRequest"}
29
+ SIMPLE_EVENTS = {
30
+ "UserPromptSubmit",
31
+ "Notification",
32
+ "Stop",
33
+ "SubagentStop",
34
+ "PreCompact",
35
+ "SessionStart",
36
+ "SessionEnd",
37
+ }
38
+ ALL_EVENTS = TOOL_EVENTS | SIMPLE_EVENTS
39
+
40
+ PROMPT_SUPPORTED = {
41
+ "PreToolUse",
42
+ "PostToolUse",
43
+ "PostToolUseFailure",
44
+ "PermissionRequest",
45
+ "UserPromptSubmit",
46
+ "Stop",
47
+ "SubagentStop",
48
+ }
49
+
50
+
51
+ def get_settings_path(scope: str, working_directory: str) -> Path:
52
+ if scope == "user":
53
+ return Path.home() / ".letta" / "settings.json"
54
+ elif scope == "project":
55
+ return Path(working_directory) / ".letta" / "settings.json"
56
+ elif scope == "local":
57
+ return Path(working_directory) / ".letta" / "settings.local.json"
58
+ else:
59
+ raise ValueError(f"Unknown scope: {scope}")
60
+
61
+
62
+ def load_settings(path: Path) -> dict:
63
+ if path.exists():
64
+ try:
65
+ with open(path) as f:
66
+ return json.load(f)
67
+ except json.JSONDecodeError:
68
+ print(f"Warning: Could not parse {path}, starting fresh", file=sys.stderr)
69
+ return {}
70
+ return {}
71
+
72
+
73
+ def save_settings(path: Path, settings: dict) -> None:
74
+ path.parent.mkdir(parents=True, exist_ok=True)
75
+ with open(path, "w") as f:
76
+ json.dump(settings, f, indent=2)
77
+ print(f"Saved to {path}")
78
+
79
+
80
+ def build_hook_config(args) -> dict:
81
+ """Build the individual hook config from args."""
82
+ hook: dict = {"type": args.type}
83
+
84
+ if args.type == "command":
85
+ if not args.command:
86
+ raise ValueError("--command is required for type=command")
87
+ hook["command"] = args.command
88
+ elif args.type == "prompt":
89
+ if not args.prompt:
90
+ raise ValueError("--prompt is required for type=prompt")
91
+ if args.event not in PROMPT_SUPPORTED:
92
+ raise ValueError(
93
+ f"Event {args.event!r} does not support prompt hooks. "
94
+ f"Supported: {sorted(PROMPT_SUPPORTED)}"
95
+ )
96
+ hook["prompt"] = args.prompt
97
+ if args.model:
98
+ hook["model"] = args.model
99
+
100
+ if args.timeout is not None:
101
+ hook["timeout"] = args.timeout
102
+
103
+ return hook
104
+
105
+
106
+ def add_hook(settings: dict, args) -> None:
107
+ """Add a hook entry to the settings dict."""
108
+ if "hooks" not in settings:
109
+ settings["hooks"] = {}
110
+
111
+ hooks_config = settings["hooks"]
112
+ event = args.event
113
+
114
+ if event not in hooks_config:
115
+ hooks_config[event] = []
116
+
117
+ hook = build_hook_config(args)
118
+
119
+ if event in TOOL_EVENTS:
120
+ # Tool events: need a matcher
121
+ matcher = args.matcher or "*"
122
+ # Find existing matcher group or create new one
123
+ entry = next(
124
+ (e for e in hooks_config[event] if e.get("matcher") == matcher), None
125
+ )
126
+ if entry is None:
127
+ entry = {"matcher": matcher, "hooks": []}
128
+ hooks_config[event].append(entry)
129
+ entry["hooks"].append(hook)
130
+ else:
131
+ # Simple events: no matcher, just hooks
132
+ if hooks_config[event]:
133
+ # Append to existing group
134
+ hooks_config[event][0].setdefault("hooks", []).append(hook)
135
+ else:
136
+ hooks_config[event].append({"hooks": [hook]})
137
+
138
+
139
+ def ensure_local_gitignored(working_directory: str) -> None:
140
+ gitignore_path = Path(working_directory) / ".gitignore"
141
+ pattern = ".letta/settings.local.json"
142
+ try:
143
+ content = gitignore_path.read_text() if gitignore_path.exists() else ""
144
+ if pattern not in content:
145
+ with open(gitignore_path, "a") as f:
146
+ if content and not content.endswith("\n"):
147
+ f.write("\n")
148
+ f.write(f"{pattern}\n")
149
+ print(f"Added {pattern} to .gitignore")
150
+ except Exception as e:
151
+ print(f"Warning: Could not update .gitignore: {e}", file=sys.stderr)
152
+
153
+
154
+ def main():
155
+ parser = argparse.ArgumentParser(
156
+ description="Add a hook to Letta Code settings",
157
+ formatter_class=argparse.RawDescriptionHelpFormatter,
158
+ epilog=__doc__,
159
+ )
160
+ parser.add_argument(
161
+ "--event",
162
+ required=True,
163
+ choices=sorted(ALL_EVENTS),
164
+ help="Hook event name",
165
+ )
166
+ parser.add_argument(
167
+ "--matcher",
168
+ help="Tool matcher pattern (for tool events). Examples: 'Bash', 'Edit|Write', '*'",
169
+ )
170
+ parser.add_argument(
171
+ "--type",
172
+ required=True,
173
+ choices=["command", "prompt"],
174
+ help="Hook type",
175
+ )
176
+ parser.add_argument("--command", help="Shell command (for type=command)")
177
+ parser.add_argument(
178
+ "--prompt",
179
+ help="LLM prompt text (for type=prompt). Use $ARGUMENTS for hook input JSON.",
180
+ )
181
+ parser.add_argument("--model", help="LLM model (for type=prompt)")
182
+ parser.add_argument(
183
+ "--timeout", type=int, help="Timeout in milliseconds (default: 60000 for command hooks, 30000 for prompt hooks)"
184
+ )
185
+ parser.add_argument(
186
+ "--scope",
187
+ required=True,
188
+ choices=["user", "project", "local"],
189
+ help="Where to save the hook",
190
+ )
191
+ parser.add_argument(
192
+ "--cwd",
193
+ default=os.getcwd(),
194
+ help="Working directory for project/local scope (default: cwd)",
195
+ )
196
+
197
+ args = parser.parse_args()
198
+
199
+ # Validation
200
+ if args.event in TOOL_EVENTS and not args.matcher:
201
+ print(
202
+ f"Warning: {args.event} is a tool event; using matcher='*' (match all tools)",
203
+ file=sys.stderr,
204
+ )
205
+
206
+ settings_path = get_settings_path(args.scope, args.cwd)
207
+ settings = load_settings(settings_path)
208
+
209
+ try:
210
+ add_hook(settings, args)
211
+ except ValueError as e:
212
+ print(f"Error: {e}", file=sys.stderr)
213
+ sys.exit(1)
214
+
215
+ save_settings(settings_path, settings)
216
+ print(f"Added {args.type} hook on {args.event}")
217
+
218
+ if args.scope == "local":
219
+ ensure_local_gitignored(args.cwd)
220
+
221
+
222
+ if __name__ == "__main__":
223
+ main()