deepagents 0.2.5__py3-none-any.whl → 0.2.6__py3-none-any.whl

This diff shows the changes between publicly released versions of the package as they appear in their respective public registries, and is provided for informational purposes only.
Files changed (33)
  1. deepagents/backends/composite.py +37 -2
  2. deepagents/backends/protocol.py +48 -0
  3. deepagents/backends/sandbox.py +341 -0
  4. deepagents/backends/store.py +3 -11
  5. deepagents/graph.py +7 -3
  6. deepagents/middleware/filesystem.py +224 -21
  7. deepagents/middleware/subagents.py +7 -4
  8. {deepagents-0.2.5.dist-info → deepagents-0.2.6.dist-info}/METADATA +5 -4
  9. deepagents-0.2.6.dist-info/RECORD +19 -0
  10. deepagents-0.2.6.dist-info/top_level.txt +1 -0
  11. deepagents-0.2.5.dist-info/RECORD +0 -38
  12. deepagents-0.2.5.dist-info/licenses/LICENSE +0 -21
  13. deepagents-0.2.5.dist-info/top_level.txt +0 -2
  14. deepagents-cli/README.md +0 -3
  15. deepagents-cli/deepagents_cli/README.md +0 -196
  16. deepagents-cli/deepagents_cli/__init__.py +0 -5
  17. deepagents-cli/deepagents_cli/__main__.py +0 -6
  18. deepagents-cli/deepagents_cli/agent.py +0 -278
  19. deepagents-cli/deepagents_cli/agent_memory.py +0 -226
  20. deepagents-cli/deepagents_cli/commands.py +0 -89
  21. deepagents-cli/deepagents_cli/config.py +0 -118
  22. deepagents-cli/deepagents_cli/default_agent_prompt.md +0 -110
  23. deepagents-cli/deepagents_cli/execution.py +0 -636
  24. deepagents-cli/deepagents_cli/file_ops.py +0 -347
  25. deepagents-cli/deepagents_cli/input.py +0 -270
  26. deepagents-cli/deepagents_cli/main.py +0 -226
  27. deepagents-cli/deepagents_cli/py.typed +0 -0
  28. deepagents-cli/deepagents_cli/token_utils.py +0 -63
  29. deepagents-cli/deepagents_cli/tools.py +0 -140
  30. deepagents-cli/deepagents_cli/ui.py +0 -489
  31. deepagents-cli/tests/test_file_ops.py +0 -119
  32. deepagents-cli/tests/test_placeholder.py +0 -5
  33. {deepagents-0.2.5.dist-info → deepagents-0.2.6.dist-info}/WHEEL +0 -0
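
Net effect of the listing: the `deepagents-cli` sources (and their old top-level entry) drop out of this wheel, while new backend code lands in `deepagents/backends/sandbox.py` and the extended `protocol.py`, with `middleware/filesystem.py` growing to match. The full text of the larger deleted CLI modules follows below. As a minimal, hypothetical post-upgrade check — module names are taken from the file list above, not from any documented API:

```python
# Hypothetical sanity check after `pip install -U deepagents==0.2.6`: the core
# package and the new sandbox backend module should resolve, while the CLI
# package is no longer bundled in this wheel (it may be distributed separately).
import importlib.util

print(importlib.util.find_spec("deepagents") is not None)        # expected: True
print(importlib.util.find_spec("deepagents.backends.sandbox"))   # new module listed in this diff
print(importlib.util.find_spec("deepagents_cli"))                # expected: None unless installed separately
```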
deepagents-cli/deepagents_cli/agent_memory.py
@@ -1,226 +0,0 @@
- """Middleware for loading agent-specific long-term memory into the system prompt."""
-
- from collections.abc import Awaitable, Callable
- from typing import NotRequired
-
- from deepagents.backends.protocol import BackendProtocol
- from langchain.agents.middleware.types import (
-     AgentMiddleware,
-     AgentState,
-     ModelRequest,
-     ModelResponse,
- )
-
-
- class AgentMemoryState(AgentState):
-     """State for the agent memory middleware."""
-
-     agent_memory: NotRequired[str | None]
-     """Long-term memory content for the agent."""
-
-
- AGENT_MEMORY_FILE_PATH = "/agent.md"
-
- # Long-term Memory Documentation
- LONGTERM_MEMORY_SYSTEM_PROMPT = """
-
- ## Long-term Memory
-
- You have access to a long-term memory system using the {memory_path} path prefix.
- Files stored in {memory_path} persist across sessions and conversations.
-
- Your system prompt is loaded from {memory_path}agent.md at startup. You can update your own instructions by editing this file.
-
- **When to CHECK/READ memories (CRITICAL - do this FIRST):**
- - **At the start of ANY new session**: Run `ls {memory_path}` to see what you know
- - **BEFORE answering questions**: If asked "what do you know about X?" or "how do I do Y?", check `ls {memory_path}` for relevant files FIRST
- - **When user asks you to do something**: Check if you have guides, examples, or patterns in {memory_path} before proceeding
- - **When user references past work or conversations**: Search {memory_path} for related content
- - **If you're unsure**: Check your memories rather than guessing or using only general knowledge
-
- **Memory-first response pattern:**
- 1. User asks a question → Run `ls {memory_path}` to check for relevant files
- 2. If relevant files exist → Read them with `read_file {memory_path}[filename]`
- 3. Base your answer on saved knowledge (from memories) supplemented by general knowledge
- 4. If no relevant memories exist → Use general knowledge, then consider if this is worth saving
-
- **When to update memories:**
- - **IMMEDIATELY when the user describes your role or how you should behave** (e.g., "you are a web researcher", "you are an expert in X")
- - **IMMEDIATELY when the user gives feedback on your work** - Before continuing, update memories to capture what was wrong and how to do it better
- - When the user explicitly asks you to remember something
- - When patterns or preferences emerge (coding styles, conventions, workflows)
- - After significant work where context would help in future sessions
-
- **Learning from feedback:**
- - When user says something is better/worse, capture WHY and encode it as a pattern
- - Each correction is a chance to improve permanently - don't just fix the immediate issue, update your instructions
- - When user says "you should remember X" or "be careful about Y", treat this as HIGH PRIORITY - update memories IMMEDIATELY
- - Look for the underlying principle behind corrections, not just the specific mistake
- - If it's something you "should have remembered", identify where that instruction should live permanently
-
- **What to store where:**
- - **{memory_path}agent.md**: Update this to modify your core instructions and behavioral patterns
- - **Other {memory_path} files**: Use for project-specific context, reference information, or structured notes
- - If you create additional memory files, add references to them in {memory_path}agent.md so you remember to consult them
-
- The portion of your system prompt that comes from {memory_path}agent.md is marked with `<agent_memory>` tags so you can identify what instructions come from your persistent memory.
-
- Example: `ls {memory_path}` to see what memories you have
- Example: `read_file '{memory_path}deep-agents-guide.md'` to recall saved knowledge
- Example: `edit_file('{memory_path}agent.md', ...)` to update your instructions
- Example: `write_file('{memory_path}project_context.md', ...)` for project-specific notes, then reference it in agent.md
-
- Remember: To interact with the longterm filesystem, you must prefix the filename with the {memory_path} path."""
-
-
- DEFAULT_MEMORY_SNIPPET = """<agent_memory>
- {agent_memory}
- </agent_memory>
- """
-
-
- class AgentMemoryMiddleware(AgentMiddleware):
-     """Middleware for loading agent-specific long-term memory.
-
-     This middleware loads the agent's long-term memory from a file (agent.md)
-     and injects it into the system prompt. The memory is loaded once at the
-     start of the conversation and stored in state.
-
-     Args:
-         backend: Backend to use for loading the agent memory file.
-         system_prompt_template: Optional custom template for how to inject
-             the agent memory into the system prompt. Use {agent_memory} as
-             a placeholder. Defaults to a simple section header.
-
-     Example:
-         ```python
-         from deepagents.middleware.agent_memory import AgentMemoryMiddleware
-         from deepagents.memory.backends import FilesystemBackend
-         from pathlib import Path
-
-         # Set up backend pointing to agent's directory
-         agent_dir = Path.home() / ".deepagents" / "my-agent"
-         backend = FilesystemBackend(root_dir=agent_dir)
-
-         # Create middleware
-         middleware = AgentMemoryMiddleware(backend=backend)
-         ```
-     """
-
-     state_schema = AgentMemoryState
-
-     def __init__(
-         self,
-         *,
-         backend: BackendProtocol,
-         memory_path: str,
-         system_prompt_template: str | None = None,
-     ) -> None:
-         """Initialize the agent memory middleware.
-
-         Args:
-             backend: Backend to use for loading the agent memory file.
-             system_prompt_template: Optional custom template for injecting
-                 agent memory into system prompt.
-         """
-         self.backend = backend
-         self.memory_path = memory_path
-         self.system_prompt_template = system_prompt_template or DEFAULT_MEMORY_SNIPPET
-
-     def before_agent(
-         self,
-         state: AgentMemoryState,
-         runtime,
-     ) -> AgentMemoryState:
-         """Load agent memory from file before agent execution.
-
-         Args:
-             state: Current agent state.
-             handler: Handler function to call after loading memory.
-
-         Returns:
-             Updated state with agent_memory populated.
-         """
-         # Only load memory if it hasn't been loaded yet
-         if "agent_memory" not in state or state.get("agent_memory") is None:
-             file_data = self.backend.read(AGENT_MEMORY_FILE_PATH)
-             return {"agent_memory": file_data}
-
-     async def abefore_agent(
-         self,
-         state: AgentMemoryState,
-         runtime,
-     ) -> AgentMemoryState:
-         """(async) Load agent memory from file before agent execution.
-
-         Args:
-             state: Current agent state.
-             handler: Handler function to call after loading memory.
-
-         Returns:
-             Updated state with agent_memory populated.
-         """
-         # Only load memory if it hasn't been loaded yet
-         if "agent_memory" not in state or state.get("agent_memory") is None:
-             file_data = self.backend.read(AGENT_MEMORY_FILE_PATH)
-             return {"agent_memory": file_data}
-
-     def wrap_model_call(
-         self,
-         request: ModelRequest,
-         handler: Callable[[ModelRequest], ModelResponse],
-     ) -> ModelResponse:
-         """Inject agent memory into the system prompt.
-
-         Args:
-             request: The model request being processed.
-             handler: The handler function to call with the modified request.
-
-         Returns:
-             The model response from the handler.
-         """
-         # Get agent memory from state
-         agent_memory = request.state.get("agent_memory", "")
-
-         memory_section = self.system_prompt_template.format(agent_memory=agent_memory)
-         if request.system_prompt:
-             request.system_prompt = memory_section + "\n\n" + request.system_prompt
-         else:
-             request.system_prompt = memory_section
-         request.system_prompt = (
-             request.system_prompt
-             + "\n\n"
-             + LONGTERM_MEMORY_SYSTEM_PROMPT.format(memory_path=self.memory_path)
-         )
-
-         return handler(request)
-
-     async def awrap_model_call(
-         self,
-         request: ModelRequest,
-         handler: Callable[[ModelRequest], Awaitable[ModelResponse]],
-     ) -> ModelResponse:
-         """(async) Inject agent memory into the system prompt.
-
-         Args:
-             request: The model request being processed.
-             handler: The handler function to call with the modified request.
-
-         Returns:
-             The model response from the handler.
-         """
-         # Get agent memory from state
-         agent_memory = request.state.get("agent_memory", "")
-
-         memory_section = self.system_prompt_template.format(agent_memory=agent_memory)
-         if request.system_prompt:
-             request.system_prompt = memory_section + "\n\n" + request.system_prompt
-         else:
-             request.system_prompt = memory_section
-         request.system_prompt = (
-             request.system_prompt
-             + "\n\n"
-             + LONGTERM_MEMORY_SYSTEM_PROMPT.format(memory_path=self.memory_path)
-         )
-
-         return await handler(request)
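
For reference, the removed middleware's own docstring sketches how it was meant to be constructed; a lightly adjusted version is below. The `FilesystemBackend` import path is copied from that docstring (it may not exist under the 0.2.6 layout), and `memory_path` is added because the keyword-only `__init__` above requires it even though the docstring example omits it — treat the value as illustrative.

```python
# Sketch based on the removed module's own docstring, not a documented 0.2.6 API.
from pathlib import Path

from deepagents.middleware.agent_memory import AgentMemoryMiddleware  # deleted in 0.2.6
from deepagents.memory.backends import FilesystemBackend  # path as written in the old docstring

# Backend rooted at the agent's private directory; agent.md inside it holds the memory.
agent_dir = Path.home() / ".deepagents" / "my-agent"
backend = FilesystemBackend(root_dir=agent_dir)

# memory_path is required by the constructor above; "/memories/" mirrors the prefix
# used in the CLI's default prompt and is only an example value.
middleware = AgentMemoryMiddleware(backend=backend, memory_path="/memories/")
```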
deepagents-cli/deepagents_cli/commands.py
@@ -1,89 +0,0 @@
- """Command handlers for slash commands and bash execution."""
-
- import subprocess
- from pathlib import Path
-
- from langgraph.checkpoint.memory import InMemorySaver
-
- from .config import COLORS, DEEP_AGENTS_ASCII, console
- from .ui import TokenTracker, show_interactive_help
-
-
- def handle_command(command: str, agent, token_tracker: TokenTracker) -> str | bool:
-     """Handle slash commands. Returns 'exit' to exit, True if handled, False to pass to agent."""
-     cmd = command.lower().strip().lstrip("/")
-
-     if cmd in ["quit", "exit", "q"]:
-         return "exit"
-
-     if cmd == "clear":
-         # Reset agent conversation state
-         agent.checkpointer = InMemorySaver()
-
-         # Reset token tracking to baseline
-         token_tracker.reset()
-
-         # Clear screen and show fresh UI
-         console.clear()
-         console.print(DEEP_AGENTS_ASCII, style=f"bold {COLORS['primary']}")
-         console.print()
-         console.print(
-             "... Fresh start! Screen cleared and conversation reset.", style=COLORS["agent"]
-         )
-         console.print()
-         return True
-
-     if cmd == "help":
-         show_interactive_help()
-         return True
-
-     if cmd == "tokens":
-         token_tracker.display_session()
-         return True
-
-     console.print()
-     console.print(f"[yellow]Unknown command: /{cmd}[/yellow]")
-     console.print("[dim]Type /help for available commands.[/dim]")
-     console.print()
-     return True
-
-     return False
-
-
- def execute_bash_command(command: str) -> bool:
-     """Execute a bash command and display output. Returns True if handled."""
-     cmd = command.strip().lstrip("!")
-
-     if not cmd:
-         return True
-
-     try:
-         console.print()
-         console.print(f"[dim]$ {cmd}[/dim]")
-
-         # Execute the command
-         result = subprocess.run(
-             cmd, check=False, shell=True, capture_output=True, text=True, timeout=30, cwd=Path.cwd()
-         )
-
-         # Display output
-         if result.stdout:
-             console.print(result.stdout, style=COLORS["dim"], markup=False)
-         if result.stderr:
-             console.print(result.stderr, style="red", markup=False)
-
-         # Show return code if non-zero
-         if result.returncode != 0:
-             console.print(f"[dim]Exit code: {result.returncode}[/dim]")
-
-         console.print()
-         return True
-
-     except subprocess.TimeoutExpired:
-         console.print("[red]Command timed out after 30 seconds[/red]")
-         console.print()
-         return True
-     except Exception as e:
-         console.print(f"[red]Error executing command: {e}[/red]")
-         console.print()
-         return True
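
`handle_command` defines a tri-state contract — the string `'exit'` to quit, `True` when a slash command was handled, `False` to pass the input through to the agent — while `execute_bash_command` always returns `True`. Below is a hedged sketch of the driver loop that contract implies; `run_agent_turn`, `agent`, and `token_tracker` are hypothetical stand-ins for however the CLI actually dispatched a normal turn.

```python
# Illustrative REPL loop around the removed handlers (not the CLI's actual main loop).
while True:
    line = input("> ")
    if line.startswith("!"):
        execute_bash_command(line)   # run the shell command and print its output
        continue
    if line.startswith("/"):
        outcome = handle_command(line, agent, token_tracker)
        if outcome == "exit":
            break                    # user asked to quit
        if outcome:
            continue                 # slash command fully handled
    run_agent_turn(agent, line)      # hypothetical helper: forward everything else to the agent
```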
deepagents-cli/deepagents_cli/config.py
@@ -1,118 +0,0 @@
- """Configuration, constants, and model creation for the CLI."""
-
- import os
- import sys
- from pathlib import Path
-
- import dotenv
- from rich.console import Console
-
- dotenv.load_dotenv()
-
- # Color scheme
- COLORS = {
-     "primary": "#10b981",
-     "dim": "#6b7280",
-     "user": "#ffffff",
-     "agent": "#10b981",
-     "thinking": "#34d399",
-     "tool": "#fbbf24",
- }
-
- # ASCII art banner
- DEEP_AGENTS_ASCII = """
- ██████╗ ███████╗ ███████╗ ██████╗
- ██╔══██╗ ██╔════╝ ██╔════╝ ██╔══██╗
- ██║ ██║ █████╗ █████╗ ██████╔╝
- ██║ ██║ ██╔══╝ ██╔══╝ ██╔═══╝
- ██████╔╝ ███████╗ ███████╗ ██║
- ╚═════╝ ╚══════╝ ╚══════╝ ╚═╝
-
- █████╗ ██████╗ ███████╗ ███╗ ██╗ ████████╗ ███████╗
- ██╔══██╗ ██╔════╝ ██╔════╝ ████╗ ██║ ╚══██╔══╝ ██╔════╝
- ███████║ ██║ ███╗ █████╗ ██╔██╗ ██║ ██║ ███████╗
- ██╔══██║ ██║ ██║ ██╔══╝ ██║╚██╗██║ ██║ ╚════██║
- ██║ ██║ ╚██████╔╝ ███████╗ ██║ ╚████║ ██║ ███████║
- ╚═╝ ╚═╝ ╚═════╝ ╚══════╝ ╚═╝ ╚═══╝ ╚═╝ ╚══════╝
- """
-
- # Interactive commands
- COMMANDS = {
-     "clear": "Clear screen and reset conversation",
-     "help": "Show help information",
-     "tokens": "Show token usage for current session",
-     "quit": "Exit the CLI",
-     "exit": "Exit the CLI",
- }
-
-
- # Maximum argument length for display
- MAX_ARG_LENGTH = 150
-
- # Agent configuration
- config = {"recursion_limit": 1000}
-
- # Rich console instance
- console = Console(highlight=False)
-
-
- class SessionState:
-     """Holds mutable session state (auto-approve mode, etc)."""
-
-     def __init__(self, auto_approve: bool = False):
-         self.auto_approve = auto_approve
-
-     def toggle_auto_approve(self) -> bool:
-         """Toggle auto-approve and return new state."""
-         self.auto_approve = not self.auto_approve
-         return self.auto_approve
-
-
- def get_default_coding_instructions() -> str:
-     """Get the default coding agent instructions.
-
-     These are the immutable base instructions that cannot be modified by the agent.
-     Long-term memory (agent.md) is handled separately by the middleware.
-     """
-     default_prompt_path = Path(__file__).parent / "default_agent_prompt.md"
-     return default_prompt_path.read_text()
-
-
- def create_model():
-     """Create the appropriate model based on available API keys.
-
-     Returns:
-         ChatModel instance (OpenAI or Anthropic)
-
-     Raises:
-         SystemExit if no API key is configured
-     """
-     openai_key = os.environ.get("OPENAI_API_KEY")
-     anthropic_key = os.environ.get("ANTHROPIC_API_KEY")
-
-     if openai_key:
-         from langchain_openai import ChatOpenAI
-
-         model_name = os.environ.get("OPENAI_MODEL", "gpt-5-mini")
-         console.print(f"[dim]Using OpenAI model: {model_name}[/dim]")
-         return ChatOpenAI(
-             model=model_name,
-             temperature=0.7,
-         )
-     if anthropic_key:
-         from langchain_anthropic import ChatAnthropic
-
-         model_name = os.environ.get("ANTHROPIC_MODEL", "claude-sonnet-4-5-20250929")
-         console.print(f"[dim]Using Anthropic model: {model_name}[/dim]")
-         return ChatAnthropic(
-             model_name=model_name,
-             max_tokens=20000,
-         )
-     console.print("[bold red]Error:[/bold red] No API key configured.")
-     console.print("\nPlease set one of the following environment variables:")
-     console.print(" - OPENAI_API_KEY (for OpenAI models like gpt-5-mini)")
-     console.print(" - ANTHROPIC_API_KEY (for Claude models)")
-     console.print("\nExample:")
-     console.print(" export OPENAI_API_KEY=your_api_key_here")
-     console.print("\nOr add it to your .env file.")
-     sys.exit(1)
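
`create_model()` keys entirely off environment variables — it prefers OpenAI when both keys are present, falls back to Anthropic, and exits the process when neither key is set — while `get_default_coding_instructions()` simply reads the static prompt file shipped next to the module. A minimal sketch of driving the two together, with placeholder values:

```python
# Hypothetical use of the removed config helpers. Values are placeholders;
# create_model() calls sys.exit(1) if neither API key is configured.
import os

os.environ.setdefault("ANTHROPIC_API_KEY", "sk-ant-...")                 # placeholder key
os.environ.setdefault("ANTHROPIC_MODEL", "claude-sonnet-4-5-20250929")   # optional override (default shown above)

model = create_model()                            # OpenAI wins if OPENAI_API_KEY is also set
instructions = get_default_coding_instructions()  # immutable base prompt from default_agent_prompt.md
```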
deepagents-cli/deepagents_cli/default_agent_prompt.md
@@ -1,110 +0,0 @@
- You are an AI assistant that helps users with various tasks including coding, research, and analysis.
-
- # Core Role
- Your core role and behavior may be updated based on user feedback and instructions. When a user tells you how you should behave or what your role should be, update this memory file immediately to reflect that guidance.
-
- ## Memory-First Protocol
- You have access to a persistent memory system. ALWAYS follow this protocol:
-
- **At session start:**
- - Check `ls /memories/` to see what knowledge you have stored
- - If your role description references specific topics, check /memories/ for relevant guides
-
- **Before answering questions:**
- - If asked "what do you know about X?" or "how do I do Y?" → Check `ls /memories/` FIRST
- - If relevant memory files exist → Read them and base your answer on saved knowledge
- - Prefer saved knowledge over general knowledge when available
-
- **When learning new information:**
- - If user teaches you something or asks you to remember → Save to `/memories/[topic].md`
- - Use descriptive filenames: `/memories/deep-agents-guide.md` not `/memories/notes.md`
- - After saving, verify by reading back the key points
-
- **Important:** Your memories persist across sessions. Information stored in /memories/ is more reliable than general knowledge for topics you've specifically studied.
-
- # Tone and Style
- Be concise and direct. Answer in fewer than 4 lines unless the user asks for detail.
- After working on a file, just stop - don't explain what you did unless asked.
- Avoid unnecessary introductions or conclusions.
-
- When you run non-trivial bash commands, briefly explain what they do.
-
- ## Proactiveness
- Take action when asked, but don't surprise users with unrequested actions.
- If asked how to approach something, answer first before taking action.
-
- ## Following Conventions
- - Check existing code for libraries and frameworks before assuming availability
- - Mimic existing code style, naming conventions, and patterns
- - Never add comments unless asked
-
- ## Task Management
- Use write_todos for complex multi-step tasks (3+ steps). Mark tasks in_progress before starting, completed immediately after finishing.
- For simple 1-2 step tasks, just do them without todos.
-
- ## File Reading Best Practices
-
- **CRITICAL**: When exploring codebases or reading multiple files, ALWAYS use pagination to prevent context overflow.
-
- **Pattern for codebase exploration:**
- 1. First scan: `read_file(path, limit=100)` - See file structure and key sections
- 2. Targeted read: `read_file(path, offset=100, limit=200)` - Read specific sections if needed
- 3. Full read: Only use `read_file(path)` without limit when necessary for editing
-
- **When to paginate:**
- - Reading any file >500 lines
- - Exploring unfamiliar codebases (always start with limit=100)
- - Reading multiple files in sequence
- - Any research or investigation task
-
- **When full read is OK:**
- - Small files (<500 lines)
- - Files you need to edit immediately after reading
- - After confirming file size with first scan
-
- **Example workflow:**
- ```
- Bad: read_file(/src/large_module.py) # Floods context with 2000+ lines
- Good: read_file(/src/large_module.py, limit=100) # Scan structure first
- read_file(/src/large_module.py, offset=100, limit=100) # Read relevant section
- ```
-
- ## Working with Subagents (task tool)
- When delegating to subagents:
- - **Use filesystem for large I/O**: If input instructions are large (>500 words) OR expected output is large, communicate via files
- - Write input context/instructions to a file, tell subagent to read it
- - Ask subagent to write their output to a file, then read it after they return
- - This prevents token bloat and keeps context manageable in both directions
- - **Parallelize independent work**: When tasks are independent, spawn parallel subagents to work simultaneously
- - **Clear specifications**: Tell subagent exactly what format/structure you need in their response or output file
- - **Main agent synthesizes**: Subagents gather/execute, main agent integrates results into final deliverable
-
- ## Tools
-
- ### execute_bash
- Execute shell commands. Always quote paths with spaces.
- Examples: `pytest /foo/bar/tests` (good), `cd /foo/bar && pytest tests` (bad)
-
- ### File Tools
- - read_file: Read file contents (use absolute paths)
- - edit_file: Replace exact strings in files (must read first, provide unique old_string)
- - write_file: Create or overwrite files
- - ls: List directory contents
- - glob: Find files by pattern (e.g., "**/*.py")
- - grep: Search file contents
-
- Always use absolute paths starting with /.
-
- ### web_search
- Search for documentation, error solutions, and code examples.
-
- ### http_request
- Make HTTP requests to APIs (GET, POST, etc.).
-
- ## Code References
- When referencing code, use format: `file_path:line_number`
-
- ## Documentation
- - Do NOT create excessive markdown summary/documentation files after completing work
- - Focus on the work itself, not documenting what you did
- - Only create documentation when explicitly requested
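
The subagent guidance in this removed prompt prescribes a file-based hand-off for large inputs and outputs but never shows a concrete call. A purely illustrative sketch using the tool names the prompt itself mentions (`write_file`, `task`, `read_file`); their exact signatures are not specified there and are assumed here:

```python
# Illustrative only: the file-based hand-off pattern described under
# "Working with Subagents". Tool call signatures are assumptions.
write_file("/tmp/research_brief.md", long_brief)  # stash the large instructions in a file
task("Read /tmp/research_brief.md, do the research, and write findings to /tmp/findings.md")
findings = read_file("/tmp/findings.md", limit=100)  # paginate the read, per the guidance above
```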