deepagents 0.3.9__py3-none-any.whl → 0.3.10__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,527 +0,0 @@
1
- Metadata-Version: 2.4
2
- Name: deepagents
3
- Version: 0.3.9
4
- Summary: General purpose 'deep agent' with sub-agent spawning, todo list capabilities, and mock file system. Built on LangGraph.
5
- License: MIT
6
- Project-URL: Homepage, https://docs.langchain.com/oss/python/deepagents/overview
7
- Project-URL: Documentation, https://reference.langchain.com/python/deepagents/
8
- Project-URL: Source, https://github.com/langchain-ai/deepagents
9
- Project-URL: Twitter, https://x.com/LangChain
10
- Project-URL: Slack, https://www.langchain.com/join-community
11
- Project-URL: Reddit, https://www.reddit.com/r/LangChain/
12
- Requires-Python: <4.0,>=3.11
13
- Description-Content-Type: text/markdown
14
- Requires-Dist: langchain-core<2.0.0,>=1.2.7
15
- Requires-Dist: langchain<2.0.0,>=1.2.7
16
- Requires-Dist: langchain-anthropic<2.0.0,>=1.3.1
17
- Requires-Dist: langchain-google-genai<5.0.0,>=4.2.0
18
- Requires-Dist: wcmatch
19
-
20
- # 🧠🤖Deep Agents
21
-
22
- Using an LLM to call tools in a loop is the simplest form of an agent.
23
- This architecture, however, can yield agents that are "shallow" and fail to plan and act over longer, more complex tasks.
24
-
25
- Applications like "Deep Research", "Manus", and "Claude Code" have gotten around this limitation by implementing a combination of four things:
26
- a **planning tool**, **sub agents**, access to a **file system**, and a **detailed prompt**.
27
-
28
- <img src="../../deep_agents.png" alt="deep agent" width="600"/>
29
-
30
- `deepagents` is a Python package that implements these in a general purpose way so that you can easily create a Deep Agent for your application. For a full overview and quickstart of `deepagents`, the best resource is our [docs](https://docs.langchain.com/oss/python/deepagents/overview).
31
-
32
- **Acknowledgements: This project was primarily inspired by Claude Code, and initially was largely an attempt to see what made Claude Code general purpose, and make it even more so.**
33
-
34
- ## Installation
35
-
36
- ```bash
37
- # pip
38
- pip install deepagents
39
-
40
- # uv
41
- uv add deepagents
42
-
43
- # poetry
44
- poetry add deepagents
45
- ```
46
-
47
- ## Usage
48
-
49
- > **Note:** `deepagents` requires an LLM that supports [tool calling](https://docs.langchain.com/oss/python/langchain/overview).
50
-
51
- This example uses [Tavily](https://tavily.com/) as the search provider, but you can substitute any search API (e.g., DuckDuckGo, SerpAPI, Brave Search). To run the example below, you will need to `pip install tavily-python`.
52
-
53
- Make sure to set `TAVILY_API_KEY` in your environment. You can generate one [here](https://www.tavily.com/).
54
-
55
- ```python
56
- import os
57
- from typing import Literal
58
- from tavily import TavilyClient
59
- from deepagents import create_deep_agent
60
-
61
- tavily_client = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])
62
-
63
- # Web search tool
64
- def internet_search(
65
- query: str,
66
- max_results: int = 5,
67
- topic: Literal["general", "news", "finance"] = "general",
68
- include_raw_content: bool = False,
69
- ):
70
- """Run a web search"""
71
- return tavily_client.search(
72
- query,
73
- max_results=max_results,
74
- include_raw_content=include_raw_content,
75
- topic=topic,
76
- )
77
-
78
-
79
- # System prompt to steer the agent to be an expert researcher
80
- research_instructions = """You are an expert researcher. Your job is to conduct thorough research, and then write a polished report.
81
-
82
- You have access to an internet search tool as your primary means of gathering information.
83
-
84
- ## `internet_search`
85
-
86
- Use this to run an internet search for a given query. You can specify the max number of results to return, the topic, and whether raw content should be included.
87
- """
88
-
89
- # Create the deep agent
90
- agent = create_deep_agent(
91
- tools=[internet_search],
92
- system_prompt=research_instructions,
93
- )
94
-
95
- # Invoke the agent
96
- result = agent.invoke({"messages": [{"role": "user", "content": "What is langgraph?"}]})
97
- ```
98
-
99
- See [examples/research/research_agent.py](examples/research/research_agent.py) for a more complex example.
100
-
101
- The agent created with `create_deep_agent` is just a LangGraph graph - so you can interact with it (streaming, human-in-the-loop, memory, studio)
102
- in the same way you would any LangGraph agent.
103
-
104
- ## Core Capabilities
105
-
106
- **Planning & Task Decomposition**
107
-
108
- Deep Agents include a built-in `write_todos` tool that enables agents to break down complex tasks into discrete steps, track progress, and adapt plans as new information emerges.
109
-
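As a hedged illustration of the planning loop (the actual `write_todos` payload schema is the library's, not shown here), a plan update can be sketched as:

```python
# Conceptual sketch of a write_todos-style plan update.
# The item shape ("task"/"status") is an assumption for illustration,
# not the library's actual schema.
todos: list[dict] = []

def write_todos(items: list[dict]) -> str:
    """Replace the current plan with an updated list of todos."""
    todos[:] = items
    return f"Plan updated: {len(items)} items"

write_todos([
    {"task": "search the web", "status": "completed"},
    {"task": "draft report", "status": "in_progress"},
    {"task": "polish report", "status": "pending"},
])
remaining = [t for t in todos if t["status"] != "completed"]
```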
110
- **Context Management**
111
-
112
- File system tools (`ls`, `read_file`, `write_file`, `edit_file`, `glob`, `grep`) allow agents to offload large context to memory, preventing context window overflow and enabling work with variable-length tool results.
113
-
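The offloading idea can be pictured with a toy in-memory filesystem (a conceptual stand-in, not the library's backend): a large tool result is written to a file, and only a short pointer returns to the model's context.

```python
# Toy illustration of context offloading (not the library's internals).
files: dict[str, str] = {}

def offload_result(path: str, content: str, preview_chars: int = 60) -> str:
    """Store a large tool result and return a short pointer for the context."""
    files[path] = content
    return f"Saved {len(content)} chars to {path}. Preview: {content[:preview_chars]}"

pointer = offload_result("/results/search.txt", "lorem ipsum " * 1000)
```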
114
- **Subagent Spawning**
115
-
116
- A built-in `task` tool enables agents to spawn specialized subagents for context isolation. This keeps the main agent's context clean while still going deep on specific subtasks.
117
-
118
- **Long-term Memory**
119
-
120
- Extend agents with persistent memory across threads using LangGraph's `BaseStore`. Agents can save and retrieve information from previous conversations.
121
-
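A namespaced put/get surface in the spirit of `BaseStore` can be sketched with plain dictionaries (a conceptual stand-in, not the real `BaseStore` API):

```python
# Conceptual stand-in for a namespaced long-term store.
# Method names mirror the put/get idea only; this is not the real BaseStore API.
class ToyStore:
    def __init__(self):
        self._data: dict[tuple, dict] = {}

    def put(self, namespace: tuple, key: str, value: dict) -> None:
        self._data[(namespace, key)] = value

    def get(self, namespace: tuple, key: str):
        return self._data.get((namespace, key))

store = ToyStore()
store.put(("memories", "alice"), "preferences", {"tone": "formal"})
recalled = store.get(("memories", "alice"), "preferences")
```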
122
- ## Customizing Deep Agents
123
-
124
- There are several parameters you can pass to `create_deep_agent` to create your own custom deep agent.
125
-
126
- ### `model`
127
-
128
- By default, `deepagents` uses `claude-sonnet-4-5-20250929`. You can customize this by passing any [LangChain model object](https://docs.langchain.com/oss/python/integrations/providers/overview).
129
-
130
- > **Tip:** Use the `provider:model` format (e.g., `openai:gpt-5`) to quickly switch between models. See the [reference](https://reference.langchain.com/python/langchain/models/#langchain.chat_models.init_chat_model(model)) for more info.
131
-
132
- ```python
133
- from langchain.chat_models import init_chat_model
134
- from deepagents import create_deep_agent
135
-
136
- model = init_chat_model(model="openai:gpt-5")
137
- agent = create_deep_agent(model=model)
138
- ```
139
-
140
- ### `system_prompt`
141
-
142
- Deep Agents come with a built-in system prompt. This is a relatively detailed prompt that is heavily based on and inspired by [attempts](https://github.com/kn1026/cc/blob/main/claudecode.md) to [replicate](https://github.com/asgeirtj/system_prompts_leaks/blob/main/Anthropic/claude-code.md)
143
- Claude Code's system prompt, made more general purpose. The default prompt contains detailed instructions for how to use the built-in planning tool, file system tools, and subagents.
144
-
145
- Each deep agent should also include a custom system prompt tailored to its use case. The importance of prompting for creating a successful deep agent cannot be overstated.
146
-
147
- ```python
148
- from deepagents import create_deep_agent
149
-
150
- research_instructions = """You are an expert researcher. Your job is to conduct thorough research, and then write a polished report.
151
- """
152
-
153
- agent = create_deep_agent(
154
- system_prompt=research_instructions,
155
- )
156
- ```
157
-
158
- ### `tools`
159
-
160
- In addition to custom tools you provide, `deepagents` includes built-in tools for planning (`write_todos`), file management (`ls`, `read_file`, `write_file`, `edit_file`, `glob`, `grep`), and subagent spawning (`task`).
161
-
162
- ```python
163
- import os
164
- from typing import Literal
165
- from tavily import TavilyClient
166
- from deepagents import create_deep_agent
167
-
168
- tavily_client = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])
169
-
170
- def internet_search(
171
- query: str,
172
- max_results: int = 5,
173
- topic: Literal["general", "news", "finance"] = "general",
174
- include_raw_content: bool = False,
175
- ):
176
- """Run a web search"""
177
- return tavily_client.search(
178
- query,
179
- max_results=max_results,
180
- include_raw_content=include_raw_content,
181
- topic=topic,
182
- )
183
-
184
- agent = create_deep_agent(
185
- tools=[internet_search]
186
- )
187
- ```
188
-
189
- ### `middleware`
190
-
191
- `create_deep_agent` is implemented with middleware that can be customized. You can provide additional middleware to extend functionality, add tools, or implement custom hooks.
192
-
193
- ```python
194
- from langchain_core.tools import tool
195
- from deepagents import create_deep_agent
196
- from langchain.agents.middleware import AgentMiddleware
197
-
198
- @tool
199
- def get_weather(city: str) -> str:
200
- """Get the weather in a city."""
201
- return f"The weather in {city} is sunny."
202
-
203
- @tool
204
- def get_temperature(city: str) -> str:
205
- """Get the temperature in a city."""
206
- return f"The temperature in {city} is 70 degrees Fahrenheit."
207
-
208
- class WeatherMiddleware(AgentMiddleware):
209
- tools = [get_weather, get_temperature]
210
-
211
- agent = create_deep_agent(
212
- model="anthropic:claude-sonnet-4-20250514",
213
- middleware=[WeatherMiddleware()]
214
- )
215
- ```
216
-
217
- ### `subagents`
218
-
219
- A main feature of Deep Agents is their ability to spawn subagents. You can specify custom subagents that your agent can hand off work to via the `subagents` parameter. Subagents are useful for context quarantine (keeping the main agent's overall context from being polluted) as well as for custom instructions.
220
-
221
- `subagents` should be a list of dictionaries, where each dictionary follows this schema:
222
-
223
- ```python
224
- class SubAgent(TypedDict):
225
- name: str
226
- description: str
227
- system_prompt: str
228
- tools: Sequence[BaseTool | Callable | dict[str, Any]]
229
- model: NotRequired[str | BaseChatModel]
230
- middleware: NotRequired[list[AgentMiddleware]]
231
- interrupt_on: NotRequired[dict[str, bool | InterruptOnConfig]]
232
-
233
- class CompiledSubAgent(TypedDict):
234
- name: str
235
- description: str
236
- runnable: Runnable
237
- ```
238
-
239
- **`SubAgent` fields:**
240
-
241
- - **name**: The name of the subagent, and how the main agent will call it.
242
- - **description**: The description of the subagent shown to the main agent.
243
- - **system_prompt**: The system prompt used for the subagent.
244
- - **tools**: The list of tools the subagent has access to.
245
- - **model**: Optional model name or model instance.
246
- - **middleware**: Additional middleware to attach to the subagent. See [here](https://docs.langchain.com/oss/python/langchain/middleware) for an introduction to middleware and how it works with `create_agent`.
247
- - **interrupt_on**: A custom interrupt config that specifies human-in-the-loop interactions for your tools.
248
-
249
- **CompiledSubAgent fields:**
250
-
251
- - **name**: The name of the subagent, and how the main agent will call it.
252
- - **description**: The description of the subagent shown to the main agent.
253
- - **runnable**: A pre-built LangGraph graph/agent that will be used as the subagent. **Important:** The runnable's state schema must include a `messages` key. This is required for the subagent to communicate results back to the main agent.
254
-
255
- #### Using `SubAgent`
256
-
257
- ```python
258
- import os
259
- from typing import Literal
260
- from tavily import TavilyClient
261
- from deepagents import create_deep_agent
262
-
263
- tavily_client = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])
264
-
265
- def internet_search(
266
- query: str,
267
- max_results: int = 5,
268
- topic: Literal["general", "news", "finance"] = "general",
269
- include_raw_content: bool = False,
270
- ):
271
- """Run a web search"""
272
- return tavily_client.search(
273
- query,
274
- max_results=max_results,
275
- include_raw_content=include_raw_content,
276
- topic=topic,
277
- )
278
-
279
- research_subagent = {
280
- "name": "research-agent",
281
- "description": "Used to research more in depth questions",
282
- "system_prompt": "You are a great researcher",
283
- "tools": [internet_search],
284
- "model": "openai:gpt-4o", # Optional override, defaults to main agent model
285
- }
286
- subagents = [research_subagent]
287
-
288
- agent = create_deep_agent(
289
- model="anthropic:claude-sonnet-4-20250514",
290
- subagents=subagents
291
- )
292
- ```
293
-
294
- #### Using `CompiledSubAgent`
295
-
296
- For complex workflows, use a pre-built LangGraph graph:
297
-
298
- ```python
299
- from langchain.agents import create_agent
- from deepagents import CompiledSubAgent  # import path assumed from this package's public API
-
- # Create a custom agent graph
300
- custom_graph = create_agent(
301
- model=your_model,
302
- tools=specialized_tools,
303
- prompt="You are a specialized agent for data analysis..."
304
- )
305
-
306
- # Use it as a compiled subagent
307
- custom_subagent = CompiledSubAgent(
308
- name="data-analyzer",
309
- description="Specialized agent for complex data analysis tasks",
310
- runnable=custom_graph
311
- )
312
-
313
- subagents = [custom_subagent]
314
-
315
- agent = create_deep_agent(
316
- model="anthropic:claude-sonnet-4-20250514",
317
- tools=[internet_search],
318
- system_prompt=research_instructions,
319
- subagents=subagents
320
- )
321
- ```
322
-
323
- ### `interrupt_on`
324
-
325
- The harness can pause agent execution at specified tool calls to allow human approval or modification. This feature is opt-in via the `interrupt_on` parameter.
326
-
327
- Pass `interrupt_on` to `create_deep_agent` with a mapping of tool names to interrupt configurations. Example: `interrupt_on={"edit_file": True}` pauses before every edit.
328
-
329
- These tool configs are passed to our prebuilt [HITL middleware](https://docs.langchain.com/oss/python/langchain/middleware#human-in-the-loop) so that the agent pauses execution and waits for feedback from the user before executing configured tools.
330
-
331
- ```python
332
- from langchain_core.tools import tool
333
- from deepagents import create_deep_agent
334
-
335
- @tool
336
- def get_weather(city: str) -> str:
337
- """Get the weather in a city."""
338
- return f"The weather in {city} is sunny."
339
-
340
- agent = create_deep_agent(
341
- model="anthropic:claude-sonnet-4-20250514",
342
- tools=[get_weather],
343
- interrupt_on={
344
- "get_weather": {
345
- "allowed_decisions": ["approve", "edit", "reject"]
346
- },
347
- }
348
- )
349
-
350
- ```
351
-
352
- ## Deep Agents Middleware
353
-
354
- Deep Agents are built with a modular middleware architecture. As a reminder, Deep Agents have access to:
355
-
356
- - A planning tool
357
- - A filesystem for storing context and long-term memories
358
- - The ability to spawn subagents
359
-
360
- Each of these features is implemented as separate middleware. When you create a deep agent with `create_deep_agent`, we automatically attach **TodoListMiddleware**, **FilesystemMiddleware**, and **SubAgentMiddleware** to your agent.
361
-
362
- Middleware is a composable concept: you can attach as many or as few middleware to an agent as your use case requires. That means you can also use any of the aforementioned middleware independently!
363
-
364
- ### `TodoListMiddleware`
365
-
366
- Planning is integral to solving complex problems. If you've used Claude Code recently, you'll notice how it writes out a to-do list before tackling complex, multi-part tasks. You'll also notice how it can adapt and update this to-do list on the fly as more information comes in.
367
-
368
- `TodoListMiddleware` provides your agent with a tool specifically for updating this to-do list. Before and while executing a multi-part task, the agent is prompted to use the `write_todos` tool to keep track of what it's doing and what still needs to be done.
369
-
370
- ```python
371
- from langchain.agents import create_agent
372
- from langchain.agents.middleware import TodoListMiddleware
373
-
374
- # TodoListMiddleware is included by default in create_deep_agent
375
- # You can customize it if building a custom agent
376
- agent = create_agent(
377
- model="anthropic:claude-sonnet-4-20250514",
378
- # Custom planning instructions can be added via middleware
379
- middleware=[
380
- TodoListMiddleware(
381
- system_prompt="Use the write_todos tool to..." # Optional: Custom addition to the system prompt
382
- ),
383
- ],
384
- )
385
- ```
386
-
387
- ### `FilesystemMiddleware`
388
-
389
- Context engineering is one of the main challenges in building effective agents. This can be particularly hard when using tools that can return variable length results (e.g., `web_search`, RAG), as long tool results can quickly fill up your context window.
390
- `FilesystemMiddleware` provides six tools for your agent to interact with both short-term and long-term memory:
391
-
392
- - `ls`: List the files in your filesystem
393
- - `read_file`: Read an entire file, or a certain number of lines from a file
394
- - `write_file`: Write a new file to your filesystem
395
- - `edit_file`: Edit an existing file in your filesystem
- - `glob`: Find files matching a pattern
- - `grep`: Search file contents for a pattern
396
-
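The behavior of an `edit_file`-style tool can be sketched as exact-substring replacement over stored file content (a hedged sketch; the library's exact semantics may differ):

```python
# Hedged sketch of edit_file-style behavior: replace an exact substring once.
files = {"notes.txt": "alpha beta gamma"}

def edit_file(path: str, old: str, new: str) -> str:
    content = files[path]
    if old not in content:
        raise ValueError(f"{old!r} not found in {path}")
    files[path] = content.replace(old, new, 1)
    return f"Edited {path}"

edit_file("notes.txt", "beta", "BETA")
```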
397
- ```python
398
- from langchain.agents import create_agent
399
- from deepagents.middleware.filesystem import FilesystemMiddleware
400
-
401
-
402
- # FilesystemMiddleware is included by default in create_deep_agent
403
- # You can customize it if building a custom agent
404
- agent = create_agent(
405
- model="anthropic:claude-sonnet-4-20250514",
406
- middleware=[
407
- FilesystemMiddleware(
408
- backend=..., # Optional: customize storage backend
409
- system_prompt="Write to the filesystem when...", # Optional custom system prompt override
410
- custom_tool_descriptions={
411
- "ls": "Use the ls tool when...",
412
- "read_file": "Use the read_file tool to..."
413
- } # Optional: Custom descriptions for filesystem tools
414
- ),
415
- ],
416
- )
417
- ```
418
-
419
- ### `SubAgentMiddleware`
420
-
421
- Handing off tasks to subagents is a great way to isolate context, keeping the context window of the main (supervisor) agent clean while still going deep on a task. `SubAgentMiddleware` lets you register subagents that the main agent can invoke through a `task` tool.
422
-
423
- A subagent is defined with a name, description, system prompt, and tools. You can also provide a subagent with a custom model, or with additional middleware. This can be particularly useful when you want to give the subagent an additional state key to share with the main agent.
424
-
425
- ```python
426
- from langchain_core.tools import tool
427
- from langchain.agents import create_agent
428
- from deepagents.middleware.subagents import SubAgentMiddleware
429
-
430
-
431
- @tool
432
- def get_weather(city: str) -> str:
433
- """Get the weather in a city."""
434
- return f"The weather in {city} is sunny."
435
-
436
- agent = create_agent(
437
- model="claude-sonnet-4-20250514",
438
- middleware=[
439
- SubAgentMiddleware(
440
- default_model="claude-sonnet-4-20250514",
441
- default_tools=[],
442
- subagents=[
443
- {
444
- "name": "weather",
445
- "description": "This subagent can get weather in cities.",
446
- "system_prompt": "Use the get_weather tool to get the weather in a city.",
447
- "tools": [get_weather],
448
- "model": "gpt-4.1",
449
- "middleware": [],
450
- }
451
- ],
452
- )
453
- ],
454
- )
455
- ```
456
-
457
- For more complex use cases, you can also provide your own pre-built LangGraph graph as a subagent.
458
-
459
- ```python
460
- from langgraph.graph import StateGraph
- from langchain.agents import create_agent
- from deepagents import CompiledSubAgent  # import path assumed from this package's public API
-
- # Create a custom LangGraph graph
461
- # Important: Your state must include a 'messages' key
462
- def create_weather_graph():
463
- workflow = StateGraph(...)
464
- # Build your custom graph
465
- # Make sure your state schema includes 'messages'
466
- return workflow.compile()
467
-
468
- weather_graph = create_weather_graph()
469
-
470
- # Wrap it in a `CompiledSubAgent`
471
- weather_subagent = CompiledSubAgent(
472
- name="weather",
473
- description="This subagent can get weather in cities.",
474
- runnable=weather_graph
475
- )
476
-
477
- agent = create_agent(
478
- model="anthropic:claude-sonnet-4-20250514",
479
- middleware=[
480
- SubAgentMiddleware(
481
- default_model="claude-sonnet-4-20250514",
482
- default_tools=[],
483
- subagents=[weather_subagent],
484
- )
485
- ],
486
- )
487
- ```
488
-
489
- ## Sync vs Async
490
-
491
- Prior versions of deepagents separated sync and async agent factories.
492
-
493
- `async_create_deep_agent` has been folded into `create_deep_agent`.
494
-
495
- **Use `create_deep_agent` as the factory for both sync and async agents.**
496
-
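The unified surface can be illustrated with a mock agent exposing both entry points (a toy stand-in, not deepagents itself):

```python
# Toy stand-in showing one object serving both sync and async callers.
import asyncio

class MockAgent:
    def invoke(self, state: dict) -> dict:
        return {"messages": state["messages"] + ["done"]}

    async def ainvoke(self, state: dict) -> dict:
        # Same result surface as invoke, awaited by async callers.
        return self.invoke(state)

agent = MockAgent()
sync_result = agent.invoke({"messages": []})
async_result = asyncio.run(agent.ainvoke({"messages": []}))
```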
497
- ## MCP
498
-
499
- The `deepagents` library can be run with MCP tools. This can be achieved by using the [LangChain MCP Adapters library](https://github.com/langchain-ai/langchain-mcp-adapters).
500
-
501
- **NOTE:** MCP tools are async, so you'll need to use `agent.ainvoke()` or `agent.astream()` for invocation.
502
-
503
- (To run the example below, you will need to `pip install langchain-mcp-adapters`.)
504
-
505
- ```python
506
- import asyncio
507
- from langchain_mcp_adapters.client import MultiServerMCPClient
508
- from deepagents import create_deep_agent
509
-
510
- async def main():
511
- # Collect MCP tools
512
- mcp_client = MultiServerMCPClient(...)
513
- mcp_tools = await mcp_client.get_tools()
514
-
515
- # Create agent
516
- agent = create_deep_agent(tools=mcp_tools, ...)
517
-
518
- # Stream the agent
519
- async for chunk in agent.astream(
520
- {"messages": [{"role": "user", "content": "what is langgraph?"}]},
521
- stream_mode="values"
522
- ):
523
- if "messages" in chunk:
524
- chunk["messages"][-1].pretty_print()
525
-
526
- asyncio.run(main())
527
- ```
@@ -1,22 +0,0 @@
1
- deepagents/__init__.py,sha256=LHQm0v_7N9Gd4pmpRjhnlOCMIK2O0jQ4cEU8RiXEI8k,447
2
- deepagents/graph.py,sha256=czbV_e5wMyt7K83LwcpKzKdJq8Tr9Ha7iSmAh32WC_4,10520
3
- deepagents/backends/__init__.py,sha256=BOKu2cQ1OdMyO_l2rLqZQiXppYFmQbx7OIQb7WYwvZc,457
4
- deepagents/backends/composite.py,sha256=WZ_dnn63BmrU19ZJ5-m728f99pSa0Uq_CnwZjwmxz1U,26198
5
- deepagents/backends/filesystem.py,sha256=hiWSxatfJrLqBqVlj22CnXsDxjHB1oX5NJBogGBPiXM,26713
6
- deepagents/backends/protocol.py,sha256=HUmIrwYGduPfDcs_wtOzVU2QPA9kICZuGO-sUwxzz5I,15997
7
- deepagents/backends/sandbox.py,sha256=LC-89RawCXy2qQmvOu_UYAHo5mRAylhYhfuZPKmO-7Y,13196
8
- deepagents/backends/state.py,sha256=Qq4uRjKg6POEqLl4tNnWnXzbmLBpu3bZdMkcUROIgHw,7899
9
- deepagents/backends/store.py,sha256=9gdUQqPWChYgHVoopOUaocUdyUbFBpf-PxhTiXRXCto,18219
10
- deepagents/backends/utils.py,sha256=CE_HXddNTr954auqFIVgYLLD4Gdsfr9U8b384g07Wuc,13932
11
- deepagents/middleware/__init__.py,sha256=tATwi3JI-90-Wuf3Wg-szWkSBuKO9F2iyc5NoHP9q4g,566
12
- deepagents/middleware/_utils.py,sha256=ojy62kQLASQ2GabevWJaPGLItyccdNxLMPpYV25Lf20,687
13
- deepagents/middleware/filesystem.py,sha256=6CGwzwLj62LXFlF5bEZrXPBMIrzKQcOIQeK7g6G2H4M,55779
14
- deepagents/middleware/memory.py,sha256=D6CNDeh5wUGLuY0CZWubFn_cfW81XuzxEZGJK02vFiU,15860
15
- deepagents/middleware/patch_tool_calls.py,sha256=PdNhxPaQqwnFkhEAZEE2kEzadTNAOO3_iJRA30WqpGE,1981
16
- deepagents/middleware/skills.py,sha256=0nOj7knAzPC9FmFK7Po3bsZeMAQJ8VJU6K1BEvoj3NM,24181
17
- deepagents/middleware/subagents.py,sha256=2pIwqC_0MUptX2TsBtTpr4tFDdQYkOWCG6lwAgPO_cw,27273
18
- deepagents/middleware/summarization.py,sha256=y5oXrepdJ0BkGcqdUQ3qO-cNVHb_QKVWjEFRurUF5sM,29127
19
- deepagents-0.3.9.dist-info/METADATA,sha256=rQfU0aeLPZqCDOka6yf_LmFaODD59BT_7vWhj0aYsM0,19748
20
- deepagents-0.3.9.dist-info/WHEEL,sha256=wUyA8OaulRlbfwMtmQsvNngGrxQHAvkKcvRmdizlJi0,92
21
- deepagents-0.3.9.dist-info/top_level.txt,sha256=drAzchOzPNePwpb3_pbPuvLuayXkN7SNqeIKMBWJoAo,11
22
- deepagents-0.3.9.dist-info/RECORD,,