letta-nightly 0.11.5__py3-none-any.whl → 0.11.6.dev20250827104106__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,665 @@
+ Metadata-Version: 2.4
+ Name: letta-nightly
+ Version: 0.11.6.dev20250827104106
+ Summary: Create LLM agents with long-term memory and custom tools
+ Author-email: Letta Team <contact@letta.com>
+ License: Apache License
+ License-File: LICENSE
+ Requires-Python: <3.14,>=3.11
+ Requires-Dist: aiomultiprocess>=0.9.1
+ Requires-Dist: alembic>=1.13.3
+ Requires-Dist: anthropic>=0.49.0
+ Requires-Dist: apscheduler>=3.11.0
+ Requires-Dist: black[jupyter]>=24.2.0
+ Requires-Dist: brotli>=1.1.0
+ Requires-Dist: certifi>=2025.6.15
+ Requires-Dist: colorama>=0.4.6
+ Requires-Dist: composio-core>=0.7.7
+ Requires-Dist: datamodel-code-generator[http]>=0.25.0
+ Requires-Dist: demjson3>=3.0.6
+ Requires-Dist: docstring-parser<0.17,>=0.16
+ Requires-Dist: faker>=36.1.0
+ Requires-Dist: firecrawl-py<3.0.0,>=2.8.0
+ Requires-Dist: grpcio-tools>=1.68.1
+ Requires-Dist: grpcio>=1.68.1
+ Requires-Dist: html2text>=2020.1.16
+ Requires-Dist: httpx-sse>=0.4.0
+ Requires-Dist: httpx>=0.28.0
+ Requires-Dist: jinja2>=3.1.5
+ Requires-Dist: letta-client>=0.1.277
+ Requires-Dist: llama-index-embeddings-openai>=0.3.1
+ Requires-Dist: llama-index>=0.12.2
+ Requires-Dist: markitdown[docx,pdf,pptx]>=0.1.2
+ Requires-Dist: marshmallow-sqlalchemy>=1.4.1
+ Requires-Dist: matplotlib>=3.10.1
+ Requires-Dist: mcp[cli]>=1.9.4
+ Requires-Dist: mistralai>=1.8.1
+ Requires-Dist: nltk>=3.8.1
+ Requires-Dist: numpy>=2.1.0
+ Requires-Dist: openai>=1.99.9
+ Requires-Dist: opentelemetry-api==1.30.0
+ Requires-Dist: opentelemetry-exporter-otlp==1.30.0
+ Requires-Dist: opentelemetry-instrumentation-requests==0.51b0
+ Requires-Dist: opentelemetry-instrumentation-sqlalchemy==0.51b0
+ Requires-Dist: opentelemetry-sdk==1.30.0
+ Requires-Dist: orjson>=3.11.1
+ Requires-Dist: pathvalidate>=3.2.1
+ Requires-Dist: prettytable>=3.9.0
+ Requires-Dist: pydantic-settings>=2.2.1
+ Requires-Dist: pydantic>=2.10.6
+ Requires-Dist: pyhumps>=3.8.0
+ Requires-Dist: python-box>=7.1.1
+ Requires-Dist: python-multipart>=0.0.19
+ Requires-Dist: pytz>=2023.3.post1
+ Requires-Dist: pyyaml>=6.0.1
+ Requires-Dist: questionary>=2.0.1
+ Requires-Dist: rich>=13.9.4
+ Requires-Dist: sentry-sdk[fastapi]==2.19.1
+ Requires-Dist: setuptools>=70
+ Requires-Dist: sqlalchemy-json>=0.7.0
+ Requires-Dist: sqlalchemy-utils>=0.41.2
+ Requires-Dist: sqlalchemy[asyncio]>=2.0.41
+ Requires-Dist: sqlmodel>=0.0.16
+ Requires-Dist: structlog>=25.4.0
+ Requires-Dist: tavily-python>=0.7.2
+ Requires-Dist: tqdm>=4.66.1
+ Requires-Dist: typer>=0.15.2
+ Provides-Extra: bedrock
+ Requires-Dist: aioboto3>=14.3.0; extra == 'bedrock'
+ Requires-Dist: boto3>=1.36.24; extra == 'bedrock'
+ Provides-Extra: cloud-tool-sandbox
+ Requires-Dist: e2b-code-interpreter>=1.0.3; extra == 'cloud-tool-sandbox'
+ Provides-Extra: desktop
+ Requires-Dist: aiosqlite>=0.21.0; extra == 'desktop'
+ Requires-Dist: docker>=7.1.0; extra == 'desktop'
+ Requires-Dist: fastapi>=0.115.6; extra == 'desktop'
+ Requires-Dist: langchain-community>=0.3.7; extra == 'desktop'
+ Requires-Dist: langchain>=0.3.7; extra == 'desktop'
+ Requires-Dist: locust>=2.31.5; extra == 'desktop'
+ Requires-Dist: pgvector>=0.2.3; extra == 'desktop'
+ Requires-Dist: sqlite-vec>=0.1.7a2; extra == 'desktop'
+ Requires-Dist: uvicorn>=0.24.0.post1; extra == 'desktop'
+ Requires-Dist: websockets; extra == 'desktop'
+ Requires-Dist: wikipedia>=1.4.0; extra == 'desktop'
+ Provides-Extra: dev
+ Requires-Dist: autoflake>=2.3.0; extra == 'dev'
+ Requires-Dist: black[jupyter]>=24.4.2; extra == 'dev'
+ Requires-Dist: ipdb>=0.13.13; extra == 'dev'
+ Requires-Dist: ipykernel>=6.29.5; extra == 'dev'
+ Requires-Dist: isort>=5.13.2; extra == 'dev'
+ Requires-Dist: pexpect>=4.9.0; extra == 'dev'
+ Requires-Dist: pre-commit>=3.5.0; extra == 'dev'
+ Requires-Dist: pyright>=1.1.347; extra == 'dev'
+ Requires-Dist: pytest; extra == 'dev'
+ Requires-Dist: pytest-asyncio>=0.24.0; extra == 'dev'
+ Requires-Dist: pytest-json-report>=1.5.0; extra == 'dev'
+ Requires-Dist: pytest-mock>=3.14.0; extra == 'dev'
+ Requires-Dist: pytest-order>=1.2.0; extra == 'dev'
+ Provides-Extra: experimental
+ Requires-Dist: google-cloud-profiler>=4.1.0; extra == 'experimental'
+ Requires-Dist: granian[reload,uvloop]>=2.3.2; extra == 'experimental'
+ Requires-Dist: uvloop>=0.21.0; extra == 'experimental'
+ Provides-Extra: external-tools
+ Requires-Dist: docker>=7.1.0; extra == 'external-tools'
+ Requires-Dist: firecrawl-py<3.0.0,>=2.8.0; extra == 'external-tools'
+ Requires-Dist: langchain-community>=0.3.7; extra == 'external-tools'
+ Requires-Dist: langchain>=0.3.7; extra == 'external-tools'
+ Requires-Dist: turbopuffer>=0.5.17; extra == 'external-tools'
+ Requires-Dist: wikipedia>=1.4.0; extra == 'external-tools'
+ Provides-Extra: google
+ Requires-Dist: google-genai>=1.15.0; extra == 'google'
+ Provides-Extra: modal
+ Requires-Dist: modal>=1.1.0; extra == 'modal'
+ Provides-Extra: pinecone
+ Requires-Dist: pinecone[asyncio]>=7.3.0; extra == 'pinecone'
+ Provides-Extra: postgres
+ Requires-Dist: asyncpg>=0.30.0; extra == 'postgres'
+ Requires-Dist: pg8000>=1.30.3; extra == 'postgres'
+ Requires-Dist: pgvector>=0.2.3; extra == 'postgres'
+ Requires-Dist: psycopg2-binary>=2.9.10; extra == 'postgres'
+ Requires-Dist: psycopg2>=2.9.10; extra == 'postgres'
+ Provides-Extra: redis
+ Requires-Dist: redis>=6.2.0; extra == 'redis'
+ Provides-Extra: server
+ Requires-Dist: fastapi>=0.115.6; extra == 'server'
+ Requires-Dist: uvicorn>=0.24.0.post1; extra == 'server'
+ Requires-Dist: websockets; extra == 'server'
+ Provides-Extra: sqlite
+ Requires-Dist: aiosqlite>=0.21.0; extra == 'sqlite'
+ Requires-Dist: sqlite-vec>=0.1.7a2; extra == 'sqlite'
+ Description-Content-Type: text/markdown
+
+ <p align="center">
+ <picture>
+ <source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/letta-ai/letta/refs/heads/main/assets/Letta-logo-RGB_GreyonTransparent_cropped_small.png">
+ <source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/letta-ai/letta/refs/heads/main/assets/Letta-logo-RGB_OffBlackonTransparent_cropped_small.png">
+ <img alt="Letta logo" src="https://raw.githubusercontent.com/letta-ai/letta/refs/heads/main/assets/Letta-logo-RGB_GreyonOffBlack_cropped_small.png" width="500">
+ </picture>
+ </p>
+
+ # Letta (formerly MemGPT)
+
+ Letta is the platform for building stateful agents: open AI with advanced memory that can learn and self-improve over time.
+
+ ### Quicklinks:
+ * [**Developer Documentation**](https://docs.letta.com): Learn how to create agents that learn, using Python / TypeScript
+ * [**Agent Development Environment (ADE)**](https://docs.letta.com/guides/ade/overview): A no-code UI for building stateful agents
+ * [**Letta Desktop**](https://docs.letta.com/guides/ade/desktop): A fully local version of the ADE, available on macOS and Windows
+ * [**Letta Cloud**](https://app.letta.com/): The fastest way to try Letta, with agents running in the cloud
+
+ ## Get started
+
+ To get started, install the Letta SDK (available for both Python and TypeScript):
+
+ ### [Python SDK](https://github.com/letta-ai/letta-python)
+ ```sh
+ pip install letta-client
+ ```
+
+ ### [TypeScript / Node.js SDK](https://github.com/letta-ai/letta-node)
+ ```sh
+ npm install @letta-ai/letta-client
+ ```
+
+ ## Simple Hello World example
+
+ In the example below, we'll create a stateful agent with two memory blocks: one for itself (the `persona` block), and one for the human. We'll initialize the `human` memory block with incorrect information, and correct the agent in our first message - which will trigger the agent to update its own memory with a tool call.
+
+ *To run the examples, you'll need to get a `LETTA_API_KEY` from [Letta Cloud](https://app.letta.com/api-keys), or run your own self-hosted server (see [our guide](https://docs.letta.com/guides/selfhosting))*
+
+ ### Python
+ ```python
+ from letta_client import Letta
+
+ client = Letta(token="LETTA_API_KEY")
+ # client = Letta(base_url="http://localhost:8283") # if self-hosting, set your base_url
+
+ agent_state = client.agents.create(
+     model="openai/gpt-4.1",
+     embedding="openai/text-embedding-3-small",
+     memory_blocks=[
+         {
+             "label": "human",
+             "value": "The human's name is Chad. They like vibe coding."
+         },
+         {
+             "label": "persona",
+             "value": "My name is Sam, a helpful assistant."
+         }
+     ],
+     tools=["web_search", "run_code"]
+ )
+
+ print(agent_state.id)
+ # agent-d9be...0846
+
+ response = client.agents.messages.create(
+     agent_id=agent_state.id,
+     messages=[
+         {
+             "role": "user",
+             "content": "Hey, nice to meet you, my name is Brad."
+         }
+     ]
+ )
+
+ # the agent will think, then edit its memory using a tool
+ for message in response.messages:
+     print(message)
+ ```
211
+
212
+ ### TypeScript / Node.js
213
+ ```typescript
214
+ import { LettaClient } from '@letta-ai/letta-client'
215
+
216
+ const client = new LettaClient({ token: "LETTA_API_KEY" });
217
+ // const client = new LettaClient({ baseUrl: "http://localhost:8283" }); // if self-hosting, set your baseUrl
218
+
219
+ const agentState = await client.agents.create({
220
+ model: "openai/gpt-4.1",
221
+ embedding: "openai/text-embedding-3-small",
222
+ memoryBlocks: [
223
+ {
224
+ label: "human",
225
+ value: "The human's name is Chad. They like vibe coding."
226
+ },
227
+ {
228
+ label: "persona",
229
+ value: "My name is Sam, a helpful assistant."
230
+ }
231
+ ],
232
+ tools: ["web_search", "run_code"]
233
+ });
234
+
235
+ console.log(agentState.id);
236
+ // agent-d9be...0846
237
+
238
+ const response = await client.agents.messages.create(
239
+ agentState.id, {
240
+ messages: [
241
+ {
242
+ role: "user",
243
+ content: "Hey, nice to meet you, my name is Brad."
244
+ }
245
+ ]
246
+ }
247
+ );
248
+
249
+ // the agent will think, then edit its memory using a tool
250
+ for (const message of response.messages) {
251
+ console.log(message);
252
+ }
253
+ ```
+
+ ## Core concepts in Letta:
+
+ Letta is made by the creators of [MemGPT](https://arxiv.org/abs/2310.08560), a research paper that introduced the concept of the "LLM Operating System" for memory management. The core concepts in Letta for designing stateful agents follow the MemGPT LLM OS principles:
+
+ 1. [**Memory Hierarchy**](https://docs.letta.com/guides/agents/memory): Agents have self-editing memory that is split between in-context memory and out-of-context memory
+ 2. [**Memory Blocks**](https://docs.letta.com/guides/agents/memory-blocks): The agent's in-context memory is composed of persistent editable **memory blocks**
+ 3. [**Agentic Context Engineering**](https://docs.letta.com/guides/agents/context-engineering): Agents control the context window by using tools to edit, delete, or search for memory
+ 4. [**Perpetual Self-Improving Agents**](https://docs.letta.com/guides/agents/overview): Every "agent" is a single entity that has a perpetual (infinite) message history
+
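To make the memory-block idea concrete, here is a toy sketch of how labeled, editable blocks compose into the in-context portion of a prompt. This is plain Python for illustration only, not Letta's actual implementation; the `MemoryBlock` class and `render_context` helper are hypothetical names:

```python
from dataclasses import dataclass

@dataclass
class MemoryBlock:
    # A labeled, persistently editable unit of in-context memory.
    label: str
    value: str

def render_context(blocks: list[MemoryBlock]) -> str:
    # Compose all attached blocks into the memory section of the context window.
    return "\n".join(f"<{b.label}>\n{b.value}\n</{b.label}>" for b in blocks)

blocks = [
    MemoryBlock("persona", "My name is Sam, a helpful assistant."),
    MemoryBlock("human", "The human's name is Chad."),
]

# An agent "edits its own memory" by rewriting a block's value via a tool call;
# the change persists and appears in every subsequent context window.
blocks[1].value = "The human's name is Brad."

print(render_context(blocks))
```

In real Letta agents the blocks live server-side and are edited by the agent's memory tools, but the composition idea is the same.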
+ ## Multi-agent shared memory ([full guide](https://docs.letta.com/guides/agents/multi-agent-shared-memory))
+
+ A single memory block can be attached to multiple agents, enabling extremely powerful multi-agent shared-memory setups.
+ For example, you can create two agents that each have their own independent memory blocks in addition to a shared memory block.
+
+ ### Python
+ ```python
+ # create a shared memory block
+ shared_block = client.blocks.create(
+     label="organization",
+     description="Shared information between all agents within the organization.",
+     value="Nothing here yet, we should update this over time."
+ )
+
+ # create a supervisor agent
+ supervisor_agent = client.agents.create(
+     model="anthropic/claude-3-5-sonnet-20241022",
+     embedding="openai/text-embedding-3-small",
+     # blocks created for this agent
+     memory_blocks=[{"label": "persona", "value": "I am a supervisor"}],
+     # pre-existing shared block that is "attached" to this agent
+     block_ids=[shared_block.id],
+ )
+
+ # create a worker agent
+ worker_agent = client.agents.create(
+     model="openai/gpt-4.1-mini",
+     embedding="openai/text-embedding-3-small",
+     # blocks created for this agent
+     memory_blocks=[{"label": "persona", "value": "I am a worker"}],
+     # pre-existing shared block that is "attached" to this agent
+     block_ids=[shared_block.id],
+ )
+ ```
+
+ ### TypeScript / Node.js
+ ```typescript
+ // create a shared memory block
+ const sharedBlock = await client.blocks.create({
+     label: "organization",
+     description: "Shared information between all agents within the organization.",
+     value: "Nothing here yet, we should update this over time."
+ });
+
+ // create a supervisor agent
+ const supervisorAgent = await client.agents.create({
+     model: "anthropic/claude-3-5-sonnet-20241022",
+     embedding: "openai/text-embedding-3-small",
+     // blocks created for this agent
+     memoryBlocks: [{ label: "persona", value: "I am a supervisor" }],
+     // pre-existing shared block that is "attached" to this agent
+     blockIds: [sharedBlock.id]
+ });
+
+ // create a worker agent
+ const workerAgent = await client.agents.create({
+     model: "openai/gpt-4.1-mini",
+     embedding: "openai/text-embedding-3-small",
+     // blocks created for this agent
+     memoryBlocks: [{ label: "persona", value: "I am a worker" }],
+     // pre-existing shared block that is "attached" to this agent
+     blockIds: [sharedBlock.id]
+ });
+ ```
+
+ ## Sleep-time agents ([full guide](https://docs.letta.com/guides/agents/architectures/sleeptime))
+
+ In Letta, you can create special **sleep-time agents** that share the memory of your primary agents, but run in the background (like an agent's "subconscious"). You can think of sleep-time agents as a special form of multi-agent architecture.
+
+ To enable sleep-time agents for your agent, set the `enable_sleeptime` flag to true when creating your agent. This will automatically create a sleep-time agent in addition to your main agent, and the sleep-time agent will handle memory editing instead of your primary agent.
+
+ ### Python
+ ```python
+ agent_state = client.agents.create(
+     ...
+     enable_sleeptime=True,  # <- enable this flag to create a sleep-time agent
+ )
+ ```
+
+ ### TypeScript / Node.js
+ ```typescript
+ const agentState = await client.agents.create({
+     ...
+     enableSleeptime: true  // <- enable this flag to create a sleep-time agent
+ });
+ ```
+
+ ## Saving and sharing agents with Agent File (`.af`) ([full guide](https://docs.letta.com/guides/agents/agent-file))
+
+ In Letta, all agent data is persisted to disk (Postgres or SQLite), and can be easily imported and exported using the open source [Agent File](https://github.com/letta-ai/agent-file) (`.af`) file format. You can use Agent File to checkpoint your agents, as well as move your agents (and their complete state/memories) between different Letta servers, e.g. between self-hosted Letta and Letta Cloud.
+
+ <details>
+ <summary>View code snippets</summary>
+
+ ### Python
+ ```python
+ # Import your .af file from any location
+ agent_state = client.agents.import_agent_serialized(file=open("/path/to/agent/file.af", "rb"))
+
+ print(f"Imported agent: {agent_state.id}")
+
+ # Export your agent into a serialized schema object (which you can write to a file)
+ schema = client.agents.export_agent_serialized(agent_id="<AGENT_ID>")
+ ```
+
+ ### TypeScript / Node.js
+ ```typescript
+ import { readFileSync } from 'fs';
+ import { Blob } from 'buffer';
+
+ // Import your .af file from any location
+ const file = new Blob([readFileSync('/path/to/agent/file.af')])
+ const agentState = await client.agents.importAgentSerialized(file, {})
+
+ console.log(`Imported agent: ${agentState.id}`);
+
+ // Export your agent into a serialized schema object (which you can write to a file)
+ const schema = await client.agents.exportAgentSerialized("<AGENT_ID>");
+ ```
+ </details>
+
+ ## Model Context Protocol (MCP) and custom tools ([full guide](https://docs.letta.com/guides/mcp/overview))
+
+ Letta has rich support for MCP tools (Letta acts as an MCP client), as well as custom Python tools.
+ MCP servers can be easily added within the Agent Development Environment (ADE) tool manager UI, as well as via the SDK:
+
+ <details>
+ <summary>View code snippets</summary>
+
+ ### Python
+ ```python
+ # List tools from an MCP server
+ tools = client.tools.list_mcp_tools_by_server(mcp_server_name="weather-server")
+
+ # Add a specific tool from the MCP server
+ tool = client.tools.add_mcp_tool(
+     mcp_server_name="weather-server",
+     mcp_tool_name="get_weather"
+ )
+
+ # Create agent with MCP tool attached
+ agent_state = client.agents.create(
+     model="openai/gpt-4o-mini",
+     embedding="openai/text-embedding-3-small",
+     tool_ids=[tool.id]
+ )
+
+ # Or attach the tool to an existing agent
+ client.agents.tools.attach(
+     agent_id=agent_state.id,
+     tool_id=tool.id
+ )
+
+ # Use the agent with MCP tools
+ response = client.agents.messages.create(
+     agent_id=agent_state.id,
+     messages=[
+         {
+             "role": "user",
+             "content": "Use the weather tool to check the forecast"
+         }
+     ]
+ )
+ ```
+
+ ### TypeScript / Node.js
+ ```typescript
+ // List tools from an MCP server
+ const tools = await client.tools.listMcpToolsByServer("weather-server");
+
+ // Add a specific tool from the MCP server
+ const tool = await client.tools.addMcpTool("weather-server", "get_weather");
+
+ // Create agent with MCP tool
+ const agentState = await client.agents.create({
+     model: "openai/gpt-4o-mini",
+     embedding: "openai/text-embedding-3-small",
+     toolIds: [tool.id]
+ });
+
+ // Use the agent with MCP tools
+ const response = await client.agents.messages.create(agentState.id, {
+     messages: [
+         {
+             role: "user",
+             content: "Use the weather tool to check the forecast"
+         }
+     ]
+ });
+ ```
+ </details>
+
+
+ ## Filesystem ([full guide](https://docs.letta.com/guides/agents/filesystem))
+
+ Letta's filesystem allows you to easily connect your agents to external files, for example: research papers, reports, medical records, or any other data in common text formats (`.pdf`, `.txt`, `.md`, `.json`, etc).
+ Once you attach a folder to an agent, the agent can use filesystem tools (`open_file`, `grep_file`, `search_file`) to browse and search the files for information.
+
+ <details>
+ <summary>View code snippets</summary>
+
+ ### Python
+ ```python
+ import time
+
+ # get an available embedding_config
+ embedding_configs = client.embedding_models.list()
+ embedding_config = embedding_configs[0]
+
+ # create the folder
+ folder = client.folders.create(
+     name="my_folder",
+     embedding_config=embedding_config
+ )
+
+ # upload a file into the folder
+ job = client.folders.files.upload(
+     folder_id=folder.id,
+     file=open("my_file.txt", "rb")
+ )
+
+ # wait until the job is completed
+ while True:
+     job = client.jobs.retrieve(job.id)
+     if job.status == "completed":
+         break
+     elif job.status == "failed":
+         raise ValueError(f"Job failed: {job.metadata}")
+     print(f"Job status: {job.status}")
+     time.sleep(1)
+
+ # once you attach a folder to an agent, the agent can see all files in it
+ client.agents.folders.attach(agent_id=agent_state.id, folder_id=folder.id)
+
+ response = client.agents.messages.create(
+     agent_id=agent_state.id,
+     messages=[
+         {
+             "role": "user",
+             "content": "What data is inside of my_file.txt?"
+         }
+     ]
+ )
+
+ for message in response.messages:
+     print(message)
+ ```
+
+ ### TypeScript / Node.js
+ ```typescript
+ import { createReadStream } from 'fs';
+
+ // get an available embedding_config
+ const embeddingConfigs = await client.embeddingModels.list();
+ const embeddingConfig = embeddingConfigs[0];
+
+ // create the folder
+ const folder = await client.folders.create({
+     name: "my_folder",
+     embeddingConfig: embeddingConfig
+ });
+
+ // upload a file into the folder
+ const uploadJob = await client.folders.files.upload(
+     createReadStream("my_file.txt"),
+     folder.id,
+ );
+ console.log("file uploaded")
+
+ // wait until the job is completed
+ while (true) {
+     const job = await client.jobs.retrieve(uploadJob.id);
+     if (job.status === "completed") {
+         break;
+     } else if (job.status === "failed") {
+         throw new Error(`Job failed: ${job.metadata}`);
+     }
+     console.log(`Job status: ${job.status}`);
+     await new Promise((resolve) => setTimeout(resolve, 1000));
+ }
+
+ // list files in the folder
+ const files = await client.folders.files.list(folder.id);
+ console.log(`Files in folder: ${files}`);
+
+ // list passages in the folder
+ const passages = await client.folders.passages.list(folder.id);
+ console.log(`Passages in folder: ${passages}`);
+
+ // once you attach a folder to an agent, the agent can see all files in it
+ await client.agents.folders.attach(agentState.id, folder.id);
+
+ const response = await client.agents.messages.create(agentState.id, {
+     messages: [
+         {
+             role: "user",
+             content: "What data is inside of my_file.txt?"
+         }
+     ]
+ });
+
+ for (const message of response.messages) {
+     console.log(message);
+ }
+ ```
+ </details>
+
+
+ ## Long-running agents ([full guide](https://docs.letta.com/guides/agents/long-running))
+
+ When agents need to execute multiple tool calls or perform complex operations (like deep research, data analysis, or multi-step workflows), processing time can vary significantly. Letta supports both a background mode (with resumable streaming) and an async mode (with polling) to enable robust long-running agent executions.
+
+ <details>
+ <summary>View code snippets</summary>
+
+ ### Python
+ ```python
+ stream = client.agents.messages.create_stream(
+     agent_id=agent_state.id,
+     messages=[
+         {
+             "role": "user",
+             "content": "Run comprehensive analysis on this dataset"
+         }
+     ],
+     stream_tokens=True,
+     background=True,
+ )
+
+ run_id = None
+ last_seq_id = None
+ for chunk in stream:
+     if hasattr(chunk, "run_id") and hasattr(chunk, "seq_id"):
+         run_id = chunk.run_id  # Save this to reconnect if your connection drops
+         last_seq_id = chunk.seq_id  # Save this as your resumption point for cursor-based pagination
+     print(chunk)
+
+ # If disconnected, resume from the last received seq_id:
+ for chunk in client.runs.stream(run_id, starting_after=last_seq_id):
+     print(chunk)
+ ```
+
+ ### TypeScript / Node.js
+ ```typescript
+ const stream = await client.agents.messages.createStream({
+     agentId: agentState.id,
+     requestBody: {
+         messages: [
+             {
+                 role: "user",
+                 content: "Run comprehensive analysis on this dataset"
+             }
+         ],
+         streamTokens: true,
+         background: true,
+     }
+ });
+
+ let runId = null;
+ let lastSeqId = null;
+ for await (const chunk of stream) {
+     if (chunk.run_id && chunk.seq_id) {
+         runId = chunk.run_id;  // Save this to reconnect if your connection drops
+         lastSeqId = chunk.seq_id;  // Save this as your resumption point for cursor-based pagination
+     }
+     console.log(chunk);
+ }
+
+ // If disconnected, resume from the last received seq_id
+ for await (const chunk of client.runs.stream(runId, { startingAfter: lastSeqId })) {
+     console.log(chunk);
+ }
+ ```
+ </details>
+
+
+ ## Using local models
+
+ Letta is model agnostic and supports local model providers such as [Ollama](https://docs.letta.com/guides/server/providers/ollama) and [LM Studio](https://docs.letta.com/guides/server/providers/lmstudio). You can also easily swap models inside an agent after it has been created, by updating the agent state with the new model provider via the SDK or in the ADE.
+
+ ## Development (only needed if you need to modify the server code)
+
+ *Note: this repository contains the source code for the core Letta service (API server), not the client SDKs. The client SDKs can be found here: [Python](https://github.com/letta-ai/letta-python), [TypeScript](https://github.com/letta-ai/letta-node).*
+
+ To install the Letta server from source, fork the repo, clone your fork, then use [uv](https://docs.astral.sh/uv/getting-started/installation/) to install from inside the main directory:
+ ```sh
+ cd letta
+ uv sync --all-extras
+ ```
+
+ To run the Letta server from source, use `uv run`:
+ ```sh
+ uv run letta server
+ ```
+
+ ## Contributing
+
+ Letta is an open source project built by over a hundred contributors. There are many ways to get involved in the Letta OSS project!
+
+ * [**Join the Discord**](https://discord.gg/letta): Chat with the Letta devs and other AI developers.
+ * [**Chat on our forum**](https://forum.letta.com/): If you're not into Discord, check out our developer forum.
+ * **Follow our socials**: [Twitter/X](https://twitter.com/Letta_AI), [LinkedIn](https://www.linkedin.com/in/letta), [YouTube](https://www.youtube.com/@letta-ai)
+
+ ---
+
+ ***Legal notices**: By using Letta and related Letta services (such as the Letta endpoint or hosted service), you are agreeing to our [privacy policy](https://www.letta.com/privacy-policy) and [terms of service](https://www.letta.com/terms-of-service).*