autochatlib 0.1.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (38)
  1. autochatlib-0.1.0/.gitignore +13 -0
  2. autochatlib-0.1.0/LICENSE +21 -0
  3. autochatlib-0.1.0/PKG-INFO +350 -0
  4. autochatlib-0.1.0/README.md +306 -0
  5. autochatlib-0.1.0/examples/basic_chat_example.py +62 -0
  6. autochatlib-0.1.0/examples/basic_persistence_compression_example.py +78 -0
  7. autochatlib-0.1.0/examples/basic_retriever_example.py +73 -0
  8. autochatlib-0.1.0/examples/basic_tool_example.py +78 -0
  9. autochatlib-0.1.0/pyproject.toml +56 -0
  10. autochatlib-0.1.0/src/autochat/__init__.py +66 -0
  11. autochatlib-0.1.0/src/autochat/chat.py +116 -0
  12. autochatlib-0.1.0/src/autochat/compression/__init__.py +11 -0
  13. autochatlib-0.1.0/src/autochat/compression/config.py +86 -0
  14. autochatlib-0.1.0/src/autochat/compression/strategies.py +186 -0
  15. autochatlib-0.1.0/src/autochat/compression/types.py +21 -0
  16. autochatlib-0.1.0/src/autochat/config.py +11 -0
  17. autochatlib-0.1.0/src/autochat/exceptions/__init__.py +3 -0
  18. autochatlib-0.1.0/src/autochat/exceptions/errors.py +6 -0
  19. autochatlib-0.1.0/src/autochat/graph/__init__.py +4 -0
  20. autochatlib-0.1.0/src/autochat/graph/builder.py +151 -0
  21. autochatlib-0.1.0/src/autochat/graph/retrievers.py +51 -0
  22. autochatlib-0.1.0/src/autochat/graph/runtime.py +34 -0
  23. autochatlib-0.1.0/src/autochat/graph/state.py +12 -0
  24. autochatlib-0.1.0/src/autochat/graph/tools.py +72 -0
  25. autochatlib-0.1.0/src/autochat/guidelines.py +6 -0
  26. autochatlib-0.1.0/src/autochat/py.typed +0 -0
  27. autochatlib-0.1.0/src/autochat/retrieval/__init__.py +23 -0
  28. autochatlib-0.1.0/src/autochat/retrieval/base.py +201 -0
  29. autochatlib-0.1.0/src/autochat/retrieval/config.py +14 -0
  30. autochatlib-0.1.0/src/autochat/retrieval/documents.py +17 -0
  31. autochatlib-0.1.0/src/autochat/retrieval/strategies.py +37 -0
  32. autochatlib-0.1.0/src/autochat/retrieval/types.py +48 -0
  33. autochatlib-0.1.0/src/autochat/runtime/__init__.py +3 -0
  34. autochatlib-0.1.0/src/autochat/runtime/context.py +18 -0
  35. autochatlib-0.1.0/src/autochat/tools/__init__.py +19 -0
  36. autochatlib-0.1.0/src/autochat/tools/base.py +326 -0
  37. autochatlib-0.1.0/src/autochat/tools/decorators.py +37 -0
  38. autochatlib-0.1.0/src/autochat/tools/types.py +48 -0
@@ -0,0 +1,13 @@
+ # Python-generated files
+ __pycache__/
+ *.py[oc]
+ build/
+ dist/
+ wheels/
+ *.egg-info
+
+ # Virtual environments
+ .venv
+
+ # Environment files
+ .env
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2026 autochat contributors
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
@@ -0,0 +1,350 @@
+ Metadata-Version: 2.4
+ Name: autochatlib
+ Version: 0.1.0
+ Summary: A context-aware chat harness primitive built on LangChain and LangGraph.
+ Project-URL: Homepage, https://github.com/magnumxpm/autochat
+ Project-URL: Repository, https://github.com/magnumxpm/autochat
+ Project-URL: Issues, https://github.com/magnumxpm/autochat/issues
+ Author-email: Pritam Mukherjee <me@pmukherjee.dev>
+ License: MIT License
+
+ Copyright (c) 2026 autochat contributors
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
+ License-File: LICENSE
+ Keywords: ai,chat,langchain,langgraph,llm
+ Classifier: Development Status :: 3 - Alpha
+ Classifier: Intended Audience :: Developers
+ Classifier: License :: OSI Approved :: MIT License
+ Classifier: Operating System :: OS Independent
+ Classifier: Programming Language :: Python :: 3
+ Classifier: Programming Language :: Python :: 3.13
+ Classifier: Typing :: Typed
+ Requires-Python: >=3.13
+ Requires-Dist: langchain-core>=1.3.2
+ Requires-Dist: langchain>=1.2.17
+ Requires-Dist: langgraph>=1.1.10
+ Description-Content-Type: text/markdown
+
+ # autochat
+
+ `autochat` is a small Python library for building context-aware chat applications on top of LangGraph and LangChain.
+
+ It gives your app a clean chat harness primitive:
+
+ ```python
+ from autochat import AutoChat
+
+ chat = AutoChat(...)
+ ```
+
+ Then you can invoke or stream the graph while passing your own runtime context into tools, retrievers, processors, and graph execution.
+
+ `autochat` is still under construction. It is not published to PyPI yet, but it will soon be installable as `autochat` with `pip`, `uv`, and other standard Python package managers.
+
+ ## Why
+
+ Most production chat apps need the same foundation:
+
+ - model and tool orchestration
+ - runtime context for auth, tenancy, request metadata, and app services
+ - context-aware tools and retrievers
+ - thread persistence
+ - optional history compression
+ - a simple async invoke/stream API
+
+ `autochat` packages those pieces into a small, typed, async-first interface.
+
+ ## Installation
+
+ For local development:
+
+ ```bash
+ uv sync
+ ```
+
+ For examples that use OpenAI models:
+
+ ```bash
+ uv sync --dev
+ ```
+
+ Future install flow:
+
+ ```bash
+ pip install autochat
+ ```
+
+ or:
+
+ ```bash
+ uv add autochat
+ ```
+
+ ## Quick Start
+
+ ```python
+ import asyncio
+ from dataclasses import dataclass
+
+ from langchain_openai import ChatOpenAI
+
+ from autochat import AutoChat, ChatConfig, ChatRuntime, chat_tool
+
+
+ @dataclass(frozen=True)
+ class AppContext:
+     user_id: str
+     plan: str
+
+
+ @chat_tool(name="current_plan")
+ async def current_plan(runtime: ChatRuntime[AppContext]) -> str:
+     return f"The user is on the {runtime.context.plan} plan."
+
+
+ async def main() -> None:
+     chat = AutoChat[AppContext](
+         config=ChatConfig(model=ChatOpenAI(model="gpt-5-nano")),
+         tools=[current_plan],
+         system_message="You are concise and practical.",
+     )
+
+     result = await chat.ainvoke(
+         "What plan am I on?",
+         thread_id="thread_123",
+         context=AppContext(user_id="user_1", plan="pro"),
+     )
+
+     print(result["messages"][-1].content)
+
+
+ if __name__ == "__main__":
+     asyncio.run(main())
+ ```
+
+ Run the included examples:
+
+ ```bash
+ uv run python examples/basic_tool_example.py
+ OPENAI_API_KEY=... uv run --dev python examples/basic_chat_example.py
+ OPENAI_API_KEY=... uv run --dev python examples/basic_retriever_example.py
+ OPENAI_API_KEY=... uv run --dev python examples/basic_persistence_compression_example.py
+ ```
+
+ ## Runtime Context
+
+ `ChatRuntime[TContext]` is created for each chat run and passed through the graph layer.
+
+ Use it to carry app-specific data like user IDs, org IDs, permissions, request metadata, database handles, or tenant config.
+
+ ```python
+ @dataclass(frozen=True)
+ class AppContext:
+     org_id: str
+     permissions: set[str]
+
+
+ @chat_tool(name="billing_status")
+ async def billing_status(runtime: ChatRuntime[AppContext]) -> str:
+     return f"Billing is active for {runtime.context.org_id}."
+ ```
+
+ ## Tools
+
+ Use `@chat_tool` for native AutoChat tools. Function schemas are inferred from normal function parameters, and `runtime` is injected automatically.
+
+ ```python
+ @chat_tool(name="calculator")
+ async def calculator(
+     a: float,
+     b: float,
+     runtime: ChatRuntime[AppContext],
+ ) -> float:
+     return a + b
+ ```
+
+ You can also wrap LangChain tools:
+
+ ```python
+ from autochat import ChatTool
+
+ chat = AutoChat(
+     config=ChatConfig(model=model),
+     tools=[ChatTool(langchain_tool)],
+ )
+ ```
+
+ ## Tool Processors
+
+ Preprocessors and postprocessors wrap tool execution with app logic such as auth checks, input normalization, logging, or cleanup.
+
+ ```python
+ from autochat import ToolInvocation
+
+
+ def require(permission: str):
+     def processor(invocation: ToolInvocation[AppContext, object]) -> object:
+         if permission not in invocation.runtime.context.permissions:
+             raise PermissionError(f"Missing permission: {permission}")
+         return invocation.input
+
+     return processor
+
+
+ @chat_tool(name="billing_status", preprocessors=[require("billing.read")])
+ async def billing_status(runtime: ChatRuntime[AppContext]) -> str:
+     return "Billing is active."
+ ```
+
+ ## Retrieval
+
+ Retrievers are exposed to the model as callable retrieval tools. The model decides when to call them, and AutoChat executes the retriever with the current `ChatRuntime`.
+
+ ```python
+ from autochat import ChatRetriever, ChatRuntime
+
+
+ async def search_docs(query: str, runtime: ChatRuntime[AppContext]) -> list[str]:
+     return [f"Docs for {runtime.context.org_id}: {query}"]
+
+
+ chat = AutoChat[AppContext](
+     config=ChatConfig(model=model),
+     retrievers=[
+         ChatRetriever(
+             search_docs,
+             name="docs",
+             description="Search organization documentation.",
+         )
+     ],
+     system_message="Use the docs retriever for policy or product questions.",
+ )
+ ```
+
+ ## Persistence
+
+ AutoChat uses LangGraph checkpointers for thread persistence. Pass a checkpointer with `persistence=...`, and LangGraph stores graph state by `thread_id`.
+
+ ```python
+ from langgraph.checkpoint.memory import InMemorySaver
+
+ chat = AutoChat(
+     config=ChatConfig(model=model),
+     persistence=InMemorySaver(),
+ )
+ ```
+
+ For production, swap `InMemorySaver` for a durable LangGraph saver.
+
+ ## Compression
+
+ Compression is optional. It runs before the model call, after persisted thread state has been loaded.
+
+ ```python
+ from autochat import AutoCompress, SummarizeAll
+ from langgraph.checkpoint.memory import InMemorySaver
+
+ chat = AutoChat(
+     config=ChatConfig(
+         model=model,
+         context_window=128_000,
+     ),
+     persistence=InMemorySaver(),
+     compression=AutoCompress(
+         at=0.6,
+         strategy=SummarizeAll(),
+     ),
+ )
+ ```
+
+ Available strategies:
+
+ - `SummarizeAll()`: summarize older history into one summary message
+ - `SummarizeLatestN(n=20)`: summarize only the latest `n` historical messages
+ - `KeepLatestN(n=20)`: keep only the latest `n` messages without summarizing
+
+ Summaries replace graph history using LangGraph message removal, so future turns see a compacted thread state.
+
+ ## Core Pieces
+
+ - `AutoChat`: public chat harness for invoke and stream workflows
+ - `ChatConfig`: model configuration and context-window metadata
+ - `ChatRuntime[TContext]`: per-run context passed through graph execution
+ - `ChatTool` / `@chat_tool`: LangChain-compatible and native context-aware tools
+ - `ChatRetriever`: LangChain-compatible and native context-aware retrievers
+ - `AutoCompress`: optional automatic thread compression
+ - `ChatGuideline`: lightweight reusable instruction primitive
+
+ ## Project Structure
+
+ A high-level map for contributors:
+
+ ```text
+ src/autochat/
+     chat.py          AutoChat public harness API
+     config.py        ChatConfig and model configuration
+     guidelines.py    Lightweight guideline primitives
+     runtime/         Invocation-scoped runtime context
+     tools/           ChatTool, @chat_tool, processor types
+     retrieval/       ChatRetriever, retrieval config, RAG strategies
+     compression/     AutoCompress and compression strategies
+     graph/           LangGraph state, builder, runtime wiring, execution
+     exceptions/      Library exception types
+
+ examples/
+     basic_tool_example.py                     Context-aware tool + processor
+     basic_chat_example.py                     AutoChat + model + tools
+     basic_retriever_example.py                AutoChat + retriever
+     basic_persistence_compression_example.py  Persistence + compression
+ ```
+
+ The intended dependency direction is:
+
+ ```text
+ AutoChat
+ -> graph
+ -> tools / retrieval / compression
+ -> runtime
+ ```
+
+ ## Contributing
+
+ The library is early and the API is still being shaped. Contributions should keep the surface area small, typed, and pleasant for application developers.
+
+ Before changing internals, run the examples when relevant:
+
+ ```bash
+ uv run python examples/basic_tool_example.py
+ uv run --dev python examples/basic_chat_example.py
+ uv run --dev python examples/basic_retriever_example.py
+ uv run --dev python examples/basic_persistence_compression_example.py
+ ```
+
+ Design preferences:
+
+ - async-first internally
+ - explicit runtime context
+ - native primitives with LangChain compatibility
+ - LangGraph persistence instead of custom thread storage
+ - minimal graph details in user-facing APIs
+
+ ## License
+
+ `autochat` is released under the MIT License. See [LICENSE](LICENSE).
@@ -0,0 +1,306 @@
+ # autochat
+
+ `autochat` is a small Python library for building context-aware chat applications on top of LangGraph and LangChain.
+
+ It gives your app a clean chat harness primitive:
+
+ ```python
+ from autochat import AutoChat
+
+ chat = AutoChat(...)
+ ```
+
+ Then you can invoke or stream the graph while passing your own runtime context into tools, retrievers, processors, and graph execution.
+
+ `autochat` is still under construction. It is not published to PyPI yet, but it will soon be installable as `autochat` with `pip`, `uv`, and other standard Python package managers.
+
+ ## Why
+
+ Most production chat apps need the same foundation:
+
+ - model and tool orchestration
+ - runtime context for auth, tenancy, request metadata, and app services
+ - context-aware tools and retrievers
+ - thread persistence
+ - optional history compression
+ - a simple async invoke/stream API
+
+ `autochat` packages those pieces into a small, typed, async-first interface.
+
+ ## Installation
+
+ For local development:
+
+ ```bash
+ uv sync
+ ```
+
+ For examples that use OpenAI models:
+
+ ```bash
+ uv sync --dev
+ ```
+
+ Future install flow:
+
+ ```bash
+ pip install autochat
+ ```
+
+ or:
+
+ ```bash
+ uv add autochat
+ ```
+
+ ## Quick Start
+
+ ```python
+ import asyncio
+ from dataclasses import dataclass
+
+ from langchain_openai import ChatOpenAI
+
+ from autochat import AutoChat, ChatConfig, ChatRuntime, chat_tool
+
+
+ @dataclass(frozen=True)
+ class AppContext:
+     user_id: str
+     plan: str
+
+
+ @chat_tool(name="current_plan")
+ async def current_plan(runtime: ChatRuntime[AppContext]) -> str:
+     return f"The user is on the {runtime.context.plan} plan."
+
+
+ async def main() -> None:
+     chat = AutoChat[AppContext](
+         config=ChatConfig(model=ChatOpenAI(model="gpt-5-nano")),
+         tools=[current_plan],
+         system_message="You are concise and practical.",
+     )
+
+     result = await chat.ainvoke(
+         "What plan am I on?",
+         thread_id="thread_123",
+         context=AppContext(user_id="user_1", plan="pro"),
+     )
+
+     print(result["messages"][-1].content)
+
+
+ if __name__ == "__main__":
+     asyncio.run(main())
+ ```
+
+ Run the included examples:
+
+ ```bash
+ uv run python examples/basic_tool_example.py
+ OPENAI_API_KEY=... uv run --dev python examples/basic_chat_example.py
+ OPENAI_API_KEY=... uv run --dev python examples/basic_retriever_example.py
+ OPENAI_API_KEY=... uv run --dev python examples/basic_persistence_compression_example.py
+ ```
+
+ ## Runtime Context
+
+ `ChatRuntime[TContext]` is created for each chat run and passed through the graph layer.
+
+ Use it to carry app-specific data like user IDs, org IDs, permissions, request metadata, database handles, or tenant config.
+
+ ```python
+ @dataclass(frozen=True)
+ class AppContext:
+     org_id: str
+     permissions: set[str]
+
+
+ @chat_tool(name="billing_status")
+ async def billing_status(runtime: ChatRuntime[AppContext]) -> str:
+     return f"Billing is active for {runtime.context.org_id}."
+ ```
+
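Conceptually, the runtime is just an immutable, typed wrapper around whatever context object your app builds per request. The pattern can be sketched in plain, standalone Python (illustrative only: `Runtime` here is a hypothetical stand-in, not the library's actual `ChatRuntime` implementation):

```python
import asyncio
from dataclasses import dataclass
from typing import Generic, TypeVar

TContext = TypeVar("TContext")


@dataclass(frozen=True)
class Runtime(Generic[TContext]):
    # Hypothetical stand-in for ChatRuntime: an immutable per-run context carrier.
    context: TContext


@dataclass(frozen=True)
class AppContext:
    org_id: str
    permissions: frozenset[str]


async def billing_status(runtime: Runtime[AppContext]) -> str:
    # The tool reads request-scoped data from the runtime instead of globals.
    return f"Billing is active for {runtime.context.org_id}."


runtime = Runtime(context=AppContext(org_id="org_1", permissions=frozenset({"billing.read"})))
reply = asyncio.run(billing_status(runtime))
```

Freezing both dataclasses keeps per-run state read-only, so concurrent chat runs cannot mutate each other's context.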
+ ## Tools
+
+ Use `@chat_tool` for native AutoChat tools. Function schemas are inferred from normal function parameters, and `runtime` is injected automatically.
+
+ ```python
+ @chat_tool(name="calculator")
+ async def calculator(
+     a: float,
+     b: float,
+     runtime: ChatRuntime[AppContext],
+ ) -> float:
+     return a + b
+ ```
+
+ You can also wrap LangChain tools:
+
+ ```python
+ from autochat import ChatTool
+
+ chat = AutoChat(
+     config=ChatConfig(model=model),
+     tools=[ChatTool(langchain_tool)],
+ )
+ ```
+
+ ## Tool Processors
+
+ Preprocessors and postprocessors wrap tool execution with app logic such as auth checks, input normalization, logging, or cleanup.
+
+ ```python
+ from autochat import ToolInvocation
+
+
+ def require(permission: str):
+     def processor(invocation: ToolInvocation[AppContext, object]) -> object:
+         if permission not in invocation.runtime.context.permissions:
+             raise PermissionError(f"Missing permission: {permission}")
+         return invocation.input
+
+     return processor
+
+
+ @chat_tool(name="billing_status", preprocessors=[require("billing.read")])
+ async def billing_status(runtime: ChatRuntime[AppContext]) -> str:
+     return "Billing is active."
+ ```
+
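The execution order behind this pattern can be sketched in plain Python: each preprocessor sees the invocation, may raise to block the call or return a (possibly rewritten) input, and only then does the tool body run. This is an illustrative sketch under that assumption, not the library's internals; `Invocation` and `run_tool` are hypothetical stand-ins for `ToolInvocation` and the real dispatch code:

```python
import asyncio
from dataclasses import dataclass
from typing import Any, Awaitable, Callable


@dataclass(frozen=True)
class Invocation:
    # Hypothetical stand-in for ToolInvocation: per-run permissions plus raw input.
    permissions: frozenset[str]
    input: Any


def require(permission: str) -> Callable[[Invocation], Any]:
    def processor(invocation: Invocation) -> Any:
        if permission not in invocation.permissions:
            raise PermissionError(f"Missing permission: {permission}")
        return invocation.input  # unchanged input flows to the next step
    return processor


async def run_tool(
    tool: Callable[[Any], Awaitable[Any]],
    invocation: Invocation,
    preprocessors: list[Callable[[Invocation], Any]],
) -> Any:
    # Each preprocessor may validate or rewrite the input before the tool body runs.
    data = invocation.input
    for pre in preprocessors:
        data = pre(Invocation(invocation.permissions, data))
    return await tool(data)


async def echo(data: Any) -> Any:
    return data


allowed = Invocation(frozenset({"billing.read"}), {"query": "status"})
result = asyncio.run(run_tool(echo, allowed, [require("billing.read")]))
```

Because each processor returns the input for the next one, the same chain supports both validation (raise) and normalization (return a transformed value).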
+ ## Retrieval
+
+ Retrievers are exposed to the model as callable retrieval tools. The model decides when to call them, and AutoChat executes the retriever with the current `ChatRuntime`.
+
+ ```python
+ from autochat import ChatRetriever, ChatRuntime
+
+
+ async def search_docs(query: str, runtime: ChatRuntime[AppContext]) -> list[str]:
+     return [f"Docs for {runtime.context.org_id}: {query}"]
+
+
+ chat = AutoChat[AppContext](
+     config=ChatConfig(model=model),
+     retrievers=[
+         ChatRetriever(
+             search_docs,
+             name="docs",
+             description="Search organization documentation.",
+         )
+     ],
+     system_message="Use the docs retriever for policy or product questions.",
+ )
+ ```
+
+ ## Persistence
+
+ AutoChat uses LangGraph checkpointers for thread persistence. Pass a checkpointer with `persistence=...`, and LangGraph stores graph state by `thread_id`.
+
+ ```python
+ from langgraph.checkpoint.memory import InMemorySaver
+
+ chat = AutoChat(
+     config=ChatConfig(model=model),
+     persistence=InMemorySaver(),
+ )
+ ```
+
+ For production, swap `InMemorySaver` for a durable LangGraph saver.
+
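The key behavior is that state is keyed by `thread_id`: a later call with the same id resumes the same conversation, while a new id starts from empty history. A toy sketch of that idea in plain Python (this is not LangGraph's checkpointer interface, just the keying concept):

```python
from collections import defaultdict


class ThreadStore:
    # Toy illustration of thread-keyed persistence; not LangGraph's checkpointer API.

    def __init__(self) -> None:
        self._threads: defaultdict[str, list[str]] = defaultdict(list)

    def load(self, thread_id: str) -> list[str]:
        # Unknown thread ids start with empty history.
        return list(self._threads[thread_id])

    def append(self, thread_id: str, message: str) -> None:
        self._threads[thread_id].append(message)


store = ThreadStore()
store.append("thread_123", "user: What plan am I on?")
store.append("thread_123", "ai: You are on the pro plan.")
history = store.load("thread_123")  # a later run with the same id resumes here
```

A durable saver replaces the in-process dict with a database, but the `thread_id` → state mapping stays the same.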
+ ## Compression
+
+ Compression is optional. It runs before the model call, after persisted thread state has been loaded.
+
+ ```python
+ from autochat import AutoCompress, SummarizeAll
+ from langgraph.checkpoint.memory import InMemorySaver
+
+ chat = AutoChat(
+     config=ChatConfig(
+         model=model,
+         context_window=128_000,
+     ),
+     persistence=InMemorySaver(),
+     compression=AutoCompress(
+         at=0.6,
+         strategy=SummarizeAll(),
+     ),
+ )
+ ```
+
+ Available strategies:
+
+ - `SummarizeAll()`: summarize older history into one summary message
+ - `SummarizeLatestN(n=20)`: summarize only the latest `n` historical messages
+ - `KeepLatestN(n=20)`: keep only the latest `n` messages without summarizing
+
+ Summaries replace graph history using LangGraph message removal, so future turns see a compacted thread state.
+
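The `at` value expresses a fraction of the context window: once estimated usage crosses it, the configured strategy rewrites history. A plain-Python sketch of a `KeepLatestN`-style policy under that reading (illustrative only; the library's real trigger and token counting may differ, and word count stands in for a tokenizer here):

```python
def maybe_compress(
    messages: list[str], *, context_window: int, at: float, keep: int
) -> list[str]:
    # Illustrative KeepLatestN-style policy; crude word count stands in for tokens.
    used = sum(len(m.split()) for m in messages)
    if used < at * context_window:
        return messages      # below the threshold: history is untouched
    return messages[-keep:]  # above it: keep only the newest `keep` messages


history = [f"message {i} padded with a few extra words" for i in range(100)]
trimmed = maybe_compress(history, context_window=500, at=0.6, keep=20)
```

A summarizing strategy would differ only in the over-threshold branch: instead of dropping the older messages, it would replace them with one summary message.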
+ ## Core Pieces
+
+ - `AutoChat`: public chat harness for invoke and stream workflows
+ - `ChatConfig`: model configuration and context-window metadata
+ - `ChatRuntime[TContext]`: per-run context passed through graph execution
+ - `ChatTool` / `@chat_tool`: LangChain-compatible and native context-aware tools
+ - `ChatRetriever`: LangChain-compatible and native context-aware retrievers
+ - `AutoCompress`: optional automatic thread compression
+ - `ChatGuideline`: lightweight reusable instruction primitive
+
+ ## Project Structure
+
+ A high-level map for contributors:
+
+ ```text
+ src/autochat/
+     chat.py          AutoChat public harness API
+     config.py        ChatConfig and model configuration
+     guidelines.py    Lightweight guideline primitives
+     runtime/         Invocation-scoped runtime context
+     tools/           ChatTool, @chat_tool, processor types
+     retrieval/       ChatRetriever, retrieval config, RAG strategies
+     compression/     AutoCompress and compression strategies
+     graph/           LangGraph state, builder, runtime wiring, execution
+     exceptions/      Library exception types
+
+ examples/
+     basic_tool_example.py                     Context-aware tool + processor
+     basic_chat_example.py                     AutoChat + model + tools
+     basic_retriever_example.py                AutoChat + retriever
+     basic_persistence_compression_example.py  Persistence + compression
+ ```
+
+ The intended dependency direction is:
+
+ ```text
+ AutoChat
+ -> graph
+ -> tools / retrieval / compression
+ -> runtime
+ ```
+
+ ## Contributing
+
+ The library is early and the API is still being shaped. Contributions should keep the surface area small, typed, and pleasant for application developers.
+
+ Before changing internals, run the examples when relevant:
+
+ ```bash
+ uv run python examples/basic_tool_example.py
+ uv run --dev python examples/basic_chat_example.py
+ uv run --dev python examples/basic_retriever_example.py
+ uv run --dev python examples/basic_persistence_compression_example.py
+ ```
+
+ Design preferences:
+
+ - async-first internally
+ - explicit runtime context
+ - native primitives with LangChain compatibility
+ - LangGraph persistence instead of custom thread storage
+ - minimal graph details in user-facing APIs
+
+ ## License
+
+ `autochat` is released under the MIT License. See [LICENSE](LICENSE).