pydantic-ai-rlm 0.1.0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,344 @@
Metadata-Version: 2.4
Name: pydantic-ai-rlm
Version: 0.1.0
Summary: Recursive Language Model (RLM) toolset for Pydantic AI - handle extremely large contexts
Author: Pydantic AI RLM Contributors
License-Expression: MIT
License-File: LICENSE
Keywords: ai,context,llm,machine-learning,pydantic,pydantic-ai,recursive,repl
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Typing :: Typed
Requires-Python: >=3.10
Requires-Dist: pydantic-ai>=0.1.0
Provides-Extra: dev
Requires-Dist: mypy>=1.0; extra == 'dev'
Requires-Dist: pytest-asyncio>=0.21; extra == 'dev'
Requires-Dist: pytest>=7.0; extra == 'dev'
Requires-Dist: ruff>=0.1; extra == 'dev'
Provides-Extra: logging
Requires-Dist: rich>=13.0; extra == 'logging'
Description-Content-Type: text/markdown

<h1 align="center">Pydantic AI RLM</h1>

<p align="center">
  <b>Handle Extremely Large Contexts with Any LLM Provider</b>
</p>

<p align="center">
  <a href="https://github.com/vstorm-co/pydantic-ai-rlm">GitHub</a> •
  <a href="https://github.com/vstorm-co/pydantic-ai-rlm#examples">Examples</a>
</p>

<p align="center">
  <a href="https://www.python.org/downloads/"><img src="https://img.shields.io/badge/python-3.10+-blue.svg" alt="Python 3.10+"></a>
  <a href="https://opensource.org/licenses/MIT"><img src="https://img.shields.io/badge/License-MIT-yellow.svg" alt="License: MIT"></a>
  <a href="https://github.com/pydantic/pydantic-ai"><img src="https://img.shields.io/badge/Powered%20by-Pydantic%20AI-E92063?logo=pydantic&logoColor=white" alt="Pydantic AI"></a>
</p>

<p align="center">
  <b>Switch Providers Instantly</b>
  &nbsp;•&nbsp;
  <b>Sandboxed Code Execution</b>
  &nbsp;•&nbsp;
  <b>Sub-Model Delegation</b>
  &nbsp;•&nbsp;
  <b>Fully Type-Safe</b>
</p>

---

## What is RLM?

**RLM (Recursive Language Model)** is a pattern for handling contexts that exceed a model's context window, introduced by **Alex L. Zhang, Tim Kraska, and Omar Khattab** in their paper [Recursive Language Models](https://arxiv.org/abs/2512.24601). Instead of trying to fit everything into one prompt, the LLM writes Python code to programmatically explore and analyze the data.

**The key insight:** An LLM can write code to search through millions of lines in seconds, then use `llm_query()` to delegate semantic analysis of relevant chunks to a sub-model.

This library is an implementation inspired by the [original minimal implementation](https://github.com/alexzhang13/rlm-minimal).

---

## Get Started in 60 Seconds

```bash
pip install pydantic-ai-rlm
```

```python
from pydantic_ai_rlm import run_rlm_analysis

answer = await run_rlm_analysis(
    context=massive_document,  # Can be millions of characters
    query="Find the magic number hidden in the text",
    model="openai:gpt-5",
    sub_model="openai:gpt-5-mini",
)
```

**That's it.** Your agent can now:

- Write Python code to analyze massive contexts
- Use `llm_query()` to delegate semantic analysis to sub-models
- Work with any Pydantic AI compatible provider

---

## Why pydantic-ai-rlm?

### Switch Providers Instantly

Because it is built on Pydantic AI, you can try any model with a single-line change:

```python
# OpenAI
agent = create_rlm_agent(model="openai:gpt-5", sub_model="openai:gpt-5-mini")

# Anthropic
agent = create_rlm_agent(model="anthropic:claude-sonnet-4-5", sub_model="anthropic:claude-haiku-4-5")

# Mix providers
agent = create_rlm_agent(model="anthropic:claude-sonnet-4-5", sub_model="openai:gpt-5-mini")
```

### Reusable Toolset

The RLM toolset integrates with any Pydantic AI agent:

```python
from pydantic_ai import Agent
from pydantic_ai_rlm import create_rlm_toolset

# Use the toolset in any agent
toolset = create_rlm_toolset(sub_model="openai:gpt-5-mini")
agent = Agent("openai:gpt-5", toolsets=[toolset])
```

---

## How It Works

```
┌───────────────────────────────────────────────────────────────┐
│                        pydantic-ai-rlm                        │
│                                                               │
│  ┌─────────────┐         ┌─────────────────────────────────┐  │
│  │  Main LLM   │         │   Sandboxed REPL Environment    │  │
│  │  (gpt-5)    │────────>│                                 │  │
│  └─────────────┘         │  context = <your massive data>  │  │
│         │                │                                 │  │
│         │                │  # LLM writes Python code:      │  │
│         │                │  for line in context.split():   │  │
│         │                │      if "magic" in line:        │  │
│         │                │          result = llm_query(    │  │
│         │                │              f"Analyze: {line}" │  │
│         │                │          )                      │  │
│         │                │                                 │  │
│         │                └───────────────┬─────────────────┘  │
│         │                                │                    │
│         │                                ▼                    │
│         │                       ┌─────────────────┐           │
│         │                       │     Sub LLM     │           │
│         │                       │  (gpt-5-mini)   │           │
│         │                       └─────────────────┘           │
│         ▼                                                     │
│  ┌─────────────┐                                              │
│  │   Answer    │                                              │
│  └─────────────┘                                              │
└───────────────────────────────────────────────────────────────┘
```

1. **Main LLM** receives the query and writes Python code
2. **REPL Environment** executes the code with access to the `context` variable
3. **llm_query()** delegates semantic analysis to a cheaper/faster sub-model
4. **Main LLM** synthesizes the final answer from the code execution results

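The four steps above can be sketched end to end with both models stubbed out. This is an illustration of the pattern, not this library's internals; in practice the "code the main model writes" and `llm_query()` are real LLM calls made by the agent:

```python
# A toy "massive" context: 10,000 filler lines plus one relevant line.
context = "\n".join(f"line {i}" for i in range(10_000)) + "\nthe magic number is 42"

def llm_query(prompt: str) -> str:
    """Stub for the sub-model; a real implementation would call a cheap LLM."""
    return f"sub-model saw: {prompt!r}"

# Step 2-3: code the main model might write. Cheap programmatic filtering
# first, semantic delegation only on the few lines that survive the filter.
hits = [line for line in context.splitlines() if "magic" in line]
analyses = [llm_query(f"Analyze: {line}") for line in hits]

print(len(hits))      # 1 -- only one line out of 10,001 reaches the sub-model
print(analyses[0])
```

The point of the pattern is visible in the counts: the main model never sees the full context, and the sub-model only sees the filtered slice.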
---

## Examples

### Needle in Haystack

Find specific information in massive text:

```python
from pydantic_ai_rlm import run_rlm_analysis

# 1 million lines of text with a hidden number
massive_text = generate_haystack(num_lines=1_000_000)

answer = await run_rlm_analysis(
    context=massive_text,
    query="Find the magic number hidden in the text",
    model="openai:gpt-5",
    sub_model="openai:gpt-5-mini",
)
```

### JSON Data Analysis

Works with structured data too:

```python
from pydantic_ai_rlm import create_rlm_agent, RLMDependencies

agent = create_rlm_agent(model="openai:gpt-5")

deps = RLMDependencies(
    context={"users": [...], "transactions": [...], "logs": [...]},
)

result = await agent.run(
    "Find all users with suspicious transaction patterns",
    deps=deps,
)
```

---

## API Reference

### `create_rlm_agent()`

Create a Pydantic AI agent with RLM capabilities.

```python
agent = create_rlm_agent(
    model="openai:gpt-5",            # Main model for orchestration
    sub_model="openai:gpt-5-mini",   # Model for llm_query() (optional)
    code_timeout=60.0,               # Timeout for code execution (seconds)
    custom_instructions="...",       # Additional instructions (optional)
)
```

### `create_rlm_toolset()`

Create a standalone RLM toolset for composition.

```python
toolset = create_rlm_toolset(
    code_timeout=60.0,
    sub_model="openai:gpt-5-mini",
)
```

### `run_rlm_analysis()` / `run_rlm_analysis_sync()`

Convenience functions for quick analysis.

```python
# Async
answer = await run_rlm_analysis(context, query, model="openai:gpt-5")

# Sync
answer = run_rlm_analysis_sync(context, query, model="openai:gpt-5")
```

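A sync wrapper like this typically just drives the async coroutine to completion on a fresh event loop. The sketch below shows that relationship with a placeholder in place of the real agent run (the actual body of `run_rlm_analysis_sync` may differ); the key constraint is that such a wrapper cannot be called from code that is already inside a running event loop:

```python
import asyncio

async def run_analysis(context: str, query: str) -> str:
    # Placeholder for the real async agent run.
    return f"answer for {query!r} over {len(context)} chars"

def run_analysis_sync(context: str, query: str) -> str:
    # asyncio.run() creates an event loop, runs the coroutine, and tears
    # the loop down -- hence the "not from inside a running loop" rule.
    return asyncio.run(run_analysis(context, query))

print(run_analysis_sync("some text", "find the number"))
```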
### `RLMDependencies`

Dependencies for RLM agents.

```python
deps = RLMDependencies(
    context="...",  # str, dict, or list
    config=RLMConfig(
        code_timeout=60.0,
        truncate_output_chars=50_000,
        sub_model="openai:gpt-5-mini",
    ),
)
```

### `configure_logging()`

Enable verbose logging to see what the agent is doing in real time.

```python
from pydantic_ai_rlm import configure_logging, run_rlm_analysis

# Enable logging (uses rich if installed, falls back to plain text)
configure_logging(enabled=True)

# Now you'll see code executions and outputs in the terminal
answer = await run_rlm_analysis(
    context=massive_document,
    query="Find the magic number",
    model="openai:gpt-5",
)

# Disable logging when done
configure_logging(enabled=False)
```

Install with rich logging support for syntax highlighting and styled output:

```bash
pip install "pydantic-ai-rlm[logging]"
```

Or install rich separately:

```bash
pip install rich
```

When enabled, you'll see:

- Syntax-highlighted code being executed (with rich)
- Execution results with status indicators (SUCCESS/ERROR)
- Execution time for each code block
- Variables created during execution
- LLM sub-queries and responses (when using `llm_query()`)

**Note:** Logging works without rich installed; it falls back to plain-text output instead of styled panels.

---

## REPL Environment

The sandboxed REPL provides:

| Feature | Description |
|---------|-------------|
| `context` variable | Your data loaded and ready to use |
| `llm_query(prompt)` | Delegate to the sub-model (if configured) |
| Safe built-ins | `print`, `len`, `range`, etc. |
| Common imports | `json`, `re`, `collections`, etc. |
| Persistent state | Variables persist across executions |
| Output capture | stdout/stderr returned to the agent |

**Blocked for security:** `eval`, `exec`, `compile`, `open` (outside the temp dir)

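The general technique behind this kind of sandboxing is to execute code against an allow-list of built-ins, so blocked names are simply undefined inside the namespace. This is a minimal sketch of that idea, not this library's actual `repl.py` (and restricting `__builtins__` alone is not a hardened sandbox; a real implementation layers timeouts and other checks on top):

```python
import builtins

# Allow-list: only these built-ins exist inside the sandbox.
SAFE_BUILTINS = {name: getattr(builtins, name)
                 for name in ("print", "len", "range", "sorted")}

def run_sandboxed(code: str, context: str) -> dict:
    """Execute code with a minimal namespace; return the variables it created."""
    namespace = {"__builtins__": SAFE_BUILTINS, "context": context}
    exec(code, namespace)  # the sandbox host itself must call exec
    namespace.pop("__builtins__")
    return namespace

vars_out = run_sandboxed("n = len(context)", "hello world")
print(vars_out["n"])  # 11 -- allowed built-ins work, state is returned

try:
    run_sandboxed("open('/etc/passwd')", "")
except NameError as err:
    print("blocked:", err)  # open is not in the allow-list
```

Persistent state across executions, as in the table above, would follow from reusing the same `namespace` dict between `exec` calls instead of building a fresh one.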
---

## Related Projects

- **[rlm](https://github.com/alexzhang13/rlm)** - Original RLM implementation by Alex L. Zhang, Tim Kraska, and Omar Khattab
- **[rlm-minimal](https://github.com/alexzhang13/rlm-minimal)** - Minimal RLM implementation by Alex L. Zhang
- **[pydantic-ai](https://github.com/pydantic/pydantic-ai)** - The foundation: agent framework by Pydantic
- **[pydantic-deep](https://github.com/vstorm-co/pydantic-deepagents)** - Full deep agent framework with planning, filesystem, and more

---

## Contributing

```bash
git clone https://github.com/vstorm-co/pydantic-ai-rlm.git
cd pydantic-ai-rlm
pip install -e ".[dev]"
pytest
```

---

## License

MIT. See [LICENSE](LICENSE).

<p align="center">
  <sub>Built with Pydantic AI by <a href="https://github.com/vstorm-co">vstorm-co</a></sub>
</p>
@@ -0,0 +1,13 @@
pydantic_ai_rlm/__init__.py,sha256=94ev1VspkrVdB-PabCD6I791vZkBVASXv_qqneYQQ0I,794
pydantic_ai_rlm/agent.py,sha256=_zHBLatfhquGw3vjHC06b0lBZRpLv2nrxi7w69f7lBY,4824
pydantic_ai_rlm/dependencies.py,sha256=bzPXhCRJ-0D5N0FS32LAETXxp6cbInU00qaMr0qd_iQ,1306
pydantic_ai_rlm/logging.py,sha256=DXJzfrfOnyR3VAbKF4_0Mc2MP7zqPmxyFDph4Q9kAb0,9308
pydantic_ai_rlm/prompts.py,sha256=BpbugghZ_I_IjC_Il5pUClMG7bgWfzWgtsUsxwyjH10,3648
pydantic_ai_rlm/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
pydantic_ai_rlm/repl.py,sha256=twn6ArhaONYg49Feoi-sD6z0HkYHyHHc-prdBCsj0jo,14864
pydantic_ai_rlm/toolset.py,sha256=g1244qq2VzaQVhAQixLGPCWW7vJqQfjhUgf0kDxkkzo,5564
pydantic_ai_rlm/utils.py,sha256=_tqdMcD-_UbRbdK6ECD-XlXfRHgeAKj7ulLaXrYOJx4,1500
pydantic_ai_rlm-0.1.0.dist-info/METADATA,sha256=2w1yJoJGl5HHCWcl6JRz5pDys0o4iBTLcWSsXuGt8nk,11613
pydantic_ai_rlm-0.1.0.dist-info/WHEEL,sha256=WLgqFyCfm_KASv4WHyYy0P3pM_m7J5L9k2skdKLirC8,87
pydantic_ai_rlm-0.1.0.dist-info/licenses/LICENSE,sha256=ktSo7DxQBRIZIhD4P53iVxq88CG3cvXRTBRqPT8zO1c,1085
pydantic_ai_rlm-0.1.0.dist-info/RECORD,,
@@ -0,0 +1,4 @@
Wheel-Version: 1.0
Generator: hatchling 1.28.0
Root-Is-Purelib: true
Tag: py3-none-any
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2024 Pydantic AI RLM Contributors

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.