ai-rlm 0.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/LICENSE ADDED
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2025 Joe Hsu
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
package/README.md ADDED
@@ -0,0 +1,366 @@
# RLM (Recursive Language Model) - TypeScript Implementation

A TypeScript implementation of the RLM (Recursive Language Model) inference strategy using the Vercel AI SDK.

Based on the paper "Recursive Language Models" by Zhang, Kraska, and Khattab (2025).

## Overview

RLM is an inference strategy in which the LLM treats a long context as part of an external environment rather than receiving it directly in its prompt. The LLM writes JavaScript code to programmatically examine, decompose, and recursively call sub-LLMs over snippets of the context.

### Key Features

- **Iterative Code Execution**: The model writes JavaScript code, sees the output, then writes more code
- **Sub-LLM Queries**: Access to `llm_query()` and `llm_query_batched()` for semantic analysis
- **Context Management**: Efficient handling of large contexts through chunking
- **Sandboxed REPL**: JavaScript execution in a sandboxed `node:vm` context
- **AI SDK Integration**: Works as an Agent or Tool with the Vercel AI SDK
- **Multiple Usage Patterns**: Use as a standalone agent or as a tool in larger workflows

## Installation

```bash
npm install ai-rlm @ai-sdk/openai
```

The `model` and `subModel` settings accept any AI SDK `LanguageModel`, so you can use any provider ([OpenAI](https://sdk.vercel.ai/providers/ai-sdk-providers/openai), [Anthropic](https://sdk.vercel.ai/providers/ai-sdk-providers/anthropic), [Google](https://sdk.vercel.ai/providers/ai-sdk-providers/google-generative-ai), etc.).

## Usage

### As Agent (Recommended)

The **RLMAgent** class provides a clean, agent-based API that integrates seamlessly with the AI SDK:

```typescript
import { RLMAgent } from 'ai-rlm';
import { openai } from '@ai-sdk/openai';

// Create agent
const agent = new RLMAgent({
  model: openai('gpt-4.1'),         // Root agent model
  subModel: openai('gpt-4.1-mini'), // Sub-LLM model for queries
  maxIterations: 20,                // Max REPL iterations
  maxLLMCalls: 50,                  // Max sub-LLM calls
});

// Process a context
const context = `
The quick brown fox jumps over the lazy dog.
The magic number is 42.
`;

const query = 'What is the magic number?';

const result = await agent.generate({
  context,
  query,
});

console.log('Answer:', result.text);
console.log('Iterations:', result.iterations);
console.log('LLM Calls:', result.llmCallCount);
console.log('Steps:', result.steps); // Full trajectory
```

### As Tool

Use **createRLMTool** to create an AI SDK-compatible tool for use with `generateText` or `ToolLoopAgent`:

```typescript
import { createRLMTool } from 'ai-rlm';
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

// Create the tool
const rlmTool = createRLMTool({
  model: openai('gpt-4.1'),
  subModel: openai('gpt-4.1-mini'),
});

// Use in generateText
const result = await generateText({
  model: openai('gpt-4.1'),
  tools: { analyzeLargeContext: rlmTool },
  prompt: 'Analyze this large codebase for security vulnerabilities',
});
```

### With ToolLoopAgent

```typescript
import { ToolLoopAgent } from 'ai';
import { createRLMTool } from 'ai-rlm';
import { openai } from '@ai-sdk/openai';

const agent = new ToolLoopAgent({
  model: openai('gpt-4.1'),
  tools: {
    analyzeLargeContext: createRLMTool({
      model: openai('gpt-4.1'),
      subModel: openai('gpt-4.1-mini'),
    }),
    // ... other tools
  },
});

const result = await agent.generate({
  prompt: 'Check this document for compliance issues',
});
```

### Streaming Support

```typescript
const stream = await agent.stream({
  context: largeDocument,
  query: 'Analyze this',
});

// Read from the stream
const reader = stream.textStream.getReader();
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  process.stdout.write(value);
}
```
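
On runtimes where web `ReadableStream` is async-iterable (Node 18+, Bun), the reader loop above can be written more compactly with `for await`. A self-contained sketch; the hand-built stream below is a stand-in for `stream.textStream`, which this package exposes as a `ReadableStream<string>`:

```typescript
import { ReadableStream } from "node:stream/web";

// Stand-in for stream.textStream: a ReadableStream that emits two chunks.
const textStream = new ReadableStream<string>({
  start(controller) {
    for (const chunk of ["Hello, ", "world"]) controller.enqueue(chunk);
    controller.close();
  },
});

// Consume the stream without managing a reader by hand.
let text = "";
for await (const chunk of textStream) {
  text += chunk; // or: process.stdout.write(chunk)
}
```

The explicit `getReader()` loop remains the portable choice if you target environments where streams are not async-iterable.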

## How It Works

The RLM agent writes JavaScript code to explore the context in an iterative loop:

```javascript
// First, explore the context
console.log('Context length:', context.length);
console.log('First 200 chars:', context.substring(0, 200));

// Search for specific patterns
const lines = context.split('\n');
const targetLine = lines.find(line => line.includes('magic number'));
console.log('Found:', targetLine);

// Store result for later
const answer = targetLine?.match(/magic number is (\d+)/)?.[1];

// Submit the answer by variable name
FINAL_VAR('answer')
```

1. **Context Loading**: The context is loaded into a sandboxed JavaScript REPL environment
2. **Iterative Reasoning**: The root LLM writes JavaScript code to explore the context
3. **Code Execution**: Code is executed in a `node:vm` sandbox with a 30s timeout
4. **Sub-LLM Queries**: For semantic analysis, `llm_query()` delegates to a sub-model
5. **Result Accumulation**: The model iterates until it finds an answer
6. **Final Answer**: The model submits an answer using `FINAL(answer)` or `FINAL_VAR(variable_name)`
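
Steps 1-3 can be approximated in a few lines with nothing but `node:vm`. This is an illustrative sketch of the execution model, not the package's actual REPL (which also rewrites `llm_query` markers into real model calls and enforces output limits); `runIteration` is a hypothetical helper:

```typescript
import vm from "node:vm";

// One simplified REPL iteration: load the context into a sandbox,
// run model-written code, and capture console output as a string.
function runIteration(context: string, code: string): string {
  const lines: string[] = [];
  const sandbox = {
    context,
    console: {
      log: (...args: unknown[]) => lines.push(args.map(String).join(" ")),
      error: (...args: unknown[]) => lines.push(args.map(String).join(" ")),
    },
  };
  vm.createContext(sandbox);
  // The package uses a 30s timeout; 1s is plenty for a sketch.
  vm.runInContext(code, sandbox, { timeout: 1000 });
  return lines.join("\n");
}

const output = runIteration(
  "The magic number is 42.",
  "console.log('length:', context.length);"
);
```

The captured output is what gets appended to the conversation so the root model can decide what code to write next.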

### System Prompt

The RLM system prompt instructs the model to:
- EXPLORE FIRST - Look at data before processing
- ITERATE - Write small code snippets, observe outputs
- VERIFY BEFORE SUBMITTING - Check results are correct
- USE llm_query FOR SEMANTICS - Code finds WHERE; LLM understands WHAT
- CHUNK SMARTLY - Feed substantial chunks to sub-LLMs (~500K chars)
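
The chunking guideline amounts to a plain string splitter. A minimal sketch; `chunkContext`, the 500K chunk size, and the overlap value are illustrative, not ai-rlm internals:

```typescript
// Split a long context into pieces suitable for llm_query_batched().
// A small overlap keeps matches at chunk boundaries from being lost.
function chunkContext(
  text: string,
  chunkSize = 500_000,
  overlap = 1_000
): string[] {
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
    start += chunkSize - overlap;
  }
  return chunks;
}

const chunks = chunkContext("a".repeat(1_200_000));
// Three overlapping chunks cover the 1.2M-char input.
```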

## REPL Sandbox

The JavaScript REPL runs code in a `node:vm` sandboxed context:

### Available in the Sandbox:

- **`context`**: The input context (string or object)
- **`console.log()` / `console.error()`**: Output logging
- **`llm_query(prompt)`**: Query a sub-LLM for semantic analysis
- **`llm_query_batched(prompts)`**: Query multiple sub-LLMs
- **`FINAL(answer)`**: Submit final answer directly
- **`FINAL_VAR(varName)`**: Submit a variable from the REPL
- **Standard JavaScript**: All ES6+ features, Array methods, String methods, Math, JSON, etc.

### Security Features:

- 30-second timeout on code execution
- No access to Node.js built-in modules or the file system
- No network access
- Sandboxed console output capture

## API Reference

### RLMAgent

The primary class for using RLM as an agent.

#### `constructor(settings: RLMAgentSettings)`

```typescript
import type { LanguageModel } from 'ai';

interface RLMAgentSettings {
  model: LanguageModel;     // Required: Root agent model
  subModel?: LanguageModel; // Optional: Sub-LLM model (defaults to model)
  maxIterations?: number;   // Max REPL iterations (default: 20)
  maxLLMCalls?: number;     // Max sub-LLM calls (default: 50)
  maxOutputChars?: number;  // Max REPL output chars (default: 100000)
  verbose?: boolean;        // Enable verbose logging (default: false)
}
```

#### `async generate(options): Promise<RLMGenerateResult>`

Generate an answer by iteratively analyzing the context.

**Parameters:**
```typescript
interface RLMAgentCallParameters {
  context: RLMContext;        // The large context to analyze
  query: string;              // The question or task
  abortSignal?: AbortSignal;  // Optional abort signal
  timeout?: number;           // Optional timeout in ms
  onStepFinish?: (step: REPLStep) => void | Promise<void>; // Callback for each step
}
```

**Returns:**
```typescript
interface RLMGenerateResult {
  text: string;         // The generated answer
  steps: REPLStep[];    // Array of REPL steps taken
  llmCallCount: number; // Total LLM calls made
  iterations: number;   // Total iterations performed
}

interface REPLStep {
  iteration: number;
  reasoning: string; // The model's reasoning before code
  code: string;      // JavaScript code executed
  output: string;    // Console output and results
}
```
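
For example, a trajectory from `result.steps` can be summarized with plain string formatting (hypothetical `formatTrajectory` helper, shown over the `REPLStep` shape above):

```typescript
interface REPLStep {
  iteration: number;
  reasoning: string;
  code: string;
  output: string;
}

// Render a compact, human-readable trajectory from RLMGenerateResult.steps.
function formatTrajectory(steps: REPLStep[]): string {
  return steps
    .map((s) => `#${s.iteration}: ${s.reasoning.slice(0, 60)}\n  out: ${s.output.slice(0, 60)}`)
    .join("\n");
}

const summary = formatTrajectory([
  {
    iteration: 1,
    reasoning: "Explore context",
    code: "console.log(context.length)",
    output: "23",
  },
]);
```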

#### `async stream(options): Promise<RLMStreamResult>`

Stream the answer generation process.

**Returns:**
```typescript
interface RLMStreamResult extends RLMGenerateResult {
  textStream: ReadableStream<string>; // Readable stream of text
}
```

### createRLMTool

Factory function to create RLM as an AI SDK-compatible tool.

#### `createRLMTool(config: RLMToolConfig)`

```typescript
import type { LanguageModel } from 'ai';

function createRLMTool(config: {
  model?: LanguageModel;    // Root agent model
  subModel?: LanguageModel; // Sub-LLM model
  maxIterations?: number;   // Max iterations (default: 20)
  maxLLMCalls?: number;     // Max LLM calls (default: 50)
  maxOutputChars?: number;  // Max output chars (default: 100000)
}): Tool
```

**Tool Input Schema:**
```typescript
{
  context: string | string[] | Record<string, unknown>;
  query: string;
  maxIterations?: number; // Optional override
  maxLLMCalls?: number;   // Optional override
}
```

**Tool Output:**
```typescript
{
  answer: string;     // The generated answer
  iterations: number; // Number of iterations
  stepsTaken: number; // Number of steps executed
}
```

### RLMContext

Context can be any of these formats:
```typescript
type RLMContext = string | string[] | Record<string, unknown>;
```

- `string`: Raw text document
- `string[]`: Array of lines or documents
- `Record<string, unknown>`: JSON/structured data
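
All three shapes can be flattened to text with a small normalizer. A hypothetical sketch for callers who want a single string up front (the package itself exposes `context` to the sandbox as a string or object, without forcing stringification):

```typescript
type RLMContext = string | string[] | Record<string, unknown>;

// Flatten any accepted context shape into one string.
// Arrays are joined line by line; objects are pretty-printed as JSON.
function contextToString(context: RLMContext): string {
  if (typeof context === "string") return context;
  if (Array.isArray(context)) return context.join("\n");
  return JSON.stringify(context, null, 2);
}

const flat = contextToString(["line one", "line two"]);
```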

## Architecture

```
┌─────────────────────────────────────────────────────────────┐
│                       RLMAgent Class                        │
├─────────────────────────────────────────────────────────────┤
│  ┌───────────────────────────────────────────────────────┐  │
│  │  REPL Environment (node:vm)                           │  │
│  │  - Sandboxed JavaScript execution                     │  │
│  │  - llm_query() for sub-LLM semantic analysis          │  │
│  │  - 30s timeout protection                             │  │
│  └───────────────────────────────────────────────────────┘  │
│                                                             │
│  ┌───────────────────────────────────────────────────────┐  │
│  │  generate() Method                                    │  │
│  │  1. Generate reasoning + JS code                      │  │
│  │  2. Execute in sandboxed context                      │  │
│  │  3. Process llm_query markers → real LLM calls        │  │
│  │  4. Check for FINAL() answer                          │  │
│  │  5. Repeat or return answer                           │  │
│  └───────────────────────────────────────────────────────┘  │
│                                                             │
│  ┌───────────────────────────────────────────────────────┐  │
│  │  stream() Method                                      │  │
│  │  - Same as generate() with streaming                  │  │
│  │  - Returns ReadableStream for real-time output        │  │
│  └───────────────────────────────────────────────────────┘  │
└─────────────────────────────────────────────────────────────┘
                              │
                              │  createRLMTool()
                              ▼
                   ┌──────────────────────┐
                   │  AI SDK Tool         │
                   │  - Tool interface    │
                   │  - Input validation  │
                   │  - Auto-execution    │
                   └──────────────────────┘
```

## Examples

Run the examples:

```bash
# Basic agent examples
bun run examples/basic-usage.ts

# Tool integration examples
bun run examples/tool-usage.ts

# Individual examples
bun -e "import { example1SimpleTextSearch } from './examples/basic-usage.ts'; example1SimpleTextSearch()"
```

### Example Files

- **`examples/basic-usage.ts`**: Agent API examples (generate, stream, callbacks)
- **`examples/tool-usage.ts`**: Tool API examples (with generateText, ToolLoopAgent)
- **`examples/document-comparison.ts`**: Document diffing example
- **`examples/data-transformation.ts`**: Data extraction and transformation

## License

MIT

## References

- Paper: "Recursive Language Models" (Zhang, Kraska, Khattab, 2025)
- AI SDK Documentation: https://sdk.vercel.ai/docs
package/dist/index.d.ts ADDED
@@ -0,0 +1,12 @@
/**
 * AI RLM - Recursive Language Model for the Vercel AI SDK
 *
 * @module ai-rlm
 * @description Process long contexts through iterative code execution and sub-LLM queries
 */
export { RLMAgent } from "./rlm.js";
export type { RLMAgentSettings, RLMAgentCallParameters, REPLStep, RLMGenerateResult, RLMStreamResult, RLMResult, RLMContext } from "./rlm.js";
export { createRLMTool } from "./rlm-tool.js";
export type { RLMToolConfig } from "./rlm-tool.js";
export { RLMAgent as default } from "./rlm.js";
//# sourceMappingURL=index.d.ts.map
package/dist/index.js ADDED
@@ -0,0 +1,13 @@
/**
 * AI RLM - Recursive Language Model for the Vercel AI SDK
 *
 * @module ai-rlm
 * @description Process long contexts through iterative code execution and sub-LLM queries
 */
// Main exports
export { RLMAgent } from "./rlm.js";
// Tool exports
export { createRLMTool } from "./rlm-tool.js";
// Default export
export { RLMAgent as default } from "./rlm.js";
//# sourceMappingURL=index.js.map
package/dist/rlm-tool.d.ts ADDED
@@ -0,0 +1,128 @@
/**
 * RLM Tool - Tool Factory for Recursive Language Model
 *
 * This module provides a factory function to create RLM tools that can be used
 * with the Vercel AI SDK's generateText, ToolLoopAgent, or any tool-compatible API.
 */
import { type LanguageModel } from "ai";
import { type RLMAgentSettings } from "./rlm.js";
/**
 * Configuration options for the RLM tool
 */
export interface RLMToolConfig extends Partial<RLMAgentSettings> {
  /** Model for the root agent (defaults to "gpt-4.1") */
  model?: LanguageModel;
  /** Model for sub-LLM queries (defaults to model) */
  subModel?: LanguageModel;
  /** Maximum iterations for the REPL loop (default: 20) */
  maxIterations?: number;
  /** Maximum sub-LLM calls per execution (default: 50) */
  maxLLMCalls?: number;
  /** Maximum characters in REPL output (default: 100000) */
  maxOutputChars?: number;
}
/**
 * Create an RLM tool for use with AI SDK functions.
 *
 * This tool allows models to analyze large contexts iteratively using JavaScript
 * code execution and sub-LLM queries. It's particularly useful for:
 * - Searching through large documents
 * - Analyzing datasets that exceed context limits
 * - Extracting structured information from unstructured data
 * - Performing complex multi-step analysis
 *
 * @example
 * ```typescript
 * import { createRLMTool } from './rlm-tool.js';
 * import { generateText } from 'ai';
 *
 * const rlmTool = createRLMTool({
 *   model: 'gpt-4.1',
 *   subModel: 'gpt-4.1-mini',
 * });
 *
 * const result = await generateText({
 *   model: 'gpt-4.1',
 *   tools: { analyzeLargeContext: rlmTool },
 *   prompt: 'Find all security issues in this codebase',
 * });
 * ```
 */
export declare function createRLMTool(config: RLMToolConfig): import("ai").Tool<{
  context: string | string[] | Record<string, unknown>;
  query: string;
  maxIterations?: number | undefined;
  maxLLMCalls?: number | undefined;
}, {
  answer: string;
  iterations: number;
  stepsTaken: number;
}>;
/**
 * Create a pre-configured RLM tool optimized for code analysis.
 *
 * @example
 * ```typescript
 * // Create a tool optimized for code analysis
 * const codeAnalyzer = createRLMToolForCodeAnalysis({
 *   model: 'gpt-4.1',
 *   subModel: 'gpt-4.1-mini',
 *   maxIterations: 30,
 *   maxLLMCalls: 50,
 * });
 * ```
 */
export declare function createRLMToolForCodeAnalysis(config: RLMToolConfig): import("ai").Tool<{
  context: string | string[] | Record<string, unknown>;
  query: string;
  maxIterations?: number | undefined;
  maxLLMCalls?: number | undefined;
}, {
  answer: string;
  iterations: number;
  stepsTaken: number;
}>;
/**
 * Create a pre-configured RLM tool optimized for log analysis.
 *
 * @example
 * ```typescript
 * const logAnalyzer = createRLMToolForLogAnalysis({
 *   model: 'gpt-4.1',
 *   maxIterations: 25,
 * });
 * ```
 */
export declare function createRLMToolForLogAnalysis(config: RLMToolConfig): import("ai").Tool<{
  context: string | string[] | Record<string, unknown>;
  query: string;
  maxIterations?: number | undefined;
  maxLLMCalls?: number | undefined;
}, {
  answer: string;
  iterations: number;
  stepsTaken: number;
}>;
/**
 * Create a pre-configured RLM tool optimized for document search.
 *
 * @example
 * ```typescript
 * const documentSearcher = createRLMToolForDocumentSearch({
 *   model: 'gpt-4.1',
 *   maxIterations: 20,
 * });
 * ```
 */
export declare function createRLMToolForDocumentSearch(config: RLMToolConfig): import("ai").Tool<{
  context: string | string[] | Record<string, unknown>;
  query: string;
  maxIterations?: number | undefined;
  maxLLMCalls?: number | undefined;
}, {
  answer: string;
  iterations: number;
  stepsTaken: number;
}>;
export default createRLMTool;
//# sourceMappingURL=rlm-tool.d.ts.map
package/dist/rlm-tool.js ADDED
@@ -0,0 +1,167 @@
/**
 * RLM Tool - Tool Factory for Recursive Language Model
 *
 * This module provides a factory function to create RLM tools that can be used
 * with the Vercel AI SDK's generateText, ToolLoopAgent, or any tool-compatible API.
 */
import { tool } from "ai";
import { z } from "zod";
import { RLMAgent } from "./rlm.js";
/**
 * Create an RLM tool for use with AI SDK functions.
 *
 * This tool allows models to analyze large contexts iteratively using JavaScript
 * code execution and sub-LLM queries. It's particularly useful for:
 * - Searching through large documents
 * - Analyzing datasets that exceed context limits
 * - Extracting structured information from unstructured data
 * - Performing complex multi-step analysis
 *
 * @example
 * ```typescript
 * import { createRLMTool } from './rlm-tool.js';
 * import { generateText } from 'ai';
 *
 * const rlmTool = createRLMTool({
 *   model: 'gpt-4.1',
 *   subModel: 'gpt-4.1-mini',
 * });
 *
 * const result = await generateText({
 *   model: 'gpt-4.1',
 *   tools: { analyzeLargeContext: rlmTool },
 *   prompt: 'Find all security issues in this codebase',
 * });
 * ```
 */
export function createRLMTool(config) {
  return tool({
    description: `Analyze large contexts iteratively using JavaScript code execution.

Use this tool when you need to:
- Search through or analyze documents/datasets too large for direct processing
- Extract structured information from unstructured text
- Perform complex multi-step analysis requiring code execution
- Answer questions about large codebases, logs, or datasets

The tool will:
1. Write JavaScript code to explore the context
2. Execute the code in a secure sandbox (node:vm)
3. Use sub-LLM calls for semantic understanding when needed
4. Iterate until it finds the answer

Provide the context (string, array of strings, or JSON object) and your query/question.`,
    inputSchema: z.object({
      context: z
        .union([
          z.string().describe("Text document or content to analyze"),
          z.array(z.string()).describe("Array of text lines or documents"),
          z.record(z.unknown()).describe("JSON object or structured data"),
        ])
        .describe("The large context, document, or dataset to analyze"),
      query: z
        .string()
        .describe("The specific question, task, or instruction to perform on the context. Be clear and specific."),
      maxIterations: z
        .number()
        .min(1)
        .max(100)
        .optional()
        .describe("Maximum number of REPL iterations (default: 20)"),
      maxLLMCalls: z
        .number()
        .min(1)
        .max(200)
        .optional()
        .describe("Maximum sub-LLM calls allowed (default: 50)"),
    }),
    execute: async ({ context, query, maxIterations, maxLLMCalls }, { abortSignal }) => {
      // Create RLMAgent with merged config
      const agent = new RLMAgent({
        model: config.model ?? "gpt-4.1",
        subModel: config.subModel,
        maxIterations: maxIterations ?? config.maxIterations ?? 20,
        maxLLMCalls: maxLLMCalls ?? config.maxLLMCalls ?? 50,
        maxOutputChars: config.maxOutputChars ?? 100000,
        verbose: false,
      });
      // Execute the analysis
      const result = await agent.generate({
        context,
        query,
        abortSignal,
      });
      // Return just the essential information
      return {
        answer: result.text,
        iterations: result.iterations,
        stepsTaken: result.steps.length,
      };
    },
  });
}
/**
 * Create a pre-configured RLM tool optimized for code analysis.
 *
 * @example
 * ```typescript
 * // Create a tool optimized for code analysis
 * const codeAnalyzer = createRLMToolForCodeAnalysis({
 *   model: 'gpt-4.1',
 *   subModel: 'gpt-4.1-mini',
 *   maxIterations: 30,
 *   maxLLMCalls: 50,
 * });
 * ```
 */
export function createRLMToolForCodeAnalysis(config) {
  return createRLMTool({
    model: config.model,
    subModel: config.subModel,
    maxIterations: config.maxIterations ?? 30,
    maxLLMCalls: config.maxLLMCalls ?? 50,
    maxOutputChars: config.maxOutputChars ?? 100000,
  });
}
/**
 * Create a pre-configured RLM tool optimized for log analysis.
 *
 * @example
 * ```typescript
 * const logAnalyzer = createRLMToolForLogAnalysis({
 *   model: 'gpt-4.1',
 *   maxIterations: 25,
 * });
 * ```
 */
export function createRLMToolForLogAnalysis(config) {
  return createRLMTool({
    model: config.model ?? "gpt-4.1",
    subModel: config.subModel ?? "gpt-4.1-mini",
    maxIterations: config.maxIterations ?? 25,
    maxLLMCalls: config.maxLLMCalls ?? 30,
    maxOutputChars: config.maxOutputChars ?? 100000,
  });
}
/**
 * Create a pre-configured RLM tool optimized for document search.
 *
 * @example
 * ```typescript
 * const documentSearcher = createRLMToolForDocumentSearch({
 *   model: 'gpt-4.1',
 *   maxIterations: 20,
 * });
 * ```
 */
export function createRLMToolForDocumentSearch(config) {
  return createRLMTool({
    model: config.model ?? "gpt-4.1",
    subModel: config.subModel ?? "gpt-4.1-mini",
    maxIterations: config.maxIterations ?? 20,
    maxLLMCalls: config.maxLLMCalls ?? 20,
    maxOutputChars: config.maxOutputChars ?? 100000,
  });
}
export default createRLMTool;
//# sourceMappingURL=rlm-tool.js.map
package/dist/rlm.d.ts ADDED
@@ -0,0 +1,99 @@
1
+ /**
2
+ * RLM (Recursive Language Model) - TypeScript Implementation
3
+ *
4
+ * Based on the paper "Recursive Language Models" (Zhang, Kraska, Khattab, 2025)
5
+ * Uses the Vercel AI SDK for the implementation.
6
+ *
7
+ * REPL Environment: JavaScript with node:vm sandbox
8
+ */
9
+ import type { LanguageModel } from "ai";
10
+ /**
11
+ * Settings for RLMAgent
12
+ */
13
+ export interface RLMAgentSettings {
14
+ /** Model for the root agent */
15
+ model: LanguageModel;
16
+ /** Model for sub-LLM queries (defaults to model) */
+ subModel?: LanguageModel;
+ /** Maximum iterations for the REPL loop (default: 20) */
+ maxIterations?: number;
+ /** Maximum sub-LLM calls per execution (default: 50) */
+ maxLLMCalls?: number;
+ /** Maximum characters in REPL output (default: 100000) */
+ maxOutputChars?: number;
+ /** Enable verbose logging (default: false) */
+ verbose?: boolean;
+ }
+ /**
+ * Parameters for RLMAgent.generate() and RLMAgent.stream()
+ */
+ export interface RLMAgentCallParameters {
+ /** The large context, document, or dataset to analyze */
+ context: RLMContext;
+ /** The specific question or task to perform on the context */
+ query: string;
+ /** Optional abort signal */
+ abortSignal?: AbortSignal;
+ /** Optional timeout in milliseconds */
+ timeout?: number;
+ /** Callback for each step completion */
+ onStepFinish?: (step: REPLStep) => void | Promise<void>;
+ }
+ /**
+ * A single step in the REPL trajectory
+ */
+ export interface REPLStep {
+ iteration: number;
+ reasoning: string;
+ code: string;
+ output: string;
+ }
+ /**
+ * Result from RLMAgent.generate()
+ */
+ export interface RLMGenerateResult {
+ /** The generated answer text */
+ text: string;
+ /** Array of REPL steps taken during generation */
+ steps: REPLStep[];
+ /** Total number of LLM calls made */
+ llmCallCount: number;
+ /** Total iterations performed */
+ iterations: number;
+ }
+ /**
+ * Result from RLMAgent.stream() - extends generate result with streaming capabilities
+ */
+ export interface RLMStreamResult extends RLMGenerateResult {
+ /** Readable stream of text chunks */
+ textStream: ReadableStream<string>;
+ }
+ /**
+ * @deprecated Use the new RLMAgent API instead. This interface is kept for backward compatibility.
+ */
+ export interface RLMResult {
+ answer: string;
+ trajectory: REPLStep[];
+ llmCallCount: number;
+ iterations: number;
+ }
+ /**
+ * Context can be a string, array of strings, or structured data
+ */
+ export type RLMContext = string | string[] | Record<string, unknown>;
+ export declare class RLMAgent {
+ private settings;
+ constructor(settings: RLMAgentSettings);
+ /**
+ * Generate an answer by iteratively analyzing the context.
+ * This is the primary method for using RLMAgent.
+ */
+ generate({ context, query, abortSignal, timeout, onStepFinish, }: RLMAgentCallParameters): Promise<RLMGenerateResult>;
+ /**
+ * Stream the answer generation process.
+ * Each step is yielded as it's completed.
+ */
+ stream({ context, query, abortSignal, timeout, onStepFinish, }: RLMAgentCallParameters): Promise<RLMStreamResult>;
+ }
+ export default RLMAgent;
+ //# sourceMappingURL=rlm.d.ts.map
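For orientation, the `RLMContext` type declared above (`string | string[] | Record<string, unknown>`) is normalized before being placed in the sandbox. The sketch below mirrors the `loadContext` logic in `dist/rlm.js`; the helper name `normalizeContext` is ours for illustration and is not part of the published API.

```typescript
// Hedged sketch of RLMContext normalization, mirroring loadContext in
// dist/rlm.js. `normalizeContext` is a hypothetical name, not package API.
type RLMContext = string | string[] | Record<string, unknown>;

function normalizeContext(context: RLMContext): string | Record<string, unknown> {
  if (typeof context === "string") return context;       // strings are used verbatim
  if (Array.isArray(context)) return context.join("\n"); // arrays are joined with newlines
  return context;                                        // objects pass through unchanged
}
```

So `["chunk A", "chunk B"]` reaches the sandbox as the single string `"chunk A\nchunk B"`, while a structured object is exposed as-is.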
package/dist/rlm.d.ts.map ADDED
@@ -0,0 +1 @@
+ {"version":3,"file":"rlm.d.ts","sourceRoot":"","sources":["../src/rlm.ts"],"names":[],"mappings":"AAAA;;;;;;;GAOG;AAGH,OAAO,KAAK,EAAgB,aAAa,EAAE,MAAM,IAAI,CAAC;AAGtD;;GAEG;AACH,MAAM,WAAW,gBAAgB;IAC/B,+BAA+B;IAC/B,KAAK,EAAE,aAAa,CAAC;IACrB,oDAAoD;IACpD,QAAQ,CAAC,EAAE,aAAa,CAAC;IACzB,yDAAyD;IACzD,aAAa,CAAC,EAAE,MAAM,CAAC;IACvB,wDAAwD;IACxD,WAAW,CAAC,EAAE,MAAM,CAAC;IACrB,0DAA0D;IAC1D,cAAc,CAAC,EAAE,MAAM,CAAC;IACxB,8CAA8C;IAC9C,OAAO,CAAC,EAAE,OAAO,CAAC;CACnB;AAED;;GAEG;AACH,MAAM,WAAW,sBAAsB;IACrC,yDAAyD;IACzD,OAAO,EAAE,UAAU,CAAC;IACpB,8DAA8D;IAC9D,KAAK,EAAE,MAAM,CAAC;IACd,4BAA4B;IAC5B,WAAW,CAAC,EAAE,WAAW,CAAC;IAC1B,uCAAuC;IACvC,OAAO,CAAC,EAAE,MAAM,CAAC;IACjB,wCAAwC;IACxC,YAAY,CAAC,EAAE,CAAC,IAAI,EAAE,QAAQ,KAAK,IAAI,GAAG,OAAO,CAAC,IAAI,CAAC,CAAC;CACzD;AAED;;GAEG;AACH,MAAM,WAAW,QAAQ;IACvB,SAAS,EAAE,MAAM,CAAC;IAClB,SAAS,EAAE,MAAM,CAAC;IAClB,IAAI,EAAE,MAAM,CAAC;IACb,MAAM,EAAE,MAAM,CAAC;CAChB;AAED;;GAEG;AACH,MAAM,WAAW,iBAAiB;IAChC,gCAAgC;IAChC,IAAI,EAAE,MAAM,CAAC;IACb,kDAAkD;IAClD,KAAK,EAAE,QAAQ,EAAE,CAAC;IAClB,qCAAqC;IACrC,YAAY,EAAE,MAAM,CAAC;IACrB,iCAAiC;IACjC,UAAU,EAAE,MAAM,CAAC;CACpB;AAED;;GAEG;AACH,MAAM,WAAW,eAAgB,SAAQ,iBAAiB;IACxD,qCAAqC;IACrC,UAAU,EAAE,cAAc,CAAC,MAAM,CAAC,CAAC;CACpC;AAED;;GAEG;AACH,MAAM,WAAW,SAAS;IACxB,MAAM,EAAE,MAAM,CAAC;IACf,UAAU,EAAE,QAAQ,EAAE,CAAC;IACvB,YAAY,EAAE,MAAM,CAAC;IACrB,UAAU,EAAE,MAAM,CAAC;CACpB;AAED;;GAEG;AACH,MAAM,MAAM,UAAU,GAAG,MAAM,GAAG,MAAM,EAAE,GAAG,MAAM,CAAC,MAAM,EAAE,OAAO,CAAC,CAAC;AAyPrE,qBAAa,QAAQ;IACnB,OAAO,CAAC,QAAQ,CAA6B;gBAEjC,QAAQ,EAAE,gBAAgB;IAWtC;;;OAGG;IACG,QAAQ,CAAC,EACb,OAAO,EACP,KAAK,EACL,WAAW,EACX,OAAO,EACP,YAAY,GACb,EAAE,sBAAsB,GAAG,OAAO,CAAC,iBAAiB,CAAC;IA6MtD;;;OAGG;IACG,MAAM,CAAC,EACX,OAAO,EACP,KAAK,EACL,WAAW,EACX,OAAO,EACP,YAAY,GACb,EAAE,sBAAsB,GAAG,OAAO,CAAC,eAAe,CAAC;CAwBrD;AAED,eAAe,QAAQ,CAAC"}
package/dist/rlm.js ADDED
@@ -0,0 +1,406 @@
+ /**
+ * RLM (Recursive Language Model) - TypeScript Implementation
+ *
+ * Based on the paper "Recursive Language Models" (Zhang, Kraska, Khattab, 2025)
+ * Uses the Vercel AI SDK for the implementation.
+ *
+ * REPL Environment: JavaScript with node:vm sandbox
+ */
+ import { generateText } from "ai";
+ import * as vm from "node:vm";
+ /**
+ * Sandbox environment for executing JavaScript code safely
+ * Uses node:vm (Bun/Node)
+ */
+ class REPLEnvironment {
+ vmContext;
+ llmCallCount;
+ maxLLMCalls;
+ subModel;
+ contextLoaded = false;
+ consoleOutput = [];
+ timeout;
+ constructor(subModel, maxLLMCalls, timeout = 30000) {
+ this.llmCallCount = 0;
+ this.maxLLMCalls = maxLLMCalls;
+ this.subModel = subModel;
+ this.timeout = timeout;
+ }
+ /**
+ * Load context into the REPL environment
+ */
+ loadContext(context) {
+ if (this.contextLoaded) {
+ throw new Error("Context already loaded");
+ }
+ let contextData;
+ if (typeof context === "string") {
+ contextData = context;
+ }
+ else if (Array.isArray(context)) {
+ contextData = context.join("\n");
+ }
+ else {
+ contextData = context;
+ }
+ // Create sandbox with all necessary functions
+ const sandbox = {
+ console: {
+ log: (...args) => {
+ this.consoleOutput.push(args.map((a) => String(a)).join(" "));
+ },
+ error: (...args) => {
+ this.consoleOutput.push("ERROR: " + args.map((a) => String(a)).join(" "));
+ },
+ },
+ context: contextData,
+ llm_query: (prompt) => {
+ return `<<<LLM_QUERY_START>>>\n${prompt}\n<<<LLM_QUERY_END>>>`;
+ },
+ llm_query_batched: (prompts) => {
+ return prompts.map((p) => `<<<LLM_QUERY_START>>>\n${p}\n<<<LLM_QUERY_END>>>`);
+ },
+ FINAL: (answer) => {
+ return { type: "final", value: answer };
+ },
+ FINAL_VAR: (varName) => {
+ return { type: "final_var", value: varName };
+ },
+ };
+ this.vmContext = vm.createContext(sandbox);
+ this.contextLoaded = true;
+ }
+ /**
+ * Query a sub-LLM
+ */
+ async llmQuery(prompt) {
+ if (this.llmCallCount >= this.maxLLMCalls) {
+ throw new Error(`LLM call limit exceeded: ${this.llmCallCount}/${this.maxLLMCalls}`);
+ }
+ this.llmCallCount++;
+ const result = await generateText({
+ model: this.subModel,
+ prompt: prompt,
+ });
+ return result.text;
+ }
+ /**
+ * Execute JavaScript code in the sandbox with timeout protection
+ */
+ executeJavaScript(code) {
+ this.consoleOutput = [];
+ try {
+ let result;
+ const script = new vm.Script(code);
+ if (this.vmContext) {
+ result = script.runInContext(this.vmContext, { timeout: this.timeout });
+ }
+ const stdout = this.consoleOutput.join("\n");
+ return {
+ stdout,
+ stderr: "",
+ result: result !== undefined ? result : undefined,
+ };
+ }
+ catch (error) {
+ if (error instanceof Error) {
+ return {
+ stdout: this.consoleOutput.join("\n"),
+ stderr: error.message,
+ error: error.message,
+ };
+ }
+ return {
+ stdout: this.consoleOutput.join("\n"),
+ stderr: "Unknown error",
+ error: "Unknown error",
+ };
+ }
+ }
+ /**
+ * Get a variable value from the VM
+ */
+ getVariable(name) {
+ try {
+ if (this.vmContext) {
+ const script = new vm.Script(name);
+ return script.runInContext(this.vmContext, { timeout: 1000 });
+ }
+ return undefined;
+ }
+ catch {
+ return undefined;
+ }
+ }
+ /**
+ * Get current LLM call count
+ */
+ getLLMCallCount() {
+ return this.llmCallCount;
+ }
+ /**
+ * Clean up
+ */
+ cleanup() {
+ this.consoleOutput = [];
+ this.vmContext = undefined;
+ }
+ }
+ function extractCodeBlocks(text) {
+ const codeBlockRegex = /```(?:javascript|js)?\s*\n([\s\S]*?)\n```/g;
+ const blocks = [];
+ let match;
+ while ((match = codeBlockRegex.exec(text)) !== null) {
+ const code = match[1];
+ if (code) {
+ blocks.push(code.trim());
+ }
+ }
+ return blocks;
+ }
+ function extractFinalAnswer(text) {
+ const finalVarMatch = text.match(/FINAL_VAR\s*\(\s*["']?([^"')\s]+)["']?\s*\)/i);
+ if (finalVarMatch) {
+ const content = finalVarMatch[1];
+ if (content) {
+ return { type: "variable", content };
+ }
+ }
+ const finalMatch = text.match(/FINAL\s*\(\s*["']?([^"')]+)["']?\s*\)/i);
+ if (finalMatch) {
+ const content = finalMatch[1];
+ if (content) {
+ return { type: "direct", content };
+ }
+ }
+ return null;
+ }
+ // ============================================================================
+ // System Prompts
+ // ============================================================================
+ const RLM_SYSTEM_PROMPT = `You are a Recursive Language Model (RLM) agent. You have access to a JavaScript REPL environment to analyze and process large contexts iteratively.
+
+ Your task is to answer queries by:
+ 1. EXPLORING the context through code execution
+ 2. ITERATING with small code snippets to understand the data
+ 3. USING llm_query() for semantic analysis when needed
+ 4. SUBMITTING your final answer when complete
+
+ Available in the REPL environment:
+ - context variable: Contains the input context (loaded as string or object)
+ - llm_query(prompt): Query a sub-LLM (~500K char capacity) for semantic analysis
+ - llm_query_batched(prompts): Query multiple prompts concurrently (returns array)
+ - console.log(): ALWAYS log to see results
+ - Standard JavaScript: JSON, Array methods, String methods, Math, etc.
+
+ IMPORTANT GUIDELINES:
+ 1. EXPLORE FIRST - Look at your data before processing it. Log samples, check types/lengths, understand the structure.
+ 2. ITERATE - Write small code snippets, observe outputs, then decide next steps. State persists between iterations.
+ 3. VERIFY BEFORE SUBMITTING - If results seem wrong, reconsider your approach.
+ 4. USE llm_query FOR SEMANTICS - Code finds WHERE things are; llm_query understands WHAT things mean.
+ 5. CHUNK SMARTLY - The sub-LLM can handle ~500K characters. Feed it substantial chunks, not tiny pieces.
+
+ When done, provide your final answer using:
+ - FINAL(your_answer) - to submit directly
+ - FINAL_VAR(variable_name) - to submit a variable from the REPL
+
+ Think step-by-step and show your reasoning before each code block.`;
+ export class RLMAgent {
+ settings;
+ constructor(settings) {
+ this.settings = {
+ model: settings.model,
+ subModel: settings.subModel ?? settings.model,
+ maxIterations: settings.maxIterations ?? 20,
+ maxLLMCalls: settings.maxLLMCalls ?? 50,
+ maxOutputChars: settings.maxOutputChars ?? 100000,
+ verbose: settings.verbose ?? false,
+ };
+ }
+ /**
+ * Generate an answer by iteratively analyzing the context.
+ * This is the primary method for using RLMAgent.
+ */
+ async generate({ context, query, abortSignal, timeout, onStepFinish, }) {
+ const repl = new REPLEnvironment(this.settings.subModel, this.settings.maxLLMCalls, timeout ?? 30000);
+ const steps = [];
+ let mainLLMCallCount = 0; // Track main agent LLM calls
+ try {
+ repl.loadContext(context);
+ const messages = [
+ { role: "system", content: RLM_SYSTEM_PROMPT },
+ {
+ role: "user",
+ content: `Context loaded. Answer the following query: "${query}"`,
+ },
+ ];
+ for (let iteration = 0; iteration < this.settings.maxIterations; iteration++) {
+ if (this.settings.verbose) {
+ console.log(`\n=== Iteration ${iteration + 1}/${this.settings.maxIterations} ===`);
+ }
+ // Generate next action
+ const result = await generateText({
+ model: this.settings.model,
+ messages,
+ abortSignal,
+ });
+ mainLLMCallCount++; // Track main LLM call
+ const response = result.text;
+ if (this.settings.verbose) {
+ console.log("LLM Response:", response.substring(0, 500));
+ }
+ // Check for final answer
+ const finalAnswer = extractFinalAnswer(response);
+ if (finalAnswer && finalAnswer.content) {
+ let answer;
+ if (finalAnswer.type === "direct") {
+ answer = finalAnswer.content;
+ }
+ else {
+ const varValue = repl.getVariable(finalAnswer.content);
+ answer =
+ varValue !== undefined
+ ? String(varValue)
+ : `[Variable ${finalAnswer.content} not found]`;
+ }
+ // Return RLMGenerateResult
+ return {
+ text: answer,
+ steps: steps,
+ llmCallCount: mainLLMCallCount + repl.getLLMCallCount(),
+ iterations: iteration + 1,
+ };
+ }
+ // Execute code
+ const codeBlocks = extractCodeBlocks(response);
+ if (codeBlocks.length > 0 && codeBlocks[0]) {
+ const code = codeBlocks[0];
+ const executionResult = repl.executeJavaScript(code);
+ // Process llm_query calls
+ let processedOutput = executionResult.stdout;
+ const llmQueryRegex = /<<<LLM_QUERY_START>>>\n([\s\S]*?)\n<<<LLM_QUERY_END>>>/g;
+ let llmMatch;
+ while ((llmMatch = llmQueryRegex.exec(executionResult.stdout)) !== null) {
+ const prompt = llmMatch[1];
+ if (prompt) {
+ try {
+ const llmResult = await repl.llmQuery(prompt);
+ processedOutput = processedOutput.replace(llmMatch[0], `\n[LLM Result]: ${llmResult}\n`);
+ }
+ catch (e) {
+ const errorMessage = e instanceof Error ? e.message : String(e);
+ processedOutput = processedOutput.replace(llmMatch[0], `\n[LLM Error]: ${errorMessage}\n`);
+ }
+ }
+ }
+ // Build full output
+ let fullOutput = processedOutput;
+ if (executionResult.result !== undefined &&
+ executionResult.result !== null) {
+ fullOutput += `\n[Return value]: ${JSON.stringify(executionResult.result)}`;
+ }
+ if (executionResult.error) {
+ fullOutput += `\n[Error]: ${executionResult.error}`;
+ }
+ // Truncate
+ const truncatedOutput = fullOutput.length > this.settings.maxOutputChars
+ ? fullOutput.substring(0, this.settings.maxOutputChars) +
+ "\n...[truncated]"
+ : fullOutput;
+ // Get reasoning
+ const reasoningParts = response.split("```");
+ const reasoning = reasoningParts.length > 0 ? (reasoningParts[0] ?? "").trim() : "";
+ // Create step result
+ const step = {
+ iteration: iteration + 1,
+ reasoning,
+ code,
+ output: truncatedOutput,
+ };
+ // Add to steps array
+ steps.push(step);
+ // Call onStepFinish callback if provided
+ if (onStepFinish) {
+ await onStepFinish(step);
+ }
+ // Add to messages
+ messages.push({ role: "assistant", content: response }, {
+ role: "user",
+ content: `Code executed:\n\`\`\`javascript\n${code}\n\`\`\`\n\nOutput:\n${truncatedOutput}\n\nContinue with the next step.`,
+ });
+ }
+ else {
+ messages.push({ role: "assistant", content: response }, {
+ role: "user",
+ content: "Please write JavaScript code in a ```javascript block to explore the context and answer the query.",
+ });
+ }
+ }
+ // Max iterations reached
+ messages.push({
+ role: "user",
+ content: "Maximum iterations reached. Based on all the information gathered, provide your final answer using FINAL(your_answer).",
+ });
+ const finalResult = await generateText({
+ model: this.settings.model,
+ messages,
+ abortSignal,
+ });
+ mainLLMCallCount++; // Track final LLM call
+ const finalAnswer = extractFinalAnswer(finalResult.text);
+ let answer;
+ if (finalAnswer && finalAnswer.content) {
+ if (finalAnswer.type === "direct") {
+ answer = finalAnswer.content;
+ }
+ else {
+ const varValue = repl.getVariable(finalAnswer.content);
+ answer =
+ varValue !== undefined
+ ? String(varValue)
+ : `[Variable ${finalAnswer.content} not found]`;
+ }
+ }
+ else {
+ answer = finalResult.text;
+ }
+ return {
+ text: answer,
+ steps: steps,
+ llmCallCount: mainLLMCallCount + repl.getLLMCallCount(),
+ iterations: this.settings.maxIterations,
+ };
+ }
+ finally {
+ repl.cleanup();
+ }
+ }
+ /**
+ * Stream the answer generation process.
+ * Each step is yielded as it's completed.
+ */
+ async stream({ context, query, abortSignal, timeout, onStepFinish, }) {
+ // For now, delegate to generate() and create a simple stream wrapper
+ // Full streaming implementation would require more complex handling
+ const result = await this.generate({
+ context,
+ query,
+ abortSignal,
+ timeout,
+ onStepFinish,
+ });
+ // Create a simple text stream from the result
+ const stream = new ReadableStream({
+ start(controller) {
+ controller.enqueue(result.text);
+ controller.close();
+ },
+ });
+ return {
+ textStream: stream,
+ ...result,
+ };
+ }
+ }
+ export default RLMAgent;
+ //# sourceMappingURL=rlm.js.map
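A note on the sentinel protocol in the code above: inside the `node:vm` sandbox, `llm_query()` does not call a model at all; it returns a marker string, and the agent loop later scans the captured stdout for these markers and splices in the real sub-LLM responses. The sketch below reproduces that mechanism self-contained, with a fake `subLLM` callback standing in for the `generateText` call.

```typescript
// Sketch of the sentinel-based llm_query resolution from dist/rlm.js.
// `resolveLLMQueries` and `subLLM` are illustrative names, not package API.
const llm_query = (prompt: string): string =>
  `<<<LLM_QUERY_START>>>\n${prompt}\n<<<LLM_QUERY_END>>>`;

function resolveLLMQueries(stdout: string, subLLM: (p: string) => string): string {
  const re = /<<<LLM_QUERY_START>>>\n([\s\S]*?)\n<<<LLM_QUERY_END>>>/g;
  let out = stdout;
  let m: RegExpExecArray | null;
  while ((m = re.exec(stdout)) !== null) {
    // Replace each sentinel in the output with the sub-LLM's answer
    out = out.replace(m[0], `\n[LLM Result]: ${subLLM(m[1] ?? "")}\n`);
  }
  return out;
}
```

This design keeps the sandbox fully synchronous: the async model call happens outside the VM, only after the sandboxed code has finished running. It also means a sentinel is only resolved if the sandboxed code actually prints it (e.g. `console.log(llm_query(...))`), since only stdout is scanned.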
package/dist/rlm.js.map ADDED
@@ -0,0 +1 @@
+ {"version":3,"file":"rlm.js","sourceRoot":"","sources":["../src/rlm.ts"],"names":[],"mappings":"AAAA;;;;;;;GAOG;AAEH,OAAO,EAAE,YAAY,EAAE,MAAM,IAAI,CAAC;AAElC,OAAO,KAAK,EAAE,MAAM,SAAS,CAAC;AA+F9B;;;GAGG;AACH,MAAM,eAAe;IACX,SAAS,CAAyB;IAClC,YAAY,CAAS;IACrB,WAAW,CAAS;IACpB,QAAQ,CAAgB;IACxB,aAAa,GAAY,KAAK,CAAC;IAC/B,aAAa,GAAa,EAAE,CAAC;IAC7B,OAAO,CAAS;IAExB,YAAY,QAAuB,EAAE,WAAmB,EAAE,OAAO,GAAG,KAAK;QACvE,IAAI,CAAC,YAAY,GAAG,CAAC,CAAC;QACtB,IAAI,CAAC,WAAW,GAAG,WAAW,CAAC;QAC/B,IAAI,CAAC,QAAQ,GAAG,QAAQ,CAAC;QACzB,IAAI,CAAC,OAAO,GAAG,OAAO,CAAC;IACzB,CAAC;IAED;;OAEG;IACH,WAAW,CAAC,OAAmB;QAC7B,IAAI,IAAI,CAAC,aAAa,EAAE,CAAC;YACvB,MAAM,IAAI,KAAK,CAAC,wBAAwB,CAAC,CAAC;QAC5C,CAAC;QAED,IAAI,WAAoB,CAAC;QAEzB,IAAI,OAAO,OAAO,KAAK,QAAQ,EAAE,CAAC;YAChC,WAAW,GAAG,OAAO,CAAC;QACxB,CAAC;aAAM,IAAI,KAAK,CAAC,OAAO,CAAC,OAAO,CAAC,EAAE,CAAC;YAClC,WAAW,GAAG,OAAO,CAAC,IAAI,CAAC,IAAI,CAAC,CAAC;QACnC,CAAC;aAAM,CAAC;YACN,WAAW,GAAG,OAAO,CAAC;QACxB,CAAC;QAED,8CAA8C;QAC9C,MAAM,OAAO,GAAY;YACvB,OAAO,EAAE;gBACP,GAAG,EAAE,CAAC,GAAG,IAAe,EAAE,EAAE;oBAC1B,IAAI,CAAC,aAAa,CAAC,IAAI,CAAC,IAAI,CAAC,GAAG,CAAC,CAAC,CAAC,EAAE,EAAE,CAAC,MAAM,CAAC,CAAC,CAAC,CAAC,CAAC,IAAI,CAAC,GAAG,CAAC,CAAC,CAAC;gBAChE,CAAC;gBACD,KAAK,EAAE,CAAC,GAAG,IAAe,EAAE,EAAE;oBAC5B,IAAI,CAAC,aAAa,CAAC,IAAI,CACrB,SAAS,GAAG,IAAI,CAAC,GAAG,CAAC,CAAC,CAAC,EAAE,EAAE,CAAC,MAAM,CAAC,CAAC,CAAC,CAAC,CAAC,IAAI,CAAC,GAAG,CAAC,CACjD,CAAC;gBACJ,CAAC;aACF;YACD,OAAO,EAAE,WAAW;YACpB,SAAS,EAAE,CAAC,MAAc,EAAU,EAAE;gBACpC,OAAO,0BAA0B,MAAM,uBAAuB,CAAC;YACjE,CAAC;YACD,iBAAiB,EAAE,CAAC,OAAiB,EAAY,EAAE;gBACjD,OAAO,OAAO,CAAC,GAAG,CAChB,CAAC,CAAC,EAAE,EAAE,CAAC,0BAA0B,CAAC,uBAAuB,CAC1D,CAAC;YACJ,CAAC;YACD,KAAK,EAAE,CAAC,MAAc,EAAoC,EAAE;gBAC1D,OAAO,EAAE,IAAI,EAAE,OAAO,EAAE,KAAK,EAAE,MAAM,EAAE,CAAC;YAC1C,CAAC;YACD,SAAS,EAAE,CAAC,OAAe,EAAwC,EAAE;gBACnE,OAAO,EAAE,IAAI,EAAE,WAAW,EAAE,KAAK,EAAE,OAAO,EAAE,CAAC;YAC/C,CAAC;SACF,CAAC;QAEF,IAAI,CAAC,SAAS,GAAG,EAAE,CAAC,aAAa,CAAC,OAAO,CAAC,CAAC;QAC3C,IAAI,CAAC,aAAa,GAAG,IAAI,CAAC;IAC5B,CAAC;IAED;;OAEG;IACH,KAAK,CAAC,QAAQ,CAAC,M
AAc;QAC3B,IAAI,IAAI,CAAC,YAAY,IAAI,IAAI,CAAC,WAAW,EAAE,CAAC;YAC1C,MAAM,IAAI,KAAK,CACb,4BAA4B,IAAI,CAAC,YAAY,IAAI,IAAI,CAAC,WAAW,EAAE,CACpE,CAAC;QACJ,CAAC;QAED,IAAI,CAAC,YAAY,EAAE,CAAC;QAEpB,MAAM,MAAM,GAAG,MAAM,YAAY,CAAC;YAChC,KAAK,EAAE,IAAI,CAAC,QAAQ;YACpB,MAAM,EAAE,MAAM;SACf,CAAC,CAAC;QAEH,OAAO,MAAM,CAAC,IAAI,CAAC;IACrB,CAAC;IAED;;OAEG;IACH,iBAAiB,CAAC,IAAY;QAM5B,IAAI,CAAC,aAAa,GAAG,EAAE,CAAC;QAExB,IAAI,CAAC;YACH,IAAI,MAAe,CAAC;YAEpB,MAAM,MAAM,GAAG,IAAI,EAAE,CAAC,MAAM,CAAC,IAAI,CAAC,CAAC;YACnC,IAAI,IAAI,CAAC,SAAS,EAAE,CAAC;gBACnB,MAAM,GAAG,MAAM,CAAC,YAAY,CAAC,IAAI,CAAC,SAAS,EAAE,EAAE,OAAO,EAAE,IAAI,CAAC,OAAO,EAAE,CAAC,CAAC;YAC1E,CAAC;YAED,MAAM,MAAM,GAAG,IAAI,CAAC,aAAa,CAAC,IAAI,CAAC,IAAI,CAAC,CAAC;YAE7C,OAAO;gBACL,MAAM;gBACN,MAAM,EAAE,EAAE;gBACV,MAAM,EAAE,MAAM,KAAK,SAAS,CAAC,CAAC,CAAC,MAAM,CAAC,CAAC,CAAC,SAAS;aAClD,CAAC;QACJ,CAAC;QAAC,OAAO,KAAK,EAAE,CAAC;YACf,IAAI,KAAK,YAAY,KAAK,EAAE,CAAC;gBAC3B,OAAO;oBACL,MAAM,EAAE,IAAI,CAAC,aAAa,CAAC,IAAI,CAAC,IAAI,CAAC;oBACrC,MAAM,EAAE,KAAK,CAAC,OAAO;oBACrB,KAAK,EAAE,KAAK,CAAC,OAAO;iBACrB,CAAC;YACJ,CAAC;YACD,OAAO;gBACL,MAAM,EAAE,IAAI,CAAC,aAAa,CAAC,IAAI,CAAC,IAAI,CAAC;gBACrC,MAAM,EAAE,eAAe;gBACvB,KAAK,EAAE,eAAe;aACvB,CAAC;QACJ,CAAC;IACH,CAAC;IAED;;OAEG;IACH,WAAW,CAAC,IAAY;QACtB,IAAI,CAAC;YACH,IAAI,IAAI,CAAC,SAAS,EAAE,CAAC;gBACnB,MAAM,MAAM,GAAG,IAAI,EAAE,CAAC,MAAM,CAAC,IAAI,CAAC,CAAC;gBACnC,OAAO,MAAM,CAAC,YAAY,CAAC,IAAI,CAAC,SAAS,EAAE,EAAE,OAAO,EAAE,IAAI,EAAE,CAAC,CAAC;YAChE,CAAC;YACD,OAAO,SAAS,CAAC;QACnB,CAAC;QAAC,MAAM,CAAC;YACP,OAAO,SAAS,CAAC;QACnB,CAAC;IACH,CAAC;IAED;;OAEG;IACH,eAAe;QACb,OAAO,IAAI,CAAC,YAAY,CAAC;IAC3B,CAAC;IAED;;OAEG;IACH,OAAO;QACL,IAAI,CAAC,aAAa,GAAG,EAAE,CAAC;QACxB,IAAI,CAAC,SAAS,GAAG,SAAS,CAAC;IAC7B,CAAC;CACF;AAED,SAAS,iBAAiB,CAAC,IAAY;IACrC,MAAM,cAAc,GAAG,4CAA4C,CAAC;IACpE,MAAM,MAAM,GAAa,EAAE,CAAC;IAC5B,IAAI,KAA6B,CAAC;IAElC,OAAO,CAAC,KAAK,GAAG,cAAc,CAAC,IAAI,CAAC,IAAI,CAAC,CAAC,KAAK,IAAI,EAAE,CAAC;QACpD,MAAM,IAAI,GAAG,KAAK,CAAC,CAAC,CAAC,CAAC;QACtB,IAAI,IAAI,EAAE,CAAC;YACT,MAAM,CAAC,IAAI,CAAC,IAAI,CAAC,IA
AI,EAAE,CAAC,CAAC;QAC3B,CAAC;IACH,CAAC;IAED,OAAO,MAAM,CAAC;AAChB,CAAC;AAED,SAAS,kBAAkB,CACzB,IAAY;IAEZ,MAAM,aAAa,GAAG,IAAI,CAAC,KAAK,CAC9B,8CAA8C,CAC/C,CAAC;IACF,IAAI,aAAa,EAAE,CAAC;QAClB,MAAM,OAAO,GAAG,aAAa,CAAC,CAAC,CAAC,CAAC;QACjC,IAAI,OAAO,EAAE,CAAC;YACZ,OAAO,EAAE,IAAI,EAAE,UAAU,EAAE,OAAO,EAAE,CAAC;QACvC,CAAC;IACH,CAAC;IAED,MAAM,UAAU,GAAG,IAAI,CAAC,KAAK,CAAC,wCAAwC,CAAC,CAAC;IACxE,IAAI,UAAU,EAAE,CAAC;QACf,MAAM,OAAO,GAAG,UAAU,CAAC,CAAC,CAAC,CAAC;QAC9B,IAAI,OAAO,EAAE,CAAC;YACZ,OAAO,EAAE,IAAI,EAAE,QAAQ,EAAE,OAAO,EAAE,CAAC;QACrC,CAAC;IACH,CAAC;IAED,OAAO,IAAI,CAAC;AACd,CAAC;AAED,+EAA+E;AAC/E,iBAAiB;AACjB,+EAA+E;AAE/E,MAAM,iBAAiB,GAAG;;;;;;;;;;;;;;;;;;;;;;;;;;mEA0ByC,CAAC;AAEpE,MAAM,OAAO,QAAQ;IACX,QAAQ,CAA6B;IAE7C,YAAY,QAA0B;QACpC,IAAI,CAAC,QAAQ,GAAG;YACd,KAAK,EAAE,QAAQ,CAAC,KAAK;YACrB,QAAQ,EAAE,QAAQ,CAAC,QAAQ,IAAI,QAAQ,CAAC,KAAK;YAC7C,aAAa,EAAE,QAAQ,CAAC,aAAa,IAAI,EAAE;YAC3C,WAAW,EAAE,QAAQ,CAAC,WAAW,IAAI,EAAE;YACvC,cAAc,EAAE,QAAQ,CAAC,cAAc,IAAI,MAAM;YACjD,OAAO,EAAE,QAAQ,CAAC,OAAO,IAAI,KAAK;SACnC,CAAC;IACJ,CAAC;IAED;;;OAGG;IACH,KAAK,CAAC,QAAQ,CAAC,EACb,OAAO,EACP,KAAK,EACL,WAAW,EACX,OAAO,EACP,YAAY,GACW;QACvB,MAAM,IAAI,GAAG,IAAI,eAAe,CAC9B,IAAI,CAAC,QAAQ,CAAC,QAAQ,EACtB,IAAI,CAAC,QAAQ,CAAC,WAAW,EACzB,OAAO,IAAI,KAAK,CACjB,CAAC;QACF,MAAM,KAAK,GAAe,EAAE,CAAC;QAC7B,IAAI,gBAAgB,GAAG,CAAC,CAAC,CAAC,6BAA6B;QAEvD,IAAI,CAAC;YACH,IAAI,CAAC,WAAW,CAAC,OAAO,CAAC,CAAC;YAE1B,MAAM,QAAQ,GAAmB;gBAC/B,EAAE,IAAI,EAAE,QAAQ,EAAE,OAAO,EAAE,iBAAiB,EAAE;gBAC9C;oBACE,IAAI,EAAE,MAAM;oBACZ,OAAO,EAAE,gDAAgD,KAAK,GAAG;iBAClE;aACF,CAAC;YAEF,KACE,IAAI,SAAS,GAAG,CAAC,EACjB,SAAS,GAAG,IAAI,CAAC,QAAQ,CAAC,aAAa,EACvC,SAAS,EAAE,EACX,CAAC;gBACD,IAAI,IAAI,CAAC,QAAQ,CAAC,OAAO,EAAE,CAAC;oBAC1B,OAAO,CAAC,GAAG,CACT,mBAAmB,SAAS,GAAG,CAAC,IAAI,IAAI,CAAC,QAAQ,CAAC,aAAa,MAAM,CACtE,CAAC;gBACJ,CAAC;gBAED,uBAAuB;gBACvB,MAAM,MAAM,GAAG,MAAM,YAAY,CAAC;oBAChC,KAAK,EAAE,IAAI,CAAC,QAAQ,CAAC,KAAK;oBAC1B,QAAQ;oBACR,WAAW;iBACZ,CAAC,CAAC;gBACH,gBAAgB,EAAE,CAAC,CAAC,sBAAsB;gBAE1C,MAAM,QAAQ,GAAG,MAAM,CAAC,IAAI,CAAC;gBAE7B,IAAI,IAAI
,CAAC,QAAQ,CAAC,OAAO,EAAE,CAAC;oBAC1B,OAAO,CAAC,GAAG,CAAC,eAAe,EAAE,QAAQ,CAAC,SAAS,CAAC,CAAC,EAAE,GAAG,CAAC,CAAC,CAAC;gBAC3D,CAAC;gBAED,yBAAyB;gBACzB,MAAM,WAAW,GAAG,kBAAkB,CAAC,QAAQ,CAAC,CAAC;gBACjD,IAAI,WAAW,IAAI,WAAW,CAAC,OAAO,EAAE,CAAC;oBACvC,IAAI,MAAc,CAAC;oBAEnB,IAAI,WAAW,CAAC,IAAI,KAAK,QAAQ,EAAE,CAAC;wBAClC,MAAM,GAAG,WAAW,CAAC,OAAO,CAAC;oBAC/B,CAAC;yBAAM,CAAC;wBACN,MAAM,QAAQ,GAAG,IAAI,CAAC,WAAW,CAAC,WAAW,CAAC,OAAO,CAAC,CAAC;wBACvD,MAAM;4BACJ,QAAQ,KAAK,SAAS;gCACpB,CAAC,CAAC,MAAM,CAAC,QAAQ,CAAC;gCAClB,CAAC,CAAC,aAAa,WAAW,CAAC,OAAO,aAAa,CAAC;oBACtD,CAAC;oBAED,2BAA2B;oBAC3B,OAAO;wBACL,IAAI,EAAE,MAAM;wBACZ,KAAK,EAAE,KAAK;wBACZ,YAAY,EAAE,gBAAgB,GAAG,IAAI,CAAC,eAAe,EAAE;wBACvD,UAAU,EAAE,SAAS,GAAG,CAAC;qBAC1B,CAAC;gBACJ,CAAC;gBAED,eAAe;gBACf,MAAM,UAAU,GAAG,iBAAiB,CAAC,QAAQ,CAAC,CAAC;gBAE/C,IAAI,UAAU,CAAC,MAAM,GAAG,CAAC,IAAI,UAAU,CAAC,CAAC,CAAC,EAAE,CAAC;oBAC3C,MAAM,IAAI,GAAW,UAAU,CAAC,CAAC,CAAC,CAAC;oBACnC,MAAM,eAAe,GAAG,IAAI,CAAC,iBAAiB,CAAC,IAAI,CAAC,CAAC;oBAErD,0BAA0B;oBAC1B,IAAI,eAAe,GAAG,eAAe,CAAC,MAAM,CAAC;oBAC7C,MAAM,aAAa,GACjB,yDAAyD,CAAC;oBAC5D,IAAI,QAAQ,CAAC;oBAEb,OACE,CAAC,QAAQ,GAAG,aAAa,CAAC,IAAI,CAAC,eAAe,CAAC,MAAM,CAAC,CAAC,KAAK,IAAI,EAChE,CAAC;wBACD,MAAM,MAAM,GAAG,QAAQ,CAAC,CAAC,CAAC,CAAC;wBAC3B,IAAI,MAAM,EAAE,CAAC;4BACX,IAAI,CAAC;gCACH,MAAM,SAAS,GAAG,MAAM,IAAI,CAAC,QAAQ,CAAC,MAAM,CAAC,CAAC;gCAC9C,eAAe,GAAG,eAAe,CAAC,OAAO,CACvC,QAAQ,CAAC,CAAC,CAAC,EACX,mBAAmB,SAAS,IAAI,CACjC,CAAC;4BACJ,CAAC;4BAAC,OAAO,CAAC,EAAE,CAAC;gCACX,MAAM,YAAY,GAAG,CAAC,YAAY,KAAK,CAAC,CAAC,CAAC,CAAC,CAAC,OAAO,CAAC,CAAC,CAAC,MAAM,CAAC,CAAC,CAAC,CAAC;gCAChE,eAAe,GAAG,eAAe,CAAC,OAAO,CACvC,QAAQ,CAAC,CAAC,CAAC,EACX,kBAAkB,YAAY,IAAI,CACnC,CAAC;4BACJ,CAAC;wBACH,CAAC;oBACH,CAAC;oBAED,oBAAoB;oBACpB,IAAI,UAAU,GAAG,eAAe,CAAC;oBACjC,IACE,eAAe,CAAC,MAAM,KAAK,SAAS;wBACpC,eAAe,CAAC,MAAM,KAAK,IAAI,EAC/B,CAAC;wBACD,UAAU,IAAI,qBAAqB,IAAI,CAAC,SAAS,CAAC,eAAe,CAAC,MAAM,CAAC,EAAE,CAAC;oBAC9E,CAAC;oBACD,IAAI,eAAe,CAAC,KAAK,EAAE,CAAC;wBAC1B,UAAU,IAAI,cAAc,eAAe,CAAC,KAAK,EAAE,CAAC;oBACtD,CAAC;oBAED,WAAW;o
BACX,MAAM,eAAe,GACnB,UAAU,CAAC,MAAM,GAAG,IAAI,CAAC,QAAQ,CAAC,cAAc;wBAC9C,CAAC,CAAC,UAAU,CAAC,SAAS,CAAC,CAAC,EAAE,IAAI,CAAC,QAAQ,CAAC,cAAc,CAAC;4BACrD,kBAAkB;wBACpB,CAAC,CAAC,UAAU,CAAC;oBAEjB,gBAAgB;oBAChB,MAAM,cAAc,GAAG,QAAQ,CAAC,KAAK,CAAC,KAAK,CAAC,CAAC;oBAC7C,MAAM,SAAS,GACb,cAAc,CAAC,MAAM,GAAG,CAAC,CAAC,CAAC,CAAC,CAAC,cAAc,CAAC,CAAC,CAAC,IAAI,EAAE,CAAC,CAAC,IAAI,EAAE,CAAC,CAAC,CAAC,EAAE,CAAC;oBAEpE,qBAAqB;oBACrB,MAAM,IAAI,GAAa;wBACrB,SAAS,EAAE,SAAS,GAAG,CAAC;wBACxB,SAAS;wBACT,IAAI;wBACJ,MAAM,EAAE,eAAe;qBACxB,CAAC;oBAEF,qBAAqB;oBACrB,KAAK,CAAC,IAAI,CAAC,IAAI,CAAC,CAAC;oBAEjB,yCAAyC;oBACzC,IAAI,YAAY,EAAE,CAAC;wBACjB,MAAM,YAAY,CAAC,IAAI,CAAC,CAAC;oBAC3B,CAAC;oBAED,kBAAkB;oBAClB,QAAQ,CAAC,IAAI,CACX,EAAE,IAAI,EAAE,WAAW,EAAE,OAAO,EAAE,QAAQ,EAAE,EACxC;wBACE,IAAI,EAAE,MAAM;wBACZ,OAAO,EAAE,qCAAqC,IAAI,wBAAwB,eAAe,kCAAkC;qBAC5H,CACF,CAAC;gBACJ,CAAC;qBAAM,CAAC;oBACN,QAAQ,CAAC,IAAI,CACX,EAAE,IAAI,EAAE,WAAW,EAAE,OAAO,EAAE,QAAQ,EAAE,EACxC;wBACE,IAAI,EAAE,MAAM;wBACZ,OAAO,EACL,oGAAoG;qBACvG,CACF,CAAC;gBACJ,CAAC;YACH,CAAC;YAED,yBAAyB;YACzB,QAAQ,CAAC,IAAI,CAAC;gBACZ,IAAI,EAAE,MAAM;gBACZ,OAAO,EACL,wHAAwH;aAC3H,CAAC,CAAC;YAEH,MAAM,WAAW,GAAG,MAAM,YAAY,CAAC;gBACrC,KAAK,EAAE,IAAI,CAAC,QAAQ,CAAC,KAAK;gBAC1B,QAAQ;gBACR,WAAW;aACZ,CAAC,CAAC;YACH,gBAAgB,EAAE,CAAC,CAAC,uBAAuB;YAE3C,MAAM,WAAW,GAAG,kBAAkB,CAAC,WAAW,CAAC,IAAI,CAAC,CAAC;YACzD,IAAI,MAAc,CAAC;YAEnB,IAAI,WAAW,IAAI,WAAW,CAAC,OAAO,EAAE,CAAC;gBACvC,IAAI,WAAW,CAAC,IAAI,KAAK,QAAQ,EAAE,CAAC;oBAClC,MAAM,GAAG,WAAW,CAAC,OAAO,CAAC;gBAC/B,CAAC;qBAAM,CAAC;oBACN,MAAM,QAAQ,GAAG,IAAI,CAAC,WAAW,CAAC,WAAW,CAAC,OAAO,CAAC,CAAC;oBACvD,MAAM;wBACJ,QAAQ,KAAK,SAAS;4BACpB,CAAC,CAAC,MAAM,CAAC,QAAQ,CAAC;4BAClB,CAAC,CAAC,aAAa,WAAW,CAAC,OAAO,aAAa,CAAC;gBACtD,CAAC;YACH,CAAC;iBAAM,CAAC;gBACN,MAAM,GAAG,WAAW,CAAC,IAAI,CAAC;YAC5B,CAAC;YAED,OAAO;gBACL,IAAI,EAAE,MAAM;gBACZ,KAAK,EAAE,KAAK;gBACZ,YAAY,EAAE,gBAAgB,GAAG,IAAI,CAAC,eAAe,EAAE;gBACvD,UAAU,EAAE,IAAI,CAAC,QAAQ,CAAC,aAAa;aACxC,CAAC;QACJ,CAAC;gBAAS,CAAC;YACT,IAAI,CAAC,OAAO,EAAE,CAAC;QACjB,CAAC;IACH,CAAC;IAED;;;OAG
G;IACH,KAAK,CAAC,MAAM,CAAC,EACX,OAAO,EACP,KAAK,EACL,WAAW,EACX,OAAO,EACP,YAAY,GACW;QACvB,qEAAqE;QACrE,oEAAoE;QACpE,MAAM,MAAM,GAAG,MAAM,IAAI,CAAC,QAAQ,CAAC;YACjC,OAAO;YACP,KAAK;YACL,WAAW;YACX,OAAO;YACP,YAAY;SACb,CAAC,CAAC;QAEH,8CAA8C;QAC9C,MAAM,MAAM,GAAG,IAAI,cAAc,CAAC;YAChC,KAAK,CAAC,UAAU;gBACd,UAAU,CAAC,OAAO,CAAC,MAAM,CAAC,IAAI,CAAC,CAAC;gBAChC,UAAU,CAAC,KAAK,EAAE,CAAC;YACrB,CAAC;SACF,CAAC,CAAC;QAEH,OAAO;YACL,UAAU,EAAE,MAAM;YAClB,GAAG,MAAM;SACV,CAAC;IACJ,CAAC;CACF;AAED,eAAe,QAAQ,CAAC"}
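The other half of the protocol in `dist/rlm.js` above is answer submission: the agent loop scans each model reply for `FINAL(...)` or `FINAL_VAR(...)`. The sketch below reuses the same regexes from `extractFinalAnswer`; the return type alias `FinalAnswer` is ours for illustration.

```typescript
// Sketch of the FINAL / FINAL_VAR answer extraction from dist/rlm.js.
// Regexes copied verbatim; `FinalAnswer` is a hypothetical type alias.
type FinalAnswer = { type: "direct" | "variable"; content: string } | null;

function extractFinalAnswer(text: string): FinalAnswer {
  // FINAL_VAR is checked first so a variable submission is not misread
  const finalVar = text.match(/FINAL_VAR\s*\(\s*["']?([^"')\s]+)["']?\s*\)/i);
  if (finalVar?.[1]) return { type: "variable", content: finalVar[1] };
  const final = text.match(/FINAL\s*\(\s*["']?([^"')]+)["']?\s*\)/i);
  if (final?.[1]) return { type: "direct", content: final[1] };
  return null;
}
```

A `direct` result is returned as-is; a `variable` result is looked up in the VM via `getVariable`, which is how `FINAL_VAR` lets the model submit a value too large to restate in its reply.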
package/package.json ADDED
@@ -0,0 +1,79 @@
+ {
+ "name": "ai-rlm",
+ "version": "0.0.0",
+ "description": "Recursive Language Model (RLM) implementation using the Vercel AI SDK. Process long contexts through iterative code execution and sub-LLM queries.",
+ "author": "Joe Hsu <jhsu.x1@gmail.com>",
+ "license": "MIT",
+ "repository": {
+ "type": "git",
+ "url": "git+https://github.com/yourusername/ai-rlm.git"
+ },
+ "bugs": {
+ "url": "https://github.com/yourusername/ai-rlm/issues"
+ },
+ "homepage": "https://github.com/yourusername/ai-rlm#readme",
+ "keywords": [
+ "ai",
+ "rlm",
+ "recursive",
+ "language-model",
+ "llm",
+ "ai-sdk",
+ "context",
+ "large-context",
+ "agent",
+ "tool"
+ ],
+ "type": "module",
+ "main": "./dist/index.js",
+ "types": "./dist/index.d.ts",
+ "exports": {
+ ".": {
+ "types": "./dist/index.d.ts",
+ "import": "./dist/index.js",
+ "require": "./dist/index.cjs"
+ },
+ "./rlm": {
+ "types": "./dist/rlm.d.ts",
+ "import": "./dist/rlm.js",
+ "require": "./dist/rlm.cjs"
+ },
+ "./rlm-tool": {
+ "types": "./dist/rlm-tool.d.ts",
+ "import": "./dist/rlm-tool.js",
+ "require": "./dist/rlm-tool.cjs"
+ }
+ },
+ "files": [
+ "dist",
+ "README.md",
+ "LICENSE"
+ ],
+ "scripts": {
+ "build": "tsc -p tsconfig.build.json",
+ "build:watch": "tsc -p tsconfig.build.json --watch",
+ "clean": "rm -rf dist",
+ "prepublishOnly": "npm run clean && npm run build",
+ "test": "echo \"Error: no test specified\" && exit 1"
+ },
+ "engines": {
+ "node": ">=18.0.0"
+ },
+ "dependencies": {
+ "ai": "^6.0.86",
+ "zod": "^3.23.0"
+ },
+ "optionalDependencies": {
+ "@ai-sdk/openai": "^3.0.29",
+ "vm2": "^3.10.4"
+ },
+ "peerDependencies": {
+ "typescript": "^5.0.0"
+ },
+ "devDependencies": {
+ "@changesets/cli": "^2.29.8",
+ "@types/bun": "latest",
+ "@types/node": "^20.0.0",
+ "typescript": "^5.0.0"
+ }
+ }