recursive-llm-ts 2.0.12 → 3.0.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -69,11 +69,13 @@ class PythoniaBridge {
         // Lazy load pythonia to avoid errors in Bun environments
         if (!this.python) {
             try {
+                // @ts-ignore - Optional dependency, may not be installed
                 const pythonia = yield Promise.resolve().then(() => __importStar(require('pythonia')));
                 this.python = pythonia.python;
             }
             catch (error) {
-                throw new Error('pythonia is not installed. Install it with: npm install pythonia\n' +
+                throw new Error('pythonia is not available (Python dependencies removed in v3.0). ' +
+                    'Please use the Go bridge (default) or install pythonia separately: npm install pythonia\n' +
                     'Note: pythonia only works with Node.js runtime, not Bun');
             }
         }
package/go/README.md ADDED
@@ -0,0 +1,347 @@
+ # RLM Go Binary
+
+ Go implementation of Recursive Language Models (RLM) based on the [original Python implementation](https://github.com/alexzhang13/rlm).
+
+ ## Overview
+
+ This is a self-contained Go binary that implements the RLM algorithm, letting language models process extremely long contexts (100k+ tokens) by storing the context as a variable and exploring it recursively.
+
+ **Key difference from Python**: uses a JavaScript REPL instead of a Python REPL for code execution.
+
+ ## Building
+
+ ```bash
+ # Build the binary
+ go build -o rlm ./cmd/rlm
+
+ # Run tests
+ go test ./internal/rlm/... -v
+
+ # Build with optimization
+ go build -ldflags="-s -w" -o rlm ./cmd/rlm
+ ```
+
+ ## Usage
+
+ The binary accepts JSON input on stdin and returns JSON output on stdout.
+
+ ### Input Format
+
+ ```json
+ {
+   "model": "gpt-4o-mini",
+   "query": "What are the main themes?",
+   "context": "Your long document here...",
+   "config": {
+     "recursive_model": "gpt-4o-mini",
+     "api_base": "https://api.openai.com/v1",
+     "api_key": "sk-...",
+     "max_depth": 5,
+     "max_iterations": 30,
+     "temperature": 0.7
+   }
+ }
+ ```
+
+ ### Output Format
+
+ ```json
+ {
+   "result": "The main themes are...",
+   "stats": {
+     "llm_calls": 3,
+     "iterations": 2,
+     "depth": 0
+   }
+ }
+ ```
+
+ ### Example
+
+ ```bash
+ # Basic usage
+ echo '{
+   "model": "gpt-4o-mini",
+   "query": "Summarize this",
+   "context": "Long document...",
+   "config": {
+     "api_key": "sk-..."
+   }
+ }' | ./rlm
+
+ # With environment variable for API key
+ export OPENAI_API_KEY="sk-..."
+ echo '{
+   "model": "gpt-4o-mini",
+   "query": "What is this about?",
+   "context": "Document text..."
+ }' | ./rlm
+ ```
+
+ ## Configuration Options
+
+ All fields in `config` are optional and have defaults:
+
+ | Field | Type | Default | Description |
+ |-------|------|---------|-------------|
+ | `recursive_model` | string | Same as `model` | Cheaper model for recursive calls |
+ | `api_base` | string | `https://api.openai.com/v1` | API endpoint URL |
+ | `api_key` | string | From `OPENAI_API_KEY` env | API key for authentication |
+ | `max_depth` | int | 5 | Maximum recursion depth |
+ | `max_iterations` | int | 30 | Maximum REPL iterations per call |
+ | `temperature` | float | 0.7 | LLM temperature (0-2) |
+ | `timeout` | int | 60 | HTTP timeout in seconds |
+
+ Any other fields in `config` are passed through as extra parameters to the LLM API.
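The pass-through behavior described above can be pictured with a short sketch (editorial illustration, not part of the package; the key handling is hypothetical, and `top_p` is just an example of an extra field):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// knownKeys are the documented config fields from the table above;
// in this sketch, anything else is forwarded to the API body verbatim.
var knownKeys = map[string]bool{
	"recursive_model": true, "api_base": true, "api_key": true,
	"max_depth": true, "max_iterations": true, "temperature": true, "timeout": true,
}

// buildBody merges unrecognized config fields into the request payload.
func buildBody(model string, config map[string]interface{}) map[string]interface{} {
	body := map[string]interface{}{"model": model}
	for k, v := range config {
		if !knownKeys[k] {
			body[k] = v
		}
	}
	return body
}

func main() {
	config := map[string]interface{}{
		"temperature": 0.2, // documented: consumed by the binary itself
		"top_p":       0.9, // undocumented: forwarded to the API
	}
	out, _ := json.Marshal(buildBody("gpt-4o-mini", config))
	fmt.Println(string(out)) // {"model":"gpt-4o-mini","top_p":0.9}
}
```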
+
+ ## JavaScript REPL Environment
+
+ The LLM can write JavaScript code to explore the context. Available globals:
+
+ ### Core Variables
+ - `context` - The document to analyze (string)
+ - `query` - The user's question (string)
+ - `recursive_llm(sub_query, sub_context)` - Recursively process sub-context
+
+ ### String Operations
+ ```javascript
+ context.slice(0, 100)   // First 100 chars
+ context.split('\n')     // Split by newline
+ context.length          // String length
+ ```
+
+ ### Regex (Python-style API)
+ ```javascript
+ re.findall("ERROR", context)   // Find all matches
+ re.search("ERROR", context)    // Find first match
+ ```
+
+ ### Built-in Functions
+ ```javascript
+ len(context)           // Length of string/array
+ print("hello")         // Print output
+ console.log("hello")   // Same as print
+ ```
+
+ ### JSON
+ ```javascript
+ json.loads('{"key":"value"}')   // Parse JSON
+ json.dumps({key: "value"})      // Stringify JSON
+ ```
+
+ ### Array Operations
+ ```javascript
+ range(5)                 // [0, 1, 2, 3, 4]
+ range(2, 5)              // [2, 3, 4]
+ sorted([3, 1, 2])        // [1, 2, 3]
+ sum([1, 2, 3])           // 6
+ min([1, 2, 3])           // 1
+ max([1, 2, 3])           // 3
+ enumerate(['a', 'b'])    // [[0,'a'], [1,'b']]
+ zip([1, 2], ['a', 'b'])  // [[1,'a'], [2,'b']]
+ any([false, true])       // true
+ all([true, true])        // true
+ ```
+
+ ### Counting & Grouping
+ ```javascript
+ Counter("hello")        // {h:1, e:1, l:2, o:1}
+ defaultdict(() => 0)    // Dict with default values
+ ```
+
+ ### Math
+ ```javascript
+ Math.floor(3.7)     // 3
+ Math.ceil(3.2)      // 4
+ Math.max(1, 2, 3)   // 3
+ ```
+
+ ### Returning Results
+ ```javascript
+ // Option 1: Direct answer (write as text, not code)
+ FINAL("The answer is 42")
+
+ // Option 2: Return a variable
+ const answer = "The answer is 42"
+ FINAL_VAR(answer)
+ ```
+
+ ## Supported LLM Providers
+
+ Works with any OpenAI-compatible API:
+
+ - **OpenAI**: `model: "gpt-4o"`, `model: "gpt-4o-mini"`
+ - **Azure OpenAI**: Set custom `api_base`
+ - **Ollama**: `api_base: "http://localhost:11434/v1"`, `model: "llama3.2"`
+ - **llama.cpp**: `api_base: "http://localhost:8000/v1"`
+ - **vLLM**: `api_base: "http://localhost:8000/v1"`
+ - Any other OpenAI-compatible endpoint
+
+ ## Architecture
+
+ ```
+ cmd/rlm/main.go       # CLI entry point (JSON I/O)
+ internal/rlm/
+ ├── rlm.go            # Core RLM logic
+ ├── types.go          # Config and stats types
+ ├── parser.go         # FINAL() extraction
+ ├── prompt.go         # System prompt builder
+ ├── repl.go           # JavaScript REPL (goja)
+ └── openai.go         # OpenAI API client
+ ```
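`parser.go` is described above as handling FINAL() extraction. A minimal regexp-based sketch of that idea (editorial illustration, not the package's actual parser; it ignores escaped quotes and `FINAL_VAR` handling):

```go
package main

import (
	"fmt"
	"regexp"
)

// finalRe matches FINAL("...") in model output. This is a simplification:
// it does not handle escaped quotes inside the answer.
var finalRe = regexp.MustCompile(`FINAL\("([^"]*)"\)`)

// extractFinal returns the FINAL() answer and whether one was found.
func extractFinal(output string) (string, bool) {
	m := finalRe.FindStringSubmatch(output)
	if m == nil {
		return "", false
	}
	return m[1], true
}

func main() {
	ans, ok := extractFinal(`Some reasoning... FINAL("The answer is 42")`)
	fmt.Println(ok, ans) // true The answer is 42
}
```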
+
+ ## Error Handling
+
+ Errors are written to stderr with exit code 1:
+
+ ```bash
+ # Missing model
+ echo '{"query":"test"}' | ./rlm
+ # stderr: Missing model in request payload
+
+ # API error
+ echo '{
+   "model": "invalid",
+   "query": "test",
+   "context": "test"
+ }' | ./rlm 2>&1
+ # stderr: LLM request failed (401): ...
+ ```
+
+ ## Testing
+
+ ```bash
+ # Run all tests
+ go test ./internal/rlm/... -v
+
+ # Run specific test
+ go test ./internal/rlm -run TestParser -v
+
+ # With coverage
+ go test ./internal/rlm/... -cover
+
+ # Benchmark
+ go test ./internal/rlm/... -bench=. -benchmem
+ ```
+
+ ## Performance
+
+ - **Binary size**: ~15MB (uncompressed), ~5MB (compressed with UPX)
+ - **Memory**: ~50MB baseline + context size
+ - **Startup**: <10ms
+ - **REPL overhead**: ~1-2ms per iteration
+
+ ## Comparison with Python Implementation
+
+ | Feature | Python | Go |
+ |---------|--------|-----|
+ | **REPL Language** | Python (RestrictedPython) | JavaScript (goja) |
+ | **LLM Providers** | 100+ via LiteLLM | OpenAI-compatible only |
+ | **Async Support** | ✅ Full async/await | ❌ Synchronous only |
+ | **Distribution** | Requires Python runtime | ✅ Single binary |
+ | **Startup Time** | ~500ms | ~10ms |
+ | **Memory Usage** | ~150MB | ~50MB |
+
+ ## Known Limitations
+
+ 1. **JavaScript vs Python**: LLMs are generally more familiar with Python, so they may need more iterations
+ 2. **No async**: Recursive calls are sequential, not parallel
+ 3. **OpenAI API only**: Doesn't support all LiteLLM providers
+ 4. **No streaming**: Full response only
+
+ ## Integration with TypeScript
+
+ From Node.js/TypeScript:
+
+ ```typescript
+ import { spawn } from 'child_process';
+
+ interface RLMRequest {
+   model: string;
+   query: string;
+   context: string;
+   config?: {
+     api_key?: string;
+     max_depth?: number;
+     max_iterations?: number;
+   };
+ }
+
+ interface RLMResponse {
+   result: string;
+   stats: {
+     llm_calls: number;
+     iterations: number;
+     depth: number;
+   };
+ }
+
+ async function callRLM(request: RLMRequest): Promise<RLMResponse> {
+   return new Promise((resolve, reject) => {
+     const proc = spawn('./rlm');
+     let stdout = '';
+     let stderr = '';
+
+     proc.stdout.on('data', (data) => { stdout += data; });
+     proc.stderr.on('data', (data) => { stderr += data; });
+
+     proc.on('close', (code) => {
+       if (code !== 0) {
+         reject(new Error(stderr || `Exit code ${code}`));
+       } else {
+         resolve(JSON.parse(stdout));
+       }
+     });
+
+     proc.stdin.write(JSON.stringify(request));
+     proc.stdin.end();
+   });
+ }
+
+ // Usage
+ const result = await callRLM({
+   model: 'gpt-4o-mini',
+   query: 'What is this about?',
+   context: longDocument,
+   config: {
+     api_key: process.env.OPENAI_API_KEY,
+   },
+ });
+
+ console.log(result.result);
+ console.log(`Stats: ${result.stats.llm_calls} LLM calls`);
+ ```
+
+ ## Troubleshooting
+
+ ### "Missing model in request payload"
+ Include the `model` field in your JSON input.
+
+ ### "LLM request failed (401)"
+ Check that your API key is valid and has sufficient credits.
+
+ ### "max iterations exceeded"
+ Increase `max_iterations` in config, or simplify your query.
+
+ ### "max recursion depth exceeded"
+ Increase `max_depth` in config.
+
+ ### "Execution error: ReferenceError: xyz is not defined"
+ The generated code referenced a name that does not exist in the REPL. Check the JavaScript syntax and use only the globals listed above (e.g. both `print()` and `console.log()` are available).
+
+ ## Contributing
+
+ 1. Write tests for new features
+ 2. Ensure all tests pass: `go test ./internal/rlm/... -v`
+ 3. Format code: `go fmt ./...`
+ 4. Update documentation
+
+ ## License
+
+ MIT License, same as the original Python implementation.
+
+ ## Acknowledgments
+
+ - Based on the [Recursive Language Models paper](https://alexzhang13.github.io/blog/2025/rlm/) by Alex Zhang and Omar Khattab (MIT)
+ - Original Python implementation: https://github.com/alexzhang13/rlm
+ - JavaScript engine: [goja](https://github.com/dop251/goja)
package/go/cmd/rlm/main.go ADDED
@@ -0,0 +1,63 @@
+ package main
+
+ import (
+     "encoding/json"
+     "fmt"
+     "io"
+     "os"
+
+     "recursive-llm-go/internal/rlm"
+ )
+
+ type requestPayload struct {
+     Model   string                 `json:"model"`
+     Query   string                 `json:"query"`
+     Context string                 `json:"context"`
+     Config  map[string]interface{} `json:"config"`
+ }
+
+ type responsePayload struct {
+     Result string       `json:"result"`
+     Stats  rlm.RLMStats `json:"stats"`
+ }
+
+ func main() {
+     input, err := io.ReadAll(os.Stdin)
+     if err != nil {
+         fmt.Fprintln(os.Stderr, "Failed to read stdin:", err)
+         os.Exit(1)
+     }
+
+     var req requestPayload
+     if err := json.Unmarshal(input, &req); err != nil {
+         fmt.Fprintln(os.Stderr, "Failed to parse input JSON:", err)
+         os.Exit(1)
+     }
+
+     if req.Model == "" {
+         fmt.Fprintln(os.Stderr, "Missing model in request payload")
+         os.Exit(1)
+     }
+
+     config := rlm.ConfigFromMap(req.Config)
+     engine := rlm.New(req.Model, config)
+
+     result, stats, err := engine.Completion(req.Query, req.Context)
+     if err != nil {
+         fmt.Fprintln(os.Stderr, err)
+         os.Exit(1)
+     }
+
+     resp := responsePayload{
+         Result: result,
+         Stats:  stats,
+     }
+
+     payload, err := json.Marshal(resp)
+     if err != nil {
+         fmt.Fprintln(os.Stderr, "Failed to encode response JSON:", err)
+         os.Exit(1)
+     }
+
+     fmt.Println(string(payload))
+ }
package/go/go.mod ADDED
@@ -0,0 +1,12 @@
+ module recursive-llm-go
+
+ go 1.21
+
+ require github.com/dop251/goja v0.0.0-20231027120936-b396bb4c349d
+
+ require (
+     github.com/dlclark/regexp2 v1.7.0 // indirect
+     github.com/go-sourcemap/sourcemap v2.1.3+incompatible // indirect
+     github.com/google/pprof v0.0.0-20230207041349-798e818bf904 // indirect
+     golang.org/x/text v0.3.8 // indirect
+ )
package/go/go.sum ADDED
@@ -0,0 +1,57 @@
+ github.com/chzyer/logex v1.2.0/go.mod h1:9+9sk7u7pGNWYMkh0hdiL++6OeibzJccyQU4p4MedaY=
+ github.com/chzyer/readline v1.5.0/go.mod h1:x22KAscuvRqlLoK9CsoYsmxoXZMMFVyOl86cAH8qUic=
+ github.com/chzyer/test v0.0.0-20210722231415-061457976a23/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
+ github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
+ github.com/dlclark/regexp2 v1.4.1-0.20201116162257-a2a8dda75c91/go.mod h1:2pZnwuY/m+8K6iRw6wQdMtk+rH5tNGR1i55kozfMjCc=
+ github.com/dlclark/regexp2 v1.7.0 h1:7lJfhqlPssTb1WQx4yvTHN0uElPEv52sbaECrAQxjAo=
+ github.com/dlclark/regexp2 v1.7.0/go.mod h1:DHkYz0B9wPfa6wondMfaivmHpzrQ3v9q8cnmRbL6yW8=
+ github.com/dop251/goja v0.0.0-20211022113120-dc8c55024d06/go.mod h1:R9ET47fwRVRPZnOGvHxxhuZcbrMCuiqOz3Rlrh4KSnk=
+ github.com/dop251/goja v0.0.0-20231027120936-b396bb4c349d h1:wi6jN5LVt/ljaBG4ue79Ekzb12QfJ52L9Q98tl8SWhw=
+ github.com/dop251/goja v0.0.0-20231027120936-b396bb4c349d/go.mod h1:QMWlm50DNe14hD7t24KEqZuUdC9sOTy8W6XbCU1mlw4=
+ github.com/dop251/goja_nodejs v0.0.0-20210225215109-d91c329300e7/go.mod h1:hn7BA7c8pLvoGndExHudxTDKZ84Pyvv+90pbBjbTz0Y=
+ github.com/dop251/goja_nodejs v0.0.0-20211022123610-8dd9abb0616d/go.mod h1:DngW8aVqWbuLRMHItjPUyqdj+HWPvnQe8V8y1nDpIbM=
+ github.com/go-sourcemap/sourcemap v2.1.3+incompatible h1:W1iEw64niKVGogNgBN3ePyLFfuisuzeidWPMPWmECqU=
+ github.com/go-sourcemap/sourcemap v2.1.3+incompatible/go.mod h1:F8jJfvm2KbVjc5NqelyYJmf/v5J0dwNLS2mL4sNA1Jg=
+ github.com/google/pprof v0.0.0-20230207041349-798e818bf904 h1:4/hN5RUoecvl+RmJRE2YxKWtnnQls6rQjjW5oV7qg2U=
+ github.com/google/pprof v0.0.0-20230207041349-798e818bf904/go.mod h1:uglQLonpP8qtYCYyzA+8c/9qtqgA3qsXGYqCPKARAFg=
+ github.com/ianlancetaylor/demangle v0.0.0-20220319035150-800ac71e25c2/go.mod h1:aYm2/VgdVmcIU8iMfdMvDMsRAQjcfZSKFby6HOFvi/w=
+ github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
+ github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
+ github.com/kr/pretty v0.3.0/go.mod h1:640gp4NfQd8pI5XOwp5fnNeVWj67G7CFk/SaSQn7NBk=
+ github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
+ github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
+ github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
+ github.com/rogpeppe/go-internal v1.6.1/go.mod h1:xXDCJY+GAPziupqXw64V24skbSoqbTEfhy4qGm1nDQc=
+ github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
+ golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
+ golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
+ golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
+ golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+ golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
+ golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
+ golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+ golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+ golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+ golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+ golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+ golang.org/x/sys v0.0.0-20220310020820-b874c991c1a5/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+ golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+ golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+ golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
+ golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
+ golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
+ golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
+ golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
+ golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
+ golang.org/x/text v0.3.8 h1:nAL+RVCQ9uMn3vJZbV+MRnydTJFPf8qqY42YiA6MrqY=
+ golang.org/x/text v0.3.8/go.mod h1:E6s5w1FMmriuDzIBO73fBruAKo1PCIq6d2Q6DHfQ8WQ=
+ golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
+ golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+ golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
+ golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+ gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+ gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+ gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
+ gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
+ gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
+ gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
package/go/integration_test.sh ADDED
@@ -0,0 +1,169 @@
+ #!/bin/bash
+ # Integration test with real LLM API
+ # Set the OPENAI_API_KEY environment variable before running
+
+ set -e
+
+ echo "🧪 RLM Go Integration Tests"
+ echo "================================"
+ echo ""
+
+ # Check if binary exists
+ if [ ! -f "./rlm" ]; then
+     echo "❌ Binary not found. Building..."
+     go build -o rlm ./cmd/rlm
+     echo "✅ Built binary"
+ fi
+
+ # Check for API key
+ if [ -z "$OPENAI_API_KEY" ]; then
+     echo "❌ OPENAI_API_KEY environment variable not set"
+     echo ""
+     echo "Usage:"
+     echo "  export OPENAI_API_KEY='sk-...'"
+     echo "  ./integration_test.sh"
+     exit 1
+ fi
+
+ echo "✅ API key found"
+ echo ""
+
+ # Test 1: Simple query
+ echo "📝 Test 1: Simple context analysis"
+ echo "-----------------------------------"
+ # Run inside `if` so that `set -e` does not abort before the failure branch
+ if RESULT=$(./rlm <<EOF
+ {
+     "model": "gpt-4o-mini",
+     "query": "How many times does the word 'test' appear?",
+     "context": "This is a test. Another test here. Final test.",
+     "config": {
+         "api_key": "$OPENAI_API_KEY",
+         "max_iterations": 10
+     }
+ }
+ EOF
+ ); then
+     echo "✅ Test 1 passed"
+     echo "Result: $(echo "$RESULT" | jq -r '.result')"
+     echo "Stats: $(echo "$RESULT" | jq '.stats')"
+ else
+     echo "❌ Test 1 failed"
+     exit 1
+ fi
+ echo ""
+
+ # Test 2: Count/aggregation
+ echo "📝 Test 2: Counting errors in logs"
+ echo "-----------------------------------"
+ LOG_CONTEXT='2024-01-01 INFO: System started
+ 2024-01-01 ERROR: Connection failed
+ 2024-01-01 INFO: Retrying
+ 2024-01-01 ERROR: Timeout
+ 2024-01-01 ERROR: Failed again
+ 2024-01-01 INFO: Success'
+
+ # Build the payload with jq so the multi-line context is escaped as valid JSON
+ if RESULT=$(jq -n --arg ctx "$LOG_CONTEXT" --arg key "$OPENAI_API_KEY" '{
+     model: "gpt-4o-mini",
+     query: "Count how many ERROR entries are in the logs",
+     context: $ctx,
+     config: { api_key: $key, max_iterations: 10 }
+ }' | ./rlm); then
+     echo "✅ Test 2 passed"
+     echo "Result: $(echo "$RESULT" | jq -r '.result')"
+     ITERATIONS=$(echo "$RESULT" | jq '.stats.iterations')
+     echo "Iterations: $ITERATIONS"
+ else
+     echo "❌ Test 2 failed"
+     exit 1
+ fi
+ echo ""
+
+ # Test 3: Long context
+ echo "📝 Test 3: Long context processing"
+ echo "-----------------------------------"
+ LONG_CONTEXT=$(cat <<EOF
+ Chapter 1: The Beginning
+
+ It was a dark and stormy night. The hero embarked on a journey.
+ $(for i in {1..100}; do echo "Line $i of the story continues here with more content."; done)
+
+ Chapter 2: The Middle
+
+ The hero faced many challenges.
+ $(for i in {1..100}; do echo "Line $i describes the adventure."; done)
+
+ Chapter 3: The End
+
+ Finally, the hero succeeded and returned home triumphant.
+ EOF
+ )
+
+ if RESULT=$(jq -n --arg ctx "$LONG_CONTEXT" --arg key "$OPENAI_API_KEY" '{
+     model: "gpt-4o-mini",
+     query: "How many chapters are in this document?",
+     context: $ctx,
+     config: { api_key: $key, max_iterations: 15 }
+ }' | ./rlm); then
+     echo "✅ Test 3 passed"
+     echo "Result: $(echo "$RESULT" | jq -r '.result')"
+     LLM_CALLS=$(echo "$RESULT" | jq '.stats.llm_calls')
+     echo "LLM calls: $LLM_CALLS"
+ else
+     echo "❌ Test 3 failed"
+     exit 1
+ fi
+ echo ""
+
+ # Test 4: Different model configurations
+ echo "📝 Test 4: Two-model configuration"
+ echo "-----------------------------------"
+ if RESULT=$(./rlm <<EOF
+ {
+     "model": "gpt-4o",
+     "query": "What is this text about?",
+     "context": "Artificial intelligence and machine learning are transforming technology.",
+     "config": {
+         "recursive_model": "gpt-4o-mini",
+         "api_key": "$OPENAI_API_KEY",
+         "max_iterations": 10,
+         "temperature": 0.3
+     }
+ }
+ EOF
+ ); then
+     echo "✅ Test 4 passed"
+     echo "Result: $(echo "$RESULT" | jq -r '.result')"
+ else
+     echo "❌ Test 4 failed"
+     exit 1
+ fi
+ echo ""
+
+ echo "================================"
+ echo "✅ All integration tests passed!"
+ echo ""
+ echo "Summary:"
+ echo "  - Simple queries work"
+ echo "  - Counting/aggregation works"
+ echo "  - Long context works"
+ echo "  - Model configuration works"