rlm-cli 0.2.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +184 -0
- package/bin/rlm.mjs +45 -0
- package/dist/cli.d.ts +15 -0
- package/dist/cli.js +185 -0
- package/dist/config.d.ts +13 -0
- package/dist/config.js +73 -0
- package/dist/env.d.ts +9 -0
- package/dist/env.js +34 -0
- package/dist/interactive.d.ts +10 -0
- package/dist/interactive.js +789 -0
- package/dist/main.d.ts +11 -0
- package/dist/main.js +144 -0
- package/dist/repl.d.ts +47 -0
- package/dist/repl.js +183 -0
- package/dist/rlm.d.ts +55 -0
- package/dist/rlm.js +354 -0
- package/dist/runtime.py +185 -0
- package/dist/viewer.d.ts +12 -0
- package/dist/viewer.js +828 -0
- package/package.json +48 -0
- package/rlm_config.yaml +17 -0
package/README.md
ADDED
@@ -0,0 +1,184 @@
# rlm-cli

```
██████╗ ██╗ ███╗ ███╗
██╔══██╗██║ ████╗ ████║
██████╔╝██║ ██╔████╔██║
██╔══██╗██║ ██║╚██╔╝██║
██║ ██║███████╗██║ ╚═╝ ██║
╚═╝ ╚═╝╚══════╝╚═╝ ╚═╝
```

CLI for **Recursive Language Models** — based on the [RLM paper](https://arxiv.org/abs/2512.24601).

Instead of dumping a huge context into a single LLM call, RLM lets the model write Python code to process it — slicing, chunking, running sub-queries on pieces, and building up an answer across multiple iterations.

## Quick Start

```bash
git clone https://github.com/viplismism/rlm-cli.git
cd rlm-cli
npm install
npm run build
npm link   # makes `rlm` available globally
```

Then create a `.env` file with your API key:

```bash
cp .env.example .env
```

```bash
# .env
ANTHROPIC_API_KEY=sk-ant-...
# or
OPENAI_API_KEY=sk-...

# Optional: override default model
# RLM_MODEL=claude-sonnet-4-5-20250929
```

That's it. Run `rlm` and you're in.

## Usage

### Interactive Terminal

```bash
rlm
```

This is the main way to use it. You get a persistent session where you can:

- Load context from a file, URL, or by pasting text directly
- Ask questions and watch the RLM loop run — you'll see the code it writes, the output, sub-queries, everything in real time
- Browse all runs later — each one is saved as a trajectory file

You don't even need to load context first — just type a query directly and RLM will use your question as the context:

```bash
> what are the top 5 sorting algorithms and their time complexities?
```

Load context and ask in one shot:

```bash
> @path/to/file.txt what are the main functions here?
```

Or set context first, then ask multiple questions:

```bash
> /file big-codebase.py
> what does the main class do?
> find all the error handling patterns
```

**Ctrl+C** stops the current query. **Ctrl+C twice** exits.

Type `/help` inside the terminal for all commands.

### Single-Shot Mode

For scripting or one-off queries:

```bash
rlm run --file large-file.txt "List all classes and their methods"
rlm run --url https://example.com/data.txt "Summarize this"
cat data.txt | rlm run --stdin "Count the errors"
```

Answer goes to stdout, progress to stderr — pipe-friendly.

### Trajectory Viewer

```bash
rlm viewer
```

Browse saved runs in a TUI. Navigate iterations, inspect the code and output at each step, drill into individual sub-queries.

## Benchmarks

Compare direct LLM vs RLM on the same query from standard long-context datasets. This runs both approaches side-by-side so you can see the difference.

### Available Benchmarks

| Benchmark | Dataset | What it tests |
|-----------|---------|---------------|
| `oolong` | [Oolong Synth](https://huggingface.co/datasets/oolongbench/oolong-synth) | Synthetic long-context tasks: timeline ordering, user tracking, counting |
| `longbench` | [LongBench NarrativeQA](https://huggingface.co/datasets/THUDM/LongBench) | Reading comprehension over long narratives |

### Running

```bash
rlm benchmark oolong      # default: index 4743 (14.7MB timeline+subset counting)
rlm benchmark longbench   # default: index 182 (205KB multi-hop narrative reasoning)

# Pick a specific example from the dataset
rlm benchmark oolong --idx 10
rlm benchmark longbench --idx 50
```

Python dependencies are auto-installed into a `.venv` on first run.

Each run:
1. Loads one example from the dataset
2. Runs direct LLM (single prompt, no RLM)
3. Runs RLM (iterative code execution with sub-queries)
4. Prints both answers side-by-side with the expected answer, timing, and stats
5. Saves a trajectory file for later inspection with `rlm viewer`

## How It Works

1. Your full context is loaded into a persistent Python REPL as a `context` variable
2. The LLM gets metadata about the context (size, preview of first/last lines) plus your query
3. It writes Python code that can slice `context`, call `llm_query(chunk, instruction)` to ask sub-questions about pieces, and call `FINAL(answer)` when it has the answer
4. Code runs, output is captured and fed back for the next iteration
5. Loop continues until `FINAL()` is called or max iterations are reached

For large documents, the model typically chunks the text and runs parallel sub-queries with `async_llm_query()` + `asyncio.gather()`, then aggregates the results.
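As an illustration (not part of the package), the chunk-and-gather pattern above can be sketched in miniature. Here `async_llm_query` is a stub so the sketch runs standalone — in a real session it is provided by `runtime.py` and calls the configured model — and the chunk size is arbitrary:

```python
import asyncio

# Stub standing in for the runtime's async_llm_query(); the real one
# sends the chunk plus instruction to the model.
async def async_llm_query(chunk: str, instruction: str) -> str:
    return f"summary of {len(chunk)} chars"

def chunk_text(context: str, size: int) -> list[str]:
    # Fixed-size slices of the `context` variable the REPL exposes.
    return [context[i:i + size] for i in range(0, len(context), size)]

async def map_chunks(context: str) -> list[str]:
    # Fan out one sub-query per chunk, then gather the partial answers.
    chunks = chunk_text(context, 50_000)
    return await asyncio.gather(
        *(async_llm_query(c, "Summarize this chunk") for c in chunks)
    )

results = asyncio.run(map_chunks("x" * 120_000))
# The model would then aggregate `results` and call FINAL(answer).
```

In a real run the model writes code like this inside the persistent REPL, where `context`, `async_llm_query`, and `FINAL` are already defined for it.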

## Configuration

Edit `rlm_config.yaml` in the project root:

```yaml
max_iterations: 20      # Max iterations before giving up
max_depth: 3            # Max recursive sub-agent depth
max_sub_queries: 50     # Max total sub-queries
truncate_len: 5000      # Truncate REPL output beyond this
metadata_preview_lines: 20
```

## Project Structure

```
src/
  main.ts          CLI entry point and command router
  interactive.ts   Interactive terminal REPL
  rlm.ts           Core RLM loop
  repl.ts          Python REPL subprocess manager
  runtime.py       Python runtime (FINAL, llm_query, async_llm_query)
  cli.ts           Single-shot CLI mode
  viewer.ts        Trajectory viewer TUI
  config.ts        Config loader
  env.ts           .env file loader
  benchmarks/
    oolong_synth.ts           Oolong Synth benchmark
    longbench_narrativeqa.ts  LongBench NarrativeQA benchmark
    requirements.txt          Python deps for benchmarks
bin/
  rlm.mjs          Global CLI shim
```

## Requirements

- Node.js >= 20
- Python 3
- An API key (Anthropic, OpenAI, or OpenRouter)

## License

MIT
package/bin/rlm.mjs
ADDED
@@ -0,0 +1,45 @@
#!/usr/bin/env node

/**
 * rlm — Recursive Language Model CLI
 *
 * This shim boots the CLI entry point. It tries the compiled dist first,
 * then falls back to tsx for development.
 */

import { fileURLToPath } from "node:url";
import { dirname, join } from "node:path";
import { existsSync } from "node:fs";

const __dirname = dirname(fileURLToPath(import.meta.url));
const distEntry = join(__dirname, "..", "dist", "main.js");

if (existsSync(distEntry)) {
  // Production: use compiled JS
  await import(distEntry);
} else {
  // Development: use tsx to run TypeScript directly
  const srcEntry = join(__dirname, "..", "src", "main.ts");
  const { register } = await import("node:module");

  // Try to register tsx loader, then import
  try {
    const tsxPath = join(__dirname, "..", "node_modules", "tsx", "dist", "esm", "index.mjs");
    if (existsSync(tsxPath)) {
      register(tsxPath);
    }
    await import(srcEntry);
  } catch {
    // Fallback: spawn tsx as a child process
    const { spawn } = await import("node:child_process");
    const tsxBin = join(__dirname, "..", "node_modules", ".bin", "tsx");
    const child = spawn(tsxBin, [srcEntry, ...process.argv.slice(2)], {
      stdio: "inherit",
    });
    child.on("exit", (code) => process.exit(code ?? 1));
    child.on("error", (err) => {
      console.error(`Failed to start rlm: ${err.message}`);
      process.exit(1);
    });
  }
}
package/dist/cli.d.ts
ADDED
@@ -0,0 +1,15 @@
#!/usr/bin/env tsx
/**
 * Standalone RLM CLI — run Recursive Language Model queries from the terminal.
 *
 * Usage:
 *   npx tsx src/cli.ts --model claude-sonnet-4-20250514 --file large-file.txt "What are the main themes?"
 *   npx tsx src/cli.ts --model claude-sonnet-4-20250514 --url https://example.com/big.txt "Summarize this"
 *   cat data.txt | npx tsx src/cli.ts --model claude-sonnet-4-20250514 --stdin "Count the errors"
 *
 * Environment:
 *   ANTHROPIC_API_KEY — required for Anthropic models
 *   OPENAI_API_KEY — required for OpenAI models
 *   (etc. per @mariozechner/pi-ai provider)
 */
import "./env.js";
package/dist/cli.js
ADDED
@@ -0,0 +1,185 @@
#!/usr/bin/env tsx
/**
 * Standalone RLM CLI — run Recursive Language Model queries from the terminal.
 *
 * Usage:
 *   npx tsx src/cli.ts --model claude-sonnet-4-20250514 --file large-file.txt "What are the main themes?"
 *   npx tsx src/cli.ts --model claude-sonnet-4-20250514 --url https://example.com/big.txt "Summarize this"
 *   cat data.txt | npx tsx src/cli.ts --model claude-sonnet-4-20250514 --stdin "Count the errors"
 *
 * Environment:
 *   ANTHROPIC_API_KEY — required for Anthropic models
 *   OPENAI_API_KEY — required for OpenAI models
 *   (etc. per @mariozechner/pi-ai provider)
 */
import "./env.js";
import * as fs from "node:fs";
// Dynamic imports — ensures env.js has set process.env BEFORE pi-ai loads
const { getModels, getProviders } = await import("@mariozechner/pi-ai");
const { PythonRepl } = await import("./repl.js");
const { runRlmLoop } = await import("./rlm.js");
// ── Arg parsing ─────────────────────────────────────────────────────────────
function usage() {
    console.error(`
rlm-cli — Recursive Language Model CLI (arXiv:2512.24601)

USAGE
  rlm run [OPTIONS] "<query>"

OPTIONS
  --model <id>   Model ID (default: RLM_MODEL from .env)
  --file <path>  Read context from a file
  --url <url>    Fetch context from a URL
  --stdin        Read context from stdin (pipe data in)
  --verbose      Show iteration progress

EXAMPLES
  rlm run --file big.txt "List all classes"
  curl -s https://example.com/large.py | rlm run --stdin "Summarize"
  rlm run --url https://raw.githubusercontent.com/.../typing.py "Count public classes"
`.trim());
    process.exit(1);
}
function parseArgs() {
    const args = process.argv.slice(2);
    let modelId;
    let file;
    let url;
    let useStdin = false;
    let verbose = false;
    const positional = [];
    for (let i = 0; i < args.length; i++) {
        const arg = args[i];
        if (arg === "--model" && i + 1 < args.length) {
            modelId = args[++i];
        }
        else if (arg === "--file" && i + 1 < args.length) {
            file = args[++i];
        }
        else if (arg === "--url" && i + 1 < args.length) {
            url = args[++i];
        }
        else if (arg === "--stdin") {
            useStdin = true;
        }
        else if (arg === "--verbose") {
            verbose = true;
        }
        else if (arg === "--help" || arg === "-h") {
            usage();
        }
        else if (!arg.startsWith("--")) {
            positional.push(arg);
        }
        else {
            console.error(`Unknown option: ${arg}`);
            usage();
        }
    }
    if (!modelId) {
        modelId = process.env.RLM_MODEL || "claude-sonnet-4-5-20250929";
    }
    if (positional.length === 0) {
        console.error("Error: query argument is required");
        usage();
    }
    const query = positional.join(" ");
    if (!file && !url && !useStdin) {
        console.error("Error: one of --file, --url, or --stdin is required");
        usage();
    }
    return { modelId, file, url, useStdin, verbose, query };
}
// ── Helpers ─────────────────────────────────────────────────────────────────
async function readStdin() {
    const chunks = [];
    for await (const chunk of process.stdin) {
        chunks.push(chunk);
    }
    return Buffer.concat(chunks).toString("utf-8");
}
async function fetchUrl(url) {
    const resp = await fetch(url);
    if (!resp.ok) {
        throw new Error(`Failed to fetch ${url}: ${resp.status} ${resp.statusText}`);
    }
    return resp.text();
}
// ── Main ────────────────────────────────────────────────────────────────────
async function main() {
    const args = parseArgs();
    // Resolve model by scanning all providers
    let model;
    const allModelIds = [];
    for (const provider of getProviders()) {
        const providerModels = getModels(provider);
        for (const m of providerModels) {
            allModelIds.push(m.id);
            if (m.id === args.modelId) {
                model = m;
            }
        }
    }
    if (!model) {
        console.error(`Error: unknown model "${args.modelId}"`);
        console.error(`Available models: ${allModelIds.join(", ")}`);
        process.exit(1);
    }
    // Load context
    let context;
    if (args.file) {
        console.error(`Reading context from file: ${args.file}`);
        context = fs.readFileSync(args.file, "utf-8");
    }
    else if (args.url) {
        console.error(`Fetching context from URL: ${args.url}`);
        context = await fetchUrl(args.url);
    }
    else {
        console.error("Reading context from stdin...");
        context = await readStdin();
    }
    console.error(`Context loaded: ${context.length.toLocaleString()} characters`);
    console.error(`Model: ${model.id}`);
    console.error(`Query: ${args.query}`);
    console.error("---");
    // Start REPL
    const repl = new PythonRepl();
    const ac = new AbortController();
    process.on("SIGINT", () => {
        console.error("\nAborting...");
        ac.abort();
    });
    try {
        await repl.start(ac.signal);
        const startTime = Date.now();
        const result = await runRlmLoop({
            context,
            query: args.query,
            model,
            repl,
            signal: ac.signal,
            onProgress: args.verbose
                ? (info) => {
                    const elapsed = ((Date.now() - startTime) / 1000).toFixed(1);
                    console.error(`[${elapsed}s] Iteration ${info.iteration}/${info.maxIterations} | ` +
                        `Sub-queries: ${info.subQueries} | Phase: ${info.phase}`);
                }
                : undefined,
        });
        const elapsed = ((Date.now() - startTime) / 1000).toFixed(1);
        console.error("---");
        console.error(`Completed in ${elapsed}s | ${result.iterations} iterations | ${result.totalSubQueries} sub-queries | ${result.completed ? "success" : "incomplete"}`);
        console.error("---");
        // Write the answer to stdout (not stderr) so it can be piped
        console.log(result.answer);
    }
    finally {
        repl.shutdown();
    }
}
main().catch((err) => {
    console.error("Fatal error:", err);
    process.exit(1);
});
//# sourceMappingURL=cli.js.map
package/dist/config.d.ts
ADDED
@@ -0,0 +1,13 @@
/**
 * Configuration loader for RLM CLI.
 *
 * Reads rlm_config.yaml from the project root (or cwd), with sensible defaults.
 */
export interface RlmConfig {
    max_iterations: number;
    max_depth: number;
    max_sub_queries: number;
    truncate_len: number;
    metadata_preview_lines: number;
}
export declare function loadConfig(): RlmConfig;
package/dist/config.js
ADDED
@@ -0,0 +1,73 @@
/**
 * Configuration loader for RLM CLI.
 *
 * Reads rlm_config.yaml from the project root (or cwd), with sensible defaults.
 */
import * as fs from "node:fs";
import * as path from "node:path";
const DEFAULTS = {
    max_iterations: 20,
    max_depth: 3,
    max_sub_queries: 50,
    truncate_len: 5000,
    metadata_preview_lines: 20,
};
function parseYaml(text) {
    // Minimal YAML parser for flat key:value files (no nested objects, no arrays)
    const result = {};
    for (const line of text.split("\n")) {
        const trimmed = line.trim();
        if (!trimmed || trimmed.startsWith("#"))
            continue;
        const colonIdx = trimmed.indexOf(":");
        if (colonIdx === -1)
            continue;
        const key = trimmed.slice(0, colonIdx).trim();
        const rawVal = trimmed.slice(colonIdx + 1).trim();
        // Strip inline comments
        const val = rawVal.replace(/\s+#.*$/, "");
        // Parse number
        const num = Number(val);
        if (!isNaN(num) && val !== "") {
            result[key] = num;
        }
        else if (val === "true") {
            result[key] = true;
        }
        else if (val === "false") {
            result[key] = false;
        }
        else {
            // Strip quotes
            result[key] = val.replace(/^["']|["']$/g, "");
        }
    }
    return result;
}
export function loadConfig() {
    // Search order: cwd, then package root
    const candidates = [
        path.resolve(process.cwd(), "rlm_config.yaml"),
        path.resolve(new URL(".", import.meta.url).pathname, "..", "rlm_config.yaml"),
    ];
    for (const configPath of candidates) {
        if (fs.existsSync(configPath)) {
            try {
                const raw = fs.readFileSync(configPath, "utf-8");
                const parsed = parseYaml(raw);
                return {
                    max_iterations: typeof parsed.max_iterations === "number" ? parsed.max_iterations : DEFAULTS.max_iterations,
                    max_depth: typeof parsed.max_depth === "number" ? parsed.max_depth : DEFAULTS.max_depth,
                    max_sub_queries: typeof parsed.max_sub_queries === "number" ? parsed.max_sub_queries : DEFAULTS.max_sub_queries,
                    truncate_len: typeof parsed.truncate_len === "number" ? parsed.truncate_len : DEFAULTS.truncate_len,
                    metadata_preview_lines: typeof parsed.metadata_preview_lines === "number" ? parsed.metadata_preview_lines : DEFAULTS.metadata_preview_lines,
                };
            }
            catch {
                // Fall through to defaults
            }
        }
    }
    return { ...DEFAULTS };
}
//# sourceMappingURL=config.js.map
package/dist/env.d.ts
ADDED
package/dist/env.js
ADDED
@@ -0,0 +1,34 @@
/**
 * Load .env file into process.env.
 * Must be imported BEFORE any module that reads env vars (e.g. pi-ai).
 *
 * Supports:
 * - ANTHROPIC_API_KEY
 * - RLM_MODEL (model name, e.g. claude-sonnet-4-5-20250929)
 */
import * as fs from "node:fs";
import * as path from "node:path";
// Load .env file from the package root (not CWD, which could be untrusted)
const __dir = path.dirname(new URL(import.meta.url).pathname);
const envPath = path.resolve(__dir, "..", ".env");
if (fs.existsSync(envPath)) {
    const content = fs.readFileSync(envPath, "utf-8");
    for (const line of content.split("\n")) {
        const trimmed = line.trim();
        if (!trimmed || trimmed.startsWith("#"))
            continue;
        const eqIndex = trimmed.indexOf("=");
        if (eqIndex === -1)
            continue;
        const key = trimmed.slice(0, eqIndex).trim();
        const value = trimmed.slice(eqIndex + 1).trim();
        if (key) {
            process.env[key] = value;
        }
    }
}
// Default model
if (!process.env.RLM_MODEL) {
    process.env.RLM_MODEL = "claude-sonnet-4-5-20250929";
}
//# sourceMappingURL=env.js.map
package/dist/interactive.d.ts
ADDED
@@ -0,0 +1,10 @@
#!/usr/bin/env tsx
/**
 * RLM Interactive — Production-quality interactive terminal REPL.
 *
 * Launch with `rlm` and get a persistent session where you can:
 * - Set context (file/URL/paste)
 * - Type queries and watch the RLM loop run with smooth, real-time output
 * - Browse previous trajectories
 */
import "./env.js";