@cloudwarriors-ai/rlm 0.1.6 → 0.1.7

Files changed (2)
  1. package/README.md +122 -154
  2. package/package.json +2 -2
package/README.md CHANGED
@@ -1,208 +1,176 @@
  # @cloudwarriors-ai/rlm
 
- Recursive Language Model - Process massive contexts (10M+ tokens) through recursive LLM decomposition.
+ Process massive amounts of text with AI - way more than fits in a normal context window.
 
- ## Overview
+ ## What Problem Does This Solve?
 
- RLM enables processing of contexts far larger than typical LLM context windows by:
+ LLMs have context limits. Claude can handle ~200K tokens, GPT-4 around 128K. But what if you need to analyze:
+ - An entire codebase (500+ files)
+ - Years of log files
+ - A collection of documents
 
- 1. Treating content as Python REPL variables
- 2. Having the LLM write code to decompose and process chunks
- 3. Recursively calling itself with subsets of context
- 4. Aggregating results into a final answer
+ You can't just paste it all in. **RLM solves this.**
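[Editor's note on the added README text: a rough rule of thumb, which is an assumption of this note and not part of RLM, is ~4 characters per token for English text. That heuristic is enough to see why a sizable dump cannot fit in one window:]

```typescript
// Rough heuristic: ~4 characters per token for English text.
// This is an approximation for back-of-envelope sizing only.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// A 10 MB codebase dump (bytes ~ characters for ASCII source):
const estimated = estimateTokens('x'.repeat(10 * 1024 * 1024));
console.log(estimated);            // 2621440 - roughly 2.6M tokens
console.log(estimated > 200_000);  // true: far beyond a 200K window
```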
+
+ ## How It Works
+
+ Instead of trying to cram everything into one prompt, RLM:
+
+ 1. Gives your data to an LLM as a Python variable
+ 2. Asks the LLM to write code to analyze it
+ 3. Runs that code in a safe sandbox
+ 4. If needed, recursively processes chunks
+
+ The LLM becomes a programmer that writes its own analysis tools.
+
+ ```
+ Your huge context (10MB of code)
+              ↓
+ LLM writes Python to
+ analyze and chunk it
+              ↓
+ Sandbox runs code
+              ↓
+ Answer
+ ```
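[Editor's note: the four steps above can be sketched as a plain recursive function. Everything below is a conceptual illustration - `WINDOW`, `chunk`, and `answerDirectly` are hypothetical stand-ins, not the package's internals:]

```typescript
// Conceptual sketch of recursive decomposition. All names here are
// hypothetical stand-ins, not RLM's actual API or internals.
const WINDOW = 200_000; // pretend context limit, in characters

// Split oversized text into fixed-size pieces.
function chunk(text: string, size: number): string[] {
  const parts: string[] = [];
  for (let i = 0; i < text.length; i += size) {
    parts.push(text.slice(i, i + size));
  }
  return parts;
}

// Stand-in for a single LLM call on a context that fits.
function answerDirectly(text: string, query: string): string {
  return `answer(${query}, ${text.length} chars)`;
}

function recursiveQuery(context: string, query: string, depth = 0): string {
  if (context.length <= WINDOW || depth >= 5) {
    return answerDirectly(context, query); // fits: one call
  }
  // Too big: split, recurse on each piece, then aggregate.
  const partials = chunk(context, WINDOW).map(c =>
    recursiveQuery(c, query, depth + 1)
  );
  return answerDirectly(partials.join('\n'), query); // combine step
}
```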
 
  ## Installation
 
  ```bash
- # From GitHub Packages (requires auth)
  npm install @cloudwarriors-ai/rlm
-
- # Or link locally during development
- npm link /path/to/rlm
  ```
 
- ## Quick Start
+ **Requirement:** You need an [OpenRouter](https://openrouter.ai) API key.
+
+ ## Basic Usage
 
  ```typescript
  import { createRLM } from '@cloudwarriors-ai/rlm';
  import fs from 'node:fs';
 
+ // Create an RLM instance
  const rlm = createRLM({
    apiKey: process.env.OPENROUTER_API_KEY,
  });
 
+ // Query with your massive context
  const result = await rlm.query(
-   fs.readFileSync('./massive-codebase.txt', 'utf-8'),
-   'Find all security vulnerabilities'
+   fs.readFileSync('./huge-codebase.txt', 'utf-8'),
+   'Find all the security vulnerabilities'
  );
 
- console.log(result.answer);
- console.log(`Cost: $${result.usage.costUsd.toFixed(4)}`);
- ```
-
- ## CLI Usage
-
- ```bash
- # Install globally
- npm install -g @cloudwarriors-ai/rlm
-
- # Initialize config
- rlm config init
- rlm config set apiKey YOUR_OPENROUTER_API_KEY
-
- # Run a query
- rlm query --file ./large-file.txt --query "Summarize this document"
-
- # With options
- rlm query \
-   --file ./codebase.txt \
-   --query "Find security issues" \
-   --max-depth 5 \
-   --max-cost 5.00 \
-   --verbose
- ```
-
- ## Configuration
-
- ### Environment Variables
-
- ```bash
- OPENROUTER_API_KEY=sk-or-...         # Required
- RLM_MODEL=anthropic/claude-sonnet-4  # Default model
- RLM_MAX_DEPTH=5                      # Max recursion depth
- RLM_MAX_COST_USD=10                  # Max cost per session
- RLM_MAX_TOKENS=1000000               # Max total tokens
- RLM_TIMEOUT_SECONDS=300              # Session timeout
- RLM_SANDBOX_MODE=strict              # strict|permissive|disabled
- ```
-
- ### Config File
-
- Create `~/.rlm/config.json`:
-
- ```json
- {
-   "apiKey": "sk-or-...",
-   "model": "anthropic/claude-sonnet-4",
-   "maxRecursionDepth": 5,
-   "maxCostUsd": 10,
-   "maxTokens": 1000000,
-   "timeoutMs": 300000
+ // Get the answer
+ if (result.success) {
+   console.log(result.answer);
+   console.log(`Cost: $${result.usage.costUsd.toFixed(4)}`);
+ } else {
+   console.error('Failed:', result.error);
  }
  ```
 
- ## API Reference
-
- ### createRLM(options)
-
- Create an RLM service instance.
+ ## Configuration Options
 
  ```typescript
  const rlm = createRLM({
-   apiKey: 'sk-or-...',
+   // Required
+   apiKey: process.env.OPENROUTER_API_KEY,
+
+   // Optional - defaults shown
    model: 'anthropic/claude-sonnet-4',
+
    config: {
-     maxRecursionDepth: 5,
-     maxCostUsd: 10,
+     maxRecursionDepth: 5,  // How many levels deep it can go
+     maxCostUsd: 10.0,      // Stop if cost exceeds this
+     maxTokens: 1000000,    // Total token budget
+     timeoutMs: 300000,     // 5 minute timeout
    },
  });
  ```
 
- ### rlm.query(context, query, config?)
-
- Execute an RLM query.
+ ## What You Get Back
 
  ```typescript
- const result = await rlm.query(
-   context,                  // string or ContextInput
-   'Your question',
-   { maxRecursionDepth: 3 }  // optional overrides
- );
+ const result = await rlm.query(context, question);
+
+ result.sessionId  // Unique ID for this query
+ result.success    // true if it worked
+ result.answer     // The LLM's answer
+ result.error      // Error message if failed
+ result.usage      // Token counts and cost
+ result.trace      // Step-by-step execution log
  ```
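[Editor's note: since each result carries its own usage, totaling spend across a batch of queries is straightforward. A small sketch - the `ResultLike` shape below is trimmed to the fields this README shows, and `totalCost` is a hypothetical helper, not part of the package:]

```typescript
// Sketch: accumulate spend across several queries using the
// documented result.usage.costUsd field. The interfaces here are
// trimmed to what this README shows, not the package's full types.
interface UsageLike { costUsd: number }
interface ResultLike { success: boolean; usage: UsageLike }

function totalCost(results: ResultLike[]): number {
  return results.reduce((sum, r) => sum + r.usage.costUsd, 0);
}

const session: ResultLike[] = [
  { success: true, usage: { costUsd: 0.0421 } },
  { success: true, usage: { costUsd: 0.1175 } },
];
console.log(`Total: $${totalCost(session).toFixed(4)}`); // Total: $0.1596
```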
 
- Returns `RLMResult`:
+ ## Environment Variables
 
- ```typescript
- interface RLMResult {
-   sessionId: string;
-   answer: string;
-   success: boolean;
-   error?: string;
-   trace: ExecutionTrace;
-   usage: ResourceUsage;
- }
- ```
+ You can also configure via environment:
 
- ## Architecture
-
- ```
- ┌─────────────────────────────────────────────────────────────────┐
- │                           CLI Layer                             │
- │                          (src/cli/)                             │
- └─────────────────────────────────────────────────────────────────┘
-                                ↓
- ┌─────────────────────────────────────────────────────────────────┐
- │                       Application Layer                         │
- │                      (src/application/)                         │
- │                RLMService - orchestrates use cases              │
- └─────────────────────────────────────────────────────────────────┘
-                                ↓
- ┌─────────────────────────────────────────────────────────────────┐
- │                         Domain Layer                            │
- │                        (src/domain/)                            │
- │              Pure business logic, types, interfaces             │
- └─────────────────────────────────────────────────────────────────┘
-                                ↓
- ┌─────────────────────────────────────────────────────────────────┐
- │                      Infrastructure Layer                       │
- │                    (src/infrastructure/)                        │
- │            LLM providers, Python executor, persistence          │
- └─────────────────────────────────────────────────────────────────┘
+ ```bash
+ OPENROUTER_API_KEY=sk-or-...  # Required
+ RLM_MODEL=anthropic/claude-sonnet-4
+ RLM_MAX_DEPTH=5
+ RLM_MAX_COST_USD=10
+ RLM_TIMEOUT_SECONDS=300
  ```
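[Editor's note: a sketch of how the variables above could map onto the config object - the env names come from the list above and the defaults from the Configuration Options section, but the mapping and precedence are assumptions here, not the package's actual loader:]

```typescript
// Sketch: fold the documented environment variables into a config
// object. Defaults mirror the Configuration Options section; the
// exact precedence inside the package is an assumption.
type Env = Record<string, string | undefined>;

function configFromEnv(env: Env) {
  return {
    apiKey: env.OPENROUTER_API_KEY,
    model: env.RLM_MODEL ?? 'anthropic/claude-sonnet-4',
    maxRecursionDepth: Number(env.RLM_MAX_DEPTH ?? 5),
    maxCostUsd: Number(env.RLM_MAX_COST_USD ?? 10),
    timeoutMs: Number(env.RLM_TIMEOUT_SECONDS ?? 300) * 1000, // seconds -> ms
  };
}

const cfg = configFromEnv({ OPENROUTER_API_KEY: 'sk-or-...', RLM_MAX_DEPTH: '3' });
console.log(cfg.maxRecursionDepth); // 3
console.log(cfg.timeoutMs);         // 300000
```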
 
- ## Integration with Hermes
+ ## Real-World Example
+
+ Analyzing an entire codebase:
 
  ```typescript
- // In Hermes project
  import { createRLM } from '@cloudwarriors-ai/rlm';
- import { db } from './db';
- import { eventBus } from './events';
-
- const rlm = createRLM({
-   apiKey: process.env.OPENROUTER_API_KEY,
-   sessionStore: new DrizzleSessionStore(db),
-   eventBus,
- });
-
- // Use in API routes
- app.post('/api/rlm/query', async (req, res) => {
-   const result = await rlm.query(req.body.context, req.body.query);
-   res.json(result);
- });
+ import { readdir, readFile } from 'node:fs/promises';
+ import { join } from 'node:path';
+
+ async function analyzeCodebase(dir: string) {
+   // Gather all source files
+   const files = await readdir(dir, { recursive: true });
+   const sourceFiles = files.filter(f => f.endsWith('.ts') || f.endsWith('.js'));
+
+   // Read contents
+   let context = '';
+   for (const file of sourceFiles) {
+     const content = await readFile(join(dir, file), 'utf-8');
+     context += `\n### ${file}\n\`\`\`\n${content}\n\`\`\`\n`;
+   }
+
+   // Analyze with RLM
+   const rlm = createRLM({
+     apiKey: process.env.OPENROUTER_API_KEY,
+   });
+
+   const result = await rlm.query(
+     context,
+     'Analyze this codebase. What are the main components? Any code smells or issues?'
+   );
+
+   return result.answer;
+ }
  ```
 
- ## Development
+ ## Error Handling
 
- ```bash
- # Install dependencies
- npm install
-
- # Build
- npm run build
-
- # Run tests
- npm test
-
- # Type check
- npm run lint
-
- # Watch mode
- npm run dev
+ ```typescript
+ import { createRLM, LimitExceededError, LLMError } from '@cloudwarriors-ai/rlm';
+
+ try {
+   const result = await rlm.query(context, query);
+
+   if (!result.success) {
+     console.error('Query failed:', result.error);
+   }
+ } catch (error) {
+   if (error instanceof LimitExceededError) {
+     console.error('Hit a limit:', error.message);
+   } else if (error instanceof LLMError) {
+     console.error('LLM error:', error.message);
+   } else {
+     throw error;
+   }
+ }
  ```
 
  ## License
 
- UNLICENSED - Private package for CloudWarriors AI
+ MIT
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@cloudwarriors-ai/rlm",
-   "version": "0.1.6",
+   "version": "0.1.7",
    "description": "Recursive Language Model - Process massive contexts through recursive LLM decomposition",
    "type": "module",
    "main": "./dist/index.js",
@@ -45,7 +45,7 @@
      "ai"
    ],
    "author": "CloudWarriors AI",
-   "license": "UNLICENSED",
+   "license": "MIT",
    "publishConfig": {
      "access": "public"
    },