jasper-context-compactor 0.2.2 → 0.3.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (3)
  1. package/README.md +80 -16
  2. package/cli.js +111 -55
  3. package/package.json +4 -2
package/README.md CHANGED
@@ -1,27 +1,65 @@
- # Context Compactor
+ # Jasper Context Compactor

  > Token-based context compaction for OpenClaw with local models (MLX, llama.cpp, Ollama)

- ## Why?
+ ## The Problem

- Local LLM servers don't report context overflow errors like cloud APIs do. OpenClaw's built-in compaction relies on these errors to trigger. This plugin estimates tokens client-side and proactively summarizes older messages before hitting the model's limit.
+ Local LLMs don't report context overflow errors like cloud APIs do. When context gets too long, they either:
+ - Silently truncate your conversation
+ - Return garbage output
+ - Crash without explanation
+
+ OpenClaw's built-in compaction relies on error signals that local models don't provide.
+
+ ## The Solution
+
+ Jasper Context Compactor estimates tokens client-side and proactively summarizes older messages before hitting your model's limit. No more broken conversations.

  ## Quick Start

  ```bash
- # One command setup (installs + configures)
  npx jasper-context-compactor setup
+ ```

- # Restart gateway
+ **The setup will:**
+
+ 1. ✅ **Back up your config** — Saves `openclaw.json` to `~/.openclaw/backups/` with restore instructions
+ 2. ✅ **Ask permission** — Won't read your config without consent
+ 3. ✅ **Detect your model** — Suggests appropriate token limits based on your setup
+ 4. ✅ **Let you customize** — Enter your own values if auto-detection doesn't match
+ 5. ✅ **Update config safely** — Adds the plugin with your chosen settings
+
+ Then restart OpenClaw:
+ ```bash
  openclaw gateway restart
  ```

- That's it! The setup command:
- - Copies plugin files to `~/.openclaw/extensions/context-compactor/`
- - Adds plugin config to `openclaw.json` with sensible defaults
+ ## Privacy
+
+ 🔒 **Everything runs 100% locally.** Nothing is sent to external servers.
+
+ The setup only reads your local `openclaw.json` file (with your permission) to detect your model and suggest appropriate limits.
+
+ ## How It Works
+
+ 1. Before each message, estimates total context tokens (chars ÷ 4)
+ 2. If over `maxTokens`, splits messages into "old" and "recent"
+ 3. Summarizes old messages using your session model
+ 4. Injects summary as context — conversation continues seamlessly
+
+ ## Commands
+
+ After setup, use these in chat:
+
+ | Command | Description |
+ |---------|-------------|
+ | `/context-stats` | Show current token usage and limits |
+ | `/compact-now` | Clear cache and force fresh compaction |

  ## Configuration

+ The setup configures these values in `~/.openclaw/openclaw.json`:
+
  ```json
  {
  "plugins": {
@@ -40,17 +78,43 @@ That's it! The setup command:
  }
  ```

- ## Commands
+ | Option | Description |
+ |--------|-------------|
+ | `maxTokens` | Trigger compaction above this (set to ~80% of your model's context) |
+ | `keepRecentTokens` | Recent context to preserve (default: 25% of max) |
+ | `summaryMaxTokens` | Max tokens for the summary (default: 12.5% of max) |
+ | `charsPerToken` | Token estimation ratio (4 works for English) |

- - `/context-stats` Show current token usage
- - `/compact-now` — Force fresh compaction on next message
+ ## Restoring Your Config

- ## How It Works
+ Setup always backs up first. To restore:
+
+ ```bash
+ # List backups
+ ls ~/.openclaw/backups/
+
+ # Restore (use the timestamp from your backup)
+ cp ~/.openclaw/backups/openclaw-2026-02-11T08-00-00-000Z.json ~/.openclaw/openclaw.json
+
+ # Restart
+ openclaw gateway restart
+ ```
+
+ ## Uninstall
+
+ ```bash
+ # Remove plugin files
+ rm -rf ~/.openclaw/extensions/context-compactor
+
+ # Remove from config (edit openclaw.json and delete the context-compactor entry)
+ # Or restore from backup
+ ```
+
+ ## Links

- 1. Before each agent turn, estimates total context tokens
- 2. If over `maxTokens`, splits messages into "old" and "recent"
- 3. Summarizes old messages using the session model
- 4. Injects summary + recent messages as context
+ - **npm:** https://www.npmjs.com/package/jasper-context-compactor
+ - **GitHub:** https://github.com/E-x-O-Entertainment-Studios-Inc/openclaw-context-compactor
+ - **ClawHub:** https://clawhub.ai/skills/context-compactor

  ## License
 
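The "How It Works" list added to the README (estimate tokens as chars ÷ 4, then split messages into "old" and "recent") can be sketched in a few lines. This is an illustration of the approach only, not the plugin's actual code; the message shape (`{ content }`) and both function names are assumptions made for the sketch:

```javascript
// Rough token estimate: total characters divided by charsPerToken (default 4).
function estimateTokens(messages, charsPerToken = 4) {
  const chars = messages.reduce((n, m) => n + m.content.length, 0);
  return Math.ceil(chars / charsPerToken);
}

// If the estimate exceeds maxTokens, walk backwards from the newest message,
// keeping messages while the "recent" budget lasts; everything older becomes
// a candidate for summarization.
function splitForCompaction(messages, maxTokens, keepRecentTokens) {
  if (estimateTokens(messages) <= maxTokens) {
    return { old: [], recent: messages };
  }
  let budget = keepRecentTokens;
  let cut = messages.length;
  while (cut > 0) {
    const cost = estimateTokens([messages[cut - 1]]);
    if (cost > budget) break;
    budget -= cost;
    cut -= 1;
  }
  return { old: messages.slice(0, cut), recent: messages.slice(cut) };
}
```

The real plugin then replaces `old` with a model-generated summary before the next turn, which this sketch omits.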
package/cli.js CHANGED
@@ -1,6 +1,6 @@
  #!/usr/bin/env node
  /**
- * Context Compactor CLI
+ * Jasper Context Compactor CLI
  * Setup script with interactive token limit configuration
  */

@@ -10,6 +10,7 @@ const os = require('os');
  const readline = require('readline');

  const OPENCLAW_CONFIG = path.join(os.homedir(), '.openclaw', 'openclaw.json');
+ const OPENCLAW_BACKUPS = path.join(os.homedir(), '.openclaw', 'backups');
  const OPENCLAW_EXTENSIONS = path.join(os.homedir(), '.openclaw', 'extensions', 'context-compactor');
  const OLD_EXTENSIONS = path.join(os.homedir(), '.openclaw', 'extensions', 'openclaw-context-compactor');

@@ -35,31 +36,35 @@ function prompt(question) {
  });
  }

+ function backupConfig() {
+ if (!fs.existsSync(OPENCLAW_CONFIG)) return null;
+
+ fs.mkdirSync(OPENCLAW_BACKUPS, { recursive: true });
+ const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
+ const backupPath = path.join(OPENCLAW_BACKUPS, `openclaw-${timestamp}.json`);
+
+ fs.copyFileSync(OPENCLAW_CONFIG, backupPath);
+ return backupPath;
+ }
+
  async function detectModelContextWindow(config) {
- // Try to detect from OpenClaw config
  const model = config?.agents?.defaults?.model?.primary;
-
  if (!model) return null;

- // Common context windows (conservative estimates)
  const knownContexts = {
- // Anthropic
  'anthropic/claude-opus': 200000,
  'anthropic/claude-sonnet': 200000,
  'anthropic/claude-haiku': 200000,
- // OpenAI
  'openai/gpt-4': 128000,
  'openai/gpt-4-turbo': 128000,
  'openai/gpt-3.5-turbo': 16000,
- // Local models (common sizes)
- 'mlx': 8000,
- 'ollama': 8000,
- 'llama': 8000,
+ 'mlx': 32000, // Most MLX models support 32K+
+ 'ollama': 32000, // Most Ollama models support 32K+
+ 'llama': 32000,
  'mistral': 32000,
  'qwen': 32000,
  };

- // Check for exact match first
  for (const [pattern, tokens] of Object.entries(knownContexts)) {
  if (model.toLowerCase().includes(pattern.toLowerCase())) {
  return { model, tokens, source: 'detected' };
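The lookup in this hunk is plain substring matching of the model id against a table. A trimmed, standalone sketch (only a few table entries reproduced from the diff; `lookupContextWindow` is a name invented here, not the CLI's function):

```javascript
// Abbreviated copy of the knownContexts table shown in the diff above.
const knownContexts = {
  'anthropic/claude-sonnet': 200000,
  'openai/gpt-4': 128000,
  'ollama': 32000,
  'qwen': 32000,
};

// The first pattern found anywhere in the lowercased model id wins,
// so insertion order matters when patterns could overlap.
function lookupContextWindow(model) {
  for (const [pattern, tokens] of Object.entries(knownContexts)) {
    if (model.toLowerCase().includes(pattern.toLowerCase())) {
      return { model, tokens, source: 'detected' };
    }
  }
  return null;
}
```

Because matching is substring-based, a model id like `ollama/qwen2.5:7b` resolves on the first matching entry in insertion order.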
@@ -70,18 +75,49 @@ async function detectModelContextWindow(config) {
  }
  }

  async function setup() {
- log('Context Compactor — Setup');
- console.log('='.repeat(50));
+ console.log('');
+ log('Jasper Context Compactor — Setup');
+ console.log('='.repeat(55));
+
+ // Explain what we're going to do
+ console.log('');
+ console.log(' This setup will:');
+ console.log('');
+ console.log(' 1. Copy plugin files to ~/.openclaw/extensions/');
+ console.log(' 2. Add plugin config to your openclaw.json');
+ console.log(' 3. Help you configure token limits for your model');
+ console.log('');
+ console.log(' 🔒 Privacy: Everything runs locally. Nothing is sent externally.');
+ console.log(' 📁 Your config will be backed up before any changes.');
+ console.log('');
+
+ const proceed = await prompt(' Continue? (y/n): ');
+ if (proceed.toLowerCase() !== 'y' && proceed.toLowerCase() !== 'yes') {
+ console.log('\n Setup cancelled.\n');
+ process.exit(0);
+ }

  // Check if OpenClaw is installed
  const openclawDir = path.join(os.homedir(), '.openclaw');
  if (!fs.existsSync(openclawDir)) {
+ console.log('');
  error('OpenClaw not detected (~/.openclaw not found)');
- console.log('Install OpenClaw first: https://docs.openclaw.ai');
+ console.log(' Install OpenClaw first: https://docs.openclaw.ai');
  process.exit(1);
  }

- // Copy plugin files to extensions directory
+ // Backup config FIRST
+ console.log('');
+ log('Backing up config...');
+ const backupPath = backupConfig();
+ if (backupPath) {
+ console.log(` ✓ Backup saved: ${backupPath}`);
+ console.log(' → Restore with: cp "' + backupPath + '" ~/.openclaw/openclaw.json');
+ } else {
+ console.log(' ⚠ No existing config to backup');
+ }
+
+ // Copy plugin files
  console.log('');
  log('Installing plugin files...');
  fs.mkdirSync(OPENCLAW_EXTENSIONS, { recursive: true });
@@ -98,7 +134,7 @@ async function setup() {
  }
  }

- // Clean up old package name if it exists
+ // Clean up old package name
  if (fs.existsSync(OLD_EXTENSIONS)) {
  try {
  fs.rmSync(OLD_EXTENSIONS, { recursive: true });
@@ -126,12 +162,10 @@ async function setup() {
  console.log(' To set the right limit, I can check your OpenClaw config');
  console.log(' to see what model you\'re using.');
  console.log('');
- console.log(' 🔒 Privacy: This runs 100% locally. Nothing is sent externally.');
- console.log('');

  const checkConfig = await prompt(' Check your config for model info? (y/n): ');

- let maxTokens = 8000; // Default
+ let maxTokens = 16000; // OpenClaw minimum
  let detectedInfo = null;

  if (checkConfig.toLowerCase() === 'y' || checkConfig.toLowerCase() === 'yes') {
@@ -142,7 +176,6 @@ async function setup() {
  console.log(` ✓ Detected model: ${detectedInfo.model}`);
  console.log(` ✓ Context window: ~${detectedInfo.tokens.toLocaleString()} tokens`);

- // Suggest a safe limit (leave 20% headroom)
  const suggested = Math.floor(detectedInfo.tokens * 0.8);
  console.log(` → Suggested maxTokens: ${suggested.toLocaleString()} (80% of context)`);
  console.log('');
@@ -161,30 +194,48 @@ async function setup() {
  }
  }

- // If we still don't have a good value, ask manually
+ // Manual entry if needed
  if (maxTokens === 8000 && (!detectedInfo || !detectedInfo.tokens)) {
  console.log('');
  console.log(' Common context windows:');
- console.log(' • MLX / llama.cpp (small): 4,000 - 8,000');
- console.log(' • Mistral / Qwen (medium): 32,000');
- console.log(' • Claude / GPT-4 (large): 128,000+');
+ console.log(' • MLX / llama.cpp (small): 4,000 - 8,000');
+ console.log(' • Mistral / Qwen (medium): 32,000');
+ console.log(' • Claude / GPT-4 (large): 128,000+');
+ console.log('');
+ console.log(' ⚠️ Minimum recommended: 16,000 tokens (OpenClaw requirement)');
  console.log('');
  console.log(' Check your model\'s docs or LM Studio/Ollama settings.');
  console.log(' Config location: ~/.openclaw/openclaw.json');
  console.log('');

- const customTokens = await prompt(' Enter maxTokens (default 8000): ');
+ const customTokens = await prompt(' Enter maxTokens (default 16000, minimum 16000): ');
  if (/^\d+$/.test(customTokens)) {
  maxTokens = parseInt(customTokens, 10);
+ } else {
+ maxTokens = 16000;
  }
  }

- // Calculate keepRecentTokens (25% of max)
+ // Enforce minimum
+ const MIN_TOKENS = 16000;
+ if (maxTokens < MIN_TOKENS) {
+ console.log('');
+ console.log(` ⚠️ Warning: ${maxTokens} tokens is below OpenClaw's minimum of ${MIN_TOKENS}.`);
+ console.log(` Increasing to ${MIN_TOKENS} to prevent agent failures.`);
+ console.log('');
+ console.log(' If your model truly has a smaller context window, consider:');
+ console.log(' • Using a larger model (Qwen 7B+ or Mistral 7B+)');
+ console.log(' • Using the cloud fallback for complex tasks');
+ console.log('');
+ maxTokens = MIN_TOKENS;
+ }
+
+ // Calculate derived values
  const keepRecentTokens = Math.floor(maxTokens * 0.25);
  const summaryMaxTokens = Math.floor(maxTokens * 0.125);

  console.log('');
- console.log(` Configuration:`);
+ console.log(' Final configuration:');
  console.log(` maxTokens: ${maxTokens.toLocaleString()}`);
  console.log(` keepRecentTokens: ${keepRecentTokens.toLocaleString()} (25%)`);
  console.log(` summaryMaxTokens: ${summaryMaxTokens.toLocaleString()} (12.5%)`);
@@ -193,11 +244,9 @@ async function setup() {
  console.log('');
  log('Updating OpenClaw config...');

- // Initialize plugins structure if needed
  if (!config.plugins) config.plugins = {};
  if (!config.plugins.entries) config.plugins.entries = {};

- // Add/update plugin config
  config.plugins.entries['context-compactor'] = {
  enabled: true,
  config: {
@@ -208,49 +257,56 @@ async function setup() {
  }
  };

- // Write back with nice formatting
  fs.writeFileSync(OPENCLAW_CONFIG, JSON.stringify(config, null, 2) + '\n');
  console.log(' ✓ Saved to openclaw.json');

+ // Done!
  console.log('');
- console.log('='.repeat(50));
+ console.log('='.repeat(55));
  log('Setup complete!');
  console.log('');
- console.log('Next steps:');
- console.log(' 1. Restart OpenClaw: openclaw gateway restart');
- console.log(' 2. Check status in chat: /context-stats');
+ console.log(' Next steps:');
+ console.log(' 1. Restart OpenClaw: openclaw gateway restart');
+ console.log(' 2. Check status in chat: /context-stats');
  console.log('');
- console.log('To adjust later, edit ~/.openclaw/openclaw.json');
- console.log('under plugins.entries["context-compactor"].config');
+ console.log(' To adjust later:');
+ console.log(' Edit ~/.openclaw/openclaw.json');
+ console.log(' Look for plugins.entries["context-compactor"].config');
+ console.log('');
+ if (backupPath) {
+ console.log(' To restore original config:');
+ console.log(` cp "${backupPath}" ~/.openclaw/openclaw.json`);
+ console.log('');
+ }
  }

  function showHelp() {
  console.log(`
- Context Compactor
- Token-based context compaction for local models
+ Jasper Context Compactor
+ Token-based context compaction for local models (MLX, llama.cpp, Ollama)

  USAGE:
- npx openclaw-context-compactor setup Install and configure plugin
- npx openclaw-context-compactor help Show this help
+ npx jasper-context-compactor setup Install and configure plugin
+ npx jasper-context-compactor help Show this help

  WHAT IT DOES:
- - Copies plugin files to ~/.openclaw/extensions/context-compactor/
- - Detects your model's context window (with permission)
- - Configures appropriate token limits
- - Enables automatic context compaction for local models
+ Local LLMs don't report context overflow errors like cloud APIs.
+ This plugin estimates tokens client-side and proactively summarizes
+ older messages before hitting your model's context limit.

- CONFIGURATION:
- After setup, adjust in openclaw.json:
-
- "context-compactor": {
- "enabled": true,
- "config": {
- "maxTokens": 8000, // Your model's context limit minus buffer
- "keepRecentTokens": 2000 // Recent context to preserve
- }
- }
+ SETUP PROCESS:
+ 1. Backs up your openclaw.json (with restore instructions)
+ 2. Copies plugin files to ~/.openclaw/extensions/
+ 3. Asks permission before reading your config
+ 4. Detects your model and suggests appropriate token limits
+ 5. Lets you customize or enter values manually
+ 6. Updates openclaw.json with the plugin config
+
+ PRIVACY:
+ Everything runs 100% locally. Nothing is sent to external servers.
+ We only read your local config file (with your permission).

- COMMANDS (in chat):
+ COMMANDS (in chat after setup):
  /context-stats Show current token usage
  /compact-now Force fresh compaction
  `);
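The timestamped backup file name produced by the new `backupConfig()` can be checked in isolation. `backupName` is a helper invented for this sketch, extracted from the two relevant lines in the diff; the real function also creates the directory and copies the file:

```javascript
// Filesystem-safe backup name from an ISO timestamp, as backupConfig() builds it:
// ':' and '.' are replaced with '-' because both are awkward in file names.
// "2026-02-11T08:00:00.000Z" -> "openclaw-2026-02-11T08-00-00-000Z.json"
function backupName(date = new Date()) {
  const timestamp = date.toISOString().replace(/[:.]/g, '-');
  return `openclaw-${timestamp}.json`;
}
```

This is the naming scheme behind the `openclaw-2026-02-11T08-00-00-000Z.json` example in the README's restore instructions.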
package/package.json CHANGED
@@ -1,13 +1,15 @@
  {
  "name": "jasper-context-compactor",
- "version": "0.2.2",
+ "version": "0.3.1",
  "description": "Context compaction plugin for OpenClaw - works with local models (MLX, llama.cpp) that don't report token limits",
  "main": "index.ts",
  "bin": {
  "context-compactor": "./cli.js"
  },
  "openclaw": {
- "extensions": ["./index.ts"]
+ "extensions": [
+ "./index.ts"
+ ]
  },
  "keywords": [
  "openclaw",