@opencompress/openclaw 3.0.0 → 3.0.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,108 +1,214 @@
- # OpenCompress Plugin for OpenClaw
+ <p align="center">
+   <br />
+   <br />
+   <strong><code>🗜️ OpenCompress</code></strong>
+   <br />
+   <em>for OpenClaw</em>
+   <br />
+   <br />
+ </p>

- Compress every LLM call automatically keep your existing provider, same models, same quality, **40-70% cheaper**.
+ <h3 align="center">Your keys. Your models. Fewer tokens. Better quality.</h3>

- ## How it works
+ <br />

- ```
- Your Agent → OpenCompress (compress) → Your LLM Provider (OpenAI/Anthropic/OpenRouter/Google)
- ```
+ <p align="center">
+   <a href="https://www.npmjs.com/package/@opencompress/openclaw"><img src="https://img.shields.io/npm/v/@opencompress/openclaw?style=flat-square&color=000&label=npm" alt="npm" /></a>
+   &nbsp;
+   <a href="https://github.com/open-compress/opencompress-openclaw"><img src="https://img.shields.io/github/stars/open-compress/opencompress-openclaw?style=flat-square&color=000&label=stars" alt="stars" /></a>
+   &nbsp;
+   <a href="https://github.com/open-compress/opencompress-openclaw/blob/main/LICENSE"><img src="https://img.shields.io/badge/license-MIT-000?style=flat-square" alt="license" /></a>
+   &nbsp;
+   <a href="https://openclaw.ai"><img src="https://img.shields.io/badge/OpenClaw-plugin-000?style=flat-square" alt="OpenClaw" /></a>
+ </p>

- You already pay for an LLM provider. OpenCompress adds a compression layer on top — your prompts are compressed through a 5-layer pipeline before reaching your provider. You pay your provider at their normal rates, and we charge only **20% of what you save**.
+ <br />

- - **53%** average input token reduction
- - **62%** latency improvement
- - **96%** quality preservation (SQuALITY benchmark)
+ ---

- ## Install
+ <p align="center">
+ OpenCompress is an <a href="https://openclaw.ai">OpenClaw</a> plugin that optimizes LLM input and output using a state-of-the-art multi-stage compression pipeline. It reduces token usage and improves response quality, automatically, on every call. Works with any provider you already use: Anthropic, OpenAI, Google, OpenRouter, and any OpenAI-compatible API.
+ </p>
+
+ ---
+
+ <br />
+
+ We don't sell tokens. We don't resell API access.
+
+ You use your own keys, your own models, your own account. Billed directly by Anthropic, OpenAI, or whoever you choose. We compress the traffic so you get charged less and your agent thinks clearer.
+
+ Compression doesn't just save money. It removes the noise. Leaner prompts mean the model focuses on what matters. Shorter context, better answers, better code.
+
+ No vendor lock-in. Uninstall anytime. Everything goes back to exactly how it was.
+
+ <br />
+
+ ---
+
+ <br />
+
+ ### How it works

- ```bash
- openclaw plugins install @opencompress/opencompress
  ```
+ ┌──────────────────────────────┐
+ │ Your OpenClaw Agent          │
+ │                              │
+ │ model: opencompress/auto     │
+ └──────────────┬───────────────┘
+                │
+                ▼
+ ┌──────────────────────────────┐
+ │ Local Proxy (:8401)          │
+ │                              │
+ │ reads your provider key      │
+ │ from OpenClaw config         │
+ └──────────────┬───────────────┘
+                │
+                ▼
+ ┌──────────────────────────────┐
+ │ opencompress.ai              │
+ │                              │
+ │ compress → forward           │
+ │ your key in header           │
+ │ never stored                 │
+ └──────────────┬───────────────┘
+                │
+                ▼
+ ┌──────────────────────────────┐
+ │ Your LLM Provider            │
+ │ (Anthropic / OpenAI)         │
+ │                              │
+ │ sees fewer tokens            │
+ │ charges you less             │
+ └──────────────────────────────┘
+ ```
+
+ <br />

- ## Setup
+ ---

- ### 1. Connect your LLM key
+ <br />

- After installing, run onboard and connect your existing provider key:
+ ### Install

  ```bash
+ openclaw plugins install @opencompress/openclaw
  openclaw onboard opencompress
+ openclaw gateway restart
  ```

- The wizard auto-provisions your account ($1.00 free credit) and asks for your upstream LLM key. Supported providers:
+ Select **`opencompress/auto`** as your model. Done.

- | Key prefix | Provider |
- |---|---|
- | `sk-proj-` or `sk-` | OpenAI |
- | `sk-ant-` | Anthropic |
- | `sk-or-` | OpenRouter |
- | `AIza...` | Google AI |
+ <br />

- Once connected, every LLM call is compressed automatically — you pay your provider directly, we only charge the compression fee.
+ ### Models

- > Don't have an LLM key? No problem — we can route through OpenRouter for you. Just skip the key step during onboard.
-
- ### 2. Use it
-
- Switch to the OpenCompress provider:
+ Every provider you already have gets a compressed mirror:

  ```
- /model opencompress/gpt-4o-mini
+ opencompress/auto                        →  your default, compressed
+ opencompress/anthropic/claude-sonnet-4   →  Claude Sonnet, compressed
+ opencompress/anthropic/claude-opus-4-6   →  Claude Opus, compressed
+ opencompress/openai/gpt-5.4              →  GPT-5.4, compressed
  ```

- That's it. Same model IDs as your current provider — no config changes needed.
+ Switch back to the original model anytime to disable compression.
+
+ <br />

- ### 3. Connect or switch your key anytime
+ ### Commands

  ```
- /compress-byok sk-proj-your-openai-key     # Connect OpenAI
- /compress-byok sk-ant-your-anthropic-key   # Connect Anthropic
- /compress-byok sk-or-your-openrouter-key   # Connect OpenRouter
- /compress-byok off                         # Switch back to router mode
+ /compress-stats   view savings, balance, token metrics
+ /compress         show status and available models
  ```

- ## Commands
+ <br />

- | Command | Description |
- |---------|-------------|
- | `/compress-stats` | Show compression savings (calls, tokens saved, cost saved) |
- | `/compress-byok <key>` | Connect or switch your LLM provider key |
- | `/compress-byok off` | Disconnect your key (switch to router fallback) |
+ ---

- ## Supported models (20)
+ <br />

- Works with all major providers — use whichever models you already use:
+ ### What we believe

- | Provider | Models |
- |----------|--------|
- | **OpenAI** | `gpt-4o`, `gpt-4o-mini`, `gpt-4.1`, `gpt-4.1-mini`, `gpt-4.1-nano`, `o3`, `o4-mini` |
- | **Anthropic** | `claude-sonnet-4-6`, `claude-opus-4-6`, `claude-haiku-4-5-20251001` |
- | **Google** | `gemini-2.5-pro`, `gemini-2.5-flash`, `google/gemini-2.5-pro-preview` |
- | **DeepSeek** | `deepseek/deepseek-chat-v3-0324`, `deepseek/deepseek-reasoner` |
- | **Meta** | `meta-llama/llama-4-maverick`, `meta-llama/llama-4-scout` |
- | **Qwen** | `qwen/qwen3-235b-a22b`, `qwen/qwen3-32b` |
- | **Mistral** | `mistralai/mistral-large-2411` |
+ <table>
+ <tr>
+ <td width="50%">

- ## Pricing
+ **Your keys are yours.**

- You pay your LLM provider directly at their normal rates. OpenCompress charges **20% of the tokens you save** if compression saves you $1.00 in tokens, you pay us $0.20. Net saving: **$0.80**.
+ We read your API key from OpenClaw's config at runtime, pass it in a per-request header, and discard it immediately. We never store, log, or cache your provider credentials. Ever.

- $1.00 free credit on sign-up covers ~50-100 compressed calls.
+ </td>
+ <td width="50%">

- ## Configuration
+ **Your prompts are yours.**

- | Key | Default | Description |
- |-----|---------|-------------|
- | `apiKey` | — | Your `sk-occ-...` key (set during onboard) |
- | `baseUrl` | `https://www.opencompress.ai/api` | Custom API endpoint |
+ Prompts are compressed in-memory and forwarded. Nothing is stored, logged, or used for training. The only thing we record is token counts for billing, original vs compressed. That's it.

- ## Uninstall
+ </td>
+ </tr>
+ <tr>
+ <td>

- ```bash
- openclaw plugins uninstall opencompress
+ **Zero lock-in.**
+
+ We don't replace your provider. We don't wrap your billing. If you uninstall, your agents keep working exactly as before. Same keys, same models, same everything.
+
+ </td>
+ <td>
+
+ **Failure is invisible.**
+
+ If our service goes down, your requests fall back directly to your provider. No errors, no downtime, no interruption. You just temporarily lose the compression savings.
+
+ </td>
+ </tr>
+ </table>
+
+ <br />
+
+ ---
+
+ <br />
+
+ ### Supported providers
+
+ ```
+ Anthropic      Claude Sonnet, Opus, Haiku      anthropic-messages
+ OpenAI         GPT-5.x, o-series               openai-completions
+ Google         Gemini                          openai-compat
+ OpenRouter     400+ models                     openai-completions
+ Any            OpenAI-compatible endpoint      openai-completions
  ```

- ## License
+ <br />
+
+ ### Pricing
+
+ Free credit on signup. No credit card. Pay only for the tokens you save.
+
+ **[Dashboard →](https://www.opencompress.ai/dashboard)**
+
+ <br />
+
+ ---
+
+ <br />
+
+ <p align="center">
+   <a href="https://www.opencompress.ai">opencompress.ai</a>
+   &nbsp;&nbsp;·&nbsp;&nbsp;
+   <a href="https://www.npmjs.com/package/@opencompress/openclaw">npm</a>
+   &nbsp;&nbsp;·&nbsp;&nbsp;
+   <a href="https://github.com/open-compress/opencompress-openclaw">github</a>
+ </p>
+
+ <p align="center">
+   <sub>MIT License · OpenCompress</sub>
+ </p>
+
+ <br />

- MIT
+ <!-- SEO: OpenClaw plugin, LLM token compression, save LLM tokens, reduce token cost, prompt compression, token optimization, reduce OpenClaw token usage, save API costs, LLM cost reduction, compress prompts, token savings, reduce AI costs, BYOK compression, OpenAI cost reduction, Anthropic cost reduction, Claude token savings, GPT token optimization, context compression, agentic compression, save money on LLM, reduce LLM bill, compress LLM context, openclaw cost savings -->
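The pricing changed between versions: the 3.0.0 README spelled out a fee of 20% of the token cost you save, while 3.0.2 only says "pay only for the tokens you save." A minimal sketch of the 3.0.0 fee arithmetic, with an illustrative function name and dollar figures (not part of the package):

```typescript
// Fee model from the 3.0.0 README: you pay your provider at normal rates,
// and OpenCompress bills 20% of the provider cost that compression avoided.
const FEE_RATE = 0.2; // "20% of what you save" (3.0.0 README)

function netSaving(originalCost: number, compressedCost: number): number {
  const saved = originalCost - compressedCost; // provider cost avoided
  const fee = saved * FEE_RATE;                // OpenCompress's cut
  return saved - fee;                          // what you actually keep
}

// README example: $1.00 saved in tokens → $0.20 fee → $0.80 net saving.
console.log(netSaving(2.0, 1.0));
```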
package/dist/index.d.ts CHANGED
@@ -1,9 +1,11 @@
  /**
-  * OpenCompress Save tokens, sharpen quality on every LLM call.
+  * OpenCompress for OpenClaw
   *
   * Registers as an OpenClaw Provider. Users select opencompress/* models.
   * Local HTTP proxy compresses requests via opencompress.ai, then forwards
   * to the user's upstream provider. Keys never leave your machine.
+  *
+  * Auto-provisions API key on first load. No onboard step needed.
   */
  type ModelApi = "openai-completions" | "openai-responses" | "anthropic-messages" | "google-generative-ai";
  type ModelDefinitionConfig = {
package/dist/index.js CHANGED
@@ -1,11 +1,39 @@
+ var __require = /* @__PURE__ */ ((x) => typeof require !== "undefined" ? require : typeof Proxy !== "undefined" ? new Proxy(x, {
+   get: (a, b) => (typeof require !== "undefined" ? require : a)[b]
+ }) : x)(function(x) {
+   if (typeof require !== "undefined") return require.apply(this, arguments);
+   throw Error('Dynamic require of "' + x + '" is not supported');
+ });
+
  // src/config.ts
- var VERSION = "3.0.0";
+ var VERSION = "3.0.2";
  var PROXY_PORT = 8401;
  var PROXY_HOST = "127.0.0.1";
  var OCC_API = "https://www.opencompress.ai/api";
  var PROVIDER_ID = "opencompress";

  // src/models.ts
+ var BUILTIN_PROVIDERS = {
+   anthropic: { baseUrl: "https://api.anthropic.com", api: "anthropic-messages", envVar: "ANTHROPIC_API_KEY" },
+   openai: { baseUrl: "https://api.openai.com", api: "openai-completions", envVar: "OPENAI_API_KEY" },
+   google: { baseUrl: "https://generativelanguage.googleapis.com", api: "google-generative-ai", envVar: "GOOGLE_API_KEY" },
+   xai: { baseUrl: "https://api.x.ai", api: "openai-completions", envVar: "XAI_API_KEY" },
+   deepseek: { baseUrl: "https://api.deepseek.com", api: "openai-completions", envVar: "DEEPSEEK_API_KEY" }
+ };
+ function resolveBuiltin(providerId) {
+   const builtin = BUILTIN_PROVIDERS[providerId];
+   if (!builtin) return null;
+   const key = process.env[builtin.envVar];
+   if (!key) return null;
+   return {
+     upstreamProvider: providerId,
+     upstreamModel: "",
+     // caller must set
+     upstreamKey: key,
+     upstreamBaseUrl: builtin.baseUrl,
+     upstreamApi: builtin.api
+   };
+ }
  function resolveUpstream(modelId, providers) {
    const stripped = modelId.replace(/^opencompress\//, "");
    if (stripped === "auto") {
@@ -21,31 +49,54 @@ function resolveUpstream(modelId, providers) {
        upstreamApi: config2.api || "openai-completions"
      };
    }
+     for (const [id, builtin2] of Object.entries(BUILTIN_PROVIDERS)) {
+       const key = process.env[builtin2.envVar];
+       if (key) {
+         return {
+           upstreamProvider: id,
+           upstreamModel: "",
+           // OpenClaw will set the actual model
+           upstreamKey: key,
+           upstreamBaseUrl: builtin2.baseUrl,
+           upstreamApi: builtin2.api
+         };
+       }
+     }
      return null;
    }
    const slashIdx = stripped.indexOf("/");
    if (slashIdx === -1) {
      const config2 = providers[stripped];
-     if (!config2) return null;
-     return {
-       upstreamProvider: stripped,
-       upstreamModel: config2.models?.[0]?.id || stripped,
-       upstreamKey: config2.apiKey,
-       upstreamBaseUrl: config2.baseUrl,
-       upstreamApi: config2.api || "openai-completions"
-     };
+     if (config2) {
+       return {
+         upstreamProvider: stripped,
+         upstreamModel: config2.models?.[0]?.id || stripped,
+         upstreamKey: config2.apiKey,
+         upstreamBaseUrl: config2.baseUrl,
+         upstreamApi: config2.api || "openai-completions"
+       };
+     }
+     const builtin2 = resolveBuiltin(stripped);
+     if (builtin2) return builtin2;
+     return null;
    }
    const upstreamProvider = stripped.slice(0, slashIdx);
    const upstreamModel = stripped.slice(slashIdx + 1);
    const config = providers[upstreamProvider];
-   if (!config) return null;
-   return {
-     upstreamProvider,
-     upstreamModel,
-     upstreamKey: config.apiKey,
-     upstreamBaseUrl: config.baseUrl,
-     upstreamApi: config.api || "openai-completions"
-   };
+   if (config) {
+     return {
+       upstreamProvider,
+       upstreamModel,
+       upstreamKey: config.apiKey,
+       upstreamBaseUrl: config.baseUrl,
+       upstreamApi: config.api || "openai-completions"
+     };
+   }
+   const builtin = resolveBuiltin(upstreamProvider);
+   if (builtin) {
+     return { ...builtin, upstreamModel };
+   }
+   return null;
  }
  function generateModelCatalog(providers) {
    const models = [];
@@ -84,6 +135,37 @@ function startProxy(getProviders2, getOccKey) {
      res.end(JSON.stringify({ status: "ok", version: "2.0.0" }));
      return;
    }
+   if (req.url === "/provision" && req.method === "POST") {
+     try {
+       const provRes = await fetch(`${OCC_API}/v1/provision`, {
+         method: "POST",
+         headers: { "Content-Type": "application/json" },
+         body: "{}"
+       });
+       const data = await provRes.text();
+       res.writeHead(provRes.status, { "Content-Type": "application/json" });
+       res.end(data);
+     } catch (err) {
+       res.writeHead(502, { "Content-Type": "application/json" });
+       res.end(JSON.stringify({ error: { message: String(err) } }));
+     }
+     return;
+   }
+   if (req.url === "/stats" && req.method === "GET") {
+     const authHeader = req.headers["authorization"] || "";
+     try {
+       const statsRes = await fetch(`${OCC_API}/user/stats`, {
+         headers: { Authorization: authHeader }
+       });
+       const data = await statsRes.text();
+       res.writeHead(statsRes.status, { "Content-Type": "application/json" });
+       res.end(data);
+     } catch (err) {
+       res.writeHead(502, { "Content-Type": "application/json" });
+       res.end(JSON.stringify({ error: { message: String(err) } }));
+     }
+     return;
+   }
    if (req.method !== "POST") {
      res.writeHead(405);
      res.end("Method not allowed");
@@ -263,17 +345,76 @@ function readBody(req) {
  }

  // src/index.ts
+ var _cachedKey;
  function getApiKey(api) {
+   if (_cachedKey) return _cachedKey;
    const auth = api.config.auth;
    const fromConfig = auth?.profiles?.opencompress?.credentials?.["api-key"]?.apiKey;
-   if (fromConfig) return fromConfig;
-   if (process.env.OPENCOMPRESS_API_KEY) return process.env.OPENCOMPRESS_API_KEY;
-   if (api.pluginConfig?.apiKey) return api.pluginConfig.apiKey;
+   if (fromConfig) {
+     _cachedKey = fromConfig;
+     return fromConfig;
+   }
+   if (process.env.OPENCOMPRESS_API_KEY) {
+     _cachedKey = process.env.OPENCOMPRESS_API_KEY;
+     return _cachedKey;
+   }
+   if (api.pluginConfig?.apiKey) {
+     _cachedKey = api.pluginConfig.apiKey;
+     return _cachedKey;
+   }
+   try {
+     const fs = __require("fs");
+     const os = __require("os");
+     const path = __require("path");
+     const keyPath = path.join(os.homedir(), ".openclaw", "opencompress", "api-key");
+     if (fs.existsSync(keyPath)) {
+       const key = fs.readFileSync(keyPath, "utf-8").trim();
+       if (key.startsWith("sk-occ-")) {
+         _cachedKey = key;
+         return key;
+       }
+     }
+   } catch {
+   }
    return void 0;
  }
+ async function autoProvision(api) {
+   try {
+     const res = await fetch(`http://${PROXY_HOST}:${PROXY_PORT}/provision`, {
+       method: "POST"
+     });
+     if (!res.ok) return void 0;
+     const data = await res.json();
+     const key = data.apiKey;
+     if (!key) return void 0;
+     try {
+       const fs = __require("fs");
+       const os = __require("os");
+       const path = __require("path");
+       const dir = path.join(os.homedir(), ".openclaw", "opencompress");
+       fs.mkdirSync(dir, { recursive: true });
+       fs.writeFileSync(path.join(dir, "api-key"), key, { mode: 384 });
+     } catch {
+     }
+     _cachedKey = key;
+     api.logger.info(`OpenCompress: auto-provisioned API key (${data.freeCredit} free credit)`);
+     return key;
+   } catch {
+     return void 0;
+   }
+ }
  function getProviders(api) {
    return api.config.models?.providers || {};
  }
+ function injectEnvVars(api) {
+   const envVars = api.config.env?.vars;
+   if (!envVars) return;
+   for (const [key, value] of Object.entries(envVars)) {
+     if (!process.env[key] && value) {
+       process.env[key] = value;
+     }
+   }
+ }
  function createProvider(api) {
    return {
      id: PROVIDER_ID,
@@ -298,7 +439,7 @@ function createProvider(api) {
        kind: "custom",
        run: async (ctx) => {
          ctx.prompter.note(
-           "\u{1F5DC}\uFE0F OpenCompress \u2014 save tokens and sharpen quality on every LLM call\n\nUse your existing LLM providers. Your API keys stay on your machine.\nWe compress prompts to reduce costs and improve output quality."
+           "OpenCompress compresses LLM input and output to save tokens and improve quality.\nYour existing API keys stay on your machine. We just make the traffic smaller."
          );
          const spinner = ctx.prompter.progress("Creating your account...");
          try {
@@ -313,19 +454,17 @@ function createProvider(api) {
            }
            const data = await res.json();
            spinner.stop("Account created!");
+           _cachedKey = data.apiKey;
            return {
              profiles: [{
                profileId: "default",
                credential: { apiKey: data.apiKey }
              }],
              notes: [
-               "\u{1F5DC}\uFE0F OpenCompress ready!",
-               `\u{1F4B0} ${data.freeCredit} free credit.`,
-               "",
-               "Select any opencompress/* model to enable compression.",
-               "Your existing provider keys are used automatically.",
+               "OpenCompress ready!",
+               `${data.freeCredit} free credit. No credit card needed.`,
                "",
-               "Dashboard: https://www.opencompress.ai/dashboard"
+               "Select any opencompress/* model to enable compression."
              ]
            };
          } catch (err) {
@@ -340,9 +479,10 @@ function createProvider(api) {
  var plugin = {
    id: "opencompress",
    name: "OpenCompress",
-   description: "Save tokens and sharpen quality on any LLM \u2014 use your existing providers",
+   description: "Save tokens and sharpen quality on any LLM. Use your existing providers.",
    version: VERSION,
    register(api) {
+     injectEnvVars(api);
      api.registerProvider(createProvider(api));
      api.logger.info(`OpenCompress v${VERSION} registered`);
      api.registerService({
@@ -358,7 +498,10 @@ var plugin = {
        stopProxy();
      }
    });
-   setTimeout(() => {
+   setTimeout(async () => {
+     if (!getApiKey(api)) {
+       await autoProvision(api);
+     }
      try {
        startProxy(
          () => getProviders(api),
@@ -366,14 +509,17 @@ var plugin = {
        );
      } catch {
      }
-   }, 1e3);
+   }, 1500);
    api.registerCommand({
-     name: "compress-stats",
+     name: "compress_stats",
      description: "Show OpenCompress savings and balance",
      handler: async () => {
-       const key = getApiKey(api);
+       let key = getApiKey(api);
        if (!key) {
-         return { text: "No API key. Run `openclaw onboard opencompress` first." };
+         key = await autoProvision(api) || void 0;
+       }
+       if (!key) {
+         return { text: "Could not provision API key. Check your network connection." };
        }
        try {
          const res = await fetch(`${OCC_API}/user/stats`, {
@@ -387,15 +533,15 @@ var plugin = {
          return {
            text: [
              "```",
-             "\u{1F5DC}\uFE0F OpenCompress Stats",
-             "======================",
+             "OpenCompress Stats",
+             "==================",
              `Balance: $${balance.toFixed(2)}`,
              `API calls: ${calls}`,
              `Avg compression: ${rate}`,
              `Tokens saved: ${(Number(s.totalOriginalTokens || 0) - Number(s.totalCompressedTokens || 0)).toLocaleString()}`,
              "```",
              "",
-             balance < 0.5 ? `\u26A0\uFE0F Low balance! Link account for $10 bonus: https://www.opencompress.ai/dashboard?link=${encodeURIComponent(key)}` : "Dashboard: https://www.opencompress.ai/dashboard"
+             balance < 0.5 ? `Low balance. Link your account for $10 bonus: https://www.opencompress.ai/dashboard?link=${encodeURIComponent(key)}` : "Dashboard: https://www.opencompress.ai/dashboard"
            ].join("\n")
          };
        } catch (err) {
@@ -407,20 +553,23 @@ var plugin = {
      name: "compress",
      description: "Show OpenCompress status and available models",
      handler: async () => {
-       const key = getApiKey(api);
+       let key = getApiKey(api);
+       if (!key) {
+         key = await autoProvision(api) || void 0;
+       }
        const providers = getProviders(api);
        const models = generateModelCatalog(providers);
        return {
          text: [
            "**OpenCompress**",
            "",
-           `API key: ${key ? `${key.slice(0, 12)}...` : "not set \u2014 run `openclaw onboard opencompress`"}`,
+           `API key: ${key ? `${key.slice(0, 15)}...` : "provisioning failed"}`,
            `Proxy: http://${PROXY_HOST}:${PROXY_PORT}`,
            "",
            "**Compressed models:**",
            ...models.map((m) => ` ${m.id}`),
            "",
-           "Select any opencompress/* model to enable compression."
+           "Use `/model opencompress/auto` to enable compression."
          ].join("\n")
        };
      }
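The `resolveUpstream` changes above keep the same `opencompress/<provider>/<model>` ID scheme while adding env-var fallbacks. A standalone sketch of just the ID parsing, with hypothetical names (not the bundled code); `"auto"` and bare provider IDs are resolved separately in the real function:

```typescript
// Decompose an opencompress/* model ID the way resolveUpstream does:
// strip the "opencompress/" prefix, then split on the first remaining "/".
type UpstreamRef = { provider: string; model: string };

function splitModelId(modelId: string): UpstreamRef | null {
  const stripped = modelId.replace(/^opencompress\//, "");
  const slashIdx = stripped.indexOf("/");
  // "auto" and bare provider IDs have no slash left; the real code
  // resolves those from configured providers or env vars instead.
  if (slashIdx === -1) return null;
  return {
    provider: stripped.slice(0, slashIdx),
    model: stripped.slice(slashIdx + 1),
  };
}

console.log(splitModelId("opencompress/anthropic/claude-sonnet-4"));
// → { provider: "anthropic", model: "claude-sonnet-4" }
```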
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "@opencompress/openclaw",
3
- "version": "3.0.0",
3
+ "version": "3.0.2",
4
4
  "description": "OpenCompress for OpenClaw — save tokens and sharpen quality on any LLM",
5
5
  "type": "module",
6
6
  "main": "dist/index.js",