@opencompress/opencompress 1.0.0 → 1.1.0

Files changed (2)
  1. package/README.md +46 -50
  2. package/package.json +1 -1
package/README.md CHANGED
@@ -1,14 +1,14 @@
  # OpenCompress Plugin for OpenClaw
 
- Automatic 5-layer prompt compression for any LLM. Drop-in replacement for your current provider same models, same API, 40-70% cheaper and faster.
+ Compress every LLM call automatically: keep your existing provider, same models, same quality, **40-70% cheaper**.
 
  ## How it works
 
  ```
- Your Agent → OpenCompress (compress) → Your LLM provider (OpenRouter/OpenAI/Anthropic)
+ Your Agent → OpenCompress (compress) → Your LLM Provider (OpenAI/Anthropic/OpenRouter/Google)
  ```
 
- OpenCompress sits between OpenClaw and your existing LLM provider. It compresses prompts through a 5-layer pipeline (distilled model + dict aliasing + output shaping + adaptive rate + closed-loop control), then forwards to your provider using your own API key (BYOK).
+ You already pay for an LLM provider. OpenCompress adds a compression layer on top — your prompts are compressed through a 5-layer pipeline before reaching your provider. You pay your provider at their normal rates, and we charge only **20% of what you save**.
 
  - **53%** average input token reduction
  - **62%** latency improvement
@@ -16,33 +16,34 @@ OpenCompress sits between OpenClaw and your existing LLM provider. It compresses
 
  ## Install
 
- ```bash
- openclaw plugins install /path/to/opencompress
- ```
-
- Or from npm (coming soon):
-
  ```bash
  openclaw plugins install @opencompress/opencompress
  ```
 
  ## Setup
 
- ### 1. Get an OpenCompress API key
+ ### 1. Connect your LLM key
 
- 1. Go to [opencompress.ai/dashboard](https://www.opencompress.ai/dashboard)
- 2. Sign up and create a **BYOK key** — enter your existing OpenRouter or OpenAI API key
- 3. Copy the `sk-occ-...` key
-
- ### 2. Onboard in OpenClaw
+ After installing, run onboard and connect your existing provider key:
 
  ```bash
  openclaw onboard opencompress
  ```
 
- The wizard will prompt for your API key and verify it.
+ The wizard auto-provisions your account ($1.00 free credit) and asks for your upstream LLM key. Supported providers:
+
+ | Key prefix | Provider |
+ |---|---|
+ | `sk-proj-` or `sk-` | OpenAI |
+ | `sk-ant-` | Anthropic |
+ | `sk-or-` | OpenRouter |
+ | `AIza...` | Google AI |
+
+ Once connected, every LLM call is compressed automatically — you pay your provider directly, we only charge the compression fee.
 
- ### 3. Use it
+ > Don't have an LLM key? No problem — we can route through OpenRouter for you. Just skip the key step during onboard.
+
+ ### 2. Use it
 
  Switch to the OpenCompress provider:
 
@@ -50,61 +51,56 @@ Switch to the OpenCompress provider:
  /model opencompress/gpt-4o-mini
  ```
 
- All your requests now go through compression automatically. Model IDs are identical to OpenRouter/OpenAI — no config changes needed.
+ That's it. Same model IDs as your current provider — no config changes needed.
+
+ ### 3. Connect or switch your key anytime
+
+ ```
+ /compress-byok sk-proj-your-openai-key     # Connect OpenAI
+ /compress-byok sk-ant-your-anthropic-key   # Connect Anthropic
+ /compress-byok sk-or-your-openrouter-key   # Connect OpenRouter
+ /compress-byok off                         # Switch back to router mode
+ ```
 
  ## Commands
 
  | Command | Description |
  |---------|-------------|
  | `/compress-stats` | Show compression savings (calls, tokens saved, cost saved) |
+ | `/compress-byok <key>` | Connect or switch your LLM provider key |
+ | `/compress-byok off` | Disconnect your key (switch to router fallback) |
 
  ## Supported models (20)
 
- | Model | ID |
- |-------|-----|
- | GPT-4o | `gpt-4o` |
- | GPT-4o Mini | `gpt-4o-mini` |
- | GPT-4.1 | `gpt-4.1` |
- | GPT-4.1 Mini | `gpt-4.1-mini` |
- | GPT-4.1 Nano | `gpt-4.1-nano` |
- | O3 | `o3` |
- | O4 Mini | `o4-mini` |
- | Claude Sonnet 4.6 | `claude-sonnet-4-6` |
- | Claude Opus 4.6 | `claude-opus-4-6` |
- | Claude Haiku 4.5 | `claude-haiku-4-5-20251001` |
- | Gemini 2.5 Pro | `gemini-2.5-pro` |
- | Gemini 2.5 Flash | `gemini-2.5-flash` |
- | DeepSeek V3 | `deepseek/deepseek-chat-v3-0324` |
- | DeepSeek Reasoner | `deepseek/deepseek-reasoner` |
- | Llama 4 Maverick | `meta-llama/llama-4-maverick` |
- | Llama 4 Scout | `meta-llama/llama-4-scout` |
- | Qwen3 235B | `qwen/qwen3-235b-a22b` |
- | Qwen3 32B | `qwen/qwen3-32b` |
- | Mistral Large | `mistralai/mistral-large-2411` |
- | Gemini 2.5 Pro Preview | `google/gemini-2.5-pro-preview` |
+ Works with all major providers — use whichever models you already use:
+
+ | Provider | Models |
+ |----------|--------|
+ | **OpenAI** | `gpt-4o`, `gpt-4o-mini`, `gpt-4.1`, `gpt-4.1-mini`, `gpt-4.1-nano`, `o3`, `o4-mini` |
+ | **Anthropic** | `claude-sonnet-4-6`, `claude-opus-4-6`, `claude-haiku-4-5-20251001` |
+ | **Google** | `gemini-2.5-pro`, `gemini-2.5-flash`, `google/gemini-2.5-pro-preview` |
+ | **DeepSeek** | `deepseek/deepseek-chat-v3-0324`, `deepseek/deepseek-reasoner` |
+ | **Meta** | `meta-llama/llama-4-maverick`, `meta-llama/llama-4-scout` |
+ | **Qwen** | `qwen/qwen3-235b-a22b`, `qwen/qwen3-32b` |
+ | **Mistral** | `mistralai/mistral-large-2411` |
 
  ## Pricing
 
- OpenCompress charges **20% of what you save**. If compression saves you $1.00, you pay $0.20 net saving $0.80.
+ You pay your LLM provider directly at their normal rates. OpenCompress charges **20% of the tokens you save**: if compression saves you $1.00 in tokens, you pay us $0.20. Net saving: **$0.80**.
 
- BYOK mode: you pay your LLM provider directly + the compression fee.
+ $1.00 free credit on sign-up covers ~50-100 compressed calls.
 
  ## Configuration
 
- Plugin config options in `openclaw.plugin.json`:
-
  | Key | Default | Description |
  |-----|---------|-------------|
- | `apiKey` | — | Your `sk-occ-...` API key (optional, set during onboard) |
+ | `apiKey` | — | Your `sk-occ-...` key (set during onboard) |
  | `baseUrl` | `https://www.opencompress.ai/api` | Custom API endpoint |
 
- ## Development
+ ## Uninstall
 
  ```bash
- npm install
- npm run build      # Build with tsup
- npm run dev        # Watch mode
- npm run typecheck  # Type check
+ openclaw plugins uninstall opencompress
  ```
 
  ## License
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@opencompress/opencompress",
- "version": "1.0.0",
+ "version": "1.1.0",
  "description": "OpenCompress plugin for OpenClaw — automatic 5-layer prompt compression for any LLM",
  "type": "module",
  "main": "dist/index.js",