entroplain 0.1.1 → 0.2.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/docs/USAGE.md CHANGED
# Entroplain Documentation

## Installation

### pip (Python)

```bash
# Core package
pip install entroplain

# With provider support
pip install "entroplain[openai]"
pip install "entroplain[anthropic]"
pip install "entroplain[all]"
```

### npm (Node.js)

```bash
npm install entroplain
```

### From Source

```bash
git clone https://github.com/entroplain/entroplain.git
cd entroplain
pip install -e .
```

---

## Quick Start

```python
from entroplain import EntropyMonitor

monitor = EntropyMonitor()

# Track tokens with entropy
monitor.track("The", 0.8)
monitor.track("answer", 0.5)
monitor.track("is", 0.2)

# Check convergence
if monitor.should_exit():
    print("Reasoning complete!")
```

---

## How It Works

### 1. Entropy Calculation

For each token, we calculate **Shannon entropy** from the model's output distribution:

```
H = -Σ p(x) * log₂(p(x))
```

where `p(x)` is the probability of token `x`.
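
The formula can be sketched in plain Python. This is a standalone illustration, not the library's implementation; `shannon_entropy` is a hypothetical helper that assumes natural-log probabilities (what most APIs return) and renormalizes the top-k values so they sum to 1:

```python
import math

def shannon_entropy(logprobs):
    """Shannon entropy in bits, from a list of natural-log probabilities.

    The top-k probabilities are renormalized so they sum to 1,
    since APIs only return the k most likely tokens.
    """
    probs = [math.exp(lp) for lp in logprobs]
    total = sum(probs)
    probs = [p / total for p in probs]
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A uniform distribution over 4 tokens has entropy log2(4) = 2 bits.
print(shannon_entropy([math.log(0.25)] * 4))  # → 2.0
```

High entropy means the model is torn between many plausible next tokens; entropy near 0 means one token dominates.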
63
+
64
+ ### 2. Valley Detection
65
+
66
+ A **valley** is a local minimum in the entropy trajectory:
67
+
68
+ ```
69
+ Token: A B C D E
70
+ Entropy: 0.8 0.3* 0.7 0.4* 0.9
71
+ ↑ ↑
72
+ Valley 1 Valley 2
73
+ ```
74
+
75
+ Valleys indicate moments when the model was confident about the next token — reasoning milestones.
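
A minimal detector for such local minima can be sketched as follows. This is only an illustration; entroplain's own detector may smooth the trajectory or apply windowing first:

```python
def find_valleys(entropies):
    """Return indices that are strict local minima of the trajectory."""
    return [
        i for i in range(1, len(entropies) - 1)
        if entropies[i] < entropies[i - 1] and entropies[i] < entropies[i + 1]
    ]

# The trajectory from the diagram: valleys at B (index 1) and D (index 3).
print(find_valleys([0.8, 0.3, 0.7, 0.4, 0.9]))  # → [1, 3]
```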

### 3. Exit Conditions

| Condition | When to Exit |
|-----------|--------------|
| `entropy_drop` | Entropy < threshold |
| `valleys_plateau` | Valley count stabilizes |
| `velocity_zero` | Entropy change < threshold |
| `combined` | (entropy_low OR valleys_plateau) AND velocity_stable |
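
The `combined` strategy can be read as follows. This is an illustrative sketch, with parameter names borrowed from the configuration section; the `plateau_window` parameter and the exact plateau test are assumptions, not the library's implementation:

```python
def should_exit_combined(entropies, valley_counts,
                         entropy_threshold=0.15,
                         velocity_threshold=0.05,
                         min_tokens=50,
                         plateau_window=10):
    """entropies: per-token entropies; valley_counts: running valley count."""
    # Never exit before min_tokens have been generated.
    if len(entropies) < min_tokens:
        return False
    entropy_low = entropies[-1] < entropy_threshold
    # Plateau: the valley count has not changed over the recent window (assumed test).
    valleys_plateau = len(set(valley_counts[-plateau_window:])) == 1
    # Stable: the last per-token entropy change is below the velocity threshold.
    velocity_stable = abs(entropies[-1] - entropies[-2]) < velocity_threshold
    return (entropy_low or valleys_plateau) and velocity_stable

# 60 tokens, entropy settled at 0.1 with a stable valley count → exit.
print(should_exit_combined([0.2] * 58 + [0.1, 0.1], [3] * 60))  # → True
```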

---

## Configuration

```python
monitor = EntropyMonitor(
    # Exit when entropy drops below this
    entropy_threshold=0.15,

    # Require at least N valleys
    min_valleys=2,

    # Exit when velocity < this
    velocity_threshold=0.05,

    # Don't exit before N tokens
    min_tokens=50,

    # Exit condition strategy
    exit_condition="combined"  # or "entropy_drop", "valleys_plateau", "velocity_zero"
)
```

### Environment Variables

```bash
# Provider API keys
ENTROPPLAIN_OPENAI_API_KEY=sk-...
ENTROPPLAIN_ANTHROPIC_API_KEY=sk-ant-...
ENTROPPLAIN_NVIDIA_API_KEY=nvapi-...
ENTROPPLAIN_GOOGLE_API_KEY=...

# Local models
ENTROPPLAIN_LOCAL_PROVIDER=ollama
ENTROPPLAIN_LOCAL_MODEL=llama3.1
```

---

## Provider Examples

### OpenAI

```python
from openai import OpenAI
from entroplain import EntropyMonitor

client = OpenAI()
monitor = EntropyMonitor()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Explain quantum computing"}],
    logprobs=True,
    top_logprobs=5,
    stream=True
)

for chunk in response:
    # Chunks may arrive without choices or logprobs; guard before reading them.
    if chunk.choices and chunk.choices[0].logprobs:
        for content in chunk.choices[0].logprobs.content:
            # Logprob entries are SDK objects, not dicts: use attribute access.
            entropy = monitor.calculate_entropy(
                [lp.logprob for lp in content.top_logprobs]
            )
            monitor.track(content.token, entropy)

            if monitor.should_exit():
                break
```

### NVIDIA NIM

```python
from entroplain import NVIDIAProvider, EntropyMonitor

provider = NVIDIAProvider()
monitor = EntropyMonitor()

for token in provider.stream_with_entropy(
    model="meta/llama-3.1-70b-instruct",
    messages=[{"role": "user", "content": "Hello"}]
):
    monitor.track(token.token, token.entropy)

    if monitor.should_exit():
        print("Early exit!")
        break
```

### Ollama (Local)

```python
from entroplain import OllamaProvider, EntropyMonitor

provider = OllamaProvider()
monitor = EntropyMonitor()

for token in provider.stream_with_entropy(
    model="llama3.1",
    prompt="Think step by step..."
):
    print(token.token, end="")
    monitor.track(token.token, token.entropy)
```

---

## Agent Framework Integration

### OpenClaw

```yaml
# In your agent config
entropy_monitor:
  enabled: true
  entropy_threshold: 0.15
  min_valleys: 2

hooks:
  on_token: entroplain.hooks.track_entropy
  on_exit_check: entroplain.hooks.early_exit
```

### Claude Code

```json
{
  "hooks": {
    "on_token": "entroplain.hooks.track_entropy",
    "on_converge": "entroplain.hooks.early_exit"
  }
}
```

### Custom Agent

```python
from entroplain.hooks import EntropyHook

hook = EntropyHook(config={"entropy_threshold": 0.15})

# In your agent loop
for token, entropy in your_agent.generate():
    result = hook.on_token(token, entropy)

    if result["should_exit"]:
        print(f"Exiting early at token {result['index']}")
        break
```
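
For intuition, a minimal stand-in for such a hook might look like this. It is an illustrative sketch only, not the real `EntropyHook`, which accepts a fuller config and applies the exit conditions described earlier:

```python
class SimpleEntropyHook:
    """Toy hook: signals exit once entropy drops below a threshold
    after a minimum number of tokens (illustrative, not the real class)."""

    def __init__(self, entropy_threshold=0.15, min_tokens=5):
        self.entropy_threshold = entropy_threshold
        self.min_tokens = min_tokens
        self.history = []

    def on_token(self, token, entropy):
        self.history.append((token, entropy))
        converged = (
            len(self.history) >= self.min_tokens
            and entropy < self.entropy_threshold
        )
        return {"should_exit": converged, "index": len(self.history) - 1}

hook = SimpleEntropyHook()
stream = [("The", 0.9), ("answer", 0.5), ("is", 0.3), ("four", 0.2), (".", 0.1)]
for token, entropy in stream:
    result = hook.on_token(token, entropy)
print(result)  # → {'should_exit': True, 'index': 4}
```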

---

## CLI Reference

```bash
# Analyze entropy trajectory
entroplain analyze "What is 2+2?" --model gpt-4o --output results.json

# Stream with early exit
entroplain stream "Solve: x^2=16" --exit-on-converge --threshold 0.15

# Run benchmarks
entroplain benchmark --problems gsm8k --output benchmark.json

# Visualize trajectory
entroplain visualize results.json --output entropy_plot.png
```

---

## Research

### Key Findings

| Metric | Easy | Medium | Hard |
|--------|------|--------|------|
| Avg Valleys | 61.3 | 53.0 | 70.2 |
| Avg Entropy | 0.376 | 0.327 | 0.295 |
| Avg Velocity | 0.485 | 0.439 | 0.410 |

**H1 Supported:** Hard problems show more entropy valleys than easy ones (valley count correlates with reasoning complexity).

**H2 Supported:** Entropy velocity differs by difficulty (useful for crystallization detection).

### Paper

See [`paper.md`](../paper.md) for the full research proposal.

---

## Troubleshooting

### "No logprobs returned"

Ensure your API request includes:

- OpenAI/NVIDIA: `logprobs=True, top_logprobs=5`
- Anthropic: `logprobs=True`
- Gemini: `response_logprobs=True`

### "Entropy is always 0"

Some providers don't expose logprobs in streaming mode. Try a non-streaming request, or check the provider's docs.

### "`should_exit()` always returns False"

Check your thresholds:

- Is `entropy_threshold` set too low to ever be crossed?
- Is `min_valleys` higher than the number of valleys actually detected?
- Has generation reached `min_tokens` yet?

---

## Support

- **Issues:** https://github.com/entroplain/entroplain/issues
- **Discord:** https://discord.gg/entroplain
- **Docs:** https://entroplain.ai/docs
package/entroplain/__init__.py CHANGED
@@ -1,33 +1,32 @@
- """
- Entroplain — Entropy-based early exit for efficient agent reasoning.
- """
-
- __version__ = "0.1.1"
- __author__ = "Entroplain Contributors"
-
- from .monitor import EntropyMonitor, calculate_entropy
- from .providers import (
-     OpenAIProvider,
-     AnthropicProvider,
-     GeminiProvider,
-     NVIDIAProvider,
-     OllamaProvider,
-     LlamaCppProvider,
- )
- from .hooks import track_entropy, early_exit
- from .proxy import EntropyProxy, ProxyConfig
-
- __all__ = [
-     "EntropyMonitor",
-     "calculate_entropy",
-     "OpenAIProvider",
-     "AnthropicProvider",
-     "GeminiProvider",
-     "NVIDIAProvider",
-     "OllamaProvider",
-     "LlamaCppProvider",
-     "track_entropy",
-     "early_exit",
-     "EntropyProxy",
-     "ProxyConfig",
- ]
+ """
+ Entroplain — Entropy-based early exit for efficient agent reasoning.
+ """
+
+ __version__ = "0.2.2"
+ __author__ = "Entroplain Contributors"
+
+ from .monitor import EntropyMonitor, calculate_entropy_from_logprobs
+ from .providers import (
+     OpenAIProvider,
+     AnthropicProvider,
+     GeminiProvider,
+     NVIDIAProvider,
+     OllamaProvider,
+     LlamaCppProvider,
+ )
+ from .hooks import track_entropy, early_exit
+ from .cost_tracker import CostTracker
+
+ __all__ = [
+     "EntropyMonitor",
+     "calculate_entropy_from_logprobs",
+     "OpenAIProvider",
+     "AnthropicProvider",
+     "GeminiProvider",
+     "NVIDIAProvider",
+     "OllamaProvider",
+     "LlamaCppProvider",
+     "track_entropy",
+     "early_exit",
+     "CostTracker",
+ ]