entroplain 0.2.0 → 0.2.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/DEPLOY.md +41 -0
- package/README.md +478 -476
- package/dist/entroplain-0.2.2-py3-none-any.whl +0 -0
- package/dist/entroplain-0.2.2.tar.gz +0 -0
- package/dist/entroplain-0.2.3-py3-none-any.whl +0 -0
- package/dist/entroplain-0.2.3.tar.gz +0 -0
- package/docs/AGENT_USAGE.md +178 -178
- package/docs/USAGE.md +302 -302
- package/entroplain/__init__.py +5 -3
- package/entroplain/cost_tracker.py +231 -231
- package/entroplain/dashboard.py +480 -368
- package/entroplain/monitor.py +390 -390
- package/entroplain/providers.py +626 -626
- package/entroplain/proxy.py +561 -349
- package/entroplain/shared_state.py +72 -0
- package/package.json +47 -46
- package/pyproject.toml +1 -1
- package/scripts/setup.bat +89 -0
- package/scripts/setup.sh +98 -0
- package/test_nvidia.py +56 -56
- package/test_proxy.py +16 -16
- package/vercel.json +6 -0
- package/website/README.md +14 -0
- package/website/app/globals.css +88 -0
- package/website/app/layout.tsx +34 -0
- package/website/app/page.tsx +537 -0
- package/website/package-lock.json +520 -0
- package/website/package.json +25 -0
- package/website/tsconfig.json +40 -0
- package/website/vercel.json +3 -0
- package/dist/entroplain-0.2.0-py3-none-any.whl +0 -0
- package/dist/entroplain-0.2.0.tar.gz +0 -0
package/README.md
CHANGED
@@ -1,476 +1,478 @@
|
# Entroplain

**Entropy-based early exit for efficient agent reasoning.**

Stop burning tokens. Know when your agent has finished thinking.

🌐 **Website:** https://entroplain.vercel.app/

---

## What It Does

Entroplain monitors your LLM's **predictive entropy** — the uncertainty in its output distribution — to detect when reasoning has converged.

```text
High entropy → Model is searching, exploring, uncertain
Low entropy  → Model is confident, converged, ready to output
```

**Key insight:** Reasoning follows a multi-modal entropy trajectory. Local minima ("valleys") mark reasoning milestones. Exit at the right valley and save 40-60% of compute with minimal accuracy loss.

---

## Quick Start

### Install

```bash
# Python (pip)
pip install entroplain

# Node.js (npm)
npm install entroplain
```

### Requirements

**Python:** 3.8+

**Node.js:** 18+

**For cloud providers:** Set API keys via environment variables:

```bash
export OPENAI_API_KEY=sk-...
export ANTHROPIC_API_KEY=sk-ant-...
export NVIDIA_API_KEY=nvapi-...
```

**For local models:** Install [Ollama](https://ollama.ai) or [llama.cpp](https://github.com/ggerganov/llama.cpp).

---
## 🚀 Works With Any Agent (Proxy Method)

The **proxy** is the easiest way to use Entroplain with OpenClaw, Claude Code, or any other agent framework:

### How It Works

```text
Your Agent → Proxy (localhost:8765) → Real API
                  │
                  ▼
           Entropy Monitor
                  │
                  ▼
          Early Exit Check
```

The proxy intercepts all LLM API calls, monitors entropy, and terminates streams when reasoning converges.

### Setup (One-Time)

```bash
# Install with proxy support (quoted so the brackets survive shells like zsh)
pip install "entroplain[proxy]"

# Start the proxy
entroplain-proxy --port 8765 --log-entropy

# Point your agent to the proxy
export OPENAI_BASE_URL=http://localhost:8765/v1

# or for NVIDIA:
export NVIDIA_BASE_URL=http://localhost:8765/v1

# or for Anthropic:
export ANTHROPIC_BASE_URL=http://localhost:8765/v1
```

That's it! Now run your agent normally and entropy monitoring is automatic.

### Proxy Options

```bash
# Monitor only, don't exit early
entroplain-proxy --port 8765 --no-early-exit

# Custom thresholds
entroplain-proxy --port 8765 --entropy-threshold 0.2 --min-valleys 3

# Enable cost tracking
entroplain-proxy --port 8765 --model gpt-4o --log-entropy

# Launch dashboard
entroplain-dashboard --port 8050
```

---

## 🎯 Dashboard

Real-time entropy visualization:

```bash
# Start the dashboard
entroplain-dashboard --port 8050

# Open in browser
open http://localhost:8050
```

The dashboard shows:
- **Live entropy curve** with valley markers
- **Token count** and valleys detected
- **Cost savings** in real-time
- **Status badges** (active/idle/exited)

---

## 💰 Cost Tracking

Track actual savings from early exit:

```python
from entroplain import CostTracker

tracker = CostTracker(model="gpt-4o")
tracker.track_input(100)        # 100 input tokens
tracker.track_output(50)        # 50 output tokens
tracker.set_full_estimate(150)  # Would have been 150 without early exit

estimate = tracker.get_estimate()
print(f"Saved ${estimate.cost_saved_usd:.4f} ({estimate.savings_percent:.1f}%)")
```

**Supported pricing:** GPT-4o, GPT-4-turbo, Claude 4, Llama 3.1 (NVIDIA), or custom rates.
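For intuition, the savings arithmetic behind `get_estimate()` can be sketched in plain Python. The per-token rate below is a made-up illustrative constant, not Entroplain's actual pricing table:

```python
# Illustrative sketch of the savings calculation.
# The rate is a hypothetical example, not a real price.
PRICE_PER_OUTPUT_TOKEN = 10.00 / 1_000_000  # assume $10 per 1M output tokens

actual_output = 50    # tokens generated before early exit
full_estimate = 150   # tokens the model would likely have generated

tokens_saved = full_estimate - actual_output
cost_saved = tokens_saved * PRICE_PER_OUTPUT_TOKEN
savings_percent = 100 * tokens_saved / full_estimate

print(f"Saved ${cost_saved:.4f} ({savings_percent:.1f}%)")  # → Saved $0.0010 (66.7%)
```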

---

## Direct Usage (Python)

If you want more control, use Entroplain directly:

```python
from entroplain import EntropyMonitor, NVIDIAProvider

monitor = EntropyMonitor()
provider = NVIDIAProvider()

for token in provider.stream_with_entropy(
    model="meta/llama-3.1-70b-instruct",
    messages=[{"role": "user", "content": "Solve: x^2 = 16"}]
):
    monitor.track(token.token, token.entropy)
    print(token.token, end="")

    if monitor.should_exit():
        print("\n[Early exit - reasoning converged]")
        break

print(f"\nStats: {monitor.get_stats()}")
```

---

## How It Works

### 1. Track Entropy Per Token

Every token has an entropy value derived from the model's output distribution:

```python
entropy = -sum(p * log2(p) for p in probabilities if p > 0)
```

### 2. Detect Valleys

Local minima in the entropy trajectory indicate reasoning milestones:

```text
Entropy: 0.8 → 0.6 → 0.3* → 0.5 → 0.2* → 0.1*
                     ↑            ↑
                 Valley 1      Valley 2
```

### 3. Exit at the Right Moment

When the valley count plateaus and the entropy velocity stabilizes, reasoning is complete.
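The first two steps can be sketched in a few lines of self-contained Python. This is an illustration of the idea, not Entroplain's internal implementation:

```python
from math import log2

def token_entropy(probabilities):
    """Shannon entropy (in bits) of one token's output distribution."""
    return -sum(p * log2(p) for p in probabilities if p > 0)

def find_valleys(entropies):
    """Indices where entropy is a strict local minimum ("valleys")."""
    return [
        i for i in range(1, len(entropies) - 1)
        if entropies[i] < entropies[i - 1] and entropies[i] < entropies[i + 1]
    ]

trace = [0.8, 0.6, 0.3, 0.5, 0.2, 0.4, 0.1]
print(find_valleys(trace))        # → [2, 4]
print(token_entropy([0.5, 0.5]))  # → 1.0
```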

---

## Exit Strategies

Choose how Entroplain detects convergence:

| Strategy | Description |
|----------|-------------|
| `combined` | Entropy low OR valleys plateau, AND velocity stable (default) |
| `valleys_plateau` | Exit when reasoning milestones stabilize |
| `entropy_drop` | Exit when model confidence is high |
| `velocity_zero` | Exit when entropy stops changing |
| `repetition` | Exit when the model starts repeating itself |
| `confidence` | Exit when top token prob > 95% for N tokens |

```python
monitor = EntropyMonitor(
    exit_condition="repetition",  # or "confidence", "combined", etc.
    repetition_threshold=0.3,     # Exit when 30% of recent tokens repeat
)
```
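As a rough illustration of the `combined` strategy's logic — the thresholds and the window-based velocity heuristic here are simplified stand-ins, not the library's exact rules:

```python
def should_exit(entropies, valleys_seen,
                entropy_threshold=0.15, min_valleys=2,
                velocity_threshold=0.05, min_tokens=50):
    """Illustrative 'combined' check: enough tokens, AND
    (entropy low OR enough valleys), AND entropy velocity stable."""
    if len(entropies) < min_tokens:
        return False
    recent = entropies[-5:]
    velocity = max(recent) - min(recent)  # crude rate-of-change proxy
    converged = entropies[-1] < entropy_threshold or valleys_seen >= min_valleys
    return converged and velocity < velocity_threshold

# A run whose entropy settles into a low, flat tail should trigger an exit
flat_tail = [0.5] * 45 + [0.12, 0.11, 0.11, 0.10, 0.10]
print(should_exit(flat_tail, valleys_seen=3))  # → True
```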

---

## Experimental Evidence

Tested on Llama-3.1-70b via the NVIDIA API:

| Difficulty | Avg Valleys | Avg Entropy | Avg Velocity |
|------------|-------------|-------------|--------------|
| Easy | 61.3 | 0.3758 | 0.4852 |
| Medium | 53.0 | 0.3267 | 0.4394 |
| Hard | 70.2 | 0.2947 | 0.4095 |

**Finding:** Hard problems have more entropy valleys (70.2 vs 61.3) — valleys correlate with reasoning complexity.

---

## Platform Support

| Platform | Support | How to Enable |
|----------|---------|---------------|
| **Local (llama.cpp, Ollama)** | ✅ Full | Built-in, no config |
| **OpenAI** | ✅ Yes | `logprobs: true` |
| **Anthropic Claude** | ✅ Yes (Claude 4) | `logprobs: True` |
| **Google Gemini** | ✅ Yes | `response_logprobs=True` |
| **NVIDIA NIM** | ✅ Yes | `logprobs: true` |
| **OpenRouter** | ⚠️ Partial | ~23% of models support it |

---

## Integration Examples

### OpenAI / NVIDIA / OpenRouter

```python
from openai import OpenAI
from entroplain import EntropyMonitor

client = OpenAI()
monitor = EntropyMonitor()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Solve this step by step..."}],
    logprobs=True,
    top_logprobs=5,
    stream=True
)

for chunk in response:
    if chunk.choices[0].delta.content:
        token = chunk.choices[0].delta.content
        entropy = monitor.calculate_entropy(chunk.choices[0].logprobs)
        monitor.track(token, entropy)  # feed the monitor so should_exit() has data

        if monitor.should_exit():
            print("\n[Early exit — reasoning converged]")
            break

        print(token, end="")
```
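The `calculate_entropy` step above can be approximated directly from OpenAI-style `top_logprobs` values. This is a hedged sketch: it assumes that ignoring the probability mass outside the top-k candidates is acceptable, which slightly underestimates the true entropy:

```python
from math import exp, log2

def entropy_from_top_logprobs(top_logprobs):
    """Approximate token entropy (bits) from top-k natural-log probabilities,
    as returned by OpenAI-compatible APIs. Tail mass beyond the top-k
    is ignored, so this is a lower-bound estimate."""
    probs = [exp(lp) for lp in top_logprobs]
    return -sum(p * log2(p) for p in probs if p > 0)

# A confident token: one dominant candidate → near-zero entropy
print(entropy_from_top_logprobs([-0.01, -5.0, -6.0]))
```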

### Ollama (Local)

```python
import ollama
from entroplain import EntropyMonitor

monitor = EntropyMonitor()

response = ollama.generate(
    model="llama3.1",
    prompt="Think through this carefully...",
    options={"num_ctx": 4096}
)

for token_data in response.get("token_probs", []):
    entropy = monitor.calculate_from_logits(token_data["logits"])
    monitor.track(token_data["token"], entropy)
```

### Anthropic Claude

```python
from anthropic import Anthropic
from entroplain import EntropyMonitor

client = Anthropic()
monitor = EntropyMonitor()

with client.messages.stream(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Analyze this..."}],
) as stream:
    for text in stream.text_stream:
        entropy = monitor.get_entropy()

        if monitor.should_exit():
            break

        print(text, end="", flush=True)
```

---

## CLI

```bash
# Analyze a prompt's entropy trajectory
entroplain analyze "What is 2+2?" --model gpt-4o

# Stream with early exit
entroplain stream "Explain quantum computing" --exit-on-converge

# Run the proxy (works with any agent)
entroplain-proxy --port 8765 --log-entropy --model gpt-4o

# Launch the dashboard
entroplain-dashboard --port 8050

# Benchmark entropy patterns
entroplain benchmark --problems gsm8k --output results.json
```

---

## API Reference

### `EntropyMonitor`

```python
from typing import Dict, List, Tuple

class EntropyMonitor:
    def __init__(
        self,
        entropy_threshold: float = 0.15,
        min_valleys: int = 2,
        velocity_threshold: float = 0.05,
        min_tokens: int = 50,
        exit_condition: str = "combined"
    ):
        ...

    def track(self, token: str, entropy: float, confidence: float = 0.0) -> EntropyPoint:
        """Track a token and its entropy value."""

    def should_exit(self) -> bool:
        """Determine if reasoning has converged."""

    def get_valleys(self) -> List[Tuple[int, float]]:
        """Get all entropy valleys (local minima)."""

    def get_stats(self) -> Dict:
        """Get current statistics."""

    def reset(self) -> None:
        """Clear all tracked data."""
```

### `CostTracker`

```python
class CostTracker:
    def __init__(self, model: str = "default"):
        ...

    def track_input(self, tokens: int):
        """Track input tokens."""

    def track_output(self, tokens: int):
        """Track output tokens."""

    def set_full_estimate(self, tokens: int):
        """Set the estimated output length had there been no early exit."""

    def get_estimate(self) -> CostEstimate:
        """Get a cost estimate with savings."""
```

### `EntropyProxy`

```bash
# Run the proxy
entroplain-proxy --port 8765 --log-entropy --model gpt-4o

# Options
--entropy-threshold 0.15   # Exit threshold
--min-valleys 2            # Minimum valleys
--no-early-exit            # Monitor only, don't exit
--log-entropy              # Log entropy values
--model gpt-4o             # Model for cost tracking
--no-cost-tracking         # Disable cost tracking
```

---

## Research

### Paper

See [`paper.md`](./paper.md) for the full research proposal:

**"Entropy-Based Early Exit for Efficient Agent Reasoning"**

### Key Findings

1. **H1 Supported:** Entropy valleys correlate with reasoning complexity (70.2 valleys for hard problems vs 61.3 for easy)
2. **H2 Supported:** Entropy velocity differs by difficulty (0.4852 easy vs 0.4095 hard)
3. **Potential:** 40-60% compute reduction with 95%+ accuracy retention

### Citation

```bibtex
@software{entroplain2026,
  title  = {Entroplain: Entropy-Based Early Exit for Efficient Agent Reasoning},
  author = {Entroplain Contributors},
  year   = {2026},
  url    = {https://github.com/entroplain/entroplain}
}
```

---

## Contributing

We welcome contributions! See [CONTRIBUTING.md](./CONTRIBUTING.md) for guidelines.

### Development Setup

```bash
git clone https://github.com/entroplain/entroplain.git
cd entroplain
pip install -e ".[dev]"
pytest
```

---

## License

MIT License — see [LICENSE](./LICENSE) for details.

---

## Links

- **PyPI:** https://pypi.org/project/entroplain/
- **npm:** https://www.npmjs.com/package/entroplain
- **GitHub:** https://github.com/entroplain/entroplain
- **Issues:** https://github.com/entroplain/entroplain/issues

---

## Acknowledgments

- Research inspired by early exit architectures in transformers
- Experimental validation using the NVIDIA NIM API
- Built for the agent-first future of AI