cc-context-stats 1.11.1 → 1.12.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.backup.md CHANGED
@@ -1,98 +1,62 @@
1
1
  <div align="center">
2
2
  <img src="assets/logo/logo-full.svg" alt="cc-context-stats" width="320"/>
3
3
 
4
- <h1>Stop Shipping from a Half-Blind Model</h1>
4
+ <h3>Keep your model sharp. Ship with confidence.</h3>
5
5
 
6
- <p><strong>Real-time model intelligence monitoring for Claude Code.</strong><br/>Know exactly when your model is at peak quality and when it's time for a fresh session.</p>
6
+ <p>Real-time model intelligence monitoring for Claude Code — know exactly when your model is at peak quality and when it's time for a fresh session.</p>
7
7
 
8
8
  [![PyPI version](https://img.shields.io/pypi/v/cc-context-stats)](https://pypi.org/project/cc-context-stats/)
9
9
  [![npm version](https://img.shields.io/npm/v/cc-context-stats)](https://www.npmjs.com/package/cc-context-stats)
10
10
  [![PyPI Downloads](https://img.shields.io/pypi/dm/cc-context-stats)](https://pypi.org/project/cc-context-stats/)
11
11
  [![npm Downloads](https://img.shields.io/npm/dm/cc-context-stats)](https://www.npmjs.com/package/cc-context-stats)
12
- [![GitHub stars](https://img.shields.io/github/stars/luongnv89/cc-context-stats)](https://github.com/luongnv89/cc-context-stats)
13
12
  [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
14
13
 
15
- [**Get Started in 60 Seconds →**](#installation)
16
-
17
14
  </div>
18
15
 
19
- ---
16
+ **Always use Claude at its best** — monitor model intelligence in real-time so you know exactly when quality starts to drop.
20
17
 
21
18
  ![Context Stats - Model Intelligence](images/1.10/1.10.0-model-intelligence.png)
22
19
 
23
- ## The Problem
24
-
25
- You're deep into a Claude Code session — refactoring, debugging, shipping. Everything feels fine. But behind the scenes:
26
-
27
- - **Your model is getting dumber and you can't see it.** Research shows LLM retrieval accuracy drops as the context window fills. Claude starts missing details, hallucinating references, and losing track of your codebase — silently.
28
- - **You don't know when to start fresh.** Is 50% context usage safe? 70%? It depends on the model. Opus holds quality longer than Sonnet, which degrades faster than Haiku. Without data, you're guessing.
29
- - **Wasted sessions cost real money.** Pushing through a degraded context means more back-and-forth, more corrections, more tokens burned on worse output. You pay more for less.
30
-
31
- You can't fix what you can't measure.
32
-
33
- ## How cc-context-stats Fixes This
34
-
35
- cc-context-stats gives you a **Model Intelligence (MI) score** — a single number from 1.000 to 0.000 that tells you how sharp your model is right now, calibrated from Anthropic's [MRCR v2 8-needle](https://docs.anthropic.com/) retrieval benchmark.
36
-
37
- - **One glance, full picture** — MI score lives in your Claude Code status bar. Green means sharp. Yellow means degrading. Red means stop and start fresh.
38
- - **Per-model awareness** — Opus (beta=1.8) retains quality longest. Sonnet (beta=1.5) is moderate. Haiku (beta=1.2) degrades earliest. MI reflects your actual model automatically.
39
- - **Live dashboard** — ASCII graphs track context growth, MI degradation, and token I/O over time. Watch quality erode in real-time so you can make informed decisions.
40
- - **Zero config, zero dependencies** — Install in one command. Works with pip, npm, or a shell script. No API keys, no network calls. All data stays local.
41
- - **Context zones** — Five-state indicators tell you where you stand:
42
-
43
- | Zone | Indicator | Color | What It Means |
44
- | --- | --- | --- | --- |
45
- | **Planning** | Plan | Green | Safe to plan and code |
46
- | **Code-only** | Code | Yellow | Avoid starting new plans |
47
- | **Dump zone** | Dump | Orange | Quality declining — finish up |
48
- | **Hard limit** | ExDump | Dark red | Start a new session now |
49
- | **Dead zone** | Dead | Gray | Nothing productive here |
50
-
51
- [**Install and See Your MI Score →**](#installation)
52
-
53
- ## How It Works
54
-
55
- 1. **Install** — One command: `pip install cc-context-stats` or `npm install -g cc-context-stats`
56
- 2. **Configure** — Add the statusline command to `~/.claude/settings.json` (two lines of JSON)
57
- 3. **Restart Claude Code** — MI score and context stats appear in your status bar immediately
58
- 4. **Monitor** — Run `context-stats <session_id>` for a live dashboard with graphs, zone status, and session summary
59
-
60
- | Status Bar (green — model is sharp) | Status Bar (yellow — quality degrading) |
61
- |:---:|:---:|
62
- | ![Green](images/1.10/statusline-green.png) | ![Yellow](images/1.10/1.10-statusline.png) |
63
-
64
- | Delta Graph | Cumulative Graph |
65
- |:---:|:---:|
66
- | ![Delta](images/1.10/1.10-delta.png) | ![Cumulative](images/1.10/1.10-cumulative.png) |
20
+ ## Why Context Stats?
67
21
 
68
- [**See Full CLI Options →**](#context-stats-cli)
22
+ Research shows that LLM quality degrades as the context window fills up — even the best models lose retrieval accuracy at longer contexts. But you can't see this happening. Context Stats makes it visible:
69
23
 
70
- ## Model Intelligence The Science
24
+ - **Model Intelligence (MI)** - A benchmark-calibrated score (1.000 → 0.000) that tracks how much quality has degraded, with per-model profiles for Opus, Sonnet, and Haiku
25
+ - **Know your zone** - See if you're in the Smart Zone, Dumb Zone, or Wrap Up Zone
26
+ - **Track context usage** - Real-time monitoring with live-updating ASCII graphs
27
+ - **Get early warnings** - Color-coded alerts tell you when to start a fresh session
28
+ - **Per-model awareness** - Opus retains quality longest, Sonnet is moderate, and Haiku degrades earliest. MI reflects this automatically
71
29
 
72
- MI isn't a guess. It's derived from `MI(u) = max(0, 1 - u^beta)` where `u` is context utilization and `beta` is a model-specific degradation rate calibrated against Anthropic's MRCR v2 8-needle long-context retrieval benchmark.
73
-
74
- | Model | Beta | MI at 50% Context | MI at 75% Context | When to Worry |
75
- |-------|------|-----------|-----------|---------------|
76
- | Opus | 1.8 | 0.713 | 0.404 | ~60% used |
77
- | Sonnet| 1.5 | 0.646 | 0.350 | ~50% used |
78
- | Haiku | 1.2 | 0.565 | 0.292 | ~45% used |
30
+ ## Context Zones
79
31
 
80
- The model is auto-detected from your session. See [Model Intelligence docs](docs/MODEL_INTELLIGENCE.md) for the full formula and benchmark data.
32
+ | Zone | Context Used | Status | What It Means |
33
+ | ------------------- | ------------ | -------- | --------------------------------------------- |
34
+ | 🟢 **Smart Zone** | < 40% | Optimal | Claude is performing at its best |
35
+ | 🟡 **Dumb Zone** | 40-80% | Degraded | Context getting full, Claude may miss details |
36
+ | 🔴 **Wrap Up Zone** | > 80% | Critical | Better to wrap up and start a new session |
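The thresholds in the table above can be sketched as a tiny lookup — a minimal illustration, not the package's actual API (the function name is invented here):

```python
def context_zone(used_tokens: int, total_tokens: int) -> str:
    """Map context utilization to the zones in the table above."""
    u = used_tokens / total_tokens if total_tokens else 0.0
    if u < 0.40:
        return "Smart Zone"   # optimal: Claude is performing at its best
    if u <= 0.80:
        return "Dumb Zone"    # degraded: context getting full
    return "Wrap Up Zone"     # critical: wrap up and start a new session

# e.g. with a 200k window, 50k tokens used (25%) is still the Smart Zone
```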
81
37
 
82
38
  ## Installation
83
39
 
84
- ### Shell Script (quickest)
40
+ ### Shell Script
41
+
42
+ For the quickest setup:
85
43
 
86
44
  ```bash
87
45
  curl -fsSL https://raw.githubusercontent.com/luongnv89/cc-context-stats/main/install.sh | bash
88
46
  ```
89
47
 
90
- ### npm
48
+ ### npm
91
49
 
92
50
  ```bash
93
51
  npm install -g cc-context-stats
94
52
  ```
95
53
 
54
+ Or with yarn:
55
+
56
+ ```bash
57
+ yarn global add cc-context-stats
58
+ ```
59
+
96
60
  ### Python
97
61
 
98
62
  ```bash
@@ -107,13 +71,23 @@ uv pip install cc-context-stats
107
71
 
108
72
  ### Verify Installation
109
73
 
74
+ After installing via any method, verify that both the statusline and context-stats CLI are working:
75
+
110
76
  ```bash
111
77
  curl -fsSL https://raw.githubusercontent.com/luongnv89/cc-context-stats/main/scripts/check-install.sh | bash
112
78
  ```
113
79
 
114
- ### Quick Start
80
+ Or if you cloned the repo:
81
+
82
+ ```bash
83
+ ./scripts/check-install.sh
84
+ ```
85
+
86
+ ## Quick Start
87
+
88
+ ### Status Line Integration
115
89
 
116
- Add to `~/.claude/settings.json`:
90
+ Add to `~/.claude/settings.json` (the command depends on how you installed):
117
91
 
118
92
  **pip or npm install:**
119
93
  ```json
@@ -135,40 +109,37 @@ Add to `~/.claude/settings.json`:
135
109
  }
136
110
  ```
137
111
 
138
- Restart Claude Code. MI score and context stats appear in your status bar immediately.
112
+ Restart Claude Code to see real-time model intelligence and context stats in your status bar.
139
113
 
140
- ### Real-Time Dashboard
114
+ ### Real-Time Monitoring
141
115
 
142
- Get your session ID from the status line (the last part after the pipe `|`), then:
116
+ Get your session ID from the status line (the last part after the pipe `|`), then run:
143
117
 
144
118
  ```bash
145
119
  context-stats <session_id>
146
120
  ```
147
121
 
122
+ For example:
123
+
124
+ ```bash
125
+ context-stats abc123def-456-789
148
126
  ```
149
- Context Stats (my-project • abc123def)
150
127
 
151
- Context Growth Per Interaction
152
- Max: 4,787 Min: 0 Points: 254
153
- ...graph...
128
+ This opens a live dashboard that refreshes every 2 seconds, showing:
154
129
 
155
- Session Summary
156
- ----------------------------------------------------------------------------
157
- Context Remaining: 43,038/200,000 (21%)
158
- >>> DUMB ZONE <<< (You are in the dumb zone - Dex Horthy says so)
159
- Model Intelligence: 0.646 (Context pressure building, consider wrapping up)
160
- Context: 79% used
130
+ - Your current project and session
131
+ - Context growth per interaction graph
132
+ - Model Intelligence degradation over time
133
+ - Your current zone status and MI score
134
+ - Remaining context percentage
161
135
 
162
- Last Growth: +2,500
163
- Input Tokens: 1,234
164
- Output Tokens: 567
165
- Lines Changed: +45 / -12
166
- Total Cost: $0.1234
167
- Model: claude-sonnet-4-6
168
- Session Duration: 2h 29m
169
- ```
136
+ Press `Ctrl+C` to exit.
170
137
 
171
- [**See All Graph Types and Options →**](#context-stats-cli)
138
+ ### Graph Types
139
+
140
+ | Delta (default) | Cumulative |
141
+ |:---:|:---:|
142
+ | ![Delta](images/1.10/1.10-delta.png) | ![Cumulative](images/1.10/1.10-cumulative.png) |
172
143
 
173
144
  ## Context Stats CLI
174
145
 
@@ -185,41 +156,38 @@ context-stats explain # Diagnostic dump (pipe JSON to stdin)
185
156
  context-stats --version # Show version
186
157
  ```
187
158
 
188
- ## FAQ
189
-
190
- **Is it free?**
191
- Yes. MIT licensed, zero dependencies, free forever. [See the license](LICENSE).
192
-
193
- **Does it send my data anywhere?**
194
- No. All data stays local in `~/.claude/statusline/`. No network requests, no telemetry, no API keys required.
195
-
196
- **Is it actively maintained?**
197
- Very. 11 releases since January 2025, with MI per-model profiles, configurable colors, state rotation, and cross-implementation parity tests all shipped in the last few months.
198
-
199
- **How does it compare to just watching the context counter?**
200
- The raw context counter tells you how full the window is. MI tells you how much quality you've lost — which depends on the model. 50% context on Opus (MI: 0.713) is fine. 50% on Haiku (MI: 0.565) means you should start wrapping up. cc-context-stats gives you the nuance.
159
+ ### Output Example
201
160
 
202
- **Can I use it with Opus, Sonnet, and Haiku?**
203
- Yes. MI auto-detects your model and applies the correct degradation curve. Each model has a calibrated beta value from benchmark data.
204
-
205
- **What runtimes does it support?**
206
- Python (pip), Node.js (npm), or pure Bash. The statusline scripts are implemented in all three languages so you can use whichever runtime you have available.
207
-
208
- **How do I customize colors?**
209
- Create `~/.claude/statusline.conf` with named colors or hex codes. See [Configuration docs](docs/configuration.md) for all options.
161
+ ```
162
+ Context Stats (my-project • abc123def)
210
163
 
211
- ## Start Shipping with Confidence
164
+ Context Growth Per Interaction
165
+ Max: 4,787 Min: 0 Points: 254
166
+ ...graph...
212
167
 
213
- You wouldn't deploy without monitoring your servers. Don't code without monitoring your model.
168
+ Session Summary
169
+ ----------------------------------------------------------------------------
170
+ Context Remaining: 43,038/200,000 (21%)
171
+ >>> DUMB ZONE <<< (You are in the dumb zone - Dex Horthy says so)
172
+ Model Intelligence: 0.646 (Context pressure building, consider wrapping up)
173
+ Context: 79% used
214
174
 
215
- cc-context-stats is MIT licensed, has zero dependencies, installs in one command, and works with any Claude Code setup. If you don't like it, `pip uninstall cc-context-stats` and it's gone.
175
+ Last Growth: +2,500
176
+ Input Tokens: 1,234
177
+ Output Tokens: 567
178
+ Lines Changed: +45 / -12
179
+ Total Cost: $0.1234
180
+ Model: claude-sonnet-4-6
181
+ Session Duration: 2h 29m
182
+ ```
216
183
 
217
- [**Install cc-context-stats Now →**](#installation)
184
+ ## Status Line
218
185
 
219
- ---
186
+ Colors change based on MI score and context utilization — green when the model is sharp, yellow as quality degrades:
220
187
 
221
- <details>
222
- <summary><strong>Status Line Components</strong></summary>
188
+ | MI >= 0.90 (green) | MI < 0.90 (yellow) |
189
+ |:---:|:---:|
190
+ | ![Green](images/1.10/statusline-green.png) | ![Yellow](images/1.10/1.10-statusline.png) |
223
191
 
224
192
  The status line shows at-a-glance metrics in your Claude Code interface:
225
193
 
@@ -232,12 +200,7 @@ The status line shows at-a-glance metrics in your Claude Code interface:
232
200
  | Git | Branch name and uncommitted changes |
233
201
  | Session | Session ID for correlation |
234
202
 
235
- Colors change based on MI score and context utilization — green when the model is sharp, yellow as quality degrades.
236
-
237
- </details>
238
-
239
- <details>
240
- <summary><strong>Configuration</strong></summary>
203
+ ## Configuration
241
204
 
242
205
  Create `~/.claude/statusline.conf`:
243
206
 
@@ -254,33 +217,25 @@ mi_curve_beta=0 # Use model-specific profile (0=auto, or set custom beta)
254
217
  color_green=#7dcfff
255
218
  color_red=#f7768e
256
219
  color_yellow=bright_yellow
257
-
258
- # Per-property colors (override individual elements)
259
- color_context_length=bold_white # Context remaining
260
- color_project_name=cyan # Project directory
261
- color_branch_name=green # Git branch
262
- color_mi_score=yellow # MI score
263
- color_separator=dim # Model, delta, session
264
220
  ```
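The config format above is plain `key=value` lines with `#` comments. A minimal reader for such a file might look like this (a hypothetical sketch, not the package's own parser) — note that hex colors like `#7dcfff` contain `#`, so only a whitespace-preceded `#` can start a comment:

```python
import re

def parse_conf(text: str) -> dict:
    """Parse key=value lines, skipping blanks and # comments.

    Only '#' preceded by whitespace starts a comment, so values
    like color_green=#7dcfff survive intact.
    """
    conf = {}
    for line in text.splitlines():
        line = re.split(r"\s+#", line, maxsplit=1)[0].strip()
        if "=" in line:
            key, value = line.split("=", 1)
            conf[key.strip()] = value.strip()
    return conf

# parse_conf("color_green=#7dcfff\nmi_curve_beta=0 # auto")
# keeps the hex value but drops the inline comment
```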
265
221
 
266
- </details>
222
+ ## Model Intelligence (MI)
267
223
 
268
- <details>
269
- <summary><strong>Migration from cc-statusline</strong></summary>
224
+ MI estimates how well the model will perform at your current context fill level, calibrated from the [MRCR v2 8-needle](https://docs.anthropic.com/) long-context retrieval benchmark. It is computed as `MI(u) = max(0, 1 - u^beta)`, where `u` is context utilization and `beta` is a model-specific degradation rate. The score drops from 1.000 (fresh context) to 0.000 (full context):
270
225
 
271
- If you were using the previous `cc-statusline` package:
226
+ | Model | Beta | MI at 50% | MI at 75% | When to worry |
227
+ |-------|------|-----------|-----------|---------------|
228
+ | Opus | 1.8 | 0.713 | 0.404 | ~60% used |
229
+ | Sonnet | 1.5 | 0.646 | 0.350 | ~50% used |
230
+ | Haiku | 1.2 | 0.565 | 0.292 | ~45% used |
272
231
 
273
- ```bash
274
- pip uninstall cc-statusline
275
- pip install cc-context-stats
276
- ```
232
+ The model is auto-detected from your session. See [Model Intelligence docs](docs/MODEL_INTELLIGENCE.md) for the full formula and benchmark data.
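The formula and the beta values from the table translate directly into code. A minimal sketch (function and dict names are illustrative, not the package's API):

```python
# Per-model degradation rates from the table above
BETAS = {"opus": 1.8, "sonnet": 1.5, "haiku": 1.2}

def model_intelligence(used_tokens: int, total_tokens: int, model: str) -> float:
    """MI(u) = max(0, 1 - u**beta), where u is context utilization."""
    u = used_tokens / total_tokens if total_tokens else 0.0
    beta = BETAS.get(model, 1.5)  # assume a Sonnet-like default when unknown
    return max(0.0, 1.0 - u ** beta)

# Reproduces the table: Sonnet at 50% -> ~0.646, Opus at 75% -> ~0.404
```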
277
233
 
278
- The `claude-statusline` command still works. The main change is `token-graph` is now `context-stats`.
234
+ ## How It Works
279
235
 
280
- </details>
236
+ Context Stats hooks into Claude Code's status line feature to track token usage across your sessions. The Python and Node.js statusline scripts write state data to local CSV files, which the context-stats CLI reads to render live graphs. Data is stored locally in `~/.claude/statusline/` and never sent anywhere.
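Since the statusline contract is just "read session JSON on stdin, write a formatted line to stdout," a compatible script can be very small. A sketch under assumed field names (`used_tokens`, `total_tokens`, `project` are illustrative — the real hook receives Claude Code's own JSON schema):

```python
def render_statusline(payload: dict) -> str:
    """Render a one-line status string from a session payload.

    Field names here are illustrative; the real statusline scripts
    read Claude Code's own JSON schema piped in on stdin.
    """
    used = payload.get("used_tokens", 0)
    total = payload.get("total_tokens", 200_000)
    free = total - used
    free_pct = 100.0 * free / total if total else 0.0
    return f"{payload.get('project', '?')} | {free:,} free ({free_pct:.1f}%)"

# In the real hook Claude Code pipes JSON on stdin, e.g.:
#   print(render_statusline(json.load(sys.stdin)))
example = render_statusline(
    {"project": "my-project", "used_tokens": 150_000, "total_tokens": 200_000}
)
# example == "my-project | 50,000 free (25.0%)"
```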
281
237
 
282
- <details>
283
- <summary><strong>Documentation</strong></summary>
238
+ ## Documentation
284
239
 
285
240
  - [Installation Guide](docs/installation.md) - Platform-specific setup (shell, pip, npm)
286
241
  - [Context Stats Guide](docs/context-stats.md) - Detailed CLI usage guide
@@ -294,25 +249,22 @@ The `claude-statusline` command still works. The main change is `token-graph` is
294
249
  - [Troubleshooting](docs/troubleshooting.md) - Common issues and solutions
295
250
  - [Changelog](CHANGELOG.md) - Version history
296
251
 
297
- </details>
298
-
299
- <details>
300
- <summary><strong>Contributing</strong></summary>
252
+ ## Contributing
301
253
 
302
254
  Contributions are welcome! Please read our [Contributing Guide](CONTRIBUTING.md) for details on the development setup, branching strategy, and PR process.
303
255
 
304
256
  This project follows the [Contributor Covenant Code of Conduct](CODE_OF_CONDUCT.md).
305
257
 
306
- </details>
307
-
308
- <details>
309
- <summary><strong>How It Works (Architecture)</strong></summary>
258
+ ## Migration from cc-statusline
310
259
 
311
- Context Stats hooks into Claude Code's status line feature to track token usage across your sessions. The Python and Node.js statusline scripts write state data to local CSV files, which the context-stats CLI reads to render live graphs. Data is stored locally in `~/.claude/statusline/` and never sent anywhere.
260
+ If you were using the previous `cc-statusline` package:
312
261
 
313
- The statusline is implemented in three languages (Bash, Python, Node.js) so you can choose whichever runtime you have available. Claude Code invokes the statusline script via stdin JSON pipe — any implementation that reads JSON from stdin and writes formatted text to stdout works.
262
+ ```bash
263
+ pip uninstall cc-statusline
264
+ pip install cc-context-stats
265
+ ```
314
266
 
315
- </details>
267
+ The `claude-statusline` command still works. The main change is that the `token-graph` command has been renamed to `context-stats`.
316
268
 
317
269
  ## Related
318
270
 
package/README.md CHANGED
@@ -48,6 +48,20 @@ cc-context-stats gives you a **Model Intelligence (MI) score** — a single numb
48
48
  | **Hard limit** | ExDump | Dark red | Start a new session now |
49
49
  | **Dead zone** | Dead | Gray | Nothing productive here |
50
50
 
51
+ **Plan zone** (green — safe to plan and code):
52
+
53
+ ![Plan zone](images/1.12.0/plan-zone.png)
54
+
55
+ **Code zone** (yellow — avoid starting new plans):
56
+
57
+ ![Code zone](images/1.12.0/code-zone.png)
58
+
59
+ **Dump zone** (orange — quality declining):
60
+
61
+ ![Dump zone](images/1.12.0/dump-zone.png)
62
+
63
+ No screenshots for ExDump and Dead zones — I don't dare go that far into a session.
64
+
51
65
  [**Install and See Your MI Score →**](#installation)
52
66
 
53
67
  ## How It Works
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "cc-context-stats",
3
- "version": "1.11.1",
3
+ "version": "1.12.1",
4
4
  "description": "Monitor your Claude Code session context in real-time - track token usage and never run out of context",
5
5
  "main": "scripts/statusline.js",
6
6
  "bin": {
@@ -571,21 +571,18 @@ process.stdin.on('end', () => {
571
571
  ? freeTokens.toLocaleString('en-US')
572
572
  : `${(freeTokens / 1000).toFixed(1)}k`;
573
573
 
574
- // Color based on MI thresholds (consistent with MI display)
575
- const ctxUtil = totalSize > 0 ? usedTokens / totalSize : 0;
576
- const ctxMI = computeMI(usedTokens, totalSize, modelId, miCurveBeta);
577
- const ctxColor = getMIColor(ctxMI.mi, ctxUtil, cGreen, cYellow, cRed);
574
+ // Zone indicator determines color for both context info and zone label
575
+ const zoneResult = getContextZone(usedTokens, totalSize);
576
+ const zoneAnsi = zoneAnsiColor(zoneResult.colorName);
578
577
 
579
- // Use per-property context_length color if configured, else MI-based color
580
- const effectiveCtxColor = c.context_length || ctxColor;
578
+ // Context info uses zone color (traffic-light), with per-property override
579
+ const effectiveCtxColor = c.context_length || zoneAnsi;
581
580
 
582
581
  contextInfo = ` | ${effectiveCtxColor}${freeDisplay} (${freePct.toFixed(1)}%)${RESET}`;
583
582
 
584
- // Always show zone indicator
585
- const zoneResult = getContextZone(usedTokens, totalSize);
586
- // Use per-property zone color if configured, else dynamic zone color
587
- const zoneAnsi = c.zone || zoneAnsiColor(zoneResult.colorName);
588
- zoneInfo = ` | ${zoneAnsi}${zoneResult.zone}${RESET}`;
583
+ // Zone label uses same color, with per-property override
584
+ const effectiveZoneColor = c.zone || zoneAnsi;
585
+ zoneInfo = ` | ${effectiveZoneColor}${zoneResult.zone}${RESET}`;
589
586
 
590
587
  // Read previous entry if needed for delta OR MI
591
588
  if (showDelta || showMI) {