ccmeter 0.0.1__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,16 @@
+ __pycache__/
+ *.py[cod]
+ build/
+ dist/
+ *.egg-info/
+
+ .env
+ .venv/
+ venv/
+
+ .coverage
+ .pytest_cache/
+ .ruff_cache/
+
+ .DS_Store
+ *.db
@@ -0,0 +1 @@
+ 3.12
@@ -0,0 +1,51 @@
+ # ccmeter
+
+ Measure what Anthropic won't tell you: what Claude subscription limits actually mean in tokens.
+
+ ## Architecture
+
+ ```
+ ccmeter/
+   auth.py — reads OAuth creds from OS keychain (macOS Keychain, Linux libsecret)
+   cli.py — fncli entry point: poll, report, history, status, install, uninstall
+   db.py — sqlite schema (~/.ccmeter/meter.db)
+   poll.py — usage API poller with change detection and exponential backoff
+   scan.py — JSONL scanner: reads per-message token counts from ~/.claude/projects/
+   report.py — cross-references usage ticks against token windows to derive tokens-per-percent
+   history.py — display raw usage samples
+   status.py — collection health
+   daemon.py — launchd (macOS) / systemd (Linux) install/uninstall
+ docs/
+   evidence.md — sourced incidents of opaque limit changes
+ ```
+
+ Zero external deps beyond `fncli`. stdlib `urllib` for HTTP.
+
+ ## Key facts
+
+ - Usage API: `api.anthropic.com/api/oauth/usage` with header `anthropic-beta: oauth-2025-04-20`
+ - macOS keychain: `security find-generic-password -a $USER -s "Claude Code-credentials" -w`
+ - Credential blob: `{claudeAiOauth: {accessToken, refreshToken, expiresAt, subscriptionType, rateLimitTier}}`
+ - Known buckets: `five_hour`, `seven_day`, `seven_day_sonnet`, `seven_day_opus`, `seven_day_cowork`, `extra_usage`
+ - `extra_usage` uses `used_credits` instead of `utilization`
+ - Null buckets (e.g. `iguana_necktie`) are feature flags — we capture everything the API returns
+ - JSONL location: `~/.claude/projects/<project>/<session_id>.jsonl`
+ - JSONL assistant messages contain `.message.usage` with `input_tokens`, `output_tokens`, `cache_read_input_tokens`, `cache_creation_input_tokens`
+ - Session metadata: `~/.claude/usage-data/session-meta/<session_id>.json` (token counts in thousands)
+
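The credential blob shape listed above can be sanity-checked with nothing but the stdlib; a minimal sketch (the payload values here are invented for illustration, not real tokens):

```python
import json

# Hypothetical payload shaped like the blob Claude Code stores in the keychain.
raw = json.dumps({
    "claudeAiOauth": {
        "accessToken": "sk-ant-oat-example",
        "refreshToken": "sk-ant-ort-example",
        "expiresAt": 1767225600000,
        "subscriptionType": "max",
        "rateLimitTier": "default",
    }
})

oauth = json.loads(raw).get("claudeAiOauth", {})
token = oauth.get("accessToken")       # bearer token for the usage API
print(token)
print(oauth.get("subscriptionType"))   # tier recorded alongside samples
```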
+ ## Conventions
+
+ - fncli for CLI (not click/argparse)
+ - stdlib over deps
+ - sqlite for local storage
+ - print() for output
+ - Deferred imports in CLI handlers
+ - `just format` / `just lint`
+
+ ## Roadmap
+
+ - [ ] Confidence intervals on calibration (need more data)
+ - [ ] Anonymous contribution: `ccmeter export` dumps standardized JSON for community sharing
+ - [ ] Community dataset repo for aggregated calibration data
+ - [ ] Windows support (Windows Credential Manager)
+ - [ ] PyPI publish
ccmeter-0.0.1/LICENSE ADDED
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2026 Tyson Chan
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
ccmeter-0.0.1/PKG-INFO ADDED
@@ -0,0 +1,103 @@
+ Metadata-Version: 2.4
+ Name: ccmeter
+ Version: 0.0.1
+ Summary: Reverse-engineer Anthropic's opaque Claude subscription limits into hard numbers
+ Project-URL: Repository, https://github.com/iteebz/ccmeter
+ Project-URL: Issues, https://github.com/iteebz/ccmeter/issues
+ Author-email: Tyson Chan <tyson.chan@proton.me>
+ License-Expression: MIT
+ License-File: LICENSE
+ Keywords: anthropic,claude,limits,metering,transparency,usage
+ Classifier: Development Status :: 2 - Pre-Alpha
+ Classifier: Environment :: Console
+ Classifier: Intended Audience :: Developers
+ Classifier: License :: OSI Approved :: MIT License
+ Classifier: Programming Language :: Python :: 3
+ Classifier: Programming Language :: Python :: 3.12
+ Classifier: Programming Language :: Python :: 3.13
+ Classifier: Topic :: System :: Monitoring
+ Requires-Python: >=3.12
+ Requires-Dist: fncli
+ Description-Content-Type: text/markdown
+
+ # ccmeter
+
+ Measure what Anthropic won't tell you: what your Claude subscription limits actually mean in tokens.
+
+ ## Why
+
+ Anthropic charges $20-$200/month for Claude but doesn't publish what the usage limits actually are. The API reports utilization as a percentage — but a percentage of what? Nobody knows.
+
+ Three times in four months, Anthropic has run the same play: announce a temporary usage boost, silently tighten baseline limits during or after, then attribute complaints to "contrast effect." See [docs/evidence.md](docs/evidence.md) for the receipts.
+
+ ccmeter is a local instrument that figures out the actual numbers.
+
+ ## How it works
+
+ 1. Polls Anthropic's OAuth usage API every 2 minutes — records utilization percentages per bucket (`five_hour`, `seven_day`, `seven_day_sonnet`, etc.)
+ 2. Scans Claude Code's local JSONL files for per-message token counts with timestamps
+ 3. When utilization ticks from 15% to 16% and you used N tokens in that window: 1% = N tokens
+
+ That's the whole trick. Track that number over time. If it shrinks, the cap shrank.
+
+ ## Install
+
+ ```bash
+ pip install ccmeter
+ ```
+
+ Or clone and run directly:
+
+ ```bash
+ git clone https://github.com/iteebz/ccmeter && cd ccmeter && uv sync
+ ```
+
+ Requires Python 3.12+, Claude Code installed and signed in. macOS and Linux. Zero dependencies beyond [fncli](https://pypi.org/project/fncli/).
+
+ ## Usage
+
+ ```bash
+ # Install as background daemon (survives restarts)
+ ccmeter install
+
+ # Or run in foreground
+ ccmeter poll
+
+ # What does 1% actually cost?
+ ccmeter report
+
+ # Structured output for sharing
+ ccmeter report --json
+
+ # Raw usage tick history
+ ccmeter history
+
+ # Collection health
+ ccmeter status
+
+ # Remove daemon
+ ccmeter uninstall
+ ```
+
+ ## What it collects
+
+ **From Anthropic's API** (polled every 2 min, recorded on change):
+ - Utilization percentage per bucket
+ - Reset timestamps
+ - Your subscription tier (detected automatically from credentials)
+
+ **From Claude Code's local JSONL files** (scanned on `report`):
+ - Per-message token counts: input, output, cache_read, cache_create
+ - Timestamps, model, Claude Code version, session ID
+
+ **Everything stays local** in `~/.ccmeter/meter.db`. Your OAuth token is only sent to Anthropic's own API — the same call Claude Code already makes.
+
+ ## Known confounds
+
+ - **Multi-surface usage**: claude.ai, Claude Code, and Cowork share limits but only Claude Code has local token logs. If you use multiple surfaces simultaneously, token counts will be inflated relative to the utilization tick.
+ - **1% granularity**: The API reports whole percentages only. More samples over longer periods = better accuracy.
+ - **Bucket overlap**: Some buckets may share underlying quotas in ways the API doesn't surface.
+
+ ## License
+
+ MIT
@@ -0,0 +1,81 @@
+ # ccmeter
+
+ Measure what Anthropic won't tell you: what your Claude subscription limits actually mean in tokens.
+
+ ## Why
+
+ Anthropic charges $20-$200/month for Claude but doesn't publish what the usage limits actually are. The API reports utilization as a percentage — but a percentage of what? Nobody knows.
+
+ Three times in four months, Anthropic has run the same play: announce a temporary usage boost, silently tighten baseline limits during or after, then attribute complaints to "contrast effect." See [docs/evidence.md](docs/evidence.md) for the receipts.
+
+ ccmeter is a local instrument that figures out the actual numbers.
+
+ ## How it works
+
+ 1. Polls Anthropic's OAuth usage API every 2 minutes — records utilization percentages per bucket (`five_hour`, `seven_day`, `seven_day_sonnet`, etc.)
+ 2. Scans Claude Code's local JSONL files for per-message token counts with timestamps
+ 3. When utilization ticks from 15% to 16% and you used N tokens in that window: 1% = N tokens
+
+ That's the whole trick. Track that number over time. If it shrinks, the cap shrank.
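The arithmetic behind step 3 is just division; a toy sketch with made-up numbers (the token count and utilization values are invented, not measurements):

```python
# Two consecutive API samples for one bucket (invented values).
u0, u1 = 15.0, 16.0          # utilization percentages at t0 and t1
tokens_in_window = 183_500   # JSONL token sum between the two sample times

delta_pct = u1 - u0
tokens_per_pct = int(tokens_in_window / delta_pct)  # tokens behind 1%
estimated_cap = tokens_per_pct * 100                # implied 100% budget

print(tokens_per_pct)   # 183500
print(estimated_cap)    # 18350000
```

Tracking `tokens_per_pct` across weeks is the whole measurement: if it drifts down, the underlying cap shrank.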
+
+ ## Install
+
+ ```bash
+ pip install ccmeter
+ ```
+
+ Or clone and run directly:
+
+ ```bash
+ git clone https://github.com/iteebz/ccmeter && cd ccmeter && uv sync
+ ```
+
+ Requires Python 3.12+, Claude Code installed and signed in. macOS and Linux. Zero dependencies beyond [fncli](https://pypi.org/project/fncli/).
+
+ ## Usage
+
+ ```bash
+ # Install as background daemon (survives restarts)
+ ccmeter install
+
+ # Or run in foreground
+ ccmeter poll
+
+ # What does 1% actually cost?
+ ccmeter report
+
+ # Structured output for sharing
+ ccmeter report --json
+
+ # Raw usage tick history
+ ccmeter history
+
+ # Collection health
+ ccmeter status
+
+ # Remove daemon
+ ccmeter uninstall
+ ```
+
+ ## What it collects
+
+ **From Anthropic's API** (polled every 2 min, recorded on change):
+ - Utilization percentage per bucket
+ - Reset timestamps
+ - Your subscription tier (detected automatically from credentials)
+
+ **From Claude Code's local JSONL files** (scanned on `report`):
+ - Per-message token counts: input, output, cache_read, cache_create
+ - Timestamps, model, Claude Code version, session ID
+
+ **Everything stays local** in `~/.ccmeter/meter.db`. Your OAuth token is only sent to Anthropic's own API — the same call Claude Code already makes.
+
+ ## Known confounds
+
+ - **Multi-surface usage**: claude.ai, Claude Code, and Cowork share limits but only Claude Code has local token logs. If you use multiple surfaces simultaneously, token counts will be inflated relative to the utilization tick.
+ - **1% granularity**: The API reports whole percentages only. More samples over longer periods = better accuracy.
+ - **Bucket overlap**: Some buckets may share underlying quotas in ways the API doesn't surface.
+
+ ## License
+
+ MIT
@@ -0,0 +1,3 @@
+ """ccmeter: Reverse-engineer Anthropic's opaque Claude subscription limits into hard numbers."""
+
+ __version__ = "0.0.1"
@@ -0,0 +1,65 @@
+ """Read Claude Code OAuth credentials from OS keychain."""
+
+ import json
+ import subprocess
+ import sys
+ from dataclasses import dataclass
+
+
+ @dataclass
+ class Credentials:
+     access_token: str
+     refresh_token: str | None
+     expires_at: str | None
+     subscription_type: str | None
+     rate_limit_tier: str | None
+
+
+ def get_credentials() -> Credentials | None:
+     """Extract OAuth credentials Claude Code stores in the OS credential store."""
+     if sys.platform == "darwin":
+         return _macos_keychain()
+     if sys.platform == "linux":
+         return _linux_secret()
+     return None
+
+
+ def _parse_credentials(raw: str) -> Credentials | None:
+     try:
+         data = json.loads(raw)
+     except json.JSONDecodeError:
+         return None
+     oauth = data.get("claudeAiOauth")
+     if not oauth or not isinstance(oauth, dict):
+         return None
+     token = oauth.get("accessToken")
+     if not token:
+         return None
+     return Credentials(
+         access_token=token,
+         refresh_token=oauth.get("refreshToken"),
+         expires_at=oauth.get("expiresAt"),
+         subscription_type=oauth.get("subscriptionType"),
+         rate_limit_tier=oauth.get("rateLimitTier"),
+     )
+
+
+ def _run_keychain(args: list[str]) -> Credentials | None:
+     try:
+         result = subprocess.run(args, capture_output=True, text=True, timeout=5)
+         if result.returncode != 0:
+             return None
+         return _parse_credentials(result.stdout.strip())
+     except (subprocess.TimeoutExpired, FileNotFoundError):
+         return None
+
+
+ def _macos_keychain() -> Credentials | None:
+     import os
+
+     user = os.environ.get("USER", "")
+     return _run_keychain(["security", "find-generic-password", "-a", user, "-s", "Claude Code-credentials", "-w"])
+
+
+ def _linux_secret() -> Credentials | None:
+     return _run_keychain(["secret-tool", "lookup", "service", "Claude Code-credentials"])
@@ -0,0 +1,65 @@
+ """ccmeter CLI."""
+
+ import sys
+
+ import fncli
+
+ from ccmeter import __version__
+
+
+ @fncli.cli()
+ def version():
+     """print version"""
+     print(__version__)
+
+
+ @fncli.cli("ccmeter")
+ def poll(interval: int = 120, once: bool = False):
+     """poll usage API and record samples to local sqlite"""
+     from ccmeter.poll import run_poll
+
+     run_poll(interval=interval, once=once)
+
+
+ @fncli.cli("ccmeter")
+ def report(days: int = 30, json: bool = False):
+     """show calibration report: what does 1% actually cost in tokens"""
+     from ccmeter.report import run_report
+
+     run_report(days=days, json_output=json)
+
+
+ @fncli.cli("ccmeter")
+ def history(days: int = 7, json: bool = False):
+     """show raw usage sample history"""
+     from ccmeter.history import show_history
+
+     show_history(days=days, json_output=json)
+
+
+ @fncli.cli("ccmeter")
+ def status():
+     """show current usage and collection stats"""
+     from ccmeter.status import show_status
+
+     show_status()
+
+
+ @fncli.cli("ccmeter")
+ def install():
+     """install ccmeter as a background daemon (survives restarts)"""
+     from ccmeter.daemon import install as do_install
+
+     raise SystemExit(do_install())
+
+
+ @fncli.cli("ccmeter")
+ def uninstall():
+     """stop and remove the background daemon"""
+     from ccmeter.daemon import uninstall as do_uninstall
+
+     raise SystemExit(do_uninstall())
+
+
+ def main():
+     raise SystemExit(fncli.dispatch(["ccmeter", *sys.argv[1:]]))
@@ -0,0 +1,138 @@
+ """Install/uninstall ccmeter as a background daemon that survives restarts."""
+
+ import shutil
+ import subprocess
+ import sys
+ import textwrap
+ from pathlib import Path
+
+ LAUNCHD_LABEL = "com.ccmeter.poll"
+ LAUNCHD_PLIST = Path.home() / "Library" / "LaunchAgents" / f"{LAUNCHD_LABEL}.plist"
+
+ SYSTEMD_UNIT = Path.home() / ".config" / "systemd" / "user" / "ccmeter.service"
+
+
+ def install():
+     """Install ccmeter as a background daemon."""
+     ccmeter_bin = shutil.which("ccmeter")
+     if not ccmeter_bin:
+         print("error: ccmeter not found in PATH", file=sys.stderr)
+         print("install first: pip install ccmeter", file=sys.stderr)
+         return 1
+
+     if sys.platform == "darwin":
+         return _install_launchd(ccmeter_bin)
+     if sys.platform == "linux":
+         return _install_systemd(ccmeter_bin)
+
+     print(f"error: unsupported platform {sys.platform}", file=sys.stderr)
+     return 1
+
+
+ def uninstall():
+     """Remove ccmeter background daemon."""
+     if sys.platform == "darwin":
+         return _uninstall_launchd()
+     if sys.platform == "linux":
+         return _uninstall_systemd()
+
+     print(f"error: unsupported platform {sys.platform}", file=sys.stderr)
+     return 1
+
+
+ def _install_launchd(ccmeter_bin: str) -> int:
+     plist = textwrap.dedent(f"""\
+     <?xml version="1.0" encoding="UTF-8"?>
+     <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
+     <plist version="1.0">
+     <dict>
+       <key>Label</key>
+       <string>{LAUNCHD_LABEL}</string>
+       <key>ProgramArguments</key>
+       <array>
+         <string>{ccmeter_bin}</string>
+         <string>poll</string>
+       </array>
+       <key>RunAtLoad</key>
+       <true/>
+       <key>KeepAlive</key>
+       <true/>
+       <key>StandardOutPath</key>
+       <string>{Path.home()}/.ccmeter/poll.log</string>
+       <key>StandardErrorPath</key>
+       <string>{Path.home()}/.ccmeter/poll.err</string>
+     </dict>
+     </plist>
+     """)
+
+     LAUNCHD_PLIST.parent.mkdir(parents=True, exist_ok=True)
+     LAUNCHD_PLIST.write_text(plist)
+
+     # unload first if already loaded
+     subprocess.run(["launchctl", "unload", str(LAUNCHD_PLIST)], capture_output=True)
+     result = subprocess.run(["launchctl", "load", str(LAUNCHD_PLIST)], capture_output=True, text=True)
+
+     if result.returncode != 0:
+         print(f"error loading launchd plist: {result.stderr}", file=sys.stderr)
+         return 1
+
+     print("ccmeter daemon installed and running")
+     print(f" plist: {LAUNCHD_PLIST}")
+     print(" log: ~/.ccmeter/poll.log")
+     print(" stop: ccmeter uninstall")
+     return 0
+
+
+ def _uninstall_launchd() -> int:
+     if not LAUNCHD_PLIST.exists():
+         print("ccmeter daemon not installed")
+         return 0
+
+     subprocess.run(["launchctl", "unload", str(LAUNCHD_PLIST)], capture_output=True)
+     LAUNCHD_PLIST.unlink()
+     print("ccmeter daemon stopped and removed")
+     return 0
+
+
+ def _install_systemd(ccmeter_bin: str) -> int:
+     unit = textwrap.dedent(f"""\
+     [Unit]
+     Description=ccmeter usage polling daemon
+     After=network.target
+
+     [Service]
+     ExecStart={ccmeter_bin} poll
+     Restart=always
+     RestartSec=30
+
+     [Install]
+     WantedBy=default.target
+     """)
+
+     SYSTEMD_UNIT.parent.mkdir(parents=True, exist_ok=True)
+     SYSTEMD_UNIT.write_text(unit)
+
+     subprocess.run(["systemctl", "--user", "daemon-reload"], capture_output=True)
+     result = subprocess.run(["systemctl", "--user", "enable", "--now", "ccmeter"], capture_output=True, text=True)
+
+     if result.returncode != 0:
+         print(f"error enabling systemd unit: {result.stderr}", file=sys.stderr)
+         return 1
+
+     print("ccmeter daemon installed and running")
+     print(f" unit: {SYSTEMD_UNIT}")
+     print(" status: systemctl --user status ccmeter")
+     print(" stop: ccmeter uninstall")
+     return 0
+
+
+ def _uninstall_systemd() -> int:
+     if not SYSTEMD_UNIT.exists():
+         print("ccmeter daemon not installed")
+         return 0
+
+     subprocess.run(["systemctl", "--user", "disable", "--now", "ccmeter"], capture_output=True)
+     SYSTEMD_UNIT.unlink()
+     subprocess.run(["systemctl", "--user", "daemon-reload"], capture_output=True)
+     print("ccmeter daemon stopped and removed")
+     return 0
@@ -0,0 +1,41 @@
+ """Local sqlite storage for usage samples."""
+
+ import sqlite3
+ from pathlib import Path
+
+ DB_PATH = Path.home() / ".ccmeter" / "meter.db"
+
+ SCHEMA = """
+ CREATE TABLE IF NOT EXISTS usage_samples (
+     id INTEGER PRIMARY KEY AUTOINCREMENT,
+     ts TEXT NOT NULL DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ', 'now')),
+     bucket TEXT NOT NULL,
+     utilization REAL NOT NULL,
+     resets_at TEXT,
+     tier TEXT,
+     raw JSON
+ );
+
+ CREATE INDEX IF NOT EXISTS idx_usage_ts ON usage_samples(ts);
+ CREATE INDEX IF NOT EXISTS idx_usage_bucket_ts ON usage_samples(bucket, ts);
+
+ CREATE TABLE IF NOT EXISTS sessions (
+     id INTEGER PRIMARY KEY AUTOINCREMENT,
+     session_id TEXT NOT NULL UNIQUE,
+     project TEXT,
+     started_at TEXT,
+     last_seen TEXT,
+     total_input_tokens INTEGER DEFAULT 0,
+     total_output_tokens INTEGER DEFAULT 0
+ );
+
+ CREATE INDEX IF NOT EXISTS idx_sessions_id ON sessions(session_id);
+ """
+
+
+ def connect() -> sqlite3.Connection:
+     DB_PATH.parent.mkdir(parents=True, exist_ok=True)
+     conn = sqlite3.connect(str(DB_PATH))
+     conn.row_factory = sqlite3.Row
+     conn.executescript(SCHEMA)
+     return conn
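The poller writes a row to `usage_samples` only when a bucket's utilization changes, and on restart it seeds its change detector from the latest row per bucket. That seed query can be reproduced against an in-memory stand-in for the schema (sample rows invented):

```python
import sqlite3

# In-memory stand-in for ~/.ccmeter/meter.db with the relevant columns.
conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row
conn.execute(
    "CREATE TABLE usage_samples ("
    "id INTEGER PRIMARY KEY AUTOINCREMENT, bucket TEXT, utilization REAL)"
)
conn.executemany(
    "INSERT INTO usage_samples (bucket, utilization) VALUES (?, ?)",
    [("five_hour", 14.0), ("seven_day", 41.0), ("five_hour", 15.0)],
)

# Latest row per bucket, as in poll.py's seed_last_seen.
rows = conn.execute(
    "SELECT bucket, utilization FROM usage_samples "
    "WHERE id IN (SELECT MAX(id) FROM usage_samples GROUP BY bucket)"
).fetchall()
last_seen = {r["bucket"]: r["utilization"] for r in rows}
print(last_seen)  # latest utilization per bucket, no duplicates
```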
@@ -0,0 +1,30 @@
+ """Display usage sample history."""
+
+ import json
+
+ from ccmeter.db import connect
+
+
+ def show_history(days: int = 7, json_output: bool = False):
+     conn = connect()
+     rows = conn.execute(
+         "SELECT ts, bucket, utilization, resets_at FROM usage_samples "
+         "WHERE ts > datetime('now', ? || ' days') ORDER BY ts DESC",
+         (f"-{days}",),
+     ).fetchall()
+     conn.close()
+
+     if not rows:
+         print(f"no samples in the last {days} days. run: ccmeter poll")
+         return
+
+     if json_output:
+         print(json.dumps([dict(r) for r in rows], indent=2))
+         return
+
+     print(f"{'timestamp':<28} {'bucket':<20} {'util':>6}")
+     print("-" * 56)
+     for r in rows:
+         print(f"{r['ts']:<28} {r['bucket']:<20} {r['utilization']:>5.0f}%")
+
+     print(f"\n{len(rows)} samples over {days} days")
@@ -0,0 +1,127 @@
+ """Poll Anthropic usage API and record samples."""
+
+ import json
+ import signal
+ import sys
+ import time
+ import urllib.error
+ import urllib.request
+
+ from ccmeter.auth import Credentials, get_credentials
+ from ccmeter.db import connect
+
+ USAGE_URL = "https://api.anthropic.com/api/oauth/usage"
+ BETA_HEADER = "oauth-2025-04-20"
+
+ BUCKETS = ("five_hour", "seven_day", "seven_day_sonnet", "seven_day_opus", "seven_day_cowork", "extra_usage")
+
+ _running = True
+
+
+ def _handle_signal(sig, frame):
+     global _running
+     _running = False
+     print("\nshutting down...")
+
+
+ def fetch_usage(creds: Credentials) -> dict | None:
+     """Fetch current usage from Anthropic's OAuth endpoint."""
+     req = urllib.request.Request(
+         USAGE_URL,
+         headers={
+             "Authorization": f"Bearer {creds.access_token}",
+             "anthropic-beta": BETA_HEADER,
+         },
+     )
+     try:
+         with urllib.request.urlopen(req, timeout=10) as resp:
+             return json.loads(resp.read().decode())
+     except (urllib.error.URLError, json.JSONDecodeError, OSError) as e:
+         print(f"usage API error: {e}", file=sys.stderr)
+         return None
+
+
+ def record_samples(data: dict, last_seen: dict, conn, tier: str | None = None) -> dict:
+     """Write rows for any bucket that changed. Returns updated last_seen."""
+     for key, value in data.items():
+         if not isinstance(value, dict):
+             continue
+
+         utilization = value.get("utilization")
+         if utilization is None and key == "extra_usage":
+             utilization = value.get("used_credits")
+         if utilization is None:
+             continue
+
+         prev = last_seen.get(key)
+         if prev is not None and prev == utilization:
+             continue
+
+         resets_at = value.get("resets_at")
+         conn.execute(
+             "INSERT INTO usage_samples (bucket, utilization, resets_at, tier, raw) VALUES (?, ?, ?, ?, ?)",
+             (key, float(utilization), resets_at, tier, json.dumps(value)),
+         )
+         conn.commit()
+
+         direction = ""
+         if prev is not None:
+             direction = f" (was {prev}%)"
+         print(f" {key}: {utilization}%{direction}")
+
+         last_seen[key] = utilization
+
+     return last_seen
+
+
+ def seed_last_seen(conn) -> dict:
+     """Load most recent utilization per bucket from DB to avoid duplicate rows on restart."""
+     last_seen = {}
+     rows = conn.execute(
+         "SELECT bucket, utilization FROM usage_samples WHERE id IN (SELECT MAX(id) FROM usage_samples GROUP BY bucket)"
+     ).fetchall()
+     for row in rows:
+         last_seen[row["bucket"]] = row["utilization"]
+     return last_seen
+
+
+ def run_poll(interval: int = 120, once: bool = False):
+     """Main poll loop."""
+     creds = get_credentials()
+     if not creds:
+         print("error: could not find Claude Code OAuth token in OS keychain", file=sys.stderr)
+         print(file=sys.stderr)
+         print("ccmeter reads the same credential Claude Code uses.", file=sys.stderr)
+         print("make sure Claude Code is installed and you've signed in.", file=sys.stderr)
+         sys.exit(1)
+
+     tier = creds.subscription_type or creds.rate_limit_tier
+     conn = connect()
+     last_seen = seed_last_seen(conn)
+
+     signal.signal(signal.SIGINT, _handle_signal)
+     signal.signal(signal.SIGTERM, _handle_signal)
+
+     print(f"ccmeter polling every {interval}s")
+     if tier:
+         print(f" tier: {tier}")
+     if last_seen:
+         print(f" resumed with {len(last_seen)} cached bucket(s)")
+
+     backoff = interval
+     while _running:
+         data = fetch_usage(creds)
+         if data:
+             last_seen = record_samples(data, last_seen, conn, tier=tier)
+             backoff = interval
+         else:
+             backoff = min(backoff * 2, 600)
+             print(f" backing off to {backoff}s", file=sys.stderr)
+
+         if once:
+             break
+
+         time.sleep(backoff)
+
+     conn.close()
+     print("stopped.")
@@ -0,0 +1,151 @@
+ """Generate calibration report by cross-referencing usage ticks against JSONL token data."""
+
+ import json
+
+ from ccmeter.auth import get_credentials
+ from ccmeter.db import connect
+ from ccmeter.scan import scan
+
+
+ def tokens_in_window(events, t0: str, t1: str) -> dict:
+     """Sum token counts for events between two timestamps."""
+     totals = {"input": 0, "output": 0, "cache_read": 0, "cache_create": 0}
+     for e in events:
+         if t0 <= e.ts <= t1:
+             totals["input"] += e.input_tokens
+             totals["output"] += e.output_tokens
+             totals["cache_read"] += e.cache_read
+             totals["cache_create"] += e.cache_create
+     return totals
+
+
+ def calibrate_bucket(bucket: str, events, conn) -> list[dict]:
+     """Find utilization ticks and calculate tokens per percent for a bucket."""
+     rows = conn.execute(
+         """
+         SELECT s1.ts as t0, s2.ts as t1,
+                s1.utilization as u0, s2.utilization as u1,
+                s2.utilization - s1.utilization as delta_pct
+         FROM usage_samples s1
+         JOIN usage_samples s2
+           ON s2.bucket = s1.bucket
+          AND s2.id = (SELECT MIN(id) FROM usage_samples
+                       WHERE bucket = s1.bucket AND id > s1.id)
+         WHERE s1.bucket = ?
+           AND s2.utilization > s1.utilization
+         ORDER BY s1.ts
+         """,
+         (bucket,),
+     ).fetchall()
+
+     calibrations = []
+     for r in rows:
+         t0, t1, delta = r["t0"], r["t1"], r["delta_pct"]
+         tokens = tokens_in_window(events, t0, t1)
+         total = tokens["input"] + tokens["output"] + tokens["cache_read"] + tokens["cache_create"]
+         if total == 0:
+             continue
+         calibrations.append(
+             {
+                 "t0": t0,
+                 "t1": t1,
+                 "delta_pct": delta,
+                 "tokens": tokens,
+                 "tokens_per_pct": {k: int(v / delta) for k, v in tokens.items()},
+                 "total_per_pct": int(total / delta),
+             }
+         )
+     return calibrations
+
+
+ def run_report(days: int = 30, json_output: bool = False):
+     """Generate and display calibration report."""
+     creds = get_credentials()
+     tier = "unknown"
+     rate_tier = "unknown"
+     if creds:
+         tier = creds.subscription_type or "unknown"
+         rate_tier = creds.rate_limit_tier or "unknown"
+
+     print("scanning JSONL files...")
+     result = scan(days=days)
+
+     if not result.events:
+         print(f"no token events found in the last {days} days.")
+         print("make sure Claude Code has been used and JSONL logs exist in ~/.claude/projects/")
+         return
+
+     conn = connect()
+     sample_count = conn.execute("SELECT COUNT(*) as n FROM usage_samples").fetchone()["n"]
+
+     if sample_count == 0:
+         print("no usage samples collected yet. run: ccmeter poll")
+         conn.close()
+         return
+
+     buckets = ["five_hour", "seven_day", "seven_day_sonnet"]
+     report_data = {
+         "tier": tier,
+         "rate_limit_tier": rate_tier,
+         "os": result.os,
+         "cc_versions": sorted(result.cc_versions),
+         "models": sorted(result.models),
+         "sessions": result.sessions,
+         "token_events": len(result.events),
+         "usage_samples": sample_count,
+         "lookback_days": days,
+         "buckets": {},
+     }
+
+     for bucket in buckets:
+         cals = calibrate_bucket(bucket, result.events, conn)
+         if cals:
+             avg_per_pct = {}
+             for key in ("input", "output", "cache_read", "cache_create"):
+                 vals = [c["tokens_per_pct"][key] for c in cals]
+                 avg_per_pct[key] = int(sum(vals) / len(vals))
+             avg_total = sum(avg_per_pct.values())
+
+             report_data["buckets"][bucket] = {
+                 "ticks": len(cals),
+                 "avg_tokens_per_pct": avg_per_pct,
+                 "avg_total_per_pct": avg_total,
+             }
+
+     conn.close()
+
+     if json_output:
+         print(json.dumps(report_data, indent=2))
+         return
+
+     _print_report(report_data)
+
+
+ def _print_report(data: dict):
+     print()
+     print(f"tier: {data['tier']} ({data['rate_limit_tier']})")
+     print(f"os: {data['os']}")
+     print(f"cc versions: {', '.join(data['cc_versions']) or 'unknown'}")
+     print(f"models: {', '.join(data['models']) or 'unknown'}")
+     print(f"sessions: {data['sessions']}")
+     print(f"events: {data['token_events']} token events over {data['lookback_days']}d")
+     print(f"samples: {data['usage_samples']} usage ticks")
+
+     if not data["buckets"]:
+         print()
+         print("no calibration data yet — need usage ticks that overlap with JSONL session data.")
+         print("keep ccmeter poll running while you use Claude Code.")
+         return
+
+     print()
+     for bucket, cal in data["buckets"].items():
+         tpp = cal["avg_tokens_per_pct"]
+         print(f"{bucket} ({cal['ticks']} ticks):")
+         print(f" 1% ≈ {cal['avg_total_per_pct']:,} tokens total")
+         print(
+             f" {tpp['input']:,} input / {tpp['output']:,} output / {tpp['cache_read']:,} cache_read / {tpp['cache_create']:,} cache_create"
+         )
+         print()
+
+     print("⚠ if you use claude.ai alongside Claude Code, token counts may be inflated")
+     print(" (the API tracks combined usage but we can only see Claude Code's tokens)")
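The self-join in `calibrate_bucket` is the subtle part: it pairs each sample with the next sample for the same bucket and keeps only upward ticks, so window resets (where utilization drops) never produce a pair. A minimal reproduction against in-memory sqlite (sample rows invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE usage_samples (id INTEGER PRIMARY KEY, ts TEXT, bucket TEXT, utilization REAL)"
)
conn.executemany(
    "INSERT INTO usage_samples (id, ts, bucket, utilization) VALUES (?, ?, ?, ?)",
    [
        (1, "2026-01-01T10:00:00Z", "five_hour", 15.0),
        (2, "2026-01-01T10:06:00Z", "five_hour", 16.0),  # upward tick -> pair (1, 2)
        (3, "2026-01-01T15:00:00Z", "five_hour", 3.0),   # window reset -> filtered out
    ],
)

# Pair each sample with its immediate successor; keep only increases.
pairs = conn.execute(
    """
    SELECT s1.ts AS t0, s2.ts AS t1, s2.utilization - s1.utilization AS delta_pct
    FROM usage_samples s1
    JOIN usage_samples s2
      ON s2.bucket = s1.bucket
     AND s2.id = (SELECT MIN(id) FROM usage_samples
                  WHERE bucket = s1.bucket AND id > s1.id)
    WHERE s1.bucket = ? AND s2.utilization > s1.utilization
    ORDER BY s1.ts
    """,
    ("five_hour",),
).fetchall()
print(pairs)  # [('2026-01-01T10:00:00Z', '2026-01-01T10:06:00Z', 1.0)]
```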
@@ -0,0 +1,92 @@
+ """Scan Claude Code JSONL files for per-message token usage."""
+
+ import json
+ import platform
+ from dataclasses import dataclass, field
+ from datetime import UTC, datetime, timedelta
+ from pathlib import Path
+
+ CLAUDE_DIR = Path.home() / ".claude" / "projects"
+
+
+ @dataclass
+ class TokenEvent:
+     ts: str
+     input_tokens: int
+     output_tokens: int
+     cache_read: int
+     cache_create: int
+     model: str
+     session_id: str
+     cc_version: str
+
+
+ @dataclass
+ class ScanResult:
+     events: list[TokenEvent] = field(default_factory=list)
+     cc_versions: set[str] = field(default_factory=set)
+     models: set[str] = field(default_factory=set)
+     sessions: int = 0
+     os: str = field(default_factory=lambda: platform.system().lower())
+
+
+ def scan(days: int = 30) -> ScanResult:
+     """Scan all JSONL files for token events within the lookback window."""
+     cutoff = (datetime.now(tz=UTC) - timedelta(days=days)).isoformat()
+     result = ScanResult()
+     seen_sessions: set[str] = set()
+
+     if not CLAUDE_DIR.exists():
+         return result
+
+     for jsonl in CLAUDE_DIR.glob("*/*.jsonl"):
+         _scan_file(jsonl, cutoff, result, seen_sessions)
+
+     result.sessions = len(seen_sessions)
+     result.events.sort(key=lambda e: e.ts)
+     return result
+
+
+ def _scan_file(path: Path, cutoff: str, result: ScanResult, seen_sessions: set[str]) -> None:
+     try:
+         with path.open() as f:
+             for line in f:
+                 try:
+                     d = json.loads(line)
+                 except json.JSONDecodeError:
+                     continue
+
+                 msg = d.get("message")
+                 if not isinstance(msg, dict) or "usage" not in msg:
+                     continue
+
+                 # ISO-8601 timestamps sort lexicographically, so a string
+                 # comparison against the cutoff is sufficient here
+                 ts = d.get("timestamp", "")
+                 if ts < cutoff:
+                     continue
+
+                 usage = msg["usage"]
+                 session_id = d.get("sessionId", "")
+                 cc_version = d.get("version", "")
+                 model = msg.get("model", "")
+
+                 if session_id:
+                     seen_sessions.add(session_id)
+                 if cc_version:
+                     result.cc_versions.add(cc_version)
+                 if model:
+                     result.models.add(model)
+
+                 result.events.append(
+                     TokenEvent(
+                         ts=ts,
+                         input_tokens=usage.get("input_tokens", 0),
+                         output_tokens=usage.get("output_tokens", 0),
+                         cache_read=usage.get("cache_read_input_tokens", 0),
+                         cache_create=usage.get("cache_creation_input_tokens", 0),
+                         model=model,
+                         session_id=session_id,
+                         cc_version=cc_version,
+                     )
+                 )
+     except OSError:
+         pass  # unreadable file — skip it rather than abort the whole scan
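For a sense of what the scanner yields, a quick aggregation over `ScanResult.events` might look like this. A minimal sketch: `total_by_model` is an invented helper, not part of ccmeter, and the fake events merely mimic the `TokenEvent` fields above.

```python
from collections import Counter
from types import SimpleNamespace

def total_by_model(events):
    """Sum input + output tokens per model (illustrative helper, not in ccmeter)."""
    totals = Counter()
    for e in events:
        totals[e.model] += e.input_tokens + e.output_tokens
    return totals

# Fake events mimicking the TokenEvent fields the scanner produces
events = [
    SimpleNamespace(model="claude-sonnet", input_tokens=100, output_tokens=50),
    SimpleNamespace(model="claude-opus", input_tokens=10, output_tokens=5),
    SimpleNamespace(model="claude-sonnet", input_tokens=200, output_tokens=25),
]
print(total_by_model(events))  # Counter({'claude-sonnet': 375, 'claude-opus': 15})
```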
@@ -0,0 +1,25 @@
+ """Show current collection status."""
+
+ from ccmeter.db import DB_PATH, connect
+
+
+ def show_status():
+     if not DB_PATH.exists():
+         print("no data collected yet. run: ccmeter poll")
+         return
+
+     conn = connect()
+
+     total = conn.execute("SELECT COUNT(*) as n FROM usage_samples").fetchone()["n"]
+     buckets = conn.execute("SELECT DISTINCT bucket FROM usage_samples ORDER BY bucket").fetchall()
+     latest = conn.execute("SELECT ts FROM usage_samples ORDER BY ts DESC LIMIT 1").fetchone()
+     oldest = conn.execute("SELECT ts FROM usage_samples ORDER BY ts ASC LIMIT 1").fetchone()
+
+     conn.close()
+
+     print("ccmeter status")
+     print(f"  db: {DB_PATH}")
+     print(f"  samples: {total}")
+     print(f"  buckets: {', '.join(r['bucket'] for r in buckets)}")
+     if oldest and latest:
+         print(f"  range: {oldest['ts'][:16]} → {latest['ts'][:16]}")
@@ -0,0 +1,43 @@
+ # Evidence: Anthropic's pattern of opaque limit changes
+
+ ## Incident 1: Christmas 2x promo → post-holiday nerf (Dec 25, 2025 – Jan 5, 2026)
+
+ Anthropic ran a 2x usage promotion from December 25 at 00:00 UTC through December 31, covering Pro and Max subscribers across claude.ai, Claude Code, and Claude in Chrome.
+
+ When it expired, limits felt tighter than the pre-promotion baseline. A Claude Code user provided The Register with screenshots which, based on token-level analysis of Claude Code logs, showed a roughly 60% reduction in usage limits. Anthropic's response: users are "reacting to the withdrawal of bonus usage awarded over the holidays."
+
+ A Max plan subscriber who'd rarely hit limits filed a GitHub bug report on January 4 — hitting rate limits within an hour of normal usage since January 1, and theorizing that limits had reverted to a tighter fall baseline rather than the pre-promotion level.
+
+ On forums, Reddit, and the Claude Developers Discord, developers reported token consumption suddenly increasing, with accounts hitting their maximum within minutes or hours on tasks that had previously worked fine.
+
+ ## Incident 2: Third-party tool crackdown (January 9, 2026)
+
+ On January 9 at 02:20 UTC, Anthropic deployed safeguards blocking third-party tools that use subscription OAuth tokens. No warning. No migration path.
+
+ Tools like OpenCode had been using subscription tokens to access Claude at flat rates rather than metered API pricing. The economic gap is real: a month of heavy Claude Code usage can easily exceed $1,000 via the API.
+
+ DHH called the move "very customer hostile." Some users reported being banned within 20 minutes of starting a task on the $200/month Max plan. Anthropic later reversed erroneous bans.
+
+ ## Incident 3: March 2x off-peak promo → simultaneous silent nerf (Mar 13–28, 2026)
+
+ From March 13 through March 28, five-hour usage was doubled during off-peak hours (outside 8 AM–2 PM ET on weekdays). It was framed as a thank-you for growth after a competitor boycott drove Claude to #1 on the App Store.
+
+ During the same promotional period, Anthropic silently tightened peak-hour session limits. Reports started around March 23, with a $200 Max subscriber posting screenshots showing session usage climbing from 52% to 68% to 91% within minutes of the start of a five-hour window.
+
+ The explanation came days later — not as a blog post, but as a tweet thread from Thariq, a single engineer. About 7% of users would now hit limits they wouldn't have before, with Pro subscribers most affected.
+
+ ## The pattern
+
+ Same structure every time:
+
+ 1. Announce a generous temporary promotion
+ 2. Silently tighten baseline limits during or immediately after the promotion window
+ 3. Attribute complaints to a "contrast effect" or a "return to normal"
+
+ Meanwhile: Anthropic does not publish token budgets, does not version cap announcements, and does not provide a changelog when limits change. The usage API reports percentages at 1% granularity — a percentage of an undisclosed number.
+ ## Why this matters
+
+ This isn't about the limits being too low. It's about the opacity.
+
+ If you're paying $200/month for Max 20x, you should be able to answer: "20x what?" Anthropic doesn't say. The only way to find out is to measure it yourself — which is what ccmeter does.
ccmeter-0.0.1/justfile ADDED
@@ -0,0 +1,16 @@
+ default:
+     @just --list
+
+ install:
+     uv sync
+
+ format:
+     uv run ruff format . && uv run ruff check --fix . || true
+
+ lint:
+     uv run ruff check .
+
+ test:
+     uv run pytest tests/ -q
+
+ ci: lint test
@@ -0,0 +1,43 @@
+ [build-system]
+ requires = ["hatchling"]
+ build-backend = "hatchling.build"
+
+ [project]
+ name = "ccmeter"
+ version = "0.0.1"
+ description = "Reverse-engineer Anthropic's opaque Claude subscription limits into hard numbers"
+ authors = [{name = "Tyson Chan", email = "tyson.chan@proton.me"}]
+ keywords = ["claude", "anthropic", "usage", "limits", "transparency", "metering"]
+ readme = "README.md"
+ license = "MIT"
+ requires-python = ">=3.12"
+ classifiers = [
+     "Development Status :: 2 - Pre-Alpha",
+     "Environment :: Console",
+     "Intended Audience :: Developers",
+     "License :: OSI Approved :: MIT License",
+     "Programming Language :: Python :: 3",
+     "Programming Language :: Python :: 3.12",
+     "Programming Language :: Python :: 3.13",
+     "Topic :: System :: Monitoring",
+ ]
+
+ dependencies = [
+     "fncli",
+ ]
+
+ [project.scripts]
+ ccmeter = "ccmeter.cli:main"
+
+ [project.urls]
+ Repository = "https://github.com/iteebz/ccmeter"
+ Issues = "https://github.com/iteebz/ccmeter/issues"
+
+ [dependency-groups]
+ dev = [
+     "ruff>=0.13.0",
+     "pytest>=8.4.0",
+ ]
+
+ [tool.hatch.build.targets.wheel]
+ packages = ["ccmeter"]
@@ -0,0 +1,43 @@
+ line-length = 120
+ target-version = "py312"
+
+ [format]
+ quote-style = "double"
+ indent-style = "space"
+
+ [lint]
+ select = [
+     "E",    # pycodestyle errors
+     "W",    # pycodestyle warnings
+     "F",    # pyflakes
+     "I",    # isort
+     "N",    # pep8-naming
+     "UP",   # pyupgrade
+     "B",    # flake8-bugbear
+     "C4",   # flake8-comprehensions
+     "SIM",  # flake8-simplify
+     "RET",  # flake8-return
+     "S",    # flake8-bandit (security)
+     "PTH",  # flake8-use-pathlib
+     "PERF", # perflint
+     "RUF",  # ruff-specific
+     "TID",  # tidy imports
+     "PIE",  # misc lints
+     "T20",  # flake8-print
+     "FLY",  # flynt (f-strings)
+     "FURB", # refurb (modern python)
+     "PLC",  # pylint conventions
+ ]
+ ignore = [
+     "SIM108",  # ternary not always clearer
+     "E501",    # line length (formatter handles)
+     "S104",    # binding to all interfaces
+     "S603",    # subprocess calls intentional
+     "S607",    # partial executable path
+     "T201",    # print() is the CLI output primitive
+     "PLC0415", # deferred imports for fast CLI startup
+ ]
+
+ [lint.per-file-ignores]
+ "tests/**/*.py" = ["S101", "S105"]
+ "ccmeter/poll.py" = ["S310"]
File without changes
ccmeter-0.0.1/uv.lock ADDED
@@ -0,0 +1,121 @@
+ version = 1
+ revision = 3
+ requires-python = ">=3.12"
+
+ [[package]]
+ name = "ccmeter"
+ version = "0.0.1"
+ source = { editable = "." }
+ dependencies = [
+     { name = "fncli" },
+ ]
+
+ [package.dev-dependencies]
+ dev = [
+     { name = "pytest" },
+     { name = "ruff" },
+ ]
+
+ [package.metadata]
+ requires-dist = [{ name = "fncli" }]
+
+ [package.metadata.requires-dev]
+ dev = [
+     { name = "pytest", specifier = ">=8.4.0" },
+     { name = "ruff", specifier = ">=0.13.0" },
+ ]
+
+ [[package]]
+ name = "colorama"
+ version = "0.4.6"
+ source = { registry = "https://pypi.org/simple" }
+ sdist = { url = "https://files.pythonhosted.org/packages/d8/53/6f443c9a4a8358a93a6792e2acffb9d9d5cb0a5cfd8802644b7b1c9a02e4/colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44", size = 27697, upload-time = "2022-10-25T02:36:22.414Z" }
+ wheels = [
+     { url = "https://files.pythonhosted.org/packages/d1/d6/3965ed04c63042e047cb6a3e6ed1a63a35087b6a609aa3a15ed8ac56c221/colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6", size = 25335, upload-time = "2022-10-25T02:36:20.889Z" },
+ ]
+
+ [[package]]
+ name = "fncli"
+ version = "0.0.1"
+ source = { registry = "https://pypi.org/simple" }
+ sdist = { url = "https://files.pythonhosted.org/packages/74/97/55543a2f0854571a6d3ca9c87e1c0986c7599bc4d7490026ba7b156d15ba/fncli-0.0.1.tar.gz", hash = "sha256:1408b7e3031a2b0637d4a6c1f0f852a6169432d29a09e41a97d448a5c2cd20bb", size = 2230, upload-time = "2026-02-22T08:27:49.453Z" }
+ wheels = [
+     { url = "https://files.pythonhosted.org/packages/8f/81/300be30b0492b0a8d7bd7a59c01bb902d1b47863af4ea72cd174652ed21a/fncli-0.0.1-py3-none-any.whl", hash = "sha256:99493151288c1c7fde464a23c4645d2c330f2bb968c35489b02cafd907c30328", size = 2659, upload-time = "2026-02-22T08:27:48.397Z" },
+ ]
+
+ [[package]]
+ name = "iniconfig"
+ version = "2.3.0"
+ source = { registry = "https://pypi.org/simple" }
+ sdist = { url = "https://files.pythonhosted.org/packages/72/34/14ca021ce8e5dfedc35312d08ba8bf51fdd999c576889fc2c24cb97f4f10/iniconfig-2.3.0.tar.gz", hash = "sha256:c76315c77db068650d49c5b56314774a7804df16fee4402c1f19d6d15d8c4730", size = 20503, upload-time = "2025-10-18T21:55:43.219Z" }
+ wheels = [
+     { url = "https://files.pythonhosted.org/packages/cb/b1/3846dd7f199d53cb17f49cba7e651e9ce294d8497c8c150530ed11865bb8/iniconfig-2.3.0-py3-none-any.whl", hash = "sha256:f631c04d2c48c52b84d0d0549c99ff3859c98df65b3101406327ecc7d53fbf12", size = 7484, upload-time = "2025-10-18T21:55:41.639Z" },
+ ]
+
+ [[package]]
+ name = "packaging"
+ version = "26.0"
+ source = { registry = "https://pypi.org/simple" }
+ sdist = { url = "https://files.pythonhosted.org/packages/65/ee/299d360cdc32edc7d2cf530f3accf79c4fca01e96ffc950d8a52213bd8e4/packaging-26.0.tar.gz", hash = "sha256:00243ae351a257117b6a241061796684b084ed1c516a08c48a3f7e147a9d80b4", size = 143416, upload-time = "2026-01-21T20:50:39.064Z" }
+ wheels = [
+     { url = "https://files.pythonhosted.org/packages/b7/b9/c538f279a4e237a006a2c98387d081e9eb060d203d8ed34467cc0f0b9b53/packaging-26.0-py3-none-any.whl", hash = "sha256:b36f1fef9334a5588b4166f8bcd26a14e521f2b55e6b9de3aaa80d3ff7a37529", size = 74366, upload-time = "2026-01-21T20:50:37.788Z" },
+ ]
+
+ [[package]]
+ name = "pluggy"
+ version = "1.6.0"
+ source = { registry = "https://pypi.org/simple" }
+ sdist = { url = "https://files.pythonhosted.org/packages/f9/e2/3e91f31a7d2b083fe6ef3fa267035b518369d9511ffab804f839851d2779/pluggy-1.6.0.tar.gz", hash = "sha256:7dcc130b76258d33b90f61b658791dede3486c3e6bfb003ee5c9bfb396dd22f3", size = 69412, upload-time = "2025-05-15T12:30:07.975Z" }
+ wheels = [
+     { url = "https://files.pythonhosted.org/packages/54/20/4d324d65cc6d9205fabedc306948156824eb9f0ee1633355a8f7ec5c66bf/pluggy-1.6.0-py3-none-any.whl", hash = "sha256:e920276dd6813095e9377c0bc5566d94c932c33b27a3e3945d8389c374dd4746", size = 20538, upload-time = "2025-05-15T12:30:06.134Z" },
+ ]
+
+ [[package]]
+ name = "pygments"
+ version = "2.20.0"
+ source = { registry = "https://pypi.org/simple" }
+ sdist = { url = "https://files.pythonhosted.org/packages/c3/b2/bc9c9196916376152d655522fdcebac55e66de6603a76a02bca1b6414f6c/pygments-2.20.0.tar.gz", hash = "sha256:6757cd03768053ff99f3039c1a36d6c0aa0b263438fcab17520b30a303a82b5f", size = 4955991, upload-time = "2026-03-29T13:29:33.898Z" }
+ wheels = [
+     { url = "https://files.pythonhosted.org/packages/f4/7e/a72dd26f3b0f4f2bf1dd8923c85f7ceb43172af56d63c7383eb62b332364/pygments-2.20.0-py3-none-any.whl", hash = "sha256:81a9e26dd42fd28a23a2d169d86d7ac03b46e2f8b59ed4698fb4785f946d0176", size = 1231151, upload-time = "2026-03-29T13:29:30.038Z" },
+ ]
+
+ [[package]]
+ name = "pytest"
+ version = "9.0.2"
+ source = { registry = "https://pypi.org/simple" }
+ dependencies = [
+     { name = "colorama", marker = "sys_platform == 'win32'" },
+     { name = "iniconfig" },
+     { name = "packaging" },
+     { name = "pluggy" },
+     { name = "pygments" },
+ ]
+ sdist = { url = "https://files.pythonhosted.org/packages/d1/db/7ef3487e0fb0049ddb5ce41d3a49c235bf9ad299b6a25d5780a89f19230f/pytest-9.0.2.tar.gz", hash = "sha256:75186651a92bd89611d1d9fc20f0b4345fd827c41ccd5c299a868a05d70edf11", size = 1568901, upload-time = "2025-12-06T21:30:51.014Z" }
+ wheels = [
+     { url = "https://files.pythonhosted.org/packages/3b/ab/b3226f0bd7cdcf710fbede2b3548584366da3b19b5021e74f5bde2a8fa3f/pytest-9.0.2-py3-none-any.whl", hash = "sha256:711ffd45bf766d5264d487b917733b453d917afd2b0ad65223959f59089f875b", size = 374801, upload-time = "2025-12-06T21:30:49.154Z" },
+ ]
+
+ [[package]]
+ name = "ruff"
+ version = "0.15.8"
+ source = { registry = "https://pypi.org/simple" }
+ sdist = { url = "https://files.pythonhosted.org/packages/14/b0/73cf7550861e2b4824950b8b52eebdcc5adc792a00c514406556c5b80817/ruff-0.15.8.tar.gz", hash = "sha256:995f11f63597ee362130d1d5a327a87cb6f3f5eae3094c620bcc632329a4d26e", size = 4610921, upload-time = "2026-03-26T18:39:38.675Z" }
+ wheels = [
+     { url = "https://files.pythonhosted.org/packages/4a/92/c445b0cd6da6e7ae51e954939cb69f97e008dbe750cfca89b8cedc081be7/ruff-0.15.8-py3-none-linux_armv6l.whl", hash = "sha256:cbe05adeba76d58162762d6b239c9056f1a15a55bd4b346cfd21e26cd6ad7bc7", size = 10527394, upload-time = "2026-03-26T18:39:41.566Z" },
+     { url = "https://files.pythonhosted.org/packages/eb/92/f1c662784d149ad1414cae450b082cf736430c12ca78367f20f5ed569d65/ruff-0.15.8-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:d3e3d0b6ba8dca1b7ef9ab80a28e840a20070c4b62e56d675c24f366ef330570", size = 10905693, upload-time = "2026-03-26T18:39:30.364Z" },
+     { url = "https://files.pythonhosted.org/packages/ca/f2/7a631a8af6d88bcef997eb1bf87cc3da158294c57044aafd3e17030613de/ruff-0.15.8-py3-none-macosx_11_0_arm64.whl", hash = "sha256:6ee3ae5c65a42f273f126686353f2e08ff29927b7b7e203b711514370d500de3", size = 10323044, upload-time = "2026-03-26T18:39:33.37Z" },
+     { url = "https://files.pythonhosted.org/packages/67/18/1bf38e20914a05e72ef3b9569b1d5c70a7ef26cd188d69e9ca8ef588d5bf/ruff-0.15.8-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fdce027ada77baa448077ccc6ebb2fa9c3c62fd110d8659d601cf2f475858d94", size = 10629135, upload-time = "2026-03-26T18:39:44.142Z" },
+     { url = "https://files.pythonhosted.org/packages/d2/e9/138c150ff9af60556121623d41aba18b7b57d95ac032e177b6a53789d279/ruff-0.15.8-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:12e617fc01a95e5821648a6df341d80456bd627bfab8a829f7cfc26a14a4b4a3", size = 10348041, upload-time = "2026-03-26T18:39:52.178Z" },
+     { url = "https://files.pythonhosted.org/packages/02/f1/5bfb9298d9c323f842c5ddeb85f1f10ef51516ac7a34ba446c9347d898df/ruff-0.15.8-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:432701303b26416d22ba696c39f2c6f12499b89093b61360abc34bcc9bf07762", size = 11121987, upload-time = "2026-03-26T18:39:55.195Z" },
+     { url = "https://files.pythonhosted.org/packages/10/11/6da2e538704e753c04e8d86b1fc55712fdbdcc266af1a1ece7a51fff0d10/ruff-0.15.8-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d910ae974b7a06a33a057cb87d2a10792a3b2b3b35e33d2699fdf63ec8f6b17a", size = 11951057, upload-time = "2026-03-26T18:39:19.18Z" },
+     { url = "https://files.pythonhosted.org/packages/83/f0/c9208c5fd5101bf87002fed774ff25a96eea313d305f1e5d5744698dc314/ruff-0.15.8-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2033f963c43949d51e6fdccd3946633c6b37c484f5f98c3035f49c27395a8ab8", size = 11464613, upload-time = "2026-03-26T18:40:06.301Z" },
+     { url = "https://files.pythonhosted.org/packages/f8/22/d7f2fabdba4fae9f3b570e5605d5eb4500dcb7b770d3217dca4428484b17/ruff-0.15.8-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0f29b989a55572fb885b77464cf24af05500806ab4edf9a0fd8977f9759d85b1", size = 11257557, upload-time = "2026-03-26T18:39:57.972Z" },
+     { url = "https://files.pythonhosted.org/packages/71/8c/382a9620038cf6906446b23ce8632ab8c0811b8f9d3e764f58bedd0c9a6f/ruff-0.15.8-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:ac51d486bf457cdc985a412fb1801b2dfd1bd8838372fc55de64b1510eff4bec", size = 11169440, upload-time = "2026-03-26T18:39:22.205Z" },
+     { url = "https://files.pythonhosted.org/packages/4d/0d/0994c802a7eaaf99380085e4e40c845f8e32a562e20a38ec06174b52ef24/ruff-0.15.8-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:c9861eb959edab053c10ad62c278835ee69ca527b6dcd72b47d5c1e5648964f6", size = 10605963, upload-time = "2026-03-26T18:39:46.682Z" },
+     { url = "https://files.pythonhosted.org/packages/19/aa/d624b86f5b0aad7cef6bbf9cd47a6a02dfdc4f72c92a337d724e39c9d14b/ruff-0.15.8-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:8d9a5b8ea13f26ae90838afc33f91b547e61b794865374f114f349e9036835fb", size = 10357484, upload-time = "2026-03-26T18:39:49.176Z" },
+     { url = "https://files.pythonhosted.org/packages/35/c3/e0b7835d23001f7d999f3895c6b569927c4d39912286897f625736e1fd04/ruff-0.15.8-py3-none-musllinux_1_2_i686.whl", hash = "sha256:c2a33a529fb3cbc23a7124b5c6ff121e4d6228029cba374777bd7649cc8598b8", size = 10830426, upload-time = "2026-03-26T18:40:03.702Z" },
+     { url = "https://files.pythonhosted.org/packages/f0/51/ab20b322f637b369383adc341d761eaaa0f0203d6b9a7421cd6e783d81b9/ruff-0.15.8-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:75e5cd06b1cf3f47a3996cfc999226b19aa92e7cce682dcd62f80d7035f98f49", size = 11345125, upload-time = "2026-03-26T18:39:27.799Z" },
+     { url = "https://files.pythonhosted.org/packages/37/e6/90b2b33419f59d0f2c4c8a48a4b74b460709a557e8e0064cf33ad894f983/ruff-0.15.8-py3-none-win32.whl", hash = "sha256:bc1f0a51254ba21767bfa9a8b5013ca8149dcf38092e6a9eb704d876de94dc34", size = 10571959, upload-time = "2026-03-26T18:39:36.117Z" },
+     { url = "https://files.pythonhosted.org/packages/1f/a2/ef467cb77099062317154c63f234b8a7baf7cb690b99af760c5b68b9ee7f/ruff-0.15.8-py3-none-win_amd64.whl", hash = "sha256:04f79eff02a72db209d47d665ba7ebcad609d8918a134f86cb13dd132159fc89", size = 11743893, upload-time = "2026-03-26T18:39:25.01Z" },
+     { url = "https://files.pythonhosted.org/packages/15/e2/77be4fff062fa78d9b2a4dea85d14785dac5f1d0c1fb58ed52331f0ebe28/ruff-0.15.8-py3-none-win_arm64.whl", hash = "sha256:cf891fa8e3bb430c0e7fac93851a5978fc99c8fa2c053b57b118972866f8e5f2", size = 11048175, upload-time = "2026-03-26T18:40:01.06Z" },
+ ]