promptmeter-0.1.0.tar.gz

Files changed (41)
  1. promptmeter-0.1.0/.github/ISSUE_TEMPLATE/bug.md +30 -0
  2. promptmeter-0.1.0/.github/ISSUE_TEMPLATE/feature.md +18 -0
  3. promptmeter-0.1.0/.github/ISSUE_TEMPLATE/suggest_rule.md +32 -0
  4. promptmeter-0.1.0/.github/pull_request_template.md +19 -0
  5. promptmeter-0.1.0/.github/workflows/ci.yml +35 -0
  6. promptmeter-0.1.0/.github/workflows/publish.yml +28 -0
  7. promptmeter-0.1.0/.gitignore +42 -0
  8. promptmeter-0.1.0/CHANGELOG.md +24 -0
  9. promptmeter-0.1.0/CONTRIBUTING.md +96 -0
  10. promptmeter-0.1.0/LICENSE +21 -0
  11. promptmeter-0.1.0/PKG-INFO +179 -0
  12. promptmeter-0.1.0/README.md +147 -0
  13. promptmeter-0.1.0/REQUIREMENTS.md +106 -0
  14. promptmeter-0.1.0/SPEC.md +430 -0
  15. promptmeter-0.1.0/docs/demo.gif +0 -0
  16. promptmeter-0.1.0/examples/analyze_only.py +34 -0
  17. promptmeter-0.1.0/examples/basic_usage.py +52 -0
  18. promptmeter-0.1.0/pyproject.toml +67 -0
  19. promptmeter-0.1.0/tests/__init__.py +0 -0
  20. promptmeter-0.1.0/tests/test_client.py +86 -0
  21. promptmeter-0.1.0/tests/test_rewriter.py +63 -0
  22. promptmeter-0.1.0/tests/test_rules.py +109 -0
  23. promptmeter-0.1.0/tests/test_scorer.py +58 -0
  24. promptmeter-0.1.0/token_tracker/__init__.py +3 -0
  25. promptmeter-0.1.0/token_tracker/analyzer/__init__.py +0 -0
  26. promptmeter-0.1.0/token_tracker/analyzer/preflight.py +60 -0
  27. promptmeter-0.1.0/token_tracker/analyzer/rules.py +157 -0
  28. promptmeter-0.1.0/token_tracker/analyzer/scorer.py +22 -0
  29. promptmeter-0.1.0/token_tracker/cli/__init__.py +0 -0
  30. promptmeter-0.1.0/token_tracker/cli/demo.py +122 -0
  31. promptmeter-0.1.0/token_tracker/cli/main.py +69 -0
  32. promptmeter-0.1.0/token_tracker/core/__init__.py +0 -0
  33. promptmeter-0.1.0/token_tracker/core/client.py +126 -0
  34. promptmeter-0.1.0/token_tracker/core/models.py +45 -0
  35. promptmeter-0.1.0/token_tracker/core/tracker.py +61 -0
  36. promptmeter-0.1.0/token_tracker/dashboard/__init__.py +0 -0
  37. promptmeter-0.1.0/token_tracker/dashboard/report.py +240 -0
  38. promptmeter-0.1.0/token_tracker/optimizer/__init__.py +0 -0
  39. promptmeter-0.1.0/token_tracker/optimizer/rewriter.py +103 -0
  40. promptmeter-0.1.0/token_tracker/storage/__init__.py +0 -0
  41. promptmeter-0.1.0/token_tracker/storage/db.py +159 -0
@@ -0,0 +1,30 @@
+ ---
+ name: Bug report
+ about: Something doesn't work as expected
+ title: "[BUG] "
+ labels: bug
+ ---
+
+ ## What happened
+
+ <!-- A clear description of what went wrong. -->
+
+ ## What you expected
+
+ <!-- What did you expect to happen instead? -->
+
+ ## Reproduction
+
+ ```python
+ # minimal code that reproduces the issue
+ ```
+
+ If a specific prompt triggered it:
+ - Prompt: `<paste here>`
+ - Rule that fired (if any): `<RULE_ID>`
+
+ ## Environment
+
+ - promptmeter version: <!-- `pip show promptmeter` -->
+ - Python version: <!-- `python --version` -->
+ - OS:
@@ -0,0 +1,18 @@
+ ---
+ name: Feature request
+ about: Suggest a new feature or enhancement
+ title: "[FEATURE] "
+ labels: enhancement
+ ---
+
+ ## Problem
+
+ <!-- What problem are you trying to solve? -->
+
+ ## Proposed solution
+
+ <!-- What would you like to see? -->
+
+ ## Alternatives considered
+
+ <!-- Any other approaches you've thought about? -->
@@ -0,0 +1,32 @@
+ ---
+ name: Suggest a new rule
+ about: Propose a new pre-flight rule for the analyzer
+ title: "[RULE] "
+ labels: new-rule
+ ---
+
+ ## Pattern this rule should catch
+
+ <!-- One sentence: what wasteful pattern does this detect? -->
+
+ ## Example prompts that should trigger it
+
+ ```
+ <paste 1-3 example prompts>
+ ```
+
+ ## Example prompts that should NOT trigger it (false-positive guards)
+
+ ```
+ <paste 1-3 prompts that look similar but are fine>
+ ```
+
+ ## Suggested severity
+
+ - [ ] high (clear waste, real cost)
+ - [ ] medium (suboptimal, costs more output)
+ - [ ] low (cosmetic, costs a few tokens)
+
+ ## Suggested fix message
+
+ <!-- What would you tell the user? -->
@@ -0,0 +1,19 @@
+ ## Summary
+
+ <!-- 1–3 bullets on what changed and why. -->
+
+ ## Type of change
+
+ - [ ] New rule (added to `analyzer/rules.py`)
+ - [ ] Bug fix
+ - [ ] New feature
+ - [ ] Documentation
+ - [ ] Refactor / internal cleanup
+
+ ## Checklist
+
+ - [ ] `pytest` passes locally
+ - [ ] `ruff check .` passes locally
+ - [ ] Added / updated tests
+ - [ ] Updated README rule table (if a new rule was added)
+ - [ ] Added an entry under `## [Unreleased]` in CHANGELOG.md
@@ -0,0 +1,35 @@
+ name: CI
+
+ on:
+   push:
+     branches: [main]
+   pull_request:
+     branches: [main]
+
+ jobs:
+   test:
+     runs-on: ubuntu-latest
+     strategy:
+       fail-fast: false
+       matrix:
+         python-version: ["3.11", "3.12"]
+
+     steps:
+       - uses: actions/checkout@v4
+
+       - name: Set up Python ${{ matrix.python-version }}
+         uses: actions/setup-python@v5
+         with:
+           python-version: ${{ matrix.python-version }}
+           cache: pip
+
+       - name: Install package + dev dependencies
+         run: |
+           python -m pip install --upgrade pip
+           pip install -e ".[dev]"
+
+       - name: Lint
+         run: ruff check .
+
+       - name: Test
+         run: pytest -v
@@ -0,0 +1,28 @@
+ name: Publish to PyPI
+
+ on:
+   push:
+     tags:
+       - "v*"
+
+ jobs:
+   publish:
+     runs-on: ubuntu-latest
+     permissions:
+       id-token: write # required for PyPI trusted publishing
+
+     steps:
+       - uses: actions/checkout@v4
+
+       - name: Set up Python
+         uses: actions/setup-python@v5
+         with:
+           python-version: "3.12"
+
+       - name: Build
+         run: |
+           python -m pip install --upgrade pip build
+           python -m build
+
+       - name: Publish to PyPI
+         uses: pypa/gh-action-pypi-publish@release/v1
@@ -0,0 +1,42 @@
+ # Python
+ __pycache__/
+ *.py[cod]
+ *$py.class
+ *.so
+ .Python
+ build/
+ dist/
+ *.egg-info/
+ *.egg
+ MANIFEST
+
+ # Virtual environments
+ .venv/
+ venv/
+ env/
+ ENV/
+
+ # Editor / IDE
+ .vscode/
+ .idea/
+ *.swp
+ *.swo
+ .DS_Store
+ Thumbs.db
+
+ # Local Token Tracker data (never commit your usage DB)
+ *.db
+ .token_tracker/
+
+ # Test / coverage
+ .pytest_cache/
+ .coverage
+ htmlcov/
+ .tox/
+
+ # Environment
+ .env
+ .env.local
+
+ # Smoke / scratch test scripts (these were one-off dev verifications)
+ test_phase*.py
@@ -0,0 +1,24 @@
+ # Changelog
+
+ All notable changes to this project will be documented in this file. The format follows
+ [Keep a Changelog](https://keepachangelog.com/en/1.1.0/) and this project adheres to
+ [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
+
+ ## [Unreleased]
+
+ ## [0.1.0] - 2026-05-10
+
+ Initial release.
+
+ ### Added
+
+ - `TrackedClient` — drop-in wrapper around `anthropic.Anthropic`
+ - SQLite logging at `~/.token_tracker/usage.db` (sessions + usage records)
+ - Pre-flight analyzer with 8 rules:
+   `VAGUE_INTENT`, `OPEN_ENDED_TASK`, `WALL_OF_TEXT`, `MISSING_FORMAT`,
+   `MISSING_SCOPE`, `REDUNDANT_CONTEXT`, `FILLER_WORDS`, `AMBIGUOUS_PRONOUN`
+ - 0–100 efficiency scoring with severity-based penalties
+ - Rule-based prompt optimizer with side-by-side rewrite diff
+ - Interactive `[s]end / [o]ptimize+send / [c]ancel` flow
+ - CLI: `tt report`, `tt sessions`, `tt analyze`, `tt top-waste`, `tt cost-model`, `tt demo`
+ - Cost calculation for `claude-opus-4-7`, `claude-sonnet-4-6`, `claude-haiku-4-5`
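
The "0–100 efficiency scoring with severity-based penalties" in the changelog can be sketched as follows. This is an editorial illustration, not the package's `scorer.py`: the penalty weights (25/10/3) and the floor-at-zero behavior are assumptions.

```python
# Hypothetical sketch of severity-weighted scoring; the shipped scorer may differ.
PENALTIES = {"high": 25, "medium": 10, "low": 3}  # assumed weights, not real constants

def efficiency_score(severities: list[str]) -> int:
    """Start at 100, subtract one penalty per fired rule, floor at 0."""
    return max(0, 100 - sum(PENALTIES[s] for s in severities))

print(efficiency_score([]))                  # clean prompt scores 100
print(efficiency_score(["high", "medium"]))  # flagged prompt loses points
```

Under these assumed weights, two high-severity hits alone (50 points) would already push a prompt below the warning threshold of 60 described in the README.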
@@ -0,0 +1,96 @@
+ # Contributing to Token Tracker
+
+ Thanks for considering a contribution. The lowest-friction way to help is **adding a new rule to the pre-flight analyzer** — see below.
+
+ ---
+
+ ## Setting up
+
+ ```bash
+ git clone https://github.com/emam07/Token-tracker.git
+ cd Token-tracker
+ python -m venv .venv
+ .venv\Scripts\activate     # Windows
+ source .venv/bin/activate  # macOS / Linux
+ pip install -e ".[dev]"
+ ```
+
+ Run the test suite:
+ ```bash
+ pytest
+ ruff check .
+ ```
+
+ ---
+
+ ## Adding a new rule (great first PR)
+
+ A "rule" detects one wasteful prompt pattern. Adding one takes ~10 lines.
+
+ **1.** Open `token_tracker/analyzer/rules.py`.
+
+ **2.** Write your rule as a function returning `Warning | None`:
+
+ ```python
+ def check_my_pattern(prompt: str) -> Warning | None:
+     if re.search(r"\bsome bad pattern\b", prompt, re.IGNORECASE):
+         return Warning(
+             rule="MY_RULE_ID",
+             severity="medium",  # "low" | "medium" | "high"
+             message="Short explanation of what's wrong.",
+             suggestion="Concrete advice on how to fix it.",
+         )
+     return None
+ ```
+
+ **3.** Add the function to the `RULES` list at the bottom of the file.
+
+ **4.** Add a unit test in `tests/test_rules.py`:
+
+ ```python
+ def test_my_rule_triggers_on_bad_prompt():
+     result = check_my_pattern("contains some bad pattern here")
+     assert result is not None
+     assert result.rule == "MY_RULE_ID"
+
+ def test_my_rule_skips_clean_prompt():
+     assert check_my_pattern("a perfectly fine prompt") is None
+ ```
+
+ **5.** Add the rule to the table in `README.md`.
+
+ That's it. Open the PR.
+
+ ---
+
+ ## Reporting a false positive
+
+ Open an issue with the **"Bug"** template and include:
+ - The exact prompt that triggered the warning
+ - Which rule fired
+ - Why you think it's wrong
+
+ ---
+
+ ## Pull request checklist
+
+ - [ ] New rule has a unit test (positive + negative case)
+ - [ ] `pytest` passes locally
+ - [ ] `ruff check .` passes locally
+ - [ ] README rule table updated (if a new rule was added)
+ - [ ] CHANGELOG entry added under `## [Unreleased]`
+
+ ---
+
+ ## Code style
+
+ - 100-char lines max
+ - Type hints on public functions
+ - No comments explaining *what* the code does — only *why*, when non-obvious
+ - Prefer stdlib over new dependencies
+
+ ---
+
+ ## Questions
+
+ Open a [discussion](https://github.com/emam07/Token-tracker/discussions) before a big change.
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2026 Token Tracker contributors
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
@@ -0,0 +1,179 @@
+ Metadata-Version: 2.4
+ Name: promptmeter
+ Version: 0.1.0
+ Summary: Pre-flight token analyzer and usage tracker for Claude — gas optimization for LLM prompts
+ Project-URL: Homepage, https://github.com/emam07/Token-tracker
+ Project-URL: Repository, https://github.com/emam07/Token-tracker
+ Project-URL: Issues, https://github.com/emam07/Token-tracker/issues
+ Project-URL: Documentation, https://github.com/emam07/Token-tracker#readme
+ Author: Token Tracker contributors
+ License: MIT
+ License-File: LICENSE
+ Keywords: ai,anthropic,claude,cost,llm,prompt-optimization,token
+ Classifier: Development Status :: 3 - Alpha
+ Classifier: Intended Audience :: Developers
+ Classifier: License :: OSI Approved :: MIT License
+ Classifier: Operating System :: OS Independent
+ Classifier: Programming Language :: Python :: 3
+ Classifier: Programming Language :: Python :: 3.11
+ Classifier: Programming Language :: Python :: 3.12
+ Classifier: Topic :: Software Development :: Libraries :: Python Modules
+ Classifier: Topic :: Utilities
+ Requires-Python: >=3.11
+ Requires-Dist: anthropic>=0.40.0
+ Requires-Dist: rich>=13.0.0
+ Requires-Dist: tiktoken>=0.6.0
+ Requires-Dist: typer>=0.12.0
+ Provides-Extra: dev
+ Requires-Dist: pytest-cov>=4.0; extra == 'dev'
+ Requires-Dist: pytest>=7.0; extra == 'dev'
+ Requires-Dist: ruff>=0.4.0; extra == 'dev'
+ Description-Content-Type: text/markdown
+
+ # Token Tracker
+
+ > **Gas optimization for LLM prompts.** A pre-flight analyzer + usage tracker for the Claude API. Catches vague, wasteful, or open-ended prompts *before* they hit the wire — and suggests a leaner rewrite.
+
+ [![PyPI version](https://img.shields.io/pypi/v/promptmeter.svg)](https://pypi.org/project/promptmeter/)
+ [![Python](https://img.shields.io/pypi/pyversions/promptmeter.svg)](https://pypi.org/project/promptmeter/)
+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](LICENSE)
+ [![CI](https://github.com/emam07/Token-tracker/actions/workflows/ci.yml/badge.svg)](https://github.com/emam07/Token-tracker/actions)
+
+ ---
+
+ ## The idea
+
+ Solidity devs use a gas estimator before deploying. LLM devs send blind prompts and wait for the bill.
+
+ Token Tracker fixes that. It wraps the Anthropic SDK and adds three layers:
+
+ | Layer | What it does | Analogy |
+ |---|---|---|
+ | **Token Tracker** | Logs every API call — tokens in/out, cost, session | Gas meter |
+ | **Pre-flight Analyzer** | Scores your prompt before sending — flags 8 patterns of waste | Gas estimator + linter |
+ | **Prompt Optimizer** | Rewrites bad prompts into leaner versions, side-by-side | Compiler optimizer |
+
+ All analysis is **offline** — zero extra API calls. Storage is local SQLite.
+
+ ---
+
+ ## Demo
+
+ ![Token Tracker demo](docs/demo.gif)
+
+ *Run `tt demo` yourself to see all features without using any API credits.*
+
+ ---
+
+ ## Quickstart
+
+ ```bash
+ pip install promptmeter
+ export ANTHROPIC_API_KEY="sk-ant-..."
+ ```
+
+ ### As a drop-in for the Anthropic SDK
+
+ ```python
+ from token_tracker import TrackedClient
+
+ client = TrackedClient(session_name="my-app")
+
+ response = client.messages.create(
+     model="claude-sonnet-4-6",
+     max_tokens=512,
+     messages=[{"role": "user", "content": "List 3 Python web frameworks."}],
+ )
+ ```
+
+ That's it. Every call is now analyzed, logged, and reportable.
+
+ ### As a standalone CLI
+
+ ```bash
+ tt analyze "Please could you kindly tell me something about Python"
+ tt report          # today's usage summary
+ tt report --week   # last 7 days
+ tt cost-model      # cost breakdown by model
+ tt top-waste       # most expensive flagged prompts
+ tt sessions        # list named sessions
+ tt demo            # guided walkthrough — zero API calls
+ ```
+
+ ---
+
+ ## What the pre-flight catches
+
+ 8 rules, each with severity and a suggested fix:
+
+ | Rule | Severity | Detects |
+ |---|---|---|
+ | `VAGUE_INTENT` | high | `"tell me something about X"` — no clear task |
+ | `OPEN_ENDED_TASK` | high | `"explain everything about X"` — unbounded scope |
+ | `WALL_OF_TEXT` | high | unstructured 500+ token dumps |
+ | `MISSING_FORMAT` | medium | asks for a list without specifying format |
+ | `MISSING_SCOPE` | medium | `"explain X"` with no length/depth limit |
+ | `REDUNDANT_CONTEXT` | medium | the same sentence twice |
+ | `FILLER_WORDS` | low | "please", "could you kindly", "thank you in advance" |
+ | `AMBIGUOUS_PRONOUN` | low | "fix it" — fix what? |
+
+ Each rule contributes to a **0–100 efficiency score**. Below 60 triggers a warning. Below 40 hard-blocks (in interactive mode).
+
+ ---
+
+ ## Try the walkthrough
+
+ ```bash
+ tt demo
+ ```
+
+ Runs through every feature with three example prompts (good, bad, terrible) and renders the live dashboard. No API calls, safe to run anytime.
+
+ ---
+
+ ## How it actually saves tokens
+
+ | Without Token Tracker | With Token Tracker |
+ |---|---|
+ | Vague prompt → Claude asks clarifying questions → 2–3 round trips | Pre-flight catches vagueness → fix once → one round trip |
+ | `"explain X"` → Claude writes 800 words when you needed 100 | `MISSING_SCOPE` flags it → add `"in 3 sentences"` → 6× cheaper output |
+ | Same context pasted in every message | `REDUNDANT_CONTEXT` warns → use prompt caching → 90% cheaper input |
+ | No visibility into spend → no behavior change | Daily report shows cost per session → habits adjust |
+
+ Conservative real-world reduction: **30–50% of token spend** for a developer who acts on warnings.
+
+ ---
+
+ ## Configuration
+
+ ```python
+ TrackedClient(
+     api_key="sk-ant-...",
+     session_name="my-app",  # tag this session in the DB
+     analyze=True,           # run pre-flight before every call
+     interactive=True,       # prompt user on flagged prompts (s/o/c menu)
+     warn_threshold=60,      # show warning when score drops below this
+ )
+ ```
+
+ ---
+
+ ## Where data lives
+
+ A single SQLite file at `~/.token_tracker/usage.db`. Two tables: `sessions`, `usage_records`. Open it with [DB Browser for SQLite](https://sqlitebrowser.org/) to poke around.
+
+ Token Tracker stores a hash of each prompt — never the full text. Your prompts stay in your process.
+
+ ---
+
+ ## Contributing
+
+ The lowest-friction first contribution is **adding a new rule** in `token_tracker/analyzer/rules.py` — about 10 lines of Python. See [CONTRIBUTING.md](CONTRIBUTING.md).
+
+ Found a false positive? Open an issue with the prompt and the rule that misfired.
+
+ ---
+
+ ## License
+
+ MIT — see [LICENSE](LICENSE).
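
The per-model cost breakdown that `tt cost-model` reports can be approximated like this. The per-million-token rates below are placeholder assumptions for illustration, not the package's real pricing table (see `core/client.py` in the file list for the shipped values):

```python
# Placeholder $/million-token rates; the package's actual pricing table may differ.
PRICES = {
    "claude-sonnet-4-6": {"input": 3.00, "output": 15.00},  # assumed rates
}

def call_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one API call, given its token counts."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# e.g. 1,000 input + 500 output tokens on the sonnet model
print(round(call_cost("claude-sonnet-4-6", 1000, 500), 6))
```

The asymmetry between input and output rates is why the README's `MISSING_SCOPE` example pays off: trimming output length saves far more per token than trimming the prompt itself.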
@@ -0,0 +1,147 @@
+ # Token Tracker
+
+ > **Gas optimization for LLM prompts.** A pre-flight analyzer + usage tracker for the Claude API. Catches vague, wasteful, or open-ended prompts *before* they hit the wire — and suggests a leaner rewrite.
+
+ [![PyPI version](https://img.shields.io/pypi/v/promptmeter.svg)](https://pypi.org/project/promptmeter/)
+ [![Python](https://img.shields.io/pypi/pyversions/promptmeter.svg)](https://pypi.org/project/promptmeter/)
+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](LICENSE)
+ [![CI](https://github.com/emam07/Token-tracker/actions/workflows/ci.yml/badge.svg)](https://github.com/emam07/Token-tracker/actions)
+
+ ---
+
+ ## The idea
+
+ Solidity devs use a gas estimator before deploying. LLM devs send blind prompts and wait for the bill.
+
+ Token Tracker fixes that. It wraps the Anthropic SDK and adds three layers:
+
+ | Layer | What it does | Analogy |
+ |---|---|---|
+ | **Token Tracker** | Logs every API call — tokens in/out, cost, session | Gas meter |
+ | **Pre-flight Analyzer** | Scores your prompt before sending — flags 8 patterns of waste | Gas estimator + linter |
+ | **Prompt Optimizer** | Rewrites bad prompts into leaner versions, side-by-side | Compiler optimizer |
+
+ All analysis is **offline** — zero extra API calls. Storage is local SQLite.
+
+ ---
+
+ ## Demo
+
+ ![Token Tracker demo](docs/demo.gif)
+
+ *Run `tt demo` yourself to see all features without using any API credits.*
+
+ ---
+
+ ## Quickstart
+
+ ```bash
+ pip install promptmeter
+ export ANTHROPIC_API_KEY="sk-ant-..."
+ ```
+
+ ### As a drop-in for the Anthropic SDK
+
+ ```python
+ from token_tracker import TrackedClient
+
+ client = TrackedClient(session_name="my-app")
+
+ response = client.messages.create(
+     model="claude-sonnet-4-6",
+     max_tokens=512,
+     messages=[{"role": "user", "content": "List 3 Python web frameworks."}],
+ )
+ ```
+
+ That's it. Every call is now analyzed, logged, and reportable.
+
+ ### As a standalone CLI
+
+ ```bash
+ tt analyze "Please could you kindly tell me something about Python"
+ tt report          # today's usage summary
+ tt report --week   # last 7 days
+ tt cost-model      # cost breakdown by model
+ tt top-waste       # most expensive flagged prompts
+ tt sessions        # list named sessions
+ tt demo            # guided walkthrough — zero API calls
+ ```
+
+ ---
+
+ ## What the pre-flight catches
+
+ 8 rules, each with severity and a suggested fix:
+
+ | Rule | Severity | Detects |
+ |---|---|---|
+ | `VAGUE_INTENT` | high | `"tell me something about X"` — no clear task |
+ | `OPEN_ENDED_TASK` | high | `"explain everything about X"` — unbounded scope |
+ | `WALL_OF_TEXT` | high | unstructured 500+ token dumps |
+ | `MISSING_FORMAT` | medium | asks for a list without specifying format |
+ | `MISSING_SCOPE` | medium | `"explain X"` with no length/depth limit |
+ | `REDUNDANT_CONTEXT` | medium | the same sentence twice |
+ | `FILLER_WORDS` | low | "please", "could you kindly", "thank you in advance" |
+ | `AMBIGUOUS_PRONOUN` | low | "fix it" — fix what? |
+
+ Each rule contributes to a **0–100 efficiency score**. Below 60 triggers a warning. Below 40 hard-blocks (in interactive mode).
+
+ ---
+
+ ## Try the walkthrough
+
+ ```bash
+ tt demo
+ ```
+
+ Runs through every feature with three example prompts (good, bad, terrible) and renders the live dashboard. No API calls, safe to run anytime.
+
+ ---
+
+ ## How it actually saves tokens
+
+ | Without Token Tracker | With Token Tracker |
+ |---|---|
+ | Vague prompt → Claude asks clarifying questions → 2–3 round trips | Pre-flight catches vagueness → fix once → one round trip |
+ | `"explain X"` → Claude writes 800 words when you needed 100 | `MISSING_SCOPE` flags it → add `"in 3 sentences"` → 6× cheaper output |
+ | Same context pasted in every message | `REDUNDANT_CONTEXT` warns → use prompt caching → 90% cheaper input |
+ | No visibility into spend → no behavior change | Daily report shows cost per session → habits adjust |
+
+ Conservative real-world reduction: **30–50% of token spend** for a developer who acts on warnings.
+
+ ---
+
+ ## Configuration
+
+ ```python
+ TrackedClient(
+     api_key="sk-ant-...",
+     session_name="my-app",  # tag this session in the DB
+     analyze=True,           # run pre-flight before every call
+     interactive=True,       # prompt user on flagged prompts (s/o/c menu)
+     warn_threshold=60,      # show warning when score drops below this
+ )
+ ```
+
+ ---
+
+ ## Where data lives
+
+ A single SQLite file at `~/.token_tracker/usage.db`. Two tables: `sessions`, `usage_records`. Open it with [DB Browser for SQLite](https://sqlitebrowser.org/) to poke around.
+
+ Token Tracker stores a hash of each prompt — never the full text. Your prompts stay in your process.
+
+ ---
+
+ ## Contributing
+
+ The lowest-friction first contribution is **adding a new rule** in `token_tracker/analyzer/rules.py` — about 10 lines of Python. See [CONTRIBUTING.md](CONTRIBUTING.md).
+
+ Found a false positive? Open an issue with the prompt and the rule that misfired.
+
+ ---
+
+ ## License
+
+ MIT — see [LICENSE](LICENSE).
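
The "stores a hash of each prompt — never the full text" claim in the README's "Where data lives" section can be realized entirely with the stdlib. This sketch is a guess at the scheme (SHA-256 over UTF-8 text); the package's actual fingerprinting in `storage/db.py` is not verified here:

```python
import hashlib

def prompt_fingerprint(prompt: str) -> str:
    """Stable, non-reversible ID for a prompt; the full text never touches disk."""
    return hashlib.sha256(prompt.encode("utf-8")).hexdigest()

fp = prompt_fingerprint("List 3 Python web frameworks.")
print(fp[:12])  # a short prefix is enough to correlate usage records
```

Because the hash is deterministic, repeated sends of the same prompt map to the same record (useful for `tt top-waste`-style aggregation) while the stored value cannot be reversed into the prompt text.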