tokenator 0.1.10__tar.gz → 0.1.12__tar.gz

@@ -0,0 +1,240 @@
1
+ Metadata-Version: 2.3
2
+ Name: tokenator
3
+ Version: 0.1.12
4
+ Summary: Token usage tracking wrapper for LLMs
5
+ License: MIT
6
+ Author: Ujjwal Maheshwari
7
+ Author-email: your.email@example.com
8
+ Requires-Python: >=3.9,<4.0
9
+ Classifier: License :: OSI Approved :: MIT License
10
+ Classifier: Programming Language :: Python :: 3
11
+ Classifier: Programming Language :: Python :: 3.9
12
+ Classifier: Programming Language :: Python :: 3.10
13
+ Classifier: Programming Language :: Python :: 3.11
14
+ Classifier: Programming Language :: Python :: 3.12
15
+ Classifier: Programming Language :: Python :: 3.13
16
+ Requires-Dist: alembic (>=1.13.0,<2.0.0)
17
+ Requires-Dist: anthropic (>=0.40.0,<0.41.0)
18
+ Requires-Dist: openai (>=1.57.0,<2.0.0)
19
+ Requires-Dist: requests (>=2.32.3,<3.0.0)
20
+ Requires-Dist: sqlalchemy (>=2.0.0,<3.0.0)
21
+ Description-Content-Type: text/markdown
22
+
23
+ # Tokenator: Track and analyze LLM token usage and cost
24
+
25
+ Have you ever wondered:
26
+ - How many tokens does your AI agent consume?
27
+ - How much does it cost to run a complex AI workflow with multiple LLM providers?
28
+ - How much money/tokens did you spend today while developing with LLMs?
29
+
30
+ Fear not, tokenator is here! With tokenator's easy-to-use API, you can start tracking LLM usage in a matter of minutes.
31
+
32
+ Get started with just 3 lines of code!
33
+
34
+ ## Installation
35
+
36
+ ```bash
37
+ pip install tokenator
38
+ ```
39
+
40
+ ## Usage
41
+
42
+ ### OpenAI
43
+
44
+ ```python
45
+ from openai import OpenAI
46
+ from tokenator import tokenator_openai
47
+
48
+ openai_client = OpenAI(api_key="your-api-key")
49
+
50
+ # Wrap it with Tokenator
51
+ client = tokenator_openai(openai_client)
52
+
53
+ # Use it exactly like the OpenAI client
54
+ response = client.chat.completions.create(
55
+ model="gpt-4o",
56
+ messages=[{"role": "user", "content": "Hello!"}]
57
+ )
58
+ ```
59
+
60
+ Works with AsyncOpenAI and `stream=True` as well!
61
+ Note: when streaming, don't forget to add `stream_options={"include_usage": True}` to the `create()` call!
62
+
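To see what that option changes, here is a minimal sketch of how usage can be totalled from a streamed response. Plain dicts stand in for the SDK's chunk objects, so this is an illustration of the protocol, not tokenator's internals: with `stream_options={"include_usage": True}`, the final chunk of the stream carries a populated `usage` field.

```python
# Sketch: summing token usage from one streamed chat completion.
# Plain dicts stand in for the OpenAI SDK's chunk objects; with
# stream_options={"include_usage": True}, only the final chunk
# has a non-null `usage` field.

def total_streamed_usage(chunks):
    """Sum prompt/completion tokens reported across a stream's chunks."""
    prompt = completion = 0
    for chunk in chunks:
        usage = chunk.get("usage")
        if usage:  # only the final chunk carries usage
            prompt += usage["prompt_tokens"]
            completion += usage["completion_tokens"]
    return {
        "prompt_tokens": prompt,
        "completion_tokens": completion,
        "total_tokens": prompt + completion,
    }

chunks = [
    {"choices": [{"delta": {"content": "Hel"}}], "usage": None},
    {"choices": [{"delta": {"content": "lo!"}}], "usage": None},
    {"choices": [], "usage": {"prompt_tokens": 9, "completion_tokens": 2}},
]
print(total_streamed_usage(chunks))
```

Without `include_usage`, that final usage chunk is never sent and there is nothing to record.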
63
+ ### Cost Analysis
64
+
65
+ ```python
66
+ from tokenator import usage
67
+
68
+ # Get usage for different time periods
69
+ usage.last_hour()
70
+ usage.last_day()
71
+ usage.last_week()
72
+ usage.last_month()
73
+
74
+ # Custom date range
75
+ usage.between("2024-03-01", "2024-03-15")
76
+
77
+ # Get usage for different LLM providers
78
+ usage.last_day("openai")
79
+ usage.last_day("anthropic")
80
+ usage.last_day("google")
81
+ ```
82
+
83
+ ### Example `usage` object
84
+
85
+ ```python
86
+ print(usage.last_hour().model_dump_json(indent=4))
87
+ ```
88
+
89
+ ```json
90
+ {
91
+ "total_cost": 0.0004,
92
+ "total_tokens": 79,
93
+ "prompt_tokens": 52,
94
+ "completion_tokens": 27,
95
+ "providers": [
96
+ {
97
+ "total_cost": 0.0004,
98
+ "total_tokens": 79,
99
+ "prompt_tokens": 52,
100
+ "completion_tokens": 27,
101
+ "provider": "openai",
102
+ "models": [
103
+ {
104
+ "total_cost": 0.0004,
105
+ "total_tokens": 79,
106
+ "prompt_tokens": 52,
107
+ "completion_tokens": 27,
108
+ "model": "gpt-4o-2024-08-06"
109
+ }
110
+ ]
111
+ }
112
+ ]
113
+ }
114
+ ```
115
+
116
+ ## Features
117
+
118
+ - Drop-in replacement for the OpenAI and Anthropic clients
119
+ - Automatic token usage tracking
120
+ - Cost analysis for different time periods
121
+ - SQLite storage with zero configuration
122
+ - Thread-safe operations
123
+ - Minimal memory footprint
124
+ - Minimal latency footprint
125
+
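Because usage lands in a local SQLite file, it can be inspected with nothing but the standard library. A hedged sketch of that kind of aggregation follows; the table and column names are illustrative assumptions, not tokenator's actual schema:

```python
# Sketch: aggregating per-provider token counts from a SQLite store.
# The schema below is an illustrative assumption, not tokenator's real one.
import sqlite3

conn = sqlite3.connect(":memory:")  # a real path would be tokenator's db file
conn.execute(
    """CREATE TABLE token_usage (
        provider TEXT, model TEXT,
        prompt_tokens INTEGER, completion_tokens INTEGER)"""
)
conn.executemany(
    "INSERT INTO token_usage VALUES (?, ?, ?, ?)",
    [("openai", "gpt-4o", 52, 27), ("anthropic", "claude-3-5-haiku", 10, 13)],
)

# Per-provider totals, in the spirit of usage.last_day("openai") etc.
for provider, total in conn.execute(
    "SELECT provider, SUM(prompt_tokens + completion_tokens) "
    "FROM token_usage GROUP BY provider ORDER BY provider"
):
    print(provider, total)
```

Zero-configuration here just means no server and no credentials: the database is an ordinary file you can query, back up, or delete.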
126
+ ### Anthropic
127
+
128
+ ```python
129
+ from anthropic import AsyncAnthropic
130
+ from tokenator import tokenator_anthropic
131
+
132
+ anthropic_client = AsyncAnthropic(api_key="your-api-key")
133
+
134
+ # Wrap it with Tokenator
135
+ client = tokenator_anthropic(anthropic_client)
136
+
137
+ # Use it exactly like the Anthropic client
138
+ response = await client.messages.create(
139
+ model="claude-3-5-haiku-20241022",
140
+ messages=[{"role": "user", "content": "hello how are you"}],
141
+ max_tokens=20,
142
+ )
143
+
144
+ print(response)
145
+
146
+ print(usage.last_execution().model_dump_json(indent=4))
147
+ """
148
+ {
149
+ "total_cost": 0.0001,
150
+ "total_tokens": 23,
151
+ "prompt_tokens": 10,
152
+ "completion_tokens": 13,
153
+ "providers": [
154
+ {
155
+ "total_cost": 0.0001,
156
+ "total_tokens": 23,
157
+ "prompt_tokens": 10,
158
+ "completion_tokens": 13,
159
+ "provider": "anthropic",
160
+ "models": [
161
+ {
162
+ "total_cost": 0.0004,
163
+ "total_tokens": 79,
164
+ "prompt_tokens": 52,
165
+ "completion_tokens": 27,
166
+ "model": "claude-3-5-haiku-20241022"
167
+ }
168
+ ]
169
+ }
170
+ ]
171
+ }
172
+ """
173
+ ```
174
+
175
+ ### xAI
176
+
177
+ You can use xAI models through the `openai` SDK and track usage with the `provider` parameter in `tokenator`.
178
+
179
+ ```python
180
+ from openai import OpenAI
181
+ from tokenator import tokenator_openai
182
+
183
+ xai_client = OpenAI(
184
+ api_key="your-api-key",
185
+ base_url="https://api.x.ai/v1"
186
+ )
187
+
188
+ # Wrap it with Tokenator
189
+ client = tokenator_openai(xai_client, provider="xai")
190
+
191
+ # Use it exactly like the OpenAI client but with xAI models
192
+ response = client.chat.completions.create(
193
+ model="grok-2-latest",
194
+ messages=[{"role": "user", "content": "Hello!"}]
195
+ )
196
+
197
+ print(response)
198
+
199
+ print(usage.last_execution())
200
+ ```
201
+
202
+ ### Other AI providers through the `openai` SDK
203
+
204
+ Today, a variety of AI companies have made their APIs compatible with the `openai` SDK.
205
+ You can track usage of any such model using `tokenator`'s `provider` parameter.
206
+
207
+ For example, let's see how to track usage of Perplexity tokens.
208
+
209
+ ```python
210
+ from openai import OpenAI
211
+ from tokenator import tokenator_openai
212
+
213
+ perplexity_client = OpenAI(
214
+ api_key="your-api-key",
215
+ base_url="https://api.perplexity.ai"
216
+ )
217
+
218
+ # Wrap it with Tokenator
219
+ client = tokenator_openai(perplexity_client, provider="perplexity")
220
+
221
+ # Use it exactly like the OpenAI client but with Perplexity models
222
+ response = client.chat.completions.create(
223
+ model="sonar",
224
+ messages=[{"role": "user", "content": "Hello!"}]
225
+ )
226
+
227
+ print(response)
228
+
229
+ print(usage.last_execution())
230
+
231
+ print(usage.provider("perplexity"))
232
+ ```
233
+
234
+ ---
235
+
236
+ Most importantly, none of your data is ever sent to any server.
237
+
238
+ ## License
239
+
240
+ MIT
@@ -0,0 +1,218 @@
1
+ # Tokenator: Track and analyze LLM token usage and cost
2
+
3
+ Have you ever wondered:
4
+ - How many tokens does your AI agent consume?
5
+ - How much does it cost to run a complex AI workflow with multiple LLM providers?
6
+ - How much money/tokens did you spend today while developing with LLMs?
7
+
8
+ Fear not, tokenator is here! With tokenator's easy-to-use API, you can start tracking LLM usage in a matter of minutes.
9
+
10
+ Get started with just 3 lines of code!
11
+
12
+ ## Installation
13
+
14
+ ```bash
15
+ pip install tokenator
16
+ ```
17
+
18
+ ## Usage
19
+
20
+ ### OpenAI
21
+
22
+ ```python
23
+ from openai import OpenAI
24
+ from tokenator import tokenator_openai
25
+
26
+ openai_client = OpenAI(api_key="your-api-key")
27
+
28
+ # Wrap it with Tokenator
29
+ client = tokenator_openai(openai_client)
30
+
31
+ # Use it exactly like the OpenAI client
32
+ response = client.chat.completions.create(
33
+ model="gpt-4o",
34
+ messages=[{"role": "user", "content": "Hello!"}]
35
+ )
36
+ ```
37
+
38
+ Works with AsyncOpenAI and `stream=True` as well!
39
+ Note: when streaming, don't forget to add `stream_options={"include_usage": True}` to the `create()` call!
40
+
41
+ ### Cost Analysis
42
+
43
+ ```python
44
+ from tokenator import usage
45
+
46
+ # Get usage for different time periods
47
+ usage.last_hour()
48
+ usage.last_day()
49
+ usage.last_week()
50
+ usage.last_month()
51
+
52
+ # Custom date range
53
+ usage.between("2024-03-01", "2024-03-15")
54
+
55
+ # Get usage for different LLM providers
56
+ usage.last_day("openai")
57
+ usage.last_day("anthropic")
58
+ usage.last_day("google")
59
+ ```
60
+
61
+ ### Example `usage` object
62
+
63
+ ```python
64
+ print(usage.last_hour().model_dump_json(indent=4))
65
+ ```
66
+
67
+ ```json
68
+ {
69
+ "total_cost": 0.0004,
70
+ "total_tokens": 79,
71
+ "prompt_tokens": 52,
72
+ "completion_tokens": 27,
73
+ "providers": [
74
+ {
75
+ "total_cost": 0.0004,
76
+ "total_tokens": 79,
77
+ "prompt_tokens": 52,
78
+ "completion_tokens": 27,
79
+ "provider": "openai",
80
+ "models": [
81
+ {
82
+ "total_cost": 0.0004,
83
+ "total_tokens": 79,
84
+ "prompt_tokens": 52,
85
+ "completion_tokens": 27,
86
+ "model": "gpt-4o-2024-08-06"
87
+ }
88
+ ]
89
+ }
90
+ ]
91
+ }
92
+ ```
93
+
94
+ ## Features
95
+
96
+ - Drop-in replacement for the OpenAI and Anthropic clients
97
+ - Automatic token usage tracking
98
+ - Cost analysis for different time periods
99
+ - SQLite storage with zero configuration
100
+ - Thread-safe operations
101
+ - Minimal memory footprint
102
+ - Minimal latency footprint
103
+
104
+ ### Anthropic
105
+
106
+ ```python
107
+ from anthropic import AsyncAnthropic
108
+ from tokenator import tokenator_anthropic
109
+
110
+ anthropic_client = AsyncAnthropic(api_key="your-api-key")
111
+
112
+ # Wrap it with Tokenator
113
+ client = tokenator_anthropic(anthropic_client)
114
+
115
+ # Use it exactly like the Anthropic client
116
+ response = await client.messages.create(
117
+ model="claude-3-5-haiku-20241022",
118
+ messages=[{"role": "user", "content": "hello how are you"}],
119
+ max_tokens=20,
120
+ )
121
+
122
+ print(response)
123
+
124
+ print(usage.last_execution().model_dump_json(indent=4))
125
+ """
126
+ {
127
+ "total_cost": 0.0001,
128
+ "total_tokens": 23,
129
+ "prompt_tokens": 10,
130
+ "completion_tokens": 13,
131
+ "providers": [
132
+ {
133
+ "total_cost": 0.0001,
134
+ "total_tokens": 23,
135
+ "prompt_tokens": 10,
136
+ "completion_tokens": 13,
137
+ "provider": "anthropic",
138
+ "models": [
139
+ {
140
+ "total_cost": 0.0004,
141
+ "total_tokens": 79,
142
+ "prompt_tokens": 52,
143
+ "completion_tokens": 27,
144
+ "model": "claude-3-5-haiku-20241022"
145
+ }
146
+ ]
147
+ }
148
+ ]
149
+ }
150
+ """
151
+ ```
152
+
153
+ ### xAI
154
+
155
+ You can use xAI models through the `openai` SDK and track usage with the `provider` parameter in `tokenator`.
156
+
157
+ ```python
158
+ from openai import OpenAI
159
+ from tokenator import tokenator_openai
160
+
161
+ xai_client = OpenAI(
162
+ api_key="your-api-key",
163
+ base_url="https://api.x.ai/v1"
164
+ )
165
+
166
+ # Wrap it with Tokenator
167
+ client = tokenator_openai(xai_client, provider="xai")
168
+
169
+ # Use it exactly like the OpenAI client but with xAI models
170
+ response = client.chat.completions.create(
171
+ model="grok-2-latest",
172
+ messages=[{"role": "user", "content": "Hello!"}]
173
+ )
174
+
175
+ print(response)
176
+
177
+ print(usage.last_execution())
178
+ ```
179
+
180
+ ### Other AI providers through the `openai` SDK
181
+
182
+ Today, a variety of AI companies have made their APIs compatible with the `openai` SDK.
183
+ You can track usage of any such model using `tokenator`'s `provider` parameter.
184
+
185
+ For example, let's see how to track usage of Perplexity tokens.
186
+
187
+ ```python
188
+ from openai import OpenAI
189
+ from tokenator import tokenator_openai
190
+
191
+ perplexity_client = OpenAI(
192
+ api_key="your-api-key",
193
+ base_url="https://api.perplexity.ai"
194
+ )
195
+
196
+ # Wrap it with Tokenator
197
+ client = tokenator_openai(perplexity_client, provider="perplexity")
198
+
199
+ # Use it exactly like the OpenAI client but with Perplexity models
200
+ response = client.chat.completions.create(
201
+ model="sonar",
202
+ messages=[{"role": "user", "content": "Hello!"}]
203
+ )
204
+
205
+ print(response)
206
+
207
+ print(usage.last_execution())
208
+
209
+ print(usage.provider("perplexity"))
210
+ ```
211
+
212
+ ---
213
+
214
+ Most importantly, none of your data is ever sent to any server.
215
+
216
+ ## License
217
+
218
+ MIT
@@ -1,6 +1,6 @@
1
1
  [tool.poetry]
2
2
  name = "tokenator"
3
- version = "0.1.10"
3
+ version = "0.1.12"
4
4
  description = "Token usage tracking wrapper for LLMs"
5
5
  authors = ["Ujjwal Maheshwari <your.email@example.com>"]
6
6
  readme = "README.md"
@@ -71,7 +71,6 @@ def _create_usage_callback(execution_id, log_usage_fn):
71
71
  usage_data.usage.prompt_tokens += chunk.message.usage.input_tokens
72
72
  usage_data.usage.completion_tokens += chunk.message.usage.output_tokens
73
73
  elif isinstance(chunk, RawMessageDeltaEvent):
74
- usage_data.usage.prompt_tokens += chunk.usage.input_tokens
75
74
  usage_data.usage.completion_tokens += chunk.usage.output_tokens
76
75
 
77
76
  usage_data.usage.total_tokens = usage_data.usage.prompt_tokens + usage_data.usage.completion_tokens
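This hunk removes a double count: in Anthropic's streaming protocol, the message-start event reports input tokens once, while subsequent delta events only carry output tokens. A minimal sketch of that accumulation, using plain dataclasses as stand-ins for the SDK's event types:

```python
from dataclasses import dataclass

# Stand-ins for anthropic's RawMessageStartEvent / RawMessageDeltaEvent usage.
@dataclass
class StartUsage:
    input_tokens: int
    output_tokens: int

@dataclass
class DeltaUsage:
    output_tokens: int  # delta events never re-report input tokens

def accumulate(events):
    """Return (prompt, completion, total) token counts for one stream."""
    prompt = completion = 0
    for usage in events:
        if isinstance(usage, StartUsage):
            prompt += usage.input_tokens       # input counted exactly once
            completion += usage.output_tokens
        else:
            completion += usage.output_tokens  # deltas: output only
    return prompt, completion, prompt + completion

events = [StartUsage(10, 1), DeltaUsage(5), DeltaUsage(7)]
print(accumulate(events))  # → (10, 13, 23)
```

Adding `input_tokens` again on each delta event, as the removed line did, would inflate the prompt count by a multiple of the number of deltas.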
@@ -47,7 +47,7 @@ class BaseWrapper:
47
47
  total_tokens=token_usage_stats.usage.total_tokens,
48
48
  )
49
49
  session.add(token_usage)
50
- logger.info(
50
+ logger.debug(
51
51
  "Logged token usage: model=%s, total_tokens=%d",
52
52
  token_usage_stats.model,
53
53
  token_usage_stats.usage.total_tokens,
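This hunk demotes the per-request message from `info` to `debug`, so it disappears under Python's default `WARNING` threshold. Opting back in is one line of standard `logging` configuration; the logger name `"tokenator"` below assumes the package uses module-path loggers (`logging.getLogger(__name__)`):

```python
import logging

# Per-request usage messages are now DEBUG-level, hidden by default.
# Re-enable them for the tokenator logger tree only.
logging.basicConfig(level=logging.WARNING)
logging.getLogger("tokenator").setLevel(logging.DEBUG)

# Child loggers inherit the level, so a message like the one in the hunk
# above would be emitted again:
log = logging.getLogger("tokenator.demo")
log.debug("Logged token usage: model=%s, total_tokens=%d", "gpt-4o", 79)
```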
@@ -14,7 +14,9 @@ logger = logging.getLogger(__name__)
14
14
 
15
15
 
16
16
  class BaseOpenAIWrapper(BaseWrapper):
17
- provider = "openai"
17
+ def __init__(self, client, db_path=None, provider: str = "openai"):
18
+ super().__init__(client, db_path)
19
+ self.provider = provider
18
20
 
19
21
  def _process_response_usage(
20
22
  self, response: ResponseType
@@ -134,6 +136,7 @@ class AsyncOpenAIWrapper(BaseOpenAIWrapper):
134
136
  def tokenator_openai(
135
137
  client: OpenAI,
136
138
  db_path: Optional[str] = None,
139
+ provider: str = "openai",
137
140
  ) -> OpenAIWrapper: ...
138
141
 
139
142
 
@@ -141,23 +144,26 @@ def tokenator_openai(
141
144
  def tokenator_openai(
142
145
  client: AsyncOpenAI,
143
146
  db_path: Optional[str] = None,
147
+ provider: str = "openai",
144
148
  ) -> AsyncOpenAIWrapper: ...
145
149
 
146
150
 
147
151
  def tokenator_openai(
148
152
  client: Union[OpenAI, AsyncOpenAI],
149
153
  db_path: Optional[str] = None,
154
+ provider: str = "openai",
150
155
  ) -> Union[OpenAIWrapper, AsyncOpenAIWrapper]:
151
156
  """Create a token-tracking wrapper for an OpenAI client.
152
157
 
153
158
  Args:
154
159
  client: OpenAI or AsyncOpenAI client instance
155
160
  db_path: Optional path to SQLite database for token tracking
161
+ provider: Provider name, defaults to "openai"
156
162
  """
157
163
  if isinstance(client, OpenAI):
158
- return OpenAIWrapper(client=client, db_path=db_path)
164
+ return OpenAIWrapper(client=client, db_path=db_path, provider=provider)
159
165
 
160
166
  if isinstance(client, AsyncOpenAI):
161
- return AsyncOpenAIWrapper(client=client, db_path=db_path)
167
+ return AsyncOpenAIWrapper(client=client, db_path=db_path, provider=provider)
162
168
 
163
169
  raise ValueError("Client must be an instance of OpenAI or AsyncOpenAI")
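The change above threads the new `provider` parameter through each `@overload` stub as well as the implementation, since the stubs are what static type checkers read. A self-contained sketch of the same pattern, with toy classes standing in for the real OpenAI clients and wrappers:

```python
from typing import Optional, Union, overload

# Toy stand-ins for the real openai clients and tokenator wrappers.
class OpenAI: ...
class AsyncOpenAI: ...

class OpenAIWrapper:
    def __init__(self, client, db_path=None, provider="openai"):
        self.provider = provider

class AsyncOpenAIWrapper(OpenAIWrapper): ...

@overload
def tokenator_openai(client: OpenAI, db_path: Optional[str] = None,
                     provider: str = "openai") -> OpenAIWrapper: ...
@overload
def tokenator_openai(client: AsyncOpenAI, db_path: Optional[str] = None,
                     provider: str = "openai") -> AsyncOpenAIWrapper: ...

def tokenator_openai(client, db_path=None, provider="openai"):
    # The implementation dispatches on runtime type; the overload stubs
    # exist only to narrow the return type for static checkers.
    if isinstance(client, AsyncOpenAI):
        return AsyncOpenAIWrapper(client, db_path, provider)
    if isinstance(client, OpenAI):
        return OpenAIWrapper(client, db_path, provider)
    raise ValueError("Client must be an instance of OpenAI or AsyncOpenAI")

print(tokenator_openai(OpenAI(), provider="xai").provider)  # → xai
```

Defaulting `provider` to `"openai"` in every signature is what keeps existing two-argument calls working unchanged.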
tokenator-0.1.10/PKG-INFO DELETED
@@ -1,127 +0,0 @@
1
- Metadata-Version: 2.1
2
- Name: tokenator
3
- Version: 0.1.10
4
- Summary: Token usage tracking wrapper for LLMs
5
- License: MIT
6
- Author: Ujjwal Maheshwari
7
- Author-email: your.email@example.com
8
- Requires-Python: >=3.9,<4.0
9
- Classifier: License :: OSI Approved :: MIT License
10
- Classifier: Programming Language :: Python :: 3
11
- Classifier: Programming Language :: Python :: 3.9
12
- Classifier: Programming Language :: Python :: 3.10
13
- Classifier: Programming Language :: Python :: 3.11
14
- Classifier: Programming Language :: Python :: 3.12
15
- Classifier: Programming Language :: Python :: 3.13
16
- Requires-Dist: alembic (>=1.13.0,<2.0.0)
17
- Requires-Dist: anthropic (>=0.40.0,<0.41.0)
18
- Requires-Dist: openai (>=1.57.0,<2.0.0)
19
- Requires-Dist: requests (>=2.32.3,<3.0.0)
20
- Requires-Dist: sqlalchemy (>=2.0.0,<3.0.0)
21
- Description-Content-Type: text/markdown
22
-
23
- # Tokenator : Easiest way to track and analyze LLM token usage and cost
24
-
25
- Have you ever wondered about :
26
- - How many tokens does your AI agent consume?
27
- - How much does it cost to do run a complex AI workflow with multiple LLM providers?
28
- - How much money did I spent today on development?
29
-
30
- Afraid not, tokenator is here! With tokenator's easy to use API, you can start tracking LLM usage in a matter of minutes.
31
-
32
- Get started with just 3 lines of code!
33
-
34
- ## Installation
35
-
36
- ```bash
37
- pip install tokenator
38
- ```
39
-
40
- ## Usage
41
-
42
- ### OpenAI
43
-
44
- ```python
45
- from openai import OpenAI
46
- from tokenator import tokenator_openai
47
-
48
- openai_client = OpenAI(api_key="your-api-key")
49
-
50
- # Wrap it with Tokenator
51
- client = tokenator_openai(openai_client)
52
-
53
- # Use it exactly like the OpenAI client
54
- response = client.chat.completions.create(
55
- model="gpt-4o",
56
- messages=[{"role": "user", "content": "Hello!"}]
57
- )
58
- ```
59
-
60
- ### Cost Analysis
61
-
62
- ```python
63
- from tokenator import usage
64
-
65
- # Get usage for different time periods
66
- usage.last_hour()
67
- usage.last_day()
68
- usage.last_week()
69
- usage.last_month()
70
-
71
- # Custom date range
72
- usage.between("2024-03-01", "2024-03-15")
73
-
74
- # Get usage for different LLM providers
75
- usage.last_day("openai")
76
- usage.last_day("anthropic")
77
- usage.last_day("google")
78
- ```
79
-
80
- ### Example `usage` object
81
-
82
- ```python
83
- print(cost.last_hour().model_dump_json(indent=4))
84
- ```
85
-
86
- ```json
87
- {
88
- "total_cost": 0.0004,
89
- "total_tokens": 79,
90
- "prompt_tokens": 52,
91
- "completion_tokens": 27,
92
- "providers": [
93
- {
94
- "total_cost": 0.0004,
95
- "total_tokens": 79,
96
- "prompt_tokens": 52,
97
- "completion_tokens": 27,
98
- "provider": "openai",
99
- "models": [
100
- {
101
- "total_cost": 0.0004,
102
- "total_tokens": 79,
103
- "prompt_tokens": 52,
104
- "completion_tokens": 27,
105
- "model": "gpt-4o-2024-08-06"
106
- }
107
- ]
108
- }
109
- ]
110
- }
111
- ```
112
-
113
- ## Features
114
-
115
- - Drop-in replacement for OpenAI, Anthropic client
116
- - Automatic token usage tracking
117
- - Cost analysis for different time periods
118
- - SQLite storage with zero configuration
119
- - Thread-safe operations
120
- - Minimal memory footprint
121
- - Minimal latency footprint
122
-
123
- Most importantly, none of your data is ever sent to any server.
124
-
125
- ## License
126
-
127
- MIT
@@ -1,105 +0,0 @@
1
- # Tokenator : Easiest way to track and analyze LLM token usage and cost
2
-
3
- Have you ever wondered about :
4
- - How many tokens does your AI agent consume?
5
- - How much does it cost to do run a complex AI workflow with multiple LLM providers?
6
- - How much money did I spent today on development?
7
-
8
- Afraid not, tokenator is here! With tokenator's easy to use API, you can start tracking LLM usage in a matter of minutes.
9
-
10
- Get started with just 3 lines of code!
11
-
12
- ## Installation
13
-
14
- ```bash
15
- pip install tokenator
16
- ```
17
-
18
- ## Usage
19
-
20
- ### OpenAI
21
-
22
- ```python
23
- from openai import OpenAI
24
- from tokenator import tokenator_openai
25
-
26
- openai_client = OpenAI(api_key="your-api-key")
27
-
28
- # Wrap it with Tokenator
29
- client = tokenator_openai(openai_client)
30
-
31
- # Use it exactly like the OpenAI client
32
- response = client.chat.completions.create(
33
- model="gpt-4o",
34
- messages=[{"role": "user", "content": "Hello!"}]
35
- )
36
- ```
37
-
38
- ### Cost Analysis
39
-
40
- ```python
41
- from tokenator import usage
42
-
43
- # Get usage for different time periods
44
- usage.last_hour()
45
- usage.last_day()
46
- usage.last_week()
47
- usage.last_month()
48
-
49
- # Custom date range
50
- usage.between("2024-03-01", "2024-03-15")
51
-
52
- # Get usage for different LLM providers
53
- usage.last_day("openai")
54
- usage.last_day("anthropic")
55
- usage.last_day("google")
56
- ```
57
-
58
- ### Example `usage` object
59
-
60
- ```python
61
- print(cost.last_hour().model_dump_json(indent=4))
62
- ```
63
-
64
- ```json
65
- {
66
- "total_cost": 0.0004,
67
- "total_tokens": 79,
68
- "prompt_tokens": 52,
69
- "completion_tokens": 27,
70
- "providers": [
71
- {
72
- "total_cost": 0.0004,
73
- "total_tokens": 79,
74
- "prompt_tokens": 52,
75
- "completion_tokens": 27,
76
- "provider": "openai",
77
- "models": [
78
- {
79
- "total_cost": 0.0004,
80
- "total_tokens": 79,
81
- "prompt_tokens": 52,
82
- "completion_tokens": 27,
83
- "model": "gpt-4o-2024-08-06"
84
- }
85
- ]
86
- }
87
- ]
88
- }
89
- ```
90
-
91
- ## Features
92
-
93
- - Drop-in replacement for OpenAI, Anthropic client
94
- - Automatic token usage tracking
95
- - Cost analysis for different time periods
96
- - SQLite storage with zero configuration
97
- - Thread-safe operations
98
- - Minimal memory footprint
99
- - Minimal latency footprint
100
-
101
- Most importantly, none of your data is ever sent to any server.
102
-
103
- ## License
104
-
105
- MIT
File without changes