tokenr-ruby 0.1.4 → 0.1.5
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/README.md +36 -1
- data/lib/tokenr/version.rb +1 -1
- metadata +1 -1
checksums.yaml
CHANGED

@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 011b6bda965d1ebb0baa5949398253f2e010b86bd8301b3166745494c254394e
+  data.tar.gz: ecd282c0a1e3ede564bef3b1ff90240b530f36c59222e3c11a56e5e8d2b7a2d1
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 78b3feeecae3404792b229e412c66bae6530d4bf489232affc33cfd8616bd22cb02f2c764bac9a0e01cd5ff8b52deeca8c7a3531cf30f6e4c5b7aaf953e35586
+  data.tar.gz: eac13684791536a785e1bc18f426b08a8e7914b7371f3e10849864b78c88d4724660f5cbc1bb5dd9db537e5b85f31ab083355f5bb7abf5ef0e364f9ce1e9d000
data/README.md
CHANGED

@@ -185,12 +185,33 @@ Tokenr.client.get_costs_by_agent(limit: 20)
 Tokenr.client.get_timeseries(interval: "day")
 ```
 
+## Prompt Caching
+
+Both OpenAI and Anthropic support prompt caching, and the SDK handles it automatically.
+
+**OpenAI** includes cached tokens inside `prompt_tokens`. The SDK reads `prompt_tokens_details["cached_tokens"]` and separates them so Tokenr can price each category at the correct rate.
+
+**Anthropic** reports cache tokens as separate fields (`cache_creation_input_tokens` and `cache_read_input_tokens`). The SDK passes these through directly.
+
+For manual tracking, you can pass cache tokens explicitly:
+
+```ruby
+Tokenr.track(
+  provider: "anthropic",
+  model: "claude-sonnet-4-20250514",
+  input_tokens: 500,
+  output_tokens: 200,
+  cache_read_tokens: 8000,
+  cache_write_tokens: 2000,
+)
+```
+
 
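The OpenAI-side normalization described in the added section can be sketched in plain Ruby. This is an illustration only, not the SDK's actual internals: `split_cached_tokens` is a hypothetical helper, and the hash mirrors the shape of OpenAI's `usage` field.

```ruby
# Hypothetical helper: separates cached tokens from an OpenAI-style usage
# payload so cached and uncached input can be priced at different rates.
def split_cached_tokens(usage)
  prompt = usage["prompt_tokens"] || 0
  cached = usage.dig("prompt_tokens_details", "cached_tokens") || 0
  {
    input_tokens: prompt - cached,             # uncached portion of the prompt
    cache_read_tokens: cached,                 # tokens served from the cache
    output_tokens: usage["completion_tokens"] || 0,
  }
end

usage = {
  "prompt_tokens" => 1200,
  "completion_tokens" => 80,
  "prompt_tokens_details" => { "cached_tokens" => 1024 },
}

split_cached_tokens(usage)
# input_tokens: 176, cache_read_tokens: 1024, output_tokens: 80
```

Anthropic responses need no such split, since cache counts already arrive as separate fields.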
 ## How It Works
 
 1. `Tokenr::Integrations::OpenAI.wrap(client)` returns a thin wrapper around your existing client
 2. After each call the wrapper reads token counts from the response `usage` field
 3. Events are pushed onto an in-process queue and flushed to Tokenr in the background
-4. If tracking fails for any reason, the exception is swallowed
+4. If tracking fails for any reason, the exception is swallowed and your app is unaffected
 5. On process exit, `at_exit` flushes any remaining queued events
 
 ## Supported Providers
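The wrap-and-observe flow in steps 1–4 above can be sketched roughly as follows. Names like `TrackingWrapper` are illustrative, not the SDK's real classes, and the stub client stands in for a real provider client.

```ruby
# Illustrative delegator: forwards every call to the wrapped client, reads
# the response's usage field afterwards, and swallows tracking errors so
# they never reach the host application.
class TrackingWrapper
  def initialize(client, &on_usage)
    @client = client
    @on_usage = on_usage
  end

  def method_missing(name, *args, **kwargs, &block)
    response = @client.public_send(name, *args, **kwargs, &block)
    begin
      usage = response["usage"] if response.respond_to?(:[])
      @on_usage&.call(usage) if usage
    rescue StandardError
      # Step 4: tracking failures are swallowed; the app is unaffected.
    end
    response
  end

  def respond_to_missing?(name, include_private = false)
    @client.respond_to?(name, include_private) || super
  end
end

# Stub client standing in for a real OpenAI client.
fake_client = Object.new
def fake_client.chat(**_kwargs)
  { "usage" => { "prompt_tokens" => 10, "completion_tokens" => 3 } }
end

tracked = []
wrapped = TrackingWrapper.new(fake_client) { |u| tracked << u }
wrapped.chat(model: "gpt-4o-mini")
tracked.first["prompt_tokens"]  # => 10
```

Because the wrapper only observes the response after it is returned, the caller still receives the provider's response object unchanged.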
@@ -212,6 +233,20 @@ Tokenr.client.get_timeseries(interval: "day")
 export TOKENR_TOKEN="your-token-here"
 ```
 
+## Running Tests
+
+```bash
+# Unit and mock tests (no API keys needed)
+bundle exec rspec
+
+# Live integration tests (makes real API calls, costs fractions of a cent)
+OPENAI_API_KEY=sk-... ANTHROPIC_API_KEY=sk-ant-... bundle exec rspec spec/live_integration_spec.rb -fd
+```
+
+The live tests make a real call to each provider, then verify that the token counts in the Tokenr payload match what the provider actually returned. This includes a test that triggers Anthropic prompt caching and confirms cache tokens are extracted correctly.
+
+Note: the live tests require the `ruby-openai` and `anthropic` gems to be installed. They are not in the Gemfile since they are optional runtime dependencies.
+
 ## Security
 
 This SDK is open source so you can audit exactly what data is sent and when. The short version:
data/lib/tokenr/version.rb
CHANGED