mohdel 0.104.0 → 0.104.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/js/core/events.js CHANGED
@@ -62,14 +62,12 @@
  * @property {number} outputTokens
  * @property {number} thinkingTokens
  * @property {number} [cacheWriteInputTokens]
- * Tokens written to a fresh prompt cache breakpoint, billed at
- * `cacheWritePrice` (typically 1.25× input on Anthropic). Absent on
- * providers that don't surface this counter (OpenAI doesn't separately
- * bill cache writes).
+ * Input tokens written to a fresh prompt cache breakpoint, billed at
+ * `cacheWritePrice`. Absent when the provider has no separate
+ * cache-write counter.
  * @property {number} [cacheReadInputTokens]
- * Tokens served from prompt cache, billed at `cacheReadPrice` (typically
- * 0.1× input). Set by Anthropic directly and by OpenAI-shape adapters
- * after subset→additive normalization of `prompt_tokens_details.cached_tokens`.
+ * Input tokens served from prompt cache, billed at `cacheReadPrice`.
+ * Absent when the provider has no prompt caching.
  * @property {number} cost
  * USD, computed from curated pricing. Single number (not a breakdown).
  * @property {Timestamps} timestamps
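The removed docblock text above describes a subset→additive normalization: OpenAI reports `prompt_tokens_details.cached_tokens` as a subset of `prompt_tokens`, so adapters subtract it before emitting `inputTokens`. A minimal sketch of that normalization, assuming a standard OpenAI-shape usage object (this is illustrative, not mohdel's actual adapter code; the function name `normalizeOpenAIUsage` is hypothetical):

```javascript
// Hypothetical sketch, NOT mohdel's real adapter: turn an OpenAI-shape
// usage object, where `cached_tokens` is a SUBSET of `prompt_tokens`,
// into the additive shape the docblock describes.
function normalizeOpenAIUsage(usage) {
  const cached = usage.prompt_tokens_details?.cached_tokens ?? 0;
  return {
    // Subtract so cached tokens are not counted twice.
    inputTokens: usage.prompt_tokens - cached,
    outputTokens: usage.completion_tokens,
    thinkingTokens: usage.completion_tokens_details?.reasoning_tokens ?? 0,
    // Field stays absent when the provider reported no cache reads.
    ...(cached > 0 ? { cacheReadInputTokens: cached } : {}),
  };
}

const normalized = normalizeOpenAIUsage({
  prompt_tokens: 1000,
  completion_tokens: 50,
  prompt_tokens_details: { cached_tokens: 800 },
});
// normalized.inputTokens === 200, normalized.cacheReadInputTokens === 800
```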
@@ -29,14 +29,10 @@ import { getSpec, setCatalog } from './_catalog.js'
  * - `cacheWritePrice` → `inputPrice` (graceful for non-caching providers)
  * - `cacheReadPrice` → `inputPrice`
  *
- * Token-counting conventions:
- * - Anthropic reports `cache_creation_input_tokens` and `cache_read_input_tokens`
- *   as ADDITIONAL to `input_tokens` (separately billable). The adapter
- *   surfaces them as `cacheWriteInputTokens` / `cacheReadInputTokens`
- *   (write/read pair, matching catalog `cacheWritePrice`/`cacheReadPrice`).
- * - OpenAI reports `prompt_tokens_details.cached_tokens` as a SUBSET of
- *   `prompt_tokens` (already counted). Adapters subtract before passing
- *   `inputTokens` to keep this function additive across providers.
+ * Token-counting convention: this function is purely additive across
+ * `inputTokens`, `cacheWriteInputTokens`, `cacheReadInputTokens`,
+ * `outputTokens`, and `thinkingTokens`. Adapters normalize provider-specific
+ * shapes (e.g. subset-of-input vs. additional-to-input) before calling here.
  *
  * @param {any} spec Catalog entry, or `undefined`.
  * @param {{inputTokens?: number, outputTokens?: number, thinkingTokens?: number,
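The additive convention and the price fallbacks in this docblock (`cacheWritePrice` → `inputPrice`, `cacheReadPrice` → `inputPrice`) could be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the package's actual function: the `outputPrice` field, the per-million-token pricing unit, and billing `thinkingTokens` at the output rate are all guesses not confirmed by the diff.

```javascript
// Illustrative sketch only, NOT mohdel's real implementation. Assumes
// catalog prices are USD per million tokens (an assumption) and that
// thinkingTokens bill at outputPrice (also an assumption).
const PER_MILLION = 1e6;

function computeCost(spec, usage) {
  if (!spec) return 0; // no catalog entry → no pricing
  const inputPrice = spec.inputPrice ?? 0;
  const outputPrice = spec.outputPrice ?? 0;
  // Fallbacks from the docblock: missing cache prices degrade to inputPrice,
  // which is graceful for non-caching providers.
  const cacheWritePrice = spec.cacheWritePrice ?? inputPrice;
  const cacheReadPrice = spec.cacheReadPrice ?? inputPrice;
  // Purely additive across all five token counters.
  const usd =
    (usage.inputTokens ?? 0) * inputPrice +
    (usage.cacheWriteInputTokens ?? 0) * cacheWritePrice +
    (usage.cacheReadInputTokens ?? 0) * cacheReadPrice +
    ((usage.outputTokens ?? 0) + (usage.thinkingTokens ?? 0)) * outputPrice;
  return usd / PER_MILLION;
}

// Example: 1000 input tokens at $3/M plus 100 output tokens at $15/M.
const cost = computeCost(
  { inputPrice: 3, outputPrice: 15 },
  { inputTokens: 1000, outputTokens: 100 },
);
// cost === 0.0045
```

Because every counter is additive, a provider-shape quirk such as OpenAI's subset-of-input `cached_tokens` must be resolved by the adapter before this point, exactly as the rewritten docblock states.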
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "mohdel",
-  "version": "0.104.0",
+  "version": "0.104.1",
   "license": "MIT",
   "author": {
     "name": "Christophe Le Bars",
@@ -87,7 +87,7 @@
     "@opentelemetry/exporter-trace-otlp-grpc": "^0.217.0",
     "@opentelemetry/sdk-node": "^0.217.0",
     "chalk": "^5.4.0",
-    "mohdel-thin-gate-linux-x64-gnu": "0.104.0"
+    "mohdel-thin-gate-linux-x64-gnu": "0.104.1"
   },
   "dependencies": {
     "@anthropic-ai/sdk": "^0.95.1",