@x12i/funcx 3.1.7 → 3.1.8
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +15 -1
- package/package.json +2 -2
package/README.md
CHANGED

@@ -358,7 +358,7 @@ const res = await ai.ask("Write a product tagline.", {
   maxTokens: 200,
   temperature: 0.7,
 });
-// res.text, res.usage, res.model
+// res.text, res.usage, res.model, res.routing?, res.cost?, res.timing?
 
 // Or use mode and let the client resolve model from config/env/preset:
 await ai.ask("...", { mode: "strong", maxTokens: 500, temperature: 0.7 });
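The mode-based resolution shown above (explicit model wins, otherwise `mode` picks from config or env) can be sketched in plain TypeScript. This is illustrative only: `resolveModel` is a hypothetical helper, not an export of `@x12i/funcx` — the real client performs this lookup internally.

```typescript
// Hypothetical sketch of the resolution order the README describes:
// explicit model > createClient({ models }) config > LLM_MODEL_* env vars.
type Mode = "normal" | "strong";

function resolveModel(
  opts: { model?: string; mode?: Mode },
  models: Partial<Record<Mode, string>> = {},
  env: Record<string, string | undefined> = {},
): string | undefined {
  if (opts.model) return opts.model; // an explicit model always wins
  const mode = opts.mode ?? "normal"; // default mode when none is given
  return (
    models[mode] ?? // createClient({ models: { normal, strong } })
    env[mode === "strong" ? "LLM_MODEL_STRONG" : "LLM_MODEL_NORMAL"]
  );
}

// ask("...", { mode: "strong" }) would resolve to the configured strong model:
console.log(resolveModel({ mode: "strong" }, { strong: "big-model" })); // → "big-model"
```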
@@ -366,6 +366,20 @@ await ai.ask("...", { mode: "strong", maxTokens: 500, temperature: 0.7 });
 
 You can set the strong/normal model once via env (`LLM_MODEL_STRONG`, `LLM_MODEL_NORMAL`) or `createClient({ models: { normal, strong } })`; then `ask(..., { mode: "strong" })` uses that model without passing it every time.
 
+### Trace / diagnostics metadata (authoritative when available)
+
+`client.ask()` returns a stable, additive metadata envelope for downstream trace-mode consumers:
+
+- **`res.usage`**: provider token usage when available, plus funcx echoes **`usage.maxTokensRequested`** (the cap you passed in `AskOptions.maxTokens`). For convenience, `usage.tokensPrompt|tokensCompletion|tokensTotal` mirror `prompt_tokens|completion_tokens|total_tokens`.
+- **`res.routing`**: stable provider label + request-id bag for correlation (keys are additive; existing keys won’t be renamed).
+- **`res.cost`**: `costUsd` when the backend/provider returns a reliable USD cost.
+- **`res.timing`**: ISO timestamps and durations when tracked by the backend.
+
+Notes:
+
+- These fields are **optional** and are populated only when the backend can extract them from response bodies/headers. (OpenRouter provides exact cost via `usage.cost` and may provide request IDs via headers.)
+- The `raw` response remains **opt-in**; result payloads stay small by default.
+
 ### Backends
 
 ```ts
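Since every field in the metadata envelope documented in the README additions above is optional, downstream trace consumers should read it defensively. A minimal sketch, assuming hypothetical shapes — the `AskResult` interface and `traceLine` helper below are illustrative, not the package's exported types:

```typescript
// Assumed shape of the envelope described in the README; all metadata
// fields are optional and may be absent depending on the backend.
interface AskResult {
  text: string;
  model?: string;
  usage?: {
    tokensPrompt?: number;
    tokensCompletion?: number;
    tokensTotal?: number;
    maxTokensRequested?: number;
  };
  cost?: { costUsd?: number };
  routing?: Record<string, string>; // provider label + request-id bag
}

// Defensive consumer: every field is read with optional chaining
// and a fallback, so missing metadata never throws.
function traceLine(res: AskResult): string {
  const tokens = res.usage?.tokensTotal ?? 0;
  const cost =
    res.cost?.costUsd !== undefined ? `$${res.cost.costUsd.toFixed(4)}` : "n/a";
  return `${res.model ?? "unknown-model"} tokens=${tokens} cost=${cost}`;
}

// Hand-built sample result (no network call):
const sample: AskResult = {
  text: "A tagline.",
  model: "gpt-x",
  usage: { tokensPrompt: 12, tokensCompletion: 8, tokensTotal: 20, maxTokensRequested: 200 },
  cost: { costUsd: 0.0012 },
};
console.log(traceLine(sample)); // → "gpt-x tokens=20 cost=$0.0012"
```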
package/package.json
CHANGED

@@ -1,6 +1,6 @@
 {
   "name": "@x12i/funcx",
-  "version": "3.1.7",
+  "version": "3.1.8",
   "description": "FuncX: LLM function runtime with Catalox-backed content catalog.",
   "type": "module",
   "publishConfig": {

@@ -108,7 +108,7 @@
   },
   "dependencies": {
     "@x12i/env": "^4.0.1",
-    "@x12i/logxer": "^4.
+    "@x12i/logxer": "^4.3.5",
     "ajv": "^8.17.0",
     "firebase-admin": "^13.8.0",
     "secure-json-parse": "^4.1.0",