@mariozechner/pi-ai 0.49.0 → 0.49.2

Files changed (44)
  1. package/README.md +23 -2
  2. package/dist/models.generated.d.ts +0 -17
  3. package/dist/models.generated.d.ts.map +1 -1
  4. package/dist/models.generated.js +2 -19
  5. package/dist/models.generated.js.map +1 -1
  6. package/dist/providers/amazon-bedrock.d.ts.map +1 -1
  7. package/dist/providers/amazon-bedrock.js +9 -7
  8. package/dist/providers/amazon-bedrock.js.map +1 -1
  9. package/dist/providers/anthropic.d.ts.map +1 -1
  10. package/dist/providers/anthropic.js +8 -8
  11. package/dist/providers/anthropic.js.map +1 -1
  12. package/dist/providers/google-gemini-cli.d.ts.map +1 -1
  13. package/dist/providers/google-gemini-cli.js +2 -28
  14. package/dist/providers/google-gemini-cli.js.map +1 -1
  15. package/dist/providers/google-shared.d.ts +1 -1
  16. package/dist/providers/google-shared.d.ts.map +1 -1
  17. package/dist/providers/google-shared.js +8 -3
  18. package/dist/providers/google-shared.js.map +1 -1
  19. package/dist/providers/google-vertex.d.ts.map +1 -1
  20. package/dist/providers/google-vertex.js +1 -0
  21. package/dist/providers/google-vertex.js.map +1 -1
  22. package/dist/providers/google.d.ts.map +1 -1
  23. package/dist/providers/google.js +1 -0
  24. package/dist/providers/google.js.map +1 -1
  25. package/dist/providers/openai-codex-responses.d.ts.map +1 -1
  26. package/dist/providers/openai-codex-responses.js +21 -3
  27. package/dist/providers/openai-codex-responses.js.map +1 -1
  28. package/dist/providers/openai-completions.d.ts.map +1 -1
  29. package/dist/providers/openai-completions.js +17 -7
  30. package/dist/providers/openai-completions.js.map +1 -1
  31. package/dist/providers/openai-responses.d.ts.map +1 -1
  32. package/dist/providers/openai-responses.js +21 -5
  33. package/dist/providers/openai-responses.js.map +1 -1
  34. package/dist/providers/transform-messages.d.ts +7 -2
  35. package/dist/providers/transform-messages.d.ts.map +1 -1
  36. package/dist/providers/transform-messages.js +42 -41
  37. package/dist/providers/transform-messages.js.map +1 -1
  38. package/dist/stream.d.ts.map +1 -1
  39. package/dist/stream.js +7 -1
  40. package/dist/stream.js.map +1 -1
  41. package/dist/types.d.ts +11 -4
  42. package/dist/types.d.ts.map +1 -1
  43. package/dist/types.js.map +1 -1
  44. package/package.json +1 -1
package/README.md CHANGED
@@ -598,6 +598,20 @@ context.messages.push({ role: 'user', content: 'Please continue' });
 const continuation = await complete(model, context);
 ```
 
+### Debugging Provider Payloads
+
+Use the `onPayload` callback to inspect the request payload sent to the provider. This is useful for debugging request formatting issues or provider validation errors.
+
+```typescript
+const response = await complete(model, context, {
+  onPayload: (payload) => {
+    console.log('Provider payload:', JSON.stringify(payload, null, 2));
+  }
+});
+```
+
+The callback is supported by `stream`, `complete`, `streamSimple`, and `completeSimple`.
+
 ## APIs, Models, and Providers
 
 The library implements 4 API interfaces, each with its own streaming function and options:
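The `onPayload` hook added in this release is a plain callback invoked with the request body before it is sent. A minimal self-contained sketch of that pattern (the `fakeComplete` stand-in and its option type are hypothetical, not part of the package):

```typescript
// Shape of a provider request body (assumed for illustration).
type Payload = Record<string, unknown>;

interface CompleteOptions {
  // Called with the fully built payload before it is sent.
  onPayload?: (payload: Payload) => void;
}

// Hypothetical stand-in for a provider call: builds the request payload,
// invokes the hook if present, then returns the payload it would send.
function fakeComplete(prompt: string, options: CompleteOptions = {}): Payload {
  const payload: Payload = {
    model: "example-model",
    messages: [{ role: "user", content: prompt }],
  };
  options.onPayload?.(payload);
  return payload;
}

// Capture every payload the hook observes.
const seen: Payload[] = [];
fakeComplete("hello", { onPayload: (p) => seen.push(p) });
console.log(seen.length); // 1
```

Because the hook fires before the network call, it is a convenient place to log or snapshot payloads when a provider rejects a request with a validation error.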
@@ -703,16 +717,20 @@ const response = await stream(ollamaModel, context, {
 
 ### OpenAI Compatibility Settings
 
-The `openai-completions` API is implemented by many providers with minor differences. By default, the library auto-detects compatibility settings based on `baseUrl` for known providers (Cerebras, xAI, Mistral, Chutes, etc.). For custom proxies or unknown endpoints, you can override these settings via the `compat` field:
+The `openai-completions` API is implemented by many providers with minor differences. By default, the library auto-detects compatibility settings based on `baseUrl` for known providers (Cerebras, xAI, Mistral, Chutes, etc.). For custom proxies or unknown endpoints, you can override these settings via the `compat` field. For `openai-responses` models, the compat field only supports Responses-specific flags.
 
 ```typescript
-interface OpenAICompat {
+interface OpenAICompletionsCompat {
   supportsStore?: boolean; // Whether provider supports the `store` field (default: true)
   supportsDeveloperRole?: boolean; // Whether provider supports `developer` role vs `system` (default: true)
   supportsReasoningEffort?: boolean; // Whether provider supports `reasoning_effort` (default: true)
   maxTokensField?: 'max_completion_tokens' | 'max_tokens'; // Which field name to use (default: max_completion_tokens)
   thinkingFormat?: 'openai' | 'zai'; // Format for reasoning param: 'openai' uses reasoning_effort, 'zai' uses thinking: { type: "enabled" } (default: openai)
 }
+
+interface OpenAIResponsesCompat {
+  // Reserved for future use
+}
 ```
 
 If `compat` is not set, the library falls back to URL-based detection. If `compat` is partially set, unspecified fields use the detected defaults. This is useful for:
@@ -1082,6 +1100,9 @@ Create or update test files to cover the new provider:
 - `tool-call-without-result.test.ts` - Orphaned tool calls
 - `image-tool-result.test.ts` - Images in tool results
 - `total-tokens.test.ts` - Token counting accuracy
+- `cross-provider-handoff.test.ts` - Cross-provider context replay
+
+For `cross-provider-handoff.test.ts`, add at least one provider/model pair. If the provider exposes multiple model families (for example GPT and Claude), add at least one pair per family.
 
 For providers with non-standard auth (AWS, Google Vertex), create a utility like `bedrock-utils.ts` with credential detection helpers.
 
package/dist/models.generated.d.ts CHANGED
@@ -1330,23 +1330,6 @@ export declare const MODELS: {
         contextWindow: number;
         maxTokens: number;
     };
-    readonly "zai-glm-4.6": {
-        id: string;
-        name: string;
-        api: "openai-completions";
-        provider: string;
-        baseUrl: string;
-        reasoning: false;
-        input: "text"[];
-        cost: {
-            input: number;
-            output: number;
-            cacheRead: number;
-            cacheWrite: number;
-        };
-        contextWindow: number;
-        maxTokens: number;
-    };
     readonly "zai-glm-4.7": {
         id: string;
         name: string;