@oh-my-pi/pi-ai 13.9.2 → 13.9.3
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +84 -6
- package/package.json +2 -2
- package/src/auth-storage.ts +587 -249
- package/src/index.ts +2 -2
- package/src/model-manager.ts +5 -4
- package/src/model-thinking.ts +530 -0
- package/src/models.json +5934 -887
- package/src/models.ts +2 -17
- package/src/provider-models/descriptors.ts +16 -6
- package/src/provider-models/index.ts +0 -1
- package/src/provider-models/openai-compat.ts +108 -25
- package/src/providers/amazon-bedrock.ts +30 -69
- package/src/providers/anthropic.ts +9 -41
- package/src/providers/azure-openai-responses.ts +5 -0
- package/src/providers/gitlab-duo.ts +1 -4
- package/src/providers/google-vertex.ts +4 -4
- package/src/providers/google.ts +4 -4
- package/src/providers/kimi.ts +2 -2
- package/src/providers/openai-codex/constants.ts +17 -1
- package/src/providers/openai-codex/request-transformer.ts +10 -25
- package/src/providers/openai-codex-responses.ts +186 -58
- package/src/providers/openai-completions.ts +10 -2
- package/src/providers/openai-responses.ts +96 -38
- package/src/providers/synthetic.ts +2 -2
- package/src/providers/transform-messages.ts +1 -1
- package/src/stream.ts +41 -129
- package/src/types.ts +45 -4
- package/src/usage/claude.ts +10 -86
- package/src/{providers/google-gemini-cli-usage.ts → usage/gemini.ts} +5 -19
- package/src/usage/github-copilot.ts +7 -42
- package/src/usage/google-antigravity.ts +4 -22
- package/src/usage/kimi.ts +12 -54
- package/src/usage/openai-codex.ts +8 -65
- package/src/usage/zai.ts +14 -47
- package/src/usage.ts +0 -18
- package/src/utils/oauth/alibaba-coding-plan.ts +59 -0
- package/src/utils/oauth/index.ts +17 -3
- package/src/utils/oauth/openai-codex.ts +14 -4
- package/src/utils/oauth/opencode.ts +0 -1
- package/src/utils/oauth/types.ts +3 -1
- package/src/provider-models/model-policies.ts +0 -94
- package/src/thinking.ts +0 -85
package/CHANGELOG.md
CHANGED
```diff
@@ -2,6 +2,82 @@
 
 ## [Unreleased]
 
+## [13.9.3] - 2026-03-07
+### Breaking Changes
+
+- Changed `reasoning` parameter from `ThinkingLevel | undefined` to `Effort | undefined` in `SimpleStreamOptions`; 'off' is no longer valid (omit the field instead)
+- Removed `supportsXhigh()` function; check `model.thinking?.maxLevel` instead
+- Removed `ThinkingLevel` and `ThinkingEffort` types; use `Effort` enum
+- Removed `getAvailableThinkingLevels()` and `getAvailableThinkingEfforts()` functions
+- Changed `transformRequestBody()` signature to require `Model` parameter as second argument for effort validation
+- Removed `thinking.ts` module export; import from `model-thinking.ts` instead
+
+### Added
+
+- Added `incremental` flag to `OpenAIResponsesHistoryPayload` to support building conversation history from multiple assistant messages instead of replacing it
+- Added `dt` flag to `OpenAIResponsesHistoryPayload` for transport-level metadata
+- Added `ThinkingConfig` interface to models for canonical thinking transport metadata with min/max effort levels and provider-specific mode
+- Added `thinking` field to `Model` type containing per-model thinking capabilities used to clamp and map user-facing effort levels
+- Added `Effort` enum (minimal, low, medium, high, xhigh) as canonical user-facing thinking levels replacing `ThinkingLevel`
+- Added `enrichModelThinking()` function to automatically populate thinking metadata on models based on their capabilities
+- Added `mapEffortToAnthropicAdaptiveEffort()` function to map user effort levels to Anthropic adaptive thinking effort
+- Added `mapEffortToGoogleThinkingLevel()` function to map user effort levels to Google thinking levels
+- Added `requireSupportedEffort()` function to validate and clamp effort levels per model, throwing errors for unsupported combinations
+- Added `clampThinkingLevelForModel()` function to clamp thinking levels to model-supported range
+- Added `applyGeneratedModelPolicies()` and `linkSparkPromotionTargets()` exports from model-thinking module
+- Added `serviceTier` option to control OpenAI processing priority and cost (auto, default, flex, scale, priority)
+- Added `providerPayload` field to messages and responses for reconstructing transport-native history
+- Added Gemini usage provider for tracking quota and tier information
+- Added `getCodexAccountId()` utility to extract account ID from Codex JWT tokens
+- Added email extraction from OpenAI Codex OAuth tokens for credential deduplication
+
+### Changed
+
+- Changed credential disabling mechanism from boolean `disabled` flag to `disabled_cause` text field for tracking why credentials were disabled
+- Changed `deleteAuthCredential()` and `deleteAuthCredentialsForProvider()` methods to require a `disabledCause` parameter explaining the reason for disabling
+- Changed Gemini model parsing to strip `-preview` suffix for consistent model identification
+- Changed OpenAI Codex websocket error handling to detect fatal connection errors and immediately fall back to SSE without retrying
+- Changed OpenAI Codex to always use websockets v2 protocol (removed v1 support)
+- Changed `reasoning` parameter type from `ThinkingLevel` to `Effort` in `SimpleStreamOptions`, removing 'off' value (callers should omit the field instead)
+- Changed thinking configuration to use model-specific metadata instead of hardcoded provider logic for effort mapping
+- Changed OpenAI Codex request transformer to accept `Model` parameter for effort validation instead of string model ID
+- Changed Anthropic provider to use model thinking metadata for determining adaptive thinking support instead of model ID pattern matching
+- Changed Google Vertex and Google providers to use shorter variable names for thinking config construction
+- Moved thinking-related utilities from `thinking.ts` to new `model-thinking.ts` module with expanded functionality
+- Moved model policy functions from `provider-models/model-policies.ts` to `model-thinking.ts`
+- Moved `googleGeminiCliUsageProvider` from `providers/google-gemini-cli-usage.ts` to `usage/gemini.ts`
+- Changed default OpenAI model from gpt-5.1-codex to gpt-5.4 across all providers
+- Changed `UsageFetchContext` to remove cache and now() dependencies—usage fetchers now use Date.now() directly
+- Removed `resetInMs` field from usage windows; consumers should calculate from `resetsAt` timestamp
+- Changed OpenAI Codex credential ranking to deduplicate by email when accountId matches
+- Improved OpenAI Codex error handling with retryable error detection
+
+### Removed
+
+- Removed `thinking.ts` module; use `model-thinking.ts` instead
+- Removed `provider-models/model-policies.ts` module; functionality moved to `model-thinking.ts`
+- Removed `supportsXhigh()` function from models.ts; use model.thinking metadata instead
+- Removed `ThinkingLevel` and `ThinkingEffort` types; use `Effort` enum instead
+- Removed `getAvailableThinkingLevels()` and `getAvailableThinkingEfforts()` functions
+- Removed `model-policies` export from `provider-models/index.ts`
+- Removed hardcoded thinking level clamping logic from OpenAI Codex request transformer; now uses model metadata
+- Removed `UsageCache` and `UsageCacheEntry` interfaces—caching is now handled internally by AuthStorage
+- Removed `google-gemini-cli-usage` export; use new `gemini` usage provider instead
+- Removed `resetInMs` computation from all usage providers
+- Removed cache TTL constants and cache management from usage fetchers (claude, github-copilot, google-antigravity, kimi, openai-codex, zai)
+
+### Fixed
+
+- Fixed credential purging to respect disabled credentials when deduplicating by email, preventing re-enablement of intentionally disabled credentials
+- Fixed OpenAI Codex websocket error reporting to include detailed error messages from error events
+- Fixed conversation history reconstruction to support incremental updates from multiple assistant messages while maintaining backward compatibility with full-snapshot payloads
+- Fixed OpenAI Codex to reject unsupported effort levels instead of silently clamping them, providing clear error messages about supported efforts
+- Fixed model cache normalization to properly apply thinking enrichment when loading cached models
+- Fixed dynamic model merging to apply thinking enrichment to merged model results
+- Fixed OpenAI Codex streaming to properly include service_tier in SSE payloads
+- Fixed type safety in OpenAI responses by removing unsafe type casts on image content blocks
+- Fixed credential purging to respect disabled credentials when deduplicating by email
+
 ## [13.9.2] - 2026-03-05
 
 ### Added
```
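The `reasoning` breaking change in 13.9.3 means callers that previously passed `'off'` should now omit the field entirely. A minimal migration sketch, with type shapes inferred from the changelog (the actual definitions live in `@oh-my-pi/pi-ai` and may differ):

```typescript
// Hypothetical shapes mirroring the changelog entries, not the package's
// real exports: Effort has no "off" member in 13.9.3.
type Effort = "minimal" | "low" | "medium" | "high" | "xhigh";
type LegacyThinkingLevel = Effort | "off";

interface SimpleStreamOptions {
  model: string;
  // 13.9.3: Effort | undefined — "off" is no longer a valid value.
  reasoning?: Effort;
}

// Migration helper: drop "off" instead of passing it through,
// so disabling reasoning becomes "omit the field".
function toReasoning(level: LegacyThinkingLevel | undefined): Effort | undefined {
  return level === undefined || level === "off" ? undefined : level;
}

const opts: SimpleStreamOptions = {
  model: "gpt-5.4",
  reasoning: toReasoning("off"), // resolves to undefined, i.e. no reasoning
};
```

Callers that already pass a concrete effort level (`"medium"`, `"high"`, etc.) need no change.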
```diff
@@ -19,31 +95,33 @@
 - Fixed Unicode normalization to consistently apply `toWellFormed()` to all text content, including thinking blocks, ensuring proper handling of malformed UTF-16 sequences
 
 ## [13.9.1] - 2026-03-05
+
 ### Breaking Changes
 
 - Removed `THINKING_LEVELS`, `ALL_THINKING_LEVELS`, `ALL_THINKING_MODES`, `THINKING_MODE_DESCRIPTIONS`, and `THINKING_MODE_LABELS` exports
 - Renamed `formatThinking()` to `getThinkingMetadata()` with changed return type from string to `ThinkingMetadata` object
 - Renamed `getAvailableThinkingLevel()` to `getAvailableThinkingLevels()` and added default parameter
-- Renamed `
+- Renamed `getAvailableEffort()` to `getAvailableEfforts()` and added default parameter
 
 ### Added
 
 - Added `ThinkingMetadata` type to provide structured access to thinking mode information (value, label, description)
 
 ## [13.9.0] - 2026-03-05
+
 ### Added
 
-- Exported new thinking module with `
-- Added `
-- Added `
+- Exported new thinking module with `Effort`, `ThinkingLevel`, and `ThinkingMode` types for managing reasoning effort levels
+- Added `getAvailableEffort()` function to determine supported thinking effort levels based on model capabilities
+- Added `parseEffort()`, `parseThinkingLevel()`, and `parseThinkingMode()` functions for parsing thinking configuration strings
 - Added `THINKING_LEVELS`, `ALL_THINKING_LEVELS`, and `ALL_THINKING_MODES` constants for iterating over available thinking options
 - Added `THINKING_MODE_DESCRIPTIONS` and `THINKING_MODE_LABELS` for displaying thinking modes in user interfaces
 - Added `formatThinking()` function to format thinking modes as compact display labels
 
 ### Changed
 
-- Refactored thinking level handling to distinguish between `
-- Updated `ThinkingBudgets` type to use `
+- Refactored thinking level handling to distinguish between `Effort` (provider-level, no "off") and `ThinkingLevel` (user-facing, includes "off")
+- Updated `ThinkingBudgets` type to use `Effort` instead of `ThinkingLevel` for more precise token budget configuration
 - Improved reasoning option handling to explicitly support "off" value for disabling reasoning across all providers
 - Simplified thinking effort mapping logic by centralizing provider-specific clamping behavior
 
```
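The changelog removes `resetInMs` from usage windows and tells consumers to derive it from `resetsAt`. A sketch of that derivation, assuming `resetsAt` is an epoch-millisecond timestamp (the field name comes from the changelog; its exact encoding and the surrounding window shape are assumptions):

```typescript
// Hypothetical usage-window shape; only resetsAt is named in the changelog.
interface UsageWindow {
  used: number;
  limit: number;
  resetsAt: number; // assumed: Unix epoch milliseconds
}

// Replacement for the removed resetInMs field: compute the remaining
// time on demand, clamping to zero once the window has already reset.
function resetInMs(win: UsageWindow, now: number = Date.now()): number {
  return Math.max(0, win.resetsAt - now);
}
```

Computing this at the call site matches the 13.9.3 direction of dropping `now()` injection from `UsageFetchContext` in favor of `Date.now()`.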
package/package.json
CHANGED
```diff
@@ -1,7 +1,7 @@
 {
   "type": "module",
   "name": "@oh-my-pi/pi-ai",
-  "version": "13.9.2",
+  "version": "13.9.3",
   "description": "Unified LLM API with automatic model discovery and provider configuration",
   "homepage": "https://github.com/can1357/oh-my-pi",
   "author": "Can Boluk",
@@ -41,7 +41,7 @@
     "@aws-sdk/client-bedrock-runtime": "^3",
     "@bufbuild/protobuf": "^2.11",
     "@google/genai": "^1.43",
-    "@oh-my-pi/pi-utils": "13.9.
+    "@oh-my-pi/pi-utils": "13.9.3",
     "@sinclair/typebox": "^0.34",
     "@smithy/node-http-handler": "^4.4",
     "ajv": "^8.18",
```
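Several 13.9.3 entries revolve around clamping effort levels into a model's supported range (`clampThinkingLevelForModel()`, the `model.thinking` metadata with min/max levels). The idea can be pictured with a small sketch; the level ordering and field names here are assumptions for illustration, not the package's actual implementation:

```typescript
type Effort = "minimal" | "low" | "medium" | "high" | "xhigh";

// Assumed ordering from weakest to strongest effort.
const ORDER: Effort[] = ["minimal", "low", "medium", "high", "xhigh"];

// Hypothetical stand-in for the min/max portion of the model.thinking metadata.
interface ModelThinking {
  minLevel: Effort;
  maxLevel: Effort;
}

// Clamp a requested effort into the model's supported [minLevel, maxLevel] range.
function clampEffort(requested: Effort, thinking: ModelThinking): Effort {
  const i = ORDER.indexOf(requested);
  const lo = ORDER.indexOf(thinking.minLevel);
  const hi = ORDER.indexOf(thinking.maxLevel);
  return ORDER[Math.min(Math.max(i, lo), hi)];
}
```

Note that per the Fixed section, OpenAI Codex now rejects unsupported efforts with an error rather than silently clamping like this; the clamping path applies where the changelog says levels are clamped to the model-supported range.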