voice-router-dev 0.8.2 → 0.8.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -5,6 +5,258 @@ All notable changes to this project will be documented in this file.
  The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
  and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+ ## [0.8.4] - 2026-04-06
+
+ ### Fixed
+
+ #### AssemblyAI: Migrate `speech_model` → `speech_models` (API Breaking Change)
+
+ AssemblyAI deprecated the singular `speech_model` parameter and now requires `speech_models` (plural, array). The old field is rejected with HTTP 400:
+
+ ```
+ "speech_models" must be a non-empty list containing one or more of: "universal-3-pro", "universal-2"
+ ```
+
+ **What changed:**
+
+ | | Before (0.8.3) | After (0.8.4) |
+ |---|---|---|
+ | **API field** | `speech_model: "best"` (singular, deprecated) | `speech_models: ["universal-3-pro"]` (array, required) |
+ | **Model values** | `best`, `slam-1`, `universal` | `universal-3-pro`, `universal-2` |
+ | **Constants** | `AssemblyAITranscriptionModel.best` | `AssemblyAITranscriptionModel["universal-3-pro"]` |
+
+ **Adapter fix:** `options.model` now maps to `request.speech_models = [model]` instead of `request.speech_model = model`.
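The field rename amounts to a one-line mapping in the adapter. A minimal sketch of that mapping, with hypothetical option/request shapes (the real adapter types are not shown in this diff):

```typescript
// Hypothetical shapes -- the actual adapter's types differ
interface TranscribeOptions { model?: string }
interface TranscriptRequest { speech_model?: string; speech_models?: string[] }

// 0.8.4 behavior: wrap the single configured model in the plural array field
function applyModel(options: TranscribeOptions, request: TranscriptRequest): TranscriptRequest {
  if (options.model) {
    request.speech_models = [options.model] // was: request.speech_model = options.model
  }
  return request
}
```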
+
+ **Generated types** regenerated from AssemblyAI's updated OpenAPI spec (v1.3.4):
+ - `SpeechModel` is now `string` (no enum — AssemblyAI removed the fixed list)
+ - `TranscriptOptionalParams.speech_models` added (required `SpeechModel[]`)
+ - `TranscriptOptionalParams.speech_model` marked `@deprecated`
+ - New response field: `speech_model_used` (reports which model actually ran)
+
+ **Migration:**
+
+ ```typescript
+ // Before
+ import { AssemblyAITranscriptionModel } from 'voice-router-dev/constants'
+ { model: AssemblyAITranscriptionModel.best }
+
+ // After
+ import { AssemblyAITranscriptionModel } from 'voice-router-dev/constants'
+ { model: AssemblyAITranscriptionModel["universal-3-pro"] }
+
+ // Or pass directly via assemblyai-specific options for multi-model routing
+ { assemblyai: { speech_models: ["universal-3-pro", "universal-2"] } }
+ ```
+
+ **Note:** Streaming is unaffected — it still uses the `speech_model` query parameter with streaming-specific model names (`universal-streaming-english`, `universal-streaming-multilingual`).
+
+ #### Unified Error Normalization Across All Providers
+
+ HTTP errors from all 8 providers now return semantic error codes and the actual provider error message instead of axios internals.
+
+ **Before:**
+ ```typescript
+ {
+   code: "ERR_BAD_REQUEST", // axios internal code
+   message: "Request failed with status code 400", // generic axios message
+   statusCode: 400,
+   details: { responseData: { error: "audio_url is required" } } // real message buried
+ }
+ ```
+
+ **After:**
+ ```typescript
+ {
+   code: "INVALID_INPUT", // semantic error code
+   message: "audio_url is required", // actual provider message surfaced
+   statusCode: 400,
+   details: { responseData: { error: "audio_url is required" } } // still preserved
+ }
+ ```
+
+ Error codes are now mapped from HTTP status:
+
+ | HTTP Status | Error Code |
+ |-------------|------------|
+ | 400, 404, 422 | `INVALID_INPUT` |
+ | 401, 403 | `AUTHENTICATION_ERROR` |
+ | 408 | `CONNECTION_TIMEOUT` |
+ | 429 | `RATE_LIMIT` |
+ | 5xx | `SERVER_ERROR` |
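The table above reads as a pure function. A hedged sketch of what `httpStatusToErrorCode()` plausibly does (the real export's signature and its fallback for unmapped statuses are not shown in this changelog; the `PROVIDER_ERROR` fallback here is an assumption):

```typescript
// Sketch of the HTTP-status → semantic-code mapping from the table above.
function httpStatusToErrorCode(status: number): string {
  if (status === 401 || status === 403) return 'AUTHENTICATION_ERROR'
  if (status === 408) return 'CONNECTION_TIMEOUT'
  if (status === 429) return 'RATE_LIMIT'
  if (status === 400 || status === 404 || status === 422) return 'INVALID_INPUT'
  if (status >= 500) return 'SERVER_ERROR'
  return 'PROVIDER_ERROR' // assumed fallback, not confirmed by the changelog
}
```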
+
+ Provider error messages are extracted from each provider's response body shape:
+
+ | Provider | Error body shape | Extracted field |
+ |----------|-----------------|-----------------|
+ | AssemblyAI | `{ error: "string" }` | `error` |
+ | OpenAI | `{ error: { message: "..." } }` | `error.message` |
+ | Gladia, Azure, Soniox, Deepgram | `{ message: "..." }` | `message` |
+ | Speechmatics | `{ error: "string" }` | `error` |
+ | Deepgram legacy | `{ err_msg: "..." }` | `err_msg` |
+ | ElevenLabs | `{ detail: { message: "..." } }` | `detail.message` |
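Those shapes can be probed in order. A sketch of the extraction logic (the real `extractProviderMessage()` implementation and its probe order may differ):

```typescript
// Try each known error-body shape from the table above, most specific string first.
function extractProviderMessage(body: any): string | undefined {
  if (!body || typeof body !== 'object') return undefined
  if (typeof body.error === 'string') return body.error                     // AssemblyAI, Speechmatics
  if (typeof body.error?.message === 'string') return body.error.message    // OpenAI
  if (typeof body.message === 'string') return body.message                 // Gladia, Azure, Soniox, Deepgram
  if (typeof body.err_msg === 'string') return body.err_msg                 // Deepgram legacy
  if (typeof body.detail?.message === 'string') return body.detail.message  // ElevenLabs
  return undefined
}
```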
+
+ **New exports:**
+
+ | Export | Description |
+ |--------|-------------|
+ | `AUTHENTICATION_ERROR` | New error code for 401/403 |
+ | `RATE_LIMIT` | New error code for 429 |
+ | `SERVER_ERROR` | New error code for 5xx |
+ | `httpStatusToErrorCode()` | Map HTTP status → semantic error code |
+ | `extractProviderMessage()` | Extract real error message from any provider's response body |
+
+ **No breaking changes.** The `details` object is unchanged — consumers that already read `details.responseData` continue to work. The `code` field now contains our taxonomy codes instead of axios codes, which is what consumers should have been getting all along. Any adapter that passes an explicit `code` to `createErrorResponse()` (e.g. `TRANSCRIPTION_ERROR` from polling) still takes priority.
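The precedence rule in that last sentence can be sketched as follows (internal names and `createErrorResponse()`'s real signature are assumptions for illustration):

```typescript
interface NormalizedError { code: string; message: string; statusCode: number; details: { responseData?: unknown } }

// Sketch: an explicit adapter-supplied code beats the status-derived one,
// and the surfaced provider message beats the generic axios-style message.
function createErrorResponseSketch(
  statusCode: number,
  statusDerivedCode: string,
  providerMessage: string | undefined,
  responseData?: unknown,
  explicitCode?: string
): NormalizedError {
  return {
    code: explicitCode ?? statusDerivedCode,
    message: providerMessage ?? `Request failed with status code ${statusCode}`,
    statusCode,
    details: { responseData },
  }
}
```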
+
+ ---
+
+ ## [0.8.4] - 2026-03-21
+
+ ### Added
+
+ #### Speechmatics Webhook Callbacks + Polling
+
+ Speechmatics `transcribe()` now supports the same `webhookUrl` pattern as Gladia, AssemblyAI, and Deepgram:
+
+ ```typescript
+ // With webhook: returns immediately, callback delivers result
+ const result = await adapter.transcribe(audio, {
+   language: 'en',
+   webhookUrl: 'https://myapp.com/webhook/speechmatics'
+ })
+ console.log(result.data.id) // Job ID returned immediately
+
+ // Without webhook: polls until complete (new default)
+ const result = await adapter.transcribe(audio, { language: 'en' })
+ console.log(result.data.text) // Full transcript after polling
+ ```
+
+ The webhook URL is wired to Speechmatics' per-job `notification_config` with the `transcript` content type. Without a webhook, `transcribe()` now polls via `pollForCompletion()` instead of returning a queued job ID.
+
+ #### Azure STT Webhook Management + Polling
+
+ Azure uses subscription-wide webhooks (not per-job). New helper methods to manage them:
+
+ ```typescript
+ // Register a webhook for transcription events (one-time setup)
+ const webhook = await adapter.registerWebhook('https://myapp.com/webhook/azure', {
+   displayName: 'My App Webhook',
+   events: {
+     transcriptionCompletion: true,
+     transcriptionFailed: true
+   }
+ })
+
+ // List registered webhooks
+ const webhooks = await adapter.listWebhooks()
+
+ // Unregister a webhook
+ await adapter.unregisterWebhook(webhook.self?.split('/').pop()!)
+ ```
+
+ Azure `transcribe()` now polls via `pollForCompletion()` instead of returning immediately with a queued status.
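The `pollForCompletion()` pattern referenced above has roughly this shape (a generic sketch; the library's actual signature, intervals, and backoff strategy are not shown in this changelog):

```typescript
// Poll a status check until it reports completion or a deadline passes.
async function pollForCompletion<T>(
  check: () => Promise<{ done: boolean; result?: T }>,
  intervalMs = 1000,
  timeoutMs = 60_000
): Promise<T> {
  const deadline = Date.now() + timeoutMs
  while (Date.now() < deadline) {
    const { done, result } = await check()
    if (done) return result as T
    await new Promise((resolve) => setTimeout(resolve, intervalMs))
  }
  throw new Error('Polling timed out')
}
```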
+
+ **New exports:**
+
+ | Export | Description |
+ |--------|-------------|
+ | `webHooksCreate` | Azure API: create subscription-wide webhook |
+ | `webHooksDelete` | Azure API: delete webhook by ID |
+ | `webHooksList` | Azure API: list registered webhooks |
+ | `WebHook` | Azure webhook type |
+ | `WebHookEvents` | Azure webhook event filter type |
+
+ #### Typed Webhook Payloads for Azure & Speechmatics
+
+ `ProviderWebhookPayloadMap` now has concrete types instead of `unknown`:
+
+ | Provider | Before | After |
+ |----------|--------|-------|
+ | `azure-stt` | `unknown` | `AzureWebhookPayload` |
+ | `speechmatics` | `unknown` | `SpeechmaticsWebhookPayload` (`RetrieveTranscriptResponse`) |
+
+ ```typescript
+ import type { UnifiedWebhookEvent } from 'voice-router-dev/webhooks'
+
+ // event.raw is now fully typed
+ const event: UnifiedWebhookEvent<'speechmatics'> = handler.parse(payload)
+ event.raw.results // ✅ Typed as RetrieveTranscriptResponse
+
+ const azureEvent: UnifiedWebhookEvent<'azure-stt'> = handler.parse(payload)
+ azureEvent.raw.action // ✅ Typed as string
+ ```
+
+ **Webhook + polling support summary (updated):**
+
+ | Provider | webhookUrl wired | Auto-poll | API webhook model |
+ |----------|-----------------|-----------|-------------------|
+ | Gladia | ✅ `callback_config.url` | ✅ `pollForCompletion` | Per-job |
+ | AssemblyAI | ✅ `webhook_url` | ✅ `pollForCompletion` | Per-job |
+ | Deepgram | ✅ `params.callback` | N/A (sync) | Per-request |
+ | **Speechmatics** | ✅ `notification_config` | ✅ `pollForCompletion` | Per-job |
+ | **Azure STT** | ✅ `registerWebhook()` | ✅ `pollForCompletion` | Subscription-wide |
+ | ElevenLabs | N/A (sync) | N/A (sync) | N/A |
+ | OpenAI | N/A (sync) | N/A (sync) | N/A |
+
+ ---
+
+ ## [0.8.3] - 2026-03-19
+
+ ### Added
+
+ #### AssemblyAI Regional Endpoints (EU Data Residency)
+
+ Region support for AssemblyAI, matching the pattern used by Deepgram, Speechmatics, and Soniox:
+
+ ```typescript
+ import { createAssemblyAIAdapter, AssemblyAIRegion } from 'voice-router-dev'
+
+ const adapter = createAssemblyAIAdapter({
+   apiKey: process.env.ASSEMBLYAI_API_KEY,
+   region: AssemblyAIRegion.eu // All data stays in the EU
+ })
+
+ // Dynamic region switching
+ adapter.setRegion(AssemblyAIRegion.us)
+ console.log(adapter.getRegion())
+ // { api: "https://api.assemblyai.com", websocket: "wss://streaming.assemblyai.com/v3/ws" }
+ ```
+
+ | Region | REST API | Streaming |
+ |--------|----------|-----------|
+ | `us` (default) | api.assemblyai.com | streaming.assemblyai.com |
+ | `eu` | api.eu.assemblyai.com | streaming.eu.assemblyai.com |
+
+ **New exports:**
+
+ | Export | Entry Point |
+ |--------|-------------|
+ | `AssemblyAIRegion` | `voice-router-dev/constants` |
+ | `AssemblyAIRegionType` | `voice-router-dev/constants` |
+ | `AssemblyAIConfig` | `voice-router-dev` |
+
+ **Priority:** `baseUrl`/`wsBaseUrl` > `region` > default (US)
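That priority chain can be sketched as a small resolver (hypothetical; only the US websocket URL and both REST hosts appear in this changelog, so the EU websocket path is an assumption mirroring the US one):

```typescript
// Endpoint table from the changelog; the EU websocket path is assumed, not confirmed.
const ASSEMBLYAI_ENDPOINTS = {
  us: { api: 'https://api.assemblyai.com', websocket: 'wss://streaming.assemblyai.com/v3/ws' },
  eu: { api: 'https://api.eu.assemblyai.com', websocket: 'wss://streaming.eu.assemblyai.com/v3/ws' },
} as const

// Priority: explicit baseUrl > region > default (US)
function resolveApiBase(config: { baseUrl?: string; region?: keyof typeof ASSEMBLYAI_ENDPOINTS }): string {
  if (config.baseUrl) return config.baseUrl
  if (config.region) return ASSEMBLYAI_ENDPOINTS[config.region].api
  return ASSEMBLYAI_ENDPOINTS.us.api
}
```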
+
+ **Region support summary (updated):**
+
+ | Provider | Regions | Dynamic Switch |
+ |----------|---------|----------------|
+ | **Deepgram** | `global`, `eu` | `setRegion()` |
+ | **AssemblyAI** | `us`, `eu` | `setRegion()` |
+ | **Speechmatics** | `eu1`, `eu2`\*, `us1`, `us2`\*, `au1` | `setRegion()` |
+ | **Soniox** | `us`, `eu`, `jp` | `setRegion()` |
+ | **Gladia** | `us-west`, `eu-west` | Per-request |
+ | **ElevenLabs** | `global`, `us`, `eu`, `in` | Adapter init |
+ | **Azure** | Via `speechConfig` | Reinitialize |
+ | **OpenAI** | N/A | N/A |
+
+ \* Enterprise only
+
+ ### Fixed
+
+ - Suppress `noShadowRestrictedNames` biome errors during orval generation of `src/generated/`
+
+ ---
+
  ## [0.8.2] - 2026-03-15
 
  ### Fixed
@@ -331,7 +331,7 @@ declare const DeepgramArchitectureLanguages: {
  readonly base: readonly ["bg", "ca", "cs", "da", "de", "de-AT", "de-CH", "de-DE", "el", "en", "en-AU", "en-GB", "en-IN", "en-NZ", "en-US", "es", "es-419", "es-AR", "es-ES", "es-LATAM", "es-MX", "es-US", "et", "fi", "fr", "fr-BE", "fr-ca", "fr-CA", "fr-CH", "fr-FR", "hi", "hi-Latn", "hu", "id", "id-ID", "it", "ja", "ko", "lt", "lv", "ms", "ms-MY", "ms-SG", "nl", "no", "pl", "pt", "pt-BR", "pt-PT", "ro", "ro-MD", "ru", "sk", "sv", "ta", "taq", "th", "th-TH", "tr", "uk", "vi", "zh", "zh-CN", "zh-Hans", "zh-Hant", "zh-HK", "zh-TW"];
  readonly nova: readonly ["en", "en-AU", "en-GB", "en-IN", "en-NZ", "en-US", "es", "es-419", "es-LATAM", "hi-Latn"];
  readonly "nova-2": readonly ["bg", "ca", "cs", "da", "da-DK", "de", "de-AT", "de-CH", "de-DE", "el", "en", "en-AU", "en-CA", "en-GB", "en-IE", "en-IN", "en-MY", "en-NZ", "en-PH", "en-US", "en-ZA", "es", "es-419", "es-AR", "es-ES", "es-MX", "es-US", "et", "fi", "fr", "fr-BE", "fr-CA", "fr-CH", "fr-FR", "hi", "hi-Latn", "hu", "id", "it", "it-IT", "ja", "ja-JP", "ko", "ko-KR", "lt", "lv", "ms", "ms-MY", "multi", "nl", "nl-BE", "nl-NL", "no", "no-NO", "pl", "pl-PL", "pt", "pt-BR", "pt-PT", "ro", "ru", "ru-RU", "sk", "sv", "sv-SE", "th", "th-TH", "tr", "tr-TR", "uk", "vi", "zh", "zh-CN", "zh-Hans", "zh-Hant", "zh-HK", "zh-TW"];
- readonly "nova-3": readonly ["ar", "ar-AE", "ar-DZ", "ar-EG", "ar-IQ", "ar-IR", "ar-JO", "ar-KW", "ar-LB", "ar-MA", "ar-PS", "ar-QA", "ar-SA", "ar-SD", "ar-SY", "ar-TD", "ar-TN", "be", "be-BY", "bg", "bn", "bn-IN", "bs", "bs-BA", "ca", "cs", "da", "da-DK", "de", "de-AT", "de-CH", "de-DE", "el", "en", "en-AU", "en-CA", "en-GB", "en-IE", "en-IN", "en-NZ", "en-US", "es", "es-419", "es-AR", "es-ES", "es-MX", "es-US", "et", "fa", "fi", "fr", "fr-BE", "fr-CA", "fr-CH", "fr-FR", "he", "hi", "hr", "hr-HR", "hu", "id", "id-ID", "it", "it-IT", "ja", "ja-JP", "kn", "kn-IN", "ko", "ko-KR", "lt", "lv", "mk", "mk-MK", "mr", "mr-IN", "ms", "multi", "nl", "nl-BE", "nl-NL", "no", "no-NO", "pl", "pl-PL", "pt", "pt-BR", "pt-PT", "ro", "ru", "ru-Latn", "ru-RU", "sk", "sl", "sl-SL", "sr", "sr-RS", "sv", "sv-SE", "ta", "ta-IN", "te", "te-IN", "th", "th-TH", "tl", "tr", "tr-TR", "uk", "ur", "vi", "zh-HK"];
+ readonly "nova-3": readonly ["ar", "ar-AE", "ar-DZ", "ar-EG", "ar-IQ", "ar-IR", "ar-JO", "ar-KW", "ar-LB", "ar-MA", "ar-PS", "ar-QA", "ar-SA", "ar-SD", "ar-SY", "ar-TD", "ar-TN", "be", "be-BY", "bg", "bn", "bn-IN", "bs", "bs-BA", "ca", "cs", "da", "da-DK", "de", "de-AT", "de-CH", "de-DE", "el", "en", "en-AU", "en-CA", "en-GB", "en-IE", "en-IN", "en-NZ", "en-US", "es", "es-419", "es-AR", "es-ES", "es-MX", "es-US", "et", "fa", "fi", "fr", "fr-BE", "fr-CA", "fr-CH", "fr-FR", "he", "hi", "hr", "hr-HR", "hu", "id", "id-ID", "it", "it-IT", "ja", "ja-JP", "kn", "kn-IN", "ko", "ko-KR", "lt", "lv", "mk", "mk-MK", "mr", "mr-IN", "ms", "multi", "nl", "nl-BE", "nl-NL", "no", "no-NO", "pl", "pl-PL", "pt", "pt-BR", "pt-PT", "ro", "ru", "ru-Latn", "ru-RU", "sk", "sl", "sl-SL", "sr", "sr-RS", "sv", "sv-SE", "ta", "ta-IN", "te", "te-IN", "th", "th-TH", "tl", "tr", "tr-TR", "uk", "ur", "vi", "zh", "zh-CN", "zh-Hans", "zh-Hant", "zh-HK", "zh-TW"];
  readonly polaris: readonly ["da", "de", "en", "en-IN", "en-US", "es", "es-419", "es-LATAM", "fr", "hi", "it", "ja", "ko", "nl", "no", "pl", "pt", "pt-BR", "pt-PT", "sv", "ta", "taq"];
  readonly unknown: readonly ["da", "da-DK", "sv", "sv-SE"];
  readonly whisper: readonly ["af", "am", "ar", "as", "az", "ba", "be", "bg", "bn", "bo", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "en-AU", "en-GB", "en-IN", "en-NZ", "en-US", "es", "es-419", "es-LATAM", "et", "eu", "fa", "fi", "fo", "fr", "fr-CA", "gl", "gu", "ha", "haw", "he", "hi", "hi-Latn", "hr", "ht", "hu", "hy", "id", "id-ID", "is", "it", "ja", "jw", "ka", "kk", "km", "kn", "ko", "la", "lb", "ln", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", "nn", "no", "oc", "pa", "pl", "ps", "pt", "pt-BR", "pt-PT", "ro", "ru", "sa", "sd", "si", "sk", "sl", "sn", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "tg", "th", "tk", "tl", "tr", "tt", "uk", "ur", "uz", "vi", "yi", "yo", "zh", "zh-CN", "zh-TW"];
@@ -2792,25 +2792,24 @@ declare const AssemblyAIEncoding: {
  /**
   * AssemblyAI batch transcription models
   *
- * Values: `best`, `slam-1`, `universal`
+ * Uses the `speech_models` (plural) API parameter — pass as array.
+ * AssemblyAI routes audio to the best available model from the list.
   *
- * - `best`: Highest accuracy, best for most use cases (default)
- * - `slam-1`: Speech-Language Aligned Model, optimized for specific domains
- * - `universal`: General-purpose model with broad language support
+ * - `universal-3-pro`: Highest accuracy, latest generation
+ * - `universal-2`: Previous generation, broad language support
   *
   * @example
   * ```typescript
   * import { AssemblyAITranscriptionModel } from 'voice-router-dev/constants'
   *
   * await router.transcribe('assemblyai', audioUrl, {
- *   speechModel: AssemblyAITranscriptionModel.best
+ *   model: AssemblyAITranscriptionModel["universal-3-pro"]
   * })
   * ```
   */
  declare const AssemblyAITranscriptionModel: {
-   readonly best: "best";
-   readonly "slam-1": "slam-1";
-   readonly universal: "universal";
+   readonly "universal-3-pro": "universal-3-pro";
+   readonly "universal-2": "universal-2";
  };
  /**
   * AssemblyAI language codes for transcription
@@ -2994,6 +2993,34 @@ declare const AssemblyAIStatus: {
  readonly completed: "completed";
  readonly error: "error";
  };
+ /**
+  * AssemblyAI regional endpoints for data residency
+  *
+  * | Region | REST API | Streaming |
+  * |--------|----------|-----------|
+  * | US (default) | api.assemblyai.com | streaming.assemblyai.com |
+  * | EU | api.eu.assemblyai.com | streaming.eu.assemblyai.com |
+  *
+  * The EU endpoint guarantees audio and transcription data never leaves the EU.
+  *
+  * @example
+  * ```typescript
+  * import { AssemblyAIRegion } from 'voice-router-dev/constants'
+  *
+  * const adapter = createAssemblyAIAdapter({
+  *   apiKey: process.env.ASSEMBLYAI_API_KEY,
+  *   region: AssemblyAIRegion.eu
+  * })
+  * ```
+  *
+  * @see https://www.assemblyai.com/docs/getting-started/cloud-endpoints - Official docs
+  */
+ declare const AssemblyAIRegion: {
+   /** United States (default) */
+   readonly us: "us";
+   /** European Union — data never leaves the EU */
+   readonly eu: "eu";
+ };
  /**
   * Gladia job status values for filtering
   *
@@ -3098,6 +3125,8 @@ type AssemblyAISpeechModelType = (typeof AssemblyAISpeechModel)[keyof typeof Ass
  type AssemblyAISampleRateType = (typeof AssemblyAISampleRate)[keyof typeof AssemblyAISampleRate];
  /** AssemblyAI status type derived from const object */
  type AssemblyAIStatusType = (typeof AssemblyAIStatus)[keyof typeof AssemblyAIStatus];
+ /** AssemblyAI region type derived from const object */
+ type AssemblyAIRegionType = (typeof AssemblyAIRegion)[keyof typeof AssemblyAIRegion];
  /** Gladia status type derived from const object */
  type GladiaStatusType = (typeof GladiaStatus)[keyof typeof GladiaStatus];
  /** Azure status type derived from const object */
@@ -3381,7 +3410,7 @@ declare const OpenAIModel: {
  readonly "whisper-1": "whisper-1";
  };
  declare const OpenAIModelCodes: readonly ["gpt-4o-mini-realtime-preview", "gpt-4o-mini-realtime-preview-2024-12-17", "gpt-4o-mini-transcribe", "gpt-4o-mini-transcribe-2025-12-15", "gpt-4o-realtime-preview", "gpt-4o-realtime-preview-2024-10-01", "gpt-4o-realtime-preview-2024-12-17", "gpt-4o-realtime-preview-2025-06-03", "gpt-4o-transcribe", "gpt-4o-transcribe-diarize", "gpt-audio-mini", "gpt-audio-mini-2025-10-06", "gpt-audio-mini-2025-12-15", "gpt-realtime", "gpt-realtime-2025-08-28", "gpt-realtime-mini", "gpt-realtime-mini-2025-10-06", "gpt-realtime-mini-2025-12-15", "whisper-1"];
- declare const OpenAIModelLabels: Record<"gpt-4o-mini-realtime-preview" | "gpt-4o-mini-realtime-preview-2024-12-17" | "gpt-4o-mini-transcribe" | "gpt-4o-mini-transcribe-2025-12-15" | "gpt-4o-realtime-preview" | "gpt-4o-realtime-preview-2024-10-01" | "gpt-4o-realtime-preview-2024-12-17" | "gpt-4o-realtime-preview-2025-06-03" | "gpt-4o-transcribe" | "gpt-4o-transcribe-diarize" | "gpt-audio-mini" | "gpt-audio-mini-2025-10-06" | "gpt-audio-mini-2025-12-15" | "gpt-realtime" | "gpt-realtime-2025-08-28" | "gpt-realtime-mini" | "gpt-realtime-mini-2025-10-06" | "gpt-realtime-mini-2025-12-15" | "whisper-1", string>;
+ declare const OpenAIModelLabels: Record<"gpt-4o-mini-transcribe" | "gpt-4o-mini-transcribe-2025-12-15" | "gpt-4o-transcribe" | "gpt-4o-transcribe-diarize" | "whisper-1" | "gpt-4o-mini-realtime-preview" | "gpt-4o-mini-realtime-preview-2024-12-17" | "gpt-4o-realtime-preview" | "gpt-4o-realtime-preview-2024-10-01" | "gpt-4o-realtime-preview-2024-12-17" | "gpt-4o-realtime-preview-2025-06-03" | "gpt-audio-mini" | "gpt-audio-mini-2025-10-06" | "gpt-audio-mini-2025-12-15" | "gpt-realtime" | "gpt-realtime-2025-08-28" | "gpt-realtime-mini" | "gpt-realtime-mini-2025-10-06" | "gpt-realtime-mini-2025-12-15", string>;
  /**
  * OpenAI Realtime API models (streaming)
  * @see scripts/generate-openai-models.js
@@ -3561,4 +3590,4 @@ declare const OpenAILanguage: {
  /** OpenAI language type */
  type OpenAILanguageType = (typeof OpenAILanguageCodes)[number];
 
- export { AssemblyAIEncoding, type AssemblyAIEncodingType, AssemblyAILanguage, type AssemblyAILanguageType, AssemblyAISampleRate, type AssemblyAISampleRateType, AssemblyAISpeechModel, type AssemblyAISpeechModelType, AssemblyAIStatus, type AssemblyAIStatusType, AssemblyAITranscriptionModel, type AssemblyAITranscriptionModelType, AzureLocale, type AzureLocaleCode, AzureLocaleCodes, AzureLocaleLabels, type AzureLocaleType, AzureLocales, AzureStatus, type AzureStatusType, type DeepgramArchitecture, DeepgramArchitectureLanguages, DeepgramArchitectures, DeepgramCallbackMethod, type DeepgramCallbackMethodType, DeepgramEncoding, type DeepgramEncodingType, DeepgramIntentMode, type DeepgramIntentModeType, DeepgramLanguage, type DeepgramLanguageCode, DeepgramLanguageCodes, type DeepgramLanguageCode as DeepgramLanguageType, DeepgramModel, type DeepgramModelCode, DeepgramModelCodes, DeepgramModelLabels, type DeepgramModelCode as DeepgramModelType, type DeepgramMultilingualArchitecture, DeepgramMultilingualArchitectures, DeepgramRedact, type DeepgramRedactType, DeepgramRegion, type DeepgramRegionType, DeepgramSampleRate, type DeepgramSampleRateType, DeepgramStatus, type DeepgramStatusType, DeepgramTTSContainer, type DeepgramTTSContainerType, DeepgramTTSEncoding, type DeepgramTTSEncodingType, DeepgramTTSModel, type DeepgramTTSModelType, DeepgramTTSSampleRate, type DeepgramTTSSampleRateType, DeepgramTopicMode, type DeepgramTopicModeType, ElevenLabsAudioFormat, type ElevenLabsAudioFormatType, ElevenLabsLanguage, type ElevenLabsLanguageCode, ElevenLabsLanguageCodes, ElevenLabsLanguageLabels, type ElevenLabsLanguageType, ElevenLabsLanguages, ElevenLabsModel, type ElevenLabsModelCode, ElevenLabsModelCodes, ElevenLabsModelLabels, type ElevenLabsModelType, ElevenLabsRealtimeModel, type ElevenLabsRealtimeModelCode, ElevenLabsRealtimeModelCodes, type ElevenLabsRealtimeModelType, ElevenLabsRegion, type ElevenLabsRegionType, GladiaBitDepth, type GladiaBitDepthType, GladiaEncoding, type 
GladiaEncodingType, GladiaLanguage, type GladiaLanguageType, GladiaModel, type GladiaModelType, GladiaRegion, type GladiaRegionType, GladiaSampleRate, type GladiaSampleRateType, GladiaStatus, type GladiaStatusType, GladiaTranslationLanguage, type GladiaTranslationLanguageType, OpenAILanguage, OpenAILanguageCodes, type OpenAILanguageType, OpenAIModel, type OpenAIModelCode, OpenAIModelCodes, OpenAIModelLabels, type OpenAIModelType, OpenAIRealtimeAudioFormat, type OpenAIRealtimeAudioFormatType, OpenAIRealtimeModel, type OpenAIRealtimeModelCode, OpenAIRealtimeModelCodes, type OpenAIRealtimeModelType, OpenAIRealtimeTranscriptionModel, type OpenAIRealtimeTranscriptionModelType, OpenAIRealtimeTurnDetection, type OpenAIRealtimeTurnDetectionType, OpenAIResponseFormat, type OpenAIResponseFormatType, type OpenAITranscriptionModelCode, SonioxAsyncModel, type SonioxAsyncModelCode, SonioxAsyncModelCodes, SonioxLanguage, type SonioxLanguageCode, SonioxLanguageCodes, SonioxLanguageLabels, type SonioxLanguageType, SonioxLanguages, SonioxModel, type SonioxModelCode, SonioxModelCodes, SonioxModelLabels, SonioxModels, SonioxRealtimeModel, type SonioxRealtimeModelCode, SonioxRealtimeModelCodes, SonioxRegion, type SonioxRegionType, SpeechmaticsLanguage, type SpeechmaticsLanguageCode, SpeechmaticsLanguageCodes, SpeechmaticsLanguageLabels, type SpeechmaticsLanguageType, SpeechmaticsLanguages, SpeechmaticsOperatingPoint, type SpeechmaticsOperatingPointType, SpeechmaticsRegion, type SpeechmaticsRegionType };
+ export { AssemblyAIEncoding, type AssemblyAIEncodingType, AssemblyAILanguage, type AssemblyAILanguageType, AssemblyAIRegion, type AssemblyAIRegionType, AssemblyAISampleRate, type AssemblyAISampleRateType, AssemblyAISpeechModel, type AssemblyAISpeechModelType, AssemblyAIStatus, type AssemblyAIStatusType, AssemblyAITranscriptionModel, type AssemblyAITranscriptionModelType, AzureLocale, type AzureLocaleCode, AzureLocaleCodes, AzureLocaleLabels, type AzureLocaleType, AzureLocales, AzureStatus, type AzureStatusType, type DeepgramArchitecture, DeepgramArchitectureLanguages, DeepgramArchitectures, DeepgramCallbackMethod, type DeepgramCallbackMethodType, DeepgramEncoding, type DeepgramEncodingType, DeepgramIntentMode, type DeepgramIntentModeType, DeepgramLanguage, type DeepgramLanguageCode, DeepgramLanguageCodes, type DeepgramLanguageCode as DeepgramLanguageType, DeepgramModel, type DeepgramModelCode, DeepgramModelCodes, DeepgramModelLabels, type DeepgramModelCode as DeepgramModelType, type DeepgramMultilingualArchitecture, DeepgramMultilingualArchitectures, DeepgramRedact, type DeepgramRedactType, DeepgramRegion, type DeepgramRegionType, DeepgramSampleRate, type DeepgramSampleRateType, DeepgramStatus, type DeepgramStatusType, DeepgramTTSContainer, type DeepgramTTSContainerType, DeepgramTTSEncoding, type DeepgramTTSEncodingType, DeepgramTTSModel, type DeepgramTTSModelType, DeepgramTTSSampleRate, type DeepgramTTSSampleRateType, DeepgramTopicMode, type DeepgramTopicModeType, ElevenLabsAudioFormat, type ElevenLabsAudioFormatType, ElevenLabsLanguage, type ElevenLabsLanguageCode, ElevenLabsLanguageCodes, ElevenLabsLanguageLabels, type ElevenLabsLanguageType, ElevenLabsLanguages, ElevenLabsModel, type ElevenLabsModelCode, ElevenLabsModelCodes, ElevenLabsModelLabels, type ElevenLabsModelType, ElevenLabsRealtimeModel, type ElevenLabsRealtimeModelCode, ElevenLabsRealtimeModelCodes, type ElevenLabsRealtimeModelType, ElevenLabsRegion, type ElevenLabsRegionType, GladiaBitDepth, type 
GladiaBitDepthType, GladiaEncoding, type GladiaEncodingType, GladiaLanguage, type GladiaLanguageType, GladiaModel, type GladiaModelType, GladiaRegion, type GladiaRegionType, GladiaSampleRate, type GladiaSampleRateType, GladiaStatus, type GladiaStatusType, GladiaTranslationLanguage, type GladiaTranslationLanguageType, OpenAILanguage, OpenAILanguageCodes, type OpenAILanguageType, OpenAIModel, type OpenAIModelCode, OpenAIModelCodes, OpenAIModelLabels, type OpenAIModelType, OpenAIRealtimeAudioFormat, type OpenAIRealtimeAudioFormatType, OpenAIRealtimeModel, type OpenAIRealtimeModelCode, OpenAIRealtimeModelCodes, type OpenAIRealtimeModelType, OpenAIRealtimeTranscriptionModel, type OpenAIRealtimeTranscriptionModelType, OpenAIRealtimeTurnDetection, type OpenAIRealtimeTurnDetectionType, OpenAIResponseFormat, type OpenAIResponseFormatType, type OpenAITranscriptionModelCode, SonioxAsyncModel, type SonioxAsyncModelCode, SonioxAsyncModelCodes, SonioxLanguage, type SonioxLanguageCode, SonioxLanguageCodes, SonioxLanguageLabels, type SonioxLanguageType, SonioxLanguages, SonioxModel, type SonioxModelCode, SonioxModelCodes, SonioxModelLabels, SonioxModels, SonioxRealtimeModel, type SonioxRealtimeModelCode, SonioxRealtimeModelCodes, SonioxRegion, type SonioxRegionType, SpeechmaticsLanguage, type SpeechmaticsLanguageCode, SpeechmaticsLanguageCodes, SpeechmaticsLanguageLabels, type SpeechmaticsLanguageType, SpeechmaticsLanguages, SpeechmaticsOperatingPoint, type SpeechmaticsOperatingPointType, SpeechmaticsRegion, type SpeechmaticsRegionType };
@@ -331,7 +331,7 @@ declare const DeepgramArchitectureLanguages: {
331
331
  readonly base: readonly ["bg", "ca", "cs", "da", "de", "de-AT", "de-CH", "de-DE", "el", "en", "en-AU", "en-GB", "en-IN", "en-NZ", "en-US", "es", "es-419", "es-AR", "es-ES", "es-LATAM", "es-MX", "es-US", "et", "fi", "fr", "fr-BE", "fr-ca", "fr-CA", "fr-CH", "fr-FR", "hi", "hi-Latn", "hu", "id", "id-ID", "it", "ja", "ko", "lt", "lv", "ms", "ms-MY", "ms-SG", "nl", "no", "pl", "pt", "pt-BR", "pt-PT", "ro", "ro-MD", "ru", "sk", "sv", "ta", "taq", "th", "th-TH", "tr", "uk", "vi", "zh", "zh-CN", "zh-Hans", "zh-Hant", "zh-HK", "zh-TW"];
332
332
  readonly nova: readonly ["en", "en-AU", "en-GB", "en-IN", "en-NZ", "en-US", "es", "es-419", "es-LATAM", "hi-Latn"];
333
333
  readonly "nova-2": readonly ["bg", "ca", "cs", "da", "da-DK", "de", "de-AT", "de-CH", "de-DE", "el", "en", "en-AU", "en-CA", "en-GB", "en-IE", "en-IN", "en-MY", "en-NZ", "en-PH", "en-US", "en-ZA", "es", "es-419", "es-AR", "es-ES", "es-MX", "es-US", "et", "fi", "fr", "fr-BE", "fr-CA", "fr-CH", "fr-FR", "hi", "hi-Latn", "hu", "id", "it", "it-IT", "ja", "ja-JP", "ko", "ko-KR", "lt", "lv", "ms", "ms-MY", "multi", "nl", "nl-BE", "nl-NL", "no", "no-NO", "pl", "pl-PL", "pt", "pt-BR", "pt-PT", "ro", "ru", "ru-RU", "sk", "sv", "sv-SE", "th", "th-TH", "tr", "tr-TR", "uk", "vi", "zh", "zh-CN", "zh-Hans", "zh-Hant", "zh-HK", "zh-TW"];
334
- readonly "nova-3": readonly ["ar", "ar-AE", "ar-DZ", "ar-EG", "ar-IQ", "ar-IR", "ar-JO", "ar-KW", "ar-LB", "ar-MA", "ar-PS", "ar-QA", "ar-SA", "ar-SD", "ar-SY", "ar-TD", "ar-TN", "be", "be-BY", "bg", "bn", "bn-IN", "bs", "bs-BA", "ca", "cs", "da", "da-DK", "de", "de-AT", "de-CH", "de-DE", "el", "en", "en-AU", "en-CA", "en-GB", "en-IE", "en-IN", "en-NZ", "en-US", "es", "es-419", "es-AR", "es-ES", "es-MX", "es-US", "et", "fa", "fi", "fr", "fr-BE", "fr-CA", "fr-CH", "fr-FR", "he", "hi", "hr", "hr-HR", "hu", "id", "id-ID", "it", "it-IT", "ja", "ja-JP", "kn", "kn-IN", "ko", "ko-KR", "lt", "lv", "mk", "mk-MK", "mr", "mr-IN", "ms", "multi", "nl", "nl-BE", "nl-NL", "no", "no-NO", "pl", "pl-PL", "pt", "pt-BR", "pt-PT", "ro", "ru", "ru-Latn", "ru-RU", "sk", "sl", "sl-SL", "sr", "sr-RS", "sv", "sv-SE", "ta", "ta-IN", "te", "te-IN", "th", "th-TH", "tl", "tr", "tr-TR", "uk", "ur", "vi", "zh-HK"];
334
+ readonly "nova-3": readonly ["ar", "ar-AE", "ar-DZ", "ar-EG", "ar-IQ", "ar-IR", "ar-JO", "ar-KW", "ar-LB", "ar-MA", "ar-PS", "ar-QA", "ar-SA", "ar-SD", "ar-SY", "ar-TD", "ar-TN", "be", "be-BY", "bg", "bn", "bn-IN", "bs", "bs-BA", "ca", "cs", "da", "da-DK", "de", "de-AT", "de-CH", "de-DE", "el", "en", "en-AU", "en-CA", "en-GB", "en-IE", "en-IN", "en-NZ", "en-US", "es", "es-419", "es-AR", "es-ES", "es-MX", "es-US", "et", "fa", "fi", "fr", "fr-BE", "fr-CA", "fr-CH", "fr-FR", "he", "hi", "hr", "hr-HR", "hu", "id", "id-ID", "it", "it-IT", "ja", "ja-JP", "kn", "kn-IN", "ko", "ko-KR", "lt", "lv", "mk", "mk-MK", "mr", "mr-IN", "ms", "multi", "nl", "nl-BE", "nl-NL", "no", "no-NO", "pl", "pl-PL", "pt", "pt-BR", "pt-PT", "ro", "ru", "ru-Latn", "ru-RU", "sk", "sl", "sl-SL", "sr", "sr-RS", "sv", "sv-SE", "ta", "ta-IN", "te", "te-IN", "th", "th-TH", "tl", "tr", "tr-TR", "uk", "ur", "vi", "zh", "zh-CN", "zh-Hans", "zh-Hant", "zh-HK", "zh-TW"];
335
335
  readonly polaris: readonly ["da", "de", "en", "en-IN", "en-US", "es", "es-419", "es-LATAM", "fr", "hi", "it", "ja", "ko", "nl", "no", "pl", "pt", "pt-BR", "pt-PT", "sv", "ta", "taq"];
336
336
  readonly unknown: readonly ["da", "da-DK", "sv", "sv-SE"];
337
337
  readonly whisper: readonly ["af", "am", "ar", "as", "az", "ba", "be", "bg", "bn", "bo", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "en-AU", "en-GB", "en-IN", "en-NZ", "en-US", "es", "es-419", "es-LATAM", "et", "eu", "fa", "fi", "fo", "fr", "fr-CA", "gl", "gu", "ha", "haw", "he", "hi", "hi-Latn", "hr", "ht", "hu", "hy", "id", "id-ID", "is", "it", "ja", "jw", "ka", "kk", "km", "kn", "ko", "la", "lb", "ln", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", "nn", "no", "oc", "pa", "pl", "ps", "pt", "pt-BR", "pt-PT", "ro", "ru", "sa", "sd", "si", "sk", "sl", "sn", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "tg", "th", "tk", "tl", "tr", "tt", "uk", "ur", "uz", "vi", "yi", "yo", "zh", "zh-CN", "zh-TW"];
@@ -2792,25 +2792,24 @@ declare const AssemblyAIEncoding: {
  /**
  * AssemblyAI batch transcription models
  *
- * Values: `best`, `slam-1`, `universal`
+ * Uses the `speech_models` (plural) API parameter — pass as array.
+ * AssemblyAI routes audio to the best available model from the list.
  *
- * - `best`: Highest accuracy, best for most use cases (default)
- * - `slam-1`: Speech-Language Aligned Model, optimized for specific domains
- * - `universal`: General-purpose model with broad language support
+ * - `universal-3-pro`: Highest accuracy, latest generation
+ * - `universal-2`: Previous generation, broad language support
  *
  * @example
  * ```typescript
  * import { AssemblyAITranscriptionModel } from 'voice-router-dev/constants'
  *
  * await router.transcribe('assemblyai', audioUrl, {
- * speechModel: AssemblyAITranscriptionModel.best
+ * model: AssemblyAITranscriptionModel["universal-3-pro"]
  * })
  * ```
  */
  declare const AssemblyAITranscriptionModel: {
- readonly best: "best";
- readonly "slam-1": "slam-1";
- readonly universal: "universal";
+ readonly "universal-3-pro": "universal-3-pro";
+ readonly "universal-2": "universal-2";
  };
  /**
  * AssemblyAI language codes for transcription
@@ -2994,6 +2993,34 @@ declare const AssemblyAIStatus: {
  readonly completed: "completed";
  readonly error: "error";
  };
+ /**
+ * AssemblyAI regional endpoints for data residency
+ *
+ * | Region | REST API | Streaming |
+ * |--------|----------|-----------|
+ * | US (default) | api.assemblyai.com | streaming.assemblyai.com |
+ * | EU | api.eu.assemblyai.com | streaming.eu.assemblyai.com |
+ *
+ * The EU endpoint guarantees audio and transcription data never leaves the EU.
+ *
+ * @example
+ * ```typescript
+ * import { AssemblyAIRegion } from 'voice-router-dev/constants'
+ *
+ * const adapter = createAssemblyAIAdapter({
+ * apiKey: process.env.ASSEMBLYAI_API_KEY,
+ * region: AssemblyAIRegion.eu
+ * })
+ * ```
+ *
+ * @see https://www.assemblyai.com/docs/getting-started/cloud-endpoints - Official docs
+ */
+ declare const AssemblyAIRegion: {
+ /** United States (default) */
+ readonly us: "us";
+ /** European Union — data never leaves the EU */
+ readonly eu: "eu";
+ };
  /**
  * Gladia job status values for filtering
  *
@@ -3098,6 +3125,8 @@ type AssemblyAISpeechModelType = (typeof AssemblyAISpeechModel)[keyof typeof Ass
  type AssemblyAISampleRateType = (typeof AssemblyAISampleRate)[keyof typeof AssemblyAISampleRate];
  /** AssemblyAI status type derived from const object */
  type AssemblyAIStatusType = (typeof AssemblyAIStatus)[keyof typeof AssemblyAIStatus];
+ /** AssemblyAI region type derived from const object */
+ type AssemblyAIRegionType = (typeof AssemblyAIRegion)[keyof typeof AssemblyAIRegion];
  /** Gladia status type derived from const object */
  type GladiaStatusType = (typeof GladiaStatus)[keyof typeof GladiaStatus];
  /** Azure status type derived from const object */
@@ -3381,7 +3410,7 @@ declare const OpenAIModel: {
  readonly "whisper-1": "whisper-1";
  };
  declare const OpenAIModelCodes: readonly ["gpt-4o-mini-realtime-preview", "gpt-4o-mini-realtime-preview-2024-12-17", "gpt-4o-mini-transcribe", "gpt-4o-mini-transcribe-2025-12-15", "gpt-4o-realtime-preview", "gpt-4o-realtime-preview-2024-10-01", "gpt-4o-realtime-preview-2024-12-17", "gpt-4o-realtime-preview-2025-06-03", "gpt-4o-transcribe", "gpt-4o-transcribe-diarize", "gpt-audio-mini", "gpt-audio-mini-2025-10-06", "gpt-audio-mini-2025-12-15", "gpt-realtime", "gpt-realtime-2025-08-28", "gpt-realtime-mini", "gpt-realtime-mini-2025-10-06", "gpt-realtime-mini-2025-12-15", "whisper-1"];
- declare const OpenAIModelLabels: Record<"gpt-4o-mini-realtime-preview" | "gpt-4o-mini-realtime-preview-2024-12-17" | "gpt-4o-mini-transcribe" | "gpt-4o-mini-transcribe-2025-12-15" | "gpt-4o-realtime-preview" | "gpt-4o-realtime-preview-2024-10-01" | "gpt-4o-realtime-preview-2024-12-17" | "gpt-4o-realtime-preview-2025-06-03" | "gpt-4o-transcribe" | "gpt-4o-transcribe-diarize" | "gpt-audio-mini" | "gpt-audio-mini-2025-10-06" | "gpt-audio-mini-2025-12-15" | "gpt-realtime" | "gpt-realtime-2025-08-28" | "gpt-realtime-mini" | "gpt-realtime-mini-2025-10-06" | "gpt-realtime-mini-2025-12-15" | "whisper-1", string>;
+ declare const OpenAIModelLabels: Record<"gpt-4o-mini-transcribe" | "gpt-4o-mini-transcribe-2025-12-15" | "gpt-4o-transcribe" | "gpt-4o-transcribe-diarize" | "whisper-1" | "gpt-4o-mini-realtime-preview" | "gpt-4o-mini-realtime-preview-2024-12-17" | "gpt-4o-realtime-preview" | "gpt-4o-realtime-preview-2024-10-01" | "gpt-4o-realtime-preview-2024-12-17" | "gpt-4o-realtime-preview-2025-06-03" | "gpt-audio-mini" | "gpt-audio-mini-2025-10-06" | "gpt-audio-mini-2025-12-15" | "gpt-realtime" | "gpt-realtime-2025-08-28" | "gpt-realtime-mini" | "gpt-realtime-mini-2025-10-06" | "gpt-realtime-mini-2025-12-15", string>;
  /**
  * OpenAI Realtime API models (streaming)
  * @see scripts/generate-openai-models.js
@@ -3561,4 +3590,4 @@ declare const OpenAILanguage: {
  /** OpenAI language type */
  type OpenAILanguageType = (typeof OpenAILanguageCodes)[number];
 
- export { AssemblyAIEncoding, type AssemblyAIEncodingType, AssemblyAILanguage, type AssemblyAILanguageType, AssemblyAISampleRate, type AssemblyAISampleRateType, AssemblyAISpeechModel, type AssemblyAISpeechModelType, AssemblyAIStatus, type AssemblyAIStatusType, AssemblyAITranscriptionModel, type AssemblyAITranscriptionModelType, AzureLocale, type AzureLocaleCode, AzureLocaleCodes, AzureLocaleLabels, type AzureLocaleType, AzureLocales, AzureStatus, type AzureStatusType, type DeepgramArchitecture, DeepgramArchitectureLanguages, DeepgramArchitectures, DeepgramCallbackMethod, type DeepgramCallbackMethodType, DeepgramEncoding, type DeepgramEncodingType, DeepgramIntentMode, type DeepgramIntentModeType, DeepgramLanguage, type DeepgramLanguageCode, DeepgramLanguageCodes, type DeepgramLanguageCode as DeepgramLanguageType, DeepgramModel, type DeepgramModelCode, DeepgramModelCodes, DeepgramModelLabels, type DeepgramModelCode as DeepgramModelType, type DeepgramMultilingualArchitecture, DeepgramMultilingualArchitectures, DeepgramRedact, type DeepgramRedactType, DeepgramRegion, type DeepgramRegionType, DeepgramSampleRate, type DeepgramSampleRateType, DeepgramStatus, type DeepgramStatusType, DeepgramTTSContainer, type DeepgramTTSContainerType, DeepgramTTSEncoding, type DeepgramTTSEncodingType, DeepgramTTSModel, type DeepgramTTSModelType, DeepgramTTSSampleRate, type DeepgramTTSSampleRateType, DeepgramTopicMode, type DeepgramTopicModeType, ElevenLabsAudioFormat, type ElevenLabsAudioFormatType, ElevenLabsLanguage, type ElevenLabsLanguageCode, ElevenLabsLanguageCodes, ElevenLabsLanguageLabels, type ElevenLabsLanguageType, ElevenLabsLanguages, ElevenLabsModel, type ElevenLabsModelCode, ElevenLabsModelCodes, ElevenLabsModelLabels, type ElevenLabsModelType, ElevenLabsRealtimeModel, type ElevenLabsRealtimeModelCode, ElevenLabsRealtimeModelCodes, type ElevenLabsRealtimeModelType, ElevenLabsRegion, type ElevenLabsRegionType, GladiaBitDepth, type GladiaBitDepthType, GladiaEncoding, type 
GladiaEncodingType, GladiaLanguage, type GladiaLanguageType, GladiaModel, type GladiaModelType, GladiaRegion, type GladiaRegionType, GladiaSampleRate, type GladiaSampleRateType, GladiaStatus, type GladiaStatusType, GladiaTranslationLanguage, type GladiaTranslationLanguageType, OpenAILanguage, OpenAILanguageCodes, type OpenAILanguageType, OpenAIModel, type OpenAIModelCode, OpenAIModelCodes, OpenAIModelLabels, type OpenAIModelType, OpenAIRealtimeAudioFormat, type OpenAIRealtimeAudioFormatType, OpenAIRealtimeModel, type OpenAIRealtimeModelCode, OpenAIRealtimeModelCodes, type OpenAIRealtimeModelType, OpenAIRealtimeTranscriptionModel, type OpenAIRealtimeTranscriptionModelType, OpenAIRealtimeTurnDetection, type OpenAIRealtimeTurnDetectionType, OpenAIResponseFormat, type OpenAIResponseFormatType, type OpenAITranscriptionModelCode, SonioxAsyncModel, type SonioxAsyncModelCode, SonioxAsyncModelCodes, SonioxLanguage, type SonioxLanguageCode, SonioxLanguageCodes, SonioxLanguageLabels, type SonioxLanguageType, SonioxLanguages, SonioxModel, type SonioxModelCode, SonioxModelCodes, SonioxModelLabels, SonioxModels, SonioxRealtimeModel, type SonioxRealtimeModelCode, SonioxRealtimeModelCodes, SonioxRegion, type SonioxRegionType, SpeechmaticsLanguage, type SpeechmaticsLanguageCode, SpeechmaticsLanguageCodes, SpeechmaticsLanguageLabels, type SpeechmaticsLanguageType, SpeechmaticsLanguages, SpeechmaticsOperatingPoint, type SpeechmaticsOperatingPointType, SpeechmaticsRegion, type SpeechmaticsRegionType };
+ export { AssemblyAIEncoding, type AssemblyAIEncodingType, AssemblyAILanguage, type AssemblyAILanguageType, AssemblyAIRegion, type AssemblyAIRegionType, AssemblyAISampleRate, type AssemblyAISampleRateType, AssemblyAISpeechModel, type AssemblyAISpeechModelType, AssemblyAIStatus, type AssemblyAIStatusType, AssemblyAITranscriptionModel, type AssemblyAITranscriptionModelType, AzureLocale, type AzureLocaleCode, AzureLocaleCodes, AzureLocaleLabels, type AzureLocaleType, AzureLocales, AzureStatus, type AzureStatusType, type DeepgramArchitecture, DeepgramArchitectureLanguages, DeepgramArchitectures, DeepgramCallbackMethod, type DeepgramCallbackMethodType, DeepgramEncoding, type DeepgramEncodingType, DeepgramIntentMode, type DeepgramIntentModeType, DeepgramLanguage, type DeepgramLanguageCode, DeepgramLanguageCodes, type DeepgramLanguageCode as DeepgramLanguageType, DeepgramModel, type DeepgramModelCode, DeepgramModelCodes, DeepgramModelLabels, type DeepgramModelCode as DeepgramModelType, type DeepgramMultilingualArchitecture, DeepgramMultilingualArchitectures, DeepgramRedact, type DeepgramRedactType, DeepgramRegion, type DeepgramRegionType, DeepgramSampleRate, type DeepgramSampleRateType, DeepgramStatus, type DeepgramStatusType, DeepgramTTSContainer, type DeepgramTTSContainerType, DeepgramTTSEncoding, type DeepgramTTSEncodingType, DeepgramTTSModel, type DeepgramTTSModelType, DeepgramTTSSampleRate, type DeepgramTTSSampleRateType, DeepgramTopicMode, type DeepgramTopicModeType, ElevenLabsAudioFormat, type ElevenLabsAudioFormatType, ElevenLabsLanguage, type ElevenLabsLanguageCode, ElevenLabsLanguageCodes, ElevenLabsLanguageLabels, type ElevenLabsLanguageType, ElevenLabsLanguages, ElevenLabsModel, type ElevenLabsModelCode, ElevenLabsModelCodes, ElevenLabsModelLabels, type ElevenLabsModelType, ElevenLabsRealtimeModel, type ElevenLabsRealtimeModelCode, ElevenLabsRealtimeModelCodes, type ElevenLabsRealtimeModelType, ElevenLabsRegion, type ElevenLabsRegionType, GladiaBitDepth, type 
GladiaBitDepthType, GladiaEncoding, type GladiaEncodingType, GladiaLanguage, type GladiaLanguageType, GladiaModel, type GladiaModelType, GladiaRegion, type GladiaRegionType, GladiaSampleRate, type GladiaSampleRateType, GladiaStatus, type GladiaStatusType, GladiaTranslationLanguage, type GladiaTranslationLanguageType, OpenAILanguage, OpenAILanguageCodes, type OpenAILanguageType, OpenAIModel, type OpenAIModelCode, OpenAIModelCodes, OpenAIModelLabels, type OpenAIModelType, OpenAIRealtimeAudioFormat, type OpenAIRealtimeAudioFormatType, OpenAIRealtimeModel, type OpenAIRealtimeModelCode, OpenAIRealtimeModelCodes, type OpenAIRealtimeModelType, OpenAIRealtimeTranscriptionModel, type OpenAIRealtimeTranscriptionModelType, OpenAIRealtimeTurnDetection, type OpenAIRealtimeTurnDetectionType, OpenAIResponseFormat, type OpenAIResponseFormatType, type OpenAITranscriptionModelCode, SonioxAsyncModel, type SonioxAsyncModelCode, SonioxAsyncModelCodes, SonioxLanguage, type SonioxLanguageCode, SonioxLanguageCodes, SonioxLanguageLabels, type SonioxLanguageType, SonioxLanguages, SonioxModel, type SonioxModelCode, SonioxModelCodes, SonioxModelLabels, SonioxModels, SonioxRealtimeModel, type SonioxRealtimeModelCode, SonioxRealtimeModelCodes, SonioxRegion, type SonioxRegionType, SpeechmaticsLanguage, type SpeechmaticsLanguageCode, SpeechmaticsLanguageCodes, SpeechmaticsLanguageLabels, type SpeechmaticsLanguageType, SpeechmaticsLanguages, SpeechmaticsOperatingPoint, type SpeechmaticsOperatingPointType, SpeechmaticsRegion, type SpeechmaticsRegionType };
package/dist/constants.js CHANGED
@@ -23,6 +23,7 @@ var constants_exports = {};
  __export(constants_exports, {
  AssemblyAIEncoding: () => AssemblyAIEncoding,
  AssemblyAILanguage: () => AssemblyAILanguage,
+ AssemblyAIRegion: () => AssemblyAIRegion,
  AssemblyAISampleRate: () => AssemblyAISampleRate,
  AssemblyAISpeechModel: () => AssemblyAISpeechModel,
  AssemblyAIStatus: () => AssemblyAIStatus,
@@ -934,7 +935,12 @@ var DeepgramArchitectureLanguages = {
  "uk",
  "ur",
  "vi",
- "zh-HK"
+ "zh",
+ "zh-CN",
+ "zh-Hans",
+ "zh-Hant",
+ "zh-HK",
+ "zh-TW"
  ],
  "polaris": [
  "da",
@@ -2976,13 +2982,6 @@ var TranslationLanguageCodeEnum = {
  zh: "zh"
  };
 
- // src/generated/assemblyai/schema/speechModel.ts
- var SpeechModel = {
- best: "best",
- "slam-1": "slam-1",
- universal: "universal"
- };
-
  // src/generated/assemblyai/schema/transcriptLanguageCode.ts
  var TranscriptLanguageCode = {
  en: "en",
@@ -3366,7 +3365,10 @@ var AssemblyAIEncoding = {
  /** μ-law (telephony) */
  pcmMulaw: "pcm_mulaw"
  };
- var AssemblyAITranscriptionModel = SpeechModel;
+ var AssemblyAITranscriptionModel = {
+ "universal-3-pro": "universal-3-pro",
+ "universal-2": "universal-2"
+ };
  var AssemblyAILanguage = TranscriptLanguageCode;
  var AssemblyAISpeechModel = {
  /** Optimized for English */
@@ -3382,6 +3384,12 @@ var AssemblyAISampleRate = {
  rate48000: 48e3
  };
  var AssemblyAIStatus = TranscriptStatus;
+ var AssemblyAIRegion = {
+ /** United States (default) */
+ us: "us",
+ /** European Union — data never leaves the EU */
+ eu: "eu"
+ };
  var GladiaStatus = TranscriptionControllerListV2StatusItem;
  var AzureStatus = Status;
  var DeepgramStatus = ManageV1FilterStatusParameter;
@@ -3492,6 +3500,7 @@ var OpenAILanguage = {
  0 && (module.exports = {
  AssemblyAIEncoding,
  AssemblyAILanguage,
+ AssemblyAIRegion,
  AssemblyAISampleRate,
  AssemblyAISpeechModel,
  AssemblyAIStatus,
@@ -830,7 +830,12 @@ var DeepgramArchitectureLanguages = {
  "uk",
  "ur",
  "vi",
- "zh-HK"
+ "zh",
+ "zh-CN",
+ "zh-Hans",
+ "zh-Hant",
+ "zh-HK",
+ "zh-TW"
  ],
  "polaris": [
  "da",
@@ -2872,13 +2877,6 @@ var TranslationLanguageCodeEnum = {
  zh: "zh"
  };
 
- // src/generated/assemblyai/schema/speechModel.ts
- var SpeechModel = {
- best: "best",
- "slam-1": "slam-1",
- universal: "universal"
- };
-
  // src/generated/assemblyai/schema/transcriptLanguageCode.ts
  var TranscriptLanguageCode = {
  en: "en",
@@ -3262,7 +3260,10 @@ var AssemblyAIEncoding = {
  /** μ-law (telephony) */
  pcmMulaw: "pcm_mulaw"
  };
- var AssemblyAITranscriptionModel = SpeechModel;
+ var AssemblyAITranscriptionModel = {
+ "universal-3-pro": "universal-3-pro",
+ "universal-2": "universal-2"
+ };
  var AssemblyAILanguage = TranscriptLanguageCode;
  var AssemblyAISpeechModel = {
  /** Optimized for English */
@@ -3278,6 +3279,12 @@ var AssemblyAISampleRate = {
  rate48000: 48e3
  };
  var AssemblyAIStatus = TranscriptStatus;
+ var AssemblyAIRegion = {
+ /** United States (default) */
+ us: "us",
+ /** European Union — data never leaves the EU */
+ eu: "eu"
+ };
  var GladiaStatus = TranscriptionControllerListV2StatusItem;
  var AzureStatus = Status;
  var DeepgramStatus = ManageV1FilterStatusParameter;
@@ -3387,6 +3394,7 @@ var OpenAILanguage = {
  export {
  AssemblyAIEncoding,
  AssemblyAILanguage,
+ AssemblyAIRegion,
  AssemblyAISampleRate,
  AssemblyAISpeechModel,
  AssemblyAIStatus,