voice-router-dev 0.3.4 → 0.4.0

package/CHANGELOG.md CHANGED
@@ -5,28 +5,138 @@ All notable changes to this project will be documented in this file.
  The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
  and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
8
+ ## [0.3.7] - 2026-01-09
9
+
10
+ ### Added
11
+
12
+ #### Region Support for Multiple Providers
13
+
14
+ Region support for data residency, compliance, and latency optimization:
15
+
16
+ **Deepgram EU Region** (GA Jan 2026):
17
+
18
+ ```typescript
19
+ import { createDeepgramAdapter, DeepgramRegion } from 'voice-router-dev'
20
+
21
+ const adapter = createDeepgramAdapter({
22
+ apiKey: process.env.DEEPGRAM_API_KEY,
23
+ region: DeepgramRegion.eu // All processing in EU
24
+ })
25
+ ```
+
+ **Speechmatics Regional Endpoints** (EU, US, AU):
+
+ ```typescript
+ import { createSpeechmaticsAdapter, SpeechmaticsRegion } from 'voice-router-dev'
+
+ const adapter = createSpeechmaticsAdapter({
+   apiKey: process.env.SPEECHMATICS_API_KEY,
+   region: SpeechmaticsRegion.us1 // USA endpoint
+ })
+ ```
+
+ | Region | Endpoint | Availability |
+ |--------|----------|--------------|
+ | `eu1` | eu1.asr.api.speechmatics.com | All customers |
+ | `eu2` | eu2.asr.api.speechmatics.com | Enterprise only |
+ | `us1` | us1.asr.api.speechmatics.com | All customers |
+ | `us2` | us2.asr.api.speechmatics.com | Enterprise only |
+ | `au1` | au1.asr.api.speechmatics.com | All customers |
+
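All Speechmatics regions share one hostname pattern, so the REST base URL can be derived directly from the region value. A minimal sketch (the `speechmaticsBaseUrl` helper is hypothetical, not part of the package):

```typescript
// Hypothetical helper, not shipped with voice-router-dev: derive the REST base
// URL from a region value, following the endpoint pattern in the table above.
type SpeechmaticsRegionValue = 'eu1' | 'eu2' | 'us1' | 'us2' | 'au1'

function speechmaticsBaseUrl(region: SpeechmaticsRegionValue): string {
  return `https://${region}.asr.api.speechmatics.com/v2`
}

console.log(speechmaticsBaseUrl('us1'))
// https://us1.asr.api.speechmatics.com/v2
```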
+ **Gladia Streaming Regions**:
+
+ ```typescript
+ import { GladiaRegion } from 'voice-router-dev/constants'
+
+ await adapter.transcribeStream({
+   region: GladiaRegion["us-west"] // or "eu-west"
+ })
+ ```
+
+ **Dynamic region switching** for debugging and testing:
+
+ ```typescript
+ // Switch regions on the fly without reinitializing
+ adapter.setRegion(DeepgramRegion.eu)
+ await adapter.transcribe(audio)
+
+ // Check current region
+ console.log(adapter.getRegion())
+ // Deepgram: { api: "https://api.eu.deepgram.com/v1", websocket: "wss://api.eu.deepgram.com/v1/listen" }
+ // Speechmatics: "https://us1.asr.api.speechmatics.com/v2"
+ ```
+
+ **Region support summary:**
+
+ | Provider | Regions | Config Level | Dynamic Switch |
+ |----------|---------|--------------|----------------|
+ | **Deepgram** | `global`, `eu` | Adapter init | `setRegion()` |
+ | **Speechmatics** | `eu1`, `eu2`*, `us1`, `us2`*, `au1` | Adapter init | `setRegion()` |
+ | **Gladia** | `us-west`, `eu-west` | Streaming options | Per-request |
+ | **Azure** | Via `speechConfig` | Adapter init | Reinitialize |
+
+ \* Enterprise only
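Because every region value is a plain string literal, untrusted input (an env var, a config file) can be narrowed to a valid region at runtime. A sketch, with the shipped `SpeechmaticsRegion` const object redeclared locally for illustration:

```typescript
// Local mirror of the package's SpeechmaticsRegion const object (illustration only).
const SpeechmaticsRegion = {
  eu1: 'eu1', eu2: 'eu2', us1: 'us1', us2: 'us2', au1: 'au1',
} as const

type SpeechmaticsRegionType = (typeof SpeechmaticsRegion)[keyof typeof SpeechmaticsRegion]

// Type guard: narrow an arbitrary string to a known region value.
function isSpeechmaticsRegion(value: string): value is SpeechmaticsRegionType {
  return value in SpeechmaticsRegion
}

const requested = process.env.SPEECHMATICS_REGION ?? 'eu1'
const region: SpeechmaticsRegionType = isSpeechmaticsRegion(requested) ? requested : 'eu1'
console.log(region)
```

The same pattern applies to `DeepgramRegion` and `GladiaRegion`, since all three follow the `as const` object convention.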
+
+ #### OpenAPI Spec Sync
+
+ New unified spec management system for syncing provider OpenAPI specs from official sources:
+
+ ```bash
+ # Sync all specs from remote sources
+ pnpm openapi:sync
+
+ # Sync specific providers
+ pnpm openapi:sync:gladia
+ pnpm openapi:sync:deepgram
+ pnpm openapi:sync:assemblyai
+
+ # Full rebuild with fresh specs
+ pnpm openapi:rebuild
+ ```
+
+ **Spec sources:**
+ - Gladia: https://api.gladia.io/openapi.json
+ - AssemblyAI: https://github.com/AssemblyAI/assemblyai-api-spec
+ - Deepgram: https://github.com/deepgram/deepgram-api-specs
+
+ All specs are now stored locally in `./specs/` for reproducible builds.
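A sanity check of the kind a sync step can apply before writing a fetched document into `./specs/` (a hypothetical helper, not the package's actual sync script):

```typescript
// Hypothetical guard, not the package's actual sync code: accept a payload only
// if it parses as JSON and carries an `openapi` version field.
function looksLikeOpenApiSpec(raw: string): boolean {
  try {
    const doc = JSON.parse(raw)
    return typeof doc === 'object' && doc !== null && typeof doc.openapi === 'string'
  } catch {
    return false
  }
}

console.log(looksLikeOpenApiSpec('{"openapi":"3.1.0","info":{"title":"demo"}}')) // true
console.log(looksLikeOpenApiSpec('<html>rate limited</html>'))                   // false
```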
+
+ ### Fixed
+
+ - Deepgram spec regeneration now works correctly with the Orval input transformer
+ - Manual Deepgram parameter files (SpeakV1Container, SpeakV1Encoding, SpeakV1SampleRate) are preserved during regeneration
+
+ ### Changed
+
+ - `prepublishOnly` now syncs and validates specs before publishing
+
+ ---
+
  ## [0.3.3] - 2026-01-08
 
  ### Added
 
  #### Gladia Audio File Download
 
- New `getAudioFile()` method for Gladia adapter - download the original audio used for transcription:
+ New `getAudioFile()` method for Gladia adapter - download the original audio used for transcription.
+
+ Returns `ArrayBuffer` for cross-platform compatibility (Node.js and browser):
 
  ```typescript
- // Download audio from a pre-recorded transcription
  const result = await gladiaAdapter.getAudioFile('transcript-123')
  if (result.success && result.data) {
-   // Save to file (Node.js)
-   const buffer = Buffer.from(await result.data.arrayBuffer())
+   // Node.js: Convert to Buffer and save
+   const buffer = Buffer.from(result.data)
    fs.writeFileSync('audio.mp3', buffer)
 
-   // Or create download URL (browser)
-   const url = URL.createObjectURL(result.data)
+   // Browser: Convert to Blob for playback/download
+   const blob = new Blob([result.data], { type: result.contentType || 'audio/mpeg' })
+   const url = URL.createObjectURL(blob)
  }
 
  // Download audio from a live/streaming session
  const liveResult = await gladiaAdapter.getAudioFile('stream-456', 'streaming')
+ console.log('Size:', liveResult.data?.byteLength, 'bytes')
  ```
 
  **Note:** This is a Gladia-specific feature. Other providers (Deepgram, AssemblyAI, Azure) do not store audio files after transcription.
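Since `getAudioFile()` returns a plain `ArrayBuffer`, the same bytes convert cleanly in either runtime. A small sketch of the Node.js side, with sample bytes standing in for a real download:

```typescript
// Sketch: convert an ArrayBuffer (as returned by getAudioFile()) to a Node.js
// Buffer. Sample bytes stand in for real downloaded audio.
function toNodeBuffer(data: ArrayBuffer): Buffer {
  return Buffer.from(data)
}

const sample = new ArrayBuffer(3)
new Uint8Array(sample).set([0x49, 0x44, 0x33]) // "ID3" - an MP3 tag prefix

const buf = toNodeBuffer(sample)
console.log(buf.byteLength)          // 3
console.log(buf.toString('latin1'))  // ID3
```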
@@ -463,6 +463,27 @@ declare const GladiaTranslationLanguage: {
  readonly yo: "yo";
  readonly zh: "zh";
  };
+ /**
+  * Gladia streaming regions for low-latency processing
+  *
+  * Values: `us-west`, `eu-west`
+  *
+  * Use the region closest to your users for optimal latency.
+  * Region selection is only available for streaming transcription.
+  *
+  * @example
+  * ```typescript
+  * import { GladiaRegion } from 'voice-router-dev/constants'
+  *
+  * await adapter.transcribeStream({
+  *   region: GladiaRegion["us-west"]
+  * })
+  * ```
+  */
+ declare const GladiaRegion: {
+   readonly "us-west": "us-west";
+   readonly "eu-west": "eu-west";
+ };
  /**
   * AssemblyAI audio encoding formats
   *
@@ -627,6 +648,8 @@ type GladiaModelType = (typeof GladiaModel)[keyof typeof GladiaModel];
  type GladiaLanguageType = (typeof GladiaLanguage)[keyof typeof GladiaLanguage];
  /** Gladia translation language type derived from const object */
  type GladiaTranslationLanguageType = (typeof GladiaTranslationLanguage)[keyof typeof GladiaTranslationLanguage];
+ /** Gladia region type derived from const object */
+ type GladiaRegionType = (typeof GladiaRegion)[keyof typeof GladiaRegion];
  /** AssemblyAI encoding type derived from const object */
  type AssemblyAIEncodingType = (typeof AssemblyAIEncoding)[keyof typeof AssemblyAIEncoding];
  /** AssemblyAI speech model type derived from const object */
@@ -641,5 +664,275 @@ type GladiaStatusType = (typeof GladiaStatus)[keyof typeof GladiaStatus];
  type AzureStatusType = (typeof AzureStatus)[keyof typeof AzureStatus];
  /** Deepgram status type derived from const object */
  type DeepgramStatusType = (typeof DeepgramStatus)[keyof typeof DeepgramStatus];
+ /**
+  * Speechmatics regional endpoints
+  *
+  * Speechmatics offers multiple regional endpoints for data residency and latency optimization.
+  * EU2 and US2 are enterprise-only for high availability and failover.
+  *
+  * | Region | Endpoint | Availability |
+  * |--------|----------|--------------|
+  * | EU1 | eu1.asr.api.speechmatics.com | All customers |
+  * | EU2 | eu2.asr.api.speechmatics.com | Enterprise only |
+  * | US1 | us1.asr.api.speechmatics.com | All customers |
+  * | US2 | us2.asr.api.speechmatics.com | Enterprise only |
+  * | AU1 | au1.asr.api.speechmatics.com | All customers |
+  *
+  * @example
+  * ```typescript
+  * import { SpeechmaticsRegion } from 'voice-router-dev/constants'
+  *
+  * const adapter = new SpeechmaticsAdapter()
+  * adapter.initialize({
+  *   apiKey: process.env.SPEECHMATICS_API_KEY,
+  *   region: SpeechmaticsRegion.eu1
+  * })
+  * ```
+  *
+  * @see https://docs.speechmatics.com/get-started/authentication#supported-endpoints
+  */
+ declare const SpeechmaticsRegion: {
+   /** Europe (default, all customers) */
+   readonly eu1: "eu1";
+   /** Europe (enterprise only - HA/failover) */
+   readonly eu2: "eu2";
+   /** USA (all customers) */
+   readonly us1: "us1";
+   /** USA (enterprise only - HA/failover) */
+   readonly us2: "us2";
+   /** Australia (all customers) */
+   readonly au1: "au1";
+ };
+ /**
+  * Deepgram regional endpoints
+  *
+  * Deepgram offers regional endpoints for EU data residency.
+  * The EU endpoint keeps all processing within the European Union.
+  *
+  * | Region | API Endpoint | WebSocket Endpoint |
+  * |--------|--------------|-------------------|
+  * | Global | api.deepgram.com | wss://api.deepgram.com |
+  * | EU | api.eu.deepgram.com | wss://api.eu.deepgram.com |
+  *
+  * **Note:** Deepgram also supports Dedicated endpoints (`{SHORT_UID}.{REGION}.api.deepgram.com`)
+  * and self-hosted deployments. Use `baseUrl` in config for custom endpoints.
+  *
+  * @example
+  * ```typescript
+  * import { DeepgramRegion } from 'voice-router-dev/constants'
+  *
+  * const adapter = new DeepgramAdapter()
+  * adapter.initialize({
+  *   apiKey: process.env.DEEPGRAM_API_KEY,
+  *   region: DeepgramRegion.eu
+  * })
+  * ```
+  *
+  * @see https://developers.deepgram.com/reference/custom-endpoints - Official custom endpoints docs
+  */
+ declare const DeepgramRegion: {
+   /** Global endpoint (default) */
+   readonly global: "global";
+   /** European Union endpoint */
+   readonly eu: "eu";
+ };
+ /** Speechmatics region type derived from const object */
+ type SpeechmaticsRegionType = (typeof SpeechmaticsRegion)[keyof typeof SpeechmaticsRegion];
+ /** Deepgram region type derived from const object */
+ type DeepgramRegionType = (typeof DeepgramRegion)[keyof typeof DeepgramRegion];
+ /**
+  * Deepgram TTS voice models
+  *
+  * Aura 2 voices offer improved quality with support for English and Spanish.
+  * Use the voice name to select a specific voice persona.
+  *
+  * @example
+  * ```typescript
+  * import { DeepgramTTSModel } from 'voice-router-dev/constants'
+  *
+  * { model: DeepgramTTSModel["aura-2-athena-en"] }
+  * { model: DeepgramTTSModel["aura-2-sirio-es"] }
+  * ```
+  */
+ declare const DeepgramTTSModel: {
+   readonly "aura-asteria-en": "aura-asteria-en";
+   readonly "aura-luna-en": "aura-luna-en";
+   readonly "aura-stella-en": "aura-stella-en";
+   readonly "aura-athena-en": "aura-athena-en";
+   readonly "aura-hera-en": "aura-hera-en";
+   readonly "aura-orion-en": "aura-orion-en";
+   readonly "aura-arcas-en": "aura-arcas-en";
+   readonly "aura-perseus-en": "aura-perseus-en";
+   readonly "aura-angus-en": "aura-angus-en";
+   readonly "aura-orpheus-en": "aura-orpheus-en";
+   readonly "aura-helios-en": "aura-helios-en";
+   readonly "aura-zeus-en": "aura-zeus-en";
+   readonly "aura-2-amalthea-en": "aura-2-amalthea-en";
+   readonly "aura-2-andromeda-en": "aura-2-andromeda-en";
+   readonly "aura-2-apollo-en": "aura-2-apollo-en";
+   readonly "aura-2-arcas-en": "aura-2-arcas-en";
+   readonly "aura-2-aries-en": "aura-2-aries-en";
+   readonly "aura-2-asteria-en": "aura-2-asteria-en";
+   readonly "aura-2-athena-en": "aura-2-athena-en";
+   readonly "aura-2-atlas-en": "aura-2-atlas-en";
+   readonly "aura-2-aurora-en": "aura-2-aurora-en";
+   readonly "aura-2-callista-en": "aura-2-callista-en";
+   readonly "aura-2-cordelia-en": "aura-2-cordelia-en";
+   readonly "aura-2-cora-en": "aura-2-cora-en";
+   readonly "aura-2-delia-en": "aura-2-delia-en";
+   readonly "aura-2-draco-en": "aura-2-draco-en";
+   readonly "aura-2-electra-en": "aura-2-electra-en";
+   readonly "aura-2-harmonia-en": "aura-2-harmonia-en";
+   readonly "aura-2-helena-en": "aura-2-helena-en";
+   readonly "aura-2-hera-en": "aura-2-hera-en";
+   readonly "aura-2-hermes-en": "aura-2-hermes-en";
+   readonly "aura-2-hyperion-en": "aura-2-hyperion-en";
+   readonly "aura-2-iris-en": "aura-2-iris-en";
+   readonly "aura-2-janus-en": "aura-2-janus-en";
+   readonly "aura-2-juno-en": "aura-2-juno-en";
+   readonly "aura-2-jupiter-en": "aura-2-jupiter-en";
+   readonly "aura-2-luna-en": "aura-2-luna-en";
+   readonly "aura-2-mars-en": "aura-2-mars-en";
+   readonly "aura-2-minerva-en": "aura-2-minerva-en";
+   readonly "aura-2-neptune-en": "aura-2-neptune-en";
+   readonly "aura-2-odysseus-en": "aura-2-odysseus-en";
+   readonly "aura-2-ophelia-en": "aura-2-ophelia-en";
+   readonly "aura-2-orion-en": "aura-2-orion-en";
+   readonly "aura-2-orpheus-en": "aura-2-orpheus-en";
+   readonly "aura-2-pandora-en": "aura-2-pandora-en";
+   readonly "aura-2-phoebe-en": "aura-2-phoebe-en";
+   readonly "aura-2-pluto-en": "aura-2-pluto-en";
+   readonly "aura-2-saturn-en": "aura-2-saturn-en";
+   readonly "aura-2-selene-en": "aura-2-selene-en";
+   readonly "aura-2-thalia-en": "aura-2-thalia-en";
+   readonly "aura-2-theia-en": "aura-2-theia-en";
+   readonly "aura-2-vesta-en": "aura-2-vesta-en";
+   readonly "aura-2-zeus-en": "aura-2-zeus-en";
+   readonly "aura-2-sirio-es": "aura-2-sirio-es";
+   readonly "aura-2-nestor-es": "aura-2-nestor-es";
+   readonly "aura-2-carina-es": "aura-2-carina-es";
+   readonly "aura-2-celeste-es": "aura-2-celeste-es";
+   readonly "aura-2-alvaro-es": "aura-2-alvaro-es";
+   readonly "aura-2-diana-es": "aura-2-diana-es";
+   readonly "aura-2-aquila-es": "aura-2-aquila-es";
+   readonly "aura-2-selena-es": "aura-2-selena-es";
+   readonly "aura-2-estrella-es": "aura-2-estrella-es";
+   readonly "aura-2-javier-es": "aura-2-javier-es";
+ };
+ /**
+  * Deepgram TTS audio encoding formats
+  *
+  * Values: `linear16`, `aac`, `opus`, `mp3`, `flac`, `mulaw`, `alaw`
+  *
+  * @example
+  * ```typescript
+  * import { DeepgramTTSEncoding } from 'voice-router-dev/constants'
+  *
+  * { encoding: DeepgramTTSEncoding.mp3 }
+  * { encoding: DeepgramTTSEncoding.opus }
+  * ```
+  */
+ declare const DeepgramTTSEncoding: {
+   readonly linear16: "linear16";
+   readonly aac: "aac";
+   readonly opus: "opus";
+   readonly mp3: "mp3";
+   readonly flac: "flac";
+   readonly mulaw: "mulaw";
+   readonly alaw: "alaw";
+ };
+ /**
+  * Deepgram TTS audio container formats
+  *
+  * Values: `none`, `wav`, `ogg`
+  *
+  * @example
+  * ```typescript
+  * import { DeepgramTTSContainer } from 'voice-router-dev/constants'
+  *
+  * { container: DeepgramTTSContainer.wav }
+  * ```
+  */
+ declare const DeepgramTTSContainer: {
+   readonly none: "none";
+   readonly wav: "wav";
+   readonly ogg: "ogg";
+ };
+ /**
+  * Deepgram TTS sample rates (Hz)
+  *
+  * Values: `8000`, `16000`, `22050`, `24000`, `32000`, `48000`
+  *
+  * @example
+  * ```typescript
+  * import { DeepgramTTSSampleRate } from 'voice-router-dev/constants'
+  *
+  * { sampleRate: DeepgramTTSSampleRate.NUMBER_24000 }
+  * ```
+  */
+ declare const DeepgramTTSSampleRate: {
+   readonly NUMBER_8000: 8000;
+   readonly NUMBER_16000: 16000;
+   readonly NUMBER_22050: 22050;
+   readonly NUMBER_24000: 24000;
+   readonly NUMBER_32000: 32000;
+   readonly NUMBER_48000: 48000;
+   readonly null: null;
+ };
+ /** Deepgram TTS model type derived from const object */
+ type DeepgramTTSModelType = (typeof DeepgramTTSModel)[keyof typeof DeepgramTTSModel];
+ /** Deepgram TTS encoding type derived from const object */
+ type DeepgramTTSEncodingType = (typeof DeepgramTTSEncoding)[keyof typeof DeepgramTTSEncoding];
+ /** Deepgram TTS container type derived from const object */
+ type DeepgramTTSContainerType = (typeof DeepgramTTSContainer)[keyof typeof DeepgramTTSContainer];
+ /** Deepgram TTS sample rate type derived from const object */
+ type DeepgramTTSSampleRateType = (typeof DeepgramTTSSampleRate)[keyof typeof DeepgramTTSSampleRate];
+ /**
+  * OpenAI transcription models (Whisper and GPT-4o)
+  *
+  * Values: `whisper-1`, `gpt-4o-transcribe`, `gpt-4o-mini-transcribe`, `gpt-4o-transcribe-diarize`
+  *
+  * @example
+  * ```typescript
+  * import { OpenAIModel } from 'voice-router-dev/constants'
+  *
+  * { model: OpenAIModel["whisper-1"] }
+  * { model: OpenAIModel["gpt-4o-transcribe"] }
+  * ```
+  */
+ declare const OpenAIModel: {
+   readonly "whisper-1": "whisper-1";
+   readonly "gpt-4o-mini-transcribe": "gpt-4o-mini-transcribe";
+   readonly "gpt-4o-transcribe": "gpt-4o-transcribe";
+   readonly "gpt-4o-transcribe-diarize": "gpt-4o-transcribe-diarize";
+ };
+ /**
+  * OpenAI transcription response formats
+  *
+  * Values: `json`, `text`, `srt`, `verbose_json`, `vtt`, `diarized_json`
+  *
+  * Note: `diarized_json` is only available with the `gpt-4o-transcribe-diarize` model.
+  * GPT-4o transcribe models only support `json` format.
+  *
+  * @example
+  * ```typescript
+  * import { OpenAIResponseFormat } from 'voice-router-dev/constants'
+  *
+  * { responseFormat: OpenAIResponseFormat.verbose_json }
+  * { responseFormat: OpenAIResponseFormat.srt }
+  * ```
+  */
+ declare const OpenAIResponseFormat: {
+   readonly json: "json";
+   readonly text: "text";
+   readonly srt: "srt";
+   readonly verbose_json: "verbose_json";
+   readonly vtt: "vtt";
+   readonly diarized_json: "diarized_json";
+ };
+ /** OpenAI model type derived from const object */
+ type OpenAIModelType = (typeof OpenAIModel)[keyof typeof OpenAIModel];
+ /** OpenAI response format type derived from const object */
+ type OpenAIResponseFormatType = (typeof OpenAIResponseFormat)[keyof typeof OpenAIResponseFormat];
 
- export { AssemblyAIEncoding, type AssemblyAIEncodingType, AssemblyAISampleRate, type AssemblyAISampleRateType, AssemblyAISpeechModel, type AssemblyAISpeechModelType, AssemblyAIStatus, type AssemblyAIStatusType, AzureStatus, type AzureStatusType, DeepgramCallbackMethod, type DeepgramCallbackMethodType, DeepgramEncoding, type DeepgramEncodingType, DeepgramIntentMode, type DeepgramIntentModeType, DeepgramModel, type DeepgramModelType, DeepgramRedact, type DeepgramRedactType, DeepgramSampleRate, type DeepgramSampleRateType, DeepgramStatus, type DeepgramStatusType, DeepgramTopicMode, type DeepgramTopicModeType, GladiaBitDepth, type GladiaBitDepthType, GladiaEncoding, type GladiaEncodingType, GladiaLanguage, type GladiaLanguageType, GladiaModel, type GladiaModelType, GladiaSampleRate, type GladiaSampleRateType, GladiaStatus, type GladiaStatusType, GladiaTranslationLanguage, type GladiaTranslationLanguageType };
+ export { AssemblyAIEncoding, type AssemblyAIEncodingType, AssemblyAISampleRate, type AssemblyAISampleRateType, AssemblyAISpeechModel, type AssemblyAISpeechModelType, AssemblyAIStatus, type AssemblyAIStatusType, AzureStatus, type AzureStatusType, DeepgramCallbackMethod, type DeepgramCallbackMethodType, DeepgramEncoding, type DeepgramEncodingType, DeepgramIntentMode, type DeepgramIntentModeType, DeepgramModel, type DeepgramModelType, DeepgramRedact, type DeepgramRedactType, DeepgramRegion, type DeepgramRegionType, DeepgramSampleRate, type DeepgramSampleRateType, DeepgramStatus, type DeepgramStatusType, DeepgramTTSContainer, type DeepgramTTSContainerType, DeepgramTTSEncoding, type DeepgramTTSEncodingType, DeepgramTTSModel, type DeepgramTTSModelType, DeepgramTTSSampleRate, type DeepgramTTSSampleRateType, DeepgramTopicMode, type DeepgramTopicModeType, GladiaBitDepth, type GladiaBitDepthType, GladiaEncoding, type GladiaEncodingType, GladiaLanguage, type GladiaLanguageType, GladiaModel, type GladiaModelType, GladiaRegion, type GladiaRegionType, GladiaSampleRate, type GladiaSampleRateType, GladiaStatus, type GladiaStatusType, GladiaTranslationLanguage, type GladiaTranslationLanguageType, OpenAIModel, type OpenAIModelType, OpenAIResponseFormat, type OpenAIResponseFormatType, SpeechmaticsRegion, type SpeechmaticsRegionType };