sinapsis-speech 0.1.0__py3-none-any.whl → 0.2.0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: sinapsis-speech
- Version: 0.1.0
+ Version: 0.2.0
  Summary: Generate speech using various libraries.
  Author-email: SinapsisAI <dev@sinapsis-ai.com>
  License: GNU AFFERO GENERAL PUBLIC LICENSE
@@ -666,25 +666,20 @@ License: GNU AFFERO GENERAL PUBLIC LICENSE
  <https://www.gnu.org/licenses/>.

  Project-URL: Homepage, https://sinapsis.tech
- Project-URL: Documentation, https://docs.sinapsis.tech/docs
+ Project-URL: Documentation, https://docs.sinapsis.tech/docs/sinapsis-speech
  Project-URL: Tutorials, https://docs.sinapsis.tech/tutorials
  Project-URL: Repository, https://github.com/Sinapsis-AI/sinapsis-speech.git
  Requires-Python: >=3.10
  Description-Content-Type: text/markdown
- License-File: LICENSE
  Requires-Dist: pip>=24.3.1
- Requires-Dist: sinapsis>=0.1.1
- Provides-Extra: elevenlabs-app
- Requires-Dist: sinapsis-elevenlabs; extra == "elevenlabs-app"
- Requires-Dist: sinapsis-speech[gradio-app]; extra == "elevenlabs-app"
- Provides-Extra: gradio-app
- Requires-Dist: gradio>=5.14.0; extra == "gradio-app"
- Requires-Dist: sinapsis-data-readers>=0.1.0; extra == "gradio-app"
+ Requires-Dist: sinapsis>=0.2.2
  Provides-Extra: all
- Requires-Dist: sinapsis-elevenlabs; extra == "all"
- Requires-Dist: sinapsis-speech[gradio-app]; extra == "all"
- Requires-Dist: sinapsis-speech[elevenlabs-app]; extra == "all"
- Dynamic: license-file
+ Requires-Dist: sinapsis-elevenlabs[all]; extra == "all"
+ Requires-Dist: sinapsis-f5-tts[all]; extra == "all"
+ Requires-Dist: sinapsis-speech[webapp]; extra == "all"
+ Requires-Dist: sinapsis-zonos[all]; extra == "all"
+ Provides-Extra: gradio-app
+ Requires-Dist: sinapsis[webapp]>=0.2.3; extra == "gradio-app"

  <h1 align="center">
  <br>
@@ -702,7 +697,7 @@ Sinapsis Speech
  <p align="center">
  <a href="#installation">🐍 Installation</a> •
  <a href="#packages">📦 Packages</a> •
- <a href="#webapp">🌐 Webapp</a> •
+ <a href="#webapp">🌐 Webapps</a> •
  <a href="#documentation">📙 Documentation</a> •
  <a href="#packages">🔍 License</a>
  </p>
@@ -715,47 +710,93 @@ Sinapsis Speech
  > Sinapsis projects require Python 3.10 or higher.
  >

- We strongly encourage the use of <code>uv</code>, although any other package manager should work too.
- If you need to install <code>uv</code> please see the [official documentation](https://docs.astral.sh/uv/getting-started/installation/#installation-methods).
+ This repo includes packages for performing speech synthesis using different tools:

+ * <code>sinapsis-elevenlabs</code>
+ * <code>sinapsis-f5-tts</code>
+ * <code>sinapsis-zonos</code>

- 1. Install using your favourite package manager.
+ Install using your preferred package manager. We strongly recommend using <code>uv</code>. To install <code>uv</code>, refer to the [official documentation](https://docs.astral.sh/uv/getting-started/installation/#installation-methods).

- Example with <code>uv</code>:
+
+ Install with <code>uv</code>:
  ```bash
- uv pip install sinapsis-elevenlabs --extra-index-url https://pypi.sinapsis.tech
+ uv pip install sinapsis-elevenlabs --extra-index-url https://pypi.sinapsis.tech
  ```
- or with raw <code>pip</code>:
+ Or with raw <code>pip</code>:
  ```bash
- pip install sinapsis-elevenlabs --extra-index-url https://pypi.sinapsis.tech
+ pip install sinapsis-elevenlabs --extra-index-url https://pypi.sinapsis.tech
  ```
- **Change the name of the package for the one you want to install**.
+
+ **Replace `sinapsis-elevenlabs` with the name of the package you intend to install**.
+
+ > [!IMPORTANT]
+ > Templates in each package may require additional dependencies. For development, we recommend installing the package with all optional dependencies:
+ >
+ With <code>uv</code>:
+
+ ```bash
+ uv pip install sinapsis-elevenlabs[all] --extra-index-url https://pypi.sinapsis.tech
+ ```
+ Or with raw <code>pip</code>:
+ ```bash
+ pip install sinapsis-elevenlabs[all] --extra-index-url https://pypi.sinapsis.tech
+ ```
+
+ **Be sure to substitute `sinapsis-elevenlabs` with the appropriate package name**.
+
+

  > [!TIP]
  > You can also install all the packages within this project:
  >
  ```bash
- uv pip install sinapsis-speech[all] --extra-index-url https://pypi.sinapsis.tech
+ uv pip install sinapsis-speech[all] --extra-index-url https://pypi.sinapsis.tech
  ```


  <h2 id="packages">📦 Packages</h2>

- Each package can be used independently or combined to create more complex workflows. Below is an overview of the available packages:
+ This repository is organized into modular packages, each designed for integration with different text-to-speech tools. These packages provide ready-to-use templates for speech synthesis. Below is an overview of the available packages:

  <details>
- <summary id="elevenlabs"><strong><span style="font-size: 1.4em;"> Elevenlabs </span></strong></summary>
+ <summary id="elevenlabs"><strong><span style="font-size: 1.4em;"> Sinapsis ElevenLabs </span></strong></summary>
+
+ This package offers a suite of templates and utilities designed for effortless integration, configuration, and execution of **text-to-speech (TTS)** and **voice generation** functionalities powered by [ElevenLabs](https://elevenlabs.io/).

- This package provides a suite of templates and utilities for seamlessly integrating, configuring, and running **text-to-speech (TTS)** and **voice generation** functionalities powered by [ElevenLabs](https://elevenlabs.io/):
+ - **ElevenLabsTTS**: Template for converting text into speech using ElevenLabs' voice models.

- - **Text-to-speech**: Template for converting text into speech using ElevenLabs' voice models.
+ - **ElevenLabsVoiceGeneration**: Template for generating custom synthetic voices based on user-provided descriptions.

- - **Voice generation**: Template for generating custom synthetic voices based on user-provided descriptions.
+ For specific instructions and further details, see the [README.md](https://github.com/Sinapsis-AI/sinapsis-speech/blob/main/packages/sinapsis_elevenlabs/README.md).

  </details>
- <h2 id="webapps">🌐 Webapps</h2>
- The webapps included in this project showcase the modularity of the templates, in this case
- for speech generation tasks.
+
+
+ <details>
+ <summary id="f5tts"><strong><span style="font-size: 1.4em;"> Sinapsis F5-TTS</span></strong></summary>
+
+ This package provides a template for seamlessly integrating, configuring, and running **text-to-speech (TTS)** functionalities powered by [F5TTS](https://github.com/SWivid/F5-TTS).
+
+ - **F5TTSInference**: Converts text to speech using the F5TTS model with voice cloning capabilities.
+
+ For specific instructions and further details, see the [README.md](https://github.com/Sinapsis-AI/sinapsis-speech/blob/main/packages/sinapsis_f5_tts/README.md).
+
+ </details>
+
+ <details>
+ <summary id="zonos"><strong><span style="font-size: 1.4em;"> Sinapsis Zonos</span></strong></summary>
+
+ This package provides a single template for integrating, configuring, and running **text-to-speech (TTS)** and **voice cloning** functionalities powered by [Zonos](https://github.com/Zyphra/Zonos/tree/main).
+
+ - **ZonosTTS**: Template for converting text to speech or performing voice cloning based on the presence of an audio sample.
+
+ For specific instructions and further details, see the [README.md](https://github.com/Sinapsis-AI/sinapsis-speech/blob/main/packages/sinapsis_zonos/README.md).
+
+ </details>
+
+ <h2 id="webapp">🌐 Webapps</h2>
+ The webapps included in this project showcase the modularity of the templates, in this case for speech generation tasks.

  > [!IMPORTANT]
  > To run the app you first need to clone this repository:
@@ -768,89 +809,102 @@ cd sinapsis-speech
  > [!NOTE]
  > If you'd like to enable external app sharing in Gradio, `export GRADIO_SHARE_APP=True`

- > [!IMPORTANT]
- > The CosyVoice model requires at least 4GB of ram to work.

  > [!IMPORTANT]
- > Elevenlabs requires an api key to run any inference. Please go to the [official website](https://elevenlabs.io), create an account.
- If you already have an account, go to the [token page](https://elevenlabs.io/app/settings/api-keys) and generate a token.
+ > Elevenlabs requires an API key to run any inference. To get started, visit the [official website](https://elevenlabs.io) and create an account. If you already have an account, go to the [API keys page](https://elevenlabs.io/app/settings/api-keys) to generate a token.

  > [!IMPORTANT]
- > set your env var using <code> export ELEVENLABS_API_KEY='your-api-key'</code>
+ > Set your env var using <code>export ELEVENLABS_API_KEY='your-api-key'</code>

+ > [!IMPORTANT]
+ > F5-TTS requires a reference audio file for voice cloning. Make sure you have a reference audio file in the artifacts directory.

- > [!TIP]
- > The agent configuration can be updated using the AGENT_CONFIG_PATH environment var.
+ > [!NOTE]
+ > Agent configuration can be changed through the `AGENT_CONFIG_PATH` env var. You can check the available configurations in each package's configs folder.


  <details>
- <summary id="docker"><strong><span style="font-size: 1.4em;">🐳 Build with Docker</span></strong></summary>
+ <summary id="docker"><strong><span style="font-size: 1.4em;">🐳 Docker</span></strong></summary>

- **IMPORTANT** This docker image depends on the sinapsis-nvidia:base image. Please refer to the official [sinapsis](https://github.com/Sinapsis-ai/sinapsis?tab=readme-ov-file#docker) instructions to Build with Docker.
+ **IMPORTANT**: This Docker image depends on the `sinapsis-nvidia:base` image. For detailed instructions, please refer to the [Sinapsis README](https://github.com/Sinapsis-ai/sinapsis?tab=readme-ov-file#docker).

- 1. **Build the Docker image**:
+ 1. **Build the sinapsis-speech image**:
  ```bash
  docker compose -f docker/compose.yaml build
  ```

-
- 2. **Launch the service**:
+ 2. **Start the app container**:
+ For ElevenLabs:
  ```bash
  docker compose -f docker/compose_apps.yaml up -d sinapsis-elevenlabs
  ```
+ For F5-TTS:
+ ```bash
+ docker compose -f docker/compose_apps.yaml up -d sinapsis-f5_tts
+ ```
+ For Zonos:
+ ```bash
+ docker compose -f docker/compose_apps.yaml up -d sinapsis-zonos
+ ```

-
- 2. **Check the logs**
+ 3. **Check the logs**
+ For ElevenLabs:
  ```bash
  docker logs -f sinapsis-elevenlabs
  ```
- 3. **The logs will display the URL to access the webapp, e.g.,:**:
+ For F5-TTS:
+ ```bash
+ docker logs -f sinapsis-f5tts
+ ```
+ For Zonos:
+ ```bash
+ docker logs -f sinapsis-zonos
+ ```
+ 4. **The logs will display the URL to access the webapp, e.g.:**
  ```bash
  Running on local URL: http://127.0.0.1:7860
  ```
- 4. To stop the app:
+ **NOTE**: The URL may differ; check the log output.
+ 5. **To stop the app**:
  ```bash
- docker compose -f docker/compose_apps.yaml down sinapsis-elevenlabs
+ docker compose -f docker/compose_apps.yaml down
  ```
  </details>

  <details>
  <summary id="virtual-environment"><strong><span style="font-size: 1.4em;">💻 UV</span></strong></summary>

+ To run the webapp using the <code>uv</code> package manager, follow these steps:

  1. **Sync the virtual environment**:

  ```bash
  uv sync --frozen
  ```
- 2. Install the wheel:
+ 2. **Install the wheel**:

  ```bash
  uv pip install sinapsis-speech[all] --extra-index-url https://pypi.sinapsis.tech
  ```

-
- 3. **Activate the virtual environment**:
-
+ 3. **Run the webapp**:
+ For ElevenLabs:
  ```bash
- source .venv/bin/activate
+ uv run webapps/elevenlabs/elevenlabs_tts_app.py
  ```
- 4. **Declare PYTHONPATH**
+ For F5-TTS:
  ```bash
- export PYTHONPATH=$PWD/webapps
+ uv run webapps/f5-tts/f5_tts_app.py
  ```
- **NOTE** if not located in <code>sinapsis-speech</code> folder, change $PWD for the actual path to <code>sinapsis-speech</code>
-
- 5. **Launch the demo**:
-
+ For Zonos:
  ```bash
- python webapps/elevenlabs/elevenlabs_tts_app.py
+ uv run webapps/zonos/zonos_tts_app.py
  ```
- 6. Open the displayed URL, e.g.:
+ 4. **The terminal will display the URL to access the webapp, e.g.:**
  ```bash
  Running on local URL: http://127.0.0.1:7860
  ```
- **NOTE**: The URL can be different, please make sure you check the logs.
+ **NOTE**: The URL may vary; check the terminal output for the correct address.

  </details>

@@ -0,0 +1,21 @@
+ sinapsis_elevenlabs/src/sinapsis_elevenlabs/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+ sinapsis_elevenlabs/src/sinapsis_elevenlabs/helpers/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+ sinapsis_elevenlabs/src/sinapsis_elevenlabs/helpers/env_var_keys.py,sha256=j8J64iplBNaff1WvmfJ03eJozE1f5SdqtqQeldV2vPY,998
+ sinapsis_elevenlabs/src/sinapsis_elevenlabs/helpers/voice_utils.py,sha256=fR1r1aaoFy_rQGfJLunUNdZfVxDyAo7shevS4TAXH_M,2420
+ sinapsis_elevenlabs/src/sinapsis_elevenlabs/templates/__init__.py,sha256=pyTWPBLN_P6sxFTF1QqfL7iTZd9E0EaggpfwB0qLLHI,579
+ sinapsis_elevenlabs/src/sinapsis_elevenlabs/templates/elevenlabs_base.py,sha256=MQglkwvyOVk4krXTXoMSPZ4yCeDBq9vMpI3riz87aIg,8291
+ sinapsis_elevenlabs/src/sinapsis_elevenlabs/templates/elevenlabs_tts.py,sha256=WVTROfB2ODAksHmWwV5RKcub3Hoc29OM_eAw75c9yio,2847
+ sinapsis_elevenlabs/src/sinapsis_elevenlabs/templates/elevenlabs_voice_generation.py,sha256=bKo7zhfsiZwsn-qZx_MCVAIx_MmaKnaP3lc-07AwAaY,2819
+ sinapsis_f5_tts/src/sinapsis_f5_tts/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+ sinapsis_f5_tts/src/sinapsis_f5_tts/templates/__init__.py,sha256=28BOPAr9GG1jYcrXi45ZWO1n2FAZJOdDcmRkOXdEYmk,496
+ sinapsis_f5_tts/src/sinapsis_f5_tts/templates/f5_tts_inference.py,sha256=7EBxw-tRthbPDz0zFopaLdBhv7DXwxyMGXam6F1MwGs,15802
+ sinapsis_zonos/src/sinapsis_zonos/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+ sinapsis_zonos/src/sinapsis_zonos/helpers/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+ sinapsis_zonos/src/sinapsis_zonos/helpers/zonos_keys.py,sha256=m1GdOYfzP73JGmtxH30mNiqbNkzFsQl9o2QaT7QxSVU,2470
+ sinapsis_zonos/src/sinapsis_zonos/helpers/zonos_tts_utils.py,sha256=8Tr2YgxjBfRqv_Hf6sw36X2pLzW7fdQWqa6QPBxNZK8,6419
+ sinapsis_zonos/src/sinapsis_zonos/templates/__init__.py,sha256=A-_F0K3hbEFqeWWAh4YftgU9CFX-WHrauSiCAww9yp8,482
+ sinapsis_zonos/src/sinapsis_zonos/templates/zonos_tts.py,sha256=KsNuT8cFTTjTEqjfEWsIr4B-DjGhVacSw2SdPckuFvk,7507
+ sinapsis_speech-0.2.0.dist-info/METADATA,sha256=-qJhZCqgMvFKr7iZBbv6lIleFa2DCTb0wXp1B2dKs18,48741
+ sinapsis_speech-0.2.0.dist-info/WHEEL,sha256=CmyFI0kx5cdEMTLiONQRbGQwjIoR1aIYB7eCAQ4KPJ0,91
+ sinapsis_speech-0.2.0.dist-info/top_level.txt,sha256=vQFjL84TMSRld2lKvEVMUNyY2b3AVluCT1Ijws7o7_c,51
+ sinapsis_speech-0.2.0.dist-info/RECORD,,
@@ -1,5 +1,5 @@
  Wheel-Version: 1.0
- Generator: setuptools (77.0.1)
+ Generator: setuptools (78.1.0)
  Root-Is-Purelib: true
  Tag: py3-none-any

@@ -0,0 +1,3 @@
+ sinapsis_elevenlabs
+ sinapsis_f5_tts
+ sinapsis_zonos
File without changes
File without changes
@@ -0,0 +1,67 @@
+ # -*- coding: utf-8 -*-
+ from typing import Literal
+
+ from pydantic import BaseModel
+ from pydantic.dataclasses import dataclass
+
+
+ @dataclass(frozen=True)
+ class TTSKeys:
+     """
+     A class to hold constants for the keys used in the Text-to-Speech (TTS) model configuration.
+
+     These keys represent standard fields that are used to configure various parameters of the TTS model,
+     such as speaker attributes, emotions, and other audio-related settings. They are typically used in
+     templates and potentially a TTS web application to adjust and access specific TTS settings.
+     """
+
+     speaker: Literal["speaker"] = "speaker"
+     emotion: Literal["emotion"] = "emotion"
+     vqscore_8: Literal["vqscore_8"] = "vqscore_8"
+     fmax: Literal["fmax"] = "fmax"
+     pitch_std: Literal["pitch_std"] = "pitch_std"
+     speaking_rate: Literal["speaking_rate"] = "speaking_rate"
+     dnsmos_ovrl: Literal["dnsmos_ovrl"] = "dnsmos_ovrl"
+     speaker_noised: Literal["speaker_noised"] = "speaker_noised"
+     wav: Literal["wav"] = "wav"
+     en_language: Literal["en-us"] = "en-us"
+     min_p: Literal["min_p"] = "min_p"
+
+
+ class SamplingParams(BaseModel):
+     """
+     A class to hold the sampling parameters for the TTS model.
+
+     Attributes:
+         min_p (float): Minimum token probability, scaled by the highest token probability. Range: 0-1. Default: 0.0.
+         top_k (int): Number of top tokens to sample from. Range: 0-1024. Default: 0.
+         top_p (float): Cumulative probability threshold for nucleus sampling. Range: 0-1. Default: 0.0.
+         linear (float): Controls the token unusualness. Range: -2.0 to 2.0. Default: 0.0.
+         conf (float): Confidence level for randomness. Range: -2.0 to 2.0. Default: 0.0.
+         quad (float): Controls how much low probabilities are reduced. Range: -2.0 to 2.0. Default: 0.0.
+     """
+
+     min_p: float = 0.0
+     top_k: int = 0
+     top_p: float = 0.0
+     linear: float = 0.0
+     conf: float = 0.0
+     quad: float = 0.0
+
+
+ class EmotionsConfig(BaseModel):
+     """
+     A class to hold emotional attributes that influence the tone of the generated speech.
+
+     These emotions are represented as float values and are used to adjust the emotional tone of the speech.
+     Higher values can represent a stronger presence of a particular emotion.
+     """
+
+     happiness: float = 0
+     sadness: float = 0
+     disgust: float = 0
+     fear: float = 0
+     surprise: float = 0
+     anger: float = 0
+     other: float = 0
+     neutral: float = 0
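The config classes above follow a common pattern: frozen constants pin down the string keys, while a config object with defaults serializes into the kwargs dict handed to the generation backend. A minimal stdlib sketch of the same idea (the real module uses pydantic; `Keys`, `Sampling`, and `to_kwargs` here are illustrative names, not the package's API):

```python
from dataclasses import asdict, dataclass


@dataclass(frozen=True)
class Keys:
    """Frozen constants pin down the string keys used downstream."""

    min_p: str = "min_p"
    top_k: str = "top_k"


@dataclass
class Sampling:
    """Config object whose fields mirror the sampler's keyword arguments."""

    min_p: float = 0.0
    top_k: int = 0


def to_kwargs(cfg: Sampling) -> dict:
    # Drop zero-valued fields so the backend's own defaults apply,
    # analogous to serializing only the fields the user actually set.
    return {k: v for k, v in asdict(cfg).items() if v}


print(to_kwargs(Sampling(min_p=0.1)))  # {'min_p': 0.1}
```

Keeping the keys as frozen constants means a typo in a key name fails loudly at definition time instead of silently configuring the wrong parameter.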
@@ -0,0 +1,153 @@
+ # -*- coding: utf-8 -*-
+ from typing import Set
+
+ import torch
+ import torchaudio
+ from sinapsis_core.template_base.template import TemplateAttributeType
+ from sinapsis_core.utils.logging_utils import sinapsis_logger
+ from zonos.conditioning import make_cond_dict, supported_language_codes
+ from zonos.model import Zonos
+
+ from sinapsis_zonos.helpers.zonos_keys import SamplingParams, TTSKeys
+
+
+ def get_audio_prefix_codes(prefix_path: str | None, model: Zonos) -> torch.Tensor | None:
+     """Generates audio prefix codes from an audio file.
+
+     Args:
+         prefix_path (str | None): Path to the audio file to generate the prefix codes from.
+         model (Zonos): The Zonos model used to generate the audio prefix codes.
+
+     Returns:
+         torch.Tensor | None: The generated audio prefix codes if available, otherwise None.
+     """
+     if prefix_path:
+         waveform, sample_rate = torchaudio.load(prefix_path)
+         waveform = waveform.mean(0, keepdim=True)
+         waveform = model.autoencoder.preprocess(waveform, sample_rate)
+         return model.autoencoder.encode(waveform.unsqueeze(0))
+     return None
+
+
+ def get_conditioning(
+     attributes: TemplateAttributeType, model: Zonos, input_text: str, device: torch.device
+ ) -> torch.Tensor:
+     """
+     Generates conditioning tensor for the input text, combining it with speaker embeddings and emotions.
+
+     Args:
+         attributes (TemplateAttributeType): Attributes with configuration for the conditioning dictionary
+             of the model.
+         model (Zonos): Model to be used during inference, where the setup is modified.
+         input_text (str): The text to be converted to speech.
+         device (torch.device): Device where the model should be loaded.
+
+     Returns:
+         torch.Tensor: The generated conditioning tensor for speech synthesis.
+     """
+     speaker_embedding = get_speaker_embedding(attributes.speaker_audio, attributes.unconditional_keys, model, device)
+     emotion_data = get_emotion_tensor(attributes, device)
+     validate_language(attributes)
+
+     vq_data = torch.tensor([attributes.vq_score] * 8, device=device).unsqueeze(0)
+
+     conditioning_dict = make_cond_dict(
+         text=input_text,
+         language=attributes.language,
+         speaker=speaker_embedding,
+         emotion=emotion_data,
+         vqscore_8=vq_data,
+         fmax=attributes.fmax,
+         pitch_std=attributes.pitch_std,
+         speaking_rate=attributes.speaking_rate,
+         dnsmos_ovrl=attributes.dnsmos,
+         speaker_noised=attributes.denoised_speaker,
+         device=device,
+         unconditional_keys=attributes.unconditional_keys,
+     )
+     return model.prepare_conditioning(conditioning_dict)
+
+
+ def get_emotion_tensor(attributes: TemplateAttributeType, device: torch.device) -> torch.Tensor:
+     """
+     Extracts or constructs an emotion tensor from the given attributes.
+
+     If `attributes.emotions` is present, its values are serialized and converted into a tensor.
+     If not, a default zero tensor of shape (8,) is returned, and the `emotion` key is
+     added to `attributes.unconditional_keys` (if not already included) to indicate unconditional conditioning.
+
+     Args:
+         attributes (TemplateAttributeType): Attributes for Zonos TTS model configuration.
+         device (torch.device): The device on which the tensor should be created.
+
+     Returns:
+         torch.Tensor: A tensor representing emotion values, either user-provided or default.
+     """
+     if attributes.emotions:
+         emotion_values = list(map(float, attributes.emotions.model_dump().values()))
+         return torch.tensor(emotion_values, device=device)
+     if TTSKeys.emotion not in attributes.unconditional_keys:
+         attributes.unconditional_keys.add(TTSKeys.emotion)
+     return torch.tensor([0.0] * 8, device=device)
+
+
+ def get_sampling_params(sampling_params: SamplingParams | dict) -> dict:
+     """
+     Returns a dictionary of sampling parameters for audio generation.
+
+     If `sampling_params` is a Pydantic model, its non-null fields are serialized using `model_dump()`.
+     Otherwise, a default dictionary with a minimum probability value is returned.
+
+     Args:
+         sampling_params (SamplingParams | dict): A SamplingParams Pydantic model or dictionary.
+
+     Returns:
+         dict: A dictionary of sampling parameters, either user-defined or with a default fallback.
+     """
+     if isinstance(sampling_params, SamplingParams):
+         return sampling_params.model_dump(exclude_none=True)
+     return {TTSKeys.min_p: 0.1}
+
+
+ def get_speaker_embedding(
+     speaker_path: str | None, unconditional_keys: Set[str], model: Zonos, device: torch.device
+ ) -> torch.Tensor | None:
+     """Extracts speaker embedding from an audio file.
+
+     Args:
+         speaker_path (str | None): Path to the audio file from which the speaker embedding will be extracted.
+         unconditional_keys (Set[str]): Set of keys to condition speech synthesis.
+             This is used to determine whether a speaker embedding is needed.
+         model (Zonos): The Zonos model used for generating the speaker embedding.
+         device (torch.device): Device on which to place the speaker embedding.
+
+     Returns:
+         torch.Tensor | None: The speaker embedding if available, otherwise None.
+     """
+     if speaker_path and TTSKeys.speaker not in unconditional_keys:
+         waveform, sample_rate = torchaudio.load(speaker_path)
+         speaker_embedding = model.make_speaker_embedding(waveform, sample_rate)
+         return speaker_embedding.to(device, dtype=torch.bfloat16)
+     return None
+
+
+ def init_seed(attributes: TemplateAttributeType) -> None:
+     """Initializes the seed for reproducible results."""
+     if attributes.randomized_seed:
+         attributes.seed = torch.randint(0, 2**32 - 1, (1,)).item()
+     torch.manual_seed(attributes.seed)
+
+
+ def validate_language(attributes: TemplateAttributeType) -> None:
+     """
+     Validates and updates the language attribute in the provided TTS configuration.
+
+     Checks if `attributes.language` is included in the list of supported language codes.
+     If the language is unsupported, logs an error and defaults it to `TTSKeys.en_language`.
+
+     Args:
+         attributes (TemplateAttributeType): The model attributes containing the language setting.
+     """
+     if attributes.language not in supported_language_codes:
+         sinapsis_logger.error(f"Language {attributes.language} not supported. Defaulting to {TTSKeys.en_language}")
+         attributes.language = TTSKeys.en_language
@@ -0,0 +1,20 @@
+ # -*- coding: utf-8 -*-
+ import importlib
+ from typing import Callable
+
+ _root_lib_path = "sinapsis_zonos.templates"
+
+ _template_lookup = {
+     "ZonosTTS": f"{_root_lib_path}.zonos_tts",
+ }
+
+
+ def __getattr__(name: str) -> Callable:
+     if name in _template_lookup:
+         module = importlib.import_module(_template_lookup[name])
+         return getattr(module, name)
+
+     raise AttributeError(f"template `{name}` not found in {_root_lib_path}")
+
+
+ __all__ = list(_template_lookup.keys())