whisperlivekit 0.1.0 (tar.gz)

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2023 ÚFAL
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
@@ -0,0 +1,198 @@
+ Metadata-Version: 2.2
+ Name: whisperlivekit
+ Version: 0.1.0
+ Summary: Real-time, Fully Local Whisper's Speech-to-Text and Speaker Diarization
+ Home-page: https://github.com/QuentinFuxa/WhisperLiveKit
+ Author: Quentin Fuxa
+ Classifier: Development Status :: 4 - Beta
+ Classifier: Intended Audience :: Developers
+ Classifier: License :: OSI Approved :: MIT License
+ Classifier: Programming Language :: Python :: 3.9
+ Classifier: Programming Language :: Python :: 3.10
+ Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
+ Classifier: Topic :: Multimedia :: Sound/Audio :: Speech
+ Requires-Python: >=3.9
+ Description-Content-Type: text/markdown
+ License-File: LICENSE
+ Requires-Dist: fastapi
+ Requires-Dist: ffmpeg-python
+ Requires-Dist: librosa
+ Requires-Dist: soundfile
+ Requires-Dist: faster-whisper
+ Requires-Dist: uvicorn
+ Requires-Dist: websockets
+ Provides-Extra: diarization
+ Requires-Dist: diart; extra == "diarization"
+ Provides-Extra: vac
+ Requires-Dist: torch; extra == "vac"
+ Provides-Extra: sentence
+ Requires-Dist: mosestokenizer; extra == "sentence"
+ Requires-Dist: wtpsplit; extra == "sentence"
+ Dynamic: author
+ Dynamic: classifier
+ Dynamic: description
+ Dynamic: description-content-type
+ Dynamic: home-page
+ Dynamic: provides-extra
+ Dynamic: requires-dist
+ Dynamic: requires-python
+ Dynamic: summary
+
+ <h1 align="center">WhisperLiveKit</h1>
+ <p align="center"><b>Real-time, Fully Local Whisper's Speech-to-Text and Speaker Diarization</b></p>
+
+
+ This project is based on [Whisper Streaming](https://github.com/ufal/whisper_streaming) and lets you transcribe audio directly from your browser. Simply launch the local server and grant microphone access. Everything runs locally on your machine ✨
+
+ <p align="center">
+ <img src="https://raw.githubusercontent.com/QuentinFuxa/WhisperLiveKit/demo.png" alt="Demo Screenshot" width="730">
+ </p>
+
+ ### Differences from [Whisper Streaming](https://github.com/ufal/whisper_streaming)
+
+ #### ⚙️ **Core Improvements**
+ - **Buffering Preview** – Displays unvalidated transcription segments
+ - **Multi-User Support** – Handles multiple users simultaneously by decoupling the backend and online ASR
+ - **MLX Whisper Backend** – Optimized for Apple Silicon for faster local processing
+ - **Confidence Validation** – Immediately validates high-confidence tokens for faster inference
+
+ #### 🎙️ **Speaker Identification**
+ - **Real-Time Diarization** – Identifies different speakers in real time using [Diart](https://github.com/juanmc2005/diart)
+
+ #### 🌐 **Web & API**
+ - **Built-in Web UI** – Simple raw HTML browser interface with no frontend setup required
+ - **FastAPI WebSocket Server** – Real-time speech-to-text processing with async FFmpeg streaming
+ - **JavaScript Client** – Ready-to-use MediaRecorder implementation for seamless client-side integration
+
+ ## Installation
+
+ ### Via pip
+
+ ```bash
+ pip install whisperlivekit
+ ```
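+
+ The optional features described further below are also packaged as pip extras (`diarization`, `vac`, `sentence`; see the `Provides-Extra` fields in the metadata above), so they can be pulled in at install time:
+
+ ```bash
+ # extras install the optional dependency groups declared by the package
+ pip install "whisperlivekit[diarization,vac,sentence]"
+ ```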
+
+ ### From source
+
+ 1. **Clone the Repository**:
+
+ ```bash
+ git clone https://github.com/QuentinFuxa/WhisperLiveKit
+ cd WhisperLiveKit
+ pip install -e .
+ ```
+
+ ### System Dependencies
+
+ You need to install FFmpeg on your system:
+
+ - Install system dependencies:
+ ```bash
+ # Install FFmpeg on your system (required for audio processing)
+ # For Ubuntu/Debian:
+ sudo apt install ffmpeg
+
+ # For macOS:
+ brew install ffmpeg
+
+ # For Windows:
+ # Download from https://ffmpeg.org/download.html and add to PATH
+ ```
+
+ - Install required Python dependencies:
+
+ ```bash
+ # Whisper streaming required dependencies
+ pip install librosa soundfile
+
+ # Whisper streaming web required dependencies
+ pip install fastapi ffmpeg-python
+ ```
+ - Install at least one Whisper backend among:
+
+ ```
+ whisper
+ whisper-timestamped
+ faster-whisper (faster backend on NVIDIA GPU)
+ mlx-whisper (faster backend on Apple Silicon)
+ ```
+ - Optional dependencies
+
+ ```
+ # If you want to use VAC (Voice Activity Controller). Useful for preventing hallucinations
+ torch
+
+ # If you choose sentences as the buffer trimming strategy
+ mosestokenizer
+ wtpsplit
+ tokenize_uk # If you work with Ukrainian text
+
+ # If you want to run the server using uvicorn (recommended)
+ uvicorn
+
+ # If you want to use diarization
+ diart
+ ```
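+
+ Of the backends listed above, `faster-whisper` is already installed as a default dependency of this package; the others are separate installs. For example, on Apple Silicon:
+
+ ```bash
+ # enables the MLX backend; mlx-whisper is a separate PyPI package
+ pip install mlx-whisper
+ ```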
+
+ By default, Diart uses [pyannote.audio](https://github.com/pyannote/pyannote-audio) models from the _Hugging Face Hub_. To use them, please follow the steps described [here](https://github.com/juanmc2005/diart?tab=readme-ov-file#get-access-to--pyannote-models).
+
+
+ 2. **Run the FastAPI Server**:
+
+ ```bash
+ python whisper_fastapi_online_server.py --host 0.0.0.0 --port 8000
+ ```
+
+ **Parameters**
+
+ The following parameters are supported (a combined example follows the list):
+
+ - `--host` and `--port` let you specify the server's IP/port.
+ - `--min-chunk-size` Minimum audio chunk size in seconds. The server waits up to this long before processing; if processing finishes sooner, it waits, otherwise it processes the whole segment received by then. Make sure this value aligns with the chunk size selected in the frontend; if not aligned, the system will still work but may unnecessarily over-process audio data.
+ - `--transcription`: Enable/disable transcription (default: True)
+ - `--diarization`: Enable/disable speaker diarization (default: False)
+ - `--confidence-validation`: Use confidence scores for faster validation. Transcription will be faster, but punctuation might be less accurate (default: True)
+ - `--warmup-file`: The path to a speech audio wav file used to warm up Whisper so that the very first chunk is processed quickly:
+ - If not set, uses https://github.com/ggerganov/whisper.cpp/raw/master/samples/jfk.wav.
+ - If False, no warmup is performed.
+ - `--model` {_tiny.en, tiny, base.en, base, small.en, small, medium.en, medium, large-v1, large-v2, large-v3, large, large-v3-turbo_}
+ Name/size of the Whisper model to use (default: tiny). The model is automatically downloaded from the model hub if not present in the model cache dir.
+ - `--model_cache_dir` Overrides the default model cache dir where models downloaded from the hub are saved.
+ - `--model_dir` Dir where the Whisper model.bin and other files are saved. This option overrides the `--model` and `--model_cache_dir` parameters.
+ - `--lan`, `--language` Source language code, e.g. en, de, cs, or 'auto' for language detection.
+ - `--task` {_transcribe, translate_} Transcribe or translate. If translate is set, we recommend avoiding the _large-v3-turbo_ backend, as it [performs significantly worse](https://github.com/QuentinFuxa/whisper_streaming_web/issues/40#issuecomment-2652816533) than other models for translation.
+ - `--backend` {_faster-whisper, whisper_timestamped, openai-api, mlx-whisper_} Load only this backend for Whisper processing.
+ - `--vac` Use VAC (voice activity controller). Requires torch.
+ - `--vac-chunk-size` VAC sample size in seconds.
+ - `--vad` Use VAD (voice activity detection) with the default parameters.
+ - `--buffer_trimming` {_sentence, segment_} Buffer trimming strategy: trim completed sentences marked with punctuation and detected by a sentence segmenter, or the completed segments returned by Whisper. A sentence segmenter must be installed for the "sentence" option.
+ - `--buffer_trimming_sec` Buffer trimming length threshold in seconds. If the buffer is longer, trimming of a sentence/segment is triggered.
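+
+ For instance, one plausible invocation combining several of the flags above, serving a small English model with diarization, the voice activity controller, and sentence-based trimming enabled:
+
+ ```bash
+ python whisper_fastapi_online_server.py --host 0.0.0.0 --port 8000 \
+     --model small.en --diarization --vac --buffer_trimming sentence
+ ```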
+
+ 3. **Open the Provided HTML**:
+
+ - By default, the server root endpoint `/` serves a simple `live_transcription.html` page.
+ - Open your browser at `http://localhost:8000` (or replace `localhost` and `8000` with whatever you specified).
+ - The page uses vanilla JavaScript and the WebSocket API to capture your microphone and stream audio to the server in real time.
+
+ ### How the Live Interface Works
+
+ - Once you **allow microphone access**, the page records small chunks of audio using the **MediaRecorder** API in **webm/opus** format.
+ - These chunks are sent over a **WebSocket** to the FastAPI endpoint at `/asr`.
+ - The Python server decodes the `.webm` chunks on the fly using **FFmpeg** and streams them into the **whisper streaming** implementation for transcription.
+ - **Partial transcription** appears as soon as enough audio is processed. The “unvalidated” text is shown in a **lighter or grey color** (a preview) to indicate it is still buffered partial output. Once Whisper finalizes that segment, it is displayed in normal text.
+ - You can watch the transcription update in near real time, ideal for demos, prototyping, or quick debugging.
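+
+ The client-side loop described above fits in a few lines. Here is a minimal sketch; the `transcript` element id and the plain-text reply format are illustrative assumptions, not taken from `live_transcription.html`:
+
+ ```javascript
+ // Stream microphone audio to the server and render whatever comes back.
+ const ws = new WebSocket("ws://localhost:8000/asr");
+
+ // Assumption: the server pushes transcription updates as text frames.
+ ws.onmessage = (event) => {
+   document.getElementById("transcript").textContent = event.data;
+ };
+
+ navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
+   const recorder = new MediaRecorder(stream, {
+     mimeType: "audio/webm;codecs=opus", // the format the server-side FFmpeg expects
+   });
+   recorder.ondataavailable = (event) => {
+     if (event.data.size > 0 && ws.readyState === WebSocket.OPEN) {
+       ws.send(event.data); // one compressed webm chunk per timeslice
+     }
+   };
+   recorder.start(1000); // 1s timeslice; keep this aligned with --min-chunk-size
+ });
+ ```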
+
+ ### Deploying to a Remote Server
+
+ If you want to **deploy** this setup:
+
+ 1. **Host the FastAPI app** behind a production-grade HTTP(S) server (e.g., **Uvicorn** behind **Nginx**, or in Docker). If you use HTTPS, use "wss" instead of "ws" in the WebSocket URL.
+ 2. The **HTML/JS page** can be served by the same FastAPI app or a separate static host.
+ 3. Users open the page in **Chrome/Firefox** (any modern browser that supports MediaRecorder + WebSocket).
+
+ No additional front-end libraries or frameworks are required. The WebSocket logic in `live_transcription.html` is minimal enough to adapt for your own custom UI or embed in other pages.
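+
+ A hedged sketch of step 1, running under uvicorn directly. It assumes the FastAPI instance inside `whisper_fastapi_online_server.py` is exposed as a module-level `app` (the variable name is an assumption):
+
+ ```bash
+ # The module:attribute path assumes the FastAPI object is named `app`.
+ uvicorn whisper_fastapi_online_server:app --host 0.0.0.0 --port 8000
+ ```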
+
+ ## Acknowledgments
+
+ This project builds upon the foundational work of the Whisper Streaming project. We extend our gratitude to the original authors for their contributions.
@@ -0,0 +1,158 @@
+ (README.md: identical to the README embedded in the PKG-INFO description above.)
@@ -0,0 +1,4 @@
+ [egg_info]
+ tag_build =
+ tag_date = 0
+
@@ -0,0 +1,44 @@
+ from setuptools import setup, find_packages
+
+ setup(
+     name="whisperlivekit",
+     version="0.1.0",
+     description="Real-time, Fully Local Whisper's Speech-to-Text and Speaker Diarization",
+     long_description=open("README.md", "r", encoding="utf-8").read(),
+     long_description_content_type="text/markdown",
+     author="Quentin Fuxa",
+     url="https://github.com/QuentinFuxa/WhisperLiveKit",
+     packages=find_packages(),
+     install_requires=[
+         "fastapi",
+         "ffmpeg-python",
+         "librosa",
+         "soundfile",
+         "faster-whisper",
+         "uvicorn",
+         "websockets",
+     ],
+     extras_require={
+         "diarization": ["diart"],
+         "vac": ["torch"],
+         "sentence": ["mosestokenizer", "wtpsplit"],
+     },
+     package_data={
+         'whisperlivekit': ['web/*.html'],
+     },
+     entry_points={
+         'console_scripts': [
+             'whisperlivekit-server=whisperlivekit.server:run_server',
+         ],
+     },
+     classifiers=[
+         "Development Status :: 4 - Beta",
+         "Intended Audience :: Developers",
+         "License :: OSI Approved :: MIT License",
+         "Programming Language :: Python :: 3.9",
+         "Programming Language :: Python :: 3.10",
+         "Topic :: Scientific/Engineering :: Artificial Intelligence",
+         "Topic :: Multimedia :: Sound/Audio :: Speech",
+     ],
+     python_requires=">=3.9",
+ )
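The `console_scripts` entry point above means installing the package also places a `whisperlivekit-server` command on the PATH, wired to `whisperlivekit.server:run_server`. A hedged usage example; whether `run_server` accepts the same flags as the standalone script is an assumption:

```bash
# Installed by the console_scripts entry point; flag support is an assumption.
whisperlivekit-server --host 0.0.0.0 --port 8000
```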
@@ -0,0 +1,4 @@
+ from .core import WhisperLiveKit, parse_args
+ from .audio_processor import AudioProcessor
+
+ __all__ = ['WhisperLiveKit', 'AudioProcessor', 'parse_args']
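The package surface is therefore `WhisperLiveKit`, `AudioProcessor`, and `parse_args`. A speculative sketch of how the exports fit together, based on the multi-user design described in the README (one shared backend, one processor per connection); the call signatures below are assumptions, not a documented API:

```python
# Speculative sketch: call signatures are assumptions, not a documented API.
from whisperlivekit import WhisperLiveKit, AudioProcessor, parse_args

args = parse_args()           # presumably mirrors the server script's CLI flags
kit = WhisperLiveKit(args)    # shared backend: models and config, loaded once
processor = AudioProcessor()  # per-connection state: FFmpeg decode + online ASR buffer
```

Keeping the heavyweight model in one object while each WebSocket connection gets its own lightweight processor is what enables the multi-user support highlighted in the README.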