vid2cc-ai 0.1.0 (vid2cc_ai-0.1.0.tar.gz)
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- vid2cc_ai-0.1.0/LICENSE +21 -0
- vid2cc_ai-0.1.0/PKG-INFO +126 -0
- vid2cc_ai-0.1.0/README.md +114 -0
- vid2cc_ai-0.1.0/pyproject.toml +18 -0
- vid2cc_ai-0.1.0/setup.cfg +4 -0
- vid2cc_ai-0.1.0/src/vid2cc_ai/__init__.py +6 -0
- vid2cc_ai-0.1.0/src/vid2cc_ai/audio.py +38 -0
- vid2cc_ai-0.1.0/src/vid2cc_ai/cli.py +47 -0
- vid2cc_ai-0.1.0/src/vid2cc_ai/formatter.py +13 -0
- vid2cc_ai-0.1.0/src/vid2cc_ai/tests/__init__.py +0 -0
- vid2cc_ai-0.1.0/src/vid2cc_ai/tests/test_formatter.py +19 -0
- vid2cc_ai-0.1.0/src/vid2cc_ai/transcriber.py +10 -0
- vid2cc_ai-0.1.0/src/vid2cc_ai.egg-info/PKG-INFO +126 -0
- vid2cc_ai-0.1.0/src/vid2cc_ai.egg-info/SOURCES.txt +16 -0
- vid2cc_ai-0.1.0/src/vid2cc_ai.egg-info/dependency_links.txt +1 -0
- vid2cc_ai-0.1.0/src/vid2cc_ai.egg-info/entry_points.txt +2 -0
- vid2cc_ai-0.1.0/src/vid2cc_ai.egg-info/requires.txt +3 -0
- vid2cc_ai-0.1.0/src/vid2cc_ai.egg-info/top_level.txt +1 -0
vid2cc_ai-0.1.0/LICENSE
ADDED
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2026 Dilshan

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
vid2cc_ai-0.1.0/PKG-INFO
ADDED
@@ -0,0 +1,126 @@
Metadata-Version: 2.4
Name: vid2cc-ai
Version: 0.1.0
Summary: AI-powered subtitle generator using Whisper and FFmpeg
Requires-Python: >=3.9
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: openai-whisper
Requires-Dist: ffmpeg-python
Requires-Dist: torch
Dynamic: license-file

# vid2cc-AI 🎙️🎬

[PyPI](https://pypi.org/project/vid2cc-ai/)
[License: MIT](https://opensource.org/licenses/MIT)
[Python 3.9+](https://www.python.org/downloads/)
[Code style: black](https://github.com/psf/black)

**vid2cc-AI** is a high-performance CLI tool designed to bridge the gap between raw video and accessible content. By leveraging OpenAI's Whisper models and FFmpeg's robust media handling, it automates the creation of perfectly synced `.srt` subtitles.

---

## Key Features

- **AI-Driven Transcription:** Powered by OpenAI Whisper for industry-leading accuracy.
- **Hardware Acceleration:** Automatic CUDA detection for GPU-accelerated processing.
- **Intelligent Pre-processing:** FFmpeg-based audio extraction optimized for speech recognition (16 kHz mono).
- **Professional Packaging:** Fully installable via pip with a dedicated command-line entry point.

---

## ⚙️ Installation

### 1. Prerequisite: FFmpeg
This tool requires FFmpeg to be installed on your system.
- **macOS:** `brew install ffmpeg`
- **Windows:** `choco install ffmpeg`
- **Linux:** `sudo apt install ffmpeg`

### 2. Install vid2cc-AI
Install directly from source for development:
```bash
git clone https://github.com/0xdilshan/vid2cc-AI.git
cd vid2cc-AI
pip install -e .
```

## How To Use

Once installed, the `vid2cc` command is available globally in your terminal.
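
A minimal example, with an illustrative filename: running against a local video writes a matching `.srt` next to it.

```bash
# Uses the default "base" Whisper model and produces my_video.srt
vid2cc my_video.mp4
```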

---

### 🛠️ Advanced Options

Fine-tune your output using the following flags:

| Flag | Description |
| :--- | :--- |
| `--model [size]` | Choose Whisper model: `tiny`, `base`, `small`, `medium`, or `large`. |
| `--embed` | **Soft Subtitles:** Adds the SRT as a metadata track. Fast and allows users to toggle subtitles on/off in players like VLC. |
| `--hardcode` | **Burn-in Subtitles:** Permanently draws subtitles onto the video. Essential for social media (Instagram/TikTok) where players don't support SRT files. |

#### Examples

*For maximum accuracy with toggleable subs:*

```bash
vid2cc example.mp4 --model large --embed
```
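
*For a burned-in copy that plays anywhere (filename and model choice are illustrative):*

```bash
# Re-encodes the video, drawing the subtitles onto the frames
vid2cc example.mp4 --model small --hardcode
```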

## 📦 Usage as a Library

You can integrate **vid2cc-AI** directly into your Python projects:

```python
from vid2cc_ai import Transcriber, extract_audio

# Extract and Transcribe
extract_audio("video.mp4", "audio.wav")
ts = Transcriber("base")
segments = ts.transcribe("audio.wav")

for s in segments:
    print(f"[{s['start']:.2f}s] {s['text']}")
```
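
To write those segments to disk, the `formatter` module used by the CLI can be called directly (a sketch; the output path is illustrative):

```python
from vid2cc_ai.formatter import save_as_srt

# Writes numbered SRT cues with HH:MM:SS,mmm timestamps
save_as_srt(segments, "video.srt")
```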

---

### 🧪 Testing

```bash
# Install test dependencies
pip install pytest

# Run the test suite
pytest
```

---

## 🗺️ Roadmap

- [x] Local video → SRT transcription
- [x] Embed subtitles into video containers (`--embed`)
- [x] Burn-in subtitles (`--hardcode`)
- [ ] Multilingual transcription & translation support
- [ ] Transcription from YouTube/Vimeo URLs (`yt-dlp`)

## 🛠️ Tech Stack

- **Inference:** OpenAI Whisper
- **Media Engine:** FFmpeg
- **Core:** Python 3.9+, PyTorch
- **CLI Framework:** Argparse

## License

Distributed under the MIT License.
See `LICENSE` for more information.

vid2cc_ai-0.1.0/README.md
ADDED
@@ -0,0 +1,114 @@
(Identical to the long description embedded in vid2cc_ai-0.1.0/PKG-INFO above.)

vid2cc_ai-0.1.0/pyproject.toml
ADDED
@@ -0,0 +1,18 @@
[build-system]
requires = ["setuptools>=61.0", "wheel"]
build-backend = "setuptools.build_meta"

[project]
name = "vid2cc-ai"
version = "0.1.0"
description = "AI-powered subtitle generator using Whisper and FFmpeg"
readme = "README.md"
requires-python = ">=3.9"
dependencies = [
    "openai-whisper",
    "ffmpeg-python",
    "torch",
]

[project.scripts]
vid2cc = "vid2cc_ai.cli:main"
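
Given the `[project.scripts]` table above, installing the package (via the editable install from the README, or from PyPI) puts a `vid2cc` console script on the PATH that dispatches to `vid2cc_ai.cli:main`. A quick sanity check, relying only on standard argparse behavior:

```bash
vid2cc --help  # prints the usage string argparse generates from cli.py
```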

vid2cc_ai-0.1.0/src/vid2cc_ai/audio.py
ADDED
@@ -0,0 +1,38 @@
import subprocess
import os

def extract_audio(video_path: str, output_path: str):
    """Uses FFmpeg to extract high-quality mono audio for Whisper."""
    command = [
        "ffmpeg", "-i", video_path,
        "-vn", "-acodec", "pcm_s16le",
        "-ar", "16000", "-ac", "1",
        "-y", output_path
    ]
    subprocess.run(command, capture_output=True, check=True)

def embed_subtitles(video_path: str, srt_path: str, output_path: str):
    """Soft-embeds SRT as a subtitle track (no re-encoding)."""
    # For MP4 we use mov_text; for MKV we can use srt/ass
    ext = os.path.splitext(output_path)[1].lower()
    codec = "mov_text" if ext == ".mp4" else "srt"

    command = [
        "ffmpeg", "-i", video_path, "-i", srt_path,
        "-c", "copy", "-c:s:0", codec,
        "-y", output_path
    ]
    subprocess.run(command, capture_output=True, check=True)

def hardcode_subtitles(video_path: str, srt_path: str, output_path: str):
    """Burns subtitles directly into the video frames (requires re-encoding)."""
    # The 'subtitles' filter requires escaping backslashes and colons on Windows
    safe_srt_path = srt_path.replace("\\", "/").replace(":", "\\:")

    command = [
        "ffmpeg", "-i", video_path,
        "-vf", f"subtitles='{safe_srt_path}'",
        "-c:a", "copy",  # Keep original audio
        "-y", output_path
    ]
    subprocess.run(command, capture_output=True, check=True)
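
For reference, `extract_audio` above is equivalent to running FFmpeg by hand (paths illustrative):

```bash
# -vn drops the video stream; 16-bit PCM at 16 kHz mono is the
# speech-friendly format the package standardizes on for Whisper
ffmpeg -i video.mp4 -vn -acodec pcm_s16le -ar 16000 -ac 1 -y audio.wav
```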

vid2cc_ai-0.1.0/src/vid2cc_ai/cli.py
ADDED
@@ -0,0 +1,47 @@
import argparse
import os
from .audio import extract_audio, embed_subtitles, hardcode_subtitles
from .transcriber import Transcriber
from .formatter import save_as_srt

def main():
    parser = argparse.ArgumentParser(description="vid2cc-AI: Video to Subtitles")
    parser.add_argument("input", help="Path to video file")
    parser.add_argument("--model", default="base", help="tiny, base, small, medium, large")
    parser.add_argument("--embed", action="store_true", help="Soft-embed subtitles into a new video file")
    parser.add_argument("--hardcode", action="store_true", help="Burn subtitles into the video (re-encodes)")

    args = parser.parse_args()
    temp_audio = "temp_audio.wav"
    base_name = os.path.splitext(args.input)[0]
    output_srt = base_name + ".srt"

    try:
        print(f"--- Extracting Audio from {args.input} ---")
        extract_audio(args.input, temp_audio)

        print(f"--- Transcribing with Whisper ({args.model}) ---")
        ts = Transcriber(args.model)
        segments = ts.transcribe(temp_audio)

        save_as_srt(segments, output_srt)
        print(f"SRT saved: {output_srt}")

        if args.embed:
            out_v = f"{base_name}_embedded.mp4"
            print(f"--- Embedding Subtitles into {out_v} ---")
            embed_subtitles(args.input, output_srt, out_v)
            print(f"DONE! Embedded video: {out_v}")

        if args.hardcode:
            out_v = f"{base_name}_hardcoded.mp4"
            print(f"--- Hardcoding Subtitles into {out_v} (This may take a while) ---")
            hardcode_subtitles(args.input, output_srt, out_v)
            print(f"DONE! Hardcoded video: {out_v}")

    finally:
        if os.path.exists(temp_audio):
            os.remove(temp_audio)

if __name__ == "__main__":
    main()
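
A sketch of what one run produces, following the naming logic in `main()` above (the input filename is illustrative):

```bash
vid2cc talk.mp4 --embed --hardcode
# -> talk.srt           (always written)
# -> talk_embedded.mp4  (--embed)
# -> talk_hardcoded.mp4 (--hardcode)
```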

vid2cc_ai-0.1.0/src/vid2cc_ai/formatter.py
ADDED
@@ -0,0 +1,13 @@
def format_timestamp(seconds: float) -> str:
    hours = int(seconds // 3600)
    minutes = int((seconds % 3600) // 60)
    secs = int(seconds % 60)
    msecs = int((seconds - int(seconds)) * 1000)
    return f"{hours:02}:{minutes:02}:{secs:02},{msecs:03}"

def save_as_srt(segments, output_path):
    with open(output_path, "w", encoding="utf-8") as f:
        for i, segment in enumerate(segments, start=1):
            start = format_timestamp(segment['start'])
            end = format_timestamp(segment['end'])
            f.write(f"{i}\n{start} --> {end}\n{segment['text'].strip()}\n\n")
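
A minimal sketch of `save_as_srt` in isolation (the segment data is made up for illustration):

```python
from vid2cc_ai.formatter import save_as_srt

segments = [{"start": 0.0, "end": 2.5, "text": "Hello world"}]
save_as_srt(segments, "out.srt")
# out.srt now contains a single numbered cue:
# 1
# 00:00:00,000 --> 00:00:02,500
# Hello world
```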

vid2cc_ai-0.1.0/src/vid2cc_ai/tests/__init__.py
ADDED
(Empty file.)

vid2cc_ai-0.1.0/src/vid2cc_ai/tests/test_formatter.py
ADDED
@@ -0,0 +1,19 @@
import pytest
from vid2cc_ai.formatter import format_timestamp

def test_format_timestamp_basic():
    """Test standard second-to-SRT conversion."""
    assert format_timestamp(61.5) == "00:01:01,500"

def test_format_timestamp_hours():
    """Test conversion when hours are involved."""
    # 3661 seconds = 1 hour, 1 minute, 1 second
    assert format_timestamp(3661.0) == "01:01:01,000"

def test_format_timestamp_milliseconds():
    """Test high-precision milliseconds."""
    assert format_timestamp(0.123) == "00:00:00,123"

def test_format_timestamp_zero():
    """Test the starting point."""
    assert format_timestamp(0.0) == "00:00:00,000"

vid2cc_ai-0.1.0/src/vid2cc_ai/transcriber.py
ADDED
@@ -0,0 +1,10 @@
import whisper
import torch

class Transcriber:
    def __init__(self, model_size="base"):
        self.device = "cuda" if torch.cuda.is_available() else "cpu"
        self.model = whisper.load_model(model_size, device=self.device)

    def transcribe(self, audio_path: str):
        return self.model.transcribe(audio_path)['segments']

vid2cc_ai-0.1.0/src/vid2cc_ai.egg-info/PKG-INFO
ADDED
@@ -0,0 +1,126 @@
(Identical to vid2cc_ai-0.1.0/PKG-INFO above.)

vid2cc_ai-0.1.0/src/vid2cc_ai.egg-info/SOURCES.txt
ADDED
@@ -0,0 +1,16 @@
LICENSE
README.md
pyproject.toml
src/vid2cc_ai/__init__.py
src/vid2cc_ai/audio.py
src/vid2cc_ai/cli.py
src/vid2cc_ai/formatter.py
src/vid2cc_ai/transcriber.py
src/vid2cc_ai.egg-info/PKG-INFO
src/vid2cc_ai.egg-info/SOURCES.txt
src/vid2cc_ai.egg-info/dependency_links.txt
src/vid2cc_ai.egg-info/entry_points.txt
src/vid2cc_ai.egg-info/requires.txt
src/vid2cc_ai.egg-info/top_level.txt
src/vid2cc_ai/tests/__init__.py
src/vid2cc_ai/tests/test_formatter.py

vid2cc_ai-0.1.0/src/vid2cc_ai.egg-info/dependency_links.txt
ADDED
@@ -0,0 +1 @@
(One blank line.)

vid2cc_ai-0.1.0/src/vid2cc_ai.egg-info/top_level.txt
ADDED
@@ -0,0 +1 @@
vid2cc_ai