TonieToolbox 0.1.7__tar.gz → 0.2.0__tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- {tonietoolbox-0.1.7/TonieToolbox.egg-info → tonietoolbox-0.2.0}/PKG-INFO +36 -3
- {tonietoolbox-0.1.7 → tonietoolbox-0.2.0}/README.md +35 -2
- {tonietoolbox-0.1.7 → tonietoolbox-0.2.0}/TonieToolbox/__init__.py +1 -1
- {tonietoolbox-0.1.7 → tonietoolbox-0.2.0}/TonieToolbox/__main__.py +49 -7
- {tonietoolbox-0.1.7 → tonietoolbox-0.2.0}/TonieToolbox/audio_conversion.py +88 -27
- {tonietoolbox-0.1.7 → tonietoolbox-0.2.0}/TonieToolbox/dependency_manager.py +178 -71
- tonietoolbox-0.2.0/TonieToolbox/recursive_processor.py +250 -0
- {tonietoolbox-0.1.7 → tonietoolbox-0.2.0/TonieToolbox.egg-info}/PKG-INFO +36 -3
- {tonietoolbox-0.1.7 → tonietoolbox-0.2.0}/TonieToolbox.egg-info/SOURCES.txt +1 -0
- {tonietoolbox-0.1.7 → tonietoolbox-0.2.0}/LICENSE.md +0 -0
- {tonietoolbox-0.1.7 → tonietoolbox-0.2.0}/MANIFEST.in +0 -0
- {tonietoolbox-0.1.7 → tonietoolbox-0.2.0}/TonieToolbox/constants.py +0 -0
- {tonietoolbox-0.1.7 → tonietoolbox-0.2.0}/TonieToolbox/filename_generator.py +0 -0
- {tonietoolbox-0.1.7 → tonietoolbox-0.2.0}/TonieToolbox/logger.py +0 -0
- {tonietoolbox-0.1.7 → tonietoolbox-0.2.0}/TonieToolbox/ogg_page.py +0 -0
- {tonietoolbox-0.1.7 → tonietoolbox-0.2.0}/TonieToolbox/opus_packet.py +0 -0
- {tonietoolbox-0.1.7 → tonietoolbox-0.2.0}/TonieToolbox/tonie_analysis.py +0 -0
- {tonietoolbox-0.1.7 → tonietoolbox-0.2.0}/TonieToolbox/tonie_file.py +0 -0
- {tonietoolbox-0.1.7 → tonietoolbox-0.2.0}/TonieToolbox/tonie_header.proto +0 -0
- {tonietoolbox-0.1.7 → tonietoolbox-0.2.0}/TonieToolbox/tonie_header_pb2.py +0 -0
- {tonietoolbox-0.1.7 → tonietoolbox-0.2.0}/TonieToolbox/version_handler.py +0 -0
- {tonietoolbox-0.1.7 → tonietoolbox-0.2.0}/TonieToolbox.egg-info/dependency_links.txt +0 -0
- {tonietoolbox-0.1.7 → tonietoolbox-0.2.0}/TonieToolbox.egg-info/entry_points.txt +0 -0
- {tonietoolbox-0.1.7 → tonietoolbox-0.2.0}/TonieToolbox.egg-info/requires.txt +0 -0
- {tonietoolbox-0.1.7 → tonietoolbox-0.2.0}/TonieToolbox.egg-info/top_level.txt +0 -0
- {tonietoolbox-0.1.7 → tonietoolbox-0.2.0}/pyproject.toml +0 -0
- {tonietoolbox-0.1.7 → tonietoolbox-0.2.0}/setup.cfg +0 -0
- {tonietoolbox-0.1.7 → tonietoolbox-0.2.0}/setup.py +0 -0
{tonietoolbox-0.1.7/TonieToolbox.egg-info → tonietoolbox-0.2.0}/PKG-INFO

@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: TonieToolbox
-Version: 0.1.7
+Version: 0.2.0
 Summary: Convert audio files to Tonie box compatible format
 Home-page: https://github.com/Quentendo64/TonieToolbox
 Author: Quentendo64
@@ -62,6 +62,7 @@ TonieToolbox allows you to create custom audio content for Tonie boxes by conver
 The tool provides several capabilities:
 
 - Convert single or multiple audio files into a Tonie-compatible format
+- Process complex folder structures recursively to handle entire audio collections
 - Analyze and validate existing Tonie files
 - Split Tonie files into individual opus tracks
 - Compare two TAF files for debugging differences
@@ -137,6 +138,22 @@ Or use a list file (.lst) containing paths to multiple audio files:
 tonietoolbox playlist.lst
 ```
 
+**Process folders recursively:**
+
+To process an entire folder structure with multiple audio folders:
+
+```
+tonietoolbox --recursive "Music/Albums"
+```
+
+This will scan all subfolders, identify those containing audio files, and create a TAF file for each folder.
+
+By default, all generated TAF files are saved in the `.\output` directory. If you want to save each TAF file in its source directory instead:
+
+```
+tonietoolbox --recursive --output-to-source "Music/Albums"
+```
+
 ### Advanced Options
 
 Run the following command to see all available options:
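The recursive scan introduced here (implemented in the new `recursive_processor.py`) amounts to walking the tree and collecting every folder that directly contains audio files. A minimal sketch under stated assumptions — `find_audio_folders` and the extension set are illustrative, not the actual TonieToolbox API:

```python
import os

# Illustrative extension set; the real tool relies on ffmpeg for format support.
AUDIO_EXTENSIONS = {".mp3", ".flac", ".ogg", ".opus", ".wav", ".m4a", ".aac"}

def find_audio_folders(root):
    """Walk `root` and return (output_name, folder_path, audio_files) tuples
    for every folder that directly contains audio files."""
    tasks = []
    for dirpath, _dirnames, filenames in os.walk(root):
        audio = sorted(
            os.path.join(dirpath, name)
            for name in filenames
            if os.path.splitext(name)[1].lower() in AUDIO_EXTENSIONS
        )
        if audio:
            # The folder name doubles as the TAF base name, e.g. "Album" -> Album.taf
            tasks.append((os.path.basename(dirpath), dirpath, audio))
    return tasks
```

Each returned tuple then maps to one generated TAF file, matching the per-folder behavior described above.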
@@ -149,8 +166,8 @@ Output:
 ```
 usage: TonieToolbox.py [-h] [--ts TIMESTAMP] [--ffmpeg FFMPEG] [--opusenc OPUSENC]
                        [--bitrate BITRATE] [--cbr] [--append-tonie-tag TAG]
-                       [--no-tonie-header] [--info] [--split] [--recursive] [--compare FILE2]
-                       [--detailed-compare] [--debug] [--trace] [--quiet] [--silent]
+                       [--no-tonie-header] [--info] [--split] [--recursive] [--output-to-source]
+                       [--compare FILE2] [--detailed-compare] [--debug] [--trace] [--quiet] [--silent]
                        SOURCE [TARGET]
 
 Create Tonie compatible file from Ogg opus file(s).
@@ -170,6 +187,8 @@ optional arguments:
   --no-tonie-header    do not write Tonie header
   --info               Check and display info about Tonie file
   --split              Split Tonie file into opus tracks
+  --recursive          Process folders recursively
+  --output-to-source   Save output files in the source directory instead of output directory
   --compare FILE2      Compare input file with another .taf file for debugging
   --detailed-compare   Show detailed OGG page differences when comparing files
 
@@ -222,6 +241,20 @@ tonietoolbox input.mp3 --ts ./reference.taf # Reference TAF for extraction
 tonietoolbox input.mp3 --bitrate 128
 ```
 
+#### Process a complex folder structure:
+
+Process an audiobook series with multiple folders:
+
+```
+tonietoolbox --recursive "\Hörspiele\Die drei Fragezeichen\Folgen"
+```
+
+Process a music collection with nested album folders and save TAF files alongside the source directories:
+
+```
+tonietoolbox --recursive --output-to-source "\Hörspiele\"
+```
+
 ## Technical Details
 
 ### TAF (Tonie Audio Format) File Structure
{tonietoolbox-0.1.7 → tonietoolbox-0.2.0}/README.md

@@ -35,6 +35,7 @@ TonieToolbox allows you to create custom audio content for Tonie boxes by conver
 The tool provides several capabilities:
 
 - Convert single or multiple audio files into a Tonie-compatible format
+- Process complex folder structures recursively to handle entire audio collections
 - Analyze and validate existing Tonie files
 - Split Tonie files into individual opus tracks
 - Compare two TAF files for debugging differences
@@ -110,6 +111,22 @@ Or use a list file (.lst) containing paths to multiple audio files:
 tonietoolbox playlist.lst
 ```
 
+**Process folders recursively:**
+
+To process an entire folder structure with multiple audio folders:
+
+```
+tonietoolbox --recursive "Music/Albums"
+```
+
+This will scan all subfolders, identify those containing audio files, and create a TAF file for each folder.
+
+By default, all generated TAF files are saved in the `.\output` directory. If you want to save each TAF file in its source directory instead:
+
+```
+tonietoolbox --recursive --output-to-source "Music/Albums"
+```
+
 ### Advanced Options
 
 Run the following command to see all available options:
@@ -122,8 +139,8 @@ Output:
 ```
 usage: TonieToolbox.py [-h] [--ts TIMESTAMP] [--ffmpeg FFMPEG] [--opusenc OPUSENC]
                        [--bitrate BITRATE] [--cbr] [--append-tonie-tag TAG]
-                       [--no-tonie-header] [--info] [--split] [--recursive] [--compare FILE2]
-                       [--detailed-compare] [--debug] [--trace] [--quiet] [--silent]
+                       [--no-tonie-header] [--info] [--split] [--recursive] [--output-to-source]
+                       [--compare FILE2] [--detailed-compare] [--debug] [--trace] [--quiet] [--silent]
                        SOURCE [TARGET]
 
 Create Tonie compatible file from Ogg opus file(s).
@@ -143,6 +160,8 @@ optional arguments:
   --no-tonie-header    do not write Tonie header
   --info               Check and display info about Tonie file
   --split              Split Tonie file into opus tracks
+  --recursive          Process folders recursively
+  --output-to-source   Save output files in the source directory instead of output directory
   --compare FILE2      Compare input file with another .taf file for debugging
   --detailed-compare   Show detailed OGG page differences when comparing files
 
@@ -195,6 +214,20 @@ tonietoolbox input.mp3 --ts ./reference.taf # Reference TAF for extraction
 tonietoolbox input.mp3 --bitrate 128
 ```
 
+#### Process a complex folder structure:
+
+Process an audiobook series with multiple folders:
+
+```
+tonietoolbox --recursive "\Hörspiele\Die drei Fragezeichen\Folgen"
+```
+
+Process a music collection with nested album folders and save TAF files alongside the source directories:
+
+```
+tonietoolbox --recursive --output-to-source "\Hörspiele\"
+```
+
 ## Technical Details
 
 ### TAF (Tonie Audio Format) File Structure
{tonietoolbox-0.1.7 → tonietoolbox-0.2.0}/TonieToolbox/__main__.py

@@ -16,6 +16,7 @@ from .dependency_manager import get_ffmpeg_binary, get_opus_binary
 from .logger import setup_logging, get_logger
 from .filename_generator import guess_output_filename
 from .version_handler import check_for_updates, clear_version_cache
+from .recursive_processor import process_recursive_folders
 
 def main():
     """Entry point for the TonieToolbox application."""
@@ -38,6 +39,9 @@ def main():
     parser.add_argument('--no-tonie-header', action='store_true', help='do not write Tonie header')
     parser.add_argument('--info', action='store_true', help='Check and display info about Tonie file')
     parser.add_argument('--split', action='store_true', help='Split Tonie file into opus tracks')
+    parser.add_argument('--recursive', action='store_true', help='Process folders recursively')
+    parser.add_argument('--output-to-source', action='store_true',
+                        help='Save output files in the source directory instead of output directory')
     parser.add_argument('--auto-download', action='store_true', help='Automatically download FFmpeg and opusenc if needed')
     parser.add_argument('--keep-temp', action='store_true',
                         help='Keep temporary opus files in a temp folder for testing')
@@ -113,6 +117,39 @@ def main():
         sys.exit(1)
     logger.debug("Using opusenc binary: %s", opus_binary)
 
+    # Handle recursive processing
+    if args.recursive:
+        logger.info("Processing folders recursively: %s", args.input_filename)
+        process_tasks = process_recursive_folders(args.input_filename)
+
+        if not process_tasks:
+            logger.error("No folders with audio files found for recursive processing")
+            sys.exit(1)
+
+        output_dir = None if args.output_to_source else './output'
+
+        if output_dir and not os.path.exists(output_dir):
+            os.makedirs(output_dir, exist_ok=True)
+            logger.debug("Created output directory: %s", output_dir)
+
+        for task_index, (output_name, folder_path, audio_files) in enumerate(process_tasks):
+            if args.output_to_source:
+                task_out_filename = os.path.join(folder_path, f"{output_name}.taf")
+            else:
+                task_out_filename = os.path.join(output_dir, f"{output_name}.taf")
+
+            logger.info("[%d/%d] Processing folder: %s -> %s",
+                        task_index + 1, len(process_tasks), folder_path, task_out_filename)
+
+            create_tonie_file(task_out_filename, audio_files, args.no_tonie_header, args.user_timestamp,
+                              args.bitrate, not args.cbr, ffmpeg_binary, opus_binary, args.keep_temp,
+                              args.auto_download)
+            logger.info("Successfully created Tonie file: %s", task_out_filename)
+
+        logger.info("Recursive processing completed. Created %d Tonie files.", len(process_tasks))
+        sys.exit(0)
+
+    # Handle directory or file input
     if os.path.isdir(args.input_filename):
         logger.debug("Input is a directory: %s", args.input_filename)
         args.input_filename += "/*"
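The per-task output path decision above reduces to a single branch. A sketch for illustration — `build_taf_path` is a made-up helper name, not part of the actual code:

```python
import os

def build_taf_path(output_name, folder_path, output_to_source, output_dir="./output"):
    """--output-to-source writes the TAF next to its source folder;
    otherwise every file lands in the shared ./output directory."""
    base = folder_path if output_to_source else output_dir
    return os.path.join(base, f"{output_name}.taf")
```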
@@ -143,12 +180,17 @@ def main():
         out_filename = args.output_filename
     else:
         guessed_name = guess_output_filename(args.input_filename, files)
-
-
-
-
-
-
+        if args.output_to_source:
+            source_dir = os.path.dirname(files[0]) if files else '.'
+            out_filename = os.path.join(source_dir, guessed_name)
+            logger.debug("Using source location for output: %s", out_filename)
+        else:
+            output_dir = './output'
+            if not os.path.exists(output_dir):
+                logger.debug("Creating default output directory: %s", output_dir)
+                os.makedirs(output_dir, exist_ok=True)
+            out_filename = os.path.join(output_dir, guessed_name)
+            logger.debug("Using default output location: %s", out_filename)
 
     if args.append_tonie_tag:
         logger.debug("Appending Tonie tag to output filename")
@@ -162,7 +204,7 @@ def main():
 
     if not out_filename.lower().endswith('.taf'):
         out_filename += '.taf'
-
+
     logger.info("Creating Tonie file: %s with %d input file(s)", out_filename, len(files))
     create_tonie_file(out_filename, files, args.no_tonie_header, args.user_timestamp,
                       args.bitrate, not args.cbr, ffmpeg_binary, opus_binary, args.keep_temp, args.auto_download)
{tonietoolbox-0.1.7 → tonietoolbox-0.2.0}/TonieToolbox/audio_conversion.py

@@ -58,37 +58,65 @@ def get_opus_tempfile(ffmpeg_binary=None, opus_binary=None, filename=None, bitra
         logger.info("Creating persistent temporary file: %s", temp_path)
 
         logger.debug("Starting FFmpeg process")
-
-
-
+        try:
+            ffmpeg_process = subprocess.Popen(
+                [ffmpeg_binary, "-hide_banner", "-loglevel", "warning", "-i", filename, "-f", "wav",
+                 "-ar", "48000", "-"], stdout=subprocess.PIPE)
+        except FileNotFoundError:
+            logger.error("Error opening input file %s", filename)
+            raise RuntimeError(f"Error opening input file (unknown)")
 
         logger.debug("Starting opusenc process")
-
-
-
+        try:
+            opusenc_process = subprocess.Popen(
+                [opus_binary, "--quiet", vbr_parameter, "--bitrate", f"{bitrate:d}", "-", temp_path],
+                stdin=ffmpeg_process.stdout, stderr=subprocess.DEVNULL)
+        except Exception as e:
+            logger.error("Opus encoding failed: %s", str(e))
+            raise RuntimeError(f"Opus encoding failed: {str(e)}")
 
-
+        ffmpeg_process.stdout.close()  # Allow ffmpeg to receive SIGPIPE if opusenc exits
+        opusenc_return = opusenc_process.wait()
+        ffmpeg_return = ffmpeg_process.wait()
 
-        if
-        logger.error("
-        raise RuntimeError(f"
+        if ffmpeg_return != 0:
+            logger.error("FFmpeg processing failed with return code %d", ffmpeg_return)
+            raise RuntimeError(f"FFmpeg processing failed with return code {ffmpeg_return}")
+
+        if opusenc_return != 0:
+            logger.error("Opus encoding failed with return code %d", opusenc_return)
+            raise RuntimeError(f"Opus encoding failed with return code {opusenc_return}")
 
         logger.debug("Opening temporary file for reading")
-
-
+        try:
+            tmp_file = open(temp_path, "rb")
+            return tmp_file, temp_path
+        except Exception as e:
+            logger.error("Failed to open temporary file: %s", str(e))
+            raise RuntimeError(f"Failed to open temporary file: {str(e)}")
     else:
         logger.debug("Using in-memory temporary file")
 
         logger.debug("Starting FFmpeg process")
-
-
-
+        try:
+            ffmpeg_process = subprocess.Popen(
+                [ffmpeg_binary, "-hide_banner", "-loglevel", "warning", "-i", filename, "-f", "wav",
+                 "-ar", "48000", "-"], stdout=subprocess.PIPE)
+        except FileNotFoundError:
+            logger.error("Error opening input file %s", filename)
+            raise RuntimeError(f"Error opening input file (unknown)")
 
         logger.debug("Starting opusenc process")
-
-
-
+        try:
+            opusenc_process = subprocess.Popen(
+                [opus_binary, "--quiet", vbr_parameter, "--bitrate", f"{bitrate:d}", "-", "-"],
+                stdin=ffmpeg_process.stdout, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)
+        except Exception as e:
+            logger.error("Opus encoding failed: %s", str(e))
+            raise RuntimeError(f"Opus encoding failed: {str(e)}")
 
+        ffmpeg_process.stdout.close()  # Allow ffmpeg to receive SIGPIPE if opusenc exits
+
         tmp_file = tempfile.SpooledTemporaryFile()
         bytes_written = 0
@@ -97,9 +125,16 @@ def get_opus_tempfile(ffmpeg_binary=None, opus_binary=None, filename=None, bitra
             tmp_file.write(chunk)
             bytes_written += len(chunk)
 
-
-
-
+        opusenc_return = opusenc_process.wait()
+        ffmpeg_return = ffmpeg_process.wait()
+
+        if ffmpeg_return != 0:
+            logger.error("FFmpeg processing failed with return code %d", ffmpeg_return)
+            raise RuntimeError(f"FFmpeg processing failed with return code {ffmpeg_return}")
+
+        if opusenc_return != 0:
+            logger.error("Opus encoding failed with return code %d", opusenc_return)
+            raise RuntimeError(f"Opus encoding failed with return code {opusenc_return}")
 
         logger.debug("Wrote %d bytes to temporary file", bytes_written)
         tmp_file.seek(0)
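The `ffmpeg_process.stdout.close()` line added above is the standard two-process pipe pattern: the parent closes its own copy of the producer's stdout so the producer receives SIGPIPE instead of blocking if the consumer exits early. A self-contained sketch of the same wiring — the two `-c` scripts are placeholder stand-ins for ffmpeg and opusenc, not real encoders:

```python
import subprocess
import sys

# Producer stand-in: writes 15 bytes to stdout (ffmpeg writes WAV data here).
producer = subprocess.Popen(
    [sys.executable, "-c",
     "import sys; sys.stdout.buffer.write(b'data\\n' * 3)"],
    stdout=subprocess.PIPE)

# Consumer stand-in: reads all of stdin and reports the byte count
# (opusenc reads the WAV stream from here).
consumer = subprocess.Popen(
    [sys.executable, "-c",
     "import sys; print(len(sys.stdin.buffer.read()))"],
    stdin=producer.stdout, stdout=subprocess.PIPE)

# Close the parent's copy so only the consumer holds the read end;
# an early consumer exit then delivers SIGPIPE to the producer.
producer.stdout.close()

out, _ = consumer.communicate()
producer_rc = producer.wait()
consumer_rc = consumer.wait()
```

Without the `close()` the parent keeps the pipe's read end open, so a stalled consumer can leave the producer blocked forever.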
@@ -157,12 +192,38 @@ def get_input_files(input_filename):
         logger.debug("Processing list file: %s", input_filename)
         list_dir = os.path.dirname(os.path.abspath(input_filename))
         input_files = []
-        with open(input_filename) as file_list:
-            for line in file_list:
-                fname = line.
-
-
-
+        with open(input_filename, 'r', encoding='utf-8') as file_list:
+            for line_num, line in enumerate(file_list, 1):
+                fname = line.strip()
+                if not fname or fname.startswith('#'):  # Skip empty lines and comments
+                    continue
+
+                # Remove any quote characters from path
+                fname = fname.strip('"\'')
+
+                # Check if the path is absolute or has a drive letter (Windows)
+                if os.path.isabs(fname) or (len(fname) > 1 and fname[1] == ':'):
+                    full_path = fname  # Use as is if it's an absolute path
+                    logger.trace("Using absolute path from list: %s", full_path)
+                else:
+                    full_path = os.path.join(list_dir, fname)
+                    logger.trace("Using relative path from list: %s", full_path)
+
+                # Handle directory paths by finding all audio files in the directory
+                if os.path.isdir(full_path):
+                    logger.debug("Path is a directory, finding audio files in: %s", full_path)
+                    dir_glob = os.path.join(full_path, "*")
+                    dir_files = sorted(filter_directories(glob.glob(dir_glob)))
+                    if dir_files:
+                        input_files.extend(dir_files)
+                        logger.debug("Found %d audio files in directory", len(dir_files))
+                    else:
+                        logger.warning("No audio files found in directory at line %d: %s", line_num, full_path)
+                elif os.path.isfile(full_path):
+                    input_files.append(full_path)
+                else:
+                    logger.warning("File not found at line %d: %s", line_num, full_path)
+
         logger.debug("Found %d files in list file", len(input_files))
     else:
         logger.debug("Processing glob pattern: %s", input_filename)
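The hardened .lst parser above applies a few per-line rules: skip blanks and `#` comments, strip surrounding quotes, keep absolute and Windows drive-letter paths as-is, and resolve everything else relative to the list file's folder. A condensed sketch of just those rules — `parse_list_line` is an illustrative name, not part of the TonieToolbox API:

```python
import os

def parse_list_line(line, list_dir):
    """Return the resolved path for one .lst line, or None for blanks/comments."""
    fname = line.strip()
    if not fname or fname.startswith('#'):  # skip empty lines and comments
        return None
    fname = fname.strip('"\'')              # drop surrounding quote characters
    # Absolute path, or Windows drive-letter path like C:\Music\track.mp3
    if os.path.isabs(fname) or (len(fname) > 1 and fname[1] == ':'):
        return fname
    return os.path.join(list_dir, fname)    # resolve relative to the .lst file
```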
{tonietoolbox-0.1.7 → tonietoolbox-0.2.0}/TonieToolbox/dependency_manager.py

@@ -13,11 +13,15 @@ import shutil
 import zipfile
 import tarfile
 import urllib.request
+import time
 from pathlib import Path
 
 from .logger import get_logger
 logger = get_logger('dependency_manager')
 
+CACHE_DIR = os.path.join(os.path.expanduser("~"), ".tonietoolbox")
+LIBS_DIR = os.path.join(CACHE_DIR, "libs")
+
 DEPENDENCIES = {
     'ffmpeg': {
         'windows': {
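The two new constants pin every downloaded dependency to a single per-user cache instead of OS-specific data directories. Restating the definitions from the diff above, with a sketch of the resulting layout (the `ffmpeg` subfolder name follows the per-dependency convention used later in this file):

```python
import os

# Same definitions as in the diff above.
CACHE_DIR = os.path.join(os.path.expanduser("~"), ".tonietoolbox")
LIBS_DIR = os.path.join(CACHE_DIR, "libs")

# Each tool then gets its own subfolder, e.g. ~/.tonietoolbox/libs/ffmpeg
ffmpeg_dir = os.path.join(LIBS_DIR, "ffmpeg")
```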
@@ -59,16 +63,7 @@ def get_system():
 
 def get_user_data_dir():
     """Get the user data directory for storing downloaded dependencies."""
-
-
-    if system == 'windows':
-        base_dir = os.environ.get('APPDATA', os.path.expanduser('~'))
-    elif system == 'darwin':
-        base_dir = os.path.expanduser('~/Library/Application Support')
-    else:  # linux or other unix-like
-        base_dir = os.environ.get('XDG_DATA_HOME', os.path.expanduser('~/.local/share'))
-
-    app_dir = os.path.join(base_dir, 'TonieToolbox')
+    app_dir = LIBS_DIR
     logger.debug("Using application data directory: %s", app_dir)
 
     os.makedirs(app_dir, exist_ok=True)
@@ -130,31 +125,121 @@ def extract_archive(archive_path, extract_dir):
         logger.info("Extracting %s to %s", archive_path, extract_dir)
         os.makedirs(extract_dir, exist_ok=True)
 
+        # Extract to a temporary subdirectory first
+        temp_extract_dir = os.path.join(extract_dir, "_temp_extract")
+        os.makedirs(temp_extract_dir, exist_ok=True)
+
         if archive_path.endswith('.zip'):
             logger.debug("Extracting ZIP archive")
             with zipfile.ZipFile(archive_path, 'r') as zip_ref:
-                zip_ref.extractall(
-
+                zip_ref.extractall(temp_extract_dir)
+                files_extracted = zip_ref.namelist()
+                logger.trace("Extracted files: %s", files_extracted)
         elif archive_path.endswith(('.tar.gz', '.tgz')):
             logger.debug("Extracting TAR.GZ archive")
             with tarfile.open(archive_path, 'r:gz') as tar_ref:
-                tar_ref.extractall(
-
+                tar_ref.extractall(temp_extract_dir)
+                files_extracted = tar_ref.getnames()
+                logger.trace("Extracted files: %s", files_extracted)
         elif archive_path.endswith(('.tar.xz', '.txz')):
             logger.debug("Extracting TAR.XZ archive")
             with tarfile.open(archive_path, 'r:xz') as tar_ref:
-                tar_ref.extractall(
-
+                tar_ref.extractall(temp_extract_dir)
+                files_extracted = tar_ref.getnames()
+                logger.trace("Extracted files: %s", files_extracted)
         elif archive_path.endswith('.tar'):
             logger.debug("Extracting TAR archive")
             with tarfile.open(archive_path, 'r') as tar_ref:
-                tar_ref.extractall(
-
+                tar_ref.extractall(temp_extract_dir)
+                files_extracted = tar_ref.getnames()
+                logger.trace("Extracted files: %s", files_extracted)
         else:
             logger.error("Unsupported archive format: %s", archive_path)
             return False
 
         logger.info("Archive extracted successfully")
+
+        # Fix FFmpeg nested directory issue by moving binary files to the correct location
+        dependency_name = os.path.basename(extract_dir)
+        if dependency_name == 'ffmpeg':
+            # Check for common nested directory structures for FFmpeg
+            if os.path.exists(os.path.join(temp_extract_dir, "ffmpeg-master-latest-win64-gpl", "bin")):
+                # Windows FFmpeg path
+                bin_dir = os.path.join(temp_extract_dir, "ffmpeg-master-latest-win64-gpl", "bin")
+                logger.debug("Found nested FFmpeg bin directory: %s", bin_dir)
+
+                # Move all files from bin directory to the main dependency directory
+                for file in os.listdir(bin_dir):
+                    src = os.path.join(bin_dir, file)
+                    dst = os.path.join(extract_dir, file)
+                    logger.debug("Moving %s to %s", src, dst)
+                    shutil.move(src, dst)
+
+            elif os.path.exists(os.path.join(temp_extract_dir, "ffmpeg-master-latest-linux64-gpl", "bin")):
+                # Linux FFmpeg path
+                bin_dir = os.path.join(temp_extract_dir, "ffmpeg-master-latest-linux64-gpl", "bin")
+                logger.debug("Found nested FFmpeg bin directory: %s", bin_dir)
+
+                # Move all files from bin directory to the main dependency directory
+                for file in os.listdir(bin_dir):
+                    src = os.path.join(bin_dir, file)
+                    dst = os.path.join(extract_dir, file)
+                    logger.debug("Moving %s to %s", src, dst)
+                    shutil.move(src, dst)
+            else:
+                # Check for any directory with a 'bin' subdirectory
+                for root, dirs, _ in os.walk(temp_extract_dir):
+                    if "bin" in dirs:
+                        bin_dir = os.path.join(root, "bin")
+                        logger.debug("Found nested bin directory: %s", bin_dir)
+
+                        # Move all files from bin directory to the main dependency directory
+                        for file in os.listdir(bin_dir):
+                            src = os.path.join(bin_dir, file)
+                            dst = os.path.join(extract_dir, file)
+                            logger.debug("Moving %s to %s", src, dst)
+                            shutil.move(src, dst)
+                        break
+                else:
+                    # If no bin directory was found, just move everything from the temp directory
+                    logger.debug("No bin directory found, moving all files from temp directory")
+                    for item in os.listdir(temp_extract_dir):
+                        src = os.path.join(temp_extract_dir, item)
+                        dst = os.path.join(extract_dir, item)
+                        if os.path.isfile(src):
+                            logger.debug("Moving file %s to %s", src, dst)
+                            shutil.move(src, dst)
+        else:
+            # For non-FFmpeg dependencies, just move all files from temp directory
+            for item in os.listdir(temp_extract_dir):
+                src = os.path.join(temp_extract_dir, item)
+                dst = os.path.join(extract_dir, item)
+                if os.path.isfile(src):
+                    logger.debug("Moving file %s to %s", src, dst)
+                    shutil.move(src, dst)
+                else:
+                    logger.debug("Moving directory %s to %s", src, dst)
+                    # If destination already exists, remove it first
+                    if os.path.exists(dst):
+                        shutil.rmtree(dst)
+                    shutil.move(src, dst)
+
+        # Clean up the temporary extraction directory
+        try:
+            shutil.rmtree(temp_extract_dir)
+            logger.debug("Removed temporary extraction directory")
+        except Exception as e:
+            logger.warning("Failed to remove temporary extraction directory: %s", e)
+
+        # Remove the archive file after successful extraction
+        try:
+            logger.debug("Removing archive file: %s", archive_path)
+            os.remove(archive_path)
+            logger.debug("Archive file removed successfully")
+        except Exception as e:
+            logger.warning("Failed to remove archive file: %s (error: %s)", archive_path, e)
+            # Continue even if we couldn't remove the file
+
         return True
     except Exception as e:
         logger.error("Failed to extract %s: %s", archive_path, e)
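The core of the nested-directory fix above is a walk that locates the first extracted folder containing a `bin` subdirectory and lifts its binaries up one level. A condensed sketch — `flatten_bin_dir` is a hypothetical name; the real code inlines this logic in `extract_archive`:

```python
import os
import shutil

def flatten_bin_dir(temp_extract_dir, extract_dir):
    """Move binaries from the first nested `bin` folder up into extract_dir.
    Returns True if a bin folder was found and flattened."""
    for root, dirs, _files in os.walk(temp_extract_dir):
        if "bin" in dirs:
            bin_dir = os.path.join(root, "bin")
            for name in os.listdir(bin_dir):
                shutil.move(os.path.join(bin_dir, name),
                            os.path.join(extract_dir, name))
            return True
    return False
```

This makes the layout independent of the archive's top-level folder name (e.g. `ffmpeg-master-latest-win64-gpl/bin/ffmpeg.exe` ends up directly in the dependency folder).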
@@ -302,73 +387,93 @@ def ensure_dependency(dependency_name, auto_download=False):
|
|
302
387
|
logger.error("Unknown dependency: %s", dependency_name)
|
303
388
|
return None
|
304
389
|
|
305
|
-
# First check if it's already in PATH
|
306
|
-
bin_name = dependency_name if dependency_name != 'opusenc' else 'opusenc'
|
307
|
-
path_binary = check_binary_in_path(bin_name)
|
308
|
-
if path_binary:
|
309
|
-
logger.info("Found %s in PATH: %s", dependency_name, path_binary)
|
310
|
-
return path_binary
|
311
|
-
|
312
390
|
# Set up paths to check for previously downloaded versions
|
313
391
|
user_data_dir = get_user_data_dir()
|
314
392
|
dependency_info = DEPENDENCIES[dependency_name].get(system, {})
|
315
|
-
|
316
|
-
binary_path = dependency_info.get('bin_path', bin_name)
|
317
|
-
extract_dir = os.path.join(user_data_dir, extract_dir_name)
|
393
|
+
binary_path = dependency_info.get('bin_path', dependency_name if dependency_name != 'opusenc' else 'opusenc')
|
318
394
|
|
319
|
-
#
|
320
|
-
|
321
|
-
|
322
|
-
|
323
|
-
|
324
|
-
|
325
|
-
|
326
|
-
|
327
|
-
|
328
|
-
|
329
|
-
|
330
|
-
|
331
|
-
|
332
|
-
|
333
|
-
|
334
|
-
|
335
|
-
|
336
|
-
|
337
|
-
|
338
|
-
|
339
|
-
|
340
|
-
|
341
|
-
|
395
|
+
# Define bin_name early so it's available in all code paths
|
396
|
+
bin_name = dependency_name if dependency_name != 'opusenc' else 'opusenc'
|
397
|
+
|
398
|
+
# Create a specific folder for this dependency
|
399
|
+
dependency_dir = os.path.join(user_data_dir, dependency_name)
|
400
|
+
|
401
|
+
# First priority: Check if we already downloaded and extracted it previously
|
402
|
+
# When auto_download is True, we'll skip this check and download fresh versions
|
403
|
+
if not auto_download:
|
404
|
+
logger.debug("Checking for previously downloaded %s in %s", dependency_name, dependency_dir)
|
405
|
+
if os.path.exists(dependency_dir):
|
406
|
+
existing_binary = find_binary_in_extracted_dir(dependency_dir, binary_path)
|
407
|
+
if existing_binary and os.path.exists(existing_binary):
|
408
|
+
# Verify that the binary works
|
409
|
+
logger.info("Found previously downloaded %s: %s", dependency_name, existing_binary)
|
410
|
+
try:
|
411
|
+
+            if os.access(existing_binary, os.X_OK) or system == 'windows':
+                if system in ['linux', 'darwin']:
+                    logger.debug("Ensuring executable permissions on %s", existing_binary)
+                    os.chmod(existing_binary, 0o755)
+
+                # Quick check to verify binary works
+                if dependency_name == 'opusenc':
+                    cmd = [existing_binary, '--version']
+                    try:
+                        result = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, timeout=5)
+                        if result.returncode == 0:
+                            logger.info("Using previously downloaded %s: %s", dependency_name, existing_binary)
+                            return existing_binary
+                    except:
+                        # If --version fails, try without arguments
+                        try:
+                            result = subprocess.run([existing_binary], stdout=subprocess.PIPE, stderr=subprocess.PIPE, timeout=5)
+                            if result.returncode == 0:
+                                logger.info("Using previously downloaded %s: %s", dependency_name, existing_binary)
+                                return existing_binary
+                        except:
+                            pass
+                else:
+                    cmd = [existing_binary, '-version']
                     try:
-                        result = subprocess.run(
+                        result = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, timeout=5)
                         if result.returncode == 0:
                             logger.info("Using previously downloaded %s: %s", dependency_name, existing_binary)
                             return existing_binary
                     except:
                         pass
-
-
-
-
-
-
-
-
-
-
-
+
+                logger.warning("Previously downloaded %s exists but failed verification", dependency_name)
+        except Exception as e:
+            logger.warning("Error verifying downloaded binary: %s", e)
+
+    # Second priority: Check if it's in PATH (only if auto_download is False)
+    path_binary = check_binary_in_path(bin_name)
+    if path_binary:
+        logger.info("Found %s in PATH: %s", dependency_name, path_binary)
+        return path_binary
+    else:
+        logger.info("Auto-download enabled, forcing download/installation of %s", dependency_name)
+        # If there's an existing download directory, rename or remove it
+        if os.path.exists(dependency_dir):
+            try:
+                backup_dir = f"{dependency_dir}_backup_{int(time.time())}"
+                logger.debug("Moving existing dependency directory to: %s", backup_dir)
+                os.rename(dependency_dir, backup_dir)
             except Exception as e:
-                logger.warning("
+                logger.warning("Failed to rename existing dependency directory: %s", e)
+                try:
+                    logger.debug("Trying to remove existing dependency directory")
+                    shutil.rmtree(dependency_dir, ignore_errors=True)
+                except Exception as e:
+                    logger.warning("Failed to remove existing dependency directory: %s", e)
 
     # If auto_download is not enabled, don't try to install or download
     if not auto_download:
-        logger.warning("%s not found in PATH and auto-download is disabled. Use --auto-download to enable automatic installation.", dependency_name)
+        logger.warning("%s not found in libs directory or PATH and auto-download is disabled. Use --auto-download to enable automatic installation.", dependency_name)
        return None
 
-    # If not in PATH, check if we should install via package manager
+    # If not in libs or PATH, check if we should install via package manager
     if 'package' in dependency_info:
         package_name = dependency_info['package']
-        logger.info("%s not found. Attempting to install %s package...", dependency_name, package_name)
+        logger.info("%s not found or forced download. Attempting to install %s package...", dependency_name, package_name)
         if install_package(package_name):
             path_binary = check_binary_in_path(bin_name)
             if path_binary:
@@ -382,16 +487,18 @@ def ensure_dependency(dependency_name, auto_download=False):
 
     # Set up download paths
     download_url = dependency_info['url']
-
+
+    # Create dependency-specific directory
+    os.makedirs(dependency_dir, exist_ok=True)
 
     # Download and extract
     archive_ext = '.zip' if download_url.endswith('zip') else '.tar.xz'
-    archive_path = os.path.join(
+    archive_path = os.path.join(dependency_dir, f"{dependency_name}{archive_ext}")
     logger.debug("Using archive path: %s", archive_path)
 
     if download_file(download_url, archive_path):
-        if extract_archive(archive_path,
-            binary = find_binary_in_extracted_dir(
+        if extract_archive(archive_path, dependency_dir):
+            binary = find_binary_in_extracted_dir(dependency_dir, binary_path)
         if binary:
             # Make sure it's executable on Unix-like systems
             if system in ['linux', 'darwin']:
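The verification logic in the hunk above probes a previously downloaded binary with a tool-specific version flag (`--version` for opusenc, `-version` for others) inside a `try`/`except` with a timeout, falling back to a bare invocation. A standalone sketch of that pattern follows; the function name `verify_binary` is illustrative and not part of the package:

```python
import subprocess
import sys

def verify_binary(binary_path, version_flags=("--version", "-version")):
    """Return True if the binary exits successfully with any of the given
    version flags, or with no arguments as a last resort."""
    for flag in version_flags:
        try:
            result = subprocess.run([binary_path, flag],
                                    stdout=subprocess.PIPE,
                                    stderr=subprocess.PIPE,
                                    timeout=5)
            if result.returncode == 0:
                return True
        except (OSError, subprocess.SubprocessError):
            continue
    # Fall back to invoking the binary without arguments
    try:
        result = subprocess.run([binary_path],
                                stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE,
                                timeout=5)
        return result.returncode == 0
    except (OSError, subprocess.SubprocessError):
        return False
```

Unlike the bare `except:` clauses in the diff, this sketch catches only `OSError` and `subprocess.SubprocessError`, so a `KeyboardInterrupt` during verification is not swallowed.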
@@ -0,0 +1,250 @@
+"""
+Recursive folder processing functionality for the TonieToolbox package
+"""
+
+import os
+import glob
+from typing import List, Dict, Tuple, Set
+import logging
+import re
+
+from .audio_conversion import filter_directories
+from .logger import get_logger
+
+logger = get_logger('recursive_processor')
+
+
+def find_audio_folders(root_path: str) -> List[Dict[str, any]]:
+    """
+    Find and return all folders that contain audio files in a recursive manner,
+    organized in a way that handles nested folder structures.
+
+    Args:
+        root_path: Root directory to start searching from
+
+    Returns:
+        List of dictionaries with folder information, including paths and relationships
+    """
+    logger.info("Finding folders with audio files in: %s", root_path)
+
+    # Dictionary to store folder information
+    # Key: folder path, Value: {audio_files, parent, children, depth}
+    folders_info = {}
+    abs_root = os.path.abspath(root_path)
+
+    # First pass: Identify all folders containing audio files and calculate their depth
+    for dirpath, dirnames, filenames in os.walk(abs_root):
+        # Look for audio files in this directory
+        all_files = [os.path.join(dirpath, f) for f in filenames]
+        audio_files = filter_directories(all_files)
+
+        if audio_files:
+            # Calculate folder depth relative to root
+            rel_path = os.path.relpath(dirpath, abs_root)
+            depth = 0 if rel_path == '.' else rel_path.count(os.sep) + 1
+
+            # Store folder info
+            folders_info[dirpath] = {
+                'path': dirpath,
+                'audio_files': audio_files,
+                'parent': os.path.dirname(dirpath),
+                'children': [],
+                'depth': depth,
+                'file_count': len(audio_files)
+            }
+            logger.debug("Found folder with %d audio files: %s (depth %d)",
+                         len(audio_files), dirpath, depth)
+
+    # Second pass: Build parent-child relationships
+    for folder_path, info in folders_info.items():
+        parent_path = info['parent']
+        if parent_path in folders_info:
+            folders_info[parent_path]['children'].append(folder_path)
+
+    # Convert to list and sort by path for consistent processing
+    folder_list = sorted(folders_info.values(), key=lambda x: x['path'])
+    logger.info("Found %d folders containing audio files", len(folder_list))
+
+    return folder_list
+
+
+def determine_processing_folders(folders: List[Dict[str, any]]) -> List[Dict[str, any]]:
+    """
+    Determine which folders should be processed based on their position in the hierarchy.
+
+    Args:
+        folders: List of folder dictionaries with hierarchy information
+
+    Returns:
+        List of folders that should be processed (filtered)
+    """
+    # We'll use a set to track which folders we've decided to process
+    to_process = set()
+
+    # Let's examine folders with the deepest nesting level first
+    max_depth = max(folder['depth'] for folder in folders) if folders else 0
+
+    # First, mark terminal folders (leaf nodes) for processing
+    for folder in folders:
+        if not folder['children']:  # No children means it's a leaf node
+            to_process.add(folder['path'])
+            logger.debug("Marking leaf folder for processing: %s", folder['path'])
+
+    # Check if any parent folders should be processed
+    # If a parent folder has significantly more audio files than the sum of its children,
+    # or some children aren't marked for processing, we should process the parent too
+    all_folders_by_path = {folder['path']: folder for folder in folders}
+
+    # Work from bottom up (max depth to min)
+    for depth in range(max_depth, -1, -1):
+        for folder in [f for f in folders if f['depth'] == depth]:
+            if folder['path'] in to_process:
+                continue
+
+            # Count audio files in children that will be processed
+            child_file_count = sum(all_folders_by_path[child]['file_count']
+                                   for child in folder['children']
+                                   if child in to_process)
+
+            # If this folder has more files than what will be processed in children,
+            # or not all children will be processed, then process this folder too
+            if folder['file_count'] > child_file_count or any(child not in to_process for child in folder['children']):
+                to_process.add(folder['path'])
+                logger.debug("Marking parent folder for processing: %s (files: %d, child files: %d)",
+                             folder['path'], folder['file_count'], child_file_count)
+
+    # Return only folders that should be processed
+    result = [folder for folder in folders if folder['path'] in to_process]
+    logger.info("Determined %d folders should be processed (out of %d total folders with audio)",
+                len(result), len(folders))
+    return result
+
+
+def get_folder_audio_files(folder_path: str) -> List[str]:
+    """
+    Get all audio files in a specific folder.
+
+    Args:
+        folder_path: Path to folder
+
+    Returns:
+        List of paths to audio files in natural sort order
+    """
+    audio_files = glob.glob(os.path.join(folder_path, "*"))
+    filtered_files = filter_directories(audio_files)
+
+    # Sort files naturally (so that '2' comes before '10')
+    sorted_files = natural_sort(filtered_files)
+    logger.debug("Found %d audio files in folder: %s", len(sorted_files), folder_path)
+
+    return sorted_files
+
+
+def natural_sort(file_list: List[str]) -> List[str]:
+    """
+    Sort a list of files in natural order (so that 2 comes before 10).
+
+    Args:
+        file_list: List of file paths
+
+    Returns:
+        Naturally sorted list of file paths
+    """
+    def convert(text):
+        return int(text) if text.isdigit() else text.lower()
+
+    def alphanum_key(key):
+        return [convert(c) for c in re.split('([0-9]+)', key)]
+
+    return sorted(file_list, key=alphanum_key)
+
+
+def extract_folder_meta(folder_path: str) -> Dict[str, str]:
+    """
+    Extract metadata from folder name.
+    Common format might be: "YYYY - NNN - Title"
+
+    Args:
+        folder_path: Path to folder
+
+    Returns:
+        Dictionary with extracted metadata (year, number, title)
+    """
+    folder_name = os.path.basename(folder_path)
+    logger.debug("Extracting metadata from folder: %s", folder_name)
+
+    # Try to match the format "YYYY - NNN - Title"
+    match = re.match(r'(\d{4})\s*-\s*(\d+)\s*-\s*(.+)', folder_name)
+
+    meta = {
+        'year': '',
+        'number': '',
+        'title': folder_name  # Default to the folder name if parsing fails
+    }
+
+    if match:
+        year, number, title = match.groups()
+        meta['year'] = year
+        meta['number'] = number
+        meta['title'] = title.strip()
+        logger.debug("Extracted metadata: year=%s, number=%s, title=%s",
+                     meta['year'], meta['number'], meta['title'])
+    else:
+        # Try to match just the number format "NNN - Title"
+        match = re.match(r'(\d+)\s*-\s*(.+)', folder_name)
+        if match:
+            number, title = match.groups()
+            meta['number'] = number
+            meta['title'] = title.strip()
+            logger.debug("Extracted metadata: number=%s, title=%s",
+                         meta['number'], meta['title'])
+        else:
+            logger.debug("Could not extract structured metadata from folder name")
+
+    return meta
+
+
+def process_recursive_folders(root_path: str) -> List[Tuple[str, str, List[str]]]:
+    """
+    Process folders recursively and prepare data for conversion.
+
+    Args:
+        root_path: Root directory to start processing from
+
+    Returns:
+        List of tuples: (output_filename, folder_path, list_of_audio_files)
+    """
+    logger.info("Processing folders recursively: %s", root_path)
+
+    # Get folder info with hierarchy details
+    all_folders = find_audio_folders(root_path)
+
+    # Determine which folders should be processed
+    folders_to_process = determine_processing_folders(all_folders)
+
+    results = []
+    for folder_info in folders_to_process:
+        folder_path = folder_info['path']
+        audio_files = folder_info['audio_files']
+
+        # Use natural sort order to ensure consistent results
+        audio_files = natural_sort(audio_files)
+
+        meta = extract_folder_meta(folder_path)
+
+        if audio_files:
+            # Create output filename from metadata
+            if meta['number'] and meta['title']:
+                output_name = f"{meta['number']} - {meta['title']}"
+            else:
+                output_name = os.path.basename(folder_path)
+
+            # Clean up the output name (remove invalid filename characters)
+            output_name = re.sub(r'[<>:"/\\|?*]', '_', output_name)
+
+            results.append((output_name, folder_path, audio_files))
+            logger.debug("Created processing task: %s -> %s (%d files)",
+                         folder_path, output_name, len(audio_files))
+
+    logger.info("Created %d processing tasks", len(results))
+    return results
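The natural-sort and folder-name parsing helpers in the new module can be exercised in isolation. Here is a minimal sketch, reimplemented without the package's logging and type hints so it runs standalone (the folder names used below are hypothetical examples, not from the package):

```python
import os
import re

def natural_sort(file_list):
    """Sort so that 'track2' comes before 'track10'."""
    def convert(text):
        return int(text) if text.isdigit() else text.lower()
    # Splitting on ([0-9]+) keeps the digit runs, so they compare as integers
    return sorted(file_list, key=lambda k: [convert(c) for c in re.split('([0-9]+)', k)])

def extract_folder_meta(folder_path):
    """Parse 'YYYY - NNN - Title' or 'NNN - Title' from a folder name."""
    name = os.path.basename(folder_path)
    meta = {'year': '', 'number': '', 'title': name}
    m = re.match(r'(\d{4})\s*-\s*(\d+)\s*-\s*(.+)', name)
    if m:
        meta['year'], meta['number'], meta['title'] = m.group(1), m.group(2), m.group(3).strip()
    else:
        m = re.match(r'(\d+)\s*-\s*(.+)', name)
        if m:
            meta['number'], meta['title'] = m.group(1), m.group(2).strip()
    return meta
```

With this ordering, `track2.mp3` sorts before `track10.mp3`, and a folder named `1979 - 001 - Der erste Fall` yields year `1979`, number `001`, and title `Der erste Fall`.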
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: TonieToolbox
-Version: 0.1.7
+Version: 0.2.0
 Summary: Convert audio files to Tonie box compatible format
 Home-page: https://github.com/Quentendo64/TonieToolbox
 Author: Quentendo64
@@ -62,6 +62,7 @@ TonieToolbox allows you to create custom audio content for Tonie boxes by conver
 The tool provides several capabilities:
 
 - Convert single or multiple audio files into a Tonie-compatible format
+- Process complex folder structures recursively to handle entire audio collections
 - Analyze and validate existing Tonie files
 - Split Tonie files into individual opus tracks
 - Compare two TAF files for debugging differences
@@ -137,6 +138,22 @@ Or use a list file (.lst) containing paths to multiple audio files:
 tonietoolbox playlist.lst
 ```
 
+**Process folders recursively:**
+
+To process an entire folder structure with multiple audio folders:
+
+```
+tonietoolbox --recursive "Music/Albums"
+```
+
+This will scan all subfolders, identify those containing audio files, and create a TAF file for each folder.
+
+By default, all generated TAF files are saved in the `.\output` directory. If you want to save each TAF file in its source directory instead:
+
+```
+tonietoolbox --recursive --output-to-source "Music/Albums"
+```
+
 ### Advanced Options
 
 Run the following command to see all available options:
@@ -149,8 +166,8 @@ Output:
 ```
 usage: TonieToolbox.py [-h] [--ts TIMESTAMP] [--ffmpeg FFMPEG] [--opusenc OPUSENC]
                        [--bitrate BITRATE] [--cbr] [--append-tonie-tag TAG]
-                       [--no-tonie-header] [--info] [--split] [--recursive] [--
-                       [--detailed-compare] [--debug] [--trace] [--quiet] [--silent]
+                       [--no-tonie-header] [--info] [--split] [--recursive] [--output-to-source]
+                       [--compare FILE2] [--detailed-compare] [--debug] [--trace] [--quiet] [--silent]
                        SOURCE [TARGET]
 
 Create Tonie compatible file from Ogg opus file(s).
@@ -170,6 +187,8 @@ optional arguments:
   --no-tonie-header    do not write Tonie header
   --info               Check and display info about Tonie file
   --split              Split Tonie file into opus tracks
+  --recursive          Process folders recursively
+  --output-to-source   Save output files in the source directory instead of output directory
   --compare FILE2      Compare input file with another .taf file for debugging
   --detailed-compare   Show detailed OGG page differences when comparing files
 
@@ -222,6 +241,20 @@ tonietoolbox input.mp3 --ts ./reference.taf # Reference TAF for extraction
 tonietoolbox input.mp3 --bitrate 128
 ```
 
+#### Process a complex folder structure:
+
+Process an audiobook series with multiple folders:
+
+```
+tonietoolbox --recursive "\Hörspiele\Die drei Fragezeichen\Folgen"
+```
+
+Process a music collection with nested album folders and save TAF files alongside the source directories:
+
+```
+tonietoolbox --recursive --output-to-source "\Hörspiele\"
+```
+
 ## Technical Details
 
 ### TAF (Tonie Audio Format) File Structure
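The recursive mode decides which folders become TAF files by marking every leaf folder, then walking back up the tree and adding a parent only when it holds audio files its selected children do not account for. A simplified sketch of that selection rule, assuming a plain dict in place of the module's folder records (paths and counts below are hypothetical):

```python
def select_folders(folders):
    """folders: dict path -> {'files': int, 'children': [paths]}.
    Returns the set of folder paths that should each become one TAF file."""
    # Leaf folders (no children) are always processed
    selected = {p for p, info in folders.items() if not info['children']}
    # Bottom-up: deeper paths (more separators) are visited first
    for path in sorted(folders, key=lambda p: p.count('/'), reverse=True):
        info = folders[path]
        if path in selected or not info['children']:
            continue
        # Audio files already covered by children that will be processed
        covered = sum(folders[c]['files'] for c in info['children'] if c in selected)
        # Process the parent too if it has uncovered files or unprocessed children
        if info['files'] > covered or any(c not in selected for c in info['children']):
            selected.add(path)
    return selected
```

For example, an album root with two leaf subfolders yields one TAF per subfolder; the root is added only when its own file count exceeds what the children cover.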
File without changes