mapillary-downloader 0.7.8__tar.gz → 0.8.1__tar.gz

Files changed (20)
  1. {mapillary_downloader-0.7.8 → mapillary_downloader-0.8.1}/PKG-INFO +23 -25
  2. {mapillary_downloader-0.7.8 → mapillary_downloader-0.8.1}/README.md +22 -24
  3. {mapillary_downloader-0.7.8 → mapillary_downloader-0.8.1}/pyproject.toml +1 -1
  4. {mapillary_downloader-0.7.8 → mapillary_downloader-0.8.1}/src/mapillary_downloader/downloader.py +77 -110
  5. {mapillary_downloader-0.7.8 → mapillary_downloader-0.8.1}/src/mapillary_downloader/exif_writer.py +4 -4
  6. {mapillary_downloader-0.7.8 → mapillary_downloader-0.8.1}/src/mapillary_downloader/logging_config.py +5 -0
  7. {mapillary_downloader-0.7.8 → mapillary_downloader-0.8.1}/src/mapillary_downloader/webp_converter.py +0 -4
  8. {mapillary_downloader-0.7.8 → mapillary_downloader-0.8.1}/LICENSE.md +0 -0
  9. {mapillary_downloader-0.7.8 → mapillary_downloader-0.8.1}/src/mapillary_downloader/__init__.py +0 -0
  10. {mapillary_downloader-0.7.8 → mapillary_downloader-0.8.1}/src/mapillary_downloader/__main__.py +0 -0
  11. {mapillary_downloader-0.7.8 → mapillary_downloader-0.8.1}/src/mapillary_downloader/client.py +0 -0
  12. {mapillary_downloader-0.7.8 → mapillary_downloader-0.8.1}/src/mapillary_downloader/ia_check.py +0 -0
  13. {mapillary_downloader-0.7.8 → mapillary_downloader-0.8.1}/src/mapillary_downloader/ia_meta.py +0 -0
  14. {mapillary_downloader-0.7.8 → mapillary_downloader-0.8.1}/src/mapillary_downloader/ia_stats.py +0 -0
  15. {mapillary_downloader-0.7.8 → mapillary_downloader-0.8.1}/src/mapillary_downloader/metadata_reader.py +0 -0
  16. {mapillary_downloader-0.7.8 → mapillary_downloader-0.8.1}/src/mapillary_downloader/tar_sequences.py +0 -0
  17. {mapillary_downloader-0.7.8 → mapillary_downloader-0.8.1}/src/mapillary_downloader/utils.py +0 -0
  18. {mapillary_downloader-0.7.8 → mapillary_downloader-0.8.1}/src/mapillary_downloader/worker.py +0 -0
  19. {mapillary_downloader-0.7.8 → mapillary_downloader-0.8.1}/src/mapillary_downloader/worker_pool.py +0 -0
  20. {mapillary_downloader-0.7.8 → mapillary_downloader-0.8.1}/src/mapillary_downloader/xmp_writer.py +0 -0
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: mapillary_downloader
-Version: 0.7.8
+Version: 0.8.1
 Summary: Archive user data from Mapillary
 Author-email: Gareth Davidson <gaz@bitplane.net>
 Requires-Python: >=3.10
@@ -32,7 +32,7 @@ Provides-Extra: dev
 
 Download your Mapillary data before it's gone.
 
-## Installation
+## ▶️ Installation
 
 Installation is optional, you can prefix the command with `uvx` or `pipx` to
 download and run it. Or if you're oldskool you can do:
@@ -41,7 +41,7 @@ download and run it. Or if you're oldskool you can do:
 pip install mapillary-downloader
 ```
 
-## Usage
+## Usage
 
 First, get your Mapillary API access token from
 [the developer dashboard](https://www.mapillary.com/dashboard/developers)
@@ -75,12 +75,14 @@ The downloader will:
 * 🏛️ Check Internet Archive to avoid duplicate downloads
 * 📷 Download multiple users' images organized by sequence
 * 📜 Inject EXIF metadata (GPS coordinates, camera info, timestamps,
-  compass direction)
+  compass direction) and XMP data for panoramas.
 * 🗜️ Convert to WebP (by default) to save ~70% disk space
-* 🛟 Save progress so you can safely resume if interrupted
-* 📦 Tar sequence directories (by default) for faster uploads to Internet Archive
+* 🛟 Save progress every 5 minutes so you can safely resume if interrupted
+  ()
+* 📦 Tar sequence directories (by default) for faster uploads to Internet
+  Archive
 
-## WebP Conversion
+## 🖼️ WebP Conversion
 
 You'll need the `cwebp` binary installed:
 
@@ -94,11 +96,7 @@ brew install webp
 
 To disable WebP conversion and keep original JPEGs, use `--no-webp`:
 
-```bash
-mapillary-downloader --no-webp USERNAME
-```
-
-## Tarballs
+## 📦 Tarballs
 
 Images are organized by capture date (YYYY-MM-DD) for incremental archiving:
 
@@ -116,16 +114,20 @@ mapillary-username-quality/
 ```
 
 By default, these date directories are automatically tarred after download
-(resulting in `2024-01-15.tar`, `2024-01-16.tar`, etc.). This date-based
-organization enables:
+(`2024-01-15.tar`, `2024-01-16.tar`, etc.). Reasons:
 
-- **Incremental uploads** - Upload each day's tar as soon as it's ready
-- **Manageable file counts** - ~365 days/year × 10 years = 3,650 tars max
-- **Chronological organization** - Natural sorting and progress tracking
+* ⤴️ Incremental uploads. Add more to a collection. Well, eventually anyway.
+  This won't work yet unless you delete the jsonl file and start again.
+* 📂 Fewer files - ~365 days/year × 10 years = 3,650 tars max. IA only want
+  5k items per collection
+* 🧨 Avoids blowing up IA's derive workers. We don't want Brewster's computers
+  to create thumbs for 2 billion images.
+* 💾 I like to have a few inodes available for things other than this. I'm sure
+  you do too.
 
 To keep individual files instead of creating tars, use the `--no-tar` flag.
 
-## Internet Archive upload
+## 🏛️ Internet Archive upload
 
 I've written a bash tool to rip media then tag, queue, and upload to The
 Internet Archive. The metadata is in the same format. If you symlink your
@@ -139,15 +141,11 @@ See inlay for details:
 
 To see overall project progress, or an estimate, use `--stats`
 
-```bash
-mapillary-downloader --stats
-```
-
 ## 🚧 Development
 
 ```bash
 make dev   # Setup dev environment
-make test  # Run tests
+make test  # Run tests. Note: requires `exiftool`
 make dist  # Build the distribution
 make help  # See other make options
 ```
@@ -160,12 +158,12 @@ make help  # See other make options
 * [🐱 github](https://github.com/bitplane/mapillary_downloader)
 * [📀 rip](https://bitplane.net/dev/sh/rip)
 
-## License
+## ⚖️ License
 
 WTFPL with one additional clause
 
 1. Don't blame me
 
 Do wtf you want, but don't blame me if it makes jokes about the size of your
-disk drive.
+disk.
 
@@ -2,7 +2,7 @@
 
 Download your Mapillary data before it's gone.
 
-## Installation
+## ▶️ Installation
 
 Installation is optional, you can prefix the command with `uvx` or `pipx` to
 download and run it. Or if you're oldskool you can do:
@@ -11,7 +11,7 @@ download and run it. Or if you're oldskool you can do:
 pip install mapillary-downloader
 ```
 
-## Usage
+## Usage
 
 First, get your Mapillary API access token from
 [the developer dashboard](https://www.mapillary.com/dashboard/developers)
@@ -45,12 +45,14 @@ The downloader will:
 * 🏛️ Check Internet Archive to avoid duplicate downloads
 * 📷 Download multiple users' images organized by sequence
 * 📜 Inject EXIF metadata (GPS coordinates, camera info, timestamps,
-  compass direction)
+  compass direction) and XMP data for panoramas.
 * 🗜️ Convert to WebP (by default) to save ~70% disk space
-* 🛟 Save progress so you can safely resume if interrupted
-* 📦 Tar sequence directories (by default) for faster uploads to Internet Archive
+* 🛟 Save progress every 5 minutes so you can safely resume if interrupted
+  ()
+* 📦 Tar sequence directories (by default) for faster uploads to Internet
+  Archive
 
-## WebP Conversion
+## 🖼️ WebP Conversion
 
 You'll need the `cwebp` binary installed:
 
@@ -64,11 +66,7 @@ brew install webp
 
 To disable WebP conversion and keep original JPEGs, use `--no-webp`:
 
-```bash
-mapillary-downloader --no-webp USERNAME
-```
-
-## Tarballs
+## 📦 Tarballs
 
 Images are organized by capture date (YYYY-MM-DD) for incremental archiving:
 
@@ -86,16 +84,20 @@ mapillary-username-quality/
 ```
 
 By default, these date directories are automatically tarred after download
-(resulting in `2024-01-15.tar`, `2024-01-16.tar`, etc.). This date-based
-organization enables:
+(`2024-01-15.tar`, `2024-01-16.tar`, etc.). Reasons:
 
-- **Incremental uploads** - Upload each day's tar as soon as it's ready
-- **Manageable file counts** - ~365 days/year × 10 years = 3,650 tars max
-- **Chronological organization** - Natural sorting and progress tracking
+* ⤴️ Incremental uploads. Add more to a collection. Well, eventually anyway.
+  This won't work yet unless you delete the jsonl file and start again.
+* 📂 Fewer files - ~365 days/year × 10 years = 3,650 tars max. IA only want
+  5k items per collection
+* 🧨 Avoids blowing up IA's derive workers. We don't want Brewster's computers
+  to create thumbs for 2 billion images.
+* 💾 I like to have a few inodes available for things other than this. I'm sure
+  you do too.
 
 To keep individual files instead of creating tars, use the `--no-tar` flag.
 
-## Internet Archive upload
+## 🏛️ Internet Archive upload
 
 I've written a bash tool to rip media then tag, queue, and upload to The
 Internet Archive. The metadata is in the same format. If you symlink your
@@ -109,15 +111,11 @@ See inlay for details:
 
 To see overall project progress, or an estimate, use `--stats`
 
-```bash
-mapillary-downloader --stats
-```
-
 ## 🚧 Development
 
 ```bash
 make dev   # Setup dev environment
-make test  # Run tests
+make test  # Run tests. Note: requires `exiftool`
 make dist  # Build the distribution
 make help  # See other make options
 ```
@@ -130,11 +128,11 @@ make help  # See other make options
 * [🐱 github](https://github.com/bitplane/mapillary_downloader)
 * [📀 rip](https://bitplane.net/dev/sh/rip)
 
-## License
+## ⚖️ License
 
 WTFPL with one additional clause
 
 1. Don't blame me
 
 Do wtf you want, but don't blame me if it makes jokes about the size of your
-disk drive.
+disk.
@@ -1,7 +1,7 @@
 [project]
 name = "mapillary_downloader"
 description = "Archive user data from Mapillary"
-version = "0.7.8"
+version = "0.8.1"
 authors = [
     { name = "Gareth Davidson", email = "gaz@bitplane.net" }
 ]
@@ -5,6 +5,7 @@ import json
 import logging
 import os
 import shutil
+import threading
 import time
 from pathlib import Path
 from mapillary_downloader.utils import format_size, format_time, safe_json_save
@@ -146,6 +147,65 @@ class MapillaryDownloader:
         # Write atomically using utility function
         safe_json_save(self.progress_file, progress)
 
+    def _submit_metadata_batch(self, file_handle, quality_field, pool, convert_webp, process_results, base_submitted):
+        """Read metadata lines from current position, submit to workers.
+
+        Args:
+            file_handle: Open file positioned at read point
+            quality_field: Field name for quality URL (e.g., "thumb_1024_url")
+            pool: Worker pool to submit to
+            convert_webp: Whether to convert to webp
+            process_results: Callback to drain result queue
+            base_submitted: Running total for cumulative logging
+
+        Returns:
+            tuple: (submitted_count, skipped_count) for this batch
+        """
+        submitted = 0
+        skipped = 0
+
+        for line in file_handle:
+            line = line.strip()
+            if not line:
+                continue
+
+            try:
+                image = json.loads(line)
+            except json.JSONDecodeError:
+                continue
+
+            if image.get("__complete__"):
+                continue
+
+            image_id = image.get("id")
+            if not image_id:
+                continue
+
+            if image_id in self.downloaded:
+                skipped += 1
+                continue
+
+            if not image.get(quality_field):
+                continue
+
+            work_item = (
+                image,
+                str(self.output_dir),
+                self.quality,
+                convert_webp,
+                self.client.access_token,
+            )
+            pool.submit(work_item)
+            submitted += 1
+
+            total = base_submitted + submitted
+            if total % 1000 == 0:
+                logger.info(f"Queue: submitted {total:,} images")
+
+            process_results()
+
+        return submitted, skipped
+
     def download_user_data(self, bbox=None, convert_webp=False):
         """Download all images for a user using streaming queue-based architecture.
 
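The skip rules in the new `_submit_metadata_batch` can be exercised in isolation. Below is a minimal sketch, with a hypothetical `submit_batch` helper standing in for the method (the real one also needs a worker pool and an API client), fed by an in-memory JSONL stream:

```python
import io
import json

def submit_batch(file_handle, downloaded, quality_field, submit):
    """Apply the same skip rules as the method above, against any iterable."""
    submitted = 0
    skipped = 0
    for line in file_handle:
        line = line.strip()
        if not line:
            continue  # blank line
        try:
            image = json.loads(line)
        except json.JSONDecodeError:
            continue  # partially written line, retried on a later pass
        if image.get("__complete__"):
            continue  # end-of-feed marker, not an image
        image_id = image.get("id")
        if not image_id:
            continue
        if image_id in downloaded:
            skipped += 1
            continue  # already on disk
        if not image.get(quality_field):
            continue  # no URL at the requested quality
        submit(image)
        submitted += 1
    return submitted, skipped

queue = []
lines = io.StringIO(
    '{"id": "1", "thumb_1024_url": "u1"}\n'
    '{"id": "2", "thumb_1024_url": "u2"}\n'
    '{"__complete__": true}\n'
    'not json\n'
)
result = submit_batch(lines, {"2"}, "thumb_1024_url", queue.append)
print(result)  # (1, 1): one submitted, one skipped as already downloaded
```

Returning the skipped count separately is what lets the caller accumulate `skipped_count` for the final summary log.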
@@ -187,13 +247,13 @@ class MapillaryDownloader:
         total_bytes = 0
         failed_count = 0
         submitted = 0
+        skipped_count = 0
 
         try:
             # Step 3a: Fetch metadata from API in parallel (write-only, don't block on queue)
-            if not api_complete:
-                import threading
+            api_fetch_complete = threading.Event()
 
-                api_fetch_complete = threading.Event()
+            if not api_complete:
                 new_images_count = [0]  # Mutable so thread can update it
 
                 def fetch_api_metadata():
@@ -221,7 +281,7 @@ class MapillaryDownloader:
                 api_thread = threading.Thread(target=fetch_api_metadata, daemon=True)
                 api_thread.start()
             else:
-                api_fetch_complete = None
+                api_fetch_complete.set()
 
             # Step 3b: Tail metadata file and submit to workers
             logger.debug("Starting metadata tail and download queue feeder")
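The switch from a `None` sentinel to an always-created `threading.Event` (set immediately when the API pass is already done) is what lets the tail loop below collapse into a single exit check. A self-contained sketch of the pattern, with a toy producer and list standing in for the API thread and metadata file:

```python
import threading
import time

def tail_until_done(items, done, poll=0.01):
    """Poll a growing list until `done` is set, then drain one last time."""
    seen = []
    while True:
        seen.extend(items[len(seen):])
        if done.is_set():
            # The producer may have appended between our read and set();
            # everything it wrote happened before set(), so drain once more.
            seen.extend(items[len(seen):])
            return seen
        time.sleep(poll)

done = threading.Event()
items = []

def producer():
    # Stand-in for the API-fetch thread; if there were nothing to fetch,
    # we would just call done.set() up front, like the else branch above.
    for i in range(3):
        items.append(i)
        time.sleep(0.005)
    done.set()

threading.Thread(target=producer, daemon=True).start()
result = tail_until_done(items, done)
print(result)  # [0, 1, 2]
```

With the event pre-created and pre-set in the "already complete" case, the consumer never has to distinguish "no producer" from "producer finished".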
@@ -244,9 +304,10 @@ class MapillaryDownloader:
                 total_bytes += bytes_dl
 
                 # Log every download for first 10, then every 100
+                total_downloaded = len(self.downloaded)
                 should_log = downloaded_count <= 10 or downloaded_count % 100 == 0
                 if should_log:
-                    logger.info(f"Downloaded: {downloaded_count:,} ({format_size(total_bytes)})")
+                    logger.info(f"Downloaded: {total_downloaded:,} ({format_size(total_bytes)} this session)")
 
                 if downloaded_count % 100 == 0:
                     pool.check_throughput(downloaded_count)
@@ -260,117 +321,20 @@ class MapillaryDownloader:
 
             # Tail the metadata file and submit to workers
             while True:
-                # Check if API fetch is done and we've processed everything
-                if api_fetch_complete and api_fetch_complete.is_set():
-                    # Read any remaining lines
-                    if self.metadata_file.exists():
-                        with open(self.metadata_file) as f:
-                            f.seek(last_position)
-                            for line in f:
-                                line = line.strip()
-                                if not line:
-                                    continue
-
-                                try:
-                                    image = json.loads(line)
-                                except json.JSONDecodeError:
-                                    # Incomplete line, will retry
-                                    continue
-
-                                # Skip completion marker
-                                if image.get("__complete__"):
-                                    continue
-
-                                image_id = image.get("id")
-                                if not image_id:
-                                    continue
-
-                                # Skip if already downloaded or no quality URL
-                                if image_id in self.downloaded:
-                                    downloaded_count += 1
-                                    continue
-                                if not image.get(quality_field):
-                                    continue
-
-                                # Submit to workers
-                                work_item = (
-                                    image,
-                                    str(self.output_dir),
-                                    self.quality,
-                                    convert_webp,
-                                    self.client.access_token,
-                                )
-                                pool.submit(work_item)
-                                submitted += 1
-
-                                if submitted % 1000 == 0:
-                                    logger.info(f"Queue: submitted {submitted:,} images")
-
-                                # Process results while submitting
-                                process_results()
-
-                            last_position = f.tell()
-
-                    # API done and all lines processed, break
-                    break
-
-                # API still running or API was already complete, tail the file
                 if self.metadata_file.exists():
                     with open(self.metadata_file) as f:
                         f.seek(last_position)
-                        for line in f:
-                            line = line.strip()
-                            if not line:
-                                continue
-
-                            try:
-                                image = json.loads(line)
-                            except json.JSONDecodeError:
-                                # Incomplete line, will retry next iteration
-                                continue
-
-                            # Skip completion marker
-                            if image.get("__complete__"):
-                                continue
-
-                            image_id = image.get("id")
-                            if not image_id:
-                                continue
-
-                            # Skip if already downloaded or no quality URL
-                            if image_id in self.downloaded:
-                                downloaded_count += 1
-                                continue
-                            if not image.get(quality_field):
-                                continue
-
-                            # Submit to workers
-                            work_item = (
-                                image,
-                                str(self.output_dir),
-                                self.quality,
-                                convert_webp,
-                                self.client.access_token,
-                            )
-                            pool.submit(work_item)
-                            submitted += 1
-
-                            if submitted % 1000 == 0:
-                                logger.info(f"Queue: submitted {submitted:,} images")
-
-                            # Process results while submitting
-                            process_results()
-
+                        batch_submitted, batch_skipped = self._submit_metadata_batch(
+                            f, quality_field, pool, convert_webp, process_results, submitted
+                        )
+                        submitted += batch_submitted
+                        skipped_count += batch_skipped
                         last_position = f.tell()
 
-                # If API is already complete, we've read the whole file, so break
-                if api_fetch_complete is None:
+                if api_fetch_complete.is_set():
                     break
 
-                # Sleep briefly before next tail iteration
                 time.sleep(0.1)
-
-            # Process any results that came in
             process_results()
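The feeder loop above relies on re-opening the metadata file, seeking to `last_position`, reading whatever is new, and saving `f.tell()` for the next pass. A minimal sketch of that offset-tailing pattern against a temp file (the helper name is invented):

```python
import os
import tempfile

def read_new(path, last_position):
    """Read lines appended since last_position; return them and the new offset."""
    with open(path) as f:
        f.seek(last_position)
        lines = [line.strip() for line in f if line.strip()]
        return lines, f.tell()

fd, path = tempfile.mkstemp()
os.close(fd)

with open(path, "a") as f:
    f.write('{"id": "1"}\n')
batch1, pos = read_new(path, 0)      # first pass picks up the first record

with open(path, "a") as f:
    f.write('{"id": "2"}\n')
batch2, pos = read_new(path, pos)    # second pass sees only what's new

print(batch1, batch2)  # ['{"id": "1"}'] ['{"id": "2"}']
os.unlink(path)
```

Because the offset survives across opens, the writer (the API thread) and the reader never need to share a file handle.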
@@ -397,7 +361,7 @@ class MapillaryDownloader:
                 total_bytes += bytes_dl
 
                 if downloaded_count % 100 == 0:
-                    logger.info(f"Downloaded: {downloaded_count:,} ({format_size(total_bytes)})")
+                    logger.info(f"Downloaded: {len(self.downloaded):,} ({format_size(total_bytes)} this session)")
                     pool.check_throughput(downloaded_count)
                 # Save progress every 5 minutes
                 if time.time() - self._last_save_time >= 300:
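The `>= 300` check above is the periodic-checkpoint rule behind "save progress every 5 minutes". A sketch of the same rule with an injectable clock (class and method names are invented) so the behaviour can be verified without waiting:

```python
import time

class ProgressSaver:
    """Checkpoint at most once per `interval` seconds (300 = 5 minutes)."""

    def __init__(self, interval=300):
        self.interval = interval
        self._last_save_time = time.monotonic()
        self.saves = 0

    def maybe_save(self, now=None):
        now = time.monotonic() if now is None else now
        if now - self._last_save_time >= self.interval:
            self.saves += 1              # stand-in for _save_progress()
            self._last_save_time = now   # reset the window after saving
            return True
        return False

saver = ProgressSaver(interval=300)
t0 = saver._last_save_time
early = saver.maybe_save(t0 + 10)    # 10s in: too soon
late = saver.maybe_save(t0 + 301)    # past the 5-minute mark
print(early, late)  # False True
```

Resetting `_last_save_time` on each save keeps checkpoints evenly spaced no matter how bursty the download loop is.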
@@ -414,7 +378,10 @@ class MapillaryDownloader:
             self._save_progress()
         elapsed = time.time() - start_time
 
-        logger.info(f"Complete! Downloaded {downloaded_count:,} ({format_size(total_bytes)}), failed {failed_count:,}")
+        logger.info(
+            f"Complete! Downloaded {downloaded_count:,} this session ({format_size(total_bytes)}), "
+            f"{len(self.downloaded):,} total, skipped {skipped_count:,}, failed {failed_count:,}"
+        )
         logger.info(f"Total time: {format_time(elapsed)}")
 
         # Tar sequence directories for efficient IA uploads
@@ -85,8 +85,8 @@ def write_exif_to_image(image_path, metadata):
     exif_dict["0th"][piexif.ImageIFD.DateTime] = datetime_bytes
     exif_dict["Exif"][piexif.ExifIFD.DateTimeOriginal] = datetime_bytes
     exif_dict["Exif"][piexif.ExifIFD.DateTimeDigitized] = datetime_bytes
-    exif_dict["Exif"][piexif.ExifIFD.SubSecTimeOriginal] = ('000'+str(metadata["captured_at"] % 1000))[-3:]
-    exif_dict["Exif"][piexif.ExifIFD.SubSecTimeDigitized] = ('000'+str(metadata["captured_at"] % 1000))[-3:]
+    exif_dict["Exif"][piexif.ExifIFD.SubSecTimeOriginal] = ("000" + str(metadata["captured_at"] % 1000))[-3:]
+    exif_dict["Exif"][piexif.ExifIFD.SubSecTimeDigitized] = ("000" + str(metadata["captured_at"] % 1000))[-3:]
 
     # GPS data - prefer computed_geometry over geometry
     geometry = metadata.get("computed_geometry") or metadata.get("geometry")
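The `("000" + str(...))[-3:]` expression above zero-pads the millisecond remainder of the epoch-millisecond timestamp to exactly three digits, as the EXIF SubSecTime fields expect. For example (the timestamp is a made-up sample):

```python
# Epoch-millisecond timestamp (sample value for illustration).
captured_at = 1_700_000_000_007
ms = captured_at % 1000                 # 7
padded = ("000" + str(ms))[-3:]         # left-pad to exactly three digits
print(padded)                           # "007"
print(f"{ms:03d}")                      # equivalent, arguably clearer
```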
@@ -101,8 +101,8 @@ def write_exif_to_image(image_path, metadata):
         exif_dict["GPS"][piexif.GPSIFD.GPSLongitude] = decimal_to_dms(lon)
         exif_dict["GPS"][piexif.GPSIFD.GPSLongitudeRef] = b"E" if lon >= 0 else b"W"
 
-        # GPS Altitude - prefer computed_altitude over altitude
-        altitude = metadata.get("computed_altitude") or metadata.get("altitude")
+        # GPS Altitude - prefer raw altitude (photogrammetry can't compute elevation)
+        altitude = metadata.get("altitude") or metadata.get("computed_altitude")
 if altitude is not None:
             altitude_val = int(abs(altitude) * 100)
             logger.debug(f"Raw altitude value: {altitude}, calculated: {altitude_val}")
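One caveat about the `.get(...) or .get(...)` fallback chain above: `or` falls through on any falsy value, so a legitimate raw altitude of `0.0` (sea level) would still be replaced by `computed_altitude`. A sketch of the difference against an explicit `None` check (sample values are made up):

```python
# Sample metadata: a genuine sea-level GPS reading plus a computed value.
meta = {"altitude": 0.0, "computed_altitude": 12.5}

# `or` treats any falsy value as missing, so 0.0 is silently discarded:
via_or = meta.get("altitude") or meta.get("computed_altitude")
print(via_or)  # 12.5

# An explicit None check keeps the legitimate 0.0:
via_none = meta.get("altitude")
if via_none is None:
    via_none = meta.get("computed_altitude")
print(via_none)  # 0.0
```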
@@ -15,6 +15,7 @@ class ColoredFormatter(logging.Formatter):
         "DEBUG": "\033[94m",  # Blue
         "RESET": "\033[0m",
     }
+    CYAN = "\033[96m"
 
     def __init__(self, fmt=None, datefmt=None, use_color=True):
         """Initialize the formatter.
@@ -41,6 +42,10 @@ class ColoredFormatter(logging.Formatter):
         if levelname in self.COLORS:
             record.levelname = f"{self.COLORS[levelname]}{levelname}{self.COLORS['RESET']}"
 
+        # Color API messages differently so they stand out
+        if record.msg.startswith("API"):
+            record.msg = f"{self.CYAN}{record.msg}{self.COLORS['RESET']}"
+
         return super().format(record)
 
 
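The cyan-highlight idea can be reproduced with a minimal formatter. A sketch (class name and format string are invented; note one caveat of mutating `record.msg`: a record passed through two handlers would be tinted twice):

```python
import logging

CYAN = "\033[96m"
RESET = "\033[0m"

class ApiHighlightFormatter(logging.Formatter):
    def format(self, record):
        # Tint messages that start with "API"; everything else passes through.
        if isinstance(record.msg, str) and record.msg.startswith("API"):
            record.msg = f"{CYAN}{record.msg}{RESET}"
        return super().format(record)

fmt = ApiHighlightFormatter("%(levelname)s %(message)s")
rec = logging.LogRecord("demo", logging.INFO, __file__, 1,
                        "API page 3 fetched", None, None)
out = fmt.format(rec)
print(repr(out))  # "INFO \x1b[96mAPI page 3 fetched\x1b[0m"
```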
@@ -43,7 +43,6 @@ def convert_to_webp(jpg_path, output_path=None, delete_original=True):
             ["cwebp", "-metadata", "all", str(jpg_path), "-o", str(webp_path)],
             capture_output=True,
             text=True,
-            timeout=60,
         )
 
         if result.returncode != 0:
@@ -55,9 +54,6 @@ def convert_to_webp(jpg_path, output_path=None, delete_original=True):
             jpg_path.unlink()
         return webp_path
 
-    except subprocess.TimeoutExpired:
-        logger.error(f"cwebp conversion timed out for {jpg_path}")
-        return None
     except Exception as e:
         logger.error(f"Error converting {jpg_path} to WebP: {e}")
         return None
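Dropping `timeout=60` means `subprocess.run` now blocks until `cwebp` exits, so the `TimeoutExpired` handler became unreachable and was removed with it. The two behaviours, sketched with a stand-in command in place of `cwebp`:

```python
import subprocess
import sys

# Without `timeout`, run() waits indefinitely for the child to exit.
result = subprocess.run(
    [sys.executable, "-c", "print('ok')"],  # stand-in for the cwebp call
    capture_output=True,
    text=True,
)
print(result.returncode, result.stdout.strip())  # 0 ok

# With a timeout, a slow child raises TimeoutExpired instead of returning.
timed_out = False
try:
    subprocess.run(
        [sys.executable, "-c", "import time; time.sleep(5)"],
        capture_output=True,
        timeout=0.2,
    )
except subprocess.TimeoutExpired:
    timed_out = True
print(timed_out)  # True
```

The trade-off: without a timeout, a hung `cwebp` process would stall its worker indefinitely, but large panoramas can no longer be killed mid-conversion at the 60-second mark.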