leechtop_downloader 0.1.3
- checksums.yaml +7 -0
- checksums.yaml.gz.sig +0 -0
- data/Gemfile +5 -0
- data/LICENSE.txt +21 -0
- data/README.md +90 -0
- data/certs/leechtop_downloader-public_cert.pem +25 -0
- data/exe/leechtop +10 -0
- data/leechtop_downloader.gemspec +57 -0
- data/lib/leechtop_downloader/cli.rb +167 -0
- data/lib/leechtop_downloader/cli_helpers.rb +85 -0
- data/lib/leechtop_downloader/client.rb +145 -0
- data/lib/leechtop_downloader/file_manager.rb +82 -0
- data/lib/leechtop_downloader/version.rb +8 -0
- data/lib/leechtop_downloader.rb +16 -0
- data.tar.gz.sig +0 -0
- metadata +306 -0
- metadata.gz.sig +1 -0
checksums.yaml
ADDED
@@ -0,0 +1,7 @@
+---
+SHA256:
+  metadata.gz: f9cda9290e92e0b12a23dd395a4933c29a519041b710a1ef1b2268a930d1c067
+  data.tar.gz: a85bf047dd7deb3d5b37c56a8e8d0ffc4fb8e3ad345cc59a7e9ceb677efa6df1
+SHA512:
+  metadata.gz: 0334dd930713518bcebbb1b4220cdc100c4e86a6f962c7ec76bd461a79042acd509499086a8623ec85bddaf2f3276edfd029ab6d0187f57eb11de2348db1c2ef
+  data.tar.gz: ce9b2409c63f8288826294e481347263cbb16aa8afcdc7d85ed1e60512059d92b3680835045f6e7bd7df0c0d2fb0ee93e51b569e17ec64fbccc1363b03beee15
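These are the digests `gem install` checks against the downloaded artifacts. A consumer can re-verify a local copy by hand with Ruby's stdlib `digest`; a minimal sketch (the helper name and the sample payload are illustrative, not part of the gem):

```ruby
require 'digest'
require 'tempfile'

# Compare a file's SHA256 digest against a value recorded in checksums.yaml.
def checksum_ok?(path, expected_hex)
  Digest::SHA256.file(path).hexdigest == expected_hex
end

# Illustrative only: write a known payload and verify it against its own digest.
Tempfile.create('demo') do |f|
  f.write('hello')
  f.flush
  puts checksum_ok?(f.path, Digest::SHA256.hexdigest('hello')) # => true
end
```

The same pattern applies to the SHA512 entries via `Digest::SHA512`.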
checksums.yaml.gz.sig
ADDED
Binary file
data/Gemfile
ADDED
data/LICENSE.txt
ADDED
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2026 Vitalii Lazebnyi
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
data/README.md
ADDED
@@ -0,0 +1,90 @@
+# Leechtop Downloader
+
+A CLI utility for downloading files from leechtop.com.
+
+## Installation
+
+To install the gem globally, run:
+
+```bash
+gem install leechtop_downloader
+```
+
+This will make the `leechtop` command available in your terminal.
+
+## Local Development Setup
+
+Ruby `>= 3.2.0` is required to run the gem.
+
+1. Clone the repository and run:
+```bash
+bundle install
+```
+2. Install local git hooks (if desired):
+```bash
+cp .git/hooks/pre-commit .git/hooks/pre-commit.bak # back up any existing hook first
+# our automated hook is self-contained.
+```
+
+## Testing
+
+This project mandates 100% test coverage.
+
+```bash
+bundle exec rspec
+```
+
+## Static Analysis
+
+This project mandates zero linting or type-checking errors.
+
+```bash
+bundle exec rubocop
+bundle exec srb tc
+```
+
+## Usage
+
+The CLI can download files from direct `leechtop.com` links or automatically extract and download multiple links from any generic HTML webpage. All downloads are saved in the current folder by default, which can be overridden via the `--destination` option.
+
+### Direct Download
+Provide a direct Leechtop URL:
+```bash
+leechtop download "https://leechtop.com/example1"
+```
+
+### Batch Download via HTML Parsing
+Provide the URL of an HTML page (like a manga chapter index) containing `leechtop.com` links. The tool will parse the page, extract all valid links, and download them sequentially:
+```bash
+leechtop download "https://dl-raw.ac/example-manga-page/"
+```
+
+### Multiple URLs
+You can pass any combination of direct URLs and generic HTML pages at once:
+```bash
+leechtop download "https://leechtop.com/example1" "https://dl-raw.ac/example-page/"
+```
+
+### Batch Download via Text File
+You can also pass the path to a local text file that contains a list of URLs (one per line). The tool will read the file and download each link:
+```bash
+# links.txt contains URLs separated by newlines
+leechtop download links.txt
+```
+
+### Parallel & Concurrent Downloading
+Since `leechtop.com` does not allow parallel downloads, the tool enforces a strict global application lock. If you run multiple instances of the downloader concurrently in different terminal tabs, any secondary instance will detect the active process, print an error message, and exit safely to prevent conflicts.
+
+### Options
+
+**`--skip-existing`** (Default: `true`)
+By default, the tool skips files that already exist in the destination directory. To download a file again even when it already exists, pass `--no-skip-existing` (the new copy is saved under a uniquified name rather than overwriting the old one):
+```bash
+leechtop download "https://leechtop.com/example1" --no-skip-existing
+```
+
+**`--destination`** (Default: `.`)
+Specify a custom destination directory for downloaded files:
+```bash
+leechtop download "https://leechtop.com/example1" --destination="/path/to/custom/dir"
+```
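The global application lock described under "Parallel & Concurrent Downloading" is the behavior a file-based advisory lock gives you. A standalone sketch with Ruby's `File#flock` (the lock path is illustrative; this assumes an OS with advisory `flock` semantics, e.g. Linux):

```ruby
require 'tmpdir'

LOCK_PATH = File.join(Dir.tmpdir, 'leechtop_demo.lock') # illustrative path

# First "instance": acquire the exclusive lock without blocking.
first = File.open(LOCK_PATH, 'w')
puts 'first instance: lock acquired' if first.flock(File::LOCK_EX | File::LOCK_NB)

# Second "instance": the non-blocking attempt returns false, so a real
# downloader would print an error and exit instead of proceeding.
second = File.open(LOCK_PATH, 'w')
if second.flock(File::LOCK_EX | File::LOCK_NB)
  puts 'second instance: lock acquired (unexpected)'
else
  puts 'second instance: another downloader is already running'
end

first.close # closing the file releases the lock
second.close
```

`flock` returns `false` (rather than raising) when `LOCK_NB` is set and the lock is held elsewhere, which is what makes the "detect and exit" behavior cheap to implement.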
data/certs/leechtop_downloader-public_cert.pem
ADDED
@@ -0,0 +1,25 @@
+-----BEGIN CERTIFICATE-----
+MIIEOTCCAqGgAwIBAgIUBONmsFo7fxLGkUHsKe65onH+5ogwDQYJKoZIhvcNAQEL
+BQAwLDEqMCgGA1UEAwwhdml0YWxpaS5sYXplYm55aS5naXRodWJAZ21haWwuY29t
+MB4XDTI2MDQxNTEzNTMyOVoXDTM2MDQxMjEzNTMyOVowLDEqMCgGA1UEAwwhdml0
+YWxpaS5sYXplYm55aS5naXRodWJAZ21haWwuY29tMIIBojANBgkqhkiG9w0BAQEF
+AAOCAY8AMIIBigKCAYEA5zdezJE+Zrsk9j53/IxBfRoaqvLcPvrcfl+EaEwWhIkV
+0+08GtgS9N7VpB8cgaH2rkLJPjHIetsN/g5GMkDRsbJNXMrPVhxe1e1lI/r6j0Tm
+JD0PaU4r8VzitxkqY9BBmSI8GjDjAfrT1u5jSXH1iAtKUoq5F116uYrxbgiDpvqa
+kUQYcTf+6cZaPlF4KKhULnhKqs8u/NxyH4vPZyxEfg/gA4bODvcjW1A6d59BTiLV
+yrJPebwU+F+URb8aoQ4AGvPKFiG1Y1fxRHuPrOpyymFnBnjwgMyQkNHtzTeEriV9
+z1BUb10Pb/pjLBCrOvnStTPmcm1GE8HL2psYvlLvBlYqq3gzpQPBBKE3Jefa7ilC
+cYsBYOGpynpA9uu9cXKa4jtpPDGQ7Qrpnk9gHy/0xfbgLdAkRCoZJeR7wDL/1xmm
+nXwcUOLSOBj1Y4P9M+uQSQUZFTAaLbwyaBfE1gvVjwbTv3+rNP1ck1hACt+numGG
+m7R6MF+Hmh8pNnDBYpBNAgMBAAGjUzBRMB0GA1UdDgQWBBRbuaz1EhdG6T4KIeWr
+ac8LULxO9zAfBgNVHSMEGDAWgBRbuaz1EhdG6T4KIeWrac8LULxO9zAPBgNVHRMB
+Af8EBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IBgQBgfGTDIMxlm6o8o7dzCR0HosRm
+DSeUrx46EG1knTEqO05CooEHW98hrHa1/EwzkPaH1KhjjserQb6VtczMnySlfySu
+HbKWAIaqzlpf8zaE5tCiAKgFKr77b2XB7xKt25p/Vf/Kn/RLm3+sYQ2izzzMimei
+tBHo29cLV9bB/5HHFDwjrtdC5a0HJHiir0w4MCSDDGtnsKird4RKD2xESpoVjiNg
+L9nEGk25YDeIfKn8UtxduMv53T86CiBSsDcEb6oVjNiMOA0HFucwFKX+Vy5u0/qx
+ZRoLbZiCkTTGyNkBh4o6RCCTn37Lj98FBxYMbAHLNhEcKnAGxB7XP/CYsV4+QHOy
+h0PctylhIvm24QeKgIWJUWamFPfqdvlP660T4umxl2wMqvNpWmGMmGTMCraoKwxl
+zpp6uA15MXgTU7CxGivRgUKM64TqBMZKkOJcCtPkruSobxiR8cROrBNTqEbrmedM
+26EUEoxwDzfSzHU2SKz5pMR+8DClMUKB1rctg68=
+-----END CERTIFICATE-----
data/leechtop_downloader.gemspec
ADDED
@@ -0,0 +1,57 @@
+# frozen_string_literal: true
+
+require_relative 'lib/leechtop_downloader/version'
+
+Gem::Specification.new do |spec|
+  spec.name = 'leechtop_downloader'
+  spec.version = LeechtopDownloader::VERSION
+  spec.authors = ['Vitalii Lazebnyi']
+  spec.email = ['vitalii.lazebnyi.github@gmail.com']
+
+  spec.summary = 'CLI utility to download files from leechtop.com.'
+  spec.description = 'A robust command-line tool that automates extracting direct ' \
+                     'download links and saving files from leechtop.com. ' \
+                     'Supports batch processing, duplicate skipping, and safe concurrency.'
+  spec.homepage = 'https://github.com/VitaliiLazebnyi/leechtop-downloader'
+  spec.license = 'MIT'
+  spec.cert_chain = ['certs/leechtop_downloader-public_cert.pem']
+  if $PROGRAM_NAME.end_with?('gem') && File.exist?(File.expand_path('~/.gem/gem-private_key.pem'))
+    spec.signing_key = File.expand_path('~/.gem/gem-private_key.pem')
+  end
+  spec.required_ruby_version = '>= 3.2.0'
+  spec.metadata['rubygems_mfa_required'] = 'true'
+
+  # Specify which files should be added to the gem when it is released.
+  spec.files = %w[
+    Gemfile
+    LICENSE.txt
+    README.md
+    leechtop_downloader.gemspec
+  ] + Dir.glob('{exe,lib,certs}/**/*', base: __dir__).select do |f|
+    File.file?(File.expand_path(f, __dir__))
+  end
+
+  spec.bindir = 'exe'
+  spec.executables = spec.files.grep(%r{\Aexe/}) { |f| File.basename(f) }
+  spec.require_paths = ['lib']
+
+  # Runtime dependencies
+  spec.add_dependency 'dotenv', '~> 3.0'
+  spec.add_dependency 'down', '~> 5.0'
+  spec.add_dependency 'faraday', '~> 2.0'
+  spec.add_dependency 'nokogiri', '~> 1.0'
+  spec.add_dependency 'sorbet-runtime', '~> 0.6'
+  spec.add_dependency 'thor', '~> 1.0'
+
+  # Development dependencies
+  spec.add_development_dependency 'rspec', '~> 3.13'
+  spec.add_development_dependency 'rubocop', '~> 1.86'
+  spec.add_development_dependency 'rubocop-rspec', '~> 3.9'
+  spec.add_development_dependency 'rubocop-sorbet', '~> 0.12'
+  spec.add_development_dependency 'simplecov', '~> 0.22'
+  spec.add_development_dependency 'sorbet', '~> 0.6'
+  spec.add_development_dependency 'tapioca', '~> 0.19'
+  spec.add_development_dependency 'webmock', '~> 3.26'
+  spec.add_development_dependency 'yard', '~> 0.9'
+  spec.add_development_dependency 'yard-sorbet', '~> 0.9'
+end
data/lib/leechtop_downloader/cli.rb
ADDED
@@ -0,0 +1,167 @@
+# typed: true
+# frozen_string_literal: true
+
+require 'sorbet-runtime'
+require 'thor'
+require 'digest'
+require 'fileutils'
+require 'tmpdir'
+require_relative 'cli_helpers'
+
+module LeechtopDownloader
+  # CLI is the Thor application for handling commands.
+  class CLI < Thor
+    extend T::Sig
+    include CLIHelpers
+
+    # Ensure Thor exits with an error status on failure.
+    # @return [T::Boolean] true
+    def self.exit_on_failure?
+      true
+    end
+
+    method_option :skip_existing, type: :boolean, default: true, desc: 'Skip downloading already existing files'
+    method_option :destination, type: :string, default: '.', desc: 'Destination folder for downloaded files'
+    desc 'download URL_OR_FILE...', 'Download one or more URLs (or a text file with links) from leechtop.com'
+    # LT-REQ-001, LT-REQ-002
+    # Main entry point to download files.
+    # @param urls [Array<String>] The URLs or file paths to download from.
+    def download(*urls)
+      if urls.empty?
+        puts 'Error: You must provide at least one URL or file path.'
+        exit 1
+      end
+
+      with_app_lock do
+        skip_existing = options.fetch(:skip_existing, true)
+        destination = options.fetch(:destination, '.')
+
+        urls.each { |arg| process_argument(arg, skip_existing, destination) }
+      end
+    end
+
+    private
+
+    # Processes a single CLI argument (file or URL).
+    # @param arg [String] The argument string.
+    # @param skip_existing [T::Boolean] Whether to skip existing files.
+    # @param destination [String] The download destination directory.
+    sig { params(arg: String, skip_existing: T::Boolean, destination: String).void }
+    def process_argument(arg, skip_existing, destination)
+      if File.file?(arg)
+        File.readlines(arg, chomp: true).each do |line|
+          link = line.strip
+          process_url(link, skip_existing: skip_existing, destination: destination) unless link.empty?
+        end
+      else
+        process_url(arg, skip_existing: skip_existing, destination: destination)
+      end
+    end
+
+    # Processes a single URL to determine if it is a direct download or page parsing.
+    # @param url [String] The URL string.
+    # @param skip_existing [T::Boolean] Whether to skip existing files.
+    # @param destination [String] The download destination directory.
+    sig { params(url: String, skip_existing: T::Boolean, destination: String).void }
+    def process_url(url, skip_existing: true, destination: '.')
+      if url.match?(%r{^https?://(?:www\.)?leechtop\.com/})
+        download_single(url, skip_existing: skip_existing, destination: destination)
+      else
+        extract_and_download_links(url, skip_existing: skip_existing, destination: destination)
+      end
+    end
+
+    # Extracts multiple links from a given URL and downloads them.
+    # @param url [String] The URL string.
+    # @param skip_existing [T::Boolean] Whether to skip existing files.
+    # @param destination [String] The download destination directory.
+    sig { params(url: String, skip_existing: T::Boolean, destination: String).void }
+    def extract_and_download_links(url, skip_existing: true, destination: '.')
+      puts "Fetching links from: #{url}"
+      links = Client.extract_page_links(url)
+      if links.empty?
+        puts "No leechtop.com links found on #{url}"
+      else
+        puts "Found #{links.size} leechtop.com link(s). Downloading..."
+        links.each { |link| download_single(link, skip_existing: skip_existing, destination: destination) }
+      end
+    end
+
+    # Downloads a single specific file.
+    # @param url [String] The URL string.
+    # @param skip_existing [T::Boolean] Whether to skip existing files.
+    # @param destination [String] The download destination directory.
+    sig { params(url: String, skip_existing: T::Boolean, destination: String).void }
+    def download_single(url, skip_existing: true, destination: '.')
+      puts "Downloading: #{url}\nBypassing countdown and extracting direct link..."
+      metadata = Client.fetch_metadata(url)
+      return if skip_download?(metadata.fetch(:filename), skip_existing, destination)
+
+      download_with_lock(url, metadata, metadata.fetch(:filename), destination)
+    rescue Error => e
+      puts "Error downloading #{url}: #{e.message}"
+    end
+
+    # Wraps a block execution with a file-based lock.
+    # @param url [String] The URL string.
+    # @param filename_hint [String] The filename used for the lock.
+    # @param destination [String] The download destination directory.
+    # @yield The block to execute if the lock is acquired.
+    sig { params(url: String, filename_hint: String, destination: String, blk: T.proc.void).void }
+    def with_lock(url, filename_hint, destination, &blk)
+      lock_path = lock_path_for(url, filename_hint, destination)
+      acquired = acquire_lock(lock_path)
+      return skip_locked(filename_hint) unless acquired
+
+      blk.call
+    ensure
+      FileUtils.rm_f(lock_path) if acquired
+    end
+
+    # Prepares the destination and initiates the download with a lock.
+    # @param url [String] The URL string.
+    # @param metadata [Hash<Symbol, String>] Extracted metadata from the page.
+    # @param filename_hint [String] The filename to use.
+    # @param destination [String] The download destination directory.
+    sig { params(url: String, metadata: T::Hash[Symbol, String], filename_hint: String, destination: String).void }
+    def download_with_lock(url, metadata, filename_hint, destination)
+      FileUtils.mkdir_p(destination)
+      with_lock(url, filename_hint, destination) do
+        perform_download(url, metadata, filename_hint, destination)
+      end
+    end
+
+    # Performs the actual downloading process.
+    # @param url [String] The URL string.
+    # @param metadata [Hash<Symbol, String>] Extracted metadata from the page.
+    # @param filename_hint [String] The filename to use.
+    # @param destination [String] The download destination directory.
+    sig { params(url: String, metadata: T::Hash[Symbol, String], filename_hint: String, destination: String).void }
+    def perform_download(url, metadata, filename_hint, destination)
+      display_name = filename_hint.empty? ? 'unknown' : filename_hint
+      puts "Starting download of file: #{display_name}..."
+      io = Client.download_from_html(metadata.fetch(:html))
+      filename = extract_filename(url, io, filename_hint)
+      bytes_written, resolved_filename = FileManager.save_stream(io, filename, destination)
+      puts "Successfully downloaded #{resolved_filename} (#{bytes_written} bytes)"
+    end
+
+    # Helper method to extract filename, falling back to a default if necessary
+    # @param _url [String] The URL string.
+    # @param io [IO, StringIO, Tempfile] The IO object.
+    # @param filename_hint [String] The filename hint.
+    # @return [String] The final extracted filename.
+    sig { params(_url: String, io: T.any(IO, StringIO, Tempfile), filename_hint: String).returns(String) }
+    def extract_filename(_url, io, filename_hint = '')
+      if io.respond_to?(:original_filename) && T.unsafe(io).original_filename
+        name = T.cast(T.unsafe(io).original_filename, String)
+        fix_encoding(name)
+      elsif !filename_hint.empty?
+        filename_hint
+      else
+        # Fall back to a timestamp-based filename if no headers are present
+        "leechtop_#{Time.now.utc.to_i}.bin"
+      end
+    end
+  end
+end
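`process_url` routes each argument with a single host-pattern match. The same regex can be exercised standalone to see which inputs take the direct-download path versus the page-scraping path (the `route` helper is illustrative; the pattern is the one from `cli.rb`):

```ruby
# Host pattern process_url uses to decide between a direct download
# and page-link extraction.
LEECHTOP_URL = %r{^https?://(?:www\.)?leechtop\.com/}

def route(url)
  url.match?(LEECHTOP_URL) ? :direct_download : :extract_links
end

puts route('https://leechtop.com/example1')    # => direct_download
puts route('http://www.leechtop.com/file/42')  # => direct_download
puts route('https://dl-raw.ac/example-page/')  # => extract_links
```

Note the anchor and the escaped dot: a URL that merely *contains* `leechtop.com` in its path (e.g. `https://example.com/leechtop.com/`) still goes through link extraction.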
data/lib/leechtop_downloader/cli_helpers.rb
ADDED
@@ -0,0 +1,85 @@
+# typed: true
+# frozen_string_literal: true
+
+require 'sorbet-runtime'
+require 'tmpdir'
+
+module LeechtopDownloader
+  # CLIHelpers provides shared utility methods for CLI operations
+  module CLIHelpers
+    extend T::Sig
+
+    # Acquires an application-level lock to prevent concurrent executions.
+    # @yield The block to execute if the lock is acquired.
+    sig { params(blk: T.proc.void).void }
+    def with_app_lock(&blk)
+      lock_path = File.join(Dir.tmpdir, 'leechtop_downloader.lock')
+      File.open(lock_path, 'w') do |file|
+        unless file.flock(File::LOCK_EX | File::LOCK_NB)
+          Kernel.puts 'Error: Another instance of leechtop downloader is already running.'
+          Kernel.exit(1)
+        end
+
+        blk.call
+      end
+    end
+
+    # Checks if a download should be skipped because the file already exists.
+    # @param filename_hint [String] The name of the file.
+    # @param skip_existing [T::Boolean] Whether to skip existing files.
+    # @param destination [String] The download destination directory.
+    # @return [T::Boolean] true if the download should be skipped, false otherwise.
+    sig { params(filename_hint: String, skip_existing: T::Boolean, destination: String).returns(T::Boolean) }
+    def skip_download?(filename_hint, skip_existing, destination)
+      return false unless skip_existing && !filename_hint.empty?
+
+      if File.exist?(File.join(destination, filename_hint))
+        Kernel.puts "File #{filename_hint} already exists. Skipping."
+        true
+      else
+        false
+      end
+    end
+
+    # Generates a lock file path for a specific download.
+    # @param url [String] The URL being downloaded.
+    # @param filename_hint [String] The name of the file.
+    # @param destination [String] The download destination directory.
+    # @return [String] The lock file path.
+    sig { params(url: String, filename_hint: String, destination: String).returns(String) }
+    def lock_path_for(url, filename_hint, destination)
+      lock_filename = filename_hint.empty? ? "#{Digest::MD5.hexdigest(url)}.lock" : "#{filename_hint}.lock"
+      File.join(destination, lock_filename)
+    end
+
+    # Attempts to acquire a file-based lock.
+    # @param lock_path [String] The path to the lock file.
+    # @return [T::Boolean] true if lock acquired successfully, false if already exists.
+    sig { params(lock_path: String).returns(T::Boolean) }
+    def acquire_lock(lock_path)
+      File.new(lock_path, File::WRONLY | File::CREAT | File::EXCL).close
+      true
+    rescue Errno::EEXIST
+      false
+    end
+
+    # Prints a message indicating a file is currently being downloaded.
+    # @param filename_hint [String] The name of the file.
+    sig { params(filename_hint: String).void }
+    def skip_locked(filename_hint)
+      name = filename_hint.empty? ? 'File' : "File #{filename_hint}"
+      Kernel.puts "#{name} is currently being downloaded by another process. Skipping."
+    end
+
+    # Fixes mojibake or broken encoding strings to valid UTF-8.
+    # @param name [String] The possibly corrupted string.
+    # @return [String] A valid UTF-8 string.
+    sig { params(name: String).returns(String) }
+    def fix_encoding(name)
+      fixed = name.encode('iso-8859-1').force_encoding('utf-8')
+      fixed.valid_encoding? ? fixed : name
+    rescue EncodingError
+      name
+    end
+  end
+end
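The `fix_encoding` helper above repairs the classic mojibake where UTF-8 filename bytes were decoded as ISO-8859-1 somewhere upstream: re-encoding the mangled string back to Latin-1 recovers the original bytes, which are then reinterpreted as UTF-8. A standalone sketch of the same transcoding (a copy of the method for illustration, not the gem's module):

```ruby
# UTF-8 "é" (bytes C3 A9) decoded as ISO-8859-1 becomes the mojibake "Ã©";
# encoding back to ISO-8859-1 recovers bytes C3 A9, which are valid UTF-8.
def fix_encoding(name)
  fixed = name.encode('iso-8859-1').force_encoding('utf-8')
  fixed.valid_encoding? ? fixed : name
rescue EncodingError
  name
end

puts fix_encoding('chapitre-Ã©pisode.zip') # => chapitre-épisode.zip
puts fix_encoding('plain-ascii.zip')       # => plain-ascii.zip (unchanged)
```

Strings that cannot be represented in Latin-1 at all (e.g. CJK filenames) raise inside `encode` and are returned untouched by the `rescue`, so a genuinely-correct UTF-8 name is never corrupted.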
data/lib/leechtop_downloader/client.rb
ADDED
@@ -0,0 +1,145 @@
+# typed: true
+# frozen_string_literal: true
+
+require 'sorbet-runtime'
+require 'down'
+require 'faraday'
+require 'nokogiri'
+require 'json'
+
+module LeechtopDownloader
+  # Client handles the network requests to Leechtop.
+  class Client
+    extend T::Sig
+
+    # Error class for download-specific errors.
+    class DownloadError < Error; end
+
+    # LT-REQ-002, LT-REQ-003
+    # Downloads a file from the given direct URL.
+    # @param url [String] The URL to download.
+    # @return [IO, StringIO, Tempfile] The downloaded stream.
+    sig { params(url: String).returns(T.any(IO, StringIO, Tempfile)) }
+    def self.download(url)
+      direct_url = extract_direct_url(url)
+      Down.download(direct_url, open_timeout: 10, read_timeout: 60)
+    rescue Down::NotFound
+      raise DownloadError, 'The file could not be found on the host server (404 Not Found). It may have been deleted.'
+    rescue Down::ClientError, Down::ServerError => e
+      raise DownloadError, "The host server rejected the download: #{e.message}"
+    rescue StandardError => e
+      raise DownloadError, "Network or extraction failure: #{e.message}"
+    end
+
+    # Downloads a file by extracting the direct link from the page HTML.
+    # @param html [String] The HTML content of the page.
+    # @return [IO, StringIO, Tempfile] The downloaded stream.
+    sig { params(html: String).returns(T.any(IO, StringIO, Tempfile)) }
+    def self.download_from_html(html)
+      tokens = parse_tokens(html)
+      direct_url = fetch_ajax_direct_link(tokens)
+      Down.download(direct_url, open_timeout: 10, read_timeout: 60)
+    rescue Down::NotFound
+      raise DownloadError, 'The file could not be found on the host server (404 Not Found). It may have been deleted.'
+    rescue Down::ClientError, Down::ServerError => e
+      raise DownloadError, "The host server rejected the download: #{e.message}"
+    rescue StandardError => e
+      raise DownloadError, "Network or extraction failure: #{e.message}"
+    end
+
+    # Fetches the HTML and metadata (filename) from a given URL.
+    # @param url [String] The URL to fetch.
+    # @return [Hash<Symbol, String>] The parsed metadata including html and filename.
+    sig { params(url: String).returns(T::Hash[Symbol, String]) }
+    def self.fetch_metadata(url)
+      html = fetch_html(url)
+      document = Nokogiri::HTML(html, nil, 'UTF-8')
+      h4 = document.at_css('h4.mb-2')
+      filename = h4&.text&.strip || ''
+
+      { html: html, filename: filename }
+    end
+
+    # Extracts the direct URL from the page HTML via AJAX.
+    # @param url [String] The URL to fetch.
+    # @return [String] The extracted direct URL.
+    sig { params(url: String).returns(String) }
+    def self.extract_direct_url(url)
+      html = fetch_html(url)
+      tokens = parse_tokens(html)
+      fetch_ajax_direct_link(tokens)
+    end
+
+    # Fetches the raw HTML content from a URL.
+    # @param url [String] The URL to fetch.
+    # @return [String] The HTML content.
+    sig { params(url: String).returns(String) }
+    def self.fetch_html(url)
+      response = Faraday.new(request: { open_timeout: 10, timeout: 60 }).get(url)
+      raise "Failed to load page: HTTP #{response.status}" unless response.status == 200
+
+      response.body
+    end
+
+    # Parses the necessary tokens from the given HTML string.
+    # @param html [String] The HTML content.
+    # @return [Hash<Symbol, String>] The extracted tokens.
+    sig { params(html: String).returns(T::Hash[Symbol, String]) }
+    def self.parse_tokens(html)
+      document = Nokogiri::HTML(html, nil, 'UTF-8')
+
+      button = document.at_css('.go-download-direct')
+      raise 'Could not find download button in HTML' unless button
+
+      nonce_match = html.match(/"nonce":"([^"]+)"/)
+      raise 'Could not extract nonce from page' unless nonce_match
+
+      {
+        p: T.must(button['data-p']),
+        mb: T.must(button['data-mb']),
+        nonce: T.must(nonce_match[1])
+      }
+    end
+
+    # Submits the parsed tokens via AJAX to obtain the direct download link.
+    # @param tokens [Hash<Symbol, String>] The extracted tokens.
+    # @return [String] The direct download link.
+    sig { params(tokens: T::Hash[Symbol, String]).returns(String) }
+    def self.fetch_ajax_direct_link(tokens)
+      ajax_url = 'https://leechtop.com/wp-admin/admin-ajax.php'
+      body = URI.encode_www_form({ action: 'z_do_ajax', _action: 'directDownload' }.merge(tokens))
+      post_response = Faraday.new(request: { open_timeout: 10, timeout: 60 }).post(ajax_url, body)
+
+      raise "AJAX request failed: HTTP #{post_response.status}" unless post_response.status == 200
+
+      parse_ajax_response(post_response.body)
+    end
+
+    # Parses the AJAX response to extract the direct link.
+    # @param body [String] The JSON string.
+    # @return [String] The extracted direct download link.
+    sig { params(body: String).returns(String) }
+    def self.parse_ajax_response(body)
+      json = JSON.parse(body)
+      direct_link = json['mes']
+
+      raise "Server rejected download request: #{json}" if direct_link == 'no' || direct_link.nil?
+
+      direct_link
+    end
+
+    # Extracts all Leechtop links from a given page HTML.
+    # @param url [String] The URL to fetch.
+    # @return [Array<String>] The extracted URLs.
+    sig { params(url: String).returns(T::Array[String]) }
+    def self.extract_page_links(url)
+      html = fetch_html(url)
+      document = Nokogiri::HTML(html, nil, 'UTF-8')
+
+      document.css('a').filter_map do |a|
+        href = a['href']
+        href if href&.match?(%r{^https?://(?:www\.)?leechtop\.com/})
+      end
+    end
+  end
+end
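`parse_ajax_response` above expects the AJAX endpoint to return JSON whose `mes` field is either the direct URL or the literal string `"no"`. That branch logic can be exercised with canned payloads using only stdlib `json` (the payload shapes are inferred from the code above, not from any leechtop.com documentation):

```ruby
require 'json'

# Mirror of parse_ajax_response: return the direct link, or raise when the
# server answers "no" or omits the field entirely.
def parse_ajax_response(body)
  json = JSON.parse(body)
  direct_link = json['mes']
  raise "Server rejected download request: #{json}" if direct_link == 'no' || direct_link.nil?

  direct_link
end

puts parse_ajax_response('{"mes":"https://cdn.example/file.zip"}') # => https://cdn.example/file.zip

begin
  parse_ajax_response('{"mes":"no"}')
rescue RuntimeError => e
  puts "rejected: #{e.message}"
end
```

Checking for `nil` as well as `"no"` matters: a response with an unexpected shape (no `mes` key) fails loudly here instead of handing `Down.download` a `nil` URL.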
|
data/lib/leechtop_downloader/file_manager.rb
ADDED
@@ -0,0 +1,82 @@
# typed: true
# frozen_string_literal: true

require 'sorbet-runtime'
require 'fileutils'

module LeechtopDownloader
  # FileManager handles the physical saving of files using Metric/UTC standards.
  class FileManager
    extend T::Sig

    # LT-REQ-003, LT-REQ-004, LT-REQ-005, LT-REQ-006, BUG-LT-001
    # Saves an IO stream to the specified destination directory.
    # @param io [IO, StringIO, Tempfile] The stream to read from.
    # @param filename [String] The name of the file to save.
    # @param destination [String] The directory where the file will be saved.
    # @return [Array<Integer, String>] The number of bytes written and the final resolved filename.
    sig { params(io: T.any(IO, StringIO, Tempfile), filename: String, destination: String).returns([Integer, String]) }
    def self.save_stream(io, filename, destination = '.')
      # Enforcement of UTC and Metric standards
      FileUtils.mkdir_p(destination)
      resolved_filename = resolve_filename(filename, destination)
      filepath = File.join(destination, resolved_filename)

      bytes_written = write_stream(io, filepath)

      # Enforce UTC timestamps on the generated file
      utc_now = Time.now.utc
      File.utime(utc_now, utc_now, filepath)

      [bytes_written, resolved_filename]
    end

    # Resolves a unique filename within the specified directory.
    # @param original_filename [String] The original requested filename.
    # @param directory [String] The destination directory.
    # @return [String] A unique filename string.
    sig { params(original_filename: String, directory: String).returns(String) }
    def self.resolve_filename(original_filename, directory)
      return original_filename unless File.exist?(File.join(directory, original_filename))

      ext = File.extname(original_filename)
      base = File.basename(original_filename, ext)
      find_unique_filename(base, ext, directory, original_filename)
    end

    # Finds a unique filename by appending an incremental counter.
    # @param base [String] The base filename without extension.
    # @param ext [String] The file extension.
    # @param dir [String] The directory to check for existence.
    # @param orig [String] The original filename, used for warning messages.
    # @return [String] A unique filename string.
    sig { params(base: String, ext: String, dir: String, orig: String).returns(String) }
    def self.find_unique_filename(base, ext, dir, orig)
      counter = 1
      loop do
        name = "#{base}_#{counter}#{ext}"
        unless File.exist?(File.join(dir, name))
          puts "Warning: File '#{orig}' already exists. Saving as '#{name}'"
          return name
        end
        counter += 1
      end
    end

    # Writes chunks of data from an IO stream to a physical file.
    # @param io [IO, StringIO, Tempfile] The stream to read from.
    # @param filepath [String] The full path to the destination file.
    # @return [Integer] The total number of bytes written.
    sig { params(io: T.any(IO, StringIO, Tempfile), filepath: String).returns(Integer) }
    def self.write_stream(io, filepath)
      bytes_written = 0
      File.open(filepath, 'wb') do |file|
        while (chunk = io.read(8192)) # 8 KB metric chunking
          file.write(chunk)
          bytes_written += chunk.bytesize
        end
      end
      bytes_written
    end
  end
end
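The FileManager flow above is resolve-then-stream: pick a collision-free name, then write the IO in 8 KB chunks. A standalone stdlib-only sketch of that same flow (a hypothetical simplified rework, without the Sorbet sigs, UTC `utime`, or the collision warning) behaves like this:

```ruby
require 'fileutils'
require 'stringio'
require 'tmpdir'

# Hypothetical stand-in for FileManager.resolve_filename/find_unique_filename:
# appends _1, _2, ... until the name is free in dir.
def resolve_name(name, dir)
  return name unless File.exist?(File.join(dir, name))

  ext  = File.extname(name)
  base = File.basename(name, ext)
  n = 1
  n += 1 while File.exist?(File.join(dir, "#{base}_#{n}#{ext}"))
  "#{base}_#{n}#{ext}"
end

# Hypothetical stand-in for FileManager.save_stream: same chunked write,
# returning [bytes_written, final_name].
def save_stream(io, name, dir)
  FileUtils.mkdir_p(dir)
  final = resolve_name(name, dir)
  bytes = 0
  File.open(File.join(dir, final), 'wb') do |f|
    while (chunk = io.read(8192)) # same 8 KB chunk size as write_stream
      f.write(chunk)
      bytes += chunk.bytesize
    end
  end
  [bytes, final]
end

Dir.mktmpdir do |dir|
  puts save_stream(StringIO.new('hello'), 'a.txt', dir).inspect  # [5, "a.txt"]
  puts save_stream(StringIO.new('world!'), 'a.txt', dir).inspect # [6, "a_1.txt"]
end
```

Chunked reads keep memory flat regardless of file size, which is why the real `write_stream` never calls `io.read` without a length.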
data/lib/leechtop_downloader.rb
ADDED
@@ -0,0 +1,16 @@
# typed: true
# frozen_string_literal: true

require 'sorbet-runtime'
require 'dotenv/load'

# The main module for the LeechtopDownloader application.
module LeechtopDownloader
  # Standard Error class for the application.
  class Error < StandardError; end
end

require_relative 'leechtop_downloader/version'
require_relative 'leechtop_downloader/file_manager'
require_relative 'leechtop_downloader/client'
require_relative 'leechtop_downloader/cli'
data.tar.gz.sig
ADDED
Binary file
metadata
ADDED
@@ -0,0 +1,306 @@
--- !ruby/object:Gem::Specification
name: leechtop_downloader
version: !ruby/object:Gem::Version
  version: 0.1.3
platform: ruby
authors:
- Vitalii Lazebnyi
bindir: exe
cert_chain:
- |
  -----BEGIN CERTIFICATE-----
  MIIEOTCCAqGgAwIBAgIUBONmsFo7fxLGkUHsKe65onH+5ogwDQYJKoZIhvcNAQEL
  BQAwLDEqMCgGA1UEAwwhdml0YWxpaS5sYXplYm55aS5naXRodWJAZ21haWwuY29t
  MB4XDTI2MDQxNTEzNTMyOVoXDTM2MDQxMjEzNTMyOVowLDEqMCgGA1UEAwwhdml0
  YWxpaS5sYXplYm55aS5naXRodWJAZ21haWwuY29tMIIBojANBgkqhkiG9w0BAQEF
  AAOCAY8AMIIBigKCAYEA5zdezJE+Zrsk9j53/IxBfRoaqvLcPvrcfl+EaEwWhIkV
  0+08GtgS9N7VpB8cgaH2rkLJPjHIetsN/g5GMkDRsbJNXMrPVhxe1e1lI/r6j0Tm
  JD0PaU4r8VzitxkqY9BBmSI8GjDjAfrT1u5jSXH1iAtKUoq5F116uYrxbgiDpvqa
  kUQYcTf+6cZaPlF4KKhULnhKqs8u/NxyH4vPZyxEfg/gA4bODvcjW1A6d59BTiLV
  yrJPebwU+F+URb8aoQ4AGvPKFiG1Y1fxRHuPrOpyymFnBnjwgMyQkNHtzTeEriV9
  z1BUb10Pb/pjLBCrOvnStTPmcm1GE8HL2psYvlLvBlYqq3gzpQPBBKE3Jefa7ilC
  cYsBYOGpynpA9uu9cXKa4jtpPDGQ7Qrpnk9gHy/0xfbgLdAkRCoZJeR7wDL/1xmm
  nXwcUOLSOBj1Y4P9M+uQSQUZFTAaLbwyaBfE1gvVjwbTv3+rNP1ck1hACt+numGG
  m7R6MF+Hmh8pNnDBYpBNAgMBAAGjUzBRMB0GA1UdDgQWBBRbuaz1EhdG6T4KIeWr
  ac8LULxO9zAfBgNVHSMEGDAWgBRbuaz1EhdG6T4KIeWrac8LULxO9zAPBgNVHRMB
  Af8EBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IBgQBgfGTDIMxlm6o8o7dzCR0HosRm
  DSeUrx46EG1knTEqO05CooEHW98hrHa1/EwzkPaH1KhjjserQb6VtczMnySlfySu
  HbKWAIaqzlpf8zaE5tCiAKgFKr77b2XB7xKt25p/Vf/Kn/RLm3+sYQ2izzzMimei
  tBHo29cLV9bB/5HHFDwjrtdC5a0HJHiir0w4MCSDDGtnsKird4RKD2xESpoVjiNg
  L9nEGk25YDeIfKn8UtxduMv53T86CiBSsDcEb6oVjNiMOA0HFucwFKX+Vy5u0/qx
  ZRoLbZiCkTTGyNkBh4o6RCCTn37Lj98FBxYMbAHLNhEcKnAGxB7XP/CYsV4+QHOy
  h0PctylhIvm24QeKgIWJUWamFPfqdvlP660T4umxl2wMqvNpWmGMmGTMCraoKwxl
  zpp6uA15MXgTU7CxGivRgUKM64TqBMZKkOJcCtPkruSobxiR8cROrBNTqEbrmedM
  26EUEoxwDzfSzHU2SKz5pMR+8DClMUKB1rctg68=
  -----END CERTIFICATE-----
date: 1980-01-02 00:00:00.000000000 Z
dependencies:
- !ruby/object:Gem::Dependency
  name: dotenv
  requirement: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '3.0'
  type: :runtime
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '3.0'
- !ruby/object:Gem::Dependency
  name: down
  requirement: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '5.0'
  type: :runtime
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '5.0'
- !ruby/object:Gem::Dependency
  name: faraday
  requirement: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '2.0'
  type: :runtime
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '2.0'
- !ruby/object:Gem::Dependency
  name: nokogiri
  requirement: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '1.0'
  type: :runtime
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '1.0'
- !ruby/object:Gem::Dependency
  name: sorbet-runtime
  requirement: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '0.6'
  type: :runtime
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '0.6'
- !ruby/object:Gem::Dependency
  name: thor
  requirement: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '1.0'
  type: :runtime
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '1.0'
- !ruby/object:Gem::Dependency
  name: rspec
  requirement: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '3.13'
  type: :development
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '3.13'
- !ruby/object:Gem::Dependency
  name: rubocop
  requirement: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '1.86'
  type: :development
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '1.86'
- !ruby/object:Gem::Dependency
  name: rubocop-rspec
  requirement: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '3.9'
  type: :development
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '3.9'
- !ruby/object:Gem::Dependency
  name: rubocop-sorbet
  requirement: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '0.12'
  type: :development
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '0.12'
- !ruby/object:Gem::Dependency
  name: simplecov
  requirement: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '0.22'
  type: :development
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '0.22'
- !ruby/object:Gem::Dependency
  name: sorbet
  requirement: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '0.6'
  type: :development
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '0.6'
- !ruby/object:Gem::Dependency
  name: tapioca
  requirement: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '0.19'
  type: :development
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '0.19'
- !ruby/object:Gem::Dependency
  name: webmock
  requirement: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '3.26'
  type: :development
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '3.26'
- !ruby/object:Gem::Dependency
  name: yard
  requirement: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '0.9'
  type: :development
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '0.9'
- !ruby/object:Gem::Dependency
  name: yard-sorbet
  requirement: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '0.9'
  type: :development
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '0.9'
description: A robust command-line tool that automates extracting direct download
  links and saving files from leechtop.com. Supports batch processing, duplicate skipping,
  and safe concurrency.
email:
- vitalii.lazebnyi.github@gmail.com
executables:
- leechtop
extensions: []
extra_rdoc_files: []
files:
- Gemfile
- LICENSE.txt
- README.md
- certs/leechtop_downloader-public_cert.pem
- exe/leechtop
- leechtop_downloader.gemspec
- lib/leechtop_downloader.rb
- lib/leechtop_downloader/cli.rb
- lib/leechtop_downloader/cli_helpers.rb
- lib/leechtop_downloader/client.rb
- lib/leechtop_downloader/file_manager.rb
- lib/leechtop_downloader/version.rb
homepage: https://github.com/VitaliiLazebnyi/leechtop-downloader
licenses:
- MIT
metadata:
  rubygems_mfa_required: 'true'
rdoc_options: []
require_paths:
- lib
required_ruby_version: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
    - !ruby/object:Gem::Version
      version: 3.2.0
required_rubygems_version: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
    - !ruby/object:Gem::Version
      version: '0'
requirements: []
rubygems_version: 4.0.10
specification_version: 4
summary: CLI utility to download files from leechtop.com.
test_files: []
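Every dependency constraint in the metadata above uses the pessimistic operator `"~>"`, which pins the major version while allowing newer minors. A quick sketch of how RubyGems evaluates it (the sample versions are chosen for illustration):

```ruby
require 'rubygems'

# "~> 3.0" means ">= 3.0 and < 4.0" -- the shape of the dotenv constraint above.
req = Gem::Requirement.new('~> 3.0')

puts req.satisfied_by?(Gem::Version.new('3.0'))   # true
puts req.satisfied_by?(Gem::Version.new('3.9.2')) # true
puts req.satisfied_by?(Gem::Version.new('4.0'))   # false
```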
metadata.gz.sig
ADDED
@@ -0,0 +1 @@
Binary file