ruborg 0.9.0 → 0.9.4

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: a4d69dff5edaf281b1014e64653462df7d7af84be0f341db0c4fe389978f25b1
- data.tar.gz: b206422e04dab022cd19a8d4ea6f9bd399988fc27b6e810fd673dfb7de0fb9ba
+ metadata.gz: b10d4c2ab697830a9ac1b3e4ce1e13de22728577b2c5f7326465dd9ac877d688
+ data.tar.gz: 488993366f00681563d850468c28e50bbdeffae07aa2da7e1925451d813ab135
  SHA512:
- metadata.gz: f881b5b0908afaa16729339263d1584d861cd3a3d6b613599cbdb120340a86a2dc69408dc3f630d347769b990ea6c36ced1b538d17ba1517bebfb347c5c9ef2b
- data.tar.gz: ffd3ee5bd76b08ac9753d66ae95ac1aebaed2f4ba6db38ddc720e013970d2799a99202e4cf7295f969dd93ddf4f8b8b7df6d8730352a33f8ff86ce76eecbaff6
+ metadata.gz: 33792f380eb30ecbeb8d1036d807b11077fd3ddd55ab22abedcd4e7b7ee8d8aaffe3552e796ca816f396fc947e6c338a3f68a9d2330db041c1608cf2d1ac020a
+ data.tar.gz: 23cdd2f090b1a7c860a1568edc266088c59c39ecc54424523fbad3f8e260cb54da33e6400d645ea4df5d22906414a72cc6c7f02b5fae07d3906a8f914e9a10a2
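Aside (not part of the package diff): the values above are standard RubyGems artifact digests. Verifying one locally uses Ruby's stdlib `Digest`; the input string here is sample data, not the actual gem bytes.

```ruby
require "digest"

# SHA256 is the first digest family listed above; for a real check you
# would digest the downloaded metadata.gz / data.tar.gz bytes and
# compare against the published value.
digest = Digest::SHA256.hexdigest("hello")
digest == "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824" # => true
```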
data/CHANGELOG.md CHANGED
@@ -7,6 +7,75 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

  ## [Unreleased]

+ ## [0.9.4] - 2026-05-09
+
+ ### Fixed
+ - **`ruborg lock` now detects Borg cache locks** — previously, a stale cache lock (`~/.cache/borg/<id>/lock.exclusive`) was invisible to `ruborg lock`, which only checked repository-level lock files. The command now reports the repository lock and the cache lock state independently, so a cache-only stale lock no longer produces a misleading "No lock found" result.
+   - `Repository#cache_locked?` — pure filesystem check on the cache lock; reads the repo ID from `<repo>/config` without invoking Borg or requiring a passphrase
+   - `Repository#force_break_lock` — now also removes the cache lock and includes `"cache:lock.exclusive"` in its returned list when removed
+   - `borg break-lock` (used by `--break`) already handles both locks atomically — no change needed there
+   - Respects `BORG_CACHE_DIR` when set
+   - Fixes [#10](https://github.com/mpantel/ruborg/issues/10)
+ - **Per-file backup: O(N) `borg info` calls replaced with a single `borg list --format` pass** — previously, building the archive inventory called `borg info` once per archive not yet in the `ArchiveCache`. On large repositories this produced hundreds of sequential Borg subprocesses per run, each briefly acquiring the Borg cache lock, causing cumulative lock timeout errors. The inventory now uses a single `borg list --format '{name} {comment}\n'` call, which retrieves the name and metadata comment for all archives at once, warming the cache transparently in the same pass (combined warm-and-continue). Subsequent runs hit the cache with zero additional Borg calls.
+   - Fixes [#12](https://github.com/mpantel/ruborg/issues/12)
+ - **Progress display improvements** — backup runs now give richer real-time feedback (fixes [#14](https://github.com/mpantel/ruborg/issues/14)):
+   - **Elapsed time in all spinners**: any step that takes more than 3 seconds shows a live `(Ns)` counter so long-running stages are visually distinguishable from fast ones
+   - **Stage 1 is always animated**: the "Verifying repository" stage now spins immediately (passphrase fetch, auto-init, lock-wait) rather than flashing and disappearing
+   - **Standard backup file count**: `borg create` runs with `--list --filter AM`; the spinner updates in real time with the count of new/changed files being archived, and the completion line reports the final count (e.g. `✓ Archive created: my-repo-… — 42 new/changed file(s)`)
+   - **`Progress#update_spin`**: new method for updating a spinner's label mid-operation without restarting it
+
+ ## [0.9.3] - 2026-05-09
+
+ ### Added
+ - **`ruborg lock` command**: Check for and optionally break stale Borg repository locks
+   - `ruborg lock --repository NAME` — exits 0 if no lock, exits 1 if a lock is detected
+   - `ruborg lock --repository NAME --break --yes` — breaks the lock via `borg break-lock`
+   - `ruborg lock --repository NAME --force --yes` — force-removes lock files directly without invoking Borg (useful when Borg itself can't run)
+   - `--break` and `--force` are mutually exclusive; both require `--yes` as a safety guard
+   - `Repository#locked?` — pure filesystem check on `lock.exclusive` / `lock.roster`, no Borg invocation or passphrase required
+   - `Repository#break_lock` — delegates to `borg break-lock`; requires Borg >= 1.4.0
+   - `Repository#force_break_lock` — direct filesystem removal of lock files/dirs; no Borg needed
+   - Status output (lock present/absent) goes to stdout for scriptability; warning messages go to `$stderr`
+ - **Pre-flight lock detection during backup**: If a repository is locked when `backup` starts, ruborg waits and retries instead of failing immediately
+   - Polls every 5 seconds, prints elapsed time via the spinner
+   - Aborts with a clear error message after `lock_wait` seconds (default 300), suggesting `ruborg lock` to inspect or clear
+ - **`lock_wait` config key**: Optional integer (seconds). When set, also passed as `--lock-wait` to all Borg commands so Borg itself waits for mid-operation locks. Omitting the key leaves Borg at its own default (1 second)
+ - **Minimum Borg version**: Raised to 1.4.0; `break_lock` verifies this before invoking `borg break-lock`
+ - **`CLI::DEFAULT_LOCK_WAIT = 300`**: Named constant for the pre-flight wait timeout
+ - Fixes [#8](https://github.com/mpantel/ruborg/issues/8)
+
+ ## [0.9.2] - 2026-05-09
+
+ ### Added
+ - **CLI progress display**: Real-time feedback during backup operations
+   - Named stages printed to `$stderr`: `[1/3] Verifying repository`, `[2/3] Backing up files`, `[3/3] Pruning`
+   - Animated spinner for indeterminate operations (cache loading, `borg create`, pruning)
+   - Inline progress bar for per-file backup mode: `[=========> ] 42/120 filename.jpg`
+   - Stage count adapts to the operation: 2 stages without pruning, 3 with
+   - Degrades gracefully to plain text lines when output is piped or redirected (non-TTY)
+   - All progress output goes to `$stderr` — `--json` stdout and piped output remain clean
+   - No external gem dependencies — pure Ruby with ANSI `\r` rewrite
+   - Fixes [#6](https://github.com/mpantel/ruborg/issues/6)
+
+ ## [0.9.1] - 2026-05-09
+
+ ### Added
+ - **Archive metadata cache**: New `ArchiveCache` class eliminates N+1 `borg info` calls during per-file backup runs
+   - Metadata cached in `<repo_path>.ruborg_cache.json` — a sibling file to the repository
+   - Cache is shared across machines: any host with access to the repo path shares the same cache
+   - Local repos use `File::LOCK_EX` for safe concurrent reads/writes with merge-on-conflict
+   - SSH repos (`user@host:/path`, `ssh://user@host/path`) fetch/push the cache via `scp` with optimistic locking (fetch-fresh → merge → push), avoiding deadlocks on process crashes
+   - Only archives not yet in the cache trigger a `borg info` call; warm runs reduce subprocess overhead from O(n) to O(new archives)
+   - Fixes [#4](https://github.com/mpantel/ruborg/issues/4)
+ - **Catalog command**: New `ruborg catalog` command for fast, offline browsing of backed-up files
+   - Reads the local cache file — no `borg` subprocess calls needed
+   - `--search PATTERN` — filter entries by file path using a regex
+   - `--stats` — show aggregate statistics (total archives, unique files, total size, source dirs)
+   - `--json` — machine-readable JSON output; default is a human-friendly text table
+   - Works per-repository like all other commands (`--repository`)
+   - Supports SSH repos transparently via the same `scp`-based cache fetch
+ - **Bug fix**: `ArchiveCache` now normalises all loaded metadata to symbol keys, ensuring cache hits and cache misses return identical key types
+
  ## [0.9.0] - 2025-10-14

  ### Changed
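The single-pass inventory described in the 0.9.4 notes above can be sketched as follows. The archive names and comment payloads are invented for illustration; in the gem itself this parsing lives in `Backup#get_existing_archive_names`.

```ruby
# Hypothetical output of `borg list --format '{name} {comment}\n'`:
# one line per archive, with the comment (if any) after the first space.
output = <<~OUT
  docs-report_pdf-ab12cd34 /data/report.pdf|||2048|||deadbeef|||/data
  docs-notes_txt-ef56ab78 /data/notes.txt
OUT

# A single subprocess yields the whole name -> comment inventory,
# replacing the previous per-archive `borg info` loop.
inventory = output.each_line.to_h do |line|
  name, comment = line.chomp.split(" ", 2)
  [name, comment || ""]
end
```

Because every archive arrives in one pass, the cache can be warmed for all of them at once instead of paying one Borg subprocess per cache miss.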
data/README.md CHANGED
@@ -25,13 +25,15 @@ A friendly Ruby frontend for [Borg Backup](https://www.borgbackup.org/). Ruborg
  - 📈 **Summary View** - Quick overview of all repositories and their configurations
  - 🔧 **Custom Borg Path** - Support for custom Borg executable paths per repository
  - 🏠 **Hostname Validation** - NEW! Restrict backups to specific hosts (global or per-repository)
- - **Well-tested** - Comprehensive test suite with RSpec (297 examples, 0 failures)
+ - 🔒 **Lock Management** - Detect and break stale Borg repository locks with `ruborg lock`
+ - ⏳ **Lock-aware Backups** - Pre-flight lock detection with configurable wait timeout before backup
+ - ✅ **Well-tested** - Comprehensive test suite with RSpec (412 examples, 0 failures)
  - 🔒 **Security-focused** - Path validation, safe YAML loading, command injection protection

  ## Prerequisites

  - Ruby >= 3.2.0
- - [Borg Backup](https://www.borgbackup.org/) installed and available in PATH
+ - [Borg Backup](https://www.borgbackup.org/) >= 1.4.0 installed and available in PATH
  - [Passbolt CLI](https://github.com/passbolt/go-passbolt-cli) (optional, for password management)

  ### Installing Borg Backup
@@ -169,8 +171,9 @@ repositories:
  - **Source Deletion Safety**: `allow_remove_source` flag to explicitly enable `--remove-source` option (default: disabled)
  - **Skip Hash Check**: Optional `skip_hash_check` flag to skip content hash verification for faster backups (per-file mode only)
  - **Type-Safe Booleans**: Strict boolean validation prevents configuration errors (must use `true`/`false`, not strings)
- - **Global Settings**: Hostname, compression, encryption, auto_init, allow_remove_source, skip_hash_check, log_file, borg_path, borg_options, and retention apply to all repositories
- - **Per-Repository Overrides**: Any global setting can be overridden at the repository level (including hostname, allow_remove_source, skip_hash_check, and custom borg_path)
+ - **Lock Wait Timeout**: Optional `lock_wait` (integer, seconds) controlling how long ruborg waits for a locked repository before aborting. Also passed as `--lock-wait` to Borg when set. Default: 300 seconds (pre-flight); Borg default: 1 second (when not configured)
+ - **Global Settings**: Hostname, compression, encryption, auto_init, allow_remove_source, skip_hash_check, lock_wait, log_file, borg_path, borg_options, and retention apply to all repositories
+ - **Per-Repository Overrides**: Any global setting can be overridden at the repository level (including hostname, allow_remove_source, skip_hash_check, lock_wait, and custom borg_path)
  - **Custom Borg Path**: Specify a custom Borg executable path if borg is not in PATH or to use a specific version
  - **Retention Policies**: Define how many backups to keep (hourly, daily, weekly, monthly, yearly)
  - **Multiple Sources**: Each repository can have multiple backup sources with their own exclude patterns
@@ -472,6 +475,47 @@ Group: postgres
  Type: regular file
  ```

+ ### Manage Repository Locks
+
+ Borg uses lock files to prevent concurrent access. If a backup crashes, stale locks can block all subsequent operations. Use `ruborg lock` to inspect and clear them.
+
+ Borg maintains **two independent locks** — one on the repository itself and one on the local cache (`~/.cache/borg/<id>/lock.exclusive`). `ruborg lock` checks and reports both:
+
+ ```
+ Lock detected on repository 'documents' (/mnt/backups/documents):
+   Repository lock : clear
+   Cache lock : LOCKED
+ Run with --break --yes (via borg) or --force --yes (direct removal).
+ ```
+
+ ```bash
+ # Check both repository and cache locks (exits 0 = no lock, 1 = locked)
+ ruborg lock --repository documents
+
+ # Break both locks via borg break-lock (requires Borg >= 1.4.0)
+ ruborg lock --repository documents --break --yes
+
+ # Force-remove lock files directly (no Borg required, last resort)
+ ruborg lock --repository documents --force --yes
+ ```
+
+ **Lock-aware backups:** When `ruborg backup` starts and detects a lock, it waits up to `lock_wait` seconds (default 300) for the lock to clear before aborting:
+
+ ```
+ [1/2] Verifying repository: documents
+ Repository locked — waiting for lock to clear (5s / 300s)…
+ Repository locked — waiting for lock to clear (10s / 300s)…
+ ✓ Lock cleared
+ [2/2] Creating archive
+ ```
+
+ Configure the timeout in `ruborg.yml` (also passed as `--lock-wait` to Borg when set):
+
+ ```yaml
+ # Wait up to 60s for a lock before aborting; also passes --lock-wait 60 to borg commands
+ lock_wait: 60
+ ```
+
  ### Validate Repository Compatibility

  ```bash
@@ -659,6 +703,7 @@ See [SECURITY.md](SECURITY.md) for detailed security information and best practi
  | `list` | List archives or files in repository | `--config`, `--repository`, `--archive`, `--log` |
  | `restore ARCHIVE` | Restore files from archive | `--config`, `--repository`, `--destination`, `--path`, `--log` |
  | `metadata ARCHIVE` | Get file metadata from archive | `--config`, `--repository`, `--file`, `--log` |
+ | `lock` | Check for and optionally break a repository lock | `--config`, `--repository`, `--break`, `--force`, `--yes`, `--log` |
  | `info` | Show repository information | `--config`, `--repository`, `--log` |
  | `version` | Show ruborg version | None |

@@ -821,6 +866,13 @@ repositories:

  **Performance Note:** Per-file mode creates many archives (one per file). Borg handles this efficiently due to deduplication, but it's best suited for directories with hundreds to thousands of files rather than millions.

+ **Progress feedback:** During backup, ruborg shows real-time progress on stderr:
+ - All spinner stages display an elapsed-time counter after 3 seconds (`Preparing... (12s)`) so long-running steps are always visible
+ - Standard backup mode (`borg create`) streams file-level output and shows a running count of new/changed files in the spinner, with a final summary in the completion line (e.g. `✓ Archive created: my-repo-… — 42 new/changed file(s)`)
+ - Per-file mode shows a progress bar with current/total file count
+
+ **Inventory performance:** At the start of each per-file backup run, ruborg builds an inventory of existing archives using a single `borg list` call. Metadata is cached locally in a `.ruborg_cache.json` file beside the repository — subsequent runs serve the inventory entirely from cache with no additional Borg calls. The first run after introducing a large existing repository will be slower as it warms the cache, but all subsequent runs are fast regardless of how many archives exist.
+
  **Backup vs Retention:** The per-file `retention_mode` only affects how archives are created and pruned. Traditional backup commands still work normally - you can list, restore, and check per-file archives just like standard archives.

  ### Skip Hash Check for Faster Backups
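The file counting described under **Progress feedback** above can be sketched like this. The sample stderr lines are invented; in the gem the equivalent loop lives in `Backup#stream_borg_create`.

```ruby
# With `--list --filter AM`, borg prints one status line per file on
# stderr: 'A' marks added files, 'M' modified ones.
sample_stderr = [
  "A /data/photos/new.jpg",
  "M /data/notes/changed.txt",
  "U /data/unchanged.txt" # would normally be filtered out; shown for contrast
]

# Count only lines starting with "A " or "M ", as the spinner does.
count = sample_stderr.count { |line| line.match?(/\A[AM] /) }
```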
@@ -0,0 +1,189 @@
+ # frozen_string_literal: true
+
+ require "json"
+ require "open3"
+ require "tempfile"
+
+ module Ruborg
+   # Persistent cache of per-archive metadata, stored as a JSON file sibling to the
+   # Borg repository. Eliminates repeated `borg info` calls across runs.
+   #
+   # Supports local paths (File::LOCK_EX) and SSH paths (optimistic merge via scp).
+   # All metadata is stored and returned with symbol keys (:path, :size, :hash, :source_dir).
+   class ArchiveCache
+     SSH_PATTERN = %r{\A(?:ssh://|[^\s/]+@[^\s:]+:)}
+
+     def initialize(repo_path)
+       @repo_path = repo_path
+       @data = {}
+       @snapshot = {}
+       @loaded = false
+     end
+
+     def fetch
+       return self if @loaded
+
+       if ssh?
+         load_remote
+       else
+         load_local
+       end
+
+       @snapshot = snapshot(@data)
+       @loaded = true
+       self
+     end
+
+     def [](archive_name)
+       @data[archive_name]
+     end
+
+     def store(archive_name, metadata)
+       @data[archive_name] = symbolize(metadata)
+     end
+
+     # Returns all cached entries as an array of hashes, each including :archive_name.
+     def entries
+       @data.map { |archive_name, metadata| metadata.merge(archive_name: archive_name) }
+     end
+
+     def save_if_changed
+       return unless dirty?
+
+       if ssh?
+         save_remote
+       else
+         save_local
+       end
+     end
+
+     private
+
+     def dirty?
+       @data != @snapshot
+     end
+
+     def snapshot(hash)
+       hash.transform_values(&:dup)
+     end
+
+     def ssh?
+       SSH_PATTERN.match?(@repo_path)
+     end
+
+     def cache_path_for(path)
+       "#{path}.ruborg_cache.json"
+     end
+
+     def symbolize(metadata)
+       metadata.transform_keys(&:to_sym)
+     end
+
+     def normalize_archives(raw)
+       (raw || {}).transform_values { |v| symbolize(v) }
+     end
+
+     def load_local
+       path = cache_path_for(@repo_path)
+       return unless File.exist?(path)
+
+       File.open(path, "r") do |f|
+         f.flock(File::LOCK_SH)
+         parsed = JSON.parse(f.read)
+         @data = normalize_archives(parsed["archives"])
+       end
+     rescue JSON::ParserError
+       @data = {}
+     end
+
+     def save_local
+       path = cache_path_for(@repo_path)
+       File.open(path, File::RDWR | File::CREAT, 0o600) do |f|
+         f.flock(File::LOCK_EX)
+         existing = read_existing_local(f)
+         merged = existing.merge(@data)
+         f.rewind
+         f.write(JSON.generate({ "version" => 1, "archives" => stringify_for_storage(merged) }))
+         f.truncate(f.pos)
+       end
+     end
+
+     def read_existing_local(file)
+       content = file.read
+       return {} if content.empty?
+
+       normalize_archives(JSON.parse(content)["archives"])
+     rescue JSON::ParserError
+       {}
+     end
+
+     # JSON requires string keys; convert symbol keys back before writing.
+     def stringify_for_storage(data)
+       data.transform_values { |v| v.transform_keys(&:to_s) }
+     end
+
+     def parse_ssh
+       if @repo_path.start_with?("ssh://")
+         require "uri"
+         uri = URI.parse(@repo_path)
+         host = uri.user ? "#{uri.user}@#{uri.host}" : uri.host
+         host = "#{host}:#{uri.port}" if uri.port && uri.port != 22
+         [host, uri.path]
+       else
+         match = @repo_path.match(%r{\A([^\s/]+@[^\s:]+):(.+)\z})
+         return [nil, nil] unless match
+
+         [match[1], match[2]]
+       end
+     end
+
+     def load_remote
+       host, path = parse_ssh
+       return unless host
+
+       remote = "#{host}:#{cache_path_for(path)}"
+       loaded = nil
+       Tempfile.create(["ruborg_cache", ".json"]) do |tmp|
+         _, status = Open3.capture2e("scp", "-q", "-B", remote, tmp.path)
+         next unless status.success?
+
+         begin
+           loaded = normalize_archives(JSON.parse(File.read(tmp.path))["archives"])
+         rescue JSON::ParserError
+           loaded = {}
+         end
+       end
+       @data = loaded if loaded
+     end
+
+     def save_remote
+       host, path = parse_ssh
+       return unless host
+
+       remote = "#{host}:#{cache_path_for(path)}"
+       fresh = fetch_remote_fresh(remote)
+       merged = fresh.merge(@data)
+
+       Tempfile.create(["ruborg_cache_upload", ".json"]) do |tmp|
+         tmp.write(JSON.generate({ "version" => 1, "archives" => stringify_for_storage(merged) }))
+         tmp.flush
+         Open3.capture2e("scp", "-q", "-B", tmp.path, remote)
+       end
+     end
+
+     def fetch_remote_fresh(remote)
+       result = {}
+       Tempfile.create(["ruborg_cache_fresh", ".json"]) do |tmp|
+         _, status = Open3.capture2e("scp", "-q", "-B", remote, tmp.path)
+         next unless status.success?
+
+         begin
+           result = normalize_archives(JSON.parse(File.read(tmp.path))["archives"])
+         rescue JSON::ParserError
+           result = {}
+         end
+       end
+       result
+     end
+   end
+ end
data/lib/ruborg/backup.rb CHANGED
@@ -3,13 +3,15 @@
  module Ruborg
    # Backup operations using Borg
    class Backup
-     def initialize(repository, config:, retention_mode: "standard", repo_name: nil, logger: nil, skip_hash_check: false)
+     def initialize(repository, config:, retention_mode: "standard", repo_name: nil, logger: nil,
+                    skip_hash_check: false, progress: nil)
        @repository = repository
        @config = config
        @retention_mode = retention_mode
        @repo_name = repo_name
        @logger = logger
        @skip_hash_check = skip_hash_check
+       @progress = progress
      end

      def create(name: nil, remove_source: false)
@@ -27,36 +29,36 @@ module Ruborg
      def create_standard_archive(name, remove_source)
        archive_name = name || Time.now.strftime("%Y-%m-%d_%H-%M-%S")

-       # Show repository header in console only
        print_repository_header
+       @progress&.spin("Creating archive: #{archive_name}")

-       # Show progress in console
-       puts "Creating archive: #{archive_name}"
-
-       cmd = build_create_command(archive_name)
-
-       execute_borg_command(cmd)
+       count = if @progress
+                 stream_borg_create(archive_name)
+               else
+                 execute_borg_command(build_create_command(archive_name))
+                 0
+               end

-       # Log successful action
+       summary = count.positive? ? " — #{count} new/changed file(s)" : ""
+       @progress&.done("Archive created: #{archive_name}#{summary}")
        @logger&.info("[#{@repo_name}] Created archive #{archive_name} with #{@config.backup_paths.size} source(s)")
-       puts "✓ Archive created successfully"

        remove_source_files if remove_source
      end

      # rubocop:disable Metrics/AbcSize, Metrics/MethodLength, Metrics/PerceivedComplexity, Metrics/BlockNesting
      def create_per_file_archives(name_prefix, remove_source)
-       # Collect all files from backup paths
+       @progress&.spin("Collecting files...")
        files_to_backup = collect_files_from_paths(@config.backup_paths, @config.exclude_patterns)
+       @progress&.stop_spin

        raise BorgError, "No files found to backup" if files_to_backup.empty?

-       # Get list of existing archives for duplicate detection
+       @progress&.spin("Loading archive catalog...")
        existing_archives = get_existing_archive_names
+       @progress&.done("Catalog loaded — #{existing_archives.size} archive(s) known")

-       # Show repository header in console only
        print_repository_header
-
        puts "Found #{files_to_backup.size} file(s) to backup"

        backed_up_count = 0
@@ -78,8 +80,8 @@
          # Ensure archive name doesn't exceed 255 characters (filesystem limit)
          archive_name = name_prefix || build_archive_name(@repo_name, sanitized_filename, path_hash, file_mtime)

-         # Show progress in console
-         print " [#{index + 1}/#{files_to_backup.size}] Backing up: #{file_path}"
+         @progress&.bar(index + 1, files_to_backup.size, File.basename(file_path))
+         $stderr.print " [#{index + 1}/#{files_to_backup.size}] Backing up: #{file_path}" unless @progress

          # Check if archive already exists AND contains this exact file
          if existing_archives.key?(archive_name)
@@ -149,7 +151,7 @@
          cmd = build_per_file_create_command(archive_name, file_path, source_dir)

          execute_borg_command(cmd)
-         puts ""
+         puts "" unless @progress

          # Log successful action with details
          @logger&.info("[#{@repo_name}] Archived #{file_path} in archive #{archive_name}")
@@ -160,11 +162,13 @@
        end
        # rubocop:enable Metrics/BlockLength

-       if skipped_count.positive?
-         puts "✓ Per-file backup completed: #{backed_up_count} file(s) backed up, #{skipped_count} skipped (unchanged)"
-       else
-         puts "✓ Per-file backup completed: #{backed_up_count} file(s) backed up"
-       end
+       summary = if skipped_count.positive?
+                   "#{backed_up_count} file(s) backed up, #{skipped_count} skipped (unchanged)"
+                 else
+                   "#{backed_up_count} file(s) backed up"
+                 end
+       @progress&.done(summary)
+       puts "✓ Per-file backup completed: #{summary}" unless @progress
      end
      # rubocop:enable Metrics/AbcSize, Metrics/MethodLength, Metrics/PerceivedComplexity, Metrics/BlockNesting

@@ -343,19 +347,43 @@
        cmd
      end

-     def execute_borg_command(cmd)
+     def borg_env
        env = {}
        passphrase = @repository.instance_variable_get(:@passphrase)
        env["BORG_PASSPHRASE"] = passphrase if passphrase
        env["BORG_RELOCATED_REPO_ACCESS_IS_OK"] = "yes"
        env["BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK"] = "yes"
+       env
+     end

-       result = system(env, *cmd, in: "/dev/null")
+     def execute_borg_command(cmd)
+       result = system(borg_env, *cmd, in: "/dev/null")
        raise BorgError, "Borg command failed: #{cmd.join(" ")}" unless result

        result
      end

+     # Run borg create with --list --filter AM, streaming stderr to count
+     # new/changed files and update the spinner label in real time.
+     def stream_borg_create(archive_name)
+       require "open3"
+
+       cmd = build_create_command(archive_name) + ["--list", "--filter", "AM"]
+       file_count = 0
+
+       Open3.popen3(borg_env, *cmd, in: "/dev/null") do |_sin, _sout, stderr, wait_thr|
+         stderr.each_line do |line|
+           next unless line.match?(/\A[AM] /)
+
+           file_count += 1
+           @progress.update_spin("Creating archive: #{archive_name} — #{file_count} new/changed file(s)")
+         end
+         raise BorgError, "Borg command failed: #{cmd.join(" ")}" unless wait_thr.value.success?
+       end
+
+       file_count
+     end
+
      def remove_single_file(file_path)
        require "fileutils"

@@ -459,89 +487,57 @@
        puts "=" * 60
      end

-     # rubocop:disable Metrics/AbcSize, Metrics/MethodLength, Metrics/PerceivedComplexity
      def get_existing_archive_names
-       require "json"
        require "open3"

-       # First get list of archives
-       cmd = [@repository.borg_path, "list", @repository.path, "--json"]
        env = {}
        passphrase = @repository.instance_variable_get(:@passphrase)
        env["BORG_PASSPHRASE"] = passphrase if passphrase
        env["BORG_RELOCATED_REPO_ACCESS_IS_OK"] = "yes"
        env["BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK"] = "yes"

+       # Single borg call: fetch name + comment for every archive at once.
+       # Replaces the previous O(N) borg info loop and warms the cache in the same pass.
+       cmd = [@repository.borg_path, "list", @repository.path, "--format", "{name} {comment}\n"]
        stdout, stderr, status = Open3.capture3(env, *cmd)
        raise BorgError, "Failed to list archives: #{stderr}" unless status.success?

-       json_data = JSON.parse(stdout)
-       archives = json_data["archives"] || []
+       cache = ArchiveCache.new(@repository.path).fetch
+       result = {}

-       # Build hash by querying each archive individually for comment
-       # This is necessary because 'borg list' doesn't include comments
-       archives.each_with_object({}) do |archive, hash|
-         archive_name = archive["name"]
+       stdout.each_line do |line|
+         archive_name, comment = line.chomp.split(" ", 2)
+         next unless archive_name

-         # Query this specific archive to get the comment
-         info_cmd = [@repository.borg_path, "info", "#{@repository.path}::#{archive_name}", "--json"]
-         info_stdout, _, info_status = Open3.capture3(env, *info_cmd)
-
-         unless info_status.success?
-           # If we can't get info for this archive, skip it with defaults
-           hash[archive_name] = { path: "", size: 0, hash: "", source_dir: "" }
+         if (cached = cache[archive_name])
+           result[archive_name] = cached
            next
          end

-         info_data = JSON.parse(info_stdout)
-         archive_info = info_data["archives"]&.first || {}
-         comment = archive_info["comment"] || ""
-
-         # Parse comment based on format
-         # The comment field stores metadata as: path|||size|||hash|||source_dir (using ||| as delimiter)
-         # For backward compatibility, handle old formats:
-         # - Old format 1: plain path (no |||)
-         # - Old format 2: path|||hash (2 parts)
-         # - Old format 3: path|||size|||hash (3 parts)
-         # - New format: path|||size|||hash|||source_dir (4 parts)
-         if comment.include?("|||")
-           parts = comment.split("|||")
-           file_path = parts[0]
-           if parts.length >= 4
-             # New format: path|||size|||hash|||source_dir
-             file_size = parts[1].to_i
-             file_hash = parts[2] || ""
-             source_dir = parts[3] || ""
-           elsif parts.length >= 3
-             # Format 3: path|||size|||hash (no source_dir)
-             file_size = parts[1].to_i
-             file_hash = parts[2] || ""
-             source_dir = ""
-           else
-             # Old format: path|||hash (size and source_dir not available)
-             file_size = 0
-             file_hash = parts[1] || ""
-             source_dir = ""
-           end
+         metadata = parse_archive_comment(comment || "")
+         cache.store(archive_name, metadata)
+         result[archive_name] = metadata
+       end
+
+       cache.save_if_changed
+       result
+     end
+
+     def parse_archive_comment(comment)
+       if comment.include?("|||")
+         parts = comment.split("|||")
+         file_path = parts[0]
+         if parts.length >= 4
+           { path: file_path, size: parts[1].to_i, hash: parts[2] || "", source_dir: parts[3] || "" }
+         elsif parts.length >= 3
+           { path: file_path, size: parts[1].to_i, hash: parts[2] || "", source_dir: "" }
          else
-           # Oldest format: comment is just the path string
-           file_path = comment
-           file_size = 0
-           file_hash = ""
-           source_dir = ""
+           { path: file_path, size: 0, hash: parts[1] || "", source_dir: "" }
          end
-
-         hash[archive_name] = {
-           path: file_path,
-           size: file_size,
-           hash: file_hash,
-           source_dir: source_dir
-         }
+       else
+         { path: comment, size: 0, hash: "", source_dir: "" }
        end
-     rescue JSON::ParserError => e
-       raise BorgError, "Failed to parse archive info: #{e.message}"
      end
-     # rubocop:enable Metrics/AbcSize, Metrics/MethodLength, Metrics/PerceivedComplexity

      def find_next_version_name(base_name, existing_archives)
        version = 2
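For reference, the archive-comment format handled by `parse_archive_comment` above can be exercised standalone. This is a condensed re-statement of the same branching (path|||size|||hash|||source_dir, with older 1-, 2-, and 3-part layouts still accepted), not the gem's code verbatim.

```ruby
# Parse a per-file archive comment into its metadata fields.
def parse_comment(comment)
  # Oldest format: the comment is just the path string.
  return { path: comment, size: 0, hash: "", source_dir: "" } unless comment.include?("|||")

  parts = comment.split("|||")
  if parts.length >= 4
    # New format: path|||size|||hash|||source_dir
    { path: parts[0], size: parts[1].to_i, hash: parts[2] || "", source_dir: parts[3] || "" }
  elsif parts.length >= 3
    # path|||size|||hash (no source_dir)
    { path: parts[0], size: parts[1].to_i, hash: parts[2] || "", source_dir: "" }
  else
    # path|||hash (size and source_dir not available)
    { path: parts[0], size: 0, hash: parts[1] || "", source_dir: "" }
  end
end
```

Keeping the older layouts parseable is what lets 0.9.4 read comments written by archives created under earlier ruborg versions.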