appydave-tools 0.77.5 → 0.77.7

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: a66f70a8ea85fce726ea3da189166f623c7fb1ca007f70b1b5fb75f3fd828077
- data.tar.gz: 423a8d4309fb166084940efe6bcb7fbd3ab9d620094b006bf95194aef671719a
+ metadata.gz: cdcce7ddc1828ec8a28fcda70c442f1786443a9e3d04cbdf0a5f03b223f1ce8f
+ data.tar.gz: e419a736811babee9b523fa39cec97e219a903e388b140d4c3140757c15587c0
  SHA512:
- metadata.gz: d89bf4f8992c1ab8630a51af56dc97d9fbc651605593be1b70510cc5dd4837472dfc55514d12d9ff0542b439ec04d0c47752a3799136b7ca4cada342d49aa023
- data.tar.gz: 6fc875fe4006e621674af60652e5ff1241d5404f4d4bdd6e3374314e1460212bce2ec8cfc4129feab3a55ec18ae01fb33f08dc55679ad55d1efde3ed209a5bb0
+ metadata.gz: 14de60fca8cc46a8fbe74bb1033ce5d451e1770f22693f7ba641c3f82e95b1dbacec821ad72bd8034c702b268e8673bd7d66ba02c57af25c5c9f9eef61d61d8c
+ data.tar.gz: b8973bb5408519c418c01632b09d40060d18195e7dbd14d8f1e17759ff55f0e367492a65728bf09e372e18d48010997becd26d6b140b7125ae9db39422d1a3da
data/CHANGELOG.md CHANGED
@@ -1,3 +1,18 @@
+ ## [0.77.6](https://github.com/appydave/appydave-tools/compare/v0.77.5...v0.77.6) (2026-03-20)
+
+
+ ### Bug Fixes
+
+ * extract S3Archiver from S3Operations; S3Operations is now a thin delegation facade (B020 complete) ([41419e5](https://github.com/appydave/appydave-tools/commit/41419e5e365f62a8e7a16f3ce16e74c41a80d4a4))
+
+ ## [0.77.5](https://github.com/appydave/appydave-tools/compare/v0.77.4...v0.77.5) (2026-03-20)
+
+
+ ### Bug Fixes
+
+ * extract S3StatusChecker from S3Operations; status/calculate_sync_status/sync_timestamps delegate to focused class ([6766897](https://github.com/appydave/appydave-tools/commit/6766897421e37c5b3695db7a3e11f3696a1b2f09))
+ * remove redundant rubocop disable directives from S3StatusChecker (CI rubocop 1.85.1) ([9f15b34](https://github.com/appydave/appydave-tools/commit/9f15b34c7ab3e139d208d1a5692fc15a42bcfe1a))
+
  ## [0.77.4](https://github.com/appydave/appydave-tools/compare/v0.77.3...v0.77.4) (2026-03-20)
 
 
@@ -1,7 +1,7 @@
  # Project Backlog — AppyDave Tools
 
- **Last updated**: 2026-03-20 (batch-a-features campaign complete)
- **Total**: 41 | Pending: 4 | Done: 36 | Deferred: 0 | Rejected: 0
+ **Last updated**: 2026-03-20 (s3-operations-split campaign complete)
+ **Total**: 41 | Pending: 3 | Done: 37 | Deferred: 0 | Rejected: 0
 
  ---
 
@@ -10,9 +10,8 @@
  ### Medium Priority
  - [ ] B001 — FR-1: GPT Context token counting | Priority: medium
  - [ ] B012 — Arch: add integration tests for brand resolution end-to-end | Priority: medium
- - [ ] B007 — Performance: parallel git/S3 status checks for dam list | Priority: low (unblocked after B020)
+ - [ ] B007 — Performance: parallel git/S3 status checks for dam list | Priority: low (unblocked; B020 complete)
  - [ ] B008 — Performance: cache git/S3 status with 5-min TTL | Priority: low
- - [ ] B020 — Arch: split S3Operations (1,030 lines, mixed I/O + logic) | Priority: medium (next major campaign)
  - [ ] B040 — Fix: ProjectResolver.resolve raises RuntimeError not typed exception (found in B012) | Priority: low
 
  ---
@@ -55,6 +54,7 @@
  - [x] B001 — FR-1: GPT Context token counting (--tokens flag, warn to stderr, 100k/200k thresholds) | Completed: batch-a-features (2026-03-20), v0.77.0
  - [x] B010 — UX: terminal-width-aware separator lines + truncate_path in project_listing | Completed: batch-a-features (2026-03-20), v0.77.0
  - [x] B009 — UX: progress indicators for dam S3 operations (upload, download, status, archive, sync-ssd) | Completed: batch-a-features (2026-03-20), v0.77.1
+ - [x] B020 — Arch: split S3Operations into S3Base + S3Uploader + S3Downloader + S3StatusChecker + S3Archiver; S3Operations thin facade | Completed: s3-operations-split (2026-03-20), v0.77.6
 
  ---
 
@@ -5,18 +5,18 @@
  **Target**: 870 examples passing, rubocop 0, S3Operations ≤ 80 lines, each focused class standalone
 
  ## Summary
- - Total: 5 | Complete: 0 | In Progress: 0 | Pending: 5 | Failed: 0
+ - Total: 5 | Complete: 5 | In Progress: 0 | Pending: 0 | Failed: 0
 
  ## Pending
- - [ ] WU1-s3-base — Extract shared infrastructure into S3Base class; S3Operations inherits from it; all 870 tests pass with no public API change
- - [ ] WU2-s3-uploader — Create S3Uploader < S3Base; move upload + helpers; S3Operations.upload delegates
- - [ ] WU3-s3-downloader — Create S3Downloader < S3Base; move download + helpers; S3Operations.download delegates
- - [ ] WU4-s3-status-checker — Create S3StatusChecker < S3Base; move status/calculate_sync_status/sync_timestamps + helpers; S3Operations delegates
- - [ ] WU5-s3-archiver — Create S3Archiver < S3Base; move archive/cleanup/cleanup_local + helpers; S3Operations becomes thin facade; add s3_base require to lib/appydave/tools.rb
 
  ## In Progress
 
  ## Complete
+ - [x] WU1-s3-base — Extract shared infrastructure into S3Base class; S3Operations inherits from it; all 870 tests pass with no public API change (v0.77.2)
+ - [x] WU2-s3-uploader — Create S3Uploader < S3Base; move upload + helpers; S3Operations.upload delegates (v0.77.3)
+ - [x] WU3-s3-downloader — Create S3Downloader < S3Base; move download + helpers; S3Operations.download delegates (v0.77.4)
+ - [x] WU4-s3-status-checker — Create S3StatusChecker < S3Base; move status/calculate_sync_status/sync_timestamps; S3Operations delegates (v0.77.5)
+ - [x] WU5-s3-archiver — Create S3Archiver < S3Base; move archive/cleanup/cleanup_local + helpers; S3Operations is thin facade (v0.77.6)
 
  ## Failed / Needs Retry
 
@@ -167,9 +167,12 @@ module Appydave
  brand_info = Appydave::Tools::Configuration::Config.brands.get_brand(brand_arg)
  is_git_repo = Dir.exist?(File.join(brand_path, '.git'))
 
+ # Run all projects in parallel threads — each project's git + S3 checks are I/O-bound
  project_data = projects.map do |project|
- collect_project_data(brand_arg, brand_path, brand_info, project, is_git_repo, detailed: detailed, s3: s3)
- end
+ Thread.new do
+ collect_project_data(brand_arg, brand_path, brand_info, project, is_git_repo, detailed: detailed, s3: s3)
+ end
+ end.map(&:value)
 
  # Print common header
  puts "Projects in #{brand}:"
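The hunk above replaces a sequential map with a thread-per-project map joined via `Thread#value`. A minimal runnable sketch of that pattern, with illustrative names (`check_status`, `projects`) rather than the gem's actual methods:

```ruby
# Thread-per-item parallel map: spawn one thread per item so the
# I/O-bound calls overlap, then join with Thread#value, which blocks
# until each block finishes and returns its result in original order.
def check_status(project)
  sleep 0.01 # stand-in for an I/O-bound git/S3 status call
  "#{project}:ok"
end

projects = %w[proj-a proj-b proj-c]

# Spawn all threads first, then collect; total wall time is roughly
# one call's latency rather than the sum of all calls.
results = projects.map { |p| Thread.new { check_status(p) } }.map(&:value)
```

Because `map(&:value)` joins in list order, the result array lines up with the input even when the threads finish out of order.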
@@ -547,15 +550,12 @@ module Appydave
  size = FileHelper.calculate_directory_size(project_path)
  modified = File.mtime(project_path)
 
- # Check if project has uncommitted changes (if brand is git repo)
- git_status = if is_git_repo
- calculate_project_git_status(brand_path, project)
- else
- 'N/A'
- end
+ # Run git and S3 checks concurrently; both are I/O-bound (shell + network)
+ git_thread = is_git_repo ? Thread.new { calculate_project_git_status(brand_path, project) } : nil
+ s3_thread = s3 ? Thread.new { calculate_project_s3_sync_status(brand_arg, brand_info, project) } : nil
 
- # Calculate 3-state S3 sync status - only if requested (performance optimization)
- s3_sync = s3 ? calculate_project_s3_sync_status(brand_arg, brand_info, project) : 'N/A'
+ git_status = git_thread ? git_thread.value : 'N/A'
+ s3_sync = s3_thread ? s3_thread.value : 'N/A'
 
  result = {
  name: project,
@@ -0,0 +1,258 @@
+ # frozen_string_literal: true
+
+ module Appydave
+ module Tools
+ module Dam
+ # Handles S3 cleanup and SSD archive operations.
+ # Inherits shared infrastructure and helpers from S3Base.
+ class S3Archiver < S3Base
+ # Cleanup S3 files
+ def cleanup(force: false, dry_run: false)
+ s3_files = list_s3_files
+
+ if s3_files.empty?
+ puts "❌ No files found in S3 for #{brand}/#{project_id}"
+ return
+ end
+
+ puts "🗑️ Found #{s3_files.size} file(s) in S3 for #{brand}/#{project_id}"
+ puts ''
+
+ unless force
+ puts '⚠️ This will DELETE all files from S3 for this project.'
+ puts 'Use --force to confirm deletion.'
+ return
+ end
+
+ deleted = 0
+ failed = 0
+
+ s3_files.each do |s3_file|
+ key = s3_file['Key']
+ relative_path = extract_relative_path(key)
+
+ if delete_s3_file(key, dry_run: dry_run)
+ puts " ✓ Deleted: #{relative_path}"
+ deleted += 1
+ else
+ puts " ✗ Failed: #{relative_path}"
+ failed += 1
+ end
+ end
+
+ puts ''
+ puts '✅ Cleanup complete!'
+ puts " Deleted: #{deleted}, Failed: #{failed}"
+ end
+
+ # Cleanup local s3-staging files
+ def cleanup_local(force: false, dry_run: false)
+ project_dir = project_directory_path
+ staging_dir = File.join(project_dir, 's3-staging')
+
+ unless Dir.exist?(staging_dir)
+ puts "❌ No s3-staging directory found: #{staging_dir}"
+ return
+ end
+
+ files = Dir.glob("#{staging_dir}/**/*").select { |f| File.file?(f) }
+
+ if files.empty?
+ puts '❌ No files found in s3-staging/'
+ return
+ end
+
+ puts "🗑️ Found #{files.size} file(s) in local s3-staging/"
+ puts ''
+
+ unless force
+ puts '⚠️ This will DELETE all files from s3-staging/ for this project.'
+ puts 'Use --force to confirm deletion.'
+ return
+ end
+
+ deleted = 0
+ failed = 0
+
+ files.each do |file|
+ relative_path = file.sub("#{staging_dir}/", '')
+
+ if delete_local_file(file, dry_run: dry_run)
+ puts " ✓ Deleted: #{relative_path}"
+ deleted += 1
+ else
+ puts " ✗ Failed: #{relative_path}"
+ failed += 1
+ end
+ end
+
+ Dir.glob("#{staging_dir}/**/*").select { |d| File.directory?(d) }.sort.reverse.each do |dir|
+ Dir.rmdir(dir) if Dir.empty?(dir)
+ rescue StandardError
+ nil
+ end
+
+ puts ''
+ puts '✅ Local cleanup complete!'
+ puts " Deleted: #{deleted}, Failed: #{failed}"
+ end
+
+ # Archive project to SSD
+ def archive(force: false, dry_run: false)
+ ssd_backup = brand_info.locations.ssd_backup
+
+ unless ssd_backup && !ssd_backup.empty?
+ puts "❌ SSD backup location not configured for brand '#{brand}'"
+ return
+ end
+
+ unless Dir.exist?(ssd_backup)
+ puts "❌ SSD not mounted at #{ssd_backup}"
+ puts ' Please connect the SSD before archiving.'
+ return
+ end
+
+ project_dir = project_directory_path
+
+ unless Dir.exist?(project_dir)
+ puts "❌ Project not found: #{project_dir}"
+ puts ''
+ puts " Try: dam list #{brand} # See available projects"
+ return
+ end
+
+ ssd_project_dir = File.join(ssd_backup, project_id)
+
+ puts "📦 Archive: #{brand}/#{project_id}"
+ puts ''
+
+ if copy_to_ssd(project_dir, ssd_project_dir, dry_run: dry_run)
+ if force
+ delete_local_project(project_dir, dry_run: dry_run)
+ else
+ puts ''
+ puts '⚠️ Project copied to SSD but NOT deleted locally.'
+ puts ' Use --force to delete local copy after archiving.'
+ end
+ end
+
+ puts ''
+ puts dry_run ? '✅ Archive dry-run complete!' : '✅ Archive complete!'
+ end
+
+ private
+
+ def delete_s3_file(s3_key, dry_run: false)
+ if dry_run
+ puts " [DRY-RUN] Would delete: s3://#{brand_info.aws.s3_bucket}/#{s3_key}"
+ return true
+ end
+
+ s3_client.delete_object(
+ bucket: brand_info.aws.s3_bucket,
+ key: s3_key
+ )
+
+ true
+ rescue Aws::S3::Errors::ServiceError => e
+ puts " Error: #{e.message}"
+ false
+ end
+
+ def delete_local_file(file_path, dry_run: false)
+ if dry_run
+ puts " [DRY-RUN] Would delete: #{file_path}"
+ return true
+ end
+
+ File.delete(file_path)
+ true
+ rescue StandardError => e
+ puts " Error: #{e.message}"
+ false
+ end
+
+ def copy_to_ssd(source_dir, dest_dir, dry_run: false)
+ if Dir.exist?(dest_dir)
+ puts '⚠️ Already exists on SSD'
+ puts " Path: #{dest_dir}"
+ puts ' Skipping copy step'
+ return true
+ end
+
+ size = calculate_directory_size(source_dir)
+ puts '📋 Copy to SSD (excluding generated files):'
+ puts " From: #{source_dir}"
+ puts " To: #{dest_dir}"
+ puts " Size: #{file_size_human(size)}"
+ puts ''
+
+ if dry_run
+ puts ' [DRY-RUN] Would copy project to SSD (excluding node_modules, .git, etc.)'
+ return true
+ end
+
+ FileUtils.mkdir_p(dest_dir)
+ stats = copy_with_exclusions(source_dir, dest_dir)
+ puts " ✅ Copied to SSD (#{stats[:files]} files, excluded #{stats[:excluded]} generated files)"
+
+ true
+ rescue StandardError => e
+ puts " ✗ Failed to copy: #{e.message}"
+ false
+ end
+
+ def copy_with_exclusions(source_dir, dest_dir)
+ stats = { files: 0, excluded: 0 }
+
+ Dir.glob(File.join(source_dir, '**', '*'), File::FNM_DOTMATCH).each do |source_path|
+ next if File.directory?(source_path)
+ next if ['.', '..'].include?(File.basename(source_path))
+
+ relative_path = source_path.sub("#{source_dir}/", '')
+
+ if excluded_path?(relative_path)
+ stats[:excluded] += 1
+ next
+ end
+
+ dest_path = File.join(dest_dir, relative_path)
+ FileUtils.mkdir_p(File.dirname(dest_path))
+ FileUtils.cp(source_path, dest_path, preserve: true)
+ stats[:files] += 1
+ end
+
+ stats
+ end
+
+ def delete_local_project(project_dir, dry_run: false)
+ size = calculate_directory_size(project_dir)
+
+ puts ''
+ puts '🗑️ Delete local project:'
+ puts " Path: #{project_dir}"
+ puts " Size: #{file_size_human(size)}"
+ puts ''
+
+ if dry_run
+ puts ' [DRY-RUN] Would delete entire local folder'
+ return true
+ end
+
+ FileUtils.rm_rf(project_dir)
+ puts ' ✅ Deleted local folder'
+ puts " 💾 Freed: #{file_size_human(size)}"
+
+ true
+ rescue StandardError => e
+ puts " ✗ Failed to delete: #{e.message}"
+ false
+ end
+
+ def calculate_directory_size(dir_path)
+ FileHelper.calculate_directory_size(dir_path)
+ end
+ end
+ end
+ end
+ end
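The `copy_with_exclusions` method in the new file above walks the tree with `File::FNM_DOTMATCH` so dotfiles are included, skips directories, and counts excluded paths. A self-contained sketch of just that filtering logic; the `excluded?` predicate here is a hypothetical stand-in for the gem's `excluded_path?` helper, and `EXCLUDED_PREFIXES` is an assumed exclusion list:

```ruby
require 'fileutils'
require 'tmpdir'

# Hypothetical stand-in for the gem's excluded_path? helper.
EXCLUDED_PREFIXES = ['node_modules/', '.git/'].freeze

def excluded?(relative_path)
  EXCLUDED_PREFIXES.any? { |prefix| relative_path.start_with?(prefix) }
end

# Walk source_dir the way copy_with_exclusions does: FNM_DOTMATCH pulls
# in dotfiles, directories are skipped, and '.'/'..' entries are
# filtered by basename before the path is counted.
def scan_with_exclusions(source_dir)
  stats = { files: 0, excluded: 0 }
  Dir.glob(File.join(source_dir, '**', '*'), File::FNM_DOTMATCH).each do |path|
    next if File.directory?(path)
    next if ['.', '..'].include?(File.basename(path))

    relative_path = path.sub("#{source_dir}/", '')
    excluded?(relative_path) ? stats[:excluded] += 1 : stats[:files] += 1
  end
  stats
end
```

Without `FNM_DOTMATCH`, a plain `**/*` glob would silently skip files like `.env`, which is why the basename guard against `.` and `..` is paired with it.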
@@ -3,9 +3,9 @@
  module Appydave
  module Tools
  module Dam
- # S3 operations for VAT (upload, download, status, cleanup).
+ # Thin delegation facade for S3 operations.
+ # Each method delegates to a focused class that handles one concern.
  # Inherits shared infrastructure and helpers from S3Base.
- # Will become a thin delegation facade as focused classes are extracted (B020).
  class S3Operations < S3Base
  # Upload files from s3-staging/ to S3
  def upload(dry_run: false)
@@ -24,140 +24,17 @@ module Appydave
 
  # Cleanup S3 files
  def cleanup(force: false, dry_run: false)
- s3_files = list_s3_files
-
- if s3_files.empty?
- puts "❌ No files found in S3 for #{brand}/#{project_id}"
- return
- end
-
- puts "🗑️ Found #{s3_files.size} file(s) in S3 for #{brand}/#{project_id}"
- puts ''
-
- unless force
- puts '⚠️ This will DELETE all files from S3 for this project.'
- puts 'Use --force to confirm deletion.'
- return
- end
-
- deleted = 0
- failed = 0
-
- s3_files.each do |s3_file|
- key = s3_file['Key']
- relative_path = extract_relative_path(key)
-
- if delete_s3_file(key, dry_run: dry_run)
- puts " ✓ Deleted: #{relative_path}"
- deleted += 1
- else
- puts " ✗ Failed: #{relative_path}"
- failed += 1
- end
- end
-
- puts ''
- puts '✅ Cleanup complete!'
- puts " Deleted: #{deleted}, Failed: #{failed}"
+ S3Archiver.new(brand, project_id, **delegated_opts).cleanup(force: force, dry_run: dry_run)
  end
 
  # Cleanup local s3-staging files
  def cleanup_local(force: false, dry_run: false)
- project_dir = project_directory_path
- staging_dir = File.join(project_dir, 's3-staging')
-
- unless Dir.exist?(staging_dir)
- puts "❌ No s3-staging directory found: #{staging_dir}"
- return
- end
-
- files = Dir.glob("#{staging_dir}/**/*").select { |f| File.file?(f) }
-
- if files.empty?
- puts '❌ No files found in s3-staging/'
- return
- end
-
- puts "🗑️ Found #{files.size} file(s) in local s3-staging/"
- puts ''
-
- unless force
- puts '⚠️ This will DELETE all files from s3-staging/ for this project.'
- puts 'Use --force to confirm deletion.'
- return
- end
-
- deleted = 0
- failed = 0
-
- files.each do |file|
- relative_path = file.sub("#{staging_dir}/", '')
-
- if delete_local_file(file, dry_run: dry_run)
- puts " ✓ Deleted: #{relative_path}"
- deleted += 1
- else
- puts " ✗ Failed: #{relative_path}"
- failed += 1
- end
- end
-
- # Remove empty directories
- Dir.glob("#{staging_dir}/**/*").select { |d| File.directory?(d) }.sort.reverse.each do |dir|
- Dir.rmdir(dir) if Dir.empty?(dir)
- rescue StandardError
- nil
- end
-
- puts ''
- puts '✅ Local cleanup complete!'
- puts " Deleted: #{deleted}, Failed: #{failed}"
+ S3Archiver.new(brand, project_id, **delegated_opts).cleanup_local(force: force, dry_run: dry_run)
  end
 
  # Archive project to SSD
  def archive(force: false, dry_run: false)
- ssd_backup = brand_info.locations.ssd_backup
-
- unless ssd_backup && !ssd_backup.empty?
- puts "❌ SSD backup location not configured for brand '#{brand}'"
- return
- end
-
- unless Dir.exist?(ssd_backup)
- puts "❌ SSD not mounted at #{ssd_backup}"
- puts ' Please connect the SSD before archiving.'
- return
- end
-
- project_dir = project_directory_path
-
- unless Dir.exist?(project_dir)
- puts "❌ Project not found: #{project_dir}"
- puts ''
- puts " Try: dam list #{brand} # See available projects"
- return
- end
-
- # Determine SSD destination path
- ssd_project_dir = File.join(ssd_backup, project_id)
-
- puts "📦 Archive: #{brand}/#{project_id}"
- puts ''
-
- # Step 1: Copy to SSD
- if copy_to_ssd(project_dir, ssd_project_dir, dry_run: dry_run)
- # Step 2: Delete local project (if force is true)
- if force
- delete_local_project(project_dir, dry_run: dry_run)
- else
- puts ''
- puts '⚠️ Project copied to SSD but NOT deleted locally.'
- puts ' Use --force to delete local copy after archiving.'
- end
- end
-
- puts ''
- puts dry_run ? '✅ Archive dry-run complete!' : '✅ Archive complete!'
+ S3Archiver.new(brand, project_id, **delegated_opts).archive(force: force, dry_run: dry_run)
  end
 
  # Calculate 3-state S3 sync status
@@ -177,126 +54,6 @@ module Appydave
  def delegated_opts
  { brand_info: brand_info, brand_path: brand_path, s3_client: @s3_client_override }
  end
-
- # Delete file from S3
- def delete_s3_file(s3_key, dry_run: false)
- if dry_run
- puts " [DRY-RUN] Would delete: s3://#{brand_info.aws.s3_bucket}/#{s3_key}"
- return true
- end
-
- s3_client.delete_object(
- bucket: brand_info.aws.s3_bucket,
- key: s3_key
- )
-
- true
- rescue Aws::S3::Errors::ServiceError => e
- puts " Error: #{e.message}"
- false
- end
-
- # Delete local file
- def delete_local_file(file_path, dry_run: false)
- if dry_run
- puts " [DRY-RUN] Would delete: #{file_path}"
- return true
- end
-
- File.delete(file_path)
- true
- rescue StandardError => e
- puts " Error: #{e.message}"
- false
- end
-
- # Copy project to SSD
- def copy_to_ssd(source_dir, dest_dir, dry_run: false)
- if Dir.exist?(dest_dir)
- puts '⚠️ Already exists on SSD'
- puts " Path: #{dest_dir}"
- puts ' Skipping copy step'
- return true
- end
-
- size = calculate_directory_size(source_dir)
- puts '📋 Copy to SSD (excluding generated files):'
- puts " From: #{source_dir}"
- puts " To: #{dest_dir}"
- puts " Size: #{file_size_human(size)}"
- puts ''
-
- if dry_run
- puts ' [DRY-RUN] Would copy project to SSD (excluding node_modules, .git, etc.)'
- return true
- end
-
- FileUtils.mkdir_p(dest_dir)
-
- # Copy files with exclusion filtering
- stats = copy_with_exclusions(source_dir, dest_dir)
-
- puts " ✅ Copied to SSD (#{stats[:files]} files, excluded #{stats[:excluded]} generated files)"
-
- true
- rescue StandardError => e
- puts " ✗ Failed to copy: #{e.message}"
- false
- end
-
- # Copy directory contents with exclusion filtering
- def copy_with_exclusions(source_dir, dest_dir)
- stats = { files: 0, excluded: 0 }
-
- Dir.glob(File.join(source_dir, '**', '*'), File::FNM_DOTMATCH).each do |source_path|
- next if File.directory?(source_path)
- next if ['.', '..'].include?(File.basename(source_path))
-
- relative_path = source_path.sub("#{source_dir}/", '')
-
- if excluded_path?(relative_path)
- stats[:excluded] += 1
- next
- end
-
- dest_path = File.join(dest_dir, relative_path)
- FileUtils.mkdir_p(File.dirname(dest_path))
- FileUtils.cp(source_path, dest_path, preserve: true)
- stats[:files] += 1
- end
-
- stats
- end
-
- # Delete local project directory
- def delete_local_project(project_dir, dry_run: false)
- size = calculate_directory_size(project_dir)
-
- puts ''
- puts '🗑️ Delete local project:'
- puts " Path: #{project_dir}"
- puts " Size: #{file_size_human(size)}"
- puts ''
-
- if dry_run
- puts ' [DRY-RUN] Would delete entire local folder'
- return true
- end
-
- FileUtils.rm_rf(project_dir)
- puts ' ✅ Deleted local folder'
- puts " 💾 Freed: #{file_size_human(size)}"
-
- true
- rescue StandardError => e
- puts " ✗ Failed to delete: #{e.message}"
- false
- end
-
- # Calculate total size of a directory
- def calculate_directory_size(dir_path)
- FileHelper.calculate_directory_size(dir_path)
- end
  end
  end
  end
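The diff above reduces each `S3Operations` method to a one-line construct-and-forward call, the thin-facade shape the changelog describes. A self-contained sketch of that pattern; the class names and return strings below are illustrative, not the gem's actual API (the real classes also take `brand_info`/`brand_path`/`s3_client` via `delegated_opts`):

```ruby
# Focused worker: owns all the logic for one concern.
class Archiver
  def initialize(brand, project_id)
    @brand = brand
    @project_id = project_id
  end

  def cleanup(force: false, dry_run: false)
    return "#{@brand}/#{@project_id}: needs --force" unless force

    dry_run ? "#{@brand}/#{@project_id}: dry-run cleanup" : "#{@brand}/#{@project_id}: cleaned"
  end
end

# Thin delegation facade: keeps the original public API but holds no
# logic of its own — each method builds a worker and forwards arguments.
class Operations
  def initialize(brand, project_id)
    @brand = brand
    @project_id = project_id
  end

  def cleanup(force: false, dry_run: false)
    Archiver.new(@brand, @project_id).cleanup(force: force, dry_run: dry_run)
  end
end
```

The payoff is that callers of `Operations#cleanup` are untouched while the cleanup logic becomes independently testable in `Archiver`.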
@@ -2,6 +2,6 @@
 
  module Appydave
  module Tools
- VERSION = '0.77.5'
+ VERSION = '0.77.7'
  end
  end
@@ -71,6 +71,7 @@ require 'appydave/tools/dam/s3_base'
  require 'appydave/tools/dam/s3_uploader'
  require 'appydave/tools/dam/s3_downloader'
  require 'appydave/tools/dam/s3_status_checker'
+ require 'appydave/tools/dam/s3_archiver'
  require 'appydave/tools/dam/s3_operations'
  require 'appydave/tools/dam/s3_scanner'
  require 'appydave/tools/dam/share_operations'
data/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "appydave-tools",
- "version": "0.77.5",
+ "version": "0.77.7",
  "description": "AppyDave YouTube Automation Tools",
  "scripts": {
  "release": "semantic-release"
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: appydave-tools
  version: !ruby/object:Gem::Version
- version: 0.77.5
+ version: 0.77.7
  platform: ruby
  authors:
  - David Cruwys
@@ -373,6 +373,7 @@ files:
  - lib/appydave/tools/dam/repo_push.rb
  - lib/appydave/tools/dam/repo_status.rb
  - lib/appydave/tools/dam/repo_sync.rb
+ - lib/appydave/tools/dam/s3_archiver.rb
  - lib/appydave/tools/dam/s3_arg_parser.rb
  - lib/appydave/tools/dam/s3_base.rb
  - lib/appydave/tools/dam/s3_downloader.rb