bard-backup 0.9.1 → 0.10.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
-   metadata.gz: 9e6b8c5fc22e116de54fd82fc993bdee128e727c475306b05cfe0241a33a567d
-   data.tar.gz: 32384f16f624b3c0fea64b9d0e89070f85755f4abd94d8ef92813a3ac43c45d8
+   metadata.gz: 79b2f29911f73ce6b73f1adce60d4595854edac99e4d0664a39c1b01c5680076
+   data.tar.gz: 7f17e3ce3ed4c71555039b6b384759d5e5ddd3d3c17085014c3fccc2b41e593e
  SHA512:
-   metadata.gz: 131137228ac376b6ba371d00285eacd77256dc9c5fb78bf0b81a471a6d11cbbe7f5442b3874e1007ddbdcfe03731c899f9b22713e3cb0f53a3cc666c5a192dbc
-   data.tar.gz: eba4898acca3c5da3accb686f04bc29e9b3c79d7b68d43c8940ae7f109c47a7a2187bb45afdd42f45c23a04676bddc847b718ea345c9d610aebbb39a90d27e65
+   metadata.gz: ad7eb50c8781bed4f6ec83359afd2f7d029087e52574678bbed7578dbb64b01e0e9636e65cb98794bae2214a1fea3db956894d803a8af3a29b626cce52d1f692
+   data.tar.gz: 3e0a8974d131887b9da4d5f82a71608f56d28af077a88abd2d03bd267c74769dbeadbe974bb653ec1d433c6fe1956e6367706e6fe53484b93dc3f7cf43a6f39f
data/CLAUDE.md ADDED
@@ -0,0 +1,69 @@
+ # CLAUDE.md
+
+ This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
+
+ ## Project Overview
+
+ bard-backup is a Ruby gem that provides automated backup for Bard projects. It:
+ - Dumps the database, uploads to S3, deletes old backups using a retention heuristic (48 hours, 30 days, 26 weeks, 24 months, then yearly), and verifies the previous hour's backup exists.
+ - Syncs configured data directories to S3 via a manifest-cached file tree (`Bard::Backup::FileTree`).
+ - Optionally encrypts uploaded payloads at rest with AES-256-GCM (`Bard::Backup::Encryptor`).
+
+ ## Commands
+
+ ```bash
+ # Run the default test suite (Cucumber acceptance tests)
+ bundle exec rake
+
+ # Run RSpec unit tests
+ bundle exec rspec
+
+ # Run a single spec file
+ bundle exec rspec spec/bard/backup/deleter_spec.rb
+
+ # Run Cucumber acceptance tests
+ bundle exec cucumber
+
+ # Run a specific Cucumber feature
+ bundle exec cucumber features/backup.feature
+ ```
+
+ ## Test Credentials
+
+ Tests require AWS credentials at `spec/support/credentials.json`. In CI, this file is generated from GitHub secrets. For local development, create it manually:
+
+ ```json
+ {
+   "access_key_id": "...",
+   "secret_access_key": "...",
+   "region": "..."
+ }
+ ```
+
+ ## Architecture
+
+ **Entry points**:
+ - `Bard::Backup.create!` accepts destination configs (or reads from `Bard::Config`) and delegates to destination strategies. Returns a `Bard::Backup` instance with timestamp/size/destinations.
+ - `Bard::Backup::FileTree.create!` syncs configured data directories to S3.
+
+ **Destination strategy pattern**: `Destination.build(config)` is a factory that picks the right class based on `:type`:
+ - `S3Destination` — dumps DB locally via backhoe, uploads to S3, runs `Deleter` for retention, verifies previous hour's backup
+ - `UploadDestination` — dumps DB and uploads to presigned URLs (multi-threaded)
+
+ **Config DSL** (loaded via `bard/plugins/backup` and `bard/plugins/encrypt`, which extend `Bard::Config`):
+ ```ruby
+ backup do
+   s3 "primary", path: "bucket/subfolder", region: "us-west-2"
+ end
+ encrypt true # reads key from config/master.key
+ ```
+
+ **Key classes**:
+ - `S3Tree` — `Data.define`-based S3 wrapper used by both `S3Destination` and `FileTree`. Methods: `list_objects`, `put_file`, `put_body`, `get`, `delete_keys`, `mv`, `empty!`. Supports encryption via `Encryptor` and STS `session_token`.
+ - `FileTree` — syncs local data paths to S3 using a local `.bard-file-tree-sync.json` manifest (mtime+size fast path, MD5 verification, falls back to remote listing on first run)
+ - `Encryptor` — AES-256-GCM with HKDF-derived keys and a deterministic IV (HMAC of plaintext), enabling content-addressable encryption
+ - `Deleter` — implements the retention policy via `Filter` structs that check time-based granularities
+ - `LocalBackhoe` / `CachedLocalBackhoe` — database dump strategies (cached variant avoids conflicts when running parallel destinations)
+ - `LatestFinder` — finds the most recent backup across all configured destinations
+ - `BackupConfig` — the `backup do ... end` DSL surface (`bard`, `disabled`, `s3 name, **kwargs`); `create!` reads `bard_config.backup.destinations` from it
+ - `Railtie` — loads `tasks.rake` which provides `bard:backup` (DB + data) and `bard:backup:data` (data only) rake tasks in Rails apps
data/README.md CHANGED
@@ -1,17 +1,58 @@
  # Bard::Backup
 
- Bard::Backup does 3 things in a bard project
- 1. Takes a database dump and uploads it to our s3 bucket
- 2. Deletes old backups using a backoff heuristic: 48 hours, 30 days, 26 weeks, 24 months, then yearly
- 3. Raises an error if we don't have a backup from the previous hour
+ Bard::Backup handles backups for a bard project:
+ 1. Takes a database dump and uploads it to S3 (or PUTs it to presigned URLs)
+ 2. Syncs configured data directories to S3 with a local manifest cache
+ 3. Deletes old database backups using a backoff heuristic: 48 hours, 30 days, 26 weeks, 24 months, then yearly
+ 4. Raises an error if we don't have a database backup from the previous hour
+ 5. Optionally encrypts uploaded payloads at rest with AES-256-GCM
 
  ## Installation
 
+ Add to your `Gemfile`:
+
+ ```ruby
+ gem "bard-backup"
+ ```
+
  ## Usage
 
- Run with `Bard::Backup.call path: "s3_bucket/optional_subfolder", access_key_id: "...", secret_access_key: "...", region: "..."`
+ In a Rails app, configure destinations in `config/bard.rb` using the `Bard::Config` DSL:
+
+ ```ruby
+ backup do
+   s3 "primary", path: "my-bucket/my-project", region: "us-west-2"
+ end
+
+ # Optional: encrypt payloads at rest. Reads the key from config/master.key.
+ encrypt true
+ ```
+
+ Credentials live in Rails encrypted credentials under `bard_backup` (matched by `name:`):
+
+ ```yaml
+ bard_backup:
+   - name: primary
+     access_key_id: ...
+     secret_access_key: ...
+ ```
+
+ Then run via the rake tasks provided by the bundled Railtie:
+
+ ```bash
+ rake bard:backup       # database backup + data file-tree sync
+ rake bard:backup:data  # data file-tree sync only
+ ```
+
+ Or call programmatically:
+
+ ```ruby
+ Bard::Backup.create!(type: :s3, path: "bucket/subfolder",
+   access_key_id: "...", secret_access_key: "...", region: "...")
+ Bard::Backup::FileTree.create!
+ ```
 
- Or just run via the `bard-rake` gem: `rake db:backup`, which wires up the above for you.
+ `UploadDestination` (`type: :upload`, with `urls: [...]`) PUTs the dump to one or more presigned URLs in parallel — useful when the receiver, not the sender, holds the S3 credentials.
 
  ## Development
 
@@ -21,7 +62,7 @@ To install this gem onto your local machine, run `bundle exec rake install`. To
 
  ## Contributing
 
- Bug reports and pull requests are welcome on GitHub at https://github.com/[USERNAME]/bard-backup.
+ Bug reports and pull requests are welcome on GitHub at https://github.com/botandrose/bard-backup.
 
  ## License
 
@@ -2,13 +2,13 @@ require "backhoe"
 
  module Bard
    class Backup
-     class CachedLocalBackhoe < Struct.new(:s3_dir, :now)
+     class CachedLocalBackhoe < Struct.new(:s3_tree, :now)
        def self.call *args
          new(*args).call
        end
 
        def call
-         s3_dir.put path
+         s3_tree.put_file(path, File.basename(path))
        end
 
        private
@@ -0,0 +1,33 @@
+ require "bard/backup/destination"
+
+ module Bard
+   class Backup
+     module Database
+       def self.create!(destination_hashes = nil, **config)
+         if destination_hashes.nil? && !config.empty?
+           destination_hashes = [config]
+         end
+
+         bard_config = defined?(Bard::Config) ? Bard::Config.current : nil
+         destination_hashes ||= bard_config&.backup&.destinations || []
+
+         destinations = if destination_hashes.is_a?(Hash)
+           [destination_hashes]
+         else
+           Array(destination_hashes)
+         end
+
+         encryption_key = bard_config&.respond_to?(:encryption_key) ? bard_config.encryption_key : nil
+         if encryption_key
+           destinations = destinations.map { |h| { encryption_key: encryption_key, **h } }
+         end
+
+         result = nil
+         destinations.each do |hash|
+           result = Backup::Destination.build(hash).call
+         end
+         result
+       end
+     end
+   end
+ end
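`Database.create!` accepts a single inline config via keyword arguments, one hash, an array of hashes, or nothing at all (falling back to `Bard::Config`). A standalone sketch of just that normalization step — the `Bard::Config` fallback is stubbed to an empty list here, since that class lives outside this gem:

```ruby
# Illustrative restatement of the argument normalization in
# Bard::Backup::Database.create! above. The Bard::Config.current
# fallback is replaced with an empty default (assumption for this sketch).
def normalize_destinations(destination_hashes = nil, **config)
  # bare keyword arguments mean "one inline destination"
  destination_hashes = [config] if destination_hashes.nil? && !config.empty?
  destination_hashes ||= [] # the real code falls back to Bard::Config here
  destination_hashes.is_a?(Hash) ? [destination_hashes] : Array(destination_hashes)
end

normalize_destinations(type: :s3, path: "bucket")            # one inline destination
normalize_destinations({ type: :s3 })                        # a single hash, wrapped
normalize_destinations([{ type: :s3 }, { type: :upload }])   # already a list
```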
@@ -4,13 +4,13 @@ require "active_support/core_ext/integer/time"
 
  module Bard
    class Backup
-     class Deleter < Struct.new(:s3_dir, :now)
+     class Deleter < Struct.new(:s3_tree, :now)
        def call
-         s3_dir.delete files_to_delete
+         s3_tree.delete_keys files_to_delete
        end
 
        def files_to_delete
-         s3_dir.files.select do |file|
+         s3_tree.list_objects.keys.select do |file|
            [
              Filter.new(now, 48, :hours),
              Filter.new(now, 30, :days),
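The `Filter` struct's implementation is not part of this diff; only its constructor shape (`Filter.new(now, 48, :hours)`) appears in the hunk above. A hypothetical sketch of the kind of time-window check that shape suggests — everything below is an assumption for illustration, not the gem's actual code:

```ruby
require "time"

# Seconds per retention unit (assumed mapping for this sketch).
UNIT_SECONDS = { hours: 3600, days: 86_400, weeks: 604_800 }.freeze

# Hypothetical filter mirroring the constructor shape in the Deleter:
# a backup is "covered" if it is young enough for this granularity.
Filter = Struct.new(:now, :count, :unit) do
  def covers?(time)
    now - time <= count * UNIT_SECONDS.fetch(unit)
  end
end

now = Time.utc(2026, 1, 1)
Filter.new(now, 48, :hours).covers?(now - 3600)        # 1-hour-old backup: inside 48h
Filter.new(now, 30, :days).covers?(now - 45 * 86_400)  # 45-day-old backup: outside 30d
```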
@@ -1,4 +1,4 @@
- require "bard/backup/s3_dir"
+ require "bard/backup/s3_tree"
  require "bard/backup/deleter"
  require "bard/backup/local_backhoe"
  require "bard/backup/cached_local_backhoe"
@@ -7,12 +7,12 @@ module Bard
    class Backup
      class S3Destination < Destination
        def call
-         strategy.call(s3_dir, now)
-         Deleter.new(s3_dir, now).call
+         strategy.call(s3_tree, now)
+         Deleter.new(s3_tree, now).call
        end
 
-       def s3_dir
-         @s3_dir ||= S3Dir.new(**config.slice(:endpoint, :path, :access_key_id, :secret_access_key, :region))
+       def s3_tree
+         @s3_tree ||= S3Tree.new(**config.slice(:endpoint, :path, :access_key_id, :secret_access_key, :region, :encryption_key))
        end
 
        def info
@@ -23,15 +23,7 @@ module Bard
 
        def config
          @config ||= begin
-           config = {}
-
-           if defined?(Rails)
-             credentials = Rails.application.credentials.bard_backup || []
-             credentials = [credentials] if credentials.is_a?(Hash)
-             config = credentials.find { |c| c[:name] == super[:name] } || {}
-           end
-
-           config = { type: :s3, region: "us-west-2" }.merge(config).merge(super)
+           config = { type: :s3, region: "us-west-2" }.merge(super)
            config[:endpoint] ||= "https://s3.#{config[:region]}.amazonaws.com"
            config
          end
@@ -1,6 +1,7 @@
  require "fileutils"
  require "uri"
  require "net/http"
+ require "bard/backup/encryptor"
 
  module Bard
    class Backup
@@ -53,8 +54,12 @@ module Bard
        uri = URI.parse(url)
 
        File.open(file_path, "rb") do |file|
+         body = file.read
+         if config[:encryption_key]
+           body = Encryptor.new(config[:encryption_key]).encrypt(body)
+         end
          request = Net::HTTP::Put.new(uri)
-         request.body = file.read
+         request.body = body
          request.content_type = "application/octet-stream"
 
          response = Net::HTTP.start(uri.hostname, uri.port, use_ssl: uri.scheme == "https") do |http|
@@ -0,0 +1,39 @@
+ require "openssl"
+
+ module Bard
+   class Backup
+     class Encryptor
+       def initialize(key)
+         @encrypt_key = derive_key(key, "encryption")
+         @iv_key = derive_key(key, "iv-derivation")
+       end
+
+       def encrypt(data)
+         data = data.b if data.encoding != Encoding::BINARY
+         iv = OpenSSL::HMAC.digest("SHA256", @iv_key, data)[0, 12]
+         cipher = OpenSSL::Cipher.new("aes-256-gcm")
+         cipher.encrypt
+         cipher.key = @encrypt_key
+         cipher.iv = iv
+         ciphertext = cipher.update(data) + cipher.final
+         iv + cipher.auth_tag + ciphertext
+       end
+
+       def decrypt(data)
+         data = data.b if data.encoding != Encoding::BINARY
+         cipher = OpenSSL::Cipher.new("aes-256-gcm")
+         cipher.decrypt
+         cipher.key = @encrypt_key
+         cipher.iv = data[0, 12]
+         cipher.auth_tag = data[12, 16]
+         cipher.update(data[28..]) + cipher.final
+       end
+
+       private
+
+       def derive_key(raw_key, info)
+         OpenSSL::KDF.hkdf(raw_key, salt: "bard-backup-v1", info: info, length: 32, hash: "SHA256")
+       end
+     end
+   end
+ end
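A round-trip check of the scheme above (the class body is repeated from the diff so the snippet runs standalone, minus the `Bard::Backup` namespace). The deterministic IV (an HMAC of the plaintext) means identical plaintexts encrypt to identical ciphertexts under the same key, which is what makes the encrypted file tree content-addressable:

```ruby
require "openssl"

# Condensed copy of Bard::Backup::Encryptor from the diff, for demonstration.
class Encryptor
  def initialize(key)
    @encrypt_key = derive_key(key, "encryption")
    @iv_key = derive_key(key, "iv-derivation")
  end

  def encrypt(data)
    data = data.b
    # deterministic 12-byte IV: HMAC of the plaintext under a separate key
    iv = OpenSSL::HMAC.digest("SHA256", @iv_key, data)[0, 12]
    cipher = OpenSSL::Cipher.new("aes-256-gcm")
    cipher.encrypt
    cipher.key = @encrypt_key
    cipher.iv = iv
    ciphertext = cipher.update(data) + cipher.final
    iv + cipher.auth_tag + ciphertext  # layout: 12-byte IV | 16-byte tag | ciphertext
  end

  def decrypt(data)
    data = data.b
    cipher = OpenSSL::Cipher.new("aes-256-gcm")
    cipher.decrypt
    cipher.key = @encrypt_key
    cipher.iv = data[0, 12]
    cipher.auth_tag = data[12, 16]
    cipher.update(data[28..]) + cipher.final
  end

  private

  def derive_key(raw_key, info)
    OpenSSL::KDF.hkdf(raw_key, salt: "bard-backup-v1", info: info, length: 32, hash: "SHA256")
  end
end

enc = Encryptor.new("any-master-key-string")
a = enc.encrypt("hello")
b = enc.encrypt("hello")
enc.decrypt(a) == "hello"  # round-trips
a == b                     # deterministic: same plaintext, same ciphertext
```

The trade-off of a deterministic IV is that it leaks plaintext equality (an observer can tell two objects are identical), which is exactly the property the manifest-based dedup relies on.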
@@ -0,0 +1,108 @@
+ require "json"
+ require "digest/md5"
+ require "bard/backup/s3_tree"
+
+ module Bard
+   class Backup
+     class FileTree
+       MANIFEST_PATH = ".bard-file-tree-sync.json"
+       DEFAULT_BUCKET = "bard-data"
+
+       def self.create!(data_paths: nil, project_name: nil, bucket: DEFAULT_BUCKET, **s3_config)
+         bard_config = defined?(Bard::Config) ? Bard::Config.current : nil
+         data_paths ||= bard_config&.data || []
+         project_name ||= bard_config&.project_name
+         return if data_paths.empty?
+
+         encryption_key = s3_config.delete(:encryption_key)
+         encryption_key ||= bard_config&.respond_to?(:encryption_key) ? bard_config.encryption_key : nil
+
+         s3_tree = S3Tree.new(path: "#{bucket}/#{project_name}", encryption_key: encryption_key, **s3_config)
+         new(s3_tree, data_paths).call
+       end
+
+       def initialize(s3_tree, data_paths)
+         @s3_tree = s3_tree
+         @data_paths = data_paths
+       end
+
+       def call
+         manifest = load_manifest
+         local_files = collect_local_files
+
+         if manifest.empty?
+           sync_from_s3(local_files)
+         else
+           sync_from_manifest(local_files, manifest)
+         end
+       end
+
+       private
+
+       attr_reader :s3_tree, :data_paths
+
+       def sync_from_manifest(local_files, manifest)
+         new_manifest = {}
+
+         local_files.each do |path, stat|
+           cached = manifest[path]
+           if cached && cached["mtime"] == stat[:mtime] && cached["size"] == stat[:size]
+             new_manifest[path] = cached
+           else
+             md5 = Digest::MD5.file(path).hexdigest
+             if cached && cached["md5"] == md5
+               new_manifest[path] = cached.merge("mtime" => stat[:mtime])
+             else
+               s3_tree.put_file(path, path)
+               new_manifest[path] = { "md5" => md5, "mtime" => stat[:mtime], "size" => stat[:size] }
+             end
+           end
+         end
+
+         removed = manifest.keys - local_files.keys
+         s3_tree.delete_keys(removed)
+
+         save_manifest(new_manifest)
+       end
+
+       def sync_from_s3(local_files)
+         remote = s3_tree.list_objects
+         new_manifest = {}
+
+         local_files.each do |path, stat|
+           md5 = Digest::MD5.file(path).hexdigest
+           unless remote[path] == md5
+             s3_tree.put_file(path, path)
+           end
+           new_manifest[path] = { "md5" => md5, "mtime" => stat[:mtime], "size" => stat[:size] }
+         end
+
+         removed = remote.keys - local_files.keys
+         s3_tree.delete_keys(removed)
+
+         save_manifest(new_manifest)
+       end
+
+       def collect_local_files
+         result = {}
+         data_paths.each do |data_path|
+           Dir.glob("#{data_path}/**/*").each do |file|
+             next unless File.file?(file)
+             stat = File.stat(file)
+             result[file] = { mtime: stat.mtime.to_f, size: stat.size }
+           end
+         end
+         result
+       end
+
+       def load_manifest
+         return {} unless File.exist?(MANIFEST_PATH)
+         JSON.parse(File.read(MANIFEST_PATH))
+       end
+
+       def save_manifest(manifest)
+         File.write(MANIFEST_PATH, JSON.pretty_generate(manifest))
+       end
+     end
+   end
+ end
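The per-file decision in `sync_from_manifest` can be isolated as a pure function. This is an illustrative restatement, not gem API; note that the real code only computes the MD5 on the slow path, while this sketch takes it as a parameter:

```ruby
# Decision table behind FileTree#sync_from_manifest above (restated for
# illustration; `sync_action` is a hypothetical name, not part of the gem).
#   mtime+size match      -> trust the cached entry, skip hashing entirely
#   md5 matches anyway    -> file was touched but not changed; refresh mtime
#   otherwise             -> content changed (or file is new); upload
def sync_action(cached, mtime:, size:, md5:)
  return :skip if cached && cached["mtime"] == mtime && cached["size"] == size
  return :refresh_mtime if cached && cached["md5"] == md5
  :upload
end

entry = { "md5" => "abc", "mtime" => 100.0, "size" => 5 }
sync_action(entry, mtime: 100.0, size: 5, md5: "abc")  # fast path, no hashing
sync_action(entry, mtime: 200.0, size: 5, md5: "abc")  # touched, not changed
sync_action(entry, mtime: 200.0, size: 6, md5: "def")  # content changed
sync_action(nil,   mtime: 100.0, size: 5, md5: "abc")  # new file
```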
@@ -11,7 +11,7 @@ module Bard
        end
 
        all_backups = destinations.flat_map do |dest|
-         dest.s3_dir.files.filter_map do |filename|
+         dest.s3_tree.list_objects.keys.filter_map do |filename|
            timestamp = parse_timestamp(filename)
            next unless timestamp
 
@@ -25,7 +25,7 @@ module Bard
 
        Bard::Backup.new(
          timestamp: latest[:timestamp],
-         size: get_file_size(latest[:destination].s3_dir, latest[:filename]),
+         size: get_file_size(latest[:destination].s3_tree, latest[:filename]),
          destinations: all_backups
            .select { |b| b[:timestamp] == latest[:timestamp] }
            .map { |b| b[:destination].info }
@@ -38,9 +38,9 @@ module Bard
        filename =~ /^(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z)/ ? Time.parse($1) : nil
      end
 
-     def get_file_size(s3_dir, filename)
-       key = [s3_dir.folder_prefix, filename].compact.join("/")
-       s3_dir.send(:client).head_object(bucket: s3_dir.bucket_name, key: key).content_length
+     def get_file_size(s3_tree, filename)
+       key = [s3_tree.folder_prefix, filename].compact.join("/")
+       s3_tree.send(:client).head_object(bucket: s3_tree.bucket_name, key: key).content_length
      end
    end
  end
@@ -3,11 +3,11 @@ require "backhoe"
  module Bard
    class Backup
      class LocalBackhoe
-       def self.call s3_dir, now
+       def self.call s3_tree, now
          filename = "#{now.iso8601}.sql.gz"
          path = "/tmp/#{filename}"
          Backhoe.dump path
-         s3_dir.mv path
+         s3_tree.mv path
        end
      end
    end
@@ -0,0 +1,19 @@
+ module Bard
+   class Backup
+     module RailsCredentials
+       def self.find(name: nil)
+         entries = all
+         return {} if entries.empty?
+         return entries.first if name.nil?
+         entries.find { |c| c[:name] == name } || {}
+       end
+
+       def self.all
+         return [] unless defined?(Rails)
+         creds = Rails.application.credentials.bard_backup
+         return [] unless creds
+         creds.is_a?(Hash) ? [creds] : Array(creds)
+       end
+     end
+   end
+ end
@@ -0,0 +1,114 @@
+ require "aws-sdk-s3"
+ require "fileutils"
+ require "bard/backup/encryptor"
+
+ module Bard
+   class Backup
+     class S3Tree < Data.define(:endpoint, :path, :access_key_id, :secret_access_key, :region, :session_token, :encryption_key)
+       def initialize(**kwargs)
+         kwargs[:endpoint] ||= "https://s3.#{kwargs[:region]}.amazonaws.com"
+         kwargs[:session_token] ||= nil
+         kwargs[:encryption_key] ||= nil
+         super
+       end
+
+       def list_objects
+         result = {}
+         continuation_token = nil
+
+         loop do
+           response = client.list_objects_v2({
+             bucket: bucket_name,
+             prefix: folder_prefix ? "#{folder_prefix}/" : nil,
+             continuation_token: continuation_token,
+           }.compact)
+
+           response.contents.each do |object|
+             key = folder_prefix ? object.key.sub("#{folder_prefix}/", "") : object.key
+             result[key] = object.etag.tr('"', "")
+           end
+
+           break unless response.is_truncated
+           continuation_token = response.next_continuation_token
+         end
+
+         result
+       end
+
+       def put_file(local_path, remote_key)
+         put_body(remote_key, File.binread(local_path))
+       end
+
+       def put_body(remote_key, body)
+         body = encryptor.encrypt(body) if encryptor
+         client.put_object({
+           bucket: bucket_name,
+           key: [folder_prefix, remote_key].compact.join("/"),
+           body: body,
+         })
+       end
+
+       def mv(local_path)
+         put_file(local_path, File.basename(local_path))
+         FileUtils.rm(local_path)
+       end
+
+       def get(remote_key)
+         response = client.get_object({
+           bucket: bucket_name,
+           key: [folder_prefix, remote_key].compact.join("/"),
+         })
+         body = response.body.read
+         body = encryptor.decrypt(body) if encryptor
+         body
+       end
+
+       def delete_keys(keys)
+         return if keys.empty?
+         keys.each_slice(1000) do |batch|
+           objects_to_delete = batch.map do |key|
+             { key: [folder_prefix, key].compact.join("/") }
+           end
+           client.delete_objects({
+             bucket: bucket_name,
+             delete: {
+               objects: objects_to_delete,
+               quiet: true,
+             },
+           })
+         end
+       end
+
+       def empty!
+         list_objects.keys.each_slice(1000) do |batch|
+           delete_keys(batch)
+         end
+       end
+
+       def bucket_name
+         path.split("/").first
+       end
+
+       def folder_prefix
+         return nil if !path.include?("/")
+         path.split("/")[1..].join("/")
+       end
+
+       private
+
+       def encryptor
+         Encryptor.new(encryption_key) if encryption_key
+       end
+
+       def client
+         Aws::S3::Client.new({
+           endpoint: endpoint,
+           region: region,
+           access_key_id: access_key_id,
+           secret_access_key: secret_access_key,
+           session_token: session_token,
+         }.compact)
+       end
+     end
+   end
+ end
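`S3Tree`'s path convention packs bucket and prefix into one string: the first segment is the bucket, and the remainder (if any) is the folder prefix. The two parsing methods, extracted as plain functions for illustration:

```ruby
# Path parsing as implemented by S3Tree#bucket_name and #folder_prefix above,
# lifted out of the class so the behavior can be seen in isolation.
def bucket_name(path)
  path.split("/").first
end

def folder_prefix(path)
  return nil if !path.include?("/")
  path.split("/")[1..].join("/")
end

bucket_name("my-bucket/projects/app")    # "my-bucket"
folder_prefix("my-bucket/projects/app")  # "projects/app"
folder_prefix("my-bucket")               # nil: keys go at the bucket root
```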
@@ -1,7 +1,22 @@
+ require "bard/backup/rails_credentials"
+
  namespace :bard do
-   desc "Backup the database to configured destinations"
+   desc "Backup the database and file trees to configured destinations"
    task :backup => :environment do
      require "bard/backup"
-     Bard::Backup.create!
+
+     destinations = Bard::Config.current.backup.destinations.map do |dest|
+       Bard::Backup::RailsCredentials.find(name: dest[:name]).merge(dest)
+     end
+
+     Bard::Backup.create!(destinations, **Bard::Backup::RailsCredentials.find)
+   end
+
+   namespace :backup do
+     desc "Backup file trees to S3"
+     task :data => :environment do
+       require "bard/backup"
+       Bard::Backup::FileTree.create!(**Bard::Backup::RailsCredentials.find)
+     end
    end
  end
@@ -1,6 +1,6 @@
  module Bard
    class Backup
-     VERSION = "0.9.1"
+     VERSION = "0.10.0"
    end
  end
 
data/lib/bard/backup.rb CHANGED
@@ -1,26 +1,17 @@
- require "bard/backup/destination"
+ require "bard/backup/database"
+ require "bard/backup/file_tree"
  require "bard/backup/latest_finder"
+ require "bard"
  require "bard/backup/railtie" if defined?(Rails)
 
  module Bard
    class Backup
-     def self.create!(destination_hashes = nil, **config)
-       if destination_hashes.nil? && !config.empty?
-         destination_hashes = [config]
-       end
-       destination_hashes ||= Bard::Config.current.backup.destinations
-
-       destinations = if destination_hashes.is_a?(Hash)
-         [destination_hashes]
-       else
-         Array(destination_hashes)
-       end
+     FILE_TREE_KEYS = [:access_key_id, :secret_access_key, :session_token, :region, :encryption_key].freeze
 
-       result = nil
-       destinations.each do |hash|
-         result = Destination.build(hash).call
-       end
-       result
+     def self.create!(destination_hashes = nil, **config)
+       backup = Database.create!(destination_hashes, **config)
+       FileTree.create!(**config.slice(*FILE_TREE_KEYS))
+       backup
      end
 
      def self.latest
@@ -0,0 +1,62 @@
+ require "bard/config"
+
+ module Bard
+   class BackupConfig
+     attr_reader :destinations
+
+     def initialize(&block)
+       @destinations = []
+       instance_eval(&block) if block_given?
+     end
+
+     def bard
+       @bard = true
+     end
+
+     def bard?
+       !!@bard
+     end
+
+     def disabled
+       @disabled = true
+     end
+
+     def disabled?
+       !!@disabled
+     end
+
+     def enabled?
+       !disabled?
+     end
+
+     def s3(name, **kwargs)
+       @destinations << {
+         name: name,
+         type: :s3,
+         **kwargs,
+       }
+     end
+
+     def self_managed?
+       @destinations.any?
+     end
+   end
+ end
+
+ class Bard::Config
+   def backup(value = nil, &block)
+     if block
+       @backup = Bard::BackupConfig.new(&block)
+     elsif value == false
+       @backup = Bard::BackupConfig.new { disabled }
+     elsif value.nil?
+       @backup ||= Bard::BackupConfig.new { bard }
+     else
+       raise ArgumentError, "backup accepts false or a block"
+     end
+   end
+
+   def backup_enabled?
+     backup == true
+   end
+ end
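The `backup do ... end` DSL above works by `instance_eval`-ing the block against a `BackupConfig`. A condensed copy that drops the `bard/config` require and the `Bard::` namespace so it runs standalone (the class body otherwise mirrors the diff):

```ruby
# Condensed, standalone mirror of Bard::BackupConfig for demonstration.
class BackupConfig
  attr_reader :destinations

  def initialize(&block)
    @destinations = []
    instance_eval(&block) if block_given?  # the block runs in DSL context
  end

  def disabled
    @disabled = true
  end

  def disabled?
    !!@disabled
  end

  def enabled?
    !disabled?
  end

  # each `s3 name, **kwargs` call appends a destination hash
  def s3(name, **kwargs)
    @destinations << { name: name, type: :s3, **kwargs }
  end

  def self_managed?
    @destinations.any?
  end
end

config = BackupConfig.new do
  s3 "primary", path: "my-bucket/my-project", region: "us-west-2"
  s3 "mirror", path: "other-bucket"
end
config.destinations.length  # 2
config.self_managed?        # true
config.enabled?             # true
```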
@@ -0,0 +1,16 @@
+ require "bard/config"
+
+ class Bard::Config
+   def encrypt(value = nil)
+     if value.nil?
+       @encrypt
+     else
+       @encrypt = value
+     end
+   end
+
+   def encryption_key
+     return nil unless encrypt
+     File.read("config/master.key").strip
+   end
+ end
metadata CHANGED
@@ -1,13 +1,13 @@
  --- !ruby/object:Gem::Specification
  name: bard-backup
  version: !ruby/object:Gem::Version
-   version: 0.9.1
+   version: 0.10.0
  platform: ruby
  authors:
  - Micah Geisel
  bindir: exe
  cert_chain: []
- date: 2025-12-15 00:00:00.000000000 Z
+ date: 2026-05-16 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: backhoe
@@ -88,6 +88,7 @@ files:
  - ".rspec"
  - ".ruby-gemset"
  - ".ruby-version"
+ - CLAUDE.md
  - LICENSE.txt
  - README.md
  - Rakefile
@@ -95,16 +96,22 @@ files:
  - lib/bard-backup.rb
  - lib/bard/backup.rb
  - lib/bard/backup/cached_local_backhoe.rb
+ - lib/bard/backup/database.rb
  - lib/bard/backup/deleter.rb
  - lib/bard/backup/destination.rb
  - lib/bard/backup/destination/s3_destination.rb
  - lib/bard/backup/destination/upload_destination.rb
+ - lib/bard/backup/encryptor.rb
+ - lib/bard/backup/file_tree.rb
  - lib/bard/backup/latest_finder.rb
  - lib/bard/backup/local_backhoe.rb
+ - lib/bard/backup/rails_credentials.rb
  - lib/bard/backup/railtie.rb
- - lib/bard/backup/s3_dir.rb
+ - lib/bard/backup/s3_tree.rb
  - lib/bard/backup/tasks.rake
  - lib/bard/backup/version.rb
+ - lib/bard/plugins/backup.rb
+ - lib/bard/plugins/encrypt.rb
  - sig/bard/backup.rbs
  homepage: https://github.com/botandrose/bard-backup
  licenses:
@@ -1,86 +0,0 @@
- require "aws-sdk-s3"
- require "rexml"
-
- module Bard
-   class Backup
-     class S3Dir < Data.define(:endpoint, :path, :access_key_id, :secret_access_key, :region)
-       def initialize **kwargs
-         kwargs[:endpoint] ||= "https://s3.#{kwargs[:region]}.amazonaws.com"
-         super
-       end
-
-       def files
-         response = client.list_objects_v2({
-           bucket: bucket_name,
-           prefix: folder_prefix,
-         })
-         raise if response.is_truncated
-         response.contents.map do |object|
-           object.key.sub("#{folder_prefix}/", "")
-         end
-       end
-
-       def put file_path, body: File.read(file_path)
-         client.put_object({
-           bucket: bucket_name,
-           key: [folder_prefix, File.basename(file_path)].compact.join("/"),
-           body: body,
-         })
-       end
-
-       def presigned_url file_path
-         presigner = Aws::S3::Presigner.new(client: client)
-         presigner.presigned_url(
-           :put_object,
-           bucket: bucket_name,
-           key: [folder_prefix, File.basename(file_path)].compact.join("/"),
-         )
-       end
-
-       def mv file_path, body: File.read(file_path)
-         put file_path, body: body
-         FileUtils.rm file_path
-       end
-
-       def delete file_paths
-         return if file_paths.empty?
-         objects_to_delete = Array(file_paths).map do |file_path|
-           { key: [folder_prefix, File.basename(file_path)].compact.join("/") }
-         end
-         client.delete_objects({
-           bucket: bucket_name,
-           delete: {
-             objects: objects_to_delete,
-             quiet: true,
-           }
-         })
-       end
-
-       def empty!
-         files.each_slice(1000) do |batch|
-           delete batch
-         end
-       end
-
-       def bucket_name
-         path.split("/").first
-       end
-
-       def folder_prefix
-         return nil if !path.include?("/")
-         path.split("/")[1..].join("/")
-       end
-
-       private
-
-       def client
-         Aws::S3::Client.new({
-           endpoint: endpoint,
-           region: region,
-           access_key_id: access_key_id,
-           secret_access_key: secret_access_key,
-         })
-       end
-     end
-   end
- end