deb-s3 0.10.0 → 0.11.6

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
-SHA1:
-  metadata.gz: 49885adda26d9a194e593feb9db2ff154d697951
-  data.tar.gz: 65f61dd1e945500c9ead7bc772932d8769c7c45a
+SHA256:
+  metadata.gz: '041340787f3a58818cab4049b36b08aafd6a7cd402c2ca131c48c89f646ddf6f'
+  data.tar.gz: 7453f5436b991b898a0d7fcadedfe0592aaebe1612253d5b2eca550f3564b761
 SHA512:
-  metadata.gz: 78d6217e6414568658bc3b2707fc53865bb52a6ec5fdf48d43d496ad888dd5e96f5882a0d0e7c22b9ff92162a62f7387590b1434deb1ea392a5bc5482c859d5d
-  data.tar.gz: f33a94764a6b2d6e92984a99e9458a11b8dea8148b48285936a41bfd743b9c6f631611246bb56ea87fee124c5dedae6311773a4796a1ece4d81abf0aa0651c7a
+  metadata.gz: a967586900ae51b0eb8c84028a8b988a51c928c41065628029a6acf0274dcabb24c6bbad4588e93aaeb1b70149f243db483620b84451b01ee6a89897b6527954
+  data.tar.gz: 0ecae828a6647a11d1e06b9e5f17f0d06b3aa5897edffd3450300a13cd57e581294b5dffdee053ca2248d9f89c1590f714ef9af7c24dd5de1768e883b4272e65
data/README.md CHANGED
@@ -1,6 +1,8 @@
 # deb-s3

-[![Build Status](https://travis-ci.org/krobertson/deb-s3.svg?branch=master)](https://travis-ci.org/krobertson/deb-s3)
+[![Build Status](https://travis-ci.org/deb-s3/deb-s3.svg?branch=master)](https://travis-ci.org/deb-s3/deb-s3)
+
+**This repository is a fork of [krobertson/deb-s3](https://github.com/krobertson/deb-s3).**

 `deb-s3` is a simple utility to make creating and managing APT repositories on
 S3.
@@ -27,17 +29,31 @@ With `deb-s3`, there is no need for this. `deb-s3` features:

 ## Getting Started

-You can simply install it from rubygems:
+Install the package via gem:

 ```console
 $ gem install deb-s3
 ```

-Or to run the code directly, just check out the repo and run Bundler to ensure
+or via APT:
+
+```console
+# Add repository key
+$ sudo wget -O /etc/apt/trusted.gpg.d/deb-s3-archive-keyring.gpg https://raw.githubusercontent.com/deb-s3/deb-s3/master/deb-s3-archive-keyring.gpg
+
+# Add repository
+$ echo "deb http://deb-s3-repo.s3.us-east-2.amazonaws.com/debian/ $(lsb_release -cs) main" | sudo tee -a /etc/apt/sources.list > /dev/null
+
+# Install package
+$ sudo apt-get update
+$ sudo apt-get install deb-s3
+```
+
+To run the code directly, just check out the repo and run Bundler to ensure
 all dependencies are installed:

 ```console
-$ git clone https://github.com/krobertson/deb-s3.git
+$ git clone https://github.com/deb-s3/deb-s3.git
 $ cd deb-s3
 $ bundle install
 ```
@@ -92,8 +108,9 @@ Uploads the given files to a S3 bucket as an APT repository.
 ```

 You can also delete packages from the APT repository. Please keep in mind that
-this does NOT delete the .deb file itself, it only removes it from the list of
-packages in the specified component, codename and architecture.
+this does NOT delete the .deb file itself (the `clean` command does that); it
+only removes it from the list of packages in the specified component, codename,
+and architecture.

 Now to delete the package:
 ```console
@@ -139,6 +156,48 @@ Options:
 Remove the package named PACKAGE. If --versions is not specified, delete all versions of PACKAGE. Otherwise, only the specified versions will be deleted.
 ```

+Dangling `.deb` files left by the `delete` command (or by uploading new versions) can be removed using the `clean` command:
+
+```console
+$ deb-s3 clean --bucket my-bucket
+>> Retrieving existing manifests
+>> Searching for unreferenced packages
+-- pool/m/my/my-deb-package-1.0.0_amd64.deb
+```
+
+```
+Usage:
+  deb-s3 clean
+
+Options:
+  -l, [--lock], [--no-lock]                      # Whether to check for an existing lock on the repository to prevent simultaneous updates
+  -b, [--bucket=BUCKET]                          # The name of the S3 bucket to upload to.
+      [--prefix=PREFIX]                          # The path prefix to use when storing on S3.
+  -o, [--origin=ORIGIN]                          # The origin to use in the repository Release file.
+      [--suite=SUITE]                            # The suite to use in the repository Release file.
+  -c, [--codename=CODENAME]                      # The codename of the APT repository.
+                                                 # Default: stable
+  -m, [--component=COMPONENT]                    # The component of the APT repository.
+                                                 # Default: main
+      [--access-key-id=ACCESS_KEY_ID]            # The access key for connecting to S3.
+      [--secret-access-key=SECRET_ACCESS_KEY]    # The secret key for connecting to S3.
+      [--session-token=SESSION_TOKEN]            # The (optional) session token for connecting to S3.
+      [--endpoint=ENDPOINT]                      # The URL endpoint to the S3 API.
+      [--s3-region=S3_REGION]                    # The region for connecting to S3.
+                                                 # Default: us-east-1
+      [--force-path-style], [--no-force-path-style]  # Use S3 path style instead of subdomains.
+      [--proxy-uri=PROXY_URI]                    # The URI of the proxy to send service requests through.
+  -v, [--visibility=VISIBILITY]                  # The access policy for the uploaded files. Can be public, private, or authenticated.
+                                                 # Default: public
+      [--sign=SIGN]                              # GPG Sign the Release file when uploading a package, or when verifying it after removing a package. Use --sign with your GPG key ID to use a specific key (--sign=6643C242C18FE05B).
+      [--gpg-options=GPG_OPTIONS]                # Additional command line options to pass to GPG when signing.
+  -e, [--encryption], [--no-encryption]          # Use S3 server side encryption.
+  -q, [--quiet], [--no-quiet]                    # Doesn't output information, just returns status appropriately.
+  -C, [--cache-control=CACHE_CONTROL]            # Add cache-control headers to S3 objects.
+
+Delete packages from the pool which are no longer referenced
+```
+
 You can also verify an existing APT repository on S3 using the `verify` command:

 ```console
@@ -179,3 +238,47 @@ Options:

 Verifies that the files in the package manifests exist
 ```
+
+#### Example S3 IAM Policy
+
+```
+{
+  "Version": "2012-10-17",
+  "Statement": [
+    {
+      "Effect": "Allow",
+      "Action": [
+        "s3:ListBucket"
+      ],
+      "Resource": [
+        "arn:aws:s3:::BUCKETNAME"
+      ]
+    },
+    {
+      "Effect": "Allow",
+      "Action": [
+        "s3:PutObject",
+        "s3:GetObject",
+        "s3:DeleteObject",
+        "s3:DeleteObjectVersion",
+        "s3:GetObjectAcl",
+        "s3:GetObjectTagging",
+        "s3:GetObjectTorrent",
+        "s3:GetObjectVersion",
+        "s3:GetObjectVersionAcl",
+        "s3:GetObjectVersionTagging",
+        "s3:GetObjectVersionTorrent",
+        "s3:PutObjectAcl",
+        "s3:PutObjectTagging",
+        "s3:PutObjectVersionAcl",
+        "s3:PutObjectVersionTagging",
+        "s3:ReplicateObject",
+        "s3:RestoreObject"
+      ],
+      "Resource": [
+        "arn:aws:s3:::BUCKETNAME/*"
+      ]
+    }
+  ]
+}
+```
data/lib/deb/s3/cli.rb CHANGED
@@ -1,5 +1,5 @@
 # -*- encoding : utf-8 -*-
-require "aws-sdk"
+require "aws-sdk-s3"
 require "thor"

 # Hack: aws requires this!
@@ -86,10 +86,12 @@ class Deb::S3::CLI < Thor
     "Can be public, private, or authenticated."

   class_option :sign,
-    :type => :string,
-    :desc => "GPG Sign the Release file when uploading a package, " +
+    :type => :array,
+    :repeatable => true,
+    :desc => "GPG Sign the Release file when uploading a package, " +
       "or when verifying it after removing a package. " +
-      "Use --sign with your GPG key ID to use a specific key (--sign=6643C242C18FE05B)."
+      "Use --sign with your GPG key ID to use a specific key (--sign=6643C242C18FE05B). " +
+      "Can be specified multiple times for multiple signing keys."

   class_option :gpg_options,
     :default => "",
@@ -163,12 +165,6 @@ class Deb::S3::CLI < Thor
     begin
       if options[:lock]
         log("Checking for existing lock file")
-        if Deb::S3::Lock.locked?(options[:codename], component, options[:arch], options[:cache_control])
-          lock = Deb::S3::Lock.current(options[:codename], component, options[:arch], options[:cache_control])
-          log("Repository is locked by another user: #{lock.user} at host #{lock.host}")
-          log("Attempting to obtain a lock")
-          Deb::S3::Lock.wait_for_lock(options[:codename], component, options[:arch], options[:cache_control])
-        end
         log("Locking repository for updates")
         Deb::S3::Lock.lock(options[:codename], component, options[:arch], options[:cache_control])
         @lock_acquired = true
@@ -205,11 +201,15 @@ class Deb::S3::CLI < Thor
       # throw an error. This is mainly the case when initializing a brand new
       # repository. With "all", we won't know which architectures they're using.
       if arch == "all" && manifests.count == 0
-        error("Package #{File.basename(file)} had architecture \"all\", " +
-          "however noexisting package lists exist. This can often happen " +
-          "if the first package you are add to a new repository is an " +
-          "\"all\" architecture file. Please use --arch [i386|amd64|armhf] or " +
-          "another platform type to upload the file.")
+        manifests['amd64'] = Deb::S3::Manifest.retrieve(options[:codename], component, 'amd64', options[:cache_control], options[:fail_if_exists], options[:skip_package_upload])
+        manifests['i386'] = Deb::S3::Manifest.retrieve(options[:codename], component, 'i386', options[:cache_control], options[:fail_if_exists], options[:skip_package_upload])
+        manifests['armhf'] = Deb::S3::Manifest.retrieve(options[:codename], component, 'armhf', options[:cache_control], options[:fail_if_exists], options[:skip_package_upload])
+
+        # error("Package #{File.basename(file)} had architecture \"all\", " +
+        #   "however no existing package lists exist. This can often happen " +
+        #   "if the first package you add to a new repository is an " +
+        #   "\"all\" architecture file. Please use --arch [i386|amd64|armhf] or " +
+        #   "another platform type to upload the file.")
       end

       # retrieve the manifest for the arch if we don't have it already
@@ -286,7 +286,7 @@ class Deb::S3::CLI < Thor
                                            false, false)
     manifest.packages.map do |package|
       if options[:long]
-        package.generate
+        package.generate(options[:codename])
       else
         [package.name, package.full_version, package.architecture].tap do |row|
           row.each_with_index do |col, i|
@@ -331,7 +331,7 @@ class Deb::S3::CLI < Thor
       error "No such package found."
     end

-    puts package.generate
+    puts package.generate(options[:codename])
   end

   desc "copy PACKAGE TO_CODENAME TO_COMPONENT ",
@@ -347,6 +347,13 @@ class Deb::S3::CLI < Thor
     :aliases => "-a",
     :desc => "The architecture of the package in the APT repository."

+  option :lock,
+    :default => false,
+    :type => :boolean,
+    :aliases => "-l",
+    :desc => "Whether to check for an existing lock on the repository " +
+      "to prevent simultaneous updates"
+
   option :versions,
     :default => nil,
     :type => :array,
@@ -392,42 +399,56 @@ class Deb::S3::CLI < Thor

     configure_s3_client

-    # retrieve the existing manifests
-    log "Retrieving existing manifests"
-    from_manifest = Deb::S3::Manifest.retrieve(options[:codename],
-                                               component, arch,
+    begin
+      if options[:lock]
+        log("Checking for existing lock file")
+        log("Locking repository for updates")
+        Deb::S3::Lock.lock(options[:codename], to_component, options[:arch], options[:cache_control])
+        @lock_acquired = true
+      end
+
+      # retrieve the existing manifests
+      log "Retrieving existing manifests"
+      from_manifest = Deb::S3::Manifest.retrieve(options[:codename],
+                                                 component, arch,
+                                                 options[:cache_control],
+                                                 false, options[:skip_package_upload])
+      to_release = Deb::S3::Release.retrieve(to_codename)
+      to_manifest = Deb::S3::Manifest.retrieve(to_codename, to_component, arch,
                                                options[:cache_control],
-                                               false, options[:skip_package_upload])
-    to_release = Deb::S3::Release.retrieve(to_codename)
-    to_manifest = Deb::S3::Manifest.retrieve(to_codename, to_component, arch,
-                                             options[:cache_control],
-                                             options[:fail_if_exists],
-                                             options[:skip_package_upload])
-    packages = from_manifest.packages.select { |p|
-      p.name == package_name &&
-      (versions.nil? || versions.include?(p.full_version))
-    }
-    if packages.size == 0
-      error "No packages found in repository."
-    end
+                                               options[:fail_if_exists],
+                                               options[:skip_package_upload])
+      packages = from_manifest.packages.select { |p|
+        p.name == package_name &&
+        (versions.nil? || versions.include?(p.full_version))
+      }
+      if packages.size == 0
+        error "No packages found in repository."
+      end
+
+      packages.each do |package|
+        begin
+          to_manifest.add package, options[:preserve_versions], false
+        rescue Deb::S3::Utils::AlreadyExistsError => e
+          error("Preparing manifest failed because: #{e}")
+        end
+      end

-    packages.each do |package|
       begin
-        to_manifest.add package, options[:preserve_versions], false
+        to_manifest.write_to_s3 { |f| sublog("Transferring #{f}") }
       rescue Deb::S3::Utils::AlreadyExistsError => e
-        error("Preparing manifest failed because: #{e}")
+        error("Copying manifest failed because: #{e}")
       end
-    end
+      to_release.update_manifest(to_manifest)
+      to_release.write_to_s3 { |f| sublog("Transferring #{f}") }

-    begin
-      to_manifest.write_to_s3 { |f| sublog("Transferring #{f}") }
-    rescue Deb::S3::Utils::AlreadyExistsError => e
-      error("Copying manifest failed because: #{e}")
+      log "Copy complete."
+    ensure
+      if options[:lock] && @lock_acquired
+        Deb::S3::Lock.unlock(options[:codename], component, options[:arch], options[:cache_control])
+        log("Lock released.")
+      end
     end
-    to_release.update_manifest(to_manifest)
-    to_release.write_to_s3 { |f| sublog("Transferring #{f}") }
-
-    log "Copy complete."
   end

   desc "delete PACKAGE",
@@ -440,6 +461,13 @@ class Deb::S3::CLI < Thor
     :aliases => "-a",
     :desc => "The architecture of the package in the APT repository."

+  option :lock,
+    :default => false,
+    :type => :boolean,
+    :aliases => "-l",
+    :desc => "Whether to check for an existing lock on the repository " +
+      "to prevent simultaneous updates"
+
   option :versions,
     :default => nil,
     :type => :array,
@@ -466,30 +494,62 @@ class Deb::S3::CLI < Thor

     configure_s3_client

-    # retrieve the existing manifests
-    log("Retrieving existing manifests")
-    release = Deb::S3::Release.retrieve(options[:codename], options[:origin], options[:suite])
-    manifest = Deb::S3::Manifest.retrieve(options[:codename], component, options[:arch], options[:cache_control], false, options[:skip_package_upload])
+    begin
+      if options[:lock]
+        log("Checking for existing lock file")
+        log("Locking repository for updates")
+        Deb::S3::Lock.lock(options[:codename], component, options[:arch], options[:cache_control])
+        @lock_acquired = true
+      end

-    deleted = manifest.delete_package(package, versions)
-    if deleted.length == 0
-      if versions.nil?
-        error("No packages were deleted. #{package} not found.")
+      # retrieve the existing manifests
+      log("Retrieving existing manifests")
+      release = Deb::S3::Release.retrieve(options[:codename], options[:origin], options[:suite])
+      if arch == 'all'
+        selected_arch = release.architectures
+      else
+        selected_arch = [arch]
+      end
+      all_found = 0
+      selected_arch.each { |ar|
+        manifest = Deb::S3::Manifest.retrieve(options[:codename], component, ar, options[:cache_control], false, options[:skip_package_upload])
+
+        deleted = manifest.delete_package(package, versions)
+        all_found += deleted.length
+        if deleted.length == 0
+          if versions.nil?
+            sublog("No packages were deleted. #{package} not found in arch #{ar}.")
+            next
+          else
+            sublog("No packages were deleted. #{package} versions #{versions.join(', ')} could not be found in arch #{ar}.")
+            next
+          end
         else
-        error("No packages were deleted. #{package} versions #{versions.join(', ')} could not be found.")
+          deleted.each { |p|
+            sublog("Deleting #{p.name} version #{p.full_version} from arch #{ar}")
+          }
         end
-    else
-      deleted.each { |p|
-        sublog("Deleting #{p.name} version #{p.full_version}")
-      }
-    end

-    log("Uploading new manifests to S3")
-    manifest.write_to_s3 {|f| sublog("Transferring #{f}") }
-    release.update_manifest(manifest)
-    release.write_to_s3 {|f| sublog("Transferring #{f}") }
+        log("Uploading new manifests to S3")
+        manifest.write_to_s3 {|f| sublog("Transferring #{f}") }
+        release.update_manifest(manifest)
+        release.write_to_s3 {|f| sublog("Transferring #{f}") }

-    log("Update complete.")
+        log("Update complete.")
+      }
+      if all_found == 0
+        if versions.nil?
+          error("No packages were deleted. #{package} not found.")
+        else
+          error("No packages were deleted. #{package} versions #{versions.join(', ')} could not be found.")
+        end
+      end
+    ensure
+      if options[:lock] && @lock_acquired
+        Deb::S3::Lock.unlock(options[:codename], component, options[:arch], options[:cache_control])
+        log("Lock released.")
+      end
+    end
   end

@@ -515,9 +575,9 @@ class Deb::S3::CLI < Thor
     missing_packages = []

     manifest.packages.each do |p|
-      unless Deb::S3::Utils.s3_exists? p.url_filename_encoded
+      unless Deb::S3::Utils.s3_exists? p.url_filename_encoded(options[:codename])
         sublog("The following packages are missing:\n\n") if missing_packages.empty?
-        puts(p.generate)
+        puts(p.generate(options[:codename]))
         puts("")

         missing_packages << p
@@ -536,6 +596,92 @@ class Deb::S3::CLI < Thor
     end
   end

+  desc "clean", "Delete packages from the pool which are no longer referenced"
+
+  option :lock,
+    :default => false,
+    :type => :boolean,
+    :aliases => "-l",
+    :desc => "Whether to check for an existing lock on the repository " +
+      "to prevent simultaneous updates"
+
+  def clean
+    configure_s3_client
+
+    begin
+      if options[:lock]
+        log("Checking for existing lock file")
+        log("Locking repository for updates")
+        Deb::S3::Lock.lock(options[:codename], component, options[:arch], options[:cache_control])
+        @lock_acquired = true
+      end
+
+      log("Retrieving existing manifests")
+
+      # Enumerate objects under the dists/<codename>/ prefix to find any
+      # Packages files and load them...
+
+      req = Deb::S3::Utils.s3.list_objects_v2({
+        :bucket => Deb::S3::Utils.bucket,
+        :prefix => Deb::S3::Utils.s3_path("dists/#{ options[:codename] }/"),
+      })
+
+      manifests = []
+      req.contents.each do |object|
+        if match = object.key.match(/dists\/([^\/]+)\/([^\/]+)\/binary-([^\/]+)\/Packages$/)
+          codename, component, arch = match.captures
+          manifests.push(Deb::S3::Manifest.retrieve(codename, component, arch, options[:cache_control], options[:fail_if_exists], options[:skip_package_upload]))
+        end
+      end
+
+      # Iterate over the packages in each manifest and build a Set of all the
+      # referenced URLs (relative to the bucket root)...
+
+      refd_urls = Set[]
+      manifests.each do |manifest|
+        manifest.packages.each do |package|
+          refd_urls.add(Deb::S3::Utils.s3_path(package.url_filename(manifest.codename)))
+        end
+      end
+
+      log("Searching for unreferenced packages")
+
+      # Enumerate objects under the pool/<codename>/ prefix and delete any
+      # that aren't referenced by any of the manifests.
+
+      continuation_token = nil
+      while true
+        req = Deb::S3::Utils.s3.list_objects_v2({
+          :bucket => Deb::S3::Utils.bucket,
+          :prefix => Deb::S3::Utils.s3_path("pool/#{ options[:codename] }/"),
+          :continuation_token => continuation_token,
+        })
+
+        req.contents.each do |object|
+          if not refd_urls.include?(object.key)
+            sublog("Deleting #{ object.key }")
+
+            Deb::S3::Utils.s3.delete_object({
+              :bucket => Deb::S3::Utils.bucket,
+              :key => object.key,
+            })
+          end
+        end
+
+        if req.is_truncated
+          continuation_token = req.next_continuation_token
+        else
+          break
+        end
+      end
+    ensure
+      if options[:lock] && @lock_acquired
+        Deb::S3::Lock.unlock(options[:codename], component, options[:arch], options[:cache_control])
+        log("Lock released.")
+      end
+    end
+  end
+
   private

   def component
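The core of the `clean` command above is a set difference: pool keys referenced by some manifest survive, everything else is deleted. A minimal standalone sketch of that logic, with hypothetical keys in place of real S3 objects:

```ruby
require "set"

# Hypothetical keys standing in for S3 objects; only the set logic is real.
referenced = Set["pool/stable/m/my/my-app_1.0.1_amd64.deb"]

pool_keys = [
  "pool/stable/m/my/my-app_1.0.0_amd64.deb", # superseded, no longer referenced
  "pool/stable/m/my/my-app_1.0.1_amd64.deb", # still listed in a Packages file
]

# Anything in the pool but not in any manifest is a candidate for deletion.
unreferenced = pool_keys.reject { |key| referenced.include?(key) }
p unreferenced
# => ["pool/stable/m/my/my-app_1.0.0_amd64.deb"]
```

In the real command the referenced set is built from every `Packages` manifest under `dists/<codename>/`, and the pool listing is paginated with `list_objects_v2` continuation tokens.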
data/lib/deb/s3/lock.rb CHANGED
@@ -1,58 +1,122 @@
 # -*- encoding : utf-8 -*-
-require "tempfile"
-require "socket"
+require "base64"
+require "digest/md5"
 require "etc"
+require "socket"
+require "tempfile"

 class Deb::S3::Lock
-  attr_accessor :user
-  attr_accessor :host
+  attr_reader :user
+  attr_reader :host

-  def initialize
-    @user = nil
-    @host = nil
+  def initialize(user, host)
+    @user = user
+    @host = host
   end

   class << self
-    def locked?(codename, component = nil, architecture = nil, cache_control = nil)
-      Deb::S3::Utils.s3_exists?(lock_path(codename, component, architecture, cache_control))
-    end
+    #
+    # 2-phase mutual lock mechanism based on `s3:CopyObject`.
+    #
+    # This logic doesn't rely on S3's enhanced features like Object Lock
+    # because they impose limitations on using other features like
+    # S3 Cross-Region Replication. This should work more than well enough
+    # with S3's strong read-after-write consistency, which we can presume
+    # in all regions nowadays.
+    #
+    # This relies on S3 setting an object's ETag to the object's MD5 when
+    # the object isn't comprised of multiple parts. We can presume this, as
+    # the lock file is an object of only a few bytes.
+    #
+    # acquire lock:
+    #   1. call `s3:HeadObject` on the final lock object
+    #      1. if the final lock object exists, restart from the beginning
+    #      2. otherwise, call `s3:PutObject` to create the initial lock object
+    #   2. perform `s3:CopyObject` to copy from the initial lock object
+    #      to the final lock object, specifying the ETag/MD5 of the initial
+    #      lock object
+    #      1. if the copy fails with `PreconditionFailed`, restart from
+    #         the beginning
+    #      2. otherwise, the lock has been acquired
+    #
+    # release lock:
+    #   1. remove the final lock object with `s3:DeleteObject`
+    #
+    def lock(codename, component = nil, architecture = nil, cache_control = nil, max_attempts=60, max_wait_interval=10)
+      lockbody = "#{Etc.getlogin}@#{Socket.gethostname}"
+      initial_lockfile = initial_lock_path(codename, component, architecture, cache_control)
+      final_lockfile = lock_path(codename, component, architecture, cache_control)

-    def wait_for_lock(codename, component = nil, architecture = nil, cache_control = nil, max_attempts=60, wait=10)
-      attempts = 0
-      while self.locked?(codename, component, architecture, cache_control) do
-        attempts += 1
-        throw "Unable to obtain a lock after #{max_attempts}, giving up." if attempts > max_attempts
-        sleep(wait)
+      md5_b64 = Base64.encode64(Digest::MD5.digest(lockbody))
+      md5_hex = Digest::MD5.hexdigest(lockbody)
+      max_attempts.times do |i|
+        wait_interval = [(1<<i)/10, max_wait_interval].min
+        if Deb::S3::Utils.s3_exists?(final_lockfile)
+          lock = current(codename, component, architecture, cache_control)
+          $stderr.puts("Repository is locked by another user: #{lock.user} at host #{lock.host} (phase-1)")
+          $stderr.puts("Attempting to obtain a lock after #{wait_interval} second(s).")
+          sleep(wait_interval)
+        else
+          # upload the file
+          Deb::S3::Utils.s3.put_object(
+            bucket: Deb::S3::Utils.bucket,
+            key: Deb::S3::Utils.s3_path(initial_lockfile),
+            body: lockbody,
+            content_type: "text/plain",
+            content_md5: md5_b64,
+            metadata: {
+              "md5" => md5_hex,
+            },
+          )
+          begin
+            Deb::S3::Utils.s3.copy_object(
+              bucket: Deb::S3::Utils.bucket,
+              key: Deb::S3::Utils.s3_path(final_lockfile),
+              copy_source: "/#{Deb::S3::Utils.bucket}/#{Deb::S3::Utils.s3_path(initial_lockfile)}",
+              copy_source_if_match: md5_hex,
+            )
+            return
+          rescue Aws::S3::Errors::PreconditionFailed => error
+            lock = current(codename, component, architecture, cache_control)
+            $stderr.puts("Repository is locked by another user: #{lock.user} at host #{lock.host} (phase-2)")
+            $stderr.puts("Attempting to obtain a lock after #{wait_interval} second(s).")
+            sleep(wait_interval)
+          end
+        end
       end
-    end
-
-    def lock(codename, component = nil, architecture = nil, cache_control = nil)
-      lockfile = Tempfile.new("lockfile")
-      lockfile.write("#{Etc.getlogin}@#{Socket.gethostname}")
-      lockfile.close
-
-      Deb::S3::Utils.s3_store(lockfile.path,
-                              lock_path(codename, component, architecture, cache_control),
-                              "text/plain",
-                              cache_control)
+      # TODO: raise an appropriate error class
+      raise("Unable to obtain a lock after #{max_attempts} attempts, giving up.")
     end

     def unlock(codename, component = nil, architecture = nil, cache_control = nil)
+      Deb::S3::Utils.s3_remove(initial_lock_path(codename, component, architecture, cache_control))
       Deb::S3::Utils.s3_remove(lock_path(codename, component, architecture, cache_control))
     end

     def current(codename, component = nil, architecture = nil, cache_control = nil)
-      lock_content = Deb::S3::Utils.s3_read(lock_path(codename, component, architecture, cache_control))
-      lock_content = lock_content.split('@')
-      lock = Deb::S3::Lock.new
-      lock.user = lock_content[0]
-      lock.host = lock_content[1] if lock_content.size > 1
+      lockbody = Deb::S3::Utils.s3_read(lock_path(codename, component, architecture, cache_control))
+      if lockbody
+        user, host = lockbody.to_s.split("@", 2)
+        lock = Deb::S3::Lock.new(user, host)
+      else
+        lock = Deb::S3::Lock.new("unknown", "unknown")
+      end
       lock
     end

     private
+    def initial_lock_path(codename, component = nil, architecture = nil, cache_control = nil)
+      "dists/#{codename}/lockfile.lock"
+    end
+
     def lock_path(codename, component = nil, architecture = nil, cache_control = nil)
-      "dists/#{codename}/#{component}/binary-#{architecture}/lockfile"
+      #
+      # Acquire the repository lock at the `codename` level to avoid races
+      # between concurrent upload attempts:
+      #
+      # * `deb-s3 upload --arch=all` touches multiple `dists/{codename}/{component}/binary-*/Packages*` files
+      # * every `deb-s3 upload` touches `dists/{codename}/Release`
+      #
+      "dists/#{codename}/lockfile"
     end
   end
 end
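The retry loop in `Deb::S3::Lock.lock` above backs off exponentially: attempt `i` waits `(2**i)/10` seconds (integer division), capped at `max_wait_interval`. A standalone sketch of that schedule for the first ten attempts:

```ruby
# Reproduce the wait-interval schedule from the lock method above.
# (2**i)/10 uses integer division, so early attempts retry immediately.
max_wait_interval = 10
intervals = (0...10).map { |i| [(1 << i) / 10, max_wait_interval].min }
p intervals
# => [0, 0, 0, 0, 1, 3, 6, 10, 10, 10]
```

So contention is retried quickly at first and then settles into a steady 10-second poll until `max_attempts` is exhausted.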
data/lib/deb/s3/package.rb CHANGED
@@ -2,6 +2,7 @@
 require "digest/sha1"
 require "digest/sha2"
 require "digest/md5"
+require "open3"
 require "socket"
 require "tmpdir"
 require "uri"
@@ -30,7 +31,6 @@ class Deb::S3::Package
   attr_accessor :attributes

   # hashes
-  attr_accessor :url_filename
   attr_accessor :sha1
   attr_accessor :sha256
   attr_accessor :md5
@@ -57,22 +57,36 @@ class Deb::S3::Package

   def extract_control(package)
     if system("which dpkg > /dev/null 2>&1")
-      `dpkg -f #{package}`
+      output, status = Open3.capture2("dpkg", "-f", package)
+      output
     else
+      # use ar to determine the control file name (control.ext)
+      package_files = `ar t #{package}`
+      control_file = package_files.split("\n").select do |file|
+        file.start_with?("control.")
+      end.first
+      if control_file === "control.tar.gz"
+        compression = "z"
+      elsif control_file === "control.tar.zst"
+        compression = "I zstd"
+      else
+        compression = "J"
+      end
+
       # ar fails to find the control.tar.gz tarball within the .deb
       # on Mac OS. Try using ar to list the control file, if found,
       # use ar to extract, otherwise attempt with tar which works on OS X.
-      extract_control_tarball_cmd = "ar p #{package} control.tar.gz"
+      extract_control_tarball_cmd = "ar p #{package} #{control_file}"

       begin
-        safesystem("ar t #{package} control.tar.gz &> /dev/null")
+        safesystem("ar t #{package} #{control_file} &> /dev/null")
       rescue SafeSystemError
         warn "Failed to find control data in .deb with ar, trying tar."
-        extract_control_tarball_cmd = "tar zxf #{package} --to-stdout control.tar.gz"
+        extract_control_tarball_cmd = "tar -#{compression} -xf #{package} --to-stdout #{control_file}"
       end

       Dir.mktmpdir do |path|
-        safesystem("#{extract_control_tarball_cmd} | tar -zxf - -C #{path}")
+        safesystem("#{extract_control_tarball_cmd} | tar -#{compression} -xf - -C #{path}")
         File.read(File.join(path, "control"))
       end
     end
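The change above makes the fallback path handle gzip-, zstd-, and xz-compressed control archives. The mapping from control member name to tar flag can be sketched in isolation (the helper name is an illustration, not the gem's API; `"I zstd"` assumes GNU tar with zstd support, and the `else` branch assumes `control.tar.xz`):

```ruby
# Map the control member name reported by `ar t` to a tar compression flag,
# mirroring the if/elsif chain added in extract_control above.
def tar_compression_flag(control_file)
  case control_file
  when "control.tar.gz"  then "z"       # gzip
  when "control.tar.zst" then "I zstd"  # zstd via an external filter
  else "J"                              # assume xz (control.tar.xz)
  end
end

p tar_compression_flag("control.tar.gz")  # => "z"
p tar_compression_flag("control.tar.xz")  # => "J"
```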
@@ -122,11 +136,10 @@ class Deb::S3::Package
     [[epoch, version].compact.join(":"), iteration].compact.join("-")
   end

-  def filename=(f)
-    @filename = f
-    @filename
+  def url_filename=(f)
+    @url_filename = f
   end

   def url_filename(codename)
     @url_filename || "pool/#{codename}/#{self.name[0]}/#{self.name[0..1]}/#{File.basename(self.filename)}"
   end
@@ -242,7 +255,7 @@ class Deb::S3::Package

     # Packages manifest fields
     filename = fields.delete('Filename')
-    self.url_filename = filename && URI.unescape(filename)
+    self.url_filename = filename && CGI.unescape(filename)
     self.sha1 = fields.delete('SHA1')
     self.sha256 = fields.delete('SHA256')
     self.md5 = fields.delete('MD5sum')
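The `url_filename(codename)` default shown above places each `.deb` at `pool/<codename>/<first letter>/<first two letters>/<basename>`. A hypothetical standalone helper (not the gem's API) demonstrating the layout:

```ruby
# Mirror the default pool layout computed by Package#url_filename above.
def pool_path(codename, package_name, deb_path)
  "pool/#{codename}/#{package_name[0]}/#{package_name[0..1]}/#{File.basename(deb_path)}"
end

puts pool_path("stable", "my-deb-package", "/tmp/my-deb-package-1.0.0_amd64.deb")
# => pool/stable/m/my/my-deb-package-1.0.0_amd64.deb
```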
data/lib/deb/s3/release.rb CHANGED
@@ -102,7 +102,7 @@ class Deb::S3::Release

     # sign the file, if necessary
     if Deb::S3::Utils.signing_key
-      key_param = Deb::S3::Utils.signing_key != "" ? "--default-key=#{Deb::S3::Utils.signing_key}" : ""
+      key_param = Deb::S3::Utils.signing_key.any? ? "-u #{Deb::S3::Utils.signing_key.join(" -u ")}" : ""
       if system("gpg -a #{key_param} --digest-algo SHA256 #{Deb::S3::Utils.gpg_options} -s --clearsign #{release_tmp.path}")
         local_file = release_tmp.path+".asc"
         remote_file = "dists/#{@codename}/InRelease"
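Because `--sign` is now an array option, the signing key setting becomes a list of key IDs, and the change above turns each one into a gpg `-u` flag. A sketch of that string assembly (the key IDs are made up for illustration):

```ruby
# Build the gpg key arguments the way the Release-signing change above does:
# one `-u <KEYID>` per configured signing key, empty when none are set.
signing_keys = ["6643C242C18FE05B", "0123456789ABCDEF"] # hypothetical IDs
key_param = signing_keys.any? ? "-u #{signing_keys.join(" -u ")}" : ""
puts key_param
# => -u 6643C242C18FE05B -u 0123456789ABCDEF
```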
data/lib/deb/s3/utils.rb CHANGED
@@ -1,8 +1,6 @@
 # -*- encoding : utf-8 -*-
-require "base64"
 require "digest/md5"
 require "erb"
-require "tmpdir"

 module Deb::S3::Utils
   module_function
@@ -41,7 +39,7 @@ module Deb::S3::Utils
   def template(path)
     template_file = File.join(File.dirname(__FILE__), "templates", path)
     template_code = File.read(template_file)
-    ERB.new(template_code, nil, "-")
+    ERB.new(template_code, trim_mode: "-")
   end

   def s3_path(path)
@@ -70,7 +68,7 @@ module Deb::S3::Utils
       :key => s3_path(path),
     )[:body].read
   rescue Aws::S3::Errors::NoSuchKey
-    false
+    nil
   end

   def s3_store(path, filename=nil, content_type='application/octet-stream; charset=binary', cache_control=nil, fail_if_exists=false)
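The ERB change above replaces the legacy positional arguments (`ERB.new(code, nil, "-")`), which newer Rubies no longer accept, with the `trim_mode:` keyword. A minimal sketch of the keyword form, using a toy template rather than the gem's template files:

```ruby
require "erb"

# "-" trim mode enables <%- and -%> tags for whitespace trimming,
# matching the behavior the template helper relied on before.
template = ERB.new("<%- greeting = 'hello' -%>value=<%= greeting %>", trim_mode: "-")
puts template.result(binding)
# => value=hello
```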
data/lib/deb/s3.rb CHANGED
@@ -1,6 +1,6 @@
 # -*- encoding : utf-8 -*-
 module Deb
   module S3
-    VERSION = "0.10.0"
+    VERSION = "0.11.6"
   end
 end
metadata CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: deb-s3
 version: !ruby/object:Gem::Version
-  version: 0.10.0
+  version: 0.11.6
 platform: ruby
 authors:
 - Ken Robertson
 autorequire:
 bindir: bin
 cert_chain: []
-date: 2018-07-16 00:00:00.000000000 Z
+date: 2022-11-06 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: thor
@@ -16,28 +16,28 @@ dependencies:
     requirements:
     - - "~>"
       - !ruby/object:Gem::Version
-        version: 0.19.0
+        version: '1'
   type: :runtime
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
     - - "~>"
       - !ruby/object:Gem::Version
-        version: 0.19.0
+        version: '1'
 - !ruby/object:Gem::Dependency
-  name: aws-sdk
+  name: aws-sdk-s3
   requirement: !ruby/object:Gem::Requirement
     requirements:
     - - "~>"
       - !ruby/object:Gem::Version
-        version: '2'
+        version: '1'
   type: :runtime
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
     - - "~>"
       - !ruby/object:Gem::Version
-        version: '2'
+        version: '1'
 - !ruby/object:Gem::Dependency
   name: minitest
   requirement: !ruby/object:Gem::Requirement
@@ -96,15 +96,14 @@ required_ruby_version: !ruby/object:Gem::Requirement
   requirements:
   - - ">="
     - !ruby/object:Gem::Version
-      version: 1.9.3
+      version: 2.7.0
 required_rubygems_version: !ruby/object:Gem::Requirement
   requirements:
   - - ">="
     - !ruby/object:Gem::Version
       version: '0'
 requirements: []
-rubyforge_project:
-rubygems_version: 2.6.13
+rubygems_version: 3.3.7
 signing_key:
 specification_version: 4
 summary: Easily create and manage an APT repository on S3.