deb-s3 0.10.0 → 0.11.5

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
- SHA1:
-   metadata.gz: 49885adda26d9a194e593feb9db2ff154d697951
-   data.tar.gz: 65f61dd1e945500c9ead7bc772932d8769c7c45a
+ SHA256:
+   metadata.gz: 63a8b704bcc52020044d1c8e2a67d5c83b20fdcb0febb012a1aa95b3a7e4e33b
+   data.tar.gz: e0c1a63f7c3528a7493a20ca65837653bd45e2c3e78815e8858becd2372efe1a
  SHA512:
-   metadata.gz: 78d6217e6414568658bc3b2707fc53865bb52a6ec5fdf48d43d496ad888dd5e96f5882a0d0e7c22b9ff92162a62f7387590b1434deb1ea392a5bc5482c859d5d
-   data.tar.gz: f33a94764a6b2d6e92984a99e9458a11b8dea8148b48285936a41bfd743b9c6f631611246bb56ea87fee124c5dedae6311773a4796a1ece4d81abf0aa0651c7a
+   metadata.gz: 9fd7a53af575ee39fa3a1ea7577e5c81feea7200715a8477674022c449ed3a7cde33b8f0f84b8c096b8eddfe72a4d56c7e057bbfa4503f75063d23e744348fa6
+   data.tar.gz: d2049aeacd81d10398e901e8413421e4ea06827d426300c85fc7477b36f56fab651bc0b8402cac25f7562de3123d0ba348b9d7b43ceb6a53be36fb666acf9bf5
data/README.md CHANGED
@@ -1,6 +1,8 @@
  # deb-s3
 
- [![Build Status](https://travis-ci.org/krobertson/deb-s3.svg?branch=master)](https://travis-ci.org/krobertson/deb-s3)
+ [![Build Status](https://travis-ci.org/deb-s3/deb-s3.svg?branch=master)](https://travis-ci.org/deb-s3/deb-s3)
+
+ **This repository is a fork of [krobertson/deb-s3](https://github.com/krobertson/deb-s3).**
 
  `deb-s3` is a simple utility to make creating and managing APT repositories on
  S3.
@@ -27,17 +29,32 @@ With `deb-s3`, there is no need for this. `deb-s3` features:
 
  ## Getting Started
 
- You can simply install it from rubygems:
+ [Install the package via gem](https://github.com/deb-s3/deb-s3/packages/304683) or manually:
 
  ```console
- $ gem install deb-s3
+ $ curl -sLO https://github.com/deb-s3/deb-s3/releases/download/0.11.5/deb-s3-0.11.5.gem
+ $ gem install deb-s3-0.11.5.gem
  ```
 
- Or to run the code directly, just check out the repo and run Bundler to ensure
+ or via APT:
+
+ ```console
+ # Add repository key
+ $ sudo wget -O /etc/apt/trusted.gpg.d/deb-s3-archive-keyring.gpg https://raw.githubusercontent.com/deb-s3/deb-s3/master/deb-s3-archive-keyring.gpg
+
+ # Add repository
+ $ echo "deb http://deb-s3-repo.s3.us-east-2.amazonaws.com/debian/ $(lsb_release -cs) main" | sudo tee -a /etc/apt/sources.list > /dev/null
+
+ # Install package
+ $ sudo apt-get update
+ $ sudo apt-get install deb-s3
+ ```
+
+ To run the code directly, just check out the repo and run bundler to ensure
  all dependencies are installed:
 
  ```console
- $ git clone https://github.com/krobertson/deb-s3.git
+ $ git clone https://github.com/deb-s3/deb-s3.git
  $ cd deb-s3
  $ bundle install
  ```
@@ -92,8 +109,9 @@ Uploads the given files to a S3 bucket as an APT repository.
  ```
 
  You can also delete packages from the APT repository. Please keep in mind that
- this does NOT delete the .deb file itself, it only removes it from the list of
- packages in the specified component, codename and architecture.
+ this does NOT delete the .deb file itself (the `clean` command does that), it
+ only removes it from the list of packages in the specified component, codename
+ and architecture.
 
  Now to delete the package:
  ```console
@@ -139,6 +157,48 @@ Options:
 
  Remove the package named PACKAGE. If --versions is not specified, delete all versions of PACKAGE. Otherwise, only the specified versions will be deleted.
  ```
 
+ Dangling `.deb` files left by the `delete` command (or by uploading new versions) can be removed using the `clean` command:
+
+ ```console
+ $ deb-s3 clean --bucket my-bucket
+ >> Retrieving existing manifests
+ >> Searching for unreferenced packages
+ -- pool/m/my/my-deb-package-1.0.0_amd64.deb
+ ```
+
+ ```
+ Usage:
+   deb-s3 clean
+
+ Options:
+   -l, [--lock], [--no-lock]                      # Whether to check for an existing lock on the repository to prevent simultaneous updates
+   -b, [--bucket=BUCKET]                          # The name of the S3 bucket to upload to.
+   [--prefix=PREFIX]                              # The path prefix to use when storing on S3.
+   -o, [--origin=ORIGIN]                          # The origin to use in the repository Release file.
+   [--suite=SUITE]                                # The suite to use in the repository Release file.
+   -c, [--codename=CODENAME]                      # The codename of the APT repository.
+                                                  # Default: stable
+   -m, [--component=COMPONENT]                    # The component of the APT repository.
+                                                  # Default: main
+   [--access-key-id=ACCESS_KEY_ID]                # The access key for connecting to S3.
+   [--secret-access-key=SECRET_ACCESS_KEY]        # The secret key for connecting to S3.
+   [--session-token=SESSION_TOKEN]                # The (optional) session token for connecting to S3.
+   [--endpoint=ENDPOINT]                          # The URL endpoint to the S3 API.
+   [--s3-region=S3_REGION]                        # The region for connecting to S3.
+                                                  # Default: us-east-1
+   [--force-path-style], [--no-force-path-style]  # Use S3 path style instead of subdomains.
+   [--proxy-uri=PROXY_URI]                        # The URI of the proxy to send service requests through.
+   -v, [--visibility=VISIBILITY]                  # The access policy for the uploaded files. Can be public, private, or authenticated.
+                                                  # Default: public
+   [--sign=SIGN]                                  # GPG Sign the Release file when uploading a package, or when verifying it after removing a package. Use --sign with your GPG key ID to use a specific key (--sign=6643C242C18FE05B).
+   [--gpg-options=GPG_OPTIONS]                    # Additional command line options to pass to GPG when signing.
+   -e, [--encryption], [--no-encryption]          # Use S3 server side encryption.
+   -q, [--quiet], [--no-quiet]                    # Doesn't output information, just returns status appropriately.
+   -C, [--cache-control=CACHE_CONTROL]            # Add cache-control headers to S3 objects.
+
+ Delete packages from the pool which are no longer referenced
+ ```
+
  You can also verify an existing APT repository on S3 using the `verify` command:
 
  ```console
@@ -179,3 +239,47 @@ Options:
 
  Verifies that the files in the package manifests exist
  ```
+
+ #### Example S3 IAM Policy
+
+ ```
+ {
+   "Version": "2012-10-17",
+   "Statement": [
+     {
+       "Effect": "Allow",
+       "Action": [
+         "s3:ListBucket"
+       ],
+       "Resource": [
+         "arn:aws:s3:::BUCKETNAME"
+       ]
+     },
+     {
+       "Effect": "Allow",
+       "Action": [
+         "s3:PutObject",
+         "s3:GetObject",
+         "s3:DeleteObject",
+         "s3:DeleteObjectVersion",
+         "s3:GetObjectAcl",
+         "s3:GetObjectTagging",
+         "s3:GetObjectTorrent",
+         "s3:GetObjectVersion",
+         "s3:GetObjectVersionAcl",
+         "s3:GetObjectVersionTagging",
+         "s3:GetObjectVersionTorrent",
+         "s3:PutObjectAcl",
+         "s3:PutObjectTagging",
+         "s3:PutObjectVersionAcl",
+         "s3:PutObjectVersionTagging",
+         "s3:ReplicateObject",
+         "s3:RestoreObject"
+       ],
+       "Resource": [
+         "arn:aws:s3:::BUCKETNAME/*"
+       ]
+     }
+   ]
+ }
+ ```
data/lib/deb/s3/cli.rb CHANGED
@@ -1,5 +1,5 @@
  # -*- encoding : utf-8 -*-
- require "aws-sdk"
+ require "aws-sdk-s3"
  require "thor"
 
  # Hack: aws requires this!
@@ -86,10 +86,11 @@ class Deb::S3::CLI < Thor
            "Can be public, private, or authenticated."
 
  class_option :sign,
-   :type => :string,
+   :type => :array,
    :desc => "GPG Sign the Release file when uploading a package, " +
             "or when verifying it after removing a package. " +
-            "Use --sign with your GPG key ID to use a specific key (--sign=6643C242C18FE05B)."
+            "Use --sign with your space-delimited GPG key IDs to use one or more specific keys " +
+            "(--sign 6643C242C18FE05B 6243C322C18FE05C)."
 
  class_option :gpg_options,
    :default => "",
@@ -163,12 +164,6 @@ class Deb::S3::CLI < Thor
      begin
        if options[:lock]
          log("Checking for existing lock file")
-         if Deb::S3::Lock.locked?(options[:codename], component, options[:arch], options[:cache_control])
-           lock = Deb::S3::Lock.current(options[:codename], component, options[:arch], options[:cache_control])
-           log("Repository is locked by another user: #{lock.user} at host #{lock.host}")
-           log("Attempting to obtain a lock")
-           Deb::S3::Lock.wait_for_lock(options[:codename], component, options[:arch], options[:cache_control])
-         end
          log("Locking repository for updates")
          Deb::S3::Lock.lock(options[:codename], component, options[:arch], options[:cache_control])
          @lock_acquired = true
@@ -205,11 +200,15 @@ class Deb::S3::CLI < Thor
      # throw an error. This is mainly the case when initializing a brand new
      # repository. With "all", we won't know which architectures they're using.
      if arch == "all" && manifests.count == 0
-       error("Package #{File.basename(file)} had architecture \"all\", " +
-             "however noexisting package lists exist. This can often happen " +
-             "if the first package you are add to a new repository is an " +
-             "\"all\" architecture file. Please use --arch [i386|amd64|armhf] or " +
-             "another platform type to upload the file.")
+       manifests['amd64'] = Deb::S3::Manifest.retrieve(options[:codename], component, 'amd64', options[:cache_control], options[:fail_if_exists], options[:skip_package_upload])
+       manifests['i386'] = Deb::S3::Manifest.retrieve(options[:codename], component, 'i386', options[:cache_control], options[:fail_if_exists], options[:skip_package_upload])
+       manifests['armhf'] = Deb::S3::Manifest.retrieve(options[:codename], component, 'armhf', options[:cache_control], options[:fail_if_exists], options[:skip_package_upload])
+
+       # error("Package #{File.basename(file)} had architecture \"all\", " +
+       #       "however no existing package lists exist. This can often happen " +
+       #       "if the first package you add to a new repository is an " +
+       #       "\"all\" architecture file. Please use --arch [i386|amd64|armhf] or " +
+       #       "another platform type to upload the file.")
      end
 
      # retrieve the manifest for the arch if we don't have it already
@@ -286,7 +285,7 @@ class Deb::S3::CLI < Thor
                                           false, false)
      manifest.packages.map do |package|
        if options[:long]
-         package.generate
+         package.generate(options[:codename])
        else
          [package.name, package.full_version, package.architecture].tap do |row|
            row.each_with_index do |col, i|
@@ -331,7 +330,7 @@ class Deb::S3::CLI < Thor
        error "No such package found."
      end
 
-     puts package.generate
+     puts package.generate(options[:codename])
    end
 
    desc "copy PACKAGE TO_CODENAME TO_COMPONENT ",
@@ -347,6 +346,13 @@ class Deb::S3::CLI < Thor
      :aliases => "-a",
      :desc => "The architecture of the package in the APT repository."
 
+   option :lock,
+     :default => false,
+     :type => :boolean,
+     :aliases => "-l",
+     :desc => "Whether to check for an existing lock on the repository " +
+              "to prevent simultaneous updates"
+
    option :versions,
      :default => nil,
      :type => :array,
@@ -392,42 +398,56 @@ class Deb::S3::CLI < Thor
 
      configure_s3_client
 
-     # retrieve the existing manifests
-     log "Retrieving existing manifests"
-     from_manifest = Deb::S3::Manifest.retrieve(options[:codename],
-                                                component, arch,
+     begin
+       if options[:lock]
+         log("Checking for existing lock file")
+         log("Locking repository for updates")
+         Deb::S3::Lock.lock(options[:codename], to_component, options[:arch], options[:cache_control])
+         @lock_acquired = true
+       end
+
+       # retrieve the existing manifests
+       log "Retrieving existing manifests"
+       from_manifest = Deb::S3::Manifest.retrieve(options[:codename],
+                                                  component, arch,
+                                                  options[:cache_control],
+                                                  false, options[:skip_package_upload])
+       to_release = Deb::S3::Release.retrieve(to_codename)
+       to_manifest = Deb::S3::Manifest.retrieve(to_codename, to_component, arch,
                                                 options[:cache_control],
-                                                false, options[:skip_package_upload])
-     to_release = Deb::S3::Release.retrieve(to_codename)
-     to_manifest = Deb::S3::Manifest.retrieve(to_codename, to_component, arch,
-                                              options[:cache_control],
-                                              options[:fail_if_exists],
-                                              options[:skip_package_upload])
-     packages = from_manifest.packages.select { |p|
-       p.name == package_name &&
-       (versions.nil? || versions.include?(p.full_version))
-     }
-     if packages.size == 0
-       error "No packages found in repository."
-     end
+                                                options[:fail_if_exists],
+                                                options[:skip_package_upload])
+       packages = from_manifest.packages.select { |p|
+         p.name == package_name &&
+         (versions.nil? || versions.include?(p.full_version))
+       }
+       if packages.size == 0
+         error "No packages found in repository."
+       end
+
+       packages.each do |package|
+         begin
+           to_manifest.add package, options[:preserve_versions], false
+         rescue Deb::S3::Utils::AlreadyExistsError => e
+           error("Preparing manifest failed because: #{e}")
+         end
+       end
 
-     packages.each do |package|
        begin
-         to_manifest.add package, options[:preserve_versions], false
+         to_manifest.write_to_s3 { |f| sublog("Transferring #{f}") }
        rescue Deb::S3::Utils::AlreadyExistsError => e
-         error("Preparing manifest failed because: #{e}")
+         error("Copying manifest failed because: #{e}")
        end
-     end
+       to_release.update_manifest(to_manifest)
+       to_release.write_to_s3 { |f| sublog("Transferring #{f}") }
 
-     begin
-       to_manifest.write_to_s3 { |f| sublog("Transferring #{f}") }
-     rescue Deb::S3::Utils::AlreadyExistsError => e
-       error("Copying manifest failed because: #{e}")
+       log "Copy complete."
+     ensure
+       if options[:lock] && @lock_acquired
+         Deb::S3::Lock.unlock(options[:codename], component, options[:arch], options[:cache_control])
+         log("Lock released.")
+       end
      end
-     to_release.update_manifest(to_manifest)
-     to_release.write_to_s3 { |f| sublog("Transferring #{f}") }
-
-     log "Copy complete."
    end
 
    desc "delete PACKAGE",
@@ -440,6 +460,13 @@ class Deb::S3::CLI < Thor
      :aliases => "-a",
      :desc => "The architecture of the package in the APT repository."
 
+   option :lock,
+     :default => false,
+     :type => :boolean,
+     :aliases => "-l",
+     :desc => "Whether to check for an existing lock on the repository " +
+              "to prevent simultaneous updates"
+
    option :versions,
      :default => nil,
      :type => :array,
@@ -466,30 +493,62 @@ class Deb::S3::CLI < Thor
 
      configure_s3_client
 
-     # retrieve the existing manifests
-     log("Retrieving existing manifests")
-     release = Deb::S3::Release.retrieve(options[:codename], options[:origin], options[:suite])
-     manifest = Deb::S3::Manifest.retrieve(options[:codename], component, options[:arch], options[:cache_control], false, options[:skip_package_upload])
+     begin
+       if options[:lock]
+         log("Checking for existing lock file")
+         log("Locking repository for updates")
+         Deb::S3::Lock.lock(options[:codename], component, options[:arch], options[:cache_control])
+         @lock_acquired = true
+       end
 
-     deleted = manifest.delete_package(package, versions)
-     if deleted.length == 0
-       if versions.nil?
-         error("No packages were deleted. #{package} not found.")
+       # retrieve the existing manifests
+       log("Retrieving existing manifests")
+       release = Deb::S3::Release.retrieve(options[:codename], options[:origin], options[:suite])
+       if arch == 'all'
+         selected_arch = release.architectures
+       else
+         selected_arch = [arch]
+       end
+       all_found = 0
+       selected_arch.each { |ar|
+         manifest = Deb::S3::Manifest.retrieve(options[:codename], component, ar, options[:cache_control], false, options[:skip_package_upload])
+
+         deleted = manifest.delete_package(package, versions)
+         all_found += deleted.length
+         if deleted.length == 0
+           if versions.nil?
+             sublog("No packages were deleted. #{package} not found in arch #{ar}.")
+             next
+           else
+             sublog("No packages were deleted. #{package} versions #{versions.join(', ')} could not be found in arch #{ar}.")
+             next
+           end
        else
-         error("No packages were deleted. #{package} versions #{versions.join(', ')} could not be found.")
+           deleted.each { |p|
+             sublog("Deleting #{p.name} version #{p.full_version} from arch #{ar}")
+           }
        end
-     else
-       deleted.each { |p|
-         sublog("Deleting #{p.name} version #{p.full_version}")
-       }
-     end
 
-     log("Uploading new manifests to S3")
-     manifest.write_to_s3 {|f| sublog("Transferring #{f}") }
-     release.update_manifest(manifest)
-     release.write_to_s3 {|f| sublog("Transferring #{f}") }
+         log("Uploading new manifests to S3")
+         manifest.write_to_s3 {|f| sublog("Transferring #{f}") }
+         release.update_manifest(manifest)
+         release.write_to_s3 {|f| sublog("Transferring #{f}") }
 
-     log("Update complete.")
+         log("Update complete.")
+       }
+       if all_found == 0
+         if versions.nil?
+           error("No packages were deleted. #{package} not found.")
+         else
+           error("No packages were deleted. #{package} versions #{versions.join(', ')} could not be found.")
+         end
+       end
+     ensure
+       if options[:lock] && @lock_acquired
+         Deb::S3::Lock.unlock(options[:codename], component, options[:arch], options[:cache_control])
+         log("Lock released.")
+       end
+     end
    end
 
 
@@ -515,9 +574,9 @@ class Deb::S3::CLI < Thor
      missing_packages = []
 
      manifest.packages.each do |p|
-       unless Deb::S3::Utils.s3_exists? p.url_filename_encoded
+       unless Deb::S3::Utils.s3_exists? p.url_filename_encoded(options[:codename])
          sublog("The following packages are missing:\n\n") if missing_packages.empty?
-         puts(p.generate)
+         puts(p.generate(options[:codename]))
          puts("")
 
          missing_packages << p
@@ -536,6 +595,92 @@ class Deb::S3::CLI < Thor
      end
    end
 
+   desc "clean", "Delete packages from the pool which are no longer referenced"
+
+   option :lock,
+     :default => false,
+     :type => :boolean,
+     :aliases => "-l",
+     :desc => "Whether to check for an existing lock on the repository " +
+              "to prevent simultaneous updates"
+
+   def clean
+     configure_s3_client
+
+     begin
+       if options[:lock]
+         log("Checking for existing lock file")
+         log("Locking repository for updates")
+         Deb::S3::Lock.lock(options[:codename], component, options[:arch], options[:cache_control])
+         @lock_acquired = true
+       end
+
+       log("Retrieving existing manifests")
+
+       # Enumerate objects under the dists/<codename>/ prefix to find any
+       # Packages files and load them...
+
+       req = Deb::S3::Utils.s3.list_objects_v2({
+         :bucket => Deb::S3::Utils.bucket,
+         :prefix => Deb::S3::Utils.s3_path("dists/#{ options[:codename] }/"),
+       })
+
+       manifests = []
+       req.contents.each do |object|
+         if match = object.key.match(/dists\/([^\/]+)\/([^\/]+)\/binary-([^\/]+)\/Packages$/)
+           codename, component, arch = match.captures
+           manifests.push(Deb::S3::Manifest.retrieve(codename, component, arch, options[:cache_control], options[:fail_if_exists], options[:skip_package_upload]))
+         end
+       end
+
+       # Iterate over the packages in each manifest and build a Set of all the
+       # referenced URLs (relative to bucket root)...
+
+       refd_urls = Set[]
+       manifests.each do |manifest|
+         manifest.packages.each do |package|
+           refd_urls.add(Deb::S3::Utils.s3_path(package.url_filename(manifest.codename)))
+         end
+       end
+
+       log("Searching for unreferenced packages")
+
+       # Enumerate objects under the pool/<codename> prefix and delete any that
+       # aren't referenced by any of the manifests.
+
+       continuation_token = nil
+       while true
+         req = Deb::S3::Utils.s3.list_objects_v2({
+           :bucket => Deb::S3::Utils.bucket,
+           :prefix => Deb::S3::Utils.s3_path("pool/#{ options[:codename] }/"),
+           :continuation_token => continuation_token,
+         })
+
+         req.contents.each do |object|
+           if not refd_urls.include?(object.key)
+             sublog("Deleting #{ object.key }")
+
+             Deb::S3::Utils.s3.delete_object({
+               :bucket => Deb::S3::Utils.bucket,
+               :key => object.key,
+             })
+           end
+         end
+
+         if req.is_truncated
+           continuation_token = req.next_continuation_token
+         else
+           break
+         end
+       end
+     ensure
+       if options[:lock] && @lock_acquired
+         Deb::S3::Lock.unlock(options[:codename], component, options[:arch], options[:cache_control])
+         log("Lock released.")
+       end
+     end
+   end
+
    private
 
    def component
data/lib/deb/s3/lock.rb CHANGED
@@ -1,58 +1,122 @@
  # -*- encoding : utf-8 -*-
- require "tempfile"
- require "socket"
+ require "base64"
+ require "digest/md5"
  require "etc"
+ require "socket"
+ require "tempfile"
 
  class Deb::S3::Lock
-   attr_accessor :user
-   attr_accessor :host
+   attr_reader :user
+   attr_reader :host
 
-   def initialize
-     @user = nil
-     @host = nil
+   def initialize(user, host)
+     @user = user
+     @host = host
    end
 
    class << self
-     def locked?(codename, component = nil, architecture = nil, cache_control = nil)
-       Deb::S3::Utils.s3_exists?(lock_path(codename, component, architecture, cache_control))
-     end
+     #
+     # 2-phase mutual lock mechanism based on `s3:CopyObject`.
+     #
+     # This logic doesn't rely on S3's enhanced features like Object Lock
+     # because they impose limitations on using other features like
+     # S3 Cross-Region Replication. This should work more than well enough
+     # with S3's strong read-after-write consistency, which we can presume
+     # in all regions nowadays.
+     #
+     # This relies on S3 setting an object's ETag to the object's MD5 when
+     # the object isn't comprised of multiple parts. We can presume that,
+     # as the lock file is an object of only a few bytes.
+     #
+     # acquire lock:
+     # 1. call `s3:HeadObject` on the final lock object
+     #    1. If the final lock object exists, restart from the beginning
+     #    2. Otherwise, call `s3:PutObject` to create the initial lock object
+     # 2. Perform `s3:CopyObject` to copy from the initial lock object
+     #    to the final lock object, specifying the ETag/MD5 of the initial
+     #    lock object
+     #    1. If the copy fails with `PreconditionFailed`, restart
+     #       from the beginning
+     #    2. Otherwise, the lock has been acquired
+     #
+     # release lock:
+     # 1. remove the final lock object with `s3:DeleteObject`
+     #
+     def lock(codename, component = nil, architecture = nil, cache_control = nil, max_attempts=60, max_wait_interval=10)
+       lockbody = "#{Etc.getlogin}@#{Socket.gethostname}"
+       initial_lockfile = initial_lock_path(codename, component, architecture, cache_control)
+       final_lockfile = lock_path(codename, component, architecture, cache_control)
 
-     def wait_for_lock(codename, component = nil, architecture = nil, cache_control = nil, max_attempts=60, wait=10)
-       attempts = 0
-       while self.locked?(codename, component, architecture, cache_control) do
-         attempts += 1
-         throw "Unable to obtain a lock after #{max_attempts}, giving up." if attempts > max_attempts
-         sleep(wait)
+       md5_b64 = Base64.encode64(Digest::MD5.digest(lockbody))
+       md5_hex = Digest::MD5.hexdigest(lockbody)
+       max_attempts.times do |i|
+         wait_interval = [(1<<i)/10, max_wait_interval].min
+         if Deb::S3::Utils.s3_exists?(final_lockfile)
+           lock = current(codename, component, architecture, cache_control)
+           $stderr.puts("Repository is locked by another user: #{lock.user} at host #{lock.host} (phase-1)")
+           $stderr.puts("Attempting to obtain a lock after #{wait_interval} second(s).")
+           sleep(wait_interval)
+         else
+           # upload the file
+           Deb::S3::Utils.s3.put_object(
+             bucket: Deb::S3::Utils.bucket,
+             key: Deb::S3::Utils.s3_path(initial_lockfile),
+             body: lockbody,
+             content_type: "text/plain",
+             content_md5: md5_b64,
+             metadata: {
+               "md5" => md5_hex,
+             },
+           )
+           begin
+             Deb::S3::Utils.s3.copy_object(
+               bucket: Deb::S3::Utils.bucket,
+               key: Deb::S3::Utils.s3_path(final_lockfile),
+               copy_source: "/#{Deb::S3::Utils.bucket}/#{Deb::S3::Utils.s3_path(initial_lockfile)}",
+               copy_source_if_match: md5_hex,
+             )
+             return
+           rescue Aws::S3::Errors::PreconditionFailed => error
+             lock = current(codename, component, architecture, cache_control)
+             $stderr.puts("Repository is locked by another user: #{lock.user} at host #{lock.host} (phase-2)")
+             $stderr.puts("Attempting to obtain a lock after #{wait_interval} second(s).")
+             sleep(wait_interval)
+           end
+         end
        end
-     end
-
-     def lock(codename, component = nil, architecture = nil, cache_control = nil)
-       lockfile = Tempfile.new("lockfile")
-       lockfile.write("#{Etc.getlogin}@#{Socket.gethostname}")
-       lockfile.close
-
-       Deb::S3::Utils.s3_store(lockfile.path,
-         lock_path(codename, component, architecture, cache_control),
-         "text/plain",
-         cache_control)
+       # TODO: raise an appropriate error class
+       raise("Unable to obtain a lock after #{max_attempts} attempts, giving up.")
      end
 
      def unlock(codename, component = nil, architecture = nil, cache_control = nil)
+       Deb::S3::Utils.s3_remove(initial_lock_path(codename, component, architecture, cache_control))
        Deb::S3::Utils.s3_remove(lock_path(codename, component, architecture, cache_control))
      end
 
      def current(codename, component = nil, architecture = nil, cache_control = nil)
-       lock_content = Deb::S3::Utils.s3_read(lock_path(codename, component, architecture, cache_control))
-       lock_content = lock_content.split('@')
-       lock = Deb::S3::Lock.new
-       lock.user = lock_content[0]
-       lock.host = lock_content[1] if lock_content.size > 1
+       lockbody = Deb::S3::Utils.s3_read(lock_path(codename, component, architecture, cache_control))
+       if lockbody
+         user, host = lockbody.to_s.split("@", 2)
+         lock = Deb::S3::Lock.new(user, host)
+       else
+         lock = Deb::S3::Lock.new("unknown", "unknown")
+       end
        lock
      end
 
      private
+     def initial_lock_path(codename, component = nil, architecture = nil, cache_control = nil)
+       "dists/#{codename}/lockfile.lock"
+     end
+
      def lock_path(codename, component = nil, architecture = nil, cache_control = nil)
-       "dists/#{codename}/#{component}/binary-#{architecture}/lockfile"
+       #
+       # Acquire the repository lock at the `codename` level to avoid races between concurrent upload attempts:
+       #
+       # * `deb-s3 upload --arch=all` touches multiple `dists/{codename}/{component}/binary-*/Packages*` files
+       # * All `deb-s3 upload` invocations touch `dists/{codename}/Release`
+       #
+       "dists/#{codename}/lockfile"
      end
    end
  end
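The 2-phase locking scheme described in the comment above boils down to a compare-and-swap: write an initial lock object, then promote it with a conditional copy that only succeeds while the source ETag still matches. A minimal in-memory sketch of that idea (`FakeStore` and `try_lock` are illustrative stand-ins, not part of deb-s3):

```ruby
require "digest/md5"

# Stand-in for the S3 bucket: maps keys to [body, etag] pairs.
class FakeStore
  class PreconditionFailed < StandardError; end

  def initialize
    @objects = {}
  end

  def put_object(key, body)
    @objects[key] = [body, Digest::MD5.hexdigest(body)]
  end

  def exists?(key)
    @objects.key?(key)
  end

  # Mimics s3:CopyObject with copy_source_if_match: the copy only
  # succeeds when the source object's ETag still matches `etag`.
  def copy_object(from, to, etag)
    body, current_etag = @objects.fetch(from)
    raise PreconditionFailed unless current_etag == etag
    @objects[to] = [body, current_etag]
  end

  def read(key)
    @objects.fetch(key).first
  end
end

# Phase 1: write the initial lock object; phase 2: promote it with
# a conditional copy. Returns true when the lock was acquired.
def try_lock(store, owner)
  return false if store.exists?("lockfile")  # final lock already held
  etag = Digest::MD5.hexdigest(owner)
  store.put_object("lockfile.lock", owner)
  begin
    store.copy_object("lockfile.lock", "lockfile", etag)
    true
  rescue FakeStore::PreconditionFailed
    false  # another writer overwrote the initial lock: lost the race
  end
end

store = FakeStore.new
puts try_lock(store, "alice@host1")  # true: lock acquired
puts try_lock(store, "bob@host2")    # false: final lock already exists
```

The point of the CopyObject-based scheme is that it needs no bucket-level features (Object Lock, versioning), so it stays compatible with Cross-Region Replication.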
data/lib/deb/s3/package.rb CHANGED
@@ -2,6 +2,7 @@
  require "digest/sha1"
  require "digest/sha2"
  require "digest/md5"
+ require "open3"
  require "socket"
  require "tmpdir"
  require "uri"
@@ -30,7 +31,6 @@ class Deb::S3::Package
    attr_accessor :attributes
 
    # hashes
-   attr_accessor :url_filename
    attr_accessor :sha1
    attr_accessor :sha256
    attr_accessor :md5
@@ -57,22 +57,36 @@ class Deb::S3::Package
 
      def extract_control(package)
        if system("which dpkg > /dev/null 2>&1")
-         `dpkg -f #{package}`
+         output, status = Open3.capture2("dpkg", "-f", package)
+         output
        else
+         # use ar to determine control file name (control.ext)
+         package_files = `ar t #{package}`
+         control_file = package_files.split("\n").select do |file|
+           file.start_with?("control.")
+         end.first
+         if control_file === "control.tar.gz"
+           compression = "z"
+         elsif control_file === "control.tar.zst"
+           compression = "I zstd"
+         else
+           compression = "J"
+         end
+
          # ar fails to find the control.tar.gz tarball within the .deb
          # on Mac OS. Try using ar to list the control file, if found,
          # use ar to extract, otherwise attempt with tar which works on OS X.
-         extract_control_tarball_cmd = "ar p #{package} control.tar.gz"
+         extract_control_tarball_cmd = "ar p #{package} #{control_file}"
 
          begin
-           safesystem("ar t #{package} control.tar.gz &> /dev/null")
+           safesystem("ar t #{package} #{control_file} &> /dev/null")
          rescue SafeSystemError
            warn "Failed to find control data in .deb with ar, trying tar."
-           extract_control_tarball_cmd = "tar zxf #{package} --to-stdout control.tar.gz"
+           extract_control_tarball_cmd = "tar -#{compression} -xf #{package} --to-stdout #{control_file}"
          end
 
          Dir.mktmpdir do |path|
-           safesystem("#{extract_control_tarball_cmd} | tar -zxf - -C #{path}")
+           safesystem("#{extract_control_tarball_cmd} | tar -#{compression} -xf - -C #{path}")
            File.read(File.join(path, "control"))
          end
        end
@@ -122,11 +136,10 @@ class Deb::S3::Package
      [[epoch, version].compact.join(":"), iteration].compact.join("-")
    end
 
-   def filename=(f)
-     @filename = f
-     @filename
+   def url_filename=(f)
+     @url_filename = f
    end
-
+
    def url_filename(codename)
      @url_filename || "pool/#{codename}/#{self.name[0]}/#{self.name[0..1]}/#{File.basename(self.filename)}"
    end
@@ -242,7 +255,7 @@ class Deb::S3::Package
 
      # Packages manifest fields
      filename = fields.delete('Filename')
-     self.url_filename = filename && URI.unescape(filename)
+     self.url_filename = filename && CGI.unescape(filename)
      self.sha1 = fields.delete('SHA1')
      self.sha256 = fields.delete('SHA256')
      self.md5 = fields.delete('MD5sum')
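The `extract_control` change above keys the decompression flag entirely off the control member's name as listed by `ar t`. The mapping can be sketched in isolation (the helper name is illustrative, not part of deb-s3; the flags match those used in the diff):

```ruby
# Map the control member name inside a .deb (as listed by `ar t`)
# to the tar decompression flag used when extracting it.
def control_tar_flag(member_names)
  control = member_names.find { |f| f.start_with?("control.") }
  case control
  when "control.tar.gz"  then "z"       # gzip
  when "control.tar.zst" then "I zstd"  # zstd via an external program
  else                        "J"       # fall back to xz (control.tar.xz)
  end
end

puts control_tar_flag(["debian-binary", "control.tar.gz", "data.tar.xz"])    # z
puts control_tar_flag(["debian-binary", "control.tar.zst", "data.tar.zst"])  # I zstd
puts control_tar_flag(["debian-binary", "control.tar.xz", "data.tar.xz"])    # J
```

Treating xz as the default covers `control.tar.xz`, the most common member name on modern Debian/Ubuntu packages.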
data/lib/deb/s3/release.rb CHANGED
@@ -102,7 +102,7 @@ class Deb::S3::Release
 
      # sign the file, if necessary
      if Deb::S3::Utils.signing_key
-       key_param = Deb::S3::Utils.signing_key != "" ? "--default-key=#{Deb::S3::Utils.signing_key}" : ""
+       key_param = Deb::S3::Utils.signing_key.any? ? "-u #{Deb::S3::Utils.signing_key.join(" -u ")}" : ""
        if system("gpg -a #{key_param} --digest-algo SHA256 #{Deb::S3::Utils.gpg_options} -s --clearsign #{release_tmp.path}")
          local_file = release_tmp.path+".asc"
          remote_file = "dists/#{@codename}/InRelease"
data/lib/deb/s3/utils.rb CHANGED
@@ -1,8 +1,6 @@
  # -*- encoding : utf-8 -*-
- require "base64"
  require "digest/md5"
  require "erb"
- require "tmpdir"
 
  module Deb::S3::Utils
    module_function
@@ -41,7 +39,7 @@ module Deb::S3::Utils
    def template(path)
      template_file = File.join(File.dirname(__FILE__), "templates", path)
      template_code = File.read(template_file)
-     ERB.new(template_code, nil, "-")
+     ERB.new(template_code, trim_mode: "-")
    end
 
    def s3_path(path)
@@ -70,7 +68,7 @@ module Deb::S3::Utils
        :key => s3_path(path),
      )[:body].read
    rescue Aws::S3::Errors::NoSuchKey
-     false
+     nil
    end
 
    def s3_store(path, filename=nil, content_type='application/octet-stream; charset=binary', cache_control=nil, fail_if_exists=false)
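The `ERB.new` change in the utils diff above swaps the positional trim-mode argument (deprecated since Ruby 2.6 and removed in 3.1) for the `trim_mode:` keyword. A small self-contained example of the `"-"` trim mode (the template here is illustrative, not one of deb-s3's):

```ruby
require "erb"

# With trim_mode "-", `-%>` drops the newline after a tag and
# `<%-` drops leading whitespace before it, so the loop tags
# contribute no blank lines to the output.
def render(items)
  template = "<%- items.each do |i| -%>\n* <%= i %>\n<%- end -%>\n"
  ERB.new(template, trim_mode: "-").result(binding)
end

print render(%w[a b])
```

Without the trim mode, each `<% ... %>` line would leave an empty line behind in the rendered text.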
data/lib/deb/s3.rb CHANGED
@@ -1,6 +1,6 @@
  # -*- encoding : utf-8 -*-
  module Deb
    module S3
-     VERSION = "0.10.0"
+     VERSION = "0.11.5"
    end
  end
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: deb-s3
  version: !ruby/object:Gem::Version
-   version: 0.10.0
+   version: 0.11.5
  platform: ruby
  authors:
  - Ken Robertson
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2018-07-16 00:00:00.000000000 Z
+ date: 2022-11-05 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: thor
@@ -16,28 +16,28 @@ dependencies:
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
-       version: 0.19.0
+       version: '1'
    type: :runtime
    prerelease: false
    version_requirements: !ruby/object:Gem::Requirement
      requirements:
      - - "~>"
        - !ruby/object:Gem::Version
-         version: 0.19.0
+         version: '1'
  - !ruby/object:Gem::Dependency
-   name: aws-sdk
+   name: aws-sdk-s3
    requirement: !ruby/object:Gem::Requirement
      requirements:
      - - "~>"
        - !ruby/object:Gem::Version
-         version: '2'
+         version: '1'
    type: :runtime
    prerelease: false
    version_requirements: !ruby/object:Gem::Requirement
      requirements:
      - - "~>"
        - !ruby/object:Gem::Version
-         version: '2'
+         version: '1'
  - !ruby/object:Gem::Dependency
    name: minitest
    requirement: !ruby/object:Gem::Requirement
@@ -96,15 +96,14 @@ required_ruby_version: !ruby/object:Gem::Requirement
    requirements:
    - - ">="
      - !ruby/object:Gem::Version
-       version: 1.9.3
+       version: 2.7.0
  required_rubygems_version: !ruby/object:Gem::Requirement
    requirements:
    - - ">="
      - !ruby/object:Gem::Version
        version: '0'
  requirements: []
- rubyforge_project:
- rubygems_version: 2.6.13
+ rubygems_version: 3.3.7
  signing_key:
  specification_version: 4
  summary: Easily create and manage an APT repository on S3.