dpkg-s3 0.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,7 @@
1
+ ---
2
+ SHA256:
3
+ metadata.gz: 8085118424eabe4c62a7016b26f53033f27719496830531aa12dadde91887553
4
+ data.tar.gz: 8894ff4df72564465523ba841ea680f0e370970ce2fe5922c478d13236c69da1
5
+ SHA512:
6
+ metadata.gz: 66a81617d7e57fd104e9ec3a03b682e607f2dcdd454148e4fd6a25b778d393ee349cf62a9336f9100cfa51e1e6c7c5b53ed533708f92b4d85452c4eb558b1db0
7
+ data.tar.gz: 712883d85bc9704a0a009be8674c672911b51351ab74101130c981698b24156f53d3066833d9a1c95cd97610165b39efd211009eb0f21af4e9cab0a18d43bfb8
@@ -0,0 +1,181 @@
1
+ # dpkg-s3
2
+
3
+ [![Build Status](https://travis-ci.org/gamunu/dpkg-s3.svg?branch=master)](https://travis-ci.org/gamunu/dpkg-s3)
4
+
5
+ `dpkg-s3` is a simple utility for creating and managing APT repositories on
6
+ S3.
7
+
8
+ Most existing guides on using S3 to host an APT repository have you
9
+ using something like [reprepro](http://mirrorer.alioth.debian.org/) to generate
10
+ the repository file structure, and then [s3cmd](http://s3tools.org/s3cmd) to
11
+ sync the files to S3.
12
+
13
+ The annoying thing about this process is that it requires you to maintain a
14
+ local copy of the file tree in order to regenerate and sync it next time.
15
+ Personally, I build packages on one-off virtual machines with
16
+ [Vagrant](http://vagrantup.com), script out the build process, and would
17
+ rather just upload the final `.deb` from my Mac.
18
+
19
+ With `dpkg-s3`, there is no need for this. `dpkg-s3` features:
20
+
21
+ * Downloads the existing package manifest and parses it.
22
+ * Updates it with the new package, replacing the existing entry if already
23
+ there or adding a new one if not.
24
+ * Uploads the package itself, the Packages manifest, and the Packages.gz
25
+ manifest. It skips the upload if the package is already there.
26
+ * Updates the Release file with the new hashes and file sizes.
27
+
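
The replace-or-add step above can be sketched in plain Ruby. This is a minimal illustration using a flat list of structs; the gem's actual `Manifest#add` is more involved, and the `Pkg` struct and `update_manifest` helper here are hypothetical names for the sketch:

```ruby
# Minimal sketch of the manifest update step: replace an existing entry
# for the same name/version, or append a new one. When other versions are
# not preserved (the default), they are dropped, mirroring the
# --preserve-versions flag's semantics.
Pkg = Struct.new(:name, :version, :sha256)

def update_manifest(entries, pkg, preserve_versions: false)
  unless preserve_versions
    # drop every other version of this package
    entries = entries.reject { |e| e.name == pkg.name && e.version != pkg.version }
  end
  # drop any stale entry for this exact version, then add the new one
  entries.reject { |e| e.name == pkg.name && e.version == pkg.version } + [pkg]
end
```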
28
+ ## Getting Started
29
+
30
+ You can install it from RubyGems:
31
+
32
+ ```console
33
+ $ gem install dpkg-s3
34
+ ```
35
+
36
+ Or to run the code directly, just check out the repo and run Bundler to ensure
37
+ all dependencies are installed:
38
+
39
+ ```console
40
+ $ git clone https://github.com/gamunu/dpkg-s3.git
41
+ $ cd dpkg-s3
42
+ $ bundle install
43
+ ```
44
+
45
+ Now to upload a package, simply use:
46
+
47
+ ```console
48
+ $ dpkg-s3 upload --bucket my-bucket my-deb-package-1.0.0_amd64.deb
49
+ >> Examining package file my-deb-package-1.0.0_amd64.deb
50
+ >> Retrieving existing package manifest
51
+ >> Uploading package and new manifests to S3
52
+ -- Transferring pool/m/my/my-deb-package-1.0.0_amd64.deb
53
+ -- Transferring dists/stable/main/binary-amd64/Packages
54
+ -- Transferring dists/stable/main/binary-amd64/Packages.gz
55
+ -- Transferring dists/stable/Release
56
+ >> Update complete.
57
+ ```
58
+
59
+ ```
60
+ Usage:
61
+ dpkg-s3 upload FILES
62
+
63
+ Options:
64
+ -a, [--arch=ARCH] # The architecture of the package in the APT repository.
65
+ -p, [--preserve-versions], [--no-preserve-versions] # Whether to preserve other versions of a package in the repository when uploading one.
66
+ -l, [--lock], [--no-lock] # Whether to check for an existing lock on the repository to prevent simultaneous updates
67
+ [--fail-if-exists], [--no-fail-if-exists] # Whether to overwrite any existing package that has the same filename in the pool or the same name and version in the manifest but different contents.
68
+ [--skip-package-upload], [--no-skip-package-upload] # Whether to skip all package uploads. This is useful when hosting .deb files outside of the bucket.
69
+ -b, [--bucket=BUCKET] # The name of the S3 bucket to upload to.
70
+ [--prefix=PREFIX] # The path prefix to use when storing on S3.
71
+ -o, [--origin=ORIGIN] # The origin to use in the repository Release file.
72
+ [--suite=SUITE] # The suite to use in the repository Release file.
73
+ -c, [--codename=CODENAME] # The codename of the APT repository.
74
+ # Default: stable
75
+ -m, [--component=COMPONENT] # The component of the APT repository.
76
+ # Default: main
77
+ [--access-key-id=ACCESS_KEY_ID] # The access key for connecting to S3.
78
+ [--secret-access-key=SECRET_ACCESS_KEY] # The secret key for connecting to S3.
79
+ [--s3-region=S3_REGION] # The region for connecting to S3.
80
+ # Default: us-east-1
81
+ [--force-path-style], [--no-force-path-style] # Use S3 path style instead of subdomains.
82
+ [--proxy-uri=PROXY_URI] # The URI of the proxy to send service requests through.
83
+ -v, [--visibility=VISIBILITY] # The access policy for the uploaded files. Can be public, private, or authenticated.
84
+ # Default: public
85
+ [--sign=SIGN] # GPG Sign the Release file when uploading a package, or when verifying it after removing a package. Use --sign with your GPG key ID to use a specific key (--sign=6643C242C18FE05B).
86
+ [--gpg-options=GPG_OPTIONS] # Additional command line options to pass to GPG when signing.
87
+ -e, [--encryption], [--no-encryption] # Use S3 server side encryption.
88
+ -q, [--quiet], [--no-quiet] # Doesn't output information, just returns status appropriately.
89
+ -C, [--cache-control=CACHE_CONTROL] # Add cache-control headers to S3 objects.
90
+
91
+ Uploads the given files to a S3 bucket as an APT repository.
92
+ ```
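
Once packages are uploaded with public visibility, clients can point APT at the bucket over HTTP. The bucket name below is hypothetical, and the exact URL form depends on your region and whether path-style addressing is used:

```
deb http://my-bucket.s3.amazonaws.com stable main
```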
93
+
94
+ You can also delete packages from the APT repository. Keep in mind that this
95
+ does NOT delete the `.deb` file itself; it only removes it from the list of
96
+ packages in the specified component, codename, and architecture.
97
+
98
+ Now to delete the package:
99
+ ```console
100
+ $ dpkg-s3 delete my-deb-package --arch amd64 --bucket my-bucket --versions 1.0.0
101
+ >> Retrieving existing manifests
102
+ -- Deleting my-deb-package version 1.0.0
103
+ >> Uploading new manifests to S3
104
+ -- Transferring dists/stable/main/binary-amd64/Packages
105
+ -- Transferring dists/stable/main/binary-amd64/Packages.gz
106
+ -- Transferring dists/stable/Release
107
+ >> Update complete.
108
+ ```
109
+
110
+ ```
111
+ Usage:
112
+ dpkg-s3 delete PACKAGE
113
+
114
+ Options:
115
+ -a, [--arch=ARCH] # The architecture of the package in the APT repository.
116
+ [--versions=one two three] # The space-delimited versions of PACKAGE to delete. If not specified, ALL VERSIONS will be deleted. Fair warning. E.g. --versions "0.1 0.2 0.3"
117
+ -b, [--bucket=BUCKET] # The name of the S3 bucket to upload to.
118
+ [--prefix=PREFIX] # The path prefix to use when storing on S3.
119
+ -o, [--origin=ORIGIN] # The origin to use in the repository Release file.
120
+ [--suite=SUITE] # The suite to use in the repository Release file.
121
+ -c, [--codename=CODENAME] # The codename of the APT repository.
122
+ # Default: stable
123
+ -m, [--component=COMPONENT] # The component of the APT repository.
124
+ # Default: main
125
+ [--access-key-id=ACCESS_KEY_ID] # The access key for connecting to S3.
126
+ [--secret-access-key=SECRET_ACCESS_KEY] # The secret key for connecting to S3.
127
+ [--s3-region=S3_REGION] # The region for connecting to S3.
128
+ # Default: us-east-1
129
+ [--force-path-style], [--no-force-path-style] # Use S3 path style instead of subdomains.
130
+ [--proxy-uri=PROXY_URI] # The URI of the proxy to send service requests through.
131
+ -v, [--visibility=VISIBILITY] # The access policy for the uploaded files. Can be public, private, or authenticated.
132
+ # Default: public
133
+ [--sign=SIGN] # GPG Sign the Release file when uploading a package, or when verifying it after removing a package. Use --sign with your GPG key ID to use a specific key (--sign=6643C242C18FE05B).
134
+ [--gpg-options=GPG_OPTIONS] # Additional command line options to pass to GPG when signing.
135
+ -e, [--encryption], [--no-encryption] # Use S3 server side encryption.
136
+ -q, [--quiet], [--no-quiet] # Doesn't output information, just returns status appropriately.
137
+ -C, [--cache-control=CACHE_CONTROL] # Add cache-control headers to S3 objects.
138
+
139
+ Remove the package named PACKAGE. If --versions is not specified, delete all versions of PACKAGE. Otherwise, only the specified versions will be deleted.
140
+ ```
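
The version filter described above (delete everything when `--versions` is omitted, otherwise only the listed versions) can be sketched as follows; `delete_package` here is a standalone illustration, not the gem's actual `Manifest#delete_package`:

```ruby
# Partition a package list into entries to delete and entries to keep.
# versions == nil means "all versions of the named package" — the
# "Fair warning" case from the --versions help text.
def delete_package(entries, name, versions = nil)
  entries.partition do |(pkg_name, pkg_version)|
    pkg_name == name && (versions.nil? || versions.include?(pkg_version))
  end
end
```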
141
+
142
+ You can also verify an existing APT repository on S3 using the `verify` command:
143
+
144
+ ```console
145
+ $ dpkg-s3 verify -b my-bucket
146
+ >> Retrieving existing manifests
147
+ >> Checking for missing packages in: stable/main i386
148
+ >> Checking for missing packages in: stable/main amd64
149
+ >> Checking for missing packages in: stable/main all
150
+ ```
151
+
152
+ ```
153
+ Usage:
154
+ dpkg-s3 verify
155
+
156
+ Options:
157
+ -f, [--fix-manifests], [--no-fix-manifests] # Whether to fix problems in manifests when verifying.
158
+ -b, [--bucket=BUCKET] # The name of the S3 bucket to upload to.
159
+ [--prefix=PREFIX] # The path prefix to use when storing on S3.
160
+ -o, [--origin=ORIGIN] # The origin to use in the repository Release file.
161
+ [--suite=SUITE] # The suite to use in the repository Release file.
162
+ -c, [--codename=CODENAME] # The codename of the APT repository.
163
+ # Default: stable
164
+ -m, [--component=COMPONENT] # The component of the APT repository.
165
+ # Default: main
166
+ [--access-key-id=ACCESS_KEY_ID] # The access key for connecting to S3.
167
+ [--secret-access-key=SECRET_ACCESS_KEY] # The secret key for connecting to S3.
168
+ [--s3-region=S3_REGION] # The region for connecting to S3.
169
+ # Default: us-east-1
170
+ [--force-path-style], [--no-force-path-style] # Use S3 path style instead of subdomains.
171
+ [--proxy-uri=PROXY_URI] # The URI of the proxy to send service requests through.
172
+ -v, [--visibility=VISIBILITY] # The access policy for the uploaded files. Can be public, private, or authenticated.
173
+ # Default: public
174
+ [--sign=SIGN] # GPG Sign the Release file when uploading a package, or when verifying it after removing a package. Use --sign with your GPG key ID to use a specific key (--sign=6643C242C18FE05B).
175
+ [--gpg-options=GPG_OPTIONS] # Additional command line options to pass to GPG when signing.
176
+ -e, [--encryption], [--no-encryption] # Use S3 server side encryption.
177
+ -q, [--quiet], [--no-quiet] # Doesn't output information, just returns status appropriately.
178
+ -C, [--cache-control=CACHE_CONTROL] # Add cache-control headers to S3 objects.
179
+
180
+ Verifies that the files in the package manifests exist
181
+ ```
@@ -0,0 +1,10 @@
1
+ #!/usr/bin/env ruby
2
+
3
+ require 'pathname'
4
+ $:.unshift File.join(Pathname.new(__FILE__).realpath,'../../lib')
5
+
6
+ require 'rubygems'
7
+ require 'dpkg/s3/cli'
8
+
9
+ Dpkg::S3::CLI.start
10
+
@@ -0,0 +1,6 @@
1
+ # -*- encoding : utf-8 -*-
2
+ module Dpkg
3
+ module S3
4
+ VERSION = "0.1.0"
5
+ end
6
+ end
@@ -0,0 +1,641 @@
1
+ # -*- encoding : utf-8 -*-
2
+ require "aws-sdk"
3
+ require "thor"
4
+
5
+ # Hack: aws requires this!
6
+ require "json"
7
+
8
+ require "dpkg/s3"
9
+ require "dpkg/s3/utils"
10
+ require "dpkg/s3/manifest"
11
+ require "dpkg/s3/package"
12
+ require "dpkg/s3/release"
13
+ require "dpkg/s3/lock"
14
+
15
+ class Dpkg::S3::CLI < Thor
16
+ class_option :bucket,
17
+ :type => :string,
18
+ :aliases => "-b",
19
+ :desc => "The name of the S3 bucket to upload to."
20
+
21
+ class_option :prefix,
22
+ :type => :string,
23
+ :desc => "The path prefix to use when storing on S3."
24
+
25
+ class_option :origin,
26
+ :type => :string,
27
+ :aliases => "-o",
28
+ :desc => "The origin to use in the repository Release file."
29
+
30
+ class_option :suite,
31
+ :type => :string,
32
+ :desc => "The suite to use in the repository Release file."
33
+
34
+ class_option :codename,
35
+ :default => "stable",
36
+ :type => :string,
37
+ :aliases => "-c",
38
+ :desc => "The codename of the APT repository."
39
+
40
+ class_option :component,
41
+ :default => "main",
42
+ :type => :string,
43
+ :aliases => "-m",
44
+ :desc => "The component of the APT repository."
45
+
46
+ class_option :section,
47
+ :type => :string,
48
+ :aliases => "-s",
49
+ :hide => true
50
+
51
+ class_option :access_key_id,
52
+ :type => :string,
53
+ :desc => "The access key for connecting to S3."
54
+
55
+ class_option :secret_access_key,
56
+ :type => :string,
57
+ :desc => "The secret key for connecting to S3."
58
+
59
+ class_option :session_token,
60
+ :type => :string,
61
+ :desc => "The (optional) session token for connecting to S3."
62
+
63
+ class_option :endpoint,
64
+ :type => :string,
65
+ :desc => "The URL endpoint to the S3 API."
66
+
67
+ class_option :s3_region,
68
+ :type => :string,
69
+ :desc => "The region for connecting to S3.",
70
+ :default => ENV["AWS_DEFAULT_REGION"] || "us-east-1"
71
+
72
+ class_option :force_path_style,
73
+ :default => false,
74
+ :type => :boolean,
75
+ :desc => "Use S3 path style instead of subdomains."
76
+
77
+ class_option :proxy_uri,
78
+ :type => :string,
79
+ :desc => "The URI of the proxy to send service requests through."
80
+
81
+ class_option :visibility,
82
+ :default => "public",
83
+ :type => :string,
84
+ :aliases => "-v",
85
+ :desc => "The access policy for the uploaded files. " +
86
+ "Can be public, private, or authenticated."
87
+
88
+ class_option :sign,
89
+ :type => :string,
90
+ :desc => "GPG Sign the Release file when uploading a package, " +
91
+ "or when verifying it after removing a package. " +
92
+ "Use --sign with your GPG key ID to use a specific key (--sign=6643C242C18FE05B)."
93
+
94
+ class_option :gpg_options,
95
+ :default => "",
96
+ :type => :string,
97
+ :desc => "Additional command line options to pass to GPG when signing."
98
+
99
+ class_option :encryption,
100
+ :default => false,
101
+ :type => :boolean,
102
+ :aliases => "-e",
103
+ :desc => "Use S3 server side encryption."
104
+
105
+ class_option :quiet,
106
+ :type => :boolean,
107
+ :aliases => "-q",
108
+ :desc => "Doesn't output information, just returns status appropriately."
109
+
110
+ class_option :cache_control,
111
+ :type => :string,
112
+ :aliases => "-C",
113
+ :desc => "Add cache-control headers to S3 objects."
114
+
115
+ desc "upload FILES",
116
+ "Uploads the given files to a S3 bucket as an APT repository."
117
+
118
+ option :arch,
119
+ :type => :string,
120
+ :aliases => "-a",
121
+ :desc => "The architecture of the package in the APT repository."
122
+
123
+ option :preserve_versions,
124
+ :default => false,
125
+ :type => :boolean,
126
+ :aliases => "-p",
127
+ :desc => "Whether to preserve other versions of a package " +
128
+ "in the repository when uploading one."
129
+
130
+ option :lock,
131
+ :default => false,
132
+ :type => :boolean,
133
+ :aliases => "-l",
134
+ :desc => "Whether to check for an existing lock on the repository " +
135
+ "to prevent simultaneous updates."
136
+
137
+ option :fail_if_exists,
138
+ :default => false,
139
+ :type => :boolean,
140
+ :desc => "Whether to overwrite any existing package that has the same " +
141
+ "filename in the pool or the same name and version in the manifest but " +
142
+ "different contents."
143
+
144
+ option :skip_package_upload,
145
+ :default => false,
146
+ :type => :boolean,
147
+ :desc => "Whether to skip all package uploads. " +
148
+ "This is useful when hosting .deb files outside of the bucket."
149
+
150
+ def upload(*files)
151
+ if files.nil? || files.empty?
152
+ error("You must specify at least one file to upload")
153
+ end
154
+
155
+ # make sure all the files exist
156
+ if missing_file = files.find { |pattern| Dir.glob(pattern).empty? }
157
+ error("File '#{missing_file}' doesn't exist")
158
+ end
159
+
160
+ # configure AWS::S3
161
+ configure_s3_client
162
+
163
+ begin
164
+ if options[:lock]
165
+ log("Checking for existing lock file")
166
+ if Dpkg::S3::Lock.locked?(options[:codename], component, options[:arch], options[:cache_control])
167
+ lock = Dpkg::S3::Lock.current(options[:codename], component, options[:arch], options[:cache_control])
168
+ log("Repository is locked by another user: #{lock.user} at host #{lock.host}")
169
+ log("Attempting to obtain a lock")
170
+ Dpkg::S3::Lock.wait_for_lock(options[:codename], component, options[:arch], options[:cache_control])
171
+ end
172
+ log("Locking repository for updates")
173
+ Dpkg::S3::Lock.lock(options[:codename], component, options[:arch], options[:cache_control])
174
+ @lock_acquired = true
175
+ end
176
+
177
+ # retrieve the existing manifests
178
+ log("Retrieving existing manifests")
179
+ release = Dpkg::S3::Release.retrieve(options[:codename], options[:origin], options[:suite], options[:cache_control])
180
+ manifests = {}
181
+ release.architectures.each do |arch|
182
+ manifests[arch] = Dpkg::S3::Manifest.retrieve(options[:codename], component, arch, options[:cache_control], options[:fail_if_exists], options[:skip_package_upload])
183
+ end
184
+
185
+ packages_arch_all = []
186
+
187
+ # examine all the files
188
+ files.collect { |f| Dir.glob(f) }.flatten.each do |file|
189
+ log("Examining package file #{File.basename(file)}")
190
+ pkg = Dpkg::S3::Package.parse_file(file)
191
+
192
+ # copy over some options if they weren't given
193
+ arch = options[:arch] || pkg.architecture
194
+
195
+ # If they've specified an arch type that doesn't match the package let them know
196
+ if options.key?("arch") && options[:arch] != pkg.architecture
197
+ warn("You specified architecture #{options[:arch]} but package #{pkg.name} has architecture type of #{pkg.architecture}")
198
+ end
199
+
200
+ # validate we have them
201
+ error("No architecture given and unable to determine one for #{file}. " +
202
+ "Please specify one with --arch [i386|amd64|armhf].") unless arch
203
+
204
+ # If the arch is all and the list of existing manifests is none, then
205
+ # throw an error. This is mainly the case when initializing a brand new
206
+ # repository. With "all", we won't know which architectures they're using.
207
+ if arch == "all" && manifests.count == 0
208
+ manifests['amd64'] = Dpkg::S3::Manifest.retrieve(options[:codename], component,'amd64', options[:cache_control], options[:fail_if_exists], options[:skip_package_upload])
209
+ manifests['i386'] = Dpkg::S3::Manifest.retrieve(options[:codename], component,'i386', options[:cache_control], options[:fail_if_exists], options[:skip_package_upload])
210
+ manifests['armhf'] = Dpkg::S3::Manifest.retrieve(options[:codename], component,'armhf', options[:cache_control], options[:fail_if_exists], options[:skip_package_upload])
211
+
212
+ # error("Package #{File.basename(file)} had architecture \"all\", " +
213
+ # "however no existing package lists exist. This can often happen " +
214
+ # "if the first package you add to a new repository is an " +
215
+ # "\"all\" architecture file. Please use --arch [i386|amd64|armhf] or " +
216
+ # "another platform type to upload the file.")
217
+ end
218
+
219
+ # retrieve the manifest for the arch if we don't have it already
220
+ manifests[arch] ||= Dpkg::S3::Manifest.retrieve(options[:codename], component, arch, options[:cache_control], options[:fail_if_exists], options[:skip_package_upload])
221
+
222
+ # add package in manifests
223
+ begin
224
+ manifests[arch].add(pkg, options[:preserve_versions])
225
+ rescue Dpkg::S3::Utils::AlreadyExistsError => e
226
+ error("Preparing manifest failed because: #{e}")
227
+ end
228
+
229
+ # If arch is all, we must add this package in all arch available
230
+ if arch == 'all'
231
+ packages_arch_all << pkg
232
+ end
233
+ end
234
+
235
+ manifests.each do |arch, manifest|
236
+ next if arch == 'all'
237
+ packages_arch_all.each do |pkg|
238
+ begin
239
+ manifest.add(pkg, options[:preserve_versions], false)
240
+ rescue Dpkg::S3::Utils::AlreadyExistsError => e
241
+ error("Preparing manifest failed because: #{e}")
242
+ end
243
+ end
244
+ end
245
+
246
+ # upload the manifest
247
+ log("Uploading packages and new manifests to S3")
248
+ manifests.each_value do |manifest|
249
+ begin
250
+ manifest.write_to_s3 { |f| sublog("Transferring #{f}") }
251
+ rescue Dpkg::S3::Utils::AlreadyExistsError => e
252
+ error("Uploading manifest failed because: #{e}")
253
+ end
254
+ release.update_manifest(manifest)
255
+ end
256
+ release.write_to_s3 { |f| sublog("Transferring #{f}") }
257
+
258
+ log("Update complete.")
259
+ ensure
260
+ if options[:lock] && @lock_acquired
261
+ Dpkg::S3::Lock.unlock(options[:codename], component, options[:arch], options[:cache_control])
262
+ log("Lock released.")
263
+ end
264
+ end
265
+ end
266
+
267
+ desc "list", "Lists packages in given codename, component, and optionally architecture"
268
+
269
+ option :long,
270
+ :type => :boolean,
271
+ :aliases => '-l',
272
+ :desc => "Shows all package information in original format.",
273
+ :default => false
274
+
275
+ option :arch,
276
+ :type => :string,
277
+ :aliases => "-a",
278
+ :desc => "The architecture of the package in the APT repository."
279
+
280
+ def list
281
+ configure_s3_client
282
+
283
+ release = Dpkg::S3::Release.retrieve(options[:codename])
284
+ archs = release.architectures
285
+ archs &= [options[:arch]] if options[:arch] && options[:arch] != "all"
286
+ widths = [0, 0]
287
+ rows = archs.map { |arch|
288
+ manifest = Dpkg::S3::Manifest.retrieve(options[:codename], component,
289
+ arch, options[:cache_control],
290
+ false, false)
291
+ manifest.packages.map do |package|
292
+ if options[:long]
293
+ package.generate(options[:codename])
294
+ else
295
+ [package.name, package.full_version, package.architecture].tap do |row|
296
+ row.each_with_index do |col, i|
297
+ widths[i] = [widths[i], col.size].max if widths[i]
298
+ end
299
+ end
300
+ end
301
+ end
302
+ }.flatten(1)
303
+
304
+ if options[:long]
305
+ $stdout.puts rows.join("\n")
306
+ else
307
+ rows.each do |row|
308
+ $stdout.puts "% -#{widths[0]}s % -#{widths[1]}s %s" % row
309
+ end
310
+ end
311
+ end
312
+
313
+ desc "show PACKAGE VERSION ARCH", "Shows information about a package."
314
+
315
+ def show(package_name, version, arch)
316
+ if package_name.nil?
317
+ error "You must specify the name of the package to show."
318
+ end
319
+ if version.nil?
320
+ error "You must specify the version of the package to show."
321
+ end
322
+ if arch.nil?
323
+ error "You must specify the architecture of the package to show."
324
+ end
325
+
326
+ configure_s3_client
327
+
328
+ # retrieve the existing manifests
329
+ manifest = Dpkg::S3::Manifest.retrieve(options[:codename], component, arch,
330
+ options[:cache_control], false, false)
331
+ package = manifest.packages.detect { |p|
332
+ p.name == package_name && p.full_version == version
333
+ }
334
+ if package.nil?
335
+ error "No such package found."
336
+ end
337
+
338
+ puts package.generate(options[:codename])
339
+ end
340
+
341
+ desc "copy PACKAGE TO_CODENAME TO_COMPONENT",
342
+ "Copy the package named PACKAGE to given codename and component. If --versions is not specified, copy all versions of PACKAGE. Otherwise, only the specified versions will be copied. Source codename and component is given by --codename and --component options."
343
+
344
+ option :cache_control,
345
+ :type => :string,
346
+ :aliases => "-C",
347
+ :desc => "Add cache-control headers to S3 objects."
348
+
349
+ option :arch,
350
+ :type => :string,
351
+ :aliases => "-a",
352
+ :desc => "The architecture of the package in the APT repository."
353
+
354
+ option :versions,
355
+ :default => nil,
356
+ :type => :array,
357
+ :desc => "The space-delimited versions of PACKAGE to copy. If not " +
358
+ "specified, ALL VERSIONS will be copied. Fair warning. " +
359
+ "E.g. --versions \"0.1 0.2 0.3\""
360
+
361
+ option :preserve_versions,
362
+ :default => false,
363
+ :type => :boolean,
364
+ :aliases => "-p",
365
+ :desc => "Whether to preserve other versions of a package " +
366
+ "in the repository when uploading one."
367
+
368
+ option :fail_if_exists,
369
+ :default => true,
370
+ :type => :boolean,
371
+ :desc => "Whether to overwrite any existing package that has the same " +
372
+ "filename in the pool or the same name and version in the manifest."
373
+
374
+ def copy(package_name, to_codename, to_component)
375
+ if package_name.nil?
376
+ error "You must specify a package name."
377
+ end
378
+ if to_codename.nil?
379
+ error "You must specify a codename to copy to."
380
+ end
381
+ if to_component.nil?
382
+ error "You must specify a component to copy to."
383
+ end
384
+
385
+ arch = options[:arch]
386
+ if arch.nil?
387
+ error "You must specify the architecture of the package to copy."
388
+ end
389
+
390
+ versions = options[:versions]
391
+ if versions.nil?
392
+ warn "===> WARNING: Copying all versions of #{package_name}"
393
+ else
394
+ log "Versions to copy: #{versions.join(', ')}"
395
+ end
396
+
397
+ configure_s3_client
398
+
399
+ # retrieve the existing manifests
400
+ log "Retrieving existing manifests"
401
+ from_manifest = Dpkg::S3::Manifest.retrieve(options[:codename],
402
+ component, arch,
403
+ options[:cache_control],
404
+ false, options[:skip_package_upload])
405
+ to_release = Dpkg::S3::Release.retrieve(to_codename)
406
+ to_manifest = Dpkg::S3::Manifest.retrieve(to_codename, to_component, arch,
407
+ options[:cache_control],
408
+ options[:fail_if_exists],
409
+ options[:skip_package_upload])
410
+ packages = from_manifest.packages.select { |p|
411
+ p.name == package_name &&
412
+ (versions.nil? || versions.include?(p.full_version))
413
+ }
414
+ if packages.size == 0
415
+ error "No packages found in repository."
416
+ end
417
+
418
+ packages.each do |package|
419
+ begin
420
+ to_manifest.add package, options[:preserve_versions], false
421
+ rescue Dpkg::S3::Utils::AlreadyExistsError => e
422
+ error("Preparing manifest failed because: #{e}")
423
+ end
424
+ end
425
+
426
+ begin
427
+ to_manifest.write_to_s3 { |f| sublog("Transferring #{f}") }
428
+ rescue Dpkg::S3::Utils::AlreadyExistsError => e
429
+ error("Copying manifest failed because: #{e}")
430
+ end
431
+ to_release.update_manifest(to_manifest)
432
+ to_release.write_to_s3 { |f| sublog("Transferring #{f}") }
433
+
434
+ log "Copy complete."
435
+ end
436
+
437
+ desc "delete PACKAGE",
438
+ "Remove the package named PACKAGE. If --versions is not specified, delete " +
439
+ "all versions of PACKAGE. Otherwise, only the specified versions will be " +
440
+ "deleted."
441
+
442
+ option :arch,
443
+ :type => :string,
444
+ :aliases => "-a",
445
+ :desc => "The architecture of the package in the APT repository."
446
+
447
+ option :versions,
448
+ :default => nil,
449
+ :type => :array,
450
+ :desc => "The space-delimited versions of PACKAGE to delete. If not " +
451
+ "specified, ALL VERSIONS will be deleted. Fair warning. " +
452
+ "E.g. --versions \"0.1 0.2 0.3\""
453
+
454
+ def delete(package)
455
+ if package.nil?
456
+ error("You must specify a package name.")
457
+ end
458
+
459
+ versions = options[:versions]
460
+ if versions.nil?
461
+ warn("===> WARNING: Deleting all versions of #{package}")
462
+ else
463
+ log("Versions to delete: #{versions.join(', ')}")
464
+ end
465
+
466
+ arch = options[:arch]
467
+ if arch.nil?
468
+ error("You must specify the architecture of the package to remove.")
469
+ end
470
+
471
+ configure_s3_client
472
+
473
+ # retrieve the existing manifests
474
+ log("Retrieving existing manifests")
475
+ release = Dpkg::S3::Release.retrieve(options[:codename], options[:origin], options[:suite])
476
+ if arch == 'all'
477
+ selected_arch = release.architectures
478
+ else
479
+ selected_arch = [arch]
480
+ end
481
+ all_found = 0
482
+ selected_arch.each { |ar|
483
+ manifest = Dpkg::S3::Manifest.retrieve(options[:codename], component, ar, options[:cache_control], false, options[:skip_package_upload])
484
+
485
+ deleted = manifest.delete_package(package, versions)
486
+ all_found += deleted.length
487
+ if deleted.length == 0
488
+ if versions.nil?
489
+ sublog("No packages were deleted. #{package} not found in arch #{ar}.")
490
+ next
491
+ else
492
+ sublog("No packages were deleted. #{package} versions #{versions.join(', ')} could not be found in arch #{ar}.")
493
+ next
494
+ end
495
+ else
496
+ deleted.each { |p|
497
+ sublog("Deleting #{p.name} version #{p.full_version} from arch #{ar}")
498
+ }
499
+ end
500
+
501
+ log("Uploading new manifests to S3")
502
+ manifest.write_to_s3 {|f| sublog("Transferring #{f}") }
503
+ release.update_manifest(manifest)
504
+ release.write_to_s3 {|f| sublog("Transferring #{f}") }
505
+
506
+ log("Update complete.")
507
+ }
508
+ if all_found == 0
509
+ if versions.nil?
510
+ error("No packages were deleted. #{package} not found.")
511
+ else
512
+ error("No packages were deleted. #{package} versions #{versions.join(', ')} could not be found.")
513
+ end
514
+ end
515
+
516
+ end
517
+
518
+
519
+ desc "verify", "Verifies that the files in the package manifests exist"
520
+
521
+ option :fix_manifests,
522
+ :default => false,
523
+ :type => :boolean,
524
+ :aliases => "-f",
525
+ :desc => "Whether to fix problems in manifests when verifying."
526
+
527
+ def verify
528
+ configure_s3_client
529
+
530
+ log("Retrieving existing manifests")
531
+ release = Dpkg::S3::Release.retrieve(options[:codename], options[:origin], options[:suite])
532
+
533
+ release.architectures.each do |arch|
534
+ log("Checking for missing packages in: #{options[:codename]}/#{options[:component]} #{arch}")
535
+ manifest = Dpkg::S3::Manifest.retrieve(options[:codename], component,
536
+ arch, options[:cache_control], false,
537
+ options[:skip_package_upload])
538
+ missing_packages = []
539
+
540
+ manifest.packages.each do |p|
541
+ unless Dpkg::S3::Utils.s3_exists? p.url_filename_encoded(options[:codename])
542
+ sublog("The following packages are missing:\n\n") if missing_packages.empty?
543
+ puts(p.generate(options[:codename]))
544
+ puts("")
545
+
546
+ missing_packages << p
547
+ end
548
+ end
549
+
550
+ if options[:sign] || (options[:fix_manifests] && !missing_packages.empty?)
551
+ log("Removing #{missing_packages.length} package(s) from the manifest...")
552
+ missing_packages.each { |p| manifest.packages.delete(p) }
553
+ manifest.write_to_s3 { |f| sublog("Transferring #{f}") }
554
+ release.update_manifest(manifest)
555
+ release.write_to_s3 { |f| sublog("Transferring #{f}") }
556
+
557
+ log("Update complete.")
558
+ end
559
+ end
560
+ end
561
+
562
+ private
563
+
564
+ def component
565
+ return @component if @component
566
+ @component = if (section = options[:section])
567
+ warn("===> WARNING: The --section/-s argument is " \
568
+ "deprecated, please use --component/-m.")
569
+ section
570
+ else
571
+ options[:component]
572
+ end
573
+ end
574
+
575
+ def puts(*args)
576
+ $stdout.puts(*args) unless options[:quiet]
577
+ end
578
+
579
+ def log(message)
580
+ puts ">> #{message}" unless options[:quiet]
581
+ end
582
+
583
+ def sublog(message)
584
+ puts " -- #{message}" unless options[:quiet]
585
+ end
586
+
587
+ def error(message)
588
+ $stderr.puts "!! #{message}" unless options[:quiet]
589
+ exit 1
590
+ end
591
+
592
+ def provider
593
+ access_key_id = options[:access_key_id]
594
+ secret_access_key = options[:secret_access_key]
595
+ session_token = options[:session_token]
596
+
597
+ if access_key_id.nil? ^ secret_access_key.nil?
598
+ error("If you specify one of --access-key-id or --secret-access-key, you must specify the other.")
599
+ end
600
+ static_credentials = {}
601
+ static_credentials[:access_key_id] = access_key_id if access_key_id
602
+ static_credentials[:secret_access_key] = secret_access_key if secret_access_key
603
+ static_credentials[:session_token] = session_token if session_token
604
+
605
+ static_credentials
606
+ end
607
+
608
+ def configure_s3_client
609
+ error("No value provided for required options '--bucket'") unless options[:bucket]
610
+
611
+ settings = {
612
+ :region => options[:s3_region],
613
+ :http_proxy => options[:proxy_uri],
614
+ :force_path_style => options[:force_path_style]
615
+ }
616
+ settings[:endpoint] = options[:endpoint] if options[:endpoint]
617
+ settings.merge!(provider)
618
+
619
+ Dpkg::S3::Utils.s3 = Aws::S3::Client.new(settings)
620
+ Dpkg::S3::Utils.bucket = options[:bucket]
621
+ Dpkg::S3::Utils.signing_key = options[:sign]
622
+ Dpkg::S3::Utils.gpg_options = options[:gpg_options]
623
+ Dpkg::S3::Utils.prefix = options[:prefix]
624
+ Dpkg::S3::Utils.encryption = options[:encryption]
625
+
626
+ # make sure we have a valid visibility setting
627
+ Dpkg::S3::Utils.access_policy =
628
+ case options[:visibility]
629
+ when "public"
630
+ "public-read"
631
+ when "private"
632
+ "private"
633
+ when "authenticated"
634
+ "authenticated-read"
635
+ when "bucket_owner"
636
+ "bucket-owner-full-control"
637
+ else
638
+ error("Invalid visibility setting given. Can be public, private, authenticated, or bucket_owner.")
639
+ end
640
+ end
641
+ end
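
The credential check in `provider` above uses XOR on `nil?` so that supplying exactly one of `--access-key-id` and `--secret-access-key` is rejected while supplying both or neither is allowed. A standalone sketch of the same predicate (the `credentials_valid?` helper name is ours, not the gem's):

```ruby
# Mirrors the `access_key_id.nil? ^ secret_access_key.nil?` check:
# valid when both credentials are given, or when neither is.
def credentials_valid?(access_key_id, secret_access_key)
  !(access_key_id.nil? ^ secret_access_key.nil?)
end
```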