subspace 3.0.0.rc1 → 3.0.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
-   metadata.gz: df006855b51c155540821c2e5fd82968ebf79580e3e85f1149eeef576bb63672
-   data.tar.gz: 3548c10a76fa9117c130d73e198c42097373a1f3942f47fa47722741e2368f92
+   metadata.gz: b24a2573b737094142caaf93e8b7380b4f4bf756c611fe81012a9adcc0d8d026
+   data.tar.gz: c0fe4b9c23cf613c28ddf832a4bf67ac48055fcc51a32b1ffe8ebc4ad143d14a
  SHA512:
-   metadata.gz: c5ed6d9c43e07d68482e1481378e9295bda85c2b133cd50ac4fb18787c6d755945ca4f58c83031ade513e0d01233ee088032506d427f080a07730aaf9deb22df
-   data.tar.gz: 7d7779fe933a7297a4a682cc3a76fce740c74d2bd84306760ab7248e50c34251e3401039082900918a588660b1b82afd76b42072ae13bb23dafac37af2da4c9b
+   metadata.gz: a758a05af7793e9f5ee377f1008238678ee94d5fc3d5ff5e03289cb4b1736288e3d829efa36440b3744045358a5340ae4934107fd29a034bda9042b4cfcc9254
+   data.tar.gz: da042e9f7c0f6871ac50bb64c3525ef4ce54bdcebd9ed1b3bdb6d5a8602332dd93551a47f1fda97215c1180d9550e9f87bf0030453c1f9f5d388c3dc944eeaca
data/CHANGELOG.md CHANGED
@@ -12,7 +12,17 @@ This project attempts to follow [semantic versioning](https://semver.org/).
 
  ## Unreleased
 
- ## 3.0.0rc1
+ ## 3.0.0
+ * Install redis from vendor repos (BREAKING, see README)
+ * Removed outdated awscli role
+ * Added `subspace secrets rekey` to generate and rekey ansible-vault secrets
+ * Update tailscale role
+ * Don't default to using pemfile (use tailscale instead!)
+ * Add `subspace inventory keyscan` to fix ssh fingerprints
+ * Use `sidekiq_workers` var in systemd
+ * Tailscale hostname is now {{project_name}}-{{hostname}}
+
+ ## 3.0.0.rc1
  * Added infrastructure management via Terraform!
  * Added new `subspace exec` command for manual remote management
  * BREAKING: Consolidated inventory file into config/provision/inventory.env.yml
data/README.md CHANGED
@@ -101,11 +101,23 @@ common | authorized\_keys | updates the authorized\_keys file for t
  rails | appyml |
  monit | monit | All tasks in the monit role have been tagged 'monit'
 
- ### `subspace vars <environment> [--edit] [--create]`
+ ### `subspace secrets <environment> [--edit] [--create]`
 
- Manage environment variables on different platforms. The default action is simply to show the vars defined for an environment. Pass --edit to edit them in the system editor.
+ The `secrets` command manages encrypted secrets for different environments. The default action is simply to show the secrets defined for an environment. Pass --edit to edit them in the system editor (vim, etc).
 
- The new system uses a file in `config/provision/templates/application.yml.template` that contains environment variables for all environments. The configuration that is not secret is visible and version controlled, while the secrets are stored in the vault files for their environments. The default file created by `subspace init` looks like this:
+ This uses `ansible-vault` under the hood and requires a vault password file. You will need to get the `.vault_pass` from a teammate out of band (secrets.10fw.ne, 1Password, sticky-note, etc.) and put it into `config/provision/.vault_pass`.
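
For example, a minimal first-time setup looks like this (the password value is a placeholder for whatever your teammate shares):

    $ echo 'paste-the-shared-vault-password-here' > config/provision/.vault_pass
    $ chmod 600 config/provision/.vault_pass   # optional, but keeps the file private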
+
+ These secrets are used during provisioning to populate variables in a few different places:
+ 1. `config/application.yml`, which uses the `figaro` gem to manage environment variables in rails.
+ 2. `config/database.yml`, which handles the database connection password.
+
+ Subspace uses a template file in `config/provision/templates/application.yml.template` that contains environment variables for all environments. If you have non-secret variables that change based on the target server, you can simply put them in plaintext in the template file. This was designed so that non-secret configuration is visible and version controlled, while the secret values are stored in the vault files for their environments.
+
+ NOTE: application.yml should be in the `.gitignore`, since subspace creates a new version on the server and symlinks it on top of whatever is checked in. You should make changes to the template file instead, which should be checked in to version control.
+
+ The default template created by `subspace init` looks like this:
 
  ```
  # These environment variables are applied to all environments, and can be secret or not
@@ -113,7 +125,7 @@ The new system uses a file in `config/provision/templates/application.yml.templa
  # This is secret and can be changed on all three environments easily by using subspace vars <env> --edit
  SECRET_KEY_BASE: {{secret_key_base}}
 
- #This is not secret, and is the same value for all environments
+ # This is not secret, and is the same value for all environments
  ENABLE_SOME_FEATURE: false
 
  development:
@@ -127,14 +139,12 @@ production:
 
  ```
 
- Further, you can use the extremely command to create a local copy of `config/application.yml`
+ You can also use this command to automatically create a local version of `config/application.yml` based on the template and encrypted secrets for a specific environment.
 
      # Create a local copy of config/application.yml with the secrets encrypted in secrets/development.yml
      $ subspace vars development --create
 
- This can get you up and running in development securely, the only thing you need to distribute to new team members is the vault password. Grab it from a teammate and put it into `config/provision/.vault_pass`
-
- NOTE: application.yml should be in the `.gitignore`, since subspace creates a new version on the server and symlinks it on top of whatever is checked in.
+ This can get you up and running in development quickly and securely.
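
To summarize the day-to-day flow described above (environment names are placeholders):

    $ subspace secrets staging                  # view the decrypted secrets for staging
    $ subspace secrets staging --edit           # edit them via ansible-vault in your editor
    $ subspace secrets development --create     # write a local config/application.yml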
 
  ## Procedure for updating on projects
 
@@ -146,13 +156,13 @@ Then,
 
  * `subspace provision production`
 
- If you get an error saying you need a vault password file, you should be able to find it in 1Password. You might also need to update `ansible`.
+ If you get an error saying you need a vault password file, you need to get it from someone on the team (see above). You might also need to update `ansible`.
 
  You'll want to do this for each environment (i.e. `subspace provision qa`, etc.). Best to start with staging and work your way up.
 
  # Host configuration
 
- We need to know some info about hosts, but not much. See the files for details, it's mostly the hostname and the user that can administer the system, eg `ubuntu` on AWS/ubuntu, `ec2-user`, or even `root` (not recommended)
+ We need to know some info about hosts, but not much. See the files for details; it's mostly the hostname and the user that can administer the system, e.g. `ubuntu` on AWS/Ubuntu, `ec2-user`, or even `root` (not recommended, but used on Linode/DigitalOcean).
 
  # Role Configuration
 
@@ -251,20 +261,34 @@ Defaults:
 
  ## letsencrypt
 
- By default, this creates a single certificate for every server alias/server name in the configuration file.
- If you'd like more control over the certs created, you can define the variables `le_ssl_certs` as follows:
+ This creates a single certificate for every server alias/server name in the configuration file.
+
+     letsencrypt_email: "me@example.com"
+     server_name: app.example.com
+
+ If you'd like more control over the cert, you can customize the variable `le_ssl_cert` as follows:
 
-     le_ssl_certs:
-       - cert_name: mycert
-         domains:
-           - mydomain.example.com
-           - otherdomain.example.com
-       - cert_name: othersite
-         domains:
-           - othersite.example.com
+     le_ssl_cert:
+       cert_name: "{{server_name}}"
+       preferred_challenges: "http"
+       plugin: standalone
+       domains: "{{ [server_name] + server_aliases }}"
+
+ For example, to force a manual DNS challenge you can do the following:
+
+     le_ssl_cert:
+       cert_name: star_example
+       preferred_challenges: dns
+       plugin: manual
+       domains:
+         - example.com
+         - "*.example.com"
+
+ (you will need to futz around the first time and manually install the DNS record, but it should work on renewals)
+
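
For reference, with the manual-DNS values above, the role's certbot invocation (reconstructed from the task change later in this diff) works out to roughly:

    $ certbot certonly --email me@example.com \
        --domains 'example.com,*.example.com' \
        --preferred-challenges dns \
        --cert-name star_example --manual \
        --manual-auth-hook=/bin/yes \
        --agree-tos --expand --non-interactive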
+ Note that this role needs to be included _before_ the webserver (apache or nginx) role.
 
- Note that this role needs to be included _before_ the webserver (apache or
- nginx) role
 
  ## logrotate
 
@@ -291,14 +315,6 @@ Installs memcache on the server. By default, memcache will only listen on local
      # bind to all interfaces
      memcache_bind: "0.0.0.0"
 
- ## monit
-
- ## mysql
-
- ## mysql2_gem
-
- ## newrelic
-
  ## newrelic-infra
  This role will install the next-gen "Newrelic One" infrastructure agent which can perform a few different functions for newrelic. The previous "newrelic" role is deprecated.
 
@@ -369,13 +385,13 @@ The full list of distributions is here: https://github.com/nodesource/distributi
      database_user: "{{project_name}}"
 
  ## puma
- Use the puma app server for your rails app. Usually combined with nginx to server as a static file server and reverse proxy.
+ Use the puma app server for your rails app. Usually combined with nginx to serve as a static file server and reverse proxy.
 
  **Prerequisites:**
  - add `gem puma` to your gemfile
  - add `config/puma/` to the `linked_dirs` config in capistrano's `deploy.rb`
 
- This role will generate a reasonable `puma.rb` and configure it to be controlled by systemd.
+ This role will generate a reasonable `puma.rb` and configure it to be controlled by systemd.
 
  **Variables:**
 
@@ -387,7 +403,7 @@ This role will generate a reasonable `puma.rb` and configure it to be controlled
 
  Provisions for a rails app. This one is probably pretty important.
 
- We no longer provider default values, so make sure to define all the following variables:
+ We no longer provide default values, so make sure to define all the following variables:
 
      rails_env: production
      database_pool: 5
@@ -407,6 +423,11 @@ Installs redis on the server.
      # Change to * if you want this available everywhere instead of localhost
      redis_bind: 127.0.0.1
 
+ As of Subspace 3.0, this uses the official redis apt repo instead of the Debian/Ubuntu one. If you previously installed redis from the distro, you will need to manually uninstall, purge, and reinstall it. This should not delete any data, but back it up just in case.
+
+     sudo apt-get purge redis-server
+     sudo apt-get install redis-server
+
 
  ## resque
 
  Install monitoring and automatic startup for resque workers via monit. You MUST set the `job_queues` variable as follows:
data/ansible/roles/delayed_job/tasks/main.yml CHANGED
@@ -1,45 +1,28 @@
  ---
  - set_fact: delayed_job_installed="true"
 
- - name: Monit Stop All
-   shell: monit stop all
+ - name: Install systemd delayed_job script
    become: true
-   ignore_errors: yes
-
- - name: Wait for monit to stop
-   shell: monit status | grep Monitored | wc -l | awk '{print $1 $2}'
-   register: monit_stopped
-   retries: 10
-   until: monit_stopped.stdout == "0"
-   delay: 10
-   become: true
-
- - name: Install delayed_job monit script
+   vars:
+     job_queue: "{{ item }}"
+     loop_index: "{{ loop_index }}"
    template:
-     src: delayed-job-monit-rc
-     dest: /etc/monit/conf.d/delayed_job_{{project_name}}_{{rails_env}}
+     src: delayed-job-systemd.service
+     dest: /etc/systemd/system/delayed_job_{{ item }}{{ loop_index }}.service
+   loop: "{{ job_queues }}"
+   loop_control:
+     index_var: loop_index
+     loop_var: item
+
+ - name: Enable systemd delayed_job service
    become: true
+   systemd:
+     name: "delayed_job_{{ item }}{{ loop_index }}"
+     daemon_reload: true
+     enabled: yes
+     state: started
+   loop: "{{ job_queues }}"
+   loop_control:
+     loop_var: item
+     index_var: loop_index
 
- - name: Remove old upstart files
-   file:
-     path: /etc/init/delayed-job.conf
-     state: absent
-   become: true
-
- - name: Remove old monit files
-   file:
-     path: /etc/monit/conf.d/delayed_job
-     state: absent
-   become: true
-
- - name: reload_monit
-   shell: monit reload
-   become: true
-
- - name: wait
-   pause:
-     seconds: 3
-
- - name: restart monit services
-   shell: monit restart all
-   become: true
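
Once provisioned, each entry in `job_queues` gets its own unit named `delayed_job_<queue><index>` (the loop index starts at 0), so with `job_queues: [default]` you can inspect it the usual systemd way:

    $ systemctl status delayed_job_default0
    $ journalctl -u delayed_job_default0 -f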
data/ansible/roles/delayed_job/templates/delayed-job-systemd.service ADDED
@@ -0,0 +1,33 @@
+ [Unit]
+ Description=Start delayed_job_{{job_queue}}{{loop_index}} instance
+ After=syslog.target network.target
+
+ [Service]
+ Type=simple
+
+ # Uncomment this if you are going to use this as a system service
+ # if using as a user service then leave commented out, or you will get an error trying to start the service
+ # !!! Change this to your deploy user account if you are using this as a system service !!!
+ User=deploy
+ Group=deploy
+ UMask=0002
+
+ Environment=RAILS_ENV={{rails_env}}
+
+ WorkingDirectory=/u/apps/{{project_name}}/current
+ ExecStart=/usr/local/bin/bundle exec {{delayed_job_command}} --identifier={{job_queue}}{{loop_index}} --queue={{job_queue}} start
+ ExecStop=/usr/local/bin/bundle exec {{delayed_job_command}} --identifier={{job_queue}}{{loop_index}} --queue={{job_queue}} stop
+ TimeoutSec=120
+ PIDFile=/u/apps/{{project_name}}/shared/tmp/pids/delayed_job_{{job_queue}}{{loop_index}}.pid
+
+ # if we crash, restart
+ RestartSec=1
+ Restart=on-failure
+
+ StandardOutput=syslog
+ StandardError=syslog
+ # This will default to "bundler" if we don't specify it
+ SyslogIdentifier=delayed_job
+
+ [Install]
+ WantedBy=multi-user.target
data/ansible/roles/letsencrypt/defaults/main.yml CHANGED
@@ -1,12 +1,12 @@
  ---
  certbot_dir: "/opt/certbot"
- apache_ssl_config: |
-   SSLCertificateFile /etc/letsencrypt/live/{{server_name}}/cert.pem
-   SSLCertificateKeyFile /etc/letsencrypt/live/{{server_name}}/privkey.pem
-   Include /etc/letsencrypt/options-ssl-apache.conf
-   SSLCertificateChainFile /etc/letsencrypt/live/{{server_name}}/chain.pem
+ le_ssl_cert:
+   cert_name: "{{server_name}}"
+   preferred_challenges: "http"
+   plugin: standalone
+   domains: "{{ [server_name] + server_aliases }}"
 
  nginx_ssl_config: |
-   ssl_certificate /etc/letsencrypt/live/{{server_name}}/fullchain.pem;
-   ssl_certificate_key /etc/letsencrypt/live/{{server_name}}/privkey.pem;
+   ssl_certificate /etc/letsencrypt/live/{{le_ssl_cert.cert_name}}/fullchain.pem;
+   ssl_certificate_key /etc/letsencrypt/live/{{le_ssl_cert.cert_name}}/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
@@ -41,15 +41,25 @@
    delay: 1
    state: stopped
 
- - name: Run default
-   when: le_ssl_certs is not defined
+ - name: Generate SSL Certificate
    become: true
-   command: "{{certbot_bin}} certonly --email {{letsencrypt_email}} --domains {{([server_name] + server_aliases) | join(',')}} --cert-name {{server_name}} --standalone --agree-tos --expand --non-interactive"
-
- - name: Generate SSL Certificates
-   become: true
-   with_items: "{{le_ssl_certs|default([])}}"
-   command: "{{certbot_bin}} certonly --email {{letsencrypt_email}} --domains {{item.domains | join(',')}} --cert-name {{item.cert_name}} --standalone --agree-tos --expand --non-interactive"
+   command:
+     argv:
+       - "{{ certbot_bin }}"
+       - certonly
+       - "--email"
+       - "{{ letsencrypt_email }}"
+       - "--domains"
+       - "{{ le_ssl_cert.domains | join(',') }}"
+       - "--preferred-challenges"
+       - "{{ le_ssl_cert.preferred_challenges }}"
+       - "--cert-name"
+       - "{{ le_ssl_cert.cert_name }}"
+       - "--{{ le_ssl_cert.plugin }}"
+       - "--manual-auth-hook=/bin/yes"
+       - "--agree-tos"
+       - "--expand"
+       - "--non-interactive"
 
  - name: Update nginx default options
    when: "'nginx' in role_names"
@@ -57,12 +67,6 @@
    url: https://raw.githubusercontent.com/certbot/certbot/master/certbot-nginx/certbot_nginx/_internal/tls_configs/options-ssl-nginx.conf
    dest: /etc/letsencrypt/options-ssl-nginx.conf
 
- - name: Update apache default options
-   when: "'apache' in role_names"
-   get_url:
-     url: https://raw.githubusercontent.com/certbot/certbot/master/certbot-apache/certbot_apache/options-ssl-apache.conf
-     dest: /etc/letsencrypt/options-ssl-apache.conf
-
  - name: start webserver after standalone mode
    debug: msg="Startup webserver"
    notify: start webserver
@@ -74,16 +78,6 @@
    env: yes
    job: /usr/bin:/bin:/usr/sbin
 
- - name: Setup cron job to auto renew
-   become: true
-   when: "'apache' in role_names"
-   cron:
-     name: Auto-renew SSL
-     job: "{{certbot_bin}} renew --no-self-upgrade --apache >> /var/log/cron.log 2>&1"
-     hour: "0"
-     minute: "33"
-     state: present
-
  - name: Setup cron job to auto renew
    become: true
    when: "'nginx' in role_names"
@@ -23,6 +23,7 @@ WorkingDirectory=/u/apps/{{project_name}}/current
 
  # Helpful for debugging socket activation, etc.
  # Environment=PUMA_DEBUG=1
+ Environment=RAILS_ENV={{rails_env}}
 
  # SystemD will not run puma even if it is in your path. You must specify
  # an absolute URL to puma. For example /usr/local/bin/puma
data/ansible/roles/redis/tasks/main.yml CHANGED
@@ -1,9 +1,27 @@
  ---
- - name: Install Redis
+ - name: Add an Apt signing key for redis repo
+   ansible.builtin.apt_key:
+     url: https://packages.redis.io/gpg
+     state: present
+
+ - name: Add redis repository into sources list
+   ansible.builtin.apt_repository:
+     repo: deb https://packages.redis.io/deb {{ ansible_distribution_release }} main
+     state: present
+   register: redis_apt_repo
+
+ - name: Purge distro redis package
+   apt:
+     name: redis-server
+     state: absent
+     purge: true
+   when: redis_apt_repo.changed
+
+ - name: Install Redis from official repo
    become: true
    apt:
-     pkg: redis-server
-     state: present
+     name: redis-server
+     state: latest
      update_cache: true
 
  - name: Set bind IP
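
A quick sanity check (not part of the role) that the package now comes from packages.redis.io rather than the distro:

    $ apt-cache policy redis-server   # the installed candidate should list packages.redis.io
    $ redis-server --version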
@@ -23,7 +23,7 @@ Role Variables
 
  > ruby_version: This variable controls the version of Ruby that will be compiled and installed. It should correspond with the tarball filename excluding the ".tar.gz" extension (e.g. "ruby-1.9.3-p484").
 
- > ruby_checksum: The SHA256 checksum of the gzipped tarball that will be downloaded and compiled.
+ > ruby_checksum: The checksum of the gzipped tarball that will be downloaded and compiled. (prefix with algorithm, e.g. sha256:abcdef01234567890)
 
  > ruby_download_location: The URL that the tarball should be retrieved from. Using the ruby_version variable within this variable is a good practice (e.g. "http://cache.ruby-lang.org/pub/ruby/1.9/{{ ruby_version }}.tar.gz").
 
@@ -40,7 +40,7 @@
  - name: Download the Ruby source code
    get_url: url={{ ruby_download_location }}
             dest=/usr/local/src/
-            sha256sum={{ ruby_checksum }}
+            checksum={{ ruby_checksum }}
    become: true
 
  - name: Generate the Ruby installation script
@@ -32,7 +32,7 @@ Type=notify
  WatchdogSec=10
 
  WorkingDirectory=/u/apps/{{project_name}}/current
- ExecStart=/usr/local/bin/bundle exec sidekiq -e {{rails_env}} --queue {{hostname}} {{ job_queues | map('regex_replace', '^(.*)$', '--queue \\1') | join(' ') }}
+ ExecStart=/usr/local/bin/bundle exec sidekiq -e {{rails_env}} --queue {{hostname}} {{ job_queues | map('regex_replace', '^(.*)$', '--queue \\1') | join(' ') }} -c {{sidekiq_workers}}
 
  # Use `systemctl kill -s TSTP sidekiq` to quiet the Sidekiq process
 
@@ -46,6 +46,7 @@ UMask=0002
  # Greatly reduce Ruby memory fragmentation and heap usage
  # https://www.mikeperham.com/2018/04/25/taming-rails-memory-bloat/
  Environment=MALLOC_ARENA_MAX=2
+ Environment=RAILS_ENV={{rails_env}}
 
  # if we crash, restart
  RestartSec=1
@@ -19,4 +19,4 @@
 
  - name: "Join the tailnet"
    become: true
-   command: tailscale up --auth-key {{tailscale_auth_key}} {{tailscale_options}}
+   command: tailscale up --ssh --auth-key={{tailscale_auth_key}} --hostname={{project_name}}-{{hostname}} --accept-risk=lose-ssh {{tailscale_options}}
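
With hypothetical values `project_name: myapp` and `hostname: web1`, the templated task now effectively runs:

    $ tailscale up --ssh --auth-key=tskey-... --hostname=myapp-web1 --accept-risk=lose-ssh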
data/bin/console CHANGED
@@ -6,9 +6,5 @@ require "subspace"
  # You can add fixtures and/or initialization code here to make experimenting
  # with your gem easier. You can also use a different console, if you like.
 
- # (If you use this, don't forget to add pry to your Gemfile!)
- # require "pry"
- # Pry.start
-
  require "irb"
  IRB.start
data/lib/subspace/cli.rb CHANGED
@@ -155,6 +155,7 @@ class Subspace::Cli
 
  capistrano - generate config/deploy/[env].rb. Requires the --env option.
  list - list the current inventory as understood by subspace.
+ keyscan - Update ~/.ssh/known_hosts with new host key fingerprints
  EOS
  c.option "--env ENVIRONMENT", "Optional: Limit function to a specific environment (aka group)"
  c.when_called Subspace::Commands::Inventory
@@ -3,18 +3,18 @@ module Subspace
  module Ansible
    def ansible_playbook(*args)
      args.push "--diff"
-     args.push "--private-key"
-     args.push "subspace.pem"
      ansible_command("ansible-playbook", *args)
    end
 
    def ansible_command(command, *args)
      update_ansible_cfg
+     retval = false
      Dir.chdir "config/subspace" do
        say ">> Running #{command} #{args.join(' ')}"
-       system(command, *args, out: $stdout, err: $stderr)
+       retval = system(command, *args, out: $stdout, err: $stderr)
        say "<< Done"
      end
+     retval
    end
 
    private
@@ -8,6 +8,8 @@ class Subspace::Commands::Inventory < Subspace::Commands::Base
      capistrano_deployrb
    when "list"
      list_inventory
+   when "keyscan"
+     keyscan_inventory
    else
      say "Unknown or missing command to inventory: #{command}"
      say "try subspace inventory [list, capistrano]"
@@ -15,12 +17,19 @@ class Subspace::Commands::Inventory < Subspace::Commands::Base
  end
 
  def list_inventory
-   inventory.find_hosts!(@env || "all").each do |host_name|
-     host = inventory.hosts[host_name]
+   inventory.find_hosts!(@env || "all").each do |host|
      puts "#{host.name}\t#{host.vars["ansible_host"]}\t(#{host.group_list.join ','})"
    end
  end
 
+ def keyscan_inventory
+   inventory.find_hosts!(@env || "all").each do |host|
+     ip = host.vars["ansible_host"]
+     system %Q(ssh-keygen -R #{ip})
+     system %Q(ssh-keyscan -Ht ed25519 #{ip} >> "$HOME/.ssh/known_hosts")
+   end
+ end
+
  def capistrano_deployrb
    if @env.nil?
      puts "Please provide an environment e.g: subspace inventory capistrano --env production"
@@ -29,8 +38,8 @@ class Subspace::Commands::Inventory < Subspace::Commands::Base
 
  say "# config/deploy/#{@env}.rb"
  say "# Generated by Subspace"
- inventory.find_hosts!(@env).each do |host_name|
-   host = inventory.hosts[host_name]
+ inventory.find_hosts!(@env).each do |host|
+   host = inventory.hosts[host.name]
    db_role = false
    roles = host.group_list.map do |group_name|
      if group_name =~ /web/
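
Putting the new subcommand together: per the code above, each host's stale key is dropped with `ssh-keygen -R` and its current ed25519 key appended to `~/.ssh/known_hosts`, so refreshing fingerprints for one environment is just (`production` is a placeholder):

    $ subspace inventory keyscan --env production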
@@ -1,15 +1,19 @@
  class Subspace::Commands::Secrets < Subspace::Commands::Base
    def initialize(args, options)
-     @environment = args.first
-     @action = if options.edit
-       "edit"
-     elsif options.create
-       "create"
+     if args.first == "rekey"
+       rekey
      else
-       "view"
-     end
+       @environment = args.first
+       @action = if options.edit
+         "edit"
+       elsif options.create
+         "create"
+       else
+         "view"
+       end
 
-     run
+       run
+     end
    end
 
    def run
@@ -40,6 +44,22 @@ class Subspace::Commands::Secrets < Subspace::Commands::Base
      system "cat", "config/application.yml"
    end
 
+   def rekey
+     secret_files = Dir.glob("config/subspace/secrets/*.yml").map {|x| "secrets/#{File.basename(x)}"}
+     exit unless agree("This will re-key your secrets with a new random vault_pass. (#{secret_files}). Proceed? (yes to continue) ")
+
+     say "Writing new password to .vault_pass.new"
+     File.write "config/subspace/.vault_pass.new", SecureRandom.base64(24) + "\n"
+     success = ansible_command "ansible-vault", "rekey", "--vault-password-file", ".vault_pass", "--new-vault-password-file", ".vault_pass.new", "-v", *secret_files
+     if success
+       FileUtils.mv "config/subspace/.vault_pass", "config/subspace/.vault_pass.old"
+       FileUtils.mv "config/subspace/.vault_pass.new", "config/subspace/.vault_pass"
+     else
+       say "Something went wrong, not changing .vault_pass"
+     end
+   end
+
    private
 
    def application_yml_template
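
Based on the method above, rotating the vault password is a single command run from the project root (a sketch of the expected flow):

    $ subspace secrets rekey
    # writes config/subspace/.vault_pass.new, re-keys secrets/*.yml via ansible-vault,
    # then rotates .vault_pass -> .vault_pass.old and .vault_pass.new -> .vault_pass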
@@ -18,11 +18,15 @@ class Subspace::Commands::Ssh < Subspace::Commands::Base
      return
    end
    host_vars = inventory.hosts[@host].vars
-   user = host_vars["ansible_user"]
+   if host_vars.key?('ansible_ssh_user')
+     say "Supposed to be ansible_user not ansible_ssh_user"
+   end
+   user = @user || host_vars["ansible_user"]
    host = host_vars["ansible_host"]
    port = host_vars["ansible_port"] || 22
-   pem = host_vars["ansible_ssh_private_key_file"] || 'subspace.pem'
-   cmd = "ssh #{user}@#{host} -p #{port} -i config/subspace/#{pem} #{pass_through_params.join(" ")}"
+   pem = host_vars["ansible_ssh_private_key_file"]
+   pem_cmd = "-i config/subspace/#{pem}" if pem
+   cmd = "ssh #{user}@#{host} -p #{port} #{pem_cmd} #{pass_through_params.join(" ")}"
    say "> #{cmd} \n"
    exec cmd
  end
@@ -73,7 +73,7 @@ module Subspace
  def update_inventory
    puts "Apply succeeded, updating inventory."
    Dir.chdir "config/subspace/terraform/#{@env}" do
-     @output = JSON.parse `terraform output -json oxenwagen`
+     @output = JSON.parse `terraform output -json inventory`
    end
    inventory.merge(@output)
    inventory.write
@@ -60,20 +60,20 @@ module Subspace
    end
 
    def merge(inventory_json)
-     inventory_json["inventory"]["hostnames"].each_with_index do |host, i|
+     inventory_json["hostnames"].each_with_index do |host, i|
        if hosts[host]
          old_ip = hosts[host].vars["ansible_host"]
-         new_ip = inventory_json["inventory"]["ip_addresses"][i]
+         new_ip = inventory_json["ip_addresses"][i]
          if old_ip != new_ip
            say " * Host '#{host}' IP address changed! You may need to update the inventory! (#{old_ip} => #{new_ip})"
          end
          next
        end
        hosts[host] = Host.new(host)
-       hosts[host].vars["ansible_host"] = inventory_json["inventory"]["ip_addresses"][i]
-       hosts[host].vars["ansible_user"] = inventory_json["inventory"]["users"][i]
+       hosts[host].vars["ansible_host"] = inventory_json["ip_addresses"][i]
+       hosts[host].vars["ansible_user"] = inventory_json["users"][i]
        hosts[host].vars["hostname"] = host
-       hosts[host].group_list = inventory_json["inventory"]["groups"][i].split(/\s/)
+       hosts[host].group_list = inventory_json["groups"][i].split(/\s/)
      end
    end
 
data/lib/subspace/version.rb CHANGED
@@ -1,3 +1,3 @@
  module Subspace
-   VERSION = "3.0.0.rc1"
+   VERSION = "3.0.0"
  end
@@ -63,7 +63,7 @@ module oxenwagen {
    # lb_alternate_names = []
  }
 
- output "oxenwagen" {
+ output "inventory" {
    value = module.oxenwagen
  }
 
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: subspace
  version: !ruby/object:Gem::Version
-   version: 3.0.0.rc1
+   version: 3.0.0
  platform: ruby
  authors:
  - Brian Samson
  autorequire:
  bindir: exe
  cert_chain: []
- date: 2022-07-29 00:00:00.000000000 Z
+ date: 2023-01-10 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: bundler
@@ -113,7 +113,6 @@ files:
  - ansible/roles/apache/handlers/main.yml
  - ansible/roles/apache/tasks/main.yml
  - ansible/roles/apache/templates/server_status.conf
- - ansible/roles/awscli/tasks/main.yml
  - ansible/roles/collectd/defaults/main.yml
  - ansible/roles/collectd/handlers/main.yml
  - ansible/roles/collectd/tasks/main.yml
@@ -138,15 +137,13 @@ files:
  - ansible/roles/delayed_job/README.md
  - ansible/roles/delayed_job/defaults/main.yml
  - ansible/roles/delayed_job/handlers/main.yml
- - ansible/roles/delayed_job/meta/main.yml
  - ansible/roles/delayed_job/tasks/main.yml
  - ansible/roles/delayed_job/templates/delayed-job-monit-rc
+ - ansible/roles/delayed_job/templates/delayed-job-systemd.service
  - ansible/roles/letsencrypt/defaults/main.yml
  - ansible/roles/letsencrypt/tasks/legacy.yml
  - ansible/roles/letsencrypt/tasks/main.yml
  - ansible/roles/letsencrypt/tasks/modern.yml
- - ansible/roles/letsencrypt_dns/defaults/main.yml
- - ansible/roles/letsencrypt_dns/tasks/main.yml
  - ansible/roles/logrotate/LICENSE
  - ansible/roles/logrotate/README.md
  - ansible/roles/logrotate/defaults/main.yml
@@ -312,11 +309,11 @@ required_ruby_version: !ruby/object:Gem::Requirement
      version: '0'
  required_rubygems_version: !ruby/object:Gem::Requirement
    requirements:
-   - - ">"
+   - - ">="
    - !ruby/object:Gem::Version
-     version: 1.3.1
+     version: '0'
  requirements: []
- rubygems_version: 3.3.18
+ rubygems_version: 3.3.3
  signing_key:
  specification_version: 4
  summary: Ansible-based server provisioning for rails projects
data/ansible/roles/awscli/tasks/main.yml DELETED
@@ -1,10 +0,0 @@
- ---
- - name: Install pip
-   apt:
-     pkg: python-pip
-     state: latest
-   become: true
-
- - name: Install awscli
-   pip:
-     name: awscli
data/ansible/roles/delayed_job/meta/main.yml DELETED
@@ -1,5 +0,0 @@
- ---
- dependencies:
-   - {
-     role: monit
-   }
data/ansible/roles/letsencrypt_dns/defaults/main.yml DELETED
@@ -1,4 +0,0 @@
- ---
- nginx_ssl_config: |
-   ssl_certificate /etc/letsencrypt/live/{{server_name}}/fullchain.crt;
-   ssl_certificate_key /etc/letsencrypt/live/{{server_name}}/privkey.pem;
data/ansible/roles/letsencrypt_dns/tasks/main.yml DELETED
@@ -1,133 +0,0 @@
- - name: Update repositories cache and install pip and setuptools package
-   apt:
-     name: [python-pip, python-setuptools]
-     update_cache: yes
-
- - pip:
-     name: [pyopenssl, boto]
-   tags:
-     - cert
-
- - name: Creates private key directory
-   file:
-     path: "/etc/letsencrypt/live/{{ server_name }}"
-     state: directory
-   tags:
-     - cert
-
- - name: Generate an OpenSSL private key with the default values (4096 bits, RSA)
-   openssl_privatekey:
-     path: "/etc/letsencrypt/live/{{ server_name }}/privkey.pem"
-   register: privkey
-   tags:
-     - cert
-
- - name: Generate an OpenSSL account key with the default values (4096 bits, RSA)
-   openssl_privatekey:
-     path: "/etc/letsencrypt/live/{{ server_name }}/account.pem"
-   tags:
-     - cert
-
- - name: Generate an OpenSSL Certificate Signing Request
-   openssl_csr:
-     path: "/etc/letsencrypt/live/{{ server_name }}/server.csr"
-     privatekey_path: "/etc/letsencrypt/live/{{ server_name }}/privkey.pem"
-     country_name: US
-     email_address: "{{ letsencrypt_email }}"
-     subject_alt_name: "{{ item.value | map('regex_replace', '^', 'DNS:') | list }}"
-   when: privkey is changed
-   register: csr
-   with_dict:
-     dns_server:
-       - "{{ server_name }}"
-       - "*.{{ server_name }}"
-   tags:
-     - cert
-
- - name: Create a challenge using an account key from a variable.
-   acme_certificate:
-     acme_version: 2
-     account_key_src: "/etc/letsencrypt/live/{{ server_name }}/account.pem"
-     csr: "/etc/letsencrypt/live/{{ server_name }}/server.csr"
-     cert: "/etc/letsencrypt/live/{{ server_name }}/server.crt"
-     fullchain: "/etc/letsencrypt/live/{{ server_name }}/fullchain.crt"
-     chain: "/etc/letsencrypt/live/{{ server_name }}/intermediate.crt"
-     challenge: dns-01
-     acme_directory: https://acme-v02.api.letsencrypt.org/directory
-     terms_agreed: yes
-     remaining_days: 60
-   when: csr is changed
-   register: le_challenge
-   tags:
-     - cert
-
- - name: Install txt record on route53
-   route53:
-     zone: "{{ route53_zone }}"
-     type: TXT
-     ttl: 60
-     state: present
-     wait: yes
-     record: "{{ item.key }}"
-     value: "{{ item.value | map('regex_replace', '^(.*)$', '\"\\1\"' ) | list }}"
-     aws_access_key: "{{ AWS_ACCESS_KEY_ID }}"
-     aws_secret_key: "{{ AWS_SECRET_ACCESS_KEY }}"
-     overwrite: yes
-   loop: "{{ le_challenge.challenge_data_dns | default({}) | dict2items }}"
-   tags:
-     - cert
-
- - name: Flush dns cache
-   become: true
-   command: "systemd-resolve --flush-caches"
-   when: le_challenge is changed
-   tags:
-     - cert
-
- - name: "Wait for DNS"
-   when: le_challenge is changed
-   pause:
-     minutes: 2
-   tags:
-     - cert
-
- - name: Let the challenge be validated and retrieve the cert and intermediate certificate
-   acme_certificate:
-     acme_version: 2
-     account_key_src: "/etc/letsencrypt/live/{{ server_name }}/account.pem"
-     csr: "/etc/letsencrypt/live/{{ server_name }}/server.csr"
-     cert: "/etc/letsencrypt/live/{{ server_name }}/server.crt"
-     fullchain: "/etc/letsencrypt/live/{{ server_name }}/fullchain.crt"
-     chain: "/etc/letsencrypt/live/{{ server_name }}/intermediate.crt"
-     challenge: dns-01
-     acme_directory: https://acme-v02.api.letsencrypt.org/directory
-     remaining_days: 60
-     terms_agreed: yes
-     data: "{{ le_challenge }}"
-   when: le_challenge is changed
-   tags:
-     - cert
-
- - name: Delete txt record on route53
-   route53:
-     zone: "{{ route53_zone }}"
-     type: TXT
-     ttl: 60
-     state: absent
-     wait: yes
-     record: "{{ item.key }}"
-     value: "{{ item.value | map('regex_replace', '^(.*)$', '\"\\1\"' ) | list }}"
-     aws_access_key: "{{ AWS_ACCESS_KEY_ID }}"
-     aws_secret_key: "{{ AWS_SECRET_ACCESS_KEY }}"
-     overwrite: yes
-   loop: "{{ le_challenge.challenge_data_dns | default({}) | dict2items }}"
-   tags:
-     - cert
-
- - name: restart webserver
-   debug: msg="restart webserver"
-   notify: restart webserver
-   changed_when: true
-   when: le_challenge is changed
-   tags:
-     - cert