odysseus-core 0.1.0 → 0.3.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 917c4ba0f885774f77c2bb74ce396300be9f9ae077f5ddc8d54bd96e4766d6d8
- data.tar.gz: ea50eb29cd561fa1ae3de2f6f81df06d33f4cb21bd4a36f3a0a60acc94cfd0f4
+ metadata.gz: 1a525389c6d10cc9400d5b1f0b5fa053bd446b5239cd7894d02e1ee829b71bdb
+ data.tar.gz: ed2314a392b04d987a3e6ca18a544b9ad40810f41b7a78cd7a045908bab00433
  SHA512:
- metadata.gz: 1eef3bc36f17dce94de204c1b4ddec3bdbc9f2d3882ec2dc2b7fd56cfe90f56a53856c8bcfb90df3ee10dce4e9236f5a748efc7b7b88d5babea919a02f636772
- data.tar.gz: 2a3b7d72b6ddbc2e6210839e803a8c9173d41d952775695dea80a1172a919d22308f40e17b0f79ae37668d0b7f109b1e4591b3445ea0fab2a47cde8b9e570caa
+ metadata.gz: 101e926f2178d3e8467d07eb375170de61c99b038f42ea61f9041eeb8dc7e6227e18eba407e592383b8ce626dcdd9590a506200a863eb07b0c216d57ccd9349b
+ data.tar.gz: bb11c66ef05c4d809bda9abb486d2dd1871d8be2ba9b17520f1b528f49339fafa1b25cd933d18a520f2022ad80442fb5d321f53581d32281d9c074177e79c012
data/README.md CHANGED
@@ -1,43 +1,88 @@
- # Odysseus::Core
+ # Odysseus Core
 
- TODO: Delete this and the text below, and describe your gem
-
- Welcome to your new gem! In this directory, you'll find the files you need to be able to package up your Ruby library into a gem. Put your Ruby code in the file `lib/odysseus/core`. To experiment with that code, run `bin/console` for an interactive prompt.
+ Core library for [Odysseus](https://github.com/WA-Systems-EU/odysseus), a zero-downtime Docker deployment tool with Caddy reverse proxy integration.
 
  ## Installation
 
- TODO: Replace `UPDATE_WITH_YOUR_GEM_NAME_IMMEDIATELY_AFTER_RELEASE_TO_RUBYGEMS_ORG` with your gem name right after releasing it to RubyGems.org. Please do not do it earlier due to security reasons. Alternatively, replace this section with instructions to install your gem from git if you don't plan to release to RubyGems.org.
-
- Install the gem and add to the application's Gemfile by executing:
-
  ```bash
- bundle add UPDATE_WITH_YOUR_GEM_NAME_IMMEDIATELY_AFTER_RELEASE_TO_RUBYGEMS_ORG
+ gem install odysseus-core
  ```
 
- If bundler is not being used to manage dependencies, install the gem by executing:
+ Or add to your Gemfile:
 
- ```bash
- gem install UPDATE_WITH_YOUR_GEM_NAME_IMMEDIATELY_AFTER_RELEASE_TO_RUBYGEMS_ORG
+ ```ruby
+ gem 'odysseus-core'
  ```
 
+ ## Overview
+
+ Odysseus Core provides the foundational components for Docker container deployment:
+
+ - **Configuration parsing** - YAML-based deploy.yml configuration
+ - **Docker client** - Container lifecycle management via SSH
+ - **Caddy client** - Reverse proxy configuration and routing
+ - **Deployer** - Zero-downtime deployment orchestration
+ - **Secrets** - Encrypted secrets file support
+
  ## Usage
 
- TODO: Write usage instructions here
+ This gem is primarily used by [odysseus-cli](https://rubygems.org/gems/odysseus-cli). For direct usage:
 
- ## Development
+ ```ruby
+ require 'odysseus'
 
- After checking out the repo, run `bin/setup` to install dependencies. Then, run `rake spec` to run the tests. You can also run `bin/console` for an interactive prompt that will allow you to experiment.
+ # Parse configuration
+ parser = Odysseus::Config::Parser.new('deploy.yml')
+ config = parser.parse
 
- To install this gem onto your local machine, run `bundle exec rake install`. To release a new version, update the version number in `version.rb`, and then run `bundle exec rake release`, which will create a git tag for the version, push git commits and the created tag, and push the `.gem` file to [rubygems.org](https://rubygems.org).
+ # Create executor
+ executor = Odysseus::Deployer::Executor.new('deploy.yml')
 
- ## Contributing
+ # Deploy
+ executor.deploy_all(image_tag: 'v1.0.0')
+ ```
 
- Bug reports and pull requests are welcome on GitHub at https://github.com/[USERNAME]/odysseus-core. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the [code of conduct](https://github.com/[USERNAME]/odysseus-core/blob/trunk/CODE_OF_CONDUCT.md).
+ ## Components
 
- ## License
+ ### Odysseus::Config::Parser
 
- The gem is available as open source under the terms of the [MIT License](https://opensource.org/licenses/MIT).
+ Parses deploy.yml configuration files with support for:
+ - Server roles (web, jobs, workers)
+ - Proxy configuration (Caddy)
+ - Accessories (databases, Redis, etc.)
+ - Environment variables and secrets
+ - AWS Auto Scaling Group integration
 
- ## Code of Conduct
+ ### Odysseus::Docker::Client
+
+ Docker operations via SSH:
+ - Container lifecycle (run, stop, remove)
+ - Image management
+ - Health checks
+ - Log streaming
+
+ ### Odysseus::Caddy::Client
+
+ Caddy reverse proxy management:
+ - Dynamic upstream configuration
+ - Zero-downtime routing updates
+ - TLS certificate management
+
+ ### Odysseus::Deployer::Executor
+
+ Deployment orchestration:
+ - Build and distribute images
+ - Zero-downtime container replacement
+ - Health check verification
+ - Automatic rollback on failure
+
+ ### Odysseus::Secrets::EncryptedFile
+
+ Encrypted secrets management:
+ - AES-256-GCM encryption
+ - Environment variable injection
+ - Secure key management
+
+ ## License
 
- Everyone interacting in the Odysseus::Core project's codebases, issue trackers, chat rooms and mailing lists is expected to follow the [code of conduct](https://github.com/[USERNAME]/odysseus-core/blob/trunk/CODE_OF_CONDUCT.md).
+ LGPL-3.0-only
@@ -65,11 +65,48 @@ module Odysseus
  options: symbolize_keys(config['options'] || {}),
  cmd: config['cmd'],
  volumes: config['volumes'],
- healthcheck: parse_server_healthcheck(config['healthcheck'])
+ healthcheck: parse_server_healthcheck(config['healthcheck']),
+ containers: parse_containers(config['containers']),
+ deploy: parse_deploy(config['deploy'])
  }
  end
  end
 
+ # Parse containers config (for multi-container per host)
+ def parse_containers(containers)
+ return nil unless containers
+
+ {
+ count: containers['count'] || 1,
+ name_pattern: containers['name_pattern']
+ }
+ end
+
+ # Parse deploy strategy config
+ def parse_deploy(deploy)
+ return nil unless deploy
+
+ {
+ strategy: deploy['strategy']&.to_sym,
+ drain_timeout: deploy['drain_timeout'] || 30,
+ stop_timeout: deploy['stop_timeout'] || 10,
+ boot_timeout: deploy['boot_timeout'] || 60,
+ health_check: parse_deploy_health_check(deploy['health_check'])
+ }
+ end
+
+ # Parse deploy-level health check (HTTP polling with threshold)
+ def parse_deploy_health_check(hc)
+ return nil unless hc
+
+ {
+ path: hc['path'] || '/up',
+ interval: hc['interval'] || 2,
+ threshold: hc['threshold'] || 3,
+ timeout: hc['timeout'] || 5
+ }
+ end
+
  # Parse AWS host provider config
  # @param aws [Hash] aws block from server config
  # @return [Hash, nil] normalized aws config or nil
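The defaulting behavior introduced by `parse_deploy` and `parse_deploy_health_check` can be exercised in isolation. This sketch copies the two methods out of the parser class as plain top-level methods, unchanged except for that move:

```ruby
# Standalone sketch of the deploy-config defaults added in the diff above.
def parse_deploy_health_check(hc)
  return nil unless hc

  {
    path: hc['path'] || '/up',        # polling endpoint
    interval: hc['interval'] || 2,    # seconds between polls
    threshold: hc['threshold'] || 3,  # consecutive successes required
    timeout: hc['timeout'] || 5       # per-request timeout
  }
end

def parse_deploy(deploy)
  return nil unless deploy

  {
    strategy: deploy['strategy']&.to_sym,
    drain_timeout: deploy['drain_timeout'] || 30,
    stop_timeout: deploy['stop_timeout'] || 10,
    boot_timeout: deploy['boot_timeout'] || 60,
    health_check: parse_deploy_health_check(deploy['health_check'])
  }
end

cfg = parse_deploy('strategy' => 'rolling', 'health_check' => { 'path' => '/healthz' })
# cfg[:strategy]            => :rolling
# cfg[:drain_timeout]       => 30 (default)
# cfg[:health_check][:path] => "/healthz"
```

Note that an absent `deploy:` block yields `nil` rather than a hash of defaults, so callers must use `dig` (as `build_orchestrator` does below with `role_config.dig(:deploy, :strategy)`).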
@@ -2,6 +2,6 @@
 
  module Odysseus
  module Core
- VERSION = "0.1.0"
+ VERSION = "0.3.0"
  end
  end
@@ -0,0 +1,77 @@
+ # lib/odysseus/core/volume_namespacer.rb
+
+ module Odysseus
+ module Core
+ module VolumeNamespacer
+ # Namespace volumes to avoid conflicts when multiple apps share a server.
+ #
+ # Named volumes (no leading /) get prefixed with the service name:
+ # "data:/var/lib/postgresql/data" → "myapp-data:/var/lib/postgresql/data"
+ #
+ # Host path volumes (leading /) are left as-is — the user owns the path.
+ #
+ # When a named volume is being namespaced, we check if the old (un-namespaced)
+ # volume already exists on the server. If it does, we reuse it to avoid data loss
+ # and log a deprecation warning.
+ #
+ # @param volumes [Array<String>, nil] volume specs (e.g. ["data:/container/path"])
+ # @param service [String] service name used as prefix
+ # @return [Array<String>, nil] namespaced volume specs
+ def namespace_volumes(volumes, service:)
+ return nil unless volumes
+
+ volumes.map { |v| namespace_volume(v, service: service) }
+ end
+
+ private
+
+ def namespace_volume(volume_spec, service:)
+ host_part, container_part, mode = volume_spec.split(':')
+
+ # Host path mount (absolute path) — user controls the path, leave as-is
+ return volume_spec if host_part.start_with?('/')
+
+ # Already namespaced — don't double-prefix
+ return volume_spec if host_part.start_with?("#{service}-")
+
+ namespaced = "#{service}-#{host_part}"
+
+ # Check if we need to handle migration from old volume name
+ if docker_volume_exists?(namespaced)
+ # New namespaced volume already exists — use it
+ return build_volume_spec(namespaced, container_part, mode)
+ end
+
+ if docker_volume_exists?(host_part)
+ # Old un-namespaced volume exists but new one doesn't.
+ # Reuse the old volume to avoid data loss, and warn the user.
+ log "Volume '#{host_part}' exists but is not namespaced. Reusing it to avoid data loss.", :warn
+ log " To migrate, run: docker volume create #{namespaced} && " \
+ "docker run --rm -v #{host_part}:/from -v #{namespaced}:/to alpine sh -c 'cp -a /from/. /to/'", :warn
+ return volume_spec
+ end
+
+ # Neither exists — use the new namespaced name (fresh deploy)
+ build_volume_spec(namespaced, container_part, mode)
+ end
+
+ def build_volume_spec(host_part, container_part, mode)
+ parts = [host_part, container_part]
+ parts << mode if mode
+ parts.join(':')
+ end
+
+ def docker_volume_exists?(name)
+ return false unless respond_to?(:docker_client, true)
+
+ docker_client.volume_exists?(name)
+ rescue StandardError
+ false
+ end
+
+ def docker_client
+ @docker
+ end
+ end
+ end
+ end
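The pure string-rewriting half of the namespacing rule can be demonstrated on its own. This trimmed sketch drops the `docker_volume_exists?` migration checks (so it assumes no pre-existing volumes on the host) and keeps only the prefixing logic:

```ruby
# Minimal sketch of the VolumeNamespacer prefixing rule, with the
# Docker existence/migration checks stubbed out.
def namespace_volume(volume_spec, service:)
  host_part, container_part, mode = volume_spec.split(':')

  return volume_spec if host_part.start_with?('/')            # host path: untouched
  return volume_spec if host_part.start_with?("#{service}-")  # already namespaced

  parts = ["#{service}-#{host_part}", container_part]
  parts << mode if mode                                       # preserve :ro / :rw suffix
  parts.join(':')
end

namespace_volume('data:/var/lib/postgresql/data', service: 'myapp')
# => "myapp-data:/var/lib/postgresql/data"
namespace_volume('/srv/data:/data', service: 'myapp')
# => "/srv/data:/data" (host path, left as-is)
```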
data/lib/odysseus/core.rb CHANGED
@@ -1,6 +1,7 @@
  # frozen_string_literal: true
 
  require_relative "core/version"
+ require_relative "core/volume_namespacer"
 
  module Odysseus
  module Core
@@ -315,7 +315,16 @@ module Odysseus
 
  def build_orchestrator(ssh, role)
  logger = build_logger
- if role == WEB_ROLE
+ role_config = @config[:servers][role] || {}
+ strategy = role_config.dig(:deploy, :strategy)
+
+ # Check if a sail plugin provides this strategy
+ if strategy && Odysseus::Sails.registered?(strategy)
+ sail_klass = Odysseus::Sails.resolve(strategy)
+ sail_klass.new(
+ ssh: ssh, config: @config, logger: logger, secrets_loader: @secrets_loader
+ )
+ elsif role == WEB_ROLE
  Odysseus::Orchestrator::WebDeploy.new(
  ssh: ssh, config: @config, logger: logger, secrets_loader: @secrets_loader
  )
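The `Odysseus::Sails.registered?`/`resolve` calls above imply a small name-to-class registry consulted before the built-in role dispatch. A minimal sketch of that lookup, using a plain hash and hypothetical helper names (`register_sail`, `resolve_sail`) in place of the real module:

```ruby
# Hypothetical in-memory registry standing in for Odysseus::Sails.
REGISTRY = {}

def register_sail(name, klass)
  REGISTRY[name.to_sym] = klass
end

def resolve_sail(strategy)
  # nil-safe: an unset strategy resolves to nil, so the caller
  # falls through to the role-based default orchestrator.
  REGISTRY[strategy&.to_sym]
end

RollingSail = Class.new # stand-in for a sail plugin's orchestrator class
register_sail(:rolling, RollingSail)

resolve_sail(:rolling)    # => RollingSail
resolve_sail(:blue_green) # => nil, caller uses the built-in WebDeploy/JobDeploy
```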
@@ -254,6 +254,22 @@ module Odysseus
  results
  end
 
+ # Check if a Docker named volume exists
+ # @param name [String] volume name
+ # @return [Boolean]
+ def volume_exists?(name)
+ output = @ssh.execute("docker volume inspect #{name} 2>/dev/null && echo 'yes' || echo 'no'")
+ output.strip.end_with?('yes')
+ end
+
+ # Ensure a Docker network exists, creating it if missing
+ # @param name [String] network name
+ # @param labels [Hash] labels to apply when creating
+ def ensure_network(name, labels: {})
+ label_flags = labels.map { |k, v| "--label #{k}=#{v}" }.join(' ')
+ @ssh.execute("docker network inspect #{name} >/dev/null 2>&1 || docker network create #{label_flags} #{name}".strip)
+ end
+
  # Get disk usage info
  # @return [String] docker system df output
  def disk_usage
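`ensure_network` is idempotent because it composes a single `inspect || create` shell one-liner rather than round-tripping state over SSH. The command construction can be sketched on its own (the `ensure_network_command` helper is hypothetical; the real method passes the string straight to `@ssh.execute`):

```ruby
# Sketch of the idempotent shell command ensure_network sends over SSH.
# If `docker network inspect` succeeds the create half never runs.
def ensure_network_command(name, labels: {})
  label_flags = labels.map { |k, v| "--label #{k}=#{v}" }.join(' ')
  "docker network inspect #{name} >/dev/null 2>&1 || docker network create #{label_flags} #{name}".strip
end

ensure_network_command('odysseus', labels: { 'odysseus.managed' => 'true' })
# => "docker network inspect odysseus >/dev/null 2>&1 || docker network create --label odysseus.managed=true odysseus"
```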
@@ -6,8 +6,7 @@ module Odysseus
  # Registry of available host providers
  def providers
  @providers ||= {
- static: Static,
- aws_asg: AwsAsg
+ static: Static
  }
  end
 
@@ -15,14 +14,14 @@ module Odysseus
  # @param role_config [Hash] role configuration from deploy.yml
  # @return [Base] host provider instance
  def build(role_config)
- if role_config[:aws]
- # AWS ASG provider
- AwsAsg.new(role_config[:aws])
+ if role_config[:aws] && providers[:aws_asg]
+ providers[:aws_asg].new(role_config[:aws])
+ elsif role_config[:aws]
+ raise Odysseus::ConfigError,
+ "AWS ASG host provider not available — is the odysseus-sail-aws-asg gem loaded?"
  elsif role_config[:hosts]
- # Static hosts (default)
  Static.new(hosts: role_config[:hosts])
  else
- # No hosts configured
  Static.new(hosts: [])
  end
  end
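The reworked `build` degrades gracefully: it consults the registry first, and raises only when the config demands a provider that never registered. The pattern can be sketched with simplified stand-ins (a bare Struct for `Static`, a plain StandardError subclass for `ConfigError`; both are hypothetical here):

```ruby
# Sketch of the pluggable host-provider lookup above.
ConfigError = Class.new(StandardError)
Static = Struct.new(:hosts)

# :aws_asg would only appear here after the plugin gem registers itself.
PROVIDERS = { static: Static }

def build(role_config)
  if role_config[:aws] && PROVIDERS[:aws_asg]
    PROVIDERS[:aws_asg].new(role_config[:aws])
  elsif role_config[:aws]
    # aws block present but no provider registered: fail loudly
    raise ConfigError, 'AWS ASG host provider not available'
  elsif role_config[:hosts]
    Static.new(role_config[:hosts])
  else
    Static.new([]) # no hosts configured
  end
end

build(hosts: ['10.0.0.1']).hosts # => ["10.0.0.1"]
```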
@@ -1,8 +1,12 @@
  # lib/odysseus/orchestrator/accessory_deploy.rb
 
+ require_relative '../core/volume_namespacer'
+
  module Odysseus
  module Orchestrator
  class AccessoryDeploy
+ include Odysseus::Core::VolumeNamespacer
+
  # @param ssh [Odysseus::Deployer::SSH] SSH connection
  # @param config [Hash] parsed deploy config
  # @param secrets_loader [Odysseus::Secrets::Loader] secrets loader (optional)
@@ -27,42 +31,53 @@ module Odysseus
  image = accessory_config[:image]
 
  log "Deploying accessory: #{service_name}"
+ log " Image: #{image}"
+
+ # Ensure the Docker network exists (accessories may boot before any service deploy)
+ ensure_network!
 
  # Check if accessory is already running
  existing = @docker.list(service: service_name)
  if existing.any? { |c| c['State'] == 'running' }
- log "Accessory #{service_name} is already running"
+ log " Already running, skipping"
  return { success: true, already_running: true, service: service_name }
  end
 
  # Start the accessory
  log "Starting #{service_name}..."
  container_id = start_accessory(name: name, config: accessory_config)
- log "Started container: #{container_id[0..11]}"
+ log " Container started: #{container_id[0..11]}"
 
  # Wait for healthy if healthcheck configured
  if accessory_config[:healthcheck]
- log "Waiting for container to be healthy..."
+ hc = accessory_config[:healthcheck]
+ log "Waiting for health check... (cmd: #{hc[:cmd]}, interval: #{hc[:interval]}s)"
  unless @docker.wait_healthy(container_id, timeout: 120)
+ log_health_failure(container_id)
  @docker.stop(container_id)
  @docker.remove(container_id, force: true)
  raise Odysseus::DeployError, "Accessory failed health checks"
  end
- log "Container is healthy!"
+ log " Health check passed"
  else
+ log "No health check configured, waiting 3s for startup..."
  sleep 3
  unless @docker.running?(container_id)
+ log_health_failure(container_id)
  raise Odysseus::DeployError, "Accessory failed to start"
  end
+ log " Container is running"
  end
 
  # Add to Caddy if proxy config is present
  if accessory_config[:proxy]
- log "Configuring proxy..."
+ proxy_hosts = accessory_config[:proxy][:hosts]&.join(', ')
+ log "Configuring proxy (hosts: #{proxy_hosts})..."
  add_to_caddy(name: name, container_id: container_id, config: accessory_config)
+ log " Proxy configured"
  end
 
- log "Accessory #{service_name} deployed!"
+ log "Accessory #{service_name} deployed"
 
  {
  success: true,
@@ -71,7 +86,7 @@ module Odysseus
  image: image
  }
  rescue StandardError => e
- log "Accessory deploy failed: #{e.message}", :error
+ log "Accessory deploy FAILED: #{e.message}", :error
  raise
  end
 
@@ -122,12 +137,16 @@ module Odysseus
  service_name = accessory_name(name)
  image = accessory_config[:image]
 
- log "Upgrading accessory: #{service_name} to #{image}"
+ log "Upgrading accessory: #{service_name}"
+ log " Image: #{image}"
+
+ # Ensure the Docker network exists
+ ensure_network!
 
  # Pull the new image first (before stopping anything)
- log "Pulling new image: #{image}..."
+ log "Pulling new image..."
  @docker.pull(image)
- log "Image pulled successfully"
+ log " Image pulled"
 
  # Check for existing container
  existing = @docker.list(service: service_name, all: true)
@@ -137,46 +156,54 @@ module Odysseus
  if accessory_config[:proxy] && old_container && old_container['State'] == 'running'
  container_name = old_container['Names'].delete_prefix('/')
  port = accessory_config[:proxy][:app_port]
- log "Removing from proxy..."
+ log "Draining from proxy..."
  @caddy.drain_upstream(service: service_name, upstream: "#{container_name}:#{port}")
+ log " Drained from proxy"
  end
 
  # Stop and remove old container if exists
  if old_container
- log "Stopping old container: #{old_container['ID'][0..11]}..."
+ log "Stopping old container #{old_container['ID'][0..11]} (30s grace period)..."
  @docker.stop(old_container['ID'], timeout: 30) if old_container['State'] == 'running'
  @docker.remove(old_container['ID'], force: true)
- log "Old container removed"
+ log " Old container removed"
  end
 
  # Start the accessory with the new image (volumes are preserved on host)
- log "Starting new container with #{image}..."
+ log "Starting new container..."
  container_id = start_accessory(name: name, config: accessory_config)
- log "Started container: #{container_id[0..11]}"
+ log " Container started: #{container_id[0..11]}"
 
  # Wait for healthy if healthcheck configured
  if accessory_config[:healthcheck]
- log "Waiting for container to be healthy..."
+ hc = accessory_config[:healthcheck]
+ log "Waiting for health check... (cmd: #{hc[:cmd]}, interval: #{hc[:interval]}s)"
  unless @docker.wait_healthy(container_id, timeout: 120)
+ log_health_failure(container_id)
  @docker.stop(container_id)
  @docker.remove(container_id, force: true)
  raise Odysseus::DeployError, "Accessory failed health checks after upgrade"
  end
- log "Container is healthy!"
+ log " Health check passed"
  else
+ log "No health check configured, waiting 3s for startup..."
  sleep 3
  unless @docker.running?(container_id)
+ log_health_failure(container_id)
  raise Odysseus::DeployError, "Accessory failed to start after upgrade"
  end
+ log " Container is running"
  end
 
  # Add to Caddy if proxy config is present
  if accessory_config[:proxy]
- log "Configuring proxy..."
+ proxy_hosts = accessory_config[:proxy][:hosts]&.join(', ')
+ log "Configuring proxy (hosts: #{proxy_hosts})..."
  add_to_caddy(name: name, container_id: container_id, config: accessory_config)
+ log " Proxy configured"
  end
 
- log "Accessory #{service_name} upgraded to #{image}!"
+ log "Accessory #{service_name} upgraded"
 
  {
  success: true,
@@ -217,17 +244,34 @@ module Odysseus
  "#{@config[:service]}-#{name}"
  end
 
+ def ensure_network!
+ log "Ensuring Docker network exists..."
+ @docker.ensure_network('odysseus', labels: { 'odysseus.managed' => 'true' })
+ end
+
  def start_accessory(name:, config:)
  service_name = accessory_name(name)
 
+ env = build_environment(config[:env])
+ log " Environment: #{env.size} variable(s) injected" if env.any?
+
+ volumes = namespace_volumes(config[:volumes], service: service_name)
+ if volumes&.any?
+ log " Volumes: #{volumes.join(', ')}"
+ end
+
+ if config[:ports]&.any?
+ log " Ports: #{config[:ports].join(', ')}"
+ end
+
  @docker.run(
  name: service_name,
  image: config[:image],
  options: {
  service: service_name,
- env: build_environment(config[:env]),
+ env: env,
  ports: config[:ports],
- volumes: config[:volumes],
+ volumes: volumes,
  network: 'odysseus',
  restart: 'unless-stopped',
  healthcheck: build_healthcheck(config[:healthcheck]),
@@ -293,6 +337,27 @@ module Odysseus
  )
  end
 
+ def log_health_failure(container_id)
+ log "Health check FAILED for container #{container_id[0..11]}", :error
+
+ begin
+ recent_logs = @docker.logs(container_id, tail: 30)
+ unless recent_logs.strip.empty?
+ log " Container logs (last 30 lines):", :error
+ recent_logs.each_line { |line| log " #{line.rstrip}", :error }
+ end
+ rescue StandardError => e
+ log " Could not fetch container logs: #{e.message}", :warn
+ end
+
+ begin
+ status = @docker.health_status(container_id)
+ log " Health status: #{status}", :error
+ rescue StandardError
+ # ignore
+ end
+ end
+
  def default_logger
  @default_logger ||= Object.new.tap do |l|
  def l.info(msg); puts msg; end
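The `log_health_failure` helper added to each orchestrator follows one pattern: tail the container's recent logs, then report its health status, swallowing errors so diagnostics never mask the original failure. A condensed sketch of that pattern, with a hypothetical `DockerStub` standing in for the real Docker client and an array collecting what the real code would log:

```ruby
# Stand-in for the real Docker client (hypothetical).
class DockerStub
  def logs(_id, tail:)
    "boot error: missing DATABASE_URL\n"
  end

  def health_status(_id)
    'unhealthy'
  end
end

# Collects the failure report log_health_failure would emit line by line.
def collect_failure_report(docker, container_id)
  report = ["Health check FAILED for container #{container_id[0..11]}"]
  docker.logs(container_id, tail: 30).each_line { |line| report << "  #{line.rstrip}" }
  report << "  Health status: #{docker.health_status(container_id)}"
  report
end

collect_failure_report(DockerStub.new, 'abcdef0123456789')
# => ["Health check FAILED for container abcdef012345",
#     "  boot error: missing DATABASE_URL",
#     "  Health status: unhealthy"]
```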
@@ -1,8 +1,12 @@
  # lib/odysseus/orchestrator/job_deploy.rb
 
+ require_relative '../core/volume_namespacer'
+
  module Odysseus
  module Orchestrator
  class JobDeploy
+ include Odysseus::Core::VolumeNamespacer
+
  # @param ssh [Odysseus::Deployer::SSH] SSH connection
  # @param config [Hash] parsed deploy config
  # @param logger [Object] logger (optional)
@@ -24,46 +28,54 @@ module Odysseus
  role_name = "#{service}-#{role}"
  image = "#{@config[:image]}:#{image_tag}"
 
- log "Starting deploy of #{role_name} with #{image}"
+ log "Deploying #{role_name}"
+ log " Image: #{image}"
+
+ server_config = @config[:servers][role] || {}
+ log " Command: #{server_config[:cmd]}" if server_config[:cmd]
 
  # Step 1: Find existing containers for this role
- log "Checking for existing containers..."
  old_containers = @docker.list(service: role_name)
- log "Found #{old_containers.size} existing container(s)"
+ log " Found #{old_containers.size} existing container(s)"
 
  # Step 2: Start new container
  log "Starting new container..."
  new_container_id = start_new_container(image: image, role: role)
- log "Started container: #{new_container_id[0..11]}"
+ log " Container started: #{new_container_id[0..11]}"
 
  # Step 3: Wait for healthy (if healthcheck configured)
- server_config = @config[:servers][role] || {}
  if server_config[:healthcheck]
- log "Waiting for container to be healthy..."
+ hc = server_config[:healthcheck]
+ log "Waiting for health check... (cmd: #{hc[:cmd]}, interval: #{hc[:interval]}s)"
  unless wait_for_healthy(new_container_id)
+ log_health_failure(new_container_id)
  handle_failed_deploy(new_container_id)
  raise Odysseus::DeployError, "Container failed health checks"
  end
- log "Container is healthy!"
+ log " Health check passed"
  else
- # No healthcheck - just wait a few seconds for startup
- log "No healthcheck configured, waiting for startup..."
+ log "No health check configured, waiting 5s for startup..."
  sleep 5
  unless @docker.running?(new_container_id)
+ log_health_failure(new_container_id)
  handle_failed_deploy(new_container_id)
  raise Odysseus::DeployError, "Container failed to start"
  end
+ log " Container is running"
  end
 
  # Step 4: Stop old containers gracefully
  old_containers.each do |old|
- log "Stopping old container: #{old['ID'][0..11]}..."
+ log "Stopping old container #{old['ID'][0..11]} (30s grace period)..."
  graceful_stop(old['ID'])
+ log " Old container removed"
  end
 
  # Step 5: Cleanup old stopped containers
- log "Cleaning up old containers..."
- @docker.cleanup_old_containers(service: role_name, keep: 2)
+ cleaned = @docker.cleanup_old_containers(service: role_name, keep: 2)
+ log " Cleaned up #{cleaned.size} old container(s)" if cleaned.any?
+
+ log "Deploy complete for #{role_name}"
 
  {
  success: true,
@@ -72,7 +84,7 @@ module Odysseus
  image: image
  }
  rescue StandardError => e
- log "Deploy failed: #{e.message}", :error
+ log "Deploy FAILED: #{e.message}", :error
  raise
  end
 
@@ -87,13 +99,26 @@ module Odysseus
  server_config = @config[:servers][role] || {}
  options = server_config[:options] || {}
 
+ env = build_environment
+ log " Environment: #{env.size} variable(s) injected"
+
+ volumes = namespace_volumes(server_config[:volumes], service: role_name)
+ if volumes&.any?
+ log " Volumes: #{volumes.join(', ')}"
+ end
+
+ if options[:memory] || options[:cpus]
+ log " Resources: memory=#{options[:memory] || 'default'}, cpus=#{options[:cpus] || 'default'}"
+ end
+
  @docker.run(
  name: container_name,
  image: image,
  options: {
  service: role_name,
  version: timestamp,
- env: build_environment,
+ env: env,
+ volumes: volumes,
  memory: options[:memory],
  memory_reservation: options[:memory_reservation],
  cpus: options[:cpus],
@@ -157,7 +182,28 @@ module Odysseus
  log "Rolling back failed deploy...", :warn
  @docker.stop(new_container_id)
  @docker.remove(new_container_id, force: true)
- log "Rollback complete"
+ log "Rollback complete — failed container removed"
+ end
+
+ def log_health_failure(container_id)
+ log "Health check FAILED for container #{container_id[0..11]}", :error
+
+ begin
+ recent_logs = @docker.logs(container_id, tail: 30)
+ unless recent_logs.strip.empty?
+ log " Container logs (last 30 lines):", :error
+ recent_logs.each_line { |line| log " #{line.rstrip}", :error }
+ end
+ rescue StandardError => e
+ log " Could not fetch container logs: #{e.message}", :warn
+ end
+
+ begin
+ status = @docker.health_status(container_id)
+ log " Health status: #{status}", :error
+ rescue StandardError
+ # ignore
+ end
  end
 
  def default_logger
@@ -1,8 +1,12 @@
  # lib/odysseus/orchestrator/web_deploy.rb
 
+ require_relative '../core/volume_namespacer'
+
  module Odysseus
  module Orchestrator
  class WebDeploy
+ include Odysseus::Core::VolumeNamespacer
+
  # @param ssh [Odysseus::Deployer::SSH] SSH connection
  # @param config [Hash] parsed deploy config
  # @param logger [Object] logger (optional)
@@ -24,48 +28,59 @@ module Odysseus
  service = @config[:service]
  image = "#{@config[:image]}:#{image_tag}"
 
- log "Starting deploy of #{service} with #{image}"
+ log "Deploying #{service} (role: #{role})"
+ log " Image: #{image}"
 
  # Step 1: Ensure Caddy is running
  log "Ensuring Caddy proxy is running..."
- ensure_caddy!
+ if @caddy.running?
+ log " Caddy already running"
+ else
+ @caddy.ensure_running
+ log " Caddy started"
+ end
 
  # Step 2: Find existing containers
- log "Checking for existing containers..."
  old_containers = @docker.list(service: service)
- log "Found #{old_containers.size} existing container(s)"
+ log " Found #{old_containers.size} existing container(s)"
 
  # Step 3: Start new container
  log "Starting new container..."
  new_container_id = start_new_container(image: image, role: role)
- log "Started container: #{new_container_id[0..11]}"
+ log " Container started: #{new_container_id[0..11]}"
 
  # Step 4: Wait for healthy
- log "Waiting for container to be healthy..."
+ healthcheck_desc = describe_healthcheck(@config[:proxy]&.dig(:healthcheck))
+ log "Waiting for health check... #{healthcheck_desc}"
  unless wait_for_healthy(new_container_id)
+ log_health_failure(new_container_id)
  handle_failed_deploy(new_container_id, old_containers)
  raise Odysseus::DeployError, "Container failed health checks"
  end
- log "Container is healthy!"
+ log " Health check passed"
 
  # Step 5: Add new container to Caddy
- log "Adding container to Caddy..."
+ proxy_hosts = @config[:proxy][:hosts]&.join(', ')
+ log "Adding to Caddy proxy (hosts: #{proxy_hosts})..."
  add_to_caddy(new_container_id)
+ log " Caddy routing configured"
 
  # Step 6: Remove old containers from Caddy and stop them
  old_containers.each do |old|
- log "Draining old container: #{old['ID'][0..11]}..."
+ log "Draining old container #{old['ID'][0..11]}..."
  drain_and_remove(old['ID'])
+ log " Old container removed"
  end
 
  # Step 7: Cleanup old stopped containers
- log "Cleaning up old containers..."
- @docker.cleanup_old_containers(service: service, keep: 2)
+ cleaned = @docker.cleanup_old_containers(service: service, keep: 2)
+ log " Cleaned up #{cleaned.size} old container(s)" if cleaned.any?
 
  # Step 8: Cleanup stale Caddy upstreams (in case any were missed)
- log "Cleaning up stale Caddy routes..."
  removed_upstreams = @caddy.cleanup_stale_upstreams(service: service)
- log "Removed #{removed_upstreams.size} stale upstream(s)" if removed_upstreams.any?
+ log " Removed #{removed_upstreams.size} stale upstream(s)" if removed_upstreams.any?
+
+ log "Deploy complete for #{service}"
 
  {
  success: true,
@@ -74,7 +89,7 @@ module Odysseus
  image: image
  }
  rescue StandardError => e
- log "Deploy failed: #{e.message}", :error
+ log "Deploy FAILED: #{e.message}", :error
  raise
  end
 
@@ -95,6 +110,18 @@ module Odysseus
  options = server_config[:options] || {}
  proxy_config = @config[:proxy] || {}
 
+ env = build_environment
+ log " Environment: #{env.size} variable(s) injected"
+
+ volumes = namespace_volumes(server_config[:volumes], service: service)
+ if volumes&.any?
+ log " Volumes: #{volumes.join(', ')}"
+ end
+
+ if options[:memory] || options[:cpus]
+ log " Resources: memory=#{options[:memory] || 'default'}, cpus=#{options[:cpus] || 'default'}"
+ end
+
  @docker.run(
  name: container_name,
  image: image,
@@ -102,8 +129,8 @@ module Odysseus
  service: service,
  version: timestamp,
  ports: internal_port_mapping(proxy_config[:app_port]),
- env: build_environment,
- volumes: server_config[:volumes],
+ env: env,
+ volumes: volumes,
  memory: options[:memory],
  memory_reservation: options[:memory_reservation],
  cpus: options[:cpus],
@@ -234,7 +261,41 @@ module Odysseus
  @docker.remove(new_container_id, force: true)
 
  # Old containers should still be running and in Caddy
- log "Rollback complete - old containers still serving traffic"
+ if old_containers.any?
+ log "Rollback complete — #{old_containers.size} old container(s) still serving traffic"
+ else
+ log "Rollback complete — no previous containers to fall back to", :warn
+ end
+ end
+
+ def log_health_failure(container_id)
+ log "Health check FAILED for container #{container_id[0..11]}", :error
+
+ # Fetch recent container logs to help diagnose the failure
+ begin
+ recent_logs = @docker.logs(container_id, tail: 30)
+ unless recent_logs.strip.empty?
+ log " Container logs (last 30 lines):", :error
+ recent_logs.each_line { |line| log " #{line.rstrip}", :error }
+ end
+ rescue StandardError => e
+ log " Could not fetch container logs: #{e.message}", :warn
+ end
+
+ # Show the health check status
+ begin
+ status = @docker.health_status(container_id)
+ log " Health status: #{status}", :error
+ rescue StandardError
+ # ignore
+ end
+ end
+
+ def describe_healthcheck(hc_config)
+ return "(no health check configured)" unless hc_config && hc_config[:path]
+
+ port = @config[:proxy][:app_port]
+ "(GET http://localhost:#{port}#{hc_config[:path]}, interval: #{hc_config[:interval] || 10}s)"
  end
 
  def default_logger
@@ -0,0 +1,44 @@
+# lib/odysseus/sails.rb
+
+module Odysseus
+  module Sails
+    class << self
+      # Registry of available deploy strategy plugins (sails)
+      def strategies
+        @strategies ||= {}
+      end
+
+      # Register a deploy strategy
+      # @param name [Symbol] strategy name (e.g., :rolling)
+      # @param klass [Class] orchestrator class
+      def register(name, klass)
+        strategies[name.to_sym] = klass
+      end
+
+      # Look up a registered strategy
+      # @param name [Symbol] strategy name
+      # @return [Class, nil] orchestrator class or nil
+      def resolve(name)
+        strategies[name.to_sym]
+      end
+
+      # Check if a strategy is registered
+      # @param name [Symbol] strategy name
+      # @return [Boolean]
+      def registered?(name)
+        strategies.key?(name.to_sym)
+      end
+
+      # List all registered strategy names
+      # @return [Array<Symbol>]
+      def available
+        strategies.keys
+      end
+
+      # Reset registry (for testing)
+      def reset!
+        @strategies = {}
+      end
+    end
+  end
+end
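
The `Sails` registry introduced in this release is self-contained, so a sail plugin gem would use it roughly as follows. The registry code is reproduced from the diff so the usage runs standalone; `RollingDeploy` is a stand-in class, not a real Odysseus sail:

```ruby
# Sails registry as added in this release, reproduced for a runnable example.
module Odysseus
  module Sails
    class << self
      def strategies
        @strategies ||= {}
      end

      def register(name, klass)
        strategies[name.to_sym] = klass
      end

      def resolve(name)
        strategies[name.to_sym]
      end

      def registered?(name)
        strategies.key?(name.to_sym)
      end

      def available
        strategies.keys
      end

      def reset!
        @strategies = {}
      end
    end
  end
end

class RollingDeploy; end # stand-in orchestrator class

Odysseus::Sails.register(:rolling, RollingDeploy)
Odysseus::Sails.registered?('rolling') # => true (keys are symbolized)
Odysseus::Sails.resolve(:rolling)      # => RollingDeploy
Odysseus::Sails.available              # => [:rolling]
```

Because `register` and lookup both call `to_sym`, strategy names from YAML config (strings) and from plugin code (symbols) resolve to the same entry.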
@@ -38,6 +38,9 @@ module Odysseus
         raise Odysseus::ConfigValidationError,
               "server role '#{role}' must have 'hosts' array" \
           unless config.is_a?(Hash) && config['hosts'].is_a?(Array)
+
+        validate_containers!(role, config['containers']) if config['containers']
+        validate_deploy!(role, config['deploy']) if config['deploy']
       end
     end

@@ -68,6 +71,36 @@ module Odysseus
       end
     end

+    def validate_containers!(role, containers)
+      return unless containers.is_a?(Hash)
+
+      count = containers['count']
+      if count && (!count.is_a?(Integer) || count < 1)
+        raise Odysseus::ConfigValidationError,
+              "servers.#{role}.containers.count must be an integer >= 1"
+      end
+    end
+
+    def validate_deploy!(role, deploy)
+      return unless deploy.is_a?(Hash)
+
+      strategy = deploy['strategy']
+      if strategy && !Odysseus::Sails.registered?(strategy.to_sym)
+        raise Odysseus::ConfigValidationError,
+              "servers.#{role}.deploy.strategy '#{strategy}' is not registered — is the sail plugin gem loaded?"
+      end
+
+      %w[drain_timeout stop_timeout boot_timeout].each do |key|
+        val = deploy[key]
+        next unless val
+
+        unless val.is_a?(Integer) && val > 0
+          raise Odysseus::ConfigValidationError,
+                "servers.#{role}.deploy.#{key} must be a positive integer"
+        end
+      end
+    end
+
     def validate_ssh!
       ssh = @config['ssh']
       return if ssh.nil?
  return if ssh.nil?
metadata CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: odysseus-core
 version: !ruby/object:Gem::Version
-  version: 0.1.0
+  version: 0.3.0
 platform: ruby
 authors:
 - Thomas
@@ -83,17 +83,18 @@ files:
 - lib/odysseus/config/parser.rb
 - lib/odysseus/core.rb
 - lib/odysseus/core/version.rb
+- lib/odysseus/core/volume_namespacer.rb
 - lib/odysseus/deployer/executor.rb
 - lib/odysseus/deployer/ssh.rb
 - lib/odysseus/docker/client.rb
 - lib/odysseus/errors.rb
 - lib/odysseus/host_providers.rb
-- lib/odysseus/host_providers/aws_asg.rb
 - lib/odysseus/host_providers/base.rb
 - lib/odysseus/host_providers/static.rb
 - lib/odysseus/orchestrator/accessory_deploy.rb
 - lib/odysseus/orchestrator/job_deploy.rb
 - lib/odysseus/orchestrator/web_deploy.rb
+- lib/odysseus/sails.rb
 - lib/odysseus/secrets/encrypted_file.rb
 - lib/odysseus/secrets/loader.rb
 - lib/odysseus/validators/config.rb
@@ -1,91 +0,0 @@
-# lib/odysseus/host_providers/aws_asg.rb
-
-module Odysseus
-  module HostProviders
-    # AWS Auto Scaling Group host provider
-    # Resolves hosts from EC2 instances in an ASG
-    #
-    # Config options:
-    #   asg: ASG name (required)
-    #   region: AWS region (required)
-    #   use_private_ip: Use private IP instead of public (default: false)
-    #   state: Only include instances in this lifecycle state (default: InService)
-    #
-    # AWS credentials are loaded from standard AWS credential chain:
-    #   - Environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
-    #   - Shared credentials file (~/.aws/credentials)
-    #   - IAM instance profile (when running on EC2)
-    class AwsAsg < Base
-      def initialize(config)
-        super
-        @asg_name = config[:asg]
-        @region = config[:region]
-        @use_private_ip = config[:use_private_ip] || false
-        @lifecycle_state = config[:state] || 'InService'
-
-        validate_config!
-      end
-
-      # @return [Array<String>] list of instance IPs/hostnames
-      def resolve
-        require_aws_sdk!
-
-        instances = fetch_asg_instances
-        instances.map { |i| extract_address(i) }.compact
-      end
-
-      def name
-        "aws_asg(#{@asg_name})"
-      end
-
-      private
-
-      def validate_config!
-        raise Odysseus::ConfigError, "AWS ASG provider requires 'asg' name" unless @asg_name
-        raise Odysseus::ConfigError, "AWS ASG provider requires 'region'" unless @region
-      end
-
-      def require_aws_sdk!
-        require 'aws-sdk-autoscaling'
-        require 'aws-sdk-ec2'
-      rescue LoadError
-        raise Odysseus::ConfigError,
-              "AWS SDK not installed. Add 'aws-sdk-autoscaling' and 'aws-sdk-ec2' to your Gemfile."
-      end
-
-      def fetch_asg_instances
-        asg_client = Aws::AutoScaling::Client.new(region: @region)
-        ec2_client = Aws::EC2::Client.new(region: @region)
-
-        # Get instance IDs from ASG
-        asg_response = asg_client.describe_auto_scaling_groups(
-          auto_scaling_group_names: [@asg_name]
-        )
-
-        asg = asg_response.auto_scaling_groups.first
-        raise Odysseus::ConfigError, "ASG '#{@asg_name}' not found" unless asg
-
-        # Filter by lifecycle state
-        instance_ids = asg.instances
-                          .select { |i| i.lifecycle_state == @lifecycle_state }
-                          .map(&:instance_id)
-
-        return [] if instance_ids.empty?
-
-        # Get instance details from EC2
-        ec2_response = ec2_client.describe_instances(instance_ids: instance_ids)
-
-        ec2_response.reservations.flat_map(&:instances)
-      end
-
-      def extract_address(instance)
-        if @use_private_ip
-          instance.private_ip_address
-        else
-          # Prefer public IP, fall back to private
-          instance.public_ip_address || instance.private_ip_address
-        end
-      end
-    end
-  end
-end