floe 0.7.1 → 0.9.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
-   metadata.gz: 88cfa362fcb949aeaa6e9b0b799bff4bbb0e6b7158c95920545300ba063d6a0d
-   data.tar.gz: 14df3cddf7a1811e945f533b88fcacd6611566e212cacb88bebf2da0bae9fe67
+   metadata.gz: 8c7a74a5297258d481fb588ae0fa6eb1b22b7ecf5c049865b77ad23d6fb135cb
+   data.tar.gz: 82f73726e293e5345d3e7fa55a0049f881f2dce6e6d46570d2352968907c04b9
  SHA512:
-   metadata.gz: 90a3cdfdb241242d5b52ad403e7bbbf409476efec6d01d16f8f6f223b6f5fa3e227647da98ab2fb9466993982793be32684521ca5a268e73ab191ff7b6a025fb
-   data.tar.gz: c1194b741493a3db2f3abdf083ef2858b25af61442584c6a2001d222579c78e5057f0a7efd3c6c904e02d898c296f543c5af1889c2e4bb91c95f9197d327fa35
+   metadata.gz: 32d58e28cd76d936f31f9af2c1091d8a7dd930e47a2197b532c02d6d48df2c82feee696072af701aeca5f2af5437040ea3bace5df622c1ad5d0e47e388884ad2
+   data.tar.gz: 1ee0628fbfde496d00fae67812ac869582b1aca754aca7cf44caa7170edc1b0f6c9100a587770d6f6bb97bc0a948dd8fe0123fd108b9ed037ad71bc62bb8e104
data/CHANGELOG.md CHANGED
@@ -4,8 +4,26 @@ This project adheres to [Semantic Versioning](http://semver.org/).

  ## [Unreleased]

- ## [0.7.1] - 2024-01-17
+ ## [0.9.0] - 2024-02-19
+ ### Changed
+ - Default to wait indefinitely ([#157](https://github.com/ManageIQ/floe/pull/157))
+ - Create docker runners factory and add scheme ([#152](https://github.com/ManageIQ/floe/pull/152))
+ - Add a watch method to Workflow::Runner for event driven updates ([#95](https://github.com/ManageIQ/floe/pull/95))
+
+ ### Fixed
+ - Fix waiting on extremely short durations ([#160](https://github.com/ManageIQ/floe/pull/160))
+ - Fix wait state missing finish ([#159](https://github.com/ManageIQ/floe/pull/159))
+
+ ## [0.8.0] - 2024-01-17
+ ### Added
+ - Add CLI shorthand options for docker runner ([#147](https://github.com/ManageIQ/floe/pull/147))
+ - Run multiple workflows in exe/floe ([#149](https://github.com/ManageIQ/floe/pull/149))
+ - Add secure options for passing credentials via command-line ([#151](https://github.com/ManageIQ/floe/pull/151))
+ - Add a Docker Runner pull-policy option ([#155](https://github.com/ManageIQ/floe/pull/155))
+
  ### Fixed
+ - Fix podman with empty output ([#150](https://github.com/ManageIQ/floe/pull/150))
+ - Fix run_container logger saying docker when using podman ([#154](https://github.com/ManageIQ/floe/pull/154))
  - Ensure that workflow credentials is not-nil ([#156](https://github.com/ManageIQ/floe/pull/156))

  ## [0.7.0] - 2023-12-18
@@ -118,8 +136,9 @@ This project adheres to [Semantic Versioning](http://semver.org/).
  ### Added
  - Initial release

- [Unreleased]: https://github.com/ManageIQ/floe/compare/v0.7.1...HEAD
- [0.7.1]: https://github.com/ManageIQ/floe/compare/v0.7.0...v0.7.1
+ [Unreleased]: https://github.com/ManageIQ/floe/compare/v0.9.0...HEAD
+ [0.9.0]: https://github.com/ManageIQ/floe/compare/v0.8.0...v0.9.0
+ [0.8.0]: https://github.com/ManageIQ/floe/compare/v0.7.0...v0.8.0
  [0.7.0]: https://github.com/ManageIQ/floe/compare/v0.6.1...v0.7.0
  [0.6.1]: https://github.com/ManageIQ/floe/compare/v0.6.0...v0.6.1
  [0.6.0]: https://github.com/ManageIQ/floe/compare/v0.5.0...v0.6.0
data/README.md CHANGED
@@ -51,6 +51,16 @@ You can provide that at runtime via the `--credentials` parameter:
  bundle exec ruby exe/floe --workflow my-workflow.asl --credentials='{"roleArn": "arn:aws:iam::111122223333:role/LambdaRole"}'
  ```

+ Or, if you are invoking the floe command programmatically, you can provide the credentials securely over a stdin pipe with `--credentials=-`:
+ ```
+ echo '{"roleArn": "arn:aws:iam::111122223333:role/LambdaRole"}' | bundle exec ruby exe/floe --workflow my-workflow.asl --credentials -
+ ```
+
+ Or you can pass a file path with the `--credentials-file` parameter:
+ ```
+ bundle exec ruby exe/floe --workflow my-workflow.asl --credentials-file /tmp/20231218-80537-kj494t
+ ```
+
  If you need to set a credential at runtime you can do that by using the `"ResultPath": "$.Credentials"` directive, for example to use a username/password to log in and get a Bearer token:

  ```
@@ -152,6 +162,7 @@ end
  Options supported by the Docker docker runner are:

  * `network` - Which docker network to connect the container to, defaults to `"bridge"`. If you need access to host resources for development you can pass `network=host`.
+ * `pull-policy` - Pull image policy. The default is `missing`. Allowed values: `always`, `missing`, `never`

  #### Podman

@@ -161,6 +172,7 @@ Options supported by the podman docker runner are:
  * `log-level=string` - Log messages above specified level (trace, debug, info, warn, warning, error, fatal, panic)
  * `network=string` - Which docker network to connect the container to, defaults to `"bridge"`. If you need access to host resources for development you can pass `network=host`.
  * `noout=boolean` - do not output to stdout
+ * `pull-policy=string` - Pull image policy. The default is `missing`. Allowed values: `always`, `missing`, `never`, `newer`
  * `root=string` - Path to the root directory in which data, including images, is stored
  * `runroot=string` - Path to the 'run directory' where all state information is stored
  * `runtime=string` - Path to the OCI-compatible binary used to run containers
@@ -179,6 +191,7 @@ Options supported by the kubernetes docker runner are:
  * `kubeconfig` - Path to a kubeconfig file, defaults to `KUBECONFIG` environment variable or `~/.kube/config`
  * `kubeconfig_context` - Context to use in the kubeconfig file, defaults to `"default"`
  * `namespace` - Namespace to use when creating kubernetes resources, defaults to `"default"`
+ * `pull-policy` - Pull image policy. The default is `Always`. Allowed values: `IfNotPresent`, `Always`, `Never`
  * `server` - A kubernetes API Server URL, overrides anything in your kubeconfig file. If set `KUBERNETES_SERVICE_HOST` and `KUBERNETES_SERVICE_PORT` will be used
  * `token` - A bearer_token to use to authenticate to the kubernetes API, overrides anything in your kubeconfig file. If present, `/run/secrets/kubernetes.io/serviceaccount/token` will be used
  * `ca_file` - Path to a certificate-authority file for the kubernetes API, only valid if server and token are passed. If present `/run/secrets/kubernetes.io/serviceaccount/ca.crt` will be used
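Taken together with the `exe/floe` changes below, a pull policy can also be supplied from the command line as a runner option. A minimal sketch (the workflow file and the `newer` value are placeholders; `-o` is the new short flag for the runner options and `--podman` the new podman shorthand, both added in this release):

```
bundle exec ruby exe/floe --workflow my-workflow.asl --podman -o pull-policy=newer
```

Each `-o` value is split on `=` into the runner options hash, so `pull-policy=newer` becomes `options["pull-policy"]` in the Podman runner and is forwarded to the container engine as `--pull newer`.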
data/exe/floe CHANGED
@@ -6,35 +6,64 @@ require "optimist"

  opts = Optimist.options do
    version("v#{Floe::VERSION}\n")
-   opt :workflow, "Path to your workflow json", :type => :string, :required => true
-   opt :input, "JSON payload to input to the workflow", :default => '{}'
-   opt :credentials, "JSON payload with credentials", :default => "{}"
-   opt :docker_runner, "Type of runner for docker images", :default => "docker"
-   opt :docker_runner_options, "Options to pass to the runner", :type => :strings
+   usage("[options] workflow input [workflow2 input2]")
+
+   opt :workflow, "Path to your workflow json (legacy)", :type => :string
+   opt :input, "JSON payload to input to the workflow (legacy)", :type => :string
+   opt :credentials, "JSON payload with credentials", :type => :string
+   opt :credentials_file, "Path to a file with credentials", :type => :string
+   opt :docker_runner, "Type of runner for docker images", :type => :string, :short => 'r'
+   opt :docker_runner_options, "Options to pass to the runner", :type => :strings, :short => 'o'
+
+   opt :docker, "Use docker to run images (short for --docker_runner=docker)", :type => :boolean
+   opt :podman, "Use podman to run images (short for --docker_runner=podman)", :type => :boolean
+   opt :kubernetes, "Use kubernetes to run images (short for --docker_runner=kubernetes)", :type => :boolean
  end

- Optimist.die(:docker_runner, "must be one of #{Floe::Workflow::Runner::TYPES.join(", ")}") unless Floe::Workflow::Runner::TYPES.include?(opts[:docker_runner])
+ # legacy support for --workflow
+ args = ARGV.empty? ? [opts[:workflow], opts[:input]] : ARGV
+ Optimist.die(:workflow, "must be specified") if args.empty?
+
+ # shortcut support
+ opts[:docker_runner] ||= "docker" if opts[:docker]
+ opts[:docker_runner] ||= "podman" if opts[:podman]
+ opts[:docker_runner] ||= "kubernetes" if opts[:kubernetes]

  require "logger"
  Floe.logger = Logger.new($stdout)

- runner_klass = case opts[:docker_runner]
-                when "docker"
-                  Floe::Workflow::Runner::Docker
-                when "podman"
-                  Floe::Workflow::Runner::Podman
-                when "kubernetes"
-                  Floe::Workflow::Runner::Kubernetes
-                end
-
  runner_options = opts[:docker_runner_options].to_h { |opt| opt.split("=", 2) }

- Floe::Workflow::Runner.docker_runner = runner_klass.new(runner_options)
+ begin
+   Floe.set_runner("docker", opts[:docker_runner], runner_options)
+ rescue ArgumentError => e
+   Optimist.die(:docker_runner, e.message)
+ end
+
+ credentials =
+   if opts[:credentials_given]
+     opts[:credentials] == "-" ? $stdin.read : opts[:credentials]
+   elsif opts[:credentials_file_given]
+     File.read(opts[:credentials_file])
+   end

- context = Floe::Workflow::Context.new(:input => opts[:input])
- workflow = Floe::Workflow.load(opts[:workflow], context, opts[:credentials])
+ workflows =
+   args.each_slice(2).map do |workflow, input|
+     context = Floe::Workflow::Context.new(:input => input || opts[:input] || "{}")
+     Floe::Workflow.load(workflow, context, credentials)
+   end
+
+ # run
+
+ Floe::Workflow.wait(workflows, &:run_nonblock)
+
+ # display status
+
+ workflows.each do |workflow|
+   puts "", "#{workflow.name}#{" (#{workflow.status})" unless workflow.context.success?}", "===" if workflows.size > 1
+   puts workflow.output.inspect
+ end

- workflow.run!
+ # exit status

- puts workflow.output.inspect
- exit workflow.status == "success" ? 0 : 1
+ exit workflows.all? { |workflow| workflow.context.success? } ? 0 : 1
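Putting the new CLI pieces together, several workflows can now be run in one invocation, with credentials piped over stdin. A minimal sketch (the workflow files and JSON inputs are placeholders; arguments are paired up by `args.each_slice(2)` as shown above, and the exit status is 0 only if every workflow finishes successfully):

```
echo '{"roleArn": "arn:aws:iam::111122223333:role/LambdaRole"}' | \
  bundle exec ruby exe/floe --kubernetes --credentials - first-workflow.asl '{"foo": 1}' second-workflow.asl '{"bar": 2}'
```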
data/floe.gemspec CHANGED
@@ -29,7 +29,8 @@ Gem::Specification.new do |spec|
    spec.executables = spec.files.grep(%r{\Aexe/}) { |f| File.basename(f) }
    spec.require_paths = ["lib"]

-   spec.add_dependency "awesome_spawn", "~>1.0"
+   spec.add_dependency "awesome_spawn", "~>1.6"
+   spec.add_dependency "io-wait"
    spec.add_dependency "jsonpath", "~>1.1"
    spec.add_dependency "kubeclient", "~>4.7"
    spec.add_dependency "optimist", "~>3.0"
data/lib/floe/version.rb CHANGED
@@ -1,5 +1,5 @@
  # frozen_string_literal: true

  module Floe
-   VERSION = "0.7.1".freeze
+   VERSION = "0.9.0".freeze
  end
@@ -74,6 +74,10 @@ module Floe
        end
      end

+     def success?
+       status == "success"
+     end
+
      def state=(val)
        @context["State"] = val
      end
@@ -10,11 +10,13 @@ module Floe

        def initialize(options = {})
          require "awesome_spawn"
+         require "io/wait"
          require "tempfile"

          super

-         @network = options.fetch("network", "bridge")
+         @network     = options.fetch("network", "bridge")
+         @pull_policy = options["pull-policy"]
        end

        def run_async!(resource, env = {}, secrets = {})
@@ -44,10 +46,63 @@ module Floe
          delete_secret(secrets_file) if secrets_file
        end

+       def wait(timeout: nil, events: %i[create update delete], &block)
+         until_timestamp = Time.now.utc + timeout if timeout
+
+         r, w = IO.pipe
+
+         pid = AwesomeSpawn.run_detached(
+           self.class::DOCKER_COMMAND, :err => :out, :out => w, :params => wait_params(until_timestamp)
+         )
+
+         w.close
+
+         loop do
+           readable_timeout = until_timestamp - Time.now.utc if until_timestamp
+
+           # Wait for our end of the pipe to be readable and if it didn't timeout
+           # get the events from stdout
+           next if r.wait_readable(readable_timeout).nil?
+
+           # Get all events while the pipe is readable
+           notices = []
+           while r.ready?
+             notice = r.gets
+
+             # If the process has exited `r.gets` returns `nil` and the pipe is
+             # always `ready?`
+             break if notice.nil?
+
+             event, runner_context = parse_notice(notice)
+             next if event.nil? || !events.include?(event)
+
+             notices << [event, runner_context]
+           end
+
+           # If we're given a block yield the events otherwise return them
+           if block
+             notices.each(&block)
+           else
+             # Terminate the `docker events` process before returning the events
+             sigterm(pid)
+
+             return notices
+           end
+
+           # Check that the `docker events` process is still alive
+           Process.kill(0, pid)
+         rescue Errno::ESRCH
+           # Break out of the loop if the `docker events` process has exited
+           break
+         end
+       ensure
+         r.close
+       end
+
        def status!(runner_context)
          return if runner_context.key?("Error")

-         runner_context["container_state"] = inspect_container(runner_context["container_ref"]).first&.dig("State")
+         runner_context["container_state"] = inspect_container(runner_context["container_ref"])&.dig("State")
        end

        def running?(runner_context)
@@ -72,7 +127,7 @@ module Floe
        def run_container(image, env, secrets_file)
          params = run_container_params(image, env, secrets_file)

-         logger.debug("Running #{AwesomeSpawn.build_command_line("docker", params)}")
+         logger.debug("Running #{AwesomeSpawn.build_command_line(self.class::DOCKER_COMMAND, params)}")

          result = docker!(*params)
          result.output
@@ -83,14 +138,52 @@ module Floe
          params << :detach
          params += env.map { |k, v| [:e, "#{k}=#{v}"] }
          params << [:e, "_CREDENTIALS=/run/secrets"] if secrets_file
+         params << [:pull, @pull_policy] if @pull_policy
          params << [:net, "host"] if @network == "host"
          params << [:v, "#{secrets_file}:/run/secrets:z"] if secrets_file
          params << [:name, container_name(image)]
          params << image
        end

+       def wait_params(until_timestamp)
+         params = ["events", [:format, "{{json .}}"], [:filter, "type=container"], [:since, Time.now.utc.to_i]]
+         params << [:until, until_timestamp.to_i] if until_timestamp
+         params
+       end
+
+       def parse_notice(notice)
+         notice = JSON.parse(notice)
+
+         status = notice["status"]
+         event = docker_event_status_to_event(status)
+         running = event != :delete
+
+         name, exit_code = notice.dig("Actor", "Attributes")&.values_at("name", "exitCode")
+
+         runner_context = {"container_ref" => name, "container_state" => {"Running" => running, "ExitCode" => exit_code.to_i}}
+
+         [event, runner_context]
+       rescue JSON::ParserError
+         []
+       end
+
+       def docker_event_status_to_event(status)
+         case status
+         when "create"
+           :create
+         when "start"
+           :update
+         when "die", "destroy"
+           :delete
+         else
+           :unknown
+         end
+       end
+
        def inspect_container(container_id)
-         JSON.parse(docker!("inspect", container_id).output)
+         JSON.parse(docker!("inspect", container_id).output).first
+       rescue
+         nil
        end

        def delete_container(container_id)
@@ -114,6 +207,12 @@ module Floe
          secrets_file.path
        end

+       def sigterm(pid)
+         Process.kill("TERM", pid)
+       rescue Errno::ESRCH
+         nil
+       end
+
        def global_docker_options
          []
        end
@@ -40,6 +40,7 @@ module Floe

          @namespace = options.fetch("namespace", "default")

+         @pull_policy = options["pull-policy"]
          @task_service_account = options["task_service_account"]

          super
@@ -52,7 +53,7 @@ module Floe
          name = container_name(image)
          secret = create_secret!(secrets) if secrets && !secrets.empty?

-         runner_context = {"container_ref" => name, "secrets_ref" => secret}
+         runner_context = {"container_ref" => name, "container_state" => {"phase" => "Pending"}, "secrets_ref" => secret}

          begin
            create_pod!(name, image, env, secret)
@@ -101,6 +102,54 @@ module Floe
          delete_secret(secret) if secret
        end

+       def wait(timeout: nil, events: %i[create update delete])
+         retry_connection = true
+
+         begin
+           watcher = kubeclient.watch_pods(:namespace => namespace)
+
+           retry_connection = true
+
+           if timeout.to_i > 0
+             timeout_thread = Thread.new do
+               sleep(timeout)
+               watcher.finish
+             end
+           end
+
+           watcher.each do |notice|
+             break if error_notice?(notice)
+
+             event = kube_notice_type_to_event(notice.type)
+             next unless events.include?(event)
+
+             runner_context = parse_notice(notice)
+             next if runner_context.nil?
+
+             if block_given?
+               yield [event, runner_context]
+             else
+               timeout_thread&.kill # If we break out before the timeout, kill the timeout thread
+               return [[event, runner_context]]
+             end
+           end
+         rescue Kubeclient::HttpError => err
+           raise unless err.error_code == 401 && retry_connection
+
+           @kubeclient = nil
+           retry_connection = false
+           retry
+         ensure
+           begin
+             watch&.finish
+           rescue
+             nil
+           end
+
+           timeout_thread&.join(0)
+         end
+       end
+
        private

        attr_reader :ca_file, :kubeconfig_file, :kubeconfig_context, :namespace, :server, :token, :verify_ssl
@@ -143,6 +192,7 @@ module Floe
            }
          }

+         spec[:spec][:imagePullPolicy] = @pull_policy if @pull_policy
          spec[:spec][:serviceAccountName] = @task_service_account if @task_service_account

          if secret
@@ -215,6 +265,41 @@ module Floe
          nil
        end

+       def kube_notice_type_to_event(type)
+         case type
+         when "ADDED"
+           :create
+         when "MODIFIED"
+           :update
+         when "DELETED"
+           :delete
+         else
+           :unknown
+         end
+       end
+
+       def error_notice?(notice)
+         return false unless notice.type == "ERROR"
+
+         message = notice.object&.message
+         code = notice.object&.code
+         reason = notice.object&.reason
+
+         logger.warn("Received [#{code} #{reason}], [#{message}]")
+
+         true
+       end
+
+       def parse_notice(notice)
+         return if notice.object.nil?
+
+         pod = notice.object
+         container_ref = pod.metadata.name
+         container_state = pod.to_h[:status].deep_stringify_keys
+
+         {"container_ref" => container_ref, "container_state" => container_state}
+       end
+
        def kubeclient
          return @kubeclient unless @kubeclient.nil?

@@ -16,6 +16,7 @@ module Floe
          @log_level = options["log-level"]
          @network = options["network"]
          @noout = options["noout"].to_s == "true" if options.key?("noout")
+         @pull_policy = options["pull-policy"]
          @root = options["root"]
          @runroot = options["runroot"]
          @runtime = options["runtime"]
@@ -35,7 +36,8 @@ module Floe
          params << :detach
          params += env.map { |k, v| [:e, "#{k}=#{v}"] }
          params << [:e, "_CREDENTIALS=/run/secrets/#{secret}"] if secret
-         params << [:net, "host"] if @network == "host"
+         params << [:pull, @pull_policy] if @pull_policy
+         params << [:net, "host"] if @network == "host"
          params << [:secret, secret] if secret
          params << [:name, container_name(image)]
          params << image
@@ -53,6 +55,32 @@ module Floe
          nil
        end

+       def parse_notice(notice)
+         id, status, exit_code = JSON.parse(notice).values_at("ID", "Status", "ContainerExitCode")
+
+         event = podman_event_status_to_event(status)
+         running = event != :delete
+
+         runner_context = {"container_ref" => id, "container_state" => {"Running" => running, "ExitCode" => exit_code.to_i}}
+
+         [event, runner_context]
+       rescue JSON::ParserError
+         []
+       end
+
+       def podman_event_status_to_event(status)
+         case status
+         when "create"
+           :create
+         when "init", "start"
+           :update
+         when "died", "cleanup", "remove"
+           :delete
+         else
+           :unknown
+         end
+       end
+
        alias podman! docker!

        def global_docker_options
@@ -5,29 +5,42 @@ module Floe
    class Runner
      include Logging

-     TYPES = %w[docker podman kubernetes].freeze
      OUTPUT_MARKER = "__FLOE_OUTPUT__\n"

      def initialize(_options = {})
      end

+     @runners = {}
      class << self
-       attr_writer :docker_runner
+       # deprecated -- use Floe.set_runner instead
+       def docker_runner=(value)
+         set_runner("docker", value)
+       end

-       def docker_runner
-         @docker_runner ||= Floe::Workflow::Runner::Docker.new
+       # see Floe.set_runner
+       def set_runner(scheme, name_or_instance, options = {})
+         @runners[scheme] =
+           case name_or_instance
+           when "docker", nil
+             Floe::Workflow::Runner::Docker.new(options)
+           when "podman"
+             Floe::Workflow::Runner::Podman.new(options)
+           when "kubernetes"
+             Floe::Workflow::Runner::Kubernetes.new(options)
+           when Floe::Workflow::Runner
+             name_or_instance
+           else
+             raise ArgumentError, "docker runner must be one of: docker, podman, kubernetes"
+           end
        end

        def for_resource(resource)
          raise ArgumentError, "resource cannot be nil" if resource.nil?

+         # if no runners are set, default docker:// to docker
+         set_runner("docker", "docker") if @runners.empty?
          scheme = resource.split("://").first
-         case scheme
-         when "docker"
-           docker_runner
-         else
-           raise "Invalid resource scheme [#{scheme}]"
-         end
+         @runners[scheme] || raise(ArgumentError, "Invalid resource scheme [#{scheme}]")
        end
      end

@@ -55,6 +68,10 @@ module Floe
      def cleanup(_runner_context)
        raise NotImplementedError, "Must be implemented in a subclass"
      end
+
+     def wait(timeout: nil, events: %i[create update delete])
+       raise NotImplementedError, "Must be implemented in a subclass"
+     end
    end
  end
end
@@ -33,16 +33,12 @@ module Floe
        raise Floe::InvalidWorkflowError, "State name [#{name}] must be less than or equal to 80 characters" if name.length > 80
      end

-     def run!(_input = nil)
-       wait until run_nonblock! == 0
-     end
-
-     def wait(timeout: 5)
+     def wait(timeout: nil)
        start = Time.now.utc

        loop do
          return 0 if ready?
-         return Errno::EAGAIN if timeout.zero? || Time.now.utc - start > timeout
+         return Errno::EAGAIN if timeout && (timeout.zero? || Time.now.utc - start > timeout)

          sleep(1)
        end
@@ -97,6 +93,14 @@ module Floe
        context.state.key?("FinishedTime")
      end

+     def waiting?
+       context.state["WaitUntil"] && Time.now.utc <= Time.parse(context.state["WaitUntil"])
+     end
+
+     def wait_until
+       context.state["WaitUntil"] && Time.parse(context.state["WaitUntil"])
+     end
+
      private

      def wait_until!(seconds: nil, time: nil)
@@ -109,10 +113,6 @@ module Floe
            time.iso8601
          end
      end
-
-     def waiting?
-       context.state["WaitUntil"] && Time.now.utc <= Time.parse(context.state["WaitUntil"])
-     end
    end
  end
end
@@ -4,6 +4,11 @@ module Floe
  class Workflow
    module States
      module NonTerminalMixin
+       def finish
+         context.next_state = end? ? nil : @next
+         super
+       end
+
        def validate_state_next!
          raise Floe::InvalidWorkflowError, "Missing \"Next\" field in state [#{name}]" if @next.nil? && !@end
          raise Floe::InvalidWorkflowError, "\"Next\" [#{@next}] not in \"States\" for state [#{name}]" if @next && !workflow.payload["States"].key?(@next)
@@ -46,18 +46,19 @@ module Floe
        end

        def finish
+         super
+
          output = runner.output(context.state["RunnerContext"])

          if success?
            output = parse_output(output)
            context.state["Output"] = process_output(context.input.dup, output)
-           context.next_state = next_state
          else
+           context.next_state = nil
            error = parse_error(output)
            retry_state!(error) || catch_error!(error) || fail_workflow!(error)
          end

-         super
        ensure
          runner.cleanup(context.state["RunnerContext"])
        end
@@ -137,8 +138,8 @@ module Floe
        end

        def parse_output(output)
-         return if output.nil?
          return output if output.kind_of?(Hash)
+         return if output.nil? || output.empty?

          JSON.parse(output.split("\n").last)
        rescue JSON::ParserError
@@ -28,10 +28,9 @@ module Floe

        def start(input)
          super
-         input = input_path.value(context, input)

-         context.output = output_path.value(context, input)
-         context.next_state = end? ? nil : @next
+         input = input_path.value(context, input)
+         context.output = output_path.value(context, input)

          wait_until!(
            :seconds => seconds_path ? seconds_path.value(context, input).to_i : seconds,
data/lib/floe/workflow.rb CHANGED
@@ -8,32 +8,86 @@ module Floe
    include Logging

    class << self
-     def load(path_or_io, context = nil, credentials = {})
+     def load(path_or_io, context = nil, credentials = {}, name = nil)
        payload = path_or_io.respond_to?(:read) ? path_or_io.read : File.read(path_or_io)
-       new(payload, context, credentials)
+       # default the name if it is a filename and none was passed in
+       name ||= path_or_io.respond_to?(:read) ? "stream" : path_or_io.split("/").last.split(".").first
+
+       new(payload, context, credentials, name)
      end

-     def wait(workflows, timeout: 5)
+     def wait(workflows, timeout: nil, &block)
+       workflows = [workflows] if workflows.kind_of?(self)
        logger.info("checking #{workflows.count} workflows...")

-       start = Time.now.utc
-       ready = []
+       run_until = Time.now.utc + timeout if timeout.to_i > 0
+       ready = []
+       queue = Queue.new
+       wait_thread = Thread.new do
+         loop do
+           Runner.for_resource("docker").wait do |event, runner_context|
+             queue.push([event, runner_context])
+           end
+         end
+       end

        loop do
          ready = workflows.select(&:step_nonblock_ready?)
-         break if timeout.zero? || Time.now.utc - start > timeout || !ready.empty?
-
-         sleep(1)
+         break if block.nil? && !ready.empty?
+
+         ready.each(&block)
+
+         # Break if all workflows are completed or we've exceeded the
+         # requested timeout
+         break if workflows.all?(&:end?)
+         break if timeout && (timeout.zero? || Time.now.utc > run_until)
+
+         # Find the earliest time that we should wakeup if no container events
+         # are caught, either a workflow in a Wait or Retry state or we've
+         # exceeded the requested timeout
+         wait_until = workflows.map(&:wait_until)
+                               .unshift(run_until)
+                               .compact
+                               .min
+
+         # If a workflow is in a waiting state wakeup the main thread when
+         # it will be done sleeping
+         if wait_until
+           sleep_thread = Thread.new do
+             sleep_duration = wait_until - Time.now.utc
+             sleep sleep_duration if sleep_duration > 0
+             queue.push(nil)
+           end
+         end
+
+         loop do
+           # Block until an event is raised
+           event, runner_context = queue.pop
+           break if event.nil?
+
+           # If the event is for one of our workflows set the updated runner_context
+           workflows.each do |workflow|
+             next unless workflow.context.state.dig("RunnerContext", "container_ref") == runner_context["container_ref"]
+
+             workflow.context.state["RunnerContext"] = runner_context
+           end
+
+           break if queue.empty?
+         end
+       ensure
+         sleep_thread&.kill
        end

        logger.info("checking #{workflows.count} workflows...Complete - #{ready.count} ready")
        ready
+     ensure
+       wait_thread&.kill
      end
    end

-   attr_reader :context, :credentials, :payload, :states, :states_by_name, :start_at
+   attr_reader :context, :credentials, :payload, :states, :states_by_name, :start_at, :name

-   def initialize(payload, context = nil, credentials = {})
+   def initialize(payload, context = nil, credentials = {}, name = nil)
      payload = JSON.parse(payload) if payload.kind_of?(String)
      credentials = JSON.parse(credentials) if credentials.kind_of?(String)
      context = Context.new(context) unless context.kind_of?(Context)
@@ -42,12 +96,13 @@ module Floe
      raise Floe::InvalidWorkflowError, "Missing field \"StartAt\"" if payload["StartAt"].nil?
      raise Floe::InvalidWorkflowError, "\"StartAt\" not in the \"States\" field" unless payload["States"].key?(payload["StartAt"])

+     @name = name
      @payload = payload
      @context = context
      @credentials = credentials || {}
      @start_at = payload["StartAt"]

-     @states = payload["States"].to_a.map { |name, state| State.build!(self, name, state) }
+     @states = payload["States"].to_a.map { |state_name, state| State.build!(self, state_name, state) }
      @states_by_name = @states.each_with_object({}) { |state, result| result[state.name] = state }

      unless context.state.key?("Name")
@@ -58,16 +113,6 @@ module Floe
      raise Floe::InvalidWorkflowError, err.message
    end

-   def run!
-     step until end?
-     self
-   end
-
-   def step
-     step_nonblock_wait until step_nonblock == 0
-     self
-   end
-
    def run_nonblock
      loop while step_nonblock == 0 && !end?
      self
@@ -80,7 +125,7 @@ module Floe
      current_state.run_nonblock!
    end

-   def step_nonblock_wait(timeout: 5)
+   def step_nonblock_wait(timeout: nil)
      current_state.wait(:timeout => timeout)
    end

@@ -88,6 +133,14 @@ module Floe
      current_state.ready?
    end

+   def waiting?
+     current_state.waiting?
+   end
+
+   def wait_until
+     current_state.wait_until
+   end
+
    def status
      context.status
    end
data/lib/floe.rb CHANGED
@@ -45,7 +45,27 @@ module Floe
      @logger ||= NullLogger.new
    end

+   # Set the logger to use
+   #
+   # @example
+   #   require "logger"
+   #   Floe.logger = Logger.new($stdout)
+   #
+   # @param logger [Logger] logger to use for logging actions
    def self.logger=(logger)
      @logger = logger
    end
+
+   # Set the runner to use
+   #
+   # @example
+   #   Floe.set_runner "docker", "kubernetes", {}
+   #   Floe.set_runner "docker", Floe::Workflow::Runner::Kubernetes.new({})
+   #
+   # @param scheme [String] scheme Protocol to register (e.g.: docker)
+   # @param name_or_instance [String|Floe::Workflow::Runner] Name of runner to use for docker (e.g.: docker)
+   # @param options [Hash] Options for constructor of the runner (optional)
+   def self.set_runner(scheme, name_or_instance, options = {})
+     Floe::Workflow::Runner.set_runner(scheme, name_or_instance, options)
+   end
  end
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: floe
  version: !ruby/object:Gem::Version
-   version: 0.7.1
+   version: 0.9.0
  platform: ruby
  authors:
  - ManageIQ Developers
  autorequire:
  bindir: exe
  cert_chain: []
- date: 2024-01-17 00:00:00.000000000 Z
+ date: 2024-02-19 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: awesome_spawn
@@ -16,14 +16,28 @@ dependencies:
      requirements:
      - - "~>"
        - !ruby/object:Gem::Version
-         version: '1.0'
+         version: '1.6'
    type: :runtime
    prerelease: false
    version_requirements: !ruby/object:Gem::Requirement
      requirements:
      - - "~>"
        - !ruby/object:Gem::Version
-         version: '1.0'
+         version: '1.6'
+ - !ruby/object:Gem::Dependency
+   name: io-wait
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
  - !ruby/object:Gem::Dependency
    name: jsonpath
    requirement: !ruby/object:Gem::Requirement